While testing my server-side PHP functions for creating device groups, I lost track of the notification key returned after successfully creating a device group.
As described in https://groups.google.com/forum/#!topic/firebase-talk/ytovugx8XNs I tried:
curl -v -H "Content-Type: application/json" \
  -H "Authorization: key=<your api key>" \
  -H "project_id: <your project id>" \
  https://android.googleapis.com/gcm/notification?notification_key_name=testgroup
where the project ID is the one found in the Firebase console, the same one shown in the URL and also present in my google-services.json.
As a result I get:
HTTP/1.1 400 Bad Request
Content-Type: application/json; charset=UTF-8
Date: Tue, 18 Apr 2017 08:21:30 GMT
Expires: Tue, 18 Apr 2017 08:21:30 GMT
Cache-Control: private, max-age=0
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Server: GSE
Alt-Svc: quic=":443"; ma=2592000; v="37,36,35"
Accept-Ranges: none
Vary: Accept-Encoding
Transfer-Encoding: chunked

* Curl_http_done: called premature == 0
* Connection #0 to host android.googleapis.com left intact
{"error":"INVALID_PROJECT_ID"}
I can't find a way out of this, since after losing a notification key the only way to recover it is with that command (AFAIK). Please help.
The Project ID that should be used for FCM is the Sender ID. This value can be seen in the Firebase Console > Settings > Cloud Messaging Tab.
If you refer to the google-services.json file, the value for project_number should be the one you use (same value as seen from the Firebase Console). It's a numerical-only value.
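For example, a minimal PHP sketch of the recovery call using the cURL extension (the <server key> and <sender id> placeholders are assumptions to fill in from the Firebase Console):

<?php
// Sketch: recover a device group's notification key by its name.
// Replace <server key> and <sender id> with the values from the Firebase Console.
$ch = curl_init('https://android.googleapis.com/gcm/notification?notification_key_name=testgroup');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Content-Type: application/json',
    'Authorization: key=<server key>',  // Cloud Messaging server key
    'project_id: <sender id>',          // the numeric Sender ID, not the project name
));
$response = curl_exec($ch);
curl_close($ch);
echo $response; // e.g. {"notification_key":"..."} on success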
I am attempting to add a custom header called X-Custom within a Laravel (8) response, which works, but it places it near the top of the header stack, as such:
HTTP/1.1 200 OK
Date: Mon, 01 Nov 2021 10:58:44 GMT
Server: Apache/2.4.29 (Ubuntu)
X-Custom: 8675309
Cache-Control: no-cache, private
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 59
Access-Control-Allow-Origin: *
Content-Length: 15
Content-Type: text/html; charset=UTF-8
Content is here
With low-memory devices (an ATTiny85), the earliest header I manage to receive is X-RateLimit-Remaining: 59. So to get the X-Custom header I ideally need it to go at the bottom of the whole stack, like so:
HTTP/1.1 200 OK
Date: Mon, 01 Nov 2021 10:58:44 GMT
Server: Apache/2.4.29 (Ubuntu)
Cache-Control: no-cache, private
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 59
Access-Control-Allow-Origin: *
Content-Length: 15
Content-Type: text/html; charset=UTF-8
X-Custom: 8675309
Content is here
From what I can see, Laravel returns the various headers from different files rather than all in one go, so I think a simple sort is out of the question.
Is sticking a header on the end achievable within Laravel's means?
Edit: Chris Haas's comment made me realise I don't need all the headers, as I'm only looking for one. Middleware was a dead end, as it all seems to run just prior to all these Laravel headers being pushed.
I have discovered that if I comment out $this->sendHeaders() within symfony/http-foundation/Response.php::send(), it clears it up to this:
HTTP/1.1 200 OK
Date: Tue, 02 Nov 2021 01:21:29 GMT
Server: Apache/2.4.29 (Ubuntu)
X-Custom: 8675309
Content-Length: 15
Content-Type: text/html; charset=UTF-8
Content is here
I think this is as clean as I can get it from Laravel. This may be enough to get these devices working, but in the effort to properly answer this question I am going to see what I can do from the Apache side.
Edit 2: Following this answer https://stackoverflow.com/a/24941636/3417896 I can remove the Content-Type header by setting header("Content-type:") in the same place I'm commenting out sendHeaders(), resulting in this:
HTTP/1.1 200 OK
Date: Tue, 02 Nov 2021 02:23:45 GMT
Server: Apache/2.4.29 (Ubuntu)
X-Custom: 8675309
Content-Length: 15
Content is here
OK, now I think the remaining Content-Length header comes from Apache. I saw another answer on how to remove it from PHP here https://stackoverflow.com/a/31563538/3417896, but it adds another header in its place.
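For reference, a sketch of roughly where the two edits above land inside Symfony's Response::send() (the rest of the method body is abbreviated, and note that anything under vendor/ is overwritten by composer install/update, so this is a diagnostic hack rather than a proper fix):

// vendor/symfony/http-foundation/Response.php — inside Response::send(), sketch only
public function send()
{
    // $this->sendHeaders();    // Edit 1: skipped, so the framework headers are never emitted
    header('Content-type:');    // Edit 2: clears PHP's default Content-Type (per the linked answer)
    $this->sendContent();

    // ...finishing logic unchanged...

    return $this;
}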
I created a simple React app as described in https://reactjs.org/docs/add-react-to-a-new-app.html and now want to make AJAX calls to the web server (with PHP). To make it work with the development server at localhost:3000, I am trying to set up a proxy in package.json:
"proxy": {
"/abc": {
"target": "http://web.local/app/path/public/abc",
"secure": false,
"changeOrigin": true,
"logLevel": "debug"
}
}
Unfortunately, the app answers with the default index.html content. How can I make the proxy work when requesting the /abc path?
Is there any way to test and debug the proxy? When I try to open http://localhost:3000/abc I can see
[HPM] POST /abc/abc -> http://web.local/app/path/public/abc
in the console. But there are no messages when I'm loading the app and the request is sent from the app. (I tried both fetch and axios calls.)
When I build the app and run it on the web server, everything works fine.
EDIT:
Response headers for the axios AJAX call:
HTTP/1.1 404 Not Found
X-Powered-By: Express
Content-Security-Policy: default-src 'self'
X-Content-Type-Options: nosniff
Content-Type: text/html; charset=utf-8
Content-Length: 143
Vary: Accept-Encoding
Date: Wed, 03 Jan 2018 17:44:02 GMT
Connection: keep-alive
For fetch:
HTTP/1.1 301 Moved Permanently
X-Powered-By: Express
Content-Type: text/html; charset=UTF-8
Content-Length: 165
Content-Security-Policy: default-src 'self'
X-Content-Type-Options: nosniff
Location: /abc/
Vary: Accept-Encoding
Date: Wed, 03 Jan 2018 17:58:37 GMT
Connection: keep-alive
And after following the redirect:
HTTP/1.1 200 OK
X-Powered-By: Express
Accept-Ranges: bytes
Content-Type: text/html; charset=UTF-8
ETag: W/"649-W8GnY7MkgPFg6/GXObpRHPnVDeU"
Vary: Accept-Encoding
Content-Encoding: gzip
Date: Wed, 03 Jan 2018 17:58:37 GMT
Connection: keep-alive
Transfer-Encoding: chunked
I resolved the issue. My PHP files were located in the public/ dir of the app (to have them included in the build), and that dir is served by the development web server as static content. The dev server first checks whether a file exists in public/ and delivers it if possible; only if there is no such file are the proxy rules applied and the request forwarded to the specified URL.
So I moved all PHP-related files to another location, and the needed data is now delivered from the web server through the proxy as intended, just with
"proxy": "http://web.local/app/path/static"
in package.json.
I also had to add a separate command to copy the contents of this dir to build/ and add it to the build process.
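For example, the copy step can be hooked into the build script in package.json (a sketch; the php-src directory name is just an assumption, use wherever you moved the PHP files, and cp assumes a Unix-like build machine):

"scripts": {
    "build": "react-scripts build && cp -r php-src build/"
}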
I manage the HTTP caching in my applications, and it's not working as I think it should. Let's look at an actual example:
On the first serve of my PHP page I send the following HTTP headers:
HTTP/1.1 200 OK
Date: Mon, 12 Dec 2016 16:39:33 GMT
Server: Apache/2.4.9 (Win64) PHP/5.5.12
Expires: Tue, 01 Jan 1980 19:53:00 GMT
Cache-Control: private, max-age=60, pre-check=60
Last-Modified: Mon, 12 Dec 2016 15:57:25 GMT
Etag: "a2883c859ce5c8153d65a4e904c40a79"
Content-Language: en
Content-Length: 326
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8
My application manages the validation of ETags and sends 304 if nothing has changed, so when you refresh the page in the browser (F5) you get (if nothing has changed server-side):
HTTP/1.1 304 Not Modified
Date: Mon, 12 Dec 2016 16:43:10 GMT
Server: Apache/2.4.9 (Win64) PHP/5.5.12
Connection: Keep-Alive
Keep-Alive: timeout=5, max=100
Since I serve Cache-Control: private with max-age=60, I would expect that after one minute the cache would be considered obsolete by the browser and it would request a fresh copy (the equivalent of a Ctrl+F5 reload), but instead the cache is still considered valid several days after its max-age.
Did I misunderstand these HTTP mechanisms? Am I sending something wrong, or maybe missing something?
If a cached response is within its max-age, it is considered fresh.
If it exceeds the max-age, it is considered stale.
If a browser needs a resource and has a fresh copy in the cache, it will use that without checking back with the server.
If the browser has a stale copy, it will validate it against the server (in this case, using ETags) to see whether it needs a new copy or the cached copy is still OK. A stale copy is therefore not re-downloaded unconditionally: your 304 responses tell the browser to keep using it, which is exactly the behaviour you are seeing.
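To illustrate the validation step on the PHP side, a minimal sketch (hypothetical page content; not the asker's actual application code):

<?php
// Minimal ETag validation: answer 304 when the client's cached copy still matches.
$body = '<html>...</html>'; // hypothetical page content
$etag = '"' . md5($body) . '"';

header('Cache-Control: private, max-age=60');
header('Etag: ' . $etag);

$clientEtag = isset($_SERVER['HTTP_IF_NONE_MATCH']) ? $_SERVER['HTTP_IF_NONE_MATCH'] : '';
if ($clientEtag === $etag) {
    http_response_code(304); // the cached copy is still good; send no body
    exit;
}

echo $body;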
I have an HAProxy health check configured with the following backend:
backend php_servers
http-request set-header X-Forwarded-Port %[dst_port]
option httpchk get /health
http-check expect status 200
server php1 internal_ip:80 check
HAProxy doesn't enable the server, but when using curl I receive a 200 OK response.
Command: curl -I internal_ip/health
Response:
HTTP/1.1 200 OK
Server: nginx
Date: Thu, 01 Dec 2016 20:53:48 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Access-Control-Allow-Origin: api.flex-appeal.nl
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-transform
Why doesn't HAProxy recognize the server as "UP"? It seems I can connect just fine.
The correct HTTP verb is GET, not get:
option httpchk GET /health
You can also check/enable the stats page: in the LastChk column you will see why the check fails. In my case, I get a 501 Not Implemented response.
I can reproduce it by doing the same request as HAProxy:
$ telnet localhost 80
Trying ::1...
Connected to localhost.
Escape character is '^]'.
get /health HTTP/1.0
HTTP/1.1 501 Not Implemented
Date: Thu, 01 Dec 2016 21:53:09 GMT
Server: Apache/2.4.23 (Unix) PHP/7.0.13
[...]
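With the verb fixed, the backend from the question becomes (same config as in the question, only the verb changed):

backend php_servers
    http-request set-header X-Forwarded-Port %[dst_port]
    option httpchk GET /health
    http-check expect status 200
    server php1 internal_ip:80 check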
Sorry for troubling you with yet another "Failed to validate oauth signature and token" error, but I just can't figure out what's wrong with my request.
I'm constructing my signature from this string:
POST&http%3A%2F%2Fapi.twitter.com%2Foauth%2Frequest_token&oauth_callback%3Dhttp%3A%2F%2Fcraiga.id.au%2Ftwitter%2Fconnected%26oauth_consumer_key%3Dtm5...DOg%26oauth_nonce%3D8...22b%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1275453048%26oauth_version%3D1.0
From this I generate a 28-character signature using the following PHP code:
base64_encode(hash_hmac('sha1', $raw, 'YUo...HIU' . '&', true));
Using this signature, I send the following request:
POST http://api.twitter.com/oauth/request_token HTTP/1.1
Host: api.twitter.com
Pragma: no-cache
Accept: */*
Proxy-Connection: Keep-Alive
Authorization: OAuth oauth_nonce="3D8...22b", oauth_callback="http%3A%2F%2Fcraiga.id.au%2Ftwitter%2Fconnected", oauth_signature_method="HMAC-SHA1", oauth_timestamp="1275453048", oauth_consumer_key="tm5...DOg", oauth_signature="aYd...c6E%3D", oauth_version="1.0"
Content-Length: 266
Content-Type: application/x-www-form-urlencoded
oauth_callback=http%3A%2F%2Fcraiga.id.au%2Ftwitter%2Fconnected&oauth_consumer_key=tm5...DOg&oauth_nonce=3D8...22b&oauth_signature_method=HMAC-SHA1&oauth_timestamp= 1275453048&oauth_version=1.0
I get the following response from Twitter to this request:
HTTP/1.1 401 Unauthorized
Date: Wed, 02 Jun 2010 04:40:14 GMT
Server: hi
Status: 401 Unauthorized
X-Transaction: 1275453614-48409-7443
Last-Modified: Wed, 02 Jun 2010 04:40:14 GMT
X-Runtime: 0.01083
Content-Type: text/html; charset=utf-8
Content-Length: 44
Pragma: no-cache
X-Revision: DEV
Expires: Tue, 31 Mar 1981 05:00:00 GMT
Cache-Control: no-cache, no-store, must-revalidate, pre-check=0, post-check=0
Set-Cookie: k=58.161.42.101.1275453614748615; path=/; expires=Wed, 09-Jun-10 04:40:14 GMT; domain=.twitter.com
Set-Cookie: guest_id=12754536147577949; path=/; expires=Fri, 02 Jul 2010 04:40:14 GMT
Set-Cookie: _twitter_sess=BAh7CToPY3JlYXRlZF9hdGwrCKaq9fYoAToRdHJhbnNfcHJvbXB0MDoHaWQi%250AJWU0ZDFhMGQzMWU0NTZjMzJiZWFkNWUzMTA4ZDRjOTg3IgpmbGFzaElDOidB%250AY3Rpb25Db250cm9sbGVyOjpGbGFzaDo6Rmxhc2hIYXNoewAGOgpAdXNlZHsA--f1e5c7649858a1694f24307504354846bbc1d16b; domain=.twitter.com; path=/
Vary: Accept-Encoding
Connection: close
Failed to validate oauth signature and token
If anyone can cast any light on why this might be failing, I'd love to hear it.
You're using the wrong information to generate the signature.
You should be using the ...
oauth_callback=http%3A%2F%2Fcraiga.id.au%2Ftwitter%2Fconnected&oauth_consumer_key=tm5...DOg&oauth_nonce=3D8...22b&oauth_signature_method=HMAC-SHA1&oauth_timestamp= 1275453048&oauth_version=1.0
... to generate the signature (Read: not using 'POST' and the request URI)
For more info...see Twitter Developers: Creating a signature
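A minimal PHP sketch of that suggestion, signing only the parameter string (the elided values from the question stand in for the real credentials):

<?php
// Sketch of the suggested signing step: HMAC-SHA1 over the parameter string only,
// keyed with "<consumer secret>&" (no token secret exists yet for a request token).
$params = 'oauth_callback=http%3A%2F%2Fcraiga.id.au%2Ftwitter%2Fconnected'
        . '&oauth_consumer_key=tm5...DOg&oauth_nonce=3D8...22b'
        . '&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1275453048'
        . '&oauth_version=1.0';
$signature = base64_encode(hash_hmac('sha1', $params, 'YUo...HIU' . '&', true));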
The problem exists because the time on your server is not synchronized with Twitter's time.
To fix it, synchronize the clock on the server:
sudo ntpdate -s time.nist.gov
and check Twitter's time with
lynx --dump --head https://api.twitter.com/1/help/test.json
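If you prefer to check the skew from PHP, a small sketch (assumes the endpoint above is reachable and returns a Date header):

<?php
// Sketch: compare the local clock against the Date header Twitter sends back.
// OAuth typically rejects timestamps that are more than a few minutes off.
$headers = get_headers('https://api.twitter.com/1/help/test.json', 1);
$skew = time() - strtotime($headers['Date']);
echo "Clock skew vs Twitter: {$skew} seconds\n";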
I had a similar issue recently when trying to connect to Twitter from a Spout (Storm); I had to synchronize my Ubuntu clock using:
sudo apt-get install ntp