Google Cloud Messaging for Chrome Error 500 - PHP

So I'm trying to send a message to a Chrome extension through GCM, using PHP:
// $access_token is an OAuth2 access token obtained beforehand.
$data = json_encode(array(
    'channelId' => 'channel id here',
    'subchannelId' => '0',
    'payload' => 'test'
));
$ch = curl_init();
$curlConfig = array(
    CURLOPT_URL => "https://www.googleapis.com/gcm_for_chrome/v1/messages",
    CURLOPT_POST => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POSTFIELDS => $data,
    CURLOPT_SSL_VERIFYPEER => false,
    CURLOPT_HTTPHEADER => array(
        'Authorization: Bearer ' . $access_token,
        'Content-Type: application/json'
    )
);
curl_setopt_array($ch, $curlConfig);
$result = curl_exec($ch);
Each request returns { "error": { "code": 500, "message": null } }.
Thanks.

500 is the HTTP status code for an internal server error.
Sending a Google Cloud Message for Chrome from Google's OAuth Playground returns this for me:
HTTP/1.1 500 Internal Server Error
Content-length: 52
X-xss-protection: 1; mode=block
X-content-type-options: nosniff
X-google-cache-control: remote-fetch
-content-encoding: gzip
Server: GSE
Reason: Internal Server Error
Via: HTTP/1.1 GWA
Cache-control: private, max-age=0
Date: Wed, 15 May 2013 07:01:40 GMT
X-frame-options: SAMEORIGIN
Content-type: application/json; charset=UTF-8
Expires: Wed, 15 May 2013 07:01:40 GMT
{
  "error": {
    "code": 500,
    "message": null
  }
}
According to Google's Cloud Message for Chrome docs:
An internal error has occurred. This indicates something went wrong on the Google server side (for example, some backend not working or errors in the HTTP post such as a missing access token).
Essentially, there's something wrong on Google's side of things. Considering Google I/O starts in a few hours, I would assume they're currently making some changes.
Try checking again in a few hours.

I ran into the same problem today.
I found a discussion of it on the chromium-apps group:
https://groups.google.com/a/chromium.org/forum/?fromgroups=#!topic/chromium-apps/UXE_ASCN0gc

One possible reason is that the app you use for testing was never published in the Chrome Web Store. If you created an app locally and loaded it into Chrome unpacked for testing, for example, it will always fail like this because GCM does not know who owns the app. When publishing the app to the Store, use the same Google account that was used in the API Console to create the project and the OAuth client ID/client secret. GCM for Chrome works only if those Google accounts match.
GCM verifies that the owner of an app matches the owner of an access token, to make sure nobody but the owner of an app publishes messages for it. Publishing the app in the Web Store creates a link between a Google account and the app ID so this can be verified.
Once you have published some version of your app, you can add the "key" generated by the Web Store to the manifest of your local app and continue to modify/reload/debug locally, with your app now correctly registered for GCM. See my answer in the chromium-apps group for more details on that.
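For reference, a sketch of what that can look like in the local manifest.json; the file layout and names here are hypothetical, and the "key" value stands in for the long public key that the Web Store associates with your published app ID:
{
    "name": "My Push App",
    "version": "0.0.1",
    "manifest_version": 2,
    "app": { "background": { "scripts": ["background.js"] } },
    "permissions": ["pushMessaging"],
    "key": "LONG-BASE64-PUBLIC-KEY-COPIED-FROM-THE-PUBLISHED-APP"
}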

I got the same error too. I resolved it by packaging my app and uploading it to the Chrome Web Store. Then I used the new channel ID and it works now.

Related

HTTP/1.1 400 Bad Request between an AWS server to another AWS server but HTTP/1.1 200 between my local server and the same AWS server

Reference
PHP App1 AWS instances
AWSApp1Live (64bit Amazon Linux 2015.03 v1.4.6 running PHP 5.5)
AWSApp1Test (64bit Amazon Linux 2015.03 v1.3.2 running PHP 5.6)
PHP App2 AWS instances
AWSApp2Live (64bit Amazon Linux 2016.09 v2.3.2 running PHP 5.6)
AWSApp2Test (64bit Amazon Linux 2015.09 v2.0.4 running PHP 5.5)
Local App1 instance
LocalApp1 (Windows 10 running XAMPP Control Panel v3.2.1)
Overview
I have two PHP applications, PHP App1 and PHP App2, which work together by communicating with each other using internal POST requests. PHP App1 sends a request using:
$query = http_build_query($data);
$context = stream_context_create(array(
    'http' => array(
        'method' => 'POST',
        'header' => 'Content-Type: application/x-www-form-urlencoded' . PHP_EOL .
                    'Cv-Forwarded-For: ' . $_SERVER["REMOTE_ADDR"] . PHP_EOL,
        'content' => $query,
    ),
    'ssl' => array(
        'verify_peer' => false,
        'verify_peer_name' => false
    )
));
$response = @file_get_contents(
    $target = $this->api_target . $action,
    $use_include_path = false,
    $context
);
PHP App2 is running a RESTful API service which responds to the requests sent from PHP App1. The response is a JSON string.
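On the App1 side, the JSON reply would then be decoded along these lines (a sketch; the variable names are assumptions):
$reply = json_decode($response, true);
if ($reply === null && json_last_error() !== JSON_ERROR_NONE) {
    // Log the raw body when App2 returns something that isn't JSON,
    // e.g. an HTML error page accompanying a 400 response.
    error_log('Unexpected App2 reply: ' . substr((string)$response, 0, 200));
}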
Problem
I have loaded these PHP applications onto four different AWS instances as mentioned above. The following combinations of request/response were tested:
AWSApp1Live communicating with AWSApp2Live
(HTTP/1.1 400 Bad Request)
AWSApp1Test communicating with AWSApp2Live
(HTTP/1.1 400 Bad Request)
AWSApp1Test communicating with AWSApp2Test
(HTTP/1.1 200 OK)
LocalApp1 communicating with AWSApp2Live
(HTTP/1.1 200 OK)
So as you can see, when PHP App2 is loaded onto AWSApp2Test and sent a request from AWSApp1Test, it responds with HTTP/1.1 200 OK. But when it is loaded onto AWSApp2Live and given the same request from either AWSApp1Live or AWSApp1Test, it responds with HTTP/1.1 400 Bad Request. The only time AWSApp2Live responds with HTTP/1.1 200 OK is when the request was sent from LocalApp1.
Attempts to fix so far
We thought it might be something to do with the Zlib output compression setting in the AWSApp2Live software configuration, but the response was still the same.
We've done an nslookup from the LocalApp1 server and the AWSApp2Test server for the AWSApp2Live server, and both returned the same IP addresses, ruling out any DNS issues.
We SSH'd into the AWSApp1Test server and did a wget for one of the API commands on the AWSApp2Live server, and we got back a JSON response, which means requests are reaching the PHP application on that server.
There are no logs being generated on the AWSApp2Live server for me to see why the server is treating some requests as bad requests and others not.
Using Postman (https://www.getpostman.com/) I was able to successfully communicate with the API on AWSApp2Live so the application is running as expected.
Any help would be greatly appreciated!
We are not sure where to look for any further clues as to what might be happening. If you need any further information please let us know. Thank you so much.
There's nothing about it in the documentation, but I think headers should be passed as a string only when there is a single one; otherwise use an array. Note also that PHP_EOL is "\n" on Linux but "\r\n" on Windows, while HTTP requires header lines to end in "\r\n", which would explain why the same code behaves differently on your Windows machine and the Linux-based AWS instances. When you pass an array, PHP joins the headers with the correct separators for you:
$context = stream_context_create(array(
    'http' => array(
        'method' => 'POST',
        'header' => array(
            'Content-Type: application/x-www-form-urlencoded',
            'Cv-Forwarded-For: ' . $_SERVER["REMOTE_ADDR"]
        ),
        'content' => $query,
    ),
    'ssl' => array(
        'verify_peer' => false,
        'verify_peer_name' => false
    )
));
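Either way, because the request is made with @file_get_contents, failures are silent. A small diagnostic sketch, reusing the question's variable names: the http:// wrapper fills $http_response_header even for error responses, and the 'ignore_errors' context option makes PHP return the body of a 4xx/5xx response instead of false.
// Add 'ignore_errors' => true inside the 'http' array above so a 400
// response returns its body instead of false.
$response = file_get_contents($this->api_target . $action, false, $context);
if (isset($http_response_header)) {
    // Status line plus headers of the last response, e.g.
    // "HTTP/1.1 400 Bad Request".
    error_log(implode("\n", $http_response_header));
}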

Twitter API search doesn't work through proxy

I have a weird problem with the new Twitter API. I followed the very good answer from this question to create a search in Twitter and used the TwitterAPIExchange.php from here.
Everything works fine as long as I am calling it directly from my server with cURL. But in the live environment I have to use a proxy with Basic Authentication.
All I've done is add the proxy authentication to the performRequest function:
if(defined('WP_PROXY_HOST') && defined('WP_PROXY_PORT') && defined('WP_PROXY_USERNAME') && defined('WP_PROXY_PASSWORD'))
{
$options[CURLOPT_HTTPPROXYTUNNEL] = 1;
$options[CURLOPT_PROXYAUTH] = CURLAUTH_BASIC;
$options[CURLOPT_PROXY] = WP_PROXY_HOST . ':' . WP_PROXY_PORT;
$options[CURLOPT_PROXYPORT] = WP_PROXY_PORT;
$options[CURLOPT_PROXYUSERPWD] = WP_PROXY_USERNAME . ':' . WP_PROXY_PASSWORD;
}
Without the proxy I get a JSON response. But with the proxy I get:
HTTP/1.1 200 Connection established
HTTP/1.1 400 Bad Request
content-type: application/json; charset=utf-8
date: Fri, 20 Dec 2013 09:22:59 UTC
server: tfe
strict-transport-security: max-age=631138519
content-length: 61
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
Set-Cookie: guest_id=v1%3A138753137985809686; Domain=.twitter.com; Path=/; Expires=Sun, 20-Dec-2015 09:22:59 UTC
Age: 0

{"errors":[{"message":"Bad Authentication data","code":215}]}
I've tried to simulate a proxy in my local environment with Charles Proxy, and it worked.
I'm assuming the proxy is either not sending the Authentication Header, or is changing data somehow.
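One way to see exactly what cURL sends through the proxy is a verbose dump to a log file (a sketch; the log path is an assumption, and $options are the ones built in performRequest):
// Hypothetical debugging addition: write cURL's full conversation
// (CONNECT handshake, request headers, response headers) to a file.
$log = fopen('/tmp/twitter_curl.log', 'a');
$options[CURLOPT_VERBOSE] = true;
$options[CURLOPT_STDERR] = $log;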
Anybody with a clue?
EDIT:
Using the HTTP API works, but HTTPS fails. I've tried setting CURLOPT_SSL_VERIFYPEER and CURLOPT_SSL_VERIFYHOST to FALSE, but Twitter's SSL certificate is valid, so this is not recommended.
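Rather than disabling verification, pointing cURL at a CA bundle keeps the HTTPS check intact (a sketch; the bundle path is an assumption for your system):
// Keep peer verification on and supply a CA bundle explicitly;
// the path below is an example, not a required location.
$options[CURLOPT_SSL_VERIFYPEER] = true;
$options[CURLOPT_SSL_VERIFYHOST] = 2;
$options[CURLOPT_CAINFO] = '/etc/ssl/certs/ca-bundle.crt';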
Is your proxy response cached, or is the date in the proxy response old because you performed the API call on the 20th of December?
If it is cached, maybe your proxy is serving a cached reply from an earlier invalid request?

Making PHP cURL request on Windows yields "400 Bad Request" from proxy

Morning all
Basically, I am unable to make successful cURL requests to internal and external servers from my Windows 7 development PC because of an issue involving a proxy server. I'm running cURL 7.21.2 through PHP 5.3.6 on Apache 2.4.
Here's a most basic request that fails:
<?php
$curl = curl_init('http://www.google.com');
// Note: sys_get_temp_dir() does not always end in a separator.
$log_file = fopen(sys_get_temp_dir() . DIRECTORY_SEPARATOR . 'curl.log', 'w');
curl_setopt_array($curl, array(
    CURLOPT_RETURNTRANSFER => TRUE,
    CURLOPT_VERBOSE => TRUE,
    CURLOPT_HEADER => TRUE,
    CURLOPT_STDERR => $log_file,
));
$response = curl_exec($curl);
@fclose($log_file);
print "<pre>{$response}";
The following (complete) response is received.
HTTP/1.1 400 Bad Request
Date: Thu, 06 Sep 2012 17:12:58 GMT
Content-Length: 171
Content-Type: text/html
Server: IronPort httpd/1.1
Error response
Error code 400.
Message: Bad Request.
Reason: None.
The log file generated by cURL contains the following.
* About to connect() to proxy usushproxy01.unistudios.com port 7070 (#0)
* Trying 216.178.96.20... * connected
* Connected to usushproxy01.unistudios.com (216.178.96.20) port 7070 (#0)
> GET http://www.google.com HTTP/1.1
Host: www.google.com
Accept: */*
Proxy-Connection: Keep-Alive
< HTTP/1.1 400 Bad Request
< Date: Thu, 06 Sep 2012 17:12:58 GMT
< Content-Length: 171
< Content-Type: text/html
< Server: IronPort httpd/1.1
<
* Connection #0 to host usushproxy01.unistudios.com left intact
Explicitly stating the proxy and user credentials, as in the following, makes no difference: the response is always the same.
<?php
$curl = curl_init('http://www.google.com');
$log_file = fopen(sys_get_temp_dir() . DIRECTORY_SEPARATOR . 'curl.log', 'w');
curl_setopt_array($curl, array(
    CURLOPT_RETURNTRANSFER => TRUE,
    CURLOPT_VERBOSE => TRUE,
    CURLOPT_HEADER => TRUE,
    CURLOPT_STDERR => $log_file,
    CURLOPT_PROXY => 'http://usushproxy01.unistudios.com:7070',
    CURLOPT_PROXYUSERPWD => '<username>:<password>',
));
$response = curl_exec($curl);
@fclose($log_file);
print "<pre>{$response}";
I was surprised to see an absolute URL in the request line ('GET ...'), but I think that's fine when dealing with proxy servers, according to the HTTP spec.
I've tried all sorts of combinations of options, including sending a user-agent, following this and that, etc., having been through Stack Overflow questions and other sites, but all requests end in the same response.
The same problem occurs if I run the script on the command line, so it can't be an Apache issue, right?
If I make a request using cURL from a Linux box on the same network, I don't experience a problem.
It's the "Bad Request" thing that's puzzling me: what on earth is wrong with my request? Do you have any idea why I may be experiencing this problem? A Windows thing? A bug in the version of PHP/cURL I'm using?
Any help very gratefully received. Many thanks.
You might be looking at an issue between cURL (different versions between Windows and Linux) and your IronPort version. In IronPort documentation:
Fixed: Web Proxy uses the Proxy-Connection header instead of the
Connection header, causing problems with some user agents
Previously, the Web Proxy used the Proxy-Connection header instead of the
Connection header when communicating with user agents with explicit
forward requests. Because of this, some user agents, such as Real
Player, did not work as expected. This no longer occurs. Now, the Web
Proxy replies to the client using the Connection header in addition to
the Proxy-Connection header. [Defect ID: 46515]
Try removing the Proxy-Connection header (or adding a Connection header) and see whether that solves the problem.
Also, you might want to compare the cURL logs between Windows and Linux hosts.
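If you want to test that from PHP, libcurl's documented header rules let you suppress an automatically generated header by passing its name with a colon and no value, so something along these lines may be worth trying (a sketch, not a confirmed fix):
// Suppress cURL's automatic "Proxy-Connection: Keep-Alive" and send
// a plain Connection header instead; "Header:" with no value removes
// that header entirely.
curl_setopt($curl, CURLOPT_HTTPHEADER, array(
    'Proxy-Connection:',
    'Connection: Keep-Alive',
));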

Can't seem to get a web page's contents via cURL - user agent and HTTP headers both set?

For some reason I can't seem to get this particular web page's contents via cURL. I've managed to use cURL to get the "top level page" contents fine, but the same self-built quick cURL function doesn't seem to work for one of the linked sub-pages.
Top level page: http://www.deindeal.ch/
A sub page: http://www.deindeal.ch/deals/hotel-cristal-in-nuernberg-30/
My cURL function (in functions.php)
function curl_get($url) {
    $ch = curl_init();
    $header = array(
        'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7',
        'Accept-Language: en-us;q=0.8,en;q=0.6'
    );
    $options = array(
        CURLOPT_URL => $url,
        CURLOPT_HEADER => 0,
        CURLOPT_RETURNTRANSFER => 1,
        CURLOPT_USERAGENT => 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13',
        CURLOPT_HTTPHEADER => $header
    );
    curl_setopt_array($ch, $options);
    $return = curl_exec($ch);
    curl_close($ch);
    return $return;
}
PHP file to get the contents (using echo for testing)
require "functions.php";
require "phpQuery.php";
echo curl_get('http://www.deindeal.ch/deals/hotel-walliserhof-zermatt-2-naechte-30/');
So far I've attempted the following to get this to work:
Ran the file both locally (XAMPP) and remotely (LAMP).
Added in the user-agent and HTTP headers as recommended here: file_get_contents and CURL can't open a specific website. Before that, the curl_get() function contained all the options as current, except for CURLOPT_USERAGENT and CURLOPT_HTTPHEADER.
Is it possible for a website to completely block requests via cURL or other remote file opening mechanisms, regardless of how much data is supplied to attempt to make a real browser request?
Also, is it possible to diagnose why my requests are turning up with nothing?
Any help answering the above two questions, or editing/making suggestions to get the file's contents, even if through a method different from cURL, would be greatly appreciated ;).
Try adding:
CURLOPT_FOLLOWLOCATION => TRUE
to your options.
If you run a simple curl request from the command line (including a -i to see the response headers) then it is pretty easy to see:
$ curl -i 'http://www.deindeal.ch/deals/hotel-cristal-in-nuernberg-30/'
HTTP/1.1 302 FOUND
Date: Fri, 30 Dec 2011 02:42:54 GMT
Server: Apache/2.2.16 (Debian)
Vary: Accept-Language,Cookie,Accept-Encoding
Content-Language: de
Set-Cookie: csrftoken=d127d2de73fb3bd72e8986daeca86711; Domain=www.deindeal.ch; Max-Age=31449600; Path=/
Set-Cookie: generic_cookie=1; Path=/
Set-Cookie: sessionid=987b1a11224ecd0e009175470cf7317b; expires=Fri, 27-Jan-2012 02:42:54 GMT; Max-Age=2419200; Path=/
Location: http://www.deindeal.ch/welcome/?deal_slug=hotel-cristal-in-nuernberg-30
Content-Length: 0
Connection: close
Content-Type: text/html; charset=utf-8
As you can see, it returns a 302 with a Location header. If you hit that location directly, you will get the content you are looking for.
And to answer your two questions:
No, it is not possible to block requests from something like cURL. If the consumer can talk HTTP, then it can get to anything the browser can get to.
Diagnosing with an HTTP proxy could have been helpful for you. Wireshark, Fiddler, Charles, et al. should help you out in the future. Or do like I did and make a request from the command line.
EDIT
Ah, I see what you are talking about now. So, when you go to that link for the first time you are redirected and a cookie (or cookies) is set. Once you have those cookies, your request goes through as intended.
So, you need to use a cookiejar, like in this example: http://icfun.blogspot.com/2009/04/php-how-to-use-cookie-jar-with-curl.html
So, you will need to make an initial request, save the cookies, and make your subsequent requests including the cookies after that.
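A minimal sketch of the curl_get() function from the question with a cookie jar and redirect-following added (the cookie-file path is an assumption):
function curl_get_with_cookies($url) {
    // Persist cookies in a file so the redirect target receives the
    // session cookies set by the first response.
    $cookie_jar = sys_get_temp_dir() . DIRECTORY_SEPARATOR . 'deindeal_cookies.txt';
    $ch = curl_init();
    curl_setopt_array($ch, array(
        CURLOPT_URL => $url,
        CURLOPT_RETURNTRANSFER => 1,
        CURLOPT_FOLLOWLOCATION => 1,       // follow the 302 redirect
        CURLOPT_COOKIEFILE => $cookie_jar, // read stored cookies from here
        CURLOPT_COOKIEJAR => $cookie_jar,  // write cookies here on close
    ));
    $return = curl_exec($ch);
    curl_close($ch);
    return $return;
}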

Twilio browser-to-browser call issue

I am following http://www.twilio.com/docs/quickstart/client/browser-to-browser-calls to make a call from browser to browser.
According to the documentation, open two browsers with:
http://127.0.0.1/client.php?client=test1 and http://127.0.0.1/client.php?client=test2
When I use the sandbox App ID:
Neither browser can connect, but a call does connect to the sandbox number.
When I use my own App ID:
It shows me the following error:
Component: TwiML errors
httpResponse: 502
ErrorCode: 11200
url: http://127.0.0.1/demo.php
Request: What Twilio sent your server
HTTP Method: GET
HTTP URL: http://127.0.0.1/demo.php
HTTP BODY: Key Value
AccountSid xxxxxxxxxxxxxxxxxxxxxxxxx
ApplicationSid dddddddddddddddddddddddddd
Caller client:jenny
CallStatus ringing
Called
To
PhoneNumber tommy
CallSid cccccccccccccccccccccccccccc
From client:jenny
Direction inbound
ApiVersion 2010-04-01
Response: What your web application responded with to Twilio
HTTP Headers: Key Value
Date Thu, 25 Aug 2011 11:36:22 GMT
Content-Length 137
Connection close
Content-Type text/html
Server TwilioProxy/0.7
with the body:
<html><head><title>502 Bad Gateway</title></head>
<body><h1>Bad Gateway</h1>An upstream server returned an invalid response.</body></html>
The Voice URL assigned to your application must be a publicly accessible URL so that the Twilio servers can reach it. If it's running on 127.0.0.1, that is only accessible from your local machine and we can't reach it.
You may be interested in something like localtunnel.com or ngrok which will let you expose your local server to the public.
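For reference, once the URL is public, the browser-to-browser quickstart expects the Voice URL (demo.php here) to answer with TwiML that dials the other client. A hypothetical sketch, with the client name taken from the PhoneNumber parameter shown in the request dump above:
<?php
// Hypothetical demo.php: respond with TwiML that dials the client
// named in the request ("tommy" in the dump above).
header('Content-Type: text/xml');
$client = isset($_REQUEST['PhoneNumber']) ? $_REQUEST['PhoneNumber'] : '';
echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
echo '<Response><Dial><Client>' . htmlspecialchars($client) . '</Client></Dial></Response>';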
