Responding to the Facebook Webhook API in PHP

Facebook recently changed its Leadgen Webhooks so that you have to respond with "200 OK" for each lead that comes in, or they'll keep pinging you with that lead. The problem is, they don't mention how to do that, and I've been trying to send the response, unsuccessfully so far.
What I've tried so far:
status_header(200);
http_response_code(200);
header($_SERVER['SERVER_PROTOCOL'] . ' 200 OK');
header($_SERVER['SERVER_PROTOCOL'] . ' 200 OK', true, 200);
function handle_200() {
    status_header(200);
    header($_SERVER['SERVER_PROTOCOL'] . ' 200 OK');
    error_log('test'); // This never even runs
}
add_action('send_headers', 'handle_200', 1);
None of the above has worked. I've also checked that, at the point where I was trying to change the status header, headers had not already been sent: headers_sent() returned FALSE.
I've also tried checking if the status header can be seen through either one of these:
error_log(print_r(getallheaders(), true));
error_log(print_r(headers_list(), true));
Neither printed the status header, yet if I do something like header('foo: bar'), that does show up in the headers list.
I'm at a complete loss, any help to point in the right direction would be much appreciated.
That being said, it might be relevant to mention that this script runs in the theme functions of a WordPress environment. Furthermore, the script itself (where I'm making these changes) runs inside ob_start() and ob_end_clean() (not sure if that's relevant).
EDIT: I just found out that when I cause a fatal error that ends my script prematurely, Facebook receives a 200 OK response (ironically). This makes me think the problem is caused by running the script within ob_start() and ob_end_clean(). Could that be delaying the header response?
EDIT 2: This remains unresolved. I have a temporary solution that successfully sends a response about 35% of the time. At this point I receive a lead from the webhook, retrieve the lead's info, and run a SQL query to check whether that lead has been received before; if it has, I exit(200). As simple as that. How could that take long for leads that already exist? It couldn't! Yet my server (according to Facebook's Ads tool) still times out (without a response) 50% of the time. Something doesn't add up.
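If the theme's ob_start()/ob_end_clean() wrapper really is holding the response back, one workaround is to discard the buffers and hand Facebook its 200 before doing any slow work. A minimal sketch (untested against this exact setup; fastcgi_finish_request() only exists under PHP-FPM):

ignore_user_abort(true); // keep running even if Facebook closes the connection early
// Discard every open output buffer (including the theme's ob_start())
// so the response isn't held until the script ends.
while (ob_get_level() > 0) {
    ob_end_clean();
}
http_response_code(200);
header('Connection: close');
header('Content-Length: 0');
flush();
// Under PHP-FPM this hands the finished response to the web server immediately.
if (function_exists('fastcgi_finish_request')) {
    fastcgi_finish_request();
}
// ... continue processing the lead (dedupe query, etc.) after the 200 is out ...

The idea is simply to decouple acknowledging the webhook from processing it; even if the processing is slow or fatals, Facebook has already been answered.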

Related

AWS Sns publish silently dies when device is disabled?

I am hitting a strange error when I attempt to call $sns->publish (PHP): it never returns. I am not sure whether it dies silently, but I could not catch an exception or get a return code.
I was able to track this down to happening when the device for the token (endpoint) is already disabled in the SNS console. It gets disabled on the initial call, I assume due to GCM returning an error that the token is invalid.
What am I doing wrong, and how can I prevent the problem? I do not want to check every endpoint for being enabled, since I may be pushing to 10 out of 1000. However, I definitely want my push loop to continue executing.
Any thoughts? The AWS team forum seems useless: it has been weeks since an AWS team member's original reply asking for code, with no response since then.
You can check whether the endpoint is disabled before sending the push notification:
$arn_code = ARN_CODE_HERE;
$arn_arr = array("EndpointArn" => $arn_code);
$endpointAtt = $sns->getEndpointAttributes($arn_arr);
//print_r($endpointAtt);
if ($endpointAtt != 'failed' && $endpointAtt['Attributes']['Enabled'] != 'false') {
    // ....PUBLISH CODE HERE....
}
This way it will not stop the execution of your push loop.
Hope it helps you.
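Alternatively, if you'd rather not pay for an extra getEndpointAttributes() call per device, you can wrap the publish itself in a try/catch so a disabled endpoint doesn't kill the loop. A rough sketch, assuming the AWS SDK for PHP v3 (the exception class differs in older SDKs); $endpointArns and $message here stand in for your own data:

use Aws\Sns\Exception\SnsException;

foreach ($endpointArns as $arn) {
    try {
        $sns->publish(array(
            'TargetArn' => $arn,
            'Message'   => $message,
        ));
    } catch (SnsException $e) {
        // An EndpointDisabled error means this device token is dead; log it and move on.
        error_log("SNS publish failed for $arn: " . $e->getAwsErrorCode());
        continue;
    }
}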

SOAP client throws "Error fetching http headers" after first request

I need to make the acquaintance of SOAP, and wrote a simple client connecting to some random web service. (Turns out even finding a working service is a bit of a hassle.)
The code I have so far seems to work - but here's the thing: it only works once every ten seconds.
When I first load the page it shows the result I expect - a var_dump of an object - but when I reload the page right after that, all I see is Error Fetching http headers. No matter how many times I refresh, it takes around ten seconds until I get the right result again, and then the process repeats: refresh too quickly, get an error.
I can't see what's going on at the HTTP level, and even if I could, I'm not sure I'd be able to draw the right conclusions.
Answers to similar questions posted here include setting the keep_alive option to false, or extending the default_socket_timeout, but neither solution worked for me.
So, long story short: is this an issue on the service's end or a problem I can remedy, and if it's the latter, how?
Here's the code I have so far:
<?php
error_reporting(-1);
ini_set("display_errors", true);
ini_set("max_execution_time", 600);
ini_set('default_socket_timeout', 600);

$wsdl = "http://api.chartlyrics.com/apiv1.asmx?WSDL";

try {
    $client = new SoapClient($wsdl, array(
        "keep_alive" => false,
        "trace" => true
    ));
    $response = $client->SearchLyricDirect(array(
        "artist" => "beatles",
        "song" => "norwegian wood"
    ));
    var_dump($response);
} catch (Exception $e) {
    echo $e->getMessage();
}
?>
Any help would be appreciated. (And as a bonus, if you could enlighten me as to why saving the WSDL locally speeds the process up by 30 seconds, that'd be great as well. I assume it's the DNS lookup that takes so much time?)
As it turns out, the connection to the server as a whole is rather shaky.
I (and a few others I've asked) had similar issues just trying to open the WSDL file in a browser: it works the first time, but refreshing somehow aborts the connection for a good ten seconds.
Though I really can't say what its problem is, this does strongly suggest that the fault lies with the server, not my client.
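As for the bonus question: the ~30 seconds saved by a local WSDL copy is almost certainly the remote fetch itself from that same shaky server, not DNS. PHP can also cache the WSDL for you; a small sketch of both options (the local path is just an example):

// Option 1: let PHP cache the remote WSDL on disk (enabled by default in most installs)
ini_set('soap.wsdl_cache_enabled', 1);
ini_set('soap.wsdl_cache_ttl', 86400); // keep the cached copy for a day

// Option 2: point SoapClient at a copy you saved locally
$client = new SoapClient(__DIR__ . '/apiv1.wsdl', array(
    'keep_alive' => false,
    'trace'      => true,
));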

Laravel Timeout Issue

I have a long running Laravel process to generate a report. When selecting long date ranges, I was getting a redirect back to the same URL after approximately 100s. I changed the code to this:
set_time_limit(20);
while (1) {
    $var = 3 + 4 / 11;
}
It runs for 20s and then redirects to the same URL. I'd like to add that I have two routes, a GET route and a POST route; the timeout happens on the POST route.
I've tried
set_time_limit(0);
but it didn't make a difference. I've turned on debug, but nothing. Any help is appreciated.
EDIT: I am running PHP 5.4.x, so it's not safe mode.
EDIT: here is the controller - http://laravel.io/bin/WVdVz, Here is the last code that is supposed to execute - http://laravel.io/bin/aa2GW.
EDIT: The error handling library, Whoops, catches and logs timeout errors. My logs are clean. This has something to do with how Laravel is treating responses after my _download function...
After a lot of debugging, I figured it out: Apache was timing out. Apparently, when Apache times out, it throws a 500 response code. Apparently (again), when a browser gets a 500 code in response to a POST request, it resends it as a GET request. I wrote it up in more detail here: http://blog.voltampmedia.com/2014/09/02/php-apache-timeouts-post-requests/
To be clear, it's not a Laravel issue. Do note that the Whoops library does capture the timeout error.
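If you hit the same thing and a longer limit is acceptable, the usual knob on the Apache side is the Timeout directive (the default varies by version and distro, and FastCGI/proxy setups have their own timeouts, such as ProxyTimeout); PHP's set_time_limit() cannot help here because Apache gives up independently of PHP. A sketch, assuming a stock Apache config:

# httpd.conf / apache2.conf: how long Apache waits for the request to complete
# before giving up and returning a 500
Timeout 600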

PHP doesn't detect connection abort at all

I have read and deeply understood these:
http://www.php.net/manual/en/features.connection-handling.php
http://www.php.net/manual/en/function.register-shutdown-function.php
However, I have tested both PHP 5.1.6 and 5.3 and things DON'T work as described there. What I observe is:
connection_status() always returns 0 (CONNECTION_NORMAL), even after the client has closed the connection
execution of the script continues after the client has closed the connection, even though ignore_user_abort is 0
a function registered with register_shutdown_function() is not run until the script reaches its end. The script is NOT interrupted (and hence the function is not called) when the client aborts the connection.
So basically PHP just doesn't detect the client's disconnection AT ALL.
Note that this is NOT as if ignore_user_abort were set to 1: in that case connection_status() would return 1 (ABORTED) while the script kept running, and the shutdown function would still not be called until the end. That is not what I'm seeing.
ini_get("ignore_user_abort") returns 0, as expected.
Is this a bug in PHP, or may this be due to some Apache setting?
How do I get PHP to work as described in the abovementioned documentation?
Test script:
<?php
function myShutdown() {
    error_log("myShutdown " . connection_status() . " " . ini_get("ignore_user_abort"));
}
register_shutdown_function(myShutdown);
echo "Hi!";
error_log(" *** test/test *** ");
for ($i = 0; $i < 10; $i++) {
    sleep(1);
    error_log(".");
    echo ".";
}
?>
Steps to reproduce:
- visit the url of the script
- abort the connection on the client before 10 seconds have elapsed (e.g. hit the stop button in the browser)
Expected/Desired behavior:
The logs should show fewer than 10 dots, and at the end "myShutdown 1 0" (if you watch the log in real time, the myShutdown entry should appear immediately when the client disconnects).
Observed/current behavior:
The logs always show exactly 10 dots, and at the end "myShutdown 0 0" (if you watch in real time, it goes on for 10 seconds no matter when the client disconnects).
First, I also failed to get it to work, using a basic Ubuntu 12.04 LAMP installation (PHP 5.3). But I have some information and hope that it is helpful. Any comments or edits appreciated! :)
I see two problems with your code. The first is a syntax error: you are missing the single quotes around myShutdown when calling register_shutdown_function(). Change the line to:
register_shutdown_function('myShutdown');
The second problem I see is the missing flush() call after the echo statements. The documentation says:
PHP will not detect that the user has aborted the connection until an attempt is made to send information to the client. Simply using an echo statement does not guarantee that information is sent, see flush().
But even flush() will not always help. From the documentation of flush():
flush() may not be able to override the buffering scheme of your web server and it has no effect on any client-side buffering in the browser. It also doesn't affect PHP's userspace output buffering mechanism. This means you will have to call both ob_flush() and flush() to flush the ob output buffers if you are using those.
Several servers, especially on Win32, will still buffer the output from your script until it terminates before transmitting the results to the browser.
Server modules for Apache like mod_gzip may do buffering of their own that will cause flush() to not result in data being sent immediately to the client.
Even the browser may buffer its input before displaying it. Netscape, for example, buffers text until it receives an end-of-line or the beginning of a tag, and it won't render tables until the tag of the outermost table is seen.
Some versions of Microsoft Internet Explorer will only start to display the page after they have received 256 bytes of output, so you may need to send extra whitespace before flushing to get those browsers to display the page.
In the comments of that page there is advice to set several headers and Apache config values:
apache_setenv('no-gzip', 1);
ini_set('zlib.output_compression', 0);
ini_set('implicit_flush', 1);
However, even this didn't work for me. I've investigated with Wireshark: although the web server sends the content ('Hi') after 0.0037 seconds, the web browser buffers the page before displaying it.
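For completeness, here is roughly what a test script that gives the abort its best chance of being detected looks like, combining the flush advice above (still no guarantee, given the server- and browser-side buffering just described):

<?php
apache_setenv('no-gzip', 1);           // only available when running under mod_php
ini_set('zlib.output_compression', 0);
ini_set('implicit_flush', 1);
echo str_repeat(' ', 256);             // padding so fussy browsers start rendering
for ($i = 0; $i < 10; $i++) {
    sleep(1);
    echo ".";
    if (ob_get_level() > 0) {
        ob_flush();                    // flush PHP's userspace buffer first
    }
    flush();                           // then the SAPI/web-server buffer
    // connection_aborted() only updates after a write actually hits the closed socket
    if (connection_aborted()) {
        error_log("client aborted after $i dots");
        exit;
    }
}
?>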

PHProxy hanging on response

I've been working on a PHProxy server for some time (you can see my recent posts) and I'm at a point where I have everything working except this problem.
do {
    $data = @fread($_socket, 8192);
    $_response_body .= $data;
} while (isset($data[0]));
unset($data);
My proxy server logs into a server running IIS without the user's intervention (credentials are verified somewhere else beforehand). Upon logging into this site, the request headers are constructed and sent, but the response waits for 120 seconds on this section of code. After that long period the proxy continues correctly, as it is supposed to. The response I'm waiting on is just an "Object has moved here" page that gives me a new location. I've verified the headers are correct via Wireshark and LiveHttpHeaders. Again, everything IS working; it just takes forever to load this particular page.
Can any PHP developers give me a hint as to what I should be checking for the malfunction?
Thanks,
EDIT:
[17-Jul-2010 12:33:17] BEFORE RESPONSE
[17-Jul-2010 12:35:17] AFTER RESPONSE
It takes 120 seconds exactly. Is something timing out?
This code significantly shortens the response time, but it doesn't identify the underlying problem of where/who/what is timing out to begin with.
stream_set_timeout($_socket, 1);
do {
    $data = @fread($_socket, 8192); // silenced to avoid the "normal" warning from a faulty SSL connection
    $_response_body .= $data;
} while (isset($data[0]));
unset($data);
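A likely explanation for the exact 120 seconds: fread() only returns an empty string once the remote server closes the socket, and IIS keeps a keep-alive connection open until its own timeout expires, so the loop sits there until then. Instead of papering over it with a 1-second stream timeout, you could stop reading as soon as the body is complete. A rough sketch, assuming a hypothetical $_response_headers array that already holds the parsed response headers:

// Stop reading once Content-Length bytes have arrived instead of
// waiting for the server to close a keep-alive connection.
$length = isset($_response_headers['content-length'])
    ? (int) $_response_headers['content-length']
    : null;
$_response_body = '';
while ($length === null || strlen($_response_body) < $length) {
    $data = @fread($_socket, 8192);
    if ($data === '' || $data === false) {
        break; // the server closed the connection (or the stream timed out)
    }
    $_response_body .= $data;
}

Sending Connection: close in the outgoing request headers attacks the same problem from the other side, since the server will then close the socket as soon as the response is done.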
