SOAP client throws "Error fetching http headers" after first request - php

I need to make the acquaintance of SOAP, and wrote a simple client connecting to some random web service. (Turns out even finding a working service is a bit of a hassle.)
The code I have so far seems to work - but here's the thing: it only works once every ten seconds.
When I first load the page it shows the result I expect - a var_dump of an object - but when I reload the page right after that, all I see is Error Fetching http headers. No matter how many times I refresh, it takes around ten seconds until I get the right result again, and then the process repeats - refresh too quickly, get an error.
I can't see what's going on at the HTTP level, and even if I could, I'm not sure I'd be able to draw the right conclusions.
Answers to similar questions posted here include setting the keep_alive option to false, or extending the default_socket_timeout, but neither solution worked for me.
So, long story short: is this an issue on the service's end or a problem I can remedy, and if it's the latter, how?
Here's the code I got so far:
<?php
error_reporting(-1);
ini_set("display_errors", true);
ini_set("max_execution_time", 600);
ini_set('default_socket_timeout', 600);

$wsdl = "http://api.chartlyrics.com/apiv1.asmx?WSDL";

try
{
    $client = new SoapClient($wsdl, array(
        "keep_alive" => false,
        "trace" => true
    ));
    $response = $client->SearchLyricDirect(array(
        "artist" => "beatles",
        "song" => "norwegian wood"
    ));
    var_dump($response);
}
catch (Exception $e)
{
    echo $e->getMessage();
}
?>
Any help would be appreciated. (And as a bonus, if you could enlighten me as to why saving the WSDL locally speeds the process up by 30 seconds, that'd be great as well. I assume it's the DNS lookup that takes so much time?)

As it turns out, the connection to the server as a whole is rather shaky.
I (and a few others I've asked) had similar issues just trying to open the WSDL file in a browser - it works the first time, but refreshing somehow aborts the connection for a good ten seconds.
Though I really can't say what its problem is, this does strongly suggest that the fault lies with the server, not my client.
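As for the bonus question: keeping a local copy of the WSDL means page loads no longer depend on downloading it from the unreliable server, which is most likely where those 30 seconds go rather than DNS alone. For illustration, a minimal sketch of doing that explicitly (the local file path is arbitrary):

<?php
// Minimal sketch: fetch the WSDL once and reuse the local copy, so page loads
// don't depend on downloading it from the flaky server every time.
$localWsdl = __DIR__ . "/chartlyrics.wsdl"; // arbitrary local path
if (!file_exists($localWsdl)) {
    copy("http://api.chartlyrics.com/apiv1.asmx?WSDL", $localWsdl);
}

$client = new SoapClient($localWsdl, array(
    "keep_alive" => false,
    "trace" => true
));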

Related

Best way to guarantee a job keeps being executed

I have a script that runs continuously on the server, in this case a PHP script, like:
php path/to/my/index.php.
It gets executed, and when it's done, it's executed again, and again, forever.
I'm looking for the best way to be notified if it stops running (stops being executed).
There are many reasons why it might stop being called: server memory, a new deployment, human error, etc.
I just want to be notified (email, SMS, Slack...) if the script hasn't been executed for a certain amount of time (like 1 hour, 1 day, etc.).
My server is Ubuntu running on AWS.
An idea:
I was thinking of having a key in Redis/Memcached/etc. with a TTL. Every time the script runs, it renews the TTL on that key.
If the script stops working for longer than the TTL, the key expires. I just need a way to trigger a notification when that expiration happens, but it looks like Redis/Memcached aren't built for that.
register_shutdown_function might help, but might not... https://www.php.net/manual/en/function.register-shutdown-function.php
I can't say I've ever seen a script that needs to run indefinitely in PHP. Perhaps there is another way to solve the problem you are after?
Update - Following your redis idea, I'd look at keyspace notifications. https://redis.io/topics/notifications
I've not tested the idea since I'm not actually a redis user. But it may be possible to subscribe to capture the expiration event (perhaps from another server?) and generate your notification.
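For illustration only, here's a rough sketch of such a watcher using the phpredis extension. The key name job:heartbeat and the mail notification are placeholders, and it assumes Redis database 0 with notify-keyspace-events configured to include Ex:

<?php
// Rough sketch: react to the heartbeat key expiring.
// Assumes the phpredis extension and that Redis was configured with:
//   CONFIG SET notify-keyspace-events Ex
// "job:heartbeat" is a hypothetical key your script would SET with a TTL on every run.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->setOption(Redis::OPT_READ_TIMEOUT, -1); // block forever while subscribed

$redis->psubscribe(array('__keyevent@0__:expired'), function ($redis, $pattern, $channel, $key) {
    if ($key === 'job:heartbeat') {
        // TTL lapsed: the script hasn't renewed its heartbeat in time.
        mail('you@example.com', 'Job heartbeat expired', 'index.php seems to have stopped running.');
    }
});

Run the subscriber from a separate process (ideally a separate host) so it isn't taken down by whatever kills the job itself.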
There's no 'best' way to do this. Ultimately, what works best will boil down to the specific workflow you're supporting.
tl;dr version: Find what constitutes success and record the most recent time it happened. Use that for your notification trigger in another script.
Long version:
That said, persistent storage with a separate watcher is probably the most straightforward way to do this. Record the last successful run, and then check it with a cron job every so often.
For what it's worth, for scripts like this I generally monitor exit codes or logs produced by the script in question. This isolates the error notification process from the script itself so a flaw in the script (hopefully) doesn't hamper the notification.
For a barebones example, say we have a script to invoke the actual script... (This is very much untested pseudo-code)
<?php
// Run and record.
exec("php path/to/my/index.php", $output, $return_code);

// $return_code will be 255 on fatal errors. You can use other return codes
// with exit in your called script to report other fail states.
if ($return_code == 0) {
    file_put_contents('/path/to/folder/last_success.txt', time());
} else {
    file_put_contents('/path/to/folder/error_report.json', json_encode([
        'return_code' => $return_code,
        'time' => time(),
        'output' => implode("\n", $output),
        // assuming here that error output isn't silently logged somewhere already
    ], JSON_PRETTY_PRINT));
}
And then a watcher.php that monitors these files on a cron job.
<?php
// Notify us immediately on failure, maybe?
// If you have a lot of transient failures it may make more sense to
// aggregate them in a single report at a specific time instead.
if (is_file('/path/to/folder/error_report.json')) {
    // Mail details stored in the JSON here.
    // Rename the file so it's recorded, but we don't receive it again.
    rename('/path/to/folder/error_report.json', '/path/to/folder/error_report.json'.'-sent-'.date('Y-m-d-H-i-s'));
} else {
    if (is_file('/path/to/folder/last_success.txt')) {
        $last_success = intval(file_get_contents('/path/to/folder/last_success.txt'));
        if (strtotime('-24 hours') > $last_success) {
            // Our script hasn't run in 24 hours, let someone know.
        }
    } else {
        // No successful run recorded. Might want to put code here if that's unexpected.
    }
}
Notes: There are some caveats to the specific approach displayed above. A script can fail in a non-fatal way, and if you're not checking for that, this example could record it as a successful run. For example, permissions errors might cause warnings while the script still runs its full course and exits normally without hitting an exit call with a specific return code. Our example invoker here would log that as a successful run - even though it isn't.
Another option is to log success from your script and only check for error exits from the invoker.
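For illustration, a tiny sketch of that variant, reusing the same placeholder path as above; the very last line of index.php records its own success:

<?php
// Very end of index.php: only reached when the whole run finished without a fatal error.
file_put_contents('/path/to/folder/last_success.txt', time());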

PHP ajax multiple calls

I have been looking for several answers around the web and here, but I could not find one that solved my problem.
I am making several jQuery ajax calls to the same PHP script. At first, I was seeing each call being executed only after the previous one was done. I changed this by adding session_write_close() at the beginning of the script, to prevent PHP from locking the session against the other ajax calls. I am not editing the $_SESSION variable in the script, only reading from it.
Now the behaviour is better, but instead of having all my requests starting simultaneously, they go in blocks, as you can see in the image:
What should I do to get all my requests to start at the same moment and actually be executed independently of one another?
For better clarity, here is my js code:
var promises = [];
listMenu.forEach(function(menu) {
    var res = sendMenu(menu); // AJAX CALL
    promises.push(res);
});

$.when.apply(null, promises).done(function() {
    $('#ajaxSpinner').hide();
    listMenu = null;
});
My PHP script is just inserting/updating data, and starts with:
<?php
session_start();
session_write_close();
//execution
I guess I am doing things the wrong way. Thank you in advance for your precious help!
Thomas
This is probably a browser limitation: there is a maximum number of concurrent connections to a single server per browser instance. In Chrome this limit has been 6, which matches the size of the blocks shown in your screenshot. Though this is from 2009, I believe it's still relevant: https://bugs.chromium.org/p/chromium/issues/detail?id=12066

Laravel Timeout Issue

I have a long running Laravel process to generate a report. When selecting long date ranges, I was getting a redirect back to the same URL after approximately 100s. I changed the code to this:
set_time_limit(20);
while (1) {
    $var = 3 + 4 / 11;
}
It runs for 20s then redirects to the same URL. I'd like to add that I have 2 routes, a GET route, and a POST route. The timeout happens for the POST route.
I've tried
set_time_limit(0);
but it didn't make a difference. I've turned on debug, but nothing. Any help is appreciated.
EDIT: I am running PHP 5.4.x, so it's not safe mode.
EDIT: here is the controller - http://laravel.io/bin/WVdVz, Here is the last code that is supposed to execute - http://laravel.io/bin/aa2GW.
EDIT: The error handling library, Whoops, catches and logs timeout errors. My logs are clean. This has something to do with how Laravel is treating responses after my _download function...
After a lot of debugging, I figured it out. Apache was timing out. Apparently, when Apache times out, it returns a 500 response code. Apparently (again), when a browser gets a 500 error code in response to a POST request, it resends the request as a GET. I wrote it up here in more detail: http://blog.voltampmedia.com/2014/09/02/php-apache-timeouts-post-requests/
To be clear, it's not a Laravel issue. Do note that the Whoops library does capture the timeout error.

PHP - set time limit effectively

I have a simple script that redirects to the mobile version of a website if it detects that the user is browsing on a mobile phone. It uses the Tera-WURFL web service to accomplish that, and it will be placed on different hosting than Tera-WURFL itself. I want to protect it in case of Tera-WURFL hosting downtime. In other words, if my script takes more than a second to run, stop executing it and just redirect to the regular website. How do I do that efficiently (so that the CPU isn't overly burdened by the script)?
EDIT: It looks like the TeraWurflRemoteClient class has a timeout property. Read below. Now I need to find out how to use it in my script, so that it redirects to the regular website in case of a timeout.
Here is the script:
<?php
require_once 'TeraWurflRemoteClient.php'; // adjust the path to wherever the client class lives

// Instantiate a new TeraWurflRemoteClient object
$wurflObj = new TeraWurflRemoteClient('http://my-Tera-WURFL-install.pl/webservicep.php');

// Define which capabilities you want to test for. Full list: http://wurfl.sourceforge.net/help_doc.php#product_info
$capabilities = array("product_info");

// Define the response format (XML or JSON)
$data_format = TeraWurflRemoteClient::$FORMAT_JSON;

// Call the remote service (the first parameter is the User Agent - leave it as null
// to let TeraWurflRemoteClient find the user agent from the server global variable)
$wurflObj->getCapabilitiesFromAgent(null, $capabilities, $data_format);

// Use the results to serve the appropriate interface
if ($wurflObj->getDeviceCapability("is_tablet") || !$wurflObj->getDeviceCapability("is_wireless_device") || $_GET["ver"] == "desktop") {
    header('Location: http://website.pl/'); // default index file
} else {
    header('Location: http://m.website.pl/'); // where to go
}
?>
And here is the relevant part of TeraWurflRemoteClient.php that is being included. It has an optional timeout argument, as mentioned in the documentation:
// The timeout in seconds to wait for the server to respond before giving up
$timeout = 1;
The TeraWurflRemoteClient class has a timeout property, and it is 1 second by default, as I see in the documentation.
So this script won't execute for longer than a second.
Try achieving this by setting a very short timeout on the HTTP request to TeraWurfl inside their class, so that if the response doesn't come back in like 2-3 secs, consider the check to be false and show the full website.
The place to look for setting a shorter timeout might vary depending on the transport you use to make your HTTP request. Like in Curl you can set the timeout for the HTTP request.
After this do reset your HTTP request timeout back to what it was so that you don't affect any other code.
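For illustration only, here is a minimal sketch of the kind of short-timeout cURL request meant here; the URL and timeout values are placeholders, not the actual internals of TeraWurflRemoteClient:

<?php
// Minimal short-timeout HTTP request with cURL; fall back to the full site on failure.
$ch = curl_init('http://my-Tera-WURFL-install.pl/webservicep.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 1); // give up connecting after 1 second
curl_setopt($ch, CURLOPT_TIMEOUT, 2);        // abort the whole request after 2 seconds
$body = curl_exec($ch);
curl_close($ch);

if ($body === false) {
    // Timed out or otherwise failed: skip device detection and serve the regular website.
    header('Location: http://website.pl/');
    exit;
}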
Also, I found this while researching the topic; you might want to give it a read, though I would say stay away from forking unless you are very well aware of how things work.
And just now Adelf posted that the TeraWurflRemoteClient class has a timeout of 1 sec by default, so that solves your problem, but I will post my answer anyway.

Twitter API Call Failing Intermittently

I'm using PHP to display the most recent tweet from a user. This is in WordPress. This works most of the time - but sometimes, I get this error:
file_get_contents(http://api.twitter.com/1/statuses/user_timeline/[username].json) [function.file-get-contents]: failed to open stream: HTTP request failed! HTTP/1.1 400 Bad Request in [...]/twitter.php on line 47
I'm absolutely certain that I'm not going over the Twitter API limit, because even if my caching code is flawed, no one else can see this - it's hosted locally - and there's no way I viewed the page 150 times in an hour. I've tested that the username and database entries are indeed being retrieved. This is my code:
<?php
function twitter($username) {
    $tweet = '';
    echo $username;
    if (!get_option('twitter_last_updated')) {
        $format = 'json';
        $tweet_raw = file_get_contents("http://api.twitter.com/1/statuses/user_timeline/{$username}.{$format}");
        $tweet = json_decode($tweet_raw);
        add_option('twitter_last_updated', time(), "", "yes");
        add_option('twitter_last_updated_author', $username, "", "yes");
        add_option('twitter_last_updated_data', $tweet_raw, "", "yes");
    } elseif (time() - get_option('twitter_last_updated') > 30 || get_option('twitter_last_updated_author') != $username) {
        $format = 'json';
        $tweet_raw = file_get_contents("http://api.twitter.com/1/statuses/user_timeline/{$username}.{$format}");
        $tweet = json_decode($tweet_raw);
        update_option('twitter_last_updated', time());
        update_option('twitter_last_updated_author', $username);
        update_option('twitter_last_updated_data', $tweet_raw);
    } else {
        $tweet = json_decode(get_option('twitter_last_updated_data'));
    } ?>
    <!-- display the tweet -->
<?php } ?>
I would really appreciate some help with this. I feel totally stumped.
First, you should not be using file_get_contents to retrieve "files" over the network; you should use cURL. It could be just system response delays, or Twitter issuing a redirect for load balancing. file_get_contents doesn't follow redirects and basically times out immediately. cURL can be set to follow redirects and adheres to the network timeout (1 minute, I think) if no timeout is specified. Most importantly, cURL can tell you why it failed.
How often are you calling the function? If I remember correctly, Twitter recently changed the maximum number of calls per hour from ~150 to 75. You might want to cache the results, so as not to use up your allowance.
See this slashdot story: Twitter Throttling hits 3rd party apps
Why are you not using the WordPress HTTP API? This is exactly what it was designed for - a wrapper for working with HTTP using standard WordPress functions, regardless of platform or set-up.
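For what it's worth, a rough sketch of the same fetch using the WordPress HTTP API (wp_remote_get and friends are standard WordPress functions; the timeout value and the fallback-to-cache logic just mirror what the question's code already stores in options):

<?php
// Rough sketch: fetch the timeline with the WordPress HTTP API instead of file_get_contents.
$response = wp_remote_get(
    "http://api.twitter.com/1/statuses/user_timeline/{$username}.json",
    array('timeout' => 5)
);

if (is_wp_error($response) || wp_remote_retrieve_response_code($response) != 200) {
    // Request failed (e.g. 400 Bad Request): fall back to the cached copy.
    $tweet = json_decode(get_option('twitter_last_updated_data'));
} else {
    $tweet_raw = wp_remote_retrieve_body($response);
    $tweet = json_decode($tweet_raw);
    update_option('twitter_last_updated_data', $tweet_raw);
}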
I wrote something like what you have, and it kept failing about every 3 requests. The solution was to build up a little cache system and put @'s on the file_get_contents calls to keep PHP from throwing errors back to users.
When Twitter fails, and it will fail a lot, you just fetch data from that previously built cache.
I also don't recommend fetching this on the fly; it might slow down the whole page build due to Twitter issues.
