I have a long-running Laravel process that generates a report. When selecting long date ranges, I was getting a redirect back to the same URL after approximately 100 seconds. I changed the code to this:
set_time_limit(20);
while (1) {
    $var = 3 + 4 / 11;
}
It runs for 20 seconds, then redirects to the same URL. I'd like to add that I have two routes: a GET route and a POST route. The timeout happens on the POST route.
I've tried
set_time_limit(0);
but it didn't make a difference. I've turned on debugging, but it shows nothing. Any help is appreciated.
EDIT: I am running PHP 5.4.x, so it's not safe mode.
EDIT: Here is the controller - http://laravel.io/bin/WVdVz, and here is the last code that is supposed to execute - http://laravel.io/bin/aa2GW.
EDIT: The error handling library, Whoops, catches and logs timeout errors. My logs are clean. This has something to do with how Laravel is treating responses after my _download function...
After a lot of debugging, I figured it out: Apache was timing out. Apparently, when Apache times out, it returns a 500 response code. Apparently (again), when a browser gets a 500 response code to a POST request, it resends it as a GET request. I wrote it up in more detail here: http://blog.voltampmedia.com/2014/09/02/php-apache-timeouts-post-requests/
To be clear, it's not a Laravel issue. Do note that the Whoops library does capture the timeout error.
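If the report genuinely needs more time than Apache allows, the usual mitigation is to raise Apache's Timeout directive. A sketch - the 600-second value is an assumption; size it to your longest report:
# httpd.conf or the relevant vhost (sketch - 600s is an illustrative value)
Timeout 600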
I am currently building a small application that uses the System V message queue functions built into PHP.
I have one "server" process and one "client" process. Messages flow from server to client.
They are simple JSON objects that are serialised, then sent.
This code is used:
<?php
// $q is the message queue resource returned by msg_get_queue()
// MESSAGE_TYPE_EXECUTION is the integer 1
// $update is the JSON string
// 4th parameter (true): the JSON string is serialised
// 5th parameter (false): the send is non-blocking
// $error gets filled with an error code when msg_send() fails (see below)
$send = msg_send($q, MESSAGE_TYPE_EXECUTION, $update, true, false, $error);
if (isset($error) && $error != 0) {
    echo 'Execution error: ' . $error . PHP_EOL;
}
This works without issue, until it does not.
Sometimes after a couple of minutes, sometimes after a couple of hours, the following error appears:
PHP Warning: msg_send(): msgsnd failed: Resource temporarily unavailable in
/var/www/server.php on line 57
The value of the $error variable is the integer 11.
All messages that follow this error also fail with error 11, until I restart the process and all is working again (for a while, until the same error appears again).
I have been searching but cannot find any explanation of what error 11 is, or how it can be managed and fixed without restarting the process.
Any clue, information, example, etc. is welcome. I would really like server.php to be reliable.
-- edit --
client.php is the process that fetches the messages (which are all more or less the same, but with different values).
It uses this to fetch the messages from the queue (filled in server.php):
<?php
// Note: the flags must be combined with the bitwise OR operator; the original
// `MSG_IPC_NOWAIT && MSG_NOERROR` evaluates to the boolean true instead.
$update = msg_receive($q, 0, $messagetype, 1024, $message, true, MSG_IPC_NOWAIT | MSG_NOERROR, $error);
if ($update) {
    // Do stuff
}
usleep(1000000); // poll roughly once per second
I have not yet checked memory usage; I will look into that.
Platform used
PHP 7.1.3
CentOS 7
So, the solution was found after some information and leads brought up by @ChrisHaas (read the comments on my original question - thanks again!). After some tinkering, all is running smoothly now, without error 11 from msg_send().
PHP's msg_send() is basically a wrapper around the msgsnd system call,
so a lot of information can be found in its man page, including the errors you might encounter (in combination with the flags used when reading messages with msg_receive()).
The queue is limited in total size and in the total number of messages it can hold. (I, however, have not found a way to increase the total size of the queue; on Linux the per-queue byte limit is governed by the kernel.msgmnb sysctl.)
The reason I was getting error 11 was due to a couple of things:
The client I created was too slow fetching messages from the queue, causing the queue to run into its maximum limit and crap out. I did not find a way to recover from this situation other than restarting all processes involved, only to repeat the same cycle over and over again.
I also increased the read size in msg_receive(), as some messages were big (most were small). When you declare too small a max_size, the big messages remain in the queue and clog it up until it craps out. Increasing max_size helped with fetching the bigger messages too.
Long story short: from my perspective, error 11 is related to a full message queue. (Errno 11 is EAGAIN, "Resource temporarily unavailable", which msgsnd returns when the queue is full and the send is non-blocking - consistent with the behaviour above, though I still have not found a 100% clear documented answer.)
Pointers to fix the issue:
Be sure you fetch all messages, including the big ones (use a large enough max_size).
Be sure to read messages out at least as fast as you send them into the queue.
Check your queue(s) with the command ipcs -q in the terminal. It shows the queues currently active; keeping an eye on it lets you watch a queue slowly fill up when something is wrong.
I wish the documentation on php.net were better in this case...
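As a sketch of the pointers above (the queue key, backlog threshold and max_size are illustrative assumptions, not values from the original code):
<?php
// Drain the queue as fast as possible with a generous max_size, and use
// msg_stat_queue() to watch the backlog - the same signal `ipcs -q` gives you.
$q = msg_get_queue(0x1234); // hypothetical queue key
$maxSize = 65536;           // large enough for the biggest expected message

while (true) {
    $stat = msg_stat_queue($q);
    if ($stat !== false && $stat['msg_qnum'] > 1000) {
        fwrite(STDERR, "Queue backlog: {$stat['msg_qnum']} messages\n");
    }
    if (msg_receive($q, 0, $type, $maxSize, $message, true, MSG_IPC_NOWAIT, $error)) {
        // Do stuff with $message
    } else {
        usleep(10000); // queue empty: back off briefly instead of busy-waiting
    }
}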
I need to make the acquaintance of SOAP, and wrote a simple client connecting to some random web service. (Turns out even finding a working service is a bit of a hassle.)
The code I have so far seems to work - but here's the thing: it only works once every ten seconds.
When I first load the page, it shows the result I expect - a var_dump of an object - but when I reload the page right after that, all I see is "Error Fetching http headers". No matter how many times I refresh, it takes around ten seconds until I get the right result again, and then the process repeats - refresh too quickly, get an error.
I can't see what's going on at the HTTP level, and even if I could, I'm not sure I'd be able to draw the right conclusions.
Answers to similar questions posted here include setting the keep_alive option to false, or extending the default_socket_timeout, but neither solution worked for me.
So, long story short: is this an issue on the service's end or a problem I can remedy, and if it's the latter, how?
Here's the code I got so far:
<?php
error_reporting(-1);
ini_set("display_errors", true);
ini_set("max_execution_time", 600);
ini_set('default_socket_timeout', 600);
$wsdl = "http://api.chartlyrics.com/apiv1.asmx?WSDL";
try
{
$client = new SoapClient($wsdl, array(
"keep_alive" => false,
"trace" => true
));
$response = $client->SearchLyricDirect(array(
"artist" => "beatles",
"song" => "norwegian wood"
));
var_dump($response);
}
catch (Exception $e)
{
echo $e->getMessage();
}
?>
Any help would be appreciated. (And as a bonus, if you could enlighten me as to why saving the WSDL locally speeds the process up by 30 seconds, that'd be great as well. I assume it's the DNS lookup that takes so much time?)
As it turns out, the connection to the server as a whole is rather shaky.
I (and a few others I've asked to) had similar issues just trying to open the WSDL file in a browser - it works the first time, but refreshing somehow aborts the connection for a good ten seconds.
Though I really can't say what its problem is, this does strongly suggest that the fault lies with the server, not my client.
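If you have to keep consuming the flaky service anyway, one workaround (a sketch, not part of the original answer; the retry count and timeouts are arbitrary) is a short connection timeout plus a retry loop:
<?php
$client = new SoapClient("http://api.chartlyrics.com/apiv1.asmx?WSDL", array(
    "keep_alive"         => false,
    "connection_timeout" => 5, // seconds to wait for the TCP connection
    "trace"              => true
));

$response = null;
for ($attempt = 1; $attempt <= 3; $attempt++) {
    try {
        $response = $client->SearchLyricDirect(array(
            "artist" => "beatles",
            "song"   => "norwegian wood"
        ));
        break; // success, stop retrying
    } catch (SoapFault $e) {
        sleep(2); // give the shaky server a moment before the next attempt
    }
}
var_dump($response);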
Well, this isn't true - I'm sure there's a reason, but I can't find it!!
I have a script that can take around 10 minutes to execute. It does a lot of communicating with an API on a service that we use; it pulls a bit of a fingerprint of everything every 24 hours, so what exactly it does is beside the point. The problem I'm finding is that the script stops executing somewhat randomly!!
I can't find any errors that would cause my script to stop executing, even with
//for debugging
error_reporting(E_ALL);
ini_set('display_errors', '1');
on for debugging, it's all clean. I've also used
set_time_limit(0);
so that it shouldn't ever time out.
With that said, I'm not sure how to get any more debug info to figure out why it's stopping. I can say that the script should NOT be hitting any memory limits or anything - that should throw an error, and I've gone through and cleaned this script up as much as I can.
So my question is: what are common causes for a cron job ending when it shouldn't? How can I debug this more effectively?
You could try using register_shutdown_function() to define a code block that will execute when the script shuts down. Then update a variable at the main code execution points in the cron with details of what is going on. In the shutdown function, write that variable into a log and check the log to see what state the program was in when it stopped. Of course, this is based on the assumption that your code is not totally erroring out.
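A minimal sketch of that idea (the checkpoint names and log path are placeholders):
<?php
// Record progress in a variable and dump it, plus any fatal error,
// when the script shuts down for whatever reason.
$lastCheckpoint = 'startup';

register_shutdown_function(function () use (&$lastCheckpoint) {
    $error = error_get_last(); // also catches fatal errors that skip normal handlers
    $line  = date('c') . ' shutdown at checkpoint: ' . $lastCheckpoint;
    if ($error !== null) {
        $line .= " | last error: {$error['message']} in {$error['file']}:{$error['line']}";
    }
    file_put_contents('/tmp/cron-shutdown.log', $line . PHP_EOL, FILE_APPEND);
});

$lastCheckpoint = 'calling API';
// ... long-running API work ...
$lastCheckpoint = 'done';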
You could also redirect the standard echo statements and logs into a log file by using
/path/to/cron.php > /path/to/log.txt 2>&1
2>&1 indicates that standard error (2) is redirected to the same file descriptor pointed to by standard output (&1). So both standard output and standard error will be redirected to /path/to/log.txt.
UPDATE:
Below is a function/flow that I usually use in my crons:
function addLog($msg)
{
    if (empty($msg)) return;

    // append the message to the log file, one line per entry
    $handle = fopen('log.txt', 'a');
    fwrite($handle, $msg . "\r\n");
    fclose($handle);
}
Then I use it like so:
addLog("Initializing...");
init();
addLog("Finished initializing...");
addLog("Calling blah-blah API...");
$result = callBlahBlah();
addLog("blah-blah API returned value " . $result);
It is more tedious to have all these logs, but when the cron messes up, it really helps!
For example, when you look at your log.txt and see something like:
Initializing...
Finished initializing...
Calling blah-blah API...
and there is no entry saying "blah-blah API returned value", then you know that the call to the blah-blah API messed up.
What are common causes for a cron ending when it shouldn't?
The most common in my experience is that the cron user has different permissions or different environment variables than the way that you're executing it from the command line.
Make your cronned program dump its environment to a temporary file and see if it's what you expect.
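For example (a sketch), a temporary crontab entry that dumps cron's environment so you can diff it against the output of env in your interactive shell:
# temporary crontab entry - remove it once you've compared the output
* * * * * /usr/bin/env > /tmp/cron-env.txt 2>&1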
I have a simple script that redirects to the mobile version of a website if it detects that the user is browsing on a mobile phone. It uses the Tera-WURFL web service to accomplish that, and it will be hosted separately from Tera-WURFL itself. I want to protect it in case of Tera-WURFL hosting downtime. In other words, if my script takes more than a second to run, it should stop executing and just redirect to the regular website. How do I do that effectively (so that the CPU is not overly burdened by the script)?
EDIT: It looks like the TeraWurflRemoteClient class has a timeout property. Read below. Now I need to find out how to use it in my script, so that it redirects to the regular website in case of this timeout.
Here is the script:
<?php
// Instantiate a new TeraWurflRemoteClient object
$wurflObj = new TeraWurflRemoteClient('http://my-Tera-WURFL-install.pl/webservicep.php');
// Define which capabilities you want to test for. Full list: http://wurfl.sourceforge.net/help_doc.php#product_info
$capabilities = array("product_info");
// Define the response format (XML or JSON)
$data_format = TeraWurflRemoteClient::$FORMAT_JSON;
// Call the remote service (the first parameter is the User-Agent - leave it null to let TeraWurflRemoteClient read it from the server global variable)
$wurflObj->getCapabilitiesFromAgent(null, $capabilities, $data_format);
// Use the results to serve the appropriate interface
if ($wurflObj->getDeviceCapability("is_tablet") || !$wurflObj->getDeviceCapability("is_wireless_device") || $_GET["ver"] == "desktop") {
    header('Location: http://website.pl/'); // default index file
} else {
    header('Location: http://m.website.pl/'); // where to go
}
exit; // always stop execution after sending a Location header
?>
And here is the source of TeraWurflRemoteClient.php that is being included. It has an optional timeout argument, as mentioned in the documentation:
// The timeout in seconds to wait for the server to respond before giving up
$timeout = 1;
The TeraWurflRemoteClient class has a timeout property, and it is 1 second by default, as I see in the documentation.
So this script won't execute for longer than a second.
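Building on that, a sketch of the fallback (assumption: the client throws an exception or returns no capabilities when the 1-second timeout is hit - check the class source for its exact failure behaviour):
<?php
try {
    $wurflObj = new TeraWurflRemoteClient('http://my-Tera-WURFL-install.pl/webservicep.php');
    $wurflObj->getCapabilitiesFromAgent(null, array("product_info"), TeraWurflRemoteClient::$FORMAT_JSON);
    $isMobile = $wurflObj->getDeviceCapability("is_wireless_device")
        && !$wurflObj->getDeviceCapability("is_tablet")
        && $_GET["ver"] != "desktop";
} catch (Exception $e) {
    $isMobile = false; // webservice down or too slow: fall back to the regular website
}
header('Location: ' . ($isMobile ? 'http://m.website.pl/' : 'http://website.pl/'));
exit;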
Try achieving this by setting a very short timeout on the HTTP request to Tera-WURFL inside their class, so that if the response doesn't come back within 2-3 seconds, you consider the check to be false and show the full website.
The place to look for setting a shorter timeout may vary depending on the transport you use to make your HTTP request; in cURL, for example, you can set a timeout on the HTTP request.
After this do reset your HTTP request timeout back to what it was so that you don't affect any other code.
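With cURL the timeouts are per-handle, so nothing global needs resetting. A generic sketch (the endpoint URL and thresholds are examples, not the class's actual internals):
<?php
$ch = curl_init('http://my-Tera-WURFL-install.pl/webservicep.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2); // seconds to establish the connection
curl_setopt($ch, CURLOPT_TIMEOUT, 3);        // seconds for the entire request
$body = curl_exec($ch);
curl_close($ch);

if ($body === false) {
    // Timed out or failed: consider the check false and show the full website
    header('Location: http://website.pl/');
    exit;
}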
Also, I found this while researching the topic; you might want to give it a read, though I would say stay away from forking unless you are very well aware of how it works.
And just now Adelf posted that the TeraWurflRemoteClient class has a timeout of 1 second by default, so that solves your problem - but I will post my answer anyway.
I have noticed a few websites, such as hypem.com, show a "You didn't get served" error message when the site is busy, rather than just letting people wait, time out, or refresh - which would aggravate what is probably a server-load issue.
We are too loaded to process your request. Please click "back" in your
browser and try what you were doing again.
How is this achieved before the server becomes overloaded? It sounds like a really neat way to manage user expectations if a site happens to get overloaded, while also giving the site time to recover.
Another option is this:
$load = sys_getloadavg();
if ($load[0] > 80) {
    header('HTTP/1.1 503 Too busy, try again later');
    die('Server too busy. Please try again later.');
}
I got it from PHP's site http://php.net/sys_getloadavg. The three values sys_getloadavg() returns are the system load averages over the last 1, 5 and 15 minutes, so $load[0] is the 1-minute average. Note that a load average is not a percentage: a threshold of 80 only makes sense on a machine with a great many cores, so tune the number to your hardware.
You could simply create a 500.html file and have your webserver use that whenever a 50x error is thrown.
E.g. in your Apache config:
ErrorDocument 500 /errors/500.html
Or use a PHP shutdown function to check whether the request timeout (which defaults to 30s) has been reached, and if so, redirect to or render something static (so that rendering the error itself cannot cause problems).
Note that most sites where you'll see a "This site is taking too long to respond" message are effectively generating that message with JavaScript.
This may be to do with the database connection timing out, but that assumes your server has a bigger DB load than CPU load when times get tough. If that is the case, you can make your DB connector show the message if no connection is established within 1 second.
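A minimal sketch of that idea, assuming mysqli (the credentials are placeholders):
<?php
// Fail fast if the database is the bottleneck
$db = mysqli_init();
$db->options(MYSQLI_OPT_CONNECT_TIMEOUT, 1); // give up after 1 second
if (!@$db->real_connect('localhost', 'user', 'pass', 'mydb')) {
    header('HTTP/1.1 503 Service Unavailable');
    die('We are too loaded to process your request. Please try again later.');
}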
You could also use a quick query to the logs table to find out how many hits/second there are and automatically not respond to any more after a certain point in order to preserve QOS for the others. In this case, you would have to set that level manually, based on server logs. An alternative method can be seen here in the Drupal throttle module.
Another alternative would be to use the Apache status page to get information on how many child processes are free, and to throttle if there are none left, as per @giltotherescue's answer to this question.
You can restrict the maximum number of connections in the Apache configuration too...
Refer to:
http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxclients
http://www.howtoforge.com/configuring_apache_for_maximum_performance
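For instance, with the prefork MPM in Apache 2.2 (a sketch - the numbers are illustrative and should be sized to the RAM each child process uses):
# httpd.conf (Apache 2.2, prefork MPM)
<IfModule mpm_prefork_module>
    ServerLimit 150
    MaxClients  150
</IfModule>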
This is not a strictly PHP solution, but you could do like Twitter, i.e.:
serve a mostly static HTML and JavaScript app from a CDN or another server of yours
the calls to the actual heavy work server-side (PHP in your case) functions/APIs are actually done in AJAX from one of your static JS files
so you can set a timeout on your AJAX calls and return a "Seems like loading tweets may take longer than expected"-like notice.
You can use PHP's tick functions to detect when a page hasn't finished loading after a specified amount of time, then display an error message. Basic usage:
<?php
$connection = false;

function checkConnection($connectionWaitingTime = 3)
{
    // check connection & time
    global $time, $connection;
    if (($t = (time() - $time)) >= $connectionWaitingTime && !$connection) {
        echo "<p> Server not responding for <strong>$t</strong> seconds !! </p>";
        die("Connection aborted");
    }
}

register_tick_function("checkConnection");

$time = time();
declare (ticks=1)
{
    // while(true); // uncomment to simulate a loaded server
    require 'yourapp.php'; // load your main app logic
    $connection = true;
}
The commented-out while(true) is just to simulate a loaded server. To implement the script in your site, you need to remove the while statement and add your page logic, e.g. a dispatched event or front controller action.
And the $connectionWaitingTime in the checkConnection function is set to time out after 3 seconds, but you can change that to whatever you want.