I have a page listing a few records from the database.
After upgrading to PHP 5.3, the page printing a long list of records is no longer displayed; Internet Explorer says "Connection was reset".
I changed the SQL query in the code to limit the number of records, and then the page was shown correctly.
So it seems some kind of timeout is in effect.
I tried to find relevant settings in php.ini and httpd.conf and changed everything that sounded timeout-related, but nothing changed.
Any idea how to make it work?
EDIT
The page resets after ~2 seconds, so no extremely long run time is involved.
EDIT-2
I've tried setting the PHP variables max_execution_time, max_input_time and memory_limit.
WAMPServer 2 (PHP 5.3, Apache 2.2.11)
At the top of your .php file, insert something like:
set_time_limit(120);
That sets the timeout for the script to 2 minutes. Increase it as needed.
I would recommend avoiding this problem altogether by paginating your results; otherwise you're opening yourself up to a world of trouble. Slow pages are an open door to a denial-of-service attack by resource exhaustion.
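For example, a minimal pagination sketch (using PDO; $pdo, the table and the column names are placeholders -- adapt it to however the page currently queries the database):
// Minimal pagination sketch -- $pdo, table and column names are placeholders.
$perPage = 50;
$page    = isset($_GET['page']) ? max(1, (int) $_GET['page']) : 1;
$offset  = ($page - 1) * $perPage;

$stmt = $pdo->prepare('SELECT id, name FROM records ORDER BY id LIMIT :limit OFFSET :offset');
$stmt->bindValue(':limit',  $perPage, PDO::PARAM_INT);
$stmt->bindValue(':offset', $offset,  PDO::PARAM_INT);
$stmt->execute();

foreach ($stmt as $row) {
    echo htmlspecialchars($row['name']), "<br>\n";
}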
Related
A basic page with just session_start(); loads just fine, but once I've set something, for example $_SESSION['pet']="dog";, the page load time is around 5 seconds.
I'm using AWS's memcached server, and the connection time to it from the EC2 instance is really fast. I'm not sure where the slowdown is coming from.
The session.save_handler is set to memcached and session.save_path is set to xxx.cfg.use1.cache.amazonaws.com:11211
phpinfo() also displays the registered save handlers as files user memcache memcached
EDIT:
I uploaded test files to demonstrate the issue. The first file is simply session_start(); print_r($_SESSION); (http://rr915webapi.us-east-1.elasticbeanstalk.com/session.php). The second file is session_start(); $_SESSION['pet']="dog"; $_SESSION['name']="bob"; (http://rr915webapi.us-east-1.elasticbeanstalk.com/session-set.php). After you load the second file, you can see the first one takes a while longer to load than it did initially.
By setting the following in the PHP ini file, the response time was reduced down to milliseconds.
session.lazy_write = 0
memcached.sess_locking = Off
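If editing php.ini globally isn't convenient, something like this at the top of the script may also work (a sketch only, assuming both directives are changeable at runtime in your PHP/memcached build; if they aren't, php.ini is the way):
// Set the directives per request, before the session is started.
ini_set('session.lazy_write', '0');
ini_set('memcached.sess_locking', '0');
session_start();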
Some possibilities:
If the PHP server running your code and your memcached server (cfg.use1.cache.amazonaws.com) are hosted in different regions, that alone could explain the delay.
There also seems to be a bug in libmemcached 1.0.16; updating to 1.0.18 fixes the problem, see the comments at https://github.com/iuscommunity/wishlist/issues/143 and https://bugs.launchpad.net/libmemcached/+bug/1589344
We have an uploader which takes customer data, maps it to our database structure and then uploads it to the database. As part of the uploader, we load all of the uploaded data into memory.
The issue we are having is that "large" datasets seem to cause session problems (the current failing example is around 800 KB). The code base checks the contents of the session on every page load to ensure the user is logged in and valid; when performing this upload and things go awry, that check fails upon form submission. Through troubleshooting we discovered that when the issue occurs the session is emptied, hence the check fails and the user is logged out.
The code works fine with the 800 KB dataset in development and also works fine on live with a dataset of around 100 KB. This points to the code being OK and the issue being environmental.
Session code is:
// If the session hasn't been started yet.
if (session_status() == PHP_SESSION_NONE)
{
// Start the session.
session_start();
}
and also then in the file itself:
$_SESSION['csv_data'] = array_map('str_getcsv', file($file_path));
and finally MUCH later (doesn't get here):
// Remove the session data.
unset($_SESSION['csv_data']);
We tried increasing memory_limit in PHP to 1 GB and even 10 GB to no avail; likewise we raised max_filesize and post_max_size to 512 MB each and the same thing happens (we restarted both httpd and PHP-FPM).
The server is quad-core with 32 GB RAM and 4 x 256 GB SSDs in RAID 10; at no point does free memory drop below 28 GB, and both uptime and top show low usage, so it doesn't seem to be a lack of traditional resources. It is running CentOS 6.9 64-bit and Apache 2.4.9 with PHP 5.6.33.
Is there anything else on the server which could be setting a limit on the (PHP) session/memory size? What else can we try to figure out what might be causing this?
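For reference, a rough diagnostic we could add right after the CSV is loaded, to log how large the serialized session actually is (the error_log() destination is whatever the server already uses):
$_SESSION['csv_data'] = array_map('str_getcsv', file($file_path));
// Log the serialized size of the whole session for comparison against any suspected limit.
error_log('Serialized session size: ' . strlen(session_encode()) . ' bytes');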
I have an Apache 2.2 / PHP 5.3 / MySQL 5.5 application. A form on page1.php accepts user input. The values are passed to page2.php using GET. The PHP script on page2.php runs a MySQL query and shows the results. Depending on the user input parameters, the query may run from 3 to 900 seconds.
In my tests, results from any query that runs < 300 seconds are shown OK. Longer-running queries complete OK on the server (I see CPU load go from 90% to 0% after, say, 500 seconds), but the browser does not show the result and keeps showing "Transferring data from my.host.org ..." in the status bar.
At this point, when I try to open any page of my application in a new instance of the same browser (Firefox), it says "Connecting..." in the tab header and "Waiting for my.host.org ..." in the status bar. Opening any page of my application in another browser (IE) at this time works OK.
Below are the settings I have changed/set so far, but they did not help. Any ideas would be appreciated. Thank you.
apache2.conf:
Timeout 300 -> 1800
php.ini:
user_ini.cache_ttl = 300 -> 1800
max_execution_time = 30 -> 1800
default_socket_timeout = 60 -> 1800
mysql.connect_timeout = 60 -> 1800
page2.php:
ignore_user_abort(1);
Considering this is a long running query, do you really need to run it in the browser window?
You should consider using a cron job and redirecting the output to an e-mail address; PHP processes running in the CLI have, by default, unlimited execution time.
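For example, a hypothetical crontab entry (the path and address are placeholders) that runs the script nightly and mails whatever it prints:
# Run the query script at 2 a.m. and mail its output
0 2 * * * /usr/bin/php -f /path/to/page2-query.php | mail -s "Query results" admin@example.com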
As my last piece of help: if you're running PHP as (F)CGI, you may have to change the CGI setting so it waits for output from the PHP process a bit longer.
I am running a huge import to my database (about 200k records) and I'm having a serious issue with my import script timing out. I used my cell phone as a stopwatch and found that it times out at exactly 45 seconds on every pass (internal server error)... it only does about 200 records at a time, sometimes fewer. I scanned my phpinfo() and nothing is set to 45 seconds, so I am clueless as to why it would be doing this.
My max_execution_time is set to 5 minutes and my max_input_time is set to 60 seconds. I also tried setting set_time_limit(0); ignore_user_abort(1); at the top of my page but it did not work.
It may also be helpful to note that my error file reads: "Premature end of script headers" as the execution error.
Any assistance is greatly appreciated.
I tried all the solutions on this page and, of course, running from the command line:
php -f filename.php
as Brent says is the sensible way round it.
But if you really want to run a script from your browser that keeps timing out after 45 seconds with a 500 internal server error (as I found when rebuilding my phpBB search index) then there's a good chance it's caused by mod_fcgid.
I have a Plesk VPS and I fixed it by editing the file
/etc/httpd/conf.d/fcgid.conf
Specifically, I changed
FcgidIOTimeout 45
to
FcgidIOTimeout 3600
3600 seconds = 1 hour. Should be long enough for most but adjust upwards if required. I saw one example quoting 7200 seconds in there.
Finally, restart Apache to make the new setting active.
apachectl graceful
HTH someone. It's been bugging me for 6 months now!
Cheers,
Rich
It's quite possible that you are hitting an enforced resource limit on your server, especially if the server isn't fully under your control.
Assuming it's some type of Linux server, you can see your resource limits with ulimit -a on the command line; ulimit -t will show just the limit on CPU time.
If your CPU time is limited, you might have to process your import in batches.
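A rough sketch of batching under a CPU-time cap (the paths and time budget are placeholders): each run processes rows until its budget is used, records where it stopped, and a later run picks up from there.
// Resumable batch import sketch -- paths and time budget are placeholders.
$timeBudget = 30;                          // seconds of work per run
$offsetFile = '/tmp/import_offset.txt';
$offset     = is_file($offsetFile) ? (int) file_get_contents($offsetFile) : 0;
$start      = time();

$rows = array_map('str_getcsv', file('/path/to/import.csv'));
for ($i = $offset; $i < count($rows) && (time() - $start) < $timeBudget; $i++) {
    // insert $rows[$i] into the database here
}

file_put_contents($offsetFile, $i);
// Re-run the script (e.g. from cron) until the stored offset reaches count($rows).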
First, you should be running the script from the command line if it's going to take a while. At the very least, your browser would time out after 2 minutes if it receives no content.
php -f filename.php
But if you need to run it from the browser, try adding header("Content-Type: text/html"); before the import kicks off.
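For instance, something along these lines (a sketch only; the import loop itself is whatever you already have):
// Send a header and a little output early so the browser isn't waiting in silence.
header('Content-Type: text/html; charset=utf-8');
echo "Import started...<br>\n";
flush(); // if output buffering is enabled, ob_flush() may be needed as well

// ...and inside the import loop, emit a progress line every so often:
// echo "Processed $count rows<br>\n";
// flush();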
If you are on a shared host, it's possible there are restrictions on the system whereby long-running queries and/or scripts are automatically killed after a certain length of time. These restrictions are generally loosened for scripts not run through the web server, so running it from the command line would help.
The 45 seconds could be a coincidence; it could be how long it takes for you to reach the memory limit. Increasing the memory limit would look like this:
ini_set('memory_limit', '256M');
It could also be the actual DB connection that is timing out. Which DB server are you using?
For me, mssql times out with an extremely unhelpful error, "Database context changed", after 60 seconds by default. To get around this, you do:
ini_set('mssql.timeout', 60 * 10); // 10 min
First of all, max_input_time and set_time_limit(0) will only work on a VPS or a dedicated server. Instead, you can structure your implementation along these lines:
First, read the whole CSV file.
Then grab only 10 entries (rows) or fewer and make an AJAX call to import them into the DB.
Keep calling the AJAX endpoint, 10 entries at a time, and echo something to the browser after each batch; this way your script will never time out.
Repeat until all of the CSV rows have been processed (a rough sketch follows below).
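A rough server-side sketch of that idea (the file name, parameters and insert logic are placeholders; the client-side JavaScript simply keeps calling this endpoint with the next offset until it reports done):
// import_chunk.php -- hypothetical endpoint: imports up to 10 CSV rows per call.
$chunkSize = 10;
$offset    = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;

$rows  = array_map('str_getcsv', file('/path/to/upload.csv'));
$slice = array_slice($rows, $offset, $chunkSize);

foreach ($slice as $row) {
    // insert $row into the database here
}

// Tell the caller where to continue, or that the import is finished.
echo json_encode(array(
    'next' => $offset + count($slice),
    'done' => ($offset + count($slice)) >= count($rows),
));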
I have an upload form that uploads MP3s to my site. I have intermittent issues with some users, which I suspect are down to slow upload connections...
But anyway, the first line of code is set_time_limit(0);, which did fix it for SOME users whose connections were taking a while to upload, but some are still getting timed out and I have no idea why.
It says the script has exceeded the execution limit of 60 seconds. The script has no loops, so it's not like it's some kind of infinite loop.
The weird thing is that no matter what code is on the first line, it always says "error on line one, two, etc.", even if it's set_time_limit(0);. I tried erasing it, and the very first line of code always seems to be the error; it doesn't even give me a hint as to why it can't execute the PHP page.
This is an issue only a few users are experiencing, and no one else seems to be affected. Could anyone throw out some ideas as to why this could be happening?
set_time_limit() will only affect the actual execution of the PHP code on the page. You want to set the PHP directive max_input_time, which controls how long the script will accept input (like file uploads) for. The catch is that you need to set this in php.ini, because if the default max_input_time is exceeded, the request never reaches the script that is attempting to change it with ini_set().
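For example, in php.ini (the value is a placeholder; max_input_time cannot be raised from within the script once the request has already started):
; php.ini
max_input_time = 300    ; seconds PHP will spend reading the request, including file uploads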
Sure, there are a couple of things noted in the PHP manual.
Make sure PHP is not running in safe mode; set_time_limit() has no effect when PHP is running in safe_mode.
Second, and this is where I assume your problem lies.....
Note: The set_time_limit() function and the configuration directive max_execution_time only affect the execution time of the script itself. Any time spent on activity that happens outside the execution of the script such as system calls using system(), stream operations, database queries, etc. is not included when determining the maximum time that the script has been running. This is not true on Windows where the measured time is real.
So your stream may be the culprit.
Can you post a little of your upload script? Are you calling a separate file to handle the upload using headers?
Try ini_set('max_execution_time', 0); instead.