I have a strange problem with a Sonata/Symfony project. Randomly, the site becomes unavailable. It seems that one process is blocking all other requests from any other user / client. The webserver / database are not involved; other projects run fine. After a few minutes the site is available again. I already set max_execution_time to 60s and minimized other values. The problem keeps coming back randomly and no error is written to the logs.
Do you have any ideas how I can try to solve this problem?
Thank you very much.
Set the PHP execution time parameter (max_execution_time) to the minimum you can get away with.
In general we have around 2 requests / second. However, after we pushed a notification to 3000 users, we suddenly got up to 120 requests / second. Unfortunately around half of those users were getting 5XX server errors, meaning half of the users who came were getting blank pages. After the spike was over, no server error ever happened again.
I did some research and it seems it is because of the start-up time: it was taking too long for the instances to start up, so the requests were aborted. I checked my instance number; as many as 90 instances were created, but active instances dropped from 40 to 0 after a second. This problem only occurred when there was a sudden increase in requests, but I thought App Engine was supposed to be able to handle this type of increase.
My question is how can I fix this problem? Or where should I keep digging to find the root of the problem? Thanks in advance!
Thank you all for the help, I've figured out the problem.
Credit goes to Dan Cornilescu; his comments gave me the leads to find the root of the problem, which was that I did not have enough min_idle_instances. Once I set a high enough min_idle_instances in the automatic scaling section of my app.yaml, I did not receive any 5XX server errors.
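For reference, a minimal sketch of what that section of app.yaml can look like in the standard environment (the value 5 is only an example; the right number depends on how sharp your traffic spikes are):
automatic_scaling:
  # idle instances are kept warm, so a burst of requests doesn't have to
  # wait for new instances to spin up
  min_idle_instances: 5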
Which 5XX codes were you seeing?
I experienced an issue with instances mysteriously hanging & dying on start-up:
app engine instance dies instantly, locking up deferred tasks until they hit 10 minute timeout
It was due to a 3rd party lib I was using which was trying to bind to a port during instantiation, and I ended up editing the source code of that lib.
I've also experienced crashes after an instance sent its ~20th push notification to APNS, due to a memory leak in App Engine's version of Python's ssl library.
Your issue is a bit different from these, but the steps to hunt it down feel the same:
Set up a sandbox by deploying your project to a different project id and reproduce the issue. Making a script that hits this sandbox with thousands of requests over the course of a few minutes from your local machine should do it (a rough load script is sketched after these steps).
Comment stuff out of your code, deploy again to the sandbox, see if it still crashes, and repeat until your script no longer crashes it.
Proceeding with this process of elimination should lead you to what's causing the issue by ruling out everything that isn't causing the issue.
You can also do this in the opposite direction, by starting from a 'hello world' type project and systematically copy-pasting chunks of your application code in until the issue starts happening.
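A rough sketch of such a load script, in PHP with curl_multi (the URL, total and concurrency are placeholders; tune them to roughly match the ~120 requests / second spike you saw):
<?php
// hammer.php - crude load generator; run it from your local machine against
// the sandbox project, never against production
$url = 'https://your-sandbox-project.appspot.com/';   // placeholder URL
$total = 5000;        // total requests to send
$concurrency = 100;   // requests in flight per batch

for ($sent = 0; $sent < $total; $sent += $concurrency) {
    $mh = curl_multi_init();
    $handles = [];
    for ($i = 0; $i < $concurrency; $i++) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 30);
        curl_multi_add_handle($mh, $ch);
        $handles[] = $ch;
    }
    do {                                    // run this batch to completion
        curl_multi_exec($mh, $running);
        if ($running) {
            curl_multi_select($mh);
        }
    } while ($running > 0);
    foreach ($handles as $ch) {
        if (curl_getinfo($ch, CURLINFO_HTTP_CODE) >= 500) {
            echo "5XX response in batch starting at {$sent}\n";
        }
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);
}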
If you are experiencing high traffic, then maybe now is a good time for you to run load tests. Try to simulate real-world traffic as closely as possible and try to find bottlenecks using Stackdriver Trace or by profiling request handling in your code and database operations.
Also check your project scaling settings in your yaml file, especially these parameters:
automaticScaling:
  coolDownPeriod: 120s
  cpuUtilization:
    targetUtilization: 0.5
  maxTotalInstances: 8
  minTotalInstances: 1
Not necessarily the solution, but worth checking: make sure you're listening on the port specified by the environment variable provided by Google. This solved it for me.
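As a rough illustration (a sketch, assuming a custom entrypoint; the exact mechanism depends on your runtime), the idea is to read the port from the environment instead of hard-coding it:
<?php
// read the port the platform provides instead of hard-coding it
$port = getenv('PORT') ?: '8080';   // fall back to 8080 for local runs
// then start your server on that port from the entrypoint, e.g.:
//   php -S 0.0.0.0:$port public/index.php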
I am running my application on Laravel 5.1.27 on a server hosted on HostGator.
Most of the time my POST requests end up in a gateway timeout error. I have RESTful APIs which allow users to send POST requests, and I am also using DataTables. The DataTables POST requests also mostly end up as timeout errors.
I've read many other threads but can't seem to get rid of these errors. Everything works fine on my local machine, but on the server timeout errors occur.
Here are my live server specifications:
Any help/suggestions would be really appreciated.
Note: I am using a shared hosting plan, so I don't have root access on my server. Please keep this in mind when suggesting solutions.
Try using
<?php
set_time_limit(60);
?>
Set the number of seconds a script is allowed to run. If this is reached, the script returns a fatal error. The default limit is 30 seconds or, if it exists, the max_execution_time value defined in the php.ini.
The PHP default is 30 seconds, however your host may set this even lower.
If you change the 60 to 0, this tells PHP never to time out.
This is not recommended: if you have a leaky/looping script, it can cause havoc on the server (and your host will probably disable your site until the script stops).
I have a script that loads a CSV via cURL; once it has the CSV, it adds each of the records to the database and, when finished, displays the total number of records added.
With fewer than 500 records it executes just fine. The problem is that whenever the number of records is too big, execution is interrupted at some point and the browser displays the download dialog with a file named like the last part of my URL, without an extension and containing nothing. No warning, error or any kind of message. The database shows that it added some of the records, and if I run the script several times it adds a small amount more.
I have tried to look for someone with a similar situation but haven't found one yet.
I would appreciate any insight into the matter; I'm not sure if this is a Symfony2 problem, a server configuration problem or something else.
Thanks in advance.
Probably your script is reaching the maximum PHP execution time, which is 30 seconds by default. You can change it in the controller doing the lengthy operation with the PHP set_time_limit() function. For example:
set_time_limit(300); // 300 seconds = 5 minutes
That's more a limitation of your webserver / the environment PHP is running in.
Increase max_execution_time to allow your webserver to run the request longer. An alternative would be writing a console command; the CLI environment isn't restricted in many cases (a sketch of such a command follows below).
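A minimal sketch of what such a Symfony2 console command could look like. The command name, the acme.csv_importer service and its importFromUrl() method are hypothetical placeholders for the CSV logic your controller already has:
<?php
// src/Acme/ImportBundle/Command/ImportCsvCommand.php (hypothetical names)
namespace Acme\ImportBundle\Command;

use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputArgument;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class ImportCsvCommand extends ContainerAwareCommand
{
    protected function configure()
    {
        $this
            ->setName('acme:import:csv')
            ->setDescription('Imports a CSV fetched over cURL, without the web timeout')
            ->addArgument('url', InputArgument::REQUIRED, 'URL of the CSV file');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        // reuse the same service/logic your controller calls today
        $importer = $this->getContainer()->get('acme.csv_importer'); // hypothetical service
        $count = $importer->importFromUrl($input->getArgument('url'));

        $output->writeln(sprintf('%d records imported.', $count));
    }
}
Run it with something like: php app/console acme:import:csv http://example.com/data.csv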
For some odd reason, just today our server decided to be very slow when starting sessions. For every session_start, the server either times out after 30 seconds, or it takes about 20 seconds to start the session. This is very weird, seeing as it hasn't done this for a very long time (the last time our server did this was about 7 months ago). I've tried changing the sessions to run through a database instead, and that works fine; however, as our current website is built, it'd take days to go through every page and change the session loading to include a new session handler. Therefore my question remains:
Why is it so slow, and why only sometimes?
We run on a dedicated Hetzner server with 24 GB of RAM, and a CPU more than fast enough to run a simple webserver (a Xeon, I believe, but I'm not sure). We run Debian on the server with an Apache + FastCGI + PHP5 setup.
The server doesn't report much load, either through server-status or the top command. vnstat reports no problem whatsoever with our network link (and again, that wouldn't result in slow local session handling). iotop reports no process taking over the entire hard drive. Writing to the tmp folder where the session files are located is fast if done through vim.
Again, to make this clear, my main concern here isn't whether or not we should switch to a DB or a memory-cached version of the sessions; it's simply to ask why this happens, because everything I look at seems to be working fine, except for PHP itself.
EDIT:
The largest file in our PHP tmp directory is 2.9 MB, so nothing that should make an impact, I believe.
UPDATE: I never did figure out what was wrong and/or how to fix it, but the problem disappeared after we switched over to memcached/db sessions.
Have you tried session_write_close(); ?
This disables writing to session variables, but you can still read data from them. Later, when you need to write a session variable, reopen the session.
I have also suffered from this problem, but this worked like a charm. This is what I do:
session_start(); //starts the session
$_SESSION['user']="Me";
session_write_close(); // close write capability
echo $_SESSION['user']; // you can still access it
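And a short sketch of the "reopen it later" part mentioned above (assuming no output has been sent yet, so the session can be restarted cleanly):
session_start();                  // re-acquires the session lock
$_SESSION['user'] = 'Someone';    // writable again
session_write_close();            // release the lock as soon as you're done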
I had the same problem: suddenly the server took 30 seconds to execute a request. I noticed it was because of session_start(). The first request was fast, but each subsequent request took some 30 seconds to execute.
I found that the session file in c:\wamp\tmp was locked by the first request for some 30 seconds. During this time the second request was waiting for the file to be unlocked.
I found out it had something to do with mod_rewrite and .htaccess. I disabled mod_rewrite and commented out every line in .htaccess and it worked again like a charm. I don't know why this happened, because I don't remember changing any settings or configuration in WAMP.
I ran into this problem too. It was answered here:
Problem with function session_start() (works slowly)
Sessions are locked by PHP while one script is executing, so if requests are stacked up under the same session, they can cause these surprisingly long delays.
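A tiny sketch to illustrate the effect (two hypothetical scripts; open them in two tabs of the same browser so they share one session):
<?php
// slow.php - holds the session lock for 30 seconds
session_start();          // acquires an exclusive lock on the session file
sleep(30);                // simulate a long-running request
session_write_close();    // only now is the lock released

<?php
// fast.php - even a trivial page waits for slow.php to release the lock
session_start();          // blocks here until slow.php is done with the session
echo 'finally got the session';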
Each session is stored on the server as a text file (by PHP's default file-based session handler).
When session_start() is used to resume an existing session (via a cookie identifier, for example), maybe a big session file (a session with a lot of content inside) can be slow to load?
If this is the case, your application is probably putting too much data into sessions.
Please check that you have the correct memcache settings, e.g. in /etc/php.d/memcached.ini.
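One quick way to see what PHP is actually using (a sketch; the exact option names vary with the memcached extension version):
<?php
// print the session-related settings PHP actually sees
var_dump(ini_get('session.save_handler'));   // e.g. "memcached"
var_dump(ini_get('session.save_path'));      // e.g. "127.0.0.1:11211"
var_dump(ini_get('memcached.sess_locking')); // session locking behaviour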
I know this is an old question, but I've just fixed this issue on my server. All I did was turn on the bypass cache for the domains in the cache manager in cPanel.
My sessions were taking ages to start and close; now they are instant.
Sessions may also start slowly if you put a lot of data in them. For example, 50 MB of data in the session in a Docker image may result in a session start time of about 3 seconds.
I was asked to help troubleshoot someone's website. It is written in PHP, on a Linux box, using an Apache server and MySQL, and I have never worked with any of these before (except maybe Linux in school).
I got most of the issues fixed (most code is really the same no matter what language it is); however, there is still one page that times out when processing huge files. I'm fairly sure the problem is a timeout somewhere, but I have no idea where all the PHP timeouts would be.
I have adjusted max_execution_time, max_input_time, mysql.connect_timeout, default_socket_timeout, and realpath_cache_ttl in php.ini, but it still times out after about 10 minutes. What other settings might exist that I could increase to try and fix this?
As a side note, I'm aware that 10 minutes is generally not desirable when processing a file, but this section of the site is only used by one person once or twice a week, and she doesn't mind, provided the process finishes as expected (and I really don't want to rewrite someone else's bad code in a language I don't understand, for a process I don't understand).
EDIT: The SQL process finishes in the background; it's just the webpage itself that times out.
Per Frank Farmer's suggestion, I added flush() to the code and it works now. Definitely a browser timeout, thanks Frank!
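For anyone hitting the same thing, a rough sketch of the flush() approach: emit a little output periodically so the browser and any proxies see activity and don't drop the connection (process_next_chunk() is a hypothetical stand-in for the existing processing loop):
<?php
set_time_limit(0);               // no PHP time limit for this long request
while (process_next_chunk()) {   // hypothetical: does one slice of the work
    echo ' ';                    // a byte of output every iteration
    flush();                     // push it to the client immediately
    // ob_flush();               // may also be needed if output buffering is on
}
echo 'done';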
You can use set_time_limit(); if you set it to zero, the script should not time out at all.
This would be placed within your script, not in any config etc...
Edit: Try changing Apache's timeout settings. In the config, look for the TimeOut directive (should be the same for Apache 2.x and Apache 1.3.x); once changed, restart Apache and check it.
Edit 3:
Did you go to the link I provided? It lists the default there, which is 300 seconds (5 minutes). Also, if the setting IS NOT in the config file, you CAN add it.
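For example (a sketch; the exact config file is httpd.conf or apache2.conf depending on your distribution, and 600 is only an illustrative value):
# allow requests to run for up to 10 minutes before Apache gives up
TimeOut 600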
According to the docs:
The TimeOut directive currently defines the amount of time Apache will wait for three things:
The total amount of time it takes to receive a GET request.
The amount of time between receipt of TCP packets on a POST or PUT request.
The amount of time between ACKs on transmissions of TCP packets in responses.
So it is possible it doesn't relate, but try it and see.