How to solve 30-second response times because of MySQL sleep? - php

I'm in a situation which really puzzles me, and nobody seems to know what the problem is.
I've got a website written in PHP/Laravel, which connects to a MySQL server. If I call an API endpoint several times very quickly (by clicking a button on the website about 10-15 times in a row), the last few calls take very long. Where a call normally takes about 200 ms, it suddenly takes a multiple of 30 full seconds: one call takes 30 seconds, another one (or more) takes 60, another one or more take 90 seconds, and so on. All calls end successfully, but they just take an enormous time to finish.
I first thought it might be the PHP max_execution_time, so I set that to 15. Unfortunately, there was no change; the calls still take multiples of 30 seconds. Besides, if that were the problem, the calls would return an error, while in this case I get correct 200 responses.
After some fiddling around I ran watch -n 0.3 "mysqladmin processlist" to see if MySQL was maybe the cause. During the long-lasting calls, the processlist shows the connections for these requests sitting in the "Sleep" state.
I'm not exactly sure what this means or why it happens. I thought that MySQL might be hanging, so I surrounded the MySQL query in PHP with syslog calls to see if the code actually hangs on the MySQL call:
syslog(LOG_ERR, "BEFORE");
$results = TheTable::where('edg', $edg)->get();
$theResponse = response(json_encode($results), 200)->header('Content-Type', 'application/json');
syslog(LOG_ERR, "AFTER");
return $theResponse;
But that doesn't seem to be the case. The BEFORE and AFTER syslogs always appear immediately after each other, even when I see the queries in the mysqladmin processlist in the "Sleep" state.
Furthermore, as you can see it is a very simple read query, so I guess some kind of MySQL lock can't be the problem (because I think locks only get used on writes).
And from this point I'm kinda lost on how to proceed debugging this. Does anybody know what these results tell me and where I can find the solution to this problem? All tips are welcome!
[PS: Some more info on the setup]
The setup is slightly more complex than I described above, but since I don't think it has anything to do with the problem, I saved that information for last. We've actually got two Ubuntu 16.04 servers with a load balancer in front of them. Both servers contain a MySQL server, and the two are in some kind of master/master replication mode. Although I feel very comfortable on Linux, I didn't set this up and I'm not a sysadmin, so I might be missing something there. I asked the guys who administer this setup, but they say the problem must be in the code. I'm not sure where that could be, though.
Again; all tips are welcome!

Ideally the code should close the connection once its task is complete, or reuse the connection when needed, depending on how the code is written.
If you want MySQL to take care of this, I would suggest checking the wait_timeout setting/variable. Its value is expressed in seconds.
For example, if you set wait_timeout=10, then any connection inside MySQL that has been in the sleep state for more than 10 seconds will be closed by MySQL automatically.
Note: the above setting is global and dynamic in nature, so it can be changed without restarting MySQL.
Example command:
mysql> set global wait_timeout=10;
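If you would rather handle it from the application side, you can also drop the connection explicitly once the response has been built, so it never sits in the "Sleep" state waiting for wait_timeout. A minimal sketch, assuming a Laravel setup like the question's and the default connection name mysql:
use Illuminate\Support\Facades\DB;

$results = TheTable::where('edg', $edg)->get();
$theResponse = response(json_encode($results), 200)->header('Content-Type', 'application/json');

// Explicitly release the underlying PDO connection so it does not linger
// in the "Sleep" state until wait_timeout closes it.
DB::disconnect('mysql');

return $theResponse;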

Related

Memory not freed after execution of a PHP script

I have a LAMP server on which I run a PHP script that makes a SELECT query on a table containing about 1 million rows.
Here is my script (PHP 8.2 and MariaDB 10.5.18):
$db = new PDO("mysql:host=$dbhost;dbname=$dbname;", $dbuser, $dbpass);
$req = $db->prepare('SELECT * FROM '.$f_dataset);
$req->execute();
$fetch = $req->fetchAll(PDO::FETCH_ASSOC);
$req->closeCursor();
My problem is that each execution of this script seems to consume about 500 MB of RAM on my server, and this memory is not released at the end of the execution. Since I only have 2 GB of RAM, after 3 executions the server kills the Apache2 process, which forces me to restart the Apache server each time.
Is there a solution to this? A piece of code that allows to free the used memory?
I tried to use unset($fetch) and gc_collect_cycles() but nothing works and I haven't found anyone who had the same problem as me.
EDIT
After the more skeptical among you posted several responses asking for evidence as well as additional information, here is what else I can tell you:
I am currently developing a trading strategy testing tool whose parameters I set manually via an HTML form. The form is then processed by a PHP script that first performs calculations to reproduce technical indicators (using the Trader library for some of them, and reimplementing others) from the parameters returned by the form.
In a second step, after having reproduced the technical indicators and stored their values in my database, the PHP script simulates a buy or sell order based on the values of the stock market price I am interested in and the values of the technical indicators calculated just before.
To do this, I have, for example, 2 tables in my database: the first stores the information of the 1-minute candles (opening price, closing price, max price, min price, volume, ...), i.e. one candle per row; the second stores the value of a technical indicator corresponding to a candle, and thus to a row of my first table.
The reason why I need to make calculations, and therefore to fetch my 1 million candles, is that my table contains 1 million 1-minute candles on which I want to test my strategy. I could do this with 500 candles as well as with 10 million candles.
My problem right now is only with the candle retrieval; there are not even any calculations yet. I shared my script above, which is very short, and there is absolutely nothing else in it except the definitions of my variables $dbname, $dbhost, etc. So look no further, you have absolutely everything here.
When I run this script in my browser and watch the RAM usage during execution, I see an Apache process consuming up to 697 MB of RAM. So far nothing abnormal: the table I'm retrieving candles from is a little over 100 MB. The real problem is that once the script has finished, the RAM usage stays the same. If I run my script a second time, the RAM usage is 1400 MB. And this continues until all the RAM is used up and my Apache server crashes.
So my question is simple, do you know a way to clear this RAM after my script is executed?
What you describe is improbable, and you don't say how you made these measurements. If your assertions are valid then there are a couple of ways to solve the memory issue; however, this is an XY problem. There is no good reason to read a million rows into a web page script.
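If you really do need to walk over all the rows, one of those ways is to stop materialising the whole result set with fetchAll() and stream it row by row instead. A rough sketch, assuming the same $db PDO handle as in the question (MYSQL_ATTR_USE_BUFFERED_QUERY is specific to the pdo_mysql driver):
// Unbuffered query: rows are streamed from the server one at a time
// instead of being copied into PHP memory all at once.
$db->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

$req = $db->prepare('SELECT * FROM '.$f_dataset);
$req->execute();

while ($row = $req->fetch(PDO::FETCH_ASSOC)) {
    // process one candle at a time here
}

$req->closeCursor();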
After several hours of research and discussion, it seems that this problem of unreleased memory has no direct solution. It is simply a technical limitation of Apache in my case: it is not able to free the memory it uses unless the process is restarted each time.
I have, however, found a workaround in the Apache configuration, by allowing only one request per server process instead of the default of 5.
This way, the process my script runs on gets killed at the end of the run and is replaced by another one that starts automatically.
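For reference, the knob that does this in a stock Apache prefork setup is MaxConnectionsPerChild (called MaxRequestsPerChild in older releases); this is only a sketch of the idea, not necessarily the exact directive the poster changed:
# mpm_prefork: recycle each worker process after a single request so the
# memory it allocated is returned to the operating system.
<IfModule mpm_prefork_module>
    MaxConnectionsPerChild 1
</IfModule>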

Excessive resource usage on a very simple query

I am not sure if I need to be worried about this, but I want to make sure that my script is not bugging mysql or something.
I get an email from lfd about every 30 seconds for a PHP script that I run, which uses a MySQL query.
session_start();
include('connect.php');
$sql = "SELECT * FROM table WHERE (dest_id = '".$_SESSION['session_user_id']."' OR dest_id = '0') AND user_id != '".$_SESSION['session_user_id']."' AND `read` = 0 AND org_code = '".$_SESSION['session_org_code']."'";
$result = $GLOBALS['db']->query($sql);
echo $result->num_rows;
This query when I run it manually seems to run very quickly.
The email from the lfd says
Time: Fri Jun 9 01:20:55 2017 -0700
Account: ********
Resource: Process Time
Exceeded: 719621 > 1800 (seconds)
Executable: /usr/bin/php
Command Line: /usr/bin/php /home/myname/public_html/example/includes/inbox_total.php
PID: 17579 (Parent PID:16310)
Killed: No
Is my understanding correct that Exceeded: 719621 > 1800 (seconds) means that my script is taking 719621 seconds to run?
Is this something I need to worry about, and if so, are there some troubleshooting tips I can use to find the issue?
Best way to find out if this is accurate: turn on the slow query log (if you have access to your MySQL configuration). That will tell you more details for sure.
It isn't crazy for a query that seems fast to you to take a long time. If this is a table that also gets written to frequently, then you can end up hanging up MySQL (or at least freezing up one particular transaction) due to an inability to read while waiting to write. It depends very much on what MySQL storage engine you are using, the variability of your load, and some other factors which can be hard to predict.
I have certainly had this problem before, and have definitely had issues where queries worked fine for me but were bogging down during high load times, unexpected traffic spikes, or other things outside of my control. The latter is especially likely if you are running on a poorly controlled VPS (in this case, poorly controlled by the hosting company: with a poorly configured virtual host, a VPS can suck up CPU resources "dedicated" to another VPS, to your detriment).
So is this possible? Absolutely. What do you do about it? Depends on what the root issue is: traffic spikes, poor VPS allocation, etc. Sometimes a lot of digging can be needed to get to the root of the issue.
One immediate issue could be a largish table without proper indexing. MySQL generally cannot use a single index to satisfy an OR across columns like that, so without seeing anything else I would say your query is most likely not using an index effectively. If this table has even a few thousand records, under the wrong load conditions it could very easily turn into a super slow query, especially if you commonly write to the table and are using MyISAM.
That's just a shot in the dark though without more details.
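If you do have MySQL config access, the checks suggested above look roughly like this (the values 123 and ABC stand in for the session values, and `table` mirrors the placeholder name from the question):
mysql> SET GLOBAL slow_query_log = 'ON';
mysql> SET GLOBAL long_query_time = 1;  -- log anything slower than 1 second
mysql> EXPLAIN SELECT * FROM `table`
       WHERE (dest_id = '123' OR dest_id = '0')
         AND user_id != '123' AND `read` = 0 AND org_code = 'ABC';
EXPLAIN will show whether any index is used; if the type column says ALL, MySQL is scanning the whole table for every call.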

Long running PHP scraper returns 500 Internal Error

Mostly I find the answers to my questions on Google, but now I'm stuck.
I'm working on a scraper script, which first scrapes some usernames from a website and then gets every single detail of each user. There are two scrapers involved: the first goes through the main page, gets the first name, then gets the details from its profile page, and then moves on to the next name...
The first site I'm scraping has a total of 64 names displayed on one main page, while the second one has 4 pages with over 365 names displayed.
The first one works great; however, the second one keeps giving me the 500 internal error. I've tried limiting the script to scrape only a few names, which works like a charm, so I'm more than sure that the script itself is OK!
The max_execution_time in my php.ini file is set to 1500, so I guess that's not the problem either; however, something is causing the error...
I'm not sure if adding a sleep command after every 10 names, for example, will solve my situation, but well, I'm trying that now!
So if any of you have any idea what would help solve this situation, I would appreciate your help!
thanks in advance,
z
Support said I can raise the memory up to 4 gigabytes.
Typical money-gouging support answer. Save your cash and write better code, because what you are doing could easily be run from the shared server of a free web hosting provider, even with their draconian resource limits.
Get/update the list of users first as one job, then extract the details in smaller batches as another. Use multi-row (bulk) INSERT statements to reduce round trips to the database; they also run much faster than looping through individual INSERTs (see the sketch below).
Usernames and details are essentially a static list, so there is no rush to get all the data in real time. Just nibble away with a cron job fetching the details, and eventually the script will catch up with new usernames being added to the incoming list; you end up with a faster, leaner, more efficient system.
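As a sketch of the batched-insert idea (assuming a PDO connection in $pdo and a hypothetical users table; adjust names to your schema):
// Build one multi-row INSERT instead of one INSERT per scraped user.
$rows = [
    ['alice', 'details for alice'],
    ['bob',   'details for bob'],
];

$placeholders = implode(',', array_fill(0, count($rows), '(?, ?)'));
$stmt = $pdo->prepare("INSERT INTO users (username, details) VALUES $placeholders");
$stmt->execute(array_merge(...$rows));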
This is definitely a memory issue. One of your variables is growing past the memory limit you have defined in php.ini. If you do need to store a huge amount of data, I'd recommend writing your results to a file and/or DB at regular intervals (and then free up your vars) instead of storing them all in memory at run time.
get user details
dump to file
clear vars
repeat..
If you set your execution time to infinity and regularly dump the vars to a file/DB, your PHP script should run fine for hours.
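A rough PHP sketch of that loop (fetch_user_details() is a placeholder for your own scraper function):
$out = fopen('scraped_users.csv', 'a');

foreach ($usernames as $i => $username) {
    $details = fetch_user_details($username); // placeholder for your scraper call
    fputcsv($out, $details);                  // dump to file straight away
    unset($details);                          // nothing accumulates in memory

    if ($i > 0 && $i % 10 === 0) {
        sleep(1);                             // be gentle with the remote site
    }
}

fclose($out);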

Advice for parsing Apache logs to display

I was just wondering what would be the better way to show a graph of '# of visitors' per month/week.
1: Write a few functions that go off and parse Apache's logs, then return an array and convert it into a graph.
2: Cron jobs run at night and insert the log files into a MySQL DB; then, when the 'client' requests a graph of the visitors per month/week, a query is sent to MySQL and the result is returned and graphed.
With #1, I first thought this would be a good idea, but then I began to think about the toll on the server; plus, it seems that if a user refreshed the page the whole process would start over, even though the data would be more or less the same (wasting processor/memory time).
With #2, I think this is the better idea of the two, but I was wondering if anyone else has done something similar and, if so, how it went.
Any advice would be appreciated.
Thanks.
If you have a database handy, there's no reason not to use it. You can parse up to, say, one second prior to the start of the script, store that time, and start again from there on the next go-around. You can have the cron run as frequently as every minute with very little server impact that way.
Further, in languages like Python and Perl, you can run an infinite loop on readline() / readline, and it will keep returning either an empty string or the next line as soon as one exists. Add a short sleep every time you see an empty line and you can have real-time updates with a long-lived process, without the overhead of constant seeks and parses. Naturally you might want to have a cron job that tests whether they're alive and revives them if not.
I can provide code if you like.
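In PHP, the incremental cron-driven variant could look something like this (paths are examples, and the offset file simply remembers where the previous run stopped; log rotation handling is omitted):
$log   = '/var/log/apache2/access.log';
$state = '/tmp/access.log.offset';

$offset = is_file($state) ? (int) file_get_contents($state) : 0;

$fh = fopen($log, 'r');
fseek($fh, $offset);

while (($line = fgets($fh)) !== false) {
    // parse $line and insert the visit into the MySQL table here
}

file_put_contents($state, (string) ftell($fh)); // remember position for the next cron run
fclose($fh);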

php mysql db connect very slow

I have a small PHP framework that basically just loads a controller and a view.
An empty page loads in about 0.004 seconds, using microtime() at the beginning and end of execution.
Here's the "problem". If I do the following:
$link = #mysql_connect($server,$user,$pass,$link);
#mysql_select_db($database, $link);
the page loads in about 0.500 seconds. A whopping 12500% jump in time to render the empty page.
is this normal or am I doing something seriously wrong here... (I'm hoping for the latter).
EDIT: Could someone say what a normal time penalty is for just connecting to a mysql db like above.
Error suppression with # will slow down your script. Also, an SQL connection is reliant on the speed of the server, so if the server is slow to respond, you will get slow execution of your script.
Actually, I don't get what you're trying to do with the $link variable.
$link = #mysql_connect($server,$user,$pass,$link); is probably wrong and possibly not doing what you want (i.e. nothing in your example), unless you have more than one link to databases (advanced stuff).
the php.net documentation states:
new_link
If a second call is made to mysql_connect() with the same arguments, no new link will be established, but instead, the link identifier of the already opened link will be returned. The new_link parameter modifies this behavior and makes mysql_connect() always open a new link, even if mysql_connect() was called before with the same parameters. In SQL safe mode, this parameter is ignored.
On my webserver, the load time is always about the same on average (4000 µsecs the first time and 600 µsec the second time)
Half a second to connect to a mysql database is a bit slow but not unusual either. If it's on another server on the network with an existing load, it might be absolutely normal.
I wouldn't worry too much about this problem.
(Oh, an old question! Never mind, still replying.)
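To narrow it down, you can also time the connect step on its own. A small sketch using mysqli, since the old mysql_* functions were removed in PHP 7 (variable names match the question):
$start = microtime(true);

// Time only the connection, separately from any queries.
$link = mysqli_connect($server, $user, $pass, $database);

printf("connect took %.1f ms\n", (microtime(true) - $start) * 1000);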
Have you tried verifying your data integrity? Run REPAIR TABLE and OPTIMIZE TABLE on all tables. I know that sometimes when a table has become corrupt, the connection time can take an enormous amount of time or fail altogether.
Maybe the reason is slow hostname (DNS) resolution.
skip-name-resolve
Add this to my.cnf, then restart mysqld.
And if you use skip-name-resolve, you can't use hostnames in MySQL user permissions; you have to use IP addresses instead.
