I'm testing an app hosted at app.promls.net, but there is something wrong with the script execution.
On localhost, when the page is just plain text generated by PHP, execution takes only -> timer: 0.12875008583069 seconds.
When the content is created dynamically and comes from the MySQL database:
timer: 0.44203495979309 seconds. / timer: 0.65762710571289 seconds. / timer: 0.48272085189819 seconds.
The times are different on the server: there, execution takes around 8 seconds.
Could anyone give me a recommendation on how to test and optimize my PHP execution?
I have been optimizing the MySQL database, using DESCRIBE and EXPLAIN, because some queries returned tons of rows for a simple search.
But now that I have finished, I would like to explore some new options for PHP execution.
I know that adding compression to the HTML helps, but it only helps with transport time between the server and the client when the HTML response is returned. Now I want to optimize PHP execution itself, and find out whether there are some MySQL tricks that could be implemented to improve the response time further.
Note: I have been thinking about using HipHop for PHP and memcached or Cassandra, but I guess those are not the answer to this problem, since my app has no activity yet (meaning user actions) and not much data.
Thanks in advance; I'm available for any comments or suggestions.
With such a big difference in execution time, we would need details on the host configuration (shared? dedicated?).
Is MySQL skipping DNS lookups? If not, try the skip-name-resolve setting in my.cnf, or use IP addresses rather than hostnames in the privilege/user tables. The only time I have seen such latency, it was because of a DNS timeout in the connection between MySQL and PHP.
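For reference, a minimal my.cnf change for this would look something like the following (the section name assumes a standard MySQL installation):
[mysqld]
skip-name-resolve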
First off, try doing the following to your MySQL DB:
Run "OPTIMIZE TABLE mytable" on all of your tables
Run "ANALYZE TABLE mytable" on all of your tables.
Add indexes to the table fields that you are using
Be sure to substitute each table name for "mytable" in the above statements.
See if doing the first two makes a difference, then add the indexes.
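As a sketch of those statements, with a hypothetical table called users and a frequently searched email column:
OPTIMIZE TABLE users;
ANALYZE TABLE users;
ALTER TABLE users ADD INDEX idx_email (email);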
Related
I'm in a situation which really puzzles me, and nobody seems to know what the problem is.
I've got a website written in PHP/Laravel, which connects to a MySQL server. If I call an API endpoint several times in quick succession (by clicking a button on the website about 10-15 times very fast), the last few calls take very long. Where a call normally takes about 200 ms, it suddenly takes multiples of 30 full seconds: one call takes 30 seconds, another one (or more) 60, another one or more 90, and so on. All calls end successfully, but they take an enormous time to finish.
I first thought it could be PHP's max_execution_time, so I set that to 15. Unfortunately, nothing changed; the calls still take multiples of 30 seconds. Besides, if that were the problem, it would return an error, while in this case I get correct 200 responses.
After some fiddling around I ran watch -n 0.3 "mysqladmin processlist" to see if MySQL was maybe the cause. During the long-lasting calls I see the following:
I'm not exactly sure what this is or why it might happen. I thought that MySQL might be hanging, so I surrounded the MySQL query in PHP with syslog calls to see if the code actually hangs on the MySQL call:
syslog(LOG_ERR, "BEFORE");
$results = TheTable::where('edg', $edg)->get();
$theResponse = response(json_encode($results), 200)->header('Content-Type', 'application/json');
syslog(LOG_ERR, "AFTER");
return $theResponse;
But that doesn't seem to be the case. The BEFORE and AFTER syslogs always appear immediately after each other, even when I see the queries in the mysqladmin processlist in the "Sleep" state.
Furthermore, as you can see it is a very simple read query, so I guess some kind of MySQL lock can't be the problem (because I think locks are only taken on writes).
And from this point I'm kinda lost on how to proceed debugging this. Does anybody know what these results tell me and where I can find the solution to this problem? All tips are welcome!
[PS: Some more info on the setup]
The setup is slightly more complex than I described above, but since I don't think it has anything to do with the problem, I saved that information for last. We've actually got two Ubuntu 16.04 servers with a load balancer in front of them. Both servers run a MySQL server in some kind of master/master replication mode. Although I feel very comfortable on Linux, I didn't set this up and I'm not a sysadmin, so I might be missing something there. I asked the guys who administer this setup, but they say the problem must be in the code. I'm not sure where that could be, though.
Again; all tips are welcome!
Ideally the code should close the connection once the task is complete, or reuse the connection when needed, depending on how the code is written.
If you want MySQL to take care of this, I would suggest checking the wait_timeout setting/variable. Its value is specified in seconds.
For example, if you set wait_timeout=10, then any connection inside MySQL that has been in the Sleep state for more than 10 seconds will be closed by MySQL automatically.
Note: the above setting is dynamic and global in scope, so it can be changed without restarting MySQL.
Example command:
mysql> set global wait_timeout=10;
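You can verify the current value with:
mysql> show global variables like 'wait_timeout';
To keep the value across restarts, also set wait_timeout=10 under the [mysqld] section in my.cnf (the value 10 is just an example).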
I'm puzzled; I assume it's a slow query.
Note: all my queries are tested and run great when there are fewer people using my app/website (less than 0.01 sec each).
So I've got high CPU usage with my current setup and I was wondering why. Is it possible it's an index issue?
Our possible solution: we thought we could use an XML cache file to store the information each hour, and thereby reduce the load from our MySQL queries (updating the files each hour).
Would it be good for us to do such a thing, given that we have an SSD drive? Or will it be slower than before?
Currently, at high-traffic times, our website/app can take up to 30 seconds before returning the first byte. My website is running on a Plesk 12 server.
UPDATE
Here's more information about my MySQL setup:
http://pastebin.com/KqvFYy8y
Is it possible it's an index issue?
Perhaps, but not necessarily. You first need to identify which query is slow; you find that in the slow query log. Then analyze the query. This is explained in the literature, or you can contact a consultant / tutor for that.
We thought we could use an XML cache file to store the information each hour... and then reduce the load on our MySQL query?
Well, cache invalidation is not the easiest thing to do, but with a fixed rhythm of once every hour it seems easy enough. Just take care: it will only help if the actual query you cache was slow. Also, MySQL normally has a query cache built in; check whether it is enabled first.
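One way to check (note that the query cache was removed in MySQL 8.0, so this assumes an older server such as 5.x):
mysql> SHOW VARIABLES LIKE 'query_cache%';
mysql> SHOW STATUS LIKE 'Qcache%';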
Will it be good for us to do such things?
Normally, if the things you do are good, the results will be good too. Sometimes even bad things lead to good results, so such a general question is hard to answer. Instead, I suggest you gather more concrete information before you continue asking around. Right now this sounds like guessing. Stop guessing. Really, guessing is only for the first two minutes; after that, just stop guessing.
Since we have an SSD drive? Or will it be slower than before?
You can try to throw hardware at it. Again, the literature and a consultant / tutor can help you greatly with that. But just stop guessing. Really.
I assume the query is not slow all the time. If that is true, the query is not very likely to be the problem.
You need to find out what is using the CPU. It is likely a runaway script with an infinite loop.
Try this:
<?php
// print the full process list so you can spot what is eating CPU
header('Content-Type: text/plain; charset=utf-8');
system('ps auxww');
?>
This should return a list in this format:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
Scan down the %CPU column and look for your user name in the USER column
If you see a process taking 100% CPU, you may want to get the PID number and:
system('kill 1234');
Where 1234 is the PID
The MySQL processes running at 441% and 218% CPU seem very problematic.
Assuming this is a shared server, there may be another user running queries who is hogging the CPU. You may need to take that up with your provider.
I've been watching one of my shared servers, and the CPU for the mysql process has not gone over 16%.
MySQLTuner
From the link it appears you have heavy traffic.
The Tuner had been running for 23.5 minutes.
Joins performed without indexes: 69863
69863 in 23.5 minutes comes out to almost 50 queries per second.
Does this sound correct? Running a query with a JOIN roughly 50 times per second?
Index JOIN Table
You have a query with a JOIN.
The tables are joined by column(s).
On the joined table, add an index to the column that joins the two tables together.
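For example, if a hypothetical orders table is joined to customers on orders.customer_id, the index goes on that column:
ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);
EXPLAIN SELECT c.name, o.total FROM customers c JOIN orders o ON o.customer_id = c.id;
The EXPLAIN output should then show the index being used instead of a full scan of the joined table.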
I had to change the blueprint of my web application to decrease loading time (http://stackoverflow.com/questions/5096127/best-way-to-scale-data-decrease-loading-time-make-my-webhost-happy).
This change of blueprint means that the data of my application has to be migrated to the new blueprint (otherwise my app won't work). To migrate all my MySQL records (thousands of records), I wrote a PHP/MySQL script.
Opening this script in my browser doesn't work. I've set the time limit of the script to 0 for unlimited execution time, but after a few minutes the script stops loading. A cron job is not really an option either: 1) strangely enough, it doesn't load, but the biggest problem is 2) I'm afraid it will cost too many resources on my shared server.
Do you know a fast and efficient way to migrate all my MySQL records using this PHP/MySQL script?
You could try PHP's ignore_user_abort. It's a little dangerous in that you need SOME way to end its execution, but it's possible your browser is aborting after the script takes too long.
I solved the problem!
Yes, it takes a lot of time, and yes, it causes an increase in server load, but it just needs to be done. I use the error log to check for errors while migrating.
How?
1) I added ignore_user_abort(true); and set_time_limit(0); to make sure the script keeps running on the server (it stops when the while() loop is completed).
2) Within the while() loop, I added some code so I can stop the migration script by creating a small text file called stop.txt:
if(file_exists(dirname(__FILE__)."/stop.txt")) {
error_log('Migration Stopped By User ('.date("d-m-Y H:i:s",time()).')');
break;
}
3) Migration errors and duplicates are logged to my error log:
error_log('Migration Fail => UID: '.$uid.' - '.$email.' ('.date("d-m-Y H:i:s",time()).')');
4) Once the migration is completed, I receive an email (sent with mail()) with the result of the migration, so I don't have to check it manually.
This might not be the best solution, but it's a good solution to work with!
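Putting those pieces together, a minimal sketch of such a migration script might look like the following (the DSN, credentials, source table, and the migrate_row() helper are all hypothetical placeholders):
<?php
// keep running even if the browser aborts the request
ignore_user_abort(true);
set_time_limit(0);

$db = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'user', 'pass'); // hypothetical credentials
$stmt = $db->query('SELECT uid, email FROM old_table'); // hypothetical source table
$failed = 0;
$total = 0;

while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    // allow the migration to be stopped by creating stop.txt next to this script
    if (file_exists(dirname(__FILE__) . '/stop.txt')) {
        error_log('Migration Stopped By User (' . date('d-m-Y H:i:s') . ')');
        break;
    }
    try {
        migrate_row($db, $row); // hypothetical helper that writes the row into the new blueprint
    } catch (Exception $e) {
        $failed++;
        error_log('Migration Fail => UID: ' . $row['uid'] . ' - ' . $row['email'] . ' (' . date('d-m-Y H:i:s') . ')');
    }
    $total++;
}

// report the result by email once the loop is done
mail('admin@example.com', 'Migration finished', 'Rows processed: ' . $total . ', failed: ' . $failed);
?>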
I have a DB with over 5 million rows, and for each row I have to do an HTTP POST to a server with some parameters, at a maximum rate of 500 concurrent connections. Each POST request takes 12 seconds to process, so as old connections complete I have to open new ones and maintain ~500 connections. I then have to update the DB with the values returned from these web calls.
How do I make the web calls described above?
My app is in PHP. Can I use PHP, or should I switch to something else for this?
Actually, you can definitely do this with PHP using a technique called long polling. Basically, how it works is that the client machine pings the server and asks "Do you have anything for me?" and the server sees that it does not. Instead of responding, it holds onto the request and responds only when it has something to send.
Long polling is a method used by both DrupalChat and the APE project (AJAX Push Engine).
http://drupal.org/project/drupalchat
http://www.ape-project.org/
Here is some more info on push tech: http://en.wikipedia.org/wiki/Push_technology and http://en.wikipedia.org/wiki/Comet_%28programming%29
And here is a stackoverflow post about it: How do I implement basic "Long Polling"?
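A bare-bones server-side sketch of long polling in PHP (check_for_updates() is a hypothetical stand-in for whatever check you run against your own data):
<?php
set_time_limit(0);
$timeout = 30;           // give up after 30 seconds and let the client reconnect
$start = time();

while (time() - $start < $timeout) {
    $data = check_for_updates();   // hypothetical: query the DB for anything new
    if ($data !== null) {
        header('Content-Type: application/json');
        echo json_encode($data);   // respond as soon as there is something to send
        exit;
    }
    sleep(1);                      // wait a bit before checking again
}

http_response_code(204);           // nothing new within the timeout
?>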
Now, I have to say that 12 seconds is really dang long for a DB query to run. It sounds like either the query or the DB (or both) needs to be optimized. Have you normalized the database and set up good table and inter-table indexing?
Now, as for preventing DB update collisions, you need to use transactions (which both Postgres and newer versions of MySQL offer, along with most enterprise DB systems). Transactions allow you to roll back DB changes, reserve table IDs, and things like that.
http://en.wikipedia.org/wiki/Database_transaction
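A minimal transaction sketch (the table and values are hypothetical):
START TRANSACTION;
UPDATE jobs SET response_code = 200 WHERE id = 1234;
COMMIT;
-- or, if anything went wrong before committing: ROLLBACK;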
PHP isn't the right tool for long-running scripts, since by default it has a maximum execution time that is pretty short. You might look into using Python for this task. Also note that you can call external scripts (such as Python scripts) from PHP using the system() function, if the only reason you're using PHP is to make it easy to integrate a web front end.
However, you can do this in PHP with a cron job by simply having your PHP script handle a single row at a time, and having the cron job call the PHP script every second. Just maintain the index into the table elsewhere (either elsewhere in the DB, or just write the number to a file). A sketch of this approach follows below.
If you want to saturate your 500-connection limit, have your script do 40 rows at a time: 40 rows per second is roughly 500 rows per 12 seconds.
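As a rough sketch of that cron-driven batch approach (the table, columns, and endpoint URL are hypothetical, and the requests here run sequentially rather than truly in parallel):
<?php
// called by cron; processes one small batch of unprocessed rows per run
$db = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'user', 'pass'); // hypothetical credentials

$rows = $db->query('SELECT id, params FROM jobs WHERE processed = 0 LIMIT 40')->fetchAll(PDO::FETCH_ASSOC);
$update = $db->prepare('UPDATE jobs SET processed = 1, result = ? WHERE id = ?');

foreach ($rows as $row) {
    // POST the row's parameters to the remote server
    $ch = curl_init('https://example.com/endpoint'); // hypothetical endpoint
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $row['params']);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);
    curl_close($ch);

    // store what the web call returned
    $update->execute(array($response, $row['id']));
}
?>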
I have a small PHP framework that basically just loads a controller and a view.
An empty page loads in about 0.004 seconds, using microtime() at the beginning and end of execution.
Here's the "problem". If I do the following:
$link = @mysql_connect($server, $user, $pass, $link);
@mysql_select_db($database, $link);
the page loads in about 0.500 seconds. A whopping 12500% jump in time to render the empty page.
is this normal or am I doing something seriously wrong here... (I'm hoping for the latter).
EDIT: Could someone say what a normal time penalty is for just connecting to a mysql db like above.
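For what it's worth, to measure the connection penalty in isolation, something like this can be used (mysqli is shown here since the old mysql_* functions are deprecated; the credentials are placeholders):
<?php
$start = microtime(true);
$link = mysqli_connect('localhost', 'user', 'pass', 'database'); // placeholder credentials
$elapsed = microtime(true) - $start;
printf("connect took %.6f seconds\n", $elapsed);
?>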
Error suppression with @ will slow down your script. Also, an SQL connection depends on the speed of the server, so if the server is slow to respond, your script will execute slowly.
Actually, I don't get what you're trying to do with the $link variable.
$link = @mysql_connect($server,$user,$pass,$link); is probably wrong, and possibly not doing what you want (i.e. nothing, in your example), unless you have more than one link to databases (advanced stuff).
The php.net documentation states:
new_link: If a second call is made to mysql_connect() with the same arguments, no new link will be established, but instead, the link identifier of the already opened link will be returned. The new_link parameter modifies this behavior and makes mysql_connect() always open a new link, even if mysql_connect() was called before with the same parameters. In SQL safe mode, this parameter is ignored.
On my web server, the load time is always about the same on average (4000 µs the first time and 600 µs the second time).
Half a second to connect to a MySQL database is a bit slow, but not unusual either. If it's on another server on the network with an existing load, it might be absolutely normal.
I wouldn't worry too much about this problem.
(Oh, an old question! Never mind, still replying.)
Have you tried verifying your data integrity? Do a REPAIR TABLE and an OPTIMIZE TABLE on all tables. I know that sometimes, when a table has gone corrupt, the connection can take an enormous amount of time or fail altogether.
Maybe the reason is slow domain name (DNS) resolution.
skip-name-resolve
Add this to my.cnf, then restart mysqld.
And if you use skip-name-resolve, you can't use hostnames in MySQL user permissions.
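For example, a grant defined by IP address rather than hostname (the user, password, and address are hypothetical):
CREATE USER 'appuser'@'192.168.1.10' IDENTIFIED BY 'secret';
GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'appuser'@'192.168.1.10';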