I am currently experiencing slowness with one of my servers. It is running an apache2 server with PHP and MySQL. The MySQL server is hosted on the same machine as the webserver itself.
Whenever I request a PHP file containing MySQL queries, the page needs approximately 24 seconds to show up. While the page is being requested, the CPU usage of apache2 goes up to 11% (!), which is far more than it used to be a week ago.
Non-PHP files or PHP files without MySQL queries are showing up immediately.
What could be causing the problems with scripts containing MySQL queries?
I was unable to find any useful information inside the apache error logs.
In the MySQL console:
SHOW FULL PROCESSLIST; -- shows the SQL statements currently running
To check where the log files are:
SHOW VARIABLES LIKE '%log%'; -- lists the log-related MySQL variables
When doing query benchmarks / testing, always remember to turn off the query cache first, using:
SET SESSION query_cache_type = OFF;
Database queries take time to run, and each query involves opening up at least one file; file access is slow.
You can speed up the requests by running the database in RAM instead of from the hard drive, but the real answer is probably to cache as much as you can, so that you're doing as little database querying as possible.
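For example, a tiny query cache using APCu (my assumption; any key-value cache works the same way, and the query, cache key, and credentials below are placeholders):

<?php
// serve repeated page loads from APCu instead of hitting MySQL every time
$key  = 'recent_posts';          // hypothetical cache key
$rows = apcu_fetch($key, $hit);
if (!$hit) {
    $mysqli = new mysqli('localhost', 'user', 'password', 'mydb'); // placeholders
    $result = $mysqli->query('SELECT id, title FROM posts ORDER BY id DESC LIMIT 10');
    $rows   = array();
    while ($row = $result->fetch_assoc()) {
        $rows[] = $row;
    }
    apcu_store($key, $rows, 60); // cache the result for 60 seconds
}
// ...render $rows...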
You can also check whether the MySQL database has grown beyond 2 GB (or 4 GB) because of some CMS logging function and is exceeding a file size limit.
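A quick way to spot an oversized table is to query information_schema; a minimal sketch, assuming mysqli and placeholder credentials:

<?php
// list the ten biggest tables so an oversized CMS log table stands out
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb'); // placeholders
$sql = "SELECT table_schema, table_name,
               ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
        FROM information_schema.tables
        ORDER BY size_mb DESC
        LIMIT 10";
$result = $mysqli->query($sql);
while ($row = $result->fetch_assoc()) {
    echo $row['table_schema'] . '.' . $row['table_name'] . ': ' . $row['size_mb'] . " MB\n";
}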
I am using an AWS EC2 instance, and my database size is approximately 4 GB, using Ubuntu and a MySQL database within the instance. Whenever I dump my database, the website stops responding for the time it is dumping, a period of about 15 to 20 seconds.
Kindly advise if anything works better than this backup procedure.
I think you forgot to turn off the lock tables option. By default, MySQL sets table locks when doing a data export.
The lock isn't released until the data export is complete, which explains why your website process cannot do anything on the tables for about 15-20 seconds.
If you are taking the database dump through MySQL workbench, go to advanced options and uncheck lock-tables.
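If you dump from the command line or a cron'd PHP script instead of Workbench, the equivalent flags are --skip-lock-tables and --single-transaction; a hedged sketch with placeholder credentials and paths:

<?php
// cron-able backup sketch: --single-transaction takes a consistent snapshot
// of InnoDB tables without locking them, so the site keeps responding
// (it does not help for MyISAM tables; credentials/paths are placeholders)
$cmd = 'mysqldump --single-transaction --skip-lock-tables'
     . ' -u backup_user -pSECRET mydb > /var/backups/mydb.sql';
shell_exec($cmd);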
Please check max_execution_time and memory_limit in your php.ini file.
You can also use the set_time_limit() function.
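For example (the values here are arbitrary):

<?php
// raise the limits for this request only
ini_set('memory_limit', '256M'); // overrides memory_limit from php.ini
set_time_limit(300);             // overrides max_execution_time (seconds)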
Obviously, when you are taking a backup the database is busy with it and cannot listen to other requests. You may consider getting something like RDS, which will handle the backup jobs for you behind the scenes; and if you set up a read replica as well, you can get rid of this timeout issue.
I have a large PHP web scraping script that logs the results to a MySQL database as it goes. The script generally runs for 5 to 10 minutes at a time.
The problem is that when this script is running other pages on the application will not load.
The script is on a dedicated server with plenty of RAM, so I have tried increasing the allowed memory usage for MySQL and PHP. I have also increased the maximum allowed connections. None of this has helped.
Does anyone have any ideas about what else I can try?
Probably the problem is in your session handling. Try using session_write_close() before you start the "big script".
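The reason this helps: PHP's default file-based session handler keeps the session file locked for the entire request, so other pages opened from the same browser wait until the lock is released. A minimal sketch (the session key is a placeholder):

<?php
session_start();
// read whatever session data the long job needs first...
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null; // hypothetical key
// ...then release the session lock so other pages from the same
// browser are no longer blocked while this script keeps running
session_write_close();
// long-running scraping / database work goes here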
well, there's a big difference between "slowing down" and "not loading"!
try the following:
1. build a static HTML page and check whether it loads well during the execution of the big script
2. build a PHP page that doesn't connect to the DB (just echo something) and request it during the execution of the big script
3. build a small PHP page that connects to the DB (just select something from a table)
if 1 or 2 doesn't work well, your problem has something to do with the web server or server resources. if 3 doesn't work well, there could be resource issues with the MySQL server (a minimal sketch of test 3 follows below).
if everything works well, check the scraping script. does it lock any table that is needed by the main application?
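A minimal sketch of test 3, assuming mysqli and placeholder credentials:

<?php
// step 3: a trivial page that only opens a connection and runs one cheap query
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb'); // placeholders
if ($mysqli->connect_error) {
    die('Connect failed: ' . $mysqli->connect_error);
}
$res = $mysqli->query('SELECT 1'); // any cheap query will do
var_dump($res->fetch_row());       // a slow response here points at MySQL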
How can I determine the maximum length of the $query parameter accepted by the function mysqli_multi_query (or mysqli_query) in PHP?
I have a PHP program which generates a large string made of UPDATE SQL commands separated by ';'. The problem is that if that string exceeds a certain length, mysqli_query generates an error like 'MySQL server has gone away'. I noticed that the length seems to be around 1 MB, but how can I probe it so that I can make sure I never exceed that limit?
The script needs to run about 7000 updates on 25 or so fields. Executing one update at a time proved very slow; concatenating multiple updates runs much faster.
Any possibility to run multiple queries even faster?
Thank you for any advice!
You should take a look at MySQL error logs.
If you don't have access to the machine (hosting etc.) you may ask your administrator or helpdesk for that log.
MySQL supports very big queries. I'm not sure if there is any hard limit, but when you are going over the network you may run into the packet size limit.
You may check max_allowed_packet in the MySQL configuration and try to set a bigger packet size. I'm not sure about the default configuration, but it may be 1 MB, which may be too small a value for a query with 7000 updates at once.
MySQL may need more RAM to process a query like this.
If you can't reconfigure MySQL, you have to split your big query into smaller queries somehow; a sketch of one way to do that follows.
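One way to probe the limit at runtime is to read max_allowed_packet from the server and batch the UPDATE statements to stay under it. A sketch, assuming mysqli, placeholder credentials, and that $updates is an array of single UPDATE statements:

<?php
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb'); // placeholders

// ask the server how big one packet (and therefore one query) may be
$row = $mysqli->query("SHOW VARIABLES LIKE 'max_allowed_packet'")->fetch_assoc();
$maxPacket = (int) $row['Value'];

$batch = '';
foreach ($updates as $sql) {
    // keep a safety margin below the limit before appending the next statement
    if ($batch !== '' && strlen($batch) + strlen($sql) + 1 > $maxPacket * 0.9) {
        run_batch($mysqli, $batch);
        $batch = '';
    }
    $batch .= $sql . ';';
}
if ($batch !== '') {
    run_batch($mysqli, $batch);
}

function run_batch(mysqli $mysqli, $sql)
{
    $mysqli->multi_query($sql);
    // flush every result before sending the next batch, or mysqli errors out
    while ($mysqli->more_results() && $mysqli->next_result()) {
    }
}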
You may also read this for more information:
devshed - MySQL server has gone away
You asked:
Any possibility to run multiple queries even faster?
There is no simple answer to that question. It depends on the query, the database schema, etc.
Increasing the MySQL cache size in the configuration file (for example innodb_buffer_pool_size, if your tables are InnoDB) may help a lot in most cases involving big, simple updates without much computing, because the database engine will operate in RAM rather than on the hard disk. When a big cache is used, the first big query may sometimes be slower because the data is not yet loaded into RAM, but once it finally loads, queries that need a lot of read/write operations will work much faster.
Added later:
I assume your data processing needs PHP's unserialize() function, which may be hard to implement in pure SQL, so you have to do it in PHP :) If you have access to the server console you may create a cron (the Linux scheduler) job that calls the PHP script from the shell during the night.
Added even later:
After the discussion in the comments I have one more idea. You can make a full database or single-table backup from phpMyAdmin, download it, and restore the data on your home computer (on Windows you may use XAMPP or WAMP server). On your home computer you can run mysql.exe and process the data locally.
I found a limit of 16 field/value pairs on an INSERT statement; beyond that number I got a "Forbidden" error. (A "Forbidden" response usually comes from the web server or a security module rather than from MySQL itself.) My total INSERT statement length on the working statement was 392 characters.
Use a loop to do any massive work and just use regular mysqli_query. I got over 16000 queries to go in like that. Some things have to be changed in the php.ini file as well. Increase the POST size (post_max_size) and make it as big as you can if you're sending a lot of characters; the maximum input variables (max_input_vars) should be raised too if you're sending a lot of different variables. Give PHP more memory: set the memory limit to at least 256 MB, and make sure you're not running out of system memory while the queries run. You might also need to increase the timeout values from 30 and 60 to 200 or higher if you're sending really large numbers of queries. If you don't change these settings your POST will fail even if everything else is correct, because PHP aborts the request once you go beyond any php.ini limit. You don't have to change any MySQL settings when doing it one by one, though it will take some time if you're inserting or updating anything over about 1000 rows.
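A minimal sketch of that loop approach, assuming mysqli and placeholder table/column names. The transaction around the loop is my addition: committing thousands of small writes in one go is usually much faster than autocommitting each one.

<?php
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb'); // placeholders
$mysqli->begin_transaction();
foreach ($rows as $row) { // $rows: array of ['id' => ..., 'name' => ...]
    $id   = (int) $row['id'];
    $name = $mysqli->real_escape_string($row['name']);
    // one ordinary query per UPDATE: no multi_query, no packet-size worries
    $mysqli->query("UPDATE items SET name = '$name' WHERE id = $id");
}
$mysqli->commit();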
My Linux server's websites keep going down again and again, but SSH, FTP, etc. stay alive. So I had a look at the server through SSH and used the top command, which lists all the processes. It shows that when some PHP pages are executed, MySQL's CPU usage reaches 100%. So is there any command/log which can be used to find out which PHP pages are taking up so much MySQL time? Thank you...
You may want to take a look at your Apache log format to see if it includes the %D parameter, as this records the time taken to serve a request in microseconds.
If you filter out everything except requests to PHP scripts, you should get an idea of which scripts take the longest, suggesting a high execution time. Obviously a large value could also mean a very large response payload...
There are multiple aspects to resource consumption.
As mobius mentioned, you can use SHOW FULL PROCESSLIST in MySQL to see what is currently running. Look at the processes taking longer than you would expect and check out the query to find hints about where it originates in your application.
The problem may not be with the application. It might simply be a matter of tuning MySQL, which most of the time means adding or changing indexes. EXPLAIN is the command that will help you analyze the execution plan MySQL decided to use. Reading EXPLAIN output takes some practice. The best reference I have is High Performance MySQL.
You can also use the MySQL slow query log to get information about the slow queries happening when you are not in front of the server.
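You can switch the slow query log on at runtime, without editing the configuration file; a sketch, assuming a MySQL user with the SUPER privilege and placeholder credentials:

<?php
$mysqli = new mysqli('localhost', 'root', 'password'); // placeholders
// both variables are dynamic, so no server restart is needed;
// the settings last until the next restart
$mysqli->query("SET GLOBAL slow_query_log = 'ON'");
$mysqli->query("SET GLOBAL long_query_time = 1"); // log anything slower than 1 second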
If MySQL is running at 100%, you will probably find the problem from there. If you really want to track the usage from PHP, you can set up XHProf, a high performance profiler created by Facebook to run on production sites. You can set it up to sample one request out of 100 and get a bigger picture of the performance of your site. There are a few articles out there that explain how to set it up.
Finally, XDebug and KCacheGrind can be used in development to profile one request at a time.
If MySQL is getting stuck at 100% then you've probably got some badly tuned MySQL queries inside one of your PHP applications. This time clocks up in the MySQL daemon, so it shows up as mysqld CPU usage rather than Apache's, even though it still lengthens the overall request times. It could be down to missing or out-of-date indexes.
If you have access to the DB at the command prompt through SSH, then you could try running ANALYZE TABLE and OPTIMIZE TABLE on any large tables. Also look at "The Slow Query Log" in the MySQL documentation.
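If it is easier to script than to type at the mysql prompt, the same can be run from a small PHP script; a sketch with placeholder table names (note that OPTIMIZE TABLE can lock the table while it runs):

<?php
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb'); // placeholders
foreach (array('big_table_1', 'big_table_2') as $table) {      // placeholder names
    $mysqli->query("ANALYZE TABLE `$table`");  // refresh index statistics
    $mysqli->query("OPTIMIZE TABLE `$table`"); // rebuild/defragment the table
}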
Unfortunately fixing this will probably need you to get into the application internals.
mytop - http://jeremy.zawodny.com/mysql/mytop/ (SHOW FULL PROCESSLIST on your MySQL)
Xdebug Profiler - http://xdebug.org/docs/profiler
Recently we've been having problems with our LAMP setup, and we started to see the number of MySQL database connections spike up every now and then. We suspect that some MySQL operation is taking longer than usual and that Apache just starts to build a backlog of connections while dealing with incoming requests.
The question is: is there a way to get per-page statistics on things like average load time, median load time, and max/min load time for each PHP page (page1.php, page2.php, page3.php, etc.)? That way we could narrow down where the problem is. Is there such a thing included as part of Apache? Maybe a separate module?
As for the log format, you can just log the time taken (%D) in your access logs, and after an incident sort on time-taken and check the URLs. I'm not aware of any application that checks this out of the box, but a lot of applications can handle Apache's access logs, so chances are there are some that can work with it. I seldom look at page-specific logs, only server totals, so I can't help you there.
If MySQL is busy / the cause:
Close a connection to MySQL if you're done with it, so the connection is released sooner.
Increase the maximum allowed connections if you really need them.
If you still have hanging processes, check the output of SHOW FULL PROCESSLIST to see what queries are being performed.
You can enable the slow_query_log, logging all queries above a certain threshold (fractional seconds are supported in newer versions; old versions only supported whole seconds) or queries not using indexes. The command-line tool mysqldumpslow can then group and count those queries.
If you have access to php.ini, you can use Xdebug: http://xdebug.org/