PHP system() using MySQL to run queries on many databases

I have a script below that goes through 380 MySQL InnoDB databases and runs various CREATE TABLE, INSERT, and UPDATE statements to migrate the schema. It runs from a web server that connects to a cloud database server. I am leaving the migration script out of this question as I don't think it is relevant.
I ran into an issue and I am trying to find a workaround.
I have a 4 GB RAM cloud database server running MySQL 5.6. I migrated 380 databases, each from 40 tables to 59 tables. About 70% of the way through I got the errors below. It died in the middle of one migration and the server went down. I was watching memory usage and it ran out of memory. It is a database-as-a-service, so I don't have root access to the server and don't know all the details.
Running queries on phppoint_smg
Warning: Using a password on the command line interface can be insecure.
ERROR 2013 (HY000) at line 355: Lost connection to MySQL server during query
Running queries on phppoint_soulofhalloween
Warning: Using a password on the command line interface can be insecure.
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
Running queries on phppoint_srvais
Warning: Using a password on the command line interface can be insecure.
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
Here is a simplified version of the PHP script.
$db_host = escapeshellarg($db_host);
$db_user = escapeshellarg($db_user);
$db_password = escapeshellarg($db_password);
foreach ($databases as $database)
{
    echo "Running queries on $database\n***********************************\n";
    // spawn one mysql client per database and feed it the migration script
    system("mysql --host=$db_host --user=$db_user --password=$db_password --port=3306 $database < ../update.sql");
    echo "\n\n";
}
My questions:
Is there any way to avoid memory usage going up as I run the migration? I am doing it one database at a time. Or is the addition of tables and data the reason it goes up?
I was able to use the server afterwards; I removed 80 databases and finished the migration. It now has 800 MB free, and I expect that to go down to 600 MB. Before the migration it was at 500 MB.

Your PHP sample doesn't use much memory, and it's not running on the database server, which is the one that went down, right? So the problem lies in your configured MySQL parameters.
Based on your Gist, and using a simple MySQL memory calculator, we can see that your MySQL service can use up to 3817 MB of memory. If the server only had 4 GB when the error happened, it's quite probable that this was the cause (you need some additional memory for the OS and running applications). Increasing the memory or fine-tuning the server should resolve it. Take a look at the MySQL documentation page on server variables to better understand each value.
However, this may not be the only cause of disconnects/timeouts (though it does seem to be your case, since increasing memory resolved the problem). Another common problem is underestimating the max_allowed_packet value (16MB in your configuration), because such scripts can easily contain queries beyond this value (for example, several value tuples in a single INSERT INTO table ...).
Consider that max_allowed_packet must be bigger than the biggest command you issue to your database (not the whole SQL file, but each command within it: the block between semicolons).
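As a quick sanity check, you can compare the server's limit against the largest single statement in your script. This is only a sketch, assuming statements in update.sql end with ";" at a line break; the connection credentials are placeholders:
$db = new mysqli('db_host', 'db_user', 'db_password');

// current server-side limit, in bytes
$row = $db->query("SHOW VARIABLES LIKE 'max_allowed_packet'")->fetch_row();
$maxPacket = (int)$row[1];

// size of the largest single statement in the migration script
$statements = explode(";\n", file_get_contents('../update.sql'));
$biggest = max(array_map('strlen', $statements));

echo "max_allowed_packet: $maxPacket bytes, biggest statement: $biggest bytes\n";
if ($biggest >= $maxPacket) {
    echo "At least one statement will exceed max_allowed_packet.\n";
}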
But please do consider more careful tuning, since a badly configured server may suddenly crash or become unresponsive, while it could run perfectly well without more memory. I suggest running a performance tuning script like MySQLTuner-perl, which will analyze your data, index usage, and slow queries, and even propose the adjustments you need to optimize your server.

It's pretty obvious that your migration SQL queries kill the server. It seems the database server simply has too little free RAM for such actions.
Depending on the database file size and the queries, this can certainly drive up your RAM usage.
Without knowing the exact server specs, the data in the database, and the queries you fire, there is no exact answer that can help you here.

Instead of spawning lots of processes, generate one file, then run it. Generate the file with something like:
$out = fopen('tmp_script.sql', 'w');
foreach ($databases as $database)
{
    fwrite($out, "USE $database;\n");          // switch to the next database
    fwrite($out, "source ../update.sql;\n");   // replay the migration script
}
fclose($out);
Then, either manually or programmatically, do
mysql ... < tmp_script.sql
It might be safer to do it manually so that PHP is out of the way.
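If you do want to run it from PHP anyway, a minimal sketch, reusing the already-escaped credentials from the question (the exit-code check is an addition, not part of the original answer):
// one mysql process for the whole migration instead of one per database
system("mysql --host=$db_host --user=$db_user --password=$db_password --port=3306 < tmp_script.sql", $exitCode);
if ($exitCode !== 0) {
    echo "mysql exited with code $exitCode\n";
}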

One thing you should try to relieve your RAM, since your server is obviously extremely low on it, is to force garbage collection after unsetting big arrays once each loop iteration is complete.
I was facing a similar problem with pthreads under PHP 7 (and 512 GB of RAM) handling 1024 async connections to MariaDB and PostgreSQL on a massive server.
Try this in each loop iteration:
foreach ($databases as $key => $database)
{
    // ... run the migration for this $database ...

    // first, unset the entry in the main array as soon as it is processed
    unset($databases[$key]);
    // second, unset the loop copy and purge immediately
    unset($database);
    gc_collect_cycles();
}
Also, set up monitoring to constantly watch RAM usage under load and see whether it spikes on a particular $database. If RAM runs too low, chunk the data and do multi-row INSERTs in batches, unsetting each batch as it is done (sketched below). This purges more RAM and avoids overly large array copies before the inner insert loop, especially if you are using classes with constructors. With 4 GB, I would tend to set batches of 400 to 500 async inserts at most, depending on your overall insert length.
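A minimal sketch of that batching idea; $rows and insert_batch() are illustrative names, not from the original answer:
while ($rows) {
    $batch = array_splice($rows, 0, 400);   // take up to 400 rows and shrink $rows
    insert_batch($db, $batch);              // hypothetical helper: one multi-row INSERT
    unset($batch);                          // release the batch we just wrote
    gc_collect_cycles();                    // reclaim memory before the next batch
}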

If your database server is crashing (or being killed by the OOM killer), then the reason is that it has been configured to use more memory than is available on the device.
You forgot to tell us what OS is running on the database nodes.
If you don't have root access to the server, then this is the fault of whoever configured it. Memory overcommit should be disabled (on Linux, the vm.overcommit_memory sysctl; requires root access). A tool like mysqltuner will show you how much memory the DBMS is configured to use (requires admin privilege). See also this Percona post.

I think they are right about RAM, but it is worth noting that the tools you use are important too.
Have you tried http://www.mysqldumper.net/ ?
If you use it (a PHP script), check the settings for the PHP memory limit and let it auto-detect.
I used to use http://www.ozerov.de/bigdump/ but it is so slow that I don't anymore.
mysqldumper, on the other hand, is fast at both backups and restores and doesn't crash (if you set the memory limit).
I have found this tool to be exceptional.

Updated:
Your comments completely change the situation.
Here is my updated answer:
Since you have no access to the MySQL server, you need an alternative approach.
First (mandatory): remove all special constructs from the import file, such as enclosing transactions, INSERT DELAYED / INSERT IGNORE, and so on.
Second (mandatory): use single-statement SQL. I do not know what your inserts look like, but make each one a single statement, a single insert; do not bundle many rows into one statement,
e.g.
instead of
insert into x (...fields...) values
(...single row...),
(...single row...),
(...single row...),
(...single row...);
do
insert into x(...fields...)values(...single row...);
insert into x(...fields...)values(...single row...);
insert into x(...fields...)values(...single row...);
insert into x(...fields...)values(...single row...);
Then try these:
You might try to "upload" my.ini with big buffers and so on. It is possible the provider of MySQL server to give you more RAM then. It is service after all :)
You might try to generate file with schema and files with data. Then import schema, then began to import table by table and see where it crashes and resume crashed file.
You might import everything with MyISAM tables. Then you can convert these in InnoDB. alter table x engine=innodb. However, doing so will lost all referential integrity and you will need to enforce it later.
Alternatively, you might import everything as MyISAM tables and then, instead of converting them in place, do something like this for each table:
alter table x rename to x_myisam;
create table x(...);
insert into x select * from x_myisam;
I believe there is a single table that breaks the process. If you find it, you can handle it manually, for example importing 10,000 rows at a time, as sketched below.
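A hedged sketch of that chunked copy, assuming the table has an auto-increment primary key named id ($db and the table names are placeholders):
$chunk = 10000;
$max = (int)$db->query('SELECT MAX(id) FROM x_myisam')->fetch_row()[0];
for ($start = 0; $start <= $max; $start += $chunk) {
    $end = $start + $chunk;
    // copy one slice of rows at a time so a failure is easy to resume
    $db->query("INSERT INTO x SELECT * FROM x_myisam WHERE id >= $start AND id < $end");
}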
Alternative approach
If your server is on Amazon AWS or a similar service, you can try scaling the server up (resizing to a larger instance) for the import, and scaling it back down after the import is done.
Old answer
Why do you use a PHP script at all? Try creating, or generating via PHP, a shell script, and then run the shell script.
It is also very important to create a large swap file on the system. Here is
one way to do it (it might not work on older systems):
sudo su           # become root
cd /somewhere
fallocate -l 16G file001
mkswap file001
chmod 600 file001
swapon file001
Then execute the php or shell script.
Once it is done, you can swapoff and remove the file, or make it permanent
in fstab.
Let me know if I need to clarify anything.

Related

User File Uploads From Browser Slow Entire Server (web app)

SEE THE LAST UPDATE AT THE BOTTOM FOR MY FINAL EVAL / SUGGESTIONS
This seems so basic that it should be a common problem, but I've already searched for anything pertaining to this issue with no luck.
-- Scenario --
I have a web application that, as one of its functions, allows users to upload photos to the server. File size limits aren't the issue, but I notice a visible difference in the speed of the server when I'm uploading a file versus not.
-- Testing --
I uploaded a 3 MB file while signed in to another account (on another computer entirely) to test the page load times in Firebug. Caching was disabled. The results are below:
Baseline page speed (without upload): 0.409, 0.449, 0.468
During 3 MB file upload: 1.28, 8.58, (upload complete) 0.379
This problem obviously compounds if more than one user is uploading a photo at the same time. It seems insane considering all the power I have with the current setup.
-- Setup --
Mediatemple DV Level 3 application server (4 GB RAM, 16 cores)
Mediatemple DV Dev Level 1 database server (running MyISAM tables)
Cloudflare CDN
Custom PHP application
WordPress sales website (front end, same server; not connected in any way to the web app)
CentOS 6.5
MySQL 5.5
-- So Far --
I had the CloudTech team at MT tune the Apache and nginx settings for me since I thought I had screwed them up, but the issue is still showing up.
I am considering changing all the DB tables to InnoDB for concurrency, but this is not related to the question at hand.
The server load averages do not seem to be significantly affected when uploading my test file.
The output of "free -m" is below:
                    total       used       free     shared    buffers     cached
Mem:                 4096        634       3461          0          0        215
-/+ buffers/cache:                419       3676
Swap:                1536         36       1499
EDIT 1
Is it possible to offload these types of things to an independent server? I realize the PHP used to upload the file would also have to run on that server, but at least then only the upload/long-process server would be affected and not the entire application.
Also, is there any other language or workflow that would work better for this? Perl? Python?
EDIT 2 (2014-08-28)
I forgot to mention two things:
1) This issue isn't just with file uploads; it happens whenever a PHP script runs for an extended time. As a test, I ran a 3-minute PHP script on my end and, sure enough, got a phone call from a client during its execution about the "slow" system.
2) I have many concurrent login sessions running. Many of these users are likely on the same script at the same time.
Here is the output from htop (screenshot not reproduced here). The "php-cgi" processes are the obvious offenders, but I don't know how to see which scripts are causing this load. Again, even if I did find the script, I feel like I should be able to run more than a handful of PHP scripts at a time.
EDIT 3 (2014-08-28)
Here is the htop output at peak hours (right now; screenshot not reproduced here). What's annoying is that the system is flying at the moment with 2x the traffic and processes.
EDIT 4 / UPDATE (2014-09-30)
After a month of staring at this, I've found some things to be true. I'm hoping some of this will help others get their high-growth applications in check before it turns into a race between traffic and server upgrades (which is what happened here).
The issue: I was using MyISAM as the exclusive database engine. I had read through hundreds of docs and forum posts about whether InnoDB or MyISAM is the better engine, most sources giving vague evaluations or (at best) overly complicated benchmarking with vague settings claiming to increase (or even decrease?) performance. Forget it all and USE InnoDB FOR ALL APPLICATIONS.
Find a good resource to help you tune your MySQL server settings and run with it (see the links below). Apparently the concurrent traffic on the server was overloading PHP while it waited for MyISAM table locks to release. This was causing excessive load on the application server, while the DB server was just idling along with hardly any CPU or memory load. Transitioning to InnoDB allows high concurrency at the cost of CPU and memory (both GOOD trade-offs; buy a bigger DB server if you have to).
In the end, the load transferred to the DB server, increasing concurrent traffic performance. To summarize: DON'T USE MyISAM ON WEB APPLICATIONS. Period. I'm sure I'll get burned a bit for saying that, but the slight performance hit (yes, MyISAM is a BIT faster at low concurrency, but who cares if your web app crashes) is well worth the increase in concurrency.
IMPORTANT POINTS
1) When you move your tables over to InnoDB (ALTER TABLE my_table ENGINE=InnoDB;), you will need to use the info in the following links to set your InnoDB-specific settings.
2) Be sure to code a retry wrapper in PHP (3 iterations sounds good) for your query calls to account for deadlocks (a situation where queries are competing for the same row(s) and one or both are stalled completely); see the sketch after this list.
3) Write your PHP to look for ERRORS coming out of your new query wrapper.
E.g.: send the query -> error found -> check for deadlock -> if deadlock, retry the query after waiting 0.1 s -> check again; if an error is found that isn't a deadlock, return the error to the app, else continue until the iteration limit is reached (3 in this example) -> if the iteration limit is hit, return the error to the application and do something now that your query has failed.
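A minimal sketch of such a wrapper, assuming mysqli; the function name and error handling are illustrative, not from the original post:
function query_with_retry(mysqli $db, string $sql, int $maxTries = 3)
{
    for ($try = 1; $try <= $maxTries; $try++) {
        $result = $db->query($sql);
        if ($result !== false) {
            return $result;                       // success
        }
        // 1213 = ER_LOCK_DEADLOCK, 1205 = ER_LOCK_WAIT_TIMEOUT
        if ($db->errno !== 1213 && $db->errno !== 1205) {
            break;                                // non-deadlock error: stop retrying
        }
        usleep(100000);                           // wait 0.1 s, then retry
    }
    // iteration limit hit or non-deadlock error: hand it back to the app
    throw new RuntimeException($db->error, $db->errno);
}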
Helpful Links
http://www.percona.com/blog/2013/09/20/innodb-performance-optimization-basics-updated/
Handling InnoDB deadlock
https://dba.stackexchange.com/questions/27328/how-large-should-be-mysql-innodb-buffer-pool-size
CAUTION
I crashed the MySQL server with the setting in the following link. You MUST completely stop mysqld (service mysqld stop) before renaming (NOT deleting) the original log files (ib_logfile0, ib_logfile1), usually found here on RH/CentOS:
/var/lib/mysql/ib_logfile0
http://www.percona.com/blog/2008/11/21/how-to-calculate-a-good-innodb-log-file-size/
Once they are renamed, start the MySQL daemon again: service mysqld start
Hopefully this helps someone,
-B

Is there a way to figure out why my production mysql is so slow?

I have a PHP file which parses a text file and writes the data to a MySQL table. The file is quite big, with over 6 million lines. I did this on my home computer, and it took about six hours for the whole process. Now I'm trying to do the exact same thing on my beefed-up dedicated server (32 GB RAM), and 12 hours later it has barely gotten through 10% of the records.
I don't know if it's connected, but I also imported a large SQL file through phpMyAdmin several days ago, and I thought it took much longer than it should have.
What could be the problem?
TIA!
Unless you do profiling and stuff like EXPLAIN queries, it's hard to say.
There are some possibilities that may be worth investigating though:
Lots of indexes: if you're doing INSERTs, then every index on the table you're inserting into will need to be updated. If there are a lot of indexes, a single insert can trigger a lot of writes. You can solve this by dropping the indexes before you start and reinstating them afterward.
MyISAM versus InnoDB: The former tends to be faster as it sacrifices features for speed. Writing to an InnoDB table tends to be slower. NOTE: I'm merely pointing out that this is a potential cause of an application running slower, I'm not recommending that you change an InnoDB table to MyISAM!
No transaction: if using InnoDB, you can speed up bulk operations by doing them inside a transaction (a minimal sketch follows at the end of this answer). If you're not using a transaction, then there's an implicit transaction around every INSERT you do.
Connection between the PHP machine and the SQL server: In testing you were probably running both PHP and the SQL server on the same box. You may have been connecting through a named pipe or over a TCP/IP connection (which has more overhead), but in either case the bandwidth is effectively unlimited. If the SQL server isn't the same machine as the one running the PHP script then it will be restricted to whatever bandwidth exists in the connection between the two.
Concurrent users: you were the only user of your test SQL database at any given time. The live system will have any number of additional users connected and running queries at any given moment. That's going to take time away from your script, adding to its run time. You should run big SQL jobs at night so as not to inconvenience other users, but also so they can't take performance away from you.
There are other reasons too, but the ones above are worth investigating first.
Of course the problem may be on the PHP side, you can't be sure that it's on the database until you investigate exactly where it's slowing down and why.
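Here is a minimal sketch of the transaction point above, assuming mysqli; the table, columns, batch size, and $rows variable are placeholders, not from the original answer:
$db = new mysqli('localhost', 'user', 'password', 'mydb');
$stmt = $db->prepare('INSERT INTO items (name, value) VALUES (?, ?)');

$db->begin_transaction();
$count = 0;
foreach ($rows as $row) {                 // $rows: your parsed records (assumption)
    $stmt->bind_param('ss', $row[0], $row[1]);
    $stmt->execute();
    if (++$count % 1000 === 0) {          // commit every 1000 rows
        $db->commit();
        $db->begin_transaction();
    }
}
$db->commit();                            // commit the final partial batch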
Check whether the PHP memory_limit setting or the MySQL buffer settings are lower on the server than on your local machine.
Well, I ended up implementing all the changes to the DB settings as advised here: http://www.mysqlperformanceblog.com/2006/09/29/what-to-tune-in-mysql-server-after-installation/
And now the DB is roaring along! I'm not sure exactly which setting made the difference, but it's working now, and that's the main thing! In any case, you all gave me great advice which I'll be following up on, so thanks!

maximum query length in mysqli_query

How can I determine the maximum length of the $query parameter accepted by mysqli_multi_query (or mysqli_query) in PHP?
I have a PHP program which generates a large string made of UPDATE SQL commands, separated by ';'. The problem is that if that string exceeds a certain length, mysqli_query generates an error like 'MySQL server has gone away'. I noticed that the limit seems to be around 1 MB, but how can I probe it so that I can make sure I never exceed it?
The script needs to run about 7000 updates on 25 or so fields. Executing one update at a time proved very slow; concatenating multiple updates runs much faster.
Any possibility to run multiple queries even faster?
Thank you for any advice!
You should take a look at the MySQL error logs.
If you don't have access to the machine (hosting etc.), you may ask your administrator or helpdesk for that log.
MySQL supports very big queries. I'm not sure if there is any hard limit, but when you go over the network you may run into the packet size limit.
Check max_allowed_packet in the MySQL configuration and try setting a bigger packet size. I'm not sure about the default configuration, but it may be 1 MB, which would be too small for a query with 7000 updates at once.
MySQL may need more RAM to process a query like this.
If you can't reconfigure MySQL, you have to split your big query into smaller queries somehow (see the sketch below).
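For example, a minimal sketch of that splitting, assuming $updates is an array of single UPDATE statements and $mysqli is an open connection (the names and chunk size are illustrative):
$chunkSize = 500;   // tune so each batch stays well under max_allowed_packet
foreach (array_chunk($updates, $chunkSize) as $chunk) {
    $sql = implode(';', $chunk) . ';';
    if (!$mysqli->multi_query($sql)) {
        die('Query failed: ' . $mysqli->error);
    }
    // flush every result set before issuing the next multi_query call
    while ($mysqli->more_results() && $mysqli->next_result()) {
        // nothing to fetch; UPDATEs return no result sets
    }
}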
You may also read this for more information:
devshed - MySQL server has gone away
You asked:
Any possibility to run multiple queries even faster?
There is no simple answer to that question. It depends on the query, the database schema, etc.
Increasing the MySQL cache sizes in the configuration file may help a lot in most cases involving big, simple updates with little computation, because the database engine will operate on RAM rather than on disk. With a big cache, the first big query may sometimes be slower, because the data is not yet loaded into RAM, but once it finally loads, queries that need a lot of read/write operations will run much faster.
Added later:
I assume your data processing needs PHP's unserialize() function, which may be hard to implement in pure SQL, so you have to do it in PHP. :) If you have access to the server console, you may create a cron (Linux scheduler) job that calls the PHP script from the shell during the night.
Added even later:
After the discussion in the comments I have one more idea. You can make a full database (or single table) backup from phpMyAdmin, download it, and restore the data on your home computer (on Windows you may use a XAMPP or WAMP server). On your home computer you can run mysql.exe and process the data locally.
I found a limit of 16 field/value pairs on an INSERT statement; beyond that number I got a "Forbidden" error. The total length of the working INSERT statement was 392 characters.
Use a loop to do any massive work and just use regular mysqli_query; I got over 16,000 queries through like that. Some things have to be changed in php.ini as well: raise post_max_size as much as you can if you're sending a lot of characters, raise max_input_vars if you're sending a lot of different variables, and make the PHP memory limit bigger (at least 256 MB). Make sure you're not running out of system memory while the queries run. You may also need to raise the timeouts (max_execution_time and max_input_time) from 30 and 60 to 200 or higher when sending really large numbers of queries.
If you don't change these settings, your POST will fail even if everything else is correct; PHP aborts the script when you exceed any php.ini limit. Doing the queries one by one, you don't have to change any MySQL settings, but it will take some time if you're inserting or updating anything over a thousand rows.

PHP in combination with MySQL is extremely slow

I am currently experiencing slowness with one of my servers. It is running an apache2 server with PHP and MySQL. The MySQL server is hosted on the same machine as the webserver itself.
Whenever I request a PHP file containing MySQL queries, the page needs approximately 24 seconds to show up. While the page is being requested, the CPU usage of apache2 goes up to 11% (!), which is a lot compared to what it was a week ago.
Non-PHP files or PHP files without MySQL queries are showing up immediately.
What could be causing the problems with scripts containing MySQL queries?
I was unable to find any useful information inside the apache error logs.
In the MySQL console:
show full processlist;          -- shows the SQL statements currently running
To check where the log file is:
show variables like '%log%';    -- shows the MySQL log-related variables
When doing query benchmarks or testing, always remember to turn off the query cache, using:
set session query_cache_type=off;
Database queries take time to run, and each query involves opening at least one file; file access is slow.
You can speed up the requests by running the database in RAM instead of from the hard drive, but the real answer is probably to cache as much as you can so you're doing as little database querying as possible.
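A minimal sketch of that kind of caching, assuming the APCu extension and mysqlnd are available; the key prefix, TTL, and function name are illustrative:
function cached_query(mysqli $db, string $sql, int $ttl = 300): array
{
    $key = 'q_' . md5($sql);
    $rows = apcu_fetch($key, $hit);
    if ($hit) {
        return $rows;                      // served from the cache, no DB round-trip
    }
    $rows = $db->query($sql)->fetch_all(MYSQLI_ASSOC);
    apcu_store($key, $rows, $ttl);         // keep the result for $ttl seconds
    return $rows;
}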
You can also check whether the MySQL database has grown beyond 2 GB (or 4 GB), for example because of some CMS logging function, and is hitting a file size limit.

How to find root cause for "too many connections" error in MySQL/PHP

I'm running a web service which runs algorithms that serve millions of calls daily, plus some background processing.
Every now and then I see a "Too many connections" error on attempts to connect to the MySQL box for a few seconds. However, this is not necessarily attributable to high-traffic times or anything else I can put my finger on.
I want to find the bottleneck causing it. Other than at the specific times this happens, the server isn't too loaded in terms of CPU and memory, has 2-3 connections (threads) open, and everything works smoothly. (I use Zabbix for monitoring.)
Any creative ideas on how to trace it?
Try to have an open MySQL console when this happens and issue SHOW PROCESSLIST; to see which queries are being executed.
Alternatively, you could enable logging of slow queries: in my.cnf, insert this line in the [mysqld] section:
log-slow-queries=/var/log/mysql-log-slow-queries.log
and use set-variable=long_query_time=1 to define the minimum time a query should take in order to be considered slow. (Remember to restart MySQL for the changes to take effect.)
What MySQL table type are you using? MyISAM or InnoDB (or another one)? MyISAM will use table level locking, so you could run into a scenario where you have a heavy select running, followed by an update on the same table and numerous select queries. The last select queries will then have to wait until the update is finished (which in turn has to wait until the first - heavy - select is finished).
For InnoDB a tool like innotop could be useful to find the cause of the deadlock (see http://www.xaprb.com/blog/2006/07/31/how-to-analyze-innodb-mysql-locks/).
BTW, the query that is causing the lock should be one of those not in the locked state.
The SHOW OPEN TABLES command will display the lock status of all the tables in MySQL. If one or more of your queries is causing the connection backlog, combining SHOW PROCESSLIST with the open tables should narrow down exactly which query is holding up the works.
Old topic. However, I just had this issue, and it was because I had a mysqldump script scheduled three times per day. At those times, if my web application was also getting a fair amount of usage, all of the web application queries queued up on top of each other while mysqldump was locking all of the tables in the database. The best option is to set up a replication slave on a separate machine and take your backups from the slave rather than from the production server.
May be related to this bug in MySQL for FULLTEXT search:
http://bugs.mysql.com/bug.php?id=37067
In this case, the FULLTEXT initialization actually hangs MySQL. Unfortunately there doesn't seem to be a solution.
Without knowing too much about your implementation, or PHP in general: are you sure that you do not have any problems with lingering DB connections, e.g. connections that stay open even after the request has been processed?
In PHP a connection is usually closed automatically when the script ends or when calling mysql_close($conn); but if you use any sort of homegrown connection pooling, that could introduce problems.
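A small sketch of one way to guard against that, using mysqli and a try/finally block (credentials are placeholders):
$db = new mysqli('localhost', 'user', 'password', 'mydb');
try {
    // ... run the request's queries here ...
} finally {
    // always release the connection so it doesn't count against max_connections
    $db->close();
}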
