Maximum query length in mysqli_query - PHP

How can I determine the maximum length of the $query parameter accepted by mysqli_multi_query (or mysqli_query) in PHP?
I have a PHP program which generates a large string made of UPDATE SQL commands separated by ';'. The problem is that if that string exceeds a certain length, mysqli_query generates an error like 'MySQL server has gone away'. I noticed that the length seems to be around 1 MB, but how can I probe it so that I can make sure I never exceed that limit?
The script needs to run about 7000 updates on 25 or so fields. Executing one update at a time proved very slow; concatenating multiple updates runs much faster.
Any possibility to run multiple queries even faster?
Thank you for any advice!

You should take a look at the MySQL error logs.
If you don't have access to the machine (hosting etc.) you may ask your administrator or helpdesk for that log.
MySQL supports very big queries. I'm not sure if there is any hard limit, but when you are going over the network you may run into problems with the packet size.
You may check --max_allowed_packet in the MySQL configuration and try to set a bigger packet size. I'm not sure about the default configuration, but it may be 1MB, which may be too small to accept a query with 7000 updates at once.
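For example, you can read the current value from PHP before you build the big query string. A minimal sketch (the connection details are placeholders for your own):
// Read the server's max_allowed_packet so the script can keep its
// concatenated query string safely below it.
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb');
$row = $mysqli->query("SHOW VARIABLES LIKE 'max_allowed_packet'")->fetch_assoc();
$maxPacket = (int) $row['Value'];   // value in bytes
echo "max_allowed_packet is $maxPacket bytes\n";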
MySQL may need more RAM to process a query like this.
If you can't reconfigure MySQL you have to split your big query into smaller queries somehow, as sketched below.
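A rough sketch of such splitting, assuming $updates is an array of single UPDATE statements (without trailing semicolons), $mysqli is an open connection, and $maxPacket was read as shown above:
// Send the UPDATEs in chunks that stay below ~80% of max_allowed_packet.
$budget = (int) (0.8 * $maxPacket);
$chunk = '';
foreach ($updates as $sql) {
    $piece = $sql . ";\n";
    if ($chunk !== '' && strlen($chunk) + strlen($piece) > $budget) {
        $mysqli->multi_query($chunk);
        while ($mysqli->more_results() && $mysqli->next_result()) {}  // drain results
        $chunk = '';
    }
    $chunk .= $piece;
}
if ($chunk !== '') {
    $mysqli->multi_query($chunk);
    while ($mysqli->more_results() && $mysqli->next_result()) {}
}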
You may also read this for more information:
devshed - MySQL server has gone away
You asked:
Any possibility to run multiple queries even faster?
There is no simple answer to that question. It depends on the query, the database schema, etc.
Increasing the MySQL cache sizes in the configuration file may help a lot in most cases involving big, simple updates with little computation, because the database engine will operate in RAM rather than on the hard disk. When a big cache is used, the first big query may sometimes be slower because the data is not yet loaded into RAM, but once it is loaded, queries that need a lot of read/write operations will run much faster.
Added later:
I assume your data processing needs PHP's unserialize() function, which may be hard to implement in pure SQL, so you have to do it in PHP :) If you have access to the server console you may create a cron (Linux scheduler) job that calls the PHP script from the shell during the night.
Added even later:
After the discussion in the comments I have one more idea. You can make a full database or single-table backup from phpMyAdmin, download it, and restore the data on your home computer (on Windows you may use XAMPP or WAMP server). On your home computer you can run mysql.exe and process the data locally.

I found a limit of 16 field/value pairs on an INSERT statement. Beyond that number I got a "Forbidden" error. My total INSERT statement length on the working statement was 392 characters.

Use a for loop to do any massive work and just use regular mysqli_query; I got over 16,000 queries in like that. Some things have to be changed in the php.ini file as well: post_max_size needs to be raised (make it as big as you can if you're sending a lot of characters), max_input_vars should be increased if you're sending a lot of different variables, and memory_limit should be made bigger. Make sure you're not running out of system memory while running the queries. If you do everything right you can send over 20,000 queries. Set the PHP memory limit to at least 256 MB. You might also need to increase the timeout values from 30 and 60 to 200 or higher if you're sending really large numbers of queries. If you don't change any settings, your POST will fail even if everything else is correct; PHP will abort the script if you go beyond any php.ini limits. You don't have to change any MySQL settings when doing it one by one, but it will take some time if you're inserting or updating anything over 1000 queries.
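A minimal sketch of that loop approach, using a prepared statement and a single transaction (the table, columns and $rows variable are made up for illustration):
// One UPDATE per row, but prepared once and wrapped in one transaction,
// which keeps each packet tiny and is still fast on InnoDB.
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb');
$stmt = $mysqli->prepare("UPDATE items SET price = ? WHERE id = ?");
$stmt->bind_param('di', $price, $id);
$mysqli->begin_transaction();
foreach ($rows as $row) {            // $rows: array of ['id' => ..., 'price' => ...]
    $id    = $row['id'];
    $price = $row['price'];
    $stmt->execute();
}
$mysqli->commit();
$stmt->close();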

Related

PHP system() using MySQL to run queries on many databases

I have a script below that goes through 380 MySQL InnoDB databases and runs various CREATE TABLE, INSERT, UPDATE, etc. statements to migrate the schema. It runs from a web server that connects to a cloud database server. I am leaving the migration script out of this question as I don't think it is relevant.
I ran into an issue and I am trying to find a workaround.
I have a 4 GB RAM cloud database server running MySQL 5.6. I migrated 380 databases from 40 tables to 59 tables each. About 70% of the way through I got the errors below. It died in the middle of one migration and the server went down. I was watching memory usage and it ran out of memory. It is a database-as-a-service, so I don't have root access to the server and don't know all the details.
Running queries on phppoint_smg
Warning: Using a password on the command line interface can be insecure.
ERROR 2013 (HY000) at line 355: Lost connection to MySQL server during query
Running queries on phppoint_soulofhalloween
Warning: Using a password on the command line interface can be insecure.
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
Running queries on phppoint_srvais
Warning: Using a password on the command line interface can be insecure.
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
Here is a simplified version of the PHP script.
$db_host = escapeshellarg($db_host);
$db_user = escapeshellarg($db_user);
$db_password = escapeshellarg($db_password);
foreach($databases as $database)
{
    echo "Running queries on $database\n***********************************\n";
    system("mysql --host=$db_host --user=$db_user --password=$db_password --port=3306 $database < ../update.sql");
    echo "\n\n";
}
My questions:
Is there any way to avoid memory usage going up as I do migration? I am doing it one database at a time. Or is the addition of tables and data the reason it goes up?
I was able to use the server afterward; I removed 80 databases and finished the migration. It now has 800 MB free, and I expect that to go down to 600 MB. Before the migration it was at 500 MB.
Your PHP sample doesn't use much memory, and it's not running on the Database server, which is the one that went down, right? So the problem is in your configured MySQL parameters.
Based on your Gist, and using a simple MySQL memory calculator, we can see that your MySQL service can use up to 3817MB of memory. If the server only had 4GB when the error happened, it's pretty probable that this was the cause (you need to have some additional memory for the OS and running applications). Increasing the memory or fine-tuning the server would resolve it. Take a look at the MySQL documentation page on server variables to best understand each value.
However, this may not be the only cause for disconnects/timeouts (but it does seem to be your case, since increasing memory resolved the problem). Another common problem is to underestimate the max_allowed_packet value (16MB in your configuration), because such scripts can easily have queries beyond this value (for example, if you have several values for a single INSERT INTO table ...).
Keep in mind that max_allowed_packet needs to be bigger than the biggest single command you issue to your database (not the whole SQL file, but each command within it, i.e. each block between semicolons).
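As a rough check you can measure the biggest statement in the dump from PHP. A sketch (the naive split on ';' ignores semicolons inside string literals, so treat the number as an estimate):
// Estimate the largest single statement in ../update.sql.
$sql = file_get_contents('../update.sql');
$largest = 0;
foreach (explode(';', $sql) as $statement) {
    $largest = max($largest, strlen($statement));
}
printf("Largest statement is roughly %.2f MB\n", $largest / 1048576);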
But please do consider more careful tuning, since a badly configured server may suddenly crash or become unresponsive, while it could run perfectly well without adding more memory. I suggest running performance-tuning scripts like MySQLTuner-perl, which will analyze your data, index usage and slow queries, and even propose the adjustments you need to optimize your server.
It's pretty obvious that your migration SQL queries kill the server. It seems that the database server simply has too little free RAM for such operations.
Depending on the database file size and the queries, this can certainly drive up your RAM usage.
Without knowing the exact server specs, the data in the database and the queries you fire, there is no exact answer that can help you here.
Instead of spawning lots of processes, generate one file, then run it. Generate the file with something like:
$out = fopen('tmp_script.sql', 'w');
foreach($databases as $database)
{
    fwrite($out, "USE $database;\n");
    fwrite($out, "source ../update.sql;\n");
}
fclose($out);
Then, either manually or programmatically, do
mysql ... < tmp_script.sql
It might be safer to do it manually so that PHP is out of the way.
One thing you should try in order to relieve your RAM, as your server is obviously extremely low on RAM, is to force garbage collection after unsetting big arrays once the loop is complete.
I was facing a similar problem with pthreads under PHP 7 (and 512 GB of RAM) while handling 1024 async connections to MariaDB and PostgreSQL on a massive server.
Try this for each loop.
// first, unset the current entry in the main array right at the start of the loop iteration:
unset($databases[$key]);
// then unset the per-iteration variable and purge immediately:
unset($database);
gc_collect_cycles();
Also, set up a control that constantly monitors RAM usage under load to see if this happens on a particular $database. If your free RAM gets too low, have the control chunk your $database, run the inserts in smaller batches, and unset them as they are done. This purges more RAM and avoids overly large array copies before the inner insert loop. This is especially true if you are using classes with constructors. With 4 GB, I would tend to set batches of 400 to 500 async inserts at most, depending on the overall length of your inserts.
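A sketch of such a control, assuming a hypothetical runMigration() helper that wraps the system() call and an arbitrary 256 MB threshold:
// Log PHP's own memory usage after each database and stop early (or switch
// to smaller batches) before the machine starts swapping.
foreach ($databases as $key => $database) {
    runMigration($database);                 // hypothetical helper around system()
    unset($databases[$key], $database);
    gc_collect_cycles();
    $usedMb = memory_get_usage(true) / 1048576;
    error_log(sprintf("after %s: %.1f MB used by PHP", $key, $usedMb));
    if ($usedMb > 256) {                     // example threshold
        error_log("memory threshold reached, stopping early");
        break;
    }
}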
If your database server is crashing (or being killed by the OOM killer) then the reason is that it has been configured to use more memory than is available on the device.
You forgot to tell us what OS is running on the database nodes.
If you don't have root access to the server then this is the fault of whoever configured it. Memory overcommit should be disabled (requires root access). A tool like mysqltuner will show you how much memory the DBMS is configured to use (requires admin privilege). See also this Percona post.
I think they are right about RAM, but it is worth noting that the tools you use are important.
Have you tried http://www.mysqldumper.net/ ?
If you use it (a PHP script), check the settings for the PHP memory limit and let it auto-detect.
I used to use http://www.ozerov.de/bigdump/
but it's so slow that I don't use it anymore.
MySQLDumper, on the other hand, is fast at both backups and restores and doesn't crash (if you set the memory limit).
I have found this tool to be exceptional.
Updated:
Your comments completely change the situation.
Here is my updated answer:
Since you have no access to the MySQL server, you need some alternative approach.
It is mandatory to remove all special "things" from the import file, such as enclosing transactions, INSERT DELAYED / INSERT IGNORE and so on.
It is also mandatory to use single-statement SQL.
I do not know what the inserts look like, but make them single statements - one insert per statement - and do not bundle many rows into a single statement,
e.g.
instead of
insert into x
(...fields...)values(...single row...),
(...fields...)values(...single row...),
(...fields...)values(...single row...),
(...fields...)values(...single row...)
;
do
insert into x(...fields...)values(...single row...);
insert into x(...fields...)values(...single row...);
insert into x(...fields...)values(...single row...);
insert into x(...fields...)values(...single row...);
Then try these:
You might try to "upload" a my.ini with big buffers and so on. It is possible that the provider of the MySQL server will then give you more RAM. It is a service after all :)
You might try to generate a file with the schema and separate files with the data. Then import the schema, then begin to import table by table, see where it crashes, and resume the crashed file.
You might import everything as MyISAM tables. Then you can convert them to InnoDB with alter table x engine=innodb. However, doing so will lose all referential integrity and you will need to enforce it later.
You might import everything as MyISAM tables. Then, instead of converting them, you can do
something like this for each table:
alter table x rename to x_myisam;
create table x(...);
insert into x select * from x_myisam;
I believe there is a single table that breaks the process. If you find it, you can proceed with it manually, for example by importing 10,000 rows at a time, as sketched below.
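A sketch of that manual, chunked copy, assuming the table has an auto-increment integer primary key named id (names follow the x / x_myisam convention above):
// Copy the problematic table over in slices of 10,000 rows.
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb');
$max = (int) $mysqli->query("SELECT MAX(id) FROM x_myisam")->fetch_row()[0];
$chunkSize = 10000;
for ($start = 0; $start <= $max; $start += $chunkSize) {
    $end = $start + $chunkSize - 1;
    $mysqli->query("INSERT INTO x SELECT * FROM x_myisam WHERE id BETWEEN $start AND $end");
    echo "copied ids $start..$end\n";
}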
Alternative approach
If your server is in Amazon AWS or a similar service, you can try to "scale out" ("enlarge") the server for the import, and to "scale down" ("shrink") it after the import is done.
Old answer
Why do you use a PHP script? Try to create or generate a shell script via PHP, then run the shell script.
It is also very important to create a huge swap file on the system. Here is
one way to do it (it might not work on older systems):
sudo su # become root
cd /somewhere
fallocate -l 16G file001
mkswap file001
chmod 600 file001
swapon file001
Then execute the php or shell script.
Once it is done, you can swapoff and remove the file, or make it permanent
in fstab.
Let me know if I need to clarify anything.

What is the max of max_execution_time in PHP?

I have about 200,000 rows that need to be added to the database.
I have set my max_execution_time = 4000, but I still get this error.
What is the max of max_execution_time in PHP?
I want to take off this restriction completely and set it to unlimited if possible.
I know that using a value of 0 in set_time_limit will tell PHP not to time out a script/program before it has finished running. I'm pretty sure setting the same value for max_execution_time will have the same effect.
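For reference, both ways to lift the limit from inside the script (the CLI already defaults to no limit):
set_time_limit(0);                   // 0 means "no limit"
ini_set('max_execution_time', '0');  // same effect via the ini setting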
That said: Some hosting companies have other systems running that look for long-running processes of any sort (PHP, Ruby, Perl, random programs, etc.) and kill them if they're running too long. There's nothing you can do to stop these systems from killing your process (other than moving to a different host).
Also, certain versions of PHP have a number of small memory leaks and inefficient garbage collection that can start to eat up memory when used in long-running processes. You may hit the PHP memory limit this way, or you may use up the memory available to your virtual machine.
If you run into these challenges, the usual approach is to batch process the rows in some way.
Hope that helps, and good luck!
Update: Re batch processing -- if you find you're stuck on a system that can only insert around 10,000 rows at a time, then rather than writing a program to insert all 200,000 rows at once, you write a program/system that will insert, say, 9,000 and then stop. Then you run it again and it inserts the next 9,000, and then the next 9,000, until you're done. How you do this will depend on where you're getting your data from. If you're pulling the data from flat files it can be as simple as splitting the flat files into multiple files. If you're pulling from another database table it can be as simple as writing a program to pull out arrays of IDs in groups of 9,000 and having your main program select those 9,000 rows. Messaging queue systems are another popular approach for this sort of task.
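A sketch of that batching idea, using a small state file to remember the offset between runs; loadRows() and insertRow() are hypothetical helpers standing in for however you read and insert your data:
// Insert up to 9,000 rows per run and record where we stopped.
$batchSize = 9000;
$stateFile = 'import_offset.txt';
$offset = is_file($stateFile) ? (int) file_get_contents($stateFile) : 0;
$rows = loadRows($offset, $batchSize);   // hypothetical: next slice of source data
foreach ($rows as $row) {
    insertRow($row);                     // hypothetical: one INSERT per row
}
file_put_contents($stateFile, $offset + count($rows));
echo count($rows) . " rows inserted, next offset: " . ($offset + count($rows)) . "\n";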

Is there a way to figure out why my production mysql is so slow?

I have a PHP file which parses a text file and writes the data to a MySQL table. The file is quite big, with over 6 million lines. I did this on my home computer, and the whole process took about six hours. Now I'm trying to do the exact same thing on my beefed-up dedicated server (32 GB RAM), and 12 hours later it has barely got through 10% of the records.
I don't know if it's connected, but I also imported a large sql file through phpmyadmin several days ago, and I thought it took much longer than it should.
What could be the problem?
TIA!
Unless you do profiling and stuff like EXPLAIN queries, it's hard to say.
There are some possibilities that may be worth investigating though:
Lots of indexes: If you're doing INSERTs, then every index associated with the table you're inserting into will need to be updated. If there are a lot of indexes, then a single insert can trigger a lot of writes. You can solve this by dropping the indexes before you start and reinstating them afterward.
MyISAM versus InnoDB: The former tends to be faster as it sacrifices features for speed. Writing to an InnoDB table tends to be slower. NOTE: I'm merely pointing out that this is a potential cause of an application running slower, I'm not recommending that you change an InnoDB table to MyISAM!
No transaction: If using InnoDB, you can speed up bulk operations by doing them inside a transaction (see the sketch after this list). If you're not using a transaction, then there's an implicit transaction around every INSERT you do.
Connection between the PHP machine and the SQL server: In testing you were probably running both PHP and the SQL server on the same box. You may have been connecting through a named pipe or over a TCP/IP connection (which has more overhead), but in either case the bandwidth is effectively unlimited. If the SQL server isn't the same machine as the one running the PHP script then it will be restricted to whatever bandwidth exists in the connection between the two.
Concurrent users: You were the only user of your test SQL database at any given time. The live system may (and will) have any number of additional users connected and running queries at any given time. That's going to take time away from your script, adding to its run time. You should run big SQL jobs at night so as not to inconvenience other users, but also so they can't take performance away from you either.
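Regarding the "No transaction" point above, here is a minimal sketch of wrapping bulk INSERTs in one explicit transaction (table, columns and the $records variable are placeholders):
// One commit at the end instead of an implicit commit per INSERT.
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb');
$stmt = $mysqli->prepare("INSERT INTO records (name, value) VALUES (?, ?)");
$stmt->bind_param('ss', $name, $value);
$mysqli->begin_transaction();
foreach ($records as $record) {          // $records: the parsed data
    list($name, $value) = $record;
    $stmt->execute();
}
$mysqli->commit();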
There are other reasons too, but the ones above are worth investigating first.
Of course the problem may be on the PHP side, you can't be sure that it's on the database until you investigate exactly where it's slowing down and why.
Check whether the PHP memory_limit setting or the MySQL buffer settings are lower on the server than on your local machine.
Well, I ended up implementing all the changes to the DB settings as advised here: http://www.mysqlperformanceblog.com/2006/09/29/what-to-tune-in-mysql-server-after-installation/
And now the DB is roaring along! I'm not sure exactly which setting made the difference, but it's working now, so that's the main thing! In any case you all gave me great advice which I'll be following up on, so thanks!

Is there a way to find out which PHP pages are taking more resources in a linux server?

The websites on my Linux server keep going down again and again, but SSH, FTP, etc. stay alive. So I had a look at the server through SSH and used the top command, which lists all the processes. It shows that when some PHP pages are executed, MySQL CPU usage reaches 100%. So is there any command/log which can be used to find out which PHP pages are taking up so much MySQL time? Thank you...
You may want to take a look at your Apache log format to see if it includes the %D parameter, as this indicates the amount of time taken to serve a request in microseconds.
If you exclude anything but requests to PHP scripts, you should get an idea of which scripts are taking the longest suggesting high execution time. Obviously this could also mean a very large response payload...
There are multiple aspects to resource consumption.
As mobius mentioned, you can use SHOW FULL PROCESSLIST in MySQL to see what is currently running. Look at the processes taking longer than you would expect and check their queries to find hints about where they originate in your application.
The problem may not be with the application. It might simply be a matter of tuning MySQL, which most of the time means adding or changing indexes. EXPLAIN is the command that will help you analyze the execution plan MySQL decided to use. Reading EXPLAIN output takes some practice. The best reference I have is High Performance MySQL.
You can also use the MySQL slow query log to get information about the slow queries happening when you are not in front of the server.
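If your MySQL account has the SUPER privilege (managed hosting often does not allow this, in which case use my.cnf or ask support), the slow query log can even be switched on from PHP. A sketch:
// Enable the slow query log at runtime and see where it is written.
$mysqli = new mysqli('localhost', 'admin_user', 'password');
$mysqli->query("SET GLOBAL slow_query_log = 'ON'");
$mysqli->query("SET GLOBAL long_query_time = 1");    // log anything slower than 1 second
$row = $mysqli->query("SHOW VARIABLES LIKE 'slow_query_log_file'")->fetch_assoc();
echo "slow queries are logged to: " . $row['Value'] . "\n";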
If MySQL is running at 100%, you will probably find the problem from there. If you really want to track the usage from PHP, you can set up XHProf, a high performance profiler created by Facebook to run on production sites. You can set it up to sample one request out of 100 and get a bigger picture of the performance of your site. There are a few articles out there that explain how to set it up.
Finally, XDebug and KCacheGrind can be used in development to profile one request at a time.
If MySQL is getting stuck at 100% then you've probably got some badly tuned MySQL queries inside one of your PHP applications. This time is clocked up in the MySQL daemon and so won't show up in the %D value. It could be that indexes are out of date.
If you have access to the DB at the command prompt through SSH, then you could try doing an ANALYZE TABLE and OPTIMIZE TABLE on any large tables. Also look at "The Slow Query Log" in the MySQL documentation.
Unfortunately fixing this will probably need you to get into the Application internals.
mytop - http://jeremy.zawodny.com/mysql/mytop/ (SHOW FULL PROCESSLIST on your mySQL)
Xdebug Profiler - http://xdebug.org/docs/profiler

PHP in combination with MySQL is extremely slow

I am currently experiencing slowness with one of my servers. It is running an apache2 server with PHP and MySQL. The MySQL server is hosted on the same machine as the webserver itself.
Whenever I request a PHP file containing MySQL queries, the page needs approximately 24 seconds to show up. While requesting the page, the CPU usage of apache2 goes up to 11% (!), which is a lot compared to what it used to be a week ago.
Non-PHP files or PHP files without MySQL queries are showing up immediately.
What could be causing the problems with scripts containing MySQL queries?
I was unable to find any useful information inside the apache error logs.
In the MySQL console:
show full processlist; <-- shows the SQL statements currently running
To check where the log file is:
show variables like '%log%'; <-- shows the MySQL log-related variables
When doing query benchmarking / testing, always remember to turn off the query cache, using:
set session query_cache_type=off;
Database queries take time to run, and each query involves opening up at least one file. File access is slow.
You can speed up the requests by running the database in RAM instead of from the hard drive, but the real answer is probably to cache as much as you can so you're doing as little database querying as possible.
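A sketch of that caching idea using APCu (the key, TTL and query are example values; requires the APCu extension, and fetch_all() needs the mysqlnd driver):
// Keep an expensive query result in APCu for five minutes so repeated
// page loads skip the database entirely.
$key = 'homepage_articles';
$articles = apcu_fetch($key, $hit);
if (!$hit) {
    $mysqli = new mysqli('localhost', 'user', 'password', 'mydb');
    $articles = $mysqli->query("SELECT id, title FROM articles ORDER BY id DESC LIMIT 10")
                       ->fetch_all(MYSQLI_ASSOC);
    apcu_store($key, $articles, 300);
}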
You can check whether the MySQL database has grown larger than 2 GB (or 4 GB) because of some CMS logging function and is hitting a file size limit.
