I have already gone through the question Data Limit on MySQL DB Insert, but I was unable to solve my problem with the limited info there.
I am using WAMP.
I have numerous Rich Text editors and 4 images whose contents are being sent over to another page in a POST request. Beyond a certain size threshold the query fails. Is there a way around this?
EDIT: When I display the query string, it seems that I am able to retrieve every bit of data that was sent via POST, so I am quite sure the problem is DB related.
Images are being stored as a BLOB.
EDIT #2: Error showing is "MySQL server has gone away".
You may be exceeding the max_allowed_packet setting. See here for more details.
Quote:
If you are using the mysql client program, its default max_allowed_packet variable is 16MB.
If you are uploading uncompressed images, this value is fairly easy to reach.
Also, it would be great if you could name the specific database interface class that you use (PDO? mysql_? mysqli_?), as different classes handle errors differently. It could just as well not handle an oversized packet situation at all.
P.S.: You should really be checking your logs for the specific error you encounter. The first place to look would be /var/log/mysql/error.log (the path can vary depending on your environment).
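Whichever interface it turns out to be, a quick way to surface the exact error is to log it right after the failing call. A minimal sketch with the old mysql_* functions, assuming $link is your connection and $sql is the failing INSERT:

if (!mysql_query($sql, $link)) {
    error_log('Insert failed: ' . mysql_error($link));
}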
Update:
mysql_error() returned "MySQL server has gone away"
From the manual page for the error: "You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end..."
(quote courtesy of #Colin Morelli)
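If the packet size does turn out to be the problem, on WAMP you would raise it in the [mysqld] section of my.ini (my.cnf on Linux) and restart MySQL. The 64M below is only an illustrative value, sized generously to cover a few uncompressed images:

[mysqld]
max_allowed_packet = 64M

After the restart, SHOW VARIABLES LIKE 'max_allowed_packet'; will confirm the value the server is actually running with.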
Sometimes PHP hits its memory limit if the uploaded file is too large. Depending on your config, this might help:
set_time_limit(0);             // remove the execution time limit for this request
ini_set('memory_limit', '-1'); // remove the PHP memory limit (use with care)
EDIT:
If it is not the memory allocation issue we all rushed to answer, then it could be a storage engine quirk (the MEMORY engine, for instance), so you could probably check that as well.
Comment:
In my experience it is most likely a memory issue, since it only occurs when you try bigger imports (it happens to my application when I try to return a 20MB result set from a single query).
Related
I'm sure a lot of developers have run into the dreaded "MySQL server has gone away" issue, especially when dealing with long-running scripts such as those reserved for background or cron jobs. This is caused by the connection between MySQL and PHP being dropped. What exactly is the best way to prevent that from happening?
I currently use a custom CDbConnection class with a setActive method straight out of here:
http://www.yiiframework.com/forum/index.php/topic/20063-general-error-2006-mysql-server-has-gone-away/page__p__254495#entry254495
This worked great and stopped my MySQL gone away issue. Unfortunately, I've been running into a really random issue where, after inserting a new record into the database via CActiveRecord, Yii fails to set the primary key value properly: you end up with a PK value of 0.

I looked into the issue more deeply and was finally able to reproduce it on my local machine. It seems like my custom CDbConnection::setActive() method might be the culprit. When you run the CActiveRecord::save() method, Yii prepares the necessary SQL and executes it via PDO. Immediately after this, Yii uses PDO::lastInsertId() to grab the latest inserted ID and populates your model's PK attribute.

What happens, though, if for whatever reason the initial insert command takes more than a few seconds to complete? This triggers the MySQL ping action in my custom setActive() method, which only waits for a 2 second difference between the current timestamp and the last active timestamp. I noticed that when you do a PDO insert query, followed by a PDO select query, then finally PDO::lastInsertId(), you end up with a last insert id value of 0.
I can't say for certain if this is what's happening on our live servers where the issue randomly occurs but it's been the only way I have been able to reproduce it.
There are actually many reasons for the "MySQL server has gone away" error, which are well documented in the MySQL documentation. A couple of common tricks to try are:
Increase the wait_timeout in your my.cnf file. See also innodb_lock_wait_timeout and lock_wait_timeout if your locks need to remain locked for a longer period of time.
The number of seconds the server waits for activity on a noninteractive connection before closing it.
Increase max_allowed_packet in your my.cnf file. Large data packets can trip up the connection and cause it to be closed abruptly.
You must increase this value if you are using large BLOB columns or long strings. It should be as big as the largest BLOB you want to use. The protocol limit for max_allowed_packet is 1GB. The value should be a multiple of 1024; nonmultiples are rounded down to the nearest multiple.
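For illustration, both settings go in the [mysqld] section of my.cnf; the values below are only examples and should be sized for your workload:

[mysqld]
wait_timeout       = 28800
max_allowed_packet = 64M

max_allowed_packet can also be raised at runtime with SET GLOBAL max_allowed_packet = 67108864; if you have the privilege; that takes effect for new connections without a restart.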
We are developing a script that reads data from a SQL Server 2005 database located on another server.
At the moment we are having some trouble with the connection time, because the data we are retrieving is rather large.
One solution that came to us was calling mssql_close() just after mssql_query() and before mssql_fetch_array(), because after mssql_query() the data is on our PHP server, or that is what the documentation says. That would shorten the connection time quite a bit because of the data manipulations we have to do on the returned records.
Is that possible? Do we need to have an open connection for executing mssql_fetch_array()?
If you have a large data set to be pulled you can either:
Pull the data in chunks (multiple requests to the DB); a rough sketch follows after this list
Increase the connection timeout (if the DB is under your control)
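A rough sketch of the chunked approach, using the same mssql_* functions as the question; the table name big_table, its numeric id primary key and the $conn link are assumptions made up for the example:

$lastId = 0;
$chunk = 1000;
do {
    $res = mssql_query("SELECT TOP $chunk * FROM big_table WHERE id > $lastId ORDER BY id", $conn);
    $count = 0;
    while ($row = mssql_fetch_array($res)) {
        $count++;
        $lastId = $row['id'];
        // ... process the row here ...
    }
    mssql_free_result($res);
} while ($count == $chunk);

Each pass holds at most one chunk in memory and keeps every individual query short.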
I also hope you have read this part from the manual:
The downside of the buffered mode is that larger result sets might require quite a lot memory. The memory will be kept occupied till all references to the result set are unset or the result set was explicitly freed, which will automatically happen during request end the latest.
For the question:
Do we need to have an open connection for executing mssql_fetch_array()?
Not needed if you have already fetched data.
I am importing a CSV file with more than 5,000 records in it. What I am currently doing is getting the whole file content as an array and saving the rows to the database one by one. But in case of script failure the whole process will run again, and if I start checking the records one by one against the database it will use lots of queries, so I thought to keep the imported values in the session temporarily.
Is it good practice to keep that many records in the session? Or is there another way to do this?
Thank you.
If you have to do this task in stages (and there's a couple of suggestions here to improve the way you do things in a single pass), don't hold the csv file in $_SESSION... that's pointless overhead, because you already have the csv file on disk anyway, and it's just adding a lot of serialization/unserialization overhead to the process as the session data is written.
You're processing the CSV records one at a time, so keep a count of how many you've successfully processed in $_SESSION. If the script times out or barfs, then restart and read how many you've already processed so you know where in the file to restart.
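A minimal sketch of that bookkeeping (the CSV path and the insert step are placeholders):

session_start();
$done = isset($_SESSION['csv_rows_done']) ? $_SESSION['csv_rows_done'] : 0;
$fh = fopen('/path/to/import.csv', 'r');
$line = 0;
while (($row = fgetcsv($fh)) !== false) {
    $line++;
    if ($line <= $done) {
        continue; // already imported on a previous run
    }
    // ... insert $row into the database here ...
    $_SESSION['csv_rows_done'] = $line; // remember progress for a possible restart
}
fclose($fh);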
What can be the maximum size of $_SESSION?
The session is loaded into memory at run time - so it's limited by the memory_limit in php.ini
Is it good practice to keep that many records in the session?
No - for the reasons you describe - it will also have a big impact on performance.
Or is there another way to do this?
It depends on what you are trying to achieve. Most databases can import CSV files directly or come with tools which will do it faster and more efficiently than PHP code.
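For MySQL, for instance, something along these lines replaces the whole PHP loop; the file path, table name, delimiters and header handling are assumptions you would adapt (and LOCAL INFILE may need to be enabled on both server and client):

LOAD DATA LOCAL INFILE '/path/to/import.csv'
INTO TABLE your_table
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;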
C.
It's not a good idea imho since session data will be serialized/unserialized for every page request, even if they are unrelated to the action you are performing.
I suggest using the following solution:
Keep the CSV file lying around somewhere
Begin a transaction
Run the inserts
Commit after all inserts are done
End of transaction
Link: MySQL Transaction Syntax
If something fails the inserts will be rolled back so you know you can safely redo the inserts without having to worry about duplicate data.
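A minimal sketch of that pattern with PDO, assuming an InnoDB table import_rows(col_a, col_b) and an existing $pdo connection (all names are invented for the example):

$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); // make failures throw
$pdo->beginTransaction();
try {
    $stmt = $pdo->prepare('INSERT INTO import_rows (col_a, col_b) VALUES (?, ?)');
    $fh = fopen('/path/to/import.csv', 'r');
    while (($row = fgetcsv($fh)) !== false) {
        $stmt->execute(array($row[0], $row[1]));
    }
    fclose($fh);
    $pdo->commit();     // all rows become visible at once
} catch (Exception $e) {
    $pdo->rollBack();   // nothing was written, so the import can simply be rerun
    throw $e;
}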
To answer the actual question (Somebody just asked a duplicate, but deleted it in favour of this question)
The default session data handler stores its data in temporary files. In theory, those files can be as large as the file system allows.
However, as #symcbean points out, session data is auto-loaded into the script's memory when the session is initialized. This limits the maximum size you should store in session data severely. Also, loading lots of data has a massive impact on performance.
If you have huge amounts of data you need to store connected to a session, I would recommend using temporary files that you name by the current session ID. You can then deal with those files as needed, and as possible within the limits of the script's memory_limit.
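Something as simple as this sketch does the trick ($bigData stands in for whatever you need to park):

session_start();
$file = sys_get_temp_dir() . '/bulk_' . session_id() . '.dat';
file_put_contents($file, serialize($bigData));     // park it outside the session
// ... on a later request ...
$bigData = unserialize(file_get_contents($file));  // load it only when actually needed

Unlike $_SESSION, the file is only read (and unserialized) by the requests that actually need the data, not on every page load.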
If you are using PostgreSQL, you can use a single query to insert them all using pg_copy_from, or you can use pg_put_line as shown in the example (COPY FROM stdin), which I found very useful when importing tons of data.
If you use MySQL, you'll have to do multiple inserts. Remember to use transactions, so that if a query fails the inserts are rolled back and you can start over. Note that 5,000 rows is not that large! You should, however, be aware of the max_execution_time constraint, which will kill your script after a number of seconds.
As far as the SESSION is concerned, I believe you are limited by the maximum amount of memory a script can use (memory_limit in php.ini). Session data is saved in files, so you should also consider disk space usage if many clients are connected.
It depends on the operating system's file size limits; whatever the size of the session data on disk, the default memory limit per page request is 128 MB.
I have a MySQL database that I am trying to migrate into another database. They have different schemas, and I have written a PHP script for each table of the old database in order to populate its data into the new one. The script works just fine, but the problem is that it does not move all the data: for example, if I have a table whose rows are all being selected and then inserted into the new table, only half of them make it. The way I am doing it is: I open a database connection, select *, and put the result in an associative array. Then I close the DB connection, connect to the other one, go through each element of the array, and insert them into the new one. Is there a limit to how big an array can be? What is wrong here?
You should read the rows from the first database in chunks (of 1000 rows for example), write those rows to the second database, clean the array (with unset() or an empty array) and repeat the process until you read all the rows.
This overcomes the memory limitations.
Another problem might be that the script is running for too long (if the table is too large), so try using the function set_time_limit(). This function resets the timeout for a script after which it should be terminated. I suggest calling it after processing each chunk.
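A sketch of that loop with the same old mysql_* API as the question; old_table, new_table, the id/name columns and the two connection links are placeholders:

$offset = 0;
$chunk = 1000;
do {
    $res = mysql_query("SELECT * FROM old_table ORDER BY id LIMIT $offset, $chunk", $link_old);
    $count = 0;
    while ($row = mysql_fetch_assoc($res)) {
        $count++;
        mysql_query(sprintf("INSERT INTO new_table (`id`, `name`) VALUES (%d, '%s')",
            (int) $row['id'],
            mysql_real_escape_string($row['name'], $link_new)), $link_new);
    }
    mysql_free_result($res);
    set_time_limit(30);   // reset the execution timer after every chunk
    $offset += $chunk;
} while ($count == $chunk);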
First of all, I don't see the point in writing a script to do this. Why don't you just get a SQL dump from phpMyAdmin and edit it so that it fits the other database? Or are they that different?
But to reply to your question: my first thought, like other people already said, is that the problem is the time limit. Before you try to do something about this, you should check the value of max_execution_time in php.ini (this is about 30 seconds most of the time) and how long it takes for the script to execute. If it terminates after roughly 30 seconds (or the value of max_execution_time if it's different), then it's likely that that's the problem, although PHP should throw an error (or at least a warning).
I don't think there's a limit on the size of an array in PHP. However, there is a directive in php.ini, namely memory_limit, that defines the amount of memory a script can use.
If you have access to your php.ini file, I suggest setting both max_execution_time and memory_limit to a higher value. If you don't have access to php.ini, you won't be able to change the memory_limit directive. You will have to work your way around this, for example by using LIMIT in your SQL. Be sure to unset your used variables, or you could run into the same problem.
You may have constraints in the target database that are rejecting some of your attempted inserts.
Why not do this via sql scripts?
If you prefer to do it via php then you could open connections to both databases and insert to target as you read from source. That way you can avoid using too much memory.
Using PHP to do the transform/convert logic is a possibility. I would do it if you are doing complex transformations and if your PHP skills are much better than your MySQL skill set.
If you need more memory in your php script use:
memory_limit = 2048M
max_execution_time = 3600
This will give you 2 GB of possible space for the array and about an hour for processing. But if your database is really this big, it would be much (really, a lot) faster to use:
1. mysqldump, to make a dump of your source server. Check it here: http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
2. Upload the dump file and import it. There are a bunch of examples on the MySQL documentation page. (Look also in the comments.)
After this you can transform your database through CREATE/SELECT-statements.
CREATE TABLE one SELECT * FROM two;
As an alternative you can use UPDATE-statements. What is best depends heavily on the kind of job that you are doing.
Good luck!
It would be preferable to do a mysql dump at the command line:
mysqldump -a -u USER_NAME -p SOURCE_DATABASE_NAME > DATA.mysql
You can also gzip the file to make it smaller for transfer to another server:
gzip DATA.mysql
After transfer, unzip the file:
gunzip -f DATA.mysql.gz
And import it:
mysql -u USER_NAME -p TARGET_DATABASE_NAME < DATA.mysql
Your server (as all servers do) will have a memory limit for PHP; if you use more than the assigned limit, the script will fail.
Is it possible to just dump the current MySQL database into text files, perform find-and-replace or regexp-based replacements to change the schemas within the text files, and then reload the amended text files into MySQL to complete the change? If this is a one-off migration, that may be a better way to do it.
You may be running into PHP's execution time or memory limits. Make sure the appropriate settings in php.ini are high enough to allow the script to finish executing.
I have 2 websites, let's say example.com and example1.com.
example.com has a database fruits which has a table apple with 7000 records.
I exported apple and tried to import it to example1.com, but I'm always getting a "MySQL server has gone away" error. I suspect this is due to some server-side restriction.
So, how can I copy the tables without having to contact the system admins? Is there a way to do this using PHP? I went through an example of copying tables, but that was inside the same database.
Both example.com and example1.com are on the same server.
One possible approach:
On the "source" server create a PHP script (the "exporter") that outputs the contents of the table in an easy to parse format (XML comes to mind as easy to generate and to consume, but alternatives like CSV could do).
On the "destination" server create a "importer" PHP script that requests the exporter one via HTTP, parses the result, and uses that data to populate the table.
That's quite generic, but should get you started. Here are some considerations:
http://ie2.php.net/manual/en/function.http-request.php is your friend
If the table contains sensitive data, there are many techniques to enhance security (http_request won't give you https:// support directly, but you can encrypt the data you export and decrypt on importing: look for "SSL" on the PHP manual for further details).
You should consider adding some redundancy (or even full-fledged encryption) to prevent the data from being tampered with while it sails the web between the servers.
You may use GET parameters to add flexibility (for example, passing the table name as a parameter would allow you to have a single script for all tables you may ever need to transfer).
With large tables, PHP timeouts may play against you. The ideal solution for this would be efficient code + custom timeouts for the export and import scripts, but I'm not even sure if that's possible at all. A quite reliable workaround is to do the job in chunks (GET params come in handy here to tell the exporter which chunk you need, and a special entry in the output can be enough to tell the importer how much is left to import). Redirects help a lot with this approach (each redirect is a new request to the server, so timeouts get reset).
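Just to make the redirect idea concrete, here is a bare-bones importer skeleton; the file name, parameter name, chunk size and the import_one_chunk() helper are all invented for the illustration:

$offset = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
$chunk = 500;
$received = import_one_chunk($offset, $chunk); // hypothetical helper: asks the exporter for
                                               // rows $offset..$offset+$chunk-1, inserts them,
                                               // and returns how many rows actually arrived
if ($received == $chunk) {
    // probably more rows left: hand off to a fresh request so the timeout starts over
    header('Location: importer.php?offset=' . ($offset + $chunk));
    exit;
}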
Maybe I'm missing something, but I hope there is enough there to let you get your hands dirty on the job and come back with any specific issue I might have failed to foresee.
Hope this helps.
EDIT:
Oops, I missed the detail that both DBs are on the same server. In that case, you can merge the import and export task into a single script. This means that:
You don't need a "transfer" format (such as XML or CSV): in-memory representation within the PHP is enough, since now both tasks are done within the same script.
The data doesn't ever leave your server, so the need for encryption is not so heavy. Just make sure no one else can run your script (via authentication or similar techniques) and you should be fine.
Timeouts are not so restrictive, since you don't waste a lot of time waiting for the response to arrive from the source to the destination server, but they still apply. Chunk processing, redirection, and GET parameters (passed within the Location for the redirect) are still a good solution, but you can squeeze much closer to the timeout since execution time metrics are far more reliable than cross-server data-transfer metrics.
Here is a very rough sketch of what you may have to code:
$link_src = mysql_connect(source DB connection details);
$link_dst = mysql_connect(destination DB connection details);
/* You may want to truncate the table on destination before going on, to prevent data repetition */
$q = "INSERT INTO `table_name_here` (column_list_here) VALUES ";
$res = mysql_query("SELECT * FROM `table_name_here`", $link_src);
while ($row = mysql_fetch_assoc($res)) {
$q = $q . sprintf("(%s, %s, %s), ", $row['field1_name'], $row['field2_name'], $row['field3_name']);
}
mysql_free_result($res);
/* removing the trailing ',' from $q is left as an exercise (ok, I'm lazy, but that's supposed to be just a sketch) */
mysql_query($q, $link_dst);
You'll have to add the chunking logic in there (that is too case- and setup-specific), and probably output some confirmation message (maybe a DESCRIBE and a COUNT of both source and destination tables and a comparison between them?), but that's quite the core of the job.
As an alternative you may run a separate insert per row (invoking the query within the loop), but I'm confident a single query would be faster (however, if your PHP memory limit is too small, this alternative lets you get rid of the memory-hungry $q).
Yet another edit:
From the documentation link posted by Roberto:
You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section B.5.2.10, “Packet too large”.
An INSERT or REPLACE statement that inserts a great many rows can also cause these sorts of errors. Either one of these statements sends a single request to the server irrespective of the number of rows to be inserted; thus, you can often avoid the error by reducing the number of rows sent per INSERT or REPLACE.
If that's what's causing your issue (and by your question it seems very likely it is), then the approach of breaking the INSERT into one query per row will most probably solve it. In that case, the code sketch above becomes:
$link_src = mysql_connect(source DB connection details);
$link_dst = mysql_connect(destination DB connection details);
/* You may want to truncate the table on destination before going on, to prevent data repetition */
$q = "INSERT INTO `table_name_here` (column_list_here) VALUES ";
$res = mysql_query("SELECT * FROM `table_name_here`", $link_src);
while ($row = mysql_fetch_assoc($res)) {
$q = $q . sprintf("INSERT INTO `table_name_here` (`field1_name`, `field2_name`, `field3_name`) VALUE (%s, %s, %s)", $row['field1_name'], $row['field2_name'], $row['field3_name']);
}
mysql_free_result($res);
The issue may also be triggered by the huge volume of the initial SELECT; in such case you should combine this with chunking (either with multiple SELECT+while() blocks, taking profit of SQL's LIMIT clause, or via redirections), thus breaking the SELECT down into multiple, smaller queries. (Note that redirection-based chunking is only needed if you have timeout issues, or your execution time gets close enough to the timeout to threaten with issues if the table grows. It may be a good idea to implement it anyway, so even if your table ever grows to an obscene size the script will still work unchanged.)
After struggling with this for a while, I came across BigDump. It worked like a charm! I was able to copy LARGE databases without a glitch.
Here are reported the most common causes of the "MySQL Server has gone away" error in MySQL 5.0:
http://dev.mysql.com/doc/refman/5.0/en/gone-away.html
You might want to have a look at it and use it as a checklist to see if you're doing something wrong.