Better way to keep mysql connection alive other than pinging - php

I'm sure a lot of developers have run into the dreaded "MySQL server has gone away" issue, especially when dealing with long-running scripts such as those reserved for background or cron jobs. It is caused by the connection between PHP and MySQL being dropped. What exactly is the best way to prevent that from happening?
I currently use a custom CDbConnection class with a setActive method straight out of here:
http://www.yiiframework.com/forum/index.php/topic/20063-general-error-2006-mysql-server-has-gone-away/page__p__254495#entry254495
This worked great and stopped my MySQL gone away issue. Unfortunately, I've been running into a really random issue where, after inserting a new record to the database via CActiveRecord, Yii fails to set the primary key value properly: you end up with a PK value of 0. I looked into the issue more deeply and was finally able to reproduce it on my local machine, and it seems my custom CDbConnection::setActive() method might be the culprit.

When you run the CActiveRecord::save() method, Yii prepares the necessary SQL and executes it via PDO. Immediately after this, Yii uses PDO::lastInsertId() to grab the latest inserted ID and populates your model's PK attribute. But what happens if, for whatever reason, the initial insert command takes more than a few seconds to complete? That triggers the MySQL ping action in my custom setActive() method, which pings whenever more than 2 seconds have passed between the current timestamp and the last-active timestamp. I noticed that when you do a PDO insert query, followed by a PDO select query, and finally PDO::lastInsertId(), you end up with a last insert id value of 0.
I can't say for certain that this is what's happening on our live servers where the issue randomly occurs, but it's been the only way I have been able to reproduce it.
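For what it's worth, here is a minimal sketch of the reproduction described above, assuming a local test table with an AUTO_INCREMENT id column (the table name, columns, and credentials are placeholders):

$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->exec("INSERT INTO items (name) VALUES ('example')");
echo $pdo->lastInsertId(); // prints the new id, e.g. "42"
$pdo->query('SELECT 1');   // the keep-alive ping fires between the insert and lastInsertId
echo $pdo->lastInsertId(); // now prints "0", matching the behavior described above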

There are actually many reasons for the "MySQL server has gone away" error, which are well documented in the MySQL documentation. A couple of common things to try are:
Increase the wait_timeout in your my.cnf file. See also innodb_lock_wait_timeout and lock_wait_timeout if your locks need to remain locked for a longer period of time.
The number of seconds the server waits for activity on a noninteractive connection before closing it.
Increase max_allowed_packet in your my.cnf file. Large data packets can trip up the connection and cause it to be closed abruptly.
You must increase this value if you are using large BLOB columns or long strings. It should be as big as the largest BLOB you want to use. The protocol limit for max_allowed_packet is 1GB. The value should be a multiple of 1024; nonmultiples are rounded down to the nearest multiple.
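If you want to verify the current values before editing my.cnf, a quick check from PHP might look like this (a sketch; the DSN and credentials are placeholders):

$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
foreach (['wait_timeout', 'max_allowed_packet'] as $name) {
    $row = $pdo->query("SHOW VARIABLES LIKE '$name'")->fetch(PDO::FETCH_ASSOC);
    echo $row['Variable_name'] . ' = ' . $row['Value'] . PHP_EOL;
}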

Related

Simple query slow in Laravel, but insanely fast in database console

I have a very strange problem that I cannot get my head around.
I am using Laravel for my backend application, where I am running a very simple query on a table with 30k records, all with proper indexes on it.
Here is the query:
DB::select('select * from Orders where ClientId = ?', [$id])
From the Laravel application this query runs for 1.2 seconds (the same thing happens if I use an Eloquent model):
"query" => "select * from Orders where ClientId = ?"
"bindings" => array:1 [▼
0 => "44087"
]
"time" => 1015.2
The problem is, if I run THE SAME query in the database console or phpMyAdmin, the query takes approximately 20 milliseconds.
I do not understand how that is possible, since I am using the same database, the same query, the same computer and the same connection to the database.
What can be the reason?
PHPMyAdmin will automatically add LIMIT for you.
This is because PHPMyAdmin will always by default paginate your query.
In your Laravel/Eloquent query, you are loading all 30k records in one go; that is bound to take time.
To remedy this, try paginating or chunking your query.
The total will still take long, yes, but the individual chunks will be very quick.
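For illustration, a chunked version might look something like this (a sketch using the query builder; the ordering column is an assumption, since chunk() needs a deterministic order):

DB::table('Orders')->where('ClientId', $id)->orderBy('Id')->chunk(500, function ($orders) {
    foreach ($orders as $order) {
        // process one row at a time instead of holding all 30k in memory
    }
});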
I would try debugging the queries with the Debug Bar to see how much time each one takes and which is the slowest. It's very easy to use and install: https://github.com/barryvdh/laravel-debugbar
It sounds like you are also interested in DB administration; reading up on that can give you some ideas too. Good luck.
There are several issues here. The first is how Laravel works. Laravel only loads services and classes that are executed during your script. This is done to conserve resources, since PHP is meant to be run as a CGI script rather than a long-running process. As a result, your timing might include the connection setup step, not just the query execution. For a more reliable measurement, execute any query before timing your simple query.
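For example, a rough way to exclude the connection setup from the measurement might be (a sketch; the warm-up query is arbitrary):

DB::select('select 1'); // warm-up: forces Laravel to open the connection now
$start = microtime(true);
DB::select('select * from Orders where ClientId = ?', [$id]);
echo (microtime(true) - $start) * 1000 . ' ms';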
There's another side to that behavior. In a long-running process, like a job runner, you ought not to change service parameters; doing so can cause undesired behavior and let your parameter changes spill over into other jobs. For example, if you provide an SMTP login feature, you ought to reset the email sender credentials after sending the email; otherwise you'll run into an issue where a user who doesn't use that feature sends an email as another user who does. This comes from assuming that services are reloaded every time a job is executed, which is the behavior when running the HTTP part.
Second, as some other posters pointed out, you're not using LIMIT.
I'm almost sure this is due to phpMyAdmin applying a LIMIT, which is related to what you are seeing in the page output.
If you look at the top of the phpMyAdmin page, you'll see something like this:
Showing rows 0 - 24 (314 total, Query took 0.0009 seconds.)
You should see the same performance when you add that limit to your query.
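For instance, something like this (mirroring the question's query with phpMyAdmin's default page size):

DB::select('select * from Orders where ClientId = ? limit 25', [$id]);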
How to enable MySQL Query Log?
Run query through phpmyadmin.
See which queries you actually have in MySQL.
Run app.
See which queries you actually have in MySQL.
Tell us which extra queries those were that slow things down.
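If it helps, switching on the general query log for a quick look might be done like this (a sketch; $pdo stands for any connected handle with a privileged account, and the log should be turned off again afterwards):

$pdo->exec("SET GLOBAL general_log = 'ON'");
$pdo->exec("SET GLOBAL log_output = 'TABLE'"); // queries land in the mysql.general_log table
// run the app, then: SELECT event_time, argument FROM mysql.general_log ORDER BY event_time DESC;
$pdo->exec("SET GLOBAL general_log = 'OFF'");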
A query should run at roughly the same speed in phpMyAdmin as in the application; whatever the application is, try using an EXPLAIN statement to see more details about the query.
This conflict may be due to many reasons other than MySQL, for example:
The PHP script itself may have functions that cause slow loading.
Try checking the server's error.log; maybe there are errors in those functions.
Basically, phpMyAdmin may connect to MySQL differently than Laravel does. Try checking the extension used for the connection; it may not be compatible with the PHP version you use, and I think this is the cause of the slow query.
I have noticed this in some apps I have made, and the cause was always in the PHP functions or in the connection. For example, mysql_connect was much faster than the PDO extension on PHP < 5.6 in my experience, but the cause was always the PHP functions in the script.

Limiting mysql use per process

I have a Debian VPS configured with a standard LAMP stack.
On this server, there is only one site (a shop) which has a few cron jobs, mostly PHP scripts. One of them is an update script executed via the Lynx browser, which sends tons of queries.
When this script runs (it takes 3-4 minutes to complete) it consumes all MySQL resources, and the site almost stops working (pages generate in 30-60 seconds instead of 1-2s).
How can I limit this script (i.e. extend its execution time by limiting the resources available to it) to allow other services to run properly? I believe there is a simple solution to the problem but I can't find it. It seems my Google superpowers have been limited these last two days.
You don't have access to modify the offending script, so fixing this requires database administrator work, not programming work. Your task is called tuning the MySQL database.
(I guess you already asked your vendor for help with this, and they said no.)
Run top or htop while the script runs. Is the CPU pinned at 100%? Is RAM exhausted?
1) Just live with it, and run the update script at a time of day when your web site doesn't have many visitors. Fairly easy, but not a real solution.
2) As an experiment, add RAM to your VPS instance. It may let MySQL do things all-in-RAM that it's presently putting on the hard drive in temporary tables. If it helps, that may be a way to solve your problem with a small amount of work, and a larger server rental fee.
3) Add some indexes to speed up the queries in your script, so each query gets done faster. The question is, what indexes will help? (Just adding indexes randomly generally doesn't help much.)
First, figure out which queries are slow. Give the command SHOW FULL PROCESSLIST repeatedly while your script runs. The Info column in that result shows all the running queries. Copy them into a text file to keep them. (Or you can use MySQL's slow query log, about which you can read online.)
Second, analyze the worst-offending queries to see whether there's an obvious index to add. Telling you how to do that in general is beyond the scope of a Stack Overflow answer. You might ask another question about a specific query; before you do, please
read this note about asking good SQL questions, and pay attention to the section on query performance.
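As a sketch, capturing the slow queries with the slow query log might look like this ($pdo stands for a handle on a privileged account; the 1-second threshold is an arbitrary choice):

$pdo->exec("SET GLOBAL slow_query_log = 'ON'");
$pdo->exec("SET GLOBAL long_query_time = 1"); // log anything slower than 1 second
// run the cron script, then read the file named by the slow_query_log_file variable
$pdo->exec("SET GLOBAL slow_query_log = 'OFF'");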
4) It's possible your script is SELECTing many rows, or using SELECT to summarize many rows, from tables that also need to be updated when users visit your web site. In that case your visitors may be waiting for those SELECTs to finish. If you could change the script, you could put this statement right before the long-running SELECTs.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
This allows the SELECT statement after it to do a "dirty read", in which it might get an earlier version of an updated row. See here.
Or, if you can figure out how to insert one statement into your obscured script, put this one right after it opens a database session.
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
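If you can reach the connection code at all, that might look like this (a sketch; the DSN and credentials are placeholders for whatever the script uses):

$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
// right after connecting, relax the isolation level for this session only
$pdo->exec('SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED');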
Without access to the source code, though, you have only one way to see whether this is the problem: access the MySQL server from a privileged account right before your script runs, and give these SQL commands.
SHOW VARIABLES LIKE 'tx_isolation';
SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
then see if the performance problem improves. Set it back after your script finishes, probably like this (depending on the tx_isolation value retrieved above):
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;
Warning: a permanent global change to the isolation level might foul up your application if it relies on transaction consistency. This is just an experiment.
5) Harass the script's author to fix this problem.
Slow queries? High CPU? High I/O? Then you must look at the queries. You cannot "tune your way out of a performance problem". Tuning might give you a few percent improvement; fixing the indexes and queries is likely to give you a lot more improvement.
See this for finding the 'worst' queries; then come back with SELECTs, EXPLAINs, and SHOW CREATE TABLEs for help.

MySQL Query Not Inserting All Records (using PHP)

I have a fairly large amount of data that I'm trying to insert into MySQL. It's a data dump from a provider that is about 47,500 records. Right now I'm simply testing the insert method through a PHP script just to get things dialed in.
What I'm seeing is that, first, the inserts continue long after the PHP script "finishes". By the time the browser no longer shows an "X" to cancel the request and shows "reload" instead (indicating the script is done from the browser's perspective), I can still see inserts occurring for a good 10+ minutes. I assume this is MySQL caching the queries. Is there any way to keep the script "alive" until all queries have completed? I put a 15-minute timeout on my script.
Second, and more disturbing, is that I won't get every insert. Of the 47,500 records I'll get anywhere between 28,000 and 38,000, but never more, and that number is random each time I run the script. Is there anything I can do about that?
Lastly, I have a couple of simple echo statements at the end of my script for debugging, and these never fire, leading me to believe a timeout might be happening (although I don't get any errors about timeouts or memory exhaustion). I'm thinking this has something to do with the problem but am not sure.
I tried changing my table to an ARCHIVE table, but not only did that not help, it also means I lose the ability to update the records in the table when I want to. I did it only as a test.
Right now the insert is in a simple loop: it iterates over each record in the JSON data I get from the source and runs an insert statement, then moves on to the next iteration. Should I instead use the loop to build one massive insert and run a single insert statement at the end? My concern with that is that I'd exceed the max_allowed_packet configuration, which is hard-coded by my hosting provider.
So I guess the real question is: what is the best method to insert nearly 50,000 records into MySQL using PHP, given what I've explained here?
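For reference, a middle ground between one-row-at-a-time and one giant statement is inserting in fixed-size batches; a minimal sketch, assuming PDO and a $records array of associative rows (the table and column names are placeholders):

foreach (array_chunk($records, 500) as $batch) {
    $placeholders = rtrim(str_repeat('(?, ?),', count($batch)), ',');
    $stmt = $pdo->prepare("INSERT INTO items (sku, name) VALUES $placeholders");
    $params = [];
    foreach ($batch as $row) {
        $params[] = $row['sku'];
        $params[] = $row['name'];
    }
    $stmt->execute($params); // 500 rows per round trip keeps each packet well under max_allowed_packet
}

Batches of a few hundred rows are usually far faster than single-row inserts while staying clear of the packet limit; wrapping each batch in a transaction can speed things up further.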

MySQL insert query failing when data gets large

I have already followed the question Data Limit on MySQL DB Insert, but I was unable to solve my problem with the limited info there.
I am using WAMP.
I have numerous rich-text editors and 4 images being sent over to another page via a POST request. Above a certain threshold, the query fails. Is there a way around this?
EDIT: when displaying the query string, it seems I am able to retrieve every bit of data that was sent via POST. I am quite sure the problem is DB related.
Images are being stored as a BLOB.
EDIT #2: Error showing is "MySQL server has gone away".
You may be violating the max_allowed_packet setting. See here for more data.
Quote,
If you are using the mysql client program, its default
max_allowed_packet variable is 16MB.
If you are uploading uncompressed images, this value is fairly easy to reach.
Also, it would be great if you could name the specific database interface class that you use (PDO? mysql_? mysqli_?), as different classes handle errors differently; one might not handle an oversized-packet situation at all.
P.S.: You should really check your logs for the specific error you encounter. The first place to look would be /var/log/mysql/error.log (this can vary depending on your environment).
Update:
mysql_error() returned "MySQL server has gone away"
From the manual pages for the error: "You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end..."
(quote courtesy of @Colin Morelli)
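Given that the error confirms an oversized packet, a quick sanity check could compare the payload size against the server's limit before running the insert (a sketch; $imageBlob stands in for the POSTed image data):

$max = $pdo->query("SHOW VARIABLES LIKE 'max_allowed_packet'")->fetch(PDO::FETCH_ASSOC)['Value'];
if (strlen($imageBlob) >= (int) $max) {
    // this INSERT would exceed the packet limit and drop the connection
    die('Image is larger than max_allowed_packet (' . $max . ' bytes)');
}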
Sometimes PHP reaches its memory limit if the uploaded file is too large. Depending on your config, this might help:
set_time_limit(0);
ini_set('memory_limit', '-1');
EDIT:
If it is not the memory allocation thing we all rushed to answer, then it could be a memory engine tidbit, so you could probably check that.
Comment:
In my experience it is most likely a memory issue, since it only occurs when you try bigger imports (it happens to my application when I try to return a 20MB result set from a single query).

PHP taking much longer to perform query than MySQL

Now I must start by saying that I can't copy the query here; this is a general question.
I've got a query with several joins that takes 0.9 seconds when run using the MySQL CLI. I'm now trying to run the same query on a PHP site and it's taking 8 seconds. There are some other big joins on the site that are obviously slower, but this query is taking far too long. Is there a PHP cache for database connections that I need to increase, or is this just to be expected?
PHP doesn't really do much with MySQL; it sends a query string to the server and processes the results. The only bottleneck is if it's an absolutely vast query string, or if you're getting a lot of results back, since PHP has to cache and parse them into an object (or array, depending on which mysql_fetch_* you use). Without knowing what your query or results look like, I can only guess.
(From comments): If you have 30 columns and, say, around a million rows, this will take an age to parse (we later found it's only 10k rows, so that's fine). You need to rethink how you do things; see the sketch after this list:
See if you can reduce the result set. If you're paginating things, you can use LIMIT clauses.
Do more in the MySQL query; instead of getting PHP to sort the results, let MySQL do it by using ORDER BY clauses.
Reduce the number of columns you fetch by explicitly specifying each column name instead of SELECT * FROM ....
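Putting those three together, a trimmed-down query might look like this (a sketch; the table, columns and page size are made up for illustration):

$stmt = $pdo->query(
    'SELECT id, name, created_at  -- only the columns you actually need
     FROM orders
     ORDER BY created_at DESC     -- let MySQL do the sorting, not PHP
     LIMIT 100'                   // paginate instead of fetching every row
);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);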
Some wild guesses:
The PHP version uses different parameters and variables for each query, so MySQL cannot serve it from its cache, while the version you type into the MySQL CLI uses the same parameters each time, so MySQL can. Try adding SQL_NO_CACHE to your query on the CLI to see whether it then takes more time.
You are not testing on the same machine? Is the MySQL database you run the PHP query against the same one the CLI talks to? I mean, you are not testing one on your laptop and the other on some production server, are you?
You are testing over a network: when the MySQL server is not installed on the same host as your PHP app, you will see a MySQL connection that uses "someserver.tld" instead of "localhost" as the database host. In that case PHP needs to connect over the network, while your CLI already has that connection or connects locally.
The actual connection is taking a long time. Try to run and time the query from your PHP system a thousand times in a row: instead of timing "connect to server, query database, disconnect", you should time "connect to server, query database a thousand times, disconnect". Some PHP applications connect and disconnect for each and every query, and if your MySQL server is not configured correctly, connecting can take a gigantic share of the total time.
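A rough sketch of that measurement, with one connection and the query repeated (the DSN and query are placeholders):

$pdo = new PDO('mysql:host=someserver.tld;dbname=app', 'user', 'pass');
$start = microtime(true);
for ($i = 0; $i < 1000; $i++) {
    $pdo->query('SELECT 1')->fetchAll();
}
// if 1000 queries finish quickly, the 8 seconds is connection overhead, not query time
echo (microtime(true) - $start) . " s for 1000 queries\n";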
How are you timing it?
If the 'long' version is accessed through a php page on a website, could the additional 7.1 seconds not just be the time it takes to send the request and then process and render the results?
How are you connecting? Does the account you're using have a hostname in the grant tables? If you're connecting via TCP, MySQL will have to do a reverse DNS lookup on your IP to figure out if you're allowed in.
If it's the connection causing this, then do a simple test:
select version();
If that takes 8 seconds, then it's connection overhead. If it returns instantly, then it's PHP overhead in processing the data you've fetched.
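A minimal way to time the two parts separately (a sketch; the credentials are placeholders):

$t0 = microtime(true);
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$t1 = microtime(true);
$pdo->query('SELECT version()')->fetch();
$t2 = microtime(true);
printf("connect: %.3fs, query: %.3fs\n", $t1 - $t0, $t2 - $t1);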
The mysql_query function itself should take about the same time as the mysql client, but every extra mysql_fetch_* call will add up.
