I'm struggling to investigate a recent issue with the database (MySQL) connection in a Symfony application.
SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try
restarting transaction
This is a common issue with lots of proposed solutions on the internet, but none of the ones I've found fixes my case. The issue occurs randomly on ordinary HTTP requests and is impossible to reproduce on demand. See the stack trace below:
Uncaught PHP Exception Doctrine\DBAL\Driver\PDO\Exception: "SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction" at /home/ubuntu/projects/web/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDO/Exception.php line 18
"class": "Doctrine\\DBAL\\Driver\\PDO\\Exception",
"message": "SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction",
"code": 0,
"file": "/home/ubuntu/projects/web/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDO/Exception.php:18",
"trace": [
"/home/ubuntu/projects/web/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOStatement.php:119",
"/home/ubuntu/projects/web/vendor/symfony/http-foundation/Session/Storage/Handler/PdoSessionHandler.php:337",
"/home/ubuntu/projects/web/vendor/symfony/http-foundation/Session/Storage/Handler/AbstractSessionHandler.php:120",
"/home/ubuntu/projects/web/vendor/symfony/http-foundation/Session/Storage/Proxy/SessionHandlerProxy.php:71",
"/home/ubuntu/projects/web/vendor/symfony/http-foundation/Session/Storage/NativeSessionStorage.php:268",
"/home/ubuntu/projects/web/vendor/symfony/http-foundation/Session/Session.php:194",
"/home/ubuntu/projects/web/vendor/symfony/http-kernel/EventListener/AbstractSessionListener.php:112",
"/home/ubuntu/projects/web/vendor/symfony/event-dispatcher/EventDispatcher.php:270",
"/home/ubuntu/projects/web/vendor/symfony/event-dispatcher/EventDispatcher.php:230",
"/home/ubuntu/projects/web/vendor/symfony/event-dispatcher/EventDispatcher.php:59",
"/home/ubuntu/projects/web/vendor/symfony/http-kernel/HttpKernel.php:190",
"/home/ubuntu/projects/web/vendor/symfony/http-kernel/HttpKernel.php:178",
"/home/ubuntu/projects/web/vendor/symfony/http-kernel/HttpKernel.php:79",
"/home/ubuntu/projects/web/vendor/symfony/http-kernel/Kernel.php:195",
"/home/ubuntu/projects/web/public/index.php:30"
],
"previous": {
"class": "PDOException",
"message": "SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction",
"code": 0,
"file": "/home/ubuntu/projects/web/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOStatement.php:117",
"trace": [
"/home/ubuntu/projects/web/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOStatement.php:117",
"/home/ubuntu/projects/web/vendor/symfony/http-foundation/Session/Storage/Handler/PdoSessionHandler.php:337",
"/home/ubuntu/projects/web/vendor/symfony/http-foundation/Session/Storage/Handler/AbstractSessionHandler.php:120",
"/home/ubuntu/projects/web/vendor/symfony/http-foundation/Session/Storage/Proxy/SessionHandlerProxy.php:71",
"/home/ubuntu/projects/web/vendor/symfony/http-foundation/Session/Storage/NativeSessionStorage.php:268",
"/home/ubuntu/projects/web/vendor/symfony/http-foundation/Session/Session.php:194",
"/home/ubuntu/projects/web/vendor/symfony/http-kernel/EventListener/AbstractSessionListener.php:112",
"/home/ubuntu/projects/web/vendor/symfony/event-dispatcher/EventDispatcher.php:270",
"/home/ubuntu/projects/web/vendor/symfony/event-dispatcher/EventDispatcher.php:230",
"/home/ubuntu/projects/web/vendor/symfony/event-dispatcher/EventDispatcher.php:59",
"/home/ubuntu/projects/web/vendor/symfony/http-kernel/HttpKernel.php:190",
"/home/ubuntu/projects/web/vendor/symfony/http-kernel/HttpKernel.php:178",
"/home/ubuntu/projects/web/vendor/symfony/http-kernel/HttpKernel.php:79",
"/home/ubuntu/projects/web/vendor/symfony/http-kernel/Kernel.php:195",
"/home/ubuntu/projects/web/public/index.php:30"
]
}
I've found that increasing innodb_lock_wait_timeout could help, but the issue only occurs on the production server, so I don't want to make any hasty moves. What do you think about it?
Some people simply kill the blocking transactions manually, but that would be useless here because the critical errors are thrown during ordinary HTTP requests. Either way, I don't believe it's a long-term solution; it's a bit hackish.
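Before changing innodb_lock_wait_timeout in production, it may help to see what is actually holding the lock when the error fires; the trace above shows the timeout being hit inside PdoSessionHandler, i.e. on the session-row write. A minimal diagnostic sketch, assuming MySQL 5.6/5.7 and plain PDO (the credentials are placeholders; on MySQL 8.0 the lock-wait view lives in performance_schema instead of information_schema):

<?php
// Diagnostic sketch: list transactions currently waiting for a lock and the
// transactions blocking them. Assumes MySQL 5.6/5.7, where these tables live
// in information_schema; MySQL 8.0 moved the lock-wait view to
// performance_schema.data_lock_waits. Credentials below are placeholders.
$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'app_user', 'secret');

$sql = "
    SELECT r.trx_mysql_thread_id AS waiting_thread,
           r.trx_query           AS waiting_query,
           b.trx_mysql_thread_id AS blocking_thread,
           b.trx_query           AS blocking_query,
           b.trx_started         AS blocking_started
    FROM information_schema.INNODB_LOCK_WAITS w
    JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
    JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id
";

foreach ($pdo->query($sql) as $row) {
    // The blocking query (often an idle, long-running transaction) is what
    // needs shortening; raising the timeout may only hide it for longer.
    printf(
        "thread %s is blocked by thread %s (started %s): %s\n",
        $row['waiting_thread'],
        $row['blocking_thread'],
        $row['blocking_started'],
        $row['blocking_query'] !== null ? $row['blocking_query'] : '(idle)'
    );
}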
Related
All of a sudden (to my knowledge there have been no modifications or updates to this live application) I am getting the following error when trying to queue jobs in Lumen:
(3/3) QueryException SQLSTATE[HY000]: General error: 1205 Lock wait
timeout exceeded; try restarting transaction (SQL: insert into jobs
(queue, payload, attempts, reserved_at, available_at,
created_at) values (helpdocs,
{"displayName":"App\Jobs\Helpdocs\FetchUpdateCategories","job":"Illuminate\Queue\CallQueuedHandler#call","maxTries":null,"timeout":null,"timeoutAt":null,"data":{"commandName":"App\Jobs\Helpdocs\FetchUpdateCategories","command":"O:39:\"App\Jobs\Helpdocs\FetchUpdateCategories\":5:{s:6:\"\u0000*\u0000job\";N;s:10:\"connection\";N;s:5:\"queue\";N;s:5:\"delay\";N;s:7:\"chained\";a:0:{}}"}},
0, , 1520427665, 1520427665))
in Connection.php (line 664)
It seems that it does not matter which concrete job is queued.
Excerpt of the relevant PHP code:
case "update-categories": {
Queue::pushOn('helpdocs',new FetchUpdateCategories());
return "The Jobs has been queued.";
break;
}
case "articles-to-local": {
Queue::pushOn('helpdocs',new FetchRemoteArticles());
return "The Job has been queued.";
break;
}
What may be causing this problem, and how can I solve it?
Note: if I run the jobs' handle() method directly, they succeed.
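For reference, these are the two code paths being compared; a minimal sketch, assuming the Queue facade is available as in the excerpt above and that handle() takes no arguments, as implied by running it directly:

<?php
use App\Jobs\Helpdocs\FetchUpdateCategories;
use Illuminate\Support\Facades\Queue;

// Path 1: run the job synchronously in the current process -- no row is
// written to the jobs table. This is the path that works.
(new FetchUpdateCategories())->handle();

// Path 2: serialize the job and INSERT it into the jobs table -- this is
// the statement that hits the 1205 lock wait timeout.
Queue::pushOn('helpdocs', new FetchUpdateCategories());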
I am getting errors on my OpenCart website, but the real problem is that the error also displays my database login and password. How can I fix this?
The error raised looks like this:
Fatal error: Uncaught exception "ErrorException" with message "Error: Could not make a database link (1040) Too many connections" in /home/*******/public_html/system/database/mysqli.php:9 Stack trace: #0 /home//*******//public_html/vqmod/vqcache/vq2-system_library_db.php(13): DBMySQLi->__construct("localhost", "/*******/", "/*******/", "*******") #1 /home/******/public_html/index.php(46): DB->__construct("mysqli", "localhost", "/*******/", "/*******/", "/*******/") #2 {main} thrown in /home/*******/public_html/system/database/mysqli.php on line 9
OpenCart Version 1.5.6.1
To address the most pressing issue: in production environments you should turn off the display of errors (and log them instead).
There are two configuration settings to look into, with a short sketch after the list:
error_reporting: http://php.net/manual/en/function.error-reporting.php
display_errors: http://php.net/manual/en/errorfunc.configuration.php#ini.display-errors
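A minimal sketch of what those settings look like in practice, assuming they are applied early in the request (they can equally be set in php.ini; the log path is hypothetical):

<?php
// Production-style settings: keep reporting everything, but never print
// errors (and credentials) to the page -- send them to a log file instead.
error_reporting(E_ALL);
ini_set('display_errors', '0');
ini_set('log_errors', '1');
ini_set('error_log', '/home/USER/logs/php-error.log');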
For safety's sake, you should probably change your database password as well. Even if no one has seen it, it's not worth the risk.
There are numerous questions on SO about the too many connections issue. Here's one quite well-upvoted answer: php, mysql - Too many connections to database error
What kind of precautions can I take (if I actually can) in my script to avoid getting this error?
Zend_Db_Statement_Exception: SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction in C:\wamp\www\library\Zend\Db\Statement\Pdo.php on line 234
I understand this is a database error, but is there a way I can check for locked tables before I run my update query?
That is, can I check whether the table I am updating is free before actually updating it?
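Checking first is inherently racy - another session can take the lock between the check and the UPDATE - so a common alternative is what the error message itself suggests: keep the transaction short and retry on error 1205. A sketch with Zend_Db, assuming $db is an already-configured Zend_Db_Adapter_Pdo_Mysql instance and that the table, column and values are placeholders:

<?php
// Sketch of the "try restarting transaction" approach: retry a few times
// when a lock wait timeout (1205) is reported, rethrow anything else.
$maxAttempts = 3;

for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
    try {
        $db->beginTransaction();
        $db->update('my_table', array('status' => 'done'), $db->quoteInto('id = ?', 42));
        $db->commit();
        break; // success
    } catch (Zend_Db_Statement_Exception $e) {
        $db->rollBack();
        // Only retry lock wait timeouts, and give up after the last attempt.
        if ($attempt === $maxAttempts || strpos($e->getMessage(), '1205') === false) {
            throw $e;
        }
        usleep(250000 * $attempt); // brief back-off before retrying
    }
}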
My company recently added MongoDB to the databases that we use and things have been going mostly smoothly, but every once in a while we get an odd error. It is near impossible to duplicate, and has only happened four times in the last week of testing, but once we go live to production our clients will be using the site much more frequently than we were for testing. We are trying to solve the bug before it gets out of hand.
The error we get is: (line breaks added for readability)
Fatal error: Uncaught exception 'MongoCursorException' with message
'Failed to connect to: 10.0.1.114:27017: send_package: error reading from socket:
Timed out waiting for header data' in
/opt/local/apache2/htdocs/stage2/library/Shanty/Mongo/Connection/Group.php on line 134
We are using ShantyMongo in PHP and it is a remote connection. The error is really intermittent, and refreshing the page is enough to make it go away. As a temporary solution, we have wrapped all of our Mongo calls in a for loop with try/catch, so that if a MongoException is thrown we retry the call up to two more times, the hope being that it will succeed on one of the three attempts since the error is so unpredictable.
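The retry wrapper described above might look roughly like this (a sketch, not the exact production code; the document class and id in the usage example are hypothetical, and the three-attempt limit comes from the description):

<?php
// MongoCursorException extends MongoException, so catching the base class
// covers the intermittent connection error shown above.
function retryMongo($operation, $maxAttempts = 3)
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        try {
            return $operation();
        } catch (MongoException $e) {
            if ($attempt === $maxAttempts) {
                throw $e; // still failing after the final attempt -- give up
            }
        }
    }
}

// Usage: wrap the Shanty_Mongo call instead of calling it directly.
$user = retryMongo(function () {
    return User::find('4f1d8e...');
});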
I have Nginx + PHP-FPM installed on my server. We have been load-testing the server for a long period with 30 concurrent users.
It works fine at first, but after some time it starts throwing 502 Bad Gateway errors.
I have included excerpts from the nginx log, the php-fpm log, and the php-fpm slow log.
Entries are being logged in the php-fpm slow log because of a long-running script and the load on the server. I think this is the reason for the 502 Bad Gateway errors, but I don't know how to solve the problem.
What tweaks do I need to make in php-fpm.conf so that these errors get resolved?
How can I make nginx wait longer for a response from php-fpm?
How can I increase the php-fpm max execution time?
Here are the logs:
NGINX LOG
2013/01/29 15:03:38 [error] 2493#0: 1046562 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 49.248.0.2, server: ****.com, request: "GET MY_SCRIPT_URI HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "*****.com", referrer: "MY_SCRIPT_URL"
2013/01/29 15:03:39 [error] 2493#0: 1046561 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 49.248.0.2, server: ***.com, request: "GET MY_SCRIPT_URI HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "*****.com", referrer: "MY_SCRIPT_URL"
There are many errors of this type; they repeat throughout the file.
PHP FPM LOG
[14-Feb-2013 12:54:13] ERROR: failed to ptrace(PEEKDATA) pid 10748: Input/output error (5)
[14-Feb-2013 12:54:18] ERROR: failed to ptrace(PEEKDATA) pid 10112: Input/output error (5)
[14-Feb-2013 12:54:18] ERROR: failed to ptrace(PEEKDATA) pid 12147: Input/output error (5)
[14-Feb-2013 12:54:19] ERROR: failed to ptrace(PEEKDATA) pid 30857: Input/output error (5)
[snip: many more]
PHP FPM SLOW LOG
[14-Feb-2013 12:55:13] [pool www] pid 10748
script_filename = MY_SCRIPT_PATH
[0x00007f446e8e06b0] curl_exec() MY_SCRIPT_PATH_1.php:317
[0x00007f446e8e0490] callService() MY_SCRIPT_PATH_2:1331
[0x00007f446e8e0148] convertToPurchaseOrders() MY_SCRIPT_PATH_3:15
[0x00007fff0102b4d0] convertToPurchaseOrders() unknown:0
[0x00007f446e8de0d8] call_user_func_array() MY_SCRIPT_PATH_4:359
[0x00007f446e8dd4d0] +++ dump failed
[14-Feb-2013 12:55:13] [pool www] pid 10117
script_filename = MY_SCRIPT_PATH
[0x00007f446e8e06b0] curl_exec() MY_SCRIPT_PATH_1.php:317
[0x00007f446e8e0490] callService() MY_SCRIPT_PATH_2:1331
[0x00007f446e8e0148] convert() MY_SCRIPT_PATH_3:15
[0x00007fff0102b4d0] convert() unknown:0
[0x00007f446e8de0d8] call_user_func_array() MY_SCRIPT_PATH_4:359
[0x00007f446e8dd4d0] +++ dump failed
Firstly, 30 concurrent users is a fairly low load - depending on the application and the hardware, I'd expect a setup like this to handle significantly more.
Reading the slowlog, it looks like your application is invoking a curl_exec() command, and that this command is slow. What I'm guessing is happening is that your 30 concurrent users are all requesting your script; your script, in turn, is calling another web application somewhere, which is either very slow in responding, or timing out altogether (based on max_execution_time in php.ini). I don't know the ins and outs of NGINX and PHP-FPM, but I assume that there's a maximum number of concurrent PHP instances it fires up; as those instances are all tied up waiting for your CURL request to return, NGINX is unable to fire up any more instances of PHP, and returns a bad gateway instead.
The first thing I'd look at is speeding up the response time of your script, either by running the cURL request asynchronously, caching its result, or finding another way of speeding it up - a synchronous cURL request basically means the performance and scalability of your site depend completely on the performance and scalability of the URL you're calling.
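As an example of the caching option, a crude sketch assuming the APCu extension is available; the function name, URL and TTL are placeholders:

<?php
// Cache the remote response so 30 concurrent users trigger one outbound HTTP
// request per TTL window instead of 30 blocking cURL calls.
function fetchRemote($url, $ttlSeconds = 60)
{
    $key = 'remote:' . md5($url);

    $cached = apcu_fetch($key, $found);
    if ($found) {
        return $cached; // served from cache, no worker blocks on cURL
    }

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $body = curl_exec($ch);
    curl_close($ch);

    if ($body !== false) {
        apcu_store($key, $body, $ttlSeconds);
    }

    return $body;
}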
If you can't do that, reduce the timeout for the cURL request itself (via CURLOPT_TIMEOUT on the handle, rather than max_execution_time in php.ini, which limits the script as a whole) to around 5 seconds; this will cause some of your cURL requests to fail, but at least your app can handle that and return to the user more quickly; it also means you have far fewer PHP workers tied up waiting.
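A sketch of capping the cURL call, using the standard CURLOPT_CONNECTTIMEOUT / CURLOPT_TIMEOUT options; the URL and the exact limits are placeholders:

<?php
// Fail fast instead of holding a PHP-FPM worker for the full upstream delay.
$ch = curl_init('https://remote-service.example/api');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2); // seconds to establish the connection
curl_setopt($ch, CURLOPT_TIMEOUT, 5);        // total seconds for the whole request

$response = curl_exec($ch);
if ($response === false) {
    // e.g. a timeout -- log it and degrade gracefully instead of hanging
    error_log('Remote call failed: ' . curl_error($ch));
    $response = null;
}
curl_close($ch);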
There is, presumably, a way of increasing the number of PHP workers that get fired up (pm.max_children in the PHP-FPM pool configuration); you can play with that, but you're only moving the problem marginally - no server can gracefully support a large number of workers that are all sitting around waiting.