MySQL Error 1205 Lock wait timeout exceeded; try restarting transaction - php

What kind of precautions can I take (if I actually can) in my script to avoid getting this error?
Zend_Db_Statement_Exception: SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction in C:\wamp\www\library\Zend\Db\Statement\Pdo.php on line 234
I understand this is a database error, but is there a way I can check for locked tables before I run my update query?
That is, check whether the table I am updating is free before actually updating it.
Thanks guys,
Aman
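There is no race-free way to "check first": another session can grab the lock between the check and the UPDATE. The usual pattern is to keep transactions short and retry when MySQL reports 1205. Below is a minimal sketch, not from the original post, assuming a configured Zend_Db adapter in $db; the table name, the id column, and the helper function are made up for illustration.

<?php
// Sketch only: retry an UPDATE a few times when MySQL reports error 1205
// (lock wait timeout). $db, 'my_table' and 'id' are placeholders.
function updateWithRetry(Zend_Db_Adapter_Abstract $db, $id, array $data, $maxAttempts = 3)
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        try {
            $db->beginTransaction();
            $db->update('my_table', $data, $db->quoteInto('id = ?', $id));
            $db->commit();
            return true;
        } catch (Zend_Db_Statement_Exception $e) {
            $db->rollBack();
            // Only retry on 1205; rethrow anything else, and the final failure.
            if (strpos($e->getMessage(), '1205') === false || $attempt === $maxAttempts) {
                throw $e;
            }
            sleep(1); // give the competing transaction a moment to finish
        }
    }
    return false;
}

Lowering the per-session wait with SET innodb_lock_wait_timeout (the InnoDB default is 50 seconds) also makes the failure surface quickly so the retry loop can kick in instead of hanging.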

Related

Lumen/Laravel queuing jobs: General error: 1205 Lock wait timeout exceeded; try restarting transaction

All of a sudden (there have been no modifications or updates to this live application as far as I know) I am getting the following error when trying to queue jobs in Lumen:
(3/3) QueryException SQLSTATE[HY000]: General error: 1205 Lock wait
timeout exceeded; try restarting transaction (SQL: insert into jobs
(queue, payload, attempts, reserved_at, available_at,
created_at) values (helpdocs,
{"displayName":"App\Jobs\Helpdocs\FetchUpdateCategories","job":"Illuminate\Queue\CallQueuedHandler#call","maxTries":null,"timeout":null,"timeoutAt":null,"data":{"commandName":"App\Jobs\Helpdocs\FetchUpdateCategories","command":"O:39:\"App\Jobs\Helpdocs\FetchUpdateCategories\":5:{s:6:\"\u0000*\u0000job\";N;s:10:\"connection\";N;s:5:\"queue\";N;s:5:\"delay\";N;s:7:\"chained\";a:0:{}}"}},
0, , 1520427665, 1520427665))
in Connection.php (line 664)
It does not seem to matter which concrete job is queued.
Excerpt of the relevant PHP code:
case "update-categories": {
Queue::pushOn('helpdocs',new FetchUpdateCategories());
return "The Jobs has been queued.";
break;
}
case "articles-to-local": {
Queue::pushOn('helpdocs',new FetchRemoteArticles());
return "The Job has been queued.";
break;
}
What may be the cause of this problem, and how can it be solved?
Note: If I run a job's handle() method directly, it succeeds.
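Since the timeout happens on a plain INSERT into jobs, something else must be holding a lock on that table for longer than innodb_lock_wait_timeout. A first step is to look at the open InnoDB transactions and lock waits. The sketch below is not from the original post; it queries the standard information_schema views of MySQL 5.6/5.7 through Laravel's DB facade (which in Lumen requires facades and the database component to be enabled):

<?php
// Sketch: list long-running transactions and who is blocking whom, to find
// the transaction that keeps the INSERT INTO jobs waiting.
use Illuminate\Support\Facades\DB;

$longRunning = DB::select("
    SELECT trx_id, trx_started, trx_mysql_thread_id, trx_query
    FROM information_schema.INNODB_TRX
    ORDER BY trx_started
");

$lockWaits = DB::select("
    SELECT r.trx_mysql_thread_id AS waiting_thread,
           r.trx_query           AS waiting_query,
           b.trx_mysql_thread_id AS blocking_thread,
           b.trx_query           AS blocking_query
    FROM information_schema.INNODB_LOCK_WAITS w
    JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
    JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id
");

A stuck queue worker, or a transaction that was opened and never committed (for example by a crashed script), typically shows up here as the blocking side.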

mysql howto find / debug error "mysqld got signal 11 ;"

Hope somebody can help here. We're running a MySQL server (5.7.18-0ubuntu0.17.04.1) serving only a single application. A few clients are connected, pushing data into and pulling data from the database.
Sometimes everything goes well for a long time, up to hours. Then we get errors like
PHP Warning: mysqli::__construct(): (HY000/2002): Connection refused
or
PHP Warning: mysqli::query(): MySQL server has gone away
or
PHP Warning: mysqli::query(): Error reading result set's header
and
PHP Fatal error: Uncaught TypeError: Return value of mysqliConnection::escapeValue() must be of the type string, null returned
The errors above (at least the fatal errors) cause Apache to return a 500 status. Not so good for our clients.
After searching, we found out they are all MySQL-related. Searching the MySQL logs gave us mysqld got signal 11 ;. We want to find out why this error / signal happens, but can't really pin it down. We've already tried a bunch of MySQL settings in my.cnf, but that doesn't seem to fix the issue.
See also these log lines on Pastebin: https://pastebin.com/GWDADudL
How can we find out what causes this error? We already used mysql.log to check whether it happens on a specific query, but it seems to be random. Most tables are MyISAM, some InnoDB. We also had a MEMORY table, but it's MyISAM now.
Help? Please? There seems to be something wrong, but what?
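This does not answer the root-cause question, but while the crash is being investigated, the client-facing 500s can at least be softened by checking the connection before each query and reconnecting once if the server has gone away. A rough sketch, with placeholder host and credentials:

<?php
// Mitigation sketch only: ping the connection and reconnect once if mysqld
// has restarted underneath us. Host and credentials are placeholders.
function getLiveConnection($conn)
{
    if (!($conn instanceof mysqli) || !@$conn->ping()) {
        $conn = new mysqli('127.0.0.1', 'user', 'pass', 'dbname');
        if ($conn->connect_errno) {
            throw new RuntimeException('MySQL reconnect failed: ' . $conn->connect_error);
        }
    }
    return $conn;
}

For the crash itself, the lines around mysqld got signal 11 in the error log usually include an attempted backtrace, which is the most direct clue to what the server was doing when it died.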

Mongo Cursor Exception: Timeout Waiting for Header Data

My company recently added MongoDB to the databases that we use and things have been going mostly smoothly, but every once in a while we get an odd error. It is near impossible to duplicate, and has only happened four times in the last week of testing, but once we go live to production our clients will be using the site much more frequently than we were for testing. We are trying to solve the bug before it gets out of hand.
The error we get is: (line breaks added for readability)
Fatal error: Uncaught exception 'MongoCursorException' with message
'Failed to connect to: 10.0.1.114:27017: send_package: error reading from socket:
Timed out waiting for header data' in
/opt/local/apache2/htdocs/stage2/library/Shanty/Mongo/Connection/Group.php on line 134
We are using ShantyMongo in PHP and it is a remote connection. The error is really intermittent, and refreshing the page is enough to make it go away. As a temporary solution, we have wrapped all of our Mongo methods in a for loop with a try/catch, so that if a MongoException is thrown we retry the method up to two more times, the hope being that one of the three attempts will succeed since the error is so unpredictable.
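A rough sketch of that retry wrapper (not the actual code from the post), assuming the legacy mongo extension where MongoCursorException extends MongoException:

<?php
// Sketch: run a Mongo operation and retry up to two more times if a
// MongoException is thrown; rethrow after the final attempt.
function retryMongo(callable $operation, $maxAttempts = 3)
{
    $lastException = null;
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        try {
            return $operation();
        } catch (MongoException $e) {
            $lastException = $e;
            usleep(200000); // short pause before hitting the flaky connection again
        }
    }
    throw $lastException;
}

// Usage (collection and query are placeholders):
// $docs = retryMongo(function () use ($collection) {
//     return iterator_to_array($collection->find(array('active' => true)));
// });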

Which query is causing Deadlock found when trying to get lock; try restarting transaction

I cannot figure out which query is causing Deadlock found when trying to get lock; try restarting transaction.
My wrapper for mysql has the following lines:
if (mysql_errno($this->conn) == 1213) {
    $this->bug_log(0,"Deadlock. SQL:".$this->sql);
}
where bug_log writes to a file.
The bug log file has no Deadlock errors, but /var/log/mysqld.log has multiple records:
111016 3:00:02 [ERROR] /usr/libexec/mysqld: Deadlock found when trying to get lock; try restarting transaction
111016 3:00:02 [ERROR] /usr/libexec/mysqld: Sort aborted
111016 3:00:02 [ERROR] /usr/libexec/mysqld: Deadlock found when trying to get lock; try restarting transaction
111016 3:00:02 [ERROR] /usr/libexec/mysqld: Sort aborted
111016 3:00:02 [ERROR] /usr/libexec/mysqld: Deadlock found when trying to get lock; try restarting transaction
111016 3:00:02 [ERROR] /usr/libexec/mysqld: Sort aborted
How can I track it down?
An UPDATE with a WHERE clause that is not on a unique column can cause a deadlock if another transaction is waiting for the current transaction to complete. Here's a quick test:
CREATE TABLE test (pk int PRIMARY KEY, a int);
INSERT INTO test VALUES (0, 0);
INSERT INTO test VALUES (1, 0);
Session 1
BEGIN;
SELECT a FROM test WHERE pk=0 FOR UPDATE;
Session 2
BEGIN;
SELECT a FROM test WHERE pk=0 FOR UPDATE;
(Session 2 is now blocked)
Session 1
UPDATE test SET a=1 WHERE a>0;
In Session 2 we receive the error:
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
If the UPDATE's WHERE clause uses only the pk column, the error does not occur.
I've seen this occur under one or more of the following conditions:
Joining on the same table multiple times in a query (SELF JOIN)
When using transactions that contain queries that manipulate the same table in multiple ways concurrently
When using transactions and using the same table as a SELF JOIN or a Sub-query
It can be difficult to track down, but the situation is basically that one query is preventing another from running, which in turn prevents the first from finishing, and so on.
http://en.wikipedia.org/wiki/Deadlock
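To see exactly which statements collided, InnoDB keeps the most recent deadlock in its status output. A sketch of dumping it from PHP (mysqli is used here rather than the old mysql_* functions from the wrapper above; the connection details are placeholders and the account needs the PROCESS privilege):

<?php
// Sketch: extract the "LATEST DETECTED DEADLOCK" section from
// SHOW ENGINE INNODB STATUS, which names both transactions and the
// statements that were waiting on each other.
$conn = new mysqli('127.0.0.1', 'user', 'pass', 'dbname');
$row = $conn->query('SHOW ENGINE INNODB STATUS')->fetch_assoc();
$status = $row['Status'];

if (preg_match('/LATEST DETECTED DEADLOCK\n-+\n(.*?)\n-{5,}\n[A-Z]/s', $status, $m)) {
    error_log("Last InnoDB deadlock:\n" . $m[1]);
} else {
    error_log('No deadlock recorded since the last server restart.');
}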

SQLSTATE[HY000]: General error: 5 Out of memory (Needed 4194092 bytes)

I'm receiving the following error on my shared hosting box:
SQLSTATE[HY000]: General error: 5 Out of memory (Needed 4194092 bytes)
This error is only triggered on a specific page.
I guess this indicates that I am reaching the upper limit of the 64MB allocated to me in my current MySQL environment.
Does this mean that a single query is going over (returning) 64 MB of data? If so, I guess I can just track down and tune that specific query? Or isn't that the correct approach?
It appears it failed to allocate about 4 MB during the query. You may be able to see this in the log output, for example with slow_queries enabled. It's most likely a SELECT query; you may be able to find it by running this in a nearby parent directory:
grep "SELECT" `find | grep "php$"`
