Laravel / PHP / MySQL - Packets out of order [duplicate]

I have a Laravel Spark project that uses Horizon to manage a job queue with Redis.
Locally (on my Homestead box, macOS) everything works as expected, but on our new DigitalOcean (Forge-provisioned) Droplet, a memory-optimized VPS with 256GB RAM, 32 vCPUs, 10TB transfer, and 1x 800GB disk, I keep getting the error:
PDOException: Packets out of order. Expected 0 received 1. Packet size=23
Or some variation of that error, where the packet size info may be different.
After many hours/days of debugging and research, I have come across many posts on Stack Overflow and elsewhere that suggest this can be fixed by a number of things, listed below:
Set PDO::ATTR_EMULATE_PREPARES to true in my database.php config (a sketch of how these PDO-level options were applied in config/database.php appears below). This has absolutely no effect on the problem, and actually introduces another issue whereby integers are cast as strings.
Set DB_HOST to 127.0.0.1 instead of localhost, so that it uses TCP instead of a UNIX socket. Again, this has no effect.
Set DB_SOCKET to the socket path listed in MySQL, by logging into MySQL (MariaDB) and running show variables like '%socket%';, which lists the socket path as /run/mysqld/mysqld.sock. I also leave DB_HOST set to localhost. This has no effect either. One thing I did note was that the pdo_mysql.default_socket variable is set to /var/run/mysqld/mysqld.sock; I'm not sure if this is part of the problem.
I have massively increased the MySQL configuration settings found in /etc/mysql/mariadb.conf.d/50-server.cnf to the following:
key_buffer_size = 2048M
max_allowed_packet = 2048M
max_connections = 1000
thread_concurrency = 100
query_cache_size = 256M
I must admit that changing these settings was a last-resort, clutching-at-straws type of scenario. However, this did alleviate the issue to some degree, but it did not fix it completely, as MySQL still fails 99% of the time, albeit at a later stage.
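For reference, here is roughly how the PDO-level attempts above (emulated prepares and the explicit socket) were applied in config/database.php; this is the standard Laravel mysql connection block, not my exact file:
'mysql' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST', '127.0.0.1'),
    'port' => env('DB_PORT', '3306'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'unix_socket' => env('DB_SOCKET', ''), // set to /run/mysqld/mysqld.sock in one of the attempts
    'options' => extension_loaded('pdo_mysql') ? array_filter([
        PDO::ATTR_EMULATE_PREPARES => true, // the attempt from the first item above; no effect
    ]) : [],
],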
In terms of the queue, I have a total of 1,136 workers split between 6 supervisors/queues, all handled via Laravel Horizon, which is run as a daemon.
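For context, the supervisors are defined in config/horizon.php roughly like this (the queue name and process count here are illustrative, not the real production values):
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'balance' => 'auto',
            'processes' => 200, // illustrative; the real counts add up to 1,136 across 6 supervisors
            'tries' => 3,
        ],
        // ...five more supervisors defined the same way
    ],
],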
I am also using the Laravel Websockets PHP package for broadcasting, which is also run as a daemon.
My current environment configuration is as follows (sensitive info omitted).
APP_NAME="App Name"
APP_ENV=production
APP_DEBUG=false
APP_KEY=thekey
APP_URL=https://appurl.com
LOG_CHANNEL=single
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=databse
DB_USERNAME=username
DB_PASSWORD=password
BROADCAST_DRIVER=pusher
CACHE_DRIVER=file
QUEUE_CONNECTION=redis
SESSION_DRIVER=file
SESSION_LIFETIME=120
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
MAIL_MAILER=smtp
MAIL_HOST=smtp.gmail.com
MAIL_PORT=587
MAIL_USERNAME=name@email.com
MAIL_PASSWORD=password
MAIL_ENCRYPTION=tls
MAIL_FROM_ADDRESS=name@email.com
MAIL_FROM_NAME="${APP_NAME}"
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION="us-east-1"
AWS_BUCKET=
PUSHER_APP_ID=appid
PUSHER_APP_KEY=appkey
PUSHER_APP_SECRET=appsecret
PUSHER_APP_CLUSTER=mt1
MIX_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
MIX_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"
AUTHY_SECRET=
CASHIER_CURRENCY=usd
CASHIER_CURRENCY_LOCALE=en
CASHIER_MODEL=App\Models\User
STRIPE_KEY=stripekey
STRIPE_SECRET=stripesecret
# ECHO SERVER
LARAVEL_WEBSOCKETS_PORT=port
The server setup is as follows:
Max File Upload Size: 1024
Max Execution Time: 300
PHP Version: 7.4
MariaDB Version: 10.3.22
I have checked all logs (see below) at the time the MySQL server crashes/goes away, and there is nothing in the MySQL logs at all. No error whatsoever. I also don't see anything in:
/var/log/nginx/error.log
/var/log/nginx/access.log
/var/log/php7.4-fpm.log
I'm currently still digging through and debugging, but right now, I'm stumped. This is the first time I've ever come across this error.
Could this be down to hitting the database (read/write) too fast?
A little information on how the queues work.
I have an initial controller that dispatches a job to the queue.
Once this job completes, it fires an event which then starts the process of running several other listeners/events in sequence, all of which depend on the previous jobs completing before new events are fired and new listeners/jobs take up the work.
In total, there are 30 events that are broadcast.
In total, there are 30 listeners.
In total there are 5 jobs.
These all work sequentially based on the listener/job that was run and the event that it fires.
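A stripped-down sketch of that flow (class names are hypothetical, and use statements and the usual queueing traits are omitted for brevity):
// Controller: kick off the first job in the chain
ProcessUpload::dispatch($upload);

// Job: does its work, then fires an event when it finishes
class ProcessUpload implements ShouldQueue
{
    public function handle()
    {
        // ...do the work...
        event(new UploadProcessed($this->upload));
    }
}

// Listener: reacts to that event and dispatches the next job in the sequence
class StartNextStage
{
    public function handle(UploadProcessed $event)
    {
        BuildReport::dispatch($event->upload);
    }
}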
I have also monitored laravel.log live, and when the crash occurs, nothing is logged at all. I do occasionally get production.ERROR: Failed to connect to Pusher. whether or not MySQL crashes, so I don't think that has any bearing on this problem.
I even noticed that the Laravel API rate limit was being hit, so I drastically increased it from 60 to 500. Still no joy.
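For clarity, that change was made to the throttle middleware on the api group in app/Http/Kernel.php (assuming the framework default; the value is the only thing changed):
'api' => [
    'throttle:500,1',   // was 'throttle:60,1'
    \Illuminate\Routing\Middleware\SubstituteBindings::class,
],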
Lastly, it doesn't seem to matter which Event, Job, or Listener is running as the error occurs on random ones. So, not sure it's code-specific, although, it may well be.
Hopefully, I've provided enough background and detailed information to get some help with this, but if I've missed anything, please do let me know and I'll add it to the question. Thanks.

For me what fixed it was increasing the max packet size.
In my.cnf, I added:
max_allowed_packet=200M
And then service mysql stop, service mysql start, and it worked :)

We were getting a similar PHP warning about packets out of order.
What solved it for us is increasing max_connections in the MySQL my.cnf.
Your current max_connections is probably 1024. We increased ours to 4096 and the warning went away.
In MySQL you can see your current max_connections with this command:
SHOW VARIABLES LIKE "%max_connections%";
or
mysqladmin variables | grep max_connections

I hit a similar issue that was reproducible; it turned out to be a programming error:
I was using an unbuffered database cursor and did not close the cursor before firing off other DB operations. The exact error thrown was Packets out of order. Expected 1 received 2.
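In PDO terms, the equivalent mistake and fix look roughly like this (a sketch, assuming buffered queries have been turned off):
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

$stmt = $pdo->query('SELECT * FROM big_table');
// ...read only part of the result set...

// Without this, the next query is issued while unread rows are still pending
// on the connection, which can surface as out-of-order packet errors.
$stmt->closeCursor();

$pdo->query('UPDATE other_table SET processed = 1');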

The first thing to check is the wait_timeout of the MySQL server, in relation to the time that your application takes between queries. I'm able to recreate this error consistently by sleeping longer than wait_timeout seconds between SQL queries.
If your application performs a query, then does something else for a while that takes longer than that period, the MySQL server terminates the connection, but your PHP code may not be aware that the server has disconnected. If the PHP application then tries to issue another query using the closed connection, it will generate this error (in my tests, consistently with Expected 0 received 1).
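A minimal way to reproduce that, assuming a test server where wait_timeout has been lowered to 10 seconds:
$pdo = new PDO('mysql:host=127.0.0.1;dbname=test', 'user', 'secret');

$pdo->query('SELECT 1');  // fine
sleep(15);                // longer than the test wait_timeout; the server drops the idle connection
$pdo->query('SELECT 1');  // throws PDOException once the connection has been closed server-side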
You could fix this by:
Extending the wait_timeout, either globally on the server, or on a per-session basis using the command SET session wait_timeout=<new_value>;
Catching the error and retrying once
Preemptively reconnecting to the server when you know that more than wait_timeout seconds have elapsed between queries.
This error could probably occur because of other problems as well.
I would check that you are using a persistent connection and not connecting to the server over and over again. Sometimes the connection process, especially with many simultaneous workers, causes a lot of network overhead that could cause a problem such as this.
Also, sometimes on a production, high-transaction-volume server, weird network things happen, and this may just occur occasionally, even, it seems, over the loopback interface in your case.
In any case, it is best to write your code so that it can gracefully handle errors and retry. Often, you could wrap your SQL query in a try..catch to catch this error when it happens and try again.
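In this Laravel setup, a rough sketch of that catch-and-retry approach, using the DB facade so the framework can re-establish the connection:
use Illuminate\Support\Facades\DB;

try {
    $rows = DB::select('select * from jobs where status = ?', ['pending']);
} catch (\PDOException $e) {
    // The server closed the idle connection; reconnect and retry once.
    DB::reconnect();
    $rows = DB::select('select * from jobs where status = ?', ['pending']);
}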

MySQL 8 - in mysql.cnf, I disabled (commented out) all of these:
# For error - ( MySQL server has gone away )
#wait_timeout=90
#net_read_timeout=90
#net_write_timeout=90
#interactive_timeout=300
and that seems to have fixed it for me.

Related


Get 500 Error in Laravel with SQL Server database in long results

I connect to SQL Server in Laravel using PDO (on Wamp Server).
Normally I have no problem, but when the number of result rows increases (more than 50,000 rows) and I also use a LEFT JOIN, I get a 500 error or a blank white page.
Is there a way to solve this problem?
It could be a timeout issue on processing the request (check the PHP error log to confirm this hypothesis).
By default there is this setting in php.ini:
max_execution_time = 30 ; Maximum execution time of each script, in seconds
If processing takes more than max_execution_time value (in seconds), the script ends with fatal error.
If you are not in safe mode, you can change this value at runtime, for a single script only, by calling:
set_time_limit ( int $seconds )
See PHP manual for details
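For example, near the top of the script that builds the large result set (300 is just an illustrative value):
set_time_limit(300); // allow up to 5 minutes for this request instead of the 30-second default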
I was getting the same issue and was not able to see the Laravel error page.
Then I found the solution.
Check the .env file for
APP_DEBUG=false
and change it to true.

PHP Script stops executing with many objects

I have a script which creates a linked-list implementation of messages being sent between users.
Everything works fine until the number of messages rises to about 77,000.
For every message an object is created, and every object holds a reference to the next message object.
I enabled error reporting and increased the memory limit; I don't get any errors and the HTTP status code is 200 OK, even though the developer console tells me that the request failed.
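A stripped-down version of the structure being described (class and property names are illustrative):
class Message
{
    /** @var Message|null */
    public $next;
    public $body;

    public function __construct($body)
    {
        $this->body = $body;
    }
}

$head = null;
$prev = null;
for ($i = 0; $i < 77000; $i++) {
    $msg = new Message("message $i");
    if ($prev === null) {
        $head = $msg;
    } else {
        $prev->next = $msg; // every object holds a reference to the next message
    }
    $prev = $msg;
}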
If you have verified that it is not a memory limit issue, this could be a limitation of PHP, similar to this question:
How to Avoid PHP Object Nesting/Creation Limit?
If you need to work with 77,000 objects in the same PHP script, something is wrong with the architecture; PHP is not the right choice for such workloads (even if it can handle this under some circumstances).
To track down this particular error, try setting the following in php.ini:
display_errors=1
display_startup_errors=1
error_reporting=-1
log_errors=1
memory_limit=to any reasonable value
max_input_time=to any reasonable value
max_execution_time=to any reasonable value
report_memleaks=1
error_log=writable path
consider using xdebug extension
don't forget to restart apache after changing proper php.ini (you can have different php.ini for apache and cli)
check if any set_error_handler or set_exception_handler functions are called in your code
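Related to that last point, a temporary shutdown handler can also reveal a fatal error that never reaches the normal output (an extra diagnostic, not a php.ini setting):
register_shutdown_function(function () {
    $error = error_get_last();
    if ($error !== null) {
        error_log('Shutdown error: ' . print_r($error, true));
    }
});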

PHP Redis timeout, read error on connection?

"PHP Fatal error: Uncaught exception 'RedisException' with message 'read error on connection'"
The driver here is phpredis
$redis->blpop('a', 0);
This always times out after ~1 minute. My redis.conf says timeout 0 and $redis->getOption(Redis::OPT_READ_TIMEOUT) returns double(0)
If I do this, it never times out: $redis->setOption(Redis::OPT_READ_TIMEOUT, -1);
Why do I need -1? Redis documentation says timeout 0 in redis.conf should never time me out.
"By default recent versions of Redis don't close the connection with the client if the client is idle for many seconds: the connection will remain open forever."
The current solution I know of is to disable persistent connections for phpredis, as they have been reported as buggy since October 2011. If you’re using php-fpm or other threaded models, the library specifically disables persistent connections.
Reducing the frequency of this error might be possible by adjusting the php.ini default_socket_timeout value.
Additionally, read timeout configurations in phpredis are not universally supported. The feature (look for OPT_READ_TIMEOUT) was introduced in tag 2.2.3.
$redis->connect($host, $port, $timeout1);
// ...
$redis->blpop($key, $timeout2);
where $timeout1 must be longer than $timeout2.
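Putting those pieces together, a sketch with concrete (illustrative) values:
$redis = new Redis();
$redis->connect('127.0.0.1', 6379, 90);           // connection timeout: longer than the blocking pop below
$redis->setOption(Redis::OPT_READ_TIMEOUT, 90);   // read timeout must also outlast the blpop timeout
$item = $redis->blpop('a', 30);                   // block for at most 30 seconds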
After a lot of study of articles and doing my own straces of redis and php, it seemed the issue was easily fixed by this solution. The main issue in my use case was that the redis server was not able to fork a process for saving the in-memory writes to the on-disk db.
I left all the timeout values in php.ini and redis.conf as they were, without making the hacky changes suggested, and tried the above solution alone; the 'read error on connection' issue, which was unfixable with all the suggestions around changing timeout values across the php and redis conf files, went away.
I also saw some suggestions around increasing the file descriptor limit to 100000, etc. I am running my use case on a cloud server with a file descriptor limit of 1024, and it runs perfectly even with that limit.
I added ini_set('default_socket_timeout', -1) to my PHP program, but I found it didn't work immediately.
However, after 3 minutes, when I started to run the PHP program again, I finally found the reason: the Redis connection was not persistent.
So I set timeout=0 in my redis.conf, and the problem was solved!

Frequent "Connection Timeout" errors on a shared server using PHP/MYSQL

I have a Drupal site on a shared web host, and it's getting a lot of connection errors. It's the first time I have seen so many connection timeout errors on a server. I'm thinking it's something in the configuration settings. Non-drupal parts of the site are not giving as many connection errors.
Since this hosting provider doesn't give me access to the php.ini file, I put one at my docroot to modify the lines that I thought would be causing this:
memory_limit = 128M
max_execution_time = 259200
set_time_limit = 30000
But it didn't work. There is no improvement in the frequency of the timeout errors. Does anyone have any other ideas about this type of error?
Thanks.
You can control the time limit on a script while your script is running. Add a call to set_time_limit near the top of your PHP pages to see if it helps.
Ideally you need to figure out what your actual limits are, as defined by your host. A call to phpinfo() somewhere will let you see all the config settings that your server has in place.
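For example, on one of the slow pages (120 is just an illustrative value):
set_time_limit(120); // raise the limit for this request only

// and on a separate throwaway page, to see the limits your host actually enforces:
phpinfo();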
