I'm running a server at my office to process some files and report the results to a remote MySQL server.
The file processing takes some time, and the process dies halfway through with the following error:
2006, MySQL server has gone away
I've heard about the MySQL setting, wait_timeout, but do I need to change that on the server at my office or the remote MySQL server?
I have encountered this a number of times and I've normally found the answer to be a very low default setting of max_allowed_packet.
Raising it in /etc/my.cnf (under [mysqld]) to 8M or 16M usually fixes it. (The default in MySQL 5.7 is 4194304 bytes, which is 4MB.)
[mysqld]
max_allowed_packet=16M
Note: Just create the line if it does not exist
Note: This can be set on your server as it's running.
Note: On Windows you may need to save your my.ini or my.cnf file with ANSI not UTF-8 encoding.
Use SET GLOBAL max_allowed_packet=104857600; to set it to 100MB at runtime.
I had the same problem, but changing max_allowed_packet in the my.ini/my.cnf file under [mysqld] did the trick.
Add a line
max_allowed_packet=500M
then restart the MySQL service once you are done.
I used the following command in the MySQL command line to restore a MySQL database whose size was more than 7GB, and it worked:
set global max_allowed_packet=268435456;
It may be easier to check if the connection exists and re-establish it if needed.
See PHP:mysqli_ping for info on that.
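For example, a minimal sketch with mysqli ($host, $user, $pass and $db are placeholders for your own connection details, not values from the question):
<?php
// Reconnect if the existing connection no longer responds to a ping.
if (!isset($mysqli) || !$mysqli->ping()) {
    $mysqli = new mysqli($host, $user, $pass, $db);
    if ($mysqli->connect_errno) {
        die('Reconnect failed: ' . $mysqli->connect_error);
    }
}
// $mysqli is now safe to query again.
?>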
There are several causes for this error.
MySQL/MariaDB related:
wait_timeout - Time in seconds that the server waits for activity on a non-interactive connection before closing it.
interactive_timeout - Time in seconds that the server waits for activity on an interactive connection before closing it.
max_allowed_packet - Maximum size in bytes of a packet or a generated/intermediate string. Set as large as the largest BLOB, in multiples of 1024.
Example of my.cnf:
[mysqld]
# 8 hours
wait_timeout = 28800
# 8 hours
interactive_timeout = 28800
max_allowed_packet = 256M
Server related:
Your server is out of memory - check RAM usage with free -h
Framework related:
Check the settings of your framework. Django, for example, uses CONN_MAX_AGE (see the docs).
How to debug it:
Check values of MySQL/MariaDB variables.
with sql: SHOW VARIABLES LIKE '%time%';
command line: mysqladmin variables
Turn on verbosity for errors:
MariaDB: log_warnings = 4
MySQL: log_error_verbosity = 3
Check the docs for more info about the error:
Error: 2006 (CR_SERVER_GONE_ERROR)
Message: MySQL server has gone away
Generally you can retry the connection and then run the query again to solve this problem - try 3-4 times before completely giving up.
I'll assume you are using PDO. If so, catch the PDOException, increment a counter, and retry while the counter is under a threshold.
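A rough sketch of that retry loop (the DSN, credentials and $sql are placeholders, not anything from the original question):
<?php
$maxAttempts = 4;
for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
    try {
        $pdo = new PDO('mysql:host=db.example.com;dbname=reports', $user, $password);
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        $rows = $pdo->query($sql)->fetchAll();   // the query that was interrupted
        break;                                   // success, stop retrying
    } catch (PDOException $e) {
        if ($attempt === $maxAttempts) {
            throw $e;                            // give up after the last attempt
        }
        sleep(1);                                // short pause before reconnecting
    }
}
?>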
If you have a query that is causing a timeout you can set this variable by executing:
SET @@GLOBAL.wait_timeout=300;
SET @@LOCAL.wait_timeout=300; -- or for the current session only
Where 300 is the maximum number of seconds you think the query could take.
Further information on how to deal with MySQL connection issues.
EDIT: Two other settings you may also want to use are net_write_timeout and net_read_timeout; they can be set per session, as in the sketch below.
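If you cannot change the server configuration, a sketch of raising all three per session right after connecting (assuming $pdo is a PDO handle like the one in the retry sketch above):
<?php
// Raise the timeouts for this session only, before the long-running work starts.
$pdo->exec('SET SESSION wait_timeout = 300');
$pdo->exec('SET SESSION net_read_timeout = 300');
$pdo->exec('SET SESSION net_write_timeout = 300');
?>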
In MAMP (non-pro version) I added
--max_allowed_packet=268435456
to ...\MAMP\bin\startMysql.sh
Credits and more details here
If you are using the XAMPP server:
Go to xampp -> mysql -> bin -> my.ini
Change the parameters below:
max_allowed_packet = 500M
innodb_log_file_size = 128M
This helped me a lot :)
This error occurs when wait_timeout expires.
Just go to the MySQL server and check its wait_timeout:
mysql> SHOW VARIABLES LIKE 'wait_timeout'
mysql> set global wait_timeout = 600 # 10 minutes, or the maximum wait time you need
http://sggoyal.blogspot.in/2015/01/2006-mysql-server-has-gone-away.html
I was getting this same error on my DigitalOcean Ubuntu server.
I tried changing the max_allowed_packet and the wait_timeout settings but neither of them fixed it.
It turns out that my server was out of RAM. I added a 1GB swap file and that fixed my problem.
Check your memory with free -h to see if that's what's causing it.
On Windows, those using XAMPP should use the path xampp/mysql/bin/my.ini and change max_allowed_packet (under the [mysqld] section) to your chosen size.
e.g.
max_allowed_packet=8M
Again, in php.ini (xampp/php/php.ini), change upload_max_filesize to your chosen size.
e.g.
upload_max_filesize=8M
Gave me a headache for some time till I discovered this. Hope it helps.
It was a RAM problem for me.
I was having the same problem even on a server with 12 CPU cores and 32 GB RAM. I researched more and tried to free up RAM. Here is the command I used on Ubuntu 14.04 to free up RAM:
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
And, it fixed everything. I have set it under cron to run every hour.
crontab -e
0 * * * * bash /root/ram.sh;
And you can use this command to check how much free RAM is available:
free -h
And, you will get something like this:
total used free shared buffers cached
Mem: 31G 12G 18G 59M 1.9G 973M
-/+ buffers/cache: 9.9G 21G
Swap: 8.0G 368M 7.6G
In my case it was a low value of the open_files_limit variable, which blocked mysqld's access to the data files.
I checked it with:
mysql> SHOW VARIABLES LIKE 'open%';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| open_files_limit | 1185 |
+------------------+-------+
1 row in set (0.00 sec)
After I changed the variable to a bigger value, our server was alive again:
[mysqld]
open_files_limit = 100000
This generally indicates MySQL server connectivity issues or timeouts.
It can generally be solved by changing wait_timeout and max_allowed_packet in my.cnf or similar.
I would suggest these values:
wait_timeout = 28800
max_allowed_packet = 8M
If you are using the 64-bit WAMPSERVER, search for multiple occurrences of max_allowed_packet: WAMP uses the value set under [wampmysqld64], not the value set under [mysqldump]. That was my issue - I was updating the wrong one. Set it to something like max_allowed_packet = 64M.
Hopefully this helps other Wampserver-users out there.
There is an easier way if you are using XAMPP.
Open the XAMPP control panel and click on the Config button in the MySQL section.
Now click on my.ini and it will open in the editor. Update max_allowed_packet to your required size.
Then restart the MySQL service: click Stop on the MySQL service, then Start again. Wait for a few minutes.
Then try to run your MySQL query again. Hopefully it will work.
It's always a good idea to check the logs of the Mysql server, for the reason why it went away.
It will tell you.
In MAMP 5.3 you will not find my.cnf, and adding one does not work, as max_allowed_packet is stored in variables.
One solution can be:
Go to http://localhost/phpmyadmin
Go to SQL tab
Run SHOW VARIABLES and check the values; if max_allowed_packet is small, set a bigger value with a query.
Run the following query; it sets max_allowed_packet to 256MB:
set global max_allowed_packet=268435456;
For some, you may need to increase the following values as well:
set global wait_timeout = 600;
set innodb_log_file_size = 268435456; -- note that innodb_log_file_size is not a dynamic variable, so it normally has to be changed in my.cnf followed by a server restart
For Vagrant Box, make sure you allocate enough memory to the box
config.vm.provider "virtualbox" do |vb|
vb.memory = "4096"
end
This might be a problem with the size of your .sql file.
If you are using XAMPP, go to the XAMPP control panel -> click MySQL Config -> open my.ini.
Increase the packet size:
max_allowed_packet = 2M -> 10M
An unlikely scenario is that you have a firewall between the client and the server that forces a TCP reset on the connection.
I had that issue, and I found our corporate F5 firewall was configured to terminate inactive sessions that are idle for more than 5 mins.
Once again, this is the unlikely scenario.
Uncomment the line below in your my.ini/my.cnf; this will split your large file into smaller portions.
# binary logging format - mixed recommended
# binlog_format=mixed
TO
# binary logging format - mixed recommended
binlog_format=mixed
I found the solution to the "#2006 - MySQL server has gone away" error.
The solution is just that you have to check two files:
config.inc.php
config.sample.inc.php
The path of these files on Windows is
C:\wamp64\apps\phpmyadmin4.6.4
In these two files the value of
$cfg['Servers'][$i]['host'] must be 'localhost'.
In my case it was:
$cfg['Servers'][$i]['host'] = '127.0.0.1';
Change it to:
$cfg['Servers'][$i]['host'] = 'localhost';
Make sure in both
config.inc.php
config.sample.inc.php
it is 'localhost'.
And lastly set:
$cfg['Servers'][$i]['AllowNoPassword'] = true;
Then restart Wampserver.
To change the phpMyAdmin user name and password:
You can directly change the user name and password of phpMyAdmin through the config.inc.php file, in these two lines:
$cfg['Servers'][$i]['user'] = 'root';
$cfg['Servers'][$i]['password'] = '';
Here you can set a new user name and password.
After making the changes, save the file and restart the WAMP server.
I got Error 2006 message in different MySQL clients software on my Ubuntu desktop. It turned out that my JDBC driver version was too old.
I had the same problem in Docker; adding the settings below to docker-compose.yml fixed it:
db:
  image: mysql:8.0
  command: --wait_timeout=800 --max_allowed_packet=256M --character-set-server=utf8 --collation-server=utf8_general_ci --default-authentication-plugin=mysql_native_password
  volumes:
    - ./docker/mysql/data:/var/lib/mysql
    - ./docker/mysql/dump:/docker-entrypoint-initdb.d
  ports:
    - 3306:3306
  environment:
    MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
    MYSQL_DATABASE: ${MYSQL_DATABASE}
    MYSQL_USER: ${MYSQL_USER}
    MYSQL_PASSWORD: ${MYSQL_PASSWORD}
I also encountered this error, but even with an increased max_allowed_packet or any other increased value in my.cnf, the error still persisted.
What I did was troubleshoot my database:
I checked the tables where the error persisted
Then I checked each row
There were rows that were okay to fetch, and rows where the error showed up
It seemed that there were values in those rows causing the error
But even when selecting only the primary column, the error still showed up (SELECT primary_id FROM table)
The solution I thought of was to reimport the database. The good thing is I had a backup of this database, but I only dropped the problematic table and then imported my backup of that table. That solved my problem.
My takeaways from this problem:
Always have a backup of your database, either manually or through a cron job
I noticed that there were special characters in the affected rows. So when I recovered the table, I immediately changed its collation from latin1_swedish_ci to utf8_general_ci
My database was working fine before my system suddenly ran into this problem. Maybe it also has something to do with an upgrade of the MySQL database by our hosting provider. So frequent backups are a must!
Just in case this helps anyone:
I got this error when I opened and closed connections in a function which would be called from several parts of the application.
We got too many connections, so we thought it might be a good idea to reuse the existing connection or throw it away and make a new one, like so:
public static function getConnection($database, $host, $user, $password){
    if (!self::$instance) {
        return self::newConnection($database, $host, $user, $password);
    } elseif ($database . $host . $user != self::$connectionDetails) {
        self::$instance->query('KILL CONNECTION_ID()');
        self::$instance = null;
        return self::newConnection($database, $host, $user, $password);
    }
    return self::$instance;
}
Well, it turns out we had been a little too thorough with the killing, so the processes doing important things on the old connection could never finish their business.
So we dropped these lines
self::$instance->query('KILL CONNECTION_ID()');
self::$instance = null;
and, as the hardware and setup of the machine allowed it, we increased the number of allowed connections on the server by adding
max_connections = 500
to our configuration file. This fixed our problem for now and we learned something about killing mysql connections.
For users using XAMPP, there are 2 max_allowed_packet parameters in C:\xampp\mysql\bin\my.ini.
This error happens for basically two reasons.
You have too little RAM.
The database connection was closed before you tried to use it.
You can try this code below.
import MySQLdb

# Simplified helper (a method on a DB wrapper class) that executes an SQL string
# and fetches data from the database, reconnecting once if the connection is gone.
def get(self, sql_string, sql_vars=(), debug_sql=0):
    try:
        self.cursor.execute(sql_string, sql_vars)
        return self.cursor.fetchall()
    except (AttributeError, MySQLdb.OperationalError):
        # Reconnect (the wrapper's __init__ re-opens the connection) and retry once.
        self.__init__()
        self.cursor.execute(sql_string, sql_vars)
        return self.cursor.fetchall()
It mitigates the error whatever the reason behind it, especially for the second reason.
If it's caused by low RAM, you either have to make the database connections in your code more efficient, tune the database configuration, or simply add more RAM.
For me, the fix was repairing the corrupted index tree of one of my InnoDB tables. I located the affected table with this command:
mysqlcheck -uroot --databases databaseName
The result:
mysqlcheck: Got error: 2013: Lost connection to MySQL server during query when executing 'CHECK TABLE ...
Afterwards, it was only from the mysqld log (/var/log/mysqld.log) that I could see which table was causing trouble:
FIL_PAGE_PREV links 2021-08-25T14:05:22.182328Z 2 [ERROR] InnoDB: Corruption of an index tree: table `database`.`tableName` index `PRIMARY`, father ptr page no 1592, child page no 1234'
The mysqlcheck command did not fix it, but it helped to reveal the problem.
Ultimately I fixed it with a regular SQL command from the mysql CLI:
OPTIMIZE table theCorruptedTableNameMentionedAboveInTheMysqld.log
I couldn't think what else to title this strange problem.
We have a "Worker" Compute Engine which is a MySQL SLAVE. Its primary role is to process a large set of data and then place it back on the Master. All handled via a PHP Script.
Now the processing of data takes roughly 4 hours to complete. During this time we noticed the following CPU pattern.
What you can see above is the 50% solid CPU starts after a server reboot. Then after about 2 hours its starts to produce a ECG style pattern on the CPu. Around every 5/6 minutes CPU spikes to ~48% then drops over the 5 minutes.
My question is, why. Can anyoen please explain why. We ideally want this server to be Maxing out ots cpu at 100% (50% as there are 2 cores)
The spec of the server: 2 VCPU's with 7.5GB Memory.
As mentioned, if we can have this running full throttle it would be great. Below is the my.cnf
symbolic-links=0
max_connections=256
innodb_thread_concurrency = 0
innodb_additional_mem_pool_size = 1G
innodb_buffer_pool_size = 6G
innodb_flush_log_at_trx_commit = 1
innodb_io_capacity = 800
innodb_flush_method = O_DIRECT
innodb_log_file_size = 24M
query_cache_size = 1G
query_cache_limit = 512M
thread_cache_size = 32
key_buffer_size = 128M
max_allowed_packet = 64M
table_open_cache = 8000
table_definition_cache = 8000
sort_buffer_size = 128M
read_buffer_size = 8M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 128M
tmp_table_size = 256M
query_cache_type = 1
join_buffer_size = 256M
wait_timeout = 300
server-id = 2
relay-log = /var/log/mysql/mysql-relay-bin.log
log_bin = /var/log/mysql/mysql-bin.log
log-error=/var/log/mysqld.log
read-only = 1
innodb_flush_log_at_trx_commit=2
I have cleaned up the above to remove any configs with private information which are not relevant to performance.
UPDATE
I have noticed that when the CPU starts dropping during the heartbeat section of the graph, the PHP script is no longer running. This seemed impossible, as I know the script takes 4 hours. There were no errors, and after another 4 hours the data was where I expected it.
Changing innodb_io_capacity from 800 to 1500 will likely reduce your 4-hour elapsed processing time by raising the limit to what you know you can achieve with your slave processing.
For your indicated 7.5GB environment, the configuration has
innodb_additional_mem_pool_size=1G
innodb_buffer_pool_size=6G
query_cache_size=1G
so before you start, you are overcommitted.
Another angle to consider: with
max_connections=256
max_allowed_packet=64M
a fully busy 256 connections could need 16GB+ (256 x 64M) just for this function to survive.
It is unlikely that max_allowed_packet at 64M is reasonable.
Changing read_rnd_buffer_size = 4M to SET GLOBAL read_rnd_buffer_size=16384; could be significant on your slave then 24 hours later on master. They can be different but if it is significant in reducing your 4 hours on the slave, implement on both instances. Let us know what this single change does for you, please.
The 50% CPU utilization you are seeing is the script maxing out the single core that it is capable of utilizing, as indicated by PressingOnAlways recently. You cannot tune around this limit in your running script.
For a more thorough analysis, provide from SLAVE AND MASTER
RAM size (nnG)
SHOW GLOBAL STATUS
SHOW GLOBAL VARIABLES
SHOW ENGINE INNODB STATUS
CPU % is measured by all the cores - so 100% cpu usage == both cores maxing out. PHP by default runs in a single thread and does not utilize multi-cores. The 50% cpu utilization you are seeing is the script maxing out the single core that it is capable of utilizing.
In order to utilize 100% cpu, consider spawning 2 PHP scripts that work on 2 separate datasets - e.g. script 1 processes records 1-1000000, while script 2 processes 1000001-2000000.
Other option is to rewrite the script to utilize threads. You may want to consider changing the language altogether for something that is more conducive to threads, like Golang? Though this might not be necessary if the main work is done within mysql.
The other issue you're seeing, when the graph is below 50%, may be due to IO wait. It's hard to tell from a graph, though; you may have a data-transfer bottleneck where your CPU isn't working and is waiting while large amounts of data are transferred.
Optimizing CPU utilization is an exercise in finding the bottlenecks and removing them - good luck.
A 'monitoring service' could be enabled to periodically capture a 'health check' of your system, since it appears to be on a 6-minute cycle when you see spikes.
SHOW GLOBAL STATUS LIKE 'Com_show_%status' may confirm activity of this nature.
Divide your com_show_%status counters by (uptime/3600) to get rate-per-hour.
10 times an hour would be every 6 minutes.
My client has a pretty large Joomla-based website hosted on Amazon EC2 with 1.5GB of RAM. The server hosts both Apache and MySQL. Right now the database size is around 250MB and the website gets daily traffic of about 5000 visits. It looks like there is a severe memory leak on the website, as sometimes MySQL uses about 99% of CPU and memory and then crashes. I have tried optimizing database tables and modifying my.cnf, but still there is no improvement.
There are finder tables used by Joomla Smart Search which occupy over 100MB of the db size. I have disabled Smart Search, but the problem still occurs.
Please throw out some suggestions for fixing this.
Thanks.
Below is the my.cnf file
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
bind-address = 127.0.0.1
default-storage-engine=innodb
transaction-isolation = REPEATABLE-READ
character-set-server = UTF8
collation-server = UTF8_general_ci
max_connections = 5000
wait_timeout = 30
connect_timeout = 60
#interactive_timeout = 600
#max_connect_errors = 1000000
#max_allowed_packet = 10M
skip-external-locking
key_buffer_size = 384M
max_allowed_packet = 1M
table_open_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 32M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 8
slow_query_log
long_query_time = 2
[mysqld_safe]
log-error=/var/log/mysqld.log
myisam_sort_buffer_size = 64M
My bet would be that you are being hit by a rogue robot - one of the many SEO spiders out there, or tools like 80legs that let people program a network of bots to carry out tasks - often with errors in their programming that result in a heavy bombardment.
I can never remember which of the MySQL settings take memory once and which are per connection, but as you are set to allow up to 5000 simultaneous connections, and some of the buffers are 2 and 8 MB, I'd bet that the total memory usage under heavy load could easily be in excess of the total RAM available.
Your current settings would allow all of your daily traffic to hit simultaneously. I'd knock that down to a setting of a hundred or less and see if that gives more stability.
There are various MySQL tuner scripts out there that can help you spot where too much memory is allocated.
If you have access logs from around the time of the crashes / high load, I'd check for malicious bots though - we've had a constant battle to rein them in on some sites we monitor/control.
You might also check the thread_concurrency value - depending upon how many CPUs you have available.
I'm completely lost as to how or why this error is displaying when I go to browse the table data.
The one thing I did notice was that the Storage Engine has been switched to MyISAM with InnoDB saying it has been disabled.
I'm waiting to hear back from the hosting company but is there something I can explore until I hear back from them?
The sql should have been backed up on the server but when I download it, the file is empty.
Any tips on accessing this data is very much appreciated.
Sounds like your host may have disabled InnoDB, which will make any existing InnoDB tables unusable. They may also have accidentally destroyed the InnoDB data file.
Either way, there's nothing you can do yourself to recover it.
Go to /etc/my.cnf and change the config to
max_connections = 2500
query_cache_limit = 2M
query_cache_size = 150M
tmp_table_size = 200M
max_heap_table_size = 300M
key_buffer_size = 300M
tmpdir = /dev/shm
Run command:
service mysqld restart
Check again. Good luck!
Just try restarting MySQL. It helped me fix the problem.
I'd like to ask for your help with a longstanding issue with PHP/MySQL connections.
Every time I execute a SHOW PROCESSLIST command it shows me about 400 idle (Status: Sleep) connections to the database server coming from our 5 web servers.
That never was much of a problem (and I didn't find a quick solution) until recently, when traffic numbers increased; since then MySQL repeatedly reports the "too many connections" problem, even though 350+ of those connections are in the "sleep" state. Also, a server can't get a MySQL connection even if there are sleeping connections from that same server.
All those connections vanish when an Apache server is restarted.
The PHP code used to create the database connections uses the normal mysql module, the mysqli module, PEAR::DB, and the Zend Framework Db Adapter (different projects). NONE of the projects uses persistent connections.
Raising the connection limit is possible but doesn't seem like a good solution, since it's 450 now and there are only 20-100 "real" connections at a time anyway.
My question:
Why are there so many connections in sleep state and how can I prevent that?
-- Update:
The number of Apache requests running at a time never exceeds 50 concurrent requests, so I guess there is a problem with closing the connection, or Apache keeps the port open without a PHP script attached, or something (?)
my.cnf in case it's helpful:
innodb_buffer_pool_size = 1024M
max_allowed_packet = 5M
net_buffer_length = 8K
read_buffer_size = 2M
read_rnd_buffer_size = 8M
query_cache_size = 512M
myisam_sort_buffer_size = 128M
max_connections = 450
thread_cache = 50
key_buffer_size = 1280M
join_buffer_size = 16M
table_cache = 2048
sort_buffer_size = 64M
tmp_table_size = 512M
max_heap_table_size = 512M
thread_concurrency = 8
log-slow-queries = /daten/mysql-log/slow-log
long_query_time = 1
log_queries_not_using_indexes
innodb_additional_mem_pool_size = 64M
innodb_log_file_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table
Basically, you get connections in the Sleep state when :
a PHP script connects to MySQL
some queries are executed
then, the PHP script does some stuff that takes time
without disconnecting from the DB
and, finally, the PHP script ends
which means it disconnects from the MySQL server
So you generally end up with many processes in the Sleep state when you have a lot of PHP processes that stay connected without actually doing anything on the database side.
The basic idea, then: make sure you don't have PHP processes that run for too long, or force them to disconnect as soon as they no longer need to access the database, as in the sketch below.
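A minimal sketch of that second option (the $mysqli handle and the jobs table are hypothetical placeholders):
<?php
// Do the work that actually needs the database first...
$rows = $mysqli->query('SELECT id, payload FROM jobs')->fetch_all(MYSQLI_ASSOC);

// ...then release the connection before the slow, non-database part of the
// script runs, instead of leaving a connection in the Sleep state until the end.
$mysqli->close();

// long-running processing, file generation, calls to external APIs, etc.
?>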
Another thing, that I often see when there is some load on the server :
There are more and more requests coming to Apache
which means many pages to generate
Each PHP script, in order to generate a page, connects to the DB and does some queries
These queries take more and more time, as the load on the DB server increases
Which means more processes keep stacking up
A solution that can help is to reduce the time your queries take -- optimizing the longest ones.
The above solutions, like running the query
SET session wait_timeout=600;
will only work until MySQL is restarted. For a persistent solution, edit mysql.conf (my.cnf) and add the following after [mysqld]:
wait_timeout=300
interactive_timeout = 300
Where 300 is the number of seconds you want.
Increasing the number of max_connections will not solve the problem.
We were experiencing the same situation on our servers. This is what happens:
A user opens a page/view that connects to the database and queries it; the query (or queries) has not finished yet, and the user leaves the page or moves to some other page.
So the connection that was opened remains open, and the number of connections keeps increasing if more users connect to the db and do something similar.
You can lower MySQL's interactive_timeout, which is 28800 (8 hours) by default, to 1 hour:
SET interactive_timeout=3600
Before increasing the max_connections variable, you have to check how many non-interactive connections you have by running the SHOW PROCESSLIST command.
If you have many sleeping connections, you have to decrease the value of the wait_timeout variable so non-interactive connections are closed after waiting for some time.
To show the wait_timeout value:
SHOW SESSION VARIABLES LIKE 'wait_timeout';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wait_timeout | 28800 |
+---------------+-------+
The value is in seconds; it means that a non-interactive connection can stay up for up to 8 hours.
To change the value of "wait_timeout" variable:
SET session wait_timeout=600;
Query OK, 0 rows affected (0.00 sec)
After 10 minutes, if the sleeping connection is still sleeping, MySQL or MariaDB will drop that connection.
Alright, so after trying every solution out there to solve this exact issue on a WordPress blog, I might have done something either really stupid or genius... With no idea why there was an increase in MySQL connections, I used the PHP script below in my header to kill all sleeping processes.
So every visitor to my site helps in killing the sleeping processes.
<?php
// Note: this uses the legacy mysql_* extension, which was removed in PHP 7;
// on modern PHP you would use mysqli or PDO instead (see the sketch below).
$result = mysql_query("SHOW PROCESSLIST");
while ($myrow = mysql_fetch_assoc($result)) {
    if ($myrow['Command'] == "Sleep") {
        mysql_query("KILL {$myrow['Id']}");
    }
}
?>
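On PHP 7+ a roughly equivalent mysqli sketch (with placeholder credentials) would be:
<?php
$mysqli = new mysqli($host, $user, $pass);            // placeholder credentials
$result = $mysqli->query('SHOW PROCESSLIST');
while ($row = $result->fetch_assoc()) {
    if ($row['Command'] === 'Sleep') {
        $mysqli->query('KILL ' . (int) $row['Id']);   // kill only idle connections
    }
}
?>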
So I was running 300 PHP processes simultaneously and was getting a rate of between 60-90 per second (my process involves 3 queries). I upped it to 400 and this fell to about 40-50 per second. I dropped it to 200 and am back to between 60 and 90!
So my advice to anyone with this problem is to experiment with running fewer rather than more, and see if it improves. There will be less memory and CPU being used, so the processes that do run will have greater capacity and the speed may improve.
Look into persistent MySQL connections: I connected using mysqli("p:$HOSTNAME") and had Laravel database.php settings like:
'options' => [
PDO::ATTR_PERSISTENT => true,
],
For some reason, for some time, I believed it was smart to keep connections persistent as I thought my applications would share them. They didn't. They just opened connections and left them unused until they timed out.
After I removed my mad dream of persistence, I went from 120-150+ connections from several hosts to only a handful, most of the time actually just one (the one that runs SHOW PROCESSLIST).