I'm facing a really weird behaviour while testing persistent connections from PHP to MySQL. I have a small script that looks like this:
<?php
$db = new mysqli('p:db-host','user','pass','schema');
$res = $db->query('select * from users limit 1');
print_r($res->fetch_assoc());
My setup is :
OS: CentOS 7.3
PHP/7.1.18
php-fpm
nginx/1.10.2
MySQL-5.6.30
I tried to do some requests with ab:
$ ab -c 100 -n 500 http://mysite/my_test_script.php
PHP-FPM was configured to have 150 workers ready, and I saw what I was expecting: 150 established connections to MySQL, which stayed open after ab finished. I launched ab once again, and the behaviour was still the same: 150 connections, no new connections were opened. All fine. Then I created a script which did the same exact requests, same IP, same HTTP headers, but used curl to make the request, and BOOM, I had 300 connections on MySQL instead of 150. I launched the script again, and I still had 300 connections. Subsequent runs of the same script didn't increase the number of connections. Has anyone ever faced anything like this? Does anyone know what could make PHP open more connections than needed? Am I missing something obvious?
If it's not clear what I'm asking, please comment below and I will try to explain my problem better.
P.S. I tried this with PDO too, same behaviour.
EDIT: My tests were not accurate
After further testing I noticed that my first tests were not accurate. I was in a multi-tenant environment, and connections to different schemas were initialized when I launched ab. In my case the PHP documentation was a bit misleading; it says:
PHP checks if there's already an identical persistent connection (that remained open from earlier) - and if it exists, it uses it. If it does not exist, it creates the link. An 'identical' connection is a connection that was opened to the same host, with the same username and the same password (where applicable).
http://php.net/manual/en/features.persistent-connections.php
Maybe it's obvious to everyone else, I don't know; it was not for me. Passing the 4th parameter to mysqli made PHP consider the connections not identical. Once I changed my code to something like this:
<?php
$db = new mysqli('p:db-host','user','pass');
$db->select_db('schema');
$res = $db->query('select * from users limit 1');
print_r($res->fetch_assoc());
The application started to behave as I expected: one connection per worker.
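For anyone verifying something similar, the server-side connection count can be watched from any MySQL client while the load test runs; this is just a quick sanity check, not part of the fix:
SHOW STATUS LIKE 'Threads_connected';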
I have some cron jobs set up on Linux through wget; those jobs run once every 24 hours. All the jobs basically call APIs, pull the data, and store it in the database. The issue is that some API calls are very slow and take a long time to respond, which eventually ends up producing the error below.
--2017-07-24 06:00:02--  http://wwwin-cam-stage.cisco.com/cron/mh.php
Resolving wwwin-cam-stage.cisco.com (wwwin-cam-stage.cisco.com)... 171.70.100.25
Connecting to wwwin-cam-stage.cisco.com (wwwin-cam-stage.cisco.com)|171.70.100.25|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers. Retrying.

--2017-07-24 06:05:03--  (try: 2)  http://wwwin-cam-stage.cisco.com/cron/mh.php
Connecting to wwwin-cam-stage.cisco.com (wwwin-cam-stage.cisco.com)|171.70.100.25|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers. Retrying.

--2017-07-24 06:10:05--  (try: 3)  http://wwwin-cam-stage.cisco.com/cron/mh.php
Connecting to wwwin-cam-stage.cisco.com (wwwin-cam-stage.cisco.com)|171.70.100.25|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 0 [text/html]
Saving to: ‘mh.php.6’

     0K                                                        0.00 =0s

2017-07-24 06:14:58 (0.00 B/s) - ‘mh.php.6’ saved [0/0]
Although the third try returned 200 OK, the actual data gets messed up because the first and second tries timed out.
How can I raise the URL timeout settings to the highest possible limit, so the job completes in one try without errors like
(Connection reset by peer)?
wget --timeout 10 http://url
This is how you set the timeout in the case of wget; see the second EDIT below for the full set of timeout flags.
EDIT
Or
If you are asking about the TCP keep-alive of the Linux machine, this might help.
On Red Hat Linux, modify the following kernel parameter by editing the /etc/sysctl.conf file, and restart the network daemon (/etc/rc.d/init.d/network restart).
"Connection reset by peer" is the TCP/IP equivalent of slamming the
phone back on the hook. It's more polite than merely not replying,
leaving one hanging. But it's not the FIN-ACK expected of the truly
polite TCP/IP converseur.
Code:
# Decrease the default value for tcp_keepalive_time (7200 seconds)
net.ipv4.tcp_keepalive_time = 1800
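To apply the change without restarting the network service, the same setting can also be loaded directly with sysctl (as root):
# apply immediately
sysctl -w net.ipv4.tcp_keepalive_time=1800
# or re-read everything from /etc/sysctl.conf
sysctl -p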
EDIT
-T seconds
‘--timeout=seconds’
Set the network timeout to seconds seconds. This is equivalent to specifying ‘--dns-timeout’, ‘--connect-timeout’, and ‘--read-timeout’, all at the same time.
When interacting with the network, Wget can check for timeout and abort the operation if it takes too long. This prevents anomalies like hanging reads and infinite connects. The only timeout enabled by default is a 900-second read timeout. Setting a timeout to 0 disables it altogether. Unless you know what you are doing, it is best not to change the default timeout settings.
All timeout-related options accept decimal values, as well as subsecond values. For example, ‘0.1’ seconds is a legal (though unwise) choice of timeout. Subsecond timeouts are useful for checking server response times or for testing network latency.
‘--dns-timeout=seconds’
Set the DNS lookup timeout to seconds seconds. DNS lookups that don’t complete within the specified time will fail. By default, there is no timeout on DNS lookups, other than that implemented by system libraries.
‘--connect-timeout=seconds’
Set the connect timeout to seconds seconds. TCP connections that take longer to establish will be aborted. By default, there is no connect timeout, other than that implemented by system libraries.
‘--read-timeout=seconds’
Set the read (and write) timeout to seconds seconds. The “time” of this timeout refers to idle time: if, at any point in the download, no data is received for more than the specified number of seconds, reading fails and the download is restarted. This option does not directly affect the duration of the entire download.
Of course, the remote server may choose to terminate the connection sooner than this option requires. The default read timeout is 900 seconds.
Source: the Wget manual.
See the wget manual page for more information.
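Putting the above together for a slow cron job like the one in the question, something along these lines would be a reasonable starting point; the 1800-second value is only an example, --tries=1 avoids the retries that corrupted the data, and -O /dev/null discards the mh.php.N files:
wget --tries=1 --read-timeout=1800 -O /dev/null http://wwwin-cam-stage.cisco.com/cron/mh.php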
I am running a websockets server using https://github.com/ghedipunk/PHP-Websockets/blob/master/websockets.php on an Ubuntu 16 box with PHP7
After 256 users connect to the websocket, it stops accepting connections and I can't figure out why. In the client, I get a 1006 error code (connection was closed abnormally (locally) by the browser implementation) and no further information. The websocket request doesn't appear to make it to the websockets server (which normally echoes "Client Connected" right after a socket connection is made).
In the connect() function, one of the things I do is echo the count of users and sockets and the overall memory usage to the log. This problem occurs whenever the user count hits 256 (at which point the socket count is 257 and memory usage is around 4 MB). The fact that it happens at 256 makes me think a limit is being hit somewhere, but I can't find that limit. If I restart the websockets server, everything works fine again.
From my investigation so far, I have tried and checked:
ulimit (says it's unlimited)
MySQL connection limit (was set to default, now is 1000, but that didn't help)
Increased the PHP memory limit (because why not, just to see)
SOMAXCONN is set to 128, so I don't think this is the problem, but I would have to recompile PHP to test it. I haven't tried this yet.
Apache: The message I get when the problem occurs is: [proxy:error] [pid 16785] (111)Connection refused: AH00957: WS: attempt to connect to 10.0.0.240:9000 (websockets.mydomain.com) failed, which doesn't tell me much about anything. Apache is running MPM prefork and I have increased the spare servers and MaxRequestWorkers
I am open to any suggestions as to where to look next or how to get more detail out of the "Connection Refused" error log from apache!
Thanks
It's Apache that's holding you up. Try setting the following in your conf file...
MaxClients 512
ServerLimit 512
(you must set both)
Of course, you can use whatever numbers work for you. With mpm-prefork, you should be able to go up to 20,000, but that really shouldn't be necessary.
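For reference, in a prefork setup the directives would sit together in the conf file something like this (a sketch; on Apache 2.4, which the MaxRequestWorkers mention above suggests you are running, MaxClients has been renamed MaxRequestWorkers):
<IfModule mpm_prefork_module>
    ServerLimit        512
    MaxRequestWorkers  512
</IfModule>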
I'm trying to upload a CSV into a MySQL database using phpMyAdmin.
When I try with a shortened version of the database, the process works OK, but when I try with the full database, I get the error:
#2006 - MySQL server has gone away
The section of my CSV that is working is:
trans_id,price_paid,date,postcode,property_type,poperty_type_2,hold,add_num,add_flat,add_road,add_area,add_city,add_borough,add_county,add_rand
{33C588EE-BB09-4F6F-BA8C-000312C72B3B},159950,23/05/2014 00:00,SL6 9LX,F,N,L,2,,THE SHAW,COOKHAM,MAIDENHEAD,WINDSOR AND MAIDENHEAD,WINDSOR AND MAIDENHEAD,A
{2C650B8C-57C0-421C-A4A9-00037BDFDCFB},158000,30/05/2014 00:00,NN14 1RJ,T,N,F,4,,MIDLAND COTTAGES,RUSHTON,KETTERING,KETTERING,NORTHAMPTONSHIRE,A
{74FA45D0-CB64-40E1-94C4-00055AEBF72C},470000,30/05/2014 00:00,KT20 5SF,D,N,F,11,,CHAPEL ROAD,,TADWORTH,REIGATE AND BANSTEAD,SURREY,A
{054AB14B-0EED-48FD-B3CD-0005B154A5C3},135000,23/05/2014 00:00,NR27 9AZ,F,N,L,48,,ALBANY COURT,,CROMER,NORTH NORFOLK,NORFOLK,A
{86896E40-68BA-4BA2-8468-0006258B9C41},124995,09/05/2014 00:00,L24 9NA,S,Y,L,131,,ADDENBROOKE DRIVE,SPEKE,LIVERPOOL,LIVERPOOL,MERSEYSIDE,A
{A948BD6F-DD91-4DE9-82D1-0008226FC360},95000,13/06/2014 00:00,HU6 7XE,S,N,F,51,,DOWNFIELD AVENUE,,HULL,CITY OF KINGSTON UPON HULL,CITY OF KINGSTON UPON HULL,A
{7191F69F-7648-4603-9CE7-000882808E16},174000,19/05/2014 00:00,DT5 1HX,T,N,F,2,,LONG ACRE,,PORTLAND,WEYMOUTH AND PORTLAND,DORSET,A
{525BE511-1351-475F-9765-0009645D0B60},328000,11/06/2014 00:00,TW18 2EP,T,N,F,1,,EDGELL ROAD,,STAINES-UPON-THAMES,SPELTHORNE,SURREY,A
I've tried:
increasing max_allowed_packet to 64M in /etc/my.cnf, and setting wait_timeout = 1000, but no luck.
I've also made the same change for the packet size limit in php.ini, but no luck.
Any help would be appreciated
Thanks,
Mo
Check out the documentation on MySQL error #2006. I've included some of the more likely possibilities below:
You have encountered a timeout on the server side and the automatic reconnection in the client is disabled (the reconnect flag in the MYSQL structure is equal to 0).
You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section B.5.2.10, “Packet Too Large”. This likely isn't the issue, as you've already done the suggested solution.
You also get a lost connection if you are sending a packet 16MB or larger if your client is older than 4.0.8 and your server is 4.0.8 and above, or the other way around.
There's also an error specific for Windows applications, so if you're using Windows, check that out:
You are using a Windows client and the server had dropped the connection (probably because wait_timeout expired) before the command was issued. The problem on Windows is that in some cases MySQL does not get an error from the OS when writing to the TCP/IP connection to the server, but instead gets the error when trying to read the answer from the connection. Prior to MySQL 5.0.19, even if the reconnect flag in the MYSQL structure is equal to 1, MySQL does not automatically reconnect and re-issue the query as it doesn't know if the server did get the original query or not. The solution to this is to either do a mysql_ping() on the connection if there has been a long time since the last query (this is what Connector/ODBC does) or set wait_timeout on the mysqld server so high that it in practice never times out.
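For completeness, the server-side packet and timeout settings discussed above normally live in the [mysqld] section of my.cnf, along these lines (the values are just the ones from the question); the server needs a restart, or equivalent SET GLOBAL statements, to pick them up:
[mysqld]
max_allowed_packet = 64M
wait_timeout       = 1000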
I am using MySQL 5.0 for a site that is hosted by GoDaddy (linux).
I was doing some testing on my web app, and suddenly I noticed that the pages were refreshing really slowly. Finally, after a long wait, I got to a page that said something along the lines of "MySQL Error, Too many connections...", and it pointed to my config.php file which connects to the database.
It has just been me connecting to the database, no other users. On each of my pages, I include the config.php file at the top, and close the mysql connection at the end of the page. There may be several queries in between. I fear that I am not closing mysql connections enough (mysql_close()).
However, when I try to close the connection after running a query, I receive connection errors on the page. My pages are PHP and HTML. When I close the connection after one query, it seems that the next one won't connect. Would I have to include config.php again after the close in order to reconnect?
This error scared me because in 2 weeks, about 84 people will start using this web application.
Thanks.
EDIT:
Here is some pseudo-code of my page:
<?php
require_once('../scripts/config.php');
mysql_query...
if (this button is pressed) {
    mysql_query...
}
if (this button is pressed) {
    mysql_query...
}
if (this button is pressed) {
    mysql_query...
}
?>
some html..
..
<?php
another mysql_query...
?>
some more html..
..
<?php mysql_close(); ?>
I figured that this way, each time the page opens, the connection opens, and then the connection closes when the page is done loading. Then, the connection opens again when someone clicks a button on the page, and so on...
EDIT:
Okay, so I just got off the phone with GoDaddy. Apparently, with my Economy package, I'm limited to 50 connections at a time. While my issue today happened with only me accessing the site, they said they were having some server problems earlier. However, seeing as I am going to have 84 users for my web app, I should probably upgrade to "Deluxe", which allows for 100 connections at a time. On a given day, there may be around 30 users accessing my site at a time, so I think 100 would be the safer bet. Do you guys agree?
Shared-hosting providers generally allow a pretty small amount of simultaneous connections for the same user.
What your code does is:
open a connection to the MySQL server
do its stuff (generating the page)
close the connection at the end of the page
The last step, when done at the end of the page, is not mandatory (quoting mysql_close's manual):
Using mysql_close() isn't usually necessary, as non-persistent open links are automatically closed at the end of the script's execution.
But note that you probably shouldn't use persistent connections anyway...
Two tips:
use mysql_connect instead of mysql_pconnect (already OK for you)
set the fourth parameter of mysql_connect to false (already OK for you, as it's the default value); quoting the manual:
If a second call is made to mysql_connect() with the same arguments, no new link will be established, but instead, the link identifier of the already opened link will be returned.
The new_link parameter modifies this behavior and makes mysql_connect() always open a new link, even if mysql_connect() was called before with the same parameters.
What could cause the problem, then?
Maybe you are trying to access several pages in parallel (using multiple tabs in your browser, for instance), which simulates several users using the website at the same time?
If you have many users using the site at the same time, and the code between mysql_connect and the closing of the connection takes a lot of time, that will mean many connections being opened at the same time... and you'll reach the limit :-(
Still, as you are the only user of the application, and considering you have up to 200 simultaneous connections allowed, there is something odd going on...
Well, thinking about "too many connections" and "max_connections"...
If I remember correctly, max_connections does not limit the number of connections you can open to the MySQL server, but the total number of connections that can be opened to that server, by anyone connecting to it.
Quoting MySQL's documentation on Too many connections :
If you get a Too many connections error when you try to connect to the mysqld server, this means that all available connections are in use by other clients.
The number of connections allowed is controlled by the max_connections system variable. Its default value is 100. If you need to support more connections, you should set a larger value for this variable.
So, actually, the problem might not come from you or your code (which looks fine, actually): it might "just" be that you are not the only one trying to connect to that MySQL server (remember, "shared hosting"), and that there are too many people using it at the same time...
... And if I'm right about that, there's nothing you can do to solve the problem yourself: as long as there are too many databases / users on that server and max_connections is set to 200, you will continue suffering...
As a sidenote: before going back to GoDaddy to ask them about that, it would be nice if someone could validate what I just said ^^
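For what it's worth, both the limit and the current set of connections can be checked from any MySQL client, privileges permitting:
SHOW VARIABLES LIKE 'max_connections';
SHOW PROCESSLIST;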
I had about 18 months of dealing with this (http://ianchanning.wordpress.com/2010/08/25/18-months-of-dealing-with-a-mysql-too-many-connections-error/)
The solutions I had (that would apply to you) in the end were:
tune the database according to MySQLTuner.
defragment the tables weekly based on this post
Defragmenting bash script from the post:
#!/bin/bash
# Get a list of all fragmented tables
# (GNU sed understands \t; the query output is tab-separated "schema<TAB>table" rows)
FRAGMENTED_TABLES="$( mysql -e "USE information_schema;
    SELECT TABLE_SCHEMA, TABLE_NAME FROM TABLES
    WHERE TABLE_SCHEMA NOT IN ('information_schema','mysql')
    AND Data_free > 0;" | grep -v '^+' | sed 's,\t,.,' )"

for fragment in $FRAGMENTED_TABLES; do
    database="$( echo $fragment | cut -d. -f1 )"
    table="$( echo $fragment | cut -d. -f2 )"
    # skip the header row, then defragment each table in turn
    [ "$fragment" != "TABLE_SCHEMA.TABLE_NAME" ] && mysql -e "USE $database;
    OPTIMIZE TABLE $table;" > /dev/null 2>&1
done
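Purely as an illustration, a weekly crontab entry for such a script might look like this (the script path is hypothetical):
# every Sunday at 03:00
0 3 * * 0 /usr/local/bin/defrag_tables.sh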
Make sure you are not using persistent connections. This is usually a bad idea...
If you've got that covered, then at most you will need to support as many connections as you have Apache processes. Are you able to change the max_connections setting?
Are you sure that the database server is completely dedicated to you?
Log on to the database as root and use SHOW PROCESSLIST to see who's connected. Ideally, hook this into your monitoring system to track how many connections there are over time and to alert if there are too many.
The maximum database connections can be configured in my.cnf, but watch out for running out of memory or address space.
If you have shell access, use netstat to see how many sockets are opened to your database and where they come from.
On Linux, type:
netstat -n -a |grep 3306
On Windows, type:
netstat -n -a |findstr 3306
The solution could be one of these. I came across this in an MCQ test, and even I did not understand which one is right!
Set this in my.cnf "set-variable=max_connections=200"
Execute the command "SET GLOBAL max_connections = 200"
Use always mysql_connect() function in order to connect to the mysql server
Use always mysql_pconnect() function in order to connect to the mysql server
The following are possible solutions:
1) Increase the max connections setting by setting the global variable in MySQL:
SET GLOBAL max_connections = 200;
Note: It will increase the server load.
2) Flush the host cache as below (note this clears hosts blocked after connection errors, rather than closing open connections):
FLUSH HOSTS;
3) Check your process list and kill the specific processes you don't want any more (a minimal sketch follows below).
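For that last option, the thread id 12345 here is hypothetical; the real one comes from the Id column of the process list:
SHOW PROCESSLIST;
KILL 12345;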
You may refer to this:
article link