I'm having an issue with memcached. Not sure if it's memcached, PHP, or TCP sockets, but every time I run a benchmark with a concurrency of 50 or more against a page that uses memcached, some of the requests fail under Apache ab. I get the (99) Cannot assign requested address error.
When I run a concurrency test of 5000 against a plain phpinfo() page, everything is fine: no failed requests.
It seems like memcached cannot support high concurrency, or am I missing something? I'm running memcached with the -c 5000 flag.
Server: 2x quad-core Xeon 2.5GHz, 64GB RAM, 4TB RAID 10, 64-bit openSUSE 11.1
OK, I've figured it out. Maybe this will help others who have the same problem.
It seems the issue was a combination of things:
Set server.max-worker in lighttpd.conf to a higher number.
Original: 16, now: 32
Turned off keep-alive in lighttpd.conf; it was keeping connections open for too long:
server.max-keep-alive-requests = 0
Raise the open-files limit with ulimit -n:
ulimit -n 65535
If you're on Linux, use:
server.event-handler = "linux-sysepoll"
server.network-backend = "linux-sendfile"
Increase max-fds
server.max-fds = 2048
Lower the TCP TIME_WAIT timeout so the kernel recycles closed connections faster.
In /etc/sysctl.conf add:
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 3
Make sure you force it to reload with: /sbin/sysctl -p
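One complementary, application-level measure worth mentioning (not part of the list above, just a sketch): reuse a persistent connection per PHP worker instead of opening a new TCP socket on every request, since churning through sockets is what exhausts ephemeral ports and triggers "(99) Cannot assign requested address". The pecl/memcache extension supports this directly:
<?php
// memcache_pconnect() reuses an existing connection for this worker
// instead of opening (and later TIME_WAIT-ing) a fresh socket per request.
$mc = memcache_pconnect('localhost', 11211);
if ($mc === false) {
    die('memcached unavailable');
}
echo memcache_get($mc, 'visitors');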
After making these changes, my server now handles 30,000 concurrent connections and 1,000,000 total requests with Apache ab without any failed requests or write errors.
Command used to benchmark: ab -n 1000000 -c 30000 http://localhost/test.php
My Apache can't get anywhere close to this benchmark; lighttpd makes me laugh at Apache now. Apache crawls at around 200 concurrency.
I'm using just a 4-byte integer as a page counter for testing purposes. Other PHP pages work fine even with 5,000 concurrent connections and 100,000 requests, and this server has a lot of horsepower and RAM, so I know that's not the issue.
The page that dies has nothing but five lines of code to test the page counter using memcached. Making the connection gives me this error: (99) Cannot assign requested address.
The problem starts to arise at 50 concurrent connections.
I'm running memcached with -c 5000 for 5000 concurrent connections.
Everything is on one machine (localhost)
The only processes running are SSH, lighttpd, PHP, and memcached
There are no users connected to this box (test machine)
Linux -nofile is set to 32000
That's all I have for now; I'll post more information as I find it. It seems like a lot of people have this problem.
I just tested something similar with a file:
$mc = memcache_connect('localhost', 11211);
$visitors = memcache_get($mc, 'visitors') + 1;
memcache_set($mc, 'visitors', $visitors, 0, 30);
echo $visitors;
running on a tiny virtual machine with nginx, php-fastcgi, and memcached.
I ran ab -c 250 -t 60 http://testserver/memcache.php from my laptop in the same network without seeing any errors.
Where are you seeing the error? In your php error log?
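As an aside on the snippet above: the get-then-set pattern can lose counts under heavy concurrency (two requests may read the same value before either writes). The pecl/memcache extension has an atomic server-side increment for exactly this case; a minimal sketch:
<?php
// Create the key if it doesn't exist yet, then increment atomically.
$mc = memcache_connect('localhost', 11211);
memcache_add($mc, 'visitors', 0, 0, 30);     // no-op if the key already exists
echo memcache_increment($mc, 'visitors', 1); // atomic, no read-modify-write race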
This is what I used for Nginx/php-fpm, adding these lines to /etc/sysctl.conf (Rackspace dedicated servers with Memcached/Couchbase/Puppet):
# Memcached fix
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 3
I hope it helps.
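If you want to confirm from PHP that the values actually took effect after sysctl -p, here's a minimal sketch (assuming the script runs on the same box):
<?php
// Read the live kernel values back from /proc.
foreach (['ip_nonlocal_bind', 'tcp_tw_recycle', 'tcp_fin_timeout'] as $key) {
    echo $key, ' = ', trim(file_get_contents("/proc/sys/net/ipv4/$key")), PHP_EOL;
}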
My software makes a lot of MySQL queries to my server, and I have never had any issues with it in the past, but recently nothing was loading: no web pages, no SQL, nothing. I managed to get into WHM for my server and kill the process, only to watch it spike back up to 300%. Nothing I have done has made it go down. What information do I need to share to get help with this? I am not a sysadmin, nor do I have one or the resources for one. I wouldn't usually ask for help; I'd just optimize all my queries, but this wasn't a problem for the past three months and suddenly became one out of nowhere, at least not that I noticed. At this point my program is saying that one of my database tables has crashed and needs repair. What can I do? Thanks in advance for any help.
I have already considered optimization, but I was hoping for a quick fix to implement first since I have customers waiting; then I can spend a few days optimizing my SQL which, like I said, wasn't having issues before. I am confused about it.
Also, I am not sure if this helps, but tracing the process in WHM prints this repeatedly and nothing else:
fcntl(16, F_GETFL) = 0x2 (flags O_RDWR)
fcntl(16, F_SETFL, O_RDWR|O_NONBLOCK) = 0
accept(16, {sa_family=AF_LOCAL, NULL}, [2]) = 35
fcntl(16, F_SETFL, O_RDWR) = 0
setsockopt(35, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
futex(0x13298a4, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x13298a0, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x1327240, FUTEX_WAKE_PRIVATE, 1) = 1
poll([{fd=14, events=POLLIN}, {fd=16, events=POLLIN}], 2, -1) = 1 ([{fd=16, revents=POLLIN}])
/etc/my.cnf
innodb_file_per_table=1
default-storage-engine=MyISAM
performance-schema=0
max_allowed_packet=268435456
open_files_limit=10000
This is all that is available to me as far as the my.cnf file goes. The error log doesn't exist in /var/log, so I don't have anything to give in that regard.
MySQL version:
[Server] # mysql -V
mysql Ver 14.14 Distrib 5.6.41, for Linux (x86_64) using EditLine wrapper
I have an additional question, or add-on to this. I don't know if it makes much of a difference, but say my code is running and the mysql process is using 30% CPU; I can actually turn off the code and the mysql process's CPU usage will not change. What does this mean?
Edit: (these are all expiring within a week from 12/09/2018)
Global Status
Current Settings
ulimit -a
df -h
mysqltuner report
The my.cnf file contents I listed are all that was there, nothing else. I will get the top output and iostat -xm 5 3 when I am running the software at full speed again to see the results.
Rate Per Second = RPS. Suggestions to consider, based on your Linux ulimit -a report:
ulimit -n 16384 to raise Open Files limit from 1024 to support your activities.
For this to persist over a Linux shutdown/restart, review this URL:
https://glassonionblog.wordpress.com/2013/01/27/increase-ulimit-and-file-descriptors-limit/
Your specifics may be slightly different due to version of Linux.
Suggestions to consider for your my.cnf [mysqld] section:
innodb_lru_scan_depth=100 # from 1024 to reduce CPU busy every second. 93% savings for this one function.
thread_cache_size=32 # from 9 for thread breathing room and growth.
innodb_io_capacity=1800 # from 200 to take advantage of your HDD IOPS capacity
key_cache_age_threshold=7200 # from 300 seconds to reduce key_reads RPS of 16
query_cache_size=0 # from 1M to conserve RAM - QC is OFF and not used
query_cache_limit=0 # from 1M to conserve RAM - QC is OFF and not used
key_buffer_size=128M # from 8M which had NO free space at the end of your work day
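If you want to see where figures like the key_reads rate of 16 RPS come from, here is a hedged sketch (credentials are illustrative, not from the post) that derives per-second rates from SHOW GLOBAL STATUS counters:
<?php
// Divide each cumulative status counter by server uptime to get its RPS.
$db = new mysqli('localhost', 'user', 'password'); // illustrative credentials
$uptime = (int) $db->query("SHOW GLOBAL STATUS LIKE 'Uptime'")->fetch_row()[1];
foreach (['Key_reads', 'Threads_created'] as $name) {
    $value = (int) $db->query("SHOW GLOBAL STATUS LIKE '$name'")->fetch_row()[1];
    printf("%s: %d total, %.2f per second\n", $name, $value, $value / max(1, $uptime));
}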
For additional suggestions, see my profile, Network profile for contact information.
I am running a websockets server using https://github.com/ghedipunk/PHP-Websockets/blob/master/websockets.php on an Ubuntu 16 box with PHP 7.
After 256 users connect to the websocket, it stops taking connections and I can't figure out why. In the client, I get a 1006 error code (connection was closed abnormally (locally) by the browser implementation) and no further information. The request doesn't appear to make it to the websockets server, which normally echoes "Client Connected" right after a socket connection is made.
In the connect() function, one of the things I do is echo the count of users, sockets, and overall memory usage to the log. The problem occurs whenever the user count hits 256 (at which point the socket count is 257 and memory usage is around 4MB). The fact that it happens at exactly 256 makes me think a limit is being hit somewhere, but I can't find that limit. If I restart the websockets server, everything works fine again.
From my investigation so far, I have tried and checked:
ulimit (says it's unlimited)
MySQL connection limit (was set to default, now is 1000, but that didn't help)
Increased the PHP memory limit (because why not, just to see)
SOMAXCONN is set to 128, so I don't think this is the problem, but I would have to recompile PHP to test it; I haven't tried this yet.
Apache: the message I get when the problem occurs is [proxy:error] [pid 16785] (111)Connection refused: AH00957: WS: attempt to connect to 10.0.0.240:9000 (websockets.mydomain.com) failed, which doesn't tell me much about anything. Apache is running MPM prefork, and I have increased the spare servers and MaxRequestWorkers.
I am open to any suggestions as to where to look next or how to get more detail out of the "Connection Refused" error log from apache!
Thanks
It's Apache that's holding you up. Try setting the following in your conf file:
MaxClients 512
ServerLimit 512
(you must set both)
Of course, you can use whatever numbers work for you. In mpm-prefork you should be able to go up to 20,000, but that really shouldn't be necessary.
I'm currently writing a PHP script which accesses a CSV file on a remote server, processes the data, then writes it to the local MySQL database. Because there is so much data to process and insert into the database (50,000 lines), the script takes longer than 60 seconds to run. The problem I have is that the script times out after 60 seconds.
To make sure it's not a MySQL issue, I created another script that enters an infinite loop, and it too times out at 60 seconds.
I have tried increasing/changing the following settings on the Ubuntu server but it hasn't helped:
max_execution_time
max_input_time
mysql.connect_timeout
default_socket_timeout
the Timeout value in the apache2.conf file.
Could it possibly be an issue because I'm accessing the PHP file from a web browser? Do web browsers have timeout limits?
Any help would be appreciated.
The simplest and least intrusive way to get over this limit is to add this line to your script:
ini_set('max_execution_time', 0);
That way you are only amending the execution time for this script, and not for all PHP scripts, which would be the case if you amended either of the two php.ini files.
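For a long import like the one described, a hedged sketch of the whole pattern (the file URL, table, and credentials are illustrative, not from the question):
<?php
// Lift the 60-second limit for this script only, then stream the remote
// CSV row by row instead of loading all 50,000 lines into memory.
ini_set('max_execution_time', 0);
set_time_limit(0);

$db   = new mysqli('localhost', 'user', 'password', 'mydb'); // illustrative
$stmt = $db->prepare('INSERT INTO import_rows (col_a, col_b, col_c) VALUES (?, ?, ?)');
$fh   = fopen('https://example.com/data.csv', 'r'); // illustrative URL
if ($fh === false) {
    die('could not open CSV');
}
$rows = 0;
while (($line = fgetcsv($fh)) !== false) {
    $stmt->bind_param('sss', $line[0], $line[1], $line[2]);
    $stmt->execute();
    $rows++;
}
fclose($fh);
echo "Imported $rows rows\n";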
When you were trying to amend the php.ini file, I would guess you were amending the wrong one: there are two, one used only by the PHP CLI and one used by PHP running with Apache.
For future reference, to find the actual file used by PHP under Apache, just run
<?php
phpinfo();
?>
and look for Loaded Configuration File.
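If you'd rather not scan the whole phpinfo() page, the built-in php_ini_loaded_file() returns the same path directly (again, run it through Apache rather than the CLI, or you'll get the CLI's file):
<?php
echo php_ini_loaded_file(); // e.g. /etc/php/7.x/apache2/php.ini (path varies by distro)
?>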
I finally worked out why the request times out. The problem lies with virtual server hosting.
The request from the web browser is sent to the hosting server, which then directs it to the virtual server (which acts like a separate server). Because the hosting server doesn't get a response back from the virtual server within 60 seconds, it times out and sends a response back to the web browser saying exactly that. Meanwhile, the virtual server is still processing the script.
When the virtual server finally finishes processing the script, it is too late, as the hosting server has already returned a timeout error to the front-end user.
Because the hosting server is used to host many virtual servers (for multiple different users), it is generally not possible to change the timeout settings on this server.
So, final verdict: The timeout error cannot be avoided with virtual hosting. If this is a serious issue, you may need to look into getting dedicated server hosting.
Michael,
Your problem most likely comes from the PHP configuration and not the web browser accessing it.
Did you try putting the following lines at the beginning of your PHP file?
set_time_limit(0);
ini_set ('max_execution_time', 0);
PHP has two configuration files, one for Apache and one for the CLI, which explains why you don't get a timeout when running the script on the command line. The phpinfo() output you gave me shows max_execution_time at 6000.
See the set_time_limit() documentation.
For CentOS 8, the settings below worked for me:
sed -i 's/default_socket_timeout = 60/default_socket_timeout = 6000/g' /etc/php.ini
sed -i 's/max_input_time = 60/max_input_time = 30000/g' /etc/php.ini
sed -i 's/max_execution_time = 30/max_execution_time = 60000/g' /etc/php.ini
echo "Timeout 6000" >> /etc/httpd/conf/httpd.conf
Restarting Apache the usual way isn't good enough any more. You now have to do this:
systemctl restart httpd php-fpm
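To confirm the new limits are actually in effect for the web SAPI (run this through the browser rather than the CLI, since the CLI uses its own configuration), a quick check:
<?php
// Print the effective values after the restart.
foreach (['max_execution_time', 'max_input_time', 'default_socket_timeout'] as $key) {
    echo $key, ' = ', ini_get($key), PHP_EOL;
}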
Synopsis:
If the script (PHP function) takes 61 seconds or more, you will get a gateway timeout error. The term "gateway" here refers to the PHP worker: the worker timed out because that's how it was configured. It has nothing to do with networking.
php-fpm is a new service in CentOS 8. From what I gathered from the internet (I have not verified this myself), it keeps worker executables running in the background, waiting for you to hand them PHP scripts to execute. The time saving is that the executables are always running, so you suffer no start-up penalty.
I've done quite a bit of reading before asking this, so let me preface by saying I am not running out of connections, memory, or CPU, and from what I can tell I am not running out of file descriptors either.
Here's what PHP throws at me when MySQL is under heavy load:
Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (11 "Resource temporarily unavailable")
This happens randomly under load, but the more I push, the more frequently PHP throws this at me. While this is happening I can always connect locally through the console, and from PHP via 127.0.0.1 instead of "localhost" (which uses the faster Unix socket).
Here are a few system variables to weed out the usual problems:
cat /proc/sys/fs/file-max = 4895952
lsof | wc -l = 215778 (during "outages")
Highest usage of available connections: 26% (261/1000)
InnoDB buffer pool / data size: 10.0G/3.7G (plenty of room)
soft nofile 999999
hard nofile 999999
I am actually running MariaDB (Server version: 10.0.17-MariaDB MariaDB Server)
These results were generated both under normal load and by running mysqlslap during off hours, so slow queries are not an issue, just high connection counts.
Any advice? I can report additional settings/data if necessary; mysqltuner.pl says everything is a-ok.
And again, the revealing thing here is that connecting via IP works just fine and is fast during these outages; I just can't figure out why.
Edit: here is my my.cnf (some values may seem a bit high from my recent troubleshooting changes, and please keep in mind that there are no errors in the MySQL logs, system logs, or dmesg):
[mysqld]
socket=/var/lib/mysql/mysql.sock
skip-external-locking
skip-name-resolve
table_open_cache=8092
thread_cache_size=16
back_log=3000
max_connect_errors=10000
interactive_timeout=3600
wait_timeout=600
max_connections=1000
max_allowed_packet=16M
tmp_table_size=64M
max_heap_table_size=64M
sort_buffer_size=1M
read_buffer_size=1M
read_rnd_buffer_size=8M
join_buffer_size=1M
innodb_log_file_size=256M
innodb_log_buffer_size=8M
innodb_buffer_pool_size=10G
[mysql.server]
user=mysql
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
open-files-limit=65535
Most likely it is due to net.core.somaxconn.
What is the value of /proc/sys/net/core/somaxconn?
net.core.somaxconn
# The maximum number of "backlogged sockets". Default is 128.
These are connections sitting in the listen queue that have not yet been accepted; anything above that queue will be rejected. I suspect this in your case. Try increasing it according to your load.
As the root user, run:
echo 1024 > /proc/sys/net/core/somaxconn
To make the change persist across reboots, add net.core.somaxconn = 1024 to /etc/sysctl.conf and reload with sysctl -p.
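A hedged way to check both ends of this from PHP (credentials illustrative): compare the kernel backlog cap with MySQL's own back_log setting, since the socket's effective listen queue is the smaller of the two.
<?php
// Kernel-side cap on any listen() backlog.
echo 'net.core.somaxconn = ',
     trim(file_get_contents('/proc/sys/net/core/somaxconn')), PHP_EOL;
// MySQL's requested backlog (back_log=3000 in the my.cnf above).
$db = new mysqli('127.0.0.1', 'user', 'password'); // TCP, which still works during outages
echo 'back_log = ', $db->query("SHOW VARIABLES LIKE 'back_log'")->fetch_row()[1], PHP_EOL;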
This is something that can and should be solved by analysis; learning how to do this is a great skill to have.
Analyzing just what happens under heavy load (number of queries, execution times) should be your first step. Determine the load, then make the proper db config settings. You might find you need to optimize the SQL queries instead!
Then make sure the PHP db driver settings are in alignment as well to fully utilize the database connections.
Here is a link to the MariaDB threadpool documentation. I know it says version 5.5, but it's still relevant, and the page does reference version 10. There are settings listed there that may not be in your .cnf file which you can use.
https://mariadb.com/kb/en/mariadb/threadpool-in-55/
Off the top of my head, I can think of max_connections as a possible source of the problem. I'd increase the limit, if only to eliminate the possibility.
Hope it helps.
After installing APC and watching the apc.php script, the uptime restarts every one or two hours. Why?
How can I change that?
I set apc.gc_ttl = 0.
An APC cache lives as long as its hosting process, so it could be that your Apache workers reach their MaxConnectionsPerChild limit and get killed and respawned, clearing the cache with them. This is a safety mechanism against leaking processes.
mod_php: MaxConnectionsPerChild
mod_fcgid or other FastCGI: FcgidMaxRequestsPerProcess and PHP_FCGI_MAX_REQUESTS (environment variable; the example is for lighttpd, but it should be considered wherever php -b is used)
php-fpm: pm.max_requests, individually for every pool.
You could try setting whichever option you are using to its "doesn't matter" value (usually 0), then test the setup with a simple hello-world PHP script and apachebench, ab2 -n 10000 -c 10 http://localhost/hello.php (tweak the values as needed), to see whether the worker PIDs change.
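A hello.php for that test might look like this (a minimal sketch; printing the PID is what lets you spot worker recycling from the responses):
<?php
// Each response reveals which worker served it; if the PID keeps changing
// under load, workers are being recycled and the APC cache dies with them.
echo 'hello world from pid ', getmypid(), PHP_EOL;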
If you use a TTL of 0, APC will clear all cache slots when it runs out of memory. This is what happens every two hours.
TTL must never be set to 0.
Just read the manual to understand how TTL is used: http://www.php.net/manual/en/apc.configuration.php#ini.apc.ttl
Use apc.php from http://pecl.php.net/get/APC; copy it to your web server to check memory usage.
You must allow enough memory so that APC still has 20% free after some hours of running. Check this on a regular basis.
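If you'd rather check from code than from apc.php, the extension's apc_sma_info() exposes the same shared-memory numbers; a minimal sketch:
<?php
// Report how much of APC's shared memory is still free.
$sma   = apc_sma_info();
$total = $sma['num_seg'] * $sma['seg_size'];
printf("APC memory: %.1f%% free (%d of %d bytes)\n",
       100 * $sma['avail_mem'] / $total, $sma['avail_mem'], $total);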
If you don't have enough memory available, use the apc.filters option to prevent rarely accessed files from being cached.
Check my answer here:
What is causing "Unable to allocate memory for pool" in PHP?
I ran into the same issue today and found the solution here:
http://www.itofy.com/linux/cpanel/apc-cache-reset-every-2-hours/
You need to go to WHM > Apache Configuration > Piped Log Configuration and enable Piped Apache Logs.