What is the setting in PHP that enforces the connection limit?

I am setting up a website and have a PHP script that runs for a long time; each run takes 1-2 hours to finish.
The script works fine with up to 6 concurrent requests, but interestingly the 7th request stays "pending" until one of the earlier ones finishes.
CPU, RAM, and disk usage remain stable.
I tried the following:
Change PHP limits:
pm = dynamic
pm.max_children = 200
pm.start_servers = 1
pm.min_spare_servers = 1
pm.max_spare_servers = 200
Change the Apache limits (I deliberately set these values very high, but it made no difference):
FcgidConnectTimeout 5
FcgidMaxProcesses 1000000
FcgidMaxProcessesPerClass 1000000
FcgidMaxRequestInMem 65536000000
FcgidMaxRequestLen 13107200000
PHP Version: 7.2
In Chrome DevTools the request shows as "pending", the Apache error log and the journal are empty, and the page stays blank (white screen) until one of the other requests finishes. What else should I try?
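A quick way to check whether the cap is server-side at all is to fire more than six requests from the PHP CLI with curl_multi, bypassing the browser: if all of them run in parallel, the six-request ceiling is the browser's per-host connection limit rather than PHP-FPM or Apache. A minimal sketch (the URL is a placeholder):
<?php
// Fire N concurrent requests at the long-running endpoint from the CLI.
// If all N run in parallel here, the six-request cap is coming from the
// browser, not from PHP-FPM or Apache.
$url = 'https://example.com/long-running.php'; // placeholder endpoint
$n   = 10;

$mh      = curl_multi_init();
$handles = [];
for ($i = 0; $i < $n; $i++) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 0); // no client-side timeout
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// Drive all transfers until every request has finished.
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh); // wait for socket activity instead of spinning
    }
} while ($running && $status === CURLM_OK);

foreach ($handles as $ch) {
    printf("HTTP %d in %.1fs\n",
        curl_getinfo($ch, CURLINFO_RESPONSE_CODE),
        curl_getinfo($ch, CURLINFO_TOTAL_TIME));
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);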

Related

Ajax request is getting cancelled

In a PHP application I am uploading 20-30 files at once, each around 100-200 MB, which means I am uploading more than 2 GB of data to the server.
Because the upload takes around 20-30 minutes, a general Ajax polling job gets cancelled after some time.
I have following configuration:
upload_max_filesize = 4096M
post_max_size = 4096M
max_input_time = 600
max_execution_time = 600
During this process my CPU usage only goes up to 10-20%. I have a Linux machine with 32 GB RAM and 12 cores.
The application runs on PHP 8.0, Apache 2, MySQL 8, and Ubuntu 20.
Can anyone suggest what else I can check?
max_execution_time: This sets the maximum time in seconds a script is
allowed to run before it is terminated by the parser. This helps
prevent poorly written scripts from tying up the server. The default
setting is 30. When running PHP from the command line the default
setting is 0.
max_input_time: This sets the maximum time in seconds a script is
allowed to parse input data, like POST and GET. Timing begins at the
moment PHP is invoked at the server and ends when execution begins.
The default setting is -1, which means that max_execution_time is used
instead. Set to 0 to allow unlimited time.
I think you should change them to:
max_input_time = 1800 and max_execution_time = 1800 (30 minutes)
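Before raising anything, it is also worth confirming which limits the upload request actually runs under, since a per-directory php.ini, .htaccess, or the FPM pool can silently override the global settings. A minimal check, dropped into the same vhost as the upload endpoint:
<?php
// Print the effective limits as PHP sees them for this vhost/pool.
$keys = [
    'upload_max_filesize',
    'post_max_size',
    'max_input_time',
    'max_execution_time',
    'memory_limit',
];
foreach ($keys as $key) {
    printf("%-20s %s\n", $key, ini_get($key));
}
// Note: max_input_time applies while PHP is still parsing the POST body,
// before any script code runs, so it has to be raised in php.ini or the
// vhost/pool configuration rather than from inside the upload script.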

How to fix a website speed problem (500 records on one page) on an Nginx + php-fpm server

I moved my shop to a new server and I have a problem with loading time. I have 500 products on one page (no pagination); the first load takes about 8 seconds (on the old server it was 2 seconds maximum). After that the website works very fast because the cache plugin kicks in.
I have PrestaShop 1.7.5.2 with a very good cache plugin and two powerful servers:
Database only: Apache, phpMyAdmin (60 GB RAM, 16 vCores, SSD)
Web: Nginx, php-fpm 7.2 (60 GB RAM, 16 vCores, SSD)
Only the products page has this problem. I know 500 products without pagination is not a perfect idea, but it has to be like that.
Could the php-fpm config be wrong?
Currently I have this:
pm = ondemand
pm.max_children = 16
pm.max_requests = 4000
pm.process_idle_timeout = 30s
I will be very grateful for your help.
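One thing that usually helps before touching the pm.* values is finding out where those 8 seconds actually go inside the uncached request. php-fpm can log a stack trace of any request that runs longer than a threshold; a small pool-level sketch (the log path is a placeholder):
request_slowlog_timeout = 5s
slowlog = /var/log/php-fpm/www-slow.log
Any request on the products page that takes longer than 5 seconds will then dump a PHP backtrace to the slow log, which shows whether the time is going to MySQL queries, the cache plugin, or template rendering.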

Compute Engine MYSQL Server CPU Strange

I couldn't think what else to title this strange problem.
We have a "Worker" Compute Engine instance which is a MySQL slave. Its primary role is to process a large set of data and then place it back on the master, all handled via a PHP script.
Now the processing of data takes roughly 4 hours to complete. During this time we noticed the following CPU pattern.
What you can see in the graph is that the CPU sits at a solid 50% after a server reboot. Then, after about 2 hours, it starts to produce an ECG-style pattern: roughly every 5-6 minutes the CPU spikes to ~48% and then drops back over the next 5 minutes.
My question is: why? Can anyone please explain it? We ideally want this server to be maxing out its CPU at 100% (it sits at 50% because there are 2 cores).
The spec of the server: 2 vCPUs with 7.5 GB memory.
As mentioned, if we can get this running at full throttle it would be great. Below is the my.cnf:
symbolic-links=0
max_connections=256
innodb_thread_concurrency = 0
innodb_additional_mem_pool_size = 1G
innodb_buffer_pool_size = 6G
innodb_flush_log_at_trx_commit = 1
innodb_io_capacity = 800
innodb_flush_method = O_DIRECT
innodb_log_file_size = 24M
query_cache_size = 1G
query_cache_limit = 512M
thread_cache_size = 32
key_buffer_size = 128M
max_allowed_packet = 64M
table_open_cache = 8000
table_definition_cache = 8000
sort_buffer_size = 128M
read_buffer_size = 8M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 128M
tmp_table_size = 256M
query_cache_type = 1
join_buffer_size = 256M
wait_timeout = 300
server-id = 2
relay-log = /var/log/mysql/mysql-relay-bin.log
log_bin = /var/log/mysql/mysql-bin.log
log-error=/var/log/mysqld.log
read-only = 1
innodb_flush_log_at_trx_commit=2
I have cleaned up the above to remove any configs with private information which are not relevant to performance.
UPDATE
I have noticed that when the CPU starts dropping during the heartbeat section of the graph, the PHP script is no longer running. That should be impossible, as I know the script takes 4 hours. There are no errors, and after another 4 hours the data is where I expected it to be.
Changing innodb_io_capacity from 800 to 1500 will likely reduce your 4-hour elapsed processing time by raising the I/O limit toward what you know your slave can achieve.
For your indicated 7.5 GB environment, the configuration has
innodb_additional_mem_pool_size=1G
innodb_buffer_pool_size=6G
query_cache_size=1G
so before you even start, memory is overcommitted (1 GB + 6 GB + 1 GB = 8 GB on a 7.5 GB instance).
Another angle to consider: with
max_connections=256
max_allowed_packet=64M
a fully busy 256 connections could need 16 GB+ (256 × 64 MB) just for this function to survive. It is unlikely that max_allowed_packet at 64M is reasonable.
Changing read_rnd_buffer_size = 4M to SET GLOBAL read_rnd_buffer_size=16384; could be significant on your slave, and then 24 hours later on the master. They can be different, but if it significantly reduces your 4 hours on the slave, implement it on both instances. Let us know what this single change does for you, please.
The 50% CPU utilization you are seeing is the script maxing out the single core that it is capable of utilizing, as PressingOnAlways indicated recently. You cannot tune around that limit in your running script.
For a more thorough analysis, provide from both the SLAVE and the MASTER:
RAM size (nnG)
SHOW GLOBAL STATUS
SHOW GLOBAL VARIABLES
SHOW ENGINE INNODB STATUS
CPU % is measured across all cores, so 100% CPU usage means both cores maxing out. PHP runs in a single thread by default and does not utilize multiple cores. The 50% CPU utilization you are seeing is the script maxing out the single core it is capable of using.
In order to utilize 100% CPU, consider spawning 2 PHP scripts that work on 2 separate datasets, e.g. script 1 processes records 1-1000000 while script 2 processes records 1000001-2000000, as in the sketch below.
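A minimal way to sketch that split, assuming a hypothetical worker.php that accepts --from/--to record bounds:
<?php
// Launch two CLI workers, each over half of the record range, so both
// cores stay busy. worker.php and its --from/--to options are placeholders
// for the real processing script.
$ranges = [[1, 1000000], [1000001, 2000000]];
$procs  = [];
$pipes  = [];

foreach ($ranges as $i => $range) {
    $cmd = sprintf('php worker.php --from=%d --to=%d', $range[0], $range[1]);
    // Each worker is its own OS process; PHP itself stays single-threaded.
    $procs[$i] = proc_open($cmd, [1 => ['pipe', 'w']], $pipes[$i]);
}

foreach ($procs as $i => $proc) {
    echo stream_get_contents($pipes[$i][1]); // forward the worker's output
    proc_close($proc);                       // blocks until that worker exits
}
Splitting on an indexed ID column keeps the two workers from contending for the same rows.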
Another option is to rewrite the script to use threads. You may want to consider changing languages altogether to something more conducive to threading, like Go, though this might not be necessary if the main work is done within MySQL.
The other issue you're seeing, when the graph drops below 50%, may be due to I/O wait. It's hard to tell from a graph, but you may have a data-transfer bottleneck where the CPU sits idle while large chunks of data are transferred.
Optimizing CPU utilization is an exercise in finding the bottlenecks and removing them - good luck.
A "monitoring service" could be enabled that periodically captures a "health check" of your system, since the spikes appear to be on a 6-minute cycle.
SHOW GLOBAL STATUS LIKE 'Com_show_%status' may confirm activity of this nature.
Divide your Com_show_%status counters by (uptime / 3600) to get the rate per hour;
10 times an hour would be every 6 minutes.
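A quick sketch of that check from PHP (the connection credentials are placeholders):
<?php
// Compare each Com_show_%status counter with server uptime; roughly 10 per
// hour would line up with a 6-minute monitoring cycle.
$db = new mysqli('127.0.0.1', 'monitor_user', 'secret');

$uptimeRow = $db->query("SHOW GLOBAL STATUS LIKE 'Uptime'")->fetch_row();
$hours     = max((float) $uptimeRow[1] / 3600, 0.001);

$res = $db->query("SHOW GLOBAL STATUS LIKE 'Com_show_%status'");
while ($row = $res->fetch_row()) {
    printf("%-30s %.1f per hour\n", $row[0], $row[1] / $hours);
}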

Heavy Drupal 7 site performance issue

I am running a Drupal 7 website on a Linux server (4 cores, 12 GB RAM) with LEMP (nginx + php5-fpm + MySQL).
The Drupal installation has a large number of modules enabled (all of which are needed).
I also use APC + Boost + memcache + Authcache for caching. The caching seems to be working (I see pages being served from cache)
and the site has a reasonable response time.
I have run stress tests with the website on a URL like www-1.example.com (which points to the IP of my web server, say x.x.x.x),
and the results are fine for up to 100 concurrent users.
The problem starts when I change the DNS so that www.example.com also points to x.x.x.x. Then the CPU of my web server (all 4 cores) reaches 100% in a short time.
I have been experimenting with the following parameters in the www.conf file, with no luck:
Configuration 1:
pm.max_children = 100
pm.start_servers = 20
pm.min_spare_servers = 10
pm.max_spare_servers = 20
pm.max_requests = 200
result: 100% cpu usage, low memory usage
Final configuration:
pm.max_children = 300
pm.start_servers = 20
pm.min_spare_servers = 10
pm.max_spare_servers = 20
pm.max_requests = 200
result: low cpu usage, 100% memory usage
Can anyone guide me to the optimal configuration, or does anyone have an idea of what could cause the 100% CPU usage?
How do I calculate the maximum number of concurrent users the server can handle without problems, based on the server parameters? (See the sketch below.)
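As a rough rule of thumb, the ceiling is the RAM you can give PHP divided by the size of one FPM worker. The figures below are assumptions; measure the real RSS of your php5-fpm workers (e.g. with ps) on the live box:
<?php
// Back-of-the-envelope sizing for pm.max_children on a 12 GB box.
// All figures are assumptions -- replace them with measured values.
$totalRamMb  = 12 * 1024; // 12 GB server
$reservedMb  = 3 * 1024;  // OS, nginx, MySQL, memcache, APC (assumed)
$perWorkerMb = 80;        // average php-fpm worker RSS (assumed)

$maxChildren = (int) floor(($totalRamMb - $reservedMb) / $perWorkerMb);
echo "pm.max_children ~= $maxChildren\n"; // ~115 with these numbers

// Rough sustained throughput: workers / average response time.
$avgRequestSeconds = 0.5; // assumed uncached response time
printf("~%.0f requests/second sustained\n", $maxChildren / $avgRequestSeconds);
If the real per-worker footprint is much larger, the same arithmetic would be consistent with what you observed: pm.max_children = 300 exhausting memory, while 100 children doing uncached Drupal bootstraps pin all 4 cores long before RAM runs out.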

PHP Uploading with FastCGI on IIS 7.5 stalling/taking forever

Okay, first off: this works 100% fine on one server setup but is messed up on a very similar one, which is why I think it has to be an IIS issue somewhere; I just don't know where.
I have a very standard PHP upload script, but it keeps locking up/freezing and then resuming itself on larger files (over 250 MB).
No errors are returned, and the upload does finish and work fine for files up to 4 GB, but it takes forever. You can watch the size of the tmp files as they upload: the server just stops receiving data, sometimes for several minutes at a time, then picks back up right where it left off and continues the upload.
I have configured the following in IIS:
CGI:
Time-out: 00:30:00
Activity Timeout: 300000
Idle Timeout: 300000
Request Timeout: 300000
Request Filtering:
Max allowed content length: 4294967295
Max URL Length: 4096
Max query string: 2048
PHP:
post_max_size: 4G
upload_max_filesize: 4G
max_execution_time: 300000
max_file_uploads: 300000
max_input_time: -1
memory_limit: -1
I was previously getting errors from the script taking too long; however, raising the Activity, Idle, and Request timeouts fixed that issue. The uploads do work, but they take forever.
I have the exact same IIS settings on another dev box running the same upload script and it works flawlessly - so I don't know what I'm missing.
PHP is 5.4.14. I get nothing in the PHP error log or Windows Event Viewer (since no errors are actually thrown as far as I can tell.)
Anyone have any idea of what settings I could be missing somewhere?
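One more data point that can help, regardless of the IIS settings: log how long PHP spent receiving the POST body versus how long the script itself took, so a stalled transfer can be told apart from slow PHP work. A minimal sketch (paths and the "file" field name are placeholders, and the receive figure is only approximate):
<?php
// $_SERVER['REQUEST_TIME_FLOAT'] is stamped when PHP starts handling the
// request, before the POST body is parsed; user code only runs after the
// body (and the uploaded file) has been fully read, so the difference is a
// rough measure of how long the transfer into PHP took.
$start        = microtime(true);
$requestStart = isset($_SERVER['REQUEST_TIME_FLOAT']) ? $_SERVER['REQUEST_TIME_FLOAT'] : $start;
$receiveSecs  = $start - $requestStart;

if (!empty($_FILES['file']['tmp_name'])) {
    move_uploaded_file($_FILES['file']['tmp_name'],
        'C:\\uploads\\' . basename($_FILES['file']['name']));
}
$processSecs = microtime(true) - $start;

error_log(sprintf("upload: receive %.1fs, process %.1fs\n", $receiveSecs, $processSecs),
    3, 'C:\\logs\\upload-timing.log');
If the receive figure accounts for nearly all of the elapsed time, the stall is happening on the transport side (IIS, FastCGI, or the network) rather than in the PHP script.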
Welp, that was stupid. I just asked around, and someone had in fact turned on "intrusion prevention" at the router level for the one server that was having issues. Disabling it seems to resolve the problem.
