Max number of PHP CLI processes - php

I have a PHP script that is called by an external process when certain events happen, like when an email arrives. So if during a period of time, the triggering event happens multiple times, the script is invoked multiple times as well.
What's the limit on max number of instances of the script running concurrently? How would I go about loosening the limit?
I have read various posts about the maximum number of concurrent connections in the Apache/PHP context, but I think the CLI context works differently.
My environment is Ubuntu 16.04 LTS / PHP 7.0.
Thanks!

On Linux, the max number of processes is determined by several factors:
cat /proc/sys/kernel/pid_max
This will show the maximum PID value + 1 (i.e. the highest possible PID is one less than this value). When the kernel reaches this value, it wraps around and reuses lower PIDs. Note that the first 300 PIDs are reserved. The default value is 32768, so the maximum number of concurrent PIDs is 32767 - 300 = 32467.
ulimit -u
This will show the maximum number of processes that your user account can run.
So if you're running all of your processes as a single user, and this number is less than the pid_max value, then this may be your upper limit.
These values can be adjusted (particularly on 64-bit systems), but they're unlikely to be your real-world upper limit. In most cases, the maximum number of processes is going to be based on your hardware resources. And I suspect that if you try to run anywhere near 32,000 PHP CLI instances, you'll run out of RAM far earlier than you'll run out of available process space.
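Both values are easy to inspect from a shell; a quick check (output will vary by system):

```shell
# Kernel-wide ceiling on PID values (default is 32768 on most distros)
cat /proc/sys/kernel/pid_max

# Per-user cap on simultaneous processes for the current account
ulimit -u

# Raising the per-user cap for the current session (needs privileges):
#   ulimit -u 65536
```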
The maximum number of processes will depend on what your script does; particularly, how much CPU and RAM it uses. Your best bet is to kick off a script that runs X of your CLI processes for an extended period of time, then watch the load on your system. Start with 10; if the load is negligible, bump it to 100, and continue until you have a noticeable load. Find out what your maximum tolerable load is, then do the math to figure out your maximum number of processes.
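A minimal sketch of that ramp-up experiment. `job` here is a harmless stand-in; in practice you would launch your real script with `php worker.php &` (a hypothetical filename):

```shell
#!/bin/sh
# Stand-in workload; replace the body with your real CLI script.
job() { sleep 1; }

N=10                     # start small, then try 100, 1000, ...
i=0
while [ "$i" -lt "$N" ]; do
    job &
    i=$(( i + 1 ))
done

# While the workers run, check the load in another terminal:
#   uptime            (load averages)
#   free -m           (RAM headroom)
wait                     # block until all N background jobs finish
echo "finished $N jobs"
```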

Related

Max connection limit in php apache server

I have a 32 GB Ubuntu server where my site is hosted. I have installed XAMPP and am running my site. My question is: what is the maximum number of concurrent connections Apache will handle, and how can I check that? To what extent can I increase it, and how?
My server must support 5000 concurrent users at a time, so I have to configure it for that.
Generally the formula is :
(Total available memory - Memory needed by operating system) / memory each PHP process needs.
Honestly it's a bit hard to predict sometimes, so it might be worth doing some experimentation. The goal is that you never use more memory than is available, so your operating system never swaps.
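With made-up but plausible numbers (a 32 GB box, ~2 GB for the OS, ~64 MB per PHP process; check your real per-process RSS with `top` or `ps`), the formula works out like this:

```shell
# All figures below are illustrative assumptions, not measured values.
total_mb=32768       # 32 GB of RAM
os_mb=2048           # reserved for the operating system
per_php_mb=64        # rough footprint of one PHP process

max_procs=$(( (total_mb - os_mb) / per_php_mb ))
echo "max PHP processes before swapping: $max_procs"    # 480
```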
However, you can also turn it around. 5000 concurrent requests is frankly a lot, so I'm going by your 5000 concurrent users.
Say 5000 users are actively using your application at a given time, and each makes on average 1 request every 30 seconds or so. And say the average PHP script takes 100 ms to execute.
That's about 166 requests per second made by your users. Given that each request takes 100 ms to fulfill, you need about 17 concurrent connections to serve all that up, which is easy for any old server.
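The arithmetic behind those two numbers (this is essentially Little's law: concurrency = arrival rate x service time):

```shell
users=5000               # concurrent users (assumed)
secs_per_request=30      # each user makes one request every 30 s (assumed)
service_ms=100           # average PHP script runtime (assumed)

rps=$(( users / secs_per_request ))                  # ~166 requests/sec
concurrent=$(( (rps * service_ms + 999) / 1000 ))    # round up -> 17
echo "$rps req/s, about $concurrent concurrent connections"
```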
Anyway, the key to all these types of dilemmas is to:
Make an educated guess
Measure
Make a better guess
Repeat

How to run 50k jobs per second with gearman

According to Gearman website
"A 16 core Intel machine is able to process upwards of 50k jobs per second."
I have a load balancer that moves traffic to 4 different machines. Each machine has 8 cores. I want the ability to run 13K jobs per machine per second (that's definitely more than 50K jobs in total).
Each job takes between 0.02 and 0.8 ms.
How many workers do I need to open for this kind of performance?
What steps do I need to take to launch that many workers?
Depending on what kind of processing you're doing, this will require a little experimentation and load testing. Before you start, make sure you have a way to reboot the server without SSH, as you can easily peg the CPU. Follow these steps to find the optimum number of workers:
Begin by adding a number of workers equal to the number of cores minus one. If you have 8 cores, start with 7 workers (hopefully leaving a core free for doing things like SSH).
Run top and observe the load average. The load average should not be higher than the number of cores. For 8 cores, a load average of 7 or above would indicate you have too many workers. A lower load average means you can try adding another worker.
If you added another worker in step 2, observe the load average again. Also observe the increase in RAM usage.
If you repeat the above steps, eventually you will either run out of CPU or RAM.
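Steps 1 and 2 can be sketched like this; `start_worker` is a hypothetical stand-in for launching a real Gearman worker process:

```shell
#!/bin/sh
# Hypothetical launcher; in practice this would exec your worker binary.
start_worker() { sleep 1 & }

cores=$(getconf _NPROCESSORS_ONLN)
workers=$(( cores - 1 ))            # step 1: cores minus one

i=0
while [ "$i" -lt "$workers" ]; do
    start_worker
    i=$(( i + 1 ))
done

# Step 2: snapshot the 1-minute load average and compare it to $cores.
load=$(cut -d' ' -f1 /proc/loadavg)
echo "started $workers workers; 1-min load average: $load"
wait
```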
When doing parallel processing, keep in mind that you could run into a point of diminishing returns. Read about Amdahl's law for more information.

What is the max of max_execution_time in PHP?

I have about 200,000 rows that need to be added to the database.
I have set max_execution_time = 4000, and I still get this error.
What is the max of max_execution_time in PHP?
I want to take off this restriction completely and set it to unlimited if possible.
I know using a value of 0 in set_time_limit will tell PHP not to time out a script/program before it's finished running. I'm pretty sure setting the same value for max_execution_time will have the same effect.
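For reference, the usual ways to lift the limit; `script.php` is a placeholder name:

```shell
# In php.ini (0 means no limit):
#   max_execution_time = 0
#
# Per invocation, from the command line:
#   php -d max_execution_time=0 script.php
#
# Inside the script itself:
#   set_time_limit(0);
#
# Note: the CLI SAPI already defaults max_execution_time to 0.
```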
That said: some hosting companies run other systems that look for long-running processes of any sort (PHP, Ruby, Perl, random programs, etc.) and kill them if they run too long. There's nothing you can do to stop those systems from killing your process, other than moving to a different host.
Also, certain versions of PHP have a number of small memory leaks and inefficient garbage collection that can start to eat up memory in long-running processes. You may hit PHP's memory limit this way, or you may exhaust the memory available to your virtual machine.
If you run into these challenges, the usual approach is to batch process the rows in some way.
Hope that helps, and good luck!
Update, re batch processing: if you find you're stuck on a system that can only insert around 10,000 rows at a time, then rather than writing a program to insert all 200,000 rows at once, you write a program that inserts, say, 9,000 and then stops. Then you run it again and it inserts the next 9,000, and the next 9,000, until you're done. How you do this depends on where you're getting your data from. If you're pulling it from flat files, it can be as simple as splitting the flat file into multiple files. If you're pulling from another database table, it can be as simple as writing a program to pull out IDs in groups of 9,000 and having your main program select those 9,000 rows. Message-queue systems are another popular approach for this sort of task.
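A minimal sketch of the flat-file variant: generate a small stand-in data set, split it into fixed-size batches, and process each batch in a separate run (here `wc -l` stands in for a hypothetical `php import.php` call):

```shell
#!/bin/sh
workdir=$(mktemp -d)
cd "$workdir" || exit 1

seq 1 20 > rows.txt             # stand-in for the 200,000-row data set
split -l 9 rows.txt batch_      # produces batch_aa, batch_ab, batch_ac

for f in batch_*; do
    # The real system would run:  php import.php "$f"
    echo "$f: $(wc -l < "$f") rows"
done
```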

How to use RLimitCPU

How can I limit the CPU usage of Apache2 PHP scripts using
RLimitCPU seconds|max [seconds|max]
Please show me an example.
E.g. RLimitCPU 2 2: what does that mean?
I know it's CPU seconds, but my question is how to convert GHz to seconds.
One PHP video-streaming script sometimes takes 100% CPU usage on 2 cores.
http://httpd.apache.org/docs/2.2/mod/core.html#rlimitcpu
1 GHz is 1,000,000,000 CPU cycles per second - so a 2.6 GHz CPU is going to go through 2,600,000,000 cycles in one second. How many instructions actually get executed in a cycle is going to vary with the CPU - they'll all take a certain number of cycles to actually complete an instruction.
2 CPU seconds is "the CPU is completely maxed out for two full seconds or the equivalent". So if your program uses the CPU at half capacity for 4 full seconds that's 2 CPU seconds.
For your app, if you have a 2.6 GHz CPU and you run for 2 CPU seconds, you'll have executed 5,200,000,000 CPU cycles. How many instructions that is, is harder to work out, and how many instructions your video-streaming script actually needs is going to be incredibly hard to work out (and will vary with the length of the video).
I'd advise just running the script for the biggest video you'd ever send, seeing how many CPU seconds you use (top -u apache-usr will let you see the PHP process running; the "TIME+" column is CPU time), and then tripling that as your RLimitCPU.
Bear in mind that RLimitCPU is just going to kill your PHP script when it takes more CPU time than the limit. It's not some magical tool that means your script will take less CPU time, it's just a limit on the maximum time the script can take.
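To see the distinction yourself, burn some CPU and then read the shell's accumulated CPU time with the POSIX `times` builtin; this is a stand-in for timing the real PHP process, and it reports the same quantity as top's "TIME+" column:

```shell
# Busy-loop as a stand-in workload for the streaming script.
i=0
while [ "$i" -lt 200000 ]; do
    i=$(( i + 1 ))
done

# Prints user and system CPU time for the shell and its children;
# RLimitCPU counts against this figure, not wall-clock time.
times
```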
Apache Reference: http_core, RLimitCPU
RLimitCPU
Resource Limit on CPU Usage
Syntax: RLimitCPU soft-seconds [hard-seconds]
Example: RLimitCPU 60 120
Since: Apache 1.2
This directive sets the soft and hard limits for maximum CPU usage of a process in seconds. It takes one or two parameters. The first parameter, soft-seconds, sets the soft resource limit for all processes. The second parameter, hard-seconds, sets the maximum resource limit. Either parameter can be a number, or "max", which indicates to the server that the limit should match the maximum allowed by the operating system configuration. Raising the maximum resource limit requires the server to be running as the user "root", or in the initial start-up phase.
http://www.apacheref.com/ref/http_core/RLimitCPU.html

Is CPU execution time different in Loop with Sleep() and Long Loops without sleep(), with both having the same total running time?

I have a loop that runs for approx. 25 minutes, i.e. 1500 seconds (100 iterations with sleep(15)).
The execution time for the statements inside the loop is very small.
My scripts are hosted on GoDaddy. I am sure they have some kind of limit on execution time.
My question is: are they concerned with the total CPU execution time or the total running time?
They will be concerned with the CPU execution time, not the total running time, unless connections are an issue and you're using a lot of them (which it doesn't sound like you are).
Running time, as in a stopwatch, doesn't matter much to a shared host: if your loop runs for 3 years but only uses 0.01% CPU doing it, it doesn't impact their ability to host. However, if you ran for 3 years at 100% CPU, that directly impacts how many other applications/VMs/whatever can be run on that same hardware. That would mean more servers to host the same number of people, which means money, and that they do care about.
For the question in the title: they are very different. With sleep() and the same amount of total time, that means the actual work the CPU is doing is much less because it can do the work, sleep/idle, and still finish in the same amount of time. When you're calling sleep() you're not taxing the CPU, it's a very low-power operation for it to keep the timer going until calling your code again.
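A quick way to see this: two seconds of wall-clock time pass during `sleep`, while almost no CPU time accumulates:

```shell
start=$(date +%s)
sleep 2                          # wall clock advances, CPU stays idle
end=$(date +%s)

echo "wall-clock elapsed: $(( end - start ))s"
times                            # user/sys CPU time remains near zero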
This is the typical time limit:
http://es2.php.net/manual/en/info.configuration.php#ini.max-execution-time
It can normally be altered on a per-script basis with ini_set(), e.g.:
ini_set('max_execution_time', 20*60); // 20 minutes (in seconds)
In any case, the exact time limits probably depend on how PHP is running (Apache module, FastCGI, CLI...).
