I want to find a way to limit the CPU usage of a PHP script.
The script is launched by a cron task and runs in CLI mode.
The problem is that once it starts, CPU usage hits 100%, which means the site on the same server stops responding for the duration of the background task.
Is it possible to limit CPU usage for this script? For example, to 50% maximum.
VPS Linux Ubuntu 16.
RAM 6 GB.
CPU 2x.
PHP 7.2.
You could use nice or renice to lower the process priority, e.g. renice +10 1234 will give process 1234 a lower scheduling priority (the range is -20 to +19, with smaller values meaning higher priority).
With cpulimit it is possible to cap CPU usage, e.g. cpulimit -l 50 -p 1234 limits process 1234 to 50%.
See also https://scoutapm.com/blog/restricting-process-cpu-usage-using-nice-cpulimit-and-cgroups
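If you would rather not touch the cron entry, the script can also lower its own priority from inside PHP using the built-in proc_nice(). A minimal sketch; the +10 increment is an illustrative value, not a recommendation:

<?php
// At the very top of the cron script: lower our own scheduling priority.
// proc_nice() adds to the current nice value, so +10 tells the kernel to
// prefer other processes (e.g. the web server) whenever the CPU is contended.
if (function_exists('proc_nice')) {
    proc_nice(10);
}
// ... rest of the long-running job ...

Note that nice/renice only changes scheduling priority; unlike cpulimit, it does not cap usage at a percentage, it just lets other processes win when they compete for the CPU.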
Related
I have a Windows Server that has random spikes of high CPU usage, and looking at Process Explorer and Windows Task Manager, there seem to be a high number of php-cgi.exe processes running concurrently, sometimes up to 6-8 instances, each taking around 10-15% of the CPU. Sometimes they are so bad that they make the server unresponsive.
In the FastCGI settings I've set MaxInstances to 4, so by rights there shouldn't be more than 4 php-cgi.exe processes running simultaneously. I would like some advice or direction on how to actually limit the number of instances to 4.
Additional notes: I've also set instanceMaxRequests to 10000 and PHP_FCGI_MAX_REQUESTS to 10000.
According to the Gearman website:
"A 16 core Intel machine is able to process upwards of 50k jobs per second."
I have a load balancer that distributes traffic to 4 different machines. Each machine has 8 cores. I want to be able to run 13K jobs per machine, per second (definitely more than 50K jobs in total).
Each job takes between 0.02 and 0.8 ms.
How many workers do I need for this kind of performance?
What steps do I need to take to launch that many workers?
Depending on what kind of processing you're doing, this will require a little experimentation and load testing. Before you start, make sure you have a way to reboot the server without SSH, as you can easily peg the CPU. Follow these steps to find the optimum number of workers (a minimal worker sketch follows the list):
1. Begin by adding a number of workers equal to the number of cores minus one. If you have 8 cores, start with 7 workers (hopefully leaving a core free for doing things like SSH).
2. Run top and observe the load average. The load average should not be higher than the number of cores. For 8 cores, a load average of 7 or above would indicate you have too many workers. A lower load average means you can try adding another worker.
3. If you added another worker in step 2, observe the load average again. Also observe the increase in RAM usage.
4. If you repeat the above steps, eventually you will either run out of CPU or RAM.
When doing parallel processing, keep in mind that you could run into a point of diminishing returns. Read about Amdahl's law for more information.
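The workers themselves are simple loops. A minimal sketch of one worker, assuming the PECL gearman extension; the function name process_job and the reversed-string workload are placeholders for your real job:

<?php
// One Gearman worker process: register a function and loop forever.
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730); // default gearmand host/port

$worker->addFunction('process_job', function (GearmanJob $job) {
    // Replace with the real 0.02-0.8 ms of work per job.
    return strrev($job->workload());
});

while ($worker->work()) {
    // work() blocks until a job arrives; loop again for the next one.
}

Start as many copies of this process as the tuning steps above suggest (e.g. 7 on an 8-core box), ideally under a process manager such as supervisord so crashed workers are restarted.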
How can I limit the CPU usage of Apache2 PHP scripts using
RLimitCPU seconds|max [seconds|max]
Please show me an example.
E.g. RLimitCPU 2 2? What does that mean?
I know it's CPU seconds, but the question is how to convert GHz to seconds.
One PHP video streaming script sometimes takes 100% CPU usage on 2 cores.
http://httpd.apache.org/docs/2.2/mod/core.html#rlimitcpu
1 GHz is 1,000,000,000 CPU cycles per second - so a 2.6 GHz CPU is going to go through 2,600,000,000 cycles in one second. How many instructions actually get executed in a cycle is going to vary with the CPU - they'll all take a certain number of cycles to actually complete an instruction.
2 CPU seconds is "the CPU is completely maxed out for two full seconds or the equivalent". So if your program uses the CPU at half capacity for 4 full seconds that's 2 CPU seconds.
For your app, if you have a 2.6 GHz CPU and you run for 2 CPU seconds, you'll have executed 5,200,000,000 CPU cycles. How many instructions that is, is harder to work out, and how many instructions you actually need for your "video streaming script" is going to be incredibly hard to work out (and will vary with the length of the video).
I'd advise just running the script for the biggest video you'd ever send, seeing how many CPU seconds you use (top -u apache-usr will let you see the PHP process running, "TIME+" column is CPU time) and then tripling that as your RLimitCPU.
Bear in mind that RLimitCPU is just going to kill your PHP script when it takes more CPU time than the limit. It's not some magical tool that means your script will take less CPU time, it's just a limit on the maximum time the script can take.
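You can also measure the same number from inside the script instead of from top: PHP's built-in getrusage() reports the CPU time the current process has consumed. A small sketch:

<?php
// Report the CPU seconds this PHP process has used so far.
$u = getrusage();
$user = $u['ru_utime.tv_sec'] + $u['ru_utime.tv_usec'] / 1e6; // user-mode CPU time
$sys  = $u['ru_stime.tv_sec'] + $u['ru_stime.tv_usec'] / 1e6; // kernel-mode CPU time
printf("CPU seconds used: %.3f\n", $user + $sys);

Run that at the end of a test with your biggest video and you have the baseline to triple for your RLimitCPU value.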
Apache Reference: http_core, RLimitCPU
RLimitCPU
Resource Limit on CPU Usage
Syntax: RLimitCPU soft-seconds [hard-seconds]
Example: RLimitCPU 60 120
Since: Apache 1.2
This directive sets the soft and hard limits for maximum CPU usage of a process in seconds. It takes one or two parameters. The first parameter, soft-seconds, sets the soft resource limit for all processes. The second parameter, hard-seconds, sets the maximum resource limit. Either parameter can be a number, or "max", which indicates to the server that the limit should match the maximum allowed by the operating system configuration. Raising the maximum resource limit requires the server to be running as the user "root", or in the initial start-up phase.
http://www.apacheref.com/ref/http_core/RLimitCPU.html
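For completeness, in context the directive just sits in the server config or a virtual host. A minimal sketch reusing the reference's own example values (60/120 are not a recommendation):

<VirtualHost *:80>
    ServerName example.org
    # Soft limit 60 CPU seconds, hard limit 120, for processes
    # spawned while serving this vhost's requests.
    RLimitCPU 60 120
</VirtualHost>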
Hey there, I am having some CPU spikes due to a PHP script I run every 30 minutes.
The script sends tweets to signed-up Twitter users every day, and there are a lot of users.
So basically, when the PHP script sends out tweets, it causes a CPU spike.
I am asking for direction on how I should handle this situation. Thanks a lot.
Usleep
Just a tiny little usleep will hand the CPU back to other runnable processes (CPU scheduling).
Hog
Take this simple script for example:
<?php
for ($i = 0; $i < 1000000; $i++) {
    echo "$i\n";
}
This process consumes 20% of my CPU-time on average.
Schedule
This simple script only consumes 10% CPU-time on average.
<?php
for ($i = 0; $i < 1000000; $i++) {
    echo "$i\n";
    usleep(100);
}
Of course this script takes a little longer, but the CPU is scheduled better. The longer you usleep, the better the CPU can schedule. usleep(1000), for example, used only 2% CPU-time.
I tested this on my Ubuntu box:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 10.10
Release: 10.10
Codename: maverick
Message Queue
Your operating system is also very good at scheduling processes (as long as the process is friendly to your CPU), so I would advise you to use a message queue to spread out your work (sending tweets). Redis, for example, can be used as a message queue, and so can beanstalkd. Run a couple of worker processes which process the work (sending out tweets). As a bonus you don't incur the cost of spawning processes, which is relatively expensive. There is more than enough information available on the web about using message queues.
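A minimal worker sketch, assuming the phpredis extension and a Redis list fed by the cron script; sendTweet() is a hypothetical helper standing in for the actual Twitter API call:

<?php
// Worker: pop tweet jobs off a Redis list and send them one at a time.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

while (true) {
    // brPop blocks for up to 5 seconds, then returns [listName, payload].
    $job = $redis->brPop(['tweets'], 5);
    if (empty($job)) {
        continue; // queue was empty, keep waiting
    }
    sendTweet(json_decode($job[1], true)); // hypothetical helper
    usleep(1000); // be friendly to the scheduler between jobs
}

The cron job then only has to lPush one small JSON payload per tweet, which is cheap, while the workers smooth the actual sending out over time.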
Buy more CPU power or [u]sleep() every n requests.
You can also get the CPU load from sys_getloadavg and decide if (and how much) you need to sleep(). Bear in mind that sleeping too much may cause each cron run to take longer than 30 minutes.
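A sketch of that idea; the 0.7-per-core threshold and the placeholder chunk loop are illustrative choices, not fixed rules:

<?php
// Pause between work chunks whenever the 1-minute load average is too high.
$cores = 2;

for ($chunk = 0; $chunk < 100; $chunk++) { // 100 placeholder chunks of work
    while (sys_getloadavg()[0] > 0.7 * $cores) {
        sleep(5); // back off until the box calms down
    }
    // ... send the next batch of tweets here ...
}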
I'm running a few PHP jobs which fetch hundreds of thousands of records from a web service and insert them into a database. These jobs take up the CPU usage of the server.
My question is, how much is considered high?
When I do a "top" command on the Linux server,
it shows around 77%. It goes above 100% if I run more jobs simultaneously, which seems high to me (does more than 100% mean it is running on the second CPU?).
28908 mysql 15 0 152m 43m 5556 S 77.6 4.3 2099:25 mysqld
7227 apache 15 0 104m 79m 5964 S 2.3 7.8 4:54.81 httpd
This server also has web pages/projects hosted on it. The hourly job seems to be affecting the server as well as the other web projects' loading times.
If this is high, is there any way of making the job more efficient on the CPU?
Can anyone enlighten me?
A better indicator is the load average; to simplify, it is the number of tasks waiting because of insufficient resources.
You can see it in the uptime command, for example: 13:05:31 up 6 days, 22:54, 5 users, load average: 0.01, 0.04, 0.06. The 3 numbers at the end are the load averages over the last minute, the last 5 minutes and the last 15 minutes. If it reaches 1.00 (no matter the number of cores), something is waiting.
I'd say 77% is definitely high.
There are probably many ways to make the job more efficient (recursive import), but not much info is given.
A quick fix would be invoking the script with the nice command,
and adding a few sleeps to stretch the load out over time.
I guess you also saturate the network during the import, so if you can split up the job, that would prevent your site from stalling.
You can always nice your tasks
http://unixhelp.ed.ac.uk/CGI/man-cgi?nice
With the nice command you can give processes more or less priority.
These jobs take up the CPU usage of the server.
My question is, how much is it considered high?
That is entirely subjective. On computing nodes, CPU usage is pretty much 100% per core all the time. Is that high? No, not at all; it is proper use of hardware that was bought for money.
Nice won't help much, since it's MySQL that's occupying your CPU;
putting nice on a PHP client, as in
nice -10 php /home/me/myjob.php
won't make any significant difference.
Better to split the job up into smaller parts, call your PHP script
from cron, and build it like this:
<?php
ini_set("max_execution_time", "600");

// 1. Get the file from the remote server, in chunks to avoid net saturation
$fp  = fopen('http://example.org/list.txt', 'r');
$fp2 = fopen('local.txt', 'w');
while (!feof($fp)) {
    fwrite($fp2, fread($fp, 10000));
    sleep(5); // yield between chunks
}
fclose($fp);
fclose($fp2);

// 2. Process the local copy in small batches, pausing between inserts
$in = fopen('local.txt', 'r');
while (!feof($in)) {
    // read 1000 lines
    // do insert..
    sleep(10); // give mysqld and the web projects room to breathe
}
fclose($in);
// finished, now rename to .bak, log success or whatever...
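Combined with the nice advice above, a possible crontab entry for the split-up job might look like this (hourly schedule assumed, path as in the example above):

0 * * * * nice -n 10 php /home/me/myjob.php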