Hey there, I am having CPU spikes caused by a PHP script I run every 30 minutes.
The script sends tweets to signed-up Twitter users every day, and there are a lot of users.
So basically, when the PHP script sends out the tweets, it causes a CPU spike.
I am asking for direction on how I should handle this situation. Thanks a lot.
Usleep
Just a tiny little usleep will return the CPU to other available processes (CPU scheduling).
Hog
Take this simple script for example:
<?php
for ($i = 0; $i < 1000000; $i++) {
    echo "$i\n";
}
This process consumes 20% of my CPU-time on average.
Schedule
This simple script only consumes 10% CPU-time on average.
<?php
for ($i = 0; $i < 1000000; $i++) {
    echo "$i\n";
    usleep(100);
}
Of course this script takes a little longer, but the CPU is scheduled better. The longer you usleep, the better the CPU can schedule. usleep(1000), for example, only used 2% CPU-time.
I tested this on my Ubuntu box:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 10.10
Release: 10.10
Codename: maverick
Message Queue
Also, your operating system is very good at scheduling processes (of course, the process needs to be friendly to your CPU), so I would advise you to use a message queue to speed up your work (sending tweets). For example, Redis can be used as a message queue, or beanstalkd. Run a couple of worker processes which process the work (sending out tweets). As a bonus, you don't incur the cost of spawning processes, which is relatively expensive. There is more than enough information available on the web about using message queues.
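As a rough illustration of that pattern, here is a minimal producer/worker sketch using Redis as the queue. It assumes the phpredis extension and a hypothetical send_tweet() function standing in for your existing sending code; any list-based queue (beanstalkd, etc.) works the same way.

<?php
// producer.php -- the 30-minute cron: push one job per user onto a Redis list
$redis = new Redis();                       // phpredis extension
$redis->connect('127.0.0.1', 6379);
foreach ($userIds as $userId) {             // $userIds: whatever your script already loads
    $redis->lPush('tweet_jobs', json_encode(['user_id' => $userId]));
}

<?php
// worker.php -- run a couple of these as long-lived processes
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
while (true) {
    $job = $redis->brPop(['tweet_jobs'], 5);   // block up to 5 seconds; no busy polling
    if (!$job) {
        continue;                              // timed out, wait for the next job
    }
    $data = json_decode($job[1], true);
    send_tweet($data['user_id']);              // hypothetical: your existing send logic
    usleep(100000);                            // small pause keeps the CPU friendly
}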
Buy more CPU power or [u]sleep() every n requests.
You can also get the CPU load from sys_getloadavg() and decide if (and how much) you need to sleep(). Bear in mind that sleeping too much may cause each cron run to take longer than 30 minutes.
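A minimal sketch of that idea; sys_getloadavg() is a standard PHP function, while the threshold, the 10-second back-off, $users and send_tweet() are made-up stand-ins you would replace with your own values and code:

<?php
function throttle_on_load(float $maxLoad = 2.0): void {
    // sys_getloadavg() returns the 1, 5 and 15 minute load averages
    while (sys_getloadavg()[0] > $maxLoad) {
        sleep(10);                      // back off while the box is busy
    }
}

foreach ($users as $user) {             // $users: your existing list of recipients
    throttle_on_load();                 // pause before each send if the load is high
    send_tweet($user);                  // hypothetical: your existing send logic
}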
Related
I want to find a way to limit the CPU usage of a PHP script.
The script runs as a cron task in CLI mode.
The problem is that after it starts, the CPU usage is 100%.
As a result, the site on the same server stops responding for the duration of the background task.
Is it possible to limit CPU usage for this script? For example, to a maximum of 50%.
VPS Linux Ubuntu 16.
RAM 6 GB.
CPU 2x.
PHP 7.2.
You could use nice or renice to lower the priority of the process; e.g. renice +10 1234 will make process 1234 low priority for scheduling (the range is -20 to +19, with smaller values meaning higher priority).
With cpulimit it is possible to limit the CPU usage, e.g. cpulimit -l 50 -p 1234 limits process 1234 to 50%.
See also https://scoutapm.com/blog/restricting-process-cpu-usage-using-nice-cpulimit-and-cgroups
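If you would rather set the priority from inside the script instead of wrapping it in nice/renice, PHP's proc_nice() does the same thing for the current process. A minimal sketch; proc_nice() only exists on Unix-like systems, and the value of 10 is just an example:

<?php
// Lower our own scheduling priority before the heavy work starts.
// Positive values mean "nicer" (lower priority), same scale as nice/renice.
if (function_exists('proc_nice')) {
    proc_nice(10);
}

// ... the CPU-heavy part of the cron job runs here ...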
I have a PHP script that powers a social network and similar features.
Normally there isn't any problem; my server is a VPS with:
2.4 GHz CPU
4 Cores
8 GB of RAM
150GB SSD
CentOS 7.1 with cPanel.
The problem is that normally the server can handle around 30 concurrent users at a CPU load of 30-40%. But sometimes, I don't know why, the load goes really high, to 98-100%, all the time. Even if users log out and there are only 3-4 people on the website, the server load stays at 98-100% until I restart the server.
So, using the top command via SSH, I noticed that a PHP process gets created with the owner of the webspace (created via cPanel) as its user and PHP as its command. The load for this process is from 20% to 27%.
The thing is, the more time passes, the more of these PHP processes get created.
For example, after 30 minutes there is another PHP process with the same characteristics as the first, and together they take 50-60% of the CPU load. As more time passes, more processes get created, up to a maximum of 4 such processes. (Is that because my CPU has 4 cores?)
If I kill these processes via kill [pid], within 1-2 minutes the server goes back to 3%, even with 10-15 concurrent users.
What is the problem? Is it strictly related to a PHP file, or something else? I even tried logging events on the website to check WHAT actions start these (apparently useless) PHP processes, because if I kill them, the website keeps working perfectly well!
What could be the problem?
Thank you all.
If a process is making a lot of I/O operations, like database calls etc., it can considerably increase the CPU load. In your case you are sure which process is the cause behind this high load. Since the load increases over time, you should carefully look at the PHP script for memory leaks, lots of sessions, and lots of nested loops with I/O tucked in between, and try to isolate the reason for it. Good luck.
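A cheap way to check for a leak is to log memory use every so often inside the main loop; memory_get_usage() and memory_get_peak_usage() are built-in, while $items and process_item() below are hypothetical stand-ins for whatever the script actually iterates over. If the logged figure grows steadily, the script is holding on to something it should be releasing.

<?php
foreach ($items as $i => $item) {
    process_item($item);                    // hypothetical: the existing work

    if ($i % 1000 === 0) {                  // log every 1000 iterations
        error_log(sprintf(
            'iteration %d: %.1f MB in use, %.1f MB peak',
            $i,
            memory_get_usage(true) / 1048576,
            memory_get_peak_usage(true) / 1048576
        ));
    }
}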
I have code that needs to run in 5 parts, each 10 minutes apart. I know I can run 5 different cron jobs, but the script lends itself to being one script with 10-minute sleep()s at different points.
So I have:
<?php
set_time_limit(3600);
// code
sleep(600);
// continues
sleep(600);
// etc.
Is doing this highly inefficient, or should I find a way to have it split into 5 different cron jobs run 10 minutes apart?
sleep() doesn't consume CPU time, but the ongoing process will consume RAM because the PHP engine needs to keep running. It shouldn't be a problem if you have a lot of free RAM, but I would still suggest splitting it into separate cron jobs.
Personally, I've used long sleeps (10-20 minutes) in web crawlers I've written in PHP, run from my local 4 GB RAM machine with no problem.
It depends on the task you have, but generally speaking it is bad, because it consumes resources needlessly for a long time and has a high risk of being interrupted (by a system crash/reboot, or by external changes to the resources the script operates on).
I'd recommend using a job queue daemon with delaying features, like RabbitMQ, so that after each block you enqueue the next one with a 10-minute delay. That will save resources and increase stability.
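For example, beanstalkd (mentioned elsewhere on this page) supports delayed jobs natively. A rough sketch using the pheanstalk client; the exact method names and signatures differ between pheanstalk versions, and run_part() is a hypothetical stand-in for one of your five code blocks:

<?php
use Pheanstalk\Pheanstalk;

$queue = Pheanstalk::create('127.0.0.1');

// Worker: reserve a job, run one block, then enqueue the next block
// with a 600-second delay instead of sleep(600)-ing inside the script.
$job  = $queue->watch('report_parts')->reserve();
$part = (int) $job->getData();

run_part($part);                    // hypothetical: one of your 5 code blocks
$queue->delete($job);

if ($part < 5) {
    // put(data, priority, delay, time-to-run)
    $queue->useTube('report_parts')->put((string) ($part + 1), 1024, 600, 60);
}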
Here's what I'm trying to do: use PHP to push a message into a queue (either Beanstalkd, IronMQ, Amazon SQS).
Then I want a Python script to instantly pick up the message. I guess I'll just have 1 Python script running a while(true) loop to keep polling the message server?
Then process each one using a new thread per job. If there are 10 messages in the queue, I want Python to run 10 threads, one for each job.
My questions are:
Is this a solid way of doing things, or is there a better way to set this up?
How do I ensure my Python script is up and polling forever?
Is this a solid way of doing things, or is there a better way to set this up?
Sounds reasonable to me, although if you're expecting a large number of simultaneous jobs, you may want to limit the total number of threads by using a thread pool.
You won't gain much more CPU performance for CPU-intensive threads once the total number of threads exceeds the total number of CPU cores, and if the threads require significant disk I/O, you'll want to limit them to avoid thrashing the disks.
How do I ensure my Python script is up and polling forever?
It's common for daemon processes to have another process which monitors them, and restarts them if they crash or become unresponsive. A simple cronjob might suffice for this purpose.
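One very simple version of that watchdog, written in PHP here only to match the rest of the page: it assumes the worker writes its PID to a file when it starts, and the file path and worker command are made up. posix_kill() with signal 0 merely checks whether the process exists.

<?php
// watchdog.php -- run from cron every minute
$pidFile = '/var/run/queue_worker.pid';        // assumed path, adjust to taste

$pid   = @file_get_contents($pidFile);
$alive = $pid && posix_kill((int) $pid, 0);    // signal 0 = existence check only

if (!$alive) {
    // restart the worker in the background and record its new PID
    $newPid = exec('nohup python worker.py > /dev/null 2>&1 & echo $!');
    file_put_contents($pidFile, $newPid);
}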
I'm running a few PHP jobs which fetch hundreds of thousands of records from a web service and insert them into a database. These jobs drive up the CPU usage of the server.
My question is: how much CPU usage is considered high?
When I run the top command on the Linux server,
it shows around 77%. It goes up to more than 100% if I run more jobs simultaneously. It seems high to me. (Does more than 100% mean it is running on the 2nd CPU?)
28908 mysql 15 0 152m 43m 5556 S 77.6 4.3 2099:25 mysqld
7227 apache 15 0 104m 79m 5964 S 2.3 7.8 4:54.81 httpd
This server also has web pages/projects hosted on it. The hourly job seems to be affecting the server, as well as the other web projects' loading times.
If it is high, is there any way of making it more efficient on the CPU?
Can anyone enlighten me?
A better indicator is the load average; put simply, it is the number of tasks waiting because of insufficient resources.
You can get it from the uptime command, for example: 13:05:31 up 6 days, 22:54, 5 users, load average: 0.01, 0.04, 0.06. The three numbers at the end are the load averages for the last minute, the last 5 minutes and the last 15 minutes. If it reaches 1.00 (regardless of the number of cores), it means something is waiting.
I'd say 77% is definitely high.
There are probably many ways to make the job more efficient (e.g. a recursive import), but not much info is given.
A quick fix would be to invoke the script with the nice command
and add a few sleeps to stretch the load over time.
I guess you also saturate the network during the import, so if you can split up the job, it would prevent your site from stalling.
You can always nice your tasks:
http://unixhelp.ed.ac.uk/CGI/man-cgi?nice
With the nice command you can give processes more or less priority.
These jobs take up the CPU usage of the server.
My question is, how much is it considered high?
That is entirely subjective. On computing nodes, the CPU usage is pretty much 100% per core all the time. Is that high? No, not at all, it is proper use of hardware that has been bought for money.
Nice won't help much, since it's MySQL that's occupying your CPU;
putting nice on a PHP client, as in
nice -10 php /home/me/myjob.php
won't make any significant difference.
Better to split the job up into smaller parts, call your PHP script
from cron, and build it like this:
<?php
ini_set("max_execution_time", "600");

// 1. Get the file from the remote server, in chunks, to avoid network saturation
$fp  = fopen('http://example.org/list.txt', 'r');
$fp2 = fopen('local.txt', 'w');
while (!feof($fp)) {
    fwrite($fp2, fread($fp, 10000));
    sleep(5);
}
fclose($fp);
fclose($fp2);

// 2. Import the local copy in small batches, pausing between batches
$fp = fopen('local.txt', 'r');
while (!feof($fp)) {
    // read 1000 lines
    $lines = [];
    for ($i = 0; $i < 1000 && !feof($fp); $i++) {
        $lines[] = fgets($fp);
    }
    // do insert...
    sleep(10);
}
fclose($fp);

// finished, now rename to .bak, log success or whatever...