For some reason my EC2 server keeps hitting 100% CPU and high disk reads today. I've turned all of my cron jobs off, and yet it's still happening. My database is an RDS instance outside the server. Any initial items I should check? I'm using PHP scripts for my cron jobs (8 separate scripts that run anywhere from every 5 minutes to twice a day).
Both the CPU spikes and the high disk reads happen at the same time. I've also attached a screenshot of what top shows when I run it.
If it is happening right now, run top to see whether it is actually a PHP script; maybe it's some other process.
If you can't catch it in the act, I suggest you set up atop to provide the same data in retrospect.
I have a PHP script that powers a social network and similar features.
Normally there isn't any problem. My server is a VPS with:
2.4 GHz CPU
4 Cores
8 GB of RAM
150GB SSD
CentOS 7.1 with cPanel.
The problem is that normally the server can handle around 30 concurrent users at a CPU load of 30-40%. But sometimes, for a reason I don't know, the load goes really high, to 98-100%, and stays there. Even if users log out and only 3-4 people are left on the website, the server load remains at 98-100% until I restart the server.
Using the top command via SSH, I noticed that a PHP process gets created whose user is the owner of the webspace (created via cPanel) and whose command is PHP. The load for this process is between 20% and 27%.
More of these PHP processes get created as time passes.
For example, after 30 minutes there is another PHP process with the same characteristics as the first, and together they take 50-60% of the CPU. As more time passes, more processes get created, up to a maximum of 4 of them. (Is that because my CPU has 4 cores?)
If I kill these processes via kill [pid], within 1-2 minutes the server goes back to 3% load, even with 10-15 concurrent users.
What is the problem? Is it strictly related to a PHP file, or something else? I even tried triggering actions on the website to figure out WHAT these (apparently useless) PHP processes are doing when they start, because if I kill them the website keeps working just fine!
What could be the problem?
Here is a screenshot of the CPU usage:
Thank you all.
If a process is doing a lot of I/O operations, like database calls, it can considerably increase the CPU load. In your case you already know which process is behind the high load. Since the load increases over time, you should carefully look at the PHP script for memory leaks, lots of sessions, and lots of nested loops with I/O tucked in between, and try to isolate the reason. Good luck.
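One low-effort way to narrow that down is to have the script log its own memory use and timing per iteration and watch whether either number keeps growing. A minimal sketch, not your actual code: the item list, the usleep() stand-in for real work, and the log path below are placeholders.

<?php
// Minimal sketch: log memory and timing for each iteration of a long job.
$items = range(1, 100);                              // placeholder workload

foreach ($items as $i => $item) {
    $start = microtime(true);
    usleep(10000);                                   // stand-in for DB calls, parsing, etc.
    $line = sprintf(
        "iteration %d: %.3fs, %.2f MB\n",
        $i,
        microtime(true) - $start,
        memory_get_usage(true) / 1048576
    );
    file_put_contents('/tmp/worker-profile.log', $line, FILE_APPEND);
}

If the memory column climbs steadily, something in the loop is leaking; if the per-iteration time climbs instead, the slowdown is more likely on the I/O or database side.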
I have about 35 cron jobs right now. Most of them are PHP scripts that either scrape or do some calculations. The scripts also loop over 10-20 different servers to do those scrapes. (They are in different countries, so they have to be separate calls.)
So we have 30 scripts; each loops over 20 servers and therefore takes about 5-15 minutes to run. I have each script spaced out right now.
But is it better to have 80 individual scripts run instead of 35 scripts that loop and take a while? Each script would take maybe 1-2 minutes instead of 10-15 minutes.
That would of course spawn a ton more PHP processes. Is there any issue or limit with 10-15 or more PHP processes running at once?
I'm running a Performance cloud server on Rackspace.
Personally, if the jobs need to complete in a certain order, I would make it as linear as possible. It might take longer, but I always err on the side of data accuracy.
It depends.
If you are creating more processes that will be running at the same time, you are going to increase your overall memory footprint. Each process carries its own memory overhead just to run and to load any libraries it needs (aside from whatever it needs for the work it actually does). You will also have more than twice as many scripts to monitor to make sure they are running successfully all the time.
However, by creating more processes you will be able to speed things up, since you are essentially getting multi-processing: one process can continue while another is blocked waiting for I/O.
If the scripts don't depend on one another, breaking them into smaller scripts should be fine. If you can handle monitoring more scripts, and the server can handle it, then I would do it.
If the scripts do have dependencies, or if you would have to run so many at the same time that your server maxes out, keep them together.
That being said, I would also try to optimize the scripts themselves and make sure there isn't something you can do to make them faster without creating more processes.
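If you do break them up, one simple pattern (just a sketch with a made-up server list, not your actual scraper) is to keep a single script and pass the server or country as a command-line argument, so each cron entry handles exactly one server:

<?php
// scrape_one.php -- minimal sketch: one cron entry per server instead of
// one script looping over all of them. The server list is a placeholder.
$servers = array(
    'us' => 'https://us.example.com/export',
    'de' => 'https://de.example.com/export',
    'jp' => 'https://jp.example.com/export',
);

$key = isset($argv[1]) ? $argv[1] : null;
if ($key === null || !isset($servers[$key])) {
    fwrite(STDERR, "usage: php scrape_one.php <server-key>\n");
    exit(1);
}

$data = file_get_contents($servers[$key]);           // fetch this one server only
// ... parse $data and run your calculations here ...
echo "fetched " . strlen((string) $data) . " bytes from {$key}\n";

Cron would then call php scrape_one.php us, php scrape_one.php de, and so on, which also makes it easy to stagger or reschedule a single country without touching the others.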
Depending on how you have the servers set up, I would run them all at once. I would also run them at night, during off hours when the web servers aren't in use and outside business operations, unless your web app depends on them. If you're on a cloud server on Rackspace I wouldn't worry about bandwidth, although needing to increase your RAM could be an issue further down the road.
Spawning a ton more PHP processes shouldn't be a worry if you have a sufficient amount of RAM; there is no limitation on the Linux side.
a) Figure out which crons need to run in which order
b) Schedule the crons to run at night, around midnight
c) Fire off the 80 scripts at once
It would also be a good idea to send yourself an email with the cron results, or a report that everything went through successfully, based on the whole batch rather than each individual cron (a rough sketch of this follows below).
(Our server is Linux based)
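Here is a minimal sketch of (a)-(c) plus the single batch report, run from one midnight cron entry. The script names and the email address are made up; scripts with no dependencies could be backgrounded instead of run in sequence.

<?php
// run_batch.php -- minimal sketch: run the scripts in a fixed order and
// send one summary email for the whole batch. Names are placeholders.
$scripts = array('scrape_prices.php', 'recalculate_totals.php', 'cleanup.php');

$report = array();
foreach ($scripts as $script) {
    $output   = array();
    $exitCode = 0;
    exec('php ' . escapeshellarg($script) . ' 2>&1', $output, $exitCode);
    $report[] = sprintf('%s exited with code %d', $script, $exitCode);
}

mail('you@example.com', 'Nightly cron batch report', implode("\n", $report));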
I'm an experienced PHP developer, but this is the first time I'll develop a bot that is always running and fetching data.
I'll explain my application with a simple sample scenario. I have about 2000 website URLs, and my application will visit these URLs and record the contents of the web pages. This application will run 24 hours a day, 7 days a week. It will start over again as soon as it finishes the 2000 websites.
But I need some suggestions for my server. As you can see, my application will run indefinitely until I shut down the server. I can do this infinite loop with this:
while (true) {
    // application code here: crawl the 2000 sites, then start over
}
But I think this will be evil for the server :) Is it possible to do something like this on the server side?
I also thought about using cron jobs, but that doesn't work for my scenario, because my script should start working again as soon as it finishes. I need "start again when you finish your work", not "start every 30 minutes", because I don't know whether fetching all 2000 websites will take more or less than 30 minutes.
I hope I explained it well.
I'm also worried about memory usage. As you know, the garbage collector cleans up memory when a PHP script stops, but as I said, my app won't stop for days (maybe weeks), so that cleanup won't be triggered. I'm manually unsetting (with the unset() function) all used variables at the end of the script. Is that enough?
I need some suggestions from server administrators :)
PS: I'm developing it as a console application, not a web application. I can execute it from the command line.
Batch processing: store all the sites in a CSV or something, mark them after completion, then work on all the unmarked ones, then all the marked ones, and so on. Only do, say, 1 or 5 at a time, and initiate the batch script every minute from cron.
Don't even try to work on all of them at once; if anything errors you won't know what happened.
You could even store the jobs in a database along with processing stats, which allows for fine-tuning and better reporting (there's a rough sketch of this below).
You will probably hit time limits trying to run infinite PHP scripts, even from the command line, and your server admin will hate you. You will probably also run into memory limits if you don't release resources properly, which is far too easily done with PHP.
Read: http://www.ibm.com/developerworks/opensource/library/os-php-batch/
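To make the "mark them after completion" idea concrete, here is a minimal sketch assuming a sites table with id, url and processed_at columns. The table, the credentials and the batch size of 5 are all assumptions.

<?php
// Minimal sketch of the batch approach: claim a few unprocessed sites,
// fetch them, and mark them done. Cron runs this script every minute.
$pdo = new PDO('mysql:host=localhost;dbname=crawler', 'user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$rows = $pdo->query(
    'SELECT id, url FROM sites WHERE processed_at IS NULL LIMIT 5'
)->fetchAll(PDO::FETCH_ASSOC);

$mark = $pdo->prepare('UPDATE sites SET processed_at = NOW() WHERE id = ?');

foreach ($rows as $row) {
    $html = @file_get_contents($row['url']);         // fetch one site
    if ($html !== false) {
        // ... store the page content wherever it needs to go ...
        $mark->execute(array($row['id']));           // mark it done
    }
}
// When every row is marked, clearing processed_at starts the next pass.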
Your script could just run through the list once and quit. That way, whatever resources PHP is holding can be freed.
Then have a shell script that calls the PHP script in an infinite loop.
As PHP is not designed for long-running tasks, I am not sure the garbage collection is up to it. Quitting after every run forces it to release everything.
I built an app in PHP where one feature analyzes about 10000 text files, extracts data from them, and puts it into a MySQL database. The code itself is just a for loop where every file is loaded through file_get_contents() and, at the end of that iteration, unset() from memory. The file analysis is a cron job, and a single PHP file does all this processing.
The app was initially built entirely on a shared server, where everything worked seamlessly. Neither I nor the users noticed any delays or major lag. However, so it could handle more load, I moved everything to an EC2 server (a micro instance).
The problem I am having now is that every time the cron job runs (it processes the files on an hourly basis), it slows the entire server down so much that a normal page takes about 5-8 seconds to load, which sort of defeats the purpose of moving to EC2.
The cron itself is a very long process. Here are some test results from the script (it runs every hour):
SQL Insertion Time: 23.138303995132 seconds
Memory Used: 10.05 MB
Execution: 411.00507092476 seconds
But at the top of every hour the server slows down badly for about 7 minutes, despite having more dedicated hardware than a shared server (I think, at least). The graphs in the EC2 dashboard show that CPU usage is close to 100%, but I don't understand how it gets to that level.
Can anyone help me determine why this could be happening? I notice not even the slightest lag when the cron runs on the shared server, but the case is completely different on EC2.
Please feel free to ask me anything I missed mentioning.
Micro instances are pretty slow. If you use a larger instance, it'll run a lot faster.
We use EC2 for all of our production boxes. I can't say enough good things about that platform. I'll never go back to another host.
Also, if you want to write your code in C++, it'll run A LOT faster. I wrote a simple MySQL insert with this code here. It's multi-threaded, so you can asynchronously run MySQL updates or inserts.
Please let me know if you need any help with it, but I'm sure you'll be able to just use a micro instance still and get great speeds.
Hope that helps...
PS. I'd be willing to help you write a C++ version for your uses... just because it's fun! :-)
Well EC2 is designed to be scalable.
Since your code runs in one loop, opening each file one after another, it is not a scalable design.
Try changing your code to break the work up so that the files are handled concurrently by different instances of the PHP script. That way, each copy of the script runs as its own process. If you have multiple servers (or EC2 instances), you can run them on different machines to speed it up even more.
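A rough sketch of that split, assuming the text files sit in one directory: the path, the chunk count and the command-line arguments below are placeholders.

<?php
// worker.php -- minimal sketch: process only one slice of the file list so
// several copies of the script (or servers) can run side by side.
// Usage: php worker.php <chunk-index> <chunk-count>
$chunkIndex = isset($argv[1]) ? (int) $argv[1] : 0;
$chunkCount = isset($argv[2]) ? max(1, (int) $argv[2]) : 1;

$files  = glob('/var/data/texts/*.txt');             // placeholder path
$size   = max(1, (int) ceil(count($files) / $chunkCount));
$chunks = array_chunk($files, $size);
$mine   = isset($chunks[$chunkIndex]) ? $chunks[$chunkIndex] : array();

foreach ($mine as $path) {
    $text = file_get_contents($path);
    // ... extract what you need and insert it into MySQL here ...
    unset($text);                                    // free the buffer, as in the original loop
}

Cron (or a small dispatcher) can then launch php worker.php 0 4 through php worker.php 3 4 in parallel, on one machine or across instances.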
I have a PHP script running on my server via a cron job. The job runs every minute. In the PHP script I have a loop that executes, then waits one second, and loops again, essentially creating a script that runs once every second.
Now I'm wondering: if I make the cron job run only once per hour and have the script loop for an entire hour, or possibly an entire day, would this have any impact on the server's CPU and/or memory, and if so, would it be positive or negative?
I spot a design flaw.
You can always have a PHP script permanently running in a loop performing whatever functionality you require, without dependency upon a webserver or clients.
You are obviously checking something with this script; any insight into what? There may be better solutions for you. For example, if it is a database, consider SQL triggers.
In my opinion it would have a negative impact, since the script keeps using resources.
Cron itself is a time-based scheduler that is already running on the server.
But a cron job can only run once a minute at most.
Another thing: if the script times out, fails, or crashes for whatever reason, you end up not running it for up to an hour. That would have a positive impact on server load, but it's not what you're looking for, I guess? :)
Maybe run it every 2 or even 5 minutes to spare server load?
Or maybe change the script so it doesn't wait but just executes once, and call it from the cron job. That should have a positive impact on server load.
I think you should change the script logic if possible.
If the tasks your script executes are not periodic but are triggered by events, then you can use a message queue (like Gearman).
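For the event-driven case, a worker might look roughly like this. It needs the PECL gearman extension and a running gearmand; the job name and payload handling are made up.

<?php
// Minimal sketch of a Gearman worker. The client side would submit
// 'process_event' jobs to gearmand whenever something happens.
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);

$worker->addFunction('process_event', function (GearmanJob $job) {
    $payload = $job->workload();                     // data sent by the client
    // ... handle the event here ...
    return 'ok';
});

while ($worker->work()) {
    // Blocks waiting for the next job; no cron or sleep loop needed.
}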
Otherwise (if the tasks really are periodic), your solution is OK. Memory leaks can occur, but in newer PHP versions (5.3.x) the garbage collector is pretty good. Some extensions can lead to memory leaks, or your application design can lead to heavy memory usage (like Doctrine ORM's loaded-object cache).
But you can control the script's memory usage with tools like monit and restart the script when memory usage reaches some limit, or start it again when it unexpectedly shuts down.
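A minimal sketch of that restart pattern: let the script watch its own memory and exit cleanly once it crosses a threshold, so monit (or a plain shell loop, or cron) starts a fresh process. The threshold and the work function are assumptions.

<?php
// Minimal sketch: a long-running worker that exits once memory use
// crosses a limit, leaving the restart to a supervisor such as monit.
define('MEMORY_CEILING_BYTES', 256 * 1024 * 1024);   // assumed 256 MB ceiling

function do_one_unit_of_work()
{
    // ... hypothetical task: fetch a site, process a queue item, etc. ...
}

while (true) {
    do_one_unit_of_work();
    gc_collect_cycles();                             // collect cyclic garbage between units
    if (memory_get_usage(true) > MEMORY_CEILING_BYTES) {
        exit(0);                                     // let the supervisor start a clean process
    }
    sleep(1);                                        // pace the loop
}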