Why are PHP scripts taking so long to execute on the new server? - php

I have a server with two quad-core processors (2.4 GHz, 16GB RAM). I have some PHP scripts that run under very heavy load. Most of these scripts do a few things:
Fetch data from the database (just a single row, from a small table)
Fetch data from another server (mainly Facebook)
Upload a small photo
Update a database table (this table is very heavily used, and the number of rows grows very quickly, almost 2 rows per second)
The problem is that the scripts are taking too much time to execute. My previous server had a lower configuration (one quad-core processor, 6GB RAM), yet scripts took 4-5 seconds to complete. Now the execution time is 30-40 seconds, or even more.
HOW DO I MEASURE EXECUTION TIME? I record microtime() at the start and end of the script and subtract the two values. I just needed a rough estimate.
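A minimal sketch of that measurement (microtime(true) returns the time as a float, so no string handling is needed):

    <?php
    // Wall-clock start time as a float (seconds since the Unix epoch).
    $start = microtime(true);

    // ... the script's actual work: DB fetch, Facebook call, upload, update ...

    // Elapsed wall-clock time in seconds.
    error_log(sprintf('Script took %.3f seconds', microtime(true) - $start));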
SERVER CONFIGURATION: Here are some parameters set in apache config:
server_limit = 350
max_child = 350
keep_alive = off
Other Characteristics:
1. When the server is not under heavy load, execution time is very small
2. The previous server took much less time to execute, even under heavy load
I don't know what other details I should include. Please ask, and I will post them here.
What should I do to improve this?
Update:
I have figured out that the problem is with the ImageMagick library. I googled and tried a few solutions, like disabling OpenMP, but it hasn't helped much.
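If the scripts use the Imagick extension, one hedged way to test the OpenMP theory is to cap ImageMagick's thread count via the MAGICK_THREAD_LIMIT environment variable, which ImageMagick reads at initialization (whether this helps depends on the build; the thumbnail code is just an illustration):

    <?php
    // Setting MAGICK_THREAD_LIMIT=1 effectively disables OpenMP parallelism
    // for this process. It must run before the first Imagick object is created.
    putenv('MAGICK_THREAD_LIMIT=1');

    $img = new Imagick('photo.jpg');
    $img->thumbnailImage(200, 200);
    $img->writeImage('thumb.jpg');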

I suggest profiling with Xdebug and then analyzing the result with a tool like KCachegrind. Then you will know what's taking the time.
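For reference, a hedged php.ini sketch that turns the Xdebug profiler on (these are the Xdebug 2 setting names; Xdebug 3 replaces them with xdebug.mode=profile and xdebug.output_dir):

    ; php.ini - enable the Xdebug profiler (Xdebug 2 syntax)
    zend_extension=xdebug.so
    xdebug.profiler_enable=1
    ; One cachegrind-format file per request lands here; open it in KCachegrind.
    xdebug.profiler_output_dir=/tmp/xdebug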

This could have many reasons:
Are your queries "slow"?
Is the server configuration right?
Is the bandwidth slow?
Is the MySQL server configuration right?
What is the format of the table you insert into?
Is something else (a cronjob, e.g.) killing the database?
I would post this as a comment, but unfortunately I can't. Please clear up those questions and tell us what you find out ;)

I would start by decoupling the problem: test each action (fetch from the DB, fetch from Facebook, upload, etc.) separately, as in the sketch below.
At the same time, check whether all the components of your new server environment (packages, versions, config, etc.) are the same as before.
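A minimal sketch of that decoupling, timing each step in isolation so the slow one stands out (the step names and functions are placeholders for the asker's real code):

    <?php
    // Hypothetical wrappers around the script's actual work.
    $steps = [
        'db_fetch'  => fn() => fetch_row_from_db(),    // placeholder
        'fb_fetch'  => fn() => fetch_from_facebook(),  // placeholder
        'upload'    => fn() => upload_photo(),         // placeholder
        'db_update' => fn() => update_busy_table(),    // placeholder
    ];

    foreach ($steps as $name => $step) {
        $t = microtime(true);
        $step();
        printf("%-10s %.3f s\n", $name, microtime(true) - $t);
    }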

Related

Caching data to spare mysql queries

I have a PHP application that is executed up to one hundred times simultaneously, and very often. (It's a Telegram anti-spam bot with 250k+ users.)
The script itself makes various DB calls (ticker updates, counters, etc.), but it also loads some more or less 'static' data from the database each time, like regexes or JSON config files.
My script also does image manipulation, so the server's CPU and RAM are sometimes under pressure.
A few days ago I ran into a problem: the OOM killer was killing the MySQL server process due to a lack of available memory. The MySQL server was not restarting automatically, leaving my script broken for hours.
I have already made some code optimisations that let my server breathe, but what I'm looking for now is a caching method to store data between script executions, with the possibility of refreshing it on a time interval.
First I thought about a flat file where I could serialize data, but I would like to know whether that is a good idea performance-wise.
In my case, is there a benefit to caching data over repeating the MySQL queries?
What are the pros/cons regarding speed of access and speed of execution?
Finally, what caching method should I implement?
I know that the simplest solution is to upgrade my server capacity, and I plan to do so soon.
Server is running Debian 11, PHP 8.0
Thank you.
If you could use a NoSQL store to serve those queries, it would speed things up dramatically.
If that is a no-go, you can go old school and keep that "static" data in the filesystem.
You can then create a timer of your own that runs, for example, every 20 minutes to update the files.
When you ask about speed of access and speed of execution, the answer will always be "it depends", but from what you said it would be better to read from the filesystem than to keep querying the database for the same information.
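A minimal sketch of that flat-file approach, assuming serialized PHP data and a freshness window (the 20-minute TTL and the loader callback are illustrative, not prescriptive):

    <?php
    // Return cached data from $file if younger than $ttl seconds; otherwise
    // rebuild it with $loader (e.g. the real DB query) and rewrite the file.
    function cached(string $file, int $ttl, callable $loader)
    {
        if (is_file($file) && time() - filemtime($file) < $ttl) {
            return unserialize(file_get_contents($file));
        }
        $data = $loader();
        // LOCK_EX keeps two concurrent processes from interleaving writes.
        file_put_contents($file, serialize($data), LOCK_EX);
        return $data;
    }

    // Usage: regexes refreshed at most every 20 minutes.
    $regexes = cached('/tmp/regexes.cache', 1200, fn() => load_regexes_from_db()); // placeholder loader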
The complexity, consistency issues, etc., lead me to recommend against a caching layer. Instead, let's work a bit more on other optimizations.
OOM implies that something is tuned improperly. Show us what you have in my.cnf. How much RAM do you have? How much RAM does the image processing take? (PHP's image* library is something of a memory hog.) We need to start by knowing how much RAM MySQL can have.
For tuning, please provide the output of SHOW GLOBAL STATUS and SHOW GLOBAL VARIABLES. See http://mysql.rjweb.org/doc.php/mysql_analysis
That link also shows how to gather the slowlog. In it we should be able to find the "worst" queries and work on optimizing them. For "one hundred times simultaneously", even fast queries need to be further optimized. When providing the 'worst' queries, please provide SHOW CREATE TABLE.
Another technique is to decrease the number of children that Apache is allowed to run. Apache will queue up others. "Hundreds" is too many for Apache or MySQL; it is better to wait to start some of them rather than having "hundreds" stumbling over each other.
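With Apache 2.4's prefork MPM, that cap is a small config change; a hedged sketch (the numbers are illustrative and depend on how much RAM each child uses):

    # /etc/apache2/mods-available/mpm_prefork.conf (Apache 2.4)
    <IfModule mpm_prefork_module>
        # Cap concurrent children well below "hundreds"; excess requests queue.
        # On Apache 2.2 the directive is called MaxClients instead.
        ServerLimit          50
        MaxRequestWorkers    50
    </IfModule>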

Keeping two distant programs in sync using MySQL

I am trying to write a client-server app.
Basically, there is a Master program that needs to maintain a MySQL database keeping track of the processing done on the server side,
and a Slave program that queries the database to see what to do to stay in sync with the Master. There can be many slaves at the same time.
All the programs must be able to run from anywhere in the world.
For now, I have tried setting up the MySQL database on a shared hosting server,
and I made C++ programs for the master and slave that use the cURL library to make requests to a PHP file (e.g. www.myserver.com/check.php) located on my hosting server.
The master program calls the URL every second, and some PHP code is executed to keep the database up to date. I did a test with a single slave program that also calls the URL every second and executes PHP code that queries the database.
With that setup, however, my web host suspended my account and told me I was 'using too much CPU resources' and would need a dedicated server ($200 per month rather than $10), based on their analysis of the CPU resources needed. And that was with one Master and only one Slave, so no more than 5-6 MySQL queries per second. What would it be with 10 slaves, then?
Am I missing something?
Would there be a better setup than what I was planning to use in order to achieve the syncing mechanism I need between two or more far-apart programs?
I would use Google App Engine for storing the data. You can read about free quotas and pricing here.
I think the syncing approach you are taking is probably fine.
The more significant question you need to ask yourself is: what is the maximum acceptable time between syncs? If you truly need virtually realtime syncing between two databases on opposite sides of the world, then you will be using significant bandwidth, and you will unfortunately have to pay for it, as your host pointed out.
Figure out what is acceptable to you in terms of time. Is it okay for the databases to only sync once a minute? Once every 5 minutes?
Also, when running syncs like this in rapid succession, it is important to make sure they do not overlap: before a sync starts, test whether one is already in progress and has not finished yet. If a sync is still happening, don't start another; if not, go ahead. This will prevent a lot of unnecessary overhead and syncs piling up on top of each other.
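One common way to implement that guard is a non-blocking lock file; a minimal sketch (the lock path and sync function are arbitrary placeholders):

    <?php
    // Try to take an exclusive, non-blocking lock. If another sync already
    // holds it, flock() fails immediately instead of waiting, and we skip this run.
    $fp = fopen('/tmp/sync.lock', 'c');
    if (!flock($fp, LOCK_EX | LOCK_NB)) {
        exit(0); // a sync is already in progress
    }

    run_sync(); // placeholder for the actual sync work

    flock($fp, LOCK_UN); // also released automatically when the process exits
    fclose($fp);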
Are you using a shared web host? What you are doing sounds like excessive use for a shared (cPanel-type) host; use a VPS instead. You can get an unmanaged VPS with 512MB for 10-20 USD per month, depending on spec.
Edit: if your bottleneck is CPU rather than bandwidth, have you tried bundling updates inside a transaction? Say you are getting 10 updates per second and you decide you are happy with a propagation delay of 2 seconds. Rather than opening a connection and a transaction for each of the 20 statements, bundle them together in a single transaction that executes every two seconds. That would substantially reduce your CPU usage.
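A hedged PDO sketch of that batching (the DSN, table, and column names are invented for illustration):

    <?php
    // Apply ~2 seconds' worth of queued updates in one transaction:
    // one connection, one commit, instead of 20 separate round trips.
    $pdo  = new PDO('mysql:host=db.example.com;dbname=sync', 'user', 'pass');
    $stmt = $pdo->prepare('UPDATE jobs SET state = ? WHERE id = ?'); // hypothetical table

    $pdo->beginTransaction();
    foreach ($pendingUpdates as [$state, $id]) { // the queue built up over 2 s
        $stmt->execute([$state, $id]);
    }
    $pdo->commit();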

PHP execution time: factors to consider in determining execution speed

As all my requests go through an index script, I tried to time the response time of every request.
It's simply the difference between the start time (start of the script) and the end time (end of the script).
I cache my data in Memcached, and users are all served from it.
I mostly get response times of less than a second, but at times there are weird spikes of more than a second; the worst case can go up to 200+ seconds.
I was wondering: if mobile users have a slow connection, does that show up in my response time?
I primarily serve mobile users.
Thanks!
No, it's the runtime of your script. It does not include the latency to the user; that is something the underlying web server worries about. Something in your script just takes very long. I recommend you profile your script to find out what that is; Xdebug is a good way to do so.
If you're measuring in PHP (which it sounds like you are), that's the time it takes for the page to be generated on the server side, not the time it takes to be downloaded.
Drop timers in throughout the page and try to narrow it down to the section that is causing the huge delay of 200+ seconds.
You could even add a small script that emails you details of how long each section took, if the problem doesn't happen often enough to catch it yourself.
It could be that the script cannot finish because a client downloads the results very slowly. If you don't use a front-end server like nginx, the first thing to do is to try one.
Someone already mentioned Xdebug, but normally you would not want to run Xdebug in production. I would suggest using XHProf to profile pages on development/staging/production. You can turn XHProf on conditionally, which makes it really easy to run in production.
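A minimal sketch of that conditional switch, assuming the xhprof extension is loaded (the GET-parameter trigger and output path are hypothetical; the saved array can be rendered later with the xhprof_lib report scripts):

    <?php
    // Profile only when explicitly requested, so production overhead stays near zero.
    $profiling = extension_loaded('xhprof') && isset($_GET['profile']);

    if ($profiling) {
        xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);
    }

    handle_request(); // placeholder for the real page logic

    if ($profiling) {
        $data = xhprof_disable();
        // Persist the raw run for later viewing.
        file_put_contents('/tmp/xhprof.' . uniqid(), serialize($data));
    }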

Which is better: fetching from another PHP file or from a MySQL DB in PHP?

I have a module called 'online events' on my website. This module runs every 8 seconds and fetches recent activity on the site. So if more than 100 users are using my website at a time, it consumes a lot of memory and CPU. What should I do to reduce CPU and memory usage, provided it doesn't affect the online events module? Would it help to do something with files here? Please help me.
If I understand it correctly, the module needs to run every 8 seconds?
Why not store the time the module last ran, and then, on every access from those thousands of users, just check whether more than 8 seconds have passed since then? If they have, you run the module and store the new time; if they haven't, you do nothing and wait for another user to access the site and generate new online events for them.
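A minimal sketch of that gate, using a timestamp file (APC or Memcached would work just as well for storing the timestamp):

    <?php
    // Run $module at most once per $interval seconds, with no cron needed:
    // whichever request arrives first after the interval elapses does the work.
    function run_throttled(string $stampFile, int $interval, callable $module): void
    {
        $last = is_file($stampFile) ? (int) file_get_contents($stampFile) : 0;
        if (time() - $last < $interval) {
            return; // ran recently; this request does nothing extra
        }
        file_put_contents($stampFile, (string) time(), LOCK_EX);
        $module();
    }

    run_throttled('/tmp/online_events.stamp', 8, fn() => build_online_events()); // placeholder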
Caching
http://www.mnot.net/cache_docs/
http://memcached.org/
http://php.net/manual/en/book.apc.php
Database optimization
http://dev.mysql.com/doc/refman/5.0/en/optimization.html
http://20bits.com/articles/10-tips-for-optimizing-mysql-queries-that-dont-suck/
http://forge.mysql.com/wiki/Top10SQLPerformanceTips
Apache optimization
http://www.prelovac.com/vladimir/wordpress-optimization-guide (About Wordpress, but has good tips for all applications)
Load balancing
http://httpd.apache.org/docs/2.1/mod/mod_proxy_balancer.html
http://www.howtoforge.com/high_availability_loadbalanced_apache_cluster
http://wiki.nginx.org/NginxHttpUpstreamModule
First, increase the interval. Then cache the module's output on the server side somehow, e.g. a static HTML file that gets served through a PHP file and is only regenerated when something has changed in the DB, so you only query the DB and render the data when something actually happened. A simple way to keep the cache fresh is to delete the static file every time something changes in the DB. This only makes sense if there are not too many DB changes.
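A minimal sketch of that delete-on-write invalidation (paths and the render function are illustrative):

    <?php
    // Read side: serve the static file if present, otherwise rebuild it once.
    $cacheFile = '/var/cache/app/online_events.html';
    if (!is_file($cacheFile)) {
        file_put_contents($cacheFile, render_events_from_db(), LOCK_EX); // placeholder renderer
    }
    readfile($cacheFile);

    // Write side: whenever the events table changes, just drop the cache;
    // the next request regenerates it.
    function invalidate_events_cache(string $cacheFile): void
    {
        @unlink($cacheFile); // '@' suppresses the warning if it is already gone
    }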

Scaling a PHP website

Is there a standard solution to scale up a website which runs on PHP + Apache web server ?
As in I get a traffic of about 100,000 requests/day as of now. 6 months down the line I expect it to grow to 200,000 requests/day. The first cut solution which comes to my mind is deploying more Apache web servers with mod_php, but something seems so wrong about it.
Any ideas ?
Try these two options first before adding new servers. They may allow you to stick with one server, but your results may vary.
For speeding the site up when you are hit with many concurrent users, look into installing the APC PECL extension (http://us2.php.net/manual/en/book.apc.php). APC caches the compiled version of your scripts, saving the parse-and-compile step each time a script is executed.
Also, if you are experiencing heavy load on the database server, look into installing memcached and caching database results for a certain time period, if possible (http://us2.php.net/manual/en/book.memcache.php).
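A hedged sketch of that pattern using the Memcache extension from the linked manual page (the key name, five-minute TTL, and query are illustrative):

    <?php
    $mc = new Memcache();
    $mc->connect('127.0.0.1', 11211);

    // Serve from cache when possible; fall back to MySQL and cache for 5 minutes.
    $rows = $mc->get('homepage_articles'); // hypothetical cache key
    if ($rows === false) {
        $pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
        $rows = $pdo->query('SELECT id, title FROM articles LIMIT 20')->fetchAll();
        $mc->set('homepage_articles', $rows, 0, 300); // flags = 0, expire = 300 s
    }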
Finally, if you do decide to get a separate server, look into possibly getting a dedicated SQL box. This, of course, assumes that your application is a database heavy application, as web apps are these days. Segregating SQL into a separate box allows it to take advantage of all of the resources on that box, with more cache and processing power. It could be the way to go.
I don't have any experience with scaling really large websites, but I don't think you'll need to scale to multiple servers in this case. I have a browser game with 40,000-60,000 requests per day, some cronjobs doing a lot of work every 5 minutes, and a TeamSpeak server, all on a small server ($40/month), and I haven't had any performance problems so far.
20,000 requests/day is only about one every four seconds; it sounds like one box should be able to deal with that just fine. If not, I'd first look for bottlenecks in your code. Redundant database calls? Double-looping database calls rather than simple joins? Are you caching anything?
How to scale beyond this depends entirely on your application (how and where you keep session state, and so forth); general advice has limited applicability.
if you like it then you should have put a cache on it
