CakePHP 1.3: High CPU Usage

I have a CakePHP application running on a shared hosting account, and the provider (A2 Hosting) has been complaining that my account is using excessive CPU resources, sometimes 100%.
In the last few hours alone, cPanel has been reporting high CPU usage. I have spoken with them and they said that everything points to the 'webroot' directory. There I only have index.php and css.php.
Any ideas what could be causing this issue, and what I can do to fix it? They are threatening to suspend my account.
Thanks,

There is not a whole lot to say without more information. For now, let's start with: what is using 100% CPU, MySQL or PHP?
Also, what conditions are you using in your queries? Any associations, and so on...
If it all works fine on your local setup, then I would start by looking at any differences between that and your server.
Versions of CakePHP, PHP, MySQL... are they different? Is the server running some infamous version of one of them?
Let's blame the database:
Is the database structure and data really identical, really? Look carefully at every detail.
Do you have the same content in them? Exactly? Clone your dev database, including all table definitions and data.
Sometimes I notice a lapse of logic on my part where a "clean" database causes problems because I had data in it during the whole development, and for some reason I missed that something (seemingly unrelated) fails if a table is empty.
Let's blame PHP:
When PHP ends up at 100% CPU, the problem is usually that it is stuck in a loop somewhere. Do you have a loop near the code in question?
If you let the request run, do you just get a timeout, or an out-of-memory error?
A find('first') should never result in out of memory unless your server is loading 200,000 related records along with it. Try specifying recursive -1; that is, load absolutely nothing from any other table.
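For reference, a minimal sketch of such a fully unbound find in CakePHP 1.3 (the Post model and the condition are placeholders for your own):

// Fetch a single row and pull in nothing from associated tables.
$post = $this->Post->find('first', array(
    'conditions' => array('Post.id' => $id),
    'recursive'  => -1, // no associated models are queried at all
));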
Reference: https://groups.google.com/forum/?fromgroups#!topic/cake-php/lS91s355_Pw
This post might help you reduce the CPU load.

Related

Too many connections issue on WordPress, using database cache, non-shared host

I'm at my wits' end here.
A client has a WordPress website that is expecting heavy traffic next Saturday. I was asked to find out whether the site can handle the load, and using JMeter my answer is "no", as it keeps running into database connection issues.
Now, for the configuration and the issue I'm seeing.
This is cloud hosting that, as far as I could find out, is not shared, with a standard Apache/Linux configuration. MySQL's max_connections variable is 1000, and my tests fail even at something like 50 connections/sec. SHOW PROCESSLIST shows nothing out of the usual; at worst a couple of threads sleeping for 3 seconds, but no hung queries or anything of the sort.
WordPress itself is a fairly standard configuration. It does use a couple of plugins, the one most obviously affecting database performance being WooCommerce. Everything else is a gallery plugin plus some minor stuff like contact forms, 10 plugins total.
For caching I'm using W3 Total Cache's page, object and database caches. I'm even forcing caches for queries that aren't in W3's default configuration, like COUNT() queries, and it seems to be working, as all queries on the homepage show as cached.
However, JMeter shows as high as 50% failure at 50 connections/sec. It's not exactly consistent, sometimes going up and down, but it is still way above what would be considered acceptable and, as I understand it, way below the server's 1000-connection limit. I'm still getting the "too many connections" error. If I turn off caching, the failure rate goes up to around 90%, so caching is clearly helping.
At this point, I'm not sure how to mitigate this problem further. Even if I disable every plugin, the failure rate stays above 1%, even though the homepage then displays barely anything; and obviously I cannot just disable all plugins, as that pretty much breaks the website. I can perhaps disable a couple or temporarily force a static response out of them, but there has to be some underlying issue causing this, and I'm not sure that would even be acceptable to the client.
How could I debug this problem further? Is it possible that each plugin creates its own new connection, for example? Is there a way I can check, for example, how many connections were opened by the end of a script execution?
Look into using Query Monitor to see exactly what code is running what kind of queries, how long they take, etc.
https://wordpress.org/plugins/query-monitor/
This is a great tool for benchmarking performance and optimizing templates and various calls.
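If you also want a rough number per request, you could log the query count and page generation time on shutdown; a minimal sketch as a mu-plugin (the log format is arbitrary):

<?php
// mu-plugins/log-queries.php
// Log how many SQL queries each request ran and how long the page took.
add_action('shutdown', function () {
    error_log(sprintf(
        '%s: %d queries in %ss',
        $_SERVER['REQUEST_URI'],
        get_num_queries(), // WordPress's per-request query counter
        timer_stop(0)      // seconds since WordPress started this request
    ));
});

Note that WordPress normally opens a single DB connection per request via $wpdb, so if connection counts are exploding, look for plugins that create their own mysqli/PDO links.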

How much overhead does the NewRelic PHP agent add?

There is no doubt that NewRelic is taking the world by storm, with many successful deployments.
But what are the cons of using it in production?
The PHP monitoring agent works as a .so extension. If I understand correctly, it connects to a separate aggregation service, which filters the data and pushes it into the NewRelic cloud.
That would mean it works transparently under the hood. However, is this actually true?
Any monitoring, profiling or API service adds some overhead to the entire stack.
The extension itself is 0.6 MB, which is added to each PHP process; that isn't much, so my concern is rather CPU and IO.
(Image: CPU utilization on production EC2 t1.micro instances, with the NewRelic agent shown as the top blue line and without the agent as the other lines.)
What does NewRelic actually do that causes the additional overhead?
What are the other downsides of using it?
Your mileage may vary based on your settings, your particular site's code base, etc.
The additional overhead you're seeing is less about the memory used than about the tracing and profiling of your PHP code, the gathering of analytics data on it, and the DB request profiling. Basically, there is some additional overhead hooked into every PHP function call. You'd see similar overhead if you left Xdebug or ZendDebugger running or profiling on a machine. Any module will use some resources, and ones that hook in deep for profiling can be the costliest, but I've seen that NewRelic has config settings to dial back how intensively it profiles, so you might be able to lighten its hit more than you could with, say, Xdebug.
All that being said, with the NewRelic shared PHP module loaded with the default setup and config from their site, my company's overall server response latency went up about 15-20% across the board when we turned it on for all our production machines. I'm only talking about the time it takes php-fpm to generate the initial response. Our site is http://www.nara.me. The newrelic-daemon and newrelic-sysmon services were running as well, but I doubt they have any impact on response time.
Don't get me wrong, I love NewRelic, but the performance hit in my specific situation doesn't make me want to keep the PHP module running on all our live load-balanced machines. We'll probably keep it running on one machine all the time. We do plan to keep the sysmon stuff going 100% of the time and keep the module disabled unless we need it for troubleshooting.
My advice is this:
Wrap any calls to NewRelic functions in if (function_exists($function_name)) blocks so your code can run without errors when the NewRelic module isn't loaded (see the sketch after this list).
If you have multiple identical servers behind a load balancer sharing the same code, only enable the PHP module on one image to save performance. You can keep the sysmon stuff running everywhere if you use NewRelic for that.
If you have just one server, only enable the shared PHP module when you need it, that is, when you're actually profiling your code or MySQL, unless a 10-20% performance hit isn't a problem.
One other thing to remember if your main source of info is the NewRelic website: they get paid by the number of machines you're monitoring, so don't expect them to convince you not to use it on anything less than 100% of your machines, even if that's not needed. I think one of their FAQs or blog posts basically states that you should expect some performance impact, but that if you use the tool as intended and fix the issues it shows you, you should recoup the latency lost. I agree, but I think once you have fixed the issues, you should limit the exposure to the smallest number of servers needed.
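For the first point, the guards look something like this (the transaction name and parameter are just examples; the newrelic_* functions are part of the agent's PHP API):

// Only call NewRelic API functions when the extension is loaded, so the
// same code runs cleanly on machines without the module.
if (function_exists('newrelic_name_transaction')) {
    newrelic_name_transaction('checkout/submit');
}
if (function_exists('newrelic_add_custom_parameter')) {
    newrelic_add_custom_parameter('user_id', $userId);
}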
The agent shouldn't add much overhead, given the way it is designed. Because of the level of detail required to adequately troubleshoot the problem, this seems like a good question to ask at https://support.newrelic.com

PHP/MySQL Performance Testing with Just PHP

I'm trying to diagnose a server where the website is loading very slowly, but unfortunately my client has only provided me with FTP access.
I've got FTP access, so I can upload PHP scripts, but I can't set up any other server-side tools.
I have access to phpMyAdmin, but not direct access to the MySQL server. It is also, unfortunately, a Windows server (and we've been a Linux shop for over a decade now).
So, if I want to evaluate MySQL and disk speed performance through PHP on a generic server, what is the best way to do this?
There are already tools like:
https://github.com/raphaelm/php-benchmark or https://github.com/InfinitySoft/php-benchmark
But I'm surprised there isn't something that someone has already set up and configured to just run through and do some basic testing of a server's responsiveness.
Every time we evaluate a new server environment, it's handy to be able to compare it to an existing one quickly to see if there are any anomalies. I guess I'd just hoped that someone else had written up a script to do this already. I know I have, but that was before GitHub, when there was no handy place to post scraps of code like this.
You've probably already done this, but just in case... If I were in your shoes, the first things I'd be looking at are the indexes on the MySQL tables and the queries in the application. I've seen some sites get huge speed boosts just by fixing a join or adding a missing index.
Don't forget to check the code for performance issues or calls to sleep(). If you haven't yet, it may be helpful to get the code running locally so you can run it through Xdebug.
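If you do end up rolling your own upload-and-run probe, a minimal sketch (credentials, file sizes and iteration counts are placeholders to adjust):

<?php
// Rough disk-speed probe: time 100 writes of 64 KB each.
$start = microtime(true);
for ($i = 0; $i < 100; $i++) {
    file_put_contents(sys_get_temp_dir() . '/bench.tmp', str_repeat('x', 64 * 1024));
}
printf("Disk: 100 x 64KB writes in %.3fs\n", microtime(true) - $start);

// Rough MySQL round-trip probe: time 1000 trivial queries.
$db = new mysqli('localhost', 'user', 'pass', 'test');
$start = microtime(true);
for ($i = 0; $i < 1000; $i++) {
    $db->query('SELECT 1');
}
printf("MySQL: 1000 trivial queries in %.3fs\n", microtime(true) - $start);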

Website slow response (all other users) when large MySQL query running

This may seem like an obvious question, but we have a PHP/MySQL app that runs on a Windows 2008 server. The server has about 10 different sites running from it in total. Admin options on the site in question allow an administrator to run reports (through the site) which are huge and can take about 10 minutes in some cases. These reports are huge MySQL queries that display the data on screen. When these reports are running, the entire site goes slow for all users. So my questions are:
Is there a simple way to allocate server resources so that if a (website) administrator runs reports, other users can still access the site without performance issues?
Even though running the report kills the website for all users of that site, it doesn't affect the other sites on the same server. Why is that?
As mentioned, the report can take about 10 minutes to generate. Is it bad practice to make these kinds of reports available on the website? Would these typically be generated by overnight scheduled tasks?
Many thanks in advance.
The load you're putting on the server most likely has nothing to do with the applications themselves, but with the MySQL tables you are probably slamming. Most people get around this by generating reports during downtime, or by using MySQL replication to maintain a second database that is used purely for reporting.
I recommend getting some server monitoring in place to see what is actually going on. I think NewRelic just released Windows versions of its platform, and I believe you can try it out for free for 30 days.
There's the LOW_PRIORITY flag, but I'm not sure whether it would have any positive effect, since what you're experiencing is most likely a table/row locking issue. You can get an idea of what's going on by using the SHOW PROCESSLIST; query.
If the other websites run fine, it's even more likely that this is due to database locks (causing your web processes to wait for the locks to be released).
Lastly, it's always advisable to run big reporting queries overnight (or whenever the server load is minimal). Having a read-only replicated slave would also help.
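For example, while a report is running you can watch what everyone else is blocked on (this requires the PROCESS privilege):

SHOW FULL PROCESSLIST;
# or the queryable form (MySQL 5.1+), sorted by how long each thread has been busy:
SELECT id, user, state, time, info
FROM information_schema.processlist
ORDER BY time DESC;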
I strongly suggest you set up a replicated MySQL server and run the large administrator queries (SELECT only, naturally) against it, to avoid the burden of having your website blocked!
If there aren't too many transactions per second, you could even run the replica on a desktop computer away from your production server, and thus have an off-site backup of your DB!
Are you 100% sure you have added all the necessary indexes?
You need an insanely large website to have these kinds of problems unless you are missing indexes.
Make sure you have the right indexing, and make sure the columns you join on are not VARCHAR; that is not very fast.
I have a database with quite a few large tables and millions of records that is working 24/7.
It handles loads of activity and automated services processing it without issues, thanks to proper indexing.
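As an illustration (the table and column names here are made up), EXPLAIN tells you whether a query can use an index, and adding a missing one is a one-liner:

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
# if the plan shows a full table scan (type: ALL), index the filter column:
ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);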

Should $new_link be used in mysql_connect()?

I'm maintaining an inherited site built on Drupal. We are currently experiencing "too many connections" errors from the database.
In the /includes/database.mysql.inc file, mysql_connect($url['host'], $url['user'], $url['pass'], TRUE, 2) (see the mysql_connect() documentation) is used to connect to the database.
Should $new_link = TRUE be used? My understanding is that it will "always open a new link." Could this be causing the "too many connections" errors?
Editing core is a no-no. You'll forget you did, upgrade the version, and bam, your changes are gone.
Drupal runs several high-performance sites without problems. For instance, the Grammy Awards site switched to Drupal this year, and for the first time the site didn't go down during the ceremony! Some configuration needs tweaking on your setup, probably MySQL.
Edit your my.cnf and restart your MySQL server (/etc/my.cnf on Fedora, RHEL and CentOS; /etc/mysql/my.cnf on *buntu):
[mysqld]
max_connections=some-number-here
Alternatively, to try the change before restarting the server, log in to MySQL and run:
SHOW VARIABLES LIKE 'max_connections'; # this tells you the current number
SET GLOBAL max_connections = some-number-here;
Oh, and like another person said: DO. NOT. EDIT. DRUPAL. CORE. Leaving it alone pays off if you want to keep your site updated; editing it causes inflexible headaches and brings you a world of hurt.
MySQL, just like any RDBMS out there, will limit the number of connections it accepts at any time. The my.cnf configuration file specifies this value for the server under the max_connections setting. You can change this configuration, but there are real limitations depending on the capacity of your server.
Persistent connections may help reduce the time it takes to connect to the database, but they have no impact on the total number of connections MySQL will accept.
Connect to MySQL and use SHOW PROCESSLIST. It will show you the currently open connections and what they are doing. You might have multiple connections sitting idle, or queries running that take way too long. For the idle connections, it might just be a matter of making sure your code does not keep connections open when it doesn't need them. For the long-running queries, parts of your code may need to be optimized so that the queries don't take so long.
If all the connections are legitimate, you simply have more load than your current configuration allows for. If your MySQL load is low even with the current connection count, you can increase the limit a little and see how it evolves.
If you are not on a dedicated server, you might not be able to do much about this. It may just be someone else's code causing trouble.
Sometimes those failures are just temporary. When the connection fails, you can simply retry it a few milliseconds later. If it still fails, it might be a real problem, and stopping the script is the right thing to do. Don't put the retry in an infinite loop (I've seen it before; terrible idea).
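A minimal sketch of such a bounded retry (the attempt count and delay are arbitrary, credentials are placeholders):

// Retry the connection a few times with a short pause; never loop forever.
$link = false;
for ($attempt = 0; $attempt < 3 && !$link; $attempt++) {
    $link = @mysql_connect('localhost', 'user', 'pass');
    if (!$link) {
        usleep(50000); // wait 50 ms before the next attempt
    }
}
if (!$link) {
    die('Could not connect: ' . mysql_error());
}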
FYI: using persistent connections with Drupal is asking for trouble. No one uses them as far as I know.
The $new_link parameter only has an effect if you make multiple calls to mysql_connect() during a single request, which is probably not the case here.
I suspect the errors are caused by too many users visiting your site simultaneously, because a new connection to the DB is made for each visitor.
If you can confirm that this is the case, mysql_pconnect() might help, because your problem is not the stress on your database server but the sheer number of connections. You should also read Persistent database connections to see whether they are applicable to your webserver setup, if you choose to go this route.
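For what it's worth, the swap itself is a one-line change (credentials are placeholders):

// mysql_pconnect() reuses an existing persistent link to the same
// host/user when one is available, instead of opening a new one.
$link = mysql_pconnect('localhost', 'user', 'pass');
if (!$link) {
    die('Could not connect: ' . mysql_error());
}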
