I have 2 servers, one test and one production.
Test - WAMP on Win7 (Intel i7 4GB Ram)
Production - IIS + FastCGI + MYSQL on Windows Server 2008 (Intel Xeon 2.4GHZ 12GB Ram)
We have an InnoDB database set up on both, and after deploying the PHP source files, it appears that a particular group of functions querying/displaying data from the database takes 100% LONGER on the production server.
On my test environment, I am seeing results come back after ~3 secs.
On production, it takes ~10 secs for exactly the same query, with exactly the same source files and the same mirror of the database.
If anything, I was expecting the timings to be reversed, i.e. production 100% faster than WAMP. I checked the my.ini files on both sides, and although they are obviously not identical, there is nothing glaringly wrong in there which would cause this. I tested a few different my.ini configs on production and none of them reduced the processing times.
The query used for this test is a complex multiple loop which interrogates the database (sends one query, then loops through the results and runs further queries against the MySQL server for each row). All results are kept and modified in memory using multidimensional arrays.
I am timing the actual script execution only (MySQL + PHP data retrieval and processing), not the time it takes to send this data to the browser :)
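To make the shape of it clearer, here is a stripped-down sketch of the kind of loop involved (the table and column names are invented for illustration, not the real schema):

<?php
// Illustrative only: one outer query, then one further query per result row.
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

$rows = $db->query('SELECT id, name FROM parent_table');
$data = array();

while ($row = $rows->fetch_assoc()) {
    // This per-row round trip is what gets multiplied by the size of the result set.
    $res   = $db->query('SELECT SUM(amount) AS total FROM child_table WHERE parent_id = ' . (int) $row['id']);
    $child = $res->fetch_assoc();

    $row['total']     = $child['total'];
    $data[$row['id']] = $row; // everything is kept in a multidimensional array
}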
Where should I look next?
Just to close this: it was due to incorrect localisation within the PHP environment - setting my locale at the top of the PHP page cured the problem.
(despite it being correctly set within php.ini, IIS slowed down when performing date-based functions)
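For anyone else hitting this, the fix really was one line at the top of the page; the locale strings below are just examples, use whatever your application actually needs:

<?php
// Set the locale explicitly so date-based functions don't stall
// while IIS/FastCGI PHP tries to resolve it on every call.
// setlocale() accepts a list of candidates and uses the first one available.
setlocale(LC_ALL, 'en_GB.utf8', 'en_GB', 'english');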
love microsoft
SEE THE LAST UPDATE AT THE BOTTOM FOR MY FINAL EVAL / SUGGESTIONS
This seems so basic that it should be a common problem, but I've already searched for anything pertaining to this issue with no luck.
-- Scenario --
I have a web application that, as one of its functions, allows users to upload photos to the server. File size limits aren't the issue, but I notice a visible difference in the speed of the server when I'm uploading a file vs. not.
-- Testing --
I uploaded a 3MB file while signed in to another account (on another computer completely) to test the page load times in Firebug. Caching has been disabled. The results are below:
Baseline page speed (without upload): 0.409s, 0.449s, 0.468s
During 3MB file upload: 1.28s, 8.58s, -- upload complete -- 0.379s
This problem obviously compounds if more than one user is uploading a photo at the same time. This seems insane considering all the power I have with the current setup.
-- Setup --
Mediatemple DV Level 3 Application server (4GB ram, 16 cores)
Mediatemple DV Dev level 1 database server (running MyISAM tables)
Cloudflare CDN
Custom PHP application
WordPress sales website (front end, same server - not connected in any way to the web app)
CentOS 6.5
MySQL 5.5
-- So Far --
I had the CloudTech team at MT tune the Apache & nginx settings for me since I thought I had screwed this up, but the issue is still showing up.
I am considering changing all the DB tables to InnoDB for concurrency, but this is not related to the question at hand.
The server load averages do not seem to be significantly affected when uploading my test file.
The output of "free -m" is below:
                    total     used     free   shared  buffers   cached
Mem:                 4096      634     3461        0        0      215
-/+ buffers/cache:             419     3676
Swap:                1536       36     1499
EDIT 1
Is it possible to offload these types of things to an independent server? I realize the PHP used to upload the file would also have to be run from that server, but at least then only the upload / long process server would be affected and not the entire application.
Also, is there any other language or workflow that would work better for this? Perl? Python?
EDIT 2 (2014-08-28)
I forgot to mention two things:
1) This issue isn't just with file uploads - it happens whenever a PHP script runs for an extended time. As a test, I ran a 3-minute PHP script on my end and, sure enough, got a phone call from a client during the execution about the "slow" system.
2) I have a high number of concurrent login sessions running. Many of these users are likely on the same script at the same time.
Here is the output from htop. The "php-cgi" processes are the obvious offenders, but I don't know how to see which scripts are causing this load. Again, even if I did find the script, I feel like I should be able to run more than a handful of PHP scripts at a time.
EDIT 3 (2014-08-28)
Here is the htop output at peak hours (right now). What's annoying is that the system is flying at the moment with 2x the traffic and processes.
EDIT 4 / UPDATE (2014-09-30)
After a month of staring at this, I've found some things to be true. I'm hoping some of this will help others get their high-growth applications in check before it turns into an issue of racing traffic with server upgrades (which is what happened here).
The Issue: I was using MyISAM as the exclusive database engine. I had read through hundreds of docs and forum posts about whether InnoDB or MyISAM is the better engine to use, with most sources giving vague evaluations or (at best) overly complicated benchmarking with vague settings, claiming to increase (or even decrease..?) performance. Forget it all and USE InnoDB FOR ALL APPLICATIONS.
Find a good resource to help you tune your MySQL server settings and run with it (see the links below). Apparently the concurrent traffic on the server was overloading PHP while it waited for MyISAM table locks to release. This was causing excessive load on the application server, while the DB server was just hanging out with hardly any CPU or memory load. Transitioning to InnoDB allows high concurrency at the cost of CPU and memory (both GOOD things - buy a bigger DB server if you have to).
In the end, the load has transferred to the DB server, increasing concurrent traffic performance. To summarize: DON'T USE MyISAM ON WEB APPLICATIONS. Period. I'm sure I'll get burned a bit for saying that, but the slight performance hit (yes, MyISAM is a BIT faster at low concurrency, but who cares if your web app crashes) is well worth the increase in concurrency.
IMPORTANT POINTS
1) When you move your database over to InnoDB (ALTER TABLE my_table ENGINE=InnoDB;) you will need to use the info found in the following links to set your InnoDB-specific settings.
2) Be sure to code a loop wrapper in PHP (3 iterations sounds good) for your query calls to account for deadlocks (a situation where two queries are competing for the same row(s) and MySQL rolls one of them back) - see the sketch after this list.
3) Write your PHP to look for ERRORS coming out of your new query wrapper.
E.g.: send the query -> error found -> check for deadlock -> if deadlock, retry the query after waiting 0.1 sec -> check again; if an error is found that isn't a deadlock, return the error to the app, else continue until the iteration limit is reached (3 in this example) -> if the iteration limit is hit, return the error to the application and do something sensible now that your query has failed.
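A rough sketch of points 2 and 3 using PDO (the function name and structure are mine, not from any library; it assumes PDO is set to throw exceptions, and relies on MySQL reporting deadlocks as error code 1213):

<?php
// Run a query with automatic retry on InnoDB deadlock (MySQL error 1213).
// Returns the PDOStatement on success, or false so the app can handle the failure.
function run_query(PDO $pdo, $sql, array $params = array(), $maxTries = 3)
{
    for ($attempt = 1; $attempt <= $maxTries; $attempt++) {
        try {
            $stmt = $pdo->prepare($sql);
            $stmt->execute($params);
            return $stmt;
        } catch (PDOException $e) {
            // errorInfo[1] holds the MySQL error code; 1213 = deadlock victim.
            $mysqlCode = isset($e->errorInfo[1]) ? $e->errorInfo[1] : null;
            if ($mysqlCode == 1213 && $attempt < $maxTries) {
                usleep(100000); // wait 0.1 sec, then retry
                continue;
            }
            // Not a deadlock, or iteration limit reached: return the error to the application.
            return false;
        }
    }
    return false;
}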
Helpful Links
http://www.percona.com/blog/2013/09/20/innodb-performance-optimization-basics-updated/
Handling InnoDB deadlocks
https://dba.stackexchange.com/questions/27328/how-large-should-be-mysql-innodb-buffer-pool-size
CAUTION
I crashed the MySQL server with the setting in the following link. You MUST completely stop mysqld (service mysqld stop) before renaming (NOT deleting) the original files (ib_logfile0, ib_logfile1), usually found here on RH/CentOS:
/var/lib/mysql/ib_logfile0
http://www.percona.com/blog/2008/11/21/how-to-calculate-a-good-innodb-log-file-size/
Once they are renamed, start your MySQL daemon again (service mysqld start).
Hopefully this helps someone,
-B
I am having a bit of an issue with a PHP script. When I open my site (hosted locally) it pauses for 1-2 seconds, then it loads the page.
The database I am reading data from is very small and has indexes. The queries are quick.
My PHP code is somewhat optimized, and my databases are indexed.
PHP 5.3.19 is installed on Windows 2008 R2 Server (Intel Xeon(R) CPU E5-2400 0 @ 2.20 GHz (2 processors), 16GB of RAM) and MySQL Server is installed on a different server. Both servers are on the same network, so all connections should be internal.
I also use PDO to connect to my databases.
How can I determine what is causing the extra delay?
What things can I check for to expedite the page load?
Thanks
In my experience, the cause might be JavaScript or other script files that you have referenced in your code.
If the browser finds a script file missing (due to a wrong path or whatever other reason), it requests it again and again until the request times out, which is around 2 to 5 seconds depending on the browser's settings.
I've written some JS scripts on my school's VLE.
It uses the UWA Widget Format and, to communicate with a locally-hosted PHP script, it uses a proxy and AJAX requests.
Recently we've moved the aforementioned locally-hosted server from a horrible XP-based WAMP server to a virtual Server 2008 distribution running IIS and FastCGI PHP.
Since then - or maybe it was before and I just didn't notice - my AJAX calls are starting to take in excess of 1 second to run.
I've run the associated PHP script's queries in phpMyAdmin and, for example, the associated getCategories SQL takes 0.00023s to run, so I don't think the problem lies there.
I've pinged the server and it consistently returns <1ms as it should for a local network server on a relatively small scale network. The VLE is on this same network.
My question is this: what steps can I take to determine where the "bottleneck" might be?
First of all, test how long your script is actually running:
Simplest way to profile a PHP script
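At its most basic that can be a sketch like the following - nothing framework-specific, just the wall-clock time logged per request:

<?php
// Capture wall-clock time at the very top of the script.
$scriptStart = microtime(true);

// Log the total elapsed time when the script finishes.
register_shutdown_function(function () use ($scriptStart) {
    $uri = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : 'cli';
    error_log(sprintf('%s took %.3fs', $uri, microtime(true) - $scriptStart));
});

// ... the rest of the application runs as normal ...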
Secondly, you should check the disk activity on the server. If it is running too many FastCGI processes for the amount of available RAM, it will swap and it will be very slow. If the disk activity is very high, then you know you've found your culprit. Solve it by reducing the maximum number of FastCGI processes or by increasing the amount of server RAM.
I've got 2 servers: my local server and a remote production server. They've got basically the same config: Ubuntu 10.10, Apache 2, PHP 5.3, PHP-APC, MySQL etc. I also have copies of a webapp on both servers, and here's the problem with PHP:
On my local server the webapp uses only ~4 MB of memory, but on my production server memory usage spikes up to 50 MB for no good reason. I tried running the memory_get_peak_usage() function to get memory usage at different stages of webapp execution, and I've found that on the production server memory spikes from 0.7 MB up to 49 MB on function calls such as class_exists().
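For reference, the checkpointing I used looks roughly like this (the labels are arbitrary):

<?php
// Log peak memory usage at interesting points to see where it jumps.
function mem_checkpoint($label)
{
    error_log(sprintf('%s: %.1f MB peak', $label, memory_get_peak_usage(true) / 1048576));
}

mem_checkpoint('bootstrap');
// ... autoloading / class_exists() calls happen here ...
mem_checkpoint('after autoload');
// ... rest of the request ...
mem_checkpoint('end of request');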
What could be the problem?
Thanks.
Hate to sound a bore, but have you verified that they have exactly the same Apache/PHP config, as these can easily become the source of this sort of difference?
Also, are they experiencing the same sort of load? Code running on a server under load can behave very differently to code running with ample uncontested resources.
Are there any other differences in terms of other running applications that could be affecting stuff?
It may be worth profiling the code on both servers to see if there are any per-request differences; XHProf[1] is a great tool for this and it can safely be run in production (as long as you read the instructions).
[1] http://phpadvent.org/2010/profiling-with-xhgui-by-paul-reinheimer
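For what it's worth, the basic shape of an XHProf run is roughly this (it requires the xhprof extension; the output path is just an example - XHGui, linked above, gives you a nicer way to store and browse runs):

<?php
// Start collecting CPU and memory data for everything that follows.
xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY);

// ... the request you want to profile runs here ...

$profile = xhprof_disable();

// Dump the raw run somewhere your tooling can pick it up.
file_put_contents('/tmp/xhprof.' . uniqid() . '.myapp.xhprof', serialize($profile));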
OK, I've found what the problem was. There is a class that was creating a cache file containing information on the user's browser (in order to recognize them later). Apparently there was a problem with that file and/or its parser, so it was using too much memory. I've since cleared the cache files, and if the situation repeats I'll ditch that class altogether.
Thanks to all who answered/commented on the problem.
I have a Lighttpd(1.4.28) web server running on Centos 5.3 and PHP 5.3.6 in fastcgi mode.
The server itself is a quad core with 1GB RAM and is used to record viewing statistics for a video platform.
Each request consists of a very small bit of XML being posted, and the receiving PHP script performs a simple INSERT or UPDATE MySQL query. The PHP returns a very small response to acknowledge the request.
These requests are performed very frequently and I need the system to be able to handle as many concurrent connections as possible at a high rate of requests/second.
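To make that concrete, the endpoint is essentially this shape (the table, field and credential values below are placeholders, not my real schema):

<?php
// Simplified version of the stats endpoint: read the posted XML,
// upsert one row, return a tiny acknowledgement.
$xml = simplexml_load_string(file_get_contents('php://input'));

$pdo  = new PDO('mysql:host=localhost;dbname=stats', 'user', 'pass');
$stmt = $pdo->prepare(
    'INSERT INTO video_views (video_id, views) VALUES (:id, 1)
     ON DUPLICATE KEY UPDATE views = views + 1'
);
$stmt->execute(array(':id' => (string) $xml->videoId));

header('Content-Type: text/plain');
echo 'OK';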
I have disabled keep-alive as only single requests will be made, so I don't need to keep connections open.
One of the things that concerns me is that in server-status I am seeing a lot of connections in the 'read' state. I take it this is controlled by server.max-read-idle, which is set to 60 by default? Is it OK to change this to something like 5, as I am seeing the majority of connections being kept open for long periods of time?
Also, what else can I do to optimise lighttpd to be able to serve lots of small requests?
This is my first experience setting up lighttpd, as I thought it would be more suitable than Apache in this case.
Thanks
Irfan
I believe the problem is not in the webserver, but in your PHP application, especially the MySQL part.
I would replace lighty with Apache + mod_php, and instead of writing to MySQL directly I would queue the INSERT requests in some NoSQL store such as Redis. Then I would write a daemon / crontab job that inserts the data into MySQL.
We had such a setup before, but instead of Redis we created TXT files in one directory.
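A rough sketch of that approach, assuming the phpredis extension (the key name, table and credentials are illustrative, reusing the hypothetical schema from the question): the request handler just pushes onto a Redis list, and a cron job or daemon drains the list into MySQL.

<?php
// --- In the request handler: enqueue instead of hitting MySQL directly ---
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->rPush('stats_queue', json_encode(array(
    'video_id' => $videoId, // taken from the posted XML
    'ts'       => time(),
)));

// --- In a cron job / daemon: drain the queue into MySQL ---
$pdo  = new PDO('mysql:host=localhost;dbname=stats', 'user', 'pass');
$stmt = $pdo->prepare(
    'INSERT INTO video_views (video_id, views) VALUES (:id, 1)
     ON DUPLICATE KEY UPDATE views = views + 1'
);
while (($item = $redis->lPop('stats_queue')) !== false) {
    $row = json_decode($item, true);
    $stmt->execute(array(':id' => $row['video_id']));
}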