As far as I know, a PHP file cannot serve two clients at the same time, and in WordPress index.php is the file handling all requests, so how is it still efficient and fast? Is there some logical clustering, or a programming technique in PHP that WordPress follows? I have a website where all requests come to index.php, and it seems to lag even under few requests; I don't understand how this works well for WordPress or other CMSes.
There is no built-in limit such as "one client per file". You can tune those limits on your web server and/or your FastCGI pool if you're using one. It is more accurate to think in terms of "one PHP process (or thread) per request", but even that can be misleading depending on your scenario.
WordPress's index.php is just a front controller: it routes the request, picks a theme template and renders it, filling in each variable according to the request. No magic there, just basic routing and templating logic.
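As a rough illustration (not WordPress's actual code), a front controller of that kind boils down to something like the sketch below; the route table, template names and $context data are made up for the example:

<?php
// index.php - minimal front-controller sketch (illustrative only, not WordPress internals).
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

// Hypothetical routing table: request path => template file.
$routes = [
    '/'      => 'templates/home.php',
    '/about' => 'templates/about.php',
];
$template = isset($routes[$path]) ? $routes[$path] : 'templates/404.php';

// Data the template will use; a real CMS would build this from the database.
$context = ['title' => 'My Site', 'path' => $path];

extract($context);   // expose $title and $path to the template
require $template;   // render the chosen template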
Your index.php lag might be caused by several things, including but not limited to:
You are referencing external JS scripts in the header, which temporarily blocks page rendering.
You are trying to establish an early DB connection to a slow or high latency DB server
You are making heavy SQL queries on each request
You are not caching sections of the page that change little or not at all from one request to the next (see the sketch after this list)
You are using file based sessions on a slow storage machine
You are not using an opcode cache AND you're doing an expensive PHP calculation on each request.
You have a poorly tuned web server that allocates too many resources to each request, even for static assets.
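To illustrate the fragment-caching point above, here is a minimal file-based sketch; the helper name, cache location, TTL and render_sidebar() are assumptions, and a real setup would more likely use an object cache such as Memcached or APCu:

<?php
// Hypothetical helper: return a cached HTML fragment, or rebuild it when stale.
function cached_fragment($key, $ttl, callable $build)
{
    $file = sys_get_temp_dir() . '/frag_' . md5($key) . '.html';

    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        return file_get_contents($file);       // fragment is still fresh
    }

    $html = $build();                          // the expensive part: queries, rendering, ...
    file_put_contents($file, $html, LOCK_EX);  // store for the next requests
    return $html;
}

// Usage: rebuild the sidebar at most every 5 minutes.
echo cached_fragment('sidebar', 300, function () {
    return render_sidebar();                   // render_sidebar() is an assumed, expensive function
});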
I am currently tasked with finding a solution for a serious PHP bottleneck which is apparently caused by server-side minification of CSS and JS when our sites are under high load.
Some details and what I have found out so far
I inherited a web application running on WordPress, which uses a complex constellation of Doctrine, Memcached and W3 Total Cache for minification and caching. Under heavy load our application begins to slow down rapidly. So far we have narrowed part of the problem down to the server-side minification process. Preliminary analysis has shown that the number of PHP processes starts to stack up under load, and once it reaches the process limit of 500, everything slows down. Something which is also mentioned by the author of the minify library.
Solutions I have evaluated so far
Pre-minification
The most logical solution would be to pre-minify any of the files before going live. Unfortunately our workflow demands that non-developers should be able to edit said files on our production servers (i.e. after the web app has gone live). Therefore I think that pre-processing is out of the question, as it limits the editability of minified files.
Serving unminified files
75% of our users access our web application from mobile devices, especially smartphones. Unminified JS and CSS amounts to 432KB and is reduced by 60-80% in size when minified. Serving unminified files would solve the performance and editability problems, but for the sake of our mobile users it is out of the question.
I understand that this is as much a technical problem as it is a workflow problem and I guess we are open to working on both as long as we end up with a better overall performance.
My questions
Is there a reasonable compromise that solves the PHP bottleneck problem, allows non-devs to make changes to live CSS/JS, and still serves reasonably sized files to clients?
If there is no such one-size-fits-all solution, what can I do to improve our workflow and/or server-side behaviour?
EDIT: Because there were some questions / comments regarding the server configuration, our servers run Debian and are equipped with 32GB of RAM and 24 core CPUs.
You can run a CSS/JavaScript build tool such as Gulp or Grunt (on Node.js) that minifies all your JS and CSS assets on change.
This can run in production, but it is not recommended without some architectural setup (having multiple versioned compiled files and auto-checking them via Gulp or another extension).
I emphasize that patching features into production and editing it directly is strongly discouraged, as it can expose live issues to your visitors and reduce your credibility.
http://gulpjs.com/
Using Gulp/Grunt would require you to change how you write your css/javascript files.
I would solve this with two changes: first, remove any WP-Cron operation that runs every time a user hits the application and move it to an actual cron job on the server. Second, use load balancing so that a single server is not carrying the whole load. That is your real problem, and even if you fix your perceived code issues you are still faced with the load issue.
I don't believe you need to change your workflow at all or go down the road of major modification to your existing system.
The WP-Cron tasks that run each time a page is loaded cause significant load and slowness. You can shift this work from visitors' page loads to a scheduled job at the server level. This reduces load, and it is most likely these processes that you believe are slowing down the site.
See this guide:
http://www.inmotionhosting.com/support/website/wordpress/disabling-the-wp-cronphp-in-wordpress
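In practice the change described in that guide comes down to one constant in wp-config.php plus a real cron entry; the schedule and site URL below are placeholders:

<?php
// In wp-config.php: stop WordPress from firing WP-Cron on every page view.
define('DISABLE_WP_CRON', true);

// Then let the system cron hit wp-cron.php on a fixed schedule instead,
// e.g. a crontab entry like (every 15 minutes, URL is a placeholder):
//   */15 * * * * curl -s "https://example.com/wp-cron.php?doing_wp_cron" > /dev/null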
Next, load balancing. Having a single server supply all your users when you have a lot of traffic is a terrible idea. You need to spread the load across several web servers.
I'm not sure where or how you are hosted, but I would move things to AWS and set up the WordPress site for load balancing there: http://www.mornin.org/blog/scalable-wordpress-amazon-web-services/
This will involve:
Load Balancer
EC2 Instances running PHP/Apache
RDS for your database storage for all EC2 instances
S3 storage for the site's media
For user sessions I suggest you simply set up stickiness on the load balancer, so users keep being served by the node they arrived on.
You can get a detailed guide on how to do this here:
http://www.mornin.org/blog/scalable-wordpress-amazon-web-services/
Or at server fault another approach:
https://serverfault.com/questions/571658/load-balancing-wordpress-on-amazon-web-services-managing-changes
The assumption here is that if you have high traffic you are making revenue from it, so any time your service responds slowly it will turn users away or discourage them from returning. Changing the software could help, but you would be treating the symptom, not the illness. The illness is that your server comes under heavy load. This isn't uncommon with WordPress and high traffic, so you need to spread the load instead of trying to micro-optimize. The difference is that the optimizations will be small gains, while load balancing and spreading the load actually solve the problem.
Finally, consider using a CDN for serving all of your media. It loads media faster and removes load from your system by reducing the number of requests hitting the server and the output it has to send to clients. It also makes pages load consistently faster for visitors wherever they are, by serving media from the nodes closest to them. At AWS this is called CloudFront. WordPress also offers this service for free via Jetpack (I believe), but it does not handle all media as far as I understand.
I like the idea of using GulpJS. One thing you might consider is to have a WP-Cron job, or even just a system cron, that runs every 5 minutes or so and triggers a Gulp task to minify and concatenate your CSS and JS files.
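If you take the system-cron route, the cron entry can call a small PHP wrapper that shells out to the build; the paths, task name and log file below are assumptions:

<?php
// build-assets.php - run from cron, e.g.: */5 * * * * php /var/www/tools/build-assets.php
$projectDir = '/var/www/site';              // assumed project root containing gulpfile.js
$logFile    = '/var/log/asset-build.log';   // assumed writable log location

chdir($projectDir);
exec('gulp build 2>&1', $output, $exitCode); // assumed "build" task minifies and concatenates

$status = ($exitCode === 0) ? 'OK' : "FAILED ($exitCode)";
file_put_contents($logFile, '[' . date('c') . "] $status\n" . implode("\n", $output) . "\n", FILE_APPEND);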
Another option that doesn't require scheduling, but instead watches the file system for changes and then triggers a Gulp build, is incron (inotify cron). Check out the incron man page. Incron is great in that it triggers actions based on file system events such as file changes. You could use it to trigger a Gulp build whenever any CSS file changes on the file system.
One caveat is that this is a Linux solution so if you're hosting on Windows you might have to look for something similar.
Edit:
Incron Documentation
I am trying to reduce the number of requests on my site (to improve page speed). In one file, I have 10 separate php require statements calling 10 different php files.
My question is, are these 10 require statements considered 10 separate requests? By replacing the require statements with the actual contents of the called PHP files, can I reduce the number of requests?
I would greatly appreciate it if someone could clarify this for me. Please note that I am not an experienced programmer or web designer. Thanks!
As the first comment mentions, requests are only served in response to clients; none of PHP's language constructs should be described as requests.
As simple as I can make it ...
Execution is a three stage process:
Load file from disk
Compile into Zend's intermediate form
Execute intermediate form
An opcode cache improves performance by taking over (usurping) the parts of Zend normally responsible for loading files from disk and compiling them into Zend's intermediate representation, so that, where possible, the loading and compilation stages of execution can be bypassed.
Without an Opcode Cache
When a request is made such as
GET /index.php HTTP/1.1
The web server servicing the request is invoked in the normal way, which in turn invokes the PHP interpreter. The interpreter then loads index.php from disk, compiles it into Zend's intermediate representation, and begins executing the code.
When a script contains a statement such as
include "my_awesome_code.php";
The (same) interpreter loads my_awesome_code.php from disk, compiles it into Zend's intermediate representation and executes it.
With an Opcode Cache
When a request is made such as
GET /index.php HTTP/1.1
The web server servicing the request is invoked in the normal way, which in turn invokes the PHP interpreter. Rather than loading index.php from disk, the cache retrieves the code from memory, where it is stored in something very close to Zend's intermediate representation. Preparing the cached code for execution (which means copying it from shared memory and finalizing its form) is much faster than loading from disk and compiling to that representation first.
When a script contains a statement such as
include "my_awesome_code.php";
The code is again retrieved from the cache, once more bypassing loading and compilation.
Should the cache not find index.php or my_awesome_code.php in shared memory, it invokes the parts of Zend it usurped, resulting in the normal loading from disk and compilation of that code into Zend's intermediate representation. The cache intercepts the resulting intermediate code and stores it in shared memory before allowing it to be executed for the first time.
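If you want to see this behaviour on a server running a modern opcode cache such as OPcache (assuming the extension is enabled and its API is not restricted), a quick check looks roughly like this:

<?php
// Quick OPcache inspection sketch.
if (!function_exists('opcache_get_status')) {
    exit("OPcache is not available.\n");
}

$status = opcache_get_status(true);   // true = include per-script details
if ($status === false) {
    exit("OPcache is loaded but not enabled.\n");
}

// Has this particular file already been compiled and stored in shared memory?
var_dump(opcache_is_script_cached(__FILE__));

// How often execution skipped the load-and-compile stages entirely.
printf("Cached scripts: %d, hit rate: %.1f%%\n",
    $status['opcache_statistics']['num_cached_scripts'],
    $status['opcache_statistics']['opcache_hit_rate']);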
As my comment says, it depends on how PHP is installed.
If you run PHP as an Apache module (mod_php), the interpreter is embedded in the web server's worker processes and requests are handled there without starting a new interpreter each time.
But if you run PHP as a CGI wrapper, each request starts a new PHP instance.
To improve your loading time, use an opcode cache such as eAccelerator or APC for your PHP scripts. Another improvement is to handle requests the right way: cache static files like stylesheets, JavaScript, images, and so on.
Edit
Another solution is to optimize your PHP scripts. A good profiler is New Relic; with it you can see which scripts have very long execution times. I wrote a blog post about New Relic and its features, but it's only available in German: http://hovida-design.de/php-die-macht-der-performance/
I'm coding a site in PHP and getting "pretty urls" (also hiding my directories) by directing all requests to one index.php file (using .htaccess). The index file then parses the uri and includes the requested files. These files also have more than a couple of includes in them, and each may open up a MySQL connection. And then those files have includes too, which open sql connections. It goes down to about 3-4 levels.
Is this process CPU and memory intensive, both from the PHP includes and opening (and closing) MySQL connections in each included file?
Also, would pretty urls using purely htaccess use less resources?
PHP Overheads
The answer re the logical decomposition of your app into a source hierarchy depends on how your solution is being hosted.
If you use a dedicated host/VM then you will probably have mod_php+Xcache or equiv and the answer will be: no, it doesn't really hit the runtime since everything gets cached in-memory at the PHP Opcode level.
If you use a shared hosting service then it will impact performance, since any PHP scripts will be loaded through PHP-CGI, probably via suPHP, and the entire included source hierarchy will need to be read in and compiled per request. Worse, on a shared solution, if this request is the first in, say, a minute, then the server's file cache will have been flushed and marshalling this source will involve a lot of physical I/O, meaning seconds of delay.
I administer a few phpBB forums and have found that by aggregating common include hierarchies for shared hosting implementations, I can halve the user response time. Here are some articles which describe this in more detail (Terry Ellison [phpBB]). To quote one article:
Let me quantify my views with some ballpark figures. I need to emphasise that the figures below are indicative. I have included the benchmarks as attachments to this article, just in case you want to validate them on your own service.
20–40. The number of files that you can open and read per second, if the file system cache is not primed.
1,500–2,500. The number of files that you can open and read per second, if the file system cache is primed with their contents.
300,000–400,000. The number of lines per second that the PHP interpreter can compile.
20,000,000. The number of PHP instructions per second that the PHP interpreter can interpret.
500-1,000. The number of MySQL statements per second that the PHP interpreter can call, if the database cache is primed with your table contents.
For more see More on optimising PHP applications in a Webfusion shared service where you can copy the benchmarks to run yourself.
MySQL connection
The easiest thing to do here is to pool the connection. I use my own mysqli class extension which uses a standard single-object-per-class template. In my case any module can issue a:
$db = AppDB::get();
to return this object. This is cheap, as it is an internal call involving half a dozen PHP opcodes.
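The AppDB class itself isn't shown in the original answer; a minimal sketch of such a single-object wrapper could look like this (the class layout and the connection credentials are assumptions):

<?php
// Minimal singleton-style mysqli wrapper (illustrative sketch only).
class AppDB extends mysqli
{
    private static $instance = null;

    public static function get()
    {
        if (self::$instance === null) {
            // Connection details are placeholders.
            self::$instance = new AppDB('localhost', 'user', 'secret', 'appdb');
        }
        return self::$instance;
    }
}

// Any module can now fetch the shared connection cheaply:
$db = AppDB::get();
$result = $db->query('SELECT 1');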
An alternative but traditional method is to use a global to hold the object and just do a
global $db;
in any function that needs to use it.
Footnote for Small Applications
You suggested combining all includes into a single include file. This is OK for stable production, but a pain during testing. Can I suggest a simple compromise? Keep them separate for testing, but allow loading of a single composite. You do this in two parts: (i) I assume each include defines a function or class, so use a standard template for each include, e.g.
if( !function_exists( 'fred' ) ) {
    require "include/module1.php";
}
(ii) Before any loads, in the master script simply do:
@include "include/_all_modules.php";
This way, when you are testing you delete _all_modules.php and the script falls back to loading the individual modules. When you're happy, you recreate _all_modules.php. You can even do this server-side with a simple "release" script which does a
system( 'cat include/[a-z]*.php > include/_all_modules.php' );
This way, you get the best of both worlds.
It depends on the MySQL client code; I know for one that connections often get reused when a MySQL connection is opened with the same parameters.
Personally I would only initialize the database connection in the front controller (your index.php file), because everything should come through there anyway.
You could use the include_once() or require_once() statements to ensure that PHP only parses each file once, thus saving processing time. This would be particularly valuable if you suspect that your code might attempt to include files more than once per script execution.
http://php.net/manual/en/function.include-once.php
I would imagine that using .htaccess to parse URLs always uses more resources than any other method, purely because those rules are evaluated on every single request your server encounters.
I've seen many web apps that implement progress bars; however, my question is about the non-uploading variety.
Many PHP web applications (phpBB, Joomla, etc.) implement a "smart" installer to not only guide you through the installation of the software, but also keep you informed of what it's currently doing. For instance, if the installer was creating SQL tables or writing configuration files, it would report this without asking you to click. (Basically, sit-back-and-relax installation.)
Another good example is Joomla's Akeeba Backup (formerly Joomla Pack). When you perform a backup of your Joomla installation, it makes a full archive of the installation directory. This, however, takes a long time and hence requires progress updates. Yet the server itself limits PHP script execution time, so it seems that either:
The backup script is able to bypass it.
Some temp data is stored so that the archive is appended to (if archive appending is possible).
Client scripts call the server's PHP every so often to perform actions.
My general guess (not specific to Akeeba) is with #3, that is:
Web page JS -> POST foo/installer.php?doaction=1 SESSID=foo2
Server -> ERRCODE SUCCESS
Web page JS -> POST foo/installer.php?doaction=2 SESSID=foo2
Server -> ERRCODE SUCCESS
Web page JS -> POST foo/installer.php?doaction=3 SESSID=foo2
Server -> ERRCODE SUCCESS
Web page JS -> POST foo/installer.php?doaction=4 SESSID=foo2
Server -> ERRCODE FAIL Reason: Configuration.php not writable!
Web page JS -> Show error to user
I'm 99% sure this isn't the case, since that would create a very nasty dependency on the user having JavaScript enabled.
I guess my question boils down to the following:
How are long-running PHP scripts (on web servers, of course) handled so that they can "stay alive" past the PHP maximum execution time? If they don't "cheat", how do they split up the task at hand? (I notice that Akeeba Backup does acknowledge the PHP maximum execution time limit, but I don't want to dig too deep to find such code.)
How is the progress displayed via AJAX+PHP? I've read that people use a file to indicate progress, but to me that seems "dirty" and puts a bit of strain on I/O, especially for live servers with 10,000+ visitors running the aforementioned script.
The environment for this script is where safe_mode is enabled, and the limit is generally 30 seconds. (Basically, a restrictive, free $0 host.) This script is aimed at all audiences (will be made public), so I have no power over what host it will be on. (And this assumes that I'm not going to blame the end user for having a bad host.)
I don't necessarily need code examples (although they are very much appreciated!), I just need to know the logic flow for implementing this.
Generally, this sort of thing is stored in the $_SESSION variable. As far as the execution timeout goes, what I typically do is have a JavaScript timer that, every x seconds, requests a small PHP status script and writes its output into the innerHTML of a status div. That status script doesn't "wait" for anything; it merely grabs the current status from the session (which is updated by the script(s) actually performing the installation) and outputs it in whatever fancy form I see fit (status bar, etc.).
I wouldn't recommend any direct file I/O for status updates. You're correct that it is messy and inefficient. I'd say $_SESSION is definitely the way to go here.
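A rough sketch of that pattern, with the installer doing one bounded step per request (so each request stays well under the execution time limit) and a tiny status endpoint for the JavaScript to poll; the file names, the step count and do_install_step() are assumptions. The session_write_close() call matters with file-based sessions, because it releases the session lock so the polling request isn't blocked:

<?php
// worker.php - called repeatedly by the client; performs one bounded step per request.
session_start();
$step = isset($_SESSION['install_step']) ? $_SESSION['install_step'] : 0;

do_install_step($step);                                 // assumed function doing one unit of work

$_SESSION['install_step'] = $step + 1;
$_SESSION['progress']     = min(100, ($step + 1) * 10); // assuming 10 steps in total
session_write_close();                                  // release the session lock for status polls

header('Content-Type: application/json');
echo json_encode(['done' => $_SESSION['progress'] >= 100]);

<?php
// status.php - polled by JavaScript every few seconds to update the progress bar.
session_start();
$progress = isset($_SESSION['progress']) ? $_SESSION['progress'] : 0;
session_write_close();

header('Content-Type: application/json');
echo json_encode(['progress' => $progress]);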
My question is whether or not using multiple PHP include()s is a bad idea. The only reason I'm asking is that I always hear that having too many stylesheets or scripts on a site creates more HTTP requests and slows page loading. I was wondering whether the same applies to PHP.
The detailed answer:
Every CSS or JS file referenced in a web page is fetched over the network by the browser, which often involves hundreds of milliseconds or more of network latency. Requests to the same server are (by convention, though not mandated) serialized one or two at a time, so these delays stack up.
PHP include files, on the other hand, are all processed on the server itself. Instead of hundreds of milliseconds, the local disk access will take tens of milliseconds or less, and if cached, will be a direct memory access, which is faster still.
If you use something like http://eaccelerator.net/ or http://php.net/manual/en/book.apc.php then your PHP code will all be precompiled on the server once, and it doesn't even matter if you're including files or dumping them all in one place.
The short answer:
Don't worry about it. 99 times out of 100 with this issue, the benefits of better code organization outweigh the performance increases.
The use of includes helps with code organization and is no hindrance in itself. If you're loading up a bunch of things you don't need, that will slow things down, but that's another problem. Clarification: as you include pages, be aware of what you're adding to the load; don't carelessly include unneeded resources.
As already said, the use of multiple PHP includes helps keep your code organized, so it is not a bad idea. It only becomes a problem when too many includes are used, because the web server has to perform an I/O operation for each include you have.
If you have a large web application, you can speed it up using a PHP accelerator, which caches the compiled bytecode from the PHP compiler in shared memory. If a file has lots of PHP includes, they will be compiled just once; subsequent requests for that file hit the cache, so the load-and-compile work for those includes is skipped.
It really depends on what you want to do. If you have a piece of code that is used all the time, it is much more convenient to include it instead of copying and pasting it everywhere, and that makes your code clearer, not slower. But if you include all the functions or classes you have written without actually using them, of course that's not good practice. I would suggest using a framework (like CodeIgniter or something else you find convenient), because it really helps sort these things out. Good luck!
The only reason I'm asking is because I always hear having too many stylesheets or scripts on a site creates more HTTP requests and slows page loading. I was wondering the same about PHP.
Do notice that an HTTP request is several orders of magnitude slower than a PHP include.
Several HTTP requests -> the client has to request and receive several files over the wire
Several PHP includes -> the client has to request and receive only one file
The includes, obviously, carry some server-side cost. But given your question... don't sweat it. Only on really large-scale PHP projects will you face such problems.