Known problems with filemtime() on Windows - files getting touched arbitrarily? - php

Is there a known issue leading to file modification times of cache files on Windows XP SP 3 getting arbitrarily updated, but without any actual change?
Is there some service on a standard Windows XP installation (backup, sync, versioning, virus scanner) known to touch files? The files all have a .txt extension.
If there isn't, forget it. Then I'm getting something wrong in my cache routines, and I'll debug my way through.
Background:
I'm building a simple caching wrapper around a slow web site on a Windows server.
I am comparing the filemtime() timestamp to some columns in the database to determine whether a cached file is stale.
I'm having problems using this method because the modification time of the cache files seems to get updated in between operations without me doing anything. This results in stale files being displayed.
I'm the only user on the machine. The operating system is Windows XP; the web server is XAMPP Apache 2 with PHP 5.2.

You could set up logging* on the machine to find out what is changing your files. From your description I take it this happens frequently, so you might find Process Monitor (German link) to be a better solution for monitoring.
* I think you can set up logging with on-board tools as well; I'm just not sure anymore how.

The only mtime issue I can think of is the dreaded DST bug. It doesn't sound quite like what you're getting though.
Certainly there are other Windows tools that might modify a file behind your back, but typically it's user-level stuff like WMP screwing with the ID3 tags, or dodgy AV... not anything I would expect to be touching your cache files.
(Maybe you could try an equality comparison of mtimes rather than greater-than/less-than, only using the cache if there is an exact match? This at least means that if some anti-social bleeder is touching files it'll just slow you down a bit, instead of making you serve stale files. FWIW this is what Python does with its bytecode cache.)
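For illustration, such an equality check might look roughly like this (a sketch only; $pdo and the table/column names are made up):
// Sketch: treat the cache as valid only if the stored mtime matches exactly.
// $pdo, the "pages" table and "cached_mtime" column are assumptions for illustration.
function cacheIsFresh($cacheFile, PDO $pdo, $pageId)
{
    if (!is_file($cacheFile)) {
        return false;
    }
    clearstatcache(); // make sure we see the current mtime, not a cached stat result
    $mtime = filemtime($cacheFile);

    $stmt = $pdo->prepare('SELECT cached_mtime FROM pages WHERE id = ?');
    $stmt->execute(array($pageId));
    $stored = (int) $stmt->fetchColumn();

    // Equality instead of >= : if something touched the file, we simply
    // regenerate the cache entry rather than serve a stale one.
    return $mtime === $stored;
}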

Related

Laravel 5 losing sessions and .env configuration values in AJAX-intensive applications

I am using Laravel 5 (to be specific, "laravel/framework" version is "v5.0.27"), with session driver = 'file'.
I'm developing on Windows 7 64 bit machine.
I noticed that sometimes (once a week or so) I get unexpectedly and randomly logged out. Sometimes this happens even immediately after I log in. I added log messages to my auth logic code, but the log code was not triggered. Laravel behaved as if it had completely lost the session file.
Another, more serious issue was that sometimes, after debugging sessions (using Xdebug and NetBeans), Laravel started losing other files as well - .env settings, some Debugbar JS files, etc. The error log had messages like:
[2015-07-08 13:05:31] local.ERROR: exception 'ErrorException' with message 'mcrypt_encrypt(): Key of size 7 not supported by this algorithm. Only keys of sizes 16, 24 or 32 supported' in D:\myproject\vendor\laravel\framework\src\Illuminate\Encryption\Encrypter.php:81
[2015-07-08 13:05:31] local.ERROR: exception 'PDOException' with message 'SQLSTATE[HY000] [1044] Access denied for user ''#'localhost' to database 'forge'' in D:\myproject\vendor\laravel\framework\src\Illuminate\Database\Connectors\Connector.php:47
This clearly signals that the .env file was not read by Laravel, so it is using default settings:
'database' => env('DB_DATABASE', 'forge'),
'key' => env('APP_KEY', 'somekey'),
Losing files happened rarely, maybe once a month or so, and it always happened after debugging sessions. I always had to restart Apache to make it work again.
To stress-test the system and reproduce the issues reliably, I used a quick hack in my Angular controller:
setInterval(function () {
    $scope.getGridPagedDataAsync();
}, 500);
It is just a basic data request from Angular to Laravel.
And that was it - now I could reproduce the session losing and .env losing in 3 minutes or less.
I have developed AJAX-intensive web applications earlier on the same PC with the same Apache+PHP, but without Laravel, without .env, and I hadn't noticed such issues before.
While debugging through the code, I found out that Laravel is not using PHP's built-in sessions at all, but has implemented its own file-based session handling. Obviously, it does not provide the same reliability as default PHP sessions, and I'm not sure why.
Of course, in real-life scenarios my app won't be that AJAX-intensive, but in my experience, on some occasions just two simultaneous AJAX requests are enough to lose the session.
I have seen some related bug reports on Laravel for various session issues. I haven't yet seen anything about the .env issue, though, but it seems to suffer from the same problem.
My guess is that Laravel does not use file locks and waiting, thus if a file cannot be read for some reason (maybe locked by some parallel process or Apache) then Laravel just gives up and returns whatever it can.
Is there any good solution to this? Maybe it is specific to Windows and the problems will go away on a Linux machine?
I'm curious why the Laravel (or Symfony) developers haven't fixed their file session driver yet. I know that locking/waiting would slow it down, but it would be great to at least have some option to turn on "reliable sessions".
Meanwhile I'll try to step through Laravel code and see if I can invent some "quick&dirty" fix, but it would be much better to have some reliable and "best practices" solution.
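To illustrate the kind of locking/retry behaviour I have in mind, a rough sketch (not Laravel's actual code) could be:
// Sketch of a shared-lock read with a short retry loop - the behaviour I would
// expect from a "reliable" file-based session/config read. Function name is made up.
function lockedRead($path, $retries = 5)
{
    for ($i = 0; $i < $retries; $i++) {
        $handle = @fopen($path, 'rb');
        if ($handle !== false && flock($handle, LOCK_SH)) {
            $contents = stream_get_contents($handle);
            flock($handle, LOCK_UN);
            fclose($handle);
            return $contents;
        }
        if ($handle !== false) {
            fclose($handle);
        }
        usleep(100000); // wait 100 ms and try again instead of giving up
    }
    return false;
}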
Update about .env
The issue turned out not to be related to file locking. I found the Laravel bug report for the .env issue, which led me to a linked report for the Dotenv project, which, in turn, says that it is a core PHP issue. What disturbs me is that the Dotenv devs say Dotenv was never meant to be used in production, yet Laravel seems to rely upon Dotenv.
In https://github.com/laravel/framework/pull/8187 there seems to be a solution which should work in one direction, but a commenter says that in their case the issue was the opposite. Someone called crynobone gave a clever code snippet to try:
$value = array_get($_ENV, $key, getenv($key));
Another suggestion, on both the Dotenv and Laravel GitHubs, was to use "makeMutable()", but commenters report that this might break tests.
So I tried crynobone's code, but it did not work for me. While debugging, I found out that in my case, when things break down for concurrent requests, the $key cannot be found in getenv(), nor in $_ENV, and not even in $_SERVER.
The only thing that worked (a quick & dirty experiment) was to add:
static::$cached[$name] = $value;
to Dotenv class and then in helpers.php env() method I see that:
Dotenv::$cached[$key]
is always good, even when $_ENV and getenv both give nothing.
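Roughly, the experiment looks like this (heavily simplified; this is not the actual Dotenv source, just the shape of the change):
// Simplified sketch of the experiment - not the real Dotenv class.
class Dotenv
{
    public static $cached = array();

    public static function setEnvironmentVariable($name, $value = null)
    {
        // ...the original putenv()/$_ENV/$_SERVER handling stays as it is...
        putenv("$name=$value");
        $_ENV[$name] = $value;
        $_SERVER[$name] = $value;
        static::$cached[$name] = $value;   // the added line: keep our own copy
    }
}

// ...and a fallback in the env() helper (again, simplified):
function env($key, $default = null)
{
    if (array_key_exists($key, Dotenv::$cached)) {
        return Dotenv::$cached[$key];      // survives even when getenv()/$_ENV come up empty
    }
    $value = getenv($key);
    return $value === false ? $default : $value;
}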
Although Dotenv was not meant for production, I don't want to change our deployment and configuration workflow.
Next I'll have to investigate the session issues...
Addendum
Related Laravel bug reports (some even from version 4 and, it seems, not fixed):
https://github.com/laravel/framework/issues/4576
https://github.com/laravel/framework/issues/5416
https://github.com/laravel/framework/issues/8172
and an old article which sheds some light on what's going on (at least with session issues):
http://thwartedefforts.org/2006/11/11/race-conditions-with-ajax-and-php-sessions/
After two days of intensive debugging I have some workarounds which might be useful to others:
Here is my patch for Dotenv 1.1.0 and Laravel 5.0.27 to fix .env issues:
https://gist.github.com/progmars/db5b8e2331e8723dd637
And here is my workaround patch to make session issues much less frequent (or fix them completely, if you don't write to session yourself on every request):
https://gist.github.com/progmars/960b848170ff4ecd580a
I tested them with Laravel 5.0.27 and Dotenv 1.1.0.
Also recently recreated patches for Laravel 5.1.1 and Dotenv 1.1.1:
https://gist.github.com/progmars/e750f46c8c12e21a10ea
https://gist.github.com/progmars/76598c982179bc335ebb
Make sure you add
'metadata_update_threshold' => 1,
to your config/session.php for this patch to become effective.
All the patches should be applied to the vendor folder each time it gets recreated.
You might want to keep the session patch separate, because you update session.php just once, while the other parts of the patch have to be applied to the vendor folder each time it is recreated before deployment.
Be warned: "It works on my machine". I really hope Laravel and Dotenv developers will come up with something better, but meanwhile I can live with these fixes.
My personal opinion is that using .env to configure Laravel is a bad decision; having .php files that contain key:value style configuration was much better.
However, the problem you are experiencing is not PHP's fault, nor Apache's - it's most likely a Windows issue.
A few other things: Apache has a module, called mod_php, that integrates the PHP binary into Apache's own process or thread. The issue with this is not only that it is slow, but that getting another binary integrated into an existing one is super tricky and things might be missed. PHP also must be built with thread safety in this case; if it's not, weird bugs can (and will) occur.
To circumvent the problem of tricky integration of one program into another, we can avoid it completely and have .php served over the FastCGI protocol. This means that the web server (Apache or Nginx) takes the HTTP request and passes it to another "web" server - in our case, the PHP FastCGI Process Manager, or PHP-FPM.
PHP-FPM is the preferred way of serving .php pages - not only because it's faster (much, much faster than integrating via mod_php), but because it lets you easily scale your HTTP frontend horizontally and have multiple machines serve your .php pages.
However, PHP-FPM is what's called a supervisor process and it relies on process control. As far as I'm aware, Windows does not support process control the way *nix does, therefore php-fpm is unavailable for Windows (in case I am wrong here, please correct me).
What does all of this mean for you? It means that you should use software that's designed to play nicely with what you want to do.
This is the logic that should be followed:
A web server accepts HTTP requests (Apache or Nginx)
The web server validates the request, parses the raw HTTP request, determines whether the request is too big, and, if everything checks out, proxies the request to php-fpm.
php-fpm processes the request (in your case it boots up Laravel) and returns the HTML, which the web server sends back to the user.
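As a rough sketch of that hand-off (nginx-flavoured; the root path and the FPM address are assumptions on my part):
# Sketch: nginx handing .php requests to PHP-FPM over FastCGI.
server {
    listen 80;
    root /var/www/myproject/public;   # assumption: Laravel's public folder
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;   # PHP-FPM listening here (assumption)
    }
}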
Now, this process, while great, comes with a few issues, and one huge problem here is how PHP deals with sessions. A default PHP session is a file stored somewhere on the server. This means that if you have two physical machines serving your php-fpm, you're going to have problems with sessions.
This is where Laravel does something great - it lets you use encrypted, cookie-based sessions. They come with limitations (you can't store resources in them and there is a size limit), but a correctly built app wouldn't store too much info in a session in the first place. There are, of course, multiple ways of dealing with sessions, but in my opinion the encrypted cookie is super trivial to use and powerful. When such a cookie is used, it's the client who carries the session information, and any machine that has the decryption key can read the session, which means you can easily scale your setup to multiple servers - all they have to do is share the same decryption key (it's the APP_KEY in .env). Basically, you need to copy the same Laravel installation to every machine that will serve your project.
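For what it's worth, switching Laravel to the cookie driver is a one-line change (sketched for Laravel 5.0; treat it as an example, not a drop-in fix):
// config/session.php - switch from the file driver to the encrypted cookie driver
'driver' => env('SESSION_DRIVER', 'cookie'),

// or just set it in .env and leave config/session.php at its default:
SESSION_DRIVER=cookie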
The way I would deal with the issue that you have while developing is the following:
Use a virtual machine (let's say Oracle Virtualbox)
Install Ubuntu 14.04
Map a network drive to your Windows host machine (using Samba)
Use your preferred way of editing PHP files, but they would be stored on the mapped drive
Boot an nginx or Apache, along with php-fpm on the VM to serve your project
Now, what you gain with this setup is the following: you don't pollute your Windows machine with a program that listens on ports 80/443; when you're done working you can just shut the VM down without losing any work; you can easily simulate how your website would behave on an actual production machine; and you won't get surprises such as "it works on my dev machine but it doesn't work on my production machine", because you'd have the same software for both purposes.
These are my opinions, they are not all cold-facts and what I wrote here should be taken with a grain of salt. If you think what I wrote might help you, then please try to approach the problem that way. If not, well, no hard feelings and I wish you good luck with your projects.

WordPress on IIS 7 php-cgi hogging CPU

Running WordPress on IIS 7 (Windows Server 2008) with WP-SuperCache as per IIS.net's guide.
It was running great, but recently we changed the permissions on some folders and the administrator password, and now we're getting huge spikes in our CPU usage as a result of the php-cgi.exe processes.
This led me to believe it's not caching; however, the pages themselves have the "Cached with WP-SuperCache" comments at the bottom, and the caching seems to be working correctly.
What else could be the issue here?
I think I may have found a solution, or at least a workaround, to this problem; at least it seems to be working reliably for me.
Try setting the Max Instances setting, under IIS Server --> FastCGI Settings, to 1.
It seemed to me that only certain requests were causing a php-cgi.exe process to go rogue and hog the CPU, usually when updating a post. When reading other posts on this issue, one of them mentioned the Max Instances setting and that it defaults to 0, i.e. automatic. I wondered whether that might not have a good effect when things aren't as they should be. I'm guessing (though this isn't quite my field of expertise) that a certain request causes the process to lock up, so FastCGI just creates another while leaving the first in place. Somehow, only having a single instance seems to let PHP move on from the lock-up, and the CPU stays under control.
For servers with high levels of requests, setting FastCGI to only a single instance may not be ideal, but it certainly beats the delays I was getting before. Used in combination with WP-SuperCache and WinCache, things seem to be nipping along nicely now.
Looking at that task manager, it looks like it's missing the cache on every request. Plus, that article dates to 2008, so it's difficult to say whether the directions as written would still work. Something in WP-SuperCache could have changed.
I would recommend using W3 Total Cache. I've done extensive testing with it on Windows Server 2008 and IIS 7 and it works great. It is also compatible with, and leverages, the WinCache extension for PHP. It has some other great features too if you're interested: minification, CDN support, etc. It's a really great performance plugin for WordPress. You can get the plugin here: http://wordpress.org/extend/plugins/w3-total-cache/
some other things to check...
What size is the app pool? (# of processes?)
Make sure you are using PHP 5.3.
Make sure you are using WinCache.
Make sure to set MaxInstanceRequests to something less than PHP_FCGI_MAX_REQUESTS (see the config sketch below). Definitely do not allow PHP to handle recycling the app pool. The default is 10K requests. If you are seeing these results during a load test, then this might be the cause. Increase MaxInstanceRequests and keep it one less than PHP_FCGI_MAX_REQUESTS.
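Roughly, those two settings live here in applicationHost.config (a sketch from memory; verify the attribute names and the php-cgi path against your own config):
<!-- system.webServer/fastCgi - sketch only, values are examples -->
<fastCgi>
  <application fullPath="C:\php\php-cgi.exe" instanceMaxRequests="10000">
    <environmentVariables>
      <environmentVariable name="PHP_FCGI_MAX_REQUESTS" value="10001" />
    </environmentVariables>
  </application>
</fastCgi>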
Hope that helps.

File system optimizations (ext3)

I have a PHP application that for every request loads 1 ini file, and at least 10 PHP files.
As these same files are loaded for every single request, I thought about mounting them on a RAM disk, but I have been told that the Linux file system (ext3) will basically cache them in such a way that a RAM disk would not improve performance.
Can anyone verify this and possibly explain what is actually happening?
Many thanks.
The virtual file system layer of Linux (and not only Linux) uses a cache for virtually every filesystem. So yes, that's in place for ext3, too.
But you might be interested in something like APC, which stores the byte/intermediate code for PHP's Zend engine in memory.
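A typical APC setup is only a few php.ini lines (the values below are just examples):
; php.ini sketch - typical APC settings
extension=apc.so          ; php_apc.dll on Windows
apc.enabled=1
apc.shm_size=64M          ; older APC builds expect a plain number of MB, e.g. 64
apc.stat=1                ; set to 0 in production to skip the file mtime check on every request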

Httpd Process High memory usage and slow page loads

I am running wampserver on my windows vista machine. I have been doing this for a long time and it has been working great. I have completed loads of projects with this setup.
However, today, without me changing anything (no configuration changes, only PHP code changes), I find that pages of my site (those with user sessions or database access) are really slow to load - over 30 seconds, when they used to take 1 or 2 seconds.
When I have a look at the task manager, I can see that on page loads the httpd process jumps from 10 MB to 30 MB, 90 MB, 120 MB, 250 MB and then back down again.
I have tested previous PHP projects and they all seem to be slow as well!
What is going on?
Thanks all for any help on this confusing issue!
Check the following:
Check if the data-access library you use for your database has been changed/updated lately (if you use one).
Just a guess, but did you change your antivirus/firewall (or its settings) since the last time you checked those previous projects? More aggressive security can slow things down a lot.
Did you change the Apache/PHP/MySQL version in the WAMPSERVER menu?
Maybe you can try to reinstall WAMPSERVER (do this last, and only if it's not a hassle for you, because I really doubt it will help, but it can in some really, really weird cases).
But from experience, and from the memory usage you describe in your question, it seems that your SQL queries take a long time to execute and/or return a really large data set.
Try to optimise your queries; it can help with speed, but not really with memory usage (at least if the result set is the same). For memory, maybe you can use LIMIT to reduce the returned data set (if your design allows it - and it should).
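As a trivial sketch of the LIMIT idea ($pdo and the table/column names are made up):
// Fetch a bounded page of rows instead of pulling the whole table at once.
$limit = 50;   // page size - just an example
$stmt = $pdo->query('SELECT id, title FROM articles ORDER BY id LIMIT ' . (int) $limit);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);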
Since we don't really know what you do with your data, take note that "playing" with large data sets (like parsing large XML documents) can take a lot of time/memory (again, it depends a lot on what you do with all this data).
Bottom line: if nothing in this post helps, try to post more information on your setup and what exactly you do (even with code samples) when it's slow.
Try checking the size of your wamp log files.
i.e.
C:\wamp\logs
Sometimes, when they get really big, they can cause Apache to slow down.
Have you recently changed your network configuration or upgraded your system? That may be causing this issue through your network config or anti-virus/security software. People have had issues with ZoneAlarm causing this in the past, for example.
Also, if you've recently switched from typing "127.0.0.1" to "localhost" or moved around networks, you may benefit from removing the IPV6 localhost setting from C:\Windows\System32\drivers\etc\hosts if you have one:
change the line with
::1 localhost
to
# ::1 localhost
I am surprised that no one has suggested this. You should always try to see why things are really slowing down:
Use Performance monitor to see where the bottleneck is, first. (perfmon.exe)
Are hard page faults actually the bottleneck? Is your hard drive busy reading/writing to the pagefile? Check the length of the IO queue for the hard disk.
Are CPUs busy?
If nothing looks busy, use procmon to see if your php process is blocked on some system calls.
Hope this helps.
Though not related to finding database bottlenecks, Xdebug in combination with a cachegrind viewer (e.g. WebGrind, WinCacheGrind) can help you find the parts of your PHP code that take longest to execute.
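A minimal Xdebug 2.x profiler setup in php.ini might look like this (the extension path is a guess for a typical WAMP install):
; php.ini sketch - turn on Xdebug's profiler (Xdebug 2.x directive names)
zend_extension="c:/wamp/bin/php/php5.3.0/ext/php_xdebug.dll"   ; path is an assumption
xdebug.profiler_enable=1
xdebug.profiler_output_dir="c:/wamp/tmp"
; open the generated cachegrind.out.* files in WebGrind or WinCacheGrind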

APC not recommended for production?

I have started having problems with my VPS in that it would fail to serve the pages on all the websites. It just showed a blank page, or offered to download the .php file (luckily the code was not in the downloaded file :) ).
The server was still running, but this seemed to be a problem with PHP, since I could log in to WHM.
If I did an Apache restart, the sites would work again.
After some talks with the server support, they told me this is a problem with the APC extension, which they considered to be old and not recommended for production servers. So they removed it for now, to see if the same kind of failures would continue to appear.
I haven't read anywhere that APC could have such problems or that it's not always recommended; quite the contrary... everywhere people are saying to always use it.
The APC extension was installed via SSH and is the latest version.
Edit:
They also don't recommend Memcache and say that a more reliable extension would be eAccelerator.
Um, APC is current tech and almost a must for any performant PHP site.
Not only that, but it will ship as standard in PHP 6 (rather than being an optional module like it is now).
I don't know what your issue is/was but it's not APC being outdated or old tech.
I run several servers myself and the only time I have ever had trouble with APC was when trying to run it concurrently with Zend Optimizer. They don't work together so if I must use Optimizer (like if some commercial, third-party code requires it) I run eAccelerator instead of APC. Effectively 6 of one, half-dozen of another when it comes to performance but I really doubt APC is the problem here.
Just to add, memcached is only going to benefit you greatly if you are running multiple servers which need to access a shared data cache. Memcached does not do opcode caching like APC/eAccelerator/Xcache/etc.
The problem is not to do with APC. If APC had a problem, it would either show up in your PHP log file or you simply wouldn't be able to access your website until you adjusted APC. The problem is more likely with Apache itself. I've experienced the same problem as you with blank pages before, and it was due to mod_security playing up and preventing pages that looked "suspicious" from being sent. Also, memory usage in Apache is good at killing the server under load. I've also had experience with a web host that had compiled Apache with a memory leak, so every X number of requests (say 100,000) the server would crash! Most annoying.
Your web host doesn't sound like the most competent out there, as they are giving some bad advice, most likely based on ignorance.
APC should be used in production (with the stat check turned off in production, but on for development). You can get more stats about your APC setup while it's running by loading the APC status page that comes with it (apc.php); you get a nice page like this: http://drupal.org/files/images/APC%20Status-1.png
Memcache is also very heavily used, as it's distributed! The use of each is as follows:
APC is the fastest, as it works most closely with PHP, but it only works on the same server that executes the PHP itself, so its use is limited to that scope. It is used primarily as an opcode cache.
Memcache is like a very fast database spread over many computers working as one unit. However, a power cut will wipe the lot! Hence it is heavily used to take pressure off the persistent database. Facebook and many other sites have hundreds of servers running memcache.
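A minimal example of the difference in practice (the Memcache host/port and keys are just placeholders):
$data = array('some' => 'expensive result');   // whatever you computed

// APC user cache: same machine only, but very fast
apc_store('report', $data, 300);               // keep for 5 minutes
$cached = apc_fetch('report');

// Memcache: shared over the network by every web server in the pool
$mc = new Memcache();                           // pecl/memcache extension
$mc->connect('127.0.0.1', 11211);               // host/port are placeholders
$mc->set('report', $data, 0, 300);
$cached = $mc->get('report');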
My advice would be to find a web host that understands PHP. Fighting with web hosts over who's right and who's wrong is hard work... until you find a good one ;)
Sounds to me like they are pushing a product that they probably have referral kickbacks on.
I run my own servers (and have for a while) and I've never had this problem, nor any MAJOR problems with Memcache.
