clearstatcache + include_path + sessions - php

I'm having a problem when we run an upgrade for our web application.
After the upgrade script completes and we access the web app via the browser, we get file-not-found errors on require_once() because we moved some files around and PHP still has the old directory structure cached.
If we wait out the default 120-second realpath_cache_ttl, everything resolves itself, but that delay is not acceptable for obvious reasons.
So I tried using clearstatcache(), with limited success. I created a separate file (clearstatcache.php) that only calls this function (it is a one-line file), and placed a call to it in our install script via curl:
<?php
clearstatcache(true);
This does not seem to work; however, if I call this file via the browser, everything immediately starts working.
I'm running PHP version 5.3.
I started looking at the request header differences between my browser and curl, and the only thing I can see that might matter is the PHPSESSID cookie.
So my question is: does the current PHPSESSID matter? (I don't think it should.) Am I doing something wrong with my curl command? I am using:
curl -L http://localhost/clearstatcache.php
EDIT: Upon further research, I've decided this probably has something to do with multiple Apache processes running. clearstatcache() will only clear the cache of the current Apache process; when the browser makes a request, a different Apache process serves it, and that process still has the old cache.

Given that the cache is part of the Apache child process thanks to mod_php, your solution here is probably going to be restarting the Apache server.
If you were using FastCGI (under Apache or another web server), the solution would probably be restarting whatever process manager you were using.
This step should probably become part of your standard rollout plan. Keep in mind that there may be other caches that you might also need to clear.
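If the upgrade is scripted, the restart can simply be appended to the end of the upgrade script. A minimal sketch, assuming apachectl is available and the script runs with sufficient privileges (use your distribution's service manager otherwise):
# Run after the new files are in place: gracefully restart Apache so every
# child process is replaced and its per-process realpath/stat caches are dropped.
apachectl -k graceful
# or, on Debian/Ubuntu-style systems:
# service apache2 reload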

Related

Apache 2.4 + PHP 7.2 prefork mpm + file uploads lead to partial upload errors after 20 seconds

I've been noticing an issue with a web application we have where the code that handles file uploads intermittently encounters file upload error 3 (a partial upload). I wasn't sure how our users were triggering this error, but I do know that the ones who did were uploading files over an unstable or slow internet connection (mobile hotspots, public wifi, etc.). So I tested this by using Chrome Dev Tools' throttling feature (fast 3G) and uploading a 10 MB file that would take around a minute or more to complete. We use the Dropzone.js library to handle uploads, by the way, with no chunking whatsoever. After exactly 22-23 seconds, the connection always seems to get aborted, but Apache still proceeds to handle the incomplete request body it receives and passes it to PHP, leading to a partial upload error.
I can't seem to figure out what is causing this to happen. The PHP config has max_execution_time and max_input_time set to 0 and -1 respectively. The POST and upload max sizes are set relatively high, and the file size doesn't even seem to matter: as long as the upload request takes longer than 22-23 seconds, the problem occurs. I tried disabling mod_reqtimeout and it didn't make a difference. Other things I've tried are tinkering with the Apache Timeout value and disabling keepalive, and it still gives me issues a little after 20 seconds every time (that time comes from the browser's network tab).
I don't see anything in Apache's error logs, and the access logs make these requests appear legitimate, since Apache continues to handle the incomplete request like normal.
I initially thought it might be Dropzone.js killing the ajax request connection, but I have also tested the same code on my local dev environment, which uses Laradock (software versions are slightly different, but still Apache 2.4 and PHP 7.2), and I cannot replicate the issue there, so it couldn't be a client-side problem.
We had the same issue and solved it thanks to this question.
However, there is no need to disable the module entirely.
The problem is caused by this bug in Apache 2.4.39:
https://bz.apache.org/bugzilla/show_bug.cgi?id=63325
As suggested in the bug report, you can explicitly set the default in the Apache config file.
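For example, a sketch of an explicit setting (the values below simply mirror the documented mod_reqtimeout defaults; verify them against the documentation for your Apache version):
# With the directive set explicitly, MinRate extends the timeout as long
# as the client keeps sending data, so slow uploads are not cut off at 20s.
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500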
Looks like it was mod_reqtimeout stopping my POST requests. I think my Apache changes just weren't propagating fully when I initially tried disabling it (I'm guessing due to a mix of me using a graceful restart and having keepalive enabled).

Multiple PHPSESSID collisions

I have noticed that multiple users per day are being assigned the same session_id. I am using PHP 7.2 now, but looking back through the history of user sessions, this has been happening since I was using PHP 5.4.
I am just using php defaults of session_start(), no custom session handler.
I have read that the session_id is a combination of the client IP and time, but given that I am using a load balancer, might that be limiting the randomness of the IP addresses?
What is the proper way to increase the uniqueness of session_ids to prevent collisions when using a load balancer?
If you are using Nginx, you may want to check whether FastCGI micro-caching is enabled and, if so, disable it. This has caused similar errors before, as noted in a PHP.net bug report involving PHP 7.1 running under nginx:
Bug #75496 Session ID Collision happened few times
After the first case [of collision] we changed the hash entropy PHP settings in php.ini, so the session_id is now 48 chars, but it didn't help to prevent the second case.
Solution:
FastCGI micro-caching in nginx was caching 200 responses together with their session cookies.
Of course, maybe it was set up wrong on our server, but it definitely has nothing to do with PHP.
Please see:
https://bugs.php.net/bug.php?id=75496
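If you do want to keep the micro-cache for anonymous traffic rather than disabling it outright, here is a sketch of an nginx location block that keeps session traffic out of it (the backend socket and cache zone name are placeholders for whatever your setup uses):
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php-fpm.sock;   # placeholder backend
    fastcgi_cache app_cache;               # placeholder cache zone
    # nginx already refuses to cache responses that carry a Set-Cookie
    # header unless that header is explicitly ignored, so don't add
    # "fastcgi_ignore_headers Set-Cookie". In addition, bypass the cache
    # for any request that already carries a PHP session cookie:
    fastcgi_cache_bypass $cookie_PHPSESSID;
    fastcgi_no_cache $cookie_PHPSESSID;
}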
I'm assuming you're running PHP-FPM on each web node, but I guess #2 and #3 would probably work when running PHP as an Apache module (mod_php) as well.
You've got a few options here:
1. Keep your load-balanced web nodes on Apache and have each one point to the same upstream PHP-FPM server. Obviously the third box running the single PHP-FPM process might need to be beefier, since it's handling PHP parsing for both web nodes.
2. Change the session storage to point to a file location (probably an NFS or SMB share) that both servers can access. I'm not sure I've ever done this honestly, but it seems like it would work. Really, your web files should probably be on an NFS/SMB share already so you can deploy changes to only one location.
3. Spin up a Redis server and have both web nodes' PHP-FPM processes use that for sessions.
Number three is probably the best option in most cases.
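If you go with Redis, the session handler is typically just configured in php.ini (or the FPM pool config) on each web node. A minimal sketch, assuming the phpredis extension is installed and the Redis box is reachable as redis.internal (a placeholder hostname):
; php.ini on each web node
session.save_handler = redis
session.save_path = "tcp://redis.internal:6379"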

CakePHP cookies not persisting after browser close

I am in the process of moving away from Apache in favor of nginx due to its lower resource consumption. I have set up an Ubuntu Server box with the LEMP stack installed. After moving all my applications over (3 CakePHP 2.0.5 apps, 1 WordPress install), everything seems to be working perfectly except for one thing - Cake's cookies suddenly disappear when the browser is closed.
I have created a very simple test PHP page to check whether cookies are working at all, and they are in fact working, just not in Cake. WordPress also has no trouble remembering me when I close my browser.
Using the Chrome developer tools, I inspected whether the cookie is being set at all, and it is.
The expiry date is even set a month into the future, so I don't understand why the cookie doesn't live past browser close. As soon as I fire my browser back up and navigate to my app, the cookie is gone.
One thing I did notice is that with my app running on Apache, the CAKEPHP cookie has the same value before and after closing the browser. However, on the nginx server, that cookie has a different value every time I close and re-open my browser.
I thought this might have to do with sessions, so I checked my session settings in core.php and it's set to let PHP do the session handling:
Configure::write('Session', array(
'defaults' => 'php'
));
I've checked my /tmp directory and session files are being created. I tried changing the session handler to cake so that Cake would store sessions in its app/tmp/sessions directory, and while the sessions are successfully created in that directory, my cookies are still lost on browser close.
Has anybody experienced this behavior between nginx and Cake before, or have any ideas as to why this might be happening?
The problem is related to encrypted cookies and the Suhosin patch. Apparently Suhosin ignores any mt_srand() and srand() calls you make and initializes the randomizer itself [see here]. Because Cake relies on these functions, Suhosin was interfering with my encrypted cookies. To fix it, I added these two lines to my php.ini file and rebooted the server (note that simply restarting nginx didn't work):
suhosin.srand.ignore = Off
suhosin.mt_srand.ignore = Off

APC doesn't cache files, but caches user data

APC doesn't cache files; it only caches user data. When I tested on localhost, APC cached all the files I used, but it doesn't work on my shared hosting. Is this a configuration issue?
According to the stats from my apc.php page (APC 3.0.19), APC doesn't use any memory, and phpinfo() shows that APC is enabled.
On localhost, when I access http://localhost/test.php, APC caches localhost/test.php (type: file) immediately. But on the shared host, I don't see it cache the file (it can cache a variable if I store one, but not the file):
apc_add('APC TEST', '123');
echo apc_fetch('APC TEST'); // this works
I want APC to cache test.php when I access test.php.
Is there a configuration setting that prevents APC from caching files, or is this a limitation of the shared hosting?
In response to your comment "APC is enabled, and apc.cache_by_default = 1; PHP is set up with CGI, I checked phpinfo()": that's the problem. If you run PHP over CGI, a new PHP process is created on every page load. As APC is bound to the PHP process, it is newly instantiated on every page access, too, so it obviously doesn't have any data in it. Your user-cache example only works because you store and fetch the variable within a single page load.
So: APC cannot work with PHP over CGI. Use FastCGI instead (it keeps the processes alive, thus making the cache work, and is generally faster).
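As a quick way to confirm which situation you're in, here is a small diagnostic sketch (it assumes the APC extension is loaded; interpret the SAPI name against your own phpinfo() output):
<?php
// "cgi" means a fresh PHP process per request, so APC's file/opcode cache
// starts empty every time; "cgi-fcgi" (FastCGI) or "apache2handler"
// (mod_php) means the process persists and the cache can accumulate.
echo php_sapi_name(), "\n";

// Number of files the opcode cache of *this* process currently holds.
$info = apc_cache_info();
echo count($info['cache_list']), " cached files\n";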
APC in CGI mode on shared hosting is generally not feasible, although it may be possible. Depending on your application, it may also be a security risk. As nikic said, you should be able to get it working with FastCGI, but even that is not easy, depending on your host. Here is a detailed account from someone who got it working; it may give you some help in trying to get it to work in CGI mode:
FastCGI with a PHP APC Opcode Cache
If your hosting is set up with PHP in FastCGI mode, APC may not work. Can you check this with a standard phpinfo() page?
edit: I stand corrected, the chosen answer is right. I confused CGI with FastCGI; CGI indeed will not work. But I want to note that even FastCGI is not that great with opcode caching.

PHP running as a FastCGI application (php-cgi) - how to issue concurrent requests?

EDIT: Update - scroll down
EDIT 2: Update - problem solved
Some background information:
I'm writing my own web server in Java, and a couple of days ago I asked on SO how exactly Apache interfaces with PHP so that I could implement PHP support. I learnt that FastCGI is the best approach (since mod_php is not an option). So I looked at the FastCGI protocol specification and managed to write a working FastCGI wrapper for my server. I have tested phpinfo() and it works; in fact, all PHP functions seem to work just fine (posting data, sessions, date/time, etc.).
My web server is able to serve requests concurrently (i.e. user1 can retrieve file1.html at the same time as user2 requests some_large_binary_file.zip); it does this by spawning a new Java thread for each user request (terminating when the request completes or the client connection is cancelled).
However, it cannot deal with two (or more) FastCGI requests at the same time. What it does is queue them up, so when request 1 is completed it immediately starts processing request 2. I tested this with two PHP pages, one containing sleep(10) and the other phpinfo().
How would I go about dealing with multiple requests? I know it can be done (PHP under IIS runs as FastCGI and it deals with multiple requests just fine).
Some more info:
I am coding under Windows, and the batch file I use to execute php-cgi.exe contains:
set PHP_FCGI_CHILDREN=8
set PHP_FCGI_MAX_REQUESTS=500
php-cgi.exe -b 9000
But it does not spawn 8 children; the service simply terminates after 500 requests.
I have done research and from Wikipedia:
Processing of multiple requests simultaneously is achieved either by using a single connection with internal multiplexing (i.e. multiple requests over a single connection) and/or by using multiple connections.
Now clearly the multiple-connections approach isn't working for me: every time a client requests something that involves FastCGI, my server creates a new socket to the FastCGI application, but the requests are not processed concurrently (they are queued up instead).
I know that internal multiplexing of FastCGI requests over the same connection is accomplished by giving each FastCGI request a unique request ID.
(also see the last 3 paragraphs of 'The Communication Protocol' heading in this article).
I have not tested this, but how would I go about implementing it? I take it I need some kind of FastCGI Java thread which contains a Map of some sort and a static function I can use to add requests to it. Then, in the thread's run() method, it would have a while loop; on every cycle it would check whether the Map contains new requests, and if so it would assign them a request ID, write them to the FastCGI stream, wait for input, and so on. As you can see, this quickly becomes complicated.
Does anyone know the correct way of doing this? Or any thoughts at all? Thanks very much.
Note, if required I can supply the code for my FastCGI wrapper.
Update:
Basically, I downloaded nginx and set it up to use PHP as a FastCGI application, and it suffered from the same problem as my server: it could not handle concurrent PHP requests. This leads me to believe my code is in fact correct, so either something is wrong with PHP or I am not setting it up correctly. Maybe it is because I am using Windows, as some lighttpd users claim Windows can't handle FastCGI properly (this doesn't make much sense). I'll install Linux sometime soon and report any progress with that.
Okay, I managed to find the cause of the problem. It wasn't my code at all. It's PHP: it cannot spawn additional php-cgi processes under Windows when running in FastCGI mode. Under Linux it works perfectly; I simply pointed my server at my Linux box's IP and it had no problems with concurrent FastCGI requests. Sucks, but I guess that's the way it is...
I did look deeper into the PHP source code after that and found that the section of code which responds to PHP_FCGI_CHILDREN is wrapped in #ifndef WIN32, so the developers must be aware of the issue.
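The usual workaround on Windows (a rough sketch; the ports and the number of listeners are arbitrary choices here) is to start several independent php-cgi.exe listeners yourself and have the web server distribute FastCGI connections across them:
REM PHP_FCGI_CHILDREN is ignored on Windows (see the #ifndef WIN32 above),
REM so launch several single-process listeners on separate ports instead.
set PHP_FCGI_MAX_REQUESTS=500
start php-cgi.exe -b 127.0.0.1:9000
start php-cgi.exe -b 127.0.0.1:9001
start php-cgi.exe -b 127.0.0.1:9002
start php-cgi.exe -b 127.0.0.1:9003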
This comes a little late, but I've written a spawner for php-cgi.exe on Windows. It's not perfect, but it might be what you need. Check it out here.
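If you start multiple listeners this way (by hand or via a spawner), the web server then has to be told about all of the ports. With nginx, for instance, a sketch of the relevant config (the upstream name is arbitrary, and the ports match the batch example above; the location block goes inside your server block):
# Round-robin FastCGI connections across the php-cgi listeners.
upstream php_backends {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
    server 127.0.0.1:9003;
}
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass php_backends;
}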
re: spawn-php python script...
Thanks #nosam that really helped.
For those wanting to get it working quickly, you'll need the following (on a 64-bit system):
ActivePython-2.7.2.5-win64-x64.msi
pywin32-217.win-amd64-py2.7.exe
ActivePython does not have older versions of these on their website, so you will need to do a bit of googling around to find a working mirror (there are plenty out there).
Once you have downloaded the source from Bitbucket, you may need to edit spawn-php.py to fix up the tab spacing, as Bitbucket seemed to mess up the tabs in the file, preventing it from running.
All in all, that saved my day for a busy little Windows website using nginx + FastCGI.
Thanks mate!
