Request timeout with 200 and refresh header - php

I have a simple PHP script running on Ubuntu 14, Apache 2.4.7 and PHP 5.5.9. Every once in a while the request stalls, and after about 40 seconds (it normally completes in under 100 ms) it responds with an empty page, a 200 OK and a Refresh header pointing back to the same URL. It seems to happen more often when there hasn't been a recent request to the script for a while.
Response:
HTTP/1.1 200 OK
Content-Type: text/html
Pragma: no-cache
Refresh: 1; URL=http://xxx.xxx.xxx.xxx/script.php?t=1407272586793
Connection: Close
Script content:
<?php
// Provide server time
header('Access-Control-Allow-Origin: *');
echo '{"time":"'. gmdate("Y-m-d H:i:s") .' +0000"}';
There doesn't seem to be a record of the request in either the Apache access or error logs.
I'm not sure how to find out at what point it is stalling. Has anyone experienced this before, or have any suggestions on how I might debug this further?
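One low-cost way to narrow this down (a sketch, not a known fix; the log path is a made-up example) is to have the script record when PHP actually starts handling the request, so you can tell whether the 40-second stall happens before PHP runs (e.g. Apache worker starvation) or inside the script:

```php
<?php
// Hypothetical debugging aid: log the moment PHP starts handling the
// request. If a stalled request never shows up here, the delay is in
// front of PHP (Apache, network); if it does, the delay is in the script.
$logFile = sys_get_temp_dir() . '/time-script.log'; // made-up path
$t = isset($_GET['t']) ? $_GET['t'] : '-';
file_put_contents($logFile, sprintf("[%s] start t=%s\n", gmdate('Y-m-d H:i:s'), $t), FILE_APPEND);

header('Access-Control-Allow-Origin: *');
$json = '{"time":"' . gmdate('Y-m-d H:i:s') . ' +0000"}';
echo $json;
```

Comparing these timestamps against the Apache access log (and mod_status output during a stall) should show which side is holding the request.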


Detecting 413 (or other HTTP Status) in PHP

I send data to a webserver with the POST or PUT method (the server is FreeBSD, Apache httpd, PHP 7.2).
I want to detect a 413 (Request Entity Too Large) in my PHP script, but var_dump(http_response_code()); gives 200 even when the uploaded data was too large (Apache httpd answers with its 413 page, and my script gets executed as well).
var_dump($_SERVER); does not show anything with 413 either.
How can I detect an HTTP status like 413 (or any other) in my script? And is it possible to let my script output the error message, not Apache?
My PHP script looks like this:
<?php
header('Content-Type: application/octet-stream');
echo ">---- output from php-script begins here\n";
if (http_response_code() != 200) { // <-- does not work, always 200; how do I detect Apache's 413 here?
    echo ">---- not ok\n";
} else {
    echo ">---- ok\n";
}
?>
I use this command to upload a file:
curl --head --silent --show-error --upload-file test-big-file.txt "my-domain.com/upload.php"
HTTP/1.1 413 Request Entity Too Large
Date: Thu, 09 Apr 2020 12:08:46 GMT
Server: Apache
Connection: close
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>413 Request Entity Too Large</title>
</head><body>
<h1>Request Entity Too Large</h1>
The requested resource does not allow request data with PUT requests, or the amount of data provided in
the request exceeds the capacity limit.
</body></html>
>---- output from php-script begins here
>---- ok
It does not make much sense to me that Apache prepends its error message to my PHP script's output. The user should get either an error message or the output of my script, but not both at the same time.
If I could detect this 413 in PHP, I could just output a meaningful message and exit the script.
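A hedged sketch of that idea: when Apache does run the script anyway, the declared body size is normally still visible in $_SERVER['CONTENT_LENGTH'], so the script can refuse oversized requests itself before reading the body. requestTooLarge and the 100 MB limit are made-up names/values for illustration:

```php
<?php
// Hypothetical workaround: compare the request's declared body size
// against our own limit and answer with a 413 ourselves.
function requestTooLarge($contentLength, $limitBytes)
{
    return $contentLength !== null && (int) $contentLength > $limitBytes;
}

$limit  = 100 * 1024 * 1024; // made-up 100 MB limit for illustration
$length = isset($_SERVER['CONTENT_LENGTH']) ? $_SERVER['CONTENT_LENGTH'] : null;

if (requestTooLarge($length, $limit)) {
    http_response_code(413); // has no effect if Apache already sent its own status line
    echo ">---- upload too large\n";
    exit;
}
```

This only helps against requests that reach PHP at all; if Apache's own LimitRequestBody fires first, the script never sees the body.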
SOLUTION
I found out that if I use https:// for uploading the file, Apache does not call my PHP script when the file is too big.
Only with http:// does Apache do this strange thing of showing the error document AND executing the PHP script. I have absolutely no clue why this is. However, since I want to use HTTPS anyway, the problem is gone for me.
Here is example output showing the difference between HTTP and HTTPS in my case (the custom ErrorDocument is empty, so no error output besides the headers is shown here; the final version will have a proper error document, of course):
curl --head --silent --show-error --upload-file test-big-file.txt "https://my-domain.com/test.php"
HTTP/2 413
date: Thu, 09 Apr 2020 17:22:23 GMT
server: Apache
content-type: text/html; charset=UTF-8
curl --head --silent --show-error --upload-file test-big-file.txt "http://my-domain.com/test.php"
HTTP/1.1 413 Request Entity Too Large
Date: Thu, 09 Apr 2020 17:22:28 GMT
Server: Apache
Upgrade: h2,h2c
Connection: Upgrade, close
Content-Type: text/html; charset=UTF-8
--> here is the output of the script that should not be executed <--
The line "--> here is the output of the script that should not be executed <--" is missing in the HTTPS call; it's only present in the HTTP call. Strange...
Notice: using HTTPS gets an HTTP/2 response instead of HTTP/1.1.
Anyway, it must be a misconfiguration by my web hoster or a bug in Apache's HTTP/1.1 processing, so the problem is solved by forcing everything to HTTPS.
Using ErrorDocument in .htaccess to show my own document on error, as suggested by @ÁlvaroGonzález, fixed it for me (users trying plain HTTP get a 301 Moved Permanently redirect to the correct HTTPS link). Works for me.
thanks
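For reference, the HTTPS redirect plus custom error document described above might look roughly like this in .htaccess (the error page file name is hypothetical; this is a sketch, not the poster's actual configuration):

```apacheconf
# Force plain-HTTP clients onto HTTPS with a 301
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

# Show our own page for 413 instead of Apache's default
ErrorDocument 413 /error-413.html
```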

PHP rand only works once

I got the following index.php on a test site:
<?php
$r = rand(1, 1000);
$mtr = mt_rand(1, 1000);
echo "rand(1, 1000): " . $r;
echo "<br>mt_rand(1, 1000): " . $mtr;
?>
For some reason I can only get it to run once when the page loads, giving me two random numbers as it should, and maybe once more if I reload the page with F5. But then it refuses to produce any new random numbers until a couple of minutes have passed.
I feel I am missing something obvious. The server is hosted by MissHosting.se and runs PHP 5.6. Customer support insists it is a code issue. I will be glad to provide any further information on request.
Thanks for the help!
The problem seems to come from the fact that your server employs some kind of cache. To rule out client-side caching (i.e. the browser cache), I requested the page several times with curl, which does not do any caching. So it is a server-side cache.
Now if we look at the headers with curl:
~$ curl http://sithu.net/testinggrounds/ -I
HTTP/1.1 200 OK
Date: Fri, 17 Feb 2017 16:09:36 GMT
Vary: Accept-Encoding
Content-Type: text/html; charset=UTF-8
X-Varnish: 6817501 6109691
Age: 9
X-Cache: HIT
X-Cache-Hits: 1
Accept-Ranges: none
Connection: keep-alive
The headers clearly indicate that the server does caching and we have hit the server cache (X-Cache: HIT and X-Cache-Hits: 1). So the next step would be to find out how/where you can change your server caching mechanism.
The X-Varnish header indicates that your server/hoster is using the Varnish HTTP Cache to do the caching.
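If the hoster's Varnish configuration can't be changed, the usual workaround is to have the script mark its response as uncacheable. This is a sketch that assumes something close to Varnish's default VCL, which declines to cache responses whose Cache-Control contains no-cache, no-store or private:

```php
<?php
// Hedged sketch: ask intermediate caches (here, Varnish with a
// default-like VCL) not to store this response, so rand() runs on
// every request instead of a cached copy being replayed.
function noCacheHeaders()
{
    return array(
        'Cache-Control: private, no-cache, no-store, must-revalidate',
        'Pragma: no-cache',
        'Expires: 0',
    );
}

foreach (noCacheHeaders() as $h) {
    header($h); // a no-op under the CLI, sent for real under Apache
}

echo "rand(1, 1000): " . rand(1, 1000);
```

Whether this actually bypasses the cache depends on the hoster's VCL; some setups ignore backend Cache-Control entirely, in which case only the hoster can exclude the URL from caching.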

cURL default GET request isn't recognized by PHP

I'm trying to create a simple web service that has to return various HTTP codes depending on some conditions, mainly the existence of files related to the specific resources requested via the URI. However, I'm stuck on a really strange behaviour I keep getting when I try to generate a 404 header via PHP.
The first snippet, that works, is as follows:
$isNotFound = TRUE;
if ($isNotFound) header('HTTP/1.1 404 Not Found');
Using a simple command-line cURL to request the URI behind which this script runs, I get:
$ curl -LI http://www.example.com/
HTTP/1.1 404 Not Found
Date: Wed, 18 Sep 2013 20:57:25 GMT
Server: Apache/2.2.22 (Ubuntu)
X-Powered-By: PHP/5.3.10-1ubuntu3.8
Vary: Accept-Encoding
Connection: close
Content-Type: text/html
Now, the second take is like this:
$isNotFound = FALSE;
if ($_SERVER['REQUEST_METHOD'] === 'GET') {
$isNotFound = TRUE;
}
if ($isNotFound === TRUE) {
header('HTTP/1.1 404 Not Found');
}
Running cURL again, this time I get this:
$ curl -LI http://www.example.com/
HTTP/1.1 200 OK
Date: ...
The headers are the same as the former, except for the status code. To check the obvious, I also printed the value of $isNotFound just before the last if, and it was indeed TRUE, so the header() call with the 404 code should be executed. I also added an exit() inside the last if, and another header() at the end of the script sending other codes (like 302), and the result is always that the header inside the if is ignored.
I managed to make the second script work by explicitly specifying the request method as GET in the cURL call:
$ curl -X GET -LI http://www.example.com/
HTTP/1.1 404 Not Found
Date: ...
I also suspected that cURL wasn't using GET as the default method, but printing the $_SERVER array showed that the request method was indeed GET.
So, what is the reason for this strange behaviour? Is it cURL's fault when using the implicit GET method, or is something happening inside PHP? Or maybe I'm so tired that I'm missing something trivial?
Thank you guys, and sorry for the long post.
Next time read the manual:
-I, --head
(HTTP/FTP/FILE) Fetch the HTTP-header only! HTTP-servers feature the
command HEAD which this uses to get nothing but the header of a
document. When used on an FTP or FILE file, curl displays the
file size and last modification time only.
(or your webserver log files, or your TCP stream)
With -I, curl issues a HEAD request, so $_SERVER['REQUEST_METHOD'] is 'HEAD', your condition never matches, and the script falls through to the default 200.
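If the not-found check should fire for HEAD requests too (so a plain `curl -I` behaves like a browser GET), one minimal sketch is to accept either verb; isCheckedMethod is a hypothetical helper name:

```php
<?php
// Sketch: treat HEAD like GET for the resource-existence check, since
// HEAD is defined as GET without a response body.
function isCheckedMethod($method)
{
    return in_array($method, array('GET', 'HEAD'), true);
}

$method = isset($_SERVER['REQUEST_METHOD']) ? $_SERVER['REQUEST_METHOD'] : 'GET';
if (isCheckedMethod($method)) {
    header('HTTP/1.1 404 Not Found'); // a no-op on the CLI
}
```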

PHP or Apache seems to be caching files read via file_get_contents or include (unwanted behaviour)

Our web application has version numbers that get served out to the client on each request so we can detect an update to the code (i.e. rolling updates) and display a popup informing them to reload to take advantage of the latest update.
But I'm experiencing some weird behaviour after updating the version number on the server: some requests return the new version number and some return the old, so the popup keeps popping up until you have reloaded the page a few times.
Originally I suspected Apache was caching files it read off disk via file_get_contents, so instead of storing the version number in a plain text file I now store it in a PHP file that gets included with each request, but I'm experiencing the exact same issue!
Does anyone have any ideas what might be causing Apache or PHP itself to serve out old information after I have done an update?
EDIT: I have confirmed it's not browser caching, as I can have the client generate unique URLs to the server (which it can deal with via rewrite) and I still see the same issue where some requests return the old version number and some the new; clearing the browser cache doesn't help.
EDIT 2: The response headers as requested
HTTP/1.1 200 OK
Date: Mon, 23 Jul 2012 16:50:53 GMT
Server: Apache/2.2.14 (Ubuntu)
X-Powered-By: PHP/5.3.2-1ubuntu4.7
Cache-Control: no-cache, must-revalidate
Pragma: no-cache
Expires: Sat, 26 Jul 1997 05:00:00 GMT
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 500
Connection: close
Content-Type: text/html
EDIT 3: Trying to reproduce this to capture the response headers, I found I could only make it happen by going through our full deploy process, which involves creating versioned folders storing the code and symlinking the relevant folder into the webroot. Just changing the version number wasn't enough! So it seems to be related to the symlinks I create.
I had the same problem when there was a change in the symlink. Have a look at https://bugs.php.net/bug.php?id=36555; it may be what you are looking for.
Try (as suggested in that bug report) setting realpath_cache_size to 0.
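Per that bug report, realpath_cache_size is a PHP_INI_SYSTEM setting, so it can't be changed with ini_set() at runtime; a php.ini sketch of the workaround (verify the exact behaviour against your PHP version):

```ini
; Disable the realpath cache so symlink swaps are picked up immediately.
; This trades a small per-request stat() cost for correctness on deploys.
realpath_cache_size = 0
realpath_cache_ttl = 0
```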

Test if X-Sendfile header is working

I am looking for a way to confirm that X-Sendfile is properly handling requests handed back to the web server by a script (PHP). Images are being served correctly, but I expected to see the header in curl requests.
$ curl -I http://blog2.stageserver.net/wp-includes/ms-files.php?file=/2011/05/amos-lee-feature.jpg
HTTP/1.1 200 OK
Date: Wed, 04 Jan 2012 17:19:45 GMT
Server: Cherokee/1.2.100 (Arch Linux)
ETag: "4dd2e306=9da0"
Last-Modified: Tue, 17 May 2011 21:05:10 GMT
Content-Type: image/jpeg
Content-Length: 40352
X-Powered-By: PHP/5.3.8
Content-Disposition: inline; filename="amos-lee-feature.jpg"
Configuration
Cherokee 1.2.100 with PHP-FPM 5.3.8 in FastCGI:
cherokee.conf: vserver!20!rule!500!handler!xsendfile = 1
(Set by vServer > Behavior > Extensions php > Handler: Allow X-Sendfile [check Enabled])
Wordpress Network / WPMU 3.3.1:
define('WPMU_SENDFILE', true); is set in wp-config.php just before wp-settings.php is included. This triggers the following code in WP's wp-includes/ms-files.php (around line 50), which serves up files for a particular blog:
header( 'X-Sendfile: ' . $file );
exit;
I have confirmed that the above snippet executes by adding an additional Content-Disposition header right before the exit; call. That Content-Disposition is present in the curl results above and is not in the original ms-files.php code. The code that was added is:
header('Content-Disposition: inline; filename="'.basename($file).'"');
Research
I have:
Rebooted php-fpm / cherokee daemons after making configuration changes.
Tried several tricks in the comments over at php.net/readfile and replaced the simple header in ms-files.php with more complete code from the examples:
php.net/manual/en/function.readfile.php
www.jasny.net/articles/how-i-php-x-sendfile/
codeutopia.net/blog/2009/03/06/sending-files-better-apache-mod_xsendfile-and-php/
Confirmed Cherokee support and tested with and without compression (links below), even though I don't think it applies since my images are serving correctly. I also found a suspiciously similar problem from a lighttpd post.
cherokee-project.com/doc/other_goodies.html
code.google.com/p/cherokee/issues/detail?id=1228
webdevrefinery.com/forums/topic/4761-x-sendfile/
Found a blurb here on SO that may indicate the header gets stripped:
stackoverflow.com/questions/7296642/django-understanding-x-sendfile
Tested that the headers above are consistent from curl, wget, Firefox, Chrome, and web-sniffer.net.
Found out that I can't post more than 2 links yet due to lack of reputation.
Questions
Will X-Sendfile be present in the headers when it is working correctly or is it stripped out?
Can the access logs be used to determine if X-Sendfile is working?
I am looking for general troubleshooting tips or information here, not necessarily specific to PHP / Cherokee.
Update
I have found a suitable way to confirm X-Sendfile or X-Accel-Redirect in a test or sandbox environment: Disable X-Sendfile and check the headers.
With Allow X-Sendfile disabled in Cherokee:
$ curl -I http://blog2.stageserver.net/wp-includes/ms-files.php?file=/2011/05/amos-lee-feature.jpg
HTTP/1.1 200 OK
Date: Fri, 06 Jan 2012 15:34:49 GMT
Server: Cherokee/1.2.101 (Ubuntu)
X-Powered-By: PHP/5.3.6-13ubuntu3.3
Content-Type: image/jpeg
X-Sendfile: /srv/http/wordpress/wp-content/blogs.dir/2/files/2011/05/amos-lee-feature.jpg
Content-Length: 40352
The image will not load in browsers but you can see that the header is present. After re-enabling Allow X-Sendfile the image loads and you can be confident that X-Sendfile is working.
According to the source on GitHub, X-Sendfile headers will be stripped.
If I'm skimming the file correctly, it only logs success when compiled in debug mode.
You could check the memory usage of sending large files with and without X-Sendfile.
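A sketch of that memory check, runnable on the CLI (the 8 MB size is made up, and /dev/null stands in for the client connection). When X-Sendfile works, PHP should look like the streaming case or better, since the web server delivers the file itself:

```php
<?php
// Compare peak memory of buffering a whole file in PHP versus
// streaming it in chunks. A working X-Sendfile setup avoids both.
$tmp = tempnam(sys_get_temp_dir(), 'xsf');
$fh  = fopen($tmp, 'wb');
for ($i = 0; $i < 8; $i++) {
    fwrite($fh, str_repeat('a', 1024 * 1024)); // write 8 MB, 1 MB at a time
}
fclose($fh);

// 1) Streaming in chunks: peak memory stays close to the baseline.
$before = memory_get_peak_usage();
$in     = fopen($tmp, 'rb');
$out    = fopen('/dev/null', 'wb'); // stand-in for the client connection
stream_copy_to_stream($in, $out);
fclose($in);
fclose($out);
$streamed = memory_get_peak_usage() - $before;

// 2) Buffering the whole file: peak memory jumps by roughly the file size.
$before   = memory_get_peak_usage();
$data     = file_get_contents($tmp);
$buffered = memory_get_peak_usage() - $before;
unset($data);
unlink($tmp);

printf("streamed: +%d bytes, buffered: +%d bytes\n", $streamed, $buffered);
```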
They are being stripped simply because having them present would defeat one of the reasons for using X-Sendfile in the first place: serving the file without the recipient learning its location on disk.
