I am going a little crazy here trying to find a solution to something that is probably pretty straightforward.
I have a group of reports on an intranet (not accessible to the outside world), and each report has an input form with a bunch of HTML inputs that vary the report data.
The problem is that when you hit back to the form from the report, the form is reset to its original state. I want it to cache (remember the HTML input variables), and all I can find is how to turn caching off; I want it on! I would prefer not to do this by storing everything in $_SESSION and $_COOKIE, as I have 120 reports with roughly 10 or so inputs each, so it's going to take forever to store every one of them and reload the variables on refresh.
I am not the server administrator, but I believe we are running an Apache 2.2 web server. These are all PHP/HTML based pages. Any advice would be great!
It is not to do with my browser, as other forms are being cached. I am more looking into what modules on the server need to be activated to allow caching and what directives I should put in the headers of the forms to allow caching. The intranet runs through a proxy, so I am thinking I will need Cache-Control to be public.
EDIT:
When I run the form page, the HTTP headers show me this, which I feel should be changed:
(under Response Headers)
X-Powered-By: PHP/5.3.3
Via: *[REMOVED]*
Server: Apache/2.2.3 (Red Hat)
Proxy-Connection: Keep-Alive
Pragma: no-cache
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Date: Wed, 13 Feb 2013 23:33:32 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 5191
Connection: Keep-Alive
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
I have a feeling I need to change the Cache-Control and Pragma values. Anyone know how to achieve this?
Try adding these headers to the top of the page:
header("Cache-Control: private, max-age=10800, pre-check=10800");
header("Pragma: private");
header("Expires: " . date(DATE_RFC822,strtotime("+2 day")));
NOTE: if the form submits and posts data to a second page, you may want to put this at the top of both pages. Also, make sure the code comes after any session_start(); if you are using sessions.
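Alternatively, since it is PHP's session handling that emits the Pragma: no-cache, no-cache Cache-Control, and 1981 Expires headers in the first place, you can tell the session module to send cache-friendly headers instead of overriding them afterwards. A minimal sketch (the 180-minute lifetime is an arbitrary example; the private limiter is in fact where max-age=10800, pre-check=10800 values like those above come from):

<?php
// Must run before session_start(): switch the session module from its
// default "nocache" limiter (which sends the no-cache headers above)
// to "private", which sends private, max-age headers instead.
session_cache_limiter('private');
session_cache_expire(180);   // cache lifetime in minutes (example value)
session_start();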
Try setting the autocomplete attribute of the inputs to on.
<input name="myinput" autocomplete="on" type="text">
I'm the sole developer building a LAMP web application for a small infancy-stage startup and have been crying myself to sleep over a bug that only occurs when using the web app in Internet Explorer 10-11 and Edge (Chrome, FF, and Opera work like a charm). Worse yet, it happens randomly, about 50% of the time after a user has authenticated and logged into the web app.
Here's what shows up in the DOM Explorer when inspecting:
HTTP/1.1 200 OK
Date: Wed, 17 Aug 2016 09:27:27 GMT
Server: Apache/2.4.12 (Ubuntu)
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 4972
Keep-Alive: timeout=15, max=97
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8
(several lines of binary data follow: the gzip-compressed response body rendered as raw text)
As one can see from the response headers, the server returned a status of 200, and there are no errors or warnings in the console. Under the 'Network' tab, everything appears to have returned with either 200 or 302, with the exception of a couple 404s when retrieving profile pictures from the LinkedIn REST API (the pics still show up though in the other 50% of the time that the page actually displays properly...). On the server side, there is nothing in the Apache error log, and syslog is clean. The actual content appears to be compressed, which shouldn't be a problem given that the server is specifying the content encoding as gzip. Either that, or I'm looking at encrypted content.
I'm running Apache 2.4.12 on Ubuntu 15.10. Content is (of course) served over HTTPS, and the cert doesn't expire for another year. The application is written in PHP, and this happens on both the staging and production servers. I've scoured SO, Serverfault, and Google for a similar problem but haven't been successful. If anyone has encountered this error before or has any possible idea as to what's going on, any help would be greatly appreciated.
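One way to narrow this down (a diagnostic sketch, not from the original post; the URL is a placeholder) is to fetch the page with a raw gzip Accept-Encoding and see how many times the body actually gzip-decodes. If it decodes twice, something such as ob_gzhandler stacked on top of zlib.output_compression or mod_deflate is compressing the output twice, which some browsers fail to undo:

<?php
// Request the raw, still-compressed body (PHP's HTTP stream wrapper
// does not auto-decompress when you set Accept-Encoding yourself).
$ctx = stream_context_create(['http' => ['header' => "Accept-Encoding: gzip\r\n"]]);
$raw = file_get_contents('https://staging.example.com/', false, $ctx); // placeholder URL
$once = @gzdecode($raw);
if ($once === false) {
    echo "body is not gzip-encoded\n";
} elseif (@gzdecode($once) !== false) {
    echo "body is double-gzipped\n";   // likely the bug
} else {
    echo "body decodes once (normal)\n";
}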
What is the definitive solution for avoiding any kind of caching of HTTP data? We can modify the client as well as the server, so I think we can split the task between the client and the server.
The client can append a random parameter to each request, e.g. http://URL/path?rand=6372637263. My feeling is that this alone does not work 100% of the time; there might be some intelligent proxies which can detect that… On the other hand, I think that if the URL is different from the previous one, the proxy cannot simply decide to send back some cached response.
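For example, a typical cache-buster in PHP might look like this (a sketch; note the answer below advises against this approach):

// Append a random value so each request URL is unique.
$url = 'http://URL/path?rand=' . mt_rand();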
On the server we can control a bunch of HTTP headers:
Expires: Tue, 03 Jul 2001 06:00:00 GMT
Last-Modified: {now} GMT
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
Cache-Control: post-check=0, pre-check=0
Pragma: no-cache
Any comments on this? What is the best approach?
Server-side cache control headers should look like:
Expires: Tue, 03 Jul 2001 06:00:00 GMT
Last-Modified: {now} GMT
Cache-Control: max-age=0, no-cache, must-revalidate, proxy-revalidate
Avoid rewriting URLs on the client because it pollutes caches, and causes other weird semantic issues. Furthermore:
Use one Cache-Control header (see RFC 2616), because behaviour with multiple entries is undefined. Also, the MSIE-specific entries in the second Cache-Control are at best redundant.
no-store is about data security. (It only means "don't write this to disk"; caches are still allowed to store the response in memory.)
Pragma: no-cache is meaningless in a server response - it's a request header meaning that any caches receiving the request must forward it to the origin.
Using both Expires (HTTP/1.0) and Cache-Control (HTTP/1.1) is not redundant, since proxies exist that only speak HTTP/1.0, or will downgrade the protocol.
Technically, the last modified header is redundant in light of no-cache, but it's a good idea to leave it in there.
Some browsers will ignore subsequent directives in a cache-control header after they come across one they don't recognise - so put the important stuff first.
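Put together in PHP, the set recommended above might look like this (a sketch; the Expires value just needs to be any date in the past):

<?php
// Send the recommended anti-caching headers before any output.
header('Expires: Tue, 03 Jul 2001 06:00:00 GMT');                        // any past date
header('Last-Modified: ' . gmdate('D, d M Y H:i:s') . ' GMT');           // "now"
header('Cache-Control: max-age=0, no-cache, must-revalidate, proxy-revalidate');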
Adding the header
Cache-Control: private
guarantees that a gateway cache won't cache such a response.
I'd also like to recommend Fabien Potencier's lecture about caching: http://www.slideshare.net/fabpot/caching-on-the-edge
To disable the cache, you should use
Expires: 0
Or
Cache-Control: no-store
If you use one, you should not use the other.
I am building a very simple page here: http://www.wordjackpot.com
My problem appears in Google Chrome only: when I reload the page, the images are reloaded each time as if there were no cache. I'm not sure if the problem comes from my code or from Chrome, because on stackoverflow.com, for example, images return HTTP code 304 when I reload the page.
Then my question is: what am I doing wrong ?
Thanks.
These are your return headers... you are explicitly telling browsers not to cache.
This will be an Apache (web server) setting.
Accept-Ranges: bytes
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Connection: keep-alive
Content-Length: 4026
Content-Type: image/png
Date: Tue, 03 Feb 2015 14:33:44 GMT
Pragma: no-cache
Server: Apache
Set-Cookie: 300gp=R3396092545; path=/; expires=Tue, 03-Feb-2015 15:46:10 GMT
X-Cacheable: Not cacheable: no-cache
X-Geo: varn34.rbx5
X-Geo-Port: 1011
X-Pad: avoid browser bug
Look at your HTTP headers; you have no-cache all over them.
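If the images are static files, the fix is on the Apache side rather than in PHP. A hedged sketch of what that might look like (assumes mod_expires is enabled, and that these headers really are set by Apache rather than by a CDN in front of it):

# .htaccess / vhost sketch: allow browsers to cache PNGs for a month.
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png "access plus 1 month"
</IfModule>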
I've noticed my sites are not ranking as well as they did before, and when I checked Webmaster Tools I saw that Googlebot cannot crawl pages that I can crawl perfectly well with my browser; I'm getting a 500 error.
The websites are not WordPress and use PHP.
What can be causing this problem?
This is the actual error in WMT:
HTTP/1.1 500 Internal Server Error
Date: Tue, 06 Nov 2012 21:04:38 GMT
Server: Apache
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Set-Cookie: PHPSESSID=blkss9toirna36p2mjl44htv01; path=/
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 3840
Connection: close
Content-Type: text/html
You may be blocking Googlebot with .htaccess, robots.txt or by some other means (maybe firewall settings?)
a. This is not good.
b. You should use WMT to get Crawl Stats/Crawl Error reports and use these to get a better understanding of the issue (at what URLs, and how often, it occurs...).
Also, try to look at your last Google Cache date (direct search the domain and click on the Cache link in the preview window)
This may be a temporary, downtime-related issue that will resolve itself, or a site-wide blocking rule that you'll need to change.
GL
If you're still having a problem with Googlebot receiving a 500 error code, I suggest you register with Google Webmaster Tools (not Analytics). If you choose Health, then Fetch As Google, you should get what Googlebot receives and see what the error is.
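If you want to reproduce the error outside WMT, one option is to request the page with Googlebot's User-Agent string. A sketch (example.com is a placeholder; some blocking rules key on the UA, so this won't catch IP-based blocks):

<?php
// Fetch a page pretending to be Googlebot and dump the response headers.
$ctx = stream_context_create(['http' => [
    'user_agent'    => 'Googlebot/2.1 (+http://www.google.com/bot.html)',
    'ignore_errors' => true,   // still read the body/headers on a 500
]]);
file_get_contents('http://example.com/', false, $ctx); // placeholder URL
print_r($http_response_header);  // first entry is the status line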
I had the same problem and discovered that it was one of the plugins that was causing this. Basically, I disabled every plugin and then re-enabled them one at a time, testing after each one.
It took about an hour to find the culprit, but now all is good.
I'm trying to stream an mp3 file with PHP and play it on the browser.
I'm using Ubuntu for both the server (Apache) and the client for testing. My code works on Chrome, but not on Firefox.
When I access the mp3 directly (so it's served by the web server) it works on Firefox as well, but comparing the headers that the web server generates with the headers I send in PHP, I couldn't find how to fix the problem. (I'm inspecting the headers using Firebug.)
Here are the webserver generated headers ( That does work ):
Accept-Ranges bytes
Connection Keep-Alive
Content-Length 490265
Content-Type audio/mpeg
Date Sun, 11 Mar 2012 04:01:45 GMT
Etag "22064e-77b19-4badff4a88200"
Keep-Alive timeout=5, max=100
Last-Modified Sat, 10 Mar 2012 09:15:52 GMT
Server Apache/2.2.20 (Ubuntu)
Here are the headers that are sent to the browser from my PHP script:
Accept-Ranges bytes
Cache-Control no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Connection Keep-Alive
Content-Length 490265
Content-Type audio/mpeg
Date Sun, 11 Mar 2012 04:16:00 GMT
Expires Thu, 19 Nov 1981 08:52:00 GMT
Keep-Alive timeout=5, max=100
Pragma no-cache
Server Apache/2.2.20 (Ubuntu)
X-Powered-By PHP/5.3.6-13ubuntu3.6
This is the code I use to stream the mp3:
header('Content-Length: ' . filesize($path)); // exact byte size of the file
header('Content-Type: audio/mpeg');
header('Accept-Ranges: bytes');               // advertise range-request support
readfile($path);                              // stream the file to the client
exit;
I also tried other headers, which didn't help, such as:
header('Content-Disposition: inline; filename="name.mp3"');
header('Expires: '.gmdate('D, d M Y H:i:s').' GMT');
header('Pragma: no-cache');
header('Cache-Control: no-cache');
But like I said, none of these fixed the problem.
Many thanks for any help,
Oded.
EDIT:
OK this appears to be extremely strange. After much debugging, I made sure that the headers and content of the PHP version and the webserver versions are the same, and then I found out what breaks it, but I have no idea why. Here is the scenario that breaks it:
1) Store a string of a path in $_SESSION in a previous script.
2) Read this string in the script that streams the mp3.
3) Use this string as the path to load the mp3 file.
If I do that, Firefox cannot play the file; when I press play on the mp3 player, it prints a "GstDecodeBin2: This appears to be a text file" message.
If I hard-code the path instead of using the $_SESSION, it works. The crazy thing is that I made absolutely sure that the path in the $_SESSION is correct! Remember that the headers and content of the PHP and webserver versions are identical!
The HTTP Accept-Ranges response header tells the browser that the server accepts range requests; the browser can then send a Range header with the starting and ending byte offsets it wants, which allows multi-part downloading of the same file. There are plenty of PHP implementations of this; here is one found on the PHP.net documentation page for fread():
http://www.php.net/manual/en/function.fread.php#106999
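For reference, a minimal sketch of serving a single byte range (suffix ranges like bytes=-500, multi-range requests, and input validation are all omitted; $path is assumed to point at the mp3 file):

<?php
$size  = filesize($path);
$start = 0;
$end   = $size - 1;                      // default: serve the whole file
if (isset($_SERVER['HTTP_RANGE']) &&
    preg_match('/bytes=(\d+)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
    $start = (int)$m[1];
    if ($m[2] !== '') {
        $end = min((int)$m[2], $size - 1);
    }
    header('HTTP/1.1 206 Partial Content');
    header("Content-Range: bytes $start-$end/$size");
}
header('Accept-Ranges: bytes');
header('Content-Type: audio/mpeg');
header('Content-Length: ' . ($end - $start + 1));
$fp = fopen($path, 'rb');
fseek($fp, $start);                      // jump to the requested offset
echo fread($fp, $end - $start + 1);      // emit just the requested bytes
fclose($fp);
exit;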
I found the problem by using Wireshark to monitor the requests. Earlier I used Firebug and HTTPFox, and they don't show all the requests!
Wireshark showed me that after the initial successful request there is another request for the same URI. This second request was not caught by xdebug and was missed by Firebug and HTTPFox. The problem is that this request does not include the PHPSESSID! Obviously, as a result, the session did not work, and because it did work on the first request, I was confused.
This seems to me like a bug in Firefox's media player module.
I can work around this by manually adding the PHPSESSID to the URL as a query-string parameter.
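A sketch of that workaround (stream.php and the file parameter are hypothetical names; note that putting the session id in a URL has security implications, so treat this as a workaround rather than a fix):

<?php
// Build the player URL with the session id appended, so the second,
// cookie-less request from the media player still finds the session.
$src = 'stream.php?file=song.mp3&' . session_name() . '=' . session_id();
echo '<audio controls src="' . htmlspecialchars($src) . '"></audio>';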