Understanding “408 Request Timeout” on Apache with PHP

Issue description - Apache logs
I found entries similar to this one in the Apache log file:
166.147.68.243 [24/Feb/2013:06:06:25 -0500] 19 web-site.com "-" 408 - "-"
I’ve got a custom log format, and the 408 here is the status code. The log format is:
LogFormat "%h %t %D %V \"%r\" %>s %b \"%{User-agent}i\"" detailed
Normally a line in the log file looks like this:
184.73.232.108 [26/Feb/2013:08:38:16 -0500] 30677 www.site.com "GET /api/search... HTTP/1.1" 200 205 "Zend_Http_Client"
This is why the 408 error lines look strange to me: no request is logged, so I have no idea what should be optimized.
Questions
How should I tackle the issue?
What additional information or logs should I gather?
What might cause the issue? Is something wrong on the server, or is this purely a network connectivity problem?
I’m looking into this because a customer complained that he got a 408 error on his mobile phone. I found many such records in the log file, but I have to admit I don’t know what to do with them.
My own research
There are several questions on this subject here already, but they are much more specific, discussing issues with particular client software and scripts. In my case I just got the error when opening a page on an iPhone.
For example, in HTTP, 408 Request timeout it is suggested to issue a GET request before the POST. With a custom client I could do that, but I cannot control the behavior of the user’s browser.
Guess #1
While searching the Internet and thinking about the issue, I found https://serverfault.com/questions/383290/too-many-408-error-codes-in-access-log
The suggestion is to update the Timeout config parameter back to its default value.
#
# Timeout: The number of seconds before receives and sends time out.
#
Timeout 300
I tried the value 30 first because I thought 30 seconds should be enough. But even with the 300-second default value, I continue to get the errors in the log. I ran tail -f while writing this text and got more than 10 such lines in a few minutes.
To me this does not look like a complete solution.
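Note that on Apache 2.2.15 and later the mod_reqtimeout module can also produce 408s, independently of the Timeout directive, for clients that are slow to send their request. If that module is loaded, its directive looks like this (the values shown are the module's documented defaults, given here as a sketch):
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500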

After some study of the subject I arrived at the following answer. It was provided by our lead developer, and I think it gives a good explanation.
These errors are perfectly normal. They aren't a sign of a larger issue, but normal connections that are holding Apache open for longer than allowed.
For example, a client running its queries over and over kept connections to Apache open; Apache responded by shutting them down, appropriately.
If it hadn't, then a handful of people could take over our server and prevent anyone else from connecting.
Most often these errors come from systems looking for exploits, and you can recreate them by opening a telnet session and leaving it open.
At the same time, tail -f the access log, and within X time (KeepAliveTimeout) you'll see your IP pop up with the same error codes.
Back in the days of Apache 1.3 this error was common; with 2.2 it was removed, until enough of us asked for it to be brought back, since it gives an idea of how many clients are holding a port open without requesting an actual resource.
I think nothing else needs to be done here except making sure Timeout is set to some reasonable value, as described in the original question.
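As a quick check, the reproduction described above can be done like this (hostname and log path are placeholders for your own setup):
# terminal 1: open a raw connection and send nothing
telnet www.site.com 80
# terminal 2: watch the access log; once the timeout expires,
# a 408 line with "-" in place of the request should appear for your IP
tail -f /var/log/apache2/access.log | grep ' 408 '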

Actually, a lot of 408 messages in Apache logs are a result of the pre-fetching mechanism in modern browsers: the browser opens connections speculatively and never sends a request on some of them.
Looking at Apache logs over the last 3 years, the number of 408 errors has more than doubled for the same traffic.

If a proxy is set up in Apache and the back end is not responding in a timely fashion for some reason, the same 408 - - entries will be seen in the logs. Proxy timeouts are configured separately; that's why changing Apache's default Timeout doesn't seem to do anything about those requests.
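If that is your situation, the knobs to look at are mod_proxy's own timeouts rather than the global Timeout. A sketch (backend URL and values are placeholders):
# per-backend timeout on the mapping itself
ProxyPass /api/ http://backend.internal:8080/api/ timeout=60
ProxyPassReverse /api/ http://backend.internal:8080/api/
# or a global default for all proxied requests
ProxyTimeout 60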

All HTTP requests support a username and password. In the logs, a missing username shows as "-", a missing password also shows as "-", and a supplied password shows as "*".

Related

What's the net::ERR_HTTP2_PROTOCOL_ERROR about?

I'm currently working on a website which triggers a net::ERR_HTTP2_PROTOCOL_ERROR 200 error in Google Chrome. I'm not sure exactly what can provoke this error; I just noticed it pops up only when accessing the website over HTTPS. I can't be 100% sure it is related, but it looks like it prevents JavaScript from being executed properly.
For instance, the following scenario happens:
I'm accessing the website in HTTPS
My Twitter feed integrated via https://publish.twitter.com isn't loaded at all
I can notice in the console the ERR_HTTP2_PROTOCOL_ERROR
If I remove the code to load the Twitter feed, the error remains
If I access the website in HTTP, the Twitter feed appears and the error disappears
Google Chrome is the only web browser triggering the error: it works well on both Edge and Firefox.
(NB: I tried with Safari, and I have a similar kcferrordomaincfnetwork 303 error)
I was wondering if it could be related to the header returned by the server since there is this '200' mention in the error, and a 404 / 500 page isn't triggering anything.
Thing is, the error isn't documented at all. A Google search gives me very few results. Moreover, I noticed it appears on very recent Google Chrome releases; the error doesn't pop up on v.64.X, but it does on v.75+ (regardless of the OS; I'm working on a Mac, though).
Might be related to Website OK on Firefox but not on Safari (kCFErrorDomainCFNetwork error 303) neither Chrome (net::ERR_SPDY_PROTOCOL_ERROR)
Findings from further investigations are the following:
error doesn't pop on the exact same page if server returns 404 instead of 2XX
error doesn't pop on local with a HTTPS certificate
error pops on a different server (both are OVH's), which uses a different certificate
error pops no matter what PHP version is used, from 5.6 to 7.3 (framework used : Cakephp 2.10)
As requested, below is the returned header for the failing resource, which is the whole web page. Even though the error triggers on every page with an HTTP 200 header, those pages always load in the client's browser, but sometimes an element is missing (in my example, the external Twitter feed). Every other asset in the Network tab returns successfully, except the whole document itself.
Google Chrome header (with error): [screenshot]
Firefox header (without error): [screenshot]
A curl --head --http2 request in console returns the following success:
HTTP/2 200
date: Fri, 04 Oct 2019 08:04:51 GMT
content-type: text/html; charset=UTF-8
content-length: 127089
set-cookie: SERVERID31396=2341116; path=/; max-age=900
server: Apache
x-powered-by: PHP/7.2
set-cookie: xxxxx=0919c5563fc87d601ab99e2f85d4217d; expires=Fri, 04-Oct-2019 12:04:51 GMT; Max-Age=14400; path=/; secure; HttpOnly
vary: Accept-Encoding
Digging deeper with the chrome://net-export/ and https://netlog-viewer.appspot.com tools tells me the request ends with a RST_STREAM:
t=123354 [st=5170] HTTP2_SESSION_RECV_RST_STREAM
--> error_code = "2 (INTERNAL_ERROR)"
--> stream_id = 1
From what I read in this other post: "In HTTP/2, if the client wants to abort the request, it sends a RST_STREAM. When the server receives a RST_STREAM, it will stop sending DATA frames to the client, thereby stopping the response (or the download). The connection is still usable for other requests, and requests/responses that were concurrent with the one that has been aborted may continue to progress.
[...]
It is possible that by the time the RST_STREAM travels from the client to the server, the whole content of the request is in transit and will arrive to the client, which will discard it. However, for large response contents, sending a RST_STREAM may have a good chance to arrive to the server before the whole response content is sent, and therefore will save bandwidth."
The described behavior is the same as the one I can observe. But that would mean the browser is the culprit, and then I wouldn't understand why it happens on two identical pages with one having a 200 header and the other a 404 (same goes if I disable JS).
In my case it was no disk space left on the web server.
For several weeks I was also annoyed by this "bug":
net::ERR_HTTP2_PROTOCOL_ERROR 200
In my case, it occurred on images generated by PHP.
It was at header() level, and on this one in particular:
header('Content-Length: ' . filesize($cache_file));
It obviously did not return the exact size, so I deleted it and everything works fine now.
So Chrome checks the accuracy of the data transmitted via the headers, and if it does not correspond, it fails.
EDIT
I found why the Content-Length computed via filesize() was wrong: GZIP compression is active for the PHP output, so the compressed body no longer matches the file size on disk. Excluding the file in question from compression fixes the problem. Put this in the .htaccess:
SetEnvIfNoCase Request_URI ^/thumb\.php no-gzip dont-vary
It works, and we keep the Content-Length header.
I was finally able to solve this error after researching everything I thought might be causing it for 24 hours. I visited all the pages on the subject I could find, and I am happy to say that I found the solution.
If you are using NGINX, set gzip to off and add proxy_max_temp_file_size 0; in the server block, as shown below.
server {
    ...
    gzip off;
    proxy_max_temp_file_size 0;

    location / {
        proxy_pass http://127.0.0.1:3000/;
        ...
    }
}
Why? Because what was actually happening was that all the content was being compressed twice, and we don't want that, right?!
The fix for me was setting minBytesPerSecond in IIS to 0. This setting can be found in system.applicationHost/webLimits in IIS's Configuration Editor. By default it's set to 240.
It turns out that some web servers will cut the connection to a client if the server's data throughput to that client drops below a certain limit. This protects against "slow drip" denial-of-service attacks. However, the limit can also be triggered when an innocent user requests many resources all at once (such as lots of images on a single page): the server is forced to ration the bandwidth for each request so much that one or more requests drop below the throughput limit, which causes the server to cut the connection, and this shows up as net::ERR_HTTP2_PROTOCOL_ERROR in Chrome.
For example, let's say you request 10 GIF images all at once, and each GIF is 10 megabytes (100MB total). If your download speed from the server is 1MB per second, the server will have to divide that 1MBps amongst the 10 images somehow. Now, here is where it gets interesting, as how the bandwidth gets divided seems to be random:
The server may evenly divide the bandwidth by 10, resulting in 0.1MBps allocated to each image. None of the download speeds fall below the default IIS minBytesPerSecond limit of 240 bytes, so all the GIFs download successfully.
The server may serve the first 5 at 0.2MBps, and put the last 5 "on hold" at 0MBps, to be downloaded after the first 5. However, since 0MBps is below the default IIS minBytesPerSecond limit of 240 bytes, the server cuts the connection to the remaining downloads.
How IIS decides to divide the bandwidth across multiple connections is still unknown to me; it appears to be random from the testing I've done. If you have any insight, please comment below. Note: the web browser may actually be throttling the bandwidth too, so don't rule out the browser.
I was able to stop the connections from being cut by following these steps:
I used Chrome's Network Log Export tool at chrome://net-export/ to see exactly what was behind the ERR_HTTP2_PROTOCOL_ERROR error. I started the log, reproduced the error, and stopped the log.
I imported the log into the log viewer at https://netlog-viewer.appspot.com/#import, and saw an interesting event titled HTTP2_SESSION_RECV_RST_STREAM, with error code 8 (CANCEL).
I did some Googling on the term "RST_STREAM" (which appears to be an abbreviated form of "reset stream") and found a discussion between some people talking about an IIS setting called minBytesPerSecond (discussion here: https://social.msdn.microsoft.com/Forums/en-US/aeb01c46-bcdf-40ed-a417-8a3558221137). I also found another discussion where there was some debate about whether minBytesPerSecond was intended to protect against slow HTTP DoS (slow drip) attacks (discussion here: IIS 8.5 low minBytesPerSecond not working against slow HTTP POST). In any case, I learned that IIS uses minBytesPerSecond to determine whether to cancel a connection if it cannot sustain the minimum throughput. This is relevant in cases where a single user makes many requests to a large resource, and each new connection ends up starving all the other unfinished ones, to the point where some may fall below the minBytesPerSecond threshold.
To confirm that the server was canceling requests due to a minBytesPerSecond error, I checked my server's HTTPERR log at c:\windows\system32\logfiles\httperr. Sure enough, I opened the file and did a text search for "MinBytesPerSecond" and there were tons of entries for it.
After I changed minBytesPerSecond to 0, I was no longer able to reproduce the ERR_HTTP2_PROTOCOL_ERROR error. So it appears the error was being caused by my server (IIS) canceling the request because the throughput rate from my server fell below the minBytesPerSecond threshold.
So for all you reading this right now, if you're not using IIS, maybe there is a similar setting related to minimum throughput rate you can play with to see if it gets rid of the ERR_HTTP2_PROTOCOL_ERROR error.
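For reference, a sketch of where that IIS setting lives, matching the Configuration Editor path above (edit applicationHost.config directly, or use the equivalent appcmd one-liner):
<!-- applicationHost.config -->
<system.applicationHost>
  <webLimits minBytesPerSecond="0" />
</system.applicationHost>
%windir%\system32\inetsrv\appcmd.exe set config -section:system.applicationHost/webLimits /minBytesPerSecond:"0" /commit:apphost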
I experienced a similar problem: I was getting ERR_HTTP2_PROTOCOL_ERROR on one of the HTTP GET requests.
I noticed that a Chrome update was pending, so I updated the Chrome browser to the latest version, and the error was gone the next time I relaunched the browser.
I encountered this because the HTTP/2 server closed the connection while sending a big response to Chrome.
Why?
Because of a setting on the HTTP/2 server, named WriteTimeout.
I had this problem with an Nginx server exposing a Node.js application to the outside world. Nginx compressed the files (CSS, JS, ...) with gzip, and in Chrome it looked like the error above.
The problem was solved when we found that the Node.js server was also compressing the content with gzip. This double compression somehow led to the problem; disabling compression in Node.js solved the issue.
I didn't figure out what exactly was happening, but I found a solution.
The CDN feature of OVH was the culprit. I had it installed on my host service but disabled for my domain because I didn't need it.
Somehow, when I enable it, everything works.
I think it forces Apache to use the HTTP2 protocol, but what I don't understand is that there indeed was an HTTP2 mention in each of my headers, which I presume means the server was answering using the right protocol.
So the solution for my very particular case was to enable the CDN option on all concerned domains.
If anyone understands better what could have happened here, feel free to share explanations.
I faced this error several times, and it was due to transferring large resources (larger than 3 MB) from the server to the client.
This error is currently being fixed: https://chromium-review.googlesource.com/c/chromium/src/+/2001234
Until then, changing these nginx settings helped me:
gzip on;
add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
expires off;
In my case, Nginx acts as a reverse proxy for a Node.js application.
We experienced this problem on pages with long Base64 strings. The problem occurs because we use CloudFlare.
Details: https://community.cloudflare.com/t/err-http2-protocol-error/119619.
Key section from the forum post:
After further testing on Incognito tabs on multiple browsers, then doing the changes on the code from a BASE64 to a real .png image, the issue never happened again, in ANY browser. The .png had around 500kb before becoming a base64, so CloudFlare has issues with huge lines of text on the same line (since base64 is a long string) as a proxy between the domain and the heroku. As mentioned before, directly hitting the Heroku url also never triggered the issue.
The temporary hack is to disable HTTP/2 on CloudFlare.
Hope someone else can produce a better solution that doesn't require disabling HTTP/2 on CloudFlare.
In our case, the reason was an invalid header.
As mentioned in Edit 4:
take the logs
in the viewer choose Events
choose HTTP2_SESSION
Look for something similar to:
HTTP2_SESSION_RECV_INVALID_HEADER
--> error = "Invalid character in header name."
--> header_name = "charset=utf-8"
By default nginx limits upload size to 1MB.
With client_max_body_size you can set your own limit, as in
location /uploads {
...
client_max_body_size 100M;
}
You can also set this on the http or server block instead (see here).
This fixed my issue with net::ERR_HTTP2_PROTOCOL_ERROR
Just posting here to let people know that ERR_HTTP2_PROTOCOL_ERROR in Chrome can also be caused by an unexpected response to a CORS request.
In our case, the OPTIONS request was successful, but the PUT that followed, which should have uploaded an image to our infrastructure, was denied with a 410 (because of a missing configuration allowing uploads), resulting in Chrome issuing an ERR_HTTP2_PROTOCOL_ERROR.
When checking in Firefox, the error message was much more helpful:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://www.[...] (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 410.
My recommendation would be to check an alternative browser in this case.
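For what it's worth, a minimal sketch of a permissive CORS response from a PHP endpoint (origin, methods, and headers are placeholders; in the case above the actual fix was the server-side upload configuration, not the headers):
<?php
// Hypothetical CORS headers for an upload endpoint; tighten these for production.
header('Access-Control-Allow-Origin: https://www.example.com');
header('Access-Control-Allow-Methods: OPTIONS, PUT');
header('Access-Control-Allow-Headers: Content-Type, Authorization');

// Preflight (OPTIONS) requests only need a successful, empty response.
if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
    http_response_code(204);
    exit;
}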
I'm not convinced this was the issue but through cPanel I'd noticed the PHP version was on 5.6 and changing it to 7.3 seemed to fix it. This was for a WordPress site. I noticed I could access images and generic PHP files but loading WordPress itself caused the error.
It seems many issues may cause ERR_HTTP2_PROTOCOL_ERROR: in my case it was a minor syntax error in a PHP-generated header, Content-Type : text/plain. You might notice the space before the colon... that was it. It works with no problem when the colon is right next to the header name, as in Content-Type: text/plain. It only took a million hours to figure out... The error happens with Chrome only; Firefox loaded the object without complaint.
If simply restarting, e.g., Chrome Canary with a fresh profile fixes the problem, then one is surely the "victim" of a failed Chrome Variation! Yes, there are ways to opt out of being a guinea pig in Chrome's field testing.
In my case, header parameters cannot be set to null or an empty string:
{
    'Authorization': Authorization // Authorization can't be null or ''
}
I got the same issue (ASP, C# - HttpPostedFileBase) when posting a file that was larger than 1MB (even though the application doesn't have any file size limit); for me, simplifying the model class helped. If you get this issue, try removing some parts of the model and see if it helps in any way. Sounds strange, but it worked for me.
I had been experiencing this problem for the last week as I tried to send DELETE requests to my PHP server through AJAX. I recently upgraded my hosting plan to one with an SSL certificate on the host that stores the PHP and JS files. Since adding the SSL certificate I no longer experience the issue. Hoping this helps with this strange error.
I also faced this error, and I believe there can be multiple reasons behind it. Mine was that ARR (Application Request Routing) was timing out.
In my case, the browser was making a request to a reverse proxy site where I had set my redirection rules, and that proxy site eventually requested the actual site. For huge data it was taking more than 2 minutes 5 seconds, while the Application Request Routing timeout for my server was set to 2 minutes. I fixed this by increasing the ARR timeout with the steps below:
1. Go to IIS
2. Click on server name
3. Click on Application Request Routing Cache in the middle pane
4. Click Server Proxy settings in right pane
5. Increase the timeout
6. Click Apply
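If you prefer the command line, the same ARR proxy timeout can, to my knowledge, be set via appcmd (the section name and the hh:mm:ss format are my assumption based on ARR's standard configuration; verify against your IIS version):
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/proxy /timeout:"00:05:00" /commit:apphost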
My team saw this on a single JavaScript file we were serving up. Every other file worked fine. We switched from HTTP/2 back to HTTP/1.1 and then got either net::ERR_INCOMPLETE_CHUNKED_ENCODING or ERR_CONTENT_LENGTH_MISMATCH instead. We ultimately discovered that a corporate filter (Trustwave) was erroneously detecting an "infoleak" (we suspect it detected something in our file/filename that resembled a social security number). Getting corporate to tweak this filter resolved our issues.
For my situation this error was caused by having circular references in json sent from the server when using an ORM for parent/child relationships. So the quick and easy solution was
JsonConvert.SerializeObject(myObject, new JsonSerializerSettings { ReferenceLoopHandling = ReferenceLoopHandling.Ignore })
The better solution is to create DTOs that do not contain the references on both sides (parent/child).
I had another case that caused an ERR_HTTP2_PROTOCOL_ERROR that hasn't been mentioned here yet. I had created a cross reference in IOC (Unity), where I had class A referencing class B (through a couple of layers), and class B referencing class A. Bad design on my part really. But I created a new interface/class for the method in class A that I was calling from class B, and that cleared it up.
I hit this issue working with Server Sent Events. The problem was solved when I noticed that the domain name I used to initiate the connection included a trailing slash, e.g. https://foo.bar.bam/ failed with ERR_HTTP_PROTOCOL_ERROR while https://foo.bar.bam worked.
In my case (nginx on Windows proxying an app while serving static assets on its own), the page was showing multiple assets, including 14 larger pictures; the errors were shown for about 5 of those images exactly after 60 seconds. It was the default send_timeout of 60s making those image requests fail; increasing send_timeout made it work.
I am not sure what causes nginx on Windows to serve those files so slowly: it is only 11.5MB of resources, which takes nginx almost 2 minutes to serve. But I guess that is a subject for another thread.
In my case, the problem was that Bitdefender served me a local SSL certificate while the website was still without a certificate.
When I disabled Bitdefender and reloaded the page, the actual valid server SSL certificate was loaded, and the ERR_HTTP2_PROTOCOL_ERROR was gone.
In my case, it was WordPress that now requires PHP 7.4 and I was running 7.2.
As soon as I updated, the errors disappeared.
Happened again and this time it was the ad-blocker that didn't like the name of my images (yt.png, ig.png, url.png). I added a prefix and all loaded ok.
In my case, the clock on my computer (the browser client) was wrong. I synced it using the Windows settings, and the error went away.
I had line breaks in the Content-Security-Policy in my nginx.conf that produced this error when used in a Docker container running in Kubernetes on GCP (serving Angular, but I doubt that matters).
Putting them all back on the same line made the problem go away.
A curl -v helped diagnose it:
http2 error: Invalid HTTP header field was received: frame type: 1, stream: 1, name: [content-security-policy], value: [script-src 'unsafe-inline' 'self....
It was much easier to edit on separate lines but never again!
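For illustration, the shape that works is a single-line header value (the policy content here is a placeholder, not the poster's actual policy):
# nginx.conf: keep the whole value on one line
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'" always;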

HTTP appending GET input twice after enabling SSL Certification (HTTPS)

We have upgraded our hosting platform to the latest tech stack, which includes a PHP update from version 7.0 to 7.3, and enabled an SSL certificate.
After the upgrade, one of our user authentication methods fails, although it was working until the hosting platform upgrade.
Here is a copy of the PHP code, codecheck.php:
<html>
<body>
<?php
$header = "Content-Type: application/json";
header($header);
$code = $_GET["code"];
$codelistFile = "./codelist.txt";
$codeList = file( $codelistFile, FILE_SKIP_EMPTY_LINES);
$codelistOutput = sprintf('%s%s', $code, "\r\n" );
file_put_contents( $codelistFile, $codelistOutput, FILE_APPEND);
?>
</body>
</html>
Here is the content of codelist.txt before the platform upgrade (with PHP version 7.0):
65cafead50f6d205d66f90c74f1683344ca86c8cc60fc0370c278ecb880da5c8
6e85e436538335da64f6e9172bd4191686e591aa390cca69acb9346668a48bd5
Here is the content of codelist.txt after the platform upgrade (with PHP version 7.3):
774cad9dd07761fe79db8baa9370a3dd84abca558c73c1f46b39e7c996a26d70?code=774cad9dd07761fe79db8baa9370a3dd84abca558c73c1f46b39e7c996a26d70
f10bb27fb82b0d539d3607012655012764c60794cc656aa6912eccc16d927a82?code=f10bb27fb82b0d539d3607012655012764c60794cc656aa6912eccc16d927a82
Here the value of code is repeated, along with the 'code' text itself, so the value of code does not match when it is compared.
Here is what I can see in the ssl_access log files:
ssl_access.log-20190629:79.1.200.79 - - [29/Jun/2019:07:46:24 +0100] "GET /codelist.php?code=ae21250db8b20cac3b7016e6d36a63de5846d537f032ed841a3e5c9121202cf4?code=ae21250db8b20cac3b7016e6d36a63de5846d537f032ed841a3e5c9121202cf4 HTTP/1.1" 200 19 "-" "Registration"
From this log file, I can see that all GET requests to the server have the data appended twice.
I would expect it would be something like,
example.com/?code=123456789
but not as
example.com/?code=123456789?code=123456789
I am very new to PHP and HTTPS, so please help me figure out the issue. Thank you.
Here is an update:
As suggested, the issue seems to be with the SSL rewriting.
Here is the code from the desktop app that connects and checks the code with the server.
C++:
CString RegistrationServer::Uri( CString page, CString code )
{
CString sServer;
sServer.Format("http://www.mywebsite.com/%s?code=%s", page, code);
//Here page=codecheck.php and code = 10;
return sServer;
}
Here is the log when the request is submitted through the desktop app:
27.62.66.34 - - [30/Jun/2019:21:55:51 +0100] "GET /codecheck.php?code=10?code=10 HTTP/1.1" 200 - "-" "Hack-o-Matic ver 0.01"
I can simulate the same request through a web browser as below:
https://www.mywebsite/codecheck.php?code=10
Here is the log when the request is submitted through the web browser:
27.62.66.34 - - [30/Jun/2019:21:46:28 +0100] "GET /codecheck.php?code=10 HTTP/1.1" 200 - "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36"
You can see the difference between the two requests: http vs https.
When the request comes from the desktop app, which uses http, the code data is appended twice.
It appears that changing the desktop app to use https would fix the issue, but the desktop app is something we cannot change.
So we have to rely on a fix from the server side, but our hosting company doesn't seem to understand the problem exactly.
They have been analysing the issue for the last 3 days, coming up with fixes such as Google API call fixes, but that isn't helping with our real issue.
I'm not sure if I'm missing better phrases/terms to explain this issue to them. Please let me know if there is a better way to explain it to our hosting company.
If nothing works out, can I ask them to remove the SSL certificate?
Another Update:
Here is the response from our hosting company:
"We have referred this to our engineers and they confirmed that this only happens when calling http and not https. You need to use https now, since you have enabled SSL."
Latitude-E6540:~$ curl -I http://www.mywebsite.com/codecheck.php?code=10
HTTP/1.1 301 Moved Permanently
Server: nginx/1.15.8
Date: Mon, 01 Jul 2019 11:03:47 GMT
Content-Type: text/html
Content-Length: 169
Connection: keep-alive
Location: https://www.mywebsite.com/codecheck.php?code=10?code=10
Strict-Transport-Security: max-age=15768000
Our engineers made some tests and they were not able to replicate when they set to https.
Latitude-E6540:~$ curl -I https://www.mywebsite.com/codecheck.php?code=10
HTTP/1.1 200 OK
Server: nginx/1.15.8
Date: Mon, 01 Jul 2019 11:03:35 GMT
Content-Type: application/json
Connection: keep-alive
Strict-Transport-Security: max-age=15768000
Here is the log from the server:
213.171.217.184 - - [01/Jul/2019:12:03:35 +0100] "HEAD /usage7.php?code=10 HTTP/1.1" 200 - "-" "curl/7.58.0"
They confirmed that this looks to be something with our local software settings, since it only seems to happen in one case: after submitting the requests through a browser, the HTTP GET data is not appended twice, but when the same is submitted through the desktop software, the HTTP GET data is appended twice.
What I wanted to ask is this: from the curl output itself, where I can see the code appended twice when the request is made over http, is there any clue as to where the issue resides?
Location: https://www.mywebsite/codecheck.php?code=10?code=10
How to solve PHP upgrade errors:
Post-event, how to find, diagnose and fix errors apparently caused by PHP updates?
1) Check your scripts for PHP errors.
2) Check changes to your php.ini file caused by the updates. Depending on your system and upgrade method, the php.ini file may have been adjusted or even replaced by a new default one. Read the Migration Notes to see if this may apply to you. You will need to review and explore what's changed; also manually compare your reserved/backup php.ini with the current/new live one.
3) Read the PHP Migration Notes for each version you have upgraded into and then out of (these are best done from oldest to newest).
4) Read the corresponding PHP Changelog(s) and search that text (it's loooong) for the functions you found to be failing in step (1).
For your specific instance: your code is of very low quality (you are sending HTTP headers after you have already sent HTML output), so the issue may well be caused by PHP upgrading an already existing error from E_WARNING to E_ERROR, or similar.
Low-quality code is most easily fixed by turning on error_reporting(E_ALL); either in the scripts or in the php.ini, and reading the resulting error logs.
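As a sketch of both points combined, here is roughly how codecheck.php could be restructured; the key changes are that nothing is output before header() is called and that error reporting is switched on (the display settings are for debugging only):
<?php
// Surface everything the upgrade may have promoted from warning to error.
error_reporting(E_ALL);
ini_set('display_errors', '1'); // log instead of display on production

// Headers must be sent before ANY output, so no <html> above this line.
header('Content-Type: application/json');

$code         = isset($_GET['code']) ? $_GET['code'] : '';
$codelistFile = './codelist.txt';
file_put_contents($codelistFile, $code . "\r\n", FILE_APPEND);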
Good Luck.
Update
Even with this SSL log, I can see the value for code twice, and the same is written to the file. I would expect something like example.com/?code=123456789, not example.com/?code=123456789?code=123456789.
The fact that you have two ? signs means you should be exploring the code that sets the code= value; please update your question with this information: how is code set?
Your issue may be with your HTTP host routing (Apache, Nginx, etc.): your host is possibly double loading, first taking the HTTP request and then redirecting on to the HTTPS page with the original query string appended again, thus appending it twice.
I think one or both of the above is where your problem lies.
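To illustrate the second point (this is an assumption about the hosting setup, suggested only by the nginx Server header in your curl output): a redirect written like the first line below would produce exactly the doubling you see, because $request_uri already contains the query string:
# buggy: $request_uri is "/codecheck.php?code=10" and $args is "code=10",
# so the Location becomes /codecheck.php?code=10?code=10
return 301 https://$host$request_uri?$args;
# fixed: $request_uri alone already carries the query string exactly once
return 301 https://$host$request_uri;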
Update 2:
Comment by Thi:
Here is what my hosting company responded: "as per our engineers, the cause of the logs is due to the website making http (not https) calls to the Google API for css and other things. They have advised that you need to ensure that any code that relates to http is switched to https." There is a line like the one below in all of our HTML pages, and I have changed it to https, but it didn't help: <link href="fonts.googleapis.com/…" rel="stylesheet" type="text/css">
This relates to what I reference above about checking your server routing for HTTP and HTTPS protocols.
Solutions:
1) Update all your outgoing links to https:// (or simply //) so:
<link href="//fonts.googleapis.com/..." rel="stylesheet" type="text/css">
will always connect securely, if loaded securely.
2) Use Content Security Policy (CSP) Upgrade Insecure Requests flag to do just that; to force all http:// links within your website to be turned into https:// links by the client browser.
In your .htaccess, or equivalent file (with Apache's mod_headers, the header is set like this):
Header always set Content-Security-Policy "upgrade-insecure-requests"
However, insecure calls to 3rd-party resources will NOT be the cause of your code value being appended to your URL twice.

Random 403 errors with apache+php-fpm

On a server of mine running Ubuntu 14.04.5 with Apache 2.4.23 and php-fpm 7.0.11, I'm getting random 403 errors.
I say "random" because the pages I see in the logs with a 403 run fine when I try them. I also experienced it directly (I mean by visiting a site on the server with my browser): I got a 403 error, then retried (just refreshing) and got a 200.
The server is running about a dozen websites with various kinds of solutions (a couple of WordPress installs, a few old spaghetti PHP apps, mostly modern apps based on the Symfony framework).
I'd also be happy if someone could point me to some way to increase the verbosity of the logs, so I can try to resolve this issue myself. Currently I only see the 403 errors in the Apache vhost logs.
Is mod_evasive enabled? To check, run
ls /etc/apache2/mods-enabled/
If you see mod-evasive.load, the Apache module mod_evasive is enabled.
The goal of this module is to deny access with a 403 response when too many requests come from the same PC (IP), or when a lot of pages are viewed in a short amount of time. The IP is then blocked for a certain period of time.
Sometimes refreshing the page can work around the problem, but it is still annoying.
What you can do is:
1) disable it with
a2dismod mod-evasive
service apache2 restart
or
2) find the configuration file and modify its parameters, increasing the thresholds so mod_evasive is less sensitive; replace the default values with something like:
<IfModule mod_dosevasive.c>
DOSHashTableSize 3097
DOSPageCount 5
DOSSiteCount 100
DOSPageInterval 1
DOSSiteInterval 1
DOSBlockingPeriod 2
</IfModule>
MODEV_DOSPageCount
This is the threshold for the number of requests for the same page (or URI) per page interval. Once the threshold for that interval has been exceeded, the IP address of the client will be added to the blocking list.
MODEV_DOSPageInterval
The interval for the page count threshold; defaults to 1-second intervals.
etc. You can change them all.
All the parameters and recommended settings are explained here:
https://wiki.atomicorp.com/wiki/index.php/Mod_evasive

http error 500 every day only in a specific time of the day

I get HTTP 500 errors every day between 1:00 and 2:00, and only at that time.
My site is on a 1and1 shared server, and I think there must be some problem with a maintenance process scheduled at that time, because the error always appears at (more or less) the same time and the rest of the day everything is fine.
I've contacted 1and1 and they are investigating it, but I don't have much faith in them.
I've seen in the log that during this problematic period HTTP calls to static files such as images work (return code 200), but calls to a PHP file with a MySQL query fail with 500.
Could it be a problem with too many accesses to the database? Is a 500 error possible in such cases?
In these scripts I read a file located in a protected folder (rwx------) to get the user and password. I don't know if that matters.
What can I do to find out more about the problem?
Any ideas?
Thank you.
Some causes of 500 Internal Server Errors:
1. File permissions set incorrectly.
2. Coding errors in the .htaccess file.
Analyse the logs, which should give further information about the error.
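On the PHP side, if the nightly maintenance makes MySQL briefly unavailable, a script that dies on a failed connection will surface as a 500. A minimal sketch of handling that case explicitly (host and credentials are placeholders):
<?php
// Keep mysqli from raising fatal errors on connection failure.
mysqli_report(MYSQLI_REPORT_OFF);
$db = @new mysqli('localhost', 'db_user', 'db_password', 'db_name');

if ($db->connect_errno) {
    // Log the real reason and answer with a 503 instead of an opaque 500.
    error_log('DB connection failed: ' . $db->connect_error);
    http_response_code(503);
    header('Retry-After: 3600');
    exit('Service temporarily unavailable.');
}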
