Until now my PHP application assumed HTTP/1.1 everywhere, so I defined all headers like so:
header("HTTP/1.1 500 Internal Server Error");
But now my server also supports HTTP 2 and I want to update all header responses with the right HTTP status code.
How do I get the HTTP protocol version of the request?
(My web server is nginx, but I guess it is irrelevant whether I am using nginx or Apache.)
The server protocol should be available through SERVER_PROTOCOL from the server environment, usually exposed through $_SERVER['SERVER_PROTOCOL'] inside your application.
From phpinfo() under Apache 2.4:
SERVER_PROTOCOL => HTTP/1.1
Changing /etc/nginx/fastcgi_params:
#fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param SERVER_PROTOCOL HTTP/2.0;
The header should be:
header($_SERVER['SERVER_PROTOCOL'].' 404 Not Found');
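As a sketch, you can wrap this in a small helper (status_line is an illustrative name, not a built-in) that falls back gracefully when SERVER_PROTOCOL is absent, e.g. on the CLI:

```php
<?php
// Build a status line from the request's protocol, falling back to
// HTTP/1.0 when SERVER_PROTOCOL is not set (e.g. on the CLI).
function status_line(int $code, string $reason): string
{
    $protocol = $_SERVER['SERVER_PROTOCOL'] ?? 'HTTP/1.0';
    return "$protocol $code $reason";
}

header(status_line(500, 'Internal Server Error'));
```

On PHP 5.4+ you can also simply call http_response_code(500) and let the SAPI emit the status line with the correct protocol for you.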
Related
I have the built-in PHP development server running on a random port, and I am trying to configure secure API requests through an Apache reverse proxy.
Apache2 listens on port 443 and passes requests to the PHP server; the server processes the request and passes the JSON response back to Apache2, but for some reason there is an error.
Everything works well when I access the upstream server directly. There is no typo.
SSL and ReverseProxy work.
Servers are running on Ubuntu.
The MIME module is enabled, and I have added json to the file formats Apache2 should serve in /etc/apache2/sites-enabled/000-default.conf.
The PHP is most likely the culprit, but how do I make it better? I've been trying to figure this out for a month now.
Or is there a better way to secure API requests to the PHP server? I tried to achieve this using a tunnel but was not successful due to constant errors.
The PHP code on the upstream server:
header ("Access-Control-Allow-Origin");
header ("Content-Type: application/json; charset=utf8");
header ("Access-Control-Allow-Methods: OPTIONS,GET,POST,PUT,DELETE");
header ("Access-control-Max-Age: 3600");
header ("Access-Control-Allow-header: Content-Type, Access-Control-Allow-Headers, Authorization, X-Requested-With");
header ("HTTP/1.1 200 OK");
echo json_encode (array ("BODY"=>"hello world", "status_code_header"=>"HTTP/1.1 200 Ok"));
apache2.conf
<VirtualHost *:443>
ServerName ...
ErrorLog ...
CustomLog ...
SSLEngine On
SSLCertificateFile ...
SSLCertificateKeyFile ...
ProxyPass / http://localhost:port/ retry=1 acquire=3000 timout=600 Keepalive=On
ProxyPassReverse / http://localhost:port/
</VirtualHost>
Apache2 error.log
bad HTTP/1.1 header returned by /person/
When a user is behind a proxy (Google Data Saver, etc.), the browser adds an X-Forwarded-For header with the client's real IP address. Our load balancer then passes all headers, plus the client's IP address as another X-Forwarded-For header, to the nginx server. Example request headers:
X-Forwarded-For: 1.2.3.4
X-Forwarded-Port: 80
X-Forwarded-Proto: http
Host: *.*.*.*
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8,tr;q=0.6
Save-Data: on
Scheme: http
Via: 1.1 Chrome-Compression-Proxy
X-Forwarded-For: 1.2.3.5
Connection: Keep-alive
Is there any way to pass both X-Forwarded-For headers through to PHP?
TL;DR
nginx: fastcgi_param HTTP_MERGED_X_FORWARDED_FOR $http_x_forwarded_for;
php: $_SERVER['HTTP_MERGED_X_FORWARDED_FOR']
Explanation
You can access all HTTP headers with the $http_<header_name> variable. When using this variable, nginx will even do header merging for you, so
CustomHeader: foo
CustomHeader: bar
Gets translated to the value:
foo, bar
Thus, all you need to do is pass this variable to PHP with fastcgi_param:
fastcgi_param HTTP_MERGED_X_FORWARDED_FOR $http_x_forwarded_for;
Proof of concept:
in your nginx server block:
location ~ \.php$ {
fastcgi_pass unix:/run/php/php5.6-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTP_MERGED_X_FORWARDED_FOR $http_x_forwarded_for;
include fastcgi_params;
}
test.php
<?php
die($_SERVER['HTTP_MERGED_X_FORWARDED_FOR']);
And finally see what happens with curl:
curl -v -H 'X-Forwarded-For: 127.0.0.1' -H 'X-Forwarded-For: 8.8.8.8' http://localhost/test.php
Gives the following response:
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /test.php HTTP/1.1
> Host: localhost
> User-Agent: curl/7.47.0
> X-Forwarded-For: 127.0.0.1
> X-Forwarded-For: 8.8.8.8
>
< HTTP/1.1 200 OK
< Server: nginx/1.10.3 (Ubuntu)
< Date: Wed, 01 Nov 2017 09:07:51 GMT
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
127.0.0.1, 8.8.8.8
Boom! There you go: you have access to all X-Forwarded-For headers as a comma-delimited string in $_SERVER['HTTP_MERGED_X_FORWARDED_FOR'].
Of course, you can use whatever name you want and not just HTTP_MERGED_X_FORWARDED_FOR.
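Once the merged value is in $_SERVER, splitting it back into individual addresses is straightforward. A sketch (forwarded_chain is an illustrative name, and the key matches the fastcgi_param used above):

```php
<?php
// Split a merged X-Forwarded-For value back into an array of addresses.
function forwarded_chain(string $merged): array
{
    return $merged === '' ? [] : array_map('trim', explode(',', $merged));
}

$ips = forwarded_chain($_SERVER['HTTP_MERGED_X_FORWARDED_FOR'] ?? '');
```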
You can get the original address of the directly connecting peer (e.g. your ELB) in the variable $realip_remote_addr, but be aware that this variable was only added in nginx 1.9.7, so you'll need to be running a fairly recent version of nginx.
For more info, see the ngx_http_realip_module variables documentation.
For example, with this config:
set_real_ip_from 127.0.0.1;
set_real_ip_from 192.168.2.1;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
And an X-Forwarded-For header resulting in:
X-Forwarded-For: 123.123.123.123, 192.168.2.1, 127.0.0.1
With real_ip_recursive on, nginx walks the list from the right, skips the trusted proxies, and picks 123.123.123.123 as the client's IP address.
But $realip_remote_addr keeps the original connecting address.
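If you also want that value inside PHP, a sketch (REALIP_REMOTE_ADDR is an arbitrary name, not a standard parameter) would pass it along like any other FastCGI parameter:

```nginx
# Expose the pre-realip peer address to PHP as $_SERVER['REALIP_REMOTE_ADDR'].
fastcgi_param REALIP_REMOTE_ADDR $realip_remote_addr;
```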
What you are looking for needs to be handled at the web server level, so I created two servers, one using Apache and one using nginx, to test this. The test command:
curl -H "X: Y" -H "X: Z" http://localhost:8088/router.php | jq
Apache
When executed against Apache, the output is:
{
"HEADERS": {
"Host": "localhost:8088",
"User-Agent": "curl/7.47.0",
"Accept": "*/*",
"X": "Y, Z"
}
}
As you can see, we passed two headers to Apache and Apache combined them with a comma. If we change our first header to already contain a comma, it still works fine:
$ curl -H "X: Y, A" -H "X: Z" http://localhost:8088/router.php | jq
{
"HEADERS": {
"Host": "localhost:8088",
"User-Agent": "curl/7.47.0",
"Accept": "*/*",
"X": "Y, A, Z"
}
}
Nginx
The same request on nginx yields:
{
"HEADERS": {
"X": "Z",
"Accept": "*/*",
"User-Agent": "curl/7.47.0",
"Host": "localhost"
}
}
It is not that nginx doesn't send those headers to PHP-FPM; it sends them as-is. PHP-FPM simply doesn't merge duplicate headers into one, so in the script you only get the last header.
Edit-1: Merge using fastcgi_param
Thanks to @AronCederholm for pointing out that merging does work by specifying fastcgi_param.
I had originally tested the same approach, but it resulted in blank headers. I had tried adding:
fastcgi_param X-Forwarded-For $http_x_forwarder_for;
Just now, after reading his message, I realized that I had a typo in my config. It should have been:
fastcgi_param X-Forwarded-For $http_x_forwarded_for;
After this change the header works fine. It won't appear in getallheaders(), though; it is available through $_SERVER[], as shown in the response below
$ curl -v -H 'X-Forwarded-For: 127.0.0.1' -H 'X-Forwarded-For: 8.8.8.8' http://localhost/router.php | jq
{
"HEADERS": {
"X-Forwarded-For": "8.8.8.8",
"Accept": "*/*",
"User-Agent": "curl/7.47.0",
"Host": "localhost"
},
"SERVER": {
"USER": "vagrant",
"HOME": "/home/vagrant",
"HTTP_X_FORWARDED_FOR": "8.8.8.8",
"HTTP_ACCEPT": "*/*",
"HTTP_USER_AGENT": "curl/7.47.0",
"HTTP_HOST": "localhost",
"X-Forwarded-For": "127.0.0.1, 8.8.8.8",
Original Answer
Unfortunately, I found no settings or plugins for nginx or PHP-FPM that allow you to merge the duplicate headers into one. And you cannot handle this at the PHP level, because you never see the raw headers.
Possible Solutions
Put Apache in front of nginx: make nginx listen on a Unix socket and use Apache to reverse-proxy requests to nginx
Replace nginx with Apache
Create an nginx plugin to merge headers; the two projects below should give you a head start
https://github.com/giom/nginx_accept_language_module
https://github.com/openresty/headers-more-nginx-module
The X-Forwarded-For header should be appended to by each proxy along the path of your request, so you should not be getting two headers. Because the values are appended by design, anyone can add IPs to that list; do not use it for security checks. If you need to check an IP for security, set the X-Real-IP header on your web server, overwriting any passed-in value.
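For completeness, a hedged sketch of the usual trusted-proxy policy in PHP (client_ip and the proxy list are illustrative, not a standard API): only honor X-Forwarded-For when the direct TCP peer is one of your own proxies, and take the rightmost address you did not add yourself.

```php
<?php
// Sketch: resolve the client IP, trusting X-Forwarded-For only when
// the TCP peer (REMOTE_ADDR) is one of our own proxies/load balancers.
function client_ip(array $server, array $trusted_proxies): string
{
    $remote = $server['REMOTE_ADDR'] ?? '';
    if (!in_array($remote, $trusted_proxies, true)) {
        return $remote; // direct connection: any XFF header could be spoofed
    }
    $chain = array_map('trim', explode(',', $server['HTTP_X_FORWARDED_FOR'] ?? ''));
    // Walk right to left past our own proxies; the first other address
    // is the nearest hop we did not append ourselves.
    for ($i = count($chain) - 1; $i >= 0; $i--) {
        if ($chain[$i] !== '' && !in_array($chain[$i], $trusted_proxies, true)) {
            return $chain[$i];
        }
    }
    return $remote;
}
```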
I use nginx with FastCGI cache settings, and it's working great, but I have read all around the internet about OPcache and APC.
I don't understand the difference between them and FPM, and why people use them when caching with FPM is much easier, just adding a few lines to the nginx conf.
I use these settings for reference:
fastcgi_cache microcache;
fastcgi_cache_key $scheme$host$request_uri$request_method;
fastcgi_cache_valid 200 302 1d;
fastcgi_cache_valid 400 500 504 404 1m;
Can anyone explain the differences in detail?
Consider the following GET request: www.foo.com/bar.php/rest/resource. Then the following should be the case:
$_SERVER['SCRIPT_NAME'] === 'bar.php';
This is true in my local machine, as well in our dev server. But in our test server:
echo $_SERVER['SCRIPT_NAME']; // bar.php/rest/resource
which is wrong. I'm pretty sure this is caused by some Apache configuration, since the test server's failure started happening when it was upgraded from Apache 2.2 to 2.4.7 (with added configuration for our organization). I read the Apache upgrade/release notes and can't seem to pin down what changed.
More Info:
I've checked out PHP_SELF vs PATH_INFO vs SCRIPT_NAME vs REQUEST_URI, and it seems that my PHP_SELF and SCRIPT_NAME are switched. Instead of
[PHP_SELF] => /test.php/foo/bar
[SCRIPT_NAME] => /test.php
I get
[PHP_SELF] => /test.php
[SCRIPT_NAME] => /test.php/foo/bar
SCRIPT_NAME is defined by the web server (Apache, nginx, etc.). Depending on your server configuration, the value of SCRIPT_NAME will differ. You need to check the vhost configs on both machines and make sure they match.
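As a workaround until the vhost configs match, a sketch (normalized_script_name is an illustrative helper) can strip a trailing PATH_INFO from SCRIPT_NAME when the server has appended it:

```php
<?php
// Sketch: strip a trailing PATH_INFO from SCRIPT_NAME, so that
// /test.php/foo/bar collapses to /test.php on either server.
function normalized_script_name(array $server): string
{
    $script = $server['SCRIPT_NAME'] ?? '';
    $info = $server['PATH_INFO'] ?? '';
    if ($info !== '' && substr($script, -strlen($info)) === $info) {
        $script = substr($script, 0, -strlen($info));
    }
    return $script;
}
```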
I have this code
if(ereg("^(https)",$url))
curl_setopt($curl,CURLOPT_SSL_VERIFYPEER,false);
// execute, and log the result to curl_put.log
$result = curl_exec($curl);
$error = curl_error($curl);
The error specified is
SSL read: error:00000000:lib(0):func(0):reason(0), errno 104
Any ideas on the cause?
I encountered a similarly cryptic error while working with a third-party library. I tried the CURLOPT_SSL_VERIFY[PEER|HOST] options but they made no difference. My error message was similar:
SSL read: error:00000000:lib(0):func(0):reason(0), errno 54
So I visited http://curl.haxx.se/libcurl/c/libcurl-errors.html, looking for the error code 54.
CURLE_SSL_ENGINE_SETFAILED (54) Failed setting the selected SSL crypto engine as default!
This seemed wrong, though: I was making other HTTPS requests using curl in other parts of the application. So I kept digging and found this question, R & RCurl: Error 54 in libcurl, which had this gem:
The output you see is from lib/ssluse.c in libcurl's source code and the "errno" mentioned there is not the libcurl error code but the actual errno variable at that time.
So don't let the output of curl_error() mislead you. Instead, use curl_errno() to obtain the correct error code, which in my case was actually 56, CURLE_RECV_ERROR. I had the wrong host name...
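In PHP, that means checking curl_errno() after a failed curl_exec() rather than parsing the message text. A sketch (the URL uses an unsupported scheme purely to force a failure without touching the network):

```php
<?php
// Sketch: report the numeric CURLE_* code, not just the message text.
$curl = curl_init('invalid://example.invalid/'); // unsupported scheme: fails before any network I/O
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($curl);
$errno = 0;
if ($result === false) {
    $errno = curl_errno($curl); // numeric CURLE_* code, e.g. 1 = CURLE_UNSUPPORTED_PROTOCOL
    $error = curl_error($curl); // human-readable text, which can be misleading
}
curl_close($curl);
```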
With SSL, make sure that you have the openssl extension enabled in php.ini.
I've had the same problem. It turned out that the SSL setup on the target system had a bad configuration.
After checking the PHP curl module, the GuzzleHttp version, and the OpenSSL version, I opened the link in the browser and it worked. But with curl --tlsv1 -kv https://www.example.com on the console there was still an error.
So I checked the SSL configuration at https://www.ssllabs.com/ssltest/. It was rated B, and there were some Online Certificate Status Protocol (OCSP) errors I hadn't seen before. Finally I changed my configuration on the target system to the suggestions at https://cipherli.st/, restarted the web server, and everything worked. The new rating at SSL Labs is now A+.
My nginx configuration (Ubuntu 14.04, nginx 1.4.6-1ubuntu3.5):
ssl on;
ssl_certificate /etc/ssl/certs/1_www.example.com_bundle.crt;
ssl_certificate_key /etc/ssl/private/www.example.com.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
ssl_session_cache shared:SSL:10m;
#ssl_session_tickets off; # Requires nginx >= 1.5.9
ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify off; # Requires nginx >= 1.3.7
ssl_dhparam /etc/ssl/private/dhparams.pem;
ssl_trusted_certificate /etc/ssl/startssl.ca.pem;
resolver 8.8.8.8 valid=300s;
resolver_timeout 5s;
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
I think you meant to use CURLOPT_SSL_VERIFYHOST, not CURLOPT_SSL_VERIFYPEER. Add this:
curl_setopt( $curl, CURLOPT_SSL_VERIFYHOST, 0);
I had the same error, and this worked fine for me.
I had the same error printed by curl_error, but it is not necessarily related to SSL. It is better to print the precise error number with curl_errno so you can diagnose from there. In my case it returned error code 52, and I could debug from there; in fact, the other server was not sending any data.
It means the destination server requires an SSL connection.
You should generate an SSL certificate for the sending server from which you run the curl request.
Let's Encrypt is the first free and open CA.
Everything is described at sslforfree.com.
I solved this curl error ("SSL read: error:00000000:lib(0):func(0):reason(0), errno 104") by removing an extra space from my URL query parameter value (comma-separated values).
For example:
https://example.com?a=123,456, 789 (a space added by mistake before 789)
to
https://example.com?a=123,456,789
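A quick way to guard against this in PHP is to trim each comma-separated value before building the query string. A sketch (clean_csv is an illustrative helper):

```php
<?php
// Sketch: remove stray whitespace from comma-separated query values.
function clean_csv(string $raw): string
{
    return implode(',', array_map('trim', explode(',', $raw)));
}

$url = 'https://example.com?a=' . clean_csv('123, 456 ,789');
```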