When a user is behind a proxy (Google Data Saver etc.), the proxy adds an X-Forwarded-For header with the client's real IP address to the request it sends to the server. Our load balancer passes all headers through and appends the client's IP address as another X-Forwarded-For header before forwarding to the nginx server. Example request headers:
X-Forwarded-For: 1.2.3.4
X-Forwarded-Port: 80
X-Forwarded-Proto: http
Host: *.*.*.*
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8,tr;q=0.6
Save-Data: on
Scheme: http
Via: 1.1 Chrome-Compression-Proxy
X-Forwarded-For: 1.2.3.5
Connection: Keep-alive
Is there any way to pass both of the X-Forwarded-For headers to PHP, in order?
TL;DR
nginx: fastcgi_param HTTP_MERGED_X_FORWARDED_FOR $http_x_forwarded_for;
php: $_SERVER['HTTP_MERGED_X_FORWARDED_FOR']
Explanation
You can access any HTTP request header with the $http_<header_name> variable. When using this variable, nginx will even do header merging for you, so
CustomHeader: foo
CustomHeader: bar
Gets translated to the value:
foo, bar
Thus, all you need to do is pass this variable to PHP with fastcgi_param:
fastcgi_param HTTP_MERGED_X_FORWARDED_FOR $http_x_forwarded_for;
Proof of concept:
in your nginx server block:
location ~ \.php$ {
    fastcgi_pass unix:/run/php/php5.6-fpm.sock;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param HTTP_MERGED_X_FORWARDED_FOR $http_x_forwarded_for;
    include fastcgi_params;
}
test.php
<?php
die($_SERVER['HTTP_MERGED_X_FORWARDED_FOR']);
And finally see what happens with curl:
curl -v -H 'X-Forwarded-For: 127.0.0.1' -H 'X-Forwarded-For: 8.8.8.8' http://localhost/test.php
Gives the following response:
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET /test.php HTTP/1.1
> Host: localhost
> User-Agent: curl/7.47.0
> X-Forwarded-For: 127.0.0.1
> X-Forwarded-For: 8.8.8.8
>
< HTTP/1.1 200 OK
< Server: nginx/1.10.3 (Ubuntu)
< Date: Wed, 01 Nov 2017 09:07:51 GMT
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
127.0.0.1, 8.8.8.8
Boom! There you go: you have access to all X-Forwarded-For headers, as a comma-delimited string, in $_SERVER['HTTP_MERGED_X_FORWARDED_FOR'].
Of course, you can use whatever name you want and not just HTTP_MERGED_X_FORWARDED_FOR.
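If you need the individual addresses rather than one merged string, you can split the value on the PHP side. A minimal sketch, assuming the fastcgi_param from the TL;DR is in place:
<?php
// Split the merged header back into individual addresses.
// Assumes nginx passes HTTP_MERGED_X_FORWARDED_FOR as shown above.
$merged = isset($_SERVER['HTTP_MERGED_X_FORWARDED_FOR']) ? $_SERVER['HTTP_MERGED_X_FORWARDED_FOR'] : '';
$ips = array_filter(array_map('trim', explode(',', $merged)));
print_r($ips); // e.g. Array ( [0] => 127.0.0.1 [1] => 8.8.8.8 )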
You can get the original connecting address (e.g. your ELB) in the variable $realip_remote_addr, but be aware that this variable was only added in nginx 1.9.7, so you'll need to be running a fairly recent version of nginx.
For more info, see the ngx_http_realip_module variables documentation.
For example, with this config:
set_real_ip_from 127.0.0.1;
set_real_ip_from 192.168.2.1;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
And an X-Forwarded-For header resulting in:
X-Forwarded-For: 123.123.123.123, 192.168.2.1, 127.0.0.1
nginx will pick 123.123.123.123 (the last address, searching from the right, that is not a trusted proxy) as the client's IP address, rewriting $remote_addr.
But $realip_remote_addr keeps the original connecting address.
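If you want that original address in PHP too, you could pass it along with another fastcgi_param, e.g. fastcgi_param REALIP_REMOTE_ADDR $realip_remote_addr; (the param name is my own choice, not a standard one). A minimal sketch of the PHP side under that assumption:
<?php
// Read both the realip-rewritten client address and the original
// connecting address (assumes the hypothetical fastcgi_param above).
$clientIp = $_SERVER['REMOTE_ADDR']; // 123.123.123.123 after the realip module rewrote it
$proxyIp = isset($_SERVER['REALIP_REMOTE_ADDR']) ? $_SERVER['REALIP_REMOTE_ADDR'] : null;
echo "client: $clientIp, connecting proxy: $proxyIp";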
What you are looking for needs to be handled at the web server level, so I created two servers, one using Apache and one using nginx, to test this. The test command:
curl -H "X: Y" -H "X: Z" http://localhost:8088/router.php | jq
Apache
When executed against Apache, the output is below:
{
  "HEADERS": {
    "Host": "localhost:8088",
    "User-Agent": "curl/7.47.0",
    "Accept": "*/*",
    "X": "Y, Z"
  }
}
As you can see, we passed two headers to Apache, and Apache combined them using a comma. If we change our first header to already contain a comma, it still works fine:
$ curl -H "X: Y, A" -H "X: Z" http://localhost:8088/router.php | jq
{
  "HEADERS": {
    "Host": "localhost:8088",
    "User-Agent": "curl/7.47.0",
    "Accept": "*/*",
    "X": "Y, A, Z"
  }
}
Nginx
Now the same request on nginx yields:
{
  "HEADERS": {
    "X": "Z",
    "Accept": "*/*",
    "User-Agent": "curl/7.47.0",
    "Host": "localhost"
  }
}
It is not that nginx doesn't send those headers to PHP-FPM; it sends them as-is. But PHP-FPM doesn't merge duplicate headers into one, so in the script you only get the last header.
Edit-1: Merge using fastcgi_param
Thanks to @AronCederholm for pointing out that merging does work by specifying a fastcgi_param.
I had originally tested the same approach, but it resulted in blank headers. I had tried adding:
fastcgi_param X-Forwarded-For $http_x_forwarder_for;
Just now, after reading his message, I realized that I had a typo in my config. It should have been:
fastcgi_param X-Forwarded-For $http_x_forwarded_for;
And after this change the header works fine. It won't show up in getallheaders(), though; it is available through $_SERVER[], as shown in the response below:
$ curl -v -H 'X-Forwarded-For: 127.0.0.1' -H 'X-Forwarded-For: 8.8.8.8' http://localhost/router.php | jq
{
  "HEADERS": {
    "X-Forwarded-For": "8.8.8.8",
    "Accept": "*/*",
    "User-Agent": "curl/7.47.0",
    "Host": "localhost"
  },
  "SERVER": {
    "USER": "vagrant",
    "HOME": "/home/vagrant",
    "HTTP_X_FORWARDED_FOR": "8.8.8.8",
    "HTTP_ACCEPT": "*/*",
    "HTTP_USER_AGENT": "curl/7.47.0",
    "HTTP_HOST": "localhost",
    "X-Forwarded-For": "127.0.0.1, 8.8.8.8",
Original Answer
Unfortunately, I found no settings or plugins for nginx or PHP-FPM that allow you to merge the duplicate headers into one. And you cannot handle this situation at the PHP level, because you will never be able to see the raw headers.
Possible Solutions
Put Apache in front of nginx: make nginx listen on a unix socket and use Apache to reverse proxy the request to nginx
Replace nginx with Apache
Create an nginx plugin to merge headers. The two projects below should give you a head start:
https://github.com/giom/nginx_accept_language_module
https://github.com/openresty/headers-more-nginx-module
The X-Forwarded-For header should be appended to by each proxy along the path of your request; you should not be getting two separate headers. Because the values are appended by design, anyone can add IPs to that list, so do not use it for security checks. If you need to check an IP for security, set the X-Real-IP header on your web server, overwriting any passed-in value.
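As a PHP sketch of that advice (the $trustedProxies list is an assumption; fill in your own load balancer/proxy addresses), walk the chain from the right and stop at the first hop you don't control:
<?php
// Derive the client IP without trusting X-Forwarded-For blindly.
// $trustedProxies is hypothetical -- use your own proxy addresses.
$trustedProxies = array('10.0.0.5', '10.0.0.6');

function clientIp(array $server, array $trusted) {
    $xff = isset($server['HTTP_X_FORWARDED_FOR']) ? $server['HTTP_X_FORWARDED_FOR'] : '';
    $chain = array_values(array_filter(array_map('trim', explode(',', $xff))));
    $chain[] = $server['REMOTE_ADDR']; // the peer we actually talked to
    // Pop trusted proxies off the right; the first untrusted hop is the client.
    while (count($chain) > 1 && in_array(end($chain), $trusted, true)) {
        array_pop($chain);
    }
    return end($chain);
}

echo clientIp($_SERVER, $trustedProxies);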
Related
I am attempting to get nginx-proxy to work with the php-fpm variant of the official php image via fastcgi. Unfortunately, I seem to be unable to do so. I'm sure the problem is just something simple that I don't know about.
I have followed the instructions for nginx-proxy to the best of my ability and have boiled it down to a very simple way to re-create the issue. Here's my docker-compose.yml file:
version: "3"
services:
proxy:
image: jwilder/nginx-proxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
environment:
- DEFAULT_HOST=test.local
fpm:
image: php:fpm
environment:
- VIRTUAL_HOST=test.local
- VIRTUAL_PROTO=fastcgi
I then drop in a simple index.php file by running:
docker container exec -it web_fpm_1 /bin/bash -c 'echo "<?php phpinfo(); ?>" > /var/www/html/index.php'
(It puts web_ in front because this project is in a directory named web/.)
I also modify my hosts file to point test.local to 127.0.0.1, so I can test it.
However, every attempt to browse to test.local results in a blank white page.
The logs for the web_proxy_1 container don't indicate anything out of the ordinary, as far as I know:
❯ docker container logs web_proxy_1
WARNING: /etc/nginx/dhparam/dhparam.pem was not found. A pre-generated dhparam.pem will be used for now while a new one
is being generated in the background. Once the new dhparam.pem is in place, nginx will be reloaded.
forego | starting dockergen.1 on port 5000
forego | starting nginx.1 on port 5100
dockergen.1 | 2020/07/20 19:24:54 Generated '/etc/nginx/conf.d/default.conf' from 2 containers
dockergen.1 | 2020/07/20 19:24:54 Watching docker events
dockergen.1 | 2020/07/20 19:24:54 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx.1 | test.local 172.18.0.1 - - [20/Jul/2020:19:25:12 +0000] "GET / HTTP/1.1" 200 5 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36"
nginx.1 | test.local 172.18.0.1 - - [20/Jul/2020:19:25:13 +0000] "GET /favicon.ico HTTP/1.1" 200 5 "http://test.local/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36"
The logs for the web_fpm_1 container show that nothing gets sent except a 200 response:
❯ docker container logs web_fpm_1
[20-Jul-2020 19:24:54] NOTICE: fpm is running, pid 1
[20-Jul-2020 19:24:54] NOTICE: ready to handle connections
172.18.0.3 - 20/Jul/2020:19:25:12 +0000 "- " 200
172.18.0.3 - 20/Jul/2020:19:25:13 +0000 "- " 200
What am I doing wrong?
Incidentally, I have asked this question on the nginx-proxy repo, the nginx-proxy Google Group, and the php repo. I either get no response or they pass the buck.
The default generated config of nginx-proxy is not fully working.
I think something is messed up with the VIRTUAL_ROOT environment variable, because the root of the problem is that PHP gets a wrong path via SCRIPT_FILENAME (that's why you see no PHP output) and there is no try_files with =404 (that's why you get a 200 for everything).
I have prepared a working docker-compose setup on GitHub to demonstrate that it works with a correct SCRIPT_FILENAME in the nginx config.
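Once a script executes at all, you can verify what path PHP-FPM receives with a throwaway debug script (my own debugging aid, not part of the demo repo):
<?php
// Show the path PHP-FPM was asked to execute and whether it
// actually exists inside the fpm container.
$sf = isset($_SERVER['SCRIPT_FILENAME']) ? $_SERVER['SCRIPT_FILENAME'] : '(not set)';
echo $sf, ' exists: ', var_export(file_exists($sf), true);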
I have changed test.local to test.localhost.
I think to get it working as it should, you would have to use an nginx template for nginx-proxy, so that the generated default.conf works with PHP-FPM and includes the missing fastcgi param.
Another, different approach would be to pack PHP and a manually configured webserver (nginx) into the project, and keep the automated nginx reverse proxy as a standalone project.
This costs you an additional running process, but gives you more control and easier deployment.
Alternatively, you might want to have a look into traefik which does essentially the same as nginx-proxy.
Daniel's answer is definitely on the right track. I use the php-fpm image with nginx as my main stack for php sites. Having said that, I don't use the nginx-proxy docker image. Instead, I use plain nginx on the host machine, and configure ports to point to backend php-fpm docker images.
I'm not using docker-compose either. Since it's just docker containers running single sites, I don't need it. Here's an example docker run command:
docker rm -f www.example.com || true
docker run -itd -p 9001:9000 -P \
--name www.example.com \
--volume /var/www/html/www.example.com:/var/www/html/www.example.com \
--link mariadb:database.example.com \
--restart="always" \
--hostname="example.com" \
--log-opt max-size=2m \
--log-opt max-file=5 \
mck7/php-fpm:7.4.x-wordpress
And here is an example nginx config:
server {
    server_name example.com www.example.com;

    location ~ /.well-known {
        allow all;
    }
    location ~ /\.ht {
        deny all;
    }

    root /var/www/html/www.example.com/src;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        fastcgi_pass 127.0.0.1:9001;
        fastcgi_index index.php;
    }
}
A few things about this setup are key. The first is the port re-mapping for the docker container: in this example I map host port 9001 to container port 9000. The other "gotcha" is that the root for the container must be an actual location on the host. The reason is that nginx passes $document_root$fastcgi_script_name as SCRIPT_FILENAME, and PHP-FPM inside the container resolves that path in its own filesystem, so the path nginx uses on the host must also exist inside the container (hence the identical volume path).
Please excuse the lengthy write-up; I would really appreciate any help with the following.
I am trying to set up multi-tenant subdomains + custom domains with SSL using Let's Encrypt
(some customers will use a subdomain, some a custom domain):
https://customer1.myapp.com
https://customer2.myapp.com
https://customer1.com (the customer sets up A/CNAME records at his DNS provider)
I am on EC2 instance using Ubuntu OS with username 'ubuntu'.
I learned from the following tutorials:
https://sandeep.dev/how-we-generate-and-renew-ssl-certs-for-arbitrary-custom-domains-using-letsencrypt-cjtk0utui000c1cs1f7y9ua5n
https://www.digitalocean.com/community/tutorials/how-to-use-the-openresty-web-framework-for-nginx-on-ubuntu-16-04
https://sandro-keil.de/blog/openresty-nginx-with-auto-generated-ssl-certificate-from-lets-encrypt/
I have successfully done the following:
Installed build-essential on the server
Installed OpenResty (comes with its own nginx & OpenSSL)
Installed LuaRocks
Installed lua-resty-auto-ssl
Created directory for resty auto ssl
sudo mkdir /etc/resty-auto-ssl
sudo chown -R ubuntu /etc/resty-auto-ssl
sudo chown -R www-data /etc/resty-auto-ssl
chmod -R 777 /etc/resty-auto-ssl/
Created Fallback Self-signed Certificate which expires in 3600 days
This is my starter conf file (/usr/local/openresty/nginx/conf/nginx.conf).
(I will refine it further to suit my redirect & security needs.)
#user nginx;
error_log /usr/local/openresty/nginx/logs/error.log warn;
events {
    worker_connections 1024;
}

http {
    lua_shared_dict auto_ssl 1m;
    lua_shared_dict auto_ssl_settings 64k;

    init_by_lua_block {
        auto_ssl = (require "resty.auto-ssl").new()
        auto_ssl:set("allow_domain", function(domain)
            return true
        end)
        auto_ssl:set("dir", "/etc/resty-auto-ssl")
        auto_ssl:init()
    }

    init_worker_by_lua_block {
        auto_ssl:init_worker()
    }

    # access_log /usr/local/openresty/nginx/logs/access.log main;

    server {
        listen 443 ssl;

        ssl_certificate_by_lua_block {
            auto_ssl:ssl_certificate()
        }

        ssl_certificate /etc/ssl/resty-auto-ssl-fallback.crt;
        ssl_certificate_key /etc/ssl/resty-auto-ssl-fallback.key;

        root /var/www/myapp.com/public;
        index index.php index.html index.htm;

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # location ~ \.php$ {
        #     include snippets/fastcgi-php.conf;
        #     fastcgi_pass unix:/run/php/php7.4-fpm.sock;
        #     fastcgi_read_timeout 600;
        # }

        location ~ /\.ht {
            deny all;
        }
    }

    server {
        listen 80;
        server_name *.myapp.com myapp.com;

        location /.well-known/acme-challenge/ {
            content_by_lua_block {
                auto_ssl:challenge_server()
            }
        }

        location / {
            return 301 https://myapp.com$request_uri;
        }
    }

    server {
        listen 8999;

        location / {
            content_by_lua_block {
                auto_ssl:hook_server()
            }
        }
    }
}
I am facing multiple issues:
1. Can't specify a user in the nginx config
Trying to put a user directive in the first line of the config file gives me an error, although everything still works without it.
So I commented it out and tried to carry on anyway.
2. Dehydrated failure, but a certificate is created
I keep getting the following error in my log:
lets_encrypt.lua:40: issue_cert(): auto-ssl: dehydrated failed: env HOOK_SECRET=XXXX HOOK_SERVER_PORT=8999 /usr/local/openresty/luajit/bin/resty-auto-ssl/dehydrated --cron --accept-terms --no-lock --domain myapp.com --challenge http-01 --config /etc/resty-auto-ssl/letsencrypt/config --hook /usr/local/openresty/luajit/bin/resty-auto-ssl/letsencrypt_hooks status: 256 out: # INFO: Using main config file /etc/resty-auto-ssl/letsencrypt/config
But it still goes on and does create a certificate, after which it gives a random number generator error.
Sometimes, if I delete everything inside /etc/resty-auto-ssl, it doesn't give me such errors.
3. Can't find OpenSSL random number generator
I keep getting the following error in my log:
Can't load ./.rnd into RNG
random number generator:RAND_load_file:Cannot open file:../crypto/rand/randfile.c:98:Filename=./.rnd
curl: (22) The requested URL returned error: 500 Internal Server Error
4. PHP-FPM on the nginx provided with OpenResty
I have properly installed PHP-FPM and have tested it when using standalone nginx.
But now that I am using the nginx provided with OpenResty, it doesn't seem to work.
Error (shown when testing the config with the nginx -t command):
"/usr/local/openresty/nginx/conf/snippets/fastcgi-php.conf" failed (2: No such file or directory)
5. Failed to create certificate
Sometimes this error is followed by the error in point number 2 above:
auto-ssl: could not get certificate for myapp.com - using fallback - failed to get or issue certificate, context: ssl_certificate_by_lua*, client: 123.201.226.209, server: 0.0.0.0:443
set_response_cert(): auto-ssl: failed to set ocsp stapling for xxxx.myapp.com - continuing anyway - failed to get ocsp response: OCSP responder query failed (http://ocsp.int-x3.letsencrypt.org): no resolver defined to resolve "ocsp.int-x3.letsencrypt.org", context: ssl_certificate_by_lua*, client: 123.201.226.209, server: 0.0.0.0:443
connect() to unix:/run/php/php7.4-fpm.sock failed (13: Permission denied) while connecting to upstream, client: 123.201.226.209, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/run/php/php7.4-fpm.sock:", host: "xxxx.myapp.com"
6. When trying to access customer1.com, whose A record points to the myapp.com server IP:
"Error creating new order :: Cannot issue for \"X.X.X.X\": The ACME server can not issue a certificate for an IP address"
ssl_certificate.lua:281: auto-ssl: could not determine domain for request (SNI not supported?) - using fallback - , context: ssl_certificate_by_lua*, client: 45.148.10.72, server: 0.0.0.0:443
... where X.X.X.X is the A record of customer1.com, which was opened in the browser
I have the following points of confusion:
Should I get one proper (paid) wildcard PositiveSSL certificate for myapp.com (and use it as the fallback)?
That would cover all my subdomains, and I wouldn't have to deal with Let's Encrypt's rate limits on subdomains.
This way I would only have to use Let's Encrypt for custom domains like customer1.com.
I am not sure if my users & permissions are properly set up; any pointers would help.
I would like my final nginx config to fulfill the following needs:
Redirect http://myapp.com & http://www.myapp.com to -> https://myapp.com
Redirect https://www.myapp.com to -> https://myapp.com
Redirect http://customer1.com & http://www.customer1.com to -> https://customer1.com
And then, in my actual SSL server block, have all the logic for auto SSL generation.
It is somewhat hard to answer all these questions, so I'll attempt to answer parts of 5 & 6. I have set up OpenResty myself in a prod environment; see the link.
I ran into this OCSP stapling issue. I found that it was resolved by adding this to my NGINX config:
# A DNS resolver must be defined for OCSP stapling to function.
resolver 172.20.0.10 ipv6=off;
Regarding question 6, I would suggest that customer1.com be a CNAME to myapp.com.
I would also recommend using the openresty docker image as a base, or at least a reverse-engineered version of that docker image on an EC2 instance. Here is my Dockerfile:
FROM openresty/openresty:latest-xenial
RUN /usr/local/openresty/luajit/bin/luarocks install lua-resty-auto-ssl
RUN /usr/local/openresty/luajit/bin/luarocks install lua-resty-http
RUN apt-get update
RUN apt-get install -y dnsutils
RUN openssl req -new -newkey rsa:2048 -days 3650 -nodes -x509 -subj '/CN=sni-support-required-for-valid-ssl' -keyout /etc/ssl/resty-auto-ssl-fallback.key -out /etc/ssl/resty-auto-ssl-fallback.crt
ADD nginx.conf /usr/local/openresty/nginx/conf/nginx.conf
Hopefully this is helpful.
I have two applications, one for mobile devices and one for other devices.
What I am trying to do is serve both applications on the same domain instead of two different domains.
I have googled it, but everyone shows URL redirection.
Below is the code I have been trying:
server {
listen 80;
set $root /var/www/ng/webApplication;
if ($http_user_agent ~* "android|blackberry|googlebot-mobile|iemobile|ipad|iphone|ipod|opera mobile|palmos|webos") {
set $root /var/www/html/mobileApplication;
}
root $root;
}
but nginx stops working if I add this condition.
Edit
nginx -t result
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Edit
access.log
180.151.19.20 - - [02/Jul/2019:05:12:10 +0000] "GET / HTTP/1.1" 404 178 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:67.0) Gecko/20100101 Firefox/67.0"
also, in the browser it shows 404 Not Found
Assuming both applications can respond independently, you can try the reverse proxy option. Note that proxy_pass with a URI part is not allowed inside an if block, so select the backend path via a variable instead. Here is an example:
location / {
    set $app webApplication;
    if ($http_user_agent ~* "android|blackberry|googlebot-mobile|iemobile|ipad|iphone|ipod|opera mobile|palmos|webos") {
        set $app mobileApplication;
    }
    # an IP literal avoids needing a resolver for the variable-based proxy_pass
    proxy_pass http://127.0.0.1/$app$request_uri;
}
A Zend Expressive project my company is working on is ready to be shipped but in our staging environment we seem to be missing response headers for a CORS pre-flight request. This does not happen in our development environment. We're using CorsMiddleware in our pipeline but it doesn't look like that middleware is the culprit.
The problem
During runtime, the middleware detects incoming pre-flight requests and it will reply with a response like so:
HTTP/1.1 200 OK
Date: Mon, 20 Aug 2018 15:09:03 GMT
Server: Apache
X-Powered-By: PHP/7.1.19
Access-Control-Allow-Origin: https://example.com
Vary: Origin
Access-Control-Allow-Headers: content-type
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
Well, that only works on our development servers and PHP's built-in webserver. The response from our staging server is different, even though the request is exactly the same apart from the host:
HTTP/1.1 200 OK
Date: Mon, 20 Aug 2018 15:11:29 GMT
Server: Apache
Keep-Alive: timeout=5, max=100
Cache-Control: max-age=0, no-cache
Content-Length: 0
Content-Type: text/html; charset=UTF-8
What we've tried
Investigating the middleware
We've verified that CorsMiddleware runs perfectly fine and actually sets the required headers. When we modify CorsMiddleware's response code and set it to 202 instead of 200 we now do get the headers we're looking for. Changing the response code back to 200 makes the headers disappear again.
Setting the headers manually
Using the following example:
header('Access-Control-Allow-Origin: https://example.com');
header('Access-Control-Allow-Headers: content-type');
header('Vary: Origin');
exit(0);
This has the same behavior until we modify the response code to 204 or anything other than 200.
Looking at the body
The response body is empty and shouldn't contain anything, but when we add content to the response body, the headers appear as if nothing was wrong.
So if I add body content, the headers are present. No body content? No CORS headers. Is this some setting in Apache? Am I missing some configuration in PHP? Am I forgetting anything?
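(A minimal standalone reproduction of that observation, along the lines of the cors.php test linked further down; this is my reconstruction, not the project's actual code:)
<?php
// Identical headers in both cases; only the body differs.
header('Access-Control-Allow-Origin: https://example.com');
header('Access-Control-Allow-Headers: content-type');
header('Vary: Origin');
echo 'x'; // comment this line out and the CORS headers disappear on staging
exit;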
Further details
All requests have been tested with httpie, Postman, curl and PhpStorm's http client.
Here's the httpie example:
http -v OPTIONS https://staging.****.com \
'access-control-request-method:POST' \
'origin:https://example.com' \
'access-control-request-headers:content-type'
Here's the curl example:
curl "https://staging.****.com" \
--request OPTIONS \
--include \
--header "access-control-request-method: POST" \
--header "origin: https://example.com" \
--header "access-control-request-headers: content-type"
CORS configuration in pipeline.php (wildcard only for testing):
$app->pipe(new CorsMiddleware([
    "origin" => [
        "*",
    ],
    "headers.allow" => ['Content-Type'],
    "headers.expose" => [],
    "credentials" => false,
    "cache" => 0,
    // Get the list of allowed methods from the matched route or provide an empty array.
    'methods' => function (ServerRequestInterface $request) {
        $result = $request->getAttribute(RouteResult::class);
        /** @var \Zend\Expressive\Router\Route $route */
        $route = $result->getMatchedRoute();
        return $route ? $route->getAllowedMethods() : [];
    },
    // Respond with a JSON response containing the error message when the CORS check fails.
    'error' => function (
        ServerRequest $request,
        Response $response,
        $arguments
    ) {
        $data['status'] = 'error';
        $data['message'] = $arguments['message'];
        // Write the body, then return the response itself (not the int from write()).
        $response = $response->withHeader('Content-Type', 'application/json');
        $response->getBody()->write(json_encode($data));
        return $response;
    },
]));
The staging environment:
OS: Debian 9.5 server
Webserver: Apache/2.4.25 (Debian) (built: 2018-06-02T08:01:13)
PHP: PHP 7.1.20-1+0~20180725103315.2+stretch~1.gbpd5b650 (cli) (built: Jul 25 2018 10:33:20) ( NTS )
Apache2 vhost on staging:
<IfModule mod_ssl.c>
    <VirtualHost ****:443>
        ServerName staging.****.com
        DocumentRoot /var/www/com.****.staging/public

        ErrorLog /var/log/apache2/com.****.staging.error.log
        CustomLog /var/log/apache2/com.****.staging.access.log combined

        <Directory /var/www/com.****.staging>
            Options +SymLinksIfOwnerMatch
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>

        SSLCertificateFile /etc/letsencrypt/live/staging.****.com/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/staging.****.com/privkey.pem
        Include /etc/letsencrypt/options-ssl-apache.conf
    </VirtualHost>
</IfModule>
Apache2 vhost on development:
<VirtualHost *:443>
    ServerName php71.****.com
    ServerAdmin dev@****.com
    DocumentRoot /var/www/

    <Directory /var/www/>
        Options Indexes FollowSymlinks
        AllowOverride All
        Require all granted
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/error.ssl.log
    CustomLog ${APACHE_LOG_DIR}/access.ssl.log combined

    SSLEngine On
    SSLCertificateFile /etc/ssl/certs/****.crt
    SSLCertificateKeyFile /etc/ssl/certs/****.key
</VirtualHost>
To everybody pointing fingers at Cloudflare:
Try this direct link with httpie. This link does not go through Cloudflare:
http -v OPTIONS http://37.97.135.33/cors.php \
'access-control-request-method:POST' \
'origin:https://example.com' \
'access-control-request-headers:content-type'
Check the source code in your browser: http://37.97.135.33/cors.php?source=1
From everything I read here, including your comments, it seems your "production" server is behind a proxy, more exactly CloudFlare. You have given details about your working development environment, but nothing about the non-working production environment.
Your setup seems correct, and if it works on a development setup without a proxy, it means that the proxy is modifying the headers.
A quick search regarding CloudFlare gives enough indication that CloudFlare can be the cause of your problem.
I strongly suggest you enable "Development Mode" in CloudFlare so it bypasses the cache and you can see everything coming from and going to the origin server.
The following article should help you understand and resolve your issue:
https://support.cloudflare.com/hc/en-us/articles/203063414-Why-can-t-I-see-my-CORS-headers-
UPDATE:
It appears that your issue comes from Apache's mod_pagespeed; with it turned off, your headers are present at all times.
It is still unclear why the module is stripping your headers, but that's for another question and another time.
Your configuration makes it clear that the headers do get generated, so it's not the code or the middleware's fault.
I believe that the headers get removed by something; check Apache's mod_headers and its configuration in case there's a rogue unset directive.
Another, less likely possibility is that you're looking at the staging server through a load balancer or proxy of some kind, which rewrites the headers and leaves the CORS out (to verify that, you might need to intercept Apache's outgoing traffic).
I have made both mistakes, myself.
Please make sure you have the right configuration in Zend Expressive. For example, the code below will allow CORS access from any calling domain:
use Psr\Http\Message\ServerRequestInterface;
use Tuupola\Middleware\CorsMiddleware;
use Zend\Expressive\Router\RouteResult;
$app->pipe(new CorsMiddleware([
    "origin" => ["*"],
    "methods" => ["GET", "POST", "PUT", "PATCH", "DELETE"],
]));
Until now my PHP application assumed HTTP/1.1 everywhere, so I defined all headers like so:
header("HTTP/1.1 500 Internal Server Error");
But now my server also supports HTTP 2 and I want to update all header responses with the right HTTP status code.
How do I get the HTTP protocol version of the HTTP request?
(My webserver is nginx, but I guess it is irrelevant whether I am using nginx or Apache.)
The server protocol should be available through SERVER_PROTOCOL from the server environment, usually exposed through $_SERVER['SERVER_PROTOCOL'] inside your application.
From phpinfo() under Apache 2.4:
SERVER_PROTOCOL => HTTP/1.1
changing /etc/nginx/fastcgi_params:
#fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param SERVER_PROTOCOL HTTP/2.0;
The header should be:
header($_SERVER['SERVER_PROTOCOL'].' 404 Not Found');
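Alternatively, PHP's built-in http_response_code() sidesteps building the status line by hand; the SAPI then emits it with whatever protocol the request used:
<?php
// Lets the SAPI produce a protocol-appropriate status line,
// correct under both HTTP/1.1 and HTTP/2.
http_response_code(404);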