I am running Ubuntu with Apache2. I have a PHP system that works with rewrite rules, and everything works perfectly.
Recently, due to increasing traffic, I added a load balancer (HAProxy). This is only the second time I have used HAProxy, so I am not a pro.
When going through HAProxy, I see that the .htaccess rewrite rules are not working; accessing the pages that don't need the rules still works.
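For illustration, the rewrite rules in question are the usual front-controller kind, something like the following (a generic example, not my exact .htaccess):
```
# .htaccess (generic example): send everything that isn't a real file
# or directory to index.php
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ index.php [L]
```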
This is the haproxy.cnf:
global
maxconn 2048
defaults
option forwardfor
option http-server-close
timeout connect 5000
timeout client 10000
timeout server 10000
frontend www-http
mode http
bind LOADBALANCERIP:80
reqadd X-Forwarded-Proto:\ http
default_backend www-backend
frontend www-https
mode http
bind LOADBALANCERIP:443 ssl crt /home/bee.pem
reqadd X-Forwarded-Proto:\ https
default_backend www-backend
backend www-backend
mode http
cookie SRVNAME insert
server node1 WEB1IP:80 cookie S1 check
I'm trying to code a chat with Symfony 5 and Mercure, but I have some issues with the configuration. I'm working on Windows 10.
This is the documentation that I followed: https://github.com/dunglas/mercure/blob/main/docs/hub/install.md
I installed this version for my project: mercure_0.13.0_Windows_arm64.zip.
Then I decompressed it, and right after, in my terminal, I ran "composer require symfony/mercure".
This is in my .env:
# See https://symfony.com/doc/current/mercure.html#configuration
# The URL of the Mercure hub, used by the app to publish updates (can be a local URL)
MERCURE_URL=:https://127.0.0.1:8000/.well-known/mercure
# The public URL of the Mercure hub, used by the browser to connect
MERCURE_PUBLIC_URL=https://127.0.0.1:8000/.well-known/mercure
# The secret used to sign the JWTs
MERCURE_JWT_SECRET="!ChangeMe!"
###< symfony/mercure-bundle ###
Then I ran the Mercure server with this command line: ```$env:MERCURE_PUBLISHER_JWT_KEY='!ChangeMe!'; $env:MERCURE_SUBSCRIBER_JWT_KEY='!ChangeMe!'; .\mercure.exe run -config Caddyfile.dev```.
In my PowerShell, I get this:
```2021/11/16 01:39:58.029 INFO http server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS {"server_name": "srv0", "https_port": 443}
2021/11/16 01:39:58.029 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2021/11/16 01:39:58.111 INFO tls cleaning storage unit {"description": "FileStorage:C:\\Users\\toufi\\AppData\\Roaming\\Caddy"}
2021/11/16 01:39:58.113 INFO tls finished cleaning storage units
2021/11/16 01:39:58.134 INFO pki.ca.local root certificate is already trusted by system {"path": "storage:pki/authorities/local/root.crt"}
2021/11/16 01:39:58.135 INFO http enabling automatic TLS certificate management {"domains": ["localhost"]}
2021/11/16 01:39:58.136 WARN tls stapling OCSP {"error": "no OCSP stapling for [localhost]: no OCSP server specified in certificate"}
2021/11/16 01:39:58.143 INFO autosaved config (load with --resume flag) {"file": "C:\\Users\\toufi\\AppData\\Roaming\\Caddy\\autosave.json"}
2021/11/16 01:39:58.143 INFO serving initial configuration```
It seems to run well, but when I open ```https://localhost/.well-known/mercure``` in my browser,
I get:
```Not Found
The requested URL was not found on this server.
Apache/2.4.46 (Win64) OpenSSL/1.1.1h PHP/7.4.25 Server at localhost Port 443```
Can someone help me? I don't know how to access my Mercure server from my browser.
Thank you very much
Hey Ben, you should try running https://localhost:8000/.well-known/mercure in your browser
instead of https://localhost/.well-known/mercure
Maybe you've already figured out this problem in the last 3 months, but here are a few things that came to my mind.
You start Mercure 0.13 in dev mode (Caddyfile.dev), which allows access to the demo page.
(By the way, I'm missing the log entry here that tells you the server is using the configuration file you specified; it should look something like {"level":"info","ts":1646214769.1484525,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile.dev","config_adapter":""}.) You may want to open this demo page to see whether Mercure works or not. The default URL is https://localhost/.well-known/mercure/ui/, but it may vary depending on your settings. It is worth trying with plain http as well.
You don't provide the SERVER_NAME env variable, so I assume Caddy attempts to use ports 80 and 443. Therefore https://127.0.0.1:8000 in the Symfony config won't work, and port 80 might fail if there is already a web server running to serve Symfony.
You can use SERVER_NAME=:8000 to launch Mercure on port 8000, but in that case it will serve only http, not https.
So, in your case, I would start Mercure in dev mode and check the demo page. If both the http and https attempts fail, I would start Mercure with an additional SERVER_NAME (e.g. :8000) and check http://localhost:8000/.well-known/mercure/ui/. If nothing goes wrong, one of them should work, and then you can proceed to configure Symfony to use the Mercure service.
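To make that last option concrete, a sketch of what I mean (assuming nothing else is listening on port 8000, and keeping the placeholder JWT keys from your command):
```
# PowerShell: start Mercure on port 8000 (plain http in this setup)
$env:SERVER_NAME=':8000'; $env:MERCURE_PUBLISHER_JWT_KEY='!ChangeMe!'; $env:MERCURE_SUBSCRIBER_JWT_KEY='!ChangeMe!'; .\mercure.exe run -config Caddyfile.dev
```
The matching .env entries on the Symfony side would then point at http instead of https:
```
MERCURE_URL=http://127.0.0.1:8000/.well-known/mercure
MERCURE_PUBLIC_URL=http://127.0.0.1:8000/.well-known/mercure
```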
This is the warning when I open my phpMyAdmin's login (index) page:
There is mismatch between HTTPS indicated on the server and client.
This can lead to non working phpMyAdmin or a security risk.
Please fix your server configuration to indicate HTTPS properly.
The warning is probably caused by a load balancer between my client and phpMyAdmin itself. SSL terminates on the load balancer, so the URL being used (the one phpMyAdmin receives in request headers, I assume) is https://mydomain/phpmyadmin.
The load balancer communicates with phpMyAdmin via http, so the URL used between the LB and pma is http://mydomain/phpmyadmin (not https).
I found a very fitting issue on GitHub, "Possibility to deactivate SSL connection #170", which is for Docker containers and describes an env var called "PMA_ABSOLUTE_URI" that can be passed to the container to fix the problem.
Which setting would this be in a NON-Docker phpMyAdmin?
Any other solution to my problem is also highly appreciated.
Side note: phpMyAdmin works fine after the login. You can log in, there are no warnings after logging in, and you can perform all interactions without problems. I am just worried about the warning.
I have exactly the same setup as you are describing: a front load balancer acts as a reverse proxy and also as the SSL/TLS terminator, and the LB talks plain http with the backend server where phpMyAdmin is running.
When I upgraded from 4.0.4.1 to 4.9.0.1, I got the same warning at the phpMyAdmin login screen as you. I was able to solve this on the reverse proxy by "faking" the protocol from http to https. In my case the reverse proxy is an Nginx web server, and just before passing to the backend server, I added X-Forwarded-Proto:
server {
    listen 443;
    server_name my.phpmyadmin.example.com;
    [... log and ssl settings ...]

    location / {
        include /etc/nginx/proxy.conf;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://backendserver;
    }
}
Adding proxy_set_header X-Forwarded-Proto https; tells the backend server that the client-to-proxy communication happens over https. Without this header, phpMyAdmin probably detects (not sure, just a guess) that it was loaded on an https:// URL yet the communication (between the reverse proxy and the phpMyAdmin server) happened over http, so it is a correct warning to show.
As soon as Nginx was reloaded, the warning disappeared from the phpMyAdmin login screen.
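If the load balancer in front of phpMyAdmin happens to be HAProxy rather than Nginx, the equivalent would look something like the sketch below (backend name and IP are placeholders; older HAProxy versions use reqadd X-Forwarded-Proto:\ https instead of http-request set-header):
```
backend pma_backend
    mode http
    # tell phpMyAdmin that the client connection was https
    http-request set-header X-Forwarded-Proto https
    server pma 192.168.0.10:80 check
```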
I'm using HAProxy to load-balance and get high availability for my (RESTful) API. The problem I'm facing is that I can't send REST requests to the API.
I mean that HAProxy does not seem to support a REST API by default, and I figured that I should configure an ACL to make it work, but I couldn't find anything about configuring RESTful support and enabling http rewrite rules for HAProxy.
My API is based on the Laravel framework.
For example, if I hit 192.168.1.139/login I get a 404 error message. The only route that works is /, which shows the user the "you are not logged in." message.
This is the HAProxy configuration:
listen http_front
bind *:80
mode http
stats enable
stats uri /haproxy?stats
option httpclose
option forwardfor
#acl api_exp hdr(host) -i domain_name.com
#use_backend api_servers if api_exp
default_backend api_servers
backend api_servers
balance roundrobin
server replica1 192.168.100.110:80 check
server replica2 192.168.100.111:80 check
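For reference, a sketch of how the commented-out ACL lines would be used if enabled (domain_name.com and the /api prefix are placeholders); note that an ACL only selects a backend, it does not rewrite URLs:
```
# route by Host header (as in the commented-out lines above)
acl api_exp hdr(host) -i domain_name.com
use_backend api_servers if api_exp
# or route by path prefix instead
acl is_api path_beg /api
use_backend api_servers if is_api
```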
It's a bit strange, but I've solved my problem with this configuration:
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
timeout connect 5000
timeout client 100s
timeout server 100s
listen ha-www
bind 0.0.0.0:80
mode http
stats enable
stats uri /haproxy?stats
stats realm Strictly\ Private
balance roundrobin
option httpclose
option forwardfor
server app-www-1 192.168.100.110:80 check
server app-www-2 192.168.100.111:80 check
I currently have two apps at AppFog; they are:
http://sru-forums-prod.aws.af.cm/ and http://sru-home-prod.aws.af.cm/
I have HAProxy running locally on my computer; this is my current config file:
global
debug
defaults
mode http
timeout connect 500ms
timeout client 50000ms
timeout server 50000ms
backend legacy
server forums sru-forums-prod.aws.af.cm:80
frontend app *:8232
default_backend legacy
The end goal is that localhost:8232 forwards traffic to sru-home-prod, while localhost:8232/forums/* forwards traffic to sru-forums-prod. However, I can't even get a simple proxy up and running.
When I run HAProxy off this config file, I receive an AppFog 404 Not Found at localhost:8232.
What am I missing? Is this even possible?
EDIT:
The new config works, but now I get a port 60032 coming back in the response.
global
debug
defaults
mode http
timeout connect 500ms
timeout client 50000ms
timeout server 50000ms
backend legacy
option forwardfor
option httpclose
reqirep ^Host: Host:\ sru-forums-prod.aws.af.cm
server forums sru-forums-prod.aws.af.cm:80
frontend app *:8000
default_backend legacy
The reason you are getting an AppFog 404 Not Found is that applications hosted on AppFog are routed by domain name. For AppFog to know which app to serve you, the domain name has to be in the HTTP request. When you go to localhost:8232/forums/ it sends localhost as the domain name, which AppFog does not have as a registered app name.
There is a good way to get around this issue:
1) Map your application to a second domain name, for example:
af map <appname> sru-forums-prod-proxy.aws.af.cm
2) Edit your /etc/hosts file and add this line:
127.0.0.1 sru-forums-prod-proxy.aws.af.cm
3) Go to http://sru-forums-prod-proxy.aws.af.cm:8232/forums/, which will map to the local machine but will go through your HAProxy, successfully ending up with the right host name mapped to your app hosted at AppFog.
Here is a working haproxy.conf file that demonstrates how this has worked for us so far using similar methodologies.
defaults
mode http
timeout connect 500ms
timeout client 50000ms
timeout server 50000ms
backend appfog
option httpchk GET /readme.html HTTP/1.1\r\nHost:\ aroundtheworld.appfog.com
option forwardfor
option httpclose
reqirep ^Host: Host:\ aroundtheworld.appfog.com
server pingdom-aws afpingdom.aws.af.cm:80 check
server pingdom-rs afpingdom-rs.rs.af.cm:80 check
server pingdom-hp afpingdom-hp.hp.af.cm:80 check
server pingdom-eu afpingdom-eu.eu01.aws.af.cm:80 check
server pingdom-ap afpingdom-ap.ap01.aws.af.cm:80 check
frontend app *:8000
default_backend appfog
listen stats 0.0.0.0:8080
mode http
stats enable
stats uri /haproxy
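For the original end goal (localhost to sru-home-prod, /forums/* to sru-forums-prod), a sketch combining the same Host-rewriting trick with path-based routing might look like this. It is untested, the /etc/hosts mapping from the steps above still applies, and the /forums prefix is passed through to the backend unchanged:
```
defaults
    mode http
    timeout connect 500ms
    timeout client 50000ms
    timeout server 50000ms

frontend app *:8232
    # anything under /forums goes to the forums app, the rest to the home app
    acl is_forums path_beg /forums
    use_backend forums if is_forums
    default_backend home

backend forums
    option forwardfor
    option httpclose
    reqirep ^Host: Host:\ sru-forums-prod.aws.af.cm
    server forums sru-forums-prod.aws.af.cm:80

backend home
    option forwardfor
    option httpclose
    reqirep ^Host: Host:\ sru-home-prod.aws.af.cm
    server home sru-home-prod.aws.af.cm:80
```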
I have a load-balanced dev site that I'm working out SSL bugs on, and I have run into one last very annoying issue. On some pages I need to force SSL, so, easy enough, I just wanted to use a
header ("Location: https://www.example.com/mypage.php");
I thought that would be easy enough, no worries. However, every time I do this it gets transformed back to http, which, as you can imagine, creates an endless loop that can't be resolved. I can't figure out how to keep the https in there so that it pulls the secure version of the page. If I navigate directly to the secure page with https, it works just fine. The only issue is with this redirect.
Any help would be awesome! I'm using Pound as the load-balancing proxy, with Apache on the web-server nodes. The SSL cert is set up at the load balancer.
When load balancing, 'internal' SSL usually goes out the door: clients connect through a load balancer that can handle the SSL encryption, but behind it, in most load balancers I've seen, is plain HTTP. Try to get your load balancer to set a custom header indicating that there is an HTTPS connection between the load balancer and the client.
From http://www.apsis.ch/pound/index_html
WHAT POUND IS:
...
an SSL wrapper: Pound will decrypt HTTPS requests from client browsers and pass them as plain HTTP to the back-end servers.
And from more manual pages:
HTTP Listener
RewriteLocation 0|1|2
If 1 force Pound to change the Location: and Content-location:
headers in responses. If they point to the back-end itself or to
the listener (but with the wrong protocol) the response will be
changed to show the virtual host in the request. Default: 1
(active). If the value is set to 2 only the back-end address is
compared; this is useful for redirecting a request to an HTTPS
listener on the same server as the HTTP listener.
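For reference, a rough sketch of where these settings would live in a Pound config (directive names as I recall them from the man page; the cert path, addresses, and exact placement may need adjusting for your Pound version):
```
ListenHTTPS
    Address 0.0.0.0
    Port    443
    Cert    "/etc/pound/example.pem"
    # let the backend know the client connection was https
    AddHeader "X-Forwarded-Proto: https"
    # rewrite Location: headers that point back at the backend or listener
    RewriteLocation 1
    Service
        BackEnd
            Address 192.168.0.10
            Port    80
        End
    End
End
```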
Redirecting to https pages is no problem.
You can check the port, the scheme, or the server variable (the server variable is probably best) to see whether https is on, and use that as a condition for redirecting:
$_SERVER['SERVER_PORT'] == 443
$_SERVER['REQUEST_SCHEME'] == 'https'
$_SERVER['HTTPS'] == 'on'
But as you have an infinite loop, there must be something else wrong!
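Combining those checks with the custom-header idea from the earlier answer, a sketch of the redirect guard might look like this (www.example.com and the X-Forwarded-Proto header are placeholders, and your load balancer has to be configured to actually send such a header):
```
<?php
// Treat the request as secure if Apache says so, if we're on port 443,
// or if the load balancer told us the original client connection was https.
$isHttps = (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off')
    || $_SERVER['SERVER_PORT'] == 443
    || (isset($_SERVER['HTTP_X_FORWARDED_PROTO'])
        && strtolower($_SERVER['HTTP_X_FORWARDED_PROTO']) === 'https');

if (!$isHttps) {
    header('Location: https://www.example.com/mypage.php');
    exit;
}
```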
Try using the load balancer "balance" instead. It only takes about 5 minutes to set up and, instead of proxying, it will do "real" load balancing. I would guess your proxy is currently redirecting https requests to the http address. Try making a request without using the balancer; you can do this by setting the host name in your /etc/hosts file to point directly to a server instead of to the load balancer's IP.