Laravel 5.3 HTTPS Routes

I've read everywhere that Laravel can detect when the user is browsing via HTTPS and use that to generate URLs accordingly, but this appears to be untrue.
I've used a configuration in the AppServiceProvider to force all generated URLs to use the HTTPS scheme, but this only masks the underlying problem.
I have Laravel sitting on an EC2 instance. There is no load balancer and I haven't configured a proxy. This is purely a development instance.
How can I get URLs generated by the route helper to use HTTPS?

If a user is on an HTTPS page, Laravel's route() helper will generate HTTPS URLs. Since Google Chrome already marks HTTP websites as insecure, it is a good idea to redirect all HTTP requests to HTTPS. There are many ways to do that, but as far as I know the best one is to set up your web server to do the job.
A sample VirtualHost for Apache:
<VirtualHost my.app:80>
    ServerName my.app
    Redirect permanent / https://my.app
</VirtualHost>

<VirtualHost my.app:443>
    DocumentRoot /home/my/public
    ServerName my.app
    ServerAlias my.app
    ServerAlias *.my.app
    SSLEngine on
    SSLCertificateFile conf/ssl.crt/server.crt
    SSLCertificateKeyFile conf/ssl.key/server.key
    <Directory /home/my/public>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
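If you'd rather handle this at the application level, which is the AppServiceProvider approach the question mentions, a minimal sketch for Laravel 5.3 follows. The FORCE_HTTPS env flag is my own illustrative convention, not a Laravel built-in, and note that 5.3 names the method forceSchema(); it was renamed to forceScheme() in 5.4:

<?php
// app/Providers/AppServiceProvider.php
namespace App\Providers;

use Illuminate\Support\Facades\URL;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    public function boot()
    {
        // Make route(), url() and asset() emit https:// links.
        // FORCE_HTTPS is a hypothetical .env flag so that plain-HTTP
        // development environments keep working.
        if (env('FORCE_HTTPS', false)) {
            URL::forceSchema('https'); // forceScheme() from Laravel 5.4 onward
        }
    }

    public function register()
    {
        //
    }
}

Bear in mind this only changes generated URLs; it does not redirect visitors who arrive over plain HTTP, which is why the server-level redirect above is still the more complete fix.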

Related

How to block access to a Laravel app from other IPs and only allow access from localhost

OS: Microsoft Windows Server
web server: Apache
front-end framework: Vue.js
back-end framework: Laravel
I have set it up so that when the address is "example.com" I see my Vue.js page, and when the address is "example.com:9999" I see my Laravel project.
I want "example.com" to be accessible from any IP, but "example.com:9999" to be accessible only from the website itself. How can I do that?
This is my httpd-vhost.conf.
<VirtualHost *:80>
    DocumentRoot "C:\Apache24\htdocs\index"
    ServerName example.com
    <Directory "C:\Apache24\htdocs\index">
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:9999>
    DocumentRoot "C:\Apache24\htdocs\Laravel_project_name\public"
    ServerName example.com
    <Directory "C:\Apache24\htdocs\Laravel_project_name\public">
        Options FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
My final goal is that no one can tell what framework I'm using, and that no one can see my Laravel index.php and the other files in laravel_project_name/public.
Look at the documentation for VirtualHost:
<VirtualHost addr[:port] [addr[:port]] ...> ... </VirtualHost>
Addr can be any of the following, optionally followed by a colon and a
port number (or *):
The IP address of the virtual host;
A fully qualified domain name for the IP address of the virtual host (not recommended);
The character *, which acts as a wildcard and matches any IP address.
The string default, which is an alias for *
If you only want it accessible from localhost, then replace * with the localhost IP address.
That said, your goal is a bit unclear.
The above will stop a client running on a different computer from accessing on that virtual host.
There's no way to allow users of the Vue application to access that VirtualHost without also letting people bypass the Vue application and access it directly. They will still be making an HTTP request to your server, and there's no way to tell whether it was initiated by the code built into your Vue application or by someone else's code (or by a manually constructed request).
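If you want an application-level safeguard on top of the Apache restriction, one option is a small Laravel middleware. This is only a sketch, and RestrictToLocalhost is a hypothetical class name you would still need to register in app/Http/Kernel.php:

<?php
// app/Http/Middleware/RestrictToLocalhost.php (hypothetical name)
namespace App\Http\Middleware;

use Closure;

class RestrictToLocalhost
{
    public function handle($request, Closure $next)
    {
        // Reject any request that did not originate from this machine.
        // Behind a reverse proxy, ip() may report the proxy's address
        // instead of the real client, so this only works for direct
        // connections; the Apache restriction remains the primary control.
        if (!in_array($request->ip(), ['127.0.0.1', '::1'], true)) {
            abort(403);
        }

        return $next($request);
    }
}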

Configuring a VirtualHost for a secure WebSocket using the Ratchet WebSocket library on an Apache web server

I have implemented (or tried to implement) a WebSocket for communication between users on an EC2 instance running Linux with an Apache web server. I had it working when I first configured it, with my Ratchet WebSocket pointed at port 8081 without any TLS. With this configuration I was able to upgrade to a WebSocket and send/receive data, through a non-secure WebSocket. This was only possible through the IP address, though, and not through the actual URL.
I am running the WebSocket on a subdomain.
<VirtualHost *:443>
    DocumentRoot "/var/www/html/video"
    ServerName video.domain.com
    SSLEngine on
    SSLCertificateFile ./certs/server.crt
    SSLCertificateKeyFile ./certs/server.key
    # ProxyPass /ratchet/ ws://video.domain.com:8081/
    <Directory "/var/www/html/video">
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
The above configuration works when I use the IP-based WebSocket connection to connect to the WebSocket through the JS WebSocket API.
I have tried wss and ws, with and without ports, etc. in the WebSocket API, but the code below is still the only thing I can get to work.
let socket = new WebSocket("ws://server_ip:8081");
I have read a lot of Stack Overflow questions about adding a ProxyPass to the VirtualHost, but it doesn't upgrade the request. Furthermore, I have tried giving the WebSocket its own VirtualHost, and that doesn't work either.
I think it's worth mentioning that I have a Cloudflare CDN that the requests are proxied through.
Hope to get some fresh eyes. Been stuck for a while.
I do not have enough rep for a comment, so answer it is.
It has been a while since I dabbled in this stuff, and my first thought was that you indeed need a ProxyPass, but when I looked at my config, that is not the case.
I'm going out on a limb and guessing that your VirtualHost is the issue here: you are explicitly listening on port 443 (HTTPS), but I believe wss listens on another port, so maybe you could try a different port. Other than that, you could also try new WebSocket('https://video.domain.com') and enable the proxy in the VirtualHost; that way the secure connection is handled by the HTTP layer. But since the browser will then try to upgrade the request to a socket, I doubt this will work.
I should mention that in my case I used WebSockets to open an MQTT connection; since browsers don't implement MQTT, this is done via wss.
If none of this works, I could try to dive deeper into the inner workings of the MQTT lib I use in order to dissect how the connection is set up.
Edit
Since there was not enough space in the comments, I'll place it here:
Not related to sockets, but to Apache and proxies: the ProxyPass directive has a counterpart, ProxyPassReverse, for that very goal.
<VirtualHost IPv4:443 [IPv6]:443>
    ServerName knowledge.domain.com:443
    ServerAlias knowledge.domain.com
    ServerAdmin webmaster@domain.com
    DocumentRoot /path/to/documentRoot
    <Directory /path/to/documentRoot>
        Options -Indexes -FollowSymLinks -SymLinksIfOwnerMatch
    </Directory>
    SSLEngine On
    SSLCertificateFile /path/to/ssl.crt
    SSLCertificateKeyFile /path/to/ssl.key
    SSLCACertificateFile /path/to/ssl.cer
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    Header always edit Set-Cookie (.*) "$1;HttpOnly;Secure"
    ProxyRequests Off
    ProxyPreserveHost On
    ProxyVia Full
    <Proxy *>
        Require all granted
    </Proxy>
    <Location />
        ProxyPass http://127.0.0.1:3000/
        ProxyPassReverse http://127.0.0.1:3000/
    </Location>
    <Directory />
        Options -FollowSymLinks -Indexes -SymLinksIfOwnerMatch
    </Directory>
    CustomLog "/path/to/logs/access.log" combined
    ErrorLog "/path/to/logs/error.log"
    LogLevel warn
</VirtualHost>
This is an example of my proxy conf for a Node.js app.
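For completeness, the PHP side of the original Ratchet setup is typically a small standalone script along these lines. This is a minimal sketch following Ratchet's documented bootstrap; Chat is a hypothetical handler class implementing Ratchet's MessageComponentInterface:

<?php
// server.php - minimal Ratchet server speaking plain ws:// on port 8081.
// TLS is expected to be terminated in front of it (Apache or Cloudflare),
// which then forwards the decrypted traffic to this backend port.
require __DIR__ . '/vendor/autoload.php';

use Ratchet\Http\HttpServer;
use Ratchet\Server\IoServer;
use Ratchet\WebSocket\WsServer;
use MyApp\Chat; // hypothetical MessageComponentInterface implementation

$server = IoServer::factory(
    new HttpServer(new WsServer(new Chat())),
    8081
);
$server->run();

Once a working proxy upgrade is in place, the browser would connect with wss://video.domain.com/... rather than ws://server_ip:8081 directly.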

How to access virtual host from the internet?

I want to access my website via virtual host from the internet. For now, I am using the public IP address of my server to access my website. Here is what I am using (please see below).
http://122.4.195.12:7777/site/index.php
Is there a way to access my virtual host from the internet? When I access my virtual host from the internet (https://mysite/site/index.php) I get
DNS_PROBE_FINISHED_NXDOMAIN error
mysite’s server IP address could not be found.
Is there a way to add SSL when accessing my website via the public IP address? When I change http to https I get
ERR_SSL_PROTOCOL_ERROR
122.4.195.12 sent an invalid response.
http://122.4.195.12:7777/site/index.php -> https://122.4.195.12:7777/site/index.php
Here is my Virtual Host Config:
<VirtualHost *:7777>
    DocumentRoot "C:\xampp\htdocs"
    ServerName mysite
    <Directory "C:\xampp\htdocs">
        Require all granted
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

<VirtualHost *:443>
    DocumentRoot "C:/xampp/htdocs"
    ServerName mysite
    SSLEngine on
    SSLCertificateFile "crt/scratchitsite/server.crt"
    SSLCertificateKeyFile "crt/mysite/server.key"
    <Directory "C:\xampp\htdocs">
        Require all granted
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
Here is the host file of my server:
127.0.0.1 mysite
For question 1
The easier way would still be to register a domain name, point it to your IP address, and set up your VirtualHost ServerName for it.
Name-based VirtualHosts are actually matched against the Host HTTP header on the server side, so the key thing here is:
how to make the client browser send the same Host header that you defined on the server.
For example, using curl you can force a user-defined Host header like this: curl -H 'Host: mysite' 122.4.195.12:7777/site/index.php
If you're using Chrome, you can try a browser extension that modifies the Host header.
For question 2
You've enabled HTTPS on port 443, not 7777, in your Apache configuration.
That means you should access your HTTPS service as https://122.4.195.12:443/site/index.php (or simply https://122.4.195.12/site/index.php, since 443 is the default HTTPS port) instead of https://122.4.195.12:7777/site/index.php.
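A quick way to verify which Host header actually reaches the server, and therefore which VirtualHost handled the request, is a throwaway diagnostic script like the one below (hosttest.php is a hypothetical name; drop it into the DocumentRoot and request it through each hostname):

<?php
// hosttest.php - prints the request's host details so you can confirm
// name-based VirtualHost matching (PHP 7+ for the ?? operator).
header('Content-Type: text/plain');
echo 'Host header: ' . ($_SERVER['HTTP_HOST'] ?? '(none)') . PHP_EOL;
echo 'Server name: ' . ($_SERVER['SERVER_NAME'] ?? '(none)') . PHP_EOL;
echo 'Server port: ' . ($_SERVER['SERVER_PORT'] ?? '(none)') . PHP_EOL;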

Laravel and Apache2 - create subdomain for each new registered user

I am working with Laravel 5 and Apache 2.
I would like to prepare a simple registration module. Each new user should have their own subdomain to log in to the system. This subdomain should be created dynamically (on the fly, during the registration process).
The main problem is: how do I make Apache and the Laravel code create that new subdomain?
Thanks for any help.
You have to use an Apache wildcard configuration and then set up the project configuration according to the subdomain; this will not put extra load on your server.
<VirtualHost *:80>
    # Wildcard catch-all for every subdomain
    ServerAlias localhost *.host.com
    VirtualDocumentRoot /path/to/your/workspace/public
    UseCanonicalName Off
    <Directory "/path/to/your/workspace/public">
        Options FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
        Require all granted
    </Directory>
</VirtualHost>
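On the Laravel side you can then pick the subdomain up as a route parameter. This is a minimal sketch using Laravel's documented domain route groups; host.com, the {account} parameter, and the dashboard view are placeholders:

// routes/web.php (Laravel 5.3+)
Route::group(['domain' => '{account}.host.com'], function () {
    Route::get('/', function ($account) {
        // $account holds the subdomain, e.g. "alice" for alice.host.com
        return view('dashboard', ['account' => $account]);
    });
});

Because the Apache wildcard already sends every subdomain to the same application, "creating" a subdomain at registration time is just a matter of inserting the user record; no Apache change or reload is required.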

PHP Namespace collision with Phalcon, Vagrant, Apache2

I have this strange problem that I can best describe as "namespace leakage". I have set up a Vagrant box running Apache2 with VHosts configured to replicate the production server in terms of domains & subdomains. I have edited my local machine's hosts file accordingly, and my VHosts work fine.
I have also created a skeleton/base app coded with PhalconPHP. It's a multi-modular app with all the generic features I require to develop my apps (actually to redevelop a load of very old, outdated apps). The skeleton app works fine.
The problem I have is that if I go to app1.dev in my browser, it works. Then if I go to app2.dev, app2 is clearly trying to load some components (views etc.) from app1 and gives errors. However, close the browser and try again by going to app2.dev and it now works fine. Then go to app1.dev and that is now broken and trying to load components from app2! I managed to track this odd behaviour down to namespace collision.
It has to be namespace collision because my apps are all based on the skeleton app and use its namespaces, which are obviously the same for the generic parts of the app (modules such as App\backend, App\frontend etc.). If, on a broken app, I navigate in my browser to a part of that app that is unique, and therefore has a unique namespace, it works fine because there is no collision! Also, a couple of apps are coded with CodeIgniter 3, which does not use namespaces, and those apps do not have this issue. It only affects the Phalcon apps with namespaces.
I will add that each app uses .htaccess to direct requests to the front controller located in the public/ directory.
Options FollowSymLinks
<IfModule mod_rewrite.c>
    RewriteEngine on
    RewriteRule ^$ public/ [L]
    RewriteRule (.*) public/$1 [L]
</IfModule>
I wondered if the .htaccess was the issue, but I think I've ruled that out. It is doing what it is supposed to do.
Here's an example of one of my VHost setups in Apache; they all follow this pattern.
<VirtualHost *:80>
    ServerName app1.dev
    ServerAlias www.app1.dev
    DocumentRoot /vagrant/www/app1
    <Directory "/vagrant/www/app1">
        Options Indexes FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
        Require all granted
    </Directory>
</VirtualHost>
Changing all the namespaces throughout every app would be a pretty major job, and I don't think that should be necessary. I don't think this will be an issue on the production server, as that uses CloudLinux/CentOS & WHM, but it is a bit of a worry!
Clearly namespaces should not collide across different document roots and VHosts, right? What am I missing?
I have solved a similar problem in Apache by using different local IPs for every app:
/etc/hosts
127.1.0.1 edu.l
127.1.1.1 tech.l
...
/etc/apache2/sites-available/001-blogs.conf
<VirtualHost edu.l:80>
    ServerName edu.l
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/blog/edu
    ErrorLog ${APACHE_LOG_DIR}/edu.error.log
    CustomLog ${APACHE_LOG_DIR}/edu.access.log combined
</VirtualHost>

<VirtualHost tech.l:80>
    ServerName tech.l
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/blog/tech
    ErrorLog ${APACHE_LOG_DIR}/tech.error.log
    CustomLog ${APACHE_LOG_DIR}/tech.access.log combined
</VirtualHost>
...
You can omit configuring names in hosts and use IPs in the .conf, though.
It works because, if you define a few apps on the same IP and port, Apache seems to remember the directory it is working on and mismatches files.
