Laravel hostname issue - php

I am trying to implement Single Sign-On via SAML and MS Azure in my Laravel application. I am using this package: 24Slides/laravel-saml2.
I set everything up, and when I tested the login with Microsoft I was redirected to the correct domain. But when Laravel needs to send a response back to Azure, I get an error that the URL Laravel generates does not match the one I configured in Azure: Laravel generates the URL with http://localhost:8090 (notice it is not https), but Azure accepts neither localhost nor http.
When I run echo $_SERVER['HTTP_HOST']; I get localhost:8090.
I have changed APP_URL in .env, but it makes no difference. I also tried adding localhost to the TrustProxies middleware:
class TrustProxies extends Middleware
{
    protected $proxies = [
        'localhost',
        'localhost:8090',
        '127.0.0.1',
        '127.0.0.1:8090',
    ];

    protected $headers =
        Request::HEADER_X_FORWARDED_FOR |
        Request::HEADER_X_FORWARDED_HOST |
        Request::HEADER_X_FORWARDED_PORT |
        Request::HEADER_X_FORWARDED_PROTO |
        Request::HEADER_X_FORWARDED_AWS_ELB;
}
I am using Docker with nginx as a reverse proxy for HTTPS.
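If nginx's X-Forwarded-* headers never reach PHP (easy to miss with Docker port mappings), a common workaround, sketched below, is to pin URL generation to APP_URL in AppServiceProvider. This assumes APP_URL in .env is set to the public https URL registered in Azure:

namespace App\Providers;

use Illuminate\Support\Facades\URL;
use Illuminate\Support\ServiceProvider;
use Illuminate\Support\Str;

class AppServiceProvider extends ServiceProvider
{
    public function boot()
    {
        // Build every generated URL from APP_URL instead of the
        // Host header the container sees (localhost:8090).
        URL::forceRootUrl(config('app.url'));

        // Generate https URLs even though nginx talks plain http
        // to the container.
        if (Str::startsWith(config('app.url'), 'https://')) {
            URL::forceScheme('https');
        }
    }
}

If the SAML package builds its URLs through Laravel's URL generator (it registers ordinary routes), the ACS URL it reports to Azure should follow the forced root URL as well.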

Related

Laravel Route to other port/application

I have a Laravel 8 project running on port 8000 and Metabase (an analytics framework) running on port 3000.
Metabase itself is password protected, but if possible I would like to prevent normal users from even reaching the Metabase login screen. My idea was that Metabase exposes port 3000 only to localhost, and Laravel forwards port 3000 inside a middleware-protected route.
I was hoping for a solution that would look something like this:
Route::middleware(['auth', 'isAdmin'])->prefix('metabase')->group(function () {
    // forward port 3000 here
});
AFAIK, Laravel can't protect endpoints outside of its own requests. Instead, you can use HTTP auth, or maybe you can try subdomain routing: https://laravel.com/docs/8.x/routing#route-group-subdomain-routing
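If you do want to experiment with the pass-through idea anyway, here is a rough sketch using Laravel's HTTP client (Laravel 8), assuming Metabase listens only on 127.0.0.1:3000. It won't handle websockets or streamed responses, so treat it as a starting point, not a full proxy:

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Route;

Route::middleware(['auth', 'isAdmin'])->prefix('metabase')->group(function () {
    Route::any('{path?}', function (Request $request, $path = '') {
        // Pass the request through to the local Metabase instance,
        // dropping the Host header so the client sets it for 127.0.0.1.
        $headers = collect($request->headers->all())->except('host')->all();

        $response = Http::withHeaders($headers)
            ->send($request->method(), 'http://127.0.0.1:3000/' . $path, [
                'query' => $request->query(),
                'body'  => $request->getContent(),
            ]);

        return response($response->body(), $response->status())
            ->header('Content-Type', $response->header('Content-Type'));
    })->where('path', '.*');
});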

Laravel redirect after login to url with port (Nginx)

I have a server (Nginx). On this server I have a Laravel application; the address of this application is https://myapp.test.com:31443/login. When I log in to the application successfully, Laravel redirects me to the address https://myapp.test.com WITHOUT THE PORT NUMBER.
Then I can't load e.g. a Vue file, because the address is missing the port, and in the console I get a 404 "file not found" error.
How can I add the port number to the redirect URL?
P.S. This redirect does not work:
'url' => env('APP_URL', 'https://myapp.test.com:31443'),
Are you using a Mac? Valet is the best option (local): https://laravel.com/docs/6.x/valet, and for outside access: Valet + https://ngrok.com/.
If not on a Mac: (local) https://laravel.com/docs/6.x/homestead, (outside) Homestead + https://ngrok.com/.
Valet and Homestead give you a URL like yourProject.test with no port number.
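If changing APP_URL alone doesn't affect the redirect, a Laravel-side sketch worth trying (assuming APP_URL is set to the full https://myapp.test.com:31443 address):

// Redirect with a relative path so Laravel doesn't rebuild an
// absolute URL (and drop the :31443 port) from the Host header:
return redirect('/home');

// Or pin URL generation to APP_URL once, e.g. in
// AppServiceProvider::boot():
\Illuminate\Support\Facades\URL::forceRootUrl(config('app.url'));

Forcing the root URL affects every URL Laravel generates, so check that assets and named routes still resolve as expected afterwards.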

Deploy a Laravel App on AWS EB - inconsistent Sessions

I am trying to deploy a Laravel app via AWS Elastic Beanstalk with a Classic Load Balancer.
I store my Laravel sessions in a database on AWS RDS, and the connection is working fine.
Everything works, except that every request generates a new session.
I therefore added session stickiness in my .ebextensions:
option_settings:
  aws:elasticbeanstalk:environment:process:default:
    Port: '80'
    Protocol: HTTP
  aws:elb:listener:443:
    ListenerProtocol: HTTPS
    SSLCertificateId: my_ssl_arn...
    InstancePort: 80
    InstanceProtocol: HTTP
  aws:elb:listener:80:
    ListenerEnabled: true
  aws:elb:policies:sessionstickiness:
    CookieName: laravel_session
    LoadBalancerPorts: 443, 80
In the app itself, I force HTTPS via middleware.
It's Laravel 5.5; in the TrustProxies middleware I added:
protected $proxies = '**';
I just don't understand where the problem is and have already tried a lot of different settings.
Does anyone have experience with this? What am I overlooking here?
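For reference, a Laravel 5.5 TrustProxies setup behind a Classic Load Balancer usually looks like the sketch below (fideloper/proxy ^3.3 syntax). If the forwarded headers aren't honored, Laravel never sees the original https scheme and may reissue the session cookie on every request:

namespace App\Http\Middleware;

use Illuminate\Http\Request;
use Fideloper\Proxy\TrustProxies as Middleware;

class TrustProxies extends Middleware
{
    // '**' trusts every proxy in the chain, which covers the
    // ELB's changing internal IPs.
    protected $proxies = '**';

    // Honor all X-Forwarded-* headers the ELB sets so the app
    // sees the client's real scheme, host, and IP.
    protected $headers = Request::HEADER_X_FORWARDED_ALL;
}

Also worth keeping in mind: app-cookie stickiness on laravel_session only helps once the app stops rotating that cookie, and the ELB health checker sends cookie-less requests that will start fresh sessions of their own.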

Amazon Web Services PHP FTP Get

I am using Elastic Beanstalk and have deployed my application to the Worker Tier.
Part of my application connects to a remote FTP server and downloads files using PHP.
It works without a problem on localhost. When I execute the PHP script on Amazon Web Services, I get this weird error:
IP1 = XXX.XX.XX.XX
IP2 = XX.XX.XXX.XXX
PHP Error[2]: ftp_get(): I won't open a connection to IP1 (only to IP2)
The application runs on a single instance (non-balanced) in the default VPC.
What is really weird is that IP1 does not match the host I'm trying to download the file from (i.e. example.com). Could that be the Internet Gateway IP?
The same application also downloads pictures and connects to APIs, so it's definitely connected to the Internet.
I assume the VPC routing configuration won't let the instance talk to 0.0.0.0/0 (i.e. any destination) over other protocols, only HTTP.
VPC ID: vpc-53cc2236
Network ACL: acl-c850baad
State: available
Tenancy: Default
VPC CIDR: 172.31.0.0/16
DNS resolution: yes
DHCP options set: dopt-f2998e90
DNS hostnames: yes
Route table: rtb-2b64914e
EC2 instance belongs to subnet-1250b265:
Route Table: rtb-2b64914e
Destination: 172.31.0.0/16 target: local
Destination: 0.0.0.0/0 target: igw-a48199c6
Route table rtb-2b64914e:
Destination   | Target       | Status | Propagated
172.31.0.0/16 | local        | Active | No
0.0.0.0/0     | igw-a48199c6 | Active | No
There are also two other subnets, subnet-ab0003ed and subnet-96f335f3, which belong to the same route table as subnet-1250b265.
I had the same issue, and resolved it by using passive mode.
ftp> pass
Passive mode on.
In Ruby, the code ended up being:
Net::FTP.open(host) do |ftp|
  ftp.login(user, pwd)
  ftp.passive = true   # switch to passive mode before transferring
  ftp.put(output_filename, "#{target_dir}/#{target_filename}")
end
Some folks suggested that a firewall might be blocking FTP access. That wasn't the case for me; the proper ports were open.
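For reference, the PHP equivalent of that fix (a sketch; host, credentials, and paths are placeholders) is:

// Connect and log in, then switch to passive mode before the transfer.
$conn = ftp_connect('ftp.example.com');
ftp_login($conn, 'user', 'password');

// In passive mode the client opens the data connection itself,
// which works much better behind NAT / an Internet Gateway.
ftp_pasv($conn, true);

ftp_get($conn, '/local/file.txt', '/remote/file.txt', FTP_BINARY);
ftp_close($conn);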
I tried ftp_connect and ftp_get, but they didn't work (tried with PASV on and off).
I tried to download the file with curl over FTP, but got a "No route to host" error.
Then I tried to download the file over FTP using PHP's curl, but I only got an empty file.
Since I have SSH access to the server, I tried to get the file using SCP, but got a weird error: Warning: ssh2_scp_recv(): Unable to receive remote file.
I also found a PHP patch for this, but didn't want to mess with patching the PHP version on AWS.
I ended up using phpseclib, which worked fine - no trouble there.
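For anyone else landing here, a minimal phpseclib download looks roughly like this (a sketch for phpseclib 2.x over SFTP; host, credentials, and paths are placeholders):

use phpseclib\Net\SFTP;

$sftp = new SFTP('example.com');
if (!$sftp->login('user', 'password')) {
    exit('Login failed');
}

// Fetch the remote file over SSH, sidestepping the FTP
// data-connection problems entirely.
$sftp->get('/remote/file.txt', '/local/file.txt');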

Client side ssl certificate using Zend_Http_Client

I've been trying to set up secure communication using a client-side certificate between my client application and my server application.
I've set up the configuration in my Apache web server and in my browser just to make sure that it works, and it does.
I'm using Zend Framework 1.12, and according to the documentation on the Zend website, the following example should work:
$config = array(
    'sslcert'       => 'path/to/ca.crt',
    'sslpassphrase' => 'p4ssw0rd'
);
$adapter = new Zend_Http_Client_Adapter_Socket();
$adapter->setConfig($config);
$http_client = new Zend_Http_Client();
$http_client->setAdapter($adapter);
$http_client->setUri('https://somewhere.overtherainbow.com:1337/bluebird');
$http_client->request();
but every time I just get the same exception:
Unable to Connect to ssl://somewhere.overtherainbow.com:1337
There is no doubt that I'm using the right certificate and passphrase, and the remote machine is reachable.
So where could the problem be?
Sounds like a simple firewall issue - log in to the server, stop iptables, and see if it connects. Or add an exception for the client's IP to access MySQL. Also check that port 1337 is open.
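A quick way to separate a firewall problem from a certificate problem (a sketch reusing the URL from the question) is to probe the port directly:

// If this fails, the problem is connectivity (firewall, DNS, the
// service being down), not the client certificate configuration.
$errno = 0;
$errstr = '';
$fp = @stream_socket_client('ssl://somewhere.overtherainbow.com:1337', $errno, $errstr, 10);

if ($fp === false) {
    echo "Cannot connect: [$errno] $errstr\n";
} else {
    echo "Connection OK\n";
    fclose($fp);
}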
