I have this Laravel app which I'm deploying to Heroku.
I have followed all of the steps until I encountered a problem relating to some assets (asset('css/app.css'), for example) referring to http URLs instead of https URLs.
I solved that by adding
if (config('app.env') === 'production') {
    \URL::forceScheme('https');
}
in the boot method of my AppServiceProvider.php file, and it worked.
But now I have encountered another http related problem that the previous code couldn't solve.
I am fetching my data using the simplePaginate() function, like so:
public function index(Question $question) {
    $answers = $question->answers()->with('user');
    return $answers->simplePaginate(3);
}
This code returns a response with my 3 answers, as well as a property called 'next_page_url',
which is still plain http (not https as I need it to be).
What can I do to make this https, as Heroku requires?
Heroku's load balancing setup means the indication of whether the request is HTTP or HTTPS comes from the X-Forwarded-Proto header. (Laravel also needs the X-Forwarded-For header to get the users' real IP addresses, incidentally.)
By default, Laravel doesn't trust these headers (as in a different setup they might come from a malicious client), so none of the requests will be detected as HTTPS. You can fix this by configuring Laravel's trusted proxies to trust them.
In the default config, just setting $proxies = '*' will do the trick, and it is safe on Heroku because the load balancers can't be bypassed by end users.
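For reference, here is a minimal sketch of the trusted-proxy middleware on Laravel 5.5+ (the base class and header constant below are assumptions that vary by Laravel version, so adapt them to yours):

<?php

namespace App\Http\Middleware;

use Fideloper\Proxy\TrustProxies as Middleware;
use Illuminate\Http\Request;

class TrustProxies extends Middleware
{
    // Trust every proxy; safe on Heroku because requests can only reach
    // the dyno through Heroku's routers.
    protected $proxies = '*';

    // Read the X-Forwarded-* headers (including X-Forwarded-Proto).
    protected $headers = Request::HEADER_X_FORWARDED_ALL;
}

With this in place, url() and the paginator's next_page_url are generated with the https scheme whenever the router forwarded an HTTPS request.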
The correct way is to change the URL of your app to https://example.com in the configuration (the .env file, for example). Just write APP_URL=https://example.com
But when you use Heroku, their load balancers can route a request made to https://yourDomain.com to your application over plain HTTP. So the Laravel app receives the request as http://yourDomain.com and decides that you need a response with HTTP links.
As @seejayoz said, you need to configure the trusted proxies list for your app.
I think you can use withPath() (or its setPath() alias):
$pagi = $answers->simplePaginate(3);
$pagi->withPath("https://link/xxx/");
return $pagi;
I have multiple Laravel sites hosted on the same server. With the latest site I've created, the contact form refuses to submit without throwing a 419 error. I have set up the routing in my web.php file just like the other websites, which have live, working contact forms, and I'm generating and sending the token exactly the same way - with {{ csrf_field() }}.
I found an answer to a similar question stating that you can disable CSRF checking by adding entries to the $except array in app/Http/Middleware/VerifyCsrfToken.php. I have verified that this does indeed resolve the 419 error:
protected $except = [
    'contact',
    'contact*',
];
But of course I wish to keep the CSRF functionality; I only updated the $except array for its troubleshooting value.
Does anyone know what may be different about the new Laravel environment that would have this 419 behavior despite passing the generated token? I have tried updating a number of ENV settings and toggling different things, but nothing other than modifying the $except array has had any influence on the issue.
Update
Since there has been a bit of discussion so far, I figured I'd provide some additional info and code.
First, this is an ajax form, but don't jump out of your seat just yet. I have been testing the form both with and without ajax. If I want to test with ajax, I just click the button that's hooked up to the jQuery listener. If not, I change or remove the button's ID, or run $("#formName").submit(); in the console window.
The above (ajax, old-fashioned submit, and the jquery selector with .submit();) all result in the exact same response - a 419 error.
And for the sake of completeness, here's my ajax code, which is working on all of the other websites I'm hosting. I define a postData array to keep it all tidy, and I added a console.log() statement directly after it to (again) confirm that the token is generated just fine and is being passed correctly with the request.
var postData = {
    name: $("#name").val(),
    email: $("#email").val(),
    message: $("#message").val(),
    _token: $("input[name=_token]").val()
};
console.log(postData);
$.post("/contact", postData, function (data) {
...
Any ideas? Could there be a configuration issue with my ENV or another file?
Progress Update!
Because the other sites are working just fine, I cloned an old site and simply overwrote the files that I changed for the new website, and bam! It's working now. Doing a little bit more digging, I ran php artisan --version on the cloned version of the site versus the non-working version, and here are the results:
Working Version: Laravel Framework 5.7.3
Non-working Version: Laravel Framework 5.7.9
Perhaps this is a bug with Laravel? Or perhaps some packages on my server are out of date and need to be updated to work with the new version of Laravel?
TLDR: This post contains lots of potential issues and fixes; it is intended for those scouring for related bonus information when stuck.
I just encountered this error using Laravel Sanctum with what looks like improperly set up middleware. Sanctum uses the auth:sanctum middleware for its guard, which is an extension of the auth guard that Laravel uses as the default, but the session is handled by the web middleware group.
I can't exactly verbalize some of this internal-Laravel stuff; I am more experienced with JavaScript than PHP at the moment.
In my api.php file, I had the login/register/logout routes, and in my Kernel.php file, I copied \Illuminate\Session\Middleware\StartSession::class, from the web group into the api group.
I had to do that to fix my login unit test, which was throwing an error about "Session store not set on request". Copying that allowed my postJson request to work in the unit test, but sometime later I started seeing 419 CSRF errors posting from the JavaScript app (which is bad because it worked fine earlier).
I started chasing a filesystem permission red herring in the /storage/framework/sessions folder, but the issue wasn't that (for me).
I later figured out that with Laravel Sanctum and the default AuthenticatesUsers trait, you must use the web guard for auth, and the auth:sanctum middleware for protected routes. I was trying to use the api guard for auth routes and that was central to my 419 errors with the AuthenticatesUsers trait.
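As a hedged sketch of that split (the controller names are assumptions from the default laravel/ui scaffolding, not taken from this app):

// routes/web.php — auth endpoints go through the session-backed "web" guard
Route::post('/login', 'Auth\LoginController@login');
Route::post('/logout', 'Auth\LoginController@logout');

// routes/api.php — protected endpoints sit behind the sanctum guard
Route::middleware('auth:sanctum')->get('/user', function (Illuminate\Http\Request $request) {
    return $request->user();
});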
If anyone gets a 419 while CSRF was working or should work, I recommend doing some \Log::debug() investigations at all the key points in your system where you need these to work (see the logging sketch after this list):
Auth::check()
Auth::user()
Auth::logout()
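A minimal sketch of that logging, dropped temporarily into a controller or route closure (the log keys are just illustrative):

\Log::debug('auth state', [
    'check'      => \Auth::check(),
    'user_id'    => optional(\Auth::user())->id,
    'guard'      => config('auth.defaults.guard'),
    'session_id' => session()->getId(),
]);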
If you get strange behaviour from those, then based on my observations there is something wrong with your config related to sessions, or something wrong with your config related to the web and api guards.
The guards have bearing on the AuthManager guard which maintains state over multiple requests and over multiple unit tests.
This is the best description I found, which took over a week for me to discover:
Method Illuminate\Auth\RequestGuard::logout does not exist Laravel Passport
As a random final example, if your session is somehow generating the CSRF token using data from the web middleware group while your routes are set to use api, they may interpret the received CSRF incorrectly.
Besides that, open Chrome dev tools, go to the Application tab, and look at the cookies. Make sure the XSRF-TOKEN cookie is present and readable by JavaScript (i.e. not httpOnly).
That will allow you to have an Axios request interceptor such as this:
import Cookies from 'js-cookie';
axios.interceptors.request.use(async (request) => {
    try {
        const csrf = Cookies.get('XSRF-TOKEN');
        request.withCredentials = true;
        if (csrf) {
            request.headers.common['XSRF-TOKEN'] = csrf;
        }
        return request;
    } catch (err) {
        throw new Error(`axios# Problem with request during pre-flight phase: ${err}.`);
    }
});
That is how my current Laravel/Vue SPA is working successfully.
In the past, I also used this technique here:
app.blade.php (root layout file, document head)
<meta name="csrf-token" content="{{ csrf_token() }}">
bootstrap.js (or anywhere)
window.axios = require('axios');
window.axios.defaults.headers.common['X-Requested-With'] = 'XMLHttpRequest';
const token = document.head.querySelector('meta[name="csrf-token"]');
if (token) {
    window.axios.defaults.headers.common['X-CSRF-TOKEN'] = token.content;
} else {
    console.error('CSRF token not found: https://laravel.com/docs/csrf#csrf-x-csrf-token');
}
In my opinion, most problems will stem from an incorrect value in one or more of these files:
./.env
./config/auth.php
./config/session.php
Pay close attention to stuff like SESSION_DOMAIN, SESSION_LIFETIME, and SESSION_DRIVER, and like I said, filesystem permissions.
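As an illustration only (the values are assumptions, not recommendations), these are the session-related .env entries worth double-checking:

SESSION_DRIVER=file
SESSION_LIFETIME=120
# Must match (or be a parent of) the host shown in the browser's address bar
SESSION_DOMAIN=.example.com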
Check your nginx access.log and/or error.log file; they might contain a hint.
Just found your issue on the framework repo.
It is not a Laravel issue: your installation is missing write permissions on the storage folder, so Laravel can't write sessions, logs, etc.
You get a 419 error because Laravel can't write to those files, so it can't create a session, and thus can't verify the CSRF token.
Quick fix: chmod -R 777 storage
Right fix: move your installation to a folder where nginx/apache/your user can actually write.
If you are using nginx/apache, move your app there and give the right permissions on the project (chown -R www-data: /path-to-project)
If you are using php artisan serve, change its ownership to your user: chown -R $(whoami) /path-to-project
You get it, let writers write and you're good.
Probably the domain in your browser's address bar does not match the domain key in the config/session.php config file or SESSION_DOMAIN in your .env file.
I had the same issue, but the problem in my case was https. The form was on an http page, but the action was on https. As a result, the session is different, which was causing the CSRF error.
Run this command:
php artisan key:generate
I used the same app name for staging and prod, with staging being a subdomain of prod. After changing the app name in staging, it worked.
We had this issue, it turned out that our sessions table wasn't correct for the version of Laravel we were using. I'd recommend looking to see if it's being populated or remaining empty (like ours was).
If it's empty, even when you have people visiting the site, I'd say that's what the issue is.
(If you're not using a database to store your sessions, obviously I'd suggest checking wherever you are instead.)
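If it does turn out to be a schema mismatch (and you can afford to recreate the table), one way to rebuild it is to regenerate the migration that ships with Laravel:

php artisan session:table
php artisan migrate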
I'm having issues with a new Laravel app behind a load balancer.
I would like to have Laravel's Auth middleware do its 302 redirects to a relative path like /login instead of the absolute http://myappdomain.com/login it is actually doing.
I only see 301 redirects in the default .htaccess Laravel ships with, which makes me believe the behavior comes from within Laravel itself. Am I wrong?
Can someone point me in the right direction?
If you need to properly determine whether a request was secure when behind a load balancer you need to let the framework know that you're behind a proxy. This will ensure that the route() and url() helpers generate correct URLs and remove the need to create relative redirects which are both not 100% supported by browsers and also won't work properly when serving a webpage from a sub-path.
This is what we use to solve this problem and it's working so far for us:
.env
LOAD_BALANCER_IP_MASK=aaa.bbb.ccc.ddd/xx #Subnet mask
LoadBalanced Middleware
class LoadBalanced {
    public function handle($request, $next) {
        // Only trust forwarded headers when a load balancer subnet is configured.
        if (env("LOAD_BALANCER_IP_MASK")) {
            $request->setTrustedProxies([env("LOAD_BALANCER_IP_MASK")]);
        }

        return $next($request);
    }
}
Then put the middleware in your Kernel.php:
protected $middleware = [
    LoadBalanced::class,
    // .... It shouldn't matter if it's first or last as long as other global middleware don't need it
];
This is a feature available to Laravel because it uses the Symfony request as a base. How this works is that the load balancer forwards some important headers. Symfony currently understands:
protected static $trustedHeaders = array(
    self::HEADER_FORWARDED => 'FORWARDED',
    self::HEADER_CLIENT_IP => 'X_FORWARDED_FOR',
    self::HEADER_CLIENT_HOST => 'X_FORWARDED_HOST',
    self::HEADER_CLIENT_PROTO => 'X_FORWARDED_PROTO',
    self::HEADER_CLIENT_PORT => 'X_FORWARDED_PORT',
);
which have information regarding the user making the request to the load balancer and the protocol used.
Also according to framework comments:
The FORWARDED header is the standard as of rfc7239.
The other headers are non-standard, but widely used
by popular reverse proxies (like Apache mod_proxy or Amazon EC2).
Update:
Since version 5.5, the Laravel boilerplate package includes the TrustedProxy middleware which uses the fideloper/TrustedProxy package.
To have it working you need to make sure it's in the $middleware array in your App\Http\Kernel class and that you place the IPs of the trusted proxies in this middleware, e.g.
protected $proxies = [
    '1.2.3.4',
];
I would highly recommend to explicitly specify which forwarded headers your proxy sends e.g.:
protected $headers = Request::HEADER_X_FORWARDED_AWS_ELB;
if you're using an AWS load balancer.
The reason this is quite important is that if you are using an AWS load balancer, someone could craft a request with the Forwarded header, which will be forwarded by AWS and then processed by the middleware, essentially allowing users to spoof their IP, host, port, etc.
I use Lumen 5.4.
This is how my route is setup:
$app->get('/ip/{ip}', GeoIpController::class . '#show');
The {ip} route parameter should be an IP address, with dots in it. However, it seems there is a problem when a route has dots in it. It returns a 404 not found error.
I am aware I could pass the IP address in as a simple GET request parameter, but want the IP to be part of the URL and to be handled like a route parameter.
For testing purposes, I use php -S localhost:8080 -t public to serve the application.
This is a limitation on PHP's built in server, not with Lumen (or Laravel, or Slim, or any other frameworks/apps with a router). You can view the PHP bug report here.
Basically, if the URL has a dot in it after the script name, the built-in server treats the request as a static file request and never actually attempts to run it through the application.
This request should work fine on a real web server (apache, nginx), but it will fail when run on PHP's built-in development web server.
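One common workaround, sketched here with an assumed file name, is to pass a router script to the built-in server so every request is funnelled through the framework's front controller:

<?php
// server-router.php — run with: php -S localhost:8080 -t public server-router.php
$uri = urldecode(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH));

// Let the built-in server handle real static files directly...
if ($uri !== '/' && file_exists(__DIR__ . '/public' . $uri)) {
    return false;
}

// ...and send everything else (including /ip/127.0.0.1) to the application.
require_once __DIR__ . '/public/index.php';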
I'm new to Laravel, I have learned about the models, views, blade, controllers and routes and how they work together. So far everything has been working smoothly.
I'm having trouble with sessions though.
When I use the AuthController that comes with Laravel and hit auth/register with a POST request, the data for the user that I register does get inserted into the users table (using mysql) and I do get back a response with the "Location" header redirecting to / like it does out of the box. It redirects like it should. But in the same response there is no "Set-Cookie" header set/sent. The session part of Laravel is not working properly for me. This is the same for a POST to auth/login, it authenticates properly and redirects to the profile page but no session cookie is sent back in the response.
I'm using:
Laravel 5.2.11
PHP 5.5.9
xubuntu 14.04 (Ubuntu)
Linux kernel 3.19.0-42-generic
Composer 1.0
All of the php modules that Laravel requires are installed. I'm running the app with php's built in web server. I run that with sudo. The exact command I run is this:
sudo php -S localhost:8888 -t public/
All routes are being responded to properly.
I have tried both ways of installing Laravel that the installation docs recommend, through the laravel executable and composer create-project. Still no cookies set either way. I have made all the files and directories of the laravel project mod 777. The app key is set in .env if that makes any difference.
The config/session.php file is using the file driver for the session.
There are no session files in the storage/framework/sessions directory after setting a session.
When I try setting a session myself with the session function like it states in the docs:
session(['sesskey' => 'somevalue']);
Again, no "Set-Cookie" header is sent in the response and no session file is created. There are no error messages reported back either, I should add.
When I do set a session key with the session function like above I can get that value back however and echo it back to the browser like so:
echo session('sesskey');
So it does seem to save it at least in php's memory.
When I try setting a cookie using the withCookie method, I do get the proper response with the Set-Cookie header set:
return response()->view('welcome')->withCookie(cookie("test", "val" , 3600));
I tried going down the illuminate rabbit hole to see if I could find a problem but that is a bit over my head atm.
Any help would be much appreciated, thanks!
In Laravel 5.2 you need to use the "web" middleware group for this, like so:
Route::group(['middleware' => ['web']], function () {
    //
});
Or use the \Illuminate\Session\Middleware\StartSession::class middleware for the request:
Route::group(['middleware' => [\Illuminate\Session\Middleware\StartSession::class]], function () {
});