I have a very weird problem.
On my local machine (Windows 8, XAMPP) the Laravel filters work as they should, but on the server (Ubuntu with Apache) they don't.
Route::filter('test_filter', function($request) {
echo 'Inside filter<br />';
});
Route::get('test_server2', array('before' => 'test_filter', function() {
return 'After filter<br />';
}));
When I run this from my local server, the output is:
Inside filter
After filter
When I run the same script from the web, I get:
After filter
As you can see, the filter is not applied; it is never executed, and it is not a random or temporary thing.
I noticed this problem in a large application of mine and created this simple snippet to check whether the basics work, but they don't.
Does anyone know why filters may not be executed?
I've checked the routing classes in the source code of Laravel and I haven't found anything that might help to solve my issue.
First: always state the specific version of Laravel you are using and where everything is executed.
Some code is executed only in certain environments. If you defined your filter in app/start/artisan.php or app/start/local.php, it will not run when your application executes on a server whose environment is set to production. We cannot help if you do not specify exactly where your lines of code are defined.
Last but not least: try to reduce the differences between your development and production environments. I recommend using Vagrant, and possibly Laravel Homestead. That way you can develop Laravel applications on Windows and run them in a virtual Ubuntu environment.
Hope this helps.
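As a sketch of the advice above (assuming Laravel 4's stock directory layout), filters belong in app/filters.php, which is loaded in every environment, and you can print the detected environment to verify what the server thinks it is:

```php
<?php
// app/filters.php -- loaded in every environment in a stock Laravel 4 app,
// unlike app/start/local.php or app/start/artisan.php.
Route::filter('test_filter', function()
{
    echo 'Inside filter<br />';
});

// Temporarily, anywhere in bootstrap code: check which environment
// Laravel detected. If this prints "testing", route filters are
// silently disabled (see the accepted answer below).
var_dump(App::environment());
```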
The user Matt Burrow was right. Filters are not executed when the environment is called "testing"; I had to change it to something else to get them working.
When your application is in 'testing' mode, route filters are disabled, because 'testing' is reserved for unit testing.
I found this in this issue.
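For reference, since Laravel 4 turns filters off in the 'testing' environment, a unit test has to opt back in explicitly. A minimal sketch, assuming the stock TestCase and the route from the question:

```php
<?php
// app/tests/FilterTest.php -- a sketch; Route::enableFilters()
// re-activates the route filters that Laravel 4 disables whenever
// the environment is 'testing'.
class FilterTest extends TestCase {

    public function testBeforeFilterRuns()
    {
        Route::enableFilters();

        $response = $this->call('GET', 'test_server2');

        // With filters enabled, the before filter's echo is included.
        $this->assertContains('Inside filter', $response->getContent());
    }
}
```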
Related
I'm facing a weird issue with Laravel where routes with multiple parameters (both mandatory and optional) aren't working.
Environment Information
Local: Windows, XAMPP, PHP 7.3
Production: Ubuntu 18.04, PHP 7.4
Initially I suspected an issue with the .htaccess file, but that does not seem to be the problem.
Everything works perfectly on my local machine, but for some reason it doesn't work on the Ubuntu server.
The following code works perfectly.
Route::any('route/me/','Tst#routeme');
However, none of the following work:
Route::any('route/me/here/','Tst#routeme');
Route::any('route/me/here/{id?}','Tst#routeme');
Route::any('route/me/here/and/here','Tst#routeme');
Any suggestions on where I can look to fix this, please?
My first suggestion would be to place the route with the most parameters at the top:
Route::any('route/me/here/and/here','Tst#routeme');
Route::any('route/me/here/{id?}','Tst#routeme');
Route::any('route/me/here/','Tst#routeme');
Routes are matched from top to bottom, and whichever route matches first is executed, so the route with the fewest parameters should come last to avoid shadowing the more specific ones.
Second, I would suggest grouping the routes:
Route::prefix('route/me')->group(function () {
Route::get('here/and/here', 'Tst#routeme');
Route::get('here/{id?}', 'Tst#routeme');
Route::get('here', 'Tst#routeme');
});
for better readability.
I can't give you specifics on why this particular scenario is happening, but matching your development and production environments should eliminate these problems in the future.
Homestead Docs
The Homestead vagrant box provided by the Laravel team is a solid choice and well documented. It is an Ubuntu 18.04 / 20.04 machine and can be configured with many add-ons. You can easily configure which version of PHP any given project is using with a single line in the Homestead.yaml file.
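For example, a Homestead.yaml fragment pinning a project to the same PHP version as the production box might look like this (the hostname and path are placeholders):

```yaml
sites:
    - map: myapp.test
      to: /home/vagrant/code/myapp/public
      php: "7.4"   # match the production PHP version per site
```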
Docker Docs
Docker is a little more advanced but very flexible in how it can be configured. Its container design allows you to isolate the dependencies of one project from the next.
Neither is difficult to set up (easily done within a day or two), and both allow you to replicate your production environment almost perfectly.
It will help massively in those "but it works on my machine" moments!
Updating my own question so it might be of some help to others.
This is probably the last thing that someone (or at least I) would think to check. I listed the routes on my server and found that my newly added route was missing:
php artisan route:list
Earlier I had cleared the cache and restarted Apache, but that didn't help. I finally found the following commands to be a lifesaver when routes are cached and aren't working.
So the thing that worked for me was clearing the route cache:
php artisan route:cache
php artisan route:clear
My hearty thanks to #Spholt, #Akbar khan and #Gzai Kun for helping out.
Until yesterday my Phalcon PHP application was running perfectly on PROD, and today it works only on the DEV and LOCAL environments... and I don't have a clue what is going on! The codebase is exactly the same in all environments, and the configs and routes are correct as well.
For example, if I try to reach a custom-defined route such as "/my-custom-route", it always gives me the error message "MyCustomRouteController handler class cannot be loaded". The rest of the routes work fine, e.g. "/contacts", which comes from ContactsController.
As additional information, "/my-custom-route" is implemented through ToolsController and gearAction().
The problem appears only on PROD! On DEV and LOCAL there are no such issues, which is super strange... The LIVE server is Debian with Apache; the DEV server is the same (Debian/Apache), and LOCAL runs Ubuntu/Apache. All versions are the latest: Phalcon Framework 3.4.5, Apache 2.4.41, PHP 7.0.33, MariaDB 10.1.43.
Does anyone have an idea where might be the issue?
My first guess would be a case-sensitivity issue, but since you're running Debian on DEV as well, I don't think that is it.
I'm not sure what changes were made, but maybe you're looking at a file cached by OPcache?
Problem solved! It turned out to be a configuration issue. I use values from an INI file where env, site_url and api_url are defined, and site_url was set without 'www.', which made the custom URLs unavailable.
I have a strange issue with my Laravel 4.2 app.
I have two servers with DirectAdmin installed on both of them.
I use .env.testing.php on one and .env.production.php on the other.
The first (testing) works fine, but on the second one the .env.production.php isn't handled at all.
I made a simple test and added echo 'test'; to the file on both servers: on testing the word 'test' was displayed as expected, but on production nothing happened.
I'll be glad for any tips or solutions - anything that might help with this.
And yes, the server is recognised as production; yes, I tried putenv and getenv to see if they do their job - both work fine.
And no, I have no idea why it's not working :/
Note: You may create a file for each environment supported by your application. For example, the development environment will load the .env.development.php file if it exists. However, the production environment always uses the .env.php file.
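In other words, on the production box the file must be named .env.php, not .env.production.php. A minimal sketch (the key and value are placeholders, not real settings):

```php
<?php
// .env.php -- the file Laravel 4 loads when the environment is
// 'production'. Values returned here become available via getenv().
return array(
    'EXAMPLE_KEY' => 'example-value', // placeholder for illustration
);
```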
I'm trying to set up a Laravel programming environment to finish my master's degree project, but I can't get this to work no matter how hard I try.
I've followed various tutorials; the last one I tried was http://sherriflemings.blogspot.com.es/2015/03/laravel-homestead-on-windows-8.html
and I think I got somewhere, but I get the following error when trying to initialize Vagrant:
Vagrant up error:
and I've confirmed that the file C:\Users\Administrator\homestead\Homestead\scripts\homestead.rb exists and its permissions are correct.
Also, in the error I see C:/Users/Administrator/.ssh/id_rsa (Errno::ENOENT),
but I have other routes defined in my Homestead.yaml.
Is there any other way I can run Homestead or set up a Laravel development environment?
What tutorial would you recommend to get this up and running?
You should generate an SSH key pair to make it work. You will then have two files,
id_rsa and id_rsa.pub, in your C:/Users/Administrator/.ssh/ folder.
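A minimal way to generate that pair, assuming OpenSSH is available (on Windows, e.g. from Git Bash):

```shell
# Create the .ssh folder if it is missing, then generate a 4096-bit RSA
# key pair with an empty passphrase (skipped if a key already exists).
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N "" -q
```

After this, `vagrant up` should find the id_rsa file the error was complaining about.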
While I totally agree with the previous answer, I wanted to provide some more information about your situation.
There is no need to check whether homestead.rb exists at that stage: it is the code that is already running, and it is informing you about a problem it cannot fix by itself.
It is also not a route problem. (To tell the truth, the information provided under the 'sites' key of Homestead.yaml is not about routes; you are listing the projects inside your Homestead virtual machine there, and one site per project is generally enough.) For example:
sites:
    - map: foo.com
      to: /home/vagrant/Code/Foo/public
Your homestead.rb script is blocked while running because the newly created virtual machine's operating system needs a way to trust your machine over SSH. One way of achieving that, and the one Homestead uses, is a public/private key pair. It looks like you don't have such a pair, so the running script can't access the private key, id_rsa.
TL;DR
I get a 404 (Not Found) error when calling an API method (api/auth/authenticated, GET). I only get this on my live server, not locally, and not on any other method.
The Problem
I use CodeIgniter with a REST server library. I have a simple method, api/auth/authenticated (GET), that returns true if the user is logged in and false if not. On the live server this method gives me a 404 (Not Found) error, while other calls to the same API class work: for example, api/auth/login (POST) and api/auth/logout (GET) both work fine.
So how is this possible?
I have tried deleting the .htaccess file, but that didn't help. It can't be a typo, since it works locally. Maybe some setting in Apache? But then why do the other methods work just fine?
I would be grateful for any ideas and hints.
my app
CodeIgniter 3.0.3
CodeIgniter Rest Server
AngularJs 1.5 with ngResource for client-side
my server env
digital ocean droplet
Ubuntu 14.04
PHP 5.6.15
Apache 2.4.7
The worst problems are always the most stupid ones...
There should be a checklist on my desk:
1. Check for typos (very important, but not the problem here)
If you're working on Windows or macOS and everything works fine, but some parts just don't work on Linux, this next one is very important:
2. Check filenames for case-sensitivity problems (and also check the Git config)
The problem was still a bit harder to find. I use Git to push my live-deploy branch to my remote repository, which is then picked up by Deploybot and uploaded to my server. By default Git ignores case, so files keep whatever casing they had when they were first added. Thanks to this post, I could change that with one command:
git config core.ignorecase false
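Alternatively, a case-only rename can be recorded explicitly with `git mv`, which works even while core.ignorecase is still true. A sketch in a throwaway repository (the file names are placeholders for illustration):

```shell
# Demonstration in a throwaway repository: a case-only rename can be
# recorded explicitly with `git mv`, even when core.ignorecase is true.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"
echo "hello" > readme.MD
git add . && git commit -q -m "initial commit"

# The case-only rename that a plain filesystem rename would miss:
git mv readme.MD Readme.md
git commit -q -m "Fix filename casing"
git ls-files   # now shows Readme.md
```

After the commit is pushed, the case-sensitive Linux checkout on the server finally matches the casing your code expects.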