Configuring Laravel 4 with AWS ElastiCache Memcached

I have an Amazon ElastiCache Memcached node (just one).
I have a web server in the same region.
The cache subnet group's VPC ID is the same as the EC2 instance's, and the permissions are set properly from the AWS perspective.
In Laravel, in config/cache.php:
'driver' => 'memcached',
and
'memcached' => array(
    array('host' => 'xxxxx.xxxx.xxx.xxxx.cache.amazonaws.com', 'port' => 11211, 'weight' => 100),
),
However, Cache::has('key') and Cache::add('key', 'value', $minutes) do not work.
Do I need a special package for Laravel to work with AWS ElastiCache? I only have one node and do not need auto-discovery.
Thanks
P.S. Is there a way to get a log for AWS ElastiCache, or for Laravel? The logs directory is empty.

You should be able to use the elasticache-laravel package, available here: https://github.com/atyagi/elasticache-laravel
Alternatively, check out this post: http://blog.hapnic.com/2013/09/11/Laravel-4-and-ElastiCache/
For your P.S.: ElastiCache logs can be accessed this way:
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/ManagingEvents.html
Your Laravel logs should be in app/storage/logs; if there's nothing in there, check the permissions of the storage directory and make sure it's writable by the web server. Barring that, check the default error log location for your web server (such as /var/log/httpd/error_log), as defined by your server configuration.
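If you want to rule Laravel out entirely, it can also help to test the node directly with the php-memcached extension. A minimal sketch (the endpoint below is a placeholder; substitute your node's address):

<?php
// Direct connectivity probe against the ElastiCache node, bypassing Laravel.
$mc = new Memcached();
$mc->addServer('xxxxx.xxxx.xxx.xxxx.cache.amazonaws.com', 11211);
$mc->set('probe', 'hello', 60);
var_dump($mc->get('probe'));        // expect string(5) "hello"
var_dump($mc->getResultMessage());  // "SUCCESS" means the node is reachable; a connection failure points to security groups/VPC

If this probe fails too, the problem is networking (security groups, subnets), not Laravel.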

Related

HTTP request from a Laravel app on one Apache virtual host to a Lumen app on a sibling virtual host: MySQL Access denied for user 'root'@'localhost' [duplicate]

I am developing an API service that another site I've developed will be using. So locally when building and testing, obviously I want both local copies of the site to work. However, it seems to mix up the environment variables.
For example:
Site A has APP_URL=http://a.local
Site B has APP_URL=http://b.local
I send a GET Request (using Guzzle) from Site A code to http://b.local/test
The /test endpoint in Site B simply dumps out dump(env('APP_URL')).
Result retrieved by Site A is "http://a.local"
Expected result: "http://b.local"
So the code in Site B is running with environment variables loaded from Site A. This is an issue, as Site B cannot access the correct database; it is trying to use Site A's database.
Is this an issue with my local setup (Win10 + WAMP), PHP settings, Laravel settings?
I also encountered this issue, and it is mentioned here. The resolution is to run php artisan config:cache in both projects to cache the configuration from the .env files, or to patch the code from here.
Are you using artisan commands to run both projects on different ports?
php artisan serve --port=8000
php artisan serve --port=8010
You can set Environment variables in either the vhost config OR in an .htaccess file:
SetEnv APP_URL http://b.local
Apart from @Daniel Protopopov's answer above, there is another way, which also works when both Site A and Site B are Lumen.
In short, just rename the DB_DATABASE variable on each side to a different name. Then change the respective variable names in the respective config/<configfilename>.php files.
So on Site A you would have SITE_A_DB_DATABASE in .env and a matching 'database' => env('SITE_A_DB_DATABASE', 'forge'), line in config/database.php.
Then your Site B SITE_B_DB_DATABASE will not be overwritten, because the variable names are different.
The same solution applies to any .env variables whose names match.
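For symmetry, Site B's config/database.php would then reference its own variable; a sketch following the same pattern:
'database' => env('SITE_B_DB_DATABASE', 'forge'),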
Because php artisan config:cache does not work here (a closure in a config file cannot be serialized):
LogicException : Your configuration files are not serializable.
I added phpdotenv with Composer:
composer require vlucas/phpdotenv
Then, at the beginning of bootstrap/app.php (after new Illuminate\Foundation\Application), I added:
$app->detectEnvironment(function () {
    $dotenv = Dotenv\Dotenv::create(__DIR__ . '/../', '.env');
    $dotenv->overload();
});
Maybe an alternative
If you are calling a Lumen 8 API from within a Laravel 6 application using GuzzleHttp and the Laravel env is being inherited by Lumen, creating a config file worked for me.
In bootstrap/app.php, comment out the lines below to prevent loading the current env values from Laravel:
// (new Laravel\Lumen\Bootstrap\LoadEnvironmentVariables(
// dirname(__DIR__)
// ))->bootstrap();
In bootstrap/app.php, add the line below after $app has been created:
$app->configure('database');
Create config/database.php in the Lumen root folder. Return all env values needed for the Lumen API as an array in the config file:
<?php
return [
    'timezone' => 'UTC',
    'default' => 'pdbmysql',
    'connections' => [
        'pdbmysql' => [
            'driver' => 'mysql',
            'host' => 'localhost',
            'port' => '3306',
            'database' => 'db2',
            'username' => 'root',
            'password' => 'root',
        ],
    ],
];
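With that file in place, the Lumen code reads these values through the config helper instead of env(); for example:
$host = config('database.connections.pdbmysql.host'); // 'localhost'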

How to set maintenance mode for Laravel 5.8 behind a load balancer on an EC2 instance, and access it from my office?

I want to use Laravel's maintenance mode on an EC2 instance behind the load balancer, because I do not want to touch the AWS console to return maintenance content.
Moreover, I want to access my app via web browser from my office while in maintenance mode.
I did the following, and it goes into maintenance mode.
But I can not see my app from my office, although the IP at my office is in the allow list.
php artisan down --allow=127.0.0.1 --allow=myip/34
Do you have any suggestions for this?
Here is my environment information:
PHP: 5.7
Laravel: 5.8
Also, I have the following source code in app/Http/Middleware/TrustProxies.php:
class TrustProxies extends Middleware
{
    protected $proxies = '*';

    protected $headers = [
        Request::HEADER_FORWARDED => 'FORWARDED',
        Request::HEADER_X_FORWARDED_FOR => 'X_FORWARDED_FOR',
        Request::HEADER_X_FORWARDED_HOST => 'X_FORWARDED_HOST',
        Request::HEADER_X_FORWARDED_PORT => 'X_FORWARDED_PORT',
        Request::HEADER_X_FORWARDED_PROTO => 'X_FORWARDED_PROTO',
    ];
}
Regards,
Since you're behind a load balancer, you'll be receiving the IP of that load balancer rather than the client IP.
TL;DR
In your app/Http/Middleware/TrustProxies.php, change the protected $proxies; line to:
protected $proxies = '*';
Since Laravel 5.4, there is an out-of-the-box way of dealing with this: TrustedProxy. If you're using an earlier version of Laravel, you can still use the package, but you'll have to install it yourself.
Where possible, you should set the IP addresses of the reverse proxy explicitly; however, this isn't possible with AWS, since the IP addresses of the load balancer change all the time (source: https://github.com/fideloper/TrustedProxy/wiki/IP-Addresses-of-Popular-Services#aws-elastic-load-balancers).
For more information, you can refer to the Laravel documentation Configuring Trusted Proxies or the Github page for the underlying package.
Instead of modifying your middleware code there, it is better to put this into your configuration. Add a config/trustedproxy.php file with the following code in it:
<?php
return [
    'proxies' => env('TRUSTED_PROXIES'),
];
Then add the following line to your .env file:
TRUSTED_PROXIES="*"
Also, a side note: it is generally not a good idea to trust all proxies, because people can just fake the X-Forwarded-For header. Instead, you can put in the IPs of your private subnet, whatever it is in your EC2 VPC. It will be one of the RFC 1918 private networks; by default it is one of the 172.x.x.x subnets. You can then substitute the "*" in the code above with something like "172.16.0.0/12", or whatever your private subnet is (you can look it up in your VPC settings).
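Note that env() returns a single string; if you need to trust several CIDR ranges, one way (a sketch, keeping the bare "*" for the wildcard case) is to split the value in the config file:

<?php
return [
    // e.g. TRUSTED_PROXIES=172.16.0.0/12,10.0.0.0/8 in .env
    'proxies' => env('TRUSTED_PROXIES') === '*'
        ? '*'
        : array_filter(explode(',', env('TRUSTED_PROXIES', ''))),
];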

Laravel not publishing to Redis

I am trying to implement Redis publishing in my local RESTful API, which is built in Laravel, for the purposes of implementing a chat system with WebSockets later on. I intend to read the messages from a Node.js server.
I am using Redis::publish to publish a simple message to my test-channel.
However, for some reason Laravel doesn't seem to publish to it.
I have also noticed that when I call Redis::set, whatever I set doesn't seem to get persisted in Redis, yet using Redis::get I can read back the values that I'm setting.
public function redis(Request $request) {
    $data = $request->validate([
        'message' => 'string|required'
    ]);

    Redis::publish('test-channel', 'a test message');

    return 'Done';
}
I am using the code above in the api/redis route:
Route::post('/redis', 'API\MessageController@redis');
I have subscribed to the test-channel using the redis-cli command.
If I manually publish a message to the test-channel using the redis-cli in a terminal instance, I properly receive the messages that I am publishing. However, they don't seem to get published with Laravel for some reason.
What I can notice while running php artisan serve and visiting the aforementioned route is Laravel logging the following:
[*timestamp*] 127.0.0.1:39448 Accepted
[*timestamp*] 127.0.0.1:39448 Closing
The port after 127.0.0.1 appears to be random.
I tried both the phpredis PHP extension and the predis package, just to be sure it isn't either of them, but I get the same result with both. I am currently using phpredis with both the igbinary and redis extensions enabled in /etc/php/config.d, and I have removed the Redis alias from config/app.php.
I am using PHP 7.4, Laravel 6.0 and Redis 5.0.7 on Manjaro.
Been there, discovered that. Running:
$ redis-cli
psubscribe *
will show you what's going on.
Chances are that your default config/database.php contains something like:
'redis' => [
    'client' => env('REDIS_CLIENT', 'predis'),
    'options' => [
        'cluster' => env('REDIS_CLUSTER', 'redis'),
        'prefix' => env('REDIS_PREFIX', Str::slug(env('APP_NAME', 'laravel'), '_').'_database_'),
    ],
    // ...
],
In that case, the channel name will be prefixed with this prefix option.
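With the default prefix above (APP_NAME=laravel yields laravel_database_), the psubscribe session would print something like this when Laravel publishes:
1) "pmessage"
2) "*"
3) "laravel_database_test-channel"
4) "a test message"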
So you can just comment out this option, or, if you keep it, be sure to subscribe to the right channel:
Redis::publish('test-channel', 'a test message');

$prefix = config('database.redis.options.prefix');
$channel = $prefix . 'test-channel';
return "Done. (published on $channel)";

Laravel ENV variable collision in a Kubernetes cluster

I came across a very specific case using the Laravel framework as part of a Kubernetes cluster. These are the facts you need to know:
I've created a Docker container for caching called redis
I've created a Docker container for application called application
These two work together in a Kubernetes cluster
Kubernetes sets ENV variables in each Docker container. Commonly, one is called {container-name}_PORT. Therefore, Kubernetes has created the ENV variable REDIS_PORT in my application container, set to something like tcp://{redis-container-ip}:{redis-container-port}.
Laravel uses an ENV variable with the same name, but as a standalone port value like 6379. In this specific case, Redis does not work in Laravel because of the overwritten REDIS_PORT variable: the framework tries to reach Redis at this example host string inside Kubernetes: tcp://redis:tcp://10.7.240.204:6379. The Laravel logic behind it is {scheme}://{REDIS_HOST}:{REDIS_PORT}; as you can see, REDIS_PORT is filled with tcp://10.7.240.204:6379.
What is preferable to solve the issue?
In my opinion, Kubernetes uses the {container-name}_PORT ENV variable in an unfortunate way, but I do understand the internal logic behind Kubernetes ENV variables.
At the moment, I have changed my config/database.php configuration in Laravel, but this means reviewing the changelog on every update.
Some of other details can be read here: https://github.com/laravel/framework/issues/24999
@Florian's reply to himself on GitHub:
My solution was to change the config in config/database.php like this:
'redis' => [
    'client' => 'predis',
    'default' => [
        'scheme' => 'tcp',
        'host' => env('REDIS_SERVICE_HOST', env('REDIS_HOST', '127.0.0.1')),
        'port' => env('REDIS_SERVICE_PORT', env('REDIS_PORT', 6379)),
        'password' => env('REDIS_PASSWORD', null),
        'database' => 0,
    ],
],
Now the config first checks whether REDIS_SERVICE_HOST and REDIS_SERVICE_PORT are present as ENV variables. This is the case if you have a container called redis in a Docker/Kubernetes cluster.
The advantage of this solution is that REDIS_SERVICE_HOST returns the IP address of the container, not a hostname, so there is no DNS resolution anymore for these internal connections.

Laravel under a load balancer + centralized redis session server

I have 2 Laravel nodes running on separate servers under a load balancer, and a dedicated Redis server for session and cache storage.
I configured the session and cache drivers accordingly to "redis", and it connects just fine. I can see keys being stored on the Redis server.
The issue is that when I try to log in, the page just gets refreshed, without printing the "Invalid credentials" errors that are normally stored in the session.
Since the load balancer keeps redirecting from one node to the other, the session is somehow getting lost. On a single instance it works just fine, though. Has anyone had the same issue with Laravel and load balancing?
If there is a possible fix without configuring the balancer to use sticky sessions, that would be great!
Thanks in advance!
I think the TrustedProxy package solves your issue. Install it and then just add your proxy to config/trustedproxy.php:
return [
    'proxies' => [
        '192.168.10.10',
    ],

    // These are defaults already set in the config:
    'headers' => [
        (defined('Illuminate\Http\Request::HEADER_FORWARDED') ? Illuminate\Http\Request::HEADER_FORWARDED : 'forwarded') => 'FORWARDED',
        \Illuminate\Http\Request::HEADER_CLIENT_IP => 'X_FORWARDED_FOR',
        \Illuminate\Http\Request::HEADER_CLIENT_HOST => 'X_FORWARDED_HOST',
        \Illuminate\Http\Request::HEADER_CLIENT_PROTO => 'X_FORWARDED_PROTO',
        \Illuminate\Http\Request::HEADER_CLIENT_PORT => 'X_FORWARDED_PORT',
    ]
];
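Independently of the proxy setup, make sure both nodes share the same session configuration. A sketch of the relevant .env keys (values are placeholders; APP_KEY in particular must be identical on every node, because session cookies are encrypted with it):

SESSION_DRIVER=redis
CACHE_DRIVER=redis
REDIS_HOST=10.0.0.5
APP_KEY=base64:same-key-on-every-node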
