I have a web app which uses Laravel 4.2 with the file session driver. It runs over HTTPS, and all users are stored in one database. We get a lot of reports that users are being randomly logged out, but I can't reproduce the issue in our dev environment. I suspect the garbage collector: could it be cleaning out the wrong session files or something? Should we switch to database session storage?
Here's some of our session config:
'lifetime' => 720,
'expire_on_close' => false,
'lottery' => array(2, 100),
Our php.ini has the gc_maxlifetime set to 43200.
As far as I know, the server runs Debian 7 with no load balancer or extra session managers installed or configured; whatever ships with Debian 7 is what is used.
Thankful for any help!
No, your problem is elsewhere. The lottery does not select sessions to randomly delete; it picks random requests on which to delete EXPIRED sessions, and expired sessions only.
In an ideal world you would have the system delete all expired sessions at some acceptable interval, but the Laravel developers assumed that many people will not actually add php artisan schedule:run to their cron jobs or Scheduled Tasks on the operating system.
So instead of having every user page request run a SQL query like
DELETE FROM sessions WHERE last_activity < [expiry_time]
the default 2-in-100 lottery makes sure this happens only on random HTTP requests, at the expense of the occasional unlucky visitor paying a price of roughly a tenth of a millisecond on average.
You can see this in the StartSession middleware, where sessions are cleaned up by the collectGarbage method based on the lottery config (above).
https://github.com/illuminate/session/blob/master/Middleware/StartSession.php
The default configuration is [2, 100]. It means that a random integer between 1 and 100 is chosen; if it's lower than or equal to 2, garbage collection runs. (In other words, each request has a 2% chance of triggering the cleanup of expired sessions.)
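Conceptually, the lottery check boils down to something like this rough sketch (not the exact framework code; it assumes the standard session config keys):

// Roughly 2% of requests trigger garbage collection with 'lottery' => [2, 100].
$lottery = config('session.lottery'); // e.g. [2, 100]

if (random_int(1, $lottery[1]) <= $lottery[0]) {
    // Only sessions idle for longer than the configured lifetime are removed.
    $sessionHandler->gc(config('session.lifetime') * 60);
}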
If you think this is what is happening to you, you can take over the cleanup yourself by turning it into an artisan command that you schedule in Kernel.php:
<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use App\Models\Session; // for the sake of simplicity I am assuming this model exists

class PruneExpiredSessions extends Command
{
    protected $signature = 'sessions:prune';

    protected $description = 'Override default DB session garbage collection';

    public function handle()
    {
        // Delete sessions whose last activity is older than 48 hours.
        Session::where('last_activity', '<', time() - (60 * 60 * 48))->delete();

        $this->info('Successfully deleted expired sessions');

        return Command::SUCCESS;
    }
}
Now, simply in app/Console/Kernel.php:
<?php

namespace App\Console;

class Kernel extends ConsoleKernel
{
    /**
     * Define the application's command schedule.
     *
     * @param  \Illuminate\Console\Scheduling\Schedule  $schedule
     * @return void
     */
    protected function schedule(Schedule $schedule)
    {
        // $schedule->command('inspire')->hourly();
        $schedule->command('telescope:prune --hours=48')->daily();
        $schedule->command('sessions:prune')->daily(); // <-- this is the line
    }

    ...
}
Just make sure to add php artisan schedule:run to your OS Scheduled Tasks/cron jobs so it runs every minute.
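For reference, the typical crontab entry from the Laravel docs looks like this (adjust the path to your project):

* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1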
I want to prevent a user from making the same request twice by using the Symfony Lock component, because right now users can click a link twice (by accident?) and duplicate entities are created. I want to use the Unique Entity Constraint, but it does not protect against race conditions by itself.
The Symfony Lock component does not seem to work as expected. When I create a lock at the beginning of a page and open the page twice at the same time, the lock can be acquired by both requests. When I open the test page in a standard and an incognito browser window, the second request doesn't acquire the lock, but I can't find anything in the docs about this being linked to a session. I have created a small test file in a fresh project to isolate the problem. This is using PHP 7.4, Symfony 5.3 and the Lock component:
<?php

namespace App\Controller;

use Sensio\Bundle\FrameworkExtraBundle\Configuration\Template;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\Lock\LockFactory;
use Symfony\Component\Routing\Annotation\Route;

class LockTest extends AbstractController
{
    /**
     * @Route("/test")
     * @Template("lock/test.html.twig")
     */
    public function test(LockFactory $factory): array
    {
        $lock = $factory->createLock("test");
        $acquired = $lock->acquire();
        dump($lock, $acquired);
        sleep(2);
        dump($lock->isAcquired());

        return ["message" => "testing"];
    }
}
I slightly rewrote your controller like this (with Symfony 5.4 and PHP 8.1):
class LockTestController extends AbstractController
{
    #[Route("/test")]
    public function test(LockFactory $factory): JsonResponse
    {
        $lock = $factory->createLock("test");

        $t0 = microtime(true);
        $acquired = $lock->acquire(true);
        $acquireTime = microtime(true) - $t0;

        sleep(2);

        return new JsonResponse(["acquired" => $acquired, "acquireTime" => $acquireTime]);
    }
}
It waits for the lock to be released and it counts the time the controller waits for the lock to be acquired.
I ran two requests with curl against a caddy server.
curl -k 'https://localhost/test' & curl -k 'https://localhost/test'
The output confirms one request was delayed while the first one slept with the acquired lock.
{"acquired":true,"acquireTime":0.0006971359252929688}
{"acquired":true,"acquireTime":2.087146043777466}
So, the lock works to guard against concurrent requests.
If the lock is not blocking:
$acquired = $lock->acquire(false);
The output is:
{"acquired":true,"acquireTime":0.0007710456848144531}
{"acquired":false,"acquireTime":0.00048804283142089844}
Notice how the second lock is not acquired. You should use this flag to reject the user's request with an error instead of creating the duplicate entity.
If the two requests are sufficiently spaced apart to each get the lock in turn, you can check that the entity exists (because it had time to be fully committed to the db) and return an error.
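As a rough illustration of that approach (the controller action, lock name and responses here are hypothetical, not taken from the question):

// Hypothetical sketch: reject a duplicate submission with a non-blocking lock.
public function create(Request $request, LockFactory $factory): Response
{
    // Key the lock on something that identifies the operation, e.g. the session id.
    $lock = $factory->createLock('create-entity-'.$request->getSession()->getId());

    if (!$lock->acquire(false)) {
        // A concurrent request already holds the lock: refuse instead of duplicating.
        return new Response('Request already in progress', Response::HTTP_CONFLICT);
    }

    try {
        // ... check whether the entity already exists, then create and persist it ...
        return new Response('Created', Response::HTTP_CREATED);
    } finally {
        $lock->release();
    }
}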
Despite those encouraging results, the doc mentions this note:
Unlike other implementations, the Lock Component distinguishes lock instances even when they are created for the same resource. It means that for a given scope and resource one lock instance can be acquired multiple times. If a lock has to be used by several services, they should share the same Lock instance returned by the LockFactory::createLock method.
I understand that two locks acquired from two distinct factories should not block each other. Unless the note is outdated or wrongly phrased, it seems possible to have non-working locks under some circumstances, but not with the above test code.
StreamedResponse
A lock is released when it goes out of scope.
As a special case, when a StreamedResponse is returned, the lock goes out of scope when the response is returned by the controller. But the StreamedResponse has yet to return anything!
To keep the lock while the response is generated, it must be passed to the function executed by the StreamedResponse:
public function export(LockFactory $factory): Response
{
    // create a lock with a TTL of 60s
    $lock = $factory->createLock("test", 60);

    if (!$lock->acquire(false)) {
        return new Response("Too many downloads", Response::HTTP_TOO_MANY_REQUESTS);
    }

    $response = new StreamedResponse(function () use ($lock) {
        // now $lock is still alive when this function is executed
        $lockTime = time();

        while (have_some_data_to_output()) {
            if (time() - $lockTime > 50) {
                // refresh the lock well before it expires to be on the safe side
                $lock->refresh();
                $lockTime = time();
            }
            output_data();
        }

        $lock->release();
    });

    $response->headers->set('Content-Type', 'text/csv');

    // the lock would be released here if it wasn't passed to the StreamedResponse
    return $response;
}
The above code refreshes the lock every 50s to cut down on communication with the storage engine (such as Redis).
The lock remains locked for at most 60s should the PHP process suddenly die.
(Laravel 8, PHP 8)
Hi. I have a bunch of data in the PHP APC cache that I can access across my Laravel application with the apcu commands.
I decided I should fire an async job to process some of that data for the user during a session and throw the results in the database.
So I made a middleware that fires (correctly) when the user accesses the page, and (correctly) dispatches a job called "MemoryProvider".
The dispatch command promptly instantiates the MemoryProvider class, running its constructor, and then queues the job for execution.
About a second later, the queue is processed and the handle method in MemoryProvider is run.
I check the content of the PHP cache with apcu_cache_info() and apcu_exists() in the middleware, in the MemoryProvider constructor, and in its handle method.
The problem:
The PHP cache appears populated throughout my Laravel app.
The PHP cache appears populated in the middleware.
The PHP cache appears populated in the job's constructor.
The PHP cache appears EMPTY in the job's handle method.
Here's the middleware:
public function handle($request, Closure $next)
{
    $a = apcu_cache_info();      // 250,000 entries
    $b = apcu_exists('the:2:0'); // true

    MemoryProvider::dispatch($request);

    return $next($request);
}
Here's the job's (MemoryProvider) constructor:
public function __construct($request)
{
    $this->request = $request->all();

    $a = apcu_cache_info();      // 250,000 entries
    $b = apcu_exists('the:2:0'); // true
}
And here's the job's (MemoryProvider) handle method:
public function handle()
{
    $a = apcu_cache_info();      // 0 entries
    $b = apcu_exists('the:2:0'); // false
}
Question: is this a PHP limitation or a bad Laravel problem? And how can I access the content of my PHP cache in an async class?
p.s. I have apc.enable_cli=1 in php.ini
I found the answer. Apparently, it's a PHP limitation.
According to a good explanation given by gview back in 2017, a CLI process doesn't share state or memory with other CLI processes, so the APC memory space will never be shared this way.
I did find a workaround for my specific case: instead of running an async process to handle the heavy work in the background, I can get the same effect by simply issuing an AJAX request. The request is handled independently by PHP, with full access to the APC cache, and I can populate my database and let the user know when it's all done (or gradually done, as is the case).
I wish I had thought of this sooner.
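For illustration, here is a minimal sketch of that workaround (the route, controller and cache key are hypothetical, not from my actual code):

// routes/web.php (hypothetical route)
Route::post('/process-memory', [MemoryController::class, 'process']);

// In the hypothetical MemoryController
public function process(Request $request)
{
    // This runs in the web SAPI (e.g. PHP-FPM), so apcu_* sees the same cache
    // the middleware saw, unlike a CLI queue worker.
    $exists = apcu_exists('the:2:0'); // true here

    // ... process the APCu data and write the results to the database ...

    return response()->json(['done' => true]);
}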
I have a Laravel cron issue. In the console Kernel I have defined a job which hits the Rollovercron.php file every 10 minutes, and each run is passed one country. At least 100 countries are defined in an array and are passed one by one to Rollovercron.php by a foreach loop. Rollovercron.php takes a minimum of 2 hours to run for a single country.
I have multiple issues with this cron job:
The 100 elements in the array are not getting fetched one by one: I can see the 'GH' country (Ghana) has run 5 times in a row, while many of the countries are skipped.
Whenever I get the country-missing issue, I run composer update and clear the cache frequently.
I want my cron to run smoothly and fetch all countries: not even a single country should be missed, and I should not have to run composer update for this all the time.
Please help me with this; I have been struggling with it for many months.
Below is the Kernel.php file:
<?php

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;
use DB;

class Kernel extends ConsoleKernel
{
    /**
     * The Artisan commands provided by your application.
     *
     * @var array
     */
    protected $commands = [
        \App\Console\Commands\preAlert::class,
        \App\Console\Commands\blCron::class,
        \App\Console\Commands\mainRollover::class,
        \App\Console\Commands\refilingSync::class,
        \App\Console\Commands\TestCommand::class,
        \App\Console\Commands\rollOverCron::class,
        \App\Console\Commands\FrontPageRedis::class,
        \App\Console\Commands\filingStatusRejectionQueue::class,
        \App\Console\Commands\VesselDashboardRedis::class,
        \App\Console\Commands\Bookingcountupdate::class,
        // \App\Console\Commands\Voyagetwovisit::class,
    ];

    /**
     * Define the application's command schedule.
     *
     * @param  \Illuminate\Console\Scheduling\Schedule  $schedule
     * @return void
     */
    protected function schedule(Schedule $schedule)
    {
        $countrylist = array('NL','AR','CL','EC','DE','PH','ID','TT','JM','KR','BE','VN','US','BR','CM','MG','ZA','MU','RU','DO','GT','HN','SV','PR','SN','TN','SI','CI','CR','GM','GN','GY','HR','LC','LR','MR','UY','KH','BD','TH','JP','MM','AT','IE','CH','LB','PY','KE','YT','TZ','MZ','NA','GQ','ME');

        foreach ($countrylist as $country) {
            $schedule->command('rollOverCron:send ' . $country)
                ->everyTenMinutes()
                ->withoutOverlapping();
        }

        foreach ($countrylist as $country) {
            $schedule->command('mainRollover:send ' . $country)
                ->daily()
                ->withoutOverlapping();
        }

        $schedule->command('filingStatusRejectionQueue')
            ->hourly()
            ->withoutOverlapping();

        $schedule->command('Bookingcountupdate')
            ->everyTenMinutes()
            ->withoutOverlapping();

        $schedule->command('preAlert')
            ->hourly()
            ->withoutOverlapping();
    }

    /**
     * Register the Closure based commands for the application.
     *
     * @return void
     */
    protected function commands()
    {
        require base_path('routes/console.php');
    }
}
With Laravel scheduling, knowing how it works helps, so you can debug it when it doesn't work as expected. This does involve diving into the source.
You invoke command() on the scheduler, which returns an event.
Let's check how Laravel decides what counts as overlapping: we see the mutex expires after 1440 minutes, i.e. 24 hours.
So after one day, if the scheduled items have not run, they just stop being scheduled.
We see that a mutex is being used here. Let's see where it comes from. It seems it's provided in the constructor.
So let's see which mutex is being provided. In the exec and call functions, the mutex defined in the Scheduler constructor is used.
The mutex used there is an interface, probably used as a Facade, and the real implementation is most likely in CacheSchedulingMutex, which creates a mutex id using the mutexName from the event and the current time in hours and minutes.
Looking at mutexName, we see that the id consists of the expression and the command combined.
To summarise: all events scheduled in one schedule() call share the same mutex class that is used to check that calls don't overlap, but the mutex generates a unique identifier for each command, including differing parameters, and based on the time.
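For reference, the per-event mutex name in the framework source looks roughly like this (paraphrased; it may differ between Laravel versions):

// Illuminate\Console\Scheduling\Event (roughly)
public function mutexName()
{
    // One name per cron expression + command string, so each country gets its own mutex.
    return 'framework/schedule-'.sha1($this->expression.$this->command);
}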
Your scheduled jobs' overlap mutexes expire after 24 hours, which means that with jobs that take 2 hours to complete, you'll get about 12 jobs completed in a day: more if the jobs are small, fewer if they take longer. This is because PHP runs them one after the other in a single process by default.
First task 1, then task 2, then task 3, and so on. This means that if each task takes 2 hours, then after about 12 tasks the remaining queued jobs expire, because the run has been going for 1440 minutes; then the new jobs are scheduled and it starts again from the top.
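As a side note, newer Laravel versions let you pass the overlap-lock expiry (in minutes) to withoutOverlapping, so a long-running command is not treated as stale after the default 1440 minutes. If your version supports it, something like this sketch:

$schedule->command('rollOverCron:send ' . $country)
    ->everyTenMinutes()
    ->withoutOverlapping(180); // consider the lock stale after 3 hours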
Luckily there is a way to make sure they run simultaneously.
I suggest you add ->runInBackground() to your scheduling calls.
$schedule->command('rollOverCron:send ' . $country)
    ->everyTenMinutes()
    ->withoutOverlapping()
    ->runInBackground()
    ->emailOutputTo(['ext.amourya@cma-cgm.com', 'EXT.KKURANKAR@cma-cgm.com']);
}
My Laravel app is set up with a MultiSite middleware provider that checks the subdomain of the address and, based on this subdomain, changes the connection on the fly to another database.
e.g.
Config::set('database.connections.mysql.host', $config['host'] );
Config::set('database.connections.mysql.database', $config['db_name'] );
Config::set('database.connections.mysql.username', $config['user']);
Config::set('database.connections.mysql.password', $config['password']);
Config::set('database.connections.mysql.prefix', $config['prefix']);
Config::set('database.connections.mysql.theme', $config['theme']);
// purge main to prevent issues (and potentially speed up connections??)
DB::disconnect('main');
DB::purge();
DB::reconnect();
return $next($request);
This all works fantastically, except that I now want to use Laravel queues with the built-in database driver (sync actually works fine, but it blocks the user experience for long report generations).
Except Artisan isn't sure which database to connect to, so I'm guessing it connects to the default, which is a kind of supervisor database that stores all the subdomains and corresponding DB names, etc.
Note that none of these databases are set up in my database config as connections; they're stored in a single management database, as there are quite a lot of them.
I've tried cloning the built-in queue listener and modifying it to swap to the different site connection like so:
/**
 * Create a new queue listen command.
 *
 * @param  \Illuminate\Queue\Listener  $listener
 * @return void
 */
public function __construct(Listener $listener)
{
    // multisite swap
    $site = MultiSites::where('machine_name', $this->argument('site'))->first();
    MultiSites::changeSite($site->id);

    parent::__construct();

    $this->setOutputHandler($this->listener = $listener);
}
But this fails with:
$commandPath argument missing for the Listener class.
Trying a similar database/site swap in the fire() or handle() methods stops the $commandPath error; however, it simply does nothing: no feedback, and it doesn't begin to process any jobs from the database.
I'm at a loss as to how to get this working in a multisite environment. Does anyone have any ideas, or am I going the wrong way about this?
My ideal scenario would be being able to run a single queue command, have Supervisor monitor it, and have it cycle through each database checking for jobs. But I am also willing to spawn a queue command per database/site if necessary.
I have a simple web application that is written with the Laravel 4.2 framework. I have configured the Laravel queue component to add new queue items to a locally running beanstalkd server.
Essentially, there is a POST route that will add an item to the beanstalkd tube.
I then have supervisord set up to run artisan queue:listen as three separate processes. The issue that I am seeing is that the different queue:listen processes will end up spawning anywhere between one and three queue:worker processes for just one inserted job.
The end result being that one job inserted into the queue is sometimes being processed by multiple workers at the same time, something I am obviously trying to avoid.
The job code is relatively simple:
<?php

use App\Repositories\URLRepository;

class ProcessDataJob {

    private $urls;

    public function __construct(URLRepository $urls)
    {
        $this->urls = $urls;
    }

    public function fire($job, $data)
    {
        $input = $data['post'];

        if (!isset($data['post']) || !is_array($data['post'])) {
            Log::error('[Job #'.$job->getJobId().'] $input was empty inside CreateAuditJob. Deleting job. Quitting. Bye!');
            $job->delete();
            return false;
        }

        //
        // ... code that will take a few hours to run.
        //

        $job->delete();
        Log::info('[Job #'.$job->getJobId().'] ProcessDataJob was successful, deleting the job!');
        return true;
    }
}
The fun part is that most of the (duplicated) queue workers fail when deleting the job, with this left in the error log:
exception 'Pheanstalk_Exception_ServerException' with message 'Job 3248 NOT_FOUND: does not exist or is not reserved by client'
The ttr (Time to Run) is set to 172,800 seconds (or 48 hours), which is much larger than the time it would take for the job to complete.
What's the job's time_to_run when it is queued? If running the job takes longer than time_to_run seconds, the job is automatically re-queued and becomes eligible to be run by the next worker.
A reserved job has time_to_run seconds to be deleted, released or touched. Calling touch() restarts the timeout timer, so workers can use it to give themselves more time to finish. Or use a large enough value when queueing.
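As a rough sketch (using the Pheanstalk 2.x API that the error message suggests; the work variables are placeholders), a worker can keep its reservation alive like this:

$pheanstalk = new Pheanstalk_Pheanstalk('127.0.0.1');
$job = $pheanstalk->watch('default')->reserve();

foreach ($chunksOfWork as $chunk) {   // $chunksOfWork / processChunk() are placeholders
    processChunk($chunk);             // one long-running step
    $pheanstalk->touch($job);         // reset the ttr timer so the job is not re-queued
}

$pheanstalk->delete($job);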
I've found the beanstalkd protocol document helpful
https://github.com/kr/beanstalkd/blob/master/doc/protocol.md
Since you are running with Laravel, verify your queue.php configuration file.
Change the ttr value from 60 (default) to something else that is better for you.
'beanstalkd' => array(
'driver' => 'beanstalkd',
'host' => 'localhost',
'queue' => 'default',
'ttr' => 600, //Example
),