I implemented a custom service provider called PermissionServiceProvider. While debugging an issue, I found that its boot method is being called periodically. On our testing server it is invoked every minute, whereas on our production server it seems to be invoked every second (faster than on the testing server).
Is it normal for a service provider's boot method to be invoked periodically? What controls the periodicity and makes the difference between the testing and production servers?
Attached production log snippet:
I have set up EC2 and RDS for an app. Now I want to call a script on the EC2 server (Ubuntu, Apache running) every day (it is a sort of trigger for another service) to run within EC2, or find a way to run that PHP script on Lambda itself, removing EC2 from the picture.
What I did find was about Python scripts, and this: Serverless PHP on AWS Lambda – Rob Allen's DevNotes
Option 1: PHP on AWS Lambda
If you can get PHP working on Lambda, that solves half your question. You can then schedule it using Amazon CloudWatch Events: simply create a rule with a schedule that triggers the Lambda function.
Option 2: Triggering script on Amazon EC2 instance
If you just want to trigger a script on an Amazon EC2 instance, you can use a local cron definition.
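For example, a crontab entry on the instance that runs the PHP script once a day might look like this (the path and time are placeholders, not taken from the question):

# Run the script every day at 08:00 server time
0 8 * * * php /var/www/html/myscript.php >> /var/log/myscript.log 2>&1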
If your intention was to only run the EC2 instance for the script and then turn it off, then:
Configure the script to run when the instance starts up (configure the operating system to run the script)
Configure an Amazon CloudWatch Events rule to run an AWS Lambda function once per day
The Lambda function should start the instance
When the script on the instance has completed its work, it should tell the operating system to shut down the instance. This will cause EC2 to stop it.
Instead of starting and stopping an instance, you could instead Launch and Terminate an instance. In this case, supply the script as User Data and it will automatically run after launch. Configure the instance Shutdown Behavior as Terminate.
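For instance, a minimal User Data script for that launch-and-terminate variant might look like this (the script path is a placeholder, and it assumes Shutdown Behavior is set to Terminate):

#!/bin/bash
# Runs once at launch when supplied as EC2 User Data
php /var/www/html/myscript.php >> /var/log/myscript.log 2>&1
# Shutting down terminates the instance when Shutdown Behavior is Terminate
shutdown -h now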
The way I'm currently doing this is by calling the script over HTTPS from a Node.js Lambda function.
First, I have set up an AWS Lambda function like this:
exports.handler = (event, context, callback) => {
  const https = require('https');
  // The secret token in the query string is what authorises the call
  https.get('https://www.example.com/myscript?secret-token', (resp) => {
    resp.on('data', () => {});                  // drain the response body
    resp.on('end', () => callback(null, 'OK')); // tell Lambda we're done
  }).on('error', (err) => callback(err));
};
The secret token is just so that I'm the only one who can execute this script by going to that URL.
Next, I schedule it to run on an EventBridge trigger, with a schedule expression similar to the following:
cron(0 8 * * ? *)
This makes AWS Lambda call that URL every day at 8 am UTC (5 am local time for me).
I have set up tasks in Kernel.php. I have set up a similar Debian environment locally, and the tasks run as expected. However, on the server, tasks do not run if the withoutOverlapping() method is used. If withoutOverlapping() is not used, tasks run as expected.
Current configuration in Kernel.php:
$schedule->command('perform:task_one')->withoutOverlapping()->everyFiveMinutes();
The task is not fired at all. If I remove withoutOverlapping(), the task is fired.
I have implemented withoutOverlapping() because my task involves some mailing and may be time-consuming at times.
I am trying to set up a service in my Laravel application with a third-party library for connecting to a provider.
Its code goes as follows:
$connection = new CustomConnection();
$connection->refresh();
$connection->sendMessage('user#myapp.com', ['message'=>'something', 'ttl'=>3600]);
$connection->refresh();
$connection->sendMessage('user2#myapp.com', ['message'=>'something', 'ttl'=>3600]);
$connection->close();
My goal is to keep the connection alive while sending messages via a Laravel queue worker.
Something like: when the queue worker starts, it establishes
$connection = new CustomConnection();
$connection->refresh();
and then executes $connection->refresh() every 5 seconds; whenever a job is added to the queue, it should execute the
$connection->sendMessage('user#myapp.com', ['message'=>'something', 'ttl'=>3600]);
$connection->refresh();
block of code.
I have no clue how Laravel's core queue works behind the scenes, whether I can override its functionality, or how.
Thanks.
In your service provider, register the connection (or a service that uses the connection) as a singleton. Declare this as a dependency of your job, and all your jobs will share the same connection/service instance for the lifetime of the queue worker.
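As a rough sketch (the provider class and the App\Services namespace are hypothetical; CustomConnection is the class from the question):

<?php
// app/Providers/ConnectionServiceProvider.php (hypothetical provider)

namespace App\Providers;

use App\Services\CustomConnection; // hypothetical namespace for the third-party wrapper
use Illuminate\Support\ServiceProvider;

class ConnectionServiceProvider extends ServiceProvider
{
    public function register()
    {
        // One instance per PHP process, i.e. one per queue worker lifetime
        $this->app->singleton(CustomConnection::class, function () {
            $connection = new CustomConnection();
            $connection->refresh();
            return $connection;
        });
    }
}

In the job, type-hint the same class in handle() and the container injects the shared instance:

public function handle(CustomConnection $connection)
{
    $connection->sendMessage('user#myapp.com', ['message' => 'something', 'ttl' => 3600]);
    $connection->refresh();
}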
There's no way to execute $connection->refresh() every five seconds on its own. If the purpose of this call is some kind of heartbeat/health check, listen for the queue-related events and use these instead. A combination of JobProcessing, JobProcessed, JobFailed and Looping will allow you to execute code before and after jobs execute. You can use these to evaluate whether you should call $connection->refresh(), e.g. if at least five seconds have passed since the last invocation.
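A sketch of that approach, reusing the hypothetical singleton and provider from above; the five-second threshold is just the interval from the question, and Queue::looping()/Queue::after() are the standard hooks for the Looping and JobProcessed events:

<?php
// boot() method of the same hypothetical provider

namespace App\Providers;

use App\Services\CustomConnection;
use Illuminate\Queue\Events\JobProcessed;
use Illuminate\Support\Facades\Queue;
use Illuminate\Support\ServiceProvider;

class ConnectionServiceProvider extends ServiceProvider
{
    public function boot()
    {
        $lastRefresh = 0;

        $maybeRefresh = function () use (&$lastRefresh) {
            // Only refresh when at least five seconds have passed since the last call
            if (time() - $lastRefresh >= 5) {
                $this->app->make(CustomConnection::class)->refresh();
                $lastRefresh = time();
            }
        };

        // Runs before the worker fetches the next job from the queue
        Queue::looping($maybeRefresh);

        // Runs after each job has finished processing
        Queue::after(function (JobProcessed $event) use ($maybeRefresh) {
            $maybeRefresh();
        });
    }
}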
There's no event you can use to run code when a job is dispatched.
Do not attempt to override the internal workings of the queue system. There are no promises of backward compatibility between different Laravel releases, and you'll have to keep track of all (possibly subtle) changes that are introduced upstream.
Allows persisting database sessions between queue jobs
This pull request allows persisting database sessions between queue jobs. To opt in to this behavior, users simply need to set the VAPOR_QUEUE_DATABASE_SESSION_PERSIST environment variable to true. This can make a very simple job that uses the database at least once up to 45% faster on a 512 MB Lambda function.
https://github.com/laravel/vapor-core/pull/97
I am reading the Laravel documentation under the heading Architecture Concepts.
I am unable to understand the application and usage of the Console Kernel (not the HTTP Kernel).
However, I googled and found these links:
https://laravel.com/api/5.2/Illuminate/Foundation/Console/Kernel.html
https://laravel.com/api/5.3/Illuminate/Contracts/Console/Kernel.html
But I can't understand anything from that API reference!
The HTTP Kernel is used to process requests that come in through the web (HTTP). Website requests, AJAX, that kind of stuff.
The Console Kernel is used when you interact with your application from the command line. If you use artisan, or when a scheduled job is processed, or when a queued job is processed, all of these actions go through the Console Kernel.
Basically, if you go through index.php, you'll be using the HTTP Kernel. Almost everything else will be using the Console Kernel.
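For illustration, a minimal app/Console/Kernel.php might look like this (the command class, signature, and schedule here are hypothetical, not from the Laravel docs):

<?php

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    // Artisan commands provided by the application
    protected $commands = [
        \App\Console\Commands\SendReminders::class, // hypothetical command
    ];

    // Run by the scheduler (via `php artisan schedule:run` from cron)
    protected function schedule(Schedule $schedule)
    {
        $schedule->command('reminders:send')->dailyAt('08:00'); // hypothetical signature
    }
}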
I have a Zend Application that is running fine.
I have created a Zend Queue script in my library to run an emailing process for members of the site.
The application has many Models that are working well, but when I try to initiate the application in my queue script, it doesn't run.
The only reason I can see for this is a custom helper that extends Zend_Controller_Action_Helper_Redirector. This redirector checks if https is required.
Without initiating the application, the runaround I have to do to get my queue working is nigh impossible.
In the script I call from Supervisord, I set up my environment and call $application->bootstrap()->run();
I then call the script's class, but it does not venture past the ->run().
The redirector helper calls exit(), which is why it's not working. I'd rewrite the endpoint to remove the call to redirect, or write a different endpoint for consumption by the CLI job handled by supervisord.
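If it helps, here is a minimal sketch of a CLI entry point that bootstraps the application's resources without dispatching the MVC stack at all, so action helpers such as the redirector (and its exit()) never run. The paths and the worker class name are hypothetical:

<?php
// cli-email-queue.php -- hypothetical CLI script started by supervisord

define('APPLICATION_PATH', realpath(__DIR__ . '/../application'));
define('APPLICATION_ENV', getenv('APPLICATION_ENV') ?: 'production');

require_once 'Zend/Application.php';

$application = new Zend_Application(
    APPLICATION_ENV,
    APPLICATION_PATH . '/configs/application.ini'
);

// bootstrap() initialises resources (db, autoloading, etc.) but, unlike run(),
// it does not dispatch the front controller, so no action helpers fire
$application->bootstrap();

// Hypothetical worker class that processes the Zend_Queue emails
$worker = new My_EmailQueueWorker();
$worker->process();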