I am trying to set up a service in my Laravel application that uses a third-party library to connect to a provider.
Its code goes as follows:
$connection = new CustomConnection();
$connection->refresh();
$connection->sendMessage('user#myapp.com', ['message'=>'something', 'ttl'=>3600]);
$connection->refresh();
$connection->sendMessage('user2#myapp.com', ['message'=>'something', 'ttl'=>3600]);
$connection->close();
My goal is to keep the connection open while sending messages via a Laravel queue worker.
Something like this: the queue worker establishes
$connection = new CustomConnection();
$connection->refresh();
then executes $connection->refresh() every 5 seconds, and whenever a job is added to the queue it executes the
$connection->sendMessage('user#myapp.com', ['message'=>'something', 'ttl'=>3600]);
$connection->refresh();
block of code.
I have no clue how Laravel's core queue works in the backend, whether I can override its functionality, or how.
Thanks.
In your service provider, register the connection (or a service that uses the connection) as a singleton. Declare it as a dependency of your job, and all your jobs will share the same connection/service instance for the lifetime of the queue worker.
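A rough sketch of that idea (the class and job names here are made up, not from your post):

// In a service provider's register() method:
$this->app->singleton(CustomConnection::class, function () {
    $connection = new CustomConnection();
    $connection->refresh();

    return $connection;
});

// In the job class, type-hint the connection in handle(); the container
// injects the same singleton instance for every job this worker processes.
class SendProviderMessage implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $recipient;
    public $payload;

    public function __construct($recipient, array $payload)
    {
        $this->recipient = $recipient;
        $this->payload = $payload;
    }

    public function handle(CustomConnection $connection)
    {
        $connection->sendMessage($this->recipient, $this->payload);
        $connection->refresh();
    }
}

With a non-sync queue driver and a long-running php artisan queue:work process, the singleton closure runs once per worker process, so the connection is reused across jobs.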
There's no way to execute $connection->refresh() every five seconds. If the purpose of this call is some kind of heartbeat/healthcheck, listen for the queue-related events and use those instead. A combination of JobProcessing, JobProcessed, JobFailed and Looping will allow you to execute code before and after jobs run. You can use these to decide whether you should call $connection->refresh(), for example if at least five seconds have passed since the last invocation.
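For example, a hedged sketch using the Looping event (the five-second threshold and the singleton binding from above are assumptions):

// In a service provider's boot() method
// (with "use Illuminate\Support\Facades\Queue;" at the top of the file):

$lastRefresh = 0;

// Looping fires each time the worker is about to poll for the next job,
// so this runs repeatedly while the worker is alive - between jobs,
// not on an exact 5-second timer.
Queue::looping(function () use (&$lastRefresh) {
    if (time() - $lastRefresh >= 5) {
        app(CustomConnection::class)->refresh();
        $lastRefresh = time();
    }
});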
There's no event you can use to run code when a job is dispatched.
Do not attempt to override the internal workings of the queue system. There are no promises of backward compatibility between Laravel releases, and you'd have to keep track of every (possibly subtle) change introduced upstream.
Allows persisting database sessions between queue jobs
This pull request allows persisting database sessions between queue jobs. To opt in to this behavior, users simply need to set the VAPOR_QUEUE_DATABASE_SESSION_PERSIST environment variable to true. This can make a very simple job that uses the database at least once up to 45% faster in a 512MB Lambda function.
https://github.com/laravel/vapor-core/pull/97
My problem is more of a fallback, I think. I now have two queued jobs in my Laravel job queue, and I am using the database driver. The first command creates credentials for my user on another site via API calls, and the second sends an email for verification and 2FA. There is also another command that updates my unit conversion rate.
protected function schedule(Schedule $schedule){
$schedule->command('update:conversionRate')->everyFiveMinutes();
$schedule->command('queue:work')->everyMinute();
}
Jobs are added to my queue using the dispatch helper and the ShouldQueue interface: the API call uses the dispatch function, while the email uses ShouldQueue.
Now it works, in that I can see the jobs in my database. But when the server's cron job runs it crashes, and my log file shows that my MySQL user has reached its maximum connection limit, so nobody can access the database using that user account.
So my question is: how do I set up the cron job and queue:work so that it does not crash the server?
As I understand it, your problem is the maximum number of connections to the database.
The first solution, though not the best one, is to increase the database connection limit.
The second solution concerns the queue itself. Have you tried using a driver other than the database, for example Redis or Beanstalkd?
You also run the command every minute. It's bad practice to use a cron job to run queue workers; use Supervisor for this instead.
Also, try using the queue:work command's parameters.
Example
php artisan queue:work --sleep=3 --tries=3 --daemon
--sleep=3 makes the worker pause for that many seconds between polls when there are no queue items to process.
--tries=3 means that if an item fails to process for some reason, the worker will attempt it up to 3 times before moving on to the next item; by default it will keep retrying indefinitely.
Experiment with these options.
In my Laravel 5.4 web app a user can request report generation that takes a couple of minutes because of the large amount of data. Because of this he cannot work with the application any more until the report has been generated. To fix this problem I read about queues in Laravel and moved my report generation code into a job class, but my app still blocks until the report is generated. How can I fix that?
To be absolutely clear I will sum up my problem:
The user makes a request for report generation (my app completely blocks at this moment).
My app receives the POST request in the routes and calls a function on the controller class.
The controller's function dispatches a job that should generate the report and put it into the client's web folder.
It sounds like you have already pretty much solved the problem by introducing a queue. Put the job in the queue, but don't keep track of its progress - allow your code to continue and return to the user. It should be possible to "fire-and-forget", and then either ask the user to check if the report is ready in a couple of minutes, or offer the ability to email it to them when it is completed.
By default, Laravel uses the sync queue driver. This driver executes the queued jobs in the same request as the one they are created in. So this won't make any difference.
You should take a look at the other drivers and use Laravel's queue worker background process to execute jobs, so they don't hold up the web request.
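As a rough sketch (class, route and field names are just placeholders), with QUEUE_DRIVER set to database or redis and php artisan queue:work running in the background:

// app/Jobs/GenerateReport.php
namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class GenerateReport implements ShouldQueue
{
    use Queueable, InteractsWithQueue, SerializesModels;

    private $userId;

    public function __construct($userId)
    {
        $this->userId = $userId;
    }

    public function handle()
    {
        // ... build the report and write it into the client's web folder ...
    }
}

// In the controller: dispatch and return immediately ("fire-and-forget").
public function requestReport(Request $request)
{
    dispatch(new GenerateReport($request->user()->id));

    return response()->json(['status' => 'Report generation has started']);
}

The web request only writes a row to the jobs table; the queue worker does the slow part in the background.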
I'm looking for a solution to add items to a queue and execute them one by one, in a similar manner to Google App Engine's task manager. Each task will be executed via an HTTP request to a PHP script.
As I'm using Amazon, I understood that the best practice is to use the SNS service, which will be responsible for receiving new tasks, adding them to a queue (Amazon's SQS service), and also informing my PHP worker that a new task has been pushed into the queue so it can look for it and execute it.
There are several issues with that method (like the need to limit the number of worker instances from the worker itself, or the possibility that the task won't be in the queue yet when we call the worker, because we add the task to the queue at the same time).
I would like to hear if there are any better options or a nicer way of implementing a task manager. I prefer using Amazon's services, but I'm open to any suggestion; I'm looking for the best method. Features that are missing in Amazon, like FIFO and priority support, would also be a nice addition.
Thanks!
Ben
I have found a good solution.
The AWS Elastic Beanstalk service offers the option to define a new Elastic Beanstalk instance as either a "worker" or a "web server". If you define it as a "worker", you can attach it to an SQS queue and it will be responsible for polling the queue and performing the tasks (with the code you deploy to the instance).
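To give an idea of what the worker side looks like (the HTTP path is configurable and the payload shape below is an assumption): in a worker environment the local daemon pulls each SQS message and POSTs its body to your application, so a "task" ends up being an ordinary PHP request handler, roughly like this:

<?php
// worker-task.php - the endpoint the worker environment's daemon POSTs to.
$task = json_decode(file_get_contents('php://input'), true);

// ... perform the task described by $task ...

// A 2xx response tells the daemon the task succeeded, so it deletes the
// message from the SQS queue; any other response makes it retry later.
http_response_code(200);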
I have implemented a command in my Symfony setup which grabs a job from the DB and then processes it.
How can I run multiple instances of the command at once, to get through the jobs quicker? I know that multithreading is not supported in PHP, but seeing as the command is called from the shell, I was wondering if there is a workaround.
Call command using:
app/console job:process
The way I would solve this is to use a work queue with multiple workers. It's easier to manage and scale than manually running multiple processes and worrying about concurrency.
The simplest general-purpose queue I've found for working with PHP/Symfony is beanstalkd, which you can integrate into Symfony2 with the LeezyPheanstalkBundle.
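As an illustration, a worker might look roughly like the sketch below (Pheanstalk 3.x style API; the tube name and processing code are placeholders). You can start several of these processes side by side, and beanstalkd hands each reserved job to exactly one of them:

<?php
// worker.php - run several of these processes in parallel.
use Pheanstalk\Pheanstalk;

require __DIR__ . '/vendor/autoload.php';

$pheanstalk = new Pheanstalk('127.0.0.1');

while (true) {
    // Blocks until a job is available, then reserves it for this worker only.
    $job = $pheanstalk->watch('process-jobs')->ignore('default')->reserve();

    try {
        $data = json_decode($job->getData(), true);
        // ... process the job (e.g. call the same logic as job:process) ...
        $pheanstalk->delete($job);   // success: remove it from the queue
    } catch (\Exception $e) {
        $pheanstalk->bury($job);     // failure: set it aside for inspection
    }
}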
In general, I'd suggest using the enqueue library. You can choose from a variety of available transports, from the simplest like the filesystem and Doctrine DBAL to real ones like RabbitMQ and Amazon SQS.
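A quick sketch of what that looks like with the filesystem transport (the enqueue/fs package; the path and queue name are arbitrary). The same queue-interop calls keep working if you later swap the transport for RabbitMQ or SQS:

<?php
use Enqueue\Fs\FsConnectionFactory;

require __DIR__ . '/vendor/autoload.php';

$context = (new FsConnectionFactory('file://' . __DIR__ . '/var/queue'))->createContext();
$queue   = $context->createQueue('jobs');

// Producer side: push a message onto the queue.
$context->createProducer()->send($queue, $context->createMessage('{"jobId":123}'));

// Consumer side (normally a separate long-running process, supervised by
// one of the process managers listed below):
$consumer = $context->createConsumer($queue);
while ($message = $consumer->receive(5000)) {   // wait up to 5 seconds
    // ... handle $message->getBody() ...
    $consumer->acknowledge($message);
}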
Regarding the consumers, you need some sort of process manager. There are several options:
http://supervisord.org/ - You need extra service. It has to be configured properly.
A pure PHP process manager like this one, based on the Symfony Process component and plain PHP code. It can handle process reboots, correct exit on the SIGTERM signal, and a lot more.
A PHP/Swoole process manager like this one. It requires the Swoole PHP extension, but its performance is amazing.
I have written a blog post on how to solve this exact problem. https://plume.baucum.me/~/Absolutely/running-multiple-processes-simultaneously-in-a-symfony-command
It is much too long to rehash everything here, but the basic concept is that your command optionally takes a job ID. The command checks whether an ID was given. If not, it grabs all the jobs from the DB, loops over them, and re-invokes itself with the job ID parameter. As each child command is kicked off you store it in an array, and if the array gets too big you sleep, for rate throttling. As commands finish, you remove them from the array.
When the command is run with a job ID, it creates a lock using Symfony's Lock component so that a job cannot accidentally be processed twice at once. It is important that you release the lock when the job either finishes or errors out. Once it has the ID and the lock, it calls whatever code you have written to actually process the job.
Using this technique I have taken commands that took hours to run, because they went through each task synchronously, down to only minutes. Make sure to try different throttles to balance resource utilization against the time it takes to execute your tasks.
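A condensed sketch of that approach (the command name, job lookup, and concurrency limit of 5 are placeholders; it uses the Symfony Process and Lock components as shipped in recent Symfony versions):

// Inside the command's execute() method:

// Parent mode: no job ID argument, so spawn one child process per job.
if (null === $jobId) {
    $running = [];
    foreach ($jobIds as $id) {
        $process = new \Symfony\Component\Process\Process(['php', 'bin/console', 'job:process', $id]);
        $process->start();
        $running[] = $process;

        // Throttle: wait until fewer than 5 children are still running.
        while (count($running) >= 5) {
            sleep(1);
            $running = array_filter($running, function ($p) { return $p->isRunning(); });
        }
    }
    return 0;
}

// Child mode: a job ID was given; lock it so it can't be processed twice at once.
$store   = new \Symfony\Component\Lock\Store\FlockStore();
$factory = new \Symfony\Component\Lock\LockFactory($store);
$lock    = $factory->createLock('job-' . $jobId);

if ($lock->acquire()) {
    try {
        // ... actually process the job ...
    } finally {
        $lock->release();
    }
}
return 0;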
I'd like some help understanding the use of pheanstalk (php beanstalk client). I have a PHP program that is executed on the server when form data is sent to it. The PHP program should then package the form data as a JSON structure and send it to a backend server process.
The thing I don't understand is the connection to the beanstalkd server. Should I create a new Pheanstalk() object each time the PHP program executes, in which case am I incurring the cost of creating the connection each time? And when is the connection closed (since there is no close() method in pheanstalk)?
If the connection is persistent, is it shared among all executions of the PHP program, in which case, what happens in the case of concurrent hits? Thanks for any help.
Yes, you will have to create a new connection with Pheanstalk (or any other library) each time the program runs, since PHP starts each request fresh. The overhead is tiny, though.
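For example (Pheanstalk 3.x style API; the host and tube name are assumptions), the producing script is just:

<?php
use Pheanstalk\Pheanstalk;

require __DIR__ . '/vendor/autoload.php';

$pheanstalk = new Pheanstalk('127.0.0.1');   // no socket is opened yet

$pheanstalk
    ->useTube('form-submissions')            // the connection opens on the first command
    ->put(json_encode($_POST));              // hand the JSON payload to beanstalkd

// No explicit close: the connection goes away when the script finishes,
// or earlier if you unset($pheanstalk) and let the destructor run.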
The beanstalkd process is optimised to handle a large number of connections with ease and acts on them atomically - you won't get a duplicate job unless you put two of the same in there (and even then, they would have different job IDs).
Pheanstalk doesn't send anything to the daemon (not even to open the connection) until the first command is issued. For this reason you can't tell whether the daemon is even alive until you actively make a request (in my tests, I fetch the list of current tubes). If you kept re-using the instantiated class in a long-running program, it would of course keep reusing the same connection.
There's no formal close(), but unset($pheanstalk) would do the same thing by running the destructor. Again, the program is so transient, and the daemon can keep so many concurrent connections open (if it's allowed to), that it's not an issue - the connection will be shut down as the program itself finishes.
In short, don't worry. The overhead of connecting and sending data into, or out of, Beanstalkd will probably be a tiny fraction of any work that is done by the worker, or producer, in generating the request/response.