I am trying to implement an async job in Laravel so I can send an email (using a 3rd-party API) while letting the user continue in the frontend, so the request doesn't wait for the email to be sent.
I am using Laravel 6.18.
So I've created a generic job with php artisan make:job EmailJob.
I've set a sleep of 60 seconds to simulate a long email send.
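Roughly, the job looks like this (a minimal sketch of the Laravel 6 make:job scaffolding; the sleep stands in for the real 3rd-party API call):

    <?php

    namespace App\Jobs;

    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Foundation\Bus\Dispatchable;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;

    class EmailJob implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

        public function handle()
        {
            // Simulate a slow email send through the 3rd-party API.
            sleep(60);
        }
    }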
Then in my controller:

    EmailJob::dispatchAfterResponse();
    return response()->json($obj, 200);
In the Chrome console I can see a 200 response, but the request is still not resolved and no data is returned, so my ajax/axios request keeps waiting for the full response; eventually it times out (60 seconds is too long) and produces an error in the frontend.
So the question is: how do I execute the job after the full response has been sent?
You have to change the queue driver and run queue:work.
The following 2 resources will help you
https://laravel-news.com/laravel-jobs-and-queues-101
https://laravel.com/docs/6.x/queues#connections-vs-queues
Just like Terminable Middleware, this will only work if the web server has FastCGI implemented.
You can go that way, or you can use a queue with the database driver, which is simpler to set up than installing Redis.
You would still need a running process (a worker) to complete the jobs.
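For example, the rough setup with the database driver would look like this (a sketch; it assumes your default database connection is already configured, and that you dispatch with EmailJob::dispatch() rather than dispatchAfterResponse() so the job actually goes onto the queue):

    # .env
    QUEUE_CONNECTION=database

    # create the jobs table, migrate, then start a worker process
    php artisan queue:table
    php artisan migrate
    php artisan queue:work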
In my Laravel 5.4 web app a user can request report generation that takes a couple of minutes due to the large amount of data. Because of this he couldn't work with the application any more until the report was generated. To fix this problem I read about queues in Laravel and moved my report generation code into a job class, but my app still blocks until the report is generated. How can I fix that?
To be absolutely clear I will sum up my problem:
The user makes a request for report generation (my app completely hangs at this moment)
My app receives the POST request in routes and calls a function from the controller class.
The controller's function dispatches a job that should generate the report and put it into the client's web folder.
It sounds like you have already pretty much solved the problem by introducing a queue. Put the job in the queue, but don't keep track of its progress - allow your code to continue and return to the user. It should be possible to "fire-and-forget", and then either ask the user to check if the report is ready in a couple of minutes, or offer the ability to email it to them when it is completed.
By default, Laravel uses the sync queue driver. This driver executes the queued jobs in the same request as the one they are created in. So this won't make any difference.
You should take a look at the other drivers and use the Laravel queue worker background process to execute jobs, so they don't block the web request from completing.
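Once a non-sync driver and a worker are in place, the controller can dispatch and return immediately. A rough sketch (the job and method names here are made up):

    // Inside the controller: queue the report and respond right away.
    public function requestReport(Request $request)
    {
        // GenerateReport is a hypothetical job class implementing ShouldQueue.
        GenerateReport::dispatch($request->user());

        // The worker builds the report in the background; the user keeps working.
        return response()->json(['status' => 'Report generation queued']);
    }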
I am trying to set up an API system that synchronously communicates with a number of workers in Laravel. I use Laravel 5.4 and would like to use its built-in functionality wherever possible, without too many plugins.
What I had in mind are two servers. The first one with a Laravel instance – let’s call it APP – receiving and answering requests from and to a user. The second one runs different workers, each a Laravel instance. This is how I see the workflow:
APP receives a request from user
APP puts request on a queue
Workers look for jobs on the queue and eventually find one.
Worker resolves job
Worker responds to APP OR APP finds out somehow that the job is resolved
APP sends response to user
My first idea was to work with queues and beanstalkd. The problem is that this all seems to work asynchronously. Is there a way for the APP to wait for the result of one of the workers?
After some more research I stumbled upon Guzzle. Would this be a way to go?
EDIT: Some extra info on the project.
I am talking about a Restful API. E.g. a user sends a request in the form of "https://our.domain/article/1" and their API token in the header. What the user receives is a JSON formatted string like {"id":1,"name":"article_name",etc.}
The reason for using two sides is twofold. On the one hand there is the use of different workers. On the other hand, we want to keep all the logic of the API as secure as possible: if a hack attack is made, only the APP side would be compromised.
Perhaps I am making things all too difficult with the queues and all that? If you have a better approach to meet the same ends, that would of course also help.
I know your question was how you could run this synchronously, but I think the problem you are facing is that you are not able to update the first server after the worker is done. The way you could achieve this is with broadcasting.
I have done something similar with uploads in our application. We use a Redis queue, but Beanstalkd will do the same job. On top of that we use Pusher, which uses sockets that the user can subscribe to, and it looks great.
User loads the web app, connecting to the pusher server
User uploads file (at this point you could show something to tell the user that the file is processing)
Worker sees that there is a file
Worker processes file
Worker triggers an event when done or on failure
This event is broadcast to the Pusher server
Since the user is listening to the Pusher server, the event is received via JavaScript
You can now show a popup or update the table with javascript (works even if the user has navigated away)
We used Pusher for this, but you could use Redis, Beanstalkd and many other solutions to do this. Read about Event Broadcasting in the Laravel documentation.
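A minimal sketch of such an event (the class name and channel are made up; it assumes broadcasting is already configured with the Pusher driver):

    <?php

    namespace App\Events;

    use Illuminate\Broadcasting\Channel;
    use Illuminate\Broadcasting\InteractsWithSockets;
    use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
    use Illuminate\Foundation\Events\Dispatchable;
    use Illuminate\Queue\SerializesModels;

    // Fired by the worker once the file has been processed (or has failed).
    class FileProcessed implements ShouldBroadcast
    {
        use Dispatchable, InteractsWithSockets, SerializesModels;

        public $fileId;
        public $status;

        public function __construct($fileId, $status)
        {
            $this->fileId = $fileId;
            $this->status = $status;
        }

        public function broadcastOn()
        {
            // The frontend subscribes to this channel through Pusher / Laravel Echo.
            return new Channel('uploads');
        }
    }

The worker fires it with event(new FileProcessed($id, 'done')), and the frontend listens on the 'uploads' channel to update the page.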
All,
I have quite a disturbing problem with my Amazon Elastic Beanstalk worker combined with SQS, which is supposed to provide cron job scheduling - all of this running with PHP.
Following scenario - I need a PHP script to be executed regularly in the background, which might eventually run for hours. I saw this nice introduction which seems to cover exactly my scenario (AWS Worker Environments - see the Periodic Task part).
So I read quite a lot of howtos and set up an Elastic Beanstalk worker with SQS (which is actually done automatically during creation of the worker) and provided the cron config (cron.yaml) within my deployment package.
The cron script is properly recognized. The sqs daemon starts, messages are put into the queue and trigger my PHP script exactly on schedule. The script is run and everything works fine.
The configuration of the queue is shown in the attached SQS configuration screenshot.
However, after some time of processing (the script is still busy - and NO, it is not the next scheduled run^^) a second message is opened and another instance of the same script is executed, and another, and another... at exactly 5-minute intervals.
I suspect that somehow the message is not removed from the queue (although I ensured that the script sends status 200 back), which ends up creating a new message if the script runs for too long.
Is there a way to prevent the spawning of additional messages? Can I tell the queue or the SQS daemon not to create new in-flight messages? Do I have to remove the message in my code, even though the tutorial states it should happen automatically?
I would like to just trigger the script, remove the message from the queue and let the script run. No fancy fallback / retry mechanisms please :-)
I spent many hours trying to find something on the internet. Unsuccessful. Any help is appreciated.
Thanks
a second message is opened and another instance of the same script is executed, and another, and another... in exactly 5 minutes intervals.
I doubt it is a second message. I believe it is the same message.
If you don't respond 200 OK before the Inactivity Timeout expires, then the message goes back to the queue, and yes, you'll receive it again, because the system assumes you've crashed, and you would want to see it again. That's part of the design.
There's an X-Aws-Sqsd-Receive-Count request header you're receiving that tells you approximately how many times the current message has been delivered. The X-Aws-Sqsd-Msgid request header identifies the unique message.
If you can't ensure that the script will finish before the timeout, then this is not likely an appropriate use case for this service. It sounds like the service is working correctly.
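If you do want to detect a re-delivery inside the script rather than prevent it, a rough sketch of reading those headers in PHP could look like this (the bail-out logic is only illustrative; whether skipping a re-delivered message is safe depends on your workload):

    // The aws-sqsd daemon forwards SQS messages as HTTP POSTs with these headers.
    $receiveCount = (int) ($_SERVER['HTTP_X_AWS_SQSD_RECEIVE_COUNT'] ?? 1);
    $messageId    = $_SERVER['HTTP_X_AWS_SQSD_MSGID'] ?? 'unknown';

    if ($receiveCount > 1) {
        // Same message delivered again: acknowledge it and skip the work.
        error_log("Skipping re-delivered message {$messageId} (attempt {$receiveCount})");
        http_response_code(200);
        exit;
    }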
I know this doesn't directly answer your question regarding configuration, but I ran into a similar issue - my queue configuration is set exactly like yours, and in my Elastic Beanstalk setup, I've set the Visibility Timeout to 1800 seconds (or half an hour) and Max Retries to 2.
If a job runs for more than a minute, it gets run again and then thrown into the dead letter queue, even though a 200 OK is returned from the application every time.
After a few hours, I realized that it was the Nginx server that was timing out - checking the Nginx error log yielded that insight. I don't know why Elastic Beanstalk includes a web server in this scenario... You may want to check if EB spawns a web server in front of your application, if all else fails.
Look at the Worker Environment documentation for details on the values you can configure. You can configure several different timeout values as well as "Max retries", which if set to 1 will prevent re-sends. However, your Dead Letter Queue will fill up with messages that were actually processed successfully, so that might not be your best option.
I have the following scenario:
I have a WebSocket running with Ratchet (PHP WebSocket)
I use the onMessage() callback function to handle incoming data and respond accordingly.
If I get the 'start-broadcast' message via the WebSocket I have to start a loop which will send out a broadcast message to all connected clients on the WebSocket every 0.2 sec. So I need to make a loop which can do this, but I can't put it into the onMessage() function, as this will block, and I won't be able to receive any more messages via the WS.
If I get the 'stop-broadcast' message via the WebSocket I have to stop the broadcast loop.
So basically I need a way to start and stop this loop, and have this loop running parallel to the WebSocket loop so it doesn't block up.
Problems: I'm pretty sure the Socket->send() method is not thread-safe, so I need to make sure that the WS loop and my broadcast loop are not trying to send a message at the same time.
Possible approaches I have considered:
ReactPHP/Promise
Somehow use this to make an async function inside which I have a loop. I have no experience with Promise, and I don't know if it can do what I need.
Running a separate PHP CLI process, and using ZMQ for inter-process communication between the WS instance and the broadcast loop.
With this I could send messages back and forth from the WebSocket: I could send a message to start or stop the broadcast, and I could also send a message from the broadcast loop to the WS loop to send out a message to the WS clients.
Using pthreads
Spawn a new thread for the broadcast loop; this can be killed when I want it to stop. I'm pretty sure I'll have to make sure the socket is only used by one thread at a time, so I'll have to handle that somehow.
My question is, which approach should I take, and are there any examples or tutorials for the suggested approach?
If I get you right:
You get a "start" command in over the WebSocket
After receiving the request, you want to start broadcasting at a 0.2-second interval
When a "stop" command comes in, you need to stop broadcasting
Maybe I misunderstood the problem, but letting client requests start and stop broadcasting is a bad idea (what if everyone sends a "start" command?).
In general, I would recommend using ZMQ. This is the most scalable solution. (It is best to separate the services)
You start the server.
It waits for a command from ZMQ telling it to start broadcasting.
Once you get the command over ZMQ, create a timer with a 0.2-second interval and broadcast.
As soon as you get the "stop" command over ZMQ, kill the timer.
OR
You start the ZMQ PUB server and start broadcasting
You start the WebSocket
You give the start command and start receiving ZMQ SUB messages
You give the stop command and stop receiving ZMQ SUB messages
If you want a pub/sub service, simply create a timer and keep a list of whom to broadcast to. Clients send "subscribe" and receive messages. It's a good idea to use Redis for storing data shared between the processes (WebSocket - ZMQ).
You should read the ZMQ PHP documentation before using ZMQ, and take a look at the reactphp/zmq lib.
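A rough sketch of the timer approach inside a Ratchet component (class and payload names are made up; the key calls are React's addPeriodicTimer() and cancelTimer()):

    <?php

    use Ratchet\ConnectionInterface;
    use Ratchet\MessageComponentInterface;
    use React\EventLoop\LoopInterface;

    class Broadcaster implements MessageComponentInterface
    {
        private $loop;
        private $clients;
        private $timer;

        public function __construct(LoopInterface $loop)
        {
            // The same loop instance you hand to Ratchet's IoServer.
            $this->loop = $loop;
            $this->clients = new \SplObjectStorage();
        }

        public function onOpen(ConnectionInterface $conn)  { $this->clients->attach($conn); }
        public function onClose(ConnectionInterface $conn) { $this->clients->detach($conn); }
        public function onError(ConnectionInterface $conn, \Exception $e) { $conn->close(); }

        public function onMessage(ConnectionInterface $from, $msg)
        {
            if ($msg === 'start-broadcast' && $this->timer === null) {
                // Periodic timer on the same event loop, so onMessage() never blocks.
                $this->timer = $this->loop->addPeriodicTimer(0.2, function () {
                    foreach ($this->clients as $client) {
                        $client->send('broadcast payload');
                    }
                });
            }

            if ($msg === 'stop-broadcast' && $this->timer !== null) {
                $this->loop->cancelTimer($this->timer);
                $this->timer = null;
            }
        }
    }

Because the timer runs on the same event loop as the WebSocket, send() is never called from two places at once, which sidesteps the thread-safety worry.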
I'm not an expert on HTTP requests, so this question might be trivial for some. I'm sending a request to a PHP script which takes a lot of time to process a file and return a response. Is there a way to send a response before this script finishes its task, to let the user know about the process status? Since this task can take up to several minutes, I'd like to notify the user when key parts of the process are finished.
Note: I cannot break this request into several others
I might not have the correct approach here; if so, do you have other ideas for how this could be handled?
Technically yes, but it would require you to have fine-grained control of the HTTP stack, which you may or may not have in a typical PHP setup. I would suggest you look into other solutions (e.g. make a request to start the task, then poll to get an update on the progress).
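A rough sketch of the polling idea (the progress file path and field names are made up):

    // status.php - the frontend polls this every few seconds.
    // The long-running script writes its progress to /tmp/task-42.json
    // (hypothetical path) as it passes each key step, e.g.:
    //   file_put_contents('/tmp/task-42.json',
    //       json_encode(['step' => 'parsing file', 'percent' => 40]));

    header('Content-Type: application/json');

    echo file_exists('/tmp/task-42.json')
        ? file_get_contents('/tmp/task-42.json')
        : json_encode(['step' => 'queued', 'percent' => 0]);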
http://www.redips.net/javascript/ajax-progress-bar/
Here's a great article that goes over creating an Ajax progress bar to use with PHP.
let me know if it doesn't make sense!
I think the best way to handle long-running requests is cron jobs. You can send a request which creates a 'task', and a cron job picks up the task. The cron job can update the task status while it works, and you can check the task status with interval requests. I can't imagine another way to inform users about request processing: as soon as you respond, your headers are sent and PHP stops.
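A rough sketch of that task pattern (table, column and function names are made up; the real credentials and work belong to your app):

    // process_tasks.php - run from cron, e.g. every minute.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');

    // Pick up one pending task and process it.
    $task = $pdo->query("SELECT * FROM tasks WHERE status = 'pending' LIMIT 1")
                ->fetch(PDO::FETCH_ASSOC);

    if ($task) {
        $pdo->prepare("UPDATE tasks SET status = 'processing' WHERE id = ?")
            ->execute([$task['id']]);

        processFile($task['file']);   // the long-running work (hypothetical function)

        $pdo->prepare("UPDATE tasks SET status = 'done' WHERE id = ?")
            ->execute([$task['id']]);
    }

    // Meanwhile the frontend polls a small endpoint that reads the status column.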
EDIT: it should be noted that cron jobs are only available on Linux servers. Windows servers would require access to the Task Scheduler, which most web hosts will not allow.