I have a code base written in Laravel 9 that needs to execute multiple requests against one API with 1000 different API key + API secret pairs. Is there a way to actually do this simultaneously rather than one after another?
I've come across the thread Laravel Commands, Pthreads and Closure, but the answer there doesn't look like it is working; in my opinion it only appears as though everything executes at the same time because there is a sleep(rand(1,3)) inside the OperatorThreaded class.
Any ideas?
A Laravelly way is to do it like so:
Create a RequestYourApi job (php artisan make:job RequestYourApi, or make a class yourself inside App\Jobs); a minimal sketch follows this list.
Dispatch any number of jobs using RequestYourApi::dispatch([/* args */]) (the arguments will be passed to the constructor of this class, and the job will be put onto the queue, see https://laravel.com/docs/9.x/queues).
Run multiple workers using, for instance, Supervisor (or Laravel Horizon): php artisan queue:work (use --help to see additional options).
Scale up/down your workers depending on how fast you need to process things.
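For illustration, here is a minimal sketch of such a job in Laravel 9; the endpoint URL and header names are assumptions, and handle() uses Laravel's Http client:

    <?php

    namespace App\Jobs;

    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Foundation\Bus\Dispatchable;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;
    use Illuminate\Support\Facades\Http;

    class RequestYourApi implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

        public function __construct(
            private string $apiKey,
            private string $apiSecret,
        ) {}

        public function handle(): void
        {
            // One request per job; 1000 dispatched jobs get spread
            // across however many workers you have running.
            $response = Http::withHeaders([
                'X-Api-Key'    => $this->apiKey,    // header names are assumptions,
                'X-Api-Secret' => $this->apiSecret, // adjust to your API
            ])->get('https://api.example.com/endpoint');

            // ... process $response ...
        }
    }

Dispatching then becomes a simple loop, e.g. foreach ($credentials as $cred) { RequestYourApi::dispatch($cred['key'], $cred['secret']); }, and the degree of parallelism is simply the number of queue workers you run.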
I have a use case where I need to perform certain database manipulations automatically every month on a specific date. I'm currently using the Symfony 2.7 framework; is it possible to call a controller every month on a specific date? Any feedback/ideas would be helpful.
Instead of calling a controller, you could probably create a Symfony Console Command. Have a look here at how to do it: https://symfony.com/doc/2.7/console.html
With this option, the only way to execute it would be via the CLI.
Then you can call the command from a cron job that runs at the time you want. Some examples of how to schedule jobs using crontab: https://tecadmin.net/crontab-in-linux-with-20-examples-of-cron-schedule/
You can make a Console Command in Symfony and call this command every month with a cron task.
Console command
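For reference, a minimal Symfony 2.7-style console command might look like this (the bundle, class, and command names are made up for illustration):

    <?php

    namespace AppBundle\Command;

    use Symfony\Component\Console\Command\Command;
    use Symfony\Component\Console\Input\InputInterface;
    use Symfony\Component\Console\Output\OutputInterface;

    class MonthlyMaintenanceCommand extends Command
    {
        protected function configure()
        {
            $this
                ->setName('app:monthly-maintenance')
                ->setDescription('Runs the monthly database manipulations.');
        }

        protected function execute(InputInterface $input, OutputInterface $output)
        {
            // ... your database manipulations go here ...
            $output->writeln('Monthly maintenance finished.');
        }
    }

A crontab entry such as 0 3 1 * * php /path/to/app/console app:monthly-maintenance would then run it at 03:00 on the first day of every month.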
My existing Laravel project is structured so that all tasks are executed sequentially. I have identified a part of the code that can be run in parallel using PHP threads, which would reduce the response time.
So I'm trying to use pthreads, https://github.com/krakjoe/pthreads/tree/PHP5. appserver.io already comes with pthreads, so I'm running the project on appserver.io rather than Apache.
I was able to use pthreads successfully: creating a new PDO connection inside each thread's run() method works, and database interaction is fine using a native query like this:
self::$connection = new PDO('mysql:host=127.0.0.1;dbname=mydb', 'myuser', 'mypass');
But I'm not able to use Laravel's \DB::connection the same way. I need to get this working in order to run my code, which is written with ORM-based queries. How can I create a new connection inside the run() method each time?
Can you not dispatch jobs and just have those jobs run in the background (queue them)?
Two things you need to consider:
1. Load the vendor autoload file in your threads.
2. Instead of facades, use singleton classes in the threads (see the sketch below).
Hope this solves the issue.
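A rough sketch of that idea, using illuminate/database's Capsule manager instead of the DB facade; note that pthreads only works on a thread-safe (ZTS) PHP build, and the connection details below are placeholders:

    <?php

    use Illuminate\Database\Capsule\Manager as Capsule;

    class QueryThread extends Thread
    {
        public function run()
        {
            // Load the autoloader and build a fresh connection inside the
            // thread; the parent's bootstrapped Laravel app is not shared.
            require __DIR__ . '/vendor/autoload.php';

            $capsule = new Capsule();
            $capsule->addConnection(array(
                'driver'   => 'mysql',
                'host'     => '127.0.0.1',
                'database' => 'mydb',
                'username' => 'myuser',
                'password' => 'mypass',
                'charset'  => 'utf8',
            ));
            $capsule->setAsGlobal();   // makes Capsule::table()/Eloquent usable here
            $capsule->bootEloquent();

            $count = Capsule::table('users')->count(); // example query
        }
    }

    $thread = new QueryThread();
    $thread->start();
    $thread->join();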
I am super confused about Laravel's commands vs. console commands and which one I'm supposed to use with the task scheduler.
I am attempting to use a console command, since that's the one that seems to work with the task scheduler per Laravel's docs, but for some reason it doesn't seem to allow me to have any methods in the command file other than the constructor, fire, getArguments, and getOptions. I can't put everything in the fire method, as it would be one big mess.
Basically, I am trying to have a crawler run every 5 minutes, so the other methods deal with various parts of the crawling process.
Then, in the fire method, I'm trying to call my main method via $this->run(); but it returns this error:
Declaration of App\Console\Commands\Crawler::run() should be compatible with Illuminate\Console\Command::run(Symfony\Component\Console\Input\InputInterface $input, Symfony\Component\Console\Output\OutputInterface $output)
Any ideas how to get around this?
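For context, the error appears because Illuminate\Console\Command inherits run(InputInterface, OutputInterface) from Symfony's base Command class, so a child class cannot redeclare run() with a different signature; any other method name avoids the clash. A hypothetical sketch:

    <?php

    namespace App\Console\Commands;

    use Illuminate\Console\Command;

    class Crawler extends Command
    {
        protected $name = 'crawler:run';

        public function fire()
        {
            $this->crawl(); // any name except run() (and other base-class methods) works
        }

        protected function crawl()
        {
            // ... main crawling logic, split across as many
            // helper methods as you like ...
        }
    }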
I want to run some functions in parallel in the background.
Is there a nice way to do this in Symfony2, or even with plain PHP?
Or is there only Symfony\Component\Process\Process?
Can I use this Process with a function? I need the actual context (the logged-in user and some session data), so it is not possible to move the function out to an external PHP file...
The Symfony2 Process component allows you to run a shell command or execute a PHP script in a separate process.
To run an actual function in a thread, have a look at PHP's Thread class.
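As a rough sketch of the Process approach (Symfony 2.x accepted a command string; newer versions expect an array): because the work runs in a separate process, any needed context, such as the user ID, has to be passed explicitly, for example as an argument. The worker.php script here is hypothetical:

    use Symfony\Component\Process\Process;

    $process = new Process('php worker.php --user-id=42');
    $process->start();   // returns immediately; the child runs in parallel

    // ... continue handling the current request ...

    $process->wait();    // block until the child has finished
    echo $process->getOutput();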
Currently I'm trying to build a good scheduler system as an interface for setting and editing cron jobs on my system. My system is built using Zend Framework 1.11.11 on a Linux server.
I have 2 main problems that I'd like your suggestions on:
Problem 1: The setup of the application itself
I see 2 ways to run the cron jobs:
The first way is to create a scripts folder containing a common bootstrap file in which I load only the resources that I need. Then, for each task, I create a separate script that includes the bootstrap file (a minimal sketch of such a script follows this list). Finally, I add a cron task to the crontab for each of these scripts, something like * * * * * php /path/to/scripts/folder/cronScript_1.php.
The second way is to treat the cron job like a normal request (no special bootstrap): add a cron task to the crontab for each action, something like * * * * * curl http://www.mydomain.com/module/controller/action.
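For the first option, the per-script bootstrap could look roughly like this (paths and resource names are assumptions):

    <?php
    // scripts/cronScript_1.php (hypothetical task script)

    define('APPLICATION_PATH', realpath(__DIR__ . '/../application'));
    set_include_path(implode(PATH_SEPARATOR, array(
        realpath(APPLICATION_PATH . '/../library'),
        get_include_path(),
    )));

    require_once 'Zend/Application.php';

    $application = new Zend_Application(
        getenv('APPLICATION_ENV') ?: 'production',
        APPLICATION_PATH . '/configs/application.ini'
    );
    // Bootstrap only the resources this task needs, e.g. the database
    $application->getBootstrap()->bootstrap('db');

    // ... the actual task logic ...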
Problem 2: The interface to the application
Adding a cron job can also be done in 2 ways:
For each task there will be an entry in the crontab file. When I want to add a new task, I must do it via cPanel or some other means of editing the crontab (which might not be available).
Store the tasks in the database and provide a UI for interacting with it (a grid for adding tasks and configuration). Then write only 1 cron job in the crontab that runs every minute; this job selects all tasks from the database and checks whether any of them should run now (the scheduled time of each task is stored and compared with the server's current time). A minimal dispatcher sketch follows this list.
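Here is that dispatcher sketch; the table layout and column names are made up:

    <?php
    // dispatcher.php, run by a single crontab entry: * * * * * php /path/to/dispatcher.php

    $pdo = new PDO('mysql:host=127.0.0.1;dbname=scheduler', 'user', 'pass');
    $now = new DateTime();

    // Hypothetical table: tasks(id, command, minute, hour, ...)
    $stmt = $pdo->prepare(
        'SELECT command FROM tasks WHERE minute = :minute AND hour = :hour'
    );
    $stmt->execute(array(
        ':minute' => (int) $now->format('i'),
        ':hour'   => (int) $now->format('G'),
    ));

    foreach ($stmt->fetchAll(PDO::FETCH_COLUMN) as $command) {
        // Fire and forget so one slow task does not delay the rest
        exec(escapeshellcmd($command) . ' > /dev/null 2>&1 &');
    }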
In your opinion, which way is better to implement for each part? Is there a ready-made solution for this that is better in general?
Note
I came across Quartz while searching for a ready-made solution. Is this what I'm looking for, or is it something totally different?
Thanks.
Just my opinion, but I personally like both 1 & 2, depending on what your script is intended to accomplish. For instance, we mostly do 1 with all of our cron entries, as it becomes really easy to look at /etc/crontab and see at a glance when things are supposed to run. However, there are times when a script needs to be called every minute because logic within the script will then figure out what to run in that exact minute (e.g. millions of users that need to be processed continually, so you have a formula for which users to handle in each minute of the hour).
Also take a look at Gearman (http://gearman.org/). It enables you to have cron scripts running on one machine that then slice up the jobs into smaller bits and farm those bits out to other servers for processing. You have full control over how far you want to take the map/reduce aspect of it. It has helped us immensely and allows us to process thousands of algorithm scripts per minute. If we need more power we just spin up more "workhorse" nodes and Gearman automatically detects and utilizes them.
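For a flavor of what that looks like with PHP's pecl gearman extension, here is a bare-bones worker; the function name and payload handling are made up:

    <?php
    // worker.php, run one of these on each "workhorse" node

    $worker = new GearmanWorker();
    $worker->addServer('127.0.0.1'); // address of the gearmand job server

    $worker->addFunction('process_chunk', function (GearmanJob $job) {
        $payload = json_decode($job->workload(), true);
        // ... process this slice of the work ...
        return 'done';
    });

    while ($worker->work()); // block, handling jobs as they arrive

The cron script then acts as a GearmanClient, slicing the work up and submitting one job per slice, e.g. with doBackground().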
We currently do everything on the command line and don't use cPanel, Plesk, etc. so I can't attest to what it's like editing the crontab from one of those backends. You may want to consider having one person be the crontab "gatekeeper" on your team. Throw the expected crontab entries into a non web accessible folder in your project code. Then whenever a change to the file is pushed to version control that person is expected to SSH into the appropriate machine and make the changes. I am not sure of your internal structure so this may or may not be feasible, but it's a good idea for developers to be able to see the way(s) that crontab will be executing scripts.
For Problem 2 (the interface to the application), I've used both methods 1 & 2, and I strongly recommend the 2nd one. It takes quite a bit more upfront work to create the database tables and build the UI, but in the long run it makes adding new jobs much easier. I built the UI for my current company, and it's so easy to use that non-technical people (accountants, warehouse supervisors) are able to go in and create jobs.
Much easier than logging onto the server as root, editing crontab, remembering the patterns and saving. Plus you won't be known as "The crontab guy" who everyone comes to whenever they want to add something to crontab.
As for setting up the application itself, I would have cron call one script and have that script run the rest; that way you only need 1 cron entry. Just be aware that if running the jobs takes a long time, you need to make sure the script only starts if no other instance is already running, otherwise you may end up with the same job running twice (see the lock sketch below).
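One common way to guarantee that, sketched here with an assumed lock-file path: take a non-blocking exclusive lock on a file at startup and exit if it is already held.

    <?php
    // run-jobs.php, guarding against overlapping runs

    $lock = fopen('/tmp/run-jobs.lock', 'c');
    if ($lock === false || !flock($lock, LOCK_EX | LOCK_NB)) {
        exit(0); // a previous run is still in progress; bail out quietly
    }

    // ... run all the scheduled jobs ...

    flock($lock, LOCK_UN);
    fclose($lock);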