I have a scenario where I drop a message onto a queue, and another process fetches that message and does the work.
I have a website written in PHP. I am reading and writing to Redis, while the main database is MySQL.
I don't want to delay the user's response time, so I am using Redis. After writing to Redis, I want to drop a message in a queue, and another running process will read it and store the transaction in the database. Sending a message to the queue while writing to Redis is not a problem, as this can easily be done in PHP; reading from the queue can also be achieved by running a PHP script as a daemon (with an open socket), via cron, etc.
I need to know if there is any open-source software available that can read messages from the queue as soon as they arrive and trigger a PHP script with parameters. That mechanism would be fast.
I am not sure about the efficiency of a PHP socket script running as a daemon, but with cron there is a certain delay.
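For the "as soon as they arrive" requirement you don't strictly need extra software: since you are already on Redis, a blocking list pop gives you exactly that. Below is a minimal sketch of such a worker, assuming the phpredis extension; the queue and table names are illustrative.

<?php
// worker.php — a minimal sketch of a blocking consumer, assuming the
// phpredis extension; queue and table names are illustrative.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO transactions (payload) VALUES (?)');

while (true) {
    // Block until a message arrives (timeout 0 = wait forever).
    // brPop returns array(queueName, value).
    $msg = $redis->brPop(['txn-queue'], 0);
    $stmt->execute([$msg[1]]);
}

The web request only needs a $redis->lPush('txn-queue', $payload); brPop wakes the worker the instant the message lands, so there is no cron-style delay. Run the worker under something like supervisord so it is restarted if it dies.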
I was looking for a good way to manage a lot of background tasks, and I found AWS SQS.
My software is coded in PHP. To complete a background task, the worker must be a CLI PHP application.
Here is how I am thinking of accomplishing this with AWS SQS:
Client creates a message (message = task)
Message added to MySQL DB
A cron job checks the MySQL DB for messages and adds them to the SQS queue
An SQS queue daemon listens to the queue for messages and sends HTTP POST requests to the worker when a message is received
Worker receives the POST request and forks a PHP shell_exec() with parameters to do the work (see the sketch below)
It's necessary to insert messages into MySQL because they are scheduled to be completed at a certain time
It feels a little overcomplicated. I need to know the best way to do this.
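For the worker step, here is a hedged sketch of what the endpoint could look like; do_task.php and the id parameter are assumptions, not part of any SQS API.

<?php
// worker.php — a sketch of the POST-driven worker step above;
// do_task.php and the id parameter are assumptions.
$task = json_decode(file_get_contents('php://input'), true);

// Fork the CLI task into the background so the HTTP request returns
// immediately. Note: the daemon then sees a 200 before the task has
// actually finished, so failures need their own handling.
$id = isset($task['id']) ? $task['id'] : '';
exec('php do_task.php ' . escapeshellarg($id) . ' > /dev/null 2>&1 &');

http_response_code(200);   // tell the queue daemon the message was handled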
I would use AWS Lambda, with an SQS trigger, to asynchronously process messages dropped in the queue.
First, your application can post messages directly to SQS; there is no need to first insert the message into MySQL and have a separate daemon feed the queue.
Secondly, you can write an AWS Lambda function in PHP; check https://aws.amazon.com/blogs/apn/aws-lambda-custom-runtime-for-php-a-practical-example/
Thirdly, I would wire the Lambda function to the queue, following this documentation: https://aws.amazon.com/blogs/apn/aws-lambda-custom-runtime-for-php-a-practical-example/
This will simplify your architecture (fewer moving parts, less code) and make it more scalable.
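For the first point, posting straight to SQS from the application is only a few lines. A sketch, assuming the AWS SDK for PHP v3; the queue URL is illustrative.

<?php
require 'vendor/autoload.php';

use Aws\Sqs\SqsClient;

// A sketch assuming the AWS SDK for PHP v3; the queue URL is illustrative.
$sqs = new SqsClient(['region' => 'us-east-1', 'version' => '2012-11-05']);
$sqs->sendMessage([
    'QueueUrl'     => 'https://sqs.us-east-1.amazonaws.com/123456789012/tasks',
    'MessageBody'  => json_encode(['task' => 'resize', 'id' => 42]),
    'DelaySeconds' => 60,   // SQS can defer delivery, up to a 15-minute maximum
]);

One caveat for your scheduling requirement: DelaySeconds tops out at 15 minutes, so for tasks scheduled further into the future you would still need some store in front of the queue (or a scheduled job that feeds it).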
Our web application running on Elastic Beanstalk logs the activity of incoming requests to a database. We want to decouple the DB logging from the request-processing path so that response time can be sped up. We decided to use SQS queues and an Elastic Beanstalk worker: the idea is to queue the logging event to SQS and have the worker receive the events and do the logging to the DB.
Now we need to optimize the DB logging operation and avoid creating one connection per message in the queue. From my understanding, the SQS daemon calls the worker for each message. Is there a way to have the daemon send messages in a batch, so that there is only one message whose body has the contents of all the messages?
Or do we need to use a secondary queue, or write a custom SQS message aggregator that processes n messages from the queue, sends one batch message to another queue, and has that written to the DB once?
We are using PHP and MySQL.
In my experience, by default you cannot: the daemon calls your application once for each message.
What you can do is cache the messages locally in a file (assuming you are using a single instance rather than auto scaling, and with file locking for multi-processing), then use the Elastic Beanstalk worker's cron scheduling to read the file and do your DB operations at a set interval. That way you can do the DB operation in a batch.
If you want to use auto scaling with multiple instances, you might need another messaging layer, which is wasteful compared with the other option: write your own code, based on the AWS SDK, to receive/delete messages from SQS in batches and then update your database.
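A sketch of that second option, assuming the AWS SDK for PHP v3; the queue URL and table are illustrative. Note that SQS caps a single receive at 10 messages, so one DB write per 10 messages is the granularity you get without a secondary queue.

<?php
// batch_logger.php — run from cron or a loop; assumes the AWS SDK for PHP v3.
require 'vendor/autoload.php';

use Aws\Sqs\SqsClient;

$sqs      = new SqsClient(['region' => 'us-east-1', 'version' => '2012-11-05']);
$queueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/log-queue';

$result   = $sqs->receiveMessage([
    'QueueUrl'            => $queueUrl,
    'MaxNumberOfMessages' => 10,   // the SQS maximum per receive
    'WaitTimeSeconds'     => 20,   // long polling
]);
$messages = $result->get('Messages') ?: [];
if (!$messages) {
    exit(0);
}

// One connection and one multi-row INSERT for the whole batch.
$pdo          = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$placeholders = implode(',', array_fill(0, count($messages), '(?)'));
$stmt         = $pdo->prepare("INSERT INTO request_log (event) VALUES $placeholders");
$stmt->execute(array_column($messages, 'Body'));

// Delete everything that was just persisted, in a single call.
$entries = [];
foreach ($messages as $i => $m) {
    $entries[] = ['Id' => (string)$i, 'ReceiptHandle' => $m['ReceiptHandle']];
}
$sqs->deleteMessageBatch(['QueueUrl' => $queueUrl, 'Entries' => $entries]);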
I have a function that imports data from Excel into a database. I made this function run on the server so it no longer needs to interact with the client: the client's web browser just uploads the Excel file to the server, and after that the task runs entirely on the server, so even if the client closes the browser, the function keeps running. I've got this working. The problem is that when the browser is left open, it keeps loading for as long as the function is active. How can I make the browser not wait for a response from the server, so it doesn't keep loading while the process runs on the server? Please help me.
Use a message queue to offload the task of processing the file from the web server to another daemon running separately.
You can take the cheap and easy route of exec'ing a process with & at the end of the command line, causing it to be backgrounded. However, that gives you little control or status.
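For example (a sketch; import.php and $file are assumptions):

// Redirecting output and appending & detaches the process from the request.
exec('php import.php ' . escapeshellarg($file) . ' > /dev/null 2>&1 &');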
The right way to go about it, IMO, is to queue up these long-running tasks in a database, with some status info associated with them. Then have a dedicated process, running separately from your webserver, that checks the database for tasks, performs them, and updates the database with success/failure status.
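A sketch of such a dedicated worker, assuming a simple tasks table with a status column; process_excel() stands in for your import logic.

<?php
// worker.php — a sketch of the dedicated process described above;
// the tasks table layout and process_excel() are assumptions.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

while (true) {
    // Claim one pending task atomically so concurrent workers don't collide.
    $pdo->beginTransaction();
    $task = $pdo->query("SELECT id, payload FROM tasks
                         WHERE status = 'pending'
                         ORDER BY id LIMIT 1 FOR UPDATE")->fetch();
    if (!$task) {
        $pdo->commit();
        sleep(5);          // nothing to do; poll again shortly
        continue;
    }
    $pdo->prepare("UPDATE tasks SET status = 'running' WHERE id = ?")
        ->execute([$task['id']]);
    $pdo->commit();

    try {
        process_excel($task['payload']);   // your import logic goes here
        $status = 'done';
    } catch (Exception $e) {
        $status = 'failed';
    }
    $pdo->prepare("UPDATE tasks SET status = ? WHERE id = ?")
        ->execute([$status, $task['id']]);
}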
Look into using a queue such as Mseven's Queue Plugin. Or, if you want a more daemon-based job, look into Beanstalkd. The queue plugin by Mseven is pretty self-explanatory, though. Stay away from forking processes with &; it can get out of control.
I have a simple messaging queue set up and running using the Zend_Queue object hierarchy. I'm using a Zend_Queue_Adapter_Db back-end. I'm interested in using this as a job queue, to schedule things for processing at a later time. They're jobs that don't need to happen immediately, but should happen sooner rather than later.
Is there a best-practices/standard way to set up your infrastructure to run jobs? I understand the code for receiving a message from the queue, but what's not so clear to me is how to run the program that does the receiving. A cron job that receives n messages on the command line, run once a minute? A cron job that fires off multiple web requests, each web request running the receiver script? Something else?
Tangential bonus question. If I'm running other queries with Zend_Db, will the message queue queries be considered part of that transaction?
You can do it like a thread pool. Create a command line php script to handle the receiving. It should be started by a shell script that automatically restarts the process if it dies. The shell script should not start the process if it is already running (use a $pid.running file or similar). Have cron run several of these every 1-10 minutes. That should handle the receiving nicely.
I wouldn't have the cron fire a web request unless your cron is on another server for some strange reason.
Another way to use this would be to have some background process creating data, and web users consume it as they naturally browse the site. A report generator might work this way. Company-wide reports are available to all users, but you don't want them all generating this DB/time-intensive report. So you create a queue and process one at a time, possibly removing duplicates. All users can view the report(s) when ready.
According to the docs, it doesn't look like the Zend_Queue DB adapter is even using the same connection as your other Zend_Db queries. But of course the best way to find out is to make a simple test.
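For reference, the receiving side itself is short. A sketch of a cron-invoked receiver using the standard Zend_Queue API; the connection details and do_job() are illustrative.

<?php
// receiver.php — a sketch of the command-line receiver; connection
// details and do_job() are illustrative.
require_once 'Zend/Queue.php';

$queue = new Zend_Queue('Db', array(
    'name'          => 'jobs',
    'driverOptions' => array(
        'host'     => 'localhost',
        'username' => 'user',
        'password' => 'pass',
        'dbname'   => 'app',
        'type'     => 'pdo_mysql',
    ),
));

// Pull up to 10 messages; receive() returns an empty iterator when idle.
foreach ($queue->receive(10) as $message) {
    do_job($message->body);          // your job handler
    $queue->deleteMessage($message); // acknowledge only after success
}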
EDIT
The multiple lines in the cron are for concurrency; each line represents a worker for the pool. I was not clear before: you don't want the PID as the identifier, you want to pass the worker name as a parameter.
/home/byron/run_queue.sh Process1
/home/byron/run_queue.sh Process2
/home/byron/run_queue.sh Process3
The bash script checks for the $process.running file; if it finds it, it exits. Otherwise it will:
Create the $process.running file.
Start the PHP process and block/wait until it finishes.
Delete the $process.running file.
This allows the PHP script to die without causing the pool to lose a worker. If the queue is empty, the PHP script exits immediately and is started again by the next invocation of cron.
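Putting those steps together, a sketch of what run_queue.sh could look like; the receiver script name is an assumption.

#!/bin/sh
# run_queue.sh — a sketch of the wrapper described above.
# Usage: ./run_queue.sh Process1
PROCESS="$1"
LOCK="/tmp/$PROCESS.running"

[ -f "$LOCK" ] && exit 0                 # this worker is already running

touch "$LOCK"
php /home/byron/receiver.php "$PROCESS"  # blocks until the PHP worker exits
rm -f "$LOCK"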
I have a website written in PHP (CakePHP) where certain resource intensive tasks are handled by a background process. This is done through the Beanstalkd message queue. I need some way to retrieve the status of that background process so I can monitor it with Monit.
The background process is a CakePHP Shell (just a PHP CLI script) that communicates with Beanstalkd. It simply does a reserve() on Beanstalkd and waits for a new message. When it gets a message, it processes it. I want some way of monitoring this process with Monit so that it can restart the background process if something has gone wrong.
What I have been thinking about so far is writing a PHP CLI script that drops a message in Beanstalkd. The background process picks up the message and somehow communicates its internal status back to the CLI script. But how? Sockets? Shared memory? Some other IPC method?
Or am I perhaps overcomplicating things here, and is there a much easier way to monitor such a process with Monit?
Thanks in advance!
Here's what I ended up doing in the end.
The CLI script connects to Beanstalkd, creates a new queue (tube), and starts watching it. Then it drops a highest-priority message into the queue that the background daemon is watching. That message contains the name of the new queue that the CLI script is monitoring.
The background process receives this message almost immediately (because it is highest priority), generates a status message and puts it in the queue that the CLI script is watching. The CLI script receives it and then closes the queue.
When the CLI script does not get a response in 30 seconds it will exit with an error indicating the background daemon is (most likely) hung.
I tied all this into Monit. Monit can now check that the background daemon is running (via the pidfile and process list) and verify that it is actually still processing messages (by using the CLI tool to test that it responds to status requests).
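For anyone wanting to reproduce this, a sketch of the CLI probe, assuming the pda/pheanstalk v3 API; the tube names are illustrative.

<?php
// health_check.php — a sketch of the status probe, assuming the
// pda/pheanstalk v3 API; tube names are illustrative.
require 'vendor/autoload.php';

use Pheanstalk\Pheanstalk;

$pheanstalk = new Pheanstalk('127.0.0.1');
$replyTube  = 'status-reply-' . getmypid();   // unique tube for the answer

// Priority 0 is the most urgent, so the daemon sees this before normal jobs.
$pheanstalk->useTube('worker-tube')
           ->put(json_encode(['cmd' => 'status', 'reply_to' => $replyTube]), 0);

// Block for up to 30 seconds waiting for the daemon's status message.
$job = $pheanstalk->watchOnly($replyTube)->reserve(30);
if ($job === false) {
    fwrite(STDERR, "no status reply: daemon appears hung\n");
    exit(1);                                  // non-zero exit is what Monit acts on
}
echo $job->getData(), "\n";
exit(0);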
There is probably a plugin for Monit or Nagios to connect, run the stats, and report if there are 'too many'. There isn't a 'protocol' written already for that, but it doesn't appear to be exceedingly difficult to modify an existing text-based one (like NNTP or SMTP) to do what you want. It does mean writing it in C, though, by the looks of it.
From a CLI-PHP script, I would go about it through one (or both) of two different methods.
1/ drop a (low-ish) priority message into the queue, and make sure it comes back within a few seconds. Putting it into a dedicated queue and making sure there's nothing there before you put it in there would be a good addition as well.
2/ perform a 'stats' and see how many are waiting: 'current-jobs-ready'.
To get the information back to a website (either way), you can write to a file, or into something like Memcached, which gets read and acted upon.
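A sketch of option 2 from PHP rather than C, assuming the pda/pheanstalk v3 API; the tube name and threshold are assumptions.

<?php
// check_backlog.php — exits non-zero when the backlog looks unhealthy,
// which Monit's "check program" can act on.
require 'vendor/autoload.php';

use Pheanstalk\Pheanstalk;

$stats = (new Pheanstalk('127.0.0.1'))->statsTube('worker-tube');
if ((int)$stats['current-jobs-ready'] > 100) {   // threshold is an assumption
    fwrite(STDERR, "too many ready jobs\n");
    exit(1);
}
exit(0);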