I developed an Android app that subscribes to a queue and also publishes to other queues. At one point it publishes the same message to two different queues, one of them named "Queue". Now, from an AppFog instance, I need to subscribe to "Queue", consume the messages, and insert them into a MySQL DB.
I created a standalone PHP app for this purpose with CodeIgniter. For some reason the worker app loses its connection to RabbitMQ. I would like to know the best way to do this.
How can a worker app on AppFog survive application restarts?
What kind of approach do I need to solve this problem?
What version of RabbitMQ are you using? In 3.0, heartbeats were enabled by default for connections. If the disconnects happen at a regular interval (10 minutes by default), it is probably the heartbeat that is closing the connection.
Your client can (hopefully) negotiate a longer heartbeat interval during connection setup, or you can have your PHP app send a heartbeat at a regular interval. Here is the announcement post that describes it a bit.
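Beyond tuning the heartbeat (with php-amqplib, `AMQPStreamConnection` accepts a heartbeat parameter during connection negotiation), the worker can simply reconnect with backoff whenever the connection drops. A minimal sketch; the helper and its name are illustrative, not a real library API:

```php
<?php
// Generic reconnect helper: runs $connectAndConsume and, if it throws
// (e.g. on a dropped RabbitMQ connection), waits and retries with
// exponential backoff. Returns the number of attempts made.
function runWithReconnect(callable $connectAndConsume, int $maxAttempts = 5, int $baseDelayMs = 100): int
{
    $attempt = 0;
    while (true) {
        $attempt++;
        try {
            $connectAndConsume();
            return $attempt; // consumer exited cleanly
        } catch (\Exception $e) {
            if ($attempt >= $maxAttempts) {
                throw $e; // give up after the last attempt
            }
            // Exponential backoff: 100ms, 200ms, 400ms, ...
            usleep($baseDelayMs * 1000 * (2 ** ($attempt - 1)));
        }
    }
}
```

The worker would pass a closure that opens the AMQP connection and enters the consume loop; on an AppFog restart the platform relaunches the script, and within a run this loop re-establishes dropped connections.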
Related
I was looking for a good way to manage a lot of background tasks, and I found AWS SQS.
My software is coded in PHP. To complete a background task, the worker must be a CLI PHP application.
Here is how I am thinking of accomplishing this with AWS SQS:
Client creates a message (message = task)
Message is added to a MySQL DB
A cron job checks the MySQL DB for messages and adds them to the SQS queue
An SQS queue daemon listens to the queue for messages and sends an HTTP POST request to the worker when a message is received
Worker receives the POST request and forks a PHP shell_exec with parameters to do the work
It's necessary to insert messages into MySQL because they are scheduled to be completed at a certain time
This is a little overcomplicated.
I need to know the best way to do this.
I would use AWS Lambda, with an SQS trigger, to asynchronously process messages dropped in the queue.
First, your application can post messages directly to SQS, there is no need to first insert the message in MySQL and have a separate daemon to feed the queue.
Secondly, you can write an AWS Lambda function in PHP, check https://aws.amazon.com/blogs/apn/aws-lambda-custom-runtime-for-php-a-practical-example/
Thirdly, I would wire the Lambda function to the queue; see the AWS Lambda documentation on using Amazon SQS as an event source: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
This will simplify your architecture (less moving parts, less code) and make it more scalable.
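On the scheduling requirement: SQS supports per-message delays of up to 900 seconds via DelaySeconds, so short schedules can skip the MySQL staging table entirely (longer ones still need it). A sketch of building the parameters for the AWS SDK for PHP's SqsClient::sendMessage; the queue URL and payload shape are made up:

```php
<?php
// Build the parameter array for SqsClient::sendMessage.
// SQS caps DelaySeconds at 900 (15 minutes), so we clamp to that range;
// tasks scheduled further out still need the DB + cron approach.
function buildDelayedMessage(string $queueUrl, array $task, int $sendAt, int $now): array
{
    $delay = max(0, min(900, $sendAt - $now));
    return [
        'QueueUrl'     => $queueUrl,
        'MessageBody'  => json_encode($task),
        'DelaySeconds' => $delay,
    ];
}
// Then, with an Aws\Sqs\SqsClient instance:
//   $sqs->sendMessage(buildDelayedMessage($url, $task, $sendAt, time()));
```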
I have a Laravel app (iOS API) that pushes data to SQS to be processed in the background. Depending on the request, we need to dispatch anywhere from 1 to 4 jobs to SQS. For example:
Dispatch a job to SQS, to be processed by a Worker:
for connecting to a socket service (Pusher)
for connecting to Apple's APNS service (Push Notifications)
for sending data to Loggly for basic centralized request logging
for storing analytics data in a SQL database (for the time being)
The problem is, we might have a feature like "chat", which is a pretty light request as far as server processing is concerned, however it needs to connect to SQS three times over to send:
1) Socket push to all the devices
2) Analytics processing
3) Centralized Request / Error Logging
In total, these connections end up doubling or tripling the time the rest of the request takes; i.e., POSTing to /chat might otherwise take about 40-50ms, but with SQS it takes more like 100-120ms.
Should I be approaching this differently? Is there a way to batch these to SQS so we only need to connect once rather than three times?
As Michael suggests in the comments, one way to do it is to use a sendMessageBatch request to send up to 10 messages to one single queue. When you're using multiple queues, and maybe even if you're not, there's also another approach.
If you can approach all the different messages as the same element, namely a notification of an action to an arbitrary receiver, you will find yourself in the fanout pattern. This is a commonly used pattern in which multiple receivers need to act on a single message, a single action.
Although AWS SQS doesn't natively support it, in combination with AWS Simple Notification Service you can actually achieve the same thing. Jeff Barr wrote a short blog post on the fanout setup using SNS and SQS a while ago (2012). It boils down to sending a notification to SNS that'll trigger messages to be posted on multiple SQS queues.
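As a sketch of the batching half: sendMessageBatch accepts at most 10 messages per call, so the messages need to be chunked first. The helper name below is illustrative; each chunk would be passed to the AWS SDK for PHP's SqsClient::sendMessageBatch along with the QueueUrl:

```php
<?php
// Split messages into sendMessageBatch-sized chunks.
// Each entry needs a batch-unique Id and a MessageBody string.
function sqsBatchEntries(array $messages): array
{
    $entries = [];
    foreach ($messages as $i => $payload) {
        $entries[] = [
            'Id'          => 'msg-' . $i,          // must be unique within one batch
            'MessageBody' => json_encode($payload),
        ];
    }
    return array_chunk($entries, 10); // SQS limit: 10 entries per batch call
}
```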
If anyone is curious, I released an MIT package to batch dispatch :)
https://packagist.org/packages/atymic/laravel-bulk-sqs-queue
It uses async and sendMessageBatch under the hood.
I am trying to set up an API system that synchronously communicates with a number of workers in Laravel. I use Laravel 5.4 and, if possible, would like to use its functionality whenever possible without too many plugins.
What I had in mind are two servers. The first one with a Laravel instance – let’s call it APP – receiving and answering requests from and to a user. The second one runs different workers, each a Laravel instance. This is how I see the workflow:
APP receives a request from user
APP puts request on a queue
Workers look for jobs on the queue and eventually find one
Worker resolves the job
Worker responds to APP, OR APP somehow finds out that the job is resolved
APP sends response to user
My first idea was to work with queues and beanstalkd. The problem is that this all seems to work asynchronously. Is there a way for the APP to wait for the result of one of the workers?
After some more research I stumbled upon Guzzle. Would this be a way to go?
EDIT: Some extra info on the project.
I am talking about a Restful API. E.g. a user sends a request in the form of "https://our.domain/article/1" and their API token in the header. What the user receives is a JSON formatted string like {"id":1,"name":"article_name",etc.}
The reason for using two sides is twofold. On one hand, there is the use of different workers. On the other hand, we want to keep the API logic as secure as possible; if the system is attacked, only the APP side would be compromised.
Perhaps I am making things all too difficult with the queues and all that? If you have a better approach that meets the same ends, that would of course also help.
I know your question was how you could run this synchronously, but I think the problem you are facing is that you are not able to update the first server after the worker is done. The way you could achieve this is with broadcasting.
I have done something similar with uploads in our application. We use a Redis queue, but beanstalkd will do the same job. On top of that we use Pusher, which uses sockets that the user can subscribe to, and it looks great.
User loads the web app, connecting to the pusher server
User uploads file (at this point you could show something to tell the user that the file is processing)
Worker sees that there is a file
Worker processes file
Worker triggers an event when done or on failure
This event is broadcasted to the pusher server
Since the user is listening to the pusher server the event is received via javascript
You can now show a popup or update the table with javascript (works even if the user has navigated away)
We used Pusher for this, but you could use Redis, beanstalkd, and many other solutions to do this. Read about Event Broadcasting in the Laravel documentation.
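The "worker triggers an event" step might look like this in Laravel; this is a framework fragment (the class, channel, and property names are illustrative), fired from the worker with `event(new FileProcessed($fileId))`:

```php
<?php
// Sketch of a Laravel broadcast event. With the broadcast driver set to
// Pusher, firing this event pushes its public properties as the payload
// to the "uploads" channel, where subscribed browsers receive it via JS.
use Illuminate\Broadcasting\Channel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Queue\SerializesModels;

class FileProcessed implements ShouldBroadcast
{
    use SerializesModels;

    public $fileId; // public properties become the broadcast payload

    public function __construct(int $fileId)
    {
        $this->fileId = $fileId;
    }

    public function broadcastOn()
    {
        return new Channel('uploads'); // clients subscribe to this channel
    }
}
```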
I would like to know if someone could explain to me how to build a real-time application with Symfony?
I have looked at a lot of documentation with my best friend Google, but I have not found quite detailed articles.
I would like something more PHP-oriented, and I saw there are technologies like ReactPHP and Ratchet (but I cannot find a tutorial clear enough to integrate them into an existing Symfony project).
Do you have any advice on which technologies to use and why? (If you have tutorial links I take!)
Thank you in advance for your answers!
Every useful Symfony application does some form of I/O. In traditional applications this is most often blocking I/O. Even if it's non-blocking I/O, it doesn't integrate a global event loop that could schedule other things while waiting for I/O.
If you integrate Symfony into an existing event-loop-based WebSocket server, it will work with blocking I/O as a proof of concept, but you will quickly notice it doesn't run well in production, because any blocking I/O blocks your whole event loop and thus all other connected clients.
One solution is rewriting everything to non-blocking I/O, but then you'd no longer be using Symfony. You might be able to reuse some components, but only those not doing any I/O.
Another solution is to use RPC and queue WebSocket requests into a queue. The intermediary can be written using non-blocking I/O only, it doesn't have much to do. It basically just forwards WebSocket messages as RPC requests to a queue. Then you have a set of workers pulling from that queue, doing a normal Symfony kernel dispatch and sending the response into a response queue. The worker can then continue to fetch the next job.
With the second solution you can totally use blocking I/O and all existing Symfony components. You can spawn as many workers as you need and you can even keep them alive between requests. The difference with a queue in between is that one blocking worker doesn't block the responsiveness of the WebSocket endpoint.
If you want multiple WebSocket processes, you'll need separate response queues for them, so the responses are sent back to the right process where the client is connected.
You can find a working implementation with BeanstalkD as the queue in kelunik/rpc-demo. src/Server.php is just for demo purposes and can be replaced with an HTTP server at any time. To keep the demo simple it uses a single WebSocket process, but that can be changed as outlined above. You can start php bin/server and php bin/worker, then use telnet localhost 2000 to connect and send messages. It will respond with the same message, base64-encoded by the workers.
The mentioned demo is built on Amp, but the same concepts apply to ReactPHP as well.
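The request-queue/response-queue flow above can be sketched with plain arrays standing in for BeanstalkD tubes (everything here is illustrative; a real worker would block on the queue and run a Symfony kernel dispatch instead of the callable):

```php
<?php
// Minimal simulation of the RPC pattern: WebSocket messages go onto a
// request queue tagged with the client's response queue; a worker handles
// each job and pushes the result to that per-client response queue, so
// the right WebSocket process can deliver it.
function runWorker(array &$requestQueue, array &$responseQueues, callable $handle): void
{
    while ($job = array_shift($requestQueue)) {
        $result = $handle($job['payload']);           // stands in for a kernel dispatch
        $responseQueues[$job['replyTo']][] = $result; // route back to the right process
    }
}
```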
In this issue of the official Symfony repository you may find comments and ideas about this: https://github.com/symfony/symfony/issues/17051
Hey guys, I'm working on a website for my small startup that needs to check a database continuously for new data. I'm a mechanical engineer and don't have experience with web design and web communication. Currently I'm using an AJAX request every second to check a MySQL database (using PHP). The code compares the received data (in JSON format) and, if it's different from the previous data, triggers a function to process the new data and update the UI.
Just last night I learned about web workers, WebSockets, and long polling, and I'm kind of overwhelmed by all the new options I have now. I'm really confused about whether I need to change my current solution and which solution would be best. I thought maybe I should create a dedicated web worker that handles the AJAX calls in order to avoid sacrificing UI smoothness (the website should run smoothly on an average tablet).
Can anyone with experience give me some tips and directions? I learned about the Pusher API, but I would like to avoid third-party APIs for now. I feel like all the code I have written in the past few months is inefficient after reading about web workers and WebSockets…
Thanks in advance...
You should really use Google and search SO for previous posts on similar issues.
Here are a few good starters:
In what situations would AJAX long/short polling be preferred over HTML5 WebSockets?
Performance of AJAX vs Websocket REST over HTTP 2.0?
What are Long-Polling, Websockets, Server-Sent Events (SSE) and Comet?
Design/Architecture: web-socket one connection vs multiple connections
Or (outside SO):
http://dsheiko.com/weblog/websockets-vs-sse-vs-long-polling/
https://www.pubnub.com/blog/2015-01-05-websockets-vs-rest-api-understanding-the-difference/
As a quick summation:
I would probably opt for a web socket connection per client.
I would avoid polling the MySQL database (why do that?). There's really no need to waste resources. It's easier to add code to the update gateway, so that whenever the DB is updated, an event is scheduled for all listening sockets... I would consider Redis for Pub/Sub if I were using more than one process / machine for my server app.
An easier workflow would look like this:
Browser page load -> Websocket connection.
Websocket connection -> subscribe (listen to) Redis channel.
SQL update -> (triggers) Redis publish to a channel.
Redis channel publish -> notification to the (subscribed) websocket.
Notification on channel -> web socket message to client.
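Step 3 above might look like this with the phpredis extension (the channel name and payload shape are made up); the pure helper builds the event so the subscribing WebSocket side can decode the same structure:

```php
<?php
// Build the JSON event published on each SQL update; subscribers
// (the websocket processes) decode it and forward it to their clients.
function makeUpdateEvent(string $table, int $rowId, string $action): string
{
    return json_encode([
        'table'  => $table,
        'id'     => $rowId,
        'action' => $action,   // e.g. "insert" or "update"
    ]);
}
// In the update gateway, after a successful write:
//   $redis = new Redis();
//   $redis->connect('127.0.0.1', 6379);
//   $redis->publish('db-updates', makeUpdateEvent('orders', $id, 'insert'));
```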
Good luck.
Here's a simple push idea that may work for you:
Create a trigger that writes to another table when inserts/updates are done and log any relevant data there (something useful to you)
On initial load of app, get the latest updates from the secondary "log" table, store the row/event ID for comparison later
Create a server-sent events (SSE) endpoint backed by a script that watches said "log" table
Create a cron job to execute the script from step 3 every X amount of time
(caveat: SSE in step 3 does not work in IE, so you'd need a fallback or a different solution)