I'm working on an API project that needs to send emails and store statistics after (or before, depending on the implementation) the response has been returned to the client. For both cases I'm considering Symfony's EventDispatcher component (I'm not using Symfony as the framework), so each controller action will dispatch an event to add an email to the queue or insert the data into the statistics database table.
So the flow would look something like this:
Controller
=> Send Response to client
=> Dispatch Event email => EmailEventListener => Mail queue
=> Dispatch Event stats => StatsEventListener => Database
I'm considering this because I want these internal actions to be as asynchronous as possible. Is this an appropriate solution for this case?
EDIT: As Jovan Perovic suggested, I'm adding more information. The API is a REST API that users communicate with via web or mobile apps, and I want to log, store stats and send notifications (emails primarily) without penalizing the performance of the API. My first idea was to use something that runs after the response has been returned to the client, but I don't know whether that's possible using the EventDispatcher. Even if I use a queue to process stats or notifications, I need a centralized place that all the controllers can send information to, so logs get written and stats stored.
I hope my goal is clearer now. Sorry.
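For reference, the flow described above boils down to a listener registry: each controller action dispatches named events, and listeners react. Here is a minimal framework-free sketch of that pattern (event names and payloads are made up; Symfony's EventDispatcher component offers the same addListener()/dispatch() pair, with richer event objects):

```php
<?php
// Minimal stand-in for an event dispatcher, illustrating the
// controller -> dispatch -> listener flow described in the question.
// Symfony's EventDispatcher provides the same addListener()/dispatch()
// pattern; this sketch only exists to show the shape of the flow.

class SimpleDispatcher
{
    /** @var array<string, callable[]> */
    private array $listeners = [];

    public function addListener(string $name, callable $listener): void
    {
        $this->listeners[$name][] = $listener;
    }

    public function dispatch(string $name, array $payload = []): void
    {
        foreach ($this->listeners[$name] ?? [] as $listener) {
            $listener($payload);
        }
    }
}

// Hypothetical listeners: one queues an email, one records a stats row.
$mailQueue = [];
$statsRows = [];

$dispatcher = new SimpleDispatcher();
$dispatcher->addListener('api.email', function (array $p) use (&$mailQueue) {
    $mailQueue[] = $p['to'];
});
$dispatcher->addListener('api.stats', function (array $p) use (&$statsRows) {
    $statsRows[] = $p;
});

// Inside a controller action, after building the response:
$dispatcher->dispatch('api.email', ['to' => 'user@example.com']);
$dispatcher->dispatch('api.stats', ['endpoint' => '/orders', 'ms' => 42]);
```

Note that dispatching like this is still synchronous and in-process; real asynchrony needs a queue, a worker, or an after-response hook.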
I think you could use request filters (the "after" filter would be suitable for you), although I have never attempted to use them outside of the Symfony2 framework.
As for async operations, sockets are generally your friend. You could externalize the logic by sending data to some socket whose consumer processes the data accordingly. If that processing is non-essential (e.g. email and stats), your request can still finish successfully even if the external mechanism fails.
I read some time ago about Gearman here (just one example), which might help externalize that work by creating separate jobs.
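For reference, handing work to Gearman from PHP is only a few lines with the pecl gearman extension (the function name and payload below are made up; addServer() with no arguments assumes a gearmand on the default 127.0.0.1:4730):

```php
<?php
// Fire-and-forget job submission via the pecl gearman extension.
// doBackground() returns immediately; a separate worker process
// registered for the same function name does the actual work.

function queueBackgroundJob(string $function, array $payload): void
{
    $client = new GearmanClient();
    $client->addServer(); // defaults to 127.0.0.1:4730
    $client->doBackground($function, json_encode($payload));
}

// Usage (requires a running gearmand and the gearman extension):
// queueBackgroundJob('send_email', ['to' => 'user@example.com']);
```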
Hope this sheds some light here :)
I have a PHP application built in Symfony. All the infrastructure is on AWS. I also have multiple APIs hosted on an EC2 instance connected to a read-only replica of MySQL Aurora.
My question is how I can log (store in the DB) every API call: user info, call timestamp, which parameters they are passing, etc.
I cannot do the logging (storing in the DB) inside the API endpoint itself, because the insert is time-consuming and would degrade our API performance.
Thanks
As I understand the question, the main goal is to log all requests (e.g. in the database) without a negative impact on serving the response to the user.
Symfony offers multiple KernelEvents which are triggered at different points in time while serving a request.
The kernel.terminate event is triggered after the response has been sent to the user, when the kernel is about to shut down. This is the perfect time to do clean-up work and perform other tasks that should have no influence on the time needed to create the response.
So, simply create an event subscriber or event listener for kernel.terminate and perform your logging there without influencing performance.
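A sketch of such a listener (the class, table and column names are illustrative; in Symfony you would register it for kernel.terminate via your service configuration, or implement EventSubscriberInterface as described above):

```php
<?php
// Illustrative kernel.terminate listener. Hypothetical registration
// in services.yaml would look like:
//   App\EventListener\RequestLogListener:
//     tags:
//       - { name: kernel.event_listener, event: kernel.terminate }
// The framework calls onKernelTerminate() after the response was sent,
// so the insert below no longer delays the client.

class RequestLogListener
{
    private \PDO $db;

    public function __construct(\PDO $db)
    {
        $this->db = $db;
    }

    // $event is Symfony's TerminateEvent in practice; left untyped
    // here so the sketch stays self-contained.
    public function onKernelTerminate($event): void
    {
        $request = $event->getRequest();
        $stmt = $this->db->prepare(
            'INSERT INTO api_log (path, method, logged_at) VALUES (?, ?, ?)'
        );
        $stmt->execute([
            $request->getPathInfo(),
            $request->getMethod(),
            date('c'),
        ]);
    }
}
```

One caveat: under PHP-FPM the response is flushed first (via fastcgi_finish_request), so this is genuinely "free" for the client; under plain Apache mod_php the connection may stay open until the listener finishes.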
I am working on a REST API for an Android app in Symfony2. Recently I implemented sending push notifications via FCM. Everything works fine; the problem is that my implementation sends notifications inline in the controller, so the client has to wait for all notifications to be sent before getting a response from the server, which of course leads to performance issues. Could anyone give me a hint on the best way to handle notification sending, e.g. in a separate thread or some scheduler? I just don't know what my options are. Thanks in advance.
Well, what you are looking for is kind of an asynchronous worker.
That can be accomplished in different ways.
The easiest way might be to store all notifications that need to be sent in some kind of queue, like a table in your database, and process these entries using a console command (https://symfony.com/doc/current/console.html) which is regularly executed via crontab.
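The queue-table approach can be sketched like this (table and column names are made up; in Symfony the function would live inside a console command that cron runs every minute, and $send would wrap your actual FCM call):

```php
<?php
// Drain pending rows from a notification queue table and mark them sent.
// Illustrative schema: notification_queue(id, payload, sent).
// Rows whose send attempt throws are left unsent, so the next cron
// run retries them.

function processPendingNotifications(\PDO $db, callable $send, int $batch = 50): int
{
    $rows = $db->query(
        "SELECT id, payload FROM notification_queue
         WHERE sent = 0 ORDER BY id LIMIT $batch"
    )->fetchAll(\PDO::FETCH_ASSOC);

    $mark = $db->prepare('UPDATE notification_queue SET sent = 1 WHERE id = ?');
    $done = 0;
    foreach ($rows as $row) {
        try {
            $send(json_decode($row['payload'], true));
            $mark->execute([$row['id']]);
            $done++;
        } catch (\Throwable $e) {
            // Swallow the failure for this row; it stays queued.
        }
    }
    return $done;
}
```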
Another way would be to use something like RabbitMQ and write a custom consumer which sends the notifications. That's quite straightforward, but requires something like supervisord to daemonize the consumer process.
Maybe the cron-job way is the best for you. I didn't quite get whether these push notifications need to rely on the requests sent by your clients, but generally you should try to move all logic out of controllers and into services.
The Symfony documentation is always a good entry point for these kinds of questions and should give you some more detailed examples and hints:
https://symfony.com/doc/current/index.html
I have a PHP application that is already up and running and we have to implement a chat messaging system in it. We chose to do this with Node.js and socket.io, as that seems the most effective and one of the best-documented options. I have PHP handling all the DB work and Node doing what it's most effective at: non-blocking IO to update the client side in real time when a message is received (through rooms). I also have token-based authentication going on, using JSON Web Tokens.
Everything is actually running well now:
When someone sends a message
1. JS sends an ajax request to PHP
2. PHP saves the message to the database
3. PHP returns a response
4. JS receives the ajax response and then emits an event to signal Node to update the appropriate clients
5. Node emits an event to the appropriate clients to update their views: notification icons, a silly sound and whatnot.
What I'm worried about are steps 4 and 5. Since the data passed to Node in these steps comes from the client side, any rogue user can modify it and potentially trigger a view update for another user even if he is not the intended receiver. The obvious solution I can think of is to give Node access to the database and validate that only the legitimate recipient receives the event trigger, but that defeats the purpose of separating the concerns of the PHP app and Node. What is the standard way of handling such a situation?
After a bit of reading I've decided on using Redis because of its PubSub capability. Refer to this for anyone who has the same concern: Redis sub/pub and php/nodejs
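Whatever transport ends up carrying the event (Redis pub/sub included), the trust problem in steps 4 and 5 can also be closed by having PHP sign the event payload with a shared secret, so Node only acts on events that provably came from the server. A sketch using an HMAC (the secret handling and envelope format are assumptions; Node would verify with crypto.createHmac):

```php
<?php
// PHP side: wrap the event in a signed envelope before handing it to
// the client or publishing it. The shared $secret is an assumption --
// keep it out of the web root and out of client-side code.

function signEvent(array $event, string $secret): string
{
    $payload = json_encode($event);
    $sig = hash_hmac('sha256', $payload, $secret);
    return json_encode(['payload' => $payload, 'sig' => $sig]);
}

// Verification (shown in PHP for completeness; the Node side performs
// the same HMAC comparison). Returns null for forged/tampered envelopes.
function verifyEvent(string $envelope, string $secret): ?array
{
    $data = json_decode($envelope, true);
    if (!is_array($data) || !isset($data['payload'], $data['sig'])) {
        return null;
    }
    $expected = hash_hmac('sha256', $data['payload'], $secret);
    if (!hash_equals($expected, $data['sig'])) {
        return null;
    }
    return json_decode($data['payload'], true);
}
```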
I'm running a social network with a messaging and notification feature. Each time a user sends a message or places a notification for another user, a row is inserted into a news_updates table with the details about the message or notification, and a row for each of his friends is inserted into the news_seen table. Once the message is read, or the item related to the notification is opened, seen is set to 1. I'm doing this at the end of the callback function for my ajax request: I gather the newsitem_ids of all news items that are currently open and then do one big insert with all those newsitem_ids.
news_seen:
newsitem_id bigint,
user_id bigint,
seen int DEFAULT '0'
At the moment, I'm running an ajax request every 3 seconds to check news_updates JOIN news_seen for news.
This turns out to be a huge server load now that I'm getting more and more users. I've been reading a lot about XMPP and the like, and I think a push notification service would be great for my site.
The only thing is, I can't really decide which way to go, since there are so many options.
I also thought about creating my own system. I'm planning to do it like this:
create an XML file for each user on initial registration (and run a batch for the already-registered users)
once a user sends out a news update (I have my own PHP function for writing them into the DB), include a small command to manipulate the XML files of the respective friends
instead of doing my 3-second ajax request, establish a long-lived connection to the XML file using jQuery Stream, and in case changes were made since the last request, do my usual ajax request that polls the data from the DB
instead of running my check_seen inside the ajax request, insert all the new items into a global array, to be used by an interval function that tests whether any item in the list is currently being viewed
do you think this is a good idea?
To be honest I do not think I would implement your specification.
For example, I would use a lighter data format than XML: I would use JSON instead.
I would avoid touching the disk (database) as much as possible, since it is slow.
Doing two requests instead of one (long-polling, then polling the DB): I would try to avoid this.
I would try to avoid wasting CPU time on interval functions, calling functions only when needed. I would probably use something like Redis's pub/sub.
Polling/long-polling is (most of the time) a bad idea in PHP because of blocking IO. I found an interesting project named React which I believe does non-blocking IO (expensive). I have not tested its performance myself, but this could be an option.
For XMPP you will have to install additional software. I, for instance, liked Prosody for its easy installation and usage. Then you would need to have a look at BOSH. For your BOSH client I would use Strophe.js or JSJaC.
I would probably use something like socket.io, Faye or maybe vertx.io instead, because they would scale a lot better.
Does RabbitMQ call the callback function for a consumer when it has some message for it, or does the consumer have to poll the RabbitMQ client?
So on the consumer side, if there is a PHP script, can RabbitMQ call it and pass the message/parameters to it? E.g. if a rating is submitted on shard 1 and the aggregateRating table is on shard 2, would the RabbitMQ consumer on shard 2 trigger a script, say aggRating.php, and pass it the parameters that were inserted on shard 1?
The AMQPQueue::consume method is now a "proper" implementation of basic.consume as of version 1.0 of the PHP AMQP library (http://www.php.net/manual/en/amqpqueue.consume.php). Unfortunately, since PHP is a single-threaded language, you can't do other things while waiting for a message in the same process space. If you call AMQPQueue::consume and pass it a callback, your entire application will block and wait for the next message to be sent by the broker, at which point it will call the provided callback function. If you want a non-blocking method, you will have to use AMQPQueue::get (http://www.php.net/manual/en/amqpqueue.get.php), which polls the server for a message and returns boolean FALSE if there is none.
I disagree with scvatex's suggestion to use a separate language for a "push" approach to this problem, though. PHP is not IO-driven, but using a separate language to call a PHP script when a message arrives seems like unnecessary complexity: why not just use AMQPQueue::consume and let the process block (wait for a message), and either put all the logic in the callback or have the callback run a separate PHP script?
We have done the latter at my work in a large-scale job-processing system, so that we can segregate errors and keep the parent job processor running no matter what happens in the children. If you would like a detailed description of how we set this up, along with some code samples, I would be more than happy to post them.
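That parent/child split can be sketched roughly like this (the worker script path and job format are illustrative, not the setup described above): the consumer callback shells out to a separate PHP process per message, so a fatal error in a job can never take down the long-running consumer.

```php
<?php
// Run one job in a separate PHP process. The parent feeds the job to
// the child's stdin as JSON and treats a non-zero exit code as failure,
// so it can requeue the message and carry on regardless of crashes.

function runJobInChild(string $workerScript, array $job): bool
{
    $proc = proc_open([PHP_BINARY, $workerScript], [
        0 => ['pipe', 'r'],   // child stdin: receives the job as JSON
        1 => ['pipe', 'w'],   // child stdout
        2 => ['pipe', 'w'],   // child stderr
    ], $pipes);

    if (!is_resource($proc)) {
        return false;
    }
    fwrite($pipes[0], json_encode($job));
    fclose($pipes[0]);
    stream_get_contents($pipes[1]); // drain output so the child can exit
    fclose($pipes[1]);
    fclose($pipes[2]);

    return proc_close($proc) === 0;
}
```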
What you want is basic.consume, which allows the broker to push messages to clients.
That said, the libraries are implemented differently. Most of them have support for basic.consume, but because of inherent limitations of the frameworks used, some don't (most notably the official RabbitMQ C client on which a lot of other clients are based).
If your PHP library does not support basic.consume, you either have to use polling (bad), or you could use one of the more complete clients to drive the script. For instance, you could write a Python or Java program that consumes from the broker (the broker pushes deliveries to it) and calls the script whenever a new message is received. The official tutorials are a great introduction to the AMQP APIs and are a good place to start.
This is efficient from most points of view, but it does require a stable connection to the broker.
If in doubt about the capabilities of the various clients, or if you need more guidance, the RabbitMQ Discuss mailing list is a great place to ask questions. The developers make a point of answering any query posted there.
The pecl amqp extension supports the consume functionality via the AMQPQueue::consume method. You just need to pass a callback function to it, and the callback will be executed when a message arrives.
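A sketch of that call with the pecl amqp classes (the connection parameters and queue name are illustrative defaults; consume() blocks the process and invokes the callback for every delivery):

```php
<?php
// Blocking consumer using the pecl amqp extension. Assumes a broker on
// localhost:5672 with the default guest/guest credentials; adjust to
// your setup. consume() never returns under normal operation.

function consumeQueue(string $queueName): void
{
    $connection = new AMQPConnection([
        'host' => 'localhost', 'port' => 5672,
        'login' => 'guest', 'password' => 'guest',
    ]);
    $connection->connect();

    $channel = new AMQPChannel($connection);
    $queue = new AMQPQueue($channel);
    $queue->setName($queueName);
    $queue->declareQueue();

    // The callback fires once per delivered message.
    $queue->consume(function (AMQPEnvelope $message, AMQPQueue $q) {
        echo $message->getBody(), PHP_EOL;
        $q->ack($message->getDeliveryTag());
    });
}

// Usage (requires the amqp extension and a running broker):
// consumeQueue('ratings'); // blocks forever, processing messages
```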