I have a PHP application built in Symfony. All the infrastructure is on AWS. I also have multiple APIs which are hosted on an EC2 instance connected to a read-only replica of Aurora MySQL.
My question is: how can I log (store in the DB) every API call (user info, call timestamp, which parameters were passed, etc.)?
I cannot put the logging (the DB insert) inside the API endpoint itself, because the insert is time-consuming and would degrade our API performance.
Thanks
As I understand the question, the main goal is to log all requests (e.g. in the database) without a negative impact on serving the response to the user.
Symfony offers multiple KernelEvents which are triggered at different points in time while serving a request.
The kernel.terminate event is triggered after the response has been sent to the user, when the kernel is about to shut down. This is the perfect time to do clean-up work and perform other tasks which should have no influence on the time needed to create the response.
So, simply create an event subscriber or event listener for kernel.terminate and perform your logging there without influencing performance.
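A minimal sketch of such a subscriber, assuming a recent Symfony version (4.3+), PHP 8 and a writable Doctrine DBAL connection; the class, table and column names are placeholders:

<?php
// src/EventSubscriber/ApiCallLogSubscriber.php (hypothetical name)
namespace App\EventSubscriber;

use Doctrine\DBAL\Connection;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\TerminateEvent;
use Symfony\Component\HttpKernel\KernelEvents;

class ApiCallLogSubscriber implements EventSubscriberInterface
{
    public function __construct(private Connection $connection)
    {
    }

    public static function getSubscribedEvents(): array
    {
        // kernel.terminate fires after the response has been sent
        return [KernelEvents::TERMINATE => 'onTerminate'];
    }

    public function onTerminate(TerminateEvent $event): void
    {
        $request = $event->getRequest();

        // This insert runs after the client already has its response,
        // so it does not add latency to the API call itself.
        $this->connection->insert('api_call_log', [
            'user_identifier' => $request->headers->get('X-Api-User'), // however you identify callers
            'path'            => $request->getPathInfo(),
            'method'          => $request->getMethod(),
            'query_params'    => json_encode($request->query->all()),
            'called_at'       => (new \DateTimeImmutable())->format('Y-m-d H:i:s'),
        ]);
    }
}

Note that under PHP-FPM the response is flushed with fastcgi_finish_request() before kernel.terminate listeners run, so the client connection is already closed; with other SAPIs the work still happens after the response is generated, but the connection may stay open until it finishes.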
Asking for architectural advice on how to design a system with a public API for user-client communication.
I'm working on a project where two clients must be able to communicate with each other in real time (or close to it) in the simplest way possible. Let's introduce a resource which has to be accessed by two separate clients. The workflow is the following:
Client #1 connects to the server and creates the resource
Client #2 connects to the server and accesses the resource
Client #1 changes the resource
Client #2 changes the resource
Repeat steps 3 and 4 until done.
A client cannot act until the opposing client has acted - request order must be preserved.
Clients should be able to access the resource via a REST API (GET, POST, PUT, DELETE). Each client must wait until the opposing client performs an action. The time for a client to respond and perform an action is about 1-2 seconds (it can vary slightly).
Please note that the system should be able to handle a high load of concurrent requests (multiple clients communicating at the same time).
The global goal of the application is to provide an API through which clients written in many different languages can communicate in real time without any polling implementation on the user-client side. User clients must be as simple as possible.
Pseudo user-client example
response = init();

while (response->pending) {
    response = get();
}

while (response->action_required) {
    response = act();
    if (response->error || response->timeout) {
        response = get();
    }
}

function init() {
    // POST resource.example.com
}

function act() {
    // PUT resource.example.com
}

function get() {
    // GET resource.example.com
}
The problem statement
Since each client must wait for the opposing client to act, there is a need to introduce a sleep() call in the code which delays the response until the resource is affected/changed by the opposing client.
Request polling must be omitted from the user-client and implemented on the server side.
Current thoughts and proposal
The initial thought was to implement only the PHP backend and perform the response delay inside the API function; however, this implementation seems to cause severe performance issues, so I'm thinking about a more sophisticated solution. Or maybe I am wrong, and the response delay can successfully be implemented with sleep() inside the PHP backend?
Proposed system architecture
Node WebSocket server (socket.io to receive/return events)
PHP backend with REST API (access/change the resource, fire events to WebSocket)
Node.js application with a public API for the end-user client (response delay until the event is received)
Please note that the PHP backend cannot be replaced in this architecture; however, the WebSocket server and the Node.js application are flexible parts of the implementation.
Would this kind of architecture be implementable without severe server performance issues? Is there a better, more feasible way to design this kind of system? Is a Node.js application able to handle multiple concurrent requests with a response delay, or would another kind of web application (Python/Ruby/...) serve better? Are sockets a must-have for this system in order to achieve somewhat real-time behaviour?
Please share any ideas/insights/suggestions/... that could help to design this system in a sophisticated and well-performing manner.
Thank you in advance!
Some notes:
Avoid Sleep at all costs.
Your use case tends to lend itself to a pub/sub micro-services pattern.
As you need to preserve message processing order, you need a common queue. Each of your REST API nodes acts as a pub/sub publisher onto a distributed message queue system (RabbitMQ, Kafka, etc.). So for high throughput you now have a farm of machines handling the enqueue. They return immediately with a 202 Accepted, but you need a way to mark the message with some kind of client identifier so you can route update messages back over the web socket (if you aren't going to poll for status updates by resource id).
You need subscribers to this queue to do the actual processing. Same thing, have these as separate applications and now you can scale out the dequeue and processing. However, the tech you choose for the pub/sub bus needs to be able to invalidate subsequent messages for that resource, and for each one of the invalidated messages provide feedback to your application so that it can send the required message over web socket.
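To make the enqueue side concrete, a rough PHP sketch using the php-amqplib client for RabbitMQ (the queue name, payload fields and credentials are made up for illustration):

<?php
// Hypothetical controller action: enqueue the change and acknowledge immediately.
require __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

function enqueueResourceChange(string $resourceId, string $clientId, array $payload): void
{
    $connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
    $channel = $connection->channel();

    // Durable queue so pending actions survive a broker restart.
    $channel->queue_declare('resource_actions', false, true, false, false);

    $message = new AMQPMessage(
        json_encode([
            'resource_id' => $resourceId,
            'client_id'   => $clientId,   // used later to route the update back over the web socket
            'payload'     => $payload,
        ]),
        ['delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT]
    );

    $channel->basic_publish($message, '', 'resource_actions');

    $channel->close();
    $connection->close();
}

// In the REST endpoint: enqueue, then respond right away.
enqueueResourceChange('game-42', 'client-1', ['action' => 'move']);
http_response_code(202); // accepted for asynchronous processing

The endpoint only serializes and publishes, so it returns quickly regardless of how long the downstream processing takes.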
Hope this helps.
I am making a payment system in PHP which depends on a REST API.
My Business Logic:
If someone submits a request through my system, let's say "transfer money from point A to point B", that transaction is saved in my database with status "submitted", then submitted to the Mobile Network Operator's API URL, which processes it and returns a status to my system. I then update my database transaction status to the new status (e.g. "waiting for confirmation") and notify the user of the incoming status.
The problem is:
My application should keep requesting at an interval of 10 seconds to check for the new status and show it to the user until the final status of "complete" or "declined", since there can be around 5 statuses, e.g. "waiting", "declined", "approved", "complete", ...
I have managed to do this using AJAX, setting time intervals in JavaScript. But it stops requesting if the user closes the browser or anything happens at their end, resulting in my app not knowing whether the money was delivered or not.
I would like to know how I can run these recurring tasks in the background using Gearman without involving JavaScript time intervals. Thanks.
Gearman is more of a worker queue, not a scheduling system. I would probably set up some type of cron job that queries the database and submits the appropriate jobs to Gearman in an async way. With Gearman, you will want to use libdrizzle or something else for persistent queues, and also some type of GearmanWorker process manager to run more than one job at a time. There are a number of projects that currently do this with varying degrees of success, like https://github.com/brianlmoon/GearmanManager. None of the worker managers I have evaluated have really been up to par, so I created my own that will probably be open-sourced shortly.
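For the cron side, something along these lines would work, assuming the pecl gearman extension is installed and a worker registers a 'check_transaction_status' function; the table layout below is made up:

<?php
// check_pending.php -- run from cron, e.g. every minute:
// * * * * * php /path/to/check_pending.php

$pdo = new PDO('mysql:host=localhost;dbname=payments', 'user', 'pass');
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);

// Find transactions that are not yet in a final state.
$stmt = $pdo->query(
    "SELECT id FROM transactions WHERE status NOT IN ('complete', 'declined')"
);

foreach ($stmt->fetchAll(PDO::FETCH_COLUMN) as $transactionId) {
    // doBackground() returns immediately; a GearmanWorker running the
    // 'check_transaction_status' function does the actual API call and DB update.
    $client->doBackground('check_transaction_status', json_encode(['id' => $transactionId]));
}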
You wouldn't use Gearman in the background for circular tasks, which is normally referred to as polling. Gearman is normally used as a job queue for doing things like video compression, resizing images, sending emails, or other tasks that you want to 'background'.
I don't recommend polling the database, either on the frontend or the backend. Polling is generally considered bad, because it doesn't scale. In your javascript example, you can see that as your application grows and is used by thousands of users, polling is going to introduce a lot of unnecessary traffic and load on your servers. On the backend, the machine doing the polling is a single point of failure.
The architecture you want to explore is a message queue. It's similar to the Listener/Observer pattern in programming, but applied at the systems level. This will allow a more robust system that can handle interruptions, from a user closing the browser all the way to a backend system going down for maintenance.
I'm working on an API project that needs to send emails and store statistics after (or before, depending on the implementation) the response has been returned to the client. For both cases I'm considering Symfony's EventDispatcher component (I'm not using Symfony as the framework), so each controller action will dispatch an event to add an email to the queue or insert the data into the statistics database table.
So the thing would look something like this
Controller
=> Send Response to client
=> Dispatch Event email => EmailEventListener => Mail queue
=> Dispatch Event stats => StatsEventLister => Database
I'm considering this because I want these internal actions to be as asynchronous as they can be. Is this an appropriate solution for this case?
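To make that concrete, roughly the wiring I have in mind with the standalone EventDispatcher component (event and listener names are placeholders; this assumes a recent component version where dispatch() takes the event object, and the listeners themselves would only hand work off, since the dispatcher is synchronous):

<?php

use Symfony\Component\EventDispatcher\EventDispatcher;
use Symfony\Contracts\EventDispatcher\Event;

// Hypothetical event carrying whatever the listeners need.
class ApiCallCompleted extends Event
{
    public function __construct(public array $stats, public array $emailPayload)
    {
    }
}

$dispatcher = new EventDispatcher();

// StatsEventListener equivalent: record the stats row (or push it to a queue).
$dispatcher->addListener(ApiCallCompleted::class, function (ApiCallCompleted $event) {
    // insert $event->stats into the statistics table
});

// EmailEventListener equivalent: add the email to the mail queue.
$dispatcher->addListener(ApiCallCompleted::class, function (ApiCallCompleted $event) {
    // enqueue $event->emailPayload
});

// In the controller, after the response has been prepared:
$dispatcher->dispatch(new ApiCallCompleted(
    ['endpoint' => '/users', 'duration_ms' => 12],
    ['to' => 'user@example.com', 'template' => 'welcome']
));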
EDIT: As Jovan Perovic suggested, I'm adding more information. The API is a REST API that users communicate with via web or mobile apps, and I want to log, store stats and send notifications (primarily emails) without penalizing the performance of the API. The first idea was to use something that runs after returning the response to the client, but I don't know if that's possible using the EventDispatcher. Even if I use a queue to process stats or notifications, I need a centralized place where all the controllers can send information so logs can be written and stats stored.
I hope my goal is now more clear. Sorry.
I think you could use request filters (an "after" filter would be suitable for you), although I have never attempted to use them outside of the Symfony2 framework.
As for async operations, in general, sockets are your friend. You could externalize the logic by sending data to some socket, which will in turn process the data accordingly. If that processing is non-essential (e.g. email and stats), your request can finish even if the external mechanism fails.
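For instance, a bare-bones fire-and-forget handoff over a local socket could look like this (the port, the payload shape and the listening worker process are all assumptions):

<?php
// Hand work off to an external worker listening on 127.0.0.1:9500 (assumed).
function handOff(array $job): void
{
    $socket = @stream_socket_client('tcp://127.0.0.1:9500', $errno, $errstr, 0.2);
    if ($socket === false) {
        return; // non-essential work: ignore failures so the request still finishes
    }
    fwrite($socket, json_encode($job) . "\n");
    fclose($socket);
}

handOff(['type' => 'stats', 'endpoint' => '/orders', 'duration_ms' => 35]);
handOff(['type' => 'email', 'to' => 'user@example.com', 'template' => 'receipt']);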
I read some time ago about Gearman (just an example), which might help externalize that by creating separate jobs.
Hope this sheds some light here :)
We have a web application built in PHP (Laravel) which exposes a bunch of schema objects via JSON API calls. We want to tie changes in our schema to AngularJS in such a way that when the database updates, the AngularJS model (and subsequently the view) also updates, in real time.
In terms of the database, it can be anything, such as MySQL, SQL Server, etc. There are a couple of ways we're thinking about this:
MySQL commits fire some sort of event at Laravel, which then fires a call to all relevant/listening models/views in AngularJS.
Whenever data is changed (edited/added), Laravel itself fires an event to AngularJS. In other words, after any successful DB commit, another "thing" is done to notify.
The second seems the obvious, clean way of doing this - since the database is not involved lower down the stack. Is there any better way of doing this?
This question is related:
How to implement automatic view update as soon as there is change in database in AngularJs?
but I don't quite understand the concept of a "room" in the answer.
What (if any) is the best way to efficiently tie database commits (pushing) to the AngularJS view (to render changes)? We want to avoid polling a JSON API for changes every second, of course.
I've also had similar requirements on one of my projects. We solved it using Node.js and SockJS. The flow is like this:
There is a node.js + SockJS server to which all clients connect.
When the DB is updated, Laravel issues a command to Node.js via HTTP (Redis is also a possibility); see the sketch after this list.
Node.js broadcasts the event to all interested clients (this depends upon your business logic)
Either the client reloads the required data, or, if the message is small enough, it can be included in the Node.js broadcast.
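On the Laravel side, step 2 can be as small as an HTTP POST to an internal endpoint that the Node.js/SockJS server exposes (the URL and payload below are made up):

<?php
// Called right after the model is saved/committed in Laravel.
function notifyNode(string $channel, array $data): void
{
    $ch = curl_init('http://localhost:8081/broadcast'); // internal Node.js endpoint (assumed)
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => json_encode(['channel' => $channel, 'data' => $data]),
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT        => 1, // keep the web request fast even if Node is down
    ]);
    curl_exec($ch);
    curl_close($ch);
}

notifyNode('orders', ['id' => 17, 'status' => 'shipped']);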
Hope this helps. There is no clean way to do this without using other technologies (Node.js / WebSockets / SSE, etc.). Much of it also depends on the configuration your clients will be using.
I'm trying to figure out the best possible way to notify my web application of database changes. The way I'm doing it now is that I've got a separate MySQL table with a counter.
Now, when some database action happens in table Foo, a trigger increments the counter.
The web app then polls the server every 5 seconds to check whether something has happened (the counter has changed) and, if so, refreshes the data in the app.
What I would like is to be able to do a callback/notification from MySQL to the server, and from there to the web app, so that I don't need to poll the server frequently. Is this possible somehow?
How do Facebook and Gmail send real-time notifications?
You can't notify your application directly from MySQL, but there are some solutions to save bandwidth and reduce the load on your server.
One way of handling this would be to either implement the observer pattern yourself or simply use a pub/sub messaging option (ZMQ/AMQ/RabbitMQ/Redis, etc.). When the initial database action takes place (ensure that the transaction has committed), publish a message to a topic on the pub/sub tool. Your application can subscribe to the pub/sub tool and receive messages when there is a DB change.
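As an example with the phpredis extension, publishing only after the commit might look like this (the channel name, table and payload are illustrative):

<?php
// Perform the change and publish a notification only after the commit succeeds.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$pdo->beginTransaction();
$pdo->prepare('UPDATE foo SET value = ? WHERE id = ?')->execute(['bar', 1]);
$pdo->commit(); // make sure the data is really there before telling anyone

// Subscribers (e.g. a WebSocket/SSE bridge) receive this message and push it to browsers.
$redis->publish('foo.changed', json_encode(['id' => 1, 'value' => 'bar']));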
Hope it helps.