PHP, MySQL Notification Server Requirements

I am creating a project management system and need to push notifications when an activity takes place.
Question: If I use jQuery to poll and fetch notifications from a MySQL database, say every 30 seconds, will there be a significant impact on the server? What are the minimum requirements?
So basically, I'm looking at about 10 notifications per day for 20 employees.

Assuming you're talking about an AJAX request to the server to update DOM elements, most basic web servers can easily handle a few requests every 30 seconds or so. More important is how well optimized the server-side code that finds and returns the notifications is. With a few clients polling every 30 seconds, make sure the code takes no more than a couple of seconds to process the request and respond with the updated data.
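As a rough sketch of what that polled endpoint could look like (the `notifications` table and its columns are assumptions, not from the question), the script only needs to run one small indexed query and emit JSON:

```php
<?php
// Hypothetical polled endpoint: returns unseen notifications as JSON.
// Assumes a `notifications` table with id, user_id, message, created_at,
// seen columns -- names are illustrative, not from the question.

function formatNotifications(array $rows): string
{
    // Shape the DB rows into the payload the jQuery poller consumes.
    return json_encode(['count' => count($rows), 'notifications' => $rows]);
}

// In the real endpoint you would fetch the rows first, e.g.:
// $stmt = $pdo->prepare(
//     'SELECT id, message, created_at FROM notifications
//      WHERE user_id = ? AND seen = 0 ORDER BY created_at DESC LIMIT 20'
// );
// $stmt->execute([$userId]);
// echo formatNotifications($stmt->fetchAll(PDO::FETCH_ASSOC));
```

Keeping the query limited and indexed on (`user_id`, `seen`) is what keeps a 30-second poll cheap at this scale.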

Related

Is it just fine to POST data to Laravel every second?

I am trying to build a tracking system in which an Android app sends GPS data to a web server running Laravel. I have read tutorials on building realtime apps, but as far as I understand them, most guides only cover receiving data in realtime. I haven't yet seen examples of sending data every second or so.
I guess it's not good practice to POST data to a web server every second, especially when you already have a thousand users. I hope someone can suggest how I should approach this.
Also, as much as possible, I would like to use only Laravel, without any Node.js server.
Handle requests quickly
First you should estimate server capacity. With FPM, if you have 32 PHP processes and the server handles every POST request within 0.01 s, capacity can be roughly estimated as N = 32 / 0.01 = 3200 requests per second.
So make request handling fast. If a request takes 0.1 s, that is too slow to support many clients on a single server. Enable OPcache; it can cut processing time by roughly 5x. Inserting data into MySQL is a slow operation, so you will probably need to optimize it. For example, write incoming data to a fast cache (Redis/Memcached), and once the cache holds 1000 elements or was created more than 0.5 seconds ago, move the batch to the database as a single insert query.
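The cache-then-batch idea can be sketched like this; the table, columns, and thresholds are illustrative, and the Redis wiring is only indicated in comments:

```php
<?php
// Sketch of the buffered-insert idea: collect GPS points in a fast cache
// and flush them to MySQL as one multi-row INSERT once the buffer holds
// 1000 elements or is older than 0.5 s. Names are illustrative.

function buildBatchInsert(array $points): string
{
    // One placeholder group per buffered GPS point.
    $groups = implode(',', array_fill(0, count($points), '(?, ?, ?)'));
    return "INSERT INTO positions (device_id, lat, lng) VALUES $groups";
}

function shouldFlush(int $bufferSize, float $bufferAgeSec): bool
{
    return $bufferSize >= 1000 || $bufferAgeSec > 0.5;
}

// Worker wiring (not shown): RPUSH each incoming point onto a Redis list,
// then periodically LRANGE/DEL the list and execute buildBatchInsert()
// with the flattened values via a prepared statement.
```

A single 1000-row INSERT is far cheaper than 1000 separate ones, which is what makes the 0.01 s-per-request budget plausible.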
Randomize request timing
Most smartphones have accurate clocks, so you can get a thousand simultaneous requests at the start of every second: the server handles 1000 requests in the first 0.01 s and then sits idle for the remaining 0.99 s. In the mobile code, add a random delay of 0-0.9 s that is fixed per device, chosen at first install or first request. This spreads the load uniformly.
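A stable per-device offset could also be derived rather than stored, e.g. by hashing a device identifier. This `deviceDelayMs` helper is a hypothetical illustration, not part of the answer above:

```php
<?php
// Derive a stable 0-900 ms delay from a device identifier, so every
// device keeps the same offset across requests without storing it.
// (Storing a random value at first install works just as well.)

function deviceDelayMs(string $deviceId): int
{
    // Take 16 bits of the hash and map them into the 0..900 ms range.
    return hexdec(substr(md5($deviceId), 0, 4)) % 901;
}
```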
There are at least two really important things you should consider:
Client's internet consumption
Server capacity
If you have a thousand users, a request every second means a lot of requests for your server to handle.
You should consider using a push technique instead, like the ones described in this answer by #Dipin.
And when it comes to the server, you should consider using a queue system to handle those jobs, as described in this article. There is probably a package that provides the integration with Firebase or GCM to handle that for you.
Good luck, hope it helps o/

Pull notification methods

I want to design a notification component, and I want to understand what types of pull notification methods are used out there to fetch notifications effectively with minimal stress on the server.
Say, for example, I want to notify a user of a chat message. I imagine I would need to pull the data quite regularly, like every 500 ms, for a quick response. However, doing this may overload the system: hypothetically, if I have a million users browsing the site, that's 2 million requests every second!
I'm thinking of writing an algorithm that will incrementally increase the pull interval by 1 second on each pull, up to a maximum of 60 seconds. The interval resets to 500 ms whenever there is new data. This way, if the user has frequent notifications, delivery is near-instant; but after a long quiet period there may be a delay of up to a minute.
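The interval scheme described above can be captured in one small function. This is a sketch of the questioner's proposal, not an established library API:

```php
<?php
// Adaptive polling interval: start at 500 ms, grow by 1 s per empty
// poll, cap at 60 s, and reset to 500 ms whenever new data arrives.

function nextPollInterval(float $currentMs, bool $hadNewData): float
{
    if ($hadNewData) {
        return 500.0;                          // reset on activity
    }
    return min($currentMs + 1000.0, 60000.0);  // back off, capped at 60 s
}
```

The client would call this after each poll and schedule the next request with the returned delay.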
In essence I'm compromising between user experience and server load to find a middle ground for both.
Please advise on the possible drawbacks of this approach, if any. Is there a proper name for it?
Alternatively, is there a better method out there?
What you are doing is polling (or long polling). It is not good for performance.
The alternative is pushing (http://en.wikipedia.org/wiki/Push_technology): you push the data only when there is something new.
You could use WebSockets to achieve this.
You could look at Apollo, messaging middleware with native support for WebSockets and good performance.
http://activemq.apache.org/apollo/
The method you are using could overload your server with network traffic when many clients are connected. Suppose you have 1000 clients connected: the server has to handle 1000 separate connections. A better approach is a push notification system. Check this out: https://nodejs.org/it/docs/

Best way to handle memory intensive, long tasks in PHP

I have an APNS notification server set up which, in theory, would send a processed notification to about 50,000 to 100,000 users every day (based on the number of users of our web app that ties in with the iOS app).
The notification goes out around 2:00, but it must be sent to each user individually (using Urban Airship) and is triggered by curl on a cron job.
It iterates through each user and has to use an HTML scraper (simple_html_dom, to be exact), which takes about 5-10 s per user and is obviously very memory intensive. A simple GET request can't be the right way to go about this; in fact, I'm positive it will fail. What is the best way to handle this long, memory-intensive task in a cron job?
If you reuse the same variables, or set ones you are not going to use any more to null, you won't run out of memory.
Just don't load all the data at once; free it (set it to null) or replace it with new data after you process it.
And make sure you can't improve the speed of your task: 5-10 s per user sounds really long.
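One way to apply this advice is to process users in fixed-size chunks and release each chunk before loading the next, so peak memory stays bounded regardless of the total user count. The `fetchUserBatch`/`notifyUser` callables below are hypothetical stand-ins for the real DB fetch and Urban Airship call:

```php
<?php
// Chunked processing sketch: handle users in small batches and free each
// batch before fetching the next, keeping peak memory roughly constant.

function processInChunks(callable $fetchUserBatch, callable $notifyUser, int $chunkSize = 100): int
{
    $processed = 0;
    $offset = 0;
    while ($users = $fetchUserBatch($offset, $chunkSize)) {
        foreach ($users as $user) {
            $notifyUser($user);   // scrape + send for one user
            $processed++;
        }
        unset($users);            // release the batch before the next fetch
        gc_collect_cycles();      // reclaim any cyclic references now
        $offset += $chunkSize;
    }
    return $processed;
}
```

Running this from the CLI (with `memory_limit` and `max_execution_time` relaxed for CLI) rather than through a web GET request avoids the web server's timeout entirely.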

Scale multi request to different services

I have a service where, for each user request, I need to query 40 external services (APIs) to get information from them. For example, a user searches for some information, and my service asks 40 external partners for it, aggregates it in one DB (MySQL), and displays the result to the user.
At the moment I have a multicurl solution with 10 partner requests in flight at a time; when one partner request finishes, the software adds another partner from the remaining 30 to the multicurl queue, until all 40 requests are done and the results are in the DB.
The problem with this solution is that it cannot scale across many servers. I want a solution where I can fire all 40 requests at once, for example spread over 2-3 servers, and wait only as long as the slowest partner takes to deliver its results. That means if the slowest partner takes 10 seconds, I will have the results of all 40 partners in 10 seconds. With multicurl I run into trouble when there are more than 10-12 requests at a time.
What kind of solution can you offer that uses as few resources as possible, can run many processes on one server, and is scalable? My software is written in PHP, so the solution needs a good way to connect to it via a framework or API.
I hope you understand my problem and my needs. Please ask if something is not clear.
One possible solution would be to use a message queue system like beanstalkd, Apache ActiveMQ, memcacheQ etc.
A high level example would be:
User makes request to your service for information
Your service adds the requests to the queue (presumably one for each of the 40 services you want to query)
One or more job servers continuously poll the queue for work
A job server gets a message from the queue to do some work, adds the data to the DB and deletes the item from the queue.
In this model, since the single task of performing 40 requests is now distributed and no longer part of one "process", the next part of the puzzle is figuring out how to mark a set of work as completed. This part may not be difficult, or it may introduce a new challenge (it depends on the data and your application). For example, you could use another cache/DB row as a counter set to the number of jobs a particular request needs in order to complete; as each queue worker finishes a job, it decrements the counter by 1. Once the counter hits 0, you know the request has been completed. When you do that, though, you need to make sure the counter actually reaches 0 and doesn't get stuck for some reason.
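The counter idea might look like this in miniature. A real deployment would use an atomic store (e.g. Redis's DECR command) so concurrent workers don't race; this in-memory class only models the logic:

```php
<?php
// Completion counter for the fan-out described above: seed a counter with
// the number of partner jobs, have each worker decrement it when its job
// finishes, and treat 0 as "request complete". In production, back this
// with an atomic operation such as Redis DECR instead of a PHP array.

class CompletionCounter
{
    private array $counters = [];

    public function start(string $requestId, int $jobs): void
    {
        $this->counters[$requestId] = $jobs;
    }

    // Returns true when the last job for this request has finished.
    public function jobDone(string $requestId): bool
    {
        return --$this->counters[$requestId] === 0;
    }
}
```

Whichever worker observes the counter hit 0 can then notify the front end (or flip a "ready" flag the user's request is waiting on).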
That's one way at least, hope that helps you a little or opens the door for more ideas.

Strategies for rarely updated data

Background:
Two minutes before every hour, the server blocks access to the site, returning a busy screen while it processes the data received in the previous hour. This can take less than two minutes, in which case it sleeps until the two minutes are up; if it takes longer than two minutes, it runs as long as it needs to and then returns. The block flag is stored in its own table, with one field and one value in that field.
Currently the user only learns of the block when he or she tries to perform an action (clicking a link, submitting a form, etc.). I was planning to update the code to bring up a lightbox with the blocking message automatically, via the BlockUI jQuery plugin.
There are basically 2 methods I can see to achieve my aim:
Polling every N seconds (via PeriodicalUpdater or similar)
Long polling (Comet)
You can reduce the server load for option 1 by checking the local time and starting the polling loop only when it gets close to the blocking time. This can be made more accurate by sending the local time to the server and returning the difference mod 60. It still leaves 100+ people querying the server, which causes an additional hit on the DB.
Option 2 is the more attractive choice. It removes the repeated hits on the web server but doesn't relieve the repeated checks on the DB. However, option 2 is not really an option for Apache 2.0 users like us, and even though we own our server, none of us are web admins and we don't want to break it. People pay real money to play, so if it isn't broken, don't fix it (hence why we are still running PHP 4/MySQL 3).
Because of the problems with option 2, we are back with option 1, which is sub-optimal.
So my question is really two-fold:
Are there any other possibilities I've missed?
Is long polling really such a problem at this size? I understand it doesn't scale, but I am more concerned about the level at which it starves Apache of threads. Also, are there any options you can adjust in Apache so it scales slightly further?
Can you just send the page how much time is left before the server starts processing the data received in the previous hour? Let's say that when sending the HTML you record that the server will start processing in 1 minute; then create a JS timer that triggers after that minute and shows the lightbox.
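A sketch of that countdown, assuming the block window opens at minute 58 of every hour as described: compute the remaining seconds server-side and embed the value in the page for a JS timer to consume.

```php
<?php
// Given the current server time, compute how many seconds remain until
// the next block window, which opens 2 minutes before every hour
// (i.e. at minute 58). Embed this number in the rendered page and let a
// client-side timer show the lightbox when it elapses.

function secondsUntilBlock(int $nowTs): int
{
    $secondsIntoHour = $nowTs % 3600;
    $blockStart = 58 * 60;   // minute 58 of the hour
    if ($secondsIntoHour < $blockStart) {
        return $blockStart - $secondsIntoHour;
    }
    // Already at or past minute 58: count to the next hour's window.
    return 3600 - $secondsIntoHour + $blockStart;
}

// e.g. echo '<script>var blockIn = ' . secondsUntilBlock(time()) . ';</script>';
```

This avoids any polling at all for the common case; a small clock-skew margin (a few seconds early) makes it robust against drifting client clocks.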
The alternative I see is to get the processing done faster, so there is less downtime from the user's perspective. To do that I would use a distributed system, such as Hadoop, for the actual data processing behind the hourly update, and then use whichever method is most appropriate for that short downtime to update the page.