Is it possible to perform HTTP push with Apache2+PHP? I've done some Googling around, and the only thing close to what I was looking for was a PECL socket tutorial, which didn't quite tackle what I was after.
My application currently has a basic read GET API, and the client polls it once every 15 seconds. This seems wasteful: a connection held open that only sends data when there is data to send seems like a much better method. My client is written in .NET.
Is this possible at all with these technologies, or will I have to try Java/Comet, for which I just don't have the resources or infrastructure readily available at the moment?
More information on HTTP Server Push:
https://en.wikipedia.org/wiki/Push_technology#HTTP_server_push
When deciding between different technologies to report events from an HTTP server to a client, there are always trade-offs to make: 150 clients polling every 15 seconds, with each poll taking 1 second, will statistically tie up about 10 connections; the same 150 clients with a server-push technology will tie up 150 connections, but with much less CPU.
IMHO long polling has the best balance when used in combination with Apache/PHP, because it allows the server to influence the clients, if these are under your control: if the connection count on the server grows too high, the server can simply complete the longest-running poll and tell that client not to repoll immediately, but to wait a little first.
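To make that concrete, here is a rough sketch of such a long-poll endpoint in PHP. It is only an illustration of the idea, not production code: getPendingEvents() and currentConnectionCount() are hypothetical helpers, and the timeout, check interval, and load threshold are made-up numbers.

```php
<?php
// Long-poll endpoint sketch: hold the request open until there is data,
// and ask the client to back off when the server is under load.
// getPendingEvents() and currentConnectionCount() are hypothetical helpers.

$timeout  = 30;                  // hold the poll open for at most 30 seconds
$interval = 1;                   // seconds between checks for new data
$deadline = time() + $timeout;

header('Content-Type: application/json');

while (time() < $deadline) {
    $events = getPendingEvents();            // hypothetical: fetch new data, if any
    if (!empty($events)) {
        echo json_encode(['events' => $events, 'retry_after' => 0]);
        exit;
    }
    if (currentConnectionCount() > 100) {    // hypothetical load check
        // End the poll early and tell the client to wait before repolling.
        echo json_encode(['events' => [], 'retry_after' => 15]);
        exit;
    }
    sleep($interval);
}

// Nothing happened within the timeout: respond so the client can repoll right away.
echo json_encode(['events' => [], 'retry_after' => 0]);
```

The retry_after field is how the server "influences" the client: a well-behaved client reads it and waits that long before opening its next poll.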
We have built the backend of a mobile app in Laravel and MySQL. The application is hosted on AWS EC2 and uses an RDS MySQL database.
We are stress testing the app using JMeter. When we send up to 1000 API requests from JMeter, it seems to work fine. However, when we send more than (roughly) 1000 requests in parallel, JMeter starts receiving Internal Server Error (500) responses for many requests, and the percentage of 500 errors grows as we increase the number of requests.
Normally, we would expect that if we increase the number of requests, they would be queued and responses would slow down once the server runs out of resources. We also monitored the resources on the server, and they never reached even 50% of the available capacity.
Is there any timeout setting, or any other setting, that I could tweak so that we don't get Internal Server Errors before reaching 80% resource usage?
Regards
Syed
500 is the externally visible symptom of some sort of failure in the server delivering your API. You should look at the error log of that server to see details of the failure.
If you are using PHP scripts to deliver the API, your MySQL (RDS) server may be running out of connections. Here's how that might happen.
A PHP-driven web server under heavy load runs a lot of PHP instances. Each PHP instance opens one or more connections to the MySQL server. When the product of PHP instances and connections per instance grows too large, the MySQL server starts refusing new connections.
Here's what you need to do: restrict the number of PHP instances your web server is allowed to run at a time. When you restrict that number, incoming requests will queue up (in the TCP connect queue of your OS's network stack), and each item in the queue will be served as soon as an instance becomes available.
Apache has a MaxRequestWorkers parameter, whose default value of 256 is extremely large for this kind of workload. Try setting it much lower, for example to 32, and see whether your problem changes.
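For reference, a minimal tuning block might look roughly like the following; the numbers are placeholders to experiment with, not recommendations, and they assume the prefork MPM (the usual case when mod_php is loaded).

```apache
# Illustrative values only; tune for your own hardware and workload.
# Apache 2.4, prefork MPM (typical when mod_php is in use).
<IfModule mpm_prefork_module>
    StartServers             4
    MinSpareServers          4
    MaxSpareServers          8
    MaxRequestWorkers       32
    MaxConnectionsPerChild   0
</IfModule>
```

If PHP runs through PHP-FPM instead of mod_php, the equivalent limit lives in FPM's pm.max_children rather than in Apache's worker count.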
If you can shrink the number of request workers, you paradoxically may improve high-load performance. Serializing many requests often generates better throughput than trying to do many of them in parallel.
The same goes for the number of active connections to your MySQL server. It obviously depends on the nature of the queries you use, but generally speaking fewer concurrent queries improves performance. So, you won't solve a real-world problem by adding MySQL connections.
You should be aware that the kind of load imposed by server-hammering tools like JMeter is not representative of real-world load. 1000 simultaneous JMeter operations without failure is a very good result. If your load-testing setup is robust and powerful enough, you will always be able to bring your server system to its knees, so deciding when to stop is an important part of a load-testing plan. If this were my system, I would stop at 1000 for now.
For your app to be robust in the field, you probably should program it to respond to 500 status by waiting a random amount of time and trying again.
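As a sketch of that retry pattern (your real client is the mobile app, so this PHP/cURL version is purely illustrative; the URL, attempt limit, and delays are placeholders):

```php
<?php
// Illustrative retry-with-random-backoff pattern for handling 500 responses.
// In practice the mobile app would implement the same idea in its own language.

function callApiWithRetry(string $url, int $maxAttempts = 5)
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $body   = curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        if ($status !== 500) {
            return $body;                         // success (or a non-retryable error)
        }
        // Wait a random, growing amount of time before trying again.
        usleep(random_int(100000, 500000) * $attempt);
    }
    return null;                                  // gave up after $maxAttempts tries
}
```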
I will start developing a multiplayer card game in a few weeks, and until then I am doing research on the best techniques available. I will be using PHP+MySQL on the server side and JS/HTML5 on the client side. The game will also be playable from mobile browsers.
The gameplay of my game involves 4 or more users playing at a table, each having up to 30 seconds to take an action. There will be multiple tables available, and a user can play at more than one table at the same time.
From my research it would seem easy to keep the client in sync with the server by having the server push data through SSE each time another player takes an action. Then, when the current player takes an action, the request data would be sent from client to server through XMLHttpRequest and synced to the other players through SSE.
Now, from what I read, there is a big downside to this: for each player listening to the SSE stream, a connection stays open on the server, consuming a lot of memory.
The application I will build is intended, in time, to support over 10,000 players on a dedicated server with modern hardware (8-core CPU, 64 GB RAM). Some may say PHP is not a good fit for this, but the debate here is how to use PHP to make it work.
The alternative to the SSE implementation would be to use WebSockets (I am currently checking out Ratchet and managed to set up the chat server from the docs), but I have some dilemmas that prevent me from making a decision:
WebSockets: when establishing the handshake, a connection stays open on the server too, just as with SSE, but I guess this is a different type of connection that consumes less memory. Is that right? If so, WebSockets would be more efficient.
Any estimates on memory usage for both SSE and WebSockets? I read somewhere that each connection through SSE would consume around 20 MB with Apache and PHP. That would be 200 GB for 10,000 users (far too much); I hope I am wrong. I don't know how much memory an idle socket connection would consume.
Will SSE consume more battery (on mobile phones) and cause more web traffic than a socket connection?
SSE: if a user is playing at two tables, do I need to open two SSE streams (one for each table), or how do I tell the SSE process whether I am requesting data for table1 or table2? Or am I forced to receive data for all tables at once? It wouldn't be a problem to receive data for all the tables the user is currently active at, but I am curious whether there are ways of customising this.
How about dealing with people who have an unstable internet connection? Do WebSockets support automatic reconnection, or does this need to be done manually by the client?
1) Do both WebSockets and SSE keep the connection open?
Yes. Both WebSockets and SSE keep the connection open until it is closed by either the client or the server, or until a server-side timeout kicks in: servers are often configured by default to close a connection that has been idle for longer than the time specified in the configuration.
2) If both keep the connection open, do both have the same memory consumption and network overhead? Which one is lighter and faster?
First, both seem to use similar resources to establish a connection: each starts with a handshake, heartbeat pings are used to check that the connection is still alive, and the rest is a normal single TCP connection.
But there is an important point to note: WebSockets are bi-directional, while SSE is uni-directional. So even if an SSE connection is alive, sending data from the client to the server requires some kind of XHR, which creates another connection to the server, and creating connections is resource-intensive. So WebSockets make better use of resources when the traffic is bi-directional, as in the OP's case of a multiplayer card game, and that also makes WebSockets faster and lighter than SSE. SSE is a good fit when you only have to send data from the server to clients, such as stock-market prices, game scores, etc.
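For context, a bare-bones SSE endpoint in PHP looks roughly like this; fetchLatestScores() is a hypothetical helper, and the event name and polling interval are arbitrary:

```php
<?php
// Minimal SSE endpoint sketch (server -> client only).
// fetchLatestScores() is a hypothetical helper returning new data or null.

header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
header('Connection: keep-alive');

while (!connection_aborted()) {
    $scores = fetchLatestScores();      // hypothetical data source
    if ($scores !== null) {
        echo "event: scores\n";
        echo 'data: ' . json_encode($scores) . "\n\n";
    } else {
        echo ": keep-alive\n\n";        // SSE comment line, keeps the connection warm
    }
    @ob_flush();
    flush();
    sleep(2);
}
```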
3) If your preferred choice is WebSockets, do we need to worry about URL rewriting, response headers, etc.?
I don't think so, because when WebSockets are used with PHP it is most of the time a separate CLI-based PHP script, with the client connecting to something like ws://example.com:8092/websocket.php. So I don't think there is much to worry about regarding URL rewriting or response headers. And as far as I know, the PHP script itself can add or modify headers if needed anyway.
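For example, a CLI WebSocket server built with Ratchet (which the question mentions) has roughly this shape; the class name and port are illustrative, and the broadcast logic is deliberately minimal:

```php
<?php
// server.php -- run from the CLI: php server.php
// Roughly the shape of Ratchet's chat example; port and class name are illustrative.
require __DIR__ . '/vendor/autoload.php';

use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;
use Ratchet\Http\HttpServer;
use Ratchet\Server\IoServer;
use Ratchet\WebSocket\WsServer;

class GameServer implements MessageComponentInterface
{
    protected $clients;

    public function __construct()
    {
        $this->clients = new \SplObjectStorage();
    }

    public function onOpen(ConnectionInterface $conn)
    {
        $this->clients->attach($conn);          // remember every connected player
    }

    public function onMessage(ConnectionInterface $from, $msg)
    {
        foreach ($this->clients as $client) {   // broadcast to everyone else
            if ($client !== $from) {
                $client->send($msg);
            }
        }
    }

    public function onClose(ConnectionInterface $conn)
    {
        $this->clients->detach($conn);
    }

    public function onError(ConnectionInterface $conn, \Exception $e)
    {
        $conn->close();
    }
}

IoServer::factory(new HttpServer(new WsServer(new GameServer())), 8092)->run();
```

You start it with php server.php and point clients at ws://example.com:8092; Apache is not involved in serving the WebSocket traffic.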
4) Will SSE consume more battery (on mobile phones) and cause more web traffic than a socket connection?
I don't think SSE will consume more battery compared to WebSockets. But yes, it will definitely create more traffic to the server because, as mentioned above, SSE is uni-directional, and every piece of data sent from the client to the server will create another connection.
5) How about dealing with people who have an unstable internet connection? Do WebSockets support automatic reconnection, or does this need to be done manually by the client?
Well, there are techniques for handling reconnection in such conditions. The browser WebSocket API does not reconnect automatically, so the client has to implement its own reconnect logic, whereas an SSE EventSource will retry the connection by default.
From what I understand, after the maximum number of connections is exhausted the server will stop responding, which is likely to happen on a live server with long-polling requests on the index page. Will this also happen with a LiteSpeed server? From what I understand it is superior to vanilla Apache and does not spawn a thread per request.
Sadly, node.js isn't really an option; I am trying to pull the newest results for users in near real time, once every 1-2 minutes.
It will happen on every server when you implement strict long polling.
Please look at WebSockets, as they are the best way of handling thousands of users simultaneously (from a performance point of view).
If you cannot use WebSockets, try to implement reconnection as a simple interval on which you abort the current connection and establish a new one.
You should look for a realtime web server, in your technology of choice, which supports WebSockets as the best-of-breed connectivity option and falls back to HTTP-based solutions for older browsers and when tricky networks interfere with connections.
I'd recommend taking a look at the realtime web tech guide (which I maintain - and accept contributions to).
So a friend and I are building a web-based AJAX chat application with a jQuery and PHP core. Up to now, we've been using the standard procedure of calling the server every two seconds or so looking for updates. However, I've come to dislike this method, as it isn't fast, nor is it "cost effective": there are tons of requests going back and forth to the server even when no data is returned.
One of our project supporters recommended we look into a technique known as Comet, or more specifically, long polling. However, after reading about it in various articles and blog posts, I've found that it isn't all that practical when used with Apache. It seems that most people just say "it isn't a good idea" but don't give specifics on how many requests Apache can handle at one time.
The whole purpose of PureChat is to provide people with a chat that looks great, is fast, and works on most servers. As such, I'm assuming that about 96% of our users will be using Apache rather than Lighttpd or Nginx, which are supposedly better suited for long polling.
Getting to the Point:
In your opinion, is it better to keep using setInterval and repeatedly request new data, or is it better to go with long polling, despite the fact that most users will be on Apache? Also, is it possible to get a more specific rundown of approximately how many people can be using the chat before an Apache server rolls over and dies?
As Andrew stated, a socket connection is the ultimate solution for asynchronous communication with a server, although only the most cutting-edge browsers support WebSockets at this point. socket.io is an open-source API you can use which will initiate a WebSocket connection if the browser supports it, but will fall back to a Flash alternative if it does not. This is transparent to the coder using the API, however.
Socket connections basically keep open communication between the browser and the server so that each can send messages to each other at any time. The socket server daemon would keep a list of connected subscribers, and when it receives a message from one of the subscribers, it can immediately send this message back out to all of the subscribers.
For socket connections however, you need a socket server daemon running full time on your server. While this can be done with command line PHP (no Apache needed), it is better suited for something like node.js, a non-blocking server-side JavaScript api.
node.js would also be better for what you are talking about, long polling. Basically, node.js is event-driven and single-threaded. This means you can keep many connections open without having to spawn as many threads, which would eat up tons of memory (Apache's problem). This allows for high availability. What you have to keep in mind, however, is that even if you were using a non-blocking web server like Nginx, PHP still makes many blocking network calls. Since it runs on a single thread, each MySQL call, for instance, would basically halt the server until the response for that call is returned; nothing else would get done while this is happening, making your non-blocking server useless. If, however, you used a non-blocking language like JavaScript (node.js) for your network calls, this would not be an issue: instead of waiting for a response from MySQL, it would register a handler function to process the response whenever it becomes available, allowing the server to handle other requests in the meantime.
For long polling, you would basically send a request and the server would wait up to 50 seconds before responding. It will respond sooner than 50 seconds if it has anything to report; otherwise it waits. If there is nothing to report after 50 seconds, it sends a response anyway so that the browser does not time out. The response triggers the browser to send another request, and the process starts over again. This allows for fewer requests and snappier responses, but again, it is not as good as a socket connection.
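A rough PHP sketch of that scheme for the chat case, assuming a hypothetical getMessagesSince($lastId) lookup against your chat storage (the timeout and check interval are illustrative):

```php
<?php
// Long-poll handler following the scheme described above.
// getMessagesSince($lastId) is a hypothetical lookup against the chat storage.

$lastId   = (int) ($_GET['last_id'] ?? 0);
$deadline = time() + 50;                     // hold the request open up to 50 seconds

header('Content-Type: application/json');

while (time() < $deadline) {
    $messages = getMessagesSince($lastId);   // hypothetical: new messages since last seen
    if (!empty($messages)) {
        echo json_encode(['messages' => $messages]);
        exit;                                // respond as soon as there is something to report
    }
    usleep(500000);                          // check twice per second
}

echo json_encode(['messages' => []]);        // empty response so the browser does not time out
```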
First of all, I am completely new to a lot of this, so I welcome any input, including suggestions, existing projects, existing models, etc.
My current problems are:
The background service, written in C++ or Python, maintains a queue of tasks.
When a client clicks the "Create Task" button in the browser, the information is sent to the web server, and the web server script (written in PHP) initiates an RPC call to the background service to append the task to its internal queue.
The client browser then initiates an AJAX request to wait for the completion of the task. The AJAX request is held open until the task is completed (or fails) or the client cancels the request.
Thus, I need a low-cost way to get the progress of a task running in the background service process.
I can think of two ways:
The background service can inform the server-side AJAX script about the progress proactively. This is low cost, but I actually do not know how to do it. Does any RPC framework provide such an asynchronous callback? Currently, the RPC framework I have decided to use is Thrift, because of its multi-language support.
The AJAX script on the server side makes an RPC call to get the current progress every few seconds and sleeps in between. Upon completion, the AJAX script returns; otherwise it just lets the client browser wait by not returning. This is actually simpler, but I am not sure about its cost. Note that delay isn't an issue for me here, because I suppose the clients are okay with waiting a few more seconds.
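A rough sketch of that second option, where getTaskProgress($taskId) stands in for whatever Thrift client call you end up generating (the timeout and poll interval are arbitrary):

```php
<?php
// Sketch of option 2: the AJAX endpoint polls the background service over RPC
// and holds the HTTP request open until the task finishes or a deadline passes.
// getTaskProgress($taskId) is a hypothetical wrapper around your Thrift client.

$taskId   = $_GET['task_id'] ?? '';
$deadline = time() + 60;                     // give up after 60 seconds

header('Content-Type: application/json');

while (time() < $deadline) {
    $progress = getTaskProgress($taskId);    // hypothetical RPC call
    if ($progress['state'] === 'done' || $progress['state'] === 'failed') {
        echo json_encode($progress);
        exit;
    }
    sleep(2);                                // poll the service every couple of seconds
}

echo json_encode(['state' => 'pending']);    // still running; the client may re-request
```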
Is there any common way/model to deal with this problem?
Thanks for the help.
It depends on how you code it. The common way is to make a JavaScript AJAX request every 1-3 seconds or so and poll the progress from the server.
This closes the connection between polls and is more gentle on the server. If you use a persistent connection (WebSockets also fall into this category), you keep the server busy. Besides, a "sleep" keeps a server worker occupied, which is something I would try to avoid if I were you. On the other hand, if you've got the resources for that...
I can only repeat myself: it depends on how you code it and what you expect of it in the end.
If you want the client to do some more work and treat the server gently, choose your 1st option; if you think your server can handle it, choose the 2nd option, go "persistent", and perhaps even use WebSockets (which are persistent connections to your server; remember that they aren't widely supported by web-browsing clients yet either).
Although I think that in the end, the trade-off of a simple progress value, compared to hogging your server with constant sleeps and persistent connections on top of that, will make you choose your 1st option: poll the server script for the progress value every x seconds from the client side. Btw., it's what Twitter does, and their servers have survived until today! ;)
I think you can use WebSockets for that.
You can use WebSockets.
Establish a WebSocket connection between the client and a web service that has access to the information you need to pass to the client.
With WebSockets, you don't need to poll the server asking for progress; instead, the server notifies the client whenever the data is ready.
A backwards compatible implementation would be long polling.
Cheers