I'm trying out long polling for the first time.
In the PHP script I have a while loop with a sleep timer that pauses the script for 10 seconds, then checks the database for new data again.
I'm thinking performance and server/database load/connections:
What is worse for the server: many GET requests (Ajax), or repeatedly opening and closing the DB connection?
Would it be any better to use long polling but close and re-open the DB connection in each round of the while loop (to free up the limited number of connections)?
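For reference, a minimal sketch of the loop being described (the table, columns, timings, and DSN are placeholders, not from the original post):

```php
<?php
// Hedged sketch of a naive long-poll endpoint: hold the request open,
// re-check the database every 10 seconds, give up after ~60 seconds.
set_time_limit(70);
$pdo   = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
$since = (int)($_GET['since'] ?? 0);   // last message id the client has seen

$deadline = time() + 60;
while (time() < $deadline) {
    $stmt = $pdo->prepare('SELECT id, body FROM messages WHERE id > ?');
    $stmt->execute([$since]);
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
    if ($rows) {
        header('Content-Type: application/json');
        echo json_encode($rows);
        exit;
    }
    sleep(10);   // the "freeze" the question refers to
}

// Nothing new: return an empty result so the client can re-poll.
header('Content-Type: application/json');
echo json_encode([]);
```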
This is less trivial than it sounds: what starts as a simple "should I or shouldn't I" question rapidly increases in complexity as you scale to more servers.
Having hit walls with both approaches, we came up with a proxy scheme that seems to work well even on cheap shared hosting:
Run a single instance of a simple proxy script that polls the DB (on shared hosting we start this from a cron job that only starts an instance if no other is running, so we easily survive reboots).
Have that proxy script translate the expensive DB poll into a cheaper poll: SysV SHM and flag files in the filesystem both work fine. The proxy should keep its single DB connection open.
Have your potentially many long-pollers check the proxied flag, as in the sketch below.
This has made it possible to use a short server-side polling interval without running into problems when concurrent polls increase.
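A minimal sketch of the flag-file variant of that cheap poll (the path, timings, and JSON shape are illustrative; SysV SHM would work the same way conceptually):

```php
<?php
// Hedged sketch of a long-poller that never touches the DB: the proxy script
// touches /tmp/new-data.flag whenever it sees new rows, and each poller only
// stats that file.
$flag     = '/tmp/new-data.flag';
$lastSeen = (int)($_GET['last_seen'] ?? 0);   // timestamp the client saw last

$deadline = time() + 30;
while (time() < $deadline) {
    clearstatcache(true, $flag);
    $mtime = @filemtime($flag);
    if ($mtime !== false && $mtime > $lastSeen) {
        // Something changed since the client's last poll; tell it to fetch.
        echo json_encode(['changed' => true, 'ts' => $mtime]);
        exit;
    }
    usleep(500000);   // 0.5 s between cheap filesystem checks
}
echo json_encode(['changed' => false, 'ts' => $lastSeen]);
```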
ReactPHP HTTP server for each user: is this a good idea?
In my application:
Each logged-in user sends and receives data from the server, on average one request per second.
After the server responds, it has some extra work to do that is related to the specific user.
I could simply start a new ReactPHP HTTP server for each user who logs in, and release that server after the user logs out.
Will this work? Am I missing something?
No, it's not a good idea. You need a separate port per user in that case to route the user to the right server. That'd quickly exhaust your ports.
If you have blocking tasks within the event loop and want to use multiple processes because of that, just stick to traditional PHP with mod_php or php-fpm and start a new event loop for each process, do your thing and then exit.
If you don't have any blocking operations and everything is non-blocking, you can just use a single server and it handles all the things.
I'm not sure exhausting ports would be the issue; other services, such as WebRTC SFUs, do just this. With 65,535 ports available, you're talking about 30,000+ concurrent TCP connections.
However, with that many users the first obvious problem would be memory. At 10 MB just to start up PHP, that would be 300+ GB of memory without including a single line of code or actually doing anything. If you're working with a seriously trimmed PHP binary you can get that down to 4 or 5 MB, so at 5,000 concurrent users you would still need around 25 GB.
But the real problem is that it would result in thousands of processes, which is impossible to work around. This would be entirely wasteful considering ReactPHP's event loop can handle 10k users within a single process. I'm not saying a single PHP process can do the work for that many users (except maybe the most basic chat), but ReactPHP can handle the I/O. Throwing them all into their own processes, though, would be a nightmare.
The basic idea has been tried in other languages by giving each user their own thread, but even in C/C++ this quickly proves to be a bad design.
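For reference, a minimal sketch of the single-server approach recommended above, assuming react/http (^1.x) and react/socket are installed via Composer; the port and response body are placeholders:

```php
<?php
// One HTTP server on one event loop handles every user concurrently,
// as long as the request handler never blocks.
require __DIR__ . '/vendor/autoload.php';

use Psr\Http\Message\ServerRequestInterface;
use React\Http\HttpServer;
use React\Http\Message\Response;
use React\Socket\SocketServer;

$http = new HttpServer(function (ServerRequestInterface $request) {
    // Per-user work goes here; anything blocking should be deferred or queued.
    return new Response(200, ['Content-Type' => 'text/plain'], "Handled by one shared event loop\n");
});

$socket = new SocketServer('0.0.0.0:8080');
$http->listen($socket);
// The event loop starts automatically when the script reaches the end
// (react/event-loop ^1.2 auto-runs).
```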
I need to write a socket server that will handle at least around 1,000 (much more in the future) low-traffic permanent connections. I made a draft version in PHP for testing purposes (we are developing monitoring hardware, so we needed to develop and test a conversation protocol and the hardware capabilities), which suited me very well when I had just a couple of clients connected. But when the number of connections grew to ten, some critical issues appeared. Here is some info about the server architecture:
I have a master process that waits for socket connections and, on connecting, creates a child process (which serves this connection from then on) using pcntl_fork(). I also set up a PDO connection to MySQL in the master process. All the child processes share the same single PDO object. At first I was afraid of collisions during simultaneous queries, but I haven't encountered any, even in a stress test (10 children making queries in a loop without stopping). But there is a usleep(500000) in each child, so it could be luck, though I had this test running for a couple of hours. Such a load should not occur even with 1k clients connected, due to the rare conversations between them and the server.
So here is my first question: is it safe to use a single PDO object for a large number of child processes (ideally around 1,000)? I could use a separate connection for each child, but MySQL doesn't support nearly that many connections.
The second issue is stray MySQL connections. As I mentioned before, I have only one PDO object. But when I have more than one client connected, and after they have run some queries, I see in mytop that there is more than one DB connection, and I could not find any correlation between the number of connections and the number of child processes I have. For example, I have 3 children and 5 DB connections. I tried establishing persistent connections, and it didn't change anything.
Second question: is it PDO that makes those additional connections to MySQL, or is it the MySQL driver? And is there a way to force them to use one connection? I don't think it can be my fault: my code prints an alert to the console every time I call the method that creates the PDO object, and that happens only once, at script start, before forking. After that I only run queries from the children, using the parent's PDO object. Once again, I cannot afford so many connections due to MySQL limitations.
Third question: will a thousand socket connections be a problem by themselves, aside from the CPU and database load? Or should I run several smaller servers (128 connections each, for example) that tell clients to connect to another one if the maximum number of connections is exceeded?
Thanks in advance for your time and possible answers.
Currently your primary concern should be your socket server architecture. Forking a process for each client is extremely heavy. AFAIK an average PC can tolerate around 2,000 threads, and it's not going to work fast. Switching between processes means the CPU has to save its state to memory, and if you have an enormous number of processes, the CPU will be busy with memory I/O and have little time left for actually doing work.
You may want to look at Apache for inspiration. Apache uses a fixed number of worker processes/threads, each process/thread serving multiple clients via the select function and sockets in non-blocking mode, as in the sketch below. This is a far more robust approach.
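A hedged sketch of that select-based pattern in plain PHP (the port and the echo logic are placeholders; a real server would plug in the actual protocol handling):

```php
<?php
// One process serves many clients with stream_select() instead of forking
// a child per connection.
$server = stream_socket_server('tcp://0.0.0.0:8000', $errno, $errstr);
if ($server === false) {
    die("Could not start server: $errstr ($errno)\n");
}
stream_set_blocking($server, false);

$clients = [];

while (true) {
    $read   = $clients;
    $read[] = $server;
    $write  = null;
    $except = null;

    // Wait until at least one socket is readable (1 s timeout).
    $ready = stream_select($read, $write, $except, 1);
    if ($ready === false || $ready === 0) {
        continue;
    }

    foreach ($read as $socket) {
        if ($socket === $server) {
            // New connection: accept and track it.
            $conn = stream_socket_accept($server);
            if ($conn !== false) {
                stream_set_blocking($conn, false);
                $clients[(int)$conn] = $conn;
            }
            continue;
        }

        $data = fread($socket, 8192);
        if ($data === '' || $data === false) {
            // Client disconnected.
            unset($clients[(int)$socket]);
            fclose($socket);
            continue;
        }
        // Echo back as a stand-in for the real protocol handling.
        fwrite($socket, $data);
    }
}
```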
Regarding database I/O, I would spawn a process/thread that is the sole owner of database connections. Worker processes would communicate with the DB I/O process using IPC (in the case of processes) or lock-free queues (in the case of threads). This approach makes you independent of PDO implementation details (whether it is thread-safe, whether it spawns extra connections, etc.); see the sketch after this answer.
P.S. I suspect that you actually end up with separate PDO objects after forking (forking merely means making a copy of a process, with its memory and everything inside it), and that PDO creates and shuts down connections on demand. That may explain why you're not seeing a correlation between the number of child processes and DB connections.
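A rough sketch of the "single DB owner" idea using SysV message queues (sysvmsg extension) as the IPC channel. The queue keys, DSN, and request format are made up for illustration; the two functions would be called from the forked master and children respectively:

```php
<?php
// Workers never touch PDO; they send query requests to the one process that
// owns the database connection and wait for the rows to come back.
$requestQueue  = msg_get_queue(0xBEEF01);   // workers -> DB owner
$responseQueue = msg_get_queue(0xBEEF02);   // DB owner -> workers

// DB owner process: the only process holding a PDO connection.
function runDbOwner($requestQueue, $responseQueue): void
{
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
    while (true) {
        // Block until a worker sends a query request.
        msg_receive($requestQueue, 0, $msgType, 65536, $request);
        $stmt = $pdo->prepare($request['sql']);
        $stmt->execute($request['params']);
        // Reply on the worker's own message type so it can pick up its result.
        msg_send($responseQueue, $request['worker_id'], $stmt->fetchAll(PDO::FETCH_ASSOC));
    }
}

// Worker process: sends requests over IPC and waits for its own reply.
function queryViaOwner($requestQueue, $responseQueue, int $workerId, string $sql, array $params = []): array
{
    msg_send($requestQueue, 1, ['worker_id' => $workerId, 'sql' => $sql, 'params' => $params]);
    msg_receive($responseQueue, $workerId, $msgType, 65536, $rows);
    return $rows;
}
```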
I'm going to be using Node.js to process some CPU-intensive loop operations (sending emails to registered users), because PHP was using too much CPU while it ran and was freezing the site.
One thing to note is that Node.js will be on a different server and will use an external connection to MySQL.
I've heard that an external DB connection is bad for performance.
Is this true? And are there any pros and cons of doing this?
Keep in mind that when you run a CPU-intensive operation in Node, the whole application blocks, since it runs in a single thread. If you're going to run a CPU-intensive operation in Node, make sure you spawn it off into a child process whose only job is to run the calculation and then return the result to the primary application. This ensures your Node app can continue responding to incoming requests while the data is being processed.
Now, on to your question. Having the database on a different server is extremely common and typically good practice. Where you can run into performance problems is if your database is in a different data center entirely: the further (physically) your database server is from your application server, the more latency there will be per request.
If these requests are seriously CPU-intensive, you should consider looking into a queueing mechanism, for a couple of reasons. One, it ensures that even in the event of an application crash you don't lose a request that is being processed. Two, you can monitor the queue and scale the number of workers processing it if operations pile up to the point that a single application can't finish one request before another comes in.
I am trying to write a client-server app.
Basically, there is a Master program that needs to maintain a MySQL database that keeps track of the processing done on the server side,
and a Slave program that queries the database to see what to do to stay in sync with the Master. There can be many slaves running at the same time.
All the programs must be able to run from anywhere in the world.
For now, I have tried setting up a MySQL database on a shared hosting server to host the DB,
and made C++ programs for the master and slave that use the cURL library to make requests to a PHP file (e.g. www.myserver.com/check.php) located on my hosting server.
The master program calls the URL every second, and some PHP code is executed to keep the database up to date. I did a test with a single slave program that also calls the URL every second and executes PHP code that queries the database.
With that setup, however, my web host suspended my account and told me that I was 'using too much CPU resources' and that I would need a dedicated server ($200 per month rather than $10), based on their analysis of the CPU resources needed. And that was with one Master and only one Slave, so no more than 5-6 MySQL queries per second. What would it be with 10 slaves, then?
Am I missing something?
Would there be a better setup than what I was planning, in order to achieve the syncing mechanism I need between two or more far-apart programs?
I would use Google App Engine for storing the data. You can read about free quotas and pricing here.
I think the syncing approach you are taking is probably fine.
The more significant question you need to ask yourself is: what is the maximum acceptable time between syncs? If you truly need virtually realtime syncing between two databases on opposite sides of the world, you will be using significant bandwidth, and you will unfortunately have to pay for it, as your host pointed out.
Figure out what is acceptable to you in terms of time. Is it okay for the databases to sync only once a minute? Once every 5 minutes?
Also, when running syncs like this in rapid succession, it is important to make sure they do not overlap: before a sync starts, test whether a sync is already in progress and has not finished yet. If a sync is still running, don't start another; if none is running, start one, as in the sketch below. This prevents a lot of unnecessary overhead and syncs piling on top of each other.
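One simple way to implement that check on the PHP side, sketched here with an exclusive non-blocking file lock (the lock path and the runSync() helper are illustrative placeholders):

```php
<?php
// A new sync run exits immediately if a previous one is still in progress.
$lock = fopen('/tmp/sync.lock', 'c');
if ($lock === false || !flock($lock, LOCK_EX | LOCK_NB)) {
    // Another sync is still running; skip this round instead of piling up.
    exit(0);
}

function runSync(): void
{
    // Placeholder for the actual sync work done by check.php.
}

try {
    runSync();
} finally {
    flock($lock, LOCK_UN);
    fclose($lock);
}
```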
Are you using a shared web host? What you are doing sounds like excessive use for a shared (cPanel-type) host; use a VPS instead. You can get an unmanaged VPS with 512 MB for 10-20 USD per month depending on spec.
Edit: if your bottleneck is CPU rather than bandwidth, have you tried bundling up updates inside a transaction? Let's say you are getting 10 updates per second, and you decide you are happy with a propagation delay of 2 seconds. Rather than opening a connection and a transaction for each of those 20 statements, bundle them together into a single transaction that executes every two seconds, as sketched below. That would substantially reduce your CPU usage.
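A rough sketch of that batching idea with PDO (the table, the collectUpdate() stub, and the 2-second window are placeholders, not part of the original setup):

```php
<?php
// Collect updates for ~2 seconds, then apply them in one transaction over a
// single connection instead of one connection/transaction per statement.
$pdo = new PDO('mysql:host=localhost;dbname=sync', 'user', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

function collectUpdate(): ?array
{
    // Placeholder: in the real script this would read from wherever updates
    // arrive (a socket, a queue table, ...). Returns [status, id] or null.
    return null;
}

function flushUpdates(PDO $pdo, array $updates): void
{
    if ($updates === []) {
        return;
    }
    $stmt = $pdo->prepare('UPDATE jobs SET status = ? WHERE id = ?');
    $pdo->beginTransaction();
    foreach ($updates as [$status, $id]) {
        $stmt->execute([$status, $id]);
    }
    $pdo->commit();
}

$pending   = [];                 // updates accumulated since the last flush
$lastFlush = microtime(true);

while (true) {
    if (($update = collectUpdate()) !== null) {
        $pending[] = $update;
    }
    if (microtime(true) - $lastFlush >= 2.0) {
        flushUpdates($pdo, $pending);
        $pending   = [];
        $lastFlush = microtime(true);
    }
    usleep(50000);   // 50 ms
}
```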
I'm trying to separate my LAMP application onto two servers, one for PHP and one for MySQL. So far the application connects locally through a Unix socket and works fine.
I'm worried about the number of connections I can establish if it goes over the network. I have been benchmarking TCP connections on Unix, and I know that you cannot exceed a certain number of connections per second, otherwise things halt due to lack of resources (be it sockets, file handles, or whatever). I also understand that PHP does not implement connection pooling, so for each page load a new connection over the network must be made. I also looked into pconnect for PHP, and it seems to bring more problems.
I know this is a very common setup (PHP + MySQL); can anyone provide some typical usage and the statistics they get out of their servers? Thanks!
The problem is not related to running out of the connections allowed by MySQL. The main problem is that Unix cannot create and tear down TCP connections very quickly. Sockets end up in TIME_WAIT, and you have to wait for a period before more sockets are freed up to connect again. These two screenshots clearly show the pattern: MySQL works up to a certain point and then pauses because the web server ran out of sockets; after a certain amount of time has passed, the web server is able to make new connections again.
Screenshot 1: http://img35.imageshack.us/img35/3809/picture4k.png
Screenshot 2: http://img35.imageshack.us/img35/4580/picture2uyw.png
I think the limit is 65,535, so you'd have to have 65,535 connections open at the same time to hit it, since a regular MySQL connection closes automatically.
From the mysql_connect() documentation:
Note: The link to the server will be closed as soon as the execution of the script ends, unless it's closed earlier by explicitly calling mysql_close().
But if you're using a persistent mysql connection, then you can run into trouble.
Using persistent connections can require a bit of tuning of your Apache and MySQL configurations to ensure that you do not exceed the number of connections allowed by MySQL.
Each MySQL connection actually uses several megabytes of RAM for various buffers and takes a while to set up, which is why MySQL is limited to 100 concurrent open connections by default. You can raise that limit, but it's better to spend your time trying to limit concurrent connections via various methods.
Beware of raising the connection limit too high, as you can run out of memory (which, I believe, crashes MySQL), or you may push important things out of memory. For example, MySQL's performance depends heavily on the OS automatically caching the data it reads from disk in memory; if you set your connection limit too high, you'll be contending with that cache for memory.
If you don't raise your connection limit, you'll run out of connections long before you run out of sockets/file handles/etc. If you do increase your connection limit, you'll run out of RAM long before you run out of sockets/file handles/etc.
Regarding limiting concurrent connections:
Use a connection pooling solution. You're right, there isn't one built into PHP, but there are plenty of standalone ones to choose from. This saves expensive connection setup/teardown time.
Only open database connections when you absolutely need them. In my current project, we automatically open a database connection when the first query is issued, and not a moment before; we also release the connection after we've done all our database work, but before the page's HTML is actually generated. The shorter the period of time you hold connections open, the fewer connections will be open simultaneously (see the sketch after this list).
Cache what you can in a lighter-weight solution like memcached. My current project temporarily caches the pages displayed to anonymous users (since every anonymous user gets the same HTML in the end, why bother running the same database queries all over again a few scant milliseconds later?), meaning no database connection is necessary at all. This is especially useful for bursts of anonymous traffic, like a front-page Digg.
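A minimal sketch of the lazy-open/early-release pattern from the second point above (the class name, DSN, and query are illustrative, not the project's actual code):

```php
<?php
// Open the PDO connection only when the first query is issued, and drop it
// as soon as the database work is done, before rendering any HTML.
class LazyDb
{
    private ?PDO $pdo = null;

    public function __construct(private string $dsn,
                                private string $user,
                                private string $pass) {}

    public function query(string $sql, array $params = []): array
    {
        // Connect on first use only.
        if ($this->pdo === null) {
            $this->pdo = new PDO($this->dsn, $this->user, $this->pass, [
                PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
            ]);
        }
        $stmt = $this->pdo->prepare($sql);
        $stmt->execute($params);
        return $stmt->fetchAll(PDO::FETCH_ASSOC);
    }

    public function release(): void
    {
        // Dropping the reference closes the connection.
        $this->pdo = null;
    }
}

// Usage: do all DB work, then release before generating the page.
$db   = new LazyDb('mysql:host=localhost;dbname=app', 'user', 'secret');
$rows = $db->query('SELECT id, title FROM posts WHERE author_id = ?', [42]);
$db->release();
// ... render the page from $rows here, with no connection held open.
```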