Spawning MySQL sleep processes - PHP

I am in the process of making a JavaScript (front end) / PHP (back end) game. The game checks the server for updates every 2 seconds. Only one SQL query is run, and at the end I use $mysqli->close() to close the connection. The columns in the WHERE clause are both indexed.
The problem I am having is that after it has been running for a little while, MySQL starts spawning tons of sleeping processes. Does anyone have any idea what might be causing this?

I'd push updates to your users instead of polling for them. Check out the AJAX Push Engine; that can help a lot. Also, turning on persistent MySQL connections could help.

Do the processes stick around? MySQL likes to keep a few threads open to receive requests. I wouldn't worry about them unless it's leaving a lot of them open and they're taking up resources.
If you're hitting the server every 2 seconds, it's cheaper to keep the pool waiting for a request than to instantiate a connection every time.

Related

PHP Gearman too much mysql connections

I'm using Gearman in a custom Joomla application, with the Gearman UI to track active workers and job counts.
I'm facing an issue with the MySQL load and the number of connections. I'm unable to track down the issue, but I have a few questions that might help me:
1- Do Gearman workers launch a new database connection for each job, or do they share the same connection?
2- If Gearman launches a new connection every time a job runs, how can I change that so that all jobs share the same connection?
3- How can I balance the load between more than one server?
4- Is there something like a "pay-as-you-go" package for MySQL hosting? If yes, please mention them.
Thanks a lot!
This is an often overlooked issue when using any kind of job queue with workers. 100 workers will each open a separate database connection (they are separate PHP processes). If MySQL is configured to allow only 50 connections, workers will start failing. To answer your questions:
1) Each worker runs in its own PHP process, and each process opens one database connection. Workers do not share database connections.
2) If only one worker is processing jobs, then only one database connection will be opened. If you have 50 workers running, expect 50 database connections. Since these are not web requests, persistent connections will not help and sharing is not possible.
3) You can balance the load by adding READ slaves, and using a MySQL proxy to distribute the load.
4) I've never seen a pay-as-you-go MySQL hosting solution. Ask your provider to increase your number of connections. If they won't, it might be time to run your own server.
Also, the gearman server process itself will only use one database connection to maintain the queue (if you have enabled mysql storage).
Strategies you can use to try and make your worker code play nicely with the database:
After each job, terminate the worker and start it up again. Don't open a new database connection until a new job is received. Use supervisor to keep your workers running all the time.
Close database connections after every query. If you see a lot of connections open in a 'sleep' state, this will help clean them up and keep the connection count low. Try $pdo = null; after each query (if you use PDO).
Cache frequently used queries where the result doesn't change, to keep database connections low.
Ensure your tables are properly indexed so queries run as fast as possible.
Ensure database exceptions are caught in a try/catch block. Add retry logic (while loop), where the worker will fail gracefully after say, 10 attempts. Make sure the job is put back on the queue after a failure.
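The retry logic in the last point can be sketched as a small wrapper. The helper name and attempt count here are made up for illustration:

```php
<?php
// Hypothetical retry helper: runs $job, retrying on failure, and
// rethrows after $maxAttempts so the caller can requeue the job.
function runWithRetries(callable $job, int $maxAttempts = 10)
{
    $attempt = 0;
    while (true) {
        try {
            return $job();
        } catch (Exception $e) {
            $attempt++;
            if ($attempt >= $maxAttempts) {
                throw $e; // fail gracefully; requeue the job upstream
            }
            usleep(100000 * $attempt); // back off a little more each time
        }
    }
}
```

The caller catches the final exception, puts the job back on the queue, and moves on.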
I think the most important thing to look at, before anything else, is the MySQL load. It might be that you have some really heavy queries that are causing this mess. Have you checked the MySQL slow query log? If yes, what did you find? Note that any query that takes more than a second to execute is a slow query.

Should I $mysqli->close a connection after each page load, if PHP runs via FCGI?

I run PHP via FCGI - that is my web server spawns several PHP processes and they keep running for like 10,000 requests until they get recycled.
My question is: if I have a $mysqli->connect at the top of my PHP script, do I need to call $mysqli->close() when the script is about to finish?
Since the PHP processes stay alive for a long time, I'd imagine each $mysqli->connect would leak one connection, because the process keeps running and nobody closes the connection.
Am I right in my thinking or not? Should I call $mysqli->close?
When PHP exits it closes the database connections gracefully.
The only reason to use the close method is when you want to terminate a database connection that you'll not use anymore while the script still has a lot of work left to do, such as processing and streaming data. If that remaining work is quick, you can forget about the close statement.
Putting it at the end of a script is redundant; there is no performance or memory gain.
In a bit more detail, specifically about FastCGI:
FastCGI keeps PHP processes running between requests. FastCGI is good at reducing CPU usage by leveraging your server's available RAM to keep PHP scripts in memory instead of having to start up a separate PHP process for each and every request.
FastCGI starts a master process and as many forks of that master process as you have defined, and yes, those forked processes may live for a long time. In effect, the process doesn't have to start up the complete PHP interpreter each time it needs to execute a script. But it's not the case that your scripts are running all the time: there is still a start-up and shutdown phase each time a script is executed. At this point things like the global variables (e.g. $_POST and $_GET) are populated, etc. You can execute functions each time your script shuts down via register_shutdown_function().
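As a minimal sketch of such a shutdown hook (the connection credentials are placeholders):

```php
<?php
// Even under FastCGI, each request gets its own shutdown phase.
// A callback registered here runs when the current script finishes,
// not when the long-lived FastCGI process itself exits.
$mysqli = new mysqli('localhost', 'user', 'password', 'mydb');

register_shutdown_function(function () use ($mysqli) {
    $mysqli->close(); // close the connection as the request ends
});
```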
If you aren't using persistent database connections and aren't closing database connections, nothing bad will happen. As Colin Schoen explained, PHP will eventually close them during shutdown.
Still, I highly encourage you to close your connections because a correctly crafted program knows when the lifetime of an object is over and cleans itself up. It might give you exactly the milli- or nanosecond that you need to deliver something in time.
Simply always create self-contained objects that are also cleaning up after they are finished with whatever they did.
I've never trusted FCGI to close my database connections for me. One habit I learned in a beginners book many years ago is to always explicitly close my database connections.
Is saving sixteen keystrokes worth a possible memory and connection leak? As far as I'm concerned, it's cheap insurance.
If you have long running FastCGI processes, via e.g. php-fpm, you can gain performance by reusing your database connection inside each process and avoiding the cost of opening one.
Since you are most likely opening a connection at some point in your code, you should read up on how to have mysqli open a persistent connection and return it to you on subsequent requests managed by the same process.
http://php.net/manual/en/mysqli.quickstart.connections.php
http://php.net/manual/en/mysqli.persistconns.php
In this case you don't want to close the connection, else you are defeating the purpose of keeping it open. Also, be aware that each PHP process will use a separate connection so your database should allow for at least that number of connections to be opened simultaneously.
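A minimal sketch of a persistent mysqli connection (host and credentials are placeholders):

```php
<?php
// Prefixing the host with "p:" asks mysqli for a persistent
// connection: subsequent requests served by the same PHP process
// reuse the open connection instead of establishing a new one.
$mysqli = new mysqli('p:localhost', 'user', 'password', 'mydb');

$result = $mysqli->query('SELECT 1');

// Deliberately no $mysqli->close() here -- closing would defeat the
// purpose of keeping the connection alive for the next request.
```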
You're right in your way of thinking. It is still important to close connections to prevent leaking them.
You can also lower the number of requests each process handles before it is recycled, forcing connections to be closed more often.
For example: every 2,500 script runs, stop, close the connection, and reopen it.
Also recommended: back up data frequently.
Hope I helped. Phantom

What is killing my PHP process, and leaving so many sleeping mysql connections?

I'm having trouble investigating an issue with many sleeping MySQL connections.
Once every one or two days I notice that all (151) MySQL connections are taken, and all of them seem to be sleeping.
I investigated this, and one of the most reasonable explanations is that the PHP script was just killed, leaving a MySQL connection behind. We log visits at the beginning of the request, and update that log when the request finishes, so we can tell that indeed some requests do start, but don't finish, which indicates that the script was indeed killed somehow.
Now, the worrying thing is, that this only happens for 1 specific user, and only on 1 specific page. The page works for everyone else, and when I log in as this user on the Production environment, and perform the exact same action, everything works fine.
Now, I have two questions:
I'd like to find out why the PHP script is killed. Could this possibly have anything to do with the client? Can a client do 'something' to end the request and kill the PHP script? If so, why don't I see any evidence of that in the Apache logs? Or maybe I don't know what to look for? How do I find out whether the script was indeed killed, and what caused it?
How do I prevent this? Can I somehow limit the number of MySQL connections per PHP session? Or can I somehow detect long-running, sleeping MySQL connections and kill them? It isn't an option for me to set the connection timeout to a shorter time, because there are processes which run considerably longer, and the 151 connections are used up in less than 2 minutes. Increasing the number of connections is no solution either. So, basically: how do I kill connections which have been sleeping for more than, say, 1 minute?
The best solution would be to find out why the requests of one user can eat up all the database connections and basically bring down the whole application, and how to prevent this.
Any help greatly appreciated.
You can decrease the wait_timeout variable of the MySQL server. This specifies the number of seconds MySQL waits for activity on a non-interactive connection before it aborts the connection. The default value is 28800 seconds (8 hours), which seems quite high. You can set this dynamically by executing SET GLOBAL wait_timeout = X; once.
You can still increase it for cronjobs again. Just execute the query SET SESSION wait_timeout = 28800; at the beginning of the cronjob. This only affects the current connection.
Please note that this might cause problems too if you set it too low, although I don't foresee many. Most scripts should finish in less than a second. Setting wait_timeout=5 should therefore cause no harm…
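For reference, the two statements side by side; the value 60 is just an example:

```sql
-- Lower the idle timeout for all new non-interactive connections
-- (the default is 28800 seconds, i.e. 8 hours):
SET GLOBAL wait_timeout = 60;

-- At the start of a long-running cronjob, raise it again for that
-- session only:
SET SESSION wait_timeout = 28800;
```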

Detecting server overload to limit mysql queries

I'm programming a C++ service which makes a SELECT query with LIMIT 1 against a MySQL server every second, computes something, and then performs an INSERT, looping forever and ever.
I'd like to detect server overload so I can make the SELECTs with a bigger LIMIT, for example LIMIT 10, and at greater intervals, like every 5 seconds or so. I'm not sure whether this will actually lighten the load.
My problem is how to detect these overloads, and I'm not even sure what I mean by overload :) It could be anything. My application is a PHP web application (a chat), so overload could be detected on the Apache side, or the MySQL side, or by counting how many users post how many messages in a given interval. I don't know :/
Thank you!
EDIT: Okay, I turned my C++ application into a socket server, and it's really fast that way. Now I'm struggling with memory leaks, but that's another story.
So thank you @brianbeuning for the helpful thoughts about my problem.
First, get rid of that forever-and-ever loop; it's not a good idea.
If the loop really is a must, then use some caching technique.
For detecting "overload" (I would call it high MySQL CPU usage), try calling external commands supported by the operating system.
For example, if you are on Linux, play with the ps command.
EDIT:
I realized now that you are programming a chat server.
Using MySQL as the middleman is NOT a good idea.
Try solving this without MySQL, and then, if you need to save the chat log, save it to MySQL occasionally (e.g. every 10 seconds or so).
I bet it is a CPU hog right now with just 20 active users.
Try to make direct client-to-client communication that doesn't require the server (use the server only to establish the connection between the 2 clients).
Another approach would be to buffer the data in your app and use a connection pool of sorts to manage the load. Keep a rolling buffer of data that needs to be inserted and manage the 'limit' based on the size of the buffer.
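A rolling buffer like that can be sketched as follows. The class name and flush threshold are made up, and it is shown in PHP since the chat front end is PHP:

```php
<?php
// Collect rows in memory and signal the caller to flush them in one
// multi-row INSERT once the buffer reaches its limit, instead of
// hitting the database on every single message.
class InsertBuffer
{
    private array $rows = [];

    public function __construct(private int $limit = 10)
    {
    }

    // Returns true when the buffer is full and should be flushed.
    public function add(array $row): bool
    {
        $this->rows[] = $row;
        return count($this->rows) >= $this->limit;
    }

    // Hand back the buffered rows and reset the buffer.
    public function drain(): array
    {
        $rows = $this->rows;
        $this->rows = [];
        return $rows;
    }
}
```

In practice you would also flush on a timer, so a quiet chat room doesn't hold messages indefinitely.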

Indefinitely running PHP script, MySQL timeout

I have a PHP script that runs indefinitely, performing a specific task every 5-10 seconds (a do-while loop, checks the database at the end of every iteration to determine whether or not it should continue). This task includes MySQL database queries. What is the best way to handle the database connection? Should I:
a.) disconnect and then reconnect to the database every iteration?
b.) set the connection timeout to an indefinite limit?
c.) ping the database to make sure I'm still connected, and reconnect if necessary before executing any queries?
d.) Something else?
EDIT: To clarify, the script sends a push notification to users' iPhones.
The suggestion that you cannot run your PHP script as a daemon is ridiculous. I've done it several times, and it works quite well. There is some example code to get you started. (Requires PEAR... if you're not a fan, roll your own.)
Now, on to your actual question. If you are making regular queries, your MySQL connection will not timeout on you. That timeout is for idle connections. Definitely stay connected... there is no reason for the overhead of disconnecting and reconnecting. In any case, on a database failure, since your script is running as a daemon, you probably don't want to immediately kill the process.
I recommend handling the exception and reconnecting. If your reconnect fails, fall back for a little while longer before trying again. After a handful of failures (whatever is appropriate), you may kill the process at that time, as something is probably broken that requires human intervention.
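A sketch of that reconnect-with-backoff loop; the connection details are placeholders, and mysqli::ping() is used to detect a dead connection:

```php
<?php
// Make sure the daemon still has a live connection before each batch
// of queries; if the server has gone away, reconnect with a growing
// delay, and give up after a handful of failures.
// (On PHP 8.1+, mysqli throws on connect failure by default, so you
// may want a try/catch around the constructor as well.)
function ensureConnected(?mysqli $mysqli): mysqli
{
    $delay = 1;
    $failures = 0;
    while ($mysqli === null || !@$mysqli->ping()) {
        if (++$failures > 10) {
            exit(1); // something is probably broken; let a human intervene
        }
        sleep($delay);
        $delay = min($delay * 2, 60); // exponential backoff, capped
        $mysqli = @new mysqli('localhost', 'user', 'password', 'mydb');
        if ($mysqli->connect_errno) {
            $mysqli = null;
        }
    }
    return $mysqli;
}
```

Call ensureConnected() at the top of each loop iteration; if it returns, you have a live connection.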
