PHP: game loop (threads or the like)

I am writing PHP code to be a game client. It uses sockets: socket_create() followed by socket_connect() and then socket_read(). It works fine, but the issue is that the server can send a packet at any time, which means socket_read() needs to be happening constantly in a "game loop". So something like this:
<?php
$reply = "";
do {
    $recv = socket_read($socket, 1400);
    if ($recv != "") {
        $reply .= $recv;
    }
} while ($recv != "");
echo $reply;
?>
This doesn't work because it gets stuck in the loop (the server doesn't terminate the connection until the game is quit by the client), and the PHP code needs to handle the packets as they come in.
So PHP doesn't really have threading. What's the best way of handling this?

Basically any software platform is going to butt up against this problem. Most, as you've figured out, solve it with threading. Threading IS possible in PHP, but it requires MAJOR HAXXX, such as launching a command-line PHP process from within PHP.
It really doesn't end up being ideal.
However, there are other ways to get around this.
But you need to be able to tick ALL of the boxes on this list first:
[] - My game doesn't need to constantly keep checking the server, such as for player locations or complex movements. Anything beyond a chat-room level of data transfer and update rates should leave this box un-checked.
[] - My game doesn't need to be told BY THE SERVER anything. It is perfectly acceptable for the client to ask for anything it needs, perhaps once a second or better off once a minute.
[] - My game doesn't need to keep a constant simulation of a complex world running on the server for longer than it takes to complete a request. Tracking chat is one thing, doing physics and graphics modifications is another.
If you checked all of these boxes, then PHP is STILL IN THE GAME! Otherwise, don't bother.
Basically, what I am saying here is that PHP is great for games that aren't really multiplayer, and that are turn-based or at least not very interactive. But once you have to keep things going without the player, PHP falls on its face.
VOODOO LEVEL
But if you simply MUST do this, there ARE ways to get around it.
A - Create a PHP daemon that runs your world, and pipe all other traffic to either a getter or setter request file that interacts with the database. So you might request the current game-world state, or set a value for an action the player performed. Everything else game-world related is handled by the daemon, and the game itself takes place in the database (a minimal sketch follows this list).
B - Use cron, not a Daemon. (dangerous, but we already established you as a risk taker, right?)
C - TRY a daemon alone, listening on sockets and then spawning workers (via exec()) to respond. Kind of like AndreKR's idea above, only you don't need to sleep. The problem here is that you will almost always end up missing stuff or otherwise getting cut off, and the whole thing might explode if the daemon gets run twice somehow.
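To make option A more concrete, here is a minimal sketch of what such a daemon could look like, run from the CLI. The database credentials, table names and one-second tick rate are assumptions purely for illustration; the web-facing getter/setter scripts would just read and write the same tables.
<?php
// Minimal sketch of the option-A daemon, run from the CLI (e.g. php world_daemon.php).
// The credentials, table names and one-second tick rate are assumptions for illustration.
set_time_limit(0);

$db = new PDO('mysql:host=localhost;dbname=game', 'user', 'pass');

while (true) {
    // Pull actions that the web-facing "setter" scripts queued in the database.
    $actions = $db->query('SELECT id, player_id, action FROM pending_actions')
                  ->fetchAll(PDO::FETCH_ASSOC);

    foreach ($actions as $action) {
        // Apply the action to the game-world state stored in the database...
        // ...then remove it from the queue.
        $db->prepare('DELETE FROM pending_actions WHERE id = ?')
           ->execute([$action['id']]);
    }

    // Advance the simulation one tick and write the new world state back here.
    // Sleep so the daemon doesn't spin at 100% CPU.
    sleep(1);
}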

If you really want to do this, you have to sleep for some time, check the socket, sleep again, check the socket...
To check the socket without blocking you need to use non-blocking I/O, which you can achieve with socket_set_nonblock(), or with socket_recv(), which accepts the MSG_DONTWAIT flag.
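For illustration, a minimal sketch of that sleep/check cycle using socket_set_nonblock(); the buffer size, sleep interval and the handle_packet() helper are assumptions:
<?php
// Sketch: check the socket without blocking, do one pass of game work, sleep, repeat.
// Assumes $socket is already connected; handle_packet() is a hypothetical handler.
socket_set_nonblock($socket);
$running = true;

while ($running) {
    $recv = @socket_read($socket, 1400); // returns false immediately if nothing is waiting
    if ($recv !== false && $recv !== '') {
        handle_packet($recv);
    }

    // ...run one iteration of the game loop here...

    usleep(50000); // sleep 50 ms so the loop doesn't burn CPU
}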

It can be done, but I agree with @Andrey and @DampeS8N, it's not the best choice. If you are dead set on doing this, check out this book: You want to do WHAT with PHP?

TCP implementations tend to fragment and join messages; there's no telling how much data or how many message fragments a socket receive will return. You need to know where a message ends and a new one begins (which may happen multiple times in data returned by a single read). Some simple solutions:
Use some kind of delimiter. End each message with '\0'.
Send the message size along with the message. Start each message with "Content-length: 42\n" or two size bytes (0x00 0x42).
Use XML. <message> starts and </message> ends a message.
PHP's XML parser doesn't like incomplete XML, though, so the third option is out unless you want to match the start and end tags manually. Use the first option if the protocol is ASCII-based, the second if it's binary, and the third if it's already XML.
Now, remember you can get any number of messages per packet. In the most complex case, you might have the end of an earlier message followed by a number of full messages and the beginning of yet another message in a single packet.
A full solution would be along these lines:
while (connected) {
    while (messages in buffer < 1) {
        read from socket;
        add to buffer;
    }
    while (messages in buffer > 0) {
        extract message from buffer;
        process message;
    }
}
...though this is an asynchronous message loop. I'll leave the "if there's a message available, return it; else, wait for one" synchronous implementation as an exercise. (Hint: You'll need a class to build and buffer messages.)
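For illustration, here is a rough sketch of option 1 (a "\0" delimiter) in PHP, assuming $socket is a connected, blocking socket; process_message() is a hypothetical handler:
<?php
// Sketch of delimiter-based framing (option 1): buffer incoming bytes and split
// complete messages on "\0". Assumes $socket is connected and blocking;
// process_message() is a hypothetical handler.
$buffer = '';

while (true) {
    $chunk = socket_read($socket, 1400);
    if ($chunk === false || $chunk === '') {
        break; // error or connection closed
    }
    $buffer .= $chunk;

    // A single read may contain zero, one or several complete messages.
    while (($pos = strpos($buffer, "\0")) !== false) {
        $message = substr($buffer, 0, $pos);
        $buffer  = substr($buffer, $pos + 1); // keep any partial message for the next read
        process_message($message);
    }
}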

PHP has no multithreading, so you should really consider using a more suitable language (like Andrey mentioned in his comment).

All you have to do is use the socket_select() function:
http://php.net/manual/en/function.socket-select.php
It will put your script to sleep and wake it up when there is data on the socket to be read. It's waaay more efficient than periodic sleep/read, cron scripts and all the other proposed solutions.
@aib made a valid point: the server might send a complete "game message" divided into several packets. Don't expect to get all your data in a single execution of the code block after socket_select() returns.
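For illustration, a rough sketch of what a socket_select()-based client loop could look like; the 5-second timeout and buffer handling are assumptions:
<?php
// Sketch of a socket_select()-based loop. Assumes $socket is already connected.
$buffer  = '';
$running = true;

while ($running) {
    $read   = [$socket];
    $write  = null;
    $except = null;

    // Sleeps until the socket is readable or 5 seconds have passed.
    $ready = socket_select($read, $write, $except, 5);

    if ($ready === false) {
        break; // select error
    }
    if ($ready > 0) {
        $chunk = socket_read($socket, 1400);
        if ($chunk === false || $chunk === '') {
            break; // connection closed
        }
        // This may only be part of a "game message": buffer it and extract
        // complete messages as described in the answer above.
        $buffer .= $chunk;
    }
    // $ready === 0 means the timeout elapsed; do any periodic work here.
}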

Instead of writing this smelly blocking polling loop, check out some event system based around the reactor pattern like Python Twisted or Ruby EventMachine.
I believe the PHP flavor is called PHP-MIO: http://thethoughtlab.blogspot.com/2007/04/non-blocking-io-with-php-mio.html


php: flush data and end client connection

I have a PHP script (in a normal LAMP environment) that runs a couple of housekeeping tasks at the end of the script.
I use flush() to push all the data to the client, which works fine (the page is fully loaded), but the browser still waits for data (indicated by the "loading" animation), which is confusing for the user but of course understandable, because Apache cannot know whether PHP will generate more output after flush() - in my case it never does, however.
Is there a way to tell the client that the output is finished and the http-connection should be closed immediately even though the script keeps running?
It sounds like you have a long-running script performing various tasks. In particular, it appears the script goes on doing things after it has sent the reply to the client. This is a design that opens up a whole lot of potential problems. You should rethink your architecture.
Keep housekeeping tasks and client communication strictly separate. For example, you could have a client request processed and trigger internal sub-requests (which you can detach from), or delegate tasks to a cron-like system. Then offer a second view to the client which visualizes the progress and result of those tasks. This approach is much safer, more flexible and easier to extend when required. And your problem at hand is solved, too :-)
You can use the special function fastcgi_finish_request() to finish the request and flush all data to the client while continuing to do something time-consuming (video converting, stats processing, etc.). Note that it requires PHP-FPM (http://php.net/manual/en/install.fpm.php). For example:
<?php
echo "You can see this from the browser immediately.<br>";
fastcgi_finish_request();
sleep(10);
echo "You can't see this form the browser.";
?>

How does Long Polling or Comet Work with PHP?

I am making a notification system for my website. I want logged-in users to be notified immediately when a new notification is made. As many people say, there are only a few ways of doing so.
One is writing some JavaScript code that asks the server "Are there any new notifications?" at a given time interval. That's called "polling" (I should be right).
Another is "long polling", or "Comet". As Wikipedia says, long polling is similar to polling, except that instead of the client asking every time for new notifications, the server sends them directly to the client as soon as they are available.
So how can I use long polling with PHP? (I don't need the full source code, just a way of doing it.)
What's its architecture/design, really?
The basic idea of long-polling is that you send a request which is then NOT responded to or terminated by the server until some desired condition is met. I.e. the server side doesn't "finish" serving the request by sending the response. You can achieve this by keeping the execution in a loop on the server side.
Imagine that in each iteration of the loop you do a database query, or whatever is necessary to find out whether the condition you need is now true. Only when it IS do you break the loop and send the response to the client. When the client receives the response, it immediately re-sends the "long-polling" request so it won't miss the next "notification".
A simplified example of the server-side PHP code for this could be:
// Set the loop to run 28 times, sleeping 2 seconds between each iteration.
for ($i = 1; $i < 29; $i++) {
    // Find out if the condition is satisfied.
    // If YES, break the loop and send the response.
    sleep(2);
}
// If nothing happened (the condition wasn't satisfied) during the 28 iterations,
// respond with a special response indicating no results. This helps avoid
// hitting 'max_execution_time'. Still, the client should re-send the
// long-polling request even in this case.
You can use (or study) some existing implementations, like Ratchet. There are a few others.
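To make the skeleton above a bit more concrete, here is a hedged sketch of a long-polling endpoint; fetch_new_notifications() is a hypothetical database lookup, and the 28 x 2 s timing is kept from the example above:
<?php
// Hedged sketch of a long-polling endpoint. fetch_new_notifications() is a
// hypothetical database lookup; the 28 x 2 s timing mirrors the example above.
session_start();
$userId = $_SESSION['user_id'];
session_write_close(); // release the session lock so other requests aren't blocked

header('Content-Type: application/json');

for ($i = 1; $i < 29; $i++) {
    $notifications = fetch_new_notifications($userId);
    if (!empty($notifications)) {
        echo json_encode(['status' => 'new', 'notifications' => $notifications]);
        exit;
    }
    sleep(2);
}

// Nothing happened within ~56 seconds: tell the client to simply re-poll.
echo json_encode(['status' => 'empty']);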
Essentially, you need to avoid having Apache or the web server handle the request. Just like you would with a node.js server, you can start PHP from the command line and use the server socket functions to create a server, using socket_select() to handle communications.
It could technically work through the web server by keeping a loop active. However, the memory overhead of keeping one PHP process alive per HTTP connection is typically too high. Creating your own server allows you to share memory between connections.
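As a rough illustration of that command-line approach, a minimal standalone server using socket_select() could look something like this; the port, buffer size and "ok" reply are placeholder choices:
<?php
// Rough sketch of a standalone server run from the CLI (php server.php).
// The port, buffer size and "ok" reply are placeholder choices.
$server = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_set_option($server, SOL_SOCKET, SO_REUSEADDR, 1);
socket_bind($server, '0.0.0.0', 8081);
socket_listen($server);

$clients = [];

while (true) {
    $read   = array_merge([$server], $clients);
    $write  = null;
    $except = null;

    socket_select($read, $write, $except, null); // sleep until something happens

    foreach ($read as $sock) {
        if ($sock === $server) {
            $clients[] = socket_accept($server); // new connection
        } else {
            $data = socket_read($sock, 2048);
            if ($data === false || $data === '') {
                // Client disconnected; forget about it.
                socket_close($sock);
                $key = array_search($sock, $clients, true);
                unset($clients[$key]);
            } else {
                // Handle the request here, or hold the socket and write to it
                // later when a notification becomes available.
                socket_write($sock, "ok\n");
            }
        }
    }
}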
I used long polling for a chat application recently. After doing some research and playing with it for a while, here are some things I would recommend.
1) Don't long poll for more than about 20 seconds. Some browsers will time out. I normally set my long poll to run about 20 seconds and send back an empty response at that point. Then you can use JavaScript to restart the long poll.
2) Every once in a while a browser will hang up. To add a second level of error checking, I have a JavaScript timer run for 30 seconds, and if no response has come in by then I abandon the AJAX call and start it up again.
3) If you are using PHP, make sure you use session_write_close().
4) If you are using AJAX with jQuery, you may need to use abort().
You can find your answer here. More detail here. And you should remember to use $.ajaxSetup({ cache: false }); when working with jQuery.

PHP Background Process on BSD uses 100% CPU

I have a PHP script that runs as a background process. This script simply uses fopen to read from the Twitter Streaming API - essentially an HTTP connection that never ends. I can't post the script, unfortunately, because it is proprietary. The script runs normally on Ubuntu and uses very little CPU. However, on BSD the script always uses nearly 100% CPU. The script works just fine on both machines and is the exact same script. Can anyone think of something that might point me in the right direction to fix this? This is the first PHP script I have written to run continuously in the background.
The script is an infinite loop; it reads the data out and writes to a JSON file every minute. The script writes to a MySQL database whenever a reconnect happens, which is usually after days of running. The script does nothing else and is not very long. I have little experience with BSD or with writing PHP scripts that run in infinite loops. Thanks in advance for any suggestions, and let me know if this belongs on another Stack Exchange site. I will try to answer any questions as quickly as possible, because I realize the question is very vague.
Without seeing the script it is very difficult to give you a definitive answer; however, what you need to do is ensure that your script is waiting for data appropriately. What you absolutely, definitely should not do is call stream_set_timeout($fp, 0); or stream_set_blocking($fp, 0); on your file pointer.
The basic structure of a script to do something like this that should avoid racing would be something like this:
// Open the file pointer and set blocking mode
$fp = fopen('http://www.domain.tld/somepage.file', 'r');
stream_set_timeout($fp, 1);
stream_set_blocking($fp, 1);

while (!feof($fp)) { // This should loop until the server closes the connection
    // This line should be pretty much the first line in the loop.
    // It will try to fetch a line from $fp, blocking for 1 second
    // or until one is available. This should help avoid racing.
    // You can also use fread() in the same way if necessary.
    if (($str = fgets($fp)) === FALSE) continue;

    // rest of app logic goes here
}
You can use sleep()/usleep() to avoid racing as well, but the better approach is to rely on a blocking function call to do your blocking. If it works on one OS but not on another, try setting the blocking modes/behaviour explicitly, as above.
If you can't get this to work with a call to fopen() passing an HTTP URL, it may be a problem with the HTTP wrapper implementation in PHP. To work around this, you could use fsockopen() and handle the request yourself. This is not too difficult, especially if you only need to send a single request and read a constant stream response.
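For illustration, a rough sketch of doing the HTTP request by hand over fsockopen(); the host, path and timeouts are placeholders, and a real client would also have to deal with things like chunked transfer encoding:
<?php
// Rough sketch: issue the HTTP request by hand over fsockopen() and read the
// response body as a stream. Host, path and timeouts are placeholders.
$fp = fsockopen('stream.example.com', 80, $errno, $errstr, 30);
if ($fp === false) {
    die("Connection failed: $errstr ($errno)\n");
}

stream_set_blocking($fp, true);
stream_set_timeout($fp, 60); // give up on a single read after 60 s of silence

fwrite($fp, "GET /stream HTTP/1.1\r\n");
fwrite($fp, "Host: stream.example.com\r\n");
fwrite($fp, "Connection: keep-alive\r\n\r\n");

// Skip the response headers.
while (($line = fgets($fp)) !== false && trim($line) !== '') {
    // header line; ignore it
}

// Read the streaming body line by line.
while (!feof($fp)) {
    $line = fgets($fp);
    if ($line === false) {
        continue; // the read timed out; loop and try again
    }
    // process $line here
}
fclose($fp);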
It sounds to me like one of your functions is blocking briefly on Linux, but not BSD. Without seeing your script it is hard to get specific, but one thing I would suggest is to add a usleep() before the next loop iteration:
usleep(100000); //Sleep for 100ms
You don't need a long sleep... just enough so that you're not using 100% CPU.
Edit: Since you mentioned you don't have a good way to run this in the background right now, I suggest checking out this tutorial for "daemonizing" your script. Included is some handy code for doing this. It can even make a file in init.d for you.
What does the code that does the actual reading look like? Do you just hammer the socket until you get something?
One really effective way to deal with this is to use the libevent extension, but that's not for the feeble-minded.

break up recursive function in php

What is the best way to break up a recursive function that is using a ton of resources?
For example:
function do_a_lot(){
    // a lot of code and processing is done here
    // it takes a lot of execution time
    if ($true) {
        // if true, we have to do all of that processing again
        do_a_lot();
    }
}
Is there anyway to make the server only have to take the brunt of the first execution and then break up the recursion into separate processes? Or am I dreaming?
Honestly, if your function is using up that much of your system's resources, I'd most likely refactor my code. It's not truly multithreading, but you could perhaps look at using popen() to fork off a separate process.
One of the rules of PHP is "share nothing". That means every PHP process is independent and shares nothing with the others. So if you want to break your execution across several PHP processes, you'll have to store the data somewhere. It can be memcached storage, or a database, or the session, as you want.
Then you'll need to 'fork' your PHP process. There are solutions available to get this done on the server side. IMHO these are all hacks - dangerous and not in the spirit of the PHP/web way - with the exception of 'work queue' tools.
I think the nicest way is to break up your task with AJAX. This will give you a clean user interface and avoid any long response timeouts in the web process. I.e. show a 'working zone' to your user, then ask via AJAX for the first step of the job, get the response (storing the result on the server side), then ask for the next step, store the new response, and so on. You can even add a 'stop that stuff' button on the client side (a rough sketch follows below).
You can also search for 'php work queue' on Google.
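As a rough sketch of that AJAX step-by-step idea, each request could perform one chunk of the work and keep its progress in the session; do_one_chunk_of_work() and the step count are made-up placeholders:
<?php
// Hedged sketch of the AJAX approach: the client calls this script repeatedly
// and each call performs one chunk of the work. do_one_chunk_of_work() and the
// step count are made-up placeholders.
session_start();

if (!isset($_SESSION['job_step'])) {
    $_SESSION['job_step'] = 0;
}

$totalSteps = 10;
$step       = $_SESSION['job_step'];

header('Content-Type: application/json');

if ($step < $totalSteps) {
    do_one_chunk_of_work($step);      // one piece of what the recursion used to do
    $_SESSION['job_step'] = $step + 1;
    echo json_encode(['done' => false, 'progress' => ($step + 1) / $totalSteps]);
} else {
    unset($_SESSION['job_step']);
    echo json_encode(['done' => true]);
}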
If it's a long-running task, divide and conquer with Gearman.

How to deal with streaming data in PHP?

There is a family of methods (birddog, shadow, and follow) in the Twitter API that opens a (mostly) permanent connection and allows you to follow many users. I've run the sample connection code with cURL in bash, and it works nicely: when a user I specify writes a tweet, I get a stream of XML in my console.
My question is: how can I access data with PHP that isn't returned by a direct function call, but is streamed? This data arrives sporadically and unpredictably, and it's not something I've ever dealt with, nor do I know where to begin looking for answers. Any advice and descriptions of libraries or pitfalls would be appreciated.
fopen and fgets
<?php
$sock = fopen('http://domain.tld/path/to/file', 'r');

while (($data = fgets($sock)) !== false) {
    echo $data;
}

fclose($sock);
This is by no means great (or even good) code but it should provide the functionality you need. You will need to add error handling and data parsing among other things.
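For example, a slightly more defensive variant that adds the error handling and reconnection the answer mentions could look like this; the URL, timeouts and retry delay are placeholders:
<?php
// A slightly more defensive variant: checks the connection, tolerates read
// timeouts and reconnects if the stream drops. The URL, timeouts and retry
// delay are placeholders.
while (true) {
    $sock = fopen('http://domain.tld/path/to/file', 'r');
    if ($sock === false) {
        sleep(5); // could not connect; wait and retry
        continue;
    }
    stream_set_timeout($sock, 60);

    while (!feof($sock)) {
        $data = fgets($sock);
        if ($data === false) {
            continue; // the read timed out; try again
        }
        // parse $data here (e.g. one XML fragment per line)
    }

    fclose($sock); // the server closed the stream; loop around and reconnect
}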
I'm pretty sure that your script will time out after ~30 seconds of listening for data on the stream. Even if it doesn't, once you get a significant server load, the sheer number of open, listening connections will bring the server to its knees.
I would suggest you take a look at an AJAX solution that makes a call to a script that just stores a queue of messages. I'm not sure how the Twitter API works exactly, though, so I'm not sure if you can have a script run when requested to get all the tweets, or if you have to have some sort of daemon append the tweets to a queue that PHP can read and pass back via your AJAX call.
There are libraries for this these days that make things much easier (and handle the tricky bits like reconnections, socket handling, TCP backoff, etc.), e.g.:
http://code.google.com/p/phirehose/
I would suggest looking into using AJAX. I'm not a PHP developer, but I would think that you could wire up an AJAX call to the API and update your web page.
Phirehose is definitely the way to go:
http://code.google.com/p/phirehose/
