I'm using the phpseclib library (http://phpseclib.sourceforge.net/ssh/intro.html).
My script communicates with a remote server via bidirectional xml stream.
It uses the read() function of the library to read another chunk of data every 30s. In between, my script does something else + sleep()
Now, is it possible that my script misses some data because it was "sleeping" while the data arrived? How else might it miss data incoming via the stream?
If you are referring to sleep() on the PHP (client) side, then it is a question of whether the SSH client is running in your thread or in its own thread.
If it's your thread, then yes, it can miss data. If it's on its own thread, it won't; the data will be waiting for you when you come back.
NOTE: Doing what you are trying to do will be very unstable. Some SSH servers will disconnect you after a certain amount of idle time, and a connection that doesn't send or receive data is likely to get terminated anyway.
If you're polling on the client side every 30 seconds, it's possible the server times out sooner than that if no packets are read or sent.
What'd be really helpful is the command you're running, the output you're expecting and the output you're getting back. That'll make diagnosing your issues easier.
Related
I'm trying to find a solution to my problem with sending data to a client with PHP. The biggest issue is that I'm trying to keep sending data inside a single connection from a PHP script. I'm sure there are other ways, but currently I don't know how to solve this.
What I'm trying to do is: a client connects to a web server and keeps the connection open, so the TCP connection is "established". It will keep making, for example, GET requests every X seconds to keep this connection alive.
Inside this connection, on a certain event, I want to send the client some data without him making a request! That means the event is triggered on the server side, not on the client side.
There is no possibility of using JavaScript or any other client-side technique, as my client is an Arduino module. I can keep the connection open, but I need to pass data to the client over HTTP.
I have a database set up on the server and PHP will send data to the client when something changes inside the database.
I was trying to play with PHP's flush() running in a loop in the script, but that doesn't do it the way I want.
So any advice is appreciated.
Thank you.
edit: It would be perfect if the solution also worked on a Windows machine!
edit2: There will be multiple clients, not just one (e.g. hundreds).
As long as you don't have lots of clients, Server-Sent Events sounds like it could work for you.
http://dsheiko.com/weblog/html5-and-server-sent-events
I just read that you will have hundreds of clients; in that case you probably won't want to use PHP, but node.js instead.
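For what it's worth, the SSE wire format is simple enough to emit from plain PHP. Below is a minimal, illustrative sketch; the helper names and payload are assumptions for the example, not part of any library:

```php
<?php
// Minimal sketch of the Server-Sent Events wire format in PHP.
// format_sse_event() is an illustrative helper, not a built-in.

function format_sse_event(string $data, ?string $event = null): string {
    $out = ($event !== null) ? "event: $event\n" : '';
    foreach (explode("\n", $data) as $line) {
        $out .= "data: $line\n";   // one data: field per payload line
    }
    return $out . "\n";            // a blank line terminates the message
}

// In a real endpoint you would send these headers and loop forever:
//   header('Content-Type: text/event-stream');
//   header('Cache-Control: no-cache');
//   while (true) { echo format_sse_event(...); flush(); sleep(1); }

echo format_sse_event("row 42 changed", "db-update");
```

On the client side, a browser's EventSource would consume this directly; for an Arduino you would parse the `data:` lines yourself.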
How about CRON jobs?
http://en.wikipedia.org/wiki/Cron
http://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/
I think that might be the solution for your project. If I understand cron correctly, what it does is execute a given script at given intervals. So that is basically what you want: scripts executing every X seconds. And inside your script you have your function working with the database.
I think what you are looking for is IPC (Inter-Process Communication). In your case I would suggest a message queue (or multiple queues).
(On the client)
1) Open a connection to foo.php.
2) When receiving a new line, process it.
3) If the connection times out, re-open it.

(On the server - foo.php)
1) Open a message queue (you will have to register a message queue for each user!).
2) Register it so that your bar.php knows about it.
3) Start a blocking receive.
4) When a message is received, send whatever you want to send, FLUSH OUTPUT BUFFERS, and go back to 3.
5) If anything times out, go back to 3.
(On the server - bar.php)
1) When the database changes, send a message to all active queues.
There are a few problems with this approach:
The server side only really works on Linux / Unix (that includes Macs)
There is a limited number of message queues
You may have to do some housekeeping, removing old queues, etc.
The benefits:
This is application independent. Message queues are an operating system feature, so your bar.php could really be, say, a Java application.
Ok, so I think I found the way I want it to work. The problem with flush() was that I didn't reach the flush buffer limit before flushing. Also, I'm using an Nginx server, and I disabled gzip (just in case).
My test code which absolutely works looks like this:
<?php
ob_implicit_flush(1);
for ($i = 0; $i < 10; $i++) {
    echo $i;
    // pad the output so the buffer reaches the minimum size needed to flush
    echo str_repeat(' ', 1024 * 64);
    sleep(1);
}
?>
Found my answer here: PHP Flush that works... even in Nginx
I will test it with my Arduinos if it can accept such output. Thanks all for your help.
I was wondering: I wrote a snippet of code that could update up to 10,000 rows and might take a few seconds to complete. If the file is accessed via an Ajax request, the POST query is sent to the PHP file, and the browser is then closed, does the file get fully executed? Assume it takes about 25 seconds to complete the request; the user might not wait that long. Is it good enough to "ping" this file and let the user browse along, or close the browser window, while the MySQL queries are taking place?
The request has 3 parts:
A browser connected to the web server
PHP script that is executed by the server
A query running in the DB server
When you close the browser, the connection with the server is closed. The server may or may not kill the running PHP script (if PHP is running as an Apache module, it will be killed unless ignore_user_abort() is called). The web server may also have a time limit for the request and either kill the script or just send the client a connection timeout message, without killing the script but without giving it the chance to send anything to the browser.
Here is the tricky part: the update is running in the database, and it won't be killed by the web server, nor by PHP.
So what you want to achieve is pinging a PHP script that executes a query, while the client does not wait for the result. You may or may not want the query itself to be asynchronous (the PHP script not waiting for the query), but you have to tell the client that the request is fulfilled, for example by sending a Content-Length of 0 and flushing the output (the HTTP headers, actually), and running PHP with ignore_user_abort so it continues execution.
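Put together, the "acknowledge now, work later" idea could look roughly like this; run_long_update() is a hypothetical stand-in for the real query, and the fastcgi_finish_request() branch applies only under PHP-FPM:

```php
<?php
// Sketch: reply to the ping immediately, then keep the slow update
// running after the client has gone away.
// run_long_update() is a placeholder for your real 25-second query.

function run_long_update(): void {
    // the long UPDATE would run here
}

ignore_user_abort(true);   // keep running if the client disconnects
set_time_limit(0);         // no execution time limit for the long job

// Tell the client the request is fulfilled before the work is done.
header('Content-Length: 0');
header('Connection: close');
flush();

// Under PHP-FPM, fastcgi_finish_request() ends the response cleanly
// while letting the script continue in the background.
if (function_exists('fastcgi_finish_request')) {
    fastcgi_finish_request();
}

run_long_update();   // invisible to the client from here on
```

The client gets its empty 200 response almost immediately, and the update finishes on its own schedule.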
Use ignore_user_abort() to continue running the script even after the client has disconnected:
ignore_user_abort(true);
set_time_limit(0);
You can use connection_status() to track whether the client has disconnected:

if (connection_status() != 0) {
    // the connection was aborted or timed out
}
Here's the answer for your question:
http://www.php.net/manual/en/features.connection-handling.php
Normally no, but your script passes into the ABORTED status.
More details in the manual page about Connection handling:
http://www.php.net/manual/en/features.connection-handling.php
Internally in PHP a connection status is maintained. There are 3 possible states:
0 - NORMAL
1 - ABORTED
2 - TIMEOUT
When a PHP script is running normally, the NORMAL state is active. If the remote client disconnects, the ABORTED state flag is turned on. A remote client disconnect is usually caused by the user hitting his STOP button.
As soon as you close the browser, it disconnects from the server before getting the reply. I do not know exactly how different servers behave in this situation, but I assume most servers will abort the thread that is working on the reply.
Further, things can differ between operations, i.e. file I/O versus database operations. If it is an atomic database operation, my assumption is that it will complete anyhow.
Suppose a page takes a long time to generate, some large report for example, and the user closes the browser, or maybe they press refresh, does the PHP engine stop generating the page from the original request?
And if not, what can one do to cope with users repeatedly refreshing a page that causes an expensive report to be generated?
I have tried this, and it seems that it does not stop a running query on the database. But that could be a database engine issue, not PHP.
Extra info:
IIS7
MS SQL Server via ODBC
When you send a request to the server, it is executed on the server without any communication with the browser until information is sent back to the browser. When PHP tries to send data back to the browser, it will fail, and therefore the script will exit.
However, if you have a lot of code executing before any headers are sent, it will continue to execute until the headers are sent and a failed response is received.
PHP knows when a connection has been closed when it tries to output some data (and fails). echo, print, flush, etc. Aside from this, no, it doesn't; everything else is happening on the server end.
There is little in the way of passing information about the browser state back to the server once a request has been made (or, in your case, is in progress).
To know if a user is still connected to your site, you will need to implement a long poll / comet or perhaps a web socket.
Alternatively, you may want to run the long query via an Ajax call while keeping the main browser window responsive (not white-screened). This lets you detect whether the browser is closed during the long query, using the JavaScript onbeforeunload event to notify your backend that the user has left. (I'm not sure how you would interrupt a query in progress from another HTTP request, though.)
PHP has two functions to control this. set_time_limit(num) increases the limit before a page's execution "dies"; if you don't extend that limit, a page running too long will be killed, which is bad for a long process. You also need ignore_user_abort(TRUE) so the server doesn't close the PHP process when it detects that the page has been closed on the client side.
You may also need to check for memory leaks if you are writing something that uses a lot of memory and runs for several hours.
When you send a request to the server, the server will go away and perform the appropriate actions. IIS/SQL Server does not know whether the browser has been closed (and it is not its responsibility to know), so it will execute the commands as told to by the PHP engine until it has finished or the engine kills the transaction. Since your report could be dynamic, IIS will not cache page requests; SQL Server, however, can cache recently run queries, so you may see some performance gain from the database backend.
I'm using the JAXL library to implement a Jabber chat bot written in PHP, which is then run as a background process using the PHP CLI.
Things work quite well, but I've been having a hard time figuring out how to make the chat bot reconnect upon disconnection!
I notice when I leave it running overnight it sometimes drops off and doesn't come back. I've experimented with $jaxl->connect(), $jaxl->startStream(), and $jaxl->startCore() after the jaxl_post_disconnect hook, but I think I'm missing something.
One solution would be to test your connection:
1) making a "ping" request to your page/controller or whatever
2) setTimeout(functionAjaxPing(), 10000);
3) then read the Ajax response, and if it == "anyStringKey", your connection works fine
4) else: reconnect() / errorMessage() / whatEver()
This is what IRC chat uses, I think.
But this will generate more traffic, since the ping requests will be needed.
Hope this will help you a bit. :)
If you are using Jaxl v3.x all you need is to add a callback for on_disconnect event.
You should also be using XEP-0199 XMPP Ping. What this XEP does is periodically send XMPP pings to the connected jabber server. It will also receive server pings and send back the required pong packet (for instance, if your client is not replying to server pings, jabber.org will drop your connection after some time).
Finally, you MUST also use whitespace pings. A whitespace ping is a single space character sent to the server. This is often enough to make NAT devices consider the connection "alive", and likewise for certain Jabber servers, e.g. Openfire. It may also make the OS detect a lost connection faster: a TCP connection on which no data is sent or received is indistinguishable from a lost connection.
What I ended up doing was creating a crontab that simply executed the PHP script again.
In the PHP script I read a specific file for the PID of the last fork. If it exists, the script attempts to kill it. Then the script uses pcntl_fork() to fork the process (which is useful for daemonizing a PHP script anyway) and captures the new PID to a file. The fork then logs in to Jabber with JAXL as usual.
After talking with the author of JAXL it became apparent this would be the easiest way to go about this, despite being hacky. The author may have worked on this particular flaw in more recent iterations, however.
One flaw of this method is that it requires pcntl_fork(), which is not compiled into PHP by default.
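A rough sketch of that crontab-respawn pattern, assuming the pcntl and posix extensions are available; the pid file path and jaxl_main() are illustrative names, not from JAXL itself:

```php
<?php
// Sketch: kill the previous instance via a pid file, fork, and record
// the new pid, as described in the answer above.

if (!function_exists('pcntl_fork')) {
    exit("pcntl extension not available\n");
}

$pidfile = sys_get_temp_dir() . '/jabber-bot.pid';

// Kill the previous fork, if its pid file is still around.
if (is_file($pidfile)) {
    $oldpid = (int) file_get_contents($pidfile);
    if ($oldpid > 0 && function_exists('posix_kill')) {
        @posix_kill($oldpid, SIGTERM);
    }
}

$pid = pcntl_fork();
if ($pid === -1) {
    exit("fork failed\n");
} elseif ($pid === 0) {
    // Child: this is where the bot would log in to Jabber.
    // jaxl_main();  // hypothetical stand-in for the JAXL connect code
    exit(0);
}

// Parent: record the child's pid for the next cron run.
file_put_contents($pidfile, $pid);
// In the real script the parent would exit here; for this demo we just
// reap the child so the sketch terminates cleanly.
pcntl_waitpid($pid, $status);
```

A crontab entry like `*/5 * * * * php /path/to/bot.php` would then re-run this every five minutes, replacing a dead bot with a fresh fork.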
I was wondering about the lifespan of a PHP script when called via Ajax. Assume that there is a long-running (i.e. 30-second) PHP script on a server and that page is loaded via Ajax. Before the script completes, the user closes the browser. Does the script continue running to completion, is it terminated, or is this a function of the server itself? (I'm running Apache, FWIW.) Thanks for any help.
This may be of interest: ignore_user_abort()
ignore_user_abort — Set whether a client disconnect should abort script execution
However note
PHP will not detect that the user has aborted the connection until an attempt is made to send information to the client.
The script will continue running. Closing the browser on the client does not notify the server to stop processing the request.
If you have a large, time-consuming script, then I would suggest splitting it up into chunks. Much better that way.
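The chunking idea above can be sketched with plain array batching; the row data, batch size, and helper name here are made up for illustration:

```php
<?php
// Sketch: split a long job into bounded batches so each run stays short.
// process_in_chunks() is an illustrative helper, not a library function.

function process_in_chunks(array $rows, int $chunkSize, callable $handler): int {
    $batches = 0;
    foreach (array_chunk($rows, $chunkSize) as $chunk) {
        $handler($chunk);   // e.g. one UPDATE ... WHERE id IN (...) per chunk
        $batches++;
    }
    return $batches;
}

$rows = range(1, 10000);   // stand-in for 10,000 row ids
$batches = process_in_chunks($rows, 500, function (array $chunk) {
    // run one bounded UPDATE per chunk here
});
echo "$batches batches\n";  // 10000 rows / 500 per chunk = 20 batches
```

Each batch can then run inside one short request (or one cron tick), so no single script execution comes anywhere near the 25-second mark.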