How to keep a shell open across multiple AJAX requests in PHP?

I need to use a shell across multiple ajax requests. Basically this means that the next request should be able to continue with the same shell where the other process left off.
The purpose is to communicate with daemons such as FTP: open an FTP connection, log in, and then in the next request continue with that connection and use it for uploads. But it's not limited to the FTP daemon (I know that FTP is supported natively in PHP).

It depends on the system you want to design, but in general it can be done by maintaining request state in the session or in a database: first check for a login, and if the user is logged in the shell can accept further requests. You may need to define the steps of the communication for which requests are expected.
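As a rough illustration of that idea (endpoint and key names here are hypothetical), each AJAX call could check and advance a per-user step stored in the session:

```php
<?php
// Minimal sketch: track the "step" of a multi-request workflow in the session
// so each AJAX call can resume where the previous one left off.
// Note: only plain state can live here, not the shell/connection itself.
session_start();

if (!isset($_SESSION['shell_step'])) {
    $_SESSION['shell_step'] = 'login';      // first request must authenticate
}

switch ($_SESSION['shell_step']) {
    case 'login':
        // ... validate credentials here ...
        $_SESSION['shell_step'] = 'ready';  // later requests may issue commands
        echo json_encode(['status' => 'logged-in']);
        break;

    case 'ready':
        // ... perform the next command (e.g. an upload) here ...
        echo json_encode(['status' => 'command-accepted']);
        break;
}
```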

Related

How does one achieve realtime messaging between different instances of (perhaps different) PHP scripts?

I'm working on feeding the client realtime events using event-streams and HTML5 SSEs client-side.
But some of my events will actually come from form submissions by other clients.
What's the best method for detecting these form submissions (so as to append them to the event-stream script) ASAP (after they occur)?
So essentially, I need realtime cross-script messaging between multiple instances of different scripts instantiated by different clients, analogous to cross-document messaging in JS, but for PHP.
The best I can come up with is to repeatedly poll a subdir of /tmp for notification files, which is a terrible solution.
Often you can use MySQL to play the role of the tmp directory you were talking about. This is more portable because the scripts don't have to be on the same server, and the data is kept separate. However, the scripts will have to check the MySQL location themselves to see whether the other one has already handled the event. The other option is to open sockets and write back and forth in real time, or to use some prebuilt tool for exactly this purpose, which I'm pretty sure exists.
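A minimal sketch of the MySQL approach (table and column names are assumptions): the form-handling script inserts a row, and the event-stream script fetches anything newer than the last id it forwarded.

```php
<?php
// Sketch: a MySQL table as the shared notification location instead of /tmp files.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Writer side: the form-handling script records an event.
function publishEvent(PDO $pdo, string $payload): void {
    $stmt = $pdo->prepare('INSERT INTO events (payload, created_at) VALUES (?, NOW())');
    $stmt->execute([$payload]);
}

// Reader side: the event-stream script picks up anything it hasn't seen yet.
function fetchEventsSince(PDO $pdo, int $lastId): array {
    $stmt = $pdo->prepare('SELECT id, payload FROM events WHERE id > ? ORDER BY id');
    $stmt->execute([$lastId]);
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}
```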
If you want the events to be triggered in near realtime, then you need to handle them synchronously, which means running a daemon. The simplest way to implement a daemon that can synchronize data across client connections is to use an event-based server; there are PHP implementations of this, and plenty of examples of how to daemonize a PHP process on the internet. Then just open a blocking connection to the server from your PHP code, or access it via Comet.
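To connect the two pieces, the SSE endpoint could block on a socket to such a daemon and forward each line it receives as an event. This is a sketch only; the daemon address and its line-per-event framing are assumptions.

```php
<?php
// Sketch: an SSE endpoint that forwards events from a local daemon to the browser.
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
set_time_limit(0);

// Assumed: a daemon on 127.0.0.1:9000 that writes one event per line.
$fp = stream_socket_client('tcp://127.0.0.1:9000', $errno, $errstr, 30);
if (!$fp) {
    exit("connect failed: $errstr ($errno)");
}

while (($line = fgets($fp)) !== false) {
    echo 'data: ' . rtrim($line) . "\n\n";   // SSE frame
    @ob_flush();
    flush();
}
fclose($fp);
```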

Critical code parts in a PHP application?

Okay, in my head this is somewhat complicated and I hope I can explain it. If anything is unclear please comment, so I can refine the question.
I want to handle user file uploads to a 3rd server.
So we have
the User
the website (the server the website runs on)
the storage server (which receives the file)
The flow should be like:
The website requests an upload URL from the storage cloud's gateway that points directly to the final storage server (something like http://serverXY.mystorage.com/upload.php). Along with the request, a "target path" (website-specific and globally unique) and a redirect URL are sent.
The website generates an upload form with the storage server's upload URL as target; the user selects a file and clicks the submit button. The storage server handles the POST request, saves the file to a temporary location (which is '/tmp-directory/'.sha1(target-path-from-above)) and redirects the user back to the redirect URL that was specified by the website. The "target path" is also passed along.
I do not want any "ghosted files" to remain if the user cancels the process or the connection gets interrupted or something! Entries in the website's database that have not been correctly processed in the storage cloud and are left broken must also be avoided. That's the reason for this and the next step.
these are the critical steps
The website now writes an entry to its own database and issues a RESTful request to the storage API (signed; the website has to authenticate with a secret token) that
copies the file from its temporary location on the storage server to its final location (this should be fast because it's only a rename);
the same REST request also inserts a database row in the storage network's database, with the website's ID as owner.
All files in the tmp directory on the storage server that are older than 24 hours are automatically deleted.
If the user closes the browser window or the connection gets interrupted, the program flow on the server gets aborted too, right?
Only destructors and registered shutdown functions are executed, correct?
Can I somehow make this code part "critical" so that once the server enters it, it executes it to the end regardless of whether the user aborts the page load or not?
(Of course I am aware that a server crash or an error may interrupt at any time, but my concerns are about the regular flow now)
One idea of mine was to have a flag and a timestamp in the website's database that marks the file as "completed", and to check in a cron job for old incomplete files and delete them from the storage cloud and then from the website's database, but I would really like to avoid this extra field and procedure.
I want the storage API to be very generic and to use it in many other future projects.
I had a look at Google Storage for Developers and Amazon S3.
They have the same problem, and even worse: in Amazon S3 you can "sign" your POST request, so the file gets uploaded by the user under your authority, is saved and stored directly, and you have to pay for it.
If the connection gets interrupted and the user never gets back to your website, you don't even know.
So you have to store all the upload URLs you sign, check them in a cron job, and delete everything that hasn't "reached its destination".
Any ideas or best practices for that problem?
If I'm reading this correctly, you're performing the critical operations in the script that is called when the storage service redirects the user back to your website.
I see two options for ensuring that the critical steps are performed in their entirety:
Ensure that PHP is ignoring connection status and is running scripts through to completion using ignore_user_abort().
Trigger some back-end process that performs the critical operations separately from the user-facing scripts. This could be as simple as dropping a job into the at queue if you're using a *NIX server (man at for more details) or as complex as having a dedicated queue management daemon, much like the one LrdCasimir suggested.
The problems like this that I've faced have all had pretty time-consuming processes associated with their operation, so I've always gone with Option 2 to provide prompt responses to the browser, and to free up the web server. Option 1 is easy to implement, but Option 2 is ultimately more fault-tolerant, as updates would stay in the queue until they could be successfully communicated to the storage server.
The connection handling page in the PHP manual provides a lot of good insights into what happens during the HTTP connection.
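A minimal sketch of option 1, under the caveat from the question that a server crash can still interrupt it:

```php
<?php
// Sketch of option 1: keep running even if the client disconnects mid-request.
ignore_user_abort(true);   // don't abort the script when the connection drops
set_time_limit(0);         // remove the execution time limit for this request

// ... critical steps: write the database entry, issue the signed REST request ...

if (connection_aborted()) {
    error_log('client went away, but the critical steps completed anyway');
}
```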
I'm not certain I'd call this a "best practice", but here are a few ideas on a general approach for this kind of problem. One, of course, is to let the REST request to the storage server take place asynchronously, either via a daemonized process that listens for incoming requests (watching a file for changes, a socket, shared memory, a database, whatever you think is best for IPC in your environment) or via a very frequently running cron job that picks up and delivers the files. The benefits of this are that you can deliver a quick message to the user who uploaded the file, while the background process can try, and try again, if there's a connectivity issue with the REST service. You could even go as far as to have some AJAX polling taking place so the user gets a nice JS message displayed when the REST process completes.
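A sketch of that hand-off (table and field names are assumptions): the user-facing script only records a delivery job and answers immediately; a cron worker performs the actual REST call and retries on failure.

```php
<?php
// Sketch: queue the delivery instead of doing it in the user-facing request.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$targetPath = $_GET['target_path'] ?? '';   // passed back by the storage server's redirect
$tmpName    = sha1($targetPath);            // matches the temporary-name scheme above

$stmt = $pdo->prepare(
    'INSERT INTO delivery_queue (target_path, tmp_name, status, created_at)
     VALUES (?, ?, ?, NOW())'
);
$stmt->execute([$targetPath, $tmpName, 'pending']);

echo json_encode(['status' => 'queued']);   // quick response to the browser

// A cron worker (run every minute) would SELECT the pending rows, issue the
// signed REST request, and mark each row done on success so failures are retried.
```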

Persist an FTP connection PHP resource across AJAX calls

I have a multi-user PHP web application that can interact with an FTP server via AJAX. The application allows the user to browse an FTP site. Javascript makes an AJAX call which communicates with a server-script that returns a list of files and directories within a given directory.
This works fine. However, each time a directory listing is requested, the server must re-establish a connection with the FTP server, which takes a lot of time.
I need to persist an FTP connection PHP resource across AJAX calls. In other words, the connection must remain open, and I must be able to run ftp_nlist() using that resource, without re-establishing the connection or re-authenticating, with each new AJAX call (until the connection times out, of course).
Can anyone think of a way to do this?
I don't think it's possible using the FTP library in PHP. I see somebody even filed a feature request for it in PHP, but it doesn't seem any action was taken on it.
The only way I can think of is to use a third-party FTP client that keeps the connection open and interface with it through PHP. (Instead of a third-party FTP client, you could just use the FTP client built into the OS; Windows provides one, as does Linux through the "ftp" program.)
Sorry to add clutter without a clear answer for ya but this might be helpful:
http://www.eecho.info/Echo/php/client-url-library-php-curl/
It appears you are in control of opening and closing the connections. However, in terms of returning this variable to the client and having it re-used, I'm not sure that is possible (it might also just clean itself up outside of your control). Alternatively, you might (depending on the end environment) consider using a Java backend: you could code up a simple server and just add the FTP code on top (mmm... cake). Some examples of what you'd need to do for that are here:
http://fragments.turtlemeat.com/javawebserver.php
http://www.javaworld.com/javaworld/jw-04-2003/jw-0404-ftp.html
This assumes a pretty large amount of control over what's run in the server environment, though, so it really depends on you owning the server, or at least having full privileges to do what you want (like Amazon EC2, from what they advertise at least). You might be able to pull this off with Tomcat or some other JSP container and use JSPs instead of writing your own server, but I don't know that you'd be able to persist the connection there either, since it's sort of the same as PHP, where the server generally interprets the file "on the fly", so to speak.
You cannot create a persistent FTP connection with PHP's normal FTP functions. Are all users accessing the same FTP server, or are you running an FTP web interface? If multiple users are connected to the same server (with the same rights), you could implement a cached solution.
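A sketch of that cached approach (cache location and TTL are assumptions): since the connection itself can't be kept, cache the directory listings so repeated browsing of the same path skips the FTP round trip.

```php
<?php
// Sketch: cache ftp_nlist() results per host/user/path for a few minutes.
function cachedNlist(string $host, string $user, string $pass, string $dir): array {
    $cacheFile = sys_get_temp_dir() . '/ftpcache_' . sha1("$host|$user|$dir");
    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < 300) {
        return unserialize(file_get_contents($cacheFile));   // fresh enough, reuse it
    }

    $conn = ftp_connect($host);
    ftp_login($conn, $user, $pass);
    ftp_pasv($conn, true);
    $list = ftp_nlist($conn, $dir) ?: [];
    ftp_close($conn);

    file_put_contents($cacheFile, serialize($list));
    return $list;
}
```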
I ended up making this work using global variables (e.g. $my_global). I have a ConnectionPooler singleton class which manages connections stored in a hash.
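The answer doesn't show the class, but a hypothetical reconstruction might look like the following. Note that under a standard Apache/FPM setup this hash only lives for the duration of one process, so it only helps where a long-running script (or the same worker) serves the calls.

```php
<?php
// Hypothetical reconstruction of the ConnectionPooler mentioned above:
// FTP handles cached in a hash keyed by host/user, reused within one process.
class ConnectionPooler {
    private static ?ConnectionPooler $instance = null;
    private array $connections = [];

    private function __construct() {}

    public static function getInstance(): ConnectionPooler {
        if (self::$instance === null) {
            self::$instance = new ConnectionPooler();
        }
        return self::$instance;
    }

    public function get(string $host, string $user, string $pass) {
        $key = sha1("$host|$user");
        if (!isset($this->connections[$key])) {
            $conn = ftp_connect($host);
            ftp_login($conn, $user, $pass);
            $this->connections[$key] = $conn;
        }
        return $this->connections[$key];
    }
}
```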

Keeping a connection open for each client in php

I have a C++ backend application listening on a TCP socket, to which I connect from PHP. The problem is that the connection is closed on every refresh, page change, etc. I would like to keep the connection open for each client, doing something like $_SESSION does.
This is not really what PHP (or web-based applications and services in general, for that matter) is meant for. It's also asking for resource problems before long, because large PHP processes would be running simultaneously instead of running for a brief moment on each request.
What speaks against using the normal session mechanisms from within your app (i.e. dealing with session ID cookies), like other clients do?
I'm no expert in C++ but I'm sure that most http libraries can deal with a "cookie jar", which is essentially all you need to persist a session from within your client application.
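On the PHP side, that means reconnecting to the backend on every request and letting the session identify the client, instead of trying to keep one socket alive between page loads. A sketch, where the backend address and the "HELLO <id>" greeting are assumptions about the protocol:

```php
<?php
// Sketch: reconnect per request; the PHP session id tells the C++ backend
// which per-client state to pick up again.
session_start();

$fp = fsockopen('127.0.0.1', 5000, $errno, $errstr, 5);
if (!$fp) {
    exit("backend unavailable: $errstr ($errno)");
}

fwrite($fp, 'HELLO ' . session_id() . "\n");  // backend looks up this client's state
fwrite($fp, "GET_STATUS\n");                  // example command
$response = fgets($fp);
fclose($fp);

echo $response;
```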
While I don't know much about PHP, I can tell you that web browsers aren't designed to hold continuous connections. They have to reconnect every time they make an HTTP request.
The HTTP standard specifies that the server will disconnect from the client after it has finished sending its response.

File resource persistence in PHP

I'm developing a simple chat web application based on the MSN protocol. The server communicates with the MSN server through a file resource returned from fsockopen(). The client accesses the server via XMLHttpRequest. The server initially logs in and prints out the contact list (formatted as an HTML table), which the client receives through the responseText of the XMLHttpRequest object.
Here's the problem. The file resource that is responsible for communication with the MSN server must be kept alive in order for all chat related functions to work (creating conversations, keeping track of offline/online state changes, etc). However in order for the XMLHttpRequest to complete, the PHP script must finish execution. Which means the client will get no response from the XMLHttpRequest while the chat session is in progress.
What's worse, a file resource cannot be serialized, meaning I cannot simply store the chat session in a $_SESSION[] placeholder.
So, my question is, is there any possible way for me to 'transfer' a file resource from one file to another?
In most languages it's not possible to pass file handles between applications - AFAIK most operating systems don't allow it either.
The solution is to keep the server process running as a daemon - which means it needs to run outside of the webserver.
See
http://symcbean.blogspot.com/2010/02/php-and-long-running-processes.html
and
http://www.phpclasses.org/browse/package/5758.html
C.
A possible solution would be to have a PHP script on the server side that just doesn't end; this way, the resource corresponding to the fsockopen call would never be deleted, and the connection wouldn't be closed.
About this, you might want to search for the term "comet"; the basic idea is to have a script that runs forever on the server side, sending updates to the client whenever necessary.
Instead of having the browser send an Ajax request every X seconds, you'd keep an open connection between the client and the server -- just note that, unfortunately, PHP is often said not to be the best tool for that job...
On Stack Overflow: [php] comet
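A sketch of that never-ending script in the MSN context (the server address/port are the historical MSN dispatch values and the login handshake is elided; treat both as assumptions):

```php
<?php
// Sketch: one long-running request keeps the MSN socket alive and streams
// whatever arrives straight back to the browser.
set_time_limit(0);

$msn = fsockopen('messenger.hotmail.com', 1863, $errno, $errstr, 10);
if (!$msn) {
    exit("connect failed: $errstr ($errno)");
}

// ... protocol login handshake here ...

while (!feof($msn)) {
    $line = fgets($msn);
    if ($line !== false) {
        echo $line;        // or wrap it as an SSE "data:" frame
        @ob_flush();
        flush();
    }
}
fclose($msn);
```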
The resource can't survive the end of the request unless you create a PHP extension that does it (like persistent MySQL connections do with mysql_pconnect(), for example). However, you could use Comet technology, for example the Bayeux protocol supported by the Dojo toolkit among others, to talk to the server. That would require either a standalone server or a long-running request; in the latter case, ensure that PHP and webserver time limits won't kill the request for running too long.
Thanks everyone for the suggestions. Before I started this project I had considered using Comet technology, but decided against it (PHP/Apache don't seem to handle it well). I've come up with a hacked-together solution; not the most elegant, but workable.
One PHP script is responsible for the MSN server communication; it will run as long as the user is active. It writes data to a file (email_out) as well as reads data from a file (email_in). Whenever the client sends an AJAX request, a separate PHP script writes any POST data to the file (email_in) and returns any data from (email_out). Neither script will read/write data until it finally has access to the file (as there will be fighting over the file resource).
I don't know, suggestions? This is certainly not the most efficient means of doing things, but it's really the only PHP/Apache solution I could think of.
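For the AJAX-side script in that setup, flock() is one way to handle the "fighting over the file" part. A sketch, with the email_in / email_out names from the description and assumed paths:

```php
<?php
// Sketch: the per-request AJAX script; flock() keeps the two scripts from
// trampling each other's reads and writes.
$inFile  = '/tmp/email_in';
$outFile = '/tmp/email_out';

// Append the client's message for the long-running MSN script to pick up.
if (!empty($_POST['message'])) {
    $in = fopen($inFile, 'a');
    if (flock($in, LOCK_EX)) {
        fwrite($in, $_POST['message'] . "\n");
        flock($in, LOCK_UN);
    }
    fclose($in);
}

// Read and clear anything the MSN script has written for this client.
$data = '';
$out = fopen($outFile, 'c+');
if (flock($out, LOCK_EX)) {
    $data = stream_get_contents($out);
    ftruncate($out, 0);
    flock($out, LOCK_UN);
}
fclose($out);

echo $data;
```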
