I'm developing a simple chat web application based on the MSN protocol. The server communicates with the MSN server through a file resource returned from fsockopen(). The client accesses the server via XMLHttpRequest. The server initially logs in and prints out the contact list (formatted as an HTML table), which the client receives through the responseText property of the XMLHttpRequest object.
Here's the problem. The file resource responsible for communication with the MSN server must be kept alive for all chat-related functions to work (creating conversations, keeping track of offline/online state changes, etc.). However, in order for the XMLHttpRequest to complete, the PHP script must finish execution, which means the client will get no response from the XMLHttpRequest while the chat session is in progress.
What's worse, a file resource cannot be serialized, meaning I cannot simply store the chat session in $_SESSION.
So, my question is: is there any possible way for me to 'transfer' a file resource from one script to another?
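To make the problem concrete, here is roughly what is going on (the host name, port, and handshake line below are only placeholders for illustration):

    <?php
    // Rough illustration of the problem; host, port and handshake are placeholders.
    session_start();

    // Open the socket connection to the messenger server.
    $msn = fsockopen('messenger.example.com', 1863, $errno, $errstr, 30);
    if ($msn === false) {
        die("Could not connect: $errstr ($errno)");
    }

    fwrite($msn, "VER 1 MSNP15 CVR0\r\n"); // login handshake etc. happens here
    echo fgets($msn);                      // ... build the contact list table ...

    // This is what I cannot do: a resource does not survive serialization,
    // so on the next request $_SESSION['msn'] is no longer a usable socket.
    $_SESSION['msn'] = $msn;
    ?>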
In most languages it's not possible to pass file handles between applications, and AFAIK most operating systems don't allow it either.
The solution is to keep the server process running as a daemon, which means it needs to run outside of the webserver.
See
http://symcbean.blogspot.com/2010/02/php-and-long-running-processes.html
and
http://www.phpclasses.org/browse/package/5758.html
C.
A possible solution would be to have a PHP script on the server side that just doesn't end; this way, the resource corresponding to the fsockopen call would never be deleted, and the connection wouldn't be closed.
About this, you might want to search for the term "comet"; the basic idea is to have a script that runs forever on the server side and sends updates to the client whenever necessary.
Instead of having the browser send an Ajax request every X seconds, you'd keep an open connection between the client and the server -- just note that, unfortunately, PHP is often said not to be the best tool for that job...
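As a very rough sketch of such a never-ending script (the host, port, and 5-second wait below are placeholders; you would plug in your own event source):

    <?php
    // Comet-style sketch: the script never ends on its own; it pushes output
    // to the browser whenever the messenger socket has something to say.
    set_time_limit(0);            // don't let PHP's time limit kill the script

    $msn = fsockopen('messenger.example.com', 1863, $errno, $errstr, 30);

    while (!connection_aborted()) {
        $read   = array($msn);
        $write  = null;
        $except = null;
        // Wait up to 5 seconds for data from the messenger server.
        if (stream_select($read, $write, $except, 5) > 0) {
            echo fgets($msn);     // push the event straight to the client
        } else {
            echo " ";             // keepalive; also lets PHP notice a dead client
        }
        flush();                  // plus ob_flush() if output buffering is on
    }
    ?>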
On Stack Overflow: [php] comet
The resource can't survive the end of the request unless you create a PHP extension that does it (like persistent MySQL connections do with mysql_pconnect(), for example). However, you could use Comet techniques, for example the Bayeux protocol (supported by the Dojo Toolkit, among others), to talk to the server. That would require either a standalone server or a long-running request; in the latter case, make sure the PHP and webserver time limits won't kill that request for running too long.
Thanks everyone for the suggestions. Before I started this project I had considered using Comet technology, but decided against it (PHP/Apache don't seem to handle it well). I've come up with a hacked-together solution; it's not the most elegant, but it's workable.
One PHP script is responsible for the MSN server communication, and it will run as long as the user is active. It writes data to a file (email_out) and reads data from another file (email_in). Whenever the client sends an AJAX request, a separate PHP script writes any POST data to email_in and returns any data waiting in email_out. Neither script reads or writes until it has exclusive access to the relevant file (since both will be competing for the file resource).
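For reference, the AJAX-facing script looks roughly like this (the 'data' POST field is just the name I happen to use):

    <?php
    // AJAX-facing script (rough sketch): append POST data to email_in,
    // then return whatever the long-running MSN script left in email_out.

    // 1. Queue outgoing data for the long-running script.
    if (!empty($_POST['data'])) {
        $in = fopen('email_in', 'ab');
        if (flock($in, LOCK_EX)) {          // wait for exclusive access
            fwrite($in, $_POST['data'] . "\n");
            flock($in, LOCK_UN);
        }
        fclose($in);
    }

    // 2. Read and clear anything the MSN script has produced.
    $out = fopen('email_out', 'c+b');
    if (flock($out, LOCK_EX)) {
        echo stream_get_contents($out);
        ftruncate($out, 0);                 // empty the file once delivered
        flock($out, LOCK_UN);
    }
    fclose($out);
    ?>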
I don't know; suggestions? This is certainly not the most efficient way of doing things, but it's really the only PHP/Apache solution I could think of.
Related
I am writing an application with a Flex/AS3 client. The client needs to send some data (at this point an XML file of roughly 10 KB, though it could later balloon to up to 100 KB) to the server every minute or so, and then receive another similar XML file back. The server's job is to validate the data in the XML file and possibly update a MySQL database. Some other Flex/AS3 clients that are in the same "group" as the client which sends data (and clients can join/leave a group at any time) need to be notified when the server processes the XML file, so they can then choose whether to download it. There can be several such groups of clients (and clients from different groups don't talk to each other). Since I am somewhat familiar with PHP, I would prefer to use it for the server-side script.
My questions are the following:
1) Would it be best to write this as a socket application? Or should this be just POST data sent to a web script?
2) If this is sent to a web script as just some POST data, how can I ensure that other clients get notified? Do I just ping my script every few seconds (sounds resource intensive)...
3) Is there some framework/libraries that I should use (on client or server side) to facilitate developing this?
I appreciate your help,
Ilya
Your best option is to use a socket connection. Normal POST data will be both slow and taxing. If you insist on it, I've done it before and can tell you how, but ultimately this should be easier: http://help.adobe.com/en_US/ActionScript/3.0_ProgrammingAS3/WS5b3ccc516d4fbf351e63e3d118a9b90204-7cfb.html
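If you do go the socket route, the PHP end can be a plain command-line script. Here is a bare-bones, one-client-at-a-time sketch; the port number and the newline-terminated message format are arbitrary assumptions:

    <?php
    // Minimal PHP CLI socket server sketch (run with: php server.php).
    $server = stream_socket_server('tcp://0.0.0.0:9000', $errno, $errstr);
    if ($server === false) {
        die("Could not start server: $errstr ($errno)\n");
    }

    while (true) {
        $client = @stream_socket_accept($server, -1);   // block until a client connects
        if ($client === false) {
            continue;
        }
        // Read one newline-terminated XML message from the Flex client,
        // validate it, update MySQL, then answer.
        $xml = fgets($client, 1048576);
        // ... validate $xml, update the database, notify the group ...
        fwrite($client, "<result>ok</result>\n");
        fclose($client);
    }
    ?>

Bear in mind that a Flash/Flex Socket will also ask the server for a socket policy file before it sends any data, and a real server would have to handle several clients at once (stream_select() or one process per connection), but this is the general shape.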
I am writing JavaScript for an in-browser IM client, for the sake of practicing and learning JavaScript and AJAX.
I need to be able to check for a change in the file size of a text file that is being used as temporary storage for 40-80 SQL entries containing messages, so that the display can be updated.
At the moment I am using a setInterval function to periodically check for a change in file size using a short PHP script, but this causes issues: if the interval is too long, messages are delayed; if it is shorter, it means a lot of PHP scripts running very frequently, which takes up server resources.
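(The PHP check is currently nothing more than this; the file name is a placeholder:)

    <?php
    // Return the current size of the message store so the JavaScript
    // can compare it with the last size it saw.
    clearstatcache();               // otherwise PHP may return a cached size
    echo filesize('messages.txt');
    ?>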
What is the best way to do this if the main concern is to reduce server resource usage?
(I am running my server off of a rather low-tech PC I've scraped together: 2 GB RAM, 2.8 GHz AMD Sempron processor.)
Preferably, I would want to do this using an AJAX event triggered by someone sending a message, i.e. when user B triggers the event that edits the file by pressing Enter, that triggers a function on user A's side that updates the HTML file.
Any ideas? I am open to any solution to this particular problem. I gave specific examples of what I want to happen in the specific languages in order to give a better idea of what it is I am attempting to do.
If there is a way to do this that isn't JavaScript/PHP, I'd also be open to exploring that as an option.
Doing this with PHP can be a bit cumbersome. You could try doing something like long polling where you keep the HTTP request open until the server has new data to send to the user. If messages are sent frequently, this might not be ideal. You might want to consider using event-driven web technologies like node.js with something like Socket.IO.
In any case, you'll likely want to maintain a connection with the server if you want to get the message in near real-time. There are ways to use WebSockets with PHP as well, but PHP isn't really the best for this because it's not designed to keep scripts running for long periods (also see What exactly entails setting up a PHP Websocket Server?).
Browsers and HTTP/AJAX generally work on a "pull" model. The browser (or AJAX call) sends the server a request, then the server answers with a response.
There isn't generally much provision for the server to contact the browser to "push" an event. This can, however, be simulated by a long-running request, to which the server writes data when an event (or events) occurs.
For example, this could be a request that answers "empty" after a timeout of 10-30 seconds, or that returns and answers immediately if there are events in its queue.
With a Java server this is easy to do, and I've used this successfully for event notification in a major integration project a few years back.
However, I'm not sure how much ability there is in PHP (probably very near zero) to maintain an overall server state, coordinate or communicate between threads/requests, or maintain event queues.
You could look into something like a Java webapp running on Tomcat. All you need is a basic web.xml and one Servlet class, and you can build just about anything from there.
Hi,
From the image above, I have a webserver, a Linux machine, and a client/device. Now I need these three to communicate. The webserver sends data to an IP address (the client/device) based on a button pressed on the webpage. But before the data is sent, it must first pass through the Linux machine; the machine then sends the data down to the device, which reads the data and acts based on the command sent. Then the device sends data back to the Linux machine, which in turn sends it to the webserver as an acknowledgement, meaning the data was received by the device without any problems.
PHP is for the webserver. Now, how will PHP send data to an IP address?
The Linux machine handles all requests and sends everything down to the device; when the device gets the data, it sends data back to the Linux machine, which then sends an OK to the webserver saying the data arrived successfully. (I read about socket programming, and I'm thinking of creating an application that reads requests.) Or do you have any other idea how I can do this?
How can the device read data sent by the webserver?
Thanks,
EDIT: The device is not connected directly to the Linux machine; it is only connected via the Ethernet cable.
Let's call the topmost machine 'Server', the middle machine 'Controller' and the bottom machine 'Device'. It does not matter if the device is a peripheral (say, USB or serial device), or a computer.
The first task is to get the Controller to query the Device. The best way to do this really depends on the Device. If you consider things like USB audio/video devices, they need to be tuned, then they send a continuous stream of data. Things like temperature or humidity sensors are told to do a measurement, then they respond with data.
Usually you write the required functions into a small library, and verify it works using command line tools. In some cases the library may not be necessary, for example if the Device is already supported by the kernel in Controller, and the information is trivially available. (For example, consider the temperature sensors in hard drives: if Device(s) are hard disks, then Controller can simply use the command hddtemp /dev/sda to get the temperature of the /dev/sda (first SATA/ATA/SCSI hard disk). I'd expect the end user to be able to pick which hard disks she is interested in, so that choice would have to flow from Server to Controller.)
Next, you write a service that will run on the Controller. This service will incorporate the library functions already written and tested, so it can easily access the Device. (This way you know the Controller-Device communication works, and don't need to worry about it. One thing at a time.)
There are many different designs for the service, from plain TCP/IP or UDP/IP sockets to Remote Procedure Calls (RPC), to high-level protocols like HTTP. In recent years, the last, using HTTP, has become more and more common, with responses being XML, plain text, or binary media (usually images). The idea is to have the service be basically just another web server that can access the Device directly. Security is simpler, because it does not need to be world-accessible: it can very well only answer to requests coming from the Server only. I've written such services using basic shell scripting (Bash), PHP (both PHP-CGI and command-line PHP, PHP-CLI), and C, among others. The best choice depends on the details, really. I personally prefer either a simple text-based TCP/IP socket, or HTTP.
On the Server, you can write a PHP page that connects to the Controller, requesting whatever it wants to request (usually depends on user data, first checked for sanity and safety, of course). PHP has easy built-in facilities for doing both HTTP requests and connecting using raw TCP/IP, so it suits quite well for this. If HTTP protocol wrappers are enabled, then it is just $handle = fopen("http://192.168.x.x/myservice?param1=" . urlencode($param1) . "&param2=" . urlencode($param2), "r+b");. To get a socket connection, you use the fsockopen() function instead. (For details, see fopen(), http wrappers, and fsockopen() at the PHP Function Reference at www.php.net.)
In practice the PHP page code first creates a connection to the Controller. Then it sends a request, containing the relevant sanitized commands/parameters received from the end user. Then it waits for the Controller to respond with the results (by simply reading the response), then closes the connection. The response should contain all the data needed, so the PHP page is free to construct the page to the end user.
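As a rough sketch of that flow (the Controller address, port, and the line-based request format here are pure assumptions for illustration):

    <?php
    // Sketch of the Server-side PHP page talking to the Controller.
    $param = isset($_GET['param']) ? $_GET['param'] : '';
    if (!preg_match('/^[a-z0-9_-]+$/i', $param)) {     // sanity/safety check first
        die('Invalid parameter');
    }

    $ctrl = fsockopen('192.168.1.10', 4000, $errno, $errstr, 5);
    if ($ctrl === false) {
        die("Controller unreachable: $errstr ($errno)");
    }

    fwrite($ctrl, "GETDATA " . $param . "\n");         // send the request
    $response = fgets($ctrl);                          // wait for the reply
    fclose($ctrl);                                     // close the connection

    echo "<p>Controller says: " . htmlspecialchars($response) . "</p>";
    ?>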
None of this is really difficult, but there is a lot to do. I've found the Controller-Device communication to require the most work; after that is done, the rest has always been quite straightforward.
If you can provide more details what the Controller-Device connection is, what kind of data (text? numbers? images? a lot of binary data?) the Device provides, and what kind of parameters/commands (just "one result, please?", basic commands like "move up", "where are you?") do you expect you need to send to the Controller/Device, I could perhaps be more specific.
Also, are you limited to PHP, or would you be comfortable writing the Controller service using C? I've found that to be a very good combination myself.
Edited to add:
In a nutshell, the three points can be answered as follows:
1) Either by using fopen("http://ip.add.re.ss:port/", "r+b"); if using the HTTP protocol and PHP is configured to allow http wrappers (they usually are), or by using fsockopen(). See the PHP documentation linked above for details.
2) With an IP-connected Device, the Controller is basically a relay or translator. Usually this means a daemon running on the Controller, managing incoming requests from the Server (or Servers), and responses from the Device (or Devices). This is more common when there are a varying number of Devices, and/or more than one interface is needed. In practice, the Controller runs a daemon just like described above, except the protocols may be standard or simple enough that there is no need to write a library.
3) The PHP running on the Server must send the request details (exactly what is desired) to the Controller. The Controller must pass them on to the Device. If the Controller provides an HTTP URL for the PHP pages on the Server to connect to, it can parse the query parameters and translate them into a format the Device understands.
One particular issue in practice is to handle concurrent accesses. There is usually only a single connection from Controller to Device, but more than one PHP might connect to the Controller simultaneously. So there is some book-keeping involved.
In some cases the Device provides a continuous stream of data (or regular updates of data) to the Controller, and the Controller simply keeps tabs on it. When a PHP running on the Server queries something from the Controller, the Controller simply looks up the latest data (without contacting the Device at all, just receiving the data as normal), and responds with it. Here, it is common to include a timestamp, or better yet, the age of the data, in the response from Controller to Server.
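A sketch of that last arrangement on the Controller side, assuming the Device-reading loop keeps the latest reading in a file (the path and plain-text response format are made up):

    <?php
    // Controller-side sketch: answer a Server query from the latest cached
    // Device reading instead of contacting the Device at all.
    $cacheFile = '/var/run/device-latest.txt';   // written by the Device-reading loop

    $raw = @file_get_contents($cacheFile);
    if ($raw === false) {
        header('HTTP/1.0 503 Service Unavailable');
        exit("no data yet\n");
    }

    $age = time() - filemtime($cacheFile);       // age of the reading, in seconds

    header('Content-Type: text/plain');
    echo "age: {$age}\n";
    echo "data: {$raw}\n";
    ?>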
You really should add some details to your question. (I suspect the downvote is due to lack of details.) You don't need to tell us the exact make and model of the Device, only whether it is a receiver (TV? radio? weather station?) or a sensor cluster or a door lock, and if you know any details on the communications protocols (which ones)? Thus far, we only know it uses IP. That does not help at all, just about everything uses IP nowadays. This is also why my answer is so vague; I'd like to be more precise, but you do not provide enough information for me to do so.
I have a series of XML files which can be retrieved, edited and saved by a User. My intention is to allow multiple Users to edit these files at the same time. Many parts of these XML files relate to content displayed in the browser UI for example a <name>My title</name> node is displayed and can be edited.
The technologies I'm using are Javascript, PHP, and a master XML file containing references to other XML files (both master and referenced files can be edited in the UI). The server is WebDAV enabled, and WebDAV methods are used via YUI3's io module to handle retrieval, saving, collection moving etc.
How do I go about updating UIs where these resources are being used, based on the contents of the edited and saved XML file(s)?
I know I could probably run setTimeouts and whatnot to check for updates, but it seems more intuitive to make the UI respond only when data is changed.
cheers!
The feature you're describing is similar to a technique known as server-push. What you're asking to do is a very tricky thing for a web app (especially for PHP, which is built around the idea of a request that gets served and the script terminating).
HTML5 is introducing technologies such as WebSockets for maintaining a persistent connection to a server. You could look into WebSockets as a solution, but it's a brand-new technology, I don't think the spec is even finalized yet, and it will only be implemented in the very latest versions of browsers, if at all.
You've already mentioned AJAX polling (driven by setInterval), but you've also noticed that it's problematic. You're right, of course: local data can become stale in the interval between polls, and you'll generate a lot of traffic between the server and any open clients.
An alternative is so-called "long polling". The idea is that the client starts an AJAX session with the server. On the server, the script invoked by the client basically just sits there and waits for something to change. When it does, the server notifies the client by sending a JSON/XML/whatever response and closing the AJAX session. When the client receives the response, it processes it and initiates a new AJAX connection to wait for another server response.
This approach is almost instantaneous, because data gets pushed to the client as soon as it's available. However, it also means lots of open connections to the server and this can put the server under a lot of load. Also, PHP scripts aren't really meant to run or sleep for a long time due to the request-response model the language is built around. It is possible, but probably not advisable to follow this approach.
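Still, a bare-bones long-polling endpoint could look something like this; here the "did something change" test is just a modification-time check on your master XML file, and the file name, 'since' parameter and JSON response shape are all assumptions:

    <?php
    // Bare-bones long-polling sketch. The client sends the timestamp of the
    // last version it has seen; we wait until the watched file is newer than
    // that, or until a ~25 second timeout, then answer either way.
    set_time_limit(0);

    $watched  = 'master.xml';                        // file whose changes we report
    $since    = isset($_GET['since']) ? (int)$_GET['since'] : 0;
    $deadline = time() + 25;                         // stay under webserver timeouts

    while (time() < $deadline) {
        clearstatcache();                            // don't trust a cached mtime
        $mtime = filemtime($watched);
        if ($mtime !== false && $mtime > $since) {
            header('Content-Type: application/json');
            echo json_encode(array('changed' => true, 'mtime' => $mtime));
            exit;
        }
        usleep(500000);                              // check twice a second
    }

    // Timeout: nothing changed, the client simply reconnects.
    header('Content-Type: application/json');
    echo json_encode(array('changed' => false, 'mtime' => $since));
    ?>

The client opens a new request as soon as this one returns, passing back the mtime it just received, so it is only ever told about changes it has not yet seen.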
How do I implement basic "Long Polling"? has some examples of the long-polling technique.
Good luck!
I have a C++ backend application that communicates over a TCP socket, to which I connect PHP. The problem is that the connection is closed on every refresh, page change, etc. I would like to keep the connection open for each client, doing something like what $_SESSION does.
This is not really what PHP (or the whole of web-based applications and services, for that matter) is meant for. It also means begging for resource problems before long, because big PHP processes would be running simultaneously instead of running for a quick moment on each request.
What speaks against making use of the normal session mechanism from within your app (i.e. dealing with session ID cookies), like other clients do?
I'm no expert in C++, but I'm sure that most HTTP libraries can deal with a "cookie jar", which is essentially all you need to persist a session from within your client application.
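On the PHP side there is nothing special to do; as long as your C++ client stores and re-sends the session cookie, the usual $_SESSION machinery works for it just like for a browser. A trivial sketch (the 'requests' counter is only an example):

    <?php
    // Any state stored here survives between requests from the same client,
    // identified purely by the session ID cookie it sends back.
    session_start();

    if (!isset($_SESSION['requests'])) {
        $_SESSION['requests'] = 0;
    }
    $_SESSION['requests']++;

    echo "You have made " . $_SESSION['requests'] . " requests in this session.\n";
    ?>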
While I don't know much about PHP, I can tell you that web browsers aren't designed to hold continuous connections. They have to reconnect every time they make an HTTP request.
The HTTP standard specifies that the server will disconnect from the client after it has finished sending its response.