Keeping a connection open for each client in php - php

I have a C++ backend application listening on a TCP socket, to which I connect from PHP. The problem is that the connection is closed on every refresh, page change, etc. I would like to keep the connection open for each client, doing something like $_SESSION does.

This is not really what PHP (or web-based applications and services in general, for that matter) is meant for. It is also begging for resource problems before long, because large PHP processes would be running simultaneously instead of running for a quick moment on each request.
What speaks against making use of the normal session mechanisms from within your app (i.e. dealing with session ID cookies) like other clients do?
I'm no expert in C++, but I'm sure most HTTP libraries can deal with a "cookie jar", which is essentially all you need to persist a session from within your client application.
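For what it's worth, on the PHP side the usual pattern is to reconnect on every request and use $_SESSION to carry whatever the backend needs to re-associate the client. A minimal sketch, assuming (hypothetically) that the C++ backend accepts HELLO and RESUME <token> commands:

    <?php
    // Sketch only: reconnect to the C++ backend on each request and use the
    // PHP session to carry a token that lets the backend restore this
    // client's state. The HELLO/RESUME commands are hypothetical.
    session_start();

    $backend = fsockopen('127.0.0.1', 5000, $errno, $errstr, 5);
    if (!$backend) {
        die("Backend unavailable: $errstr ($errno)");
    }

    if (isset($_SESSION['backend_token'])) {
        // Ask the backend to resume the state it kept for this client.
        fwrite($backend, 'RESUME ' . $_SESSION['backend_token'] . "\n");
    } else {
        // First request: let the backend allocate state and hand back a token.
        fwrite($backend, "HELLO\n");
        $_SESSION['backend_token'] = trim(fgets($backend));
    }

    // ... talk to the backend for this request, then close.
    fclose($backend);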

While I don't know much about PHP, I can tell you that web browsers aren't designed to hold continuous connections. They have to reconnect every time they make an HTTP request.
The HTTP standard specifies that the server disconnects from the client after it has finished sending its response.

Related

WebSockets with endless loop cycling: is it really better than AJAX?

I am trying to understand WebSockets.
I have seen two examples, here in the docs and also here.
Both examples use an endless loop, listening for when a new client connects, when a client does something interesting, and when a client disconnects.
My question is: is using WebSockets (with an endless loop) better than an AJAX solution that makes an HTTP request every x seconds?
AJAX and WebSockets are vastly different. Asking if one is better than the other is like asking if a screwdriver is better than a hammer.
WebSockets are used for real time, interactive communication. Both sides of a WebSocket connection can send data and it will be received within milliseconds by the other end. The connection stays open, reducing latency due to connection negotiation.
However, it only sort of plays nicely with HTTP. That is, it plays nicely with proxies that are WebSocket aware, and with firewalls. WebSocket traffic is most definitely not HTTP traffic, except for the client's first packet, which requests switching from HTTP to the WebSocket protocol.
AJAX, on the other hand, is pure HTTP. The only difference between AJAX and a standard web request is that an AJAX request is initiated by client side scripts and the response is available to that same script rather than reloading the page.
In both AJAX and WebSockets, the client scripts can receive data and use it within that same script. That's where the similarities end.
WebSockets set up a permanent connection and both sides can send data at any time, or sit quietly at any time. With AJAX, the client makes a request and the server responds.
For instance, suppose you were setting up a new-message notification system. With WebSockets, as soon as a new message is available the server sends it straight to the browser; if there are no new messages, the server stays quiet. With AJAX, the client would periodically send a request to the server, which would always respond, either saying there were no new messages or delivering the notifications that are pending. There is no way for the server to initiate things on its end; it must wait for the AJAX request.
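To make the AJAX side of that example concrete, a polling endpoint might look roughly like this; the table, columns, and session key are made up for illustration:

    <?php
    // poll.php - hypothetical AJAX polling endpoint: the client calls this
    // every few seconds and always gets an immediate answer, new messages or not.
    session_start();

    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    // Fetch anything newer than the last message id the client reported seeing.
    $lastSeen = isset($_GET['last_id']) ? (int)$_GET['last_id'] : 0;
    $stmt = $pdo->prepare(
        'SELECT id, body FROM messages WHERE user_id = ? AND id > ? ORDER BY id'
    );
    $stmt->execute([$_SESSION['user_id'], $lastSeen]);

    // The server always responds, even when there is nothing new.
    header('Content-Type: application/json');
    echo json_encode(['messages' => $stmt->fetchAll(PDO::FETCH_ASSOC)]);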
Server side, things diverge from the traditional PHP web development paradigms. A typical WebSocket server will be a stand-alone CLI application running as a daemon. (If that last sentence doesn't make sense, please spend some time really understanding how to administer a server.)
This means that multiple clients will be connecting to the same script, and superglobal variables like $_GET and $_SESSION will be absolutely meaningless. It seems easy to conceptualize in a small use case, but remember that you will most likely want to get information from other parts of your site, which often means using libraries and frameworks that have absolutely no concept of accessing data outside of the HTTP request/response model.
Thus, for simplicity, you'll usually want to stick with AJAX requests and periodic polling, unless you have the means to rethink how your data moves over the network and possibly re-implement things that your libraries normally automate for standard web traffic.
As for the server's loop:
It is not a busy loop; it is an IO-blocked loop.
If the server tries to read network data and none is available, the operating system will block (pause) the script and go off to do whatever else needs to be done. In my WS server, I block waiting for network traffic for at most 1 second at a time, before the script returns to check and see if anything else new happened that I should notify my clients of. Typically, this is barely a few milliseconds before the server goes right back to its IO blocked state waiting for new data on the wire. Some others have implemented my server using LibEv, which allows them to respond to events outside of the network IO without having to wait for the block to timeout.
This is the way nearly every server does things. This is why you can have Apache actively listening and serving web traffic without every server that runs Apache being pegged at 100% CPU usage even when there is no traffic.
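A stripped-down sketch of that kind of IO-blocked loop in PHP, using stream_select() with a 1-second timeout; the port, framing, and message handling are placeholders, not taken from the answer's actual server:

    <?php
    // daemon.php - minimal sketch of an IO-blocked server loop, meant to be
    // run from the CLI, not through a web server.
    $server = stream_socket_server('tcp://0.0.0.0:8090', $errno, $errstr);
    if (!$server) {
        die("Could not listen: $errstr ($errno)\n");
    }

    $clients = [];

    while (true) {
        // Block for at most 1 second waiting for activity on the listening
        // socket or any connected client; the CPU is idle while we wait.
        $read   = array_merge([$server], $clients);
        $write  = null;
        $except = null;
        if (stream_select($read, $write, $except, 1) > 0) {
            foreach ($read as $socket) {
                if ($socket === $server) {
                    $clients[] = stream_socket_accept($server);   // new client
                } else {
                    $data = fread($socket, 4096);
                    if ($data === '' || $data === false) {
                        // Client went away; drop it from the list.
                        unset($clients[array_search($socket, $clients, true)]);
                        fclose($socket);
                    } else {
                        // Handle the incoming data here.
                    }
                }
            }
        }
        // Timeout elapsed or data handled: check for anything else to notify
        // clients about, then go straight back to waiting.
    }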
In closing, WebSockets is a wonderful technology, but web libraries and frameworks are simply not built to use them. Thus, unless you're working in a system where waiting 3 seconds for a full AJAX request is far, far too long, it's probably best to use AJAX. If you're writing a multiplayer interactive game or a chat system, then you've found a perfect use for WebSockets.
I do heartily encourage everyone to learn WebSockets... but it's not a magic bullet, and few parts of the web are designed in ways where people can get real use out of it.
Yes, sockets are better in many cases.
It's not an endless loop pegging the CPU at 100%; it's just the live loop that exists in every daemon application.
A synchronous accept operation is where the process spends 99.99% of its time.
An AJAX heartbeat means more traffic, and more server CPU and memory.
I too am in the learning phase. I have built a php-based websocket server and have it communicating with web pages. Perhaps my 2c perspective is useful.
Getting the websocket server (wss) working using available sources as a starting point is not that difficult, but what to do with it all next is.
The wss runs under the CLI version of PHP. A reasonably recent browser loads a normal http or https page that, along with anything else the page needs to do, makes a request to the wss, and a handshake occurs. Communication is then possible directly between browser and wss at the whim of either end. This is low overhead and hence fast and simple. Very cool. What is said over that link needs to be understood by both ends (subprotocol agreement); you may have to roll your own in PHP and in JavaScript. No more HTTP headers, URLs, etc.
The wss is a long-lived, stateful instance of php (very unlike apache etc which forget you on sending the page). An entire app can be run in the wss instance, keeping state for itself and each connected client. It used to be said that php was too leaky for long life but I don't hear that much any more. But I believe you still have to be careful with memory.
However, being a single PHP instance, there is not the usual separation between client instances. For example, statics in classes are shared with every class instance and hence every client. So for a single-user-style app sharing data with a heap of clients this is great. I can see that AJAX-type calls can be replaced this way, but if the app still has to rebuild state to service each client, and then release it to save resources, that seems to lessen the advantage.
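A tiny sketch of what that looks like inside a single long-lived server process; the class and hooks are hypothetical, not from any particular WebSocket library:

    <?php
    // One PHP process serves every client, so "static" really means
    // "shared with everyone"; per-client state must be tracked explicitly.
    class ChatServer
    {
        // Shared by every connected client, because there is only one instance.
        public static $motd = 'Welcome!';

        // Per-client state, keyed by connection id.
        private $clients = [];

        public function onConnect($connId)
        {
            $this->clients[$connId] = ['name' => null, 'unread' => 0];
        }

        public function onDisconnect($connId)
        {
            unset($this->clients[$connId]);
        }
    }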
Going a step further and keeping truly stateful instances for clients seems like a possible next step. Replicating the traditional session-based system is one possibility; alternatively, fork new PHP interpreters and look after communication between parent and children via sockets or suchlike. But this would require resources per client that would be severely limiting for any non-trivial app.
Or perhaps it is possible to put the bulk of the app in the parent and let the children just do the very client-specific stuff. Or break the app design into small independent units that can communicate directly via sockets. Socket communication does seem to be catching on nowadays.
As Ghedpunk says in so many ways, the real world does not yet seem ready to realise the full potential of the web socket concept but it can certainly replace Ajax. The added advantage of the server sending without being asked opens up new possibilities previously too difficult to consider.

Using server push methods without a browser

I have a client device that requests a web page.
I am trying to send data to the client when a database table entry is changed.
Problem: the client is not a "browser", i.e. client-side scripting won't do me any good here (it's a microcontroller).
Attempts: at first I was thinking of using PHP and the flush command. Every so often I could output "waiting" to the client while still in a loop that checks the database for changes. This seems a stretch of a method to me, as I don't think my server supports the function, and I don't really like it because it seems "dirty" :) ...
Next thought: have PHP constantly poll the database for changes in a loop. The client would wait until the server finishes, and thus I would have a stable connection for "as long as it takes for a change to happen" :) optimistic, I know. Taking into account that if the connection does time out, I can have the client reconnect.
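Putting those two attempts together, a rough sketch of the held-open, flush-as-you-go approach might look like this; the table, column names, and timing are made up, and output buffering on the server may need to be disabled for flush() to actually reach the client:

    <?php
    // Sketch: hold the connection open, check the database in a loop, send
    // data when something changes, and flush a small keep-alive in between.
    set_time_limit(0);
    header('Content-Type: text/plain');

    $pdo = new PDO('mysql:host=localhost;dbname=devices', 'user', 'pass');
    $lastId = 0;

    while (!connection_aborted()) {
        $stmt = $pdo->prepare('SELECT id, value FROM readings WHERE id > ? ORDER BY id');
        $stmt->execute([$lastId]);

        foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
            echo "DATA {$row['id']} {$row['value']}\n";
            $lastId = (int) $row['id'];
        }

        // Keep-alive so the microcontroller knows the server is still there.
        echo "WAITING\n";
        flush();

        sleep(5);   // pause before polling the database again
    }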
Now a bit of a silly stretch: is server-side JavaScript a thing? lol, yes I asked... maybe there is something I don't know about...
I'm hoping someone here can help on this quest of knowledge
Thanks JT
My client is currently:
Opening a socket using a TCP connection on port 8090... then opening a connection to my web site using that socket, the server address, and the server port number (80)... I'm not sure how to correlate this type of socket with the type I would need to stream data, very sparingly, to the client.
If you need to stick with the HTTP protocol (see comments on other possible methods), read about the meta refresh tag. This does what you ask without client-side scripting.
Another possibility would be to set up your db updates as an RSS feed.
It does feel like better design not to use HTTP.
Non-HTTP-based notes:
1) Could you just do your current HTTP request, sleep a bit, then repeat the same process?
Nothing special or original, but this is fundamentally what the other options are doing.
2) Using the same socket, blocking reads would allow you to get more data as soon as it's available.
The web server may need its config adjusted to act as a streaming media server.
As a discussion point, not a solution, have a look at streaming media.

jQuery PHP push?

I need to implement real-time page data updates with PHP and jQuery.
(I found www.ape-project.org/ but it seems the site is down.)
Are there any other solutions?
Thanks very much!
You might want to check out Comet:
Comet is a web application model in which a long-held HTTP request allows a web server to push data to a browser, without the browser explicitly requesting it.[1][2] Comet is an umbrella term, encompassing multiple techniques for achieving this interaction. All these methods rely on features included by default in browsers, such as JavaScript, rather than on non-default plugins. The Comet approach differs from the original model of the web, in which a browser requests a complete web page at a time.
http://en.wikipedia.org/wiki/Comet_%28programming%29
If you want to do streaming (sending multiple messages over a single long lived, low latency connection), you probably need a comet server. Check out http://cometdaily.com/maturity.html for details on a variety of server implementations (I am the maintainer of one of them - Meteor).
If you are happy to reconnect after each message is received, you can do without complicated servers and transports and just use long polling - where you make an ajax request and the server simply sleeps until it has something to send back. But you will end up with LOTS of connections hanging off your web server, so if you're using a conventional web server like Apache, make sure it's configured to handle that. By default Apache doesn't like having more than a few hundred concurrent connections.
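A minimal long-polling endpoint along those lines might look something like the sketch below; fetch_messages_since() is a hypothetical helper that would read new rows from your own storage:

    <?php
    // longpoll.php - sketch of a long-poll endpoint: sleep until there is
    // something to send back (or a timeout passes), then answer and let the
    // client immediately reconnect.
    set_time_limit(70);                      // slightly longer than the wait below

    $since    = isset($_GET['since']) ? (int)$_GET['since'] : 0;
    $giveUp   = time() + 60;                 // return an empty answer after ~60s
    $messages = [];

    do {
        $messages = fetch_messages_since($since);   // hypothetical helper
        if ($messages) {
            break;
        }
        sleep(1);                            // nothing yet; check again shortly
    } while (time() < $giveUp);

    header('Content-Type: application/json');
    echo json_encode($messages);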
There are lots of solutions for doing this...
It depends on what your data is, and on how your data is organized and stored (MySQL?).
Your question is too open-ended to have a single real answer.

How to push information from a PHP proxy to a Flex application

So I have a PHP proxy that gets information from a website. Let's say the proxy gets the information from www.example.com. It checks whether the number of lines returned is the same as before; if not, there are more lines, and it counts the difference. It then needs to push this information to the Flex client, saying that it has new information: (x) more lines have been written.
I am not really sure how to build the push mechanism on the PHP proxy, because I don't know how to actually push from the proxy to the client; I've never done it before. Any help?
Normally, you cannot initiate a transfer from the server side. You can either
set up a timer in the Flex application that fires every 2-3 seconds and checks the php proxy for updates using a URLLoader
use sockets (XMLSocket) for direct data pushing - use of sockets needs the client to open some ports which might be blocked by their firewall.
You can't really push anything from the server to your Flash application unless you have an open connection. So you can either poll the proxy for the number (it fetches the info and gives it back to the app), or open a socket connection, which has been available since AS3. The socket connection stays up until explicitly closed, but that seems overkill just for sending some info.
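On the PHP side, the polled proxy from the question could do the line counting itself; a rough sketch (the state file is a placeholder, and a real version would want locking and error handling):

    <?php
    // proxy.php - sketch: fetch the page, compare the line count with the
    // previous poll, and report how many new lines appeared.
    $body  = file_get_contents('http://www.example.com/');
    $lines = substr_count($body, "\n");

    $stateFile = __DIR__ . '/linecount.txt';
    $previous  = is_file($stateFile) ? (int) file_get_contents($stateFile) : $lines;
    file_put_contents($stateFile, (string) $lines);

    header('Content-Type: application/json');
    echo json_encode(['new_lines' => max(0, $lines - $previous)]);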

File resource persistence in PHP

I'm developing a simple chat web application based on the MSN protocol. The server communicates with the MSN server through a file resource returned from fsockopen(). The client accesses the server via XMLHttpRequest. The server initially logs in and prints out the contact list (formatted as an HTML table), which the client receives through the responseText property of the XMLHttpRequest object.
Here's the problem. The file resource that is responsible for communication with the MSN server must be kept alive in order for all chat-related functions to work (creating conversations, keeping track of offline/online state changes, etc.). However, in order for the XMLHttpRequest to complete, the PHP script must finish execution, which means the client will get no response from the XMLHttpRequest while the chat session is in progress.
What's worse, a file resource cannot be serialized, meaning I cannot simply store the chat session in a $_SESSION[] placeholder.
So, my question is, is there any possible way for me to 'transfer' a file resource from one file to another?
In most languages it's not possible to pass file handles between applications - AFAIK most operating systems don't allow it either.
The solution is to keep the server process running as daemon - which means it needs to run outside of the webserver.
See
http://symcbean.blogspot.com/2010/02/php-and-long-running-processes.html
and
http://www.phpclasses.org/browse/package/5758.html
C.
A possible solution would be to have a PHP script on the server side that just doesn't end; this way, the resource corresponding to the fsockopen call would never be deleted, and the connection wouldn't be closed.
About this, you might want to search for the term "comet"; the basic idea is to have a script that runs forever on the server side, sending updates to the client whenever necessary.
Instead of having the browser send an Ajax request every X seconds, you'd keep an opened connection between the client and the server -- just note that, unfortunately, PHP is often said not to be the best tool for that job...
On Stack Overflow: [php] comet
The resource can't survive the end of the request unless you create a PHP extension that keeps it alive (like persistent MySQL connections do with mysql_pconnect(), for example). However, you could use Comet techniques, for example the Bayeux protocol supported by the Dojo toolkit among others, to talk to the server. That would require either a standalone server or a long-running request; in the latter case, ensure that the PHP and web server time limits don't kill the request for running too long.
Thanks everyone for the suggestions. Before I started this project I had considered using Comet technology, but decided against it (PHP/Apache don't seem to implement it well). I've come up with a hacked-together solution; not the most elegant, but workable.
One PHP script is responsible for the MSN server communication and will run as long as the user is active. It writes data to a file (email_out) and reads data from a file (email_in). Whenever the client sends an AJAX request, a separate PHP script writes any POST data to the file (email_in) and returns any data from (email_out). Neither script will read or write data until it finally has access to the file (as there will be fighting over the file resource).
I don't know, suggestions? This is certainly not the most efficient means of doing things, but it's really the only PHP/Apache solution I could think of.
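For what it's worth, the AJAX-facing half of that hack might look roughly like this; the file names come from the description above, and flock() is what keeps the two scripts from fighting over the files:

    <?php
    // bridge.php - sketch of the AJAX-facing script in the file-relay hack.
    // The long-running MSN script reads email_in and appends to email_out;
    // this script does the opposite on each AJAX request.
    $inFile  = __DIR__ . '/email_in';
    $outFile = __DIR__ . '/email_out';

    // Pass any POSTed command through to the long-running MSN script.
    if (!empty($_POST['command'])) {
        $in = fopen($inFile, 'a');
        if (flock($in, LOCK_EX)) {              // wait for exclusive access
            fwrite($in, $_POST['command'] . "\n");
            flock($in, LOCK_UN);
        }
        fclose($in);
    }

    // Return (and clear) anything the MSN script has queued for the client.
    $response = '';
    $out = fopen($outFile, 'c+');
    if (flock($out, LOCK_EX)) {
        $response = stream_get_contents($out);
        ftruncate($out, 0);                     // consume the buffered output
        flock($out, LOCK_UN);
    }
    fclose($out);

    echo $response;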
