I will try to make my first post here as interesting as possible.
Lately I have been interested in the feasibility of handling WebSocket requests on a shared hosting server.
Please don't tell me "upgrade your plan". All this would be trivial on at least a VPS. I realize that.
As many know, shared hosts will...
Kill a daemon if they see one
Block usage of server sockets
Deny you shell access
Keep Apache off limits (no module installations)
These restrictions rule out phpwebsocket and Python-based solutions altogether. A no-daemon solution that masquerades as a web page is needed.
PHP being my favorite server-side language, I crafted a PHP websocket gateway posing as a web page.
So far I have been successful in sending the right headers for the handshake and streaming output (using output buffering), but I still can't figure out how to continue to read data after the initial request.
In short, I want to continue to receive data from the client even after the PHP script is started. I have tried reading the php://input pseudofile, but I can't seem to get any more reads out of it after the end of the GET. Is there any setting or hack that will allow this?
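For reference, the handshake-and-stream part that does work looks roughly like this (a simplified sketch; the GUID is the standard one from the WebSocket spec, and whether Apache actually passes the 101 response and the frames through untouched depends entirely on the host):

    <?php
    // Simplified sketch of the one-way part that does work: answer the
    // handshake, then keep pushing frames to the client. Reading further
    // client data is the part that fails.
    $key    = isset($_SERVER['HTTP_SEC_WEBSOCKET_KEY']) ? $_SERVER['HTTP_SEC_WEBSOCKET_KEY'] : '';
    $accept = base64_encode(sha1($key . '258EAFA5-E914-47DA-95CA-C5AB0DC85B11', true)); // GUID from RFC 6455

    header('HTTP/1.1 101 Switching Protocols');
    header('Upgrade: websocket');
    header('Connection: Upgrade');
    header('Sec-WebSocket-Accept: ' . $accept);

    function ws_frame($payload) {
        // minimal unmasked text frame, payload shorter than 126 bytes
        return chr(0x81) . chr(strlen($payload)) . $payload;
    }

    set_time_limit(0);
    while (ob_get_level()) { ob_end_flush(); }   // turn off output buffering

    while (true) {
        echo ws_frame('tick');   // server -> browser works...
        flush();
        sleep(1);                // ...browser -> server never reaches the script
    }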
Thanks!
The short version: what you're trying to do is simply not possible.
The long version: the best you can get is a one-way communication channel that looks like a WebSocket connection in your browser but only works in one direction, from the server to the browser. The other direction simply won't work, because the web server isn't aware that you're trying to use a protocol other than HTTP, and there is no way of telling it. At least not in the scenario you just outlined.
Your problem here is Apache itself. Once Apache has read the first HTTP request (the WebSocket handshake), it keeps reading from the TCP connection itself, waiting for additional HTTP requests. Thus any new data sent on the TCP connection will never be passed on to your script. This is necessary because HTTP/1.1 supports keep-alive by default, meaning multiple request/response cycles are done on one TCP connection; the browser doesn't open a new connection for each request (which was the default in HTTP/1.0). You can't change this behavior. To implement a WebSocket server you will need to set up your own socket.
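For illustration, "your own socket" means a standalone listener along these lines, run from the CLI, which is exactly the kind of daemon a shared host will kill (a minimal sketch, not a full WebSocket implementation; the port is arbitrary):

    <?php
    // Standalone listener, run from the command line rather than through
    // Apache. Port 8090 is arbitrary.
    $server = stream_socket_server('tcp://0.0.0.0:8090', $errno, $errstr);
    if (!$server) {
        die("Could not bind: $errstr ($errno)\n");
    }
    while ($conn = stream_socket_accept($server, -1)) {
        $handshake = fread($conn, 2048);   // raw HTTP upgrade request
        // ... compute Sec-WebSocket-Accept, write the 101 response,
        // then keep reading and writing frames on $conn as long as needed.
        fclose($conn);
    }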
After the WebSocket handshake is done, the protocol works pretty much like raw sockets, so there's no reason to expect Apache to allow even the unidirectional traffic once you're outside HTTP's request/response framing.
Related
I have a client device that requests a web page.
I am trying to send data to the client when a database table entry changes.
Problems: the client is not a "browser", i.e. client-side scripting won't do me any good here (it's a microcontroller).
Attempts: at first I was thinking of using PHP and the flush command. I could every so often output "waiting" to the client while still in a loop that checks the database for changes. This seems like a stretch of a method to me, as I don't think my server supports the function, and I don't really like it because it seems "dirty" :) ...
Next thought: have the PHP constantly poll the database for changes in a loop. The client should wait until the server finishes, and thus I would have a stable connection for "as long as it takes for a change to happen" (optimistic, I know :) ). If the connection does time out, I can have the client reconnect.
Now, a bit of a silly stretch: is server-side JavaScript a thing? Lol, yes, I asked... maybe there is something I don't know about...
I'm hoping someone here can help with this quest for knowledge.
Thanks JT
My client is currently:
Opening a TCP socket on port 8090, then opening a connection to my web site using that socket, the server address, and the server port number (80)... I'm not sure how to relate this type of socket to the type I would need to stream data very sparingly to the client.
If you need to stick with the HTTP protocol (see the comments for other possible methods), read about the meta refresh HTML header. It does what you ask without client-side scripting.
Another possibility would be to set up your DB updates as an RSS feed.
It does feel like better design not to use HTTP, though.
Non-HTTP-based notes:
1) Could you just do your current HTTP request, sleep a bit, then repeat the same process?
Nothing special or original, but this is fundamentally what the other options are doing.
2) Using the same socket, blocking reads would let you get more data as soon as it's available;
the web server may need its configuration adjusted to act as a streaming media server (see the sketch after these notes).
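Combining note 2 with the flush idea from the question, a rough long-poll sketch on the PHP side might look like this (table, column and connection details are made up; the microcontroller just keeps the HTTP request open, reads the body when it finally arrives, then reconnects):

    <?php
    // Long-poll endpoint: hold the request open until the table changes,
    // then answer and let the client reconnect. All names are illustrative.
    set_time_limit(0);
    $pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $last = (int) $_GET['last_id'];   // last row the client has already seen

    while (true) {
        $stmt = $pdo->prepare('SELECT id, value FROM readings WHERE id > ? ORDER BY id LIMIT 1');
        $stmt->execute(array($last));
        if ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            echo $row['id'] . ':' . $row['value'];
            flush();
            break;          // client reconnects with the new last_id
        }
        sleep(2);           // check the database every couple of seconds
    }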
As discussion, not a solution, have a look at streaming media
OS: Linux (Red Hat)
Programming Language: C++
I need to create a daemon (process) for Linux using C++ that will continuously listen on a custom port for PHP requests. The PHP will send the request data in XML form to the daemon; the daemon will parse the XML using Xerces and send back an appropriate reply, also in XML, to the PHP page.
I have successfully created a daemon process listening on port 4646 on localhost, but what I can't figure out is how the request from PHP will reach the daemon and how the daemon will send the reply back.
I tried Googling for this particular problem but couldn't find a solution at all.
Any kind of help on this problem will be very much appreciated.
I have also read a little about PHP daemons, but I'm not sure whether they are applicable in this particular scenario.
This approach is not hard and fast, so any alternative approach will also do. The only thing hard and fast is the result, i.e. successful communication between the PHP pages and the daemon.
The question is rather confused.
I need to create a daemon(process) for Linux using C/C++
Why does it have to be written in C or C++?
I have also read a little about PHP daemons, but I'm not sure whether they are applicable
Does that mean it doesn't need to be written in C/C++? Why do you think they might not be applicable?
the daemon will parse the XML using Xerces
Why does it have to use Xerces? Presumably the daemon is supposed to do something more than just parse XML and compose a response - what else does it do?
Writing a daemon is not a trivial process. Writing a socket server is not a trivial process. It is somewhat simplified by implementing a well defined protocol at each end.
...which rather begs the question: why not just use HTTP as the protocol and a web server to implement the server side, and separate the application-specific logic into a [f]CGI program? And taking this one step further, why not implement the application-specific logic in PHP?
The only thing hard and fast is the result, i.e. successful communication between the PHP pages and the daemon
Some options:
Write the application-specific part as a PHP page, then invoke it via an HTTP request using cURL
Write the server as a single-tasking stdio server and use [x]inetd to invoke it, handling the client-side connection as a network socket (requires that you define your protocol; see the sketch below)
Write a forking server daemon in PHP, handling the connection at both ends as a network socket (requires that you define your protocol)
Write a single-threaded server daemon (using socket_select) in PHP, handling the connection at both ends as a network socket (requires that you define your protocol)
Of course, anywhere I've mentioned PHP above you could equally use C, C++, Perl, Java, etc.
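As a sketch of option 2: the single-tasking stdio server under [x]inetd can be very small, because [x]inetd accepts the TCP connection and wires it to the process's stdin/stdout. The one-document-per-line framing below is an assumption; you would define your own:

    #!/usr/bin/php
    <?php
    // Single-tasking stdio server for [x]inetd: the accepted connection
    // is stdin/stdout, so the program just reads and writes.
    // Assumed framing: one XML document per line.
    while (($line = fgets(STDIN)) !== false) {
        $xml = @simplexml_load_string(trim($line));
        if ($xml === false) {
            fwrite(STDOUT, "<error>bad request</error>\n");
            continue;
        }
        // ... application-specific logic goes here ...
        fwrite(STDOUT, "<ok/>\n");
    }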
It's better to use the PHP socket library to connect to the daemon running on your system; then you can pass data to the daemon and process the result it sends back.
You can refer to the PHP socket library documentation for code to make a socket connection to the daemon...
I think this is a better option than using cURL, as the daemon is a custom socket interface. cURL is most suitable for HTTP requests, but here the daemon is not an HTTP one.
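For example, something along these lines on the PHP side (the framing used here, reading until the daemon closes the connection, is an assumption; a length prefix would work just as well, as long as both ends agree):

    <?php
    // Minimal sketch: send an XML request to the daemon on 127.0.0.1:4646
    // and read back the XML reply.
    $fp = fsockopen('127.0.0.1', 4646, $errno, $errstr, 5);
    if (!$fp) {
        die("Could not connect: $errstr ($errno)");
    }
    $request = '<request><action>lookup</action><id>42</id></request>'; // illustrative payload
    fwrite($fp, $request);

    $reply = '';
    while (!feof($fp)) {          // assumes the daemon closes the connection when done
        $reply .= fread($fp, 8192);
    }
    fclose($fp);
    // $reply now holds the daemon's XML answer, ready for SimpleXML/DOM parsing.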
xinetd/inetd might be a bit old-school, but it can make this easy and scalable (within limits).
inetd will call your program and send the traffic to stdin, and your stdout will go to the connection. As long as you don't need shared state, it saves you from having to worry about making the program bug-free, leak-free, etc.
Simon Loader
I'm looking for a way to keep track of an HTTP/1.1 connection that is kept alive across requests, in order to get an FTP-like session.
The idea would be to authenticate on the first request and then keep that authentication valid while the socket stays open (using the HTTP/1.1 keep-alive feature).
I've been searching for such a solution without much success so far.
I'm looking for information, such as:
Is there a socket ID available from Apache in PHP somewhere?
Is there a module that allows attaching information to an HTTP/1.1 connection (something that could be used from PHP)?
Any other idea?
You can't do that.
I don't think Apache (or any web server, for that matter) exposes socket IDs to a PHP script. But it may be possible to uniquely identify a connection using $_SERVER['REMOTE_ADDR'] and $_SERVER['REMOTE_PORT'] because the same client can't initiate two connections to the same server from the same ephemeral port at the same time.
But even if you do this, the keep-alive session probably won't stay around long enough to allow for meaningful human interaction like FTP does. Most keep-alive sessions are automatically terminated after 5-15 seconds, depending on your web server's configuration. You may be able to tweak this, but it's a very bad idea to keep connections alive for longer than that, especially when using Apache with mod_php, because a long-lasting connection can monopolize server resources at the expense of other users. Besides, computers and routers tend to reuse ephemeral ports, so there's no guarantee that you're talking to the same user. Things get even more complicated when you take into account all the proxies and reverse proxies that people use every day.
Just use sessions and/or cookies. They always work.
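In other words, the usual pattern is simply this (a minimal sketch; valid_login() stands in for whatever credential check you actually use):

    <?php
    // Authenticate once, then carry the authentication in the session
    // cookie instead of in the TCP connection.
    session_start();
    if (!isset($_SESSION['user'])) {
        if (isset($_POST['user']) && valid_login($_POST['user'], $_POST['pass'])) { // valid_login() is your own check
            $_SESSION['user'] = $_POST['user'];
        } else {
            header('HTTP/1.1 401 Unauthorized');
            exit;
        }
    }
    // From here on, $_SESSION['user'] identifies the "session" regardless
    // of which TCP connection the request arrived on.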
If you are not limited to Apache and can simply run PHP scripts, I think you can do it.
http://php.net/manual/en/book.sockets.php
PHP allows you to open a specific port and do low-level reads and writes on it. This way you can implement any existing protocol, or your own.
For this particular situation, you can have Apache running on a non-HTTP port and listen on the HTTP port from a custom PHP script. You then authenticate the request on connection, append a custom header (call it myauth) identifying the user, and forward the request to Apache. Make sure you filter out any myauth header already present while reading the original HTTP request.
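A very rough sketch of that idea (ports and the header name are illustrative; binding to port 80 needs root, and a real proxy would also have to handle keep-alive, chunked bodies, multiple requests per connection, and so on):

    <?php
    // Accept on :80, tag the request with a custom header, forward it to
    // Apache listening on :8080, and relay the response back.
    $server = stream_socket_server('tcp://0.0.0.0:80', $errno, $errstr);
    while ($client = stream_socket_accept($server, -1)) {
        $request = fread($client, 8192);                  // naive: assumes one read = one full request
        $user    = authenticate($request);                // authenticate() is your own logic
        $request = preg_replace("/\r\n\r\n/", "\r\nMyAuth: $user\r\n\r\n", $request, 1);

        $backend = stream_socket_client('tcp://127.0.0.1:8080', $berrno, $berrstr);
        fwrite($backend, $request);
        while (!feof($backend)) {
            fwrite($client, fread($backend, 8192));
        }
        fclose($backend);
        fclose($client);
    }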
Can someone explain the difference in behaviour between the following parameters:
the keep_alive parameter in the Zend_Http_Client class?
and
the persistent parameter in the Zend_Http_Client_Adapter_Socket class?
I'd like to understand what I need to do to keep a bunch of HTTPS connections open (to avoid renegotiating SSL).
Thanks,
Gaston
If you use a persistent connection you should use keep-alive as well; without a keep-alive HTTP/1.1 connection, your persistent connection will have to do a lot of work to emulate the same behaviour.
Edit: (it was time to eat)
Keep-alive is governed by a quite short timeout set by the server. Apache handles keep-alive requests for 15s by default, but a commonly recommended optimised setting is 5s. This is mostly done to help the HTTP client download the JS and CSS attached to a page over the same HTTP connection. If you can adjust the server settings, you can try longer keep-alive timeouts (but be careful: this will seriously limit the number of clients your server can accept).
Persistent connection mode really does emulate a long-term persistent connection: the opened socket is not closed at the end of the script. You should be very careful with such a setting. Are you in CLI mode? FCGI? If you're running in an Apache process, I'm really not sure you'll get the same connection on the next request to this script (which may be handled by another Apache process); it's even worse if your code runs on several Apache servers in a large deployment. And that's just the client (PHP) side; it can also be a big pain for the targeted server.
Re-edit: (as something about SSL must be said)
Are you sure you need to optimise SSL negotiation time? SSL uses a cache, at least on the server side, to limit the negotiation to the first request. Client-side caching of the SSL session may be done by the PHP stream_socket_client function (which is used by the Zend class). If not, you could write an adapter class of your own (you just need to implement the interface) and try using cURL, as cURL uses SSL session caching by default.
Keep-alive means that the connection may be reused over the course of an individual PHP request, though it will be closed at the end of that request. Persistent means that the connection will survive beyond the individual request, so as to be used by a subsequent request handled by the same PHP process.
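In ZF1 terms, that corresponds roughly to the following configuration (as far as I can tell the config keys are spelled keepalive and persistent; check the class defaults for your version):

    <?php
    // Sketch only: ask the client to keep the HTTP connection alive, and
    // ask the socket adapter not to close the socket when the script ends.
    $adapter = new Zend_Http_Client_Adapter_Socket();
    $adapter->setConfig(array('persistent' => true));

    $client = new Zend_Http_Client('https://api.example.com/first',
                                   array('keepalive' => true));
    $client->setAdapter($adapter);

    $first = $client->request();

    $client->setUri('https://api.example.com/second');   // ideally reuses the SSL connection
    $second = $client->request();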
FYI, while keep-alive is supposed to reuse the HTTP connection, the adapter class (at least as recently as 1.10) doesn't handle this correctly and opens a new connection regardless of the flag.
Is it possible to implement P2P using just PHP? Without Flash or Java, and obviously without installing some sort of agent/client on the user's computer.
So even though it might not be "true" P2P, it could use the server to establish a connection of some sort, but the rest of the communication must be done peer-to-peer.
I apologize for the slight miscommunication: by "PHP" I meant not the PHP binary, but a PHP script hosted on a web server remote from both peers, so each peer has nothing but a browser.
without installing some sort of agent/client on one's computer
Each computer would have to have the PHP binaries installed.
EDIT
I see in a different post you mentioned this is browser-based. Security restrictions in JavaScript would prohibit this type of interaction.
No.
You could write a P2P client / server in PHP — but it would have to be installed on the participating computers.
You can't have PHP running on a webserver cause two other computers to communicate with each other without having P2P software installed.
You can't even use JavaScript to help — the same origin policy would prevent it.
JavaScript running a browser could use a PHP based server as a middleman so that two clients could communicate — but you aren't going to achieve P2P.
Since 2009 (when this answer was originally written), the WebRTC protocol has been developed and has achieved widespread support among browsers.
This allows you to perform peer-to-peer communication between web browsers, but you need to write the code in JavaScript (WebAssembly might also be an option, and one that would let you write PHP).
You also need a bunch of non-peer server code to support WebRTC (e.g. to allow peer discovery and to proxy data around firewalls), which you could write in PHP.
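As an illustration, the non-peer part can be as simple as a store-and-forward signalling endpoint that both browsers poll while setting up the WebRTC connection (a naive, file-based sketch; the paths and parameter names are made up, and real signalling is usually done over WebSockets with proper locking and cleanup):

    <?php
    // Naive signalling relay: browser A POSTs its SDP/ICE blob under a
    // room id, browser B polls it back with GET. Illustrative only.
    $room = preg_replace('/[^a-z0-9]/i', '', isset($_GET['room']) ? $_GET['room'] : '');
    $file = sys_get_temp_dir() . '/signal_' . $room . '.json';

    if ($_SERVER['REQUEST_METHOD'] === 'POST') {
        file_put_contents($file, file_get_contents('php://input'));
        http_response_code(204);
    } else {
        header('Content-Type: application/json');
        echo file_exists($file) ? file_get_contents($file) : '{}';
    }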
It is not feasible, because a server-side application (PHP) does not have access to the peer's system, which would be required to define ports, IP addresses, etc. in order to establish a socket connection.
ADDITION:
But if you were to run PHP on a web server on each peer, that might give you what you're looking for.
Doesn't peer-to-peer communication imply that communication goes directly from one client to another, without any servers in the middle? Since PHP is server-based software, I don't think any program you write in it can be considered true P2P.
However, if you want to enable client-to-client communication with a PHP server as the middleman, that's definitely possible.
It depends on whether you want the browser to be sending data to this PHP application.
I've made IRC bots entirely in PHP, though, which showed their status and output in my web browser in a fashion much like mIRC. I just set the timeout limit to infinite and connected to the IRC server using sockets. You could connect to anything, though. You can even make it listen for incoming connections and handle them.
What you can't do is get a browser to keep a two-way connection open without breaking off requests (not yet, anyway...).
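For what it's worth, the IRC-bot style connection is just an ordinary outgoing socket kept open indefinitely, something like this (sketch; the server name is a placeholder):

    <?php
    // Sketch of the IRC-bot approach: a plain outgoing socket from PHP,
    // held open for as long as the script is allowed to run.
    set_time_limit(0);
    $irc = fsockopen('irc.example.net', 6667, $errno, $errstr, 10);
    fwrite($irc, "NICK phpbot\r\n");
    fwrite($irc, "USER phpbot 0 * :php bot\r\n");

    while (!feof($irc)) {
        $line = fgets($irc, 512);
        if (strpos($line, 'PING') === 0) {            // answer pings to stay connected
            fwrite($irc, 'PONG' . substr($line, 4));
        }
        echo $line;                                    // status/output ends up in the browser
    }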
Yes, but it's not what's generally called P2P, since there is a server in between. I have a feeling, though, that what you want is to have your peers communicate with each other, rather than a direct connection between them with no "middleman" server (which is what is normally meant by P2P).
Depending on the scalability requirements, implementing this kind of communication can be trivial (a simple polling script on the clients) or demanding (an asynchronous Comet server).
In case someone comes here wondering whether you can write P2P software in PHP: the answer is yes. In this case, Quentin's answer to the original question is correct; PHP would have to be installed on the computer.
You can do whatever you want in PHP, including writing true P2P software. To create a true P2P program in PHP, you would use PHP as an interpreted language WITHOUT a web server, and you would use sockets, just like you would in C/C++. The original accepted answer is both right and wrong, unless the original poster was asking whether PHP running on a web server could be a P2P client, in which case the answer is of course no.
Basically, to do this you'd write a PHP script that (see the sketch below):
Opens a server socket connection (stream_socket_server/socket_create)
Finds a list of peer IPs
Opens a client connection to each peer
...
Prove everyone wrong.
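Something along these lines, run from the CLI (sketch only; peer discovery, NAT traversal and the actual protocol are the hard parts and are left out, and the addresses and port are placeholders):

    <?php
    // CLI-only sketch of a P2P node: listen for incoming peers and also
    // connect out to a known peer list. No web server involved anywhere.
    set_time_limit(0);
    $listen = stream_socket_server('tcp://0.0.0.0:6000', $errno, $errstr);
    $peers  = array('203.0.113.5:6000', '203.0.113.9:6000');   // obtained somehow (tracker, config file, ...)

    foreach ($peers as $peer) {
        $out = @stream_socket_client('tcp://' . $peer, $cerrno, $cerrstr, 3);
        if ($out) {
            fwrite($out, "HELLO\n");    // your own protocol goes here
            fclose($out);
        }
    }

    while ($in = stream_socket_accept($listen, -1)) {
        echo 'peer said: ' . fgets($in);
        fclose($in);
    }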
No, not really. PHP scripts are meant to run only for a very small amount of time. Usually the default maximum runtime is a couple of minutes, which will normally not be enough for P2P communication. After this the script will be cancelled, though the server administrator can deactivate that limit. But even then, the HTTP connection between the server and the client must be held open for the whole download time, and the client's browser will show its page-loading indicator during that time. If the connection breaks, most web servers will kill the PHP script, so the P2P download is cancelled.
So it may be possible to implement the P2P protocol itself, but in a client/server scenario you run into problems with the execution model of PHP scripts.
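(If the host does allow it, the relevant knobs on the PHP side are roughly these:)

    <?php
    set_time_limit(0);         // lift the script runtime limit, where permitted
    ignore_user_abort(true);   // keep running even if the HTTP client disconnects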
Both parties would need to be running a server such as Apache, although for demonstration purposes you could get away with just using PHP's built-in test server. Next, you are going to have to research firewall hole punching in PHP; I saw a script, I think on GitHub, but that was a long time ago. Yes, it can be done. If your client is not a savvy programmer type, you would probably need to ensure that they have PHP installed and running. The PATH variable may not work unless you add it to the system environment in Windows, so make sure you provide a .bat file that ensures the path is set so Windows can find PHP. Sorry, I am not a Linux user.
Next you have to develop the code. There are instructions for how hole punching works, and it does require a publicly reachable server to allow the two computers to find each other's IP addresses. Maybe you could rig up something on a free host such as www.000.webhost.com; alternatively you could use some kind of built-in mechanism, such as using the person's email address to report the current IP.
The biggest problem is routers and firewalls, but packets, even if they are directed at a public IP, still need to know the destination on the LAN, so the information on how to write the packet should be straightforward. With any luck you might find a script that has done most of the work for you.