This is complicated, and not necessarily one question. I'd appreciate any possible help.
I've read that it is possible to have websockets without server access, but I cannot seem to find any examples that show how it's done. I've come to the conclusion that I need this based on the following two things:
I've been struggling for the past several hours trying to figure out how to even get websockets to work with the WAMP server I have on my machine, to which I have root access. I've installed Composer, but cannot figure out how to use the composer.phar file to install Ratchet. I have tried other PHP websocket implementations (I'd prefer it be in PHP), but still cannot get them to work.
The webhost I'm currently using to test things out on is a free host, and it doesn't allow SSH access. So even if I could figure out how to get websockets working with root access, it would be a moot point when it comes to that host.
I've also found free VPS hosts by googling (limited in every way, of course, but with full root access), but I'd prefer to keep something that allows more bandwidth (my current free host is unlimited). I've also read that you can (and should) host the websocket server on a different subdomain than the HTTP server, and that it can even run on a different domain entirely.
It might also eventually be cheaper to host my own site (I have no real idea about that), but in that case I'd still need to figure out how to get websockets working on my own machine.
So, if anyone can follow what I'm asking, there are really several questions here:
1. Is it possible to use websockets without root access, and if so, how?
2. If the first question isn't truly possible: how do I properly install Ratchet when I can't figure out the composer.phar file? (I have a composer.json with the Ratchet requirement in it, but I'm not sure it's in the right directory.)
3. Is it possible to have the websocket server on a VPS and the HTTP server on an entirely different domain, and if so, is there any documentation anywhere about that setup?
I mean, of course, there is the option of using AJAX: forcing the browser to reload a JS file every so often and using jQuery to update a series of divs whether or not anything has changed. But that could get complicated, and I'm not even sure it's workable (I don't see why it wouldn't be). Either way, I'd prefer websockets, since I hear they are much less resource-hungry than that sort of polling would be.
A plain PHP file running under vanilla LAMP (i.e. mod_php under Apache) cannot handle WebSocket connections. It wouldn't be able to perform the protocol upgrade, let alone actually perform real-time communication, at least through Apache. In theory, you could have a very-long-running web request to a PHP file which runs a TCP server to serve WebSocket requests, but this is impractical and I doubt a shared host will actually allow PHP to do that.
There may be some shared hosts that make WebSocket hosting with PHP possible, but they can't offer that without either SSH/shell access or some other way to run PHP outside the web server. If they're just giving you a directory to upload PHP files to, and serving them with Apache, you're out of luck.
As for your trouble with Composer, I don't know if it's possible to run composer.phar on a shared host without some kind of shell access. Some hosts (e.g. Heroku) have specific support for Composer.
Regarding running a WebSocket server on an entirely different domain, you can indeed do that. Just point your JavaScript to connect to that domain, and make sure the WebSocket server is configured to accept connections from your page's origin (the browser sends an Origin header with the handshake, and the server decides whether to allow it).
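For illustration, a minimal sketch of how that might look with Ratchet (which the question mentions); the hostnames, the port and the MyApp\Chat class are placeholders I've invented, not anything from this thread:

```php
<?php
// server.php, started from a shell: php server.php
// A minimal sketch assuming Ratchet is installed via Composer and that
// MyApp\Chat is your own class implementing Ratchet\MessageComponentInterface.
require __DIR__ . '/vendor/autoload.php';

use Ratchet\App;

// Listen on ws.example.com:8080 and only accept handshakes whose Origin
// header is the main site (example.com).
$app = new App('ws.example.com', 8080, '0.0.0.0');
$app->route('/chat', new MyApp\Chat(), ['example.com']);
$app->run();
```

The JavaScript on the example.com page then simply connects to ws://ws.example.com:8080/chat.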
OK... you have a few questions, so I will try to answer them one by one.
1. What to use
You could use Socket.IO. It's a JavaScript library for developing realtime web applications. It consists of two parts: a client side (which runs in the visitor's browser) and a server side (which runs on Node.js). Basic usage requires almost no background knowledge of Node.js. There is an example tutorial for a simple chat app on the official Socket.IO website.
2. Hosting
Most hosting providers have a control panel (e.g. cPanel) with the ability to install/activate various Apache plugins and so on. First check whether Node.js is already available; if not, you could contact support and ask whether adding it would be an option.
If you don't have any luck with your current hosting provider you could always switch hosts quickly as there are a lot of good deals out there. Google will definitely help you here.
Here is a list containing a few of what may be the best options. Keep in mind that although some hosting deals are paid, there are plenty of low-cost options to choose from.
3. Bandwidth
Since you are worried about "resource hungry" code, maybe you can try hosting some of your content on Amazon CloudFront. It's a widely used content delivery network that gives quick connections and fast resource loading, because files are served from the server closest to the client. The best part is that you only pay for what you actually use, so if you don't have much traffic it would be really cheap to run and still reliable!
Hope this helps ;)
So I wrote this nice SAAS solution and got some real clients. Now a request was made by a client to add some functionality that requires websockets.
In order to keep things tidy, I would like to use another server to manage the websockets.
My current stack is an AWS Application Load Balancer with two servers behind it. One is the current application server: an Apache web server with PHP, running the application.
The client side is driven by AngularJS.
The second server (which does not exist yet) is going to run Nginx, and sessions will be stored on a third Memcache server.
The websocket server and the application server are both on the same domain, using different ports in order to send requests to the right server (AWS ELB allows routing requests to different server groups by port). Both the application and the websockets will be driven by PHP and Ratchet.
I have two questions:
The first is for the more experienced developers: does such an architecture sound reasonable? (I'm not aiming for hundreds of thousands of concurrent connections yet; I need a viable and affordable solution aimed at a maximum of 5,000 concurrent connections at this stage.)
Second, what would be the best way to send requests from the application server (which has the logic to generate the requests) to the websocket server?
Please notice I'm new to websockets so maybe there are much better ways to do this - I'll be grateful for any idea.
I'm in the middle of using Ratchet with a SPA to power a web app. I'm using Traefik as a front-end proxy, but yes, Nginx is popular here, and I'm sure that would be fine. I like Traefik for its ability to seamlessly redirect traffic based on config file changes, API triggers, and configuration changes from the likes of Kubernetes.
I agree with Michael in the comments - using ports for web sockets other than 80 or 443 may cause your users to experience connection problems. Home connections are generally fine on non-standard ports, but public wifi, business firewalls and mobile data can all present problems, and it's probably best not to risk it.
Having done a bit of reading around, I think your 5,000 concurrent connections will probably need OS-level tweaks (e.g. raising the open file descriptor limit). I seem to recall 1,024 connections can be handled comfortably, but several times that level would need testing (e.g. see here, though note the comment stream goes back a couple of years). Perhaps you could set up a test harness that fires web socket requests at your app, e.g. using two Docker containers? That will give you a way to understand what limits you will run into at this sort of scale.
Your maximum number of concurrent users suggests you're working at an absolutely enormous scale, given that any given userbase will usually not all be online at the same time. One thing you can do (and I plan to do the same in my case) is to add a reconnection strategy in your frontend that does a small number of retries and then pops up a manual reconnect box (Trello does something like this). Given that some connections will be flaky, it is probably a good idea to give some of them a chance to die off, so you're left with fewer connections to manage. You can also add an idle timer in the frontend, to avoid pointlessly keeping unused connections open.
If you want to do some functional testing, consider PhantomJS - I use PHPUnit Spiderling here, and web sockets seems to work fine (I have only tried one at a time so far, mind you).
There are several PHP libraries to send web socket requests. I use Websocket Client for PHP, and it has worked flawlessly for me so far.
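As a rough sketch of that (assuming the Textalk "Websocket Client for PHP" is installed via Composer; the URI, path and payload here are made up), pushing an event from the application server could look something like this:

```php
<?php
// push.php, run on the application server when something happens that the
// websocket clients should hear about. A minimal sketch assuming the
// textalk/websocket Composer package; the URI and payload are placeholders.
require __DIR__ . '/vendor/autoload.php';

use WebSocket\Client;

$client = new Client('ws://sockets.example.com:8080/push');
$client->send(json_encode([
    'event' => 'status.updated',   // hypothetical event name
    'user'  => 42,                 // hypothetical payload
]));
$client->close();
```

If opening a fresh connection per message ever becomes a bottleneck, Ratchet's documentation also describes a push setup that uses ZeroMQ between the application and the websocket process, which avoids a full WebSocket handshake per message.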
A friend of ours is running the Plover software for her closed caption and other reporting work. She is trying to find a way to have this post in real time on a local server for others (Hard of Hearing) to watch in real time (but not allow them to edit) from their tablets or laptops on a LAN.
This would be similar to what Stack Overflow does when editing (but over a LAN rather than on the same machine). I type in an edit box, and it prints below in real time. How is this being done? Is there a place to find this code?
I can help her get the WiFi or Bluetooth to connect to their systems; I just don't know how to get it to push to them. The reporting machine will be running Ubuntu. If I need to install Apache and PHP for her, that's fine; I'd just guide them to a URL.
It sounds like the Plover software works at a (keyboard) device driver level, and so can be used to enter steno-to-text in any desktop application. Thus I would arrange things this way:
Put Apache on your reporting server, and set up a web application on there which shows a text box. You can use sockets (ideally) or AJAX (as a fallback) to transport your text from a browser to the server. This can then be sent out to any number of clients on a different page, probably via a database as an intermediate store. AJAX requires frequent polls and so is inefficient and slow, but on a LAN with a small number of users it would still be OK. Sockets are better but need a library to implement - take a look here at a PHP example.
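To make the sockets route a little more concrete, here is a hedged sketch of the kind of relay class you would write with a PHP socket library such as Ratchet; the namespace, class name and behaviour are placeholders for whatever your app actually needs:

```php
<?php
// Broadcaster.php - relays each incoming message to every other connected client.
// Assumes Ratchet is installed via Composer; the names here are illustrative only.
namespace Captions;

use Ratchet\ConnectionInterface;
use Ratchet\MessageComponentInterface;

class Broadcaster implements MessageComponentInterface
{
    /** @var \SplObjectStorage Connected clients (the reporter plus the viewers). */
    protected $clients;

    public function __construct()
    {
        $this->clients = new \SplObjectStorage();
    }

    public function onOpen(ConnectionInterface $conn)
    {
        $this->clients->attach($conn);
    }

    public function onMessage(ConnectionInterface $from, $msg)
    {
        // Push the reporter's latest text to every viewer except the sender.
        foreach ($this->clients as $client) {
            if ($client !== $from) {
                $client->send($msg);
            }
        }
    }

    public function onClose(ConnectionInterface $conn)
    {
        $this->clients->detach($conn);
    }

    public function onError(ConnectionInterface $conn, \Exception $e)
    {
        $conn->close();
    }
}
```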
Take a look at this answer to understand the different ways a browser and server can communicate (especially the section on HTML5 Websockets). Pusher is mentioned - that makes it really easy, but if you are broadcasting on a LAN it seems pointless to need the internet. I'd do it myself, for what it's worth.
If you want to stick with AJAX, jQuery, MooTools or Prototype is fine. If you want to use sockets there are several libraries that will use sockets first, and then fall back to a variety of technologies (long polling) and then finally AJAX. This will depend mostly on browser support for these various things.
I believe web sockets need a server component for which Apache is generally considered unsuitable. The first link I gave, for the Ratchet library, looks like it has its own listener component. Perhaps a good first step would be to work through the demos, so you can understand the technology and customise it for your needs?
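For what it's worth, the Ratchet demos boil down to a small standalone listener along these lines (this reuses the Broadcaster sketch above; the port is arbitrary):

```php
<?php
// server.php, run from a terminal on the reporting machine: php server.php
// Ratchet runs its own event loop here; Apache is not involved in the
// websocket side at all.
require __DIR__ . '/vendor/autoload.php';

use Captions\Broadcaster;
use Ratchet\Http\HttpServer;
use Ratchet\Server\IoServer;
use Ratchet\WebSocket\WsServer;

$server = IoServer::factory(
    new HttpServer(
        new WsServer(new Broadcaster())
    ),
    8080   // any free port; 8080 is just an example
);
$server->run();
```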
Earlier today, I asked a question on the Programmers StackExchange: Is it bad practice to run Node.js and apache in parallel?
My end application can be considered a social network in which I want to have a chat feature and a normal status update feature.
For the chat feature, I'd like to use Node.js because I want to push data from the server to the client instead of polling the server frequently. For the status update, I want a normal apache and PHP installation, because I'm way more familiar with that and don't see why I'd use Node.js for that.
However, that would mean I'd have to run Node.js and apache in parallel. Whilst that is possible and not considered bad practice according to the answer on Programmers.SE, I do see a few technical problems:
I'd need two ports open, which could be a problem on networks that don't have all ports open
I can't use my shared-server because I'm not allowed to open a port there, so I'd have to buy a VPS
I don't care too much about the second one, more about the first one. So are there really no solutions to combine both features on one port?
Or is there some workaround for the ports? Could I, for example, redirect subdomain.domain.com:80 to domain.com:x where x is the port of Node.js? Would that be possible and solve my problem? This solution was given in this Programmers.SE answer, but how would I go about implementing it?
You could proxy all requests to node.js through Apache (using mod_proxy), so you won't have any trouble with multiple open ports. This also allows you to remap everything to subfolders or subdomains.
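As a rough sketch (mod_proxy and mod_proxy_http are real Apache modules, but the domain, the /chat/ prefix and port 3000 are placeholders), the vhost configuration might look like:

```apache
# Sketch of an Apache vhost forwarding one path prefix to a local Node.js process.
# Requires mod_proxy and mod_proxy_http to be enabled.
<VirtualHost *:80>
    ServerName example.com

    # Everything under /chat/ is handed to Node.js; Apache serves the rest (PHP etc.).
    ProxyPass        /chat/ http://127.0.0.1:3000/
    ProxyPassReverse /chat/ http://127.0.0.1:3000/
</VirtualHost>
```

If the Node.js side speaks WebSockets rather than plain HTTP, you would additionally need mod_proxy_wstunnel and a matching ProxyPass rule for the ws:// scheme.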
This is not the best solution performance-wise, but if you are on shared web space it doesn't really matter. (Shared servers are usually pretty slow anyway, and if you get a larger user base you'll need to move to a separate server sooner or later either way.)
As @TheHippo said, you can do this with Apache's mod_proxy.
Nginx, however, may be faster, especially if you're running PHP >= 5.4 with FastCGI.
Nginx is also a better reverse proxy than Apache, and its event-based model is in line with Node's event-based I/O. With the proper setup this could mean better overall performance.
If you're in a restricted environment (like a shared server, or no ability to change the web server) then you should go with Apache and mod_proxy.
With all the buzz around WebSockets, it's surprisingly hard to find, via Google, a good walkthrough on how to use them with an Apache server.
We're developing a plugin in PHP (Symfony2) which will, from time to time, run a kind of chat instance. We find WebSockets more interesting, more standard and quicker than AJAX for this. The thing is, we don't have many sysadmin resources in our group, and we find it hard to gather good information on the following:
Can we run a WebSocket instance on a traditional Apache dedicated server, and if so, do you have useful links for us?
If we need to modify the server, what kind of tools would you recommend, knowing that we are not too skilled at sysadmin work, so we can't afford to have a high-maintenance b*** on this?
Thank you very much,
ps: we'll link back to your blog/site as we'll make a technical/informational post on our devblog about this part of our app.
Thank you again!
As @zaf states, you are more likely to find a standalone PHP solution, not something that runs within Apache. That said, there is an Apache WebSocket module.
However, the fundamental problem is that Apache wasn't built with maintaining many persistent connections in mind. It, along with PHP, is built on the idea that requests are made and responses are quickly sent back. This means that resources can very quickly be used up if you are holding requests open and you're going to need to look into horizontal scaling pretty quickly.
Personally I think you have two options:
Use an alternative realtime web technology solution and communicate between your web application and realtime web infrastructure using queues or short-lived requests (web services).
Offload the handling of persistent connections and the scaling of the realtime web infrastructure to a hosted realtime web service. I work for Pusher and we fall into this category.
For both self-hosted and hosted options you can check out my realtime web tech guide.
One path is to use an independently installed WebSocket server.
For PHP you can try:
http://code.google.com/p/phpwebsocket/ or http://github.com/Devristo/phpws/
There are some other projects which you can try as well.
Basically, you need to upload, unpack and start running the process.
On the frontend, you'll have javascript connecting to the server on the specific port.
Most websocket servers have a demo which echoes back whatever it hears, so this is a good place to write some test code. You may even find a rudimentary chat implementation.
The tricky part is to monitor the web socket server and to make sure it runs smoothly and continuously.
Try to test on as many browsers/devices as possible, as this will influence which websocket server implementation you choose. There are old and new protocol versions you have to watch out for.
Let me add another websocket server: PHP Ratchet (GitHub).
It has a better and more complete list of client- and server-side code and browser support; please check this link.
Another path is to use a dedicated websocket server.
Try the Achex Websocket Server at www.achex.ca and check out the tutorials.
OR
If you really want Apache, check out Apache Camel (but you have to set it up yourself, and it's a bit more complicated than the Achex server).
http://camel.apache.org/websocket.html
Is it possible to implement P2P using just PHP? Without Flash or Java, and obviously without installing some sort of agent/client on anyone's computer.
So even though it might not be "true" P2P, it could use a server to establish a connection of some sort, but the rest of the communication must be done peer-to-peer.
I apologize for the slight miscommunication: by "PHP" I meant not a PHP binary, but a PHP script hosted on a web server remote from both peers, so each peer has nothing but a browser.
"without installing some sort of agent/client on one's computer"
Each computer would have to have the PHP binaries installed.
EDIT
I see in a different post that you mentioned this being browser-based. Security restrictions in JavaScript would prohibit this type of interaction.
No.
You could write a P2P client / server in PHP — but it would have to be installed on the participating computers.
You can't have PHP running on a webserver cause two other computers to communicate with each other without having P2P software installed.
You can't even use JavaScript to help — the same origin policy would prevent it.
JavaScript running a browser could use a PHP based server as a middleman so that two clients could communicate — but you aren't going to achieve P2P.
Since 2009 (when this answer was originally written), the WebRTC protocol has been developed and has achieved widespread support among browsers.
This allows you to perform peer-to-peer between web browsers but you need to write the code in JavaScript (WebAssembly might also be an option and one that would let you write PHP.)
You also need a bunch of non-peer server code to support WebRTC (e.g. to allow peer discovery and to proxy data around firewalls), which you could write in PHP.
It is not feasible, because a server-side application (PHP) does not have access to the peer's system, which it would need in order to define the ports, IP addresses, etc. required to establish a socket connection.
ADDITION:
But if you were to run PHP on a web server on each peer's machine, that might give you what you're looking for.
Doesn't peer-to-peer communication imply that communication is going directly from one client to another, without any servers in the middle? Since PHP is a server-based software, I don't think any program you write on it can be considered true p2p.
However, if you want to enable client to client communications with a php server as the middle man, that's definitely possible.
It depends on whether you want the browser to be sending data to this PHP application.
I've made IRC bots entirely in PHP though, which showed their status and output in my web browser in a fashion much like mIRC. I just set the timeout limit to infinite and connected to the IRC server using sockets. You could connect to anything though. You can even make it listen for incoming connections and handle them.
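For flavour, a stripped-down sketch of that pattern (the host, port and nick are invented, and this is nowhere near a complete IRC client):

```php
<?php
// bot.php, run from the CLI: php bot.php
// A long-running PHP socket client along the lines described above.
set_time_limit(0);                          // let the script run indefinitely

$socket = fsockopen('irc.example.net', 6667, $errno, $errstr, 30);
if (!$socket) {
    die("Could not connect: $errstr ($errno)\n");
}

fwrite($socket, "NICK examplebot\r\n");
fwrite($socket, "USER examplebot 0 * :Example Bot\r\n");

while (!feof($socket)) {
    $line = trim((string) fgets($socket, 512));
    echo $line . "\n";                      // show status/output, mIRC-style

    // Answer the server's keep-alive pings so the connection stays open.
    if (strpos($line, 'PING') === 0) {
        fwrite($socket, 'PONG ' . substr($line, 5) . "\r\n");
    }
}
fclose($socket);
```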
What you can't do is get a browser to keep a two-way connection open without breaking off requests (not yet, anyway...).
Yes, but it's not what's generally called P2P, since there is a server in between. I have a feeling, though, that what you want is to have your peers communicate with each other, rather than to have a direct connection between them with no 'middleman' server (which is what is normally meant by P2P).
Depending on the scalability requirements, implementing this kind of communication can be trivial (simple polling script on clients), or demanding (asynchronous comet server).
In case someone comes here wondering whether you can write P2P software in PHP, the answer is yes. In that case Quentin's answer to the original question is correct: PHP would have to be installed on each computer.
You can do whatever you want to do in PHP, including writing true P2P software. To create a true P2P program in PHP, you would use PHP as an interpreted language WITHOUT a web server, and you would use sockets, just as you would in C/C++. The original accepted answer is both right and wrong: it only applies if the original poster was asking whether PHP running on a web server could be a P2P client, to which the answer is of course no.
Basically, to do this you'd write a PHP script that does the following (a rough sketch appears at the end of this answer):
Opens a server socket connection (stream_socket_server/socket_create)
Finds a list of peer IPs
Opens a client connection to each peer
...
Prove everyone wrong.
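To make that less abstract, here is a very rough sketch using PHP's stream socket functions from the CLI; the ports and peer addresses are placeholders, and there is no NAT traversal or real protocol here:

```php
<?php
// peer.php, run from the CLI on each machine: php peer.php <listen-port> [peer-ip:port ...]
// Raw TCP between peers only; everything about the "protocol" is invented.
set_time_limit(0);

$myPort = isset($argv[1]) ? $argv[1] : 9000;         // placeholder default port
$peers  = array_slice($argv, 2);                     // e.g. 192.0.2.10:9000 (placeholder)

// 1. Open a server socket so other peers can reach us.
$server = stream_socket_server("tcp://0.0.0.0:$myPort", $errno, $errstr);
if (!$server) {
    die("Could not listen: $errstr ($errno)\n");
}

// 2./3. Open a client connection to each known peer and announce ourselves.
foreach ($peers as $peer) {
    $conn = @stream_socket_client("tcp://$peer", $errno, $errstr, 5);
    if ($conn) {
        fwrite($conn, "HELLO from port $myPort\n");
        fclose($conn);
    }
}

// Keep accepting and printing whatever other peers send us.
while (true) {
    $incoming = @stream_socket_accept($server, 5);   // wait up to 5s, then loop again
    if ($incoming) {
        echo fgets($incoming);
        fclose($incoming);
    }
}
```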
No, not really. PHP scripts are meant to run only for a very small amount of time. The default maximum runtime (30 seconds out of the box, a couple of minutes on some hosts) is normally not enough for P2P communication; after that the script is cancelled, although the server administrator can disable that limit. But even then, the HTTP connection between the server and the client must be held open for the whole download, and the client's browser will show its page-loading indicator the entire time. If the connection breaks, most web servers will kill the PHP script, so the P2P download is cancelled.
So it may be possible to implement the p2p protocol, but in a client/server scenario you run into problems with the execution model of php scripts.
Both parties would need to be running a server such as Apache, although for demonstration purposes you could get away with just using PHP's built-in test server. Next, you are going to have to research firewall hole punching in PHP; I saw a script for this, I think on GitHub, but that was a long time ago. Yes, it can be done. If your client is not a savvy programmer type, you would probably need to ensure that they have PHP installed and running. The PATH variable may not work unless you add it to the system environment in Windows, so make sure you provide a .bat file that ensures the path is set so Windows can find PHP. (Sorry, I am not a Linux user.)
Next you have to develop the code. There are instructions for how hole punching works; it does require a server at a public address, which is needed so that the two computers can find each other's IP addresses. Maybe you could rig up something on a free website such as www.000webhost.com, or alternatively use some kind of built-in mechanism, such as the person's email address, to report the current IP.
The biggest problem is routers and firewalls, but even packets directed at a public IP still need to know the destination on the LAN, and the information on how to write such packets should be straightforward to find. With any luck you might find a script that has done most of the work for you.