Remote rendering of a PHP file

This question might well not have an answer, or it might be a silly one. I guess we will find out.
My situation is the following:
I am working with an embedded device. On the embedded device there are 10-20 webservices running -- you can think of them as small webservers. Each of them is on its own port. This device might have the IP 152.0.0.1.
A top-level web interface is wrapped around these webservices. This web interface runs on a standard LEMP setup (Debian). Say this has the IP 10.0.0.1.
To work on the embedded device, one can locally jump from webservice to webservice by calling 152.0.0.1:xxxx, 152.0.0.1:xxxy, 152.0.0.1:xxxz in a (local) web browser. This works fine. It seems crazy to have a setup like this, but I do not have the possibility to change things on this end.
What I would like to do is to embed the small webservices in the top-level web interface, which will have a navigation bar where one can choose which one to look at.
The two systems are not on the same network but are connected via an SSH tunnel. However, the ports themselves are dynamic, so I cannot simply hard-code them in the top-level web interface. I should be able to build the navbar using dynamic ports obtained from a database -- that much is possible. Since there will be multiple such embedded devices, there will be multiple ports to forward over the SSH connection. This is the problematic part.
My question is: is there actually a way to get rid of this port dilemma purely at the nginx/PHP 7 level? The embedded device should keep its own ports, but I would not want to call them explicitly from the web interface. Also, each device having its own set of ports is quite cumbersome. I could use an nginx reverse proxy to map the ports to URLs, but I would have to do that on the embedded hardware, which is complicated. I could do it on the Debian server as well using nginx, but that means I would have to forward a whole bunch of ports from the embedded device to the Debian server, which is not nice.
Is there a way I can make the embedded device build a page all on its own, using its internal port scheme, and then display that on the top-level web interface? I am thinking of a command that allows me to do so, e.g.
buildpage($device, $port) on the embedded device alone, and then "forward" the entire content of that built page to the top-level web interface. The top-level web interface would then never have to deal with the port confusion on the embedded device; it just says which port to use, the embedded device internally pre-renders the page and sends it to the top-level web interface. The point being, the aforementioned SSH tunnel would no longer have to forward a whole bunch of ports, but possibly only one -- which is the ultimate goal!
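To make the idea concrete, here is a minimal sketch of what such a relay could look like as a single PHP script served by the embedded device itself. It is only an illustration: the script name, parameter names and port whitelist are assumptions, not an existing API. It takes the internal port as a parameter, fetches the page from 127.0.0.1 on that port with cURL and passes the content through, so the SSH tunnel would only need to forward the one port this script is served on.

<?php
// relay.php -- hypothetical single entry point on the embedded device.
// Only the port this script is served on needs to be forwarded through the tunnel.

// Whitelist of internal webservice ports this relay may talk to (example values).
$allowedPorts = array(8001, 8002, 8003);

$port = isset($_GET['port']) ? (int) $_GET['port'] : 0;
// Note: a real version should validate $path instead of passing it through untouched.
$path = isset($_GET['path']) ? $_GET['path'] : '/';

if (!in_array($port, $allowedPorts, true)) {
    http_response_code(400);
    exit('Unknown port');
}

// Fetch the page from the local webservice and pass it straight through.
$ch = curl_init('http://127.0.0.1:' . $port . $path);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$body = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

http_response_code($status ? $status : 502);
echo $body;

The usual catch with this kind of pass-through is that links and assets inside the returned HTML still point at the device-internal ports and paths, so they may need rewriting on the way through.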
I might be overlooking an easy solution here, and the approach probably does not make all that much sense, but please consider it. In case there is a better option, I would be very keen to know about it.
Thank you a lot

Related

How to bypass the VPN for specific connections in PHP?

My (Windows) computer is connected to my VPN provider with OpenVPN. That means that everything goes through it, sometimes also through its proxies for a little bit of variation.
In many situations, I don't want it to go through the VPN (or any proxy) when making a request. For example, when I use PHP to log in to my bank. Or when all proxies/VPNs are blocked from downloading a file, or loading a webpage, etc., which happens frequently. But then I'm stuck, because to the best of my knowledge, there is no way to tell PHP to "bypass the VPN and use my home IP address directly".
I would like something like:
php_bypass_VPN();
/* make cURL requests here */
php_restore_VPN();
Is it possible at all? If not, why is this not a major problem for anyone else? Or is it? It has caused countless issues for me, and not just in a PHP context. For example, I would want the local Internet radio station I'm buffering to go through my normal IP address as well, but no software I've ever seen provides any means to "bypass VPN".
If the solution involves hacking the OS and/or installing a bunch of spyware, I'll not be happy. Please tell me there is some simple way to do this, such as:
shell_exec('somebinary bypassvpntemporarily');
That would be swell, although a cross-platform solution would be vastly preferred.
Most VPNs work by installing a new network interface driver on your PC and making sure all traffic goes through it, so it can be encrypted on the way out.
I guess you could try to go through a specific interface, but I'm not sure that will overcome the VPN (here's how to bind to a specific interface: how to bind raw socket to specific interface).
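If you want to experiment with that from PHP specifically, cURL exposes an option for it. A minimal sketch, assuming 192.168.1.20 is the address of your physical (non-VPN) adapter -- that value is a placeholder, and whether traffic actually bypasses the VPN still depends on the OS routing table:

<?php
// Ask cURL to bind the outgoing request to a specific local IP (or interface name).
$ch = curl_init('https://example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_INTERFACE, '192.168.1.20'); // placeholder for your non-VPN adapter
$response = curl_exec($ch);
if ($response === false) {
    echo 'cURL error: ' . curl_error($ch);
}
curl_close($ch);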
Other than that you could create your own driver...

Adding websockets to existing application

So I wrote this nice SaaS solution and got some real clients. Now a client has requested some functionality that requires websockets.
In order to keep things tidy, I would like to use another server to manage the websockets.
My current stack is an AWS application load balancer with two servers behind it; one is the current application server, an Apache web server with PHP running the application.
The client side is driven by AngularJS.
The second server (which does not exist yet) is going to run Nginx, and sessions will be stored on a third server running Memcache.
The websocket server and the application server are both on the same domain, using different ports in order to send requests to the right server (the AWS ELB allows routing requests to different server groups by port). Both the application and the websockets will be driven by PHP and Ratchet.
I have two questions:
The first is for the more experienced developers: does such an architecture sound reasonable? (I'm not aiming for hundreds of thousands of concurrent connections yet -- I need a viable and affordable solution aiming at a maximum of 5,000 concurrent connections at this stage.)
The second: what would be the best way to send requests from the application server (which has the logic to generate the requests) to the websocket server?
Please notice I'm new to websockets so maybe there are much better ways to do this - I'll be grateful for any idea.
I'm in the middle of using Ratchet with a SPA to power a web app. I'm using Traefik as a front-end proxy, but yes, Nginx is popular here, and I'm sure that would be fine. I like Traefik for its ability to seamlessly redirect traffic based on config file changes, API triggers, and configuration changes from the likes of Kubernetes.
I agree with Michael in the comments - using ports for web sockets other than 80 or 443 may cause your users to experience connection problems. Home connections are generally fine on non-standard ports, but public wifi, business firewalls and mobile data can all present problems, and it's probably best not to risk it.
Having done a bit of reading around, your 5,000 concurrent connections is probably something that is going to need OS-level tweaks. I seem to recall 1,024 connections can be done comfortably, but several times that level would need testing (e.g. see here, though note the comment stream goes back a couple of years). Perhaps you could set up a test harness that fires web socket requests at your app, e.g. using two Docker containers? That will give you a way to understand what limits you will run into at this sort of scale.
Your maximum number of concurrent users suggests to me that you're working at an absolutely enormous scale, given that any given userbase will usually not all be online at the same time. One thing you can do (and I plan to do the same in my case) is to add a reconnection strategy in your frontend that does a small number of retries and then pops up a manual reconnect box (Trello does something like this). Given that some connections will be flaky, it is probably a good idea to give some of them a chance to die off, so you're left with fewer connections to manage. You can also add an idle timer in the frontend, to avoid pointlessly keeping unused connections open.
If you want to do some functional testing, consider PhantomJS - I use PHPUnit Spiderling here, and web sockets seem to work fine (I have only tried one at a time so far, mind you).
There are several PHP libraries to send web socket requests. I use Websocket Client for PHP, and it has worked flawlessly for me so far.
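For illustration, here is a minimal sketch of the application server pushing an event to the websocket server, assuming the textalk/websocket client package; the host, path and payload are placeholders:

<?php
require __DIR__ . '/vendor/autoload.php';

use WebSocket\Client;

// The application server connects as an ordinary websocket client and sends one message.
$client = new Client('ws://sockets.example.com/internal');
$client->send(json_encode(array(
    'type' => 'notify',
    'user' => 42,
    'body' => 'Your report is ready',
)));
$client->close();

On the Ratchet side you would then route messages arriving on that internal path to the relevant browser connections.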

PHP and server-to-server communication in LAN

I am doing a project that has to incorporate load balancing using the OpenStack platform. It boils down to spreading browser requests that execute calculation-heavy scripts across several virtual machines running some distro of Linux.
After all my attempts to install OpenStack went horribly wrong, I ended up using TryStack.org, which is a free, working environment. The obvious problem here is that it offers very limited resources. For instance, I can have only one floating (external) IP, which can be assigned to only a single instance (virtual machine), and there are measures that make it impossible to change it via the API.
Due to those limitations, I have to work with a very peculiar setup: I have a network with nodes A, B and C. A, B and C can communicate with each other, but only A has an external IP, i.e. only A is accessible from a browser.
(illustration)
Therefore, I have to:
1. direct all browser requests to A,
2. have A request (and wait for) execution of the calculation-heavy scripts on B/C,
3. make B/C send back the results once they are finished,
4. and finally have A dress the results in HTML and send the response back to the browser.
Is there any mechanism in PHP that can do steps 2 and 3? If not, what (Linux-compatible) language/technology can do that? (I have already written almost all of the code in PHP, but I suppose I can switch.)
Alternatively: is there some other free OpenStack service that would allow me to give every instance an IP (in which case the spreading problem could be solved via simple redirects)?
As arkascha advised in the comments, I used curl to simply send a page request from A to B/C, and then parsed the page using text manipulation.
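For reference, a sketch of what that dispatch from A can look like; the node addresses, endpoint and parameters here are placeholders for this setup:

<?php
// Node A asks B and C to run the heavy work and collects the raw results.
function runRemote($host, array $params)
{
    $ch = curl_init('http://' . $host . '/calculate.php?' . http_build_query($params));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 300); // calculation-heavy scripts may run long
    $result = curl_exec($ch);
    curl_close($ch);
    return $result;
}

$partOne = runRemote('10.0.0.11', array('job' => 'first-half'));  // node B
$partTwo = runRemote('10.0.0.12', array('job' => 'second-half')); // node C

// A then dresses the combined results in HTML and answers the browser.
echo '<pre>' . htmlspecialchars($partOne . "\n" . $partTwo) . '</pre>';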

How to set up Plover so Stenography can be broadcast on a LAN in real-time?

A friend of ours is running the Plover software for her closed-caption and other reporting work. She is trying to find a way to have the output posted in real time to a local server so that others (hard of hearing) can watch it in real time (but not edit it) from their tablets or laptops on a LAN.
This would be similar to what Stack Overflow does when editing (but over a LAN rather than on the same machine). I type in an edit box, and it prints below in real time. How is this being done? Is there a place to find this code?
I can help her get the WiFi or Bluetooth to connect to their systems; I just don't know how to get it to push to them. The reporting machine will be running Ubuntu. If I need to install Apache and PHP for her, that's fine -- then I can just point the viewers to a URL.
It sounds like the Plover software works at a (keyboard) device driver level, and so can be used to enter steno-to-text in any desktop application. Thus I would arrange things this way:
Put Apache on your reporting server, and set up a web application on there which shows a text box. You can use sockets (ideally) or AJAX (as a fallback) to transport your text from a browser to the server. This can then be sent out to any number of clients on a different page, probably via a database as an intermediate store. AJAX requires frequent polls and so is inefficient and slow, but on a LAN with a small number of users it would still be OK. Sockets are better but need a library to implement - take a look here at a PHP example.
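As a deliberately simple sketch of the AJAX fallback: a single PHP script can accept the reporter's text via POST and hand the latest version back to polling viewers. A flat file stands in here for the database intermediary, and all names are placeholders:

<?php
// captions.php -- hypothetical endpoint: the reporter's page POSTs text here,
// viewers poll it with GET every second or two and replace their display.
$store = __DIR__ . '/latest-caption.txt';

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $text = isset($_POST['text']) ? (string) $_POST['text'] : '';
    file_put_contents($store, $text, LOCK_EX);
    http_response_code(204); // stored, nothing to return
    exit;
}

header('Content-Type: application/json');
echo json_encode(array('text' => is_file($store) ? file_get_contents($store) : ''));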
Take a look at this answer to understand the different ways a browser and server can communicate (especially the section on HTML5 Websockets). Pusher is mentioned - that makes it really easy, but if you are broadcasting on a LAN it seems pointless to need the internet. I'd do it myself, for what it's worth.
If you want to stick with AJAX, jQuery, MooTools or Prototype is fine. If you want to use sockets there are several libraries that will use sockets first, and then fall back to a variety of technologies (long polling) and then finally AJAX. This will depend mostly on browser support for these various things.
I believe web sockets need a server component for which Apache is generally considered unsuitable. The first link I gave, for the Ratchet library, looks like it has its own listener component. Perhaps a good first step would be to work through the demos, so you can understand the technology and customise it for your needs?
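If you do go the Ratchet route, the server side can stay quite small. Here is a minimal sketch of a broadcast component, assuming Ratchet is installed via Composer; the class name and port are placeholders. Whatever the reporter's page sends is pushed out to every other connection.

<?php
require __DIR__ . '/vendor/autoload.php';

use Ratchet\ConnectionInterface;
use Ratchet\MessageComponentInterface;
use Ratchet\Http\HttpServer;
use Ratchet\Server\IoServer;
use Ratchet\WebSocket\WsServer;

class CaptionBroadcast implements MessageComponentInterface
{
    private $clients;

    public function __construct()
    {
        $this->clients = new \SplObjectStorage();
    }

    public function onOpen(ConnectionInterface $conn)
    {
        $this->clients->attach($conn);
    }

    public function onMessage(ConnectionInterface $from, $msg)
    {
        // Relay the reporter's text to every viewer except the sender.
        foreach ($this->clients as $client) {
            if ($client !== $from) {
                $client->send($msg);
            }
        }
    }

    public function onClose(ConnectionInterface $conn)
    {
        $this->clients->detach($conn);
    }

    public function onError(ConnectionInterface $conn, \Exception $e)
    {
        $conn->close();
    }
}

// Listens on port 8080 on the LAN; run with "php server.php".
$server = IoServer::factory(new HttpServer(new WsServer(new CaptionBroadcast())), 8080);
$server->run();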

Websocket complications

This is complicated, and not necessarily one question. I'd appreciate any possible help.
I've read that it is possible to have websockets without server access, but I cannot seem to find any examples that show how. I've come to the conclusion that I believe I need this based on the following two things:
I've been struggling for the past several hours trying to figure out how to even get websockets to work with the WAMP server I have on my machine, to which I have root access. I installed Composer, but cannot figure out how to use the composer.phar file to install Ratchet. I have tried other PHP websocket implementations (I would prefer it to be in PHP), but still cannot get them to work.
The webhost I'm currently using to test things out is a free host and doesn't allow SSH access. So even if I could figure out how to get websockets working with root access, it would be a moot point on that host.
I've also found free VPS hosts by googling (with, of course, everything limited) that do offer full root access, but I'd prefer to keep something that allows more bandwidth (my free host is currently unlimited). I've also read that you can (and should) host the websocket server on a different subdomain than the HTTP server, and that it can even run on a different domain entirely.
It also might eventually be cheaper to host my own site -- of course I have no real clue about that -- but in that case I'd need to figure out how to even get websockets working on my machine.
So, for anyone who can follow what I'm asking, there are several questions here. Is it possible to use websockets without root access, and if so, how? If that is not truly possible, how do I properly install Ratchet's websockets when I cannot figure out the composer.phar file (I have a composer.json with the Ratchet requirement in it, but I'm not sure it's in the right directory)? And is it possible to have the websocket server on a VPS and the HTTP server on an entirely different domain, and if so, is there any documentation about that anywhere?
I mean, of course, there is the option of using AJAX: forcing the browser to reload a JS file every so often that uses jQuery AJAX to update a series of divs regardless of whether anything has changed. But that could get complicated, and I'm not even sure it is possible (I don't see why it wouldn't be); then again, I'd prefer websockets over that, since I hear they are much less resource-hungry than that kind of polling would be.
A plain PHP file running under vanilla LAMP (i.e. mod_php under Apache) cannot handle WebSocket connections. It wouldn't be able to perform the protocol upgrade, let alone actually perform real-time communication, at least through Apache. In theory, you could have a very-long-running web request to a PHP file which runs a TCP server to serve WebSocket requests, but this is impractical and I doubt a shared host will actually allow PHP to do that.
There may be some shared hosts that make WebSocket hosting with PHP possible, but they can't offer that without either SSH/shell access or some other way to run PHP outside the web server. If they're just giving you a directory to upload PHP files to, and serving them with Apache, you're out of luck.
As for your trouble with Composer, I don't know if it's possible to run composer.phar on a shared host without some kind of shell access. Some hosts (e.g. Heroku) have specific support for Composer.
Regarding running a WebSocket server on an entirely different domain, you can indeed do that. Just point your JavaScript to connect to that domain, and make sure that the WebSocket server provides the necessary Cross-Origin Resource Sharing headers.
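As a sketch of that cross-domain setup using Ratchet's App wrapper (all hostnames here are placeholders): the third argument to route() is the list of Origin headers the handshake will accept, so the page served from the main site must be included.

<?php
require __DIR__ . '/vendor/autoload.php';

use Ratchet\App;
use Ratchet\ConnectionInterface;
use Ratchet\MessageComponentInterface;

// Minimal echo component, used here only to have something to route to.
class EchoComponent implements MessageComponentInterface
{
    public function onOpen(ConnectionInterface $conn) {}
    public function onMessage(ConnectionInterface $from, $msg) { $from->send($msg); }
    public function onClose(ConnectionInterface $conn) {}
    public function onError(ConnectionInterface $conn, \Exception $e) { $conn->close(); }
}

// The websocket server answers on ws.example.com:8080, while the pages that
// connect to it are served from www.example.com (both hypothetical).
$app = new App('ws.example.com', 8080, '0.0.0.0');
$app->route('/updates', new EchoComponent(), array('http://www.example.com'));
$app->run();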
OK... you have a few questions, so I will try to answer them one by one.
1. What to use
You could use Socket.IO. It's a JavaScript library for developing realtime web applications. It consists of two parts: a client side (which runs in the visitor's browser) and a server side. Basic usage requires almost no background knowledge of Node.js. There is an example tutorial for a simple chat app on the official Socket.IO website.
2. Hosting
Most hosting providers have a control panel (cPanel) with the capability to install/activate different Apache plugins and so on. First, check whether Node.js is already available; if not, you could contact support and ask whether adding it would be an option.
If you don't have any luck with your current hosting provider you could always switch hosts quickly as there are a lot of good deals out there. Google will definitely help you here.
Here is a list containing a few of the (maybe) best options. Keep in mind that although some hosting deals may be paid there are a lot of low cost options to choose from.
3. Bandwidth
As you are worried about "resource hungry" code, maybe you can try hosting some of your content on Amazon CloudFront. It's a widely used content delivery network that guarantees quick connections and fast resource loading, as files are served from the server closest to the client. The best part is that you only pay for what you actually use, so if you don't have much traffic it is really cheap to run and still reliable!
Hope this helps ;)
