My development environment consists of the built-in PHP server. It works great:
APP_ENV=dev php -S localhost:8080 -c php.ini web/index.php
One issue with this is that the built-in server is single-threaded, so lots of parallel XHRs resolve sequentially. Worst of all, it doesn't mimic our production environment very well: some front-end concurrency issues simply don't exist in this setup.
My question:
What's an existing solution that I could leverage that would proxy requests asynchronously to multiple instances of the same PHP built-in server?
For example, I'd have a few terminal sessions running the built-in server on different ports, with each request routed to a different one of those instances. In other words, I want multiple instances of my application running in parallel using the simplest possible setup (no Apache or Nginx if possible).
A super-server, like inetd or tcpserver†, works well. I'm a fan of the latter:
tcpserver waits for incoming connections and, for each connection,
runs a program of your choice.
With that in place, you want a reverse proxy to pull the HTTP protocol off the wire and hand it over to a connection-specific PHP server. Pretty simple:
$ cat proxy-to-php-server.sh
#!/bin/bash -x
# pick a random port -- this could be improved (no check that the port is free)
port=$(shuf -i 2048-65000 -n 1)
# start the PHP server in the background, serving the requested docroot
php -S localhost:"${port}" -t "$(realpath "${1:?Missing path to serve}")" &
pid=$!
# give the server a moment to come up
sleep 1
# proxy standard input to nc on that port
nc localhost "${port}"
# kill the server we started
kill "${pid}"
Ok, now you're all set. Start listening on your main port:
tcpserver -v -1 0 8080 ./proxy-to-php-server.sh ./path/to/your/code/
In English, this is what happens:
tcpserver starts listening on all interfaces at port 8080 (0 8080) and prints debug information on startup and each connection (-v -1)
For each connection on that port, tcpserver spawns the proxy helper, serving the given code path (./path/to/your/code/). Pro tip: make this an absolute path.
The proxy script starts a purpose-built PHP web server on a random port. (This could be improved: the script doesn't check whether the port is already in use.)
Then the proxy script passes its standard input (the connection tcpserver accepted) to the purpose-built server
The conversation happens, then the proxy script kills the purpose-built server
This should get you in the ballpark. I've not tested it extensively (only on GNU/Linux, CentOS 6 specifically). You'll need to tweak the proxy's invocation of the built-in PHP server to match your use case.
Note that this isn't a "load balancing" server, strictly speaking: it's just a parallel ephemeral server. Don't expect too much production quality out of it!
† To install tcpserver:
$ curl -sS http://cr.yp.to/ucspi-tcp/ucspi-tcp-0.88.tar.gz | tar xzf -
$ cd ucspi-tcp-0.88/
$ curl -sS http://www.qmail.org/moni.csi.hu/pub/glibc-2.3.1/ucspi-tcp-0.88.errno.patch | patch -Np1
$ sed -i 's|/usr/local|/usr|' conf-home
$ make
$ sudo make setup check
I'm going to agree that replicating a virtual copy of your production environment is your best bet. You don't just want to surface issues, you want to surface the same issues you'd hit in production. There's also little guarantee that you'd hit all of the same issues under an alternate setup.
If you do want to do this, however, you don't have many options. Either you direct incoming requests to an intermediate piece of software which then dispatches them to the PHP backends (this is the Apache/Nginx solution), or you don't, and each request is handled directly by a single PHP thread.
If you're not willing to use that interposed software, there's only one layer left between you and the client: networking. You could, in theory, set up round-robin DNS for yourself: give yourself multiple IPs, start a PHP server listening on each, and let your client connections get spread across them. Note that this effectively assigns each client to a specific process, which may not be the level of parallelism you're looking for.
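For illustration, here's a minimal sketch of that idea; the loopback addresses and the DNS arrangement are assumptions, not something I've battle-tested:
# start one built-in PHP server per loopback address
php -S 127.0.0.1:8080 web/index.php &
php -S 127.0.0.2:8080 web/index.php &
php -S 127.0.0.3:8080 web/index.php &
# then publish all three addresses as A records for the same dev hostname
# in whatever local DNS you control (dnsmasq, BIND, etc.); client
# connections get spread across them, one process per client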
Related
Is it possible to serve my web application from another server than the one provided in Cloud9?
For example: I would like to run different applications (PHP, Node.js; not sure what's possible yet) with nginx as (i) the backend server and/or (ii) a reverse proxy, to try different scenarios and configuration options.
Is it possible to run nginx and serve content to the outside world in cloud9?
Is it possible to have nginx as a reverse proxy in cloud9?
EDIT:
Here they write:
$PORT is exposed to the outside: When you run an application which listens on the port specified in the environment variable $PORT, you can access this application using the http://projectname.username.c9.io URL scheme. The proxy expects the server on that port to be a HTTP server. Other protocols are not supported.
This leads me to believe that if I started nginx on port=$PORT, it would be accessible via the specified URL scheme. Can anyone confirm? Maybe someone has tried this and can share some time-saving tips. Thanks.
I know this might be a late reply, but it might be helpful for those wondering how to do the same.
Short answer
I've created a repository that holds all the configuration needed for the process. Just run a command and NGINX and PHP-FPM will be serving and accessible from the internet.
GitHub repo: https://github.com/GabrielGil/c9-lemp
Explanation
Basically, to run NGINX in a c9 environment as you noted, you just have to make it listen on port 8080. You can either edit the default site in /etc/nginx/sites-available or create and enable your own (that's what the script above does).
Then, in order to run PHP scripts through NGINX with PHP-FPM, you need to configure some permissions and the socket on the web server. By default, c9 runs as ubuntu:ubuntu while the web server runs as www-data:www-data.
The script above also makes these changes for you.
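As a rough sketch, the site definition could look something like this; the workspace root and the PHP-FPM socket path are assumptions and will vary with your image:
# /etc/nginx/sites-available/c9 (hypothetical file name)
server {
    listen 8080;                          # the Cloud9 $PORT
    root /home/ubuntu/workspace;          # assumed c9 workspace path
    index index.php index.html;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;   # adjust to your PHP-FPM socket
    }
}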
Hope this helps you, or other users in similar situations.
You can run nginx in a normal Cloud9 workspace, as long as it listens on port 8080 (the value of $PORT). The URL scheme to reach your server would be http://projectname-username.c9.io, however. Please refer to docs.c9.io for more up-to-date help on running applications.
One other thing you can do if you have another server where you would like to host your software, is to create an ssh workspace (https://docs.c9.io/ssh_workspaces.html). That way, you can connect Cloud9 to an external server directly.
I have a web application with Apache and PHP on the back end. I am in the process of enhancing this with many new features and considering using node.js for the new work.
First of all, can PHP and node.js coexist on the same machine? I don't see why not.
Second, can I just call node.js code directly from JavaScript and get JSON back?
Yes, and yes. Node and Apache / PHP can co-exist on a single server.
The only issue you are likely to run into is that they cannot both listen on the same port. HTTP, by default, runs on port 80 and only one process can "listen" on a single port at any one time. You may therefore have to run the Node app on a different port (for example, 8080), which could bring in difficulties if any of your target users are restricted to only port 80.
You can run Node and PHP on the same server, and even on the same port. The key is to use a server like nginx in front, listening on port 80: set up PHP in nginx as you normally would (using php-fpm) and set up your Node instance to listen locally on some high port like 8081.
Then just configure nginx to proxy all the Node requests through to localhost:8081 using the directory name as a filter. You're essentially setting up nginx to treat Node requests a bit like it treats PHP requests: it forwards them off to some other daemon then handles the response as it comes back. Nginx is great at this. It will add a layer of security and it will boost performance because it's so good at managing many connections at once even when the backend doesn't.
Another benefit of this is that you can have multiple separate Node instances on different domains too, and use regular nginx rules to handle it all. You can also run it alongside other app servers like apps written in Go.
You'd benefit from nginx's configurability, its SSL and HTTP/2 support, and its huge speed at serving static files, and you wouldn't have to serve static files from your Node app (if you don't want to).
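As a sketch of what that looks like (the /app/ prefix, the port 8081, and the PHP-FPM socket path are assumptions for illustration):
server {
    listen 80;
    server_name example.com;
    root /var/www/html;

    # PHP handled as usual through PHP-FPM
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;   # adjust to your socket
    }

    # everything under /app/ is forwarded to the local Node instance
    location /app/ {
        proxy_pass http://127.0.0.1:8081/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}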
Yes, you can. If your server runs Ubuntu or Debian, follow these steps:
Open your terminal and run:
curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
sudo apt-get install nodejs
If curl is not installed on your server:
sudo apt-get install curl
So that your Node.js application doesn't stop when you exit the terminal, use a package called Forever.
npm install -g forever
If your site is uploaded and NPM and Forever are configured correctly, it is time to start the Node.js instance. If you’re using Express.js, run the following command to start a Forever instance:
forever start ./bin/www
In the above command you'll notice I'm feeding it the ./bin/www script, because that is what npm start launches for Express.js. Be sure to change this to whatever your launch script is.
By default, the Node.js site runs at http://localhost:3000, which isn't ideal for remote visitors. We want to be able to access the site via a domain name processed by Apache. In your Apache VirtualHost file, you might have something like the following:
<VirtualHost *:80>
    ServerName www.example.com
    ProxyPreserveHost On
    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/
</VirtualHost>
We are telling Apache to create a proxy that fetches our Node.js site at http://localhost:3000 every time the www.example.com domain name is hit. All assets and pages will use the www.example.com path instead of http://localhost:3000, leading everyone to believe the website is being served no differently than any other.
However, by default, the Apache proxy modules are not enabled. You must run the following two commands if the modules are not already enabled:
sudo a2enmod proxy
sudo a2enmod proxy_http
You may be required to restart Apache after enabling these modules.
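On Debian/Ubuntu (assuming the standard apache2 service name), that restart is typically:
sudo service apache2 restart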
I got this information from The Polyglot Developer.
Yes, if you use PHP to serve JavaScript pages to clients, the JavaScript code can use AJAX requests to access routes exposed by your Node server.
The answer to the following doesn't satisfy me; I wish to know a bit more about what's going on.
Can anyone explain the $pty argument in ssh2_exec() function call
Does it force the client to tell the server to spawn a PTY or is the PTY totally client-sided?
As far as I know it's attached to a process, such as an sshd for example, which would require a call to the server.
Also, when set to true does it emulate the default shell? What is it?
I know you can pass xterm, for example, which emulates a PTY. Is this any different? Emulation implies it's not a real PTY, from my perspective.
That may be a little confusing to read, but I'm trying to grasp this concept.
Thank you. I appreciate it.
A "pty" is essentially a "pipe" between some sort of application or daemon (for example, I work on virtualization, and we use a pty to provide the virtual terminal for a virtual machine). A pty has a "master" and a "slave" side. The slave side is what your normal "terminal" program would use - xterm or ssh, etc. The master is used by whatever "thing" provides the data into the terminal [and if you write into the pty, e.g. when you type or paste text into an xterm] it gets read by the process controlling the master - the master then does whatever it should do with such data - e.g. sending it across the network in an ssh case.
It is completely to do with what happens "your end".
If you are running an "interactive" command over ssh, say ssh somemachine make menuconfig (assuming your home directory is a Linux source directory; we'll ignore the fact that it probably isn't), the default is to not allocate a pty, so menuconfig will probably fail, or at least fail to operate correctly, because it's an interactive text program that lets you press keys to move around, etc. Using ssh -t somemachine make menuconfig will give your ssh a pty. Alternatively, a plain ssh somemachine will give you a pty by default, since you are expected to type things into the other end.
The pty is a terminal "local" to the machine that allocates it: in the ssh -t case it lives on the remote side, where the sshd process feeds the data arriving over the network from your ssh client into the "master" side of the pty.
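To tie this back to ssh2_exec(): passing a terminal type such as "xterm" as the $pty argument asks the remote sshd to allocate a pty and attach your command to its slave side. A minimal sketch, with placeholder host and credentials:
<?php
$session = ssh2_connect('somemachine', 22);
ssh2_auth_password($session, 'user', 'secret');

// without a pty: fine for non-interactive commands
$stream = ssh2_exec($session, 'ls -l');

// with a pty of type "xterm": the remote sshd allocates a real
// pseudo-terminal, so cursor-driven programs like menuconfig behave
$stream = ssh2_exec($session, 'make menuconfig', 'xterm');

stream_set_blocking($stream, true);
echo stream_get_contents($stream);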
This page describes what I've tried to say
http://lugatgt.org/2009/10/28/ssh-tips-and-tricks-2/
Researching PHP/Gearman. I'm trying to get an understanding of how the Gearman Server/Process determines what constitutes a "valid" client.
In the docs that I've seen, a number of clients connect to the Gearman server. However, I've not found anything that describes how the server "validates" the workers, or restricts workers/clients from accessing/getting work from the server.
As an example, I create a Gearman server, and I have a network with 5 child machines, each of which has a "worker". My evil friend Steve adds another machine to the network, with its own worker.
How do I stop Steve's worker from getting work from my server?
Is there a way to have the client/worker register itself, so I can essentially allocate IDs to the clients/workers?
I'm fairly certain that there's a way to do this, but I haven't come across it yet.
I'm testing on a Linux env, using PHP/MySQL/Gearman.
Thanks
Like memcached, Gearman has no access control or authentication whatsoever.
Your best solution is to rely on your OS, e.g. firewall rules.
Namely, iptables should block all incoming traffic to port 4730 (the standard Gearman port), like this:
iptables -A INPUT -p tcp --dport 4730 -i lo -j ACCEPT
iptables -A INPUT -p tcp --dport 4730 -s server1 -j ACCEPT
...
iptables -A INPUT -p tcp --dport 4730 -s server5 -j ACCEPT
iptables -A INPUT -p tcp --dport 4730 -j DROP
That way, you can still use Gearman from localhost.
Disclaimer: these rules are off the top of my head; please double-check them before running them on a production server.
Hope this helps!
By (1) listening only on localhost or (2) setting up proper firewall rules if you need outside access. Gearman was created with the intention of having as little overhead as possible; there is no authentication protocol. If that's not enough, listening only on localhost and using SSH tunnels to that machine is a possibility. Another possibility is using the HTTP protocol (see here) and putting a validating proxy in front of it.
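For example, a sketch of the localhost-plus-tunnel approach (assuming a gearmand build that supports the -L/--listen flag):
# on the Gearman host: bind to loopback only, daemonize
gearmand -L 127.0.0.1 -p 4730 -d

# on a trusted remote machine: tunnel the port over SSH
ssh -N -L 4730:127.0.0.1:4730 user@gearman-host
# workers/clients on that machine now connect to localhost:4730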
Gearman servers should only be accessible on your internal network. The network your application lives on should not have unauthorized devices on it; your application servers shouldn't be sharing a network with your wireless router. Gearman will only send jobs to workers registered to that particular server with the same task name. If one of the employees at your company registers a task with the same name to your production Gearman master as a joke, you have bigger problems.
I'm trying to implement a socket server that will run in most shared PHP hosting.
The requirements are that the socket server can be installed, started, and stopped from PHP automatically, without the user doing anything. It doesn't matter what language the socket server is written in, as long as it will run on the majority of shared hosting globally.
Currently, I've written a Socket Server with PHP that implements an Object Cache:
http://code.google.com/p/php-object-cache/
source: http://code.google.com/p/php-object-cache/source/browse/trunk/socket.class.php
However, PHP has to be compiled with sockets support, and not many servers run with PHP sockets support.
My real question is: what language should I implement the socket server in so that it has maximum platform support and can be invoked from within PHP?
In other words, what scripting language is the most common on PHP enabled Servers?
Or do I have to write the socket server in a compiled language to have it work across all servers?
Let's leave IIS out of the picture for the moment, just Linux servers. I don't think many PHP sites are running on IIS...
edit:
Sorry I think my question is not clear.
I'd like to know what language is best suited for creating a socket server, given the following requirements:
The language must exist in shared hosting, alongside PHP running in Apache (not CLI).
The sockets support must be enabled natively, not via a required extension.
PHP must be able to write the daemon to file, as well as start and stop the daemon.
I'm not asking for a solution for a single server. It has to run natively on the majority of shared hosting servers.
Any server can be stopped or started by PHP under Linux. Of course, if you are running a server which accepts sockets from the internet, then you can just connect directly to the server and tell it to shut down. No need to go via PHP!
As for "starting a server from PHP", well, under Linux, anything can be started from pretty much anything. Just shell out to start the process and have it drop into daemon mode.
I'm a Perl fan myself. Not surprisingly, there's a Perl Daemon library available.
If your hosting provider offers Perl script support, then you probably have permission to use the "system" or backtick commands, so you can very likely start a daemon. However, you will need to use a non-privileged port (over 1024). Also, you should ASK THEM FIRST! They may not appreciate you tying up ports on their server; this is very definitely something you should discuss with your hosting provider.
It really depends on what the install requirements are. Often the easiest and most standard way to write a socket server is to write an inetd service. inetd is a standard daemon on Unix machines, and it will fork a process and handle the socket-level details. If you want your service to run on a port below 1024 on Unix, this is one of the easier ways to get it done. However, the initial install requires root to configure inetd.
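For reference, an inetd service is one line in /etc/inetd.conf; the service name and binary below are hypothetical, and the name-to-port mapping lives in /etc/services:
# /etc/inetd.conf: inetd accepts each connection, then runs the program
# with the socket attached to its stdin/stdout
myservice stream tcp nowait nobody /usr/local/bin/myserver myserver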
If your shared hosting allows PHP to make an exec call, then you could start the daemon that way. Keep in mind, though, that it'll need to run on a port above 1024. You next need to decide whether your program will be multi-threaded or multi-process. Typically Java programs are multi-threaded, while an Apache instance is normally multi-process.
Lastly, the host may have a firewall in place; this helps prevent shared hosting accounts from becoming part of a botnet. If the firewall rules don't allow connections to other ports, you won't be able to connect to your daemon remotely.