Why does every tutorial I find on the web share the code folder between the two machines?
If I'm setting up two different machines, it's the same as having two physical machines, say an nginx server in California and a PHP server in Florida.
As I understand it, the FastCGI protocol sends the data over the network, right?
That's why we are using IP:PORT, right?
So nginx asks the machine that understands PHP to process the data by sending it to that IP:PORT over FastCGI, and gets the processed response back to show to the browser, right?
Or am I crazy?
Having made the same journey through these tutorials, here's my view:
php-fpm is only the processor for PHP files; it cannot serve static files. For PHP calls alone, php-fpm would technically be enough, but a bare php-fpm container cannot accept requests from browsers directly, even if php-fpm itself is capable of handling many requests.
nginx plays the role of a web server and load balancer, generally using a socket (shared via a volume) to communicate with PHP, and it also delivers the static files.
So the shared code is necessary so that nginx has the entry point to the PHP files and the static files to serve, while php-fpm uses the code for execution.
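To make that last point concrete: FastCGI carries request parameters such as SCRIPT_FILENAME over IP:PORT, not the file contents, so php-fpm must be able to open that same path on its own filesystem. A minimal sketch of the nginx side (the paths and the upstream address php:9000 are assumptions, not taken from any particular tutorial):

server {
    listen 80;
    root /var/www/html;      # nginx reads static files from here
    index index.php;

    location ~ \.php$ {
        # Only the script *path* travels over FastCGI; php-fpm then opens
        # /var/www/html/... on ITS OWN disk -- hence the shared code folder.
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php:9000;   # IP:PORT of the php-fpm container/machine
    }
}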
Suggestions to improve this answer welcome.
References here:
php-fpm and nginx on linode.com
nextcloud using the fpm image
I have been trying to develop web pages locally (in Windows 10) and run them in my local browser (Chrome, Vivaldi). Right now I have three different ways to run simple servers locally: PHP's built-in server, Python's http.server module, and VS Code's LiveServer. When I run the PHP server, I can execute PHP code properly, as one would expect. But calling PHP URLs using the other two, I get a "Save File" dialog! Where is that coming from? Instead of a simple "not found" I get the dialog. So I have two questions: (1) Why am I getting the save file dialog? (2) Is it possible to process PHP files using LiveServer or Python's http.server module (which I don't expect will ever support PHP)?
If the save dialog is being shown, it's because the server can't interpret PHP code. You have to check these servers' configs to see how they integrate with PHP (if they can do that at all).
Good questions. Erick has answered the first one. I'll just elaborate on it and then answer the second one.
Why do you get the save file dialog?
At a high level, a web server serves files. When serving HTML/CSS/JS files to the browser, life is easy. Your browser understands HTML/CSS/JS and knows how to render them for the user. If your browser were sent an unprocessed PHP file (assuming that file was present), the browser wouldn't know what to do with <?php .. ?> tags and such. Since the server also labels file types it doesn't recognize with a generic Content-Type header such as application/octet-stream, the browser offers to download the file. Same thing with a zip file: if you went to http://someurl.com/abc.zip and the web server found that file under the root of someurl.com, it would send it to the browser, and the browser would offer to download it. There's more to it than just that.
So, how does a web server process PHP files? It depends on the web server, but the common thing is that they need help in processing PHP files. The web server is configured to send the request to php.exe or some other system such as PHP-FPM, which processes the file and returns the output to the web server to send to the user. Processing the file converts echo "<div>$variable</div>"; into clean HTML: <div>I am awesome</div>. This processing system (php.exe or PHP-FPM) tag-teams with the web server to serve the browser something it can render.
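For instance, take this one-line script (a hypothetical hello.php):

<?php
$variable = 'I am awesome';
echo "<div>$variable</div>";

Served through PHP-FPM, the browser receives only the processed output, <div>I am awesome</div>. Served by a PHP-unaware server, the browser receives this raw source, typically labeled application/octet-stream, which is exactly what triggers the "Save File" dialog.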
Is it possible to cross-render languages?
Yes, you can in multiple ways. One of the common ways is to find the best processing system for the language of choice. For example, PHP can be processed with PHP-FPM running as a service. So, http://someurl.com/test/index.php could run through PHP-FPM. Python may use WSGI and you may choose gunicorn to process Python files. In that case, your webserver can be asked to send python-related directories/subdomains directly to gunicorn (essentially a proxy).
Reverse proxy
Let's say you have multiple sites with multiple language needs.
http://py.someurl.com serves Python/Django
http://someurl.com serves straight HTML
http://ph.someurl.com serves PHP
http://js.someurl.com is powered by NodeJS
py.someurl.com could run on the server using the gunicorn web server (or another WSGI-friendly server) on port 8000. Node could be serving via the Express web server on port 9000.
You could run an NGINX server that serves the straight HTML and also serves ph.someurl.com by sending requests to the PHP-FPM service. It can also be configured to take all requests to js.someurl.com and hand them off to http://localhost:9000, where Node will service the request and send output back to NGINX, and NGINX sends the response to the browser. Similarly, requests to py.someurl.com can be sent to localhost:8000, where gunicorn processes the request and sends the response back to NGINX, which forwards it to the browser.
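A hedged sketch of what that NGINX configuration might look like (server names and ports follow the example above; roots and the socket path are assumptions):

server {                       # straight HTML
    listen 80;
    server_name someurl.com;
    root /var/www/html;
}

server {                       # PHP handed to PHP-FPM
    listen 80;
    server_name ph.someurl.com;
    root /var/www/ph;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;   # or 127.0.0.1:9000
    }
}

server {                       # Node/Express behind a proxy
    listen 80;
    server_name js.someurl.com;
    location / {
        proxy_pass http://localhost:9000;
        proxy_set_header Host $host;
    }
}

server {                       # Django/gunicorn behind a proxy
    listen 80;
    server_name py.someurl.com;
    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
    }
}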
From a user's perspective, all they know is the NGINX server. All the other things in the background are known only to NGINX. NGINX, in that case, serves as both a web server and a proxy.
We have a PHP application which automates script commands, many of them through web interfaces. I want this PHP application user to be able to run lots of CLI and SSH commands, so I don't really want www-data doing it, as that would involve changing lots of script files to www-data executable permissions, and we want scripts to be entered into the web interfaces.
This application is cross-operating-system: ideally it runs on anything that PHP runs on, but Windows and Mac at least.
The important things that we need to be able to do are (I think) ...
1) Have a web server (it's currently Apache, and that's working cross-OS, so keeping it would be great) that is running under normal settings as a normal user, reverse-proxied to the application below on the same server.
2) Have a PHP application on a different port, running as its own user, that can do whatever it wants.
The ability to just run
php -S localhost:8000
as is available with the built-in PHP web server seems ideal for this. So...
1) Is it safe to use the PHP built-in web server if it's behind an Apache proxy? I'm assuming not, since the docs say not to use it like that, but does the fact that we're proxying the entire request through Apache change anything?
2) Is there another Web Server/PHP Server that can easily do this?
3) Is there a way of running two apache processes to do this?
4) Am I doing this the wrong way entirely? There's another app I know of that does it like that, but it's a Java app, and the whole process is started and owned by a non-Apache user.
Thanks in advance
Apache 2.4 + php-fpm + mod_proxy_fcgi will suit you just fine.
(To elaborate for the downvote: php-fpm allows the PHP process to run as a separate daemon under its own user ID, which is exactly the privilege separation requested here.)
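A rough sketch of the moving parts (the pool name, user, and socket path are placeholders, not a definitive setup):

; php-fpm pool config -- run the PHP app as its own user
[appuser]
user = appuser
group = appuser
listen = /run/php/appuser.sock
listen.owner = www-data
listen.group = www-data

# Apache 2.4 vhost -- hand every .php request to that pool via mod_proxy_fcgi
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php/appuser.sock|fcgi://localhost/"
</FilesMatch>

Apache keeps running as its normal user; only the PHP worker processes run as appuser.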
Is it possible to serve my web application from a server other than the one provided in Cloud9?
For example: I would like to run different applications (PHP, Node.js; not sure what's possible yet) with nginx as the backend server (i) and/or a reverse proxy (ii) (to try different scenarios and configuration options).
Is it possible to run nginx and serve content to the outside world in Cloud9?
Is it possible to have nginx as a reverse proxy in Cloud9?
EDIT:
Here they write:
$PORT is exposed to the outside: When you run an application which listens on the port specified in the environment variable $PORT, you can access this application using the http://projectname.username.c9.io URL scheme. The proxy expects the server on that port to be a HTTP server. Other protocols are not supported.
This leads me to believe that if I started nginx on port=$PORT it would be accessible via the specified URL scheme. Can anyone confirm? Maybe someone has tried this and can share some time-saving tips. Thanks.
I know this might be a late reply, but it might be helpful for those who are wondering how to do the same.
Short answer
I've created a repository to hold all the configuration needed for the process. Just run a command and NGINX and PHP-FPM will be up, serving and accessible from the internet.
GitHub repo: https://github.com/GabrielGil/c9-lemp
Explanation
Basically, to run NGINX in a c9 environment as you noted, you just have to make it listen on port 8080. You can either edit the default site in /etc/nginx/sites-available or create and enable your own (that's what the script above does).
Then, in order to run PHP scripts through NGINX with PHP-FPM, you need to configure some permissions and the socket on the web server. By default, c9 runs as ubuntu:ubuntu and the web server as www-data:www-data.
The script above also makes these changes for you.
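For reference, a minimal sketch of what such a site definition might look like (the workspace root and socket path are assumptions):

server {
    listen 8080;                      # Cloud9 maps $PORT (8080) to the public URL
    root /home/ubuntu/workspace;
    index index.php index.html;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;   # socket whose ownership the script fixes
    }
}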
Hope this helps you, or other users in similar situations.
You can run nginx in a normal Cloud9 workspace, as long as it listens on port 8080 (the value of $PORT). The URL scheme to reach your server would be http://projectname-username.c9.io, however. Please refer to docs.c9.io for more up-to-date help on running applications.
One other thing you can do, if you have another server where you would like to host your software, is to create an SSH workspace (https://docs.c9.io/ssh_workspaces.html). That way, you can connect Cloud9 to an external server directly.
I am making an application for school, but I need to be able to manage some services running on my Debian 7 machine. I'm running nginx and PHP5-FPM, so PHP 5.4. But how can I restart or stop, for example, nginx from my PHP file? I tried
exec("/etc/init.d/nginx stop");
Also I tried
shell_exec("/etc/init.d/nginx stop");
but with no result. PHP returns:
Stopping nginx: nginx
Thanks in advance
You need to be root to restart these services, and unless your web server (like Apache) is running your site(s) as root, this isn't going to work. There are a couple of options at your disposal; the best one will likely depend on your situation.
You can create a two-layered approach where the front-end (run by your web server) issues the commands and the back-end is a service running as root that executes them. This can also add a layer of security, since the back-end can sanitize the commands before they're executed. The communication between the front-end and back-end could be any number of things, such as a file that the front-end writes to and the back-end reads from every few seconds, or you could go with WebSockets and make it real-time.
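A minimal sketch of that two-layered, file-based variant (the paths and command names are hypothetical):

<?php
// front-end.php -- runs as www-data; only queues whitelisted command names
$allowed = array('nginx_stop', 'nginx_restart');
$cmd = isset($_POST['cmd']) ? $_POST['cmd'] : '';
if (in_array($cmd, $allowed, true)) {
    file_put_contents('/var/spool/svcqueue/commands', $cmd . PHP_EOL, FILE_APPEND | LOCK_EX);
}

<?php
// backend-daemon.php -- started as root (init script or cron); drains the queue
$map = array(
    'nginx_stop'    => '/etc/init.d/nginx stop',
    'nginx_restart' => '/etc/init.d/nginx restart',
);
$queue = '/var/spool/svcqueue/commands';
while (true) {
    if (file_exists($queue)) {
        rename($queue, $queue . '.work');              // grab the batch atomically
        foreach (file($queue . '.work', FILE_IGNORE_NEW_LINES) as $cmd) {
            if (isset($map[$cmd])) {
                exec($map[$cmd]);                      // only whitelisted commands reach exec()
            }
        }
        unlink($queue . '.work');
    }
    sleep(5);                                          // the "every few seconds" polling above
}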
You could run an additional instance of your web server as root that handles just this task. You would definitely want to run this over SSL, and it's somewhat risky since if someone could inject code into one of your pages, their code would be running on your server as root.
I need to deploy a Django project on a shared server which I have no root access for, and no administration capabilities whatsoever.
Each user on the server has a dedicated directory from which Apache serves that user's files (the public URL would be /~username/).
Problem is, Apache on this server has no CGI capabilities, no mod_python, no mod_wsgi. I can work with PHP, however.
What hacks do I have to deploy a Django project on this server, maybe employing PHP somehow?
This is in no way a production scenario, and any hack you can think of which will work would be great. Ignore any performance or scalability factors - this is only a POC.
Without mod_python, mod_wsgi, or FastCGI, you won't be able to do it directly.
I'm thinking what you might have to do is run the django app as standalone, listening on another port, then basically use PHP to proxy requests to it.
So you do your
python manage.py runserver 9999
maybe starting it with a nohup instead to keep it running when you log out:
nohup python manage.py runserver 9999 &
Then, in ~username, you make a proxy.php script that takes any additional PATH_INFO, makes a request to localhost:9999 passing along the HTTP headers, collects the response, and sends it back to the browser.
So, e.g., the browser requests http://example.com/~username/proxy.php/some/path/, and the PHP script requests http://localhost:9999/some/path/ and sends along the results.
I'm not a PHP programmer so I can't exactly show you how to write that, but I'm sure someone out there must have implemented an HTTP proxy in PHP.
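For illustration only, a bare-bones sketch of what that proxy.php could look like (untested; it relays method, body, and Content-Type but little else):

<?php
// proxy.php -- forward PATH_INFO + query string to the Django dev server
$path  = isset($_SERVER['PATH_INFO']) ? $_SERVER['PATH_INFO'] : '/';
$query = isset($_SERVER['QUERY_STRING']) ? $_SERVER['QUERY_STRING'] : '';
$url   = 'http://localhost:9999' . $path . ($query !== '' ? '?' . $query : '');

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, $_SERVER['REQUEST_METHOD']);
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    curl_setopt($ch, CURLOPT_POSTFIELDS, file_get_contents('php://input'));
}
$response   = curl_exec($ch);
$headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
http_response_code(curl_getinfo($ch, CURLINFO_HTTP_CODE));
// relay the upstream Content-Type so HTML renders instead of downloading
foreach (explode("\r\n", substr($response, 0, $headerSize)) as $line) {
    if (stripos($line, 'Content-Type:') === 0) {
        header($line);
    }
}
echo substr($response, $headerSize);
curl_close($ch);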
If you have .htaccess support in that directory and Apache has mod_proxy_http enabled, you could just have Apache proxy the request directly; see the sketch below. The documentation is pretty easy to follow. But if they don't have CGI enabled, they probably don't have that set up either.
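Since ProxyPass itself isn't allowed in .htaccess context, the per-directory way to do this is mod_rewrite's [P] flag (a sketch, assuming mod_rewrite and mod_proxy_http are loaded):

# .htaccess in ~username -- proxy everything to the Django dev server
RewriteEngine On
RewriteRule ^(.*)$ http://localhost:9999/$1 [P,L]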
Of course, the easiest thing would be if you could just get away with Django running and listening on a public-facing port and access it directly, i.e.,
python manage.py runserver example.com:9999
and access it directly as http://example.com:9999/