Does the PHP built-in web server allow multiple instances/sites?
Background on the project skeleton
I am working through Zend for the first time via Christopher Valles' tutorial, with a slight difference: he uses Vagrant to instantiate a VirtualBox instance, while I'm working locally in Ubuntu 12.x LTS...
https://github.com/christophervalles for more details on the Vagrant box...
I want to use the internal PHP 5.5 server if possible, but I'm getting an error when (of course) running the second instance.
Is the best/usual solution to run the core service on some variable port during development, and the web client on port 80? I'd, of course, need to rewrite some of my client's code to point to the new port, but would the Zend service need a rewrite anywhere? I'd say no.
Starting up my site's core/api services Zend project :
>php -S 0.0.0.0:8080 -t public/ public/index.php
PHP 5.5.10-1+deb.sury.org~precise+1 Development Server started at Day Date Time
Listening on http://0.0.0.0:8080
Document root is /home/core_site/public
Press Ctrl-C to quit.
Starting up my site's web client, that talks to the first server:
>php -S 0.0.0.0:8080 -t public/ public/index.php
[Day Date] Failed to listen on 0.0.0.0:8080 (reason: Address already in use)
You could just use a different port:
php -S 0.0.0.0:8081 -t public/ public/index.php
Thanks for reading my book!
About your question, Andrew is right about using a different port. As far as I know, you cannot do domain-name-based hosts on the built-in PHP server. If you don't want to use different ports each time, you should look into using Apache or nginx.
If you go with nginx and PHP you can re-use the config files I use for the Vagrant machine, so you don't have to do everything from scratch. The OS on the Vagrant machine is Ubuntu 12.04 (the same as yours), so it shouldn't be hard to re-use the configs :D
You can check out the php.ini and the nginx vhosts used on Vagrant here.
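Since the built-in server can't do name-based vhosts, nginx can handle both sites on port 80. A minimal sketch of two server blocks (the hostnames, roots, and PHP-FPM socket path are illustrative, not from the repo):

```
# Hypothetical name-based vhosts; adjust server_name, root, and socket path.
server {
    listen 80;
    server_name core.local;
    root /home/core_site/public;
    index index.php;
    location / { try_files $uri /index.php$is_args$args; }
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}

server {
    listen 80;
    server_name client.local;
    root /home/client_site/public;
    # ...same index/location/PHP handling as above
}
```

With entries for core.local and client.local in /etc/hosts, both sites are reachable on port 80 at once.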
Cheers!
Related
There's a lot of info on the internet about Docker's basic operations: "how to pull an image", "how to run/start a container", but almost nothing about what to do next. How do you develop?
For example, I pulled linode/lamp. A simple project is sitting in /var/www/example.com/public_html/.
I launch: docker run -p 80:80 -t -i linode/lamp /bin/bash, then service apache2 start. Now, at http://localhost in my browser, I see the project's index page.
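A variant I've been wondering about (the host path here is made up) would bind-mount the project directory so that files edited on the host show up inside the container:

```
# Hypothetical: bind-mount a host directory over the container's document root,
# so edits made on the host (e.g. in PhpStorm) are visible inside the container.
docker run -p 80:80 -v /path/on/host:/var/www/example.com/public_html -t -i linode/lamp /bin/bash
```

But I'm not sure whether that is the intended workflow with PhpStorm.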
Now I want to edit/add/delete files in the project. Doing this with bash and nano is obviously insane, so I want to use PhpStorm. And I cannot understand what to do.
What option should I choose to create a project?
Web server is installed locally, source files are located under its document root.
Web server is installed locally, source files are located elsewhere locally.
Web server is on a remote host, files are accessible via network share or mounted drive.
Web server is on a remote host, files are accessible via FTP/SFTP/FTPS.
Source files are in a local directory, no Web server is yet configured.
If it's the first, then where should I get the files? If "via FTP/SFTP/FTPS", how do I set it up? I don't get it.
I know that PhpStorm has Deployment - Docker settings and I can configure them. That's how it looks on my machine:
[Image: Docker settings]
[Image: Debug/Run configuration]
But it only gives the ability to start containers and connect to them via console. Should I use it somehow?
[Image: Docker panel]
Please explain to me what I should do. I would like to see answers for both Windows and Linux (if there is a difference, of course).
P.S. I use Docker on Windows. But in Settings it's switched to Linux containers.
I used to think that the internal web server which Symfony uses is part of the Apache server, and that it runs with the following command:
$ php bin/console server:start
But when I turn off the Apache server on my PC (Windows 10), the internal web server still serves pages without any problems. So does the internal web server have nothing to do with Apache, or is something unusual going on?
PHP provides a standalone built-in web server.
You can try it by running php -S localhost:3000 -t web at the root directory of your project then browsing http://localhost:3000/app_dev.php.
All commands that are part of the server:* namespace are related to the PHP built-in server.
For more information, look at the commands directly.
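For example, the server:* namespace (from Symfony's WebServerBundle) includes commands like these, all of which wrap that same built-in PHP server rather than Apache:

```
php bin/console server:start    # start the built-in server in the background
php bin/console server:status   # check whether it is running
php bin/console server:stop     # stop the background server
php bin/console server:run      # run it in the foreground (Ctrl-C to quit)
```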
It is using PHP's built-in server.
I'm trying to run a simple Laravel project inside a Vagrant (VirtualBox) VM. The guest is Ubuntu 14.04 x64, and the host is Windows 7 x64. I've set up port forwarding (8000 on host to 8000 on guest), but when I run php artisan serve, though I get a message stating that the server is running on port 8000, when I visit localhost:8000 on my host machine, Chrome tells me 'this webpage is not available'. There are two complications:
First, if I use curl from inside the VM, I receive the correct page contents - so it appears the server is working fine.
Second, if I run a Python web server using python -m SimpleHTTPServer on the same VM, I can access it fine on my host OS. Visiting localhost:8000, I see the directory contents listed. So it appears the port forwarding is working fine.
I tried deleting the public/.htaccess file in the Laravel project, to no avail. I'm no PHP expert, and this problem is hard to Google! Any pointers would be appreciated.
After reading this question I tried
php artisan serve --host 0.0.0.0
And it works fine now. (By default, php artisan serve binds to 127.0.0.1 inside the VM, which is not reachable through VirtualBox's port forwarding; --host 0.0.0.0 makes it listen on all interfaces.)
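For reference, the port forwarding mentioned in the question is just a line in the Vagrantfile; a typical setup (the box name here is an assumption) looks like:

```
# Vagrantfile fragment: forward guest port 8000 to host port 8000
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"   # Ubuntu 14.04 x64, as in the question
  config.vm.network "forwarded_port", guest: 8000, host: 8000
end
```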
Is it possible to run serve my web application from another server than the one provided in cloud9?
For example : I would like to run different applications (PHP, Node.js - not sure what's possible yet) with nginx as the backend server (i) and/or a reverse proxy (ii) (to try different scenarios and configuration options).
Is it possible to run nginx and serve content to the outside world in cloud9?
Is it possible to have nginx as a reverse proxy in cloud9?
EDIT:
Here they write:
$PORT is exposed to the outside: When you run an application which listens on the port specified in the environment variable $PORT, you can access this application using the http://projectname.username.c9.io URL scheme. The proxy expects the server on that port to be a HTTP server. Other protocols are not supported.
This leads me to believe that if I start nginx on port $PORT it would be accessible via the specified URL scheme. Can anyone confirm? Maybe someone has tried this and can share some time-saving tips. Thanks.
I know this might be a late reply but might be helpful for those who are wondering how to do the same.
Short answer
I've created a repository that holds all the configuration needed for the process. Just run one command and NGINX and PHP-FPM will be serving and accessible from the internet.
GitHub repo: https://github.com/GabrielGil/c9-lemp
Explanation
Basically, to run NGINX in a c9 environment, as you noted, you just have to make it listen on port 8080. You can either edit the default site in /etc/nginx/sites-available or create and enable your own (that's what the script above does).
Then, in order to serve PHP scripts through NGINX with PHP-FPM, you need to configure some permissions and the socket in the web server. By default, c9 runs as ubuntu:ubuntu and the web server as www-data:www-data. The script above also makes these changes for you.
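A minimal sketch of what such a site config might look like (the root, index, and socket path here are assumptions; the linked repo has the real configs):

```
# Hypothetical c9 site config: listen on $PORT (8080) and hand PHP to PHP-FPM.
server {
    listen 8080;
    root /home/ubuntu/workspace;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;  # socket path is an assumption
    }
}
```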
Hope this helps you, or other users in similar situations.
You can run nginx on a normal Cloud9 workspace, as long as it listens on port 8080 (the value of $PORT). The URL scheme to reach your server would be http://projectname-username.c9.io, however. Please refer to docs.c9.io for more up-to-date help on running applications.
One other thing you can do if you have another server where you would like to host your software, is to create an ssh workspace (https://docs.c9.io/ssh_workspaces.html). That way, you can connect Cloud9 to an external server directly.
I have a web application with Apache and PHP on the back end. I am in the process of enhancing this with many new features and considering using node.js for the new work.
First of all, can PHP and Node.js coexist on the same machine? I do not see why not.
Second, can I just call Node.js code directly from JavaScript and get JSON back?
Yes, and yes. Node and Apache / PHP can co-exist on a single server.
The only issue you are likely to run into is that they cannot both listen on the same port. HTTP, by default, runs on port 80 and only one process can "listen" on a single port at any one time. You may therefore have to run the Node app on a different port (for example, 8080), which could bring in difficulties if any of your target users are restricted to only port 80.
You can run Node and PHP on the same server, and even on the same port. The key is to put a server like nginx in front listening on port 80, set up PHP in nginx as you normally would (using php-fpm), and set up your Node instance to listen locally on some high port like 8081.
Then just configure nginx to proxy all the Node requests through to localhost:8081 using the directory name as a filter. You're essentially setting up nginx to treat Node requests a bit like it treats PHP requests: it forwards them off to some other daemon then handles the response as it comes back. Nginx is great at this. It will add a layer of security and it will boost performance because it's so good at managing many connections at once even when the backend doesn't.
Another benefit of this is that you can have multiple separate Node instances on different domains too, and use regular nginx rules to handle it all. You can also run it alongside other app servers like apps written in Go.
You'd benefit from Nginx's configurability, its SSL and HTTP/2 support and huge speed in serving static files, and not having to serve static files from your Node app (if you don't want to).
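A sketch of that nginx setup (the domain, socket path, and the /node/ prefix are illustrative choices, not prescriptive):

```
server {
    listen 80;
    server_name example.com;  # hypothetical domain

    # PHP handled by php-fpm as usual
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;  # socket path is an assumption
    }

    # Requests under /node/ are proxied to the Node instance on a high port
    location /node/ {
        proxy_pass http://127.0.0.1:8081/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

With this in place, the same domain serves PHP pages and Node routes side by side.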
Yes, you can do it. If your server is Ubuntu or Debian, follow these steps.
Open your terminal and run:
sudo curl -sL https://deb.nodesource.com/setup_8.x | bash -
sudo apt-get install nodejs
If curl is not installed on your server:
sudo apt-get install curl
To keep your Node.js application from stopping when you exit the terminal without shutting down your instance, use a package called Forever.
npm install -g forever
If your site is uploaded and NPM and Forever are configured correctly, it is time to start the Node.js instance. If you're using Express.js, run the following command to start a Forever instance:
forever start ./bin/www
In the above command you'll notice I am feeding the ./bin/www script, because that is what npm start launches for Express.js. Be sure to change this to whatever your launch script is.
By default, the Node.js site runs at http://localhost:3000, which isn't ideal for remote visitors. We want to be able to access our site from a domain name processed by Apache. In your Apache VirtualHost file, you might have something that looks like the following:
<VirtualHost *:80>
    ServerName www.example.com
    ProxyPreserveHost on
    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/
</VirtualHost>
We are telling Apache to create a proxy that will fetch our Node.js site at http://localhost:3000 every time the www.example.com domain name is hit. All assets and pages will use the www.example.com path instead of http://localhost:3000, leading everyone to believe the website is being served no differently from any other.
However, by default, the Apache proxy modules are not enabled. You must run the following two commands if the modules are not already enabled:
a2enmod proxy
a2enmod proxy_http
You may be required to restart Apache after enabling these modules.
I got this information from The Polyglot Developer.
Yes. If you use PHP to serve pages to clients, the JavaScript code on those pages can use AJAX requests to access routes exposed by your Node server.