I'm trying to build a PHP site and I want to test my PHP files without uploading them to my host, basically testing them on my own machine before I upload them. How do I do that?
PHP 5.4 and later ship with a built-in web server.
You simply run this command from the terminal:
cd path/to/your/app
php -S 127.0.0.1:8000
Then in your browser go to http://127.0.0.1:8000 and boom, your system should be up and running. (There must be an index.php or index.html file for this to work.)
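To check that the server is serving PHP correctly, a minimal index.php is enough (a hypothetical example, not part of the original answer):
<?php
// index.php - a minimal page to verify the built-in server works
echo "Hello from the PHP built-in server, running PHP " . PHP_VERSION . "!";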
You could also add a simple router script:
<?php
// router.php
if (preg_match('/\.(?:png|jpg|jpeg|gif)$/', $_SERVER["REQUEST_URI"])) {
    return false; // serve the requested resource as-is.
} else {
    require_once('resolver.php'); // hand everything else to your front controller
}
?>
And then run the command
php -S 127.0.0.1:8000 router.php
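The answer doesn't show what resolver.php contains; it is assumed to be your front controller. A minimal hypothetical sketch:
<?php
// resolver.php - hypothetical front controller: map the request path to a page name
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$page = ($path === '/') ? 'home' : trim($path, '/');
echo "<p>You requested the '" . htmlspecialchars($page) . "' page.</p>";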
References:
https://www.php.net/manual/en/features.commandline.webserver.php
https://www.php.net/manual/en/features.commandline.options.php
Install and run XAMPP: http://www.apachefriends.org/en/xampp.html
This is a simple, surefire way to run your PHP server locally:
php -S 0.0.0.0:<PORT_NUMBER>
Where PORT_NUMBER is an integer from 1024 to 49151
Example: php -S 0.0.0.0:8000
Notes:
If you use localhost rather than 0.0.0.0 you may hit a connection-refused error.
If you want to make the web server accessible on any interface, use 0.0.0.0.
If a URI request does not specify a file, then either index.php or index.html in the given directory is returned.
Given the following file (router.php)
<?php
// router.php
if (preg_match('/\.(?:png|jpg|jpeg|gif)$/', $_SERVER["REQUEST_URI"])) {
    return false; // serve the requested resource as-is.
} else {
    echo "<p>Welcome to PHP</p>";
}
?>
Run this ...
php -S 0.0.0.0:8000 router.php
... and navigate in your browser to http://localhost:8000/ and the following will be displayed:
Welcome to PHP
Reference:
Built-in web server: https://www.php.net/manual/en/features.commandline.webserver.php
I often use the following command to spin up my PHP Laravel application:
$ php artisan serve --port=8080
or
$ php -S localhost:8080 -t public/
In the above commands:
- artisan is the command-line interface included with Laravel; its serve command starts the built-in PHP server (here on port 8080).
To run with the built-in web server:
php -S <addr>:<port> -t <docroot>
Here,
-S : switch to run with the built-in web server.
-t : switch to specify the document root for the built-in web server.
I use WAMP. One easy install wizard, tons of preconfigured modules for Apache and PHP, and it's easy to turn them on and off to match your remote config.
If you want an all-purpose local development stack for any operating system where you can choose from different PHP, MySQL and Web server versions and are also not afraid of using Docker, you could go for the devilbox.
The devilbox is a modern and highly customisable dockerized PHP stack supporting full LAMP and MEAN and running on all major platforms. The main goal is to easily switch and combine any version required for local development. It supports an unlimited number of projects for which vhosts and DNS records are created automatically. Email catch-all and popular development tools will be at your service as well. Configuration is not necessary, as everything is pre-setup with mass virtual hosting.
Getting it up and running is pretty straightforward:
# Get the devilbox
$ git clone https://github.com/cytopia/devilbox
$ cd devilbox
# Create docker-compose environment file
$ cp env-example .env
# Edit your configuration
$ vim .env
# Start all containers
$ docker-compose up
Links:
Github: https://github.com/cytopia/devilbox
Website: http://devilbox.org
Install XAMPP. If you're running MS Windows, WAMP is also an option.
If you are on a Mac, use MAMP.
AppServ is a small Windows program that runs:
Apache
PHP
MySQL
phpMyAdmin
It also gives you start and stop buttons for Apache, which I find very useful.
If you are using Windows, then the WPN-XM Server Stack might be a suitable alternative.
Use Apache Friends XAMPP. It will set up Apache HTTP Server, PHP 5 and MySQL 5 (as far as I know, there's probably more than that). You don't need to know how to configure Apache (or any of the modules) to use it.
You will have an htdocs directory which Apache will serve (accessible by http://localhost/) and should be able to put your PHP files there. With my installation, it is at C:\xampp\htdocs.
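Once XAMPP is running, you can drop a simple test file into htdocs to confirm PHP works (test.php is a hypothetical name):
<?php
// C:\xampp\htdocs\test.php - visit http://localhost/test.php
phpinfo(); // dumps the PHP configuration served by Apache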
If you have a local machine with the right software (a web server with PHP support), there's no reason why you can't do as you describe.
I'm doing it at the moment with XAMPP on a Windows XP machine, and (at home) with Kubuntu and a LAMP stack.
Another option is the Zend Server Community Edition.
You can create your own server in PHP using code as well!
<?php
// A toy HTTP server built directly on PHP sockets.
set_time_limit(0);
$address = '127.0.0.1';
$port = 4444;
// <-- Starts Server
$sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_bind($sock, $address, $port) or die('Could not bind to address');
socket_listen($sock); // listen once, before the accept loop
echo "\n Server is running on port $port waiting for connection... \n\n";
while (true) {
    $client = socket_accept($sock);
    $input = socket_read($client, 2048); // read up to 2 KB of the request
    $incoming = explode("\r\n", $input);
    $fetchArray = explode(" ", $incoming[0]); // request line, e.g. "GET /index.html HTTP/1.1"
    $file = $fetchArray[1];
    if ($file == "/") {
        $file = "src/browser.php"; // this file is served when the server root is requested
    } else {
        $filearray = explode("/", $file);
        $file = $filearray[1];
    }
    echo $fetchArray[0] . " Request " . $file . "\n";
    // <-- Control Header
    $header = "HTTP/1.1 200 OK\r\n" .
        "Date: Fri, 31 Dec 1999 23:59:59 GMT\r\n" .
        "Content-Type: text/html\r\n\r\n";
    $content = file_get_contents($file);
    $output = $header . $content;
    socket_write($client, $output, strlen($output));
    socket_close($client);
}
Run this code, then open your browser at http://localhost:4444 (or whichever port you chose).
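You can also test it from a second terminal (assuming src/browser.php exists relative to where you started the script):
curl http://127.0.0.1:4444/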
A clean way to do this, even if you have existing servers on your machine, is to use Docker. Run from any terminal via docker run with a single line:
docker run --name=php -d -it -p 80:80 --mount type=bind,source='/absolute/path/to/your/php/web/root/folder/',target=/app webdevops/php-nginx-dev
You will now have a running container named php serving requests on your localhost, port 80. You should be able to see your PHP scripts in any browser using the URL http://127.0.0.1
Notes:
If you don't have Docker installed, instructions for Debian/Ubuntu and Windows 10+ are at the end. It can be installed on Windows 7, but it's quite annoying and not worth it. For Windows 7, if you must, I'd just install Uniserver or XAMPP or something like that.
You can confirm that the container is live by running docker ps in a terminal on the host machine.
In order to keep your app/code modifications after the container is terminated/removed, the web root is bound to the host folder you specify in the
--mount source='[/local/path]' parameter. Note: Because the folder is bound to the container, changes you make in the container will also be made in the host folder.
Logs can be viewed using the following command (--follow is optional, ctrl+c to exit):
docker logs php --follow
The web root folder in the container is /app. This may be helpful if you don't need to save anything and don't feel like specifying a bind mount in the docker run command.
The port is specified using the -p [host port]:80 parameters. You may have to explicitly specify -p 80:80 in order to be able to connect to the container from a web browser (at least on Windows 10).
To access the container's bash terminal run this from the host machine (type exit to return to host):
docker exec -it php /bin/bash
You can install packages in the container's bash terminal the same way that you would on a native Debian/Ubuntu box (e.g. apt install -y nano).
Composer is already installed (run composer -v from container's terminal to inspect)
To launch an additional container, specify a different host port and container name using the --name=[new_name] and -p [host port]:80 parameters.
If you need a database or other server, do the same thing with a docker image for MySQL or MariaDB or whatever you need. Just remember to bind the data folder to a host folder so you don't lose it if you accidentally delete your docker image(s).
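For example, a hypothetical MariaDB container with its data folder bound to the host might look like this (container name, password, and paths are placeholders to adjust):
docker run --name=mariadb -d -p 3306:3306 -e MARIADB_ROOT_PASSWORD=secret --mount type=bind,source='/absolute/path/to/your/db/data/',target=/var/lib/mysql mariadb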
How to install Docker:
Debian/Ubuntu as root (or add sudo before each of these commands):
apt-get update
apt install -y ca-certificates curl gnupg lsb-release
mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
chmod a+r /etc/apt/keyrings/docker.gpg
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
service docker start
systemctl enable docker
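You can then confirm that the installation works by running the standard hello-world image:
docker run hello-world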
Windows 10+ (tested on 10, should work on >10):
Use Chocolatey, a command-line package manager for Windows. (Chocolatey also has a GUI if you insist.) Once installed, run:
choco install -y docker-desktop
Mac, Chromebook, etc:
You're on your own. But we believe in you.
Related
I have a PHP server that I need to launch in a Docker image alongside a Python service. Both of them need to be in the same image. At first, I wrote the Dockerfile to start the PHP server by following a simple guide I found online, and I came up with this:
FROM php:7-apache
COPY ./www/ /var/www/html
WORKDIR /var/www/html
EXPOSE 70
Then, because I need a third service running on a second container, I created the following docker-compose file:
version: '3.3'
services:
  web:
    build: .
    image: my-web
    ports:
      - "70:80"
  secondary-service:
    image: my-service
    ports:
      - "8888:8888"
Using only that, the website works just fine (except for the missing service in the web container). However, if I want to start a service inside the web container alongside the web server, I need to start the website manually from a bash script, since a Dockerfile can have only one CMD entry. This is what I tried:
FROM php:7-apache
COPY ./www/ /var/www/html
RUN mkdir "/other_service"
COPY ./other_service /other_service
RUN apt-get update && bash /other_service/install_dependencies.sh
WORKDIR /var/www/html
EXPOSE 70
CMD ["bash", "/var/www/html/launch.sh"]
And this is launch.sh:
#!/bin/bash
(cd /other_service && python3 /other_service/start.py &) # CWD needs to be /other_service/
php -S 0.0.0.0:70 -t /var/www/html
And that also starts the server without problems, along with other_service.
However, when I go to my browser (in the host) and browse to http://localhost:70, I get the error "Connection reset". The same happens when I try to do a request using curl localhost:70, which results in curl: (56) Recv failure: Connection reset by peer.
I can see in the log of the web that the php test server is running:
PHP 7.4.30 Development Server (http://0.0.0.0:70) started
And if I open a shell inside the container and I run the curl command inside of it, it gets the webpage without any problems.
I have been searching similar questions around, but none of them had an answer, and the ones that did didn't work.
What is going on? Shouldn't manually starting the server from a bash script work just fine?
Edit: I've just tried to only start the PHP server, as below, and it doesn't let me connect to the webpage either:
#!/bin/bash
#(cd /other_service && python3 /other_service/start.py &) # CWD needs to be /other_service/
php -S 0.0.0.0:70 -t /var/www/html
I found the issue. It was as easy as starting the Apache server too:
#!/bin/bash
(cd /other_service && python3 /other_service/start.py &) # CWD needs to be /other_service/
/etc/init.d/apache2 start
php -S 0.0.0.0:70 -t /var/www/html
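Since the php:7-apache image normally runs Apache in the foreground as its default command, an alternative sketch (untested, same assumptions as the question's launch.sh) is to start the background service and then exec Apache as the container's main process:
#!/bin/bash
# launch.sh - start the Python service in the background, then hand PID 1 to Apache
(cd /other_service && python3 /other_service/start.py &)
exec apache2-foreground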
I am trying to execute this command using the shell_exec function from PHP:
shell_exec("cd /home/ec2-user; ./certbot-auto -n --apache -d mydomain.com");
When I execute it directly from the terminal, the result is this:
Requesting to rerun ./certbot-auto with root privileges...
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator apache, Installer apache
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for mydomain.com
Waiting for verification...
Cleaning up challenges
Created an SSL vhost at /etc/httpd/conf.d/vhost-le-ssl.conf
Deploying Certificate to VirtualHost /etc/httpd/conf.d/vhost-le-ssl.conf
But when I execute it in my app, the result is only the first line:
Requesting to rerun ./certbot-auto with root privileges...
How can I fix this?
Notes:
I am trying to install Certbot SSL certificates.
My app is on Amazon AWS.
I do not have much knowledge of servers.
I am using Laravel 5.5 in my app.
Take a look at sudo in php exec() or google for "php sudoers".
Your script is running as the apache user and doesn't have root privileges, so you need to make an entry in /etc/sudoers (or put a file in /etc/sudoers.d) to be able to run certbot-auto as root from your script.
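For illustration, a hypothetical /etc/sudoers.d entry might look like this (assuming the web server runs as the apache user; always edit with visudo):
# /etc/sudoers.d/certbot - let the apache user run certbot-auto as root, no password prompt
apache ALL=(root) NOPASSWD: /home/ec2-user/certbot-auto
You would then prefix the command with sudo in the PHP call, e.g. shell_exec("cd /home/ec2-user; sudo ./certbot-auto -n --apache -d mydomain.com");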
I am trying to run the CLI version of this PHP database Search and Replace script, but I think this is a more general MySQL problem relating to Mac OS X and MAMP. I receive the following error whenever I attempt to run the CLI script locally:
db: SQLSTATE[HY000] [2002] Connection refused
Here is the command I'm running:
./srdb.cli.php -h 127.0.0.1 -u root -n mydbname -proot -c utf\-8 -s mywebsite.com -r dev.mywebsite.com
What I've tried
I am able to connect to mysql using these settings, no problem, using mysql -u root -proot etc...
Swapping 127.0.0.1 for localhost gives the same error.
All my my.cnf files are blank.
Apache and MySQL are running fine.
I have succeeded in replicating this problem on another Mac running MAMP
I am using this mysql: /Applications/MAMP/Library/bin/mysql
And this php: /Applications/MAMP/bin/php/php5.3.28/bin/php
Anybody have any ideas? Thanks!
Edit
Here is the source code showing how the script connects to MySQL:
https://github.com/interconnectit/Search-Replace-DB/blob/master/srdb.cli.php
which in turn imports this:
https://github.com/interconnectit/Search-Replace-DB/blob/master/srdb.class.php
As stated in my comment already, chances are that you're not running the PHP binary you thought you were running. Even if the MAMP php binary is in your path, the shebang line in srdb.cli.php reads #!/usr/bin/php, and that points to the Apple-provided php binary.
So if you invoke the script with the full path to your MAMP php binary, the problem should be avoided:
/Applications/MAMP/bin/php/php5.3.28/bin/php srdb.cli.php -h 127.0.0.1 -u root -n mydbname -proot -c utf\-8 -s mywebsite.com -r dev.mywebsite.com
Another solution might be to replace the shebang line with:
#!/usr/bin/env php
This works only if the MAMP binary comes before /usr/bin in your $PATH. Using #!/usr/bin/env php ensures, however, that you're always using the same binary whether you invoke the script via ./srdb.cli.php or with php srdb.cli.php.
Stop MySQL:
sudo service mysql stop
And then start it again:
sudo service mysql start
That resolved the problem for me.
To add onto z80crew's brilliant solution: for anyone else unfamiliar or uncomfortable with altering path variables, specifying the full paths to both the MAMP PHP binary and the Search-Replace-DB script in the CLI command provided by interconnect/it solved the problem for me. I put the strings to search for and replace with in quotes. I also increased the PHP timeout limit in wp-config.php with set_time_limit(3000);.
I kept the server name consistent between the options passed to the script and what's in my wp-config.php file (using localhost in wp-config.php and in the script as well).
/Applications/MAMP/bin/php/php7.4.2/bin/php /Applications/MAMP/htdocs/test/Search-Replace-DB-master/srdb.cli.php -h localhost -u root -proot --port 8889 -n test -s "http://olddomain.com" -r "http://localhost:8888/test" -v true
I'm trying to set up a centralized server which is in charge of monitoring my other servers. This centralized server needs to be able to collect particular information/metrics about a specific server (such as df -h and service httpd status); but it also needs to be able to restart Apache if needed.
If it wasn't for the Apache restart, I could write a listening script to provide a means of giving the centralized server the data it needs without having to SSH in. But because I also want it to be able to restart Apache, it needs to be able to log in and initiate scripts through a combination of PHP and Bash.
At the moment, I'm using PHP's shell_exec to execute this (very simple) Bash script:
#!/bin/sh
ssh -i /path/to/keyFile.pem ec2-user@x.x.x.x;
I'm accessing the external server (which is an EC2 instance) through a private IP. If I launch this script, I can log in without any problem - the problem comes, however, when I then want to send back the output for commands like the ones I've listed above.
In a Bash script, how would I output a command like df -h after SSHing into another server? Is this possible?
There is a PECL extension for SSH.
Other than that you'll probably want to either use the &$output parameter of exec() to grab the output:
$output = array();
exec('bash myscript.sh', $output);
print_r($output);
Or use output redirection
$output = '/path/to/output.txt';
exec("bash myscript.sh > $output");
if( file_exists($output) && is_readable($output) ) {
$mydata = file_get_contents($output);
}
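If the script writes errors to stderr, you may also want to merge them into the captured output (a standard shell redirection, not specific to this script):
$output = array();
exec('bash myscript.sh 2>&1', $output); // 2>&1 sends stderr to stdout so errors are captured too
print_r($output);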
And, of course, this all assumes your script looks like what jeroen has in his answer.
You could use:
ssh -i /path/to/keyFile.pem ec2-user@x.x.x.x 'df -h'
or for multiple commands:
ssh -i /path/to/keyFile.pem ec2-user@x.x.x.x 'ls -al ; df -h'
That works from the command line, but I have not tried it via PHP's exec (nor on Amazon, to be honest...).
If you're doing ssh I'd suggest phpseclib, a pure PHP SSH implementation. It's a ton more portable than the PECL SSH extension and more reliable too.
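For illustration, a minimal sketch with phpseclib 3 (installed via composer require phpseclib/phpseclib; host, user, and key path taken from the question):
<?php
require 'vendor/autoload.php';

use phpseclib3\Net\SSH2;
use phpseclib3\Crypt\PublicKeyLoader;

// Load the private key and open the SSH connection
$key = PublicKeyLoader::load(file_get_contents('/path/to/keyFile.pem'));
$ssh = new SSH2('x.x.x.x');
if (!$ssh->login('ec2-user', $key)) {
    exit('Login failed');
}
echo $ssh->exec('df -h'); // run the remote command and print its output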
I have a Samba share from Windows network mounted to a directory on my Linux based Webserver. I have mounted the directory as follows:
mount -t cifs -o username=admin,password='password',domain=mydomain.local,file_mode=0644,dir_mode=0777,uid=client_user,gid=client_user '//192.168.0.x/d$' /home/client_user/mnt
The mount works and I can browse through the files and directories in the OS. However, I want to be able to access the share through a PHP script run from the browser, and any file operations on the share result in a permission-denied error. I have experimented a little and replaced the uid and gid parameter values with apache, but still no luck.
Any suggestions are much appreciated
Edit
In further tests I have created a file with the following code:
if (is_readable('/path/to/mnt')) {
    echo 'Readable';
} else {
    echo 'Not';
}
Running this from the command line on the server results in Readable being printed. I have run this as root and as a normal user on the server, but it will not work from the browser.
So after some trial and error, I worked out that SELinux was not permitting httpd access to the folders.
Running this command allows httpd to access cifs:
setsebool -P httpd_use_cifs on
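(You can check the boolean's current value first with getsebool httpd_use_cifs.)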
However, further investigation revealed that I could set the httpd context on just the mounted folder. So I unmounted the drive and amended my mount command to include:
context="system_u:object_r:httpd_sys_rw_content_t:s0",
The full command:
mount -t cifs -o context="system_u:object_r:httpd_sys_rw_content_t:s0",username=admin,password='password',domain=mydomain.local,file_mode=0644,dir_mode=0777,uid=client_user,gid=client_user '//192.168.0.x/d$' /home/client_user/mnt