I want to connect an nginx container and a php-fpm container using a Unix socket, and have the php-fpm container completely disconnected from any network for security reasons.
I could of course just create the php-fpm.sock file somewhere on my Docker host and mount it into both containers, but I would like the socket file to be cleaned up automatically when the containers shut down, and I don't want to have to create it or ship it alongside my docker-compose.yml. I therefore thought about creating a named volume in docker-compose and mounting it as /var/run/. This is (I think) not a good idea, because I don't want everything in /var/run/ to be shared, only php-fpm.sock. Is there a way to create a named single-file volume in docker-compose?
If the directory structure is as follows:
.
|__docker-compose.yml
|__php-fpm.sock
Then you can use the following volume entry in your compose file:
volumes:
- ./php-fpm.sock:/var/run/php-fpm.sock
Yes: instead of mounting /var/run/, just mount /var/run/php-fpm.sock.
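For reference, a minimal docker-compose sketch of that single-file bind mount. The image names and the network_mode line are assumptions added for illustration, not part of the original answers:

version: '2'
services:
  nginx:
    image: nginx                              # assumed image
    volumes:
      - ./php-fpm.sock:/var/run/php-fpm.sock
  php:
    image: php:7-fpm                          # assumed image
    network_mode: "none"                      # keeps php-fpm off all networks, per the question
    volumes:
      - ./php-fpm.sock:/var/run/php-fpm.sock

php-fpm would additionally need to be configured to listen on /var/run/php-fpm.sock instead of port 9000, and nginx's fastcgi_pass pointed at the same socket path.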
At the moment I have set up a simple LAMP stack environment with Docker.
However, when I specify e.g. a php.ini file, it shows up as a folder and not as a file, which means that I cannot edit it.
Here is the relevant volumes section of my docker-compose file:
volumes:
- ./:/var/www/html
- ./.docker/php/custom.ini:/usr/local/etc/php/php.ini
What am I missing? php.ini should be a file, not a folder, no?
Docker creates a directory whenever it cannot find the specified file on the host; it assumes you want that path created, and a directory is the default.
Don't use relative paths in volume mounts; always use the full path, for example by prefixing your bind mount with $(pwd).
If you're a Docker Desktop user, make sure you have allowed the given path to be mounted as volume.
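As a sketch, the same mounts with absolute paths could look like this, assuming compose is started from the project root so the PWD environment variable points there:

volumes:
  - ${PWD}:/var/www/html
  - ${PWD}/.docker/php/custom.ini:/usr/local/etc/php/php.ini

Also note that if Docker has already created php.ini as a directory on the host, you will likely need to delete that directory first; the mount will not turn it back into a file.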
I have a JBoss server on Docker and I'm trying to check via PHP whether an audio file has been saved or not. If the file exists I would like to copy it to the local machine.
$location = shell_exec('docker exec -it name3 /bin/bash -c cd .**********; ls');
$filepath = $location;
if (file_exists($filepath)) {
//copy that file to local machine
}
echo $filepath;
But for $location I just get the folders inside htdocs (XAMPP). Is it possible to access files inside a Docker container from PHP running on the local server?
You can use docker cp to tell Docker to copy those files for you from the container to the local machine:
docker cp <containerId>:/file/path/within/container /host/path/target
To get the containerId you can run:
docker ps
If I understand correctly, PHP is running on your local machine and you want to check if a file exists or not inside the running docker container.
Regardless of whether or not PHP has access to the part of the filesystem outside your doc root, or whether it has permission to execute shell commands (both security concerns), I would still use a different approach.
Use a docker volume for storing the recordings.
Mount the volume on the path you are checking - i.e. /opt/Restcomm-JBoss-AS7/standalone/deployments/restcomm.war/recordings/
Set the host path of your docker volume somewhere where PHP has read access on the filesystem
Use PHP to check whether a file exists, without using any shell commands. e.g. https://secure.php.net/manual/en/function.file-exists.php
Please note that this way your recordings will also persist across restarts / crashes of the docker container, which I guess is important...
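A rough sketch of that setup; the host path /srv/recordings and the $recordingName variable are assumptions used only for illustration:

# when starting the JBoss container, mount the recordings path onto the host
docker run -v /srv/recordings:/opt/Restcomm-JBoss-AS7/standalone/deployments/restcomm.war/recordings ...

<?php
// PHP on the host now sees the same files directly, no shell_exec needed
$filepath = '/srv/recordings/' . $recordingName;   // $recordingName is hypothetical
if (file_exists($filepath)) {
    copy($filepath, __DIR__ . '/recordings/' . $recordingName);
}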
I have 3 Docker containers (nginx, php and mysql) bundled together via docker-compose.
In /etc/nginx/sites-enabled I have .conf files for 2 websites.
magento2.loc > magento2.conf
pma.loc > pma.conf
On the host (Ubuntu) I modified /etc/hosts accordingly:
127.0.0.1 > magento2.loc
127.0.0.1 > pma.loc
docker-compose.yml
version: '2'
services:
  nginx:
    (...)
    links:
      - php
    (...)
  php:
    (...)
After running docker-compose up, the server log in the console shows:
a response from both php and nginx for magento2.loc < correct
but only a response from nginx (no php) for pma.loc < incorrect
How do I make pma.loc work with PHP? Do I need multiple PHP containers for that?
The easiest way to achieve what you want is to first place your database in its own container. Then, for each website, create a container with the components needed to run that website (see the sketch after this answer).
You would then establish a link between each website container and the database container. You will of course also have to create virtual hosts for each website so they do not have port conflicts.
There is a slightly more complex solution in which you use yet another Docker container as a proxy that directs traffic to the appropriate website. The other option, which is what you're trying to do now, is to run multiple websites in the same container, in which case you have to define server blocks (Nginx's term for virtual hosts) for each website in that container.
In any case you should place your database in its own container and link to that container in the Docker run command or Docker-Compose setup.
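A minimal compose sketch of the one-container-per-site layout; the build contexts, image tag, and host ports are assumptions, not part of the original answer:

version: '2'
services:
  db:
    image: mysql:5.7            # assumed image/tag
  magento2:
    build: ./magento2           # hypothetical context with nginx + php for magento2.loc
    links:
      - db
    ports:
      - "8080:80"               # assumed host port
  pma:
    build: ./pma                # hypothetical context with nginx + php for pma.loc
    links:
      - db
    ports:
      - "8081:80"               # assumed host port

With that layout, each site has its own nginx + php pair, and both reach MySQL through the db link.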
The code of my Dockerfile:
FROM php:7.0-cli
COPY ./ /
The code of create.php:
<?php
$fp = fopen($argv[1],'wa+');
fwrite($fp,'sample');
fclose($fp);
?>
I build my image using the following command on Debian stretch:
docker build -t create-tool .
I run it using the following command on Debian stretch:
docker run create-tool php create.php /home/user1/Desktop/test
But it is not creating the file; instead it throws the error "failed to open stream".
Three scenarios I would investigate:
1. /home/user1/Desktop/test doesn't exist within the Docker container. That path exists on your host system, so it's out of reach for the container. If you want to access files on your host, you'll need to share the folders the container should access as a volume (see the sketch after this list).
2. The path exists in your container, but is, for example, a directory. You can quickly check whether the file exists and is accessible within your container by running docker run create-tool ls -lsah /home/user1/Desktop/test. This lists what the container can access at the given path.
3. Double-check the value of $argv[1] in your PHP script. You wouldn't want to get escaped characters, etc.; print_r($argv); should help.
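For the first scenario, a hedged sketch of sharing the host folder with the container (the mount target /data inside the container is an arbitrary choice):

docker run -v /home/user1/Desktop:/data create-tool php create.php /data/test

The container then writes to /data/test, which appears as /home/user1/Desktop/test on the host.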
How about using touch()? It can solve your problem!
http://php.net/manual/en/function.touch.php
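For instance, a minimal variant of create.php using touch() instead of fopen()/fwrite() (it only creates an empty file; it does not write the 'sample' content):

<?php
// create the file (empty) if it does not already exist
touch($argv[1]);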
I am using Docker Machine on Mac for a PHP application.
My code is stored on the Mac and shared to the docker-machine VM as a volume.
This is my docker-compose.yml
app:
  build: .
  volumes:
    - .:/var/www/html
My PHP application creates a folder in the shared volume and writes some files into it.
The shared volume is set to permission 777 on the Mac (which I know I shouldn't do, but even that does not solve the problem).
After running the application, I get mkdir(): Permission denied.
The newly created folder has permissions drwxr-xr-x, so my application cannot write any files into it.
Is there any way to make new folders inherit the permissions of their parent folder?
You might want to look at http://docker-sync.io. Using unison as the sync strategy, you can properly map the uid to the user in the container, which removes the permission issues.
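A rough sketch of a docker-sync.yml for that; the sync name app-sync and the uid 33 (www-data) are assumptions, so check the docker-sync documentation for the exact options:

version: "2"
syncs:
  app-sync:                    # named volume your compose file mounts instead of the plain bind mount
    src: './'
    sync_strategy: 'unison'
    sync_userid: '33'          # uid of the user PHP runs as inside the container (assumed www-data)

You would then mount the app-sync volume at /var/www/html in docker-compose.yml (declared as an external volume) and start the sync with docker-sync start.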