fopen does not create files when used with docker - php

Here is the code of my Dockerfile:
FROM php:7.0-cli
COPY ./ /
Here is the code of create.php:
<?php
$fp = fopen($argv[1],'wa+');
fwrite($fp,'sample');
fclose($fp);
?>
I build my Docker image using the following command on Debian Stretch:
docker build -t create-tool .
I run it using the following command on Debian Stretch:
docker run create-tool php create.php /home/user1/Desktop/test
But it does not create the file; instead it throws the error "failed to open stream".

Three scenarios I would investigate:
1. /home/user1/Desktop/test doesn't exist within the Docker container. That file exists on your host system, so it's out of reach for the container. If you want to access files on your host, you'll need to share the folders you want the container to access as a volume (see the example command after this list).
2. The path exists in your container, but is for example a directory. You can quickly check whether a file exists and/or is accessible within your container by running docker run create-tool ls -lsah /home/user1/Desktop/test. This lists what the container can access at the given path.
3. Double-check the value of $argv[1] in your PHP script. You wouldn't want escaped characters, etc.; print_r($argv); should help.
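For scenario 1, a minimal sketch of the volume approach, assuming you want the file to end up in /home/user1/Desktop on the host:

docker run -v /home/user1/Desktop:/data create-tool php create.php /data/test

The container then writes through the bind mount, and the file shows up as /home/user1/Desktop/test on the host.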

How about using touch()? It may solve your problem:
http://php.net/manual/en/function.touch.php
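If you go that route, a minimal sketch (note that, like fopen(), touch() still fails when the parent directory does not exist inside the container):

<?php
// Create an empty file at the path given as the first argument.
// Same prerequisite as fopen(): the parent directory must already
// exist inside the container.
touch($argv[1]);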

Related

Move files from a docker container to an outside folder on the same server using php

I am fairly new to Docker, so please bear with me if this is a basic question. I have a Laravel project on a server, and the project is dockerized. What I want to do is move a file from my project to another location on the same server that is not dockerized.
So, my project is set up in the /var/www/my-project directory and I want to copy a file from my-project/storage/app/public/file.csv to /var/{destination_folder}. How can I do that in Laravel? I think my issue is not related to Laravel; it is related to Docker, which does not allow moving files out of the container. Please don't add Laravel or PHP file-copy code snippets; I have tried plenty.
What I've tried:
1- I have tried copying the file using:
Storage::disk('local')->put('/var/{destination_folder}', 'my-project/storage/app/public/file.csv' )
but it does not copy the file.
2- I have also tried moving the file with a bash script executed from my Laravel controller via shell_exec or Process, but that is not working either:
cp "/var/www/my-project/storage/app/public/file.csv" "/var/destination_folder"
What's happening with this solution is that it works when I run the command from the terminal, but it does not work when I call it from my controller, which gives me
cp: cannot create regular file '/var/destination_folder/file.csv': No such file or directory
After googling the above error it seemed that this was a permission issue, so I changed the permissions of the destination folder to 775. I also checked which user the Laravel app runs as: whoami from the app gave me root.
Let me know how this could be achieved, thank you!
The entire point of Docker is that it is isolated from the base host. You cannot simply copy the file out, as the container does not have access to any disk that is not mounted into it.
The easiest option is to create a destination directory and create a bind mount as per https://docs.docker.com/storage/bind-mounts/
You would then use the following argument for your docker run:
--mount type=bind,source=/var/destination_folder,target=/some_directory_inside_your_docker
Copy the file to some_directory_inside_your_docker and it will appear on the parent host.
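Put together, a sketch might look like this (the image name my-laravel-app and the target path /output are placeholders, not values from the question):

docker run --mount type=bind,source=/var/destination_folder,target=/output my-laravel-app

and inside the container, a plain PHP copy through the mount:

<?php
// The file lands on the host as /var/destination_folder/file.csv.
copy('/var/www/my-project/storage/app/public/file.csv', '/output/file.csv');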
Another option is to create a user account on the parent host, LOCK IT DOWN HARD for security reasons, and then keep a private key inside your container that allows it to SSH to the parent host (note, this won't work with every network configuration). I don't think it's a good idea when you can do bind mounts, but it would work.

Images inside a docker volume are not served by a docker php application

I'm running a custom Docker image based on the official php-fpm image, with just minor tweaks. On the host machine I have a folder containing some images I would like to serve, so I am running the container and binding the host image folder to a folder in my container, like /home/host/images:/var/www/storage/images. This works, and if I bash into the container I see the files there.
My problem comes when I symlink the folder from the public folder in the container like ln -s /var/www/storage/images /var/www/public/images.
The link seems correct, and I can navigate there using the shell and see for example my test.png image, but whenever I try to serve an image, for example https://my-web-app.com/images/test.png, I get a 404.
However, from inside the container's shell I created another folder, /var/www/storage/images2, moved test.png into it, updated my link with ln -s /var/www/storage/images2 /var/www/public/images2, then tested https://my-web-app.com/images2/test.png, and it works!
Why can't I link my bound folder, while I can link this new folder I just created? Is there anything else to do when binding to make this link work?
I finally found out the cause.
The symlink was working perfectly. The problem was the container layout: one container runs php-fpm and another runs nginx. The main folder was bound to both containers, but the images folder was mounted only into the php-fpm container, so when the nginx container was asked for the images it found nothing; it had no visibility of that volume.
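In other words, the images folder has to be bound into both containers, not just the php-fpm one. A minimal sketch with plain docker run (image and container names are placeholders, not values from the question):

docker run -d --name app-fpm -v /home/host/images:/var/www/storage/images my-php-fpm
docker run -d --name app-nginx -v /home/host/images:/var/www/storage/images my-nginx

With the same bind mount in both containers, nginx can resolve the symlinked path and serve the files.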

Access files inside docker container over php

I have a JBoss server on Docker and I'm trying to check via PHP whether an audio file has been saved. If the file exists, I would like to copy it to the local machine.
$location = shell_exec('docker exec -it name3 /bin/bash -c cd .**********; ls');
$filepath = $location;
if (file_exists($filepath)) {
//copy that file to local machine
}
echo $filepath;
But for $location I get the folders inside htdocs (XAMPP). Is it possible to access files inside a Docker container from PHP (which runs on the local server)?
You can use docker cp to have Docker copy those files for you from the container to the local machine:
docker cp <containerId>:/file/path/within/container /host/path/target
To get the containerId, you can run:
docker ps
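For example, using the container name name3 from the question (the file path inside the container is hypothetical):

docker cp name3:/opt/recordings/audio.wav /tmp/audio.wav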
If I understand correctly, PHP is running on your local machine and you want to check if a file exists or not inside the running docker container.
Regardless of whether or not PHP has access to the part of the filesystem outside your doc root, or whether it has permission to execute shell commands (both security concerns), I would still use a different approach.
1. Use a Docker volume for storing the recordings.
2. Mount the volume on the path you are checking, i.e. /opt/Restcomm-JBoss-AS7/standalone/deployments/restcomm.war/recordings/.
3. Set the host path of your Docker volume somewhere where PHP has read access on the filesystem.
4. Use PHP to check whether a file exists, without using any shell commands, e.g. with file_exists() (https://secure.php.net/manual/en/function.file-exists.php); a sketch follows after this answer.
Please note that this way your recordings will also persist across restarts / crashes of the docker container, which I guess is important...
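A minimal sketch of step 4, assuming the host path of the volume is /var/recordings and PHP can read it (both assumptions, not values from the question):

<?php
// Check for the recording on the host side of the docker volume;
// no shell commands or docker exec needed.
$recording = '/var/recordings/audio.wav'; // hypothetical file name
if (file_exists($recording)) {
    // Copy it wherever the local machine needs it.
    copy($recording, '/home/user/recordings/audio.wav');
}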

Specify umask in Dockerfile

I'm using Mac OS, and recently I've been trying to set up a development environment with Docker. Docker seems nice, but currently I'm facing the following problem:
PROBLEM:
Whenever PHP (in the Docker container) creates a folder with a subfolder, Apache returns a 500 error. Apache log: "... Can't create directory app/../../folder/subfolder/subsubfolder/"
I assume that this is caused by the umask environment variable, because whenever a folder is created, it doesn't have write permission. Because of that, subfolders can't be created, and so on.
To test this out, I wrote a little test-script (umask-test.php):
<?php
// umask(0) sets the process umask to 0 and returns the previous
// value as a plain (decimal) integer.
$umask = umask(0);
echo "umask = $umask <br>";
And bingo! Every time I build and run the container and open the script in the browser, the result is:
umask = 18
(18 in decimal is 022 in octal, i.e. the usual default umask.)
GOAL:
So I would like umask always to be set to 000 (zero).
I figured the best place to set this would be the Dockerfile, so in the Dockerfile I stated the following:
FROM ubuntu:trusty
...
ENV UMASK 0
...
The problem is that this has no effect:
the test script still outputs 18 for umask
folders are still created with the wrong permission
subfolders can't be created.
QUESTIONS:
What am I doing wrong?
How can umask in docker containers always be set to zero?
How can I permit the apache-user (www-data) to create folders that always have write-permissions and in which subfolders can be created?
Problem solved
Since this is hopefully helpful for others, I want to provide the answer to my own question:
The problem is not Docker or the umask settings in the container. The problem is the Mac and the umask setting on Mac OS!
Example: if umask on the Mac is set to 022, then folders created by Docker on mounted directories have permissions 755. As a result, no subfolders can be created.
This link provides information on how to set the umask on the Mac: https://support.apple.com/en-us/HT201684
So if you type in your terminal
sudo launchctl config user umask 000
and reboot, all your folders will be created with 777 permissions, including the folders mounted into Docker.
Before, I was asking myself why running containers (initialized with run -v ...) were not really working. Now everything seems to work! :-)
According to the Docker docs, environment variables you set with ENV do persist into the running container, but Apache is probably very picky about which ones it pays attention to on startup, on security grounds. (Note also that umask is a per-process attribute, not an environment variable, so ENV UMASK by itself changes nothing.)
Try this answer.
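A commonly used alternative (my suggestion, not from the linked answer): since umask is a per-process attribute, set it in an entrypoint script that wraps the main process.

#!/bin/sh
# entrypoint.sh -- set the umask, then hand off to the real command
umask 000
exec "$@"

and in the Dockerfile (the CMD shown assumes an Apache image):

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["apache2ctl", "-D", "FOREGROUND"]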

Docker data-only container permissions

I'm developing a PHP app and I'm stuck with Docker volumes. I've tried to create a separate data-only container for my PHP app but couldn't make it work because of permission issues... I've googled and read all I could, and the closest-to-working solution is described here.
But it's a bit old and I couldn't make it work either.
I've created a repo with a simple test code:
https://github.com/oleynikd/docker-volumes-permissions/
This simple project uses docker-compose to run 2 containers:
- one for php-fpm
- one for nginx
The PHP code is mapped into the php container and lives in the /src directory.
The main problem is that PHP has no rights to write to the code's directory, because it runs as the www-data user while the code's directory belongs to the user with id 1000 and group staff.
After running docker-compose up and visiting /index.php you'll see the warning and the ls -lah output that shows the permission issue.
I've tried to fix this by adding RUN mkdir -p /src && chown -R www-data:www-data /src to the php Dockerfile, but that didn't help.
So my questions are:
Why are the owner and group of /src 1000:staff?
How can I fix this?
I'm sure the solution is simple but I can't find it.
Please help!
P.S. Feel free to contribute to repo if you know how to fix this issue.
The owner of the files is 1000:staff because 1000:1000 is the uid:gid of the files' owner on the host machine.
You could avoid this by using volumes without specifying a host path for the files, and adding the files with a COPY instruction in the Dockerfile; a minimal sketch follows. But maybe you need easy access to these files on the host?
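A minimal sketch of the COPY approach (the base image tag and the src/ layout are assumptions, not taken from the repo):

FROM php:7.0-fpm
# Bake the code into the image instead of bind-mounting it from the host,
# then give www-data ownership so PHP can write to it.
COPY src/ /src
RUN chown -R www-data:www-data /src

Combined with a named volume (one with no host path) in docker-compose, the files keep the ownership set in the image.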
For development environments (and development environments only), I use a hacky solution that I described in this answer.
