I have a JBoss server on Docker and I'm trying to check via PHP whether an audio file has been saved or not. If the file exists I would like to copy it to the local machine.
$location = shell_exec('docker exec -it name3 /bin/bash -c cd .**********; ls');
$filepath = $location;
if (file_exists($filepath)) {
//copy that file to local machine
}
echo $filepath;
but for $location I get the folders inside htdocs (XAMPP). Is it possible to access files inside a Docker container via PHP (which is on the local server)?
You can use docker cp to tell Docker to copy that file for you from the container to the local machine:
docker cp <containerId>:/file/path/within/container /host/path/target
To get the containerId, you can run
docker ps
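Since the original goal was to do this from PHP, here is a minimal sketch combining docker cp with a local file_exists() check. The container name name3 is taken from the question, but the in-container recording path below is a purely hypothetical placeholder:
<?php
// copy the recording from the container to the host running the docker CLI
// (/opt/recordings/audio.wav is a made-up example path, not the real one)
shell_exec('docker cp name3:/opt/recordings/audio.wav /tmp/audio.wav');
if (file_exists('/tmp/audio.wav')) {
    // the file was saved inside the container and is now on the local machine
    echo 'Recording copied to /tmp/audio.wav';
}
?>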
If I understand correctly, PHP is running on your local machine and you want to check if a file exists or not inside the running docker container.
Regardless of whether or not PHP has access to the part of the filesystem outside your doc root, or whether it has permission to execute shell commands (both security concerns), I would still use a different approach.
1. Use a docker volume for storing the recordings.
2. Mount the volume on the path you are checking - i.e. /opt/Restcomm-JBoss-AS7/standalone/deployments/restcomm.war/recordings/
3. Set the host path of your docker volume somewhere where PHP has read access on the filesystem.
4. Use PHP to check whether a file exists, without using any shell commands, e.g. https://secure.php.net/manual/en/function.file-exists.php
Please note that this way your recordings will also persist across restarts / crashes of the docker container, which I guess is important...
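For illustration, a rough sketch of how the steps above could fit together; the host path /srv/recordings and the image name are assumptions, while the container path comes from step 2:
docker run -v /srv/recordings:/opt/Restcomm-JBoss-AS7/standalone/deployments/restcomm.war/recordings <your-restcomm-image>
<?php
// hypothetical host-side location where the volume is mounted
$recording = '/srv/recordings/audio.wav';
if (file_exists($recording)) {
    copy($recording, '/tmp/audio.wav'); // plain PHP, no shell commands needed
}
?>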
Related
I am fairly new to Docker, so please bear with me if it's a basic question. I have a Laravel project on a server; the project is dockerized. What I want to do is move a file from my project to another location on the same server that is not dockerized.
So, my project is set up in the /var/www/my-project directory and I want to copy a file from my-project/storage/app/public/file.csv to /var/{destination_folder}. How can I do that in Laravel? I think my issue is not related to Laravel; it is related to Docker, which is not allowing files to be moved out of it. Please don't add Laravel or PHP file copy code snippets, I have tried plenty.
What I've tried:
1- I have tried copying the file using:
Storage::disk('local')->put('/var/{destination_folder}', 'my-project/storage/app/public/file.csv' )
but, it does not copy the file.
2- I have also tried moving the file using a bash script which I'm executing from my Laravel controller using shell_exec or Process, but it is also not working.
cp "/var/www/my-project/storage/app/public/file.csv" "/var/destination_folder"
What's happening in this solution is that it works when I run the command from the terminal, but it's not working when I call it from my controller, and it gives me
cp: cannot create regular file '/var/destination_folder/file.csv': No such file or directory
After googling the above error it seemed that this is a permission issue, so I changed the permissions of the destination folder to 775, and I also checked the user the Laravel app was running as; it gave me root when I ran whoami from the app.
Let me know how this could be achieved, thank you!
The entire point of Docker is that the container is isolated from the base host. You cannot simply copy the file out, as the container does not have access to any host disk that is not mounted into it.
The easiest option is to create a destination directory and create a bind mount as per https://docs.docker.com/storage/bind-mounts/
You would then use the following argument for your docker run:
--mount type=bind,source=/var/destination_folder,target=/some_directory_inside_your_docker
Copy the file to /some_directory_inside_your_docker and it will appear on the parent host.
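For example, assuming the image is called my-laravel-app (a placeholder) and the project lives at /var/www/my-project inside the container as well, the commands could look roughly like this:
docker run --mount type=bind,source=/var/destination_folder,target=/some_directory_inside_your_docker my-laravel-app
cp /var/www/my-project/storage/app/public/file.csv /some_directory_inside_your_docker/
The copy runs inside the container, but because the target directory is a bind mount, the file immediately shows up in /var/destination_folder on the host.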
Another option is to generate a user account on the parent host, LOCK IT DOWN HARD for security reasons, and then have a private key inside your docker that would allow your docker to SSH to the parent host (note, this won't work with every network configuration). I don't think it's a good idea when you can do bind mounts, but it would work.
I'm running a custom Docker image based on the official php-fpm image, with just minor tweaks. Inside the host machine I have a folder containing some images I would like to serve, so I am running the container and binding the host image folder to a folder in my container like /home/host/images:/var/www/storage/images. This goes well and if I bash into the container I see the files there.
My problem comes when I symlink the folder from the public folder in the container like ln -s /var/www/storage/images /var/www/public/images.
The link seems correct and I can navigate there using the shell and see for example my test.png image, but whenever I try to serve any image, for example https://my-web-app.com/images/test.png, I get a 404.
However, from inside the container's shell I've created another folder like /var/www/storage/images2, moved test.png there, updated my link like ln -s /var/www/storage/images2 /var/www/public/images2, then tested https://my-web-app.com/images2/test.png and it works!
Why can't I link my bound folder but I can link this new folder that I just created? Is there anything else to do when binding to allow this link to work?
I finally found out the cause.
The symlink was working perfectly. The problem was the container setup: one container running php-fpm and another running nginx. The main folder was bound to both containers, but the images folder was only mounted into the php-fpm container, so when the nginx container was asked for the images it found nothing, because it had no visibility of this volume.
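For anyone hitting the same issue, a sketch of what the corrected setup could look like in docker-compose; the service names and images are placeholders, and the host paths come from the question:
services:
  php-fpm:
    image: my-php-fpm-image            # placeholder for the custom php-fpm image
    volumes:
      - ./:/var/www                                  # main project folder, already on both containers
      - /home/host/images:/var/www/storage/images    # previously only on php-fpm
  nginx:
    image: nginx
    volumes:
      - ./:/var/www
      - /home/host/images:/var/www/storage/images    # now nginx can resolve the symlink too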
The code of the Dockerfile is:
FROM php:7.0-cli
COPY ./ /
The code of create.php is:
<?php
$fp = fopen($argv[1],'wa+');
fwrite($fp,'sample');
fclose($fp);
?>
I build my Docker image using the following command on Debian Stretch:
docker build -t create-tool .
I run it using the following command on Debian Stretch:
docker run create-tool php create.php /home/user1/Desktop/test
But it is not creating the file; instead it throws the error "failed to open stream".
3 scenarios I would investigate:
1. /home/user1/Desktop/test doesn't exist within the docker container. That file exists on your host system, so it's out of reach for the container.
If you want to access files on your host, you'll need to share the folders you want the container to access with a volume (see the sketch after this list).
2. The path exists in your container, but is for example a directory. You can check quickly if a file exists and/or is accessible within your container by running docker run create-tool ls -lsah /home/user1/Desktop/test.
This will then list what the container can access with the given path.
3. Double-check what the value of $argv[1] in your PHP script is. You wouldn't want to get escaped characters, etc. print_r($argv); should help.
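A minimal sketch of the volume approach from the first scenario; /data is just an arbitrary mount point inside the container, and the host path comes from the question:
docker run -v /home/user1/Desktop:/data create-tool php create.php /data/test
With the Desktop folder mounted, the script writes to /data/test inside the container, which then appears as /home/user1/Desktop/test on the host.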
How about using touch()? It can solve your problem!
http://php.net/manual/en/function.touch.php
I want to connect a nginx and a php-fpm container using a unix socket and have the php-fpm be completely disconnected from any network for security reasons.
I could of course just create the php-fpm.sock file somewhere on my docker host and mount it into both containers; however, I would like the socket file to be automatically cleaned up when the containers shut down, and not have to worry about creating / shipping it with my docker-compose.yml. I therefore thought about creating a named volume in docker-compose and mounting it as /var/run/. This is however (I think) not good, because I don't want everything in /var/run/ to be shared, but only php-fpm.sock. Is there a way to create a named single-file volume in docker-compose?
If the directory structure is:
.
|__docker-compose.yml
|__php-fpm.sock
then you can use the following volume in your compose file:
volumes:
- ./php-fpm.sock:/var/run/php-fpm.sock
Yes, instead of mounting /var/run/ just mount /var/run/php-fpm.sock
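Another variant, closer to the named-volume idea from the question (and not what the two answers above describe): mount a named volume at a dedicated socket directory in both services, so only that directory is shared rather than all of /var/run/, and the volume is removed with docker-compose down -v. The service names and the php-fpm listen path below are assumptions:
services:
  php:
    image: php:fpm
    volumes:
      - phpsocket:/var/run/php   # php-fpm would be configured to listen on /var/run/php/php-fpm.sock
  nginx:
    image: nginx
    volumes:
      - phpsocket:/var/run/php   # nginx's fastcgi_pass would point at unix:/var/run/php/php-fpm.sock
volumes:
  phpsocket: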
So I had to write a script that will separate certain images on a network file server and back them up preserving the file structure. To do this, I am mounting the file server as a folder on my linux box where the script will be running. The file server is a windows box.
The file server was mounted like this:
mount -t cifs //xxx.xxx.xxx.xxx/pictures$ -o username=imageuser,password=pa$$word images
If I run a copy command like this:
cp images/somefolder/subfolder/someimage.jpg images/differentfolder/subfolder/someimage.jpg
My question is this:
Will "someimage.jpg" be simply be copied from one location to the other on the windows machine, or will the image be downloaded to the linux box over the network and then uploaded to the new location on the windows machine?
Edit: If the file will be round-tripped, I would like to know how to avoid that, or at least to be pointed in the right direction where I can read up on a way to do it.
Neither cp nor the SMB protocol is smart enough to realize that the source and destination of the file are on the same remote server. cp will simply do its usual thing and slurp all the data from the source file (copying it to the client machine), then spit it back out in the target file on the server. So yes, it'll be a round trip through the client.
A better solution for this sort of thing is using an SSH remote command, turning it into a purely server-side operation:
ssh imageuser@x.x.x.x 'cp sourcefile targetfile'
You can still keep the fileserver mounted on your local machine to actually see what files you're dealing with, but do all the file copy/move operations via the ssh commands for efficiency. Since the server is a Windows machine, you'll probably have to install cygwin and get an ssh server running.
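Applied to the paths from the question (the server-side locations are unknown here, so /cygdrive/d/pictures is just a hypothetical example of where the share might live on the Windows box):
ssh imageuser@x.x.x.x 'cp /cygdrive/d/pictures/somefolder/subfolder/someimage.jpg /cygdrive/d/pictures/differentfolder/subfolder/'
The copy then happens entirely on the server, so no image data crosses the network to the Linux client.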