Call a Docker container from outside Docker - PHP

I have a project running under Apache, without Docker. However, in this project I have a PDF export, and I want to run this PDF export inside a Docker container. How do you do that?
Here is my docker-compose.yaml:
version: "3.3"
services:
php:
image: registry.gitlab.com/nodevo/keneo/php:latest
volumes:
- .:/srv:rw,cached
- ~/.ssh/id_rsa:/root/.ssh/id_rsa:ro
- ~/.composer:/tmp
browsershot:
image: ouranoshong/browsershot
links:
- chrome
chrome:
build: ./docker/chrome
cap_add:
- SYS_ADMIN
ports:
- '9223:9222'
The Browsershot service downloads the PDF and has node and npm. To do that, it simulates a browser, so I also have a chrome service, but I don't know if this is the right approach.
These two commands work in a terminal:
docker-compose run --rm browsershot node -v
docker-compose run --rm browsershot npm -v
However, Browsershot needs the paths to the node and npm executables, which in my case are the two commands above, like this:
/**
 * @param string $nodePath
 * @param string $npmPath
 */
public function __construct(string $nodePath, string $npmPath)
{
    $this->browserShot = new Browsershot();
    $this->browserShot
        ->setNodeBinary($nodePath)
        ->setNpmBinary($npmPath)
        ->showBackground()
        ->setOption('fullPage', true)
        ->setOption('args', ['--disable-web-security'])
        ->emulateMedia('screen')
    ;
}
The variable $nodePath is docker-compose run --rm browsershot node and the variable $npmPath is docker-compose run --rm browsershot npm.
This code works with executables on my local machine, such as /usr/local/bin/node and /usr/local/bin/npm.
But when I launch the export with the Docker commands, I get this error:
HTTP 500 Internal Server Error
The command "PATH=$PATH:/usr/local/bin NODE_PATH=`docker-compose run --rm browsershot node docker-compose run --rm browsershot npm root -g` docker-compose run --rm browsershot node '/media/vdufour/DATA/Sites/keneo/vendor/spatie/browsershot/src/../bin/browser.js' '{"url":"file:\/\/\/tmp\/639306024-0718333001579852859\/index.html","action":"pdf","options":{"args":["--disable-web-security"],"viewport":{"width":800,"height":600},"fullPage":true,"emulateMedia":"screen","delay":2000,"displayHeaderFooter":false,"printBackground":true}}'" failed.
Exit Code: 1(General error)
Working directory: /media/vdufour/DATA/Sites/keneo/public
Output:
================
Error Output:
================
Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.

When using docker-compose, I just "ssh" into the container's shell like this:
docker-compose exec php /bin/bash
(where php is the container in my docker-compose.yml)
Remember though that for every terminal window you open, you also need to run the following to load in the environment vars:
eval $(docker-machine env)
Without doing that, you'll get the Couldn't connect to Docker daemon message.
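As a minimal check, assuming the machine is named default, you can confirm the client actually reaches the daemon before running any docker-compose commands:
# Load the Docker Machine environment into the current shell (machine name assumed to be "default")
eval $(docker-machine env default)

# Verify that the client can reach the daemon; this is the step that fails with
# the "Couldn't connect to Docker daemon" message when the variables are missing
docker info > /dev/null && echo "Docker daemon reachable"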

Try running the commands with bash:
docker-compose run --rm browsershot bash -c "npm && node && some_command..."
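Another option, just a sketch rather than something from the question, is to hide the docker-compose invocation behind small wrapper scripts so that Browsershot's setNodeBinary()/setNpmBinary() can point at a single executable path. Assuming a hypothetical /usr/local/bin/docker-node wrapper and that the user running PHP is allowed to talk to the Docker daemon:
#!/usr/bin/env bash
# /usr/local/bin/docker-node (hypothetical) - forwards all arguments to the
# node binary inside the browsershot service
cd /path/to/project || exit 1   # directory containing docker-compose.yaml
exec docker-compose run --rm browsershot node "$@"
A matching docker-npm wrapper would do the same with npm; both paths can then be passed to the constructor shown in the question. The web server user still needs access to the Docker daemon, which is what the error output above is really complaining about.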

How to set an environment variable for an apache2 -DFOREGROUND [duplicate]

Here is my Dockerfile:
FROM ros:kinetic-ros-core-xenial
CMD ["bash"]
If I run docker build -t ros . && docker run -it ros, and then from within the container echo $PATH, I'll get:
/opt/ros/kinetic/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
If I exec into the container (docker exec -it festive_austin bash) and run echo $PATH, I'll get:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Why are the environment variables different? How can I get a new bash process on the container with the same initial environment?
The ENTRYPOINT command is only invoked on docker run, not on docker exec.
I assume that this /ros_entrypoint.sh script is responsible for adding stuff to PATH. If so, then you could do something like this for docker exec:
docker exec -it <CONTAINER_ID> /ros_entrypoint.sh bash
docker exec only gets the environment variables defined in the Dockerfile with the ENV instruction. With docker exec [...] bash you additionally get those defined somewhere for bash (for example in .bashrc).
Add this line to your Dockerfile:
ENV PATH=/opt/ros/kinetic/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
or shorter:
ENV PATH=/opt/ros/kinetic/bin:$PATH
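To verify the fix, a rough sketch (the image tag ros matches the question; the container name is just an example):
# Rebuild the image with the added ENV instruction
docker build -t ros .

# Start a container with a long-running command so it stays up
docker run -d --name ros-test ros tail -f /dev/null

# The exec'd process now sees the PATH from the ENV instruction,
# including /opt/ros/kinetic/bin
docker exec ros-test printenv PATH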
This is an old question, but since it's where Google directed me, I thought I'd share the solution I ended up using.
In your entrypoint script add a section similar to this:
cat >> ~/.bashrc << EOF
export PATH="$PATH"
export OTHER="$OTHER"
EOF
Once you rebuild your image you can exec into your container (notice bash is invoked in interactive mode):
docker run -d --rm --name container-name your_image
docker exec -it container-name /bin/bash -i
If you echo $PATH now, it should be the same as what you have set in .bashrc.
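For context, that snippet usually lives in a custom entrypoint script; a minimal sketch, with an assumed file name of docker-entrypoint.sh, could look like this:
#!/usr/bin/env bash
# docker-entrypoint.sh (assumed name) - persist the runtime environment so that
# later "docker exec ... bash -i" sessions see the same variables
set -e

cat >> ~/.bashrc << EOF
export PATH="$PATH"
EOF

# Hand control over to the container's main process
exec "$@"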

Is there any way to start a Docker container without a service?

I have a PHP container which needs php-fpm to be started every time I start the container. Now, because of a wrong configuration in the php-fpm config file, fpm does not get started and so the container cannot start. Is there any way that I can start the container without php-fpm so that I can fix the config file?
The container error is as follows:
[04-Sep-2020 13:47:30] ERROR: [/usr/local/etc/php-fpm.conf:7] value is NULL for a ZEND_INI_PARSER_ENTRY
[04-Sep-2020 13:47:30] ERROR: failed to load configuration file '/usr/local/etc/php-fpm.conf'
[04-Sep-2020 13:47:30] ERROR: FPM initialization failed
There are two ways to fix the image. Since I can't find image digitalocean/php, I'll use php:7.4-fpm in my example.
First way:
Copy the file from the container and use it to build your own image.
Create a Dockerfile:
FROM php:7.4-fpm
COPY ./php-fpm.conf /usr/local/etc/php-fpm.conf
Then:
docker run --detach --name php php:7.4-fpm tail -f /dev/null
docker cp php:/usr/local/etc/php-fpm.conf php-fpm.conf
docker stop php
docker rm -v php
# Edit php-fpm.conf
docker build --tag myphp-fpm .
docker run --detach --name php myphp-fpm
and you get a running container based on the fixed image.
Second way:
Run a shell using the broken image, fix the file, and create a new image from the shell container:
docker run -it --name php php:7.4-fpm bash
# Edit /usr/local/etc/php-fpm.conf
# If you install any additional tools remember to remove them afterwards
# and clean any caches
# Once you're done exit the shell, thus stopping the container
docker commit -a "you" -m "/usr/local/etc/php-fpm.conf fix" php myphp-fpm
docker stop php
docker rm -v php
docker run --detach --name php myphp-fpm
and again you get a running container based on the fixed image.
Of course, you can run your new image in whatever way you ran the original image in the beginning.
I recommend the first way as it's way easier to edit the file outside the container.
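Whichever way you pick, you can also validate the configuration before relying on it again; php-fpm has a built-in config test, so a quick check against the fixed image could be:
# Test the fixed configuration without starting any php-fpm workers
docker run --rm myphp-fpm php-fpm -t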

Dockerizing a command line application

I have created a command line application using Symfony 3.4 which doesn't need to display any web page.
I generally run the commands like the following:
php bin/console MY_COMMAND_NAME
I want to dockerize the application and share it with others, so inside the root directory of my project I created a docker-compose.yml file, which looks like the following:
version: "3.3"
services:
web:
image: php:7.3-cli
Then I ran docker-compose up, and after that I checked the PHP version with the following command, which showed me the correct version:
docker run php:7.3-cli php -v
However, when I ran docker ps, it didn't show any container running.
My question is how to run the commands inside my project root directory. FYI, I am using Docker Toolbox on Windows 10 Home Edition, and my project location is:
C:\Users\{my_user_name}\Desktop\folder_1\folder_2
The Docker container needs to have a long-running process defined in CMD to stay running. php-cli is not that. If you run docker-compose up, you'll see something like this:
$ docker-compose up
Creating network "tempphpdocker_default" with the default driver
Pulling web (php:7.3-cli)...
7.3-cli: Pulling from library/php
b8f262c62ec6: Pull complete
a98660e7def6: Pull complete
4d75689ceb37: Pull complete
639eb0368afa: Pull complete
2cdbfdb779b1: Pull complete
e0b637fa9606: Pull complete
da7333b0ef25: Pull complete
01d65ff46009: Pull complete
673e50bed3b9: Pull complete
bf6c6e34305d: Pull complete
Digest: sha256:1453f5ef0d4d1d424ed8114dd90a775bdec06cc6fb3bbae9521dcb4ca0c8ca90
Status: Downloaded newer image for php:7.3-cli
Creating tempphpdocker_web_1 ...
Creating tempphpdocker_web_1 ... done
Attaching to tempphpdocker_web_1
web_1 | Interactive shell
web_1 |
tempphpdocker_web_1 exited with code 0
The exit code is 0. This means your command in the docker image php:7.3-cli has successfully run and finished.
To properly dockerize your application, you should override this by writing your own Dockerfile with the proper COPY calls that bundle your CLI program into it. Your Dockerfile should probably look something like this:
FROM php:7.3-cli
RUN mkdir -p /opt/workdir/bin
RUN mkdir -p /opt/workdir/vendor
COPY bin/ /opt/workdir/bin
COPY vendor/ /opt/workdir/vendor
WORKDIR /opt/workdir
CMD php ./bin/console COMMAND
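A build-and-run sketch for this Dockerfile, with my-console-app as an assumed image name, might look like:
# Build the image defined by the Dockerfile above
docker build -t my-console-app .

# Run it; the CMD from the Dockerfile executes and the container
# exits as soon as the console command finishes
docker run --rm my-console-app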
You can simply build and run this Dockerfile as sketched above, or, if you prefer docker-compose, you can define a docker-compose.yml in the same folder as the Dockerfile:
version: "3.3"
services:
web:
image: php-custom
build: ./
Please note that a dockerized application can only access files and folders inside the Docker image. You should bind volumes of your local file system to the container before it can actually work on your filesystem.
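For example, a bind mount can be added at run time; a sketch, reusing the assumed my-console-app image and the /opt/workdir path from the Dockerfile above:
# Mount the local project into the container so the console command
# works on the host's files instead of the copies baked into the image
docker run --rm -v "$PWD":/opt/workdir -w /opt/workdir my-console-app php ./bin/console MY_COMMAND_NAME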
A quick and dirty fix to keep your container running is to just override the container command in docker-compose:
version: "3.3"
services:
web:
image: php:7.3-cli
command: tail -f /dev/null
When you run docker-compose up, it will keep the Docker container running, but it will do nothing; it just gives you a way to run commands inside the container:
docker exec -it php-cli_web_1 ash
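The same thing can be done through docker-compose without having to look up the generated container name, using the service name web from the compose file above:
# Open a shell in the already running "web" service container
docker-compose exec web bash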
My question is how to run the commands inside my project root directory.
As mentioned by @David, you need to mount your host project into the container in docker-compose.
For instance, if your project is located on the host at /home/myproject, mount the project within docker-compose and it will be available inside the container. Then you can update the command of your docker-compose to run the script.
Keep in mind that the life of the container is the life of the docker-compose command: when execution is completed, your container will die. So your container will run only until the php /app/your_script.php script has completed.
version: "3.3"
services:
web:
image: php:7.3-cli
command: php:7.3-cli /app/your_script.php
volumes:
- /home/myporject:/app
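A rough usage sketch for this setup, assuming the script path from the compose file:
# Start the service; the container runs the script once and then exits
docker-compose up

# Or run the script on demand without keeping anything running afterwards
docker-compose run --rm web php /app/your_script.php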

How can I mount a volume to a Docker container on OS X?

My ultimate goal is to run different versions of PHP on my local computer for testing.
I believe Docker is the best way to accomplish this.
I have been able to get a container with Apache and PHP running via this tutorial: https://github.com/tutumcloud/apache-php
But the issue is that I cannot mount a volume so that I can edit local files and view them on the docker container.
Here are my steps in a terminal, running in the same directory as the Dockerfile:
docker build -t tutum/apache-php .
docker run -d -p 8080:80 tutum/apache-php -v /Users/user-name-here/apache-php/sample:/app/
The error I get back is:
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"-v\": executable file not found in $PATH".
I'm on OS X El Capitan and just installed the latest version of Docker and the Docker tools.
The basic format of the docker run command is:
docker run [<options>] <image> [<command>]
In this case, your image is tutum/apache-php, and the run command is being parsed like this:
docker run -d -p 8080:80 tutum/apache-php -v /Users/user-name-here/apache-php/sample:/app/
docker run
options: -d -p 8080:80
image: tutum/apache-php
command: -v /Users/user-name-here/apache-php/sample:/app/
This is the source of your error.
exec: "-v": executable file not found in $PATH
Since the command begins with -v, it will try to locate and execute that command. Of course, it doesn't exist, so you will get this error.
The fix is simply to move the -v option and its argument to the proper place.
docker run -d -p 8080:80 -v /Users/user-name-here/apache-php/sample:/app/ tutum/apache-php
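To confirm the volume actually ended up mounted, one way (container name or ID taken from docker ps) is:
# Show the mounts Docker has set up for the container
docker inspect -f '{{ json .Mounts }}' <container-name-or-id>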

Running a package manager inside Docker

I've built an image for PHP development, and it became clear to me that I didn't really think about how to access the tools that I need for everyday development. For example Composer, the package manager for PHP: I need to run it whenever composer.json is updated. I thought it would be worth installing those tools inside the same image, but then I don't have a way to access them. So, I can:
1) Create a separate image for Composer and run it in a different container
2) Install Composer on my host machine.
I'd like to avoid option 2), but then, does it make sense to have a setup like 1)? How did you solve this issue?
Unless you have some quite specific requirements there is a third option:
Connect to the container using the docker exec command:
docker exec -it CONTAINER-NAME/ID COMMAND [ARG...]
Here is the example:
1: Create your application:
echo "<?php phpinfo();" > index.php
2: Start container:
docker run -it --rm --name my-apache-php-app -p 80:80 -v "$PWD":/var/www/html php:5.6-apache
3: Open another terminal window and exec the required commands inside the running container:
docker exec -it my-apache-php-app bash -c "curl -sS https://getcomposer.org/installer | php"
docker exec -it my-apache-php-app ls
If you need a shell inside the running container, run:
docker exec -it my-apache-php-app bash
That's it!
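Since the installer drops composer.phar into the container's working directory (/var/www/html for this image), Composer itself can be run the same way whenever composer.json changes; a sketch:
# Run Composer inside the container against the mounted project
docker exec -it my-apache-php-app php composer.phar install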
