I am trying to wrap my head around Docker. There are still some grey areas for me, which I have tried to research on the internet.
I'm trying to set up Docker for an existing Laravel application.
I use Laradock (laradock/laradock) from GitHub (it's like Homestead but for Docker).
I use a command like this:
docker-compose up -d nginx mysql
If I run the docker ps command, I can see the running containers for both MySQL and Nginx, and maybe some others.
How can I take an image of the whole docker-compose stack?
I have tried this
docker commit [container-id] username/reponame
An image does get created, but it's an incomplete image, as I am running a whole application with MySQL, Nginx, etc.
How can I take an image of the whole stack and deploy it to Docker?
All I want is to be able to deploy an existing project, which uses multiple containers locally, to Docker Cloud on docker.com.
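For illustration, the kind of workflow I am guessing I might need is roughly this (the image and repository names are placeholders, and this is only my guess):

# Build the images for all services in docker-compose.yml
docker-compose build
# Tag and push each service's image separately
docker tag laradock_nginx pakistanihaider/laravel-nginx
docker push pakistanihaider/laravel-nginx
# ...and similarly for each of the other services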
I'm using Docker on Windows 10.
Update
Here is the single node I have; currently I am on the free plan with both docker.com and microsoft.com.
Screenshots (not included here) show: the node on Microsoft Azure, the Laravel container, and the Docker repository I am using for the container.
And finally, the image link, as my repository image is public:
https://hub.docker.com/r/pakistanihaider/pakistanihaider.me/
Now, for some reason, the image I created and pushed to Docker is not working and the site is not going live. Even on localhost, when I create the image and try to run it, it doesn't work; but when I run docker-compose up -d, everything works perfectly.
The logs I have online for my service:
[pakistanihaider-1]2017-01-02T17:56:56.794183200Z /usr/lib/python2.7/dist-packages/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
[pakistanihaider-1]2017-01-02T17:56:56.794218000Z 'Supervisord is running as root and it is searching '
[pakistanihaider-1]2017-01-02T17:56:56.923437200Z 2017-01-02 17:56:56,923 CRIT Supervisor running as root (no user in config file)
[pakistanihaider-1]2017-01-02T17:56:56.923533400Z 2017-01-02 17:56:56,923 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
[pakistanihaider-1]2017-01-02T17:56:56.958072800Z 2017-01-02 17:56:56,957 INFO RPC interface 'supervisor' initialized
[pakistanihaider-1]2017-01-02T17:56:56.958969800Z 2017-01-02 17:56:56,958 CRIT Server 'unix_http_server' running without any HTTP authentication checking
[pakistanihaider-1]2017-01-02T17:56:56.960595100Z 2017-01-02 17:56:56,960 INFO supervisord started with pid 1
[pakistanihaider-1]2017-01-02T17:56:57.968087400Z 2017-01-02 17:56:57,967 INFO spawned: 'nginx' with pid 7
[pakistanihaider-1]2017-01-02T17:56:57.970004700Z 2017-01-02 17:56:57,969 INFO spawned: 'hhvm-fastcgi' with pid 8
[pakistanihaider-1]2017-01-02T17:56:57.971064400Z 2017-01-02 17:56:57,970 INFO spawned: 'php5-fpm' with pid 9
[pakistanihaider-1]2017-01-02T17:56:59.547596700Z 2017-01-02 17:56:59,542 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
[pakistanihaider-1]2017-01-02T17:56:59.547617600Z 2017-01-02 17:56:59,542 INFO success: hhvm-fastcgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
[pakistanihaider-1]2017-01-02T17:56:59.547623400Z 2017-01-02 17:56:59,543 INFO success: php5-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
What is going wrong here?
Related
PHP isn't a natively supported language in AWS Lambda, but I thought I'd try my hand at getting one working, using a custom Docker image. I am using this official AWS example to structure the image.
I don't quite understand all the pieces yet. I will add the files I have to this post.
Firstly, my Dockerfile:
# Demo of a PHP-based lambda
#
# See example:
# https://github.com/aws-samples/php-examples-for-aws-lambda/blob/master/0.7-PHP-Lambda-functions-with-Docker-container-images/Dockerfile
FROM php:8.0-cli-alpine
WORKDIR /root
# Install Composer
COPY bin bin
RUN sh /root/bin/install-composer.sh
RUN php /root/composer.phar --version
# Install Composer deps
COPY composer.json composer.lock /root/
RUN php composer.phar install
# Install runtimes
COPY runtime /var/runtime
COPY src /var/task/
# Entrypoint
CMD ["index"]
Based on the example I also have:
A PHP listener program at /var/runtime/bootstrap (nearly verbatim copy of the example repo)
Composer dependencies at /root/vendor/... that are loaded by the bootstrap
A trivial index file at /var/task/index.php (verbatim copy of the example repo)
One change I have made is to base the image on an Alpine image from PHP, rather than to use an Amazon Linux image. I wonder, could there be something in the Amazon image that is needed?
The test I am using is the "hello world" in the AWS Lambda UI:
{
    "key1": "value1",
    "key2": "value2",
    "key3": "value3"
}
Anyway, I have used docker login and docker push to get the image to ECR. When I run the hello world test inside the AWS dashboard, I am getting this set of error logs in CloudWatch:
2021-11-13T19:12:12.449+00:00 IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [index] WorkingDir: [/root]
2021-11-13T19:12:12.493+00:00 START RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb Version: $LATEST
2021-11-13T19:12:12.502+00:00 IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [index] WorkingDir: [/root]
2021-11-13T19:12:12.504+00:00 END RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb
2021-11-13T19:12:12.504+00:00 REPORT RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb Duration: 9.20 ms Billed Duration: 10 ms Memory Size: 128 MB Max Memory Used: 3 MB
2021-11-13T19:12:12.504+00:00 RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb Error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Runtime.InvalidEntrypoint
That makes a lot of sense, as I don't understand the entry point of "index" either, but it's there as a CMD in the example Dockerfile. Is this an alias for something? I would be inclined to use something like php /var/runtime/bootstrap, but I'd rather understand things, rather than guessing.
I believe I might be able to use Lambda RIE to test this locally, but I wonder if that would tell me what I already know - I need to fix the CMD.
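From what I can see in AWS's docs, the local test would look something like this (the image name is mine, and the paths assume the emulator has been downloaded to ~/.aws-lambda-rie):

# Run the image under the Runtime Interface Emulator, passing the runtime as the command
docker run -p 9000:8080 \
    -v ~/.aws-lambda-rie:/aws-lambda \
    --entrypoint /aws-lambda/aws-lambda-rie \
    my-php-lambda /var/runtime/bootstrap
# Then post a test event to the local endpoint
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'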
For what it's worth, I can't see how the index.php file is triggered in the lambda either. How does that get invoked?
Update
I am wondering if the parent image in the example (public.ecr.aws/lambda/provided) has an ENTRYPOINT that would shed some light on the apparently confusing CMD. Perhaps I will investigate that next.
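If I do, I expect something like this would reveal it (the al2 tag is my assumption):

docker pull public.ecr.aws/lambda/provided:al2
docker inspect --format '{{json .Config.Entrypoint}}' public.ecr.aws/lambda/provided:al2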
Update 2
The ponderance that I might have to use the Amazon Linux parent image was a false steer: this official resource shows the use of standard Python and Node images.
I decided to try repairing the main Docker command:
CMD ["php", "/var/runtime/bootstrap"]
However it doesn't like that:
START RequestId: d95a29d3-6764-4bae-9976-09830c1c17af Version: $LATEST
IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [php,/var/runtime/bootstrap] WorkingDir: [/root]
IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [php,/var/runtime/bootstrap] WorkingDir: [/root]
END RequestId: d95a29d3-6764-4bae-9976-09830c1c17af
REPORT RequestId: d95a29d3-6764-4bae-9976-09830c1c17af Duration: 19.88 ms Billed Duration: 20 ms Memory Size: 128 MB Max Memory Used: 3 MB
RequestId: d95a29d3-6764-4bae-9976-09830c1c17af Error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied
Runtime.InvalidEntrypoint
Update 3
No matter what I do, I seem to run into problems with entrypoints. I've even tried a runtime script to chmod +x on the various binaries it doesn't like, but of course if I try to kick that off in the ENTRYPOINT, the container believes that /bin/sh cannot be executed. This is getting rather silly - containers are just not behaving correctly in the Lambda environment.
Update 4
I have now tried moving away from Alpine, in case a more standard OS would work correctly. No joy; I get the same error. I'm now at the point of trying things at random, and this is rather slow going, given that the build-push-install loop is cumbersome.
This question looks promising, but putting the bootstrap file in /var/task seems to go directly against the example I am working from.
This problem was tricksy because there were two major interlocking problems - a seemingly excessive permissions requirement, and what struck me as a non-standard use of the ENTRYPOINT/CMD systems.
Working solution
The Dockerfile that works is as follows:
# Demo of a PHP-based lambda
#
# See example:
# https://github.com/aws-samples/php-examples-for-aws-lambda/blob/master/0.7-PHP-Lambda-functions-with-Docker-container-images/Dockerfile
FROM php:8.0-cli-alpine
WORKDIR /root
# Install Composer
COPY bin bin
RUN sh /root/bin/install-composer.sh
RUN php /root/composer.phar --version
# Install Composer deps
COPY composer.json composer.lock /root/
# Move deps to /opt, /root has significant permission issues
RUN php /root/composer.phar install && \
    mv /root/vendor /opt/vendor
# Install runtimes
COPY runtime/bootstrap /var/runtime/
COPY src/index.php /var/task/
RUN chmod 777 /usr/local/bin/php /var/task/* /var/runtime/*
# The entrypoint seems to be the main handler
# and the CMD specifies what kind of event to process
WORKDIR /var/task
ENTRYPOINT ["/var/runtime/bootstrap"]
CMD ["index"]
So, that resolves one of my nagging questions about Amazon Linux - it is not required. Note that although I installed Composer dependencies in /root, they could not stay there - even 777 perms on them seemed to be insufficient.
As you can see, I used 777 permissions on things in /var. 755 might work, and maybe even 750 would work, but the key here is that AWS appears to run the image as a user that is not the build (root) user. That tripped me up a lot.
Now the ENTRYPOINT is used to run the bootstrap script, which appears to do general mediation between events on the AWS side and "use cases" in /var/task. The normal purpose of a Docker entrypoint is as a command wrapper to CMD, so to use CMD as a "default lambda type" seems to significantly violate the principle of least surprise. I would have thought the lambda type would be defined by the incoming event, not by any lambda-side setting.
Testing
To test this lambda I use this event in the Lambda UI:
{
    "queryStringParameters": { "name": "halfer" }
}
And the demo code will respond with:
{
    "statusCode": 200,
    "headers": {
        "Content-Type": "application/json",
        "Access-Control-Allow-Origin": "*",
        "Access-Control-Allow-Headers": "Content-Type",
        "Access-Control-Allow-Methods": "OPTIONS,POST"
    },
    "body": "Hello, halfer"
}
Suffice it to say this feels rather brittle. Admittedly the demo code is not production quality, but even so, I suspect this would need a pipeline to do a real AWS Lambda test prior to merging down or deployment.
Performance
Here is why lambdas are tempting, especially for infrequent calls such as crons: they are instantiated quickly and die quickly, leaving no running infra. In one of my demo calls I have:
Init duration 188.75 ms
Duration 39.45 ms
Billed duration 229 ms
Deeper understanding
Having worked with the pieces I think I can now explain them rather better, and what I thought of as unusual architectural choices may actually have some purpose. I fear however this design ideology is not sufficiently documented, so engineers working with Docker-based AWS Lambdas have to spend additional time figuring the pieces out.
Consider the processing loop in the demo runtime:
// This is the request processing loop. Barring unrecoverable failure, this loop runs until the environment shuts down.
do {
    // Ask the runtime API for a request to handle.
    $request = getNextRequest();

    // Obtain the function name from the _HANDLER environment variable and ensure the function's code is available.
    $handlerFunction = $_ENV['_HANDLER'];
    require_once $_ENV['LAMBDA_TASK_ROOT'] . '/' . $handlerFunction . '.php';

    // Execute the desired function and obtain the response.
    $response = $handlerFunction($request['payload']);

    // Submit the response back to the runtime API.
    sendResponse($request['invocationId'], $response);
} while (true);
This picks up $_ENV['_HANDLER'] from the Lambda environment, and AWS derives that from the CMD of the image. Now, in PHP the env vars in $_ENV are static for the duration of the process, so it is perhaps a slightly odd choice to read this, and to include the handler file, inside the loop; it would have been better to do this in an initialisation phase, returning a clean error if the include isn't found.
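As a sketch, I mean something like this (reusing the demo's getNextRequest() and sendResponse() helpers):

// Resolve and load the handler once, in an initialisation phase
$handlerFunction = $_ENV['_HANDLER'];
$handlerFile = $_ENV['LAMBDA_TASK_ROOT'] . '/' . $handlerFunction . '.php';
if (!is_readable($handlerFile)) {
    fwrite(STDERR, "Cannot find handler file: $handlerFile\n");
    exit(1);
}
require_once $handlerFile;

// The loop itself then only processes requests
do {
    $request = getNextRequest();
    $response = $handlerFunction($request['payload']);
    sendResponse($request['invocationId'], $response);
} while (true);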
However, here's the likely purpose of this system: AWS Lambda lets users customise the CMD in the web dashboard. So, in an example enterprise, let's say there are three lambdas: one for responding to a web event, one for a scheduler, and one for responding to SNS topics. The handlers for each of these could be added to the same image, allowing the three lambdas to share it; all each one needs to do is supply a CMD override, and it will load and use the right handler.
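For example, I believe each function can supply its own CMD override when it is created from the shared image (a CLI sketch; the function name, handler, image URI and role ARN are all invented):

aws lambda create-function \
    --function-name web-responder \
    --package-type Image \
    --code ImageUri=123456789012.dkr.ecr.eu-west-1.amazonaws.com/enterprise-lambdas:latest \
    --image-config '{"Command": ["web_handler"]}' \
    --role arn:aws:iam::123456789012:role/lambda-exec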
I am new to Docker and Consul. I have created four instances in AWS, used as follows.
First Instance - Server 1
Second Instance - Server 2
Third Instance - Server 3
Fourth Instance - Server 4
These instances run Ubuntu 18.04. I am trying to implement an auto-discovery concept using Consul.
I have done the following steps:
I installed Docker on all four instances using this guide: https://docs.docker.com/install/linux/docker-ce/ubuntu/
I pulled the Consul image as described here:
https://hub.docker.com/_/consul?tab=description
I tried the 'Running Consul for Development' instructions, and that works fine on all the instances.
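For reference, the development-mode command from that page is, if I recall it correctly, along these lines:

sudo docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul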
Server 1:
I am trying to run the Consul agent in client mode, but I am not getting the expected result. The command:
sudo docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' consul agent -bind=<external ip> -retry-join=<root agent ip>
For external ip, I gave server 1's private IP.
For root agent ip, I gave the bootstrap server's private IP.
Output:
I got a 64-character string back, e.g.:
b93b160ef52b9203d67bb6db27793963dc419276145f4c247c9ba4e2bd6deb03
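As far as I understand, that 64-character string may just be the container ID that docker run -d prints; to see what the agent itself is doing, I can check its logs:

sudo docker logs <container-id>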
But the reference site shows a different response for this query:
dig @bootstrap_server_private_ip -p 8600 consul.service.consul
For me, it shows a 'connection timed out' error.
I'm having a problem deploying with Deployer; this is the first time I'm using any deployment tool. My teacher made a guide for getting it working, but I haven't been able to complete a deploy.
So if anyone can tell me what I'm doing wrong, please do.
Here follow all the specs for my computer and my setup.
Computer:
Operating system Windows 10 Home
Manufacturer ASUSTek Computer Inc.
Model E403SA
Processor Intel(R) Pentium(R) CPU N3700 @ 1.60GHz 1.60 GHz
RAM 4.0 GB
64-bit operating system, x64-based processor
.ssh/config file:
Host xxxxx
    ControlMaster no
    Hostname ssh.xxxxx.xx
    User xxxxxx_xxx
Note that I added ControlMaster no because I read that the problem could be with SSH multiplexing, but I got the same error with and without it...
deploy.php file ( in the root of the project):
<?php
namespace Deployer;

require 'recipe/common.php';

// Project name
set('application', 'blog');

// Project repository
set('repository', 'git@github.com:xxxxxxx/xxxxxxxxxxxxxxxx.git');

// [Optional] Allocate tty for git clone. Default value is false.
set('git_tty', true);

// Shared files/dirs between deploys
set('shared_files', ['config/dbinfo.json']);
set('shared_dirs', []);

// Writable dirs by web server
set('writable_dirs', []);

// Hosts
host('ssh.xxxxxx.xx')
    ->set('deploy_path', '~/xxxx.xxxxxxxxxxxxx.xxxx.xxxxxxx')
    ->user('xxxxxx_xxx')
    ->port(22);

// Tasks
desc('Deploy your project');
task('deploy:custom_webroot', function () {
    run("cd {{deploy_path}} && ln -sfn {{release_path}} public_html/xxxxxxxxxxxx");
});

task('deploy', [
    'deploy:info',
    'deploy:prepare',
    'deploy:lock',
    'deploy:release',
    'deploy:update_code',
    'deploy:shared',
    'deploy:writable',
    'deploy:clear_paths',
    'deploy:symlink',
    'deploy:unlock',
    'cleanup',
    'success'
]);

// [Optional] If deploy fails automatically unlock.
after('deploy:failed', 'deploy:unlock');
after('deploy', 'deploy:custom_webroot');
When I try to run dep deploy in the root of the project, I get the following output:
$ dep deploy
✈︎ Deploying master on ssh.xxxxxx.xx
➤ Executing task deploy:prepare
✔ Executing task deploy:failed
➤ Executing task deploy:unlock
[Deployer\Exception\RuntimeException]
The command "rm -f ~/blog.xxxxxxxxxxx.xxxx.xxxxx/.dep/deploy.lock" failed.
Exit Code: -1 (Unknown error)
Host Name: ssh.xxxxx.xx
================
mm_send_fd: sendmsg(2): Broken pipe
mux_client_request_session: send fds failed
Any help would be extremely appreciated!
Try running dep deploy:unlock and then run deploy normally.
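That is:

dep deploy:unlock
dep deploy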
I am using this file to deploy a multi-container nginx + php-fpm application on AWS.
When I run eb local run, it shows me this error:
holdbusinessnginx_1 | nginx: [emerg] host not found in upstream "php:9000" in /etc/nginx/conf.d/upstream.conf:1
elasticbeanstalk_holdbusinessnginx_1 exited with code 1
It is probably because nginx starts before php-fpm.
In a docker-compose.yml file there is a directive called depends_on for this (see the sketch below).
Is there a way to achieve the same in the Dockerrun.aws.json file?
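For reference, this is the kind of thing I mean in docker-compose.yml (service names match my setup):

services:
  nginx:
    depends_on:
      - php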
Just use the links directive:
"links": [
    "php"
],
where php is the name of the other container you defined in the same Dockerrun.aws.json file. EB essentially infers the dependencies from the links, volumes, etc., so by forcing the nginx container to link to php, you're telling EB that php should come up before nginx. In short. :-)
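A minimal sketch of how that could look in a version 2 Dockerrun.aws.json (image names and memory values are placeholders):

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "php",
      "image": "your/php-fpm-image",
      "essential": true,
      "memory": 128
    },
    {
      "name": "nginx",
      "image": "nginx",
      "essential": true,
      "memory": 128,
      "links": [
        "php"
      ]
    }
  ]
}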
Sorry to take so long with the response. It was really that; a moment of inattention on my part.
I hope someone can help me out with this issue I'm facing.
I've made a fully functional project on a local server and would now like to deploy it to Bluemix Cloud Foundry.
I've followed the tutorial: https://console.eu-gb.bluemix.net/docs/starters/upload_app.html
But when I try to push it through the terminal with the following commands:
cf push app_name -b https://github.com/cloudfoundry/php-buildpack.git -s cflinuxfs2
cf push app_name -b https://github.com/cloudfoundry/go-buildpack
cf push app_name -c start_command
cf push app_name -m 512m
none of them seems to work, since every single time I get the following error:
Staging failed: Buildpack compilation step failed
-----> Composer command failed
FAILED
Error restarting application: BuildpackCompileFailed
It is a PHP app built with PhpStorm, using Symfony and Doctrine, if that matters.
I am fairly new to all server/setup/deployment configuration, as well as to the command line.
EDIT 1
I figured out this part thanks to this link: https://support.run.pivotal.io/entries/109600943-cf-push-ing-a-symfony-app-fails-with-Composer-command-failed-
It seems that by default the buildpack assumes that you want all of the files you push to be public. Because of this assumption, it takes all of your files and moves them into the doc root of either HTTPD or Nginx.
The fix is to create the file .bp-config/options.json in the root of your project, and then inside options.json add:
{
    "WEBDIR": "web"
}
This will tell the buildpack that you have a specific directory to use for the doc root, so it will just use that instead of moving everything into the default doc root.
However...
This brings me to a new issue, returning the following error:
FAILED
Error restarting application: Start unsuccessful
If I check the recent log, the terminal provides me with this:
2016-08-25T02:53:40.62+0200 [App/0] OUT Could not open input file: app.php
2016-08-25T02:53:40.62+0200 [App/0] ERR
2016-08-25T02:53:40.69+0200 [DEA/211] ERR Instance (index 0) failed to start accepting connections
2016-08-25T02:53:40.72+0200 [API/9] OUT App instance exited with guid abb206b3-b8ea-4269-b248-ec7b35f7098a payload: {"cc_partition"=>"default", "droplet"=>"abb206b3-b8ea-4269-b248-ec7b35f7098a", "version"=>"b6c3c871-5484-4f12-9d84-657cf6eacfbf", "instance"=>"c11566bdabe5458d9bfc4965c9c1aa85", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1472086420}
2016-08-25T02:53:40.72+0200 [API/3] OUT App instance exited with guid abb206b3-b8ea-4269-b248-ec7b35f7098a payload: {"cc_partition"=>"default", "droplet"=>"abb206b3-b8ea-4269-b248-ec7b35f7098a", "version"=>"b6c3c871-5484-4f12-9d84-657cf6eacfbf", "instance"=>"c11566bdabe5458d9bfc4965c9c1aa85", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1472086420}
2016-08-24T16:41:14.03+0200 [DEA/135] OUT Starting app instance (index 0) with guid abb206b3-b8ea-4269-b248-ec7b35f7098a
2016-08-24T16:41:26.26+0200 [App/0] ERR bash: start_command: command not found
2016-08-24T16:41:26.26+0200 [App/0] OUT
2016-08-24T16:41:26.35+0200 [DEA/135] ERR Instance (index 0) failed to start accepting connections
2016-08-24T16:41:26.38+0200 [API/6] OUT App instance exited with guid abb206b3-b8ea-4269-b248-ec7b35f7098a payload: {"cc_partition"=>"default", "droplet"=>"abb206b3-b8ea-4269-b248-ec7b35f7098a", "version"=>"5ebd6d77-68c4-4901-b9a8-b5cecfa4cddb", "instance"=>"7b5b555ae68645f4a2c09b73c0adbcb3", "index"=>0, "reason"=>"CRASHED", "exit_status"=>127, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1472049686}
EDIT 2 (updated error msg)