I'm deploying code to a single-instance web server AWS EB environment that will provision/update my connected RDS database. I've got an .ebextensions file that calls deployment code:
---
container_commands:
  01deploydb:
    command: /var/www/html/php/cli/deploy-db.php
    leader_only: true
On the same deployment, I dropped the deploy-db.php file back one directory into /cli/. On deployment, I get ERROR: [Instance: i-*****] Command failed on instance. Return code: 127 Output: /bin/sh: /var/www/html/php/cli/deploy-db.php: No such file or directory.
container_command 01deploydb in .ebextensions/01_db.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
If I deploy a version that does not include the command, then deploy a second update including the command, there is no error. However, adding the command and the file it calls at the same time produces the error. A similar sequence occurred earlier with a different command/file.
My question is: is there a documented order/sequence for how AWS updates the environment? I would have expected that my new version would have fully deployed (and the .php file installed) before container_commands are called.
The commands: section runs before the project files are put in place. This is where you can install server packages, for example.
The container_commands: section runs in a staging directory before the files are put in their final destination. Here you can modify your files if you need to. The current path is this staging directory, so you can run it like this (I might get the app directory wrong; maybe it should be php/cli/deploy-db.php):
container_commands:
  01deploydb:
    command: cli/deploy-db.php
    leader_only: true
Reference for above: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
You can also run post-deploy scripts. This is not very well documented (at least it wasn't). You can do something like this (it won't be leader-only though, but you could put the file in this directory through a container_commands: step):
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_deploy.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      /var/www/html/php/cli/deploy-db.php
PHP isn't a natively supported language in AWS Lambda, but I thought I'd try my hand at getting one working, using a custom Docker image. I am using this official AWS example to structure the image.
I don't quite understand the pieces yet. I will add what files I have to this post.
Firstly, my Dockerfile:
# Demo of a PHP-based lambda
#
# See example:
# https://github.com/aws-samples/php-examples-for-aws-lambda/blob/master/0.7-PHP-Lambda-functions-with-Docker-container-images/Dockerfile
FROM php:8.0-cli-alpine
WORKDIR /root
# Install Composer
COPY bin bin
RUN sh /root/bin/install-composer.sh
RUN php /root/composer.phar --version
# Install Composer deps
COPY composer.json composer.lock /root/
RUN php composer.phar install
# Install runtimes
COPY runtime /var/runtime
COPY src /var/task/
# Entrypoint
CMD ["index"]
Based on the example I also have:
A PHP listener program at /var/runtime/bootstrap (nearly verbatim copy of the example repo)
Composer dependencies at /root/vendor/... that are loaded by the bootstrap
A trivial index file at /var/task/index.php (verbatim copy of the example repo)
One change I have made is to base the image on an Alpine image from PHP, rather than use an Amazon Linux image. I wonder: could there be something in the Amazon image that is needed?
The test I am using is the "hello world" in the AWS Lambda UI:
{
  "key1": "value1",
  "key2": "value2",
  "key3": "value3"
}
Anyway, I have used docker login and docker push to get the image to ECR. When I run the hello world test inside the AWS dashboard, I am getting this set of error logs in CloudWatch:
2021-11-13T19:12:12.449+00:00 IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [index] WorkingDir: [/root]
2021-11-13T19:12:12.493+00:00 START RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb Version: $LATEST
2021-11-13T19:12:12.502+00:00 IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [index] WorkingDir: [/root]
2021-11-13T19:12:12.504+00:00 END RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb
2021-11-13T19:12:12.504+00:00 REPORT RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb Duration: 9.20 ms Billed Duration: 10 ms Memory Size: 128 MB Max Memory Used: 3 MB
2021-11-13T19:12:12.504+00:00 RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb Error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Runtime.InvalidEntrypoint
That makes a lot of sense, as I don't understand the entry point of "index" either, but it's there as a CMD in the example Dockerfile. Is this an alias for something? I would be inclined to use something like php /var/runtime/bootstrap, but I'd rather understand things than guess.
I believe I might be able to use Lambda RIE to test this locally, but I wonder if that would tell me what I already know - I need to fix the CMD.
For what it's worth, I can't see how the index.php file is triggered in the lambda either. How does that get invoked?
Update
I am wondering if the parent image in the example (public.ecr.aws/lambda/provided) has an ENTRYPOINT that would shed some light on the apparently confusing CMD. I will investigate that next.
Update 2
The ponderance that I might have to use the Amazon Linux image parent was a false steer - this official resource shows the use of standard Python and Node images.
I decided to try repairing the main Docker command:
CMD ["php", "/var/runtime/bootstrap"]
However it doesn't like that:
START RequestId: d95a29d3-6764-4bae-9976-09830c1c17af Version: $LATEST
IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [php,/var/runtime/bootstrap] WorkingDir: [/root]
IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [php,/var/runtime/bootstrap] WorkingDir: [/root]
END RequestId: d95a29d3-6764-4bae-9976-09830c1c17af
REPORT RequestId: d95a29d3-6764-4bae-9976-09830c1c17af Duration: 19.88 ms Billed Duration: 20 ms Memory Size: 128 MB Max Memory Used: 3 MB
RequestId: d95a29d3-6764-4bae-9976-09830c1c17af Error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied
Runtime.InvalidEntrypoint
Update 3
No matter what I do, I seem to run into problems with entrypoints. I've even tried a runtime script to chmod +x on the various binaries it doesn't like, but of course if I try to kick that off in the ENTRYPOINT, the container believes that /bin/sh cannot be executed. This is getting rather silly - containers are just not behaving correctly in the Lambda environment.
Update 4
I have now tried to move away from Alpine, in case a more standard OS will work correctly. No joy - I get the same. I'm now at the point of trying things at random, and this is rather slow going, given that the build-push-install loop is cumbersome.
This question looks promising, but putting the bootstrap file in /var/task seems to go directly against the example I am working from.
This problem was tricksy because there were two major interlocking problems - a seemingly excessive permissions requirement, and what struck me as a non-standard use of the ENTRYPOINT/CMD systems.
Working solution
The Dockerfile that works is as follows:
# Demo of a PHP-based lambda
#
# See example:
# https://github.com/aws-samples/php-examples-for-aws-lambda/blob/master/0.7-PHP-Lambda-functions-with-Docker-container-images/Dockerfile
FROM php:8.0-cli-alpine
WORKDIR /root
# Install Composer
COPY bin bin
RUN sh /root/bin/install-composer.sh
RUN php /root/composer.phar --version
# Install Composer deps
COPY composer.json composer.lock /root/
# Move deps to /opt, /root has significant permission issues
RUN php /root/composer.phar install && \
    mv /root/vendor /opt/vendor
# Install runtimes
COPY runtime/bootstrap /var/runtime/
COPY src/index.php /var/task/
RUN chmod 777 /usr/local/bin/php /var/task/* /var/runtime/*
# The entrypoint seems to be the main handler
# and the CMD specifies what kind of event to process
WORKDIR /var/task
ENTRYPOINT ["/var/runtime/bootstrap"]
CMD ["index"]
So, that resolves one of my nagging questions about Amazon Linux - it is not required. Note that although I installed Composer dependencies in /root, they could not stay there - even 777 perms on them seemed to be insufficient.
As you can see, I used 777 permissions on things in /var. 755 might work, maybe even 750 would work - but the key here is that the Lambda runtime appears to run the container as a user other than the build (root) user. That tripped me up a lot.
Now the ENTRYPOINT is used to run the bootstrap script, which appears to do general mediation between events on the AWS side and handlers ("use cases") in /var/task. The normal purpose of a Docker entrypoint is as a command wrapper to CMD, so using CMD as a "default lambda type" seems to significantly violate the principle of least surprise. I would have thought the lambda type would be defined by the incoming event, not by any lambda-side setting.
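To make that concrete, here is a sketch (not the verbatim file from the example repo) of the kind of handler that sits in /var/task. The file and the function are both named "index", which is what CMD ["index"] selects via the _HANDLER mechanism described further down:

<?php

// Sketch of /var/task/index.php. The function name matches the file name
// ("index"), which is how the bootstrap locates it from the _HANDLER value.
function index($data)
{
    $name = $data['queryStringParameters']['name'] ?? 'world';

    // The shape mirrors the demo response shown under "Testing" below.
    return [
        'statusCode' => 200,
        'headers' => ['Content-Type' => 'application/json'],
        'body' => 'Hello, ' . $name,
    ];
}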
Testing
To test this lambda I use this event in the Lambda UI:
{
  "queryStringParameters": { "name": "halfer" }
}
And the demo code will respond with:
{
  "statusCode": 200,
  "headers": {
    "Content-Type": "application/json",
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "Content-Type",
    "Access-Control-Allow-Methods": "OPTIONS,POST"
  },
  "body": "Hello, halfer"
}
Suffice it to say this feels rather brittle. Admittedly the demo code is not production quality, but even so, I suspect this would need a pipeline to do a real AWS Lambda test prior to merging down or deployment.
Performance
Here is why lambdas are tempting, especially for infrequent calls such as crons - they are instantiated quickly and die quickly, leaving no running infra. In one of my demo calls I have:
Init duration 188.75 ms
Duration 39.45 ms
Billed duration 229 ms
Deeper understanding
Having worked with the pieces I think I can now explain them rather better, and what I thought of as unusual architectural choices may actually have some purpose. I fear however this design ideology is not sufficiently documented, so engineers working with Docker-based AWS Lambdas have to spend additional time figuring the pieces out.
Consider the processing loop in the demo runtime:
// This is the request processing loop. Barring unrecoverable failure, this loop runs until the environment shuts down.
do {
    // Ask the runtime API for a request to handle.
    $request = getNextRequest();
    // Obtain the function name from the _HANDLER environment variable and ensure the function's code is available.
    $handlerFunction = $_ENV['_HANDLER'];
    require_once $_ENV['LAMBDA_TASK_ROOT'] . '/' . $handlerFunction . '.php';
    // Execute the desired function and obtain the response.
    $response = $handlerFunction($request['payload']);
    // Submit the response back to the runtime API.
    sendResponse($request['invocationId'], $response);
} while (true);
This picks up $_ENV['_HANDLER'] from the Lambda environment, and AWS derives that from the CMD of the image. Now, in PHP the env vars in $_ENV are static for the duration of the process, so it is perhaps a slightly odd choice to read this in a loop and include the file in a loop - it would have been better to do this in an initialisation phase, returning a clean error if the include isn't found.
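As a purely illustrative sketch (reusing the demo's getNextRequest() and sendResponse() helpers), the same loop with the handler resolved once up front might look like this:

// Resolve the handler once at start-up, failing cleanly if it is missing.
$handlerFunction = $_ENV['_HANDLER'];
$handlerFile = $_ENV['LAMBDA_TASK_ROOT'] . '/' . $handlerFunction . '.php';

if (!is_file($handlerFile)) {
    fwrite(STDERR, "Handler file not found: {$handlerFile}\n");
    exit(1);
}
require_once $handlerFile;

do {
    // Fetch a request, run the handler, and post the result back.
    $request = getNextRequest();
    $response = $handlerFunction($request['payload']);
    sendResponse($request['invocationId'], $response);
} while (true);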
However, here's the likely purpose of this system: AWS Lambda lets users customise the CMD in the web dashboard. So in an example enterprise, let's say there are three lambdas - one for responding to a web event, one for a scheduler, and one for responding to SNS topics. The handlers for each of these could be added to the same image, allowing the three lambdas to share an image - all they need to do is supply a CMD override, and each one will load and use the right handler.
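As a hypothetical illustration, a second handler shipped in the same image could be a file such as /var/task/scheduler.php, with the scheduler Lambda overriding CMD to ["scheduler"]:

<?php

// Hypothetical /var/task/scheduler.php, selected when the Lambda's CMD
// override is ["scheduler"]; the bootstrap requires it and calls scheduler().
function scheduler($payload)
{
    // ... perform the scheduled work here ...

    return [
        'statusCode' => 200,
        'body' => 'Scheduled run complete',
    ];
}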
I am running Laravel on Homestead, and whenever I run any php artisan XXX command, a file named -1 is created in the root directory of the app.
Contents of the file are similar to these ones:
Log opened at 2017-12-22 13:54:00
I: Connecting to configured address/port: 10.0.2.2:9000.
E: Time-out connecting to client. :-(
Log closed at 2017-12-22 13:54:00
I am 99% sure it is related to some changes I made in my failed attempts to make Xdebug breakpoints work with artisan commands. I exported some shell variables, as recommended in this answer, but when I run export -p I don't see any of them.
Did anyone have a similar issue? What setting can be causing such behavior?
Following the suggestion of LazyOne, I found the answer:
It seems that paths in the .ini file have to be absolute. So instead of:
xdebug.remote_log=~/code/xdebug.log
I had to set it to:
xdebug.remote_log=/home/vagrant/code/xdebug.log
and now it works as it's supposed to.
I am having trouble deploying with Deployer 4.0.2 and I am in need of help from somebody more experienced than me in this.
I want to deploy a repository of mine to a Ubuntu 16.04 server.
I am using laravel homestead as a development environment, where I also installed deployer. From there I ssh into my remote server.
I was able to deploy my code with the root user, until I hit a RuntimeException that aborted my deployment:
Do not run Composer as root/super user! See https://getcomposer.org/root for details
That made me create another user called george, whom I gave superuser rights. I copied my public key from my local machine to a newly generated ~/.ssh/authorized_keys file, that gave me permission to access the server via ssh.
Yet when I run dep deploy with the new user:
server('production', '138.68.99.157')
    ->user('george')
    ->identityFile()
    ->set('deploy_path', '/var/www/test');
I get another RuntimeException:
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Now it looks like the new user george cannot access the ~/.ssh/id_rsa.pub key. So I copy the keys from the root folder into my home folder and also add the public key to the GitHub SSH settings.
cp /root/.ssh/id_rsa.pub /home/george/.ssh/id_rsa.pub
cp /root/.ssh/id_rsa /home/george/.ssh/id_rsa
Only to get the same error as before.
In the end I had to add GitHub to my list of known hosts:
ssh-keyscan -H github.com >> ~/.ssh/known_hosts
Only to get the next RuntimeException
[RuntimeException]
sudo: no tty present and no askpass program specified
I managed to comment out this code in the deploy.php
// desc('Restart PHP-FPM service');
// task('php-fpm:restart', function () {
//     // The user must have rights to restart the service
//     // /etc/sudoers: username ALL=NOPASSWD:/bin/systemctl restart php-fpm.service
//     run('sudo systemctl restart php-fpm.service');
// });
// after('deploy:symlink', 'php-fpm:restart');
to finally get the deployment process done, and now I ask myself: is the restart of php-fpm really necessary, and worth continuing to debug in this deployment tool? Or can I live without it?
And if I need it, can somebody help me understand what I need it for? And maybe as a luxury also provide the solution to the RuntimeException?
Try this:
->identityFile('~/.ssh/id_rsa.pub', '~/.ssh/id_rsa', 'pass phrase')
It works great for me - no need for an askpass program.
It helps to be explicit in my experience.
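Putting that together with your server definition, the whole block might look something like this (the key paths and pass phrase are placeholders):

server('production', '138.68.99.157')
    ->user('george')
    ->identityFile('~/.ssh/id_rsa.pub', '~/.ssh/id_rsa', 'pass phrase')
    ->set('deploy_path', '/var/www/test');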
As for your php-fpm restart task... I haven't seen that before. Shouldn't be needed. :)
EDIT:
The fact that you provide a password is probably a good sign that you ought to refactor your Deployer code a bit if you keep it under source control.
I am loading site-specific data from a YAML file - which I am not committing to source control.
The first bit of my stage.yml :
# Site Configuration
# -------------
prod_1:
  host: hostname
  user: username
  identity_file:
    public_key: /home/user/.ssh/key.pub
    private_key: /home/user/.ssh/key
  password: "password"
  stage: production
  repository: https://github.com/user/repository.git
  deploy_path: /var/www
  app:
    debug: false
    stage: 'prod'
And then, in my deploy.php :
if (!file_exists(__DIR__ . '/deployer/stage/servers.yml')) {
    die('Please create "' . __DIR__ . '/deployer/stage/servers.yml" before continuing.' . "\n");
}

serverList(__DIR__ . '/deployer/stage/servers.yml');

set('repository', '{{repository}}');
set('default_stage', 'production');
Notice that, when you use serverList, it replaces your server setup in deploy.php
I hope someone can help me out with this issue I'm facing.
I've made a fully functional project on a local server and would now like to deploy it to Bluemix Cloud Foundry.
I've followed the tutorial: https://console.eu-gb.bluemix.net/docs/starters/upload_app.html
But when I'm trying to push it through the terminal with the following commands
cf push app_name -b https://github.com/cloudfoundry/php-buildpack.git -s cflinuxfs2
cf push app_name -b https://github.com/cloudfoundry/go-buildpack
cf push app_name -c start_command
cf push app_name -m 512m
But none seems to work, since every single time I get the following error
Staging failed: Buildpack compilation step failed
-----> Composer command failed
FAILED
Error restarting application: BuildpackCompileFailed
It is a PHP app built with PhpStorm, using Symfony and Doctrine, if that matters.
I am fairly new to all server/setup/deployment configurations as well as command line.
EDIT 1
I figured out this part thanks to this link: https://support.run.pivotal.io/entries/109600943-cf-push-ing-a-symfony-app-fails-with-Composer-command-failed-
It seems that by default the buildpack assumes that you want all of the files you push to be public. Because of this assumption, it takes all of your files and moves them into the doc root of either HTTPD or Nginx.
Create the file .bp-config/options.json in the root of your project. Then inside options.json add:
{
  "WEBDIR": "web"
}
This will tell the buildpack that you have a specific directory to use for the doc root, so it will just use that instead of moving everything into the default doc root.
However...
This brings me to a new issue, and returns the following error:
FAILED
Error restarting application: Start unsuccessful
If I check the recent log, the terminal provides me with this:
2016-08-25T02:53:40.62+0200 [App/0] OUT Could not open input file: app.php
2016-08-25T02:53:40.62+0200 [App/0] ERR
2016-08-25T02:53:40.69+0200 [DEA/211] ERR Instance (index 0) failed to start accepting connections
2016-08-25T02:53:40.72+0200 [API/9] OUT App instance exited with guid abb206b3-b8ea-4269-b248-ec7b35f7098a payload: {"cc_partition"=>"default", "droplet"=>"abb206b3-b8ea-4269-b248-ec7b35f7098a", "version"=>"b6c3c871-5484-4f12-9d84-657cf6eacfbf", "instance"=>"c11566bdabe5458d9bfc4965c9c1aa85", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1472086420}
2016-08-25T02:53:40.72+0200 [API/3] OUT App instance exited with guid abb206b3-b8ea-4269-b248-ec7b35f7098a payload: {"cc_partition"=>"default", "droplet"=>"abb206b3-b8ea-4269-b248-ec7b35f7098a", "version"=>"b6c3c871-5484-4f12-9d84-657cf6eacfbf", "instance"=>"c11566bdabe5458d9bfc4965c9c1aa85", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1472086420}
2016-08-24T16:41:14.03+0200 [DEA/135] OUT Starting app instance (index 0) with guid abb206b3-b8ea-4269-b248-ec7b35f7098a
2016-08-24T16:41:26.26+0200 [App/0] ERR bash: start_command: command not found
2016-08-24T16:41:26.26+0200 [App/0] OUT
2016-08-24T16:41:26.35+0200 [DEA/135] ERR Instance (index 0) failed to start accepting connections
2016-08-24T16:41:26.38+0200 [API/6] OUT App instance exited with guid abb206b3-b8ea-4269-b248-ec7b35f7098a payload: {"cc_partition"=>"default", "droplet"=>"abb206b3-b8ea-4269-b248-ec7b35f7098a", "version"=>"5ebd6d77-68c4-4901-b9a8-b5cecfa4cddb", "instance"=>"7b5b555ae68645f4a2c09b73c0adbcb3", "index"=>0, "reason"=>"CRASHED", "exit_status"=>127, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1472049686}
EDIT 2 (updated error msg)
I'm trying to deploy my Laravel application to Elastic Beanstalk in development mode. To make the application run in development mode rather than production, I've done the following in my /bootstrap/start.php file:
$env = $app->detectEnvironment(function() {
    return $_ENV['ENV_NAME'];
});
To actually create the environment variable, I've created a .config file in the following path: /.ebextensions/00environmentVariables.config with these contents:
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ENV_NAME
    value: development
  - option_name: DB_HOST
    value: [redacted]
  - option_name: DB_PORT
    value: [redacted]
  - option_name: DB_NAME
    value: [redacted]
  - option_name: DB_USER
    value: [redacted]
  - option_name: DB_PASS
    value: [redacted]
When I run eb start from the command line, it spins up an EC2 instance and attempts to provision it, at which point it tells me it failed and to check the logs. In the logs, I can see these entries:
PHP Notice: Undefined index: ENV_NAME in
/var/app/ondeck/bootstrap/start.php on line 28
Notice: Undefined index: ENV_NAME in /var/app/ondeck/bootstrap/start.php on line 28
So for some reason, the ENV_NAME environment variable doesn't exist, even though I've specified it in 00environmentVariables.config. What's even weirder is that I can see the environment variable does exist under the software configuration settings of the EB environment.
To summarize:
I know my .config files are being parsed, based on what I see in the logs
For some reason my Laravel application still doesn't think that ENV_NAME exists
ENV_NAME exists both in the .config file and in my Elastic Beanstalk settings for this environment
EDIT
Alright so I worked out that the environment variables do work correctly when serving the application over the Apache HTTP server, but the environment variables don't exist when running the PHP CLI.
In the above logs, it's complaining about ENV_NAME not existing when running a /usr/bin/composer.phar install.
So, for some reason, my environment variables don't exist within the PHP CLI but they do work normally when serving over Apache.
FURTHER EDIT
So I SSH'd into the EC2 instance that's hosting my Laravel application on Elastic Beanstalk, and I can see the proper environment variables when I use the printenv command:
ENV_NAME=development
However, if I do a die(var_dump($_SERVER)); and run the PHP CLI, I don't see the environment variables that I've defined. Same story with $_ENV and getenv().
Why can't I access my environment variables within the PHP CLI, when I can access them when Apache processes my PHP scripts?
YET ANOTHER EDIT
I made a test.php file with one line: die(var_dump($_ENV));.
When I run this using php test.php I successfully get my custom environment variables coming out, so this seems like a Composer-only problem, not a PHP CLI problem.
I use a YAML script which sets the environment variables for the root user from the existing variables set for ec2-user. Add this to your .ebextensions folder with the .config extension.
From there you can run the PHP CLI and it will see the correct environment variables:
commands:
  create_post_dir:
    command: "mkdir /opt/elasticbeanstalk/hooks/appdeploy/post"
    ignoreErrors: true
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/job_after_deploy.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      source /opt/elasticbeanstalk/support/envvars
      # Run PHP scripts here. #
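With that hook in place, a CLI script run from it (hypothetical example) should be able to read the variables in the usual way:

<?php

// Hypothetical check script invoked from the post-deploy hook above.
// After `source /opt/elasticbeanstalk/support/envvars`, the exported
// variables are visible to processes started from that shell.
var_dump(getenv('ENV_NAME'));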
From XuDing's answer to this question and this answer
I created a job that creates a .env file every 5 minutes.
Add the following to your .ebextensions
"/opt/elasticbeanstalk/hooks/appdeploy/post/91_set_create_app_env_file_job.sh":
mode: "000755"
owner: root
group: root
content: |
#!/usr/bin/env bash
echo "Removing any existing CRON jobs..."
crontab -r
APP_ENV=/var/app/current/.env
EB_ENVVARS=/opt/elasticbeanstalk/support/envvars
CONSTANTS=/var/app/current/.constants
CRON_CMD="grep -oE '[^ ]+$' $EB_ENVVARS > $APP_ENV; cat $CONSTANTS >> $APP_ENV"
echo "Creating .env file...."
eval $CRON_CMD
echo "Scheduling .env file updater job to run every 5 minutes..."
(crontab -l 2>/dev/null; echo "*/5 * * * * $CRON_CMD")| crontab -
The reason I did it this way is that you may want to update your environment variables via the AWS Console UI.
This is the best solution in my opinion.