UPDATE: Thank you all! I have solved this by creating a custom runtime for my PHP Lambda.
I am currently using Node.js 8.10 Runtime with a php.handler and my Lambda function works fine, but when I change the Runtime to 12.x, I get the following error:
"php-7-bin/bin/php: error while loading shared libraries: libcrypt.so.1: cannot open shared object file: No such file or directory"
var spawn = require('child_process').spawn; // child_process is needed for spawn below

exports.handler = function(event, context, callback) {
    var php = spawn('php-7-bin/bin/php', ['--php-ini', 'user.ini', process.env['PHPFILE']], {maxBuffer: 200 * 1024 * 200});
    var output = "";
    var statusCode = 0;

    // Pass the Lambda event to the PHP process on stdin
    php.stdin.write(JSON.stringify(event));
    php.stdin.end();

    php.stdout.on('data', function(data) {
        console.log("CHUNK: " + data);
        output += data;
    });

    php.stderr.on('data', function(data) {
        console.log(data.toString());
    });

    php.on('close', function(code) {
        var obj = JSON.parse(output);
        statusCode = obj.status.statusCode;
        if (statusCode !== 0) {
            callback(output);
        } else {
            context.succeed(obj);
        }
    });
};
I need to update my Lambda to the latest node.js version, but I have no idea how to overcome this error, so any help would be greatly appreciated!
First, why on earth are you using node to load php?
But if you had this working before, why do you need to update to node 12?
If you are upgrading from Node 8, the runtime is different:
https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html
So then take a look here:
https://aws.amazon.com/blogs/apn/aws-lambda-custom-runtime-for-php-a-practical-example/
You may need to create a new custom runtime based on the Node 12 built-in runtime for AWS.
An easy fix is to add this at the top of your PHP code:
set_include_path('/opt/lib64');
If that doesn't work, you need to compile/build/install the missing modules/libraries yourself:
Run two Docker instances that have the same "local" Layer folder mounted.
The first container is going to be your Lambda container, while the second one is Amazon Linux, used to build items.
Test your code with the Lambda container and, in case something is missing, switch to Amazon Linux and build/extract binaries/libraries into the shared Layer folder structure.
Make sure the Lambda code has a proper PATH defined to use the Layer folder.
Install docker.
In first terminal tab go to your lambda folder and start the lambda docker container:
docker run --rm -it --entrypoint=/bin/bash -v "$PWD":/var/task:ro,delegated -v /your/path/to/Layer/folder/:/opt:rw,delegated -e AWS_ACCESS_KEY_ID=[ACCESS_KEY_PASTE_HERE] -e AWS_SECRET_ACCESS_KEY=[SECRET_GOES_HERE] lambci/lambda:nodejs12.x
In second terminal tab run another container with Amazon linux:
docker run --rm -it -v /your/path/to/Layer/folder/:/opt:rw,delegated amazonlinux:latest
(Keep in mind that the Layer folder is mounted with read/write permissions).
Test your lambda code in your favourite way, or simply by running the commands below (make sure your handler module name is "handler" and the file name is "index.js"):
cd /var/task
node index.js; node -e "var func = require('./index.js'); func.handler({}, function() {}, function() { console.log('Lambda finished'); });"
If you find some missing libraries, make sure to add this to your PHP code:
set_include_path('/opt/lib');
Then switch to the Amazon Linux terminal tab, install/build your library, and copy it to the Layer folder:
cp /usr/lib64/[here is your library name] /opt/lib
Test your code in the Lambda container again.
When you are done, just zip the content of your Lambda Layer structure; keep in mind that your /bin or /lib folders need to be in the root folder of the zip file.
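For example (assuming the same Layer folder path as above):
cd /your/path/to/Layer/folder && zip -r layer.zip bin lib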
Add the zip file as a Layer for your lambdas and attach it.
I fixed this problem by adding an extra library folder in my function's zip.
Make a directory named extra-libs.
Copy all required libraries from Amazon Linux 2 to extra-libs using the following steps:
Run Amazon Linux 2's Docker instance with the following command:
docker run --rm -it -v /your/local/path:/opt:rw,delegated amazonlinux:latest
Then, in the Docker instance, make a directory using:
mkdir deps
Copy all required libraries from /usr/lib64 to the deps directory using:
cp -f /usr/lib64/libcrypt.so.1 deps (libcrypt.so.1 is used here as an example)
Then open another terminal window and move all library files to the local extra-libs directory:
docker cp <DOCKER_CONTAINER_ID>:/deps/ . && mv deps/* ./extra-libs
(Get the container ID using docker ps.)
Then, in the index.js file, add the following to the env setting passed to PHP's spawn options (this assumes path is required at the top of the file):
LD_LIBRARY_PATH: path.join(__dirname, '/extra-libs')
Zip the extra-libs folder with your Lambda function and upload it.
Hope this helps.
Related
PHP isn't a natively supported language in AWS Lambda, but I thought I'd try my hand at getting one working, using a custom Docker image. I am using this official AWS example to structure the image.
I don't quite understand the pieces yet. I will add what files I have to this post.
Firstly, my Dockerfile:
# Demo of a PHP-based lambda
#
# See example:
# https://github.com/aws-samples/php-examples-for-aws-lambda/blob/master/0.7-PHP-Lambda-functions-with-Docker-container-images/Dockerfile
FROM php:8.0-cli-alpine
WORKDIR /root
# Install Composer
COPY bin bin
RUN sh /root/bin/install-composer.sh
RUN php /root/composer.phar --version
# Install Composer deps
COPY composer.json composer.lock /root/
RUN php composer.phar install
# Install runtimes
COPY runtime /var/runtime
COPY src /var/task/
# Entrypoint
CMD ["index"]
Based on the example I also have:
A PHP listener program at /var/runtime/bootstrap (nearly verbatim copy of the example repo)
Composer dependencies at /root/vendor/... that are loaded by the bootstrap
A trivial index file at /var/task/index.php (verbatim copy of the example repo)
One change I have made is to base the image on an Alpine image from PHP, rather than to use an Amazon Linux image. I wonder, could there be something in the Amazon image that is needed?
The test I am using is the "hello world" in the AWS Lambda UI:
{
"key1": "value1",
"key2": "value2",
"key3": "value3"
}
Anyway, I have used docker login and docker push to get the image to ECR. When I run the hello world test inside the AWS dashboard, I am getting this set of error logs in CloudWatch:
2021-11-13T19:12:12.449+00:00 IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [index] WorkingDir: [/root]
2021-11-13T19:12:12.493+00:00 START RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb Version: $LATEST
2021-11-13T19:12:12.502+00:00 IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [index] WorkingDir: [/root]
2021-11-13T19:12:12.504+00:00 END RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb
2021-11-13T19:12:12.504+00:00 REPORT RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb Duration: 9.20 ms Billed Duration: 10 ms Memory Size: 128 MB Max Memory Used: 3 MB
2021-11-13T19:12:12.504+00:00 RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb Error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Runtime.InvalidEntrypoint
That makes a lot of sense, as I don't understand the entry point of "index" either, but it's there as a CMD in the example Dockerfile. Is this an alias for something? I would be inclined to use something like php /var/runtime/bootstrap, but I'd rather understand things, rather than guessing.
I believe I might be able to use Lambda RIE to test this locally, but I wonder if that would tell me what I already know - I need to fix the CMD.
For what it's worth, I can't see how the index.php file is triggered in the lambda either. How does that get invoked?
Update
I am wondering if the parent image in the example (public.ecr.aws/lambda/provided) has an ENTRYPOINT that would shed some light on the apparently confusing CMD. I will investigate that next.
Update 2
The idea that I might have to use the Amazon Linux parent image was a false steer - this official resource shows the use of standard Python and Node images.
I decided to try repairing the main Docker command:
CMD ["php", "/var/runtime/bootstrap"]
However it doesn't like that:
START RequestId: d95a29d3-6764-4bae-9976-09830c1c17af Version: $LATEST
IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [php,/var/runtime/bootstrap] WorkingDir: [/root]
IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [php,/var/runtime/bootstrap] WorkingDir: [/root]
END RequestId: d95a29d3-6764-4bae-9976-09830c1c17af
REPORT RequestId: d95a29d3-6764-4bae-9976-09830c1c17af Duration: 19.88 ms Billed Duration: 20 ms Memory Size: 128 MB Max Memory Used: 3 MB
RequestId: d95a29d3-6764-4bae-9976-09830c1c17af Error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied
Runtime.InvalidEntrypoint
Update 3
No matter what I do, I seem to run into problems with entrypoints. I've even tried a runtime script to chmod +x on the various binaries it doesn't like, but of course if I try to kick that off in the ENTRYPOINT, the container believes that /bin/sh cannot be executed. This is getting rather silly - containers are just not behaving correctly in the Lambda environment.
Update 4
I have now tried moving away from Alpine, in case a more standard OS works correctly. No joy - I get the same. I'm now at the point of trying things at random, and this is rather slow going, given that the build-push-install loop is cumbersome.
This question looks promising, but putting the bootstrap file in /var/task seems to go directly against the example I am working from.
This problem was tricksy because there were two major interlocking problems - a seemingly excessive permissions requirement, and what struck me as a non-standard use of the ENTRYPOINT/CMD systems.
Working solution
The Dockerfile that works is as follows:
# Demo of a PHP-based lambda
#
# See example:
# https://github.com/aws-samples/php-examples-for-aws-lambda/blob/master/0.7-PHP-Lambda-functions-with-Docker-container-images/Dockerfile
FROM php:8.0-cli-alpine
WORKDIR /root
# Install Composer
COPY bin bin
RUN sh /root/bin/install-composer.sh
RUN php /root/composer.phar --version
# Install Composer deps
COPY composer.json composer.lock /root/
# Move deps to /opt, /root has significant permission issues
RUN php /root/composer.phar install && \
mv /root/vendor /opt/vendor
# Install runtimes
COPY runtime/bootstrap /var/runtime/
COPY src/index.php /var/task/
RUN chmod 777 /usr/local/bin/php /var/task/* /var/runtime/*
# The entrypoint seems to be the main handler
# and the CMD specifies what kind of event to process
WORKDIR /var/task
ENTRYPOINT ["/var/runtime/bootstrap"]
CMD ["index"]
So, that resolves one of my nagging questions about Amazon Linux - it is not required. Note that although I installed Composer dependencies in /root, they could not stay there - even 777 perms on them seemed to be insufficient.
As you can see I used 777 permissions on things in /var. 755 might work, maybe even 750 would work - but the key here is that Amazon appears to be a user that is not the build (root) user. That tripped me up a lot.
Now the ENTRYPOINT is used to run the bootstrap script, which appears to be doing general mediation between events on the AWS side and "use cases" in /var/task. The normal purpose of a Docker entrypoint is as a command wrapper to CMD, so to use CMD as a "default lambda type" seems to significantly violate the principle of least surprise. I would have thought the lambda type would be defined by the incoming event, not by any lambda-side setting.
Testing
To test this lambda I use this event in the Lambda UI:
{
"queryStringParameters": { "name": "halfer" }
}
And the demo code will respond with:
{
"statusCode": 200,
"headers": {
"Content-Type": "application/json",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Headers": "Content-Type",
"Access-Control-Allow-Methods": "OPTIONS,POST"
},
"body": "Hello, halfer"
}
Suffice it to say this feels rather brittle. Admittedly the demo code is not production quality, but even so, I suspect this would need a pipeline to do a real AWS Lambda test prior to merging down or deployment.
Performance
Here is why lambdas are tempting, especially for infrequent calls such as crons - they are instantiated quickly and die quickly, leaving no running infra. In one of my demo calls I have:
Init duration 188.75 ms
Duration 39.45 ms
Billed duration 229 ms
Deeper understanding
Having worked with the pieces I think I can now explain them rather better, and what I thought of as unusual architectural choices may actually have some purpose. I fear however this design ideology is not sufficiently documented, so engineers working with Docker-based AWS Lambdas have to spend additional time figuring the pieces out.
Consider the processing loop in the demo runtime:
// This is the request processing loop. Barring unrecoverable failure, this loop runs until the environment shuts down.
do {
// Ask the runtime API for a request to handle.
$request = getNextRequest();
// Obtain the function name from the _HANDLER environment variable and ensure the function's code is available.
$handlerFunction = $_ENV['_HANDLER'];
require_once $_ENV['LAMBDA_TASK_ROOT'] . '/' . $handlerFunction . '.php';
// Execute the desired function and obtain the response.
$response = $handlerFunction($request['payload']);
// Submit the response back to the runtime API.
sendResponse($request['invocationId'], $response);
} while (true);
This picks up $_ENV['_HANDLER'] from the Lambda environment, and AWS derives that from the CMD of the image. Now, in PHP the env vars in $_ENV are static for the duration of the process, so it is perhaps a slightly odd choice to read this in a loop and include the file in a loop - it would have been better to do this in an initialisation phase, returning a clean error if the include isn't found.
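As a minimal sketch (mine, not the demo's code), the initialisation-phase variant could look like this, reusing getNextRequest() and sendResponse() from the demo runtime above:

// Sketch only: resolve and load the handler once, before the loop
$handlerFunction = $_ENV['_HANDLER'];
$handlerFile = $_ENV['LAMBDA_TASK_ROOT'] . '/' . $handlerFunction . '.php';
if (!is_readable($handlerFile)) {
    // Fail fast with a clean error instead of failing on every invocation
    fwrite(STDERR, "Handler file not found: $handlerFile\n");
    exit(1);
}
require_once $handlerFile;

do {
    $request = getNextRequest();
    $response = $handlerFunction($request['payload']);
    sendResponse($request['invocationId'], $response);
} while (true);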
However, here's the likely purpose of this system: AWS Lambdas let users customise the CMD in the web dashboard. So in an example enterprise, let's say that there are three lambdas - one for responding to a web event, one for a scheduler, and one for responding to SNS topics. The handlers for each of these could be added to the same image, allowing the three lambdas to share an image - all they need to do is supply a CMD override, and each one will load and use the right handler.
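For illustration, a hypothetical second handler in the same image could be as small as this (the file and function names are mine, not from the demo); selecting it is just a matter of overriding CMD to "scheduler":

<?php
// /var/task/scheduler.php - hypothetical handler, picked up via $_ENV['_HANDLER']
function scheduler($payload)
{
    // React to the scheduled event; the runtime loop above passes the raw payload
    return ['statusCode' => 200, 'body' => 'Scheduled job completed'];
}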
I am working on a project where I want to use PHP and PhantomJS together. I have completed my PhantomJS script and am trying to run it using PHP's exec function, but the function is returning an array of errors.
Below I am including my PhantomJS and PHP code.
dir: /var/www/html/phantom/index.js
var page = require('webpage').create();
var fs = require('fs');
page.open('http://insttaorder.com/', function(status) {
// Get all links to CSS and JS on the page
var links = page.evaluate(function() {
var urls = [];
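// Note: the $(...) calls below assume the target page loads jQuery itself; otherwise they will fail inside evaluate()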
$("[rel=stylesheet]").each(function(i, css) {
urls.push(css.href);
});
$("script").each(function(i, js) {
if (js.src) {
urls.push(js.src);
}
});
return urls;
});
// Save all links to a file
var url_file = "list.txt";
fs.write(url_file, links.join("\n"), 'w');
// Launch wget program to download all files from the list.txt to current
// folder
require("child_process").execFile("wget", [ "-i", url_file ], null,
function(err, stdout, stderr) {
console.log("execFileSTDOUT:", stdout);
console.log("execFileSTDERR:", stderr);
// After wget finished exit PhantomJS
phantom.exit();
});
});
dir: /var/www/html/phantom/index.php
exec('/usr/bin/phantomjs index.js 2>&1',$output);
echo '<pre>';
print_r($output);
die;
Also tried with
exec('/usr/bin/phantomjs /var/www/html/phantom/index.js 2>&1',$output);
echo '<pre>';
print_r($output);
die;
After running this I am getting the error below:
Array
(
[0] => QXcbConnection: Could not connect to display
[1] => PhantomJS has crashed. Please read the bug reporting guide at
[2] => and file a bug report.
[3] => Aborted (core dumped)
)
But if I run the index.php file from the terminal like this:
user2#user2-H81M-S:/var/www/html/phantom$ php index.php
then it works fine. I don't know how to solve it. Please help.
I am using the following versions:
system version: Ubuntu 16.04.2 LTS
PHP version: 5.6
phantomJs version: 2.1.1
Did you try to set an environment variable on your server, or add it before calling phantomjs?
I was in the same situation and found some solutions:
a. define or set the variable QT_QPA_PLATFORM to offscreen (see the PHP example after this list):
QT_QPA_PLATFORM=offscreen /usr/bin/phantomjs index.js
b. or add this line into your .bashrc file (put it at the end):
export QT_QPA_PLATFORM=offscreen
c. or install the package xvfb and call xvfb-run before phantomjs:
xvfb-run /usr/bin/phantomjs index.js
d. or use the parameter platform:
/usr/bin/phantomjs -platform offscreen index.js
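Since the question launches PhantomJS from PHP's exec, option (a) can also be applied directly in the exec call. A sketch, using the same paths as in the question:
exec('QT_QPA_PLATFORM=offscreen /usr/bin/phantomjs /var/www/html/phantom/index.js 2>&1', $output);
echo '<pre>';
print_r($output);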
Maybe you don't want to or can't make modifications on your server; in that case you may try to download the static binary from the official website, then:
/path/to/the/bin/folder/phantomjs index.js
and / or create an alias in your .bash_aliases file like this:
alias phantomjs=/path/to/the/bin/folder/phantomjs
Make sure that phantomjs is not already installed on the system if you decide to use the alias.
If the .bash_aliases file does not already exist, feel free to create it, or add the alias line at the end of the .bashrc file.
Some references:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=817277
https://github.com/ariya/phantomjs/issues/14376
https://bugs.launchpad.net/ubuntu/+source/phantomjs/+bug/1586134
I had the same problem running phantomjs on headless Ubuntu 18.04 (on the default Vagrant VM install of openstreetmap-website). Following Jiab77's links, it seems the PhantomJS team says the problem is the Debian package, but the Debian team closed the bug as wontfix. I needed phantomjs to "just work" so it can be called by other programs that expect it to work normally. Specifically, openstreetmap-website has an extensive Ruby test suite with over 40 tests that were failing because of this, and I didn't want to modify all those tests.
Following Jiab77's answer, here's how I made it work:
As root, cp /usr/bin/phantomjs /usr/local/bin/phantomjs
Edit /usr/local/bin/phantomjs and add the line export QT_QPA_PLATFORM=offscreen so it runs before execution. Here is what mine says after doing so:
#!/bin/sh
LD_LIBRARY_PATH="/usr/lib/phantomjs:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH
# 2018-11-13: added the next line so phantomjs can run headless as explained on
# https://stackoverflow.com/questions/49154209/how-to-solve-error-qxcbconnection-could-not-connect-to-display-when-using-exec
export QT_QPA_PLATFORM=offscreen
exec "/usr/lib/phantomjs/phantomjs" "$@"
After this change, phantomjs can be run from the command line without changing anything else, and all the tests that depend on phantomjs were successfully passed.
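To confirm the wrapper from PHP, a quick check of mine (not part of the original setup):
exec('/usr/local/bin/phantomjs --version 2>&1', $out);
print_r($out); // should print the version number with no QXcbConnection error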
I'm experimenting with Deployer to deploy Laravel application into shared hosting (using laravel recipe) from my local ~/Code/project_foo.
The point is that when I'm connected to my shared hosting server via SSH, the default php -v version is 5.6.33. I confirmed that I can change the PHP version on the fly by calling php70 -v, or even the whole path like /usr/local/bin/php70.
The problem is that I don't know how to tell Deployer to call commands using php70, which is required; otherwise composer install fails.
So in Terminal I'm inside root of the Laravel project and I simply call:
dep deploy
My deploy.php is messy and very simple but this is just a proof of concept. I'm trying to figure out everything and then I will make it look nicer.
I checked the source code of the laravel recipe, and I saw that there is:
{{bin/php}}
but I don't know how to override the value to match what my hosting tells me to use:
/usr/local/bin/php70
Please, give me any hints how to force the script use different PHP version once connected to the remote host / server.
This is whole script:
<?php
namespace Deployer;
require 'recipe/laravel.php';
//env('bin/php', '/usr/local/bin/php70'); // <- I thought this would work but it doesn't change anything
// Project name
set('application', 'my_project');
// Project repository
set('repository', 'git#github.com:xxx/xxx.git');
// [Optional] Allocate tty for git clone. Default value is false.
set('git_tty', true);
// Shared files/dirs between deploys
add('shared_files', []);
add('shared_dirs', []);
// Writable dirs by web server
add('writable_dirs', []);
// Hosts
host('xxx')
->user('xxx')
->set('deploy_path', '/home/slickpl/projects/xxx');
// Tasks
task('build', function () {
run('cd {{release_path}} && build');
});
// [Optional] if deploy fails automatically unlock.
after('deploy:failed', 'deploy:unlock');
// Migrate database before symlink new release.
before('deploy:symlink', 'artisan:migrate');
OK, I've found the solution.
I added (after require):
set('bin/php', function () {
return '/usr/local/bin/php70';
});
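Since the value is constant here, a plain string should work as well; Deployer's set() accepts plain values, not just closures:
set('bin/php', '/usr/local/bin/php70');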
For anybody who searches for changing Composer's PHP version:
set('bin/composer', function () {
return '/usr/bin/php7.4 /usr/local/bin/composer';
});
There is a function locateBinaryPath(),
so the result is:
set('bin/php', function () {
return locateBinaryPath('php7.4');
});
First find the php path and composer path using the commands below; for more information see Setting PHP versions in Deployer deployments:
find / -type f -name "php" 2>&1 | grep -v "Permission denied"
find / -type f -name "composer" 2>&1 | grep -v "Permission denied"
then
set('bin/composer',
function () {
return 'php_path composer_path';
});
like this
set('bin/composer',
function () {
return '/opt/remi/php73/root/usr/bin/php /usr/bin/composer';
});
I have created a number of Selenium IDE files that I converted to phpunit format in Selenium IDE 2.9.1.1 for Firefox (on a Windows box), using Options -> Format converter. Those converted files define a class "Example" that is derived from class PHPUnit_Extensions_SeleniumTestCase. I now know that this class needs to be changed to PHPUnit_Extensions_Selenium2TestCase. The problem is, I cannot get this to run with recent versions of phpunit.
I am running these tests on a Fedora 24 VM which is using php 5.6.30, java 1.8.0_121-b14, and firefox 51.0.1-2. I have tried to get these to run using selenium standalone server 3.0.1 (and now 3.1.0), and phpunit 5.7.13. I have the latest php facebook WebDriver installed. The error I keep getting is that the above mentioned class is not found. I did a grep on that class and this is what I found:
[root#localhost bin]# grep -r "PHPUnit_Extensions_Selenium2TestCase" .
Binary file ./phpunit/phpunit-4.6.7.phar matches
Binary file ./phpunit/phpunit-4.7.6.phar matches
So, it appears that this class does not exist in phpunit 5.7 and above (which are in that directory), nor does it exist in html-runner.phar, which is in the same directory. The seleniumhq.org site says to use html runner if you convert from IDE, but I can find no examples of how to use the html-runner.phar file (and no documentation).
Can someone please tell me what I should change the class name to, to get this test to work?
UPDATE:
I now know that if I want to use phpunit and selenium server to drive a firefox browser, I have to get selenium talking to geckodriver. I have installed:
geckodriver 0.14.0 at /usr/local/bin/geckodriver
selenium server 3.0.1 at /usr/local/bin/selenium
phpunit-5.7.13.phar installed at /usr/local/bin/phpunit
I used Composer to add the webdriver bindings (facebook/webdriver 1.3.0):
[root#localhost composer]# cat composer.json
{
"require": {
"facebook/webdriver": "^1.3",
"phpunit/phpunit": ">=3.7",
"phpunit/phpunit-selenium": ">=1.2"
}
}
php composer.phar install
They were added to the PATH:
[bjt#localhost projects]$ echo $PATH
/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/local/bin/selenium:/usr/local/bin/phpunit:/usr/local/bin/composer:/usr/local/bin/geckodriver
I have a small test file:
<?php
require_once('/usr/local/bin/composer/vendor/autoload.php');
class test extends PHPUnit_Extensions_Selenium2TestCase
{
protected function setUp()
{
$this->setBrowser("*firefox");
$this->setBrowserUrl("https://fakeurl.com/");
}
public function testMyTestCase()
{
$this->open("/");
}
}
Starting the selenium server:
java -jar /usr/local/bin/selenium/selenium-standalone-3.0.1.jar
When I run the test:
/usr/local/bin/phpunit/phpunit-5.7.13.phar --verbose test.php
Yields this error:
PHPUnit_Extensions_Selenium2TestCase_WebDriverException: The best matching driver provider Firefox/Marionette driver can't create a new driver instance for Capabilities [{browserName=*firefox}]
So, it appears that geckodriver is not talking to selenium server. If I try to force the issue by changing the execution of the server:
java -Dwebdriver.gecko.driver="/usr/local/bin/geckodriver/geckodriver" -jar /usr/local/bin/selenium-server-standalone-3.0.1.jar
or
sudo java -Dwebdriver.gecko.driver="/usr/local/bin/geckodriver/geckodriver" -jar /usr/local/bin/selenium-server-standalone-3.0.1.jar
It makes no difference. I'm hoping someone can point out what I am missing. I'm at a dead end.
I am currently working through the same process of getting this to work and would like to point out a few things:
htmlrunner is meant to run the test cases saved directly from the Selenium IDE in the default format, which is html.
Make sure you can run firefox from the terminal running the selenium server.
You added two different Selenium PHP bindings: the facebook one and phpunit-selenium. I currently have it working with only the facebook binding, with PHPUnit extending the class PHPUnit_Framework_TestCase. I would recommend using the example provided in GitHub for facebook/webdriver without PHPUnit to verify that your Selenium config is working, then add PHPUnit; a sketch of the combined setup follows.
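A minimal sketch of that combination, assuming the Selenium server is listening on the default http://localhost:4444/wd/hub (my illustration, not the asker's code):

<?php
require_once('/usr/local/bin/composer/vendor/autoload.php');

use Facebook\WebDriver\Remote\RemoteWebDriver;
use Facebook\WebDriver\Remote\DesiredCapabilities;

class WebDriverTest extends PHPUnit_Framework_TestCase
{
    protected $driver;

    protected function setUp()
    {
        // Assumes the default Selenium server endpoint
        $this->driver = RemoteWebDriver::create('http://localhost:4444/wd/hub', DesiredCapabilities::firefox());
    }

    public function testMyTestCase()
    {
        $this->driver->get('https://fakeurl.com/');
        $this->assertNotEmpty($this->driver->getTitle());
    }

    protected function tearDown()
    {
        $this->driver->quit();
    }
}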
I'm deploying code to a single-instance web server AWS EB environment that will provision/update my connected RDS database. I've got an .ebextensions file that calls deployment code:
---
container_commands:
01deploydb:
command: /var/www/html/php/cli/deploy-db.php
leader_only: true
On the same deployment, I dropped the deploy-db.php file back one directory into /cli/. On deployment, I get ERROR: [Instance: i-*****] Command failed on instance. Return code: 127 Output: /bin/sh: /var/www/html/php/cli/deploy-db.php: No such file or directory.
container_command 01deploydb in .ebextensions/01_db.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
If I deploy a version that does not include the command, then deploy a second update including the command, there is no error. However, adding the command and the file it calls at the same time produces the error. A similar sequence occurred earlier with a different command/file.
My question is: is there a documented order/sequence for how AWS updates the environment? I would have expected that my new version would have fully deployed (and the .php file installed) before container_commands are called.
The commands: section runs before the project files are put in place. This is where you can install server packages for example.
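For example, a commands: section that installs a server package might look like this (the package name is hypothetical):

commands:
  01_install_packages:
    command: yum install -y jq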
The container_commands: section runs in a staging directory before the files are put in their final destination. Here you can modify your files if you need to. The current path is this staging directory, so you can run it like this (I might have the app directory wrong; maybe it should be php/cli/deploy-db.php):
container_commands:
01deploydb:
command: cli/deploy-db.php
leader_only: true
Reference for above: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
You can also run post-deploy scripts. This is not very well documented (at least it wasn't). You can do something like this (it won't be leader-only, though you could put a file in this directory through container_commands:):
files:
"/opt/elasticbeanstalk/hooks/appdeploy/post/99_deploy.sh":
mode: "000755"
owner: root
group: root
content: |
#!/usr/bin/env bash
/var/www/html/php/cli/deploy-db.php