Error writing dockerrun.aws.json v2 file - php

I am using this file to deploy a multicontainer nginx php-fpm application in AWS.
I run eb local run and it shows me this error:
holdbusinessnginx_1 | nginx: [emerg] host not found in upstream "php:9000" in /etc/nginx/conf.d/upstream.conf:1
elasticbeanstalk_holdbusinessnginx_1 exited with code 1
It probably is because nginx is running before php-fpm.
In a docker-compose.yml file there is a directive called depends_on.
Is there a way to use it in a Dockerrun.aws.json file?

Just use the links directive:
"links": [
  "php"
],
where php is the name of the other container you defined in the same Dockerrun.aws.json file. EB infers the dependencies from the links, volumes, etc. So by forcing the nginx container to link to php you are telling EB that php should come up before nginx. In short. :-)
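For reference, a minimal sketch of what that looks like in a full Dockerrun.aws.json v2 file. The image tags, memory values, and ports here are illustrative placeholders, not taken from the original setup:

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "php",
      "image": "php:7-fpm",
      "essential": true,
      "memory": 128
    },
    {
      "name": "nginx",
      "image": "nginx:latest",
      "essential": true,
      "memory": 128,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 }
      ],
      "links": [
        "php"
      ]
    }
  ]
}
```

With the link in place, the hostname php also resolves inside the nginx container, so an upstream of php:9000 can be found.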

Sorry to take so long to respond. It was exactly that: an oversight on my part.

Related

What should be run in a container for a PHP-based Docker AWS lambda?

PHP isn't a natively supported language in AWS Lambda, but I thought I'd try my hand at getting one working, using a custom Docker image. I am using this official AWS example to structure the image.
I don't quite understand the pieces yet. I will add what files I have to this post.
Firstly, my Dockerfile:
# Demo of a PHP-based lambda
#
# See example:
# https://github.com/aws-samples/php-examples-for-aws-lambda/blob/master/0.7-PHP-Lambda-functions-with-Docker-container-images/Dockerfile
FROM php:8.0-cli-alpine
WORKDIR /root
# Install Composer
COPY bin bin
RUN sh /root/bin/install-composer.sh
RUN php /root/composer.phar --version
# Install Composer deps
COPY composer.json composer.lock /root/
RUN php composer.phar install
# Install runtimes
COPY runtime /var/runtime
COPY src /var/task/
# Entrypoint
CMD ["index"]
Based on the example I also have:
A PHP listener program at /var/runtime/bootstrap (nearly verbatim copy of the example repo)
Composer dependencies at /root/vendor/... that are loaded by the bootstrap
A trivial index file at /var/task/index.php (verbatim copy of the example repo)
One change I have made is to base the image on an Alpine image from PHP, rather than to use an Amazon Linux image. I wonder, could there be something in the Amazon image that is needed?
The test I am using is the "hello world" in the AWS Lambda UI:
{
"key1": "value1",
"key2": "value2",
"key3": "value3"
}
Anyway, I have used docker login and docker push to get the image to ECR. When I run the hello world test inside the AWS dashboard, I am getting this set of error logs in CloudWatch:
2021-11-13T19:12:12.449+00:00 IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [index] WorkingDir: [/root]
2021-11-13T19:12:12.493+00:00 START RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb Version: $LATEST
2021-11-13T19:12:12.502+00:00 IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [index] WorkingDir: [/root]
2021-11-13T19:12:12.504+00:00 END RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb
2021-11-13T19:12:12.504+00:00 REPORT RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb Duration: 9.20 ms Billed Duration: 10 ms Memory Size: 128 MB Max Memory Used: 3 MB
2021-11-13T19:12:12.504+00:00 RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb Error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Runtime.InvalidEntrypoint
That makes a lot of sense, as I don't understand the entry point of "index" either, but it's there as a CMD in the example Dockerfile. Is this an alias for something? I would be inclined to use something like php /var/runtime/bootstrap, but I'd rather understand things, rather than guessing.
I believe I might be able to use Lambda RIE to test this locally, but I wonder if that would tell me what I already know - I need to fix the CMD.
For what it's worth, I can't see how the index.php file is triggered in the lambda either. How does that get invoked?
Update
I am wondering if the parent image in the example (public.ecr.aws/lambda/provided) has an ENTRYPOINT that would shed some light on the apparently confusing CMD. I will investigate that next.
Update 2
My suspicion that I might have to use the Amazon Linux parent image was a false steer - this official resource shows the use of standard Python and Node images.
I decided to try repairing the main Docker command:
CMD ["php", "/var/runtime/bootstrap"]
However it doesn't like that:
START RequestId: d95a29d3-6764-4bae-9976-09830c1c17af Version: $LATEST
IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [php,/var/runtime/bootstrap] WorkingDir: [/root]
IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [php,/var/runtime/bootstrap] WorkingDir: [/root]
END RequestId: d95a29d3-6764-4bae-9976-09830c1c17af
REPORT RequestId: d95a29d3-6764-4bae-9976-09830c1c17af Duration: 19.88 ms Billed Duration: 20 ms Memory Size: 128 MB Max Memory Used: 3 MB
RequestId: d95a29d3-6764-4bae-9976-09830c1c17af Error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied
Runtime.InvalidEntrypoint
Update 3
No matter what I do, I seem to run into problems with entrypoints. I've even tried a runtime script to chmod +x on the various binaries it doesn't like, but of course if I try to kick that off in the ENTRYPOINT, the container believes that /bin/sh cannot be executed. This is getting rather silly - containers are just not behaving correctly in the Lambda environment.
Update 4
I have now tried to move away from Alpine, in case a more standard OS will work correctly. No joy - I get the same. I'm now at the randomly-trying-things stage, and this is rather slow going, given that the build-push-install loop is cumbersome.
This question looks promising, but putting the bootstrap file in /var/task seems to go directly against the example I am working from.
This problem was tricksy because there were two major interlocking problems - a seemingly excessive permissions requirement, and what struck me as a non-standard use of the ENTRYPOINT/CMD systems.
Working solution
The Dockerfile that works is as follows:
# Demo of a PHP-based lambda
#
# See example:
# https://github.com/aws-samples/php-examples-for-aws-lambda/blob/master/0.7-PHP-Lambda-functions-with-Docker-container-images/Dockerfile
FROM php:8.0-cli-alpine
WORKDIR /root
# Install Composer
COPY bin bin
RUN sh /root/bin/install-composer.sh
RUN php /root/composer.phar --version
# Install Composer deps
COPY composer.json composer.lock /root/
# Move deps to /opt, /root has significant permission issues
RUN php /root/composer.phar install && \
    mv /root/vendor /opt/vendor
# Install runtimes
COPY runtime/bootstrap /var/runtime/
COPY src/index.php /var/task/
RUN chmod 777 /usr/local/bin/php /var/task/* /var/runtime/*
# The entrypoint seems to be the main handler
# and the CMD specifies what kind of event to process
WORKDIR /var/task
ENTRYPOINT ["/var/runtime/bootstrap"]
CMD ["index"]
So, that resolves one of my nagging questions about Amazon Linux - it is not required. Note that although I installed Composer dependencies in /root, they could not stay there - even 777 perms on them seemed to be insufficient.
As you can see I used 777 permissions on things in /var. 755 might work, maybe even 750 would work - but the key here is that Amazon appears to be a user that is not the build (root) user. That tripped me up a lot.
Now the ENTRYPOINT is used to run the bootstrap script, which appears to be doing general mediation between events on the AWS side and "use cases" in /var/task. The normal purpose of a Docker entrypoint is as a command wrapper to CMD, so to use CMD as a "default lambda type" seems to significantly violate the principle of least surprise. I would have thought the lambda type would be defined by the incoming event, not by any lambda-side setting.
Testing
To test this lambda I use this event in the Lambda UI:
{
"queryStringParameters": { "name": "halfer" }
}
And the demo code will respond with:
{
"statusCode": 200,
"headers": {
"Content-Type": "application/json",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Headers": "Content-Type",
"Access-Control-Allow-Methods": "OPTIONS,POST"
},
"body": "Hello, halfer"
}
Suffice it to say this feels rather brittle. Admittedly the demo code is not production quality, but even so, I suspect this would need a pipeline to do a real AWS Lambda test prior to merging down or deployment.
Performance
Here is why lambdas are tempting, especially for infrequent calls such as crons - they are instantiated quickly and die quickly, leaving no running infra. In one of my demo calls I have:
Init duration 188.75 ms
Duration 39.45 ms
Billed duration 229 ms
Deeper understanding
Having worked with the pieces I think I can now explain them rather better, and what I thought of as unusual architectural choices may actually have some purpose. I fear however this design ideology is not sufficiently documented, so engineers working with Docker-based AWS Lambdas have to spend additional time figuring the pieces out.
Consider the processing loop in the demo runtime:
// This is the request processing loop. Barring unrecoverable failure, this loop runs until the environment shuts down.
do {
    // Ask the runtime API for a request to handle.
    $request = getNextRequest();

    // Obtain the function name from the _HANDLER environment variable and ensure the function's code is available.
    $handlerFunction = $_ENV['_HANDLER'];
    require_once $_ENV['LAMBDA_TASK_ROOT'] . '/' . $handlerFunction . '.php';

    // Execute the desired function and obtain the response.
    $response = $handlerFunction($request['payload']);

    // Submit the response back to the runtime API.
    sendResponse($request['invocationId'], $response);
} while (true);
This picks up $_ENV['_HANDLER'] from the Lambda environment, and AWS derives that from the CMD of the image. Now, in PHP the env vars in $_ENV are static for the duration of the process, so it is perhaps a slightly odd choice to read this in a loop and include the file in a loop - it would have been better to do this in an initialisation phase, returning a clean error if the include isn't found.
However, here's the likely purpose of this system: AWS Lambdas let users customise the CMD in the web dashboard. So in an example enterprise let's say that there's three lambdas - one for responding to a web event, one for a scheduler, and one for responding to SNS topics. The handlers for each of these could be added to the same image, allowing the three lambdas to share an image - all they need to do is to supply a CMD override, and each one will load and use the right handler.
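To illustrate that sharing arrangement (the function name "scheduler" here is hypothetical): each Lambda can override the image CMD via its image configuration, either in the "Image configuration" panel of the Lambda console or with the CLI, by passing JSON of this shape to aws lambda update-function-configuration --image-config:

```json
{
  "Command": ["scheduler"]
}
```

With that override, the runtime sets _HANDLER to scheduler, and the bootstrap loop above would load /var/task/scheduler.php, while the sibling lambdas using the same image load their own handlers.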

How to configure nginx to use a lock file to generate a 423 Locked response?

Environment :
I am using nginx 1.14.2 and php-fpm 7.2 (nginx and php-fpm are on the same VM)
Context :
I developed the following use case : when a lock file is present on the filesystem (created/deleted by an "upgrade" script), I return a 423 Locked HTTP response via my source code.
(the script is used to update some files and clear the cache of the server)
Issue :
I want nginx to handle the lock file to return the 423 Locked response and "free" the php-fpm process.
Is it possible to configure nginx for such a behavior?
if (-f /path/to/file) {
    return 423;
}
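This should work with a plain if check at the server level, so the request is rejected before it is ever handed to a php-fpm worker. A minimal sketch, where the lock-file path, document root, and fastcgi_pass address are placeholders for the real setup:

```nginx
server {
    listen 80;
    root /var/www/html;

    # If the upgrade script has created the lock file, answer 423
    # directly from nginx; php-fpm is never touched.
    if (-f /var/run/maintenance.lock) {
        return 423;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }
}
```

This is one of the safe uses of if in nginx (it only returns), and -f re-checks the filesystem on every request, so deleting the lock file takes effect immediately without a reload.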

Laravel Error 503 Service Unavailable Service

Error 503 Service Unavailable
Service Unavailable
Guru Meditation:
XID: 5312211
Varnish cache server
I am working with cPanel and a subdomain, but I got this error from a Laravel project. Can you help me solve this?
I am using cPanel and Laravel 5.5.
Looks like you ran the artisan down command, but not the up command.
Just run:
php artisan up
and then you will get:
"Application is now live."
If you have tried the php artisan up command and your site is still not up and gives a 503, then delete the down file inside /storage/framework/.
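A minimal sketch of that, assuming a Laravel 5.x project layout (run from the project root):

```shell
# Remove the maintenance-mode marker file that `php artisan down`
# creates; without it, Laravel stops returning the 503 page.
rm -f storage/framework/down
```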
I had the same problem in cPanel; here is what I did to fix it.
I first updated the PHP version to the latest.
Then I went to the Shell option in cPanel and entered this command, and my app is live now: php artisan up
The cause of the error likely stems from Apache and PHP-FPM becoming overloaded with requests. PHP-FPM, and occasionally Apache, will need adjustments to their limitations to get around this.
To start, we recommend attempting to adjust the PHP-FPM pool limits from within the WHM "MultiPHP Manager." To do so globally:
[1] Access WHM >> MultiPHP Manager
[2] Select "System PHP-FPM Configuration"
[3] Adjust the "Max Children" and "Max Requests" fields. We recommend incrementing in values of 5 to 10 to ensure that PHP-FPM does not get overloaded.
[4] Save your configuration settings
To perform these changes by domain:
[1] Access WHM >> MultiPHP Manager
[2] Scroll down and locate a domain that experiences the problem.
[3] Select "Edit PHP-FPM" for the domain in question.
[4] Adjust the "Max Children" and "Max Requests" fields.
[5] Save your configuration settings
If you have run php artisan down previously then you may face this issue.
You have to bring the application back up using the php artisan up command.
Why did this happen? I made the same mistake: I ran php artisan down and then ran
php artisan serve
and CLI was showing me
<http://127.0.0.1:8000>
[Thu Dec 31 00:25:23 2020] PHP 7.4.3 Development Server (http://127.0.0.1:8000) started
The application started, but it was showing 503 Service Unavailable; then I ran php artisan up and the application came back.
Run this command to make it work:
php artisan up
If you get "Laravel Error 503 Service Unavailable Service" but there is no info in log file, check filesystem status. You may have filled up disk space. Check what's filling it:
cd / && sudo du -h --max-depth=1 -x
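Alongside free space, it is worth checking inode usage, since a filesystem can refuse to create new files (sessions, cache, logs) even with plenty of bytes free:

```shell
df -h /   # free space on the root filesystem
df -i /   # inode usage; IUse% at 100% also blocks file creation
```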
503 Service Unavailable
The server is currently unavailable (because it is overloaded or down for maintenance). Generally, this is a temporary state.

AWS Elastic Beanstalk Deployment Order

I'm deploying code to a single-instance web server AWS EB environment that will provision/update my connected RDS database. I've got an .ebextensions file that calls deployment code:
---
container_commands:
  01deploydb:
    command: /var/www/html/php/cli/deploy-db.php
    leader_only: true
On the same deployment, I dropped the deploy-db.php file back one directory into /cli/. On deployment, I get ERROR: [Instance: i-*****] Command failed on instance. Return code: 127 Output: /bin/sh: /var/www/html/php/cli/deploy-db.php: No such file or directory.
container_command 01deploydb in .ebextensions/01_db.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
If I deploy a version that does not include the command, then deploy a second update including the command, there is no error. However, adding the command and the file it calls at the same time produces the error. A similar sequence occurred earlier with a different command/file.
My question is: is there a documented order/sequence for how AWS updates the environment? I would have expected that my new version would have fully deployed (and the .php file installed) before container_commands are called.
The commands: section runs before the project files are put in place. This is where you can install server packages, for example.
The container_commands: section runs in a staging directory before the files are put in their final destination. Here you can modify your files if you need to. The current path is this staging directory, so you can run it like this (I might have the app directory wrong; maybe it should be php/cli/deploy-db.php):
container_commands:
  01deploydb:
    command: cli/deploy-db.php
    leader_only: true
Reference for above: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
You can also run post-deploy scripts. This is not very well documented (at least it wasn't). You can do something like this (it won't be leader-only though, but you could put the file in this directory through container_commands:):
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_deploy.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      /var/www/html/php/cli/deploy-db.php

Nginx + php-fpm: Bad gateway only when xdebug server is running

Problem
When xdebug server is running from IntelliJ IDEA, I get 502 Bad Gateway from nginx when I try loading my site to trigger breakpoints.
If I stop the xdebug server, the site works as intended.
So, I'm not able to run the debugger, but it did work previously (!). Not able to pinpoint why it suddenly stopped working.
Setup
A short explanation of the setup (let me know if I need to expand on this).
My PHP app is running in a Docker container, and it is linked to nginx running in a different container using volumes_from in the Docker Compose config.
After starting the app, I can verify using phpinfo() that the xdebug module is loaded.
My xdebug.ini has the following content:
zend_extension=xdebug.so
xdebug.remote_enable=1
xdebug.remote_host=10.0.2.2
xdebug.remote_connect_back=0
xdebug.remote_port=5555
xdebug.idekey=complex
xdebug.remote_handler=dbgp
xdebug.remote_log=/var/log/xdebug.log
xdebug.remote_autostart=1
I got the ip address for remote_host (where the xdebug server is running) by these steps:
docker-machine ssh default
route -n | awk '/UG[ \t]/{print $2}' <-- Returns 10.0.2.2
To verify I could reach the debugging server from within my php container, I did the following steps
docker exec -it randomhash bash
nc -z -v 10.0.2.2 5555
Giving the following output depending on xdebug server running or not:
Running: Connection to 10.0.2.2 5555 port [tcp/*] succeeded!
Not running: nc: connect to 10.0.2.2 port 5555 (tcp) failed: Connection refused
So IntelliJ IDEA is surely set up to receive connections on 5555. I also did the appropriate path mapping between my source file paths and the remote path (when setting up the PHP Remote Debugging server from within IDEA).
Any ideas? Kind of lost on this one as I don't have much experience with any of these technologies :D
This sometimes happens, and the reason is errors in php-fpm combined with xdebug (exactly that)!
When I was refactoring a colleague's code, one page on the project returned 502 Bad Gateway.
Here's what I found:
php-fpm.log
WARNING: [pool www] child 158 said into stderr: "*** Error in `php-fpm: pool www': free(): invalid size: 0x00007f1351b7d2a0 ***"
........
........
WARNING: [pool www] child 158 exited on signal 6 (SIGABRT - core dumped) after 38.407847 seconds from start
I found a piece of code that caused the error:
ob_start();
$result = eval("?>".$string."<"."?p"."hp return 1;");
$new_string = ob_get_clean();
But that is not all. The error occurred only for a certain state of $string, which at first glance did not differ from the others. In my case, everything was simple: I removed the code that caused the error. This did not affect the functionality of the web page. I continued to debug the code further.
I had the same problem with the Vagrant Homestead Parallels box on an Apple Silicon chip. Switching from PHP 7.3 to 7.4 fixed the issue for me.
