Deploying with Deployer fails - PHP

I'm having a problem deploying with Deployer; this is the first time I'm using any deployment tool. My teacher made a guide for getting it working, but I haven't been able to complete a deploy.
If anyone can see what I'm doing wrong, please tell me.
Here are the specs for my computer and my setup:
Computer:
Operating system Windows 10 Home
Manufacturer ASUSTek Computer Inc.
Model E403SA
Processor Intel(R) Pentium(R) CPU N3700 @ 1.60GHz 1.60 GHz
RAM 4.0 GB
64-bit operating system, x64-based processor
.ssh/config file:
Host xxxxx
ControlMaster no
Hostname ssh.xxxxx.xx
User xxxxxx_xxx
Note that I added ControlMaster no because I read that the problem could be with SSH multiplexing, but I got the same error with and without it...
deploy.php file (in the root of the project):
<?php
namespace Deployer;
require 'recipe/common.php';
// Project name
set('application', 'blog');
// Project repository
set('repository', 'git@github.com:xxxxxxx/xxxxxxxxxxxxxxxx.git');
// [Optional] Allocate tty for git clone. Default value is false.
set('git_tty', true);
// Shared files/dirs between deploys
set('shared_files', ['config/dbinfo.json']);
set('shared_dirs', []);
// Writable dirs by web server
set('writable_dirs', []);
// Hosts
host('ssh.xxxxxx.xx')
    ->set('deploy_path', '~/xxxx.xxxxxxxxxxxxx.xxxx.xxxxxxx')
    ->user('xxxxxx_xxx')
    ->port(22);
// Tasks
desc('Deploy your project');
task('deploy:custom_webroot', function() {
run("cd {{deploy_path}} && ln -sfn {{release_path}} public_html/xxxxxxxxxxxx");
});
task('deploy', [
    'deploy:info',
    'deploy:prepare',
    'deploy:lock',
    'deploy:release',
    'deploy:update_code',
    'deploy:shared',
    'deploy:writable',
    'deploy:clear_paths',
    'deploy:symlink',
    'deploy:unlock',
    'cleanup',
    'success'
]);
// [Optional] If deploy fails automatically unlock.
after('deploy:failed', 'deploy:unlock');
after('deploy', 'deploy:custom_webroot');
When I try to run dep deploy in the root of the project, I get the following output:
$ dep deploy
✈︎ Deploying master on ssh.xxxxxx.xx
➤ Executing task deploy:prepare
✔ Executing task deploy:failed
➤ Executing task deploy:unlock
[Deployer\Exception\RuntimeException]
The command "rm -f ~/blog.xxxxxxxxxxx.xxxx.xxxxx/.dep/deploy.lock" failed.
Exit Code: -1 (Unknown error)
Host Name: ssh.xxxxx.xx
================
mm_send_fd: sendmsg(2): Broken pipe
mux_client_request_session: send fds failed
Any help would be extremely appreciated!

Try running dep deploy:unlock first, then run dep deploy normally.
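If the root cause really is SSH multiplexing (the mm_send_fd / mux_client_request_session lines point that way, and multiplexing is known to be unreliable with OpenSSH on Windows), you can also try switching it off on the Deployer side rather than in ~/.ssh/config. A minimal sketch, assuming your Deployer 6.x release supports the ssh_multiplexing option (check the docs for your exact version):
<?php
namespace Deployer;
require 'recipe/common.php';
// Assumed option: tell Deployer not to reuse an SSH control socket,
// so each remote command gets a plain SSH connection.
set('ssh_multiplexing', false);
If that option isn't available in your version, keeping ControlMaster no (and adding ControlPath none) in your ~/.ssh/config is the equivalent on the OpenSSH side.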

Related

What should be run in a container for a PHP-based Docker AWS lambda?

PHP isn't a natively supported language in AWS Lambda, but I thought I'd try my hand at getting one working, using a custom Docker image. I am using this official AWS example to structure the image.
I don't quite understand the pieces yet. I will add what files I have to this post.
Firstly, my Dockerfile:
# Demo of a PHP-based lambda
#
# See example:
# https://github.com/aws-samples/php-examples-for-aws-lambda/blob/master/0.7-PHP-Lambda-functions-with-Docker-container-images/Dockerfile
FROM php:8.0-cli-alpine
WORKDIR /root
# Install Composer
COPY bin bin
RUN sh /root/bin/install-composer.sh
RUN php /root/composer.phar --version
# Install Composer deps
COPY composer.json composer.lock /root/
RUN php composer.phar install
# Install runtimes
COPY runtime /var/runtime
COPY src /var/task/
# Entrypoint
CMD ["index"]
Based on the example I also have:
A PHP listener program at /var/runtime/bootstrap (nearly verbatim copy of the example repo)
Composer dependencies at /root/vendor/... that are loaded by the bootstrap
A trivial index file at /var/task/index.php (verbatim copy of the example repo)
One change I have made is to base the image on an Alpine image from PHP, rather than to use an Amazon Linux image. I wonder, could there be something in the Amazon image that is needed?
The test I am using is the "hello world" in the AWS Lambda UI:
{
"key1": "value1",
"key2": "value2",
"key3": "value3"
}
Anyway, I have used docker login and docker push to get the image to ECR. When I run the hello world test inside the AWS dashboard, I am getting this set of error logs in CloudWatch:
2021-11-13T19:12:12.449+00:00 IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [index] WorkingDir: [/root]
2021-11-13T19:12:12.493+00:00 START RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb Version: $LATEST
2021-11-13T19:12:12.502+00:00 IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [index] WorkingDir: [/root]
2021-11-13T19:12:12.504+00:00 END RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb
2021-11-13T19:12:12.504+00:00 REPORT RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb Duration: 9.20 ms Billed Duration: 10 ms Memory Size: 128 MB Max Memory Used: 3 MB
2021-11-13T19:12:12.504+00:00 RequestId: 38da1e10-4c93-4109-be10-32c58f83a2fb Error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Runtime.InvalidEntrypoint
That makes a lot of sense, as I don't understand the entry point of "index" either, but it's there as a CMD in the example Dockerfile. Is this an alias for something? I would be inclined to use something like php /var/runtime/bootstrap, but I'd rather understand things than guess.
I believe I might be able to use Lambda RIE to test this locally, but I wonder if that would tell me what I already know - I need to fix the CMD.
For what it's worth, I can't see how the index.php file is triggered in the lambda either. How does that get invoked?
Update
I am wondering if the parent image in the example (public.ecr.aws/lambda/provided) has an ENTRYPOINT that would shed some light on the apparently confusing CMD. Perhaps I will investigate that next.
Update 2
The ponderance that I might have to use the Amazon Linux parent image was a false steer - this official resource shows the use of standard Python and Node images.
I decided to try repairing the main Docker command:
CMD ["php", "/var/runtime/bootstrap"]
However it doesn't like that:
START RequestId: d95a29d3-6764-4bae-9976-09830c1c17af Version: $LATEST
IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [php,/var/runtime/bootstrap] WorkingDir: [/root]
IMAGE Launch error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied Entrypoint: [docker-php-entrypoint] Cmd: [php,/var/runtime/bootstrap] WorkingDir: [/root]
END RequestId: d95a29d3-6764-4bae-9976-09830c1c17af
REPORT RequestId: d95a29d3-6764-4bae-9976-09830c1c17af Duration: 19.88 ms Billed Duration: 20 ms Memory Size: 128 MB Max Memory Used: 3 MB
RequestId: d95a29d3-6764-4bae-9976-09830c1c17af Error: fork/exec /usr/local/bin/docker-php-entrypoint: permission denied
Runtime.InvalidEntrypoint
Update 3
No matter what I do, I seem to run into problems with entrypoints. I've even tried a runtime script to chmod +x on the various binaries it doesn't like, but of course if I try to kick that off in the ENTRYPOINT, the container believes that /bin/sh cannot be executed. This is getting rather silly - containers are just not behaving correctly in the Lambda environment.
Update 4
I have now tried moving away from Alpine, in case a more standard OS works correctly. No joy - I get the same error. I'm now at the point of randomly trying things, and this is rather slow going, given that the build-push-install loop is cumbersome.
This question looks promising, but putting the bootstrap file in /var/task seems to go directly against the example I am working from.
This problem was tricksy because there were two major interlocking problems - a seemingly excessive permissions requirement, and what struck me as a non-standard use of the ENTRYPOINT/CMD systems.
Working solution
The Dockerfile that works is as follows:
# Demo of a PHP-based lambda
#
# See example:
# https://github.com/aws-samples/php-examples-for-aws-lambda/blob/master/0.7-PHP-Lambda-functions-with-Docker-container-images/Dockerfile
FROM php:8.0-cli-alpine
WORKDIR /root
# Install Composer
COPY bin bin
RUN sh /root/bin/install-composer.sh
RUN php /root/composer.phar --version
# Install Composer deps
COPY composer.json composer.lock /root/
# Move deps to /opt, /root has significant permission issues
RUN php /root/composer.phar install && \
mv /root/vendor /opt/vendor
# Install runtimes
COPY runtime/bootstrap /var/runtime/
COPY src/index.php /var/task/
RUN chmod 777 /usr/local/bin/php /var/task/* /var/runtime/*
# The entrypoint seems to be the main handler
# and the CMD specifies what kind of event to process
WORKDIR /var/task
ENTRYPOINT ["/var/runtime/bootstrap"]
CMD ["index"]
So, that resolves one of my nagging questions about Amazon Linux - it is not required. Note that although I installed Composer dependencies in /root, they could not stay there - even 777 perms on them seemed to be insufficient.
As you can see, I used 777 permissions on things in /var. 755 might work, maybe even 750 - but the key here is that AWS appears to run the container as a user that is not the build (root) user. That tripped me up a lot.
Now the ENTRYPOINT is used to run the bootstrap script, which appears to do general mediation between events on the AWS side and "use cases" in /var/task. The normal purpose of a Docker entrypoint is as a command wrapper to CMD, so using CMD as a "default lambda type" seems to significantly violate the principle of least surprise. I would have thought the lambda type would be defined by the incoming event, not by any lambda-side setting.
Testing
To test this lambda I use this event in the Lambda UI:
{
"queryStringParameters": { "name": "halfer" }
}
And the demo code will respond with:
{
"statusCode": 200,
"headers": {
"Content-Type": "application/json",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Headers": "Content-Type",
"Access-Control-Allow-Methods": "OPTIONS,POST"
},
"body": "Hello, halfer"
}
Suffice it to say this feels rather brittle. Admittedly the demo code is not production quality, but even so, I suspect this would need a pipeline to do a real AWS Lambda test prior to merging down or deployment.
Performance
Here is why lambdas are tempting, especially for infrequent calls such as crons - they are instantiated quickly and die quickly, leaving no running infra. In one of my demo calls I have:
Init duration 188.75 ms
Duration 39.45 ms
Billed duration 229 ms
Deeper understanding
Having worked with the pieces I think I can now explain them rather better, and what I thought of as unusual architectural choices may actually have some purpose. I fear however this design ideology is not sufficiently documented, so engineers working with Docker-based AWS Lambdas have to spend additional time figuring the pieces out.
Consider the processing loop in the demo runtime:
// This is the request processing loop. Barring unrecoverable failure, this loop runs until the environment shuts down.
do {
    // Ask the runtime API for a request to handle.
    $request = getNextRequest();
    // Obtain the function name from the _HANDLER environment variable and ensure the function's code is available.
    $handlerFunction = $_ENV['_HANDLER'];
    require_once $_ENV['LAMBDA_TASK_ROOT'] . '/' . $handlerFunction . '.php';
    // Execute the desired function and obtain the response.
    $response = $handlerFunction($request['payload']);
    // Submit the response back to the runtime API.
    sendResponse($request['invocationId'], $response);
} while (true);
This picks up $_ENV['_HANDLER'] from the Lambda environment, and AWS derives that from the CMD of the image. Now, in PHP the env vars in $_ENV are static for the duration of the process, so it is perhaps a slightly odd choice to read this in a loop and include the file in a loop - it would have been better to do this in an initialisation phase, returning a clean error if the include isn't found.
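A minimal sketch of that initialisation-phase variant (my own restructuring, not the AWS sample code; getNextRequest() and sendResponse() are the helper functions from the demo runtime above):
<?php
// Resolve the handler once, fail fast if it is missing, then enter the processing loop.
$handlerFunction = $_ENV['_HANDLER'];
$handlerFile     = $_ENV['LAMBDA_TASK_ROOT'] . '/' . $handlerFunction . '.php';
if (!is_file($handlerFile)) {
    fwrite(STDERR, "Handler file not found: {$handlerFile}\n");
    exit(1);
}
require_once $handlerFile;
do {
    $request  = getNextRequest();                       // ask the runtime API for the next event
    $response = $handlerFunction($request['payload']);  // call the resolved handler
    sendResponse($request['invocationId'], $response);  // hand the result back to the runtime API
} while (true);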
However, here's the likely purpose of this system: AWS Lambdas let users customise the CMD in the web dashboard. So, in an example enterprise, let's say there are three lambdas - one for responding to a web event, one for a scheduler, and one for responding to SNS topics. The handlers for each of these could be added to the same image, allowing the three lambdas to share an image - all they need to do is supply a CMD override, and each one will load and use the right handler.
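To make that concrete, here is a hypothetical handler that could sit alongside index.php in /var/task. The file and function name (sns) are my own illustration, not part of the AWS sample; the SNS lambda would simply override CMD to ["sns"], and the bootstrap loop above would then require sns.php and call sns() with the event payload:
<?php
// Hypothetical handler file /var/task/sns.php, selected by overriding CMD (and thus _HANDLER) to "sns".
// The function name must match the handler name, because the bootstrap calls $handlerFunction($payload).
function sns(array $data): array
{
    // Sketch only: a real handler would inspect the SNS records in $data.
    $count = count($data['Records'] ?? []);
    return [
        'statusCode' => 200,
        'headers'    => ['Content-Type' => 'application/json'],
        'body'       => json_encode(['received' => $count]),
    ];
}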

Deployer - no tty present and no askpass program specified - How to deploy with Deployer

I'm having trouble deploying with Deployer 4.0.2 and need help from somebody more experienced with it than me.
I want to deploy a repository of mine to a Ubuntu 16.04 server.
I am using Laravel Homestead as a development environment, where I also installed Deployer. From there I SSH into my remote server.
I was able to deploy my code with the root user, until I hit a RuntimeException that aborted my deployment:
Do not run Composer as root/super user! See https://getcomposer.org/root for details
That made me create another user called george, to whom I gave superuser rights. I copied my public key from my local machine to a newly generated ~/.ssh/authorized_keys file, which gave me permission to access the server via SSH.
Yet when I run dep deploy with the new user:
server('production', '138.68.99.157')
    ->user('george')
    ->identityFile()
    ->set('deploy_path', '/var/www/test');
I get another RuntimeException:
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Now it looks like the new user george cannot access the ~/.ssh/id_rsa.pub key. So I copied the keys from the root folder into my home folder and also added the public key in the GitHub SSH settings.
cp root/.ssh/id_rsa.pub home/george/.ssh/id_rsa.pub
cp root/.ssh/id_rsa home/george/.ssh/id_rsa
Only to get the same error as before.
In the end I had to add GitHub to my list of known hosts:
ssh-keyscan -H github.com >> ~/.ssh/known_hosts
Only to get the next RuntimeException
[RuntimeException]
sudo: no tty present and no askpass program specified
I managed to comment out this code in deploy.php
// desc('Restart PHP-FPM service');
// task('php-fpm:restart', function () {
// // The user must have rights for restart service
// // /etc/sudoers: username ALL=NOPASSWD:/bin/systemctl restart php-fpm.service
// run('sudo systemctl restart php-fpm.service');
// });
// after('deploy:symlink', 'php-fpm:restart');
to finally get the deployment process done, and now I ask myself: is the restart of PHP-FPM really necessary, so that I have to keep debugging this deployment tool, or can I live without it?
And if I need it, can somebody help me understand what I need it for? And maybe as a luxury also provide the solution to the RuntimeException?
Try this:
->identityFile('~/.ssh/id_rsa.pub', '~/.ssh/id_rsa', 'pass phrase')
It works great for me - no need for an askpass program.
It helps to be explicit in my experience.
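In the context of the server() block from the question, that would look something like this (the key paths and the passphrase are placeholders, not values I know):
<?php
// Hypothetical combination of the question's server() definition with an explicit identityFile().
server('production', '138.68.99.157')
    ->user('george')
    ->identityFile('~/.ssh/id_rsa.pub', '~/.ssh/id_rsa', 'pass phrase')
    ->set('deploy_path', '/var/www/test');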
As for your PHP-FPM restart task... I haven't seen that before. Shouldn't be needed. :)
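For completeness, if you do decide to keep the restart (it is commonly done so that PHP-FPM's opcache/realpath caches stop pointing at the old release after the symlink switch), the commented-out task from the question works once the deploy user has a passwordless sudo rule for that one command. A sketch based on the code in the question; the exact service name depends on your server:
<?php
// Requires a sudoers rule on the target server, e.g. in /etc/sudoers.d/george:
//   george ALL=NOPASSWD:/bin/systemctl restart php-fpm.service
desc('Restart PHP-FPM service');
task('php-fpm:restart', function () {
    // Without the NOPASSWD rule, this is what triggers "no tty present and no askpass program specified".
    run('sudo systemctl restart php-fpm.service');
});
after('deploy:symlink', 'php-fpm:restart');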
EDIT:
The fact that you provide a password is probably a good sign that you ought to refactor your Deployer code a bit if you keep it under source control.
I am loading site-specific data from a YAML file, which I am not committing to source control.
The first bit of my stage.yml:
# Site Configuration
# -------------
prod_1:
  host: hostname
  user: username
  identity_file:
    public_key: /home/user/.ssh/key.pub
    private_key: /home/user/.ssh/key
    password: "password"
  stage: production
  repository: https://github.com/user/repository.git
  deploy_path: /var/www
  app:
    debug: false
    stage: 'prod'
And then, in my deploy.php:
if (!file_exists(__DIR__ . '/deployer/stage/servers.yml')) {
    die('Please create "' . __DIR__ . '/deployer/stage/servers.yml" before continuing.' . "\n");
}
serverList(__DIR__ . '/deployer/stage/servers.yml');
set('repository', '{{repository}}');
set('default_stage', 'production');
Notice that when you use serverList, it replaces your server setup in deploy.php.

AWS Elastic Beanstalk Deployment Order

I'm deploying code to a single-instance web server AWS EB environment that will provision/update my connected RDS database. I've got an .ebextensions file that calls deployment code:
---
container_commands:
  01deploydb:
    command: /var/www/html/php/cli/deploy-db.php
    leader_only: true
On the same deployment, I dropped the deploy-db.php file back one directory into /cli/. On deployment, I get ERROR: [Instance: i-*****] Command failed on instance. Return code: 127 Output: /bin/sh: /var/www/html/php/cli/deploy-db.php: No such file or directory.
container_command 01deploydb in .ebextensions/01_db.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
If I deploy a version that does not include the command, then deploy a second update including the command, there is no error. However, adding the command and the file it calls at the same time produces the error. A similar sequence occurred earlier with a different command/file.
My question is: is there a documented order/sequence for how AWS updates the environment? I would have expected that my new version would have fully deployed (and the .php file installed) before container_commands are called.
The commands: section runs before the project files are put in place. This is where you can install server packages, for example.
The container_commands: section runs in a staging directory before the files are put in their final destination. Here you can modify your files if you need to. The current path is this staging directory, so you can run the script like this (I might get the app directory wrong; maybe it should be php/cli/deploy-db.php):
container_commands:
  01deploydb:
    command: cli/deploy-db.php
    leader_only: true
Reference for above: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
You can also run post-deploy scripts. This is not very well documented (at least it wasn't). You can do something like this (it won't be leader-only, though, but you could put a file in this directory through container_commands:):
files:
"/opt/elasticbeanstalk/hooks/appdeploy/post/99_deploy.sh":
mode: "000755"
owner: root
group: root
content: |
#!/usr/bin/env bash
/var/www/html/php/cli/deploy-db.php

Deploying existing local project to Bluemix BuildPack error

I hope someone can help me out with this issue I'm facing.
I've made a fully functional project on a local server and would now like to deploy it to Bluemix Cloud Foundry.
I've followed the tutorial: https://console.eu-gb.bluemix.net/docs/starters/upload_app.html
But when I try to push it through the terminal with the following commands:
cf push app_name -b https://github.com/cloudfoundry/php-buildpack.git -s cflinuxfs2
cf push app_name -b https://github.com/cloudfoundry/go-buildpack
cf push app_name -c start_command
cf push app_name -m 512m
none of them seems to work, since every single time I get the following error:
Staging failed: Buildpack compilation step failed
-----> Composer command failed
FAILED
Error restarting application: BuildpackCompileFailed
It is a PHP app built with PhpStorm, using Symfony and Doctrine, if that matters.
I am fairly new to all server/setup/deployment configurations as well as command line.
EDIT 1
I figured out this part thanks to this link: https://support.run.pivotal.io/entries/109600943-cf-push-ing-a-symfony-app-fails-with-Composer-command-failed-
It seems that by default the buildpack assumes that you want all of the files you push to be public. Because of this assumption, it takes all of your files and moves them into the doc root of either HTTPD or Nginx.
Create the file .bp-config/options.json in the root of your project. Then, inside options.json, add:
{
"WEBDIR": "web"
}
This will tell the buildpack that you have a specific directory to use for the doc root, so it will just use that instead of moving everything into the default doc root.
However...
This brings me to a new issue and returns the following error:
FAILED
Error restarting application: Start unsuccessful
If I check the recent log, the terminal gives me this:
2016-08-25T02:53:40.62+0200 [App/0] OUT Could not open input file: app.php
2016-08-25T02:53:40.62+0200 [App/0] ERR
2016-08-25T02:53:40.69+0200 [DEA/211] ERR Instance (index 0) failed to start accepting connections
2016-08-25T02:53:40.72+0200 [API/9] OUT App instance exited with guid abb206b3-b8ea-4269-b248-ec7b35f7098a payload: {"cc_partition"=>"default", "droplet"=>"abb206b3-b8ea-4269-b248-ec7b35f7098a", "version"=>"b6c3c871-5484-4f12-9d84-657cf6eacfbf", "instance"=>"c11566bdabe5458d9bfc4965c9c1aa85", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1472086420}
2016-08-25T02:53:40.72+0200 [API/3] OUT App instance exited with guid abb206b3-b8ea-4269-b248-ec7b35f7098a payload: {"cc_partition"=>"default", "droplet"=>"abb206b3-b8ea-4269-b248-ec7b35f7098a", "version"=>"b6c3c871-5484-4f12-9d84-657cf6eacfbf", "instance"=>"c11566bdabe5458d9bfc4965c9c1aa85", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1472086420}
2016-08-24T16:41:14.03+0200 [DEA/135] OUT Starting app instance (index 0) with guid abb206b3-b8ea-4269-b248-ec7b35f7098a
2016-08-24T16:41:26.26+0200 [App/0] ERR bash: start_command: command not found
2016-08-24T16:41:26.26+0200 [App/0] OUT
2016-08-24T16:41:26.35+0200 [DEA/135] ERR Instance (index 0) failed to start accepting connections
2016-08-24T16:41:26.38+0200 [API/6] OUT App instance exited with guid abb206b3-b8ea-4269-b248-ec7b35f7098a payload: {"cc_partition"=>"default", "droplet"=>"abb206b3-b8ea-4269-b248-ec7b35f7098a", "version"=>"5ebd6d77-68c4-4901-b9a8-b5cecfa4cddb", "instance"=>"7b5b555ae68645f4a2c09b73c0adbcb3", "index"=>0, "reason"=>"CRASHED", "exit_status"=>127, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1472049686}
EDIT 2 (updated error msg)

Why is Deployer rejecting my credentials?

I've created a new project to deploy my sources to Ubuntu. My workspace, generated by a Jenkins extraction, is on a web server.
I've installed Deployer on this web server to push the sources validated by Jenkins to another server.
I made a "deploy" directory in the project, which includes the recipe directory, the deploy.php, and the servers.yml.
I downloaded the recipe directory because I didn't understand what recipe/common.php is about: https://github.com/deployphp/deployer/blob/master/recipe/common.php
Here is my deploy.php:
<?php
require 'recipe/common.php';
serverList('config/servers.yml');
set('repository', 'git@xx.xx.xx.xx:/opt/git/intranetv2.git');
Here is my servers.yml:
production:
  host: xx.xx.xx.xx
  user: administrateur
  identity_file:
    public_key: "~/.ssh/id_rsa.pub"
    private_key: "~/.ssh/id_rsa"
    password: "aaaaa"
  stage: production
  deploy_path: "/var/www/intranet"
  branch: master
I don't understand why it rejects me when I do:
dep deploy:release production
It is unable to connect with the given credentials.
Thanks.
Does it work when you do it by hand? Do you have a passphrase on your keys?
