How can I run Envoy as root? I have a company server which has root access disabled, but I can sudo -s to it.
For example, when running git pull through Envoy I am getting:
[jenkins]: error: cannot open .git/FETCH_HEAD: Permission denied
I have tried adding sudo -s to it:
@task('deploy')
sudo -s
git pull
@endtask
But this only results in:
[jenkins]: sudo: no tty present and no askpass program specified
Is there a way to run Envoy as root?
Just log in to the server as root:
@servers(['web' => 'root@webserver.example.com'])
But logging in as root and running commands is not the most secure way.
At least disable password login for root after setting up ssh keys.
In a perfect world, you would have a dedicated user that can run only the commands needed for deployment.
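As a rough sketch of that idea (the deploy user name and the command path below are assumptions, not something from the question), you could give such a user passwordless sudo for just the deployment commands via visudo:
# /etc/sudoers.d/deploy -- hypothetical entry; adjust the user name and command paths
deploy ALL=(root) NOPASSWD: /usr/bin/git pull
With an entry like that, a task that calls sudo git pull no longer needs a password or a tty, which avoids the "no tty present" error above.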
Related
I have a website based on Symfony 3. For deployment I have my own bash script with routines like git pull, clearing the cache, and so on.
The server runs Debian GNU/Linux 9 (stretch).
But after deploying I see this exception in prod.log:
[2018-05-11 16:55:36] request.CRITICAL: Uncaught PHP Exception InvalidArgumentException: "The directory "/var/www/domain.com/var/cache/prod/annotations" is not writable." at /var/www/domain.com/vendor/doctrine/cache/lib/Doctrine/Common/Cache/FileCache.php line 92 {"exception":"[object] (InvalidArgumentException(code: 0): The directory \"/var/www/domain.com/var/cache/prod/annotations\" is not writable. at /var/www/domain.com/vendor/doctrine/cache/lib/Doctrine/Common/Cache/FileCache.php:92)"} []
The strange thing is that I run chmod -R 777 /var before and after clearing the cache, and after deploying, /var/cache/prod/annotations contains new files whose modification time matches the time I ran the cache clear (via the sh script).
Does anybody have any ideas? Where should I look, and what should I check?
This error occurs when your shell user writes into app/cache first, and then you open your website and the web server also needs to write to that folder.
Simple solution: add your shell user to the group your web server runs as (probably www-data) with sudo usermod -a -G www-data userName
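If you go this route, the project's var directory also has to be group-writable; a minimal sketch, assuming the project path from the log above and www-data as the web server group:
sudo chgrp -R www-data /var/www/domain.com/var    # path taken from the log above; adjust to your project
sudo chmod -R g+w /var/www/domain.com/var
sudo find /var/www/domain.com/var -type d -exec chmod g+s {} \;   # new files inherit the www-data group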
The problem is permissions. To fix it, use:
setfacl -R -m u:www-data:rX folderProject
setfacl -R -m u:www-data:rwX folderProject/var/cache folderProject/var/logs folderProject/var/sessions folderProject/app/sessions
setfacl -dR -m u:www-data:rwX folderProject/var/cache folderProject/var/logs folderProject/var/sessions folderProject/app/sessions
Replace folderProject with your project name.
If you are using Ubuntu with Apache, these commands fix the problem; if you are using another GNU/Linux distribution, change www-data to your web server's user.
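Applied to the project from the question (assuming the Symfony 3 var/ layout shown in the log, where sessions live under var/ rather than app/), that would look like:
setfacl -R -m u:www-data:rX /var/www/domain.com
setfacl -R -m u:www-data:rwX /var/www/domain.com/var/cache /var/www/domain.com/var/logs /var/www/domain.com/var/sessions
setfacl -dR -m u:www-data:rwX /var/www/domain.com/var/cache /var/www/domain.com/var/logs /var/www/domain.com/var/sessions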
I'm trying to create a webhook to auto-update my repository on an EC2 instance via a PHP exec() script. The SSH key needed for all git commands is owned by a sudoer.
So I need to run this after each commit:
exec('
cd ~/var/www/html/site/repo-test &&
sudo git pull origin master &&
sudo chown -R apache:apache ~/var/www/html/site/repo-test');
It is not working at all; I think that's because the webhook file is run as apache:apache. But I didn't set up any password for sudoers, so why can't the apache user execute sudo?
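Note that sudo refuses to run non-interactively for the apache user unless sudoers grants it explicitly; a hypothetical /etc/sudoers.d entry (the command path is an assumption) that would allow just the git pull without a password looks like:
apache ALL=(root) NOPASSWD: /usr/bin/git pull origin master
The chown call would need a similar entry of its own.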
I have a webhook for my application that currently does this:
cd /var/www/html; git pull origin master; /usr/local/bin/composer dump-autoload; php artisan migrate
I've been able to get all the commands above to work except the composer dump-autoload command.
When I log into the server as ec2-user and run sudo -u apache /usr/local/bin/composer dump-autoload, the command runs. But if I hit the endpoint that runs this command through a PHP script using shell_exec, this does not work.
Is there a way for me to get the apache user to run this command on its own?
This should be doable by modifying your sudoers file.
visudo
Add the line:
ec2-user ALL=(apache) NOPASSWD: /path/to/script.sh
Don't forget to check that the apache user has write privileges under the Laravel directory; composer will try to write to the vendor directory.
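With that entry in place, ec2-user can run the script as the apache user without a password prompt, for example:
sudo -u apache /path/to/script.sh
(/path/to/script.sh is just the placeholder from the sudoers line; point it at a script that wraps the composer dump-autoload call.)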
I am trying to deploy a Symfony2 PHP project on Ubuntu 15.10 with MagePHP, but it always asks me for the SSH users password when executing:
sudo php vendor/andres-montanez/magallanes/bin/mage deploy to:staging
When checking the log I can see it stops at this command:
ssh -p 22 -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ssh-user@my-domain.com "sh -c \"mkdir -p /customers/489176_10999/websites/my_company/symfony/staging/releases/20160902094526\""
Executing this command by itself works fine (so the server accepts the ssh key), but from within the context of the deployment script it doesn't.
I am quite puzzled by this, since both commands are run from the same directory. Any ideas how I can make this work?
Try running the deploy with sudo.
Since the project was located under /var/www, the ssh-agent had no access to the key files, which were stored under the user's home directory. Moving the entire project into the user directory fixed this issue.
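If you run into something similar, one way to see which identity (if any) a non-interactive session can actually reach is to re-run the same ssh command verbosely; a sketch using the host from the log above:
ssh -v -p 22 ssh-user@my-domain.com "echo key ok"
The -v output shows which key files ssh tries and whether the agent offered any of them.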
I have my Laravel project all working on my localhost.
I deployed it to EC2, but nothing comes up.
All I see in the dev console is internal error (500).
What am I missing? What do I need to change before deploying to AWS? Here is the URL:
http://ec2-52-88-99-75.us-west-2.compute.amazonaws.com
Here is the httpd.conf file:
http://collabedit.com/sdcxm
After installing Laravel, you may need to configure some permissions.
Directories within the storage and the bootstrap/cache directories
should be writable by your web server. - http://laravel.com/docs/master#configuration
Laravel "Storage" folder and the "bootstrap/cache" folder needs access from both the command line user (the one that runs composer update etc) and the default web server user (www-data) in case you are using ubuntu on your EC2 instance.
The following three commands will ensure that both of them have the rights to do that. Run them at the root of your project
HTTPDUSER=`ps aux | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d\ -f1`
sudo setfacl -R -m u:"$HTTPDUSER":rwX -m u:`whoami`:rwX storage bootstrap/cache
sudo setfacl -dR -m u:"$HTTPDUSER":rwX -m u:`whoami`:rwX storage bootstrap/cache
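To verify that the ACLs were applied, you can inspect them afterwards:
getfacl storage bootstrap/cache
Both the web server user and your own user should show up with rwx entries.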
This should start displaying specific errors that you can debug. Also make sure that you have the debug option set to true in app.php:
yourproject/config/app.php
'debug' => true,
Also make sure you have a default .env file that specifies the environment at the project root.
yourproject/.env
//should have this
APP_ENV = dev
Also, if you are using sessions etc., make sure you have generated an application key, so that the following line in yourproject/config/app.php does not fall back to the placeholder:
'key' => env('APP_KEY', 'SomeRandomString'),
Generate the key with:
php artisan key:generate
One common pitfall for new Amazon EC2 instances is assigning a security group
to your instance that doesn't allow ports 80 and 443 as inbound.
Please check your EC2 instance's security group and allow those ports if they aren't already allowed.
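If you prefer checking from the command line rather than the AWS console, the AWS CLI can list the inbound rules (assuming the CLI is configured; sg-xxxxxxxx is a placeholder for your instance's security group id):
aws ec2 describe-security-groups --group-ids sg-xxxxxxxx --query 'SecurityGroups[0].IpPermissions'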
This worked for me:
[root@example-laravel-server ~]# chown -R apache:apache /var/www/laravel/laravel/storage