I am trying to deploy a Symfony2 PHP project on Ubuntu 15.10 with MagePHP, but it always asks me for the SSH user's password when executing:
sudo php vendor/andres-montanez/magallanes/bin/mage deploy to:staging
When checking the log I can see it stops at this command:
ssh -p 22 -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ssh-user@my-domain.com "sh -c \"mkdir -p /customers/489176_10999/websites/my_company/symfony/staging/releases/20160902094526\""
Executing this command by itself works fine (so the server accepts the ssh key), but from within the context of the deployment script it doesn't.
I am quite puzzled by this, since both commands are run from the same directory. Any ideas how I can make this work?
Try running the deploy with sudo.
Regards!
Since the project was located under /var/www, the ssh-agent had no access to the key files, which were stored under the user's home directory. Moving the entire project inside the user directory fixed the issue.
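A quick way to diagnose cases like this is to reproduce the exact ssh call non-interactively: with BatchMode=yes, ssh fails immediately instead of prompting for a password, so a key or agent problem shows up as an exit code. A minimal sketch (host and user taken from the question above; run it the same way the deploy runs, e.g. under sudo, to mirror its environment):

<?php
// Run the same ssh call the deployment makes, but with BatchMode=yes so
// ssh exits with an error instead of asking for a password.
$cmd = 'ssh -p 22 -o BatchMode=yes -o StrictHostKeyChecking=no '
     . 'ssh-user@my-domain.com "true" 2>&1';
exec($cmd, $output, $exitCode);
echo "exit: $exitCode\n" . implode("\n", $output) . "\n";
// exit 0: the key/agent is reachable from this context; 255: it is not.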
Hope someone can shed some light on this one, it's driving me nuts. I have a backup web app written in PHP; it builds an rsync command which is run against a remote server and stores the files in a local folder. This works using PHP's exec function.
We are migrating this into a Laravel 8 app, and the identical call to the same rsync and the same local storage directory fails. The error code is 255.
If I use the same rsync command from the command line it works, just as it does in our original app.
A bit more...
The Laravel instance is using Horizon to perform this, so I have the call in the handle() function of a job class in the Jobs folder.
Of note: I have a dev version of this Laravel app, and with that it works correctly and syncs with the remote folder.
My rsync command is using an id_rsa file for the Apache user, www-data (the app is not available to the outside world). The folders have the correct permissions and owners (www-data, 755 for directories and 644 for files).
An example (modified version) of the rsync command:
rsync -rLDvcs -e "ssh -p 22 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i /var/www/.ssh/id_rsa" --no-perms --no-group --no-owner --progress --exclude-from '/var/www/backup/exclude.txt' --delete --backup --backup-dir=/hdd/mnt/incs/deleted_files/the-project/ theproject@188.102.210.41:/home/theproject/files/ /hdd/mnt/incs/theproject/files
The code which calls this is:
$aResult['res'] = exec($rsync . ' 2>&1', $aResult['output'], $aResult['return']);
When run, $aResult['return'] has a value of 255 and the process has failed. The docs seem to say this is a failure to connect, yet the identical call in another app on the same machine works (as it does if rsync is used on the command line).
Any ideas? The PHP version on this box (for both the original app and the Laravel upgrade) is PHP 8.0, and my dev copy uses PHP 7.4. My dev box and this internal box both use Ubuntu 20.04 and Apache 2, so apart from the PHP version they are pretty much identical.
No error appears in any logs.
thanks
Craig
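One thing worth checking from inside the Horizon job itself is which user the worker actually runs as: a 255 from rsync is normally the underlying ssh failing, and if the Horizon worker runs under a different account than Apache it may not be able to read /var/www/.ssh/id_rsa. A minimal sketch of such a check (the log message is an assumption):

// Inside the job's handle(): log the effective user and HOME so the
// Horizon worker's environment can be compared with the working app's.
exec('whoami 2>&1', $who);
\Illuminate\Support\Facades\Log::info('rsync env check', [
    'user' => $who[0] ?? 'unknown',
    'home' => getenv('HOME'),
]);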
How can I run Envoy as root? I have a company server which has root access disabled, but I can sudo -s to it.
For example, when running git pull through Envoy I am getting:
[jenkins]: error: cannot open .git/FETCH_HEAD: Permission denied
I have tried adding sudo -s to it:
@task('deploy')
sudo -s
git pull
@endtask
But this only results in:
[jenkins]: sudo: no tty present and no askpass program specified
Is there a way to run Envoy as root?
Just log in to the server as root:
@servers(['web' => 'root@webserver.example.com'])
But logging in as root and running commands is not the most secure approach.
At the very least, disable password login for root after setting up SSH keys.
In a perfect world, you should have a user which can only run the commands needed for deployment.
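As a sketch of that last idea (the user name jenkins and the repository path are assumptions): give the deploy user a NOPASSWD rule scoped to the one command it needs, added via visudo, which also avoids the "no tty present" error:

# /etc/sudoers.d/deploy (edit with visudo): jenkins may run exactly this
# command as root, without a password prompt
jenkins ALL=(root) NOPASSWD: /usr/bin/git -C /var/www/app pull

The Envoy task then becomes:

@servers(['web' => 'jenkins@webserver.example.com'])

@task('deploy')
sudo git -C /var/www/app pull
@endtask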
What I am trying to do
I have a git repository on Bitbucket. After pushing to the repository from my local machine, I want the master branch to be pulled to my webspace automatically.
What I have done so far
I connected to my server using SSH, created an SSH key and registered the public key on Bitbucket.
I created a .sh script which pulls the master branch using SSH - so far so good - everything works when I run the script from the command line/PuTTY.
What is the problem
I want to trigger the .sh script with a webhook on Bitbucket (I can provide a URL). For that purpose I created a .php file in my webspace:
<?php
$output = shell_exec('./deploy.sh 2>&1');
echo $output;
my .sh script looks like this:
#!/bin/bash
git pull git@bitbucket.org:dualmeta/test.git master
As already said, running the .sh script with PuTTY works perfectly fine. However, if I open the URL of the .php file in my browser it gives me an error:
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I already did some research and found that many people have this exact problem. However, in my case I do not have root/sudo access, because it is a rented webspace and not my own vServer.
Is there any chance getting this to work?
You must give the www-data or apache user access to your git directory:
chown -R apache:apache git_directory
or
chown -R www-data:www-data git_directory
or
chmod o+rw -R git_directory
Use this too:
git config credential.helper store
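Note that the error in the question is about host key verification rather than file permissions: the web user has no known_hosts entry for bitbucket.org. Without root access, one workaround is to point git at a known_hosts file the web user can read (a sketch, assuming git >= 2.3 for GIT_SSH_COMMAND; the path is a placeholder):

#!/bin/bash
# Populate the file once: ssh-keyscan bitbucket.org >> /path/to/webspace/.ssh/known_hosts
# Then tell git's ssh to use that file, so the pull needs no interactive "yes".
export GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/path/to/webspace/.ssh/known_hosts"
git pull git@bitbucket.org:dualmeta/test.git master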
I've been at this for two days now and haven't been able to find any way (good or bad) of getting this to work.
I need to be able to dynamically mount drives over the network from my website's pages (that part is unavoidable).
I have no problem doing it directly in the console with the following command:
mount -t cifs //IP-REMOTE-MACHINE/Folder -o username=username,password=password /mnt/share
Obviously, just doing a shell_exec() of this command won't work without root rights.
I tried to shell_exec() a script in which I would switch to the root user (via su or sudo mycommand), but neither worked: I never managed to write a script that automatically switches my user to root, even with the root password hard-coded (and even though that feels like an extremely bad idea, I could have accepted it at this point).
After that I tried to use pmount, but I never found a way to access a remote shared folder (I don't think it's even possible, but I may have missed something here?).
All of this is running on a Debian machine with Apache 2.
I have a wild idea...
You could set up a cron job that runs as root and checks for mount commands from your script. The script would simply queue a mount command to be processed; when the cron job gets to it, it runs the mount, marks the command as processed, and writes to a log file which you could then display.
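A minimal sketch of that idea, with the queue as a spool directory (all paths and file names are assumptions; the worker would be run from root's crontab, e.g. once a minute):

<?php
// mount-worker.php: run by root's cron, e.g. * * * * * php /opt/mount-worker.php
// The web app queues a request by writing the share and mount point into a
// .req file, e.g. file_put_contents('/var/spool/mount-queue/' . uniqid() . '.req', ...)
$spool = '/var/spool/mount-queue';
foreach (glob("$spool/*.req") as $req) {
    $args = trim(file_get_contents($req)); // e.g. "//IP-REMOTE-MACHINE/Folder /mnt/share"
    // IMPORTANT: validate $args against a whitelist before running anything as root.
    $out = [];
    exec('mount -t cifs ' . $args . ' -o credentials=/root/.smbcred 2>&1', $out, $rc);
    file_put_contents('/var/log/mount-worker.log',
        date('c') . " $args rc=$rc " . implode(' ', $out) . "\n", FILE_APPEND);
    rename($req, $req . '.done'); // mark as processed
}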
It's not safe to run sudo commands as www-data (the web server user on Debian).
But if you want to run sudo [command] in a PHP script, you must add the www-data user to the sudoers file: http://www.pendrivelinux.com/how-to-add-a-user-to-the-sudoers-list/
And then you can exec: sudo mount ...
EDIT: It's safer to add, via visudo:
www-data ALL= NOPASSWD: /bin/mount
to allow www-data to use only sudo /bin/mount.
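With that sudoers rule in place, the PHP side can look like this (a sketch; the share, credentials and mount point are placeholders, and escapeshellarg() guards against injection if any part comes from user input):

<?php
// Assumes the visudo rule above: www-data ALL= NOPASSWD: /bin/mount
$share = escapeshellarg('//IP-REMOTE-MACHINE/Folder');
$mountPoint = escapeshellarg('/mnt/share');
$opts = escapeshellarg('username=username,password=password');
exec("sudo /bin/mount -t cifs $share -o $opts $mountPoint 2>&1", $output, $rc);
if ($rc !== 0) {
    // mount failed; surface its output for debugging
    error_log("mount failed ($rc): " . implode("\n", $output));
}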
I'm trying to mount an iSCSI virtual disk, but if I execute the command through PHP's exec function it gives me this error: mount: special device /dev/sdf1 does not exist.
But if I run the command directly in the console it runs fine!
What can I do?
I'm obtaining /dev/sdf1 in the proper way, and it exists, but it just doesn't work through PHP.
Thanks
I'm running the command with sudo, and I run it in the console as the www-data user, always with sudo, so I suppose the environment is the same.
sudo mount -t ext3 /dev/sdf1 /san_disks/RIBS_2
The sudoers file has these lines:
www-data ALL = (root) /usr/bin/iscsiadm, /bin/mount, /bin/umount
%www-data ALL=NOPASSWD: ALL
And it works in the console.
This was happening because /dev hadn't been updated yet. I added a sleep(1) before the mount and it works!
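Rather than a fixed sleep(1), a slightly more robust variant is to poll for the device node with a timeout (a sketch):

<?php
// Wait up to ~5 seconds for /dev/sdf1 to appear instead of hoping a
// fixed sleep(1) is always long enough.
$device = '/dev/sdf1';
for ($i = 0; $i < 50; $i++) {
    clearstatcache(true, $device); // bypass PHP's stat cache on each check
    if (file_exists($device)) {
        break;
    }
    usleep(100000); // 100 ms
}
if (file_exists($device)) {
    exec("sudo mount -t ext3 $device /san_disks/RIBS_2 2>&1", $out, $rc);
} else {
    error_log("$device never appeared");
}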