Hope someone can shed some light on this one, it's driving me nuts. I have a backup web app written in PHP; it builds an rsync command that pulls files from a remote server and stores them in a local folder. This works using PHP's exec() function.
We are migrating this into a Laravel 8 app, and the identical call to the same rsync command, writing to the same local storage directory, fails with a return code of 255.
If I run the same rsync command from the command line it works, just as it does in our original app.
A bit more...
The Laravel instance uses Horizon to run this, so the call sits in the handle() function of a job class in the Jobs folder.
Of note: I have a dev version of this Laravel app, and there the same code works correctly and syncs with the remote folder.
My rsync command uses an id_rsa file for the Apache user, www-data (the app is not available to the outside world). The folders have the correct owners and permissions (www-data, 755 for directories and 644 for files).
An example (modified) version of the rsync command:
rsync -rLDvcs -e "ssh -p 22 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i /var/www/.ssh/id_rsa" --no-perms --no-group --no-owner --progress --exclude-from '/var/www/backup/exclude.txt' --delete --backup --backup-dir=/hdd/mnt/incs/deleted_files/the-project/ theproject@188.102.210.41:/home/theproject/files/ /hdd/mnt/incs/theproject/files
The code which calls this is:
$aResult['res'] = exec($rsync . ' 2>&1', $aResult['output'], $aResult['return']);
When run, $aResult['return'] has a value of 255 and the process has failed. The docs seem to say this indicates a failure to connect, yet the identical call in another app on the same machine works (as it does when rsync is run from the command line).
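Since the command already merges stderr into the captured output via 2>&1, logging that output from the job should surface the underlying ssh error; an exit code of 255 typically comes from ssh rather than rsync itself. A sketch, assuming Laravel's stock Log facade:
use Illuminate\Support\Facades\Log;

$aResult['res'] = exec($rsync . ' 2>&1', $aResult['output'], $aResult['return']);
if ($aResult['return'] !== 0) {
    // With 2>&1 in place, ssh's own error message (missing key,
    // unreadable known_hosts for the daemon's user, etc.) lands here.
    Log::error('rsync failed', ['code' => $aResult['return'], 'output' => $aResult['output']]);
}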
Any ideas? The PHP version on this box (for both the original app and the Laravel upgrade) is PHP 8.0; my dev copy uses PHP 7.4. My dev box and this internal box both run Ubuntu 20.04 and Apache 2, so apart from the PHP version they are pretty much identical.
No error appears in any logs.
thanks
Craig
I have a PHP application that runs in Docker. I would like to run a command from the PHP code, inside the container, that should create files (some_dir/certs/cert.crt etc.). I run this command like so (via system/exec/shell_exec or symfony/process):
system("traefik-certs-dumper file --source acme.json --dest some_dir --version v2");
When PHP runs this code the directory gets created, but not the files, and I don't get any error.
The command works when I run it from the terminal via docker exec, but not from PHP. This is probably some permission problem between PHP and the Docker container, but I don't know how to fix it.
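One quick check that can narrow this down is printing the identity and PATH the PHP process actually sees, then comparing with the same commands run under docker exec (a throwaway sketch):
// Run from the same PHP entry point that calls traefik-certs-dumper,
// then compare with `id` and `echo $PATH` inside `docker exec`.
var_dump(shell_exec('id; echo $PATH'));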
I tried setting this in the Dockerfile, but it doesn't work:
RUN chmod 777 /go/bin/traefik-certs-dumper
RUN usermod -u 1000 www-data
Also, standard commands like these work:
system("mkdir -p some_dir_1234");
system("touch some_dir_1234/some_file_1234");
How can I allow an installed library to create files?
I was finally able to find a solution. In a separate container I had a process supervisor running, which saw this file because it was also mounted into the main application directory. What had to be done was to mount the file into both the main container and the supervisor container.
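In other words, both containers need the same bind mount. Illustratively (image names and paths here are made up):
# The same host file mounted into both containers:
docker run -v /host/path/acme.json:/app/acme.json my-app
docker run -v /host/path/acme.json:/app/acme.json my-supervisor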
Try
exec("traefik-certs-dumper file --source acme.json --dest some_dir --version v2");
instead of system().
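Either way, merging stderr into the captured output makes failures visible instead of silent, e.g.:
exec("traefik-certs-dumper file --source acme.json --dest some_dir --version v2 2>&1", $output, $code);
// $code holds the command's exit status; $output collects anything it
// printed, including errors that system() would let scroll past unseen.
var_dump($code, $output);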
I am trying to deploy a Symfony2 PHP project on Ubuntu 15.10 with MagePHP, but it always asks me for the SSH user's password when executing:
sudo php vendor/andres-montanez/magallanes/bin/mage deploy to:staging
When checking the log I can see it stops at this command:
ssh -p 22 -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ssh-user@my-domain.com "sh -c \"mkdir -p /customers/489176_10999/websites/my_company/symfony/staging/releases/20160902094526\""
Executing this command by itself works fine (so the server accepts the SSH key), but from within the context of the deployment script it doesn't.
I am quite puzzled by this, since both commands are run from the same directory. Any ideas on how I can make this work?
Try running the deploy with sudo.
Regards!
Since the project was located under /var/www, the ssh-agent had no access to the key files, which were stored under the user's home directory. Moving the entire project inside the user directory fixed the issue.
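An alternative that would likely also work (untested here) is leaving the project in place and pointing ssh at the key explicitly, e.g. with an ~/.ssh/config entry for the user running the deploy (host and path below are placeholders):
Host my-domain.com
    User ssh-user
    IdentityFile /home/your-user/.ssh/id_rsa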
I've been at this for two days now and haven't been able to find any way (good or bad) of making this work.
I need to be able to dynamically mount drives over the network from my website's pages (that part is unavoidable).
I have no problem doing it directly in the console with the following command:
mount -t cifs //IP-REMOTE-MACHINE/Folder -o username=username,password=password /mnt/share
Obviously, just doing a shell_exec() of this command won't work without root rights.
I tried to shell_exec() a script that switches to the root user (via su or sudo mycommand), but neither worked; I've never been able to write a script that automatically switches my user to root, even with the root password hard-coded (and even though that feels like an extremely bad idea, I could have accepted it for now).
After that I tried to use pmount, but never found a way to access a remote share (I don't think it's even possible, but I may have missed something here?).
All of this is running on a Debian machine with Apache 2.
I have a wild idea...
You could set up a cron job that runs as root and checks for mount commands queued by your script. The web script would simply record a mount command to be processed; when the cron job gets to it, it runs the mount, marks the command as processed, and writes to a log file which you could then display.
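A minimal sketch of that idea, assuming a PHP script run from root's crontab and a queue file written by the web app (every name and path below is hypothetical):
<?php
// /usr/local/bin/process-mounts.php, run from root's crontab:
// * * * * * /usr/bin/php /usr/local/bin/process-mounts.php

$queue = '/var/spool/mounts/pending'; // one share name per line, written by the web app
$log   = '/var/log/mount-queue.log';

// Whitelist of shares the web app may request, so the root side
// never executes arbitrary input from the queue.
$allowed = array(
    'share1' => array('//IP-REMOTE-MACHINE/Folder', '/mnt/share'),
);

if (!file_exists($queue)) {
    exit(0); // nothing to process
}

foreach (file($queue, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $name) {
    if (!isset($allowed[$name])) {
        file_put_contents($log, date('c') . " rejected $name\n", FILE_APPEND);
        continue;
    }
    list($src, $dst) = $allowed[$name];
    $out = array();
    // Credentials live in a root-only file rather than in the queue.
    exec('mount -t cifs ' . escapeshellarg($src) . ' ' . escapeshellarg($dst) .
         ' -o credentials=/root/.smbcredentials 2>&1', $out, $rc);
    file_put_contents($log, date('c') . " $name exit=$rc " . implode(' | ', $out) . "\n", FILE_APPEND);
}

unlink($queue); // mark the batch as processed
Whitelisting share names instead of queuing raw commands is what keeps the root cron from becoming a shell-injection vector.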
It's not safe to run sudo commands with www-data (the user for web servers in Debian).
But if you want to run sudo [command] in a PHP script, you must add the user www-data to sudoers: http://www.pendrivelinux.com/how-to-add-a-user-to-the-sudoers-list/
And then you can exec: sudo mount ...
EDIT: It's safer to add in visudo:
www-data ALL= NOPASSWD: /bin/mount
This allows www-data to use only sudo /bin/mount.
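With that sudoers entry in place, the PHP side can stay a plain exec (a sketch; the share details are placeholders):
// No password prompt, thanks to the NOPASSWD entry above.
$cmd = 'sudo /bin/mount -t cifs //IP-REMOTE-MACHINE/Folder /mnt/share'
     . ' -o ' . escapeshellarg('username=username,password=password');
exec($cmd . ' 2>&1', $output, $return);
if ($return !== 0) {
    error_log('mount failed: ' . implode("\n", $output));
}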
I have a Jenkins build job for my Symfony2 project that uses Grunt to start the PHP built-in web server so that CasperJS can run functional tests against it.
To start my webserver I'm using the following command:
php app/console server:start --router=" + __dirname + "/app/config/router_test.php --env=test 0.0.0.0:9001"
However the build fails with the following message:
A process is already listening on http://0.0.0.0:9001.
So I SSHed to the Jenkins box and ran:
netstat -tln | grep 9001
Only to get no results?!
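(For what it's worth, these variants also show which process owns the socket, assuming the tools are installed:)
sudo netstat -tlnp | grep 9001   # -p adds the PID/program name
sudo lsof -i :9001               # lists any process holding the port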
I have restarted the server, killed all PHP processes, and disabled iptables, but none of this seems to work.
This build used to work, and in the last change all that was added were more functional tests.
Has anyone got any ideas why this could be happening?
As commented, the fix that worked for me was to change the workspace directory. It seems to have been a permissions issue with the workspace folder that Jenkins created, yet a chmod 777 didn't resolve it, hence the new workspace folder.
I have been struggling to get my CakePHP site working on a GoDaddy "grid hosting" account. My Cake app is hosted from a subdirectory on the account and can be accessed via a subdomain. I had to adjust my .htaccess files to get this working, and now I need to get the CakePHP console working in this environment.
I have the same Cake application set up on an Ubuntu server hosted on Amazon's EC2 service, basically a plain out-of-the-box Ubuntu LAMP setup. The CakePHP console works as expected in this environment.
When I try to run the console on Godaddy I get the following message:
CakePHP Console: This file has been loaded incorrectly and cannot continue. Please make sure that /cake/console is in your system path, and check the manual for the correct usage of this command. (http://manual.cakephp.org/)
I've started adding some debugging code to cake/console/cake.php to find out what's going on. On the GoDaddy site, when I echo print_r($this->args) at line 183, I find the array is empty. When I do this on my Ubuntu EC2 instance I get this:
Array
(
[0] => /var/www/www.directory.sdcweb.org/htdocs/cake/console/cake.php
)
It looks like GoDaddy's command-line PHP isn't passing the shell's command-line arguments through. Does anybody have advice on how I might get the CakePHP console working on GoDaddy?
The bash script which invokes the Cake shell contains the following:
LIB=${0/%cake/}
APP=`pwd`
exec php -q ${LIB}cake.php -working "${APP}" "$@"
exit;
I am thinking that modifying this script may solve the problem.
In the Cake shell script (cake/console/cake), change
exec php -q ${LIB}cake.php -working "${APP}" "$@"
to
exec php -q -d register_argc_argv=1 ${LIB}cake.php -working "${APP}" "$@"
After this I found out that calling php like this happened to run the PHP 4 CLI. To fix that, here is the final version of the line that I am using to invoke PHP 5 on my shared GoDaddy hosting:
exec /web/cgi-bin/php5 -q -d register_argc_argv=1 ${LIB}cake.php -working "${APP}" "$@"
If you set up a PHP-based cron job through their hosting control panel, you will find that the php command invoked actually points to this php5 executable.
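A quick way to verify that arguments now come through is a throwaway test script (the file name here is hypothetical):
<?php
// test_args.php
// Run as: /web/cgi-bin/php5 -q -d register_argc_argv=1 test_args.php foo bar
var_dump($_SERVER['argv']); // expect the script name plus "foo" and "bar"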
"Please make sure that /cake/console is in your system path."
This is grid hosting, so I'm assuming you have a .bashrc file you can edit. First you need to know the absolute path to your cake sub-directory, then use vim or nano to edit your .bashrc and add:
PATH=$PATH:/absolute/path/to/cake/console
Then log out and log back in, and you should be able to type cake bake from anywhere, which should fix the error you're getting (run it from your app directory so it can find your database.php).
Failing a .bashrc file, you can export the variable temporarily, but you will have to retype it every time you log in.
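That temporary version looks like this:
export PATH=$PATH:/absolute/path/to/cake/console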
I don't think editing anything inside lib/cake is a good idea, since it will be gone with your first Cake update.
Instead, I changed the register_argc_argv setting in php.ini by adding the line:
register_argc_argv=On
Everything seems to work for me now.
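If in doubt which php.ini the command line actually reads, this prints it (the --ini flag exists from PHP 5.2.3 onward):
php --ini   # look for "Loaded Configuration File"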