I have a GitHub account set up on my EC2 server with no issues. When I try to run a bash script to 'git pull', it won't do it. It will do a 'git status' and many other commands. Here is my sh file:
cd /var/www/html/TDS/;
ls -la;
type git;
git status;
git remote -v;
git pull origin master;
echo "hello world";
All lines work except the git pull. I have tried 'git pull', 'git pull origin master', 'git fetch', and 'git fetch origin master'. I have ruled out possibilities like permission issues and privileges.
This sh file is executed by hitting a PHP page; the PHP page looks like this:
<?php
$output = shell_exec('/bin/sh /var/www/html/TDS/git.sh');
print_r("<pre>$output</pre>");
?>
Very simple, and it works minus the pull. Any help would be amazing; I'm so close to getting this to work.
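Before digging into credentials, it is worth making the failure visible at all: shell_exec() returns only stdout, while git writes its errors to stderr, so a failing pull can look like it produced no output. A small sketch of the difference; the 'fails' function here is a stand-in for a failing 'git pull', not a real command:

```shell
# Command substitution (like PHP's shell_exec) captures only stdout,
# and git reports failures on stderr.
fails() { echo "fatal: could not read from remote" >&2; return 1; }

without=$(fails) || true     # stderr goes to the terminal, not the variable
with=$(fails 2>&1) || true   # stderr is captured alongside stdout

echo "without=[$without]"
echo "with=[$with]"
```

In the PHP page the equivalent fix is appending 2>&1 inside the command string, e.g. shell_exec('/bin/sh /var/www/html/TDS/git.sh 2>&1'), or adding the redirect to the git pull line in the script, so the actual error message reaches the browser.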
For a git pull to work, the user running it must have write permissions to the git repo's index (under .git/). Make sure the user under which the script is run (Apache?) has those rights.
...does PHP (www-data) have permissions? Is it the owner of the file?
Is this an ssh URL to the origin repository? Do you have ssh-agent running when you do it manually? Have you provided ssh agent access to the shell script (hint, the answers are Yes, Yes, No. Probably.)
So we have determined that ssh access is the problem. You then have two choices: getting ssh-agent credentials into the PHP process, or giving the PHP script its own ssh credentials that don't require a password. Both are problematic one way or another.
To get ssh-agent credentials into the PHP process, copy the $SSH_AUTH_SOCK environment variable from a shell into your PHP/shell script: SSH_AUTH_SOCK=/tmp/ssh-wScioBA10361/agent.10361 git pull. Then, assuming the PHP script has sufficient privileges to access that socket, git pull will work. This is problematic because you need to ssh into the system to get the auth sock, change the program to use the new socket (or write a program to find the current socket), and leave everything running. Log out, reboot, etc. and you will lose git pull functionality.
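The mechanism behind this, sketched below: an environment variable prefixed to a single command applies to that command only, which is how the agent socket gets handed to git's ssh transport. The socket path and the 'show_sock' helper are illustrative, not real values from any system:

```shell
# Real usage would look like this (socket path copied from a login
# shell's `echo $SSH_AUTH_SOCK`):
#
#   SSH_AUTH_SOCK=/tmp/ssh-wScioBA10361/agent.10361 git pull origin master
#
# Demo of the per-command environment mechanism, no agent required:
show_sock() { echo "agent socket: ${SSH_AUTH_SOCK:-none}"; }
during=$(SSH_AUTH_SOCK=/tmp/example.sock show_sock)
echo "$during"
```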
The other option is to create ssh credentials for the PHP/shell user who is running git pull. Find that user's home directory, create .ssh, and ssh-keygen new keys for that user. You can create the private key without a passphrase, so that anyone who can access this file (security risk!) can ssh using those credentials. Add the public key to the authorized keys of the account that has access to the git repo (gitolite would let you restrict what privileges that account has).
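A sketch of those steps. A temporary directory stands in for the web user's home directory (on a real system, find it with 'getent passwd www-data'), and the empty passphrase is exactly the security trade-off described above:

```shell
home=$(mktemp -d)            # stand-in for the web user's home directory
mkdir -p "$home/.ssh"
chmod 700 "$home/.ssh"       # ssh refuses keys in group/world-readable dirs

# -N "" means no passphrase: anyone who can read the private key can use it.
ssh-keygen -q -t ed25519 -N "" -f "$home/.ssh/id_ed25519" -C "www-data deploy key"

ls "$home/.ssh"
```

The contents of id_ed25519.pub then go into the authorized keys of the account that is allowed to access the repo on the git host.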
Related
I am desperately trying to mount a CIFS share on a Debian 10 box through a web user interface and get it accessible for the whole system. The mount command is executed successfully but the mount point is not listed in /etc/mtab or /proc/mounts and therefore also not shown by the mount command.
I am using apache2 as a webserver and I tried different approaches all with the same result.
The goal is to use a PHP script with Apache or Nginx that mounts a share that is valid and visible for the whole OS, just as if I had used the mount command on the command line.
I have tried different ways with a mount.php that calls a bash-script to mount the share:
added www-data to sudoers without password and call the script containing "sudo mount ..."
used a c-compiled wrapper that is executed as root which calls a bash-script that mounts the share
installed php-fpm with a root-enabled socket to call the bash script
let the bash-script add the share into /etc/fstab and execute mount -a
All these approaches work as they should if called from the command line, even when called as www-data user (where possible).
They all also seem to mount the share when called through the web interface: if I use the same techniques to run mount without any parameters from a PHP script on the website, the mount is listed as it should be. A second attempt to mount the share through the web interface also gives the message that the device is busy.
But when I use the mount command without any parameters on the command line the mountpoint is not listed nor do I find it in /etc/mtab or /proc/mounts.
In the last approach, where I let the script edit the /etc/fstab and call a 'mount -a' the behaviour is exactly the same (listed in web interface but not on command line), but when I reboot the share is mounted as expected and visible.
So I am very sure that I am overlooking some kind of userspace/sandbox/terminal restriction that apache2 runs in which has some effect on the mount command. Which is strange, because I can even edit /etc/fstab from the scripts and seem to have root access to everything, even to mount, otherwise it would not start at all. But the mount command seems to record its mount results somewhere else when invoked through the web interface.
Does anybody have an idea that points me in the right direction?
Thanks in advance,
Axel
Apache2 has a property "PrivateTmp" which is set in /etc/systemd/system/multi-user.target.wants/apache2.service. Try commenting it out by putting a hash (#) in front of the line.
PrivateTmp gives the service its own private /tmp and, to implement that, its own mount namespace, so mounts made from inside Apache stay invisible to the rest of the system and do not appear in the mount list.
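Rather than editing the unit file in place (a package update will overwrite it), the usual way to change this is a systemd drop-in override. A sketch, assuming the Debian-style unit name apache2 (on RHEL-family systems the unit is httpd):

```
# sudo systemctl edit apache2    -- opens a drop-in override file; add:
[Service]
PrivateTmp=false

# then apply it:
#   sudo systemctl daemon-reload
#   sudo systemctl restart apache2
```

Note that turning PrivateTmp off removes a deliberate hardening feature; the whole service then shares the system /tmp again.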
If you have trouble with permissions and don't want to dig too deep into that, I recommend using a simple write-to-file function in PHP; then, with a cron job, execute a script that, if it finds that file, deletes it and executes the action you desire. That cron job should not have any permission issues anywhere on the machine.
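A minimal sketch of that flag-file pattern; the paths, the watcher script name in the cron comment, and the echoed action are all placeholders for the real privileged command:

```shell
# PHP side would simply do:  touch('/var/spool/webtasks/pull.flag')
# Root's crontab would run the watcher every minute, e.g.:
#   * * * * * /usr/local/bin/watch-flag.sh
flag=$(mktemp -u)              # stand-in path for the flag file

run_if_flagged() {
    if [ -f "$flag" ]; then
        rm -f "$flag"          # remove first so a slow action is not re-triggered
        echo "running privileged action"   # here: git pull, mount, restart, ...
    fi
}

touch "$flag"                  # simulate a web request having been made
result=$(run_if_flagged)
echo "$result"
```

Because cron runs the watcher as root on its own schedule, the web process never needs elevated rights at all; the cost is up to a minute of latency per request.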
I am not able to pull a file remotely from a git server through PHP. I wrote the following batch code and am trying to execute it via PHP.
The batch file git1.bat is as follows:
cd C:\repos\rep2 && git pull origin master 2>&1
the php code:
<?php
echo shell_exec("C:\\xampp\\htdocs\\AS-otg\\git1.bat");
?>
the output I get:
However, I get the required result when I do the same directly from cmd.
I tried some other git commands like log etc. which work just fine.
I need to do this via php only... please help.
log is a local command that does not need to talk to a remote host; pull does a fetch first. It seems you are running the PHP script under a different user than when you run the script manually. When you run it manually you authenticate to the remote server with your SSH key; when the PHP script runs it, the effective user does not have that SSH key to authenticate with.
Btw, you should keep in mind that a pull is not well suited to being run non-interactively. When doing a pull you can easily get conflicts if the incoming changes are not a fast-forward.
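A sketch of a pull that is safer to run unattended: fetch, then merge only if it is a fast-forward, so a diverged history makes the script fail loudly instead of leaving a half-merged working tree. The throwaway repositories exist only to make the example self-contained; on Windows the two deployment-side git lines work the same inside the .bat file:

```shell
set -e
work=$(mktemp -d)

# Build a throwaway origin and a clone of it, then advance the origin,
# so there is something to pull.
git init -q "$work/origin"
git -C "$work/origin" -c user.email=a@b -c user.name=t commit -q --allow-empty -m one
git clone -q "$work/origin" "$work/clone"
git -C "$work/origin" -c user.email=a@b -c user.name=t commit -q --allow-empty -m two

# The part a deployment script would actually run:
git -C "$work/clone" fetch -q origin
git -C "$work/clone" merge --ff-only -q FETCH_HEAD   # exits non-zero if not a fast-forward

count=$(git -C "$work/clone" rev-list --count HEAD)
echo "$count"
```

With --ff-only a non-fast-forward situation never touches the working tree; the script can detect the non-zero exit and alert a human instead of leaving conflict markers behind.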
My company has a cloud dedicated server, hosted in Google Cloud, running CentOS 7 with apache2, PHP 5.5, and MariaDB.
The web server runs a private application for business clients and companies. Every client has their own database and subdomain, so they and their own clients can access the application at http://theirname.example.net/
I've created an interactive command-line script for client creation. It creates a user and secure password, an FTP custom folder, creates a MySQL database and populates it with a SQL file, creates subdomains, and other things. I've also made a version of this script with no user interaction, receiving parameters such as the client name via command-line arguments (/path-to/script.sh usertocreate mysqluser mysqlpass).
So here is the deal: I want to create a web interface, password protected and available only to my company's IP address, that is able to run these sh scripts (not the interactive ones) with sudo permissions.
I was thinking of creating a sub-server on another port (like http://example.org:2501) using another instance of Apache (or another web server) which runs as a specified user with sudo permissions enabled only in the necessary folders.
Before doing anything, I created a PHP script which runs console commands and tried to run sudo commands with it, adding the apache user to the sudoers list (just to make it work during development). I could run these scripts from my web app without sudo permissions, but they were not working 100% (since some commands require sudo). When I try with sudo I receive a code 127 error response (permission problem).
I stopped there and decided to investigate the best way to do this.
I have full control of the server machine. Apache2 and everything normal is running well. (mariadb, proftpd with passive mode active -100 ports added- )
SELinux is disabled. Firewalld is running.
SSH is available for use
I can install another web server on another port to accomplish this. If a lot of HTTPD configuration would have to change to accomplish this, I would prefer to install another web server.
I can also install any 3rd party software.
I'm a PHP developer with little experience in other programming languages; if it's necessary to involve another programming language to do this (maybe Python), I would love some documentation links.
Access to this web application would be limited only to my company's static ip address and will be protected
Any thoughts/ideas ? Thanks in advance
PS: If someone wants to edit my text and add some colours and formatting, the edit will be approved.
EDIT TL;DR: I want to run another httpd on another port with a web API. This web API should be able to run console commands as sudo. Access to this web server will be limited to my company's IP. I'm not sure this is the best way to do it and I want opinions. Also, I'm not sure whether it's possible to run sudo commands from PHP without any trouble.
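On the sudo part specifically: a common alternative to putting the apache user broadly in sudoers is whitelisting the exact non-interactive script. A sketch, where both the user name (apache is the httpd user on CentOS) and the script path are assumptions to adapt:

```
# /etc/sudoers.d/client-provisioning
# (edit with: visudo -f /etc/sudoers.d/client-provisioning)
apache ALL = (root) NOPASSWD: /usr/local/bin/create-client.sh
```

PHP would then call the absolute path, e.g. shell_exec('sudo /usr/local/bin/create-client.sh usertocreate mysqluser mysqlpass 2>&1'). Note also that exit code 127 generally means "command not found" rather than a permission problem; since sudo resets PATH to its secure_path, using absolute paths in the command is important.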
I'm working on a VPN signup site, which is written in PHP and runs on the same Ubuntu server that the VPN server runs on. It allows users to sign up for VPN services, but currently it just emails the support staff their information, and they manually edit the config files on the server. I'm using PPP to handle authentication, so I have a file containing information like below:
# user server password ip
test l2tpd testpassword *
In order for a new user to be added to the VPN service, their details must be appended to the above table, and the command
sudo /etc/init.d/xl2tpd restart
run in order to apply the new changes. What I am looking to do is automate the process. From what I can tell, there are two options. One is to create a shell script, and then use shell_exec('./adduser test testpassword');. The other is to do it directly in PHP, by opening the file, modifying it and saving it again.
From a security and speed point of view, which approach is better, or is there another one which I haven't thought of?
sudo can be configured to execute just a specific command for a specific user, so modifying your sudoers file can mean you can use sudo in a more secure way to execute specific commands.
You could combine this with a wrapper script so that php was only executing a localised script with limited rights.
So your wrapper script, let's call it 'restart_auth.sh', may contain:
#!/bin/sh
sudo /etc/init.d/xl2tpd restart
You would then shell_exec('restart_auth.sh') from php to run that script.
You would edit your sudoers file to allow the user that the script runs as (your PHP user) to run /etc/init.d/xl2tpd as root. So if your PHP user is www-data, edit sudoers (using visudo) to contain:
www-data ALL = (root) NOPASSWD: /etc/init.d/xl2tpd
Provided no tainted data - that is unvalidated information that may contain shell escape characters - is passed through to a shell exec command then it is secure.
As someone else suggested it may be better to write the data to a pending list then read from that, rather than passing it on a shell_exec() line. However that can still introduce insecurities, so making sure the values you are writing to the file are untainted is the most important thing.
Also, never run that full script as root, even as a cron job; instead use the same sudoers approach to permit the running script to execute only specific commands as root. Note that allowing sudo "cat changes.txt >> auth_file" won't do what you want as written: the '>>' redirection is performed by the calling, unprivileged shell before sudo ever runs, so put the append inside a whitelisted script or use 'sudo tee -a auth_file < changes.txt'.
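The redirection pitfall, sketched: the shell of the calling user opens the '>>' target before sudo starts, so the append happens with unprivileged rights and fails on a root-owned file; tee -a opens the target itself, inside the elevated process. The file names below are temporary stand-ins for changes.txt and the PPP auth file:

```shell
pending=$(mktemp)      # stand-in for changes.txt
target=$(mktemp)       # stand-in for the root-owned auth file

echo "test l2tpd testpassword *" > "$pending"

# Real usage would be:  sudo tee -a /etc/ppp/chap-secrets < changes.txt
tee -a "$target" < "$pending" > /dev/null

cat "$target"
```

With sudoers restricted to '/usr/bin/tee -a /etc/ppp/chap-secrets' (and nothing broader), the PHP user can append lines to exactly one file and do nothing else as root.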
I'm currently trying to write an internal application to be able to deploy our projects to acceptance and production servers with a single click.
We are using phing to accomplish this.
At the moment I'm having difficulty checking out (or doing an svn export) the project. I use the following command:
<exec command="svn checkout ${svn.host} ${svn.exportdir} --force --username server --password <password>" />
On a normal command line this works perfectly; however, I get prompted to accept a certificate because the host uses https. The problem is there seems to be no parameter to automatically accept a certificate.
The --trust-server-cert option doesn't help either, because the certificate is rejected due to a hostname mismatch, whereas that option only bypasses a "CA is unknown" error.
Any ideas on how I can check out (or export, update, ...) the project?
Do a wget on the svn server's HTTPS address and accept the certificate permanently.
$ wget https://svn.mydomain.com/repos
And then press p to accept the cert.
I also added some hints to the PHP documentation about the problems with certificates:
Simply call
svn checkout https://svn.mydomain.com/repos --force --username server --password iMPs+nana0kIF
on your command line and accept the cert.
There could still be a problem when the user that executes the Phing command is not root; in that case you have to execute these commands as the user that runs the Phing command:
su wwwrun -c 'wget https://...'
su wwwrun -c 'svn checkout https://...'
Just do one manual checkout as the user that will be running phing. You can checkout to /dev/null if you want to. Once you have accepted the certificate, it will stay accepted (if that user has a .subversion directory to store it).
By the way, is there any specific reason why you are using the svn command-line interface through an ExecTask instead of just using the SvnCheckoutTask directly?