I am in the process of setting up a remote backup for my MariaDB server (running on 64 bit Ubuntu 12.04).
The first step in this process was easy - I am using automysqlbackup as explained here.
The next step is mostly pretty easy - I am uploading the latest backup in the folder /var/lib/automysqlbackup/daily/dbname to a remote backup service (EVBackup in the present case) via SFTP. The only issue I have run into is this:
The files created in that folder by automysqlbackup bear names in the format dbname_2014-04-25_06h47m.Friday.sql.gz and are owned by root. Establishing the name of the latest backup file is not an issue. However, once I have its name I am unable to use file_get_contents on it, since the file is owned by the root user and has its permissions set to 600. Perhaps there is a way to run a shell script that fetches me those contents? I am a novice when it comes to shell scripts. I'd be much obliged to anyone who might be able to suggest a way to get at the backup data from PHP.
For the benefit of anyone running into this thread I give below the fully functional script I created. It assumes that you have created and shared your public ssh_key with the remote server (EVBackup in my case) as described here.
In my case I had to deal with one additional complication - I am in the process of setting up the first of my servers but will have several others. Their domain names bear the form svN.example.com where N is 1, 2, 3 etc.
On my EVBackup account I have created folders bearing the same names, sv1, sv2 etc. I wanted to create a script that would safely run from any of my servers. Here is that script, with some comments:
#! /bin/bash
# Backup to EVBackup Server
local="/var/lib/automysqlbackup/daily/dbname"
#replace dbname as appropriate
svr="$(uname -n)"
remote="$(svr/.example.com/)"
#strip out the .example.com bit to get the destination folder on the remote server
remote+="/"
evb="user#user.evbackup.com:"
#user will have to be replaced with your username
evb+=$remote
cmd='rsync -avz -e "ssh -i /backup/ssh_key" '
#your ssh key location may well be different
cmd+="$local "
cmd+="$evb"
#at this point cmd will be something like
#rsync -avz -e "ssh -i /backup/ssh_key" /home bob@bob.evbackup.com:home-data
eval "$cmd"
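If you want this to run unattended, a minimal sketch (the script location and schedule below are just assumptions, adjust them to your setup) is to make the script executable and call it from root's crontab:
chmod +x /backup/evbackup.sh
# then add a line like this with "crontab -e" to push the dumps every night at 03:00
0 3 * * * /backup/evbackup.sh >> /var/log/evbackup.log 2>&1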
Related
I have to build a fallback solution (an auth system) for an external application. Therefore I have to keep the auth folder of my primary auth server synchronized with my fallback servers. The folder contains several .php files, .bin files and some others. Unfortunately I have no idea how I should implement a (for example hourly) synchronization of those folders to my fallback servers.
All servers use cPanel / WHM; maybe there is a solution for this, or how can I keep them synced otherwise? I thought about a .php script which logs in via FTP and synchronizes them. I would then set up a cron job for this .php script. But I don't even know whether this is possible. If the primary server is offline it shouldn't affect my fallback servers in a negative way, of course.
How should/can I realize this?
I suggest you use RSYNC, assuming you are not on a shared hosting plan.
Rsync, which stands for "remote sync", is a remote and local file
synchronization tool. It uses an algorithm that minimizes the amount
of data copied by only moving the portions of files that have changed.
http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
For this to work, you need SSH access to your server and, of course, a Linux terminal.
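As a minimal sketch (user, host and paths below are made up), an hourly one-way sync of the auth folder could be a single cron entry on the primary server:
# mirror the auth folder to a fallback server once an hour; --delete propagates removals, use with care
0 * * * * rsync -az --delete /home/primary/auth/ fallbackuser@fallback.example.com:/home/fallback/auth/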
Leonel Atencio's suggestion of rsync is great.
Here is the rsync shell script that I use. It is placed in a folder named /publish in my project. The gist contains the rs_exclude.txt file the shell script mentions.
rsync.sh
#!/bin/bash
# reverse the comments on the next two lines to do a dry run
#dryrun=--dry-run
dryrun=
c=--compress
exclude=--exclude-from=rs_exclude.txt
pg="--no-p --no-g"
#delete is dangerous - use caution. I deleted 15 years worth of digital photos using rsync with delete turned on.
# reverse the comments on the next two lines to enable deleting
#delete=--delete
delete=
rsync_options=-Pav
rsync_local_path=../
rsync_server_string=user@example.com
rsync_server_path="/home/www.example.com"
# choose one.
#rsync $rsync_options $dryrun $delete $exclude $c $pg $rsync_local_path $rsync_server_string:$rsync_server_path
#how to specify an alternate port
#rsync -e "ssh -p 2220" $dryrun $delete $exclude $c $pg $rsync_local_path $rsync_server_string:$rsync_server_path
https://gist.github.com/treehousetim/2a7871f87fa53007f17e
running via cron
Source
Edit your crontab.
# crontab -e
Crontab entries are one per line. The comment character is the pound (#) symbol. Use the following syntax for your cron entry.
These examples assume you placed your rsync.sh script in ~/rsync
These examples will also create log files of the rsync output.
Each Minute
* * * * * ~/rsync/rsync.sh > ~/rsync/rsync.log
Every 5 Minutes
*/5 * * * * ~/rsync/rsync.sh > ~/rsync/rsync.log
Save your crontab and exit the editor. You should see a message confirming your addition to the crontab.
I'm currently working on a project to make changes to the system with PHP (e.g. changing the config file of Nginx / restarting services).
The PHP scripts are running on localhost. In my opinion the best (read: most secure) way is to use SSH to make a connection. I am considering one of the following options:
Option 1: store username / password in php session and prompt for sudo
Using phpseclib with a username / password, save these values in a php session and prompt for sudo for every command.
Option 2: use root login
Using phpseclib with the root username and password in the login script. In this case you don't have to ask the user for sudo. (not really a safe solution)
<?php
include('Net/SSH2.php');
$ssh = new Net_SSH2('www.domain.tld');
if (!$ssh->login('root', 'root-password')) {
    exit('Login Failed');
}
?>
Option 3: Authenticate using a public key read from a file
Use the PHP SSH library with a public key to authenticate, and place the key pair outside the web root.
<?php
$connection = ssh2_connect('shell.example.com', 22, array('hostkey' => 'ssh-rsa'));
if (ssh2_auth_pubkey_file($connection, 'username', '/home/username/.ssh/id_rsa.pub', '/home/username/.ssh/id_rsa', 'secret')) {
    echo "Public Key Authentication Successful\n";
} else {
    die('Public Key Authentication Failed');
}
?>
Option 4: ?
I suggest you do this in 3 simple steps:
First.
Create another user (for example runner) and make your sensitive data (like user/pass, private key, etc.) accessible only to this user. In other words, deny your PHP code any access to this data.
Second.
After that, create a simple blocking FIFO pipe and grant write access to your PHP user.
Last.
Finally, write a simple daemon to read lines from the FIFO and execute them, for example via the ssh command. Run this daemon as the runner user.
To execute a command you just need to write your command to the file (the FIFO pipe). Output can be redirected to another pipe or to some simple files if needed.
To create the FIFO, use this simple command:
mkfifo "FIFONAME"
The runner daemon would be a simple bash script like this:
#!/bin/bash
while true
do
    # read one line from the FIFO (blocks until something is written to it)
    cmd=$(cat FIFONAME | ( read cmd ; echo "$cmd" ))
    # Validate the command here before executing it
    ssh 192.168.0.1 "$cmd"
done
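To send a command, whatever writes to the FIFO just appends one line to it, for instance from a shell (or from PHP with file_put_contents); the FIFO path and the command below are only placeholders:
# hypothetical example: ask the runner daemon to reload nginx on the upstream host
echo "service nginx reload" > FIFONAME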
After this you can trust your code: even if your PHP code is completely hacked, your upstream server remains secure. In that case the attacker cannot access your sensitive data at all. He can send commands to your runner daemon, but if you validate the commands properly, there's no worry.
:-)
Method 1
I'd probably use the suid flag. Write a little suid wrapper in C and make sure all commands it executes are predefined and cannot be controlled by your PHP script.
So you create your wrapper and get the command from ARGV; a call could then look like this:
./wrapper reloadnginx
Wrapper then executes /etc/init.d/nginx reload.
Make sure you chown the wrapper to root and chmod +s it. It will then spawn the commands as root, but since you predefined all the commands, your PHP script cannot do anything else as root.
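Concretely, that would look something like this (the wrapper name and location are just an illustration):
chown root:root ./wrapper
chmod 4755 ./wrapper     # the leading 4 sets the setuid bit, same effect as chmod u+s
./wrapper reloadnginx    # now runs the predefined /etc/init.d/nginx reload as root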
Method 2
Use sudo and set it up for passwordless execution of certain commands. That way you achieve the same effect and only certain applications can be spawned as root. You can't control the arguments, however, so make sure there is no privilege escalation possible in these binaries.
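A sketch of such a sudoers rule, edited with visudo (the web server user name and command path are assumptions for this example):
# /etc/sudoers.d/php-nginx -- allow the web server user to reload nginx, and nothing else
www-data ALL=(root) NOPASSWD: /usr/sbin/service nginx reload
# the PHP script can then run: sudo /usr/sbin/service nginx reload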
You really don't want to give a PHP script full root access.
If you're running on the same host, I would suggest either directly writing the configuration files and (re)starting services, or writing a wrapper script.
The first option obviously needs a very high privilege level, which I would not recommend. However, it is the simplest to implement. Your other options involving SSH do not help much, as a possible attacker may still easily gain root privileges.
The second option is much more secure and involves writing a program with high-level access which only takes specified input files, e.g. from a directory. The PHP script is merely a frontend for the user and will write said input files, and thus only needs very low privileges. That way, you have a separation between your high and low privileges and therefore mitigate the risk, as it is much easier to secure a program with which a possible attacker may only work indirectly through text files. This option requires more work, but is a lot more secure.
You can extend option 3 and use SSH keys without any library:
$command = sprintf('ssh -i %s -o "StrictHostKeyChecking no" %s@%s "%s"',
    $path, $this->getUsername(), $this->getAddress(), $cmd );
return shell_exec( $command );
I use it quite a lot in my project. You can have a look into the SSH adapter I created.
The problem with the above is that you can't make real-time decisions (while connected to a server). If you need real time, try the PHP extension called SSH2.
PS: I agree with you. SSH seems to be the most secure option.
You can use setcap to allow Nginx to bind to port 80/443 without having to run as root. Nginx only has to run as root to bind on port 80/443 (anything < 1024). setcap is detailed in this answer.
There are a few caveats though. You'll have to chown the log files to the right user (chown -R nginx:nginx /var/log/nginx), and configure the pid file to be somewhere other than /var/run/ (pid /path/to/nginx.pid).
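A sketch of the setcap approach (the binary path is an assumption, check where your nginx actually lives):
setcap 'cap_net_bind_service=+ep' /usr/sbin/nginx   # allow binding to ports < 1024 without root
getcap /usr/sbin/nginx                               # verify that the capability was applied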
lawl0r provides a good answer for setuid/sudo.
As a last resort you could reload the configuration periodically using a cron job if it has changed.
Either way you don't need SSH when it's on the same system.
Dude, that is totally wrong.
You do not want to change any config files; you want to create new files and then include them in the default config files.
For example, with Apache you have *.conf files and the Include directive. It is totally insane to change the default config files. That is how I do it.
You do not need ssh or anything like that.
Believe me, it is better and safer, if you will.
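As a hedged sketch of that approach on a stock Apache layout (paths vary by distribution, and conf.d is assumed to already be pulled in by an Include/IncludeOptional line in httpd.conf):
# drop a new file into the included directory instead of editing the default configs
cat > /etc/httpd/conf.d/mysite.conf <<'EOF'
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com
</VirtualHost>
EOF
apachectl configtest && apachectl graceful   # validate, then reload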
I'm creating a PHP script to back up my website every day. The backup goes to another Linux server of mine, but how can I compress all the files and send them to the other Linux server via a script?
One possible solution (in bash).
BACKUP_SERVER_PATH=remote_user@remote_server:/remote/backup/path/
SITE_ROOT=/path/to/your/site/
cd "$SITE_ROOT"
now=$(date +%Y%m%d%H%M)
tar -cvzf /tmp/yoursite.$now.tar.gz .
scp /tmp/yoursite.$now.tar.gz "$BACKUP_SERVER_PATH"
There is some extra stuff to take into account regarding permissions (read access to the docroot) and SSH access to the remote server (for scp).
Note that there are really many ways to do this. Another one, if you don't mind storing an uncompressed version of your site, is to use rsync.
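A minimal rsync sketch for that case (user, host and paths are placeholders, mirroring the variables above):
# -a preserves permissions/timestamps, -z compresses in transit; add --delete only if you want removals to propagate
rsync -az /path/to/your/site/ remote_user@remote_server:/remote/backup/path/current/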
The answer provided by Timothée Groleau should work, but I prefer doing it the other way around: from my backups machine (a server with a lot of storage available), launch a process to back up all the other servers.
In my environment, I use a configuration file that lists all servers, valid user for each server and the path to backup:
/usr/local/etc/backupservers.conf
192.168.44.34 userfoo /var/www/foosites
192.168.44.35 userbar /var/www/barsites
/usr/local/sbin/backupservers
#!/bin/sh
CONFFILE=/usr/local/etc/backupservers.conf
SSH=`which ssh`
if [ ! -r "$CONFFILE" ]
then
    echo "$CONFFILE not readable" >&2
    exit 1
fi
if [ ! -w "/var/backups" ]
then
    echo "/var/backups is not writable" >&2
    exit 1
fi
while read -r host user path
do
    file="/var/backups/`date +%Y-%m-%d`.$host.$user.tar.bz2"
    touch "$file"
    chmod 600 "$file"
    "$SSH" "$user@$host" tar jc "$path" > "$file"
done < "$CONFFILE"
For this to work correctly, and without needing to enter passwords for every server to back up, you need to exchange SSH keys (there are lots of questions/answers on Stack Overflow on how to do this).
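In short, something like this from the backup machine (the user names and addresses match the example config above; the key type and path are just the defaults):
ssh-keygen -t rsa                  # accept the default path; an empty passphrase allows unattended runs
ssh-copy-id userfoo@192.168.44.34
ssh-copy-id userbar@192.168.44.35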
And last step would be to add this to cron to launch the process each night:
/etc/crontab
0 2 * * * backupuser /usr/local/sbin/backupservers
I am working on a PHP website and it regularly gets infected by malware. I've gone through all the security steps but failed. But I know how it infects my code every time: it appears at the start of my PHP index file as follows.
<script>.....</script><?
Can anybody please help me with how I can remove this starting block of code from every index file in my server folders? I will use a cron job for this.
I have already gone through the regex questions about removing JavaScript malware but did not find what I want.
You should change the FTP password for your website, and also make sure that there are no programs running in the background that open TCP connections on your server, enabling some remote dude to change your site files. If you are on Linux, check the running processes and kill/delete anything suspicious.
You can also make all server files read-only as root...
Anyhow, a trojan/malware/unauthorized FTP access is to blame, not JavaScript.
Also, this is more a SuperUser question...
Clients regularly call me to disinfect their non-backed-up, PHP-malware-infected sites on hosting servers they have no control over.
If I can get shell access, here is a script I wrote to run:
( set -x; pwd; date; time grep -rl zend_framework --include=*.php --exclude=*\"* --exclude=*\^* --exclude=*\%* . |perl -lne 'print quotemeta' |xargs -rt -P3 -n4 sed -i.$(date +%Y%m%d.%H%M%S).bak 's/<?php $zend_framework=.*?>//g'; date ; ls -atrFl ) 2>&1 | tee -a ./$(date +%Y%m%d.%H%M%S).$$.log
It may take a while but ONLY modifies PHP files containing the trojan's signature <?php $zend_framework=
It makes a backup of the infected .php versions to .bak files, so that when it is re-run, those will be skipped.
If I cannot get shell access, e.g. FTP only, then I create a short cleaner.php file containing basically that code for PHP to exec, but often the web server times out the script execution before it gets through all the subdirectories.
WORKAROUND for your problem:
I put this in a crontab / at job to run e.g. every 12 hours, if such access to process scheduling directly on the server is possible. Otherwise, there are more convoluted approaches depending on what is permitted, e.g. calling the cleaner PHP from the outside once in a while, but making it start with different folders each time via sort -R (because after 60 seconds or so it will get terminated by the web server anyway).
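If cron is available, a hypothetical entry for the shell-access case (the script path and log location are placeholders) could be:
# run the cleanup every 12 hours and keep a log of what it did
0 */12 * * * /path/to/clean_malware.sh >> /var/log/clean_malware.log 2>&1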
Change the database username and password.
Change the FTP password.
Change the WordPress hash keys.
Download the theme + plugins to your computer and scan them with an UPDATED antivirus, especially NOD32.
Don't look for the pattern that tells you it is malware; just patch all your software, close unused ports, and follow what people have already told you here instead of trying to clean the code with regexes or signatures...
I want to make SVN updates easier - by calling a PHP script.
I created a PHP script:
$cmd = "svn update https://___/svn/website /var/www/html/website/ 2>&1";
exec($cmd, $out);
As the user running the script is apache (not root), I get some permission errors.
If I change the owner of every directory to apache (or chmod everything to 777) I have another problem. Because I use the https protocol, the apache user has to permanently accept the certificate of the SVN server. I tried to do "su - apache" and accept the certificate, but the OS says that "apache" is not a valid user. I also don't know how I could accept the certificate with the exec() function.
Any idea? How can I make svn updating easier?
Is the error telling you that the user isn't a valid svn user? If apache is the user running httpd, you should be able to su to it. This is the script I use:
/usr/bin/svn --config-dir=/home/user/.subversion --username=svnuser --password=svnpass update
Once the password is saved you can remove it from the command. Again, make sure the user/pass above is a valid SVN user.
Lately I've actually migrated to using Hudson for SVN updates, as you can schedule it as well as run it manually and do a bunch of other tasks; plus you can view the SVN logs for each commit as well as any console errors.
Why not use the PHP SVN functions instead of (insecure) exec?
http://www.php.net/manual/en/function.svn-auth-set-parameter.php has good examples for authentication options.
Use getent passwd apache in the shell. This will return the shell of the apache user. Most likely, it is /bin/nologin or /bin/false. Change this to /bin/bash. You'll also need to specify the home directory and create it on the file system.
UPDATE: getent passwd apache will actually return the entry in the /etc/passwd file for the apache user. The last token in this string is the shell.
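A sketch of those steps (the user name "apache" and the paths depend on your distribution):
usermod -s /bin/bash apache        # give the apache user a login shell
mkdir -p /home/apache
chown apache:apache /home/apache
usermod -d /home/apache apache     # set the home directory so svn can store accepted certificates there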