I have to build a fallback solution (an auth system) for an external application. Therefore I have to keep the auth folder of my primary auth server synchronized with my fallback servers. The folder contains several .php files, .bin files and some others. Unfortunately I have no idea how I should set up a (for example hourly) synchronization of those folders to my fallback servers.
All servers use cPanel / WHM; maybe there is a solution for this, or how can I keep them synced otherwise? I thought about a .php script which logs in via FTP and synchronizes them, and I would then set up a cronjob for this .php script. But I don't even know whether this is possible. If the primary server is offline it shouldn't affect my fallback servers in a negative way, of course.
How should/can I implement this?
I suggest you use RSYNC, assuming you are not on a shared hosting plan.
Rsync, which stands for "remote sync", is a remote and local file
synchronization tool. It uses an algorithm that minimizes the amount
of data copied by only moving the portions of files that have changed.
http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
For this to work, you need SSH/SFTP access to your server and, of course, a Linux terminal.
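As a minimal sketch of what that could look like for the question's use case (the auth folder path and the backupuser@fallback.example.com login are placeholders, not anything from the original post):
# push the auth folder to a fallback server over SSH; -a preserves permissions/times, -z compresses
rsync -az -e ssh /home/user/auth/ backupuser@fallback.example.com:/home/user/auth/
Run this hourly from cron on the primary server; if the primary goes offline the job simply does not run, so the fallback servers are never affected.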
Leonel Atencio's suggestion of rsync is great.
Here is the rsync shell script that I use. It is placed in a folder named /publish in my project. The gist contains the rs_exclude.txt file the shell script mentions.
rsync.sh
#!/bin/bash
# reverse the comments on the next two lines to do a dry run
#dryrun=--dry-run
dryrun=
c=--compress
exclude=--exclude-from=rs_exclude.txt
pg="--no-p --no-g"
#delete is dangerous - use caution. I deleted 15 years worth of digital photos using rsync with delete turned on.
# reverse the comments on the next two lines to enable deleting
#delete=--delete
delete=
rsync_options=-Pav
rsync_local_path=../
rsync_server_string=user@example.com
rsync_server_path="/home/www.example.com"
# choose one.
#rsync $rsync_options $dryrun $delete $exclude $c $pg $rsync_local_path $rsync_server_string:$rsync_server_path
#how to specify an alternate port
#rsync -e "ssh -p 2220" $dryrun $delete $exclude $c $pg $rsync_local_path $rsync_server_string:$rsync_server_path
https://gist.github.com/treehousetim/2a7871f87fa53007f17e
running via cron
Source
Edit your crontab.
# crontab -e
Crontab entries are one per line. The comment character is the pound (#) symbol. Use the following syntax for your cron entry.
These examples assume you placed your rsync.sh script in ~/rsync
These examples will also create log files of the rsync output.
Each Minute
* * * * * ~/rsync/rsync.sh > ~/rsync/rsync.log
Every 5 Minutes
*/5 * * * * ~/rsync/rsync.sh > ~/rsync/rsync.log
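Since the original question asks about an hourly sync, an entry for the top of every hour could look like this (same assumed ~/rsync path; the added 2>&1 also captures errors in the log):
Hourly
0 * * * * ~/rsync/rsync.sh > ~/rsync/rsync.log 2>&1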
Save your crontab and exit the editor. You should see a message confirming your addition to the crontab.
Related
I am in the process of setting up a remote backup for my MariaDB server (running on 64 bit Ubuntu 12.04).
The first step in this process was easy - I am using automysqlbackup as explained here.
The next step is mostly pretty easy - I am uploading the latest backup in the folder /var/lib/automysqlbackup/daily/dbname to a remote backup service (EVBackup in the present case) via SFTP. The only issue I have run into is this:
The files created in that folder by automysqlbackup bear names in the format dbname_2014-04-25_06h47m.Friday.sql.gz and are owned by root. Establishing the name of the latest backup file is not an issue. However, once I have got it I am unable to use file_get_contents since it is owned by the root user and has its permissions set to 600. Perhaps there is a way to run a shell script that fetches me those contents? I am a novice when it comes to shell scripts. I'd be much obliged to anyone who might be able to suggest a way to get at the backup data from PHP.
For the benefit of anyone running into this thread I give below the fully functional script I created. It assumes that you have created and shared your public ssh_key with the remote server (EVBackup in my case) as described here.
In my case I had to deal with one additional complication - I am in the process of setting up the first of my servers but will have several others. Their domain names bear the form svN.example.com where N is 1, 2, 3 etc.
On my EVBackup account I have created folders bearing the same names, sv1, sv2 etc. I wanted to create a script that would safely run from any of my servers. Here is that script with some comments
#! /bin/bash
# Backup to EVBackup Server
local="/var/lib/automysqlbackup/daily/dbname"
#replace dbname as appropriate
svr="$(uname -n)"
remote="${svr/.example.com/}"
#strip out the .example.com bit to get the destination folder on the remote server
remote+="/"
evb="user@user.evbackup.com:"
#user will have to be replaced with your username
evb+=$remote
cmd='rsync -avz -e "ssh -i /backup/ssh_key" '
#your ssh key location may well be different
cmd+="$local "
cmd+=$evb
#at this point cmd will be something like
#rsync -avz -e "ssh -i /backup/ssh_key" /home bob@bob.evbackup.com:home-data
eval $cmd
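To run this from cron, a nightly entry might look like the following (the /root/evbackup.sh path, the 2 a.m. schedule and the log location are assumptions; the script has to be executable and readable by root since the backups are root-owned):
0 2 * * * /root/evbackup.sh >> /var/log/evbackup.log 2>&1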
I'm currently working on a project to make changes to the system with PHP (e.g. changing the config file of Nginx / restarting services).
The PHP scripts are running on localhost. In my opinion the best (read: most secure) way is to use SSH to make a connection. I am considering one of the following options:
Option 1: store username / password in php session and prompt for sudo
Using phpseclib with a username / password, save these values in a php session and prompt for sudo for every command.
Option 2: use root login
Using phpseclib with the root username and password in the login script. In this case you don't have to ask the user for sudo. (not really a safe solution)
<?php
include('Net/SSH2.php');

$ssh = new Net_SSH2('www.domain.tld');
if (!$ssh->login('root', 'root-password')) {
    exit('Login Failed');
}
?>
Option 3: Authenticate using a public key read from a file
Use the PHP SSH lib with a public key to authenticate, and place the key files (especially the private key) outside the www root.
<?php
$connection = ssh2_connect('shell.example.com', 22, array('hostkey' => 'ssh-rsa'));

if (ssh2_auth_pubkey_file($connection, 'username', '/home/username/.ssh/id_rsa.pub',
                          '/home/username/.ssh/id_rsa', 'secret')) {
    echo "Public Key Authentication Successful\n";
} else {
    die('Public Key Authentication Failed');
}
?>
Option 4: ?
I suggest you do this in 3 simple steps:
First.
Create another user (for example runner) and make your sensitive data (like user/pass, private key, etc.) accessible only to this user. In other words, deny your PHP code any access to this data.
Second.
After that, create a simple blocking fifo pipe and grant write access to your PHP user.
Last.
And finally, write a simple daemon that reads lines from the fifo and executes them, for example via the ssh command. Run this daemon as the runner user.
To execute a command you just need to write it to the fifo pipe. Output can be redirected to another pipe or to some simple files if needed.
To create the fifo, use this simple command:
mkfifo "FIFONAME"
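A sketch of the permission side (the runner user name and the www-data group are assumptions; FIFONAME stands for whatever path you created above), so that only the web user can write to the fifo and only runner can read from it:
# create the unprivileged account that will own the fifo and run the daemon
useradd -r runner
# web server group may write, runner may read; nobody else gets access
chown runner:www-data FIFONAME
chmod 620 FIFONAME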
The runner daemon would be a simple bash script like this:
#!/bin/bash
# read one command per line from the fifo and execute it on the remote host
while true
do
    cmd=$(cat FIFONAME | ( read cmd ; echo $cmd ) )
    # Validate the command here before executing it
    ssh 192.168.0.1 "$cmd"
done
After this you can trust your code: even if your PHP code is completely compromised, your upstream server is still secure. In that case an attacker cannot access your sensitive data at all. They can send commands to your runner daemon, but if you validate the commands properly, there's no worry.
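A minimal sketch of such validation inside the daemon loop, assuming only two hypothetical command names (reload-nginx and restart-php) are ever allowed and everything else is logged and dropped:
case "$cmd" in
    reload-nginx|restart-php)
        ssh 192.168.0.1 "$cmd" ;;
    *)
        # hypothetical log location; anything not whitelisted is never executed
        echo "rejected: $cmd" >> /var/log/runner-rejected.log ;;
esac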
:-)
Method 1
I'd probably use the suid flag. Write a little suid wrapper in C and make sure all commands it executes are predefined and can not be controlled by your php script.
So, you create your wrapper and get the command from ARGV. so a call could look like this:
./wrapper reloadnginx
Wrapper then executes /etc/init.d/nginx reload.
Make sure you chown wrapper to root and chmod +s. It will then spawn the commands as root but since you predefined all the commands your php script can not do anything else as root.
Method 2
Use sudo and set it up for passwordless execution for certain commands. That way you achieve the same effect and only certain applications can be spawned as root. You can't however control the arguments so make sure there is no privilege escalation in these binaries.
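A hedged example of such a sudoers rule (added with visudo; the www-data user and the exact service path are assumptions), which lets the web user reload nginx and nothing else:
# /etc/sudoers.d/php-nginx -- edit with visudo
www-data ALL=(root) NOPASSWD: /usr/sbin/service nginx reload
The PHP side then only ever runs that exact fixed command via sudo, with no user-controlled arguments.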
You really don't want to give a PHP script full root access.
If you're running on the same host, I would suggest either directly writing the configuration files and (re)starting services, or writing a wrapper script.
The first option obviously needs a very high privilege level, which I would not recommend. However, it will be the simplest to implement. Your other options with ssh do not help much, as a possible attacker could still easily gain root privileges.
The second option is way more secure and involves writing a program with high-level access, which only takes specified input files, e.g. from a directory. The PHP script is merely a frontend for the user and will write said input files, and thus only needs very low privileges. That way, you have a separation between your high and low privileges and therefore mitigate the risk, as it is much easier to secure a program with which a possible attacker can only interact indirectly through text files. This option requires more work, but is a lot more secure. A possible sketch of it follows below.
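One possible sketch of that second option, assuming the PHP frontend drops small job files into /var/spool/php-jobs and a privileged cron job picks them up (all paths and the allowed action names are assumptions, not a fixed recipe):
#!/bin/bash
# runs from root's crontab every minute; processes one-word job files written by PHP
for job in /var/spool/php-jobs/*; do
    [ -e "$job" ] || continue
    action=$(head -n1 "$job")
    case "$action" in
        reload-nginx)  /etc/init.d/nginx reload ;;
        restart-php)   /etc/init.d/php-fpm restart ;;
        *)             echo "unknown action: $action" >&2 ;;
    esac
    rm -f "$job"
done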
You can extend option 3 and use SSH Keys without any library
$command = sprintf('ssh -i %s -o "StrictHostKeyChecking no" %s@%s "%s"',
    $path, $this->getUsername(), $this->getAddress(), $cmd);

return shell_exec($command);
I use it quite a lot in my project. You can have a look at the SSH adapter I created.
The problem with the above is that you can't make real-time decisions (while connected to a server). If you need real time, try the PHP extension called SSH2.
ps. I agree with you. SSH seems to be the most secure option.
You can use setcap to allow Nginx to bind to port 80/443 without having to run as root. Nginx only has to run as root to bind on port 80/443 (anything < 1024). setcap is detailed in this answer.
There are a few caveats though. You'll have to chown the log files to the right user (chown -R nginx:nginx /var/log/nginx), and configure the pid file to live somewhere other than /var/run/ (pid /path/to/nginx.pid).
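For reference, the setcap call itself looks something like this (the nginx binary path varies by distribution):
setcap 'cap_net_bind_service=+ep' /usr/sbin/nginx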
lawl0r provides a good answer for setuid/sudo.
As a last resort you could reload the configuration periodically using a cron job if it has changed.
Either way you don't need SSH when it's on the same system.
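A rough sketch of that cron-based reload idea, assuming a config file written by PHP at /etc/nginx/conf.d/generated.conf and a stored checksum file (both paths are assumptions):
#!/bin/bash
# reload nginx only when the generated config has changed and still passes a syntax check
conf=/etc/nginx/conf.d/generated.conf
sum=/var/run/generated.conf.md5
if ! md5sum --status -c "$sum" 2>/dev/null; then
    md5sum "$conf" > "$sum"
    nginx -t && nginx -s reload
fi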
Dude that is totally wrong.
You do not want to change any config files; you want to create new files and then include them in the default config files.
For example, with Apache you have *.conf files and the Include directive. It is totally insane to change the default config files. That is how I do it.
You do not need ssh or anything like that.
Believe me, it is better and safer, if you will.
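As a hedged illustration of that approach with Apache (the conf.d path and file name are assumptions): the PHP script only ever writes a small generated file into a directory that the main config already includes, and the default files are never touched:
# in the main httpd.conf / apache2.conf, usually already present by default:
#   IncludeOptional conf.d/*.conf
# PHP writes e.g. /etc/httpd/conf.d/generated-vhost.conf, then something privileged runs:
apachectl configtest && apachectl graceful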
I am working on a PHP website and it regularly gets infected by malware. I've gone through all the security steps but failed. But I do know how it infects my code every time. It comes at the start of my PHP index file, like this:
<script>.....</script><?
Can anybody please help me how can I remove the starting block code of every index file at my server folders? I will use a cron for this.
I have already gone through the regex questions about removing JavaScript malware but did not find what I want.
You should change the FTP password for your website, and also make sure that there are no programs running in the background that open TCP connections on your server, enabling some remote dude to change your site files. If you are on Linux, check the running processes and kill/delete everything that is suspicious.
You can also make all server files ReadOnly with ROOT...
Anyhow, a trojan/malware/unauthorized FTP access is to blame, not JavaScript.
Also, this is more a SuperUser question...
Clients regularly call me to disinfect their non-backed-up, PHP-malware-infected sites, on host servers they have no control over.
If I can get shell access, here is a script I wrote to run:
( set -x; pwd; date; time grep -rl zend_framework --include=*.php --exclude=*\"* --exclude=*\^* --exclude=*\%* . |perl -lne 'print quotemeta' |xargs -rt -P3 -n4 sed -i.$(date +%Y%m%d.%H%M%S).bak 's/<?php $zend_framework=.*?>//g'; date ; ls -atrFl ) 2>&1 | tee -a ./$(date +%Y%m%d.%H%M%S).$$.log
It may take a while but ONLY modifies PHP files containing the trojan's signature <?php $zend_framework=
It makes a backup of the infected .php versions to .bak so that when re-scanned, those will be skipped.
If I cannot get shell access, e.g. FTP only, then I create a short cleaner.php file containing basically that code for PHP to exec, but often the web server times out the script execution before it gets through all subdirectories.
WORKAROUND for your problem:
I put this in a crontab / at job to run e.g. every 12 hours, if such direct access to process scheduling on the server is possible. Otherwise there are also more convoluted approaches depending on what is permitted, e.g. calling the cleaner PHP from the outside once in a while, but making it start with different folders each time via sort --random-sort (because after 60 seconds or so it will get terminated by the web server anyway).
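For example, a crontab entry for the shell-access case could be something like this (paths are assumptions; cleaner.sh would contain the one-liner above):
0 */12 * * * /home/user/cleaner.sh >> /home/user/cleaner.cron.log 2>&1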
Change Database Username Password
Change FTP password
Change WordPress Hash Key.
Download the theme + plugins to your computer and scan them with an UPDATED antivirus, especially NOD32.
Don't look for the pattern that tells you it is malware, just patch all your software, close unused ports, follow what people told you here already instead of trying to clean the code with regex or signatures...
How to set up a cron job via PHP (not CPanel)?
Most Linux systems with crond installed provide a few directories you can set up jobs with:
/etc/cron.d/
/etc/cron.daily/
/etc/cron.weekly/
/etc/cron.monthly/
...
The idea here is to create a file in one of these directories. You will need to set the proper permissions/ownership to those (or one of those) directories so that the user launching the PHP script can write to it (Apache user if it's a web script, or whatever CLI user if CLI is used).
The easiest thing is to create an empty file, assign proper permission/ownership to it, and have the PHP script append/modify it.
Per example:
$ touch /etc/cron.d/php-crons
$ chown www-data /etc/cron.d/php-crons
Then in PHP:
$fp = fopen('/etc/cron.d/php-crons', 'a');
// note: entries in /etc/cron.d need a user field after the schedule
fwrite($fp, '* 23 * * * www-data echo foobar'.PHP_EOL);
fclose($fp);
If what you're getting at is dynamically adding lots of jobs to crontab from your application, a better way to do that is to manually add ONE cron job:
php -f /path/to/your/runner.php
Store your jobs that you would be adding to cron manually in a table (or one table per task-type), and then have your runner go through the table(s) every minute/hour/day/whatever and execute all the ones that should be executed at that time.
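The single manually added crontab entry could then look something like this (assuming a once-a-minute runner and a hypothetical log path):
* * * * * php -f /path/to/your/runner.php >> /var/log/php-runner.log 2>&1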
From pure PHP I would create a daemon that manages this (those) cron job(s).
how to create it:
http://kevin.vanzonneveld.net/techblog/article/create_daemons_in_php/ to start with
Finding the crontab file isn't easy on shared hosting, and there's no certainty that cron will re-read that file while it's already running.
Actually, the best way is to use the crontab command.
If you don't have access to a shell, you can use, for example, PHPShell. Try this.
Upload a txt file via FTP with jobs in crontab format, for example
5 * * * * /some/file/to/run.sh > /dev/null
(remember to put a newline at the end of that line)
Log in to your PHPShell and run
crontab uploaded_filename.txt
Remember to change file permissions
chmod 775 uploaded_filename.txt
Check your cron jobs using
crontab -l
Cheers
There is an embargo on the use of PHP to edit crontabs which has been in place since 2004. You may not be allowed to do this if you live outside of the United States, check with your local government agency.
But seriously, you could always call "crontab -" with a system call. If you need to do this for some user other than the webserver, you'll need some ssh or sudo magic. But it all seems like a bad idea.
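For reference, the underlying shell idiom that a system() call would wrap is piping a full crontab into crontab - , for example (the job line is just an illustration):
( crontab -l 2>/dev/null; echo "0 * * * * /path/to/job.sh" ) | crontab -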
How to get $base_url to show the correct url for my Drupal site when I'm running a cron job? Do I have to set the global $base_url manually for that to happen? Do I have to run the cron job as a registered user?
When I run mysite.com/cron.php by hand everything seems to work fine: $base_url is set to the correct url. However, when I run a similar command via cron or drush, $base_url is set to a generic "http://default".
The funny thing is that when I run cron manually as a registered user from inside Drupal (using devel, for instance), $base_url always points to the right URL.
Any suggestions?
Thanks in advance,
Leo
Your cron is probably set up wrong.
You can use wget or curl, which is effectively the same as running the cron "by hand". Something like this:
5 * * * * wget http://example.com/cron.php
You are probably using drupal.sh, which claims that you should use "http://default/cron.php" as the URI. This will break the $base_url handling. The following might work with drupal.sh.
5 * * * * /path/to/drupal.sh --root /home/site/public_html/ http://example.com/cron.php
When using drush, you might have to supply the --uri argument:
drush --uri=http://example.com cron
You could also just set the $base_url variable in settings.php (which is a perfectly valid way to do it, not a hack).
Let's walk through several possible causes:
wget, curl or lynx don't exist on the server. Try running these commands by hand; your OS will tell you if the programs are not available. Solution: make them available, install them, or ask your sysadmin to make them available or install them.
wget, curl and the likes cannot connect to the outside world. Call the entire cron command by hand, but make sure you leave out the --silent or --quiet parameters; you want to get verbose information. There's a good chance some firewall is blocking your connection from inside to outside; many well-secured systems do. Solution: contact your sysadmin to disable the firewall.
No-one can connect to or run your cron.php. You already pointed out that this is not the case, but for future reference: many servers block cron.php from being run by "just anyone". You can find this out by calling cron.php and looking into the watchdog (Drupal » Admin » Logs » Recent Events). A record saying that cron was run should be present there. Solution: find out how cron.php is blocked from "just anyone"; often this is a record in .htaccess or the Apache configuration, often it is a firewall. Disable that for your requested IP or client.