We have a Magento store set up on two web servers behind a load balancer, which distributes the load and sends customers to either server.
I have a Magento module that allows customers to upload files through the website; the files are then saved to [web_server_ip]/media/[visitor_id]/file_name.ext
Since there are two servers, web_server_ip has to stay consistent throughout a customer's session when they upload files. In other words, all files from a customer need to end up in the same directory on the server they first connected to, regardless of which server the load balancer sends them to within a single session. To ensure that, I use the following function to get a consistent server IP for file uploads.
protected function getServerIp() {
    // reuse the IP if it was already resolved during this request
    if (Mage::registry("dlite_server") != "") {
        return Mage::registry("dlite_server");
    }

    if (substr($_SERVER['SERVER_ADDR'], 0, strpos($_SERVER['SERVER_ADDR'], '.')) == '10') {
        // Private IP to public IP conversion
        if ($_SERVER['SERVER_ADDR'] == '10.999.99.999') {
            $serverip = '50.99.999.999';
        } else if ($_SERVER['SERVER_ADDR'] == '10.888.888.888') {
            $serverip = '50.888.88.888';
        } else {
            $serverip = $_SERVER['SERVER_ADDR'];
        }
    } else {
        $serverip = $_SERVER['SERVER_ADDR'];
    }

    // register only when the key is not set yet; Mage::register() throws
    // an exception if the key already exists
    Mage::register("dlite_server", $serverip);

    //I have used customer session as below to hold the IP first
    //Mage::getSingleton('customer/session')->setDliteServerIp($serverip);
    return $serverip;
}
The issue I'm facing: I first tried to hold the IP throughout the customer session using Magento's customer/session, which failed about 20% of the time. As you can see, I'm now trying to use Mage::registry to hold the IP, and that fails even more often, around 30% of the time.
So I was wondering: is there anything consistent in Magento that I can use for this purpose?
Recently, I had a single IP call the same page 15,000 times in quick succession, which used up a lot of server resources (and earned me a warning email from my hosting service). I am on a shared host, so I can't install new modules and therefore have no real way to truly limit bandwidth per IP.
So I'm trying to figure out how to use the least amount of resources to spot an offending IP and redirect it to a 403 Forbidden page. I am already checking for common hacks and using Project Honey Pot, but doing this for each of the 15,000 page hits is not efficient (and, as this incident shows, it doesn't catch them all).
I currently log access to each page in a MySQL table called visitors. I can imagine a couple of ways to go about this:
Option 1: Using MySQL:
1) Query the visitors table for the number of hits from the IP over the last 10 seconds.
2) If greater than a certain number (15?), flag the last entry in visitors as blocked for this IP.
3) On each subsequent page request, the query on the visitors table will show the IP as blocked, and I can then redirect to the 403 Forbidden page. (A rough sketch of this check follows.)
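For clarity, here is a minimal sketch of what the Option 1 rate check could look like. It assumes a PDO connection in $pdo and that visitors has ip and visited_at columns; the column names are my guesses, not from the question:
$stmt = $pdo->prepare(
    "SELECT COUNT(*) FROM visitors
     WHERE ip = ? AND visited_at >= NOW() - INTERVAL 10 SECOND"
);
$stmt->execute(array($_SERVER['REMOTE_ADDR']));
if ($stmt->fetchColumn() > 15) {
    // flag this IP as blocked, e.g. in a `blocked` column checked on every request
}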
Option 2: Modifying an include file on the fly which contains blacklisted IPs:
1) Include a file which returns an array of blacklisted IPs.
2) If the current IP is not on the list, query the visitors table as in Option 1 to see if the number of hits from the IP over the last 10 seconds is greater than a certain number.
3) If the IP is offending, modify the include file to add this IP address, as shown below.
In essence, my question is: which uses more resources (× 15,000): a query to the database, or the code below, which uses include to read a file and then array_search()? Or is there a better way to do this?
<?php
$ip = $_SERVER['REMOTE_ADDR'];
$filename = '__blacklist.php';
if (file_exists($filename)) {
    // get array of excluded ip addresses
    $array = (include $filename);
    // if the current address is in the list, send to 403 Forbidden page
    // var_dump($array); // debug only
    if (is_array($array) && array_search($ip, $array) !== false) {
        blockAccess("Stop Bugging Me!");
    }
} else {
    echo "$filename does not exist";
}

// Evaluate some condition which, if true, will cause the IP to be added to the
// blacklist - this will be a query to a MySQL table determining the number of
// hits to the site over a period of time, like the last 10 seconds.
if (TRUE) {
    blockip($ip);
}

// debug - let's see what is blocked
// $array = (include $filename);
// var_dump($array);

// add the ip to the blacklist
function blockip($ip) {
    $filename = '__blacklist.php';
    if (!file_exists($filename)) {
        // create the include file
        $handle = fopen($filename, "w+");
        // write the beginning of the file - 111.111.111.111 is a placeholder
        // so all new ips can be appended the same way
        fwrite($handle, '<?php return array("111.111.111.111"');
    } else {
        // let's block the current IP
        $handle = fopen($filename, 'r+');
        // Don't use filesize() on files that may be accessed and updated by
        // parallel processes or threads (the filesize() return value is cached).
        // Use fseek & ftell instead.
        fseek($handle, 0, SEEK_END);
        $filesize = ftell($handle);
        if ($filesize > 20) {
            // remove ); from the end of the file so a new ip can be added
            ftruncate($handle, $filesize - 2);
            // go to the end of the file
            fseek($handle, 0, SEEK_END);
        } else {
            // invalid file size - truncate and rewrite the file in place
            // (reopening here would leak the handle opened above)
            ftruncate($handle, 0);
            rewind($handle);
            // write the beginning of the file with a placeholder so a new ip can be added
            fwrite($handle, '<?php return array("111.111.111.111"');
        }
    }
    // add the new ip and close the array
    fwrite($handle, "," . PHP_EOL . '"' . $ip . '");');
    fclose($handle);
}

function blockAccess($message) {
    header("HTTP/1.1 403 Forbidden");
    echo "<!DOCTYPE html>\n<html>\n<head>\n<meta charset='UTF-8' />\n<title>403 Forbidden</title>\n</head>\n<body>\n" .
        "<h1>Forbidden</h1><p>You don't have access to this page.</p>" .
        "\n</body>\n</html>";
    die();
}
?>
There are a lot of points to address here.
Query vs Include
This basically comes down to the server it's hosted on. Shared hosting is not known for good IO, and you will have to test this. It also depends on how many IPs you will be blacklisting.
Blocking Malicious IPs
Ideally you don't want malicious users to hit PHP at all once you determine that they are malicious. The only real way to do that in a shared environment on Apache is to block them from .htaccess. It's not recommended, but it is possible to modify .htaccess from PHP:
Order Deny,Allow
Deny from xxx.xxx.xxx.xxx
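A minimal sketch of appending such a rule from PHP might look like this. The .htaccess path and the Apache 2.2-style "Deny from" syntax shown above are assumptions; test carefully, since a malformed .htaccess takes the whole site down:
function denyIpInHtaccess($ip, $htaccess = __DIR__ . '/.htaccess') {
    // basic sanity check so nothing unexpected is written into the file
    if (!filter_var($ip, FILTER_VALIDATE_IP)) {
        return false;
    }
    // FILE_APPEND + LOCK_EX keeps concurrent requests from interleaving writes
    return file_put_contents($htaccess, "Deny from {$ip}\n", FILE_APPEND | LOCK_EX) !== false;
}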
Caching
The main concern I have from reading your question is that you seem to be missing the underlying problem. If you are receiving 15,000 hits within a timeframe of seconds, you should not be opening 15,000 database connections, and all of those requests should not be hitting PHP. You need to cache these requests. If this is happening, your system is fundamentally flawed: it should not be physically possible for one user on home internet to spike your resource usage that much.
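As a rough illustration of keeping repeat offenders away from the database entirely, here is a sketch of a cheap file-based per-IP throttle that runs before any real work. The file layout is made up for the example; APCu or a real reverse proxy would be better where available:
// returns true once an IP exceeds $limit requests within $window seconds
function tooManyHits($ip, $limit = 15, $window = 10) {
    $file = sys_get_temp_dir() . '/hits_' . md5($ip);
    $now  = time();
    $hits = array();
    if (is_file($file)) {
        // keep only timestamps that are still inside the window
        $hits = array_filter(
            explode("\n", (string) file_get_contents($file)),
            function ($t) use ($now, $window) { return (int) $t > $now - $window; }
        );
    }
    $hits[] = $now;
    file_put_contents($file, implode("\n", $hits), LOCK_EX);
    return count($hits) > $limit;
}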
Shared hosting is a bad idea
In your situation, I suggest getting a VPS or something else that will allow you to use a reverse proxy and employ far more caching/blacklisting/resource monitoring.
I've created an application using PHP and I'm going to sell it to my local market. I will personally be going to their locations to install/configure Apache & MySQL as well as install my own code.
I would like a security system so that if anyone attempts to copy my code to an unauthorized machine, it won't run.
I know no one can prevent reverse engineering of an application; even .exe (binary) files get cracked, and with PHP (plain source code) anyone can do it.
In my country those reverse engineers are really hard to find, so I would like to implement some minimal security options, like:
1) Create a class (say, Navigation) which reads system information like the CPU ID, computer name, or any combination of hardware IDs to build a UNIQUE_ID, and matches it against the UNIQUE_ID I issued to the individual to whom I sold the application. If it's valid, it returns the navigation menu; otherwise it destroys the database and halts execution by throwing an exception, maybe like:
class Navigation {
    public function d() {
        // placeholder: return the current system's UNIQUE_ID here
        // (a hardware fingerprint; see the answers below)
        return '';
    }

    public function get() {
        $a = file_get_contents('hash');
        $c = $this->d();
        // pass the stored hash as the salt; a bare crypt($c) generates a
        // fresh random salt on each call, so the comparison would always fail
        if (crypt($c, $a) != $a) {
            // destroy database
            throw new Exception('');
        } else {
            return "<ul><li><a>home</a></li></ul>"; // navigation menu
        }
    }
}
2) Then, during the installation process, I'll write the system's UNIQUE_ID into the "hash" file, create an object, and save it to a file (nav.obj):
(install.php)
<?php
$a = new Navigation;
$out = serialize($a);
file_put_contents('nav.obj', $out);
3) In header.php (which gets included in every file):
<?php
$menu = file_get_contents('nav.obj');
$menu = unserialize($menu); // unserialize what we just read, not an undefined $a
echo $menu->get();
?>
I know this method isn't foolproof, but I'm pretty sure that around 60% of PHP developers won't be able to crack it!
Now I only need to get the current system's UNIQUE_ID.
I have created this function to get a unique ID based on hardware (the hard disk UUID). It is possible to use different sources, like machine names, domains, or even hard disk size, for a better fit depending on your needs.
function UniqueMachineID($salt = "") {
    if (strtoupper(substr(PHP_OS, 0, 3)) === 'WIN') {
        // write a diskpart script once, then read the disk details through it
        $temp = sys_get_temp_dir() . DIRECTORY_SEPARATOR . "diskpartscript.txt";
        if (!file_exists($temp) && !is_file($temp)) {
            file_put_contents($temp, "select disk 0\ndetail disk");
        }
        $output = shell_exec("diskpart /s " . $temp);
        $lines = explode("\n", $output);
        $result = array_filter($lines, function ($line) {
            return stripos($line, "ID:") !== false;
        });
        if (count($result) > 0) {
            // array_shift() expects a variable, not a function result
            $values = array_values($result);
            $result = array_shift($values);
            $result = explode(":", $result);
            $result = trim(end($result));
        } else {
            $result = $output;
        }
    } else {
        // on Linux, use the filesystem UUIDs reported by blkid
        $result = shell_exec("blkid -o value -s UUID");
        if (stripos($result, "blkid") !== false) {
            // blkid not available: fall back to the host name
            $result = $_SERVER['HTTP_HOST'];
        }
    }
    return md5($salt . md5($result));
}

echo UniqueMachineID();
As per http://man7.org/linux/man-pages/man5/machine-id.5.html
$machineId = trim(shell_exec('cat /etc/machine-id 2>/dev/null'));
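For illustration, a minimal sketch of how that machine id could back a license check; the hash-file name, the salt, and the SHA-256 choice are all assumptions for this example:
function licenseMatchesMachine($hashFile = 'license.hash', $salt = 'my-app-salt') {
    $machineId = trim(shell_exec('cat /etc/machine-id 2>/dev/null'));
    if ($machineId === '') {
        return false; // no machine id (non-systemd system, shell_exec disabled, ...)
    }
    // compare the hash written at install time against the current machine
    $expected = trim((string) file_get_contents($hashFile));
    return hash_equals($expected, hash('sha256', $salt . $machineId));
}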
EDIT for Tito:
[ekerner#**** ~]$ ls -l /etc/machine-id
-r--r--r--. 1 root root 33 Jul 8 2016 /etc/machine-id
EDIT 2 for Tito: Some things to consider, and two scenarios:
Is the user allowed to get a new machine? I'd guess yes.
Or run on multiple devices?
It sounds like the machine could be irrelevant in your case?
If it's user-only (no machine restrictions), then I'd go for a licensing service (which relies on the network).
There are many services for this:
Google Play (for Android apps) is a good example: https://developer.android.com/google/play/licensing/index.html
MS and Apple have similar services.
Alternatively, just search the web for the terms "Software Licensing Service" or "Cloud Based Software Licensing Service".
If it's user + single device, then you'll need to pass the device id up to whatever service you use or build, then allow the machine id to be updated, but not allow a revert to a previous machine id (which would mean multiple devices).
That said, these services provide client code which should take care of that if it's a requirement.
Two scenarios from experience:
1: User on any device: we simply made an API in the cloud (on a website) and a login screen in the app. When the user logged in, the app authenticated via the API and kept a token, and whenever the device was connected to the net, the app would query the API and update the login and/or token.
You could alternatively put the login screen in the purchase flow (maybe they already logged into a site to purchase), generate a key, and pack it with or bind it into the app.
2: User plus machine:
Same thing, except that when the API is queried, the machine id is passed up. The machine ID can change as many times as the user updates their device, but we kept a record of machine ids and applied a ban rule: if we saw an old (previously used) machine id, a certain amount of time had to have passed. This allowed a user whose machine broke to pull out an old one.
Also consider, if you build this yourself: how will you stop the app from working? People are pretty clever; it would need to be compiled into the core.
That all being said, the various licensing services are pros at this and can cater to most needs. Plus, with their experience, they've already overcome the security pitfalls. I'd name one that I like, but it's yours to search out.
It would be nice if you came back with any positive or negative outcomes from your trials.
function getMachineId() {
    // hash several slow-changing host properties into one fingerprint
    $fingerprint = [php_uname(), disk_total_space('.'), filectime('/'), phpversion()];
    return hash('sha256', json_encode($fingerprint));
}
This will get a probably-unique id based on a hash of:
The server's OS, OS version, hostname, and architecture.
The total space (not free space) on the drive where the PHP script is.
The Unix timestamp of the creation time of the computer's root file system.
The currently installed PHP version.
Unlike the other answers, it doesn't depend on shell_exec() being enabled.
I'm a bit confused about how to get the region name and cannot find any documentation on it.
I have the database installed, which is 'GeoIP.dat', and 'geoip.inc' in the directory '...IP GeoLite\GeoLite', and I also have a PHP test page at '\IP GeoLite\find.php'.
The code inside the 'find.php' page is below; it didn't work:
<?php
/* Instead of having to determine the country of the client every time they
   visit the site, we are going to set a cookie so that any other script in
   PHP or JavaScript can use the region information.
   The user is also given a menu option to change the region and reset the
   cookie to a new value. Likewise, if it already exists we don't want to
   change it.
   We start off by checking that the cookie called Region exists.
   If it does, the job is nearly done and we simply set the $Region variable
   so that we can refresh the cookie at the end of the program by recreating it. */
if (isset($_COOKIE['Region'])) {
    $Region = $_COOKIE['Region'];
} else {
    /* Only if the cookie isn't set do we do the actions in the else part of
       the if, so this makes the whole thing efficient.
       To make use of the GeoLite code we have to load the include file: */
    $GeoPath = 'GeoLite/';
    include($GeoPath . 'geoip.inc');
}
$countrydata = GeoIP_region_name_by_code(gir->country_code, gir->region);
echo $countrydata;
?>
You must open the GeoIP binary data file first:
// Open the GeoIP binary data file
$geoIp = geoip_open($GeoPath.'GeoIP.dat', GEOIP_STANDARD);
Have a look at the documentation: http://www.maxmind.com/download/geoip/api/php.old/README
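Putting the pieces together, a sketch of the whole lookup might look like the following. Note the assumptions: it uses the GeoLite City database (GeoLiteCity.dat) together with geoipcity.inc and geoipregionvars.php from the same legacy API, because the country-only GeoIP.dat cannot resolve regions; check the API bundle you downloaded for the exact file names:
$GeoPath = 'GeoLite/';
include($GeoPath . 'geoipcity.inc');       // city/region API (assumed present)
include($GeoPath . 'geoipregionvars.php'); // $GEOIP_REGION_NAME lookup table

// open the binary data file first, as described above
$gi  = geoip_open($GeoPath . 'GeoLiteCity.dat', GEOIP_STANDARD);
$gir = geoip_record_by_addr($gi, $_SERVER['REMOTE_ADDR']);
geoip_close($gi);

if ($gir !== null) {
    // region names are keyed by country code, then region code
    $region = $GEOIP_REGION_NAME[$gir->country_code][$gir->region];
    setcookie('Region', $region, time() + 30 * 24 * 60 * 60); // refresh for 30 days
    echo $region;
}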
I have an app that uploads user files to S3. At the moment, the ACL for the folders and files is set to private.
I have created a db table (called docs) that stores the following info:
id
user_id
file_name (original file as specified by the user)
hash_name (random hash used to save the file on amazon)
So, when a user wants to download a file, I first check in the docs table that they have access to the file. I'd prefer not to have the file downloaded to my server first and then sent to the user; I'd like them to be able to grab the file directly from Amazon.
Is it OK to rely on a very very long hashname (making it basically impossible for anyone to randomly guess a filename)? In this case, I can set the ACL for each file to public-read.
Or, are there other options that I can use to serve the files whilst keeping them private?
Remember, once the link is out there, nothing prevents a user from sharing that link with others. Then again, nothing prevents the user from saving the file elsewhere and sharing a link to the copy of the file.
The best approach depends on your specific needs.
Option 1 - Time Limited Download URL
If applicable to your scenario, you can also create expiring (time-limited) custom links to the S3 contents. That would allow the user to download content for a limited amount of time, after which they would have to obtain a new link.
http://docs.amazonwebservices.com/AmazonS3/latest/dev/S3_QSAuth.html
Option 2 - Obfuscated URL
If avoiding routing the file through your web server matters more to you than the risk that a URL, however obscure, might be intentionally shared, then use the hard-to-guess link name. This allows a link to remain valid "forever", which also means the link can be shared "forever".
Option 3 - Download through your server
If you are concerned about the link being shared and certainly want users to authenticate through your website, then serve the content through your website after verifying user credentials.
This option also allows the link to remain valid "forever", but requires the user to log in (or perhaps just have an authentication cookie in the browser) to access it.
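As a sketch of Option 3, the download could be proxied roughly like this with the AWS SDK for PHP (v3). The bucket name and region are placeholders, and $doc stands for the row you already fetched from the docs table after verifying ownership:
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(array('region' => 'us-east-1', 'version' => 'latest'));

// 1. verify in the docs table that this user may access this doc (not shown)
// 2. then stream the private object straight to the client
$result = $s3->getObject(array(
    'Bucket' => 'my-private-bucket',   // hypothetical bucket name
    'Key'    => $doc['hash_name'],     // the random hash stored in `docs`
));

header('Content-Type: ' . $result['ContentType']);
header('Content-Disposition: attachment; filename="' . $doc['file_name'] . '"');
echo $result['Body'];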
I just want to post the PHP solution with code, in case anybody has the same problem.
Here's the code I used:
$aws_access_key_id = 'AKIAIOSFODNN7EXAMPLE';
$aws_secret_key = 'YourSecretKey12345';
$aws_bucket = 'bucket';
$file_path = 'directory/image.jpg';
$timeout = '+10 minutes';

// get the URL!
$url = get_public_url($aws_access_key_id, $aws_secret_key, $aws_bucket, $file_path, $timeout);
// print the URL!
echo($url);

function get_public_url($keyID, $s3Key, $bucket, $filepath, $timeout)
{
    $expires = strtotime($timeout);
    // use the function parameters here; the globals $aws_bucket and
    // $file_path are not in scope inside this function
    $stringToSign = "GET\n\n\n{$expires}\n/{$bucket}/{$filepath}";
    $signature = urlencode(hex2b64(hmacsha1($s3Key, utf8_encode($stringToSign))));
    $url = "https://{$bucket}.s3.amazonaws.com/{$filepath}?AWSAccessKeyId={$keyID}&Signature={$signature}&Expires={$expires}";
    return $url;
}

// standard HMAC-SHA1 built from the sha1 primitive
function hmacsha1($key, $data)
{
    $blocksize = 64;
    $hashfunc = 'sha1';
    if (strlen($key) > $blocksize) {
        $key = pack('H*', $hashfunc($key));
    }
    $key = str_pad($key, $blocksize, chr(0x00));
    $ipad = str_repeat(chr(0x36), $blocksize);
    $opad = str_repeat(chr(0x5c), $blocksize);
    $hmac = pack(
        'H*', $hashfunc(
            ($key ^ $opad) . pack(
                'H*', $hashfunc(
                    ($key ^ $ipad) . $data
                )
            )
        )
    );
    return bin2hex($hmac);
}

// convert a hex string to base64
function hex2b64($str)
{
    $raw = '';
    for ($i = 0; $i < strlen($str); $i += 2) {
        $raw .= chr(hexdec(substr($str, $i, 2)));
    }
    return base64_encode($raw);
}
I'm trying to use sessions to store the number of login attempts. When the maximum number of login attempts is reached, I store the client's IP address in a blacklist table.
Some things I've taken into account that you might need to know about:
I'm using session_regenerate_id(); after I set a session value.
I'm not using any cookies apart from the session, since that's not necessary (and it's not 2012 :p).
The user's IP stays blacklisted until I manually delete their row from the blacklist table.
SESSION_MAX_ATTEMPTS is a defined constant, set to 5.
The index.php?module=login&task=blacklist page just shows the user a message that they are blacklisted. This page does not have any functionality.
I'm using a custom-built PHP framework, so I had to translate some OOP method calls to simplified PHP code.
The following function is called before a login query is executed:
private function preventAttack()
{
    $blocked = getData("SELECT count(*) as blocked FROM blacklist WHERE ip = #Value0;", Array($_SERVER['REMOTE_ADDR']));
    if ($blocked[0]["blocked"] == "1")
    {
        redirect("index.php?module=login&task=blacklist");
    }

    $old = (int)$this->session->get("login_attempts");
    if (!empty($old))
    {
        if ($old > SESSION_MAX_ATTEMPTS)
        {
            setData("INSERT INTO blacklist SET ip = #Value0;", Array($_SERVER['REMOTE_ADDR']));
            redirect("index.php?module=login&task=blacklist");
        }
        else
        {
            $old++;
            $this->session->set("login_attempts", $old);
        }
    }
    else
    {
        $this->session->set("login_attempts", 0);
    }
}
The first if statement works, including both queries, but I'm stuck on the best way to store the number of attempts and the best way to increment it. Maybe you guys can set me in the right direction.
If you have any questions about my code, please add a comment. I know it's a bit unreadable since it comes from my framework; I've translated it a bit before posting.
Store the number of failed attempts in the database, not the session. N.B.: You probably need to keep each failure along with a timestamp in its own record (and ignore/delete anything older than a threshold).
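A rough sketch of that record-per-failure approach, assuming a PDO connection and a hypothetical login_failures table with ip and failed_at columns:
// record a failure, prune old records, and return the recent count for this IP
function recentFailures(PDO $pdo, $ip, $windowMinutes = 15) {
    $pdo->prepare("INSERT INTO login_failures (ip, failed_at) VALUES (?, NOW())")
        ->execute(array($ip));
    // ignore/delete anything older than the threshold
    $pdo->prepare("DELETE FROM login_failures WHERE failed_at < NOW() - INTERVAL ? MINUTE")
        ->execute(array($windowMinutes));
    $stmt = $pdo->prepare("SELECT COUNT(*) FROM login_failures WHERE ip = ?");
    $stmt->execute(array($ip));
    return (int) $stmt->fetchColumn();
}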
By the way, in response to deceze's comment:
Ginormous flaw in this approach: sessions depend on the client sending a cookie. Real attackers will simply not send the cookie back. ziiiing. You'll have to go by IP for everything.
The solution to this is that you don't accept login attempts that don't come with a valid session cookie, set elsewhere.
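As a minimal sketch of that idea (the token and field names are made up for the example): hand out a token when the login form is rendered, and refuse to even count attempts that don't echo it back:
session_start();

if ($_SERVER['REQUEST_METHOD'] === 'GET') {
    // form render: bind a one-time token to this session
    $_SESSION['login_token'] = bin2hex(openssl_random_pseudo_bytes(16));
    // embed $_SESSION['login_token'] as a hidden field in the form
} elseif (empty($_SESSION['login_token']) || empty($_POST['token'])
    || !hash_equals($_SESSION['login_token'], $_POST['token'])) {
    // no valid session-bound token: don't treat this as a login attempt at all
    header('HTTP/1.1 403 Forbidden');
    exit;
}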
Thanks a lot guys, I've learned a lot from your comments. For users with the same problem, I'll explain what I've learned and how I'm using that knowledge.
What I've learned so far:
Real attackers will simply not send a cookie back, so using cookies or sessions doesn't make any sense in this case.
If you want to blacklist attackers, you can use a firewall that does this automatically for you. It makes no sense to do this from your script: sessions are too easily circumvented, and if you check IP addresses you could block out entire offices or schools behind the same external IP.
Whatever you do: store the login-attempt data only on the server, in a database/flat file/etc. The user or attacker cannot edit that data so easily.
And if you do store data on your web server for performance, store it in a file, not a DB.
If I forgot something, please comment and I will edit this post.
I checked with my hosting provider, and they are already blocking a lot of these attackers using solutions like the firewall mentioned above, so I will stop trying to also do this in my scripts.
My script is fixed now and only blacklists users and noob hackers guessing passwords:
private function preventAttack()
{
    $blocked = getData("SELECT count(*) as blocked FROM blacklist WHERE ip = #Value0;", Array($_SERVER['REMOTE_ADDR']));
    if ($blocked[0]["blocked"] == "1")
    {
        redirect("index.php?module=login&task=blacklist");
    }

    $old = (int)$this->session->get("login_attempts");
    if ($old > 0)
    {
        if (($old + 1) >= SESSION_MAX_ATTEMPTS)
        {
            setData("INSERT INTO blacklist SET ip = #Value0;", Array($_SERVER['REMOTE_ADDR']));
            $this->session->set("login_attempts", 0);
            redirect("index.php?module=login&task=blacklist");
        }
        else
        {
            $old++;
            $this->session->set("login_attempts", $old);
        }
    }
    else
    {
        $this->session->set("login_attempts", 1);
    }
}
Since I'm not quite sure how many users this platform is going to get, and I don't want a possibly high MySQL server load, I've decided not to store the login attempts in the database. Maybe in the future, who knows.