Payment queuing system for PHP

I'm trying to figure out the best way to handle payment processing to prevent duplicate payment submissions. I'm using PHP (specifically CakePHP 2.3.8) with Balanced Payments to handle the payment processing.
I've noticed in my server logs that I've had multiple requests submitted within a second, usually for something related to WordPress or phpMyAdmin, such as:
ip.address.here - [08/Jul/2014:15:03:12 -0400] "GET / HTTP/1.1" 302 320 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)"
ip.address.here - [08/Jul/2014:15:03:12 -0400] "GET / HTTP/1.1" 302 320 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)"
ip.address.here - [08/Jul/2014:15:03:12 -0400] "GET / HTTP/1.1" 302 320 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)"
ip.address.here - [08/Jul/2014:15:03:13 -0400] "GET / HTTP/1.1" 302 320 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)"
I'm worried about someone trying something similar (accidental or not) but with a payment. What is the most effective way to handle a situation like the above, where multiple requests come in very quickly, regardless of whether it's a "hacker" or just a hiccup in the system? If I use a queuing system like this one, specifically for CakePHP, how would I keep track of previously processed entries in order to detect duplicate submissions?
Say I have a queue of 3 entries. While processing entry 1, would I check entries 2 and 3 to make sure they're not duplicates? If they are, would I just delete them?

In my experience, you should just save a session identifier that stays the same until the transaction has finished or has been canceled. Usually the payment gateway takes that session identifier to check that you're not duplicating the transaction.
Also, the payment gateway should detect duplicates (e.g. same card info and same amount in a short period of time) and will return an error back to you, letting you know what happened.
So, just remember to keep track of your unique ID and reset it when the transaction is completed. If you're keeping a shopping cart, and that expires, make the transaction ID expire at the same time. And finally, make sure to disable the submit button with JavaScript so users can't tap it more than once.
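As a rough, framework-agnostic sketch of that token idea (the field names, messages, and placement are only placeholders, not anything specific to CakePHP or Balanced Payments; the actual gateway call is omitted):

<?php
// Minimal sketch: a one-time token stored in the session guards against
// duplicate form submissions. All names here are illustrative.
session_start();

// When rendering the payment form: issue a one-time token and embed it
// as a hidden field, e.g. <input type="hidden" name="payment_token" value="...">
if (empty($_SESSION['payment_token'])) {
    $_SESSION['payment_token'] = md5(uniqid(mt_rand(), true));
}

// When the form is posted: accept the token exactly once.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $submitted = isset($_POST['payment_token']) ? $_POST['payment_token'] : '';

    if (empty($_SESSION['payment_token']) || $submitted !== $_SESSION['payment_token']) {
        // Token missing, mismatched, or already consumed: treat as a duplicate.
        header('HTTP/1.1 409 Conflict');
        exit('This payment appears to have already been submitted.');
    }

    // Consume the token *before* charging so a rapid re-post is rejected.
    unset($_SESSION['payment_token']);

    // ...call the payment gateway here, then redirect (POST/redirect/GET)
    // so a browser refresh cannot re-submit the form...
}

The same check works whether the duplicate comes from a double-clicked button, a browser retry, or a script hammering the endpoint, since only the first request carrying a valid token gets through.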

Related

Is there a reason to not put my admin directory in robots.txt?

This may have been asked and answered before; I'm not sure of the best way to phrase this.
I want to ensure that search spiders don't index the admin side of my website. Unfortunately, if I put the path into my robots.txt file, I'm handing over the cookie jar. Thankfully it's locked, though.
I've already had quite a few "visitors" who start by grabbing robots.txt. Obviously, non-legit spiders will ignore robots.txt, but I want to prevent Google and Bing from plastering my admin directory in search results.
My admin directory is not called "admin" (the most common security-by-obscurity tactic)
Directory browsing is already blocked
Any IP that connects to my admin directory without first logging in with appropriate permissions is blacklisted. I have been monitoring, and only a couple of legit spiders have been blacklisted in this manner
I'm using .htaccess (merging several public blacklists) and PHP blacklisting based on behaviors (some automatic, but still Mark-I eyeball as well)
All actions on the admin side are auth-based
The only links to the admin side are presented to authorized users with the appropriate permissions.
I'm not sure if I should put the admin directory in robots.txt. On one hand, legit spiders will ignore that directory; on the other, I'm telling those who want to do harm that the directory exists, and I don't want prying eyes...
I want to ensure that search spiders don't index the admin side of my website. Unfortunately, if I put the path into my robots.txt file, I'm handing over the cookie jar. Thankfully it's locked, though.
You rightly recognize the conundrum. If you put the admin URL in robots.txt, then well-behaved bots will stay away. On the other hand, you are basically telegraphing to bad folks where the soft spots are.
If you inspect your web server's access log, you will most likely see a LOT of requests for admin-type pages. For instance, looking at the Apache log on one of my servers, I see opportunistic script kiddies searching for WordPress, phpMyAdmin, etc.:
109.98.109.101 - - [24/Jan/2019:08:48:36 -0600] "GET /wpc.php HTTP/1.1" 404 229 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0)"
109.98.109.101 - - [24/Jan/2019:08:48:36 -0600] "GET /wpo.php HTTP/1.1" 404 229 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0)"
109.98.109.101 - - [24/Jan/2019:08:48:37 -0600] "GET /wp-config.php HTTP/1.1" 404 229 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0)"
109.98.109.101 - - [24/Jan/2019:08:48:43 -0600] "POST /wp-admins.php HTTP/1.1" 404 229 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
109.98.109.101 - - [24/Jan/2019:08:50:01 -0600] "GET /wp-content/plugins/portable-phpmyadmin/wp-pma-mod/index.php HTTP/1.1" 404 229 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.108 Safari/537.36"
109.98.109.101 - - [24/Jan/2019:08:48:39 -0600] "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 229 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0)"
109.98.109.101 - - [24/Jan/2019:08:48:39 -0600] "GET /phpmyadmin/scripts/db___.init.php HTTP/1.1" 404 229 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0)"
109.98.109.101 - - [24/Jan/2019:08:49:35 -0600] "GET /phpmyadmin/index.php HTTP/1.1" 404 229 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.108 Safari/537.36"
109.98.109.101 - - [24/Jan/2019:08:49:47 -0600] "GET /admin/phpmyadmin/index.php HTTP/1.1" 404 229 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.108 Safari/537.36"
109.98.109.101 - - [24/Jan/2019:08:49:47 -0600] "GET /admin/phpmyadmin2/index.php HTTP/1.1" 404 229 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.108 Safari/537.36"
My access log has thousands upon thousands of these. Bots search for them all the time, and none of these files are listed in my robots.txt file. As you might guess, unless your admin URL is really randomly named, the bad guys could very well guess that it is /admin.
I've already had quite a few "visitors" who start by grabbing robots.txt. Obviously, non-legit spiders will ignore robots.txt, but I want to prevent Google and Bing from plastering my admin directory in search results.
I'd strongly recommend spending some time banning bad bots, or basically any bots that you have no use for. AhrefsBot and SemrushBot come to mind. It shouldn't be too hard to find bad-bot lists, but you'll need to evaluate any list you find to make sure it isn't blocking bots you want to serve. In addition to adding an exclusion rule to your robots.txt file, you should probably configure your application to ban bad bots by sending a 403 Forbidden, 410 Gone, or other HTTP response code of your choice.
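A minimal PHP sketch of that kind of application-level ban, assuming a hand-maintained deny-list of User-Agent substrings (the entries shown are only examples, not a recommended list):

<?php
// Hedged sketch: refuse requests whose User-Agent matches a deny-list entry.
// Maintain your own list; these substrings are examples only.
$badBots = array('AhrefsBot', 'SemrushBot', 'MJ12bot');
$userAgent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
foreach ($badBots as $bot) {
    if (stripos($userAgent, $bot) !== false) {
        header('HTTP/1.1 403 Forbidden');
        exit; // stop before any application code runs
    }
}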
In the end, it's critical to remember the maxim that "security by obscurity is not security". One of the most important principles of cryptography and security is Kerckhoffs's principle -- i.e., "the enemy knows the system." Your site should not just rely on the location of your admin URLs being obscure or secret. You must require authentication and use sound best practices in your authentication code. I would not rely on Apache authentication but would instead code my web application to accept the user login/password in a securely hosted form (use HTTPS), and I would store only the hashed form of those passwords. Do not store cleartext passwords, ever.
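For the password-storage point, here is a minimal sketch using PHP's built-in hashing API (password_hash/password_verify, available since PHP 5.5); the sample password is obviously just a placeholder:

<?php
// Hedged sketch of hashed password storage with PHP's built-in API.
// $plaintext stands in for the password received over HTTPS from your login form.
$plaintext = 'correct horse battery staple';

// At registration time: store only the hash, never the cleartext.
$storedHash = password_hash($plaintext, PASSWORD_DEFAULT);

// At login time: compare the submitted password against the stored hash.
if (password_verify($plaintext, $storedHash)) {
    echo "authenticated\n";
} else {
    echo "rejected\n";
}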
In the end, the security of your system is only as good as its weakest link. There is some value in having a unique or unusual admin URL, because you might be exposed to fewer attacks, but this in itself doesn't provide any real security. If you still have reservations about broadcasting this URL in your robots.txt file, perhaps weigh that against the problems you might expect if Googlebot, Bingbot, or some other friendly bot starts stomping around in your admin URLs. Would it bother you if these URLs ended up in the Google search index?

Shared hosting at GoDaddy hacked: index.php and login.php automatically changed

Yesterday, many web applications that I host on GoDaddy shared hosting got defaced (hacked). The attackers changed index.php and login.php to the following source code:
Deface By black sQl
HACKED BY black sQl
WARNING!!!
Lets start to secure your website
But Remember This SECUIRITY IS AN ILLUSION!! BD Black Hackers Cyber Army black sql::!hR V1Ru5::3lack D4G0N::TLM-V1Ru5
I do not know how they did that, as it is just a login page: there is no usage of GET, the username and password are the only fields the user can input, and they are also sanitized before they enter any function.
I checked the raw access logs and found some suspicious entries. They are as follows:
46.118.158.19 - - [29/Sep/2017:06:27:29 -0700] "GET / HTTP/1.1" 200 522 "http://pochtovyi-index.ru/" "Opera/8.00 (Windows NT 5.1; U; en)"
188.163.72.15 - - [29/Sep/2017:06:48:37 -0700] "GET / HTTP/1.1" 200 522 "https://educontest.net/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)"
Can anybody help me figure out how to secure against this kind of intrusion?
It depends on how the attacker gained access.
Major steps to take into consideration are:
Restrict access to the server using .htaccess
Use PDO prepared statements to protect your apps from SQL injection (see the sketch after this list)
Secure your apps with CSRF tokens
etc. Check this all-in-one cheat sheet.
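A minimal sketch of the PDO point above; the DSN, credentials, and users table are placeholders for illustration only, not anything from the hacked site:

<?php
// Hedged sketch: a parameterized login query with PDO.
// The DSN, credentials, and `users` table are illustrative placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'dbuser', 'dbpass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// User input is bound as data, never concatenated into the SQL string,
// so a crafted username cannot alter the query.
$stmt = $pdo->prepare('SELECT id, password_hash FROM users WHERE username = :username');
$stmt->execute(array(':username' => isset($_POST['username']) ? $_POST['username'] : ''));
$user = $stmt->fetch(PDO::FETCH_ASSOC);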

404 Bot Attack on My Website (DDoS of Sorts)

Over the last few days I have noticed that my WordPress website had been running quite slowly, so I decided to investigate. After checking my database I saw that a table responsible for tracking 404 errors was over 1 GB in size. At this point it was evident I was being targeted by bots.
After checking my access log I could see that there was a pattern of sorts: the bots seemed to land on a legitimate page which listed my categories, then move into a category page, and at that point request seemingly random page numbers, many of which are non-existent pages, which is what causes the issue.
Example:
/watch-online/ - Landing Page
/category/evolution/page/7 - 404
/category/evolution/page/1
/category/evolution/page/3
/category/evolution/page/5 - 404
/category/evolution/page/8 - 404
/category/evolution/page/4 - 404
/category/evolution/page/2
/category/evolution/page/6 - 404
/category/evolution/page/9 - 404
/category/evolution/page/10 - 404
This is the actual order of requests, and they all happen within a second. At this point the IP becomes blocked because too many 404s have been thrown, but this seems to have no effect due to the sheer number of bots all doing the same thing.
Also, the category changes with each bot, so they are all attacking random categories and generating 404 pages.
At the moment there are 2,037 unique IPs which have thrown similar 404s in the last 24 hours.
I also use Cloudflare and have manually blocked many IPs from ever reaching my box, but this attack is relentless and it seems as though they keep generating new IPs. Here is a list of some offending IPs:
77.101.138.202
81.149.196.188
109.255.127.90
75.19.16.214
47.187.231.144
70.190.53.222
62.251.17.234
184.155.42.206
74.138.227.150
98.184.129.57
151.224.41.144
94.29.229.186
64.231.243.218
109.160.110.135
222.127.118.145
92.22.14.143
92.14.176.174
50.48.216.145
58.179.196.182
Other than automatically blocking IPs for too many 404 errors, I can think of no other real solution, and this in itself is quite ineffective due to the sheer number of IPs.
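Roughly, the blocking I already have in place amounts to something like this simplified sketch (the table names, time window, and threshold here are only illustrative):

<?php
// Simplified sketch of per-IP 404 throttling inside the 404 handler.
// Table and column names, the window, and the threshold are illustrative.
function record404AndMaybeBlock(PDO $pdo, $ip, $threshold = 20, $windowSeconds = 60) {
    // Record this 404.
    $stmt = $pdo->prepare('INSERT INTO error_404_log (ip, hit_at) VALUES (:ip, NOW())');
    $stmt->execute(array(':ip' => $ip));

    // Count recent 404s from this IP.
    $cutoff = date('Y-m-d H:i:s', time() - $windowSeconds);
    $stmt = $pdo->prepare('SELECT COUNT(*) FROM error_404_log WHERE ip = :ip AND hit_at > :cutoff');
    $stmt->execute(array(':ip' => $ip, ':cutoff' => $cutoff));

    if ((int) $stmt->fetchColumn() > $threshold) {
        // Over the limit: remember the IP so it can be refused early on the
        // next request (or handed off to the firewall / Cloudflare).
        $stmt = $pdo->prepare('INSERT IGNORE INTO blocked_ips (ip) VALUES (:ip)');
        $stmt->execute(array(':ip' => $ip));
    }
}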
Any suggestions on how to deal with this would be greatly appreciated, as there appears to be no end to this attack and my website's performance really is taking a hit.
Some User Agents Include:
Mozilla/5.0 (Windows NT 6.3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.86 Safari/537.36
Mozilla/5.0 (Windows NT 6.2; rv:26.0) Gecko/20100101 Firefox/26.0
Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 7.0; WOW64; Trident/6.0)
Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:22.0) Gecko/20100101 Firefox/22.0
Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36
If it's your personal website, you can try Cloudflare, which is free and can also provide protection against DDoS attacks. Maybe you can give it a try.
Okay, so after much searching, experimentation, and head-banging, I have finally mitigated the attack.
The solution was to install the Apache module mod_evasive; see:
https://www.digitalocean.com/community/tutorials/how-to-protect-against-dos-and-ddos-with-mod_evasive-for-apache-on-centos-7
So, for any other poor soul that gets slammed as severely as I did: have a look at that and get your thresholds finely tuned. This is a simple, cheap, and very effective means of drastically reducing the impact of any attack similar to the one I suffered.
My server is still getting bombarded by bots, but this really does limit the damage they can do.

Blocking HTTP POST attack via mod_rewrite

I have a WordPress site that is being attacked with the following HTTP POST requests:
x.x.x.x - - [15/Jul/2013:01:26:52 -0400] "POST /?CtrlFunc_stttttuuuuuuvvvvvwwwwwwxxxxxyy HTTP/1.1" 200 23304 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
x.x.x.x - - [15/Jul/2013:01:26:55 -0400] "POST / HTTP/1.1" 200 23304 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
The attack itself isn't bad enough to bring down Apache, but it does drive up the CPU usage more than I'd like it to. I would therefore like to block these using mod_rewrite -- straight to a 403 page should do it -- but I've not had any luck so far with anything I've tried. I would like to block all blank HTTP POST requests (to /) as well as /?CtrlFunc_*
What I've done as a workaround for now is block all HTTP POST traffic but that won't work long-term.
Any ideas? I've invested several hours on this and have not made much progress.
Thanks!
Instead of blocking the request via mod_rewrite, I'd use it as bait to record the IPs of the offenders. Then adding them to a 96-hour blacklist within your firewall will block all requests from them.
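For instance, here is a minimal PHP sketch that records these requests to a dedicated log file which a firewall script or Fail2ban jail could watch; the log path and URL patterns are only illustrative, and it would need to run before WordPress bootstraps:

<?php
// Hedged sketch: treat the offending POSTs as bait, log the source IP,
// and bail out before WordPress spends CPU on the request.
// The log path and patterns below are illustrative only.
$isPost = ($_SERVER['REQUEST_METHOD'] === 'POST');
$uri = $_SERVER['REQUEST_URI'];
$isBlankRootPost = ($uri === '/' && empty($_POST)); // "blank" POST to the site root
$isCtrlFunc = (strpos($uri, '/?CtrlFunc_') === 0);

if ($isPost && ($isBlankRootPost || $isCtrlFunc)) {
    $line = date('c') . ' ' . $_SERVER['REMOTE_ADDR'] . ' "' . $uri . '"' . "\n";
    file_put_contents('/var/log/wp-post-bait.log', $line, FILE_APPEND | LOCK_EX);

    header('HTTP/1.1 403 Forbidden');
    exit;
}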
See Fail2ban.
Specifically, I believe that Fail2ban filters are the right place to start looking to write your url-specific case.
http://www.fail2ban.org/wiki/index.php/MANUAL_0_8#Filters
http://www.fail2ban.org/wiki/index.php/HOWTO_apache_myadmin_filter
Here is a Fail2ban blog post that creates a filter for this POST attack.

PHP inconsistently generates a 500 server error when looping through an array

I was using the following code to retrieve HTML snippets from a database table and display them. It worked fine on my old web host, but after moving to a new web host I started getting (rather unhelpful) 500 Internal Server errors. Both hosts use PHP 5.2.x.
$query = "SELECT id, html FROM $tableName ORDER BY id DESC LIMIT 0, 300";
$result = mysql_query($query);
while ($row = mysql_fetch_assoc($result)) {
    $myArray[] = $row;
}
foreach ($myArray as $m) {
    echo $m['html'];
}
By selectively commenting out code, I narrowed the problem down to the foreach loop. I eventually found that I could get the page to display (with no 500 error) if I chopped off some of the items (see $offset below). Sometimes I have to use an offset of 50, sometimes 100 or more.
$counter = 0;
$offset = 100;
$limit = count($myArray) - $offset;
while ($counter < $limit) {
    echo $myArray[$counter]['html'];
    ++$counter;
}
This made me think there was something wrong with the individual HTML snippets. So I adjusted the offset one by one until I found the offending row (i.e., $offset=23 worked, but $offset=22 didn't, therefore row #23 is the culprit). I looked at that row's HTML and it is perfectly fine. Not only that, but earlier in the day my script had even displayed that particular HTML snippet with no issues (this table periodically has new HTML inserted, and I'm just viewing the most recent 300 rows).
I also tried adding some basic checks before echo-ing, but it had no effect:
while ($counter < $limit) {
    if ($myArray[$counter]['html'] != false && !empty($myArray[$counter]['html'])) {
        echo $myArray[$counter]['html'];
    }
    ++$counter;
}
Any ideas why echo and/or the loop is failing? How can I see useful errors instead of a 500 server error? I have PHP display_errors turned on and I can see errors from other parts of the script when I intentionally force them (both on the page and in the error log file).
Update: Apache access log
Okay, I went to it first and manually set $offset to 200 (see the ?o parameter in the URL), which I knew would let the page display properly. The result:
my.ip.add.ress - p [18/Jun/2010:13:27:36 -0400] "GET /test2/index.php HTTP/1.1" 200 602778 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729)"
my.ip.add.ress - p [18/Jun/2010:13:27:47 -0400] "GET /test2/index.php?o=200 HTTP/1.1" 200 418127 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729)"
Then I forced $offset to 1 (which would generate the 500 error) and I got this:
my.ip.add.ress - - [18/Jun/2010:13:31:06 -0400] "GET /test2/index.php?o=1 HTTP/1.1" 404 - "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729)"
my.ip.add.ress - p [18/Jun/2010:13:30:59 -0400] "GET /test2/index.php HTTP/1.1" 200 602731 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729)"
It says 404; however, the page displayed in the browser says two things: the <title> is 500 Internal Server Error, and the page body repeats this but then also mentions the 404 because it couldn't find the 500-error HTML page (which I haven't set up yet).
(Sorry, ideally this would have just been a comment, sans speculation, if I were able to post one.)
I notice based on your access log that your script's output is fairly large (602778 bytes formed from what I believe is only 200 of your original 300 records), and after performing a quick test for myself, the script could potentially consume several megabytes of memory when it gets called. That's certainly nothing outrageous, but if your new host is super-stingy about their setting of memory_limit in php.ini, exceeding that value would trigger a fatal error in PHP, which could in turn generate the internal server error.
Admittedly, this scenario seems very unlikely to me, but it would explain why the problem manifested when you changed to a new host and why it 'magically' appeared after working fine earlier, provided the so-called problem HTML came from a slightly smaller result set when you viewed it before.
In any case, imaginative guessing on my part aside, you should have access to your php_error.log file, which would show if PHP was causing any sort of fatal error, and that would help debug your problem further.
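For what it's worth, here is a minimal sketch of forcing PHP to record what actually killed the script; the log path is an arbitrary choice, and some shared hosts lock these ini settings down:

<?php
// Hedged sketch: surface fatal errors (including memory exhaustion) instead
// of a bare 500 page. The log path is arbitrary; adjust for your host.
error_reporting(E_ALL);
ini_set('display_errors', '1');      // show errors in the browser while debugging
ini_set('log_errors', '1');
ini_set('error_log', dirname(__FILE__) . '/php_debug.log');

// Fatal errors bypass normal error handlers, but a shutdown function can
// still record the last error and the peak memory the script used.
function logLastErrorOnShutdown() {
    $err = error_get_last();
    if ($err !== null) {
        error_log('Shutdown error: ' . $err['message'] . ' in ' . $err['file'] . ':' . $err['line']);
    }
    error_log('Peak memory: ' . memory_get_peak_usage() . ' bytes');
}
register_shutdown_function('logLastErrorOnShutdown');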
