I am aware that $_SERVER['HTTP_REFERER'] can be used to detect where a request comes from; however, it turns out that it can be spoofed, and some browsers simply don't send it as part of the request.
Could anyone suggest a better way to detect, or rather to allow, requests from my own domain only, in order to prevent spoofing, DoS, and other security attacks?
I could be wrong, but I think you are referring to a CSRF attack. The first tutorial I found on the subject is this one.
As @afuzzyllama pointed out, a DoS attack consists of sending more traffic than your server or network connection can handle. In such a case, your PHP script will not be reachable anymore, so you cannot implement DoS protection inside your PHP application. This must be done by your network administrator or hosting company.
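In case it helps, here is a minimal CSRF-token sketch; the field and session key names are made up for this example, and it assumes PHP 5.6+ for hash_equals():

<?php
session_start();

// Generate a per-session token once.
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(openssl_random_pseudo_bytes(32));
}

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // Reject the request unless the submitted token matches the session token.
    $sent = isset($_POST['csrf_token']) ? $_POST['csrf_token'] : '';
    if (!hash_equals($_SESSION['csrf_token'], $sent)) {
        header('HTTP/1.1 403 Forbidden');
        exit('Invalid CSRF token');
    }
    // ... process the form ...
}
?>
<form method="post" action="">
    <input type="hidden" name="csrf_token"
           value="<?php echo htmlspecialchars($_SESSION['csrf_token']); ?>">
    <!-- form fields -->
    <input type="submit" value="Send">
</form>

A token tied to the session has to be present in every state-changing request, so a forged cross-site request (which cannot read the token) is rejected.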
I am about to implement code that will make a cURL request to a URL to check whether it returns a 40x or 50x page. My worry is what happens when a user enters a malicious URL. How safely can the PHP cURL library handle this? Is PHP cURL exposed to the same kinds of exploits as a web browser?
Like any other software, curl is not perfect and has its own bugs and security risks. But since cURL doesn't interpret the results, i.e. doesn't parse HTML or JavaScript, the attack surface is much smaller. In short, it's much safer than a web browser. Most of the security risks come from careless programming that exposes the server running curl, rather than from harm to end users.
There is a small risk that somebody could supply a URL specifically engineered to exploit cURL and, through it, your site. There is also the unlikely risk that your mechanism could be used to harm other people, for instance to flood someone else's server by forcing the check of a specific page many times. But these are very small risks and are mainly the domain of system administrators, who can monitor resource usage across the whole system and catch bad behaviour.
As a programmer you should probably check the obvious: the request shouldn't run for too long, the URL shouldn't be overly long, and so on. But you can't really do much else.
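A rough sketch of those checks (the limits below are examples, not recommendations):

<?php
// Rough sketch: check whether a user-supplied URL returns a 4xx/5xx status.
function checkUrlStatus($url)
{
    if (strlen($url) > 2048) {                 // reject absurdly long URLs
        return false;
    }
    $ch = curl_init($url);
    curl_setopt_array($ch, array(
        CURLOPT_NOBODY         => true,        // HEAD-style request, don't download the body
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_MAXREDIRS      => 3,           // don't follow endless redirect chains
        CURLOPT_CONNECTTIMEOUT => 5,           // seconds to establish the connection
        CURLOPT_TIMEOUT        => 10,          // total time limit for the request
        CURLOPT_PROTOCOLS      => CURLPROTO_HTTP | CURLPROTO_HTTPS, // no file://, gopher://, ...
        CURLOPT_RETURNTRANSFER => true,
    ));
    curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    return $status;                            // e.g. 200, 404, 500, or 0 on failure
}

Restricting the allowed protocols also stops tricks like file:// URLs being fed into the checker.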
I have found many posts about checking browser SNI support, but none that combine it with a check of the user's system time.
The default landing for my site is http. Now I have set up SSL. But compatibility is most important to me. Security is least important. I only want to auto-redirect users to https if I'm damn sure they won't see an error (and then falsely think my site's broken).
Since the landing page is HTTP, we can use PHP or .htaccess to detect support. I have seen discussion elsewhere about PHP redirects possibly causing "Back" button issues. (Force SSL and handle browsers without SNI support)
Besides SNI support, I also know that if a user has a wrongly configured system time, they may get a certificate error on the https site.
So what's the best approach to check both SNI support and the system time? Are there any other scenarios where users redirected to https may encounter errors?
=====================================================================
Update:
Loading the home page doesn't really require "security". I'm wondering whether I can do a quick check to see if the user can successfully load an object, e.g. https://example.com/test.ico; if yes, then show a "Secure Login" option (kudos to Steffen). The POST action will then be done over https so credentials are not submitted in the clear.
If the https test fails, then the user has no choice but to log in without https.
Will this work?
The additional test would definitely be a drag on page load speed, wouldn't it?
There is no way to detect this on the server side with PHP or .htaccess: if a connection fails due to missing SNI or a wrong system time, it fails already during the SSL handshake. PHP and .htaccess only come into play for the HTTP part, i.e. only after the SSL handshake has completed successfully.
What you might try instead is to include an https resource in your landing page and see if it gets loaded successfully. This can be done with an image, CSS, XHR or similar. For example, you could do this:
<img src="https://test-ssl.example.com"
     onload="redirect_to_https();"
     onerror="notify_user_about_problem();" />
If the SSL connection to your site is established successfully, the onload handler runs and the user gets redirected to the https site. If the handshake fails instead, the user is notified about the problem.
Note that you cannot distinguish between different kinds of errors this way, i.e. you don't know if the problem is caused by the wrong system time or missing SNI support. If you want to do that, you would need to include a non-SNI resource in the same way and see whether it loads successfully.
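To connect this to the "Secure Login" idea from the update, one possible wiring (purely a sketch; the cookie name, test host and login URLs are invented) is to let the handlers store the result in a cookie so that PHP can decide on a later request whether to show the secure login link:

<?php
// Sketch only: cookie name, test host and URLs are invented for illustration.
$httpsOk = isset($_COOKIE['https_ok']) && $_COOKIE['https_ok'] === '1';
?>
<!-- Probe an https resource; the handlers record whether it loaded. -->
<img src="https://test-ssl.example.com/test.ico" style="display:none" alt=""
     onload="document.cookie = 'https_ok=1; path=/';"
     onerror="document.cookie = 'https_ok=0; path=/';">

<?php if ($httpsOk): ?>
    <a href="https://example.com/login">Secure Login</a>
<?php else: ?>
    <a href="http://example.com/login">Login</a>
<?php endif; ?>

On the very first visit the cookie is not set yet, so the page would fall back to the plain link until the probe has run once.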
But compatibility is most important to me. Security is least important.
I consider this a bad but unfortunately very common approach. Instead of letting the user reach all functionality of your site over http, I would recommend restricting the sensitive parts to https. You could still use this detection approach to inform the user about the problem in detail and to offer alternatives, instead of just leaving them with a strange connection error from the failed SSL handshake.
This is not the answer you are looking for, and I was going to leave it as a comment for that reason, but it got too long, so I decided to answer instead:
While it's admirable to try to support all your users no matter what antiquated tech or bad setup they have, at some point you've got to weigh the effort you're putting in against the reward of a few extra hits.
Managing dual http and https is an administrative nightmare that just puts your https-capable users (the vast majority on nearly all sites) at needless risk due to downgrade attacks, the inability to make cookies secure, accidentally including insecure content, etc. From an SEO perspective you also basically have two sites, which is difficult to manage.
IMHO, if you feel this strongly about non-SNI users, then pay for a dedicated IP; if not, move to HTTPS only.
compatibility is most important to me. Security is least important
Then stick to HTTP and don't even consider HTTPS.
Finally, you've got to remember that browsers will force you to move on soon enough. Requiring SHA-2 certificates, for example (which are not supported by very old browsers, similar to SNI), means that you will eventually have to call it a day on older browsers. So any answer you come up with here will only be short-lived. And that's assuming you can come up with a solution that works across all browsers (no small ask in itself!).
I am working on a PHP sandbox for a Web Application Honeypot. The PHP sandbox will analyze a PHP file that may have been injected as part of an RFI attack. It should run the file in a safe environment and return the result, embedding the output of the PHP script. We hope to fool the attacker into believing that this is a genuine response and thus continue with the next step of his attack.
In order to build the sandbox, we used the Advanced PHP Debugger (APD). Using rename_function and override_function, dangerous PHP functions have been rewritten. Some functions, such as exec and disk_free_space, have been rewritten to send back fake replies. All the other functions simply return nothing. Here's the complete list of the functions that have been considered.
Also, the input script is run only for a maximum of 10 seconds in the sandbox. After that, the entire sandbox process gets killed.
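For illustration, the overrides look roughly like this (the fake output below is invented for this example):

<?php
// Sketch only (requires the APD extension); the fake values are made up.

// exec() now returns a canned reply instead of running anything; the original
// is stashed by APD as __overridden__ and renamed so it can still be reached.
override_function('exec', '$cmd',
    'return "uid=33(www-data) gid=33(www-data) groups=33(www-data)";');
rename_function('__overridden__', 'orig_exec');

// Functions that should simply do nothing are overridden the same way.
override_function('shell_exec', '$cmd', 'return null;');
rename_function('__overridden__', 'orig_shell_exec');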
Is this list good enough? Does this make the sandbox secure enough to be made part of the web app?
Besides blocking function calls like this, are there any more security measures that should be taken?
In the end, this is a honeypot, so we would like our reply to be as close as possible to a real one. By blocking DNS function calls like dns_check_record and gethostbyname, are we restricting the scope of execution of the script unnecessarily? (I am not sure why they are on the list in the first place.)
In short, I would like to know what elements I should add/delete from the list.
Any other suggestions/advice on how to go about this will be highly appreciated.
I think it's very hard, if not impossible, to foresee all the possible harmful function calls in order to fake their output (for example, highlight_file and its alias show_source are not on your list). Besides, using the same server for both the real app and the honeypot raises other concerns: does the app use extensions? If it does, many more functions have to be blocked or faked. What if you update one of those extensions? You'll have to recheck for new security holes. Also, what if a malicious file is uploaded to the honeypot and then accessed from the main app? Sure, you will take measures to prevent that, but if you have a bug at some point, the harmful code will already be on the server... it doesn't look safe to me.
I think it would be better to set up a VM, as MitMaro suggested. In that case, the VM itself is about as good a sandbox as you can get, and without much effort you can let all those nasty PHP functions execute inside the VM without compromising the security of the main app.
Let's say I have a website where
PHP 5.3 is installed
every output is htmlspecialchars()ed.
PDO and prepared statements are the only way to interact with the database
error_reporting() is off
every request is passed to index.php (front controller) and no direct file access is allowed except for index.php via .htaccess
every input is properly escaped (why should I? I use prepared statements; how could user input mess up my code?)
there's no use of eval()
Is it considered safe? What other things could be fixed to improve security? How could you attack it or hack it? Is it possible to improve security further on the PHP/server side?
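To make points 2 and 3 concrete, a typical request in the app looks roughly like this (table and column names are made up):

<?php
// Rough illustration of points 2 and 3 above.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', array(
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
));

// Input reaches the query only as a bound parameter, never by concatenation.
$stmt = $pdo->prepare('SELECT title, body FROM posts WHERE id = :id');
$stmt->execute(array(':id' => $_GET['id']));
$post = $stmt->fetch(PDO::FETCH_ASSOC);

// Output is escaped before it is echoed into the page.
echo '<h1>' . htmlspecialchars($post['title'], ENT_QUOTES, 'UTF-8') . '</h1>';
echo '<p>' . htmlspecialchars($post['body'], ENT_QUOTES, 'UTF-8') . '</p>';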
Check this page: PHP Security Guide. Most attacks are documented there. If you're still hacked after implementing these security checks, chances are high that the problem doesn't come from your PHP application.
By the way, as @Jacco stated, there is some incorrect advice in the article I linked to:
Use prepared statements instead of mysql_real_escape_string(), but you already did that.
For salting, follow this answer instead: https://stackoverflow.com/a/401684/851498
Finally, checking the upload's ['type'] value is unsafe, since a malicious user can set it to anything. Instead, see the solution suggested at this link: http://www.acunetix.com/websitesecurity/upload-forms-threat.htm
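The idea is to inspect the file itself rather than trusting the client-supplied type. A rough sketch using the fileinfo extension (the whitelist and upload path are just examples):

<?php
// Sketch: validate an upload by inspecting the file, not the client-sent 'type'.
$allowed = array('image/png' => 'png', 'image/jpeg' => 'jpg', 'image/gif' => 'gif');

$finfo = finfo_open(FILEINFO_MIME_TYPE);
$mime  = finfo_file($finfo, $_FILES['upload']['tmp_name']);
finfo_close($finfo);

if (!isset($allowed[$mime])) {
    exit('File type not allowed');
}

// Store under a generated name so the original filename never touches the disk.
$target = '/var/uploads/' . bin2hex(openssl_random_pseudo_bytes(16)) . '.' . $allowed[$mime];
move_uploaded_file($_FILES['upload']['tmp_name'], $target);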
I remember when I started web development, I read a lot about sanitizing data, creating numerous MySQL users with a subset of permissions for specific queries, etc.
It gets you in the mindset of treating security with code, not with the operating system.
What use is all of this if you connect to your console over telnet, or authenticate over plain FTP?
I guess I should cut to the point. Modern open-source technologies such as PHP and MySQL have built up a lot of security features, which gave me a false sense of security.
The damage you can do through these technologies is negligible compared to someone brute-forcing their way into your console. If I were you, I would worry much more about getting a proper firewall and only allowing port 80, or the bare minimum of ports you need. If you enable console access, I would only allow your desktop's IP, etc.
And make sure that if you ever send a password, it is encrypted via SSL.
There is no absolute security guarantee; you can add the following to the answers above:
If you allow file uploads, make sure you do mime checking;
Make sure the public cannot upload an unlimited amount of files that could overload and eventually kill your server (see the sketch after this list);
If you own the server, make sure there are no other weak gates into your site; you can spend millions making your site bulletproof to any type of attack, but if someone gains access to it through another website hosted on the same server, you're out of luck;
Use a vulnerability scanner like Acunetix or skipfish;
If you own the server make sure you stay up to date with the versions of the software running on your server (PHP/Apache/MySQL). Subscribe to get updates from the vendors;
If the budget allows it, you could offer a bounty to someone to find a security hole in a DEV release of your code;
Use a product like the following: https://www.cloudflare.com/features-security
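As a rough illustration of the upload-limit point above (the limits and session handling are just examples, not a complete solution):

<?php
// Sketch: cap the size and rate of uploads so one visitor can't flood the server.
session_start();

$maxUploadsPerHour = 20;
$maxBytes          = 5 * 1024 * 1024; // 5 MB per file

if ($_FILES['upload']['size'] > $maxBytes) {
    exit('File too large');
}

// Keep only the timestamps of uploads made in the last hour for this session.
$now = time();
$_SESSION['uploads'] = array_filter(
    isset($_SESSION['uploads']) ? $_SESSION['uploads'] : array(),
    function ($t) use ($now) { return $t > $now - 3600; }
);

if (count($_SESSION['uploads']) >= $maxUploadsPerHour) {
    exit('Upload limit reached, try again later');
}

$_SESSION['uploads'][] = $now;
// ... proceed with the MIME check and move_uploaded_file() ...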
Security is a major concern for any product, and it cannot be achieved with a handful of checklist policies, although those are important. Everywhere in your code, think about what could go wrong and work to prevent it.
Other things you have to do:
store sensitive data in encrypted form in the database;
sanitize every piece of user input against XSS.
It is important to note that "safe" is a relative, context-dependent term. It highly depends on your needs, and there are companies out there (I'm looking at you, Google) who will not even consider installing PHP at all.
If you are working at a big company, I would recommend hiring the services of professionals. I heard from a friend that this company does security checkups for many big companies, which seems plausible since they are the people who distribute Kali Linux.
https://www.offensive-security.com/offensive-security-solutions/penetration-testing-services/
There can be multiple other issues as well, such as session problems, sensitive information enumeration, authorization and authentication issues, and lots more. Issues like business-logic bypasses cannot be resolved by traditional secure coding guidelines. However, looking at the PHP Security Cheat Sheet and the OWASP PHP Security Project would be a great help in understanding the big picture of security issues.
You can learn more about exploiting PHP security issues and related attack techniques by solving the PHP security challenges by RIPSTech (https://www.ripstech.com/php-security-calendar-2017/) or by reading their writeups of real-world vulnerabilities found in popular PHP apps (https://www.ripstech.com/security-vulnerability-database/).
While installing an application onto a client's server, I would like to make sure that the client (or a future developer for them, etc) does not copy my application and place it on other domains/servers/local servers.
How can I verify that my application is running on the server I installed it on? I do not want any substantial lag in the script every time it runs, so I assume a 'handshake' method is not appropriate.
I was thinking the script could request a PHP page on my own server every time it runs. This could send my server their server info and domain name, which my script can check against a database of accepted clients. If the request is invalid, my server handles the work of emailing me the details so I can follow it up. This should not slow down the client's script as it isn't expecting a response, and will still operate on their 'invalid' server until I can investigate this and follow it up with them personally.
If this is the best method (or if there is a better one), what PHP call should I be making to request my server's script? file_get_contents, cURL and similar seem to always retrieve the response, which I don't need.
UPDATE
Thank you all for your responses. I completely understand that PHP is open source and should be freely available to edit. I should have stated more clearly initially, but my intentions were for this verification method to assist me in finding anyone breaching my license agreement. The application is covered under a license, but I would also like to include this check so that I can monitor an initial misuse of my application.
Hence, somebody may still breach my license and it could go unnoticed, but if I implement this check I at least have an advantage over any 'lazy robbers' who don't pull my application apart and remove the verifier before ripping it off.
Does this justify the use of such a script? If so, is cURL my best option?
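Roughly what I have in mind, assuming cURL is the right tool (the endpoint URL and payload are placeholders):

<?php
// Sketch of a "fire and forget" phone-home; the response is ignored and the
// very short timeouts keep the client's page from being held up noticeably.
function phoneHome()
{
    $ch = curl_init('https://licensing.example.com/check.php');
    curl_setopt_array($ch, array(
        CURLOPT_POST              => true,
        CURLOPT_POSTFIELDS        => http_build_query(array(
            'domain' => $_SERVER['HTTP_HOST'],
            'ip'     => $_SERVER['SERVER_ADDR'],
        )),
        CURLOPT_RETURNTRANSFER    => true,
        CURLOPT_CONNECTTIMEOUT_MS => 500,   // give up quickly if my server is slow
        CURLOPT_TIMEOUT_MS        => 1000,  // never block the client's page for long
    ));
    curl_exec($ch);   // response is ignored; the licensing server does the logging
    curl_close($ch);
}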
Any verification check in your code is easily replaced with a return true;. Look at the FAQ at https://stackoverflow.com/tags/php/info:
Q. Can I protect my PHP code from theft? If so, how?
A. There is no effective technical solution to protect, encode or encrypt PHP source code. There are many products that offer some levels of protection, but all can be broken with time and effort. Your best option is not a technical solution, but a legal solution in the form of a license agreement.
You get a legal agreement and sue everyone.
SaaS is your friend. Host the application on your own secure servers, and charge a license fee for your customers to access it.
IMO it's worth checking out some Joomla extensions that do this. There are a few different implementations: some check and validate the domain before executing, and most are encoded on top of the domain validation. I remember Sakic's SEF URLs extension used to do this, and quite a few other commercial extensions do the same thing. Apart from that, I can't think of another way. Probably another good idea is to have a good license in place and a good lawyer...
Short answer: This can't be done.
Long answer: Whatever protection you put in your code, it can be removed with little difficulty by anyone with some experience in PHP. Even if the code is encoded with something like ionCube or Zend Guard, this too can be decoded with relative ease.
Your only option is to protect your intellectual property by actively pursuing copyright infringers. Even this is not foolproof, as the folks at the RIAA and MPAA know very well. In this day and age, I'd say this is not a solvable problem.
You could integrate phone-home behavior into your software but you should probably consult a lawyer to discuss privacy issues about that and to work out privacy guidelines and terms of use for your clients' usage license.
One thing to be careful about is the data you send (and the way you send it, i.e. securely encrypted or not) to identify the client who is illegally using your product because it could potentially be used to compromise your client's infrastructure or for spying on your client.
Regarding your phone-home function, be warned that the client could just locate and remove it, so using a PHP obfuscator or compiler might provide some additional protection against this (though any sufficiently determined PHP developer could probably disable it). Note that your protection will only act as a deterrent, aimed at making the cost of circumvention approach or exceed the cost of legal use.
EDIT:
As poke wrote in the question comments, you could move parts of your code out of the software installed at your client's site and onto your own servers, but this may backfire when your servers are unreachable for some reason (e.g. for maintenance).
In the end, I think that customer satisfaction should be valued higher than protecting your software from the customer, i.e. try to avoid protections that are likely to make your customers angry.
You could encode it and hard-code a license file that allows it to work only on the domain it was intended for (e.g. use ionCube or Zend Guard to encode a file that checks whether the HTTP host is the intended domain, without doing a handshake). You could then require that file in all other files (assuming everything is encoded).
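A minimal sketch of such a check (the domain list is a placeholder; as the other answers note, this only deters casual copying unless the file itself is encoded):

<?php
// Minimal domain-lock sketch; only a deterrent unless this file is encoded.
$allowedHosts = array('www.client-domain.example', 'client-domain.example');

$host = isset($_SERVER['HTTP_HOST']) ? strtolower($_SERVER['HTTP_HOST']) : '';

if (!in_array($host, $allowedHosts, true)) {
    exit('This copy of the application is not licensed for ' . htmlspecialchars($host));
}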