I am about to implement code that will make a cURL request to a URL to check whether it returns a 40x or 50x page, but my worry is what happens when my user enters a malicious URL. How safely can the PHP cURL library handle this? Will PHP cURL run into the same kinds of exploits as a web browser?
Like any other software, curl is not perfect and has its own bugs and security risks. But since curl doesn't interpret the results, i.e. it doesn't parse HTML or JavaScript, the attack surface is much smaller. In short, it's much safer than a web browser. Most of the security risks stem from bad programming that exposes the server running curl, rather than affecting end users.
There is a small risk that somebody could supply a URL specifically engineered to exploit curl, and through it your site. There is also the improbable risk that your mechanism could be used to harm other people; for instance, it could be used to flood someone else's site by forcing the check of a specific page over and over. But these are very small risks, and they are mainly the domain of system administrators, who can monitor resource usage across the whole system and catch bad behaviour.
As a programmer you should check the obvious: the request shouldn't be allowed to run for a long time, the URL shouldn't be too long, and so on. But you can't really do much else.
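For illustration, here is a minimal sketch of such a status check with basic limits applied. The helper name check_url_status and the specific limits are my own assumptions, not anything prescribed above:

    <?php
    // Minimal sketch of a status-only check with basic abuse limits.
    // The function name and all limits are illustrative, not prescriptive.
    function check_url_status(string $url): ?int
    {
        // Reject overly long URLs and anything that isn't plain HTTP(S).
        $scheme = strtolower((string) parse_url($url, PHP_URL_SCHEME));
        if (strlen($url) > 2048 || !in_array($scheme, ['http', 'https'], true)) {
            return null;
        }

        $ch = curl_init($url);
        curl_setopt_array($ch, [
            CURLOPT_NOBODY         => true,   // HEAD-style request; we only need the status
            CURLOPT_RETURNTRANSFER => true,   // don't echo anything to our own client
            CURLOPT_FOLLOWLOCATION => false,  // don't wander off to redirect targets
            CURLOPT_CONNECTTIMEOUT => 5,      // seconds allowed to establish the connection
            CURLOPT_TIMEOUT        => 10,     // hard cap on the whole request
            CURLOPT_PROTOCOLS      => CURLPROTO_HTTP | CURLPROTO_HTTPS, // no file://, gopher://, etc.
        ]);
        curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        return $status > 0 ? $status : null;  // null when the request never completed
    }

    $status = check_url_status('https://example.com/');
    if ($status === null) {
        echo "Request failed or URL rejected\n";
    } else {
        echo $status >= 400 ? "Error page ($status)\n" : "Looks fine ($status)\n";
    }

The CURLOPT_PROTOCOLS restriction is worth calling out: without it, a malicious user could hand you a file:// or other non-HTTP URL.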
Related
I was thinking about web security, and then this thought popped into my head.
Say that there's this jerk who hates me and knows how to program, and I am managing a nice website/blog with a considerable amount of traffic. Then that jerk creates a program that automatically requests my website over and over again.
So if I am hosting my website on a shared hosting provider, then obviously my website will stop responding.
This type of attack may not be common, but if someone attempts something like that on my website, I must do something about it. I don't think that popular CMSs like WordPress or Drupal do anything about this type of attack.
My assumption is:
If a user requests more than x times (let's say 50) in one minute, block that user (stop responding).
My questions are:
Is my assumption OK? If not, what should I do about it?
Do websites like Google, Facebook, YouTube, etc. do something about this type of attack?
What you are facing is a DoS (Denial of Service) attack, where one system keeps sending packets to your web server and makes it unresponsive.
You mentioned a single jerk; what if the same jerk has many friends? Then you get a DDoS (Distributed DoS) attack. Well, this can't really be prevented.
A quick fix from the Apache docs, for DoS but not for DDoS:
All network servers can be subject to denial of service attacks that attempt to prevent responses to clients by tying up the resources of the server. It is not possible to prevent such attacks entirely, but you can do certain things to mitigate the problems that they create. Often the most effective anti-DoS tool will be a firewall or other operating-system configurations. For example, most firewalls can be configured to restrict the number of simultaneous connections from any individual IP address or network, thus preventing a range of simple attacks. Of course this is no help against Distributed Denial of Service attacks (DDoS).
Source
The issue is partly one of rejecting bad traffic, and partly one of improving the performance of your own code.
Being hit with excess traffic by malicious intent is called a Denial of Service attack. The idea is to hit the site with enough traffic that the server can't cope with the load and stops responding, so no one can get through and the site goes offline.
But you can also be hit with too much traffic simply because your site becomes popular. This can easily happen overnight and without warning, for example if someone posts a link to your site on another popular site. This traffic might actually be genuine and wanted (hundreds of extra sales! yay!), but it can have the same effect on your server if you're not prepared for it.
As others have said, it is important to configure your web server to cope with high traffic volumes; I'll let the other answers speak for themselves on this, and it is an important point, but there are things you can do in your own code to improve things too.
One of the main reasons a server fails to cope with increased load is the processing time taken by each request.
Your web server can only handle a certain number of requests at once. The key word here is "simultaneous", and the way to reduce the number of simultaneous requests is to reduce the time it takes for your program to run.
Imagine your server can handle ten simultaneous requests, and your page takes one second to load.
If you get up to ten requests per second, everything will work seamlessly, because the server can cope with it. But if you go just slightly over that, the eleventh request will either fail or have to wait until the other ten have finished. It will then run, but will eat into the next second's ten requests. By the time ten seconds have gone by, you're a whole second down on your response time, and it keeps getting worse as long as the requests keep pouring in at the same rate. It doesn't take long for the server to get overwhelmed, even when it's only a fraction over its capacity.
Now imagine the same page could be optimised to take less time, let's say half a second. Your same server can now cope with 20 requests per second, simply because the PHP code is quicker. But it will also recover more easily from excess traffic levels. And because the PHP code takes less time to run, there is less chance of any two given requests being simultaneous anyway.
In short, the server's capacity to cope with high traffic volumes increases enormously as you reduce the time taken to process a request.
So this is the key to a site surviving a surge of high traffic: Make it run faster.
Caching: CMSs like Drupal and WordPress have caching built in; make sure it's enabled. For even better performance, consider a server-level cache system like Varnish. For a CMS-type system where you don't change the page content much, this is the single biggest thing you can do to improve your performance (a minimal sketch of the idea follows the next point).
Optimise your code: while you can't be expected to fix performance issues in third-party software like Drupal, you can analyse the performance of your own code, if you have any. Custom Drupal modules, maybe? Use a profiler tool to find your bottlenecks. Very often, this kind of analysis can reveal that a single bottleneck is responsible for 90% of the page load time. Don't bother with optimising the small stuff, but if you can find and fix one or two big bottlenecks like this, it can have a dramatic effect.
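To make the caching point concrete, here is a minimal sketch of a whole-page file cache. The cache path, the five-minute TTL, and the render_expensive_page() helper are illustrative assumptions; a real CMS cache or Varnish does far more:

    <?php
    // Minimal whole-page cache sketch. Path, TTL, and helper are illustrative.
    $cacheFile = sys_get_temp_dir() . '/page_' . md5($_SERVER['REQUEST_URI']) . '.html';
    $ttl = 300; // serve the cached copy for up to 5 minutes

    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        readfile($cacheFile);          // cache hit: no database, no heavy PHP work
        exit;
    }

    ob_start();                        // cache miss: build the page, capture the output
    echo render_expensive_page();      // hypothetical function that produces the HTML
    $html = ob_get_clean();

    file_put_contents($cacheFile, $html, LOCK_EX);
    echo $html;

On a hit, the request costs one filesystem read instead of a full page build, which is exactly the reduction in processing time argued for above.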
Hope that helps.
These types of attacks are called (D)DoS (Distributed Denial of Service) attacks and are usually prevented by the web server hosting your PHP application. Since Apache is used the most, I found an article you might find interesting: http://www.linuxforu.com/2011/04/securing-apache-part-8-dos-ddos-attacks/.
The article states that Apache has multiple modules available specifically created to prevent (D)DoS attacks. These still need to be installed and configured to match your needs.
I do believe that Facebook, Google, etc. have their own similar implementations to prevent DoS attacks. I know for a fact that the Google search engine shows a CAPTCHA if a lot of search requests are coming from the same network.
The reason it is not wise to prevent DoS within a PHP script is that the PHP processor still needs to be started whenever a request is made, which causes a lot of overhead. By using the web server for this you will have less overhead.
EDIT:
As stated in another answer, it is also possible to prevent common DoS attacks by configuring the server's firewall. Checking for attacks with firewall rules happens before the web server is hit, so there is even less overhead. Furthermore, you can detect attacks on other ports as well (such as port scans). I believe a combination of the two works best, as they complement each other.
In my opinion, the best way to prevent DoS is to set up the firewall at the lowest level: at the entry point of the server. By setting up some network firewall rules with iptables, you can drop packets from senders that are hitting your server too hard.
It'll be more efficient than going through PHP and Apache, since they need to use a lot (relatively speaking) of processing to do the checking, and your website may be blocked anyway even if you do detect your attacker(s).
You can check this topic for more information: https://serverfault.com/questions/410604/iptables-rules-to-counter-the-most-common-dos-attacks
I am aware that $_SERVER['HTTP_REFERER'] can be used to detect where a request comes from, but it turns out that it can be spoofed, and a browser might not even send it as part of the request.
Could anyone suggest a better way to detect, or rather to allow, requests from my own domain only, so as to avoid spoofing, DoS, and similar security attacks?
I could be wrong, but I think you are referring to a CSRF (cross-site request forgery) attack. The first tutorial I found is this one.
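The standard defence is a per-session token that only pages served from your own domain can embed. Here is a minimal sketch of that idea; the field and session key names are illustrative, and random_bytes() assumes PHP 7+ (older versions can use openssl_random_pseudo_bytes()):

    <?php
    session_start();

    // Generate a token once per session and embed it in every form you serve.
    if (empty($_SESSION['csrf_token'])) {
        $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
    }

    if ($_SERVER['REQUEST_METHOD'] === 'POST') {
        // Reject any POST whose token doesn't match the one we issued.
        $sent = $_POST['csrf_token'] ?? '';
        if (!hash_equals($_SESSION['csrf_token'], $sent)) {
            http_response_code(403);
            exit('Invalid request origin');
        }
        // ... handle the legitimate form submission here ...
    }
    ?>
    <form method="post" action="">
        <input type="hidden" name="csrf_token"
               value="<?= htmlspecialchars($_SESSION['csrf_token']) ?>">
        <!-- form fields -->
    </form>

Unlike the Referer header, the token cannot be guessed by a third-party site, and hash_equals() avoids timing leaks during the comparison.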
As @afuzzyllama pointed out, DoS consists of sending more data than your server/network connection can handle. In such a case, your PHP script will not be accessible any more, so you cannot implement DoS protection in your PHP application. This must be done by your network administrator or hosting company.
Let's say I have a website where
PHP 5.3 is installed
every output is passed through htmlspecialchars()
PDO and prepared statements are the only way to interact with the database
error_reporting() is off
every request is passed to index.php (front controller) and no direct file access is allowed except for index.php, via .htaccess
every input is properly escaped (why should it be? I use prepared statements; how could user input mess with my code?)
there's no use of eval()
Is it considered safe? What other things could be fixed to improve security? How could you attack it? Hack it? What is possible on the PHP/server side to improve security?
Check this page: PHP Security Guide. Most attacks are documented there. If you're still hacked after implementing these security checks, chances are high that the problem doesn't come from your PHP application.
By the way, as @Jacco pointed out, there is some incorrect material in the article I linked to:
Use prepared statements instead of mysql_real_escape_string(), but you already did that.
About salting, follow this answer instead : https://stackoverflow.com/a/401684/851498
Finally, checking ['type'] (for file uploads) is unsafe, since a malicious user can change this value. Instead, see the suggested solution at this link: http://www.acunetix.com/websitesecurity/upload-forms-threat.htm
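A minimal sketch of the last two corrections, assuming PHP 5.5+ for password_hash() (and PHP 7+ for random_bytes()); the allowed types and the upload destination are illustrative:

    <?php
    // 1) Password storage: password_hash() salts and hashes for you
    //    (bcrypt by default), replacing hand-rolled salting schemes.
    $hash = password_hash($_POST['password'], PASSWORD_DEFAULT);
    // Store $hash; later verify with password_verify($input, $hashFromDb).

    // 2) File upload: ignore the client-supplied ['type'] and sniff the
    //    real MIME type from the file's contents instead.
    $allowed = ['image/png', 'image/jpeg', 'image/gif'];
    $finfo   = new finfo(FILEINFO_MIME_TYPE);
    $mime    = $finfo->file($_FILES['upload']['tmp_name']);

    if (!in_array($mime, $allowed, true)) {
        exit('Rejected: file content looks like ' . htmlspecialchars((string) $mime));
    }
    // Store under a server-generated name, never the user-supplied one.
    move_uploaded_file(
        $_FILES['upload']['tmp_name'],
        '/var/uploads/' . bin2hex(random_bytes(8)) . '.img'
    );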
I remember when I started web development, I read a lot about sanitizing data, creating numerous MySQL users with a subset of permissions for specific queries, and so on.
It gets you in the mindset of treating security with code, not with the operating system.
What use is all of this if you connect to your console with telnet, or use FTP with its plain-text authentication?
I guess I should cut to the point. I think modern open-source technologies such as PHP, MySQL, etc. have built up a lot of security features, which gave me a false sense of security.
The damage you can do through these technologies is negligible compared to hacking into the console with a brute-force attack. If I were you, I would worry much more about getting a proper firewall and only allowing port 80, or the bare minimum of ports you need. If you enable console access, allow only your desktop IP, etc.
And make sure that if you ever send a password, it is encrypted through SSL.
There is no absolute security guarantee; you can add the following to the answers above:
If you allow file uploads, make sure you do MIME checking;
Make sure the public cannot upload an unlimited amount of files that would overload and eventually kill your server (see the sketch after this list);
If you own the server, make sure there are no other weak gates to your site. You can spend millions making your site bulletproof against any type of attack, but if someone gains access to it through another website hosted on the same server, you're out of luck;
Use a vulnerability scanner like Acunetix or skipfish;
If you own the server, make sure you stay up to date with the versions of the software running on it (PHP/Apache/MySQL), and subscribe to updates from the vendors;
If the budget allows it, you could offer a bounty to someone to find a security hole in a DEV release of your code;
Use a product like the following: https://www.cloudflare.com/features-security
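For the upload-flooding point, here is a minimal sketch of a size cap plus a per-session rate cap. The limits and field name are illustrative assumptions, and a real deployment would also enforce web-server-level limits:

    <?php
    // Minimal upload-throttling sketch. Limits and field name are illustrative.
    session_start();

    $maxBytes   = 2 * 1024 * 1024; // 2 MB per file
    $maxPerHour = 20;              // max uploads per session per hour
    $cutoff     = time() - 3600;

    // Keep only upload timestamps from the last hour.
    $_SESSION['uploads'] = array_filter(
        $_SESSION['uploads'] ?? [],
        function ($t) use ($cutoff) { return $t > $cutoff; }
    );

    if (count($_SESSION['uploads']) >= $maxPerHour) {
        http_response_code(429);
        exit('Too many uploads; try again later');
    }

    $err = $_FILES['upload']['error'] ?? UPLOAD_ERR_NO_FILE;
    if ($err !== UPLOAD_ERR_OK || $_FILES['upload']['size'] > $maxBytes) {
        http_response_code(400);
        exit('Upload rejected');
    }

    $_SESSION['uploads'][] = time();
    // ... MIME-check and move_uploaded_file() as shown earlier ...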
Security is a major concern for any product, and it cannot be achieved with a handful of checklist policies, although those are still important. Everywhere in the code, think about what could go wrong and work to prevent it.
Other things you have to do:
store sensitive data in encrypted form in the database;
sanitize every piece of user input against XSS.
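A minimal sketch of both points; the encryption part assumes PHP 7.2+ with the bundled sodium extension, and in practice the key must live outside the database and web root:

    <?php
    // 1) Encrypt sensitive data before storing it (libsodium, PHP 7.2+).
    $key    = sodium_crypto_secretbox_keygen();                 // keep this key safe!
    $nonce  = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $cipher = sodium_crypto_secretbox('4111 1111 1111 1111', $nonce, $key);
    $stored = base64_encode($nonce . $cipher);                  // value to put in the DB

    // Decrypt when reading back.
    $raw    = base64_decode($stored);
    $nonce  = substr($raw, 0, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $plain  = sodium_crypto_secretbox_open(
        substr($raw, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES), $nonce, $key
    );

    // 2) Defuse XSS by escaping user data at output time.
    $comment = $_GET['comment'] ?? '';
    echo htmlspecialchars($comment, ENT_QUOTES, 'UTF-8');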
It is important to note that "safe" is a context-dependent term. It depends highly on your needs, and there are companies out there (I'm looking at you, Google) who will not even consider installing PHP at all.
If you are working at a big company, I would recommend hiring the services of professionals. I heard from a friend that this company does security checkups for all the big companies, which seems plausible since they are the people who distribute Kali Linux.
https://www.offensive-security.com/offensive-security-solutions/penetration-testing-services/
There can be multiple other issues as well, such as session problems, sensitive information enumeration, authorization and authentication issues, and a lot more. Issues like business logic bypasses cannot be resolved by traditional secure coding guidelines. However, looking at the PHP Security Cheat Sheet and the OWASP PHP Security Project would be a great help in understanding the big picture of security issues.
You can learn more about exploiting PHP security issues and related attack techniques by solving the PHP security challenges by RIPSTech (https://www.ripstech.com/php-security-calendar-2017/) or by reading their writeups of real-world vulnerabilities found in popular PHP apps (https://www.ripstech.com/security-vulnerability-database/).
While installing an application onto a client's server, I would like to make sure that the client (or a future developer for them, etc) does not copy my application and place it on other domains/servers/local servers.
How can I verify that my application is running on the server I installed it on? I do not want any substantial lag in the script every time it runs, so I assume a 'handshake' method is not appropriate.
I was thinking the script could request a PHP page on my own server every time it runs. This could send my server their server info and domain name, which my script can check against a database of accepted clients. If the request is invalid, my server handles the work of emailing me the details so I can follow it up. This should not slow down the client's script as it isn't expecting a response, and will still operate on their 'invalid' server until I can investigate this and follow it up with them personally.
If this is the best method (or if there is better), what PHP call should I be making to request my server's script? file_get_contents, curl and similar seem to always retrieve the response, which I don't need.
UPDATE
Thank you all for your responses. I completely understand that PHP is open source and should be freely available to edit. I should have stated more clearly initially, but my intentions were for this verification method to assist me in finding anyone breaching my license agreement. The application is covered under a license, but I would also like to include this check so that I can monitor an initial misuse of my application.
Hence, somebody may still breach my license and it would most likely go unnoticed, but if I implement this script I have an advantage over any 'lazy robbers' who don't pull my application apart and remove the verifier before ripping it off.
Does this justify the use of such a script? If so, is cURL my best option?
Any checking code for verification is easily replaced with a return true;. Look at the FAQ at https://stackoverflow.com/tags/php/info :
Q. Can I protect my PHP code from theft? If so, how?
A. There is no effective technical solution to protect, encode or encrypt PHP source code. There are many products that offer some levels of protection, but all can be broken with time and effort. Your best option is not a technical solution, but a legal solution in the form of a license agreement.
You get a legal agreement and sue everyone.
SaaS is your friend. Host the application on your own secure servers, and charge a license fee for your customers to access it.
IMO it's worth checking out some Joomla extensions that do this. There are a few different implementations: some check and validate the domain before executing, and most are encrypted along with the domain validation. I remember Sakic's SEF URL extension used to do this, and there are quite a few more commercial extensions that do the same thing. Apart from that, I can't think of another way. Probably another good idea is to have a good license in place, and a good lawyer...
Short answer: This can't be done.
Long answer: Whatever protection you put in your code, it can be removed with little difficulty by anyone with some experience in PHP. Even if the code is encoded with something like ionCube or Zend Guard, this too can be decoded with relative ease.
Your only option is to protect your intellectual property by actively pursuing copyright infringers. Even this is not foolproof, as the folks from the RIAA and MPAA know very well. In this day and age, I'd say this is not a solvable problem.
You could integrate phone-home behavior into your software but you should probably consult a lawyer to discuss privacy issues about that and to work out privacy guidelines and terms of use for your clients' usage license.
One thing to be careful about is the data you send to identify the client who is illegally using your product (and the way you send it, i.e. securely encrypted or not), because it could potentially be used to compromise your client's infrastructure or to spy on your client.
Regarding your phone-home function, be warned that the client could simply locate and remove it, so using a PHP obfuscator or compiler might provide some additional protection (though any sufficiently determined PHP developer could probably disable it anyway). Note that your protection will only act as a deterrent, aimed at making the cost of circumvention approach or exceed the cost of legal use.
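On the mechanics the asker raised: a "fire and forget" ping can be sketched with a raw socket that writes the request and closes without reading the response, so there is no waiting on the remote server. The endpoint and the data sent are illustrative assumptions:

    <?php
    // Minimal fire-and-forget phone-home sketch. Endpoint is illustrative.
    function phone_home(array $info)
    {
        $host = 'license.example.com'; // hypothetical verification server
        $fp = @fsockopen('ssl://' . $host, 443, $errno, $errstr, 2);
        if (!$fp) {
            return; // never let verification break the client's site
        }
        $body = http_build_query($info);
        fwrite($fp,
            "POST /verify HTTP/1.1\r\n" .
            "Host: $host\r\n" .
            "Content-Type: application/x-www-form-urlencoded\r\n" .
            "Content-Length: " . strlen($body) . "\r\n" .
            "Connection: close\r\n\r\n" .
            $body
        );
        fclose($fp); // close immediately; don't wait for a response
    }

    phone_home(['domain' => $_SERVER['HTTP_HOST'] ?? 'cli', 'version' => '1.0']);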
EDIT:
As poke wrote in the question comments, you could move parts of your code out of the software installed at your client's site and onto your own servers, but this may backfire when your servers are unreachable for some reason (e.g. for maintenance).
In the end, I think customer satisfaction should be valued more highly than protecting your software from the customer, i.e. try to avoid protections that are likely to make your customers angry.
You could encode it and hard-code a license file that allows it to work only on the domain it was intended for (e.g. use ionCube or Zend to encode a file that checks whether the HTTP host is the intended domain, without doing a handshake). You could then require that file in all other files (if everything is encoded).
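A minimal sketch of the check such an encoded file might contain; the allowed host list is an illustrative assumption, and without encoding this is trivially removed:

    <?php
    // license_check.php - minimal domain-lock sketch. Hosts are illustrative.
    // Only a deterrent: unencoded, anyone can edit or delete this file.
    $allowedHosts = ['client-domain.example', 'www.client-domain.example'];

    $host = strtolower($_SERVER['HTTP_HOST'] ?? '');
    $host = preg_replace('/:\d+$/', '', $host); // strip a port number, if any

    if (!in_array($host, $allowedHosts, true)) {
        http_response_code(403);
        exit('This copy is not licensed for this domain.');
    }

Every other (encoded) source file would then start with require __DIR__ . '/license_check.php'; so that removing the check means touching every file.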
I have a script that handles HTTP requests. I'm trying to think of some of the security issues I might have with it. My biggest concern at the moment is how to manage repeated requests from the same source over and over, for instance someone trying to shut down my system.
Do I need to be concerned, or will Apache handle this issue? If not, what is the best approach to take using PHP?
Check out the mod_evasive Apache module. Also, the Apache documentation has some good tips.
IMHO security always has to be considered from different viewpoints and at different levels.
From what you've described and what I think you're trying to achieve (a Denial of Service attack countermeasure), though, I believe your best bet would be to deal with requests at a lower level (i.e. packet filtering) than where Apache operates. With PHP alone you can definitely perform other security checks, but you most likely can't do much (if anything) against a DoS attack.
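That said, if you still want a last line of defence in PHP itself, a minimal per-IP throttle can be sketched with the APCu extension. It won't stop a real (D)DoS, since the request has already reached PHP, but it can blunt a single over-eager client; the limits here are illustrative assumptions:

    <?php
    // Minimal per-IP throttle sketch (requires the APCu extension).
    // Won't stop a real (D)DoS, but slows a single abusive client.
    $ip     = $_SERVER['REMOTE_ADDR'] ?? 'unknown';
    $key    = 'hits_' . $ip;
    $limit  = 50; // max requests ...
    $window = 60; // ... per 60-second window (both illustrative)

    apcu_add($key, 0, $window);   // create the counter with a TTL if it's new
    $hits = apcu_inc($key);       // atomic increment across requests

    if ($hits !== false && $hits > $limit) {
        http_response_code(429);
        header('Retry-After: ' . $window);
        exit('Too many requests');
    }

Note that behind a proxy or load balancer, REMOTE_ADDR may be the proxy's address, so this kind of counting really does belong at the firewall or web-server level as the answers above recommend.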