If a phpinfo() dump is shown to an end user, what is the worst that a malicious user could do with that information? Which fields are most insecure? That is, if your phpinfo() was publicly displayed, after taking it down, where should you watch/focus for malicious exploits?
Knowing the structure of your filesystem might allow hackers to execute directory traversal attacks if your site is vulnerable to them.
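To illustrate the point, here is a minimal sketch (file and parameter names are hypothetical) of how a known document root helps a traversal attack, and one way to defend against it: resolve the requested path and verify it stays inside the allowed base directory.

```php
<?php
// Hypothetical vulnerable pattern: the page name comes straight from the URL.
// Once phpinfo() has revealed the document root, an attacker can count the
// "../" segments needed to reach a system file:
//   page.php?name=../../../../etc/passwd
//
// Unsafe: include($base . $_GET['name'] . '.php');
//
// Safer: resolve the path and verify it stays inside the base directory.
function safe_page_path(string $base, string $name): ?string {
    $resolved = realpath($base . $name . '.php');
    if ($resolved !== false && strncmp($resolved, $base, strlen($base)) === 0) {
        return $resolved;
    }
    return null; // traversal attempt or missing file
}
```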
I think exposing phpinfo() on its own isn't necessarily a risk, but in combination with another vulnerability could lead to your site becoming compromised.
Obviously, the less specific info hackers have about your system, the better. Disabling phpinfo() won't make your site secure, but will make it slightly more difficult for them.
Besides the obvious, like being able to see whether register_globals is On and where files might be located via your include_path, there's all the $_SERVER information ($_SERVER["DOCUMENT_ROOT"] can give clues for building a relative pathname to /etc/passwd) and the $_ENV information (it's amazing what people store in $_ENV, such as encryption keys).
The biggest problem is that many versions make XSS attacks simple by printing the contents of the URL and other data used to access it.
http://www.php-security.org/MOPB/MOPB-08-2007.html
A well-configured, up-to-date system can afford to expose phpinfo() without risk.
Still, phpinfo() gives away so much detailed information, especially module versions, which can make a cracker's life easier when newly discovered exploits come up, that I think it's good practice not to leave it up. Especially on shared hosting, where you have no influence on day-to-day server administration.
Hackers can use this information to find vulnerabilities and hack your site.
Honestly, not much. Personally, I frequently leave phpinfo() pages up.
If you have some serious misconfigurations (e.g. PHP running as root), or you're using old and vulnerable versions of some extensions or of PHP itself, this information will be more exposed. On the other hand, hiding phpinfo() wouldn't protect you either; you should instead take care to keep your server up to date and correctly configured.
I am looking for best practices, modules, etc. to securely do file system manipulation via PHP application. The CMS-like application will not use a database, but instead the markdown files are placed in folders and are processed at display time. Therefore, there will be a lot of moving files around, renaming files, writing to files, etc.
I am looking to find some libraries (e.g., something equivalent to an ORM) that will help manage such actions (input sanitization, moving files, etc.) rather than starting from scratch. If nothing like this is available, I would like a listing of best practices.
So far I have only found guidance from PHP.net.
More information: the plan is to build a web-based end-user interface which sits on top of Stacey. I would have a test environment with the end-user interface, and when changes are ready they are then synced to the production environment. This is a non-DB-based system. Stacey is convenient to manage and work with from a developer standpoint, but users don't want to work directly with markdown and move files around.
Also: Please limit the answer to PHP issues; server things like chrooting or locking down the server would be dependent upon the user's individual environment and needs. From a development standpoint, I want to focus on securing my distributed code.
I don't know of any specific libraries that do this -- the filesystem support in PHP is extensive so I'm not sure why they'd be necessary. You might be better off starting with an existing CMS and modifying it to do what you want -- however I understand that might not be possible. It also sounds like the sort of thing that should be using a database, but I guess you already know that.
I can't claim to know exact best practice, this is more general advice.
First, your web server (and therefore your PHP scripts) will be running as a certain user. Which user depends on your configuration, your particular server, and the underlying OS. Ideally you want to make sure this user only has access to the filesystem area you're using as storage. Deny all access everywhere else, apart from read access where it's genuinely needed (your scripts, etc.) and read-write access to the storage area. The exact way to do this depends on your system.
That's your last line of defense; do not rely on it. It's there as a safety net.
It's not clear exactly what will cause files to be renamed, moved, or altered, but it's a safe bet that it's user input. Therefore you need to sanitize all user input: if a page name becomes a file name, you do not want to let someone enter ../../index.php as a page name and nuke your main site.
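One way to sketch that sanitization (the function name is hypothetical): rather than trying to strip out dangerous sequences like "../", whitelist what a page name may contain and reject everything else.

```php
<?php
// A minimal sketch of page-name sanitization. Only letters, digits,
// dashes, and underscores are allowed; anything else (including "../"
// traversal sequences, slashes, and null bytes) is rejected outright.
function sanitize_page_name(string $name): ?string {
    if (preg_match('/\A[A-Za-z0-9_-]{1,64}\z/', $name) === 1) {
        return $name;
    }
    return null; // caller should treat this as "page not found"
}
```

Whitelisting is usually safer than blacklisting here, because you only have to reason about what you allow, not about every encoding trick an attacker might use.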
Always assume the worst case: a user who knows the internals of your system intimately and is aiming to do most damage. Do not rely on 'security by obscurity' or 'nobody will ever do that'.
What I would do (and have done before) is twofold. First, wrap all the filesystem functions up into a class that provides the same functions as methods. The job of this class is to check that anything happening is allowed, which means it's probably going to have to read the paths and filenames and work out the location of the changes.
Secondly, sanitize all potentially malicious user input when it first arrives. You might want to look at using escapeshellarg or URL encoding, or something else, depending on what your input is.
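A minimal sketch of the wrapper idea described above (the class and method names are hypothetical): every filesystem operation goes through one class that refuses any path resolving outside the storage root.

```php
<?php
// Hypothetical wrapper: filesystem methods first confirm the target
// stays inside one allowed storage directory.
class SafeStorage {
    private string $root;

    public function __construct(string $root) {
        $real = realpath($root);
        if ($real === false) {
            throw new InvalidArgumentException("Storage root not found: $root");
        }
        $this->root = $real . DIRECTORY_SEPARATOR;
    }

    // Resolve a relative path and refuse anything that escapes the root.
    private function resolve(string $path): string {
        $full = $this->root . $path;
        // For writes the file may not exist yet, so resolve its directory.
        $dir = realpath(dirname($full));
        if ($dir === false ||
            strncmp($dir . DIRECTORY_SEPARATOR, $this->root, strlen($this->root)) !== 0) {
            throw new RuntimeException("Path escapes storage root: $path");
        }
        return $dir . DIRECTORY_SEPARATOR . basename($full);
    }

    public function write(string $path, string $data): void {
        file_put_contents($this->resolve($path), $data);
    }

    public function read(string $path): string {
        return file_get_contents($this->resolve($path));
    }

    public function rename(string $from, string $to): void {
        rename($this->resolve($from), $this->resolve($to));
    }
}
```

The point of funnelling everything through one class is that the path check exists in exactly one place, so it can be reviewed and tested in isolation.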
You also mention files are processed at run-time. If users are allowed to write scripts (or worse, PHP that gets executed) then you have a lot more issues and may have a fundamental problem. But that's not clear from your question.
Hope that helps.
Is it generally safe not to restrict access to the "phpsysinfo" system information page of a Linux server? It exposes some information on the server software, the Linux distro, and the HDDs (which could potentially be helpful to someone wanting to hack the system), but is the code of phpSysInfo itself safe? Would you advise limiting access?
To answer your question, nothing is 100% safe. For this reason I would restrict access to phpSysInfo to the people who need it.
A good security tip: always restrict access to only the people who need the corresponding functionality.
Came back after years to further clarify this issue. While Dwarth's answer is correct in the general case, PHPSysInfo is outright dangerous in some situations.
It may not be so obvious: yes, showing hardware info can already be considered unsafe, but it may be fine in many situations. However, since phpSysInfo shows the full mtab output, it also includes things like mounted disks' passwords if you have any. You may want to avoid having those in mtab in the first place, but if you can't, anything that shows mounted devices in full will leak this information.
There was a MySQL injection on my website. It has 1000s of existing PHP files. For the last 6 months, when I code, I make sure the code is injection-free. But is there any solution for securing the legacy code without changing every file?
A few years ago, a legacy application that I was helping support was hacked, and it too had a very large footprint of files. The length of time it was going to take to resolve the issue on all the legacy files was very significant, so we decided to add a Mod Security layer to help mitigate the issue while we worked on rebuilding the application.
If your website is important to you, and you want some extra protection while you search and destroy the vulnerabilities in your code, I would highly recommend Mod Security. You can set it up locally if you manage your own Apache server, or you can setup a proxy server whose only responsibility is to scrub incoming requests. I've used both with a high success rate.
Sorry, there is no magic wand to wave. You're going to have to audit any and all code that is exposed to user input (even indirectly) and either verify that it is safe or fix it.
This is a very vast field. However, there are a few points I can give you:
Sanitize and filter every input you receive from the user.
Handle all errors correctly; do not leave even one possibility of an error being triggered. (This is a very important point and the main cause of most hacks.)
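For the injection point specifically, the usual fix is to route queries through prepared statements so user input is bound as data rather than concatenated into SQL. A minimal sketch (the table and data are made up; SQLite is used here just so the example is self-contained, but any PDO driver works the same way):

```php
<?php
// With PDO prepared statements, user input is bound as a value, never
// concatenated into the query text, so injection attempts are inert.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');
$pdo->exec("INSERT INTO users (name) VALUES ('alice'), ('bob')");

$userInput = "alice' OR '1'='1";   // a classic injection attempt

// Unsafe (do NOT do this):
//   $pdo->query("SELECT name FROM users WHERE name = '$userInput'");

// Safe: the placeholder keeps the input as a plain string value.
$stmt = $pdo->prepare('SELECT name FROM users WHERE name = ?');
$stmt->execute([$userInput]);
$rows = $stmt->fetchAll(PDO::FETCH_COLUMN);   // empty: the attack finds nothing
```

In a legacy codebase, a pragmatic approach is to introduce one shared query helper built on this pattern and migrate files to it as you audit them.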
If there were previous breaches, check Apache's log files to see where the injection, or the hacking in general, occurred. There are two main log files maintained by Apache: access_log and error_log.
Once a breach occurs, make a backup of the logs, then mitigate the problem by reviewing them.
If documentation is available for the system you are currently maintaining, the vulnerability can be detected more quickly.
Some helpful references
http://php.net/manual/en/security.database.sql-injection.php
http://simon.net.nz/articles/protecting-mysql-sql-injection-attacks-using-php/
It would involve changing all of your exploitable code if it is spread across each of these files! There are lots of different ways of having a standardized sanitization function for cleaning and filtering input across your website, but if each of these pages was written separately and each has exploitable code in it, you will need to correct each of them.
First off, if you managed to consolidate and organize your code properly, then the issue can be isolated. If injection is the issue, then all you have to look at (if you used an MVC pattern) is your models. If your methods of running queries are safe, then at least half of your problems will be washed away just like (snap) that! The rest should be the other half: your validation scripts (which, if you followed MVC, should be classified under "helpers" and "libraries").
However, if your "1000+" PHP files are a bunch of copy, paste, run-right-off-the-bat code, then there is nothing else you can do besides manually poring over each of them, tracing and testing the code. Besides, with that many PHP files, you should have thought about creating maintainable code.
I have experience with Joomla, Drupal, WordPress, and small-CMS configuration, but one of my clients is asking about the security level of the above CMSes. I never thought about the security risks, and it's really very new to me. On what basis can I choose the best CMS when considering security level and minimum risk? And what kind of security can we provide on the server to make the application highly secure?
All the big CMS products you mentioned should be okay. Look at who else is using them; this is a great way to judge how good the product really is. For example, Drupal is used by the White House. This fact gives me a lot of confidence in Drupal.
The important thing is to make certain that you keep up-to-date with any security fixes that are released.
The vast majority of security problems in all these products come from non-core modules that you might install. If you're really worried about security, I suggest keeping the number of modules you use to an absolute minimum.
Where you do need to use an external module, investigate thoroughly to find out how good it is: how often is it updated? Are there any known bugs that may be security issues? How widely used is it? And, as I mentioned above for the core CMS, who is using it?
You should also ensure that your web server is secure. It's not just your CMS that will provide routes in for a hacker. Close all unnecessary ports and services. Make sure that everything possible is encrypted (use SFTP, definitely not FTP). If you're using a PHP-based CMS such as Drupal, use a security-hardened PHP version (Suhosin) rather than the basic version.
Finally, you should accept that no matter how good your software and no matter how vigilant you are, you could still get hacked. Worse, you could get hacked without even knowing about it. Even the best software has flaws which can be exploited. For this reason, you should aim to have several layers of security before anyone can get to any genuinely sensitive data.
http://www.php.net/manual/en/features.remote-files.php
The only time I could ever think of doing include("http://someotherserver/foo.php") would be as some sort of weird intra-server service interface, but even then I could think of a million safer ways to accomplish the same thing. Still, my specific question is: has anyone seen remote includes in a production environment, and did it make any sense to do so?
Edit:
To clear something up, I would cause physical injury to befall anyone who ever tried to use remote includes in a production environment I worked on... So yes, I know this is a nightmarish security hole. I'm just trying to figure out why it's still there, versus other weird ideas like magic quotes and global variables.
While I've never seen this in real life, I could imagine a farm of separate physical servers with no shared filesystem. You could possibly have one server with all the code, i.e. api.domain.com, and the other servers include from it. It would make deployments easier if you have tens or hundreds of separate sites. But as alex said, it's asking to be hacked.
Remote file execution is extremely dangerous... I've never used it on my servers, and I can't imagine a valid reason to put your, ahem, balls into the basket that someone else controls. That's just asking to be hacked.
No, I haven't. It's like putting your head in the bear's mouth.
I suppose the possibility to include/require remote files is a consequence of allow_url_fopen, which was introduced in PHP 4.0.x.
However, considering the security risks of remote inclusion, a new directive, allow_url_include, was introduced in PHP 5.2: this one determines whether you can remotely include/require, while the former only affects fopen and the like. This is nice: it allows an admin to disable remote inclusion while keeping remote opening.
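A php.ini fragment sketching that split (note that allow_url_include is off by default since its introduction, and deprecated as of PHP 7.4):

```ini
; Allow remote streams for fopen(), file_get_contents(), etc.
allow_url_fopen = On
; But forbid include/require from remote URLs.
allow_url_include = Off
```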
Like others, I've never seen remote require/include used in a real-world scenario, while I do, of course, often see remote opening used. The bad thing is that I sometimes see servers with allow_url_fopen disabled for security reasons that don't exist anymore :-(