So if you have a PHP page, someone who loads that page may not see the server-side PHP code that runs; but the file itself still has to be publicly reachable, because if you made it not publicly available, the person would not be able to load the page at all.
Thus someone with the right knowledge could 'grab' that file and read the server-side script.
So is it not safer to make a 'proxy'? For example: an AJAX POST call to a PHP page (call it the script handler), passing a string whose first two characters are the id of the PHP script to run, with the rest of the string being the data for that script. The script handler then runs an include based on that id and returns the echoed HTML, which is then displayed.
What do you guys think? I have done this and it works quite nicely; if I grab the source, all I get is an HTML page with a div container and a JavaScript file with AJAX calls to the script handler.
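Roughly, the script handler looks like this (a simplified sketch; the POST field name, the two-character ids and the included file names are just placeholders):

<?php
// scripthandler.php -- simplified sketch of the dispatcher described above
$payload = isset($_POST['payload']) ? $_POST['payload'] : '';
$id      = substr($payload, 0, 2);   // first two characters pick the script
$data    = substr($payload, 2);      // the rest is the data for that script

// fixed whitelist: the id is only ever used as an array key, never as a path
$scripts = array(
    '01' => 'scripts/list_items.php',
    '02' => 'scripts/save_item.php',
);

if (isset($scripts[$id])) {
    include $scripts[$id];           // the included script reads $data and echoes HTML
} else {
    header('HTTP/1.0 400 Bad Request');
    echo 'Unknown script id';
}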
No. Your 'workaround' does not fix the problem, if there ever was one.
If a client (a browser) asks for a 'resource' (a page, for example) from a webserver, the webserver won't just serve the resource as it finds it on disk.
If you configured your webserver well, it will know that
An .html, .gif, .png, .css, .js file can just be served as-is.
A .php, .php5, .cgi, .pl file has to be executed first, and the resulting output has to be served.
So with a properly configured server (and most decent webservers are properly configured by default), grabbing the PHP source just by calling the page is impossible - the webserver will know to execute the source and return the result.
But
One of the most common bugs when writing your own 'upload/download script' is allowing users to upload or download .php (or other executable) files. If your own script 'serves' a .php file by reading it from disk and writing it to the network, users will be able to see your code.
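For illustration, this is the kind of naive download script that leaks source (the file name and parameter are made up; do not copy this):

<?php
// download.php -- deliberately broken example of the bug described above
$file = $_GET['file'];   // no validation: ?file=config.php is accepted as-is

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');

// readfile() sends the raw bytes from disk; the PHP interpreter never runs the
// file, so the visitor receives your source code
readfile($file);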
Solution:
Don't write scripts unless you know what you are doing.
Avoid the not-invented-here syndrome (don't reinvent the wheel unless you are sure you NEED a better wheel AND can MAKE a better wheel)
Don't solve problems that don't exist!
By the by:
if your webserver were misconfigured and simply served .php files as viewable/downloadable files, your 'solution' of calling them via AJAX would not change that... AJAX is still client-side, so any client could bypass it and fetch the script directly.
If your web server is configured correctly, users should never be able to view the actual contents of the PHP file. If they try, they should see the actual output of the PHP script as your web server reads and executes it, then passes that as the response to the HTTP request.
Furthermore, you need to understand that users can easily still look at the file the AJAX request is fetching; all they need to do is install Firebug, or use the Chrome developer tools, and they'll be able to see the full URL the file is fetched from.
So to sum up, firstly you shouldn't need to use this kind of 'security technique' for PHP files, and secondly, the 'security technique' will not stop anyone with more than a passing interest in your data.
I have page caching in place and I am trying to improve server response time. When I grab the cached page content via PHP, for whatever reason it takes about 800ms to output it to the browser:
require_once(PATH_TO_CACHED_FILE);
When I copy this exact same content and put it into a .html file, I get the same content in the browser in about 250ms.
I also get about 250ms when I switch the above require_once to this =>
echo 'a';
With all this in mind, I'm thinking this is probably related to the buffer size (the bigger the buffer, the longer it takes to output it)? Right? I mean, this is a huge difference. What could one do to more or less match outputting the content via PHP with simply loading HTML, since both files do virtually the same thing (grab content / push it to the browser)?
Thanks!
btw: I also tested copying the output into a .php file (so it just echoes the cached HTML; there are no calculations done in this file) and it still takes ~800ms -- how can simply changing the extension from .php to .html make a 500ms difference?
btw2: not sure if it's important, but PHP is running behind nginx.
what could one do to more or less match outputting the content via PHP with simply loading HTML, since both files do virtually the same thing (grab content / push it to the browser)?
Nothing, see below.
how can simply changing the extension from .php to .html make a 500ms difference?
The PHP/PHP-FPM scripting engine is quite heavy compared to plain HTML access, because it has to (potentially, depending on its current state) fork a worker process, do the necessary bootstrapping, load modules if they are not loaded yet, and parse the script. Even when the file you require is plain HTML, PHP must still scan it for <?php tags to process.
With direct HTML access, there is no PHP engine involved. Nothing of the above happens if you simply link/access your HTML file directly.
Aside from that, output buffering can be an issue as well, if you use zlib compression on the PHP side.
If you insist on having some parent PHP code serve the cached HTML files, then it probably makes sense to experiment with readfile in lieu of require, as the former does not parse the file content for PHP tags.
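A minimal sketch of that swap, reusing the PATH_TO_CACHED_FILE constant from the question:

<?php
// Instead of running the cached page through the PHP parser:
// require_once(PATH_TO_CACHED_FILE);

// ...stream its bytes straight to the output, with no parsing at all:
readfile(PATH_TO_CACHED_FILE);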
Otherwise, if you already have complete HTML files (as in a full-page cache), the best route is to avoid invoking PHP for them entirely. This can be accomplished by using NGINX (or your web server of choice) rewrites that route requests directly to the cached HTML files when they exist (e.g. in NGINX this can be implemented with the try_files directive).
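A rough sketch of what that could look like in NGINX (the /cache prefix and the fallback to index.php are assumptions about your setup):

# serve a pre-rendered cache file if one exists for the request,
# otherwise fall back to the normal PHP front controller
location / {
    try_files /cache$uri /cache$uri/index.html $uri $uri/ /index.php?$args;
}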
So I'm a bit confused about what crafty users can and can't see on a site.
If I have a file with a bunch of PHP script in it, the user can't see it just by clicking "view source." But is there a way they can "download" the entire page, including the PHP?
What permission settings should pages be set to if there is PHP script that must execute on load, but that I don't want anyone to see?
Thanks
2 steps.
Step 1: So long as your PHP is being processed properly, this is nothing to worry about... so make sure of that.
Step 2: As an insurance measure, move the majority of your PHP code outside of the web server directory and just include it from the PHP files that are in the directory. PHP does the include through the filesystem and therefore has access to the files, but the web server does not. On the off chance that the web server gets messed up and serves your raw PHP code (happened to Facebook at one point), the user won't see anything but a reference to a file they can't access.
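A small sketch of that layout (the paths are only an example):

<?php
// public_html/index.php -- the only file under the web root.
// The real application code lives in /var/www/app/, one level above the
// web root, so no URL ever maps to it.
require __DIR__ . '/../app/bootstrap.php';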
PHP files are processed by the server before being sent to your web browser. That is, the actual PHP code, comments, etc. cannot be seen by the client. For someone to access your PHP files, they would have to hack into your server through FTP or SSH or something similar, and then you have bigger problems than just your PHP.
It depends entirely on your web server and its configuration. It's the web server's job to take a url and decide whether to run a script or send back a file. Commonly, the suffix of a filename, file's directory, or the file's permission attributes in the filesystem are used to make this decision.
PHP is a server-side scripting language that is executed on the server. There is no way it can be accessed client-side.
If PHP is enabled, and the code is properly enclosed in <?php ... ?> tags, none of the PHP code will make it past your web server. To make things more secure, disable directory browsing, and put an empty index.php or index.html in every folder.
Ensure that you adhere to secure coding practices too. There are quite a number of articles on the web; here is one: http://www.ibm.com/developerworks/opensource/library/os-php-secure-apps/index.html
I was wondering how PHP files are actually secured. How come one cannot download a PHP file, even if its exact location is known?
When I upload a PHP file to my web server, let's say to domain.com/files, and I open the domain.com/files page, I can clearly see the PHP file and its actual size. Downloading the file, however, leads to an empty file.
The question then is: How does the security mechanism work exactly?
The web server's responsibility is to take the PHP script and hand it to the PHP interpreter, which sends the HTML (or other) output back to the web server.
A mis-configured web server may fail to handle the PHP script properly, and send it down to the requesting browser in its raw form, and that would make it possible to access PHP scripts directly.
Your web hosting may have a mechanism to list the contents of a directory, but unless it also supplies a download mechanism that sends the PHP script with plain-text headers (as opposed to HTML) without handing it to the PHP interpreter, the script will be executed as PHP rather than served down.
In order to be able to download the raw PHP file, the server would have to do some extra work (possibly via another PHP script) which reads the PHP file from disk and sends its contents down to the browser with plain text headers.
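For example, that 'extra work' could be a hypothetical script along these lines:

<?php
// show-source.php -- illustrative only: reads another PHP file from disk and sends
// it with a plain-text header, so the browser displays the code instead of the
// interpreter executing it
$path = __DIR__ . '/example.php';   // hypothetical file whose source gets exposed
header('Content-Type: text/plain');
readfile($path);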
When you request domain.com/files, your web server is set up to show all the files in that directory.
When you request the actual PHP file, the web server executes it and outputs the result back to you, not the source code.
Of course, both of the above can be configured. You could switch off directory listing, or disable parsing of PHP files so that the actual file contents/source code are output.
It's usually good practice to switch off directory listing.
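For reference, turning the listing off is a one-liner in the common servers (where exactly it goes depends on your configuration):

# Apache -- in the vhost configuration or an .htaccess file:
Options -Indexes

# nginx -- the listing only appears if autoindex was enabled; make sure it is off:
autoindex off;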
When you first install PHP on your server, it reconfigures Apache so that when a .php file is requested, Apache hands processing over to PHP. PHP then processes the code in the file, and returns whatever text the PHP code echoed or printed to Apache, which then sends that back over the network to the person that requested the PHP file.
The "security" comes simply from the fact that Apache does not serve the PHP file as-is, but rather hands it over to the PHP processor for execution. If Apache is not configured correctly, or you use server software that does not recognize PHP, the raw PHP file will be sent to the client.
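Roughly, that hand-over is just a handler mapping in the Apache configuration; the exact form varies by setup, and the socket path below is an assumption:

# classic mod_php: let the PHP module execute .php files
AddHandler application/x-httpd-php .php

# or, with PHP-FPM, pass .php requests to the FastCGI process instead
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php/php-fpm.sock|fcgi://localhost"
</FilesMatch>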
Short answer: since the server is configured to execute PHP files and return the results, you will never be able to access the PHP source from the outside. All code is executed immediately by the server. So, to answer your question:
The security mechanism is that .php files are executed automatically by the server when they are requested.
This is a huge misconception. When you attempt to access a PHP file over port 80, your request is likely being run through a web server which does something with the file. In the case of PHP, it runs that file through the PHP interpreter, which causes that file to create some output, and that is what is sent to you.
You can easily allow downloading of PHP files by removing the interpreter for that file type. If the web server doesn't have anything special for it and doesn't understand the file, it will just have the client download it.
I was wondering if there was a way to basically host a site on your server so you can run PHP, but have the actual code hosted on GitHub. In other words...
If a HTTP request went to:
http://mysite.com/docs.html
It'd request and pull in the content (via file_get_contents() or something):
https://raw.github.com/OscarGodson/Core.js/master/docs.html
Or, if they went to:
http://mysite.com/somedir/another/core.js
It'd pull down:
https://raw.github.com/OscarGodson/Core.js/master/somedir/another/core.js
I know GitHub has their own DNS servers, but I'd rather host it on my own server so I can run server-side code. What would the .htaccess code look like for this?
This is beyond the capabilities of .htaccess files, if the requirement is to run the PHP embedded in the HTML stored on github.com on the server at mysite.com simply via a configuration line like a redirect in the .htaccess file.
A .htaccess file is typically used to provide directives to the Apache web server. These directives can indicate, for example, access permissions, popup password protection, linkages between URLs and the server's file system, handlers for certain types of files when fetched by the server before delivery to the browser, and redirects from one URL to another URL.
An .htaccess file can issue a redirect from http://mysite.com/somedir/another/core.js to https://raw.github.com...., but then the browser will be pointed at raw.github.com, not mysite.com. Tricks can be done with frames to make this redirection less visible to the human at the browser... but these don't change the fact that the data comes from github.com without ever passing through the server at mysite.com.
In particular, PHP tags embedded in the HTML on github.com are never received by mysite.com's server and therefore will not run. Probably not what you want. Unless some big changes have occurred in Apache, .htaccess files will not set up that workflow. It might be possible for some expert to write an Apache module to do it, but I am not sure.
What you can do is put a cron job on mysite.com that runs git pull from github.com every few minutes. Perhaps that is what you want to do instead?
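That is a one-line crontab entry, something like this (the path and interval are only an example):

# pull the latest code from GitHub into the web root every 5 minutes
*/5 * * * * cd /var/www/mysite && git pull --quiet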
If the server can run PHP code, you can do this.
Basically, in the .htaccess file you use a RewriteRule to send all paths to a PHP script on your server. For example, a request for /somedir/anotherdir/core.js becomes /my-script.php/somedir/anotherdir/core.js. This is how a lot of app frameworks operate. When my-script.php runs, the "real" path is in the PATH_INFO variable.
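A minimal sketch of that .htaccess (assuming mod_rewrite is enabled):

RewriteEngine On
# send anything that is not an existing file to my-script.php,
# keeping the rest of the original path after the script name (it ends up in PATH_INFO)
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ my-script.php/$1 [L,QSA]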
From that point the script could fetch the file from GitHub. If it is HTML or JavaScript or an image, it can just pass it along to the client. (To do things properly, though, you'll want to pass along all the right headers too, like ETag and Last-Modified, and then also honor them on later requests, so that caching works properly and you don't spend a lot of time transferring files that don't need to be transferred again and again. Otherwise your site will be really slow.)
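A bare-bones sketch of that pass-through (the repo URL comes from the question; validation and caching are deliberately left out, which is part of why the naive version is slow):

<?php
// my-script.php -- fetch the requested path from GitHub and relay it (sketch only)
$path = isset($_SERVER['PATH_INFO']) ? $_SERVER['PATH_INFO'] : '/index.html';
$url  = 'https://raw.github.com/OscarGodson/Core.js/master' . $path;

$body = @file_get_contents($url);
if ($body === false) {
    header('HTTP/1.0 404 Not Found');
    exit('Not found');
}

// crude content-type guess by file extension
$types = array('js' => 'application/javascript', 'css' => 'text/css', 'html' => 'text/html');
$ext   = strtolower(pathinfo($path, PATHINFO_EXTENSION));
header('Content-Type: ' . (isset($types[$ext]) ? $types[$ext] : 'application/octet-stream'));

echo $body;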
If the file is a PHP file, you could download it locally, then include it into the script in order to execute it. In this case, though, you need to make sure that every PHP file is self-contained, because you don't know which files have already been fetched from GitHub; if one file includes another, you need to make sure its dependencies are downloaded too, and the dependencies of those dependencies, and so on.
So, in short, the .htaccess part of this is really simple: it's just a single RewriteRule. The complexity is in the PHP script that fetches files from GitHub. If you just do the simplest thing possible, your site might not work, or it will work but painfully slowly. And if you do a ton of genius-level work on that script, you could make it run OK.
Now, what is the goal here? To save yourself the trouble of logging into the server and typing git pull to update the server files? I hope I've convinced you that trying to fetch files on demand from GitHub will be even more trouble than that.
I'm trying to implement SwfUpload in my web page, and I'm using PHP to save the files on the server. As it is the first time I have used this component, I chose to use the upload script suggested by the SwfUpload team (http://swfupload.org/forum/generaldiscussion/214): I've put it in a file and told the control to use that file as its upload handler.
It didn't work (hence why I'm asking for help), but what really drives me crazy is that I don't know how to debug this stuff! The request to the file is encapsulated inside the Flash object, and I can't get any feedback from it if something goes wrong.
Is anyone more experienced with this control than me?
Thanks
One thing you can do if all else fails is to make upload.php append to a log file your debug messages. Example: file_put_contents("swfupload.log", print_r($_REQUEST, 1), FILE_APPEND);
The only tricky thing about SwfUpload that you need to understand is that the Flash component runs with a different cookie jar (for security reasons), so you need to manually tell it (via a Flash param) the session_id you currently have on the server. When it makes the HTTP request to upload.php it passes that session_id in a $_GET param, and the PHP script starts the session with that specific id: session_id($_GET['SESSION_ID']); session_start();. From that point on, upload.php behaves just like any other PHP code with your session data available. You get the $_FILES, move them to their respective folder, save them in the db, and that's it.
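A skeleton of what upload.php then looks like (the SESSION_ID parameter name follows the description above; 'Filedata' is SwfUpload's default file field name, adjust if yours differs):

<?php
// upload.php -- adopt the browser's session, then handle the uploaded file
if (isset($_GET['SESSION_ID'])) {
    session_id($_GET['SESSION_ID']);   // set the id first...
}
session_start();                        // ...then start the session

if (isset($_FILES['Filedata']) && $_FILES['Filedata']['error'] === UPLOAD_ERR_OK) {
    $target = __DIR__ . '/uploads/' . basename($_FILES['Filedata']['name']);
    move_uploaded_file($_FILES['Filedata']['tmp_name'], $target);
    echo 'ok';
} else {
    // crude debugging aid, in the spirit of the log file suggestion above
    file_put_contents(__DIR__ . '/swfupload.log', print_r($_FILES, true), FILE_APPEND);
    echo 'error';
}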
Well SWFUpload needs to upload the file to some script, so you can log to a file from within that script based on data received...
Oh, that's a pain to debug.
One way to get at the script output (which might be fatal errors that you can't log, or something similar) is to use a proxy like Fiddler that shows you all HTTP traffic.
It's sometimes hard to get Flash to use the proxy. You may need to configure the proxy in IE even if you use another browser.