Equivalent of ASP.NET HttpModules in PHP

What is the equivalent of ASP.NET HttpModules in PHP?
If there are any, how can I include them for a specific application (not globally)? In other words, what is the equivalent of web.config?
Example: I need to log the request and its headers whenever the server returns an HTTP 500 error, irrespective of which code is run.
In ASP.NET I would write an HTTP module in which I can grab the response code and other details before the response is sent to the client. I can also handle BeginRequest.
I need something similar in PHP.

Unfortunately, PHP is more like classic ASP in the sense that the "application" is a loose concept: the files are not tightly related, so anything you do would likely have to happen at the web server level.
Assuming you are on a Linux/Apache server, one approach would be to use .htaccess files; these can be modified at the directory (i.e. application) level and have several powerful features.
One example is URL rewriting:
http://roshanbh.com.np/2008/02/hide-php-url-rewriting-htaccess.html
Official Apache Docs: http://httpd.apache.org/docs/1.3/mod/mod_rewrite.html
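For the specific "log on HTTP 500" example in the question, a rough PHP-level approximation is to have .htaccess prepend a small bootstrap into every request via auto_prepend_file and register a shutdown function there. This is only a sketch under mod_php, with made-up paths, and unlike a real HttpModule it only sees requests that actually reach PHP:

<?php
// prepend.php - pulled into every request by a per-directory .htaccess line such as:
//   php_value auto_prepend_file "/var/www/app/prepend.php"   (hypothetical path)
register_shutdown_function(function () {
    $status = http_response_code();                 // needs PHP >= 5.4
    if ($status >= 500) {
        $headers = function_exists('getallheaders') ? getallheaders() : array();
        $line = sprintf("[%s] %s %s -> %d headers=%s\n",
            date('c'),
            $_SERVER['REQUEST_METHOD'],
            $_SERVER['REQUEST_URI'],
            $status,
            json_encode($headers));
        error_log($line, 3, __DIR__ . '/http500.log');   // type 3 = append to the given file
    }
});

Fatal errors that never set the status may additionally need an error_get_last() check in the same shutdown function; for anything that fails below the PHP layer you are back to the web-server-level options discussed in these answers.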

You can look at ModSecurity for Apache: http://www.modsecurity.org/
It allows you to log full POST bodies and header data depending on the rules you define.
ModSecurity's configuration is very powerful, but also very complex.
HTTP Traffic Logging
Web servers are typically well-equipped to log traffic in a form useful for marketing analyses, but fall short logging traffic to web applications. In particular, most are not capable of logging the request bodies. Your adversaries know this, and that is why most attacks are now carried out via POST requests, rendering your systems blind. ModSecurity makes full HTTP transaction logging possible, allowing complete requests and responses to be logged. Its logging facilities also allow fine-grained decisions to be made about exactly what is logged and when, ensuring only the relevant data is recorded. As some of the request and/or response may contain sensitive data in certain fields, ModSecurity can be configured to mask these fields before they are written to the audit log.
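As a rough sketch of the kind of audit-log setup this describes (ModSecurity 2.x directives; the log path and the exact audit-log parts are assumptions to adapt):

# Log request/response details only for "relevant" transactions,
# here defined as anything with a 5xx status.
SecRuleEngine On
SecRequestBodyAccess On
SecAuditEngine RelevantOnly
SecAuditLogRelevantStatus "^5"
SecAuditLogParts ABIJDEFHZ
SecAuditLog /var/log/apache2/modsec_audit.log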

Related

NGINX reverse proxy / php pre processing

I need to solve the following scenario: we have an nginx server acting as a reverse proxy in front of some Apache servers. We have to make sure that when a request reaches the nginx proxy it is pre-processed by a PHP script that sets some HTTP request headers based on the URL content, and then the request is passed on to the Apache server.
We have to avoid redirects in this process, but I have no idea how to do it.
Thanks a lot...
[EDIT]
Sorry for the vague question. Our setup is as follows: nginx is used as a load balancer in front of some Apache web servers. On the web servers runs an e-commerce application that generates the content (and the page categories) by analysing the submitted URL. We use a third-party analytics tool that requires a request header populated with the category, but the categories are calculated by the PHP code of the application... I need the request processed by nginx to carry this header before it reaches Apache. I could extract that code from the PHP application and create an intermediate layer, but I have no idea how to manage the whole process.
Here is a simple diagram: black is the as-is situation, green the to-be (or maybe-to-be).
(diagram: simple solution sketch)
Your question is very vague and will probably be closed on that basis. My response here is intended as a comment, but it's a bit long for the comment box.
That you are using nginx as a reverse proxy implies that you are somewhat concerned with performance. While it would be quite possible to implement what you describe, the nature of PHP running in a web server means that it will be rather inefficient at this task: each incoming request would require an extra connection to a backend web server.
Presumably there is some application running on or behind the Apache web servers - is there a reason you don't implement the required functionality there?
Can you provide examples of the changes you need to apply to requests and responses? It's possible that some of this could be handled by nginx or Apache.
Alternatively you might have a look at ICAP (RFC 3507), which is a protocol designed to support this kind of transformation. Although there are server implementations written in PHP, I suspect they would have most of the same performance issues referenced above.
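If the category can in fact be derived from the URL pattern alone, nginx itself could add the header without any PHP hop at all. A rough sketch, in which the header name, the URL patterns and the upstream name are all made up for illustration:

# In the http {} block: derive a category from the request URI...
map $uri $category {
    default          "unknown";
    ~^/shoes/        "shoes";
    ~^/accessories/  "accessories";
}

server {
    listen 80;
    location / {
        # ...and attach it to the proxied request before it reaches Apache.
        proxy_set_header X-Category $category;
        proxy_pass http://apache_backend;
    }
}

If the category logic really does need the application's PHP code, this won't be enough, and the backend application (or an ICAP service) remains the more natural place for it.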

Count downloads without `echo file_get_contents($file)`?

I currently have download links on my server that point directly to files. I have a set of quite complicated rewrite rules, but they don't affect what I am asking about.
What I want to do is count the number of downloads. I know I could write a PHP script that echoes the file contents, together with a rewrite rule so that the PHP script processes all downloads.
However, there are a few points that I am worried about:
There is a chance that some dangerous paths (e.g. /etc/passwd, ../../index.php) will not be blocked due to carelessness or unnoticed bugs
I would need to handle HTTP 404 Not Found (and other) responses in the script, which I would prefer to let Apache handle (I have an error handler script that relies on server redirect variables)
HTTP headers (like content type or modified time) may not be correctly set
A PHP script doesn't usually send HTTP 304 Not Modified responses, so browser caching becomes useless and re-downloads consume extra bandwidth (actually I could check for that, but it would require some more coding and debugging)
A PHP script uses more processing power than having Apache serve the file directly
So, I would like to find some other way to gather the statistics. Can I, for example, make Apache trigger a script when certain files (in certain directories) are requested and downloaded?
This may not be quite what you're looking for, but in the spirit of using the right tool for the job you could easily use Google Analytics (or probably any other analytics package) to track this. Take a look at https://support.google.com/analytics/bin/answer.py?hl=en-GB&answer=1136922.
Edit:
It would require the ability to modify the vhost setup for your site, but you could create a separate Apache log file for your downloads. Let's say you've got a downloads folder storing the files that are available for download; you could add something like this to your vhost:
SetEnvIf Request_URI "^/downloads/.+$" download
LogFormat "%U" download-log
CustomLog download-tracking.log download-log env=download
Now, any time something is requested from the /downloads/ folder, it will be logged in the download-tracking.log file.
A few things to know:
You can have as many SetEnvIf lines as you need. As long as they all set the download environment variable, the request will be logged to the CustomLog
The LogFormat I've shown logs only the requested URI, but you can easily customize it to log much more than that (an example follows below); see http://httpd.apache.org/docs/2.2/mod/mod_log_config.html#logformat for more details.
If you're providing PDF files, be aware that some browsers/plugins will make a separate request for each page of the PDF so you would need to account for that when you read the logs.
The primary benefit of this method is that it does not require any coding: just a simple config change and you're ready to go. The downside, of course, is that you'd have to do some kind of log processing. It just depends on what is most important to you.
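For example, you could replace the LogFormat line above with a slightly richer one using standard mod_log_config placeholders (client IP, timestamp, URI, final status and response size):

LogFormat "%h %t \"%U\" %>s %b" download-log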
Another option would be to use a PHP script and the readfile function. This makes it much easier to log requests to a database, but it does come with the other issues you mentioned earlier.
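A rough sketch of that readfile approach, including the extra work the question anticipates (path sanitising, sane headers). The storage directory, the log file and the rewrite rule that maps /downloads/* onto the script are all assumptions:

<?php
// download.php?file=name.zip
$dir  = '/var/www/files';                                    // hypothetical storage directory
$name = basename(isset($_GET['file']) ? $_GET['file'] : ''); // basename() strips ../ tricks
$path = $dir . '/' . $name;

if ($name === '' || !is_file($path)) {
    header('HTTP/1.0 404 Not Found');
    exit;
}

// Count the download (a database insert would work just as well).
file_put_contents('/var/log/downloads.log', date('c') . " $name\n", FILE_APPEND);

header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($path));
header('Content-Disposition: attachment; filename="' . $name . '"');
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', filemtime($path)) . ' GMT');
// Note: this still does not answer If-Modified-Since with a 304, as the question points out.
readfile($path);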
There are ways to pipe Apache logs to MySQL, but from what I've seen it can be tricky. Depending on what you're doing, it may be worth the effort... but then again it might not.
You can parse the Apache log files.
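With the custom log above (one URI per line), a per-file tally could be as simple as this sketch (the log path is an assumption):

<?php
// Count downloads per URI from the download-tracking log.
$counts = array();
$lines  = file('/var/log/apache2/download-tracking.log',
               FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
foreach ($lines as $uri) {
    $counts[$uri] = isset($counts[$uri]) ? $counts[$uri] + 1 : 1;
}
arsort($counts);   // most downloaded first
print_r($counts);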
Apache's mod_lua is probably the most general, flexible and effective approach to hooking your own code into request processing inside Apache. Usually you choose the language that offers the most direct approach to the task, and Lua is much better at interacting with C/C++ than anything else.
However, there certainly are other strategies, so be creative. Two things come to mind immediately:
Some creative use of PAM if you are on a Unix-like system: configure some kind of dummy authentication requirement and set up PAM to process it. Inside the PAM configuration you can do whatever you like. The advantage: you see the requests and can filter yourself what to count and what not. You have to make sure the PAM response does not create a valid session, though, so that you really get a tick for each request made by a client, not only the first one.
There are other Apache modules that allow you to hook into request processing. Have a look at the forensic logging module or the external filter module. Both allow you to hook external logic into request processing. You will need CLI-based PHP configured for that.

How to debug Javascript + PHP + Web services

Disclaimer: this may be an insane question, but I have struggled a lot, so I came here.
I am working on a legacy application which uses JS + PHP + web services (written in Spring).
Flow of the application:
Whenever a web service is called from JS, the call is routed to a single PHP file. The PHP file authenticates the user (using a web service) and then forwards the request to the actual web service.
How can I debug this application? I have debugged JS using Firebug and server-side code using Eclipse, but never such an application.
~Ajinkya.
I think there are a variety of things that need to be done, and I must say this question is sufficiently general as to not have a straight answer, so I will do my best. As xdazz mentioned, var_dump (and die) are necessary from the PHP standpoint.
Whenever anything is returned to JS, console.log it. In addition, ensure XMLHttpRequest logging is enabled in Firebug, or alternatively view the output of each request in the Chrome Network tab.
With a combination of console.log, var_dump, and die, you can trace non-functioning parts of the application repeatedly step by step until you come across the bug.
Alternatively, and in the long run you ought to be doing this anyway, build error handling code into all the PHP code that is only activated when a debug flag is set to true. This way you can get detailed error messages and then when you deploy, you can turn them off to avoid compromising security.
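A minimal sketch of such a debug flag (the constant name and log path are arbitrary):

<?php
// config.php - flip to false when deploying.
define('APP_DEBUG', true);

error_reporting(E_ALL);
if (APP_DEBUG) {
    ini_set('display_errors', '1');   // show everything while developing
} else {
    ini_set('display_errors', '0');   // never leak details to users
    ini_set('log_errors', '1');
    ini_set('error_log', '/var/log/php/app-errors.log');   // hypothetical path
}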
If you need to inspect the entire lifecycle of a web service request in your scenario, you will need to combine several techniques. Since the scope spans from client to server, you will need to decide how to persist the information you want to inspect.
Personally, I would choose the path of least resistance, which in my case would probably be cookies. With that, you should be able to chronologically log the necessary information via JavaScript and PHP before, during and after the request (and even the redirect) has occurred.
This strategy would allow the information logged in cookies to be dumped or analyzed via JavaScript, the WebKit Inspector or Firebug. Again, this is probably how I would handle such a scenario. Lastly, you can apply different storage strategies to this technique, such as using a session or a database for persistence.
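For instance, the PHP front controller could append a timestamped marker to a trace cookie at each hop; the cookie name is made up, and cookies have size limits, so this only suits short traces:

<?php
// Call trace('auth ok'), trace('forwarding to service'), etc. before any output is sent.
function trace($step) {
    $log  = isset($_COOKIE['debug_trace']) ? $_COOKIE['debug_trace'] : '';
    $log .= date('H:i:s') . ' ' . $step . ' | ';
    setcookie('debug_trace', $log, 0, '/');
    $_COOKIE['debug_trace'] = $log;   // keep the running value within this request too
}

On the client, console.log(document.cookie) (or the browser's storage inspector) then shows the accumulated trace.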
Note: You can use something like WebKit Inspector, and possibly Firebug, to analyze data transmitted and received for GET, POST and even WebSocket requests.

https login form

What should I consider when switching a simple (user + pass) login form from HTTP to HTTPS?
Are there any differences when using HTTPS compared to HTTP?
From what I know, the browser won't cache content served over HTTPS, so page loading might be slower, but other than that I know nothing about this.
Does anyone have experience with these things?
Do not mix secure and non-secure content on the same site as browsers will display annoying warnings if you do so.
Additionally, mark cookies as secure when the user is on HTTPS so they are never sent over an HTTP connection.
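In PHP that roughly means passing the secure (and ideally HttpOnly) flags when setting cookies; a sketch for the session cookie and a custom cookie:

<?php
// Before session_start(): mark the session cookie secure + HttpOnly when on HTTPS.
$secure = !empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off';
session_set_cookie_params(0, '/', '', $secure, true);   // lifetime, path, domain, secure, httponly
session_start();

// Same idea for your own cookies:
setcookie('prefs', 'some-value', time() + 86400, '/', '', $secure, true);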
When switching over to HTTPS, consider that ALL web assets (images, JS, CSS) must be served over HTTPS, otherwise your users will get warnings about insecure transmission of data. If you've got any hard-coded URLs you'll need to change them to HTTPS dynamically.
I would add that you should prefer to send your URL parameters via POST instead of GET, otherwise you may be leaving private data all over the place in log files, browser windows, etc.
The security layer is implemented in the webserver (e.g. Apache), while your login is implemented at the business logic (your application).
There's no difference for your business logic between HTTP and HTTPS; by the time you receive the request it's going to be the same, because you receive it already decrypted. The web server does the dirty work for you.
As you say, it might be a little bit slower because the web server has to encrypt / decrypt the requests.
As Ben says, all the resources have to come from the secure domain, otherwise some browsers get really annoying (such as IE) with the warnings.
From what I know, the browser won't cache content served over HTTPS
Provided you send caching instructions in the headers, the client should still cache the content (MSIE has a switch hidden away to disable caching for HTTPS, but it defaults to caching enabled; Firefox probably has something similar).
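For example, explicitly sending cache headers from PHP (the lifetime is just for illustration):

<?php
// Allow the browser to cache this response for an hour, even over HTTPS.
header('Cache-Control: private, max-age=3600');
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 3600) . ' GMT');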
The time taken for the page to load will be higher, and much more affected by network latency, due to the additional overhead of the SSL handshake (once encryption has been negotiated the overhead isn't that much, but depending on your web server and how it's configured you probably won't be able to use keep-alives with MSIE).
Certainly there will be no difference to your PHP code.
C.

PHP Proxy - Basic Explanation

How does a PHP proxy work?
I am looking to write a little script similar to other PHP proxies.
But how does it actually work?
I'm assuming you mean a PHP proxy used to get around the AJAX Same Origin Policy. If you need a real HTTP proxy, the process is much more complex.
Simplest pseudocode (a rough PHP sketch follows after the list):
get the URL from request (e.g. from $_POST['url'])
reject invalid URLs (e.g. don't make requests to localhost (or within your private subnet, if you have several servers))
(optional) check your script's cache, return cached response if applicable
make request to target URL, e.g. with cURL
(optional) cache response, if applicable
return response
Note: in this simplest form, you are allowing anyone to access any URL on the Internet through your PHP Proxy; some access control should be implemented (e.g. logged-in users only, depending on what you use the proxy for).
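A bare-bones sketch of those steps with cURL, without caching and with only a crude host whitelist standing in for the "reject invalid URLs" step; every name here is illustrative:

<?php
// proxy.php?url=... - forwards a GET request and echoes the response body.
$url     = isset($_GET['url']) ? $_GET['url'] : '';
$allowed = array('api.example.com');              // hypothetical whitelist
$host    = parse_url($url, PHP_URL_HOST);

if (!$host || !in_array($host, $allowed, true)) {
    header('HTTP/1.0 403 Forbidden');
    exit('URL not allowed');
}

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // capture the body instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
$body = curl_exec($ch);
$type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
$code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

header('Content-Type: ' . ($type ? $type : 'text/plain'));
http_response_code($code ? $code : 502);          // PHP >= 5.4
echo $body;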
That's more work than you might think. Simply calling a remote web page and displaying its contents is not enough (that would be readfile('http://google.com') in the simplest case): you have to rewrite the URLs in the HTML document so that they point back to your own proxy, you need to be able to handle HTTPS (or you would be allowing plain access to sensitive data if the target page needs HTTPS), and there are many other issues (some of which have been compiled in RFC 3143).
Maybe Apache's mod_proxy has all you need, but if you really want to write one yourself, studying the source code of other projects (like php-proxy) might give you more insight into the matter.
