I created a simple PHP script with the following contents:
<?php
phpinfo();
?>
Then I requested it on the server normally as a URL (for example http://xxx.xxx.xxx.xxx/whatever.php) and got the usual phpinfo() output about the PHP installation.
Next, I used a developer tool (Opera Dragonfly) to request the same URL again, with the Host HTTP request header changed from xxx.xxx.xxx.xxx to a random word; I'll call the new value word.
The same page appears, but the SCRIPT_URL environment variable is http://word/whatever.php instead of http://xxx.xxx.xxx.xxx/whatever.php, and HTTP_HOST and SERVER_NAME are both set to word.
Why can't one environment variable hold the domain part of the URL someone actually typed (like http://xxx.xxx.xxx.xxx) while another holds the value of the HTTP Host: header? And how do I fix this?
I'm using Apache.
Related
For my router, I want to get the host name from the URI of a server request. I know that I need to read it from the $_SERVER variable, but it seems that the $_SERVER array has multiple entries (at least two) for the host name.
Could you please tell me which value I should read - the most reliable one?
For example, when I have a URI like this:
http://local.mvc/mycontroller/myaction
the $_SERVER array will have:
[HTTP_HOST] => local.mvc
[SERVER_NAME] => local.mvc
I need to obtain the value local.mvc.
Thank you for your time.
SERVER_NAME is the name assigned to the server in its configuration (e.g. the ServerName directive in apache.conf, or the equivalent for other web servers), while HTTP_HOST is taken from the Host header of the HTTP request coming from the client (usually a web browser). The two can differ if your server serves multiple domains (as with shared hosting or virtual hosts). Depending on the use case you may want either one, but HTTP_HOST is usually the better choice because it tells you which host the user actually asked for.
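A quick way to see the difference on any page (a minimal sketch):
<?php
// HTTP_HOST is taken from the Host: header the client sent.
echo $_SERVER['HTTP_HOST'], "\n";

// SERVER_NAME comes from the server configuration (e.g. Apache's ServerName),
// although with Apache's default UseCanonicalName Off it is also derived from
// the Host header, which is why the two often show the same value.
echo $_SERVER['SERVER_NAME'], "\n";
?>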
The $_SERVER array has 'REQUEST_URI'.
Just var_dump the $_SERVER variable and it should be there (or look at the documentation).
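For example (a small sketch; the URL string is just the example from the question):
<?php
var_dump($_SERVER['REQUEST_URI']);   // e.g. "/mycontroller/myaction"
var_dump($_SERVER['HTTP_HOST']);     // e.g. "local.mvc"

// If you ever have a full URL as a string, parse_url() can pull the host out:
var_dump(parse_url('http://local.mvc/mycontroller/myaction', PHP_URL_HOST)); // "local.mvc"
?>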
If you're using PHP 5 or PHP 7, try php_uname().
To get the host name:
php_uname("n");
This is a weird problem: a vhost in Apache has been configured on my local machine to accept requests like http://dev.myproject.com, and my hosts file contains a corresponding entry, e.g.
127.0.0.1 dev.myproject.com
Now, if I use the URL http://dev.myproject.com in my browser, everything works as expected, i.e. index.php is executed.
However, if I start my console and use
curl http://dev.myproject.com
it seems to ignore the entry in my hosts file, i.e. it tries to resolve dev.myproject.com via DNS, resulting in
curl: (6) Could not resolve host: dev.myproject.com
Any ideas? I'm stuck...
Use the --resolve flag on your curl command, e.g. curl --resolve dev.myproject.com:80:127.0.0.1 http://dev.myproject.com, which maps the host name to the given address without doing a DNS lookup.
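If the request is being made from PHP's cURL extension rather than the command line, the analogous option is CURLOPT_RESOLVE (a sketch; requires PHP 5.5+ and a reasonably recent libcurl):
<?php
// Map dev.myproject.com:80 to 127.0.0.1 without consulting DNS,
// mirroring the command-line --resolve flag.
$ch = curl_init('http://dev.myproject.com/');
curl_setopt($ch, CURLOPT_RESOLVE, ['dev.myproject.com:80:127.0.0.1']);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$body = curl_exec($ch);
curl_close($ch);
?>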
On Android, I'm using the PhoneGap FileTransfer API:
var ft = new FileTransfer();
ft.upload(pic_to_upload, "http://" + app_domain + "/test_phonegap.php/",
success, failure, options);
If I set the domain to localhost or 127.0.0.1 or 10.0.0.6 (internal IP) it works,
but if I use the actual domain of the website it doesn't work.
More specifically, the PHP script is executed (the server is Apache), but the $_REQUEST and $_FILES variables are empty, whereas with localhost it receives everything just fine.
I've put this into my xml/config.xml:
<access origin="http://127.0.0.1*"/> <!-- allow local pages -->
<access origin="http://www.domain.com/"/>
where domain.com is the domain to which I sent the request. Again, it does receive the request, but $_REQUEST and $_FILES are empty (as are $_GET and $_POST).
What could be going wrong? I'm completely baffled.
All the other AJAX requests I've made were JSONP and worked without a problem, but sadly the file upload won't work.
Also, Apache's error log from the last week or so shows nothing about this.
Thanks for any help.
Add your domain to the PhoneGap/Cordova whitelist. Unless you add a domain to the whitelist, PhoneGap will block any attempt to access it. For more information:
http://docs.phonegap.com/en/1.9.0/guide_whitelist_index.md.html
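Separately, to confirm whether the upload data is reaching PHP at all, a minimal server-side dump can help (a sketch; test_phonegap.php is the endpoint from the question and the log path is an assumption):
<?php
// test_phonegap.php - log what actually arrives with the upload request.
$log = sprintf(
    "method=%s content_type=%s content_length=%s\nPOST=%s\nFILES=%s\n",
    $_SERVER['REQUEST_METHOD'],
    isset($_SERVER['CONTENT_TYPE']) ? $_SERVER['CONTENT_TYPE'] : '(none)',
    isset($_SERVER['CONTENT_LENGTH']) ? $_SERVER['CONTENT_LENGTH'] : '(none)',
    print_r($_POST, true),
    print_r($_FILES, true)
);
file_put_contents('/tmp/upload_debug.log', $log, FILE_APPEND); // assumed writable path
echo 'ok';
?>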
As far as I'm aware, the web server (Apache/Nginx) provides $_SERVER['REMOTE_ADDR'] based on the claimed location of the requesting user agent. So I understand it can be spoofed, but is it possible for this value to be blank? Would the network interface or web server even accept a request without a correctly formed IP address?
http://php.net/manual/en/reserved.variables.server.php
It is theoretically possible, as the matter is up to the HTTP server, or at least the corresponding PHP SAPI.
In practice, I haven't encountered such a situation, except with the CLI SAPI.
EDIT: For Apache, it would seem this is always set, as ap_add_common_vars always adds it to the table that ends up being read by the Apache module PHP SAPI (disclaimer: I have very limited knowledge of Apache internals).
If using PHP in a CGI environment, the specification in RFC 3875 seems to guarantee the existence of this variable:
4.1.8. REMOTE_ADDR
The REMOTE_ADDR variable MUST be set to the network address of the
client sending the request to the server.
Yes. I currently see values of "unknown" in my Apache-behind-Nginx logs, for what looks like a normal request/response sequence. I believe this is possible because mod_extract_forwarded is modifying the request to reset REMOTE_ADDR based on data in the X-Forwarded-For header. So the original REMOTE_ADDR value was likely valid, but as part of passing through our reverse proxy and Apache, REMOTE_ADDR appears invalid by the time it arrives at the application.
If you have Perl's libwww-perl installed, you can test this situation like this (change example.com to your own domain or application):
HEAD -H 'X-Forwarded-For: ' -sSe http://www.example.com/
HEAD -H 'X-Forwarded-For: HIMOM' -sSe http://www.example.com/
HEAD -H 'X-Forwarded-For: <iframe src=http://example.com>' -sSe http://www.example.com/
(You can also use any other tool that allows you to handcraft HTTP requests with custom request headers.)
Now, go check your access logs to see what values they logged, and check your applications to see how they handled the bad input.
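If the application has to cope with this kind of setup, one defensive approach is to validate REMOTE_ADDR and only consult X-Forwarded-For when the request came from a proxy you control (a sketch; the trusted proxy address is a placeholder):
<?php
$remote = isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '';
$client = filter_var($remote, FILTER_VALIDATE_IP) ? $remote : null;

// Only trust X-Forwarded-For if the direct peer is our own reverse proxy.
$trustedProxies = ['10.0.0.5'];   // placeholder address
if (in_array($remote, $trustedProxies, true) && !empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
    // X-Forwarded-For may be a comma-separated list; the left-most entry is the original client.
    $parts     = explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']);
    $forwarded = trim($parts[0]);
    if (filter_var($forwarded, FILTER_VALIDATE_IP)) {
        $client = $forwarded;
    }
}
// $client is now a validated IP address, or null if nothing usable was present.
?>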
Well, it's reserved but writable. I've seen badly written apps that were scribbling all over the superglobals - could the script be overwriting it, e.g. with $_SERVER['REMOTE_ADDR'] = '';?
Other than that, even if the request were proxied, the address of the proxy should still be there - could it be some sort of internal-rewrite module messing with it (mod_rewrite allows internal redirects; I'm not sure if that affects this)?
It shouldn't be blank: whatever is connecting must have an IP address in order to send and receive data, so a client with no address can't reach your web service at all. Whether that IP address can be trusted is a different matter.
I'm attempting to use curl inside PHP to grab a page from my own web server. The page is pretty simple, just some plain text output. However, the request returns null. I can successfully retrieve other pages on other domains, and other pages on my own server, with it. I can see the page in the browser just fine, and I can grab it with command-line wget just fine; it's only when I try to grab that one particular page with curl that it comes up null. We can't use file_get_contents because our host has it disabled.
Why in the world would this different behavior be happening?
Found the issue. I was passing my URL somewhere other than curl_init(), and that place was truncating the query string. Once I moved it back to curl_init(), it worked.
Try setting curl's user agent. Sometimes hosts will block "bots" by blocking things like wget or curl - but usually they do this just by examining the user agent.
You should check the output of curl_error() and also take a look at the HTTP server's log files.
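A minimal sketch combining those suggestions (the URL and user agent string are placeholders):
<?php
$ch = curl_init('http://www.example.com/plain-text-page.php');   // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);                  // return the body instead of printing it
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (curl test)');  // some hosts block default curl/wget agents

$body = curl_exec($ch);
if ($body === false) {
    // curl_error() tells you why the transfer failed (DNS, timeout, blocked, ...).
    echo 'cURL error: ' . curl_error($ch);
} else {
    var_dump($body);
}
curl_close($ch);
?>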