At some point in the last week, one of our servers stopped being able to "talk to itself", so to speak.
We have a cron job which calls a script via cURL at https://example.com/scripts/curl-script.php (example.com is hosted on the same machine). This no longer works: when run from the command line, you can see cURL resolving the correct external IP for the domain, but the connection simply times out. The same happens for another URL hosted on the same machine.
wget behaves the same, as does telnetting to example.com on ports 80 or 443. Ping also times out (expected, as there's a firewall in place here) and so does traceroute (all hops are just * * *).
If I add an entry in /etc/hosts mapping example.com to 127.0.0.1, the script starts working as expected - the machine just can't reach itself via its external IP any more.
I haven't changed anything on this server for a while and I don't believe any automated updates have updated any of the related components. This has been working fine for weeks and I can't understand why it would suddenly stop.
Does anyone have any suggestions for a proper fix for this issue instead of the hosts file amendment?
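For reference, the hosts-file workaround described above is a one-line override (example.com stands in for the real domain):

```
# /etc/hosts on the affected server (example.com is a placeholder)
127.0.0.1   example.com
```

The usual root cause for this symptom is a router or firewall that no longer supports NAT hairpinning (reaching your own external IP from inside the network), so checking the firewall/NAT device for recent changes is a good place to start.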
This question will look like an essay because it is really weird (at least I can't see the logic in the problem).
I am running two computers right now: a Toshiba NB510, which I want to use as a web server with XAMPP, controlled over VNC from my second (and main) computer, an MSI 2qe (which also has XAMPP installed).
The problem is that when I run Apache and MySQL on the MSI, I can enter its local IP from any device on my LAN to reach "localhost" (so far, all OK), but when I run XAMPP on the web server, I can only access localhost from that computer itself (using the local IP).
Maybe it's a problem on my network?
Here is a map of the network (done in Paint, sorry)
You have two PCs, both with your servers running, but you can't access one of them, correct?
The usual troubleshooting for this is:
first, check whether you can ping it (to make sure it's on the network);
second, check whether the ports you will use are open (typically, 80 for web);
finally, check whether your server is set to accept incoming connections (Apache config).
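Those three checks might look like this from a shell on the MSI (a sketch; 192.168.1.20 is a hypothetical LAN address for the Toshiba - substitute your own):

```
# 1. Is the machine reachable on the network?
ping -c 1 192.168.1.20

# 2. Is the web port open? (nc -z just probes, it sends no data)
nc -zv 192.168.1.20 80

# 3. On the server itself: is Apache's config valid, and is it listening?
apachectl configtest
netstat -an | grep ':80 '
```

If step 1 fails, look at the network (firewall, switch); if step 2 fails but step 1 succeeds, look at the server's firewall; if both succeed, look at the Apache configuration.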
EDIT:
I've seen your picture now. If, as you said, both PCs can "see" each other (as in, the Toshiba can ping and access the MSI, and the MSI can ping the Toshiba but not access it), check whether your switch is the source of the problem.
All solved: I installed Windows 10 x64 instead of x32, reinstalled XAMPP, and everything works.
I've got two Linux servers set up at the same datacenter. They're hooked up through a private switch, and they respond to each other on local IP addresses (192.168.0.N and 192.168.0.M).
Currently, I have a PHP file on Server N calling a PHP file on Server M via (basically) file_get_contents("http://$domain/$folder/$filename.php");, and that file runs a PHP script on the destination server. The problem with this, of course, is that it goes out over the internet, chewing up bandwidth in both the send and the reply. This is why I hooked them up with the private switch.
How can I set it up so that I can call the other file by replacing the $domain with the 192.168.0.M address? What settings do I have to change on Server N (Centos 6.6, running WHM and CPanel) in order for it to recognize that 192.168.0.M will shortcut to $domain's home folder on Server M?
This is commonly handled with split horizon DNS, where internal DNS queries get answered differently than external ones.
It's best to do this via a proper internal DNS server, but if you can't, you can adjust the hosts file on your servers (in Linux, /etc/hosts) to override the IP for the domain name.
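For example, the hosts-file override on Server N is a single line (a sketch; yourdomain.com is a hypothetical stand-in for $domain, and 192.168.0.M is the placeholder from the question):

```
# /etc/hosts on Server N -- make the domain resolve to Server M's private address
192.168.0.M   yourdomain.com
```

Apache on Server M will still serve the correct site, because it selects the virtual host from the HTTP Host header, which the client sends regardless of how the name was resolved.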
I have a problem that is stuck in my mind for almost 24 hours, and at this moment I don't know how to fix it.
Here is the thing: I want one 'main' socket on my server that processes all incoming data and sends it to other clients using PHP. That part works fine, but I want to connect to that socket via multiple subdomains, e.g. ex.example.com. The trouble is that you cannot connect via a subdomain unless you have a socket listening for it, and that just fills up your ports, which is what I'm trying to prevent.
The ideal solution would be for Apache to accept the incoming TCP request, record which domain the client is connecting to, and then redirect the client to the main socket, which processes the received data and acts immediately once the client is accepted.
Honestly, I have no idea how to do this. I've been searching for hours, but the only thing I've found that got close to it was something on Stack Overflow: Apache - handling TCP connections, but not HTTP requests
But with that piece of script I am not able to save data (which domain you're using) and send it to the main socket.
I don't know if this can be done by Apache or if it is possible at all, or if there are any other workarounds.
Thank you :)
You are confused about subdomains. Sockets, TCP, and IP all know absolutely nothing about names. DNS wasn't invented until that networking stack had been around for years.
Thus, you can point any number of domains at a single "socket" port on a machine.
Apache can route an incoming request to different "webspaces" (i.e. virtual hosts) based on the destination IP address of the incoming connection(1) or the HTTP/1.1 "Host" header(2). The former was how virtual hosts used to be done but now almost everybody uses the latter.
(1) A machine can have multiple IP addresses even with a single network card, but each port is unique to a given protocol on that machine. You point different domains at different addresses and define a mapping on the webserver so it can tell which address the connection arrived on.
(2) The value of the "Host" header is the DNS name that was given to the browser. Since this value is passed explicitly to the webserver, the server doesn't need to rely on tricks like #1.
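A minimal name-based setup for point (2) might look like this in Apache configuration (a sketch; the names and paths are hypothetical):

```
# Two names, one IP, one port -- Apache picks the vhost by the Host header
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/main
</VirtualHost>

<VirtualHost *:80>
    ServerName ex.example.com
    DocumentRoot /var/www/ex
</VirtualHost>
```

Any number of subdomains can be added this way without consuming additional ports.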
I have a PHP script sitting on a server that is hit by several different machines at different times throughout the day, based on cron jobs that are set up on each machine. I'd like to know the IP of the machines making the request, and when the request is made by a browser, the following executes successfully:
<?php
...
echo $_SERVER['REMOTE_ADDR'];
...
?>
However, when the request is made by cURL or any other command-line tool I have tried (lynx included), I end up with the following garbage:
2701:5:4a80:7d:2ee:8eff:5e61:801d
From the investigation I've done, this is a result of Apache not populating the $_SERVER variable for requests received that are made from the command line.
Anyone know of a way to get command line requests to play nice with the $_SERVER variable or should I go down another route?
That's not garbage, that is the correct remote address. Someone used IPv6 to access your server.
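If your scripts need to branch on the address family, the distinction is easy to make - an IPv6 literal contains colons, an IPv4 dotted quad does not. A rough shell sketch (the addresses are just examples):

```shell
# Classify an address string by family (rough check: IPv6 contains ':')
classify_addr() {
  case "$1" in
    *:*) echo ipv6 ;;
    *)   echo ipv4 ;;
  esac
}

classify_addr "2701:5:4a80:7d:2ee:8eff:5e61:801d"   # prints: ipv6
classify_addr "127.0.0.1"                           # prints: ipv4
```

Alternatively, if you want the cron requests themselves to arrive over IPv4, curl accepts a -4 flag to force that family (and -6 to force IPv6), in which case REMOTE_ADDR will show an IPv4 address.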
This question has been asked in many forms, and I have spent more than six hours scouring the internet for an answer that solves my problem. So far, I've been unsuccessful. I use MAMP to develop PHP applications, and I upgraded from Snow Leopard to Lion yesterday; immediately, my local applications were running much slower. I believe it's a DNS-lookup issue related to how Lion handles IPv6. I tried the following steps to fix the problem:
Changed all of the entries in my host file to no longer use the .local TLD
Put all of the entries in my host file onto separate lines
Ensured that my host file had the correct encoding
Added IPv6 entries to all local entries in my host file
Installed dnsmasq (may have not done this correctly)
Put all of my host file entries before the fe80::1%lo0 localhost line
This fixed some problems, but there's still one problem that I haven't figured out. In our PHP applications, we define our SOAP endpoints like so:
api:8080/contract/services/SomeService?wsdl
On each server, there is an "api" entry in the host file that points to the IP address for the SOAP API. So, when I want to point to our dev server, I change my hosts file to look like this:
132.93.1.4 api
(not a real IP)
The DNS lookup for the api entry in the hosts file still takes 5 seconds every time. When I ping api, the result comes back immediately. However, when I ssh to api, it takes about 5 seconds before I can connect to the server. This means that when I load up my PHP application, any SOAP query will take 5 seconds plus however long the actual query takes, making local development totally impossible. I realize that the way we're defining our endpoint may not be the best design decision, but it's what I have to work with.
From other questions I've read, I believe it's trying to look up "api" in IPv6 first, failing, and then looking in /etc/hosts. I tried using dnsmasq to switch this order, but had no luck. Does anybody know how to force it to read /etc/hosts first, or skip IPv6 altogether?
Update: I changed the entry in the hostfile to api.com, api.foo, anything with a "." in it, and it responded immediately. However, I would still like to find a solution that doesn't require changing the name "api".
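To confirm where the delay comes from, macOS lets you compare a lookup through the full resolver (which consults /etc/hosts) with a pure DNS query (a sketch; "api" is the entry from the question):

```
# Goes through the macOS resolver, including /etc/hosts
time dscacheutil -q host -a name api

# Queries DNS directly, bypassing /etc/hosts
time dig +short api
```

If the first is slow while the second returns quickly (or fails quickly), the delay is in the system resolver's handling of the single-label name rather than in DNS itself.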
I was having the same issues ever since I upgraded to a modem which supports IPv6. Adding host entries for both address families (IPv4 and IPv6) fixed the issue for me:
::1 domain.dev # <== localhost on crack
127.0.0.1 domain.dev