This question has been asked in many forms, and I have spent more than six hours scouring the internet for an answer that solves my problem, so far without success. I use MAMP to develop PHP applications. I upgraded from Snow Leopard to Lion yesterday, and immediately my local applications started running much slower. I believe it's a DNS lookup issue related to how Lion handles IPv6. I tried the following steps to fix the problem:
Changed all of the entries in my hosts file to no longer use the .local TLD
Put all of the entries in my hosts file onto separate lines
Ensured that my hosts file had the correct encoding
Added IPv6 entries for all local entries in my hosts file
Installed dnsmasq (I may not have done this correctly)
Put all of my hosts file entries before the fe80::1%lo0 localhost line
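After those changes, my /etc/hosts looked roughly like this (a sketch assuming the default OS X entries, with the api line using the same placeholder IP as below):

127.0.0.1       localhost
255.255.255.255 broadcasthost
::1             localhost
132.93.1.4      api
fe80::1%lo0     localhost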
This fixed some problems, but there's still one problem that I haven't figured out. In our PHP applications, we define our SOAP endpoints like so:
api:8080/contract/services/SomeService?wsdl
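In PHP terms, that ends up as roughly the following (scheme and method name added here for illustration):

$client = new SoapClient('http://api:8080/contract/services/SomeService?wsdl');
$result = $client->someMethod();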
On each server, there is an "api" entry in the host file that points to the IP address for the SOAP API. So, when I want to point to our dev server, I change my hosts file to look like this:
132.93.1.4 api
(not a real IP)
The DNS lookup for the api entry in the hosts file still takes 5 seconds every time. When I ping api, the result comes back immediately. However, when I ssh to api, it takes about 5 seconds before I can connect to the server. This means that when I load up my PHP application, any SOAP query takes 5 seconds plus however long the actual query takes, making local development totally impossible. I realize that the way we're defining our endpoint may not be the best design decision, but it's what I have to work with.
From other questions I've read, I believe it's trying to look up "api" in IPv6 first, failing, and then looking in /etc/hosts. I tried using dnsmasq to switch this order, but had no luck. Does anybody know how to force it to read /etc/hosts first, or skip IPv6 altogether?
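For what it's worth, you can check what the system resolver itself returns for the entry (as opposed to just pinging it) with:

dscacheutil -q host -a name api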
Update: I changed the entry in the hosts file to api.com, api.foo, anything with a "." in it, and it responded immediately. However, I would still like to find a solution that doesn't require changing the name "api".
I was having the same issues ever since I upgraded to a modem that supports IPv6. Adding both loopback entries (IPv4 and IPv6) for the host name fixed the issue for me:
::1 domain.dev # <== localhost on crack
127.0.0.1 domain.dev
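Remember that hosts file edits may not take effect until the resolver cache is cleared; depending on your OS X version that's one of:

dscacheutil -flushcache
sudo killall -HUP mDNSResponder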
Related
At some point in the last week, one of our servers stopped being able to "talk to itself", so to speak.
We have a cronjob which calls a script via cURL at https://example.com/scripts/curl-script.php - example.com is hosted on the same machine - this no longer works. When run via the command line, you can see cURL looking up the correct external IP for the domain, but the connection simply times out. Same for another URL hosted on the same machine.
The same happens with wget, or when telnetting to example.com on port 80 or 443. Ping also times out (expected, as there's a firewall in place here) and so does traceroute (all hops are just * * *).
If I add an entry in /etc/hosts to map example.com to 127.0.0.1, the script starts working as expected; the machine just can't talk to itself via its external IP any more.
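For what it's worth, the hosts-file workaround can be reproduced per request, without editing /etc/hosts, using curl's --resolve option:

curl -v --resolve example.com:443:127.0.0.1 https://example.com/scripts/curl-script.php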
I haven't changed anything on this server for a while and I don't believe any automated updates have updated any of the related components. This has been working fine for weeks and I can't understand why it would suddenly stop.
Does anyone have any suggestions for a proper fix for this issue instead of the hosts file amendment?
This is my very first question here, so please be lenient.
For several weeks I have been building a web site with both PHP and JavaScript, and I also use Node.js with socket.io and Express.
When I tested locally (on Debian Linux), I configured my app.js to listen on port 3000. That way there was no conflict between Apache (which was already listening on port 80) and Node.js, and everything worked well.
But yesterday I attempted for the first time to actually host my website, and of course Node.js no longer worked (which I think is normal, since only port 80 is reachable, isn't it?), though the rest of the website still worked.
So I did some research and found a solution here that deals with proxying through Apache. Unfortunately, since I applied it, my browser no longer displays my /index.php normally; instead, it tries to download index.php as a binary file.
(Some details: my app.js is configured to work with /game.php, not /index.php, but if I try to access /game.php it shows: "Cannot GET /game.php".)
I'm a little lost. I'm still trying to research this myself, but I know I'm lacking knowledge.
PS: Before touching the apache2.conf file, I attempted to redirect port 3000 to 80 with iptables rules in /etc/rc.local, but the same problem occurred: the browser only wants to download /index.php.
Thanks for reading, and sorry for my bad English.
If you want more details, just ask.
Your PHP config is messed up; you can tell because PHP files download instead of rendering in your browser. If I had to guess, I'd say it has something to do with those virtual host blocks you added. Look up how to set up PHP with Apache, or ask your hosting provider for help. With the information you have provided there is not much more we can do, as your question is not clear.
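As a rough illustration of the usual pattern (a sketch only, since you haven't posted your actual config; ServerName and DocumentRoot are placeholders): proxy only the socket.io path to Node and let Apache keep handling PHP, instead of proxying everything to port 3000. With mod_proxy and mod_proxy_http enabled, something like:

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/html

    # Only the socket.io traffic goes to Node; *.php stays with Apache/PHP
    ProxyPass /socket.io/ http://127.0.0.1:3000/socket.io/
    ProxyPassReverse /socket.io/ http://127.0.0.1:3000/socket.io/
</VirtualHost>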
I'm trying to get PHP's FTP functions to work from within a VM. I can connect using ftp_connect, but I can't actually do anything afterwards.
HOST: Ubuntu 14.10
GUEST: Debian 7
Stack: Vagrant - VirtualBox - Debian - LAMP
I'm using Vagrant to run a VirtualBox VM that runs a LAMP stack. In PHP I'm making some calls (ftp_pasv, ftp_nlist) that are not working.
I discovered that the issue is caused by VirtualBox's NAT networking, because the FTP protocol uses random ports for its data connections. I have the perfect Vagrant/VirtualBox setup except for this one issue. Does anyone know of a way to get FTP to work on the guest OS in this scenario? I know I could try a bridged setup, but that means a bunch more setup work, and the machine would be publicly reachable. So I would prefer to get it working behind NAT.
I have also tried using ftp_pasv to turn passive mode on, which should fix the issue, but the function returns false when I call it.
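For reference, the call sequence I mean looks roughly like this (host and credentials are placeholders):

$conn = ftp_connect('ftp.example.com'); // this part works
ftp_login($conn, 'user', 'password');
// per the PHP docs, ftp_pasv only works after a successful ftp_login
ftp_pasv($conn, true);                  // returns false for me
$files = ftp_nlist($conn, '.');         // never returns a listing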
As far as I know this isn't possible. Maybe it would work if you hacked some source code and compiled a custom solution, but that's harder than just using a different setup. I've resorted to using cURL to make the FTP connections, which works for listing files and downloading them.
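A minimal sketch of the cURL approach, assuming placeholder host and credentials:

<?php
// List a directory over FTP via PHP's cURL extension instead of ftp_*.
// cURL defaults to extended passive mode (EPSV), which tends to behave
// better behind NAT than active mode does.
$ch = curl_init('ftp://ftp.example.com/some/dir/'); // trailing slash = ask for a listing
curl_setopt($ch, CURLOPT_USERPWD, 'user:password');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$listing = curl_exec($ch);
if ($listing === false) {
    echo 'cURL error: ' . curl_error($ch) . PHP_EOL;
} else {
    echo $listing;
}
curl_close($ch);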
Anyone that comes across this question and actually finds a solution please post it here.
The problem is most likely related to the network configuration. The fact that, e.g., creating a directory works while getting a directory listing does not indicates that there's an issue with the data (back) channel.
A potential root cause is the configuration of the network router. Some routers seem to handle packets differently depending on which MAC address they are sent from (host vs. guest system).
I had this issue, and it turned out that upgrading VirtualBox solved it. Possibly some bug in the NAT interface.
OK, so I have set up a LAMP and MediaWiki installation on my machine at the path http://localhost/mw/. I then installed Windows on a virtual machine so I could test the MediaWiki installation with Internet Explorer, and set the appropriate $wgServer setting to my host's IP address, which was reachable from the VirtualBox guest.
First I accessed http://x.x.x.x/ and got a directory listing. Yay, it works, right? ... No.
I then accessed http://x.x.x.x/mw/ (the MediaWiki path), and to my surprise IE just kept loading. Several hours went by and IE was still loading the page: no connection timeout, no receive timeout, just loading, forever and ever.
To investigate what was really going on, I downloaded the CLI utility cURL and ran: curl -v http://x.x.x.x/mw/index.php/Main_Page. I was able to retrieve the page, but the result was mind-blowing!
First off, MediaWiki reports that the page was rendered quite fast (as read from the received HTML source):
Served in 0.356 secs.
cURL, on the other hand:
* 14542 bytes transfered in 764.580 seconds (19 bytes/sec).
This suggests to me that for some reason any path under /mw/ has a very slow transfer rate. All the other sites work just fine, but not /mw/.
And since I never got a connection or receive timeout in IE, I'm guessing that I'm receiving the page byte by byte at a very slow rate, and that this happens for every resource on the page I'm trying to get.
To make things even more interesting, the host machine can access /mw/ without any problems at all. I also tried connecting from another computer on the network (not a virtual machine), and it suffered the same endless loading.
Any ideas on what is going on here?
The issue traces back to the Xdebug module when it is configured with auto connect-back.
Removing xdebug.remote_connect_back from the Xdebug config solved the issue.
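That is, in the Xdebug section of your ini file, delete the line or set it to 0 (setting name as in Xdebug 2.x):

xdebug.remote_connect_back = 0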
I have a standard PHP app that uses SQL Server as the back-end database. There is a serious delay in response for each page I access. This is my development server, so it's not an issue with the live setup, but it is really annoying when working on the system.
I have a 5 - 8 second delay on each page.
I am running SQL Server 2000 Developer Edition on a virtual machine (Virtual PC).
I have also installed SQL Server directly on my development machine, but I get the same delay.
I have isolated the issue to the call to mssql_connect (calling mssql_pconnect has no effect)
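A minimal timing check along these lines is how I narrowed it down (server name and credentials are placeholders):

$start = microtime(true);
$link = mssql_connect('MYSERVER', 'user', 'password'); // the 5-8 seconds are spent here
printf("connect took %.2f seconds\n", microtime(true) - $start);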
It is a networking issue in how I have set up (or not set up, since I didn't really change the default config) SQL Server. It's not strictly a programming issue, but I thought I might get some valuable feedback here.
Can anyone tell me if there is a trick, specific set of protocols, registry setting, something that will kill this delay?
I was also experiencing a 5-10 second delay on every connect, using the official Microsoft SQL drivers for PHP (as suggested by @gaRex); none of the answers posted here solved it for me.
As suggested by @ircmaxell, my problem was a DNS issue, and the solution was to edit the \windows\system32\drivers\etc\hosts file (your local hosts file) and add the name of my own machine to it.
In the "System Properties" dialog, find the "computer name" of your machine, then add a line like 127.0.0.1 my-computer to your local hosts file.
For me, the delay occurred once more on the next page load; after that it was super fast, no delay at all.
Note that this problem may occur even on a physical machine, not only on a VM.
I came across network issues when running Virtual PC: everything network-related was slow. Try adding this entry to your registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Create a new DWORD value named DisableTaskOffload and set its value to 1.
Restart the computer.
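For reference, the same change from an elevated command prompt should look something like:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v DisableTaskOffload /t REG_DWORD /d 1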
It worked for me.
Is it perhaps a DNS issue? I know that MySQL does a reverse DNS lookup on each login (not each connection). If you don't have a reverse DNS record for your server (or your DNS is slow), it can cause a major delay at login. There's an option in MySQL to disable that. I'm not sure about SQL Server, but I'd assume it may be doing something similar.
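(For MySQL, the option in question is skip-name-resolve in my.cnf; SQL Server may have an equivalent knob.)

[mysqld]
skip-name-resolve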
I remember the same problem, but I've forgotten how we solved it.
To clarify, please post your exact connection strings and your SQL Server versions, and also try launching the good old utility c:\WINDOWS\system32\cliconfg.exe, which can also shed some light.
Yes, I know it's from the Windows 2000 era, but the folks at Microsoft don't like to create client tools from scratch.
Also try to get the "right" MSSQL client DLLs for PHP.