I've got two PCs, but each has a different IP. The simple question is: how do I use one of the PCs as a proxy for the other with cURL, so that requests from both PCs come from the same IP?
Is there a way to turn one PC into a proxy server and then have cURL make requests through that IP?
Yes, there are lots of proxy packages that run out of the box (you could even configure Apache to do it). I wouldn't recommend rolling your own in PHP if that's what you're after. You can easily configure cURL to use a proxy; see the curl_setopt options.
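For example, here's a minimal sketch of the cURL side (the proxy address 192.168.1.50 and port 3128 are placeholders for whichever machine ends up running the proxy):

<?php
// Minimal sketch: route this request through a proxy on the other PC.
// The address and port below are placeholders for your own setup.
$ch = curl_init('http://example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_PROXY, '192.168.1.50'); // the proxy PC
curl_setopt($ch, CURLOPT_PROXYPORT, 3128);       // e.g. squid's default port
$response = curl_exec($ch);
curl_close($ch);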
If you are running a webserver on each machine, then you can install a PHP proxy script.
See the Google "PHP Proxy" search results: there are at least 4 choices on the first page.
If you are not running a webserver, then I suggest you download a standalone proxy such as Squid.
This option works on Windows or Linux. You can download Squid for Windows here; just unzip and run Squid, no setup required.
The situation
I have been developing in PHP and using WAMP for the past 2 years. Then I came across a module to implement a chat system followed by instant notifications. So I went and looked it up and found this awesome "nodejs", which allows you to connect to connected users in realtime.
This guy's post, nodejs socket.io and php, showed a way to integrate nodejs and socket.io with PHP without a node server.
So I downloaded his project (github) and ran it on my computer, but it gave a connection refused error on port 8080.
So I went to the nodejs site and installed nodejs on my system (Windows). It automatically updated my environment variables, and I could just go to my command line and run an example project with
path(...)node nodeServer.js
and then run the index file of the project from the shared link, and it started working. Everything runs smooth and nice.
MY QUESTION
If I cannot run the node app in the small example project without installing nodejs on my system, then how am I supposed to install nodejs on a live server (Apache) and use the command line to start it?
I know this might be too silly, but I am really new to nodejs, so I don't know if I can run node on a live PHP server. If it is possible, can anyone tell me how to do that? Or is it just an ideal situation that can't be done?
Node.js does not need to be installed alongside Apache. Node.js itself provides a server that listens on a port. You can put Apache or Nginx in front of it as a proxy, but you can also run your application without either of those servers.
Create a file index.js with the code below and run node index.js:
// Load Node's built-in HTTP module.
var http = require('http');

// Create a server that answers every request with a plain-text greeting.
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1'); // listen on port 1337, localhost only

console.log('Server running at http://127.0.0.1:1337/');
Open your browser and enter this URL: http://127.0.0.1:1337/. You will see Hello World there. In this case nodejs is listening on port 1337.
If you are using a cloud or VPS, or any kind of solution that gives you full control over what gets installed, you can just install node.js there and run what you need...
https://github.com/joyent/node/wiki/installing-node.js-via-package-manager
Some services will allow you to pick what gets installed... so you just pick nodejs and run it alongside your Apache.
However, if you are using a shared hosting solution, only a limited number of them actually host node (if any), and solving this would be almost impossible for you.
Second edit: sorry for editing twice, but about the "no nodejs server" claim in the mentioned stackoverflow post: there actually is a server, and the post mentions the need to npm install certain modules. This is not the right way to do this, but if you still want to try it, you need node installed (and npm along with it), then you need to npm install the mentioned packages, add the simple server file quoted in the post, run it, and then you have all you need for your chat...
If you need some help, ping me, but if this is a time-critical project, rather find some third-party solution... and then learn about this one.
TL;DR: find a hosting service that'll give you admin access and support firewall requests, or self-host with a free DNS subdomain and have a script update your IP so you don't have to pay for a static one.
My Experiences:
You can actually utilize node for input/output stream manipulation as well. Look at gulp and node for more info. Using bower and bluebird on top of a git project makes setting up apps very easy and quick via node.
As for using socket.io with a node/WAMP setup, I've actually used this in the past. I had WAMP installed on the server initially, but I used Apache directives to reverse proxy requests on 8080 to the node.js app from the client scripts.
I did have to install node separately, though, so you'll need something like RamNode maybe (I think they allow hosted apps like IIS/MVC etc. too).
The easiest hosting setup for development, imo, was self-hosting WAMP/node with a free subdomain from FreeDNS (afraid.org).
Otherwise RamNode gives you full access to admin features on your VM, I believe. So you may be able to install node there, as long as you request firewall permissions when needed for extra ports (socket.io used different ports for requests on the page, so I didn't have to worry about CORS issues or anything).
We are running nginx on CentOS 6.4 and upgraded the curl packages from 7.19 to 7.41 yesterday. Since then, cURL no longer checks the hosts file to resolve host names (and so will not connect to xyz.local).
We use cURL via Guzzle and are no longer able to connect to the various services on our local machines.
John Hart posted this answer, which is helpful but would require a fairly significant change to how our site (a LOT of legacy code) manages connections in our local and dev environments.
Is it possible to just tell cURL to use the hosts file?
As a workaround you can install a DNS server that reads your hosts file, and use it as the system DNS resolver. For example, dnsmasq does this by default.
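If a per-handle fix is acceptable anywhere, cURL's CURLOPT_RESOLVE option pins a hostname to an address without consulting DNS at all. A minimal sketch, assuming xyz.local should resolve to 127.0.0.1 on port 80 (the host, port, and IP are placeholders for your services):

<?php
// Pin xyz.local:80 to 127.0.0.1 for this handle only, bypassing DNS.
// Entries use the "host:port:ip" format; values here are placeholders.
$ch = curl_init('http://xyz.local/');
curl_setopt($ch, CURLOPT_RESOLVE, array('xyz.local:80:127.0.0.1'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);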
I'm trying to send a cURL request from a Windows Server 2008 machine using PHP (version 5.3.12) and keep receiving the error Could not resolve proxy: http=127.0.0.1; Host not found. As far as I can tell, I'm not using a proxy: CURLOPT_PROXY is not set, I've run netsh winhttp show proxy to make sure there's no system-wide setting in place, and I've even checked all the browsers on my machine to confirm none are configured to use a proxy (just in case this could possibly have an effect). I'm having trouble figuring out why cURL insists on telling me that 1) I'm using a proxy and 2) it can't connect to it.
I'm able to resolve the error by explicitly disabling the use of a proxy via curl_setopt($curl, CURLOPT_PROXY, '');, but this isn't the greatest solution - a lot of the places I use cURL are in libraries, and it'd be a pain (not to mention less than maintainable) to go around and hack this line into all of them. I'd rather find the root cause and fix it there.
If it helps, this has happened to me only with POST requests so far. Command-line cURL (from a Git bash prompt) works fine. These calls also work fine from our dev machine, so it seems to be something specific to my machine.
If I need to apply the above hack, I will, but I thought before I resorted to that I'd ask the good folks of SO - is there anywhere I'm missing that could be configuring the use of a proxy? Let me know if there's any additional helpful info I forgot to add.
cURL relies on environment variables for proxy settings. You can set these on Windows via "Advanced System Settings".
The variables you need to set and/or change for optimum control are "http_proxy", "HTTPS_PROXY", "FTP_PROXY", "ALL_PROXY", and "NO_PROXY".
However, if you just want to avoid using a proxy at all, you can probably get away with creating the system variable http_proxy and setting it to localhost and then additionally creating the system variable NO_PROXY and setting it to *, which will tell cURL to avoid using a proxy for any connection.
In any case, be sure to restart any command windows to force recognition of the change in system environment variables.
[source - cURL Manual]
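To check whether the PHP process itself has inherited one of these variables, a quick diagnostic sketch (the variable names are the ones listed above):

<?php
// Dump any proxy-related environment variables the PHP process sees.
foreach (array('http_proxy', 'HTTPS_PROXY', 'FTP_PROXY', 'ALL_PROXY', 'NO_PROXY') as $name) {
    $value = getenv($name);
    printf("%s = %s\n", $name, $value === false ? '(not set)' : $value);
}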
I am using this class to make one GET and one POST request to a website (the first request is to set a cookie). I am testing in a Windows XP virtual machine with VirtualBox, using WAMP from wampserver.com. The two requests take from 10 to 18 seconds (with cURL), but if I make those requests directly via the web browser in that same virtual machine, the website loads in just a few seconds and retrieves all the images, CSS, etc.
What could be causing cURL to work so slowly? Is there a way to fix it?
I faced the same issue using the curl command.
As suggested above, forcing an IPv4-only DNS lookup fixed it:
curl -4 $url # nice and fast
(I already had ::1 localhost in my hosts file - but that didn't help).
cURL is probably trying to do a reverse DNS lookup of the server and, since it can't, it just hangs there a little while waiting for the timeout.
If the delay is caused by IPv6, you can try CURL_IPRESOLVE_V4 to bypass it altogether. It really depends on your machine configuration and is more of a question for Server Fault.
Check your web server logs and try to find any difference between the requests from the normal web browser and the requests from cURL.
It is probably due to IPv6.
Try adding:
curl_setopt($ch, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
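In context, a minimal sketch (the URL is a placeholder):

<?php
// Force cURL to resolve hostnames over IPv4 only, skipping slow IPv6 lookups.
$ch = curl_init('http://example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
$response = curl_exec($ch);
curl_close($ch);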
I ran into this issue with a local web server. I was able to fix it by adding
::1 localhost
to the /etc/hosts file.
::1 is the IPv6 equivalent of 127.0.0.1.
I was wondering whether knockd (http://www.zeroflux.org/cgi-bin/cvstrac.cgi/knock/wiki) would be a good way to be able to restart Apache without logging in over SSH. My programming question is whether there is a way to send TCP/UDP packets via PHP, so I can knock via a web client.
I am aware that this is not the safest way of doing it, but I only want to do things like update the SVN and restart Apache, without embedding any passwords the way I would if I used SSH for it.
You may use the fsockopen() functions... but what you are doing (and the way you are doing it) is very risky from a security standpoint... as has been said, SSH is the way :)
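That said, if you do go down this road, here is a rough sketch of sending a knock sequence as UDP packets with fsockopen() (the host and port sequence are placeholders, and it assumes knockd is configured for a UDP sequence):

<?php
// Rough sketch: fire a UDP datagram at each port in the knock sequence.
// Host and ports are placeholders; they must match your knockd config.
$host  = 'example.com';
$ports = array(7000, 8000, 9000);

foreach ($ports as $port) {
    // UDP sockets don't handshake, so this works even on closed ports.
    $fp = @fsockopen("udp://$host", $port, $errno, $errstr, 1);
    if ($fp) {
        fwrite($fp, "\x00"); // knockd only cares about the packet, not its payload
        fclose($fp);
    }
    usleep(200000); // small pause so the knocks arrive in order
}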
If you really want to restart the Apache server via remote (non-SSH) access, you can create a small PHP daemon that just watches for a specific file (e.g. /tmp/restart.apache) and, when that file appears, runs exec("/etc/init.d/apache restart") (or whatever the command is for your distribution). This daemon should run as root... and the thing is, the whole security burden is on you this way; you have to make sure this cannot get arbitrarily executed...
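A rough sketch of such a daemon, assuming /etc/init.d/apache restart is the right command for your distribution:

<?php
// Watcher daemon sketch: run as root (e.g. from an init script).
// The flag path and restart command are examples; adjust for your system.
$flag = '/tmp/restart.apache';

while (true) {
    if (file_exists($flag)) {
        unlink($flag); // clear the flag first so the restart runs only once
        exec('/etc/init.d/apache restart');
    }
    sleep(5); // poll every few seconds
}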
As for your port-knock idea... a simple port scanner may restart your Apache by mistake :) Port knocking is recommended for use in conjunction with SSH auth, not directly with Apache :)
Seriously, you do not want to do what you're trying to do.
You should look into calling your remote server through some sort of secure protocol, like SSH. On the client side, have a small PHP utility application/script that executes remote SSH commands (preferably with a keyfile-only authentication mechanism).
Why not have a PHP script that calls "svn update"? As long as the files are writable by the user Apache runs as, it works great. Just hit that URL to update the website.
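A minimal sketch of such a script (the working-copy path is a placeholder, and you should protect the URL with at least a secret token or an IP allow-list):

<?php
// Hypothetical update endpoint: requesting this script runs "svn update"
// on the web root. The path is a placeholder; lock this down before use.
$output = array();
$status = 1;
exec('svn update /var/www/mysite 2>&1', $output, $status);

header('Content-Type: text/plain');
echo implode("\n", $output) . "\nexit status: $status\n";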
For SVN there is a whole PHP API; try searching for SVN on php.net.
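For instance, a one-call sketch assuming the PECL svn extension is installed (the path is a placeholder):

<?php
// svn_update() from the PECL svn extension returns the new revision
// number on success, or false on failure. The path is a placeholder.
$revision = svn_update('/var/www/mysite');
echo $revision === false ? "update failed\n" : "updated to revision $revision\n";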