Is there a way to check if a host is up? - php

I'm trying to do this in PHP. I need to check if a specified host is "up"
I thought of pinging the specified host (though I'm not sure how I would, since that would require root. --help here?)
I also thought of using fsockopen() to try to connect on a specified port, but that would fail too if the host wasn't listening for connections on that port.
Additionally, some hosts block ping requests, so how might I get around this? This part isn't a necessity, though, so don't worry about this too much. I realize this one might get tricky.

I typically do a simple cURL for a public page and see if it returns a 200. If you get a 500, 404, or anything besides a 200 response you know something fishy is up.

The short answer is that there is no good, universal way to do this. Ping is about as close as you can get (almost all hosts will respond to it), but as you observed, sending an ICMP ping from PHP usually requires root access to open the raw socket.
Does your host allow you to execute system calls, so you could run the ping command at the OS level and then parse the results? This is probably your best bet.
$result = exec("ping -c 2 google.com");
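For illustration, a minimal sketch of that approach, checking ping's exit status instead of parsing its text output (the -W timeout flag is a Linux assumption; other platforms spell it differently):

$host = escapeshellarg('google.com');            // guard against shell injection
exec("ping -c 2 -W 2 $host", $output, $status);  // $status receives ping's exit code
if ($status === 0) {
    echo "Host responded to ping\n";
} else {
    echo "No ping response (exit code $status)\n";
}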
If a host is blocking a ping request, you could do a more general portscan to look for other open ports (but this is pretty rude, don't do it to hosts who haven't given you specific permission). Nmap is a good tool for doing this. It uses quite a few tricks to figure out if a host is up and what services may or may not be running. Be careful though, as some shared hosting providers will terminate your account for "hacking activity" if you install and use Nmap, especially against hosts you do not control or have permission to probe.
Beyond that, if you are on the same unswitched ethernet layer as another host (if you happen to be on the same open WiFi network, for example), an ethernet adaptor in promiscuous mode can sniff traffic to and from a host even if it does not respond directly to you.

You could use cURL:
$url = 'yoururl';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_NOBODY, true);          // HEAD request: we only need the status code
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // follow redirects to the final page
curl_exec($ch);
$retcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if ($retcode == 200) {
    // All's well
} else {
    // not so much
}

For the host to be monitored at all, at least one port must be open. Is the host a web server? If so you could just open a connection to port 80, as long as it's opened successfully then at least some part of the host is working.
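A minimal sketch of that port check, assuming a web server on port 80 and using a short timeout so a dead host doesn't hang the script:

$fp = @fsockopen('example.com', 80, $errno, $errstr, 3); // 3-second connect timeout
if ($fp) {
    echo "Port 80 is accepting connections\n";
    fclose($fp);
} else {
    echo "Connect failed: $errstr ($errno)\n";
}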
A better solution would be to have a script that is web accessible to just your monitor, and then you could open a connection to that, and that script would return various bits of system info.
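For example, a hypothetical health.php exposed only to the monitor might look like this (the particular checks shown are illustrative assumptions, not a fixed list):

header('Content-Type: application/json');
echo json_encode([
    'status'          => 'ok',
    'time'            => date('c'),
    'disk_free_bytes' => disk_free_space('/'),
    'load'            => sys_getloadavg(), // 1/5/15-minute load averages
]);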
EDIT--
How thorough do you want this test to be?
[server on] -> [apache running] -> [web application working]
are all different levels of "working". Just showing that Apache returns something at least proves the server is on, but not that your web app is running.
(I realise that you may not be running anything like this but I hope it's a useful example)
EDIT--
Would it be worth installing a lightweight HTTP server (I mean very lightweight) just for monitoring?
Failing that, could you install something on the hosts that phones home every so often to show they are up?

I used gethostbyname($hostname).
The function returns the IP address if it can resolve the hostname, or the unchanged input hostname if it can't. Note that this only proves the name resolves in DNS, not that the machine is actually reachable.
if ($hostname !== gethostbyname($hostname)) {
//Host is up
}

Related

PHP Fatal error: Uncaught PDOException: SQLSTATE[HY000] [2002] on heavy query [duplicate]

Sometimes I get the following error while making an HttpWebRequest to a web service. My code is copied below too.
System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it 127.0.0.1:80
at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.InternalConnect(EndPoint remoteEP)
at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Int32 timeout, Exception& exception)
--- End of inner exception stack trace ---
at System.Net.HttpWebRequest.GetRequestStream()
ServicePointManager.CertificatePolicy = new TrustAllCertificatePolicy();
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.PreAuthenticate = true;
request.Credentials = networkCredential(sla);
request.Method = WebRequestMethods.Http.Post;
request.ContentType = "application/x-www-form-urlencoded";
request.Timeout = v_Timeout * 1000;
if (url.IndexOf("asmx") > 0 && parStartIndex > 0)
{
AppHelper.Logger.Append("#############" + sla.ServiceName);
using (StreamWriter reqWriter = new StreamWriter(request.GetRequestStream()))
{
while (true)
{
int index01 = parList.Length;
int index02 = parList.IndexOf("=");
if (parList.IndexOf("&") > 0)
index01 = parList.IndexOf("&");
string parName = parList.Substring(0, index02);
string parValue = parList.Substring(index02 + 1, index01 - index02 - 1);
reqWriter.Write("{0}={1}", HttpUtility.UrlEncode(parName), HttpUtility.UrlEncode(parValue));
if (index01 == parList.Length)
break;
reqWriter.Write("&");
parList = parList.Substring(index01 + 1);
}
}
}
else
{
request.ContentLength = 0;
}
response = (HttpWebResponse)request.GetResponse();
If this happens every time, it literally means that the machine exists but has no services listening on the specified port, or a firewall is blocking you.
If it happens occasionally - you used the word "sometimes" - and retrying succeeds, it is likely because the server has a full 'backlog'.
When you are waiting to be accepted on a listening socket, you are placed in a backlog. This backlog is finite and quite short - values of 1, 2 or 3 are not unusual - and so the OS might be unable to queue your request for the 'accept' to consume.
The backlog is a parameter on the listen function - all languages and platforms have basically the same API in this regard, even the C# one. This parameter is often configurable if you control the server, and is likely read from some settings file or the registry. Investigate how to configure your server.
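As a cross-language illustration (the API being essentially the same everywhere, as noted above), here is a hedged PHP sketch of where the backlog parameter appears; the port and backlog size are arbitrary:

$server = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_bind($server, '0.0.0.0', 8080);
socket_listen($server, 128); // backlog: how many pending connections the OS will queue
while (true) {
    $client = socket_accept($server); // each accept() frees one slot in the backlog
    // hand $client off to a worker quickly so accept() stays ready
    socket_close($client);
}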
If you wrote the server, you might have heavy processing in the accept of your socket, and this can be better moved to a separate worker-thread so your accept is always ready to receive connections. There are various architecture choices you can explore that mitigate queuing up clients and processing them sequentially.
Regardless of whether you can increase the server backlog, you do need retry logic in your client code to cope with this issue - as even with a long backlog the server might be receiving lots of other requests on that port at that time.
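A hedged sketch of such retry logic, here in PHP with exponential backoff (the attempt count and delays are arbitrary choices, not part of the original answer):

function connectWithRetry(string $host, int $port, int $attempts = 5)
{
    for ($i = 0; $i < $attempts; $i++) {
        $fp = @fsockopen($host, $port, $errno, $errstr, 2.0);
        if ($fp !== false) {
            return $fp;              // connected
        }
        usleep((2 ** $i) * 100000);  // back off: 0.1s, 0.2s, 0.4s, ...
    }
    return false;                    // still refused after all attempts
}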
There is a rare possibility that a NAT router would give this error should its ports for mappings be exhausted. I think we can discard this possibility as too much of a long shot, though, since a router can sustain about 64K simultaneous connections to the same destination address/port before exhaustion.
The most probable reason is a Firewall.
This article contains a set of reasons, which may be useful to you.
From the article, possible reasons could be:
FTP server settings
Software/Personal Firewall Settings
Multiple Software/Personal Firewalls
Anti-virus Software
LSP Layer
Router Firmware
Computer Turned Off
Computer Not Plugged In
Fiddler
I had the same problem: the port number of the web service was changing unexpectedly.
This problem usually happens when you have more than one copy of the project.
My project was calling the web service with a specific port number which I had assigned in the Web.config file of my main project. As the port number changed unexpectedly, the browser was unable to find the web service and threw that error.
I solved this by following the below steps: (Visual Studio 2010)
Go to Properties of the Web service project --> click on Web tab --> In Servers section --> Check Specific port
and then assign the standard port number by which your main project is calling the web service.
I hope this will solve the problem.
Cheers :)
I think you need to check your proxy settings in "Internet Options". If you are using proxy/'hide IP' applications, this problem may occur.
I had the same problem: I hadn't started the Selenium server. I downloaded the Selenium server and started it; after that, the issue was gone and everything worked fine.
Refer to this: http://coding-issues.blogspot.in/2012/11/no-connection-could-be-made-because.html
I had the same error with my WCF service using the Net TCP binding, but it was resolved after starting the services below in my case.
Net.Pipe.Listener.Adapter
Net.TCP.Listener.Adapter
Net.Tcp Port Sharing Service
In my case, some domains worked, while some did not. Adding a reference to my organization's proxy Url in my web.config fixed the issue.
<system.net>
<defaultProxy useDefaultCredentials="true">
<proxy proxyaddress="http://proxy.my-org.com/" usesystemdefault="True"/>
</defaultProxy>
</system.net>
When you call a service that is only available over HTTP (e.g. http://example.com) using HTTPS (e.g. https://example.com), you get exactly this error: "No connection could be made because the target machine actively refused it".
I faced the same error because my server and client were running on the same machine. When both run on one machine, the client needs the server's local IP address, not its public IP address; the public address is only needed when server and client run on separate machines. The local IP address can be found using a method like this:
public static string Getlocalip()
{
    try
    {
        IPAddress[] localIPs = Dns.GetHostAddresses(Dns.GetHostName());
        // Pick the first IPv4 address instead of relying on a fixed index,
        // which breaks when the adapter list changes.
        foreach (IPAddress ip in localIPs)
        {
            if (ip.AddressFamily == AddressFamily.InterNetwork)
                return ip.ToString();
        }
        return "null";
    }
    catch (Exception)
    {
        return "null";
    }
}
I got this error in an application that uses AppFabric. The clue was getting a DataCacheException in the stack trace. To see if this is the issue for you, run the following PowerShell command:
#("AppFabricCachingService","RemoteRegistry") | % { get-service $_ }
If either of these two services are stopped, then you will get this error.
For me, I wanted to start mongo in the shell (irrelevant to the exact context of the question, but I got the same error message before even starting mongo in the shell).
The 'MongoDB Service' process wasn't running in Services.
Start cmd as Administrator and type:
net start MongoDB
To verify MongoDB is up and running, just type mongo in cmd; it will print the Mongo version details and the Mongo connection URL.
Well, I've received this error today on Windows 8 64-bit out of the blue, for the first time, and it turns out my my.ini had been reset, and the bin/mysqld file had been deleted, among other items in the "Program Files/MySQL/MySQL Server 5.6" folder.
To fix it, I had to run the MySQL installer again, installing only the server, and copy a recent version of the my.ini file from "ProgramData/MySQL/MySQL Server 5.6", named my_2014-03-28T15-51-20.ini in my case (don't know how or why that got copied there so recently) back into "Program Files/MySQL/MySQL Server 5.6".
The only change to the system since MySQL worked was the installation of Native Instruments' Traktor 2 and a Traktor Audio 2 sound card, which really shouldn't have caused this problem, and no one else has used the system besides me. If anyone has a clue, it would be kind of you to comment to prevent this for me and anyone else who has encountered this.
For service reference within a solution.
Restart your workstation
Rebuild your solution
Update service reference in WCFclient project
At this point, I received a message (Windows 7) asking to allow system access.
Then the service reference was updated properly without errors.
I would like to share this answer I found, because the cause of the problem was not the firewall or a process not listening correctly; it was the code sample provided by Microsoft that I used.
https://msdn.microsoft.com/en-us/library/system.net.sockets.socket%28v=vs.110%29.aspx
I implemented this function almost exactly as written, but what happened is I got this error:
2016-01-05 12:00:48,075 [10] ERROR - The error is: System.Net.Sockets.SocketException (0x80004005): No connection could be made because the target machine actively refused it [fe80::caa:745:a1da:e6f1%11]:4080
This code (provided by Microsoft) would say the socket is connected, but not under the correct IP address actually needed for proper communication:
private static Socket ConnectSocket(string server, int port)
{
Socket s = null;
IPHostEntry hostEntry = null;
// Get host related information.
hostEntry = Dns.GetHostEntry(server);
// Loop through the AddressList to obtain the supported AddressFamily. This is to avoid
// an exception that occurs when the host IP Address is not compatible with the address family
// (typical in the IPv6 case).
foreach(IPAddress address in hostEntry.AddressList)
{
IPEndPoint ipe = new IPEndPoint(address, port);
Socket tempSocket =
new Socket(ipe.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
tempSocket.Connect(ipe);
if(tempSocket.Connected)
{
s = tempSocket;
break;
}
else
{
continue;
}
}
return s;
}
I rewrote the code to just use the first valid IP it finds. I am only concerned with IPv4 here, but it works with localhost, 127.0.0.1, and the actual IP address of your network card, where the example provided by Microsoft failed!
private Socket ConnectSocket(string server, int port)
{
Socket s = null;
try
{
// Get host related information.
IPAddress[] ips;
ips = Dns.GetHostAddresses(server);
Socket tempSocket = null;
IPEndPoint ipe = null;
ipe = new IPEndPoint((IPAddress)ips.GetValue(0), port);
tempSocket = new Socket(ipe.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
Platform.Log(LogLevel.Info, "Attempting socket connection to " + ips.GetValue(0).ToString() + " on port " + port.ToString());
tempSocket.Connect(ipe);
if (tempSocket.Connected)
{
s = tempSocket;
s.SendTimeout = Coordinate.HL7SendTimeout;
s.ReceiveTimeout = Coordinate.HL7ReceiveTimeout;
}
else
{
return null;
}
return s;
}
catch (Exception e)
{
Platform.Log(LogLevel.Error, "Error creating socket connection to " + server + " on port " + port.ToString());
Platform.Log(LogLevel.Error, "The error is: " + e.ToString());
if (g_NoOutputForThreading == false)
rtbResponse.AppendText("Error creating socket connection to " + server + " on port " + port.ToString());
return null;
}
}
This is really specific, but if you receive this error after trying to connect to a database using mongo, what worked for me was running mongod.exe before running mongo.exe and then the connection worked fine. Hope this helps someone.
One more possibility --
Make sure you're trying to open the same IP address as where you're listening. My server app was listening to the host machine's IP address using IPv6, but the client was attempting to connect on the host machine's IPv4 address.
I've received this error from referencing services located on a WCFHost from my web tier. What worked for me may not apply to everyone, but I'm leaving this answer for those whom it may help. The port number for my WCFHost was randomly updated by IIS; I simply had to update the endpoint addresses for the svc references in my web config. Problem solved.
In my scenario, I have two applications:
App1
App2
Assumption: App1 should listen to App2's activities on Port 5000
Error: Starting App1 and trying to listen to a nonexistent ghost town produces the error
Solution: Start App2 first, then try to listen using App1
Go to your WCF project -> Properties -> Debug -> uncheck the checkbox "Enable Edit and Continue".
In my case this was caused by a faulty deployment where a setting in my web.config was not made.
A colleague explained that the IP address in the error message represents localhost.
When I corrected the web.config I was then using the correct url to make the server calls and it worked.
I thought I would post this in case it might help someone.
Using WampServer 64-bit on Windows 7 Home Premium 64-bit I encountered this exact problem. After hours and hours of experimentation it became apparent that all that was needed was to comment out one line in my.ini. Then it worked fine.
The line I commented out:
socket=mysql
If you put your old /data/ files in the appropriate location, WampServer will accept all of them except for the /mysql/ folder, which it overwrites. So I simply imported a backup of the /mysql/ user data from my prior development environment and ran FLUSH PRIVILEGES in a phpMyAdmin SQL window. Works great. Something must be wrong, because things shouldn't be this easy.
I had this issue happening often. I found the SQL Server Agent service was not running. Once I started the service manually, it got fixed. Double-check whether the service is running:
Open the Run prompt, type services.msc and hit Enter
Find the service named SQL Server Agent (Instance Name)
If SQL Server Agent is not running, double-click the service to open the properties window, then click the Start button. Hope it will help someone.
I came across this error and took some time to resolve it. In my case I had https and net.tcp configured as IIS bindings on the same port. Obviously you can't have two things on the same port. I used the netstat -ap tcp command to check whether something was listening on that port. Nothing was. Removing the unnecessary binding (https in my case) solved my issue.
It was a silly issue on my side, I had added a defaultproxy to my web.config in order to intercept traffic in Fiddler, and then forgot to remove it!
There is a service called "SQL Server Browser" that provides SQL Server connection information to clients.
In my case, none of the existing solutions worked because this service was not running. I resumed it and everything went back to working perfectly.
I was facing this issue today. Mine was an ASP.NET Core API with PostgreSQL as the database, configured as a Docker container. So the first step was to check whether I could access the database. To do that I searched for pgAdmin in the Start menu, since I had it configured; clicking the resulting application takes you to http://127.0.0.1:23722/browser/, where you can try to access your database from the left menu.
I entered the password and tried to access it, but for me it did not work. As it is a Docker container, I decided to restart Docker Desktop: right-click the Docker icon in the task bar and click Restart.
Once Docker restarted, I was able to log in and see the database, and the error was gone when I restarted the application in Visual Studio.
Hope it helps.
It might be because of authorization issues; that was the case for me.
If you have for example: [Authorize("WriteAccess")] or [Authorize("ReadAccess")] at the top of your controller functions, try to comment them out.
I just faced this right now...
Here on my end, I have 2 separate Visual Studio solutions (.sln), each opened in its own Visual Studio instance.
Solution 2 calls Solution 1 code. The problem was related to the port assigned to Solution 1. I had to change the port on Solution 1 to another one, and then Solution 2 started working again. So make sure you check the port assigned to your project.
Normally, connection scripts do not mention the port to use. For example:
$mysqli = mysqli_connect('127.0.0.1', 'user', 'password', 'database');
So, to connect to a manager that doesn't use port 3306, you have to specify the port number in the connection request:
$mysqli = mysqli_connect('127.0.0.1', 'user', 'password', 'database', 3307);
To check the connections on the MySQL or MariaDB database manager, use the script wamp(64)\www\testmysql.php by putting http://localhost/testmysql.php in the browser address bar, having first modified the script according to your parameters.
I forgot to start the service, so it failed because nothing was listening on the port.
Resolved by starting the service.

"php_connect_nonb() failed: Operation now in progress (115)" happens intermittently

We send some files across to a third party with a PHP cron job via FTP.
However sometimes we get the following error:
ErrorException [ 2 ]: ftp_put(): php_connect_nonb() failed: Operation
now in progress (115) ~ MODPATH/fileop/classes/Drivers/Fileop/Ftp.php [ 37 ]
When I say "sometimes" I mean exactly that; most times it goes across fine but about 1 in 5 times we get that error. It's not to do with the files themselves, because they will go happily if we try again.
We've found similar issues online - relating to a bug in PHP with NAT devices or to do with firewall configuration but again the implication is that if this were the case it would never work.
So, why would this work some times and not others?
Adding this line of code before setting the connection to passive with ftp_pasv($ftpconn, true); solved my problem:
ftp_set_option($ftpconn, FTP_USEPASVADDRESS, false);
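For context, a hedged sketch of the full upload sequence with that workaround in place; the host, credentials and paths are placeholders (FTP_USEPASVADDRESS needs PHP 5.6.18 or newer):

$ftpconn = ftp_connect('ftp.example.com');
ftp_login($ftpconn, 'user', 'password');
// Ignore the IP address the server reports in its PASV reply and
// keep using the address we originally connected to:
ftp_set_option($ftpconn, FTP_USEPASVADDRESS, false);
ftp_pasv($ftpconn, true); // passive mode: the client opens the data connection
ftp_put($ftpconn, '/remote/file.csv', '/local/file.csv', FTP_BINARY);
ftp_close($ftpconn);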
FTP(S) uses random ports to set up data connections; an intermittent success rate indicates that not all ports are allowed by a firewall on the client and/or server machines. The port range for incoming (PASV) data connections can be set in the FTP server.
This page has a nice summary:
The easy way is to simply allow FTP servers and clients unlimited
access through your firewall, but if you like to limit their access to
"known" ports, you have to understand the 4 different scenarios.
1) The FTP server should be allowed to accept TCP connections to port
21, and to make TCP connections from port 20 to any (remote ephemeral)
port.
2) The FTP server should be allowed to accept TCP connections to port
21, AND to accept TCP connections to any ephemeral port as well!
3) The FTP client should be allowed to make TCP connections to port
21, and to accept TCP connections from port 20 to any ephemeral port.
4) The FTP client should be allowed to make TCP connections to port
21, and to make TCP connections to any other (remote ephemeral) port
as well!
So, I'm writing this answer after doing some investigation on my FTP server and reading the link you provided (elitehosts.com).
I'm using FileZilla FTP server, and there is a specific setting that I had to enter to make it work. Going into the server settings, there is an area titled "Passive mode settings". In that dialog, there is an area titled "IPv4 specific", and within that area there is a setting labeled "External Server IP Address for passive mode transfers:". It's a radio button selection set, and it was on "Default", but since the FTP server is NAT'ed, I changed that radio selection from "Default" to "Use the following IP:" and entered in the external-facing IP address of my gateway provided by my ISP.
After I set this up, it worked! Not terribly sure if your FTP server is NAT'ed, but I thought I would provide the answer on this thread because it seems related.
In addition to Cees' answer: I am running vsftpd on EC2 and had to comment out listen_ipv6=YES, set listen=YES, and then run service vsftpd restart.
Although the documentation says it will listen on IPv4 as well, it wasn't, and this resolved the issue.
For me, all I had to do was remove ftp_pasv($ftpconn, true); and everything worked perfectly. I'm not yet sure why, but I'm trying to find out and will come back when I know the reason behind it.
This should be a comment under jj_dev2's comment, but I cannot add one due to reputation. Maybe it will be helpful for someone, so I post it here.
We had the same issue as described in the original post. In our case it worked with many customers, except one.
The solution in jj_dev2's comment did work for us. So we investigated what ftp_set_option($conn, FTP_USEPASVADDRESS, false) actually does, and based on that we found out that the customer's FTPS server was in fact configured incorrectly.
In response to the PASV command (ftp_pasv($conn, true)), the FTP server returns an IP address, which the PHP FTP client will then use for data transfers. In our case the FTP server was returning an internal IP address rather than the public IP address we connect to. The customer had to fix their FTP server settings so the server would send its external IP address in the PASV response.

curl_exec($ch) not executing on external domains anymore, why?

I was using cURL to scrape content from a site, and just recently my page started hanging when it reached curl_exec($ch). After some tests I noticed that it could load any page from my own domain, but when attempting to load anything external I get a connect() timeout! error.
Here's a simplified version of what I was using:
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL,'http://www.google.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 0);
$contents = curl_exec ($ch);
curl_close ($ch);
echo $contents;
?>
Here's some info I have about my host from my phpinfo():
PHP Version 5.3.1
cURL support enabled
cURL Information 7.19.7
Host i686-pc-linux-gnu
I don't have access to SSH or modifying the php.ini file (however I can read it). But is there a way to tell if something was recently set to block cURL access to external domains? Or is there something else I might have missed?
Thanks,
Dave
I'm not aware of any setting like that; it would not make much sense.
As you said you are on a remote web server without console access, I guess that your activity has been detected by the host, or more likely it caused issues, and so they firewalled you.
A silent iptables DROP would cause exactly this.
When scraping Google you need to use proxies for more than a handful of requests, and you should never abuse your web server's primary IP if it's not your own. That's likely a breach of their TOS and could even result in legal action if they get banned from Google (which can happen).
Take a look at Google rank checker; that's a PHP script that does exactly what you want using cURL and proper IP management.
I can't think of anything that would cause a timeout other than a firewall on your side.
I'm not sure why you're getting a connect() timeout! error, but note the following line:
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 0);
Unless it's set to 1, curl_exec() will not return the page's content into your $contents; it outputs it directly and returns true.
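In other words, the fix for the snippet in the question would be:

curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // make curl_exec() return the body
$contents = curl_exec($ch);                  // now $contents holds the page HTML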

cURL/PHP Request Executes 50% of the Time

After searching all over, I can't understand why cURL requests issued to a remote SSL-enabled host succeed only about 50% of the time in my case. Here's the situation: I have a sequence of cURL requests, all issued to an HTTPS remote host, within a single PHP script that I run using the PHP CLI. Occasionally when I run the script the requests execute successfully, but most of the time I get the following error from cURL:
* About to connect() to www.virginia.edu port 443 (#0)
* Trying 128.143.22.36... * connected
* Connected to www.virginia.edu (128.143.22.36) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* error:140943FC:SSL routines:SSL3_READ_BYTES:sslv3 alert bad record mac
* Closing connection #0
If I try again a few times I get the same result, but then after a few tries the requests will go through successfully. Running the script after that again results in an error, and the pattern continues. Researching the error 'alert bad record mac' didn't give me anything helpful, and I hesitate to blame it on an SSL issue since the script still runs occasionally.
I'm on Ubuntu Server 10.04, with php5 and php5-curl installed, as well as the latest version of openssl. In terms of cURL specific options, CURLOPT_SSL_VERIFYPEER is set to false, and both CURLOPT_TIMEOUT and CURLOPT_CONNECTTIMEOUT are set to 4 seconds. Further illustrating this problem is the fact that the same exact situation occurs on my Mac OS X dev machine - the requests only go through ~50% of the time.
The remote host may not be a single real host. Maybe it's some sort of load-balancing solution with several servers taking the incoming requests.
What makes me think so is the 'bad record mac' in the error message. The MAC here is the TLS record's Message Authentication Code: if the load balancer hands the connection to a different backend while the SSL negotiation is still in flight, the session state no longer matches and the MAC check fails. That could explain why you only see the problem sometimes.
But maybe not :-) SSL problems are quite hard to track down.
I do not understand your remark about prefork MPM vs worker MPM: if you run PHP in CLI mode, your Apache MPM is not used; you're not even using Apache.
You may need this option:
CURLOPT_FORBID_REUSE
Pass a long. Set to 1 to make the next transfer explicitly close the connection when done. Normally, libcurl keeps all connections alive when done with one transfer in case a succeeding one follows that can re-use them. This option should be used with caution and only if you understand what it does. Set to 0 to have libcurl keep the connection open for possible later re-use (default behavior).
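In PHP that would be the following; CURLOPT_FRESH_CONNECT is added here as the complementary request-side option, which is my assumption rather than part of the quoted documentation:

curl_setopt($ch, CURLOPT_FORBID_REUSE, 1);  // close the connection once this transfer is done
curl_setopt($ch, CURLOPT_FRESH_CONNECT, 1); // and don't re-use a previously cached connection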
Have you tried:
curl_setopt($handle, CURLOPT_SSLVERSION, 3);

How do you proxy through a server using ssh (socks…) using PHP's cURL?

I want to use ssh, something like this:
ssh -D 9999 username@ip-address-of-ssh-server
But within php CURL, but I don't really see how this could be done?
I noticed "CURLPROXY_SOCKS5" as a type on the PHP site, but I guess that wouldn't work since it isn't really SOCKS, it's ssh…
I’m currently using this code:
curl_setopt($ch, CURLOPT_PROXY, 'ip:port');
But I'm using a free proxy and it's rather slow and unreliable, and I'm sending sensitive information over it. This is why I want to proxy over a safe server I trust, but I only have ssh set up on it and it's unable to host a proper proxy.
You can use both libssh2 and curl from within a PHP script.
First you need to get the ssh2 library from the PECL site. Alternatively, the PEAR package has SSH2 support too.
After installing you can then read the ssh2 documentation on setting up a tunnel.
In your script you can then set up the tunnel.
After the tunnel is set up in the script you can specify the CURL proxy.
Perform your CURL operation.
Release the tunnel resource and close the connection in your script.
I'm not a PHP expert, but here's a rough example:
<?php
$connection = ssh2_connect('ip-address-of-ssh-server', 22);
ssh2_auth_pubkey_file($connection, 'username', 'id_dsa.pub', 'id_dsa');
$tunnel = ssh2_tunnel($connection, '127.0.0.1', 9999);
curl_setopt($ch, CURLOPT_PROXY, '127.0.0.1:9999');
// perform curl operations
// The connection and tunnel will die at the end of the session.
?>
The simplest option
Another option to consider is using SFTP/SCP (file transfer over SSH) instead of cURL... this is probably the recommended way to copy a file from one server to another securely in PHP...
Even simpler example:
<?php
$connection = ssh2_connect('ip-address-of-ssh-server', 22);
ssh2_auth_password($connection, 'username', 'password');
ssh2_scp_send($connection, '/local/filename', '/remote/filename', 0644);
?>
According to the manpage, -D does create a SOCKS proxy:
-D [bind_address:]port
Specifies a local ``dynamic'' application-level port forwarding.
This works by allocating a socket to listen to port on the local
side, optionally bound to the specified bind_address. Whenever a
connection is made to this port, the connection is forwarded over
the secure channel, and the application protocol is then used to
determine where to connect to from the remote machine. Currently
the SOCKS4 and SOCKS5 protocols are supported, and ssh will act
as a SOCKS server. Only root can forward privileged ports. Dynamic
port forwardings can also be specified in the configuration file.
You could use the ssh2 module and the ssh2_tunnel function to create an ssh tunnel through the remote server.
Examples available: http://www.php.net/manual/en/function.ssh2-tunnel.php
See my comment on Qwerty's proposed solution. I think you are looking in the wrong direction to try to solve this question. Instead, you should just use cURL and create a personal certificate for yourself. You say you want to use SSH for safety, but why not a certificate instead?
This site will let you easily create one
http://www.cacert.org/
Since it's just for you, you can add an exception to your browsers so they won't complain of a bad certificate. No need for ssh!
To open the SSH tunnel only for the duration of your script, you probably would need to use PHP forks. In one process, open the SSH tunnel (-D - you need to do some work to make sure you're not colliding on ports here), and in the other process, use CURL with socks proxy config. When your transfer is done, signal the ssh fork to terminate so the connection gets torn down.
Keep in mind that while the tunnel is open, other users on the same machine can also proxy on that port if they wanted to. With that in mind, it might be a better idea to use the -L 1234:remotehost:80 flag, and just get the URL http://localhost:1234/some/uri
If things go wrong with this, you may find orphaned SSH tunnels on your server though, so I would call this somewhat fragile.
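A hedged sketch of that fork-style approach using proc_open (the host, user, URL and fixed port are placeholders; it assumes key-based ssh auth and that nothing else is using the port):

$port = 9999;
$ssh = proc_open(
    "ssh -N -D {$port} username@ip-address-of-ssh-server",
    [0 => ['pipe', 'r'], 1 => ['pipe', 'w'], 2 => ['pipe', 'w']],
    $pipes
);
sleep(2); // crude: give the tunnel a moment to come up

$ch = curl_init('https://example.com/');
curl_setopt($ch, CURLOPT_PROXY, "127.0.0.1:{$port}");
curl_setopt($ch, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS5);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$body = curl_exec($ch);
curl_close($ch);

proc_terminate($ssh); // signal the ssh fork to terminate so the tunnel is torn down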
