I'm using the WCF 4.0 REST template. I'm trying to write a method that uploads a file using a stream.
The problem always occurs at
Stream serverStream = request.GetRequestStream();
Class for streaming:
namespace LogicClass
{
    public class StreamClass : IStreamClass
    {
        public bool UploadFile(string filename, Stream fileStream)
        {
            try
            {
                using (FileStream fileToupload = new FileStream(filename, FileMode.Create))
                {
                    byte[] bytearray = new byte[10000];
                    int bytesRead, totalBytesRead = 0;
                    do
                    {
                        bytesRead = fileStream.Read(bytearray, 0, bytearray.Length);
                        totalBytesRead += bytesRead;
                        // Write each chunk as it is read, using the actual number of bytes read.
                        fileToupload.Write(bytearray, 0, bytesRead);
                    } while (bytesRead > 0);
                }
            }
            catch (Exception ex) { throw new Exception(ex.Message); }
            return true;
        }
    }
}
REST project:
[WebInvoke(UriTemplate = "AddStream/{filename}", Method = "POST", BodyStyle = WebMessageBodyStyle.Bare)]
public bool AddStream(string filename, System.IO.Stream fileStream)
{
    LogicClass.FileComponent rest = new LogicClass.FileComponent();
    return rest.AddStream(filename, fileStream);
}
Windows Forms project (for testing):
private void button24_Click(object sender, EventArgs e)
{
    byte[] fileStream;
    using (FileStream fs = new FileStream("E:\\stream.txt", FileMode.Open, FileAccess.Read, FileShare.Read))
    {
        fileStream = new byte[fs.Length];
        fs.Read(fileStream, 0, (int)fs.Length);
        fs.Close();
        fs.Dispose();
    }
    string baseAddress = "http://localhost:3446/File/AddStream/stream.txt";
    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(baseAddress);
    request.Method = "POST";
    request.ContentType = "text/plain";
    Stream serverStream = request.GetRequestStream();
    serverStream.Write(fileStream, 0, fileStream.Length);
    serverStream.Close();
    using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
    {
        int statusCode = (int)response.StatusCode;
        StreamReader reader = new StreamReader(response.GetResponseStream());
    }
}
I've turned off the firewall and my Internet connection, but the error still exists. Is there a better way of testing the uploading method?
Stack trace:
at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Int32 timeout, Exception& exception)
"Actively refused it" means that the host sent a reset instead of an ack when you tried to connect. It is therefore not a problem in your code. Either there is a firewall blocking the connection or the process that is hosting the service is not listening on that port. This may be because it is not running at all or because it is listening on a different port.
Once you start the process hosting your service, try netstat -anb (requires admin privileges) to verify that it is running and listening on the expected port.
Update: on Linux you may need to use netstat -anp instead.
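If you prefer to check from code rather than netstat, here is a minimal sketch (my own illustration, not part of the original answer) that lists the local TCP listeners and reports whether the expected port is among them; port 3446 is taken from the question and is only an example:

using System;
using System.Linq;
using System.Net;
using System.Net.NetworkInformation;

class PortCheck
{
    static void Main()
    {
        const int expectedPort = 3446; // example port from the question; replace with your service's port

        // Enumerate everything currently listening on TCP on this machine.
        IPEndPoint[] listeners = IPGlobalProperties.GetIPGlobalProperties().GetActiveTcpListeners();
        bool listening = listeners.Any(ep => ep.Port == expectedPort);

        Console.WriteLine(listening
            ? "Something is listening on port " + expectedPort + "."
            : "Nothing is listening on port " + expectedPort + " - the host process is probably not running or is bound to a different port.");
    }
}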
You don't have to restart the PC. Restart IIS instead:
Run -> cmd (as admin) and type iisreset
You may need to adjust your system proxy settings. Go through this path:
Control Panel >> Internet Options >> Connections >> LAN settings >> Proxy
and untick "Use a proxy server".
I got a similar error message, TCP error code 10061: No connection could be made because the target machine actively refused it, in my current project. I found that this 10061 error code cannot distinguish between the case where the service endpoint is not started and the case where it is blocked by a firewall. Often the firewall can be switched off, but the problem is still there.
You can test your code in the two ways below.
Insert code to record time A, when the service is started, and time B, when the client sends the request to the server. If B is earlier than A, it can cause this problem (a sketch of this check follows at the end of this answer).
Change your server port to another port that is also available in the system. You will find the same error code reported.
Above is my fix. It works on my machine. I hope it helps!
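For the first point, a rough sketch like the one below (my own illustration, not part of the original answer) polls the endpoint until it accepts a TCP connection before the client sends its request; the host, port, and retry values are placeholders:

using System;
using System.Net.Sockets;
using System.Threading;

class WaitForService
{
    // Poll host:port until a TCP connection succeeds, so the client only sends
    // its request once the service is actually up and listening.
    static bool WaitForEndpoint(string host, int port, int attempts, int delayMs)
    {
        for (int i = 0; i < attempts; i++)
        {
            try
            {
                using (TcpClient client = new TcpClient())
                {
                    client.Connect(host, port); // throws SocketException (10061) if refused
                    return true;
                }
            }
            catch (SocketException)
            {
                Thread.Sleep(delayMs); // not listening yet - wait and retry
            }
        }
        return false;
    }

    static void Main()
    {
        bool up = WaitForEndpoint("localhost", 3446, 10, 500);
        Console.WriteLine(up ? "Service is listening." : "Service never came up.");
    }
}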
Check if any other program is using that port.
If an instance of the same program is still active, kill that process.
I had a similar issue. In my case the service would work fine on the developer machine but fail when on a QA machine. It turned out that on the QA machine the application wasn't being run as an administrator and didn't have permission to register the endpoint:
HTTP could not register URL http://+:12345/Foo.svc/. Your process does
not have access rights to this namespace (see
http://go.microsoft.com/fwlink/?LinkId=70353 for details).
Refer here for how to get it working without being an admin user: https://stackoverflow.com/a/885765/38258
If you use WCF Storm, can you even log in to the WCF service endpoint? If not, and you are hosting it in a Windows service, you probably forgot to register that namespace. It's not very well advertised that this step is required, and it is actually annoying to do.
I use this tool to do this; it automates all those cumbersome steps.
Check whether the port number in your web application's Web.config file is the same as the port the site is hosted on in IIS.
I had the same problem on my web server "No connection could be made because the target machine actively refused it 161.x.x.235:5672". I asked the Admin to open the port 5672 on the web server, then it worked fine.
I had a similar problem: connections to localhost and 127.0.0.1 were being rejected.
Running netstat -anb in an admin cmd showed the port listening on 169.254.80.80 (I don't know where that IP came from, because my network IP was 10.0.0.5).
After putting in this IP it worked.
This gives the correct IP:
IPHostEntry ipHostInfo = Dns.GetHostEntry(Dns.GetHostName());
IPAddress ipAddress = ipHostInfo.AddressList[0];
Console.WriteLine(ipAddress.ToString());
I also faced this problem with a .NET Remoting service in C#.
I solved it in three steps:
Change the port of the protocol in all the files wherever it is used (see the sketch after this list).
Run your host server program and make sure it is active.
Now run your client program.
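As a rough illustration of the first step (keeping the port consistent between host and client), here is a minimal, hypothetical .NET Remoting sketch; the service type, URI, and port are made up for the example:

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

// Hypothetical remotable object; the point is only that host and client agree on the port.
public class HelloService : MarshalByRefObject
{
    public string Hello() { return "Hello from the host"; }
}

class RemotingHost
{
    const int Port = 8085; // change this one constant (and the client's copy) when moving ports

    static void Main()
    {
        // Host side: listen on the agreed TCP port and publish the service.
        ChannelServices.RegisterChannel(new TcpChannel(Port), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(HelloService), "HelloService", WellKnownObjectMode.Singleton);
        Console.WriteLine("Host listening on port " + Port + "; start the client now.");
        Console.ReadLine();
    }
}

The client program would then connect using the same port, for example:
HelloService svc = (HelloService)Activator.GetObject(typeof(HelloService), "tcp://localhost:8085/HelloService");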
I could not restart IIS Express. This is the solution that worked for me:
Clean the build
Rebuild
Thanks to @Yaur's answer, I was able to trace this error: you basically need to check whether the (WCF) service is running, and also check the outbound and inbound TCP rules in your advanced firewall settings.
In a similar pattern, my REST client calls the service API; the call succeeded while debugging, but it did not work in the published code. The error was: Unable to connect to the remote server.
Inner Exception: System.Net.Sockets.SocketException (0x80004005): No connection could be made because the target machine actively refused it serviceIP:443 at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress) at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)
Resolution: set the proxy in Web.config.
<system.net>
<defaultProxy useDefaultCredentials="true">
<proxy proxyaddress="http://proxy_ip:portno/" usesystemdefault="True"/>
</defaultProxy>
</system.net>
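If changing Web.config is not possible, roughly the same effect can be achieved from code (a sketch only; http://proxyserver:8080/ is a hypothetical address standing in for the proxy_ip:portno placeholder above):

using System;
using System.IO;
using System.Net;

class ProxyExample
{
    static void Main()
    {
        // Programmatic counterpart of the <defaultProxy> section: route outgoing
        // HTTP(S) requests through the proxy using the default credentials.
        WebProxy proxy = new WebProxy("http://proxyserver:8080/", true); // hypothetical proxy address
        proxy.UseDefaultCredentials = true;
        WebRequest.DefaultWebProxy = proxy;

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("https://example.com/");
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine((int)response.StatusCode);
            Console.WriteLine(reader.ReadToEnd().Length + " bytes read");
        }
    }
}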
I had a similar issue. In my case, a VPN proxy app (Psiphon) had changed the proxy setup in Windows, so follow this:
In Windows 10, search for "Change proxy settings" and turn off "Use a proxy server" under "Manual proxy setup".
Make sure all the services your application depends on are running, on the host and port your application uses to reach them. For example, in my case this error occurred because my application uses a Redis server and was not able to connect to it on the given host and port.
For my case, I had an Angular SPA project template with ASP.NET Core.
I was trying to run IIS Express from the Visual Studio WebUI solution, which triggered the "actively refused it" error.
The problem in this case wasn't the firewall blocking the connection.
It turned out that I had to run the Angular dev server independently of Kestrel, because the server expected the UI to be running on a specific port but it wasn't.
For more information, check the official Microsoft documentation.
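For reference, one common way to wire this up is sketched below; it assumes the default ASP.NET Core + Angular template (Microsoft.AspNetCore.SpaServices.Extensions package), with the Angular dev server started separately via npm start on port 4200. The folder name and port are the template defaults; adjust them to your project:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

public class Startup
{
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        app.UseStaticFiles();

        app.UseSpa(spa =>
        {
            spa.Options.SourcePath = "ClientApp"; // default template folder name

            if (env.IsDevelopment())
            {
                // Proxy to the Angular dev server that is already running on this port,
                // instead of expecting the UI on a port where nothing is listening.
                spa.UseProxyToSpaDevelopmentServer("http://localhost:4200");
            }
        });
    }
}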
I had a similar problem. In launchSettings.json, my IIS Express profile was configured on one port, and there was another launch profile that started on a different applicationUrl with another port.
Starting the web app with the IIS Express profile led to the error.
I am using the Apache ActiveMQ Artemis AMQP message broker. I started getting the "No connection could be made because the target machine actively refused it" exception when trying to send and receive messages with the broker after a reboot. On a computer where the same type of broker still worked, netstat -anb showed that the broker was listening on the expected port 5672. On the computer with the error, the broker was not listening, and starting it resulted in the following warning appearing in Microsoft Event Viewer's "Windows Logs > System":
The system failed to register host (A or AAAA) resource records for network adapter
with settings:
Adapter Name : {286EE2DA-3D81-41AE-VE5G-5D761FD3925E}
Host Name : mypc
Primary Domain Suffix : myco.com
DNS server list :
55.77.168.1, 74.86.130.1
Sent update to server : 186.952.335.157:45
IP Address(es) :
182.269.1.437
Either the DNS server does not support the DNS dynamic update protocol or the authoritative zone for the specified DNS domain name does not accept dynamic updates.
To register the DNS host (A or AAAA) resource records using the specific DNS domain name and IP addresses for this adapter, contact your DNS server or network systems administrator.
I was able to use the broker without error after I ran the following in a cmd.exe with administrative privileges, rebooted, and waited about fifteen minutes:
ipconfig /flushdns
ipconfig /renew
ipconfig /registerdns
I am making the following request, which resulted in an empty reply from the server.
Originating server: AWS EC2 / PHP 5.4 / Guzzle
Remote server: AWS EC2 behind an ELB
cURL info: {
"url":"https:\/\/xxx\/xxx",
"content_type":null,
"http_code":0,
"header_size":0,
"request_size":5292,
"filetime":-1,
"ssl_verify_result":0,
"redirect_count":0,
"total_time":120.987057,
"namelookup_time":0.000277,
"connect_time":0.001504,
"pretransfer_time":0.014271,
"size_upload":2430,
"size_download":0,
"speed_download":0,
"speed_upload":20,
"download_content_length":-1,
"upload_content_length":2430,
"starttransfer_time":60.998147,
"redirect_time":59.988895,
"certinfo":[],
"primary_ip":"54.169.126.111",
"primary_port":443,
"local_ip":"192.168.2.111",
"local_port":39522,
"redirect_url":""
}
cURL error: [curl] 52: Empty reply from server [url] https://xxx/xxx
Please note that this does not happen all the time.
It seems like the request has not even reached the destination (ELB), since there were no logs related to the request.
1. Is the issue with the originating server or the remote server?
2. Could "starttransfer_time":60.998147 be the root cause?
Solutions, workarounds, and suggestions are welcome. Thanks!
As it seems the request never reached the server:
Check for network errors: any TCP retransmissions, timeouts, or other errors. You mentioned there was no reply; is it a TCP timeout?
Run a tcpdump and analyse the traces; based on that you can decide.
Additionally, you can raise the log level in the applications on both the originating and remote servers.
Check for error patterns, e.g. does it happen because of high load?
In my case "empty answer from server" was caused by exhausted memory on the remote server. In this case fatal error was thrown and request was terminated.
Debugging cURL with curl_setopt($h, CURLOPT_VERBOSE, true); did not help since there was only "Connection died, retrying a fresh connect" and then "Empty reply from server". We had to debug it on remote server's side.
With the SOAP server (https://sdkeval2.yodlee.com/yodsoap/service), we are getting the error below:
HTTP Error: Couldn't open socket connection to server (http://XX.XX.XX:8080/yodsoap/services/CobrandLoginService/), Error (110): Connection timed out
The issue could be due to a firewall at either end. At your end, check whether outbound traffic is allowed to the specific server on the specified port. At Yodlee's end, they use a whitelist mechanism to filter traffic, so make sure your IP addresses are included in the whitelist. Note that it may take Yodlee up to a week before your IP is allowed into their system. If all of these are in order, your connection should succeed without any issue.
We send some files across to a third party with a PHP cron job via FTP.
However sometimes we get the following error:
ErrorException [ 2 ]: ftp_put(): php_connect_nonb() failed: Operation
now in progress (115) ~ MODPATH/fileop/classes/Drivers/Fileop/Ftp.php [ 37 ]
When I say "sometimes" I mean exactly that; most times it goes across fine but about 1 in 5 times we get that error. It's not to do with the files themselves, because they will go happily if we try again.
We've found similar issues online - relating to a bug in PHP with NAT devices or to do with firewall configuration but again the implication is that if this were the case it would never work.
So, why would this work some times and not others?
ftp_set_option($ftpconn, FTP_USEPASVADDRESS, false);
Adding this line of code before enabling passive mode with ftp_pasv($ftpconn, true); solved my problem.
FTP(S) uses random ports to set up data connections; an intermittent success rate indicates that not all ports are allowed by a firewall on the client and/or server machines. The port range for incoming (PASV) data connections can be set in the FTP server.
This page has a nice summary:
The easy way is to simply allow FTP servers and clients unlimited
access through your firewall, but if you like to limit their access to
"known" ports, you have to understand the 4 different scenarios.
1) The FTP server should be allowed to accept TCP connections to port
21, and to make TCP connections from port 20 to any (remote ephemeral)
port.
2) The FTP server should be allowed to accept TCP connections to port
21, AND to accept TCP connections to any ephemeral port as well!
3) The FTP client should be allowed to make TCP connections to port
21, and to accept TCP connections from port 20 to any ephemeral port.
4) The FTP client should be allowed to make TCP connections to port
21, and to make TCP connections to any other (remote ephemeral) port
as well!
So, I'm writing this answer after doing some investigation on my FTP server and reading the link you provided, elitehosts.com.
I'm using FileZilla FTP server, and there is a specific setting that I had to enter to make it work. Going into the server settings, there is an area titled "Passive mode settings". In that dialog, there is an area titled "IPv4 specific", and within that area there is a setting labeled "External Server IP Address for passive mode transfers:". It's a radio button selection set, and it was on "Default", but since the FTP server is NAT'ed, I changed that radio selection from "Default" to "Use the following IP:" and entered in the external-facing IP address of my gateway provided by my ISP.
After I set this up, it worked! Not terribly sure if your FTP server is NAT'ed, but I thought I would provide the answer on this thread because it seems related.
In addition to Cees' answer: I am running vsftpd on EC2 and had to comment out listen_ipv6=YES and set listen=YES, then run "service vsftpd restart".
Although the documentation says it will listen on IPv4 as well, it wasn't, and this resolved the issue.
For me, all I had to do was remove the ftp_pasv($ftpconn, true); line and everything worked perfectly. I'm not yet sure why, but I am trying to find out and will come back when I know the reason behind it.
This should be a comment under jj_dev2's comment, but I cannot add one due to reputation. Maybe it will be helpful for someone, so I post it here.
We had the same issue as described in the original post. In our case it worked with many customers, except one.
The solution in jj_dev2's comment did work for us, so we investigated what ftp_set_option($conn, FTP_USEPASVADDRESS, false) actually does. Based on that, we found out that the customer's FTPS server was in fact configured incorrectly.
In response to the PASV command (ftp_pasv($conn, true)), the FTP server returns an IP address which the PHP FTP client will then use for data transfers. In our case the FTP server was returning an internal IP address, not the public IP address that we connect to. The customer had to fix their FTP server settings so that the server would send its external IP address in the PASV response.
I have a load-balanced dev site that I'm working out SSL bugs on, and I have run into one last very annoying issue. On some pages I need to force SSL, so, easy enough, I just wanted to do a
header ("Location: https://www.example.com/mypage.php");
I thought that was easy enough, no worries. However, every time I do this it transforms the URL back to http, and as you can figure, that creates an endless loop that can't be resolved. I can't figure out how to keep that https in there so that it will pull the secure version of the page. If I navigate directly to the secure page with https it works just fine; the only issue is with this redirect.
Any help would be awesome! I'm using Pound as a load-balancing proxy, with Apache on the web server nodes. The SSL cert is set up at the load balancer.
When load balancing, "internal" SSL usually goes out the door: clients connect through a load balancer with which you can do SSL encryption, but behind that, in most load balancers I've seen, is plain HTTP. Try to get your load balancer to set a custom header indicating that there is an HTTPS connection between the load balancer and the client.
From http://www.apsis.ch/pound/index_html
WHAT POUND IS:
...
an SSL wrapper: Pound will decrypt HTTPS requests from client browsers and pass them as plain HTTP to the back-end servers.
And from the manual pages:
HTTP Listener
RewriteLocation 0|1|2
If 1 force Pound to change the Location: and Content-location:
headers in responses. If they point to the back-end itself or to
the listener (but with the wrong protocol) the response will be
changed to show the virtual host in the request. Default: 1
(active). If the value is set to 2 only the back-end address is
compared; this is useful for redirecting a request to an HTTPS
listener on the same server as the HTTP listener.
Redirecting to https pages is no problem.
You can check the port, the scheme, or a server variable (the server variable is probably best) to see if HTTPS is on, and use it as a condition for redirecting:
$_SERVER['SERVER_PORT'] == 443
parse_url($_SERVER['REQUEST_URI'],PHP_URL_SCHEME) == 'https'
$_SERVER['HTTPS'] == 'on'
But as you have an infinite loop, there must be something else wrong!
Try using the load balancer "Balance" instead. It only takes about 5 minutes to set up and, instead of proxying, it will do "real" load balancing. I would guess your proxy is currently redirecting HTTPS requests to the HTTP address. Try making a request without using the balancer; you can do this by setting the host name in your /etc/hosts file to point directly to a server instead of to the load balancer's IP.