Amazon Web Services PHP FTP Get - php

I am using Elastic Beanstalk and have deployed my application to Worker Tier.
Part of my application is to connect to remote ftp and download remote files using PHP.
It works without a problem on localhost. When I execute the PHP script on Amazon Web Services, I get this weird error:
IP1 = XXX.XX.XX.XX
IP2 = XX.XX.XXX.XXX
PHP Error[2]: ftp_get(): I won't open a connection to IP1 (only to IP2)
Application runs on a single instance (non-balanced), on default VPC.
What is really weird is that IP1 does not match the host I'm trying to download the file from (i.e. example.com). Could that be the Internet Gateway IP?
Same application also downloads pictures and connects to APIs, it's definitely connected to the Internet.
I assume the VPC routing configuration won't allow the instance to talk over other protocols to 0.0.0.0/0 (i.e. any destination), but only HTTP.
VPC ID: vpc-53cc2236
Network ACL: acl-c850baad
State: available
Tenancy: Default
VPC CIDR: 172.31.0.0/16
DNS resolution: yes
DHCP options set: dopt-f2998e90
DNS hostnames: yes
Route table: rtb-2b64914e
EC2 instance belongs to subnet-1250b265:
Route Table: rtb-2b64914e
Destination: 172.31.0.0/16 target: local
Destination: 0.0.0.0/0 target: igw-a48199c6
Route table rtb-2b64914e:
Destination | Target | Status | Propagated
172.31.0.0/16 | local | Active | No
0.0.0.0/0 | igw-a48199c6 | Active | No
There are also two other subnets, subnet-ab0003ed, and subnet-96f335f3 which belong to same route table as subnet-1250b265.

I had the same issue, and resolved it by using passive mode.
ftp> pass
Passive mode on.
In Ruby, the code ended up being:
Net::FTP.open(host) do |ftp|
ftp.login(user, pwd)
ftp.passive = true
ftp.put(output_filename, "#{target_dir}/#{target_filename}")
end
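For the PHP ftp_* functions in the question, the equivalent fix is to switch to passive mode with ftp_pasv() right after logging in. A minimal sketch, assuming the standard FTP extension; host, credentials and file paths are placeholders:
$conn = ftp_connect('example.com');
ftp_login($conn, 'user', 'password');
// Passive mode makes the client open the data connection,
// which avoids the "I won't open a connection to ..." active-mode refusal.
ftp_pasv($conn, true);
ftp_get($conn, '/local/path/file.txt', '/remote/path/file.txt', FTP_BINARY);
ftp_close($conn);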

Some folks suggested that a firewall might be blocking FTP access. That wasn't the case for me; the proper ports were open.
I tried ftp_connect() and ftp_get(), but they didn't work (tried with PASV both on and off).
I tried to download the file with curl over FTP, but got a "No route to host" error.
Then I tried the same curl FTP download from PHP, but only got an empty file.
Since I have SSH access to the server, I tried to fetch the file using SCP, but got a weird error: Warning: ssh2_scp_recv(): Unable to receive remote file.
I also found a PHP patch for this, but didn't want to mess with patching the PHP version on AWS.
I ended up using phpseclib, which worked fine - no trouble there.
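For reference, a minimal sketch of the phpseclib approach (class and method names as in phpseclib 3.x; host, credentials and paths are placeholders):
use phpseclib3\Net\SFTP;

$sftp = new SFTP('example.com');
if (!$sftp->login('user', 'password')) {
    exit('SFTP login failed');
}
// Pull the remote file down over SSH, so no FTP data channel is involved
$sftp->get('/remote/path/file.txt', '/local/path/file.txt');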

Related

Fatal error: Uncaught mysqli_sql_exception: No connection could be made because the target machine actively refused [duplicate]

I'm using the WCF 4.0 REST template. I'm trying to make a method that uploads a file using a stream.
The problem always occur at
Stream serverStream = request.GetRequestStream();
Class for streaming:
namespace LogicClass
{
public class StreamClass : IStreamClass
{
public bool UploadFile(string filename, Stream fileStream)
{
try
{
FileStream fileToupload = new FileStream(filename, FileMode.Create);
byte[] bytearray = new byte[10000];
int bytesRead, totalBytesRead = 0;
do
{
bytesRead = fileStream.Read(bytearray, 0, bytearray.Length);
totalBytesRead += bytesRead;
// Write each chunk as it is read, using the number of bytes actually read
fileToupload.Write(bytearray, 0, bytesRead);
} while (bytesRead > 0);
fileToupload.Close();
fileToupload.Dispose();
}
catch (Exception ex) { throw new Exception(ex.Message); }
return true;
}
}
}
REST project:
[WebInvoke(UriTemplate = "AddStream/{filename}", Method = "POST", BodyStyle = WebMessageBodyStyle.Bare)]
public bool AddStream(string filename, System.IO.Stream fileStream)
{
LogicClass.FileComponent rest = new LogicClass.FileComponent();
return rest.AddStream(filename, fileStream);
}
Windows Form project: for testing
private void button24_Click(object sender, EventArgs e)
{
byte[] fileStream;
using (FileStream fs = new FileStream("E:\\stream.txt", FileMode.Open, FileAccess.Read, FileShare.Read))
{
fileStream = new byte[fs.Length];
fs.Read(fileStream, 0, (int)fs.Length);
fs.Close();
fs.Dispose();
}
string baseAddress = "http://localhost:3446/File/AddStream/stream.txt";
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(baseAddress);
request.Method = "POST";
request.ContentType = "text/plain";
Stream serverStream = request.GetRequestStream();
serverStream.Write(fileStream, 0, fileStream.Length);
serverStream.Close();
using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
{
int statusCode = (int)response.StatusCode;
StreamReader reader = new StreamReader(response.GetResponseStream());
}
}
I've turned off the firewall and my Internet connection, but the error still exists. Is there a better way of testing the uploading method?
Stack trace:
at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Int32 timeout, Exception& exception)
"Actively refused it" means that the host sent a reset instead of an ack when you tried to connect. It is therefore not a problem in your code. Either there is a firewall blocking the connection or the process that is hosting the service is not listening on that port. This may be because it is not running at all or because it is listening on a different port.
Once you start the process hosting your service, try netstat -anb (requires admin privileges) to verify that it is running and listening on the expected port.
update: On Linux you may need to do netstat -anp instead.
You don't have to restart the PC. Restart IIS instead.
Run -> 'cmd'(as admin) and type "iisreset"
You must set up your system proxy. Go through this path:
Control Panel >> Internet Options >> Connections >> LAN settings
and untick "Use a proxy server".
I got a similar error message, TCP error code 10061: No connection could be made because the target machine actively refused it, in my current project. This 10061 error code cannot distinguish between the case where the service endpoint is not started and the case where it is blocked by a firewall. Often the firewall can be switched off, but the problem is still there.
You can test your code in the below two ways.
Insert code to record time A, when the service is started, and time B, when the client sends the request to the server. If B is earlier than A, that can cause this problem.
Change your server port to another port that is also available in the system. You will find the same error code reported.
Above is my fix. It works on my machine. I hope it helps!
Check if any other program is using that port.
If an instance of the same program is still active, kill that process.
I had a similar issue. In my case the service would work fine on the developer machine but fail when on a QA machine. It turned out that on the QA machine the application wasn't being run as an administrator and didn't have permission to register the endpoint:
HTTP could not register URL http://+:12345/Foo.svc/. Your process does
not have access rights to this namespace (see
http://go.microsoft.com/fwlink/?LinkId=70353 for details).
Refer here for how to get it working without being an admin user: https://stackoverflow.com/a/885765/38258
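Registering that URL namespace for a non-admin account is typically done once from an elevated prompt with a command along these lines (the URL matches the error above; the account name is a placeholder):
netsh http add urlacl url=http://+:12345/Foo.svc/ user=MYDOMAIN\myuser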
If you use WCFStorm, can you even log in to the WCF service endpoint? If not, and you are hosting it in a Windows service, you probably forgot to register that namespace. It's not very well advertised that this step is required, and it is actually annoying to do.
I use this tool to do this; it automates all those cumbersome steps.
Check whether the port number in your web page's Web.config file is the same as the one the site is hosted on in IIS.
I had the same problem on my web server "No connection could be made because the target machine actively refused it 161.x.x.235:5672". I asked the Admin to open the port 5672 on the web server, then it worked fine.
I had a similar problem: connections to localhost and 127.0.0.1 were being rejected.
Running netstat -anb from an admin cmd showed the port listening on 169.254.80.80 (I don't know where that IP came from, because my network IP was 10.0.0.5).
After connecting to that IP instead, it worked.
This gives the correct IP (ipHostInfo obtained via Dns.GetHostEntry):
IPHostEntry ipHostInfo = Dns.GetHostEntry(Dns.GetHostName());
IPAddress ipAddress = ipHostInfo.AddressList[0];
Console.WriteLine(ipAddress.ToString());
I also faced this problem with a .NET Remoting service in C#.
I solved it in three steps:
Change the port of the protocol in all the files wherever it is used.
Run your host server program and make sure it is active.
Now run your client program.
I could not restart IIS Express. This is the solution that worked for me:
Cleaned the build
Rebuilt
With this error I was able to trace it thanks to @Yaur's answer: you basically need to check whether the service (WCF) is running, and also check the outbound and inbound TCP rules in your advanced firewall settings.
In a similar pattern, my REST client calls the service API; the call succeeded when debugging, but it did not work in the published code. The error was: Unable to connect to the remote server.
Inner Exception: System.Net.Sockets.SocketException (0x80004005): No connection could be made because the target machine actively refused it serviceIP:443 at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress) at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)
Resolution: Set the proxy in Web config.
<system.net>
<defaultProxy useDefaultCredentials="true">
<proxy proxyaddress="http://proxy_ip:portno/" usesystemdefault="True"/>
</defaultProxy>
</system.net>
I had a similar issue. In my case a VPN/proxy app (Psiphon) had changed the proxy setup in Windows, so follow this:
in Windows 10, search for "change proxy settings" and turn off "Use a proxy server" under the manual proxy settings.
Make sure all the services your application reaches via a host and port are up. For example, in my case this error occurred because the application uses a Redis server and could not connect to it on the given host and port.
In my case, I had an Angular SPA project template with ASP.NET Core.
I was trying to run IIS Express from the Visual Studio web UI solution, which triggered the "actively refused it" error.
The problem in this case wasn't the firewall blocking the connection.
It turned out that I had to run the Angular dev server independently of Kestrel, because the server expected the UI to be running on a specific port but it actually wasn't.
For more information, check the official Microsoft documentation.
I had a similar problem. In launchSettings, my IIS Express profile was configured on one port, and there was another launch profile that started on a different applicationUrl with another port.
Starting the web app with the IIS Express profile caused the error.
I am using the Apache ActiveMQ Artemis AMQP message broker. I started getting the "No connection could be made because the target machine actively refused it" exception when trying to send and receive messages with the broker after a reboot. On a computer where the same type of broker still worked, netstat -anb showed that the broker was listening on the expected port 5672. On the computer with the error, the broker was not listening. On the computer with the error, starting the broker resulted in the following warning's appearing in Microsoft Event Viewer's "Windows Logs > System":
The system failed to register host (A or AAAA) resource records for network adapter
with settings:
Adapter Name : {286EE2DA-3D81-41AE-VE5G-5D761FD3925E}
Host Name : mypc
Primary Domain Suffix : myco.com
DNS server list :
55.77.168.1, 74.86.130.1
Sent update to server : 186.952.335.157:45
IP Address(es) :
182.269.1.437
Either the DNS server does not support the DNS dynamic update protocol or the authoritative zone for the specified DNS domain name does not accept dynamic updates.
To register the DNS host (A or AAAA) resource records using the specific DNS domain name and IP addresses for this adapter, contact your DNS server or network systems administrator.
I was able to use the broker without error after I ran the following in a cmd.exe with administrative privileges, rebooted, and waited about fifteen minutes:
ipconfig /flushdns
ipconfig /renew
ipconfig /registerdns

Testing WordPress Application

I am attempting to test my AWS EC2 WordPress application that uses nginx & php-fpm for managing incoming requests.
I don't have the means to test the site using the SSL certificate name which is installed onto the ALB, so this has to be done internally and directly at the EC2 instance. It will soon become apparent that my knowledge of WordPress hosting is limited.
I've adopted port forwarding as a means to connect to the application, as detailed in this article: https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-manager-sessions-manager/. I can achieve this with the following command:
aws ssm start-session --target i-xxxxxxxxxxxxxx --profile username --region eu-west-1 --document-name AWS-StartPortForwardingSession --parameters '{"portNumber":["80"],"localPortNumber":["9999"]}'
I can get the default nginx page to appear if I browse to http://localhost:9999. What I'd prefer to do is see if I can hit any of the PHP WordPress pages, and this bit is unclear to me. If I instead access http://localhost:9999/site-1, I encounter a 404. Looking in /var/log/nginx/error.log I see this in more detail:
[error] 26041#26041: *1 open() "/usr/share/nginx/html/site-1" failed (2: No such file or directory), client: 127.0.0.1, server: localhost, request: "GET /whitelines HTTP/1.1", host: "localhost:9999"
This adds to the confusion, since when I check the EC2 filesystem I find the site-1 directory structure, containing all of the PHP files, under a different location: /var/www/site-1.
I'm not sure why this works via the SSL certificate name -> ALB -> target group -> EC2, but not by going directly at the EC2 instance.
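For context, the error log above shows the request being handled by a server block with server_name localhost and root /usr/share/nginx/html, while the site itself lives under /var/www/site-1. A hypothetical comparison of the two kinds of server block involved (server_name and paths are placeholders, not my actual config):
# Default server block - what http://localhost:9999 appears to be hitting
server {
    listen 80 default_server;
    server_name localhost;
    root /usr/share/nginx/html;
}

# Site-specific server block - matched only when the Host header is the ALB/certificate hostname
server {
    listen 80;
    server_name example.com;   # placeholder for the certificate/ALB hostname
    root /var/www/site-1;
    index index.php;
}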
I suppose what I want to do is verify that the sites work without going through the certificate/ALB route, if that is possible. If so, where am I going wrong?
Thanks!

" PHP Warning: pg_connect(): Unable to connect to PostgreSQL server: FATAL: password authentication failed for user ..." in Raspberry Pi (Stretch)

So sorry to bother everyone, I'm new to PostgreSQL, but my project requires me to build an automation system that connects to a server running PostgreSQL. Long story short, let's just say I am required to perform data insertion/manipulation from a Web Form into the server via PHP's pg_connect(). The Web Form is located locally at /var/www/html/web_form.html and calls a PHP script that performs the data insertion,
<form name="some_name" action="script.php" method="POST">
so the data insertion is done to a local server.
I'm using a Raspberry Pi 3 to simulate the "server" (so as not to "disturb" the true running server during such early development). The Raspberry Pi is running a Raspbian Stretch distro. Well, it was originally Jessie, but then I decided to involve the Stretch repo from
deb http://mirrordirector.raspbian.org/raspbian/ stretch main contrib non-free rpi
with a priority of 100. It also has postgresql-9.6.3 (from Stretch), php7.0.19-1 (also from Stretch), and apache2.4.25 (from official Raspbian). Please note that the Stretch repo replaces A LOT of Jessie's official stable packages, and is also known to be "not quite stable" itself, so it might be the source of the problem.
I can access the database on the server from remote computers, whether using psql directly, Python (psycopg2), or even the exact same Web Form via PHP (tried from a remote computer), so it most likely isn't a PDO problem (I've checked phpinfo(), but feel free to give advice on this, since I don't really understand PDO myself). Locally, I can access the database via psql, but not via PHP (pg_connect).
I've meddled with this problem for hours now in vain. Please help me.
Here's my Raspberry Pi's configurations:
1.
A snippet of the PHP code:
$conn_str="host=localhost port=5432 dbname=my_db_name user=my_user_name password=my_password";
$db = pg_connect($conn_str);
if(!$db){
$errormessage=pg_last_error();
echo "Error 0: " . $errormessage;
exit();
}
Please bear in mind that this same code has worked perfectly on another computer. I've succeeded in performing data insertion when the Web Form is located on a remote computer accessing the said server, or when the Web Form is accessing that remote computer's own local PostgreSQL server, but not when it is done on this particular problem server. When running in a browser, it only shows:
Error 0:
the result when running in console:
PHP Warning: pg_connect(): Unable to connect to PostgreSQL server: FATAL: password authentication failed for user "raspiserver"
FATAL: password authentication failed for user "raspiserver" in php shell code on line 1
I also have tried changing host=localhost to host=127.0.0.1 or host=0.0.0.0. I'm sure that the error comes from that block of code, since that block is in the first lines of my PHP code, and every other error-reporting block I added prints a distinct "Error X:" number.
2.
The PostgreSQL configurations:
For the pg_hba.conf content :
# Database administrative login by Unix domain socket
local all postgres peer
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local my_db_name my_user_name 127.0.0.1/32 md5
local all all md5
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
# local replication postgres peer
#host replication postgres 127.0.0.1/32 md5
#host replication postgres ::1/128 md5
#host all all 0.0.0.0/0 md5
host another_db_name my_user_name 192.168.52.0/24 trust
host my_db_name my_user_name 192.168.52.0/24 trust
I already tried commenting/uncommenting LINE 7, or changing both LINE 7 and 8 to either md5, trust, or password. I also tried removing 127.0.0.1/32 from LINE 7, or changing it to 127.0.0.0/24 or 0.0.0.0/0. And, yes, I did perform
sudo /etc/init.d/postgresql reload
sudo /etc/init.d/postgresql restart
(even sudo reboot) for every change I've made.
For the postgresql.conf content :
I already set listen_addresses = '*' and password_encryption = on, with the rest left unchanged.
3.
Firewall :
sudo iptables -L -n doesn't show any entries :
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
What is wrong with my configurations?
Is Stretch the cause?
Or is it because the implementation of PostgreSQL-9.6?
I've already googled here and there, but most of the solutions only advise changing pg_hba.conf to trust, or assume that the user name does not exist.
I'm desperate, please help.
(Please bear in mind that I don't know much about PHP, PostgreSQL, Apache, or servers, so please don't assume I really know what I've done so far. Please analyze everything.... Also, English is not my native language, so I might have mixed up some "jargon" here and there (if any).... Sorry for that...)
This particular declaration in pg_hba.conf is incorrect:
# TYPE DATABASE USER ADDRESS METHOD
local my_db_name my_user_name 127.0.0.1/32 md5
because when the TYPE field is local, the line must have 4 fields, not 5: there is no ADDRESS field, because it doesn't apply to Unix domain sockets (which is what local really means), only to TCP connections (which are declared with TYPE being host or hostssl).
When you reload PostgreSQL with a pg_hba.conf with this line, it will fail with
LOG: invalid authentication method "127.0.0.1/32"
but you will see that only if looking in the server log. As a result of the failure, it will ignore your new version of pg_hba.conf.
Anyway this other line already in your file
host all all 127.0.0.1/32 md5
probably does what you want. In this case just remove the offending line, reload again, and check the server log for a message indicating the success of the reload.
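If you do want to keep a dedicated local rule for that database instead of deleting it, the corrected 4-field form (no ADDRESS column for local entries) would look something like this:
# TYPE  DATABASE      USER            METHOD
local   my_db_name    my_user_name    md5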

Why orientDB connection to localhost of docker is refused

I built OrientDB and an app server on Docker, and both are running. This is the container list on Docker:
core@localhost ~ $ docker ps
CONTAINER ID   IMAGE                       COMMAND                CREATED          STATUS          PORTS                                            NAMES
01abef0204a7   fxrialab/appserver:latest   /usr/sbin/httpd -D F   30 minutes ago   Up 30 minutes   0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp         appserver
f6d0631bb092   fxrialab/orient:latest      /bin/sh -c cd /opt/o   30 minutes ago   Up 30 minutes   0.0.0.0:2424->2424/tcp, 0.0.0.0:2480->2480/tcp   appserver/db,orient
ebc1386250b9   fxrialab/data:latest        /usr/sbin/sshd -D      30 minutes ago   Up 30 minutes   0.0.0.0:2200->22/tcp                             data
I also created a database on OrientDB, and I think OrientDB is working fine. However, when I log in to my website, I get "Connection refused" errors like this:
Socket error #111: Connection refused
• vendors/OrientDB/OrientDB.php:254
OrientDBSocket->__construct('localhost','2424',30)
• apps/models/DB.php:11 OrientDB->__construct('localhost','2424')
I don't know the reason, although it tested fine locally when I ran the server.bat file. Ah, I'm using orientdb-1.7.
Thanks in advance!
localhost is a synonym for the IP address 127.0.0.1;
your binds are for 0.0.0.0.
I am not familiar with OrientDB, but I believe what you are facing is a port-access problem (server configuration), not a software problem (the software in this case being OrientDB). Most servers are configured to block all ports unless specifically allowed, for security reasons, so that no one can connect to the server on those blocked ports.
So what is probably happening is that your local server is fine with using those ports, while the remote server is secured and thus prevents the connections.
Docker runs each container with its own separate network interface. Thus if your appserver connects to localhost, it won't see OrientDB.
You need to update your configuration to look up the proper environment variable for the link (see http://docs.docker.com/userguide/dockerlinks/) or just use the db host. Docker injects a hostname into /etc/hosts for each linked container.
I've found the cause of this issue: the HOST I configured to connect to was wrong. It's not localhost, because Docker uses its own internal IPs. Over SSH, run docker inspect orient, then use the reported values in the config to reach the correct IP and port.
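For example, a hypothetical tweak to the connection code from the question, assuming the app container is linked to the OrientDB container with the alias db (so Docker injects a db entry into /etc/hosts and a DB_PORT_2424_TCP_ADDR environment variable):
// Prefer the address injected by the Docker link; fall back to the link alias.
$host = getenv('DB_PORT_2424_TCP_ADDR') ?: 'db';
$db = new OrientDB($host, 2424);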

Debugging PHP Cloud application via SSH tunnel

I'm trying to use remote debugging in Eclipse/Windows via an SSH tunnel as described in these articles on PHP Cloud.
http://www.phpcloud.com/help/putty-ssh-debug-tunnel
http://www.phpcloud.com/help/debugging-overview
I've been able to establish an SSH connection using PuTTY with a public/private key managed by Pageant. I'm now facing issues when testing the debugger in Eclipse's Debug Configurations menu. I've set up a server with the following details.
Base URL: http://lhith.my.phpcloud.com (the link to my application on
PHP Cloud).
Local web root: C:\Users\Luke\workspace\lhith (the path that contains
index.php on my local copy)
Path mapping: /.apps/http/__default__/0/1.7-zdc (the path containing
index.php on the server) -> /lhith (path containing index.php in the
workspace)
File: /lhith/index.php
URL: http://lhith.my.phpcloud.com
I also configured Zend Debugger to use port 10137 and the Client Host/IP of 127.0.0.1.
When I connect my SSH session and then try to test the debugger I see the error "A timeout occurred when the debug server attempted to connect to the following client hosts/IPs: -127.0.0.1"
What could be going wrong here? What can I do about it?
Thank you for any assistance provided.
I made some progress on this tonight. I set up port forwarding on my internet router to forward port 10137 to my computer, and then added my internet router's public IP address to the list of allowed hosts in the Zend Server debug settings on my.phpcloud.com.
I also added this IP to the debugger configuration in Eclipse and was able to successfully connect to the remote system. It appears there is a problem with the SSH remote tunnel settings; I will keep digging, but I wanted to share my findings so far as this has been driving me crazy!
