Testing WordPress Application - php

I am attempting to test my AWS EC2 WordPress application that uses nginx & php-fpm for managing incoming requests.
I don't have the means to test the site using the SSL certificate name which is installed onto the ALB, so this has to be done internally and directly at the EC2 instance. It will soon become apparent that my knowledge of WordPress hosting is limited.
I've adopted port forwarding as a means to connect to the application, as described in this article: https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-manager-sessions-manager/. I can achieve this with the following command:
```
aws ssm start-session --target i-xxxxxxxxxxxxxx --profile username --region eu-west-1 --document-name AWS-StartPortForwardingSession --parameters '{"portNumber":["80"],"localPortNumber":["9999"]}'
```
I can get the default nginx page to appear if I open http://localhost:9999 in a browser. What I'd prefer to do is see whether I can hit any of the PHP WordPress pages, and this bit is unclear to me. If I instead access http://localhost:9999/site-1 I encounter a 404. Looking in /var/log/nginx/error.log I can see this in more detail:
```
[error] 26041#26041: *1 open() "/usr/share/nginx/html/site-1" failed (2: No such file or directory), client: 127.0.0.1, server: localhost, request: "GET /whitelines HTTP/1.1", host: "localhost:9999"
```
This adds to the confusion, since when I check the EC2 filesystem I find the site-1 directory structure, containing all of the PHP files, under a different location: /var/www/site-1.
I'm not sure why this works going SSL certificate name -> ALB -> target group -> EC2, but not when going directly to the EC2 instance.
I suppose what I want to know is: is it possible to verify that the sites work without using the certificate/ALB route? If so, where am I going wrong?
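A plausible explanation, offered as an assumption since the nginx server blocks aren't shown: nginx picks a server block by matching the request's Host header against server_name, so a WordPress vhost whose server_name is the certificate/ALB domain will never match Host: localhost:9999; the request then falls through to the default server rooted at /usr/share/nginx/html, which is exactly the 404 in the error log. A minimal sketch of such a vhost (example.com and the php-fpm socket path are placeholders):

```nginx
# Hypothetical vhost: only requests whose Host header matches example.com
# are served from /var/www/site-1; anything else (e.g. Host: localhost:9999)
# falls back to nginx's default server under /usr/share/nginx/html.
server {
    listen 80;
    server_name example.com;                      # the ALB/certificate domain
    root /var/www/site-1;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;    # standard WordPress rewrite
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm/www.sock;  # php-fpm socket (placeholder)
    }
}
```

If that's what the config looks like, the sites can be verified through the tunnel without the ALB by supplying the expected Host header, e.g. `curl -H "Host: example.com" http://localhost:9999/` on the same machine.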
Thanks!

Related

Configuration of Mercure with Symfony 5

I'm trying to code a chat with Symfony 5 and Mercure, but I have some issues with the configuration. I work on Windows 10.
This is the documentation that I followed: https://github.com/dunglas/mercure/blob/main/docs/hub/install.md
I installed this version for my project: mercure_0.13.0_Windows_arm64.zip.
Then I decompressed it, and right after that I ran "composer require symfony/mercure" in my terminal.
This is in my .env:
```
###> symfony/mercure-bundle ###
# See https://symfony.com/doc/current/mercure.html#configuration
# The URL of the Mercure hub, used by the app to publish updates (can be a local URL)
MERCURE_URL=https://127.0.0.1:8000/.well-known/mercure
# The public URL of the Mercure hub, used by the browser to connect
MERCURE_PUBLIC_URL=https://127.0.0.1:8000/.well-known/mercure
# The secret used to sign the JWTs
MERCURE_JWT_SECRET="!ChangeMe!"
###< symfony/mercure-bundle ###
```
Then I ran the Mercure server with this command line:
```
$env:MERCURE_PUBLISHER_JWT_KEY='!ChangeMe!'; $env:MERCURE_SUBSCRIBER_JWT_KEY='!ChangeMe!'; .\mercure.exe run -config Caddyfile.dev
```
In my PowerShell, I have this:
```
2021/11/16 01:39:58.029 INFO http server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS {"server_name": "srv0", "https_port": 443}
2021/11/16 01:39:58.029 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2021/11/16 01:39:58.111 INFO tls cleaning storage unit {"description": "FileStorage:C:\\Users\\toufi\\AppData\\Roaming\\Caddy"}
2021/11/16 01:39:58.113 INFO tls finished cleaning storage units
2021/11/16 01:39:58.134 INFO pki.ca.local root certificate is already trusted by system {"path": "storage:pki/authorities/local/root.crt"}
2021/11/16 01:39:58.135 INFO http enabling automatic TLS certificate management {"domains": ["localhost"]}
2021/11/16 01:39:58.139 WARN tls stapling OCSP {"error": "no OCSP stapling for [localhost]: no OCSP server specified in certificate"}
2021/11/16 01:39:58.143 INFO autosaved config (load with --resume flag) {"file": "C:\\Users\\toufi\\AppData\\Roaming\\Caddy\\autosave.json"}
2021/11/16 01:39:58.143 INFO serving initial configuration
```
It seems to run well, but when I open https://localhost/.well-known/mercure in my browser, I get:
```
Not Found
The requested URL was not found on this server.
Apache/2.4.46 (Win64) OpenSSL/1.1.1h PHP/7.4.25 Server at localhost Port 443
```
Can someone help me? I don't know how to access my Mercure server from my browser.
Thank you very much
Hey Ben, you should try running https://localhost:8000/.well-known/mercure in your browser instead of https://localhost/.well-known/mercure.
Maybe you've already figured out this problem in the last 3 months, but here are a few things that came to mind.
You start Mercure 0.13 in dev mode (Caddyfile.dev), which allows access to the demo page.
(By the way, I miss the log entry here which tells you that the server is using the file you specified; it should look something like {"level":"info","ts":1646214769.1484525,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile.dev","config_adapter":""}.) You may want to open this demo page to see whether Mercure works or not. The default URL is https://localhost/.well-known/mercure/ui/. It may vary depending on your settings. It's worth trying with http as well.
You don't provide the SERVER_NAME environment variable, so I assume Caddy attempts to use ports 80 and 443. Therefore https://127.0.0.1:8000 in the Symfony config won't work. And port 80 might fail if a web server is already running there to serve Symfony.
You can use SERVER_NAME=:8000 to launch Mercure on port 8000, but in this case it will be HTTP only, not HTTPS.
So, in your case, I would start Mercure in dev mode and check the demo page. If both the http and https attempts fail, I would start Mercure with an explicit SERVER_NAME (e.g. :8000) and check http://localhost:8000/.well-known/mercure/ui/. If nothing goes wrong, one of them should work. Then you can proceed to configure Symfony to use the Mercure service.
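Putting that suggestion into a concrete invocation (a sketch only: the paths and key values are taken from the question, and SERVER_NAME=:8000 is the assumption being tested):

```
$env:SERVER_NAME=':8000'
$env:MERCURE_PUBLISHER_JWT_KEY='!ChangeMe!'
$env:MERCURE_SUBSCRIBER_JWT_KEY='!ChangeMe!'
.\mercure.exe run -config Caddyfile.dev
```

With that in place, the demo page would be expected at http://localhost:8000/.well-known/mercure/ui/ over plain HTTP, since no hostname is given for Caddy to provision a certificate for.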

Can't access files in OpenServer's 'domains' directory with ngrok when setting a webhook for a Telegram bot on Windows

Before using Windows I was an Ubuntu user. When I used ngrok on Ubuntu, it automatically served the /var/www/html directory, so I could easily open the PHP file containing my Telegram bot code and see the result.
Now on Windows, I am using OpenServer. OpenServer's domains directory holds PHP files, like /var/www/html on Ubuntu.
I installed ngrok.exe. When I run ngrok.exe http 80 in cmd, I get everything correctly:
```
Web Interface    http://127.0.0.1:4040
Forwarding       http://5756c0888da3.ngrok.io -> http://localhost:80
Forwarding       https://5756c0888da3.ngrok.io -> http://localhost:80
```
But even so, I can't access the domains.
I also tried this command to point ngrok at my project directly: ngrok.exe http halalBot.test:80
```
Web Interface    http://127.0.0.1:4040
Forwarding       http://40b6091d262f.ngrok.io -> http://halalBot.test:80
Forwarding       https://40b6091d262f.ngrok.io -> http://halalBot.test:80
```
However, when I open http://40b6091d262f.ngrok.io in a browser, it does not serve http://halalBot.test:80; the project does not open. Instead I get a localhost page, which is not the OpenServer localhost page.
Please, if someone knows how to access the 'domains' directory with ngrok for setting a Telegram bot webhook, let me know!
I needed to specify the --host-header parameter so that the server receives the Host header it needs to process the request correctly (presumably OpenServer serves halalBot.test as a name-based virtual host, so without the right Host header the request falls through to the default site):
```
ngrok http --host-header=halalBot.test 80
```

Amazon Web Services PHP FTP Get

I am using Elastic Beanstalk and have deployed my application to Worker Tier.
Part of my application is to connect to remote ftp and download remote files using PHP.
It works without a problem on localhost. When I execute the PHP script on Amazon Web Services, I get this weird error:
```
IP1 = XXX.XX.XX.XX
IP2 = XX.XX.XXX.XXX
PHP Error[2]: ftp_get(): I won't open a connection to IP1 (only to IP2)
```
Application runs on a single instance (non-balanced), on default VPC.
What is really weird is that IP1 does not match the host I'm trying to download the file from (i.e. example.com). Could that be the Internet Gateway IP?
Same application also downloads pictures and connects to APIs, it's definitely connected to the Internet.
I assume the VPC routing configuration won't allow the instance to talk to 0.0.0.0/0 (i.e. any location) over other protocols, only HTTP.
VPC ID: vpc-53cc2236
Network ACL: acl-c850baad
State: available
Tenancy: Default
VPC CIDR: 172.31.0.0/16
DNS resolution: yes
DHCP options set: dopt-f2998e90
DNS hostnames: yes
Route table: rtb-2b64914e
EC2 instance belongs to subnet-1250b265:
Route Table: rtb-2b64914e
Destination: 172.31.0.0/16 target: local
Destination: 0.0.0.0/0 target: igw-a48199c6
Route table rtb-2b64914e:
Destination   | Target       | Status | Propagated
172.31.0.0/16 | local        | Active | No
0.0.0.0/0     | igw-a48199c6 | Active | No
There are also two other subnets, subnet-ab0003ed, and subnet-96f335f3 which belong to same route table as subnet-1250b265.
I had the same issue, and resolved it by using passive mode.
```
ftp> pass
Passive mode on.
```
In Ruby, the code ended up being:
```ruby
Net::FTP.open(host) do |ftp|
  ftp.login(user, pwd)
  ftp.passive = true
  ftp.put(output_filename, "#{target_dir}/#{target_filename}")
end
```
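Since the question itself is about PHP's ftp_get, here is the corresponding PHP-side sketch (host, credentials and file names are placeholders, not the asker's values): ftp_pasv() switches the connection to passive mode, so the client opens the data channel outbound instead of the FTP server connecting back to an address the NAT'd instance doesn't own, which is what the "I won't open a connection to IP1 (only to IP2)" error is complaining about.

```php
<?php
// Sketch with placeholder host/paths: enable passive mode before transferring.
$conn = ftp_connect('ftp.example.com');
if ($conn === false) {
    die('could not connect');
}
ftp_login($conn, 'user', 'secret');
ftp_pasv($conn, true);   // client opens the data connection (passive mode)
ftp_get($conn, '/tmp/local.csv', 'remote.csv', FTP_BINARY);
ftp_close($conn);
```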
Some folks suggested that firewall might block access to FTP. That wasn't the case for me, proper ports were open.
I tried ftp_connect and ftp_get, but they didn't work (tried with PASV on and off).
I tried to download the file with curl over FTP, but got a No Route to Host error.
Then I tried to download the file over FTP with curl from PHP, but I only got an empty file.
Since I have SSH access to the server, I tried to get the file using SCP, but got a weird error: Warning: ssh2_scp_recv(): Unable to receive remote file.
I also found a PHP patch for this, but didn't want to mess with patching the PHP version on AWS.
I ended up using phpseclib which worked fine - no trouble there.

Setting up haProxy to content switch multiple apps on Appfog

I currently have two apps at AppFog:
http://sru-forums-prod.aws.af.cm/ and http://sru-home-prod.aws.af.cm/
I have HAProxy running locally on my computer; this is my current config file:
```
global
    debug

defaults
    mode http
    timeout connect 500ms
    timeout client 50000ms
    timeout server 50000ms

backend legacy
    server forums sru-forums-prod.aws.af.cm:80

frontend app *:8232
    default_backend legacy
```
The end goal is that localhost:8232 forwards traffic to sru-home-prod, while localhost:8232/forums/* forwards traffic to sru-forums-prod. However, I can't even get a simple proxy up and running.
When I run HAProxy with this config file, I receive an AppFog 404 Not Found at localhost:8232.
What am I missing? Is this even possible?
EDIT:
The new config works, but now I have port 60032 coming back in the response.
```
global
    debug

defaults
    mode http
    timeout connect 500ms
    timeout client 50000ms
    timeout server 50000ms

backend legacy
    option forwardfor
    option httpclose
    reqirep ^Host: Host:\ sru-forums-prod.aws.af.cm
    server forums sru-forums-prod.aws.af.cm:80

frontend app *:8000
    default_backend legacy
```
The reason you are getting an AppFog 404 Not Found is that applications hosted on AppFog are routed by domain name. For AppFog to know which app to serve you, the domain name must be in the HTTP request. When you go to localhost:8232/forums/, the request carries localhost as the domain name, which AppFog does not have as a registered app name.
There is a good way to get around this issue:
1) Map your application to a second domain name, for example:
af map <appname> sru-forums-prod-proxy.aws.af.cm
2) Edit your /etc/hosts file and add this line:
127.0.0.1 sru-forums-prod-proxy.aws.af.cm
3) Go to http://sru-forums-prod-proxy.aws.af.cm:8232/forums/, which will resolve to the local machine but go through your HAProxy, ending up with the right host name mapped to your app hosted at AppFog.
Here is a working haproxy.conf file that demonstrates how this has worked for us so far, using a similar approach.
```
defaults
    mode http
    timeout connect 500ms
    timeout client 50000ms
    timeout server 50000ms

backend appfog
    option httpchk GET /readme.html HTTP/1.1\r\nHost:\ aroundtheworld.appfog.com
    option forwardfor
    option httpclose
    reqirep ^Host: Host:\ aroundtheworld.appfog.com
    server pingdom-aws afpingdom.aws.af.cm:80 check
    server pingdom-rs afpingdom-rs.rs.af.cm:80 check
    server pingdom-hp afpingdom-hp.hp.af.cm:80 check
    server pingdom-eu afpingdom-eu.eu01.aws.af.cm:80 check
    server pingdom-ap afpingdom-ap.ap01.aws.af.cm:80 check

frontend app *:8000
    default_backend appfog

listen stats 0.0.0.0:8080
    mode http
    stats enable
    stats uri /haproxy
```

Debugging PHP Cloud application via SSH tunnel

I'm trying to use remote debugging in Eclipse on Windows via an SSH tunnel, as described in these articles on PHP Cloud:
http://www.phpcloud.com/help/putty-ssh-debug-tunnel
http://www.phpcloud.com/help/debugging-overview
I've been able to establish an SSH connection using PuTTY with a public/private key managed by Pageant. I'm now facing issues when testing the debugger in Eclipse's Debug Configurations menu. I've set up a server with the following details:
Base URL: http://lhith.my.phpcloud.com (the link to my application on PHP Cloud)
Local web root: C:\Users\Luke\workspace\lhith (the path that contains index.php in my local copy)
Path mapping: /.apps/http/__default__/0/1.7-zdc (the path containing index.php on the server) -> /lhith (the path containing index.php in the workspace)
File: /lhith/index.php
URL: http://lhith.my.phpcloud.com
I also configured Zend Debugger to use port 10137 and a Client Host/IP of 127.0.0.1.
When I connect my SSH session and then try to test the debugger, I see the error "A timeout occurred when the debug server attempted to connect to the following client hosts/IPs: -127.0.0.1".
What could be going wrong here? What can I do about it?
Thank you for any assistance provided.
I made some progress on this tonight. I set up port forwarding on my internet router to forward port 10137 to my computer, and then added my router's public IP address to the list of allowed hosts in the Zend Server debug settings on my.phpcloud.com.
I also added this IP to the debugger configuration in Eclipse and was able to connect to the remote system successfully. It appears there is a problem with the SSH remote tunnel settings; I will keep digging, but I wanted to share my findings so far, as this has been driving me crazy!
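For reference, the remote tunnel those phpcloud articles set up with PuTTY can also be expressed with plain OpenSSH (hostname and user are placeholders): it forwards the server's debug port back to the local machine, which is why no router port forwarding should be needed once it works.

```
# Remote forward: connections to 127.0.0.1:10137 on the server are tunneled
# back to port 10137 on this machine, where Eclipse's Zend Debugger listens.
ssh -R 10137:127.0.0.1:10137 user@lhith.my.phpcloud.com
```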
