I have two websites:
1) httpwebsite.com, where I run my web application, which uses Apache, PHP and MySQL;
2) wss.com, where I run a Node.js WebSocket server used for a multiplayer game.
I want to host the client-side JavaScript files that communicate with the WebSocket server on httpwebsite.com, so I don't have to configure an HTTP server in Node.js, for many reasons, such as security and my lack of experience using Node.js as an HTTP server.
I want to use Node.js only for the WebSocket server, for performance and flexibility reasons, among others.
I've heard that the same-origin policy restricts communication between httpwebsite.com and wss.com, but can this be reconfigured to allow two different domains to communicate with each other on purpose?
Do I have options other than actually running an HTTP server on the Node.js server?
You can use CORS for secure requests from one domain to another domain.
http://www.html5rocks.com/en/tutorials/cors/
Two options:
1. You can add CORS headers on wss.com to allow website.com to load its resources. The link Matt gave explains how this works; you just need to add the HTTP header on each Node server you need to access.
2. You can proxy your requests through your Apache server to the Node server, so the web browser thinks it's talking to a service on the same origin. This is often used to make only your web server publicly available, while your app server (running Node) is not directly reachable and sits protected behind a firewall - though obviously Apache needs to be able to access it.
You can use this config in Apache to achieve option 2, forwarding http://website.com/api calls to a service running on wss.com on port 3000.
# Send all /api requests to the node service
ProxyPass /api http://wss.com:3000
# Optionally rewrite redirect (Location) headers from wss.com back to this domain:
ProxyPassReverse /api http://wss.com:3000
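Note that ProxyPass as shown only covers plain HTTP calls; for the browser to open the WebSocket connection through Apache as well, mod_proxy_wstunnel must be enabled. A sketch, assuming the Node WebSocket server listens on port 3000 and clients connect to a /ws path (both are placeholders, adjust to your setup):

```apache
# Requires mod_proxy and mod_proxy_wstunnel to be enabled.
# Tunnel WebSocket upgrade requests on /ws to the Node server:
ProxyPass /ws ws://wss.com:3000/
ProxyPassReverse /ws ws://wss.com:3000/
```

With this in place the client can connect to ws://website.com/ws and Apache relays the upgraded connection to Node.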
Related
I've searched a lot but couldn't find a PHP proxy server that can run on a shared host, so I decided to build a very simple one from scratch, though I'm still on the first step. I've created a subdomain, httpp.alvandsoft.com, and redirected all its subdirectories (REQUEST_URI) to the main index.php, to be logged, so I can see exactly what a proxy server would receive and send.
(The log is accessible through https://httpp.alvandsoft.com/?log=1&log_filename=log.txt)
But whenever I set it as a proxy for Telegram or other apps, it doesn't receive ANY requests at all, even when I use port 443 or 80, and regardless of the proxy type (HTTP, SOCKS or MTProto).
Is a proxy something that depends on the server's settings and works differently from regular HTTP requests and responses, or am I missing something?
I found it out myself. HTTP(S) proxies send the requested URL's hostname as the Host request header, and many hosts and websites check this header; if it isn't one of their own valid domains, they redirect the request immediately.
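To make that concrete, a script can compare the incoming Host header against its own domain; requests sent by a proxy-configured client carry the target site's hostname and fail the check. A minimal sketch (the helper name and the 421 status are my own choices, not from the original log script):

```php
<?php
// Returns true when the request was actually addressed to this site.
// A browser using this script as a proxy sends the *target* site's
// hostname in the Host header instead, so this check fails for it.
function isForThisHost(string $host, string $expected): bool {
    $host = preg_replace('/:\d+$/', '', $host); // drop optional :port
    return strcasecmp($host, $expected) === 0;
}

// Guarded so the snippet also stays runnable from the CLI.
if (PHP_SAPI !== 'cli'
        && !isForThisHost($_SERVER['HTTP_HOST'] ?? '', 'httpp.alvandsoft.com')) {
    http_response_code(421); // 421 Misdirected Request
    exit;
}
```

In practice name-based virtual hosting usually routes such a request away before any script runs, which is why the log stayed empty.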
Introduction
I have a university-managed server that does not give students access to open ports for internet traffic, but students can still open ports (like 4040 for their Node.js applications) for internal access.
Each student's account has a public_html folder (similar to /var/www/), whose files are served statically (or rendered by PHP) by an Apache server at this URL:
http://mi-linux.wlv.ac.uk/~studentid/
Problem
However, in my case I want to expose an API for external testing, using Postman and a React application. The problem is that the API uses Node.js and Express, and creates its own server on port 4040.
I could access the API by using the curl command from ssh internally like:
studentid@csl-student:~$ curl http://localhost:4040
{"message": "Hey! The backend is working. Explore routes from the code"}
Now, since students cannot open ports for HTTP traffic, one simply cannot access my Node.js API from outside the server (which I have to do).
Things I looked up
I searched extensively, from how to statically serve a Node.js application to how to forward an externally reachable port like 8080 to 4040 (my Node.js port), but most solutions require sudo access, and some simply don't work.
Solution I propose
I think I could still use my public_html folder, which is served statically and can render PHP. I could create an index.html (or index.php) file that fetches http://localhost:4040 internally and simply forwards the result, since the API port is open internally and the index file can be served externally.
What the file could do
When the file is loaded via
http://mi-linux.wlv.ac.uk/~studentid/index.html
it could fetch the response from localhost:4040 internally on the server itself (since the API is accessible internally), then relay the result along with the status code and headers.
However, my API has several routes, and I can't hard-code each one. There must be a more efficient way of doing this.
What I'm looking for
I would be really thankful if someone could point me to an existing package made for serving static files with responses pre-loaded from an internal API.
Alternatively, a PHP script that does all the forwarding I need and makes the API public.
PS: I know this should be asked on Server Fault, but since I think this could be done by using a script or something, I asked it here.
A workaround for this would be to host your Node.js code on a third-party platform, say Heroku,
and create a PHP script that acts as a middleware proxy, making cURL requests to your Heroku-hosted Node.js API endpoints.
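A minimal sketch of such a middleware script, assuming the backend lives at https://your-app.herokuapp.com (a placeholder) and the script is reached as proxy.php/<route>; the helper name is my own, not an established API:

```php
<?php
// proxy.php -- forwards any route to the backend and relays the reply.
// BACKEND is a placeholder; point it at your actual Heroku app.
const BACKEND = 'https://your-app.herokuapp.com';

// Build the backend URL from PATH_INFO (the part after proxy.php).
function backendUrl(string $pathInfo, string $query = ''): string {
    $path = $pathInfo === '' ? '/' : $pathInfo;
    return BACKEND . $path . ($query !== '' ? '?' . $query : '');
}

// Guarded so the helper above can also be exercised from the CLI.
if (PHP_SAPI !== 'cli') {
    $url = backendUrl($_SERVER['PATH_INFO'] ?? '', $_SERVER['QUERY_STRING'] ?? '');
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,              // capture the body
        CURLOPT_CUSTOMREQUEST  => $_SERVER['REQUEST_METHOD'],
        CURLOPT_POSTFIELDS     => file_get_contents('php://input'),
    ]);
    $body = curl_exec($ch);
    // Relay status code and content type back to the caller.
    http_response_code(curl_getinfo($ch, CURLINFO_HTTP_CODE));
    header('Content-Type: ' . curl_getinfo($ch, CURLINFO_CONTENT_TYPE));
    curl_close($ch);
    echo $body;
}
```

Because the route is taken from PATH_INFO, every backend route is covered by this one script, with no per-route hard-coding.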
Are Apache's modules mod_rewrite and mod_proxy enabled on the server?
If so, maybe you could use ProxyPassReverse with mod_rewrite's P flag inside a .htaccess file, as described in this post:
RewriteEngine On
# In a .htaccess file the leading slash is stripped from the matched path
RewriteRule ^node/(.*)$ http://localhost:4040/$1 [L,P]
ProxyPassReverse /node http://localhost:4040
(I can't test it right now, so my answer is purely speculative, sorry.)
Another option, in case the server config cannot be modified, could be to use mod_headers to edit the Location header, though I'm not sure it will be transparent:
Header edit Location ^http://internal\.example\.com/(.*) http://example.com/$1
As the title suggests, I'm wondering if it's technically possible for a PHP script to act as a SOCKS proxy. If not, what are the technical limitations?
I have access to paid hosting which lets me execute PHP scripts, and a domain name is connected to the host (e.g. example.com).
Is there any SOCKS proxy written in PHP that I could upload to a directory on the host (e.g. example.com/proxy) and then configure a client (like Firefox) to connect through?
cURL and other extensions are supported.
I'm not yet sure about SSH access.
I have seen projects like php-proxy or Glype, but these are not what I need, since they can only be used by browsing the proxy's homepage. (They are web proxies, but I need a proxy server.)
What you describe will not work. While PHP can create a TCP server, a proxy server in particular must already be running and listening for connections before a client tries to connect to it. Hosting providers, however, execute a PHP script only on demand, when a client requests it over HTTP(S), and only for the duration of that request. For what you want, you need a dedicated server running your PHP application separately from a web server; you won't get that from a shared hosting provider.
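To make the limitation concrete: PHP can open a listening socket, but only inside a long-running process started from the CLI, which is exactly what shared hosting does not offer. A sketch under that assumption (the ephemeral port and function name are illustrative):

```php
<?php
// A persistent listener like this must be started from a shell and
// kept running -- per-request PHP on shared hosting cannot do that.
function startListener(string $addr) {
    $server = stream_socket_server($addr, $errno, $errstr);
    if ($server === false) {
        throw new RuntimeException("bind failed: $errstr ($errno)");
    }
    return $server;
}

// Bind an ephemeral local port just to show the call itself works.
$server = startListener('tcp://127.0.0.1:0');
echo "listening on " . stream_socket_get_name($server, false) . "\n";
// A real SOCKS5 proxy would now loop on stream_socket_accept()
// and speak the RFC 1928 handshake on every connection.
fclose($server);
```

Run from the command line this binds and exits immediately; the missing piece on shared hosting is the ability to keep such a process alive and reachable on a public port.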
I have an ASP.NET web application hosted, and I would like to add a sub-domain (or sub-directory) and run/host a PHP application in it.
How would I do this?
A subdomain would be nicer from a technical standpoint:
You could make your DNS route subdomain.yourdomain.com to a different server than yourdomain.com.
You could add different IIS bindings for subdomain.yourdomain.com and yourdomain.com.
For a sub-directory you'd have to run PHP in IIS as well; here is a guide on how to set it up: You can also run a PHP website in IIS.
You could also run PHP on Apache and .NET in IIS. The only downside: if you try to run both apps on one server, only one (IIS or Apache) can listen for HTTP traffic on port 80 (the default port), so you'd have to host one on a different port (e.g. port 81: yourdomain.com:81), which I wouldn't advise for a production website.
You could also make a tiny application which receives every request and forwards it to the right application on a different port. This is called a reverse proxy.
I'm developing a SPA that runs on Backbone.js locally, with a server set up via Grunt for livereload. I built a REST API in PHP for my app, which I also run locally. Now I have a problem with the cross-domain policy, since my servers are on different ports. I tried to serve both from one port, both with Apache and with Grunt, but I'm not sure it's even possible. How should I deal with this problem? I'd like to develop my app locally and still use Grunt's livereload features.
I propose installing nginx to act as a reverse proxy. It can serve static files from one directory (the frontend) and server-side generated responses (the backend) from another server.
It serves the backend if the request does not correspond to an existing file in the frontend directory.
This is config example for it - https://github.com/vodolaz095/hunt/blob/master/examples/serverConfigsExamples/nginx.conf
It serves static HTML, CSS and JS files from the directory /home/nap/static, and the backend from localhost:3000, and both are accessible on localhost:80 as one server.
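A minimal nginx server block along those lines (paths and ports taken from the description above; treat it as a sketch, not a drop-in config):

```nginx
server {
    listen 80;
    root /home/nap/static;

    # Serve an existing static file if there is one,
    # otherwise hand the request to the backend.
    location / {
        try_files $uri $uri/ @backend;
    }

    location @backend {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}
```

Because both apps are reached through port 80, the browser sees a single origin and the cross-domain problem disappears.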
I hope this is what you need.
So I ended up using grunt-connect-proxy, which did just what I needed.