I have an application in Node.js and PHP. For now I am running them on different ports.
Can I run Node and Apache both on the same port, 8080?
Is there any way to run both applications on 8080?
Thanks
A port can usually be bound by only one application, so you have to use different ports.
Proxy -
But you could, for example, create a virtual host in Apache and configure it as a proxy. With this configuration you could access the Node.js server on the same port; the Node.js server itself would run on another port.
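A minimal sketch of such a virtual host, assuming mod_proxy and mod_proxy_http are enabled and the Node.js server runs on port 3000 (the server name and ports here are placeholders):

<VirtualHost *:8080>
    ServerName node.example.com
    # hand every request on this vhost off to the Node.js server
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>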
For TCP-based applications, no. You can have only one application listening on a single port at a time. If you had 2 network cards, you could have one application listen on the first IP and the second one on the second IP, both using port 8080... but I doubt that is your case.
I guess you could run them over UDP, which could allow two applications to listen on the same port, but UDP is unreliable and doesn't establish connections; it just sends and receives packets, and you might experience heavy packet loss.
So, short answer - no.
Actually, there might be a way using iptables. Add a rule of this nature:
iptables -t nat -A PREROUTING \
    -p tcp --dport 8080 \
    -m bpf --bytecode "14,0 0 0 20,177" \
    -j REDIRECT --to-ports 80
The idea is to inspect the first few bytes of the connection and redirect it based on what they contain. That way you can accept both HTTP and Node.js traffic on the same port, but have the servers running on separate ports.
The bytecode above is only a placeholder and I have not tried this, but I know people who have used this strategy successfully.
Assume server 1 is located at 5.5.5.5:11211, and server 2 is located at 25.25.25.25:11211. You add them to the server pool and everything is great. Until somebody connects to that port and starts messing with your data.
So we change the port to 38295. Harder to find, but not impossible, so it's still not enough.
My questions are:
1) Can you set up authentication (username/password) for memcached servers to verify a connection? Can you whitelist specific hosts/IPs instead (probably a bad idea)?
2) Can you, and should you, secure data transferred over the internet? The data is sent in raw format, and your ISP or anyone sniffing the line could see everything being sent. But encrypting the data would probably hurt performance?
What solutions are there for setting up a cluster of memcached servers and how do you secure and authenticate them?
The solution that met my needs was to set up iptables entries as suggested by sumoanand. Here's what I got working.
Start memcached using something like this:
/usr/bin/memcached -p 11211 -l 0.0.0.0 -d -u www-data -m 12288
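Here -p sets the TCP port, -d runs memcached as a daemon, -u sets the user it runs as, and -m caps the memory at 12288 MB.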
Keep in mind that the -l parameter is set to 0.0.0.0, which essentially allows connections from ANY source. If you keep the default 127.0.0.1, remote connections will not work.
Next, we add entries to iptables. If your memcached server is on your LAN, the following command will allow connections only from specific local servers.
For instance, in order to add 192.168.1.100 to the allowed list, we issue the command:
iptables -A INPUT -p tcp -s 192.168.1.100 --dport 11211 -j ACCEPT
If you want to whitelist a remote server, for example 25.62.25.62, then you issue another command:
iptables -A INPUT -p tcp -s 25.62.25.62 --dport 11211 -j ACCEPT
You can whitelist as many IPs as you want, but be sure to issue the final command that blocks all other connections on that port.
iptables -A INPUT -p tcp --dport 11211 -j REJECT
iptables rules are evaluated in the order they were added, so if you issue the REJECT rule before any ACCEPT rules, all connections will be rejected (even the whitelisted ones).
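To double-check the order, you can list the current rules with their positions:
iptables -L INPUT -n --line-numbers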
Data sent this way, however, is still not encrypted in any way. Anyone intercepting traffic between your memcached server and the remote server (packet sniffers, ISPs) will be able to see the data completely raw.
I don't think we need to go for a complex solution here, as mentioned by Mike.
Assume your web servers (web1, web2, web3) need to get data from your memcached servers (mem1 & mem2) via port 11211, all located on the same internal network, and the internal IP address of each web server starts with 172.221...
In this case, you can put a restriction in the iptables of the mem1 & mem2 servers to ONLY accept requests from 172.221... addresses on port 11211, as sketched below.
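For example, assuming the internal range maps to 172.221.0.0/16 (the exact subnet is a guess), the rules on mem1 & mem2 could be:
iptables -A INPUT -p tcp -s 172.221.0.0/16 --dport 11211 -j ACCEPT
iptables -A INPUT -p tcp --dport 11211 -j REJECT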
Hope this will help.
Memcached now supports SASL. This will allow you to perform strong authentication for your memcached service. Here is a good article on how to set up SASL with memcached.
http://blog.couchbase.com/sasl-memcached-now-available
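A rough sketch of the moving parts, assuming memcached was compiled with SASL support and the Cyrus SASL tools are installed (myuser is a placeholder):
# create a SASL credential for the memcached application
saslpasswd2 -a memcached -c myuser
# start memcached with SASL authentication enabled (-S)
memcached -S -p 11211 -u www-data -d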
Researching PHP/Gearman. I'm trying to get an understanding of how the Gearman server/process determines what constitutes a "valid" client.
In the docs that I've seen, a number of clients connect to the Gearman server. However, I've not found anything that describes how the server "validates" the workers, or restricts workers/clients from accessing/getting work from the server.
As an example, I create a Gearman server, and I have a network with 5 child machines, each of which has a "worker". My evil friend Steve adds another machine to the network, with its own worker.
How do I stop Steve's worker from getting work from my server?
Is there a way to have the client/worker register itself, so I can essentially allocate IDs to the clients/workers?
I'm fairly certain that there's a way to do this, but I haven't come across it yet.
I'm testing on a Linux env, using PHP/MySQL/Gearman.
Thanks
Like memcached, Gearman has no access control or authentication whatsoever.
Your best solution is to rely on your OS, e.g. firewall rules.
Namely, iptables should block all incoming traffic to port 4730 (the standard Gearman port) except from your own servers, like this:
iptables -A INPUT -p tcp --dport 4730 -s server1 -j ACCEPT
...
iptables -A INPUT -p tcp --dport 4730 -s server5 -j ACCEPT
iptables -A INPUT -p tcp --dport 4730 -j DROP
That way, you can still use Gearman from your whitelisted servers. If you also use it from localhost, make sure loopback traffic is accepted as well (many default configurations already have an ACCEPT rule for the lo interface).
Disclaimer: these rules are off the top of my head, please double-check them before running them on a production server.
Hope this helps!
By (1) listening only on localhost or (2) setting up proper firewall rules if you need outside access. Gearman is created with the intention of having as little overhead as possible; there is no authentication protocol. If this is not enough, listening only on localhost and using SSH tunnels to that machine is a possibility. Another option is using the HTTP protocol (see here) and putting a validating proxy in front of it.
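For example, a standard SSH tunnel from a client machine (gearman.example.com is a placeholder) would be:
ssh -N -L 4730:127.0.0.1:4730 user@gearman.example.com
The local client then connects to 127.0.0.1:4730 and the traffic travels encrypted over SSH.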
Gearman servers should only be accessible on your internal network. The network your application lives on should not have unauthorized devices on it; your application servers shouldn't be sharing a network with your wireless router. Gearman will only send jobs to workers that have registered the same task name with that particular server. If one of the employees at your company registers a task with the same name on your production Gearman master as a joke, you have bigger problems.
After seeing "ZeroMQ is the answer" (http://vimeo.com/20605470) by Ian Barber, I got excited about testing out the pipeline pattern presented by him. However, he uses an IPC example: https://github.com/ianbarber/ZeroMQ-Talk/tree/master/worker
How would this work with TCP? How can I run workers over TCP instead of forking processes?
It will look almost exactly the same, but instead of using ipc:// socket names you would use tcp:// socket names. So instead of this:
$work->bind("ipc:///tmp/work");
$ctrl->bind("ipc:///tmp/control");
You might have this:
$work->bind("tcp://*:8080");
$ctrl->bind("tcp://*:8081");
This has the work socket listening on port 8080 and the ctrl socket listening on port 8081. Your connect operations would look something like:
$work->connect("tcp://1.2.3.4:8080");
(Assuming the ip address of your server was 1.2.3.4).
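For completeness, a worker over TCP might look roughly like this in PHP, assuming the pecl zmq extension and the PUSH/PULL work channel plus PUB/SUB control channel from the talk (1.2.3.4 again stands in for your server's address):

$context = new ZMQContext();

// PULL socket: receives work items from the ventilator
$work = new ZMQSocket($context, ZMQ::SOCKET_PULL);
$work->connect("tcp://1.2.3.4:8080");

// SUB socket: receives control messages (e.g. a kill signal)
$ctrl = new ZMQSocket($context, ZMQ::SOCKET_SUB);
$ctrl->connect("tcp://1.2.3.4:8081");
$ctrl->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, "");

// poll both sockets so a control message is not starved by work
$poll = new ZMQPoll();
$poll->add($work, ZMQ::POLL_IN);
$poll->add($ctrl, ZMQ::POLL_IN);

$readable = $writable = array();
while (true) {
    $poll->poll($readable, $writable);
    foreach ($readable as $socket) {
        if ($socket === $work) {
            $msg = $work->recv();
            // ... process the work item in $msg ...
        } else {
            break 2; // any control message shuts the worker down
        }
    }
}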
I have a proxy in our office which blocks all traffic on certain ports. It's a big office, so the security manager won't open any ports. We need some data through port 9001, which isn't open.
To bypass this, I put a PHP script on my server which outputs an XML that needs to be read from my office network. I tested it at home with cURL and it worked flawlessly, but now it seems to grab nothing.
Testing with var_dump shows boolean false from curl_exec.
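When curl_exec() returns false, curl_error() will usually tell you why; a quick hypothetical check (the URL is a placeholder):
$ch = curl_init('http://example.com:9001/feed.xml');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch);
if ($result === false) {
    var_dump(curl_error($ch)); // e.g. a timeout if the proxy blocks the port
}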
What other methods could I use?
You could set up a VPN on an allowed port and connect through that. That would probably simplify a lot of other stuff as well. OpenVPN is pretty light and straightforward to set up.
Install a proxy on an external host which listens on port 80 but fetches from port 9001.
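A minimal sketch of such a pass-through script in PHP, assuming the XML lives at the hypothetical URL http://example.com:9001/data.xml:
<?php
// fetch the XML from the blocked port and re-serve it over port 80
$xml = file_get_contents('http://example.com:9001/data.xml');
header('Content-Type: text/xml');
echo $xml;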
Let us consider a case: there are 2 Apache servers running, and one domain is available.
If we make a request like http://domain1.com/example1.php, it should go to the Apache server where the actual domain is hosted. When http://domain1:8000/example1.php is requested, it should point to an application on another server (another machine) under the same domain.
Now the question is: if http://domain1:8000/example1.php is requested, which server will it run on? Which server will interpret and execute those files: the Apache server on the domain1 system, or the Apache server that domain1:8000 points to (the other machine, to which the request is port-forwarded)?
A server will listen on a specific port, so if you are using different ports, it will go to whatever server is listening on that port, regardless of the domain.
Since you're using port forwarding, a request can only be processed by wherever you forward the port to. So port 80 is being forwarded to your main server and port 8000 to the other server. If you didn't forward, and everything went to the first server, you would get an error unless the first server were also listening on port 8000.
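On a Linux gateway, that forwarding might look roughly like this (a sketch; 192.168.1.10 and 192.168.1.20 are placeholder addresses for the two internal servers):
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.1.10:80
iptables -t nat -A PREROUTING -p tcp --dport 8000 -j DNAT --to-destination 192.168.1.20:8000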