memcached authenticating remote connections - php

Assume server 1 is located at 5.5.5.5:11211, and server 2 is located at 25.25.25.25:11211. You add them to the server pool and everything is great. Until somebody connects to that port and starts messing with your data.
So we change the port to 38295. Harder to find, but not impossible, so it's still not enough.
My questions are:
1) Can you set authentication (username/password) for memcached servers to verify a connection? Can you whitelist specific hosts/IPs (probably a bad idea)?
2) Can you and should you secure data transferred over the internet? The data is in raw format, and your ISP and anyone sniffing the line could see all the data being sent. But encrypting data would probably affect performance?
What solutions are there for setting up a cluster of memcached servers and how do you secure and authenticate them?

The solution that met my needs was to set up iptables entries as suggested by sumoanand. Here's what I got working.
Start memcached using something like this:
/usr/bin/memcached -p 11211 -l 0.0.0.0 -d -u www-data -m 12288
Keep in mind that the -l parameter is set to 0.0.0.0, which essentially allows connections from ANY source. If you keep the standard 127.0.0.1 this will not work.
Next, we make entries to the iptables. If your memcached server is on your LAN, the following command will allow connections only from specific local servers.
For instance, in order to add 192.168.1.100 to the allowed list, we issue the command:
iptables -A INPUT -p tcp -s 192.168.1.100 --dport 11211 -j ACCEPT
If you want to whitelist a remote server, for example, 25.62.25.62 then you issue another command:
iptables -A INPUT -p tcp -s 25.62.25.62 --dport 11211 -j ACCEPT
You can whitelist as many IPs as you want, but be sure to issue the final command that blocks all other connections on that port.
iptables -A INPUT -p tcp --dport 11211 -j REJECT
The iptables rules are evaluated in the order they are entered, so if you issue a REJECT ALL statement before issuing any ACCEPT rules, all connections will be rejected (even the whitelisted ones).
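If you are unsure about the resulting order, you can list the rules with their positions and use -I to insert a new ACCEPT rule above the final REJECT (the 192.168.1.101 host below is just an illustration):
iptables -L INPUT -n --line-numbers
iptables -I INPUT 1 -p tcp -s 192.168.1.101 --dport 11211 -j ACCEPT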
Data sent this way, however, is still not encrypted in any way. Anything intercepting traffic between your memcached server and the remote server (packet sniffers, ISPs) will be able to see the data completely raw.
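One way to add encryption without touching memcached itself is to tunnel the connection, for example over SSH. A rough sketch (the host name and the local port 11212 are made up for illustration):
ssh -f -N -L 11212:127.0.0.1:11211 deploy@mem-server.example.com
Your client then talks to 127.0.0.1:11212 locally, and the traffic crosses the wire inside the SSH tunnel, at some cost in latency and CPU.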

I don't think we need to go for a complex solution here, as mentioned by Mike.
Assume your web servers (web1, web2, web3) need to get data from memcache servers (mem1 & mem2) via port 11211, all located on the same internal network, and the internal IP address of each web server starts with 172.221...
In this case, you can put a restriction in the iptables of the mem1 & mem2 servers to ONLY accept requests from 172.221.. on port 11211.
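For example, assuming the internal range is 172.221.0.0/16 (adjust the mask to your actual network), rules of roughly this shape on mem1 and mem2 would do it:
iptables -A INPUT -p tcp -s 172.221.0.0/16 --dport 11211 -j ACCEPT
iptables -A INPUT -p tcp --dport 11211 -j DROP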
Hope this will help.

Memcached now supports SASL. This will allow you to perform strong authentication for your memcached service. Here is a good article on how to set up SASL with memcached.
http://blog.couchbase.com/sasl-memcached-now-available
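For reference, the server-side setup looks roughly like this (flags as I recall them from the linked article; your memcached must be built with SASL support, and the username below is made up):
saslpasswd2 -a memcached -c mcuser
memcached -S -p 11211 -u www-data -m 1024 -d
The -S flag turns on SASL, which also forces clients to use the binary protocol.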

Related

Remote MySQL not working with Laravel Config

Our database server grants access to all local IPs for the root user, and it works well when accessing it via the CLI from every local IP. But on our web server (a different machine), Laravel still can't connect to the database; here's the error:
I've tried clearing the cache, reinstalling Laravel (composer install, generating a new key, etc.) and changing the DB config driver to sqli, but it still errors when connecting to the remote DB. Why does the DB work when I query it via tinker on that machine? This is so unusual.
If your MySQL database is on a remote server it may well be one of the following issues:
Firewall block: the server with the MySQL service may be behind a firewall that is set to block external access to the port that MySQL operates on.
MySQL user permissions: if the MySQL service is not behind a firewall, then the next cause may be that the user only has localhost access permissions.
You should try to log in to your remote server and, from there, connect to your database via some shell command to verify you actually can.
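For example, something along these lines from the web server (host and user are placeholders):
mysql -h db.example.internal -u laravel_user -p -e "SELECT 1"
If that fails with an access or connection error, the problem is network or permissions rather than Laravel.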
Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies, including United States Department of Defense–style mandatory access controls (MAC).
httpd_can_network_connect (HTTPD Service): Allow HTTPD scripts and modules to connect to the network.
Run as root:
# setsebool -P httpd_can_network_connect 1
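You can check whether SELinux is enforcing and what the boolean is currently set to with:
getenforce
getsebool httpd_can_network_connect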
Solved: php can't connect to mysql with error 13 (but command line can)
Now Laravel can connect to the DB, by executing setsebool httpd_can_network_connect=1 on the database server. But I still don't understand why. Is there any reasonable explanation?

Selenium With Gcloud

We have a Google Cloud server and set up Selenium Grid on it. The Selenium Grid hub is up and running.
But when I want to reach ip:4444/grid/console/ from my PC browser, I cannot reach it.
I also cannot test from my PC; I do everything as a node on my PC.
When I go to the Google Cloud server (the test interface) and enter Start Test, the test does not start. I get this message: "Error forwarding the new session Empty pool of VM for setup Capabilities"
I use PHP (Facebook Webdriver).
There are two levels of firewalls: one is Gcloud, the next is your machine.
The Gcloud firewall needs to allow port 4444. Then you need to check the machine's firewall using iptables -S.
If you see the port is not allowed, you can use the iptables statement below to add it:
iptables -A INPUT -p tcp -m multiport --dports 4444 -j ACCEPT
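If the Gcloud side is the one blocking, a firewall rule can also be added from the gcloud CLI; roughly like this (the rule name is arbitrary):
gcloud compute firewall-rules create allow-selenium-grid --allow tcp:4444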

Load balancing PHP built-in server?

My development environment consists of the single-threaded built-in PHP server. Works great:
APP_ENV=dev php -S localhost:8080 -c php.ini web/index.php
One issue with this is that the built-in server is single-threaded. This makes lots of parallel XHRs resolve sequentially. Worst of all, it doesn't mimic our production environment very well. Some front-end issues with concurrency simply don't exist in this set up.
My question:
What's an existing solution that I could leverage that would proxy requests asynchronously to multiple instances of the same PHP built-in server?
For example, I'd have a few terminal sessions running the built-in server on different ports, then each request is routed to a different one of those instances. In other words, I want multiple instances of my application running in parallel using the simplest possible set up (no Apache or Nginx if possible).
A super-server, like inetd or tcpserver†, works well. I'm a fan of the latter:
tcpserver waits for incoming connections and, for each connection,
runs a program of your choice.
With that, now you want to use a reverse proxy to pull the HTTP protocol off the wire and then hand it over to a connection-specific PHP server. Pretty simple:
$ cat proxy-to-php-server.sh
#!/bin/bash -x
# get a random port -- this could be improved
port=$(shuf -i 2048-65000 -n 1)
# start the PHP server in the background
php -S localhost:"${port}" -t "$(realpath ${1:?Missing path to serve})" &
pid=$!
sleep 1
# proxy standard in to nc on that port
nc localhost "${port}"
# kill the server we started
kill "${pid}"
Ok, now you're all set. Start listening on your main port:
tcpserver -v -1 0 8080 ./proxy-to-php-server.sh ./path/to/your/code/
In English, this is what happens:
tcpserver starts listening on all interfaces at port 8080 (0 8080) and prints debug information on startup and each connection (-v -1)
For each connection on that port, tcpserver spawns the proxy helper, serving the given code path (path/to/your/code/). Pro tip: make this an absolute path.
The proxy script starts a purpose-built PHP web server on a random port. (This could be improved: script doesn't check if port is in use.)
Then the proxy script passes its standard input (coming from the connection tcpserver serves) to the purpose-built server
The conversation happens, then the proxy script kills the purpose-built server
This should get you in the ballpark. I've not tested it extensively. (Only on GNU/Linux, Centos 6 specifically.) You'll need to tweak the proxy's invocation of the built-in PHP server to match your use case.
Note that this isn't a "load balancing" server, strictly: it's just a parallel ephemeral server. Don't expect too much production quality out of it!
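One possible tweak for the random-port caveat mentioned above (untested, just a sketch): loop until nc reports the port is free before starting the PHP server:
port=$(shuf -i 2048-65000 -n 1)
while nc -z localhost "${port}" 2>/dev/null; do
  port=$(shuf -i 2048-65000 -n 1)
done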
† To install tcpserver:
$ curl -sS http://cr.yp.to/ucspi-tcp/ucspi-tcp-0.88.tar.gz | tar xzf -
$ cd ucspi-tcp-0.88/
$ curl -sS http://www.qmail.org/moni.csi.hu/pub/glibc-2.3.1/ucspi-tcp-0.88.errno.patch | patch -Np1
$ sed -i 's|/usr/local|/usr|' conf-home
$ make
$ sudo make setup check
I'm going to agree that replicating a virtual copy of your production environment is your best bet. You don't just want to cause issues, you want to cause yourself the same issues. Also, there's little guarantee that you will hit all of the same issues under the alternate setup.
If you do want to do this, however, you don't have particularly many options. Either you direct incoming requests to an intermediate piece of software which then dispatches them to the php backends -- this would be the Apache, Nginx solutions -- or you don't, and the request is directly handled by the single php thread.
If you're not willing to use that interposed software, there's only one layer between you and the client: networking. You could, in theory, set up a round-robin DNS for yourself. You give yourself multiple IPs, load up a PHP server listening on each, and then let your client connections get spread across them. Note that this would assign each client to a specific process -- which may not be the level of parallel you're looking for.

Can we run node and apache both on same port

I have an application in Node.js and PHP. For now I am using different ports for both.
Can I run Node and Apache both on the same port, 8080?
Is there any way to run both applications on 8080?
Thanks
A port is usually bound to only one application, so you have to use different ports.
Proxy:
But you could, for example, create a virtual host in Apache and configure this host as a proxy. In that configuration you could access the Node.js server on the same port, while the Node.js server itself runs on another port.
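A minimal sketch of such a vhost, assuming mod_proxy and mod_proxy_http are enabled and the Node.js app listens on port 3000 (both assumptions on my part):
<VirtualHost *:8080>
    ServerName node.example.com
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
PHP keeps being served by Apache as usual, and requests for that hostname are handed over to Node.js.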
For TCP based applications, no. You can have only one application listening on a single port at time. If you had 2 network cards, you could have one application listen on the first IP and the second one on the second IP, both using port 8080.... but I doubt that is your case.
I guess you could run them over UDP, which could allow two applications to listen on the same port, but UDP is unreliable and doesn't establish connections; it just sends/receives packets. You might experience heavy packet loss on UDP.
So, short answer - no.
Actually, there might be a way using iptables. Add a rule of this nature
iptables -A INPUT \
-p tcp \
-m bpf --bytecode "14,0 0 0 20,177" \
-j redirect .. Port 80
Check the first few bytes of the connection, and redirect the connection. Then you can accept both http and node.js on the same port, but have the servers running on separate ports.
The syntax above is incorrect, and I have not tried it, but I know people who have used this strategy successfully.

Gearman - Client

Researching PHP/Gearman. I'm trying to get an understanding of how the Gearman Server/Process determines what constitutes a "valid" client.
The docs that I've seen show a number of clients connecting to the Gearman server. However, I've not found anything that describes how the server "validates" the workers, or restricts the workers/clients from accessing/getting work from the server.
As an example, I create a Gearman server, and I have a network with 5 child machines, each of which has a "worker". My evil friend Steve adds another machine to the network, with its own worker.
How do I stop Steve's worker from getting work from my server?
Is there a way to have the client/worker register itself, so I can essentially allocate IDs to the clients/workers?
I'm fairly certain that there's a way to do this, but I haven't come across it yet.
I'm testing on a Linux env, using PHP/MySQL/Gearman.
Thanks
Like memcached, gearman has no access control or authentication whatsoever.
Your best solution is to rely on your OS, e.g. firewall rules.
Namely, iptables should block all incoming traffic to port 4730 (the standard Gearman port) except from your own servers, like this:
iptables -A INPUT -p tcp --dport 4730 -s server1 -j ACCEPT
...
iptables -A INPUT -p tcp --dport 4730 -s server5 -j ACCEPT
iptables -A INPUT -p tcp --dport 4730 -j DROP
That way, you can still use Gearman from localhost.
Disclaimer: these rules are off the top of my head; please double-check them before running them on a production server.
Hope this helps!
By (1) listening only on localhost or (2) setting up proper firewall rules if you need outside access. Gearman is created with the intention of having as little overhead as possible; there is no authentication protocol. If this is not enough, listening only on localhost and using SSH tunnels to that machine is a possibility. Also possible is using the HTTP protocol (see here) and putting a validating proxy in front of it.
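If you go the localhost-only route, the bind address can be set when starting the daemon; a minimal sketch, assuming a reasonably recent gearmand (check your version's flags):
gearmand -L 127.0.0.1 -p 4730 -d
Workers and clients on the same box then connect to 127.0.0.1:4730, and anything remote has to come in through an SSH tunnel or similar.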
Gearman servers should only be accessible on your internal network. The network your application lives on should not have unauthorized devices on it. Your application servers shouldn't be sharing a network with your wireless router. Gearman will only send jobs to servers registered to that particular server with the same task name. If one of the employees at your company registers a task with the same name to your production Gearman master as a joke, you have bigger problems.
