I'm trying to incrementally create a new version of an old API built with PHP.
What I'm trying to do is:
- check if the new router can handle the request
- if not, forward the request to the old API (which runs on another server on the local network)
The challenge is that I want to forward the request, including any request body sent by the client.
Current code:
try {
    $router->run();
} catch (NotFound $e) {
    // forward request to old server
}
die();
I tried using cURL but couldn't get it to handle multi-file upload requests correctly.
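Roughly the direction I tried looks like this (a sketch, not working code as-is; OLD_API_BASE and the function name are placeholders I made up, and rebuilding the multipart body from $_FILES is the part I'm unsure about):

<?php
// Sketch: forward the current request (method, body, uploads) to the old API.
// Note: php://input is empty for multipart/form-data requests, so uploads are
// rebuilt from $_POST / $_FILES with CURLFile.
const OLD_API_BASE = 'http://10.0.0.2'; // placeholder for the old server's LAN address

function forwardToOldApi(): void
{
    $ch = curl_init(OLD_API_BASE . $_SERVER['REQUEST_URI']);
    curl_setopt($ch, CURLOPT_CUSTOMREQUEST, $_SERVER['REQUEST_METHOD']);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

    if (!empty($_FILES)) {
        // Rebuild the multipart body: plain fields plus the uploaded files.
        $fields = $_POST;
        foreach ($_FILES as $name => $file) {
            $fields[$name] = new CURLFile($file['tmp_name'], $file['type'], $file['name']);
        }
        curl_setopt($ch, CURLOPT_POSTFIELDS, $fields);
    } else {
        // Non-multipart: pass the raw body and original Content-Type through untouched.
        $body = file_get_contents('php://input');
        if ($body !== '') {
            curl_setopt($ch, CURLOPT_POSTFIELDS, $body);
            curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: ' . ($_SERVER['CONTENT_TYPE'] ?? 'application/octet-stream')]);
        }
    }

    $response = curl_exec($ch);
    http_response_code(curl_getinfo($ch, CURLINFO_HTTP_CODE));
    curl_close($ch);
    echo $response;
}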
I also considered sending 3xx redirect headers, but I'm not sure all clients would handle that gracefully (the current API is consumed by web and mobile applications).
Which method is recommended for this use case? Are redirect headers reliable?
If not, is there a good client library that can help me achieve what I need?
Thanks
I think using a proxy is better than handling this in PHP. For example, something like the following nginx config passes a request to the old backend when the new server answers with a 502 error:
http {
    upstream new_api {
        server host:port;
    }
    upstream old_api {
        server host:port;
    }
    server {
        location / {
            proxy_intercept_errors on;
            error_page 502 = @old_api;
            proxy_pass http://new_api;
        }
        location @old_api {
            proxy_set_header Host old.api.com;
            proxy_redirect https://old.api.com/ https://$http_host/;
            proxy_cookie_domain old.api.com $http_host;
            proxy_pass https://old_api;
        }
    }
}
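A note on wiring this to the PHP side of the question (my addition, not part of the config above): proxy_intercept_errors only kicks in when the new backend returns an error status, so the NotFound handler could simply emit one, and the error_page directive could be extended to intercept it as well:

try {
    $router->run();
} catch (NotFound $e) {
    // Return a plain error status so nginx can fall back to the old API,
    // e.g. with: error_page 404 502 = @old_api;
    http_response_code(404);
}
die();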
For a website I'm using an nginx configuration that requires a client SSL certificate. I want my Symfony/PHP project to be able to verify that a client SSL certificate is being used (and to get some extra information from the certificate as well), so I was thinking of doing this by adding it to the request HTTP headers.
In my nginx site configuration I have set:
ssl_client_certificate /home/user/ssl/ca.crt;
ssl_verify_client on;
This works; the client certificate is obligatory.
But I want my underlying Symfony/PHP project to be able to verify that a client certificate is being used. I was thinking of adding it to the HTTP request headers, but I only seem to be able to add it to the HTTP response headers (back to the browser), like this (in the same nginx site config):
location / {
    try_files $uri /app.php$is_args$args;
    add_header X-ssl-client-verify $ssl_client_verify;
}
In Firefox I can indeed see this response header, but that is not what I want (and it can be a security hazard). I've also looked into this:
proxy_set_header X-ssl-client-verify $ssl_client_verify;
But this does not work because I'm not using a proxy.
Is there some other way to add an element to the request header? Or is there an alternative way to get client ssl certificate information into my Symfony / php project?
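For reference, the kind of mechanism I was hoping exists is passing the value to PHP-FPM as a FastCGI parameter instead of an HTTP header. A sketch I haven't verified (the SSL_CLIENT_* parameter names are just ones I picked, and the fastcgi_pass target has to match your setup):

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # Expose the client-certificate details to PHP as FastCGI params
    fastcgi_param SSL_CLIENT_VERIFY $ssl_client_verify;
    fastcgi_param SSL_CLIENT_S_DN   $ssl_client_s_dn;   # certificate subject DN
    fastcgi_pass unix:/var/run/php-fpm.sock;            # adjust to your PHP-FPM socket
}

In PHP these would then show up in $_SERVER['SSL_CLIENT_VERIFY'], or in Symfony via $request->server->get('SSL_CLIENT_VERIFY').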
I need to develop a challenge page much like the Cloudflare firewall challenge.
I know how to make the front end and the back end of the challenge app and I know how to set it up on the server.
The problem is how to integrate it with the website, which is on one server, while the challenge app is on another server. How should I make the two servers communicate? Both servers will run Ubuntu 16.04. Thank you all in advance.
I think it's better to solve this the way Cloudflare does, using an nginx server:
- Make an nginx reverse proxy with a rate limit.
- If the limit is hit, the user is redirected to an error page.
- Integrate your challenge app with that error page.
More about this configuration is here:
https://serverfault.com/questions/645154/how-to-redirect-to-an-other-link-when-excess-request-limit-req-with-nginx
And how to use PHP in error pages is covered here: Nginx, PHP + FPM Custom Error Pages
You can run this reverse proxy on a third server, or on the challenge app server itself.
Point your domain at the reverse proxy, then create the nginx config:
server {
    listen 80 default_server;
    server_name _;
    client_body_timeout 5s;
    client_header_timeout 5s;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://00.00.00.00/; # replace with upstream ip
    }
}
You have to combine this with a custom PHP error page.
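A rough sketch of what the rate-limit part could look like (the zone name, rate, burst and the challenge-app address are placeholders I picked):

# Per-IP request rate limit (limit_req_zone goes in the http block)
limit_req_zone $binary_remote_addr zone=challenge_zone:10m rate=10r/s;

server {
    listen 80 default_server;

    location / {
        limit_req zone=challenge_zone burst=20 nodelay;
        # Requests over the limit get a 503, which is routed to the challenge app
        error_page 503 = @challenge;
        proxy_set_header Host $host;
        proxy_pass http://00.00.00.00/;   # replace with upstream ip, as above
    }

    location @challenge {
        proxy_set_header Host $host;
        proxy_pass http://11.11.11.11/;   # replace with the challenge app server ip
    }
}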
At the moment my REST API runs on PHP behind Apache2/Nginx (on Apache2 actually; migration to Nginx is in progress). After reading about Golang and Node.js performance for REST, I'm thinking about migrating my REST API from PHP to one of these, but where I'm stuck is how to migrate only some of the routes, not the whole REST API at once.
For example, now I have two routes:
/users and /articles
Apache listens on port 80 and uses PHP to return the responses for them, but what if I want to migrate /articles to Node.js? How will my web server know that for /articles it needs to call Node.js (which will be on a different port), while still using PHP for /users?
You can set up the new Node.js REST API to use your old PHP REST API and replace the endpoints in the Node.js REST API when ready.
Here's an example using Hapi.js (but you could use any Node.js RESTful framework):
const Hapi = require('hapi');
const request = require('request');

const server = new Hapi.Server();
server.connection({ port: 81, host: 'localhost' });

// New endpoint handled directly by Node.js
server.route({
    method: 'GET',
    path: '/new',
    handler: (req, reply) => {
        reply('Hello from Node.js API');
    }
});

// Catch-all: stream any other endpoint through to the old PHP API on port 80
server.route({
    method: 'GET',
    path: '/{endpoint}',
    handler: (req, reply) => {
        request.get(`http://localhost:80/${req.params.endpoint}`)
            .on('response', (response) => {
                reply(response);
            });
    }
});

server.start((err) => {
    if (err) {
        throw err;
    }
    console.log(`Server running at: ${server.info.uri}`);
});
You could run both PHP and Node.js on the same server (using different ports), but you're probably better off running them on separate servers in the same network. Once you've moved all the endpoints, you won't want PHP etc. on that server anyway.
Found a pretty good solution from my colleagues: just handle the request with nginx and proxy it to another server if the request URI matches something, like this:
server {
    listen 127.0.0.1:80;
    server_name localhost.dev;

    location ~* ^/[a-zA-Z0-9]+_[a-zA-Z0-9]+_(?<image_id>[0-9]+).* {
        include proxy_headers.conf;
        proxy_set_header X-Secure False;
        add_header X-Image-Id $image_id;
        access_log off;
        proxy_pass http://localhost-image-cache;
        proxy_next_upstream off;
    }
}

upstream localhost-image-cache {
    hash $server_name$image_id consistent;
    server 127.0.0.1:81 max_fails=0;
    keepalive 16;
}
I have three servers: one for load balancing and two for serving the web application. My load balancing works fine when I serve my web page as a static site, but when I log into my web page it does not respond correctly, because the backend server changes every time the page loads. How can I keep the same server for a session until log-out? My load-balancing server configuration is:
upstream web_backend {
    server 192.168.33.2;
    server 192.168.33.3;
}

server {
    listen 80;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://web_backend;
    }
}
You can use the session persistence feature of nginx:
If there is the need to tie a client to a particular application server — in other words, make the client’s session “sticky” or “persistent” in terms of always trying to select a particular server — the ip-hash load balancing mechanism can be used.

With ip-hash, the client’s IP address is used as a hashing key to determine what server in a server group should be selected for the client’s requests. This method ensures that the requests from the same client will always be directed to the same server except when this server is unavailable.

To configure ip-hash load balancing, just add the ip_hash directive to the server (upstream) group configuration:
In your case, just add ip_hash to your upstream definition:
upstream web_backend {
    ip_hash;
    server 192.168.33.2;
    server 192.168.33.3;
}
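One caveat: ip_hash keys on the client address (the first three octets of an IPv4 address), so many clients behind the same NAT or corporate proxy will all land on the same backend. If that matters, an alternative sketch is to hash on the PHP session cookie instead (this assumes the session cookie is named PHPSESSID):

upstream web_backend {
    # Sticky by session cookie rather than client IP (cookie name assumed)
    hash $cookie_PHPSESSID consistent;
    server 192.168.33.2;
    server 192.168.33.3;
}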
Is there a way to connect directly to Redis using client-side (not Node.js) JavaScript?
I'm already using Node.js + PHP + Redis + Socket.io (for the client) successfully for a few projects. However, I really think this could be further simplified to something like PHP + Redis + browser JavaScript, taking out the Node.js server, which is just another server I'd rather not run if it isn't necessary. For simple things, I think it would be better to just connect directly to Redis from JavaScript.
From what I understand, Redis just serves requests through a port, so any language that can make requests to that port should work. In theory, couldn't you just hit the Redis server's port using client-side JavaScript?
I'm mostly interested in the publish/subscribe functions, which may or may not be possible.
I'm not sure whether you can make AJAX requests to a port other than 80, but you should technically be able to forward Redis' port to port 80 using an nginx reverse proxy or something.
Any ideas? Just a thought. I'm very happy with my current solution, but it doesn't hurt to wonder if we could do this even better or more efficiently.
You can only make HTTP and WebSocket requests with client-side JavaScript. However, you should look into Webdis. It adds an easy HTTP/JSON layer to Redis and should do exactly what you want.
The real obstacle is getting around the browser's restriction on AJAX requests to ports other than 80/443. This applies even with the Webdis solution, because Webdis runs on port 7379 by default and would conflict with your Apache or nginx process if run on port 80.
My advice would be to use nginx's proxy_pass to point at the Webdis process. That way you can route the traffic through port 80 and perform AJAX requests without the annoying security issues.
Below is a sample nginx configuration that does the trick for me.
upstream WebdisServerPool {
    server 127.0.0.1:7379;   # webdis server 1
    server 192.168.1.1:7379; # webdis server 2
}

server {
    listen 80;
    root /path/to/my/php/code/;
    index index.php;
    server_name yourServerName.com;

    location ~* \.(ico|css|js|gif|jpe?g|png)(\?[0-9]+)?$ {
        expires max;
        log_not_found off;
    }

    location / {
        # Check if a file exists, or route it to index.php.
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /path/to/my/php/code/$fastcgi_script_name;
    }

    location /redis {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        rewrite ^/redis/(.*)$ /$1 break; # strip the /redis prefix
        proxy_redirect off;
        proxy_pass http://WebdisServerPool;
    }
}
On the front-end side, here is an example of getting all the keys; all Redis requests go through /redis, for example:
$.ajax({
    url: "/redis/KEYS/*",
    method: 'GET',
    dataType: 'json',
    success: function(data) {
        $.each(data.KEYS, function(key, value) {
            $('body').append(key + " => " + value + " <br> ");
        });
    }
});
Or you could use http://wiki.nginx.org/HttpRedis and parse the response yourself.
I have found that the direct Redis HTTP interfaces don't work very well with pub/sub, or are difficult to set up (at the time of writing).
Here is my "workaround" for pub/sub based on the predis examples.
http://bradleygoldsmith.tumblr.com/post/35601539836/quick-and-dirty-redis-subscribe-publish-notifications
I have a bunch of predefined Redis accessors in PHP, and I use a 'router'-style function to call them from the client via $.post requests with jQuery. The router is just a big switch:
public function router() {
    $response = array();
    switch ($_POST['method']) {
        case 'get_whole_list': // a convenience function with arg $list_key
            if ($_POST['list_key']) { // which will be provided by the POST request data
                $response = $this->get_whole_list($_POST['list_key']);
            } else {
                $response = array('error' => 'must be passed with post key "list_key"');
            }
            break;
        // ...and so on, until it's time to send the response:
    }
    return json_encode(array('response' => $response));
}
and then you just echo $myClass->router();
I access it with jQuery like this:
redgets.get_whole_list = function(key, callback) {
    $.post(redgets.router,          // points to my php file
        {method: 'get_whole_list',  // tells it what to do
         list_key: key},            // provides the required args
        function(data) {
            callback($.parseJSON(data).response); // parses the response
        });
};
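A call from the page then looks something like this (the list key is made up):

// Fetch a whole list via the PHP router and log it
redgets.get_whole_list('recent_messages', function(list) {
    console.log(list);
});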
This all works fine; maybe it's not ideal, but it does make a Node.js server redundant.
I am surprised that nobody has already made a general-purpose redis interface in this style.