At the moment my REST API is written in PHP and runs behind Apache2/Nginx (on Apache2 actually; migration to Nginx is in progress). After reading about Golang and Node.js performance for REST APIs, I am thinking about migrating my REST API from PHP to one of those, but where I'm stuck is how to migrate only some of the routes, not the whole REST API at once.
For example, right now I have two routes:
/users and /articles
Apache listens on port 80 and, with PHP's help, returns responses for them. But what if I want to migrate /articles to Node.js? How will my web server know that it needs to call Node.js for /articles (which will be on a different port) while still using PHP for /users?
You can set up the new Node.js REST API to proxy your old PHP REST API, and then replace the endpoints in the Node.js API one at a time as they're ready.
Here's an example using Hapi.js (but you could use any Node.js RESTful framework):
const Hapi = require('hapi');       // Hapi v16-style API (server.connection / reply)
const request = require('request');

const server = new Hapi.Server();
server.connection({ port: 81, host: 'localhost' });

// An endpoint that has already been migrated to Node.js.
server.route({
    method: 'GET',
    path: '/new',
    handler: (req, reply) => {
        reply('Hello from Node.js API');
    }
});

// Everything else falls through to the old PHP API still listening on port 80.
server.route({
    method: 'GET',
    path: '/{endpoint}',
    handler: (req, reply) => {
        request.get(`http://localhost:80/${req.params.endpoint}`)
            .on('response', (response) => {
                reply(response);
            });
    }
});

server.start((err) => {
    if (err) {
        throw err;
    }
    console.log(`Server running at: ${server.info.uri}`);
});
You could run both PHP and Node.js on the same server (using different ports), but you're probably better off running them on separate servers in the same network. Once you've moved all the endpoints, you won't want PHP etc. on your server anyway.
I found a pretty good solution from my colleagues: handle the request with nginx and proxy it to another backend when the request URI matches a pattern. In your case a location block for /articles could proxy_pass to the Node.js port while everything else stays with PHP. Their example (from an image-cache setup) looks like this:
server {
    listen 127.0.0.1:80;
    server_name localhost.dev;

    location ~* ^/[a-zA-Z0-9]+_[a-zA-Z0-9]+_(?<image_id>[0-9]+).* {
        include proxy_headers.conf;
        proxy_set_header X-Secure False;
        add_header X-Image-Id $image_id;
        access_log off;
        proxy_pass http://localhost-image-cache;
        proxy_next_upstream off;
    }
}

upstream localhost-image-cache {
    hash $server_name$image_id consistent;
    server 127.0.0.1:81 max_fails=0;
    keepalive 16;
}
I'm currently trying to host my own WebSocket server using Ratchet (http://socketo.me/docs/push).
The problem is that I can't find a good tutorial that shows me how to host this on a subdomain, so hopefully someone can help me here.
My plan:
I already have a basic-auth-secured subdomain called ws.my-domain.de. Now I want to run Ratchet on this subdomain to provide it as a service for my main domain and all my subdomains.
On my main domain my-domain.de I have WordPress running, so this is where I want to use my own WebSocket first, via the client-side tutorial from the page I've posted above:
<script src="https://gist.githubusercontent.com/cboden/fcae978cfc016d506639c5241f94e772/raw/e974ce895df527c83b8e010124a034cfcf6c9f4b/autobahn.js"></script>
<script>
    var conn = new ab.Session('ws://ws.my-domain.de',
        function() {
            conn.subscribe('kittensCategory', function(topic, data) {
                // This is where you would add the new article to the DOM (beyond the scope of this tutorial)
                console.log('New article published to category "' + topic + '" : ' + data.title);
            });
        },
        function() {
            console.warn('WebSocket connection closed');
        },
        {'skipSubprotocolCheck': true}
    );
</script>
So can someone please show me the steps I need to take? I'm completely new to this. I know how to use it on the client side, but I don't know how to provide it as a service and then use it in PHP (WordPress).
I'm assuming you are on a VPS or dedicated server.
You need to run your WebSocket server as a daemon and then create a vhost for your subdomain in Nginx or Apache, and use that vhost as a reverse proxy to your WebSocket server (see the sketch after the links below).
Running ratchet as a daemon
Nginx WebSocket reverse proxy
Apache WebSocket reverse proxy
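For the daemon part, a rough Ratchet 0.4-style bootstrap looks like the sketch below; MyApp is a placeholder for your own class implementing Ratchet\MessageComponentInterface, and the port is an assumption. You keep the script alive with supervisord or systemd instead of serving it through Apache/PHP-FPM.

<?php
// push-server.php -- minimal sketch of running Ratchet as a long-lived process.
// MyApp is a hypothetical component implementing Ratchet\MessageComponentInterface
// (onOpen / onMessage / onClose / onError).
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;

require __DIR__ . '/vendor/autoload.php';

$server = IoServer::factory(
    new HttpServer(new WsServer(new MyApp())),
    8080,        // internal port that the ws.my-domain.de vhost proxies to
    '127.0.0.1'  // bind locally so only the reverse proxy can reach it
);
$server->run(); // blocks forever -- run it as a daemon, not per request

The ws.my-domain.de vhost then only has to proxy the Upgrade request to 127.0.0.1:8080, as described in the reverse-proxy links above.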
I'm trying to incrementally create a new version of an old API built with PHP.
What I'm trying to do is:
Check if the router can handle the request
If it can't: forward the request to the old API (the old API is on another server on the local network)
The challenge I have is that I want to forward the request, including any request body sent by the client.
Current code:
try {
    $router->run();
} catch (NotFound $e) {
    // forward request to old server
}
die();
I tried using curl, but I couldn't make it respect multi-file upload requests.
I also considered using 3xx redirect headers, but I wasn't sure all clients would handle that gracefully (the current API is consumed by web and mobile applications).
Which method is recommended in such a use case? Are redirect headers reliable?
If not, is there a good client library that can help me achieve what I need?
Thanks
I think using a proxy is better than handling it in PHP. For example, something like this passes requests to the old backend when the new server returns a 502 error:
http {
    upstream new_api {
        server host:port;
    }
    upstream old_api {
        server host:port;
    }

    server {
        location / {
            proxy_intercept_errors on;
            error_page 502 = @old_api;
            proxy_pass http://new_api;
        }

        location @old_api {
            proxy_set_header Host old.api.com;
            proxy_redirect https://old.api.com/ https://$http_host/;
            proxy_cookie_domain old.api.com $http_host;
            proxy_pass https://old_api;
        }
    }
}
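That said, if you do want to keep the fallback inside the PHP catch block, the part that usually breaks multi-file uploads is that the files from $_FILES have to be re-attached to the outgoing request. Here is a rough curl sketch; the internal old-API hostname and the flat handling of $_POST are assumptions for illustration:

<?php
// Sketch: forward the current request, including uploaded files, to the old API.
// 'http://old-api.internal' is a made-up internal hostname.
function forward_to_old_api() {
    $url = 'http://old-api.internal' . $_SERVER['REQUEST_URI'];

    // Re-attach each uploaded file so curl rebuilds the multipart/form-data body.
    $fields = $_POST;
    foreach ($_FILES as $name => $file) {
        $fields[$name] = new CURLFile($file['tmp_name'], $file['type'], $file['name']);
    }

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_CUSTOMREQUEST, $_SERVER['REQUEST_METHOD']);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    if ($fields) {
        curl_setopt($ch, CURLOPT_POSTFIELDS, $fields);   // multipart because of the CURLFile objects
    } elseif (($raw = file_get_contents('php://input')) !== '') {
        curl_setopt($ch, CURLOPT_POSTFIELDS, $raw);      // e.g. a raw JSON body
    }

    $body = curl_exec($ch);
    http_response_code(curl_getinfo($ch, CURLINFO_HTTP_CODE));
    curl_close($ch);
    echo $body;
}

Nested arrays in $_POST would still need flattening before being handed to CURLOPT_POSTFIELDS, which is one more reason the nginx fallback above is the simpler option.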
I downloaded ngrok so I can test my site for http and https requests (if someone tries to reach a specific URL on my site over plain http, I want to deny it).
First, my localhost is running on port 8080.
I start ngrok and it gives me an http and an https forwarding URL, both on the same port. That's a problem, I think, because if I do a simple route configuration like this in Laravel:
Route::filter('force.ssl', function()
{
    if ( ! Request::secure())
    {
        return 'unsecured';
    }
});
and I have this route:
Route::get('survey/payment/secured', array('before' => 'force.ssl', function() {
    return 'secured!';
}));
and I make the following request:
https://75fdaa96.ngrok.com/survey/payment/secured
it thinks it is unsecured and returns 'unsecured'. How can I fix this?
Request::secure() relies on $_SERVER['HTTPS']. Since HTTPS is terminated by the proxy (ngrok), not by your web server, Laravel doesn't know the request was served over HTTPS.
ngrok does pass the X-Forwarded-Proto header, but Laravel doesn't trust it by default. You can use the trusted proxy middleware to trust it.
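For example, in Laravel versions that bundle the TrustProxies middleware (the namespace below is the Laravel 9+ one; older releases used the fideloper/proxy package), the configuration might look roughly like this. Trusting every proxy with '*' is a shortcut that is fine for local ngrok testing but should be narrowed in production:

<?php
// app/Http/Middleware/TrustProxies.php -- sketch for Laravel versions that ship this middleware.
namespace App\Http\Middleware;

use Illuminate\Http\Middleware\TrustProxies as Middleware;
use Illuminate\Http\Request;

class TrustProxies extends Middleware
{
    // Trust all proxies so the X-Forwarded-Proto header set by ngrok is honoured.
    protected $proxies = '*';

    // Read the standard X-Forwarded-* headers, including X-Forwarded-Proto.
    protected $headers =
        Request::HEADER_X_FORWARDED_FOR |
        Request::HEADER_X_FORWARDED_HOST |
        Request::HEADER_X_FORWARDED_PORT |
        Request::HEADER_X_FORWARDED_PROTO;
}

With the proxy trusted, Request::secure() sees the forwarded protocol and the force.ssl filter returns 'secured!' through the https ngrok URL.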
I want to implement a simple WebSocket server using PHP and nginx, and am under the following impression:
Once a WebSocket has gone through the "Protocol Switch" described in RFC 6455, it can communicate with any standard (non-"Web") socket server.
According to the nginx manual, nginx can perform the aforementioned "Protocol Switch".
With this in mind, I tried the following simple implementation consisting of a JavaScript client, an nginx configuration, and a PHP server (see below).
Results:
The PHP server receives the WebSocket HTTP headers containing Connection: Upgrade, Sec-WebSocket-Key: *** and similar fields. I assume this is good.
However, the onopen event is never triggered on the client, and this is where I'm stuck.
Questions:
Have I misunderstood some details, or maybe the entire concept of how this works?
How can I make my code examples work?
The JavaScript client:
function socket() {
    var ws = new WebSocket("ws://socket/hello/");
    ws.onopen = function() {
        ws.send("Hello, World!");
    };
    ws.onmessage = function(event) {
        console.log(event.data);
    };
}
socket();
The nginx configuration:
server {
    location /hello/ {
        proxy_pass http://localhost:10000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    server_name socket;
    root socket;
}
The PHP server:
(Implemented exactly as described in Example #1 in the PHP manual, only changing $address to 127.0.0.1.)
It can, but only if the socket server implements the WebSocket protocol. If you are asking whether it can be used as a generic TCP or UDP socket server, it cannot. Full stop.
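To make that concrete: the plain socket server from the PHP manual accepts the TCP connection but never answers the RFC 6455 handshake, so the browser's onopen never fires. A minimal sketch of the missing step, assuming $client is a socket resource returned by socket_accept() (message framing, which you would also have to implement, is not shown):

<?php
// Sketch: answer the WebSocket opening handshake on an accepted socket.
function complete_handshake($client)
{
    $request = socket_read($client, 4096);

    // Pull the client's Sec-WebSocket-Key out of the HTTP Upgrade request.
    if (!preg_match('/Sec-WebSocket-Key:\s*(.+)\r\n/i', $request, $m)) {
        socket_close($client);
        return;
    }

    // RFC 6455: accept key = base64(sha1(key . fixed GUID)).
    $accept = base64_encode(sha1(trim($m[1]) . '258EAFA5-E914-47DA-95CA-C5AB0DC85B11', true));

    $response = "HTTP/1.1 101 Switching Protocols\r\n"
              . "Upgrade: websocket\r\n"
              . "Connection: Upgrade\r\n"
              . "Sec-WebSocket-Accept: {$accept}\r\n\r\n";

    socket_write($client, $response, strlen($response));
    // onopen fires in the browser once this response arrives; every message
    // after this point must be sent/received as WebSocket frames, not raw text.
}

Even with the handshake in place you still have to frame and unmask every message yourself, which is why the practical answer is usually a library such as Ratchet rather than the raw socket example.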
Is there a way to directly connect to Redis using client side (not Node.js) javascript?
I'm already using Node.js + PHP + Redis + Socket.io (for the client) successfully for a few projects. However, I really think this could be simplified further to something like PHP + Redis + browser JavaScript, taking out the Node.js server, which is just another server I'd rather not run if it isn't necessary. For simple things, I think it would be better to just connect directly to Redis using JavaScript.
From what I understand, Redis just serves its requests through a port, so any language that can make requests to that port should work. In theory, couldn't you just hit the Redis server's port using client-side JavaScript?
I'm mostly interested in the publish/subscribe functions, which may or may not be possible.
I'm not sure if you can access a port other than 80 using AJAX, but you technically should be able to forward Redis' port to port 80 using an Nginx reverse proxy or something.
Any ideas? Just a thought. I'm very happy with my current solution, but it doesn't hurt to wonder if we could do this even better or more efficiently.
You can only make HTTP and WebSocket requests with client-side JavaScript. However, you should look into Webdis. It adds an easy HTTP/JSON layer to Redis and should do exactly what you want.
The real obstacle is overcoming the non-port-80/443 limitation for AJAX requests in the browser. Even with the Webdis solution, it runs on port 7379 by default, and it would conflict with your Apache or Nginx process if you ran it on port 80.
My advice would be to use nginx's proxy_pass to point at the Webdis process. You can route the traffic through port 80 and perform AJAX requests without the annoying security issues.
Below is a sample NGINX configuration that seems to do the trick for me.
upstream WebdisServerPool {
    server 127.0.0.1:7379;   # webdis server 1
    server 192.168.1.1:7379; # webdis server 2
}

server {
    listen 80;
    root /path/to/my/php/code/;
    index index.php;
    server_name yourServerName.com;

    location ~* \.(ico|css|js|gif|jpe?g|png)(\?[0-9]+)?$ {
        expires max;
        log_not_found off;
    }

    location / {
        # Check if a file exists, or route it to index.php.
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /path/to/my/php/code/$fastcgi_script_name;
    }

    location /redis {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        rewrite /(.*)/(.*)/(.*)$ /$2/$3 break; # strip the /redis prefix
        proxy_redirect off;
        proxy_pass http://WebdisServerPool; # must match the upstream name above
    }
}
On the front-end side, here is an example of getting all the keys. All Redis requests would go through /redis, for example:
$.ajax({
    url: "/redis/KEYS/*",
    method: 'GET',
    dataType: 'json',
    success: function(data) {
        $.each(data.KEYS, function(key, value) {
            $('body').append(key + " => " + value + " <br> ");
        });
    }
});
Or you could use http://wiki.nginx.org/HttpRedis and parse the response yourself.
I have found that the direct Redis HTTP interfaces don't work very well with pub/sub, or are difficult to set up (at the time of writing).
Here is my "workaround" for pub/sub based on the predis examples.
http://bradleygoldsmith.tumblr.com/post/35601539836/quick-and-dirty-redis-subscribe-publish-notifications
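In case that link goes away, the underlying idea can be sketched with a Predis 1.x-style pub/sub loop; the channel name and payload below are made up for illustration:

<?php
// Sketch of Predis-based publish/subscribe.
require __DIR__ . '/vendor/autoload.php';

// Publisher side -- e.g. called from a normal PHP web request:
$redis = new Predis\Client();
$redis->publish('notifications', json_encode(array('title' => 'hello')));

// Subscriber side -- a long-running CLI script, not a web request:
$subscriber = new Predis\Client(array('read_write_timeout' => 0)); // don't time out while idle
$pubsub = $subscriber->pubSubLoop();
$pubsub->subscribe('notifications');
foreach ($pubsub as $message) {
    if ($message->kind === 'message') {
        // Hand the payload to whatever pushes it on to the browser.
        echo $message->channel, ': ', $message->payload, PHP_EOL;
    }
}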
I have a bunch of predefined Redis accessors in PHP, and I use a 'router'-style function to call them from the client via $.post requests with jQuery. The router is just a big switch:
public function router() {
    $response = array();
    switch ($_POST['method']) {
        case 'get_whole_list': // a convenience function with arg $list_key
            if ($_POST['list_key']) { // which will be provided by the POST request data
                $response = $this->get_whole_list($_POST['list_key']);
            } else {
                $response = array('error' => 'must be passed with post key "list_key"');
            }
            break;
        // ...and so on, until it's time to send the response:
    }
    return json_encode(array('response' => $response));
}
and then you just echo $myClass->router()
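The endpoint file that those $.post calls hit can be as small as this sketch (the file and class names are made up):

<?php
// ajax-router.php -- hypothetical entry point for the jQuery $.post requests.
require __DIR__ . '/RedisAccessors.php'; // hypothetical class containing router() and the accessors

header('Content-Type: application/json');
$myClass = new RedisAccessors();
echo $myClass->router();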
I access it with jQuery like:
redgets.get_whole_list = function(key, callback) {
    $.post(redgets.router,            // points to my php file
        {method: 'get_whole_list',    // tells it what to do
         list_key: key},              // provides the required args
        function(data) {
            callback($.parseJSON(data).response); // parses the response
        });
};
This all works fine; maybe it's not ideal, but it does make a Node.js server redundant.
I am surprised that nobody has already made a general-purpose Redis interface in this style.