How to call a microservice without slowing down the response? (PHP)

I want to integrate a new functionality into a Laravel-based ecommerce solution. At this point the main script takes around 2.7s to run, and the whole site loads in over 6s; we've only just started to monitor it. The goal is to get below 2s for the script and 4s for everything.
The new functionality lives in a microservice exposed through gRPC.
There is TLS-based client-server authentication in place (the ecommerce instances and my service can prove who they are to each other). This adds a few milliseconds.
When testing a Go client against the Go server with a pool of 20 connections, it achieved below 35ms per request.
In PHP, each request takes over 200ms.
Is it possible to:
cache the connection to the service between requests?
call RPC methods asynchronously?
Among other solutions I'm considering:
Setting up a local gRPC proxy that accepts only localhost GET requests made by the PHP script and turns them into secure gRPC calls.
Setting up a proxy in front of the PHP application to call the microservice.
Calling the service directly from the website with JavaScript (puts a burden on the user's browser, and means maintaining JavaScript).
Any suggestions?

The connection should be re-used if you are using the same client. Alternatively, there is an option to pre-create a Grpc\Channel object first and then pass it to your service client as an optional 3rd parameter: https://github.com/grpc/grpc/blob/master/src/php/lib/Grpc/BaseStub.php#L58. That way you should be able to re-use the same connection across services.
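For illustration, a minimal sketch of the channel re-use approach, assuming mutual TLS as described in the question; the service client classes, method, host, and certificate paths are hypothetical stand-ins for your own generated stubs:

<?php
// Sketch: pre-create one Grpc\Channel and share it across service clients.
// MyService\ThingsClient / OtherClient are hypothetical stubs generated
// from your .proto; host and cert paths are placeholders.
require 'vendor/autoload.php';

$host  = 'my-service.internal:50051';
$creds = Grpc\ChannelCredentials::createSsl(
    file_get_contents('ca.pem'),     // trusted CA
    file_get_contents('client.key'), // client key (mutual TLS)
    file_get_contents('client.pem')  // client cert chain
);

$channel = new Grpc\Channel($host, ['credentials' => $creds]);

// Passing the channel as the optional 3rd argument means both stubs
// multiplex over the same underlying HTTP/2 connection.
$things = new MyService\ThingsClient($host, ['credentials' => $creds], $channel);
$other  = new MyService\OtherClient($host, ['credentials' => $creds], $channel);

// Unary call; wait() blocks until the reply and status arrive.
list($reply, $status) = $things->GetThing(new MyService\GetThingRequest())->wait();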
Currently we don't provide an async API for PHP. We do have a tracking issue, https://github.com/grpc/grpc/issues/6654, which we may revisit in the future.

Related

How to talk between HTTP request & cli class objects

I have an application running that listens for HTTP requests. Each request is passed to a single page where a framework object $app is instantiated, and this takes care of routing / controllers / models etc.
Now I have another class whose object is instantiated via a CLI script; let's call it $cliApp. The problem is how do I make the two objects talk to each other. $app is instantiated every time there is a new request.
But $cliApp is instantiated only once, when the script is run. The script runs in a loop via a $loop object from the ReactPHP event loop.
The CLI app is running websockets. So basically I want HTTP and the sockets to communicate via an HTTP API.
P.S.:
Right now I have one solution: use message queueing, e.g. ZeroMQ, but that seems overkill since I'm not looking to scale and want to keep it simple.
Another solution I'm currently trying, which feels right, is to share a SptStorageObject between the threads created by the $http requests and the thread created by the $cli script. Maybe this is a question of dependency injection, and I'm having trouble sharing this $store object.
If I understand correctly, you have (assumptions noted):
a normal PHP web app that communicates over HTTP (presumably on Apache or similar webserver)
a long-running PHP cli app that communicates over websockets.
Presumably both apps are receiving communication from web clients on an ongoing basis. Presumably they also have their own persistent data stores, such as a MySQL database or similar, perhaps even sharing the same one.
I'm going to assume that what you need goes beyond each application accessing the most up-to-date data from the persistent data store (or that the two processes use separate data stores), and you actually need on-demand communication between the two processes.
You're on the right path with message queues, but as you note it's needless complexity to add a third dedicated inter-process communication layer when you've already got two communication layers that work perfectly fine on their own.
What you need is for your cli app to speak HTTP when it needs to initiate communication with your web app, and for your web app to speak web sockets when it needs to initiate communication with your cli app.
What this looks like in practice is fairly simple.
In your cli app, just use cURL to initiate an HTTP connection to your web app (a minimal sketch follows the list below). This is fairly simple; there are endless resources out on the web to help you along the way, and if you get stuck, coming here with a new question specific to your problem will get you going. All this requires in your web app is the following:
appropriate endpoint(s) which the cli app can send requests to, if the basic client facing pages won't suffice
some method to authenticate the cli app if it needs to access data that should not be available to web clients
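For illustration, here is a minimal sketch of the cli-to-web-app call; the /internal/notify endpoint, payload, and shared-secret header are all hypothetical:

<?php
// Sketch: the long-running cli app notifying the web app over HTTP.
// Endpoint, payload, and auth header are placeholders.
$ch = curl_init('http://localhost/internal/notify');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => json_encode(['event' => 'user.updated', 'id' => 42]),
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'X-Auth-Token: ' . getenv('INTERNAL_API_TOKEN'), // simple shared secret
    ],
]);
$response = curl_exec($ch);
curl_close($ch);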
For your web app to initiate a websocket connection to the cli app, it's a bit more complicated, because I'm not aware of any native PHP functionality that specifically targets the websocket protocol. However, I did find this (extremely permissive) github project that purports to give you the ability to set up a websocket server, and it also includes a client script that you could use to connect and send/receive data while your web app process lives, and then shut the connection down when you're done. It appears to still have some minimal activity; you could use it directly or as a starting point to write your own websocket client.
In this case, just as in the reverse situation, you need the cli client to recognize and authenticate traffic from your web client so it can serve appropriate data just to it.
If for some reason this scenario won't work for you, then you're back to message queues or shared data stores (someone suggested redis, which can act as a hybrid data store/message queue under some circumstances).

What is the fastest way to communicate between PHP and node.js on the same machine?

I have a Laravel application that is using a node.js (express) server to server-side render React pages. Both processes are on the same machine.
Currently Laravel sends a POST request to the local node server with some initial data (in JSON) and receives the rendered HTML string from it.
The POST request currently takes 100ms to 200ms, while the rendering itself is performed in an average of 20ms. What are the other 80-180ms being spent on?
I have tried using DNode for RPC, but the latency is comparable.
You've not provided any details of what this is actually running on - what operating system? What is the load on the system?
You've also not said how you measured the request and "rendering" times.
Unless the delays are occurring in the node.js HTTP layer or in some abstraction layer on the PHP side, switching to a different protocol is not going to help. I think it's safe to discount the former: node.js is reasonably fast at serving HTTP most of the time.
In the absence of further information, I'd be running a packet capture between the nodes to see which side the latency is occurring on.
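If you want to see where the PHP side spends its time, cURL's phase timings from curl_getinfo() will split the round trip into DNS, connect, and wait-for-first-byte components. A quick sketch, assuming the Laravel-to-node POST is made with cURL; the URL, port, and payload are placeholders:

<?php
// Sketch: break the 100-200ms down into cURL's transfer phases.
// URL and payload are placeholders for the real render request.
$ch = curl_init('http://127.0.0.1:3000/render');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => json_encode(['component' => 'Home']),
    CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
]);
curl_exec($ch);
printf(
    "dns: %.1fms  connect: %.1fms  first byte: %.1fms  total: %.1fms\n",
    curl_getinfo($ch, CURLINFO_NAMELOOKUP_TIME)    * 1000,
    curl_getinfo($ch, CURLINFO_CONNECT_TIME)       * 1000,
    curl_getinfo($ch, CURLINFO_STARTTRANSFER_TIME) * 1000,
    curl_getinfo($ch, CURLINFO_TOTAL_TIME)         * 1000
);
curl_close($ch);

If first byte dominates while connect is near zero, the time is inside node or its HTTP layer; if connect or DNS dominates, it's connection setup, which keep-alive on a reused handle would address.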

Websocket with endless loop cycling: is it really better than AJAX?

I am trying to understand websockets.
I have seen 2 examples, here in the docs and also here.
Both examples use an endless loop, cycling and listening for when a new client connects, when they do something interesting, and when they disconnect.
My question is: is using websockets (with an endless loop cycling) better than an AJAX solution with HTTP requests every x interval?
AJAX and WebSockets are vastly different. Asking if one is better than the other is like asking if a screwdriver is better than a hammer.
WebSockets are used for real time, interactive communication. Both sides of a WebSocket connection can send data and it will be received within milliseconds by the other end. The connection stays open, reducing latency due to connection negotiation.
However, it only sort of plays nicely with HTTP. That is, it plays nicely with proxies that are WebSocket aware, and with firewalls. WebSocket traffic is most definitely not HTTP traffic, except for the client's first packet, which requests switching from HTTP to the WebSocket protocol.
AJAX, on the other hand, is pure HTTP. The only difference between AJAX and a standard web request is that an AJAX request is initiated by client side scripts and the response is available to that same script rather than reloading the page.
In both AJAX and WebSockets, the client scripts can receive data and use it within that same script. That's where the similarities end.
WebSockets set up a permanent connection and both sides can send data at any time, or sit quietly at any time. With AJAX, the client makes a request and the server responds.
For instance, if you were to set up a new message notification system, if you were using WebSockets, then as soon as a new message is available, the server sends it straight to the browser. If there are no new messages, the server stays quiet. If you were using AJAX, the client would periodically send a request to the server, which would always respond, either saying there were no new messages, or delivering the notifications that are pending. There is no way for the server to initiate things on its end, it must wait for the AJAX request.
Server side, things diverge from the traditional PHP web development paradigms. A typical WebSocket server will be a stand-alone CLI application running as a daemon. (If that last sentence doesn't make sense, please spend some time really understanding how to administer a server.)
This means that multiple clients will be connecting to the same script, and superglobal variables like $_GET and $_SESSION will be absolutely meaningless. It seems easy to conceptualize in a small use case, but remember that you will most likely want to get information from other parts of your site, which often means using libraries and frameworks that have absolutely no concept of accessing data outside of the HTTP request/response model.
Thus, unless you're prepared to rethink how your application's data moves over the network, and possibly re-implement things that your libraries automate, you'll usually want to stick with AJAX requests and periodic polling for standard web traffic.
As for the server's loop:
It is not a busy loop, it is an IO blocked loop.
If the server tries to read network data and none is available, the operating system will block (pause) the script and go off to do whatever else needs to be done. In my WS server, I block waiting for network traffic for at most 1 second at a time, before the script returns to check and see if anything else new happened that I should notify my clients of. Typically, this is barely a few milliseconds before the server goes right back to its IO blocked state waiting for new data on the wire. Some others have implemented my server using LibEv, which allows them to respond to events outside of the network IO without having to wait for the block to timeout.
This is the way nearly every server does things. This is why you can have Apache actively listening and serving web traffic without every server that runs Apache being pegged at 100% CPU usage even when there is no traffic.
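To make the IO-blocked loop concrete, here is a minimal sketch of the pattern in PHP using stream_select() with the 1-second timeout described above; it's a bare TCP loop, with the websocket handshake, frame decoding, and client cleanup omitted:

<?php
// Sketch: an IO-blocked server loop. stream_select() puts the process
// to sleep until a socket is readable or the 1-second timeout elapses,
// so idle time costs essentially no CPU.
$server  = stream_socket_server('tcp://0.0.0.0:8090', $errno, $errstr);
$clients = [];

while (true) {
    $read  = array_merge([$server], $clients);
    $write = $except = null;

    // Block for up to 1 second waiting for network activity.
    if (stream_select($read, $write, $except, 1) > 0) {
        foreach ($read as $sock) {
            if ($sock === $server) {
                $clients[] = stream_socket_accept($server); // new connection
            } else {
                $data = fread($sock, 4096);
                // ... decode frame, handle message, push to other clients
            }
        }
    }
    // Timeout path: check for non-network work, e.g. pending notifications.
}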
In closing, WebSockets is a wonderful technology, but web libraries and frameworks are simply not built to use them. Thus, unless you're working in a system where waiting 3 seconds for a full AJAX request is far, far too long, it's probably best to use AJAX. If you're writing a multiplayer interactive game or a chat system, then you've found a perfect use for WebSockets.
I do heartily encourage everyone to learn WebSockets... but it's not a magic bullet, and few parts of the web are designed in ways where people can get real use out of it.
Yes, sockets are better in many cases.
It's not a forever loop using 100% CPU; it's just a live loop, such as exists in every daemon application.
The synchronous accept operation is where we spend 99.99% of the time.
An AJAX heartbeat means more traffic, and more server CPU and memory.
I too am in the learning phase. I have built a php-based websocket server and have it communicating with web pages. Perhaps my 2c perspective is useful.
Getting the websocket server (wss) working using available sources as a starting point is not that difficult, but what to do with it all next is.
The wss runs in the CLI version of PHP. A late-model browser loads a normal http or https page containing a request to the wss, along with anything else that page needs to do, and a handshake occurs. Communication is then possible directly between browser and wss at the whim of either end. This is low overhead and hence fast and simple. Very cool. What is said over that link needs to be understood by both ends: subprotocol agreement. You may have to roll your own in PHP and in JavaScript. No more HTTP headers, URLs, etc.
The wss is a long-lived, stateful instance of PHP (very unlike Apache etc., which forget you upon sending the page). An entire app can be run in the wss instance, keeping state for itself and each connected client. It used to be said that PHP was too leaky for long life, but I don't hear that much any more. I believe you still have to be careful with memory, though.
However, being a single PHP instance, there is not the usual separation between client instances. For example, statics in classes are shared with every class instance and hence every client. So for a single-user-style app sharing data with a heap of clients, this is great. I can see that AJAX-type calls can be replaced in this way, but if the app still has to rebuild state to service each client, and then release it to save resources, that seems to lessen the advantage.
Going a step further and keeping truly stateful instances for clients seems like a possible next step. Replicating the traditional session-based system is one possibility; alternatively, fork new PHP interpreters and look after communication between parent and children via sockets or suchlike. But this would require resources per client that would be severely limiting for any non-trivial app.
Or perhaps it is possible to put the bulk of the app in the parent and let the children just do the very client-specific stuff. Or break the app design into small independent units that can communicate directly via sockets. Socket communication does seem to be catching on nowadays.
As Ghedpunk says in so many ways, the real world does not yet seem ready to realise the full potential of the websocket concept, but it can certainly replace AJAX. The added advantage of the server sending without being asked opens up new possibilities previously too difficult to consider.

Notifications via socket.io on php site

I am building a website in PHP that handles the sessions in Redis in JSON format.
This way the session can be accessed by both the PHP interpreter and a node.js server.
What I am trying to do now is to add notifications to said website. The procedure I was thinking of is the following (imagine a simple friend request, to keep it simple):
User A sends a friend request.
PHP uses cURL to tell the node.js service to send a notification.
User B gets a notification, because he is connected to node.js via socket.io.
What are the general guidelines to achieve this? It seems clear to me that if users A and B are on different servers, it will be impossible to scale horizontally.
Thanks in advance.
Sounds like you could make use of WebSockets here with a publish/subscribe architecture.
You get server-push functionality with websockets.
Node is a perfect choice for a websocket server: lots of small IO.
See http://en.wikipedia.org/wiki/Web_sockets
I wouldn't think the shared session is required for PHP-node communication; just have your clients push requests through the socket and handle the responses as needed.
I think the approach you propose sounds quite reasonable. However, instead of doing a direct request to the service, you could consider using a message queue (zeromq, rabbitmq, whatever) which would allow you to scale it more easily as you can easily add logic to the queue processing to pass the message to the correct node instance.
I managed to get 1400 concurrent connections with socket.io on a cheap VPS with no special configuration, so even with no tricks it should scale quite well. In my case most of these connections were also sending and receiving data almost constantly. It could probably have handled more than that; 1400-ish was simply the max number of users I happened to get.
Though I'd worry more about getting all those users first ;)
Use Redis's built in pub-sub capability. When the PHP server gets a request, it publishes it to a channel set up for that purpose. Each of your node servers subscribes to that channel and checks if the user involved is connected to it. If so, it sends the notification via socket.io. As a bonus, you avoid an extra network connection and your overall logic is simplified.
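A minimal sketch of the publish side, assuming the phpredis extension; the channel name and payload shape are illustrative, and each node instance would SUBSCRIBE to the same channel and relay via socket.io:

<?php
// Sketch: PHP publishes a notification event to Redis. Every node
// instance subscribed to 'notifications' receives it and forwards it
// to user B's socket.io connection if that user is connected there.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$redis->publish('notifications', json_encode([
    'to'   => 42,               // user B's id
    'type' => 'friend_request',
    'from' => 7,                // user A's id
]));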
Simply set up your database as required. Then, whenever an activity occurs, have your node.js server transfer the related information through Redis to PHP for processing, and send the response back from PHP to node via the channel. Keep checking for notifications from the table and display them.

Handling multiple outbound API calls in a PHP web application

I'm working on a PHP (Zend Framework) web application that, for each user request, makes multiple calls to external APIs (SOAP and/or REST over HTTP).
At the moment, the API calls are sequential:
Call API A, wait around 1 second for results
Call API B, wait around 1 second for results
Send page back to the user
In this instance there is no dependency or relation between APIs A and B; I simply want to return the page with all the information as quickly as possible.
At the moment I'm thinking of either:
curl_multi_exec() - http://php.net/manual/en/function.curl-multi-exec.php
ZeroMQ - http://www.zeromq.org/
curl_multi_exec() would bind my client code for APIs A and B more tightly than I'd like.
ZeroMQ seems more complex to implement, and I'm not sure how I'd manage the worker processes and sockets.
Has anyone successfully implemented this behaviour in a PHP/Apache application without too much fuss?
Sounds like you need a cache. Caches are pretty easy to build, and can be backed by the filesystem or any database extension.
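For the parallel-call half of the question, here is a minimal curl_multi sketch; the endpoint URLs are hypothetical placeholders for APIs A and B:

<?php
// Sketch: fire both API calls concurrently so total wall time is
// roughly max(A, B) instead of A + B. URLs are placeholders.
$urls = [
    'a' => 'https://api-a.example.com/data',
    'b' => 'https://api-b.example.com/data',
];

$mh      = curl_multi_init();
$handles = [];
foreach ($urls as $key => $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 3); // don't let one slow API stall the page
    curl_multi_add_handle($mh, $ch);
    $handles[$key] = $ch;
}

// Drive both transfers until completion, blocking between events.
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh);
    }
} while ($running && $status === CURLM_OK);

$results = [];
foreach ($handles as $key => $ch) {
    $results[$key] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);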
