PHP session to be shared with Node.js using FileSystem?

There are several methods to share PHP sessions with Node.js.
One method is saving the PHP session in a NoSQL database such as Redis and accessing it through Node.js.
Another popular method is using a memcached server.
Both of the mentioned methods require:
1) Running another server.
2) Altering the default PHP session handler.
Why shouldn't I use the default PHP session handler and access the session files by reading the file content within Node.js using the 'fs' (FileSystem) core library?
What other reasons are there, besides speed, to not access and read the session files directly, assuming that no remote operations between servers need to be done?
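For context, this is roughly what a Node.js reader would be dealing with. A minimal PHP sketch of where the default 'files' handler puts things (assuming a readable session.save_path; the payload format is not JSON, so Node would need its own parser):

<?php
// Write something into a session, then release the lock.
session_start();
$_SESSION['user_id'] = 42;
session_write_close();

// The default "files" handler writes one file per session,
// named sess_<session_id>, inside session.save_path
// (often /tmp or /var/lib/php/sessions).
$file = session_save_path() . '/sess_' . session_id();

// The payload uses PHP's own session serialization format,
// e.g. "user_id|i:42;" -- this is what Node's fs would read.
echo file_get_contents($file);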

One huge advantage to both of the external session server options is that it becomes much easier to serve the PHP and Node apps from separate servers themselves. While it's possible to access another server's filesystem directly, as would be necessary using the Node fs library, it's much simpler and more scalable to externalize the sessions on a Redis server, for example, and not have to worry about the filesystem at all.
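For illustration, a hedged sketch of that externalization using the phpredis extension's session handler (assumes phpredis is installed; the host is a placeholder, and the same two settings could live in php.ini instead):

<?php
// Point PHP's session machinery at Redis instead of local files.
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://127.0.0.1:6379'); // placeholder host

session_start();
$_SESSION['user_id'] = 42; // now stored in Redis, reachable from any app server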
I also recommend reading The Twelve Factor App for more good practices in this vein.

This answer is very comprehensive: Performance of Redis vs Disk in caching application
Apart from that, consider that you could deploy your application and database on separate remote servers if you are using Redis or similar. This will be an advantage especially if you are considering containerizing your application.

Related

Handling $_SESSION object in a load-balanced setup, PHP

I have a setup where I have a host which routes multiple requests in a load-balanced fashion. My backend uses PHP. Now, I need to use the $_SESSION object for some of my processing.
Will $_SESSION work where I have 3 backend servers which can receive any request at any time?
If not, can anyone suggest alternatives to handle such cases?
EDIT: I do understand that we can store sessions in a database and find a way to track them. But the problem in a real-time load-balanced production scenario is the number of calls that go to a DB. That can be a real bummer for my performance. I'm kind of hoping that we can handle this at the webserver level.
Not sure if it is possible, but if two webservers had some kind of replication mechanism like databases do, it would be brilliant. I wouldn't have to do a thing.
If such a thing does not exist, PHP should be modified to support it. That would actually make it a seriously robust language.
My suggestion is to set up PHP to handle the sessions in the database (this way all servers can access the session data independent of which server is requesting it).
A good tutorial for that can be found HERE
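As a rough sketch of what such a database-backed handler could look like (PHP 8 syntax; the sessions table and its columns are assumptions, not a fixed schema):

<?php
// Hypothetical table: sessions(id VARCHAR PRIMARY KEY, data TEXT, updated_at INT)
class PdoSessionHandler implements SessionHandlerInterface
{
    public function __construct(private PDO $pdo) {}

    public function open(string $path, string $name): bool { return true; }
    public function close(): bool { return true; }

    public function read(string $id): string|false
    {
        $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute([$id]);
        return (string) $stmt->fetchColumn(); // '' when the session is new
    }

    public function write(string $id, string $data): bool
    {
        // REPLACE INTO is MySQL-flavoured; use an upsert on other databases.
        $stmt = $this->pdo->prepare('REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, ?)');
        return $stmt->execute([$id, $data, time()]);
    }

    public function destroy(string $id): bool
    {
        return $this->pdo->prepare('DELETE FROM sessions WHERE id = ?')->execute([$id]);
    }

    public function gc(int $max_lifetime): int|false
    {
        $stmt = $this->pdo->prepare('DELETE FROM sessions WHERE updated_at < ?');
        $stmt->execute([time() - $max_lifetime]);
        return $stmt->rowCount();
    }
}

$pdo = new PDO('mysql:host=db;dbname=app', 'user', 'pass'); // placeholder DSN
session_set_save_handler(new PdoSessionHandler($pdo), true);
session_start(); // all backends now share one session store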
Look into either memcache (Is it recommended to store PHP Sessions in MemCache?) or REDIS (https://joshtronic.com/2013/06/20/redis-as-a-php-session-handler/).
There is a good tutorial on setting up memcache on Ubuntu at https://www.globo.tech/learning-center/php-memcached-instances-ubuntu-16/, which also covers using haproxy as a load balancer (although you may already have a solution).
Perhaps have a read of https://blog.newtonhq.com/session-handling-for-1-million-requests-per-hour-68cdece15030.
You can store sessions in memcache in order to share sessions among servers.
Please have a look at the documentation here
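A minimal configuration sketch for the memcache route (assumes the memcached PECL extension on every web server; the host is a placeholder):

<?php
// Every backend points at the same shared memcached instance,
// so it no longer matters which server handles a given request.
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', '10.0.0.5:11211'); // placeholder cache host

session_start();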

Memcached and PHP Sessions in multiple servers

I would like to know how memcached manages the cache for PHP sessions. I want to design a PHP app that scales out, where each HTTP/PHP server includes a memcached layer (for DB, app cache and session caching). But if memcached doesn't replicate the data, a user who comes to webserver1 won't see the same session on webserver2.
Do memcached1 and memcached2 need to be replicated to handle PHP sessions?
Thanks in advance.
Regards.
While I agree there is no clear question here, we could try to help the OP understand how memcache works.
When you use memcache, which is an in-memory cache, how you set it up depends on your current infrastructure.
For instance, if you only have one web server you could install memcache on that same machine, along with the database layer. This works for increasing performance of the site because the site can get data from memcache (in memory) rather than from the database (on disk, and slower to read). Using it in this manner is good, but as your site requires better performance or scalability you would probably start up a cluster of web servers behind a load balancer.
This is when things can get a bit tricky. You have all these machines and you are thinking that you need memcache on every machine, so how do we replicate these instances? The simple answer is: you don't. If you have multiple web servers, the best method is to put memcache on its own server (or cluster behind a load balancer); this way every web server is hitting the same IP address for the memcache server(s).
You do not need to worry about keeping anything in sync, because the way memcache works is that it hashes each key to determine which server the key has been assigned to (when you have a cluster of memcache servers).
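A short sketch of that client-side hashing with the PHP memcached extension (hostnames are placeholders):

<?php
$m = new Memcached();
$m->addServers([
    ['memcached1.internal', 11211], // placeholder hosts
    ['memcached2.internal', 11211],
]);
// Consistent hashing keeps the key->server mapping stable
// when servers are added or removed.
$m->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);

// The client hashes the key and always talks to the same node for it;
// no replication between memcached1 and memcached2 is involved.
$m->set('session:abc123', 'payload');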
Based on this question it would appear that you would need to do one of the following:
1.) Read up on system architecture
2.) Hire someone to architect your systems layer.
My best suggestion would be to use a single server for your memcache instance and set the web servers to use that for memcache rather than trying to run memcache on each of the web servers.
Joseph.
I understand your point; I have already tested the architecture with a separate memcached server (and Redis too). My intention is to "pack" the application server into a unit (Docker) and then measure the load parameters to deploy new instances and scale out the infrastructure.
I found this.
https://www.digitalocean.com/community/tutorials/how-to-share-php-sessions-on-multiple-memcached-servers-on-ubuntu-14-04
Thanks for your reply!
Regards.

Is there any way to save PHP sessions in RAM?

I have PHP with nginx. I want to make PHP save its sessions in RAM for security reasons. Is there any way of doing it?
If it's impossible, is there any advice for making PHP sessions unrecoverable from the hard disk after the server is shut down?
After a lot of searching I've found the shared memory module of PHP, which can be used like a persistent memory cache across sessions. Is it shared with other applications too, and how secure is it?
I would use memcached to store session data in RAM. If you are already using a database you might simply use a memory storage engine. However, I don't get what security reasons you have in mind. If you are concerned that somebody is able to access your session data, then make sure they are not able to do so, regardless of where the sessions are stored; otherwise the security is completely broken.
Update
You said that the client and the server are running on the same physical machine. I can imagine a kiosk application.
As general advice, the client needs to run as a different user. This is possible in Windows too. Then make sure that the client has limited system access and is not able to access the secret data. That's it.
You might also consider separating server and client using virtual machines.
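Coming back to the storage question: sessions can be kept in RAM without any extra service by pointing the default files handler at a tmpfs mount. A hedged sketch (the path is an assumption; /dev/shm is shared, so a permission-restricted subdirectory is important):

<?php
// tmpfs lives in RAM, so session files disappear on shutdown/power loss.
$dir = '/dev/shm/php-sessions'; // placeholder path
if (!is_dir($dir)) {
    mkdir($dir, 0700, true); // keep it private to the PHP user
}
ini_set('session.save_path', $dir);
session_start();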

How do I share objects between multiple get requests in PHP?

I created a small and very simple REST-based webservice with PHP.
This service gets data from a different server and returns the result. It's more of a proxy than a full service.
Client --(REST call)--> PHP Webservice --(relay call)--> Remote server
Client <--(return data)-- PHP Webservice <--(return data)-- Remote server
In order to keep costs as low as possible I want to implement a caching table on the PHP webservice system by maintaining data for a period of time in server memory and only re-request the data after a timeout (let's say after 30 mins).
In pseudo-code I basically want to do this:
$id = $_GET["id"];
$result = null;

if (isInCache($id) && !cacheExpired($id, 30)) {
    $result = getFromCache($id);
} else {
    $result = getDataFromRemoteServer($id);
    saveToCache($result);
}

printData($result);
The code above should get data from a remote server which is identified by an id. If it is in the cache and 30 mins have not passed yet the data should be read from the cache and returned as a result of the webservice call. If not, the remote server should be queried.
While thinking on how to do this I realized 2 important aspects:
I don't want to use filesystem I/O operations because of performance concerns. Instead, I want to keep the cache in memory. So, no MySQL or local file operations.
I can't use sessions because the cached data must be shared across different users, browsers and internet connections worldwide.
So, if I could somehow share objects in memory between multiple GET requests, I would be able to implement this caching system pretty easily I think.
But how could I do that?
Edit: I forgot to mention that I cannot install any modules on that PHP server. It's a pure "webhosting-only" service.
I would not implement the cache on the (PHP) application level. REST is HTTP, therefore you should use a caching HTTP proxy between the internet and the web server. Both servers, the web server and the proxy, could live on the same machine until the application grows (if you worry about costs).
I see two fundamental problems when it comes to application- or server-level caching:
using memcached would lead to a situation where a user session is bound to the physical server where the memcache exists. This makes horizontal scaling a lot more complicated (and expensive)
software should be developed in layers, and caching should not be part of the application layer (and/or business logic). It is a different layer using specialized components. And as there are well-known solutions for this (an HTTP caching proxy), they should be used in favour of self-crafted solutions.
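In that layered setup, the PHP service only has to declare cacheability and let the proxy (Varnish, nginx proxy_cache, etc.) do the rest. A hedged sketch, reusing the question's 30-minute window and its hypothetical getDataFromRemoteServer():

<?php
// Tell any upstream HTTP cache it may store this response for 30 minutes.
header('Cache-Control: public, max-age=1800');

$id = $_GET['id'] ?? '';
echo getDataFromRemoteServer($id); // hypothetical relay call from the question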
Well, if you do have to use PHP, and you cannot modify the server, and you do want in-memory caching for performance reasons (without first measuring that any other solution has good enough performance), then the solution for you must be to change the webhosting.
Otherwise, you won't be able to do it. PHP does not really have any memory-sharing facilities available. The usual approach is to use Memcached or Redis or something else that runs separately.
And for a starter and proof-of-concept, I'd really go with a file-based cache. Accessing a file instead of requesting a remote resource is WAY faster. In fact, you'd probably not notice the difference between file cache and memory cache.
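A sketch of that file-based proof of concept, reusing the question's 30-minute TTL (the cache path and helper are assumptions, not an existing API):

<?php
function getCached(string $id, int $ttl = 1800): string
{
    $file = sys_get_temp_dir() . '/proxycache_' . md5($id);

    // Serve from disk while the cached copy is younger than the TTL.
    if (is_file($file) && time() - filemtime($file) < $ttl) {
        return file_get_contents($file);
    }

    $result = getDataFromRemoteServer($id); // hypothetical, from the question
    file_put_contents($file, $result, LOCK_EX);
    return $result;
}

echo getCached($_GET['id'] ?? '');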

redis vs native sessions

I am using sessions in PHP to track if a user is logged in. I do not use it to store any other data about the user; essentially it is like checking a hash table to see if the user has authenticated.
Would there be some advantage to using redis instead of native PHP sessions?
I'm curious about performance, scalability, and security (not really concerned with code complexity).
Using something like Redis for storing sessions is a great way to get more performance out of load balanced servers.
For example on Amazon Web Services, the load balancers have what's called 'sticky sessions'. What this means is that when a user first connects to your web app, e.g. when logging in to it, the load balancer will choose one of your app servers and this user will continue to be served from this server until they exit your application. This is because the sessions used by PHP, for example, will be stored on the app server that they first start using.
Now, if you use Redis on a separate server, and then configure PHP on each of your app servers to store its sessions in Redis, you can turn 'sticky sessions' off. This means that any of your servers can access the sessions and, therefore, the user can be served by a different server with every request to your app. This ultimately makes for more efficient use of your load-balancing setup.
You want the session save handler to be fast. This is due to the fact that a PHP session will block all other concurrent requests from the same user until the first request is finished.
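A small sketch of that blocking behaviour and the usual mitigation: read what you need, then release the lock with session_write_close() before doing slow work.

<?php
session_start();                        // acquires the per-session lock
$userId = $_SESSION['user_id'] ?? null;

// Done touching $_SESSION: release the lock so the same user's
// other concurrent requests are no longer queued behind this one.
session_write_close();

// ...slow remote calls, report generation, etc. happen here...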
There are a variety of handlers you could use for PHP sessions across multiple servers: File w/ NFS, MySQL Database, Memcache, and Redis.
The database method (using InnoDB) was the slowest in my experience, followed by File w/ NFS. Locking and write contention are the main factors. Memcache and Redis provide similar performance and are by far the better alternatives since all operations are in RAM. Redis is my choice because you can enable disk persistence, whereas Memcache is memory-only.
I explain Redis Sessions in PHP with Kohana if you want more detail.
I don't really think you need to worry much about sessions unless you get MASSIVE amounts of traffic. PHP handles sessions nicely, and if you store only that little data it should be fine even with a lot of requests; performance should be close, even though Redis is not native to PHP.
With 10k users, if each user uses about 1 KB of session data, that's only about 10 MB in total, which is not much; PHP is smart enough to use a good data structure to hold, and quickly read and write, those values. The problem is when the session data gets too big, or when for some reason the server consumes too many resources reading it, but that is normally only when the data is too big.
