PHP, Node.js and sessions

I have a classic Apache server delivering PHP pages, and a Node.js server (with socket.io, but without Express/Connect) used for real-time event management on that PHP website. I sometimes need to authenticate the clients connecting to the Node.js server, but this authentication is lost when the user reloads the page, because the reload also restarts the socket.io client (I store the socket ID on the server, and it changes with every refresh).
The question is: is there a way to keep the connection alive in socket.io, or a way to link the Apache PHP sessions to the Node.js server? Or maybe a way to keep this authentication using cookies (knowing that I must store sensitive data like user passwords and keys)?

You can use memcached as your session storage handler in PHP. Memcached is a simple key value store that can be accessed via TCP; there is a memcached module available for Node.js.
PHP stores the session in memcached by using the session id as the key. The session data (value) stored in memcached is a serialized PHP object, with a slight twist. You can read more about this unusual serialization at the SO question "Parse PHP Session in Javascript". Luckily though, there is already an NPM module out there: php-unserialize.
Now for the How-To.
Assumptions
memcached is accessible at 127.0.0.1:11211
php.ini (or php.d/memcache.ini) is configured with: session.save_handler='memcached' and session.save_path='tcp://127.0.0.1:11211' (note: the pecl memcache extension uses handler 'memcache' with the tcp:// prefix, while pecl memcached uses handler 'memcached' and a plain '127.0.0.1:11211' save path)
you have installed the required NPM modules (2): npm install memcached php-unserialize
you're ok with CLI
Prepare
First, just to get some test data to work with, save the following php script (s.php):
<?php
session_start();
$_SESSION['some'] = 'thing';
echo session_id()."\n";
print_r($_SESSION);
Execute it with php s.php; it should print the session id and the session contents to stdout:
74ibpvem1no6ssros60om3mlo5
Array
(
[some] => thing
)
Ok, now we know the session id (74ibpvem1no6ssros60om3mlo5), and have confirmed that the session data was set. To confirm it is in memcached, you can run memcached-tool 127.0.0.1:11211 dump, which provides a dump of known key:value pairs; for example, I have two in my test bed:
Dumping memcache contents
Number of buckets: 1
Number of items : 3
Dumping bucket 2 - 3 total items
add 74ibpvem1no6ssros60om3mlo5 0 1403169638 17
some|s:5:"thing";
add 01kims55ut0ukcko87ufh9dpv5 0 1403168854 17
some|s:5:"thing";
So far we have 1) created a session id in php, 2) stored session data from php in memcached, and 3) confirmed the data exists via CLI.
Retrieval with Node.js
This part is actually really easy. Most of the heavy-lifting has already been done by the NPM modules. I cooked up a little Node.js script that runs via CLI, but you get the picture:
var Memcached = require('memcached');
var PHPUnserialize = require('php-unserialize');
var mem = new Memcached('127.0.0.1:11211'); // connect to local memcached
var key = process.argv[2]; // get from CLI arg
console.log('fetching data with key:',key);
mem.get(key, function(err, data) { // fetch by key
    if (err) return console.error(err); // if there was an error
    if (data === false) return console.error('could not retrieve data'); // data is boolean false when the key does not exist
    console.log('raw data:', data); // show raw data
    var o = PHPUnserialize.unserializeSession(data); // decode session data
    console.log('parsed obj:', o); // show unserialized object
});
Assuming the above is saved as m.js, it can be run with node m.js 74ibpvem1no6ssros60om3mlo5 which will output something like:
fetching data with key: 74ibpvem1no6ssros60om3mlo5
raw data: some|s:5:"thing";
parsed obj: { some: 'thing' }
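To tie this back to the original question: the same lookup works inside a socket.io connection handler, where the PHPSESSID cookie sent during the handshake identifies the PHP session. A minimal sketch, assuming socket.io ~0.9/1.x (where the handshake headers are exposed on socket.handshake) and the default PHP session cookie name; the port and property names are illustrative:
var io = require('socket.io').listen(8081);
var Memcached = require('memcached');
var PHPUnserialize = require('php-unserialize');
var mem = new Memcached('127.0.0.1:11211');

io.sockets.on('connection', function (socket) {
    // naive cookie parse; assumes the default session.name of PHPSESSID
    var cookies = socket.handshake.headers.cookie || '';
    var match = cookies.match(/PHPSESSID=([^;]+)/);
    if (!match) return socket.disconnect();
    mem.get(match[1], function (err, data) {
        if (err || data === false) return socket.disconnect();
        // attach the PHP session to this socket; it survives page
        // reloads because the cookie, not the socket id, is the key
        socket.phpSession = PHPUnserialize.unserializeSession(data);
    });
});
Because the session lives in memcached and the cookie is re-sent on every reconnect, a page reload no longer loses the authentication.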
Warnings/Gotchas
One of my PHP applications stores some binary data in the session values (i.e. encrypted), though the keys and the normal session structure remain intact (as in the example above). In that case, memcached-tool <host:port> dump printed a malformed serialized session string to stdout; I thought this might be isolated to stdout, but I was wrong: PHPUnserialize.unserializeSession also had trouble parsing the data (delimited by |). I tried a few other session deserialization methods found on the net, without success. I assume memcached is maintaining the correct data internally, since everything works with the native PHP session save handler; so, at the time of this writing, I'm not sure whether the deserialization methods or the memcached NPM module are at fault. When sticking with non-binary data like ASCII or UTF-8, it should work as intended.

I think this link will be of some help to you
https://simplapi.wordpress.com/2012/04/11/php-nodejs-session-share-memcache/

Though the thread is old, I would like to recommend what I used for my project.
Instead of memcached you can also use Redis for session handling.
I used phpredis as the PHP Redis client. Instead of storing sessions in files, you save them in Redis. PHP still does the heavy lifting: it sends the session cookie with every response, reads the session id from every request, and validates it.
The settings required to save PHP sessions to Redis are also very simple:
session.save_handler = redis
session.save_path = "tcp://host1:6379?weight=1, tcp://host2:6379?weight=2&timeout=2.5, tcp://host3:6379?weight=2"
That's it. This makes PHP save sessions to Redis instead of files (note that existing file-based sessions are not migrated automatically).
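On the Node.js side the lookup is analogous to the memcached answer above. A minimal sketch, assuming the redis and php-unserialize NPM modules and phpredis' default session key prefix (PHPREDIS_SESSION:); the session id is illustrative and would normally come from the PHPSESSID cookie:
var redis = require('redis').createClient(6379, 'host1');
var PHPUnserialize = require('php-unserialize');

var sessionId = '74ibpvem1no6ssros60om3mlo5'; // e.g. read from the PHPSESSID cookie
// phpredis stores each session under PHPREDIS_SESSION:<session id> by default
redis.get('PHPREDIS_SESSION:' + sessionId, function (err, data) {
    if (err || data === null) return console.error('no such session');
    console.log(PHPUnserialize.unserializeSession(data));
});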

If your project stores sessions in a database (some do), then you can consider using the database as the transfer medium.
If analysis in your particular case shows promise, then node-mysql (or similar) can be used - see this: link
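A heavily hedged sketch of that idea with the mysql NPM module; the sessions table and its id/data columns are assumptions (your schema will differ), and the data column would still hold PHP-serialized session data to be decoded as in the answers above:
var mysql = require('mysql');
var connection = mysql.createConnection({
    host: 'localhost', user: 'app', password: 'secret', database: 'app'
});

var sessionId = '74ibpvem1no6ssros60om3mlo5'; // from the PHPSESSID cookie
connection.query('SELECT data FROM sessions WHERE id = ?', [sessionId],
    function (err, rows) {
        if (err || rows.length === 0) return console.error('no such session');
        console.log(rows[0].data); // still PHP-serialized; unserialize as above
    });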

The answer from zamnuts helped me make sense of doing authentication and was the approach I was already taking. Thanks for that.
The reason I am posting is that, for some reason, using:
var PHPUnserialize = require('php-unserialize');
kept giving me the error:
SyntaxError: String length mismatch
I am not sure why. I wrote a function that does the job for me and want to share it in case it helps someone else.
function Unserialize(data) {
    var result = {};
    if (data !== undefined) {
        // strip the serialization markers (s:<len>:, i:, b:), quotes and
        // trailing semicolon, then split the remainder into key|value tokens
        var tokens = data.replace(/(^|s:[0-9]+:)|(^|i:)|(^|b:)|(")|(;$)/g, '').split(';');
        // flatten the "key|value" tokens into one alternating list
        var flat = [];
        tokens.forEach(function (value) {
            Array.prototype.push.apply(flat, value.split('|'));
        });
        // even positions are keys, odd positions are the matching values
        for (var i = 0; i < flat.length; i += 2) {
            result[flat[i]] = flat[i + 1];
        }
    }
    return result;
}
I just call it like this:
var PHPUnserialize = Unserialize;
memcached.get(key, function(err, data) {
    var memData = PHPUnserialize(data);
    console.log(memData.is_logged_in);
});
You should be able to modify the regex to suit your needs fairly easily.

Related

PHP Predis: How to convert `redis-cli script load $LUA_SCRIPT` into Predis methods?

The following is the Lua script:
local lock_key = 'icicle-generator-lock'
local sequence_key = 'icicle-generator-sequence'
local logical_shard_id_key = 'icicle-generator-logical-shard-id'
local max_sequence = tonumber(KEYS[1])
local min_logical_shard_id = tonumber(KEYS[2])
local max_logical_shard_id = tonumber(KEYS[3])
local num_ids = tonumber(KEYS[4])
if redis.call('EXISTS', lock_key) == 1 then
    redis.log(redis.LOG_INFO, 'Icicle: Cannot generate ID, waiting for lock to expire.')
    return redis.error_reply('Icicle: Cannot generate ID, waiting for lock to expire.')
end
--[[
Increment by a set number, this can
--]]
local end_sequence = redis.call('INCRBY', sequence_key, num_ids)
local start_sequence = end_sequence - num_ids + 1
local logical_shard_id = tonumber(redis.call('GET', logical_shard_id_key)) or -1
if end_sequence >= max_sequence then
    --[[
    As the sequence is about to roll around, we can't generate another ID until we're sure we're not in the same
    millisecond since we last rolled. This is because we may have already generated an ID with the same time and
    sequence, and we cannot allow even the smallest possibility of duplicates. It's also because if we roll the sequence
    around, we will start generating IDs with smaller values than the ones previously in this millisecond - that would
    break our k-ordering guarantees!
    The only way we can handle this is to block for a millisecond, as we can't store the time due to the purity constraints
    of Redis Lua scripts.
    In addition to a neat side-effect of handling leap seconds (where milliseconds will last a little bit longer to bring
    time back to where it should be) because Redis uses system time internally to expire keys, this prevents any duplicate
    IDs from being generated if the rate of generation is greater than the maximum sequence per millisecond.
    Note that it blocks even if it rolled around *not* in the same millisecond; this is because unless we do this, the
    IDs won't remain ordered.
    --]]
    redis.log(redis.LOG_INFO, 'Icicle: Rolling sequence back to the start, locking for 1ms.')
    redis.call('SET', sequence_key, '-1')
    redis.call('PSETEX', lock_key, 1, 'lock')
    end_sequence = max_sequence
end
--[[
The TIME command MUST be called after anything that mutates state, or the Redis server will error the script out.
This is to ensure the script is "pure" in the sense that randomness or time based input will not change the
outcome of the writes.
See the "Scripts as pure functions" section at http://redis.io/commands/eval for more information.
--]]
local time = redis.call('TIME')
return {
    start_sequence,
    end_sequence, -- Doesn't need conversion; the result of INCR (or the variable we set) is always a number.
    logical_shard_id,
    tonumber(time[1]),
    tonumber(time[2])
}
redis-cli script load $LUA_SCRIPT
I tried
$predis->eval(file_get_contents($luaPath), 0);
or
class ListPushRandomValue extends \Predis\Command\ScriptCommand {
    public function getKeysCount() {
        return 0;
    }
    public function getScript() {
        $luaPath = Aa::$vendorRoot .'/icicle/id-generation.lua';
        return file_get_contents($luaPath);
    }
}
$predis->getProfile()->defineCommand('t', '\Controller\ListPushRandomValue');
$response = $predis->t();
But both of the above produced the error below.
ERR Error running script (call to f_5849d008682280eed4ec67b97ba50ae546fc5e8d): #user_script:19: user_script:19: attempt to perform arithmetic on local 'end_sequence' (a table value)
First, let me say that I am not an expert on Lua, but I have just dealt with Redis and Lua scripting to implement simple locking, and I noticed a few errors in the question.
There is a conversion process between Redis and Lua that should be reviewed: Conversion. I think this will help with the error given.
On this call:
$predis->eval(file_get_contents($luaPath), 0);
you pass the contents of the script, which reads four KEYS values, but you tell Predis there are zero keys, so none are passed in to evaluate. This call is more correct:
$predis->eval(file_get_contents($luaPath), 4, $oneKey, $twoKey, $threeKey, $fourKey);
This might actually be the reason for the above error. Hope this helps someone in the future.
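For the ScriptCommand variant the same fix applies: getKeysCount() must report the four keys the script reads. A minimal sketch under that assumption (the class name and the numeric arguments - max sequence, min/max logical shard id, number of IDs - are illustrative):
class IcicleGenerator extends \Predis\Command\ScriptCommand {
    public function getKeysCount() {
        // the script reads KEYS[1] through KEYS[4]
        return 4;
    }
    public function getScript() {
        return file_get_contents(Aa::$vendorRoot . '/icicle/id-generation.lua');
    }
}

$predis->getProfile()->defineCommand('icicle', 'IcicleGenerator');
$response = $predis->icicle(4095, 0, 3, 1);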

Updating page info using jQuery from a PHP script that performs an external connection

I have a PHP script that performs a connection to my other server using file_get_contents, and then retrieves and displays the data.
//authorize connection to the ext. server
$xml_data=file_get_contents("http://server.com/connectioncounts");
$doc = new DOMDocument();
$doc->loadXML($xml_data);
//variables to check for name / connection count
$wmsast = $doc->getElementsByTagName('Name');
$wmsasct = $wmsast->length;
//start the loop that fetches and displays each name
for ($sidx = 0; $sidx < $wmsasct; $sidx++) {
    $strname = $wmsast->item($sidx)->getElementsByTagName("WhoIs")->item(0)->nodeValue;
    $strctot = $wmsast->item($sidx)->getElementsByTagName("Sessions")->item(0)->nodeValue;
    /**************************************
    Display only one instance of their name.
    strpos will check to see if the string contains a _ character
    **************************************/
    if (strpos($strname, '_') !== FALSE) {
        // null. ignoring any duplicates
    } else {
        // Leftovers. This section contains the names that are only the BASE (no _jibberish, etc)
        echo $sidx . " <b>Name: </b>" . $strname . " Sessions: " . $strctot . "<br />";
    } //end display base check
} //end name loop
From the client side, I'm calling this script using jQuery's load() and executing it on mousemove():
$(document).mousemove(function(event){
$('.xmlData').load('./connectioncounts.php').fadeIn(1000);
});
And I've also experimented with set interval which works just as well:
var auto_refresh = setInterval(function () {
    $('.xmlData').load('./connectioncounts.php').fadeIn("slow");
}, 1000); // refresh every 1000 ms = 1 second
It all works and the contents appear in "real time", but I can already notice an effect on performance and it's just me using it.
I'm trying to come up with a better solution but falling short. The problem with what I have now is that each client would be forcing the script to initiate a new connection to the other server, so I need a solution that will consistently keep the information updated without involving the clients making a new connection directly.
One idea I had was to use a cron job that executes the script, and modify the PHP to log the contents. Then I could simply get the contents of that cache from the client side. This would mean that there is only one connection being made instead of forcing a new connection every time a client wants the data.
The only problem is that the cron would have to run frequently, like every few seconds. I've read about people running cron that often before, but in every instance I've come across, the job wasn't also making an external connection each time.
Is there any option for me other than cron to achieve this or in your experience is that good enough?
How about this:
When the first client reads your data, you retrieve them from the remote server and cache them together with a timestamp.
When the next clients read the same data, you check how old the contents of the cache is and only if it's older than 2 seconds (or whatever) you access the remote server again.
Make yourself familiar with APC as a global storage. Once you have fetched the file, store it in the APC cache and set a timeout. You only need to connect to the remote server once a page is not in the cache or is outdated.
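A minimal sketch of that caching layer (and the timestamp check from the previous answer), assuming the APC extension is loaded (with APCu the calls are apcu_fetch/apcu_store) and a 2-second freshness window:
<?php
$xml_data = apc_fetch('connectioncounts');
if ($xml_data === false) {
    // cache miss or expired: hit the remote server exactly once
    $xml_data = file_get_contents("http://server.com/connectioncounts");
    apc_store('connectioncounts', $xml_data, 2); // keep for 2 seconds
}
// ...parse $xml_data with DOMDocument and echo the names as before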
Mousemove: are you sure? That generates gazillions of parallel requests unless you set a client-side semaphore to stop issuing further AJAX queries.

static variable for request count in laravel

In my routes.php I have a debug filter like so:
Route::filter('debug', function() {
    if (App::environment() !== 'dev') { return; }
    error_log("\n\n\n\n REQUEST NO. " . $staticRequestCount++ . "\n\n");
    // log the request headers
    // log the request body
});
I'm a noob in both PHP and Laravel. Is it possible to create a static requestCount variable as above which keeps increasing until you restart the server (or similar)?
In PHP it's not possible to share a variable across different requests without external storage. Each request is served by a separate process or thread, depending on the Apache worker implementation, so the code cannot share a common in-memory variable to serve as a counter.
You can do it by writing the counter value to a cache. Check out APC or memcached; a sketch with Laravel's cache facade is below.
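A minimal sketch using Laravel's cache facade, assuming a shared cache driver (APC, memcached, ...) is configured; the key name is illustrative:
Route::filter('debug', function() {
    if (App::environment() !== 'dev') { return; }
    // create the counter once, then bump it atomically on each request
    Cache::add('debug_request_count', 0, 60); // TTL is in minutes on Laravel 4
    $count = Cache::increment('debug_request_count');
    error_log("\n\n\n\n REQUEST NO. " . $count . "\n\n");
});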
I don't think it's possible. You cannot detect with PHP whether the server was restarted. But you can simply save such a counter to a file, read it each time your filter runs, increase it, and save the modified value; of course it won't automatically be deleted (or reset to 0) when the server restarts.

Is phpredis pipeline the same as using the protocol for mass insertion?

I'm moving part of my site from a relational database to Redis and need to insert millions of keys in as short a time as possible.
In my case, data must first be fetched from MySQL, prepared by PHP and then added to the corresponding sorted sets (time as the score + ID as the value). Currently I'm taking advantage of the phpredis multi method with the Redis::PIPELINE parameter. Despite noticeable speed improvements, it turned out to block reads and slow down loading times while the import is running.
So here comes the question - is using pipeline in phpredis an equivalent to the mass insertion described in http://redis.io/topics/mass-insert?
Here's an example:
phpredis way:
<?php
// All necessary requires etc.
$client = Redis::getClient();
$client->multi(Redis::PIPELINE); // OR $client->pipeline();
$client->zAdd('key', 1, 2);
...
$client->zAdd('key', 1000, 2000);
$client->exec();
vs protocol from redis.io:
cat data.txt | redis-cli --pipe
I'm one of the contributors to phpredis, so I can answer your question. The short answer is that it is not the same but I'll provide a bit more detail.
What happens when you put phpredis into Redis::PIPELINE mode is that instead of sending the command when it is called, it puts it into a list of "to be sent" commands. Then, once you call exec(), one big command buffer is created with all of the commands and sent to Redis.
After the commands are all sent, phpredis reads each reply and packages the results as per each command's specification (e.g. HMGET calls come back as associative arrays, etc).
The performance on pipelining in phpredis is actually quite good, and should suffice for almost every use case. That being said, you are still processing every command through PHP, which means you will pay the function call overhead by calling the phpredis extension itself for every command. In addition, phpredis will spend time processing and formatting each reply.
If your use case requires importing MASSIVE amounts of data into Redis, especially if you don't need to process each reply (but instead just want to know that all commands were processed), then the mass-import method is the way to go.
I've actually created a project to do this here:
https://github.com/michael-grunder/redismi
The idea behind this extension is that you call it with your commands and then save the buffer to disk, which will be in the raw Redis protocol and compatible with cat buffer.txt | redis-cli --pipe style insertion.
One thing to note is that at present you can't simply replace any given phpredis call with a call to the RedisMI object, as commands are processed as variable argument calls (like hiredis), which work for most, but not all phpredis commands.
Here is a simple example of how you might use it:
<?php
$obj_mi = new RedisMI();
// Some context we can pass around in RedisMI for whatever we want
$obj_context = new StdClass();
$obj_context->session_id = "some-session-id";
// Attach this context to the RedisMI object
$obj_mi->SetInfo($obj_context);
// Set a callback when a buffer is saved
$obj_mi->SaveCallback(
    function($obj_mi, $str_filename, $i_cmd_count) {
        // Output the context info we attached
        $obj_context = $obj_mi->GetInfo();
        echo "session id: " . $obj_context->session_id . "\n";
        // Output the filename and how many commands were buffered
        echo "buffer file: " . $str_filename . "\n";
        echo "commands   : " . $i_cmd_count . "\n";
    }
);

// A thousand SADD commands, adding three members each time
for ($i = 0; $i < 1000; $i++) {
    $obj_mi->sadd('some-set', "$i-one", "$i-two", "$i-three");
}

// A thousand ZADD commands
for ($i = 0; $i < 1000; $i++) {
    $obj_mi->zadd('some-zset', $i, "member-$i");
}

// Save the buffer
$obj_mi->SaveBuffer('test.buf');
?>
Then you can do something like this:
➜ tredismi php mi.php
session id: some-session-id
buffer file: test.buf
commands : 2000
➜ tredismi cat test.buf|redis-cli --pipe
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 2000
Cheers!

NodeJS receiving data from PHP server

I'm currently working on a simple NodeJS client that connects to a PHP server using the net classes. In addition, the NodeJS client is working as a Socket.IO server that sends data received from the PHP server to the browsers connected with Socket.IO.
So far, everything is working fine. Yet if I connect with another client to Socket.IO, the PHP server has to send a notification to every connected client. Thus, it sends a JSON-encoded array to the NodeJS client which processes the JSON data (decoding and modifying it a bit).
Now the problem is that sometimes two separate messages sent by the PHP server are concatenated in NodeJS' onData event handling function:
client.on("data", function(data) {
var msgData = JSON.parse(data.toString("utf8"));
[...]
}
The variable data now sometimes (not every time!) contains two JSON-strings, such as:
{ "todo":"message", [...] } { "todo":"message", [...] }
This of course results in an exception thrown by the JSON.parse function. I expected two calls of the onData-function with the variable data being:
{ "todo":"message", [...] }
On the PHP server side I have to iterate over an array containing all Socket.IO-connections that are currently served:
foreach ($sockets as $id => $client) {
    $nodeJS->sendData($client, array("todo" => "message", [...]));
}
The $nodeJS->sendData-function json-encodes the array and sends it to the NodeJS client:
socket_write($nodeClient, json_encode($dataToSend));
The $nodeJS->sendData function is definitely called twice, as is socket_write.
I now have no idea whether PHP or Node.js concatenates those two strings. What I want is for Node.js to call the onData-handler once for each time the $nodeJS->sendData function is called (e.g. sendData is called twice → the onData-event fires twice).
I could of course add some flag at the end of each json-encoded string and later split them into an array in the onData function. However, I don't like that solution much.
Is there an easier way to accomplish this?
It's important to remember that when you're reading from a socket, the data is going to come in arbitrary chunks and it's entirely up to your code to split them up into units that are meaningful to process; there is absolutely no guarantee that each chunk will correspond to one meaningful unit.
yannisgu has given you the first part of the solution (terminate each unit with a newline, so your code can tell where it ends): now you need to implement the second part, which is to buffer your incoming data and split it into units.
At initialization, do something like
var buf = '';
and set the client's encoding to utf8.
In your "data" handler:
[UPDATED: incorporated josh3736's suggestions]
buf += data;
var idx;
while ((idx = buf.indexOf('\n')) >= 0) {
    // there's at least one complete unit buffered up
    var unit = buf.slice(0, idx); // extract it
    if (unit.slice(-1) == '\r') {
        unit = unit.slice(0, -1); // CRLF delimited: drop the trailing \r
    }
    if (unit.length) { // ignore empty strings
        var msgData = JSON.parse(unit); // process it
        [...]
    }
    buf = buf.slice(idx + 1); // discard the processed unit
}
// at this point, buf is either empty or contains an incomplete
// unit which will be processed as soon as the rest of it comes in
Try adding a newline after the JSON string on the PHP side.
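A one-line sketch of that fix in the sendData helper; the trailing \n gives the Node.js side an unambiguous delimiter to split on, as in the buffering loop above:
socket_write($nodeClient, json_encode($dataToSend) . "\n");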
