In our intranet applications we use SSO (single sign-on) login, and the sessions of both the client and the auth-origin application are stored in memcached.
The sessions are set to live for 12 hours before the garbage collector may consider them for removal. Both applications are written in ZF2.
Unfortunately, after a certain period of time (I don't have the exact value) the browser loses the session, which causes a redirect to the auth origin; there the session is still alive, so the user is redirected back to the client and the browser session is refreshed. This is not a big deal if the user has no unsaved work, since these two redirects happen within a second and the user may not even notice them.
But it really is a big deal when the user has unsaved work: even an attempt to save it triggers the redirects, and the work is gone.
Here is the session configuration in Module.php:
use Zend\Mvc\MvcEvent;
use Zend\Session\Config\SessionConfig;
use Zend\Session\Container;
use Zend\Session\SessionManager;

class Module
{
    public function onBootstrap(MvcEvent $e)
    {
        // ...
        $serviceManager = $e->getApplication()->getServiceManager();
        $sessionManager = $serviceManager->get('session_manager_memcached');
        $sessionManager->start();
        Container::setDefaultManager($sessionManager);
        // ...
    }

    public function getServiceConfig()
    {
        return array(
            'factories' => array(
                // ...
                'session_manager_memcached' => function ($sm) {
                    $systemConfig = $sm->get('config');
                    $config = new SessionConfig;
                    $config->setOptions(array(
                        'phpSaveHandler' => 'memcache',
                        'savePath' => 'tcp://localhost:11211?timeout=1&retry_interval=15&persistent=1',
                        'cookie_httponly' => true,
                        'use_only_cookies' => true,
                        'cookie_lifetime' => 0,
                        'gc_maxlifetime' => 43200, // 12h
                        'remember_me_seconds' => 43200 // 12h
                    ));
                    return new SessionManager($config);
                },
                // ...
            ),
        );
    }
}
The authentication service is defined as
'authService' => function ($sm) {
    $authService = new \Zend\Authentication\AuthenticationService;
    $authService->setStorage(new \Zend\Authentication\Storage\Session('user_login'));
    return $authService;
},
where the session storage uses the same memcached session manager.
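For completeness, the identity is read back elsewhere roughly like this (a minimal sketch; 'authService' is the factory above, and the controller context is assumed):

// e.g. inside a controller action
$authService = $this->getServiceLocator()->get('authService');
if ($authService->hasIdentity()) {
    $identity = $authService->getIdentity(); // read from the memcached-backed session
}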
Then, anywhere in the application where a session value needs to be retrieved or set, I just use a \Zend\Session\Container like this:
$sessionContainer = new \Zend\Session\Container('ClientXYZ');
$sessionContainer['key1'] = $val1;
// or
$val2 = $sessionContainer['key2'];
On every action, SSO validation is requested for the active session, using a token from the session which contains the PHPSESSID of the auth origin. It's quite complicated to describe fully within this question.
Additionally, the authentication service stores the user identity (with roles for the ACL) in the memcached session as well, using the same settings. This now appears to be the place causing the confusion: apparently the session storage of the authentication service times out prematurely, so the ACL retrieves no user identity to check, which leads into the SSO logout sequence (but because the user didn't really log out, SSO redirects the user back, as described above).
I'm not sure how much code I should (and can) share here; maybe you'll lead me to the solution straight away, or just by asking me some questions. I am quite helpless right now after many hours of debugging and trying to identify the problem.
I have read somewhere that memcached wipes an entry from memory once the session data reaches 1 MB in size; might this be the case? For the user identity we store just general user information and an array of roles, which I'd guess is at most a few kB in size...
EDIT 1: To dismiss all guesses and to save your time, here are a few facts to keep an eye on:
only memcached is used
cookies serve only to transport the PHPSESSID between the browser and the server; its value is the key of the memcached entry where the data is stored
client and SSO auth apps are running on one server (be it the integration, staging or live environment, still just one server)
the session on the client app dies randomly, causing a redirect to the SSO auth app, but there the session is still alive, so the user is redirected back to the client app, which gets a new session, and the user stays logged in
this should dismiss any discussion about memcached being wiped or restarted
also, inspecting memcached directly over telnet shows that both entries (for the client and auth apps) are created almost at the same time, with the same TTL
I am going to add some die() calls in the PHP parts and early returns in the JS parts to catch the moment when the session is considered gone, then further inspect the browser cookie, the memcached data, etc., and will update you (unless somebody comes up with an explanation and solution first).
public function initSession()
{
    $sessionConfig = new SessionConfig();
    $sessionConfig->setOptions([
        'cookie_lifetime'     => 7200, // 2 hrs
        'remember_me_seconds' => 7200, // 2 hrs; this is also set in the login controller
        'use_cookies'         => true,
        'cache_expire'        => 180,  // 3 hrs (this value is in minutes)
        'cookie_path'         => "/",
        'cookie_secure'       => Functions::isSSL(),
        'cookie_httponly'     => true,
        'name'                => 'cookie name',
    ]);
    $sessionManager = new SessionManager($sessionConfig);
    // $memCached = StorageFactory::factory(array( // note: no "new" before a static call
    //     'adapter' => array(
    //         'name' => 'memcached',
    //         'lifetime' => 7200,
    //         'options' => array(
    //             'servers' => array(
    //                 array('127.0.0.1', 11211),
    //             ),
    //             'namespace' => 'MYMEMCACHEDNAMESPACE',
    //             'liboptions' => array(
    //                 'COMPRESSION' => true,
    //                 'binary_protocol' => true,
    //                 'no_block' => true,
    //                 'connect_timeout' => 100
    //             )
    //         ),
    //     ),
    // ));
    // $saveHandler = new Cache($memCached);
    // $sessionManager->setSaveHandler($saveHandler);
    $sessionManager->start();
    return Container::setDefaultManager($sessionManager);
}
This is the function I use to create the session cookie for a user. The cookie lives for 3 hours, regardless of redirects or whether the user has closed the browser; it's still there. Just call this function in your onBootstrap() method in Module.php.
For logging in, I use the ZF2 AuthenticationService and the Container to store and retrieve the user data.
I suggest you install these modules for easier debugging:
https://github.com/zendframework/ZendDeveloperTools
https://github.com/samsonasik/SanSessionToolbar/
Memcached & gc_maxlifetime
When using memcached as session.save_handler, session garbage collection is not performed at all.
Because Memcached works with a TTL (time to live) value, garbage collection isn't needed. An entry that has not lived long enough to reach the TTL age is considered "fresh" and will be used. After that it is considered "stale" and will not be used any longer. Eventually Memcached will free the memory used by the entry, but this has nothing to do with PHP's session garbage collection.
In fact, the only session.gc_* setting that's actually used in this case is session.gc_maxlifetime, which is passed to Memcached as the TTL.
In short: garbage collection is not an issue in your case.
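The TTL semantics are easy to verify directly with ext-memcached (a minimal sketch, assuming a local memcached on the default port):

$m = new Memcached();
$m->addServer('localhost', 11211);
$m->set('demo', 'value', 5);   // entry is stored with a TTL of 5 seconds
var_dump($m->get('demo'));     // string(5) "value" - still "fresh"
sleep(6);
var_dump($m->get('demo'));     // bool(false) - "stale" now, and no PHP GC was involved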
Memcached & Cronjobs
As you are using Memcached as storage for your sessions, any OS-provided cronjobs that manually clean session folders on disk (like Ubuntu's does) will have no effect. Memcached is memory storage, not disk storage.
In short: cronjobs like this are not an issue in your case.
Issue of app, not SSO
You state that the SSO server/authority is on the same machine as the SSO client (the application itself), is using the same webserver / PHP configuration, and is using the same instance of Memcached.
This leads me to believe we have to search in how session management is done in the application, as that is the only difference between the SSO authority and client. In other words: we need to dive into Zend\Session.
Disclaimer: I've professionally worked on several Zend Framework 1 applications, but not on any Zend Framework 2 applications. So I'm flying blind here :)
Configuration
One thing I notice in your configuration is that you've set cookie_lifetime to 0. This actually means "until the browser closes". This doesn't really make sense together with remember_me_seconds set to 12 hours, because a lot of people will have closed their browser before that time.
I suggest you set cookie_lifetime to 12 hours as well.
Also note that remember_me_seconds is only used when the Remember Me functionality is actually used. In other words: if Zend\Session\SessionManager::rememberMe() is called.
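In other words, something like this in the login action (a sketch; session_manager_memcached is the factory name from your configuration above):

// after successful authentication:
$sessionManager = $serviceManager->get('session_manager_memcached');
$sessionManager->rememberMe(43200); // extend the session cookie to 12h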
Alternative implementation
Looking at the way you've implemented Memcached as session storage, and at what I can find on the subject, I'd say you've done something different from what seems to be "the preferred way".
Most resources on this subject advise to use Zend\Session\SaveHandler\Cache (doc, api) as save-handler, which gives you the ability to use Zend\Cache\Storage\Adapter\Memcached (doc, api). This gives you much more control over what's going on, because it doesn't rely on the limited memcached session-save-handler.
I suggest you try this implementation. If it won't immediately resolve your issue, there are at least a lot more resources to find on the subject. Your chances of finding a solution will be better IMHO.
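Wired together, that implementation would look roughly like this (a sketch only; the server address, TTL and namespace are assumptions to adapt to your environment):

use Zend\Cache\StorageFactory;
use Zend\Session\SaveHandler\Cache;
use Zend\Session\SessionManager;

$memCached = StorageFactory::factory(array(
    'adapter' => array(
        'name' => 'memcached',
        'options' => array(
            'ttl'       => 43200, // 12h, matching your gc_maxlifetime
            'servers'   => array(array('localhost', 11211)),
            'namespace' => 'sessions',
        ),
    ),
));

$sessionManager = new SessionManager();
$sessionManager->setSaveHandler(new Cache($memCached));
$sessionManager->start();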
This answer might not immediately address the cause of your memcache issue, but because of the unreliable nature of memcache I would suggest backing up your memcached data in some persistent storage.
Caching your data in memcached will help you improve the performance of your application, but it is not fail-safe.
Maybe you can add a fallback (persistent) storage to your AuthenticationService instance: first try to get your authentication data from memcache, and if nothing is found, check whether something is available in your persistent storage, as sketched below.
This will at least solve the issues caused by unexpected memcache data loss.
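A rough sketch of such a fallback (assumptions: $persistent is some object of your own with load()/save()/delete() methods backed by a DB table; the interface methods match Zend\Authentication\Storage\StorageInterface):

use Zend\Authentication\Storage\Session as SessionStorage;
use Zend\Authentication\Storage\StorageInterface;

class FallbackStorage implements StorageInterface
{
    private $session;
    private $persistent;

    public function __construct(SessionStorage $session, $persistent)
    {
        $this->session = $session;
        $this->persistent = $persistent;
    }

    public function isEmpty()
    {
        return $this->session->isEmpty() && $this->persistent->load() === null;
    }

    public function read()
    {
        if (!$this->session->isEmpty()) {
            return $this->session->read();
        }
        $identity = $this->persistent->load();
        if ($identity !== null) {
            $this->session->write($identity); // re-prime the volatile session storage
        }
        return $identity;
    }

    public function write($contents)
    {
        $this->session->write($contents);
        $this->persistent->save($contents); // keep the durable copy in sync
    }

    public function clear()
    {
        $this->session->clear();
        $this->persistent->delete();
    }
}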
Related
I'm using the PHP Laravel framework to consume Kafka messages with the help of the mateusjunges/laravel-kafka package.
Is it possible to save the offset by consumer in, for example, Redis or DB?
And, when the broker shuts down and comes back up, is it possible to tell the consumer to start consuming messages from that specific offset?
Let's say I have a Laravel Artisan command that builds the following consumer:
public function handle()
{
    $topics = [
        'fake-topic-1',
        'fake-topic-2',
        'fake-topic-3'
    ];

    $cachedRegistry = new CachedRegistry(
        new BlockingRegistry(
            new PromisingRegistry(
                new Client(['base_uri' => 'https://fake-schema-registry.com'])
            )
        ),
        new AvroObjectCacheAdapter()
    );

    $registry = new \Junges\Kafka\Message\Registry\AvroSchemaRegistry($cachedRegistry);
    $recordSerializer = new RecordSerializer($cachedRegistry);

    foreach ($topics as $topic) {
        $registry->addKeySchemaMappingForTopic(
            $topic,
            new \Junges\Kafka\Message\KafkaAvroSchema($topic . '-key')
        );
        $registry->addBodySchemaMappingForTopic(
            $topic,
            new \Junges\Kafka\Message\KafkaAvroSchema($topic . '-value')
        );
    }

    $deserializer = new \Junges\Kafka\Message\Deserializers\AvroDeserializer($registry, $recordSerializer);

    $consumer = \Junges\Kafka\Facades\Kafka::createConsumer(
            $topics, 'fake-test-group', 'fake-broker.com:9999')
        ->withOptions([
            'security.protocol'     => 'SSL',
            'ssl.ca.location'       => storage_path() . '/client.keystore.crt',
            'ssl.keystore.location' => storage_path() . '/client.keystore.p12',
            'ssl.keystore.password' => 'fakePassword',
            'ssl.key.password'      => 'fakePassword',
        ])
        ->withAutoCommit()
        ->usingDeserializer($deserializer)
        ->withHandler(function (\Junges\Kafka\Contracts\KafkaConsumerMessage $message) {
            KafkaMessagesJob::dispatch($message)->onQueue('kafka_messages_queue');
        })
        ->build();

    $consumer->consume();
}
My problem now is that, from time to time, "fake-broker.com:9999" shuts down, and when it comes back up, the consumer misses a few messages...
offset_reset is set to latest;
the option auto.commit.interval.ms is not set on the ->withOptions() method, so it is using the default value (5 seconds, I believe);
auto_commit is set to true, and the consumer is built with the option ->withAutoCommit() as well;
Let me know if you guys need any additional information ;)
Thank you in advance.
EDIT:
According to this thread, I should set my "offset_reset" to "earliest", not "latest".
Even so, I'm almost 100% sure that an offset is committed (somehow, stored somewhere), because I am using the same consumer group ID on the same partition (0), so the "offset_reset" is not even taken into consideration, I'm assuming...
somehow, somewhere stored
Kafka consumer groups store offsets in Kafka itself (the __consumer_offsets topic). Therefore, storing them externally doesn't really make sense, because you need Kafka to be up regardless.
Is it possible to save the offset by consumer in, for example, Redis or DB? And, when the broker shuts down and comes back up, is it possible to tell the consumer to start consuming messages from that specific offset?
In general, it is, but it adds unnecessary complexity. You'd need to manually assign each partition to your client rather than subscribing the consumer to just a topic. It's not clear to me whether that Kafka library supports custom partition assignment, though.
It's also not clear from your question why Kafka would be scaled to zero brokers and have less uptime than "Redis or DB", such that you couldn't store offsets in Kafka. (Redis is a DB, so I'm not sure why that's an "or"...)
The offset_reset value only matters when there is no committed offset for the consumer group. The consumer client isn't (shouldn't be? I don't know the PHP client code) "caching" the offsets locally, and broker restarts should preserve any committed values. If you want to guarantee you are able to commit every message, you need to disable auto-commits and handle it yourself: https://junges.dev/documentation/laravel-kafka/v1.8/advanced-usage/4-custom-committers
You can optionally inspect the message in your handler function and store that message offset somewhere else, but then you are fully responsible for seeking the consumer when it starts back up (again, you'd want to disable all commit functionality in the consumer, and also set the auto.offset.reset consumer config to none rather than latest/earliest). With that config the consumer will throw an error when the offset doesn't exist, however.
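If you do go down that road, the bookkeeping could start in the handler itself. A rough sketch (the Redis key scheme is made up, and the getTopicName()/getPartition()/getOffset() accessors are assumed from the package's KafkaConsumerMessage contract; verify them against your installed version). Seeking to the stored offset on startup remains your responsibility:

use Illuminate\Support\Facades\Redis;
use Junges\Kafka\Contracts\KafkaConsumerMessage;

$handler = function (KafkaConsumerMessage $message) {
    KafkaMessagesJob::dispatch($message)->onQueue('kafka_messages_queue');

    // Record the last processed offset per topic/partition externally.
    Redis::set(
        sprintf('kafka:last-offset:%s:%d', $message->getTopicName(), $message->getPartition()),
        $message->getOffset()
    );
};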
My ZF2 application logs out after a short period of inactivity - say, 60 minutes or so - and I can't understand why.
I have an 'auth' object which is a singleton that composes an instance of Zend\Session\Container. Its constructor creates the container with this following line:
$this->session = new Container('Auth');
The auth object has a login() method that stores the current user with the following line:
$this->getSession()->userId = $user->id;
The auth object also has an isLoggedIn() method that tests the status as follows:
if ($this->getSession()->userId) {
    return true;
}
return false;
That's all pretty straightforward. Yet, from time to time when the bootstrap is checking to see if we are logged in, it comes back with false. Why?
Here's a printout of the config from the session manager:
'cookie_domain' => '',
'cookie_httponly' => false,
'cookie_lifetime' => 604800,
'cookie_path' => '/',
'cookie_secure' => '',
'name' => 'MyApplication',
'remember_me_seconds' => 1209600,
'save_path' => '/var/lib/php5',
'use_cookies' => true,
As you can see, the remember_me_seconds and cookie_lifetime are set to 2 weeks and 7 days respectively. Is there some other setting that I should be looking at?
I read somewhere that the default save handler, 'file', does not support concurrency. My bootstrap also opens a session container on the auth namespace with new Container('Auth'). Could this be conflicting with the Container in the auth singleton? I doubt it, since the problem would then be more likely to occur in periods of high activity (not after a period of inactivity). Also, I would expect to see an exception.
Woe is me.
EDIT: It is also worth noting that the session ID does not change when logged out, or upon logging back in.
There are many reasons why a session can become invalid.
Always check the following points:
session cookie lifetime (should become invalid only when closing the browser)
the session lifetime itself
the cache_expire key in ZF2 (should be higher than the session lifetime)
Try to add this
// NEW SECTION
'cache_expire' => 60 * 26,        // this may help
'gc_maxlifetime' => 60 * 60 * 24, // or this
I have an Android app in which I've implemented AWS Cognito, and I'm hoping to use it to control access to PHP scripts in my web root which connect to an RDS instance with a MySQL db. So far, I've set up the registration process in my app to use a developer-authenticated identity to register the user in a Cognito identity pool. Now, what I would like is a method for checking whether a user trying to access the various scripts I've exposed in my web root is indeed a verified user. What I was thinking of doing is implementing a script like this:
use Aws\CognitoIdentity\CognitoIdentityClient;

$identityId = $_POST['identityId']; // cached identity id sent from the client

$client = CognitoIdentityClient::factory(array(
    'profile' => 'profile',
    'region'  => 'region'
));

$result = $client->lookupDeveloperIdentity(array(
    'IdentityPoolId' => 'IdentityPoolId',
    'IdentityId'     => $identityId,
    'MaxResults'     => 1,
));

if ($result != null) {
    // connect to db and do whatever operation/query needs to be done
}
However, checking this every time I need to make some kind of transaction on my db seems to be pretty inefficient and slow.
a) Am I using Cognito in the intended fashion?
b) If not, what is a better way of going about this?
Please let me know if I'm way off base here. Thanks!
I have looked all over, and I can see where people create the initial session for ZF2 auth, remember-mes, etc., but I can't find where people update the session when there is activity. Basically, I already have an authentication system (with Doctrine), and in my current solution I set up the following configuration:
return array(
    'session' => array(
        'cookie_lifetime'     => 1800, // 30 min
        'remember_me_seconds' => 1800, // 30 min
        'use_cookies'         => true,
    ),
);
Then what I am trying to do is RELOAD this on every request like this:
NOTE: I have code that only does this if the user is already logged in.
use Zend\EventManager\EventInterface;
use Zend\Session\Config\SessionConfig;
use Zend\Session\SessionManager;

class Module
{
    public function onBootstrap(EventInterface $e)
    {
        $e->getApplication()->getEventManager()->attach('route', array($this, 'onRoute'), -100);
    }

    public function onRoute(EventInterface $e)
    {
        // fetch the 'session' config array (the original snippet used $config without defining it)
        $config = $e->getApplication()->getServiceManager()->get('config');

        $sessionConfig = new SessionConfig();
        $sessionConfig->setOptions($config['session']);
        $sessionManager = new SessionManager($sessionConfig);
        $sessionManager->rememberMe($config['session']['remember_me_seconds']);
        $sessionManager->start();
    }
}
My basic need is to refresh the session (server and client side) on every request, but 1) it feels like I'm re-creating it every time, and 2) sometimes the session seems to die randomly. I think this is because the original session expires after the 30 minutes I set.
Any advice?
PHP should be updating the session lifetime for you; you don't need to do it manually.
Also, don't call rememberMe() on every request, as this will generate a new session token each time (assuming the session already exists).
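As a sketch of the simpler setup (reusing your existing 'session' config array): configure and start the session once at bootstrap, and reserve rememberMe() for the login action only:

public function onBootstrap(EventInterface $e)
{
    $config = $e->getApplication()->getServiceManager()->get('config');
    $sessionConfig = new SessionConfig();
    $sessionConfig->setOptions($config['session']);

    // Starting the session on each request is enough to keep it alive;
    // no per-request rememberMe() call is needed.
    $sessionManager = new SessionManager($sessionConfig);
    $sessionManager->start();
    Container::setDefaultManager($sessionManager);
}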
Imagine that you have interfaces which describe the data access layer of your application. You haven't decided yet what kind of storage mechanism you want to use; you just want to make sure that, whatever you choose, it will handle concurrent requests well. For that you have to write concurrency tests against those interfaces.
I think a schematic concurrency test should be something like this:
public function testMoneyIsNotLostByConcurrentTransfers()
{
    $accountRepository = DataAccessLayer::getBankAccountRepository();
    $accountOfTom = $accountRepository->create(array(
        'owner'   => 'Tom',
        'balance' => new Money(10000)
    ));
    $accountOfBob = $accountRepository->create(array(
        'owner'   => 'Bob',
        'balance' => new Money(10000)
    ));
    $accountOfSusanne = $accountRepository->create(array(
        'owner'   => 'Susanne',
        'balance' => new Money(10000)
    ));

    $this->concurrentExecution(
        function () use ($accountOfTom, $accountOfBob) {
            $accountOfTom->transfer($accountOfBob, new Money(5000));
        },
        function () use ($accountOfTom, $accountOfSusanne) {
            $accountOfSusanne->transfer($accountOfTom, new Money(5000));
        }
    );

    $this->assertEquals($accountOfTom->getBalanceAmount(), 10000);
    $this->assertEquals($accountOfBob->getBalanceAmount(), 15000);
    $this->assertEquals($accountOfSusanne->getBalanceAmount(), 5000);
}
Is it possible to write such tests and such a test runner in PHP? Or is there any existing tool which can help with concurrency testing in PHP?
I could not find any test runner for such concurrency tests. I found only paratest, which can run independent tests, like unit tests, in parallel.
According to PHP - parallel task runner, the best option I think is using pthreads with debug_backtrace. I think it will be hard even with that. I am looking forward to the installation problems, thread-safety and resource-sharing difficulties, backtrace bugs, etc. I will have a great time, I am sure... :S
I found async calls in the pthreads examples.
If I ever manage to solve this, I will share it on github and add a link here. Until then...
update
I just realized that I don't need multi-threaded or multi-process applications to test concurrency. For example, I can start two transactions over two database connections from the same PHP file. What I need is to add event triggering for the statements the db driver executes, so I can add breakpoints and wait for the other task wherever I want. File locking is just the same... So coroutines, or some hand-made multitasking with statement logging, are enough...
Concurrency should be built into your saving mechanism, not the execution layer.
For example, if you are using SQL, instead of reading a balance and writing back a new value, let the statement apply the delta itself (effectively += and -=).
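A minimal PDO sketch of that idea (table and column names are made up): the database applies the deltas atomically instead of the application writing back previously read values:

// Move 5000 from one account to another in a single transaction.
$pdo->beginTransaction();

$debit = $pdo->prepare('UPDATE accounts SET balance = balance - :amount WHERE id = :id');
$debit->execute(array('amount' => 5000, 'id' => $fromId));

$credit = $pdo->prepare('UPDATE accounts SET balance = balance + :amount WHERE id = :id');
$credit->execute(array('amount' => 5000, 'id' => $toId));

$pdo->commit();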