We're using DynamoDB to synchronize sessions across multiple EC2 machines behind ELBs.
We noticed that this approach slows our scripts down considerably.
Specifically, I wrote a JS test that calls three different PHP scripts on the server 10 times each.
1) The first one is just an echo timestamp(); and takes about 50 ms round trip.
2) The second one is a PHP script that connects through mysqli to RDS MySQL and takes about the same time (50-60 ms).
3) The third script uses the DynamoDB session handling method described in the official AWS documentation and takes about 150 ms (3 times slower!).
I run garbage collection every night (as the documentation says; see the GC sketch after the code below) and the DynamoDB metrics seem OK (attached below).
The code I use is this:
require 'aws.phar';

use Aws\DynamoDb\DynamoDbClient;
use Aws\DynamoDb\Session\SessionHandler;

// Use /dev/urandom for session ID entropy and disable PHP's own GC
// (expired sessions are cleaned up by a separate nightly job).
ini_set('session.entropy_file', '/dev/urandom');
ini_set('session.entropy_length', '512');
ini_set('session.gc_probability', 0);

$dynamoDb = DynamoDbClient::factory(array(
    'key'    => 'XXXXXX',
    'secret' => 'YYYYYY',
    'region' => 'eu-west-1',
));

$sessionHandler = SessionHandler::factory(array(
    'dynamodb_client'          => $dynamoDb,
    'table_name'               => 'sessions',
    'session_lifetime'         => 259200,    // 3 days, in seconds
    'consistent_read'          => true,
    'locking_strategy'         => null,      // no session locking
    'automatic_gc'             => 0,         // GC runs from the nightly job instead
    'gc_batch_size'            => 25,
    'max_lock_wait_time'       => 15,
    'min_lock_retry_microtime' => 5000,
    'max_lock_retry_microtime' => 50000,
));

$sessionHandler->register();
session_start();
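For reference, the nightly garbage-collection job is roughly this shape (a sketch, not the exact cron script; it assumes the SDK v2 garbageCollect() method and the same table and credentials as above):

require 'aws.phar';

use Aws\DynamoDb\DynamoDbClient;
use Aws\DynamoDb\Session\SessionHandler;

// Run from cron, not from the web tier.
$dynamoDb = DynamoDbClient::factory(array(
    'key'    => 'XXXXXX',
    'secret' => 'YYYYYY',
    'region' => 'eu-west-1',
));

$sessionHandler = SessionHandler::factory(array(
    'dynamodb_client' => $dynamoDb,
    'table_name'      => 'sessions',
    'gc_batch_size'   => 25,
));

// Deletes expired sessions in batches of gc_batch_size.
$sessionHandler->garbageCollect();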
Am I doing something wrong, or is it normal for retrieving the session to take that long?
Thanks.
Copying correspondence from an AWS engineer on the AWS forums: https://forums.aws.amazon.com/thread.jspa?messageID=597493
Here are a couple of things to check:
Are you running your application on EC2 in the same region as your DynamoDB table?
Have you enabled opcode caching to ensure that the classes used by the SDK do not need to be loaded from disk and parsed each time your script is run?
Using a web server like Apache and connecting to a DynamoDB session store will require a new SSL connection to be established on each request. This is because PHP doesn't (currently) allow you to reuse cURL connection handles between requests. Some database drivers do allow for persistent connections between requests, which could account for the performance difference.
If you follow up on the AWS forums thread, an AWS engineer should be able to help you with your issue. This thread is also monitored if you want to keep it open.
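Not from the forum reply, but a quick timing sketch like the following can confirm that the extra ~100 ms is spent inside session_start() (i.e. the DynamoDB request plus SSL handshake) rather than elsewhere in the script:

// Assumes the SessionHandler setup shown in the question.
$t0 = microtime(true);
session_start();
$elapsedMs = round((microtime(true) - $t0) * 1000);
error_log("session_start() took {$elapsedMs} ms");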
I have a "legacy" PHP application that we just migrated to run on Google Cloud (Kubernetes Engine). Along with it I also have an Elasticsearch installation (Elastic Cloud on Kubernetes) running. After a few incidents where Kubernetes killed my Elasticsearch while we were trying to deploy other services, we have come to the conclusion that we should probably not run ES on Kubernetes, at least not if we are to manage it ourselves, due to an apparent lack of knowledge for doing it in a robust way.
So our idea is now to move to managed Elastic Cloud instead, which was really simple to deploy and start using. However... now that I try to load ES with the data needed for our PHP application, it fails mid-process with the error message no alive nodes found in cluster. Sometimes it happens after fewer than 1000 "documents" and other times I manage to get 5000+ of them indexed before failure.
This is how I initialize the es client:
$clientBuilder = ClientBuilder::create();
$clientBuilder->setElasticCloudId(ELASTIC_CLOUD_ID);
$clientBuilder->setBasicAuthentication('elastic', ELASTICSEARCH_PW);
$clientBuilder->setRetries(10);               // retry failed requests up to 10 times
$this->esClient = $clientBuilder->build();
ELASTIC_CLOUD_ID and ELASTICSEARCH_PW are set via environment variables.
The request looks something like:
$params = [
    'index'  => $index,
    'type'   => '_doc',
    'body'   => $body,
    'client' => [
        'timeout'         => 15,
        'connect_timeout' => 30,
        'curl' => [
            CURLOPT_HTTPHEADER => ['Content-type: application/json'],
        ],
    ],
];
The body and the target index depend on how far we get with the "ingestion", but it's generally pretty standard stuff.
All of this works without any real problems when running against our own installation of Elasticsearch in our own GKE cluster.
What I've tried so far is to add the retries and timeouts, but none of that seems to make much of a difference.
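One thing that could still be tried is an explicit catch-and-retry around the index call, backing off when the client reports no alive nodes. A sketch, assuming the standard elasticsearch-php client; indexWithBackoff and the retry/sleep values are just illustrative:

use Elasticsearch\Common\Exceptions\NoNodesAvailableException;

function indexWithBackoff($esClient, array $params, $maxAttempts = 5)
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        try {
            return $esClient->index($params);
        } catch (NoNodesAvailableException $e) {
            if ($attempt === $maxAttempts) {
                throw $e;                  // give up after the last attempt
            }
            usleep(250000 * $attempt);     // back off: 0.25 s, 0.5 s, 0.75 s, ...
        }
    }
}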
We're running:
PHP 7.4
Elasticsearch 7.11
Elasticsearch PHP client 7.12 (via Composer)
If you use WAMP64, this error will occur; you have to use XAMPP instead.
Try the following command in a command prompt. If it succeeds, the cluster is reachable and the problem is in your configuration.
curl -u elastic:<password> https://<endpoint>:<port>
(Example for Elastic Cloud:)
curl -u elastic:<password> example.es.us-central1.gcp.cloud.es.io:9234
Both CDbHttpSession and CHttpSession seem to be ignoring the timeout value and garbage-collect session data after a fairly short time (less than 12 hours). What could be the problem?
'session' => array(
    'class'                  => 'CDbHttpSession',
    'autoCreateSessionTable' => true,
    'autoStart'              => true,
    'timeout'                => 1209600,   // 14 days, in seconds
    'cookieMode'             => 'only',
    'sessionName'            => 'ssession',
),
Maybe this is what you are looking for:
Setting the timeout for CHttpSession just sets the session.gc_maxlifetime PHP setting. If you run your application on Debian or Ubuntu, their default PHP has the garbage collector disabled and runs a cron job to clean sessions up instead.
In my apps I set the session dir somewhere in protected/runtime to separate my session from other apps. This is important on shared hosting sites and it's a good habit. The downside is that I have to remember to set up a cron job to clean the files in that folder.
Anyway, you should also set a timeout when calling CWebUser.login to log in a user.
from Yii Forum Post
Check the duration parameter of CWebUser.login, as sketched below.
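A minimal sketch of that last point, assuming a standard Yii 1.1 CUserIdentity and that allowAutoLogin is enabled on the user component:

// Keep the login alive for 14 days, matching the session timeout above.
$duration = 14 * 24 * 3600;
if ($identity->authenticate()) {
    Yii::app()->user->login($identity, $duration);
}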
Is there any RiakCS S3 PHP client library out there? The best I could find was S3cmd command line client software.
Also I've seen there is Riak PHP Client, but it looks like there is nothing related to S3.
I've installed aws-sdk-php-laravel and used the same credentials as for RiakCS S3, but it doesn't seem to work. Error message below:
The AWS Access Key Id you provided does not exist in our records.
Thank you for any guidance or advice.
Actually, if you are using Riak, it wouldn't be a proxy, it would be a completely different endpoint. So you should do it this way with the base_url option:
use Aws\S3\S3Client;

$s3 = S3Client::factory([
    'base_url'       => 'http://127.0.0.1:8080',
    'region'         => 'my-region',
    'key'            => 'my-key',
    'secret'         => 'my-secret',
    'command.params' => ['PathStyle' => true],
]);
Using 'command.params' allows you to set a parameter used in every operation. You will need to use the 'PathStyle' option on every request to make sure the SDK does not move your bucket into the host part of the URL like it is supposed to do for Amazon S3.
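As a quick smoke test of that configuration (nothing Riak-specific here, just standard SDK v2 calls against the client built above):

// List the buckets visible to the configured credentials.
$result = $s3->listBuckets();
foreach ($result['Buckets'] as $bucket) {
    echo $bucket['Name'] . PHP_EOL;
}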
This was all discussed in an issue on GitHub.
aws-sdk-php-laravel uses aws-sdk-php which is hard coded to use Amazon's URLs. If you want to use that library with Riak CS, you'll need to configure it to use your node as a proxy. According to the config docs that would be set using:
use Aws\S3\S3Client;
$s3 = S3Client::factory(array(
    'request.options' => array(
        'proxy' => '127.0.0.1:8080',
    ),
));
I haven't used Laravel, so I'm not sure where to put that so that it will pass the setting along to the underlying S3 client.
I want to read my own tweets in a little localhost application in JS + PHP.
I know how to read the JSON at api.twitter.com/1/statuses/user_timeline.json?screen_name=myName, but due to the rate limit I need to use a User Stream (https://dev.twitter.com/docs/streaming-api/user-streams).
I have my 4 keys from creating a dev account:
'consumer_key' => '*****',
'consumer_secret' => '*****',
'user_token' => '*******',
'user_secret' => '******',
So I tried this example: https://github.com/themattharris/tmhOAuth/blob/master/examples/userstream.php
downloaded the lib
ran my MAMP (or WAMP or LAMP)
opened the example, put in my keys
went to the page
and nothing, except the browser loader.
Why does this happen?
Is it due to localhost?
Or missing params?
Or a new Twitter restriction?
The Streaming API is not the right tool for a little application; you are better off with the plain REST API (see the sketch below).
A Streaming API application is not supposed to be run in a browser. Don't forget to call set_time_limit(0) and start your .php script from the command line, where it will run forever (you should save the tweets in a database so your normal browser scripts can display them).
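For example, with the same tmhOAuth library and keys, reading the timeline over the authenticated REST API looks roughly like this (a sketch; the 1.1 endpoint and parameter names are the standard ones, but double-check against the current Twitter docs):

require 'tmhOAuth.php';   // from the themattharris/tmhOAuth library already downloaded

$tmhOAuth = new tmhOAuth(array(
    'consumer_key'    => '*****',
    'consumer_secret' => '*****',
    'user_token'      => '*******',
    'user_secret'     => '******',
));

// Authenticated REST call: higher rate limit than anonymous requests.
$code = $tmhOAuth->request('GET', $tmhOAuth->url('1.1/statuses/user_timeline'), array(
    'screen_name' => 'myName',
    'count'       => 10,
));

if ($code == 200) {
    $tweets = json_decode($tmhOAuth->response['response'], true);
    foreach ($tweets as $tweet) {
        echo $tweet['text'] . PHP_EOL;
    }
}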
I'm currently making use of Gearman with PHP using the standard bindings (docs here). All is functioning fine, but I have one small issue with not being able to detect when a call to GearmanClient::addServer (docs here) is "successful", by which I mean...
The issue is that adding the server attempts no socket I/O, meaning that the server may not actually exist or be operational. This means that subsequent code calls (in the scenario where the server does not in fact exist) fail and result in PHP warnings.
Is there any way, or what is the best way, to confirm that the Gearman daemon is operational on the server before or after adding it?
I would like to achieve this so that I can reliably handle scenarios in which Gearman may have died, or the server is uncontactable, perhaps.
Many thanks.
We first tried this by manually calling fsockopen on the host and port passed to addServer, but it turns out that this can leave a lot of hanging connections as the Gearman server expects something to happen over that socket.
We use a monitor script to check the status of the daemon and its workers — something similar to this perl script on Google Groups. We modified the script to restart the daemon if it was not running.
If this does not appeal, have a look at the Gearman protocol (specifically the "Administrative Protocol" section, referenced in the above thread) and use the status command, as sketched below. This will give you information on the status of the jobs and workers, but it also means you can perform a socket connection to the daemon and not leave it hanging.
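A sketch of that check over the admin protocol (host, port and timeout are example values; the point is to read the full reply and close the socket so nothing is left hanging):

// Check a gearmand instance via the text-based admin protocol.
function gearmanIsAlive($host = '127.0.0.1', $port = 4730, $timeout = 2)
{
    $fp = @fsockopen($host, $port, $errno, $errstr, $timeout);
    if ($fp === false) {
        return false;                 // daemon unreachable
    }
    stream_set_timeout($fp, $timeout);
    fwrite($fp, "status\n");          // admin command; reply ends with a "." line
    $alive = false;
    while (($line = fgets($fp)) !== false) {
        if (rtrim($line) === '.') {   // end-of-response marker
            $alive = true;
            break;
        }
    }
    fclose($fp);                      // close cleanly so no connection is left hanging
    return $alive;
}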
You can use this library: https://github.com/necromant2005/gearman-stats
It has no external dependencies.
$adapter = new \TweeGearmanStat\Queue\Gearman(array(
    'h1' => array('host' => '10.0.0.1', 'port' => 4730, 'timeout' => 1),
    'h2' => array('host' => '10.0.0.2', 'port' => 4730, 'timeout' => 1),
));
$status = $adapter->status();
var_dump($status);