My Laravel app is set up with a MultiSite Middleware Provider that checks the subdomain of the address and, based on this subdomain, changes the connection on the fly to another database.
e.g.
Config::set('database.connections.mysql.host', $config['host'] );
Config::set('database.connections.mysql.database', $config['db_name'] );
Config::set('database.connections.mysql.username', $config['user']);
Config::set('database.connections.mysql.password', $config['password']);
Config::set('database.connections.mysql.prefix', $config['prefix']);
Config::set('database.connections.mysql.theme', $config['theme']);
// purge main to prevent issues (and potentially speed up connections??)
DB::disconnect('main');
DB::purge();
DB::reconnect();
return $next($request);
This all works fantastically, except that I now want to use Laravel queues with the built-in database driver (sync actually works fine, but it blocks the user experience for long report generations).
The problem is that Artisan doesn't know which database to connect to, so I'm guessing it connects to the default, which is a kind of supervisor database that stores all the subdomains and their corresponding DB names, etc.
Note that none of these databases are set up in my database config as connections; they're stored in a single management database, as there are quite a lot of them.
I've tried cloning the built-in queue listener and modifying it to swap to the appropriate site connection, like so:
/**
 * Create a new queue listen command.
 *
 * @param  \Illuminate\Queue\Listener  $listener
 * @return void
 */
public function __construct(Listener $listener)
{
    // Multisite swap
    $site = MultiSites::where('machine_name', $this->argument('site'))->first();
    MultiSites::changeSite($site->id);

    parent::__construct();

    $this->setOutputHandler($this->listener = $listener);
}
But this fails with
$commandPath argument missing for the Listener class.
Trying a similar database/site swap in the fire() or handle() methods stops the $commandPath error; however, it then simply does nothing: no feedback, and it doesn't begin to process any jobs from the database.
I'm at a loss as to how to get this working in a multisite environment. Does anyone have any ideas, or am I going about this the wrong way?
My ideal scenario would be to run a single queue command, have Supervisor monitor it, and have it cycle through each database checking for jobs (see the sketch below). But I am also willing to spawn a queue command per database/site if necessary.
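Roughly, the single-command approach I have in mind would look something like the sketch below. It assumes the MultiSites model and changeSite() helper used in the listener constructor above, a Laravel version where queue:work supports --once (older versions process a single job by default when --daemon is omitted), and the command name and namespaces are made up:

<?php

namespace App\Console\Commands;

use App\MultiSites;
use Illuminate\Console\Command;
use Illuminate\Support\Facades\Artisan;

class WorkAllSites extends Command
{
    protected $signature = 'queue:work-all-sites';

    protected $description = 'Swap to each site database in turn and process its pending jobs';

    public function handle()
    {
        foreach (MultiSites::all() as $site) {
            // Swap the default connection over to this site's database,
            // exactly as the middleware does for HTTP requests.
            MultiSites::changeSite($site->id);

            // Pop a job from this site's jobs table before moving on.
            Artisan::call('queue:work', ['--once' => true]);

            $this->info("Processed queue for {$site->machine_name}");
        }
    }
}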
I set up a unit test in a custom (static) Shopware plugin following this guide:
Shopware documentation
Everything runs fine, and I'm able to run a unit test:
class ProductReturnsTest extends TestCase
{
    use IntegrationTestBehaviour;
    use StorefrontPageTestBehaviour;

    public function testConfirmPageSubscriber(): void
    {
        $container = $this->getKernel()->getContainer();

        $dd = $container->get(CustomDataService::class); // <== IT BREAKS HERE: ServiceNotFoundException: You have requested a non-existent service
        $dd = $container->get('event_dispatcher'); // WORKS WITH SHOPWARE ALIASES, NOT WITH PLUGINS
    }
}
I can call container->get() on any Shopware alias, but as soon as I try to get any service declared in the XML of any third-party plugin from the container, I get
ServiceNotFoundException: You have requested a non-existent service "blabla"
What is wrong?
Take a look at the answer given here: https://stackoverflow.com/a/70171394/10064036.
Probably your plugin is not marked as active in the DB your tests run against.
The test environment has a mostly unpopulated database to allow tests to run unaffected, with their own fixtures only. Therefore, after each test all transactions made within the test are rolled back. This principle also includes plugin installations and the database transactions they may execute in their lifecycle events.
You may want to install your plugin properly before your tests, so you get a representative state of the environment, with the plugin's lifecycle events getting dispatched and the changes they may cause applied.
// Imports needed at the top of the test class:
use Symfony\Bundle\FrameworkBundle\Console\Application;
use Symfony\Component\Console\Input\ArrayInput;
use Symfony\Component\Console\Output\NullOutput;

public function setUp(): void
{
    $this->installPlugin();
}

private function installPlugin(): void
{
    $application = new Application($this->getKernel());

    $installCommand = $application->find('plugin:install');
    $args = [
        '--activate' => true,
        '--reinstall' => false,
        'plugins' => ['YourPluginName'],
    ];

    $installCommand->run(new ArrayInput($args, $installCommand->getDefinition()), new NullOutput());
}
I want to implement connection pooling in PHP in a similar way to how it works in Java.
Why I need this:
Let's consider a flow:
Step 1: connection to DB --- Resource id #12
Step 2: some computation... taking 0.3 seconds
Step 3: query on Solr... taking 2 seconds
Step 4: connection to DB --- Resource id #12 (I am using the same resource id)
Step 5: exit
In step 4 I am using the same DB resource as in step 1. However, the connection sits in a sleep state during steps 2 and 3, and therefore can't be used by any other PHP process (other clients) until the script exits.
Solutions I've considered:
Use mysql_close every time after a query is fired. Drawback: I need to reconnect every time, which is time-consuming.
Create a Java service to handle queries (possible, but too time-consuming, since I would need to migrate all my queries).
Explore a third-party layer like SQL Relay, but I'm not sure it would be a success, and not many established companies seem to have used it.
mysql_pconnect does not solve my case.
Please suggest alternatives.
One way that you can apply scalability techniques to this pool model is to allow on-the-fly changes to your pool distribution. If you have a particular permalink that is extremely popular for some reason, you could move slaves from the primary pool to the comments pool to help it out. By isolating load, you've managed to give yourself more flexibility. You can add slaves to any pool, move them between pools, and in the end dial in the performance that you need at your current traffic level.
There’s one additional benefit that you get from MySQL database pooling, which is a much higher hit rate on your query cache. MySQL (and most database systems) have a query cache built into them. This cache holds the results of recent queries. If the same query is re-executed, the cached results can be returned quickly.
If you have 20 database slaves and execute the same query twice in a row, you only have a 1/20th chance of hitting the same slave and getting a cached result. But by sending certain classes of queries to a smaller set of servers you can drastically increase the chance of a cache hit and get greater performance.
You will need to handle database pooling within your code - a natural extension of the basic load balancing code in Part 1. Let’s look at how we might extend that code to handle arbitrary database pools:
<?php
class DB {
    // Configuration information:
    private static $user = 'testUser';
    private static $pass = 'testPass';
    private static $config = array(
        'write' =>
            array('mysql:dbname=MyDB;host=10.1.2.3'),
        'primary' =>
            array('mysql:dbname=MyDB;host=10.1.2.7',
                  'mysql:dbname=MyDB;host=10.1.2.8',
                  'mysql:dbname=MyDB;host=10.1.2.9'),
        'batch' =>
            array('mysql:dbname=MyDB;host=10.1.2.12'),
        'comments' =>
            array('mysql:dbname=MyDB;host=10.1.2.27',
                  'mysql:dbname=MyDB;host=10.1.2.28'),
    );

    // Static method to return a database connection to a certain pool
    public static function getConnection($pool) {
        // Make a copy of the server array, to modify as we go:
        $servers = self::$config[$pool];
        $connection = false;

        // Keep trying to make a connection:
        while (!$connection && count($servers)) {
            $key = array_rand($servers);
            try {
                $connection = new PDO($servers[$key],
                    self::$user, self::$pass);
            } catch (PDOException $e) {}

            if (!$connection) {
                // Couldn't connect to this server, so remove it:
                unset($servers[$key]);
            }
        }

        // If we never connected to any database, throw an exception:
        if (!$connection) {
            throw new Exception("Failed Pool: {$pool}");
        }

        return $connection;
    }
}

// Do something Comment related
$comments = DB::getConnection('comments');
. . .
?>
I've managed to get stable load-balanced front-end servers that can scale horizontally quite well; however, the next bottleneck would be the DB. There was a blog post discussing scaling DBs horizontally, but with very little detail. I'm currently using PostgreSQL, so the only plugin I've found wouldn't work.
Are my only options creating my own HAProxy setup or rewriting the PostgreSQL plugin to allow connections to read replicas?
I'm using AWS for all my hosting.
Firstly - I'd love to be corrected on this!
Having only had a quick look through some of the ORM classes in a SilverStripe 3.5 site, it looks like, while the ORM does support multiple database connections (see DB::get_conn with an argument for the connection name), it was designed with specific use cases in mind. That is to say, you may have a module that needs to write to a specific database, and this would allow it to.
What you want is native and automatic support for this within the framework, so that all reads go to your slave(s) and writes go to your master. Unfortunately, it doesn't look like this comes out of the box. You might be able to achieve it by overloading a couple of the core SQL classes using the injector.
If you were to try it, this answer outlines how you could separate select statements out from the rest and run them through a different database connector.
As a quick example of how you might go about achieving this with SQLSelect: you will notice that it is injectable, which means you can easily overload it.
File: mysite/_config/injector.yml
Injector:
  SQLSelect:
    class: ReadOnlySQLSelect
You need to register a new database connection with the DB class:
File: mysite/_config.php
$readDatabaseConfig = array(/** define your DB credentials here, as with the default $databaseConfig **/);
if (!DB::connect($readDatabaseConfig, 'default_read')) {
user_error('Failed to connect to read replica DB!', E_USER_ERROR);
}
Now, overload the SQLSelect class and replace the parts of it that call the DB class methods. This class inherits from SQLExpression, which is the class that contains the methods you actually care about in this instance:
File: mysite/code/ReadOnlySQLSelect.php
class ReadOnlySQLSelect extends SQLSelect
{
    public function sql(&$parameters = array())
    {
        // Changed from SQLExpression: third parameter passed as connection name
        $sql = DB::build_sql($this, $parameters, 'default_read');

        if (empty($sql)) {
            return null;
        }

        if ($this->replacementsOld) {
            $sql = str_replace($this->replacementsOld, $this->replacementsNew, $sql);
        }

        return $sql;
    }

    public function execute()
    {
        $sql = $this->sql($parameters);

        // Changed from SQLExpression: skip DB::prepared_query since it doesn't allow
        // you to provide the connection name - replace it with its contents instead.
        $conn = DB::get_conn('default_read');
        return $conn->preparedQuery($sql, $parameters);
    }
}
Note: SQLSelect::unlimitedRowCount should technically be replaced where it calls DB::prepared_query, since the prepared query method calls DB::get_conn with no arguments and so will always return the default connection. You could replace the DB::prepared_query line in the same way as above:
$conn = DB::get_conn('default_read');
$result = $conn->preparedQuery($sql, $innerParameters);
If you implement the above method, also change new SQLSelect() to SQLSelect::create(); otherwise you'll end up with some queries that still hit the master server, because instantiating the class directly bypasses the injector.
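For clarity, the difference is just in how the query object is instantiated (a minimal illustration; ReadOnlySQLSelect is the overloaded class registered in injector.yml above):

// Bypasses the injector, so the stock SQLSelect (and the master connection) is used:
$query = new SQLSelect();

// Resolves through the injector, so ReadOnlySQLSelect is returned instead:
$query = SQLSelect::create();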
There's also an instance in SQLConditionalExpression that you should replace too (::toSelect) but that is likely to affect query transformations from other child implementations of that class, and you won't be able to do much about it without either (A) PRing a fix to the framework or (B) overloading all the other SQL* classes.
At this point you should have everything you need to route select queries to your default_read connection.
Infrastructure
On the infrastructure side, you should be able to set up read replicas through the RDS console. When you do so it will provide you with a DNS endpoint for your replica node(s), which you can use in your _config.php to configure the connection to the read replica database.
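For example, the read-replica connection registration from earlier might end up looking roughly like this (a sketch only; the endpoint and credentials are placeholders, and the keys follow the standard SilverStripe 3 $databaseConfig format):

// File: mysite/_config.php
$readDatabaseConfig = array(
    'type'     => 'MySQLDatabase',
    'server'   => 'myapp-read-replica.xxxxxxxxxxxx.eu-west-1.rds.amazonaws.com', // hypothetical RDS replica endpoint
    'username' => 'db_user',
    'password' => 'db_pass',
    'database' => 'SS_mysite',
);

if (!DB::connect($readDatabaseConfig, 'default_read')) {
    user_error('Failed to connect to read replica DB!', E_USER_ERROR);
}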
If this works for you, you should create a module for it and put it up on GitHub - this would definitely be useful for others in future!
You may also consider making pull requests to the framework to add additional arguments to methods like DB::prepared_query to accept a connection name.
Also worth noting is that if you're using the mysqlnd database adapter you may be able to take advantage of read/write splitting, implemented with some sort of injector overloading but all handled at a lower level than the application layer.
I am using Laravel 5 with PHP 5.5.9-1ubuntu4.14 and MySQL 5.5.47.
I have a multi-tenant environment with each tenant having a separate DB. So for every HTTP request, I set the appropriate DB connection in the BeforeMiddleware, like so:
public function handle($request, Closure $next)
{
...
// Set the client DB name and fire it up as the new default DB connection
Config::set('database.connections.mysql_client.host', $db_host);
Config::set('database.connections.mysql_client.database', $db_database);
Config::set('database.connections.mysql_client.username', $db_username);
Config::set('database.connections.mysql_client.password', $db_password);
DB::setDefaultConnection('mysql_client');
...
}
This works correctly even for concurrent HTTP requests.
But now I want to add scheduled jobs to my application. These jobs will send notifications to users belonging to all tenants, so I again need to connect to the respective tenant databases. But this connection won't be made through an HTTP request. Let's say I have a CommunicationController with a function inside it for sending notifications:
public function sendNotification()
{
...
DB::purge('mysql_client');//IMP
// Set the client DB name and fire it up as the new default DB connection
Config::set('database.connections.mysql_client.host', $db_host);
Config::set('database.connections.mysql_client.database', $db_database);
Config::set('database.connections.mysql_client.username', $db_username);
Config::set('database.connections.mysql_client.password', $db_password);
DB::setDefaultConnection('mysql_client');
...
}
This is where I set the Config values. This function will be executed every minute through Laravel Scheduler.
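For reference, the scheduling side might look something like this (a sketch; the controller and method names mirror the example above, and resolving the controller out of the container this way is just one option):

// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    $schedule->call(function () {
        // Connects to each tenant DB and sends the notifications.
        app(\App\Http\Controllers\CommunicationController::class)->sendNotification();
    })->everyMinute();
}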
My questions are:
What will happen when an HTTP request comes in? Will there be any concurrency issues, as Config parameters are being set in both the HTTP request and the scheduled job?
Will the HTTP request connect to an incorrect DB if it's running simultaneously with the scheduled job?
According to the Laravel docs:
Configuration values that are set at run-time are only set for the
current request, and will not be carried over to subsequent requests.
Configuration values are set at run-time when you specifically call the Config::set method, so the value will only be valid for the current request, and you'll never have any concurrency issues due to that.
Your second scenario is impossible as well. Even if your previous task was running a query on the database, the second/subsequent tasks will wait for any locks on the database to be released. You should make sure you're not performing queries that can lock your DB for a long period, by using multi-inserts, sqldump (if you're performing large queries), etc.
Looking at Illuminate\Config\Repository, which is the implementation behind the Config facade, and specifically at the set method:
public function set($key, $value = null)
{
    if (is_array($key)) {
        foreach ($key as $innerKey => $innerValue) {
            Arr::set($this->items, $innerKey, $innerValue);
        }
    } else {
        Arr::set($this->items, $key, $value);
    }
}
you can be assured that the set is not an I/O write (i.e. to the filesystem) but an in-memory set. So the CLI scheduler command that runs every minute will run in its own context, just like every other HTTP request.
I have a long-running daemon (a Symfony2 Command) that gets work off a work queue in Redis, performs those jobs, and writes to the database using the ORM.
I noticed that the worker has a tendency to die because the connection to MySQL times out while the worker is idling, waiting for work.
Specifically, I see this in the log: MySQL Server has gone away.
Is there any way I can have Doctrine automatically reconnect? Or is there some way I can manually catch the exception and reconnect the Doctrine ORM?
Thanks
I'm using this in my Symfony2 beanstalkd daemon command worker:
$em = $this->getContainer()->get('doctrine')->getManager();

if ($em->getConnection()->ping() === false) {
    $em->getConnection()->close();
    $em->getConnection()->connect();
}
It appears that whenever there is any error/exception encountered by the EntityManager in Doctrine, the connection is closed and the EntityManager is dead.
Since generally everything is wrapped in a transaction, and that transaction is executed when $entityManager->flush() is called, you can try to catch the exception and attempt to re-execute or give up.
You may wish to examine the exact nature of the exception with a more specific catch on the type, whether PDOException or something else.
For a "MySQL server has gone away" exception, you can try to reconnect by resetting the EntityManager:
$managerRegistry = $this->getContainer()->get('doctrine');
$em = $managerRegistry->getEntityManager();
$managerRegistry->resetEntityManager();
This should make the $em usable again. Note that you would have to re-persist everything again, since this $em is new.
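Put together, the catch-and-retry flow might look roughly like this (a sketch only, following the resetEntityManager() approach above; the $entity variable and the single-retry policy are assumptions):

try {
    $em->persist($entity);
    $em->flush();
} catch (\Exception $e) {
    // The connection may have gone away; reset the manager and retry once.
    $managerRegistry = $this->getContainer()->get('doctrine');
    $managerRegistry->resetEntityManager();
    $em = $managerRegistry->getEntityManager();

    // The old $em is closed, so everything must be re-persisted on the new one.
    $em->persist($entity);
    $em->flush();
}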
I had the same problem with a PHP Gearman worker and Doctrine 2.
The cleanest solution that I came up with is: just close and reopen the connection at each job:
<?php

public function doWork($job)
{
    /* @var $em \Doctrine\ORM\EntityManager */
    $em = Zend_Registry::getInstance()->entitymanager;
    $em->getConnection()->close();
    $em->getConnection()->connect();
}
Update
The solution above doesn't cope with transaction state. Specifically, the Doctrine\DBAL\Connection::close() method doesn't reset the $_transactionNestingLevel value, so if you don't commit a transaction, Doctrine will no longer be in sync with the underlying DBMS on the transaction status. This could lead to Doctrine silently ignoring begin/commit/rollback statements and eventually to data not being committed to the DBMS.
In other words: be sure to commit/rollback transactions if you use this method.
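One defensive way to do that (a sketch; whether to commit or roll back depends on the job's outcome) is to check for an open transaction before recycling the connection:

$conn = $em->getConnection();

// Make sure no transaction is left dangling before closing the connection.
if ($conn->isTransactionActive()) {
    $conn->rollBack(); // or $conn->commit(), depending on the job's outcome
}

$conn->close();
$conn->connect();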
This wrapper also worked for me:
https://github.com/doctrine/dbal/issues/1454
In your daemon you can add a method to reset the connection, possibly before every query. I was facing similar problems using a Gearman worker.
I keep my connection data in the Zend registry, so it looks like this:
private function resetDoctrineConnection()
{
    $doctrineManager = Doctrine_Manager::getInstance();
    $doctrineManager->reset();

    $dsn = Zend_Registry::get('dsn');

    $manager = Doctrine_Manager::getInstance();
    $manager->setAttribute(Doctrine_Core::ATTR_AUTO_ACCESSOR_OVERRIDE, true);

    Doctrine_Manager::connection($dsn, 'doctrine');
}
If it is a daemon, you may need to call it statically.