Running into a problem where queued jobs cannot connect to the database:
Invalid catalog name: 1046 No database selected
I need the account to travel with the job, so I added an abstract base class that stores it when the job is created, ensuring the job connects to the correct tenant database when it runs.
<?php
namespace App\Jobs;
use Illuminate\Support\Facades\DB;
abstract class Job
{
protected $account;
public function start()
{
// runs when creating the job, so the config holds the correct value
$this->account = config('database.connections.tenant.database');
}
public function handle()
{
// handle() runs on the queue worker, where the tenant database
// is no longer set in the config, so restore it from the job
config()->set('database.connections.tenant.database', $this->account);
// try to force a reconnect in case a connection was already made
DB::reconnect();
}
}
This is the version I am currently playing around with; variations don't seem to make a difference. I call start() in the constructor and then make sure each job calls the parent handle() so that it bootstraps the proper database configuration.
The end result I am looking for is for the tenant database to be stored as the account when the job is created, and for the job to use that database for all queries when it runs.
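For reference, a concrete job under this approach might look like the sketch below; the ProcessTenantReport class and its $reportId payload are made-up names for illustration, not part of my actual code:

<?php

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;

// Hypothetical example job extending the abstract Job above.
class ProcessTenantReport extends Job implements ShouldQueue
{
    protected $reportId;

    public function __construct($reportId)
    {
        $this->reportId = $reportId;
        // Capture the current tenant database while the config still holds it.
        $this->start();
    }

    public function handle()
    {
        // Restore the tenant connection before running any queries.
        parent::handle();

        // ... queries against the tenant database go here ...
    }
}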
I found a way around this. It's not pretty, but from what I can see Laravel's queues just don't handle this sort of thing well.
First, I removed the override of the handle() function; all I really needed was to make sure the account the job needs to run against is available on the Job class.
abstract class Job
{
protected $account;
public function start()
{
// runs when creating the job, so the config holds the correct value
$this->account = config('database.connections.tenant.database');
}
}
Next I moved the switch to the correct tenant database to the AppServiceProvider in the boot method.
// Requires: use Illuminate\Support\Facades\Event;
//           use Illuminate\Queue\Events\JobProcessing;
Event::listen(JobProcessing::class, function ($event) {
    if ($payload = $event->job->payload()) {
        preg_match('/"account";s:[0-9]+:"(.*?)"/', $payload['data']['command'], $matches);
        if (isset($matches[1])) {
            config()->set('database.connections.tenant.database', $matches[1]);
            config()->set('database.default', 'tenant');
        }
    }
});
What I did here is look into the serialized job object for the account with some regex. There are probably improvements to be made, but so far it works in testing. Once the account is confirmed, the listener sets the correct database.
The reason I had to go this far is that the job runs queries as part of being serialized and unserialized, so in order to pass the account along, it had to be captured before the job is serialized and the connection switched before the payload is unserialized on the worker.
Related
I have a piece of code that takes the user (passed into the job constructor) and notifies them of the job status via a websocket.
It is effectively one line that needs to be added at the start of the handle method (before the job starts), one at the end of the handle method (after the job has completed), and one in the fail method.
Other than adding this to each job manually, what is the best way to do this? Something like a trait or middleware, but I don't think either of those will work.
One way could be extending the job/command class like:
abstract class MyJob extends Job
{
    public function handle()
    {
        try {
            do_stuff_at_start();
            $this->process();
            do_stuff_at_end();
        } catch (Exception $e) {
            do_stuff_when_fails();
        }
    }

    // Each concrete job implements its own logic here.
    abstract public function process();
}
and all your jobs could implement a process() method that is responsible for the actual logic. Just a loose idea - not sure if it fits your needs.
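To illustrate, a concrete job built on that base might look like this sketch (the SendReportEmail name is a placeholder, not anything from the original code):

class SendReportEmail extends MyJob
{
    protected $user;

    public function __construct($user)
    {
        $this->user = $user;
    }

    // Only the job-specific logic lives here; the base class wraps it
    // with the start/finish/failure notifications.
    public function process()
    {
        // ... build and send the report ...
    }
}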
Edit: This question is more about how to create a contract within a function. How do I write methods that each do one simple thing while still enforcing requirements between objects? Do I:
1) Add checks and exceptions in the start() method to create the contract and put the pausing loop in a different call? (Downside here is a minor repeat call to the data source.)
2) Add an event listener for whenever a timer is started to create the contract? (I'm not sure I can return data the way I would like with this method. I'm also not sure I can guarantee that the event will complete successfully before I start the new timer. May not matter that much in this case.)
3) Just return the ids from the start function and process them. (The function will be doing too much, but at least it will work properly with less overhead.)
========================================================================
I have this code in my model. This is a timer application and this code gets hit when starting a timer. Basically, any running timers should get paused and somehow the view needs to understand that it should refresh those timers.
public function start($input = array())
{
if($timers = TimeLog::where('status','running')->get()){
foreach($timers as $timer){
/** @var TimeLog $timer */
$timer->pause();
}
}
$this->user_id = Auth::user()->id;
$this->addDetails($input);
$this->restarted_at = date('Y-n-d H:i:s'); //TODO timezones
$this->status = 'running';
$this->save();
}
I'm uncomfortable returning a list of paused timers from this function. Just doesn't seem to make sense.
I thought about moving the foreach to my controller, but this is really business logic and I wanted to make sure no running timers exist when I start a timer.
I could make another method in this class, which would solve the return issue, but then how do I guarantee that each start call will check for running timers?
This seems like a good fit for using the repository pattern as described here.
I recently had to solve a similar problem since I am just getting started with Laravel and I was putting all my business logic in either Model or Controller classes. I had business logic that didn't seem to make sense for either of those, and after some research I found Repositories.
I would try something like this:
class EloquentTimerRepository implements TimerRepository
{
    /**
     * Part of your TimerRepository interface
     */
    public function startTimersForCurrentUser($inputs)
    {
        $this->pauseRunningTimers();
        $newTimer = $this->createNewTimer($inputs);
        $newTimer->start();
    }

    private function createNewTimer($inputs)
    {
        $timer = new Timer;
        $timer->user_id = Auth::user()->id;
        $timer->addDetails($inputs);
        $timer->save();
        return $timer;
    }

    private function getRunningTimers()
    {
        return TimeLog::where('status', 'running')->get();
    }

    private function pauseRunningTimers()
    {
        foreach ($this->getRunningTimers() as $timer) {
            /** @var TimeLog $timer */
            $timer->pause();
        }
    }
}
and then:
class Timer extends Model
{
public function start()
{
$this->restarted_at = date('Y-n-d H:i:s'); //TODO timezones
$this->status = 'running';
$this->save();
}
}
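A controller would then only need to depend on the repository. The sketch below is just an illustration of the wiring; TimerController and its store() route are assumptions, not part of the original answer, and the TimerRepository interface would be bound to EloquentTimerRepository in a service provider (e.g. $this->app->bind(TimerRepository::class, EloquentTimerRepository::class)):

use Illuminate\Http\Request;

class TimerController extends Controller
{
    protected $timers;

    // Laravel injects whatever implementation is bound to the interface.
    public function __construct(TimerRepository $timers)
    {
        $this->timers = $timers;
    }

    public function store(Request $request)
    {
        // All the "pause anything running, then start" business logic
        // stays inside the repository.
        $this->timers->startTimersForCurrentUser($request->all());

        return redirect()->back();
    }
}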
As for updating the view, you're going to have to either do a page reload or, if you are using ajax, make a subsequent call to pull the latest timers and reset the page elements based on that data. There are probably ways to implement push (from the server), but I'm not familiar with those techniques yet.
I am working on a web application that lets users log in and take an exam scheduled by the admin. I have an "accumulate" function that should run automatically once all users have finished the test and then update the database (according to some logic).
I know I could use the database to keep track of which users are currently taking the test and run the logic accordingly, but I am interested in whether this can be done with a singleton class.
My code:
class ScoreBoard
{
    public $current;

    private static $board = NULL;
    // private static $current=0;

    private function __construct()
    {
        $this->current = 0;
    }

    static function scoreboard()
    {
        if (!self::$board) {
            self::$board = new ScoreBoard();
            self::$board->log("Created new"); // $this is not available in a static method
        }
        return self::$board;
    }

    function add()
    {
        $this->current += 1;
        $this->log("add " . $this->current);
    }

    function subtract()
    {
        $this->current -= 1;
        $this->log("subtract " . $this->current);
        // file_put_contents("scoreboard.txt",self::$current);
        if ($this->current < 0)
            $this->current = 0;
        if ($this->current <= 0) {
            // accumulate() updates the database once everyone has finished (defined elsewhere)
            $this->accumulate();
            self::$board = NULL;
        }
    }

    // minimal logger so the snippet is self-contained
    private function log($message)
    {
        error_log($message);
    }
}
And in the startExam.php file I am calling this function as:
$scoreboard= ScoreBoard::scoreboard();
$scoreboard->add();
And doing
$scoreboard= ScoreBoard::scoreboard();
$scoreboard->subtract();
when the exam ends. Thus, when each user starts the exam the singleton object's add() function should be called, and when they finish it the subtract() function should be called. But this doesn't seem to work, and $current never increases beyond 1.
Kindly let me know whether what I am trying to do is possible, and if there are better ways to achieve it. Thank you.
You could use Memcached to achieve this -
Your constructor function would then look something like this:
$this->memcache = new Memcached();
$this->memcache->addServer("127.0.0.1", 11211); // 11211 is the default memcached port
$this->current = $this->memcache->get("current");
But you could implement the same logic by storing the value in a file - and that would be easier...
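Note that reading the counter into the object and writing it back is race-prone when two requests finish at once; Memcached's atomic increment/decrement avoids that. A rough sketch, assuming a local memcached instance; the ExamCounter class and the key name are arbitrary choices of mine:

class ExamCounter
{
    private $memcache;
    private $key = 'exam:active_users'; // arbitrary key name

    public function __construct()
    {
        $this->memcache = new Memcached();
        $this->memcache->addServer('127.0.0.1', 11211);
        // make sure the key exists so increment/decrement have something to work on
        $this->memcache->add($this->key, 0);
    }

    public function add()
    {
        $this->memcache->increment($this->key);
    }

    public function subtract()
    {
        // memcached will not decrement below zero
        $current = $this->memcache->decrement($this->key);
        if ($current === 0) {
            // last user finished: run the accumulate logic here
        }
    }
}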
I'm writing a log class which has several methods like info, error or warning to insert log entries into the database.
Until now, every one of those methods directly performed a DB insert. That is not good performance-wise when it comes to batch processing, so I now want to solve it by building up a queue and firing a single insert statement at the end of a task.
I'm not sure whether the following makes sense or is good practice. The way I would do it right now is to chain the methods to start and submit a queue, like:
Log::queue()->info('Just somehting')->warning('Strange stuff')->submit();
Or, if I want to insert it directly:
Log::info('Just something');
The class structure would for example look like this:
class Log {

    protected $queue = array();
    protected $isQueued = false;

    public function queue() {
        $this->isQueued = true;
        return $this;
    }

    public function info($message) {
        if ($this->isQueued) {
            // Add to the queue
            $this->queue[] = array('level' => 'info', 'message' => $message);
        } else {
            // Insert directly into the db
        }
        return $this;
    }

    // All the other log types following...

    public function submit() {
        // Generate a single insert statement from the queue and run it,
        // then reset the queue state
        $this->queue = array();
        $this->isQueued = false;
    }
}
I'm using a Laravel facade, hence the static calls.
Is there anything wrong with this design? I'm not sure, because, for example, Log::submit() on its own would make no sense but would still be possible. Does it even matter?
What you should probably do is drop the queue()/submit() methods and instead store incoming logs in an array on the object; you can then use a callback such as App::shutdown(function() {...}) to write the in-memory log entries to the database once the application has finished serving the request.
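A rough sketch of that idea, assuming the class is bound as a singleton so one instance collects entries for the whole request; the 'logs' table name is an assumption, and the App::shutdown() hook follows the suggestion above (newer Laravel versions expose a similar terminating() callback instead):

class BufferedLog {

    protected $entries = array();

    public function __construct()
    {
        // Flush everything once the application is done with the request.
        App::shutdown(function () {
            $this->flush();
        });
    }

    public function info($message)
    {
        $this->entries[] = array(
            'level' => 'info',
            'message' => $message,
            'created_at' => date('Y-m-d H:i:s'),
        );
    }

    protected function flush()
    {
        if (!empty($this->entries)) {
            // One multi-row insert instead of one query per entry
            DB::table('logs')->insert($this->entries);
        }
    }
}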
Also worth mentioning - If you're not restricted to using a SQL database, there are already several existing database Monolog handlers for Redis, Mongo and more. The underlying Monolog instance is available via Log::getMonolog().
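For example, pushing one of those handlers could look like the sketch below; this assumes a Laravel version where Log::getMonolog() is still available and that the predis/predis package is installed, and the 'laravel:logs' key is an arbitrary choice:

use Monolog\Handler\RedisHandler;
use Monolog\Logger;

// Send everything at WARNING level and above to a Redis list.
Log::getMonolog()->pushHandler(
    new RedisHandler(new \Predis\Client(), 'laravel:logs', Logger::WARNING)
);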
I am currently using PHP PDO to access my database. That is all working absolutely fine and dandy. However, I am going to be adding read-replicas to my server setup, so I wish to adjust my code accordingly.
My current plan of action is to store an array of database credential details: one "read and write" set for the master MySQL database and any number of sets for the read replicas.
What I wish to do is add a method called mode() to the PDO class, whereby a mode such as "read" or (default) "write" is passed in. By calling, e.g., $dbh->mode("read");, it can look up the details of a random read replica (not fussed which) and use those details for the connection. Then, once I'm done reading from my replicas, another $dbh->mode("default") puts it back into write mode, so I can use INSERT, UPDATE etc.
Can this be done without simply destroying the PDO object and creating a new one? Can connection details simply be changed after the object already exists?
So far I have the following (it's barely anything, but I figured it's a start).
class SwitchablePDO extends PDO
{
    public function mode($mode = "default")
    {
        // Use the credentials for my master read/write server by default
        if ($mode == "read")
        {
            // Use the credentials for one of my read replicas (randomly chosen)
        }
    }
}
Any help regarding this would be appreciated!
I would rather set up completely different database connection objects than deal with a mode. With a mode, you will inevitably run into a situation where a piece of code does not set the mode itself, relies on whatever the previous piece of code set, and then fails when called in a different context. This is known as sequential coupling.
With multiple objects provided by a factory method or a dependency injection container, you make sure each piece of code specifies which database connection it needs, such as master or slave.
As a bonus, avoid using master/slave as names and instead use names that relate to the type of task to be performed, such as analytics; that lets you change which server is used without hunting through the code for every related call site.
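A minimal sketch of that idea, assuming a small factory that hands out plain PDO instances keyed by purpose (the ConnectionFactory name, the config keys and the DSNs are all made up):

class ConnectionFactory
{
    private $configs;
    private $connections = array();

    public function __construct(array $configs)
    {
        // e.g. array('write' => [...], 'analytics' => [...]) keyed by purpose
        $this->configs = $configs;
    }

    public function connection($name)
    {
        if (!isset($this->connections[$name])) {
            $config = $this->configs[$name];
            // For a pool of replicas, $config could hold a list to pick from at random.
            $this->connections[$name] = new PDO(
                $config['dsn'], $config['username'], $config['password']
            );
        }
        return $this->connections[$name];
    }
}

// Usage: each piece of code states which connection it needs.
// $pdo = $factory->connection('analytics');
// $rows = $pdo->query('SELECT ...')->fetchAll();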
Create separate classes for read and write mode, and assign instances of them to protected properties in your SwitchablePDO class. Calling mode() then just selects which property to use. Here's some pseudo-code:
class WriteablePDO extends PDO
{
    // methods
}

class ReadablePDO extends PDO
{
    // methods
}

class SwitchablePDO
{
    protected $_mode = '_read'; // default: must match a property name below
    protected $_read;
    protected $_write;

    public function __construct()
    {
        // DSNs/credentials omitted here for brevity
        $this->_read = new ReadablePDO();
        $this->_write = new WriteablePDO();
    }

    public function mode($key)
    {
        if ($key === 'read') {
            $this->_mode = '_read';
        } elseif ($key === 'write') {
            $this->_mode = '_write';
        }
    }

    public function __call($method, $arguments)
    {
        // Forward every call to whichever connection is currently selected
        return call_user_func_array(array($this->{$this->_mode}, $method), $arguments);
    }
}
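Usage would then look something like this (the queries are just examples):

$db = new SwitchablePDO();

// Reads go to the read connection (the default)
$stmt = $db->query('SELECT * FROM users');

// Switch to the write connection for INSERT/UPDATE
$db->mode('write');
$db->exec("UPDATE users SET last_seen = NOW() WHERE id = 1");

// And back again for subsequent reads
$db->mode('read');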