I have a Zend Framework application (version 1.11) that uses Doctrine 2. I've got PHPUnit set up to run tests on my models and forms and whatnot. The tests work great, but there's one problem: they leave the test data in the database once they are done. Here's a basic sample of one of my tests:
class VisaTypeEntityTest extends ModelTestCase
{
    public function testCanSaveAndRetrieveVisaType()
    {
        $addVisaType = new \Entities\VisaTypes();
        $addVisaType->setEnglishName('Test Visa Type')
            ->setJapaneseName('試し')
            ->setKanaName('タメシ')
            ->setDescription('Description of the test visa type');

        $this->em->persist($addVisaType);
        $this->em->flush();

        $getVisaType = $this->em->getRepository('\Entities\VisaTypes')
            ->findOneByEnglishName('Test Visa Type');

        $this->assertEquals('Test Visa Type', $getVisaType->getEnglishName());
    }
}
Obviously the data has to actually be entered into the database in order to make sure everything is kosher. But I don't want all the test data gumming up the database every time I run a test, nor do I want to go and manually remove it.
Is there something I can do, such as using the tearDown() method to get rid of the test data once the test is complete? And if so, is it possible to "roll back" the auto increments in the tables' id fields to what they were beforehand? I know it really shouldn't matter if there are gaps between ids, but if there is some way to get Doctrine to reset the auto increment value that would be great.
1. Regarding "cleaning up":
Because you are using InnoDB you can use transactions to restore the DB to the same state as it was before the test started:
So in setUp() you would add $this->em->getConnection()->beginTransaction(); and in tearDown() $this->em->getConnection()->rollback();. This restores the database to its previous state. Also have a look at the MySQL handbook chapter "The InnoDB Transaction Model and Locking" to make sure this does not interfere with any other data in your application (keyword: isolation level).
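A minimal sketch of what that base class could look like, assuming ModelTestCase bootstraps the Doctrine EntityManager into $this->em (the bootstrap itself is omitted):

abstract class ModelTestCase extends PHPUnit_Framework_TestCase
{
    protected $em;

    protected function setUp()
    {
        parent::setUp();
        // $this->em is assumed to be created by your application bootstrap
        $this->em->getConnection()->beginTransaction();
    }

    protected function tearDown()
    {
        // undo everything the test persisted and flushed
        $this->em->getConnection()->rollback();
        // detach managed entities so state doesn't leak between tests
        $this->em->clear();
        parent::tearDown();
    }
}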
2. Regarding rolling back auto increment ids:
As far as I know this is not (easily) possible. There is a thread about it on SO in which it is stated that your application should not care if there are gaps between the auto increment ids. If you are using a test database (which is highly recommended) this should be even a lesser concern.
Related
I have a case in which I need to sync an external existing table with the website table every few minutes.
I previously did this with a simple foreach that looped through every record; as the table grew it became slower and slower, and now it takes a long time for around 20,000 records.
I want to make sure it creates a new record or updates an existing one.
This is what I've got, but it doesn't seem to update the existing rows.
$no_of_data = RemoteUser::count(); // 20.000 (example)
$webUserData = array();

for ($i = 0; $i < $no_of_data; $i++) {
    // I check the external user so I can match it.
    $externalUser = RemoteUser::where('UserID', $i)->first();

    if ($externalUser) {
        $webUserData[$i]['username'] = $externalUser->username;
        $webUserData[$i]['user_id'] = $externalUser->UserID;
    }
}

$chunk_data = array_chunk($webUserData, 1000);

if (isset($chunk_data) && !empty($chunk_data)) {
    foreach ($chunk_data as $chunk_data_val) {
        \DB::table('WebUser')->updateOrInsert($chunk_data_val);
    }
}
Is there something I am missing or is this the wrong approach?
Thanks in advance
I'll try to give a complete all-in-one answer covering some possible event-driven solutions. The ideal scenario would be to move from the current static check of each and every row to an event-driven solution where each entry notifies a change.
I won't list solutions per database here and use MySQL by default.
I see three possible solutions:
1. using triggers as an internal solution, if one and the same database instance is at play
2. Eloquent events, which could be an option if the creation or modification of the Eloquent models happens in one place
3. alternatively, MySQL replication to catch the events, if modifications occur outside of the application (multiple applications modifying the same database)
Using triggers
If the situation is syncing data on the same database instance (different databases) or in the same database process (same database), and the data you copy doesn't need intervention by an external interpreter, you can use SQL, or any extension of SQL your database supports, with triggers or prepared statements.
I assume you're using MySQL; if not, SQL triggers are quite similar across all major databases that support them.
A trigger structure has a simple layout like:
CREATE TRIGGER trigger_name
AFTER UPDATE ON table_name
FOR EACH ROW
    body_to_execute
where AFTER UPDATE is the event to catch in this example.
For an update we want to know the data after it has been changed, so we use the AFTER UPDATE trigger.
An AFTER UPDATE trigger for your tables, calling the original remote_user and the copy web_user, with user_id and username as fields, would look something like:
CREATE TRIGGER user_updated
AFTER UPDATE ON remote_user
FOR EACH ROW
    UPDATE web_user
    SET username = NEW.username
    WHERE user_id = NEW.user_id;
The variables NEW and OLD are available in triggers: NEW holds the row data after the update, OLD before it.
For a new user that has been inserted we have the same procedure; we just need to create the entry in web_user.
CREATE TRIGGER user_created
AFTER INSERT ON remote_user
FOR EACH ROW
    INSERT INTO web_user (user_id, username)
    VALUES (NEW.user_id, NEW.username);
Hope this gives you a clear idea on how to use triggers with SQL. There is a lot of information to be found, guides, tutorials, you name it. SQL might be an old boring language created by old people with long beards, but to know its features gives you a great advantage to solve complicated problems with simple methods.
Using Eloquent events
Laravel has a bunch of Eloquent events that are fired when models do stuff. If the creation or modification of a model (an entry in the database) only happens in one place (e.g. one entry point or application), Eloquent events could be an option.
This means you have to guarantee that the modification and/or creation takes place through the Eloquent model:
Model::create([...]);
Model::find(1)->update([...]);
$model->save();
// etc.
And not indirectly using DB or similar:
// won't trigger any event
DB::table('remote_users')->where(...)->update([...]);
Also avoid using saveQuietly() or any method on the model that's been built deliberately to suppress events.
The simplest solution would be to register the events directly in the model itself using the protected static booted() method.
namespace App\Models;

use bunch\of\classes;

class SomeModel extends Model
{
    protected static function booted()
    {
        static::updated(function ($model) {
            // access any database or service
        });

        static::created(function ($model) {
            // access any database or service
        });
    }
}
To push the callback onto a queue, Laravel 8 and up offer the queueable() helper:
use function Illuminate\Events\queueable;

static::updated(queueable(function ($model) {
    // access any database or service
}));
On Laravel 7 or lower, it would be wise to create an observer and push everything onto the queue using jobs.
example based on your comment
If a model exists for both databases, the Eloquent events can be used as follows, where InternalModel is the main model that fires the events (the source) and ExternalModel is the model whose database is to be kept in sync (the sync table or replica).
namespace App\Models;

use App\Models\ExternalModel;

class InternalModel extends Model
{
    protected static function booted()
    {
        static::updated(function ($InternalModel) {
            ExternalModel::find($InternalModel->id)->update([
                'whatever-needs' => 'to-be-updated',
            ]);
        });

        static::created(function ($InternalModel) {
            ExternalModel::create([
                'whatever-is' => 'required-to-create',
                'the-external' => 'model',
            ]);
        });

        static::deleted(function ($InternalModel) {
            // note: at this point only the $InternalModel object is left;
            // the entry in the database no longer exists.
            ExternalModel::destroy($InternalModel->id);
        });
    }
}
And remember to use queueable() to push the work onto the queue if it might take longer than expected.
If for some reason the InternalModel table does get updated without going through Eloquent, you can trigger each Eloquent event manually via the event() dispatch helper to keep the sync process functional, e.g.:
$model = InternalModel::find($updated_id);

// trigger the update manually
event('eloquent.updated: ' . $model::class, $model);
All Eloquent events related to the models can be triggered this way: retrieved, creating, created, updating, updated, saving, saved, deleting, and so on.
I would also suggest creating an additional console command to run the sync once as a whole, before stepping over to the Eloquent model events. Such a command is like the foreach you already have: it checks once that all data is synced, something like php artisan users:sync. This helps when events occasionally don't fire because of exceptions; this is rare, but it does happen once in a while. A sketch of such a command follows.
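This sketch assumes Laravel 8+ (the query builder's upsert() requires it) and reuses the table and column names from your question; the class name and chunk size are illustrative:

namespace App\Console\Commands;

use App\Models\RemoteUser;
use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;

class SyncUsers extends Command
{
    protected $signature = 'users:sync';
    protected $description = 'One-off reconciliation of remote users into the WebUser table';

    public function handle()
    {
        // read the remote table in chunks of 1000 instead of one query per user
        RemoteUser::query()->orderBy('UserID')->chunk(1000, function ($users) {
            $rows = $users->map(function ($user) {
                return [
                    'user_id'  => $user->UserID,
                    'username' => $user->username,
                ];
            })->all();

            // insert new rows and update existing ones in a single statement;
            // user_id is assumed to be a unique key on WebUser
            DB::table('WebUser')->upsert($rows, ['user_id'], ['username']);
        });
    }
}

This also sidesteps the updateOrInsert() call from the question: updateOrInsert() expects a single row (attributes plus values), whereas upsert() accepts a whole batch.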
MySQL Replication
If triggers aren't a solution and you can't guarantee that the data is modified from one single source, replication would be my final solution.
Someone created a package for Laravel called huangdijia/laravel-trigger, which uses krowinski/php-mysql-replication (or the more up-to-date fork moln/php-mysql-replication).
A few things need to be configured though:
First, MySQL should be configured to save all events in a binlog file to be read:
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 1
max_binlog_size = 100M
binlog_row_image = full
binlog-format = row
Second, the database user for the connection should be granted replication privileges:

GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'user'@'host';
GRANT SELECT ON `dbName`.* TO 'user'@'host';
The general idea is to read out the log file MySQL generates about the events that occur. Writing this answer took me a while longer because I couldn't get this package up and running within a few minutes. Though I have used it in the past and know it worked flawlessly, back then I wrote a smaller package on top of it to minimize the traffic and filter out events I didn't use.
I've already opened an issue and I'm going to open several over time to get this thing up and running again.
But to give an idea of its usefulness, I'm going to explain how it works anyway.
To configure an event, listeners are put in a routes file called routes/trigger.php, where you have access to the $trigger instance (manager) to bind your listeners.
Put into the context of your tables, a listener would look like:
$trigger->on('database_name.remote_users', 'update', function ($event) {
    // $event contains an EventInfo object with the changed entry data.
});

The same goes for create (write) events on the table:

$trigger->on('database_name.remote_users', 'write', function ($event) {
    // $event contains an EventInfo object with the changed entry data.
});
To start listening for database events, use:
php artisan trigger:start
To get a list of all listeners recognized from routes/trigger.php, use:
php artisan trigger:list
To see which bin file has been recognized, and its current position, use:
php artisan trigger:status
In an ideal situation you would use supervisor to run the listener (artisan trigger:start) in the background. If the service needs to boot again due to updates made in your application, you can simply use php artisan trigger:terminate to reboot the service; supervisor will notice and start it again with a freshly booted application.
update on package status
They seem to respond very well and some things have already been fixed, so I can say with some confidence that this package will be up and running again in a few weeks.
Normally I don't put anything in my answers that I haven't used or tested myself, but since I know this worked before, I'm giving it the benefit of the doubt that it will work again in the next several weeks. It's at least something to watch, or even to test, to grasp ideas on how to implement this in a real-case scenario.
Hope you enjoyed reading.
So far it has been "easy" to test things, but this time I need to test an algorithm that works on data from a database. The database has to be populated for that; is there a good, working way to do it?
What you are describing is really an integration test.
You need to ensure that your test can set up the required data and clean up after itself to keep the tests repeatable. I normally create the database/tables as part of the test set-up, then drop them when I'm done; that's easier than trying to get a table back into a particular state.
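A sketch of that set-up/tear-down shape with plain PDO; the table, DDL, and connection details are only illustrative:

class AlgorithmTest extends PHPUnit_Framework_TestCase
{
    protected $pdo;

    protected function setUp()
    {
        $this->pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
        // build the schema fresh so every run starts from a known state
        $this->pdo->exec(
            'CREATE TABLE visa_types (
                id INT AUTO_INCREMENT PRIMARY KEY,
                english_name VARCHAR(255) NOT NULL
            ) ENGINE=InnoDB'
        );
        // insert fixture rows here
    }

    protected function tearDown()
    {
        $this->pdo->exec('DROP TABLE IF EXISTS visa_types');
        $this->pdo = null;
    }
}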
I'm testing that a function properly adds data to a db, but I want the test data removed after the test is finished. If the test fails, it quits at the failure and never gets the chance to delete the test rows.
It's the only test that hits the db, so I don't really want to do anything in the tearDown() method.
I'm testing an $obj->save() type method that saves data parsed from a flat file.
If your database supports transactions, you could issue a start_transaction at the beginning of the test. If the test fails (causing the program to quit), an implicit rollback will be executed and undo your changes. If the test succeeds, issue an explicit rollback.
Another option is to wrap the assertions in a try-catch statement - this prevents the test from halting (as well as other automatic features like capturing screenshots), and you can do whatever you need from that point.
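A sketch of that pattern; makeImporter() and deleteTestRows() are hypothetical helpers for the object under test and the cleanup:

public function testSaveWritesParsedRows()
{
    $obj = $this->makeImporter(); // hypothetical factory for the object under test

    try {
        $this->assertTrue($obj->save());
        // ...more assertions against the saved rows...
    } catch (Exception $e) {
        $this->deleteTestRows(); // clean up even though the test failed
        throw $e;                // rethrow so PHPUnit still reports the failure
    }

    $this->deleteTestRows();     // normal cleanup on success
}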
You should use separate databases for development/production and testing, identical in structure. Every time you run the tests, you drop the testing DB and restore it from data fixtures. The point is that this way you can be absolutely sure your DB contains the same set of data every time you run your tests, so deleting test data is no big deal.
Are you using the suggested approach for database testing via the Database Testcase Extension?
Basically, if the test merely fails (read: if there is no error that makes PHPUnit exit), there should be no issues, because the database is seeded on startup of the testcase:
the default implementation in PHPUnit will automatically truncate all tables specified and then insert the data from your data set in the order specified by the data set.
so there should be no need to do that manually. Even if there is an error, PHPUnit will clear the table on next run.
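For reference, a minimal testcase using that extension looks something like this (the connection details and fixture path are illustrative):

class UserDbTest extends PHPUnit_Extensions_Database_TestCase
{
    // returns the connection to the dedicated test database
    protected function getConnection()
    {
        $pdo = new PDO('mysql:host=localhost;dbname=testdb', 'user', 'pass');
        return $this->createDefaultDBConnection($pdo, 'testdb');
    }

    // every table in this data set is truncated and re-seeded before each test
    protected function getDataSet()
    {
        return $this->createFlatXMLDataSet(dirname(__FILE__) . '/fixtures/seed.xml');
    }
}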
I am using Zend_Db to insert some data inside a transaction. My function starts a transaction and then calls another method that also attempts to start a transaction and, of course, fails (I am using MySQL 5). So the question is: how do I detect that a transaction has already been started?
Here is a sample bit of code:
try {
    Zend_Registry::get('database')->beginTransaction();

    $totals = self::calculateTotals($Cart);

    $PaymentInstrument = new PaymentInstrument;
    $PaymentInstrument->create();
    $PaymentInstrument->validate();
    $PaymentInstrument->save();

    Zend_Registry::get('database')->commit();
    return true;
} catch (Zend_Exception $e) {
    Bootstrap::$Log->err($e->getMessage());
    Zend_Registry::get('database')->rollBack();
    return false;
}
Inside PaymentInstrument::create there is another beginTransaction statement, which produces the exception saying that a transaction has already been started.
The framework has no way of knowing if you started a transaction. You can even use $db->query('START TRANSACTION') which the framework would not know about because it doesn't parse SQL statements you execute.
The point is that it's an application responsibility to track whether you've started a transaction or not. It's not something the framework can do.
I know some frameworks try to do it, and do cockamamie things like count how many times you've begun a transaction, only resolving it when you've done commit or rollback a matching number of times. But this is totally bogus because none of your functions can know if commit or rollback will actually do it, or if they're in another layer of nesting.
(Can you tell I've had this discussion a few times? :-)
Update 1: Propel is a PHP database access library that supports the concept of the "inner transaction" that doesn't commit when you tell it to. Beginning a transaction only increments a counter, and commit/rollback decrements the counter. Below is an excerpt from a mailing list thread where I describe a few scenarios where it fails.
Update 2: Doctrine DBAL also has this feature. They call it Transaction Nesting.
Like it or not, transactions are "global" and they do not obey object-oriented encapsulation.
Problem scenario #1
I call commit(), are my changes committed? If I'm running inside an "inner transaction" they are not. The code that manages the outer transaction could choose to roll back, and my changes would be discarded without my knowledge or control.
For example:
Model A: begin transaction
Model A: execute some changes
Model B: begin transaction (silent no-op)
Model B: execute some changes
Model B: commit (silent no-op)
Model A: rollback (discards both model A changes and model B changes)
Model B: WTF!? What happened to my changes?
Problem scenario #2
An inner transaction rolls back, it could discard legitimate changes made by an outer transaction. When control is returned to the outer code, it believes its transaction is still active and available to be committed. With your patch, they could call commit(), and since the transDepth is now 0, it would silently set $transDepth to -1 and return true, after not committing anything.
Problem scenario #3
If I call commit() or rollback() when there is no transaction active, it sets the $transDepth to -1. The next beginTransaction() increments the level to 0, which means the transaction can neither be rolled back nor committed. Subsequent calls to commit() will just decrement the transaction to -1 or further, and you'll never be able to commit until you do another superfluous beginTransaction() to increment the level again.
Basically, trying to manage transactions in application logic without allowing the database to do the bookkeeping is a doomed idea. If you have a requirement for two models to use explicit transaction control in one application request, then you must open two DB connections, one for each model. Then each model can have its own active transaction, which can be committed or rolled back independently from one another.
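A sketch of that two-connection setup with Zend_Db; the factory configs are omitted:

// each model gets its own connection, hence its own independent transaction
$dbA = Zend_Db::factory('Pdo_Mysql', $configA);
$dbB = Zend_Db::factory('Pdo_Mysql', $configB);

$dbA->beginTransaction();
$dbB->beginTransaction();

// ... model A writes through $dbA, model B through $dbB ...

$dbA->rollBack(); // discards only model A's changes
$dbB->commit();   // model B's changes persist independently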
Do a try/catch: if the exception is that a transaction has already started (based on error code or the message of the string, whatever), carry on. Otherwise, throw the exception again.
Store the return value of beginTransaction() in Zend_Registry, and check it later.
Looking at the Zend_Db as well as the adapters (both mysqli and PDO versions) I'm not really seeing any nice way to check transaction state. There appears to be a ZF issue regarding this - fortunately with a patch slated to come out soon.
For the time being, if you'd rather not run unofficial ZF code, the mysqli documentation says you can SELECT @@autocommit to find out if you're currently in a transaction (err... not in autocommit mode).
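A sketch of that check through Zend_Db; note that it really detects "autocommit is off", which is how the MySQL adapters implement beginTransaction():

$db = Zend_Registry::get('database');
// 1 means autocommit is on (no explicit transaction), 0 means it is off
$inTransaction = ! (bool) $db->fetchOne('SELECT @@autocommit');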
For InnoDB you should be able to use:
SELECT * FROM INFORMATION_SCHEMA.INNODB_TRX WHERE TRX_MYSQL_THREAD_ID = CONNECTION_ID();
This discussion is fairly old. As some have pointed out, you can do it in your application. PDO has had a method since PHP 5.3.3 to tell whether you are in the middle of a transaction: PDO::inTransaction() returns true or false. See http://php.net/manual/en/pdo.intransaction.php
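A sketch of how that could look with a PDO-based Zend_Db adapter, whose getConnection() returns the underlying PDO instance:

$db  = Zend_Registry::get('database');
$pdo = $db->getConnection(); // the raw PDO object behind the adapter

if (!$pdo->inTransaction()) {
    $db->beginTransaction();
}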
You can also write your code as follows:

try {
    Zend_Registry::get('database')->beginTransaction();
} catch (Exception $e) {
    // a transaction is already active; carry on and let the caller own it
}

try {
    $totals = self::calculateTotals($Cart);

    $PaymentInstrument = new PaymentInstrument;
    $PaymentInstrument->create();
    $PaymentInstrument->validate();
    $PaymentInstrument->save();

    Zend_Registry::get('database')->commit();
    return true;
} catch (Zend_Exception $e) {
    Bootstrap::$Log->err($e->getMessage());
    Zend_Registry::get('database')->rollBack();
    return false;
}
In web-facing PHP, scripts are almost always invoked during a single web request. What you would really like to do in that case is start a transaction and commit it right before the script ends. If anything goes wrong, throw an exception and roll back the entire thing. Like this:
wrapper.php:
try {
    // $db is your shared connection; RollbackException is your own exception class
    $db->beginTransaction();
    include("your_script.php");
    $db->commit();
} catch (RollbackException $e) {
    $db->rollBack();
}
The situation gets a little more complex with sharding, where you may be opening several connections. You have to add them to a list of connections where the transactions should be committed or rolled back at the end of the script. However, realize that in the case of sharding, unless you have a global mutex on transactions, you will not be easily able to achieve true isolation or atomicity of concurrent transactions because another script might be committing their transactions to the shards while you're committing yours. However, you might want to check out MySQL's distributed transactions.
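A sketch of the shard-aware variant; the $shards list and RollbackException are illustrative:

// every shard connection is appended here as it is opened,
// with a transaction begun on it at that moment
$shards = array();

try {
    include("your_script.php");
    foreach ($shards as $db) {
        $db->commit();
    }
} catch (RollbackException $e) {
    foreach ($shards as $db) {
        $db->rollBack();
    }
}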
Use the Zend profiler: look for begin as the query text with Zend_Db_Profiler::TRANSACTION as the query type, without a commit or rollback query text afterwards (assuming there is no ->query("START TRANSACTION") and the Zend profiler is enabled in your application).
I disagree with Bill Karwin's assessment that keeping track of transactions started is cockamamie, although I do like that word.
I have a situation where I have event handler functions that might get called by a module not written by me. My event handlers create a lot of records in the db. I definitely need to roll back if something wasn't passed correctly or is missing or something goes, well, cockamamie. I cannot know whether the outside module's code triggering the event handler is handling db transactions, because the code is written by other people. I have not found a way to query the database to see if a transaction is in progress.
So I DO keep count. I'm using CodeIgniter, which seems to do strange things if I ask it to start nested DB transactions (e.g. by calling its trans_start() method more than once). In other words, I can't just include trans_start() in my event handler, because if an outside function is also using trans_start(), rollbacks and commits don't occur correctly. There is always the possibility that I haven't yet figured out how to manage those functions correctly, but I've run many tests.
All my event handlers need to know is, has a db transaction already been initiated by another module calling in? If so, it does not start another new transaction and does not honor any rollbacks or commits either. It does trust that if some outside function has initiated a db transaction then it will also be handling rollbacks/commits.
I have wrapper functions for CodeIgniter's transaction methods and these functions increment/decrement a counter.
function transBegin()
{
    // increment our number of levels
    $this->_transBegin += 1;

    // if we are only one level deep, we can create the transaction
    if ($this->_transBegin == 1) {
        $this->db->trans_begin();
    }
}

function transCommit()
{
    if ($this->_transBegin == 1) {
        // if we are only one level deep, we can commit the transaction
        $this->db->trans_commit();
    }

    // decrement our number of levels
    $this->_transBegin -= 1;
}

function transRollback()
{
    if ($this->_transBegin == 1) {
        // if we are only one level deep, we can roll back the transaction
        $this->db->trans_rollback();
    }

    // decrement our number of levels
    $this->_transBegin -= 1;
}
In my situation, this is the only way to check for an existing db transaction. And it works. I wouldn't say that "The Application is managing db transactions". That's really untrue in this situation. It is simply checking whether some other part of the application has started any db transactions, so that it can avoid creating nested db transactions.
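To make the counting concrete, a call sequence like this shows which calls actually reach the database:

$this->transBegin();   // counter 0 -> 1: calls trans_begin()
$this->transBegin();   // counter 1 -> 2: no-op
$this->transCommit();  // counter 2 -> 1: no-op
$this->transCommit();  // counter 1 -> 0: calls trans_commit()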
Maybe you can try PDO::inTransaction(); it returns TRUE if a transaction is currently active, and FALSE if not.
I have not tested it myself, but it seems promising!