I use MariaDB for a Symfony project and have set up a computed column with:
ALTER TABLE history_event ADD quote_status_change SMALLINT AS (JSON_VALUE(payload, '$.change_set.status[1]'));
When I update the schema with bin/console doctrine:schema:update, the computed column is dropped, probably because it doesn't appear anywhere in the HistoryEvent entity class.
How can I prevent Doctrine from dropping computed columns when I run migrations?
I solved this in Doctrine 2.10 using the onSchemaColumnDefinition event. My code looked something like this:
public function onSchemaColumnDefinition(SchemaColumnDefinitionEventArgs $eventArgs)
{
    if ($eventArgs->getTable() === 'my_table') {
        if (!in_array($eventArgs->getTableColumn()['field'], ['id', 'column_1', 'column_2'])) {
            $eventArgs->preventDefault();
        }
    }
}
In my case I'm using Symfony 4.2, so I set the event listener class up as per the documentation.
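For reference, in a Symfony project such a listener can be registered as a Doctrine event listener service; the tag is the standard DoctrineBundle one, but the class name here is illustrative:

```yaml
# config/services.yaml (class name is illustrative)
services:
    App\EventListener\SchemaColumnDefinitionListener:
        tags:
            - { name: doctrine.event_listener, event: onSchemaColumnDefinition }
```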
I'm afraid you might be out of luck: there is an open feature request, but it's not implemented yet: https://github.com/doctrine/doctrine2/issues/6434
Versions:
PHP 8.1
Worked in Symfony 5.3; seeing the behavior below in Symfony 5.4.
"doctrine/doctrine-fixtures-bundle": "^3.4"
"doctrine/doctrine-bundle": "^2.2"
"doctrine/orm": "^2.8"
General Problem:
Multiple fixture classes cause errors with references from other fixture classes
Expectations from old 5.3 Project:
On the old project I am able to make tons of separate Fixtures classes and run them all with DependentFixturesInterface and use the references already created (persisted) to then create the relations needed for the other fixtures.
Example:
I create Users first and then Teams; for each Team there is a ManyToOne $createdUser column that relates to the User that created it. I can make a UserFixtures class and save the references (as seen in SymfonyCasts etc.), then run the TeamFixtures class to use the references from UserFixtures in TeamFixtures (all standard understanding).
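For reference, the standard pattern described above looks roughly like this (class and reference names are illustrative):

```php
<?php

use Doctrine\Bundle\FixturesBundle\Fixture;
use Doctrine\Common\DataFixtures\DependentFixturesInterface;
use Doctrine\Persistence\ObjectManager;
use App\Entity\Team;

// TeamFixtures depends on UserFixtures having run first,
// so the User references already exist when load() is called.
class TeamFixtures extends Fixture implements DependentFixturesInterface
{
    public function load(ObjectManager $manager): void
    {
        $team = new Team();
        // Re-use a User persisted (and referenced) by UserFixtures
        $team->setCreatedUser($this->getReference('user_0'));
        $manager->persist($team);
        $manager->flush();
    }

    public function getDependencies(): array
    {
        return [UserFixtures::class];
    }
}
```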
Behavior in new 5.4 project:
In the new project I am in no way able to create multiple fixture classes. In the exact same example above, when I try to create the same relationship I get the following error:
A new entity was found through the relationship 'App\Entity\Team#createdUser' that was not configured to cascade persist operations for entity: App\Entity\User#whateveruser. To solve this issue: Either explicitly call EntityManager#persist() on this unknown entity or configure cascade persist this association in the mapping for example #ManyToOne(..,cascade={"persist"}). If you cannot find out which entity causes the problem implement 'App\Entity\User#__toString()' to get a clue.
So I follow the exception's advice and add cascade={"persist"} to the entity relation, run the fixtures again, and get the following error:
Duplicate entry *** for key ****
To me this means it is somehow not correctly persisting the Users in the first place, or I am completely off as to how Doctrine works.
This is the loader in my main fixture class
public function loadData(): void
{
    $this->generateMockIntervals();
    // Users, Contacts
    $this->setGroup(self::TEST, 3)->createMany('generateFakeUser', [$this->testPasswordHash, self::TEST_EMAIL_ITERATED]);
    $this->setGroup(self::DUMMY, 500)->createMany('generateFakeUser', [$this->dummyPasswordHashed]);
    $this->setGroup(self::teamGroupName(), 100)->createMany('generateTeams');
    $configArray = array(
        MemberInterface::MEMBERS => array(
            self::GEN => [$this, 'generateUserMembers'],
            self::RANGE => 20,
            self::R => self::REF_TYPES[0],
        ),
    );
    if (!empty($configArray)) {
        $this->groupForAll($configArray);
    } else {
        $this->pr("Class %s not configured yet", static::class);
    }
}
The createMany function loops through each item, creates the new User references, and saves them in the reference repo (straight from SymfonyCasts).
groupForAll does the same thing, but loops through the references that are configured for each new reference key. I heavily enhanced the SymfonyCasts createMany function. If you have not seen it, this is the function that EVERY entity is passed to:
protected function manageReference($entity, $groupName)
{
    if (null === $entity) {
        throw new \LogicException('Did you forget to return the entity object from your callback to BaseFixture::createMany()?');
    }
    $this->manager->persist($entity);
    // store for usage later as App\Entity\ClassName_#COUNT#
    $storage = sprintf('%s_%d', $groupName, $this->i);
    $this->addReference($storage, $entity);
}
ONLY DIFFERENCE between 5.3 project and 5.4 project:
The only major difference I can see that could be causing this problem is that in my old (5.3) project I had ALL bidirectional variables built into the entities. In 5.4 I removed ALL/MOST of the bidirectional relationships from my entities, in theory to generate fewer queries.
Basically, if I were to take out this part here:
$this->setGroup(self::teamGroupName(), 100)->createMany('generateTeams');
$configArray = array(
    MemberInterface::MEMBERS => array(
        self::GEN => [$this, 'generateUserMembers'],
        self::RANGE => 20,
        self::R => self::REF_TYPES[0],
    ),
);
if (!empty($configArray)) {
    $this->groupForAll($configArray);
} else {
    $this->pr("Class %s not configured yet", static::class);
}
and put it into a new fixture, I start getting the "Duplicate entry" and "must persist" errors. But as you can see, I am persisting every single entity, and I am flushing every time 25 entities are persisted.
Imagine I have some Doctrine entity, and I can have some records of this entity in the database which I don't want to be deleted, but which I do want to be visible.
In general, I can have entities for which I have default records that must stay there: they must not be deleted, but they must be visible.
Or, for example, I want to have a special User account only for CRON operations. I want this account to be visible in the list of users, but it must not be deletable, obviously.
I searched, and the best I found was SoftDeleteable: https://github.com/Atlantic18/DoctrineExtensions/blob/v2.4.x/doc/softdeleteable.md It prevents physical/real deletion from the DB, but it also makes the record invisible on the front of the app. A good approach would be to add a column to the entity's respective table (a 1/0 flag) which marks what cannot be deleted. I would also like it this way because it can be used as a trait in multiple entities. I think this would be a good candidate for another extension in the Atlantic18/DoctrineExtensions package above. If you think this is a good idea (a Doctrine filter), what are the best steps to implement it?
The question is, is this the only way? Do you have a better solution? What is common way to solve this?
EDIT:
1. So, we know that we need an additional column in the database; it is easy to make a trait for it to keep it reusable.
But
2. To avoid any additional code in each repository, how do we accomplish the logic of "if the column is true, prevent delete" with the help of an annotation? Like in the SoftDeleteable example above.
Thank you in advance.
You could do this down at the database level. Just create a table called, for example, protected_users with a foreign key to users, and set the key to ON DELETE RESTRICT. Create a record in this table for every user you don't want to delete. That way any attempt to delete the record will fail both in Doctrine and at the DB level (on any manual intervention in the DB). No edit to the User entity itself is needed, and it's protected even without Doctrine. Of course, you can make an entity for that protected_users table.
You can also create a method on the User entity, like isProtected(), which just checks whether a related ProtectedUser entity exists.
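A minimal sketch of that idea as a Doctrine entity (entity and column names are illustrative; the onDelete option is what produces the ON DELETE RESTRICT constraint in the generated schema):

```php
<?php

use Doctrine\ORM\Mapping as ORM;

/**
 * One row here marks one protected user; deleting the referenced
 * User will then fail at the database level.
 *
 * @ORM\Entity
 * @ORM\Table(name="protected_users")
 */
class ProtectedUser
{
    /**
     * @ORM\Id
     * @ORM\OneToOne(targetEntity="User")
     * @ORM\JoinColumn(name="user_id", nullable=false, onDelete="RESTRICT")
     */
    private $user;
}
```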
You should have a look at the doctrine events with Symfony:
Step1: I create a ProtectedInterface interface with one method:
public function isDeletable(): bool;
Step2: I create a ProtectionTrait trait which creates a new property. This isDeletable property is annotated with @ORM\Column. The trait implements isDeletable(); it is only a getter.
If my entity can contain undeletable data, I update the class: it will now implement my ProtectedInterface and use my ProtectionTrait.
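A sketch of that interface and trait (names follow the steps above; the column options are an assumption):

```php
<?php

use Doctrine\ORM\Mapping as ORM;

interface ProtectedInterface
{
    public function isDeletable(): bool;
}

trait ProtectionTrait
{
    /**
     * Flag persisted with the entity; defaults to deletable.
     *
     * @ORM\Column(type="boolean", options={"default": true})
     */
    private $isDeletable = true;

    public function isDeletable(): bool
    {
        return $this->isDeletable;
    }
}
```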
Step3: I create an exception which will be thrown each time someone tries to delete an undeletable entity.
Step4: Here is the tip: I create a listener like the SoftDeleteable one. In this listener, I add a condition: when my entity implements ProtectedInterface, I call the getter isDeletable():
final class ProtectedDeletableSubscriber implements EventSubscriber
{
    public function onFlush(OnFlushEventArgs $onFlushEventArgs): void
    {
        $entityManager = $onFlushEventArgs->getEntityManager();
        $unitOfWork = $entityManager->getUnitOfWork();
        foreach ($unitOfWork->getScheduledEntityDeletions() as $entity) {
            if ($entity instanceof ProtectedInterface && !$entity->isDeletable()) {
                throw new EntityNotDeletableException();
            }
        }
    }
}
I think this code could be optimized, because it is called each time an entity is deleted. In my application, users don't delete a lot of data. If you use the SoftDeleteable component, you should replace it with a mix of this listener and the original one to avoid a lot of redundant tests. For example, you could do this:
final class ProtectedSoftDeletableSubscriber implements EventSubscriber
{
    public function onFlush(OnFlushEventArgs $onFlushEventArgs): void
    {
        $entityManager = $onFlushEventArgs->getEntityManager();
        $unitOfWork = $entityManager->getUnitOfWork();
        foreach ($unitOfWork->getScheduledEntityDeletions() as $entity) {
            if ($entity instanceof ProtectedInterface && !$entity->isDeletable()) {
                throw new EntityNotDeletableException();
            }
            if (!$entity instanceof SoftDeletableInterface) {
                continue;
            }
            // paste the code of the SoftDeleteable subscriber here
        }
    }
}
Well, the best way to achieve this is to have one more column in the database, for example a boolean canBeDeleted, set to false for records that must not be deleted. Then, in the delete method of your repository, you can check whether the record passed for deletion may actually be deleted, and throw an exception or handle the situation some other way. You can add this field to a trait and add it to any entity with just one line.
Soft delete is when you want to mark a record as deleted but you want it to stay in the database.
I have two separated Symfony projects working with one database.
The first project is on Symfony 3.2 and the second is on Symfony 2.8.
Database is MySQL.
All is in production stage and all is working fine.
Now I have some Entity classes in the first project and don't have them in the second one. We haven't needed the entities in the second project before but now I need to work with them in the second project.
I copied the entities from the first project to the second. We use annotations.
After this I checked my database and executed the command on the second project:
app/console doctrine:schema:update --force
And got the error: Base table or view already exists: 1050 Table 'crm_user' already exists.
If I execute the command with the --dump-sql option (app/console doctrine:schema:update --dump-sql), I see the creation of a table that already exists:
CREATE TABLE crm_user (id INT AUTO_INCREMENT NOT NULL, ...
So doctrine:schema:update doesn't see that the DB table has already been created. How can I fix this?
I tried clearing all caches (cache:clear), the Doctrine metadata cache (doctrine:cache:clear-metadata), and the query cache (doctrine:cache:clear-query), with no success; I got the same error afterwards.
If I validate the Doctrine schema, the new table is not there.
And of course I cannot drop tables data because all is in production stage.
May be someone faced problems like this. I appreciate any suggestions.
I highly recommend not messing with two different projects and a single DB like this. If you must, then simply let one project be the Doctrine "master" where you make the modifications, and run schema:update only there.
Better than that, and far more elegant, would be to create a vendor package that you can import with Composer: your Doctrine entities vendor.
This will then manage all the DB / repositories, and you can reuse it across many projects while keeping the code in one place.
That will solve this, and fix what, in my view, you are not doing right:
having duplicate entities pointing to the same DB structure, which will always be a pain to maintain; it does not sit well with the KISS principle and invites code duplication.
If you can't include the ORM definitions via a shared Composer project like Jason suggested, you could create a regex to filter which tables each app should be concerned with.
Example with Zend Expressive:
/**
 * @var \Doctrine\ORM\EntityManager $em
 */
$em = $container->get('doctrine.entity_manager.orm_default'); // substitute with how you get your entity manager
$filter = 'crm_user|other_table|another_table';
$em->getConnection()
    ->getConfiguration()
    ->setFilterSchemaAssetsExpression('/'.$filter.'/');
Or, dynamically:
/**
 * @var \Doctrine\ORM\EntityManager $em
 */
$em = $container->get('doctrine.entity_manager.orm_default');
$metadata = $em->getMetadataFactory()->getAllMetadata();
if (!empty($metadata)) {
    $filter = '';
    /**
     * @var \Doctrine\ORM\Mapping\ClassMetadata $metadatum
     */
    foreach ($metadata as $metadatum) {
        $filter .= ($metadatum->getTableName().'|');
        $assocMappings = $metadatum->getAssociationMappings();
        if (!empty($assocMappings)) {
            // need to scoop up manyToMany association table names too
            foreach ($assocMappings as $fieldName => $associationData) {
                if (isset($associationData['joinTable'])) {
                    $joinTableData = $associationData['joinTable'];
                    if (isset($joinTableData['name']) && strpos($filter, $joinTableData['name']) === false) {
                        $filter .= ($joinTableData['name'].'|');
                    }
                }
            }
        }
    }
    $filter = rtrim($filter, '|');
    $em->getConnection()->getConfiguration()
        ->setFilterSchemaAssetsExpression('/'.$filter.'/');
}
We are using Symfony to create some web services. We use Doctrine-ORM to store entities and Doctrine-DBAL to retrieve data, because it's very light and can reuse the ORM (entity manager) connection.
When using Doctrine-DBAL, integer values are returned to PHP as strings, and we want to have integer values, especially because they are returned to JavaScript. Following this discussion, How to get numeric types from MySQL using PDO?, we have installed the MySQL native driver (sudo apt-get install php5-mysqlnd) and set up our Symfony (DBAL) configuration with PDO::ATTR_EMULATE_PREPARES = false:
doctrine:
    dbal:
        .
        .
        options:
            20: false # PDO::ATTR_EMULATE_PREPARES is 20
With this configuration we are getting integers when the MySQL fields are integers. So far so good.
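The same attribute can also be set directly when the PDO connection is created, which is handy for a quick sanity check outside Symfony (the DSN and credentials below are placeholders):

```php
<?php

// DSN and credentials are placeholders
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass', [
    PDO::ATTR_EMULATE_PREPARES => false, // constant value 20, as in the YAML config
]);
$stmt = $pdo->prepare('SELECT 1 + 1 AS total');
$stmt->execute();
// With mysqlnd and emulation off, 'total' is returned as an int, not a string
var_dump($stmt->fetch(PDO::FETCH_ASSOC));
```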
But there is a new problem: when storing entities with boolean values through Doctrine-ORM, the entity is not persisted. We see the INSERT and the COMMIT in the logs, but the record is not in the database (if we use a table with no boolean fields defined in the entity, the record is stored).
Furthermore, we don't get any error or exception, which we find very dangerous. We think there is a bug in the PDO library, but we have to look into it a bit more.
The question: Has anybody experienced this behaviour? any workaround? Should Doctrine account for this?
gseric's answer will work, but with the side effect of hydrating your entities with integers. To still get booleans in your entities, you can simply extend Doctrine's BooleanType:
class BooleanToIntType extends \Doctrine\DBAL\Types\BooleanType
{
    public function getBindingType()
    {
        return \PDO::PARAM_INT;
    }
}
Then, in your application bootstrap:
\Doctrine\DBAL\Types\Type::overrideType('boolean', BooleanToIntType::class);
If it's not too late for you, you can fix this issue in your app bootstrap this way:
\Doctrine\DBAL\Types\Type::overrideType('boolean', 'Doctrine\\DBAL\\Types\\IntegerType');
After this line is executed, Doctrine DBAL will map your PHP boolean values to PDO integers (PDO::PARAM_INT instead of PDO::PARAM_BOOL).
When developing, I'm having so many issues with migrations in Laravel.
I create a migration. When I finish creating it, there's a small error in the middle of the migration (say, a foreign key constraint) that makes php artisan migrate fail. It tells me where the error is, indeed, but then migrate is left in an inconsistent state: all the modifications to the database made before the error are applied, and the later ones are not.
This means that when I fix the error and re-run migrate, the first statement fails, as the column/table is already created/modified. The only solution I know is to go into my database and "roll back" everything by hand, which takes much longer.
migrate:rollback tries to roll back the previous migrations, as the current one was not applied successfully.
I also tried wrapping all my code in DB::transaction(), but it still doesn't work.
Is there any solution for this? Or do I just have to keep rolling things back by hand?
Edit: adding an example (not actual Schema builder code, just some kind of pseudo-code):
Migration1:
Create Table users (id, name, last_name, email)
Migration1 executed OK. Some days later we make Migration 2:
Create Table items (id, user_id references users.id)
Alter Table users make_some_error_here
Now what will happen is that migrate will execute the first statement and create the table items with its foreign key to users. Then, when it tries to apply the next statement, it will fail.
If we fix make_some_error_here, we can't run migrate because the table items is already created. We can't roll back (nor refresh, nor reset), because we can't delete the table users while there's a foreign key constraint from the table items.
Then the only way to continue is to go into the database and delete the table items by hand, to get migrate into a consistent state.
It is not a Laravel limitation; I bet you use MySQL, right?
As the MySQL documentation says here:
Some statements cannot be rolled back. In general, these include data
definition language (DDL) statements, such as those that create or
drop databases, those that create, drop, or alter tables or stored
routines.
And we have a recommendation from Taylor Otwell himself here, saying:
My best advice is to do a single operation per migration so that your
migrations stay very granular.
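Following that advice, the failing example above would be split so each schema change lives in its own migration (table and column names follow the pseudo-code above; the altered column is illustrative):

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

// Second migration: only creates the items table.
class CreateItemsTable extends Migration
{
    public function up()
    {
        Schema::create('items', function (Blueprint $table) {
            $table->increments('id');
            $table->unsignedInteger('user_id');
            $table->foreign('user_id')->references('id')->on('users');
        });
    }

    public function down()
    {
        Schema::dropIfExists('items');
    }
}

// Third migration: only the users alteration. If it fails, the items
// migration above is already recorded as run and won't be re-applied.
class AlterUsersTable extends Migration
{
    public function up()
    {
        Schema::table('users', function (Blueprint $table) {
            $table->string('nickname')->nullable(); // illustrative change
        });
    }

    public function down()
    {
        Schema::table('users', function (Blueprint $table) {
            $table->dropColumn('nickname');
        });
    }
}
```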
-- UPDATE --
Do not worry!
The best practices say:
You should never make a breaking change.
It means that in one deployment you create new tables and fields and release code that uses them. In the next deployment, you delete the now-unused tables and fields.
Now, even if you hit a problem in either of these deployments, don't worry: if your migration failed, the working release still uses a functional data structure. And with a single operation per migration, you'll find the problem in no time.
I'm using MySQL and I'm having this problem.
My solution requires that your down() method does exactly what you do in up(), but backwards.
This is what I do:
try {
    Schema::create('table1', function (Blueprint $table) {
        //...
    });
    Schema::create('table2', function (Blueprint $table) {
        //...
    });
} catch (PDOException $ex) {
    $this->down();
    throw $ex;
}
So if something fails here, it automatically calls the down() method and re-throws the exception.
Instead of wrapping the migration in transaction(), wrap it in this try.
Like Yevgeniy Afanasyev highlighted Taylor Otwell as saying (an approach I had already taken myself): have your migrations only work on specific tables, or perform a single operation such as adding/removing a column or key. That way, when you get failed migrations that cause inconsistent states like this, you can just drop the table and attempt the migration again.
I’ve experienced exactly the issue you’ve described, but as of yet haven’t found a way around it.
Just remove the failed code from the migration file and generate a new migration for the failed statement. Now, when it fails again, the earlier table creation remains intact, because it lives in another migration file.
Another advantage of this approach is that you have more control and smaller steps when reverting the DB.
Hope that helps :D
I think the best way to do it is as shown in the documentation:
DB::transaction(function () {
    DB::table('users')->update(['votes' => 1]);
    DB::table('posts')->delete();
});
See: https://laravel.com/docs/5.8/database#database-transactions
I know it's an old topic, but there was activity a month ago, so here are my 2 cents.
This answer is for MySQL 8 and Laravel 5.8.
MySQL, since version 8, supports atomic DDL: https://dev.mysql.com/doc/refman/8.0/en/atomic-ddl.html
At the start of a migration, Laravel checks whether the schema grammar supports migrations inside a transaction and, if it does, runs the migration in one.
The problem is that the MySQL schema grammar has this flag set to false. We can extend the Migrator, the MySQL schema grammar, and the MigrationServiceProvider, and then register the service provider, like so:
<?php

namespace App\Console;

use Illuminate\Database\Migrations\Migrator as BaseMigrator;
use App\Database\Schema\Grammars\MySqlGrammar;

class Migrator extends BaseMigrator
{
    protected function getSchemaGrammar($connection)
    {
        if (get_class($connection) === 'Illuminate\Database\MySqlConnection') {
            $connection->setSchemaGrammar(new MySqlGrammar);
        }
        if (is_null($grammar = $connection->getSchemaGrammar())) {
            $connection->useDefaultSchemaGrammar();
            $grammar = $connection->getSchemaGrammar();
        }
        return $grammar;
    }
}
<?php

namespace App\Database\Schema\Grammars;

use Illuminate\Database\Schema\Grammars\MySqlGrammar as BaseMySqlGrammar;

class MySqlGrammar extends BaseMySqlGrammar
{
    public function __construct()
    {
        $this->transactions = config("database.transactions", false);
    }
}
<?php

namespace App\Providers;

use Illuminate\Database\MigrationServiceProvider as BaseMigrationServiceProvider;
use App\Console\Migrator;

class MigrationServiceProvider extends BaseMigrationServiceProvider
{
    /**
     * Register the migrator service.
     *
     * @return void
     */
    protected function registerMigrator()
    {
        $this->app->singleton('migrator', function ($app) {
            return new Migrator($app['migration.repository'], $app['db'], $app['files']);
        });
        $this->app->singleton(\Illuminate\Database\Migrations\Migrator::class, function ($app) {
            return $app['migrator'];
        });
    }
}
<?php

return [
    'providers' => [
        /*
         * Laravel Framework Service Providers...
         */
        App\Providers\MigrationServiceProvider::class,
    ],
];
Of course, we also have to add the transactions flag to our database config...
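A minimal sketch of that config addition (the transactions key is the custom one read by the extended MySqlGrammar above; it is not a stock Laravel option):

```php
<?php

// config/database.php
return [
    // ...existing database configuration...

    // Custom flag read by the extended MySqlGrammar above
    'transactions' => true,
];
```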
DISCLAIMER: I haven't tested this yet, but looking at the code alone, it should work as advertised :) An update will follow when I test it...
Most of the answers overlook a very important fact: there is a very simple way to structure your development against this. If you make all migrations reversible and add as much of the dev testing data as possible through seeders, then when artisan migrate fails on the dev environment, you can correct the error and run:
php artisan migrate:fresh --seed
Optionally coupled with a :rollback to test rolling back.
For me personally artisan migrate:fresh --seed is the second most used artisan command after artisan tinker.