I'm having some trouble with Laravel transactions.
Laravel 9+
PHP 8+
Firebird 2.5
I have two DB connections, MySQL (the default) and Firebird. MySQL works fine, like this, and I get the connection data:
DB::transaction(function ($conn) use ($request) {
dd($conn);
});
When I try to use my other connection ('firebird'), it always throws a "There is already an active transaction" error.
DB::connection('firebird')->transaction(function ($conn) use ($request) {
dd($conn);
$conn->table('A')->insert();
$conn->table('B')->insert();
$conn->table('C')->insert();
});
I tried this version too, but I get the same error if I use the 'firebird' connection:
DB::connection('firebird')->beginTransaction();
If I leave out the transaction, both work just fine, but I want to be able to roll back if there is an error. Any thoughts on why? I'm stuck on this.
Firebird always uses transactions. The transaction is started as soon as you make a change in the database and remains open for that session until you commit. Using your code, it's simply:
DB::connection('firebird')->insert(...);
DB::connection('firebird')->commit(); // or ->rollBack();
When you do begin tran in SQL Server, it does not mean that you're starting the transaction now. You are already in a transaction, since you are connected to the database! What begin tran really does is disable the "auto-commit at each statement" mode, which is the default in SQL Server (unless otherwise specified).
Respectively, commit tran commits and reverts the connection to the "auto-commit at each statement" state.
In any database, when you are connected, you are already in a transaction. This is how databases are. For instance, in Firebird, you can perform a commit or rollback even if you only ran a query.
Some databases and connection libraries, on the other hand, let you use the "auto-commit at each statement" mode of the connection, which is what SQL Server does. As useful as that feature might be, it's not very didactic, and it leads beginners to think they are "not in a transaction".
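The begin/commit/rollback mechanics described above can be seen with plain PDO. The sketch below uses an in-memory SQLite database purely as a stand-in (SQLite is bundled with PHP, so the example is runnable without a Firebird or MySQL server); the transaction calls themselves are the same for any PDO driver:

```php
<?php
// Explicit transaction control with PDO. In-memory SQLite is a stand-in
// here; beginTransaction/commit/rollBack work the same for other drivers.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE a (id INTEGER PRIMARY KEY, name TEXT)');

$pdo->beginTransaction();
$pdo->exec("INSERT INTO a (name) VALUES ('first')");
$pdo->rollBack();                       // the insert is discarded

$pdo->beginTransaction();
$pdo->exec("INSERT INTO a (name) VALUES ('second')");
$pdo->commit();                         // the insert is made permanent

$count = (int) $pdo->query('SELECT COUNT(*) FROM a')->fetchColumn();
// only the committed row survives
```

Only the row inserted inside the committed transaction is visible afterwards; the rolled-back insert leaves no trace.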
The solution:
You need to turn off auto-commit (PDO::ATTR_AUTOCOMMIT) when you define the 'firebird' connection in config/database.php.
Example:
'firebird' => [
    'driver' => 'firebird',
    'host' => env('DB_FIREBIRD_HOST', '127.0.0.1'),
    'port' => env('DB_FIREBIRD_PORT', '3050'),
    'database' => env('DB_FIREBIRD_DATABASE', 'path\to\db\DEFAULT.DATABASE'),
    'username' => env('DB_FIREBIRD_USERNAME', 'username'),
    'password' => env('DB_FIREBIRD_PASSWORD', 'password'),
    'charset' => env('DB_FIREBIRD_CHARSET', 'UTF8'),
    'options' => array(
        PDO::ATTR_PERSISTENT => false,
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        PDO::ATTR_AUTOCOMMIT => false,
    ),
],
Then you can use Laravel transactions like:
try {
    DB::connection('firebird')->beginTransaction();
    DB::connection('firebird')->insert();
    DB::connection('firebird')->commit();
} catch (Exception $exception) {
    DB::connection('firebird')->rollBack();
    throw $exception;
}
Or you can use this, which does the commit or rollback automatically:
DB::connection('firebird')->transaction(function () use ($request) {
    DB::connection('firebird')->insert($request);
});
But don't forget: if you do this, you must start the transaction every time, even when you are just selecting some data.
DB::connection('firebird')->beginTransaction();
or you will get an SQL error like:
SQLSTATE[HY000]: General error: -901 invalid transaction handle (expecting explicit transaction start)
Thank you everybody!
Related
I am running Laravel 5.4 and noticed that rollbacks in transactions do not work. I set my database engine to InnoDB in the settings.php file and tried DB::rollback(); and DB::rollBack(); (i.e. upper- and lowercase b), but it does not roll back my database.
I wrote the unit test below. It creates a record, commits it, then rolls back. However, the last assertion fails: after the rollback, the record is still found in the database. Is there something I am missing? Or is there a bug in Laravel?
public function testRollback()
{
$this->artisan('migrate:refresh', [
'--seed' => '1'
]);
DB::beginTransaction();
Season::create(['start_date' => Carbon::now(), 'end_date' => Carbon::now(),]);
DB::commit();
$this->assertDatabaseHas('seasons', [
'start_date' => Carbon::now(), 'end_date' => Carbon::now(),
]);
DB::rollBack();
// This assertion fails. It still finds the record after calling roll back
$this->assertDatabaseMissing('seasons', [
'start_date' => Carbon::now(), 'end_date' => Carbon::now(),
]);
}
A transaction consists of three steps:
You start it with DB::beginTransaction (or the MySQL equivalent, START TRANSACTION), then you execute the commands you need, and then (and here's the important part) you either COMMIT or ROLLBACK.
However, once you've committed, the transaction is done; you can't roll it back anymore.
Change the test to:
public function testRollback()
{
$this->artisan('migrate:refresh', [
'--seed' => '1'
]);
DB::beginTransaction();
Season::create(['start_date' => Carbon::now(), 'end_date' => Carbon::now(),]);
$this->assertDatabaseHas('seasons', [
'start_date' => Carbon::now(), 'end_date' => Carbon::now(),
]);
DB::rollback();
$this->assertDatabaseMissing('seasons', [
'start_date' => Carbon::now(), 'end_date' => Carbon::now(),
]);
}
This should work because until the transaction is rolled back the database "thinks" the record is in there.
In practice when using transactions you want to use what's suggested in the docs, for example:
DB::transaction(function()
{
DB::table('users')->update(array('votes' => 1));
DB::table('posts')->delete();
});
This will ensure atomicity of wrapped operations and will rollback if an exception is thrown within the function body (which you can also throw yourself as a means to abort if you need to).
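The commit-or-rollback pattern that DB::transaction wraps up can be sketched in plain PDO: run a callback between beginTransaction and commit, and roll back if it throws. This is an illustration of the idea only, not Laravel's actual implementation; in-memory SQLite stands in for a real connection:

```php
<?php
// Sketch of the commit-or-rollback wrapper idea (illustrative only,
// not Laravel's implementation). In-memory SQLite stands in for a real DB.
function withTransaction(PDO $pdo, callable $callback)
{
    $pdo->beginTransaction();
    try {
        $result = $callback($pdo);
        $pdo->commit();          // every statement succeeded
        return $result;
    } catch (Throwable $e) {
        $pdo->rollBack();        // undo everything done in the callback
        throw $e;
    }
}

$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, votes INTEGER)');
$pdo->exec('INSERT INTO users (votes) VALUES (0)');

try {
    withTransaction($pdo, function (PDO $pdo) {
        $pdo->exec('UPDATE users SET votes = 1');
        throw new RuntimeException('abort'); // force a rollback
    });
} catch (RuntimeException $e) {
    // expected: the exception is rethrown after the rollback
}

$votes = (int) $pdo->query('SELECT votes FROM users')->fetchColumn();
// the update inside the failed callback was rolled back
```

Because the callback threw, the UPDATE is undone and the row still holds its original value.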
You cannot roll back once you commit. As I can see, you have called
DB::commit();
before the rollback,
so you can only roll back if the commit fails. You can use a try/catch block:
DB::beginTransaction();
try {
    DB::insert(...);
    DB::commit();
} catch (\Exception $e) {
    DB::rollback();
}
Emmm... you have misunderstood how transactions work.
After having begun a transaction, you can either commit it or roll it back. Committing means that all changes you made to the database during the transaction so far are "finalized" (i.e. made permanent) in the database. As soon as you have committed, there is nothing left to roll back.
If you want to roll back, you have to do so before you commit. Rolling back will bring the database into the state it was in before you started the transaction.
This means you have exactly two options:
1) Begin a transaction, then commit all changes made so far.
2) Begin a transaction, then roll back all changes made so far.
Both committing and rolling back are final actions for a transaction, i.e. end the transaction. When having committed or rolled back, the transaction is finished from the database's point of view.
You could also look at this in the following way:
By starting a transaction, you are telling the database that all following changes are preliminary / temporary. After having done your changes, you can either tell the database to make those changes permanent (by committing), or you tell the database to throw away (revert) the changes (by rolling back).
After you have rolled back, the changes are lost and thus cannot be committed again. After you have committed, the changes are permanent and thus cannot be rolled back. Committing and rolling back is only possible as long as the changes are in the temporary state.
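PDO makes this finality visible: inTransaction() reports whether a transaction is still open, and calling rollBack() after a commit raises an error because there is nothing left to undo. A small sketch, again using in-memory SQLite as a stand-in:

```php
<?php
// Committing ends the transaction; a later rollBack() has nothing to
// undo and raises a PDOException. In-memory SQLite stand-in.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE t (id INTEGER PRIMARY KEY)');

$pdo->beginTransaction();
$inTx = $pdo->inTransaction();         // true: the transaction is open
$pdo->exec('INSERT INTO t DEFAULT VALUES');
$pdo->commit();
$afterCommit = $pdo->inTransaction();  // false: the transaction is finished

$lateRollbackFailed = false;
try {
    $pdo->rollBack();                  // nothing to roll back anymore
} catch (PDOException $e) {
    $lateRollbackFailed = true;        // "There is no active transaction"
}
```

The rollback after the commit fails, which is exactly the behavior the test in the question ran into: once committed, the record is permanent.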
I have a Laravel project with many connections to different IPs.
I want Laravel to connect to a backup database in case the main SQL server goes down.
Example.
192.168.1.2 -> SQL DB #1
192.168.1.3 -> SQL DB #1 Backup
If 192.168.1.2 goes down, Laravel must connect to 192.168.1.3.
I'd like to do this in the database.php file, but I think that's impossible.
I was trying to test the connection before making a query, like this:
if (DB::connection('connection')->getDatabaseName())
but it seems the result is cached, and it still returns the database name even after I shut down the SQL server.
For this answer, I'm considering Laravel 5.
By debugging a model query, I've found out that Laravel connections support not a single host, but a list of them.
[
'driver' => 'sqlsrv',
'host' => [
'192.168.1.2',
'192.168.1.3',
],
'database' => 'database_name',
'username' => 'username',
'password' => 'password',
'charset' => 'utf8',
'prefix' => '',
'prefix_indexes' => true,
'transaction_isolation' => PDO::SQLSRV_TXN_READ_UNCOMMITTED, // Not required, but worth mentioning it's possible to define it here too
'options' => [],
]
The underlying method behind Laravel's connection resolving is Illuminate\Database\Connectors\ConnectionFactory::createPdoResolverWithHosts, which has the following behavior:
protected function createPdoResolverWithHosts(array $config)
{
    return function () use ($config) {
        foreach (Arr::shuffle($hosts = $this->parseHosts($config)) as $key => $host) {
            $config['host'] = $host;

            try {
                return $this->createConnector($config)->connect($config);
            } catch (PDOException $e) {
                continue;
            }
        }

        throw $e;
    };
}
Such behavior means that Laravel will randomly pick one of the connection's hosts and try to connect to them. If the attempt fails, it keeps trying until no more hosts are found.
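The try-each-host loop can be illustrated without a database. In the stand-in sketch below, connector closures play the role of real PDO connection attempts (a closure throws if its host is "down"); the shuffle-then-fall-through logic mirrors the Laravel source above:

```php
<?php
// Stand-in illustration of the host-fallback loop (not Laravel's code).
// $connectors maps host => closure; a closure throws if the host is "down".
function connectToAnyHost(array $connectors)
{
    $hosts = array_keys($connectors);
    shuffle($hosts);                       // try hosts in random order
    $lastError = null;
    foreach ($hosts as $host) {
        try {
            return $connectors[$host]();   // first reachable host wins
        } catch (RuntimeException $e) {
            $lastError = $e;               // remember failure, try the next
        }
    }
    throw $lastError;                      // every host failed
}

$connectors = [
    '192.168.1.2' => function () { throw new RuntimeException('down'); },
    '192.168.1.3' => function () { return 'connected to 192.168.1.3'; },
];
$result = connectToAnyHost($connectors);
// the reachable host is found regardless of shuffle order
```

Whichever order the shuffle produces, the failing host is skipped and the reachable one is used, which is exactly the failover behavior the question asks for.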
You could define two MySQL connections in app/config/database.php,
and using a middleware you could decide which DB should be connected to.
You can find a more elaborate explanation at this URL:
http://fideloper.com/laravel-multiple-database-connections
I recently started searching for the same thing. To change the connection as soon as possible, you can either
add the check inside a service provider
or use a global middleware:
try {
    \DB::connection()->getPdo(); // check if we have a connection
} catch (\Exception $e) {
    \DB::purge(config('database.default')); // disconnect from the current connection
    \DB::setDefaultConnection('my-fallback-db'); // connect to a new one
}
Also check the Laravel API docs for more info.
I'm currently re-factoring my PHP framework's database wrapper class to use PDO.
I have few main tasks which I must accomplish with the database wrapper class:
always the exact same result for basic DML operations (INSERT, SELECT, UPDATE, DELETE) against MySQL, PostgreSQL, and Oracle databases
persistent connections only (a new connection per request is not an option for Oracle, as the cost of establishing a connection is very high in terms of latency)
bind parameters at all times (provide out-of-the-box SQL-injection-proof methods)
Unfortunately, I experience "ERR_CONNECTION_RESET" every single time I use the combination of the following two PDO attributes, ATTR_EMULATE_PREPARES => FALSE and ATTR_PERSISTENT => TRUE, while executing a query against a MySQL database.
Here is the example code (not the actual wrapper class, but it duplicates the same error):
$connection = new PDO(
'mysql:host=localhost;dbname=test'
,'root'
,''
,array(
PDO::ATTR_AUTOCOMMIT => FALSE
,PDO::ATTR_CASE => PDO::CASE_LOWER
,PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC
,PDO::ATTR_EMULATE_PREPARES => FALSE
,PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION
,PDO::ATTR_PERSISTENT => TRUE
,PDO::ATTR_ORACLE_NULLS => PDO::NULL_EMPTY_STRING
)
);
if (!$connection->inTransaction()) $connection->beginTransaction();
$statement = $connection->prepare('INSERT INTO dms_devices (model_id, serial_no, name, status) VALUES (:model_id, :serial_no, :name, :status)');
foreach (array('model_id' => 1, 'serial_no' => 12219321, 'name' => 'Demo', 'status' => 'DSS_MANUFACTURED') as $name => $value) {
$statement->bindValue(':'.$name, $value);
}
$statement->execute();
$connection->commit();
Once I comment out either of the attributes ATTR_EMULATE_PREPARES or ATTR_PERSISTENT, it works without problems.
I use WampServer 2.4 64-bit (Apache 2.4.4, PHP 5.4.12, MySQL 5.6.12) on my development machine.
Any suggestions what would be the best solution (keeping in mind the goals I must achieve)?
It turns out this is the already-registered PHP segmentation fault bug #61411.
Unfortunately it is still not fixed in version 5.4.12.
I have following database configuration in database.php file from my CakePHP app:
public $default = array(
'datasource' => 'Database/Mysql',
'persistent' => false,
'host' => 'localhost',
'login' => 'root',
'password' => '',
'database' => 'database',
'prefix' => '',
);
All is working fine, except for one queue shell script. The script loops, waiting for commands to run (for example, to update some reports). After a while (1-2 days) the database data changes, but the script still "sees" the old data, and the result of the command is wrong. If I restart the shell script, the results are OK... for a few days.
I should mention that I previously had a "lost database connection" issue in the script, which I solved by running this every 10-15 minutes:
$user = $this->User->find('first');
Now I am afraid this is somehow making the connection persistent...
How can I reset the database connection ?
EDIT:
I was just refactoring the code to check whether I can set $cacheQueries to false on the Model. But in a few parts of the code I use the ConnectionManager directly, and only then do I have the "cache" problem. If I query the database via Model->find, the results are OK. I need direct queries for performance reasons in a few places...
$query = "SELECT COUNT(1) as result
FROM
......
";
$db = ConnectionManager::getDataSource('default');
$result = $db->query($query);
The property $cacheQueries that @burzum mentioned does not seem to be used in any Cake model method.
But I found another interesting fact in the source of DboSource.
You need to use the second parameter of the DboSource::query() method to turn off the caching, or the third if you want to provide additional parameters for the DboSource::fetchAll() method.
Even though this will fix your problem, you should write your queries with the Model::find() method that CakePHP offers.
You should avoid it only if it is seriously impacting your performance.
func0der
Try setting these two model properties to false:
$cacheQuery http://api.cakephp.org/2.4/source-class-Model.html#265
$cacheSources http://api.cakephp.org/2.4/source-class-Model.html#499
I've rewritten my site's PHP code and added MySQL stored procedures.
In my local version everything works fine, but after I uploaded the site to the hosting server I constantly get the fatal error 'Prepared statement needs to be re-prepared'.
Sometimes the page loads, sometimes loading fails and I see this error. What is going on?
This is a possibility: MySQL bug #42041
They suggest upping the value of table_definition_cache.
You can read about statement caching in the MySQL docs.
#docwhat's answer seems nice, but on a shared hosting server, not everyone is allowed to touch the table_open_cache or table_definition_cache options.
Since this error is related to prepared statements, I have tried to 'emulate' those with PDO by providing the following option:
$dbh = new PDO('mysql:host=localhost;dbname=test', $user, $pass, [
PDO::ATTR_EMULATE_PREPARES => true
]);
Note: actually this is in a Laravel 5.6 project, and I added the option in config/database.php:
'connections' => [
'mysql' => [
'driver' => 'mysql',
'host' => env('DB_HOST', '127.0.0.1'),
'port' => env('DB_PORT', '3306'),
'database' => env('DB_DATABASE', 'forge'),
'username' => env('DB_USERNAME', 'forge'),
'password' => env('DB_PASSWORD', ''),
'unix_socket' => env('DB_SOCKET', ''),
'charset' => 'utf8mb4',
'collation' => 'utf8mb4_unicode_ci',
'prefix' => '',
'strict' => true,
'engine' => null,
'options' => [
PDO::ATTR_EMULATE_PREPARES => true,
],
],
(...)
],
I have not tested the impact of emulating prepared statements on the duration of loading my site, but it works against the error SQLSTATE[HY000]: General error: 1615 Prepared statement needs to be re-prepared I got.
Update on the performance: the emulated version seems to be slightly faster (32.7±1.4ms emulated, 35.0±2.3ms normal, n=10, p-value=0.027 for two-tailed Student's T-test).
In short: Don't use VIEWS in prepared statements.
This seems to be an on-going issue
Views are messy to handle with Dynamic SQL
Earliest Bug was Cannot create VIEWs in prepared statements from 11 years ago. There was a patch put in to address it.
Another bug report, Prepared-Statement fails when MySQL-Server under load, states that error 1615 is not a bug when the underlying tables are busy. (Really ?)
While there is some merit to increasing the table cache size (See MySql error when working with a mysql view), it does not always work (See General error: 1615 Prepared statement needs to be re-prepared (selecting mysql view))
ALTERNATIVES
Over a year ago, someone mentioned this in the MySQL Forum (MySql “view”, “prepared statement” and “Prepared statement needs to be re-prepared”).
Someone came up with the simple idea of not using the view in the prepared statement but using the SQL of view in a subquery instead. Another idea would be to create the SQL used by the view and execute it in your client code.
These would seem to be better workarounds than just bumping up the table cache size.
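The "inline the view's SQL" workaround can be sketched as follows: instead of preparing a statement against the view name, use the view's defining SELECT as a derived table. In-memory SQLite is used here only to demonstrate the rewrite (the table, view, and data are made up for illustration); the re-prepare error itself is MySQL-specific:

```php
<?php
// Sketch of the "use the view's SQL as a subquery" workaround.
// SQLite in-memory is used only to show the rewrite; the 1615
// re-prepare error occurs on MySQL, not SQLite.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE orders (id INTEGER PRIMARY KEY, total INTEGER)');
$pdo->exec('INSERT INTO orders (total) VALUES (10), (20)');

// The view a prepared statement might trip over on MySQL:
$pdo->exec('CREATE VIEW big_orders AS SELECT * FROM orders WHERE total > 15');

// Workaround: inline the view's defining SELECT as a derived table.
$stmt = $pdo->prepare(
    'SELECT COUNT(*) FROM (SELECT * FROM orders WHERE total > ?) AS big_orders'
);
$stmt->execute([15]);
$count = (int) $stmt->fetchColumn();
// one order has total > 15
```

The prepared statement no longer references the view at all, so MySQL never has to re-prepare against a changed view definition.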
First gain access to mysql shell:
mysql
Check the value of the table_definition_cache:
show global variables like '%table_definition_cache%';
It might be 400 or 1400.
Enlarge it:
set global table_definition_cache = 4000;
Good to go!
Issue: 'Prepared statement needs to be re-prepared'
This issue generally occurs when calling a procedure, whether from a programming language (like Java) or directly from the back end.
Solution: increase the size of the cache by executing the script below.
Script: set global table_definition_cache = 4000;
Just do this:
SET GLOBAL table_definition_cache = 4096;
SET GLOBAL table_open_cache = 4096;
4096 can be too low, so set it to a higher value if needed. But make sure that both values are the same.
Running the FLUSH TABLES; command on the database solved it for me; I was using Doctrine ORM.
My solution is to create a routine like this:
DELIMITER $$
--
-- Procedures
--
DROP PROCEDURE IF EXISTS `dch_content_class_content`$$
CREATE DEFINER=`renuecod`@`localhost` PROCEDURE `dch_content_class_content`(IN $classId INTEGER)
BEGIN
-- vw_content_class_contents is a VIEW (UNIONS)
select * from vw_content_class_contents;
END$$
I hope this helps someone.
This is a workaround for people who are on shared hosting and don't want to risk it with sql injection. According to this post:
Laravel Fluent Query Builder Join with subquery
you could store the definition of your view in a function
private function myView(){
    return DB::raw('(**definition of the view**) my_view_name');
}
and then use it like this:
public function scopeMyTable(Builder $query)
{
return $query->join($this->myView(),'my_view_name.id','=','some_table.id');
}
This is a Laravel approach, but I'm sure it could be applied in most cases and wouldn't need huge code refactoring or architecture changes. Plus, it's relatively secure, as your statements stay prepared.
I had this error being caused by a large group_concat_max_len statement:
SET SESSION group_concat_max_len = 1000000000000000000;
I removed it and the error went away.
Okay, we were stuck on a laptop that would not allow Tableau to update or retrieve data from views on our MariaDB 10.3.31 databases. We did the usual Google and Stack Overflow searching; lots of solutions that just would not work. However, we had another laptop that did connect and ran just fine. So after many head scratches, a eureka moment came.
The offending laptop had both the v5.3.14 and v8.0.26 ODBC connectors installed. We removed the v8.0.26 ODBC connector and bang, all the view issues disappeared.
Hope someone finds this solution useful.
I tried to run an UPDATE query that joined a view and a table, and I got the same error. Since I had no user input, I decided to run DB::unprepared, which fixed the problem.
You can read more about it in the Laravel Documentation.
I was getting this same error in Ruby on Rails in a shared hosting environment. It may not be the most secure solution, but disabling prepared statements got rid of the error message for me.
This can be done by adding the "prepared_statements: false" setting to your database.yml file:
production:
prepared_statements: false
This seems like a reasonable solution when you don't have control over the configuration settings on the MySQL server.