I'm trying to learn MongoDB transactions using the php-mongodb library v1.5, but I've run into a problem.
I've tried to start, commit, and abort a transaction using the given methods, but abortTransaction is not working for me:
$session = self::$instance->startSession();
$this->db = self::$instance->{"mydb"};
$session->startTransaction();
$this->db->users->deleteOne([
'_id' => new MongoDB\BSON\ObjectId('5c88e197df815495df201a38')
]);
$session->abortTransaction();
$session->endSession();
The transaction is always committed, even after the abort action!
What am I missing here? Please save my day :(
The transaction is always committed even after the abort action
This is because the delete operation doesn't utilise the session object that you have instantiated. You need to pass the session in the $options parameter of MongoDB\Collection::deleteOne(); otherwise the operation will execute outside of the transaction. For example:
$session->startTransaction();
$this->db->users->deleteOne(
['_id' => new MongoDB\BSON\ObjectId('5c88e197df815495df201a38')],
['session' => $session]
);
See also MongoDB Transactions for more information.
I am new to MongoDB, as I was a SuperFan of MySQL before. I recently moved to this NoSQL thing and loved it, but now I am badly stuck on transactions in MongoDB.
I found some related questions on SO, but they either had no answers or were obsolete and do not work with the new MongoDB PHP driver, as there have been many changes in syntax/functions. I can see that many newbies like me are confused between the MongoDB docs and the PHP driver.
I found this way of committing transactions in the MongoDB docs:
$client = new MongoDB\Driver\Manager("mongodb://127.0.0.1:27017");
// Step 1: Define the operations and their sequence.
$callback = function (\MongoDB\Driver\Session $session) use ($client)
{
$client->selectCollection('mydb1', 'foo')->insertOne(['abc' => 1], ['session' => $session]);
$client->selectCollection('mydb2', 'bar')->insertOne(['xyz' => 999], ['session' => $session]);
};
// Step 2: Start a client session.
$session = $client->startSession();
// Step 3: Use with_transaction to start a transaction, execute the callback, and commit
$transactionOptions =
[
'readConcern' => new \MongoDB\Driver\ReadConcern(\MongoDB\Driver\ReadConcern::LOCAL),
'writeConcern' => new \MongoDB\Driver\WriteConcern(\MongoDB\Driver\WriteConcern::MAJORITY, 1000),
'readPreference' => new \MongoDB\Driver\ReadPreference(\MongoDB\Driver\ReadPreference::RP_PRIMARY),
];
\MongoDB\with_transaction($session, $callback, $transactionOptions);
but this syntax and these functions are obsolete for the new PHP driver, and it gives the following error:
Call to undefined function MongoDB\with_transaction()
According to the PHP docs, the new PHP driver for MongoDB provides these methods to commit a transaction, but I don't understand how to use them, because there are no examples given in the docs:
https://www.php.net/manual/en/mongodb-driver-manager.startsession.php
https://www.php.net/manual/en/mongodb-driver-session.starttransaction.php
https://www.php.net/manual/en/mongodb-driver-session.committransaction.php
My question is: how can I update the above code with the new PHP driver's functions? I believe I should use
MongoDB\Driver\Manager::startSession
MongoDB\Driver\Session::startTransaction
MongoDB\Driver\Session::commitTransaction
but I don't understand their syntax or arguments, because of the incomplete documentation and lack of examples. Thanking you in anticipation for your time and support.
OK, so I found the answer to my question, and I thought it might be helpful for others.
Using the core mongodb extension:
$connection = new MongoDB\Driver\Manager("mongodb://127.0.0.1:27017");
$session = $connection->startSession();
$session->startTransaction();
$bulk = new MongoDB\Driver\BulkWrite(['ordered' => true]);
$bulk->insert(['x' => 1]);
$bulk->insert(['x' => 2]);
$bulk->insert(['x' => 3]);
$result = $connection->executeBulkWrite('db.users', $bulk, ['session' => $session]);
$session->commitTransaction();
Using the PHP library:
$client = new MongoDB\Client('mongodb://127.0.0.1:27017');
$session = $client->startSession();
$session->startTransaction();
try {
    // Perform actions; note the ['session' => $session] option on each one.
    $client->mydb->users->insertOne(['abc' => 1], ['session' => $session]);
    $session->commitTransaction();
} catch (Exception $e) {
    $session->abortTransaction();
}
Note: to keep the answer short and to the point, I have omitted some of the optional parameters and used dummy insert data.
If you are running a standalone MongoDB instance (e.g. for development or testing purposes), you might get an error something like:
transaction numbers are only allowed on a replica set member or mongos
In that case you can convert the standalone instance to a replica set by following this guide: https://docs.mongodb.com/manual/tutorial/convert-standalone-to-replica-set/
I was working on a project which required the use of Elasticsearch. I followed this guide: https://www.elastic.co/guide/en/elasticsearch/client/php-api/current/index.html
It works perfectly for me:
require 'vendor/autoload.php';
use Elasticsearch\ClientBuilder;
$hosts = [
'myhost'
];
$client = ClientBuilder::create() // Instantiate a new ClientBuilder
->setHosts($hosts) // Set the hosts
->build();
$params = [
'index' => 'php-demo-index',
'type' => 'doc',
'id' => 'my_id',
'body' => ['testField' => 'abc']
];
$response = $client->index($params);
print_r($response);
Now, that's only a basic thing. What I want is to integrate this with MySQL, i.e. as I update or insert into a table in my database, it gets indexed automatically in Elasticsearch.
I know we have Logstash, which can query the db at a given interval and index into Elasticsearch. But I want the indexing to happen automatically after insertion into the db, using PHP and without Logstash.
I know of such a library for Node.js + MongoDB, i.e. mongoosastic: https://www.npmjs.com/package/mongoosastic. Is there any library available in PHP which can do such a task automatically? Please provide sample code if you know of one.
There are indeed libraries to automate this task. However, it generally requires the use of an ORM like Doctrine in order to gracefully hook into your database implementation. If you are able to use the Symfony framework in your project, there is a bundle called FOSElasticaBundle which keeps your indices in sync with your database operations.
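If a full framework isn't an option, the same idea can be hand-rolled: index the row immediately after the INSERT succeeds. Here is a minimal sketch, assuming a local MySQL database and the Elasticsearch client from the snippet above; the table, column, and index names are invented for illustration:

```php
<?php
require 'vendor/autoload.php';

use Elasticsearch\ClientBuilder;

// Hypothetical connection details, for illustration only.
$pdo = new PDO('mysql:host=127.0.0.1;dbname=mydb', 'dbuser', 'dbpass');
$es  = ClientBuilder::create()->setHosts(['localhost:9200'])->build();

// 1) Insert the row into MySQL.
$stmt = $pdo->prepare('INSERT INTO articles (title, body) VALUES (?, ?)');
$stmt->execute(['Hello', 'World']);
$id = $pdo->lastInsertId();

// 2) Immediately mirror it into Elasticsearch, reusing the MySQL
//    primary key as the document id so later updates overwrite in place.
$es->index([
    'index' => 'articles',
    'type'  => 'doc',   // only needed on older Elasticsearch versions
    'id'    => $id,
    'body'  => ['title' => 'Hello', 'body' => 'World'],
]);
```

The obvious caveat is that the two writes are not atomic: if the Elasticsearch call fails you need to retry or queue it, which is exactly the bookkeeping that FOSElasticaBundle's Doctrine listeners handle for you.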
I need to trigger a Laravel job within a transaction.
Since the jobs are asynchronous, sometimes they complete before the transaction commits. In such situations, the job cannot fetch the relevant row by its id (because the transaction is not yet committed and the changes are not visible outside it).
Please suggest a method, other than moving this part outside of the transaction, to solve this problem.
DB::beginTransaction();
...
$process = DB::table("trn_users")->insertGetId([
"first_name" => $first_name,
"last_name" => $last_name
]);
$job = (new SendEmailJob([
'Table' => 'trn_users',
'Id' => $process
]))->onQueue('email_send_job');
$this->dispatch($job);
...
DB::commit();
For this purpose I've published a package http://github.com/therezor/laravel-transactional-jobs
The other option is to use events:
DB::beginTransaction();
...
$process = DB::table("trn_users")->insertGetId([
"first_name" => $first_name,
"last_name" => $last_name
]);
$job = (new SendEmailJob([
'Table' => 'trn_users',
'Id' => $process
]))->onQueue('email_send_job');
Event::listen(\Illuminate\Database\Events\TransactionCommitted::class, function () use ($job) {
$this->dispatch($job);
});
...
DB::commit();
I recently solved this problem in a project.
I simply defined a "buffer" facade singleton with a dispatch() method which, instead of dispatching right away, buffers jobs in memory until the transaction commits.
When the buffer class is constructed, it registers event listeners for the commit and rollback events, and either dispatches or forgets the buffered jobs depending on which event fires.
It does some other clever stuff around the actual transaction level and working out whether it needs to buffer or dispatch immediately.
Hopefully you get the idea, but let me know if you want me to go into more detail.
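A rough sketch of that buffer, built on Laravel's transaction events; the class and method names are invented for illustration (the real implementation also handles nested transaction levels more carefully):

```php
<?php

use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Event;
use Illuminate\Database\Events\TransactionCommitted;
use Illuminate\Database\Events\TransactionRolledBack;

// Hypothetical job buffer, registered as a singleton in the container.
class JobBuffer
{
    /** @var array Jobs held back until the transaction settles. */
    private $pending = [];

    public function __construct()
    {
        // Flush the buffer on commit, discard it on rollback.
        Event::listen(TransactionCommitted::class, function () {
            foreach ($this->pending as $job) {
                dispatch($job);
            }
            $this->pending = [];
        });
        Event::listen(TransactionRolledBack::class, function () {
            $this->pending = [];
        });
    }

    public function dispatch($job)
    {
        if (DB::transactionLevel() > 0) {
            $this->pending[] = $job; // inside a transaction: hold the job
        } else {
            dispatch($job);          // no transaction: dispatch right away
        }
    }
}
```

Code then calls the buffer's dispatch() (via the facade) instead of the global dispatch() helper, so jobs only reach the queue once their row is actually visible. Note that recent Laravel versions ship this behaviour natively via the afterCommit() job option.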
Lately I've been testing my PHP framework's database wrapper class, which is based on PHP Data Objects (PDO). I successfully passed all the tests with an Oracle database and started the tests with MySQL, when I came across a bug which seems like an ACID nightmare.
In short my database driver wrapper class does the following:
1) It establishes a persistent database connection with the following attributes:
self::$connection = new PDO(
$dsn
,DATABASE_USERNAME
,DATABASE_PASSWORD
,[
PDO::ATTR_AUTOCOMMIT => FALSE // Do not autocommit every single statement
,PDO::ATTR_CASE => PDO::CASE_LOWER // Force column names to lower case
,PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC // Return result set as an array indexed by column name
,PDO::ATTR_EMULATE_PREPARES => (DATABASE_DRIVER == 'mysql') // Allow emulation of prepared statements only for MySQL
,PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION // Throw an exception and rollback transaction on error
,PDO::ATTR_ORACLE_NULLS => PDO::NULL_EMPTY_STRING // Convert empty strings to NULL
,PDO::ATTR_PERSISTENT => TRUE // Use persistent connection
]
);
2) The wrapper class has an execute() method which is the backbone for running various SQL statements. When executing an SQL statement, execute() checks whether a transaction is active using PDO::inTransaction(); if not, it begins one. Here is what this method looks like (skipping all the boring parts):
public static function execute($sql, $bind_values = [], $limit = -1, $offset = 0) {
...
if (!self::$connection->inTransaction()) {
self::$connection->beginTransaction();
}
...
}
3) So far so good. But let's look at the following example, which runs a DELETE statement followed by a SELECT statement against the same table with the very same WHERE condition:
database::execute('DELETE FROM dms_test WHERE id = 5');
$data = database::execute('SELECT * FROM dms_test WHERE id = 5');
4) Everyone would expect the SELECT statement to return an empty result set, since the previous DELETE statement just wiped out that row within the same transaction.
5) But as crazy as it may sound, the SELECT statement returns a non-empty result set, as though the DELETE statement had never been issued.
6) Interestingly, the very same example works as intended with an Oracle database.
Any ideas what is wrong with MySQL?
Have any of you had similar problems?
I was able to solve the problem by completely removing PDO's transaction mechanism and replacing it with my own. Apparently, using PDO's transaction mechanism with MySQL with auto-commit mode disabled may cause unpredictable behavior. I don't know whether this is a PDO or a MySQL bug.
If you want to implement out-of-the-box transactional access to the database using PDO, do not use PDO's built-in PDO::beginTransaction(), PDO::commit() and PDO::rollback() methods.
Instead, I suggest the following approach:
1) When establishing the connection, use the attribute PDO::ATTR_AUTOCOMMIT => FALSE
2) Keep track of the in-transaction state by declaring your own variable for this purpose (e.g. $in_transaction)
3) Register a shutdown function that issues a native database ROLLBACK at the end of the request (if still in a transaction)
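A minimal sketch of these three points; the class and method names are illustrative, not the actual framework code:

```php
<?php

// Hypothetical wrapper implementing manual transaction handling
// with autocommit disabled, instead of PDO::beginTransaction().
class database
{
    private static $connection;
    private static $in_transaction = false; // our own state tracking

    public static function connect($dsn, $username, $password)
    {
        self::$connection = new PDO($dsn, $username, $password, [
            PDO::ATTR_AUTOCOMMIT => false,
            PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
            PDO::ATTR_PERSISTENT => true,
        ]);
        // Roll back anything left open at the end of the request.
        register_shutdown_function(function () {
            if (self::$in_transaction) {
                self::$connection->exec('ROLLBACK');
            }
        });
    }

    public static function begin()
    {
        self::$connection->exec('START TRANSACTION');
        self::$in_transaction = true;
    }

    public static function commit()
    {
        self::$connection->exec('COMMIT');
        self::$in_transaction = false;
    }

    public static function rollback()
    {
        self::$connection->exec('ROLLBACK');
        self::$in_transaction = false;
    }
}
```

Issuing START TRANSACTION / COMMIT as plain statements keeps the driver's own bookkeeping out of the picture, so the state you track yourself is the only source of truth.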
Using this approach I was able to overcome the above-mentioned bug.
Auto-commit is set to false, so none of the changes will be saved unless you end the transaction with a commit or rollback:
PDO::ATTR_AUTOCOMMIT => FALSE
I've been working on converting an application of mine from CodeIgniter to Phalcon. I've noticed that [query-heavy] requests that took a maximum of 3 or 4 seconds using CI are taking up to 30 seconds to complete using Phalcon!
I've spent days trying to find a solution. I've tried using all the different means of access offered by the framework, including submitting raw query strings directly to Phalcon's MySQL PDO adapter.
I'm adding my database connection to the service container exactly like it is shown in Phalcon's INVO tutorial:
$di->set('db', function() use ($config) {
return new \Phalcon\Db\Adapter\Pdo\Mysql(array(
"host" => $config->database->host,
"username" => $config->database->username,
"password" => $config->database->password,
"dbname" => $config->database->name
));
});
Using webgrind output, I was able to narrow the bottleneck down to the constructor in Phalcon's PDO adapter class (webgrind screenshot omitted; cost in milliseconds).
I've already profiled and manually tested the relevant SQL to make sure the bottleneck isn't in the database (or in my poorly constructed SQL!).
I've discovered the problem, which to me wasn't immediately apparent, so hopefully others will find this useful as well.
Every time a new query was started, the application was getting a new instance of the database adapter. The request which produced the webgrind output above ran a total of 20 queries.
While re-reading Phalcon's documentation section on dependency injection, I saw that services can optionally be added to the service container as a "shared" service, which effectively forces the object to act as a singleton: once one instance of the class is created, the container returns that same instance whenever the service is requested, instead of creating a new one.
There are several methods to force a service to be added as a shared service, details of which can be found here in Phalcon's Documentation:
http://docs.phalconphp.com/en/latest/reference/di.html#shared-services
Changing the code posted above to be added as a shared service looks like this:
$di->setShared('db', function() use ($config) {
return new \Phalcon\Db\Adapter\Pdo\Mysql(array(
"host" => $config->database->host,
"username" => $config->database->username,
"password" => $config->database->password,
"dbname" => $config->database->name
));
});
Here's what the webgrind output looks like for the same request referenced above, after setting the database service to be shared (webgrind screenshot omitted; cost in milliseconds).
Notice that the invocation count is now 1 instead of 20, and the invocation cost dropped from 20 seconds down to 1 second!
I hope someone else finds this useful!
In most examples, services are in fact shared, though not in the most apparent way; it is done via:
$di->set('service', …, true);
The last bool argument passed to set() makes the service shared, and in 99.9% of cases you'd want your DI services to be that way. Otherwise, similar things would happen as described by @the-notable, but because they would likely be less impactful, they would be hard to trace down.