Persistent transactions in a client-server approach (PHP)

In my application (client-server), I need to edit some rows in a database, and while they are being edited nobody else should be able to edit them. Normally this is done with transactions. The problem is that in a client-server environment the transaction is managed on the server side, so the client that edits the rows cannot access the transaction directly. (I'm working with PHP here, but I think the same issue exists in other technologies as well.) So I need to keep the transaction open, and the rows locked for editing, until the client finishes the edit.
In PHP, persistent connections won't help, because the same connection can be picked up by other clients located on the same host. Do you have any ideas for my scenario?
Thank you.

Such cases are usually handled through business locks that you set directly on the objects, or on the parent of the objects.
Add a column such as "in_edition" that you set to true when a user starts editing and back to false when the user saves or cancels the edit (a minimal sketch follows below).
Be aware that some user sessions are likely to be lost before the row is unlocked, so you'll probably also need:
either a periodic job that unlocks stale rows,
or an administration screen from which a user or an admin can unlock rows that remained locked.
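A minimal sketch of that idea, assuming a hypothetical documents table with in_edition, locked_by, and locked_at columns and a PDO connection $pdo (all names are illustrative):
<?php
// Claim the row for editing: the conditional UPDATE only succeeds if nobody
// else currently holds the lock, so it doubles as the availability check.
function claimForEdit(PDO $pdo, int $rowId, int $userId): bool
{
    $stmt = $pdo->prepare(
        'UPDATE documents
            SET in_edition = 1, locked_by = :user, locked_at = NOW()
          WHERE id = :id AND in_edition = 0'
    );
    $stmt->execute(['user' => $userId, 'id' => $rowId]);
    return $stmt->rowCount() === 1; // 0 rows means someone else is editing
}

// Release the lock when the user validates or cancels the edit.
function releaseEdit(PDO $pdo, int $rowId, int $userId): void
{
    $pdo->prepare(
        'UPDATE documents
            SET in_edition = 0, locked_by = NULL, locked_at = NULL
          WHERE id = :id AND locked_by = :user'
    )->execute(['id' => $rowId, 'user' => $userId]);
}

// Periodic cleanup (e.g. from cron) for edits that were never finished.
function releaseStaleLocks(PDO $pdo): void
{
    $pdo->exec(
        'UPDATE documents
            SET in_edition = 0, locked_by = NULL, locked_at = NULL
          WHERE in_edition = 1
            AND locked_at < NOW() - INTERVAL 30 MINUTE'
    );
}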
Edit:
This kind of solution is used whenever you do not want to rely on a database-specific feature such as Oracle's SELECT ... FOR UPDATE. In Java, a stateful EJB can keep a reference to the transaction from the UI down to the database. Depending on the database, there may also be PHP solutions that use Oracle or other database-specific transaction features.
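For completeness, the database-side variant mentioned above looks roughly like this (SELECT ... FOR UPDATE also works with MySQL's InnoDB); $rowId and $newBody are placeholders, and note that the row lock only lives as long as the transaction, which is exactly why it cannot span several HTTP requests:
<?php
// Assumes a PDO connection $pdo and the same hypothetical documents table.
$pdo->beginTransaction();

$stmt = $pdo->prepare('SELECT * FROM documents WHERE id = ? FOR UPDATE');
$stmt->execute([$rowId]);
$row = $stmt->fetch(PDO::FETCH_ASSOC); // row is now locked against other writers

// ... modify $row ...

$pdo->prepare('UPDATE documents SET body = ? WHERE id = ?')
    ->execute([$newBody, $rowId]);

$pdo->commit(); // lock is released here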

Related

Update different databases in transaction (Laravel)

For our web application, written in Laravel, we use transactions to update the database. We have separated our data across different databases (for simplicity, let's say "app" and "user"). During an application update, an event is fired to update some user statistics in the user database. However, this application update may be called as part of a transaction on the application database, so the code structure looks something like the following.
DB::connection('app')->beginTransaction();
// ...
DB::connection('user')->doStuff();
// ...
DB::connection('app')->commit();
It appears that any attempt to start a transaction on the user connection (a single query already creates an implicit transaction) while inside the app transaction does not work and causes a deadlock (see the InnoDB status output). I also used innotop to get more information; while it showed the lock wait, it did not show which query was blocking it. There was a lock present on the user table, but I could not find its origin. Relevant output is shown below:
The easy solution would be to pull out the user operation from the transaction, but since the actual code is slightly more complicated (the doStuff actually happens somewhere in a nested method called during the transaction and is called from different places), this is far from trivial. I would very much like the doStuff to be part of the transaction, but I do not see how the transaction can span multiple databases.
What is the reason that this situation causes a deadlock and is it possible to run the doStuff on the user database within this transaction, or do we have to find an entirely different solution like queueing the events for execution afterwards?
I have found a workaround for this issue. My hypothesis was that it was using the app connection to update the user database, but even after forcing it to use the user connection, it still locked. So I switched it around and deliberately used the app connection to update the user database. Since we have joins between the databases, the database users already have the proper access rights, so that was not an issue. This actually solved the problem.
That is, the final code turned out to be something like
DB::connection('app')->beginTransaction();
// ...
DB::connection('app')->table('user.users')->doStuff(); // Pay attention to this line!
// ...
DB::connection('app')->commit();

PDO transaction across multiple databases, is it possible?

I have the following problem with one of our users: they have two stores in two different locations, and each location has its own database, but they need to share the client base and the list of materials registered for sale. At the moment, when a new client is registered in one store, a copy is made in the other location's database. Problems quickly arise because their internet connection is unstable: if the connection is down at the time of registration, the copy fails and we end up with inconsistent databases.
I considered making the updates via PDO transactions that would manage the two databases, but it seems that you need a new PDO instance ($dbh1 = new PDO('mysql:host=xxxx;dbname=test', $user, $pass);) for each database, and I don't see a way to commit both updates together. Looking at the related question "what is the best way to do distributed transactions across multiple databases", it seems that I need some form of transaction management. Can this be achieved with PDO?
No, PDO cannot do anything remotely resembling distributed transactions (which in any case are a very thorny issue where no silver bullets exist).
In general, in the presence of network partitions (i.e. actors falling off the network) it can be proved that you cannot achieve consistency and availability (guaranteed response to your queries) at the same time -- see CAP theorem.
It seems that you need to re-evaluate your requirements and design a solution based on the results of this analysis; in order to scale data processing or storage horizontally you have to take the scaling into account from day one and plan accordingly.
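That said, if the two stores stay on PDO, the best you can do is coordinate the two connections by hand, which narrows the failure window but is not a real distributed transaction: a crash between the two commits can still leave the databases inconsistent. A sketch, with placeholder DSNs, credentials, and table names:
<?php
$user = 'dbuser';   // placeholders
$pass = 'dbpass';

$storeA = new PDO('mysql:host=hostA;dbname=store_a', $user, $pass,
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
$storeB = new PDO('mysql:host=hostB;dbname=store_b', $user, $pass,
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

$storeA->beginTransaction();
$storeB->beginTransaction();

try {
    $sql = 'INSERT INTO clients (name, email) VALUES (?, ?)';
    $storeA->prepare($sql)->execute(['Jane Doe', 'jane@example.com']);
    $storeB->prepare($sql)->execute(['Jane Doe', 'jane@example.com']);

    // Both writes succeeded: commit as close together as possible.
    $storeA->commit();
    $storeB->commit(); // a crash right here still breaks atomicity
} catch (Exception $e) {
    // Roll back whatever is still open on either connection.
    if ($storeA->inTransaction()) { $storeA->rollBack(); }
    if ($storeB->inTransaction()) { $storeB->rollBack(); }
    throw $e;
}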
With a single PDO connection you can only work in one database at a time, so you will need to switch databases with a query and then perform the same queries against the second DB.
Your best bet is to run a transaction, then commit it (if successful). Then switch databases with something like
$dbh->exec('USE otherdb');
Then run a second transaction, and commit or roll back based on whether or not it worked.
I'm not sure if this really answers what you are asking though.

PHP app log storage

Our PHP and MySQL based application creates custom logs of user actions, which are written to a MySQL database. We mainly did this for ease of searching, and because the app was already using MySQL for persistent storage, so it just made sense.
Our log now contains 17.6 million rows and is 2GB in size. Not that friendly when moving around the place.
I was wondering what the community might suggest as a better more efficient way to store logs.
We could obviously split this table so it only holds one week's worth of logs, delete the non-critical entries, and keep a second table for historic critical logs such as payments.
In general we write to the log through a function such as
playerlog($id,$message,$cash,$page,$ip,$time);
That's a fairly simplified version; we're also using MySQL's INSERT DELAYED, as the logs are not critical for page loads.
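For reference, a minimal sketch of what such a helper might look like, assuming a global PDO connection $db and a player_log table (both names are hypothetical):
<?php
function playerlog($id, $message, $cash, $page, $ip, $time)
{
    global $db; // hypothetical shared PDO connection

    // A plain INSERT; note that INSERT DELAYED is deprecated and has been
    // removed in recent MySQL versions, so it only applies to older setups.
    $stmt = $db->prepare(
        'INSERT INTO player_log (player_id, message, cash, page, ip, created_at)
         VALUES (?, ?, ?, ?, ?, FROM_UNIXTIME(?))'
    );
    $stmt->execute([$id, $message, $cash, $page, $ip, $time]);
}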
If you're interested in doing this with MongoDB (which I assume from the tag), you might want to take a look here: http://docs.mongodb.org/manual/use-cases/storing-log-data/
You should clarify what the logs are needed for. As a second step after inserting, you could set up a job that works on the log data, e.g. reads the logs and processes them (which degrades your DBMS to some sort of messaging middleware). That may mean storing parts (like payments) in an archive that doesn't get deleted, or writing authentication logs to a place where they are deleted after a specified retention time. But this all depends on your use case.
Depending on what you plan to analyze or the way you have to query the data you could even store them outside of MySQL.
Some possibilities:
implement a SIEM system (http://en.wikipedia.org/wiki/Security_information_and_event_management) that is targeted to analyze events, trigger alerts etc.
use SIEM-like software such as Splunk (see splunk.com), which works on raw logs and is geared towards log searching and analysis
stick with your DBMS solution if it is "fast enough"
simply use syslog and store plain text log files -- you could skip the whole MySQL thing then (a minimal sketch follows below)
...
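For the syslog option, PHP's built-in functions are already enough and sidestep MySQL entirely; a minimal sketch, assuming the same fields that playerlog() receives (prefix and facility are arbitrary):
<?php
// Hand the log line to the system logger; rotation, retention and shipping
// are then handled by syslog/rsyslog configuration instead of MySQL.
openlog('myapp', LOG_PID, LOG_LOCAL0);
syslog(LOG_INFO, sprintf(
    'player=%d cash=%.2f page=%s ip=%s msg=%s',
    $playerId, $cash, $page, $ip, $message   // same fields as playerlog()
));
closelog();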

High-load payment processing architecture in PHP

Imagine a local Groupon clone. Now imagine a deal that attracted 10x the normal number of visitors, and because visitors were trying to buy the deal in parallel, the MySQL database went down and the deal's maximum purchase limit was exceeded.
I'm looking for payment processing best practices for high-load websites that have to handle parallel payments for a limited quantity of product.
For now, the simplest option seems to be locking/unlocking the deal while a customer is trying to purchase it on a third-party payment processor's page.
Any thoughts?
I was with you until you started to talk about a 3rd party payment processor's page. It's hard to control your user's experience while dishing them off to a 3rd party site, because you have no idea what they're doing while they're there, whether they got side-tracked, how long they're going to take to finish the transaction, IF they finished the transaction, etc.
If processing payments locally is not an option, that's not necessarily a problem - it just presents an issue with how you have to actually think about handling your transactions.
So, if it were me, I'd set the 3rd party aside for a minute and not think about it right now. Obviously, I'd first make sure my MySQL database was resilient enough not to go down, because an outage creates a huge problem for reconciling transactions. But things happen, so you need a backup.
My suggestion would be to utilize a caching system which kept track of the product, and the current # of products available. Memcache could be good for this, as it's just a single record which will be pretty easy to grab. You wouldn't have to hit the database at all to get info on your product (availability) and if it went down, your users/application would be none the wiser, as you'd be getting info straight from Memcache about your item (no mysql required).
This presents an issue (when the database goes down) with storing payment records. When you collect money, you obviously need that transaction information in your database, and if your database is down - well, that's a problem. Memcache is not such a great solution for this, because you're limited to the size of your value and you must know about every key you care about. On top of that, Memcache doesn't have sets or set operations, so you can't append to a value without fear of nuking some data.
So, lets add a different piece of technology, Redis.
A solution for the transaction problem would be to write them to redis in the event that your MySQL server is not available (or write to both if you really want to, but you don't really need to do that). Then have a background process that knows how to go get the transaction details from redis and write them to your MySQL table(s) when it comes back online. Redis is pretty resilient to crashing, and is capable of operating at huge volumes. It also has set operations so you can easily append data to a set without fear of a race condition during your read/change/write operations.
So, you could store all your transactions in a redis key as a single set (store them as json strings if you like, that'd be pretty easy), then when your DB crashes you can just go get that data from Redis and write it to MySQL when it comes back online.
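A rough sketch of that fallback path, assuming the phpredis extension and hypothetical table and key names:
<?php
// On checkout: try MySQL first, park the record in Redis if MySQL is down.
function storeTransaction(?PDO $mysql, Redis $redis, array $txn): void
{
    try {
        if ($mysql === null) {
            throw new RuntimeException('MySQL unavailable');
        }
        $mysql->prepare(
            'INSERT INTO transactions (user_id, deal_id, amount, created_at)
             VALUES (?, ?, ?, NOW())'
        )->execute([$txn['user_id'], $txn['deal_id'], $txn['amount']]);
    } catch (Exception $e) {
        // Queue the transaction as JSON; a background job replays it later.
        $redis->sAdd('pending_transactions', json_encode($txn));
    }
}

// Background job: drain the Redis set back into MySQL once it is reachable.
function flushPendingTransactions(PDO $mysql, Redis $redis): void
{
    while (($json = $redis->sPop('pending_transactions')) !== false) {
        $txn = json_decode($json, true);
        $mysql->prepare(
            'INSERT INTO transactions (user_id, deal_id, amount, created_at)
             VALUES (?, ?, ?, NOW())'
        )->execute([$txn['user_id'], $txn['deal_id'], $txn['amount']]);
        // Popping before the insert is a simplification; a safer version would
        // move the item to an "in flight" set and delete it only after success.
    }
}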
To keep things simple, if you were going to use Redis to store transactions, you might as well also use it to store your product cache instead of Memcache, and keep the stack simple.
This takes care of not accessing the database for your Product details, and also keeping track of your (potentially) missed transactions, should MySQL crash. But it doesn't handle the problem of keeping track of product inventory while new transactions come in while MySQL is down, and ensuring that you don't over-sell product.
To handle this case, when a transaction is saved, you can decrement the # of products available (keep it as a flat number, so you're not constantly re-calculating it on page-load). This will tell you instantly if the product is oversold or not. However, what this does not do is protect the time that the "product is in the cart." Once the user puts the product in the cart (which you've allowed because you said you have the inventory), you have the problem of making sure it doesn't sell out before they check out.
The solution to this problem also doubles as your solution to the 3rd party transaction problem. So you're using a caching mechanism for your products, and a fall-back mechanism for your transactions. What you should do now is create a "product reservation" whenever a user tries to buy a product (either puts it in the cart, or is shot off to the 3rd party processor). It's probably easiest to make a Redis entry for each of these. Give product reservations an expiry time, say 5 or 10 minutes, maybe even 15 if you like. Every time you see the user on your site, refresh the timeout to make sure they don't run out of time (you can put more logic into this if you desire, obviously). When a transaction completes and changes from pending to paid, you create your transaction record (MySQL or Redis, depending on database availability), decrement your available quantity, and delete your reservation record.
You'd then use your available quantity information, in addition to your un-expired reservation information, to determine the quantity available for sale. If this number ever drops to zero, then you are effectively sold out; but if a certain number of your users don't convert, that frees up the inventory they didn't buy, allowing you to rinse and repeat the process until you are, in fact, sold out.
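A sketch of that reservation bookkeeping with phpredis; the key names, the TTL, and the check-then-set race window are all simplifications:
<?php
const RESERVATION_TTL = 600; // 10 minutes, adjust to taste

// Called when the user puts the deal in the cart or is sent to the processor.
function reserve(Redis $redis, int $dealId, int $userId): bool
{
    if (availableForSale($redis, $dealId) <= 0) {
        return false; // effectively sold out right now
    }
    // One key per reservation; Redis expires it if the user never converts.
    $redis->setex("reservation:$dealId:$userId", RESERVATION_TTL, 1);
    return true;
}

// Refresh the timeout whenever the user is seen on the site again.
function touchReservation(Redis $redis, int $dealId, int $userId): void
{
    $redis->expire("reservation:$dealId:$userId", RESERVATION_TTL);
}

// Called when the payment switches from pending to paid.
function confirmSale(Redis $redis, int $dealId, int $userId): void
{
    $redis->decr("available:$dealId");          // flat counter, no recalculation
    $redis->del("reservation:$dealId:$userId"); // reservation no longer needed
}

// Available for sale = flat availability counter minus un-expired reservations.
function availableForSale(Redis $redis, int $dealId): int
{
    $available = (int) $redis->get("available:$dealId");
    // KEYS is fine for a sketch; use SCAN or a dedicated counter in production.
    $reserved  = count($redis->keys("reservation:$dealId:*"));
    return $available - $reserved;
}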
This is a pretty long explanation of a fairly robust system, and if you ever run into the situation where your MySQL server crashed, AND redis crashed, you'd be kind of screwed; so it makes sense to have a failover of both of those systems here (which is entirely feasible and possible). It should make for a pretty rock solid checkout/inventory management process.
Hope it helps.
Use a master-slave MySQL configuration with separate read/write connections.
Use caching as much as possible (Redis is a good idea).
Try to put some logic into Redis, so it does not need an extra connection to MySQL; it will also be faster.
For transactions, it may be wise to use some kind of message queuing system (RabbitMQ); it will allow you to push some tasks into the background (a small sketch follows this list).
Despite all this optimization, you will still have big problems if the DB, the cache engine, or the MQ fails. But using a master-slave setup for all of these services keeps you reasonably on the safe side, i.e. multiple machines that can keep working when another machine fails.
And that brings me to the next idea: cloud services with auto scaling (like AWS).
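For the message-queue suggestion above, a small sketch with php-amqplib; the queue name and payload are illustrative:
<?php
require __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

// Push the slow part of the checkout to a background worker via RabbitMQ.
$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel    = $connection->channel();
$channel->queue_declare('payment_tasks', false, true, false, false); // durable queue

$payload = json_encode(['user_id' => 42, 'deal_id' => 7, 'amount' => 19.99]);
$channel->basic_publish(
    new AMQPMessage($payload, ['delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT]),
    '',              // default exchange
    'payment_tasks'  // routing key = queue name
);

$channel->close();
$connection->close();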
Have you considered the Compensating Service Transaction pattern?

MySQL Transaction across many PHP Requests

I would like to create an interface for manipulating invoices in a transaction-like manner.
The database consists of an invoices table, which holds billing information, and an invoice_lines table, which holds line items for the invoices. The website is a set of scripts which allow the addition, modification, and removal of invoices and their corresponding lines.
The problem I have is this: I would like the ACID properties of the database to be reflected in the web application.
Atomic: When the user hits save, either the entire invoice is modified or the entire invoice is not changed at all.
Consistent: The application code already ensures consistency, lines cannot be added to non-existent invoices. Invoice IDs cannot be duplicated.
Isolated: If a user is in the middle of a set of changes to an invoice, I would like to hide those changes from other users until the user clicks save.
Durable: If the web site dies, the data should be safe. This already works.
If I were writing a desktop application, it would maintain a connection to the MySQL database at all times, allowing me to simply use the BEGIN TRANSACTION and COMMIT at the beginning and end of the edit.
From what I understand you cannot BEGIN TRANSACTION on one PHP page and COMMIT on a different page because the connection is closed between pages.
Is there a way to make this possible without extensions? From what I have found, only SQL Relay does this (but it is an extension).
You don't want long-running transactions, because they limit concurrency. Look at the command pattern instead: http://en.wikipedia.org/wiki/Command_pattern
The usual web translation of this type of processing is to use session data, or data stored in the page itself. Typically, after each web page is completed, the entered data is stored in the session (or in the page), and once all of the pages have been filled in and a "Process" (or "Save") button is hit, the data is converted into its database form and saved, including relational data like the invoice lines you mentioned. There are many ways to do this, but I would say most developers use an architecture along these lines (session data or state within the page) to satisfy what you are describing.
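A minimal sketch of that pattern, using the invoices and invoice_lines tables from the question; the draft structure, column names, and connection details are only illustrative:
<?php
session_start();

// While editing (e.g. edit.php): accumulate the changes in the session only,
// so other users never see the half-finished invoice.
$_SESSION['invoice_draft']['id']       = (int) $_POST['invoice_id'];
$_SESSION['invoice_draft']['customer'] = $_POST['customer'];
$_SESSION['invoice_draft']['lines'][]  = [
    'description' => $_POST['description'],
    'amount'      => (float) $_POST['amount'],
];

// On "Save" (e.g. save.php): apply the whole draft in one short transaction.
$pdo = new PDO('mysql:host=localhost;dbname=billing', 'dbuser', 'dbpass',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

$draft = $_SESSION['invoice_draft'];
$pdo->beginTransaction();
try {
    $pdo->prepare('UPDATE invoices SET customer = ? WHERE id = ?')
        ->execute([$draft['customer'], $draft['id']]);

    $pdo->prepare('DELETE FROM invoice_lines WHERE invoice_id = ?')
        ->execute([$draft['id']]);

    $insertLine = $pdo->prepare(
        'INSERT INTO invoice_lines (invoice_id, description, amount) VALUES (?, ?, ?)'
    );
    foreach ($draft['lines'] as $line) {
        $insertLine->execute([$draft['id'], $line['description'], $line['amount']]);
    }

    $pdo->commit();                    // atomic: the invoice changes all at once
    unset($_SESSION['invoice_draft']); // discard the draft once it is persisted
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}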
You'll get plenty of advice here on different architectures, but I can say that the Zend Framework (http://framework.zend.com) together with Doctrine (http://www.doctrine-project.org/) makes this fairly easy, since Zend provides much of the MVC architecture and session management and Doctrine provides the basic CRUD (create, retrieve, update, delete) you are looking for, plus the other aspects (uniqueness, commit, rollback, etc.). Keeping a connection open to MySQL may cause timeouts and exhaust the available connections.
Database transactions aren't really intended for this purpose - if you did use them, you'd probably run into other problems.
But you also can't use them here, as each page request (potentially) uses its own connection and therefore cannot share a transaction with any other request.
Keep the modifications to the invoice somewhere else while the user is editing them, then apply them when she hits save; you can do this final apply step in a transaction (albeit quite a short-lived one).
Long-lived transactions are usually bad.
The solution is not to open the transaction during the GET phase. Perform all parts of the transaction (BEGIN TRANSACTION, the processing, and COMMIT) during the POST triggered by the "save" button.
Persistent connections may help you:
http://php.net/manual/en/features.persistent-connections.php
From that manual page: "Another is that when using transactions, a transaction block will also carry over to the next script which uses that connection if script execution ends before the transaction block does."
But I recommend that you find another approach to the problem.
For example: create a cache table.
When you need to "commit", transfer the records from the cache table to the "real" tables.
Although there are already some good answers, I was stuck on the same problem and think I found a good approach to your question. I think the best approach is to use a framework like Doctrine (an O/R mapper) that has this kind of behaviour implemented in some form. Here is a link to what I'm talking about.
