Does (or, how does) MySQL natively take care of concurrent transactions? - php

Maybe this is an obvious question, but it's just something I'm unsure of. If I have two standalone PHP applications running on one LAMP server, and the two PHP applications share the same MySQL database, do I need to worry about data integrity during concurrent database transactions, or is this something that MySQL just takes care of "natively"?
What happens if the two PHP applications both try to update the same record at the same time? What happens if they try to update the same table at the same time? What happens if they both try to read data from the database at the same time? Or if one application tries to read a record at the same time as the other application is updating that record?

What happens if the two PHP applications both try to update the same record at the same time?
What happens if they try to update the same table at the same time?
What happens if they both try to read data from the database at the same time?
Or if one application tries to read a record at the same time as the other application is updating that record?
This depends on several factors:
the DB engine you are using;
the locking policy / transaction isolation level you have set for your environment, or for your query:
https://dev.mysql.com/doc/refman/8.0/en/innodb-locking-reads.html
https://dev.mysql.com/doc/refman/8.0/en/innodb-locks-set.html
the code you are using: you could use SELECT ... FOR UPDATE to lock only the rows you want to modify:
https://dev.mysql.com/doc/refman/8.0/en/update.html
and how you manage transactions:
https://dev.mysql.com/doc/refman/8.0/en/commit.html
This is just a brief overview.
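As a minimal sketch (assuming a PDO connection and a hypothetical InnoDB `accounts` table), locking a row with SELECT ... FOR UPDATE inside a transaction looks roughly like this:

<?php
// Sketch: lock a single row while two PHP apps may update it concurrently.
// Assumes a hypothetical InnoDB table accounts(id INT PRIMARY KEY, balance DECIMAL).
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$pdo->beginTransaction();
try {
    // Other transactions trying to lock this row will wait until we commit.
    $stmt = $pdo->prepare('SELECT balance FROM accounts WHERE id = ? FOR UPDATE');
    $stmt->execute([42]);
    $balance = $stmt->fetchColumn();

    $stmt = $pdo->prepare('UPDATE accounts SET balance = ? WHERE id = ?');
    $stmt->execute([$balance + 100, 42]);

    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}

While the transaction holds the lock, a second application trying to lock the same row simply waits until the first one commits or rolls back.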

Related

MySQL and UNIX_TIMESTAMP insert error

I have a problem with a project I am currently working on, built in PHP & MySQL. The project itself is similar to an online bidding system. Users bid on a project, and they get a chance to win if they follow their bid by clicking and clicking again.
The problem is this: if, for example, 5 users enter the game at the same time, I get an 8-10 second delay in the database - I update the database using UNIX_TIMESTAMP(CURRENT_TIMESTAMP), which makes the whole bidding system useless.
I want to mention too that the project is very database intensive (around 30-40 queries per page) and I was thinking maybe the queries get delayed, but I'm not sure if that's happening. If that's the case though, any suggestions on how to avoid this type of problem?
Hope I've been at least clear with this issue. It's the first time it happened to me and I would appreciate your help!
You can decide on:
Optimizing or minimizing the required queries.
Caching queries that do not need to be refreshed on each visit.
Using summary tables.
Updating the queries only when the data changes.
You have to do this cleverly. You can follow the MySQLPerformanceBlog.
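For the summary-table idea, a minimal sketch (the daily_stats table and its columns are hypothetical) is a single upsert per change:

<?php
// Sketch: keep a per-day counter in a summary table instead of re-counting rows.
// Assumes a hypothetical table daily_stats(day DATE PRIMARY KEY, hits INT).
$pdo = new PDO('mysql:host=localhost;dbname=game', 'user', 'pass');
$pdo->prepare(
    'INSERT INTO daily_stats (day, hits) VALUES (CURDATE(), 1)
     ON DUPLICATE KEY UPDATE hits = hits + 1'
)->execute();

Reads then hit the small summary table instead of re-aggregating the large one on every page view.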
I'm not clear on what you're doing, but let me elaborate on what you said. If you're using UNIX_TIMESTAMP(CURRENT_TIMESTAMP()) in your MySQL query, you have a serious problem.
The problem with your approach is that you are using MySQL functions to supply the timestamp that gets stored in the database. This is an issue because you then have to wait for MySQL to parse and execute your query before that timestamp is ever generated (and some MySQL engines, like MyISAM, use table-level locking, while others, like InnoDB, have slower writes due to row-level locking granularity). This means the time stored in the row will not necessarily reflect the time the request to insert that row was made. It can also mean that the time you read back from the database is not necessarily the most current value (assuming records are updated after they were inserted).
What you need is for the PHP request that generates the SQL query to provide the timestamp directly in the SQL query. That way the timestamp reflects the time the request is received by PHP, not the time the row eventually gets inserted/updated in the database.
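For example, a sketch of that (the bids table and column names are hypothetical): generate the timestamp in PHP and bind it as a parameter instead of calling UNIX_TIMESTAMP() inside the query.

<?php
// Sketch: the timestamp is taken the moment PHP handles the request,
// not when MySQL finally gets around to executing the query.
// The `bids` table and its columns are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=game', 'user', 'pass');

$bidId = 123;     // example id
$now   = time();  // Unix timestamp generated by PHP

$stmt = $pdo->prepare('UPDATE bids SET last_bid_at = ? WHERE id = ?');
$stmt->execute([$now, $bidId]);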
You also have to be clear about which MySQL engine your table is using. For example, engines like InnoDB use MVCC (Multi-Version Concurrency Control). This means a row can be written to while it is being read: the engine keeps the previous version of the row (in its undo log) so the reader gets a consistent value while the new value is being written. That way you get row-level locking with faster and more stable reads, but potentially slower writes.

Update a database on server from multiple local databases

I am building a web-based ERP application for the retail industry using PHP and MySQL. I am going to have different local databases and one on the server (same structure). What I plan to do is run this app on localhost in the different stores and, at the end of the day, update the database on the server from the different localhosts in the different stores.
Remember, I would like to update the database on the server based on the sequence in which the queries were run in the different local databases.
Can anyone please help me with this?
Thank you.
Perhaps link to your main database from the localhost sites to begin with? Then there is no need to update at the end of the day: every change is simply made to the main database directly, with no "middle men", so to speak. If you need the local databases to stay separate, run the queries on both at once?
Note: I'm unfamiliar with how an ERP application works, so forgive me if I'm way off base here.
You may have to log every INSERT/UPDATE/DELETE SQL request in a daily file, with the timestamp of the request, on the local databases.
Example :
2012-03-13 09:15:00 INSERT INTO...
2012-03-13 09:15:02 UPDATE MYTABLE SET...
2012-03-13 09:15:02 DELETE FROM...
...
Then send your log files to the main server daily, merge all the files, sort them to preserve execution order, and read the resulting file to execute the requests against the main database.
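A rough sketch of that merge-and-replay step on the main server (the file layout and timestamp prefix follow the example above; the paths and everything else are assumptions):

<?php
// Sketch: merge the daily log files from every store, sort them by the
// timestamp prefix to preserve execution order, and replay each statement
// against the central database. Ties within the same second would still
// need a sequence number to be fully deterministic.
$lines = [];
foreach (glob('/var/log/erp/store-*-2012-03-13.log') as $file) { // hypothetical path
    foreach (file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        $lines[] = $line; // each line starts with "YYYY-MM-DD HH:MM:SS "
    }
}
sort($lines); // lexicographic sort on the timestamp prefix

$pdo = new PDO('mysql:host=central;dbname=erp', 'user', 'pass');
foreach ($lines as $line) {
    $pdo->exec(substr($line, 20)); // strip the "YYYY-MM-DD HH:MM:SS " prefix (20 chars)
}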
However, it's a curious way to do things in an ERP application. Product stock information, for example, can't simply be merged - it is shared data - so be careful with this kind of information.
You also can't use auto-increment columns with this process: it will cause duplicate-key errors on some requests, or updates applied to the wrong records.

Best practice to record large amount of hits into MySQL database

Well, this is the thing. Let's say that my future PHP CMS needs to handle 500k visitors daily, and I need to record them all in a MySQL database (referrer, IP address, time, etc.). That means I need to insert 300-500 rows per minute and update 50 more. The main problem is that the script would call the database every time I want to insert a new row, which is every time someone hits a page.
My question: is there any way to cache incoming hits locally first (and what is the best solution for that - APC, CSV...?) and periodically send them to the database, every 10 minutes for example? Is this a good solution, and what is the best practice for this situation?
500k hits daily is just 5-7 queries per second. If each request is served in 0.2 sec, you will have almost no simultaneous queries, so there is nothing to worry about.
Even if you have 5 times more users, everything should work fine.
You can just use INSERT DELAYED and tune your MySQL.
About tuning: http://www.day32.com/MySQL/ - there is a very useful script there (it changes nothing, it just shows you tips on how to optimize your settings).
You can use memcache or APC to write a log there first, but with INSERT DELAYED MySQL will do almost the same work, and will do it better :)
Do not use files for this. The DB will handle locks much better than PHP. It's not trivial to write effective mutexes, so let the DB (or memcache, APC) do this work.
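A minimal sketch of that (the hits table is hypothetical; keep in mind that INSERT DELAYED only works with MyISAM, was deprecated in MySQL 5.6 and is ignored by later versions):

<?php
// Sketch: fire-and-forget hit logging with INSERT DELAYED.
// The `hits` table and its columns are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=stats', 'user', 'pass');

$stmt = $pdo->prepare(
    'INSERT DELAYED INTO hits (ip, referrer, hit_time) VALUES (?, ?, ?)'
);
$stmt->execute([
    $_SERVER['REMOTE_ADDR'],
    $_SERVER['HTTP_REFERER'] ?? '',
    time(),
]);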
A frequently used solution:
You could implement a counter in memcached which you increment on each visit, and push an update to the database for every 100 (or 1000) hits.
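A rough sketch of that counter (the key name, the threshold and the summary table are assumptions):

<?php
// Sketch: count hits in memcached and flush to MySQL every 100 hits.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$mc->add('hit_counter', 0);              // create the key if it does not exist yet
$count = $mc->increment('hit_counter');  // atomic increment

if ($count !== false && $count % 100 === 0) {
    $pdo = new PDO('mysql:host=localhost;dbname=stats', 'user', 'pass');
    $pdo->prepare('UPDATE page_stats SET hits = hits + ? WHERE page = ?')
        ->execute([100, '/index.php']);  // hypothetical summary table
}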
We do this by storing locally on each server to CSV, then having a minutely cron job to push the entries into the database. This is to avoid needing a highly available MySQL database more than anything - the database should be able to cope with that volume of inserts without a problem.
Save them to a directory-based database (or a flat file, it depends) somewhere and, at a certain time, use PHP code to insert/update them into your MySQL database. That PHP code can be executed periodically using cron, so check whether your server has cron available so you can set the schedule for that, say every 10 minutes.
Have a look at this page: http://damonparker.org/blog/2006/05/10/php-cron-script-to-run-automated-jobs/. Some of the code has already been written and is ready for you to use :)
One way would be to use the Apache access.log. You can get quite fine-grained logging by using the cronolog utility with Apache. Cronolog will handle the storage of a very large number of rows in files, and can rotate them based on volume, day, year, etc. Using this utility will prevent Apache from suffering from log writes.
Then, as said by others, use a cron-based job to analyse these logs and push whatever summarized or raw data you want into MySQL.
You may think of using a dedicated database (or even a dedicated database server) for write-intensive jobs, with specific settings. For example, you may not need InnoDB storage and can keep simple MyISAM tables. You could even consider another database engine altogether (as said by @Riccardo Galli).
If you absolutely HAVE to log directly to MySQL, consider using two databases. One optimized for quick inserts, which means no keys other than possibly an auto_increment primary key. And another with keys on everything you'd be querying for, optimized for fast searches. A timed job would copy hits from the insert-only to the read-only database on a regular basis, and you end up with the best of both worlds. The only drawback is that your available statistics will only be as fresh as the previous "copy" run.
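The timed copy job could be as simple as this sketch (the database and table names are hypothetical, and it assumes both databases live on the same server):

<?php
// Sketch: cron job that copies new rows from the insert-optimized database
// to the indexed, read-optimized one. All names are hypothetical.
$pdo = new PDO('mysql:host=localhost', 'user', 'pass');

// Find the highest id already copied into the read database.
$lastId = (int) $pdo->query('SELECT COALESCE(MAX(id), 0) FROM stats_read.hits')->fetchColumn();

$stmt = $pdo->prepare(
    'INSERT INTO stats_read.hits SELECT * FROM stats_write.hits WHERE id > ?'
);
$stmt->execute([$lastId]);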
I have also previously seen a system which records the data into a flat file on the local disc on each web server (be careful to do only atomic appends if using multiple processes), and periodically writes it asynchronously into the database using a daemon process or cron job.
This appears to be the prevailing optimum solution: your web app remains available if the audit database is down, and users don't suffer poor performance if the database is slow for any reason.
The only thing I can say is: be sure that you have monitoring on these locally-generated files - a build-up definitely indicates a problem, and your Ops engineers might not otherwise notice.
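A minimal sketch of the atomic-append part (the log path and the fields are assumptions):

<?php
// Sketch: append one hit per line; FILE_APPEND + LOCK_EX keeps concurrent
// PHP processes from interleaving their writes.
$line = implode("\t", [
    date('c'),
    $_SERVER['REMOTE_ADDR'],
    $_SERVER['HTTP_REFERER'] ?? '',
    $_SERVER['REQUEST_URI'],
]) . "\n";

file_put_contents('/var/log/myapp/hits.log', $line, FILE_APPEND | LOCK_EX); // hypothetical path
// A cron job or daemon later reads this file and bulk-inserts the rows into MySQL.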
For a high number of write operations and this kind of data, you might find MongoDB or CouchDB more suitable.
Because INSERT DELAYED is only supported by MyISAM, it is not an option for many users.
We use MySQL Proxy to defer the execution of queries matching a certain signature.
This will require a custom Lua script; example scripts are here, and some tutorials are here.
The script will implement a Queue data structure for storage of query strings, and pattern matching to determine what queries to defer. Once the queue reaches a certain size, or a certain amount of time has elapsed, or whatever event X occurs, the query queue is emptied as each query is sent to the server.
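MySQL Proxy itself is scripted in Lua, but the queue idea it implements looks roughly like this sketch in PHP (the class name, thresholds and flush policy are all assumptions):

<?php
// Sketch of the idea only: buffer matching queries and flush the whole
// queue once it reaches a size or age threshold.
class DeferredQueryQueue
{
    private array $queue = [];
    private int $firstQueuedAt = 0;

    public function __construct(
        private PDO $pdo,
        private int $maxSize = 100,
        private int $maxAgeSeconds = 60,
    ) {}

    public function push(string $sql): void
    {
        if ($this->queue === []) {
            $this->firstQueuedAt = time();
        }
        $this->queue[] = $sql;

        if (count($this->queue) >= $this->maxSize
            || time() - $this->firstQueuedAt >= $this->maxAgeSeconds) {
            $this->flush();
        }
    }

    public function flush(): void
    {
        foreach ($this->queue as $sql) {
            $this->pdo->exec($sql);
        }
        $this->queue = [];
    }
}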
you can use a Queue strategy using beanstalk or IronQ

Accessing MySQL from PHP and another process at the same time

I'm writing a program that runs (24/7) on a Linux server and adds entries to a MySQL database.
The contents of the database are presented on a web interface with PHP and the user should be able to delete entries using the web interface.
Is it possible to access the database from multiple processes at the same time?
Yes, databases are designed for this purpose quite well. You'll want to keep a few things in mind in your designs:
Concurrency and race conditions on database writes.
Performance.
Separate database permissions for separate applications.
Unless you're doing something like sharing the connection through a singleton, the maximum number of simultaneous MySQL connections PHP will open is limited - by mysqli.max_links in your php.ini and by the MySQL server's own max_connections setting (which defaults to roughly 100-150).
Yes multiple users can access the database at the same time.
You should however take care that the data is consistent.
If you create/edit an entry with many small SQL statements and in the meantime someone uses the web interface, this may lead to errors.
If you have a simple DB this should not be a problem; otherwise you should consider using transactions.
http://dev.mysql.com/doc/refman/5.0/en/ansi-diff-transactions.html
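For example, a sketch of wrapping the related statements in one transaction (the table names are hypothetical):

<?php
// Sketch: either all of these statements become visible to the web
// interface, or none of them do.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$pdo->beginTransaction();
try {
    $pdo->prepare('INSERT INTO entries (title) VALUES (?)')->execute(['new entry']);
    $entryId = (int) $pdo->lastInsertId();

    $pdo->prepare('INSERT INTO entry_details (entry_id, body) VALUES (?, ?)')
        ->execute([$entryId, '...details...']);

    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack(); // the half-finished entry never becomes visible
    throw $e;
}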
Yes, and there will not be any problems when trying to delete records in the presence of that automated program running 24/7, provided you are using the InnoDB engine. This is because InnoDB isolates concurrent transactions from each other: conflicting changes to the same rows are serialized by row locks, so the database stays consistent at all times.
This answer How to implement the ACID model for a database has many relevant points.
Read about the ACID properties of a database. A MySQL database with the InnoDB engine will take care of all of these things for you, and you need not worry about them.

PHP MySQL and Queues, Table Locking, Reader/Writer Problem

I have following Scenario:
PHP(Server, Writer) ----> MySQL Database <------ PHP(Client, Reader/ Writer);
PHPS = PHP Server
PHPC = PHP Client
How it works?
PHPS writes data to temporary database tables (queue_*).
PHPC is triggered by a 1 hour cron.
PHPC starts, connects to the database and caches all records locally (how? no idea - a local MySQL DB? SQLite?)
PHPC executes the tasks defined in those records one by one
if a task is successful, it removes it from the database
if it is unsuccessful, it adds that record to a reports table in the database.
How do I implement this such that
No half-written records from PHPS get to PHPC.
PHPC can cache all records locally after one query to process them.
Any other ideas that you might have and share are highly appreciated.
MySQL's default locking will ensure that no "half-written" rows are fetched. As far as "caching locally", it seems like all that means in your scenario is reading them out of the database into a local PHP data structure like an array.
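A sketch of what that can look like for the queue tables (the queue_tasks and reports table names, and the runTask() helper, are hypothetical):

<?php
// Sketch: PHPC reads all pending queue rows in one query and "caches" them
// in a local PHP array. SELECT ... FOR UPDATE keeps a second client run
// from grabbing the same rows before this run commits.
function runTask(array $task): bool
{
    // hypothetical: execute whatever the queue row describes
    return true;
}

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$pdo->beginTransaction();
$tasks = $pdo->query('SELECT * FROM queue_tasks FOR UPDATE')->fetchAll(PDO::FETCH_ASSOC);

foreach ($tasks as $task) {
    if (runTask($task)) {
        $pdo->prepare('DELETE FROM queue_tasks WHERE id = ?')->execute([$task['id']]);
    } else {
        $pdo->prepare('INSERT INTO reports (task_id, status) VALUES (?, ?)')
            ->execute([$task['id'], 'failed']);
    }
}
$pdo->commit();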
You can read about MySQL locking here: Locking in MySQL. Remember to unlock the tables after you finish writing data.
