MySQL SELECT returning old value - php

I have an application that unfortunately uses legacy mysql_* functions with MyISAM tables (bleagh...), so I cannot use transactions. I have code that gets the current balance, checks whether this balance is okay, and if so, subtracts a quantity and saves the new balance.
The problem is, I have recently seen an instance where two queries grab the same starting balance, subtract a quantity, then record a new balance. Since they both grabbed the same starting balance, the ending balance after both UPDATES is wrong.
100 - 10 = 90
100 - 15 = 85
When it should be...
100 - 10 = 90
90 - 15 = 75
These requests executed several minutes apart, so I do not believe the discrepancy is due to a race condition. My initial thought is that the MySQL query cache is storing the result of the identical initial query that gets the balance. I read, however, that this type of cache is invalidated if any relevant table is modified.
I will most likely fix this by putting everything into one query, but I would still like to figure this out. It mystifies me. If the cache is invalidated when a table is modified, then what happened shouldn't have happened. Has anyone heard of something like this, or have any ideas as to why it may have happened?

It's highly unlikely to be a query cache - MySQL is smart enough to invalidate a cache entry if the underlying data set has been modified by another query. If the query cache kept around old stale values long past their expiration, MySQL would be utterly useless.
Do you perhaps have outstanding uncommitted transactions causing this? Without the appropriate locks on the relevant records, your second query could be grabbing stale data quite easily.

Most likely your application has stale data. This is fine; it's how many database applications work. But when you perform your update, instead of doing something like this:
UPDATE account
SET balance = :current_balance - 10
WHERE account_id = 1
You need to do something more like this:
UPDATE account
SET balance = balance - 10
WHERE account_id = 1
That way, you use the current balance from the database, even if somebody changed it in the meantime, instead of relying on stale application data.
If you want to only change the value if no one else has modified it, then you do something like this:
UPDATE account
SET balance = balance - 10
WHERE account_id = 1
AND balance = :current_balance
If the number of affected rows is 1, then you succeeded: the record hadn't been changed by someone else. However, if the number of affected rows is 0, then somebody else changed the record, and you can decide what to do from there.
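For example, a minimal sketch of that affected-rows check with mysqli (the connection $con, the account table, and the column names are assumptions carried over from the examples above, not the asker's actual schema):
$account_id      = 1;
$current_balance = 100;  // balance the application read earlier
$amount          = 10;

$stmt = mysqli_prepare(
    $con,
    "UPDATE account
        SET balance = balance - ?
      WHERE account_id = ?
        AND balance = ?"
);
mysqli_stmt_bind_param($stmt, 'did', $amount, $account_id, $current_balance);
mysqli_stmt_execute($stmt);

if (mysqli_stmt_affected_rows($stmt) === 1) {
    // Success: the balance was still what we read earlier.
} else {
    // Somebody else changed the record in the meantime:
    // re-read the balance and retry, or report the conflict.
}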

Locking the tables is the solution to your problem, I think :)
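If you do go that route, a rough sketch of the read-check-write sequence under a MyISAM table lock might look like this (table and column names are assumed from the examples above):
LOCK TABLES account WRITE;

SELECT balance FROM account WHERE account_id = 1;
-- ... the application checks whether the balance is sufficient ...
UPDATE account SET balance = balance - 10 WHERE account_id = 1;

UNLOCK TABLES;
Keep the locked section as short as possible, since every other connection is blocked from the table while the lock is held.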

Related

MySQL (MariaDB) execution timeout within query called from PHP

I'm stress testing my database for a geolocation search system. It has a lot of optimisation built in already, such as a square box long/lat index system to narrow searches before performing arc distance calculations. My aim is to serve 10,000,000 users from one table.
At present my query time is between 0.1 and 0.01 seconds based on other conditions such as age, gender etc. This is for 10,000,000 users evenly distributed across the UK.
I have a LIMIT condition as I need to show the user X people, where X can be between 16 and 40.
The issue is that when there are few or no other users that match, the query can take a long time, as it cannot reach the LIMIT quickly and may have to scan 400,000 rows.
There may be other optimisation techniques which I can look at, but my question is:
Is there a way to get the query to give up after X seconds? If it takes more than 1 second then it is not going to return results and I'm happy for this to occur. In pseudo query code it would be something like:
SELECT data FROM table WHERE ....... LIMIT 16 GIVEUP AFTER 1 SECOND
I have thought about a cron solution to kill slow queries but that is not very elegant. The query will be called every few seconds when in production so the cron would need to be on continuously.
Any suggestions?
Version is 10.1.14-MariaDB
Using MariaDB 10.1, you have two ways of limiting your query: based on time, or on the total number of rows examined.
By rows:
SELECT ... LIMIT ROWS EXAMINED rows_limit;
You can use the ROWS EXAMINED clause and set a limit such as the 400,000 rows you mentioned (available since MariaDB 10.0).
By time:
If the max_statement_time variable is set, any query (excluding stored procedures) taking longer than the value of max_statement_time (specified in seconds) to execute will be aborted. This can be set globally, by session, as well as per user and per query.
If you want it for a specific query, as I imagine, you can use this:
SET STATEMENT max_statement_time=1 FOR
SELECT field1 FROM table_name ORDER BY field1;
Remember that max_statement_time is specified in seconds (the opposite of MySQL's equivalent, which uses milliseconds), so you can tune it until you find the best fit for your case (available since MariaDB 10.1).
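From PHP, a timed-out statement simply comes back as an error, which you can treat as "no results". A rough sketch with mysqli ($con, geo_table and the data column are placeholders, not your actual search query):
$sql = "SET STATEMENT max_statement_time=1 FOR
        SELECT data FROM geo_table WHERE /* search conditions */ 1 LIMIT 16";

$result = mysqli_query($con, $sql);

if ($result === false) {
    // The statement was aborted by the time limit (or failed for another
    // reason); check mysqli_error($con) if you need to tell the cases apart.
    $rows = array();
} else {
    $rows = mysqli_fetch_all($result, MYSQLI_ASSOC);
}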
If you need more information, I recommend this excellent post about query timeouts.
Hope this helps you.

Automatic deletion of SQL records over 5 seconds old using phpMyAdmin

So I'm using WampServer with the default phpMyAdmin to store this SQL table called Typing.
Table: Typing
Now I want to set the typing column to 0 for any row whose typing column was set to 1 more than five seconds ago.
For example, I just set the typing column to 1 for the first row; my database detects the time since this 1 was written, then sets a 5-second timer to revert that 1 back to a 0. If the 1 is overwritten with another 1 during that time, the timer should reset.
How should I go about this? Should I have a column for a 'timestamp' of each record? How do I make my database constantly check for entries older than 5 seconds without user input? Do I need an always on PHP script or a database trigger and how would I go about that?
As @JimL suggested, it might be a bit too ambitious to purge the records after only five seconds.
It might be helpful to have more information about what you're trying to accomplish, but I'll answer in a generic way that should cover your question.
The way I would handle this is to have any queries check for records that are less than five seconds old (I assume you're querying the data and only want records that are less than five seconds old; otherwise I'm not really following the point of your question).
Once a day, or hourly if you have that much data, you can run a scheduled job (scheduled through MySQL itself, not through cron/Windows Scheduled Tasks) to purge the old records. You can use phpMyAdmin to set that up (the "Events" tab), although it's actually a MySQL feature that doesn't require phpMyAdmin.
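A rough sketch of the raw SQL that such a scheduled event boils down to (the table and column names follow the question and the code further down; the schedule and the five-second threshold are illustrative):
SET GLOBAL event_scheduler = ON;   -- the scheduler must be enabled once

CREATE EVENT purge_old_typing
    ON SCHEDULE EVERY 1 HOUR
    DO
        -- anything older than the five-second window is already stale
        DELETE FROM `typing`
        WHERE TIMESTAMPDIFF(SECOND, recordDate, CURRENT_TIMESTAMP) > 5;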
I got it, I added a timestamp to each record and used this code:
mysqli_query($con,"DELETE FROM `typing` WHERE TIMESTAMPDIFF(SECOND, recordDate, CURRENT_TIMESTAMP) > 1");
It's not a cron job though, so it only runs when someone is accessing the site, but it's good enough for what I need. Thanks for the help everyone :)

Auto-increment 5 numbers off?

I'm using a database for a project where I'm inserting things, and I have an auto-increment column. However, seemingly at random, the auto-increment ID started counting wrong.
For example, the last one in the table has an ID of 227. When I insert another row, it should auto-assign it as 228, but instead it jumps to 232. How do I fix this?
Auto-increment doesn't mean use the next highest number in the table.
There are a few reasons why the auto-increment number is non-contiguous.
The auto-increment step may not be 1
The previous rows may have been deleted
Transactions inserting into the table may have been rolled back
An existing record's auto-increment value may have been manually updated
The auto-increment start index may have been changed by a DDL modification.
There are probably a few other scenarios that could cause this as well.
To answer the question of "How do I fix this?": don't bother. The ID is supposed to be unique and nothing else; having contiguous IDs is usually not that useful (unless you're doing paging that assumes they are contiguous, which is a bad assumption to make).
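That said, if you want to rule out the first reason above, the increment step is easy to check with standard MySQL commands (your_table below is a placeholder):
-- auto_increment_increment > 1 (common in replicated setups) makes IDs step by more than 1
SHOW VARIABLES LIKE 'auto_increment%';

-- The Auto_increment column shows the next value the table will hand out
SHOW TABLE STATUS LIKE 'your_table';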
Auto-increment numbers can be reserved by the DB server ahead of time. Instead of getting just the one asked for, a DB will sometimes grab, say, 5 (or 10, or 100) and update the long-term store of the next number to be assigned to next + 5 (or 10, or 100). Then the next time a number is needed, it's already in memory, which is much faster, especially as the server does not have to write the next value to disk.
Timetable:
Asked for next number
Hasn't got one in memory so:
Go to disk to get the next (call it a)
Update disk with a + 100
Give the asker a
Store a+1 in memory as the next to give out
Asked for next number
Gives it a+1 and stores a+2 in memory
...and so on.
However, if the DB server is stopped before it has given out all the reserved numbers, then when it restarts it will look up the next number and find a+100 - hence the gap. Auto-numbers are generally guaranteed to be handed out as increasing values, and should always be unique (watch what happens when the maximum value is reached, though). They are usually, but not always, sequential.
(Oracle does the same with sequences. It's been a while since I used them, but with a sequence you can specify a cache size. The only way to guarantee sequential assignment is a cache size of zero, which is slower.)

Hourly points added to users - PHP and MySQL solution?

On the website I'm working on I need to add user points. Every user will have their own points, and the maximum number of points will be 200. Upon registration a user gets 100 points. With various tasks, user points will be deducted.
But the main problem I'm struggling with is how to add points to a user, since every user needs to get 1 point every hour unless they have 200 or more points.
My first thought was to set up a cron job that runs a script every hour which checks whether each user is verified and has fewer than 200 points, and adds 1 point to every such user.
But after some reading I'm considering a different approach which I don't quite understand yet. The better, less resource-consuming approach would be to run a function every time a user logs in that checks how many points they have and adds the appropriate number of points. The problem is I don't know how to set this up: how do I calculate how many points to add if the user was offline for, say, 8 hours, and how do I time it? Or should I maybe even use AJAX with a timer?
What would be your suggestion and approach to this?
Edit: Just to add, since you asked: users don't see each other's points.
When a user does something, check the last time you gave them points. If it was 5 hours ago, give them 5 points. If it was 10 hours ago, give them 10 points. Etc. Implement caching so if a user visits your site 50 times in one hour, you don't have to check against the DB every time.
Anyway, the short answer is: do the check when loading the user data, rather than automatically every hour for all users, whether they are active or not.
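A rough sketch of that check-on-load approach, assuming a last_points_update column on the users table (that column, $con and $user_id are assumptions; fractions of an hour are simply discarded here):
// points is assigned before last_points_update, so the calculation still
// sees the old timestamp; one point per full hour, capped at 200.
$sql = "UPDATE users
           SET points = LEAST(points + TIMESTAMPDIFF(HOUR, last_points_update, NOW()), 200),
               last_points_update = NOW()
         WHERE id = ?
           AND TIMESTAMPDIFF(HOUR, last_points_update, NOW()) >= 1";

$stmt = mysqli_prepare($con, $sql);
mysqli_stmt_bind_param($stmt, 'i', $user_id);
mysqli_stmt_execute($stmt);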
UPDATE users
SET points = LEAST(points + 1, 200)  -- LEAST caps the result at 200
I don't really see a problem with this running as a cron job. It would be more of a problem if you recorded each point change as a transaction row, since you'd have to run something like:
# Generates a row each hour per uncapped user, which may become a lot
INSERT INTO transactions (user_id, points, type, created)
SELECT id, 1, 'HOURLY_INCOME', NOW()
FROM users
WHERE points < 200
Is it relevant for other users, or for official/unofficial statistics, to check what a user's current points are? This is quite relevant, since the approach won't work fully if it only updates upon login.
user_table
---------------
id | reg_date
1 | 2013-10-10 21:10:15
2 | 2013-10-11 05:56:32
Just look at how many hours have passed since user registration and add 100 points:
SELECT
TIMESTAMPDIFF(HOUR, `reg_date`, NOW())+100 AS `p`
FROM
user_table
WHERE
id = 1
And then check in PHP: if the result is more than 200, just show 200.
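For instance, a tiny sketch of that cap in PHP ($con is assumed; the query matches the example above):
$result = mysqli_query(
    $con,
    "SELECT TIMESTAMPDIFF(HOUR, `reg_date`, NOW()) + 100 AS `p`
       FROM user_table
      WHERE id = 1"
);
$row    = mysqli_fetch_assoc($result);
$points = min((int) $row['p'], 200);  // never show more than 200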
Hmm, since MySQL 5.1 there is a neat feature which is basically a MySQL cron, called the MySQL Event Scheduler, and I think I'll go with that for now since the script will be very easy to write, small and not time-consuming.
All I need to do is write
UPDATE users SET points = (points +1) WHERE points<200
And add it as a MySQL event recurring every hour.
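For reference, a minimal sketch of that event (the event name is arbitrary, and the server's event scheduler must be enabled):
CREATE EVENT hourly_user_points
    ON SCHEDULE EVERY 1 HOUR
    DO
        UPDATE users SET points = (points + 1) WHERE points < 200;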

Mysql - Summary Tables

Which method do you suggest and why?
Creating a summary table and . . .
1) Updating the table as the action occurs in real time.
2) Running group by queries every 15 minutes to update the summary table.
3) Something else?
The data must be near real time, it can't wait an hour, a day, etc.
I think there is a 3rd option, which might allow you to manage your CPU resources a little better. How about writing a separate process that periodically updates the summarized data tables? Rather than recreating the summary with a group by, which is GUARANTEED to run slower over time because there will be more rows every time you do it, maybe you can just update the values. Depending on the nature of the data, it may be impossible, but if it is so important that it can't wait and has to be near-real-time, then I think you can afford the time to tweak the schema and allow the process to update it without having to read every row in the source tables.
For example, say your data is just login_data (cols username, login_timestamp, logout_timestamp). Your summary could be login_summary (cols username, count). Once every 15 mins you could truncate the login_summary table, and then insert using select username, count(*) kind of code. But then you'd have to rescan the entire table each time. To speed things up, you could change the summary table to have a last_update column. Then every 15 mins you'd just do an update for every record newer than the last_update record for that user. More complicated of course, but it has some benefits: 1) You only update the rows that changed, and 2) You only read the new rows.
And if 15 minutes turned out to be too old for your users, you could adjust it to run every 10 mins. That would have some impact on CPU of course, but not as much as redoing the entire summary every 15 mins.
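A rough sketch of that incremental refresh for the hypothetical login_data / login_summary tables above, simplified to a single global cut-off instead of a per-user last_update (column names and a unique key on username are assumptions):
-- Read the cut-off once; in practice you might keep it in a small control table.
SET @last_run = (SELECT COALESCE(MAX(last_update), '1970-01-01') FROM login_summary);

-- Only the new rows are read; existing summary rows are incremented.
INSERT INTO login_summary (username, `count`, last_update)
SELECT username, COUNT(*), NOW()
FROM login_data
WHERE login_timestamp > @last_run
GROUP BY username
ON DUPLICATE KEY UPDATE
    `count`     = `count` + VALUES(`count`),
    last_update = VALUES(last_update);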
