The situation is something like the following:
1- A MySQL InnoDB table undergoes a transactional SELECT as follows:
<?php
....
doQuery('START TRANSACTION');
$sql = "SELECT * FROM table WHERE amount < 10 FOR UPDATE";
$res = doQuery($sql);
// Then loop through $res, updating some fields (the amount field) in the same table
// and setting them to values greater than 10.
// After the loop:
doQuery('COMMIT');
On my XAMPP localhost I opened two different browser windows, Firefox and Opera, requesting the script URL at the same time. I expected that only one of them would be able to retrieve values for $res. However, the script returns a fatal error:
Fatal error: Maximum execution time of 30 seconds exceeded
I need to know the cause of this error. Is it because the two clients, Firefox and Opera, are not able to select, or because they are not able to update?
I also need a solution that keeps the transaction and gives me the expected result, i.e. only one browser returns results!
You could just add set_time_limit(0); at the top of the script, but that's not a good solution for scripts accessible via HTTP.
Your script enters a deadlock. To avoid this, add an ORDER BY to the query, to ensure that both queries try to lock the records in the same order. Also make sure there is an index on amount, otherwise the query will have to lock the entire table.
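For example (a sketch only; it assumes the table really is named `table` and has an `id` primary key):
-- Index so the locking read does not have to lock the whole table.
ALTER TABLE `table` ADD INDEX idx_amount (amount);
-- With ORDER BY, concurrent transactions lock the matching rows in the same order.
SELECT * FROM `table` WHERE amount < 10 ORDER BY id FOR UPDATE;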
So I have a custom artisan command that I wrote to slug a column and store it into a new column. I have a progress bar implemented, and for some reason, when the command reaches 50% completion it jumps to 100%. The issue is that it has only executed the code on half of the data.
I am using the chunk() function to break the data into chunks of 1,000 rows to eliminate memory exhaustion issues. This is necessary because my dataset is extremely large.
I have looked into my PHP error logs, MySQL error logs, and Laravel logs. I can't find any error or log line pertaining to this command. Any ideas on where to even start looking for the issue?
$jobTitles = ModelName::where($columnName, '<>', '')
    ->whereNull($slugColumnName)
    ->orderBy($columnName)
    ->chunk(1000, function ($jobTitles) use ($jobCount, $bar, $columnName, $slugColumnName) {
        foreach ($jobTitles as $jobTitle) {
            $jobTitle->$slugColumnName = Str::slug($jobTitle->$columnName);
            $jobTitle->save();
        }
        $bar->advance(1000);
    });

$bar->finish();
What's happening is that the whereNull($slugColumnName), in combination with the callback setting $slugColumnName, leads to missed results on subsequent chunks.
The order of events is something like this:
Get first set of rows: select * from table where column is null limit 100;
For each of the rows, set column to a value.
Get the next set of rows: select * from table where column is null limit 100 offset 100;
Continue and increase the offset until no more results.
The problem here is that after the second step you have removed 100 results from the total. Say you begin with 1000 matching rows; by the time of the second query only 900 rows still match.
This makes the offset appear to skip an entire chunk: the second query starts at row 100 of the remaining set, even though the first 100 of those rows have not been touched yet.
For more official documentation, please see the section of the Laravel docs on chunking.
I have not tested this to verify it works as expected for your use-case, but it appears that using chunkById will account for this issue and correct your results.
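A sketch of what that change might look like with your variable names (untested; note that chunkById pages by the primary key, so the orderBy($columnName) is dropped):
ModelName::where($columnName, '<>', '')
    ->whereNull($slugColumnName)
    ->chunkById(1000, function ($jobTitles) use ($bar, $columnName, $slugColumnName) {
        foreach ($jobTitles as $jobTitle) {
            $jobTitle->$slugColumnName = Str::slug($jobTitle->$columnName);
            $jobTitle->save();
        }
        // Advance by the actual chunk size in case the last chunk is short.
        $bar->advance(count($jobTitles));
    });

$bar->finish();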
Hi, I have a bunch of unique codes in a database which should only be used once.
Two users hit the script that assigns them at the same time and both got the same codes!
The script is in Magento and the user can order multiple codes. The issue is that if one customer orders 1000 codes, the script grabs the top 1000 codes from the DB into an array and then runs through them, setting them to "Used" and assigning them to an order. If a second user hits the same script at a similar time, it grabs the top 1000 codes in the DB at that point, which overlap because the first script hasn't had a chance to finish assigning them.
This is unfortunate but has happened quite a few times!
My idea was to create a new table: once the user hits the script, a row is made with "order_id" and "code_type". Then, in the same script, a check is done; if a row exists in this new table and its "code_type" matches the one the user is ordering, it waits 60 seconds and checks again until the previous codes are issued and the table is empty, at which point it creates a row and off it goes.
I am not sure if this is the best way, or whether, if two users hit at the same second again, two rows will just be inserted and off we go with the same problem!
Any advice is much appreciated!
The correct answer depends on the database you use.
For example, in MySQL with InnoDB a possible solution is a transaction with SELECT ... LOCK IN SHARE MODE.
Schematically it works by firing the following queries:
START TRANSACTION;
SELECT * FROM codes WHERE used = 0 LIMIT 1000 LOCK IN SHARE MODE;
-- save the ids
UPDATE codes SET used = 1 WHERE id IN ( ...ids.... );
COMMIT;
More information at http://dev.mysql.com/doc/refman/5.7/en/innodb-locking-reads.html
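A rough PDO sketch of that scheme, assuming a $dbh connection and the codes table above; it uses FOR UPDATE rather than LOCK IN SHARE MODE because the selected rows are updated inside the same transaction:
<?php
// Sketch only: assumes a PDO connection in $dbh and a `codes` table
// with `id` and `used` columns, as in the schematic queries above.
$dbh->beginTransaction();
try {
    // The locking read blocks a concurrent request from grabbing
    // the same unused rows until this transaction commits.
    $ids = $dbh->query("SELECT id FROM codes WHERE used = 0 LIMIT 1000 FOR UPDATE")
               ->fetchAll(PDO::FETCH_COLUMN);
    if ($ids) {
        $in = implode(',', array_fill(0, count($ids), '?'));
        $dbh->prepare("UPDATE codes SET used = 1 WHERE id IN ($in)")
            ->execute($ids);
    }
    $dbh->commit();
} catch (Exception $e) {
    $dbh->rollBack();
    throw $e;
}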
I have around 700 - 800 visitors at all times on my home page (according to analytics) and a lot of hits in general. However, I wish to show live statistics of my users and other stuff on my homepage. I therefore have this:
$stmt = $dbh->prepare("
SELECT
count(*) as totalusers,
sum(cashedout) cashedout,
(SELECT sum(value) FROM xeon_stats_clicks
WHERE typ='1') AS totalclicks
FROM users
");
$stmt->execute();
$stats=$stmt->fetch();
Which I then use as $stats["totalusers"] etc.
table.users has `22210` rows, with an index on `id, username, cashedout`; `table.xeon_stats_clicks` has an index on `value` and `typ`.
However, whenever I enable the above query my website instantly becomes very slow. As soon as I disable it, the load time drops drastically.
How else can this be done?
You should not do it that way. You will eventually exhaust your precious DB resources, as you are now experiencing. The good way is to run a separate cron job at a 30-second or 1-minute interval, and then write the result down to a file:
file_put_contents('stats.txt', $stats["totalusers"]);
and then on your mainpage
<span>current users :
<b><?php echo file_get_contents('stats.txt'); ?></b>
</span>
The beauty is that the HTTP server will cache this file, so until stats.txt is changed, a copy will be sitting in the cache too.
Example of saving / loading JSON via a file:
$test = array('test' => 'qwerty');
file_put_contents('test.txt', json_encode($test));
echo json_decode(file_get_contents('test.txt'))->test;
will output qwerty. Replace $test with $stats, as mentioned in the comments:
echo json_decode(file_get_contents('stats.txt'))->totalclicks;
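Putting the pieces together, the cron script could look roughly like this (assuming the same $dbh PDO connection as in the question):
<?php
// Cron script, run e.g. every minute: writes the stats to stats.txt as JSON.
$stats = $dbh->query("
    SELECT
        count(*) AS totalusers,
        sum(cashedout) AS cashedout,
        (SELECT sum(value) FROM xeon_stats_clicks WHERE typ = '1') AS totalclicks
    FROM users
")->fetch(PDO::FETCH_ASSOC);

file_put_contents('stats.txt', json_encode($stats));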
From what I can tell, there is nothing about this query that is specific to any user on the site. So if you have this query being executed for every user that makes a request, you are making thousands of identical queries.
You could do a sort of caching like so:
Create a table that basically looks like the output of this query.
Make a PHP script that just executes this query and updates the aforementioned table with the latest result.
Execute this PHP script as a cron job every minute to update the stats.
Then the query that gets run for every request can be really simple, like:
SELECT totalusers, cashedout, totalclicks FROM stats_table
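A rough sketch of that caching table and the cron update (column types are only a guess; adjust to your schema):
-- Create once, and seed with a single row for the cron job to refresh.
CREATE TABLE stats_table (
    totalusers  INT UNSIGNED    NOT NULL DEFAULT 0,
    cashedout   DECIMAL(12,2)   NOT NULL DEFAULT 0,
    totalclicks BIGINT UNSIGNED NOT NULL DEFAULT 0
);
INSERT INTO stats_table VALUES (0, 0, 0);

-- Run from the cron job, e.g. once per minute.
UPDATE stats_table
SET totalusers  = (SELECT count(*) FROM users),
    cashedout   = (SELECT COALESCE(sum(cashedout), 0) FROM users),
    totalclicks = (SELECT COALESCE(sum(value), 0) FROM xeon_stats_clicks WHERE typ = '1');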
From the query, I can't see any real reason to use a sub-query in there, as it doesn't use any of the data in the users table, and it's likely that this is slowing it down - if memory serves me right, it will query that xeon_stats_clicks table once for every row in your users table (which is a lot of rows by the looks of things).
Try doing it as two separate queries rather than one.
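For example, split roughly like this (same $dbh and $stats usage as in the question):
<?php
// Two separate, simpler queries instead of one query with a sub-select.
$stats = $dbh->query("SELECT count(*) AS totalusers, sum(cashedout) AS cashedout FROM users")
             ->fetch(PDO::FETCH_ASSOC);

$stats['totalclicks'] = $dbh->query("SELECT sum(value) FROM xeon_stats_clicks WHERE typ = '1'")
                            ->fetchColumn();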
I'm developing a web application with PHP and MySQL, and in it I have a situation where I have to limit the number of records inserted into a table.
...
const MAX = 10;

if (/*record count query*/ < $this::MAX) {
    /*insert query*/
}
...
For test purpose I'm triggering this code just by using GET request from the browser.
When I press the F5 key (refresh) continuously for about 5 seconds, the count exceeds MAX.
But when I go one by one, the count stays within the limit.
This shows that when I press F5 continuously, the count query gets executed while the insert query is still running. I have no idea how to solve this problem; some guidance would be helpful to me.
You have to LOCK the table so no other process writes to it while you are getting the current count. Otherwise there is always the risk that another process inserts data at the same moment.
For performance reasons, you may use another table just as a counter, which you will lock during those operations.
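A minimal sketch with LOCK TABLES, assuming a PDO connection in $dbh and a hypothetical records table (the column names are only illustrative):
<?php
const MAX = 10;

// Block concurrent writers while we count, so the check and the insert are atomic.
$dbh->exec("LOCK TABLES records WRITE");

$count = (int) $dbh->query("SELECT COUNT(*) FROM records")->fetchColumn();
if ($count < MAX) {
    $dbh->exec("INSERT INTO records (col1, col2) VALUES ('a', 'b')");
}

$dbh->exec("UNLOCK TABLES");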
I have a cron task running every x seconds on n servers. It will "SELECT FROM table WHERE time_scheduled<CURRENT_TIME" and then perform a lengthy task on this result set.
My problem is now: how do I avoid having two separate servers perform the same task at the same time?
The idea is to update *time_scheduled* with a set interval after selecting it. But if two servers happen to run the query at the same time, that will be too late, no?
All ideas are welcome. It doesn't have to be a strictly MySQL solution.
Thanks!
I am guessing you have a single MySQL instance, and connections from your n servers to run this processing job. You're implementing a job queue here.
The table you mention needs to use the InnoDB access method (or one of the other transaction-friendly access methods offered by Percona or MariaDB).
Do these items in your table need to be processed in batches? That is, are they somehow inter-related? Or is it possible for your server processes to handle them one-by-one? This is an important question, because you'll get better load balancing between your server processes if you can handle them individually or in small batches. Let's assume the small batches.
The idea is to prevent any server process from grabbing onto a row in your table if some other server process has that row. I've had to do this kind of thing a lot, and here is my suggestion; I know this works.
First, add an integer column to your table. Call it "working" or some such thing. Give it a default value of zero.
Second, assign a permanent id number to each server. The last part of the server's IP address (for example, if the server's IP address is 10.1.0.123, the id number is 123) is a good choice, because it's probably unique in your environment.
Then, when a server's grabbing work to do, use these two SQL queries.
UPDATE table
SET working = :this_server_id
WHERE working = 0
AND time_scheduled < CURRENT_TIME
ORDER BY time_scheduled
LIMIT 1
SELECT table_id, whatever, whatever
FROM table
WHERE working = :this_server_id
The first query will consistently grab a batch of rows to work on. If another server process comes in at the same time, it won't ever grab the same rows, because no process can grab rows unless working = 0. Notice that the LIMIT 1 will limit your batch size. You don't have to do this, but you can. I also threw in ORDER BY to process the rows first that have been waiting the longest. That's probably a useful way to do things.
The second query retrieves the information you need to do the work. Don't forget to retrieve the primary key values (I called them table_id) for the rows you're working on.
Then, your server process does whatever it needs to do.
When it's done, it needs to throw the row back into the queue for a later time. To do that, the server process needs to set the time_scheduled to whatever it needs to be, then to set working = 0. So, for example, you could run this query for each row you're processing.
UPDATE table
SET time_scheduled = CURRENT_TIME + INTERVAL 5 MINUTE,
working = 0
WHERE table_id = ?table_id_from_previous_query
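Putting the three queries together, one pass of the cron job might look roughly like this (a PDO sketch; `jobs` stands in for your table and $serverId is this server's permanent id):
<?php
// Grab at most one pending row for this server.
$dbh->prepare("UPDATE jobs SET working = :id
               WHERE working = 0 AND time_scheduled < CURRENT_TIME
               ORDER BY time_scheduled LIMIT 1")
    ->execute(['id' => $serverId]);

// Fetch whatever this server has claimed.
$claimed = $dbh->prepare("SELECT table_id FROM jobs WHERE working = :id");
$claimed->execute(['id' => $serverId]);

foreach ($claimed->fetchAll(PDO::FETCH_ASSOC) as $row) {
    // ... do the lengthy task for $row ...

    // Reschedule the row and release it back into the queue.
    $dbh->prepare("UPDATE jobs
                   SET time_scheduled = CURRENT_TIME + INTERVAL 5 MINUTE, working = 0
                   WHERE table_id = :tid")
        ->execute(['tid' => $row['table_id']]);
}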
That's it.
Except for one thing. In the real world these queuing systems get fouled up sometimes. Server processes crash. Etc. Etc. See Murphy's Law. You need a monitoring query. That's easy in this system.
This query will give a list of all jobs that are more than five minutes overdue, along with the server that's supposed to be working on them.
SELECT working, COUNT(*) stale_jobs
FROM table
WHERE time_scheduled < CURRENT_TIME - INTERVAL 5 MINUTE
GROUP BY working
If this query comes up empty, all is well. If it comes up with lots of jobs with working set to zero, your servers aren't keeping up. If it comes up with jobs with working set to some server's id number, that server is taking a lunch break.
You can reset all the jobs assigned to the server that's gone to lunch with this query, if need be.
UPDATE table
SET working=0
WHERE working=?server_id_at_lunch
By the way, a compound index on (working, time_scheduled) will probably help this perform well.
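For example (with `table` standing in for your table's real name):
ALTER TABLE `table` ADD INDEX idx_working_scheduled (working, time_scheduled);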