How would I duplicate a MySQL DELETE using an offset in Elasticsearch? - php

I have a MySQL script that takes a database query and cuts off a certain number of rows depending on some settings. So if I have a user with a subscription of 100,000 things, and the user uploads 110,000, the script cuts off the last 10,000.
Here is the MySQL script:
DELETE FROM `my_table`
WHERE id <= (
    SELECT id
    FROM (
        SELECT id
        FROM `my_table`
        WHERE some_id = $this->id
        ORDER BY id DESC
        LIMIT 1 OFFSET $max
    ) sp
)
Where $max is 100,000.
This deletes any extra rows. I have since started implementing Elasticsearch, and I am now trying to duplicate this functionality, but I don't know where to start because I am not that well versed with the software just yet.
I have been looking at the deleteByQuery method in the PHP API, but I don't see anything about offsets or anything like that.
Can someone point me in the right direction?

Try this one; it will delete the extra records:
DELETE FROM my_table WHERE id IN (
    SELECT id FROM (
        SELECT id
        FROM my_table
        WHERE some_id = $this->id
        ORDER BY id ASC
        LIMIT $maxRecordsAllowed, $countHowManyToDelete
    ) sp
)
(The derived-table wrapper is needed because MySQL allows neither LIMIT inside an IN subquery nor selecting from the same table you are deleting from.)
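On the Elasticsearch side there is no offset option in deleteByQuery itself, but you can get the same effect in two steps: search for the surplus document IDs (using from/size) and bulk-delete them. A minimal sketch, assuming the official elasticsearch-php client and a sortable numeric id field; my_index, $someId, and the batch loop are illustrative, not from the original post:
require 'vendor/autoload.php';

$client = Elasticsearch\ClientBuilder::create()->build();

$max   = 100000; // subscription limit
$batch = 1000;   // surplus documents to fetch per pass

do {
    // Skip the $max newest documents for this user; everything past that is surplus.
    $result = $client->search([
        'index' => 'my_index',                    // assumed index name
        'body'  => [
            'query'   => ['term' => ['some_id' => $someId]],
            'sort'    => [['id' => 'desc']],      // assumes a sortable numeric id field
            'from'    => $max,
            'size'    => $batch,
            '_source' => false,
        ],
    ]);

    // Bulk-delete the surplus hits by _id; refresh so the next pass sees the deletions.
    $bulk = ['refresh' => true, 'body' => []];
    foreach ($result['hits']['hits'] as $hit) {
        $bulk['body'][] = ['delete' => ['_index' => 'my_index', '_id' => $hit['_id']]];
    }
    if ($bulk['body']) {
        $client->bulk($bulk);
    }
} while (count($result['hits']['hits']) === $batch);
One caveat: from is capped by index.max_result_window (10,000 by default), so for a 100,000-document cutoff you would either raise that setting or walk the surplus with search_after or the scroll API instead.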

Related

I need to select newest rows from a MySQL database, but verify that I am also returning a row with a given ID

I'm new to this, sorry if the title is confusing. I am building a simple php/mysql gallery of sorts. It will show the newest 25 entries when a user first goes to it, and also allows off-site linking to individual items in the list. If the URL contains an ID, javascript will scroll to it. But if there are 25+ entries, it's possible that my query will fetch the newest results, but omit an older entry that happens to be in the URL as an ID.
That means I need to do something like this...
SELECT * FROM `submissions` WHERE uid='$sid'
But after that has successfully found the submission with the special ID, also do
SELECT * FROM `submissions` ORDER BY `id` DESC LIMIT 0, 25
So that I can populate the rest of the gallery.
I could query that database twice, but I am assuming there's some nifty way to avoid that. MySQL is also ordering everything (based on newest, views, and other vars) and using two queries would break that.
You could limit across a UNION like this:
(SELECT * FROM submissions WHERE uid = '$uid')
UNION
(SELECT * FROM submissions WHERE uid <> '$uid' ORDER BY `id` DESC LIMIT 25)
LIMIT 25
Note that LIMIT appears twice: if the first query returns a result, the union set would otherwise contain 26 rows. This also places the "searched for" item first in the returned result set (with the other 24 results displayed in sort order). If that is not desirable, you could place an ORDER BY across the whole union, but your searched-for result would be truncated if it happened to be the 26th record.
If you need 25 rows with all of them being sorted, my guess is that you would need to do the two query approach (limiting second query to either 24 or 25 records depending on whether the first query matched), and then simply insert the uid-matched result into the sorted records in the appropriate place before display.
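A rough PHP sketch of that two-query approach (the $pdo connection and exact column names are assumptions, not from the original post):
// Fetch the specially linked submission first (it may not exist).
$stmt = $pdo->prepare("SELECT * FROM submissions WHERE uid = ?");
$stmt->execute([$sid]);
$linked = $stmt->fetch(PDO::FETCH_ASSOC);

// Leave room for the linked row if we found one.
$limit = $linked ? 24 : 25;
$stmt = $pdo->prepare("SELECT * FROM submissions WHERE uid <> ? ORDER BY id DESC LIMIT $limit");
$stmt->execute([$sid]);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

// Merge and re-sort so the linked row lands in its natural position.
if ($linked) {
    $rows[] = $linked;
    usort($rows, function ($a, $b) { return $b['id'] - $a['id']; });
}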
I think the better solution is:
SELECT *
FROM `submissions`
ORDER BY (CASE WHEN uid = '$sid' THEN 0 ELSE 1 END),
    id DESC
LIMIT 25
I don't think the union is guaranteed to return results in the order of the union (there is no guarantee in the standard or in other databases).

How to best reorder 100k database rows every hour?

MySQL - I want to reorder a 100k-row table every hour. I have a field called 'order' that I sort by. How can I best reorder it?
I currently do this (pseudo):
mainpage.php : SELECT * FROM `table` ORDER BY `order` DESC LIMIT 100;
and hourly:
cronjob.php :
$rows = $db->query("SELECT id FROM `table` ORDER BY RAND()");
$i = 0;
foreach ($rows as $row) {
    $i++;
    $db->query("UPDATE `table` SET `order` = $i WHERE id = {$row['id']}");
}
but it takes ages.
If I just do 'UPDATE table SET order = RAND()' there will be duplicates, and I don't want order to have duplicates (it isn't set as a UNIQUE index because duplicates would appear while the update is running).
What's the best way to go about doing this?
(I do it this way because just doing "SELECT * FROM table ORDER BY RAND() LIMIT 100" was really slow, whereas sorting on an indexed column is much faster. It just takes quite a while to reorder it.)
(MySQL 5)
Why not use an auto increment field and then generate a list of random values (between the min and max stored) to use when selecting?
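A sketch of that idea (assuming a PDO connection and a reasonably gap-free auto-increment id; names are illustrative):
// Pick random ids between the stored min and max, then fetch whatever exists.
$range = $pdo->query("SELECT MIN(id) AS lo, MAX(id) AS hi FROM `table`")
             ->fetch(PDO::FETCH_ASSOC);

$ids = [];
for ($i = 0; $i < 100; $i++) {
    $ids[] = mt_rand((int) $range['lo'], (int) $range['hi']);
}

$in   = implode(',', array_unique($ids));
$rows = $pdo->query("SELECT * FROM `table` WHERE id IN ($in)")->fetchAll(PDO::FETCH_ASSOC);
// Gaps in the id sequence mean this can return fewer than 100 rows;
// over-generate ids or loop until you have enough.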
MySQL isn't a sequential-on-disk storage anyway. This doesn't get you anything. Doing this might make rows show up in your management client in a certain order, but it won't actually add any speed to anything. Please just add an ORDER BY clause to your select statement.
How about creating another table, my_wierd_sort_table?
Then each hour empty it entirely, re-insert the PK along with a RAND() value, and use that to ORDER BY in your selects...
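A minimal sketch of that hourly rebuild (table and column names follow the suggestion above; the $pdo handle is assumed):
// Rebuild the sort table in two set-based statements instead of 100k UPDATEs.
$pdo->exec("TRUNCATE my_wierd_sort_table");
$pdo->exec("INSERT INTO my_wierd_sort_table (id, sort_key)
            SELECT id, RAND() FROM `table`");

// mainpage.php then becomes:
// SELECT t.* FROM `table` t
// JOIN my_wierd_sort_table s ON s.id = t.id
// ORDER BY s.sort_key LIMIT 100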

Getting random results from large tables

I'm trying to get 4 random results from a table that holds approx 7 million records. Additionally, I also want to get 4 random records from the same table that are filtered by category.
Now, as you would imagine, doing random sorting on a table this large causes the queries to take a few seconds, which is not ideal.
One other method I thought of for the non-filtered result set would be to just get PHP to select some random numbers between 1 and 7,000,000 or so and then do an IN(...) with the query to only grab those rows - and yes, I know that this method has a caveat in that you may get fewer than 4 if a record with that id no longer exists.
However, the above method obviously will not work with the category filtering, as PHP doesn't know which record numbers belong to which category and hence cannot select the record numbers to pull from.
Are there any better ways I can do this? The only way I can think of would be to store the record IDs for each category in another table, select random results from that, and then select only those record IDs from the main table in a secondary query; but I'm sure there is a better way!?
You could of course use the RAND() function in a query with a LIMIT and a WHERE (for the category). That, however, as you pointed out, entails a scan of the table, which takes time, especially in your case due to the volume of data.
Your other alternative, again as you pointed out, of storing id/category_id in another table might prove a bit faster, but again there has to be a LIMIT and WHERE on that table, which will also contain the same number of records as the master table.
A different approach (if applicable) would be to have a table per category and store the IDs in it. If your categories are fixed or do not change often, you should be able to use that approach. In that case you effectively remove the WHERE clause, and a RAND() with a LIMIT on each category table will be faster, since each category table contains only a subset of the records from your main table.
Some other alternatives would be to use a key/value pair database just for that operation. MongoDB or Google AppEngine can help with that and are really fast.
You could also go towards a Master/Slave setup in your MySQL. The slave replicates content in real time, but when you need to perform the expensive query you query the slave instead of the master, thus passing the load to a different machine.
Finally, you could go with Sphinx, which is a lot easier to install and maintain. You can then treat each of those category queries as a document search and let Sphinx randomize the results. This way you offload this expensive operation to a different layer and let MySQL continue with other operations.
Just some issues to consider.
Working off your random number approach:
1. Get the max id in the database.
2. Create a temp table to store your matches.
3. Loop n times, doing the following:
   - Generate a random number between 1 and maxId.
   - Get the first record with an id greater than the random number and insert it into your temp table.
Your temp table now contains your random results. A sketch of this loop follows below.
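A sketch of that loop (PDO and a MySQL temporary table; names are illustrative, not from the original post):
// Temp-table variant: pick 4 random thresholds and grab the first row at or
// above each one. INSERT IGNORE skips the occasional duplicate pick.
$maxId = (int) $pdo->query("SELECT MAX(ID) FROM myTable")->fetchColumn();
$pdo->exec("CREATE TEMPORARY TABLE tmp_random LIKE myTable");

for ($n = 0; $n < 4; $n++) {
    $r = mt_rand(1, $maxId);
    $pdo->exec("INSERT IGNORE INTO tmp_random
                SELECT * FROM myTable WHERE ID >= $r ORDER BY ID LIMIT 1");
}

$rows = $pdo->query("SELECT * FROM tmp_random")->fetchAll(PDO::FETCH_ASSOC);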
Or you could dynamically generate SQL with a UNION to do the query in one step.
(SELECT t.* FROM myTable t JOIN (SELECT FLOOR(RAND() * MAX(ID)) AS r FROM myTable) x
 ON t.ID >= x.r WHERE t.Category = 'zzz' ORDER BY t.ID LIMIT 1)
UNION
(SELECT t.* FROM myTable t JOIN (SELECT FLOOR(RAND() * MAX(ID)) AS r FROM myTable) x
 ON t.ID >= x.r WHERE t.Category = 'zzz' ORDER BY t.ID LIMIT 1)
UNION
(SELECT t.* FROM myTable t JOIN (SELECT FLOOR(RAND() * MAX(ID)) AS r FROM myTable) x
 ON t.ID >= x.r WHERE t.Category = 'zzz' ORDER BY t.ID LIMIT 1)
UNION
(SELECT t.* FROM myTable t JOIN (SELECT FLOOR(RAND() * MAX(ID)) AS r FROM myTable) x
 ON t.ID >= x.r WHERE t.Category = 'zzz' ORDER BY t.ID LIMIT 1)
Note: my SQL may not be valid, as I'm not a MySQL guy, but the theory should be sound. (The derived table pins down one random threshold per branch; since UNION removes duplicates, you may occasionally get fewer than 4 rows back.)
First you need to get the number of rows, something like this:
SELECT COUNT(1) FROM tbl WHERE category = ?
then pick a random offset:
$offset = rand(0, $rowsNum - 1);
and select the row at that offset:
SELECT * FROM tbl WHERE category = ? LIMIT $offset, 1
This way you avoid missing ids. The only problem is that you need to run the second query several times. A UNION may help in this case.
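Put together in PHP, that count-then-offset approach looks roughly like this (PDO assumed; a repeated random offset can return the same row twice, so dedupe if that matters):
$stmt = $pdo->prepare("SELECT COUNT(*) FROM tbl WHERE category = ?");
$stmt->execute([$category]);
$rowsNum = (int) $stmt->fetchColumn();

$rows = [];
for ($i = 0; $i < 4 && $rowsNum > 0; $i++) {
    $offset = rand(0, $rowsNum - 1);          // LIMIT offsets are 0-based
    $q = $pdo->prepare("SELECT * FROM tbl WHERE category = ? LIMIT $offset, 1");
    $q->execute([$category]);
    $rows[] = $q->fetch(PDO::FETCH_ASSOC);
}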
For MySQL you can use RAND():
SELECT column FROM table
ORDER BY RAND()
LIMIT 4

Optimizing ORDER BY LIMIT queries in MySQL

In my web app I made an internal messaging system. I want to place a 'previous' and a 'next' link on each page (where the user is viewing the message).
In order to get the next and previous id I execute two queries:
For the previous one:
SELECT id FROM pages WHERE (id<$requestedPageId) ORDER BY id DESC LIMIT 1
And for the next one:
SELECT id FROM pages WHERE (id>$requestedPageId) ORDER BY id LIMIT 1
EXPLAIN says the query type is "range" and the rows column says it would examine all rows that have a smaller or bigger id than the page's id (a big number). The Extra column says "Using where".
It seems MySQL ignores that I want only one row. Isn't MySQL smart enough to optimize this kind of query so it would find the row for the page and search for the first matching row back/forth?
Is there a better approach to get the next and previous page's id?
Additional notes:
- This problem seems to exist with every ORDER BY ... LIMIT type of query (e.g. when I split a long list across multiple pages).
- The WHERE clause is not this simple (I want to let the user access only the next/previous page he has permission to access; no joins though).
- All columns appearing in the WHERE clause are indexed (id is the primary key).
- Variables are protected against injection.
EDIT1:
So the query I'm currently using is:
SELECT id
FROM reports
WHERE (id<$requestedPageId) AND ((isPublic=1) OR (recipientId=$recipient))
ORDER BY id DESC
LIMIT 1
Or when I re-factor it as the answer said:
SELECT MAX(id)
FROM reports
WHERE (id<$requestedPageId) AND ((isPublic=1) OR (recipientId=$recipient))
For the previous
SELECT MAX(id) FROM pages WHERE id<$requestedPageId
And for the next
SELECT MIN(id) FROM pages WHERE id>$requestedPageId
The database is behaving as expected. Your query is a range query because of the less-than symbol (id < $requestedPageId). The OR statement makes it harder to use a single index to find the results. And, sorting the results means it has to get all matching rows to perform the sort, even though you only want 1 row.
You're not going to be able to make this a "const" type query, but you may be able to optimize it using indexes, sub-queries, and/or union statements.
Here is one query to rule them all. I'm not saying this is the best solution, but just one way of approaching the problem. To start, this query will work better if you create two indexes, one on recipientId and another on isPublic.
SELECT
    GREATEST(
        ( SELECT MAX( id ) FROM reports
          WHERE id < $requestedPageId AND recipientId = $recipient ),
        ( SELECT MAX( id ) FROM reports
          WHERE id < $requestedPageId AND isPublic = 1 )
    ) AS prev_id,
    LEAST(
        ( SELECT MIN( id ) FROM reports
          WHERE id > $requestedPageId AND recipientId = $recipient ),
        ( SELECT MIN( id ) FROM reports
          WHERE id > $requestedPageId AND isPublic = 1 )
    ) AS next_id
(One caveat: GREATEST and LEAST return NULL if any argument is NULL, so wrap the subqueries in COALESCE if one side can come up empty.)
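For reference, the two indexes suggested above could be created as follows; appending id to each one lets MySQL resolve the MAX/MIN lookups from the index alone (the index names are made up, and the $pdo handle is assumed):
// One-off schema change, e.g. in a migration script.
$pdo->exec("CREATE INDEX idx_reports_recipient ON reports (recipientId, id)");
$pdo->exec("CREATE INDEX idx_reports_public    ON reports (isPublic, id)");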

How to select the most recent 10 records

I have a MySQL database. How do I select the most recent 10 records? I'm not storing timestamps, but the most recent records are the ones at the bottom, right? Also, how do I get the next ten, the next ten after that, and so on, on clicking a button? Kind of like a bunch of forum posts - the recent ones show up first.
I believe you have an auto-increment column as a primary key; you can use this column and order by it descending:
SELECT * FROM table ORDER BY id DESC LIMIT 10
Otherwise you have a very poor database design.
If you have an AUTO_INCREMENT column you can order by that in descending order then limit by 10.
But I suggest you store timestamps and order by that instead so you know you're sorting your records according to date, and not some other value that coincides with date of insertion.
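For the "next ten on a button click" part, the usual pattern is LIMIT/OFFSET paging over the same ordering; a minimal sketch (the $pdo connection and the posts table name are assumptions):
// Page 0 shows the newest 10, page 1 the next 10, and so on.
$page   = isset($_GET['page']) ? max(0, (int) $_GET['page']) : 0;
$offset = $page * 10;

$stmt = $pdo->query("SELECT * FROM posts ORDER BY id DESC LIMIT 10 OFFSET $offset");
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
// Link the "next" button to ?page=<current page + 1>.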
In addition to what @BoltClock mentioned, pre-querying the maximum ID might help the engine limit which other records are retrieved; i.e., if you have a million records and want the most recent 10, I don't know if it will still try to query out the million, order them, and THEN dump them. I would try something like
SELECT STRAIGHT_JOIN
    YT.*
FROM
    ( SELECT MAX( IDColumn ) AS MaxOnFile
      FROM YourTable ) PreQuery,
    YourTable YT
WHERE
    YT.IDColumn >= PreQuery.MaxOnFile - 10
ORDER BY
    YT.IDColumn DESC
LIMIT 10
However, if for some reason records are allowed to be deleted, you may opt to subtract a little farther back than the -10... but at least this way, the system won't even TRY to process all the other records...
