PHP parsing speed optimization

I would like to add tooltips or generate links according to the entries available in the database. For example, if the HTML page printed is:
to reboot your linux host in single-user mode you can ...
I will use explode(" ", $row['page']), and the idea is to look up every single word of the page to find out whether it has a related reference. In this example, let's say I've got a reference table with one entry for reboot and one for linux:
reboot: restart a computeur
linux: operating system
Now my output will look like this (with < and > replaced by #):
to #a href="ref/reboot"#reboot#/a# your #a href="ref/linux"#linux#/a# host in single-user mode you can ...
Instead of having a static list generated when I save the content, if I add more keywords in the future the text will become more interactive.
My main concern and question is: how can I create an efficient enough process to do this?
Should I store all the DB entries in an array and compare them?
Do an SQL query for each word (seems crazy)?
Dump the table to a file and use a very long regex, or a "grep -f pattern data" way of doing it?
Or... I'm sure there must be a better way of doing it, I just don't have a clue about it. Or maybe this will be far too resource-unfriendly and I should avoid doing such things altogether.
Cheers!

Depending on the number of keywords in the DB, there are two solutions:
1. If the number of keywords is smaller than the number of words in the text, just pull all the keywords from the DB and compare them against the text.
2. If the number of keywords is larger than the number of words in the text, dynamically create a single query which fetches all the necessary words, e.g. SELECT * FROM keywords WHERE keyword='system' OR keyword='linux' etc.
However, if you are really concerned about resources, I would suggest you create a caching system: process each page once, then store both the original text and the result in the DB. If the keyword table is updated, you can reprocess all the pages again.
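A minimal sketch of the single-query approach, assuming a PDO connection and a reference table with keyword and description columns (all names are just for illustration):
// $pdo is an existing PDO connection; `reference` and its columns are illustrative.
$words = array_unique(explode(' ', $row['page']));

// One parameterized query instead of one query per word.
$placeholders = implode(',', array_fill(0, count($words), '?'));
$stmt = $pdo->prepare("SELECT keyword, description FROM reference WHERE keyword IN ($placeholders)");
$stmt->execute(array_values($words));
$found = $stmt->fetchAll(PDO::FETCH_KEY_PAIR);   // keyword => description

// Rebuild the page, linking every word that has a reference entry.
$output = array();
foreach (explode(' ', $row['page']) as $word) {
    if (isset($found[$word])) {
        $output[] = '<a href="ref/' . rawurlencode($word) . '" title="'
                  . htmlspecialchars($found[$word]) . '">' . htmlspecialchars($word) . '</a>';
    } else {
        $output[] = htmlspecialchars($word);
    }
}
echo implode(' ', $output);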

I would add an additional field to each article containing the 'keyword table version' that was used to process that article.
Each time a user opens an article, compare this version with the version of the keyword list. If it is outdated, process the article and save the result to the articles table; otherwise just show the article.
You can control the load by adding a date column for the processing time and checking it as well. If the item is relatively fresh, you may want to postpone the processing. You may also compare the version difference: if it is greater than 5 or 10, for instance, update the article. If an important keyword is added, just increase the version of the keywords table by 10 and all your articles will be forced to update.
The main idea is to distribute the load across user requests and cache the results.
If your system is heavily loaded, you may want to use a random number generator so that the article is only updated with, say, a 10% chance.
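A minimal sketch of that lazy reprocessing, assuming a hypothetical articles.keyword_version column, a one-row settings table holding the current keyword list version, and add_keyword_links() standing in for whatever processing function you use:
// Hypothetical schema: articles(id, body, processed_body, keyword_version), settings(keyword_version).
$currentVersion = (int) $pdo->query("SELECT keyword_version FROM settings")->fetchColumn();
$stmt = $pdo->prepare("SELECT * FROM articles WHERE id = ?");
$stmt->execute(array($id));
$article = $stmt->fetch(PDO::FETCH_ASSOC);

if ($article['keyword_version'] < $currentVersion && mt_rand(1, 100) <= 10) {
    // Outdated: reprocess now (only a 10% chance, to spread the load), then store the result.
    $processed = add_keyword_links($article['body']);
    $update = $pdo->prepare("UPDATE articles SET processed_body = ?, keyword_version = ? WHERE id = ?");
    $update->execute(array($processed, $currentVersion, $article['id']));
    $article['processed_body'] = $processed;
}
echo $article['processed_body'];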

You can have an index of keywords stored somewhere statically (in a database, a file, or an array). When the content is updated, you can rebuild or update the index accordingly. You just have to make sure it can be looked up very quickly.
Once you have it, you can check very quickly whether a given word has an entry, because the index is optimized for exactly this.
I would store the index as a sorted list in a file and look words up using binary search. This is a simple solution, and I think it should be quick enough if there is not too much data to process.
Or maybe you can send the list of words in the article to the database in one SQL query and have it return the entries that match any of the words in the list.
Also, after the article is processed, cache the result, so that on subsequent requests for the same article you can serve the processed version instead of processing it every time.
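A minimal sketch of the sorted-file lookup, assuming a file keywords.txt with one keyword per line, already sorted ascending (the file name is just an example):
// Load the sorted keyword index once per request.
$keywords = file('keywords.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

// Classic binary search over the sorted list.
function keyword_exists(array $sorted, $word)
{
    $lo = 0;
    $hi = count($sorted) - 1;
    while ($lo <= $hi) {
        $mid = (int) (($lo + $hi) / 2);
        $cmp = strcmp($word, $sorted[$mid]);
        if ($cmp === 0) {
            return true;
        } elseif ($cmp < 0) {
            $hi = $mid - 1;
        } else {
            $lo = $mid + 1;
        }
    }
    return false;
}

var_dump(keyword_exists($keywords, 'linux'));   // true if "linux" is in the index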

Related

Reduce join table with performing pagination, order by?

Sorry, this may be a noob question, but I don't know how to search for this.
Use case
A full-site search function: when the user inputs a keyword and submits the form, the system should search both the title and content of forum, blog, and product pages. The search results for all those page types should be displayed in one single list with pagination. The user can also choose to order the results by relevance or recency.
What I did
I am using LAMP. I have data tables for those three page types, and I have made the title and content columns index keys.
I knew that joining tables is a very bad idea, so I made three separate queries for searching the forum, blog, and products. I get all the data into PHP, turn it into arrays, and write a function that computes a relevance value for every row of the search results. For recency, there is an "updateDate" column in all those tables, so that is OK.
Now I have three nice arrays. I can merge them and sort() them easily, and I can render pagination with array_slice().
What makes me frown
Unnecessary performance waste. Yes, what I did can do all the things in the use case, but (I don't know how to do better, I am a beginner) I am sure the performance can be a lot better.
After the first query, all the data we need has already been fetched from the database. But with my solution, whenever the user clicks to another page of the search results, or changes the "sort by", PHP starts over again and does the [SQL query, relevance function, merge] again. Can I somehow store the result array somewhere, so the system can save some work for the next user action?
Most users will not click through all pages of the search results. I would guess 90% of users will not keep looking after the 10th page, which means (maybe) the first 200 records. So, can I do anything to stop the SQL query early instead of fetching all results?
Furthermore, as the traffic grows, some keywords may become common and be searched repeatedly. What can I do to reduce the repetition of those searches? (Please slap me if you think I am thinking too much.)
Thank you for reading this. Please correct me if my concept is incorrect, or tell me if I missed something in this use case. Thank you, and may God's love be with you.
Edit: I am not using any PHP framework.
To give you the full story would probably be like writing a book. Here are some extracted thoughts:
Fully blown page indicators cost you extra result-set counts. Just present a "Next" button, which can be built with select ... limit [nr_of_items_per_page+1] and then if (isset($result[nr_of_items_per_page])) output the next button (see the sketch after this list).
These days network traffic costs are not as high as ten years ago, and users demand more. Increase your nr_of_items_per_page to 100, 200, 500 (depending on the data size per record).
Zitty Yams' comments work out. I have loaded >10000 records in one go to a client and presented them piece by piece; it just rocks. E.g. a list of 10000 names with 10 characters on average makes just 100000 bytes. Most of the images you get on the net are bigger than that. Of course there are limits...
PHP caching via $_SESSION works as well; however, keep in mind that each byte reserved for PHP cannot be dedicated to the database (at least not on a shared server). As long as not all data in the database fits into memory, in most cases it is more efficient to extend database memory rather than increase PHP caches or OS caches.
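A minimal sketch of the LIMIT n+1 "Next" button from the first point above, assuming a PDO connection and a hypothetical posts table (table and column names are illustrative, not from the question):
// $pdo is an existing PDO connection; `posts`/`updateDate` are illustrative names.
$perPage = 100;
$page    = isset($_GET['page']) ? max(1, (int) $_GET['page']) : 1;
$offset  = ($page - 1) * $perPage;
$limit   = $perPage + 1;   // fetch one extra row to detect whether a next page exists

$rows = $pdo->query("SELECT id, title FROM posts ORDER BY updateDate DESC LIMIT $limit OFFSET $offset")
            ->fetchAll(PDO::FETCH_ASSOC);

$hasNext = isset($rows[$perPage]);          // the extra row exists => there are more pages
$rows    = array_slice($rows, 0, $perPage); // show only the requested page

// ... render $rows ...
if ($hasNext) {
    echo '<a href="?page=' . ($page + 1) . '">Next</a>';
}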

Computing word, image, video and audio file counts in a scalable way?

I am attempting to gather as much interesting metadata as possible to display for readers of an ExpressionEngine site I'm developing, and am looking for guidance on methods (or indeed the feasibility) of computing specific bits of this metadata in a scalable way.
Expression Engine allows for quite a few bits of data to be gathered and displayed natively, for example post totals and dates, comment totals and dates, tag totals, etc. However I'm specifically interested in finding a method to count and display totals for data like number of words, images, videos, or audio files, not only within individual posts but across a channel, as well as site-wide.
These totals would be displayed contextually depending on where they were accessed. So for example search results would display the number of words/images/etc contained in individual posts, a channel's "about" page would display totals for the entire channel, and the site's "about" page would display site-wide totals. I'm not clear on the best approach or whether this is even really feasible.
I'm not a professional web designer, so my knowledge of anything beyond HTML5/CSS3/EE is somewhat limited, but I've pondered:
Entering these numbers on a per-post basis, in custom fields, but am not clear on whether they can be added together for channel and site-wide totals.
Using PHP's "count" functions, but I am not very familiar with PHP, so I'm unsure whether that's appropriate.
Using some MySQL method to query the database; again, unsure.
Utilizing the ExpressionEngine "Query Module"!?
Using some jQuery plug-in to do the counting individually and then adding it up after the fact.
It may be that the counting of words, images, video, and audio files and the scalability are different questions altogether, but the truth is I'm very confused as to which avenue to even explore. So any and all suggestions or guidance would be greatly appreciated.
Update: I'm looking into database methods to collect and add the results but am still interested in identifying the best ways to actually perform the word/image/video/audio file counts.
There are many solutions, but I have a few in mind that may help you out. I'll just show the one I like best, which I even use for my own site.
One solution is to add a count column to the tables you are interested in that is automatically updated when someone posts or does something. You can also make a new table called globalcount or whatever that counts everything site-wide; this can then simply be displayed later. You would first need a method/function for counting words and such if you want that info. Then, when someone makes a post, just count one up from the previous value.
The above is what I use. I use a misc table (it has one row that contains all the data; you could instead make each row a 'name'/'value' pair) that looks something like:
(`views`, `totalusers`, `totalgroups`, `totalthreads`, `totalposts`, `totalarticles`, `totalcomments`, `totalpms`, `activeusercount`)
And in something like my 'news' table I use a 'totalcomments' column to count the local comments posted on that article, so I have both local and global comment counts.
In my case, if I wanted to update 'totalusers' in the 'misc' table after a new user registers, I'd just call my $misc array and go: $newtotalusers = intval($misc['totalusers'] + 1);
mysql_query("UPDATE `misc` SET `totalusers`='$newtotalusers'");
Or you could instead just use "totalusers+1".
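For example, an atomic increment avoids reading the old value first, so two simultaneous registrations cannot overwrite each other's update (shown in the same mysql_* style as above, although those functions are deprecated in current PHP):
// Atomic increment: the database does the arithmetic, no read-then-write race.
mysql_query("UPDATE `misc` SET `totalusers` = `totalusers` + 1");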
The same can be done with anything else you wish to count, such as any file count, and vice versa. Hope this helps :)
One last thing: you could also make a script that, in case the counts drift because of an error, recalculates and fixes any table's count values.

Basic version control for MySQL table

I'm trying to set up a (I thought) fairly simple versioning system for static HTML pages on a site. The goal is to keep previous versions of the content, then restore them if needed (I guess basically creating a new version that's a duplicate of an old one), and optionally to toss out data older than X versions ago.
The table's setup is fairly straightforward:
id
reference_id (string/used to determine what page the item pertains to)
content (document/html page sized amount of data)
e_user (user who changed it last)
e_timestamp (when it was changed)
I just want to have something set up to create a previous version on each edit to the content, then be able to restore to it if needed.
What's the best method for accomplishing this? Should everything be in the same table, or spread across a few different ones?
I read through a few pages on the subject, but a lot of them seemed like overkill for what I'm trying to accomplish (e.g. http://www.jasny.net/articles/versioning-mysql-data/).
Are there any platforms/guides out there that will help me in this endeavor?
Ideally you would want everything in the same table, with something in your query to get the correct version; however, you should be careful how you do this, as an inefficient query will put extra load on your server. If normally you would select a single page like this:
SELECT * FROM your_table WHERE reference_id = 42
This would then become:
SELECT * FROM your_table
WHERE reference_id = 42
AND e_timestamp < '2010-10-12 15:23:24'
ORDER BY e_timestamp DESC
LIMIT 1
Index (reference_id, e_timestamp) to allow this to perform efficiently.
Selecting multiple rows in a single query is trickier and requires a groupwise-maximum approach, but it can be done.
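A sketch of that groupwise-maximum query, returning the latest version of every page (column names as in the question's table; treat it as a starting point):
-- Join each row against the newest e_timestamp for its reference_id
-- and keep only the rows that match, i.e. the current version of every page.
SELECT t.*
FROM your_table t
JOIN (
    SELECT reference_id, MAX(e_timestamp) AS latest
    FROM your_table
    GROUP BY reference_id
) newest
  ON newest.reference_id = t.reference_id
 AND newest.latest = t.e_timestamp;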
You can use a technique called "auditing". You set up audit tables, then either write it into your code or set up triggers on the DB side so that every time a change is made, an entry is added to the appropriate audit table. Then you can go back through the audit table and see things like:
"Oh, yesterday Sue went in and fixed a typo"
"Uh oh, Steve wiped out an entire paragraph by accident earlier today while trying to rewrite this section"
Your primary table that stores the data doesn't keep all that history, so it can stay slim. If you ever need to look at that history and, say, roll things back, you can go look in your audit table and do it from there. You can set up the audit table however you want, so each audit row can hold the entire content BEFORE the edit, not just what was edited. That should make "rolling back" fairly easy.
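A minimal sketch of the trigger-based variant, assuming the main table is called pages and the audit table pages_audit (both names, and the column sizes, are illustrative):
-- Audit table mirrors the content plus who/when.
CREATE TABLE pages_audit (
    audit_id     INT AUTO_INCREMENT PRIMARY KEY,
    reference_id VARCHAR(255),
    content      MEDIUMTEXT,
    e_user       VARCHAR(255),
    e_timestamp  DATETIME
);

-- Before a page row is overwritten, copy the old version into the audit table.
DELIMITER //
CREATE TRIGGER pages_before_update
BEFORE UPDATE ON pages
FOR EACH ROW
BEGIN
    INSERT INTO pages_audit (reference_id, content, e_user, e_timestamp)
    VALUES (OLD.reference_id, OLD.content, OLD.e_user, OLD.e_timestamp);
END//
DELIMITER ;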
Add a version column and a deleted column (bool), and create some functions that compare the versions of rows with the same reference_id. You'll definitely want to be able to find the current version and the previous version easily. To get rid of old data, write another function that sorts all the versions of a page, figures out which are old enough to be deleted, and marks them for deletion by another function. You'll probably want an option to make certain pages immune to deletion, or to postpone it.

performance issue on displaying records

I have a table with just 3,000 records.
I render these 3000 records on the home page without pagination; my client is not interested in pagination...
Showing the page completely takes around 1 min 15 sec. What can be done to make the page load more quickly?
My table structure:
customer table
customer id
customer name
guider id
and few columns
guider table
guider id
guider name
and few columns
Where's the slowdown? The query or the serving?
If the former, see the comments above. If the latter:
Enable gzip on the server. Otherwise capture the [HTML?] output to a file, compress it (zip), then serve it as a download. Same for any other format if you think something else can render it better than a browser (CSV and OpenOffice).
If you're outputting the data into an HTML table, you may have an issue where the browser waits for the end of the table before rendering it. You can either break this into multiple table chunks, e.g. every 500 rows, or try CSS "table-layout: fixed;".
Check these to-dos:
SQL connection: don't open the connection inside a loop; connect once and reuse it for every query.
Check your queries and analyse them; if you are using some complex logic, it may be replaceable.
Use a standard class for the SQL connection and queries; use ezSQL.
Follow SQL query best practices.
While you could implement a cache to do this, you don't necessarily need to; introducing unnecessary cache structures can often cause problems of its own. Depending on where the bottleneck is, it may not even help you much, or at all.
You need to look in two places for your analysis:
1) The query you're using to get your data. Take a look at its plan, or if you're not comfortable doing that, run it in your favorite query tool and see how long it takes to come back. If it doesn't take too long, you've got a pretty good idea that your bottleneck isn't the query. If the query itself takes a long time, that's where you should focus your efforts.
2) How your page is rendering. What is the size of your page, in bytes? It may be too big. Can you cut the size down by formatting? Can you more effectively use CSS to eliminate duplicate styling on the page? Are you using a fixed or dynamic table layout? Dynamic is generally going to be quite a bit slower, especially for large tables. Try to avoid nesting tables. Do everything you can to make the page as small as possible, and keep testing!
"While displaying records I want to display the guider name, so I wrote a function that returns the guider name."
Sounds like you need to use a JOIN. Here's a simple example:
SELECT * FROM customer JOIN guider ON guider.id=customer.guider_id
This will change your page from using N + 1 (3001) queries to just one.
Make sure both guider.id and customer.guider_id are indexed and of appropriate data types (such as integers).
Here is a little list of what you should think about to improve performance. The importance is relative for each point, so the first is not necessarily the most important to you; that depends on the details of your project.
Check your database structure. If there are just these two tables, there might be little you can do. But keep in mind that there is stuff like indices, and with an increasing number of records a second, denormalized table structure will improve the speed of retrieving results.
Rather use one query for selecting your data than iterating through IDs and doing repeated selects.
Run a separate query for the guiders; I assume there are only a few of them. Save all guiders in a data structure first, e.g. a dictionary, and use the foreign key to apply the correct one to the current record (see the sketch after this list). This might save a lot of data that has to be transmitted from the database to your web server.
Get your result set by using something like mysqli_result::fetch_all(), which returns a 2-dimensional array with all results. This should be faster than iterating through each row with fetch_row().
Clean up your HTML output and use (external) CSS. This will save a lot of output space if you currently format your markup with style="... a lot of formatting code ..." attributes on each line. If you use one large table, split it up into multiple tables (some browsers wait for the complete table to load before rendering it).
Very important in a lot of languages: use a string builder for concatenating your results into the output string!
Caching: think about generating the output once a day or once an hour. Write it to a cache file which is served instead of querying the database and building the same output on every request. Maybe you want to offer this generated file as a download rather than displaying it as a plain HTML page on the web.
Last but not least, check the connections to the web server and database, the server load, and the number of requests. If your servers are running under heavy load, everything else here might help reduce it, or you may just have to upgrade the hardware.
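A minimal sketch of the guider-dictionary idea from the third point, assuming mysqli and the two tables described in the question (exact column names are guesses):
// $mysqli is an open mysqli connection; column names are guesses.
// 1) Load the few guiders once, keyed by id.
$guiders = array();
foreach ($mysqli->query("SELECT guider_id, guider_name FROM guider")->fetch_all(MYSQLI_ASSOC) as $g) {
    $guiders[$g['guider_id']] = $g['guider_name'];
}

// 2) Fetch all customers in one query and resolve the guider name in PHP,
//    instead of running one lookup query per customer row.
$customers = $mysqli->query("SELECT customer_id, customer_name, guider_id FROM customer")
                    ->fetch_all(MYSQLI_ASSOC);

foreach ($customers as $c) {
    $guiderName = isset($guiders[$c['guider_id']]) ? $guiders[$c['guider_id']] : '';
    echo htmlspecialchars($c['customer_name']) . ' - ' . htmlspecialchars($guiderName) . "<br>\n";
}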
LOL, everyone is talking about big boys' toys like database structure, caching and such, while the problem most likely lies in mere HTML and browsers.
Just splitting the whole HTML table into chunks will help the first chunk show up immediately, while the others eventually follow.
The only ones who were right are those who said to profile the whole thing first. Trying to answer without profiling results is shooting in the dark.

Optimizing queries for the next and previous element

I am looking for the best way to retrieve the next and previous records of a record without running a full query. I have a fully implemented solution in place, and would like to know whether there are any better approaches to do this out there.
Let's say we are building a web site for a fictitious greengrocer. In addition to his HTML pages, every week, he wants to publish a list of special offers on his site. He wants those offers to reside in an actual database table, and users have to be able to sort the offers in three ways.
Every item also has to have a detail page with more, textual information on the offer and "previous" and "next" buttons. The "previous" and "next" buttons need to point to the neighboring entries depending on the sorting the user had chosen for the list.
[Image: the offer list shown in the three different sort orders (source: pekkagaiser.com)]
Obviously, the "next" button for "Tomatoes, Class I" has to be "Apples, class 1" in the first example, "Pears, class I" in the second, and none in the third.
The task in the detail view is to determine the next and previous items without running a query every time, with the sort order of the list as the only available information (Let's say we get that through a GET parameter ?sort=offeroftheweek_price, and ignore the security implications).
Obviously, simply passing the IDs of the next and previous elements as a parameter is the first solution that comes to mind. After all, we already know the ID's at this point. But, this is not an option here - it would work in this simplified example, but not in many of my real world use cases.
My current approach in my CMS is using something I have named "sorting cache". When a list is loaded, I store the item positions in records in a table named sortingcache.
name (VARCHAR)              items (TEXT)
offeroftheweek_unsorted     Lettuce; Tomatoes; Apples I; Apples II; Pears
offeroftheweek_price        Tomatoes; Pears; Apples I; Apples II; Lettuce
offeroftheweek_class_asc    Apples II; Lettuce; Apples I; Pears; Tomatoes
Obviously, the items column is really populated with numeric IDs.
On the detail page, I now access the appropriate sortingcache record, fetch the items column, explode it, search for the current item ID, and return the previous and next neighbour.
array("current" => "Tomatoes",
"next" => "Pears",
"previous" => null
);
This is obviously expensive, works for a limited number of records only and creates redundant data, but let's assume that in the real world, the query to create the lists is very expensive (it is), running it in every detail view is out of the question, and some caching is needed.
My questions:
Do you think this is good practice for finding the neighbouring records for varying query orders?
Do you know better practices in terms of performance and simplicity? Do you know something that makes this completely obsolete?
In programming theory, is there a name for this problem?
Is the name "Sorting cache" is appropriate and understandable for this technique?
Are there any recognized, common patterns to solve this problem? What are they called?
Note: My question is not about building the list, or how to display the detail view. Those are just examples. My question is the basic functionality of determining the neighbors of a record when a re-query is impossible, and the fastest and cheapest way to get there.
If something is unclear, please leave a comment and I will clarify.
Starting a bounty - maybe there is some more info on this out there.
Here is an idea. You could offload the expensive operations to an update when the grocer inserts/updates new offers rather than when the end user selects the data to view. This may seem like a non-dynamic way to handle the sort data, but it may increase speed. And, as we know, there is always a trade off between performance and other coding factors.
Create a table to hold next and previous for each offer and each sort option. (Alternatively, you could store this in the offer table if you will always have three sort options -- query speed is a good reason to denormalize your database)
So you would have these columns:
Sort Type (Unsorted, Price, Class and Price Desc)
Offer ID
Prev ID
Next ID
When the detail information for the offer detail page is queried from the database, the NextID and PrevID would be part of the results. So you would only need one query for each detail page.
Each time an offer is inserted, updated or deleted, you would need to run a process which validates the integrity/accuracy of the sorttype table.
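A sketch of what that neighbour table could look like (table and column names are just illustrative):
-- One row per (sort type, offer): precomputed neighbours, rebuilt by the
-- validation process whenever an offer is inserted, updated or deleted.
CREATE TABLE offer_neighbors (
    sort_type VARCHAR(32) NOT NULL,   -- 'unsorted', 'price', 'class_asc', ...
    offer_id  INT NOT NULL,
    prev_id   INT NULL,
    next_id   INT NULL,
    PRIMARY KEY (sort_type, offer_id)
);

-- The detail page then needs only a single primary-key lookup:
SELECT prev_id, next_id
FROM offer_neighbors
WHERE sort_type = 'price' AND offer_id = 42;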
I have an idea somewhat similar to Jessica's. However, instead of storing links to the next and previous sort items, you store the sort position for each sort type. To find the previous or next record, just get the row with SortX = currentSort - 1 or SortX = currentSort + 1 (see the SQL sketch below).
Example:
Type      Class  Price  Sort1  Sort2  Sort3
Lettuce   2      0.89   0      4      0
Tomatoes  1      1.50   1      0      4
Apples    1      1.10   2      2      2
Apples    2      0.95   3      3      1
Pears     1      1.25   4      1      3
This solution would yield very short query times, and would take up less disk space than Jessica's idea. However, as I'm sure you realize, the cost of updating one row of data is notably higher, since you have to recalculate and store all sort orders. But still, depending on your situation, if data updates are rare and especially if they always happen in bulk, then this solution might be the best.
E.g., once per day:
add/delete/update all records
recalculate sort orders
Hope this is useful.
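A sketch of the neighbour lookup with this layout, using the example table above and an illustrative offers table name. For the item whose price-order position is Sort2 = 3 (Apples, class 2):
-- Neighbours in price order (Sort2) for the current item at Sort2 = 3.
SELECT * FROM offers WHERE Sort2 = 2;   -- previous (Apples, class 1)
SELECT * FROM offers WHERE Sort2 = 4;   -- next (Lettuce)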
I've had nightmares with this one as well. Your current approach seems to be the best solution, even for lists of 10k items: cache the IDs of the list view in the HTTP session and then use them for displaying the (personalized to the current user) previous/next. This works especially well when there are many ways to filter and sort the initial list of items instead of just 3.
Also, by storing the whole ID list you get to display a "you are at X out of Y" usability-enhancing text.
By the way, this is what JIRA does as well.
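A minimal sketch of that session-cached ID list, assuming the expensive list query has already produced $ids in the chosen sort order ($sort and $currentId are illustrative variables):
session_start();

// After running the expensive list query once, remember the ordered IDs for this sort.
$_SESSION['list_' . $sort] = $ids;   // e.g. $_SESSION['list_price'] = array(3, 5, 1, 2, 4);

// Later, on the detail page for $currentId:
$ids = isset($_SESSION['list_' . $sort]) ? $_SESSION['list_' . $sort] : array();
$pos = array_search($currentId, $ids);

$prev = ($pos !== false && $pos > 0)               ? $ids[$pos - 1] : null;
$next = ($pos !== false && $pos < count($ids) - 1) ? $ids[$pos + 1] : null;
// "You are at X out of Y": ($pos + 1) . ' of ' . count($ids)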
To directly answer your questions:
Yes, it's good practice because it scales without any added code complexity as your filtering/sorting and item types grow more complex. I'm using it in a production system with 250k articles and "infinite" filter/sort variations. Trimming the cacheable IDs to 1000 is also a possibility, since the user will most probably never click on prev or next more than 500 times (he'll most probably go back and refine the search or paginate).
I don't know of a better way. But if the sorts were limited and this was a public site (with no HTTP session), then I'd most probably denormalize.
Dunno.
Yes, sorting cache sounds good. In my project I call it "previous/next on search results" or "navigation on search results".
Dunno.
In general, I denormalize the data from the indexes. They may be stored in the same rows, but I almost always retrieve my result IDs, then make a separate trip for the data. This makes caching the data very simple. It's not so important in PHP where the latency is low and the bandwidth high, but such a strategy is very useful when you have a high latency, low bandwidth application, such as an AJAX website where much of the site is rendered in JavaScript.
I always cache the lists of results, and the results themselves separately. If anything affects the results of a list query, the cache of the list results is refreshed. If anything affects the results themselves, those particular results are refreshed. This allows me to update either one without having to regenerate everything, resulting in effective caching.
Since my lists of results rarely change, I generate all the lists at the same time. This may make the initial response slightly slower, but it simplifies cache refreshing (all the lists get stored in a single cache entry).
Because I have the entire list cached, it's trivial to find neighbouring items without revisiting the database. With luck, the data for those items will also be cached. This is especially handy when sorting data in JavaScript. If I already have a copy cached on the client, I can resort instantly.
To answer your questions specifically:
Yes, it's a fantastic idea to find out the neighbours ahead of time, or whatever information the client is likely to access next, especially if the cost is low now and the cost to recalculate is high. Then it's simply a trade off of extra pre-calculation and storage versus speed.
In terms of performance and simplicity, avoid tying together things that are logically different. Indexes and data are different, are likely to change at different times (e.g. adding a new datum will affect the indexes, but not the existing data), and thus should be accessed separately. This may be slightly less efficient from a single-threaded standpoint, but every time you tie things together you lose caching effectiveness and asynchronicity (the key to scaling is asynchronicity).
The term for getting data ahead of time is pre-fetching. Pre-fetching can happen at the time of access or in the background, but before the pre-fetched data is actually needed. Likewise with pre-calculation. It's a trade-off of cost now, storage cost, and cost to get when needed.
"Sorting cache" is an apt name.
I don't know.
Also, when you cache things, cache them at the most generic level possible. Some stuff might be user specific (such as results for a search query), where others might be user agnostic, such as browsing a catalog. Both can benefit from caching. The catalog query might be frequent and save a little each time, and the search query may be expensive and save a lot a few times.
I'm not sure whether I understood right, so if not, just tell me ;)
Let's say that the givens are the query for the sorted list and the current offset in that list, i.e. we have a $query and an $n.
A very obvious solution to minimize the queries, would be to fetch all the data at once:
list($prev, $current, $next) = DB::q($query . ' LIMIT ?i, 3', $n - 1)->fetchAll(PDO::FETCH_NUM);
That statement fetches the previous, the current and the next elements from the database in the current sorting order and puts the associated information into the corresponding variables.
But as this solution is too simple, I assume I misunderstood something.
There are as many ways to do this as to skin the proverbial cat. So here are a couple of mine.
If your original query is expensive, which you say it is, then create another table, possibly a MEMORY table, populating it with the results of your expensive and seldom-run main query.
This second table could then be queried on every view and the sorting is as simple as setting the appropriate sort order.
As required, repopulate the second table with results from the main query, thus keeping the data fresh but minimising the use of the expensive query.
Alternatively, if you want to avoid even connecting to the DB, you could store all the data in a PHP array and keep it in memcached. This would be very fast and, provided your lists weren't too huge, would be resource efficient, and it can be sorted easily.
DC
Basic assumptions:
Specials are weekly
We can expect the site to change infrequently... probably daily?
We can control updates to the database either via an API or by responding with triggers
If the site changes on a daily basis, I suggest that all the pages are statically generated overnight. One query for each sort-order iterates through and makes all the related pages. Even if there are dynamic elements, odds are that you can address them by including the static page elements. This would provide optimal page service and no database load. In fact, you could possibly generate separate pages and prev / next elements that are included into the pages. This may be crazier with 200 ways to sort, but with 3 I'm a big fan of it.
?sort=price
include("/sorts/$sort/tomatoes_class_1");
/*tomatoes_class_1 is probably a numeric id; sanitize your sort key... use numerics?*/
If for some reason this isn't feasible, I'd resort to memoization. Memcache is popular for this sort of thing (pun!). When something is pushed to the database, you can issue a trigger to update your cache with the correct values. Do this the same way you would if your updated item existed in 3 linked lists: relink as appropriate (this.next.prev = this.prev, etc.). From there, as long as your cache doesn't overfill, you'll be pulling simple values from memory in a primary-key fashion.
This method will take some extra coding on the select and update / insert methods, but it should be fairly minimal. In the end, you'll be looking up [id of tomatoes class 1].price.next. If that key is in your cache, golden. If not, insert into cache and display.
Do you think this is a good practice to find out the neighboring records for varying query orders? Yes. It is wise to perform look-aheads on expected upcoming requests.
Do you know better practices in terms of performance and simplicity? Do you know something that makes this completely obsolete? Hopefully the above
In programming theory, is there a name for this problem? Optimization?
Is the name "Sorting cache" is appropriate and understandable for this technique? I'm not sure of a specific appropriate name. It is caching, it is a cache of sorts, but I'm not sure that telling me you have a "sorting cache" would convey instant understanding.
Are there any recognized, common patterns to solve this problem? What are they called? Caching?
Sorry my trailing answers are kind of useless, but I think my narrative solutions should be quite useful.
You could save the row numbers of the ordered lists into views, and then reach the previous and next items in the list at row numbers (current_rownum - 1) and (current_rownum + 1).
The problem/data structure is called a bi-directional graph, or you could say you've got several linked lists.
If you think of it as a linked list, you could just add prev/next key fields to the items table for every sort order. But the DB person will kill you for that; it's like GOTO.
If you think of it as a (bi-)directional graph, you go with Jessica's answer. The main problem there is that order updates are expensive operations.
Item  Next  Prev
A     B     -
B     C     A
C     D     B
...
If you change one item's position so the new order is A, C, B, D, you will have to update 4 rows.
Apologies if I have misunderstood, but I think you want to retain the ordered list between user accesses to the server. If so, your answer may well lie in your caching strategy and technologies rather than in database query/schema optimization.
My approach would be to serialize() the array once it's first retrieved, then cache it in a separate storage area, whether that's memcached/APC/hard drive/MongoDB/etc., and retain its cache location details for each user individually through their session data. The actual storage backend would naturally depend on the size of the array, which you don't go into much detail about, but memcached scales great over multiple servers, and Mongo even further at a slightly greater latency cost.
You also don't indicate how many sort permutations there are in the real world; e.g. do you need to cache separate lists per user, or can you cache globally per sort permutation and then filter out what you don't need via PHP? In the example you give, I'd simply cache both permutations and store which of the two I needed to unserialize() in the session data.
When the user returns to the site, check the Time To Live value of the cached data and re-use it if still valid. I'd also have a trigger running on INSERT/UPDATE/DELETE for the special offers that simply sets a timestamp field in a separate table. This would immediately indicate whether the cache was stale and the query needed to be re-run, at a very low query cost. The great thing about only using the trigger to set a single field is that there's no need to worry about pruning old/redundant values out of that table.
Whether this is suitable would depend upon the size of the data being returned, how frequently it was modified, and what caching technologies are available on your server.
So you have two tasks:
build sorted list of items (SELECTs with different ORDER BY)
show details about each item (SELECT details from database with possible caching).
What is the problem?
PS: if the ordered list may be too big, you just need pager functionality. There are different possible implementations; e.g. you may wish to add "LIMIT 5" to the query and provide a "Show next 5" button. When this button is pressed, a condition like "WHERE price < 0.89 LIMIT 5" is added.
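A sketch of that keyset-style pager, assuming the price sort and an illustrative offers table (in practice you would add a tie-breaker column for equal prices):
-- First page: the five most expensive offers.
SELECT * FROM offers ORDER BY price DESC LIMIT 5;

-- "Show next 5": continue below the last price shown on the previous page
-- (here 0.89), instead of using an ever-growing OFFSET.
SELECT * FROM offers WHERE price < 0.89 ORDER BY price DESC LIMIT 5;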
