Performance for multiple searches ordered by random - php

I'm in the design phase of a website and I have a solution for a feature, but I don't know whether it will still be a good one when the site, hopefully, grows. I want users to be able to search for other users, and the results must be ordered: first the "spotlighted" users, then everyone else. Within that order the results must be shuffled randomly, and they must be paginated.
One of the solutions I have in mind is to store the query results in a session variable on the server side. For performance, this variable is destroyed when the user leaves the search.
What will happen when the site has thousands of users and thousands of searches are performed every day? Will my solution be viable, or will the server be overloaded?
I have other solutions in mind, such as an intermediate table into which users are dumped n times a day in the order described above. That way there is no need to build a big array in the user's session, and pagination is done via multiple queries against the database.
Although I appreciate any suggestions, I'm especially interested in hearing opinions from developers seasoned in high-traffic sites.
(The technology employed is LAMP, with InnoDB tables.)

Premature optimization is bad, but you should be planning ahead. You don't need to implement it yet, but prepare yourself.
If there are thousands of users running these searches every day, then caching the query result in the session is not a good idea, because the same result gets cached separately for some users while others still have to execute the query. For such a case I'd recommend saving the search results in a user-independent store (file, memory, etc.).
For each search query, save the result, creation date, and last access date on disk or in memory.
If any user runs the same query, serve the result from the cache.
Run a cron job that invalidates the cache after some time.
This way, frequent searches will usually be served promptly, and the load on your database is reduced.
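A minimal sketch of that idea, assuming a plain file-based cache; the cache directory, key format, TTL, and the table and column names are all hypothetical, and the same pattern works with memcached or Redis:

// User-independent cache keyed by the normalized search term
function cached_search(PDO $db, $term, $ttl = 600)
{
    $file = sys_get_temp_dir() . '/search_' . md5($term) . '.cache';

    // Serve from cache if the file exists and is younger than the TTL
    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        return unserialize(file_get_contents($file));
    }

    // Otherwise run the query once and store the result for everybody
    $stmt = $db->prepare(
        'SELECT id, name, spotlighted FROM users
         WHERE name LIKE ?
         ORDER BY spotlighted DESC, RAND()'
    );
    $stmt->execute(array('%' . $term . '%'));
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

    file_put_contents($file, serialize($rows), LOCK_EX);
    return $rows;
}

// A cron job can then simply delete cache files older than the TTL:
// find /tmp -name 'search_*.cache' -mmin +10 -delete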

This is definitely not the answer you are looking for, but I have to say it.
Premature Optimization is the root of all evil.
Get that site up with a simple implementation of that query and come back and ask if that turns out to be your worst bottleneck.

I'm assuming you want to reduce hits on the DB by caching search results so other users searching with the same set of criteria don't have to hit the DB again--especially for very loose query strings on non-indexed fields. If so, you can't store it in a session--that's only available to a single user.
I'd use a caching layer like Cache_Lite and cache the result set from the DB query based on the query string (not the SQL query, but the search parameters from your site). That way identical searches will be cached. Handle the sorting and pagination of the array in PHP, not in the DB.
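A minimal sketch of that approach, assuming the PEAR Cache_Lite package is available; the cache directory, lifetime, parameter and column names, and the run_search_query() helper are hypothetical:

require_once 'Cache/Lite.php';

// Hypothetical setup: cache files under /tmp, 10-minute lifetime
$cache = new Cache_Lite(array('cacheDir' => '/tmp/', 'lifeTime' => 600));

$q    = isset($_GET['q']) ? trim($_GET['q']) : '';
$page = isset($_GET['page']) ? max(1, (int)$_GET['page']) : 1;

// Key the cache on the site's search parameters, not on the SQL text
$cacheId = 'search_' . md5($q);

if (($data = $cache->get($cacheId)) !== false) {
    $results = unserialize($data);
} else {
    $results = run_search_query($q);              // hypothetical function doing the real DB work
    $cache->save(serialize($results), $cacheId);  // Cache_Lite stores strings, so serialize the array
}

// Sort and paginate the cached array in PHP, not in the DB
usort($results, function ($a, $b) { return $b['spotlighted'] - $a['spotlighted']; });
$perPage  = 20;
$pageRows = array_slice($results, ($page - 1) * $perPage, $perPage);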

What is the best way to handle user statistics in PHP

How would I handle user statistics in PHP?
There are two obvious methods that I can choose. Both have their flaws.
Select MySQL COUNTs when necessary. The flaw here is that if you have many rows to count then it may be slow especially when you have to do it on seemingly every page load. The benefit is that the count will always be correct.
Store user statistics in a statistics table. The flaw here is that you have to continuously update it whenever a change is made, and this makes code overly complicated if you need to update in bulk. The benefit is that it will be fast to select a single row of stats for a user as opposed to performing counts.
Another possible method that I'm a bit "eh" about is storing a job in a queue (and having Laravel handle it). These jobs would update the necessary statistics using the other tables so that everything stays synchronised. The benefit is that it takes the load off the web server, and the flaw is that a user may get incorrect statistics. It is not desirable for your own friends list to say there are, for example, 15 friends and 7 friend requests when the actual numbers differ greatly.
I've detailed the methods I have come up with, and I'm not sure which is best in terms of giving correct results for the user while balancing speed and simplicity. If I go with the COUNT method, I could potentially cache the result and remove the cache record whenever the statistics are updated, but I'd imagine storing a row in a cache table for EACH user is a bit of overkill. Maybe this isn't really a problem as long as the database has enough space, but surely searching through a massive cache table is going to be slow anyway?
Maybe someone can give me the best choice to handle user statistics. My head is spinning as it's over-thinking everything and I need to be put on the straight and narrow.
Thanks in advance.
Don't exaggerate the cost of COUNT(*) as you plan this part of your app. If you have the correct index on your table, row counting is very quick. In fact, if your table is MyISAM it can be O(1) in complexity.
For example, if you have an index on user, the query SELECT COUNT(*) AS num FROM friend WHERE user = 'mickey@disney.com' will be very fast.
Build your app the easy way. When you have ten thousand users, you can rework this kind of statistical computation to be more elaborate and efficient. When you have more users, it will not be as obvious if you present approximate results.
Be careful, though. COUNT(*) is much faster than COUNT(expression) in most cases. The * lets MySQL count rows without having to evaluate an expression for each one.
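A minimal sketch of that pattern via PDO; the friend table and user column come from the example above, while the connection details and index name are placeholders:

// Hypothetical connection; replace the DSN and credentials with your own
$db = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret');

// One-time schema change: index the column used in the WHERE clause
$db->exec('ALTER TABLE friend ADD INDEX idx_friend_user (user)');

// With the index in place, this COUNT(*) scans index entries rather than full rows
$stmt = $db->prepare('SELECT COUNT(*) AS num FROM friend WHERE user = ?');
$stmt->execute(array('mickey@disney.com'));
$num = (int)$stmt->fetchColumn();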

Is querying a MySQL database dozens of times in order to allow effective caching better than a few large queries with less caching?

In my application, I try to grab all the data I need in as few queries as possible. This usually leads to large queries with many joins. This places limits on what you can cache using software like Memcache or Redis (as far as I know). With large queries, you don't know what parts might already be cached. It seems like you have to query everything in smaller parts so that these small parts can be cached individually. The idea would be that you only have to do dozens of small queries in order to populate caches and that most of the time you would hit the caches rather than query. Is this how high traffic PHP/MySQL websites handle this? Is there a good way to cache effectively even if you have large queries with many joins?
Example:
SELECT user.name, user.birthday
FROM follower
INNER JOIN user ON (user.id = follower.user)
WHERE follower.following = '1'
The results of this query include the names and birthdays of any users following user 1. The results of this query could be cached, but that would only be useful when getting followers of user 1.
The alternative:
SELECT follower.user
FROM follower
WHERE follower.following = '1'
For each result with ? populated by follower.user from the previous query:
SELECT name, birthday FROM user where user.id = ?
In this case, we can check whether user ?'s name and birthday are cached before querying MySQL for them. If they aren't cached, or some are cached and some aren't, then grab the missing ones and cache them. You could also cache the list of follower IDs, and then none of the queries need to be run the next time. The difference is that the names and birthdays of the users will be useful to any other user that ends up needing information about these followers in any other context.
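A minimal sketch of that second approach using the Memcached extension; the follower and user tables are as in the queries above, while the key format, the TTL, and the assumed PDO connection $db are hypothetical:

$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

// Step 1: get the follower IDs (this list could itself be cached the same way)
$stmt = $db->prepare('SELECT follower.user FROM follower WHERE follower.following = ?');
$stmt->execute(array(1));
$ids = $stmt->fetchAll(PDO::FETCH_COLUMN);

// Step 2: look up all cached user rows in a single round trip
$keys   = array_map(function ($id) { return 'user:' . $id; }, $ids);
$cached = $mc->getMulti($keys) ?: array();

// Step 3: fetch only the users missing from the cache, then cache them
$missing = array();
foreach ($ids as $id) {
    if (!isset($cached['user:' . $id])) { $missing[] = (int)$id; }
}
if ($missing) {
    $in   = implode(',', $missing);
    $rows = $db->query("SELECT id, name, birthday FROM user WHERE id IN ($in)")
               ->fetchAll(PDO::FETCH_ASSOC);
    foreach ($rows as $row) {
        $mc->set('user:' . $row['id'], $row, 3600);   // cache each user row for an hour
        $cached['user:' . $row['id']] = $row;
    }
}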
Am I missing something on caching with larger queries? Or is the second way the right way?
The correct answer is: It depends.
Caching is a way of optimizing a recognized use pattern: instead of repeatedly producing expensive data, you re-use the data from a previous run.
So the first question you should answer is: is there an observed, repeated use pattern that has a noticeably "expensive" step of producing data? If not, don't add caching that you do not yet need; wait until you can observe something.
The second question you should be able to answer is: can you measure how long it takes with and without the cache, and is the difference noticeable?
And the third important question to answer is: How can you clean the cache from outdated information if the original data gets changed, and you want that new data to be displayed instantly?
So in your case you are asking if using a cache for plenty of small, but seemingly more universal queries that then get combined is more beneficial than caching one big query. There is no theoretical answer, because it depends on how much faster a cache hit for a big result is compared to multiple cache hits for the combined result. Making multiple requests to the cache may very well be SLOWER than fetching the data from the original source, and combining the data into the needed complex result might also be slower than fetching ONE complex result directly from the cache.
Also, if using multiple cache entries for a combined result, you'll now have to deal with plenty of cases where only parts of the information are outdated, while others are not. So the result just gets more unreliable - you cannot really be sure if every part of the result is up to date, or how old it is.
@Sven, you make a good point! I'll add a few more rough suggestions.
@Barakat, big queries are usually not a big deal for MySQL; a well-designed DB, indexes, and tuned engine parameters usually give high performance.
Doing many little queries induces a lot of overhead (cached or not); I usually avoid that.
If your big query returns big results (hundreds/thousands of rows), maybe you can avoid that by paging the results or limiting the answers to the best scores.
A very simple and effective tool for tuning your MySQL server is MySQLTuner.pl, and with it you can rely on MySQL's internal query cache without worrying about coherence!

Should I break a larger mysql table into multiple?

I have a pretty large social-network-type site I have been working on for about 2 years (high traffic and hundreds of files). I have been experimenting for the last couple of years with tweaking things for maximum performance, and I have learned a lot. Now I have a huge task: I am planning to completely re-code my social network, so I am re-designing the MySQL DBs and everything.
Below is an image I made of a couple of MySQL tables that I have a question about. I currently have the login table, which is used in the login process; once a user is logged into the site they very rarely need to hit that table again unless they edit an email or password. I then have a user table, which holds basically the user's settings and profile data for the site. This is where I have questions: would it give better performance to split the user table into smaller tables? For example, if you view the user table you will see several fields that I have marked as "setting_" -- should I just create a separate settings table? I also have fields marked with "count", which hold total counts of comments, photos, friends, mail messages, etc. So should I create another table to store just the total counts of things?
The reason I have them all in one table now is that I was thinking it might be better to cut down on MySQL queries: instead of hitting 3 tables to get information on every page load, I could hit 1.
Sorry if this is confusing, and thanks for any tips.
(Schema image: http://img2.pict.com/b0/57/63/2281110/0/800/dbtable.jpg)
As long as you don't SELECT * FROM your tables, having 2 or 100 fields won't affect performance.
Just SELECT only the fields you're going to use and you'll be fine with your current structure.
should I just create a seperate setting table?
So should I create another table to store just the total count of things?
There is not a single correct answer for this; it depends on how your application behaves.
What you can do is measure and extrapolate the results in a dev environment.
On one hand, using separate tables will save you some space and make the code easier to modify.
On the other hand, you may lose some performance (as you already suspect) by having to join information from different tables.
About the counts, I think it's fine to have them there; although it is often said that it's better to calculate this kind of thing on the fly, I don't think it will hurt you at all in this situation.
But again, the only way to know what's better for you and your specific app is to measure, profile, and find out what the benefit of doing so is. You might only gain a 2% improvement.
You'll need to compare performance testing results between the following:
Leaving it alone
Breaking it up into two tables
Using different queries to retrieve the login data and profile data (if you're not doing this already) with all the data in the same table
Also, you could implement some kind of caching strategy on the profile data if the usage data suggests this would be advantageous.
You should consider putting the counter columns and frequently updated timestamps in their own table -- every time you bump them, the entire row is rewritten.
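A minimal sketch of that split, with a hypothetical table layout and an assumed PDO connection $db:

// One narrow, write-heavy table so bumping a counter never rewrites the wide user row
$db->exec('CREATE TABLE user_counters (
    user_id        INT UNSIGNED NOT NULL PRIMARY KEY,
    comment_count  INT UNSIGNED NOT NULL DEFAULT 0,
    photo_count    INT UNSIGNED NOT NULL DEFAULT 0,
    friend_count   INT UNSIGNED NOT NULL DEFAULT 0,
    last_active_at DATETIME NULL
) ENGINE=InnoDB');

// Bumping a counter now touches only this small row, not the whole profile
$stmt = $db->prepare('UPDATE user_counters
                      SET comment_count = comment_count + 1, last_active_at = NOW()
                      WHERE user_id = ?');
$stmt->execute(array($userId));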
I wouldn't consider your user table terribly large in number of columns; just my opinion. I also wouldn't break that table into multiple tables unless you can find a case for removing redundancy. Perhaps you have a lot of users who share the same settings; that would be a case for breaking the table out.
You should take into account the average size of a single row in order to find out whether retrieval is expensive. Also, you should try to use indexes when looking for data...
The most important thing is to design properly, not just to split because "it looks large". Maybe the IP or IPs could go somewhere else... it depends on the data saved there.
Also, since the social network site using this data also handles authentication and authorization (I assume), the separation between the login and user tables should offer good performance, because the login data is "short enough", while the profile can be read just once, immediately after a successful login. Just apply the right tricks to improve DB performance and you're done.
(Remember to think of tables as entities and to name each one as a single entity, not as a collection of them.)
Two things you will want to consider when deciding whether or not you want to break up a single table into multiple tables is:
MySQL likes small, consistent datasets. If you can structure your tables so that they have fixed row lengths, that will help performance at the potential cost of disk space. A common approach, from what I can tell, is to put fixed-length data in its own table while the variable-length data goes somewhere else.
Joins are in most cases less performant than not joining. If the data currently in your table will normally be accessed all at the same time then it may not be worth splitting it up as you will be slowing down both inserts and quite potentially reads. However, if there is some data in that table that does not get accessed as often then that would be a good candidate for moving out of the table for performance reasons.
I can't find a resource online to substantiate this next statement but I do recall in a MySQL Performance talk given by Jay Pipes that he said the MySQL optimizer has issues once you get more than 8 joins in a single query (MySQL 5.0.*). I am not sure how accurate that magic number is but regardless joins will usually take longer than queries out of a single table.

Tracking the views of a given row

I have a site where the users can view quite a large number of posts. Every time this is done I run a query similar to UPDATE table SET views=views+1 WHERE id = ?. However, there are a number of disadvantages to this approach:
There is no way of tracking when the pageviews occur - they are simply incremented.
Updating the table that often will, as far as I understand it, clear the MySQL cache of the row, thus making the next SELECT of that row slower.
Therefore I consider employing an approach where I create a table, say:
object_views { object_id, year, month, day, views }, so that each object has one row per day in this table. I would then periodically update the views column in the objects table so that I wouldn't have to do expensive joins all the time.
This is the simplest solution I can think of, and it seems that it is also the one with the least performance impact. Do you agree?
(The site is built on PHP 5.2, Symfony 1.4 and Doctrine 1.2, in case you wonder.)
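A minimal sketch of that per-day table, using raw PDO for illustration; the names are hypothetical, and the year/month/day columns are collapsed into a single DATE in the spirit of one of the answers below. The composite primary key lets a single statement either create today's row or bump it:

$db->exec('CREATE TABLE object_views (
    object_id INT UNSIGNED NOT NULL,
    view_date DATE NOT NULL,
    views     INT UNSIGNED NOT NULL DEFAULT 0,
    PRIMARY KEY (object_id, view_date)
) ENGINE=InnoDB');

// On each page view: insert today's row, or bump it if it already exists
$stmt = $db->prepare('INSERT INTO object_views (object_id, view_date, views)
                      VALUES (?, CURDATE(), 1)
                      ON DUPLICATE KEY UPDATE views = views + 1');
$stmt->execute(array($objectId));

// Periodically (e.g. from cron) roll the totals back into the objects table
$db->exec('UPDATE objects o
           SET o.views = (SELECT COALESCE(SUM(v.views), 0)
                          FROM object_views v
                          WHERE v.object_id = o.id)');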
Edit:
The purpose is not web analytics - I know how to do that, and that is already in place. There are two purposes:
Allow the user to see how many times a given object has been shown, for example today or yesterday.
Allow the moderators of the site to see simple view statistics without going into Google Analytics, Omniture or whatever solution. Furthermore, the results in the backend must be realtime, a feature which GA cannot offer at this time. I do not wish to use the Analytics API to retrieve the usage data (not realtime, GA requires JavaScript).
Quote: "Updating the table that often will, as far as I understand it, clear the MySQL cache of the row, thus making the next SELECT of that row slower."
There is much more to it than that. This is a database killer.
I suggest you make a table like this:
object_views { object_id, timestamp }
This way you can aggregate on object_id (with the COUNT() function).
So every time someone views the page, you INSERT a record into the table.
Once in a while you must clean the old records out of the table. The UPDATE statement is EVIL :)
On most platforms it basically marks the row as deleted and inserts a new one, fragmenting the table. Not to mention locking issues.
Hope that helps
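A minimal sketch of that log-style design, assuming the object_views { object_id, timestamp } table above with an index on (object_id, timestamp) and an existing PDO connection $db; the retention window is hypothetical:

// One cheap INSERT per page view -- no UPDATE contention, no row rewrites
$db->prepare('INSERT INTO object_views (object_id, timestamp) VALUES (?, NOW())')
   ->execute(array($objectId));

// Aggregate on demand: how many views did this object get today?
$stmt = $db->prepare('SELECT COUNT(*) FROM object_views
                      WHERE object_id = ? AND timestamp >= CURDATE()');
$stmt->execute(array($objectId));
$viewsToday = (int)$stmt->fetchColumn();

// Cron job: purge rows older than, say, 90 days
$db->exec('DELETE FROM object_views WHERE timestamp < NOW() - INTERVAL 90 DAY');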
Along the same lines as Rage, you simply are not going to get the same results doing it yourself when there are a million third party log tools out there. If you are tracking on a daily basis, then a basic program such as webtrends is perfectly capable of tracking the hits especially if your URL contains the ID's of the items you want to track... I can't stress this enough, it's all about the URL when it comes to these tools (Wordpress for example allows lots of different URL constructs)
Now, if you are looking into "impression" tracking then it's another ball game because you are probably tracking each object, the page, the user, and possibly a weighted value based upon location on the page. If this is the case you can keep your performance up by hosting the tracking on another server where you can fire and forget. In the past I worked this using SQL updating against the ID and a string version of the date... that way when the date changes from 20091125 to 20091126 it's a simple query without the overhead of let's say a datediff function.
First, just a quick remark: why not aggregate the year, month, and day into a DATETIME column? That would make more sense in my mind.
Also, I am not really sure exactly why you are doing this; if it's for marketing/web-stats purposes, you'd be better off using a tool made for that.
There are two big families of tools capable of giving you an idea of your website's access statistics: log-based ones (AWStats is probably the most popular) and AJAX/1-pixel-image-based ones (Google Analytics would be the most popular).
If you prefer to build your own stats database, you can probably build a log parser fairly easily in PHP. If you find parsing Apache logs (or IIS logs) too much of a burden, you could make your application output custom logs formatted in a simpler way.
One other possible solution is to use memcached; the daemon provides counters that you can increment. You can log views there and have a script collect the results every day.
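A minimal sketch of that counter idea with the Memcached extension; the key format, the daily collection step, and $objectId are hypothetical:

$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

// On each page view: create the key if needed, then atomically increment it
$key = 'views:' . $objectId . ':' . date('Ymd');
$mc->add($key, 0);        // no-op if the key already exists
$mc->increment($key);

// Daily cron: read yesterday's counter, persist it to MySQL, then drop the key
$yesterdayKey = 'views:' . $objectId . ':' . date('Ymd', strtotime('-1 day'));
$count = (int)$mc->get($yesterdayKey);
// ... INSERT $count into a stats table here ...
$mc->delete($yesterdayKey);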
If you're going to do that, why not just log each access? MySQL can cache inserts in continuous tables quite well, so there shouldn't be a notable slowdown due to the insert. You can always run Show Profiles to see what the performance penalty actually is.
On the datetime issue, you can always use GROUP BY MONTH(accessed_at), YEAR(accessed_at), or WHERE MONTH(accessed_at) = 11 AND YEAR(accessed_at) = 2009.

How to increase performance for MySQL database

How do I increase the performance of my MySQL database? My website is hosted on a shared server and they have suspended my account because of "too many queries".
The staff asked me to "index" or "cache" or trim my database.
I don't know what "index" and "cache" mean or how to do them in PHP.
Thanks.
What an index is:
Think of a database table as a library - you have a big collection of books (records), each with associated data (author name, publisher, publication date, ISBN, content). Also assume that this is a very naive library, where all the books are shelved in order by ISBN (primary key). Just as the books can only have one physical ordering, a database table can only have one primary key index.
Now imagine someone comes to the librarian (database program) and says, "I would like to know how many Nora Roberts books are in the library". To answer this question, the librarian has to walk the aisles and look at every book in the library, which is very slow. If the librarian gets many requests like this, it is worth his time to set up a card catalog by author name (index on name) - then he can answer such questions much more quickly by referring to the catalog instead of walking the shelves. Essentially, the index sets up an 'alternative ordering' of the books - it treats them as if they were sorted alphabetically by author.
Notice that 1) it takes time to set up the catalog, 2) the catalog takes up extra space in the library, and 3) it complicates the process of adding a book to the library - instead of just sticking a book on the shelf in order, the librarian also has to fill out an index card and add it to the catalog. In just the same way, adding an index on a database field can speed up your queries, but the index itself takes storage space and slows down inserts. For this reason, you should only create indexes in response to need - there is no point in indexing a field you rarely search on.
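A minimal sketch of what that "card catalog" looks like in MySQL, sticking with the hypothetical library example and an assumed PDO connection $db:

// Build the card catalog: an index on the author name
$db->exec('ALTER TABLE books ADD INDEX idx_author_name (author_name)');

// This question is now answered from the index instead of walking every shelf
$stmt = $db->prepare('SELECT COUNT(*) FROM books WHERE author_name = ?');
$stmt->execute(array('Nora Roberts'));
$count = (int)$stmt->fetchColumn();

// EXPLAIN shows whether MySQL actually uses the index for a given query:
// EXPLAIN SELECT COUNT(*) FROM books WHERE author_name = 'Nora Roberts';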
What caching is:
If the librarian has many people coming in and asking the same questions over and over, it may be worth his time to write the answer down at the front desk. Instead of checking the stacks or the catalog, he can simply say, "here is the answer I gave to the last person who asked that question".
In your script, this may apply in different ways. You can store the results of a database query or a calculation or part of a rendered web page; you can store it to a secondary database table or a file or a session variable or to a memory service like memcached. You can store a pre-parsed database query, ready to run. Some libraries like Smarty will automatically store part or all of a page for you. By storing the result and reusing it you can avoid doing the same work many times.
In every case, you have to worry about how long the answer will remain valid. What if the library got a new book in? Is it OK to use an answer that may be five minutes out of date? What about a day out of date?
Caching is very application-specific; you will have to think about what your data means, how often it changes, how expensive the calculation is, how often the result is needed. If the data changes slowly, it may be best to recalculate and store the result every time a change is made; if it changes often but is not crucial, it may be sufficient to update only if the cached value is more than a certain age.
Set up a copy of your application locally, enable the MySQL query log, and set up Xdebug or some other profiler. Then start collecting data and testing your application. There are lots of guides and books available about how to optimize things. It is important that you spend time testing and collecting data first, so that you optimize the right things.
Using the data you have collected, try to reduce the number of queries per page view. Ideally, you should be able to get everything you need in fewer than 5-10 queries.
Look at the logs and see if you are asking for the same thing twice. It is a bad idea to request a record in one portion of your code and then request it again from the database a few lines later, unless you are sure the value is likely to have changed.
Look for queries embedded in loops and try to refactor them so you make a single query and simply loop over the results (see the sketch after this list).
The SELECT * you mention using is an indication that you may be doing something wrong. You should probably list the fields you explicitly need. Check this site or Google for lots of good arguments about why SELECT * is evil.
Start looking at your queries and then run EXPLAIN on them. For frequently used queries, make sure they are using a good index and not doing a full table scan. Tweak indexes on your development database and test.
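A minimal sketch of that loop refactor, with a hypothetical posts table, an assumed PDO connection $db, and an array of IDs $postIds:

// Before: one query per iteration (the N+1 pattern)
$posts = array();
foreach ($postIds as $id) {
    $stmt = $db->prepare('SELECT id, title FROM posts WHERE id = ?');
    $stmt->execute(array($id));
    $posts[] = $stmt->fetch(PDO::FETCH_ASSOC);
}

// After: one query for the whole set, then loop over the results
$placeholders = implode(',', array_fill(0, count($postIds), '?'));
$stmt = $db->prepare("SELECT id, title FROM posts WHERE id IN ($placeholders)");
$stmt->execute(array_values($postIds));
$posts = $stmt->fetchAll(PDO::FETCH_ASSOC);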
There are a couple things you can look into:
Query Design - look into more advanced and faster solutions
Hardware - throw better and faster hardware at the problem
Database Design - use indexes and practice good database design
All of these are easier said than done, but it is a start.
Firstly, sack your host: get off shared hosting and into an environment you have full control over, so you stand a chance of being able to tune things decently.
Replicate that environment in your lab, ideally with the same hardware as production; this includes things like the RAID controller.
Did I mention that you need a RAID controller? Yes, you do. You can't achieve decent write performance without one, and it needs a battery-backed cache. If you don't have one, each write has to physically hit the disc, which is ruinous for performance.
Anyway, back to read performance: once you've got a machine with the same spec RAID controller (and the same discs, obviously) as production in your lab, you can try to tune things.
More RAM is usually the cheapest way of achieving better performance - make sure that you've got MySQL configured to use it, which means tuning storage-engine-specific parameters.
I am assuming here that you have at least 100G of data; if not, just buy enough RAM that your entire DB fits in RAM, and read performance is essentially solved.
Software changes that others have mentioned, such as optimising queries and adding indexes, are helpful too, but only once you've got a development hardware environment that enables you to do useful performance work - i.e. measure the performance of your application meaningfully - which means real hardware (not VMs), consistent with the hardware environment used in production.
Oh yes - one more thing - don't even THINK about deploying a database server on a 32-bit OS; it's a ruinous waste of good RAM.
Indexing is done on database tables in order to speed up queries. If you don't know what it means, you have none. At a minimum you should have indexes on every foreign key and on most fields that are used frequently in the WHERE clauses of your queries. Primary keys get indexes automatically, assuming you set them up to begin with, which I would find unlikely for someone who doesn't know what an index is. Are your tables normalized?
BTW, since you are doing a division in your math (why, I haven't a clue), you should Google integer math. You may not be getting correct results.
You should never SELECT *. Instead, select only the data you need for that particular call. And what is your intention here?
order by votes*1000+((1440 - ($server_date - date))/60)*2+visites*600 desc
You may have poorly-written queries, and/or poorly written pages that run too many queries. Could you give us specific examples of queries you're using that are ran on a regular basis?
sure
This is the query to fetch the last 3 posts:
select * from posts where visible = 1 and date > ($server_date - 86400) and dont_show_in_frontpage = 0 order by votes*1000+((1440 - ($server_date - date))/60)*2+visites*600 desc limit 3
what do you think?
