Pagination vs Load More? - php

Nowadays, I know two methods of presenting limited data from the database to the user.
The first method is using pagination, like
first previous 1 2 3 .. next last
The second method is using a
Load more
button, like on Facebook or Twitter, but of course this needs an AJAX call to load more data.
Which one is best to use if we look at system performance or user convenience?

Which one is best to use if we look at system performance or user convenience?
Assuming that there is no need for a user to jump to page X of results, a "load more" option will (usually) work better for both fronts.
You'll have slightly lower load on the db server, since you're not making it re-calculate how many pages it has all the time.
Your users will like the "load more" option, since they won't need to have a full page refresh and can just keep on scrolling.
Of course, neither of those necessarily applies to your exact situation. If you have a database that has baked-in pagination, or your users need to select a particular page, or an existing application that supported pagination and your users prefer it, then pagination may be the better choice.
And you forgot the third "infinite scroll" option, btw.
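For what it's worth, a "load more" endpoint can be quite small. Below is a minimal sketch in PHP/PDO, assuming a hypothetical posts table and a load_more.php script that the AJAX call requests with an offset parameter; adjust the names and credentials to your own setup.

<?php
// load_more.php - hypothetical endpoint returning the next batch of posts as JSON.
$pdo = new PDO('mysql:host=localhost;dbname=myapp;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);

$perPage = 10;
$offset  = max(0, (int) ($_GET['offset'] ?? 0));

// Fetch one extra row so the client knows whether more data remains,
// without running a separate COUNT(*) query on every request.
$stmt = $pdo->prepare('SELECT id, title, created_at FROM posts
                       ORDER BY created_at DESC LIMIT :lim OFFSET :off');
$stmt->bindValue(':lim', $perPage + 1, PDO::PARAM_INT);
$stmt->bindValue(':off', $offset, PDO::PARAM_INT);
$stmt->execute();
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

$hasMore = count($rows) > $perPage;

header('Content-Type: application/json');
echo json_encode(['posts' => array_slice($rows, 0, $perPage), 'hasMore' => $hasMore]);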

Related

Getting all data once for future use

Well, this is kind of a question about how to design a website that uses fewer resources than normal websites. Mobile-optimized as well.
Here it goes: I was about to display a specific overview of e.g. 5 posts (from e.g. a blog). Then if I clicked, for example, on the first post, I'd load this post in a new window. But instead of connecting to the database again and getting this specific post with the specific id, I'd just look up that post (in PHP) in my array of 5 posts that I created earlier, when the site was fetched for the first time.
Would it save data to download? Because PHP works server-side as well, so that's why I'm not sure.
Ok, I'll explain again:
Method 1:
User connects to my website
5 posts are displayed & saved to an array (with all their data)
The user clicks on the first post and expects more information about it.
My program looks up the post in my array and displays it.
Method 2:
User connects to my website
5 posts are displayed
The user clicks on the first post and expects more information about it.
My program connects to MySQL again and fetches the post from the server.
First off, this sounds like a case of premature optimization. I would not start caching anything outside of the database until measurements prove that it's a wise thing to do. Caching takes your focus away from the core task at hand, and introduces complexity.
If you do want to keep DB results in memory, just using an array allocated in a PHP-processed HTTP request will not be sufficient. Once the page is processed, memory allocated at that scope is no longer available.
You could certainly put the results in SESSION scope. The advantage of saving some DB results in the SESSION is that you avoid DB round trips. Disadvantages include the increased complexity of programming the solution, use of memory in the web server for data that may never be accessed, and increased initial load on the DB to retrieve the extra pages that may or may not ever be requested by the user.
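For illustration, here is a minimal sketch of the SESSION approach, assuming an already-open PDO connection in $pdo and a hypothetical posts table; it is not a recommendation, just what the trade-off above looks like in code.

<?php
// Keep already-fetched posts in the session so a later request can reuse them.
session_start();

function getPost(PDO $pdo, int $id): ?array
{
    // Serve from the session cache if we stored this post on an earlier request.
    if (isset($_SESSION['posts'][$id])) {
        return $_SESSION['posts'][$id];
    }

    // Otherwise fall back to the database and remember the result for later.
    $stmt = $pdo->prepare('SELECT * FROM posts WHERE id = ?');
    $stmt->execute([$id]);
    $post = $stmt->fetch(PDO::FETCH_ASSOC);

    if ($post !== false) {
        $_SESSION['posts'][$id] = $post;
        return $post;
    }
    return null;
}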
If DB performance, after measurement, really is causing you to miss your performance objectives you can use a well-proven caching system such as memcached to keep frequently accessed data in the web server's (or dedicated cache server's) memory.
Final note: You say
PHP works server-side as well
That's not accurate. PHP works server-side only.
Have you thought about saving the posts in divs, and only making them visible when the user clicks somewhere? Here's how to do that.
Put some sort of cache between your code and the database.
So your code will look like
if (isPostInCache()) {
    // Cache hit: serve the post from the cache (session, memcached, files, ...).
    loadPostFromCache();
} else {
    // Cache miss: hit the database, then store the result for next time.
    loadPostFromDatabase();
}
Go for some caching system; the web is full of them. You can use memcached or a static cache you build yourself (e.g. save posts as text files on the server).
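If you go the memcached route, the pseudo-code above might look roughly like this (a sketch only; it assumes the Memcached PHP extension, a memcached daemon on localhost:11211, an open PDO connection in $pdo, and a hypothetical posts table):

<?php
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

function loadPost(Memcached $cache, PDO $pdo, int $id): ?array
{
    $key  = 'post_' . $id;
    $post = $cache->get($key);

    if ($post === false) {                      // cache miss
        $stmt = $pdo->prepare('SELECT * FROM posts WHERE id = ?');
        $stmt->execute([$id]);
        $post = $stmt->fetch(PDO::FETCH_ASSOC);

        if ($post !== false) {
            $cache->set($key, $post, 600);      // keep it for 10 minutes
        }
    }
    return $post === false ? null : $post;
}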
To me, this is a little more inefficient than making a second call to the database, and here is why.
The first query should only be pulling the fields you want, like: title, author, date. The content of the post may make for a heavy query, so I'd exclude that (you can pull a teaser if you'd like).
Then if the user wants the details of the post, I would query for the content with an indexed key column.
That way you're not pulling content for 5 posts that may never be seen.
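As a rough sketch of that two-query idea (the column names here are made up, and $pdo is assumed to be an open PDO connection):

<?php
// Overview query: only the light columns, no post body.
$overview = $pdo->query(
    'SELECT id, title, author, published_at
       FROM posts
      ORDER BY published_at DESC
      LIMIT 5'
)->fetchAll(PDO::FETCH_ASSOC);

// Detail query, run only when a post is actually opened; id is the indexed primary key.
$detail = $pdo->prepare(
    'SELECT id, title, author, published_at, content FROM posts WHERE id = ?'
);
$detail->execute([(int) ($_GET['id'] ?? 0)]);
$post = $detail->fetch(PDO::FETCH_ASSOC);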
If your PHP code is constantly re-connecting to the database you've configured it wrong and aren't using connection pooling properly. The execution time of a query should be a few milliseconds at most if you've got your stack properly tuned. Do not cache unless you absolutely have to.
What you're advocating here is side-stepping a serious problem. Database queries should be effortless provided your database is properly configured. Fix that issue and you won't need to go down the caching road.
Saving data from one request to the other is a broken design and if not done perfectly could lead to embarrassing data bleed situations where one user is seeing content intended for another. This is why caching is an option usually pursued after all other avenues have been exhausted.

Page hit count or Google Analytics?

I'm developing a website that is sensitive to page visits. For instance it has sections that will show the users which parts of the website (which items) have been visited the most. To implement this features, two strategies come to my mind:
Create a page hit counter, sort the pages by the number of visits and pick the highest ones.
Create a Google Analytics account and use its info.
If the first strategy is chosen, I would need a very fast and accurate hit counter with the ability to distinguish unique IPs (or users). I believe that using MySQL wouldn't be a good choice, since a lot of page visits means a lot of DB locks and performance problems. I think a fast logging class would be a good choice.
The second option seems very interesting when all the problems of the first one emerge but I don't know if there is a way (like an API) for Google Analytics to make me able to access the information I want. And if there is, is it fast enough?
Which approach (or even an alternative approach) do you suggest I take? Which one is faster? Performance is my top priority. Thanks.
UPDATE:
Thank you. It's interesting to see different answers. These answers reminded me of an important factor. My website updates the "most visited" items every 8 minutes, so I don't need the data in real time, but I need it to be accurate enough every 8 minutes or so. What I had in mind was this:
Log every page visit to a simple text log file
Send a cookie to the user to separate unique users
Every 8 minutes, load the log file, collect the info and update the MySQL tables.
That said, I wouldn't want to reinvent the wheel. If a 3rd party service can meet my requirements, I would be happy to use it.
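For what the first two steps of that plan might look like in PHP (a sketch only; the log path and cookie name are placeholders), see below; the 8-minute aggregation would then be a cron job that reads the file and updates MySQL.

<?php
// Give each visitor a cookie so unique users can be told apart later.
if (empty($_COOKIE['visitor_id'])) {
    $visitorId = bin2hex(random_bytes(8));
    setcookie('visitor_id', $visitorId, time() + 86400 * 365, '/');
} else {
    $visitorId = $_COOKIE['visitor_id'];
}

// Append one tab-separated line per page view; LOCK_EX keeps
// concurrent requests from interleaving their writes.
$line = implode("\t", [date('c'), $visitorId, $_SERVER['REQUEST_URI']]) . "\n";
file_put_contents('/var/log/myapp/hits.log', $line, FILE_APPEND | LOCK_EX);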
Given you are planning to use the page hit data to determine what data to display on your site, I'd suggest logging the page hit info yourself. You don't want to be reliant upon some 3rd party service that you'd have to interrogate in order to create your page. This is especially true if you are loading that data in real time, as you'd have to interrogate that service for every incoming request to your site.
I'd be inclined to save the data yourself in a database. If you're really concerned about the performance of the inserts, then you could investigate intercepting requests (I'm not sure how you go about this in PHP, but I'm assuming it's possible) and then passing the request data off to a separate thread to store the request info. By having a separate thread handle the logging, you won't interrupt your response to the end user.
Also, given you are planning to use the data collected to "... show the users which parts of the website (which items) have been visited the most", you'll need to think about accessing this data to build your dynamic page. Maybe it'd be good to store a consolidated count for each resource. For example, rather than having 30000 rows showing that index.php was requested, have one row showing index.php was requested 30000 times. This would certainly be quicker to reference than having to perform queries on what could become quite a large table.
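A minimal sketch of that consolidated-count idea, assuming a hypothetical page_hits table with page as a unique key and an open PDO connection in $pdo:

<?php
// One row per resource, incremented atomically on every request.
$pdo->prepare(
    'INSERT INTO page_hits (page, hits) VALUES (:page, 1)
     ON DUPLICATE KEY UPDATE hits = hits + 1'
)->execute([':page' => $_SERVER['SCRIPT_NAME']]);

// The "most visited" list then becomes a cheap query:
$top = $pdo->query('SELECT page, hits FROM page_hits ORDER BY hits DESC LIMIT 10')
           ->fetchAll(PDO::FETCH_ASSOC);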
Google Analytics has a latency to it and it samples some of the data returned to the API so that's out.
You could try the API from Clicky. Bear in mind that:
Free accounts are limited to the last 30 days of history, and 100 results per request.
There are many examples of hit counters out there, but it sounds like you didn't find one that met your needs.
I'm assuming you don't need real-time data. If that's the case, I'd probably just read the data out of the web server log files.
Your web server can distinguish IP addresses. There's no fully reliable way to distinguish users. I live in a university town; half the dormitory students have the same university IP address. I think Google Analytics relies on cookies to identify users, but shared computers makes that somewhat less than 100% reliable. (But that might not be a big deal.)
"Visited the most" is also a little fuzzy. The easy way out is to count every hit on a particular page as a visit. But a "visit" of 300 milliseconds is of questionable worth. (Probably realized they clicked the wrong link, and hit the "back" button before the page rendered.)
Unless there are requirements I don't know about, I'd probably start by using awk to extract timestamp, ip address, and page name into a CSV file, then load the CSV file into a database.

DB operation: is it very expensive?

I have a table called playlist, and I display its details using a display_playlist.php file.
[screenshot of display_playlist.php]
Every time the user clicks the 'up' or 'down' button to arrange the song order, I just update the table. But I feel that updating the DB very often is not recommended, so is there any efficient way to accomplish this task?
I am still a newbie to AJAX, so if AJAX is the only way to do it, can you please explain it in detail? Thank you in advance.
In relative terms, yes, hitting the database is an expensive operation. However, if the playlist state is meant to be persistent then you have to hit the database at some point, it's just a question of when/how often.
One simple optimization you might try is instead of sending each change the user makes to the server right away, allow them to make however many changes they want (using some client-side javascript to keep the UI in the correct state) and provide a "Save Playlist" button that they can press to submit all of their changes to the server at once. That will reduce database hits, and also the number of round-trips made to the server (in terms of what a user experiences, a round-trip to the server is far more expensive than a database hit).
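As a rough sketch of that server side (the table and column names are hypothetical, and $pdo is an open PDO connection): the client would POST the full song order, e.g. order[]=12&order[]=7&order[]=31, once the user hits "Save Playlist".

<?php
// save_playlist.php - write the whole new order in one round-trip.
$order = array_map('intval', $_POST['order'] ?? []);

$pdo->beginTransaction();
$stmt = $pdo->prepare('UPDATE playlist SET position = ? WHERE song_id = ?');

foreach ($order as $position => $songId) {
    $stmt->execute([$position, $songId]);   // one prepared statement, many executions
}

$pdo->commit();
echo json_encode(['saved' => count($order)]);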
More broadly though, you shouldn't get hung up over hypothetical performance concerns. Is your application too slow to handle its current load (and if so, have you done any profiling to verify that it is indeed this database query that is causing the issue)? If not, then you don't need to worry too much about changing it just yet.
You can have a save button, so instead of updating on each move there will only be one update, where you update every row at one time. This also lets you have a cancel button so people can restore the order to the way it was.
You can do it so users can change locally all they wish; defer writing the final result to the database until they choose to move on from the page.
If you really want to avoid updating the database, you can try some JavaScript-based MP3 players, which allow you to pass the paths to *.mp3 files.
Then I suggest you use jQuery UI - Sortable
and use it to update the song list for the player.

Complex search workaround

Before anything, this is not necessarily a question, but I really want to know your opinion about the performance and possible problems of this "mode" of search.
I need to create a really complex search on multiple tables with lots of filters, ranges and rules... And I realize that I can create something like this:
Submit the search form
Internally I run every filter and logic step-by-step (this may take some seconds)
After I find all the matching records (the result that I want) I create a record on my searches table generating a token of this search (based on the search params) like 86f7e437faa5 and save all the matching records IDs
Redirect the visitor to a page like mysite.com/search?token=86f7e437faa5
And, on the results page, I only need to figure out which search I'm talking about and paginate the result IDs (retrieved from the searches table).
This will make refreshing & pagination much faster, since I don't need to run all the search logic on every pageview. And if the user changes a filter or search criterion, I go back to step 2 and generate a new search token.
I never saw a tutorial or anything about this, but I think that's what some forums like BBForum or Invision do with search, right? After the search I'm redirected to something like search.php?id=1231 (I don't see the search params in the URL or inside the POST args).
This "token" will not last longer than 30 min to 1 h, so the "static search" is just for performance reasons.
What do you think about this? Will it work? Any considerations? :)
Your system can use a special token like 86f7e437faa5 and cache search requests. It's a very useful mechanism for system efficiency and scalability.
But the user must still see all the parameters, in accordance with usability principles.
So generating a hash of the parameters on the fly, server-side, is a good solution. The system checks for the existence of the generated hash in the searches table and returns the result if found.
If there is no hash, the system queries the base tables and saves the new result into the searches table.
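A rough sketch of that flow in PHP (the searches table schema, the runComplexSearch() helper and the 1-hour lifetime are all placeholders; $pdo is an open PDO connection):

<?php
// Build the token from the normalised search parameters (paging must not change it).
$params = $_GET;
unset($params['page']);
ksort($params);
$token = md5(serialize($params));

// Look for a cached result that is still fresh.
$stmt = $pdo->prepare('SELECT ids FROM searches
                        WHERE token = ? AND created_at > NOW() - INTERVAL 1 HOUR');
$stmt->execute([$token]);
$ids = $stmt->fetchColumn();

if ($ids === false) {
    // Cache miss: run the expensive multi-table search (your existing logic), then store it.
    $matchingIds = runComplexSearch($params);
    $pdo->prepare('REPLACE INTO searches (token, ids, created_at) VALUES (?, ?, NOW())')
        ->execute([$token, implode(',', $matchingIds)]);
} else {
    $matchingIds = array_map('intval', explode(',', $ids));
}

// Pagination now only slices the cached ID list.
$page    = max(1, (int) ($_GET['page'] ?? 1));
$pageIds = array_slice($matchingIds, ($page - 1) * 20, 20);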
Seems logical enough to me.
Having said that, given the description of your application, have you considered using Sphinx? Regardless of the number of tables and/or filters and/or rules, all that time-consuming work is in the indexing, and is done beforehand/behind the scenes. The filtering/rules/fields/tables are all handled quickly and on the fly after the fact.
So, similar to your situation, Sphinx could give you your set of ID's very quickly, since all the hard work was pre-done.
TiuTalk,
Are you considering keeping searches saved in your "searches" table? If so, remember that your param-based generated token will remain the same for a given set of parameters, persisting over time. If your search base is frequently altered, you can't rely on saved searches, as they may return outdated results. Otherwise, it seems like a good solution overall.
I'd rather base the token on the user session. What do you think?
#g0nc1n
Sphinx seems to be a nice solution if you have control of your server (in a VPS for example).
If you don't, and a simple full-text search isn't enough for you, I guess this is a nice solution. It doesn't seem that different to me from a paginated search with caching, though it seems better than a paginated search with simple URL-based caching. But you still have the problem of the searches remaining static. I recommend you flush the saved searches from time to time.

SQL required to show the highlights of a database on a website homepage

Wikipedia (http://en.wikipedia.org/wiki/Main_Page) has sections like:
Today's featured article,
From the news,
Did you know, etc.
Say in my page I want to get the main highlights from the database table(s) (multiple databases possible); what is the best possible way to query? I mean, should I create separate connections and then query, or use multiple queries? Is it better to use PDO for this purpose?
And how can I make a particular section update without refreshing the page, say every 10 minutes? Is the code going to be complicated?
Can anyone please let me know.
Thanks.
Yes, the code is going to be complicated, but not difficult.
If you want to use PDO then use it; it's entirely up to you.
First you need to decide which highlights you want to show on the main page, and then decide how to fetch the related info.
You can use multiple queries. First fetch and then display.
And how can I make a particular section update without refreshing the page say every 10 min
For this you will have to use AJAX.
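For the fetching part, something along these lines would do (a sketch; the table names are invented): one PDO connection, one query per section, then hand the arrays to your template.

<?php
$pdo = new PDO('mysql:host=localhost;dbname=site;charset=utf8mb4', 'user', 'pass',
               [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

// One query per homepage section.
$featured = $pdo->query('SELECT id, title, teaser FROM articles
                          WHERE featured_on = CURDATE() LIMIT 1')->fetch(PDO::FETCH_ASSOC);

$news = $pdo->query('SELECT id, title FROM news
                      ORDER BY published_at DESC LIMIT 5')->fetchAll(PDO::FETCH_ASSOC);

$didYouKnow = $pdo->query('SELECT id, fact FROM articles
                            ORDER BY created_at DESC LIMIT 5')->fetchAll(PDO::FETCH_ASSOC);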
If you want to refresh the data, you have two options.
First of all, you could write JavaScript that runs after a set interval and gets fresh data straight from the database. This is a VERY BAD idea - it exposes you to browser-specific bugs, it won't work on machines with JavaScript disabled, and, far more importantly, it means you'll be furnishing your pages with the connection details for your database (and also allowing connections to it from anywhere).
A better solution would be to use AJAX to handle this - which basically means that you serve "just the data" rather than the whole page. Again, it's dependent on Javascript, but then implementing client side behaviour without the benefit of client side scripting will never work anyway!
Martin.
#1 Today's featured article is either calculated from the current date, or specified, either manually or via a randomizing script which saves today's date together with the id, so it will stick for the rest of the day.
#2 From the news is a simple ORDER BY date DESC query on the news table.
#3 Did you know says it's from their recently added articles, which would also be something of an ORDER BY date DESC query.
PDO is just a wrapper for the same SQL queries as usual; you can achieve the same result without it.
About updating a section, I'd use a cache with a 10-minute limit. This will not reload for people staying on the page for 10 minutes (for that you would need AJAX), but it will load fresh content on a time-based schedule. You should base your choice on whether you expect users to spend 10 minutes on the same page, or whether it's enough to give some news at least 10 minutes in the spotlight before it gets exchanged.
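A minimal sketch of such a 10-minute cache for one section (the cache path and query are placeholders; $pdo is an open PDO connection):

<?php
$cacheFile = __DIR__ . '/cache/news_section.html';

if (is_file($cacheFile) && filemtime($cacheFile) > time() - 600) {
    echo file_get_contents($cacheFile);         // still fresh, skip the database
} else {
    $rows = $pdo->query('SELECT title, url FROM news
                          ORDER BY published_at DESC LIMIT 5')->fetchAll(PDO::FETCH_ASSOC);
    $html = '<ul>';
    foreach ($rows as $row) {
        $html .= '<li><a href="' . htmlspecialchars($row['url']) . '">'
               . htmlspecialchars($row['title']) . '</a></li>';
    }
    $html .= '</ul>';

    file_put_contents($cacheFile, $html, LOCK_EX); // regenerated at most every 10 minutes
    echo $html;
}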
