Best way to create an auto-rotating "news" feed on a website? - php

My client is asking for an auto-rotating news-feed type thing on their site. The content itself will not change, but the display will automatically move from one item to the next. It will also allow the user to mouse over an item and hold it in place.
This is best shown by the type of thing you find on Yahoo's homepage:
The four news items will auto-rotate, but when a user puts their mouse over one (as shown), it will stop rotating and just show that one, until they move it away (then it will continue auto-rotating).
I imagine I can do this with a lot of $('item1').fade and $('item2').appear type malarkey using Prototype and Scriptaculous, but I was wondering if there was a better way, or an existing bit of code I could use (it seems like quite a common thing these days).
Thanks for any tips or assistance!

It took a little googling to find one, but this looks exactly like your provided example:
http://www.agilecarousel.com/flavor_2.htm
Here are some other, simpler carousels that have been out in the wild for a little longer:
http://sorgalla.com/projects/jcarousel/
http://www.thomaslanciaux.pro/jquery/jquery_carousel.htm
http://www.baijs.nl/tinycarousel/
Hope this helps and good luck!


Generating an image from pre-existing ones?

This is kind of confusing, so forgive me if you don't understand what I am asking. I'm trying to develop my skills and I wanted to move on to images as a next step. I did a bit of searching and I thought a good way to try this would be to generate military ribbon racks depending on the options the user selects.
(See something like this as an example: http://www.ribbon-rack-builder.com/ribbons/build/4)
Now, from looking at the source code I can see that the creator of that website creates a form with all of the different ribbons and allows the user to select the ones they want with checkboxes. This form is then posted to some PHP on the page somewhere.
Being new to the image concept I have no idea what kind of PHP this would be. Could anyone give me an idea of how this website could do this and where I should start should I want to create something similar?
Thanks very much!
First, you'll need to get which checkboxes were checked:
Set the name attribute of the checkboxes to check_list[] and you will be able to access all the checked values as an array ($_POST['check_list']).
Second, you'll most likely want to use the GD and Image Functions built into PHP.
There is a lot there, and it can be confusing, so I suggest you do some reading through questions on SO on the subject: https://stackoverflow.com/search?q=merge+image+[php]
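For a concrete starting point, here is a minimal, hypothetical sketch of the GD side. It assumes every ribbon is a same-sized PNG stored under ribbons/<id>.png; the directory layout and dimensions are made up for illustration:

    <?php
    // Hypothetical setup: each ribbon is a 200x60 PNG named after its checkbox value.
    $checked = isset($_POST['check_list']) ? $_POST['check_list'] : array();
    $w = 200; $h = 60; $perRow = 3;
    $rows = max(1, (int)ceil(count($checked) / $perRow));

    // Canvas big enough for a grid of ribbons.
    $rack = imagecreatetruecolor($w * $perRow, $rows * $h);

    foreach (array_values($checked) as $i => $id) {
        $id = basename($id); // never trust user input in a file path
        $ribbon = imagecreatefrompng("ribbons/{$id}.png");
        // Copy the ribbon into its slot in the grid.
        imagecopy($rack, $ribbon, ($i % $perRow) * $w, (int)floor($i / $perRow) * $h, 0, 0, $w, $h);
        imagedestroy($ribbon);
    }

    header('Content-Type: image/png');
    imagepng($rack);
    imagedestroy($rack);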

How to create fundraising status graphic with HTML5, Ajax and jQuery while incorporating an image in the background

I have been given the task of creating an HTML5 application which shows how much has been fundraised so far, and who has fundraised as well. This is one of the most difficult problems I have been given so far, but with some advice from here and pointers to help guide me I would like to have a decent crack at it.
Here is a more abstract description of the problem:
This is the graphic I would like to utilise below
The dark coloured line you see in this graphic is an old rail tunnel, pictured in an old surveying drawing. The tunnel is 2,880 feet long by 10 feet wide. The person who came to me asking for help is aiming to get people to "sponsor" 1 square foot sections at $10 per section until $288k has been funded. He wants the "donations" to be handled by http://dps.co.nz/. What he wants is to have the fundraising progress shown on the graphic above - ideally, the total amount raised at any time would "fill" the tunnel (rather like a thermometer does), showing the amount raised so far just above the tunnel (i.e. it could say something like $10,000 raised out of $288,000 - n%). When someone hovers their cursor over the "filled" section of the tunnel, a tooltip of some kind will show who has sponsored that particular section of the tunnel. As you keep moving the cursor along the filled section, tooltips will keep appearing showing who has sponsored each section.
I know a database with at least 2 tables will be needed. One table would describe the people who are sponsoring sections, and the other would describe the sections themselves (possibly section numbers, their size, how much they cost to sponsor, whether they have been sponsored or not, etc.). My HTML5 knowledge is still relatively fresh, so I am not sure how to go about this, but I am thinking Ajax will be needed to pull the data from the server, showing who has sponsored particular sections (as something like tooltips) and the total amount raised (in real time, preferably). I am thinking the Ajax could be used in conjunction with jQuery for effects etc.
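Just to sketch what I mean on the data side (table and column names are only placeholders, not a real schema):

    <?php
    // Hypothetical schema: sections(id, sponsor_name), one row per $10 square-foot section.
    $pdo = new PDO('mysql:host=localhost;dbname=tunnel', 'user', 'pass');

    $sponsored = (int)$pdo->query(
        'SELECT COUNT(*) FROM sections WHERE sponsor_name IS NOT NULL'
    )->fetchColumn();

    $raised = $sponsored * 10;   // $10 per section
    $goal   = 288000;

    // The jQuery front end could poll this and redraw the "fill" on the tunnel graphic.
    header('Content-Type: application/json');
    echo json_encode(array(
        'raised'  => $raised,
        'goal'    => $goal,
        'percent' => round($raised / $goal * 100, 1),
    ));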
I have seen examples of fundraising thermometers online and they look relatively simple. The thing is, the tunnel you can see in the graphic has a slight bend in it - which makes this more difficult.
If anyone could help guide me, or show examples that would help me solve this particular problem, please let me know. If anyone also has suitable alternatives I would also be very interested.
Thanks in advance!
If editing the bend out of the image would be a suitable alternative, any decent image editor (GIMP?) would make things, as you say, simple.

Store data using a txt file

This question might seem strange but I have been searching for an answer for a long time and I couldn't find any.
Let's suppose you have a blog, and this blog has many post entries just like any other blog. Now each post can have simple user comments. No like buttons or any other feature that would require data management. Now the question is: can I store user comments in text files? Each post will be associated with a text file that holds its comments. So, if I have n posts I'll have n text files.
I know I can perfectly well do this, but I have never seen it anywhere else and no one is talking about it. To me this seems better than storing all comments from all posts in a single MySQL table, but I don't know what makes it so bad that no one has implemented it.
Storing comments in text files associated with the corresponding post? Let's see if it's a good idea.
Okay, adding new comments is easy: append new text to the file. But what about the format of your data? CSV? Then you would have to parse it before rendering.
Paging. If you have a lot of comments you may consider creating paging navigation for them. It can be done easily, sure. But you would need to open the file and read all the records just to extract, say, 20.
Approving your comments. Someone posts a new comment. You store it with pending status. So... in the admin panel you need to find those marked comments and process them accordingly - save or remove. Do you think that's convenient with text files? The same goes if a user decides to remove his own comment.
Reading files, if you have many comments and many posts, will be slower than it would be with a database.
Scalability. One day you decide to extend your comments functionality to let one comment respond to another. How would you do that with text files? Or, an example from the comments by nico: "In 6 months time, when you will want to add a rating field to the comments... you'll have a big headache. Or, just run a simple ALTER query".
This is just the beginning. Someone may add something.
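To make the trade-offs concrete, here is a minimal sketch of the text-file approach (one JSON object per line; the file layout and field names are made up). Note how paging forces you to read the entire file:

    <?php
    // Hypothetical layout: comments/<post id>.txt, one JSON-encoded comment per line.
    $file = "comments/{$postId}.txt";

    // Adding a comment really is easy:
    $comment = array('author' => $author, 'text' => $text, 'time' => time());
    file_put_contents($file, json_encode($comment) . PHP_EOL, FILE_APPEND | LOCK_EX);

    // Paging is not: to show 20 comments you must load and split the whole file.
    $lines = file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    $page  = max(1, isset($_GET['page']) ? (int)$_GET['page'] : 1);
    foreach (array_slice($lines, ($page - 1) * 20, 20) as $line) {
        $c = json_decode($line, true);
        // render $c['author'], $c['text'], ...
    }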
Well, there are good reasons why this isn't done. I can't possibly name them all, but the first things that come to mind:
Efficiency
Flexibility
Databases are much more efficient and flexible than plain text files. You can index, search and assign keys to individual comments and edit and delete any comments based on their key.
Furthermore, you'd get a huge pile of text files if the blog is quite big. While that's not a problem in itself, if you save them all in one directory it can grow out of proportion and really increase the time needed to find and open a specific text file.

How can I create a list catalogue page for my website, without shopping cart/detailed product page functions

I want to make Newegg-like catalogue functionality for my little website. I want mine to be slightly different (greatly simplified) though. I haven't done anything this advanced (at least in my books) before, and wanted to know if it's possible to do. I want to use PHP and JS. The new records will be added manually using either phpMyAdmin, or perhaps I will install and use SQLyog, HeidiSQL or Navicat for such purposes. Could someone point me to the right resources to get this kind of job done as fast as possible and properly?
What I had in mind was:
For example, the cell which contains the thumbnail image, all the mini-information about the product and the big price tag will not have a separate, more detailed page. Everything the user will need to know will be inside that product cell.
Right under the thumbnail image there will be numbers (1 2 3 4 5 6), and when you hover over them, a big version of one of the available images will appear under the cursor.
Lastly, it should have page generation (don't know what you call it). For example, if there are more than 20 product entries on the page, then the server should create a new page (First 1 >2< Last) to hold the older records.
Oh and there won't be any shopping cart functionality. You can't really "order" these kinds of products, you just find something you like and call me up about it.
TIA
I'm sure there are dozens of books on this subject. I'm attempting a short reply, however:
This sounds like something that could profit from:
an MVC framework like CakePHP (or Django, Ruby on Rails etc.), which could handle the database logic (including pagination, which is the word you're looking for), and
a JavaScript library like jQuery to handle Ajax, JavaScript and other UI-related stuff.
For the page numbers, I recently had to do this. The technique is called pagination, and this thread helped me out immensely: PHP Formula For a Series of Numbers (Mathy Problem)
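As a rough sketch of the arithmetic involved (the numbers and names are illustrative, not from that thread):

    <?php
    $perPage = 20;
    $total   = 137; // e.g. from SELECT COUNT(*) FROM products
    $pages   = max(1, (int)ceil($total / $perPage));
    $page    = min($pages, max(1, isset($_GET['page']) ? (int)$_GET['page'] : 1));
    $offset  = ($page - 1) * $perPage;

    // Fetch just the current page of records:
    // SELECT ... FROM products ORDER BY id DESC LIMIT $perPage OFFSET $offset

    // Render a "First 1 >2< 3 Last" style pager:
    echo $page > 1 ? 'First ' : '';
    for ($i = max(1, $page - 2); $i <= min($pages, $page + 2); $i++) {
        echo $i === $page ? ">$i< " : "$i ";
    }
    echo $page < $pages ? 'Last' : '';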
The thumbnail effect you want to include would need to be done in javascript. I'd recommend learning jQuery, as it is pretty easy to use for this sort of thing.
This is a hard question to answer because you haven't given much indication as to your skill level, or progress towards accomplishing your goal. Assuming we're starting at 0, there is probably more to discuss than this thread can contain. :\
UPDATE
To learn PHP's database functions, I would lean on W3Schools' PHP/MySQL tutorial for a quick summary, referring to the PHP manual's MySQL documentation for details and code examples when W3Schools isn't enough. This should at least get you the markup you will need to work with.
For the thumbnails, I would reiterate my recommendation for jQuery, specifically attaching a .hover() event to the image numbers (this is equivalent to the onmouseover and onmouseout events in JS) that uses the .fadeIn() and .fadeOut() animations to show and hide your full size images. Hope that helps.

How do search engines find relevant content?

How does Google find relevant content when it's parsing the web?
Let's say, for instance, Google uses the PHP native DOM library to parse content. What methods could it use to find the most relevant content on a web page?
My thought would be that it would search for all paragraphs, order them by length, and then from possible search strings and query params work out what percentage of relevance each paragraph has.
Let's say we had this URL:
http://domain.tld/posts/stackoverflow-dominates-the-world-wide-web.html
Now from that URL I would work out that the HTML file name is of high relevance, so then I would see how closely that string compares with all the paragraphs on the page!
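Roughly, I picture something like this (just a sketch; $paragraphs stands for however the page's paragraph texts were extracted):

    <?php
    $slug   = 'stackoverflow-dominates-the-world-wide-web';
    $needle = str_replace('-', ' ', $slug);

    $best = null; $bestPct = 0.0;
    foreach ($paragraphs as $p) {
        // similar_text() fills $pct with the similarity as a percentage.
        similar_text(strtolower($needle), strtolower($p), $pct);
        if ($pct > $bestPct) { $bestPct = $pct; $best = $p; }
    }
    // $best is the paragraph that most resembles the file name.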
A really good example of this is Facebook share: when you share a page, Facebook quickly crawls the link and brings back images, content, etc.
I was thinking that some sort of scoring method would be best, to work out the % relevance depending on surrounding elements and metadata.
Are there any books / information on the best practices of content parsing that covers how to get the best content from a site, any algorithms that may be talked about or any in-depth reply?
Some ideas that I have in mind are:
Find all paragraphs and order by plain text length
Somehow find the Width and Height of div containers and order by (W+H) - #Benoit
Check meta keywords, title, description and check relevancy within the paragraphs
Find all image tags and order by largest, and length of nodes away from main paragraph
Check for object data, such as videos and count the nodes from the largest paragraph / content div
Work out resemblances from previous pages parsed
The reason why I need this information:
I'm building a website where webmasters send us links and then we list their pages. I want the webmaster to submit a link, then I go and crawl that page, finding the following information:
An image (if applicable)
A < 255 character paragraph from the best slice of text
Keywords that would be used for our search engine, (Stack Overflow style)
Meta data Keywords, Description, all images, change-log (for moderation and administration purposes)
Hope you guys can understand that this is not for a search engine but the way search engines tackle content discovery is in the same context as what I need it for.
I'm not asking for trade secrets, I'm asking what your personal approach to this would be.
This is a very general question but a very nice topic! Definitely upvoted :)
However I am not satisfied with the answers provided so far, so I decided to write a rather lengthy answer on this.
The reason I am not satisfied is that the answers are basically all true (I especially like the answer of kovshenin (+1), which is very graph-theory related...), but they are all either too specific on certain factors or too general.
It's like asking how to bake a cake and you get the following answers:
You make a cake and you put it in the oven.
You definitely need sugar in it!
What is a cake?
The cake is a lie!
You won't be satisfied, because you want to know what makes a good cake.
And of course there are a lot of recipes.
Of course Google is the most important player, but, depending on the use case, a search engine might include very different factors or weight them differently.
For example, a search engine for discovering new independent music artists may penalize artist websites with a lot of inbound external links.
A mainstream search engine will probably do the exact opposite to provide you with "relevant results".
There are (as already said) over 200 factors that are published by Google.
So webmasters know how to optimize their websites.
There are very likely many many more that the public is not aware of (in Google's case).
But under the very broad and abstract term SEO optimization, you can generally break the important ones apart into two groups:
How well does the answer match the question? Or:
How well does the pages content match the search terms?
How popular/good is the answer? Or:
What's the pagerank?
In both cases the important thing is that I am not talking about whole websites or domains, I am talking about single pages with a unique URL.
It's also important that pagerank doesn't represent all factors, only the ones that Google categorizes as Popularity. And by good I mean other factors that just have nothing to do with popularity.
In case of Google the official statement is that they want to give relevant results to the user.
Meaning that all algorithms will be optimized towards what the user wants.
So after this long introduction (glad you are still with me...) I will give you a list of factors that I consider to be very important (at the moment):
Category 1 (how well does the answer match the question?)
You will notice that a lot comes down to the structure of the document!
The page primarily deals with the exact question.
Meaning: the question words appear in the page's title text or in heading paragraphs.
The same goes for the position of these keywords: the earlier in the page, the better.
Repeated often as well (as long as it's not too much, which goes under the name of keyword stuffing).
The whole website deals with the topic (keywords appear in the domain/subdomain)
The words are an important topic in this page (internal links anchor texts jump to positions of the keyword or anchor texts / link texts contain the keyword).
The same goes if external links use the keywords in link text to link to this page
Category 2 (how important/popular is the page?)
You will notice that not all factors point towards this exact goal.
Some are included (especially by Google) just to give a boost to pages
that... well... just deserved/earned it.
Content is king!
The existence of unique content that can't be found, or found only rarely, in the rest of the web gives a boost.
This is mostly measured by unordered combinations of words on a website that are generally used very little (important words). But there are much more sophisticated methods as well.
Recency - newer is better
Historical change (how often the page has been updated in the past; changing is good).
External link popularity (how many inbound links?)
If a page links to another page, the link is worth more if the linking page itself has a high PageRank.
External link diversity
Basically links from different root domains, but other factors play a role too.
Factors even like how geographically separated the webservers of linking sites are (according to their IP addresses).
Trust Rank
For example, if big, trusted, established sites with editorial content link to you, you get trust rank.
That's why a link from The New York Times is worth much more than one from some strange new website, even if its PageRank is higher!
Domain trust
Your whole website gives a boost to your content if your domain is trusted.
Well, different factors count here. Of course links from trusted sites to your domain, but it will even do good if you are in the same datacenter as important websites.
Topic specific links in.
If websites that can be resolved to a topic link to you and the query can be resolved to this topic as well, it's good.
Distribution of inbound links over time.
If you earned a lot of inbound links in a short period of time, this will do you good at that time and in the near future afterwards. But not so good later on.
If you earn links slowly and steadily, it will do you good for content that is "timeless".
Links from restricted domains
A link from a .gov domain is worth a lot.
User click behaviour
What's the click-through rate of your search result?
Time spent on site
Google analytics tracking, etc. It's also tracked if the user clicks back or clicks another result after opening yours.
Collected user data
Votes, rating, etc., references in Gmail, etc.
Now I will introduce a third category, and one or two points from above would go into this category, but I haven't thought of that... The category is:
Category 3 (how important/good is your website in general?)
All your pages will be ranked up a bit depending on the quality of your website.
Factors include:
Good site architecture (easy to navigate, structured, sitemaps, etc...)
How established (long existing domains are worth more).
Hoster information (what other websites are hosted near you?)
Search frequency of your exact name.
Last, but not least, I want to say that a lot of these factors can be enriched by semantic technology, and new ones can be introduced.
For example someone may search for Titanic and you have a website about icebergs ... that can be set into correlation which may be reflected.
Newly introduced semantic identifiers. For example OWL tags may have a huge impact in the future.
For example a blog about the movie Titanic could put a sign on this page that it's the same content as on the Wikipedia article about the same movie.
This kind of linking is currently under heavy development and establishment and nobody knows how it will be used.
Maybe duplicate content is filtered, and only the most important version of the same content is displayed? Or maybe the other way round? That you get presented a lot of pages that match your query, even if they don't contain your keywords?
Google even weights factors differently depending on the topic of your search query!
Tricky, but I'll take a stab:
An image (If applicable)
The first image on the page
the image with a name that includes the letters "logo"
the image that renders closest to the top-left (or top-right)
the image that appears most often on other pages of the site
an image smaller than some maximum dimensions
A < 255 character paragraph from the best slice of text
contents of the title tag
contents of the meta content description tag
contents of the first h1 tag
contents of the first p tag
Keywords that would be used for our search engine, (stack overflow style)
substring of the domain name
substring of the url
substring of the title tag
proximity of the term to the most common word on the page and to the top of the page
Meta data Keywords, Description, all images, change-log (for moderation and administration purposes)
ak! gag! Syntax Error.
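A minimal sketch of a few of those heuristics with PHP's native DOM extension (the XPath expressions are illustrative; real pages will need fallbacks):

    <?php
    $doc = new DOMDocument();
    @$doc->loadHTML($html); // @ suppresses warnings from sloppy real-world markup
    $xpath = new DOMXPath($doc);

    $title    = $xpath->evaluate('string(//title)');
    $desc     = $xpath->evaluate('string(//meta[@name="description"]/@content)');
    $firstH1  = $xpath->evaluate('string((//h1)[1])');
    $firstP   = $xpath->evaluate('string((//p)[1])');
    $firstImg = $xpath->evaluate('string((//img)[1]/@src)');
    // The "logo" heuristic: first image whose src mentions "logo".
    $logo     = $xpath->evaluate('string((//img[contains(@src, "logo")])[1]/@src)');

    // Fall through the candidates for the snippet, capped at 255 characters.
    $snippet = $desc !== '' ? $desc : ($firstP !== '' ? $firstP : $title);
    $snippet = substr($snippet, 0, 255);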
I don't work at Google, but around a year ago I read they had over 200 factors for ranking their search results. Of course the top factor would be relevance, so your question is quite interesting in that sense.
What is relevance and how do you calculate it? There are several algorithms, and I bet Google has its own, but the ones I'm aware of are Pearson correlation and Euclidean distance.
A good book I'd suggest on this topic (not necessarily search engines) is Programming Collective Intelligence by Toby Segaran (O'Reilly). A few samples from the book show how to fetch data from third-party websites via APIs or screen-scraping, and finding similar entries, which is quite nice.
Anyway, back to Google. Other relevance techniques are of course full-text searching, and you may want to get a good book on MySQL or Sphinx for that matter. TSEP, suggested by #Chaoley, is also quite interesting.
But really, I know people from a Russian search engine called Yandex here, and everything they do is under NDA, so I guess you can get close, but you cannot get perfect, unless you work at Google ;)
Cheers.
Actually answering your question (and not just generally about search engines):
I believe going a bit like Instapaper does would be the best option.
The logic behind Instapaper (I didn't create it, so I certainly don't know its inner workings, but it's pretty easy to predict how it works):
Find the biggest bunch of text in text-like elements (relying on paragraph tags, while very elegant, won't work with those crappy sites that use divs instead of p's). Basically, you need to find a good balance between block elements (divs, ps, etc.) and the amount of text. Come up with some threshold: if X number of words stays undivided by markup, that text belongs to the main body text. Then expand to siblings, keeping the text/markup threshold of some sort.
Once you do the most difficult part - finding which text belongs to the actual article - it becomes pretty easy. You can find the first image around that text and use it as your thumbnail. This way you will avoid ads, because they will not be that close to the body text markup-wise.
Finally, coming up with keywords is the fun part. You can do tons of things: order words by frequency, remove noise (ands, ors and so on) and you have something nice. Mix that with the "prominent short text element above the detected body text area" (i.e. your article's heading), the page title, and meta tags, and you have something pretty tasty.
All these ideas, if implemented properly, will be very bullet-proof, because they do not rely on semantic markup — by making your code complex you ensure even very sloppy-coded websites will be detected properly.
Of course, it comes with the downside of poor performance, but I guess it shouldn't be that poor.
Tip: for large-scale websites that people link to very often, you can set the HTML element that contains the body text (the one I was describing in point #1) manually. This will ensure correctness and speed things up.
Hope this helps a bit.
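The keyword step is only a few lines in PHP. A sketch, with $bodyText being whatever the body-text detection produced and the stopword list obviously abbreviated:

    <?php
    $words = str_word_count(strtolower($bodyText), 1); // 1 = return an array of words
    $stop  = array('and', 'or', 'the', 'a', 'of', 'to', 'in', 'is', 'it', 'that');

    $freq = array_count_values(array_diff($words, $stop));
    arsort($freq); // most frequent first
    $keywords = array_slice(array_keys($freq), 0, 10);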
There are lots of highly sophisticated algorithms for extracting the relevant content from a tag soup. If you're looking to build something usable yourself, you could take a look at the source code for Readability and port it over to PHP. I did something similar recently (can't share the code, unfortunately).
The basic logic of Readability is to find all block-level tags and count the length of text in them, not counting children. Then each parent node is awarded a fragment (half) of the weight of each of its children. This is used to find the block-level tag that has the largest amount of plain text. From there, the content is further cleaned up.
It's not bulletproof by any means, but it works well in the majority of cases.
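A sketch of that weighting scheme, under my reading of it (a node's own text counts fully, and each child contributes half of its score upward):

    <?php
    // Score a node: its own text length plus half of each child's score.
    function scoreNode(DOMNode $node, &$best, &$bestScore) {
        $own = 0; $fromChildren = 0;
        foreach ($node->childNodes as $child) {
            if ($child instanceof DOMText) {
                $own += strlen(trim($child->textContent)); // text directly in this node
            } elseif ($child instanceof DOMElement) {
                $fromChildren += 0.5 * scoreNode($child, $best, $bestScore);
            }
        }
        $score  = $own + $fromChildren;
        $blocks = array('p', 'div', 'td', 'article', 'section', 'blockquote');
        if ($node instanceof DOMElement
            && in_array($node->nodeName, $blocks, true)
            && $score > $bestScore) {
            $best = $node; $bestScore = $score;
        }
        return $score;
    }

    $doc = new DOMDocument();
    @$doc->loadHTML($html);
    $best = null; $bestScore = 0;
    scoreNode($doc->documentElement, $best, $bestScore);
    // $best now holds the most text-dense block-level element.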
Most search engines look for the title and meta description in the head of the document, then the first heading and the text content in the body. Image alt tags and link titles are also considered. Last I read, Yahoo was using the meta keywords tag, but most engines don't.
You might want to download the open source files from The Search Engine Project (TSEP) on Sourceforge https://sourceforge.net/projects/tsep/ and have a look at how they do it.
I'd just grab the first 'paragraph' of text. The way most people write stories/problems/whatever is that they state the most important thing first and then elaborate. If you look at any random text, you can see that it makes sense most of the time.
For example, you do it yourself in your original question. If you take the first three sentences of your original question, you have a pretty good summary of what you are trying to do.
And, I just did it myself too: the gist of my comment is summarized in the first paragraph. The rest is just examples and elaborations. If you're not convinced, take a look at a few recent articles I semi-randomly picked from Google News. Ok, that last one was not semi-random, I admit ;)
Anyway, I think that this is a really simple approach that works most of the time. You can always look at meta-descriptions, titles and keywords, but if they aren't there, this might be an option.
Hope this helps.
I would consider these when building the code:
Check for synonyms and acronyms
Apply OCR on images to search them as text (ABBYY FineReader and Recostar are nice; Tesseract is free and fine (not as fine as FineReader :) ))
Weight fonts as well (size, boldness, underline, color)
Weight content depending on its place on the page (e.g. content near the top of the page is more relevant)
Also:
An optional description text asked from the webmaster to define the page
You can also check if you can find anything useful at Google search API: http://code.google.com/intl/tr/apis/ajaxsearch/
I'm facing the same problem right now, and after some tries I found something that works for creating a webpage snippet (it must be fine-tuned):
take all the html
remove script and style tags inside the body WITH THEIR CONTENT (important)
remove unnecessary spaces, tabs, newlines.
now navigate through the DOM to catch div, p, article, td (others?) and, for each one:
- take the HTML of the current element
- take a "text only" version of the element content
- assign to this element the score: text length * text length / html length
now sort all the scores and take the greatest (sketched in code below).
This is a quick (and dirty) way to identify the longest texts with relatively little markup, which is what normal content looks like. In my tests this seems really good. Just add water ;)
In addition to this you can search for "og:" meta tags, title and description, h1 and a lot of other minor techniques.
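The scoring step from the list above, sketched in PHP (assuming $doc is an already-loaded DOMDocument):

    <?php
    $xpath = new DOMXPath($doc);
    $best = null; $bestScore = 0.0;

    foreach ($xpath->query('//div | //p | //article | //td') as $el) {
        $html = $doc->saveHTML($el);    // the element with its markup
        $text = trim($el->textContent); // the "text only" version
        if ($html === '') continue;
        // Long text with little markup scores high.
        $score = (strlen($text) * strlen($text)) / strlen($html);
        if ($score > $bestScore) { $bestScore = $score; $best = $el; }
    }
    // $best is the element most likely to hold the main content.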
Google for 'web crawlers, robots, spiders, and intelligent agents'. You might try them separately as well to get individual results:
Web Crawler
User-Agents
Bots
Data/Screen Scraping
What I think you're looking for is screen scraping (with the DOM), which Stack Overflow has a ton of Q&A on.
Google also uses a system called PageRank, where it examines how many links to a site there are. Let's say that you're looking for a C++ tutorial, and you search Google for one. You find one as the top result, and it's a great tutorial. Google knows this because it searched through its cache of the web and saw that everyone was linking to this tutorial while raving about how good it was. Google decides that it's a good tutorial, and puts it as the top result.
It actually does this as it caches everything, giving each page a PageRank, as said before, based on links to it.
Hope this helps!
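For the curious, the classic PageRank computation is a short loop. A toy sketch with a made-up four-page web and the usual 0.85 damping factor:

    <?php
    // Made-up link graph: page => pages it links to.
    $links = array(
        'a' => array('b', 'c'),
        'b' => array('c'),
        'c' => array('a'),
        'd' => array('c'),
    );
    $pages = array_keys($links);
    $n = count($pages);
    $rank = array_fill_keys($pages, 1 / $n);
    $d = 0.85;

    for ($iter = 0; $iter < 50; $iter++) { // power iteration until (roughly) converged
        $next = array_fill_keys($pages, (1 - $d) / $n);
        foreach ($links as $page => $outs) {
            $share = $rank[$page] / count($outs); // this page's vote, split evenly
            foreach ($outs as $target) {
                $next[$target] += $d * $share;
            }
        }
        $rank = $next;
    }
    arsort($rank); // 'c' wins: everyone links to it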
To answer one of your questions, I am reading the following book right now, and I recommend it: Google's PageRank and Beyond, by Amy Langville and Carl Meyer.
Mildly mathematical. Uses some linear algebra in a graph theoretic context, eigenanalysis, Markov models, etc. I enjoyed the parts that talk about iterative methods for solving linear equations. I had no idea Google employed these iterative methods.
Short book, just 200 pages. Contains "asides" that diverge from the main flow of the text, plus historical perspective. Also points to other recent ranking systems.
There are some good answers on here, but it sounds like they don't answer your question. Perhaps this one will.
What you're looking for is called Information Retrieval.
It usually uses the Bag of Words model.
Say you have two documents:
DOCUMENT A
Seize the time, Meribor. Live now; make now always the most precious time. Now will never come again
and this one
DOCUMENT B
Worf, it was what it was glorious and wonderful and all that, but it doesn't mean anything
and you have a query, or something you want to find other relevant documents for
QUERY aka DOCUMENT C
precious wonderful life
Anyways, how do you calculate the most "relevant" of the two documents? Here's how:
tokenize each document (break into words, removing all non letters)
lowercase everything
remove stopwords (and, the, etc.)
consider stemming (removing the suffix, see Porter or Snowball stemming algorithms)
consider using n-grams
You can count the word frequency, to get the "keywords".
Then, you make one column for each word, and calculate the word's importance to the document, with respect to its importance in all the documents. This is called the TF-IDF metric.
Now you have this:
Doc  precious  worf  life ...
A    0.5       0.0   0.2
B    0.0       0.9   0.0
C    0.7       0.0   0.9
Then, you calculate the similarity between the documents, using the Cosine Similarity measure. The document with the highest similarity to DOCUMENT C is the most relevant.
Now, you seem to want to find the most similar paragraphs, so just call each paragraph a document, or consider using sliding windows over the document instead.
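Here is a compact sketch of that whole pipeline in PHP (tokenize, drop stopwords, TF-IDF, cosine), using the three documents above; stemming and n-grams are left out:

    <?php
    function tokenize($text, $stop) {
        $words = preg_split('/[^a-z]+/', strtolower($text), -1, PREG_SPLIT_NO_EMPTY);
        return array_values(array_diff($words, $stop));
    }

    $stop = array('the', 'and', 'it', 'was', 'all', 'that', 'but', 'now', 'will');
    $texts = array(
        'A' => 'Seize the time, Meribor. Live now; make now always the most precious time. Now will never come again',
        'B' => "Worf, it was what it was glorious and wonderful and all that, but it doesn't mean anything",
        'C' => 'precious wonderful life',
    );

    $docs = array();
    $df = array(); // in how many documents does each word appear?
    foreach ($texts as $id => $t) {
        $docs[$id] = tokenize($t, $stop);
        foreach (array_unique($docs[$id]) as $w) {
            $df[$w] = isset($df[$w]) ? $df[$w] + 1 : 1;
        }
    }

    $n = count($docs);
    $vecs = array(); // one TF-IDF vector per document
    foreach ($docs as $id => $words) {
        foreach (array_count_values($words) as $w => $c) {
            $vecs[$id][$w] = ($c / count($words)) * log($n / $df[$w]);
        }
    }

    function cosine($a, $b) {
        $dot = 0;
        foreach ($a as $w => $v) {
            $dot += $v * (isset($b[$w]) ? $b[$w] : 0);
        }
        $na = sqrt(array_sum(array_map(function ($v) { return $v * $v; }, $a)));
        $nb = sqrt(array_sum(array_map(function ($v) { return $v * $v; }, $b)));
        return ($na && $nb) ? $dot / ($na * $nb) : 0.0;
    }

    // C scores against A via "precious" and against B via "wonderful".
    printf("C vs A: %.3f\nC vs B: %.3f\n",
        cosine($vecs['C'], $vecs['A']),
        cosine($vecs['C'], $vecs['B']));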
You can see my video here. It uses a graphical Java tool, but explains the concepts:
http://vancouverdata.blogspot.com/2010/11/text-analytics-with-rapidminer-part-4.html
Here is a decent IR book:
http://nlp.stanford.edu/IR-book/pdf/irbookonlinereading.pdf
