External links from search engine - Redirect? - php

I am making a site which is designed to contain a large number of links to other sites, much like a search engine. I have seen two different approaches with regards to linking to external sites:
To link to the external content directly from my own site
To redirect to the content via an internal link, such as www.site.com/r/myref123 -> www.internet.com/hello.php
Would anyone be able to tell me what the advantages of each approach are? I am stuck at a crossroads here and can't find much information on which approach I should be using.

This is a bit opinion based, so I wouldn't be surprised if the question gets closed, but I think that the second option is the better by FAR.
The reason is that you are then able to track who clicks through to what. You can also run some fancy code that the user will never see, such as internally ranking sites that generate a lot of click-through traffic when presented in a list of choices.
Of course, lastly, and most importantly, if you are going to possibly throw in some links that generate some sort of income, you need to be able to track those clicks. If you simply present them and do nothing more, you will have no way to bill your advertisers.

You may want to track when, by whom, and from where links are clicked. If you put one of your own pages in between the original page and the linked one, you will be able to do so.
If you use the 1st option, every click will go directly to the referred page and you won't have any way to track it.
If you use the 2nd option, you will be able to track visitors' clicks through a script located at www.site.com/r/myref....
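A minimal sketch of what such a redirect script might look like, assuming the /r/myref123 URLs are rewritten to something like r.php?ref=myref123, and assuming a links table mapping reference codes to target URLs plus a clicks log table (all of these names are invented for illustration):

```php
<?php
// r.php - hypothetical redirect/tracking endpoint, e.g. www.site.com/r/myref123
// Assumes an invented "links" table (ref_code, target_url) and a "clicks" log table.
$pdo = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'pass');

$ref = $_GET['ref'] ?? '';

// Look up the external URL for this reference code
$stmt = $pdo->prepare('SELECT target_url FROM links WHERE ref_code = ?');
$stmt->execute([$ref]);
$target = $stmt->fetchColumn();

if ($target === false) {
    http_response_code(404);
    exit('Unknown link');
}

// Record who clicked, when, and from where
$log = $pdo->prepare(
    'INSERT INTO clicks (ref_code, ip, user_agent, referer, clicked_at)
     VALUES (?, ?, ?, ?, NOW())'
);
$log->execute([
    $ref,
    $_SERVER['REMOTE_ADDR'] ?? '',
    $_SERVER['HTTP_USER_AGENT'] ?? '',
    $_SERVER['HTTP_REFERER'] ?? '',
]);

// Send the visitor on to the external site
header('Location: ' . $target, true, 302);
exit;
```

From the clicks table you can then rank links by click-through or bill advertisers, as described above.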

Related

Extract data from forum

I want to extract data from a PHP forum based on keywords I enter.
Is there something ready-made that can do this?
Just to give an example:
Kadinlarkulubu.com/forum.php
Keywords: ios, android
From this I want to get the date, time, message, URL of the message, the keyword found in the message, and the nick of the member who wrote it.
I need this to work on different forums, so I need one or more tools that will work on the major platforms like vBulletin.
You need to create your own web crawler. If you want it to work on different platforms, you will have to create variations on that crawler.
To start, pick your favourite forum and give it a seed page (the page where crawling starts). Tread carefully, since you may need to be logged in to see posts, and if that's the case it may not be easy (you would need a crawler that logs you in and perhaps breaks a captcha). You can also make use of the search functionality: since many forums have search URLs similar to ?q=your_tag&p=1, this could make things a lot easier (see the sketch below).
Just check that you stay on the same domain and that you don't go into an infinite loop; other than that, you should be fine.
Expect this to be a long term project :)
The alternative would be using an API, if the forum provides one, but I doubt you will be that lucky.
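As a very rough illustration of the search-URL approach mentioned above, here is a crawler sketch using PHP's DOMDocument and DOMXPath. The forum URL, the ?q=...&p=... pattern, and the class names in the XPath queries are placeholders; every platform (vBulletin, phpBB, ...) uses different markup, so they would need adjusting per forum:

```php
<?php
// Very rough crawler sketch: walk a forum's search results page by page
// and pull out post data with DOMDocument/DOMXPath.
// The base URL and the class names queried below are placeholders and
// must be adapted to the actual forum markup.

$keyword = 'ios';
$baseUrl = 'https://example-forum.com/search.php?q=' . urlencode($keyword) . '&p=';

for ($page = 1; $page <= 5; $page++) {
    $html = @file_get_contents($baseUrl . $page);
    if ($html === false) {
        break; // stop on fetch failure
    }

    $doc = new DOMDocument();
    @$doc->loadHTML($html);          // suppress warnings caused by sloppy forum HTML
    $xpath = new DOMXPath($doc);

    // Hypothetical markup: each search result sits in <div class="post">
    foreach ($xpath->query('//div[@class="post"]') as $post) {
        $message = trim($post->textContent);
        $author  = $xpath->evaluate('string(.//*[@class="author"])', $post);
        $date    = $xpath->evaluate('string(.//*[@class="date"])', $post);

        echo "$date | $author | " . mb_substr($message, 0, 80) . "\n";
    }

    sleep(2); // be polite and do not hammer the forum
}
```

For a real project you would also persist the results (date, author, message, URL, keyword) to a database rather than echoing them.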
2 ways
The easy way is only possible if the owner of the forum provides you access to the forum API (if it has one) or to the database.
The extremely hard way is to make a grabber that reads the forum page by page and parses the information you want into something you can use.

How to determine if a page is a domain holding page

Is there any way to determine if a page on the web is a holding page? I need to determine, as part of my error handling, whether any of the pages I retrieve with cURL are unavailable because the domain has expired.
I thought that a distinct HTTP status code would be returned in this circumstance, but instead I get a 200 OK, which has made things difficult.
Is the only way to search for specific phrases using strpos() in PHP?
Any help would be appreciated!
There is no reliable way to do this. There are hundreds of different "domain holding pages" and there is nothing standard to all of them.
At the end of the day, a domain holding page is just a web page served like any other; it is intended only to be human-readable. Some hosts won't use one at all.
If you ever receive a domain holding page, the status code will probably be a 2xx code, but maybe not. Some hosts may choose to use a 5xx code. Again, there is no real way to know.
Is the only way to search for specific phrases using strpos() in PHP?
Yup. There is nothing else distinguishing a domain holding page from a normal web site.
You could search for:
Certain keywords ("For sale", "reserved for a customer"....)
Certain page structures (many domains held by the same company share the same basic holding page structure, like the "blonde domain parking woman" page)
It's probably going to be impossible to achieve 100% reliability though.
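As a sketch of the strpos() heuristic discussed above, something like the following could flag likely holding pages. The phrase list is purely illustrative and will produce both false positives and false negatives:

```php
<?php
// Heuristic check for a parked/holding page, along the lines described above.
// The phrase list is illustrative only and needs tuning for your use case.

function looksLikeHoldingPage(string $html): bool
{
    $needles = [
        'this domain is for sale',
        'domain parking',
        'buy this domain',
        'reserved for a customer',
        'this web page is parked',
    ];

    $haystack = strtolower($html);
    foreach ($needles as $needle) {
        if (strpos($haystack, $needle) !== false) {
            return true;
        }
    }
    return false;
}

$html = file_get_contents('http://example.com/');
if ($html !== false && looksLikeHoldingPage($html)) {
    echo "Probably a holding page\n";
}
```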
Is there any way to determine if a page on the web is a holding page?
Technically, a holding page is just a page, so you are technically looking for a page. But then what? Can you give any specific parameters for what a holding page is? That's hard to do.
So maybe it helps to invert the question:
Is there any way to determine if a page on the web is not a holding page?
If it's easier for you to answer that, you might have found a way. If not, in addition to what has already been answered:
Holding pages often look the same and have the same structure. You can use statistics across all pages to determine which of them are similar.
Holding pages might have the same remote IP address(es).
But ultimately, if you cannot define specific characteristics of a holding page, you cannot programmatically decide whether or not a page is one.

Communication between websites

I'm creating a network of websites that should communicate between themselves, for example to let all of them display an article published on one of them, or display data stored in a database of another subdomain, etc...
And all of this using AJAX for interactivity.
Which could be the best (and simplest) way to achieve this?
I thought an AJAX call could invoke a PHP script that in turn calls another script on another subdomain. Is that the right way?
Thanks
I don't know exactly what you want to do. If you control the sites and the server, you could save your users a lot of AJAX calls by skipping that approach and doing it on the server itself.
If you display all the articles using JavaScript, users without JavaScript won't see anything and search engines won't be able to crawl the website. However, maybe that's what you want.
The correct design pattern for something like this is to implement a RESTful API that all the other sites read from.
So you have a central API at e.g. http://api.example.com/
and when a site wants to display an article, it would do something on the back end to retrieve an article list, e.g.
http://api.example.com/retrieveNewestArticles
That would return, e.g., a JSON response with a list of the newest articles. Then, when you want to display an article, you would call:
http://api.example.com/showArticle/58484
That's how I would do it at least.
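To illustrate, here is roughly what the back-end call could look like on one of the sites in the network. The api.example.com endpoints are the hypothetical ones from above, it's assumed the API returns JSON, and the field names ('title', 'body') are invented for the example:

```php
<?php
// Back-end sketch: one site in the network fetching articles from the
// central API suggested above. Assumes the API returns JSON; the field
// names ("title", "body") are invented for illustration.

$json = file_get_contents('http://api.example.com/retrieveNewestArticles');
if ($json === false) {
    die('API unreachable');
}

$articles = json_decode($json, true);

foreach ($articles as $article) {
    echo '<h2>' . htmlspecialchars($article['title']) . '</h2>';
}

// Fetching a single article for display would work the same way:
$one = json_decode(
    file_get_contents('http://api.example.com/showArticle/58484'),
    true
);
echo '<div>' . htmlspecialchars($one['body']) . '</div>';
```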
Some people might suggest doing it by making all the websites connect directly to the same database. That's an option, a bit messier in the long run, but it will get the job done.
It's certainly easier than my suggestion.

How do I show "daily hits" for any URL (visits/page loads)?

I need to count page views (from any URL on my site, including search pages) and show them on my site, but I can't manage to make it work. I wanted to show the number of times a page is loaded daily, but at this point I don't really care whether I get page views, unique visitors, or any kind of visits, as long as I have some kind of counter.
Is there an easy way to do it?
Thanks
Yeah. The easiest way is to use Google Analytics.
I would suggest one of the free web statistics programs out there to just analyze your web logs. They'll be more fully featured than just counting visits, and there will be no overhead of database transactions just because someone is visiting a page.
http://awstats.sourceforge.net/
First, I have to say it: displaying the number of views is SO 2000.
Well, now to the actual question: you'll have to identify each page and decide how flexible that identification needs to be:
/?p=1
/?p=1&q=2
/?p=1&s=1
Those paths might all refer to the same object, so you'll have to grab the URL and parse it if necessary. Then just save it to a table in your database and increase the counter each time a new view comes in (see the sketch below).
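A minimal sketch of that counter, assuming an invented page_views table with a (path, day) primary key and a MySQL back end:

```php
<?php
// Minimal daily page-view counter, per the idea above. Assumes a table
//   page_views (path VARCHAR, day DATE, views INT, PRIMARY KEY (path, day))
// which is an invented schema for illustration.

$pdo = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'pass');

// Normalise the current request path. Stripping the query string treats
// /?p=1 and /?p=1&s=1 as the same page; keep the query string if they
// should be counted separately.
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

$stmt = $pdo->prepare(
    'INSERT INTO page_views (path, day, views) VALUES (?, CURDATE(), 1)
     ON DUPLICATE KEY UPDATE views = views + 1'
);
$stmt->execute([$path]);

// Read the counter back to display it
$count = $pdo->prepare('SELECT views FROM page_views WHERE path = ? AND day = CURDATE()');
$count->execute([$path]);
echo 'Views today: ' . (int) $count->fetchColumn();
```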
Back on Visonary Software Solutions' track: I would use a Google Analytics-based solution too; perhaps you will use it on your site anyway. I did a quick search and found a tutorial on how to create counters like you want, displaying Analytics data. It doesn't look too complicated.
http://www.webresourcesdepot.com/feedcount-like-google-analytics-counter/
As far as I can tell, there are quite a lot of extensions for this purpose for the popular CMSs:
For Drupal: http://drupal.org/project/google_analytics_counter
For WordPress: http://analytics.blogspot.com/2009/05/share-your-google-analytics-data-with.html

Top techniques to avoid 'data scraping' from a website database

I am setting up a site using PHP and MySQL that is essentially just a web front-end to an existing database. Understandably, my client is very keen to prevent anyone from being able to make a copy of the data in the database, yet at the same time wants everything publicly available and even a "view all" link to display every record in the db.
Whilst I have put everything in place to prevent attacks such as SQL injection, there is nothing to prevent anyone from viewing all the records as HTML and running some sort of script to parse this data back into another database. Even if I were to remove the "view all" link, someone could still, in theory, use an automated process to go through each record one by one and compile them into a new database, essentially pinching all the information.
Does anyone have any good tactics for preventing, or even just deterring, this that they could share?
While there's nothing to stop a determined person from scraping publicly available content, you can do a few basic things to mitigate the client's concerns:
Rate limit by user account, IP address, user agent, etc... - this means you restrict the amount of data a particular user group can download in a certain period of time. If you detect a large amount of data being transferred, you shut down the account or IP address (a rough sketch follows this list).
Require JavaScript - to ensure the client has some resemblance of an interactive browser, rather than a barebones spider...
RIA - make your data available through a Rich Internet Application interface. JavaScript-based grids include ExtJs, YUI, Dojo, etc. Richer environments include Flash and Silverlight as 1kevgriff mentions.
Encode data as images. This is pretty intrusive to regular users, but you could encode some of your data tables or values as images instead of text, which would defeat most text parsers, but isn't foolproof of course.
robots.txt - to deny obvious web spiders, known robot user agents.
User-agent: *
Disallow: /
Use robot metatags. This would stop conforming spiders. This will prevent Google from indexing you for instance:
<meta name="robots" content="noindex,follow,noarchive">
There are different levels of deterrence and the first option is probably the least intrusive.
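As a rough sketch of the first option (rate limiting by IP), something like the following could sit at the top of every page. It uses APCu as a cheap counter store, and the window and limit values are arbitrary:

```php
<?php
// Rough rate-limiting sketch: count requests per IP in a time window and
// cut off heavy consumers. Uses APCu as a cheap counter store; the
// threshold (100 requests per 60 seconds) is arbitrary.

$ip     = $_SERVER['REMOTE_ADDR'];
$key    = 'req_count_' . $ip;
$window = 60;   // seconds
$limit  = 100;  // max requests per window

$count = apcu_fetch($key);
if ($count === false) {
    apcu_store($key, 1, $window);   // first request starts a new window
} else {
    apcu_inc($key);
    if ($count + 1 > $limit) {
        http_response_code(429);    // Too Many Requests
        exit('Rate limit exceeded');
    }
}
```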
If the data is published, it's visible and accessible to everyone on the Internet. This includes the people you want to see it and the people you don't.
You can't have it both ways. You can make it so that data can only be visible with an account, and people will make accounts to slurp the data. You can make it so that the data can only be visible from approved IP addresses, and people will go through the steps to acquire approval before slurping it.
Yes, you can make it hard to get, but if you want it to be convenient for typical users you need to make it convenient for malicious ones as well.
There are a few ways you can do it, although none are ideal.
Present the data as an image instead of HTML. This requires extra processing on the server side, but wouldn't be hard with the graphics libs in PHP. Alternatively, you could do this just for requests over a certain size (i.e. all).
Load a page shell, then retrieve the data through an AJAX call and insert it into the DOM. Use sessions to set a hash that must be passed back with the AJAX call as verification. The hash would only be valid for a certain length of time (i.e. 10 seconds). This is really just adding an extra step someone would have to jump through to get the data, but it would prevent simple page scraping (see the sketch below).
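A sketch of the session hash from the second point might look like the following AJAX endpoint. The token name, the 10-second lifetime, and the JSON payload are illustrative only:

```php
<?php
// data.php - AJAX endpoint sketch for the session-hash idea above.
// The page shell would have set the token when it was rendered:
//   $_SESSION['ajax_token']   = bin2hex(random_bytes(16));
//   $_SESSION['ajax_expires'] = time() + 10;   // valid for 10 seconds
// and echoed it into the page so the JavaScript can send it back.

session_start();

$sent = $_GET['token'] ?? '';

if (
    empty($_SESSION['ajax_token'])
    || !hash_equals($_SESSION['ajax_token'], $sent)
    || time() > ($_SESSION['ajax_expires'] ?? 0)
) {
    http_response_code(403);
    exit('Invalid or expired token');
}

// Token checks out: return the actual data.
header('Content-Type: application/json');
echo json_encode(['rows' => ['...the data that would otherwise be in the HTML...']]);
```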
Try using Flash or Silverlight for your frontend.
While this can't stop someone if they're really determined, it would be more difficult. If you're loading your data through services, you can always use a secure connection to prevent middleman scraping.
Force a reCAPTCHA every 10 page loads for each unique IP.
There is really nothing you can do. You can try to look for an automated process going through your site, but they will win in the end.
Rule of thumb: If you want to keep something to yourself, keep it off the Internet.
Take your hands away from the keyboard and ask your client why he wants the data to be visible but not able to be scraped.
He's asking for two incongruent things and maybe having a discussion as to his reasoning will yield some fruit.
It may be that he really doesn't want it publicly accessible and you need to add authentication / authorization. Or he may decide that there is value in actually opening up an API. But you won't know until you ask.
I don't know why you'd deter this. The customer's offering the data.
Presumably they create value in some unique way that's not trivially reflected in the data.
Anyway.
You can check the browser, screen resolution and IP address to see if it's likely some kind of automated scraper.
Most things like cURL and wget -- unless carefully configured -- are pretty obviously not browsers (a trivial check is sketched below).
Using something like Adobe Flex - a Flash application front end - would fix this.
Other than that, if you want it to be easy for users to access, it's easy for users to copy.
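As a trivial illustration of the user-agent check mentioned above: headers are easily spoofed, so treat this as a speed bump rather than protection, and note that the signature list is illustrative only:

```php
<?php
// Trivial user-agent screen. Headers are easily spoofed, so this only
// blocks lazy scrapers. The signature list is illustrative only.

$ua = strtolower($_SERVER['HTTP_USER_AGENT'] ?? '');

if ($ua === '') {
    http_response_code(403);
    exit('Automated clients are not allowed');
}

$botSignatures = ['curl', 'wget', 'python-requests', 'libwww-perl', 'scrapy'];

foreach ($botSignatures as $sig) {
    if (strpos($ua, $sig) !== false) {
        http_response_code(403);
        exit('Automated clients are not allowed');
    }
}
```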
There's no easy solution for this. If the data is available publicly, then it can be scraped. The only thing you can do is make life more difficult for the scraper by making each entry slightly unique by adding/changing the HTML without affecting the layout. This would possibly make it more difficult for someone to harvest the data using regular expressions but it's still not a real solution and I would say that anyone determined enough would find a way to deal with it.
I would suggest telling your client that this is an unachievable task and getting on with the important parts of your work.
What about creating something akin to a bulletin board's troll protection? If a scrape is detected (perhaps a certain number of accesses per minute from one IP, or a directed crawl that looks like a sitemap crawl), you can then start to present garbage data, like changing a couple of digits of the phone number or adding silly names to name fields.
Turn this off for Google IPs!
Normally to screen-scrape a decent amount one has to make hundreds, thousands (and more) requests to your server. I suggest you read this related Stack Overflow question:
How do you stop scripters from slamming your website hundreds of times a second?
Use the fact that scrapers tend to load many pages in quick succession to detect scraping behaviours. Display a CAPTCHA for every n page loads over x seconds, and/or include an exponentially growing delay for each page load that becomes quite long when say tens of pages are being loaded each minute.
This way normal users will probably never see your CAPTCHA but scrapers will quickly hit the limit that forces them to solve CAPTCHAs.
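A sketch of that idea, counting page loads per session and growing the delay exponentially. The thresholds, the one-minute window, and the delay cap are arbitrary, and the actual CAPTCHA verification is only indicated:

```php
<?php
// Sketch of the exponentially growing delay described above. Counts page
// loads per session within a one-minute window; the thresholds and the
// base delay are arbitrary, and the CAPTCHA step is only indicated.

session_start();

$now = time();
if (!isset($_SESSION['window_start']) || $now - $_SESSION['window_start'] > 60) {
    $_SESSION['window_start'] = $now;   // start a fresh one-minute window
    $_SESSION['page_loads']   = 0;
}
$_SESSION['page_loads']++;

$n = $_SESSION['page_loads'];

if ($n > 30) {
    // Past this point, demand a CAPTCHA (reCAPTCHA verification would go here).
    http_response_code(429);
    exit('Please complete the CAPTCHA to continue');
}

if ($n > 10) {
    // Delay grows exponentially with each extra page load: 1s, 2s, 4s, ...
    $delay = min(pow(2, $n - 11), 30);   // cap at 30 seconds
    sleep($delay);
}
```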
My suggestion would be that this is illegal anyway, so at least you have legal recourse if someone does scrape the website. So maybe the best thing to do would be to just include a link to the original site and let people scrape away. The more they scrape, the more of your links will appear around the Internet, building up your PageRank more and more.
People who scrape usually aren't opposed to including a link to the original site since it builds a sort of rapport with the original author.
So my advice is to ask your boss whether this could actually be the best thing possible for the website's health.
