I have an affiliate tracking script that is currently being exploited by an affiliate. On the main site, I track affiliate clicks using a URL like www.example.com?member=affiliatecode,
then I capture the query string with $_GET['member'];
The problem is that an affiliate is exploiting this simple system: page loads on his site are being recorded as clicks going to mine.
What measures can I add to prevent this without changing the affiliate link to my site? One idea I had is to check whether my page has actually been loaded, but how would I do that in PHP?
Thanks
I don't quite grasp the exact problem (or more to the point, exactly how the affiliate is logging hits), but one solution may be to put an image on your page, which should ensure that a browser has actually loaded it. So at the bottom of your page you would insert:
<?php
if (isset($_GET['member'])) {
    // Only emit the tracking pixel for affiliate-tagged visits.
    echo '<img src="affimg.php?member=' . urlencode($_GET['member']) . '" width="1" height="1" alt="">';
}
?>
And then in affimg.php you would log the affiliate and output a 1x1 image (remembering to set the headers). The downside is that there is no way to stop the affiliate from simply putting that image straight into his page, and if the affiliate is using an iframe, the image would still be loaded.
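A rough sketch of what affimg.php could look like; the PDO connection details and the affiliate_clicks table are assumptions, not from the original post:

<?php
// affimg.php - log the affiliate hit, then emit a 1x1 transparent GIF.
$pdo = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');

if (isset($_GET['member'])) {
    $pdo->prepare('INSERT INTO affiliate_clicks (member, ip, created_at) VALUES (?, ?, NOW())')
        ->execute([$_GET['member'], $_SERVER['REMOTE_ADDR']]);
}

// Smallest valid transparent GIF, base64-encoded.
$gif = base64_decode('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7');
header('Content-Type: image/gif');
header('Content-Length: ' . strlen($gif));
echo $gif;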
A better way may be to do some extra tracking. Rather than requiring that a single page gets visited, change it so that you require two or more, using a database to track the IP addresses; a sketch follows.
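One way to read that suggestion, purely as an illustration (the two-page rule, the time window and all table names are my own assumptions):

<?php
// Page one records the visit; any later page within 10 minutes confirms it.
$pdo = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');
$ip  = $_SERVER['REMOTE_ADDR'];

// On the landing page:
$pdo->prepare('REPLACE INTO pending_clicks (ip, member, seen_at) VALUES (?, ?, NOW())')
    ->execute([$ip, $_GET['member']]);

// On any later page - promote the pending click to a confirmed one:
$pdo->prepare(
    'INSERT INTO affiliate_clicks (member, ip, created_at)
     SELECT member, ip, NOW() FROM pending_clicks
     WHERE ip = ? AND seen_at > NOW() - INTERVAL 10 MINUTE'
)->execute([$ip]);
$pdo->prepare('DELETE FROM pending_clicks WHERE ip = ?')->execute([$ip]);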
There may be a better way, but then I don't know the exact details.
First, you can never be sure that a human rather than a bot/script "clicks" on an image; this is a fact. Secondly, you can make things a bit more difficult. One idea would be:
Deliver a banner wrapped in a unique link that is triggered via a JavaScript click event, like the following (click.php and the token value are placeholders):
<a href="#" onclick="window.location='http://www.targetsite.com/click.php?token=UNIQUE_TOKEN'; return false;"><img src="http://www.targetsite.com/image.jpg" /></a>
Save the token in your database beforehand and give it an expiration time of a few minutes. Then only count the click if the token is still valid when it comes back. So your affiliate has to fake the onclick event or parse your source code to extract the token.
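A minimal sketch of the server side of this check; the click_tokens table and the 5-minute window are assumptions:

<?php
// click.php - count the click only if the token exists and is still fresh.
$pdo = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');

// Deleting the row makes each token single-use.
$stmt = $pdo->prepare(
    'DELETE FROM click_tokens
     WHERE token = ? AND created_at > NOW() - INTERVAL 5 MINUTE'
);
$stmt->execute([$_GET['token'] ?? '']);

if ($stmt->rowCount() === 1) {
    // Token was valid and unused: record the click here.
}
header('Location: http://www.targetsite.com/');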
As said, this only makes things more difficult. You could also parse your affiliate's site source to see whether your banner is being "clicked" automatically there (which would be very cheeky).
Another addition would be to read a cookie on the client side and attach it to the generated link, to check whether the client has already requested your target site.
Since you can't protect yourself completely from fakes, you can build several little tools like these that increase safety.
HTH
Related
What is the proper way, using PHP, to record web page views? I believe that currently we just record a view each time a page is hit, but I assume that includes hits from bots or other things we don't want to be recording.
How can we record only real, legitimate views in our DB and not count things that shouldn't count as an actual page view?
Thanks!
Use Google Analytics.
To set up the web tracking code:
1. Find the tracking code snippet for your property: sign in to your Google Analytics account, and select the Admin tab. ...
2. Find your tracking code snippet. ...
3. Copy the snippet. ...
4. Paste your snippet (unaltered, in its entirety) into every web page you want to track. ...
5. Check your setup.
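For reference, one well-known version of the analytics.js snippet looks roughly like this (UA-XXXXX-Y stands in for your own property ID; check Google's current documentation for the exact code to paste):

<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-XXXXX-Y', 'auto');
ga('send', 'pageview');
</script>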
1) Ignore any known bots that visit your web page (robots.txt only helps with well-behaved bots; you can also filter known bot user agents).
2) Use an Ajax call at the end of the page to get rid of bouncers (visitors who open the page by mistake and close it before everything has loaded).
3) You can also fire that Ajax call after a short delay, so bots will already have left your page while a real visitor is still browsing it; see the sketch after this list.
4) Record IP addresses and (if possible) some device identifier to find unique visitors.
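A minimal sketch of points 2) and 3); the logview.php endpoint and the 5-second delay are my own choices:

<script>
// Fire after 5 seconds; bots and accidental visitors are usually gone by then.
setTimeout(function () {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/logview.php', true);
    xhr.send();
}, 5000);
</script>

<?php
// logview.php - record the view server-side.
$pdo = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');
$pdo->prepare('INSERT INTO page_views (ip, user_agent, viewed_at) VALUES (?, ?, NOW())')
    ->execute([$_SERVER['REMOTE_ADDR'], $_SERVER['HTTP_USER_AGENT'] ?? '']);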
I'm trying to enable screenshots of the page a logged-in user is currently on. I've placed a button that needs to:
read in the content of the referring page
save it to a file
render that file as a PDF
redirect back to the referring page
The problem I've run into is that users are logged in and on pages that are very specific to them. I can't grab the page via cURL with generic credentials because the screenshot won't be applicable, and I don't have the user's credentials.
How can I read in the contents of the current/referring page with PHP without access to the user's credentials? I've tried file_get_contents, which was not working either.
It sounds like your mechanism is going to be faulty anyway: you're not saving the page as it looks to them, but rather saving the page as it looks to CURL at some point in the future.
If you want an accurate solution, then you need to save a copy of the rendered HTML somewhere server-side as you send it out (you can use PHP's output buffering to capture it) and mark the file you save with some sort of key that goes to the user. If the user clicks the button, it sends that key to the server which you use to look up the saved HTML file, and process it as desired.
Significantly less efficient, of course, but there you go. Alternatively, you can save just the parameters processed on the page so that you can re-render it with PHP if required. Still no cURL involved, but less saving going on. Obviously you don't need to keep this cached information long, just a few minutes, so storing it in RAM (e.g. memcached) would be sufficient.
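A minimal sketch of the output-buffering approach; the snapshot directory and the key scheme are assumptions:

<?php
// At the very top of the page being rendered: capture everything we output.
session_start();
ob_start();

// ... normal page rendering happens here ...

// At the very end: save the rendered HTML under a per-user key.
$html = ob_get_contents();           // the buffer is still sent to the browser
$key  = bin2hex(random_bytes(16));
$_SESSION['snapshot_key'] = $key;
file_put_contents("/tmp/snapshots/$key.html", $html);  // directory must exist
ob_end_flush();

The button's handler would then look up $_SESSION['snapshot_key'], load the saved HTML, and hand it to an HTML-to-PDF tool of your choice.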
I don't believe this can be accomplished ethically without obtaining the user's credentials.
I need a way to count how many times a link is clicked, and I was thinking of creating a PHP script to redirect through that does the counting. Is there a better way to do this? How would I count each time a user visits the link, and would it be best to save the counts in a database somewhere? Any suggestions?
Yes, it must be a PHP script - JavaScript for example won't work all the time.
So - instead of a link to
http://some.site.com/page2.php
You would link to
http://some.site.com/redirect.php?page2.php
And in redirect.php you would log the value (for example, in a database), and at the end send this header:
header("Location: http://some.site.com/".$_SERVER["QUERY_STRING"]);
to redirect to the path that comes after the "?".
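Putting it together, a rough sketch of redirect.php (the link_clicks table is hypothetical; note the whitelist check, since blindly redirecting to the query string would be an open redirect):

<?php
// redirect.php - count the click, then forward to the requested page.
$target = $_SERVER['QUERY_STRING'];

// Only allow simple local .php filenames, to avoid an open redirect.
if (!preg_match('/^[a-zA-Z0-9_-]+\.php$/', $target)) {
    $target = 'index.php';
}

$pdo = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');
$pdo->prepare('INSERT INTO link_clicks (target, ip, clicked_at) VALUES (?, ?, NOW())')
    ->execute([$target, $_SERVER['REMOTE_ADDR']]);

header('Location: http://some.site.com/' . $target);
exit;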
// yeah - logs might work... a little more work, though, and it is also very server-specific.
I would analyze your web log files as this will work whether it's a static page or a script.
If the page you need to count is a script, you could insert code that updates a table.
Website statistics is a big industry and there are many free and pay solutions out there to explore and get ideas from.
If you need to track clicks on a specific link then you'll probably need to use JavaScript to capture the click and send a notification to a tracking server. If you need to track page views then you're best off looking at your server logs. Remember that a page can have many links pointing to it; you have to differentiate between link-click events and page impressions. Another possibility, depending on your application, is to use Google tracking or a similar third-party tracking app.
Do they use a PHP page to analyze the link and return all of the images as JSON?
Is there a way to do this with just JavaScript, so you don't have to go to the server to analyze the page?
I don't know how they do it. I'd implement a small service for that purpose: given a URL, return some relevant image (or generate a screenshot). This service could also cache results for better performance. But either way, the page needs to be fetched in order to grab the <img src=...> tags or to take the screenshot.
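As a sketch of such a service, fetching the page server-side and pulling out the image URLs (the endpoint name and the bare file_get_contents fetch are my simplifications; a cURL client with timeouts would be more robust):

<?php
// images.php?url=... - fetch a page and return its image URLs as JSON.
$url  = $_GET['url'] ?? '';
$html = @file_get_contents($url);

$images = [];
if ($html !== false) {
    $doc = new DOMDocument();
    @$doc->loadHTML($html);   // suppress warnings from real-world markup
    foreach ($doc->getElementsByTagName('img') as $img) {
        $images[] = $img->getAttribute('src');
    }
}

header('Content-Type: application/json');
echo json_encode($images);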
Facebook calls back to the server. If you use Firebug (or, as I did, the Web Inspector in Safari), you can inspect the Ajax calls. Facebook calls back to a script at /ajax/composer/attachment.php; in there is some JavaScript which contains HTML that gets inserted into the page. Here is what it looks like if I point the Facebook attach-link dialogue at the BBC News homepage in Safari Web Inspector:
Facebook JavaScript response when you attach a link in Safari Web Inspector http://tommorris.org/files/Facebook-20100529-181745.jpg
I put up the full JavaScript response on Gist (it is all one-line and minified originally, so I just flung it through TextMate to wrap it).
I'm not sure if you could do it on the client-side - because of browser protections on cross-site scripting - and even if you could, you probably ought not to because of this potential security problem: imagine if someone puts in a URL that points to a page which only they have access to. You don't necessarily want to put what's on someone else's customised or private page up on your Facebook/Digg type site. Imagine if it was something like Flickr and there were private pics - or worse, a porno site. No, better to proxy it back to your server and then grab the images. Plus, it'll probably be faster. No need to tax your end user's potentially slow connection downloading a page when your server will probably be able to do it quicker...
OK, firstly, this is not about forms; this is about consistent layout as a user explores a site.
Let me explain:
If we imagine a (non-Ajax) digital camera online store: say someone was in the DSLR section and chose to view the cameras in gallery mode, ordered by price. They then click through to the compact cameras page. It would be in the user's interest if the "views" they selected were carried over to this new page.
Now, I'd say use a session - am I wrong?
Are there performance issues I should be aware of for a few small session vars (i.e. view=1, orderby=price)?
Speaking of performance, there should not be much of a problem with either solution.
Some things that have to be considered are :
With GET, if a URL gets copy-pasted (into an email or MSN), whoever receives the URL will get the same GET parameters.
Is that a good thing, or not?
On the other hand, a session will not be shared if a URL is copy-pasted,
which means the first guy will say to the other "hey, look at this", and the second guy will not see the same page; the same goes for bookmarking, I should add.
GET is specific to each URL,
while the SESSION is shared across all tabs of the user,
which means browsing with several tabs at the same time can cause trouble with sessions, if you don't take care of that.
I'd say use both. Store it in the session, but also put it in the GET parameters for the page.
Why? This way the user is able to carry his options from page to page, but they are also in the URL, so if he sends search results to his friend, the friend sees them the exact same way he did.
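A minimal sketch of that "both" approach; the helper name and the defaults are mine:

<?php
session_start();

// Prefer an explicit GET parameter, fall back to the session, and keep the
// session in sync so the choice follows the user from page to page.
function get_pref(string $name, string $default): string {
    if (isset($_GET[$name])) {
        $_SESSION[$name] = $_GET[$name];
    }
    return $_SESSION[$name] ?? $default;
}

$view    = get_pref('view', 'list');
$orderby = get_pref('orderby', 'price');

When building links, append the current values (e.g. ?view=1&orderby=price) so the URL stays shareable.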
No, the session's performance will not degrade by putting those small variables in there. Unless you're storing monolithic arrays in your session, the vast majority of the time spent loading a session goes to reading it from its storage medium (file, database, memcache, etc.).
You should use GET in your case.
There is one simple rule in web development: each page with different content must have its own address, so that a customer can save any page to favorites or send it to a friend. It's a pain in the backside when someone sends you a link to a particular page saying "Look!", but the site uses frames, so you land on the front page and have no idea where to look.
You can save a user's preferences in his profile or a cookie (not the session), but they should be reflected in the address bar as well.
Sessions are meant for a completely different purpose; a shopping cart is one example.
It's a subjective question, it would work either way.
Personally I would go with sessions, as they don't interfere with the URL, so people can bookmark the URL if they want.
However, the flip side is that if they bookmarked it, they might see different things when they return, since the state would live in $_SESSION rather than in the URL.