Here is the deal. I have created some HTML/JavaScript dashboards that will be displayed on big screen displays. The displays are powered by thin clients running WinXP and Firefox 4. There will also be a desktop version. I would like to use one URL (dashboard.php) and then redirect to the appropriate page, so I need to be able to differentiate between the big screen displays and someone using Firefox from a desktop. My thought was to permanently change the user-agent string on the big screen deployments and use browser sniffing to determine which version to forward the user to. The problem is, it appears that FF4 has removed the ability to change the UA string permanently. Does anyone have any ideas on how I could do this, or on how I could otherwise differentiate between the big screens and a desktop user?
What about using the IP addresses of the computers driving the big screens? Especially if the big displays are on an internal network, you can assign them static IP addresses and use those to identify the machines. Failing that, just pass a GET parameter such as ?view=bigDisplay. You can simply put in your code
$bigDisplay = (isset($_GET['view']) && $_GET['view'] == 'bigDisplay');
Then you would have a boolean telling you whether to show the bigDisplay version.
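For the IP-based route, a minimal sketch of dashboard.php might look like this (the addresses and the two target page names are invented for illustration):
<?php
// dashboard.php - sketch only: the IPs and target filenames below are invented
$bigScreenIps = array('192.168.1.50', '192.168.1.51'); // static IPs of the thin clients

$bigDisplay = in_array($_SERVER['REMOTE_ADDR'], $bigScreenIps)
    || (isset($_GET['view']) && $_GET['view'] == 'bigDisplay');

header('Location: ' . ($bigDisplay ? 'dashboard_big.php' : 'dashboard_desktop.php'));
exit;
?>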
Edit:
Also, I just googled and found this: http://support.mozilla.com/en-US/questions/806795
Javascript
if ((screen.width >= 1024) && (screen.height >= 768) && window.location.search.indexOf('big=1') === -1)
{
    // Only redirect once; the query-string check avoids an endless reload loop
    window.location = '?big=1';
}
PHP
if (isset($_GET['big']) && $_GET['big'] == 1) {
    setcookie('big', 1, 0); // expiry 0 = cookie lasts for the browser session
}
Then just read the cookie on subsequent requests, and that's it.
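Reading it back is a one-liner (assuming the cookie name 'big' used above):
// True on any request where the big-display cookie was previously set
$isBigDisplay = isset($_COOKIE['big']) && $_COOKIE['big'] == 1;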
If IP address detection is not an option, you could simply set a cookie for the big screen machines.
You can do this by creating a special URL, e.g. /bigscreen, which sets the cookie with an expiration date far in the future. Then, in your script, simply check for the existence of that cookie.
Using a cookie means that you don't have to worry about continuing to append query strings to subsequent URLs.
Edit: You could even manually place the cookie in Firefox if you wish to avoid visiting a special URL. There are add-ons to facilitate that.
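A rough sketch of both halves, assuming a cookie named 'bigscreen' and hypothetical page names (the ten-year lifetime is an arbitrary choice):
<?php
// bigscreen.php - visit this once from each thin client to flag it
setcookie('bigscreen', '1', time() + 10 * 365 * 24 * 60 * 60, '/'); // roughly ten years
echo 'This machine is now flagged as a big screen display.';
?>
And in dashboard.php:
<?php
// Route based on the presence of the cookie
if (isset($_COOKIE['bigscreen'])) {
    header('Location: dashboard_big.php');     // hypothetical big-screen page
} else {
    header('Location: dashboard_desktop.php'); // hypothetical desktop page
}
exit;
?>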
You can set the UA string just fine in Firefox 4. The general.useragent.override preference will let you set it to whatever you want.
What was removed was a way to modify parts of the UA string without overriding the whole thing.
I would like to check whether a mobile version exists for a specific website or not. To my understanding, we cannot be sure that every website has a mobile version located at http://m.example.com/, therefore I am testing it through a cURL request. Here is how I am doing it:
* I send mobile browser headers in the cURL request, which returns the contents of the resulting URL.
* If the site has a mobile version, it returns the contents of the mobile site.
* I then check whether the content includes the @media keyword; if it does, I assume the site has a mobile version.
The problem is that if the CSS is loaded externally, I will have to send further cURL requests for the CSS files as well, which will make the process even slower. Is there a specific solution to my problem, or can I speed this process up a bit?
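Roughly, what I am doing looks something like this (simplified; the user-agent string and URL are just examples):
<?php
// Fetch the page while pretending to be a mobile browser
$mobileUserAgent = 'Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X) '
                 . 'AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9A334 Safari/7534.48.3';

$ch = curl_init('http://m.example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return the body instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // follow any redirect to the final page
curl_setopt($ch, CURLOPT_USERAGENT, $mobileUserAgent);
$html = curl_exec($ch);
curl_close($ch);

// Crude check: does the returned markup mention a media query anywhere?
$hasMediaQuery = ($html !== false) && (stripos($html, '@media') !== false);
?>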
Any help would be appreciated. Thanks.
The problem with your approach, which smells a bit like an XY Problem, is that it is simply unreliable.
A website has many ways of providing a mobile version, including:
1. Using CSS media queries
The problem with this method is twofold. For starters, you would have to scan every single CSS file and <link> declaration. Secondly, the site can dynamically introduce stylesheets to the page using JavaScript, which you will never see using cURL because it lacks a JavaScript parser.
2. Browser sniffing using (client side) JavaScript, or screen width sniffing using JavaScript
Again, this JavaScript will never get executed, so you will never see that result.
3. Browser sniffing using server side code
Well, I guess you could try to use a mobile user-agent string with your cURL request, and see where that takes you, but all of these methods are hackish and unreliable.
4. The page could be mobile friendly from the get-go (credit to @Quentin)
As @Quentin mentioned in the comments, the page could be mobile friendly without any additional checks on the client/server side (responsive design without media queries, by simply using percentage-based values, for example).
I was looking for a way to block old browsers from accessing the contents of a page, because the page isn't compatible with old browsers like IE 6.0, and to return a message saying that the browser is outdated and that an upgrade is needed to see the page.
I know a bit of PHP, and writing a little script that serves this purpose isn't hard, but just as I was about to start a big question popped into my mind.
If I write a PHP script that blocks browsers based on their name and version, is it possible that this might also block some search engine spiders?
I was thinking about doing the browser identification via this function: http://php.net/manual/en/function.get-browser.php
A crawler will probably be identified as a crawler, but is it possible that a crawler supplies some kind of browser name and version?
If nobody has tested this before or played with this kind of function, I probably won't risk it, or I will make a test folder inside a website to see whether the pages there get indexed; if they don't, I'll abandon the idea or modify it until it works. But to save myself the trouble I figured it would be best to ask around, since I couldn't find this information after a lot of searching.
No, it shouldn't affect any of the major crawlers. get_browser() relies on the User-Agent string sent with the request, so it shouldn't be a problem for crawlers, which use custom user-agent strings (e.g. Google's spiders have "Google" in their names).
Now, I personally think it's a bit unfriendly to completely block a website for someone on an old version of IE. I'd just put a red banner at the top saying "This site might not function correctly. Please update your browser or get a new one," or something to that effect.
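For illustration, a minimal sketch of that check; it assumes browscap.ini is configured in php.ini (get_browser() returns false without it), and the class name on the banner is made up:
<?php
$info = get_browser(null, true); // true = return an array instead of an object

$isCrawler = !empty($info['crawler']); // browscap flags known spiders
$isOldIE   = isset($info['browser'], $info['majorver'])
          && $info['browser'] === 'IE'
          && (int) $info['majorver'] <= 6;

if (!$isCrawler && $isOldIE) {
    echo '<div class="browser-warning">This site might not function correctly in your browser. '
       . 'Please update your browser or get a new one.</div>';
}
?>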
I need to confirm something before I go accuse someone of ... well I'd rather not say.
The problem:
We allow users to upload images and embed them within text on our site. In the past we allowed users to hotlink to our images as well, but due to server load we unfortunately had to stop this.
Current "solution":
The method the programmer used to solve our "too many connections" issue was to rename the file that receives and processes image requests (image_request.php) to image_request2.php, and replace the contents of the original with
<?php
header("HTTP/1.1 500 Internal Server Error") ;
?>
Obviously this has caused all images with their src attribute pointing to the original image_request.php to be broken, and is also the wrong code to be sending in this case.
Proposed solution:
I feel a more elegant solution would be:
In .htaccess
1. If the request is for image_request.php
2. Check the referrer
3. If the referrer is not our site, send the appropriate header
4. If the referrer is our site, proceed to image_request.php and process the image request
What I would like to know is:
Compared to simply returning a 500 for each request to image_request.php:
How much more load would be incurred if we were to use my proposed alternative solution outlined above?
Is there a better way to do this?
Our main concern is that the site stays up. I am not willing to agree that breaking all internally linked images is the best / only way to solve this. I refuse to tell our users that because of something WE changed they must now manually change the embed code in all their previously uploaded content.
OK, then you can use the mod_rewrite capability of Apache to prevent hot-linking:
http://www.cyberciti.biz/faq/apache-mod_rewrite-hot-linking-images-leeching-howto/
Using mod_rewrite will probably give you less load than running a PHP script. I think your solution would be lighter.
Make sure that you only block access in step 3 if the referer header is not empty. Some browsers and firewalls block the referer header completely and you wouldn't want to block those.
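A minimal .htaccess sketch along those lines (example.com is a placeholder for your own domain):
RewriteEngine On
# Let requests with an empty referer through - some browsers and firewalls strip the header
RewriteCond %{HTTP_REFERER} !^$
# Let requests coming from our own pages through
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
# Anything else asking for image_request.php is refused
RewriteRule ^image_request\.php$ - [F]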
I assume you store image paths in a database keyed by image ID, right?
And then you query the database for the image path, given an image ID.
I suggest you install Memcached on the server and cache those lookups; it's easy to do in PHP. After that you can watch the server load and decide whether you need to stop the hotlinking at all.
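As a sketch of what that caching could look like (the key name, TTL, and database helper are invented, and this assumes the pecl/memcached extension):
<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$imageId = (int) $_GET['id'];        // hypothetical request parameter
$key     = 'image_path_' . $imageId;

$path = $mc->get($key);
if ($path === false) {
    $path = lookUpImagePathInDatabase($imageId); // hypothetical database helper
    $mc->set($key, $path, 300);                  // cache the path for five minutes
}
?>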
Your increased load is equal to that of a string comparison in PHP (zilch).
The obfuscation solution doesn't even solve the problem to begin with, as it doesn't stop future hotlinking from happening. If you do check the referrer header, make absolutely certain that all major mainstream browsers will set the header as you expect. It's an optional header, and the behavior might vary from browser to browser for images embedded in an HTML document.
You likely have sessions enabled for all requests (whether they're authenticated or not). As a backup plan, you can also rename your session cookie to something obscure (edit: obscurity here actually doesn't matter, as long as the cookie is set for your host only, and it is) and check in image_request.php that a cookie by that name is set (no cookie would indicate that it's a first request to your site). Only use that as a fallback or redundancy check; it's worse than checking the referrer.
If you were generating the IMG HTML on the fly from Markdown or something else, you could use a private-key hash strategy with a short-lived expiry time attached to the query string. Completely airtight, but it seems way over the top for what you're doing.
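For what it's worth, a sketch of that hash strategy (the secret, parameter names, and function names are invented for illustration):
<?php
define('IMAGE_URL_SECRET', 'change-me-to-something-long-and-random');

// When generating the IMG HTML:
function signedImageUrl($imageId, $ttl = 300) {
    $expires = time() + $ttl;
    $sig = hash_hmac('sha256', $imageId . '|' . $expires, IMAGE_URL_SECRET);
    return 'image_request.php?id=' . urlencode($imageId) . '&expires=' . $expires . '&sig=' . $sig;
}

// Inside image_request.php:
function signatureIsValid($imageId, $expires, $sig) {
    if ($expires < time()) {
        return false; // the link has expired
    }
    $expected = hash_hmac('sha256', $imageId . '|' . $expires, IMAGE_URL_SECRET);
    return $expected === $sig; // a constant-time comparison is preferable where available
}
?>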
Also, there is no "appropriate header" for lying to a client about the availability of a resource ;) Just send a 404.
My site automatically downloads something from another site whenever I try to open it. After my site opens, it tries to download something from this address:
google-sk.pch.com.tagged-com.superore.ru
Please help me understand what's going on.
Sounds like your site has been hacked. The site the address is pointing at is blocked in Firefox as containing malicious code.
If that is the case, you should take the site down, analyze what happened, and change all your access passwords.
Maybe this helps a bit: Google Webmaster Central: My Site's been hacked: Now what?
Your site may have been compromised. Check your .htaccess files and crucial template or index files for any unexpected code. You may also find a solution in restoring an archived version of your site.
You should immediately change your passwords and usernames. Use hard-to-guess usernames and passwords, consisting of many letters (varying case) and numbers.
My first guess is that you are the victim of a cross-site scripting hack. Someone has added content to your site that contains HTML or JavaScript tags, and when the content is displayed in a browser, it causes the browser to load more content from that site in Russia.
You should use htmlentities() when you echo any content that may have been contributed by users. This translates any characters that might be dangerous to output verbatim, such as < or >, into their HTML entity equivalents (&lt; and &gt;) so that they can't affect browsers and are safely output as literal characters.
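For example (assuming the pages are served as UTF-8; $userComment is just a placeholder variable):
// Escape user-contributed content before echoing it into the page
echo htmlentities($userComment, ENT_QUOTES, 'UTF-8');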
I would also search the database for any content that may have been contributed, that contains HTML or Javascript tags, and delete it.
Don't forget to check your database too!
It sounds like your site has been hacked.
First, change the password on your FTP access and review the security of your site, including database access.
Then go in and download what's on the site to a different area of your hard drive.
Compare this code against what you think should be there and remove any code you didn't create.
Also - as BlueRaja points out - check your database for corruption. If it has been compromised you'll probably have to restore it from backups.
Upload the corrected version (or just upload your backup).
Is it safe to create a back link with:
$backLink = htmlentities($_SERVER['HTTP_REFERER']);
or is there a better solution?
An easier way might be to do something like this:
<a href="javascript:history.back()">Go back</a>
That does not rely on the browser populating the Referer header, but instead does exactly the same thing as pressing the browser "Back" button.
This may be considered better since it actually goes back in the browser history, instead of adding the previous page to the browser history in the forward direction. It acts just as you would expect the Back button to act.
It's quite safe, as long as you check for its existence. In some browsers it can be turned off, and I'm not sure that it's mandatory for browsers anyhow. But the baseline is, you can't count on it existing. (RFC 2616 doesn't say the Referer header must exist.)
If you really need reverse navigation, perhaps you could instead use a session variable to save the previous (current really, but only update it after displaying the back-link) page visited.
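A rough sketch of that session approach (the session key name and fallback page are arbitrary):
<?php
session_start();

// Use the stored page for the back link, with a fallback for the first visit
$backLink = isset($_SESSION['previous_page']) ? $_SESSION['previous_page'] : 'index.php';

// ... render the page, using htmlspecialchars($backLink) in the link's href ...

// Update the stored page only after the back link has been generated
$_SESSION['previous_page'] = $_SERVER['REQUEST_URI'];
?>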
Given that:
* The referer header is optional
* Some security software will rewrite the referer header (e.g. to XXXX:XXXXXXXX or Advert For Product)
* Linking to the referer will, at best, duplicate the built-in functionality of the back button
* Users will often expect a link marked "back" to take them to the previous page in a sequence, not to the previous page they were on
No, it isn't safe. The dangers are not great, but the benefits are tiny.
It will work in some cases. However, you should be aware that the HTTP referer header is not guaranteed. User agents (browsers, search spiders, etc.) cannot be relied upon to send it at all, or to send a correct value. In addition, if a user browses directly to the page, no referer header will be present. Some internet security software products even strip out the HTTP referer for "security" reasons.
If you wish to use this solution, be sure to have a fallback in place such as not showing the back link, or linking to a default start page or something (it would depend on the situation this is to be used in).
An alternative solution might be to use JavaScript to call history.back(). This will use the browser's back/history function to return to the previous page the user was on.
I think Facebook uses a similar technique to redirect the user.
They use a GET variable called 'from'.
You must be careful with htmlentities because, without the right charset argument, it corrupts non-ASCII text. For example,
echo(htmlentities("Привет, друг!")); //Contains russian letters
is displayed as
Ïðèâåò,
äðóã!
Which is of course incorrect.
Every browser sends non-ASCII characters in URLs however it wants: Mozilla in Unicode, IE in the system's current charset (Windows-1251 in Russia).
So it might be useful to replace htmlentities with htmlspecialchars.
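Passing an explicit charset also avoids the mangling shown above (UTF-8 here is an assumption about the page's encoding):
// Both calls leave the Russian text intact when the page is served as UTF-8
echo htmlspecialchars("Привет, друг!", ENT_QUOTES, 'UTF-8');
echo htmlentities("Привет, друг!", ENT_QUOTES, 'UTF-8');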