I am pretty new to PHP, so bear with me :)
I am trying to create a web page that uses PHP's get_browser function to direct the user to a separate HTML page.
My website is http://www.danabohne.com and I just realized it is barely visible in IE. Unless there's a different way to get around this, I want to make a separate static HTML site that can be seen by IE users.
Any feedback would be extremely helpful!
<?php
$useragent = $_SERVER['HTTP_USER_AGENT'];
// strpos() returns 0 for a match at the start of the string, so compare against false explicitly
if (strpos($useragent, "MSIE 6.0") !== false) {
    header("Location: http://google.com");
    exit; // stop execution so nothing is output after the redirect header
}
?>
You can add more if conditions as needed.
However, as John mentioned in the comments, I would advise you to create a separate stylesheet and a fallback design rather than redirecting to another page.
Firstly, it's important to note that browser detection on the server is not recommended, because browsers can provide false user agent details, or none at all. (I know of some firewall products that routinely strip this kind of data from the HTTP headers.)
Secondly, the get_browser function only works if you have a valid browscap.ini file. If you're having trouble getting the function to work, check that you have this ini file and that it is up to date. (Also note that you will need to keep it updated whenever new browsers or browser versions are released.)
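For reference, a minimal sketch of using get_browser once browscap.ini is configured (the IE fallback branch is just an illustration):
<?php
// get_browser(null, true) reads $_SERVER['HTTP_USER_AGENT'] and returns an array
$browser = get_browser(null, true);
if ($browser === false) {
    // browscap.ini is missing or not configured in php.ini
} elseif ($browser['browser'] === 'IE') {
    // serve the fallback stylesheet or page here
}
?>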
Finally, most (virtually all) IE-specific display issues can be resolved without having to create a separate page for IE.
Specifically in your case, I can see what the problem is straight away when looking at the HTML source code for your page:
The problem is the <pre></pre> in the first line of your page, immediately before the <!DOCTYPE>. I assume this is a leftover from some debugging code that wasn't removed properly.
This <pre></pre> is going to cause IE to fall into "quirks mode": because IE encounters the <pre> before the doctype, it treats the page as having no doctype at all, and without a doctype IE renders the page in quirks mode.
Quirks mode makes IE's rendering engine display the page completely differently (it's basically an IE5-backward-compatibility mode), so it's no wonder your page looks rubbish in IE.
This behaviour is the same in all versions of IE.
If you have other IE-specific problems, it would be better to try to fix them on the page, as there are a lot of tools and hacks available to make IE work better.
Hope that helps.
I'm in the process of (slowly) learning how to make my websites more secure. I was checking out D&D Beyond, and noticed a few things I've never seen before, and I would like to learn more about.
Portions of the source code don't show up when you View the Source.
It's hard to explain. I tried to explain it in a different post, and I got a ton of snarky remarks. I'm telling you, I know what I saw. I would like to know how this is possible and how I can replicate it.
I typically write in PHP/jQuery, so I'd primarily like to learn more using those languages.
Example:
You can create a Character using their Character Builder, then view your Character Sheet. The main portion of your character's stats are enclosed in a very large parent div: ".character_sheet"
If you MANUALLY save your Character Sheet to your Desktop, you can see the HTML for this section. If you inspect this section in Firefox, you can also see the data. However, if you press CTRL+U while in the browser, the HTML in this section does not appear. It also will not appear if you try curl/fopen/file_get_contents.
Additionally, images are not visible by normal means.
For Example: I am aware of how to disable right-clicking on a website, but if someone wanted to take my images, all they'd have to do is open my source code and look at the image url and save it from there.
On the D&D Beyond site, I can bring up Firefox's web inspector where an Image SHOULD be, take a look at the CSS, and... nothing. No link to an image, where one should be. I don't know how they're getting images to appear without css/html. I'd be very interested to know how this is done.
If anyone has any insight/guesses/etc and can point me in the right direction to learn some more, I'd really appreciate it!
Server-side code such as PHP is always hidden from visitors (unless you have a security vulnerability of some sort).
Client-side code such as HTML, JavaScript and CSS is always visible to the visitor. Even if you can't see it immediately in the DOM, it will be hiding there somewhere.
The most likely scenario is that it is hidden within an embedded .js or .css file, which would look similar to the following:
<script src="scripts.js"></script>
<link rel="stylesheet" type="text/css" href="theme.css">
HTML can be written to the page through JavaScript; such HTML will never show in the raw page source (Ctrl+U, curl and the like), although it does appear in the live DOM inspector. (HTML produced by a PHP echo, by contrast, shows up in the source as well.) HTML can also be 'hidden' through use of <iframe> tags and HTML imports.
JavaScript has a wide array of ways in which it can be obscured / malformed, so it can be hard to track down. You may see some strange, 'unreadable' code in the DOM / .js files, which in turn could be outputting the HTML itself.
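As a quick illustration (a minimal PHP sketch, not necessarily D&D Beyond's actual technique), compare how a server-rendered element and a client-injected element look to view-source:
<?php
// served markup: visible in Ctrl+U, curl and file_get_contents
echo '<div>Rendered on the server - visible in the page source</div>';
// client-injected markup: visible only in the live DOM inspector
echo '<script>
  var el = document.createElement("div");
  el.textContent = "Injected by JavaScript - absent from view-source";
  document.body.appendChild(el);
</script>';
?>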
Please consider the following points:
All client-side resources are viewable. You can make them harder to read by obfuscating the JavaScript, but it's better to keep as much of your logic as possible on the server side.
If your app is a public website that will be indexed, bear in mind what search engines can handle: some crawlers do not execute JavaScript, so pages whose content is rendered only by JavaScript may not be indexed properly.
You can display images without <img> tags by using the CSS background-image property.
There are some useful libraries for making your code harder to read, such as the Closure Compiler Service, JSFuck and various JS packers, although it's better to understand these techniques yourself rather than apply them blindly, and note that some of them will make your code size larger.
There is no such thing as a truly blank page source: a page that renders content must contain at least a <script> tag. If you do see an apparently blank source, the server may be refusing to serve the page as the top-level window, while still serving it when it is embedded in an iframe or requested with specific headers.
Finally, you can make your server and client sides cooperate :) to get a better and more secure result.
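One hedged sketch of that server/client cooperation, applied to the hidden-images part of your question (the file path and session check below are hypothetical): serve images through a PHP script so the real file URL never appears in the HTML or CSS.
<?php
// serve-image.php - returns the image only to logged-in sessions (hypothetical check)
session_start();
if (empty($_SESSION['user_id'])) {
    http_response_code(403);
    exit;
}
$path = __DIR__ . '/protected/portrait.png'; // hypothetical file outside the public docroot
header('Content-Type: image/png');
header('Content-Length: ' . filesize($path));
readfile($path);
The page would then reference <img src="serve-image.php">, and fetching that URL without a valid session returns a 403 instead of the image.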
I assume the answer is "it is not possible, because PHP is a server-side language, not a client-side language", but I would like someone more expert than me to state this and possibly list the workarounds...
Question: is it at all possible to have a PHP function executed only when the user clicks or performs some other kind of action (e.g. a mouse-over) on an HTML page, without using any JavaScript?
(P.S. As a workaround I considered accessing an intermediate page containing the PHP code to be executed when the client action occurs, and then redirecting as needed, but this is not straightforward as far as passing on the results of the PHP code goes.)
In general no. The only workaround regarding mouse-overs I can possibly think of would be a small 1x1 transparent background image that is generated by a PHP script and that is only shown if a user hovers over a certain element.
html:
<div id="mouseover_php">execute php</div>
css:
#mouseover_php:hover {
background-image:url(/path/to/php-script)
}
php:
<?php
// your code
// disable caching so the request fires on every hover, and set the content type
header('Cache-Control: no-store, no-cache, must-revalidate');
header('Content-Type: image/gif');
// output a 1x1 transparent GIF
echo base64_decode('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7');
But as all modern browsers use pre-fetching and caching (although this can be influenced by setting the Cache-Control header), I certainly would not rely on this as an unquestionable indicator of a mouse-over event. So this would be, if anything, a very unclean hack.
Regarding clicks: Here the only possible way is to load an intermediate page, just as you proposed it. As far as I am concerned there is no way to achieve this without AJAX.
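If you do use an intermediate page, a minimal sketch of passing the result back (file names hypothetical): the link points at a PHP handler, which runs your code and then redirects back with the result in the query string.
<?php
// handler.php - executed when the user clicks the link on page.php
$result = date('H:i:s'); // stand-in for "your code"
header('Location: page.php?result=' . urlencode($result));
exit;
The originating page links to it with <a href="handler.php">execute php</a> and reads $_GET['result'] after the round trip.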
This is not possible without making a new request.
You must use a link to send the user to a new page (or the same page -- anything as long as a new request is made), or you must use something like AJAX.
There is actually one very hack-ish way I can think of. It's not exactly pretty, but it should work.
You could use an iframe as the target for a link. Basically instead of a link opening in the same or a new window, it would open in a hidden iframe.
Untested, but in theory:
<iframe name="testframe" id="testframe" style="display:none"></iframe>
<a href="/path/to/php-script" target="testframe">Test</a>
Edit: After some rough testing, it looks like Chrome will not obey this. IE9 will to some extent, though, it seems.
I have never understood why some people say making custom css for each browser is a bad thing. To keep my page size down and download times fast it makes perfect sense to me to make a custom css for the major browsers (especially IE in its many different forms), and a general catch all css for everything else.
If you want to send out a bloated, huge, Swiss army knife of the css world, for all situations then go right ahead I'm not going to stop you.
Fast detection of the browser is important when doing this. Loading a JavaScript file to detect the browser seems slow, so I would prefer to use PHP to detect the browser and send out the appropriate CSS. Or at least a general browser-specific CSS, then use JavaScript to load a more detailed version.
But I've read article after article about why this is a bad thing. The main reason given in each of these articles is that the user agent can be faked. Or someone is using Firefox but the server thinks they're using IE7, so it sends out the wrong CSS file.
As a developer/designer of web apps, why is this my problem? If you want to use Firefox but tell my server you're using Safari or IE*, and get a crappy-looking page, why is it my problem?
And don't throw that whole "if the user can't see your site right they'll never come back" argument, or anything similar, at me. A normal user isn't going to be doing this. It's only going to be people who know how to do this, and they will know what's wrong when my site looks crappy.
This is similar to looking at my site on an old Apple II (I have no clue how), and yelling at me because everything looks green.
So is there a good reason, not a personal preference, why I shouldn't use PHP to detect the browser and send out customized CSS files?
I do this mostly for the different versions of IE. It just seems like for some sites, adding the "if IE6" and "if IE7" parts doubles or triples the size of the CSS file.
Typically when a user intentionally fakes the user agent string, it is because something is not viewable in the user's browser that should be. For example, some sites may restrict users to IE or Firefox, but the user is using Iceweasel on Debian. Iceweasel is just Firefox renamed for trademark reasons (there are a few other changes as well), so there is no reason the site should not work.
Realize that this happens because of (bad) browser detection, not despite it. I would say you don't need to be terribly concerned about this issue. Further, if you can just make your site reasonably cross-browser compatible, it won't matter at all. If you really want to use browser-specific CSS, and you don't want to do so all in one CSS file, don't let a fake user agent stop you.
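If you do go down that road, a crude sketch of user-agent-based stylesheet selection in PHP (the file names are hypothetical, and remember the User-Agent header can lie):
<?php
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
if (strpos($ua, 'MSIE 6') !== false) {
    $css = 'ie6.css';
} elseif (strpos($ua, 'MSIE 7') !== false) {
    $css = 'ie7.css';
} else {
    $css = 'main.css'; // the catch-all stylesheet for everything else
}
echo '<link rel="stylesheet" type="text/css" href="' . $css . '">';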
As long as the only thing you're doing is changing style sheets, there is no valid reason as far as I can tell. If you're attempting to deliver custom security measures by browser, then you'll have issues.
Not sure about PHP, but in Rails it is normal, dead-simple practice to provide different CSS files and layouts based on the user agent, particularly when you consider that your site is just as likely to be accessed by any of the myriad of available mobile devices as by the most popular desktop browsers (currently Firefox). Even serving custom MIME types, if need be, is dead simple.
IMO, not doing so is pure laziness on the coder's part, but then not all sites are developed by professional teams with styling gurus at hand, and outside Rails it might not be so simple. Sorry, I haven't a clue about PHP, so this may not be an appropriate reply.
In my opinion, starting with normalize.css and a base style sheet, then overriding the base styles as needed, usually works well, along with making sure you set appropriate fallbacks. If you really need more, a few media queries and feature detection can go a long way.
One reason you shouldn't vary things by browser is that major search engines like Google and Yahoo prohibit cloaking, i.e. serving different content to their crawlers than to ordinary visitors. Googlebot can detect differing CSS and HTML, and you may get bad search positioning. Additionally, if you use any advertising services, you may be in breach of their contract by displaying varying content.
I have a very strange problem and I don't know what to do about it. My site seems to work just fine in all browsers other than Internet Explorer, so I've been trying to figure out why.
I've narrowed it down to a file that I'm including in my site. This file is a PHP class with a number of different functions: login, getters and setters, and so on.
I took all the PHP code out of my pages and the site renders fine, so I added the PHP back in line by line and realised that it stopped working when I used this:
require_once 'classes/Membership2.php';
Does anyone know why some PHP code would be messing with the style of my website?
For more detail on the matter: I have a number of divs that are centered; they all have curved edges as well as shadows. By taking away the PHP I can see that IE is otherwise loading the page properly, with no incompatibilities or anything like that.
Has anyone had a problem like this before? While I'm waiting for an answer or two I'll be removing functions and parts of the code until I can narrow it down. (I would give code, but the file has a lot of lines.)
Thanks for the help.
Oh yeah, and I'm testing on Internet Explorer 9; every other browser is the latest version or close enough.
Okay, so I've done some more digging into this. I've found that if I delete all the code in the class (all the functions) and leave just an empty class in the include file, it still doesn't work. In my view that means the functions aren't what's causing the problem. So I deleted EVERYTHING, leaving the include pointing at a blank PHP file. This worked and the page rendered as it should, but obviously there is no functionality; I can't log in or anything like that. I then decided to add a constructor instead of leaving the default; this function does nothing but return true, and it made the site mess up again.
Does this info change anything? Also, I'm reiterating that I do not get this problem in any browser but Internet Explorer 9 (I haven't tried any other IE version).
Thanks again for the help.
Okay, so I've solved the problem. At the start of my PHP class I had used
<!-- blah blah blah -->
forgetting that there is only PHP in this document and no HTML. So when I include the file, it just outputs that comment into my source code and messes things up; I should have used the PHP commenting style.
It's still strange that every browser other than IE just ignores this and goes about its business; even the site that @blankabout suggested didn't give me any error (although I assume that's because the comment is part of the included PHP file and not the HTML itself).
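For anyone hitting the same thing, a minimal sketch of what went wrong (the class name is from the question, the body is hypothetical). The broken include looked like this, and the HTML comment was output verbatim before the <!DOCTYPE>, pushing IE into quirks mode:
<!-- blah blah blah -->
<?php
class Membership2
{
    // ...
}
Moving the comment inside the PHP tags (// or /* ... */ style) produces no output at all, so the doctype stays first and IE renders in standards mode.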
As @fajran says, save both outputs with "view source" in the browser and compare them to find the difference (use WinMerge or a similar tool). Once you know which text is generating the trouble, fix it inside the include file.
Given that your PHP, because it runs on the server, should never actually reach the browser, it may very well be some unterminated HTML or similar that is causing the problem. Perhaps the PHP is causing an unexpected break in the HTML.
I was looking for a way to block old browsers from accessing the contents of a page, because the page isn't compatible with old browsers like IE 6.0, and to return a message saying that the browser is outdated and that an upgrade is needed to see the page.
I know a bit of PHP, and writing a little script that serves this purpose isn't hard, but just as I was about to start, a huge question popped up in my mind:
If I write a PHP script that blocks browsers based on their name and version, is it possible that it may block some search engine spiders or something?
I was thinking about doing the browser identification via this function: http://php.net/manual/en/function.get-browser.php
A crawler will probably be identified as a crawler, but is it possible that a crawler supplies some kind of browser name and version?
If nobody has tested this before, I will probably not risk it. Alternatively, I could make a test folder inside a website to see if the pages there get indexed, and if they don't, abandon the idea or modify it until it works. But to save myself the trouble I figured it would be best to ask around, since I couldn't find this info after a lot of searching.
No, it shouldn't affect any of the major crawlers. get_browser() relies on the User-Agent string sent with the request, and crawlers use custom user-agent strings (e.g. Google's spiders have "Googlebot" in theirs), so they won't be mistaken for an outdated browser.
Now, I personally think it's a bit unfriendly to completely block a website for someone on IE. I'd just put a red banner at the top saying "This site might not function correctly. Please update your browser or get a new one", or something to that effect.
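A minimal sketch of that banner approach (the version check is a rough assumption): detect old IE from the User-Agent and warn instead of blocking, so crawlers and everyone else still get the content.
<?php
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
// crude check for IE 6 and below; crawler user agents won't match this
if (preg_match('/MSIE [1-6]\./', $ua) === 1) {
    echo '<div style="background:#c00;color:#fff;padding:8px">'
       . 'This site might not function correctly in your browser. Please update it.'
       . '</div>';
}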