OK, so on some of the pages on my site, I've included a foot.php file at the end so that when I make changes to it, it affects all pages on the site. On most pages this works perfectly, but on some pages it just cuts off, and no includes after it take effect. The weird thing is that only a portion of the file is included on the pages where this happens.
I thought it might be because of the number of includes I used, but I have pages with more includes that work just fine. Take this one, for instance:
http://www.kelvinshadewing.net/codeSquirrel5.php
Here, you can see the bottom gets cut off, and if you view the source code, the rest of what goes in that div is gone, yet the div itself is closed off properly. But then go here:
http://www.kelvinshadewing.net/sprTartii.php
You'll see that the full code is there, and the Disqus app is present as well. This issue has been going on since before I added Disqus, and it also happened when I'd been using includes in a different way to generate global content, so it's something about those pages in particular. It happens only with my Squirrel tutorials, and nothing else. I'm totally stumped and have no idea what's causing this. I've gone over my code a dozen times, and verified that every page uses the same PHP scripts.
As for the scripts themselves, it's just this:
<?php include "foot.php";
include "disqus.php"; ?>
The problem just disappeared, so I'm ruling it a server glitch. If anyone else is reading this, I suggest checking out Andrew's comment, because that code was nice to know.
Related
I have a WordPress site which I manage (I am not a developer).
I ran a PageSpeed test via https://developers.google.com/speed/pagespeed/
I got some issues like caching problems and so on, so I used several plugins to take care of them.
However, I'm now stuck with Optimize CSS Delivery problems.
So I thought I'd try to fix it myself by moving the problematic URLs to the end of the page; however, I can't figure out where these URLs are coming from, or which page is requesting them.
I'd appreciate any help with this.
What I do in similar situations: put some marker text inside the HTML output of each PHP file. E.g. in page.php, put "page-top" somewhere near the beginning of the file and "page-end" at the end; do the same in footer.php and so on. That way you quickly narrow down where the URLs are coming from.
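A rough sketch of that idea, using hypothetical page.php and footer.php files:

In page.php (names are only examples):

<!-- page-top -->
    ...page markup...
    <?php include 'footer.php'; ?>
<!-- page-end -->

In footer.php:

<!-- footer-top -->
    ...footer markup, stylesheet links, scripts...
<!-- footer-end -->

Then view the generated source in the browser: whichever pair of markers the problematic stylesheet URLs fall between tells you which file is emitting them.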
Every MediaWiki has a load.php.
If called without parameters it returns:
/* No modules requested. Max made me put this here */
As a curious programmer I wonder:
Why did he do this?
I am sure that in a big project like this there is a good reason for it. It looks to me like it would be bad to return an empty file to an AJAX query or something like that.
BTW: Normally it is called with parameters like this: load.php?debug=true&lang=de&modules=user.options&only=scripts&skin=modern&user=pi&*
This message comes from ResourceLoader.php. In the history of the file, using git blame, you can see the code was written by Roan Kattouw (RK) in this changeset. From the changeset comment:
Make load.php output a comment explaining what's going on when no modules were requested rather than outputting nothing. Max made me do this because he hates blank pages
So, your answer is, because Max hates blank pages, and if you want to know more, you should ask Roan. My guess would be that it's a debugging aid; rather than staring at a blank page wondering why it's blank, at least you know that you did something that caused a module loader request to load nothing...
As #svick points out, there is also a link to the code review, including discussion of whether it's a good idea to mention Max at all. Mentioning Max was seen as a possibility to partially close MediaWiki bug 20281, which notes that there aren't enough Easter Eggs in MediaWiki.
And that's why public repositories of open source software are cool :D
It is just there so you know what's going on.
If I browse to the load.php file of my MediaWiki installation in my web browser to check whether there are any errors, they may either be displayed or leave me a blank page.
A blank white page indicates a PHP error which isn't being printed to the screen.
But if I see a comment similar to /* No modules requested. Max made me put this here */, I know it's all right.
And that is the reason why that diff was needed.
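If you want to turn that observation into a quick sanity check, a small sketch could look like this (the wiki URL is only a placeholder):

<?php
// Rough health-check sketch: fetch load.php with no parameters and look for
// the known comment; anything else suggests a PHP error or an empty response.
$out = file_get_contents('https://example.org/w/load.php');
if ($out !== false && strpos($out, 'No modules requested') !== false) {
    echo "load.php responds normally\n";
} else {
    echo "unexpected or empty response; check the PHP error log\n";
}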
I have a very strange problem and I don't know what to do about it. My site seems to work just fine in all browsers other than Internet Explorer, so I've been trying to figure out why.
I've narrowed it down to a file that I'm including in my site; this file is a PHP class that has a number of different functions such as login handling, getters and setters, and so on.
I took all the PHP code out of my pages and the site rendered fine, so I added the PHP back in line by line and realised that it stopped working when I used this:
require_once 'classes/Membership2.php';
Does anyone know why some PHP code would be messing with the style of my website?
For more detail: I have a number of divs that are centered, and they all have rounded corners as well as shadows. By taking away the PHP I can see that IE loads the page properly, with no incompatibilities or anything like that.
Has anyone had a problem like this before? While I'm waiting for an answer or two I'll be removing functions and parts of the code until I can narrow it down. (I would post the code, but the file has a lot of lines.)
Thanks for the help.
Oh, and I'm testing on Internet Explorer 9; every other browser is the latest version or close enough.
Okay, so I've done some more digging into this. I've found that if I delete all the code in the class (all the functions) and leave just an empty class in the include file, it still doesn't work. In my view that means the functions aren't what's causing the problem. So I deleted EVERYTHING, so now the include points to a blank PHP file. This worked and the page rendered as it should, but obviously there is no functionality; I can't log in or anything like that. I decided to add a constructor instead of leaving the default one; this function does nothing but return true, and it made the site mess up again.
Does this info change anything? Also, I'm reiterating that I do not get this error on any browser but Internet Explorer 9 (I haven't tried any other IE version).
Thanks again for the help.
Okay, so I've solved the problem. At the start of my PHP class I had used
<!-- blah blah blah -->
forgetting that there is only PHP in this file and no HTML. So when I include the file it just outputs that comment into my page source and messes things up; I should have used the PHP commenting style.
Still, it's strange that EVERY browser other than IE just ignores this and goes about its business; even the site that #blankabout suggested didn't give me any error (although I assume that's because it's part of the included PHP file and not the HTML itself).
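To make the distinction concrete, here is a minimal sketch of the difference (the class name is taken from the question; the body is purely illustrative):

<!-- an HTML comment placed outside the PHP tags is plain output: it gets echoed
     into the page wherever the file is included, which is what tripped up IE -->
<?php
// A PHP comment stays on the server and never appears in the page source.
class Membership2
{
    public function __construct()
    {
        // constructor logic here
    }
}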
As #fajran says, save both outputs with "view source" in the browser and compare them to find the difference. To compare the outputs, use WinMerge or a similar tool. Once you know which text is generating the trouble, modify it inside the include file.
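If you'd rather script the comparison than eyeball it in a diff tool, a rough sketch (the saved file names are only placeholders) might be:

<?php
// Compare the page source saved with and without the include and report the
// first line that differs.
$a = file('with_include.html');
$b = file('without_include.html');
$max = max(count($a), count($b));
for ($i = 0; $i < $max; $i++) {
    $lineA = isset($a[$i]) ? $a[$i] : '';
    $lineB = isset($b[$i]) ? $b[$i] : '';
    if ($lineA !== $lineB) {
        echo 'First difference at line ' . ($i + 1) . ":\n" . $lineA . $lineB;
        break;
    }
}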
Given that your PHP, because it runs on the server, should never actually reach the browser, it may very well be some unterminated HTML or similar that is causing the problem. Perhaps the PHP is causing an unexpected break in the HTML.
I am honestly not sure where the issue lies, but here is my problem:
I have a single file: card.gif. When I check Firebug or Google PageSpeed, I see the file is requested twice during the page fetch: once under its normal file name and a second time with a random number appended (which does not change). Example:
card.gif
card.gif?1316720450953
I have scoured my actual source code and the image is only referenced once. It is not referenced in a CSS file. To be honest I have no idea what the issue is; one thought is that when I originally installed mod_pagespeed it appended IDs to each cached image for future overwrites, but I can't be certain.
Has anybody ever had this issue before?
In the end, Dagon's comments above led me to believe that tools like Firebug and PageSpeed may not always be correct. I do see two images being loaded in the timelines for both plugins, but it is very difficult for me to decipher anything beyond that. If another answer is provided contradicting this, I am more than happy to test that theory.
I've been doing some scraping with PHP and getting some strange results on a particular domain. For example, when I download this page:
http://pitchfork.com/reviews/tracks/
It works fine. However if I try to download this page:
http://pitchfork.com/reviews/tracks/1/
It returns an incomplete page, even though the content is exactly the same. All subsequent pages (tracks/2/, tracks/3/, etc) also return incomplete data.
It seems to be a problem with the way the URLs are formed during pagination. Most other sections on the site exhibit the same behaviour (the landing page works, but not subsequent pages). One exception is this section:
http://pitchfork.com/forkcast/
Where forkcast/2/ etc. work fine. This may be because it is only one directory deep, whereas most other sections are multiple directories deep.
I seem to have a grasp on WHAT is causing the problem, but not WHY or HOW it can be fixed.
Any ideas?
I have tried using file_get_contents() and cURL and both give the same result.
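A minimal sketch of the kind of cURL test involved, reporting the status code and the length of what comes back (the user-agent and gzip options are only guesses at things worth ruling out, not a confirmed fix):

<?php
// Fetch one of the problem pages and report the HTTP status and body length.
$url = 'http://pitchfork.com/reviews/tracks/1/';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_ENCODING, '');  // accept and decode gzip/deflate
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (compatible; scrape-test)');
$html = curl_exec($ch);
echo 'HTTP ' . curl_getinfo($ch, CURLINFO_HTTP_CODE) . ', length ' . strlen($html) . "\n";
curl_close($ch);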
Interestingly, on all the pages that do not work, the incomplete page is roughly 16,000 chars long. Is this a clue?
I have created a test page where you can see the difference:
http://fingerfy.com/test.php?url=http://pitchfork.com/reviews/tracks/
http://fingerfy.com/test.php?url=http://pitchfork.com/reviews/tracks/1/
It prints the strlen() and content of the downloaded page (plus it makes relative URLs absolute, so that the CSS is correct).
Any hints would be great!
UPDATE: Mowser, which optimizes pages for mobile, has no trouble with these pages (http://mowser.com/web/pitchfork.com/reviews/tracks/2/), so there must be a way to do this without it failing....
It looks like Pitchfork is running a CMS with "human" URLs. That would mean /reviews/tracks brings up a "homepage" with multiple postings listed, but /reviews/tracks/1 brings up only "review #1". It's possible they've configured the CMS to output only a fixed-length excerpt, or have a misconfigured output filter that chops off the individual post pages early.
I've tried fetching /tracks/1 through /tracks/6 using wget, and they all have different content which terminates at 16,097 bytes exactly, usually in the middle of a tag. So, it's not likely this is anything you can fix on your end, as it's the site itself sending bad data.