I have a really big problem with my website: http://ap.v11.pl/sklep/
It loads really slowly and I don't know how to fix it.
I'm getting some weird errors in the Chrome console: http://scr.hu/0an/xq5bz
The errors are random; for example, I get an error that something can't be found, even though the resource exists and the paths are correct.
My .htaccess:
http://pastebin.com/ewZZBLFg
The site runs on Zend Framework 2.
Thank you for any advice.
My hypotheses are:
You are running Ghostery as a Chrome plugin, or something similar, so your browser blocks a couple of your scripts such as the adstat thing and Google Analytics.
Your web server has a problem sending the correct MIME type for the JavaScript files. Check out this posting on the "resource interpreted as a ..." error message.
It may also be that only one frontend is not working correctly, which would explain why you don't get the errors all the time.
In general, your site is packed with scripts and images. The first page makes more than 250 requests and weighs almost 4 MB. That is a lot, and it takes time. Amazon's front page has half the number of requests and weighs something like 300 KB.
You should check whether you can reduce the number of requests; the YSlow plugin may give you some good advice here. Can you reduce the image sizes and the number of images (CSS sprites)?
You should also check whether you have to deliver all the images through your regular web server or whether you can use a lightweight alternative. Are you using NGINX? AFAIK it has good options for performance tuning.
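For example, a few .htaccess directives along these lines address the MIME-type point and cut repeat-visit requests. This is only a rough sketch, assuming Apache with mod_mime, mod_deflate, and mod_expires enabled; adjust it to your setup:

    # Make sure JavaScript and CSS are sent with the expected MIME types (mod_mime)
    AddType application/javascript .js
    AddType text/css .css

    # Compress text-based responses (mod_deflate)
    AddOutputFilterByType DEFLATE text/html text/css application/javascript

    # Let browsers cache static assets so repeat views make far fewer requests (mod_expires)
    ExpiresActive On
    ExpiresByType image/png "access plus 1 month"
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType text/css "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"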
Edit: As a starting point: http://gtmetrix.com/reports/www.ap.v11.pl/fBGKScZ6
For the last 2-3 days, some of our users have been reporting this error: random pages on our site output what looks like raw binary data. None of the developers can reproduce the issue, which comes and goes randomly.
All users reporting the issue have so far been using IE11. The server is Apache 2.4.16 with PHP 5.3.29. There are NO errors being logged by PHP or by Apache related to the issue.
Oddly, the HTTP header is embedded in the middle of the data. I can't even fathom a reason that would ever happen. One would expect that if the browser were having issues rendering the content, it wouldn't be making further requests for more resources to the server, and there wouldn't be another HTTP header to show.
We really don't know where to start with this one; we can't tell whether it's server-, PHP-code-, or browser-related. Is anyone aware of bugs in Apache or IE that would cause this?
Attached is a screenshot one of the users sent.
I found the bug. The PHP code was generating a 302 redirect but then also outputting its normal page content. Under certain high-load conditions, outputting the page could take 3-5 seconds. For some still undetermined reason, the output from the first (redirected) request was being munged onto the beginning of the new request, before that request's headers, turning the entire thing into garbage.
If anyone knows why THAT behavior - the content of one terminated request being prepended onto a new, active request - is happening, feel free to answer the question so I can fully mark it solved.
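For reference, the fix was essentially to stop script execution as soon as the redirect is issued. A minimal sketch, assuming the redirect is sent with header() (the target URL here is just a placeholder):

    <?php
    // Send the 302 and stop immediately; without the exit, PHP keeps running
    // and appends the full page body after the redirect response.
    header('Location: /some-target-page', true, 302);
    exit;

    // ...page-rendering code below must never execute once we have redirected.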
I hope someone here can help me.
I have started getting alerts from Google about an increasing number of 404s.
Every one of these has the string "%5C%22" in the URL rather than the ASCII characters.
This issue comes and goes every few months. It's a WordPress site, with only premium plugins.
The best/nearest answer I have found is here:
It seems that Google is looking at code that is not designed for it to look at. Indeed, Stack Overflow lists a similar issue: Ajax used for image loading causes 404 errors.
But there appears to be no real cause identified.
For example, https://rapidbi.com/swotanalysistemplates/%5C%22/ is listed in Google as being linked from https://rapidbi.com/swotanalysistemplates/, but when I go to that page there is no such link.
Sometimes the %5C%22 is in the middle of a URL as well as at the end. So the theory that it's an escaped \" sequence coming from the PHP makes sense - but how do we solve this?
Could it be that Google is reading the PHP instructions?
Should this be an issue that Google's coders fix rather than us poor webmasters,
or is there a server-side solution to this?
I have hundreds of these errors, and they are increasing daily!
Should I just ignore these Google-reported errors?
Should I mark them as fixed (they are not, as they never existed in the first place)?
Is there a fix? It's a WordPress-based site; should I be changing robots.txt to block something? If so, what?
Do we know of any plugins that might create this issue?
Thank you in advance.
Mike
First, according to Google itself, 404s don't affect the ranking of your website.
But of course we want to fix this kind of error.
Second, in Google Webmaster Tools you can see where Google saw/crawled this link. I suggest that you check where Google picked up this URL and look in your code for anything that adds /%5C%22/ to the URL.
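As a stop-gap while you track down the source, you could also redirect the malformed URLs back to their clean versions server-side. This is only a rough sketch of the idea, not a drop-in fix: the hook and the matching logic are assumptions, and it could live in a small mu-plugin.

    <?php
    // Hypothetical mu-plugin sketch: strip the stray \" fragment (%5C%22) from the
    // request and 301-redirect to the clean URL so crawlers stop logging 404s.
    add_action('init', function () {
        $uri = $_SERVER['REQUEST_URI'];
        if (stripos($uri, '%5C%22') === false && strpos($uri, '\\"') === false) {
            return; // nothing to fix on this request
        }
        $clean = str_ireplace(array('%5C%22', '\\"'), '', $uri);
        wp_redirect(home_url($clean), 301);
        exit;
    });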
I help a friend with some development on her WordPress-powered site - http://fulltwist.net/
We installed a new theme about three months ago, and since then she has been getting feedback from fewer than 5% of visitors saying the site doesn't render properly.
All of them say that, when viewing a single post, only the logo and the comment box show on the page - nothing else. Many say they use their phone to view it because they can't on the desktop.
I've tried loads of different computers, but I can't replicate the problem. I've asked them to clear their cache (hard refresh), and that hasn't helped.
I want to dig into the code, but I just don't know what I'm even looking for. Can different browsers render PHP differently? I thought it was purely server-side.
Anyone have any idea what could be causing this to render incorrectly? What diagnostic tools or approaches should I take?
Any pointers in the right direction would be much appreciated.
Browsers don't render PHP; they render the HTML sent back from the server.
Look for any commonality among those with problems: browser, browser version, operating system. Does the website require a plugin that some may not have or may block? Does it render correctly with JavaScript disabled (and are the complaining users disabling JavaScript, and does your JavaScript run error-free on their browser versions)?
You can use a service like Gomez to test your page on multiple browsers if you don't see commonality among the users.
I am honestly not sure where the issue lies, but here is my problem:
I have a single file: card.gif. When I check Firebug or Google PageSpeed, I see that the file is requested twice during the page fetch: once under its normal file name and a second time with a random number appended (which does not change). Example:
card.gif
card.gif?1316720450953
I have scoured my actual source code, and the image is only referenced once; it is not referenced in a CSS file either. To be honest, I have no idea what the issue is. One thought is that when I originally installed mod_pagespeed, it appended IDs to each cached image for future overwrites, but I can't be certain.
Has anybody ever had this issue before?
In the end, Dagon's comments above led me to believe that tools like Firebug and PageSpeed may not always be correct. I do see two images being loaded in the timelines of both plugins, but it is very difficult for me to decipher anything beyond that. If another answer is provided contradicting this, I am more than happy to test that theory.
I was looking for a way to block old browsers from accessing the contents of a page, because the page isn't compatible with old browsers like IE 6.0, and to return a message saying that the browser is outdated and that an upgrade is needed to see the webpage.
I know a bit of PHP, and writing a little script that serves this purpose isn't hard. I was just about to start doing it when a huge question popped into my mind:
If I write a PHP script that blocks browsers based on their name and version, is it possible that it will block some search engine spiders as well?
I was thinking about doing the browser identification via this function: http://php.net/manual/en/function.get-browser.php
A crawler will probably be identified as a crawler, but could a crawler also supply some kind of browser name and version?
If nobody has tested this before or played with these kinds of functions, I will probably not risk it, or I will make a test folder inside a website to see whether the pages there get indexed; if they don't, I will abandon the idea or modify it until it works. But to save myself the trouble, I figured it would be best to ask around, especially since I couldn't find this information after a lot of searching.
No, it shouldn't affect any of the major crawlers. get_browser() relies on the User-Agent string sent with the request, so it shouldn't be a problem for crawlers, which use custom user-agent strings (e.g., Google's spiders have "Google" in their names).
Now, I personally think it's a bit unfriendly to completely block a website for someone using IE. I'd just put a red banner at the top saying "This site might not function correctly. Please update your browser or get a new one," or something to that effect.
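A minimal sketch of that banner approach, assuming browscap is configured on the server so get_browser() actually returns data (the version cutoff is just an example):

    <?php
    // Detect old IE from the User-Agent and show a warning instead of blocking.
    $browser = get_browser(null, true); // needs the browscap ini to be configured

    $isOutdatedIE = is_array($browser)
        && isset($browser['browser'], $browser['majorver'])
        && $browser['browser'] === 'IE'
        && (int) $browser['majorver'] <= 6;

    if ($isOutdatedIE) {
        echo '<div style="background:#c00;color:#fff;padding:10px;">'
           . 'This site might not function correctly in your browser. '
           . 'Please update your browser or get a new one.'
           . '</div>';
    }
    // Crawlers send their own User-Agent strings, so they will not match this check.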