Problem
When you load a page on my site, oftentimes one (or several) of the images (JPG files that I have saved from Lightroom and/or Photoshop) will not appear. Instead it looks like a broken link (the ALT description appears) but no image. A hard reload of the browser solves the problem (i.e., all images load properly after a hard reload).
Error Message
Chrome displays an "ERR_CONTENT_LENGTH_MISMATCH" warning for every image it fails to load. (Sometimes the image will flash briefly before turning into what looks like a dead image.)
Setup
Running the latest version of WordPress (4.2.2) on a shared host. The site is SSL (HTTPS), if that matters. Images are located in an image upload folder (nothing complex like ImageMagick, etc.) on the host.
My Troubleshooting
I have replicated the issue from multiple locations using various ISPs, on various machines (both Mac and PC), and with various browsers (Chrome and Safari), some of which are not running any ad blockers.
What I've tried is the following:
I asked the host if there was an issue on the server side. They claim no.
I've tried resetting the functions.php file. No impact.
I've disabled all plugins. No impact.
I've hard-coded the meta charset as UTF-8. No impact.
Checked if I'm using gzip. I am not.
Enabled a WordPress cache plugin. No impact.
Cleared .htaccess of all unnecessary redirects and commands. No impact.
Replaced the wp-admin and wp-includes folders from a fresh install. No impact.
Deleted WordPress and reinstalled from a backup. No dice.
I've put the source code from pages that have this issue into a test.html file, and the images seem to load fine that way.
My Thoughts & Questions
The images are 100-200 KB each, and sometimes there are a fair number of them on a page. Is something timing out, and then once I hard reload, everything shows up because the timeout isn't tripped? That is the best guess I can come up with without understanding the issue properly.
Any ideas of things I can try? Should I delete the whole database and start again? Everything I know about computers is self-taught, and server issues are not a strong point for me. Even if you don't know what the cause might be, could someone explain what a content-length mismatch is in general terms?
Thanks much!
When you request data from a web server, it responds first with some information about the data (HTTP headers) and then with the data itself. One of these headers is called Content-Length, and it tells the client how much data to expect from the server. When your browser fetches an image, the server's response looks (very simplified) like:
Content-Length: 100000
< the image, 100000 bytes of data >
The client knows the request is complete once it has received the amount of data stated in Content-Length. Until it receives, in this case, 100 KB (100,000 bytes), it considers the image not to be done loading.
If the server breaks off the request before the client has received the announced amount of data, or if the client receives more data than the header promised, the client throws some sort of error, assumes the data to be corrupted/unusable, and disposes of it. How this is handled can vary between browsers.
How did you upload the images to your website? I have personally encountered this problem in a situation where a file's supposed size was stored in the database and used to set the Content-Length header, and the size in the DB wasn't correct for the file. However, WordPress does not store file sizes in the database; media uploads are simply represented by a URL.
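As a minimal sketch of that failure mode (the path and the stored size here are hypothetical), a PHP script that sets Content-Length from stale metadata instead of from the file itself triggers exactly this error:

<?php
// Hypothetical: $sizeFromDb was recorded at upload time and is now stale.
$path = 'uploads/photo.jpg';
$sizeFromDb = 100000;

header('Content-Type: image/jpeg');
header('Content-Length: ' . $sizeFromDb); // the browser now expects exactly 100000 bytes
readfile($path);                          // if filesize($path) differs, the browser
                                          // reports ERR_CONTENT_LENGTH_MISMATCH

The fix in that scenario is to send header('Content-Length: ' . filesize($path)); so the header always matches the bytes actually on disk.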
This could also happen if the web server runs out of resources and can no longer fulfill your requests; you said you have a lot of images per page. If you are on a really lousy shared hosting plan, the host may impose limits, or the server may simply be unable to handle the traffic of all the sites it hosts.
I wanted to circle back on this in case someone else is experiencing this problem. It appears that there is some type of glitch between HTTPS and image retrieval that was causing the problem. While I don't understand WHY, I converted my site from SSL/HTTPS to plain HTTP (which I was able to do since it doesn't require encryption), and now all images load as they should.
If someone understands the "why", I'd love to know what the issue actually is. Luckily, I was able to come up with a workaround. So, while this doesn't answer my question, it does provide context on what is causing the problem, along with my common-sense workaround.
You might see this problem with a shared hosting service. Free bandwidth is like free speech, not free beer. Resource outage policies are invoked during traffic spikes.
A distributed system architecture solves this by inserting a front-end CDN tier (e.g. Cloudflare). CDNs cache your static resources and can vastly reduce the load on your host; in fact, for completely static sites the host can be shut down entirely.
There are other advantages to CDNs, like attack detection, free SSL (not beer) and overall improved performance and security compared to shared hosting alone.
Many CDNs are free (as in speech). You could also upgrade to private hosting, but that costs money, and you might still want a front-end tier anyway.
Related
Trying to get the loading time of a WordPress website (with three.js) - https://igotchamedia.com/arvr - down from 6 seconds to under 1.5s; the "Waiting" and "Receiving" parts of the page load take the bulk of the time. Caching plugins did not help.
Any help much appreciated!
Your times are slow for the initial GET request for the root document of the site; that metric is called Time to First Byte (TTFB). You have a redirect from the non-HTTPS site to the HTTPS site, and that is part of the slowness.
You can get rid of the redirects depending on how you implement SSL on your site and in WordPress: either via a redirect in .htaccess (not the best approach), or simply by making sure your WordPress site and home address settings are HTTPS and all URLs in the database are HTTPS, so that no redirects are needed.
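For the second approach, a minimal sketch (the domain is a placeholder) is to pin both addresses in wp-config.php so WordPress itself never issues a redirect:

// wp-config.php -- hypothetical domain; makes WordPress generate HTTPS
// URLs directly instead of redirecting from http://
define('WP_HOME',    'https://example.com');
define('WP_SITEURL', 'https://example.com');

You will still need to fix any hard-coded http:// URLs stored in the database.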
But overall slow TTFB is a server-lag issue. If you are on a shared host, TTFB can be slow because of all the other users on the same server. Your overall load time of 4 seconds is not bad for a very image-heavy site with a fairly high number of HTTP requests: https://gtmetrix.com/reports/igotchamedia.com/GLQwMRRs
You can talk to the web host about the TTFB issues. But GoDaddy shared hosting is well known to be slow.
If you want to get under a few seconds, don't depend on a caching plugin to do all the work: 1) get a better server and use a CDN; 2) reduce the weight of the images and get the total weight of the site under 1 MB; 3) use a theme that requires fewer scripts and stylesheets, which drive up the number of HTTP requests; and 4) keep your external requests, like third-party fonts, to a minimum.
The vast majority of the time, a WordPress site (or any website) is slow because of too much rich media (video and images) hosted or used on the site.
Try going back in and optimizing your website's media files for the web. Save them at a smaller weight, and use the right file formats for the job. Also, CSS3 techniques are powerful these days, and in many cases you don't even need to call so many image files, e.g. for backgrounds, gradients, shadows, menus, buttons, etc. If you have lots and lots of media hosted on your site, consider using a CDN (content delivery network).
Another good practice: go in and make sure your website is coded properly. Call external .js files in the footer, keep the HTML semantic, etc. Test the WordPress plugins you're using. Make sure WordPress is up to date and has the correct memory limit set. Check your developer console for any JavaScript errors or conflicts bogging the site down. Check your WordPress database for any corruption.
Lastly, check your host or server environment. Make sure you're using the correct version of PHP, that you have enough space, that everything is efficient, etc.
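For the memory-limit point mentioned above, WordPress reads a constant from wp-config.php; a minimal sketch, where the value is an assumption you should size to your host's own PHP limits:

// wp-config.php -- hypothetical value; must not exceed the host's PHP memory_limit
define('WP_MEMORY_LIMIT', '256M');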
Also, check out these additional site performance optimization tools.
https://www.pingdom.com/
https://developers.google.com/speed/pagespeed/
Hope this helps, g'luck!
I have a website built in Symfony 2.8, running on Apache on Ubuntu Server. I had some broken image links on the website, and whenever somebody went to the offending page, it would peg both the processor and the memory, and the browser would keep trying to fetch the missing assets for 60-plus seconds.
I've fixed the missing images, but is there something I can do on the server end to stop it searching for missing assets when they're clearly not there?
The way it stands now, it would be very easy for an attacker to simply request content from my website that isn't there and bring my server to its knees.
--Edit--
What I'm wondering is whether there is a way to stop my server from taking a resource hit if somebody tries to access a non-existent path/file directly, i.e. if they went straight to example.com/images/non-existent.jpg. Currently, my CPU spikes to 100% on one core and RAM usage slowly climbs until the request times out somewhere around 1 minute. Just a few more requests like that would use up all of my resources.
--Edit 2--
I have discovered since posting this that the problem isn't limited to images. Any path that should return a 404 error behaves the same way, e.g. example.com/this/is/not/a/real/path will just hang and finally time out.
Check if the file exists before including it in your HTML:
if (file_exists('img.jpg')) {
    // the file is actually on disk, so it is safe to reference it in the markup
    echo '<img src="img.jpg" alt="Example image">';
}
http://php.net/manual/en/function.file-exists.php
An e-shop has been developed using PrestaShop and deployed to three servers.
The first two are on Amazon and should have identical settings.
Server 1:
http://be-pure.com/en/women/3-slim-y-tank.html
Server 2:
http://52.77.216.83/en/women/3-slim-y-tank.html
The last one is just local hosting:
Server 3:
http://internal001.zizsoft.com/be_pure/en/women/3-slim-y-tank.html
The problem is that server 1 loads very slowly compared to the other two servers, even though its performance should be the best of the three.
It looks as if server 1 hasn't cached the files, but in fact all of them have caching enabled: Smarty cache is turned on, using the file system, with recompile-on-modify, and the file system cache is turned on as well.
Given that the code and server settings are the same (both Amazon servers have identical settings, and the localhost one is a different server that should, if anything, be slower than server 1):
1) How can I debug/check whether the files are already being served from cache?
(The cache files are located in cache/smarty and cache/cachefs on the server.)
2) And what makes the load time so long for server 1? Treating it simply as a PHP site, are there any ways to check why it is slow?
Thanks a lot for helping
Refer to the comments; I misinterpreted the data I was looking at earlier. It appears the server can only handle maybe 5-10 requests at a time, so requests get blocked until the earlier ones finish loading. You likely just need to update your web server's configuration to handle more concurrent requests.
There is also a lot of JS data in the page. It is 318 KB just to load the page, and the browser has to make many requests for JS/CSS files before it even gets to any of the HTML. So it's 318 KB plus all the external JS/CSS it needs to fetch (wow!); that's something like 4 MB of material just to load one page.
Check the modification timestamps on the files generated by your caching system to verify that caching is working.
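A minimal sketch of that check in PHP, assuming the cache paths mentioned in the question:

<?php
// List Smarty cache files with their last-modified times. If the timestamps
// change on every page load, the cache is being rebuilt rather than reused.
foreach (glob('cache/smarty/*') as $file) {
    echo $file, ' => ', date('Y-m-d H:i:s', filemtime($file)), PHP_EOL;
}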
Edit:
Since there is now a bounty out: please review the comment discussion we had. There is an issue where a traceroute doesn't make it to the server destination, and I suspect that is related to the slowness, but that type of network issue is over my head.
I am answering your 2nd question.
I don't know the exact cause of the slow loading, but we faced the same issue in one of our projects last month. The server was on Amazon.
One of our instances was very slow. We tried many solutions, but none of them worked. Then we found a solution which looks very unfair, but it worked for us:
we simply restarted the slow instance, and that fixed it.
I hope this solution will work for you as well.
All the Best :)
Answer to question 1:
You can use Chrome's developer tools (F12), then the Network tab. It will show you all the files being downloaded; in the Size column you can see whether each file is being loaded from cache or not.
Answer to question 2:
You can use the YSlow plugin. It will help you a lot, but it's obvious your site has too many files: CSS, JS, and lots of images. Try merging your CSS and JS files, and use CSS sprites for your images.
Hope you can solve your problem
Nobody can say for certain what the problem with the servers is, but you can find it with profiling. If you have the budget, I highly recommend buying a profiling tool: Tideways, Blackfire, or New Relic. I used New Relic, and it was really helpful for finding bottlenecks. If you don't have the budget for a profiling tool, you can use the PHP profiling extension Xdebug. It is helpful too, though reading Xdebug's profiling output can be a bit difficult. Its setup is really easy, however, and you can do partial profiling (profile only the URLs you want) with an Xdebug profile trigger instead of profiling all requests.
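As a sketch of that trigger-based setup, assuming Xdebug 2.x (current at the time) and a hypothetical output directory, php.ini would contain something like:

; profile only requests that carry the XDEBUG_PROFILE trigger
xdebug.profiler_enable = 0
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_dir = /tmp/xdebug

Then append ?XDEBUG_PROFILE=1 to the URL you want profiled and open the resulting cachegrind file in a viewer such as KCachegrind or Webgrind.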
Use this solid Google tool to get insight into your page performance:
https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fbe-pure.com%2Fen%2Fwomen%2F3-slim-y-tank.html
Also check whether server 1's configuration matches server 2's.
This tool shows me lots of suggestions for improving your website.
I have a page that uploads a file to my server, where it then gets copied to a permanent directory via move_uploaded_file. This all seems to work great, except that in a real-life scenario I will be expecting much larger files than I have successfully sent up so far.
I have already tackled the timeout for the file upload by changing the connection timeout in my site settings in IIS, so the file continues to upload for up to six hours ( -_- ). But this is where I run into my current problem: it might actually take six hours!
After getting the upload process past 10% or so (on a 300 MB file), I noticed that the file continues to push up, but my upload rate seems to be falling off; that is, I observed faster speeds when I started the transfer than I am seeing halfway through it. The numbers here aren't necessarily relevant, as I know that my upload link (still 2 Mbps while I'm uploading) is capable of pushing faster than it is, and the server on the other end is on fiber.
I wonder if anyone has encountered this before, and if so, whether you've found a workaround. Any help appreciated. Thanks.
You should not be using HTTP for this task. You may have noticed that all the "file locker" services (and others that involve uploading files, such as Apple's online music service) provide you with an "uploader" program rather than relying on the browser. There are reasons for this.
First off, the overhead of the transfer encoding is large. You take your (presumably binary) data and Base64 encode it; that's 33% overhead, since every 3 bytes of input become 4 bytes of output. So if it would take four hours with HTTP, it would take only three with a binary protocol; and that's disregarding the chunked-transfer overhead, so the reality is probably more severe.
Second, there's no way to "resume" an upload in HTTP. If your connection is broken, you'll either have to write application-specific code to handle resumption, or start all over.
Third, HTTP servers are not designed for super-long-lived connections: they usually have a small, finite pool of workers to service client requests (which normally last seconds at the outset), and they often have smallish limits on the size of request data (2 GB is common, and PHP by default allows only a few MB).
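As a quick way to see the PHP-side ceilings mentioned above, a small sketch:

<?php
// Print the request-size and time limits PHP will enforce on uploads
// (defaults are typically just a few megabytes and tens of seconds).
echo 'upload_max_filesize = ', ini_get('upload_max_filesize'), PHP_EOL;
echo 'post_max_size       = ', ini_get('post_max_size'), PHP_EOL;
echo 'max_execution_time  = ', ini_get('max_execution_time'), PHP_EOL;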
I strongly recommend using a file transfer protocol (such as FTP) to transfer files. You don't have to give out a single username/password pair to everyone: you can have a gatekeeper that integrates with whatever authentication system you already have in place. FTP-over-TLS also exists and is relatively mature.
There is a fairly good summary of the differences between the two protocols here. Note that, given your circumstances, you gain nothing from any of the listed advantages of HTTP.
Don't feel limited to FTP - rsync is a great protocol for transferring files as well, especially if you only change part of the file (it even does binary deltas!). git can also efficiently transport large blobs over secure connections or even HTTP, if you insist on using that.
I have a website. It's my first website with Zend Framework, but I think it's well written. Generation time is about 0.9 s now; I'll get it down to something like 0.2 s eventually, but leave that for now. When you press any link on the website, it takes about 1.5-2 s before the web browser starts loading the page, and then 0.15 s to show it. So if execution time is 0.9 s, where are the other 1.1 s? Ping is about 13 ms. The website address is http://zgarnijlicke.pl
Edit:
Strange. My second domain, http://lottek.eu, works fine. Look at http://lottek.eu/picostreamer: it isn't lagging like the zgarnijlicke.pl domain.
Edit 2:
There is a problem with Zend Framework. I set up an action without rendering a view (layout disabled too), and it works as fast as the server can manage. I'll open a new question for it.
Here's a webpagetest.org report for your site: http://www.webpagetest.org/result/100721_1P0Y/
If you view the waterfall graph for the first view, you'll see that the browser gets your HTML source at around the 1.2-second mark, but is first able to render your page only just after 4 seconds. What happens in between is the downloading of your three JavaScript files and two CSS files. So this is where you want to start. Some suggestions:
Consider using a free CDN for jquery.js instead of serving it from your own server, e.g. Google's: http://code.google.com/apis/ajaxlibs/ . This way, users are more likely to already have it cached, Google will serve it from a location geographically closer to the user, and (I think) in compressed format.
For jquery.corner.js and jquery.media.js, consider merging them into one file and serving them compressed (the Apache module mod_deflate makes this very easy to do); see the sketch after this list.
Same for your CSS files - consider merging them into one file and serving them compressed.
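A naive build-step sketch of that merge, where the output path and the js/ directory are assumptions:

<?php
// Concatenate the two jQuery plugins into one file so the page makes a
// single request instead of two; mod_deflate then compresses the result.
$merged = file_get_contents('js/jquery.corner.js') . "\n"
        . file_get_contents('js/jquery.media.js');
file_put_contents('js/plugins.js', $merged);

The same approach works for the CSS files.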
Those will give you some quick wins. However there are other things you can improve:
Add width and height attributes to your image tags. Without these, some browsers halt rendering while they download the images so that they know how much space the images will occupy. None of your image tags have these attributes (a sketch follows after this list).
Make sure you're using the right image format for the job. Your banner.png image is over 300 KB, which is far too large. I converted it to a JPEG image (80% quality) and it came out at 30 KB.
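For the width/height point, a sketch that derives the attributes from the file itself (the path is hypothetical):

<?php
// getimagesize() returns the pixel dimensions, letting the browser reserve
// layout space before the image has finished downloading.
list($width, $height) = getimagesize('images/banner.jpg');
echo '<img src="images/banner.jpg" width="' . $width . '" height="' . $height . '" alt="Banner">';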
As for the execution time, 0.9 seconds seems quite high. Are you using APC or a similar opcode cache? Is the page doing any heavy database work?
Try putting some timer code in your PHP that measures how long it takes to generate the page content. This way you can confirm or rule out server problems.
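A minimal timing sketch along those lines:

<?php
// Measure server-side generation time, independent of network transfer.
$start = microtime(true);

// ... generate and output the page content here ...

error_log(sprintf('Page generated in %.3f seconds', microtime(true) - $start));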
You might also use network tools like ping and traceroute to see if your problem is caused by network latency.
A quick test with wget here gives an overall time of 1.5 s to transfer one of the pages, with an actual download time of 0.2 s, so 1.3 s of overhead. The pause occurs before the transfer starts, so that's a server-side problem.
Is that site on a virtual server? It's possible that if the underlying physical server is heavily loaded, your image is getting swapped out or otherwise CPU-starved and takes about 1 second to become responsive again.
Perhaps it's an internal resource issue: are you connecting to a DB, especially a remote one? Even if some or most of the pages aren't DB-driven, the overhead of connecting to a DB could be causing this slowdown, and the image then gets swapped out/delayed again as there's little further activity to keep it active.
It could even be something as silly as Apache being configured with 'IdentityCheck' on, though that's unlikely, as it would slow down all requests. I'm not seeing any slowdown on the requests for .css/.js files from your server when viewed in HTTPFox. Interestingly, requesting the .css/.js files via wget returns a '500 Internal Server Error'.
I found it. It's a problem with ZF, because when I made a hello.php page containing only
hello world
without any <?php ?> tags, the script still took 0.4 s to complete.