I'm offering large downloads of around 6.5 GB from my server, and I'm doing so using PHP and X-Sendfile.
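For reference, the download script is essentially the standard X-Sendfile pattern, something along these lines (simplified, assuming Apache's mod_xsendfile; the path and filename here are placeholders):

// Simplified sketch of the download handler (placeholder path and filename)
$file = '/var/files/archive.zip';
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="archive.zip"');
header('X-Sendfile: ' . $file);   // mod_xsendfile streams the file from here
exit;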
This had been working fine for months, but recently the downloads have been ending prematurely for users, anywhere between 3 GB and 5.5 GB in.
No errors are shown in the browsers (I've tried Chrome, Firefox and IE); they behave as if the download has completed, but the file size is smaller than it should be.
From my limited understanding, this usually happens when no data is sent for a certain period of time and the browser mistakes the stall for a completed download.
However I am struggling to find any specific information about how browsers behave in this regard.
I have contacted my host to see if they can help identify the problem at their end (perhaps the connection is being interrupted somewhere), but they insist it must be an issue with my server's configuration, even though nothing in the server's setup changed around the time this began.
Now I am at a complete loss as to how to investigate this. The point at which a download stops is completely random: different file sizes, different amounts of time taken, and no errors in the server logs or in the browser.
Can anyone offer any suggestions on how to investigate this, or has anyone had a similar problem before and found a fix?
I'm a bit of an amateur so I'm sure I've missed something.
I'm running Divi on WordPress. When I go to update a page, I get the "Your updates couldn't be saved" error. My WordPress site, as well as its cPanel, is also loading unusually slowly, which I think is related to the issue. After working on this for a bit, both my site and its cPanel will fail to load, giving me a "can't establish a secure connection to the server" error. The third symptom, which I can't make heads or tails of: when I click "update" in the page editor, my browser will often (but not always) launch another tab/pop-up displaying either a preview of the edits or the "Pages" page on the WP admin side. All of these issues are new (although I've had similar loading-speed issues in the past with this site).
Thinking it may be an overload on my server (which happened due to an attack a few months ago), I let it sit for a few days with no luck. Then, thinking it may be a caching issue on my end, I changed my DNS servers, cleared my browser cache, tried private browsing, used my phone, used different wifi and cellular networks. All to no avail. I briefly had slight luck using my phone as a hotspot, but it only temporarily improved the loading speeds.
I also tried disabling plugins. I made sure everything was up to date. No help.
I went into my wp-config.php file and increased the memory limit to 128M and the WP max memory limit to 256M. This helped briefly: I could update and save one page, but when I tried to change the next, I was back to square one. I've also increased the memory limits in my .htaccess file. I don't have access to my php.ini file (there are often delays reaching my host, so I'm trying to avoid relying on them when possible).
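For what it's worth, the lines I added to wp-config.php are the standard WordPress constants, something like this (placed above the "That's all, stop editing!" line):

// Raise the WordPress memory limits
define( 'WP_MEMORY_LIMIT', '128M' );
define( 'WP_MAX_MEMORY_LIMIT', '256M' );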
My last guess (which I have yet to implement) is to update my PHP. That said, I'm running 7.3.6 and had no issue updating the site a few days ago, so I'm not sure that's the problem, unless Divi's newest update has compatibility issues with PHP 7.3...
Any further ideas would be greatly appreciated! I'm partway through a cosmetic update (which, I know, should be done on a staging site, but sometimes best practices are best learnt through mistakes like this), so my site looks somewhat half-finished. Needless to say, I'm anxious to be able to edit it again.
Many thanks in advance
Whenever you try to save something, Divi makes a request through admin-ajax.php. It often happens that a security firewall detects that request as a threat (which it obviously is not), thus giving you the failed-save message. Can you ask your host to check which rules are triggered and whitelist that action? It can also come from plugins like Wordfence, so make sure to whitelist it there too.
You can also attach that layout as JSON here; I can test it on my own server, and if I can save changes, we should be on the right path.
I have a website built in Symfony 2.8 running on Apache on Ubuntu Server. I had some broken image links on the website, and whenever somebody went to an offending page, it would peg both the processor and memory, and the browser would keep trying to fetch the missing assets for 60 seconds or more.
I've fixed the missing images, but is there something I can do on the server end to quit searching for missing assets when they're clearly not there?
The way it stands now, it would be very easy for an attacker to simply request content from my website that isn't there and bring my server to its knees.
--Edit--
What I'm wondering is whether there is a way to stop my server from taking a resource hit when somebody tries to access a non-existent path/file directly, e.g. if they went straight to example.com/images/non-existent.jpg. Currently my CPU will spike to 100% on one core and RAM usage will slowly climb until the request times out somewhere around 1 minute. Just a few more requests like that would use up all of my resources.
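To illustrate the kind of short-circuit I have in mind, something like this hypothetical guard at the top of the front controller (web/app.php in Symfony 2.8) could answer requests for missing static-looking files before the Symfony kernel boots at all; I don't know whether this is the right approach, which is partly what I'm asking:

// Hypothetical guard: return a bare 404 for missing static-looking files
// before booting the Symfony kernel at all.
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
if (preg_match('/\.(jpe?g|png|gif|css|js|ico)$/i', $path) && !file_exists(__DIR__ . $path)) {
    http_response_code(404);
    exit;
}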
--Edit 2--
I have discovered since I posted this that the issue isn't limited to images. Any path that should return a 404 error behaves the same way, e.g. example.com/this/is/not/a/real/path will just hang and finally time out.
Check if the file exists before including it in your HTML:
// Only output the <img> tag when the file actually exists on disk
if (file_exists(__DIR__ . '/img.jpg')) {
    echo '<img src="/img.jpg" alt="">';
}
http://php.net/manual/en/function.file-exists.php
Problem
When you load a page on my site, oftentimes one (or several) of the images (JPG files that I have saved from Lightroom and/or Photoshop) will not appear. It instead looks like a broken link (the ALT description appears) with no image. A hard reload of the browser solves the problem (i.e. all images load properly after a hard reload).
Error Message
Chrome displays an "ERR_CONTENT_LENGTH_MISMATCH" warning for every image it does not load. (Sometimes the image will flash quickly before turning into what looks like a dead image.)
Setup
Running the latest version of WordPress (4.2.2) on a shared host. The site is served over SSL (HTTPS), if that matters. Images are located in an image upload folder on the host (nothing complex like ImageMagick, etc.).
My Troubleshooting
I have replicated the issue from multiple locations using various ISPs on various machines (both Mac & PC) and with various browsers (Chrome & Safari) some of which are not using any ad-blockers.
What I've tried is the following:
I asked the host if there was an issue on the server side. They claim no.
I've tried resetting the functions.php file. No impact.
I've disabled all plug-ins. No impact.
I've hardkeyed in the meta charset as UTF-8. No impact.
Checked if I'm using Gzip. I am not.
Enabled Wordpress Cache plugin. No impact.
Cleared .htaccess of all non-necessary redirects & commands. No impact.
Replaced wp-admin and wp-includes folders from fresh install. No impact.
Deleted Wordpress & Reinstalled from a Backup. No dice.
I've put source code from pages that have this issue into a test.html file and the images seem to load up fine doing that.
My Thoughts & Questions
The images are 100-200 KB each, and sometimes there are a fair number of them on the page. Is something timing out, and then once I hard reload, everything shows up because the timeout isn't tripped? That is the best random guess I can come up with without understanding the issue properly.
Any ideas of things I can try? Should I delete the whole database and start again? Everything I know about computers is self-taught and server issues are not a strong point for me. Even if you don't know what it might be, could someone explain what a content length mismatch is in general terms?
Thanks much!
When you request data from a web server, it responds first with some information about the data (the HTTP headers) and then with the data itself. One of these pieces of information, an HTTP header, is called Content-Length. It tells the client how much data it should expect to receive from the server. When your browser fetches an image, the server's response (very simplified) looks like:
Content-Length: 100000
< the image, 100000 bytes of data >
The client knows the request is complete when it has received the amount of data stated in Content-Length. Until it has received, in this case, 100,000 bytes (roughly 100 KB), it considers the image not to be done loading.
If the server breaks off the response before the client has received the declared amount of data, or if the client receives more data than the header announced, the client will throw some sort of error, assume the data is corrupted/unusable, and dispose of it. How this is handled can vary between browsers.
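To make that concrete, a script that serves a file itself would typically derive the header from the file's real size on disk, roughly like this (a minimal sketch with a placeholder path; for plain static uploads the web server normally does this for you):

// Minimal sketch: the declared Content-Length must match the bytes actually sent
$path = '/var/www/uploads/photo.jpg';   // placeholder path
header('Content-Type: image/jpeg');
header('Content-Length: ' . filesize($path));
readfile($path);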
How did you upload the images to your website? I myself have encountered this problem in a situation where the file's supposed size was stored in the database and used to set the Content-Length header, but the size in the DB wasn't correct for the file. HOWEVER, I know that WordPress does not store file sizes in the database; media uploads are simply represented by a URL.
This could also happen if the web server runs out of resources and can no longer fulfill your requests; you said you had lots of images per page. If you are on a really lousy shared hosting plan, it may be the case that the host imposes limits, or that the server simply can't handle the traffic of all the sites it hosts.
I wanted to circle back on this in case someone else is experiencing this problem. It appears that there is some type of glitch between HTTPS and image retrieval that was causing the problem. While I don't understand WHY that is, I converted my site from SSL/HTTPS to simple HTTP (which I was able to do as it doesn't require encryption) and it appears all images load as they should.
If someone understands the "why", I'd love to know what the issue actually is. Luckily, I was able to come up with a workaround. So, while this doesn't answer my question, it does provide context on what is causing the problem and documents my common-sense workaround.
You might see this problem with a shared hosting service. Free bandwidth is like free speech, not free beer: resource-limiting policies are invoked during traffic spikes.
A distributed system architecture solves this by inserting a front-end CDN tier (e.g. CloudFlare). CDNs cache your static resources and can vastly reduce the load on your host. In fact, for completely static sites, the host could be shut down entirely.
There are other advantages to CDNs, like attack detection, free SSL (not beer) and overall improved performance and security compared to shared hosting alone.
Many CDNs are free (as in speech). You could also upgrade to private hosting, but $ and you still might want a front-end tier.
We have developed a cURL function in our application. This cURL function mainly maps data over from one site into the form fields of our application.
This function had been working fine and had been in use for more than 2 months. Yesterday, it broke down: the data from that website can no longer be mapped over. We are trying to find out what the problem is. When we troubleshoot, it shows that there is a response timeout issue.
To make sure there was nothing wrong with our code and that our server was performing properly, we duplicated this instance to another server and tried the function there. It worked perfectly.
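For reference, the call is essentially a plain cURL fetch along these lines (the URL and timeout values are placeholders, not our real ones):

// Rough sketch of the fetch; the URL is a placeholder
$ch = curl_init('https://example.com/source-page');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10); // fail fast if the remote host never answers
curl_setopt($ch, CURLOPT_TIMEOUT, 30);        // overall cap on the request
$html = curl_exec($ch);
if ($html === false) {
    error_log('cURL error: ' . curl_error($ch)); // e.g. "Operation timed out"
}
curl_close($ch);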
Wondering if anyone out there has faced such a problem?
What could possibly be causing this issue?
When we use cURL, will the site owner know that we are pulling their data to map into our application? If so, is there a way we can overcome this?
Could it be that the owner has blocked our server's IP address? Could that be why the function works well on our other server but not on the original one?
Appreciate your help on this.
Thank you,
Your problem description is far too generic to determine a specific cause. Most likely, however, there is a deliberate block in place.
For example, a firewall rule on the other end, or on your end, would cause all traffic to be dropped, thus causing the timeout. There could also be an ordinary network outage between the two servers, but that's unlikely.
Yes, they will see it in their Apache (or IIS) logs regularly. No, you cannot hide from the server logs; the server logs every successful request. You either get the data, or you stay stealthy. Not both.
Yes, the webserver logs will contain the IP doing all the requests. Adding a DROP rule to the firewall is then a trivial task.
I have applied such a firewall rule to bandwidth and/or data leechers many times over the past few years, although usually I prefer the more resilient deny from 1.2.3.4 approach in an Apache vhost/.htaccess. Usually, if you use someone else's facilities, it's nice to ask for proper permission; it lessens the chance of getting blocked this way.
I faced a similar problem some time ago.
My server's IP had been blocked by the website owner.
The requests can be seen in the server logs. Google Analytics, however, won't see them, as cURL doesn't execute JavaScript.
Try to ping the destination server from the one executing the cURL.
Some advice, with a rough sketch after this list:
Use a browser-like User-Agent header to mask your request.
If you insist on using this server, you can run the requests through a proxy.
Put a sleep() between requests.
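A minimal PHP sketch of those three suggestions (the URL, User-Agent string and proxy address are placeholders):

// Hypothetical example: browser-like User-Agent, optional proxy, and a pause
$ch = curl_init('https://example.com/source-page');   // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'); // look like a browser
curl_setopt($ch, CURLOPT_PROXY, '203.0.113.10:3128');  // placeholder proxy
$data = curl_exec($ch);
curl_close($ch);
sleep(5); // pause before the next request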
For the last two days we've been going over this problem for several hours at a time, trying to figure out what's going on, and we can't find any clues.
Here's what's happening: we have a Flash application that allows people to place orders. Users configure a product, and an image of that product is generated by Flash on the fly and presented to the user. When satisfied, they can send an order to the server. A byte array of the image and some other variables are sent to the server, which processes the order and generates a PDF with a summary of the order and the image of the product. The order script then sends everything back to the browser.
This is all going really well, except for Safari on OSX 10.4. Occasionally the order comes through but most of the time Safari hangs. When looking at the Activity window in Safari it states that it's waiting for the order script and that it's "0 bytes of ?".
We thought there was something wrong with the server so we've tried several other servers but the problem persists.
Initially we used a simple POST to process the order, but in an effort to solve this problem we resorted to more sophisticated methods such as Flash remoting via AMFPHP. This didn't solve the problem either.
We used Charles to monitor the HTTP traffic and figure out whether the requests are leaving the browser at all, but the strange thing is that when Charles is running, we can't reproduce the problem.
I hope someone has a clue about what's happening, because we can't figure it out.
Just a wild guess:
Is getting the PDF back the result of a single HTTP request that both sends all the needed data to the server and receives the PDF in response? If not, this could be a timing issue: are you sure all the data is available on the server at the moment the PDF is requested? The number of allowed parallel connections to a website is not the same for all browser brands/versions, and maybe that could influence the likelihood of a 'clash' happening.
Easy test: introduce a delay between sending the data to the server and retrieving the PDF, and see if that has any effect.