I am having the following problem:
I am running a BIG memory process, but I have divided the memory load into smaller chunks, so there is no CPU time-out issue.
On the server I am creating .xml files of around 100 KB each, and 100+ of them will be created.
Now the main problem is that the browser shows a response time-out, and IE (just above the status bar) shows a message asking to download the .php file.
Meanwhile, in the backend (server side) the process is still running and continuously creating .xml files in incremental order, so there is no issue there.
I have the following php.ini configuration:
max_execution_time = 10000 ; Maximum execution time of each script, in seconds
max_input_time = 10000 ; Maximum amount of time each script may spend parsing request data
memory_limit = 2000M ; Maximum amount of memory a script may consume (128MB)
; Maximum allowed size for uploaded files.
upload_max_filesize = 2000M
I am viewing my site in IE, and I am using ZSCE with PHP 5.3.
Can anybody point me in the right direction on this issue?
Edit:
Uploading an image of the time-out, where the browser asks to download the .php file.
Edit 2:
Let me briefly explain my execution flow:
I have one PHP file with objects of class hierarchies which will start to execute Function1() from each class hierarchy.
I have a class file.
First, let's say, Function1() is executed, which contains the logic for creating the XML files in chunks.
Second, let's say, Function2() is executed, which displays the output generated by Function1().
All of this is done in a class-hierarchy manner, so I can't terminate the execution of Function1() in between until it has completed, and only after that is Function2() called.
Edit 3:
This is especially for @hakre.
You asked some cross-questions and I agree with some of your points, but let me describe the issue in more detail.
First, I was loading 100+ MB of XML files at a time; that's why memory on my local setup was exhausted, everything on the machine froze, and CPU time was consuming most of the resources.
I then divided these big XML files into smaller ones (meaning I now load a single XML file at a time and unload it after use). This saved me from the memory overload and CPU issues on the local setup.
Now my backend process runs with no CPU or memory issue, but the issue is the browser timeout. I even tried cURL, but with my current structure it does not seem to fit because of my class hierarchy. I have a set of classes in a hierarchy; they all execute their Process functions first and then they all execute their Output functions. So until the Process functions have finished, the Output functions do not come into the picture, and that's why the browser shows a timeout.
I also followed the instructions suggested by @vortex and had a little success, but not what I am looking for. The reason I could not implement cURL is that my Process function creates all the required XML files in one go, so it takes too much time before anything can be output to the browser. As the Process function takes that much time, no output can be sent to the client until it has completed.
cURL Output:
URL....: myurl
Code...: 200 (0 redirect(s) in 0 secs)
Content: text/html Size: -1 (Own: 433) Filetime: -1
Time...: 60.437 Start # 60.437 (DNS: 0 Connect: 0.016 Request: 0.016)
Speed..: Down: 7 (avg.) Up: 0 (avg.)
Curl...: v7.20.0
Contents of test.txt file
* About to connect() to mylocalhost port 80 (#0)
* Trying 127.0.0.1... * connected
* Connected to mylocalhost (127.0.0.1) port 80 (#0)
> GET myurl HTTP/1.1
Host: mylocalhost
Accept: */*
< HTTP/1.1 200 OK
< Date: Tue, 06 Aug 2013 10:01:36 GMT
< Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8o
< X-Powered-By: PHP/5.3.9-ZS5.6.0 ZendServer
< Set-Cookie: ZDEDebuggerPresent=php,phtml,php3; path=/
< Cache-Control: private
< Transfer-Encoding: chunked
< Content-Type: text/html
<
* Connection #0 to host mylocalhost left intact
* Closing connection #0
Disclaimer: An answer to this question was chosen based on the first small success I had with it. @hakre's solution is also feasible when this type of issue occurs, and his answer is the most detailed for anyone looking for more background on this kind of problem. But as of now, no answer has completely fixed my issue, only partially.
Assuming you have made all the server-side modifications to dodge a server timeout [I saw pretty much everything explained above], in order to dodge the browser timeout it is crucial that you do something like this:
<?php
set_time_limit(0);          // no PHP execution time limit
error_reporting(E_ALL);
ob_implicit_flush(TRUE);    // flush output automatically after every echo/print
ob_end_flush();             // close the top-level output buffer so output reaches the client
I can tell you from experience that Internet Explorer doesn't have any issues as long as you output some content to it every now and then. I run a 30 GB database update every day [it takes around 2-4 hours] and Opera seems to be the only browser that ignores the content output.
If you don't set "ob_implicit_flush" you need to do an "ob_flush()" after every piece of content.
References
ob_implicit_flush
ob_flush
If you don't use ob_implicit_flush at the top of your script as I wrote earlier, you need to do something like:
<?php
echo 'dummy text or execution stats';
ob_flush();
within your execution loop
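Putting the above together, a minimal sketch (doChunk() is a placeholder for your own XML-creation step, and the chunk count is illustrative):
<?php
set_time_limit(0);           // no PHP execution time limit
ob_implicit_flush(true);     // flush automatically after every piece of output
while (ob_get_level() > 0) {
    ob_end_flush();          // close any buffers opened by output_buffering or a framework
}

for ($i = 1; $i <= 100; $i++) {
    doChunk($i);                    // placeholder: create one ~100 KB XML file
    echo "chunk $i done<br>\n";     // a little output now and then keeps the browser waiting
}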
1. I am running BIG memory process but have divided memory load into smaller chunks so no CPU time out issue.
Now that's a wild guess. How did you find out it was a CPU time-out issue in the first place? Did you even? If yes, what does your test show now? If not, how do you now test that this is not a time-out issue?
Although you state there won't be a certain issue, you don't prove it, and many questions are still open. That invites guessing, which is counter-productive for troubleshooting (which is what you are doing here).
What you write here just means that you wrote code to chunk memory; however, that is not a test for CPU time-out issues. Writing the code is one part, testing it is the other. Don't mix the two, and don't draw wild assumptions. Issues are established by testing, otherwise they didn't happen.
So much for your first point, just to show you that when troubleshooting you should look for facts (monitor, test, profile, step-debug), not run on assumptions. This is crucial, otherwise you look in the wrong places and ask the wrong questions.
From what you describe about how the client (browser) behaves, this is not a time-out issue per se. The problem you've got is that the gap between the header response and the body response is taking too long for your browser's taste. One browser assumes a time-out (because such a boundary value has been exceeded, which looks more correct to me), and the other browser assumes something is coming, so why not save it.
So what you have here is merely a processing issue. Please consult the manual of your web browsers (HTTP clients) for the configuration values you can change to alter this behavior. E.g. monitor with a curl request on the command line how long the request actually takes, then configure your browser not to time out when connecting to that server within the amount of time you just measured. For example, if you're using Internet Explorer: http://www.ehow.com/how_6186601_change-internet-timeout-options.html or if you're using Mozilla Firefox: http://forums.mozillazine.org/viewtopic.php?f=7&t=102322&start=0
As you didn't show any code on the server side, I assume you want to solve this problem with client settings. Curl will help you measure the number of seconds such a request takes. Use the -v (verbose) switch to obtain detailed information about the request.
In case you don't want to solve this on the client, curl will still help you measure important data and easily reproduce any underlying server-related timing issue. So you should go for curl on the command line in any case, especially as looking into the response headers might reveal what triggers the (again) esoteric Internet Explorer behavior. Again, the -v switch reveals your request and response headers.
If you like to automate such tests with a PHP script, it's also possible with the PHP Curl Extension. This has been outlined in:
Php - Debugging Curl
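For example, a small sketch along those lines with the PHP cURL extension (the URL is a placeholder):
<?php
$ch = curl_init('http://mylocalhost/myurl');          // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);       // capture the body instead of printing it
curl_setopt($ch, CURLOPT_HEADER, true);               // include response headers in the result
curl_setopt($ch, CURLOPT_TIMEOUT, 0);                 // no client-side time limit
$response = curl_exec($ch);

$info = curl_getinfo($ch);
printf("HTTP %d, total time: %.3f s\n", $info['http_code'], $info['total_time']);
curl_close($ch);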
The problem is with your web server, not the browser.
If you're using Apache, you need to adjust the Timeout value in httpd.conf or in your virtual hosts config.
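For example, in httpd.conf (the value is illustrative; set it above how long your script actually runs):
# httpd.conf
Timeout 600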
You have 3 pages:
Process - creates the XML files and then updates a database value saying that the process is done
A PHP page that returns {true} or {false} based on the process-completion value in the database (a minimal sketch of such a page follows below)
An AJAX front end, polling page 2 every few seconds to check whether the process is done or not
Long Polling
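A minimal sketch of page 2, the status page (the database credentials, table and column names here are made up for illustration):
<?php
// status.php - polled by the AJAX front end every few seconds
$pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');   // placeholder credentials
$stmt = $pdo->query("SELECT is_done FROM process_status WHERE id = 1"); // hypothetical table
$done = (bool) $stmt->fetchColumn();

header('Content-Type: application/json');
echo json_encode(array('done' => $done));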
I have had this issue several times while reading a large CSV file and putting it into a database. I solved it by dividing the reading-and-inserting process into smaller parts: I created a new table to log how much data had been read and inserted, and on the next run the page reloads itself and starts from that position. You can do the same by creating one XML file per attempt, then reloading the page and starting from the next one. This way the memory used by the browser is refreshed.
Hope it will help.
Is it possible to send some output to the browser from the script while it's still processing, even white space? If so, do it; it should reset the timeout counter.
If that's not possible, you have to increase the timeout of IE in the registry:
HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings
You need ReceiveTimeout; if it's not there, create it as a DWORD and set the value in milliseconds.
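For example, a .reg file that sets a 10-minute timeout (600000 ms = 0x000927C0):
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings]
"ReceiveTimeout"=dword:000927c0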
What is a "CPU time out issue"?
The right way to solve the problem is to run the heavy stuff asynchronously, in a separate session group (not the webserver process tree).
Try including set_time_limit(0); in your PHP script.
The following links might help you:
http://php.net/manual/en/function.set-time-limit.php
http://php.net/manual/en/function.ignore-user-abort.php
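A minimal sketch combining the two, so the job keeps running even if the browser gives up:
<?php
set_time_limit(0);        // remove the PHP execution time limit
ignore_user_abort(true);  // keep running even if the client disconnects

// ... long-running XML generation goes here ...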
Related
I am calling filemtime() from a PHP file executed by POST from a JavaScript/HTML app. It returns the same time stamp for a separate test HTML file every two seconds even when I edit the test file with a text editor and I can see its DTM change in the local file system.
If I reload the entire app (Ctrl+F5), the timestamp reported stays the same. At times (once after 4 hours) the time stamp changes, but I don't know what makes this happen.
The PHP part of my code looks like this:
clearstatcache(true, $FileArg);
$R = filemtime($FileArg);
if ($R === false)
    echo "error: file not found";
else
    echo $R;
This code is called by synchronous Ajax, given only its PHP filename, using setInterval every 2 seconds.
Windows 10 Home, Apache 2.4.33 running locally for HTTP access, PHP 7.0.30 .
ADDED:
The behavior is the same in Firefox, Chrome, Opera, and Edge.
The results are being cached: http://php.net/manual/en/function.filemtime.php
Note: The results of this function are cached. See clearstatcache() for more details.
It almost sounds like Windows is doing some write caching...
stat() on the other hand has an additional note:
Note: time resolution may differ from one file system to another.
Maybe worth checking stat output.
Edit:
Maybe it's a bug, or Windows not playing nice, but you could also do a shell_exec with the Windows command showing DTM.
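For instance, a quick sketch comparing both (the path is a placeholder; dir /T:W lists the last-write time on Windows):
<?php
$file = 'C:\\path\\to\\test.html';                        // placeholder path
echo "filemtime: " . date('Y-m-d H:i:s', filemtime($file)) . "\n";
echo shell_exec('dir /T:W "' . $file . '"');              // Windows last-modified listing for comparison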
News: it turned out to be an ordinary bug in my app. I copied my Ajax call and forgot to edit it to apply to the test file. So it applied to one of my app files instead, and the DTM only got updated when I edited that app file (FTAdjust.js).
When I specify the correct test file, the DTM updates just fine each time I edit it in another process.
It can sometimes be hard to find one's own bug even when it stares one in the face! I kept looking everywhere else but where the mistake was.
Is there a way to delete a thread from Stack Overflow, since it is irrelevant to others?
I'm using wget from PHP to download mp4 files from another server:
exec("wget -P files/ $http_url");
but I haven't found any way to check whether the file downloaded correctly or not.
I tried to get the file duration using getID3(), but it always returns a good value, even if the file did not download correctly:
// Check file duration
$file = $getID3->analyze($filepath);
echo $file['playtime_string']; // 15:00 always good value
Is there any function to check that?
Thanks
First off, I would try https instead. If the server(s) you're connecting to happen to support it, you get around this entire issue, because lost bytes are usually caused by flaky hardware or bad MTU settings on a router on their network. Http connections gracefully degrade, giving you as much of the file as they could manage, whereas https connections just plain fail when they lose bytes, because you can't decrypt non-intact packets.
Lazy IT people tend to get prodded to fix complete failures of https, but they get less pressure to diagnose and fix corner cases like missing bytes that only occur in larger transactions over http.
If https is not available, keep reading.
An HTTP server response may include a Content-Length header indicating the number of bytes in a particular transaction.
If the header is there, you should be able to see it by running wget directly, adding the -v flag.
If it's not there, I believe wget will report Length: unspecified followed by the content-type header's value.
If it tells you (and assuming the byte count is accurate) then you can just compare the byte count of the file you got and the one in the transaction.
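A rough PHP sketch of that comparison (URL and local path are placeholders; note that get_headers() issues its own request to fetch the headers):
<?php
$url  = 'http://example.com/video.mp4';   // placeholder remote file
$path = 'files/video.mp4';                // placeholder local copy saved by wget

$headers  = get_headers($url, 1);         // associative array of response headers
$expected = isset($headers['Content-Length']) ? (int) $headers['Content-Length'] : -1;
$actual   = filesize($path);

if ($expected > 0 && $actual !== $expected) {
    echo "Download incomplete: got $actual of $expected bytes\n";
}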
If the server(s) you're contacting don't provide this header, you're left with less exact methods, like finding some player that will basically play the mp3 until it ends and then see how long it took and then compare that to the length listed in the ID3 tag (which is in the very beginning of the file). You're not going to be able to get an exact match though, because the time in the tag (if it's there) is only accurate to the second, meaning half a second could be gone from the end of the file and you wouldn't know.
I have the Telit LE910 4G LTE module connected to a Teensy board (Arduino will do). While I am able to send data to my PHP server using HTTP requests (POST and GET), I am not able to send continuous data due to necessary delays for the server to respond back:
[...]
// SOCKET DIAL
LTESerial.print("AT#SD=1,0,80,\"SERVER IP\"\r\n");
delay(5000);
// POST
LTESerial.print("POST /server/index.php?data=");
LTESerial.print(random(1000));
LTESerial.print(" HTTP/1.1\r\n");
LTESerial.print("Host: SERVER IP\r\n\r\n");
delay(5000);
while (getResponse() > 0);
This is simply an example (written here), but it somewhat illustrates what I am doing. The above code is supposed to be put inside a while loop, so that once the data is uploaded to a .txt file on the server, the module reconnects to the server and POSTs another data point.
Obviously, I want to avoid these delays and push data to the server as fast as possible (as soon as the data is available). This is why I opted for the 4G LTE version.
Tweaking the delays might give me an extra second or so, but my project includes plotting a lot of data points in "real time", so it is very time sensitive.
Any idea on how to send a continuous data stream to the server on 4G? I am thinking about buffering some data points and use FTP to upload the data, but I assume uploading files to the server might even take more time than now.
Any help is much appreciated!
It sounds like your use case might be better suited to a special IoT (Internet of Things) protocol rather than a more client-server, connection-oriented protocol like HTTP.
There are several protocols in use in the IoT world but some of the most common are:
MQTT - http://mqtt.org
COAP - http://coap.technology
XMPP - https://xmpp.org
These should not only address your latency concerns but are generally also designed to minimise data overhead and processing/battery use.
You should be able to find PHP examples for these as well - for example, one for MQTT:
https://www.cloudmqtt.com/docs-php.html
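As a rough illustration only, a publisher sketch assuming the bluerhinos phpMQTT library linked from that page (class and method names are taken from that project and may differ between versions; broker, topic and payload are placeholders):
<?php
require 'phpMQTT.php';                                   // bluerhinos/phpMQTT (assumed)

$mqtt = new phpMQTT('broker.example.com', 1883, 'php-publisher');   // placeholder broker
if ($mqtt->connect()) {
    $mqtt->publish('sensors/telit', '3091191349', 0);    // placeholder topic and data point
    $mqtt->close();
} else {
    echo "Could not connect to the broker\n";
}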
I somewhat got it to work using some of the existing code above, but it is still not optimal. This might be useful for others.
This is what I did:
1) I socket dial only once (during initialization)
2) The POST section runs inside an infinite loop. The 5-second delay is now reduced to 200 ms and I added some headers, like so:
//unsigned long data = random(1000000000000000, 9999999999999999);
LTESerial.print("POST /index.php?data=");
LTESerial.print(data);
LTESerial.print(" HTTP/1.1\r\n");
LTESerial.print("Host: ADDRESS\r\n");
LTESerial.print("Connection: keep-alive\r\n\r\n");
delay(200);
while (getResponse() > 0);
3) It turns out my WAMP server (PHP) had default limits on the maximum number of HTTP requests, timeouts and the like. I had to increase these values (I changed them to unlimited) in php.ini.
However, while I am able to "continuously" send data to my server, a delay of 200 ms is still a lot. I would like to see something close to serial communication, if possible.
Also, when looking at the serial monitor, I get:
[...]
408295030
4238727231
3091191349
2815507344
----------->(THEN SUDDENLY)<------------
HTTP/1.1 200 OK
Date: Thu, 02 Jun 2
2900442411
016 19:29:41 GMT
Server: Apache/2.4.17 (Win32) PHP/5.6.15
X-P16
3817418772
Keep-Alive: timeout=5
Connection: Keep-Alive
Content-Type: te
86026031
HTTP/1.1 200 OK
Date: Thu, 02 Jun 2016 19:29:4
3139838298
75272508
[...]
----------->(After 330 iterations/POSTs, I get)<------------
NO CARRIER
NO CARRIER
NO CARRIER
NO CARRIER
So my question is:
1) How do I eliminate the 200 ms delay as well?
2. If my data points have different sizes, the delay will have to change as well. How do I do this dynamically?
3) Why does it stop at 330-ish iterations? This doesn't happen if data is only 4 digits.
4) Why do I suddenly get responses from the server?
I hope someone can use this for their own project, however this does not suffice for mine. Any ideas?
We're using a normal PHP download script (with headers etc) to serve files to users.
The issue, however, is that with some browsers and large downloads the download script is requested multiple times. NGINX logs show the requests with a 206 status code (suggesting chunked streaming?), which is strange because we don't serve any streamable content.
Regardless, this means the download script is requested multiple times and thus the MySQL function of +1'ing the download counter for the file is run multiple times per download.
We tried using sessions, but seeing as the download is served from an external server + domain, we have no way to clear said sessions after they're set.
We're using Laravel with NGINX + MySQL, any help would be appreciated. Thanks!
Looking at the spec and the headers of a request which would ultimately result in a 206 response, there was one header which stood out and looks like it would be perfect.
The header in question is the Range request header, which the server answers with a Content-Range header that could look like the following:
Content-Range: bytes 21010-47021/47022
What this is saying is that the client wants bytes 21010-47021 out of 47022 bytes. All you should need to worry about here is the first number and whether it is 0 or not. If no range was requested, or a range was requested and its first number is 0, you can assume the download is just beginning and you should increment the counter.
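A rough sketch of that check in the download script (incrementDownloadCounter() and $fileId stand in for whatever the existing Laravel code uses):
<?php
// Browsers request a slice by sending a Range header, e.g. "bytes=21010-47021".
$range = isset($_SERVER['HTTP_RANGE']) ? $_SERVER['HTTP_RANGE'] : '';

// Count the download only when there is no range or the range starts at byte 0.
if ($range === '' || preg_match('/^bytes=0-/', $range)) {
    incrementDownloadCounter($fileId);   // hypothetical helper from the existing script
}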
I am trying to investigate the cause of slowness on my website.
Here I attach a Firebug screenshot:
As you can see, all of the content is loaded in just 2.92 s, but the JavaScript onload event fires AFTER 17.67 s.
In case you want to see the website itself: http://maylashop.com .
I have tried using YSlow; I get an A grade and it doesn't help.
If anyone has a fix or knows what causes this, please kindly let me know.
Why http://cf.addthis.com, http://platform.twitter.com, plusone.google.com...? I don't see you using them anywhere. If you are using them, load them only when they are needed.
Follow the YSlow guidelines, get some metrics and then check what the bottleneck is.
You will be happy you followed these rules.
This is not a JavaScript problem. Your PHP script is taking that long to execute (see screenshot). All the other resources that page is loading (JS, CSS, images, etc.) are taking less than a second to load. I'm 95% sure this is caused by zlib.output_compression. Try adding the following code to the top of your script to see if disabling it does anything useful:
ini_set('zlib.output_compression', 0);
If that fixes it, then you could consider not using zlib.output_compression, or figure out what specific thing in your code is causing problems with it (usually output buffering).
Pretty sure this is not related to JavaScript. Just requesting your main page took about 2 seconds. I ran this on a Linux machine:
date ; lynx -source http://maylashop.com/ > /dev/null ; date
Fri Apr 13 22:38:19 CEST 2012
Fri Apr 13 22:38:21 CEST 2012
This is an independent confirmation that the host is either generating the index page too slowly, or there is a network transfer issue.
Doing the same thing with /index.php or /index.html, or even a 404 page I created on the fly, results in the same ~2-second delay.
Edit: checked image download speed, and that one is <1 second. Close to 0.
Something in your PHP code might be creating the problem (inducing a delay). One of those things could be a delay in connecting to a MySQL server (or whatever you're using). Is the database server on the same exact machine, or remote? Are you connecting to it on each call, or do you have a caching system in place?