I have a PHP script that downloads videos from various locations.
The video files can be anywhere from 20 MB to 100 MB+.
I've got PHP currently saving the video file in a directory using CURLOPT_FILE. This is working fine with no problems.
Because of the large files being downloaded, I've set the cURL timeout to 45 minutes to allow the file to finish. I have also called set_time_limit(0) so that the PHP page keeps processing after the download has completed, and I've set ini_set("memory_limit", "500M");
When the download completes, it should echo "Downloaded" and then update a MySQL record stating the file has been downloaded.
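For reference, a stripped-down sketch of what the script does (the URL, path, and table/column names here are placeholders, not the real ones):

set_time_limit(0);
ini_set('memory_limit', '500M');

$videoUrl = 'http://example.com/video.mp4';   // placeholder source
$savePath = '/var/www/videos/video.mp4';      // placeholder destination

// Stream the response straight to disk instead of buffering it in memory.
$fp = fopen($savePath, 'w');
$ch = curl_init($videoUrl);
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 45 * 60);   // 45-minute cURL timeout
$ok = curl_exec($ch);
curl_close($ch);
fclose($fp);

if ($ok) {
    echo "Downloaded";   // this is the line that never appears in the browser
    // placeholder table/column names
    $db = new mysqli('localhost', 'user', 'pass', 'videos_db');
    $db->query("UPDATE videos SET downloaded = 1 WHERE id = 123");
    $db->close();
}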
What is happening, though, is that cURL downloads the video file correctly and the MySQL record is updated, but "Downloaded" is never displayed in the browser.
Why is this? I've tried to come up with a solution myself, but I cannot work out what the issue here is...
If you're in a browser environment, the browser will time out after a certain time and stop listening for output from the script, even though the script continues to run. The exact limit varies across browsers, but the figure I've seen is around 30 seconds.
To overcome this, you should send some output every so often, even if it's meaningless, e.g. echo "<!-- empty comment -->";
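A minimal sketch of that idea, assuming the long job can be broken into steps (workIsFinished() and doNextChunkOfWork() are just stand-ins for the real work):

set_time_limit(0);

// Drop any output buffers so each echo reaches the browser immediately.
while (ob_get_level() > 0) {
    ob_end_flush();
}

$lastPing = time();
while (!workIsFinished()) {          // stand-in: your long-running loop condition
    doNextChunkOfWork();             // stand-in: one slice of the real job

    // Every ~20 seconds, push something harmless down the connection
    // so the browser (and any proxy in between) keeps listening.
    if (time() - $lastPing >= 20) {
        echo "<!-- still working -->\n";
        flush();
        $lastPing = time();
    }
}

echo "Downloaded";

If the long step is a single blocking call such as curl_exec(), a cURL progress callback (CURLOPT_NOPROGRESS set to false plus CURLOPT_PROGRESSFUNCTION) is one place such a ping could be emitted from.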
I recently had a similar problem and dealt with it by not outputting any content from the script at all, and instead polling from the browser every so often with AJAX to see whether it was done.
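The polling target can be a tiny status script; the marker file and JSON shape here are just one way of doing it (the long-running script is assumed to create the marker when it finishes):

// status.php -- polled from the page via AJAX (e.g. every few seconds).
header('Content-Type: application/json');

$markerFile = '/tmp/download-finished.flag';   // placeholder path

echo json_encode(array('done' => file_exists($markerFile)));

The page then just hits status.php on a timer until done comes back true.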
Or, don't use the browser environment (as it's not ideally suited for this problem) and instead run the script from the command line, which (to my knowledge) does not have these timeouts.
Related
Well, I have a web application with multiple tools.
One such tool sends a simple Ajax request to a PHP script, which in turn sends an HTTP request via cURL; the problem is that this request takes a long time.
While this process runs, I cannot perform other tasks within the application; I have to wait for it to complete before I can use the other tools.
How can I get PHP to use multiple children or processes?
In this particular case, I don't need and don't want to use the Thread class, or exec() to run things via the command line.
Explanation of the problem:
I have a script for uploading files, but when I upload a large file the script takes a long time, so while the file is uploading I would like to view my history of uploaded files.
To do this, I open another browser tab with the upload-history URL.
The problem is that when I open that page, it is left "waiting" until the other tab finishes loading (i.e. until the file upload completes).
(I think) the problem is that PHP handles everything in the same process/thread, which prevents you from using multiple scripts at once (in multiple browser tabs).
So my problem is that I need to run multiple processes at the same time, without waiting for any of them to finish.
I am currently working with Linux CentOS 7 servers, with Apache + PHP 5.4 and 4 GB of RAM allocated to PHP.
Thanks
So I am using a form with a file input to submit an Excel file and upload the data into a MySQL database. The script imports everything just fine.
The only problem is that the request seems to time out in the client browser, even though processing continues on the server. I have already added all the common fixes, i.e.:
ini_set('max_execution_time', 0);
set_time_limit(0);
ini_set('memory_limit', '-1');
ignore_user_abort(true);
I am currently using flush(); and ob_end_flush(); to output a message for every 1000 records that get scanned for upload.
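The flushing part looks essentially like this (row handling simplified; parseExcel() and insertRow() are placeholders for the real parsing and INSERT logic):

set_time_limit(0);
ignore_user_abort(true);

// Make sure output is not held back by PHP's buffers.
while (ob_get_level() > 0) {
    ob_end_flush();
}

$rows  = parseExcel($_FILES['file']['tmp_name']);   // placeholder parser
$count = 0;
foreach ($rows as $row) {
    insertRow($row);                                 // placeholder INSERT helper

    if (++$count % 1000 === 0) {
        echo "Processed $count records...<br>\n";
        flush();                                     // push the progress line to the browser
    }
}
echo "Finished: $count records imported.";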
Unfortunately, after exactly 2 minutes, the page stops loading.
On the flip side, the import does continue: all records get scanned and uploaded regardless of what the page shows. This leads me to believe that the request keeps processing on the web server, but the browser stops receiving data from the server after a certain amount of time. Based on my observations, the browser stops receiving the response after exactly 2 minutes.
I have tried this on both Internet Explorer and Chrome and I get the same result. I have not made any changes to the php.ini file, for security reasons.
Using an old Zend Server with Apache 2.
Background info - I created an online shop a while ago, dropshipping products; I built the website and added all product info by hand. Now that I have some knowledge of PHP, I created a scraper/spider to get all the required info without doing anything by hand.
Question - My script runs on my local server, collecting all links from the site's sitemap.xml and uploading them to my database. Once that is complete, it goes through the links, extracting the data needed: picture, price, name, description, etc. The site I am scraping is not happy that I am doing it, due to human/computer errors that can only be spotted by a human, but has allowed it. Anyway, my script sometimes throws an error when an item cannot be scraped for some unknown reason, so I have put a die() where the script throws this error.
This is placed inside the MySQL while loop over the links. I have noticed a few times that when an error does occur, the script stops loading and shows me the exact error, but when I shut down the browser it carries on deleting and extracting information; I need to manually restart the server before it stops.
How is this possible, and what can I do to prevent it? Does the die() statement just kill the client-side output while the server-side script keeps running?
So you are running PHP locally to gather data from a remote site. You start a PHP script in your local browser. And the script does not stop when the browser is closed.
Of course, stopping the local server will stop the script.
However, PHP can also be run from the command line (on Windows as well as Linux); the output then goes to the terminal, and that process can simply be killed.
Another solution: inside the loop, check for the (non-)existence of a signal file and die if it is there. A second PHP script, callable in a second browser tab, then creates/removes that signal file.
(The file might serve as a lock too, so you do not start the data gathering twice.)
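A rough sketch of that signal-file idea (credentials, table, and file path are placeholders; scrapeLink() stands in for the existing per-link scraping code):

$mysqli   = new mysqli('localhost', 'user', 'pass', 'shop');   // placeholder credentials
$stopFile = '/tmp/scraper.stop';                               // placeholder path
$result   = $mysqli->query("SELECT url FROM links");           // the existing link query

while ($row = $result->fetch_assoc()) {
    // Bail out cleanly if the signal file has appeared.
    if (file_exists($stopFile)) {
        die('Stopped by signal file');
    }
    scrapeLink($row['url']);   // stand-in for scraping one link
}

The second script then only needs touch('/tmp/scraper.stop'); to stop the run, and unlink() to clear the flag again.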
I have a PHP page with multiple links. Each link writes different content to the same file. This generally works. However, if I use the same link within a minute of having used it already, it no longer creates or modifies the file.
This can be verified using Terminal.
After waiting out that minute, the PHP script works properly again.
// Write the requested action to the command file (apparently read by an iTunes-side helper).
$file = fopen("/private/tmp/iTunesRemoteCommand", "w");
fwrite($file, $_GET['action']);
fclose($file);
// Make sure other processes can read/replace the file.
chmod("/private/tmp/iTunesRemoteCommand", 0777);
print_r("Done");
For testing purposes, I swapped $_GET['action'] with a fixed, manually entered string.
In essence, every link works once every minute.
The installed version of PHP is v5.3.4.
Having tried it with multiple browsers, I wonder whether writing the same content to a file in relatively short succession is a limitation of PHP, or whether there is a setting (php.ini?) where this delay can be reduced.
PHP isn't doing any delays. Writing to a file has no timing penalties whatsoever.
But you have something next to PHP that interferes here. Obviously you are trying to remote-control iTunes. That cannot happen without iTunes reading the file. And probably this affects what can be written to the file, and when.
Just think: if iTunes internally uses a cron-like job to look for the file contents, that might happen only once a minute. What really happens is probably beyond our ability to find out.
I created a script that gets data from some web services and our database, formats a report, then zips it and makes it available for download. When I first started I made it a command line script to see the output as it came out and to get around the script timeout limit you get when viewing in a browser. But because I don't want my user to have to use it from the command line or have to run php on their computer, I want to make this run from our webserver instead.
Because this script could take minutes to run, I need a way to let it process in the background and then start the download once the file has been created successfully. What's the best way to let this script run without triggering the timeout? I've attempted this before (using backticks to run the script separately and such) but gave up, so I'm asking here. Ideally, the user would click the submit button on the form to start the request and then be returned to the page, instead of being made to stare at a blank browser window. When the zip file exists (meaning the process has finished), it should notify them (via AJAX? a reloaded page? I don't know yet).
This is on Windows Server 2007.
You should run it in a different process. Make a daemon that runs continuously, hits a database, and looks for a flag, like "ShouldProcessData". Then, when that page is hit, switch the flag to true. Your daemon process will see the flag on its next iteration and begin the processing. Stick the results into the database. Use the database as the communication mechanism between the website and the long-running process.
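A bare-bones sketch of such a daemon, assuming a jobs table with a status column (all names and credentials here are placeholders, and buildAndZipReport() stands in for the existing report code):

// daemon.php -- started once (e.g. as a scheduled task) and left running.
set_time_limit(0);

$db = new mysqli('localhost', 'user', 'pass', 'reports');   // placeholder credentials

while (true) {
    // Look for work flagged by the website.
    $result = $db->query("SELECT id FROM report_jobs WHERE status = 'pending' LIMIT 1");
    if ($row = $result->fetch_assoc()) {
        $id = (int)$row['id'];
        $db->query("UPDATE report_jobs SET status = 'running' WHERE id = $id");

        buildAndZipReport($id);   // stand-in for the existing long-running work

        // Record the result so the website (or an AJAX poll) can offer the download.
        $db->query("UPDATE report_jobs SET status = 'done' WHERE id = $id");
    }

    sleep(10);   // check again in a few seconds
}

The form handler then only has to INSERT a 'pending' row and return immediately; the page (or an AJAX poll) watches the same table until the status becomes 'done' and offers the zip for download.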
In PHP you have to tell it what timeout you want for your process; see set_time_limit() in the PHP manual.
You may have another problem: the timeout of the browser itself (which could be around 1-2 minutes). While that timeout may be changeable within each browser, you can usually prevent it from being triggered on the user's side by sending some data to the browser every 20 seconds or so (for instance the download headers first, followed later by the actual content).
Gearman is very handy for this (create a background task and let JavaScript poll for progress). It does, of course, require having Gearman installed and workers created. See: http://www.php.net/gearman
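For completeness, a minimal sketch of that split, assuming the pecl/gearman extension and a gearmand server on the default host/port (function and helper names are placeholders):

// worker.php -- long-running process, started separately (e.g. from a console).
$worker = new GearmanWorker();
$worker->addServer();                              // defaults to 127.0.0.1:4730
$worker->addFunction('build_report', function (GearmanJob $job) {
    $params = json_decode($job->workload(), true);
    buildAndZipReport($params);                    // stand-in for the existing report code
});
while ($worker->work());                           // block and process jobs forever

// ---- in the web request ----
// $client = new GearmanClient();
// $client->addServer();
// $client->doBackground('build_report', json_encode($_POST));  // returns immediately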
Why don't you make an AJAX call from the page where you want to offer the download, wait for that call to return, and call set_time_limit(0) in the script it hits?