PHP Script results in Server Timeout

I'm running a script which reads a lot of data from the database. The script itself is triggered by another script (to ensure the script will not run twice or more at the same time). Unfortunately I keep running into a server timeout. I tried to solve it with
set_time_limit(0);
and
ini_set('max_execution_time', 0);
but that didn't do the trick.
What the script does is generate a CSV file, which should then be downloaded by the user (via browser).
The file itself has to be created just in time, so I cannot create it overnight and push it, or anything like that.
Is there something like a best practice?
Since I cannot change the generating script itself, how can I ensure that the file gets generated and that the user is informed when it is ready for download? (I can't use mail.)
Thank you so much in advance.
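One common pattern for this (a sketch only, not the asker's code - generate.php, the token scheme, and the /tmp path are all assumptions): kick the generator off as a background process so the web request returns immediately, then let the browser poll a tiny status script until the file exists.

// trigger.php - start the generator in the background and return at once
$token = uniqid('csv_', true);
exec('php generate.php ' . escapeshellarg($token) . ' > /dev/null 2>&1 &');
echo $token; // the browser keeps this and polls status.php with it

// status.php - polled via JavaScript every few seconds
$token = basename($_GET['token']); // basename() blocks path traversal
echo file_exists('/tmp/' . $token . '.csv') ? 'ready' : 'pending';

Once status.php reports 'ready', the page can show a download link; the user is informed without mail, and the request that triggered generation never runs long enough to hit the server timeout.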

Related

File upload via PHP CurlFile failing from cron

I have an 80 MB zip file that I am uploading to 'ShareFile' through their REST API using PHP cURL. The script works fine when run via the browser or on my local Windows machine using the PHP 7 CLI. However, when I attempt to run the script via a cron job on our client's shared hosting using /usr/local/bin/ea-php71, the file and script suddenly stop running mid-upload.
1. I have set my email address to be notified of any and all cron output.
2. I have a set_error_handler error handler that echoes any received errors and attempts to log them to a file (this never gets called).
3. I have a register_shutdown_function that also never gets called.
4. Finally, I have a curl_setopt CURLOPT_PROGRESSFUNCTION callback to see what is happening with the upload - it writes the total upload bytes $uploadTotal and the current upload bytes $uploadCurrent to a log along with a timestamp.

(Edit) - Forgot to mention I've also got the verbose setting enabled (CURLOPT_VERBOSE) and am outputting stderr to a log file - but nothing relevant is displayed.
Number 4 has given me the most insight into the issue - when the script is run as a cron job, the upload stops around:
upload progress: [UploadTotal: 87455948] [UploadCurrent: 34913772]
every time (roughly 67 seconds in).
Some final details ...
When uploading the 80 MB file through cron, my error handler and shutdown function never get called, and no cron email arrives with any echoed output. The script just seems to simply PAUSE forever (it's almost like the script is waiting for user input or something).
However:
If I run the script locally or change the upload file to a 5 MB file, everything works as normal - all functions get called and I receive an email notifying me that the script has finished running.
It looks to me like I'm hitting some kind of timeout through cron which, per everything I've read, should not be the case. I'm looking for ideas about what I could be running into or how to troubleshoot this one.
Thank you!
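For reference, the progress logging described in point 4 looks roughly like this (a sketch with a hypothetical log path and a placeholder URL variable; CURLOPT_NOPROGRESS must be set to false or the callback never fires):

$ch = curl_init($uploadUrl); // the ShareFile upload URL from their REST API
curl_setopt($ch, CURLOPT_NOPROGRESS, false);
curl_setopt($ch, CURLOPT_PROGRESSFUNCTION,
    function ($ch, $dlTotal, $dlNow, $uploadTotal, $uploadCurrent) {
        // append a timestamped line on every progress tick
        file_put_contents('/tmp/upload-progress.log',
            date('c') . " [UploadTotal: $uploadTotal] [UploadCurrent: $uploadCurrent]\n",
            FILE_APPEND);
        return 0; // returning non-zero would abort the transfer
    });

Logging each tick this way makes it easy to compare the final byte count across runs, which is what surfaced the consistent ~67-second cutoff.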

How can I detect whether a PHP file is still running on Apache?

I have a PHP script with an unlimited while loop; it must run 24/7.
In another PHP file, how can I check whether that script is running on the server or has stopped?
And how can I send a signal to Apache to stop and re-execute that file?
I'm going to assign numbers to the files: the one that runs 24/7 will be the first file, and the one that will change the state of the first will be the second file.
Now, the first file can write to a file or a database, say, every 10 minutes. That way you know whether it's running by checking that file and the last time it was written. Then you create a second file or database table and write into it the state you want the first file to be in, for example: active or disabled. The first file reads that file/table, and if the state is disabled, it stops executing.
Easiest might be to save a timestamped status in memcache or another shared location, and have the other PHP script check the status timestamp.
It's easy to kill an Apache process and hit the page again; that will restart the script. Or you can add a signal handler to restart on SIGHUP.
You cannot check whether a specific file is running; you'll have to check whether a process is still running that file. That also means this isn't something Apache can do for you. You'll either have to:
1) send an OS-dependent kill signal to the process running the script, or
2) check for a kill signal from inside the script.
The former requires a lot of privilege on the server, so it's probably easier to do the second.
The easiest way is for the running file to write to a database or file somewhere to signify that it's still going, and to read the same location for a stop signal every so often. If the running process sees a stop signal, it can simply break out of whatever loop is keeping it going.
The second script can set the signal at whatever point and for whatever reason, and the other script will terminate shortly after.
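A minimal sketch of that pattern (the file names are made up for illustration):

// worker.php - the long-running script
while (true) {
    file_put_contents('/tmp/worker-heartbeat.txt', time()); // "still alive" marker
    if (trim((string) @file_get_contents('/tmp/worker-stop.txt')) === 'disabled') {
        break; // the second script asked us to stop
    }
    // ... do the actual work here ...
    sleep(600); // e.g. the 10-minute interval mentioned above
}

// checker.php - the second script
$last = (int) @file_get_contents('/tmp/worker-heartbeat.txt');
if (time() - $last > 1200) { // no heartbeat for two intervals
    echo 'worker appears to have stopped';
}
// to stop the worker: file_put_contents('/tmp/worker-stop.txt', 'disabled');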
If the first script terminates, have it write a file to disk called error.txt, and make the second script check for it every minute or so.
Once the second script spots an error.txt file, it takes that as the signal to restart your first script. A more real-time approach would be to use a database with the current timestamp.

Create file with fwrite and use it immediately

I have a script that creates a file using fwrite, and right after creating it, it sends an email with this file attached.
When I run this script I get a "File does not exist" error. When I run the script again, it all works.
So I guess that it tries to send the file too fast after creating it, and the server maybe needs a few more milliseconds before being able to send it.
Does anyone know this problem? Any solutions?
Delays should not be necessary; it is possible your script does not close the file. PHP and other scripting languages auto-close open file handles at the end of execution, which is why the file exists by the time of the second run.
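A sketch of what that looks like in practice (the path is hypothetical):

$fh = fopen('/tmp/report.csv', 'w');
fwrite($fh, $csvData);
fclose($fh);      // flush and release the handle before mailing
clearstatcache(); // so a later file_exists() check sees fresh information
// ... now attach /tmp/report.csv to the email and send ...

Using file_put_contents() instead of fopen/fwrite has the same effect, since it opens, writes, and closes in one call.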
Try sleep(1) before attaching the file; this will make PHP wait one second before trying to access the file, which should be plenty of time for it to be created. This should tell you whether file-creation speed is the problem or not.

Make output of CLI-based PHP script viewable from web without piping to a file?

I have a command line PHP script that runs constantly (infinite loop) on my server in a 'screen' session. The PHP script outputs various lines of data using echo.
What I would like to do is create a PHP web script to interface the command line script so that I can view the echo output without having to SSH into the server.
I had considered writing/piping all of the echo statements to a text file, and then having the web script read the text file. The problem here is that the text file will grow to several megabytes in the space of only a few minutes.
Does anyone know of a more elegant solution?
I think expect_popen will work for you, if you have it available.
Another option is to use named pipes - no disk usage, and the reading end sees output as soon as it is written.
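A sketch of the named-pipe approach (the FIFO path is hypothetical; posix_mkfifo requires the POSIX extension):

// writer side - the CLI script
$path = '/tmp/cli-output.fifo';
if (!file_exists($path)) {
    posix_mkfifo($path, 0644);
}
$fifo = fopen($path, 'w'); // blocks until a reader opens the pipe
fwrite($fifo, "status line\n");

// reader side - the web script
$fifo = fopen($path, 'r');
while (($line = fgets($fifo)) !== false) {
    echo htmlspecialchars($line), "<br>\n";
    flush(); // push each line to the browser as it arrives
}

Note the blocking behaviour: the writer stalls until a reader connects, so this only suits setups where the web reader is attached whenever the CLI script writes.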
The CLI script can write to a file like so (note the FILE_APPEND flag, so each write adds to the log instead of overwriting it):
file_put_contents( '/var/log/cli-log-'.date('YmdHi').'.log', $data, FILE_APPEND );
A new log file is thereby created every minute to keep the file size down. You can then clean up the directory at that point, deleting previous log files or moving them or whatever you want to do.
Then the web script can read from the current log file like so:
$log = file_get_contents( '/var/log/cli-log-'.date('YmdHi').'.log' );
As Elias Van Ootegem suggested, I would definitely recommend a cron job instead of a constantly running script.
If you want to view the data from a web script, you can do a few things. One is to write the data to a log file or a database so you can pull it out later. I would also consider limiting what you output, since there is so much data (if that is a possibility).
I have a lot of crons email me data; not sure if that would work for you, but I figured I would mention it.
The most elegant suggestion I can think of is to run the commands using exec in a web script that outputs directly to the browser if you use flush: http://php.net/manual/en/function.flush.php

get output from shell_exec command as command runs

I am coding a PHP-scripted web page that is intended to accept the filename of a JFFS2 image which was previously uploaded to the server. The script is to then re-flash a partition on the server with the image, and output the results. I had been using this:
$tmp = shell_exec("update_flash -v " . $filename . " 4 2>&1");
echo '<h3>' . $tmp . '</h3>';
echo verifyResults($tmp);
(The verifyResults function will return some HTML that indicates to the user whether the update command completed successfully. I.e., in the case that the update completes successfully, display a button to restart the device, etc.)
The problem with this is that the update command takes several minutes to complete, and the PHP script blocks until the shell command is complete before it returns any of the output. This typically means that the update command will continue running while the user sees an HTTP 504 error (at worst) or waits several minutes for the page to load.
I was thinking about doing something like this instead:
shell_exec("rm /tmp/output.txt");
shell_exec("update_flash -v " . $filename . " 4 >> /tmp/output.txt 2>&1 &");
echo '<div id="output"></div>';
echo '<div id="results"></div>';
This would theoretically put the command in the background and append all output to /tmp/output.txt.
And then, in a Javascript function, I would periodically request getOutput.php, which would simply print the contents of /tmp/output.txt and stick it into the "output" div. Once the command is completely done, another Javascript function would process the output and display a result in the "results" div.
But the problem I see here is that getOutput.php will eventually become inaccessible during the process of updating the device's flash memory, because it lives on the partition that is targeted for the update. So that could leave me in the same position as before, albeit without the 504 or the seemingly eternally loading page.
I could move getOutput.php to another partition on the device, but then I think I would still have to do some funky stuff with the web server configuration to be able to access it there (a symlink to it from the webroot would, like any other file, eventually be overwritten during the re-flash).
Is there any other way of displaying the output of the command as it runs, or should I just make do with the solution I have?
Edit 1: I'm currently testing some solutions. I'll update my question with results later.
Edit 2: It seems that the filesystem does not get overwritten as I had originally thought. Instead, the system seems to mount the existing filesystem in read-only mode, so I can still access getOutput.php even after the filesystem is re-flashed.
The second solution I described in my question does seem to work, in combination with using popen (as mentioned in an answer below) instead of shell_exec. The page loads, and via Ajax I can display the contents of output.txt.
However, it seems that output.txt does not reflect the output from the re-flash command in real time - it seems to display nothing until the update command returns from execution. I will need to do further testing to see what's going on here.
Edit 3: Never mind, it looks like the file is current as I access it. I was just hitting a delay while the kernel did some JFFS2-related tasks triggered by my use of the partition on which the source JFFS2 image is stored. I don't know why, but this apparently causes all PHP scripts to block until it's done.
To work around that, I'm going to put the update command invocation in a separate script and request it via Ajax - that way, the user will at least receive some prepackaged feedback while technically still waiting on the system.
Look at popen: http://it.php.net/manual/en/function.popen.php
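With popen, the update output can be streamed to the browser as it is produced - a rough sketch along the lines of the question's command (with shell escaping added):

$handle = popen('update_flash -v ' . escapeshellarg($filename) . ' 4 2>&1', 'r');
while (($line = fgets($handle)) !== false) {
    echo htmlspecialchars($line) . '<br>';
    if (ob_get_level() > 0) {
        ob_flush(); // drain PHP's output buffer if one is active
    }
    flush();        // then push the data to the client
}
$exitCode = pclose($handle); // the command's exit status; 0 normally means success

Whether the browser actually renders it incrementally also depends on the web server, since buffering in Apache or FastCGI can hold output back.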
Interesting scenario.
My first thought was to do something regarding proc_* and $_SESSION, but I'm not sure if that will work or not. Give it a try, but if not...
If you're worried about the file being flashed during the process, you could always instantiate a MySQL database in the secondary process and write to that. The database can live on another partition, you can address it by local IP, and the system will take care of the routing.
Edit
When I mentioned proc_* with sessions, I meant something similar to this, where $descriptorspec would become:
$_SESSION = array(
    1 => array("pipe", "w"),
);
However, I kind of doubt that will work. The process would end up writing to the $_SESSION in memory, which no longer exists once the first script is killed.
Edit 2
ACTUALLY, on that note, you could install memcache and have your secondary process write directly to memory, which can then be re-read by your web-interfaced process.
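Sketched with the Memcached extension (rather than the older memcache one; the key name is made up):

$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

// secondary (flashing) process: publish the latest progress line
$mc->set('flash_progress', $latestLine, 3600);

// web-interfaced process: re-read it on every Ajax poll
echo $mc->get('flash_progress');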
If you wipe the DocRoot there is no resource/script that can respond to requests from the user during this time. Therefore you have to send updates to the user in the same request that does the wipe. This requires you to start the shell process and immediately return to PHP. This can be accomplished with pcntl_fork() and pcntl_exec(). Your PHP script should now continuously send the output of the shell script to the client. If the shell script appends to a file in /tmp, you could fpassthru() that file and clear it until the shell script ends.
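A hedged sketch of that fork-and-stream idea (the paths and command line are placeholders, and pcntl_* must be available in the server's PHP, which is not a given under Apache):

file_put_contents('/tmp/output.txt', ''); // start with an empty log, as in the question
$pid = pcntl_fork();
if ($pid === 0) {
    // child: replace this process with the flash command
    pcntl_exec('/bin/sh', array('-c', 'update_flash -v image.jffs2 4 >> /tmp/output.txt 2>&1'));
    exit(1); // only reached if pcntl_exec() itself fails
}
// parent: stream the growing log to the client while the child runs
$log = fopen('/tmp/output.txt', 'r');
while (pcntl_waitpid($pid, $status, WNOHANG) === 0) {
    echo stream_get_contents($log); // emit whatever has been appended so far
    flush();
    usleep(500000); // poll twice a second
}
echo stream_get_contents($log); // drain any remaining output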
Regarding your However:
My guess is that you are trying to use the file as a stream. I haven't done any production tests, but I believe the file may only be written back to disk on fclose().
If you are writing to the file continually in script #2, those writes are actually going directly into memory until the file is closed.
Again - I cannot verify this, but if you want to test it, try re-opening and closing the file for every write. This will confirm or deny my theory, and you can modify your approach accordingly.
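In script #2 that test would look something like this (the path and loop variable are hypothetical):

foreach ($linesToLog as $line) {
    $fh = fopen('/tmp/output.txt', 'a'); // re-open in append mode each time
    fwrite($fh, $line);
    fclose($fh); // closing pushes the data out of PHP's stream buffer
}

Calling fflush($fh) after each fwrite() is a lighter-weight way to get the same effect without re-opening the file.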
