Continue PHP Operation After User Exits Early

I'm using Laravel, PHP 7, PHP-FPM, APCu and NGINX.
I have an HTML form where a user can upload a file; it connects to Upload.php.
File process:
1. validate
2. name
3. move from /tmp to /media
4. create thumbnail
5. create database record
Once the PHP script reaches a certain point, how can I have it continue running even if the user exits the upload page early? Otherwise a rogue file will be left in the directory without a database entry.
// Move uploaded file from /tmp to /media
Input::file('file')->move("/var/www/mysite/media", $image);

// Continue even if the user exits early, to prevent a file in /media
// from being left without a database record.

// Thumbnail creation and other operations here; may take several seconds.

// Save database record
$image = new Gallery();
$image->name = $name;
$image->created_at = $date;
$image->save();
Should I use ignore_user_abort(true) and wrap the operations in a while(true) loop?
I have other booleans in the script, such as $upload = true. How does the while(true) know it belongs to ignore_user_abort(true) and not to another boolean I have set?

User abort is not an easy thing to catch in PHP. Typically, your script will only detect that the client has bailed out when it tries to send something back to the browser. In your case, since you are not sending anything back while processing, you should run to completion even if the user closes the connection. To make sure, you can use register_shutdown_function(), which is called when PHP shuts down. Be careful, though: Laravel registers hooks there too, so on a timeout Laravel's error handling triggers first and yours second. Inside that function you can check whether the script shut down properly or aborted.
To play with the function I created a route like this:
use Illuminate\Support\Facades\Log;

Route::get('/abort', function () {
    Log::info('Entering abort route...');
    set_time_limit(5);
    register_shutdown_function(function () {
        Log::info('Entering shutdown function... status: ' . connection_status());
        switch (connection_status()) {
            case CONNECTION_ABORTED:
                Log::info('Connection Aborted');
                break;
            case CONNECTION_TIMEOUT:
                echo 'Connection Timeout';
                break;
            default:
                echo 'All ok, user did not abort and function did not time out.';
        }
    });
    while (1) {
        echo 'Ping<br />';
    }
});
Here you can catch the abort (I used the Laravel logger for it; find the log in storage/logs/laravel.log).
Now, interestingly, if you abort this, you will get an abort status in the shutdown function, because the while(1) echoes 'Ping' to the browser, detecting the connection loss before the timeout. However, if you remove the echo in the while and replace it with non-buffered work ($cnt++; or something), then even if you abort, you will only get the timeout: the script never detects the connection closing.
Note that your handler runs after all other handlers.
Also, it runs every time the script shuts down, even when all is good, as in the default: case above. Naturally, this script will not run to completion because of the while(1); just remove it if you want to see the normal completion behaviour.
I think this is probably the easiest way to do the cleanup: catch the abort in there and do whatever cleanup you need.
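Applied to the upload flow in the question, that could look something like the sketch below (not tested against your app; it assumes $filename, $name and $date come from the validate/name steps, and that the shutdown handler should delete the moved file whenever no database record was saved):

// A minimal sketch; variable names are illustrative
ignore_user_abort(true); // keep running even if the client disconnects

$mediaPath = "/var/www/mysite/media/" . $filename;
$saved = false;

// If we shut down before the DB record was written, remove the orphan file
register_shutdown_function(function () use ($mediaPath, &$saved) {
    if (!$saved && file_exists($mediaPath)) {
        unlink($mediaPath);
    }
});

Input::file('file')->move("/var/www/mysite/media", $filename);

// ... thumbnail creation and other slow operations ...

$image = new Gallery();
$image->name = $name;
$image->created_at = $date;
$image->save();
$saved = true; // the record exists, so the shutdown handler leaves the file alone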
Hope this helps...

Related

How to execute a PHP task and put on hold/sleep another task that is launched right after, until the first one is completed or halfway through?

I have PHP code that does some tasks.
Let's say someone executes it by visiting https://localhost/code.php.
I have an employee who executes the script over cURL from a separate server. What is the best way to prevent him from launching the script a second time before the already-running script has actually completed/finished?
TL;DR: I need a function that makes the second invocation wait (sleep for a few seconds, or until the first task completes) while the first one is still running.
Any ideas? Thanks.
While a session won't work with cURL, the idea is valid -- you need to set something persistent outside of your script. So how about writing to a local file, or writing to a database?
if (file_exists('lock.txt')) die;
file_put_contents('lock.txt', 'This file prevents script execution', LOCK_EX);
// ... your script code here ...
unlink('lock.txt');
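Note that the check-then-create above has a small race window, and the lock file stays behind if the script dies mid-run. A sketch of a more robust variant using flock(), which the OS releases automatically when the process exits (the lock file name is illustrative):

$fp = fopen('/tmp/code.lock', 'c'); // 'c' creates the file if missing, never truncates
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    die('Previous job is not finished yet!');
}
// ... your script code here ...
flock($fp, LOCK_UN);
fclose($fp);

Drop LOCK_NB if you want the second request to wait for the first to finish instead of dying, which matches what the question actually asks for.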
If you know that there is only one user who will hit your server you can simply use session data.
<?php
session_start();
if (true === ($_SESSION["NOT_FINISHED"] ?? false)) {
    die("Previous job is not finished yet!");
} else {
    $_SESSION["NOT_FINISHED"] = true;
    // start whatever job needs to be done here
    // ...
    // when the job is done and finished, release our busy flag
    unset($_SESSION["NOT_FINISHED"]);
}
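One caveat the answer does not mention: PHP's default file-based session handler locks the session for the duration of the request, so a second request with the same session ID would block inside session_start() anyway and never see the flag. Calling session_write_close() after setting the flag avoids that; a sketch:

session_start();
if ($_SESSION["NOT_FINISHED"] ?? false) {
    die("Previous job is not finished yet!");
}
$_SESSION["NOT_FINISHED"] = true;
session_write_close(); // release the session lock so concurrent requests can run the check

// ... long-running job here ...

session_start(); // reopen the session to clear the flag
unset($_SESSION["NOT_FINISHED"]);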

ReactPHP Socket Server: What triggers the write (to client)?

When trying to write to the client, the message is getting buffered, and in some cases, it's not being written at all.
CURRENT STATUS:
When I telnet into the server, the Server Ready: message is readily printed as expected.
When I send random data (other than "close"), the server's terminal nicely shows progress every second, but the client's output waits until after all the sleeping and then prints all at once.
Most importantly, when sending "close", it just waits the obligatory second and then closes without ANY output reaching the client.
GOAL:
My main goal is for a quick message to be written to the client prior to closing a connection.
CODE:
// server.php
require __DIR__ . '/vendor/autoload.php';

$loop = React\EventLoop\Factory::create();
$socket = new React\Socket\Server($loop);

$socket->on('connection', function ($conn) {
    $conn->write("Server ready:\n");
    $conn->on('data', function ($data) use ($conn) {
        $data = trim($data);
        if ($data == 'close') {
            $conn->write("Bye\n");
            sleep(1);
            $conn->close();
        }
        for ($i = 1; $i < 5; $i++) {
            $conn->write(". ");
            echo '. ';
            sleep(1);
        }
        $conn->write(".\n");
        echo ".\n";
        $conn->write("You said \"" . $data . "\"\n");
    });
});

$socket->listen(1337, '127.0.0.1');
$loop->run();
SUMMARY:
Why can't I get anything written to the client before closing?
The problem you are encountering is that you are forgetting about the event loop that drives ReactPHP. I ran into this issue recently when building a server, and after following the code around I found out two things that should help you solve your problem.
If you close the connection right after writing to it, it simply closes before it can write. The correct call for writing something to the client and THEN closing the connection is $conn->end('msg');. If you follow that chain of code, the behaviour becomes clear: first it writes to the connection just as if you had run $conn->write('msg');, but then it registers a handler for the full-drain event which simply calls $conn->close();. The full-drain event is only dispatched when the write buffer is completely emptied, so end() waits for the write to finish before it closes the connection.
The drain and full-drain events are only dispatched after writing to a stream. full-drain occurs when the write buffer is completely empty; drain is dispatched once the write buffer is emptied past its softLimit, which by default is 2048 bytes.
The reason your writes are not making it through is that $conn->write('msg') only adds the string to the write buffer; it does not actually write. The event loop needs to run before the stream is given time to write. Your use of sleep() is incorrect with React, because it blocks execution at that line; in React you don't want to block any code from executing. When a task is done, let your function return so execution goes back to the React main event loop. If you want things to run at certain intervals, or simply on the next iteration of the loop, schedule callbacks on the event loop with $loop->addTimer($seconds, $callback), $loop->addPeriodicTimer($seconds, $callback), $loop->nextTick($callback) or $loop->futureTick($callback).
Ultimately, this happens because you are programming without acknowledging that you are still in a blocking thread. Anything in your code that blocks, blocks the entire React event loop, which in turn blocks everything React does for you. Give processing time back to the loop so it can do the reads and writes you have queued up. Only one iteration of the loop needs to occur for the write buffer to begin emptying (depending on the size of the buffer, it may or may not write it all out).
If your goal here is just to fix the connection-closing bit, switch to $conn->end('msg') instead of the write -> close. However, I believe the other loop you have printing the dots also does not behave the way you expect it to. As its purpose is not clear, if you can tell me what your goal was for it, I can possibly help you restructure that step as well.
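For illustration, a sketch of the data handler rewritten with a periodic timer instead of sleep() (it assumes the same older React\Socket\Server API used in the question, with $loop passed into the closures):

$socket->on('connection', function ($conn) use ($loop) {
    $conn->write("Server ready:\n");
    $conn->on('data', function ($data) use ($conn, $loop) {
        $data = trim($data);
        if ($data == 'close') {
            // write "Bye", then close once the buffer has drained
            $conn->end("Bye\n");
            return;
        }
        $i = 0;
        $timer = $loop->addPeriodicTimer(1.0, function () use (&$i, &$timer, $conn, $loop, $data) {
            $conn->write('. '); // one dot per second, without blocking the loop
            if (++$i >= 4) {
                $loop->cancelTimer($timer);
                $conn->write(".\nYou said \"" . $data . "\"\n");
            }
        });
    });
});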

Daemonize a PHP script with the option to kill it

I have a script that should start running when the user presses the start button in my application. This script takes a couple of variables given by the user, like configuration etc. It should then keep running until the user clicks the stop button.
I have been doing my research, and it looks like daemonizing the script would be the best option, but now I have a couple of problems.
Which daemon PHP package do I use for this kind of process? How do I pass in the variables? How do I kill the script once the user commands it to?
I am using a Digital Ocean VPS to host my application, and I'll be using it to host all the daemon processes. I'm running Ubuntu, and as a PHP framework I'm using Laravel 4.
There is another option: you can use async messaging like RabbitMQ. It's very easy to use, and there is a massive amount of tutorials on its website: RabbitMQ tutorials.
Your worker script needs to listen for user commands and process the enabled tasks.
I don't know the PHP-Daemon package, so I can't tell you if it is a suitable solution. However, I can show you a solution I've used before. You should modify it somewhat to fit your own needs.
// Let the script run forever; maybe you want another limit
set_time_limit(0);
// Keep the script running even if the browser disconnects
ignore_user_abort(true);

// Create a temporary file whose absence will tell us the user clicked stop
$tmpFile = tempnam('/tmp', 'mydaemon-');

// Output buffering
ob_start();

// Echo some output, including a stop button linking to a PHP script
// that will delete $tmpFile. It's up to you to implement stop.php.
echo '<a href="stop.php">stop</a>';

// Make sure to turn off all nested output buffers
while (ob_get_level() > 1) {
    ob_end_flush();
}

// Store the output of the last buffer in a string
$fullContent = ob_get_clean();

// Tell the browser to stop loading the page
$fullContent .= '<script type="text/javascript">window.stop();</script>';

// Close the session (if you have one), otherwise the stop action
// can't be opened while this request still holds the session lock
session_write_close();

// Send the output to the browser
header("Content-Length: " . strlen($fullContent));
header('Connection: close');
echo $fullContent;

// Force all output to be sent
flush();

// Time-consuming work: regularly check whether $tmpFile still exists;
// if not, stop execution
while (true) {
    clearstatcache();
    if (!file_exists($tmpFile)) {
        break;
    }
    sleep(1);
}
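For completeness, a minimal sketch of what stop.php could look like (assuming the start page passes the temp file name in the link, e.g. stop.php?file=mydaemon-Xy12ab; the names are illustrative):

<?php
// stop.php - delete the flag file so the daemon loop exits
$name = isset($_GET['file']) ? basename($_GET['file']) : ''; // basename() blocks path traversal
if ($name !== '' && strpos($name, 'mydaemon-') === 0) {
    @unlink('/tmp/' . $name);
    echo 'Stop signal sent.';
}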

PHP - how to assure correct command execution sequence

Given a simple code like :
$file = 'hugefile.jpg';
$bckp_file = 'hugeimage-backup.jpg';
copy($file, $bckp_file);
// here comes some manipulation on $bckp_file.
The assumed problem is that if the file is big or huge - let's say a JPG - one would think that it will take the server some time to copy it (by time I mean even a few milliseconds), but one would also assume that the execution of the next line would be much faster.
So, in theory, I could end up with a "no such file or directory" error when trying to manipulate a file that has not yet been created - or worse, start to manipulate a TRUNCATED file.
My question is: how can I ensure that $bckp_file was created (or in this case, copied) successfully before the NEXT line, which manipulates it?
What are my options to "pause" or "delay" the next line's execution until the file creation/copy has completed?
Right now I can only think of something like
if (!copy($file, $bckp_file)) {
    echo "failed to copy $file...\n";
}
which will only alert me but will not resolve anything (same as having the PHP error)
or
if (copy($file, $bckp_file)) {
    // move the manipulation to here ..
}
But this is also not quite valid, because let's say the copy was not executed: I would just skip the block without achieving my goal and without errors.
Is that even a problem, or am I over-thinking it?
Or does PHP have a built-in mechanism to ensure that?
Any recommended practices?
Any thoughts on the issue?
What are my options to "pause" or "delay" the next line's execution until the file creation/copy has completed?
copy() is a synchronous function, meaning that code will not continue past the call to copy() until it either completely finishes or fails.
In other words, there's no magic involved.
if (copy(...)) { echo 'success!'; } else { echo 'failure!'; }
Along with synchronous I/O there is also asynchronous I/O. It's a bit complicated to explain in technical detail, but the general idea is that work runs in the background and your code hooks into it: whenever a significant event happens, the background execution alerts your code. For example, if you were going to async-copy a file, you would register a listener on the copy that gets notified as progress is made. That way your code could do other things in the meantime, yet still know what progress was being made on the file.
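If you want belt-and-braces assurance before manipulating the backup, you can also verify the result yourself; a minimal sketch:

if (!copy($file, $bckp_file)) {
    throw new RuntimeException("Failed to copy $file");
}
clearstatcache(true, $bckp_file); // filesize() results are cached per request
if (filesize($bckp_file) !== filesize($file)) {
    throw new RuntimeException("Backup of $file is incomplete");
}
// $bckp_file is fully in place; safe to manipulate it here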
PHP handles file uploads by saving the whole file to a temporary directory on the server before executing any of your script (so you can use $_FILES from the beginning), and it's safe to assume all such functions are synchronous -- that is, PHP will wait for each line to finish executing before moving to the next line.
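To illustrate that point, a small sketch (the destination path and form field name are illustrative):

// By the time this runs, the upload is already complete on disk in the temp directory.
if (is_uploaded_file($_FILES['file']['tmp_name'])) {
    move_uploaded_file($_FILES['file']['tmp_name'], '/var/www/mysite/media/upload.jpg');
    // move_uploaded_file() is synchronous too: once it returns true,
    // the file is fully in place and safe to manipulate.
}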

Limiting Parallel/Simultaneous Downloads - How to know if download was cancelled?

I have a simple file upload service, written out in PHP, which also includes a script that controls download speeds by sending limited-sized packets when a user requests a download from this site.
I want to implement a system to limit parallel/simultaneous downloads to 1 per user if they are not premium members. In the download script above, I can use a MySQL database to store a record that has: (1) the user ID; (2) the file ID; (3) when the download was initiated; and (4) when the last packet was sent, which is updated each time this is done (if DL speed is limited to 150 kB/sec, then after every 150 kB, this record is updated, etc.).
However, thus far, the database record will only be deleted once the download has successfully completed — at the end of the script, after the download has been fully served, the record is deleted from the table:
insert DB record;
while (download is being served) {
    serve packet of data;
    update DB record with current date/time;
}
// Download is now complete
delete DB record;
How will I be able to detect when a download has been cancelled? Would I just have to have a Cron job (or something similar) detect if an existing download record is more than X minutes/hours old? Or is there something else I can do that I'm missing?
I hope I've explained this well enough. I don't think posting specific code is required; I'm interested more in the logistics of how (and whether) this can be done. If specifics are needed, I will gladly provide them.
NOTE: I know how to detect if a file was successfully downloaded; I need to know how to detect if it was cancelled, aborted, or otherwise stopped (and not just paused). This will be useful in stopping parallel downloads, as well as preventing a situation where the user cancels Download #1 and tries to initiate Download #2, only to find that the site claims he is still downloading file #1.
EDIT: You can find my download script here: http://codetidy.com/1319/ — it already supports multi-part downloads and download resuming.
<?php
class DownloadObserver
{
    protected $file;

    public function __construct($file) {
        $this->file = $file;
    }

    public function send() {
        // -> note in DB you've started
        readfile($this->file);
    }

    public function __destruct() {
        // download is done, either completed or aborted
        $aborted = connection_aborted();
        // -> note in DB
    }
}

$dl = new DownloadObserver("/tmp/whatever");
$dl->send();
should work just fine. No need for a shutdown_function or any funky self-built connection observation.
You will want to check out the following functions: connection_status(), connection_aborted() and ignore_user_abort() (see the connection handling section of the PHP manual for more info).
Although I can't guarantee the reliability (it's been a while since I've played around with this), with the right combination you should be able to accomplish what you want. There are a few caveats when working with these functions though, the big one being that if something goes wrong you could end up with stranded PHP scripts running on the server, requiring you to kill Apache to stop them.
The following should give you a good idea of how to do it (adapted from the PHP code examples and a couple of the comments):
<?php
// Tell PHP not to cancel execution if the connection is aborted,
// and drop the time limit to allow for big file downloads
ignore_user_abort(true);
set_time_limit(0);

while (true) {
    // See the ignore_user_abort() docs re having to send data
    echo chr(0);
    // Make sure the data gets flushed properly or the connection check won't work
    ob_flush();
    flush();
    // Check the connection status and exit the loop if the client aborted
    if (connection_status() != CONNECTION_NORMAL || connection_aborted()) break;
    // Just to provide some spacing in this example
    sleep(1);
}

file_put_contents("abort.txt", "aborted\n", FILE_APPEND);

// Never hurts to ensure that the script halts execution
die();
Obviously, for how you would be using it, the data being sent would simply be the next chunk of download data (just make sure you flush the buffer properly to ensure the data is actually sent). As far as I'm aware, there is no way to distinguish pausing from aborting/stopping: pause/resume functionality (and multi-part downloading, i.e. how download managers accelerate downloads) relies on the "Range" header, basically requesting byte x to byte y of the file, so if you want to allow resumable downloads you'll have to deal with that too.
There is no HTTP "cancel" signal that is sent by default. So it looks like you will need to decide on a timeout: the length of time a connection can sit without sending/receiving another packet. If you are sending rather small packets (as I presume you are), keep the timeout short for best effect.
In your while condition you will need to check the age of the last timestamp update; if it's too old, stop sending the file.
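As a sketch of that idea (the table and column names are illustrative, and $pdo is assumed to be an open PDO connection), a small cron job could sweep records whose last packet is older than the chosen timeout, which covers cancelled and stalled downloads alike:

$timeout = 120; // seconds without a served packet before a download counts as dead
$cutoff  = date('Y-m-d H:i:s', time() - $timeout);

// Remove stale download records so the user can start a new download
$stmt = $pdo->prepare('DELETE FROM active_downloads WHERE last_packet_at < ?');
$stmt->execute([$cutoff]);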
