PHP: unlink() throws error: Resource temporarily unavailable

Here is the piece of code:
public function uploadPhoto() {
    $filename = '../storage/temp/image.jpg';
    file_put_contents($filename, file_get_contents('http://example.com/image.jpg'));
    $photoService->uploadPhoto($filename);
    echo("If file exists: " . file_exists($filename));
    unlink($filename);
}
I am trying to do the following things:
Get a photo from a URL and save it in a temp folder on my server. This works fine: the image file is created, and echo("If file exists: ".file_exists('../storage/temp/image.jpg')); prints If file exists: 1.
Pass that file to another function that handles uploading the file to an Amazon S3 bucket. The file gets stored in my S3 bucket.
Delete the photo stored in the temp folder. This doesn't work! I get an error saying:
unlink(../storage/temp/image.jpg): Resource temporarily unavailable
If I use rename($filename, '../storage/temp/renimage.jpg'); instead of unlink($filename); I get an error:
rename(../storage/temp/image.jpg,../storage/temp/renimage.jpg): The process cannot access the file because it is being used by another process. (code: 32)
If I remove the function call $photoService->uploadPhoto($filename);, everything works perfectly fine.
If the file is being used by another process, how do I unlink it after the process has been completed and the file is no longer being used by any process? I do not want to use timers.
Please help! Thanks in advance.

Simplest solution:
gc_collect_cycles();
unlink($file);
Does it for me!
Straight after uploading a file to amazon S3 it allows me to delete the file on my server.
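Applied to the question's uploadPhoto() flow, a minimal sketch (assuming $photoService is in scope, as in the question):
$photoService = $this->photoService; // assumption: injected service, as in the question
$filename = '../storage/temp/image.jpg';
file_put_contents($filename, file_get_contents('http://example.com/image.jpg'));

// The SDK's stream may still hold an open handle on $filename here.
$photoService->uploadPhoto($filename);

// Force __destruct() on unreachable stream objects, releasing the handle.
gc_collect_cycles();
unlink($filename);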
See here: https://github.com/aws/aws-sdk-php/issues/841
The GuzzleHttp\Stream object holds onto a resource handle until its __destruct method is called. Normally, this means that resources are freed as soon as a stream falls out of scope, but sometimes, depending on the PHP version and whether a script has yet filled the garbage collector's buffer, garbage collection can be deferred. gc_collect_cycles will force the collector to run and call __destruct on all unreachable stream objects.
:)

Just had to deal with a similar error.
It seems your $photoService is holding on to the image for some reason...
Since you didn't share the code of $photoService, my suggestion would be to do something like this (assuming you don't need $photoService anymore):
[...]
    echo("If file exists: " . file_exists($filename));
    unset($photoService);
    unlink($filename);
}
unset() destroys the given variable/object, so it can no longer hold open (or whatever it does with) any files.
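For illustration, here is a hypothetical service that keeps a handle open until it is destroyed; unset() (or the variable falling out of scope) is what finally triggers fclose():
class PhotoService // hypothetical, for illustration only
{
    private $handle;

    public function uploadPhoto(string $filename): void
    {
        $this->handle = fopen($filename, 'rb'); // handle stays open...
        // ... upload logic would go here ...
    }

    public function __destruct()
    {
        if (is_resource($this->handle)) {
            fclose($this->handle); // ...until the object is destroyed
        }
    }
}

$photoService = new PhotoService();
$photoService->uploadPhoto($filename);
unset($photoService); // __destruct() runs, the handle is released
unlink($filename);    // now safe on platforms that lock open files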

I sat over this problem for an hour or two, and finally realized that "temporarily unavailable" really means "temporarily".
In my case, concurrent PHP scripts access the file, either writing or reading. And when the unlink() call had poor timing, the whole thing failed.
The solution was quite simple: use the (generally not very advisable) @ operator to prevent the error from being shown to the user (sure, one could also stop errors from being printed), and then have another try:
$gone = false;
for ($trial = 0; $trial < 10; $trial++) {
    if ($gone = @unlink($filename)) {
        break;
    }
    // Wait a short time
    usleep(250000);
    // Maybe a concurrent script has deleted the file in the meantime
    clearstatcache();
    if (!file_exists($filename)) {
        $gone = true;
        break;
    }
}
if (!$gone) {
    trigger_error('Warning: Could not delete file ' . htmlspecialchars($filename), E_USER_WARNING);
}
After solving this issue and pushing my luck further, I could also trigger the "Resource temporarily unavailable" issue with file_put_contents(). Same solution, and now everything works fine.
If unlinking fails again in the future, I'll replace the @ with ob_start(), so the error message can tell me the exact cause.
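As an alternative to ob_start(), the suppressed message can also be recovered with error_get_last(); a sketch:
if (!@unlink($filename)) {
    $error = error_get_last(); // the message suppressed by @ is still recorded here
    trigger_error('unlink failed: ' . ($error['message'] ?? 'unknown error'), E_USER_WARNING);
}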

I had the same problem. The S3 client doesn't seem to release the file before unlink() is executed. If you extract the contents into a variable and set it as the 'Body' in the putObject array:
$fileContent = file_get_contents($filepath);
$result = $s3->putObject(array(
    'Bucket'      => $bucket,
    'Key'         => $folderPath,
    'Body'        => $fileContent,
    //'SourceFile' => $filepath,
    'ContentType' => 'text/csv',
    'ACL'         => 'public-read'
));
See this answer: How to unlock the file after AWS S3 Helper uploading file?

unlink() returns a bool, so you can build a loop with a short wait and a retry limit, to wait for your processes to complete.
Additionally, put @ in front of unlink() to hide the access error.
Throw an error/exception if the retry limit is reached.
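A minimal sketch of that idea (the retry count and delay are arbitrary):
function unlinkWithRetries(string $filename, int $maxRetries = 10): void
{
    for ($i = 0; $i < $maxRetries; $i++) {
        if (@unlink($filename)) {
            return; // deleted successfully
        }
        usleep(250000); // wait 250 ms before the next attempt
    }
    throw new RuntimeException("Could not delete $filename after $maxRetries attempts");
}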

Related

Google Cloud Storage get temp filename (using fopen('php://temp'))

Similar question asked here a few years ago but with no answer:
Get path of temp file created via fopen('php://temp')
I am using Google Cloud Storage to download a number of large files in parallel and then upload them to another service. Essentially transferring from A to C, via my server B.
Under the hood, Google's StorageObject -> downloadAsStream() uses Guzzle to get the file via fopen('php://temp','r+').
I am running into a disk space issue because Google's Cloud Storage library is not cleaning up the temp files if there is an exception thrown during the transfer. (This is expected behaviour per the docs). Every retry of the script dumps another huge file in my tmp dir which isn't cleaned up.
If Guzzle used tmpfile() I would be able to use stream_get_meta_data()['uri'] to get the file path, but because it uses php://temp, this option seems to be blocked off:
[
"wrapper_type" => "PHP",
"stream_type" => "TEMP",
"mode" => "w+b",
"unread_bytes" => 0,
"seekable" => true,
"uri" => "php://temp", // <<<<<<<< grr.
]
So: does anyone know of a way to get the temporary file name created by fopen('php://temp') such that I can perform a manual clean-up?
UPDATE:
It appears this isn't possible. Hopefully GCS will update their library to change the way the temp file is generated. Until then I am using the following clean-up code:
public function cleanTempDir(int $timeout = 7200) {
    foreach (glob(sys_get_temp_dir() . "/php*") as $f) {
        if (is_writable($f) && filemtime($f) < (time() - $timeout)) {
            unlink($f);
        }
    }
}
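A hypothetical way to wire that in, so the sweep runs after every transfer attempt (transferObject() is an illustrative name, not part of any library):
try {
    $this->transferObject($object); // illustrative: download from GCS, re-upload elsewhere
} finally {
    $this->cleanTempDir(); // sweep stale php* temp files even when an exception is thrown
}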
UPDATE 2
It is possible, see accepted answer below.
Something like the following should do the trick:
use Google\Cloud\Storage\StorageClient;

$client = new StorageClient;
$tempStream = tmpfile();
$tempFile = stream_get_meta_data($tempStream)['uri'];

try {
    $stream = $client->bucket('my-bucket')
        ->object('my-big-ol-file')
        ->downloadAsStream([
            'restOptions' => [
                'sink' => $tempStream
            ]
        ]);
} catch (\Exception $ex) {
    unlink($tempFile);
}
The restOptions option allows you to proxy through commands to the underlying HTTP 1.1 transport (Guzzle, by default). My apologies this isn't clearly documented, but hope it helps!
Google Cloud Platform Support here!
At the moment, using the PHP Cloud Storage library, it is not possible to get the temporary file name created when using the method downloadAsStream(). Therefore I have created a Feature Request on your behalf; you can follow it here.
As a workaround, you may be able to remove the file by hand, you can get the temp file name using the following command:
$filename = shell_exec("ls -lt | awk 'NR==2' | cut -d: -f2 | cut -d ' ' -f2");
After that, $filename will contain the last modified file name, which will be the one that failed and you wish to remove. With the filename you can now proceed to remove it.
Notice that you will have to be in the temp folder before executing the command.
It will most likely be the system-configured temporary directory, which you can get with sys_get_temp_dir().
Note that php://temp only spills to a file when needed; small amounts of data stay in memory.
https://www.php.net/manual/en/wrappers.php.php
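The memory threshold can also be set explicitly via the maxmemory parameter of the wrapper; a sketch:
// php://temp stays in memory until it exceeds the maxmemory threshold,
// then transparently spills to a temporary file on disk.
$fp = fopen('php://temp/maxmemory:1048576', 'r+'); // 1 MiB threshold
fwrite($fp, str_repeat('x', 2 * 1048576));         // exceeds it, spills to disk
rewind($fp);
fclose($fp);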
Edit: OK, so the file is created. Then you can probably use stream_get_meta_data() on the stream handle to get that information from the stream.

Load a 'php://temp' or 'php://memory' file within a Symfony File object

I have a blob resource from my DB. I want to temporarily wrap this file in a Symfony File object because I want to use specific methods like the extension guesser, and apply Symfony file validators.
I want to store this temporary file in memory, because the blobs are small files and I don't want to create a file on disk on every request.
I tried to do it this way:
$file = new File ('php://temp');
but Symfony throws an error that says 'The file "php://temp" does not exist'. Looking at the File source, the error is caused by an is_file($path) check made in the constructor, which I can bypass by passing false as the second argument. But if I do:
$file = new File ('php://temp', false);
the File is created, but then the error comes back later, e.g. when I use the guesser:
$file->guessExtension($file)
because in Symfony/Component/HttpFoundation/File/MimeType/MimeTypeGuesser.php:
public function guess($path)
{
    if (!is_file($path)) {
        throw new FileNotFoundException($path);
    }
    (...)
OK. Then my question is: is there a way to load a 'php://temp' or 'php://memory' stream within a File object?
Pretty sure php://temp writes to memory until it is full and then falls back to a file, whereas php://memory stays only in memory with no fallback.
This likely happens because php://temp and php://memory are non-reusable, so once you've written to them the content may not still be there when you next want it. From the PHP manual:
php://memory and php://temp are not reusable, i.e. after the streams have been closed there is no way to refer to them again.
file_put_contents('php://memory', 'PHP');
echo file_get_contents('php://memory'); // prints nothing
How are you writing to php://temp to begin with? That is the more important issue, rather than Symfony's File class. I suspect that by the time you create a File instance, the php://temp content is already gone.
It's worth noting that using php://temp will create a file on disk in the temporary location, so you might as well write to a tempnam() handle anyway. At least then you will have a reference to a physical (but temporary) file.
I suggested to allow passing the file contents (instead of the path) to Symfony's MIME type guesser, to enable guessing "on-the-fly": https://github.com/symfony/symfony/issues/40916
Here's how I do it right now:
use Symfony\Component\Mime\MimeTypes;
$tmpFilename = tempnam(sys_get_temp_dir(), 'guessMimeType_');
file_put_contents($tmpFilename, $content);
$mimeTypes = new MimeTypes();
$guessedMimeType = $mimeTypes->guessMimeType($tmpFilename);
unlink($tmpFilename);
The first line is taken from https://www.php.net/manual/en/function.tempnam.php#93256
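Since the original question was about the extension guesser, the same MimeTypes instance can map the guessed MIME type back to an extension; a sketch:
// Map the guessed MIME type to its extensions, e.g. 'image/jpeg' => ['jpeg', 'jpg', ...].
$extensions = $mimeTypes->getExtensions($guessedMimeType);
$extension  = $extensions[0] ?? null; // the first entry is the preferred one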

About PHP parallel file read/write

I have a file on a website. A PHP script modifies it like this:
$contents = file_get_contents("MyFile");
// ** Modify $contents **
// Now rewrite:
$file = fopen("MyFile","w+");
fwrite($file, $contents);
fclose($file);
The modification is pretty simple. It grabs the file's contents and adds a few lines. Then it overwrites the file.
I am aware that PHP has a function for appending contents to a file rather than overwriting it all over again. However, I want to keep using this method since I'll probably change the modification algorithm in the future (so appending may not be enough).
Anyway, I was testing this out, making like 100 requests. Each time I call the script, I add a new line to the file:
First call:
First!
Second call:
First!
Second!
Third call:
First!
Second!
Third!
Pretty cool. But then:
Fourth call:
Fourth!
Fifth call:
Fourth!
Fifth!
As you can see, the first, second and third lines simply disappeared.
I've determined that the problem isn't the contents string modification algorithm (I've tested it separately). Something is messed up either when reading or writing the file.
I think it is very likely that the issue is when the file's contents are read: if $contents, for some odd reason, is empty, then the behavior shown above makes sense.
I'm no expert with PHP, but perhaps the fact that I performed 100 calls almost simultaneously caused this issue. What if there are two processes, and one is writing the file while the other is reading it?
What is the recommended approach for this issue? How should I manage file modifications when several processes could be writing/reading the same file?
What you need to do is use flock() (file lock)
What I think is happening is that your script grabs the file while the previous script is still writing to it. Because fopen() with the "w+" mode truncates the file before writing, a concurrent read at the wrong moment gets an empty string, and once the later process is done it overwrites the previous contents.
The solution is to have the script usleep() for a few milliseconds when the file is locked and then try again. Just be sure to put a limit on how many times your script can try; see the sketch after the notice below.
NOTICE:
If another PHP script or application accesses the file, it may not necessarily use/check for file locks. This is because file locks are often seen as an optional extra, since in most cases they aren't needed.
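A sketch of that retry idea, using a non-blocking lock so the script can sleep and try again instead of blocking (the limits are arbitrary):
$fh = fopen('MyFile', 'c+'); // 'c+' opens for read/write without truncating
$attempts = 0;
// LOCK_NB makes flock() return immediately instead of blocking.
while (!flock($fh, LOCK_EX | LOCK_NB)) {
    if (++$attempts >= 50) { // give up after ~0.5 s
        fclose($fh);
        die('Could not lock MyFile');
    }
    usleep(10000); // wait 10 ms before retrying
}
// ... read, modify, rewrite the file here ...
flock($fh, LOCK_UN); // release the lock
fclose($fh);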
So the issue is parallel access to the same file: while one instance is writing to the file, another is reading it before it has been updated.
PHP luckily has a mechanism for locking the file so that no one can read from it until the lock is released and the file has been updated.
flock() can be used, and the documentation is here.
You need to create a lock, so that any concurrent requests will have to wait their turn. This can be done using the flock() function. You will have to use fopen(), as opposed to file_get_contents(), but it should not be a problem:
$file = 'file.txt';
$fh = fopen($file, 'r+');
if (flock($fh, LOCK_EX)) {               // Get an exclusive lock
    $data = fread($fh, filesize($file)); // Get the contents of the file
    // Do something with data here...
    ftruncate($fh, 0);                   // Empty the file
    rewind($fh);                         // Reset the file pointer (ftruncate does not)
    fwrite($fh, $newData);               // Write new data to the file
    fclose($fh);                         // Close handle and release the lock
} else {
    die('Unable to get a lock on file: ' . $file);
}

check file for changes using php

Is there any way to check if a file is being accessed or modified by another process from a PHP script? I have attempted to use the filemtime(), fileatime() and filectime() functions, but although the script checks continuously in a loop, it seems that once the script has been executed it will only take the time from the first time the file was checked. An example use case would be uploading files to an FTP or SMB share. I attempted this below:
while (true)
{
    $LastMod = filemtime("file");
    if (($LastMod + 60) > time())
    {
        echo "file in use please wait... last modified : $LastMod";
        sleep(10);
    } else {
        // process file
    }
}
I know the file is constantly changing, but the $LastMod variable is not updating. Ending the process and executing again will pick up a new $LastMod from the file, but it doesn't seem to update each time the file is checked in the loop.
I have also attempted this with filesize(), but I get the same symptoms. I also looked into flock(), but as the file is created or modified outside PHP, I don't see how this would work.
If anyone has any solutions please let me know
thanks Vip32
PS. using PHP to process the files as requires interaction with mysql and querying external websites
The file metadata functions all work off stat() output, which caches its data, as a stat() call is a relatively expensive operation. You can empty that cache to force stat() to fetch fresh data with clearstatcache().
There are other mechanisms that allow you to monitor for file changes. Instead of doing a loop in PHP and repeatedly stat()ing, consider using an external monitoring app/script which can hook into the OS-provided mechanism and call your PHP script on-demand when the file truly does change.
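For example, on Linux the PECL inotify extension exposes that OS mechanism directly; a sketch, assuming the extension is installed:
// Requires the PECL inotify extension (Linux only).
$inotify = inotify_init();
$watch = inotify_add_watch($inotify, '/path/to/file', IN_CLOSE_WRITE);

// Blocks until the file is written and closed by the other process.
$events = inotify_read($inotify);

inotify_rm_watch($inotify, $watch);
fclose($inotify);
// Now process the file...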
Add clearstatcache(); to your loop:
while (true)
{
    clearstatcache();             // Drop cached stat() data so filemtime() is fresh
    $LastMod = filemtime("file");
    if (($LastMod + 60) > time())
    {
        echo "file in use please wait... last modified : $LastMod";
        sleep(10);
    } else {
        // process file
    }
}

Creating files on a time (hourly) basis

I am experimenting with the Twitter streaming API.
I use Phirehose to connect to Twitter and fetch the data, but I am having problems storing it in files for further processing.
Basically what I want to do is to create a file named
date("YmdH") . ".txt"
for every hour of connection.
Here is what my code looks like right now (it does not handle the hourly change of files):
public function enqueueStatus($status)
{
    $data = json_decode($status, true);
    if (isset($data['text']) /* more conditions here */) {
        $fp = fopen("/tmp/$time.txt", "w");
        fwrite($fp, $status);
        fclose($fp);
    }
}
Help is as always much appreciated :)
You want the 'append' mode in fopen - this will either append to a file or create it.
if (isset($data['text']) /* more conditions here */) {
    $fp = fopen("/tmp/" . date("YmdH") . ".txt", "a");
    fwrite($fp, $status);
    fclose($fp);
}
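If several requests may write at once, the same append can be done atomically in one call with an advisory lock; a sketch:
// FILE_APPEND avoids a read-modify-write cycle; LOCK_EX serializes concurrent writers.
file_put_contents('/tmp/' . date('YmdH') . '.txt', $status . PHP_EOL, FILE_APPEND | LOCK_EX);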
From the Phirehose Google Code wiki:
As of Phirehose version 0.2.2 there is an example of a simple "ghetto queue" included in the tarball (see files: ghetto-queue-collect.php and ghetto-queue-consume.php) that shows how statuses could be easily collected on to the filesystem for processing and then picked up by a separate process (consume).
This is a complete working sample of what you want to do. The rotation time interval is configurable, and there's another script to consume and process the written files.
Now if only I could find a way to stop the whole script; my log keeps filling up (the script continues execution) even if I close the browser tab :P
