move_uploaded_file hangs?

I seem to have a bizarre error I just can't quite figure out. My website was working on one server, but when I transferred it to a new one it stopped working. I believe I've narrowed the error down to this line of code:
$ret = move_uploaded_file($tmp_name, $orig_path);
This is executed through an AJAX call so it's a little bit tricky to debug, but the script can send back an error code and then my JavaScript will alert it. So, I've wrapped it in two of these debug statements:
echo json_encode(array(
    'success' => false,
    'errno' => $tmp_name.' -> '.$orig_path,
));
exit;

$ret = move_uploaded_file($tmp_name, $orig_path);

echo json_encode(array(
    'success' => false,
    'errno' => 'no error',
));
exit;
The first one works fine and spits out something like:
error /tmp/phpk3RICU -> /home/username/Websites/website/photos/o/2-4a3354dd017a9.jpg
Perhaps I'm a bit of a Linux noob, but I can't actually find /tmp/phpk3RICU on my system (is it deleted as soon as the script exits or what?). More on that in a sec though.
If I delete the first debug check and let move_uploaded_file run, the 2nd debug check never seems to get executed, which leads me to believe move_uploaded_file is hanging.
If instead of using $tmp_name I use a file I know doesn't exist, then the 2nd check DOES get executed. So... it seems like it just doesn't want to move that tmp file, but it's not reporting an error.
I'm running a fresh install of the LAMP stack on my Ubuntu machine, installed through apt-get... let me know if you need more info.
Oh, and I don't know if it's relevant, but the file gets uploaded through Flash.

Do you upload the file via the AJAX call?
Uploaded files are deleted as soon as the script you uploaded them to finishes executing - that's why you can't find the file in /tmp.
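A minimal sketch of what that means in practice (the form-field name "file" here is just a placeholder, not from your code):
// Inside the script the upload was POSTed to, the temp file still exists:
var_dump(is_uploaded_file($_FILES['file']['tmp_name'])); // bool(true)
// Once this script finishes, PHP removes the temp file automatically,
// which is why you can't find /tmp/phpXXXXXX afterwards.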

Try telling PHP to spit out all errors:
error_reporting(E_ALL);
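If display_errors is switched off in the server's php.ini, reporting alone won't print anything, so a fuller debugging sketch would be:
error_reporting(E_ALL);         // report every error, warning and notice
ini_set('display_errors', '1'); // and actually print them (debugging only)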
It could be a configuration discrepancy that is breaking it on one of your servers. From the move_uploaded_file() manual page:
Note: move_uploaded_file() is both safe mode and open_basedir aware. However, restrictions are placed only on the destination path as to allow the moving of uploaded files in which filename may conflict with such restrictions. move_uploaded_file() ensures the safety of this operation by allowing only those files uploaded through PHP to be moved.

Ugh. The problem was with permissions. 755 was enough on the other server, but apparently not on this one... I'm not really sure why; I guess PHP is running under a different user? I'm not really sure how the whole permissions stuff works. What really boggles me is why mkdir and move_uploaded_file didn't fail and return false...
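For anyone else hitting this: a quick sketch to find out which user PHP actually runs as, so you can set ownership and permissions to match (this assumes the POSIX extension is available, which it usually is on a stock Ubuntu LAMP install):
// Print the effective user of the PHP process, e.g. "www-data" on Ubuntu/Apache.
$info = posix_getpwuid(posix_geteuid());
echo $info['name'];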

Related

fopen creates the file, so why is the fwrite failing?

I have this stupid little test PHP script running on an Ubuntu system inside a virtual machine (Oracle VirtualBox) on my PC:
<?php
error_reporting(E_ALL);
ini_set('display_errors', 1); // show errors

echo "<p>test</p>";

$filename = "andy.txt";
$fh = fopen($filename, 'w') or die('fopen failed');
fwrite($fh, "qwerty") or die('fwrite failed');
fclose($fh);
?>
Despite all appropriate directory and file permissions being set, it is failing on the fwrite. The fopen works and creates the file, so write access is clearly enabled, but the fwrite dies, and the 'fwrite failed' message is output (no other error output is displayed).
The same script works perfectly well when I upload to my real server, so I am completely stumped as to why it won't write to the file; maybe it's something about my virtual server that is causing the problem.
Seems like such a pathetic thing, but it's driving me nuts! Considerable time Googling has failed to yield an answer, so can anybody here please provide some insight? Many thanks.
Not sure why the fwrite() call would die; it returns the number of bytes written, which is only falsy if nothing was written at all.
That said, have you tried with file_put_contents() instead? It's a simpler way of writing to a file, and is the recommended way since early PHP 5.
With it you only need to do the following:
$filename = "andy.txt";
$contents = "qwerty";
if (file_put_contents($filename, $contents) === false) {
    // Write failed!
}
No need to bother with opening and closing the file pointer, as that's automatically handled by the function. :)
Solved! It was a disk space error on my virtual server. At the back of my mind, I knew I had seen this mentioned elsewhere as an issue with write fails, but in this case I failed to make the connection.
@ChristianF Thanks! Switching to file_put_contents() was very helpful, since it also failed, but gave me a meaningful error message:
'file_put_contents(): Only 0 of 6 bytes written, possibly out of free disk space'
Aha! Having recalled that growing log files can be a problem, I took it upon myself to delete everything inside /var/log (after saving it) and presto, it now works! So, thank you for that tip - I will switch to using file_put_contents from now on. BTW: error.log by itself was 2 GB, while everything else in /var/log totalled only about 15 MB, but deleting error.log alone did not work, so I deleted everything.
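For future reference, free disk space can be checked from PHP itself before suspecting permissions; a tiny sketch using the built-in disk_free_space():
// Report free space on the partition holding the current directory.
$free = disk_free_space(__DIR__);
printf("%.1f MB free\n", $free / 1024 / 1024);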
@Clayton Smith Thank you, but removing the "or die('fwrite failed')" part did not result in any further error info - which is what is so frustrating: it's a shame that those error-reporting directives at the start of the script didn't seem to do much.
@NaeiKinDus Thank you, but I don't think I have SELinux running (I'm afraid I don't know anything about it). Although I have a /etc/selinux directory present, there's no config file in it, just what appears to be a skeleton semanage.conf - whatever that is. Commands such as sestatus are not recognised.

Randomly arising "Permission denied" in PHP mkdir (Windows)

I am wrestling with a problem. I stripped the example to this script that can be run as stand-alone application:
<?php
if (file_exists("x")) {
    print "<div>Deleting dir</div>";
    rmdir("x");
} else {
    print "<div>Not exists</div>";
}
clearstatcache();
mkdir("x");
If I call it repeatedly (F5 in browser) then sometimes this error occurs:
Deleting dir
Warning: mkdir(): Permission denied in F:\EclipseWorkspaces\Ramses\www\deploy\stripped_example.php on line 9
It works 10-20 times in a row, and then the error occurs. I googled it and other users have this problem too, but without a solution.
https://github.com/getgrav/grav/issues/467
My example creates the directory in the cwd, where anybody has full control. In addition, the mkdir() $mode parameter is ignored on Windows. After the error, the "x" directory truly does not exist, and on the next attempt (F5) it is always created without error. I hoped the later-added clearstatcache() would help, but nope.
In my full application I am using a full file path. The deleted directory is not empty, so I must clean it first. After successfully deleting it, the error occurs almost always.
My system is Windows 7, PHP 7.0.5, Apache 2.4
Windows doesn't let you delete things if another process is accessing them.
Check whether your antivirus or some other process has the folder open.
You can check this in Resource Monitor, reachable from Task Manager.
Try the code with an additional check that the directory exists:
<?php
if (is_dir("x")) {
    print "<div>Deleting dir</div>";
    rmdir("x");
} else {
    print "<div>Not exists</div>";
}
clearstatcache();
if (!is_dir("x")) {
    mkdir("x");
}
Had the same problem with Windows 10, XAMPP and PHP 7. The problem was Kaspersky Internet Security scanning and blocking the directory. With KIS disabled, mkdir always works for me. If disabling security software is not an option for you, then instead of directly recreating the directory you can try rename():
$time = time();
mkdir($path . $time);
rename($path . $time, $path);
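If neither disabling the scanner nor renaming works for you, another workaround (my own sketch, not guaranteed) is to retry briefly, since the scanner usually releases the directory within milliseconds:
// Hypothetical helper: retry mkdir() a few times before giving up,
// in case a virus scanner briefly holds the freshly deleted directory.
function mkdir_retry($path, $attempts = 5, $delayMs = 100) {
    for ($i = 0; $i < $attempts; $i++) {
        if (@mkdir($path)) {
            return true;
        }
        usleep($delayMs * 1000); // back off before the next attempt
    }
    return false;
}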
Just trim whitespace from the folder name with PHP's trim() function.

Why is unlink successful on an open file?

Why is an open file deleted? On Windows XAMPP I get the message "still working", but on another PHP server the file is deleted even if it is open, and I get the message "file deleted". I can delete the file over FTP too, even if the first script is still running :(
<?php
$handle = fopen("resource.txt", "x");
sleep(10);
?>

<?php
if (file_exists("resource.txt") && @unlink("resource.txt") === false) {
    echo "still working";
    exit;
} else {
    echo "file deleted";
}
?>
UNIX systems typically let you do this, yes. The underlying C unlink function is documented as such:
The unlink() function removes the link named by path from its directory and decrements the link count of the file which was referenced by the link. If that decrement reduces the link count of the file to zero, and no process has the file open, then all resources associated with the file are reclaimed. If one or more process have the file open when the last link is removed, the link is removed, but the removal of the file is delayed until all references to it have been closed.
In other words, you can basically mark the file for deletion at any time, but the system will actually keep it around for as long as applications are still accessing it. Only when all applications have let go of the file will it finally be removed. Windows apparently does not do it that way. Update: since PHP 7.3 it's now possible to unlink open files on Windows as well.
As a side note, UNIX's behaviour is the only sane behaviour in a multi-process environment. If you have to wait for all processes to close a file before the system lets you remove it, it's basically impossible to remove frequently accessed files at all. Yes, that's where those Windows dialog boxes about "Cannot delete file, still in use, retry?" come from, which you can never get rid of.
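A minimal sketch (UNIX-like systems only) demonstrating the behaviour described above - the open handle keeps working after unlink():
$fh = fopen("demo.txt", "w+");
fwrite($fh, "hello");
unlink("demo.txt");                // the directory entry is gone immediately
var_dump(file_exists("demo.txt")); // bool(false)
rewind($fh);
echo fread($fh, 5);                // but the data is still readable: "hello"
fclose($fh);                       // only now is the inode actually reclaimed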

PHP: ftp_get() can download file with same name only once

I'm having a strange problem using ftp_get() on one of two identical instances: one on localhost and one on an actual server. I'm using the following to download a file via FTP. Both instances download from the same FTP server, with the same credentials and the same paths.
$result = ftp_get($connection, $downloadPath, $serverPath, FTP_BINARY);
if ($result) {
    $successfulWrites[] = $downloadPath; // file name only, without path
} else {
    // on the second attempt to download a file with the same name, ftp_get() returns false
    // this is where I throw an exception in my code
}
On my localhost, I can download the same file over and over, and it doesn't matter what the file name on the FTP server is or where it's located.
On the second instance, which is identical to localhost's in terms of code (i.e. pulled from the same git repo), I can download a file once, but the same file cannot be downloaded again; ftp_get() returns false. If I change the name of the file on the FTP server, I can download it, but after that it won't work again, i.e. ftp_get() will return false.
I don't have access to the FTP server log. If it's available, I'm going to try to get it today from the host. But can anyone think of a reason this might be happening? ftp_get() just returns true or false without any explanation, so I'm pretty stuck with this.
I'm using PHP 5.4, and I have no idea what the spec is of the FTP (regular FTP) server.
As discussed, it sounded like ftp_get was successfully obtaining the file and writing it locally. I wonder whether, due to a permissions problem, it fails when it tries to write the file locally a second time. That would mean the FTP channel itself is fine, and the problem is purely local.
I'm somewhat surprised at this though, as I would imagine PHP would have raised a warning. Is your error_reporting set to allow this whilst you are debugging?
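A sketch of what I mean - surface any warnings and rule out the local side before blaming the FTP channel (variable names taken from your snippet):
error_reporting(E_ALL);
ini_set('display_errors', '1');

// If the local target already exists but isn't writable, ftp_get() will fail
// for reasons that have nothing to do with FTP.
if (file_exists($downloadPath) && !is_writable($downloadPath)) {
    echo "cannot overwrite $downloadPath";
}
$result = ftp_get($connection, $downloadPath, $serverPath, FTP_BINARY);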

can't include("absolute_path")

I can't figure out why this won't work.
$docRoot = getenv("DOCUMENT_ROOT");
include_once($docRoot."/conn/connection.php");
include_once($docRoot."/auth/user.php");
It works locally through WAMP, but it won't work on my live server. I tried this:
if (!include_once($docRoot."/auth/user.php")) {
    session_start();
    $debug = array();
    $debug["docRoot"] = $docRoot;
    $debug["inc_path"] = $docRoot."/auth/user.php";
    $debug["file_exists"] = file_exists($debug["inc_path"]);
    $_SESSION['DEBUG'] = $debug;
    // exit
    header("Location: debug.php");
    exit();
}
The debug page just echoes that array, and it shows the correct absolute paths and indicates that the file does in fact exist. So why didn't the include_once() work? The server (a DV account on a MediaTemple server) has not been configured at all, so I wonder if there is an Apache or PHP setting that is messing with me.
Ultimately, what I want here is a way to refer to a file in such a way that if I move the file, or include it in another file, nothing will break. Any ideas?
In your debugging you might try is_readable($docRoot."/conn/connection.php"). The file_exists function will return true even if the file does not have readable permissions.
If you get an error code we might be able to provide more info as to what is going wrong.
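A quick sketch of the difference, using the path from your question:
$path = $docRoot . "/auth/user.php";
var_dump(file_exists($path)); // true even when the file isn't readable
var_dump(is_readable($path)); // false if permissions block reading it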
Turn on error reporting, dummy. It turns out one of the includes in another file was breaking this page, and I wasn't able to trace that until I turned on error reporting.
Incidentally, the problematic include was missing a leading slash in the file path ( include("dir/file.ext"); ), which works on my local WAMP setup but breaks on the Linux server.
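As for the original goal - referring to a file so that nothing breaks when it's moved or included elsewhere - one common pattern (not from this thread, just a suggestion) is to anchor paths to the including file itself with __DIR__:
// Resolve includes relative to this file, not the cwd or DOCUMENT_ROOT,
// so the paths keep working when the project moves or the file is included elsewhere.
include_once __DIR__ . '/conn/connection.php';
include_once __DIR__ . '/auth/user.php';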
