Every time I try to modify a file, I get this error, and I don't know what it means:
A PHP Error was encountered
Severity: Warning
Message: file_put_contents() [function.file-put-contents]: Only 0 of 19463 bytes written, possibly out of free disk space
Filename: Template/String.php
Line Number: 369
I tried looking for solutions, but so far none of them made sense, at least to me.
Any thoughts? A little help here please. Thank you very much.
This is an old question, but it comes up when Googling for the error message, so here is another possible cause.
The ext2/3/4 filesystems can reserve disk space for root. Often it is 5% of the drive. df shows the drive is not entirely used. Root can write to the drive. Non-root users will only be able to create files but not write any data to them. See dumpe2fs and tune2fs for more details.
This probably means that PHP is able to get a valid file descriptor, but is hitting a wall (such as a quota, sticky bit, etc) when actually trying to write the data.
It is also conceivable that you are writing (perhaps unwittingly) to a network file system that is having a problem with its peer.
More information regarding your platform would help (I've seen SELinux do strange things when improperly configured), but I think you get the gist of what to check.
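If it helps to narrow things down, a quick check along these lines (a minimal sketch; the path is a placeholder) separates "cannot open" from "cannot write" and shows how much space PHP itself sees on that filesystem:
<?php
$path = '/path/to/target/file.txt'; // placeholder: point at your real target
$data = str_repeat('x', 1024);

// How much space does PHP (running as the web user) see there?
echo disk_free_space(dirname($path)) . " bytes reported free\n";

$fh = fopen($path, 'w');
if ($fh === false) {
    die("open failed: a path or permission problem\n");
}
$written = fwrite($fh, $data);
fclose($fh);

if ($written !== strlen($data)) {
    // Opened fine but the write fell short: quotas, reserved blocks,
    // a full disk, or a flaky network filesystem are the usual suspects.
    echo "only $written of " . strlen($data) . " bytes written\n";
}
?>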
It may just be a permissions issue on the location where you are trying to save the content, e.g. it is read-only, or, just like the error itself says, there is no disk space left.
You may need to increase the quota for that user on the server. You can confirm this by deleting a file and seeing if it will let you re-upload that file, but nothing further.
If you have Webmin, go to System > Disk Quota. (For a detailed explanation, see this wiki.)
If you do not have Webmin or a similar interface, you will need to look up how to manually edit the user quota settings depending on which Linux distro you are using.
If you do not have access to the server you will need to contact the person who does and ask what your disk quota is and if it can be increased.
For me, the same set of errors appeared on my login page. While exploring, I found that the storage/logs/laravel.log file had grown to 24 GB, and clearing it solved the issue. To find out the size of a directory or file, use the Linux commands
du -sh *
du -sh <filename>
To clear the log file, truncate is the best option, because opening a file that size with vim and deleting its contents would be painful:
truncate -s 0 <filename>
I went to the root directory
cd /
and searched for the folder that had the biggest space usage with
du -sh *
from there I was able to trace the file giving the headache, pluto.log in /var/log. this could be any file.
Maybe you have a previous lock on your target file. Try:
$fp = fopen('yourfile.etc', 'r+');
if ($fp === false) {
    die('fopen failed');
}
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    // The file is locked by another script or user; that's likely the problem.
} else {
    // No lock was held on it, so it must be something else.
    flock($fp, LOCK_UN);
}
fclose($fp);
We are using ImageMagick for resizing/thumbnailing JPGs to a specific size. The source file is loaded via HTTP. It's working as expected, but from time to time some images are partially broken.
We already tried different software like GraphicsMagick or VIPS, but the problem is still there. It also only seems to happen when there are parallel processes, so the whole script is locked via semaphores, but that does not help either.
We found multiple similar problems, but all without any solution: https://legacy.imagemagick.org/discourse-server/viewtopic.php?t=22506
We also wonder why the behaviour is the same in all of these tools. We also tried different PHP versions. It seems to happen more often on source images with huge dimensions/filesizes.
Any idea what to do here?
I would guess the source image has been truncated for some reason. Perhaps something timed out during the download?
libvips is normally permissive, meaning that it'll try to give you something, even if the input is damaged. You can make it strict with the fail flag (i.e. fail on the first warning).
For example:
$ head -c 10000 shark.jpg > truncated.jpg
$ vipsthumbnail truncated.jpg
(vipsthumbnail:9391): VIPS-WARNING **: 11:24:50.439: read gave 2 warnings
(vipsthumbnail:9391): VIPS-WARNING **: 11:24:50.439: VipsJpeg: Premature end of JPEG file
$ echo $?
0
I made a truncated jpg file, then ran thumbnail. It gave a warning, but did not fail. If I run:
$ vipsthumbnail truncated.jpg[fail]
VipsJpeg: Premature end of input file
$ echo $?
1
Or in php:
$thumb = Vips\Image::thumbnail('truncated.jpg[fail]', 128);
Now there's no output, and there's an error code. I'm sure there's an imagemagick equivalent, though I don't know it.
There's a downside: thumbnailing will now fail if there's anything wrong with the image, and it might be something you don't care about, like invalid resolution.
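In PHP you can then handle the failure explicitly; here is a minimal sketch, assuming the php-vips binding, where load errors surface as exceptions:
use Jcupitt\Vips;

try {
    // [fail] makes the loader error out on the first warning
    $thumb = Vips\Image::thumbnail('truncated.jpg[fail]', 128);
    $thumb->writeToFile('thumb.jpg');
} catch (Vips\Exception $e) {
    // A truncated or corrupt source lands here: log it and re-fetch the image
    error_log('thumbnail failed: ' . $e->getMessage());
}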
After some additional investigation, we discovered that the source image was indeed already damaged. It was downloaded via a VPN connection which was not stable enough; sometimes the download stopped, so the JPG was only half written.
I have this stupid little test PHP script running on an Ubuntu system inside a virtual server instance (Oracle VirtualBox) on my PC:
<?php
error_reporting(E_ALL);
ini_set('display_errors', 1); // show errors
echo "<p>test</p>";
$filename = "andy.txt";
$fh = fopen($filename, 'w') or die('fopen failed');
fwrite($fh, "qwerty") or die('fwrite failed');
fclose($fh);
?>
Despite all appropriate directory and file permissions being set, it is failing on the fwrite. The fopen works and creates the file, so write access is clearly enabled, but the fwrite dies, and the 'fwrite failed' message is output (no other error output is displayed).
The same script works perfectly well when I upload to my real server, so I am completely stumped as to why it won't write to the file; maybe it's something about my virtual server that is causing the problem.
Seems like such a pathetic thing, but it's driving me nuts! Considerable time Googling has failed to yield an answer, so can anybody here please provide some insight? Many thanks.
Not sure why the fwrite() call would die, as it returns the number of bytes written.
That said, have you tried file_put_contents() instead? It's a simpler way of writing to a file and has been the recommended way since early PHP 5.
With it, you only need to do the following:
$filename = "andy.txt";
$contents = "qwerty";
if (!file_put_contents($filename, $contents)) {
    // Write failed!
}
No need to bother with opening and closing the file pointer, as that's automatically handled by the function. :)
Solved! It was a disk space error on my virtual server. At the back of my mind, I knew I had seen this mentioned elsewhere as an issue with write fails, but in this case I failed to make the connection.
@ChristianF Thanks! Switching to file_put_contents() was very helpful, since it also failed but gave me a meaningful error message:
'file_put_contents(): Only 0 of 6 bytes written, possibly out of free disk space'
Aha! Having recalled that growing log files can be a problem, I deleted everything inside /var/log (after saving it) and presto, it now works! So, thank you for that tip; I will switch to using file_put_contents from now on. BTW: error.log by itself was 2 GB, while everything else in /var/log totalled only about 15 MB, but deleting error.log alone did not work, so I deleted everything.
@Clayton Smith Thank you, but removing the "or die('fwrite failed')" part did not produce any further error info, which is what is so frustrating. It's a shame that those error-reporting directives at the start of the script didn't seem to do much.
@NaeiKinDus Thank you, but I don't think I have SELinux running (I'm afraid I don't know anything about it). Although I have an /etc/selinux directory, there's no config file in it, just what appears to be a skeleton semanage.conf, whatever that is. Commands such as sestatus are not recognised.
I'm running php5 FPM with APC as an opcode and application cache. As is usual, I am logging php errors into a file.
Since that is becoming quite large, I tried to configure logrotate. It works, but after rotation, php continues to log to the existing logfile, even when it is renamed. This results in scripts.log being a 0B file, and scripts.log.1 continuing to grow further.
I think (haven't tried) that running php5-fpm reload in postrotate could resolve this, but that would clear my APC cache each time.
Does anybody know how to get this working properly?
I found that the "copytruncate" option to logrotate ensures that the inode doesn't change. Basically what I was looking for.
This is probably what you're looking for. Taken from: How does logrotate work? - Linuxquestions.org.
As written in my comment, you need to prevent PHP from writing into the same (renamed) file. Copying a file normally creates a new one, and truncating is likewise part of the option's name, so I would assume the copytruncate option is an easy solution (from the manpage):
copytruncate
Truncate the original log file in place after creating a copy,
instead of moving the old log file and optionally creating a
new one. It can be used when some program cannot be told to
close its logfile and thus might continue writing (appending)
to the previous log file forever. Note that there is a very
small time slice between copying the file and truncating it, so
some logging data might be lost. When this option is used, the
create option will have no effect, as the old log file stays in
place.
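For the log in this question, that could look something like this (a sketch; adjust the path to wherever your scripts.log actually lives):
/path/to/scripts.log {
    weekly
    rotate 12
    missingok
    notifempty
    compress
    delaycompress
    copytruncate
}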
See Also:
Why we should use create and copytruncate together?
Another solution I found on one of my servers is to tell PHP to reopen its logs. I think nginx has this feature too, which makes me think it must be quite commonplace. Here is my configuration:
/var/log/php5-fpm.log {
    rotate 12
    weekly
    missingok
    notifempty
    compress
    delaycompress
    postrotate
        invoke-rc.d php5-fpm reopen-logs > /dev/null
    endscript
}
Is there a way to view the PHP error logs or Apache error logs in a web browser?
I find it inconvenient to ssh into multiple servers and run a "tail" command to follow the error logs. Is there some tool (preferably open source) that shows me the error logs online (streaming or non-streaming)?
Thanks
A simple PHP script to read the log and print it:
<?php
// tail returns the last 10 lines of the log by default
exec('tail /var/log/apache2/error.log', $error_logs);
foreach ($error_logs as $error_log) {
    // Escape each line so markup in log messages can't break the page
    echo "<br />" . htmlspecialchars($error_log);
}
?>
You can embed the output in your HTML as your requirements dictate. The best part is that tail only loads the latest lines, so it won't put much load on your server.
You can pass options to tail to change the output, e.g.:
tail -n 100 myfile.txt // gives the last 100 lines
See What commercial and open source competitors are there to Splunk? and I would recommend https://github.com/tobi/clarity
Simple and easy tool.
Since everyone is suggesting clarity, I would also like to mention tailon. I wrote tailon as a more modern and secure alternative to clarity. It's still in its early stages of development, but the functionality you need is there. You may also use wtee, if you're only interested in following a single log file.
You could make a script that reads the error logs from Apache:
$apache_errorlog = file_get_contents('/var/log/apache2/error.log');
If that does not work, try getting it with the PHP functions exec or shell_exec and the command 'cat /var/log/apache2/error.log'.
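For example (a sketch, assuming the web user is allowed to read the log file):
// Try reading the log directly first
$apache_errorlog = file_get_contents('/var/log/apache2/error.log');

if ($apache_errorlog === false) {
    // Fall back to a shell command if the direct read fails
    $apache_errorlog = shell_exec('cat /var/log/apache2/error.log');
}

echo '<pre>' . htmlspecialchars($apache_errorlog) . '</pre>';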
EDIT: If you have multiple servers (I guess with web servers on them), you can put such a script on each machine; when you make a request to that script (over a secured connection), you get the logs from that server.
I recommend LogHappens (https://loghappens.com), which allows you to view the error log in the web browser.
LogHappens supports many kinds of web server log formats; it comes with parsers for Apache and CakePHP, and you can write your own.
You can find it here: https://github.com/qijianjun/logHappens
It's open source and free. I forked it and did some work to make it work better in a dev environment or a public one, that is:
Support a token for security; one can't access the site without the token in config.php
Support IP whitelists for security and privacy
Support configuring the interval between AJAX requests
Support loading static files from local disk (for a local dev environment)
I've found this solution https://code.google.com/p/php-tail/
It's working perfectly. I only needed to change the filesize handling, because I was getting an error at first.
56 if($maxLength > $this->maxSizeToLoad) {
57 $maxLength = $this->maxSizeToLoad;
58 // return json_encode(array("size" => $fsize, "data" => array("ERROR: PHPTail attempted to load more (".round(($maxLength / 1048576), 2)."MB) then the maximum size (".round(($this->maxSizeToLoad / 1048576), 2) ."MB) of bytes into memory. You should lower the defaultUpdateTime to prevent this from happening. ")));
59 }
And I've added a default size, but it's not needed:
125 lastSize = <?php echo filesize($this->log) ?: 1000; ?>;
I know this question is a bit old, but (along with the lack of good choices) it gave me the idea to create this tiny (open source) web app. https://github.com/ToX82/logHappens. It can be used online, but I'd use an .htpasswd as a basic login system. I hope it helps.
I am importing public domain books from archive.org to my site, and have a php import script set up to do it. However, when I attempt to import the images and run
exec( "unzip $images_file_arg -d $book_dir_arg", $output, $status );
it will occasionally return me a $status of 1. Is this ok? I have not had any problems with the imported images so far. I looked up the man page for unzip, but it didn't tell me much. Could this possibly cause problems, and do I have to check each picture individually, or am I safe?
EDIT: Oops. I should have checked the manpage straight away. They tell us what the error codes mean:
The exit status (or error level) approximates the exit codes defined by PKWARE and takes on the following values, except under VMS:
0: normal; no errors or warnings detected.
1: one or more warning errors were encountered, but processing completed successfully anyway. This includes zipfiles where one or more files was skipped due to an unsupported compression method or encryption with an unknown password.
2: a generic error in the zipfile format was detected. Processing may have completed successfully anyway; some broken zipfiles created by other archivers have simple work-arounds.
3: a severe error in the zipfile format was detected. Processing probably failed immediately.
(many more)
So, apparently some archives may have had files in them skipped, but unzip didn't break down; it just did all it could.
It really should work, but there are complications with certain filenames. Are any of them potentially tricky, with unusual characters? escapeshellarg is certainly something to look into. If you get a bad return status, you should be concerned, because it means unzip exited with some error or other. At the very least, I would suggest you log the filenames in those cases (error_log($filename)) and see if there is anything that might cause problems. unzip itself runs totally independently of PHP and will do everything fine if it's getting passed the right arguments by the shell, and the files really are downloaded and ready to unzip.
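Concretely, quoting the arguments before they reach the shell might look like this (a sketch, assuming $images_file and $book_dir hold the raw paths from your import script):
// Quote the raw paths so spaces or odd characters can't break the command
$images_file_arg = escapeshellarg($images_file);
$book_dir_arg = escapeshellarg($book_dir);

exec("unzip $images_file_arg -d $book_dir_arg", $output, $status);
if ($status !== 0) {
    // Status 1 means warnings only; 2 and up means real zipfile errors
    error_log("unzip exited with status $status for $images_file");
}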
Maybe you are better served by PHP's integrated ZipArchive class.
http://www.php.net/manual/de/class.ziparchive.php
In particular, see http://www.php.net/manual/de/function.ziparchive-extractto.php: it returns TRUE if extracting was successful, and FALSE otherwise.
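A minimal sketch of that approach, reusing the variable names from the question (minus the shell quoting, which is no longer needed):
$zip = new ZipArchive();
$result = $zip->open($images_file); // TRUE on success, an error code otherwise
if ($result === true) {
    if (!$zip->extractTo($book_dir)) {
        error_log("extraction failed for $images_file");
    }
    $zip->close();
} else {
    error_log("could not open $images_file (ZipArchive error code $result)");
}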