I am importing public domain books from archive.org to my site, and I have a PHP import script set up to do it. However, when I attempt to import the images and run
exec( "unzip $images_file_arg -d $book_dir_arg", $output, $status );
it will occasionally return a $status of 1. Is this OK? I have not had any problems with the imported images so far. I looked up the man page for unzip, but it didn't tell me much. Could this possibly cause problems, and do I have to check each picture individually, or am I safe?
EDIT: Oops. I should have checked the manpage straight away. They tell us what the error codes mean:
The exit status (or error level) approximates the exit codes defined by PKWARE and takes on the following values, except under VMS:
0: normal; no errors or warnings detected.
1: one or more warning errors were encountered, but processing completed successfully anyway. This includes zipfiles where one or more files was skipped due to unsupported compression method or encryption with an unknown password.
2: a generic error in the zipfile format was detected. Processing may have completed successfully anyway; some broken zipfiles created by other archivers have simple work-arounds.
3: a severe error in the zipfile format was detected. Processing probably failed immediately.
(many more)
So apparently some archives had files in them that were skipped, but unzip didn't break down; it just did all it could.
It really should work, but there are complications with certain filenames. Do any of them contain unusual or potentially tricky characters? escapeshellarg is certainly worth looking into. If you get a bad return status, you should be concerned, because it means unzip exited with some error or other. At the very least, I would suggest you log the filenames in those cases (error_log($filename)) and see if there is anything that might cause problems. unzip itself runs totally independently of PHP and will do everything fine if it's getting passed the right arguments by the shell and the files really are downloaded and ready to unzip.
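As a rough sketch (assuming $images_file and $book_dir hold the raw, unquoted paths; they are placeholders, not taken from your script), quoting the arguments and checking the exit status could look like this:

<?php
// Quote both paths so unusual characters in the filenames cannot break the command.
$cmd = sprintf(
    'unzip %s -d %s',
    escapeshellarg($images_file),
    escapeshellarg($book_dir)
);

exec($cmd, $output, $status);

if ($status !== 0) {
    // 1 means "warnings only"; anything higher is a real error (see the exit codes above).
    error_log("unzip exited with status $status for $images_file");
    error_log(implode("\n", $output));
}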
Maybe you are better served by PHP's built-in ZipArchive class.
http://www.php.net/manual/de/class.ziparchive.php
In particular, http://www.php.net/manual/de/function.ziparchive-extractto.php returns TRUE if extraction was successful, otherwise FALSE.
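A minimal sketch (the variable names are placeholders, not taken from your script) could look like this:

<?php
$zip = new ZipArchive();

if ($zip->open($images_file) === true) {
    if (!$zip->extractTo($book_dir)) {
        error_log("Extraction of $images_file to $book_dir failed");
    }
    $zip->close();
} else {
    error_log("Could not open $images_file as a zip archive");
}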
We are using ImageMagick for resizing/thumbnailing JPGs to a specific size. The source file is loaded via HTTP. It's working as expected, but from time to time some images are partially broken.
We have already tried different software like GraphicsMagick or VIPS, but the problem is still there. It also only seems to happen when there are parallel processes, so the whole script is locked via semaphores, but that does not help either.
We found multiple similar reports, but none with a solution: https://legacy.imagemagick.org/discourse-server/viewtopic.php?t=22506
We also wonder why the behaviour is the same in all of these tools. We also tried different PHP versions. It seems to happen more often on source images with large dimensions/file sizes.
Any idea what to do here?
I would guess the source image has been truncated for some reason. Perhaps something timed out during the download?
libvips is normally permissive, meaning that it will try to give you something even if the input is damaged. You can make it strict with the fail flag (i.e. fail on the first warning).
For example:
$ head -c 10000 shark.jpg > truncated.jpg
$ vipsthumbnail truncated.jpg
(vipsthumbnail:9391): VIPS-WARNING **: 11:24:50.439: read gave 2 warnings
(vipsthumbnail:9391): VIPS-WARNING **: 11:24:50.439: VipsJpeg: Premature end of JPEG file
$ echo $?
0
I made a truncated JPG file, then ran vipsthumbnail on it. It gave a warning, but did not fail. If I run:
$ vipsthumbnail truncated.jpg[fail]
VipsJpeg: Premature end of input file
$ echo $?
1
Or in PHP:
$thumb = Vips\Image::thumbnail('truncated.jpg[fail]', 128);
Now there's no output, and you get an error instead. I'm sure there's an ImageMagick equivalent, though I don't know it.
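If you want to react to that in the PHP script, the failure surfaces as an exception (a sketch assuming the jcupitt/php-vips binding; the file names are placeholders):

<?php
use Jcupitt\Vips;

try {
    // [fail] aborts on the first load warning instead of returning a partial image.
    $thumb = Vips\Image::thumbnail('truncated.jpg[fail]', 128);
    $thumb->writeToFile('thumb.jpg');
} catch (Vips\Exception $e) {
    // A truncated or otherwise damaged source ends up here.
    error_log('thumbnail failed: ' . $e->getMessage());
}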
There's a downside: thumbnailing will now fail if there's anything wrong with the image, and it might be something you don't care about, like invalid resolution.
After some additional investigation, we discovered that the source image was indeed already damaged. It was downloaded via a VPN connection which was not stable enough. Sometimes the download stopped, so the JPG was only half written.
I'm using wget from PHP to download mp4 files from another server:
exec("wget -P files/ $http_url");
but I haven't found any option to check whether the file was downloaded correctly or not.
I tried to get the file's duration using getID3(), but it always returns a good value, even if the file was not downloaded correctly:
// Check file duration
$file = $getID3->analyze($filepath);
echo $file['playtime_string']; // 15:00 always good value
Is there any function to check that?
Thanks
First off, I would try HTTPS instead. If the server(s) you're connecting to happen to support it, you get around this entire issue, because lost bytes are usually caused by flaky hardware or bad MTU settings on a router on their network. HTTP connections gracefully degrade to giving you as much of the file as they could manage, whereas HTTPS connections just plain fail when they lose bytes, because you can't decrypt non-intact packets.
Lazy IT people tend to get prodded to fix complete failures of HTTPS, but they get less pressure to diagnose and fix corner cases like missing bytes that only occur in larger transfers over HTTP.
If HTTPS is not available, keep reading.
An HTTP server response may include a Content-Length header indicating the number of bytes in a particular transaction.
If the header is there, you should be able to see it by running wget directly, adding the -v flag.
If it's not there, I believe wget will report Length: unspecified followed by the content-type header's value.
If it tells you (and assuming the byte count is accurate) then you can just compare the byte count of the file you got and the one in the transaction.
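For example, a rough sketch in PHP (assuming the server actually sends an accurate Content-Length and that the local filename matches the last path segment of the URL):

<?php
// Fetch the response headers for the URL.
$headers = get_headers($http_url, true);
$expected = isset($headers['Content-Length']) ? (int) $headers['Content-Length'] : null;

// Download as before.
exec('wget -P files/ ' . escapeshellarg($http_url));

$local = 'files/' . basename(parse_url($http_url, PHP_URL_PATH));

if ($expected !== null && filesize($local) !== $expected) {
    error_log("Download of $http_url looks truncated: " . filesize($local) . " of $expected bytes");
}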
If the server(s) you're contacting don't provide this header, you're left with less exact methods, like finding some player that will play the file to the end and report how long it took, and then comparing that to the length listed in the ID3 tag (which is at the very beginning of the file). You're not going to get an exact match though, because the time in the tag (if it's there) is only accurate to the second, meaning half a second could be gone from the end of the file and you wouldn't know.
I'm running PHP5 FPM with APC as an opcode and application cache. As usual, I am logging PHP errors into a file.
Since that file is becoming quite large, I tried to configure logrotate. It works, but after rotation PHP continues to log to the existing log file, even when it has been renamed. This results in scripts.log being a 0 B file, and scripts.log.1 continuing to grow further.
I think (haven't tried) that running php5-fpm reload in postrotate could resolve this, but that would clear my APC cache each time.
Does anybody know how to get this working properly?
I found that "copytruncate" option to logrotate ensures that the inode doesn't change. Basically what is [sic!] was looking for.
This is probably what you're looking for. Taken from: How does logrotate work? - Linuxquestions.org.
As written in my comment, you need to prevent PHP from writing into the same (renamed) file. Copying a file normally creates a new one, and truncating is also part of that option's name, so I would assume the copytruncate option is an easy solution (from the manpage):
copytruncate
    Truncate the original log file in place after creating a copy, instead of moving the old log file and optionally creating a new one. It can be used when some program cannot be told to close its logfile and thus might continue writing (appending) to the previous log file forever. Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place.
See Also:
Why we should use create and copytruncate together?
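For reference, a sketch of such an entry (the log path and rotation schedule are assumptions; adjust them to your setup):

/var/log/php/scripts.log {
    weekly
    rotate 12
    missingok
    notifempty
    compress
    delaycompress
    copytruncate
}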
Another solution I found on a server of mine is to tell PHP to reopen its logs. I think nginx has this feature too, which makes me think it must be quite commonplace. Here is my configuration:
/var/log/php5-fpm.log {
    rotate 12
    weekly
    missingok
    notifempty
    compress
    delaycompress
    postrotate
        invoke-rc.d php5-fpm reopen-logs > /dev/null
    endscript
}
I'm upgrading a huge codebase of thousands of web pages from an earlier version of PHP to PHP 5.3. We've dropped the use of short tags (<%, <?=, etc.), have them disabled in php.ini, and have made a reasonable effort to find any remaining ones in the code and replace them.
However, when someone creates something with a short tag, or some legacy code still has one we missed, Apache returns a blank document with a 200 status. The problem is, PHP doesn't throw an error (obviously, since it's not parsing them) and Apache doesn't seem to log it as an error either. This makes these pages hard to detect without visually inspecting them all (a simple crawler is happy with the 200 the URL returns).
Does anyone know of any way to get Apache or PHP to throw an error when it hits a short tag as a site is being crawled?
I can't really find a way to have PHP or Apache issue some kind of warning for documents with short tags, but you could set up a cron job that searches all files under your server's web folder and, for example, sends an email with the results, thus at the very least pointing out the files with short tags in them:
Simple Example: cron job
<?php
// run grep for '<?' occurrences that are not immediately followed by 'p' (i.e. not '<?php')
$found = shell_exec('grep -rn "<?[^p]" *');
if ($found != '') {
    // email or take any other action...
}
?>
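If you want the cron job to actually notify you, a slightly fuller sketch (the recipient address is a placeholder; note that '<?xml' declarations will also match this pattern, so expect some noise):

<?php
// Find '<?' occurrences that are not '<?php'.
$found = shell_exec('grep -rn "<?[^p]" *');

if ($found !== null && trim($found) !== '') {
    mail('webmaster@example.com', 'Possible short tags found', $found);
}
?>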
Every time I try to modify a file, I get this error and I don't know what it means:
A PHP Error was encountered
Severity: Warning
Message: file_put_contents() [function.file-put-contents]: Only 0 of 19463 bytes written, possibly out of free disk space
Filename: Template/String.php
Line Number: 369
I tried looking for solutions and so far none of them made sense, well, in my opinion.
Any thoughts? A little help here please. Thank you very much.
This is an old question but it comes up when Googling for the error message. Here is another possible cause for this error message.
The ext2/3/4 filesystems can reserve disk space for root; often it is 5% of the drive. In that situation df shows the drive as not entirely full, and root can still write to it, but non-root users will only be able to create files, not write any data to them. See dumpe2fs and tune2fs for more details.
This probably means that PHP is able to get a valid file descriptor, but is hitting a wall (such as a quota, sticky bit, etc) when actually trying to write the data.
It is also conceivable that you are writing (perhaps unwittingly) to a network file system that is having a problem with its peer.
More information regarding your platform would help (I've seen SELinux do strange things when improperly configured), but I think you get the gist of what to check.
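As a quick way to narrow it down, you could log what file_put_contents actually reports together with the free space at the target, roughly like this (the path and data below are placeholders):

<?php
// Placeholders: adjust the path and the data to match your application.
$path = '/var/www/app/Template/cache/example.php';
$data = str_repeat('x', 19463);

$written = file_put_contents($path, $data);

if ($written === false || $written < strlen($data)) {
    error_log(sprintf(
        'Short write to %s (%s of %d bytes written); free space reported: %s',
        $path,
        var_export($written, true),
        strlen($data),
        var_export(disk_free_space(dirname($path)), true)
    ));
}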
It's just a permissions issue with the location where you wanted to save the content, e.g. it is read-only, or, just like the error says, there is no free disk space left.
You may need to increase the quota for that user on the server. You can confirm this by deleting a file and seeing if it will let you re-upload that file, but nothing further.
If you have Webmin, go to System > Disk Quota. (For a detailed explanation, see this wiki.)
If you do not have Webmin or a similar interface, you will need to look up how to manually edit the user quota settings depending on which Linux distro you are using.
If you do not have access to the server you will need to contact the person who does and ask what your disk quota is and if it can be increased.
For me, I was also getting the same set of errors, on my login page. While exploring, I found that the storage/logs/laravel.log file had grown to a size of 24 GB, and clearing it solved the issue. To find out the size of a directory or file, use the Linux command du -sh * or du -sh <filename>. To clear the log file, the truncate command is the best option, because opening it with vim and deleting the contents can be difficult given the huge size of the file. To truncate, use the command truncate -s 0 <filename>.
I went to the root directory
cd /
and searched for the folder that had the biggest space usage with
du -sh *
From there I was able to trace the file causing the headache, pluto.log in /var/log. This could be any file.
Maybe you have a pre-existing lock on your target file; try this:
$fp = fopen('yourfile.etc', 'r+');
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    // this file is actually being used by another script/user; that's the problem.
} else {
    // OK, there wasn't a lock on it, so it must be something else
    fclose($fp);
}