I've been building a class to create ZIP files in PHP, as an alternative to ZipArchive for when it isn't available on the server - something to use with those free hosting servers.
It is already sort of working: it builds the ZIP structures in PHP and uses gzdeflate() to generate the compressed data.
The problem is that gzdeflate() requires me to load the whole file into memory, and I want the class to work within a 32MB memory limit. Currently it stores files bigger than 16MB with no compression at all.
I imagine I should compress the data in blocks, 16MB by 16MB, but I don't know how to concatenate the results of two gzdeflate() calls.
I've been testing it and it seems like it requires some math on the last 16 bits, something like buff->last16bits = (buff->last16bits & newblock->first16bits) | 0xfffe. It works, but not for all samples...
Question: How to concatenate two DEFLATEd streams without decompressing it?
PHP stream filters are used to perform such tasks. stream_filter_append() can be used while reading from or writing to streams. For example:
$fp = fopen($path, 'r');
stream_filter_append($fp, 'zlib.deflate', STREAM_FILTER_READ);
Now fread() will return deflated data.
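To stay under the memory limit, the filtered stream can be read in blocks and each block appended to the output as it arrives. A minimal sketch, assuming $source is the input path and $zipEntry an already-opened handle for the ZIP entry data (both placeholders); as far as I know the zlib.deflate filter emits a raw RFC 1951 stream just like gzdeflate(), but verify against your samples:
$in = fopen($source, 'rb');
stream_filter_append($in, 'zlib.deflate', STREAM_FILTER_READ, array('level' => 6));

$compressedSize = 0;
while (!feof($in)) {
    // Each fread() returns an already-deflated block; only one block
    // is held in memory at a time.
    $block = fread($in, 16 * 1024 * 1024);
    if ($block === false) {
        break;
    }
    if ($block !== '') {
        $compressedSize += strlen($block);
        fwrite($zipEntry, $block);
    }
}
fclose($in);
Note that the ZIP entry's CRC-32 still has to be computed over the uncompressed data, for example with a hash_init('crc32b') context fed from a separate, unfiltered read of the file.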
This may or may not help. It looks like gzwrite() will allow you to write gzip files without having them completely loaded in memory. This example from the PHP manual page shows how you can compress a file using gzwrite() and fopen():
http://us.php.net/manual/en/function.gzwrite.php
function gzcompressfile($source, $level = false) {
    // $dest = $source.'.gz';
    $dest = 'php://stdout'; // This will stream the compressed data directly to the screen.
    $mode = 'wb' . $level;
    $error = false;
    if ($fp_out = gzopen($dest, $mode)) {
        if ($fp_in = fopen($source, 'rb')) {
            while (!feof($fp_in)) {
                gzwrite($fp_out, fread($fp_in, 1024 * 512));
            }
            fclose($fp_in);
        } else {
            $error = true;
        }
        gzclose($fp_out);
    } else {
        $error = true;
    }
    if ($error) {
        return false;
    } else {
        return $dest;
    }
}
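A quick usage sketch (the path is just an example); since $dest is php://stdout in this version, calling the function streams the gzipped data straight to the output:
// Hypothetical call: gzip-compresses the file and echoes the result.
if (gzcompressfile('/path/to/bigfile.avi') === false) {
    echo 'compression failed';
}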
Let's say I'd like to download some information from a file on the internet within PHP, but I do not need the entire file. Therefore, loading the full file through
$my_file = file_get_contents("https://www.webpage.com/".$filename);
would use up more memory and resources than necessary.
Is there a way to download only e.g. the first 5kb of a file as plain text with PHP?
EDIT:
In the comments it was suggested to use e.g. the maxlen argument of file_get_contents or similar. But I noticed that the execution time of the call does not vary appreciably for different maxlen values, which suggests that the function loads the full file and then just returns a substring to the variable.
Is there a way to make PHP download just the required amount of bytes and no more, to speed things up?
<?php
$fp = fopen("https://www.webpage.com/".$filename, "r");
$content = fread($fp,5*1024);
fclose($fp);
?>
Note: Make sure allow_url_fopen is enabled.
PHP Doc: fopen, fread
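An equivalent sketch that checks the fopen() call and uses stream_get_contents() with its maxlength parameter (same URL as in the question):
$fp = fopen("https://www.webpage.com/" . $filename, "r");
if ($fp === false) {
    die('could not open the remote file');
}
// Read at most 5 KB; closing the handle afterwards means the rest of
// the response body is never fetched.
$content = stream_get_contents($fp, 5 * 1024);
fclose($fp);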
Maybe I'm asking the impossible, but I wanted to clone a stream multiple times, a sort of multicast emulation. The idea is to write a 1300-byte buffer into a .sock file every 0.002 seconds (instead of IP:port, to avoid overhead) and then to read the same .sock file from other scripts multiple times.
Doing it through a regular file is not doable. It works only within the same script that generates the buffer file and then echoes it; the other scripts will misread it badly.
This works perfectly with the script that generates the chunks:
$handle = @fopen($url, 'rb');
$buffer = 1300;
while (1) {
    $chunck = fread($handle, $buffer);
    $handle2 = fopen('/var/tmp/stream_chunck.tmp', 'w');
    fwrite($handle2, $chunck);
    fclose($handle2);
    readfile('/var/tmp/stream_chunck.tmp');
}
BUT the output of another script that reads the chunks:
while (1) {
    readfile('/var/tmp/stream_chunck.tmp');
}
is messy. I don't know how to synchronize the reading of the chunks, and I thought that sockets could work a miracle.
It works only within the same script that generates the buffer file and then echoes it. The other scripts will misread it badly
Using a single file without any sort of flow control shouldn't be a problem - tail -F does just that. The disadvantage is that the data will just accumulate indefinitely on the filesystem as long as a single client has an open file handle (even if you truncate the file).
But if you're writing chunks, then write each chunk to a different file (using an atomic write mechanism); then everyone can read it by polling for available files (a sketch of the writer side follows the reader loop below):
do {
    while (!file_exists("$dir/$prefix.$current_chunk")) {
        clearstatcache();
        usleep(1000);
    }
    process(file_get_contents("$dir/$prefix.$current_chunk"));
    $current_chunk++;
} while (!$finished);
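For completeness, a hedged sketch of the writer side of the same scheme: write each chunk to a temporary file and rename() it into place, since a rename within one filesystem is atomic and readers never see a half-written chunk ($dir, $prefix and get_next_chunk() are placeholders):
$current_chunk = 0;
while (!$finished) {
    $data = get_next_chunk();                    // hypothetical source of 1300-byte chunks
    $tmp = "$dir/$prefix.tmp";
    file_put_contents($tmp, $data);
    rename($tmp, "$dir/$prefix.$current_chunk"); // atomic publish on the same filesystem
    $current_chunk++;
}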
Equally, you could do this with a database, which should have slightly lower overhead for the polling and simplifies the garbage collection of old chunks.
But this is all about how to make your solution workable - it doesn't really address the problem you are trying to solve. If we knew what you were trying to achieve then we might be able to advise on a more appropriate solution - e.g. if it's a chat application, video broadcast, something else....
I suspect a more appropriate solution would be to use a multi-processing, single-memory-model server - and when we're talking about PHP (which doesn't really do threading very well) that means an event-based/asynchronous server. There's a bit more involved than simply calling socket_select(), but there are some good scripts available which do most of the complicated stuff for you.
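Purely as an illustration of that last point, a minimal single-process sketch using stream_select() (the socket path and the source stream are made up; a production server would also need buffering, partial-write handling and error recovery):
@unlink('/var/tmp/broadcast.sock');                        // remove a stale socket file, if any
$server = stream_socket_server('unix:///var/tmp/broadcast.sock', $errno, $errstr);
$source = fopen('http://example.com/source-stream', 'rb'); // placeholder source stream
$clients = array();

while (true) {
    $read = array_merge(array($server, $source), $clients);
    $write = null;
    $except = null;
    if (stream_select($read, $write, $except, 0, 200000) < 1) {
        continue;
    }
    foreach ($read as $stream) {
        if ($stream === $server) {
            $clients[] = stream_socket_accept($server); // new subscriber connected
        } elseif ($stream === $source) {
            $chunk = fread($source, 1300);              // one chunk from the source
            foreach ($clients as $i => $client) {
                if (@fwrite($client, $chunk) === false) {
                    fclose($client);                    // drop clients that went away
                    unset($clients[$i]);
                }
            }
            $clients = array_values($clients);
        }
    }
}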
I have an interesting problem. I need to show a progress bar for an asynchronous PHP file download. I thought the best way to do it is: before the download starts, the script creates a txt file which includes the file name and the original file size.
Now we have an AJAX function which calls a PHP script that is intended to check the local file size. I have 2 main problems:
the files are bigger than 2GB, so the filesize() function is out of business
I tried to find a different way to determine the local file size, like this:
function getSize($filename) {
    $a = fopen($filename, 'r');
    fseek($a, 0, SEEK_END);
    $filesize = ftell($a);
    fclose($a);
    return $filesize;
}
Unfortunately the second way is giving me tons of errors; it seems I cannot open a file which is currently being downloaded.
Is there any way I can check the size of a file which is currently downloading, when the file size will be bigger than 2 GB?
Any help is greatly appreciated.
I found the solution by using the exec() function:
exec("ls -s -k /path/to/your/file/".$file_name,$out);
Just change your OS and PHP to 64-bit, and you can still use filesize().
From filesize() manual:
Return Values
Returns the size of the file in bytes, or FALSE (and generates an
error of level E_WARNING) in case of an error.
Note: Because PHP's integer type is signed and many platforms use
32bit integers, some filesystem functions may return unexpected
results for files which are larger than 2GB.
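On a 64-bit build, the progress check itself can be a short sketch like this; clearstatcache() matters because PHP caches stat results between calls (the path and the JSON shape are just examples):
$path = '/path/to/the/downloading/file';
clearstatcache(true, $path);                 // drop the cached stat entry for this file
$currentSize = file_exists($path) ? filesize($path) : 0;
echo json_encode(array('bytes' => $currentSize));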
Why is it that on some MP3 files, when I call mime_content_type($mp3_file_path), it returns application/octet-stream?
This is my code:
if (!empty($_FILES)) {
    $tempFile = $_FILES['Filedata']['tmp_name'];
    $image = getimagesize($tempFile);
    $mp3_mimes = array('audio/mpeg', 'audio/x-mpeg', 'audio/mp3', 'audio/x-mp3', 'audio/mpeg3', 'audio/x-mpeg3', 'audio/mpg', 'audio/x-mpg', 'audio/x-mpegaudio');
    if (in_array(mime_content_type($tempFile), $mp3_mimes)) {
        echo json_encode("mp3");
    } elseif ($image['mime'] == 'image/jpeg') {
        echo json_encode("jpg");
    } else {
        echo json_encode("error");
    }
}
EDIT:
I've found a nice class here:
http://www.zedwood.com/article/127/php-calculate-duration-of-mp3
MP3 files are a strange beast when it comes to identifying them. You can have an MP3 stored with a .wav container. There can be an ID3v2 header at the start of the file. You can embed an MP3 essentially within any file.
The only way to detect them reliably is to parse slowly through the file and try to find something that looks like an MP3 frame. A frame is the smallest unit of valid MP3 data possible, and represents (going off memory) 0.028 seconds of audio. The size of the frame varies based on bitrate and sampling rate, so you can't just grab the bitrate/sample rate of the first frame and assume all the other frames will be the same size - a VBR mp3 must be parsed in its entirety to calculate the total playing time.
All this boils down to the fact that identifying an MP3 using PHP's fileinfo and the like isn't reliable, as the actual MP3 data can start ANYWHERE in a file. fileinfo only looks at the first kilobyte or two of data, so if it says it's not an MP3, it might very well be lying because the data started slightly farther in.
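A rough sketch of that idea: scan the start of the file for an MPEG audio frame sync (eleven set bits). This is only a heuristic with plenty of false positives; a real detector would validate the rest of the frame header and look for several consecutive frames. The function name and the 64KB scan window are arbitrary:
function looksLikeMp3($path, $scanBytes = 65536) {
    // Read only the first $scanBytes of the file.
    $data = file_get_contents($path, false, null, 0, $scanBytes);
    if ($data === false) {
        return false;
    }
    $last = strlen($data) - 2;
    for ($i = 0; $i <= $last; $i++) {
        // 0xFF followed by a byte with its top three bits set = candidate frame sync.
        if (ord($data[$i]) === 0xFF && (ord($data[$i + 1]) & 0xE0) === 0xE0) {
            return true;
        }
    }
    return false;
}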
application/octet-stream is probably mime_content_type's fallback type when it fails to recognize a file.
The MP3 in that case is either not a real MP3 file, or - more likely - the file is a real MP3 file, but does not contain the "magic bytes" the PHP function uses to recognize the format - maybe because it's a different sub-format or has a variable bitrate or whatever.
You could try whether getid3 gives you better results. I've never worked with it but it looks like a pretty healthy library to get lots of information out of multimedia files.
If you have access to PHP's configuration, you may also be able to change the mime.magic file PHP uses, although I have no idea whether a better file exists that is able to detect your MP3s. (The mime.magic file is the file containing all the byte sequences that mime_content_type uses to recognize certain file types.)
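If getID3 is an option, the basic usage looks roughly like this; I'm writing it from memory of the library's documentation, so treat it as a sketch (the include path depends on how the library is installed):
require_once 'getid3/getid3.php';   // path depends on the installation

$getID3 = new getID3();
$info = $getID3->analyze($mp3_file_path);

// The analysis array reports the detected format and MIME type,
// independent of the file extension.
if (isset($info['fileformat']) && $info['fileformat'] === 'mp3') {
    echo json_encode("mp3");
}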
Fleep is the answer to this question. Allowing application/octet-stream is dangerous since .exe and other dangerous files can show up with that MIME type.
See this answer https://stackoverflow.com/a/52570299/14482130
Is file_get_contents() enough for downloading remote movie files located on a server?
I just think that perhaps storing large movie files in a string is harmful, according to the PHP docs.
Or do I need to use cURL? I don't know cURL.
UPDATE: these are big movie files, around 200MB each.
file_get_contents() is a problem because it's going to load the entire file into memory in one go. If you have enough memory to support the operation (taking into account that if this is a web server, you may have multiple hits that generate this behavior simultaneously, and therefore each need that much memory), then file_get_contents() should be fine. However, it's not the right way to do it - you should use a library specifically intended for these sort of operations. As mentioned by others, cURL will do the trick, or wget. You might also have good luck using fopen('http://someurl', 'r') and reading blocks from the file and then dumping them straight to a local file that's been opened for write privileges.
As #mopoke suggested it could depend on the size of the file. For a small movie it may suffice. In general I think cURL would be a better fit though. You have much more flexibility with it than with file_get_contents().
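A hedged sketch of the cURL route, streaming the response straight to disk with CURLOPT_FILE so the movie never has to fit in memory (URL and paths are placeholders):
$src = 'http://example.com/movie.mp4';           // placeholder URL
$dst = '/tmp/movie.mp4';                         // placeholder local path

$fp = fopen($dst, 'wb');
$ch = curl_init($src);
curl_setopt($ch, CURLOPT_FILE, $fp);             // write the response body to $fp
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // follow redirects
if (curl_exec($ch) === false) {
    echo 'download failed: ' . curl_error($ch);
}
curl_close($ch);
fclose($fp);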
For the best performance you may find it makes sense to just use a standard Unix utility like wget. You should be able to call it with system("wget ...") or exec():
http://www.php.net/manual/en/function.system.php
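For example, something along these lines (paths are placeholders; escapeshellarg() added by me to keep the shell command safe):
$src = 'http://example.com/movie.mp4';   // placeholder URL
$dst = '/tmp/movie.mp4';                 // placeholder local path

$cmd = 'wget -q -O ' . escapeshellarg($dst) . ' ' . escapeshellarg($src);
exec($cmd, $output, $exitCode);
if ($exitCode !== 0) {
    echo 'wget failed with exit code ' . $exitCode;
}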
You can read a few bytes at a time using fread():
$src="http://somewhere/test.avi";
$dst="test.avi";
$f = fopen($src, 'rb');
$o = fopen($dst, 'wb');
while (!feof($f)) {
if (fwrite($o, fread($f, 2048)) === FALSE) {
return 1;
}
}
fclose($f);
fclose($o);