Send part of an FTP stream to php://output - php

I have a PHP server that serves audio files by streaming them from an FTP server that is not publicly available.
After sending the appropriate headers, I just stream the file to the client using ftp_get, like this:
ftp_get($conn, 'php://output', $file, FTP_BINARY);
For reasons that have to do with Range headers, I must now offer to send only a part of this stream:
$start = 300; // First byte to stream
$stop = 499; // Last byte to stream (the Content-Length is then $stop-$start+1)
I could do this by downloading the entire content temporarily to a file or to memory and then sending the desired part to the output. But since the files are large, that solution would force the client to wait while the whole file is first downloaded to the PHP server before their own download even starts.
Question:
How can I start streaming to php://output from an FTP server as soon as the first $start bytes have been discarded, and stop streaming when I've reached the $stop byte?

Instead of using PHP's FTP extension (e.g. ftp_get), it is possible to open a stream using PHP's built-in FTP wrapper.
The following code would stream parts of an FTP-file to php://output:
$start = 300; // First byte to stream
$stop = 499; // Last byte to stream
$url = "ftp://username:password#server.com/path/file.mp3";
$ctx = stream_context_create(array('ftp' => array('resume_pos' => $start)));
$fin = fopen($url, 'r', false, $ctx);
$fout = fopen('php://output', 'w');
stream_copy_to_stream($fin, $fout, $stop-$start+1);
fclose($fin);
fclose($fout);
While stream_copy_to_stream has an $offset parameter, using it resulted in an error because the FTP stream is not seekable. The resume_pos context option, however, worked fine.
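To tie this back to the Range-header motivation, here is a minimal sketch of a complete response (the status line, MIME type, and header values are assumptions, not part of the original answer):
$start = 300; // First byte to stream
$stop = 499; // Last byte to stream
$url = "ftp://username:password@server.com/path/file.mp3"; // placeholder credentials

header('HTTP/1.1 206 Partial Content');
header('Accept-Ranges: bytes');
header('Content-Type: audio/mpeg'); // assumed MIME type
header('Content-Length: ' . ($stop - $start + 1));
header("Content-Range: bytes $start-$stop/*"); // '*' because the total size is unknown here

$ctx = stream_context_create(array('ftp' => array('resume_pos' => $start)));
$fin = fopen($url, 'r', false, $ctx);
$fout = fopen('php://output', 'w');
stream_copy_to_stream($fin, $fout, $stop - $start + 1);
fclose($fin);
fclose($fout);
If the total size is known (for example via ftp_size), it should replace the '*' in Content-Range.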

Related

PHP: fseek() for large file (>2GB)

I have a very large file (about 20 GB). How can I use fseek() to jump around and read its content?
The code looks like this:
function read_bytes($f, $offset, $length) {
    fseek($f, $offset);
    return fread($f, $length);
}
The result is only correct if $offset < 2147483647.
Update: I am running on 64-bit Windows; phpinfo reports Architecture: x64, but PHP_INT_MAX is 2147483647.
WARNING: as noted in the comments, fseek uses int internally, and it simply can't work with such large files on 32-bit PHP builds. The following solution won't work. It is left here just for reference.
A little bit of searching led me to the comments on the PHP manual page for fseek:
http://php.net/manual/en/function.fseek.php
The problem is the maximum int size of the offset parameter, but it seems you can work around it by making multiple fseek calls with the SEEK_CUR option, combined with a big-number library such as GMP.
Example:
function fseek64(&$fh, $offset)
{
    fseek($fh, 0, SEEK_SET);      // rewind, then seek forward in safe steps
    $t_offset = '' . PHP_INT_MAX; // largest step a single fseek call accepts
    while (gmp_cmp($offset, $t_offset) == 1) {
        $offset = gmp_sub($offset, $t_offset);
        fseek($fh, gmp_intval($t_offset), SEEK_CUR);
    }
    return fseek($fh, gmp_intval($offset), SEEK_CUR); // final partial step
}
fseek64($f, '23456781232');
For my project, I needed to read blocks of 10 KB from a big offset in a big file (>3 GB). Writes were always appends, so no offsets were needed there.
This will work irrespective of which PHP version and OS you are using.
Prerequisite: your server must support Range retrieval queries. Apache and IIS already support this, as do 99% of other webservers (shared hosting or otherwise).
// offset, 3 GB+
$start = floatval(3355902253);
// bytes to read, 100 KB
$len = floatval(100 * 1024);
// set up the http byte-range headers
$opts = array('http' => array('method' => 'GET', 'header' => "Range: bytes=$start-" . ($start + $len - 1)));
$context = stream_context_create($opts);
// dump the byte-range headers for debugging
print_r($opts);
// change the URL below to the URL of your file. DO NOT change it to a file path.
// you MUST use a http:// URL for your file for a http request to work
// this will output the results
echo $result = file_get_contents('http://127.0.0.1/dir/mydbfile.dat', false, $context);
// status of your request
// if this is empty, the http request didn't fire.
print_r($http_response_header);
// Check your file URL and verify by going directly to your file URL from a web
// browser. If the http response shows errors, i.e. code > 400, check that you are
// sending the correct Range bytes. For example, if you give a start range which
// exceeds the current file size, the server will return 416 (Range Not Satisfiable).
// NOTE - The current file size is also returned back in the http response header
// Content-Range: bytes 355902253-355903252/355904253, the last number is the file size
...
SECURITY - you must add a .htaccess rule which denies all requests for this database file except those coming from the local IP 127.0.0.1.
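Such a rule might look like this (a sketch in Apache 2.4 syntax; older versions use Order/Deny/Allow instead, and the filename is the example's):
<Files "mydbfile.dat">
    Require ip 127.0.0.1
</Files>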

Uncompressing gzip with stream_filter_append and stream_copy_to_stream

Found this:
https://stackoverflow.com/a/11373078/530599 - great, but what about stream_filter_append($fp, 'zlib.inflate', STREAM_FILTER_* ... ?
I'm looking for another way to uncompress the data.
$fp = fopen($src, 'rb');
$to = fopen($output, 'wb');
// some filtering here?
stream_copy_to_stream($fp, $to);
fclose($fp);
fclose($to);
Where $src is some URL to an http://.../file.gz, for example 200+ MB :)
Added test code that works, but in 2 steps:
<?php
$src = 'http://is.auto.ru/catalog/catalog.xml.gz';
$fp = fopen($src, 'rb');
$to = fopen(dirname(__FILE__) . '/output.txt.gz', 'wb');
stream_copy_to_stream($fp, $to);
fclose($fp);
fclose($to);
copy('compress.zlib://' . dirname(__FILE__) . '/output.txt.gz', dirname(__FILE__) . '/output.txt');
Try gzopen, which opens a gzip (.gz) file for reading or writing. If the file is not compressed, it reads it transparently, so you can safely read a non-gzipped file.
$fp = gzopen($src, 'rb');
$to = fopen($output, 'w+b');
while (!feof($fp)) {
    fwrite($to, gzread($fp, 2048)); // writes decompressed data from $fp to $to
}
fclose($fp);
fclose($to);
One of the annoying omissions in PHP's stream filter subsystem is the lack of a gzip filter. Gzip is essentially content compressed using the deflate method, but wrapped in a header (10 bytes at minimum, longer when optional fields such as the original filename are present) and a trailer carrying a CRC-32 checksum and the original length. If you just add a zlib.inflate filter to a stream, it's not going to work: the filter expects raw deflate data, so you have to skip past the header before attaching the filter.
Note that there's a serious bug with stream filters in PHP 5.2.x, due to stream buffering: PHP fails to pass data already sitting in the stream's internal buffer through the filter. If you fread() the gzip header before attaching the inflate filter, there's a good chance it will fail. A call to fread() makes PHP try to fill up its internal buffer, so even if you ask for only a few bytes, PHP may actually read many more (say 1024) from the physical medium to improve performance; due to the aforementioned bug, those extra bytes would never be sent to the decompression routine.
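Putting the two caveats together, here is a minimal sketch of the filter approach (the URL is a placeholder; it assumes a bare 10-byte gzip header with no optional fields such as a stored filename, and a PHP version newer than 5.2.x):
$src = 'http://example.com/file.gz';
$fp = fopen($src, 'rb');
$to = fopen('output.txt', 'wb');
fread($fp, 10); // skip the fixed gzip header so zlib.inflate sees raw deflate data
stream_filter_append($fp, 'zlib.inflate', STREAM_FILTER_READ);
stream_copy_to_stream($fp, $to); // decompresses on the fly, no intermediate .gz file on disk
fclose($fp);
fclose($to);
The trailing CRC-32 is simply left unverified here, which is the price of skipping the proper gzip machinery.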

Stream FTP upload in chunks with PHP?

Is it possible to stream an FTP upload with PHP? I have files I need to upload to another server, and I can only access that server through FTP. Unfortunately, I can't increase the timeout on this server. Is it at all possible to do this?
Basically, if there is a way to write part of a file and then append the next part (and repeat) instead of uploading the whole thing at once, that would save me. However, my Googling hasn't provided me with an answer.
Is this achievable?
OK then... this might be what you're looking for. Are you familiar with cURL?
cURL supports appending for FTP:
curl_setopt($ch, CURLOPT_FTPAPPEND, TRUE ); // APPEND FLAG
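As a sketch, that flag would fit into a chunked upload like this (hostname, credentials, and $chunk are placeholders):
$ch = curl_init('ftp://username:password@hostname/path/to/file');
$fh = fopen('php://temp', 'r+'); // hold the current chunk in a temp stream
fwrite($fh, $chunk);
rewind($fh);
curl_setopt($ch, CURLOPT_UPLOAD, true);    // this is an upload
curl_setopt($ch, CURLOPT_FTPAPPEND, true); // APPE instead of STOR: append to the remote file
curl_setopt($ch, CURLOPT_INFILE, $fh);     // read the chunk from this handle
curl_setopt($ch, CURLOPT_INFILESIZE, strlen($chunk));
curl_exec($ch);
curl_close($ch);
fclose($fh);
Run once per chunk; each pass appends, so no single transfer has to outlive the timeout.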
The other option is to use ftp:// / ftps:// streams; since PHP 5 they allow appending. See the ftp:// and ftps:// wrapper docs. This might be easier to work with.
The easiest way to append a chunk to the end of a remote file is to use file_put_contents with the FILE_APPEND flag:
file_put_contents('ftp://username:password@hostname/path/to/file', $chunk, FILE_APPEND);
If it does not work, it's probably because you do not have URL wrappers enabled in PHP.
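A quick way to check, assuming that is the cause:
var_dump(ini_get('allow_url_fopen')); // must report "1" for the ftp:// URL wrapper to work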
If you need greater control over the writing (transfer mode, passive mode, etc.), or you cannot use file_put_contents, use ftp_fput with a handle to the php://temp (or php://memory) stream:
$conn_id = ftp_connect('hostname');
ftp_login($conn_id, 'username', 'password');
ftp_pasv($conn_id, true);
$h = fopen('php://temp', 'r+');
fwrite($h, $chunk);
rewind($h);
// prevent ftp_fput from seeking local "file" ($h)
ftp_set_option($conn_id, FTP_AUTOSEEK, false);
$remote_path = '/path/to/file';
$size = ftp_size($conn_id, $remote_path);
$r = ftp_fput($conn_id, $remote_path, $h, FTP_BINARY, $size);
fclose($h);
ftp_close($conn_id);
(add error handling)
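To make the question's "and repeat" part concrete, here is a sketch that loops the append step above over a large local file (paths and credentials are placeholders):
$conn_id = ftp_connect('hostname');
ftp_login($conn_id, 'username', 'password');
ftp_pasv($conn_id, true);
ftp_set_option($conn_id, FTP_AUTOSEEK, false);
$remote_path = '/path/to/file';
$local = fopen('/path/to/local/file', 'rb');
while (!feof($local)) {
    $chunk = fread($local, 1024 * 1024); // 1 MB per append keeps each transfer short
    $h = fopen('php://temp', 'r+');
    fwrite($h, $chunk);
    rewind($h);
    $size = ftp_size($conn_id, $remote_path); // -1 if the remote file does not exist yet
    ftp_fput($conn_id, $remote_path, $h, FTP_BINARY, max(0, $size));
    fclose($h);
}
fclose($local);
ftp_close($conn_id);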

PHP socket programming transferred image is corrupted

I have a PHP socket client which transfers an image (BMP) to the socket server:
$host="127.0.0.1" ;
$port=8000;
$timeout=30;
$socket=fsockopen($host,$port,$errnum,$errstr,$timeout) ;
$bmp=file_get_contents("C:/Image.bmp");
$bytesWritten = fwrite($socket, $bmp);
fclose($socket);
The transferred image is always corrupted and only partially streamed, and I get the error message:
Fatal error: Maximum execution time of 60 seconds exceeded
I'm transferring from localhost to localhost ;) and I have an ASP.NET app which does the same thing in milliseconds! So why not PHP? Why does it take such a long time?
I think it has something to do with file_get_contents, which creates a large blob. Instead of that, is there a way to use a FileStream in PHP?
Any idea how to transfer the file without corrupting it?
file_get_contents returns a string. I think you want to use fread instead.
Example:
$filename = "c:\\files\\somepic.gif";
$handle = fopen($filename, "rb");
$contents = fread($handle, filesize($filename));
fclose($handle);
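If the corruption comes from the single huge fwrite() rather than the read, a sketch that streams the file to the socket in chunks may help; fwrite() on a socket can send fewer bytes than requested, so partial writes are retried here (host, port, and path are taken from the question):
set_time_limit(0); // avoid the 60-second fatal error on slow transfers
$socket = fsockopen('127.0.0.1', 8000, $errnum, $errstr, 30);
$handle = fopen('C:/Image.bmp', 'rb');
while (!feof($handle)) {
    $chunk = fread($handle, 8192);
    for ($written = 0; $written < strlen($chunk); $written += $n) {
        $n = fwrite($socket, substr($chunk, $written)); // may write only part of the chunk
        if ($n === false || $n === 0)
            break 2; // write failed: stop streaming
    }
}
fclose($handle);
fclose($socket);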

Cannot resume downloads bigger than 300M

I am working on a program with PHP to download files.
The script request looks like: http://localhost/download.php?file=abc.zip
I use a script mentioned in Resumable downloads when using PHP to send the file?
It definitely works for files under 300 MB, with either multi-threaded or single-threaded download. But when I try to download a file larger than 300 MB, single-threaded downloading breaks: I get only about 250 MB of data, and then the HTTP connection seems to drop. It does not stop at the break-point... Why?
Debugging the script, I pinpointed where it broke:
$max_bf_size = 10240;
$pf = fopen($file_path, "rb");
fseek($pf, $offset);
while (1) {
    $rd_length = $length < $max_bf_size ? $length : $max_bf_size;
    $data = fread($pf, $rd_length);
    print $data;
    $length = $length - $rd_length;
    if ($length <= 0) {
        //__break-point__
        break;
    }
}
It seems as if each requested document can only push about 250 MB through echo or print. But it works when I use a multi-threaded downloader on the same file.
fread() will read up to the number of bytes you ask for, so you are doing some unnecessary work calculating the number of bytes to read. I don't know what you mean by single-threaded and multi-threaded downloading. Do you know about readfile(), which simply dumps an entire file? I assume you need to read a portion of the file starting at $offset, up to $length bytes, correct?
I'd also check my web server (Apache?) configuration and ISP limits if applicable; your maximum response size or time may be throttled.
Try this:
define('MAX_BUF_SIZE', 10240);
$pf = fopen($file_path, 'rb');
fseek($pf, $offset);
while (!feof($pf)) {
    $data = fread($pf, MAX_BUF_SIZE);
    if ($data === false)
        break;
    print $data;
}
fclose($pf);
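If the cut-off turns out to be PHP-side rather than the web server or the ISP (an assumption worth testing), lifting PHP's own limits before the loop may help:
set_time_limit(0); // disable max_execution_time for this request
while (ob_get_level() > 0) {
    ob_end_clean(); // drop output buffers so each chunk goes straight to the client
}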
