I am using OCI-LOB::import to store a file into the database.
What will happen if the file is large, larger than PHP's memory_limit setting? Will OCI-LOB::import stream the file to the database in smaller chunks, or not?
Are there any OCI functions that can control the LOB-related streaming of data, most importantly for setting the chunk size?
1) You don't have to worry about PHP's memory_limit when you write large data into a LOB.
2) You can write data to the LOB object in chunks using the OCI-Lob::write function:
$chunkSize = 1024;
$f = fopen($filename, 'rb');
while ($buf = fread($f, $chunkSize)) {
    $lob->write($buf);   // $lob is the OCI-Lob descriptor bound to your INSERT/UPDATE statement
}
fclose($f);
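For context, here is a minimal end-to-end sketch of that approach. The connection $conn, the table my_files, its BLOB column data, and the id value are assumptions for illustration, not part of the original answer:
$sql = 'INSERT INTO my_files (id, data) VALUES (:id, EMPTY_BLOB()) RETURNING data INTO :data';
$stmt = oci_parse($conn, $sql);
$lob = oci_new_descriptor($conn, OCI_D_LOB);
$id = 1;                                  // hypothetical primary key value
oci_bind_by_name($stmt, ':id', $id);
oci_bind_by_name($stmt, ':data', $lob, -1, OCI_B_BLOB);
oci_execute($stmt, OCI_DEFAULT);          // no auto-commit while the LOB is being written

$chunkSize = 1024;
$f = fopen($filename, 'rb');
while ($buf = fread($f, $chunkSize)) {
    $lob->write($buf);                    // each chunk is sent to the database separately
}
fclose($f);

oci_commit($conn);
$lob->free();
oci_free_statement($stmt);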
After examining oci8_lob.c from the PHP 5.3.18 source, I found that OCI-LOB::import reads the file and writes it into the LOB descriptor using a fixed-size buffer. The buffer length is set to 8192 bytes and is hardcoded in the source. That means OCI-LOB::import sends data to the database in 8K chunks.
It is not possible to change the chunk size used by OCI-LOB::import, since it is hardcoded in the source.
These two pieces of code both read a file, so what is the main difference between them?
1 - First code:
$handle = fopen($file, 'r');
$data = fread($handle, filesize($file));
2 - Second code:
readfile($file);
There's a significant difference between fread() and readfile().
First, readfile() does a number of things that fread() does not. readfile() opens the file for reading, reads it, and then prints it to the output buffer all in one go. fread() only does one of those things: it reads bytes from a given file handle.
Additionally, readfile() has some benefits that fread() does not. For example, it can take advantage of memory-mapped I/O where available rather than slower disk reads. This can significantly increase the performance of reading the file, since it delegates more of the work from PHP itself to operating system calls.
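To make the difference concrete, here is a rough sketch of the two usages side by side; the loop only illustrates readfile()'s streaming behaviour, it is not its actual implementation:
// fread(): you manage the handle and get the bytes back as a string
$handle = fopen($file, 'rb');
$data = fread($handle, filesize($file));   // the whole file ends up in $data
fclose($handle);

// readfile(): roughly like opening, streaming to output and closing in one call
$fp = fopen($file, 'rb');
while (!feof($fp)) {
    echo fread($fp, 8192);                 // goes to the output buffer, never held whole in memory
}
fclose($fp);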
Errata
I previously noted that readfile() could run without PHP (this is corrected below).
For truly large files (think several gigabytes, like media files or large archive backups), you may want to consider delegating the reading of the file away from PHP entirely with an X-Sendfile header to your web server instead, so that you don't keep your PHP worker tied up for the length of a transfer that could potentially take hours.
So you could do something like this instead of readfile():
<?php
/* process some things in php here */
header("X-Sendfile: /path/to/file");
exit; // don't need to keep PHP busy for this
Reading the docs: readfile() reads the whole content and writes it to standard output, while fread(), as in
$data = fread($handle, filesize($file));
puts the content into the variable $data.
I have an API endpoint that can receive a POST with a JSON/XML body (or even raw binary data) as the payload, which should be written immediately to a file on the filesystem.
For backwards-compatibility reasons, it cannot be multipart/form-data.
It works with no problems for body content up to a certain size (around 2.3GB with an 8GB script memory limit).
I've tried all of the following, both with and without setting the buffer sizes:
$filename = '/tmp/test_big_file.bin';
$input = fopen('php://input', 'rb');
$output = fopen($filename, 'wb');
stream_set_read_buffer($input, 4096);
stream_set_write_buffer($output, 4096);
stream_copy_to_stream($input, $output);
fclose($input);
fclose($output);
and
$filename = '/tmp/test_big_file.bin';
file_put_contents($filename, file_get_contents('php://input'));
and
$filename = '/tmp/test_big_file.bin';
$input = fopen('php://input', 'rb');
$output = fopen($filename, 'wb');
while (!feof($input)) {
fwrite($output, fread($input, 8192), 8192);
}
fclose($input);
fclose($output);
Unfortunately, none of them works. At some point I always get the same error:
PHP Fatal error: Allowed memory size of 8589934592 bytes exhausted (tried to allocate 2475803056 bytes) in Unknown on line 0
Also, disabling enable_post_data_reading makes no difference, and all of the relevant php.ini post/memory sizes are set to 8GB.
I'm using php-fpm.
Watching the memory with free -mt, I can see that the memory used increases slowly at the beginning, then faster after a while, up to the point where no free memory is left, and then the error occurs.
In the temp directory, the body is not stream-copied directly to my file; instead it is written to a temporary file with a name like php7NARsX (or some other random string), which is not deleted after the script crashes, so at the next free -mt check the available memory is 2.3GB lower.
Now my questions:
Why is the stream not copied directly from php://input to the output instead of being loaded into memory? (Using php://temp as the output stream leads to the same error.)
Why is PHP using so much memory? I'm sending a 3GB payload, so why does it need more than 8GB?
Of course, any working solution will be much appreciated. Thank you!
I have an issue with the PHP function xml_parse: it's not working with huge files. I have an XML file about 10MB in size.
The problem is that I'm using the old XML-RPC library from Zend, and it also sets up element handlers and case folding:
$parser_resource = xml_parser_create('utf-8');
xml_parser_set_option($parser_resource, XML_OPTION_CASE_FOLDING, true);
xml_set_element_handler($parser_resource, 'XML_RPC_se', 'XML_RPC_ee');
xml_set_character_data_handler($parser_resource, 'XML_RPC_cd');
if (!xml_parse($parser_resource, $data, 1)) {
// ends here with 10MB file
}
Elsewhere I just use simplexml_load_file with the LIBXML_PARSEHUGE option, but in this case I don't know what I can do.
The best solution would be if xml_parse had some parameter for huge files too.
Thank you for your advice.
Error is:
XML error: No memory at line ...
The chunk of the file you pass to xml_parse in one call may be too large.
If you read the file with fread, for example
while ($data = fread($fp, 1024*1024)) {...}
use a smaller length (in my case it had to be smaller than 10 MB), e.g. 1MB, and put the xml_parse call inside the while loop.
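A minimal sketch of that loop, assuming the $parser_resource set up as in the question and the file path in $file:
$fp = fopen($file, 'rb');
while (!feof($fp)) {
    $data = fread($fp, 1024 * 1024);                  // feed the parser 1 MB at a time
    $isFinal = feof($fp);                             // tell the parser when the last chunk arrives
    if (!xml_parse($parser_resource, $data, $isFinal)) {
        // e.g. xml_error_string(xml_get_error_code($parser_resource))
        break;
    }
}
fclose($fp);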
I want to send an external MP4 file to a user in chunks of 1 MB each. With each chunk I update a database entry to keep track of the download progress. I use fread() to read the file in chunks. Here is the stripped-down code:
$filehandle = fopen($file, 'r');
while(!feof($filehandle)){
$buffer = fread($filehandle, 1024*1024);
//do some database stuff
echo $buffer;
ob_flush();
flush();
}
However, when I check the chunk size at some iteration inside the while loop, with
$chunk_length = strlen($buffer);
die("$chunk_length");
I never get the desired chunk size. It fluctuates somewhere around 7000 - 8000 bytes, nowhere near 1024*1024 bytes.
When I decrease the chunk size to a smaller number, for example 1024 bytes, it works as expected.
According to the PHP fread() manual:
"When reading from anything that is not a regular local file, such as
streams returned when reading remote files or from popen() and
fsockopen(), reading will stop after a packet is available."
In this case I opened a remote file. Apparently this makes fread() stop not at the specified length, but as soon as the first packet has arrived.
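If you want to stay with fread() anyway, one workaround (a sketch, not part of the original answer) is to keep reading until the buffer reaches the desired chunk size:
$desired = 1024 * 1024;                     // 1 MB chunks
$filehandle = fopen($file, 'rb');           // $file as in the question
while (!feof($filehandle)) {
    $buffer = '';
    // fread() on a remote stream may return as little as one network packet,
    // so keep appending until we have a full chunk or hit end of file
    while (strlen($buffer) < $desired && !feof($filehandle)) {
        $buffer .= fread($filehandle, $desired - strlen($buffer));
    }
    // ... update database, echo $buffer, ob_flush(), flush(), etc.
}
fclose($filehandle);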
I wanted to keep track of the download of an external file.
If you want to do this (or keep track of an upload), use cURL instead:
$curl_handle = curl_init('https://example.com/file.mp4');   // hypothetical URL
curl_setopt($curl_handle, CURLOPT_NOPROGRESS, false);
curl_setopt($curl_handle, CURLOPT_PROGRESSFUNCTION, 'callbackFunction');
curl_exec($curl_handle);

// Note: as of PHP 5.5 the cURL handle is passed as an additional first argument.
function callbackFunction($download_size, $downloaded, $upload_size, $uploaded){
    //do stuff with the parameters
}
I've been building a class to create ZIP files in PHP, as an alternative to ZipArchive in case it is not available on the server; something to use on those free hosts.
It is already sort of working: it builds the ZIP structures in PHP and uses gzdeflate() to generate the compressed data.
The problem is that gzdeflate() requires me to load the whole file into memory, and I want the class to work within a 32MB memory limit. Currently it stores files bigger than 16MB with no compression at all.
I imagine I should compress the data in 16MB blocks, but I don't know how to concatenate the results of two gzdeflate() calls.
I've been testing and it seems to require some math on the last 16 bits, something like buff->last16bits = (buff->last16bits & newblock->first16bits) | 0xfffe; it works, but not for all samples...
Question: how do I concatenate two DEFLATEd streams without decompressing them?
PHP stream filters can be used to perform such tasks. stream_filter_append can be applied while reading from or writing to streams. For example:
$fp = fopen($path, 'r');
stream_filter_append($fp, 'zlib.deflate', STREAM_FILTER_READ);
Now fread will return deflated data.
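Building on that, a minimal sketch of compressing a large file chunk by chunk without ever holding it in memory; the output path is only an assumption for illustration:
$in  = fopen($path, 'rb');
$out = fopen($path . '.deflated', 'wb');                     // hypothetical output file
stream_filter_append($in, 'zlib.deflate', STREAM_FILTER_READ, ['level' => 6]);

while (!feof($in)) {
    fwrite($out, fread($in, 512 * 1024));                    // 512 KB at a time, never the whole file
}

fclose($in);
fclose($out);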
This may or may not help. It looks like gzwrite will allow you to write files without having them completely loaded in memory. This example from the PHP Manual page shows how you can compress a file using gzwrite and fopen.
http://us.php.net/manual/en/function.gzwrite.php
function gzcompressfile($source, $level = false) {
    // $dest = $source . '.gz';
    $dest = 'php://stdout';   // This will stream the compressed data directly to the screen.
    $mode = 'wb' . $level;
    $error = false;

    if ($fp_out = gzopen($dest, $mode)) {
        if ($fp_in = fopen($source, 'rb')) {
            while (!feof($fp_in)) {
                gzwrite($fp_out, fread($fp_in, 1024 * 512));   // 512 KB per read, so the file is never fully in memory
            }
            fclose($fp_in);
        } else {
            $error = true;
        }
        gzclose($fp_out);
    } else {
        $error = true;
    }

    return $error ? false : $dest;
}
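Called like this (the path is just a placeholder), the compressed data goes straight to the output because $dest is php://stdout:
gzcompressfile('/path/to/large-file.bin', 9);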