filesize() dynamic - PHP

I want to make filesize() dynamic. What does that mean?
I mean that when I give it a link, and the link itself is dynamic too,
filesize() should calculate the size of the file and convert it to KB, MB, or GB.
I want this for URLs.

This will do the trick.
You can also pass a precision; you may want it to be 0.
<?php
// Convert a byte count to a human-readable string (B, kB, MB, ...).
function human_filesize($size, $precision = 2) {
    $units = ['B','kB','MB','GB','TB','PB','EB','ZB','YB'];
    $step = 1024;
    $i = 0;
    while (($size / $step) > 0.9) {
        $size = $size / $step;
        $i++;
    }
    return round($size, $precision).$units[$i];
}
function getFileSizeFromUrl($url){
    // Headers only: no body is downloaded.
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
    curl_setopt($ch, CURLOPT_HEADER, TRUE);
    curl_setopt($ch, CURLOPT_NOBODY, TRUE);
    $data = curl_exec($ch);
    // Note: this returns -1 when the server sends no Content-Length header.
    $size = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);
    curl_close($ch);
    return human_filesize($size);
}
echo getFileSizeFromUrl(" YOUR URL HERE ");
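For reference, a quick check of human_filesize() on its own (the input here is just an illustrative byte count):
echo human_filesize(123456789);    // 117.74MB
echo human_filesize(123456789, 0); // 118MB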

Related

how to get external web page size and load time in php

How do I get the load time and page size of an external web page in PHP?
I'm using the following technique to get the load time, but is there a better option (for example, something from the response headers or another technique)?
$t = microtime( TRUE );
$file = file_get_contents( "http://google.com/" );
print_r($file );
$t = microtime( TRUE ) - $t;
print "It took $t seconds!";
I just need to confirm whether this is the right technique or there is a better choice, and how to calculate web page size in PHP.
Code to get page size
$url = 'http://xAppsol.com/';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_VERBOSE, 1);
curl_setopt($ch, CURLOPT_HEADER, 1);
$response = curl_exec($ch);
$header_size = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
print_r($header_size);
This code gives me the header size, which is in bytes, but how do I check the size of the images and videos on the page? How do I calculate the exact size of everything on the page, which would be in MB?
As far as I know, you need to parse the page and get the size of each element.
$url = 'http://google.com/';
$curl = curl_init($url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);
$subject = curl_exec($curl);
// Get the download size of the page itself
print("Download size: " . curl_getinfo($curl, CURLINFO_SIZE_DOWNLOAD) .'<br>');
// Collect src="..." (images, scripts) and <link href="..."> (CSS) URLs
preg_match_all('/(?:src=)"([^"]*)"/m', $subject, $matchessrc);
preg_match_all('/link.*\s*(?:href=)"([^"]*)"/m', $subject, $matcheslink);
$matches = array_merge($matchessrc[1], $matcheslink[1]);
$domain = parse_url($url, PHP_URL_SCHEME). '://'.parse_url($url, PHP_URL_HOST);
$path = parse_url($url, PHP_URL_PATH);
$checked = array();
foreach ($matches as $m) {
    // Resolve root-relative and relative URLs against the page URL
    if ($m[0] == '/') {
        $m = $domain.$m;
    } elseif (substr($m, 0, 5) != 'http:' and substr($m, 0, 6) != 'https:') {
        $m = $domain.'/'.$path.'/'.$m;
    }
    // Skip URLs we have already measured
    if (in_array($m, $checked)) {
        continue;
    }
    $checked[] = $m;
    $curl = curl_init($m);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);
    $subject = curl_exec($curl);
    // Get the download size of this element
    print("Download size: " . curl_getinfo($curl, CURLINFO_SIZE_DOWNLOAD) .'<br>');
}
This searches for src attributes (usually images and scripts) and for href in <link> tags (usually CSS).
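For the load-time half of the question, note that the microtime() approach above also counts the print_r() output in the measurement. cURL reports its own transfer timing, which may be a cleaner option; a minimal sketch:
$ch = curl_init('http://google.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
curl_exec($ch);
echo 'Load time: ' . curl_getinfo($ch, CURLINFO_TOTAL_TIME) . ' seconds';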

PHP curl_multi - how to limit the download size of file

I need to limit the size of the file to be downloaded, so I tried using the CURLOPT_PROGRESSFUNCTION option with a callback that checks the size of the download and stops when it goes beyond 1 KB, this way:
$progress_handler = function( $download_size, $downloaded, $upload_size, $uploaded ) {
    return ( $downloaded > 1024 ) ? 1 : 0;
};
curl_setopt($ch, CURLOPT_NOPROGRESS, false);
curl_setopt($ch, CURLOPT_PROGRESSFUNCTION, $progress_handler);
I tested this on a few sites with download sizes around 100 KB, but it doesn't seem to stop at 1 KB. Are there any other ways to apply the limit?
Thanks!
This works:
<?php
$url = 'https://example.com/file';
$ch = curl_init($url);
$bytes = 0;
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function($ch, $data) use(&$bytes) {
    static $size = 0;
    //echo $data;
    $size += strlen($data);
    $bytes = $size;
    // Returning anything other than strlen($data) aborts the transfer
    if ($size > 1024) {
        return -1;
    }
    return strlen($data);
});
$res = curl_exec($ch);
echo "Got $bytes bytes\n";
The concept is to use CURLOPT_WRITEFUNCTION to receive the response body and increment a static counter local to the callback. Once the byte count exceeds 1024, return -1 to abort the transfer (any return value other than the length of the data received does this). The $bytes variable is shared between the callback and the surrounding code, so after the transfer you can check whether it exceeded your target size.
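Since the question asks about curl_multi, the same write callback carries over per handle; each closure created in the loop keeps its own static counter, so every transfer is capped independently. A minimal sketch, assuming a hypothetical $urls list:
<?php
$urls = ['https://example.com/a', 'https://example.com/b']; // hypothetical URLs
$mh = curl_multi_init();
$handles = [];
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_WRITEFUNCTION, function($ch, $data) {
        static $size = 0; // per closure instance, hence per handle
        $size += strlen($data);
        return ($size > 1024) ? -1 : strlen($data); // abort past 1 KB
    });
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}
do {
    curl_multi_exec($mh, $active);
    curl_multi_select($mh);
} while ($active);
foreach ($handles as $ch) {
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);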

PHP - CURLOPT_BUFFERSIZE ignored

I would like the callback function to run every X bytes uploaded, but I don't understand why PHP keeps calling it far more often.
Here is my code:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $converter);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $post_fields);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_NOPROGRESS, false);
curl_setopt($ch, CURLOPT_PROGRESSFUNCTION, 'callback');
curl_setopt($ch, CURLOPT_BUFFERSIZE, 1048576);
$result = curl_exec($ch);
//$info = curl_getinfo($ch);
//print_r($info);
curl_close($ch);
function callback($resource, $download_size, $downloaded, $upload_size, $uploaded) {
    echo $uploaded . '/' . $upload_size . "\r";
}
The file to upload is around 68 MB, so the callback should run about 68 times (1048576 bytes = 1 MB), but it runs around 9k times...
The callback should write the progress to a MySQL DB, which is why I need it to run less often.
As Barman stated, CURLOPT_BUFFERSIZE relates to downloads and won't work for uploads.
The solution is to check the uploaded size in the callback and only act once a certain number of bytes has been uploaded since the last update.
Example:
$i = 0;
$up = 0;
function callback($resource, $download_size, $downloaded, $upload_size, $uploaded) {
    global $i, $up;
    // Only act once roughly another 1 MB has been uploaded
    if ($uploaded > ($up + 1048576)) {
        $i++;
        $up = $uploaded;
        // formatBytes() is assumed to be a byte-formatting helper like human_filesize() above
        echo $i . ' => ' . formatBytes($uploaded) . '/' . formatBytes($upload_size) . "\r";
    }
}
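A minimal variant of the same idea, assuming PHP 5.3+, carrying the state in a closure instead of globals (the database update is left as a placeholder comment):
$up = 0;
$callback = function($resource, $download_size, $downloaded, $upload_size, $uploaded) use (&$up) {
    if ($uploaded - $up >= 1048576) { // another ~1 MB has gone up
        $up = $uploaded;
        // ... update the MySQL progress row here ...
    }
};
curl_setopt($ch, CURLOPT_PROGRESSFUNCTION, $callback);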

Get JPG Dimensions from Partial Extract Without Writing to Disk

I wrote a PHP script that gets the dimensions (width and height) of a remotely hosted JPG without downloading it in full (just the first 10K).
The problem is that I write the partial download to a file, then read that file back to extract the information I need (using getImageSize).
I know this can be done without writing to disk, but I do not know how.
Does anyone have suggestions or solutions?
Here is my original code:
function remoteImage($url){
    $ch = curl_init ($url);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
    // Only fetch the first 10 KB, enough for the JPG header data
    curl_setopt($ch, CURLOPT_RANGE, "0-10240");
    $fn = "partial.jpg";
    $raw = curl_exec($ch);
    $result = array();
    if (file_exists($fn)) {
        unlink($fn);
    }
    if ($raw !== false) {
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        // 200 = full response, 206 = partial content (range honored)
        if ($status == 200 || $status == 206) {
            $result["w"] = 0;
            $result["h"] = 0;
            $fp = fopen($fn, 'x');
            fwrite($fp, $raw);
            fclose($fp);
            $size = getImageSize($fn);
            if ($size === false) {
                // Cannot get file size information
            } else {
                // Return width and height
                list($result["w"], $result["h"]) = $size;
            }
        }
    }
    curl_close($ch);
    return $result;
}
My original question, which led to this, is here - and might be helpful.
It may be possible to use a memory file stream.
$fn = 'php://memory';
See: http://php.net/manual/en/wrappers.php.php
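Alternatively, assuming PHP 5.4+, getimagesizefromstring() works directly on the bytes already held in $raw, so no stream or temp file is needed at all; a minimal sketch:
$size = getimagesizefromstring($raw); // $raw is the partial body from curl_exec()
if ($size !== false) {
    list($w, $h) = $size;
}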

How can I download multiple parts of a file in parallel with PHP's curl library?

I decided to start a project about download acceleration with curl in PHP, using curl_multi functions.
Here is my code:
set_time_limit(0);
error_reporting(E_ALL);
$fileurl = "http://hq-scenes.com/tv.exe";
$filename = basename($fileurl);
$size = getFileSize($fileurl);
$splits = range(0, $size, round($size/5));
$megaconnect = curl_multi_init();
$partnames = array();
for ($i = 0; $i < sizeof($splits); $i++) {
    $ch[$i] = curl_init();
    curl_setopt($ch[$i], CURLOPT_URL, $fileurl);
    curl_setopt($ch[$i], CURLOPT_RETURNTRANSFER, 0);
    curl_setopt($ch[$i], CURLOPT_FOLLOWLOCATION, 0);
    curl_setopt($ch[$i], CURLOPT_VERBOSE, 1);
    curl_setopt($ch[$i], CURLOPT_BINARYTRANSFER, 1);
    curl_setopt($ch[$i], CURLOPT_FRESH_CONNECT, 0);
    curl_setopt($ch[$i], CURLOPT_CONNECTTIMEOUT, 10);
    $partnames[$i] = $filename . $i;
    $bh[$i] = fopen(getcwd(). '/' . $partnames[$i], 'w+');
    curl_setopt($ch[$i], CURLOPT_FILE, $bh[$i]);
    $x = ($i == 0 ? 0 : $splits[$i]+1);
    $y = ($i == sizeof($splits)-1 ? $size : $splits[$i+1]);
    $range = $x.'-'.$y;
    curl_setopt($ch[$i], CURLOPT_RANGE, $range);
    curl_setopt($ch[$i], CURLOPT_USERAGENT, "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/14.0.835.29 Safari/535.1");
    curl_multi_add_handle($megaconnect, $ch[$i]);
}
$active = null;
do {
    $mrc = curl_multi_exec($megaconnect, $active);
} while ($mrc == CURLM_CALL_MULTI_PERFORM);
while ($active && $mrc == CURLM_OK) {
    if (curl_multi_select($megaconnect) != -1) {
        do {
            $mrc = curl_multi_exec($megaconnect, $active);
        } while ($mrc == CURLM_CALL_MULTI_PERFORM);
    }
}
$final = fopen($filename, "w+");
for ($i = 0; $i < sizeof($splits); $i++) {
    $contents = fread($bh[$i], filesize($partnames[$i]));
    fclose($bh[$i]);
    fwrite($final, $contents);
}
fclose($final);
function getFileSize($url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HEADER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    $h = fopen('header', "w+");
    curl_setopt($ch, CURLOPT_WRITEHEADER, $h);
    $data = curl_exec($ch);
    curl_close($ch);
    if (preg_match('/Content-Length: (\d+)/', $data, $matches)) {
        return $contentLength = (int)$matches[1];
    }
    else return false;
}
Everything goes OK, except one thing: the last part file doesn't reach the end of the file.
The actual file size is 3279848 bytes.
The ranges are:
0-655970
655971-1311940
1311941-1967910
1967911-2623880
2623881-3279848
Part files with sizes:
tv.exe0 655360
tv.exe1 655360
tv.exe2 655360
tv.exe3 655360
tv.exe4 655360
That makes the final file 3276800 bytes long, but it should be 3279848 bytes.
And of course the executable doesn't work :(
Notice that the part files all have the same size, even the last one, which should have a few more bytes. So the problem is in the download range or something, not in the merge process.
What did I do wrong?
I suggest adding this after fclose($final); to delete the file parts that are no longer needed:
foreach($partnames as $files_to_delete){
unlink($files_to_delete);
}
You must rewind your file pointers to 0 before fread(): reading N bytes starting from the end yields 0 bytes ;)
$final = fopen($filename, "w+");
for ($i = 0; $i < sizeof($splits); $i++) {
    fseek($bh[$i], 0, SEEK_SET); // rewind the part handle before reading it back
    $contents = fread($bh[$i], filesize($partnames[$i]));
    fclose($bh[$i]);
    fwrite($final, $contents);
}
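Equivalently, since each part also exists on disk, you could close the handles first and re-read the parts by name, which sidesteps the file-pointer position entirely; a sketch of the same merge loop:
$final = fopen($filename, "w+");
for ($i = 0; $i < sizeof($splits); $i++) {
    fclose($bh[$i]); // flush and release the part handle
    fwrite($final, file_get_contents($partnames[$i]));
}
fclose($final);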
When I ran your code, $size was not set to anything. After setting it to the size you were expecting, the part files came out as:
655971 17 Aug 22:59 tv.exe0
655970 17 Aug 22:59 tv.exe1
655970 17 Aug 22:59 tv.exe2
655970 17 Aug 22:59 tv.exe3
655967 17 Aug 22:59 tv.exe4
You'd want to use ceil() instead of round(). round() may round DOWN, which would chop off the end of the file; ceil() rounds UP, guaranteeing that the ranges cover the whole file:
$splits = range(0, $size, ceil($size/5));
                          ^^^^
e.g. if the file's size is 12, round(12/5) gives 2, while ceil(12/5) gives 3.
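A quick check of the difference (plain arithmetic, nothing project-specific):
$size = 12;
var_dump(round($size / 5)); // float(2): 5 parts x 2 bytes = 10, short of 12
var_dump(ceil($size / 5));  // float(3): 5 parts x 3 bytes = 15, covers all 12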
