I'm going to implement a website in PHP using a web framework. I have never built a website in PHP before, but I don't have problems learning new languages.
The question is: can frameworks like Zend or CakePHP be used to create a page that lets users download files at a given rate (e.g. 50 KB/s)?
Thank you.
Your server should deal with this issue, not PHP.
In case you have Apache, see here:
Apache Module mod_ratelimit
Webserver Bandwidth Limiting in Apache
For Lighttpd see here:
http://www.debianhelp.co.uk/ssllighttpd.htm
TrafficShaping
As far as I know, limiting the download speed is not part of the CakePHP core, but it is easy to implement: simply extend the MediaView class and add that feature to it.
Everything you can do with plain PHP is possible in these frameworks too. The trick is to do it without violating the rules of the framework (e.g. the MVC pattern).
In CakePHP it is absolutely possible to create a controller action that outputs a binary file with all the needed headers. In that controller action you can then limit the download speed with standard PHP.
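For illustration, a rough sketch of such an action, assuming CakePHP 2.x conventions (the DownloadsController name, the files directory, and the 50 KB/s figures are made up for this example; only the headers and the read/flush/sleep loop are essential):
class DownloadsController extends AppController
{
    public function download($name)
    {
        // APP and DS are CakePHP constants; the 'files' directory is a made-up location
        $path = APP . 'files' . DS . basename($name);

        $this->autoRender = false; // no view to render, we stream the file ourselves

        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="' . basename($path) . '"');
        header('Content-Length: ' . filesize($path));

        // Send roughly 50 KB per second; output buffering may need to be disabled for flush() to take effect
        $fp = fopen($path, 'rb');
        while (!feof($fp)) {
            echo fread($fp, 50 * 1024);
            flush();
            sleep(1);
        }
        fclose($fp);
    }
}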
PHP is not a very good language for limiting download speed, in my opinion. I have never done it, but I would do it this way:
header('Content-Type: image/jpeg');
header('Content-Disposition: attachment; filename="image.jpg"');

// Read the file in fixed-size chunks instead of loading it all into memory at once
$fp = fopen('my_image_or_file_to_download.jpg', 'rb');
while (!feof($fp)) {
    echo fread($fp, 8192);
    flush();
    usleep(10000); // change the sleep time and/or chunk size to adjust the download speed
}
fclose($fp);
You are better off using one of the Apache modules for this if that is an option for you.
Sorry for my English.
If the framework doesn't provide that feature, use an existing library, e.g. bandwidth-throttle/bandwidth-throttle:
use bandwidthThrottle\BandwidthThrottle;

require __DIR__ . '/vendor/autoload.php'; // Composer autoloader (assuming the package was installed via Composer)

$in  = fopen(__DIR__ . "/resources/video.mpg", "r");
$out = fopen("php://output", "w");

$throttle = new BandwidthThrottle();
$throttle->setRate(100, BandwidthThrottle::KIBIBYTES); // Set limit to 100 KiB/s
$throttle->throttle($out); // attach the throttle to the output stream

stream_copy_to_stream($in, $out);
I already tried to find an answer to this question on this site, but I'm a bit inexperienced and my knowledge is still limited.
My situation is as follows: I have an external PHP file (say http://example.org/get.php) (which I cannot edit) that returns an MP3 file as an octet-stream. The problem is that I need it to be an audio/mp3 in order to use it with an HTML5 audio player. How can I achieve this?
I think I cannot just do something like
$mp3_url = 'http://example.org/get.php?file=123'; header('Content-Type: audio/mp3');
, can I?
One way is to set up a proxy on your server, e.g. /your/script.php might look like
header('Content-Type: audio/mp3', true, 200);
echo file_get_contents($_GET['url']);
(security checks skipped)
Then you can use that script's URL (with the url parameter) as the link.
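Since the snippet passes $_GET['url'] straight to file_get_contents(), you should at least restrict which URLs it will proxy. A minimal sketch of one possible check (the allowed host is just an example):
// Only proxy URLs that point at the one host we expect (example.org is a placeholder)
$url  = isset($_GET['url']) ? $_GET['url'] : '';
$host = parse_url($url, PHP_URL_HOST);
if ($host !== 'example.org') {
    header('HTTP/1.0 403 Forbidden');
    exit;
}

header('Content-Type: audio/mp3', true, 200);
echo file_get_contents($url);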
Another way is to set up a proxy using a web server such as Nginx or Apache.
Use another PHP script to act as a proxy, i.e.:
<?php
$mp3 = file_get_contents("http://example.org/get.php?file=123");
header('Content-Type: audio/mp3');
echo $mp3;
Use audio/mpeg as the Content-Type; that is the registered MIME type for MP3 audio.
Maybe I'm asking for the impossible, but I want to clone a stream multiple times, a sort of multicast emulation. The idea is to write a 1300-byte buffer into a .sock file every 0.002 seconds (instead of using IP:port, to avoid the overhead) and then to read the same .sock file from other scripts, multiple times.
Doing it through a regular file is not workable: it works only within the same script that generates the buffer file and then echoes it. The other scripts misread it badly.
This works perfectly with the script that generates the chunks:
$handle = @fopen($url, 'rb');
$buffer = 1300;
while (1) {
    $chunk = fread($handle, $buffer);
    $handle2 = fopen('/var/tmp/stream_chunck.tmp', 'w');
    fwrite($handle2, $chunk);
    fclose($handle2);
    readfile('/var/tmp/stream_chunck.tmp');
}
BUT the output of another script that reads the chunks:
while (1) {
    readfile('/var/tmp/stream_chunck.tmp');
}
is messy. I don't know how to synchronize the reading of the chunks, and I thought that sockets might work a miracle.
It works only within the same script that generates the buffer file and then echoes it. The other scripts misread it badly
Using a single file without any sort of flow control shouldn't be a problem - tail -F does just that. The disadvantage is that the data will just accumulate indefinitely on the filesystem as long as a single client has an open file handle (even if you truncate the file).
But if you're writing chunks, then write each chunk to a different file (using an atomic write mechanism), and then everyone can read them by polling for available files:
// $dir, $prefix, $current_chunk, $finished and process() stand in for the reader's own logic
do {
    while (!file_exists("$dir/$prefix.$current_chunk")) {
        clearstatcache(); // file_exists() results are cached, so clear the cache between polls
        usleep(1000);
    }
    process(file_get_contents("$dir/$prefix.$current_chunk"));
    $current_chunk++;
} while (!$finished);
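For the writing side, a minimal sketch of the atomic-write idea mentioned above: write each chunk under a temporary name, then rename() it into place (rename() is atomic when both paths are on the same filesystem). $url, $dir and $prefix reuse the names from the question and the reader loop:
$chunk_size    = 1300;    // same chunk size the question uses
$current_chunk = 0;
$src = fopen($url, 'rb'); // the upstream source, as in the question

while (!feof($src)) {
    $data = fread($src, $chunk_size);
    $tmp  = "$dir/$prefix.tmp";
    file_put_contents($tmp, $data);              // write to a temporary name first...
    rename($tmp, "$dir/$prefix.$current_chunk"); // ...then atomically publish the finished chunk
    $current_chunk++;
}
fclose($src);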
Equally, you could do this with a database - which should have slightly lower overhead for the polling, and it simplifies the garbage collection of old chunks.
But this is all about how to make your solution workable - it doesn't really address the problem you are trying to solve. If we knew what you were trying to achieve then we might be able to advise on a more appropriate solution - e.g. if it's a chat application, video broadcast, something else....
I suspect a more appropriate solution would be to use a multi-processing, single-memory-model server - and when we're talking about PHP (which doesn't really do threading very well) that means an event-based/asynchronous server. There's a bit more involved than simply calling socket_select(), but there are some good scripts available which do most of the complicated stuff for you.
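To give a rough idea of what the socket_select() approach looks like, here is a minimal single-process broadcast sketch; the upstream URL, the listen address/port and the 2 ms timeout are assumptions for this example, and real code would need error handling and partial-write handling on top:
// One upstream source is read in 1300-byte chunks and pushed to every connected client
$upstream = fopen('http://example.com/source', 'rb'); // placeholder upstream stream
$server   = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_set_option($server, SOL_SOCKET, SO_REUSEADDR, 1);
socket_bind($server, '127.0.0.1', 9000);              // placeholder address/port
socket_listen($server);

$clients = array();
while (!feof($upstream)) {
    // Wait briefly (2 ms, matching the question's 0.002 s pacing) for new connections
    $read  = array($server);
    $write = $except = null;
    if (socket_select($read, $write, $except, 0, 2000) > 0) {
        $clients[] = socket_accept($server); // new subscriber
    }

    // Read the next chunk once and copy it to every client
    $chunk = fread($upstream, 1300);
    foreach ($clients as $i => $client) {
        if (@socket_write($client, $chunk) === false) {
            socket_close($client);           // drop clients that went away
            unset($clients[$i]);
        }
    }
}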
I am using the Amazon S3 API with the client set up to read as a stream. Reading the whole file with file_get_contents("s3://{bucket}/{key}") works fine for me (I am using a video file and testing on my local system). However, I am trying to optimize the memory used by the script, so I am trying to read and return the data chunk by chunk, as below:
$stream = @fopen("s3://{bucket}/{key}", 'r');
$buffer = 1024;
while (!feof($stream)) {
    echo @fread($stream, $buffer);
    flush();
}
This is not working on my local system. I am just wondering what the issue with this technique might be. From searching, I found that it is a widely used technique, so if anybody can suggest what might be wrong here, or another approach I should try, that would be very helpful. Thanks.
OK, finally got the solution. Somehow, some other output was being added to the output buffer. I had to put:
ob_get_clean();
header('Content-Type: video/quicktime');
That cleans out anything that had already been added. Now it's working fine.
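Put together, the working version looks roughly like this ({bucket} and {key} are placeholders as in the question, and it assumes the AWS SDK's s3:// stream wrapper has already been registered):
// Discard anything already sitting in the output buffer so no stray bytes corrupt the video
if (ob_get_level() > 0) {
    ob_get_clean();
}
header('Content-Type: video/quicktime');

$stream = @fopen("s3://{bucket}/{key}", 'r'); // {bucket}/{key} are placeholders
$buffer = 1024;
while (!feof($stream)) {
    echo @fread($stream, $buffer);
    flush();
}
fclose($stream);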
Thanks Mark Baker for your valuable support through the debugging process.
Is file_get_contents() enough for downloading remote movie files located on a server?
I just think that perhaps storing a large movie file in a string is harmful, according to the PHP docs.
Or do I need to use cURL? I don't know cURL.
UPDATE: these are big movie files, around 200 MB each.
file_get_contents() is a problem because it's going to load the entire file into memory in one go. If you have enough memory to support the operation (taking into account that if this is a web server, you may have multiple hits that generate this behavior simultaneously, and therefore each needs that much memory), then file_get_contents() should be fine. However, it's not the right way to do it - you should use a library specifically intended for this sort of operation. As mentioned by others, cURL will do the trick, or wget. You might also have good luck using fopen('http://someurl', 'r'), reading blocks from the file, and then dumping them straight to a local file that's been opened with write privileges.
As @mopoke suggested, it could depend on the size of the file. For a small movie it may suffice. In general I think cURL would be a better fit, though. You have much more flexibility with it than with file_get_contents().
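For example, a minimal cURL sketch that streams the download straight to disk instead of building a 200 MB string in memory (the URL and target path are placeholders):
$src = 'http://somewhere/movie.avi';  // placeholder source URL
$dst = fopen('/tmp/movie.avi', 'wb'); // placeholder destination

$ch = curl_init($src);
curl_setopt($ch, CURLOPT_FILE, $dst);           // write the response body directly to this handle
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
curl_setopt($ch, CURLOPT_FAILONERROR, true);    // treat HTTP error codes as failures
$ok = curl_exec($ch);
curl_close($ch);
fclose($dst);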
For the best performance you may find it makes sense to just use a standard Unix utility like wget. You should be able to call it with system("wget ...") or exec().
http://www.php.net/manual/en/function.system.php
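A sketch of that call (URL and destination path are placeholders; it assumes wget is installed on the server and that shell commands are allowed):
$url  = 'http://somewhere/test.avi'; // placeholder source
$dest = '/tmp/test.avi';             // placeholder destination

// -q keeps wget quiet, -O sets the output file; escapeshellarg() guards against shell injection
exec('wget -q -O ' . escapeshellarg($dest) . ' ' . escapeshellarg($url), $output, $status);
if ($status !== 0) {
    echo "wget failed with exit code $status\n";
}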
You can read a few bytes at a time using fread():
$src="http://somewhere/test.avi";
$dst="test.avi";
$f = fopen($src, 'rb');
$o = fopen($dst, 'wb');
while (!feof($f)) {
if (fwrite($o, fread($f, 2048)) === FALSE) {
return 1;
}
}
fclose($f);
fclose($o);
I would like to know the best way to save an image from a URL in PHP.
At the moment I am using
file_put_contents($pk, file_get_contents($PIC_URL));
which is not ideal. I am unable to use curl. Is there a method specifically for this?
Using file_get_contents is fine, unless the file is very large. In that case, you don't really need to be holding the entire thing in memory.
For a large retrieval, you could fopen the remote file, fread it, say, 32KB at a time, and fwrite it locally in a loop until all the file has been read.
For example:
$fout = fopen('/tmp/verylarge.jpeg', 'wb');
$fin  = fopen("http://www.example.com/verylarge.jpeg", "rb");
while (!feof($fin)) {
    $buffer = fread($fin, 32 * 1024);
    fwrite($fout, $buffer);
}
fclose($fin);
fclose($fout);
(Devoid of error checking for simplicity!)
Alternatively, you could forego using the url wrappers and use a class like PEAR's HTTP_Request, or roll your own HTTP client code using fsockopen etc. This would enable you to do efficient things like send If-Modified-Since headers if you are maintaining a cache of remote files.
I'd recommend using Paul Dixon's strategy, but replacing fopen with fsockopen(). The reason is that some server configurations disallow URL access for fopen() and file_get_contents(). The setting may be found in php.ini and is called allow_url_fopen.
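A rough sketch of that fsockopen() approach (host, path, and output file are placeholders; it speaks plain HTTP/1.0 so the body arrives unchunked, and it skips error handling):
$host = 'www.example.com'; // placeholder host
$path = '/verylarge.jpeg'; // placeholder path

$fp = fsockopen($host, 80, $errno, $errstr, 30);
fwrite($fp, "GET $path HTTP/1.0\r\nHost: $host\r\nConnection: close\r\n\r\n");

// Skip the response headers: they end at the first blank line
while (($line = fgets($fp)) !== false && rtrim($line) !== '') {
    // the status line and headers (e.g. Last-Modified) could be inspected here
}

// Everything after the blank line is the body; copy it to disk in small blocks
$out = fopen('/tmp/verylarge.jpeg', 'wb');
while (!feof($fp)) {
    fwrite($out, fread($fp, 32 * 1024));
}
fclose($out);
fclose($fp);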