I've written a C++ program for an embedded system. The program processes images taken by a camera (350 fps or even more) and applies some machine vision algorithms to them. It writes its result, a JPG image, to a specific location on my hard drive (e.g. localhost/output/result.jpg).
Now I want to write PHP code that displays this image every 30 ms or faster (it depends on the output of my C++ program; sometimes it is 30 fps and sometimes 10 or 40, but never more than 40). The frames are not related to each other, so I cannot use video streaming, since streaming algorithms assume the frames are sequential.
It is possible that the PHP code reads a corrupted image, since one program wants to write while the other wants to read.
I thought of using a mutex concept here: create a flag (a text file) that is accessible to both programs. Whenever my C++ program wants to write to that location, it sets the flag (writes something into the text file), and when it has finished writing it clears the flag. When the PHP code sees the flag set, it waits until the image has been written to the hard drive and then displays it.
I'm familiar with C++ but completely new to PHP. I'm able to display images in PHP, but I don't know how to use timers in a way that solves the above problem.
What should I do? I've included code below that works, but I don't want to use it because only this part of the webpage gets updated (because of the while(1)). Is there an alternative solution? If I can't display more than 20 frames per second, what frame rate is achievable in this scripting language, and what factors determine it?
Thanks
My Code:
<?php
while (1) {
    $handle = @fopen("/var/www/CTX_ITX/Flag_File.txt", "r");
    if ($handle) {
        if (($buffer = fgets($handle, 10)) !== false) {
            if (trim($buffer) == "Yes") { // fgets() keeps the trailing newline, so trim before comparing
                echo "<img src='result.jpg' alt='test' />";
            }
        }
        if (!feof($handle)) {
            echo "Error: unexpected fgets() fail\n";
        }
        fclose($handle);
    }
    usleep(10000); // 10 ms pause before starting again (sleep() only accepts whole seconds)
}
?>
PHP is not suitable for your task: it only responds to requests. Your JavaScript would have to load the image every 30 ms.
I would suggest you take a look at Node.js and HTML5 WebSockets. That technology will, in theory, let you push pictures fast enough without reconnecting for every frame (but only if your connection is fast enough to handle the traffic).
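If you do stay with plain PHP plus JavaScript polling, the server side could look roughly like the sketch below. It follows the flag-file convention from the question (the code above displays the image when the flag file contains "Yes"); the endpoint name image.php, the image path and the timeout values are assumptions.
<?php
// image.php -- hypothetical endpoint that the browser would poll every
// ~30 ms. It waits briefly until the flag file reads "Yes" (frame complete,
// per the convention in the question's code) and then serves the latest frame.
$flagFile  = "/var/www/CTX_ITX/Flag_File.txt";
$imageFile = "/var/www/CTX_ITX/result.jpg";   // assumed location of the output image

$tries = 0;
while ($tries < 50) {                         // give up after ~50 ms
    $flag = @file_get_contents($flagFile);
    if ($flag !== false && trim($flag) == "Yes") {
        break;                                // frame is complete, safe to read
    }
    usleep(1000);                             // wait 1 ms and check again
    $tries++;
}

header("Content-Type: image/jpeg");
header("Cache-Control: no-cache, no-store, must-revalidate");
readfile($imageFile);
?>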
Related
Maybe I'm asking the impossible, but I want to clone a stream multiple times, a sort of multicast emulation. The idea is to write a 1300-byte buffer into a .sock file every 0.002 seconds (instead of using IP:port, to avoid overhead) and then to read the same .sock file from other scripts, multiple times.
Doing it through a regular file doesn't work. It only works within the same script that generates the buffer file and then echoes it; the other scripts misread it badly.
This works perfectly in the script that generates the chunks:
$handle = @fopen($url, 'rb');
$buffer = 1300;                 // chunk size in bytes
while (1) {
    $chunck = fread($handle, $buffer);
    $handle2 = fopen('/var/tmp/stream_chunck.tmp', 'w');
    fwrite($handle2, $chunck);
    fclose($handle2);
    readfile('/var/tmp/stream_chunck.tmp');
}
BUT the output of another script that reads the chunks:
while (1) {
    readfile('/var/tmp/stream_chunck.tmp');
}
is messy. I don't know how to synchronize the reading of the chunks, and I thought sockets might work a miracle.
It only works within the same script that generates the buffer file and then echoes it; the other scripts misread it badly
Using a single file without any sort of flow control shouldn't be a problem: tail -F does just that. The disadvantage is that the data will just accumulate indefinitely on the filesystem as long as a single client has an open file handle (even if you truncate the file).
But if you're writing chunks, then write each chunk to a different file (using an atomic write mechanism); everyone can then read them by polling for available files (a reader loop is shown below, followed by a writer sketch):
// Reader: poll for the next numbered chunk file, process it, move on.
$current_chunk = 0;
$finished = false;
do {
    while (!file_exists("$dir/$prefix.$current_chunk")) {
        clearstatcache();   // file_exists() results are cached per request
        usleep(1000);
    }
    process(file_get_contents("$dir/$prefix.$current_chunk"));
    $current_chunk++;
} while (!$finished);
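For the writer side, a minimal sketch of the atomic write mentioned above might look like this; it assumes the same hypothetical $dir/$prefix naming as the reader loop and relies on rename() being atomic when the temporary file is on the same filesystem:
// Writer: write the chunk to a temporary file, then rename() it into place,
// so readers never see a half-written chunk file.
function write_chunk($dir, $prefix, $chunk_no, $data) {
    $tmp   = "$dir/$prefix.$chunk_no.tmp";
    $final = "$dir/$prefix.$chunk_no";
    file_put_contents($tmp, $data);
    rename($tmp, $final);
}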
Equally, you could do this with a database, which should have slightly lower overhead for the polling and simplifies the garbage collection of old chunks.
But this is all about how to make your solution workable; it doesn't really address the problem you are trying to solve. If we knew what you were trying to achieve, we might be able to advise on a more appropriate solution, e.g. if it's a chat application, a video broadcast, something else....
I suspect a more appropriate solution would be a multi-processing, single-memory-model server, and when we're talking about PHP (which doesn't really do threading very well) that means an event-based/asynchronous server. There's a bit more involved than simply calling socket_select(), but there are some good scripts available which do most of the complicated stuff for you.
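As a rough illustration of the event-based approach, here is a minimal, untested sketch of a single-process server built around stream_select(); the address, port and broadcast behaviour are all assumptions, and a real project would more likely use an existing library that wraps this pattern:
<?php
// Minimal event loop: one process watches a listening socket plus all
// connected clients with stream_select(), and relays whatever one client
// sends to all of the others.
$server = stream_socket_server('tcp://127.0.0.1:9000', $errno, $errstr);
if ($server === false) {
    die("Could not start server: $errstr ($errno)\n");
}
$clients = array();

while (true) {
    $read   = $clients;
    $read[] = $server;            // also watch the listening socket
    $write  = null;
    $except = null;

    if (stream_select($read, $write, $except, null) < 1) {
        continue;                 // nothing to do
    }

    // New connection?
    if (in_array($server, $read, true)) {
        $clients[] = stream_socket_accept($server);
    }

    // Data (or a disconnect) on an existing client
    foreach ($read as $sock) {
        if ($sock === $server) {
            continue;
        }
        $data = fread($sock, 8192);
        if ($data === '' || $data === false) {      // client went away
            fclose($sock);
            $key = array_search($sock, $clients, true);
            if ($key !== false) {
                unset($clients[$key]);
            }
            continue;
        }
        foreach ($clients as $other) {              // relay to everyone else
            if ($other !== $sock) {
                fwrite($other, $data);
            }
        }
    }
}
?>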
Is there a way in PHP to take some action (a MySQL insert, for example) if there have been no new requests for, say, 1 second?
What I am trying to achieve is to determine the beginning and the end of an image sequence sent from an IP camera. The camera sends a series of images on detected movement and stops sending when the movement stops. I know that the camera takes 5 images per second (one every 200 ms). When there are no new images for more than 1 second, I want to flag the last image as the end of the sequence, insert a record into MySQL, place the image in the appropriate folder (where all the other images from the same sequence have already been written) and instruct an app to make an MJPEG clip of the images in that folder.
Right now I am able to determine the first image in the sequence, using the Alternative PHP Cache (APC) to save the reference time from the previous request, but the problem is that the next image sequence can happen hours later, and I cannot instruct PHP to close the sequence when there are NO requests for some time, only when the first request of the new sequence arrives.
I really need help on this. My PHP sucks almost as much as my English... :)
Pseudocode for my problem:
<?php
if (isset($headers["Content-Disposition"])) {
    $frame_time = microtime(true);
    if (preg_match('/.*filename=[\'\"]([^\'\"]+)/', $headers["Content-Disposition"], $matches)) {
        $filename = $matches[1];
    } elseif (preg_match("/.*filename=([^ ]+)/", $headers["Content-Disposition"], $matches)) {
        $filename = $matches[1];
    }
}
preg_match("/(anpr[1-9])/", $filename, $anprs);
$anpr = $anprs[1];
$apc_key = $anpr."_last_time"; // there are several cameras, so I have to distinguish them

$last = apc_fetch($apc_key);
if (($frame_time - $last) > 1) {
    $stream = "START"; // new sequence starts
} else {
    $stream = "-->";   // streaming
}

$file = fopen('php://input', 'r');
$temp = fopen("test/".$filename, 'w');
$imageSize = stream_copy_to_stream($file, $temp); // save the image file on the file system
fclose($temp);
fclose($file);

apc_store($apc_key, $frame_time); // replace the cached time with this frame's time in APC

// here goes mysql stuff...

/* Now... if there are no new requests for 1 second, $stream = "END";
   call the app with exec() or similar to grab all images in the particular folder and make
   the MJPEG... if a new request arrives, cancel the timer (or whatever) and execute the script again. */
?>
Could you make each request usleep() for 1.5 seconds before exiting, and as a last step check whether the sequence timestamp was updated? If yes, exit and do nothing; if no, save the sequence to MySQL. (This will require mutexes, since every HTTP request will be checking and trying to save the sequence, but only one must be allowed to.)
This approach merges the sub-file/script into the PHP code (a single codebase, easier to maintain), but it can balloon memory use (each request stays in memory for 1.5 seconds, which is a long time for a busy server).
Another approach is to make the sub-file/script a loopback request to the localhost HTTP server, with presumably a much smaller memory footprint. Each frame would fire off a request to finalize the sequence (again, with mutexes).
Or maybe create a separate service call that checks and saves all sequences, and have a cron job ping it every few seconds. Or have each frame ping it; if a second request can detect that the service is already running, it can exit (share state in the APC cache).
Edit: I think I just suggested what bytesized said above.
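For the mutex mentioned above, a lightweight sketch using APC could look like the following; apc_add() only succeeds for the first caller, so whichever request wins the race gets to close the sequence (the key name and helper function are hypothetical):
// Only one concurrent request will succeed in adding the lock key.
// The TTL (5 seconds here, an arbitrary choice) keeps a crashed request
// from holding the lock forever.
$lock_key = $anpr . "_close_lock";          // hypothetical key name
if (apc_add($lock_key, 1, 5)) {
    // We hold the lock: check the timestamp and, if the sequence really
    // ended, write the MySQL record and move the images.
    close_sequence_if_stale($anpr);         // hypothetical helper
    apc_delete($lock_key);
}
// If apc_add() returned false, another request is already doing the work.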
What if you just keep the script running for 1 second after it stores the frame, to check for more added frames? I imagine you may want to close the connection before the 1 second expires, and tomlgold2003 and arr1 have the answer for you: http://php.net/manual/en/features.connection-handling.php#93441
I think this would work for you:
<?php
ob_end_clean();
header("Connection: close\r\n");
header("Content-Encoding: none\r\n");
ignore_user_abort(true); // optional
ob_start();
// Do your stuff to store the frame
$size = ob_get_length();
header("Content-Length: $size");
ob_end_flush(); // Strange behaviour, will not work
flush(); // Unless both are called !
ob_end_clean();
// The connection should now be closed
sleep(1);
// Check to see if more frames have been added.
?>
If your server is expected to see a high load, this may not be the answer for you, since when it receives 5 frames per second there will be 5 scripts checking whether they submitted the last frame.
Store each request from the camera, with all its data and a timestamp, in a file (in PHP serialized form). In a cron job, run (every 10 seconds or so) a script that reads that file and finds requests that are followed by a gap of more than one second before the next request. Save the data from such requests and delete all the other requests.
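A rough, untested sketch of that cron script, assuming one serialize()d array per line (with at least a 'time' key) in a hypothetical log file, and ignoring locking against the script that appends to it:
<?php
// Cron job: scan the logged requests, close any sequence whose last frame
// is followed by a gap of more than one second, and keep the rest.
$log = '/var/spool/camera/requests.log';        // hypothetical path
$lines = file($log, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$requests = array_map('unserialize', $lines);   // each entry has at least a 'time' key

$keep = array();
$count = count($requests);
for ($i = 0; $i < $count; $i++) {
    $is_last = ($i == $count - 1);
    $gap = $is_last ? (microtime(true) - $requests[$i]['time'])
                    : ($requests[$i + 1]['time'] - $requests[$i]['time']);
    $keep[] = $lines[$i];
    if ($gap > 1) {
        // This frame ends a sequence: record it in MySQL, move the images,
        // kick off the MJPEG build (omitted), then forget the whole sequence.
        finalize_sequence($requests[$i]);       // hypothetical helper
        $keep = array();                        // drop everything processed so far
    }
}
file_put_contents($log, implode("\n", $keep) . (count($keep) ? "\n" : ''));
?>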
I'm attempting to render a largish HTML table to PDF with dompdf. There is minimal CSS styling, but maybe 200-300 rows in the table. Each row has 4 tds with basic text.
I can generate a PDF from a smaller table with no issues, but a larger table exhausts the memory limit and the script terminates. What is the best way to approach this? I started a discussion on Server Fault and one user suggested spawning a new process so as not to exhaust the memory limits of PHP/Apache. Would that be the best way to do this? It leads me to questions about how it would work, in that dompdf currently streams the download to the browser, but I'm assuming that if I create a new process to generate the report, I can no longer send the output to the browser for the user to download?
Thanks to anyone who might be able to suggest a good way to tackle this!
If you render your HTML using a secondary PHP process (e.g. using exec()), the execution time and memory limits are eased. When rendering via this method, you save the rendered PDF to a directory on the web site and redirect the user to the download (or even email them a link if you want to run a rendering queue). Generally I've found that this method offers modest improvements in speed and memory use.
That doesn't, however, mean the render will perform significantly faster in your situation. Rendering tables with dompdf is resource-intensive at present. What might work better, if you can, is to break the document into parts, render each of those parts separately (again using a secondary PHP process), combine the resulting collection of PDFs into a single file (using something like pdftk), then save the PDF where the user can access it. I've seen a significant performance improvement using this method.
Or go with something like wkhtmltopdf (if you have shell access to your server and are willing to deal with the installation process).
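As a rough sketch of that exec()/pdftk approach (the worker script render_part.php, the paths and the way the table is split are all assumptions, and error handling is omitted):
<?php
// Controller: render each part of the report in its own PHP process,
// then merge the resulting PDFs with pdftk and hand back a link.
$parts = array('part1', 'part2', 'part3');      // hypothetical split of the table
$files = array();

foreach ($parts as $part) {
    $out = "/tmp/report_$part.pdf";
    // render_part.php is a hypothetical worker that loads one chunk of the
    // table into dompdf and writes the PDF to the given path.
    exec('php render_part.php ' . escapeshellarg($part) . ' ' . escapeshellarg($out));
    $files[] = $out;
}

// Merge the pieces into one PDF (assumes pdftk is installed on the server).
exec('pdftk ' . implode(' ', array_map('escapeshellarg', $files))
   . ' cat output ' . escapeshellarg('/var/www/downloads/report.pdf'));

echo '<a href="/downloads/report.pdf">Download your report</a>';
?>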
I have had this trouble with many different generated file types, not just PDFs. Spawning processes did not help, because the problem was the size of the variables: no matter how fresh the process, the variables were still too big. My solution was to create a file and write to it in manageable chunks, so that my variables never got above a certain size.
A basic, untested example:
$tmp = fopen($tmpfilepath, 'w');
if (is_resource($tmp)) {
    echo 'Generating file ... ';
    $dompdf = new DOMPDF();
    $counter = 0;
    $html = '';
    while ($line = getLineOfYourHtml()) {
        $html .= $line;
        $counter++;
        if ($counter % 200 == 0) { // pick a good chunk number here
            $dompdf->load_html($html);
            $dompdf->render();
            $output = $dompdf->output();
            fwrite($tmp, $output);
            echo round($counter / getTotalLines() * 100).'%... '; // echo percent complete
            $html = '';
        }
    }
    if ($html != '') { // the last chunk
        $dompdf->load_html($html);
        $dompdf->render();
        $output = $dompdf->output();
        fwrite($tmp, $output);
    }
    fclose($tmp);
    if (file_exists($tmpfilepath)) {
        echo '100%. Generation complete. ';
        echo '<a href="'.$tmpfilepath.'">Download</a>';
    } else {
        echo ' Generation failed.';
    }
} else {
    echo 'Could not generate file.';
}
Because it takes a while to generate the file, the echoes appear one after another, giving the user something to look at so they don't think the screen has frozen. The final echoed link only appears after the file has been generated, which means the user automatically waits until the file is ready before they can download it. You may have to extend the maximum execution time for this script.
Is it possible to use PHP's readfile function on a remote file whose size is unknown and increasing? Here is the scenario:
I'm developing a script which downloads a video from a third-party website and simultaneously transcodes it into MP3 format. The MP3 is then transferred to the user via readfile.
The command used for the above process is like this:
wget -q -O- "VideoURLHere" | ffmpeg -i - "Output.mp3" > /dev/null 2>&1 &
So the file is fetched and encoded at the same time.
Now, while the above process is in progress, I begin sending the output MP3 to the user via readfile. The problem is that the encoding takes some time, and so, depending on the user's download speed, readfile reaches an assumed EOF before the whole file is encoded, resulting in the user receiving partial content/incomplete files.
My first attempt to fix this was to apply a speed limit to the user's download, but this is not foolproof, as the encoding time and speed vary with load, and it still led to partial downloads.
So is there a way to implement this in such a way that I can serve the download simultaneously with the encoding and also guarantee sending the complete file to the end user?
Any help is appreciated.
EDIT:
In response to Peter, I'm actually using fread (see readfile_chunked below):
<?php
function readfile_chunked($filename,$retbytes=true) {
$chunksize = 1*(1024*1024); // how many bytes per chunk
$totChunk = 0;
$buffer = '';
$cnt =0;
$handle = fopen($filename, 'rb');
if ($handle === false) {
return false;
}
while (!feof($handle)) {
//usleep(120000); //Used to impose an artificial speed limit
$buffer = fread($handle, $chunksize);
echo $buffer;
ob_flush();
flush();
if ($retbytes) {
$cnt += strlen($buffer);
}
}
$status = fclose($handle);
if ($retbytes && $status) {
return $cnt; // return num. bytes delivered like readfile() does.
}
return $status;
}
readfile_chunked($linkToMp3);
?>
This still does not guarantee complete downloads, since, depending on the user's download speed and the encoding speed, EOF may be reached prematurely.
Also, in response to theJeztah's comment, I'm trying to achieve this without making the user wait, so that's not an option.
Since you are dealing with streams, you should probably use the stream-handling functions :). passthru comes to mind, although this will only work if the download | transcode command is started by your script.
If it is started externally, take a look at stream_get_contents.
Libevent, as mentioned by Evert, seems like the general solution when you have to use a file as a buffer. However, in your case you could do it all inline in your script without using a file as a buffer:
<?php
header("Content-Type: audio/mpeg");
passthru("wget -q -O- http://localhost/test.avi | ffmpeg -i - -f mp3 -");
?>
I don't think there's any way of being notified that new data is available, short of something like inotify.
I suggest that when you hit EOF, you start polling the modification time of the file (using clearstatcache() between calls) every 200 ms or so. When you find the file size has increased, reopen the file, seek to the last position and continue.
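A minimal, untested sketch of that idea; the 200 ms interval, the chunk size and the $writer_done_callback (something that answers "has ffmpeg finished?") are all assumptions:
<?php
// Follow a file that is still being written to: send what is there, and on
// EOF poll the file size until it grows, then reopen, seek and continue.
function follow_file($filename, $writer_done_callback) {
    $position = 0;
    do {
        $finished = call_user_func($writer_done_callback); // e.g. "has ffmpeg exited?"
        clearstatcache();                       // filesize() results are cached otherwise
        if (filesize($filename) > $position) {
            $handle = fopen($filename, 'rb');
            fseek($handle, $position);          // pick up where we left off
            while (!feof($handle)) {
                $buffer = fread($handle, 8192);
                $position += strlen($buffer);
                echo $buffer;
                flush();
            }
            fclose($handle);
        } elseif (!$finished) {
            usleep(200000);                     // nothing new yet, wait ~200 ms
        }
    } while (!$finished);
}
?>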
I can highly recommend libevent for applications like this.
It works perfectly for cases like this.
The PHP documentation for it is a bit sparse, but you should be able to find more solid examples around the web.
All,
I have built a form; when a user fills it in, my code can determine which (large) files need to be downloaded by that user. I display each file with a download button, and I want to show the status of the download of that file next to the button (download in progress, cancelled/aborted or completed). Because the files are large (200 MB+), I use a well-known function to read the file to be downloaded in chunks.
My idea was to log the reading of each (100 kB) chunk in a database, so I can keep track of the progress of the download.
My code is below:
function readfile_chunked($filename, $retbytes = TRUE)
{
    // Read the personID from the cookie
    $PersonID = $_COOKIE['PERSONID'];

    // Set up the connection to the database for logging download progress per file
    include "config.php";
    $SqlConnectionInfo = array("UID" => $SqlServerUser, "PWD" => $SqlServerPass, "Database" => $SqlServerDatabase);
    $SqlConnection = sqlsrv_connect($SqlServer, $SqlConnectionInfo);

    $ChunckSize = 100 * 1024; // 100 kB buffer
    $buffer = '';
    $cnt = 0;
    $handle = fopen($filename, 'rb');
    if ($handle === false)
    {
        return false;
    }
    $BytesSent = 0;
    while (!feof($handle))
    {
        $buffer = fread($handle, $ChunckSize);
        $BytesSent = $BytesSent + $ChunckSize;
        // update database
        $Query = "UPDATE [SoftwareDownload].[dbo].[DownloadProgress] SET LastTimeStamp = '" . time() . "' WHERE PersonID='$PersonID' AND FileName='$filename'";
        $QueryResult = sqlsrv_query($SqlConnection, $Query);
        echo $buffer;
        ob_flush();
        flush();
        if ($retbytes)
        {
            $cnt += strlen($buffer);
        }
    }
    $status = fclose($handle);
    if ($retbytes && $status)
    {
        return $cnt; // return num. bytes delivered like readfile() does.
    }
    return $status;
}
The weird thing is that my download runs smoothly at a constant rate, but my database gets updated in spikes: sometimes I see a couple of dozen database updates (so a couple of dozen 100 kB blocks) within a second, and the next moment I see no database updates (so no 100 kB blocks being fed by PHP) for up to a minute (this delay depends on the bandwidth available for the download). The fact that no progress is written for a minute makes my code believe that the download was aborted.
In the meantime I see the memory usage of IIS increasing, so I get the impression that IIS buffers the chunks of data that PHP delivers.
My web server runs Windows 2008 R2 Datacenter (64-bit), IIS 7.5 and PHP 5.3.8 in FastCGI mode. Is there anything I can do (in my code, in my PHP config or in the IIS config) to prevent IIS from caching the data that PHP delivers, or is there any way to see from PHP whether the generated data was actually delivered to the downloading client?
Thanks in advance!
As no useful answers came up, I've created a rather dodgy workaround: Apache is used strictly for the download of the file(s). The whole form, CSS, etc. is still delivered through IIS; only the download links are served by Apache (a reverse proxy sits in between, so users don't have to fiddle with port numbers; the proxy takes care of this).
Apache doesn't cache PHP output, so the timestamps I get in my database are reliable.
I asked a similar question once and got very good results with the answers there.
Basically, you use the APC cache to cache a variable, then retrieve it in a different page with an Ajax call from the client.
Another possibility is to do the same with a database: write the progress to a database and have the client poll for it (say, every second) using an Ajax call.
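A minimal sketch of the APC variant: the download loop stores its progress under a per-download key, and a small hypothetical progress.php endpoint returns it for the Ajax poll (the key format, request parameters and JSON shape are all assumptions):
<?php
// Inside the download loop (e.g. readfile_chunked above), after each chunk:
//   apc_store('progress_' . $PersonID . '_' . md5($filename), $BytesSent, 300);

// progress.php -- polled by the client every second or so via Ajax.
$key = 'progress_' . $_GET['person'] . '_' . $_GET['file']; // hypothetical parameters
$bytes = apc_fetch($key);

header('Content-Type: application/json');
echo json_encode(array('bytes_sent' => ($bytes === false ? 0 : $bytes)));
?>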