I'm using a PHP script to stream a live video (i.e. a file which never ends) from a remote source. The output is viewed in VLC, not a web browser. I need to keep a count of the number of bytes transferred. Here is my code:
<?php
ignore_user_abort(true);

$stream = $_GET['stream'];
if($stream == "vid1")
{
    $count = readfile('http://127.0.0.1:8080/');
    logThis($count);
}

function logThis($c)
{
    $myFile = "bytecount.txt";
    $handle = fopen($myFile, 'a');
    fwrite($handle, "Count: " . $c . "\n");
    fclose($handle);
}
?>
However, it appears that when the user presses the stop button, logThis() is never called, even though I've put in ignore_user_abort(true).
Any ideas on what I'm doing wrong?
Thanks
Update 2: I've changed my code, as I shouldn't be using ignore_user_abort(true); that would continue to download the file forever, even after the client has gone. I've changed my code to this:
<?php
$count = 0;

function bye()
{
    // Create a dummy file with a filename equal to the count
}
register_shutdown_function('bye');

set_time_limit(0);
ignore_user_abort(false);

$stream = $_GET['stream'];
if($stream == "vid1")
{
    $GLOBALS['count'] = readfile('http://127.0.0.1:8080/');
    exit();
}
?>
My problem now is that when the script is aborted (i.e. user presses stop), readfile won't return a value (i.e. count remains at 0). Any ideas on how I can fix this?
Thanks
When a PHP script is running normally, the NORMAL state is active. If the remote client disconnects, the ABORTED state flag is turned on. A remote client disconnect is usually caused by the user hitting his STOP button. If the PHP-imposed time limit (see set_time_limit()) is hit, the TIMEOUT state flag is turned on.
So setting set_time_limit(0) should help.
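For illustration, here is a minimal sketch (not code from the posts above; the chunk text and sleep interval are arbitrary) of how those states can be observed with connection_status(), which returns the CONNECTION_NORMAL, CONNECTION_ABORTED and CONNECTION_TIMEOUT constants corresponding to the flags described above:

<?php
set_time_limit(0);        // never enter the TIMEOUT state
ignore_user_abort(true);  // keep running after an abort so the state can be inspected

while (true) {
    echo "chunk\n";       // an abort is only detected when output is sent
    flush();
    if (connection_status() !== CONNECTION_NORMAL) {
        // Client hit stop (CONNECTION_ABORTED) or a time limit fired (CONNECTION_TIMEOUT)
        break;
    }
    sleep(1);
}
?>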
OK folks, I managed to fix this. The trick was not to use readfile() but to read the video stream in small chunks. It may not be 100% accurate, but a few bytes of inaccuracy here or there is OK.
<?php
$count = 0;

function logCount()
{
    // Write out a dummy file with a filename equal to the count
}
register_shutdown_function('logCount');

set_time_limit(0);
ignore_user_abort(false);

$stream = $_GET['stream'];
if($stream == "vid1")
{
    $filename = 'http://127.0.0.1:8080/';
    $f = fopen($filename, "rb");
    while($chunk = fread($f, 1024)) {
        echo $chunk;
        flush();
        if(!connection_aborted()) {
            $GLOBALS['count'] += strlen($chunk);
        }
        else {
            exit();
        }
    }
}
?>
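For completeness, here is one possible body for the logCount() stub above. This is only a guess at what was intended; the counts/ directory is an assumption and must exist and be writable by PHP:

<?php
function logCount()
{
    // Create an empty marker file whose name is the final byte count,
    // e.g. counts/73456 (the directory name is hypothetical)
    touch(__DIR__ . '/counts/' . $GLOBALS['count']);
}
?>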
First of all, sorry for my probably bad English...
I'm trying to make a script that works only until an event occurs or the connection is stopped. I have to work with output buffers (for logging and debugging reasons), but I noticed that after the connection is lost, nothing I put in the buffer appears.
Here is a script I made to test this behaviour:
<?php
// here I avoid letting the script stop when the connection is lost,
// in order to continue with logging, debugging or whatever
ignore_user_abort(true);

// here there is a utility function that flushes the content
// to let us know if the connection is still active
$fnExec = 0; // this is a variable useful to log the script activity

function is_connection_aborted() {
    global $fnExec;
    $fnExec++;
    // now I get the content of all output buffers and store them,
    // then I clean and flush the outputs
    $obs = 0;
    $contents = array();
    while ($level = ob_get_level()) {
        $obContent = ob_get_contents();
        array_unshift($contents, $obContent);
        ob_clean();
        ob_end_flush();
        $obs++;
    }
    echo "\r"; // this is needed in order to have at least a character to flush
    flush();
    $conn = connection_aborted();
    // here I start an output buffer to log what's inside the $contents variable
    ob_start();
    var_dump($contents);
    $fh = fopen('test/fn_'.$fnExec.'.txt', 'w');
    fwrite($fh, ob_get_clean());
    fclose($fh);
    // now I restore the content of all the output buffers
    $count = count($contents) - 1;
    $index = 0;
    do {
        ob_start();
        echo $contents[$index];
    } while (++$index < $obs);
    // finally, return the value of connection_aborted()
    return $conn;
}

// I want to start an output buffer here to manage the output of the script
ob_start();

// This is a simple script that runs for at most 5 seconds and prints an
// incremental number on each iteration, after sleeping 2 seconds.
// It can be stopped by aborting the connection!
$start = time();
$attempts = 0;
while (true) {
    echo ++$attempts."\r\n";
    if (is_connection_aborted() || (time() - $start) > 5) {
        break;
    }
    sleep(2);
}

// Here I get the content of the output buffer
$content = ob_get_clean();
// And finally I log the content to a file
$fh = fopen('test/test_ob.txt', 'w');
fwrite($fh, $content);
fclose($fh);
Now to the test!
I leave the connection open for about 2 seconds, so I expect to see only 1-2 iterations and then a final output like '1 2 3' in the 'test/test_ob.txt' file.
The result, instead, is that the file 'test/test_ob.txt' is empty, although I do get the log files from the function's executions! It seems like every output buffer started with ob_start() after the connection is lost stays empty!
Why do output buffers work only until the connection is lost? Is the loss of connection really the problem?
Thanks in advance!
I'm working on a cron system and need to execute a script only once at a time. Using the following code, I execute the script a first time and, while it's looping (for delaying purposes), I execute it again; but file_exists always returns false, while the first execution prints the content of the file after the loop is done.
Cronjob.php:
include "Locker.class.php";
Locker::$LockName = __DIR__.'/OneTime_[cron].lock';
$Locker = new Locker();
for ($i = 0 ; $i < 1000000; $i++){
echo 'Z';
$z = true;
ob_end_flush();
ob_start();
}
Locker.class.php:
class Locker{
    static $LockName;

    function __construct($Expire){
        if (!basename(static::$LockName)){
            die('Locker: Not a filename.');
        }
        // It doesn't help
        clearstatcache();
        if (file_exists(static::$LockName)){ // always returns false
            die('Already running');
        } else {
            $myfile = fopen(static::$LockName, "x"); // Tried with 'x' and 'w', no luck
            fwrite($myfile, 'Keep it alive'); // Tried with file_put_contents also, no luck
            fclose($myfile);
        }
        // The following function returns true, by the way!
        // echo file_exists(static::$LockName);
    }

    function __destruct(){
        // It outputs the content
        echo file_get_contents(static::$LockName);
        unlink(static::$LockName);
    }
}
What is the problem? Why does file_exists always return false?
I suspect the PHP parser has noticed that you never use the variable $Locker, so it immediately destroys the object, which runs the destructor and removes the file. Try putting a reference to the object after the loop:
include "Locker.class.php";
Locker::$LockName = __DIR__.'/OneTime_[cron].lock';
$Locker = new Locker();
for ($i = 0 ; $i < 1000000; $i++){
echo 'Z';
$z = true;
ob_end_flush();
ob_start();
}
var_dump($Locker);
If your goal is to prevent a potentially long-running job from executing multiple copies at the same time, you can take a simpler approach and just flock() the file itself.
This would go in cronjob.php
<?php
$wb = false;
$fp = fopen(__FILE__, 'r');
if (!$fp) die("Could not open file");
$locked = flock($fp, LOCK_EX | LOCK_NB, $wb);
if (!$locked) die("Couldn't acquire lock!\n");
// do work here
sleep(20);
flock($fp, LOCK_UN);
fclose($fp);
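A note on the flags: LOCK_NB makes flock() return immediately instead of blocking, and the third argument ($wb, passed by reference) is set to true if the lock would have blocked, which lets you distinguish "another instance holds the lock" from other failures.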
To address your actual question: by running your code, I found that the reason the file is going away is that, on subsequent calls, the script outputs Already running if a job is running, and then that second script invokes the destructor and deletes the file before the initial task has finished running.
The flock method above solves this problem. Otherwise, you'll need to ensure that only the process that actually creates the lock file is able to delete it (and take care that it never gets left around too long).
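A minimal sketch of that idea (this is not the original Locker class; the $createdLock flag is an addition for illustration): remember whether this process created the lock file, and only delete it in that case.

<?php
class Locker{
    static $LockName;
    private $createdLock = false;

    function __construct(){
        if (file_exists(static::$LockName)){
            die('Already running');
        }
        file_put_contents(static::$LockName, getmypid());
        $this->createdLock = true;
    }

    function __destruct(){
        // Only the process that actually created the lock file may remove it
        if ($this->createdLock && file_exists(static::$LockName)){
            unlink(static::$LockName);
        }
    }
}
?>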
I have a simple PHP service set up on an IIS web server. It is used by my client to retrieve files from the server. It looks like this:
<?php
if (isset($_GET['file']))
{
    $filepath = "C:\\files\\" . $_GET['file'];
    if (!strpos(pathinfo($filepath, PATHINFO_DIRNAME), "..") && file_exists($filepath) && !is_dir($filepath))
    {
        set_time_limit(0);
        $fp = @fopen($filepath, "rb");
        while(!feof($fp))
        {
            print(@fread($fp, 1024*8));
            ob_flush();
            flush();
        }
    }
    else
    {
        echo "ERROR at www.testserver.com\r\n";
    }
    exit;
}
?>
I retrieve the files using WinHttp's WinHttpReadData in C++.
EDIT #2: Here is the C++ code. This is not exactly how it appears in my program. I had to pull pieces from multiple classes, but the gist should be apparent.
session = WinHttpOpen(appName.c_str(), WINHTTP_ACCESS_TYPE_NO_PROXY, WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);

if (session) connection = WinHttpConnect(session, hostName.c_str(), INTERNET_DEFAULT_HTTP_PORT, 0);

if (connection) request = WinHttpOpenRequest(connection, NULL, requestString.c_str(), NULL, WINHTTP_NO_REFERER, WINHTTP_DEFAULT_ACCEPT_TYPES, 0);

bool results = false;
if (request)
{
    results = (WinHttpSendRequest(request, WINHTTP_NO_ADDITIONAL_HEADERS, 0, WINHTTP_NO_REQUEST_DATA, 0, 0, 0) != FALSE);
}
if (results)
{
    results = (WinHttpReceiveResponse(request, NULL) != FALSE);
}

DWORD bytesCopied = 0;
DWORD size = 0;
if (results)
{
    do {
        results = (WinHttpQueryDataAvailable(request, &size) != FALSE);
        if (results)
        {
            // More available data?
            if (size > 0)
            {
                // Read the data.
                size = min(bufferSize, size);
                ZeroMemory(buffer, size);
                results = (WinHttpReadData(request, (LPVOID)buffer, size, &bytesCopied) != FALSE);
            }
        }
        if (bytesCopied > 0 && !SharedShutDown.GetValue())
        {
            tempFile.write((PCHAR)buffer, bytesCopied);
            if (tempFile.fail())
            {
                tempFile.close();
                return false;
            }
            fileBytes += bytesCopied;
        }
    } while (bytesCopied > 0 && !SharedShutDown.GetValue());
}
Everything works fine when I test (thousands of files) over the local network using the server computer name from either a Windows 7 or Windows 10 machine. It also works fine when I access the service over the internet from a Windows 7 machine. However, when I run the client on a Windows 10 machine accessing over the internet, I get dropped characters. The interesting thing is that it is a specific set of characters that gets dropped every time from XML files. (Other, binary, files are affected as well, but I have not yet determined what changes in them.)
If the XML file contains an element starting with "<Style", that text disappears. So, this:
<Element1>blah blah</Element1>
<Style_Element>hoopa hoopa</Style_Element>
<Element2>bip bop bam</Element2>
becomes this:
<Element1>blah blah</Element1>
_Element>hoopa hoopa</Style_Element>
<Element2>bip bop bam</Element2>
Notice that the beginning of the style element is chopped off. This is the only element that is affected, and it seems to affect only the first one if there is more than one in the file.
What perplexes me is why this doesn't happen running the client from Windows 7.
EDIT: Some of the other files, binary and text, are missing from 1 to 3 characters each. It seems that a drop only happens once in a file. The rest of the contents of the file are identical to the source.
I can't make sense of the above read routine, and it is also incomplete. Just keep it simple, like the example below.
The fact that you are having problems with binary files suggests you are not opening the output tempFile in binary mode.
std::ofstream tempFile(filename, std::ios::binary);

DWORD size = 0, bytesCopied = 0;
while (WinHttpQueryDataAvailable(request, &size) && size)
{
    std::string buf(size, 0);
    WinHttpReadData(request, &buf[0], size, &bytesCopied);
    tempFile.write(buf.data(), bytesCopied);
}
Your PHP file can be simplified as follows:
<?php
readfile('whatever.bin');
?>
I solved the problem, it seems. My PHP service did not include header information (I didn't think I needed it), so I figured I would try adding a header specifying the content type application/octet-stream, just to see what would happen. My updated service looks like this:
if (isset($_GET['file']))
{
    $filepath = "C:\\Program Files (Unrestricted)\\Sony Online Entertainment\\Everquest Yarko Client\\" . $_GET['file'];
    if (!strpos(pathinfo($filepath, PATHINFO_DIRNAME), "..") && file_exists($filepath) && !is_dir($filepath))
    {
        header("Content-Type: application/octet-stream");
        set_time_limit(0);
        $fp = @fopen($filepath, "rb");
        while(!feof($fp))
        {
            print(@fread($fp, 1024*8));
            ob_flush();
            flush();
        }
    }
    else
    {
        echo "ERROR at www.lewiefitz.com\r\n";
    }
    exit;
}
Now the files download without any corruption. Why I need such a header in this situation is beyond me. What part of the system was messing with the response message before it ended up in my buffer? I don't know.
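For reference, a fuller set of headers commonly sent when serving binary downloads from PHP (a sketch; the file path is hypothetical, and which of these headers is strictly necessary in this situation is untested):

<?php
$filepath = "C:\\files\\example.bin"; // hypothetical path

header("Content-Type: application/octet-stream");
header("Content-Length: " . filesize($filepath));
header('Content-Disposition: attachment; filename="' . basename($filepath) . '"');
?>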
Nowadays there are a lot of file hosting (upload) websites, and some of them count, for example, a point per complete download of a certain file.
My question
I want to understand the idea they are using: how do they count only complete downloads of a file?
I mean, if I cancel the download of the file after it has started, it won't count a point!
How do they know? Is there any PHP function able to tell whether I cancelled the download of a certain exact file or not?
That question has been on my mind the whole time and I keep thinking about it, but I can't understand how it works or what the idea behind it is. ~ thanks
This can be done by using my other answer as a base (How can I give download access to files outside public_html directory?) and replacing readfile( $filename ) with readfileWhileConnected( $filename ):
Read file until EOF or disconnect:
/** Read $filename until EOF or disconnect;
 * if disconnected, error_log() the count of bytes already read
 */
function readfileWhileConnected( $filename ) {
    // Save and set ini values:
    $user_abort = ignore_user_abort();
    ignore_user_abort(false);

    // Get file size and set bytes_sent to zero:
    $fsize = filesize($filename);
    $bytes_sent = 0;

    // Open file:
    $f = fopen($filename, 'r');

    // Read file:
    while($chunk = fread($f, 1024)) {
        // Check if connection is still open:
        if(!connection_aborted()) {
            // Send $chunk to buffer (if any), then flush() buffers:
            echo $chunk;
            flush();
            // Add $chunk length to $bytes_sent
            $bytes_sent += strlen($chunk);
        } else {
            // Close file:
            fclose($f);
            error_log("Connection closed at $bytes_sent/$fsize");
            exit();
        }
    }

    // Close file:
    fclose($f);

    // Reset ini values:
    ignore_user_abort($user_abort);
    return $bytes_sent;
}
After you have your new shiny class myNewSuperDownloadHandlerClass { ... } ready, make sure you only serve downloads through the filedownload.php described there, or, if your myNewSuperDownloadHandlerClass() is well done, use that; just make sure that readfileWhileConnected() is used for every download that requires connection status polling.
You can easily add a callback to be triggered if the user closes the connection; there are only 2 exit points here. (I've seen many functions that have return false; return true; return null; return false; return true; scattered all over...)
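A rough sketch of that callback idea (the $onAbort parameter and the log message are not part of the original answer; they are made up for illustration):

<?php
function readfileWhileConnected( $filename, callable $onAbort = null ) {
    $bytes_sent = 0;
    $f = fopen($filename, 'r');
    while($chunk = fread($f, 1024)) {
        if(connection_aborted()) {
            fclose($f);
            if($onAbort) {
                $onAbort($bytes_sent); // the single disconnect exit point
            }
            exit();
        }
        echo $chunk;
        flush();
        $bytes_sent += strlen($chunk);
    }
    fclose($f);
    return $bytes_sent; // the single EOF exit point
}

// Usage: log aborted downloads without touching the read loop itself
readfileWhileConnected('somefile.bin', function ($sent) {
    error_log("Client disconnected after $sent bytes");
});
?>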
I have a script I use that checks an IP address stored within my hosts.allow file against the IP currently mapped to my DynDNS hostname, so I can log into my servers once I've synced my current IP to that hostname. For some reason, though, the script seems to cause really intermittent issues.
Within my hosts.allow file I have a section like this:
#SOme.gotdns.com
sshd : 192.168.0.1
#EOme.gotdns.com
#SOme2.gotdns.com
sshd : 192.168.0.2
#EOme2.gotdns.com
I have a script running on a cron (every minute) that looks like this:
#!/usr/bin/php
<?php
$hosts = array('me.gotdns.com','me2.gotdns.com');

foreach($hosts as $host)
{
    $ip = gethostbyname($host);
    $replaceWith = "#SO".$host."\nsshd : ".$ip."\n#EO".$host;
    $filename = '/etc/hosts.allow';

    $handle = fopen($filename,'r');
    $contents = fread($handle, filesize($filename));
    fclose($handle);

    if (preg_match('/#SO'.$host.'(.*?)#EO'.$host.'/si', $contents, $regs))
    {
        $result = $regs[0];
    }

    if($result != $replaceWith)
    {
        $newcontents = str_replace($result,$replaceWith,$contents);
        $handle = fopen($filename,'w');
        if (fwrite($handle, $newcontents) === FALSE) {
        }
        fclose($handle);
    }
}
?>
The problem I have is that intermittently characters are being dropped (I assume during the replace), which causes future updates to fail, because it inserts something like:
#SOme.gotdns.com
sshd : 192.168.0.1
#EOme.gotdn
Note the missing "s.com".
This of course means I lose access to the server. Any ideas why this would be happening?
Thanks.
That might be because of the script execution time (it can be too short), or the 1-minute interval is too short. While cron is doing the job, another process of the script starts, and it may affect the first one.
This is almost certainly because the script hasn't finished executing within the one-minute period before it's started again via cron. You need to implement some sort of locking, or use a tool that only allows one instance of the script to run at a time. There are several tools available out there that can do this, for example lockrun.
I would say that in order to do this safely, you should acquire an exclusive lock on the file at the beginning of the script, read it all into memory once, modify it in memory, then write it back to the file at the end. This would also be considerably more efficient in terms of disk I/O.
You should also alter the cron job to run less frequently. It is likely that the reason you currently have this problem is because two processes are running at the same time - by locking the file, if this is the case, you risk having the processes stack up waiting to acquire a lock. Setting it for every 5 minutes should be good enough - your IP shouldn't change that often!
So do this (FIXED):
#!/usr/bin/php
<?php

// Settings
$hosts = array(
    'me.gotdns.com',
    'me2.gotdns.com'
);
$filename = '/etc/hosts.allow';

// No time limit (shouldn't be necessary with CLI, but just in case)
set_time_limit(0);

// Open the file in read/write mode and lock it
// flock() should block until it gets a lock
if ((!$handle = fopen($filename, 'r+')) || !flock($handle, LOCK_EX)) exit(1);

// Read the file
if (($contents = fread($handle, filesize($filename))) === FALSE) exit(1);

// Will be set to true if we actually make any changes to the file
$changed = FALSE;

// Loop hosts list
foreach ($hosts as $host) {

    // Get current IP address of host
    if (($ip = gethostbyname($host)) == $host) continue;

    // Find the entry in the file
    $replaceWith = "#SO{$host}\nsshd : {$ip}\n#EO{$host}";
    if (preg_match("/#SO{$host}(.*?)#EO{$host}/si", $contents, $regs)) {
        // Only do this if there was a match - otherwise you risk overwriting previous
        // entries because you didn't reset the value of $result
        if ($regs[0] != $replaceWith) {
            $changed = TRUE;
            $contents = str_replace($regs[0], $replaceWith, $contents);
        }
    }
}

// We'll only change the contents of the file if the data changed
if ($changed) {
    ftruncate($handle, 0);      // Zero the length of the file
    rewind($handle);            // start writing from the beginning
    fwrite($handle, $contents); // write the new data
}

flock($handle, LOCK_UN); // Unlock
fclose($handle);         // close