I have an app that processes images and uses jQuery to display progress to the user.
I did this by writing to a text file each time an image is processed and then reading that status with a setInterval.
Because no images are actually written during processing (it all happens in PHP's memory), I thought a log.txt would be a solution, but I am not sure about all the fopen and fread calls. Is this prone to issues?
I also tried PHP sessions, but I can't seem to get them to work, and I don't understand why.
HTML:
<a class="download" href="#">request download</a>
<p class="message"></p>
JS:
$('a.download').click(function() {
    var queryData = {images : ["001.jpg", "002.jpg", "003.jpg"]};
    $("p.message").html("initializing...");
    var progressCheck = function() {
        $.get("dynamic-session-progress.php",
            function(data) {
                $("p.message").html(data);
            }
        );
    };
    $.post('dynamic-session-process.php', queryData,
        function(intvalId) {
            return function(data) {
                $("p.message").html(data);
                clearInterval(intvalId);
            };
        } (setInterval(progressCheck, 1000))
    );
    return false;
});
process.php:
// session_start();
$arr = $_POST['images'];
$arr_cnt = count($arr);
$filename = "log.txt";

for ($i = 1; $i <= $arr_cnt; $i++) {
    $val = $arr[$i - 1]; // the image currently being processed
    $content = "processing $val ($i/$arr_cnt)";
    $handle = fopen($filename, 'w');
    fwrite($handle, $content);
    fclose($handle);
    // $_SESSION['counter'] = $content;
    sleep(3); // to mimic image processing
}

echo "<a href='#'>download zip</a>";
progress.php:
// session_start();
$filename = "log.txt";
$handle = fopen($filename, "r");
$contents = fread($handle, filesize($filename));
fclose($handle);
echo $contents;
// echo $_SESSION['counter'];
What if two clients process images at the same time?
You can try adding session_write_close() right after setting the new status in the session, so that the updated session data is stored immediately; otherwise it only gets stored once your script finishes.
Another solution would be to save the status in memcache or in a database,
perhaps keyed per user (e.g. by user id or an md5 hash of the image data) so two clients don't overwrite each other's status.
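For illustration, here is a minimal sketch of the session-based variant (file names follow the question; sleep() stands in for the real processing). The important detail is that session_write_close() both persists the counter and releases the session lock, so the polling script is not blocked while an image is being processed:
<?php
// dynamic-session-process.php (sketch)
session_start();
$images = $_POST['images'];
$total  = count($images);

foreach ($images as $i => $image) {
    $_SESSION['counter'] = "processing $image (" . ($i + 1) . "/$total)";
    session_write_close();   // persist the status and release the session lock
    sleep(3);                 // stand-in for the real image processing
    session_start();          // reopen the session for the next update
}

$_SESSION['counter'] = 'done';
session_write_close();
echo "<a href='#'>download zip</a>";

<?php
// dynamic-session-progress.php (sketch)
session_start();
echo isset($_SESSION['counter']) ? $_SESSION['counter'] : 'starting...';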
I managed to make this PHP counter without a database; it is very basic and just increments the visit count in a .txt file:
$counter_file = ("count.txt");
$fp = fopen($counter_file, "r");
$count = fread($fp, 1024);
fclose($fp);
$count = $count +1;
$fp = fopen($counter_file, "w");
fwrite($fp, $count);
fclose($fp);
But this counter fails on a remote server when the visits come too fast: it goes back to 0.
What can explain this behaviour, and how can I make sure the counter never goes back to 0?
Edit: this script seems to be more robust. It uses flock() as @ghopst suggested.
$counter_file = ("count.txt");
$handle = fopen($counter_file, "r+");

// Lock the file, error if unable to lock
if (flock($handle, LOCK_EX)) {
    $count = fread($handle, filesize($counter_file));
    $count = $count + 1;
    ftruncate($handle, 0);
    rewind($handle);
    fwrite($handle, $count);
    flock($handle, LOCK_UN);
} else {
    echo "Could not lock file!";
}
fclose($handle);
It's down to the file system. Your code opens, reads and closes the file, then opens, writes and closes it again for every visitor. While the file is being written it is locked against other writers; that is file-system behaviour, and a request that loses the race can end up reading an empty file and restarting the count. Perhaps it would be better to have a simple database table with an auto-increment column: insert a row for each visit (old rows can be deleted to keep the table small, the auto-increment value keeps growing) and read the latest id back as the count.
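As a rough sketch of that idea (table name, column and credentials are invented for the example, not taken from the question):
<?php
// One row per visit; MySQL serializes the inserts, so concurrent visitors
// cannot clobber each other the way they can with a shared text file.
// CREATE TABLE visits (id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY);
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->exec('INSERT INTO visits () VALUES ()');
echo $pdo->lastInsertId(); // the auto-increment id doubles as the visit count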
Try this version:
<?php
$counter_file = ("count.txt");
$count = #file_get_contents($counter_file);
$count = $count ? intval($count) + 1 : 1;
file_put_contents($counter_file, $count);
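If you are worried about two requests writing at the same time, note that file_put_contents() also accepts the LOCK_EX flag, which takes the same advisory lock as the flock() version above (the read is still unlocked, so this is only a partial guard):
file_put_contents($counter_file, $count, LOCK_EX); // blocks while another writer holds the lock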
I need to read a file that is changing all the time. The file has, and will only ever have, one line, and that line changes constantly.
I found the following code that should do what I want here: PHP: How to read a file live that is constantly being written to
But the code does not work: the page just keeps loading. I tried adding a flush() like one user suggested, but I still can't make it work.
Here's the code
$file = '/home/user/youfile.txt';
$lastpos = 0;
while (true) {
    usleep(300000); // 0.3 s
    clearstatcache(false, $file);
    $len = filesize($file);
    if ($len < $lastpos) {
        // file deleted or reset
        $lastpos = $len;
    }
    elseif ($len > $lastpos) {
        $f = fopen($file, "rb");
        if ($f === false)
            die();
        fseek($f, $lastpos);
        while (!feof($f)) {
            $buffer = fread($f, 4096);
            echo $buffer;
            flush();
        }
        $lastpos = ftell($f);
        fclose($f);
    }
}
Please could someone have a look and let me know how to fix it.
Thanks in advance.
If your file has only one line and you need to read it whenever it changes, use this code:
$file = '/path/to/test.txt';
$last_modify_time = 0;
while (true) {
    sleep(1); // 1 s
    clearstatcache(true, $file);
    $curr_modify_time = filemtime($file);
    if ($last_modify_time < $curr_modify_time) {
        echo file_get_contents($file);
    }
    $last_modify_time = $curr_modify_time;
}
Note:
filemtime() returns the last modification time with one-second resolution, so if you need to detect changes more than once per second you'll probably need a different approach (see the sketch below).
Also, you may need to add set_time_limit(0); it depends on your requirements.
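If one-second resolution is not enough, one possible workaround (a sketch, not part of the original answer) is to compare a hash of the contents instead of the modification time:
$file = '/path/to/test.txt';
$last_hash = '';
while (true) {
    usleep(200000); // 0.2 s
    $hash = md5_file($file);
    if ($hash !== false && $hash !== $last_hash) {
        echo file_get_contents($file);
        flush();
        $last_hash = $hash;
    }
}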
Update:
index.html
<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8">
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"
            type="text/javascript">
    </script>
</head>
<body>
    <div id="file_content"></div>
    <script>
    var time = 0;
    setInterval(function() {
        $.ajax({
            type: "POST",
            data: {time : time},
            url: "fileupdate.php",
            success: function (data) {
                var result = $.parseJSON(data);
                if (result.content) {
                    $('#file_content').append('<br>' + result.content);
                }
                time = result.time;
            }
        });
    }, 1000);
    </script>
</body>
</html>
fileupdate.php
<?php
$file = 'test.txt';
clearstatcache(true, $file);

$data['time'] = filemtime($file);
$data['content'] = $_POST['time'] < $data['time']
    ? file_get_contents($file)
    : false;

echo json_encode($data);
You might be dealing with three drawbacks:
First, the code you already have keeps a $lastpos. That means it always looks for what was appended to the end of the file. You are not clear in the OP, but I think this is not what you want: your one and only line is continuously changing, not necessarily changing size. So you might want to remove the line $lastpos = ftell($f);.
Second, and related to the same point, you are checking the file size to know whether the file has changed. As explained above, the file may have changed while its size stayed the same. Try replacing the file-size check with a check of the file's last-modification time.
Third, and probably most importantly: your web browser might be buffering your output until the PHP script has finished running, before it releases the buffered output to the browser. Disable output buffering in both PHP and your web server. Things like gzip/compression by the web server can also force output-buffering effects.
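As a starting point, here is a best-effort sketch of turning buffering off on the PHP side (the header and apache_setenv() lines are server-specific assumptions you should verify against your own setup):
<?php
@ini_set('zlib.output_compression', '0'); // don't let PHP gzip the stream
while (ob_get_level() > 0) {
    ob_end_flush();              // close any buffers opened by the configuration
}
ob_implicit_flush(true);         // flush automatically after every echo
header('X-Accel-Buffering: no'); // nginx: do not buffer this response
if (function_exists('apache_setenv')) {
    @apache_setenv('no-gzip', '1'); // Apache mod_deflate: skip compression
}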
I'm programming a visit counter for my website...
The text file should look like this:
index.php: 4 views
contact.php: 6 views
etc.
Here is my code:
function set_cookie(){
    setcookie("counter", "Don't delete this cookie!", time()+600);
}

function count_views(){
    $page = basename($_SERVER['PHP_SELF']);
    $file = fopen("counter.txt", "r+");
    $page_found = false;

    if (!isset($_COOKIE['counter'])) {
        while (!feof($file)) {
            $currentline = fgets($file);
            if (strpos($currentline, ":")) {
                $filecounter = explode(":", $currentline);
                $pif = $filecounter[0];
                $counterstand = $filecounter[1];
                if ($pif == $page) {
                    $counterstand = intval($counterstand);
                    $counterstand++;
                    fseek($file, -1);
                    fwrite($file, $counterstand);
                    $page_found = true;
                    set_cookie();
                }
            }
        }
        if (!$page_found) { fwrite($file, $page . ": 1\n"); }
        fclose($file);
    }
}
And now my problem:
Every time I visit the page, it fails to update the value in place, so at the end the file looks like this:
home.php: 1
index.php: 1
2222
It looks like it takes the 1 from the correct line after the filename and appends the incremented value at the end of the file instead.
How can I write the new value into the correct line?
This is another method to store your data in a text file that changes frequently.
function count_views(){
    $page = basename($_SERVER['PHP_SELF']);
    $filename = "counter.txt";

    if (!isset($_COOKIE['counter'])) {
        $fh = fopen($filename, 'r+');
        $content = @fread($fh, filesize($filename));
        $arr_content = json_decode($content, true);

        if (isset($arr_content[$page])) {
            $arr_content[$page] = $arr_content[$page] + 1;
        } else {
            $arr_content[$page] = 1;
        }

        $content = json_encode($arr_content);
        @ftruncate($fh, 0);   // empty the file before writing the new JSON
        @rewind($fh);
        fwrite($fh, $content);
        fclose($fh);          // release the handle
    }
}
Here we use an array whose keys are the pages and whose values are the counters,
and we store it in the file in JSON format.
Whenever we want to update a particular page count, we read the JSON from the file, decode it into a PHP array, and either increment the count if the page exists or set it to 1 if the page is not yet in the array.
Then we encode it back to JSON and store it in the text file again.
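Assuming a few visits have been recorded, counter.txt would then contain something like {"index.php":4,"contact.php":6}, and reading a single page's count back is just:
$counts = json_decode(file_get_contents('counter.txt'), true);
echo isset($counts['index.php']) ? $counts['index.php'] : 0; // e.g. 4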
I have a MySQL table where each record can have unlimited custom fields (EAV model, doesn't matter here), each field can have unlimited options, and each option can have unlimited values.
Right now I am trying to build an export tool that will export all these custom fields with their values, that is: name => value pairs for each field. That's not the important part; it's here just to highlight that we're talking about a lot of MySQL queries for a single record and that the size of the export will be pretty large.
For each row from my main table I must run around 100 separate SQL queries to get the fields, field options and field option values. These queries are pretty fast because they all use the right indexes, but we're still talking about 100 queries for a single record, and I expect to have around 50k records in my main table just to start with.
Right now, what I do is:
set_time_limit(0);
ini_set('memory_limit', '1G');
ini_set("auto_detect_line_endings", true);

$count = $export->count();
$date = date('Y-m-d-H-i-s');
$fileName = CHtml::encode($export->name) .'-'. $date . '.csv';

$processAtOnce = 100;
$rounds = ceil($count / $processAtOnce);

header("Content-disposition: attachment; filename={$fileName}");
header("Content-Type: text/csv");

$outStream = fopen('php://output', 'w');
$headerSet = false;

for ($i = 0; $i < $rounds; ++$i) {
    $limit  = $processAtOnce;
    $offset = $i * $processAtOnce;
    $rows   = $export->find($limit, $offset);

    if (empty($rows)) {
        continue;
    }

    if (!$headerSet) {
        fputcsv($outStream, array_keys($rows[0]), ',', '"');
        $headerSet = true;
    }

    foreach ($rows as $row) {
        fputcsv($outStream, array_values($row), ',', '"');
    }
}

fclose($outStream);
Basically I count all the records and "paginate" them for the export, then run through the pages to avoid loading too many SQL results at once.
I am wondering if this is a valid approach? Any thoughts?
My alternative would be to count all the records, split them into "pages" and process each page with an AJAX request (a recursive function called after the previous request has completed successfully). Each AJAX request would process maybe 1k records at once (these 1k would also be split like in the above example, run internally 10 times with 100 results each), write them into a temporary directory (like part-1.csv, part-2.csv) and, at the end, when all the records are processed, create an archive from the folder containing all the CSV parts and force the browser to download it, then remove it from the server (window.location.href from within the last AJAX call).
Is this a good alternative to the above?
Please note, my goal is to limit the amount of memory usage, which is why I think the second approach would help me more.
Please let me know what you think.
Thanks.
My final approach is the second one. After a lot of tests I concluded that, in my case, the second approach is way better in terms of memory usage. Even if the time to complete the entire export is longer, that doesn't matter, since the GUI updates with live stats about the export and overall it is a good user experience while waiting for the export to finish.
These are the steps I took:
1) Load the page and make the first AJAX request to the server.
2) The server reads the first 1000 records in batches of 100 records at a time, to avoid getting too many results back at once from MySQL.
3) The results are written to a file as part-x.csv, where x is the request number sent by AJAX.
4) When there are no more records to add to the file, the last AJAX call creates the archive and deletes the folder containing the part-x.csv files. The server then returns a JSON param called "download" containing the URL to download the file via PHP (fopen + fread + flush + fclose, followed by unlinking the archive file; see the sketch below).
5) Using the "download" param, the browser does window.location.href = json.download and forces the file to be downloaded.
I know, it's more work this way, but as I said, the end result seems to be better than loading everything at once the way I did it the first time.
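For completeness, here is a minimal sketch of the download step described in 4) and 5); the archive path and file name are assumptions, not taken from the original code:
<?php
// download.php (sketch): stream the finished archive, then remove it
$archive = '/path/to/exports/export.zip';

header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="export.zip"');
header('Content-Length: ' . filesize($archive));

$handle = fopen($archive, 'rb');
while (!feof($handle)) {
    echo fread($handle, 8192);
    flush();              // push each chunk to the browser as it is read
}
fclose($handle);
unlink($archive);         // delete the archive once it has been sent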
Below is a more optimized approach to exporting a large CSV file (thanks to @Joe for the above code):
Make AJAX requests to the server in a loop; the AJAX call process is shown below.
The server will read records in batches of chunkSize records at a time, to avoid getting too many results back at once from MySQL.
The file exported_file.csv will be opened in write mode on the first request and in append mode on subsequent requests.
The results are written to this file. When there are no more records to add to the file, the JS function will trigger the file download.
Below is the example JS function:
<script>
var exportedRecords = 0;
var chunkSize = 500; // as per query performance

for (var start = 0; start <= totalRecords; start += chunkSize) {
    chunkCSVExport(start, chunkSize);
}

function chunkCSVExport(start, chunkSize) {
    var requestData = {};
    requestData['start'] = start;
    requestData['limit'] = chunkSize;
    jQuery.ajax({
        type : "post",
        dataType : "json",
        url : action,
        data : requestData,
        success: function(response) {
            console.log(response);
            exportedRecords += chunkSize;
            downloadfile();
        }
    });
}

function downloadfile() {
    if (exportedRecords >= totalRecords) {
        // call download file function here
    }
}
</script>
Below is the example PHP code:
<?php
$start = $_POST['start'];
$limit = $_POST['limit'];

// if it's the first request, truncate the file; else append to it
if ($start == 0) {
    $handle = fopen('file-export.csv', 'w');
} else {
    $handle = fopen('file-export.csv', 'a');
}

// Run the query from start to limit
$results = getresults($query);

if ($start == 0) {
    $headerDisplayed = false;
} else {
    $headerDisplayed = true;
}

foreach ($results as $data) {
    // Add a header row if it hasn't been added yet
    if (!$headerDisplayed) {
        // Use the keys from $data as the titles
        fputcsv($handle, array_keys($data));
        $headerDisplayed = true;
    }
    // Put the data into the stream
    fputcsv($handle, $data);
}

// Close the file
fclose($handle);

// Output some stuff for jQuery to use
$response = array(
    'result' => 'success'
);
echo json_encode($response);
exit;
?>
Thanks for the post, Twisted1919, it gave me some inspiration. I know this post is a bit old, but I thought I would post some code of my solution so far in case it helps anyone else.
It's using some WordPress functions for the DB queries.
I am replacing your steps 3 and 4 with:
<?php
// if it's the first run, truncate the file; otherwise append to it
if ($start == 0) {
    $handle = fopen('temp/prod-export'. '.csv', 'w');
} else {
    $handle = fopen('temp/prod-export'. '.csv', 'a');
}
?>
Some basic jQuery
<script>
// do stuff on the form submit
$('#export-form').submit(function(e){
    e.preventDefault();

    var formData = jQuery('#export-form').serializeObject();
    var chunkAndLimit = 1000;

    doChunkedExport(0, chunkAndLimit, formData, $(this).attr('action'), chunkAndLimit);
});

// function to trigger the ajax bit
function doChunkedExport(start, limit, formData, action, chunkSize){
    formData['start'] = start;
    formData['limit'] = limit;
    jQuery.ajax({
        type : "post",
        dataType : "json",
        url : action,
        data : formData,
        success: function(response) {
            console.log(response);
            if (response.result == 'next') {
                start = start + chunkSize;
                doChunkedExport(start, limit, formData, action, chunkSize);
            } else {
                console.log('DOWNLOAD');
            }
        }
    });
}

// A function to turn all form data into a jquery object
jQuery.fn.serializeObject = function(){
    var o = {};
    var a = this.serializeArray();
    jQuery.each(a, function() {
        if (o[this.name] !== undefined) {
            if (!o[this.name].push) {
                o[this.name] = [o[this.name]];
            }
            o[this.name].push(this.value || '');
        } else {
            o[this.name] = this.value || '';
        }
    });
    return o;
};
</script>
The PHP bit
<?php
global $wpdb;

$postCols = array(
    'post_title',
    'post_content',
    'post_excerpt',
    'post_name',
);

header("Content-type: text/csv");

$start = intval($_POST['start']);
$limit = intval($_POST['limit']);

// check the total results to work out the finish point
$query = "SELECT count(ID) as total FROM `wp_posts` WHERE post_status = 'publish';";
$results = $wpdb->get_row( $query, ARRAY_A );
$totalResults = $results['total'];

$result = 'next';
if (($start + $limit) >= $totalResults) {
    $result = 'finished';
}

// if it's the first run, truncate the file; otherwise append to it
if ($start == 0) {
    $handle = fopen('temp/prod-export'. '.csv', 'w');
} else {
    $handle = fopen('temp/prod-export'. '.csv', 'a');
}

$cols = implode(',', $postCols);

// The query
$query = "SELECT {$cols} FROM `wp_posts` WHERE post_status = 'publish' LIMIT {$start},{$limit};";
$results = $wpdb->get_results( $query, ARRAY_A );

if ($start == 0) {
    $headerDisplayed = false;
} else {
    $headerDisplayed = true;
}

foreach ($results as $data) {
    // Add a header row if it hasn't been added yet
    if (!$headerDisplayed) {
        // Use the keys from $data as the titles
        fputcsv($handle, array_keys($data));
        $headerDisplayed = true;
    }
    // Put the data into the stream
    fputcsv($handle, $data);
}

// Close the file
fclose($handle);

// Output some stuff for jQuery to use
$response = array(
    'result'       => $result,
    'start'        => $start,
    'limit'        => $limit,
    'totalResults' => $totalResults
);
echo json_encode($response);

// Make sure nothing else is sent, our file is done
exit;
?>
Is there any alternative to file_get_contents that would create the file if it does not exist? I am basically looking for a one-line command. I am using it to count download stats for a program. I use this PHP code in the pre-download page:
Download #: <?php $hits = file_get_contents("downloads.txt"); echo $hits; ?>
and then in the download page I have this:
<?php
function countdownload($filename) {
    if (file_exists($filename)) {
        $count = file_get_contents($filename);
        $handle = fopen($filename, "w") or die("can't open file");
        $count = $count + 1;
    } else {
        $handle = fopen($filename, "w") or die("can't open file");
        $count = 0;
    }
    fwrite($handle, $count);
    fclose($handle);
}

$DownloadName = 'SRO.exe';
$Version = '1';
$NameVersion = $DownloadName . $Version;

$Cookie = isset($_COOKIE[str_replace('.', '_', $NameVersion)]);
if (!$Cookie) {
    countdownload("unqiue_downloads.txt");
    countdownload("unique_total_downloads.txt");
} else {
    countdownload("downloads.txt");
    countdownload("total_download.txt");
}
echo '<META HTTP-EQUIV=Refresh CONTENT="0; URL='.$DownloadName.'" />';
?>
Naturally though, the user accesses the pre-download page first, so the file isn't created yet. I do not want to add any functions to the pre-download page; I want it to be plain and simple without a lot of adding/changing.
Edit:
Something like this would work, but it's not working for me:
$count = (file_exists($filename))? file_get_contents($filename) : 0; echo $count;
Download #: <?php
$filename = "downloads.txt";
if (file_exists($filename)) {
    $hits = file_get_contents($filename);
} else {
    $hits = 0;
    file_put_contents($filename, $hits); // create the file so later reads succeed
}
echo $hits;
?>
You can also use fopen() with 'c+' mode, which creates the file if it does not exist but, unlike 'w+', does not truncate it:
Download #: <?php
$hits = 0;
$filename = "downloads.txt";
$h = fopen($filename, 'c+'); // create if missing, keep existing contents
if (filesize($filename) > 0) {
    $hits = intval(fread($h, filesize($filename)));
}
fclose($h);
echo $hits;
?>
Type juggling like this can lead to crazy, unforeseen problems later. To turn a string into an integer, you can just add the integer 0 to any string.
For example:
$f = file_get_contents('file.php');
$f = $f + 0;
echo is_int($f); //will return 1 for true
However, I second the use of a database instead of a text file for this. There are a few ways to go about it. One way is to insert a unique value into a table called 'download_count' every time someone downloads the file; the query is as easy as "insert into download_count $randomValue" (make sure the index is unique). Then just count the number of rows in this table when you need the count: the number of rows is the download count, and you have a real integer instead of a string pretending to be an integer. Alternatively, add a download-count integer column to your 'download file' table; each file should be in a database with an id anyway. When someone downloads the file, pull that number from the database in your download function, put it into a variable, increment it, update the table, and show it on the client however you want. Use PHP with jQuery AJAX to update it asynchronously to make it cool.
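A rough sketch of the row-per-download idea (table, column and credentials are invented for the example):
<?php
// CREATE TABLE download_count (id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
//                              file VARCHAR(255) NOT NULL);
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

// record a download
$stmt = $pdo->prepare('INSERT INTO download_count (file) VALUES (?)');
$stmt->execute(array('SRO.exe'));

// read the total back as a real integer
$stmt = $pdo->prepare('SELECT COUNT(*) FROM download_count WHERE file = ?');
$stmt->execute(array('SRO.exe'));
echo $stmt->fetchColumn();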
I would still use PHP and jQuery's .load(file.php) if you insist on using a text file. That way, you can use your text file for storing any kind of data and just load the specific part of the file using context selectors. file.php accepts the $_GET request, loads the right portion of the file and reads the number stored there. It then increments the number, updates the file and sends data back to the client to be displayed any way you want. For example, you can have a div in your text file with an id set to 'downloadcount' and a div with an id for any other data you want to store in this file. When you load file.php, you just send div#download_count along with the filename and it will only load the value stored in that div. This is a killer way to use PHP and jQuery for cool and easy Ajax/data-driven apps. Not to turn this into a jQuery thread, but this is as simple as it gets.
You can use a more concise equivalent of your countdownload function:
function countdownload($filename) {
    if (file_exists($filename)) {
        file_put_contents($filename, file_get_contents($filename) + 1);
    } else {
        file_put_contents($filename, 0);
    }
}