I am using JSON as a data store, and over time I foresee that several parties may want to write to my JSON file (like a chat log) within a short space of time.
<?php
$foo = json_decode(file_get_contents("foo.json"), true);
if (! is_array($foo["bar"])) { $foo["bar"] = array(); }
array_push($foo["bar"], array("time" => time(), "who" => $_SERVER['REMOTE_ADDR'], "msg" => $_GET['m']));
file_put_contents("foo.json", json_encode($foo, JSON_PRETTY_PRINT));
?>
The above code works, but I am worried about what happens if the file is read before it is fully written out, or if two processes write at the same time, leading to data loss.
What's a better or safer design, preferably using flat file storage (i.e. not databases)?
As a bonus, I really don't want to report back to the client who made the request that there was some "lock". Ideally the request simply waits until it is safe to proceed, then returns.
You can use the flock() function for this. It locks the file so that other cooperating processes (those that also call flock()) wait until the lock is released.
http://php.net/manual/en/function.flock.php
Basic usage:
<?php
$fp = fopen('path/to/data.json', 'r+');
if (flock($fp, LOCK_EX)) { // acquire an exclusive lock on the file
    // ... write to the file ...
    flock($fp, LOCK_UN);   // release the lock
}
fclose($fp);
flock() is blocking by default. That means a process waits until it is granted the lock. Have a look at the docs on how to implement a nonblocking version.
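Applied to the chat-log use case in the question, a complete read-modify-write under a single exclusive lock might look like the sketch below. The file name foo.json and the bar key come from the question; everything else (the 'c+' open mode, the truncate-and-rewrite step) is one reasonable way to do it, not the only one.
<?php
// 'c+' opens for reading and writing, creates the file if it is
// missing, and, unlike 'w+', does not truncate it on open.
$fp = fopen('foo.json', 'c+');
if ($fp === false) {
    exit('Could not open foo.json');
}

if (flock($fp, LOCK_EX)) {               // blocks until the lock is granted
    $raw = stream_get_contents($fp);     // read while holding the lock
    $foo = json_decode($raw, true);
    if (!is_array($foo)) { $foo = array(); }
    if (!isset($foo['bar']) || !is_array($foo['bar'])) { $foo['bar'] = array(); }

    $foo['bar'][] = array(
        'time' => time(),
        'who'  => $_SERVER['REMOTE_ADDR'],
        'msg'  => $_GET['m'],
    );

    rewind($fp);
    ftruncate($fp, 0);                   // clear the old contents
    fwrite($fp, json_encode($foo, JSON_PRETTY_PRINT));
    fflush($fp);                         // flush before releasing the lock
    flock($fp, LOCK_UN);
}
fclose($fp);

// Non-blocking variant: flock($fp, LOCK_EX | LOCK_NB, $wouldBlock)
// returns false immediately instead of waiting if another process
// holds the lock.
Because the lock is held across the read, the modify, and the write, no other cooperating process can slip in between and cause a lost update; concurrent requests simply queue on the flock() call, which matches the "make the request wait" behaviour the question asks for.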
How about you create an individual JSON document for each entry, so you do not need to load and re-encode the whole file every time? You just append each new JSON document to the file, one JSON document per line. When you load it, you read each line of the text file and decode it with PHP.
The reason I recommend this approach is that we want a better way to read and write the file than rewriting it entirely with file_put_contents("foo.json", json_encode($foo, JSON_PRETTY_PRINT)) on every request (see http://php.net/manual/en/function.file-put-contents.php).
JSON is just a text format, so I believe this is a good solution.
To write:
<?php
$foo = array("time" => time(), "who" => $_SERVER['REMOTE_ADDR'], "msg" => $_GET['m']);
// http://php.net/manual/en/function.file-put-contents.php
// No JSON_PRETTY_PRINT here: each entry must stay on a single line.
file_put_contents("foo.json", json_encode($foo)."\n", FILE_APPEND | LOCK_EX);
?>
As you mention, it is a log, so you may not need to read it all the time. That lets us focus on making the write cheap, even if reading becomes the longer operation.
To read:
<?php
$log_array = array();
$handle = @fopen("foo.json", "r");
if ($handle) {
    while (($buffer = fgets($handle, 4096)) !== false) {
        // Each line holds one JSON document.
        $log_array[] = json_decode($buffer, true);
    }
    if (!feof($handle)) {
        echo "Error: unexpected fgets() fail\n";
    }
    fclose($handle);
}
print_r($log_array);
?>
Related
When I run this function in multiple scripts simultaneously, one script generates a warning:
fread(): Length parameter must be greater than 0
function test($n){
    echo "<h4>$n at ".time()."</h4>";
    for ($i = 0; $i < 50; $i++){
        $fp = fopen("$n.txt", "r");
        $s = fread($fp, filesize("$n.txt"));
        fclose($fp);
        $fp = fopen("$n.txt", "w");
        $s = $_SERVER['HTTP_USER_AGENT'].' '.time();
        if (flock($fp, LOCK_EX)) { // acquire an exclusive lock
            fwrite($fp, $s);
            // fflush($fp); // flush output before releasing the lock
            flock($fp, LOCK_UN); // release the lock
        } else {
            echo "Couldn't get the lock!";
        }
    }
}
I am trying to let multiple users read and write the file, but only one user may write at a time. I know that when I use fwrite with flock(LOCK_EX), subsequent scripts must wait until the write is finished. But here it seems like filesize() doesn't wait until the write operation is finished. My guess is that it checks the file while its size is 0, and as a result 0 bytes are read from the file while the original script is still writing it.
Is it possible to fix this for fread function?
The purpose of this script is to test fread() with a length limit, and to check later whether the data were really written when I did not use fflush().
function test($n){
    echo "<h4>$n at ".time()."</h4>";
    for ($i = 0; $i < 50; $i++){
        $start = microtime(true);
        $fp = fopen("$n.txt", "r");
        if (filesize("$n.txt") > 0)
        {
            $s = fread($fp, filesize("$n.txt"));
            fclose($fp);
            $fp = fopen("$n.txt", "w");
            $s = $_SERVER['HTTP_USER_AGENT'].' '.time();
            if (flock($fp, LOCK_EX)) { // acquire an exclusive lock
                fwrite($fp, $s);
                // fflush($fp); // flush output before releasing the lock
                flock($fp, LOCK_UN); // release the lock
            } else {
                echo "Couldn't get the lock!";
            }
        }
        else
        {
            echo "Filesize must be greater than 0";
        }
    }
}
Please rename one of the $s variables; the same name is being used for two different things.
$fp = fopen("$n.txt", "r");
$s = fread($fp, filesize("$n.txt") );
fclose($fp);
The error occurs in the middle line of the above three lines.
Firstly, these three lines could be rewritten into a single line as follows:
$s = file_get_contents("$n.txt");
However, this isn't necessary, as these three lines are entirely redundant in your code. They don't do anything useful.
What they do is open a file, store its contents to $s and then close it.
But you are then immediately setting $s to a different value, thus throwing away the previous value, and making it pointless to have read it from the file in the first place.
If you need to keep the original contents of the file, then use file_get_contents() and make sure you don't overwrite the contents of the variable.
If you don't need the original contents of the file, then just delete those three lines from your code.
Incidentally, this error highlights a couple of good coding practices you should take on board: first, never re-use a variable for two different things; second, always give your variables (and functions) good names. $s is not a good name; $previousFileContents would be better, and it would have made the error much more obvious.
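For illustration, here is a minimal sketch of the loop body with the read kept and the variables renamed. The 'c' open mode is an added suggestion, not part of your original code: unlike 'w', it does not truncate the file before the lock is held, which is exactly what lets other readers see a 0-byte file.
for ($i = 0; $i < 50; $i++) {
    // Only needed if you actually use the old contents somewhere:
    $previousFileContents = file_get_contents("$n.txt");

    $newFileContents = $_SERVER['HTTP_USER_AGENT'].' '.time();

    $fp = fopen("$n.txt", "c"); // open for writing without truncating
    if (flock($fp, LOCK_EX)) {  // acquire an exclusive lock
        ftruncate($fp, 0);      // truncate only while holding the lock
        fwrite($fp, $newFileContents);
        fflush($fp);            // flush output before releasing the lock
        flock($fp, LOCK_UN);    // release the lock
    } else {
        echo "Couldn't get the lock!";
    }
    fclose($fp);
}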
I want to have a temp file that gets updated from time to time.
What I was thinking of doing is:
// get the contents
$s = file_get_contents( ... );
// does it need updating?
if( needs_update() )
{
    $s = 'some new content';
    file_put_contents( ... );
}
The issue that I can see happening is that whatever condition causes needs_update() to return true could cause more than one process to update the same file at (almost) the same time.
In an ideal situation, I would have one single process updating the file and prevent all other processes from reading the file until I am done with it.
So as soon as needs_update() returns true, I would prevent other processes from reading the file.
// wait here if anybody is busy writing to the file.
wait_if_another_process_is_busy_with_the_file();
// get the contents
$s = file_get_contents( ... );
// does it need updating?
if( needs_update() )
{
    // prevent read/write access to the file for a moment
    prevent_read_write_to_file_and_wait();

    // rebuild the new content
    $s = 'some new content';
    file_put_contents( ... );
}
That way, only one process could possibly update the file at a time, and every reader would get the latest values.
Any suggestions on how I could prevent such a conflict?
Thanks
FFMG
You are looking for the flock() function. flock() will work as long as every process that accesses the file uses it. Example from the PHP manual:
$fp = fopen("/tmp/lock.txt", "r+");
if (flock($fp, LOCK_EX)) { // acquire an exclusive lock
ftruncate($fp, 0); // truncate file
fwrite($fp, "Write something here\n");
fflush($fp); // flush output before releasing the lock
flock($fp, LOCK_UN); // release the lock
} else {
echo "Couldn't get the lock!";
}
fclose($fp);
Manual: http://php.net/manual/en/function.flock.php
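Applied to the pattern in your question, a single exclusive lock held across the whole read-check-write sequence gives you exactly the "wait here if anybody is busy" behaviour you described. A sketch (needs_update() is your placeholder; the file path and the 'c+' open mode are assumptions):
$fp = fopen('/path/to/temp.file', 'c+'); // create if missing, do not truncate
if (flock($fp, LOCK_EX)) {               // every other process waits here
    $s = stream_get_contents($fp);       // get the contents

    if (needs_update()) {                // your placeholder condition
        $s = 'some new content';
        rewind($fp);
        ftruncate($fp, 0);               // drop the old contents
        fwrite($fp, $s);
        fflush($fp);                     // flush before releasing the lock
    }

    flock($fp, LOCK_UN);                 // release the lock
} else {
    echo "Couldn't get the lock!";
}
fclose($fp);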
I have a function that keeps track of events that happen throughout the script. In an effort to use my resources effectively, I decided to compress the data it generates. However, I keep getting this error:
Unknown error type: [2] gzuncompress() [function.gzuncompress]: data error
Here's the function:
function eventlog($type, $message){
    // Types: account, run, queue, system
    // Set up file name/location
    $eventfile = '/myprivatedirectory/'.date('Ymd').$type.'.log';
    if(file_exists($eventfile)){
        while(!is_writable($eventfile)){clearstatcache();}
        $fh_log = fopen($eventfile,'r+');
        flock($fh_log, LOCK_EX);
        $logcontents = gzuncompress(fread($fh_log,filesize($eventfile)));
        rewind($fh_log);
        ftruncate($fh_log, 0);
        $logcompressed = gzcompress($logcontents.$message."\n");
        fwrite($fh_log,$logcompressed);
        flock($fh_log, LOCK_UN);
        fclose($fh_log);
    } else {
        $fh_log = fopen($eventfile,'w');
        flock($fh_log, LOCK_EX);
        $logcompressed = gzcompress($message."\n");
        fwrite($fh_log,$logcompressed);
        flock($fh_log, LOCK_UN);
        fclose($fh_log);
    }
}
So every day, at midnight, a new log is created as any of the above events occur (account, run, queue, system); otherwise each new event is appended to the respective log file.
I would love to keep the compression, but I cannot keep having these errors. Can anyone please help? Thanks in advance.
I think the implementation is all wrong; I would not advise you to gzcompress($message."\n") for every message.
What you should do instead is compress the whole log file at the end of the day, which is far more efficient.
During the day, save your information using plain file_put_contents().
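For example (a sketch; $eventfile and $message are the variables from your function), an append that is atomic with respect to other PHP writers:
// FILE_APPEND adds to the end of the file; LOCK_EX takes an
// exclusive lock for the duration of the write.
file_put_contents($eventfile, $message."\n", FILE_APPEND | LOCK_EX);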
At the end of the day, compress the whole file:
$eventfile = '/myprivatedirectory/'.date('Ymd').$type.'.log';
$eventfileCompressed = '/myprivatedirectory/'.date('Ymd').$type.'.gz';
$gz = gzopen($eventfileCompressed ,"w9");
gzwrite($gz, file_get_contents($eventfile));
gzclose($gz);
To read the compressed file back (reading in chunks until the end of the stream, since the uncompressed size is not known up front):
$zd = gzopen($eventfileCompressed, "r");
$zr = '';
while (!gzeof($zd)) {
    $zr .= gzread($zd, 4096); // read uncompressed data in chunks
}
gzclose($zd);
This approach saves processing power, since you compress once per day instead of decompressing and recompressing the whole log on every message.
Question: is it possible to use php://memory in an exec() or passthru() command?
I can use PHP variables in exec() or passthru() with no problem, but I am having trouble with php://memory.
Background:
I am trying to eliminate all of my temporary PDF file writing with PDFTK.
1) I write a temporary FDF file.
2) Form-fill a temporary PDF file using #1.
3) Repeat #1 and #2 for all the PDFs.
4) Merge all the PDFs together.
This currently works, but it creates a lot of files, and it is the bottleneck. I would like to speed things up with pdftk by making use of the virtual file php://memory.
First, I am trying to just virtualize the FDF file used in #1. Answering this alone is enough for a 'correct answer'. :)
The code is as follows:
$fdf = 'fdf file contents here';
$tempFdfVirtual = fopen("php://memory", 'r+');
if ($tempFdfVirtual) {
    fwrite($tempFdfVirtual, $fdf);
} else {
    echo "Failure to open temporary fdf file";
    exit;
}
rewind($tempFdfVirtual);

$url = "unfilled.pdf";
$temppdf_fn = "output.pdf";
$command = "pdftk $url fill_form $tempFdfVirtual output $temppdf_fn flatten";
$error = "";
exec($command, $error);
if ($error != "") {
    $_SESSION['err'] = $error;
} else {
    $_SESSION['err'] = 0;
}
I am getting error code #1. If I do a stream_get_contents($tempFdfVirtual), it shows the contents.
Thanks for looking!
php://memory and php://temp (and in fact any file descriptor) are only available to the currently running PHP process. Besides, $tempFdfVirtual is a resource handle, so it makes no sense to interpolate it into a command string.
You should pass the data from your resource handle to the process through its standard input. You can do this with proc_open(), which gives you more control over input and output to the child process than exec() does.
Note that for some reason, you can't pass a 'php://memory' file descriptor to a process. PHP will complain:
Warning: proc_open(): cannot represent a stream of type MEMORY as a File Descriptor
Use php://temp instead, which is supposed to be exactly the same except it will use a temporary file once the stream gets big enough.
This is a tested example that illustrates the general pattern of code that uses proc_open(). This should be wrapped up in a function or other abstraction:
$testinput = "THIS IS A TEST STRING\n";
$fp = fopen('php://temp', 'r+');
fwrite($fp, $testinput);
rewind($fp);

$cmd = 'cat';
$dspec = array(
    0 => $fp,
    1 => array('pipe', 'w'),
);
$pp = proc_open($cmd, $dspec, $pipes);

// busy-wait until the process has finished running.
do {
    usleep(10000);
    $stat = proc_get_status($pp);
} while ($stat and $stat['running']);

if ($stat['exitcode'] === 0) {
    // The index in $pipes will match the index in $dspec.
    // Note only descriptors created by proc_open will be in $pipes,
    // i.e. $dspec indexes with an array value.
    $output = stream_get_contents($pipes[1]);
    if ($output == $testinput) {
        echo "TEST PASSED!!";
    } else {
        echo "TEST FAILED!! Output does not match input.";
    }
} else {
    echo "TEST FAILED!! Process has non-zero exit status.";
}

// Cleanup: close pipes first, THEN close the process handle.
foreach ($pipes as $pipe) {
    fclose($pipe);
}
// Only file descriptors created by proc_open() will be in $pipes.
// We still need to close the descriptors we created ourselves and
// passed to it; this can be done before or after proc_close().
fclose($fp);
proc_close($pp);
Untested Example specific to your use of PDFTK:
// Command takes input from STDIN
$command = "pdftk unfilled.pdf fill_form - output tempfile.pdf flatten";
$descriptorspec = array(
    0 => $tempFdfVirtual, // feed stdin of the process from this file descriptor
    // 1 => array('pipe', 'w'), // you can also grab stdout from a pipe; no need for a temp file
);
$prochandle = proc_open($command, $descriptorspec, $pipes);

// busy-wait until it finishes running
do {
    usleep(10000);
    $stat = proc_get_status($prochandle);
} while ($stat and $stat['running']);

if ($stat['exitcode'] === 0) {
    // Ran successfully. The output is in tempfile.pdf,
    // or in the file handle in $pipes if you told the command to write to stdout.
}

// cleanup
foreach ($pipes as $pipe) {
    fclose($pipe);
}
proc_close($prochandle);
It's not just that you're using php://memory, it's any file handle. File handles only exist for the current process. For all intents and purposes, the handle you get back from fopen cannot be transferred to any other place outside of your script.
As long as you're working with an outside application, you're pretty much stuck using temporary files. Your only other option is to try and pass the data to pdftk on stdin, and retrieve the output on stdout (if it supports that). As far as I know the only way to invoke an external process with that kind of access to its descriptors (stdin/stdout) is using the proc_ family of functions, specifically proc_open.
Is there a quick way to load every line of a file into an array from a file once it has already been opened?
For example:
$handle = fopen("file", "r+");
flock($handle, LOCK_EX);
$array = load_lines($handle); // <- need this
// compute on the array
fwrite($handle, $array);
flock($handle, LOCK_UN);
fclose($handle);
The reason I need this is because I currently use the file() function to grab the contents of a file and put them into an array. However, I need to incorporate file locking into my design and I'm hoping to not have to change it too much (it is current array-based). Is there an easy way to do this?
File locks in PHP are advisory: all programs accessing the file have to use the same locking scheme, because the lock does not actually prevent another process from reading or writing the file. You have to test the lock yourself. This will do:
$fh = fopen(__FILE__, 'r+');
if (flock($fh, LOCK_EX)) {
$array = file(__FILE__);
fwrite($fh, implode($array));
flock($fh, LOCK_UN);
flcose($fh);
}
else {
echo "Could not acquire the lock!"
}
I also tested this out in php 5.3. It seems that file() ignores locking.
Try this:
function load_lines($handle)
{
    $array = array();
    while (!feof($handle)) {
        $array[] = fgets($handle);
    }
    return $array;
}
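Usage with the locking pattern from your question might then look like this sketch. The rewind() before reading and the ftruncate() before writing back are assumptions about what "compute on the array" needs; adjust to taste.
$handle = fopen("file", "r+");
if (flock($handle, LOCK_EX)) {
    rewind($handle);              // read from the top of the file
    $array = load_lines($handle);

    // ... compute on the array ...

    rewind($handle);
    ftruncate($handle, 0);        // drop the old contents
    fwrite($handle, implode('', $array));
    fflush($handle);              // flush before releasing the lock
    flock($handle, LOCK_UN);
}
fclose($handle);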