I'm trying to write a PHP function that takes $name and $time, writes them to a txt file (no MySQL), and sorts the file numerically.
For example:
10.2342 bob
11.3848 CandyBoy
11.3859 Minsi
12.2001 dj
Here, Minsi was just added under a faster time, for example.
If the $name already exists in the file, it should only be rewritten if the new time is faster (smaller) than the previous one, and an entry should only be written if its time fits within the top 300 entries, to keep the file small.
File writing isn't my forte, but my guess was to use file() to turn the whole file into an array; it didn't work quite like I wanted, though. Any help would be appreciated.
If your data sets are small, you may consider using var_export()
function dump($filename, Array &$data){
    return file_put_contents($filename, '<?php return ' . var_export($data, true) . ';');
}
// create a data set
$myData = array('alpha', 'beta', 'gamma');
// save a data set
dump('file.dat', $myData);
// load a data set
$myData = require('file.dat');
Perform your sorts using the PHP array_* functions, and dump when necessary. var_export() saves the data as PHP parsable text, which is why the dump() function prepends the string <?php return. Of course, this is really only a viable option when your data sets are going to be small enough that keeping their contents in memory is not unreasonable.
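Tying that together, here is a minimal sketch (untested; record_time() and the 'times.dat' file name are placeholders) of how the leaderboard from the question could be kept sorted and capped at 300 entries with the array functions before being persisted via dump():
<?php
// Minimal sketch: keep a name => time map, only store faster times,
// sort ascending by time, cap at 300 entries, then persist with dump() above.
function record_time($filename, $name, $time)
{
    $times = array();
    if (is_file($filename)) {
        $times = require $filename;              // the file returns the stored array
    }

    // Only (re)write the entry if it is new or faster than the stored one.
    if (!isset($times[$name]) || $time < $times[$name]) {
        $times[$name] = $time;
    }

    asort($times);                               // numeric sort by time, names stay as keys
    $times = array_slice($times, 0, 300, true);  // keep only the 300 fastest entries

    return dump($filename, $times);
}

record_time('times.dat', 'Minsi', 11.3859);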
Try creating a multi-dimensional array, "$timeArray[key][time] = name", and then sort($timeArray).
Situation.
I download and process multiple XML files. I download the first file.
Open it with $xml_file = simplexml_load_file( dirname(__FILE__). '/_downloaded_files/filename.xml' );
I go through the file, create variables to insert into MySQL, and insert them. Then I do the same with the next XML files.
After I have processed the opened XML file, I unset the variables (set them to null), like $xml_file = null;. I also tried unset($xml_file); and saw no difference. Somewhere I found advice to use gc_enable(); gc_collect_cycles();, but that also had no effect.
After executing the MySQL code, I also set all used variables to null.
As a result, with echo '<pre>', print_r(get_defined_vars(), true), '</pre>'; I see something like
[one_variable] =>
[another_variable] => 1
I see many (~100) variables with empty content (or one short value per variable).
But with echo (memory_get_peak_usage(false)/1024/1024); I see 38.01180267334 MiB of memory used.
Can someone advise where the problem is? ~100 empty variables cannot use 38 megabytes... What else may be using the memory?
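One thing worth checking, as a minimal sketch that is not specific to your script: memory_get_peak_usage() reports the highest memory the script has used at any point, so it will not drop after unset(); compare it with memory_get_usage() to see the current figure.
<?php
// Peak usage never decreases, even after the data has been freed.
$xml_string = str_repeat('<row>some data</row>', 100000);
$xml = simplexml_load_string('<root>' . $xml_string . '</root>');

echo 'current: ', round(memory_get_usage() / 1024 / 1024, 2), " MiB\n";
echo 'peak:    ', round(memory_get_peak_usage() / 1024 / 1024, 2), " MiB\n";

unset($xml, $xml_string);

echo 'current after unset: ', round(memory_get_usage() / 1024 / 1024, 2), " MiB\n";
echo 'peak after unset:    ', round(memory_get_peak_usage() / 1024 / 1024, 2), " MiB\n";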
I'm using Sebastian Bergmann's PHPUnit Selenium WebDriver.
Currently I have:
$csv = file_get_contents('functions.csv', NULL,NULL,1);
var_dump($csv);
// select 1 random line here
This will load my CSV file and give me all possible data from the file.
It has multiple rows for example:
Xenoloog-FUNC/8_4980
Xylofonist-FUNC/8_4981
IJscoman-FUNC/8_4982
Question: How can I get that data randomly?
I just want to use 1 (random) line of data.
Would it be easier to just grab 1 (random) line of the file instead of everything?
Split the string into an array, then grab a random index from that array:
$lines = explode("\n", $csv);
$item = $lines[array_rand($lines)];
You could use the offset and maxlen parameters to grab part of the file using file_get_contents. You could also use fseek after fopen to select part of a file. These functions both take numbers of bytes as arguments. This post has a little more information:
Get part of a file by byte number in php
It may require some hacking to translate a particular row index of a CSV file to a byte offset. You might need to generate and load a small meta-data file which contains a list of byte offsets for each row of CSV data. That would probably help.
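As a rough sketch of that idea (the 'functions.idx' file name is a placeholder): record the byte offset of every line once with ftell(), save the offsets, then fseek() straight to a random line later without loading the whole CSV.
<?php
// Build the line-offset index once (or whenever functions.csv changes).
$offsets = array();
$fh = fopen('functions.csv', 'r');
$pos = ftell($fh);
while (fgets($fh) !== false) {
    $offsets[] = $pos;            // byte position where this line starts
    $pos = ftell($fh);
}
fclose($fh);
file_put_contents('functions.idx', json_encode($offsets));

// Later: jump straight to one random line without reading the whole file.
$offsets = json_decode(file_get_contents('functions.idx'));
$fh = fopen('functions.csv', 'r');
fseek($fh, $offsets[array_rand($offsets)]);
$random_line = trim(fgets($fh));
fclose($fh);

echo $random_line;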
$static = ');';
$file = 'onebigarray.php';
$fh = fopen($file, 'r+');
include $file;
if (in_array($keyname, $udatarray)) {
    $key = array_search($keyname, $udatarray);
    $fsearch = $key + 4;
    fseek($fh, $fsearch, SEEK_END);
    fwrite($fh, 'new data');
    fseek($fh, -2, SEEK_END);
    fwrite($fh, $static);
    fclose($fh);
}
I'm a novice at PHP.
What I've done is create a form that writes array elements to a file, "onebigarray.php".
The file looks like
Array (
'name',
'data',
'name2',
'data2',
);
What I ultimately need to do is load that $file, search the array for an existing name, and then replace 'data(2)' with whatever was put in the form. The entire script is extremely large and consists of 3 different files. It works, but now I need a way to search and replace array elements within an existing file and then write them back to that file. This section of the script is where that needs to happen, and it's the part giving me the most trouble: currently it seems to load the file and run the if statement properly, but it completely ignores fwrite (I can only assume there's an issue with including and opening the same file within the same script).
Thank you in advance for any help.
Your code can be simplified greatly, but with some notable design changes. Firstly, your stored array structure is not efficient and negates the benefit of using an array in the first place. A very powerful feature of arrays is their ability to store data with a meaningful key (known as an associative array). In this case your array should look like this:
array(
'name' => 'data',
'name2' => 'data2'
);
This allows you to retrieve the data directly by the key name, e.g. $data = $array['name2'].
Secondly, changing the data stored in 'onebigarray.php' from PHP code to JSON makes it trivial for any other system to interact with it and makes it easier to extend later.
Assuming that the file is renamed to 'onebigarray.json' and its content looks like this (after using json_encode() on the array above):
{"name":"data","name2":"data2"}
Then the below code will work nicely:
<?php
$file = 'onebigarray.json';
$key = 'name2';
$new_data = 'CHANGED';
$a = (array) json_decode(file_get_contents($file));
if (array_key_exists($key, $a)) {
$a[$key] = $new_data;
file_put_contents($file, json_encode($a));
}
?>
After running the above, the JSON file will now contain this:
{"name":"data","name2":"CHANGED"}
A big caveat: this will read the entire JSON file into memory! So depending on how big the file really is this may negatively impact server performance (although it would probably need to be several megabytes before the server even noticed, and even then with a trivial impact on performance).
I'm looking for a very fast method to read a csv file. My data structure looks like this:
timestamp, float, string, ip, string
1318190061,1640851625, lore ipsum,84.169.42.48,appname
and I'm using fgetcsv to read this data into arrays.
The problem: Performance. On a regular basis the script has to read (and process) more than 10,000 entries.
My first attempt is very simple:
//Performance: 0.141 seconds / 13.5 MB
while(!feof($statisticsfile))
{
$temp = fgetcsv($statisticsfile);
$timestamp[] = $temp[0];
$value[] = $temp[1];
$text[] = $temp[2];
$ip[] = $temp[3];
$app[] = $temp[4];
}
My second attempt:
//Performance: 0.125 seconds / 10.8 MB
while (($userinfo = fgetcsv($statisticsfile)) !== FALSE) {
list ($timestamp[], $value[], $text, $ip, $app) = $userinfo;
}
Is there any way to improve performance even further, or is my method as fast as it could get?
Probably more important: is there any way to define which columns are read? E.g. sometimes only the timestamp and float columns are needed. Is there any better way than mine (have a look at my second attempt)? :)
Thanks :)
How long is the longest line? Pass that as the second parameter to fgetcsv() and you'll see the greatest improvement.
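For example, a minimal sketch (8192 is an assumption about the longest line in the file, and the fopen() call is implied by the question's $statisticsfile):
<?php
// With a length hint, fgetcsv() does not need to handle unlimited line
// lengths, which the manual notes is slightly slower.
$statisticsfile = fopen('statistics.csv', 'r');
while (($row = fgetcsv($statisticsfile, 8192)) !== false) {
    $timestamp[] = $row[0];
    $value[]     = $row[1];
}
fclose($statisticsfile);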
Check how much time PHP spends just reading the file:
If it is big, move the file to a ramdisk or an SSD.
[..]sometimes only the timestamp
Something like this (assuming $f holds the raw file contents):
preg_match_all('#\d{10},\d{10}, (.*?),\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3},appname#', $f, $res);
print_r($res);
I have a PHP script that builds a binary search tree over a rather large CSV file (5MB+). This is nice and all, but it takes about 3 seconds to read/parse/index the file.
Now I thought I could use serialize() and unserialize() to quicken the process. When the CSV file has not changed in the meantime, there is no point in parsing it again.
To my horror I find that calling serialize() on my index object takes 5 seconds and produces a huge (19MB) text file, whereas unserialize() takes unbearable 27 seconds to read it back. Improvements look a bit different. ;-)
So - is there a faster mechanism to store/restore large object graphs to/from disk in PHP?
(To clarify: I'm looking for something that takes significantly less than the aforementioned 3 seconds to do the de-serialization job.)
var_export should be a lot faster, as PHP won't have to process the string at all:
// export the processed CSV to export.php
$php_array = read_parse_and_index_csv($csv); // takes 3 seconds
$export = var_export($php_array, true);
file_put_contents('export.php', '<?php $php_array = ' . $export . '; ?>');
Then include export.php when you need it:
include 'export.php';
Depending on your web server setup, you may have to chmod export.php so the web server can read it first.
Try igbinary...did wonders for me:
http://pecl.php.net/package/igbinary
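If the extension is installed, it is close to a drop-in replacement for serialize()/unserialize(). A rough sketch (build_index() and 'index.bin' stand in for the 3-second parsing step and the cache file from the question):
<?php
// Store the index with igbinary's compact binary format instead of serialize().
$index = build_index('data.csv');   // the expensive parsing/indexing step

file_put_contents('index.bin', igbinary_serialize($index));

// On later runs, restore the index without re-parsing the CSV.
$index = igbinary_unserialize(file_get_contents('index.bin'));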
First, you have to change the way your program works: divide the CSV file into smaller chunks. This is an IP datastore, I assume.
Convert all IP addresses to integers.
So when a query comes in, you know which part to look in.
There are the ip2long() and long2ip() functions to do this.
So, over the 0 to 2^32 range, split all IP addresses into about 100 smaller files (e.g. 5000K entries / 50K per file).
This approach brings you quicker serialization (see the sketch below).
Think smart, code tidy ;)
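A rough sketch of that partitioning (the chunk count and file naming are assumptions): convert the IP to an integer and use it to pick one of N smaller files, so only that chunk ever needs to be unserialized.
<?php
// Map an IP address to one of 100 chunk files covering the 0..2^32 range.
define('CHUNKS', 100);

function chunk_file_for_ip($ip)
{
    $n = sprintf('%u', ip2long($ip));                   // unsigned 32-bit value
    $chunk = (int) floor($n / (4294967296 / CHUNKS));   // 0 .. CHUNKS-1
    return "ipstore_chunk_{$chunk}.bin";
}

echo chunk_file_for_ip('84.169.42.48');                 // ipstore_chunk_33.bin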
It seems that the answer to your question is no.
Even if you discover a "binary serialization format" option, most likely even that would be too slow for what you envisage.
So, what you may have to look into using (as others have mentioned) is a database, memcached, or an online web service.
I'd like to add the following ideas as well:
caching of requests/responses
your PHP script does not shut down but becomes a long-running server that answers queries (see the sketch after this list)
or, dare I say it, change the data structure and method of query you are currently using
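A minimal sketch of that long-running server idea (the port, the one-key-per-line protocol, and the build_index()/lookup() functions are assumptions for illustration):
<?php
// Build the index once, then answer lookups over a local TCP socket.
$index = build_index('data.csv');   // the expensive 3-second step, done once

$server = stream_socket_server('tcp://127.0.0.1:9999', $errno, $errstr);
if ($server === false) {
    die("Could not start server: $errstr\n");
}

// Note: each accept waits up to default_socket_timeout seconds for a client;
// a real server would tune that and handle errors more carefully.
while ($conn = stream_socket_accept($server)) {
    $key = trim(fgets($conn));                            // one key per connection
    fwrite($conn, json_encode(lookup($index, $key)) . "\n");
    fclose($conn);
}
fclose($server);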
I see two options here:
String serialization; in the simplest form, something like:
write => implode("\x01", (array) $node);
read => explode() + $node->payload = $a[0]; $node->value = $a[1] etc
Binary serialization with pack():
write => pack("fnna*", $node->value, $node->le, $node->ri, $node->payload);
read => $node = (object) unpack("fvalue/nle/nri/a*payload", $data);
It would be interesting to benchmark both options and compare the results.
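As a quick sanity check of the pack() option, a minimal round-trip sketch (the node fields match those assumed above; "n" limits the child indexes to 16 bits and "f" is a machine-dependent float):
<?php
// Round-trip one node: float value, two 16-bit big-endian child indexes,
// and the remaining bytes as the payload string.
$node = (object) array('value' => 3.14, 'le' => 5, 'ri' => 9, 'payload' => 'lore ipsum');

$bin = pack('fnna*', $node->value, $node->le, $node->ri, $node->payload);

$restored = (object) unpack('fvalue/nle/nri/a*payload', $bin);
printf("%.2f %d %d %s\n", $restored->value, $restored->le, $restored->ri, $restored->payload);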
If you want speed, writing to or reading from the file system is less than optimal.
In most cases, a database server will be able to store and retrieve data much more efficiently than a PHP script that is reading/writing files.
Another possibility would be something like Memcached.
Object serialization is not known for its performance but for its ease of use and it's definitely not suited to handle large amounts of data.
SQLite comes with PHP; you could use that as your database. Otherwise you could try using sessions, then you don't have to serialize anything; you're just saving the raw PHP object.
What about using something like JSON as a format for storing/loading the data? I have no idea how fast the JSON parser is in PHP, but it's usually a fast operation in most languages and it's a lightweight format.
http://php.net/manual/en/book.json.php
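For reference, the round trip is just two calls, as a minimal sketch (reusing the $php_array name from the var_export answer above, with 'export.json' as a placeholder file name; whether it beats unserialize() for a 19 MB structure would need measuring):
<?php
// Store the parsed structure as JSON and load it back.
// Note: custom classes in the object graph would come back as plain arrays.
file_put_contents('export.json', json_encode($php_array));

$php_array = json_decode(file_get_contents('export.json'), true);   // true => associative arrays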