I recently switched to PHPCassa to manage database connections in my PHP platform.
This is the code I'm using:
$indexExpression = new IndexExpression("Username", $username);
$indexClause = new IndexClause(array($indexExpression));
$cf = new ColumnFamily($this->cassandra, "Users");
$rows = $cf->get_indexed_slices($indexClause);
The problem is that $rows is not an array containing the data I'd like to fetch; instead it contains an IndexedColumnFamilyIterator object.
Am I doing something wrong?
Thanks for helping.
Since you already cross-posted to the user mailing list (tsk, tsk :), I'll link to the answer and copy it here for others: https://groups.google.com/forum/?fromgroups#!topic/phpcassa/RrYTQc_jQ7s
It returns an iterator so that it can break up the query into manageable chunks (100 rows, by default) automatically.
$row_iterator = $cf->get_indexed_slices($indexClause);
foreach ($row_iterator as $key => $columns) {
    // do stuff
}
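If you really need a plain array and the result set is small enough to fit in memory, you can collect the iterator's output yourself. A minimal sketch, reusing $cf and $indexClause from the question:

$rows = array();
foreach ($cf->get_indexed_slices($indexClause) as $key => $columns) {
    // the iterator fetches buffered chunks of 100 rows behind the scenes;
    // this loop just flattens them into one array
    $rows[$key] = $columns;
}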
Using the following code, I can grab a node from a collection:
<?php
$user = "xxxx";
$pwd = 'xxxx';
if (isset($_POST['needleID'])) {
    $needleID = $_POST['needleID'];
} else {
    echo "needle ID not set";
}
// Manager class (note: the separator before the host must be @, not #)
$connection = new MongoDB\Driver\Manager("mongodb://${user}:${pwd}@localhost:27017");
// Query class
$filter = ['id' => $needleID];
$query = new MongoDB\Driver\Query($filter);
// Output of executeQuery will be an object of the MongoDB\Driver\Cursor class
$rows = $connection->executeQuery('DBNAME.DBCOLLECTION', $query);
// Convert rows to an array and send the result back to JavaScript as JSON
$rowsArr = $rows->toArray();
echo json_encode($rowsArr);
?>
However, what I'm really looking to do is get everything from the DBCOLLECTION.
I'm kind of at a loss on how to do this. A few searches either go over my head or are for older versions of the PHP driver, such as this one: fetch all data from mongodb collection.
If you query on a specific ID, you will only receive the document with that ID. If you want to retrieve all documents in a collection, leave the filter empty, i.e. use $filter = [];.
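For example, a minimal sketch of the same lookup with the filter left empty, reusing $connection and the placeholder namespace from the question:

$filter = []; // an empty filter matches every document in the collection
$query = new MongoDB\Driver\Query($filter);
$rows = $connection->executeQuery('DBNAME.DBCOLLECTION', $query);
echo json_encode($rows->toArray());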
It is better to use mongoexport for exporting collections. On large collections your code will be slow and will time out. Consider using pagination for results.
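A sketch of what that pagination could look like with the same driver, using the limit and skip query options; $page and $pageSize are hypothetical names:

$page = 0;       // zero-based page index
$pageSize = 100; // documents per page
$query = new MongoDB\Driver\Query([], [
    'limit' => $pageSize,         // cap the number of documents returned
    'skip'  => $page * $pageSize, // skip the documents of earlier pages
]);
$rows = $connection->executeQuery('DBNAME.DBCOLLECTION', $query);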
I'm using this PHP code to get objects from a Parse class. I've got 100,000 objects, and I want to get all of them in a single query.
This is the code I'm using:
$query = new ParseQuery("news_master");
$results = $query->find();
You can remove the limit to get all data. Please try the following code:
$query = new ParseQuery("news_master");
$query->equalTo("All", true);
$results = $query->find();
As of 2018, with the Parse Community PHP SDK and server, you can use the each function, which takes a callback and can iterate over all the data. Note that this cannot be used in conjunction with skip, sort, or limit. See the docs.
An example of a query from their test suite would look like this:
$query = new ParseQuery('Object');
$query->lessThanOrEqualTo('x', $count);
$values = [];
$query->each(
    function ($obj) use (&$values) {
        $values[] = $obj->get('x');
    },
    10
);
$valuesLength = count($values);
The value of 10 is the batch size you want. If your database table is locked down and requires the master key, then you can do the following:
$query = new ParseQuery('Object');
$query->lessThanOrEqualTo('x', $count);
$values = [];
$query->each(
    function ($obj) use (&$values) {
        $values[] = $obj->get('x');
    },
    true, 10 // notice the value of true
);
$valuesLength = count($values);
The reason I'm adding to this old thread is that if you search for getting more than 1000 records from Parse, no good link comes up, and this is usually the first one.
Cheers to anyone that stumbles across this!
I have a table in the MODX database (orders), and I need to export data from that DB to a table on the site.
I push into the DB with the following snippet:
<?php
function agregarCargas( &$fields )
{
    global $modx;
    // Init our array
    $dbTable = array();
    $dbTable['subject'] = $modx->db->escape($fields['subject']);
    $dbTable['fullname'] = $modx->db->escape($fields['fullname']);
    $dbTable['message'] = $modx->db->escape($fields['message']);
    // Run the db insert query
    $dbQuery = $modx->db->insert($dbTable, 'orders');
    return true;
}
?>
How can I export from the DB? With a snippet, or something else? Thanks.
(Old thread, just for new folks trying to tackle this...)
Looking at the API you're using, I'm guessing you are stuck with an old MODX version (Evolution).
You should take a look at the API::DB docs for MODX Evolution.
Something along the lines of the following would fill your HTML table:
$res = $modx->db->select("subject, fullname", 'orders');
$res_rows = $modx->db->makeArray($res);
$rows = "";
for ($n = 0; $n < count($res_rows); $n++) {
    $rows .= "<tr><td>".$res_rows[$n]['subject']."</td><td>".$res_rows[$n]['fullname']."</td></tr>\n";
}
return $rows;
(Of course you should use chunks instead of hardcoded HTML)
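For example, assuming a chunk named orderRow containing <tr><td>[+subject+]</td><td>[+fullname+]</td></tr> (the chunk name is hypothetical), the loop body could delegate to the Evolution API's parseChunk:

$rows = "";
foreach ($res_rows as $row) {
    // parseChunk fills the [+subject+] and [+fullname+] placeholders from $row
    $rows .= $modx->parseChunk('orderRow', $row, '[+', '+]');
}
return $rows;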
I want to get all the column names of a row in Cassandra. How can I do it in phpcassa?
If phpcassa does not support it, do any other languages or libraries?
In my case, column names are short, but rows are long (around 1000+ columns) and the data is big (around 100K).
Good question. Try something like this:
$pool = new ConnectionPool('feed', array('127.0.0.1'));
$raw = $pool->get();
$rows = $raw->client->execute_cql_query("SELECT * FROM posts", cassandra_Compression::NONE);
var_dump($rows);
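If all you need from that dump is the column names, here is a sketch of walking the raw Thrift result; it assumes phpcassa's generated cassandra_CqlResult structure, where each row carries a list of column structs:

foreach ($rows->rows as $cql_row) {
    foreach ($cql_row->columns as $column) {
        echo $column->name . "\n"; // each column struct also exposes value and timestamp
    }
}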
Maybe it will help...
Do you mean getting the names directly and only with phpcassa? I don't know any way to do it directly, but I used to do it by fetching the whole row and then executing a foreach loop over the array returned from the column family, like this:
1.- A small function to use everywhere (build your own if you need ;) ):
function f_get_data_as_array($p_pool, $p_cf, $p_key, $p_col_count = 100, $p_column_names = NULL, $p_range_start = '', $p_range_end = '', $p_inverted_sort = false)
{
    try {
        $lv_slice = new ColumnSlice($p_range_start, $p_range_end, $p_col_count, $p_inverted_sort);
        $lv_cf = new ColumnFamily($p_pool, $p_cf);
        $lv_cf->insert_format = ColumnFamily::ARRAY_FORMAT;
        $lv_cf->return_format = ColumnFamily::ARRAY_FORMAT;
        $lv_result = $lv_cf->get($p_key, $lv_slice, $p_column_names);
    } catch (Exception $lv_e) {
        return false;
    }
    return $lv_result;
}
2.- I call it using the first four parameters, setting the pool, the column family name, the key I need and the number of columns I want to get (set the number as you need).
3.- A foreach loop over the returned array to get each column name. Or, if you know the structure you will get from your column family, you just need to use the right indexes, probably $lv_result[0][0], $lv_result[0][1], and so on.
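A sketch of that foreach, assuming ARRAY_FORMAT returns each column as array($name, $value); $pool and the 'Users' column family are placeholders:

$lv_result = f_get_data_as_array($pool, 'Users', 'some_row_key');
$lv_names = array();
foreach ($lv_result as $lv_column) {
    $lv_names[] = $lv_column[0]; // index 0 holds the column name, index 1 the value
}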
Hope it helps. And sorry for my English!
With a list of around 100,000 key/value pairs (both strings, mostly around 5-20 characters each), I am looking for a way to efficiently find the value for a given key.
This needs to be done on a PHP website. I am familiar with hash tables in Java (which is probably what I would use if working in Java) but am new to PHP.
I am looking for tips on how I should store this list (in a text file or in a database?) and how to search it.
The list would have to be updated occasionally, but I am mostly interested in lookup time.
You could do it as a straight PHP array, but Sqlite is going to be your best bet for speed and convenience if it is available.
PHP array
Just store everything in a php file like this:
<?php
return array(
    'key1' => 'value1',
    'key2' => 'value2',
    // snip
    'key100000' => 'value100000',
);
Then you can access it like this:
<?php
$s = microtime(true); // gets the start time for benchmarking
$data = require('data.php');
echo $data['key2'];
var_dump(microtime(true)-$s); // dumps the execution time
Not the most efficient thing in the world, but it's going to work. It takes 0.1 seconds on my machine.
Sqlite
PHP should come with sqlite enabled, which will work great for this kind of thing.
This script will create a database for you from start to finish with similar characteristics to the dataset you describe in the question:
<?php
// this will *create* data.sqlite if it does not exist. Make sure "/data"
// is writable and *not* publicly accessible.
// the ATTR_ERRMODE bit at the end is useful as it forces PDO to throw an
// exception when you make a mistake, rather than internally storing an
// error code and waiting for you to retrieve it.
$pdo = new PDO('sqlite:'.dirname(__FILE__).'/data/data.sqlite', null, null, array(PDO::ATTR_ERRMODE=>PDO::ERRMODE_EXCEPTION));
// create the table if you need to
$pdo->exec("CREATE TABLE IF NOT EXISTS stuff(id TEXT PRIMARY KEY, value TEXT)");
// insert the data
$stmt = $pdo->prepare('INSERT INTO stuff(id, value) VALUES(:id, :value)');
$id = null;
$value = null;
// this binds the variables by reference so you can re-use the prepared statement
$stmt->bindParam(':id', $id);
$stmt->bindParam(':value', $value);
// insert some data (in this case it's just dummy data)
for ($i = 0; $i < 100000; $i++) {
    $id = $i;
    $value = 'value'.$i;
    $stmt->execute();
}
And then to use the values:
<?php
$s = microtime(true);
$pdo = new PDO('sqlite:'.dirname(__FILE__).'/data/data.sqlite', null, null, array(PDO::ATTR_ERRMODE=>PDO::ERRMODE_EXCEPTION));
$stmt = $pdo->prepare("SELECT * FROM stuff WHERE id=:id");
$stmt->bindValue(':id', 5);
$stmt->execute();
$value = $stmt->fetchColumn(1);
var_dump($value);
// the number of seconds it took to do the lookup
var_dump(microtime(true)-$s);
This one is waaaay faster. 0.0009 seconds on my machine.
MySQL
You could also use MySQL for this instead of Sqlite, but if it's just one table with the characteristics you describe, it's probably going to be overkill. The above Sqlite example will work fine using MySQL if you have a MySQL server available to you. Just change the line that instantiates PDO to this:
$pdo = new PDO('mysql:host=your.host;dbname=your_db', 'user', 'password', array(PDO::ATTR_ERRMODE=>PDO::ERRMODE_EXCEPTION));
The queries in the sqlite example should all work fine with MySQL, but please note that I haven't tested this.
Let's get a bit crazy: Filesystem madness
Not that the Sqlite solution is slow (0.0009 seconds!), but this one is about four times faster on my machine. Also, Sqlite may not be available, setting up MySQL might be out of the question, etc.
In this case, you can also use the file system:
<?php
$s = microtime(true); // more hack benchmarking
class FileCache
{
    protected $basePath;

    public function __construct($basePath)
    {
        $this->basePath = $basePath;
    }

    public function add($key, $value)
    {
        $path = $this->getPath($key);
        file_put_contents($path, $value);
    }

    public function get($key)
    {
        $path = $this->getPath($key);
        return file_get_contents($path);
    }

    public function getPath($key)
    {
        $split = 3;
        $key = md5($key);
        if (!is_writable($this->basePath)) {
            throw new Exception("Base path '{$this->basePath}' was not writable");
        }
        $path = array();
        for ($i = 0; $i < $split; $i++) {
            $path[] = $key[$i];
        }
        $dir = $this->basePath.'/'.implode('/', $path);
        if (!file_exists($dir)) {
            mkdir($dir, 0777, true);
        }
        return $dir.'/'.substr($key, $split);
    }
}
$fc = new FileCache('/tmp/foo');
/*
// use this crap for generating a test example. it's slow to create though.
for ($i = 0; $i < 100000; $i++) {
    $fc->add('key'.$i, 'value'.$i);
}
//*/
echo $fc->get('key1');
var_dump(microtime(true)-$s);
This one takes 0.0002 seconds for a lookup on my machine. This also has the benefit of being reasonably constant regardless of the cache size.
It depends on how frequently you would access your array; think of it this way: how many users can access it at the same time? There are many advantages to storing it in a database, and there you have two options, MySQL and SQLite.
SQLite works more like a text file with SQL support. You can save a few milliseconds during queries, as it sits within reach of your application. Its main disadvantage is that it can only add one record at a time (same as a text file).
I would recommend SQLite for arrays with static content, like GEO IP data, translations, etc.
MySQL is a more powerful solution, but it requires authentication and is located on a separate machine.
PHP arrays will do everything you need. But shouldn't that much data be stored in a database?
http://php.net/array