I'm trying to write a bit of code to inventory our OpenStack deployment, and I've run into an issue where serverList() only ever returns 100 results instead of the 600+ I'm expecting. I've reviewed the documentation and a bit of the source, and as far as I can tell there's no reason this should be happening: the PaginatedIterator should be doing its pagination transparently.
There are no errors or warnings either generated in my code or logged on my controller [that I can find]. I am using php-opencloud v1.12 via Composer.
use OpenCloud\OpenStack;

$client = new OpenStack('http://1.2.3.4:5000/v2.0/', array(
    'username'   => 'admin',
    'password'   => 'hunter2',
    'tenantName' => 'admin',
));
$service = $client->computeService('nova', 'RegionOne');

$stmt = $dbh->prepare('INSERT INTO servers VALUES (?,?)');

/* foreach ($service->serverList() as $server) {
    $stmt->execute([$server->id, $server->name]);
} // neither method works */

$list = $service->serverList();
while ($list->valid()) {
    $server = $list->current();
    $stmt->execute([$server->id, $server->name]);
    $list->next();
}

echo "\n";
var_dump($dbh->query('SELECT * FROM servers')->fetchAll(PDO::FETCH_ASSOC));
The default limit for pagination is 100. It is possible to override this with a higher limit like so:
$list = $service->serverList(null, array('limit' => 700));
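With the higher limit in place, either iteration style from the question should then walk the full set. A minimal sketch, reusing the $service and $stmt from the question:

$list = $service->serverList(null, array('limit' => 700));
foreach ($list as $server) {
    $stmt->execute([$server->id, $server->name]);
}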
Trying to update a batch of emails. I think I've tried every way to do this, but my use of DrewM's MailChimp wrapper only returns the following $result content:
Array ( [id] => 1234abcd [status] => pending [total_operations] => 0 [finished_operations] => 0
And so on. No errors, but no operations!
Essentially, my code looks like this, where $emails stores all the emails in an array.
include("MailChimp.php");
include("Batch.php");
$list_id = "1234abcd";
use \DrewM\MailChimp\MailChimp;
use \DrewM\MailChimp\Batch;
$apiKey = 'aslkjf84983hg84938h89gd-us13';
if(!isset($emails)){ // If not sending bulk requests
$MailChimp = new MailChimp($apiKey);
$subscriber_hash = $MailChimp->subscriberHash($email);
$result = $MailChimp->patch("lists/$list_id/members/$subscriber_hash",
array(
'status' => 'subscribed',
)
);
/* SENDING BATCH OF EMAILS */
} else if($emails){
$MailChimp = new MailChimp($apiKey);
$Batch = $MailChimp->new_batch();
$i = 1;
foreach($emails as &$value){
$Batch->post("op".$i, "lists/$list_id/members", [
'email_address' => $value,
'status' => 'subscribed',
]);
$i++;
}
$result = $Batch->execute(); // Send the request (not working I guess)
$MailChimp->new_batch($batch_id); // Now get results
$result = $Batch->check_status();
print_r($result);
}
If anyone can see what I'm not seeing, I'll be very grateful!
Problem solved. After talking with a rep at MailChimp, he helped me find two major problems.
Instead of using the POST method, he said to use PUT when working with already-existing emails. POST is best used for adding emails, while PUT can both add and update emails.
So, change
$Batch->post
to
$Batch->put
Secondly, after successfully sending requests and getting errors back in the $result, he found they were 405 errors and told me to add the MD5 subscriber hash of each email to the endpoint.
So, change
$Batch->post("op".$i, "lists/$list_id/members", [ ...
to
$subscriber_hash = $MailChimp->subscriberHash($value);
$Batch->put("op$i", "lists/$list_id/members/$subscriber_hash", [ ...
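Putting both fixes together, the batch loop becomes something like this (a sketch assembled from the fragments above, keeping the same request body as the question):

$i = 1;
foreach ($emails as $value) {
    $subscriber_hash = $MailChimp->subscriberHash($value);
    $Batch->put("op$i", "lists/$list_id/members/$subscriber_hash", [
        'email_address' => $value,
        'status' => 'subscribed',
    ]);
    $i++;
}
$result = $Batch->execute();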
And they sent me a MailChimp stocking cap for being a good sport :-)
Veni. Vidi. Vici.
I use this code in the MongoDB PHP driver to get all documents in the database
$result = $collection->find();
foreach ($result as $doc) {
    print_r($doc);
}
However, when I add a limit to it, it no longer works: no documents get printed:
$result = $collection->find()->limit(10);
foreach ($result as $doc) {
    print_r($doc);
}
There are certainly enough documents in the database. I cannot figure out what the problem is.
I have fixed the problem by taking a look at the source of the beta version. The documentation only covered the legacy mongo extension, not the newer mongodb extension.
The error logs showed this: Call to undefined method MongoDB\\Driver\\Cursor::addOption(). I checked the documentation and concluded the function should have worked, because it said (PECL mongo >=0.9.0). Note the missing db after mongo.
I fixed it by doing:
$collection->find([], [ 'limit' => 2 ]);
providing an empty filter array and passing my options in a second array afterwards.
Here I will try to describe, with examples for the new PHP mongodb driver, how to use skip, limit, and field selection:
require 'vendor/autoload.php'; // include Composer's autoloader

$client = new MongoDB\Client("mongodb://localhost:27017");

// SELECT * FROM YOUR_TABLE_NAME;
// db.YOUR_COLLECTION_NAME.find({});
$result = $client->YOUR_DB_NAME->YOUR_COLLECTION_NAME->find(array());

// SELECT * FROM YOUR_TABLE_NAME WHERE YOUR_COLUMN = "A"
// db.YOUR_COLLECTION_NAME.find({ YOUR_FIELD: "A" });
$result = $client->YOUR_DB_NAME->YOUR_COLLECTION_NAME->find(array('YOUR_FIELD' => 'A'));

// Return the specified fields and the _id field only
// SELECT _id, item, status FROM YOUR_TABLE_NAME WHERE status = "A"
// db.YOUR_COLLECTION_NAME.find({ status: "A" }, { item: 1, status: 1 })
$result = $client->YOUR_DB_NAME->YOUR_COLLECTION_NAME->find(array('status' => 'A'), array('projection' => array('item' => TRUE, 'status' => TRUE)));

// Suppress the _id field
// SELECT item, status FROM YOUR_TABLE_NAME WHERE status = "A"
// db.YOUR_COLLECTION_NAME.find({ status: "A" }, { item: 1, status: 1, _id: 0 })
$result = $client->YOUR_DB_NAME->YOUR_COLLECTION_NAME->find(array('status' => 'A'), array('projection' => array('item' => TRUE, 'status' => TRUE, '_id' => FALSE)));

// SELECT * FROM YOUR_TABLE_NAME LIMIT 10
// db.YOUR_COLLECTION_NAME.find({}).limit(10);
$result = $client->YOUR_DB_NAME->YOUR_COLLECTION_NAME->find(array(), array('limit' => 10));

// SELECT * FROM YOUR_TABLE_NAME LIMIT 5,10
// db.YOUR_COLLECTION_NAME.find({}).skip(5).limit(10)
$result = $client->YOUR_DB_NAME->YOUR_COLLECTION_NAME->find(array(), array('skip' => 5, 'limit' => 10));

// Projection, skip, and limit combined, suppressing the _id field
// SELECT item, status FROM YOUR_TABLE_NAME WHERE status = "A" LIMIT 5,10;
// db.YOUR_COLLECTION_NAME.find({ status: "A" }, { item: 1, status: 1, _id: 0 }).skip(5).limit(10);
$result = $client->YOUR_DB_NAME->YOUR_COLLECTION_NAME->find(array('status' => 'A'), array('projection' => array('item' => TRUE, 'status' => TRUE, '_id' => FALSE), 'skip' => 5, 'limit' => 10));
foreach ($result as $entry) {
    echo "<pre>";
    print_r($entry);
    echo "</pre>";
}
As a solution to the problem mentioned above, please try the following snippet. Note that configuring the cursor separately like this only works with the legacy mongo extension, where the query is not sent to the server until iteration starts (see the next answer for why):
$result = $collection->find();
$result->limit(10);

foreach ($result as $doc) {
    print_r($doc);
}
I had the same issue. There are a lot of code examples using $result = $collection->find()->limit(10);
It turns out that while this was totally valid for the original version of the MongoDB PHP driver, there is a new version of that very same driver. The original driver is now considered "The Legacy Driver".
Here is one example, how the "old" driver was supposed to be used:
<?php
$m = new MongoClient;
// Select 'demo' database and 'example' collection
$collection = $m->demo->example;
// Create the cursor
$cursor = $collection->find();
At this moment, although a cursor object has been created, the query has not yet been executed (i.e. it has not been sent to the server). The query will only be executed by starting iteration with foreach ($cursor as $result) or by calling $cursor->rewind(). This gives you the chance to configure the cursor's query with sort(), limit(), and skip() before it is executed by the server:
// Add sort and limit
$cursor->sort([ 'name' => 1 ])->limit(40);
In the new driver, as soon as you have a \MongoDB\Driver\Cursor object, the query has already been executed by the server. Because the sort (and limit and skip) parameters need to be sent to the server before the query is executed, you cannot retroactively apply them to an existing Cursor object.
This is the reason why there is no limit() method anymore. Also, the accepted answer is correct; I want to give a more elaborate example:
$filter = [
    'author' => 'rambo',
    'views' => [
        '$gte' => 100,
    ],
];

$options = [
    /* Return the documents in descending order of searchPage */
    'sort' => [
        'searchPage' => -1
    ],
    /* Limit to 2 */
    'limit' => 2,
    /* Close the cursor after the first batch */
    'singleBatch' => true,
];

$cursor = $collection->find($filter, $options);
I am using https://api.twitter.com/1.1/followers/ids.json?cursor=-1&screen_name=sitestreams&count=5000 to list Twitter followers, but I only got a list of 200 followers. How can I retrieve the rest of the followers list using the new API 1.1?
You must first set up your application:
<?php
$consumerKey = 'Consumer-Key';
$consumerSecret = 'Consumer-Secret';
$oAuthToken = 'OAuthToken';
$oAuthSecret = 'OAuth Secret';
# API OAuth
require_once('twitteroauth.php');
$tweet = new TwitterOAuth($consumerKey, $consumerSecret, $oAuthToken, $oAuthSecret);
You can download the twitteroauth.php from here: https://github.com/elpeter/pv-auto-tweets/blob/master/twitteroauth.php
Then
You can retrieve your followers like this:
$tweet->get('followers/ids', array('screen_name' => 'YOUR-SCREEN-NAME-USER'));
If you want to retrieve the next group of 5,000 followers, you must add the cursor value from the first call:
$tweet->get('followers/ids', array('screen_name' => 'YOUR-SCREEN-NAME-USER', 'cursor' => 9999999999));
You can read about: Using cursors to navigate collections in this link: https://dev.twitter.com/docs/misc/cursoring
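To walk the whole list, keep feeding next_cursor back into the request until it returns 0. A minimal sketch, assuming (as in the calls above) that get() returns the decoded response with ids and next_cursor properties:

$cursor = -1;
$all_ids = array();
do {
    $result = $tweet->get('followers/ids', array(
        'screen_name' => 'YOUR-SCREEN-NAME-USER',
        'cursor' => $cursor,
    ));
    $all_ids = array_merge($all_ids, $result->ids);
    $cursor = $result->next_cursor;
} while ($cursor != 0);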
You can't fetch more than 200 at once. It is clearly stated in the documentation for count:
The number of users to return per page, up to a maximum of 200. Defaults to 20.
You can page through the results via cursoring, using
cursor=-1 (meaning page 1): "If no cursor is provided, a value of -1 will be assumed, which is the first page."
Here's how I run/update the full list of follower ids on my platform. I'd avoid using sleep() like @aphoe's script. It's really bad to keep a connection open that long - and what happens if your user has 1 million followers? Are you going to keep that connection open for a week? lol If you must, run cron or save to redis/memcache. Rinse and repeat until you get all the followers.
Note, my code below is a class that's run through a cron command every minute. I'm using Laravel 5.1, so you can probably ignore a lot of this code, as it's unique to my platform: TwitterOAuth (which gets all the oAuths I have in the db), TwitterFollowerList is another table where I check if an entry already exists, TwitterFollowersDaily is another table where I store/update the total for the day, and TwitterApi is the Abraham\TwitterOAuth package. You can use whatever library, though.
This might give you a good sense of what you might do the same, or even help you figure out a better way. I won't explain all the code, as there's a lot happening, but you should be able to work through it. Let me know if you have any questions.
/**
 * Update follower list for each oAuth
 *
 * @return response
 */
public function updateFollowers()
{
    TwitterOAuth::chunk(200, function ($oauths)
    {
        foreach ($oauths as $oauth)
        {
            $page_id = $oauth->page_id;
            $follower_list = TwitterFollowerList::where('page_id', $page_id)->first();

            // Only refresh a list that is missing or more than 15 minutes old
            if (!$follower_list || $follower_list->updated_at < Carbon::now()->subMinutes(15))
            {
                $next_cursor = isset($follower_list->next_cursor) ? $follower_list->next_cursor : -1;
                $ids = isset($follower_list->follower_ids) ? $follower_list->follower_ids : [];

                $twitter = new TwitterApi($oauth->oauth_token, $oauth->oauth_token_secret);
                $results = $twitter->get("followers/ids", ["user_id" => $page_id, "cursor" => $next_cursor]);

                if (isset($results->errors)) continue;

                $ids = $results->ids;

                if ($results->next_cursor !== 0)
                {
                    $ticks = 0;
                    do
                    {
                        // Stay under the rate limit: bail out after 13 follow-up calls
                        if ($ticks === 13)
                        {
                            $ticks = 0;
                            break;
                        }
                        $ticks++;

                        $results = $twitter->get("followers/ids", ["user_id" => $page_id, "cursor" => $results->next_cursor]);
                        if (!$results) break;

                        $more_ids = $results->ids;
                        $ids = array_merge($ids, $more_ids);
                    }
                    while ($results->next_cursor > 0);
                }

                $stats = [
                    'page_id' => $page_id,
                    'follower_count' => count($ids),
                    'follower_ids' => $ids,
                    'next_cursor' => ($results->next_cursor > 0) ? $results->next_cursor : null,
                    'updated_at' => Carbon::now()
                ];

                TwitterFollowerList::updateOrCreate(['page_id' => $page_id], $stats);

                TwitterFollowersDaily::updateOrCreate(
                    [
                        'page_id' => $page_id,
                        'date' => Carbon::now()->toDateString()
                    ],
                    [
                        'page_id' => $page_id,
                        'date' => Carbon::now()->toDateString(),
                        'follower_count' => count($ids),
                    ]
                );

                continue;
            }
        }
    });
}
I tried following the basic example of Solr in PHP from the official docs (http://www.php.net/manual/en/book.solr.php).
I wanted to write a function that simply returns the Solr index.
For example, consider the following code:
$options = array( 'hostname' => SOLR_SERVER_HOSTNAME );
$client = new SolrClient($options);
$doc = new SolrInputDocument();
$doc->addField('id', 12345);
$doc->addField('title', 'Stack Overflow');
$client->addDocument($doc);
$client->commit();
This works perfectly. But let's say I wanted to write a function that simply returns the Solr index. For example:
function get_index(){
$index = //something here
...
return $index;
}
How can I do this? Is it possible? I'm new to Solr, and I'm using the PECL Solr client (http://www.php.net/manual/en/book.solr.php).
Please refer to examples #4 and #5 on the Examples page for the Solr PECL client. Then you can build a query that searches across all fields and returns all of them, like the following:
$options = array(
    'hostname' => SOLR_SERVER_HOSTNAME,
    'port' => SOLR_SERVER_PORT
);
$client = new SolrClient($options);

$query = new SolrQuery();
$query->setQuery('*:*'); // *:* means search all fields for all values
$query->setStart(0);
$query->setRows(100000); // very large, to ensure all rows are returned
$query->addField('*'); // * will return all fields

$query_response = $client->query($query);
$query_response->setParseMode(SolrQueryResponse::PARSE_SOLR_DOC);
$response = $query_response->getResponse();
print_r($response);
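To get the helper shape the question asked for, the same query can be wrapped in a function; a minimal sketch (get_index is the question's name for it, and the SOLR_* constants are assumed to be defined as above):

function get_index() {
    $client = new SolrClient(array(
        'hostname' => SOLR_SERVER_HOSTNAME,
        'port' => SOLR_SERVER_PORT
    ));

    $query = new SolrQuery('*:*'); // match every document
    $query->setStart(0);
    $query->setRows(100000);
    $query->addField('*');

    $query_response = $client->query($query);
    $query_response->setParseMode(SolrQueryResponse::PARSE_SOLR_DOC);
    return $query_response->getResponse();
}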
For more details on querying Solr and the options that you can use, please refer to the following:
Searching in Solr
Solr Query Syntax & Common Query Parameters
I have a query that is running way too slow; the page takes a few minutes to load.
I'm doing a table join on tables with over 100,000 records. In my query, is it grabbing all the records, or only the amount I need for the page? Do I need to put a limit in the query? If I do, won't that give the paginator the wrong record count?
$paymentsTable = new Donations_Model_Payments();
$select = $paymentsTable->select(Zend_Db_Table::SELECT_WITH_FROM_PART);
$select->setIntegrityCheck(false)
    ->from(array('p' => 'tbl_payments'), array('clientid', 'contactid', 'amount'))
    ->where('p.clientid = ?', $_SESSION['clientinfo']['id'])
    ->where('p.dt_added BETWEEN \''.$this->datesArr['dateStartUnix'].'\' AND \''.$this->datesArr['dateEndUnix'].'\'')
    ->join(array('c' => 'contacts'), 'c.id = p.contactid', array('fname', 'mname', 'lname'))
    ->group('p.id')
    ->order($sortby.' '.$dir);
$payments = $paymentsTable->fetchAll($select);

// paginator
$paginator = Zend_Paginator::factory($payments);
$paginator->setCurrentPageNumber($this->_getParam('page', 1));
$paginator->setItemCountPerPage('100'); // items per page
$this->view->paginator = $paginator;

$payments = $payments->toArray();
$this->view->payments = $payments;
Please see revised code below. You need to pass the $select to Zend_Paginator via the correct adapter. Otherwise you won't see the performance benefits.
$paymentsTable = new Donations_Model_Payments();
$select = $paymentsTable->select(Zend_Db_Table::SELECT_WITH_FROM_PART);
$select->setIntegrityCheck(false)
    ->joinLeft('contacts', 'tbl_payments.contactid = contacts.id')
    ->where('tbl_payments.clientid = 39')
    ->where(new Zend_Db_Expr('tbl_payments.dt_added BETWEEN "1262500129" AND "1265579129"'))
    ->group('tbl_payments.id')
    ->order('tbl_payments.dt_added DESC');

// paginator
$paginator = new Zend_Paginator(new Zend_Paginator_Adapter_DbTableSelect($select));
$paginator->setCurrentPageNumber($this->_getParam('page', 1));
$paginator->setItemCountPerPage('100'); // items per page
$this->view->paginator = $paginator;
In your code, you are:
- first, selecting and fetching all records that match your condition (the select ... from ... and the call to fetchAll on the line just after),
- and only then using the paginator, on the results returned by that fetchAll call.
With that, I'd say that, yes, all your 100,000 records are fetched from the DB, manipulated by PHP, and passed to Zend_Paginator, which has to work with them... only to discard almost all of them.
Using Zend_Paginator, you should be able to pass it an instance of Zend_Db_Select, and let it execute the query, specifying the required limit.
Maybe the example about DbSelect and DbTableSelect adapter might help you understand how this can be achieved (sorry, I don't have any working example).
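A minimal sketch of what that might look like, reusing the $select built in the question (an assumption on my part, since the answer above gives no working example; the adapter then applies the LIMIT itself when it runs the query):

$paginator = new Zend_Paginator(new Zend_Paginator_Adapter_DbTableSelect($select));
$paginator->setCurrentPageNumber($this->_getParam('page', 1));
$paginator->setItemCountPerPage(100);
$this->view->paginator = $paginator;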
I personally count the results via COUNT(*) and pass that to Zend_Paginator. I never understood why you'd deep-link Zend_Paginator right into the database results. I can see the pluses and minuses, but really, it's too far imho.
Bearing in mind that you only want 100 results, you're fetching 100,000+ and then Zend_Paginator is throwing them away. Realistically you want to just give it a count.
$items = Eurocreme_Model::load_by_type(array('type' => 'list', 'from' => $from, 'to' => MODEL_PER_PAGE, 'order' => 'd.id ASC'));
$count = Eurocreme_Model::load_by_type(array('type' => 'list', 'from' => 0, 'to' => COUNT_HIGH, 'count' => 1));
$paginator = Zend_Paginator::factory($count);
$paginator->setItemCountPerPage(MODEL_PER_PAGE);
$paginator->setCurrentPageNumber($page);
$this->view->paginator = $paginator;
$this->view->items = $items;