codeigniter mongodb performance - php

When I run my MongoDB query in the shell, I get the result set in a few milliseconds, but when I execute the same query in CodeIgniter it takes 12 seconds.
Shell script
db.order.find({customer_email:/^asd#asd.com/}).explain()
CodeIgniter script
$orderData = $this->mongo_db->get_where('order', array('customer_email'=> new MongoRegex("/^asd#asd.com/i")));
Is there any way to optimize the speed of fetching results?
There are 7,272,699 total records and I need to find asd#asd.com.

First, you should create an index on customer_email if there isn't one already. Second, try removing the i flag from the MongoRegex so the query can use the index:
$orderData = $this->mongo_db->get_where('order', array('customer_email'=> new MongoRegex("/^asd#asd.com/")));
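For reference, a minimal sketch of creating that index with the legacy MongoCollection API that MongoRegex implies (assuming $collection points at the order collection):
// One-time setup: ascending index on customer_email, so an anchored,
// case-sensitive regex like /^asd#asd.com/ can use it as a range scan.
$collection->ensureIndex(array('customer_email' => 1));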

Related

Adding a short sleep to a MySQL query to debug caching

I'd like to make an SQL query sleep for a couple of seconds to verify whether my application caches the query result properly.
I tried adding SLEEP(2) to the select query, which caused MySQL to hang until it was restarted.
I also tried adding a DO SLEEP(2); line before the actual query, which made PHP throw a "General Error" exception.
Here's some example code:
$sql = "SELECT ... HUGE LIST OF THINGS";
$result = $myCachedDatabase->query($sql); // Does this actually cache the query result? Or does it perform the query every time?
What I'd like is something along the lines of this:
$sql = "DELAY(5 seconds); SELECT ... HUGE LIST OF THINGS";
$result = $myCachedDatabase->query($sql); // First time it took 5s, second time it was instant - yay it gets cached!
The DELAY(5 seconds); is the part I'm looking for.
What is the best way to accomplish this?
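One hedged possibility (an assumption, not a tested answer): in MySQL, SLEEP() inside a one-row derived table is evaluated while the derived table is materialized, so cross joining against it should delay the whole statement once rather than once per row. The table name huge_list_of_things stands in for the question's elided query.
// Hypothetical sketch: (SELECT SLEEP(5)) yields a single row, so the
// cross join delays the whole statement once, not once per row.
$sql = "SELECT t.* FROM (SELECT SLEEP(5)) AS delay, huge_list_of_things AS t";
$result = $myCachedDatabase->query($sql); // first run ~5s; instant if cached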

PHP Laravel / Oracle compare timestamp, how?

I am trying to extract some data from my PHP Laravel app like this:
$x = Carbon::now()->timestamp;
$data = Notifications::all()->where('happened_at', '>', $x)
    ->where('id_user', Auth::user()->getAuthIdentifier());
If I run this SQL statement directly against my Oracle database, it works perfectly:
select * from notifications where happened_at > '10-06-2017 00:11:12,000000000';
This returns the correct rows; however, in my PHP Laravel app it doesn't return the right rows (it returns all the rows from the database).
Later edit: the where is the problem. I just want to compare the timestamp column in my DB ('happened_at') with the current time... I don't know how, though.
Let's break down your model call code.
// first, you retrieve all notifications and assign them to $data
$data = Notifications::all()
// then, you try to start a new query
->where('happened_at','>',$x)
// then, you continue the new query
->where('id_user',Auth::user()->getAuthIdentifier())
// and finally, you finish the call
;
So basically, you already fetch all rows at the beginning, and the where() calls that follow never run as a database query: since all() returns a collection, they just filter that collection in memory.
Remove the all() and finish up with ->get() at the end, and it should work. (Although I don't know anything about Oracle timestamps.)
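A minimal sketch of that fix, keeping the question's names (whether the raw Unix timestamp compares correctly against an Oracle TIMESTAMP column is a separate question):
$x = Carbon::now()->timestamp;

// One query, executed once by get(); both where() clauses now run
// in the database instead of filtering a collection in memory.
$data = Notifications::where('happened_at', '>', $x)
    ->where('id_user', Auth::user()->getAuthIdentifier())
    ->get();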

Laravel Query Builder returning empty rows from 100k+ rows

My users table has over 100K rows. I need to fetch them all for export purposes, but when I use the following code it doesn't return anything. If I apply a limit, it works up to a point: it returns 10K rows, but when I set the limit to 20K it again returns nothing. If I use mysqli_query on the same server and DB, it returns everything fine.
$myRows = DB::table('users')->get(); // returns empty
$myRows = DB::table('users')->take(10000)->get(); // returns 10,000 rows
$myRows = DB::table('users')->take(20000)->get(); // returns empty
I am new to Laravel.
Thanks in advance.
I think chunking (scroll to the "Chunking Results From A Table" section) would be a good option here.
DB::table('users')->chunk(10000, function ($users) {
    // process this chunk of $users here
});
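For the export use case, a hedged sketch along those lines (the users_export.csv path and the column names are illustrative assumptions):
// Stream the users table out in 10k-row chunks so memory stays flat.
$out = fopen('users_export.csv', 'w');
DB::table('users')->orderBy('id')->chunk(10000, function ($users) use ($out) {
    foreach ($users as $user) {
        fputcsv($out, array($user->id, $user->name, $user->email));
    }
});
fclose($out);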

Speed up 6,000 row query with Zend Framework 2

I have a query that returns roughly 6,000 results. Although this query executes in under a second in MySQL, once it is run through Zend Framework 2, it experiences a significant slowdown.
For this reason, I tried doing it a more "raw" way with PDO:
class ThingTable implements ServiceLocatorAwareInterface
{
    // ...
    public function goFast()
    {
        $db_config = $this->getServiceLocator()->get('Config')['db'];
        $pdo = new PDO($db_config['dsn'], $db_config['username'], $db_config['password']);
        // Driver options to prepare() must be key => value pairs;
        // MYSQL_ATTR_COMPRESS is a connection option and belongs in the PDO constructor.
        $statement = $pdo->prepare(
            'SELECT objectNumber, thingID, thingmaker, hidden, title FROM Things',
            array(PDO::ATTR_CURSOR => PDO::CURSOR_FWDONLY)
        );
        $statement->execute();
        return $statement->fetchAll(PDO::FETCH_ASSOC);
    }
}
This doesn't seem to have much of a speedup, though.
I think the problem might be that Zend is still trying to create a new Thing object for each record, even though it is only a partial list of columns. I'd really be okay not populating any objects. I really just need a few columns with those attributes to iterate over.
As suggested by user MonkeyZeus, the following was used for benchmarking:
$start = microtime(true);
$result = $statement->fetchAll(PDO::FETCH_ASSOC);
echo (microtime(true) - $start).' seconds';
And in response:
In a VM, that returns 0.0050520896911621. This is in line with what it is when I just run the command straight in MySQL. I believe the overhead is in Zend, but I'm not sure quite how to go about that. Again, if I had to guess, I'd say it is because Zend is adding overhead while trying to be nice with the results, but I'm not quite sure how to proceed after that.
[I'm] not so worried about the query. It is a single select statement. goFast() gets called by the Zend indexAction() (similar to other actions used across the project); this one is just way slower at returning the page. One problem I found was that Zend's $this->url() was slowing things down a bit, so I removed those, but the performance still isn't great.
How can I speed this up?
When you say that the query runs in under a second in MySQL, what do you mean? Did you run the query and print all 6,000 rows, or did you just run it and let the command line print the first/last few of them?
The problem might be that by fetching them all through the cursor, you are copying all the data (6,000 rows) from MySQL to PHP and then returning it. Are you sure you want to do this?
Maybe you could return a statement/cursor to the query and then iterate through the rows only when you really need them?
Your problem is not the SQL itself, but fetching it all into a PHP array at once.
You can test this by logging the time it takes to actually execute the SQL versus the time it takes to fetch the results into a PHP array.
Do not use fetchAll; return the statement itself, and in the function/code where you would foreach over that array, use the statement to fetch each row one by one.
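A minimal sketch of that last suggestion, reusing the question's goFast() (the $thingTable caller variable is hypothetical):
public function goFast()
{
    $db_config = $this->getServiceLocator()->get('Config')['db'];
    $pdo = new PDO($db_config['dsn'], $db_config['username'], $db_config['password']);
    $statement = $pdo->prepare('SELECT objectNumber, thingID, thingmaker, hidden, title FROM Things');
    $statement->execute();
    // Hand back the statement instead of materializing all rows at once.
    return $statement;
}

// Caller: pull rows one at a time, only when each is actually needed.
$statement = $thingTable->goFast();
while ($row = $statement->fetch(PDO::FETCH_ASSOC)) {
    // ... render or export $row ...
}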

cursor performance with mongodb php driver

Are there any performance issues with PHP Mongo query cursor handling?
My code:
$cursor = $collection->find($searchCriteria)->limit($limit_rows);
// Sort ascending based on S_DTTM
$cursor->sort(array('S_DTTM' => 1, 'SYMBOL' => 1));
// How many results found?
$num_docs = $cursor->count();
if ($num_docs > 0)
{
    // loop over the results
    foreach ($cursor as $ticks)
    {
        // ... process each tick document ...
    }
}
I have also seen code like this:
// request data
$result = $cursor->getNext();
My issue is that after the first query returns (full, with its limit of 100 rows), the next query just keeps looping. Millions of rows can come back, so I wanted to cap the results with limit().
I did re-index just in case; still no difference.
What am I doing wrong? Does getNext() work better?
I'm using mongod version 2.5.4 and the latest PHP Mongo driver, downloaded a week ago.
The collection is 100 GB, including 2 additional indexes.
The mongo log shows all the queries executing in less than 200 ms.
It turned out to be a query issue and not a PHP Mongo driver issue.
Use of count() and sort() may decrease performance.
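For instance, a hedged rework of the question's snippet that drops the separate count() round trip (assuming a compound index on S_DTTM and SYMBOL backs the sort):
$cursor = $collection->find($searchCriteria)
    ->sort(array('S_DTTM' => 1, 'SYMBOL' => 1))
    ->limit($limit_rows);

$found = false;
foreach ($cursor as $ticks) {
    $found = true;
    // ... process each $ticks document ...
}
if (!$found) {
    // handle the no-results case without an extra count() query
}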
