I am creating an application that has several servers running at the same time, with several processes on each server; all of them are processing data, making queries, updates, and inserts. In total, 35+ concurrent connections are being made at all times. These servers all send their data to a single mongodb server (mongod). I am not sharding my database at the moment.
The problem is that I am being limited by my mongodb server: whenever I add more servers, the queries/updates/inserts run slower (they take more time). I was running this on mongohq.com, then recently set up my own Amazon server for mongod, but I am still getting nearly the same result. Listed below is my db.serverStatus({}).
I am somewhat new to mongodb, but basically I need to know how to speed things up for the number of concurrent operations hitting my mongo server. I need it to be able to handle a lot of requests. I know sharding is a possible way around this, but if it is at all possible, can you list some other available solutions? Thanks.
> db.serverStatus({})
{
    "host" : "ip-10-108-245-21:28282",
    "version" : "2.0.1",
    "process" : "mongod",
    "uptime" : 11380,
    "uptimeEstimate" : 11403,
    "localTime" : ISODate("2011-12-13T22:27:56.865Z"),
    "globalLock" : {
        "totalTime" : 11380429167,
        "lockTime" : 86138670,
        "ratio" : 0.007569017717695356,
        "currentQueue" : {
            "total" : 0,
            "readers" : 0,
            "writers" : 0
        },
        "activeClients" : {
            "total" : 35,
            "readers" : 35,
            "writers" : 0
        }
    },
    "mem" : {
        "bits" : 64,
        "resident" : 731,
        "virtual" : 6326,
        "supported" : true,
        "mapped" : 976,
        "mappedWithJournal" : 1952
    },
    "connections" : {
        "current" : 105,
        "available" : 714
    },
    "extra_info" : {
        "note" : "fields vary by platform",
        "heap_usage_bytes" : 398656,
        "page_faults" : 1
    },
    "indexCounters" : {
        "btree" : {
            "accesses" : 798,
            "hits" : 798,
            "misses" : 0,
            "resets" : 0,
            "missRatio" : 0
        }
    },
    "backgroundFlushing" : {
        "flushes" : 189,
        "total_ms" : 29775,
        "average_ms" : 157.53968253968253,
        "last_ms" : 185,
        "last_finished" : ISODate("2011-12-13T22:27:16.651Z")
    },
    "cursors" : {
        "totalOpen" : 34,
        "clientCursors_size" : 34,
        "timedOut" : 0,
        "totalNoTimeout" : 34
    },
    "network" : {
        "bytesIn" : 89743967,
        "bytesOut" : 59379407,
        "numRequests" : 840133
    },
    "opcounters" : {
        "insert" : 5437,
        "query" : 8957,
        "update" : 4312,
        "delete" : 0,
        "getmore" : 76,
        "command" : 821388
    },
    "asserts" : {
        "regular" : 0,
        "warning" : 0,
        "msg" : 0,
        "user" : 0,
        "rollovers" : 0
    },
    "writeBacksQueued" : false,
    "dur" : {
        "commits" : 29,
        "journaledMB" : 0.147456,
        "writeToDataFilesMB" : 0.230233,
        "compression" : 0.9999932183619632,
        "commitsInWriteLock" : 0,
        "earlyCommits" : 0,
        "timeMs" : {
            "dt" : 3031,
            "prepLogBuffer" : 0,
            "writeToJournal" : 29,
            "writeToDataFiles" : 2,
            "remapPrivateView" : 0
        }
    },
    "ok" : 1
}
What is surprising about more load generating higher response times from mongod? There are a few possible reasons for this degradation of performance.
For example, every write to mongod takes a process-wide write lock. So the more servers you add, the more updates will be attempted (assuming the update load per server stays roughly stable), and thus the longer the process will spend in the write lock. You can keep an eye on this through mongostat's "locked %" field.
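As a quick sanity check, that lock share can be computed directly from the globalLock counters in the serverStatus output, as in this minimal JavaScript sketch using the numbers from the question (times are in microseconds):

```javascript
// Compute the global write-lock ratio from serverStatus-style counters.
// The numbers below are taken from the serverStatus output in the question.
function lockRatio(status) {
  return status.globalLock.lockTime / status.globalLock.totalTime;
}

const status = {
  globalLock: {
    totalTime: 11380429167, // total uptime of the server, in microseconds
    lockTime: 86138670      // time spent holding the global write lock
  }
};

// Roughly 0.76% of uptime spent in the write lock; mongostat's "locked %"
// reports the same ratio over a sliding window.
console.log((lockRatio(status) * 100).toFixed(2) + "%");
```

At under 1%, lock contention is probably not yet the bottleneck here; watching this number as app servers are added shows whether it becomes one.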
Additionally, if you use JS-powered functionality (m/r, db.eval(), etc.), these operations cannot be executed concurrently by mongod, because each mongod has a single JavaScript context (which is single-threaded).
If you want a more specific analysis, consider posting exact numbers: how many reads and writes per second, the query plans for the queries you execute, what effect adding an additional app server has on your overall database performance, etc.
Related
Hello, I am trying to use PHP to access my database from a Swift app I am coding. So far, reading the tables has been going great, except now I am trying to read a table that has multiple rows containing JSON. This has been throwing errors, and I cannot seem to get the final output to equal what I want, or anything that works with the Swift code for that matter. The JSON originally just output as null. While researching how to fix that, I tried utf8_encode(), but that gave too many extra characters and the Swift code in the app couldn't make sense of it. When outputting just one of the rows it comes out fine; it's when I try putting them in one associative array to output as JSON that they come up as null.
PHP Code:
<?php
$sql = "Select * FROM User WHERE Id = '".$UserId."' LIMIT 1";
mysql_select_db($database, $User);
$result = mysql_query($sql , $User) or die(mysql_error());
$FleetRaw = mysql_fetch_assoc($result);
$Fleet1 = $FleetRaw['Fleet1'];
$Fleet2 = $FleetRaw['Fleet2'];
$Fleet3 = $FleetRaw['Fleet3'];
$Fleet4 = $FleetRaw['Fleet4'];
$Fleet5 = $FleetRaw['Fleet5'];
$Fleet6 = $FleetRaw['Fleet6'];
$Fleets = array("1"=>$Fleet1,"2"=>$Fleet2,"3"=>$Fleet3,"4"=>$Fleet4,"5"=>$Fleet5,"6"=>$Fleet6);
//Output 1
echo $Fleets["1"]."<br><br><br>";
//Output 2
echo json_encode(utf8_encode($Fleets["1"]))."<br><br><br>";
//Output 3
echo json_encode($Fleets);
?>
Outputs:
Output 1:
{ “status” : 3, “game” : 0, “ships” : { "1" : { "level" : 0, "className" : "LighteningShip", "posX" : 100, "health" : 50, "posY" : 100 }, "3" : { "level" : 0, "className" : "LighteningShip", "posX" : 100, "health" : 50, "posY" : -100 }, "2" : { "level" : 0, "className" : "LighteningShip", "posX" : 100, "health" : 50, "posY" : 0 }, "0" : { "level" : 0, "className" : "MotherShip", "posX" : 0, "health" : 100, "posY" : 0 } } }
Output 2:
"{\n\u0093status\u0094 : 3,\n\u0093game\u0094 : 0,\n\u0093ships\u0094 : {\n \"1\" : {\n \"level\" : 0,\n \"className\" : \"LighteningShip\",\n \"posX\" : 100,\n \"health\" : 50,\n \"posY\" : 100\n },\n \"3\" : {\n \"level\" : 0,\n \"className\" : \"LighteningShip\",\n \"posX\" : 100,\n \"health\" : 50,\n \"posY\" : -100\n },\n \"2\" : {\n \"level\" : 0,\n \"className\" : \"LighteningShip\",\n \"posX\" : 100,\n \"health\" : 50,\n \"posY\" : 0\n },\n \"0\" : {\n \"level\" : 0,\n \"className\" : \"MotherShip\",\n \"posX\" : 0,\n \"health\" : 100,\n \"posY\" : 0\n }\n}\n}"
Output 3:
{"1":null,"2":null,"3":null,"4":null,"5":null,"6":null}
Output 1 is exactly the format I want (the one Swift understands), except it is only one of the six rows (also, the app rejects this form because it is not json_encoded before echoing). Output 2 is an example of one of the six rows where using utf8_encode() before saving to the array gives too many extra characters; however, it does come out as not-null when put into the array of six. Output 3 is what I want to eventually output, just without the nulls.
The ideal situation would be to combine Outputs 1 and 3, so that I can output an array of six entries that each look like Output 1. Also, the app has only worked when I json_encode what I echo. If there is any way to accomplish this, please let me know!!
Thanks!!
Closest attempt (working, but it doubles the data?):
$Fleet1 = $FleetRaw['Fleet1'];
$Fleet2 = $FleetRaw['Fleet2'];
$Fleet3 = $FleetRaw['Fleet3'];
$Fleet4 = $FleetRaw['Fleet4'];
$Fleet5 = $FleetRaw['Fleet5'];
$Fleet6 = $FleetRaw['Fleet6'];
$Fleets = array("1"=>$Fleet1,"2"=>$Fleet2,"3"=>$Fleet3,"4"=>$Fleet4,"5"=>$Fleet5,"6"=>$Fleet6);
// Convert an array of JSON-Strings to unified array of structured data..
foreach ($Fleets as $key => $sJSONString) {
    $FleetRaw[$key] = json_decode($sJSONString);
}
// Now return the whole lot as a json-string to the client
header("Content-type: application/json"); // My assumption of your model..
print json_encode($Fleets);
There are two problems, as far as I can see:
Issue A: Broken JSON in database
Output 1:
{ “status” : 3, “game” : 0, “ships” : { "1" : { ... etc
Those “” characters are not legal in JSON, so you won't be able to parse the data you have in your database as JSON. You will have to replace them with legitimate " characters. Where did the JSON come from?
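One way to repair the data is to replace the typographic quotes before parsing. Here is a minimal sketch in JavaScript (in PHP the equivalent would be a str_replace before json_decode; the function name is illustrative):

```javascript
// Replace typographic ("smart") quotes U+201C/U+201D with plain ASCII
// quotes so the stored string becomes parseable JSON.
function repairQuotes(brokenJson) {
  return brokenJson.replace(/[\u201C\u201D]/g, '"');
}

// A fragment like the broken data in Output 1:
const broken = '{ \u201Cstatus\u201D : 3, \u201Cgame\u201D : 0 }';
const doc = JSON.parse(repairQuotes(broken));
console.log(doc.status); // 3
```

Fixing the data at its source (whatever wrote those curly quotes) is of course better than patching on every read.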
Issue B: Mixed string & structure
You're mixing JSON-as-a-string (coming from the database) with a PHP array data structure (the array of rows from the database) that you wish to represent as JSON.
So to fix that, you should be doing something like:
<?php
// Convert an array of JSON-Strings to unified array of structured data..
foreach ($FleetRaw as $key => $sJSONString) {
    $FleetRaw[$key] = json_decode($sJSONString);
}
// Now return the whole lot as a json-string to the client
header("Content-type: application/json"); // My assumption of your model..
print json_encode($FleetRaw);
?>
What this should output is an array of objects:
[{ "status" : 3, "game" : 0, "etc" : "..." },{ "status" : 99, "game" : 123, "etc" : "..." },{ "status" : 345, "game" : 456, "etc" : "..." },{ .. }]
Note on your 'nulls' & UTF8 (Output 3)
I would imagine your nulls are caused by PHP failing to encode the JSON strings even as plain strings, because they contain invalid UTF-8 characters; that is why Output 3 shows nulls. Those encoding issues may just be those dodgy “” quotes you have in your database.
If you fix Issue A, you may find you fix Output 3 too, though this doesn't preclude you from having to address Issue B: Output 3 would then become an array of your JSON strings (represented as strings that just happen to look like JSON), and Issue B will sort that out.
Incidentally: http://php.net/manual/en/function.json-last-error.php should help you narrow down any remaining issues with your source JSON if the above doesn't.
Hope this helps! J.
I'm not sure, but I think the problem is that your fleet data is already in JSON format. That is why the first output echoes what you want. In the second output you encode the JSON data from $Fleets["1"] into UTF-8 and then encode it into JSON again. The third output has the same problem: you are trying to re-encode your JSON data into JSON again.
Try this one:
$Fleet1 = json_decode($FleetRaw['Fleet1']);
$Fleet2 = json_decode($FleetRaw['Fleet2']);
$Fleet3 = json_decode($FleetRaw['Fleet3']);
$Fleet4 = json_decode($FleetRaw['Fleet4']);
$Fleet5 = json_decode($FleetRaw['Fleet5']);
$Fleet6 = json_decode($FleetRaw['Fleet6']);
You get objects.
$Fleets = array("1"=>$Fleet1,"2"=>$Fleet2,"3"=>$Fleet3,"4"=>$Fleet4,"5"=>$Fleet5,"6"=>$Fleet6);
You get an array of objects.
echo json_encode($Fleets);
You should get valid JSON data.
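The decode-then-encode point can be illustrated in a few lines (JavaScript used here for illustration; the same applies to PHP's json_decode/json_encode):

```javascript
// A row from the database is JSON *as a string*.
const row = '{"status": 3, "game": 0}';

// Encoding the string again just wraps it in quotes and escapes it...
const doubleEncoded = JSON.stringify(row);
// ...whereas decoding first, then encoding, yields real structure.
const properlyEncoded = JSON.stringify(JSON.parse(row));

console.log(doubleEncoded);   // "{\"status\": 3, \"game\": 0}"
console.log(properlyEncoded); // {"status":3,"game":0}
```

This is exactly the difference between the escaped Output 2 and the clean form the Swift side expects.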
Try json_decode() on the last line, as follows:
$Fleets["1"] = '{ "status" : 3, "game" : 0, "ships" : { "1" : { "level" : 0, "className" : "LighteningShip", "posX" : 100, "health" : 50, "posY" : 100 }, "3" : { "level" : 0, "className" : "LighteningShip", "posX" : 100, "health" : 50, "posY" : -100 }, "2" : { "level" : 0, "className" : "LighteningShip", "posX" : 100, "health" : 50, "posY" : 0 }, "0" : { "level" : 0, "className" : "MotherShip", "posX" : 0, "health" : 100, "posY" : 0 } } }';
//Output 1
echo $Fleets["1"]."<br><br><br>";
//Output 2
echo json_encode(utf8_encode($Fleets["1"]))."<br><br><br>";
//Output 3
echo '<pre>';
print_r(json_decode($Fleets["1"]));
echo '</pre>';
Your Output 1 should be:
{ "status" : 3, "game" : 0, "ships" : { "1" : { "level" : 0, "className" : "LighteningShip", "posX" : 100, "health" : 50, "posY" : 100 }, "3" : { "level" : 0, "className" : "LighteningShip", "posX" : 100, "health" : 50, "posY" : -100 }, "2" : { "level" : 0, "className" : "LighteningShip", "posX" : 100, "health" : 50, "posY" : 0 }, "0" : { "level" : 0, "className" : "MotherShip", "posX" : 0, "health" : 100, "posY" : 0 } } }
It may help you.
When the primary server of my replica set fails, currently open connections also fail immediately (!), throwing MongoConnectionException (No candidate servers found) or, when I use GridFS, MongoCursorException (mongoUbuntu:8004: Remote server has closed the connection).
Is this a bug or do I have to change my setup in order to get automatic failover working?
Installation
Linux ubuntu kernel 3.16.0-31-generic
PHP 5.5.12-2ubuntu4.3
pecl/mongo 1.6.6
mongo 2.6.3
PHP via cli
The server's hostname is mongoUbuntu; the mongodb processes were all started on one single computer with the following command lines.
Mongod options
I run 4 servers on ports 8001, 8002, 8003 and 8004, plus one arbiter on 8010. The command line for 8001 is as follows:
mongod --replSet rs1 --dbpath /var/lib/mongodb1 --port 8001 --smallfiles --oplogSize 200 --httpinterface --rest
Replication Set rs.status()
{
    "set" : "rs1",
    "date" : ISODate("2015-04-08T14:48:57Z"),
    "myState" : 3,
    "members" : [
        {
            "_id" : 0,
            "name" : "mongoUbuntu:8004",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 467,
            "optime" : Timestamp(1428501340, 1),
            "optimeDate" : ISODate("2015-04-08T13:55:40Z"),
            "lastHeartbeat" : ISODate("2015-04-08T14:48:56Z"),
            "lastHeartbeatRecv" : ISODate("2015-04-08T14:48:55Z"),
            "pingMs" : 0,
            "syncingTo" : "mongoUbuntu:8001"
        },
        {
            "_id" : 1,
            "name" : "mongoUbuntu:8003",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 987,
            "optime" : Timestamp(1428501340, 1),
            "optimeDate" : ISODate("2015-04-08T13:55:40Z"),
            "lastHeartbeat" : ISODate("2015-04-08T14:48:56Z"),
            "lastHeartbeatRecv" : ISODate("2015-04-08T14:48:56Z"),
            "pingMs" : 0,
            "syncingTo" : "mongoUbuntu:8001"
        },
        {
            "_id" : 2,
            "name" : "mongoUbuntu:8002",
            "health" : 1,
            "state" : 3,
            "stateStr" : "RECOVERING",
            "uptime" : 3142,
            "optime" : Timestamp(1428498901, 1),
            "optimeDate" : ISODate("2015-04-08T13:15:01Z"),
            "infoMessage" : "still syncing, not yet to minValid optime 55252e9b:37",
            "self" : true
        },
        {
            "_id" : 3,
            "name" : "mongoUbuntu:8001",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 3139,
            "optime" : Timestamp(1428501340, 1),
            "optimeDate" : ISODate("2015-04-08T13:55:40Z"),
            "lastHeartbeat" : ISODate("2015-04-08T14:48:56Z"),
            "lastHeartbeatRecv" : ISODate("2015-04-08T14:48:56Z"),
            "pingMs" : 0,
            "electionTime" : Timestamp(1428503596, 1),
            "electionDate" : ISODate("2015-04-08T14:33:16Z")
        },
        {
            "_id" : 4,
            "name" : "mongoUbuntu:8010",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 3139,
            "lastHeartbeat" : ISODate("2015-04-08T14:48:56Z"),
            "lastHeartbeatRecv" : ISODate("2015-04-08T14:48:55Z"),
            "pingMs" : 0
        }
    ],
    "ok" : 1
}
PHP Script
The following script is running when I terminate the current primary (an example without real usage of GridFS, directly executing a query):
<?php
$conn = new \MongoClient(
'mongodb://mongoUbuntu:8001,mongoUbuntu:8002,mongoUbuntu:8003',
array('replicaSet' => 'rs1', 'readPreference' => \MongoClient::RP_PRIMARY_PREFERRED)
);
$db = $conn->bat; // db name: bat
$gridfs = $db->getGridFS();
while(true) {
$documents = $db->execute('db.getCollection(\'fs.files\').count()');
echo $documents['retval']."\n";
sleep(1);
}
Issue
Until I terminate the current primary, the script prints the count of documents to the command line. When I terminate the current primary (by pressing Ctrl+C in the respective mongod terminal), the PHP script immediately throws an exception, in this case MongoConnectionException: No candidate servers found.
I used the script from GitHub to create a full debug log and saved it as the Gist mongo-php-log-automaticfailover-fails.
Do I have to add another option to the connection creation or to the config of the replica set? If this is described in the mongodb documentation or in the documentation of the mongodb PHP driver, where can I find it?
Yes, the driver does throw an exception. Technically it is the right thing to do, especially if you wish to know when the set fails over. What you need to do is catch the exception and retry.
Your biggest problem here is that for some reason you are evaling. Eval must be run ONLY on the primary, so when a set fails over it has to wait for a newly elected primary, which can take up to 10 seconds.
However, it seems this facet of eval is not actually documented, though it is recognised: https://dba.stackexchange.com/questions/75852/mongodb-replicaset-reconfig-when-primary-and-majority-dont-exist (the answerer of that question is in fact from 10gen (MongoDB Inc), and he does not deny that eval does not work on secondaries). I am fairly certain this used to be in the documentation; that is where I first saw it.
This is quite well explained in the MongoDB 101 course for Python, and the same applies to most languages. I personally just disallow the connection until a new primary is found (but then I don't eval stuff); if you are solely reading, you can remove the eval and do this through PHP. That should allow you to read without hindrance.
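The catch-and-retry advice can be sketched as a generic wrapper (shown in JavaScript for illustration; the function and parameter names are hypothetical, not part of any Mongo driver, and the same try/catch-with-sleep loop applies in PHP):

```javascript
// Generic retry sketch: run an operation, and on failure (e.g. a
// "No candidate servers found" error during a replica-set election)
// wait a bit and try again, up to a fixed number of attempts.
async function withRetry(operation, attempts = 5, delayMs = 2000) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err; // remember the error in case every attempt fails
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```

With a failover window of up to roughly 10 seconds, five attempts spaced two seconds apart should usually ride out an election.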
Maybe this is a silly question, but I have this doubt anyway.
Please take a look at this query:
db.posts.find({ "blog": "myblog",
"post_author_id": 649,
"shares.total": { "$gt": 0 } })
.limit(10)
.skip(1750)
.sort({ "shares.total": -1, "tstamp_published": -1 });
Actually, I see this report in the mongodb profiler:
mongos> db.system.profile.find({ nreturned : { $gt : 1000 } }).limit(10).sort( { millis : 1 } ).pretty();
{
    "ts" : ISODate("2013-04-04T13:28:08.906Z"),
    "op" : "query",
    "ns" : "mydb.posts",
    "query" : {
        "$query" : {
            "blog" : "myblog",
            "post_author_id" : 649,
            "shares.total" : {
                "$gt" : 0
            }
        },
        "$orderby" : {
            "shares.total" : -1,
            "tstamp_published" : -1
        }
    },
    "ntoreturn" : 1760,
    "nscanned" : 12242,
    "scanAndOrder" : true,
    "nreturned" : 1760,
    "responseLength" : 7030522,
    "millis" : 126,
    "client" : "10.0.232.69",
    "user" : ""
}
Now the question is: why is mongodb returning 1760 documents when I have explicitly asked it to skip 1750?
This is my current MongoDB version, in a cluster/sharded setup:
mongos> db.runCommand("buildInfo")
{
    "version" : "2.0.2",
    "gitVersion" : "514b122d308928517f5841888ceaa4246a7f18e3",
    "sysInfo" : "Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41",
    "versionArray" : [
        2,
        0,
        2,
        0
    ],
    "bits" : 64,
    "debug" : false,
    "maxBsonObjectSize" : 16777216,
    "ok" : 1
}
Now the question is: why is mongodb returning 1760 documents when I have explicitly asked it to skip 1750?
Because the server-side skip() does exactly that: it iterates over the first 1750 results and then gets 10 more (according to the limit).
As @devesh says, this is why very large pagination offsets should be avoided, since MongoDB does not make effective use of an index for skip() or limit().
I think you have hit the bullseye; this is the reason the MongoDB documentation asks us to avoid large skips: http://docs.mongodb.org/manual/reference/method/cursor.skip/. Using some other key with the $gt operator will be much faster. For example, take the datetime stamp of the last document on page 1, then query with $gt on that datetime.
The cursor.skip() method is often expensive because it requires the server to walk from the beginning of the collection or index to get to the offset or skip position before beginning to return results.
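The difference between offset pagination and the range ("seek") approach can be sketched with a small simulation (plain JavaScript, helper names hypothetical; a real range query would look something like find({ tstamp_published: { $lt: lastSeen } }).limit(10) for a descending sort):

```javascript
// Offset pagination: the server must still iterate over every skipped
// result, which is why the profiler reports nreturned/ntoreturn 1760
// for skip(1750).limit(10).
function pageBySkip(sortedDocs, skip, limit) {
  return { scanned: skip + limit, page: sortedDocs.slice(skip, skip + limit) };
}

// Range ("seek") pagination: resume from the last seen sort key. The
// `scanned` count here simulates an index seek by counting only the
// documents actually returned.
function pageByRange(sortedDocs, lastSeenKey, limit) {
  const page = sortedDocs.filter(d => d.key > lastSeenKey).slice(0, limit);
  return { scanned: page.length, page };
}

const docs = Array.from({ length: 2000 }, (_, i) => ({ key: i }));
console.log(pageBySkip(docs, 1750, 10).scanned);  // 1760
console.log(pageByRange(docs, 1749, 10).scanned); // 10
```

Both calls return the same ten documents; only the amount of work to reach them differs.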
UPDATE

I posted an answer, as this has been confirmed to be an issue.

ORIGINAL
First, I apologize: I have just started using MongoDB yesterday, and I am still pretty new at this. I have a pretty simple query, and using PHP my findings are the following.
Mongo version is 2.0.4, running on CentOS 6.2 (Final) x64
$start = microtime(true);
$totalactive = $db->people->count(array('items'=> array('$gt' => 1)));
$end = microtime(true);
printf("Query lasted %.2f seconds\n", $end - $start);
Without index, it returns:
Query lasted 0.15 seconds
I have 280,000 records in the people collection. So I thought adding an index on "items" should be helpful, because I query this data a lot. But to my disbelief, after adding the index I get this:
Query lasted 0.25 seconds
Am I doing anything wrong?
Instead of count, I used find to get the explain output:
> db.people.find({ 'items' : { '$gte' : 1 } }).explain();
{
    "cursor" : "BtreeCursor items_1",
    "nscanned" : 206396,
    "nscannedObjects" : 206396,
    "n" : 206396,
    "millis" : 269,
    "nYields" : 0,
    "nChunkSkips" : 0,
    "isMultiKey" : false,
    "indexOnly" : false,
    "indexBounds" : {
        "items" : [
            [
                1,
                1.7976931348623157e+308
            ]
        ]
    }
}
If I change my query to be "$ne" 0, it takes 10ms more!
Here are the collection stats:
> db.people.stats()
{
    "ns" : "stats.people",
    "count" : 281207,
    "size" : 23621416,
    "avgObjSize" : 84.00009957077881,
    "storageSize" : 33333248,
    "numExtents" : 8,
    "nindexes" : 2,
    "lastExtentSize" : 12083200,
    "paddingFactor" : 1,
    "flags" : 0,
    "totalIndexSize" : 21412944,
    "indexSizes" : {
        "_id_" : 14324352,
        "items_1" : 7088592
    },
    "ok" : 1
}
I have 1GB of ram free, so I believe the index fits in memory.
Here's the people index, as requested:
> db.people.getIndexes()
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "ns" : "stats.people",
        "name" : "_id_"
    },
    {
        "v" : 1,
        "key" : {
            "items" : 1
        },
        "ns" : "stats.people",
        "name" : "items_1"
    }
]
Having an index can be beneficial for two reasons:
when accessing only a small part of the collection (because of a restrictive filter that can be satisfied by the index). The rule of thumb is less than 10%.
when the collection does not need to be accessed at all (because all necessary data is in the index, both for the filtering, and for the result set). This will be indicated by "indexOnly = true".
For the "find" query, neither of these is true: you are accessing almost the whole collection (206,396 out of 281,207 documents) and need all field data. So you will go through the index first, and then through almost the whole collection anyway, defeating the purpose of the index. Just reading the whole collection would have been faster.
I would have expected the "count" query to perform better (because it can be satisfied by just going through the index). Can you get an explain for that, too?
Look at this:
http://www.mongodb.org/display/DOCS/Indexing+Advice+and+FAQ#IndexingAdviceandFAQ-5.MongoDB%27s%24neor%24ninoperator%27saren%27tefficientwithindexes.
That made me consider the following solution. How about this?
$totalactive = $db->people->count() - $db->people->count(array('items' => 1));
This was confirmed to be a bug, or at least something that needed optimization in the MongoDB engine. I posted this on the mongo mailing list, and the response I received from Eliot Horowitz was:
"That's definitely a bug, or at least a path that could be way better optimized. Made a case: https://jira.mongodb.org/browse/SERVER-5607"
Priority: Major
Fix Version/s: 2.3 desired
Type: Bug
Thanks to those who helped confirm this was a bug =)
Can you please provide an example of an object in this collection? Is the "items" field an array? If so, I would recommend adding a new field "itemCount" and putting an index on that; doing $gt on that field will be extremely fast.
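That denormalization can be sketched in a few lines (field and function names are hypothetical, shown in JavaScript): keep a scalar itemCount alongside the array, so the count query becomes a simple range match on an indexed number:

```javascript
// Sketch: store a scalar itemCount next to the array so queries can use
// a cheap numeric index instead of comparing against the array field.
function withItemCount(person) {
  return { ...person, itemCount: person.items.length };
}

const person = withItemCount({ name: "a", items: ["x", "y", "z"] });
console.log(person.itemCount); // 3
// The query would then become: db.people.count({ itemCount: { $gt: 1 } })
```

The count has to be kept in sync on every insert/update of the array, which is the usual price of denormalization.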
This is because your queries are near-full collection scans. The query optimizer is picking the index when it should not use it for optimum performance. It's counterintuitive, yes, but walking the index b-tree and fetching the documents the tree points to is slower than just walking the collection when almost the whole tree has to be scanned.
If you really need this kind of query, and you want to keep the index for other things, like sorting, you can use .hint({$natural: 1}) to tell the query not to use the index.
Coincidentally, I posted about a similar issue in a blog post recently: http://wes.skeweredrook.com/testing-with-mongodb-part-1/
I was using a PHP mongo command:
$db->command(array("create" => $name, "size" => $size, "capped" => true, "max" => $max));
And my collections grew way past their supposed capped limits. I put in a fix:
$db->createCollection($name, true, $size, $max);
Currently, the counts are so low I can't tell whether the 'fix' worked.
How can you tell if a collection is capped, either from the shell or PHP? I wasn't able to find this information in the system.namespaces.
It turns out there's also the isCapped() function:
db.foo.isCapped()
In the shell, use db.collection.stats(). If a collection is capped:
> db.my_collection.stats()["capped"]
1
If a collection is not capped, the "capped" key will not be present.
Below are example results from stats() for a capped collection:
> db.my_coll.stats()
{
    "ns" : "my_db.my_coll",
    "count" : 221,
    "size" : 318556,
    "avgObjSize" : 1441.4298642533936,
    "storageSize" : 1000192,
    "numExtents" : 1,
    "nindexes" : 0,
    "lastExtentSize" : 1000192,
    "paddingFactor" : 1,
    "flags" : 0,
    "totalIndexSize" : 0,
    "indexSizes" : {
    },
    "capped" : 1,
    "max" : 2147483647,
    "ok" : 1
}
This is with MongoDB 1.7.4.
From the shell:
db.system.namespaces.find()
You'll see a list of all collections and indexes for the given db. If a collection is capped, that will be indicated.
For PHP:
$collection = $db->selectCollection($name);
$result = $collection->validate();
$isCapped = isset($result['capped']);