When the primary server of my replica set fails, currently open connections also fail immediately, throwing a MongoConnectionException (No candidate servers found) or, when I use GridFS, a MongoCursorException (mongoUbuntu:8004: Remote server has closed the connection).
Is this a bug or do I have to change my setup in order to get automatic failover working?
Installation
Linux ubuntu kernel 3.16.0-31-generic
PHP 5.5.12-2ubuntu4.3
pecl/mongo 1.6.6
mongo 2.6.3
PHP via cli
The server's hostname is mongoUbuntu; all mongod processes were started on a single computer with the command line shown below.
Mongod options
I run 4 servers on ports 8001, 8002, 8003 and 8004, plus one arbiter on 8010. The command line for 8001 is as follows:
mongod --replSet rs1 --dbpath /var/lib/mongodb1 --port 8001 --smallfiles --oplogSize 200 --httpinterface --rest
Replica set rs.status()
{
"set" : "rs1",
"date" : ISODate("2015-04-08T14:48:57Z"),
"myState" : 3,
"members" : [
{
"_id" : 0,
"name" : "mongoUbuntu:8004",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 467,
"optime" : Timestamp(1428501340, 1),
"optimeDate" : ISODate("2015-04-08T13:55:40Z"),
"lastHeartbeat" : ISODate("2015-04-08T14:48:56Z"),
"lastHeartbeatRecv" : ISODate("2015-04-08T14:48:55Z"),
"pingMs" : 0,
"syncingTo" : "mongoUbuntu:8001"
},
{
"_id" : 1,
"name" : "mongoUbuntu:8003",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 987,
"optime" : Timestamp(1428501340, 1),
"optimeDate" : ISODate("2015-04-08T13:55:40Z"),
"lastHeartbeat" : ISODate("2015-04-08T14:48:56Z"),
"lastHeartbeatRecv" : ISODate("2015-04-08T14:48:56Z"),
"pingMs" : 0,
"syncingTo" : "mongoUbuntu:8001"
},
{
"_id" : 2,
"name" : "mongoUbuntu:8002",
"health" : 1,
"state" : 3,
"stateStr" : "RECOVERING",
"uptime" : 3142,
"optime" : Timestamp(1428498901, 1),
"optimeDate" : ISODate("2015-04-08T13:15:01Z"),
"infoMessage" : "still syncing, not yet to minValid optime 55252e9b:37",
"self" : true
},
{
"_id" : 3,
"name" : "mongoUbuntu:8001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 3139,
"optime" : Timestamp(1428501340, 1),
"optimeDate" : ISODate("2015-04-08T13:55:40Z"),
"lastHeartbeat" : ISODate("2015-04-08T14:48:56Z"),
"lastHeartbeatRecv" : ISODate("2015-04-08T14:48:56Z"),
"pingMs" : 0,
"electionTime" : Timestamp(1428503596, 1),
"electionDate" : ISODate("2015-04-08T14:33:16Z")
},
{
"_id" : 4,
"name" : "mongoUbuntu:8010",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 3139,
"lastHeartbeat" : ISODate("2015-04-08T14:48:56Z"),
"lastHeartbeatRecv" : ISODate("2015-04-08T14:48:55Z"),
"pingMs" : 0
}
],
"ok" : 1
}
PHP Script
The following script is running when I terminate the current primary (an example that does not actually use GridFS but executes a query directly):
<?php
$conn = new \MongoClient(
    'mongodb://mongoUbuntu:8001,mongoUbuntu:8002,mongoUbuntu:8003',
    array('replicaSet' => 'rs1', 'readPreference' => \MongoClient::RP_PRIMARY_PREFERRED)
);
$db = $conn->bat; // db name: bat
$gridfs = $db->getGridFS();
while (true) {
    $documents = $db->execute('db.getCollection(\'fs.files\').count()');
    echo $documents['retval']."\n";
    sleep(1);
}
Issue
Until I terminate the current primary, the script prints the count of documents to the command line. When I terminate the current primary (pressing Ctrl+C on the respective mongod command line), the PHP script immediately throws an exception, in this case MongoConnectionException: No candidate servers found.
I used the script from GitHub to create a full debug log and saved it as the Gist mongo-php-log-automaticfailover-fails.
Do I have to add another option when creating the connection, or to the configuration of the replica set? If this is described in the MongoDB documentation or the documentation of the MongoDB PHP driver, where can I find it?
Yes, the driver does throw an exception. Technically it is the right thing to do, especially if you wish to know when the set fails over. What you need to do is catch the exception and retry, for example:
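A minimal sketch against the script from the question, reusing $db from above (it keeps the execute() call only to show the retry pattern; the eval problem itself is addressed below, and the sleep interval is an arbitrary choice):

while (true) {
    try {
        $documents = $db->execute('db.getCollection(\'fs.files\').count()');
        echo $documents['retval']."\n";
    } catch (MongoConnectionException $e) {
        // "No candidate servers found": the set is most likely electing a new primary
        echo "Retrying after: ".$e->getMessage()."\n";
    } catch (MongoCursorException $e) {
        // The connection to the old primary was dropped mid-operation
        echo "Retrying after: ".$e->getMessage()."\n";
    }
    sleep(1);
}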
Your biggest problem here is that for some reason you are evaling. Eval can only be run on the primary, so when a set fails over it has to wait for a newly elected primary, which can take up to 10 seconds.
However, it seems this facet of eval is not actually documented, though it is recognised: https://dba.stackexchange.com/questions/75852/mongodb-replicaset-reconfig-when-primary-and-majority-dont-exist (the answerer of that question works for 10gen, i.e. MongoDB Inc, and he does not dispute that eval does not work on secondaries). I am fairly certain this used to be in the documentation, though; that is where I first saw it.
This is quite well explained in the MongoDB 101 course for Python; the same applies to most languages. I personally just disallow the connection until a new primary is found (but then I don't eval stuff), but if you are solely reading you can remove the eval and do the count through the PHP driver, as sketched below. That should allow you to read without hindrance.
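A sketch of such a read-only loop without eval, assuming reads from a secondary are acceptable while the set elects a new primary (the secondaryPreferred read preference is my assumption, not part of the question):

<?php
$conn = new \MongoClient(
    'mongodb://mongoUbuntu:8001,mongoUbuntu:8002,mongoUbuntu:8003',
    array('replicaSet' => 'rs1', 'readPreference' => \MongoClient::RP_SECONDARY_PREFERRED)
);
$db = $conn->bat;

while (true) {
    try {
        // Ordinary count() instead of server-side eval; with secondaryPreferred it can
        // be answered by a secondary even while there is no primary.
        echo $db->selectCollection('fs.files')->count()."\n";
    } catch (MongoConnectionException $e) {
        echo "Waiting for the replica set: ".$e->getMessage()."\n";
    }
    sleep(1);
}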
Related
I restored a database from MongoDB server version 4.2.3 to MongoDB server version 4.2.7, and I now have a problem with ISODate values, shown below, when saving data to the database again:
{ "_id" : ObjectId("5ed4b193ed6fab6d2272c5c4"), "id" : 1, "timestamp" : ISODate("2020-05-31T05:59:59Z") } #new data run after change db (it must disappear for unique)
{ "_id" : ObjectId("5ed33bef1e499012bf35e412"), "id" : 1, "timestamp" : ISODate("2020-05-31T04:59:59.999Z") } #old data
{ "_id" : ObjectId("5ed4b193ed6fab6d2272c5c3"), "id" : 1, "timestamp" : ISODate("2020-05-31T04:59:59Z") } #new data run after change db (it must disappear for unique)
{ "_id" : ObjectId("5ed32de165269b416f6c7362"), "id" : 1, "timestamp" : ISODate("2020-05-31T03:59:59.999Z") } #old data
{ "_id" : ObjectId("5ed4b193ed6fab6d2272c5c2"), "id" : 1, "timestamp" : ISODate("2020-05-31T03:59:59Z") } #new data run after change db (it must disappear for unique)
{ "_id" : ObjectId("5ed31fcff2a5076cc947bc02"), "id" : 1, "timestamp" : ISODate("2020-05-31T02:59:59.999Z") } #old data
{ "_id" : ObjectId("5ed311bfb0d88300f81e90d2"), "id" : 1, "timestamp" : ISODate("2020-05-31T01:59:59.999Z") } #old data
I have a unique index on id and timestamp, but because the timestamp had microseconds the new values do not match the old ones exactly. Please give me a solution to keep microseconds in an ISODate.
PS: my code did not change. I use PHP and always format dates with 'Y-m-d\TH:i:s.uP'
MongoDB time resolution is 1 millisecond. Values with more precision will be truncated to millisecond precision.
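A small sketch of how that truncation plays out, assuming the newer mongodb extension's UTCDateTime type is in use (the date value here is invented for illustration):

<?php
// Sketch only: BSON dates keep milliseconds, not microseconds.
$php  = new DateTime('2020-05-31 04:59:59.123456', new DateTimeZone('UTC'));
$bson = new MongoDB\BSON\UTCDateTime($php);                  // stored as whole milliseconds

echo $php->format('Y-m-d\TH:i:s.uP'), "\n";                  // 2020-05-31T04:59:59.123456+00:00
echo $bson->toDateTime()->format('Y-m-d\TH:i:s.uP'), "\n";   // 2020-05-31T04:59:59.123000+00:00

// If values must round-trip exactly, format with millisecond precision ('v')
// on the PHP side so both sides agree on the stored value.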
I'm having problems updating a specific field in all the elements of an embedded array. I have the following structure in MongoDB:
{
"_id" : ObjectId("539c9e97cac5852a1b880397"),
"DocumentoDesgloseER" : [
{
"elemento" : "COSTO VENTA",
"id_rubroer" : "11",
"id_documento" : "45087",
"abreviatura" : "CV",
"orden" : "1",
"formula" : "Cuenta Contable",
"tipo_fila" : "1",
"color" : "#FFD2E9",
"sucursal" : "D",
"documentoID" : "0",
"TOTAL" : "55426.62",
},
{ ... MORE OF THE SAME ... }
],
"id_division" : "2",
"id_empresa" : "9",
"id_sucursal" : "37",
"ejercicio" : "2008",
"lastMonthNumber" : NumberLong(6),
}
I need to update the field "documentoID" to a specific value, like "20" for example, in all the elements of the "DocumentoDesgloseER" array. How can I do this?
I tried the following (with the $ positional operator) and it is not working:
$querySearch = array('id_division'=>'2', 'id_empresa'=>'9', 'id_sucursal'=>'37', 'ejercicio'=>'2008');
$queryUpdate = array('$set'=>array('DocumentoDesgloseER.$.documentoID'=>'20'));
Yii::app()->edmsMongoCollection('DocumentosDesgloseER')->update($querySearch,$queryUpdate);
By the way, I'm using Yii Framework to make the connection with Mongo. Any help or advice is welcome.
Thanks ;D!
Unfortunately, you can't currently use the positional operator to update all items in an array. There is a ticket open in the MongoDB JIRA about this issue.
There are two "solutions":
Change your schema so that the embedded documents live in a separate collection (probably not what you want).
The best you can do, if you don't want to change your schema, is to update each subdocument in PHP and then save the whole document, as sketched below.
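A minimal sketch of that second approach, assuming Yii's edmsMongoCollection() returns a standard MongoCollection from the legacy driver:

$collection = Yii::app()->edmsMongoCollection('DocumentosDesgloseER');

$querySearch = array('id_division' => '2', 'id_empresa' => '9',
                     'id_sucursal' => '37', 'ejercicio' => '2008');

foreach ($collection->find($querySearch) as $document) {
    // Rewrite the field in every embedded document...
    foreach ($document['DocumentoDesgloseER'] as $i => $sub) {
        $document['DocumentoDesgloseER'][$i]['documentoID'] = '20';
    }
    // ...then store the whole array back in one $set
    $collection->update(
        array('_id' => $document['_id']),
        array('$set' => array('DocumentoDesgloseER' => $document['DocumentoDesgloseER']))
    );
}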
Maybe this is a silly question, but I have this doubt anyway.
Please take a look at this query:
db.posts.find({ "blog": "myblog",
"post_author_id": 649,
"shares.total": { "$gt": 0 } })
.limit(10)
.skip(1750)
.sort({ "shares.total": -1, "tstamp_published": -1 });
Actually, I see this report in the MongoDB profiler:
mongos> db.system.profile.find({ nreturned : { $gt : 1000 } }).limit(10).sort( { millis : 1 } ).pretty();
{
"ts" : ISODate("2013-04-04T13:28:08.906Z"),
"op" : "query",
"ns" : "mydb.posts",
"query" : {
"$query" : {
"blog" : "myblog",
"post_author_id" : 649,
"shares.total" : {
"$gt" : 0
}
},
"$orderby" : {
"shares.total" : -1,
"tstamp_published" : -1
}
},
"ntoreturn" : 1760,
"nscanned" : 12242,
"scanAndOrder" : true,
"nreturned" : 1760,
"responseLength" : 7030522,
"millis" : 126,
"client" : "10.0.232.69",
"user" : ""
}
Now the question is: why is MongoDB returning 1760 documents when I have explicitly asked to skip 1750?
This is my current MongoDB version, running in a sharded cluster.
mongos> db.runCommand("buildInfo")
{
"version" : "2.0.2",
"gitVersion" : "514b122d308928517f5841888ceaa4246a7f18e3",
"sysInfo" : "Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41",
"versionArray" : [
2,
0,
2,
0
],
"bits" : 64,
"debug" : false,
"maxBsonObjectSize" : 16777216,
"ok" : 1
}
Now the question is: why is MongoDB returning 1760 documents when I have explicitly asked to skip 1750?
Because the server-side skip() does exactly that: it iterates over the first 1750 results and then gets 10 more (according to the limit).
As #devesh says, this is why pagination with very large offsets should be avoided, since MongoDB does not make effective use of an index for skip() or limit().
I think you have hit the bull's eye; this is the reason why the MongoDB documentation asks us to avoid large skips: http://docs.mongodb.org/manual/reference/method/cursor.skip/. Please have a look there, it explains the behaviour you are seeing. Using some other key with the $gt operator will be much faster, for example the datetime stamp of the last document on page 1, and then querying with $gt on that datetime.
The cursor.skip() method is often expensive because it requires the server to walk from the beginning of the collection or index to get the offset or skip position before beginning to return results.
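A rough sketch of that range-based approach with the PHP driver, simplified to a single sort key (tstamp_published); the connection and index details are assumptions, not taken from the question:

$conn  = new MongoClient();               // assumed connection to the cluster
$posts = $conn->mydb->posts;

// Page 1: no skip, just sort and limit
$page = $posts->find(array(
            'blog' => 'myblog',
            'post_author_id' => 649,
            'shares.total' => array('$gt' => 0),
        ))
        ->sort(array('tstamp_published' => -1))
        ->limit(10);

$lastTimestamp = null;
foreach ($page as $post) {
    $lastTimestamp = $post['tstamp_published'];   // remember the last value shown
}

// Page 2: instead of skip(), ask only for documents older than the last one seen.
// This lets an index on tstamp_published seek directly to the right place.
$nextPage = $posts->find(array(
            'blog' => 'myblog',
            'post_author_id' => 649,
            'shares.total' => array('$gt' => 0),
            'tstamp_published' => array('$lt' => $lastTimestamp),
        ))
        ->sort(array('tstamp_published' => -1))
        ->limit(10);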
I am creating an application that has several servers running at the same time, with several processes on each server, all of them processing data and making queries, updates and inserts. So a total of 35+ concurrent connections are being made at all times. These servers all send their data to a single mongodb server (mongod); I am not sharding my database at the moment. The problem is that I am being limited by my mongodb server: whenever I add more servers, the queries/updates/inserts run slower (they take more time). I was running this on mongohq.com, then I recently created my own Amazon server for mongod, but I am still getting nearly the same result. Listed below is my db.serverStatus({}). I am somewhat new to mongodb, but basically I need to know how to speed things up for the amount of concurrent operations going on with my mongo server; it needs to be able to handle a lot of requests. I know sharding is a possible way around this, but if at all possible can you list some other available solutions? Thanks.
> db.serverStatus({})
{
"host" : "ip-10-108-245-21:28282",
"version" : "2.0.1",
"process" : "mongod",
"uptime" : 11380,
"uptimeEstimate" : 11403,
"localTime" : ISODate("2011-12-13T22:27:56.865Z"),
"globalLock" : {
"totalTime" : 11380429167,
"lockTime" : 86138670,
"ratio" : 0.007569017717695356,
"currentQueue" : {
"total" : 0,
"readers" : 0,
"writers" : 0
},
"activeClients" : {
"total" : 35,
"readers" : 35,
"writers" : 0
}
},
"mem" : {
"bits" : 64,
"resident" : 731,
"virtual" : 6326,
"supported" : true,
"mapped" : 976,
"mappedWithJournal" : 1952
},
"connections" : {
"current" : 105,
"available" : 714
},
"extra_info" : {
"note" : "fields vary by platform",
"heap_usage_bytes" : 398656,
"page_faults" : 1
},
"indexCounters" : {
"btree" : {
"accesses" : 798,
"hits" : 798,
"misses" : 0,
"resets" : 0,
"missRatio" : 0
}
},
"backgroundFlushing" : {
"flushes" : 189,
"total_ms" : 29775,
"average_ms" : 157.53968253968253,
"last_ms" : 185,
"last_finished" : ISODate("2011-12-13T22:27:16.651Z")
},
"cursors" : {
"totalOpen" : 34,
"clientCursors_size" : 34,
"timedOut" : 0,
"totalNoTimeout" : 34
},
"network" : {
"bytesIn" : 89743967,
"bytesOut" : 59379407,
"numRequests" : 840133
},
"opcounters" : {
"insert" : 5437,
"query" : 8957,
"update" : 4312,
"delete" : 0,
"getmore" : 76,
"command" : 821388
},
"asserts" : {
"regular" : 0,
"warning" : 0,
"msg" : 0,
"user" : 0,
"rollovers" : 0
},
"writeBacksQueued" : false,
"dur" : {
"commits" : 29,
"journaledMB" : 0.147456,
"writeToDataFilesMB" : 0.230233,
"compression" : 0.9999932183619632,
"commitsInWriteLock" : 0,
"earlyCommits" : 0,
"timeMs" : {
"dt" : 3031,
"prepLogBuffer" : 0,
"writeToJournal" : 29,
"writeToDataFiles" : 2,
"remapPrivateView" : 0
}
},
"ok" : 1
}
What is surprising about more load generating higher response times from mongod? There are a few possible reasons for degradation of performance.
For example, every write to mongod uses a process-wide write lock. So the more servers you add, the more updates will be attempted (assuming the update load per server is roughly stable) and thus the longer the process will spend in the write lock. You can keep an eye on this through mongostat's "locked %" field (or by sampling serverStatus, as sketched below).
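A rough sketch of sampling the global lock ratio from PHP, reading the same "globalLock" block shown in the serverStatus output above (host and port are placeholders, and the ratio reported by this server version is cumulative since startup):

$conn  = new MongoClient('mongodb://ip-10-108-245-21:28282');   // placeholder host/port
$admin = $conn->selectDB('admin');

while (true) {
    $status = $admin->command(array('serverStatus' => 1));
    // Same field as "globalLock.ratio" in the output above
    printf("global lock ratio: %.4f\n", $status['globalLock']['ratio']);
    sleep(5);
}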
Additionally, if you use JS-powered functionality (map/reduce, db.eval(), etc.), these operations cannot be executed concurrently by mongod, because each mongod has a single JavaScript context (which is single-threaded).
If you want a more specific analysis then you might want to consider posting exact numbers. How many reads and writes per second, what are the query plans for the queries you execute, what effect does adding an additional app server have on your overall database performance, etc.
I was using a PHP mongo command:
$db->command(array("create" => $name, "size" => $size, "capped" => true, "max" => $max));
My collections grew way past their supposed capped limits, so I put in a fix:
$db->createCollection($name, true, $size, $max);
Currently, the counts are so low I can't tell whether the 'fix' worked.
How can you tell if a collection is capped, either from the shell or PHP? I wasn't able to find this information in the system.namespaces.
Turns out there's also the isCapped() function.
db.foo.isCapped()
In the shell, use db.collection.stats(). If a collection is capped:
> db.my_collection.stats()["capped"]
1
If a collection is not capped, the "capped" key will not be present.
Below are example results from stats() for a capped collection:
> db.my_coll.stats()
{
"ns" : "my_db.my_coll",
"count" : 221,
"size" : 318556,
"avgObjSize" : 1441.4298642533936,
"storageSize" : 1000192,
"numExtents" : 1,
"nindexes" : 0,
"lastExtentSize" : 1000192,
"paddingFactor" : 1,
"flags" : 0,
"totalIndexSize" : 0,
"indexSizes" : {
},
"capped" : 1,
"max" : 2147483647,
"ok" : 1
}
This is with MongoDB 1.7.4.
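If you need the same check from PHP, one possibility (an untested sketch; database and collection names are examples) is to run the collStats command, which backs the shell's stats() helper:

$conn  = new MongoClient();                 // example connection
$db    = $conn->selectDB('my_db');
$stats = $db->command(array('collStats' => 'my_coll'));

// "capped" is 1 for capped collections and absent otherwise,
// so treat a missing key as "not capped".
$isCapped = !empty($stats['capped']);
var_dump($isCapped);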
From the shell:
db.system.namespaces.find()
You'll see a list of all collections and indexes for the given db. If a collection is capped, that will be indicated.
For PHP:
$collection = $db->selectCollection($name);
$result = $collection->validate();
$isCapped = isset($result['capped']);