MongoDB Rename collection in Sharding - php

How can I rename a collection in a sharded MongoDB cluster? I tried it on my production server and it threw the error shown below, included here for reference.
Our requirement is to add a few more fields, with values, to an existing collection. In production we cannot disrupt traffic to that collection, since requests from members arrive continuously. So our plan was to populate all the records from SQL into MongoDB in a second collection, then rename that collection and make it the production one. That was the plan, but we cannot take it any further; the rename fails with the following error:
db.clientdetails.renameCollection( "clientdetails_bkup" );
{
    "assertion" : "You can't rename a sharded collection",
    "assertionCode" : 13138,
    "errmsg" : "db assertion failure",
    "ok" : 0
}
If this is not possible with MongoDB sharding, please share a suggestion or an alternative way to solve the issue. How should we proceed?

There is no way to rename a sharded collection.
You can copy all the docs to a new collection (a sketch of this is below).
Or create multiple collections based on a weekly/periodic date and use one as the current collection. Have your application always reference the current collection by name and switch the name at each period break.
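A minimal sketch of the copy approach, assuming the mongodb/mongodb library; the database and collection names are illustrative, so adjust them to your setup. It streams the documents from the sharded collection into a new one in batches:

<?php
// Hypothetical sketch: copy every document from the existing sharded collection
// into a new collection in batches. Database and collection names are assumptions.
require 'vendor/autoload.php';

$client = new MongoDB\Client('mongodb://localhost:27017');
$db     = $client->mydb;

$source = $db->clientdetails;        // existing sharded collection
$target = $db->clientdetails_bkup;   // shard this collection beforehand if it should stay sharded

$batch = [];
foreach ($source->find() as $doc) {
    $batch[] = $doc;
    if (count($batch) === 1000) {    // insert in batches of 1000 documents
        $target->insertMany($batch);
        $batch = [];
    }
}
if ($batch) {
    $target->insertMany($batch);
}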

This is one of the worst limitations of MongoDB: you cannot rename a sharded collection.
The MongoDB documentation says:
db.collection.renameCollection() is not supported on sharded collections.
We had the same situation. We created a temporary collection and loaded the data into it, dropped the existing sharded collection, created a new collection with the new name, and then loaded the data back into the newly named collection. This process was very time-consuming.
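If the newly named collection needs to stay sharded, it has to be sharded again before the data is reloaded. A minimal sketch of that step, assuming the mongodb/mongodb library, an illustrative namespace mydb.clientdetails_new and a hashed _id shard key (your shard key will differ); run it against a mongos router:

<?php
// Hypothetical sketch: shard the collection under its new name via the
// shardCollection admin command. Namespace and shard key are assumptions.
require 'vendor/autoload.php';

$client = new MongoDB\Client('mongodb://mongos-host:27017');

$client->selectDatabase('admin')->command([
    'shardCollection' => 'mydb.clientdetails_new',   // "<database>.<collection>"
    'key'             => ['_id' => 'hashed'],        // use your real shard key here
]);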

Related

Doctrine MongoDB ODM does not persist

I'm trying to start using MongoDB with Doctrine MongoDB ODM 1.1.3 and Laravel 5.4. Everything had been going fine until I manually removed the database called "doctrine" (the default database name, I guess) in order to clean up the rubbish in it; basically I just wanted to remove the database and was hoping that Doctrine would create a new one. Now when I call
$mgr->persist($divRoot);
$mgr->flush();
It assigns an ID to the $divRoot object but doesn't persist it, i.e. when I then call the findAll() method on the repository, it returns nothing. And it doesn't create any databases anymore. The fields of $divRoot are different every time I try to save it. I've got really stuck, please help.
UPDATE 1
If I initialize a new DocumentManager specifying a new path to the documents (AnnotationDriver::create($documents)), the ODM works normally persisting and retrieving the documents.
I've figured out what was wrong. I was working only with embedded documents, whereas there must be at least one root Document. Originally all of them were mapped as Documents, which is why I was able to persist them; it had nothing to do with the database removal.
So I was trying to persist a composite consisting of EmbeddedDocuments only.
The solution was to create a root wrapper for the composite, mapped as a Document (see the sketch below).
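A minimal sketch of that fix, assuming Doctrine MongoDB ODM annotations; the class, field and collection names (Division, DivisionTree, division_trees) are illustrative, not from the question:

<?php
// Hypothetical sketch: embedded documents cannot be persisted on their own,
// so wrap the composite in a root class mapped as a Document.
use Doctrine\ODM\MongoDB\Mapping\Annotations as ODM;

/** @ODM\EmbeddedDocument */
class Division
{
    /** @ODM\Field(type="string") */
    public $name;
}

/** @ODM\Document(collection="division_trees") */
class DivisionTree
{
    /** @ODM\Id */
    public $id;

    /** @ODM\EmbedOne(targetDocument="Division") */
    public $root;
}

// Persist the root document; the embedded composite is saved along with it.
$tree = new DivisionTree();
$tree->root = new Division();
$tree->root->name = 'Top level';

$mgr->persist($tree);   // $mgr is the DocumentManager from the question
$mgr->flush();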

Mongodb Multiple Updates with PHP

I'm new to MongoDB (using PHP) and, being used to an RDBMS, I have what may be a newbie question. I have a collection of "pages" that have a field called "tags" in which I keep a series of tags: "happy, sad, angry, irtated".
Now I have another collection called... let's say "users", and I want each user to be able to specify which tags are important to them... so this collection also has a field called "tags" in which I would have maybe "Happy, and irtated".
Now... here comes the question: let's say I wanted to correct the spelling of "irtated" in both collections. In the RDBMS world I would normally have referenced these from a single table and then done an inner join, so that changing the value in one spot would cascade everywhere... Or let's say I wanted to remove a tag from the system... say I didn't want "Happy" to be used anymore and I wanted to remove it from all the collections where it exists...
Thoughts?
Why are you using MongoDB instead of an RDBMS? Most probably because you want higher speed. In MongoDB, related data is kept together in one place (on the storage device), so it is fast to retrieve; that is also why the same data ends up being kept in different places (data redundancy). But in a case like yours it means you have to spend more time on application code to do something that an RDBMS handles with a single cascading update. Both RDBMS and NoSQL have their own pros and cons, and you never get the benefits of both from a single system.
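To answer the practical part of the question, here is a minimal sketch, assuming the mongodb/mongodb library and MongoDB 3.6+ (for arrayFilters); the collection and tag names follow the question, while the database name and connection details are illustrative:

<?php
// Hypothetical sketch: fix the spelling of a tag inside the "tags" array of
// both collections, and remove a tag everywhere it appears.
require 'vendor/autoload.php';

$client = new MongoDB\Client('mongodb://localhost:27017');
$db     = $client->mydb;   // database name is an assumption

foreach (['pages', 'users'] as $collectionName) {
    $collection = $db->selectCollection($collectionName);

    // Rename "irtated" to "irritated" in every matching document.
    $collection->updateMany(
        ['tags' => 'irtated'],
        ['$set' => ['tags.$[t]' => 'irritated']],
        ['arrayFilters' => [['t' => 'irtated']]]
    );

    // Remove the tag "happy" from every document that contains it.
    $collection->updateMany(
        ['tags' => 'happy'],
        ['$pull' => ['tags' => 'happy']]
    );
}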

MongoDB Create Collection like MySQL CREATE TABLE

I am using MongoDB as a replacement for MySQL in Zend Framework 2. I was wondering, is there anything like a CREATE TABLE statement in MongoDB to create collections programmatically, preferably within Zend Framework 2? Or what is the approach one needs to take when creating databases and collections while working with MongoDB?
Collections are created automatically in MongoDB as soon as you try to save a document to them.
If you really, really want to have an empty collection, insert any document into it to create it and then delete that document. But generally it's best to just let MongoDB do its thing.
I hope this helps.
In MongoDB, a collection is created implicitly when you first insert a document into it. Nevertheless, you can create a collection explicitly with MongoDB by using the command db.createCollection(). This command will also allow you to pass options specifying the nature of the collection, such as whether it is a capped collection, what sort of validation it should have, indexing options, etc. The syntax for MongoDB 3.2 is as follows:
db.createCollection(<name>, {
    capped: <boolean>,
    autoIndexId: <boolean>,
    size: <number>,
    max: <number>,
    storageEngine: <document>,
    validator: <document>,
    validationLevel: <string>,
    validationAction: <string>,
    indexOptionDefaults: <document>
});
For more information, you can visit this page in the documentation.
Regarding database creation, as of MongoDB 3.2 there is no method to explicitly create a database. So in order to create a database, you need to insert a document into a collection inside it, or create a collection directly using db.createCollection().
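From PHP, a minimal sketch assuming the mongodb/mongodb library (which can also be wired into a Zend Framework 2 service); the database name, collection names and options are illustrative:

<?php
// Hypothetical sketch: implicit creation on first insert vs. explicit creation
// with db.createCollection-style options.
require 'vendor/autoload.php';

$client = new MongoDB\Client('mongodb://localhost:27017');
$db     = $client->selectDatabase('myapp');

// Implicit: the collection appears as soon as the first document is saved.
$db->articles->insertOne(['title' => 'Hello']);

// Explicit: e.g. a capped collection for logs.
$db->createCollection('logs', [
    'capped' => true,
    'size'   => 1048576,   // 1 MB
    'max'    => 1000,      // keep at most 1000 documents
]);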

Loading a list page in Symfony 1.4 taking ~10 secs on new server

I moved from an old server running CentOS at a managed hoster to a new one running Ubuntu on AWS.
Since the move I've noticed that the page that loads a list of items is now taking ~10-12 seconds to render (sometimes even up to 74 seconds). This was never an issue on the old server. I used New Relic to look at what was taking so long and found that sfPHPView->render() was taking 99% of the time. According to New Relic there are approximately 500 calls to the DB to render the page.
The page is a list of ideas, with each idea a row. I use the $idea->getAccounts()->getSlug() ability of Doctrine 1.2, where Accounts is another table linked to the idea as a foreign relation. This is called several times for each idea row. A partial is not currently used to hold the code for each row element.
1. Is there a performance advantage to using a partial for the row element? (Ignoring for now the benefit of code maintainability.)
2. What is the best practice for referencing data connected via a foreign relation? I'm surprised that a call is made to the DB every time $idea->getAccounts()->getSlug() is called.
3. Is there anything obvious in Ubuntu that would otherwise be making sfPHPView->render() run slower than on CentOS?
I'll give you my thoughts:
1. When using a partial for the row element, it's easier to cache, because you can fine-tune the caching per partial.
2. Because you don't explicitly define the relation when building the query, Doctrine won't hydrate the elements with the relation. You can manually join the relations you want hydrated; then your call to $idea->getAccounts()->getSlug() won't perform a new query every time:
$q = $this->createQuery();
$q->leftJoin('Idea.Account');
3. No idea for point 3.
PS: regarding point 2, it's very common to have lots of queries in the admin generator when you want to display information from a relation in the list view. The solution is to define the table method used to retrieve the data:
In your generator.yml:
list:
  table_method: retrieveForBackendList
In the IdeaTable:
public function retrieveForBackendList(Doctrine_Query $q)
{
    $rootAlias = $q->getRootAlias();
    $q->leftJoin($rootAlias . '.Account');

    return $q;
}
Thought I would add what else I did to improve the page load speed, in addition to jOk's recommendations.
In addition to the explicit joins I did the following:
1. Switched to returning a DQL query object, which was then passed to Doctrine's paginator
2. Changed from using include_partial() to PHP's include(), which reduced the object creation time of include_partial()
3. Hydrated the data from the DB as an array instead of an object
4. Removed some foreach loops by doing more leftJoins in the DB
5. Used result & query caching to reduce the number of DB calls
6. Used view caching to reduce PHP template generation time
Interestingly, doing 1 to 4 made 5 and 6 more effective and easier to implement. I think there is something to be said for improving your code before jumping in with caching.
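A minimal sketch of points 1 and 3, assuming Doctrine 1.2 and assuming the relation from Idea to Account is named Account (the model names come from the answers above, the rest is illustrative):

// Hypothetical sketch: build one query with the join and hydrate plain arrays
// instead of Doctrine_Record objects.
$q = Doctrine_Query::create()
    ->from('Idea i')
    ->leftJoin('i.Account a');

// Array hydration is much cheaper than object hydration.
$ideas = $q->execute(array(), Doctrine_Core::HYDRATE_ARRAY);

foreach ($ideas as $idea) {
    echo $idea['Account']['slug'];   // no extra query per row
}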

How to store & deploy chunks of relational data?

I have a Postgres DB containing some configuration data spread over several tables.
These configurations need to be tested before they get deployed to the production system.
Now I'm looking for a way to
- store single configuration objects with their child entities in SVN, and
- deploy these objects with their child entities to different target DBs.
The point is that the relations between the objects need to be maintained somehow without the actual IDs, which would cause conflicts when copying the data to another DB.
For example, if the database contained data about music artists, albums and tracks with a simple tree schema like artist -> has albums -> has tracks, then the solution I'm looking for would allow exporting e.g. one selected album with all of its tracks (or one artist with all albums and all tracks) into one file which could be stored in SVN and later be 'deployed' to any DB that has the same schema.
I was thinking of implementing something myself, e.g. a config file describing the dependencies and an export script that replaces IDs with PHP variables and generates some kind of PHP/SQL INSERT or UPDATE script.
But then I thought it would be really silly not to ask first and double-check whether something like this already exists :o)
This is one of the arguments for natural keys. An album has an artist and is made up of tracks. No "id" is necessary to link these pieces of information together; just use the names. A Perl-esque example of a data file:
"Bob Artist" => {
"First Album" => ["My Best Song", "A Slow Song",],
"Comeback Album" => ["One-Hit Wonder", "ORM Blues",],
}, "Noname Singer" => {
"Parse This Record!" => ["Song Named 'D'",],
}
To add the data, just walk the tree, creating INSERT statements based on each level of parent data; if you must have an id, use "RETURNING id" (a PostgreSQL extension) at the end of each INSERT statement to get the auto-generated id to pass to the next level down in the tree.
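A minimal sketch of that tree walk, assuming PDO with the pgsql driver and an illustrative schema (artist, album and track tables with artist_id/album_id foreign keys); none of these names come from an actual schema in the question:

<?php
// Hypothetical sketch: walk the natural-key tree and chain the auto-generated
// ids from level to level with PostgreSQL's RETURNING clause.
$pdo = new PDO('pgsql:host=localhost;dbname=target', 'user', 'pass');

$data = [
    'Bob Artist' => [
        'First Album' => ['My Best Song', 'A Slow Song'],
    ],
];

foreach ($data as $artistName => $albums) {
    $stmt = $pdo->prepare('INSERT INTO artist (name) VALUES (?) RETURNING id');
    $stmt->execute([$artistName]);
    $artistId = $stmt->fetchColumn();

    foreach ($albums as $albumName => $tracks) {
        $stmt = $pdo->prepare('INSERT INTO album (artist_id, name) VALUES (?, ?) RETURNING id');
        $stmt->execute([$artistId, $albumName]);
        $albumId = $stmt->fetchColumn();

        foreach ($tracks as $trackName) {
            $pdo->prepare('INSERT INTO track (album_id, name) VALUES (?, ?)')
                ->execute([$albumId, $trackName]);
        }
    }
}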
I second Matthew's suggestion. As a refinement of that concept, you may want to create "derived natural keys", e.g. "bob_artist" for "Bob Artist". The derived natural key would be well suited as a filename when storing the record in SVN, for example.
The derived natural key should be generated such that any two different natural keys result in different derived natural keys. That way conflicts can't happen between independent datasets.
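A minimal sketch of such a derived natural key in PHP; the slug rules and the use of a short hash to guarantee uniqueness are illustrative choices, not something prescribed by the answer above:

<?php
// Hypothetical sketch: turn a natural key into a filesystem/SVN-friendly name,
// and append a short hash of the original so two different natural keys can
// never collide even when their slugs do.
function derivedNaturalKey(string $naturalKey): string
{
    $slug = strtolower(trim(preg_replace('/[^A-Za-z0-9]+/', '_', $naturalKey), '_'));
    $hash = substr(sha1($naturalKey), 0, 8);   // disambiguates e.g. "Bob Artist" vs "Bob-Artist"

    return $slug . '_' . $hash;
}

echo derivedNaturalKey('Bob Artist');   // "bob_artist_" followed by 8 hex characters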
The concept of Rails migrations seems relevant, although it mainly aims at performing schema updates: http://guides.rubyonrails.org/migrations.html
The idea has been ported to PHP under the name Ruckusing, but it seems to support only MySQL at this point: http://code.google.com/p/ruckusing/wiki/BigPictureOverview
Doctrine also provides migrations functionality, but it again seems to focus on schema transformations rather than on migrating or deploying data: http://www.doctrine-project.org/projects/migrations/2.0/docs/en
Possibly Ruckusing or Doctrine could be used (abused?), or if needed modified/extended, to do the job?
