I am using C# with the MongoDB driver, and I am planning to implement document versioning in the application itself using the approach below.
The following operations would all run inside a transaction:
An insert for a document (newDocument) comes in with PrimaryKey = PolicyId
Get the CurrentVersion from the database using the PolicyId
Assign the version number of newDocument as CurrentVersion + 1
Delete the existing document and insert newDocument into the collection
Also insert newDocument into a history_collection or archive_collection
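The steps above could be sketched with the C# driver roughly like this. This is only a sketch, not a finished implementation: the database and collection names are hypothetical, multi-document transactions require MongoDB 4.0+ running as a replica set, and the delete + insert pair is collapsed into a single ReplaceOne with upsert.

```csharp
using System.Threading.Tasks;
using MongoDB.Driver;

public class Policy
{
    public string PolicyId { get; set; }
    public int Version { get; set; }
    // ... other fields
}

public static class PolicyVersioning
{
    public static async Task SaveNewVersionAsync(IMongoClient client, Policy newDocument)
    {
        var db = client.GetDatabase("insurance");                   // hypothetical names
        var policies = db.GetCollection<Policy>("policies");
        var history = db.GetCollection<Policy>("policies_history");

        using var session = await client.StartSessionAsync();
        await session.WithTransactionAsync(async (s, ct) =>
        {
            // 1. Get the current version for this PolicyId.
            var current = await policies
                .Find(s, p => p.PolicyId == newDocument.PolicyId)
                .FirstOrDefaultAsync(ct);

            // 2. Assign CurrentVersion + 1 to the new document.
            newDocument.Version = (current?.Version ?? 0) + 1;

            // 3. Replace the existing document (upsert covers the first insert).
            await policies.ReplaceOneAsync(s, p => p.PolicyId == newDocument.PolicyId,
                newDocument, new ReplaceOptions { IsUpsert = true }, ct);

            // 4. Keep a copy in the history collection.
            await history.InsertOneAsync(s, newDocument, cancellationToken: ct);
            return true;
        });
    }
}
```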
Do you see any drawbacks in the above? Thanks.
I have a collection in MongoDB which I am indexing into Elasticsearch. I am doing this in a C# process. The collection has 100 million documents, and for each document, I have to query other documents in order to denormalise into the Elasticsearch index.
This all takes time. Reading from MongoDB is the slow part (indexing is relatively quick). I am batching the data from MongoDB as efficiently as I can, but the process takes over 2 days.
This only has to happen when the mapping in Elasticsearch changes, but that has happened a couple of times over the last month.
Are there any ways of improving the performance for this?
Maybe you don't need to relaunch the import from scratch (I mean the import from MongoDB) when you change mappings. Read about the Elasticsearch Reindex API.
When you need to change a mapping, you must:
Create a new index with the new mapping
Reindex the data from the old index into the new index using the built-in Elasticsearch feature.
After this, the old documents will be indexed with the new mapping inside the new index, and the built-in reindex in Elasticsearch will work much more quickly than an import from MongoDB via the HTTP API.
If you use reindex, don't forget the wait_for_completion parameter (described in the documentation): setting it to false will run the reindex in the background.
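Concretely, the reindex request looks roughly like this (the index names are placeholders for your old and new indices):

```
POST _reindex?wait_for_completion=false
{
  "source": { "index": "old_index" },
  "dest":   { "index": "new_index" }
}
```

With wait_for_completion=false, Elasticsearch returns a task ID immediately and runs the reindex in the background; you can then check progress through the Tasks API.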
Will this approach solve your problem?
My objective is to make Orchard CMS work with MongoDB.
I looked on Google for resources on how to start with the integration, but I didn't find any documentation on this.
Has anyone already made Orchard work with a NoSQL DB?
What is the first step in order to change the default database from SQL Server to MongoDB?
I read this documentation:
http://weblogs.asp.net/bleroy/the-shift-how-orchard-painlessly-shifted-to-document-storage-and-how-it-ll-affect-you
I also read in the release notes for 1.8 that:
Performance improvements by unleashing the power of the document db architecture built in Orchard
But I can't figure out what that means exactly.
On the Orchard UserVoice, there are already 43 votes for a MongoDB extension:
https://orchard.uservoice.com/forums/50435-general/suggestions/2262572-mongodb
Orchard 1 stores a lot of its data encoded as XML in a special column of its content item table. That is all it means. The database still has to be relational, and still has to work with NHibernate. That excludes MongoDB.
Orchard 2's data story on the other hand is built for document storage, or more precisely it separates storage from querying, and can work with pretty much anything for the storage part.
So MongoDB as a content store for Orchard 1 will never happen, but it will for Orchard 2.
I have to insert many documents into a MongoDB collection, using the new C# 2.0 driver. Does it make any difference whether I use collection.InsertManyAsync(...) or collection.BulkWriteAsync(...), particularly regarding performance?
From what I understand from MongoDB documentation, an insert with an array of documents should be a bulk operation under the hood. Is that correct?
Thanks for your help.
I found the answer by looking at the driver source code: InsertManyAsync internally uses BulkWriteAsync.
So using InsertManyAsync is the same as writing:
List<BsonDocument> documents = ...
await collection.BulkWriteAsync(documents.Select(d => new InsertOneModel<BsonDocument>(d)));
Obviously, if all the operations are inserts, InsertManyAsync should be used.
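For example, a minimal sketch of the InsertManyAsync form (the connection string, database, and collection names here are hypothetical):

```csharp
using System.Linq;
using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://localhost:27017");
var collection = client.GetDatabase("test").GetCollection<BsonDocument>("items");

var documents = Enumerable.Range(0, 1000)
    .Select(i => new BsonDocument("n", i))
    .ToList();

// One call; the driver turns this into a single bulk write under the hood.
await collection.InsertManyAsync(documents);
```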
I need to know when a new record has been added to a particular table in my Windows application. The table might be manipulated by different applications.
For now I'm using a Timer control and querying to see if there is a new record (I copy the record's content elsewhere in my application and then delete the record to avoid processing it twice), but of course this is not a clean way to do this.
Is there something like an event or something better than my approach?
I'm using Entity Framework 6.
Update: I've read about SqlDependency, but I don't know if it can be used together with Entity Framework.
You can give Linq2Cache a try to leverage SqlDependency; last I heard, EF6 had cleaned up its act and now formulates queries that are compatible with Query Notifications.
Here's an existing Stack Overflow question about doing that with the SqlDependency class.
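If you drop down to plain ADO.NET for the watch query, a minimal SqlDependency sketch looks roughly like this. Hedged assumptions: the table and column names are placeholders, the database must have Service Broker enabled, and the query has to follow the Query Notifications rules (explicit column list, two-part table name, no SELECT *):

```csharp
using System;
using System.Data.SqlClient;

public class TableWatcher
{
    private readonly string _connectionString;

    public TableWatcher(string connectionString)
    {
        _connectionString = connectionString;
        SqlDependency.Start(_connectionString); // call once per application
    }

    public void Watch()
    {
        using var connection = new SqlConnection(_connectionString);
        // Query Notifications require an explicit column list and a two-part name.
        using var command = new SqlCommand("SELECT Id, Name FROM dbo.MyTable", connection);

        var dependency = new SqlDependency(command);
        dependency.OnChange += (sender, e) =>
        {
            Console.WriteLine($"Change detected: {e.Info}");
            Watch(); // a subscription fires only once, so re-subscribe
        };

        connection.Open();
        using var reader = command.ExecuteReader(); // executing registers the subscription
        while (reader.Read()) { /* consume the current rows */ }
    }
}
```

Note that each notification fires only once, so the handler has to re-run the query to re-subscribe, as shown above.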
Alternatively, you can cache the last record ID in memory or locally to your program (in a file), or write the last processed record ID to a database table, and then search for anything more recent than that.
How does one clear a RavenDB database of all data while keeping its structure? I have little experience with RavenDB and NoSQL databases, so I must ask for assistance. Do I have to create a .NET interface for managing the database, or can this operation be performed from the web interface?
Raven Studio http://localhost:8080/raven/studio.html
If I have understood the structure correctly, there are documents that need to be removed. Can they be removed without damaging the database structure and/or involving .NET integration?
Thank you.
A RavenDB database doesn't have a "database structure". All documents in RavenDB are stored as JSON with a metadata element that describes the name of the corresponding CLR type in .NET.
You can just delete all document collections, or you could even recreate the database. The latter would require you to recreate all indexes. All of this can be done from the web interface.
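If you do want to do it from .NET rather than the Studio, a hedged sketch for the older RavenDB client versions (2.x/3.x, matching the Studio URL in the question) is to delete by query against the built-in Raven/DocumentsByEntityName index; the server URL and database name below are placeholders:

```csharp
using Raven.Abstractions.Data;
using Raven.Client.Document;

var store = new DocumentStore
{
    Url = "http://localhost:8080",
    DefaultDatabase = "MyDatabase" // placeholder
};
store.Initialize();

// An empty IndexQuery against the documents-by-entity-name index
// matches every document in every collection.
store.DatabaseCommands.DeleteByIndex(
    "Raven/DocumentsByEntityName",
    new IndexQuery(),
    allowStale: false);
```

Indexes are not documents, so they survive this delete; only the document data is removed.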