My objective is to make Orchard CMS work with MongoDB.
I looked on Google for resources on how to start with the integration, but I didn't find any documentation on this.
Has anyone already made Orchard work with a NoSQL DB?
What is the first step to change the default database from SQL Server to MongoDB?
I read this documentation:
http://weblogs.asp.net/bleroy/the-shift-how-orchard-painlessly-shifted-to-document-storage-and-how-it-ll-affect-you
I also read in the release notes for 1.8 that
Performance improvements by unleashing the power of the document db architecture built in Orchard
but I can't figure out what that means exactly.
On the Orchard UserVoice, there are already 43 votes for a MongoDB extension:
https://orchard.uservoice.com/forums/50435-general/suggestions/2262572-mongodb
Orchard 1 stores a lot of its data encoded as XML in a special column of its content item table. That is all it means. The database still has to be relational, and it still has to work with NHibernate. That excludes MongoDB.
Orchard 2's data story on the other hand is built for document storage, or more precisely it separates storage from querying, and can work with pretty much anything for the storage part.
So MongoDB as a content store for Orchard 1 will never happen, but it will for Orchard 2.
Related
I want to use Entity Framework with Elasticsearch.
I saw this article: MVC Application with Entity Framework and Elasticsearch.
But as I understand it, I would need two databases (MS SQL + Elastic), and the article explains how to translate the data between them.
I would save the data in MS SQL and do the searching on Elastic.
So all the data would be stored twice, which seems like a waste of storage...
Is there any direct way to do this?
thanks
You can use Entity Framework with Elasticsearch by utilising the ElasticsearchCRUD API.
This article clearly explains the steps to do so.
P.S.: I'd rather not copy/paste the steps here, as it might look redundant.
Yes, you understood right: you would need to use two different sources.
There is no direct way to use Elasticsearch with EF; you would need to write your own custom logic to fit the database and Elasticsearch together.
If you ask why, the answer is that a relational database and Elasticsearch are simply different.
First of all, Elasticsearch is a document database: you save whole objects, whereas in a relational database you can split items across multiple tables. In ES it is preferable to save everything as one document (you can still use nested objects, but you will not be able to join).
Secondly, search queries are totally different in SQL and Elastic, so sometimes only you can decide which source should be used for a given search. To query Elasticsearch you can use the NEST package, but you will need to learn ES queries and the indexing part, since depending on the analysis you will get different results.
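To make the "save it as one document" point concrete, here is a minimal sketch (plain Python with made-up table rows, no ES client involved) of denormalizing two relational tables into the single nested document you would index in ES:

```python
# Rows as they might come out of two relational tables (hypothetical data).
products = [{"id": 1, "name": "Widget"}]
comments = [
    {"product_id": 1, "author": "alice", "text": "Great!"},
    {"product_id": 1, "author": "bob", "text": "Broke in a week."},
]

def denormalize(products, comments):
    """Join related rows into one nested document per product,
    so no join is needed at query time."""
    docs = []
    for p in products:
        doc = dict(p)
        doc["comments"] = [
            {"author": c["author"], "text": c["text"]}
            for c in comments
            if c["product_id"] == p["id"]
        ]
        docs.append(doc)
    return docs

docs = denormalize(products, comments)
print(docs[0]["name"], len(docs[0]["comments"]))  # Widget 2
```

This is the shape of the "custom logic" mentioned above: your application produces these merged documents and pushes them into ES, while the relational tables remain the system of record.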
I currently have SphinxSE running against an MS SQL server, and it has worked great for the past few years. The table Sphinx indexes has recently grown a lot, and we need to leverage the speed provided by moving the table to Azure Table Storage.
What options do I have to allow Sphinx to index this table from Azure? I know it supports MS SQL, but Azure Table Storage is a different beast. I have also found that Sphinx supports XML input, but it would be very hard to export all of this data into a file to be read every 5 minutes. Has anyone conquered this issue using Azure Table Storage?
thanks
Well, XMLpipe (or even TSVpipe) would be the way to connect to the table store, lacking a native SQL-based driver.
... but yes, a simple implementation might well load all the data, which is actually what you are possibly doing with MS SQL; it's just that the data is small enough that it's reasonably practical.
Loading all the data from MS SQL would be similarly "expensive".
So really your question is more about how to index a 'large' dataset: you need some sort of incremental update system, so you only need to load the 'changes'. (The fact that you're running against a Storage Table then becomes just a trivial detail of the implementation.)
One concept you'll see quite a bit in Sphinx is the so-called 'main'+'delta' scheme:
http://www.sphinxconsultant.com/sphinx-search-delta-indexing/
That works quite well with XMLpipe too, so it can work with Azure. You just need to come up with a couple of scripts: one to download a large quantity of data (to initially build the 'main' index; it doesn't get used often),
... then a second script to fetch only the new records, by running some sort of query.
In short, you just need some sort of script to stream from Azure and output either XML or TSV.
https://www.google.com/search?q=Azure+Table+Storage+stream
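As a rough sketch of what such a script might emit, here is a minimal Python example producing Sphinx xmlpipe2 output; the `records` list is a hypothetical stand-in for whatever the Azure Table Storage client streams back, and the field names are made up:

```python
from xml.sax.saxutils import escape

# Hypothetical records; in practice these would be streamed from Azure Table Storage.
records = [
    {"id": 1, "title": "First post", "body": "Hello world"},
    {"id": 2, "title": "Second post", "body": "More content"},
]

def to_xmlpipe2(records):
    """Emit a Sphinx xmlpipe2 stream for the given records."""
    out = ['<?xml version="1.0" encoding="utf-8"?>', "<sphinx:docset>"]
    out.append("<sphinx:schema>"
               '<sphinx:field name="title"/>'
               '<sphinx:field name="body"/>'
               "</sphinx:schema>")
    for r in records:
        out.append('<sphinx:document id="%d">' % r["id"])
        out.append("<title>%s</title>" % escape(r["title"]))
        out.append("<body>%s</body>" % escape(r["body"]))
        out.append("</sphinx:document>")
    out.append("</sphinx:docset>")
    return "\n".join(out)

print(to_xmlpipe2(records))
```

The 'delta' variant of the script is the same thing with a filter on the query (e.g. only rows newer than the last indexed id or timestamp) so that only the changes are streamed.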
I am currently working on an ASP.NET MVC 4 web application. As part of the application, users can log in, browse the site, etc. The data for the site is stored in a SQL Server database and contains user information among other things.
A new feature will allow all users to add comments on particular products shown on the site. As there could be hundreds of thousands of customers and thousands of products, this is a lot of data.
So I have started looking at a NoSQL option for this data, rather than storing it in the relational SQL Server database. I have been looking at MongoDB. My first question: is this the correct approach?
Next topic: how easily does C#/.NET integrate with a MongoDB database? I haven't worked with it before, so my knowledge in the area is poor. Ideally, I would be querying (for want of the correct term) the MongoDB for comments based on a particular product's identifier. I presume I can write a query of some style to get this data.
My next question is around the redundancy of a MongoDB. With SQL Server, I have a failover server if an issue occurs with the main DB server. Is there a similar concept with Mongo, and how does it work? I am considering running Mongo on the same server as the SQL Server database. The data in the MongoDB will not be mission critical, but the data in SQL Server is. My web application will run on multiple servers in a load-balanced environment.
Can a MongoDB be easily moved to another server, i.e. how well can it be scaled out? And can data from it be copied to another MongoDB?
I appreciate my questions are of a beginner standard, but I am currently researching the topic, so assistance would be great.
SQL Server should suffice for housing comments as long as you have some caching configured. The good thing about SQL Server is the data integrity of the foreign keys, as well as the querying power.
That said, working with Mongo in C# is not a huge deal. There is a slight learning curve, but that comes with learning any new technology.
Connecting and Using MongoDB
MongoDB has official drivers and NuGet packages for you to use. See http://www.mongodb.org/display/DOCS/CSharp+Language+Center for more information.
Redundancy
Mongo supports replica sets, where your second server mirrors all the data from the first. Information on setting this up can be found here: http://docs.mongodb.org/manual/tutorial/deploy-replica-set/ It should be noted, though, that querying is a bit different in MongoDB than in SQL Server.
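To give a feel for how different the querying is, here is a small Python sketch of MongoDB's filter-document style (the field names are made up, and a plain in-memory list stands in for a live collection):

```python
# Hypothetical comment documents, as they might live in a MongoDB collection.
comments = [
    {"product_id": 7, "user": "alice", "text": "Love it"},
    {"product_id": 9, "user": "bob", "text": "Too big"},
    {"product_id": 7, "user": "carol", "text": "Works fine"},
]

def find(collection, query):
    """Mimic MongoDB's find(): return documents matching every key in the query."""
    return [doc for doc in collection
            if all(doc.get(k) == v for k, v in query.items())]

# In MongoDB you would express this as db.comments.find({"product_id": 7});
# the query is itself a document, not an SQL string.
results = find(comments, {"product_id": 7})
print(len(results))  # 2
```

The "comments for a product" lookup the question asks about is exactly this one-field filter; the C# driver exposes the same document-shaped queries through its own builder API.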
Now, I personally use MongoDB in one of my enterprise applications, but I would say as a rule of thumb: if you don't absolutely need to use it, you would probably be better off sticking with one database engine, mostly so that you only have to manage one. Just my opinion, though. Maybe Redis for caching?
If you don't have a hardware memory problem (you can buy a lot of memory, and you will need it), Mongo can be your solution.
The thing is, in a MongoDB design you will do a kind of denormalization...
And in my opinion, for a hundreds-of-thousands-of-users case your SQL Server is enough: do some more denormalization in your DB design and try implementing a good cache design...
You say you are new to MongoDB, so there is going to be a learning curve...
Add more RAM and CPUs until you have millions of users...
To feel safe with MongoDB you are going to need at least 3 servers.
Please also check this link:
is this the optimal minimum setup for mongodb to allow for sharding/scaling?
Try these:
MVC Application With MongoDB - Part 1
MVC Application With MongoDB - Part 2
Getting Started With MongoDB in ASP.Net MVC4
I'm quite new to NoSQL databases, but I'm really loving MongoDB with the official C# driver. It's currently the backend of an MVC application I'm writing, and the simplicity and speed make my life way, way easier.
However I've come to that point in the application where I need really great search. I used to use Solr, but have become quite interested in ElasticSearch.
ElasticSearch, as far as I can tell (from a very superficial level), can do everything MongoDB can in terms of being a document database.
So, if I'm already using a NoSql db, and I need great search, is there any point in Mongo? What's the use case?
Is Mongo faster? Easier to use? Is it the BSON datatypes and drivers? Why not use ElasticSearch as my DB?
I'm currently using AppHarbor and lovin' "The Cloud". I hate IT and want to focus on my application only. With that said, the only advantage I see so far is:
There are already a number of "Cloud" MongoDB providers. With ElasticSearch I've got to set it all up myself.
This is a very good question. I have asked myself the same question and came up with the following answer.
ElasticSearch doesn’t have a good way to back up data. As an example, do a quick search for “ElasticSearch backup” and one for “mongodb backup”. MongoDB has tools and documentation on how to back up data. While there is documentation on how to back up ElasticSearch data, it doesn’t seem as mature.
In general, MongoDB has much better documentation. In particular its admin documentation is much better than ElasticSearch.
MongoDB provides commercial support. You might not care about commercial support at the moment, but it is nice to know it is available.
MongoDB has MapReduce built in, ElasticSearch does not. This may not be a big thing, but worth noting.
My personal opinion is that I wouldn’t use ElasticSearch if you cannot afford to lose data. I could see using ElasticSearch as a primary data store for something that requires real-time analytics but has no long-term data retention requirements. Otherwise, I would suggest using MongoDB and ElasticSearch together. There is a MongoDB River plugin for ElasticSearch, which makes it pretty easy to update the ElasticSearch index automatically.
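The "use them together" idea boils down to a dual-write pattern: every write goes to the primary store and is mirrored into the search index, which is what the river plugin automates. A toy Python sketch, with plain dicts standing in for MongoDB and ElasticSearch:

```python
# Toy stand-ins: in practice 'primary' would be MongoDB (system of record)
# and 'search_index' would be ElasticSearch, kept in sync by the river plugin.
primary = {}       # doc_id -> full document
search_index = {}  # doc_id -> searchable text

def save(doc_id, doc):
    """Write to the primary store, then mirror into the search index."""
    primary[doc_id] = doc
    search_index[doc_id] = doc["text"].lower()

def search(term):
    """Answer queries from the search index only; fetch full docs from primary."""
    return sorted(i for i, text in search_index.items() if term.lower() in text)

save(1, {"text": "Durable primary record"})
save(2, {"text": "Search finds this record"})
print(search("record"))  # [1, 2]
```

The point of the split is that losing the search index is recoverable (rebuild it from the primary store), which is why the durability concerns above matter less when ES is not the only copy.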
I am new to the open-source game.
I had a question before I dive into what I plan to do. Assuming I plan to use C# with a NoSQL store (not yet decided which one: RavenDB or MongoDB), I want to do indexing for a site in ASP.NET.
I would like to use Lucene.Net for indexing data and page links on my site. When do you actually tell Lucene.Net to start indexing?
I mean, is it a background process that starts indexing every night, just like the SharePoint indexer, or should I index a record the moment I insert it into the NoSQL store?
What about links on pages: when should the crawl engine run? I guess I am thinking in SharePoint-world terms and need to be corrected by some people on this board.
I am particularly interested in the sequence of steps; I am sorry, I am failing to understand the when and why.
Any explanation or links to examples would help.
Appreciate your help.
Thanks
Sweety
Lucene is a search engine, not a crawler, so you would need to find a crawler which inserts the data into the Lucene index.
Think of Lucene as a SQL server: it can store data and retrieve data based on queries, but you have to create the application which actually inserts and queries the data.
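To make that division of labor concrete, here is a toy Python inverted index. A real Lucene.Net index adds analysis, scoring, and on-disk segments, none of which is shown here, but the shape is the same: your application decides when to add documents (at insert time, or in a nightly batch), and queries return matching document ids.

```python
from collections import defaultdict

class TinyIndex:
    """A toy inverted index: term -> set of document ids."""
    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, doc_id, text):
        # YOUR application calls this; the index never goes out and
        # crawls anything on its own.
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, term):
        return sorted(self.postings.get(term.lower(), set()))

index = TinyIndex()
index.add(1, "Getting started with MongoDB")
index.add(2, "MongoDB and Lucene together")
print(index.search("mongodb"))  # [1, 2]
```

So "when do you tell Lucene to start indexing?" has no built-in answer: it indexes exactly when your code calls the add/insert path, and a crawler is just another program that feeds such calls.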
You could very well use Solr (built on top of Lucene) and Nutch, both Java projects, and use web services between your C# app and the search index. The Java version of Lucene is also under constant development, while the .NET version is somewhat up in the air.