How to use Redis with ElasticSearch - c#

I found NEST for ElasticSearch, but I don't understand the relationship between Redis and ElasticSearch. I'm building a social network and would like to know which parts of the project should use Redis, which parts should use ElasticSearch, and which parts should use a combination of the two.
I use C#, BookSleeve for Redis, ElasticSearch with NEST, and ASP.NET MVC.

There is exactly zero relationship between these two things. I suspect you may have gotten the wrong end of the stick in a previous conversation, where you wanted to search inside an individual value in redis for uses of a word (this question: How to search content value in redis by BookSleeve). The point I was trying to make is that this simply isn't a feature of redis. So you have two options:
write your own word extraction code (stemmer, etc) and build an index manually inside redis (a rough sketch of this follows below)
use a tool that is designed to do all of that for you
Tools like ElasticSearch (which sits on top of Lucene) are good at that.
Or to put the question in other terms:
X asks "how do I cut wood in half with a screwdriver"
Y says "use a saw"
X then asks "how do I use a screwdriver with a saw to cut wood in half?"
Answer: you don't. These things are not related.
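For completeness, a very rough sketch of what option 1 could look like, using StackExchange.Redis (BookSleeve's successor); the tokenization is deliberately naive and the key scheme is made up:

    using System;
    using StackExchange.Redis;

    class RedisInvertedIndex
    {
        static void Main()
        {
            var redis = ConnectionMultiplexer.Connect("localhost");
            var db = redis.GetDatabase();

            // Indexing: add the document id to a set per (crudely tokenized) word.
            Action<string, string> index = (docId, text) =>
            {
                foreach (var word in text.ToLowerInvariant().Split(' '))
                    db.SetAdd("word:" + word, docId);
            };

            index("doc:1", "redis is fast");
            index("doc:2", "elasticsearch is a search engine");

            // Searching: intersect the per-word sets of all query terms.
            RedisValue[] hits = db.SetCombine(SetOperation.Intersect,
                new RedisKey[] { "word:redis", "word:is" });
            Console.WriteLine(string.Join(", ", hits)); // doc:1
        }
    }

This is exactly the wheel that Lucene-based tools have already built, far better, with stemming, scoring and so on; that is the point of option 2.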

Actually, Redis and Elasticsearch can be combined in quite a useful way: if you are pushing data into Elasticsearch from a source stream, and that stream suddenly bursts with more data than your Elasticsearch instance can ingest, it will drop data. If, however, you put a Redis instance in front of Elasticsearch to buffer the data, your Elasticsearch instance can survive the burst without losing anything, because the overflow sits in Redis in the meantime.
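A rough sketch of that buffering pattern, assuming StackExchange.Redis on the Redis side and NEST on the Elasticsearch side; the queue key and document type are hypothetical:

    using System.Text.Json;
    using Nest;
    using StackExchange.Redis;

    public class LogEvent
    {
        public string Id { get; set; }
        public string Message { get; set; }
    }

    public class IngestBuffer
    {
        private const string Queue = "ingest-queue"; // hypothetical key
        private readonly IDatabase _redis;
        private readonly IElasticClient _elastic;

        public IngestBuffer(IDatabase redis, IElasticClient elastic)
        {
            _redis = redis;
            _elastic = elastic;
        }

        // Producer side: pushing onto a Redis list is cheap, so bursts are absorbed here.
        public void Enqueue(LogEvent e)
        {
            _redis.ListLeftPush(Queue, JsonSerializer.Serialize(e));
        }

        // Consumer side: drain the queue at a rate Elasticsearch can sustain.
        public void DrainOnce()
        {
            RedisValue raw = _redis.ListRightPop(Queue);
            if (!raw.HasValue) return;

            var e = JsonSerializer.Deserialize<LogEvent>((string)raw);
            _elastic.IndexDocument(e); // NEST: index into the client's default index
        }
    }

A consumer loop would call DrainOnce (or a bulk variant) on a timer or background worker, so Elasticsearch only ever sees a smooth flow.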
That's just one example, but there are many more. See here for an example of how to cache queries.

Related

Search Box with predictive result

I want to create a search box that will show results relating to the typed text. I am using .NET MVC and I have been stuck on this for a while. I want to use the AlphaVantage API search endpoint to create this.
It would look like this. I just don't know what component to use or how to implement it.
As we don't know the amount of data or the possible stack/budget of your project, autocompletion/autosuggestion could be implemented in several different ways:
In memory (you break your word into all possible prefixes and map them to your entity through a dictionary; this can be optimized, like so - https://github.com/omerfarukz/autocomplete). The limit is around 10 million entries, with a lot of memory consumption. It also supports some storage mechanics, but I don't think it is more powerful than fully fledged Lucene.
In a Lucene index (Lucene.Net (4.8) AutoComplete / AutoSuggestion). Limited to 2 billion entries, very optimized memory usage, stored on a hard drive or anywhere else. Hard to work with, because it exposes low-level micro-optimizations on the indexes and the overall tokenization/indexing pipeline.
In an Elasticsearch cluster (https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/suggest-usage.html). Unlimited, and uses Lucene indexes as sharding units. Same as Lucene, but every cloud infrastructure provides it for a pretty penny. (A sketch of this option follows below.)
In SQL, using a full-text index (SQL Server Full Text Catalog and Autocomplete). Limited to database providers such as SQLite/MSSQL/Oracle/etc., cheap, easy to use, and it usually consumes CPU like there is no tomorrow, but hey, it is relational, so you can join any data to the results.
As to how to use it: basically, you send a request to the desired framework instance and retrieve the first N results, which you then serve in some REST GET response.
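For the Elasticsearch option, a hedged sketch of a suggest call with NEST (the index name, document type, and completion-field mapping are assumptions; the linked docs cover the full setup):

    using System.Collections.Generic;
    using System.Linq;
    using Nest;

    public class Stock
    {
        public string Symbol { get; set; }
        public string Name { get; set; }
        public CompletionField Suggest { get; set; } // must be mapped as a completion field
    }

    public static class AutoComplete
    {
        // Returns up to 10 suggestions for whatever the user has typed so far.
        public static IReadOnlyCollection<string> Suggest(IElasticClient client, string prefix)
        {
            var response = client.Search<Stock>(s => s
                .Index("stocks") // hypothetical index name
                .Suggest(su => su
                    .Completion("stock-suggest", c => c
                        .Field(f => f.Suggest)
                        .Prefix(prefix)
                        .Size(10))));

            return response.Suggest["stock-suggest"]
                .SelectMany(g => g.Options)
                .Select(o => o.Text)
                .ToList();
        }
    }

The MVC side is then just a GET action that calls this and returns JSON for the search box to render.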
You'll have to make a POST request (HttpClient) to the API that returns your data. You'll also need to provide all required authorization information (whether headers or keys). That should be async, or possibly run in a background worker, so that it doesn't block your thread. The requests should fire whenever the text in your search box changes.
You can probably find details on how to do the request here.
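A minimal async sketch with HttpClient; the AlphaVantage SYMBOL_SEARCH endpoint accepts a GET with query parameters (swap in PostAsync plus headers if the API you use requires it), and the key is a placeholder:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class SymbolSearch
    {
        private static readonly HttpClient Http = new HttpClient();

        // Call this (ideally debounced) whenever the search box text changes.
        public static async Task<string> SearchAsync(string keywords, string apiKey)
        {
            var url = "https://www.alphavantage.co/query" +
                      "?function=SYMBOL_SEARCH" +
                      "&keywords=" + Uri.EscapeDataString(keywords) +
                      "&apikey=" + apiKey;

            // Async, so the request never blocks the UI or request thread.
            return await Http.GetStringAsync(url); // raw JSON; parse and bind to the box
        }
    }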

How to use entity framework with elastic search

I want to use Entity Framework with Elasticsearch.
I saw this article: MVC APPLICATION WITH ENTITY FRAMEWORK AND ELASTICSEARCH.
But as I understood it, I need two databases (MS SQL + Elastic), and the article explains how to translate the data between them.
I would save the data in MS SQL and do the searching in Elastic.
So all the data would be stored twice, which is a waste of storage...
Is there any direct way to do this?
Thanks
You can use Entity Framework with Elasticsearch by utilising the ElasticsearchCRUD API.
This article clearly explains the steps to do so.
P.S.: I'd rather not copy/paste the steps here, as it might look redundant.
Yes, you understood right: you would need to use two different sources.
There is no direct way to use Elasticsearch with EF; you would need to write your own custom logic to fit the database and Elasticsearch together.
If you ask why, the answer is that a relational database and Elasticsearch are simply different.
First of all, Elastic is a document database: where a relational database lets you split items across multiple tables, in ES it is preferable to save the whole object as one document (you can still use nested objects in ES, but you will not be able to join).
Secondly, search queries are totally different in SQL and Elastic, so sometimes only you can decide which source should serve a given search. To query Elastic you can use the NEST package, but you would need to learn ES queries and the indexing side, since results will differ depending on how fields are analysed.
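A rough sketch of that custom glue, assuming EF plus NEST; the entity, context, and index setup are made up for illustration:

    using System.Collections.Generic;
    using Nest;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
    }

    public class ProductService
    {
        private readonly MyDbContext _db;          // hypothetical EF context
        private readonly IElasticClient _elastic;  // NEST client

        public ProductService(MyDbContext db, IElasticClient elastic)
        {
            _db = db;
            _elastic = elastic;
        }

        public void Create(Product p)
        {
            // SQL stays the source of truth...
            _db.Products.Add(p);
            _db.SaveChanges();

            // ...and the same object is pushed to Elasticsearch for searching.
            _elastic.IndexDocument(p);
        }

        public IReadOnlyCollection<Product> Search(string text)
        {
            var response = _elastic.Search<Product>(s => s
                .Query(q => q.Match(m => m.Field(f => f.Name).Query(text))));
            return response.Documents;
        }
    }

If writes can happen from more than one place, a message queue or change-tracking job is a safer way to keep the two stores in sync than inline calls like this.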

Fast in-memory graph database

Does anyone know of a good solution out there that can deal with processing a graph of interconnected nodes? For our purpose the nodes are locations and we move material with various attributes between these locations. At some point a user may need to query what material is at a particular location, where it came from etc. What I need to do is walk the graph/tree and sum up quantities along the way depending on what a user requests.
I was thinking an in-memory graph database or alternatively a graph library may be suitable for this kind of problem but I am not 100% sure. It needs to be called from c# 4.5.
I read about Microsoft's Trinity, and there is also Neo4j, but I haven't had any experience with either of them.
There are at least two in-memory c# alternatives:
Fallen-8 - http://www.fallen-8.com/
OrigoDB - https://origodb.com/ The author just mentioned in a mailing list that he was working on a graph example.
We're using VelocityGraph for our graph needs - http://www.velocitygraph.com/
But VelocityGraph is not an in-memory solution, so I'm not sure how well it suits your requirements.
Memgraph is an in-memory graph database, and it has support for C#.
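If the graph fits comfortably in memory and the needs stay this simple, a plain adjacency structure may be enough before reaching for any of these products. A minimal sketch of the walk-and-sum described in the question (the model and semantics are assumptions):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // An edge in the graph: a movement of material between two locations.
    public class Movement
    {
        public string From { get; set; }
        public string To { get; set; }
        public string Material { get; set; }
        public double Quantity { get; set; }
    }

    public class MaterialGraph
    {
        private readonly ILookup<string, Movement> _incoming;

        public MaterialGraph(IEnumerable<Movement> movements)
        {
            // Index edges by destination so we can walk upstream quickly.
            _incoming = movements.ToLookup(m => m.To);
        }

        // Walk upstream from a location, summing quantities of one material.
        public double TotalInflow(string location, string material)
        {
            double total = 0;
            var seen = new HashSet<string>();
            var stack = new Stack<string>();
            stack.Push(location);

            while (stack.Count > 0)
            {
                string node = stack.Pop();
                if (!seen.Add(node)) continue; // guard against cycles

                foreach (var m in _incoming[node].Where(x => x.Material == material))
                {
                    total += m.Quantity;
                    stack.Push(m.From); // keep walking upstream
                }
            }
            return total;
        }
    }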

MongoDB, C# and NoRM + Denormalization

I am trying to use MongoDB, C# and NoRM to work on some sample projects, but at this point I'm having a much harder time wrapping my head around the data model. With RDBMSs, related data is no problem. In MongoDB, however, I'm having a difficult time deciding what to do with it.
Let's use StackOverflow as an example... I have no problem understanding that the majority of data on a question page should be included in one document. Title, question text, revisions, comments... all good in one document object.
Where I start to get hazy is on the question of user data like username, avatar, reputation (which changes especially often)... Do you denormalize and update thousands of document records every time there is a user change or do you somehow link the data together?
What is the most efficient way to accomplish a user relationship without causing tons of queries to happen on each page load? I noticed the DbReference<T> type in NoRM, but haven't found a great way to use it yet. What if I have nullable optional relationships?
Thanks for your insight!
The balance that I have found is using SQL as the normalized database and Mongo as the denormalized copy. I use an ESB to keep them in sync with each other. I use a concept that I call "prepared documents" and "stored documents". Stored documents are data that is only kept in Mongo; useful for data that isn't relational. Prepared documents contain data that can be rebuilt from the normalized database. They act as living caches, in a way: they can be rebuilt from scratch if the data ever falls out of sync (for complicated documents this is an expensive process, because they require many queries to rebuild), and they can also be updated one field at a time. This is where the service bus comes in: it responds to events sent after the normalized database has been updated and then updates the relevant Mongo prepared documents.
Use each database to its strengths. Allow SQL to be the write database that ensures data integrity. Let Mongo be the read-only database that is blazing fast and can contain sub-documents, so that you need fewer queries.
** EDIT **
I just re-read your question and realized what you were actually asking for. I'm leaving my original answer in case it's helpful at all.
The way I would handle the Stack Overflow example you gave is to store the user id in each comment. You would load the post, which would have all of the comments in it. That's one query.
You would then traverse the comment data and pull out an array of user ids that you need to load, and load those as a batch query (using the Q.In() query operator). That's two queries total. You would then need to merge the data together into a final form. There is a balance to strike between when to do it like this and when to use something like an ESB to manually update each document; use whatever works best for each individual scenario of your data structure.
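NoRM is long unmaintained, so purely as an illustration, here is the same two-query pattern with the official MongoDB C# driver (collection shapes and field names are assumptions):

    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;
    using MongoDB.Driver;

    public class Post { public string Id { get; set; } public List<Comment> Comments { get; set; } }
    public class Comment { public string UserId { get; set; } public string Text { get; set; } }
    public class User { public string Id { get; set; } public string Name { get; set; } public int Reputation { get; set; } }

    public static class PostLoader
    {
        public static async Task<(Post Post, List<User> Commenters)> LoadAsync(
            IMongoCollection<Post> posts, IMongoCollection<User> users, string postId)
        {
            // Query 1: the post, with all comments embedded.
            var post = await posts.Find(p => p.Id == postId).FirstAsync();

            // Query 2: batch-load the commenters (the modern equivalent of Q.In()).
            var ids = post.Comments.Select(c => c.UserId).Distinct().ToList();
            var commenters = await users
                .Find(Builders<User>.Filter.In(u => u.Id, ids))
                .ToListAsync();

            return (post, commenters); // merge in memory into the final view model
        }
    }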
I think you need to strike a balance.
If I were you, I'd just reference the userid instead of their name/reputation in each post.
Unlike in an RDBMS, though, you would opt to have comments embedded in the document.
Why do you want to avoid denormalization and updating 'thousands of document records'? MongoDB is designed for denormalization. Stack Overflow handles millions of pieces of data in the background, and some of it can be stale for a short period; that's okay.
So the main idea of the above is that you should keep denormalized documents in order to display them quickly in the UI.
You can't query by referenced document, so either way you need denormalization.
I also suggest having a look at the CQRS and event sourcing architectures; they will allow you to update all this data through a queue.

How can I cache LINQ to SQL results safely?

I have an ASP.Net MVC web app that includes a set of Forums. In order to maintain flexible security, I have chosen an access-control-list style of security.
However, this is getting to be a pretty heavy chunk of data to retrieve every time somebody views the forum index.
I am using the EnterpriseLibrary.Caching functionality to cache various non-LINQ items on the site. (The BBCode interpreter, the Skins, and etc.)
My question is this:
What is the safest and most elegant way to cache a LINQ result?
Essentially, I would like to keep a copy of the ACL for each forum in memory to prevent the database hit. That way, for each person that hits the site, at most I would have to fetch group membership information.
All in all, I'm really looking for a way to cache large amounts of LINQ data effectively, not just these specific rows.
If you've already got a caching system for general objects, all you should need is this:
var whatever = linqQuery.ToList();
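ToList() runs the query immediately, so what you cache is a plain list that no longer depends on the short-lived DataContext. A sketch using System.Runtime.Caching.MemoryCache (substitute your EnterpriseLibrary.Caching manager; the key and expiry are arbitrary):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Runtime.Caching;

    public class AclEntry
    {
        public int ForumId { get; set; }
        public int GroupId { get; set; }
        public bool CanRead { get; set; }
    }

    public static class AclCache
    {
        // Fetch a forum's ACL from cache, hitting the database only on a miss.
        public static List<AclEntry> GetForumAcl(int forumId, Func<IQueryable<AclEntry>> query)
        {
            string key = "forum-acl-" + forumId;
            var cached = MemoryCache.Default.Get(key) as List<AclEntry>;
            if (cached != null) return cached;

            // Materialize now: the cached object is detached from the DataContext.
            var acl = query().ToList();
            MemoryCache.Default.Set(key, acl, DateTimeOffset.Now.AddMinutes(10));
            return acl;
        }
    }

The one thing to avoid is caching the IQueryable itself: it holds a reference to a by-then-disposed DataContext and would re-run the query (or throw) on every use.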
