With the latest update to Elasticsearch, the C# client was also upgraded, but I can't figure out how to port this NEST 6 code to the new NEST 7 client. I just need to rewrite this code:
var indexExists = Client.IndexExists(CurrentAliasName).Exists;

Client.Alias(aliases =>
{
    if (indexExists)
    {
        var oldIndices = Client.GetIndicesPointingToAlias(CurrentAliasName);
        var indexName = oldIndices.First().ToString();

        // remove alias from live index
        aliases.Remove(a => a.Alias(CurrentAliasName).Index("*"));
    }

    return aliases.Add(a => a.Alias(CurrentAliasName).Index(CurrentIndexName));
});
In NEST 7.x, the APIs have been moved into API groupings:
var client = new ElasticClient();
var CurrentAliasName = "alias_name";
var CurrentIndexName = "index_name";

var indexExists = client.Indices.Exists(CurrentAliasName).Exists;

client.Indices.BulkAlias(aliases =>
{
    if (indexExists)
    {
        var oldIndices = client.GetIndicesPointingToAlias(CurrentAliasName);
        var indexName = oldIndices.First().ToString();

        // remove alias from live index
        aliases.Remove(a => a.Alias(CurrentAliasName).Index("*"));
    }

    return aliases.Add(a => a.Alias(CurrentAliasName).Index(CurrentIndexName));
});
You can also reference the Nest.7xUpgradeAssistant package and keep using the same methods as in 6.x to help with the move to 7.x. You'll get compiler warnings with messages that indicate where the new API methods are located.
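For example, a minimal sketch assuming the Nest.7xUpgradeAssistant package is referenced: the 6.x-style calls from the question should still compile, each with an obsolete warning pointing at its 7.x replacement.

var indexExists = client.IndexExists(CurrentAliasName).Exists; // warning points to Indices.Exists

client.Alias(aliases => // warning points to Indices.BulkAlias
{
    if (indexExists)
    {
        // remove alias from live index
        aliases.Remove(a => a.Alias(CurrentAliasName).Index("*"));
    }

    return aliases.Add(a => a.Alias(CurrentAliasName).Index(CurrentIndexName));
});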
The Confluent.Kafka AdminClient allows you to create a topic, specifying the name, number of partitions, replication factor, and retention (and, I'm guessing, other settings through the Configs property). The GetMetadata() call, however, returns a TopicMetadata that only has the name and partition information on it. Is there a way to retrieve the replication factor and retention time using the .NET client?
await adminClient.CreateTopicsAsync(new[]
{
    new TopicSpecification
    {
        Name = topicName,
        NumPartitions = _connectionSettings.TopicAutoCreatePartitionCount,
        ReplicationFactor = _connectionSettings.TopicAutoCreatePartitionCount,
        Configs = new Dictionary<string, string> { { "retention.ms", "9999999999999" } }
    }
});
To get the retention time you can use DescribeConfigsAsync:
var results = await adminClient.DescribeConfigsAsync(new[]
{
    new ConfigResource { Name = "topic_name", Type = ResourceType.Topic }
});

foreach (var result in results)
{
    var retentionConfig = result.Entries.SingleOrDefault(e => e.Key == "retention.ms");
}
But I'm not sure what the correct way to get the replication factor is, since it isn't returned by DescribeConfigsAsync. One way I can think of is to use GetMetadata, but it's not a very clean solution:
var meta = adminClient.GetMetadata(TimeSpan.FromSeconds(5));
var topic = meta.Topics.SingleOrDefault(t => t.Topic == "topic_name");
var replicationFactor = topic.Partitions.First().Replicas.Length;
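A rough sketch (untested, helper name is mine) that combines the two calls above to return both settings for a single topic might look like this; it assumes the ConfigEntryResult's Value property holds the raw config string:

// Hypothetical helper: DescribeConfigsAsync for retention, GetMetadata for replication factor.
async Task<(string RetentionMs, int ReplicationFactor)> GetTopicSettingsAsync(
    IAdminClient adminClient, string topicName)
{
    var configs = await adminClient.DescribeConfigsAsync(new[]
    {
        new ConfigResource { Name = topicName, Type = ResourceType.Topic }
    });

    // Assumption: the entry's Value property holds the raw "retention.ms" string.
    var retentionMs = configs.Single().Entries["retention.ms"].Value;

    // The replication factor is not a config entry, so read it from the metadata:
    // every partition of the topic carries its replica assignment.
    var meta = adminClient.GetMetadata(topicName, TimeSpan.FromSeconds(5));
    var replicationFactor = meta.Topics.Single().Partitions.First().Replicas.Length;

    return (retentionMs, replicationFactor);
}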
I am trying to get my head around a problem. I am building an application where we are indexing assets in Elastic. The nature of the assets is very dynamic, because they contain client metadata, which is different from client to client.
Because of this, the index is built from a List of dynamics in C#. This actually works like a charm. The problem is that I cannot control the _id property in Elasticsearch when using the C# client, which means that when I update the documents, a new duplicate is created instead of updating the correct one.
My code looks like this:
List<dynamic> assets = new List<dynamic>();

var settings1 = new ConnectionSettings(new Uri("http://localhost:9200"))
    .DefaultIndex("assets");

var client = new ElasticClient(settings1);

// assets is built here

var indexResponse = client.Indices.Create("assets");
var BulkResponse = client.IndexMany(assets);
This actually works and the index is built as I expect it to - almost. Even though I have a property called Id on the dynamic, it is not inferred correctly, which means the document is given an _id decided by Elasticsearch. Thus the next time I run this code with the same Id, a new document is created rather than updated.
I have been searching high and low, but cannot seem to find a good solution. One thing I have tried is the following:
var bulkResponse = client.Bulk(bd => bd.IndexMany(assets, (descriptor, s) => descriptor.Id(s.Id)));
But this throws an error in the .NET runtime that I cannot catch. This actually works on lower versions of Elasticsearch, but seems to have broken with Elasticsearch 7.2 and version 7.0.1 of the C# client.
Any help is much appreciated.
To allow the following to work
var bulkResponse = client.Bulk(bd => bd.IndexMany(assets, (descriptor, s) => descriptor.Id(s.Id)));
You just need to cast Id to the type that it actually is. For example, if it's a string:
var client = new ElasticClient();

var assets = new dynamic[]
{
    new { Id = "1", Name = "foo" },
    new { Id = "2", Name = "bar" },
    new { Id = "3", Name = "baz" },
};

var bulkResponse = client.Bulk(bd => bd.IndexMany(assets, (descriptor, s) => descriptor.Id((string)s.Id)));
This is a runtime limitation: with a dynamic argument the call to .Id() is bound at runtime, and the cast gives it a concrete type to bind against.
Instead of using the dynamic type, you could create a dictionary-based custom type like:
public class DynamicDocument : Dictionary<string, object>
{
    public string Id => this["id"]?.ToString();
}
and use it as follows:
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Elasticsearch.Net;
using Nest;

class Program
{
    public class DynamicDocument : Dictionary<string, object>
    {
        public string Id => this["id"]?.ToString();
    }

    static async Task Main(string[] args)
    {
        var pool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
        var connectionSettings = new ConnectionSettings(pool);
        connectionSettings.DefaultIndex("documents");
        var client = new ElasticClient(connectionSettings);

        await client.Indices.DeleteAsync("documents");
        await client.Indices.CreateAsync("documents");

        var response = await client.IndexAsync(
            new DynamicDocument
            {
                {"id", "1"},
                {"field1", "value"},
                {"field2", 1}
            }, descriptor => descriptor);

        // will update the document with id 1, as it already exists
        await client.IndexManyAsync(new[]
        {
            new DynamicDocument
            {
                {"id", "1"},
                {"field1", "value2"},
                {"field2", 2}
            }
        });

        await client.Indices.RefreshAsync();

        var found = await client.GetAsync<DynamicDocument>("1");

        Console.WriteLine($"Id: {found.Source.Id}");
        Console.WriteLine($"field1: {found.Source["field1"]}");
        Console.WriteLine($"field2: {found.Source["field2"]}");
    }
}
Output:
Id: 1
field1: value2
field2: 2
Tested with Elasticsearch 7.2.0 and NEST 7.0.1.
Hope that helps.
I am trying to get all the data from a collection in a MongoDB server using the C# driver.
The idea is to connect to the server, read the whole collection, and insert it into a list of a class.
List<WatchTblCls> wts;
List<UserCls> users;
List<SymboleCls> syms;

public WatchTbl()
{
    InitializeComponent();
    wts = new List<WatchTblCls>();
    users = new List<UserCls>();
    syms = new List<SymboleCls>();
}

public async void getAllData()
{
    client = new MongoClient("mongodb://servername:27017");
    database = client.GetDatabase("WatchTblDB");
    collectionWatchtbl = database.GetCollection<WatchTbl>("Watchtbl");
    collectionUser = database.GetCollection<UserCls>("Users");
    collectionSymbole = database.GetCollection<SymboleCls>("Users");

    var filter = new BsonDocument();

    using (var cursor = await collectionWatchtbl.FindAsync(filter))
    {
        while (await cursor.MoveNextAsync())
        {
            var batch = cursor.Current;
            foreach (var document in batch)
            {
                wts.Add(new WatchTblCls(document["_id"], document["userId"], document["wid"], document["name"], document["Symboles"]));
            }
        }
    }
}
I get this error on this line:
wts.Add(new WatchTblCls(document["_id"], document["userId"], document["wid"], document["name"], document["Symboles"]));
Cannot apply indexing with [] to an expression of type 'WatchTbl'
I don't understand the reason for using both WatchTbl and WatchTblCls. Is WatchTblCls a model for the entity WatchTbl here? I'm not sure.
In any case, if you go for aggregation and want to convert the WatchTbl collection to a WatchTblCls list, your desired solution might look like the following. I don't know the definitions of the classes, so I'm assuming:
var client = new MongoClient("mongodb://servername:27017");
var database = client.GetDatabase("WatchTblDB");
var collectionWatchtbl = database.GetCollection<WatchTbl>("Watchtbl");
var collectionUser = database.GetCollection<UserCls>("Users");
var collectionSymbole = database.GetCollection<SymboleCls>("Users");

var list = collectionWatchtbl.AsQueryable().Select(x => new WatchTblCls()
{
    id = x.id,
    userId = x.userId,
    .....
});
If you can use the same WatchTbl class and still want to load the full collection to a local List (which is definitely not a good idea):
List<WatchTbl> list = await collectionWatchtbl.Find(x => true).ToListAsync();
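And if you want to keep the document["field"] indexing from your original loop, a sketch (untested, and depending on what the WatchTblCls constructor accepts you may need conversions such as .AsString) is to read the collection as BsonDocument, which does support the [] indexer:

// Reads the raw documents so the ["field"] indexer keeps working; BsonDocument
// (unlike the mapped WatchTbl class) supports indexing by field name.
var rawCollection = database.GetCollection<BsonDocument>("Watchtbl");

using (var cursor = await rawCollection.FindAsync(new BsonDocument()))
{
    while (await cursor.MoveNextAsync())
    {
        foreach (var document in cursor.Current)
        {
            wts.Add(new WatchTblCls(document["_id"], document["userId"], document["wid"], document["name"], document["Symboles"]));
        }
    }
}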
I am building some abstraction functions for my application to call, which will hit Elasticsearch through NEST. One such function is a Delete(string id) call, which is easy to accomplish. I have done this as follows:
public void Delete(string id)
{
    esClient.Delete(id);
}
Now let's say I want to do the same thing, but operate on several documents simultaneously. My original hunch was to do something like this:
public void Delete(IEnumerable<string> ids)
{
    esClient.DeleteMany(ids); // won't compile
}
As my comment states, doing this won't compile. What is the proper way of batch deleting documents by ID in Nest?
To use esClient.DeleteMany(..), you have to pass a collection of objects to delete:
var objectsToDelete = new List<YourType> {.. };
var bulkResponse = client.DeleteMany<YourType>(objectsToDelete);
You can get around this by using the following code:
var ids = new List<string> {"1", "2", "3"};
var bulkResponse = client.DeleteMany<YourType>(ids.Select(x => new YourType { Id = x }));
A third option is to use a bulk delete:
var bulkResponse = client.Bulk(new BulkRequest
{
    Operations = ids.Select(x => new BulkDeleteOperation<YourType>(x)).Cast<IBulkOperation>().ToList()
});
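The same bulk delete can also be written with the fluent bulk syntax (a sketch that reuses the stub-object trick from the second option):

// One delete operation per id, built through the fluent BulkDescriptor.
var bulkResponse = client.Bulk(b => b
    .DeleteMany(ids.Select(x => new YourType { Id = x })));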
I was working on a .NET client for Elasticsearch 5.x, and I had the following code running (and passing all unit tests) for bulk deletion by IDs:
// IList<string> ids = ...
var descriptor = new BulkDescriptor();

foreach (var id in ids.Where(x => !string.IsNullOrWhiteSpace(x)))
    descriptor.Delete<T>(x => x.Id(id)).Refresh(Refresh.WaitFor);

var response = await _client.BulkAsync(descriptor);
I have Elasticsearch up and running. Using Sense within Marvel, I am able to get a result with this query:
GET _search
{
  "query": {
    "query_string": {
      "query": "massa"
    }
  }
}
My C# code, trying to recreate the above:
var node = new Uri("http://localhost:9200");
var settings = new ConnectionSettings(node).SetDefaultIndex("mediaitems");
var client = new ElasticClient(settings);
var results = client.Search<stuff>(s => s
    .Query(qs => qs.QueryString(q => q.Query("massa"))));
var d = results.Documents;
But unfortunately I'm not getting any results; nothing in results.Documents. Any suggestions? Maybe a way to see the generated JSON? What is the simplest way to just query everything in an index? Thanks!
Even though your search results are going to be mapped to the proper type because you are using .Search<stuff>, you still need to set the default type as part of your query.
var node = new Uri("http://localhost:9200");
var settings = new ConnectionSettings(node).SetDefaultIndex("mediaitems");
var client = new ElasticClient(settings);

var results = client.Search<stuff>(s => s
    .Type("stuff") // or .Type(typeof(stuff)) if you have decorated your stuff class correctly
    .Query(qs => qs.QueryString(q => q.Query("massa"))));

var d = results.Documents;
Additionally, your results response contains a ConnectionStatus property. You can interrogate this property to inspect the request and response to/from Elasticsearch and check that your query is being executed as you expect.
Update: You can also map a default index for a type in the connection settings:
var settings = new ConnectionSettings(node).SetDefaultIndex("mediaitems");
settings.MapDefaultTypeIndices(d => d.Add(typeof(stuff), "mediaitems"));
You can also check the NEST raw client:
var results = client.Raw.SearchPost("mediaitems", "stuff", new
{
    query = new
    {
        query_string = new
        {
            query = "massa"
        }
    }
});
You can get the values of the search request URL and the JSON request body as follows:
var requestURL = response.RequestInformation.RequestUrl;
var jsonBody = Encoding.UTF8.GetString(response.RequestInformation.Request);
You can find other useful properties in RequestInformation for debugging.