I'm trying to execute a search with the NEST ElasticClient and get only the _id of the hits.
Here is my code:
var client = new ElasticClient();
var searchResponse = client.Search<ElasticResult>(new SearchRequest {
From = this.query.Page * 100,
Size = 100,
Source = new SourceFilter {
Includes = "_id"
},
Query = new QueryStringQuery {
Query = this.query.Querystring
}
});
public class ElasticResult {
public string _id;
}
But the _id of the documents (the ElasticResult objects) is always null. What am I doing wrong?
The _id is not part of the _source document, but part of the hit metadata for each hit in the hits array.
The most compact way to return just the _id fields is to use response filtering, which is exposed as FilterPath in NEST:
private static void Main()
{
var defaultIndex = "documents";
var pool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
var settings = new ConnectionSettings(pool)
.DefaultIndex(defaultIndex)
.DefaultTypeName("_doc");
var client = new ElasticClient(settings);
if (client.IndexExists(defaultIndex).Exists)
client.DeleteIndex(defaultIndex);
client.Bulk(b => b
.IndexMany<object>(new[] {
new { Message = "hello" },
new { Message = "world" }
})
.Refresh(Refresh.WaitFor)
);
var searchResponse = client.Search<object>(new SearchRequest<object>
{
From = 0 * 100,
Size = 100,
FilterPath = new [] { "hits.hits._id" },
Query = new QueryStringQuery
{
Query = ""
}
});
foreach(var id in searchResponse.Hits.Select(h => h.Id))
{
// do something with the ids
Console.WriteLine(id);
}
}
The JSON response from Elasticsearch to the search request looks like
{
"hits" : {
"hits" : [
{
"_id" : "6gs8lmQB_8sm1yFaJDlq"
},
{
"_id" : "6Qs8lmQB_8sm1yFaJDlq"
}
]
}
}
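If you prefer the fluent API, the same search can be written as below. This is a sketch only; it assumes FilterPath is exposed on the fluent search descriptor the same way it is on the request object:
// Fluent-syntax sketch of the same search; only the _id of each hit is returned.
var searchResponse = client.Search<object>(s => s
    .From(0)
    .Size(100)
    .FilterPath("hits.hits._id")
    .Query(q => q
        .QueryString(qs => qs
            .Query("")
        )
    )
);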
I have an ElasticClient from Elasticsearch.Net/NEST that fetches data from an InMemoryConnection. I add a query filter to the search, but the result is not filtered: the entire data set from responseBody is returned.
Am I missing something, or is this how InMemoryConnection works?
CurrenciesDTO.cs
internal class CurrenciesDTO
{
[Keyword(Name = "CCY")]
public string CCY { get; set; }
}
Program.cs
using ConsoleApp_Elastic;
using Elasticsearch.Net;
using Nest;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
List<CurrenciesDTO> listCurrencies = new List<CurrenciesDTO> { new CurrenciesDTO() { CCY = "GEL" }, new CurrenciesDTO() { CCY = "INR" }, new CurrenciesDTO() { CCY = "JPY" }, new CurrenciesDTO() { CCY = "USD" } };
var response = new
{
took = 1,
timed_out = false,
_shards = new
{
total = 1,
successful = 1,
skipped = 0,
failed = 0
},
hits = new
{
total = new
{
value = 193,
relation = "eq"
},
max_score = 1.0,
hits = Enumerable.Range(0, listCurrencies.Count).Select(i => (object)new
{
_index = "test.my.currencies",
_type = "_doc",
_id = listCurrencies[i].CCY,
_score = 1.0,
_source = new
{
CCY = listCurrencies[i].CCY,
}
})
}
};
string json = JsonConvert.SerializeObject(response);
var responseBody = Encoding.UTF8.GetBytes(json);
ConnectionSettings connectionSettings = new ConnectionSettings(new InMemoryConnection(responseBody, 200));
connectionSettings.OnRequestCompleted(apiCallDetails =>
{
if (apiCallDetails.RequestBodyInBytes != null)
{// not reaching here
Console.WriteLine(
$"{apiCallDetails.HttpMethod} {apiCallDetails.Uri} " +
$"{Encoding.UTF8.GetString(apiCallDetails.RequestBodyInBytes)}");
}
});
var client = new ElasticClient(connectionSettings);
var filterItems = new List<Func<QueryContainerDescriptor<CurrenciesDTO>, QueryContainer>>();
filterItems.Add(p => p.Term(v => v.Field(f=>f.CCY).Value("USD")));
var result = await client.SearchAsync<CurrenciesDTO>(s => s
.Index("test.my.currencies")
.Query(q => q.Bool(x => x.Filter(filterItems))), CancellationToken.None);
// .Query(q => q.Term(p => p.CCY, "USD")));
//expected 1 record but 4 records are returned.
foreach (var a in result.Documents.ToArray())
{
Console.WriteLine(a.CCY);
}
Console.ReadLine();
Yes, this is by design. InMemoryConnection was created to make unit testing easier and won't be much help with validating actual queries.
To make sure that Elasticsearch is configured the way you expect and that the queries you send to it are valid, I would suggest using Testcontainers.
A simple test would look like this (see the sketch after the list):
spin up a new Docker instance of Elasticsearch with Testcontainers' help
index some data
run your code against the Elasticsearch instance running inside the container
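A minimal sketch of such a test using the generic Testcontainers for .NET (3.x) API; the image tag, environment settings and index name are assumptions, so adjust them to your setup:
using System;
using DotNet.Testcontainers.Builders;
using Nest;

// Spin up a throwaway single-node Elasticsearch container for the test.
var container = new ContainerBuilder()
    .WithImage("docker.elastic.co/elasticsearch/elasticsearch:7.17.10") // assumed version
    .WithEnvironment("discovery.type", "single-node")                   // no cluster, no TLS
    .WithPortBinding(9200, true)                                        // map 9200 to a random host port
    .WithWaitStrategy(Wait.ForUnixContainer().UntilPortIsAvailable(9200))
    .Build();

await container.StartAsync();

// Point a real ElasticClient at the container.
var uri = new Uri($"http://{container.Hostname}:{container.GetMappedPublicPort(9200)}");
var client = new ElasticClient(new ConnectionSettings(uri).DefaultIndex("test.my.currencies"));

// ... index the CurrenciesDTO documents, refresh, run the filtered search
//     from the question and assert that only "USD" comes back ...

await container.DisposeAsync();
Unlike InMemoryConnection, the queries here are executed by a real Elasticsearch instance, so an invalid query or a wrong mapping will actually fail the test.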
I think I'm making a mistake with my SQL query using the Cosmos DB .NET SDK 3. I want to request a list of objects from a document.
This is my document as stored in Cosmos DB (fid is the partition key):
{
"id": "1abc",
"fid": "test",
"docType": "Peeps",
"People": [
{
"Name": "Bill",
"Age": 99
},
{
"Name": "Marion",
"Age": 98
},
{
"Name": "Seb",
"Age": 97
}
],
"_rid": "mo53AJczKUuL9gMAAAAAAA==",
"_self": "dbs/mo53AA==/colls/mo53AJczKUs=/docs/mo53AJczKUuL9gMAAAAAAA==/",
"_etag": "\"9001cbc7-0000-1100-0000-60c9d58d0000\"",
"_attachments": "attachments/",
"_ts": 1623840141
}
My results show an item count of 1, with the properties set to default values: Name is null and Age is 0.
I was expecting an IEnumerable of 3 persons. Here is the code:
class MyPeople
{
public IEnumerable<Person> People { get; set; }
}
class Person
{
public string Name { get; set; }
public int Age { get; set; }
}
[Fact]
public async Task CosmosPeopleTest_ReturnsThreePeople()
{
var config = GetConFig();
var cosmosClientV2 = new CosmosClient(config["Cosmos:ConnectionString"]);
var container = cosmosClientV2.GetContainer(config["Cosmos:DbName"], config["Cosmos:Collectionname"]);
var sql = "SELECT c.People FROM c WHERE c.docType = 'Peeps'";
QueryDefinition queryDefinition = new QueryDefinition(sql);
var results = new List<Person>();
FeedIterator<Person> q = container.GetItemQueryIterator<Person>(queryDefinition, null, new QueryRequestOptions { PartitionKey = new PartitionKey("test") });
while (q.HasMoreResults)
{
var x = await q.ReadNextAsync();
results.AddRange(x.ToList());
}
Assert.Equal(3, results.Count);
}
If I change the query to
sql = "SELECT c.People FROM c JOIN d IN c.People";
I get three Person objects, all with Name and Age set to their default values.
You have an issue with the types. SELECT c.People returns objects in this form:
{
People: [
...
]
}
When you iterate with this code
FeedIterator<Person> q = container.GetItemQueryIterator<Person>(queryDefinition, null, new QueryRequestOptions { PartitionKey = new PartitionKey("test") });
the Cosmos DB SDK tries to "convert" every result object (shaped as above) into a Person class, using reflection for that. It finds no matching fields to fill the properties, so it doesn't fail; it just creates empty objects with every property left at its default value.
So to solve it, you need to use MyPeople instead of Person:
FeedIterator<MyPeople> q = container.GetItemQueryIterator<MyPeople>(queryDefinition, null, new QueryRequestOptions { PartitionKey = new PartitionKey("test") });
Since MyPeople matches the shape of the returned objects, the SDK can deserialize what Cosmos DB returns and populate it correctly.
The full working code:
var config = GetConFig();
var cosmosClientV2 = new CosmosClient(config["Cosmos:ConnectionString"]);
var container = cosmosClientV2.GetContainer(config["Cosmos:DbName"], config["Cosmos:Collectionname"]);
var sql = "SELECT c.People FROM c WHERE c.docType = 'Peeps'";
QueryDefinition queryDefinition = new QueryDefinition(sql);
var results = new List<Person>();
FeedIterator<MyPeople> q = container.GetItemQueryIterator<MyPeople>(queryDefinition, null, new QueryRequestOptions { PartitionKey = new PartitionKey("test") });
while (q.HasMoreResults)
{
var x = await q.ReadNextAsync();
var myPeopleRes = x.Resource;
foreach (var people in myPeopleRes)
{
results.AddRange(people.People);
}
}
Assert.Equal(3, results.Count);
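As an alternative, if you want the query itself to return Person objects directly, Cosmos SQL's SELECT VALUE can unwrap the array elements. A sketch, assuming the same container, document shape and partition key as above:
// Sketch: let the query return bare Person objects via SELECT VALUE.
var sql = "SELECT VALUE p FROM c JOIN p IN c.People WHERE c.docType = 'Peeps'";
var queryDefinition = new QueryDefinition(sql);
var results = new List<Person>();
FeedIterator<Person> q = container.GetItemQueryIterator<Person>(
    queryDefinition, null, new QueryRequestOptions { PartitionKey = new PartitionKey("test") });
while (q.HasMoreResults)
{
    var page = await q.ReadNextAsync();
    results.AddRange(page); // each item is already a Person
}
Assert.Equal(3, results.Count);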
I'm using Mongo 4 with the latest C# driver. My application creates databases and collections on the fly, and I want to enable sharding. I'm using the following code:
if (!ShardingEnabled) return;
var database = collection.Database;
var databaseName = database.DatabaseNamespace.DatabaseName;
var collectionName = collection.CollectionNamespace.CollectionName;
var shardDbScript = $"{{ enableSharding: \"{databaseName}\" }}";
var shardDbResult = database.RunCommand<MongoDB.Bson.BsonDocument>(new MongoDB.Bson.BsonDocument() {
{ "eval",shardDbScript }
});
var adminDb = Client.GetDatabase("admin");
var shardScript = $"{{shardCollection: \"{databaseName}.{collectionName}\"}}";
var commandDoc = new BsonDocumentCommand<MongoDB.Bson.BsonDocument>(new MongoDB.Bson.BsonDocument() {
{ "eval",shardScript }
});
var response = adminDb.RunCommand(commandDoc);
I get an 'ok' response back from Mongo, but my databases aren't sharded.
Output from sh.status():
{
"_id" : "uat_Test_0",
"primary" : "SynoviaShard2",
"partitioned" : false,
"version" : {
"uuid" : UUID("69576c3b-817c-4853-bb02-ea0a8e9813a4"),
"lastMod" : 1
}
}
How can I enable sharding from within C#?
I figured it out. This is how you shard a database and its collections from C#. Note that the shard key index must already exist:
if (!ShardingEnabled) return;
var database = collection.Database;
var adminDb = Client.GetDatabase("admin");
var configDb = Client.GetDatabase("config");
//var dbs = Client.ListDatabaseNames().ToList();
var databaseName = database.DatabaseNamespace.DatabaseName;
var collectionName = collection.CollectionNamespace.CollectionName;
var shardDbResult = adminDb.RunCommand<MongoDB.Bson.BsonDocument>(new MongoDB.Bson.BsonDocument() {
{ "enableSharding",$"{databaseName}" }
});
var commandDict = new Dictionary<string,object>();
commandDict.Add("shardCollection", $"{databaseName}.{collectionName}");
commandDict.Add("key",new Dictionary<string,object>(){{"_id","hashed"}});
var bsonDocument = new MongoDB.Bson.BsonDocument(commandDict);
var commandDoc = new BsonDocumentCommand<MongoDB.Bson.BsonDocument>(bsonDocument);
var response = adminDb.RunCommand(commandDoc);
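Since the shard key index must already exist, it can also be created from C# before shardCollection is issued. A minimal sketch that reuses the database and collectionName variables above and assumes a hashed index on _id:
// Sketch: ensure the hashed index on _id exists before calling shardCollection.
var bsonCollection = database.GetCollection<MongoDB.Bson.BsonDocument>(collectionName);
var hashedKeys = Builders<MongoDB.Bson.BsonDocument>.IndexKeys.Hashed("_id");
bsonCollection.Indexes.CreateOne(new CreateIndexModel<MongoDB.Bson.BsonDocument>(hashedKeys));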
I have a collection in MongoDB whose documents hold the names of the collections I need to work with. I need to query this collection, get all the collection names from its documents, then query those collections and join them based on ParentId references. The following is the collection that stores the names of the other collections:
db.AllInfoCollection.find()
{
"_id" : ObjectId("5b83b982a5e17c383c8424f3"),
"CollName" : "Collection1",
},
{
"_id" : ObjectId("5b83b9aaa5e17c383c8424f7"),
"CollName" : "Collection2",
},
{
"_id" : ObjectId("5b83b9afa5e17c383c8424f8"),
"CollName" : "Collection3",
},
{
"_id" : ObjectId("5b83b9b5a5e17c383c8424f9"),
"CollName" : "Collection4",
},
{
"_id" : ObjectId("5b83b9b9a5e17c383c8424fa"),
"CollName" : "Collection5",
},
{
"_id" : ObjectId("5b84f41bc5eb3f1f7c291f94"),
"CollName" : "Collection6",
}
All of the above collections (Collection1, Collection2, ..., Collection6) are created at run time with empty documents. They are connected to each other through Id and ParentId fields.
Now I need to query this AllInfoCollection, get the collection names, join them and generate the final joined ($lookup) output. I am able to query and get the collection list, but I am not sure how to add the lookup projection inside the for loop. Any help would be appreciated.
public void RetrieveDynamicCollection()
{
IMongoDatabase _db = client.GetDatabase("MyDb");
var collectionList = _db.GetCollection<AllInfoCollection>("AllInfoCollection").AsQueryable().Distinct().Select(x => x.CollectionName).ToList();
for(int i = 0; i < collectionList.Count; i++)
{
var collectionName = collectionList[i];
IMongoCollection<BsonDocument> collection = _db.GetCollection<BsonDocument>(collectionName);
var options = new AggregateOptions()
{
AllowDiskUse = false
};
//not able to proceed here
}
}
Finally I was able to retrieve the collections dynamically with all the required joins ($lookup aggregation) as below; hope it helps someone:
public async Task<string> RetrieveDynamicCollection()
{
try
{
IMongoDatabase _db = client.GetDatabase("MyDB");
var list = _db.GetCollection<HazopCollectionInfo>("AllCollectionInfo").AsQueryable().ToList();
var collectionList = list.OrderBy(x => x.CollectionOrder).Select(x => x.CollectionName).Distinct().ToList();
var listOfJoinDocuments = new List<BsonDocument>();
var firstCollection = _db.GetCollection<BsonDocument>(collectionList[0]);
var options = new AggregateOptions()
{
AllowDiskUse = false
};
var previousCollectionName = "";
for (int i = 0; i < collectionList.Count; i++)
{
var collectionName = collectionList[i];
IMongoCollection<BsonDocument> collection = _db.GetCollection<BsonDocument>(collectionName);
if (i == 0)
{
firstCollection = collection;
var firstarray = new BsonDocument("$project", new BsonDocument()
.Add("_id", 0)
.Add(collectionName, "$$ROOT"));
listOfJoinDocuments.Add(firstarray);
}
else
{
var remainingArray = new BsonDocument("$lookup", new BsonDocument()
.Add("localField", previousCollectionName + "." + "Id")
.Add("from", collectionName)
.Add("foreignField", "ParentId")
.Add("as", collectionName));
listOfJoinDocuments.Add(remainingArray);
remainingArray = new BsonDocument("$unwind", new BsonDocument()
.Add("path", "$" + collectionName)
.Add("preserveNullAndEmptyArrays", new BsonBoolean(true)));
listOfJoinDocuments.Add(remainingArray);
}
previousCollectionName = collectionName;
}
// Project the columns
list = list.OrderBy(x => x.ColumnOrder).ToList(); // OrderBy returns a new sequence, so reassign it
var docProjection = new BsonDocument();
for(int i=0;i<list.Count;i++)
{
docProjection.Add(list[i].ColumnName, "$"+list[i].CollectionName + "." + list[i].FieldName);
}
listOfJoinDocuments.Add(new BsonDocument("$project", docProjection));
PipelineDefinition<BsonDocument, BsonDocument> pipeline = listOfJoinDocuments;
var listOfDocs = new List<BsonDocument>();
using (var cursor = await firstCollection.AggregateAsync(pipeline, options))
{
while (await cursor.MoveNextAsync())
{
var batch = cursor.Current;
foreach (BsonDocument document in batch)
{
listOfDocs.Add(document);
}
}
}
var jsonString = listOfDocs.ToJson(new MongoDB.Bson.IO.JsonWriterSettings { OutputMode = MongoDB.Bson.IO.JsonOutputMode.Strict });
return jsonString;
}
catch (Exception)
{
// rethrow without resetting the stack trace
throw;
}
}
I'm trying to implement a getNextSequence function for MongoDB as explained at this link. I'm using the latest C# driver, but I'm not sure how to map the new: true property in FindOneAndUpdateOptions.
MongoDB Code
function getNextSequence(name) {
var ret = db.counters.findAndModify(
{
query: { _id: name },
update: { $inc: { seq: 1 } },
new: true,
upsert: true
}
);
return ret.seq;
}
C# Code
public async Task<long> GetNextObjectSequenceAsync(string objectName)
{
var collection = this.Context.GetCollection<ObjectSequence>("Counters");
var filter = new FilterDefinitionBuilder<ObjectSequence>().Where(x => x.Name == objectName);
var options = new FindOneAndUpdateOptions<ObjectSequence, ObjectSequence>() { IsUpsert = true };
var update = new UpdateDefinitionBuilder<ObjectSequence>().Inc(x => x.Sequence, 1);
ObjectSequence seq = await collection.FindOneAndUpdateAsync<ObjectSequence>(filter, update, options);
return seq.Sequence;
}
FindOneAndUpdateOptions has a ReturnDocument enum, where:
ReturnDocument.Before = 'new': false
ReturnDocument.After = 'new': true
In your case, the options should be:
var options = new FindOneAndUpdateOptions<ObjectSequence, ObjectSequence>() { ReturnDocument = ReturnDocument.After, IsUpsert = true };
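Putting it together, the method from the question with the corrected options (everything else unchanged; the ObjectSequence mapping is assumed to be as in the question):
public async Task<long> GetNextObjectSequenceAsync(string objectName)
{
    var collection = this.Context.GetCollection<ObjectSequence>("Counters");
    var filter = new FilterDefinitionBuilder<ObjectSequence>().Where(x => x.Name == objectName);
    var update = new UpdateDefinitionBuilder<ObjectSequence>().Inc(x => x.Sequence, 1);
    // ReturnDocument.After returns the document after the $inc, i.e. the 'new: true' behaviour
    var options = new FindOneAndUpdateOptions<ObjectSequence, ObjectSequence>
    {
        ReturnDocument = ReturnDocument.After,
        IsUpsert = true
    };
    ObjectSequence seq = await collection.FindOneAndUpdateAsync<ObjectSequence>(filter, update, options);
    return seq.Sequence;
}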