I have a problem downloading CosmosDb data, even when following the tutorial.
So in the beginning, my CosmosDb looks like this:
I tried to simply add a new class:
public class CaseModel
{
    [JsonProperty("_id")]
    public string Id { get; set; }

    [JsonProperty("vin")]
    public string Vin { get; set; }
}
and then do it as described in the documentation:
using (FeedIterator<CaseModel> iterator = collection.GetItemLinqQueryable<CaseModel>(true).ToFeedIterator())
{
    while (iterator.HasMoreResults)
    {
        foreach (var item in await iterator.ReadNextAsync())
        {
            var x = item;
        }
    }
}
This way, the code iterates over many elements (so it appears to work), but the properties are always null, as if the mapping were not applied:
Then I tried something like this:
using (FeedIterator<CaseModel> feedIterator = collection.GetItemQueryIterator<CaseModel>(
    "select * from cases",
    null,
    new QueryRequestOptions() { PartitionKey = new PartitionKey("shardedKey") }))
{
    while (feedIterator.HasMoreResults)
    {
        foreach (var item in await feedIterator.ReadNextAsync())
        {
            var x = item;
        }
    }
}
But this query returns no results.
I have no idea what is wrong.
I last worked with CosmosDb on Azure about a year ago, and I was doing similar things then.
The only thing that strikes me as strange is that the elements are marked as 'documents'.
In the end, the code that I expect to work looks like this:
var dbClient = new CosmosClient(info.ConnectionString);
var db = dbClient.GetDatabase(info.DatabaseName);
var collection = db.GetContainer(info.Collection);

using (FeedIterator<CaseModel> iterator = collection.GetItemLinqQueryable<CaseModel>(true)
    .ToFeedIterator())
{
    while (iterator.HasMoreResults)
    {
        foreach (var item in await iterator.ReadNextAsync())
        {
            var x = item;
        }
    }
}
In the debug window, I can see that the first three steps (connecting with the connection string, getting the database, getting the container) all work.
You are mixing APIs. The SDK you are referencing (Microsoft.Azure.Cosmos) is the SQL API SDK: https://learn.microsoft.com/azure/cosmos-db/sql/sql-api-sdk-dotnet-standard
The screenshot in your question is from a Mongo API account.
Either you use a SQL API account with that SDK or you use the C# Mongo driver to interact with your Mongo API account.
SQL API accounts use id as the document identifier property, not _id.
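If you stay on the Mongo API account, a minimal sketch with the official C# driver (MongoDB.Driver) might look like the following; the connection string, database, and collection names are placeholders taken from the question's info object, and the Bson attributes take the place of the Json.NET ones:

using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;
using MongoDB.Driver;

public class CaseModel
{
    // "_id" is mapped via [BsonId]; the ObjectId is surfaced as a string here
    [BsonId]
    [BsonRepresentation(BsonType.ObjectId)]
    public string Id { get; set; }

    [BsonElement("vin")]
    public string Vin { get; set; }
}

// Hypothetical usage, assuming the same info object from the question:
var client = new MongoClient(info.ConnectionString);
var database = client.GetDatabase(info.DatabaseName);
var cases = database.GetCollection<CaseModel>(info.Collection);
var allCases = await cases.Find(Builders<CaseModel>.Filter.Empty).ToListAsync();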
I've been working on a C# app to amend the IP address(es) of a Named Location in Conditional Access in AAD.
I can authenticate and return the request collection, but for whatever reason I can't access the isTrusted property or the ipRanges OData.
I can see the properties and the values when I step through in debug, but I can't output them.
I think it's something to do with the list type. I'm using Microsoft.Graph.NamedLocation; there is a Microsoft.Graph.IpNamedLocation type available, but it can't be converted from Microsoft.Graph.NamedLocation, which is what the API call returns.
The image shows what's available during runtime.
Code Below:
private static async Task GetnamedLocations(IConfidentialClientApplication app, string[] scopes)
{
    GraphServiceClient graphServiceClient = GetAuthenticatedGraphClient(app, scopes);
    var namedlocationsList = new List<Microsoft.Graph.NamedLocation>();
    var namedLocations = await graphServiceClient.Identity.ConditionalAccess.NamedLocations
        .Request()
        .Filter("isof('microsoft.graph.ipNamedLocation')")
        .GetAsync();
    // var ipNamedLocations = new List<Microsoft.Graph.IpNamedLocation>();
    namedlocationsList.AddRange(namedLocations.CurrentPage);
    foreach (var namedLocation in namedlocationsList)
    {
        Console.WriteLine(namedLocation.Id + namedLocation.DisplayName + namedLocation.ODataType + namedLocation);
        if (namedLocation.ODataType == "#microsoft.graph.ipNamedLocation")
        {
            Console.WriteLine("Write out all the properties");
        }
    }
    Console.WriteLine($"Named location: {namedLocations}");
}
Any pointers gratefully received, I'm not a C# developer so be gentle :-)
You need to cast namedLocation to the IpNamedLocation type.
foreach (var namedLocation in namedlocationsList)
{
    Console.WriteLine(namedLocation.Id + namedLocation.DisplayName + namedLocation.ODataType + namedLocation);
    if (namedLocation is IpNamedLocation ipNamedLocation)
    {
        var isTrusted = ipNamedLocation.IsTrusted;
        var ipRanges = ipNamedLocation.IpRanges;
        if (ipRanges is IEnumerable<IPv4CidrRange> ipv4cidrRanges)
        {
            foreach (var ipv4cidrRange in ipv4cidrRanges)
            {
                Console.WriteLine($"{ipv4cidrRange.CidrAddress}");
            }
        }
        Console.WriteLine("Write out all the properties");
    }
}
I couldn't get the updated answer to work; it still didn't evaluate the if statement to true. After a bit of googling and trying different options, the following returns the IP address. I'm not sure if it's the right way to go about it, but it works.
var ipv4CidrRanges = ipRanges.Cast<IPv4CidrRange>().ToList();
foreach (var ipv4CidrRange in ipv4CidrRanges)
{
    Console.WriteLine(ipv4CidrRange.CidrAddress);
}
Many thanks to user2250152 who solved the first conundrum for me.
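One caveat worth noting (a general LINQ point, not specific to Graph): Cast<IPv4CidrRange>() throws an InvalidCastException if the collection also contains IPv6 ranges, while OfType<IPv4CidrRange>() simply skips entries of other types:

// Safer variant: silently skips any non-IPv4 entries (e.g. IPv6CidrRange)
foreach (var ipv4CidrRange in ipRanges.OfType<IPv4CidrRange>())
{
    Console.WriteLine(ipv4CidrRange.CidrAddress);
}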
So my goal is to use ElasticSearch (ES) as a log. To be more specific, I basically just want to upload a timestamp from when my application last ran. The uploading works fine, but I cannot figure out how to fetch the data from the index. I've tried using both a Query and an Aggregation, but in neither case have I managed to get any data. I get a response that says:
Valid NEST response built from a low level call on POST: /lastrun/lastrun/_search.
I have also tried searching for solutions but cannot manage to find anything that works for me. Can anyone help me fetch the data?
The index name is 'lastrun' and the class I upload to the index is called LastRun.
The Logger class
public static Boolean WriteLastRun()
{
    var response = Elastic.Index(new LastRun { Date = DateTime.Now });
    return response.IsValid;
}

public static DateTime ReadLastRun()
{
    var SearchResponse = Elastic.Search<LastRun>(s => s
        .Query(q => q.MatchAll())
        .Index("lastrun"));
    Console.WriteLine(SearchResponse.Documents);
    return new DateTime();
}
The LastRun class I upload to ES.
public class LastRun
{
    public DateTime Date { get; set; }
}
Thanks!
EDIT
Settings for the Elastic client:
var settings = new ConnectionSettings(new Uri("http://localhost:9200/")).DefaultIndex("lastrun");
ElasticClient Elastic = new ElasticClient(settings);
EDIT 2
I can verify that the same index is being written to and searched, both from this code and by checking the index in Kibana.
var resolver = new IndexNameResolver(settings);
var index = resolver.Resolve<LastRun>();
Console.WriteLine(index); //prints 'lastrun'
It turns out there wasn't a problem to begin with. The Search method worked fine; I just had the wrong idea about how to access the documents in the response.
This is what I did, which was wrong:
Console.WriteLine(SearchResponse.Documents);
And here is the right way to do it:
foreach (var item in SearchResponse.Documents)
{
    Console.WriteLine(item.Date);
}
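To make ReadLastRun actually return the stored timestamp instead of a fresh DateTime, one option is to sort by Date descending and take only the newest document. A sketch against the same NEST client and index as above (the exact sort syntax varies slightly between NEST versions):

public static DateTime ReadLastRun()
{
    var searchResponse = Elastic.Search<LastRun>(s => s
        .Index("lastrun")
        .Size(1)                                  // only the newest hit is needed
        .Sort(so => so.Descending(f => f.Date)));
    var last = searchResponse.Documents.FirstOrDefault(); // needs System.Linq
    return last != null ? last.Date : default(DateTime);
}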
I have a program that performs several bulk index operations on an ElasticSearch cluster. At some point, I start getting errors like this one (snipped):
RemoteTransportException[...][indices:data/write/bulk[s]]]; nested: EsRejectedExecutionException[rejected execution (queue capacity 100) ...];
Is there a way I can verify the status of the bulk upload queue, ideally using NEST, so that I can slow down the client application in case I see that the queue on the server is getting full?
The NodesInfo method looks interesting, but I don't see how to access the information I need:
using Nest;
using System;

class Program
{
    static void Main(string[] args)
    {
        ElasticClient client = new ElasticClient(new ConnectionSettings(new Uri("http://whatever:9200/")));
        var nodesInfoResponse = client.NodesInfo();
        if (nodesInfoResponse.IsValid)
        {
            foreach (var n in nodesInfoResponse.Nodes)
            {
                Console.WriteLine($"Node: {n.Key}");
                var bulk = n.Value.ThreadPool["bulk"];
                // ???
            }
        }
    }
}
You need to use NodesStats() and not NodesInfo().
var nodesStatsResponse = client.NodesStats();
if (nodesStatsResponse.IsValid)
{
    foreach (var node in nodesStatsResponse.Nodes)
    {
        long bulkThreadPoolQueueSize = node.Value.ThreadPool["bulk"].Queue;
    }
}
UPDATE:
The above call brings back a lot more information than required. A much leaner request for the same information uses the _cat/thread_pool API. See below:
var catThreadPoolResponse = client.CatThreadPool(d => d.H("host", "bulk.queue"));
if (catThreadPoolResponse.IsValid)
{
    foreach (var record in catThreadPoolResponse.Records)
    {
        string nodeName = record.Host;
        long bulkThreadPoolQueueSize = int.Parse(record.Bulk.Queue);
        Console.WriteLine($"Node [{nodeName}] : BulkThreadPoolQueueSize [{bulkThreadPoolQueueSize}]");
    }
}
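If the end goal is to slow the client down when the queue grows, a rough sketch built on the NodesStats call above could poll the queue size before each bulk call. The 50-item threshold and one-second delay here are arbitrary choices, not recommendations:

static async Task WaitForBulkQueueAsync(ElasticClient client, long threshold = 50)
{
    while (true)
    {
        var stats = client.NodesStats();
        if (!stats.IsValid)
            break; // fail open rather than block forever on a stats error
        // Largest bulk queue across all nodes (needs System.Linq)
        long maxQueue = stats.Nodes.Max(n => n.Value.ThreadPool["bulk"].Queue);
        if (maxQueue < threshold)
            break;
        await Task.Delay(TimeSpan.FromSeconds(1)); // back off and re-check
    }
}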
I'm developing a "Task Control System" that will allow its users to enter task description information including when to execute the task and what environment (OS, browser, etc.) the task requires.
The 'controller' saves the description information and schedules the task. When the scheduled time arrives, the scheduler retrieves the task information and 'queues' the task for a remote machine that matches the required environment.
My first cut at this used a relational database to persist the task descriptions and enough history information to track problems (about two weeks' worth). But this is not a 'big data' problem, and the relationships are simple.
So I'm looking for something that offers more performance.
I'm trying to use redis for this, but I'm having some problems. I'm using ServiceStack.Redis version 3.9.71.0 for the client and Redis 2.8.4 is the server.
This sample code is taken from Dan Swain's tutorial. It's updated to work with ServiceStack.Redis client v 3.9.71.0. Much of it works, but 'currentShippers.Remove(lameShipper);' does NOT work.
Can anyone see why that might be?
Thanks
public void ShippersUseCase()
{
    using (var redisClient = new RedisClient("localhost"))
    {
        // Create a 'strongly-typed' API that makes all Redis Value operations apply against Shippers
        var redis = redisClient.As<Shipper>();

        // Redis lists implement IList<T> while Redis sets implement ICollection<T>
        var currentShippers = redis.Lists["urn:shippers:current"];
        var prospectiveShippers = redis.Lists["urn:shippers:prospective"];

        currentShippers.Add(
            new Shipper
            {
                Id = redis.GetNextSequence(),
                CompanyName = "Trains R Us",
                DateCreated = DateTime.UtcNow,
                ShipperType = ShipperType.Trains,
                UniqueRef = Guid.NewGuid()
            });

        currentShippers.Add(
            new Shipper
            {
                Id = redis.GetNextSequence(),
                CompanyName = "Planes R Us",
                DateCreated = DateTime.UtcNow,
                ShipperType = ShipperType.Planes,
                UniqueRef = Guid.NewGuid()
            });

        var lameShipper = new Shipper
        {
            Id = redis.GetNextSequence(),
            CompanyName = "We do everything!",
            DateCreated = DateTime.UtcNow,
            ShipperType = ShipperType.All,
            UniqueRef = Guid.NewGuid()
        };

        currentShippers.Add(lameShipper);

        Dump("ADDED 3 SHIPPERS:", currentShippers);

        currentShippers.Remove(lameShipper);
        .
        .
        .
    }
}
Fixed the problem by adding these overrides to the 'Shipper' class:
public override bool Equals(object obj)
{
    if (obj == null)
    {
        return false;
    }
    var input = obj as Shipper;
    return input != null && Equals(input);
}

public bool Equals(Shipper other)
{
    return other != null && Id.Equals(other.Id);
}

public override int GetHashCode()
{
    return (int)Id;
}
This working example shows how to get List<>.Contains, List<>.Find, and List<>.Remove working. Once these overrides were applied to the 'Shipper' class, the problem was solved!
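Two small refinements worth considering (assumptions about the class, since its full definition isn't shown): declaring IEquatable<Shipper> explicitly lets generic collections bind to the typed Equals overload directly, and if Id is a long (GetNextSequence returns one), Id.GetHashCode() avoids the narrowing (int) cast:

public class Shipper : IEquatable<Shipper>
{
    public long Id { get; set; }
    // ... remaining properties and the Equals overrides shown above ...

    public override int GetHashCode()
    {
        return Id.GetHashCode(); // no truncation, unlike (int)Id
    }
}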
I'm coming from a SQL Server background, and experimenting with Redis in .NET using ServiceStack. I don't mean for Redis to be a full replacement for SQL Server, but I just wanted to get a basic idea of how to use it so I could see where we might make good use of it.
I'm struggling with what I think is a pretty basic issue. We have a list of items that are maintained in a couple of different data stores. For the sake of simplicity, assume the definition of the item is basic: an integer id and a string name. I'm trying to do the following:
Store an item
Retrieve an item if we only know its id
Overwrite an existing item if we only know its id
Show all the items for that specific type
And here's some of the code I've put together:
public class DocumentRepositoryRedis
{
    private static string DOCUMENT_ID_KEY_BASE = "document::id::";

    public IQueryable<Document> GetAllDocuments()
    {
        IEnumerable<Document> documentsFromRedis;
        using (var documents = new RedisClient("localhost").As<Document>())
        {
            documentsFromRedis = documents.GetAll();
        }
        return documentsFromRedis.AsQueryable();
    }

    public Document GetDocument(int id)
    {
        Document document = null;
        using (var redisDocuments = new RedisClient("localhost").As<Document>())
        {
            var documentKey = GetKeyByID(id);
            if (documentKey != null)
                document = redisDocuments.GetValue(documentKey);
        }
        return document;
    }

    public void SaveDocument(Document document)
    {
        using (var redisDocuments = new RedisClient("localhost").As<Document>())
        {
            var documentKey = GetKeyByID(document.ID);
            redisDocuments.SetEntry(documentKey, document);
        }
    }

    private string GetKeyByID(int id)
    {
        return DOCUMENT_ID_KEY_BASE + id.ToString();
    }
}
It all seems to work - except for GetAllDocuments. That's returning 0 documents, regardless of how many documents I have stored. What am I doing wrong?
The typed Redis client also gives you access to the non-typed methods - since Redis ultimately doesn't know or care about your object types. So when you use the client.SetEntry() method, it bypasses some of the typed client's features and just stores the object by a key. You'll want to use the client.Store method since it goes ahead and creates a SET in Redis with all the object IDs related to your type. This SET is important because it's what the GetAll method relies on to serve back all the objects to you. The client.Store method does infer the ID automatically so you'll want to play around with it.
You'd change your GetDocument(int id) and SaveDocument(Document document) methods to use the client.GetById(string id) and client.Store(T value) methods respectively. You won't need your GetKeyByID() method anymore. I believe your Document object will need an "Id" property for the typed client to infer your object ID.
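As a concrete sketch of that suggestion (keeping the localhost client from the question, and assuming a simple Document with an Id the typed client can infer):

public class Document
{
    public int Id { get; set; } // the typed client infers the key from this property
    public string Name { get; set; }
}

public class DocumentRepositoryRedis
{
    public IQueryable<Document> GetAllDocuments()
    {
        using (var redisDocuments = new RedisClient("localhost").As<Document>())
        {
            return redisDocuments.GetAll().AsQueryable();
        }
    }

    public Document GetDocument(int id)
    {
        using (var redisDocuments = new RedisClient("localhost").As<Document>())
        {
            return redisDocuments.GetById(id); // key construction is handled for you
        }
    }

    public void SaveDocument(Document document)
    {
        using (var redisDocuments = new RedisClient("localhost").As<Document>())
        {
            // Store both saves the object and registers its id in the type's id SET,
            // which is what GetAll reads from
            redisDocuments.Store(document);
        }
    }
}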