Basically this is just a GET-style query against Azure Cosmos DB, filtered on machine id and datetime.
I am stuck on querying Cosmos by datetime. The same query runs and returns results in the Azure portal, but from code I get nothing back.
A few more details:
SELECT *
FROM c
WHERE c.IoTHub.ConnectionDeviceId IN ('hub20') AND c.MACHINE_ID = 'TAP_20' AND (c.EventEnqueuedUtcTime >= '2021-02-03T10:40:42.5180000Z' AND c.EventEnqueuedUtcTime <= '2021-02-03T10:40:42.5180000Z')
QueryDefinition queryDefinition = new QueryDefinition(sqlQueryText);
// Execute the query and drain all result pages.
FeedIterator<dynamic> queryResultSetIterator = container.GetItemQueryIterator<dynamic>(queryDefinition);
FeedResponse<dynamic> currentResultSet;
while (queryResultSetIterator.HasMoreResults)
{
    currentResultSet = await queryResultSetIterator.ReadNextAsync();
}
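As an aside, the same query can be expressed with parameters via the v3 SDK's QueryDefinition.WithParameter, which removes any chance of a quoting or concatenation slip. A sketch only, with values copied from the question (the single-element IN is written as an equality here):

```csharp
using Microsoft.Azure.Cosmos;

// Parameterized form of the query above (v3 SDK).
QueryDefinition parameterized = new QueryDefinition(
        "SELECT * FROM c" +
        " WHERE c.IoTHub.ConnectionDeviceId = @deviceId" +
        " AND c.MACHINE_ID = @machineId" +
        " AND c.EventEnqueuedUtcTime >= @from" +
        " AND c.EventEnqueuedUtcTime <= @to")
    .WithParameter("@deviceId", "hub20")
    .WithParameter("@machineId", "TAP_20")
    .WithParameter("@from", "2021-02-03T10:40:42.5180000Z")
    .WithParameter("@to", "2021-02-03T10:40:42.5180000Z");
```

The definition can then be passed to GetItemQueryIterator exactly as in the snippet above.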
I am able to get all the data up to the MACHINE_ID filter, but as soon as I apply the c.EventEnqueuedUtcTime condition I get no data back. I have tried every solution I could find. The c.EventEnqueuedUtcTime value arrives as a string, and it is also saved as a string in the database, as shown in the sample document below.
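Since EventEnqueuedUtcTime is stored as a string, the range filter relies on ordinal string comparison. Fixed-width ISO-8601 UTC timestamps do order correctly as strings, but equality is byte-for-byte, so any cast or reformat of the value (for example a different fractional-second precision) silently breaks an exact match. A minimal sketch:

```csharp
using System;

class IsoTimestampOrderingDemo
{
    static void Main()
    {
        // Fixed-width ISO-8601 UTC timestamps sort ordinally in the same
        // order as chronologically, which is why string range filters work.
        string enqueued = "2021-02-03T10:40:42.5180000Z";
        string later = "2021-02-03T10:41:11.2570000Z";
        Console.WriteLine(string.CompareOrdinal(enqueued, later) < 0); // True

        // But equality is byte-for-byte: dropping the trailing zeros of the
        // fractional seconds produces a different string, so a >= / <= pair
        // pinned to the original value no longer matches it exactly.
        string reformatted = "2021-02-03T10:40:42.518Z";
        Console.WriteLine(reformatted == enqueued); // False
    }
}
```

This is consistent with the resolution posted below: a cast applied while building the query changed the string being compared.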
{
    "MESSAGE_GROUP_ID": "24c9e3ad-4fd6-4abb-88d8-eafb9060884e",
    "TYPE": "Gauges",
    "MACHINE_ID": "TAP_20",
    "Gauges": {
        "OVERRIDE": 85.8
    },
    "EventProcessedUtcTime": "2021-02-03T10:41:48.0493615Z",
    "PartitionId": 3,
    "EventEnqueuedUtcTime": "2021-02-03T10:40:42.5180000Z",
    "IoTHub": {
        "MessageId": "498b7df3-55e6-4b3f-a18e-698fd991e526",
        "CorrelationId": null,
        "ConnectionDeviceId": "hub20",
        "ConnectionDeviceGenerationId": "637332663098221999",
        "EnqueuedTime": "2021-02-03T10:41:11.2570000Z"
    },
    "id": "498b7df3-55e6-4b3f-a18e-698fd991e526"
}
Any lead will be appreciated.
Thanks
Thank you user2911592 for sharing the resolution steps. Posting them as an answer to help other community members.
Initially the query/code was saved incorrectly because of casting; fixing that resolved the issue.
user2911592: feel free to add further details.
I'm using a query as shown below, however no matter what I try, I'm unable to insert a string variable into the query string (string variable is 'searchString'). It just won't compile.
I've tried various suggestions for inserting string variables but nothing works for me.
var searchResponse = await _elasticLowLevelClient.SearchAsync<StringResponse>("webapp-razor-*", @"
{
""from"": 0,
""size"": 10,
""query"": {
""match"": {
""_metadata.log_event_type"": {
""query"": """ + searchString + """
}
}
}
}");
The above follows the method shown in the Elastic docs.
Well, what a marathon to find the answer to this.
What we're dealing with here is knowing how to insert a variable into a verbatim string literal, which is in essence what the query body is. The following SO post helped me solve this: Adding string to verbatim string literal
Starting from the method I showed in my original question, I rewrote the query body into a single line:
var verbatimString = @"{ ""from"": 0, ""size"": 10, ""query"": { ""match"": { ""_metadata.log_event_type"": { ""query"": """ + searchString + @"""}}}}";
var searchResponse = await _elasticLowLevelClient.SearchAsync<StringResponse>("webapp-razor-*", verbatimString);
I'm now able to run the query and get the required results back. So the issue I had wasn't related to using the Elastic low-level client itself, but more a consequence of trying to use the low-level client while building up my own queries with string variables in the mix.
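For what it's worth, the same body can also be written as a single interpolated verbatim string ($@"..."), escaping the JSON braces by doubling them. A sketch with a made-up searchString value:

```csharp
using System;

class QueryBodyInterpolationDemo
{
    static void Main()
    {
        string searchString = "RequestStarted"; // example value, not from the original post

        // $@ combines interpolation with a verbatim literal: JSON braces are
        // escaped as {{ and }}, embedded quotes as "".
        string body = $@"{{ ""from"": 0, ""size"": 10, ""query"": {{ ""match"": {{ ""_metadata.log_event_type"": {{ ""query"": ""{searchString}"" }} }} }} }}";

        Console.WriteLine(body.Contains($"\"{searchString}\"")); // True
    }
}
```

As with the concatenation approach, this does no escaping of the variable itself, so it is only safe for trusted input.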
Hopefully this saves someone else ripping their hair out as I have none left!
I have the following function, using Entity Framework, that checks a db table for expired product data. The entire query seems extremely slow, and I am wondering if there is a better / more efficient way of adapting it, as there is always a bulk amount of data to clean, sometimes up to 3k items.
It is purely the selection that can take 4-10 minutes depending on db size on the day.
Datetime format example: 2021-03-31T23:59:59.000
using (ProductContext db = new ProductContext())
{
log.Info("Checking for expired data in the Product Data db");
//rule = now - configured hours, default = 2 hours in past.
var checkWindow = Convert.ToInt32(PRODUCTMAPPING_CONFIG.MinusExpiredWindowHours);
var dtCheck = Convert.ToDateTime(DateTime.Now.AddHours(-checkWindow).ToString("s"));
var rowData = db.ProductData.Where(le => Convert.ToDateTime(le.ProductEndDate.Value.ToString().Trim()) < dtCheck).ToList();
rowData.ForEach(i => {log.Debug($"DB Row ID {i.Id} with Product ID Value: {i.ProductUid} has expired with Product End Date: {i.ProductEndDate}, marked for removal."); });
log.Info($"Number of expired Products being removed: {rowData.Count()}");
db.ProductData.RemoveRange(rowData);
db.SaveChanges();
log.Info(rowData.Count == 0
? "No expired data present."
: $"Number of expired assets Successfully removed from database = {rowData.Count}");
}
Thanks in advance
EDIT:
Thanks for all the suggestions. I will be looking at the ORM comments made by Panagiotis Kanavos below regarding direct queries for this type of operation, and I have amended the column datatype based on another of his comments. Finally, the .ToList comment by jdweng removed the lag immediately, so that at least gets me moving faster for now while I look into the direct-query suggestion, which I think is probably the best way forward.
Faster code is now:
using (ProductContext db = new ProductContext())
{
log.Info("Checking for expired data in the Product Data db");
//rule = now - configured hours, default = 2 hours in past.
var checkWindow = Convert.ToInt32(PRODUCTMAPPING_CONFIG.MinusExpiredWindowHours);
var dtCheck = Convert.ToDateTime(DateTime.Now.AddHours(-checkWindow).ToString("s"));
// Ammended DB Table from nvarchar to DateTime to allow direct comparison based on comment by: Panagiotis Kanavos
// Removed ToList() which returned the IEnumerable immediately based on comment by: jdweng
var rowData = db.ProductData.Where(le => le.ProductEndDate < dtCheck);
log.Info($"Number of expired Products being removed: {rowData.Count()}");
// added print out on debug only.
if(log.IsDebugEnabled)
rowData.ToList().ForEach(i => {log.Debug($"DB Row ID {i.Id} with Product ID Value: {i.ProductUid} has expired with Product End Date: {i.ProductEndDate}, marked for removal."); });
var rowCount = rowData.Count();
db.ProductData.RemoveRange(rowData);
db.SaveChanges();
log.Info(rowCount == 0
? "No expired data present."
: $"Number of expired assets Successfully removed from database = {rowCount}");
}
So grateful to all the useful comments below and grateful for the time you all took to respond and help me learn from this.
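The "direct query" suggestion mentioned in the edit could look something like the following sketch using EF6's ExecuteSqlCommand; the table and column names are assumed from the entity names above, so treat them as placeholders:

```csharp
using System;
using System.Data.Entity; // EF6

class ExpiredCleanupSketch
{
    // Let the database delete expired rows in one statement instead of
    // materializing entities and calling RemoveRange. Returns rows affected.
    static int DeleteExpired(DbContext db, int expiredWindowHours)
    {
        DateTime cutoff = DateTime.Now.AddHours(-expiredWindowHours);
        // {0} becomes a SQL parameter, so the date is not spliced into the text.
        return db.Database.ExecuteSqlCommand(
            "DELETE FROM ProductData WHERE ProductEndDate < {0}", cutoff);
    }
}
```

This avoids loading up to 3k entities into the change tracker just to delete them.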
I'm using MongoDB.Driver for .NET to query Mongo (version 3.0.11). This is my code to query a field and limit the query to 200 documents.
BsonDocument bson = new BsonDocument();
bson.Add("Field", "Value");
BsonDocumentFilterDefinition<ResultClass> filter = new BsonDocumentFilterDefinition<ResultClass>(bson);
FindOptions queryOptions = new FindOptions() { BatchSize = 200 };
List<ResultClass> result = new List<ResultClass>();
result.AddRange(myCollection.Find<ResultClass>(filter, queryOptions).Limit(200).ToList());
My issue is that when I check the database's current operations, the operation's query field shows only:
{ Field : "Value" }
Which is different from the query using "AsQueryable" below:
List<ResultClass> result = myCollection.AsQueryable<ResultClass>().Where(t => t.Field == "Value").Take(200)
Query operation using "AsQueryable"
{ aggregate: "CollectionName", pipeline: [ { $match: { Field:
"Value" } }, { $limit: 200 } ], cursor: {} }
Why can't I see the limit in the query using Find? Is the limit being handled on the client side instead of the server?
I need to apply the limit on the server side, but I can't use the second query because the field being searched needs to be a string, which can't be done using AsQueryable.
Using Limit in the first piece of code applies the limit to a cursor object, which stays server-side until you actually request the documents by invoking ToList(); at that point only 200 documents go over the wire to your application.
It looks like AsQueryable executes an aggregation pipeline, which is what shows up in currentOp, but both are essentially the same.
I'm not sure if there is a performance impact for either one, though.
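If the goal is to have the limit travel with the find command itself, one option (a sketch, assuming the 2.x driver and a BsonDocument collection; adjust the types to your ResultClass) is to pass it via FindOptions<T> and FindAsync:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

class FindLimitSketch
{
    // Limit and BatchSize set here are part of the options the driver sends
    // with the find command, rather than a fluent call applied afterwards.
    static async Task<List<BsonDocument>> First200Async(
        IMongoCollection<BsonDocument> collection)
    {
        var filter = Builders<BsonDocument>.Filter.Eq("Field", "Value");
        var options = new FindOptions<BsonDocument> { Limit = 200, BatchSize = 200 };
        using (var cursor = await collection.FindAsync(filter, options))
        {
            return await cursor.ToListAsync();
        }
    }
}
```
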
I am trying to do a cross-partition query on Azure Cosmos DB without a partition key. The throughput is set to 4000 RU/s, so I get 250 RU/s per partition key range.
My Cosmos DB collection has about 1 million documents and is 70 GB in total. They are spread evenly across approximately 40,000 logical partitions, and the JSON documents are on average 100 KB in size. This is what the structure of my JSON documents looks like:
{
    "ArrayOfObjects": [
        {
            // other properties omitted for brevity
            "SubId": "ed2a49fb-51d4-45b4-9690-df0721d6a32f"
        },
        {
            "SubId": "35c87833-9bea-4151-86da-4d9c482ae1fe"
        }
    ],
    "ParitionKey": "b42"
}
This is how I am querying currently without a partition key:
public async Task<ResponseModel> GetBySubId(string subId)
{
var collectionId = _cosmosClient.CollectionId;
var query = $@"SELECT * FROM {collectionId} c
WHERE ARRAY_CONTAINS(c.ArrayOfObjects, {{'SubId': '{subId}'}}, true)";
var feedOptions = new FeedOptions { EnableCrossPartitionQuery = true };
var docQuery = _cosmosClient.Client.CreateDocumentQuery(
_collectionUri,
query,
feedOptions)
.AsDocumentQuery();
var results = new List<ResponseModel>();
while (docQuery.HasMoreResults)
{
var executedQuery = await docQuery.ExecuteNextAsync<ResponseModel>();
if (executedQuery.Count != 0)
{
results.AddRange(executedQuery.ToList());
}
}
if (results.Count == 0)
{
return null;
}
return results.FirstOrDefault();
}
I am expecting to be able to retrieve the document via one of the SubIds right after inserting it. What actually happens is that the query fails to find the document and returns null, even after finishing execution by draining all continuation tokens. The issue is intermittent and inconsistent: sometimes the document can be retrieved right after it is inserted, other times not.
For documents that fail to be retrieved after insertion, if you wait some time (usually a couple of minutes) and repeat the query with the same SubId, the document is then found. There seems to be a delay.
I have checked the Cosmos DB metrics in the Azure portal; they indicate that I have not exceeded the provisioned RU/s per partition at all, nor has there been any rate limiting of my requests (HTTP 429).
Given the above why am I still seeing issues with cross partition querying even when there is enough throughput provisioned?
As the title says, I have a problem writing to a local database. I generated an edmx model from it, and I can easily read from it.
EDMXNS.TOWDataBasev1Entities db = new EDMXNS.TOWDataBasev1Entities();
var query = from p in db.Accounts select p;
foreach (EDMXNS.Accounts s in query)
Console.WriteLine(s.AccountName);
That works fine. However, when I try to write to the database, nothing happens. I do not get any errors, exceptions, etc. Since I can read from the database, I figure it's not a connection problem.
Here is the code I have for writing.
EDMXNS.TOWDataBasev1Entities db = new EDMXNS.TOWDataBasev1Entities();
EDMXNS.Accounts acc = new EDMXNS.Accounts();
acc.AccountID = 1;
acc.AccountName = "testuser";
acc.AccountPW = "testpw";
acc.PersonDataID = 0;
db.AddToAccounts(acc);
db.SaveChanges();
It is worth mentioning that my Accounts.AccountID is an identity/autoincrement column, but I have tried both setting it to the next known value and not setting it at all.
Does anyone have an idea what might be causing this problem?
EDIT: I also tried removing the custom tool namespace, deleting all records in the database, and reimporting everything.
Removing the custom tool namespace results in errors like these:
Ambiguity between 'TOWServer.Accounts.AccountName' and 'TOWServer.Accounts.AccountName'
Which doesn't tell me anything.
Reimporting everything now gives me an exception:
"Unable to load the specified metadata resource"
I've always used this format when adding records with EF. Try following it:
using (MovieStoreEntities context = new MovieStoreEntities())
{
    try
    {
        context.Movies.AddObject(new Movie() { MovieID = 234,
            Title = "Sleepless Nights in Seattle", Quantity = 10 });
        context.SaveChanges();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.InnerException.Message);
    }
}
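When SaveChanges appears to do nothing, logging its return value (the number of rows written) can also help narrow things down. A hedged sketch assuming a DbContext-based model (the question's AddToAccounts call suggests an ObjectContext, whose SaveChanges likewise returns the affected count):

```csharp
using System;
using System.Data.Entity; // EF 6-style DbContext; adjust for ObjectContext models

class SaveChangesDiagnostics
{
    // Sketch only: 'db' stands in for the generated context from the question.
    static void ReportSave(DbContext db)
    {
        int written = db.SaveChanges();
        Console.WriteLine(written == 0
            ? "No rows written - check entity state and the connection string."
            : $"{written} row(s) written.");
    }
}
```

A count of zero points at the entity never being tracked (or a different database being targeted by the connection string) rather than a silent write failure.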