I have a CosmosDB instance that is using the SQL / DocumentDB interface. I am accessing it via the .NET SDK.
I have a stored procedure that I call with ExecuteStoredProcedureAsync, but I can only get a maximum of 100 documents back. I know this is the default page size. Can I change it?
The optional parameter to ExecuteStoredProcedureAsync is a RequestOptions object, and RequestOptions doesn't have properties for MaxItemCount or continuation tokens.
You need to change the stored procedure itself to adjust the number of records you'd like to return. Here is a complete example of a stored procedure with the skip/take logic implemented:
function storedProcedure(continuationToken, take) {
    var filterQuery = "SELECT * FROM ...";
    // Pass the page size and the incoming continuation token to queryDocuments
    var accept = __.queryDocuments(__.getSelfLink(), filterQuery,
        { pageSize: take, continuation: continuationToken },
        function (err, documents, responseOptions) {
            if (err) throw new Error("Error: " + err.message);
            // Return the page of documents plus the token for the next page (if any)
            __.response.setBody({
                result: documents,
                continuation: responseOptions.continuation
            });
        });
    if (!accept) throw new Error("The query was not accepted by the server.");
}
Here is the corresponding C# code:
string continuationToken = null;
int pageSize = 500;
do
{
    var r = await client.ExecuteStoredProcedureAsync<dynamic>(
        UriFactory.CreateStoredProcedureUri(DatabaseId, CollectionId, "SP_NAME"),
        new RequestOptions { PartitionKey = new PartitionKey("...") },
        continuationToken, pageSize);
    var documents = r.Response.result;
    // processing documents ...
    // 'dynamic' can easily be substituted with a class that caters to your needs
    continuationToken = r.Response.continuation;
}
while (!string.IsNullOrEmpty(continuationToken));
As you can see, there is a parameter that controls the number of records sent back: pageSize. As you've noticed, pageSize is 100 by default. If you need to return everything at once, specify -1.
"The RequestOptions doesn't have properties for MaxItemCount or continuation tokens."
MaxItemCount is a parameter on FeedOptions, which applies to queries rather than stored procedure execution.
The ExecuteStoredProcedureAsync method does not itself limit the number of entries returned; the key is that the query operation inside your stored procedure sets the maximum number of entries you want to return.
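For comparison, here is a minimal sketch of setting MaxItemCount on FeedOptions for an ordinary query with the v2 .NET SDK (the database and collection names reuse the placeholders from the code above):
var feedOptions = new FeedOptions
{
    MaxItemCount = 500, // page size per round trip; -1 lets the service decide
    EnableCrossPartitionQuery = true
};
var queryable = client.CreateDocumentQuery<dynamic>(
    UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId),
    "SELECT * FROM c", feedOptions).AsDocumentQuery();
while (queryable.HasMoreResults)
{
    // Each page contains at most MaxItemCount documents
    var page = await queryable.ExecuteNextAsync<dynamic>();
    // process page ...
}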
Please refer to the sample stored procedure code below:
function sample(prefix) {
    var collection = getContext().getCollection();
    var isAccepted = collection.queryDocuments(
        collection.getSelfLink(),
        'SELECT * FROM root r',
        { pageSize: 1000 },
        function (err, feed, options) {
            if (err) throw err;
            if (!feed || !feed.length) {
                var response = getContext().getResponse();
                response.setBody('no docs found');
            }
            else {
                var response = getContext().getResponse();
                var body = "";
                for (var i = 0; i < feed.length; i++) {
                    body += "{" + feed[i].id + "}";
                }
                response.setBody(JSON.stringify(body));
            }
        });
    if (!isAccepted) throw new Error('The query was not accepted by the server.');
}
I am running a query against my Cosmos DB instance, and I occasionally get 0 results back when I know that I should be getting some results.
var options = new QueryRequestOptions()
{
    MaxItemCount = 25
};
var query = @"
    select c.id,c.callTime,c.direction,c.action,c.result,c.duration,c.hasR,c.hasV,c.callersIndexed,c.callers,c.files
    from c
    where
        c.ownerId=@ownerId
        and c.callTime>=@dateFrom
        and c.callTime<=@dateTo
        and (CONTAINS(c.phoneNums_s, @name)
            or CONTAINS(c.names_s, @name)
            or CONTAINS(c.xNums_s, @name))
    order by c.callTime desc";
var queryIterator = container.GetItemQueryIterator<CallIndex>(new QueryDefinition(query)
    .WithParameter("@ownerId", "62371255008")
    .WithParameter("@name", "harr")
    .WithParameter("@dateFrom", dateFrom) // 5/30/2020 5:00:00 AM +00:00
    .WithParameter("@dateTo", dateTo) // 8/29/2020 4:59:59 AM +00:00
    .WithParameter("@xnum", null), requestOptions: options, continuationToken: null);
if (queryIterator.HasMoreResults)
{
    var feed = queryIterator.ReadNextAsync().Result;
    model.calls = feed.ToList(); // feed.Resource is empty; feed.Count is 0
    model.CosmosContinuationToken = feed.ContinuationToken; // feed.ContinuationToken is populated with a large token value, indicating that there are more results, even though this fetch returned 0 items
    model.TotalRecords = feed.Count(); // 0
}
As you can see, even though I received 0 results, the continuation token indicates that there is more data there after this first request. And, after visually inspecting the data directly in the database (data explorer in the Azure portal), I see records that should match, but they are not found in this query. To further test, I ran the same exact query a few seconds later, and received results:
var query = @"
    select c.id,c.callTime,c.direction,c.action,c.result,c.duration,c.hasR,c.hasV,c.callersIndexed,c.callers,c.files
    from c
    where
        c.ownerId=@ownerId
        and c.callTime>=@dateFrom
        and c.callTime<=@dateTo
        and (CONTAINS(c.phoneNums_s, @name)
            or CONTAINS(c.names_s, @name)
            or CONTAINS(c.xNums_s, @name))
    order by c.callTime desc";
var queryIterator = container.GetItemQueryIterator<CallIndex>(new QueryDefinition(query)
    .WithParameter("@ownerId", "62371255008")
    .WithParameter("@name", "harr")
    .WithParameter("@dateFrom", dateFrom) // 5/30/2020 5:00:00 AM +00:00
    .WithParameter("@dateTo", dateTo) // 8/29/2020 4:59:59 AM +00:00
    .WithParameter("@xnum", null), requestOptions: options, continuationToken: null);
if (queryIterator.HasMoreResults)
{
    var feed = queryIterator.ReadNextAsync().Result;
    model.calls = feed.ToList(); // feed.Resource has 25 items; feed.Count is 25
    model.CosmosContinuationToken = feed.ContinuationToken; // feed.ContinuationToken is populated, but it is considerably smaller than the token from the first request
    model.TotalRecords = feed.Count(); // 25
}
This is the exact query as before, but this time the feed gave me the results I expected. This has happened more than once, and continues to happen intermittently. What gives with this? Is this a bug in Azure Cosmos? If so, it seems like a serious bug that breaks the very core functionality of Cosmos (and databases in general).
Or, is this expected? Is it possible that in the first query, I need to continue to ReadNextAsync until I get some results back using the continuation token?
Any help is appreciated, as this is breaking very basic functionality in my app.
Also, I would like to add that the data returned by the query was not newly added between my first query attempt and my second query attempt. That data has been there for a while.
Your code is correct; you are expected to drain the query by checking HasMoreResults (although I would change the .Result to await to avoid a possible deadlock). What can happen in cross-partition queries is that you get an empty page if the initial partitions checked for results have none.
Sometimes queries may have empty pages even when there are results on a future page. Reasons for this could be:
The SDK could be doing multiple network calls.
The query might be taking a long time to retrieve the documents.
Reference: https://learn.microsoft.com/azure/cosmos-db/troubleshoot-query-performance#common-sdk-issues
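In other words, keep reading pages while HasMoreResults is true and treat empty pages as normal. A minimal sketch against the v3 SDK, reusing the query, options, and container from the question:
var results = new List<CallIndex>();
var iterator = container.GetItemQueryIterator<CallIndex>(
    new QueryDefinition(query), requestOptions: options);
while (iterator.HasMoreResults)
{
    // A page may legitimately be empty on cross-partition queries; keep
    // draining until HasMoreResults is false rather than stopping at the
    // first empty page.
    var page = await iterator.ReadNextAsync();
    results.AddRange(page);
}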
Alternatively, try using the code below:
Query Cosmos DB method:
public async Task<DocDbQueryResult> QueryCollectionBaseWithPagingInternalAsync(FeedOptions feedOptions, string queryString, IDictionary<string, object> queryParams, string collectionName)
{
    string continuationToken = feedOptions.RequestContinuation;
    List<JObject> documents = new List<JObject>();
    IDictionary<string, object> properties = new Dictionary<string, object>();
    int executionCount = 0;
    double requestCharge = default(double);
    double totalRequestCharge = default(double);
    do
    {
        feedOptions.RequestContinuation = continuationToken;
        var query = this.documentDbClient.CreateDocumentQuery<JObject>(
            UriFactory.CreateDocumentCollectionUri(this.databaseName, collectionName),
            new SqlQuerySpec
            {
                QueryText = queryString,
                Parameters = ToSqlQueryParameterCollection(queryParams),
            },
            feedOptions)
            .AsDocumentQuery();
        var response = await query.ExecuteNextAsync<JObject>().ConfigureAwait(false);
        documents.AddRange(response.AsEnumerable());
        executionCount++;
        requestCharge = executionCount == 1 ? response.RequestCharge : requestCharge;
        totalRequestCharge += response.RequestCharge;
        continuationToken = response.ResponseContinuation;
    }
    while (!string.IsNullOrWhiteSpace(continuationToken) && documents.Count < feedOptions.MaxItemCount);
    var pagedDocuments = documents.Take(feedOptions.MaxItemCount.Value);
    var result = new DocDbQueryResult
    {
        ResultSet = new JArray(pagedDocuments),
        TotalResults = Convert.ToInt32(pagedDocuments.Count()),
        ContinuationToken = continuationToken
    };
    // If query params are not null, pass the existing query params along as properties too.
    if (queryParams != null)
    {
        properties = queryParams;
    }
    properties.Add("TotalRequestCharge", totalRequestCharge);
    properties.Add("ExecutionCount", executionCount);
    return result;
}
ToSqlQueryParameterCollection method:
private static SqlParameterCollection ToSqlQueryParameterCollection(IDictionary<string, object> queryParams)
{
    var coll = new SqlParameterCollection();
    if (queryParams != null)
    {
        foreach (var paramKey in queryParams.Keys)
        {
            coll.Add(new SqlParameter(paramKey, queryParams[paramKey]));
        }
    }
    return coll;
}
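A possible usage sketch (the query text, parameter values, and collection name are illustrative only):
var feedOptions = new FeedOptions { MaxItemCount = 100, EnableCrossPartitionQuery = true };
var page = await QueryCollectionBaseWithPagingInternalAsync(
    feedOptions,
    "SELECT * FROM c WHERE c.ownerId = @ownerId",
    new Dictionary<string, object> { { "@ownerId", "62371255008" } },
    "myCollection");
// page.ContinuationToken can be fed back in via feedOptions.RequestContinuation for the next page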
I have a Cosmos DB stored procedure to which I am passing a list of comma-separated IDs. I need to pass those IDs into an IN query. When I pass one value to the parameter it works fine, but not with more than one value.
It would be great if anyone could help here.
Below is the code of the stored procedure:
function getData(ids) {
    var context = getContext();
    var coll = context.getCollection();
    var link = coll.getSelfLink();
    var response = context.getResponse();
    var query = {query: "SELECT * FROM c where c.vin IN (@ids)",
                 parameters: [{name: "@ids", value: ids}]};
    var requestOptions = {
        pageSize: 500
    };
    var run = coll.queryDocuments(link, query, requestOptions, callback);
    function callback(err, docs) {
        if (err) throw err;
        if (!docs || !docs.length) response.setBody(null);
        else {
            response.setBody(JSON.stringify(docs));
        }
    }
    if (!run) throw new Error('Unable to retrieve the requested information.');
}
For arrays, you should use the ARRAY_CONTAINS function:
var query = {
    query: "SELECT * FROM c where ARRAY_CONTAINS(@ids, c.vin)",
    parameters: [{name: "@ids", value: ids}]
};
Also, it is possible that, as stated in this doc, your @ids array is being sent as a string:
When defining a stored procedure in Azure portal, input parameters are always sent as a string to the stored procedure. Even if you pass an array of strings as an input, the array is converted to string and sent to the stored procedure. To work around this, you can define a function within your stored procedure to parse the string as an array
So you might need to parse it before querying:
function getData(ids) {
    // 'ids' arrives as a string when invoked from the portal, so parse it first
    var arr = JSON.parse(ids);
    // ...then use 'arr' as the @ids parameter in the ARRAY_CONTAINS query above
}
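If you call the stored procedure from the .NET SDK instead, you can pass a real array and skip the parsing. A minimal sketch, assuming the v2 SDK and hypothetical database, collection, and partition key values:
var ids = new[] { "vin1", "vin2", "vin3" }; // hypothetical VIN values
var result = await client.ExecuteStoredProcedureAsync<string>(
    UriFactory.CreateStoredProcedureUri("db", "coll", "getData"),
    new RequestOptions { PartitionKey = new PartitionKey("...") },
    new dynamic[] { ids }); // wrap the array so it arrives as a single sproc parameter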
Related:
How can I pass array as a sql query param for cosmos DB query
https://github.com/Azure/azure-cosmosdb-node/issues/156
This is how you can do it.
Inside the stored procedure:
1. Parse your one parameter into an array using the split function.
2. Loop through the array and
   a) build the parameter name/value pair and push it into the parameter array used by the query later;
   b) use the parameter name to build a string for use inside the parentheses of the IN statement.
3. Build the query definition and pass it to the collection.
Example
This is how the value of the parameter looks: "abc,def,ghi,jkl"
If you are going to use this, replace "stringProperty" with the name of the property you are querying against.
// SAMPLE STORED PROCEDURE
function spArrayTest(arrayParameter) {
    var collection = getContext().getCollection();
    var stringArray = arrayParameter.split(",");
    var qParams = [];
    var qIn = "";
    for (var i = 0; i < stringArray.length; i++) {
        var nm = '@p' + i; // parameter name
        qParams.push({name: nm, value: stringArray[i]});
        qIn += (nm + ','); // parameter name for query
    }
    qIn = qIn.substring(0, qIn.length - 1); // remove last comma
    // qIn only contains a list of the names in qParams
    var qDef = 'SELECT * from documents d where d.stringProperty in ( ' + qIn + ' )';
    console.log(qParams[0].name);
    // Query definition to be passed into the "queryDocuments" function
    var q = {
        query: qDef,
        parameters: qParams
    };
    // Query documents
    var isAccepted = collection.queryDocuments(
        collection.getSelfLink(),
        q,
        function (err, feed, options) {
            if (err) throw err;
            // Check the feed and if empty, set the body to 'no docs found',
            // else return all documents from feed
            if (!feed || !feed.length) {
                var response = getContext().getResponse();
                response.setBody('no docs found with a stringProperty in ' + arrayParameter);
            }
            else {
                var response = getContext().getResponse();
                response.setBody(feed);
            }
        });
    if (!isAccepted) throw new Error('The query was not accepted by the server.');
}
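For completeness, a sketch of invoking this sproc from the v2 .NET SDK with the comma-separated value shown above (database, collection, and partition key are hypothetical):
var response = await client.ExecuteStoredProcedureAsync<object>(
    UriFactory.CreateStoredProcedureUri("db", "coll", "spArrayTest"),
    new RequestOptions { PartitionKey = new PartitionKey("...") },
    "abc,def,ghi,jkl"); // the single string parameter the sproc splits on commas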
Please refer to my sample JS code; it works for me.
function sample(ids) {
    var collection = getContext().getCollection();
    var query = 'SELECT * FROM c where c.id IN (' + ids + ')';
    console.log(query);
    var isAccepted = collection.queryDocuments(
        collection.getSelfLink(),
        query,
        function (err, feed, options) {
            if (err) throw err;
            if (!feed || !feed.length) getContext().getResponse().setBody('no docs found');
            else {
                for (var i = 0; i < feed.length; i++) {
                    var doc = feed[i];
                    doc.name = 'a';
                    collection.replaceDocument(doc._self, doc, function(err) {
                        if (err) throw err;
                    });
                }
                getContext().getResponse().setBody(JSON.stringify("success"));
            }
        });
    if (!isAccepted) throw new Error('The query was not accepted by the server.');
}
Parameter: '1','2','3'
Hope it helps you.
I have a stored procedure which gives me a document count (count.js on GitHub). I have partitioned my collection. Because of this, I now have to pass the partition key in as an option to run the stored procedure.
Can I, and how should I, enable cross-partition queries in the stored procedure (i.e., collection(EnableCrossPartitionQuery = true)) so that I don't have to specify the partition key?
There is no way to do fan-out stored procedure execution in DocumentDB. They run against a single partition. I ran into this dilemma when trying to switch to partitioned collections and had to make some adjustments. Here are some options:
Have the query return a 1 for every record and sum/count them client-side
Rerun the stored procedure for each unique partition key. In my case, this was not as bad as it sounds since the partition key is a tenantID and I only have a dozen of those and only expect a few hundred max.
I'm not sure about this one since I haven't tried it with partitioned collections, but each query now returns the resource usage of the collection in the x-ms-resource-usage header. That header has a documentsSize sub-header. You could use that divided by the average size of your documents to get an approximate count. There may even be a count record in that header information by now.
Also, there is an x-ms-item-count header but I'm not sure how that behaves. If you send a query for all the records in the entire partitioned collection and set the max-item-count to 1, you'll only get back one record and it shouldn't cost you a lot in RUs, but I don't know how that header behaves. Does it return a 1 in that case? Or does it return the total number of documents all the pages of the query would eventually return if you bothered to request every page. A quick experiment should confirm this.
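For what it's worth, here is a sketch of that experiment with the v2 .NET SDK; whether x-ms-item-count reflects the total count is exactly what the experiment would verify:
var q = client.CreateDocumentQuery<dynamic>(
    UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId),
    "SELECT * FROM c",
    new FeedOptions { MaxItemCount = 1, EnableCrossPartitionQuery = true })
    .AsDocumentQuery();
var page = await q.ExecuteNextAsync<dynamic>();
// Inspect the raw headers that came back with this single-item page
Console.WriteLine(page.ResponseHeaders["x-ms-item-count"]);
Console.WriteLine(page.ResponseHeaders["x-ms-resource-usage"]);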
Below you can find some example code that should allow you to read all records cross-partition. The magic is inside the doForAll function, and at the top you can see how it is called.
// SAMPLE STORED PROCEDURE
function sample(prefix) {
    var share = { counter: 0, hasEntityName: 0, isXXX: 0, partitions: {}, prefix };
    doForAll({
        filter: function limiter(record) {
            if (record && record.entityName === 'XXX') return true;
            else return false;
        },
        callback: function handleRecord(record) {
            // Keep track of this partition...
            let partitionKey = record.partitionKey;
            if (share.partitions[partitionKey])
                share.partitions[partitionKey]++;
            else
                share.partitions[partitionKey] = 1;
            // update some counters...
            share.counter++;
            if (record.entityName !== undefined) share.hasEntityName++;
            if (record.entityName === 'XXX') share.isXXX++;
        },
        finaly: function whenAllIsDone() {
            console.log("counter = " + share.counter + ". ");
            console.log("has entity name: " + share.hasEntityName + ". ");
            console.log("is XXX: " + share.isXXX + ". ");
            var parts = Object.getOwnPropertyNames(share.partitions);
            console.log("partition keys: " + parts.length + " ...");
            getContext()
                .getResponse()
                .setBody(share);
        }
    });
    // The magic function...
    // also see: https://azure.github.io/azure-cosmosdb-js-server/Collection.html
    function doForAll(task, ctoken) {
        if (!task) throw "Expected one parameter of type: { filter?: (rec?)=>boolean, callback?: (rec?) => void, finaly?: () => void }";
        // Note: the "__" symbol is an alias for getContext().getCollection()
        var result = getContext()
            .getCollection()
            .chain()
            .filter(task.filter || function (rec) { return true; })
            .map(task.callback || function (rec) { return undefined; })
            .value({ continuation: ctoken }, function afterBatchCallback(err, feed, options) {
                if (err) throw err;
                if (options.continuation)
                    // Keep recursing with the continuation token until all pages are processed
                    doForAll(task, options.continuation);
                else if (task.finaly)
                    task.finaly();
            });
        if (!result.isAccepted)
            throw "catastrophic failure";
    }
}
PS: it may help to know what the data used for the example looks like.
This is an example of such a document:
{
    "id": "123",
    "partitionKey": "PART_1",
    "entityName": "EXAMPLE_ENTITY",
    "veryInterestingInfo": "The 'id' property is also the collection's id, the 'partitionKey' property happens to be the collection's partition key, and all the records in this collection have an 'entityName' property which contains a (non-unique) string"
}
I am trying to get a count of my locations inside a polygon. Here is my stored proc:
function count(poly) {
    var collection = getContext().getCollection();
    var query = {query: 'Select f.id from f WHERE ST_WITHIN(f.location, @poly)',
                 parameters: [{name: '@poly', value: poly}]};
    var isAccepted = collection.queryDocuments(
        collection.getSelfLink(),
        query,
        function (err, docs, options) {
            if (err) throw err;
            if (!docs || !docs.length) getContext().getResponse().setBody('no docs found');
            else getContext().getResponse().setBody(docs.length);
        });
    if (!isAccepted) throw new Error('The query was not accepted by the server.');
}
When I execute the same query in the Query Explorer I get results, but through the stored procedure it returns "no docs found". It does return results for simpler queries, but even for those the max count returned is always 100. Not sure what I am doing wrong.
Thanks in advance.
P.S.: I tried using ST_DISTANCE for these coordinates. It did return a count of 100 (the max value), but it is not working at all for ST_WITHIN.
Edit:
It was not working, so I tried the approach described in the official example for counting results. And voila, it worked! So I moved on to the next step: counting the locations in all the polygons I had, since doing it locally meant too many round trips to get the count for each polygon. But calling the same function from a loop doesn't return anything. I have already tested each query of the array in DocumentDB Studio and it does return results. Please help! The code for the new procedure is:
function countABC(filterQueryArray) {
    var results = [];
    for (i = 0; i < filterQueryArray.length; i++) {
        countnew(filterQueryArray[i].QueryString, "");
    }
    getContext().getResponse().setBody(results);
    function countnew(filterQuery, continuationToken) {
        var collection = getContext().getCollection();
        var maxResult = 50000;
        var result = 0;
        tryQuery(continuationToken);
        function tryQuery(nextContinuationToken) {
            var responseOptions = {
                continuation: nextContinuationToken,
                pageSize: maxResult
            };
            if (result >= maxResult || !query(responseOptions)) {
                setBody(nextContinuationToken);
            }
        }
        function query(responseOptions) {
            return (filterQuery && filterQuery.length) ?
                collection.queryDocuments(collection.getSelfLink(), filterQuery, responseOptions, onReadDocuments) :
                collection.readDocuments(collection.getSelfLink(), responseOptions, onReadDocuments);
        }
        function onReadDocuments(err, docFeed, responseOptions) {
            if (err) {
                throw 'Error while reading document: ' + err;
            }
            result += docFeed.length;
            if (responseOptions.continuation) {
                tryQuery(responseOptions.continuation);
            } else {
                setBody(null);
            }
        }
        function setBody(continuationToken) {
            var body = {
                count: result,
                continuationToken: continuationToken
            };
            results.push(body);
        }
    }
}
With the new sproc, it's not helpful to set the result after the loop, because at that point no queries have executed yet (the results array will be empty). The idea is that all CRUD/query calls are queued and executed after the script that queued them has finished (in this case, the main script).
Setting the result/body needs to be done from the callback. This is partially done already, but there is an issue: for every call of countnew, the "result" variable is reset to 0. Essentially, "var result = 0" needs to be done in the main script.
Also, it's not recommended to use loops like the "for" loop that issue CRUD/queries without waiting for the previous CRUD/query to finish (due to their async nature); otherwise checking isAccepted is not reliable. What's recommended is to serialize this loop, something like this:
var result = 0;
step();

function step() {
    if (filterQueryArray.length == 0) setBody(null);
    else {
        var query = filterQueryArray.shift();
        // Process the current query; from its callback, call step() again so the
        // queries run one at a time (continuation handling omitted, see countnew above).
        var isAccepted = __.queryDocuments(__.getSelfLink(), query.QueryString,
            function (err, docs) {
                if (err) throw err;
                result += docs.length;
                step();
            });
        if (!isAccepted) setBody(null);
    }
}
Does this make sense?
In Azure Table storage, how can I query for a set of entities that match specific row keys in a partition?
I'm using Azure table storage and need to retrieve a set of entities that match a set of row keys within the partition.
Basically if this were SQL it may look something like this:
SELECT TOP 1 SomeKey
FROM TableName WHERE SomeKey IN (1, 2, 3, 4, 5);
I figured that, to save on costs and avoid doing a bunch of individual table retrieve operations, I could just do it using a table batch operation. For some reason I'm getting an exception that says:
"A batch transaction with a retrieve operation cannot contain any other operations"
Here is my code:
public async Task<IList<GalleryPhoto>> GetDomainEntitiesAsync(int someId, IList<Guid> entityIds)
{
    try
    {
        var client = _storageAccount.CreateCloudTableClient();
        var table = client.GetTableReference("SomeTable");
        var batchOperation = new TableBatchOperation();
        var counter = 0;
        var myDomainEntities = new List<MyDomainEntity>();
        foreach (var id in entityIds)
        {
            if (counter < 100)
            {
                batchOperation.Add(TableOperation.Retrieve<MyDomainEntityTableEntity>(someId.ToString(CultureInfo.InvariantCulture), id.ToString()));
                ++counter;
            }
            else
            {
                var batchResults = await table.ExecuteBatchAsync(batchOperation);
                var batchResultEntities = batchResults.Select(o => ((MyDomainEntityTableEntity)o.Result).ToMyDomainEntity()).ToList();
                myDomainEntities.AddRange(batchResultEntities);
                batchOperation.Clear();
                counter = 0;
            }
        }
        return myDomainEntities;
    }
    catch (Exception ex)
    {
        _logger.Error(ex);
        throw;
    }
}
How can I achieve what I'm after without manually looping through the set of row keys and doing an individual Retrieve table operation for each one? I don't want to incur the cost associated with doing this since I could have hundreds of row keys that I want to filter on.
I made a helper method to do it in a single request per partition.
Use it like this:
var items = table.RetrieveMany<MyDomainEntity>(partitionKey, nameof(TableEntity.RowKey),
    rowKeysList, columnsToSelect);
Here are the helper methods:
public static List<T> RetrieveMany<T>(this CloudTable table, string partitionKey,
    string propertyName, IEnumerable<string> valuesRange,
    List<string> columnsToSelect = null)
    where T : TableEntity, new()
{
    var entities = table.ExecuteQuery(new TableQuery<T>()
        .Where(TableQuery.CombineFilters(
            TableQuery.GenerateFilterCondition(
                nameof(TableEntity.PartitionKey),
                QueryComparisons.Equal,
                partitionKey),
            TableOperators.And,
            GenerateIsInRangeFilter(
                propertyName,
                valuesRange)
            ))
        .Select(columnsToSelect))
        .ToList();
    return entities;
}

public static string GenerateIsInRangeFilter(string propertyName,
    IEnumerable<string> valuesRange)
{
    // Builds "prop eq v1 or prop eq v2 or ..." by OR-ing together equality filters
    string finalFilter = valuesRange.NotNull(nameof(valuesRange))
        .Distinct()
        .Aggregate((string)null, (filterSeed, value) =>
        {
            string equalsFilter = TableQuery.GenerateFilterCondition(
                propertyName,
                QueryComparisons.Equal,
                value);
            return filterSeed == null ?
                equalsFilter :
                TableQuery.CombineFilters(filterSeed,
                    TableOperators.Or,
                    equalsFilter);
        });
    return finalFilter ?? "";
}
I have tested it for fewer than 100 values in rowKeysList; if it throws an exception for more, we can always split the request into parts.
With hundreds of row keys, that rules out using $filter with a list of row keys (which would result in a partial partition scan anyway).
With the error you're getting, it seems like the batch contains both queries and other types of operations (which isn't permitted). I don't see why you're getting that error, from your code snippet.
Your only other option is to execute individual queries. You can do these asynchronously though, so you wouldn't have to wait for each to return. Table storage provides upwards of 2,000 transactions / sec on a given partition, so it's a viable solution.
Not sure how I missed this in the first place, but here is a snippet from the MSDN documentation for the TableBatchOperation type:
A batch operation may contain up to 100 individual table operations, with the requirement that each operation entity must have same partition key. A batch with a retrieve operation cannot contain any other operations. Note that the total payload of a batch operation is limited to 4MB.
I ended up executing individual retrieve operations asynchronously as suggested by David Makogon.
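A minimal sketch of that approach, reusing the table and entity types from the question (the retrieves are issued concurrently and awaited together):
public async Task<IList<MyDomainEntity>> GetDomainEntitiesAsync(int someId, IList<Guid> entityIds)
{
    var client = _storageAccount.CreateCloudTableClient();
    var table = client.GetTableReference("SomeTable");
    var partitionKey = someId.ToString(CultureInfo.InvariantCulture);
    // Issue one retrieve per row key, concurrently, instead of a batch
    var tasks = entityIds
        .Select(id => table.ExecuteAsync(
            TableOperation.Retrieve<MyDomainEntityTableEntity>(partitionKey, id.ToString())))
        .ToList();
    var results = await Task.WhenAll(tasks);
    return results
        .Where(r => r.Result != null)
        .Select(r => ((MyDomainEntityTableEntity)r.Result).ToMyDomainEntity())
        .ToList();
}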
I made my own ghetto link-table. I know it's not that efficient (maybe it's fine), but I only make this request if the data is not cached locally, which only happens when switching devices. Anyway, this seems to work. Checking the length of the two arrays lets me defer context.done().
var query = new azure.TableQuery()
    .top(1000)
    .where('PartitionKey eq ?', 'link-' + req.query.email.toLowerCase());
tableSvc.queryEntities('linkUserMarker', query, null, function(error, result, response) {
    if (!error && result) {
        var markers = [];
        result.entries.forEach(function(e) {
            tableSvc.retrieveEntity('markerTable', e.markerPartition._, e.RowKey._.toString(), function(error, marker, response) {
                markers.push(marker);
                // Defer context.done() until every retrieve has come back
                if (markers.length == result.entries.length) {
                    context.res = {
                        status: 200,
                        body: {
                            status: 'error',
                            markers: markers
                        }
                    };
                    context.done();
                }
            });
        });
    } else {
        notFound(error);
    }
});
I saw your post when I was looking for a solution; in my case I needed to look up multiple IDs at the same time.
Because there is no Contains LINQ support (https://learn.microsoft.com/en-us/rest/api/storageservices/query-operators-supported-for-the-table-service), I just made a massive or-equals chain.
It seems to be working for me so far; hope it helps anyone.
public async Task<ResponseModel<ICollection<TAppModel>>> ExecuteAsync(
    ICollection<Guid> ids,
    CancellationToken cancellationToken = default
)
{
    if (!ids.Any())
        throw new ArgumentOutOfRangeException();
    // https://learn.microsoft.com/en-us/rest/api/storageservices/query-operators-supported-for-the-table-service
    // Contains is not supported, so make a massive or-equals statement...lol
    var item = Expression.Parameter(typeof(TTableModel), typeof(TTableModel).FullName);
    var expressions = ids
        .Select(
            id => Expression.Equal(
                Expression.Constant(id.ToString()),
                Expression.MakeMemberAccess(
                    item, // reuse the lambda's parameter; a fresh Parameter here would fail at runtime
                    typeof(TTableModel).GetProperty(nameof(ITableEntity.RowKey))
                )
            )
        )
        .ToList();
    var builderExpression = expressions.First();
    builderExpression = expressions
        .Skip(1)
        .Aggregate(
            builderExpression,
            Expression.Or
        );
    var finalExpression = Expression.Lambda<Func<TTableModel, bool>>(builderExpression, item);
    var result = await _azureTableService.FindAsync(
        finalExpression,
        cancellationToken
    );
    return new(
        result.Data?.Select(_ => _mapper.Map<TAppModel>(_)).ToList(),
        result.Succeeded,
        result.User,
        result.Messages.ToArray()
    );
}
public async Task<ResponseModel<ICollection<TTableEntity>>> FindAsync(
    Expression<Func<TTableEntity, bool>> filter,
    CancellationToken ct = default
)
{
    try
    {
        var queryResultsFilter = _tableClient.QueryAsync<TTableEntity>(
            FilterExpressionTree(filter),
            cancellationToken: ct
        );
        var items = new List<TTableEntity>();
        await foreach (TTableEntity qEntity in queryResultsFilter)
            items.Add(qEntity);
        return new ResponseModel<ICollection<TTableEntity>>(items);
    }
    catch (Exception exception)
    {
        _logger.Error(
            nameof(FindAsync),
            exception,
            exception.Message
        );
        // OBFUSCATE
        // TODO: PASS ERROR ID
        throw new Exception();
    }
}