Executing ST_WITHIN using a stored procedure in DocumentDB - C#

I am trying to get a count of my locations inside a polygon. Here is my stored procedure:
function count(poly) {
    var collection = getContext().getCollection();
    var query = {
        query: 'SELECT f.id FROM f WHERE ST_WITHIN(f.location, @poly)',
        parameters: [{ name: '@poly', value: poly }]
    };
    var isAccepted = collection.queryDocuments(
        collection.getSelfLink(),
        query,
        function (err, docs, options) {
            if (err) throw err;
            if (!docs || !docs.length) getContext().getResponse().setBody('no docs found');
            else getContext().getResponse().setBody(docs.length);
        });
    if (!isAccepted) throw new Error('The query was not accepted by the server.');
}
When I execute the same query in the Query Explorer, I get results, but through the stored procedure it returns "no docs found". It does return results for simpler queries, but even for those the maximum count returned is always 100. I'm not sure what I am doing wrong.
Thanks in advance.
P.S.: I tried using ST_DISTANCE for these coordinates. It did return a count of 100 (the max value), but it is not working at all for ST_WITHIN.
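For reference, a sproc like this would be called from the .NET SDK roughly as follows; the database, collection, and polygon coordinates below are placeholders, not values from the original post:

// Invoke the sproc with a GeoJSON polygon as its parameter (names are placeholders)
var sprocUri = UriFactory.CreateStoredProcedureUri("myDatabase", "myCollection", "count");
var poly = new
{
    type = "Polygon",
    // GeoJSON uses [longitude, latitude]; the ring must be closed
    // (first and last positions identical)
    coordinates = new[] { new[]
    {
        new[] { 31.8, -5.0 },
        new[] { 32.0, -5.0 },
        new[] { 32.0, -4.7 },
        new[] { 31.8, -4.7 },
        new[] { 31.8, -5.0 }
    }}
};
var response = await client.ExecuteStoredProcedureAsync<dynamic>(sprocUri, poly);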
Edit:
It was not working, so I tried the approach described in the official example for counting results. And voila, it worked! So I moved on to the next step: counting the locations in all of my polygons, since doing it locally meant too many round trips to get the count for each polygon. But calling the same function from a loop doesn't return anything. I have already tested each query in the array in DocumentDB Studio and it does return results. Please help! The code for the new procedure is:
function countABC(filterQueryArray) {
    var results = [];
    for (i = 0; i < filterQueryArray.length; i++) {
        countnew(filterQueryArray[i].QueryString, "");
    }
    getContext().getResponse().setBody(results);

    function countnew(filterQuery, continuationToken) {
        var collection = getContext().getCollection();
        var maxResult = 50000;
        var result = 0;
        tryQuery(continuationToken);

        function tryQuery(nextContinuationToken) {
            var responseOptions = {
                continuation: nextContinuationToken,
                pageSize: maxResult
            };
            if (result >= maxResult || !query(responseOptions)) {
                setBody(nextContinuationToken);
            }
        }

        function query(responseOptions) {
            return (filterQuery && filterQuery.length) ?
                collection.queryDocuments(collection.getSelfLink(), filterQuery, responseOptions, onReadDocuments) :
                collection.readDocuments(collection.getSelfLink(), responseOptions, onReadDocuments);
        }

        function onReadDocuments(err, docFeed, responseOptions) {
            if (err) {
                throw 'Error while reading document: ' + err;
            }
            result += docFeed.length;
            if (responseOptions.continuation) {
                tryQuery(responseOptions.continuation);
            } else {
                setBody(null);
            }
        }

        function setBody(continuationToken) {
            var body = {
                count: result,
                continuationToken: continuationToken
            };
            results.push(body);
        }
    }
}

With the new sproc, it's not helpful to set the result after the loop, because at that point no queries have been executed (the results array will be empty). All CRUD/query calls are queued and only run after the script that queued them (in this case the main script) has finished.
Setting the result/body needs to be done from the callback. This is partially done already, but there is an issue: on every call of countnew, the "result" variable is reset to 0. Essentially, "var result = 0" needs to be done in the main script.
Also, it's not recommended to issue CRUD/query calls from a loop like that "for" loop without waiting for the previous call to finish (due to their async nature); otherwise checking isAccepted is not reliable. What's recommended is to serialize the loop, something like this:
var result = 0;
step();

function step() {
    if (filterQueryArray.length == 0) setBody(null);
    else {
        var query = filterQueryArray.shift();
        // process current query. In the end (from the callback), call step() again.
    }
}
Does this make sense?
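On the client side, a sproc that returns { count, continuationToken } in its body can then be re-invoked until the token comes back empty. A rough sketch (the result type, client, and sproc URI are assumed, not from the original post):

// Shape of the sproc's response body (assumed)
public class CountResult
{
    public long count { get; set; }
    public string continuationToken { get; set; }
}

string token = null;
long total = 0;
do
{
    // re-invoke the sproc, passing back the token it returned last time
    var response = await client.ExecuteStoredProcedureAsync<CountResult>(sprocUri, filterQuery, token);
    total += response.Response.count;
    token = response.Response.continuationToken;
} while (!string.IsNullOrEmpty(token));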

Related

Creating a sequence -- SetOnInsert appears to do nothing

I'm having a problem with what boils down to incrementing a field in a document, or inserting an entire document if it doesn't exist. The context is trying to insert an initial document for a sequence, or incrementing the sequence number of an existing sequence.
This code:
private async Task<int> GetSequenceNumber(string sequenceName)
{
    var filter = new ExpressionFilterDefinition<Sequence>(x => x.Id == sequenceName);
    var builder = Builders<Sequence>.Update;
    var update = builder
        .SetOnInsert(x => x.CurrentValue, 1000)
        .Inc(x => x.CurrentValue, 1);
    var sequence = await _context.SequenceNumbers.FindOneAndUpdateAsync(
        filter,
        update,
        new FindOneAndUpdateOptions<Sequence>
        {
            IsUpsert = true,
            ReturnDocument = ReturnDocument.After,
        });
    return sequence.CurrentValue;
}
results in the exception
MongoDB.Driver.MongoCommandException: Command findAndModify failed: Updating the path 'currentvalue' would create a conflict at 'currentvalue'.
at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol`1.ProcessResponse(ConnectionId connectionId, CommandMessage responseMessage)
Removing the SetOnInsert results in no errors, but inserts a document with the currentValue equal to 1 instead of the expected 1000.
It almost appears as if SetOnInsert is not being honored, and that what's happening is a default document is inserted and then currentValue is incremented via Inc atomically as the new document is created.
How do I overcome these issues? A non-C# solution would also be welcome, as I could translate that...
Ok, thanks to @dododo in the comments, I now realize that an Inc and a SetOnInsert can't both be applied to the same field at the same time. It's unintuitive, because you'd think the former would apply on update only and the latter on insert only.
I went with the solution below, which suffers more than one round trip, but at least it works, and it appears to hold up under my concurrency-based tests.
private const int DuplicateKeyCode = 11000; // MongoDB "duplicate key" write error code

public async Task<int> GetSequenceNumber(string sequenceName, int tryCount)
{
    if (tryCount > 5) throw new InvalidOperationException();
    var filter = new ExpressionFilterDefinition<Sequence>(x => x.Id == sequenceName);
    var builder = Builders<Sequence>.Update;
    // optimistically assume the value was already initialized
    var update = builder.Inc(x => x.CurrentValue, 1);
    var sequence = await _context.SequenceNumbers.FindOneAndUpdateAsync(
        filter,
        update,
        new FindOneAndUpdateOptions<Sequence>
        {
            // no upsert here: if the sequence doesn't exist yet, we want null back
            // so we fall through to the explicit insert below
            IsUpsert = false,
            ReturnDocument = ReturnDocument.After,
        });
    if (sequence == null)
        try
        {
            // we have to try to save a new sequence...
            sequence = new Sequence { Id = sequenceName, CurrentValue = 1001 };
            await _context.SequenceNumbers.InsertOneAsync(sequence);
        }
        // ...but something else could beat us to it
        catch (MongoWriteException e) when (e.WriteError.Code == DuplicateKeyCode)
        {
            // ...so we have to retry the update
            return await GetSequenceNumber(sequenceName, tryCount + 1);
        }
    return sequence.CurrentValue;
}
I'm sure there are other options. It may be possible to use an aggregation pipeline, for example.
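For instance, here is a single-round-trip variant using an aggregation-pipeline update. This is a sketch, untested: it requires MongoDB 4.2+ and a correspondingly recent driver, and the field name is taken from the exception message above ('currentvalue'); adjust it to your actual mapped element name.

// $ifNull seeds the missing field with 1000 on insert, $add then increments it;
// because this is a single expression, there is no Inc/SetOnInsert conflict.
var stage = new BsonDocument("$set", new BsonDocument("currentvalue",
    new BsonDocument("$add", new BsonArray
    {
        new BsonDocument("$ifNull", new BsonArray { "$currentvalue", 1000 }),
        1
    })));
var pipelineUpdate = Builders<Sequence>.Update.Pipeline(
    PipelineDefinition<Sequence, Sequence>.Create(new[] { stage }));

var sequence = await _context.SequenceNumbers.FindOneAndUpdateAsync(
    new ExpressionFilterDefinition<Sequence>(x => x.Id == sequenceName),
    pipelineUpdate,
    new FindOneAndUpdateOptions<Sequence> { IsUpsert = true, ReturnDocument = ReturnDocument.After });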

How to delete elements from an SQL database based on a date using C# .NET

I have this service that works to delete one (1) row from the database (sorry for any lingo errors):
public bool DeleteSchedulesFromDate(DateTime objDateTime)
{
    var result = _db.Schedules.FirstOrDefault(x => x.AppointmentDateEnd <= objDateTime);
    if (result != null)
    {
        _db.Schedules.Remove(result);
        _db.SaveChanges();
    }
    else
    {
        return false;
    }
    return true;
}
This is the calling function:
private void DeleteSchedules(string dtEnd)
{
    deleteScheduleDate = dtEnd;
    DateTime _dtEnd;
    if (DateTime.TryParse(dtEnd, out _dtEnd))
    {
        var result = Service.DeleteSchedulesFromDate(_dtEnd);
        schedules.Clear();
        schedules = Service.GetSchedules();
        if (result)
        {
            this.ShouldRender();
        }
    }
}
But how do I change it to delete all rows that match the passed DateTime object?
I have tried:
- changing it to a List, but then the bool doesn't work.
- putting a loop in the service, but I can't make it run correctly.
- putting a loop in the calling function, but I can't make that work either.
- googling and looking up other posts on SO, but found no match.
Instead of searching for the first match with FirstOrDefault, you should fetch all matching rows into a List (Where + ToList) and delete all of them (RemoveRange):
var result = _db.Schedules.Where(x => x.AppointmentDateEnd <= objDateTime).ToList();
if (result.Any())
{
    _db.Schedules.RemoveRange(result);
    _db.SaveChanges();
}
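As a side note: on EF Core 7 or later, the same delete can be done in a single SQL statement without loading the rows into memory first. A rough sketch of the service method rewritten that way (ExecuteDelete returns the number of rows removed):

public bool DeleteSchedulesFromDate(DateTime objDateTime)
{
    // EF Core 7+: translates to a single DELETE ... WHERE statement
    var deleted = _db.Schedules
        .Where(x => x.AppointmentDateEnd <= objDateTime)
        .ExecuteDelete();
    return deleted > 0;
}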

Getting more than 100 documents back from ExecuteStoredProcedureAsync

I have a CosmosDB instance that is using the SQL / DocumentDB interface. I am accessing it via the .NET SDK.
I have a stored procedure that I call with ExecuteStoredProcedureAsync, but I can only get a maximum of 100 documents back. I know this is the default option. Can I change it?
The optional parameter to ExecuteStoredProcedureAsync is a RequestOptions object. The RequestOptions doesn't have properties for MaxItemCount or continuation tokens.
You need to change the SP itself to adjust the number of records you'd like to return. Here is a complete example with skip/take logic implemented in the SP:
function storedProcedure(continuationToken, take) {
    var filterQuery = "SELECT * FROM ...";
    var accept = __.queryDocuments(__.getSelfLink(), filterQuery,
        { pageSize: take, continuation: continuationToken },
        function (err, documents, responseOptions) {
            if (err) throw new Error("Error" + err.message);
            __.response.setBody({
                result: documents,
                continuation: responseOptions.continuation
            });
        });
}
Here is the corresponding C# code:
string continuationToken = null;
int pageSize = 500;
do
{
    var r = await client.ExecuteStoredProcedureAsync<dynamic>(
        UriFactory.CreateStoredProcedureUri(DatabaseId, CollectionId, "SP_NAME"),
        new RequestOptions { PartitionKey = new PartitionKey("...") },
        continuationToken, pageSize);
    var documents = r.Response.result;
    // processing documents ...
    // 'dynamic' could easily be substituted with a class that caters to your needs
    continuationToken = r.Response.continuation;
}
while (!string.IsNullOrEmpty(continuationToken));
As you can see, there is a parameter that controls the number of records to send back: pageSize. As you've noticed, pageSize is 100 by default. If you need to return everything at once, specify -1.
"The RequestOptions doesn't have properties for MaxItemCount or continuation tokens."
MaxItemCount is a parameter in FeedOptions, which applies to queries rather than to stored procedure execution.
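For example, MaxItemCount takes effect when you run a query through CreateDocumentQuery; a minimal sketch (client and collectionUri are assumed to exist):

// FeedOptions applies to queries, not to ExecuteStoredProcedureAsync
var feedOptions = new FeedOptions { MaxItemCount = 500, EnableCrossPartitionQuery = true };
var query = client.CreateDocumentQuery<dynamic>(collectionUri, "SELECT * FROM c", feedOptions)
    .AsDocumentQuery();
while (query.HasMoreResults)
{
    // each page contains up to MaxItemCount documents
    var page = await query.ExecuteNextAsync<dynamic>();
}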
The ExecuteStoredProcedureAsync method does not itself limit the number of returned entries; the key is that the query operation inside your stored procedure sets the maximum number of entries you want to return.
Please refer to the sample stored procedure code below:
function sample(prefix) {
    var collection = getContext().getCollection();
    var isAccepted = collection.queryDocuments(
        collection.getSelfLink(),
        'SELECT * FROM root r',
        { pageSize: 1000 },
        function (err, feed, options) {
            if (err) throw err;
            if (!feed || !feed.length) {
                var response = getContext().getResponse();
                response.setBody('no docs found');
            }
            else {
                var response = getContext().getResponse();
                var body = "";
                for (var i = 0; i < feed.length; i++) {
                    body += "{" + feed[i].id + "}";
                }
                response.setBody(JSON.stringify(body));
            }
        });
    if (!isAccepted) throw new Error('The query was not accepted by the server.');
}

DocumentDB stored proc cross-partition query

I have a stored procedure which gives me a document count (count.js on GitHub). I have partitioned my collection. Because of this, I now have to pass the partition key in as an option to run the stored procedure.
Can I, and how should I, enable cross-partition queries in the stored procedure (i.e., collection(EnableCrossPartitionQuery = true)) so that I don't have to specify the partition key?
There is no way to do fan-out stored procedure execution in DocumentDB; stored procedures run against a single partition. I ran into this dilemma when trying to switch to partitioned collections and had to make some adjustments. Here are some options:
- Query a 1 for every record and sum/count them client-side (see the sketch after this list).
- Rerun the stored procedure for each unique partition key. In my case, this is not as bad as it sounds, since the partition key is a tenantID and I only have a dozen of those, with only a few hundred expected max.
- I'm not sure about this one since I haven't tried it with partitioned collections, but each query now returns the resource usage of the collection in the x-ms-resource-usage header. That header has a documentsSize sub-header. You could divide that by the average size of your documents to get an approximate count. There may even be a count record in that header information by now.
- Also, there is an x-ms-item-count header, but I'm not sure how it behaves. If you send a query for all the records in the entire partitioned collection and set max-item-count to 1, you'll only get back one record and it shouldn't cost you a lot in RUs; but does that header return 1 in that case, or the total number of documents all the pages of the query would eventually return if you requested every page? A quick experiment should confirm this.
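A minimal sketch of the first option, assuming the DocumentDB SDK (client and collectionUri are placeholders):

// Project a cheap 1 per document across all partitions and sum client-side
var feedOptions = new FeedOptions { EnableCrossPartitionQuery = true, MaxItemCount = -1 };
var count = client.CreateDocumentQuery<int>(collectionUri, "SELECT VALUE 1 FROM c", feedOptions)
    .AsEnumerable()
    .Sum();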
Below you can find some example code that should allow you to read all records cross-partition. The magic is inside the doForAll function, and at the top you can see how it is called.
// SAMPLE STORED PROCEDURE
function sample(prefix) {
    var share = { counter: 0, hasEntityName: 0, isXXX: 0, partitions: {}, prefix };

    doForAll({
        filter: function limiter(record) {
            if (record && record.entityName === 'XXX') return true;
            else return false;
        },
        callback: function handleRecord(record) {
            // Keep track of this partition...
            let partitionKey = record.partitionKey;
            if (share.partitions[partitionKey])
                share.partitions[partitionKey]++;
            else
                share.partitions[partitionKey] = 1;

            // update some counters...
            share.counter++;
            if (record.entityName !== undefined) share.hasEntityName++;
            if (record.entityName === 'XXX') share.isXXX++;
        },
        finaly: function whenAllIsDone() {
            console.log("counter = " + share.counter + ". ");
            console.log("has entity name: " + share.hasEntityName + ". ");
            console.log("is XXX: " + share.isXXX + ". ");

            var parts = Object.getOwnPropertyNames(share.partitions);
            console.log("partition keys: " + parts.length + " ...");

            getContext()
                .getResponse()
                .setBody(share);
        }
    });

    // The magic function...
    // also see: https://azure.github.io/azure-cosmosdb-js-server/Collection.html
    function doForAll(task, ctoken) {
        if (!task) throw "Expected one parameter of type: { filter?: (rec?)=>boolean, callback?: (rec?) => void, finaly?: () => void }";

        // Note: the "__" symbol is an alias for getContext().getCollection()
        var result = getContext()
            .getCollection()
            .chain()
            .filter(task.filter || function (rec) { return true; })
            .map(task.callback || function (rec) { return undefined; })
            .value({ continuation: ctoken }, function afterBatchCallback(err, feed, options) {
                if (err) throw err;
                if (options.continuation)
                    doForAll(task, options.continuation);
                else if (task.finaly)
                    task.finaly();
            });

        if (!result.isAccepted)
            throw "catastrophic failure";
    }
}
PS: it may help to know what the data used for the example looks like. This is an example of such a document:
{
    "id": "123",
    "partitionKey": "PART_1",
    "entityName": "EXAMPLE_ENTITY",
    "veryInterestingInfo": "The 'id' property is also the collection's id, the 'partitionKey' property happens to be the collection's partition key, and all the records in this collection have an 'entityName' property which contains a (non-unique) string"
}

Function evaluation timed out when dictionary is opened at debug time

foreach (var distinctPart in distinctParts)
{
    var list = partlist.Where(part =>
    {
        if (part.PartNumber.Equals(distinctPart))
            return true;
        return false;
    }).Select(part =>
    {
        return part.Number;
    }).Distinct();
    int quantity = list.Count();
    hwList[distinctPart] = quantity;
}
When I'm debugging and expand the hwList dictionary, I get the error message:
Function evaluation disabled because a previous function evaluation timed out. You must continue execution to re-enable function evaluation.
Why so complicated?
Perhaps you can already solve the problem by simplifying the code, like so:
foreach (var distinctPart in distinctParts)
{
    var count = partlist.Where(part => part.PartNumber.Equals(distinctPart))
                        .Select(part => part.Number)
                        .Distinct()
                        .Count();
    hwList[distinctPart] = count;
}
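Taken one step further, the whole loop collapses into a single expression; a sketch, assuming distinctParts is a collection of strings and hwList a Dictionary<string, int>:

// Build the dictionary in one pass instead of assigning in a loop
var hwList = distinctParts.ToDictionary(
    distinctPart => distinctPart,
    distinctPart => partlist.Where(part => part.PartNumber.Equals(distinctPart))
                            .Select(part => part.Number)
                            .Distinct()
                            .Count());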
BTW, do you have a property called PartNumber and another called Number, both defined on Part?
