Function evaluation timed out when dictionary is opened at debug time - C#

foreach (var distinctPart in distinctParts)
{
    var list = partlist.Where(part =>
    {
        if (part.PartNumber.Equals(distinctPart))
            return true;
        return false;
    }).Select(part =>
    {
        return part.Number;
    }).Distinct();

    int quantity = list.Count();
    hwList[distinctPart] = quantity;
}
When I'm debugging and open the hwList dictionary, I get the error message:
Function evaluation disabled because a previous function evaluation timed out. You must continue execution to re-enable function evaluation.

Why so complicated?
Perhaps you can already solve the problem by simplifying this code, like so:
foreach (var distinctPart in distinctParts)
{
    var count = partlist.Where(part => part.PartNumber.Equals(distinctPart))
                        .Select(part => part.Number)
                        .Distinct()
                        .Count();
    hwList[distinctPart] = count;
}
BTW, do you really have one property called PartNumber and another called Number, both defined on Part?
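If distinctParts is just the set of distinct PartNumber values from partlist, the whole dictionary can be built in one pass instead of one query per part. A minimal sketch, reusing the names from the question:

// One pass over partlist; yields the same key/count pairs the loop above produces.
var counts = partlist
    .GroupBy(part => part.PartNumber)
    .ToDictionary(
        g => g.Key,
        g => g.Select(part => part.Number).Distinct().Count());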

Fastest equivalent of comparing all elements of array

What is the fastest equivalent in C#/LINQ to compare all combinations of elements of an array as shown below and add them to a bucket if they are not already in a bucket? In other words, how could I optimize this piece of code in C#?
// pseudocode
List<T> elements = { .... };
HashSet<T> bucket = {};
foreach (T element in elements)
{
    foreach (var myelement in elements.Where(e => e.id != element.id))
    {
        if (!element.notInTheList)
        {
            _elementIsTheSame = element.Equals(myelement);
            if (_elementIsTheSame)
            {
                // append element to a bucket
                if (!elementIsInTheBucket(bucket, element))
                {
                    element.notInTheList = true;
                    addToBucket(bucket, element);
                }
            }
        }
    }
}
// takes about 150ms on a fast workstation with only 300 elements in the LIST!
The final order of the elements in the bucket is important
elements.GroupBy(x=>x).SelectMany(x=>x);
https://dotnetfiddle.net/yZ9JDp
This works because GroupBy preserves order.
Note that this puts the first element of each equivalence class first. Your code puts the first element last, and skips classes with just a single element.
Skipping the classes with just a single element can be done with a where before the SelectMany.
elements.GroupBy(x=>x).Where(x=>x.Skip(1).Any()).SelectMany(x=>x);
Getting the first element last is a bit more tricky, but I suspect it's a bug in your code so I will not try to write it out.
Depending on how you use the result you might want to throw a ToList() at the end.
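As a tiny illustration with made-up data (my own example, assuming the elements compare by value):

var elements = new[] { "b", "a", "b", "c", "a", "b" };
// GroupBy keeps groups in order of first appearance, so this yields: b, b, b, a, a, c
var regrouped = elements.GroupBy(x => x).SelectMany(x => x).ToList();
// With the Where filter, single-element classes like "c" are dropped: b, b, b, a, a
var duplicatesOnly = elements.GroupBy(x => x).Where(g => g.Skip(1).Any()).SelectMany(g => g).ToList();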
It sounds like you're effectively after DistinctBy, which can be simulated with something like:
var list = new List<MainType>();
var known = new HashSet<PropertyType>();
foreach (var item in source)
{
    if (known.Add(item.TheProperty))
        list.Add(item);
}
You now have a list of the items, keeping only the first occurrence of any duplicates (as determined by the selected property) and preserving order.
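For what it's worth, on .NET 6 or later the built-in Enumerable.DistinctBy does the same thing (MainType, source and TheProperty are the names assumed from the snippet above):

// Equivalent one-liner on .NET 6+; keeps the first item per key, preserving order.
List<MainType> list = source.DistinctBy(item => item.TheProperty).ToList();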
If the intent is to find the fastest solution, as the Stack Overflow title suggests, then I would consider using C#'s Parallel.ForEach to perform a map and reduce.
For example:
var resultsCache = new IRecord[_allRecords.Length];
var resultsCount = 0;
var parallelOptions = new ParallelOptions
{
    MaxDegreeOfParallelism = 1 // use an appropriate value
};

Parallel.ForEach(
    _allRecords,
    parallelOptions,
    // Part 1: initialize thread-local storage
    () => { return new FilterMapReduceState(); },
    // Part 2: define the task to perform
    (record, parallelLoopState, index, results) =>
    {
        if (_abortFilterOperation)
        {
            parallelLoopState.Break();
        }

        if (strategy.CanKeep(record))
        {
            resultsCache[index] = record;
            results.Count++;
        }
        else
        {
            resultsCache[index] = null;
        }

        return results;
    },
    // Part 3: merge the results
    (results) =>
    {
        Interlocked.Add(ref resultsCount, results.Count);
    }
);
where
class FilterMapReduceState
{
    public FilterMapReduceState()
    {
        this.Count = 0;
    }

    /// <summary>
    /// Represents the number of records that meet the search criteria.
    /// </summary>
    internal int Count { get; set; }
}
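As a possible follow-up step (my assumption, not part of the answer above), the sparse per-index cache can be compacted into the final, order-preserving result once the parallel loop has finished:

// Sketch: collapse the sparse cache into a dense list; resultsCount sizes the list up front.
var filtered = new List<IRecord>(resultsCount);
foreach (var record in resultsCache)
{
    if (record != null)
        filtered.Add(record);
}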
As I understand it, what you need is these pieces:
var multipleIds = elements.GroupBy(x => x.id)
                          .Where(g => g.Count() > 1)
                          .Select(y => y.Key);

var distinctIds = elements.Select(x => x.id).Distinct();
var distinctElements = elements.Where(x => distinctIds.Contains(x.id));
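If the goal is the bucket from the question, one way to combine these pieces (my guess at the intended use; the id type is assumed to be int) is:

// Keep, in their original order, only the elements whose id occurs more than once.
var multipleIdSet = new HashSet<int>(multipleIds);
var bucket = elements.Where(x => multipleIdSet.Contains(x.id)).ToList();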

Executing ST_WITHIN using stored procedure in documentdb

I am trying to get a count of my locations inside a polygon. Here is my stored procedure:
function count(poly) {
    var collection = getContext().getCollection();
    var query = {
        query: 'Select f.id from f WHERE ST_WITHIN(f.location, @poly)',
        parameters: [{ name: '@poly', value: poly }]
    };
    var isAccepted = collection.queryDocuments(
        collection.getSelfLink(),
        query,
        function (err, docs, options) {
            if (err) throw err;
            if (!docs || !docs.length) getContext().getResponse().setBody('no docs found');
            else getContext().getResponse().setBody(docs.length);
        });
    if (!isAccepted) throw new Error('The query was not accepted by the server.');
}
When I execute the same query in the Query Explorer, I get results, but through the stored procedure it returns "no docs found". It does return results for simpler queries, but even then the maximum count returned is always 100. Not sure what I am doing wrong.
Thanks in advance.
P.S.: I tried using ST_DISTANCE for these coordinates. It did return a count of 100 (the max value), but it is not working at all for ST_WITHIN.
Edit:
It was not working, so I tried the approach described in the official example for counting results, and voilà, it worked. I then moved on to the next step: counting the locations in all of my polygons inside the stored procedure, because doing it locally meant too many round trips to get the count for each polygon. But calling the same function from a loop doesn't return anything. I have already tested each query in the array in DocumentDB Studio and it does return results. Please help! The code for the new procedure is:
function countABC(filterQueryArray) {
    var results = [];
    for (i = 0; i < filterQueryArray.length; i++) {
        countnew(filterQueryArray[i].QueryString, "");
    }
    getContext().getResponse().setBody(results);

    function countnew(filterQuery, continuationToken) {
        var collection = getContext().getCollection();
        var maxResult = 50000;
        var result = 0;

        tryQuery(continuationToken);

        function tryQuery(nextContinuationToken) {
            var responseOptions = {
                continuation: nextContinuationToken,
                pageSize: maxResult
            };
            if (result >= maxResult || !query(responseOptions)) {
                setBody(nextContinuationToken);
            }
        }

        function query(responseOptions) {
            return (filterQuery && filterQuery.length) ?
                collection.queryDocuments(collection.getSelfLink(), filterQuery, responseOptions, onReadDocuments) :
                collection.readDocuments(collection.getSelfLink(), responseOptions, onReadDocuments);
        }

        function onReadDocuments(err, docFeed, responseOptions) {
            if (err) {
                throw 'Error while reading document: ' + err;
            }
            result += docFeed.length;
            if (responseOptions.continuation) {
                tryQuery(responseOptions.continuation);
            } else {
                setBody(null);
            }
        }

        function setBody(continuationToken) {
            var body = {
                count: result,
                continuationToken: continuationToken
            };
            results.push(body);
        }
    }
}
With the new sproc, it's not helpful to set the result after the loop, because at that point no queries have executed yet (the results array will be empty). The idea is that all CRUD/query calls are queued and are executed after the script that queued them has finished (in this case, the main script).
Setting the result/body needs to be done from the callback. This is partially done already, but there is an issue: for every call of countnew, the result variable is reset to 0. Essentially, var result = 0 needs to be done in the main script.
Also, it's not recommended to issue CRUD/query calls in a loop like this for loop without waiting for the previous CRUD/query to finish (due to their async nature); otherwise, checking isAccepted is not reliable. What's recommended is to serialize the loop, something like this:
var result = 0;
step();

function step() {
    if (filterQueryArray.length == 0) setBody(null);
    else {
        var query = filterQueryArray.shift();
        // process the current query; at the end (from its callback), call step() again.
    }
}
Does this make sense?
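For completeness, once a single-query count sproc returns a body of the form { count, continuationToken }, the caller can re-invoke it until the token comes back null. Below is a hedged C# sketch using the DocumentDB .NET SDK; the database, collection and sproc names ("db", "coll", "count") and the response shape are my assumptions, not part of the answer above.

using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;

// Assumed shape of the body the count sproc sets: { count, continuationToken }.
class CountResult
{
    public long count { get; set; }
    public string continuationToken { get; set; }
}

class SprocCounter
{
    public static async Task<long> CountAsync(DocumentClient client, string filterQuery)
    {
        // Database, collection and sproc names are placeholders.
        var sprocUri = UriFactory.CreateStoredProcedureUri("db", "coll", "count");
        long total = 0;
        string continuation = null;
        do
        {
            // Re-invoke the sproc with the continuation token it handed back,
            // until it reports null, i.e. the query has been fully drained.
            var response = await client.ExecuteStoredProcedureAsync<CountResult>(
                sprocUri, filterQuery, continuation);
            total += response.Response.count;
            continuation = response.Response.continuationToken;
        } while (continuation != null);
        return total;
    }
}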

Getting Function Evaluation Timed Out error when using a function inside a LINQ query

Thanks in advance.
I am trying to query an object and create an anonymous type from the result. I am using a function inside the LINQ query. At this stage I am getting an error saying "Function evaluation disabled because a previous function evaluation timed out. You must continue execution to reenable function..."
Please find below my sample code.
var serviceResult = _service.GeResponse(panId, numberOfTransactions, startDate, endDate);
var balanceTrans = GetAllTypes<BalanceAdjustment>(serviceResult);
var presentmentTrans = GetAllTypes<Presentment>(serviceResult);

var test = balanceTrans.AsEnumerable();
var obj = test.Select(item => new SearchCardTransactionResult
{
    PanId = item.PanId,
    CardNumber = item.CardNumberFormatted,
    Balance = item.FinancialBalance,
    BillingAmount = Math.Abs(item.BillingAmount).Format(),
    BillingCurrency = GetCurrencyCode(item.BillingCurrency)
});
private IList<T> GetAllTypes<T>(GetTransactionsResponse result)
    where T : ValitorServices.ValitorPanWS.AbstractFinancialTransaction
{
    return result.FinanacialTransactions.OfType<T>().ToList();
}
Could anyone advise what I am doing wrong here?
Many Thanks
I managed to resolve it by using the following code.
var currencyCodes = _service.GetCurrencyCodes().ToList();
var test = balanceTrans.AsEnumerable();
var obj = test.AsEnumerable().Select(item => new SearchCardTransactionResult
{
    PanId = item.PanId,
    CardNumber = item.CardNumberFormatted,
    Balance = item.FinancialBalance,
    BillingAmount = Math.Abs(item.BillingAmount).Format(),
    BillingCurrency = currencyCodes.First(c => c.NumericCode.Equals(item.BillingCurrency)).AlphabeticCode
});
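A small variation on the same idea (my suggestion, not part of the original fix): index the currency codes once so each transaction does a constant-time lookup instead of a linear First(...) scan.

// Assumes NumericCode values are unique; property names are taken from the snippet above.
var currencyByNumericCode = _service.GetCurrencyCodes()
    .ToDictionary(c => c.NumericCode, c => c.AlphabeticCode);

// ...then inside the Select:
// BillingCurrency = currencyByNumericCode[item.BillingCurrency]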

C# index out of range, multi thread, Logs

I am trying to grab logs from Windows. To make it faster, I find the days that have log entries and then, for that range of days, start one thread per day to load them quickly. In the function work1 the error "Index was outside the bounds of the array" appears. If I do the job in a single thread it works fine, but it is very, very slow.
I tried to use the information from
"Index was outside the bounds of the array while trying to start multiple threads"
but it does not work.
I think the problem is with the IEnumerable: it seems like it is not fully loaded by the time the loop starts.
Sorry for my English, I am from Uzbekistan.
var result = from EventLogEntry elog in aLog.Entries
             orderby elog.TimeGenerated
             select elog.TimeGenerated;

DateTime OLDentry = result.First();
DateTime NEWentry = result.Last();

DTN.Add(result.First());
foreach (var dtn in result)
{
    if (dtn.Year != DTN.Last().Year |
        dtn.Month != DTN.Last().Month |
        dtn.Day != DTN.Last().Day)
    {
        DTN.Add(dtn);
    }
}

List<Thread> t = new List<Thread>();
int i = 0;
foreach (DateTime day in DTN)
{
    DateTime yad = day;
    var test = from EventLogEntry elog in aLog.Entries
               where (elog.TimeGenerated.Year == day.Year) &&
                     (elog.TimeGenerated.Month == day.Month) &&
                     (elog.TimeGenerated.Day == day.Day)
               select elog;
    var tt2 = test.ToArray();
    t.Add(new Thread(() => work1(tt2)));
    t[i].Start();
    i++;
}

static void work1(IEnumerable<EventLogEntry> b)
{
    var z = b;
    for (int i = 0; i < z.Count(); i++)
    {
        Console.Write(z + "\n");
    }
}
Replace var tt2 = test; with var tt2 = test.ToArray();
The error comes from a mistake you make numerous times in your code: you are enumerating the data over and over. Calling .Count() enumerates the data again, and in this case that ends up conflicting with cached values inside the EventLogEntry enumerator.
LINQ does not return a data set. It returns a query. A variable of type IEnumerable<T> may return different values every time you call Count(), First() or Last(). Calling .ToArray() makes C# retrieve the result and store it in an array.
You should generally just enumerate an IEnumerable<T> once.
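To make the single-enumeration point concrete, here is a minimal sketch (my wording, not the poster's final code) of a worker that takes the already materialized array and walks it exactly once; printing the Message property is just an example of what to write out.

// The array was built once with test.ToArray(); the worker only iterates it.
static void work1(EventLogEntry[] entries)
{
    foreach (var entry in entries)
    {
        Console.WriteLine(entry.Message);
    }
}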

Breaking a loop when enumerating an IQueryable

I have the following code:
var docs = ctx.Documents.Select(a =>
    new { a.ID, Content = a.Document, a.LastModified, CreatedDate = a.Created });

foreach (var doc in docs)
{
    if (Utility.ContinueDocumentPreview)
    {
        _createFile(doc.ID, doc.Content, doc.CreatedDate, doc.LastModified);
        _fireProgress(++counter, count);
    }
    else
    {
        break;
    }
}
The Utility.ContinueDocumentPreview flag is set to false when a user hits a Cancel button while this process is running. The problem is that when the flag is false and the code should break out of the loop, I get a SQL timeout exception.
Am I doing this incorrectly?
You could use .ToList() to execute the SQL before looping, but whether that helps depends on how much data you're pulling from your database. It results in one big SQL query that loads everything at once, so test it thoroughly to make sure you're getting the performance you want.
// You can add .ToList() here:
var docs = ctx.Documents
    .Select(a => new { a.ID, Content = a.Document, a.LastModified, CreatedDate = a.Created })
    .ToList();

// Or, you can add .ToList() here:
foreach (var doc in docs.ToList())
{
    if (Utility.ContinueDocumentPreview)
    {
        _createFile(doc.ID, doc.Content, doc.CreatedDate, doc.LastModified);
        _fireProgress(++counter, count);
    }
    else
    {
        break;
    }
}
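If materializing everything up front is too heavy, another option (my suggestion, not part of the answer above) is to page through the documents so that cancelling wastes at most one small batch; the page size and the OrderBy key here are assumptions.

const int pageSize = 100;
for (int skip = 0; Utility.ContinueDocumentPreview; skip += pageSize)
{
    // Each iteration runs one small, bounded query; a stable ordering is required for Skip/Take.
    var page = ctx.Documents
        .OrderBy(a => a.ID)
        .Select(a => new { a.ID, Content = a.Document, a.LastModified, CreatedDate = a.Created })
        .Skip(skip)
        .Take(pageSize)
        .ToList();

    if (page.Count == 0)
        break;

    foreach (var doc in page)
    {
        _createFile(doc.ID, doc.Content, doc.CreatedDate, doc.LastModified);
        _fireProgress(++counter, count);
    }
}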
