Improve performance of TryGetValue - C#

I am creating an Excel file using the Open XML SDK. As part of this process, I have the scenario below.
I need to add data to a Dictionary<uint, string> if the key does not already exist. For that I am using the code below.
var dataLines = sheetData.Elements<Row>().ToList();
for (int i = 0; i < dataLines.Count; i++)
{
    var x = dataLines[i];
    if (!dataDictionary.TryGetValue(x.RowIndex.Value, out var res)) // 700 seconds, 1,279,999,998 hit counts
    {
        dataDictionary.Add(x.RowIndex.Value, x.OuterXml);
    }
}
When I create an Excel sheet with around 90,000 - 92,000 rows, the line with the if condition in the code above takes 700 seconds to complete (checked with a performance profiler; the line also has 1,279,999,998 hit counts).
How can I reduce the time this line consumes?
Is there a better way to achieve this in less time?

If the if statement is slow, one option is to eliminate it entirely and use the dictionary's indexer to set the value. This means the "last match wins". If you want the "first match" to win instead, just reverse the order in which you iterate the list.
var dataLines = sheetData.Elements<Row>().ToList();
for (int i = dataLines.Count - 1; i >= 0; i--)
{
    var x = dataLines[i];
    dataDictionary[x.RowIndex.Value] = x.OuterXml;
}
If x.RowIndex.Value is unique, it doesn't matter which direction you iterate.
If it is important that the keys are sorted in ascending order, you can use a SortedDictionary<TKey, TValue>.
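For illustration, a minimal sketch of that (assuming the same sheetData and dataDictionary as in the question); SortedDictionary<TKey, TValue> enumerates its keys in ascending order:

var dataDictionary = new SortedDictionary<uint, string>();
foreach (var x in sheetData.Elements<Row>())
{
    // Indexer assignment: the last occurrence of a RowIndex wins, and keys stay sorted.
    dataDictionary[x.RowIndex.Value] = x.OuterXml;
}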
But as others have pointed out, it seems odd that you have so many hit counts. There is probably recursion going on in your application that you need to track down.

Related

How can I implement odd-even sorting in C# using threads?

I am practicing threads and concurrency in C# and tried to implement the basic odd-even sort algorithm using one thread for the even pass and another for the odd pass.
static bool Sort(int startPosition, List<int> list)
{
    bool result = true;
    do
    {
        for (int i = startPosition; i <= list.Count - 2; i = i + 2)
        {
            if (list[i] > list[i + 1])
            {
                int temp = list[i];
                list[i] = list[i + 1];
                list[i + 1] = temp;
                result = false;
            }
        }
    } while (!result);
    return result;
}
The Main method looks like this:
static void Main(string[] args)
{
    bool isOddSorted = false;
    bool isEvenSorted = false;
    List<int> list = new List<int>();
    while (list.Count < 15)
    {
        list.Add(new Random().Next(0, 20));
    }
    var evenThread = new Thread(() =>
    {
        isEvenSorted = Sort(0, list);
    });
    evenThread.Start();
    var oddThread = new Thread(() =>
    {
        isOddSorted = Sort(1, list);
    });
    oddThread.Start();
    while (true)
    {
        if (isEvenSorted && isOddSorted)
        {
            foreach (int i in list)
            {
                Console.WriteLine(i);
            }
            break;
        }
    }
}
Understandably, the loop in the Sort method runs forever because the result variable is never set back to true. The way it runs, it still manages to sort the list; it just never breaks out of the loop.
However, the moment I add result = true; as the first line of the do block of the Sort function, the sorting gets messed up.
I couldn't figure out how to fix this.
You cannot do odd-even sort easily in a multi-threaded manner. Why?
Because the odd-even sort is in essence the repetition of two sorting passes (the odd and the even pass), with any subsequent pass depending on the result of the preceding pass. You cannot run the two passes in parallel/concurrently in practical terms, as each pass has to follow the other.
There are of course ways to employ multi-threading, even with odd-even sort, although it probably wouldn't make much practical sense. For example, you could divide the list into several partitions, with each partition being odd-even-sorted independently. The sorting of each partition could be done in a multi-threaded manner. As a final step, you would merge the sorted partitions in a way that results in the fully sorted list.
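For illustration, a minimal sketch of that partition-and-merge idea (not the asker's code; the partitioning scheme and helper names are invented for the example):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

static class PartitionedOddEvenSort
{
    // Sequential odd-even sort of a single partition (no threads inside).
    static void OddEvenSort(List<int> a)
    {
        bool sorted = false;
        while (!sorted)
        {
            sorted = true;
            for (int start = 0; start <= 1; start++)          // even pass, then odd pass
                for (int i = start; i < a.Count - 1; i += 2)
                    if (a[i] > a[i + 1])
                    {
                        (a[i], a[i + 1]) = (a[i + 1], a[i]); // swap
                        sorted = false;
                    }
        }
    }

    // Standard two-way merge of two sorted lists.
    static List<int> Merge(List<int> x, List<int> y)
    {
        var result = new List<int>(x.Count + y.Count);
        int i = 0, j = 0;
        while (i < x.Count && j < y.Count)
            result.Add(x[i] <= y[j] ? x[i++] : y[j++]);
        result.AddRange(x.Skip(i));
        result.AddRange(y.Skip(j));
        return result;
    }

    public static List<int> Sort(List<int> list, int partitions)
    {
        // Each thread owns one contiguous chunk, so no locking is needed.
        int size = (list.Count + partitions - 1) / partitions;
        var chunks = Enumerable.Range(0, partitions)
            .Select(p => list.Skip(p * size).Take(size).ToList())
            .Where(c => c.Count > 0)
            .ToList();

        Parallel.ForEach(chunks, OddEvenSort);            // sort partitions concurrently

        return chunks.Aggregate(new List<int>(), Merge);  // merge sorted partitions
    }
}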
(By the way: that you eventually get a sorted list if you let the do-while loops in your Sort method run many, many times just means that, given enough time, even "overlapping" concurrent passes eventually reach a sorted list, but maybe not with all the same numbers as the original list. Given enough repetitions of the loop, the elements will eventually be compared with each other and shuffled into the right positions. However, since you have not synchronized list access, you might lose some numbers from the list and have them replaced with duplicates of other numbers, depending on the runtime behavior and the timing of list accesses between the two threads.)
You are trying to modify a non-thread-safe collection from multiple threads.
Even if the basic idea were sound (you are using a basic swap in the Sort method, though it is not implemented entirely correctly), you have to take into account that while one thread is doing a swap, the other could swap a value that is sitting in the temp variable at that exact moment.
You need to familiarize yourself with locks and/or thread-safe collections.
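For example, a minimal sketch of guarding the shared list with a lock (the helper name and lock object are made up for the example):

private static readonly object listLock = new object();

static void SwapIfGreater(List<int> list, int i)
{
    // Holding the lock means a swap cannot interleave with the other thread's swap.
    lock (listLock)
    {
        if (list[i] > list[i + 1])
        {
            int temp = list[i];
            list[i] = list[i + 1];
            list[i + 1] = temp;
        }
    }
}

Note that locking every comparison effectively serializes the two passes, which removes most of the benefit of using two threads here in the first place.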
Look at your result variable and the logic you have implemented with regard to result.
The outer do ... while (!result) loop will only exit when result is true.
Now imagine your inner for loop finds two numbers that need swapping. So it swaps the numbers and sets result to false. And here is my question to you: after result has been set to false because two numbers were swapped, when and where is result ever set back to true?
Also, while you sort the numbers at even list positions and the numbers at odd positions, your code never does a final pass across the entire list. So if, after the even and odd sorting, a larger number at an even position n is followed by a smaller number at odd position n+1, your code leaves it at that, and the list essentially remains (partially) unsorted...

How to replace values smaller than zero with zero in a collection using LINQ

I have a list of objects. Each object has a field called val. This value shouldn't be smaller than zero, but there are such objects in this list. I want to replace those less-than-zero values with zero. The easiest solution is:
foreach (Obj item in list)
{
    if (item.val < 0)
    {
        item.val = 0;
    }
}
But I want to do this using LINQ. The important thing is that I do not want a new list of updated elements; I want the same list, just with the necessary values replaced. Thanks in advance.
As I read the comments, I realized that what I wanted to do is less efficient and pointless. LINQ is for querying and creating new collections, not for updating existing ones. A possible solution I came across was this:
list.Select(c => { if (c.val < 0) c.val = 0; return c; }).ToList();
But my initial foreach solution is more efficient than this. So don't make the same mistake I did and overcomplicate things.
You can try this one, which can be faster thanks to parallelism:
Parallel.ForEach(list, item =>
{
    item.val = item.val < 0 ? 0 : item.val;
});
Parallel.ForEach in C# provides a parallel version of the standard, sequential foreach loop. In a standard foreach loop, each iteration processes a single item from the collection, and all items are processed one by one. Parallel.ForEach, however, executes multiple iterations at the same time on different processors or processor cores. This opens up the possibility of synchronization problems, so the loop is ideally suited to processes where each iteration is independent of the others.
More Details - LINK
A for loop is faster than a foreach loop, so you can use this one:
for (int i = 0; i < list.Count; i++)
{
    if (list[i].val <= 0)
    {
        list[i].val = 0;
    }
}

Deleting from array, mirrored (strange) behavior

The title may seem a little odd, because I have no idea how to describe this in one sentence.
For the course Algorithms we have to micro-optimize some things; one task is finding out how deleting from an array works. The assignment is to delete something from an array and re-align the contents so that there are no gaps. I think it is quite similar to how std::vector::erase works in C++.
Because I like the idea of understanding everything low-level, I went a little further and tried to benchmark my solutions. This produced some weird results.
First, here is a little code that I used:
class Test {
    Stopwatch sw;
    Obj[] objs;

    public Test() {
        this.sw = new Stopwatch();
        this.objs = new Obj[1000000];
        // Fill objs
        for (int i = 0; i < objs.Length; i++) {
            objs[i] = new Obj(i);
        }
    }

    public void test() {
        // Time deletion
        sw.Restart();
        deleteValue(400000, objs);
        sw.Stop();
        // Show timings
        Console.WriteLine(sw.Elapsed);
    }

    // Delete function
    // value is the to-search-for item in the list of objects
    private static void deleteValue(int value, Obj[] list) {
        for (int i = 0; i < list.Length; i++) {
            if (list[i].Value == value) {
                for (int j = i; j < list.Length - 1; j++) {
                    list[j] = list[j + 1];
                    //if (list[j + 1] == null) {
                    //    break;
                    //}
                }
                list[list.Length - 1] = null;
                break;
            }
        }
    }
}
I would just create this class and call the test() method. I did this in a loop, 25 times.
My findings:
The first round takes a lot longer than the other 24; I think this is because of caching, but I am not sure.
When I use a value near the start of the list, it has to move more items in memory than when I use a value near the end, yet it still seems to take less time.
Bench times differ quite a bit.
When I enable the commented-out if, performance goes up (10-20%), even if the value I search for is almost at the end of the list (which means the if fires a lot of times without actually being useful).
I have no idea why these things happen. Can someone explain (some of) them? And if someone who is a pro at this sees this: where can I find more info on doing this the most efficient way?
Edit after testing:
I did some testing and found some interesting results. I ran the test on an array with a size of a million items, filled with a million objects. I ran that 25 times and recorded the cumulative time in milliseconds. I did that 10 times and took the average as the final value.
When I run the test with the function described just above, I get a score of:
362,1
When I run it with the answer from dbc, I get a score of:
846,4
So mine was faster. But then I started to experiment with a half-empty array, and things started to get weird. To get rid of the inevitable NullReferenceExceptions I added an extra null check to the if (thinking it would hurt performance a bit more), like so:
if (fromItem != null && fromItem.Value != value)
    list[to++] = fromItem;
This not only seemed to work, it improved performance dramatically! Now I get a score of:
247,9
The weird thing is that the scores seem too low to be true, but they sometimes spike. This is the set I took the average from:
94, 26, 966, 36, 632, 95, 47, 35, 109, 439
So the extra evaluation seems to improve my performance, despite doing an extra check. How is this possible?
You are using Stopwatch to time your method. This measures the total wall-clock time taken during the method call, which can include the time required for .NET to JIT your method the first time, interruptions for garbage collection, and slowdowns caused by system load from other processes. Noise from these sources will likely dominate noise due to cache misses.
This answer gives some suggestions on how to minimize the noise from garbage collection and other processes. To eliminate JIT noise, call your method once without timing it, or show the time taken by the first call in a separate column of your results table, since it will be so different. You might also consider using a proper profiler, which will report exactly how much time your code used, exclusive of "noise" from other threads or processes.
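As a rough sketch of eliminating the JIT noise (reusing the Test class from the question), warm the method up once on a throwaway copy before timing it:

var warmup = (Obj[])objs.Clone();
deleteValue(400000, warmup);   // untimed call: pays the one-time JIT cost
sw.Restart();
deleteValue(400000, objs);     // timed call: measures only the algorithm
sw.Stop();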
Finally, I'll note that your algorithm for removing matching items from an array and shifting everything else down uses a nested loop, which is not necessary and accesses the items after the matching index twice. The standard algorithm looks like this:
public static void RemoveFromArray(this Obj[] array, int value)
{
    int to = 0;
    for (int from = 0; from < array.Length; from++)
    {
        var fromItem = array[from];
        if (fromItem.Value != value)
            array[to++] = fromItem;
    }
    for (; to < array.Length; to++)
    {
        array[to] = default(Obj);
    }
}
However, instead of the standard algorithm you might experiment with Array.Copy() to shift the tail of the array down in a single call, since internally it performs the block move in native code.
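A sketch of what that could look like, assuming (as in the question) that at most one element matches:

public static void RemoveFromArrayWithCopy(Obj[] array, int value)
{
    int index = Array.FindIndex(array, o => o.Value == value);
    if (index < 0)
        return;
    // Shift the tail down one slot in a single native block move.
    Array.Copy(array, index + 1, array, index, array.Length - index - 1);
    array[array.Length - 1] = null;
}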

Best Way to Check for a Used Key with NHibernate?

On my site I allow people to buy subscriptions in bulk (I call them vouchers). Once they have these vouchers, they give them to whoever they like, and that person enters the code into their account to upgrade it.
Right now I am thinking of a 4-character alphanumeric code (upper case, lower case and digits) and will have something like this:
var chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
var stringChars = new char[4];
var random = new Random();
for (int i = 0; i < stringChars.Length; i++)
{
    stringChars[i] = chars[random.Next(chars.Length)];
}
var finalString = new String(stringChars);
For now I think that will give me more than enough combinations, and if I ever do run out I can always increase the length of the code. I want to keep it short because I don't want the user to have to type in huge numbers.
I also don't have the time to make a more elegant solution, maybe one where they click a link in their email and it activates their account; of course this would also cut down on someone trying to randomly guess a voucher number.
These are things I would deal with if the site ever becomes more popular.
What I am wondering, though, is how I can handle the possible generation of duplicate vouchers. My first thought was to check the database each time a voucher is created and, if it already exists, make a new one.
However, that seems like it could be slow. So I also thought about getting all the keys first, storing them in memory and checking there, but if the list keeps growing I might run into out-of-memory exceptions and all that great stuff.
So does anyone have any ideas? Or am I stuck with one of the 2 methods I listed above?
I am using NHibernate, ASP.NET MVC and C#.
Edit
static void Main(string[] args)
{
    List<string> hold = new List<string>();
    for (int i = 0; i < 10000; i++)
    {
        HashAlgorithm sha = new SHA1CryptoServiceProvider();
        byte[] result = sha.ComputeHash(BitConverter.GetBytes(i));
        string hex = null;
        foreach (byte x in result)
        {
            hex += String.Format("{0:x2}", x);
        }
        hold.Add(hex.Substring(0, 3));          // stores 3 hex chars (only 4096 possible values)...
        Console.WriteLine(hex.Substring(0, 4)); // ...but prints 4, so duplicates are guaranteed for 10000 codes
    }
    Console.WriteLine("Number of Distinct values {0}", hold.Distinct().Count());
}
Above is my attempt to use hashing. However, I think I am missing something, as it seems to produce quite a few more duplicates than expected.
Edit 2
I think I added what I was missing, but I'm not sure if this is exactly what he meant. I am also not sure what to do when I have shifted the window as far as it can go (my hash seems to give me 40 characters to move through).
static void Main(string[] args)
{
    int subStringLength = 4;
    List<string> hold = new List<string>();
    for (int i = 0; i < 10000; i++)
    {
        SHA1CryptoServiceProvider sha = new SHA1CryptoServiceProvider();
        byte[] result = sha.ComputeHash(BitConverter.GetBytes(i));
        string hex = null;
        foreach (byte x in result)
        {
            hex += String.Format("{0:x2}", x);
        }
        int startingPositon = 0;
        string possibleVoucherCode = hex.Substring(startingPositon, subStringLength);
        string voucherCode = Move(subStringLength, hold, hex, startingPositon, possibleVoucherCode);
        hold.Add(voucherCode);
    }
    Console.WriteLine("Number of Distinct values {0}", hold.Distinct().Count());
}

private static string Move(int subStringLength, List<string> hold, string hex, int startingPositon, string possibleVoucherCode)
{
    if (hold.Contains(possibleVoucherCode))
    {
        int newPosition = startingPositon + 1;
        // Slide the window right only while it still fits inside the hash.
        if ((newPosition + subStringLength) <= hex.Length)
        {
            possibleVoucherCode = hex.Substring(newPosition, subStringLength);
            return Move(subStringLength, hold, hex, newPosition, possibleVoucherCode);
        }
        // The window has moved past the end of the hash; give up on this hash.
        return "0";
    }
    else
    {
        return possibleVoucherCode;
    }
}
It is going to be slow because you want to generate the vouchers randomly and then check the database for every generated code.
I would create a vouchers table with an id, the code and an is_used column, and fill that table once with enough random codes. Since this can be done in a separate process, performance won't be such a big problem. Let it run in the evening, and the next day you have a fully filled vouchers table.
If you want to prevent generating duplicate vouchers, that won't be a problem. You can generate them anyway and either put them in a System.Collections.Generic.HashSet<string> (which ignores duplicate additions instead of throwing an exception) or call the LINQ method Distinct() before adding them to the vouchers table.
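A minimal sketch of that pre-generation step (the target count of 100,000 is just an example):

var codes = new HashSet<string>();
var random = new Random();
var chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
while (codes.Count < 100000)
{
    var buffer = new char[4];
    for (int i = 0; i < buffer.Length; i++)
        buffer[i] = chars[random.Next(chars.Length)];
    codes.Add(new string(buffer));   // Add returns false for duplicates; nothing is thrown
}
// Bulk-insert `codes` into the vouchers table from the background job.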
If you insist on short codes:
Use a GUID as a primary key and generate one random number. How you want to translate these into alphanumerics is up to you.
Use the last byte or two of the GUID plus the random number, e.g. 1234-684687. This should make it slightly less easy to brute-force coupons. And handle any (rare) collisions with an exception.
An easy way to shorten an int is to change its base (from 10 to 62). (This is in VB, and it is old code.)
It yields "2lkCB1" when given Int32.MaxValue:
''//given intValue as your random integer
Dim result As String = String.Empty
Dim digits As String = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
Dim x As Integer
While (intValue > 0)
    x = intValue Mod digits.Length
    result = digits(x) & result
    intValue = intValue - x
    intValue = intValue \ digits.Length
End While
Return result
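For reference, a direct C# translation of the same conversion (a sketch, not taken from the original answer):

static string ToBase62(int intValue)
{
    const string digits = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
    string result = string.Empty;
    while (intValue > 0)
    {
        int x = intValue % digits.Length;
        result = digits[x] + result;   // prepend the next least-significant digit
        intValue = (intValue - x) / digits.Length;
    }
    return result;
}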
But now we're already answering more than one question.
For a bulk data operation like this, I would recommend not using NHibernate and just doing straight ADO.NET.
Batch Check
Since you anticipate generating big batches of codes at once, you should batch multiple code checks into a single round trip to the database. If you're using SQL Server 2008 or higher, you can do this using table-valued parameters, checking a whole list of codes at once:
SELECT DISTINCT b.Code
FROM @batch b
WHERE NOT EXISTS (
    SELECT v.Code
    FROM dbo.Voucher v
    WHERE v.Code = b.Code
);
Concurrency
Now, what about concurrency issues? What if two users generate the same code at roughly the same time? Or a code is generated in between the time we check it for uniqueness and the time we insert it into the Voucher table?
We can take care of that by modifying the query as follows:
DECLARE @batchid uniqueidentifier;
SET @batchid = NEWID();

INSERT INTO dbo.Voucher (Code, BatchId)
SELECT DISTINCT b.Code, @batchid
FROM @batch b
WHERE NOT EXISTS (
    SELECT Code
    FROM dbo.Voucher v
    WHERE b.Code = v.Code
);

SELECT Code
FROM dbo.Voucher
WHERE BatchId = @batchid;
Executing via .NET
Assuming that you have defined the following table-valued user type...
CREATE TYPE dbo.VoucherCodeList AS TABLE (
    Code nvarchar(8) COLLATE SQL_Latin1_General_CP1_CS_AS NOT NULL
    /* !!! Remember to specify the collation on your Voucher.Code column too, since you want upper- and lower-case codes. */
);
... you could execute this query via .NET code like this:
public ICollection<string> GenerateCodes(int numberOfCodes)
{
    var result = new List<string>(numberOfCodes);
    while (result.Count < numberOfCodes)
    {
        var batchSize = Math.Min(_batchSize, numberOfCodes - result.Count);
        var batch = Enumerable.Range(0, batchSize)
                              .Select(x => GenerateRandomCode());
        var oldResultCount = result.Count;
        result.AddRange(FilterAndSecureBatch(batch));
        var filteredBatchSize = result.Count - oldResultCount;
        var collisionRatio = ((double)batchSize - filteredBatchSize) / batchSize;
        // Automatically increment the length of random codes if collisions begin happening too frequently
        if (collisionRatio > _collisionThreshold)
            CodeLength++;
    }
    return result;
}

private IEnumerable<string> FilterAndSecureBatch(IEnumerable<string> batch)
{
    using (var command = _connection.CreateCommand())
    {
        command.CommandText = _sqlQuery; // the concurrency-safe query listed above
        var metaData = new[] { new SqlMetaData("Code", SqlDbType.NVarChar, 8) };
        var param = command.Parameters.Add("@batch", SqlDbType.Structured);
        param.TypeName = "dbo.VoucherCodeList";
        param.Value = batch.Select(x =>
        {
            var record = new SqlDataRecord(metaData);
            record.SetString(0, x);
            return record;
        });
        using (var reader = command.ExecuteReader())
            while (reader.Read())
                yield return reader.GetString(0);
    }
}
Performance
After implementing all of this (and moving the command and parameter creation out of the loop so they would be reused between batches), I was able to insert 10,000 codes with a batch size of 500 consistently in approx. 0.5 to 2 seconds, i.e. 5 to 20 codes per millisecond.
Code Density / Collisions / Guessability
The _collisionThreshold field limits the density of your codes. It's a value between 0 and 1. Actually, it must be less than 1, or else you would wind up in an infinite loop once the 4-digit codes were exhausted (there should probably be an assertion for this in the code). I would recommend never turning it above 0.5, for performance reasons: more than 50% collisions would mean the code spends more time testing already-used codes than actually generating new ones.
Keeping the collision threshold low is how you control how hard your codes are to guess. Setting _collisionThreshold to 0.01 generates codes such that there is approximately a 1% chance of someone guessing a code.
If collisions occur too frequently, CodeLength (which is used by the GenerateRandomCode() method) will be incremented. This value needs to be persisted somewhere. After executing GenerateCodes(), check CodeLength to see if it has changed and then save the new value.
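GenerateRandomCode() itself lives in the linked gist rather than being shown here; a hypothetical sketch of what it could look like, driven by the persisted CodeLength:

private static readonly Random _random = new Random();
private const string Alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

public int CodeLength { get; set; } = 4;   // persisted between runs

private string GenerateRandomCode()
{
    var chars = new char[CodeLength];
    for (int i = 0; i < chars.Length; i++)
        chars[i] = Alphabet[_random.Next(Alphabet.Length)];
    return new string(chars);
}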
Source Code
The full code is available here: https://gist.github.com/3217856. I am the author of this code, and am releasing it under the MIT license. I had fun with this little challenge, and also got to learn how to pass a table-valued parameter to an inline parametrized query. I hadn't ever done that before. I've only ever passed them to full-fledged stored procedures.
A possible solution for you is this:
Find the maximum ID of a voucher (an integer). Then run a hash function on it, take the first 32 bits, and convert them to the string you want to show the user (or use a 32-bit hash function such as the Jenkins hash function). This will probably work, since hash collisions are pretty rare. But this solution is very similar to yours in terms of randomness.
You could run a test which finds the first 10 or 100 collisions (this should be enough for you) and forces the algorithm to "skip" them and use a different starting value. Then you don't need to check the database at all (well, at least until you reach about 4,294,967,296 vouchers...).
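For illustration, a sketch of that idea (nextId is assumed to be the current maximum voucher ID plus one, and ToBase62 is a helper like the one shown in the earlier answer):

// using System.Security.Cryptography;
static string CodeFromId(int nextId)
{
    using (var sha = SHA1.Create())
    {
        byte[] hash = sha.ComputeHash(BitConverter.GetBytes(nextId));
        int value = BitConverter.ToInt32(hash, 0) & int.MaxValue;   // first 32 bits, made non-negative
        return ToBase62(value);
    }
}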
How about utilizing NHibernate's HiLo algorithm?
Here is an example of how you can get the next value (without DB access).

How to improve the performance of my custom function for getting fast results?

I'm using Lucene.NET to implement a numerical search engine.
I want to filter numbers from within a large range, depending on which numbers exist in a string array.
I used the following code:
int startValue = 1;
int endValue = 100000;

// Assume that the following string array contains 12000 strings
String[] ArrayOfTerms = new String[] { "1", "10", ................., "99995" };

public String[] GetFilteredStrings(String[] ArrayOfTerms)
{
    List<String> filteredStrings = new List<String>();
    for (int i = startValue; i <= endValue; i++)
    {
        int index = Array.IndexOf(ArrayOfTerms, i.ToString());
        if (index != -1)
        {
            filteredStrings.Add((String)ArrayOfTerms.GetValue(index));
        }
    }
    return filteredStrings.ToArray();
}
Now, my problem is that it checks every value from 1 to 100,000, which takes too much time; sometimes my application hangs.
Can anyone help me improve this performance issue? I don't know about caching concepts, but I know that Lucene supports cache filters. Should I use a cache filter? Thanks in advance.
In fact, you're trying to determine whether an array contains an item or not.
I think you should use something like a HashSet<string> or a Dictionary, which can determine the presence of a value in O(1) time instead of the O(n) time you have now.
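A minimal sketch of the HashSet version, reusing the names from the question:

var terms = new HashSet<string>(ArrayOfTerms);   // built once: O(n)
List<String> filteredStrings = new List<String>();
for (int i = startValue; i <= endValue; i++)
{
    string candidate = i.ToString();
    if (terms.Contains(candidate))   // O(1) lookup instead of an O(n) IndexOf
        filteredStrings.Add(candidate);
}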
This code works much faster, if I got what you want to do right:
var results = ArrayOfTerms.Where(s => int.Parse(s) <= endValue);
