Fast disk-based HashSet / Index for large set of Strings - c#

I'm looking for a way to index a large set of strings - say 100 000 000 (probably more) with an average length of 50 bytes each (= 5 000 000 000 bytes = 5 GB of data; more in UTF-16, and more still with .NET allocation overhead).
I then want to use the index to allow other processes to query if a string exists in the index -- and this as fast as possible.
I've done some simple testing with a large memory based HashSet - about 1 000 000 strings - and looking up e.g. 50 000 strings in that HashSet is only a matter of milliseconds.
Here's some pseudo code for what I want to achieve:
// 1) create huge disk-based HashSet / Index / Lookup
using (var hs = new DiskBasedHashSet<string>(@"c:\index.bin", FileMode.Create))
{
    foreach (var s in lotsOfStringsToIndex)
    {
        hs.Add(s);
    }
}

// 2) use index to check if items exist - this needs to be fast
public static class Query
{
    static DiskBasedHashSet<string> hs = new DiskBasedHashSet<string>(@"c:\index.bin", FileMode.Open);

    // callable from anywhere, and really fast
    public static bool QueryItem(string s)
    {
        return hs.Contains(s);
    }
}

foreach (var s in checkForThese)
{
    var result = Query.QueryItem(s);
}
I've tried using SQL Server, Lucene.NET, and B+Trees, with and without partitioning the data. Anyhow, these solutions are too slow and, I think, overqualified for this task. Imagine the overhead of creating a SQL query or a Lucene filter just to check for a string in a set.
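One way to picture how small such an index can be: persist only the sorted 64-bit hashes of the strings and binary-search them through a memory-mapped file. The sketch below is my own illustration, not an existing library; DiskHashIndex and its FNV-1a hash are assumptions, and a positive Contains strictly means "probably present", since 64-bit hashes can collide.

using System;
using System.Collections.Generic;
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Linq;

public sealed class DiskHashIndex : IDisposable
{
    private readonly MemoryMappedFile file;
    private readonly MemoryMappedViewAccessor view;
    private readonly long count;

    // Build phase: hash every string, sort, write the hashes as a flat binary file.
    public static void Create(string path, IEnumerable<string> items)
    {
        ulong[] hashes = items.Select(Hash).Distinct().ToArray();
        Array.Sort(hashes);
        using (var writer = new BinaryWriter(File.Create(path)))
            foreach (ulong h in hashes)
                writer.Write(h);
    }

    // Query phase: memory-map the file and binary-search it on demand.
    public DiskHashIndex(string path)
    {
        count = new FileInfo(path).Length / sizeof(ulong);
        file = MemoryMappedFile.CreateFromFile(path, FileMode.Open);
        view = file.CreateViewAccessor();
    }

    public bool Contains(string s)
    {
        ulong key = Hash(s);
        long lo = 0, hi = count - 1;
        while (lo <= hi)
        {
            long mid = lo + (hi - lo) / 2;
            ulong h = view.ReadUInt64(mid * sizeof(ulong));
            if (h == key) return true;  // "probably present" - 64-bit hashes can collide
            if (h < key) lo = mid + 1;
            else hi = mid - 1;
        }
        return false;
    }

    // FNV-1a: stable across processes, unlike string.GetHashCode().
    private static ulong Hash(string s)
    {
        ulong h = 14695981039346656037UL;
        foreach (char c in s)
        {
            h ^= c;
            h *= 1099511628211UL;
        }
        return h;
    }

    public void Dispose()
    {
        view.Dispose();
        file.Dispose();
    }
}

Create maps onto the build phase of the pseudo code above, and Contains onto QueryItem; at 8 bytes per entry, 100 000 000 strings index into an 800 MB file that the OS page cache keeps hot across processes.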

Related

I need the fastest way to process math on numerical array data

I apologize if this is in the incorrect forum. Despite finding a lot of array-manipulation questions on this site, most of them average/sum... the array of numerics as a set using LINQ, which works well for all values in an array. But I need to process each index over multiple arrays (of the same size).
My routine receives array data from devices, typically double[512] or ushort[512]. A single device will always send arrays of the same size, but the array sizes can range from 256 to 2048 depending on the device. I need to hold CountToAverage arrays at a time to average. Each time an array is received, it must be pushed onto and popped from the queue to ensure that the number of arrays in the averaging process is consistent (this part of the process is fixed in the Setup() for this benchmark testing). For comparison purposes, the benchmark results are shown after the code.
What I am looking for is the fastest, most efficient way to average the values at each index across all the arrays, and return a new array (of the same size) where each index is the average of that index across the set of arrays. The count of arrays to be averaged can range from 3 to 25 (the code below sets the benchmark param to 10). I have 2 different averaging methods in the test; the 2nd is significantly faster, 6-7 times faster than the first. My first question is: is there any way to achieve this faster, at O(1) or O(log n) time complexity?
Secondly, I am using a Queue (which may be changed to ConcurrentQueue in the implementation) as a holder for the arrays to be processed. My primary reason for using a queue is that I can guarantee FIFO processing of the feed of arrays, which is critical. Also, I can iterate over the values in the Queue with a foreach loop (just like a List) without having to dequeue until I am ready. I would be interested to know whether this hinders performance, as I haven't benchmarked it. Keep in mind it must be thread-safe. If you have an alternative way to process multiple sets of array data in a thread-safe manner, I am "all ears".
The reason for the performance requirement is that this is not the only process that is happening. I have multiple devices sending array results, "streamed" at an approximate rate of one every 1-5 milliseconds per device, coming from different threads/processes/connections, and the data still has several other, much more intensive algorithms to pass through, so this cannot become a bottleneck.
Any insights on optimizations and performance are appreciated.
using System;
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;
namespace ArrayAverage
{
    public class ArrayAverage
    {
        [Params(10)]
        public int CountToAverage;

        [Params(512, 2048)]
        public int PixelSize;

        static Queue<double[]> calcRepo = new Queue<double[]>();
        static List<double[]> spectra = new();

        [Benchmark]
        public double[] CalculateIndexAverages()
        {
            // This is too slow
            var avg = new double[PixelSize];
            for (int i = 0; i < PixelSize; i++)
            {
                foreach (var arrayData in calcRepo)
                {
                    avg[i] += arrayData[i];
                }
                avg[i] /= calcRepo.Count;
            }
            return avg;
        }

        [Benchmark]
        public double[] CalculateIndexAverages2()
        {
            // this is faster, but is it the fastest?
            var sum = new double[PixelSize];
            int cnt = calcRepo.Count;
            foreach (var arrayData in calcRepo)
            {
                for (int i = 0; i < PixelSize; i++)
                {
                    sum[i] += arrayData[i];
                }
            }
            var avg = new double[PixelSize];
            for (int i = 0; i < PixelSize; i++)
            {
                avg[i] = sum[i] / cnt;
            }
            return avg;
        }

        [GlobalSetup]
        public void Setup()
        {
            // Just generating some data as a simple triangular curve simulating a range of spectra
            for (double offset = 0; offset < CountToAverage; offset++)
            {
                var values = new double[PixelSize];
                var decrement = 0;
                for (int i = 0; i < PixelSize; i++)
                {
                    if (i > (PixelSize / 2))
                        decrement--;
                    values[i] = (offset / 7) + i + (decrement * 2);
                }
                calcRepo.Enqueue(values);
            }
        }
    }

    public class App
    {
        public static void Main()
        {
            BenchmarkRunner.Run<ArrayAverage>();
        }
    }
}
Benchmark results:
BenchmarkDotNet=v0.13.1, OS=Windows 10.0.19043.1348 (21H1/May2021Update)
Intel Core i7-6700HQ CPU 2.60GHz (Skylake), 1 CPU, 8 logical and 4 physical cores
.NET SDK=6.0.100-preview.7.21379.14
[Host] : .NET 5.0.12 (5.0.1221.52207), X64 RyuJIT [AttachedDebugger]
DefaultJob : .NET 5.0.12 (5.0.1221.52207), X64 RyuJIT
Method                  | Arrays To Average | Array Size | Mean       | Error     | StdDev
----------------------- | ----------------- | ---------- | ---------- | --------- | ---------
CalculateIndexAverages  | 10                | 512        | 32.164 μs  | 0.5485 μs | 0.5130 μs
CalculateIndexAverages2 | 10                | 512        | 5.792 μs   | 0.1135 μs | 0.2241 μs
CalculateIndexAverages  | 10                | 2048       | 123.628 μs | 2.3394 μs | 1.9535 μs
CalculateIndexAverages2 | 10                | 2048       | 22.311 μs  | 0.4366 μs | 0.8093 μs
When dealing with simple operations on a large amount of data, you'd be very interested in SIMD:
SIMD stands for "single instruction, multiple data". It’s a set of processor instructions that ... allows mathematical operations to execute over a set of values in parallel.
In your particular case, using the Vector<T> example would give you a quick win. Naively converting your fastest method to use Vectors already gives a ~2x speed-up on my PC.
// Requires: using System.Numerics;
public double[] CalculateIndexAverages4()
{
    // Assumption: PixelSize is a round multiple of Vector<>.Count.
    // If not, you'll have to add in the 'remainder' from the example.
    var batch = Vector<double>.Count;
    var sum = new double[PixelSize];
    foreach (var arrayData in calcRepo)
    {
        // Vectorised summing:
        for (int i = 0; i <= PixelSize - batch; i += batch)
        {
            var vSum = new Vector<double>(sum, i);
            var vData = new Vector<double>(arrayData, i);
            (vSum + vData).CopyTo(sum, i);
        }
    }
    var vCnt = Vector<double>.One * calcRepo.Count;
    // Reuse sum[] for averaging, so we don't incur memory allocation cost
    for (int i = 0; i <= PixelSize - batch; i += batch)
    {
        var vSum = new Vector<double>(sum, i);
        (vSum / vCnt).CopyTo(sum, i);
    }
    return sum;
}
Vector<T>.Count tells you how many items are parallelised into one instruction. In the case of double, it's likely to be 4 on most modern CPUs supporting AVX2.
If you're okay with losing precision and can go to float, you'll get a much bigger win by again doubling the amount of data processed in a single CPU op. All of this without even changing your algorithm.
You can further optimize the code by reducing memory allocations. If the method is called frequently, time spent on GC will dominate completely.
// Assuming the data fits on the stack. Some 100k pixels should be safe.
Span<double> sum = stackalloc double[PixelSize];
// ...
Span<double> avg = stackalloc double[PixelSize];
And possibly also remove the extra stack-allocation of avg and simply reuse the sum:
for (int i = 0; i < sum.Length; i++)
{
    sum[i] /= cnt;
}
// TODO: Avoid array allocation! Maybe use a pre-allocated array and fill it here.
return sum.ToArray();
In my opinion this would be fairly well-optimized code. A major reason the second option is faster is that it accesses memory linearly, instead of jumping between multiple different arrays. Another factor is that foreach loops have some overhead, so placing the foreach in the outer loop also helps a bit.
You might gain a little performance by switching the queue and foreach loop to a list/array and for loop, but since PixelSize is much larger than CountToAverage I would expect the benefit to be fairly small.
Unrolling the loop to process, say, 4 values at a time might help a bit. It is possible for the C# compiler to apply such an optimization automatically, but it is often difficult to tell which optimizations are applied, so it might be easier just to test.
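For illustration, a manually unrolled inner summing loop might look like this (a sketch assuming the length is a multiple of 4; other lengths need a trailing scalar loop):

// Hypothetical manual unroll of the inner summing loop.
// Assumes data.Length % 4 == 0; add a remainder loop otherwise.
static void AddUnrolled(double[] sum, double[] data)
{
    for (int i = 0; i < data.Length; i += 4)
    {
        sum[i]     += data[i];
        sum[i + 1] += data[i + 1];
        sum[i + 2] += data[i + 2];
        sum[i + 3] += data[i + 3];
    }
}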
The next step would be to look at parallelization. Simple summing code like this might benefit from SIMD to process multiple values at a time. But the link shows that using processor-specific intrinsics has a much larger benefit than the more general Vector<T>, though it may require separate code paths for each platform you are targeting. The link also has performance examples of summing values at various levels of optimization, with example code, so it is well worth a read.
Another option would be to use multiple threads with Parallel.For/ForEach, but at ~6 μs it is likely that the overhead would be larger than any gains unless the size of the data is significantly larger.

Create huge dictionary

In my quest for the primes, I've already asked this question: Can't create huge arrays, which led me to create my own class of fake arrays based on a dictionary of arrays: private Dictionary<int, Array> arrays = new Dictionary<int, Array>();
I can now create fake arrays of a lot of bools (like 10 000 000 000) using the code below:
public class CustomArray
{
    private Dictionary<int, Array> arrays = new Dictionary<int, Array>();

    public CustomArray(ulong length)
    {
        int i = 0;
        while (length > 0x7FFFFFC7)
        {
            length -= 0x7FFFFFC7;
            arrays[i] = new bool[0x7FFFFFC7];
            i++;
        }
        arrays[i] = new bool[length];
    }
}
But it crashes as soon as I ask for a CustomArray of 100 000 000 000 elements. It works well for the first 25 iterations (my Dictionary contains 25 arrays of 0x7FFFFFC7 elements each), but then it crashes with an OutOfMemoryException.
As a reminder: I've got 16 GB of memory, VS2013, the program is compiled in 64-bit, I've enabled the gcAllowVeryLargeObjects option, and I don't see any memory peak in Task Manager.
How can I avoid this error?
100 000 000 000 bools means ~93 GB of memory. You only have ~50 GB (including the default allocated virtual memory).
Storing them as bits (not as bytes) would get you down to ~12 GB.
Look at System.Collections.BitArray.
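A sketch of that idea, combining the chunking from the question with bit storage (class and constant names are mine; a single BitArray caps out at int.MaxValue bits, so the flags are split across chunks):

using System;
using System.Collections;

// Bit-per-flag "huge array" built from chunked BitArrays.
// 100 000 000 000 flags at 1 bit each is ~12.5 GB, matching the estimate above.
public sealed class HugeBitArray
{
    private const int ChunkBits = 1_000_000_000; // 1e9 bits per chunk = 125 MB

    private readonly BitArray[] chunks;

    public HugeBitArray(ulong length)
    {
        int chunkCount = (int)((length + ChunkBits - 1) / ChunkBits);
        chunks = new BitArray[chunkCount];
        for (int i = 0; i < chunkCount; i++)
        {
            ulong remaining = length - (ulong)i * ChunkBits;
            chunks[i] = new BitArray((int)Math.Min(ChunkBits, remaining));
        }
    }

    public bool this[ulong index]
    {
        get => chunks[index / ChunkBits][(int)(index % ChunkBits)];
        set => chunks[index / ChunkBits][(int)(index % ChunkBits)] = value;
    }
}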

How can I reliably test/benchmark the size (including empty buckets) of a .Net HashSet<T> object?

As an exercise in personal education and experimentation, I want to create my own HashTable class. Specifically, I'd like to write this object, without using any existing code (i.e. this object will not inherit from another class) other than mapping to existing interfaces for testing purposes.
Since I'm planning on writing this in C#, my "benchmark" is going to be the .NET HashSet<T> class. I can easily test the execution time of add, remove, and look-up requests, but I have no clue how to test the size of the HashSet benchmark object, including all buckets that are empty and waiting for future add requests.
How can I track the size of a HashSet<T> object as it dynamically grows to make room for future insertions?
To be clear, I don't need to know the exact number of bytes (I understand that the .NET framework makes it a bit difficult to get the exact size of many types of objects); rather, I'd prefer to know how many buckets are in use and how many are empty, waiting to be used, as I execute various kinds of tests.
The best way to get the number and size of the buckets is to use reflection. The only trouble is that you need to understand the collection's behavior first. After reading the code a bit and doing some trial and error, it seems you need to count the size of the private m_buckets array to get the number of buckets, and count the number of values greater than 0 to get the number of used buckets. The method would look like:
static void CountBuckets<T>(HashSet<T> hashSet)
{
    // Requires: using System.Linq;
    var field = typeof(HashSet<T>).GetField("m_buckets", System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.NonPublic);
    var buckets = (int[])field.GetValue(hashSet);

    int numberOfBuckets = 0;
    int numberOfBucketsUsed = 0;
    if (buckets != null)
    {
        numberOfBuckets = buckets.Length;
        numberOfBucketsUsed = buckets.Where(i => i != 0).Count();
    }

    Console.WriteLine("Number of buckets: {0} / Used: {1}", numberOfBuckets, numberOfBucketsUsed);
}
To test it, I first created a custom class where I could manually set the hash code:
public class Hash
{
    private readonly int hashCode;

    public Hash(int hashCode)
    {
        this.hashCode = hashCode;
    }

    public override int GetHashCode()
    {
        return this.hashCode;
    }
}
From there, I did some tests:
var hashSet = new HashSet<Hash>();
CountBuckets(hashSet);
// Number of buckets: 0 / Used: 0
var firstHash = new Hash(0);
hashSet.Add(firstHash);
CountBuckets(hashSet);
// Number of buckets: 3 / Used: 1
hashSet.Add(new Hash(1));
hashSet.Add(new Hash(2));
CountBuckets(hashSet);
// Number of buckets: 3 / Used: 3
hashSet.Add(new Hash(3));
CountBuckets(hashSet);
// Number of buckets: 7 / Used: 4
hashSet.Add(new Hash(1));
CountBuckets(hashSet);
// Number of buckets: 7 / Used: 4
hashSet.Remove(firstHash);
CountBuckets(hashSet);
// Number of buckets: 7 / Used: 3
It sounds consistent with the intuitive behavior. At first, the number of buckets is 0. After adding an element, it expands to 3. The bucket count stays stable until a fourth element is added, expanding it to 7. When simulating a hash collision, the number of used buckets stays stable, as expected. And removing an element decreases the number of used buckets.
I am not very familiar with the internals of HashSet, but you can look at its source and use reflection to get its internal values:
HashSet<int> hashSet = new HashSet<int>();
var countField = typeof(HashSet<int>).GetField("m_count", BindingFlags.NonPublic | BindingFlags.Instance);
var freeListField = typeof(HashSet<int>).GetField("m_freeList", BindingFlags.NonPublic | BindingFlags.Instance);
var count = countField.GetValue(hashSet);
var freeList = freeListField.GetValue(hashSet);
Note: such a violation of private member access is of course very ugly, but I believe it can be accepted in your development/testing phase.
That is an interesting question... I have a radical suggestion for you:
Start your application and get the memory size before initializing the HashSet. You can do so by using Process.GetCurrentProcess().WorkingSet64 (on MSDN: http://msdn.microsoft.com/en-us/library/system.diagnostics.process.workingset64(v=vs.110).aspx).
Then populate your HashSet and print Process.GetCurrentProcess().WorkingSet64 again. The difference would be the size you seek.
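A rough sketch of that measurement (my own illustration; GC.GetTotalMemory is shown alongside the working set, and both numbers are approximations):

using System;
using System.Collections.Generic;
using System.Diagnostics;

class MemoryProbe
{
    static void Main()
    {
        // Force a full collection so the baseline is as quiet as possible.
        long before = GC.GetTotalMemory(forceFullCollection: true);

        var hashSet = new HashSet<int>();
        for (int i = 0; i < 1_000_000; i++)
            hashSet.Add(i);

        long after = GC.GetTotalMemory(forceFullCollection: true);
        Console.WriteLine($"Approximate HashSet footprint: {after - before:N0} bytes");
        Console.WriteLine($"Working set: {Process.GetCurrentProcess().WorkingSet64:N0} bytes");
        GC.KeepAlive(hashSet); // keep the set reachable until after measurement
    }
}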

How to improve performance of this algorithm?

I have a text file with 100000 pairs: word and frequency.
test.in file layout:
line 1 - total count of word-frequency pairs
lines 2 to ~100 001 - word-frequency pairs
line 100 002 - total count of user input words
line 100 003 to the end - user input words
I parse this file and put the words in
Dictionary<string,double> dictionary;
And I want to execute some search + order logic in the following code:
for (int i = 0; i < 15000; i++)
{
    tempInputWord = // take data from file (or other sources)
    var adviceWords = dictionary
        .Where(p => p.Key.StartsWith(tempInputWord, StringComparison.Ordinal))
        .OrderByDescending(ks => ks.Value)
        .ThenBy(ks => ks.Key, StringComparer.Ordinal)
        .Take(10)
        .ToList();
    // some output
}
The problem: This code must run in less than 10 seconds.
On my computer (Core i5 2400, 8 GB RAM), even with Parallel.For(), it takes about 91 seconds.
Can you give me some advice on how to increase performance?
UPDATE :
Hooray! We did it!
Thank you @CodesInChaos, @usr, @T_D, and everyone who was involved in solving the problem.
The final code:
var kvList = dictionary.OrderBy(ks => ks.Key, StringComparer.Ordinal).ToList();
var strComparer = new MyStringComparer();
var intComparer = new MyIntComparer();
var kvListSize = kvList.Count;
var allUserWords = new List<string>();
for (int i = 0; i < userWordQuantity; i++)
{
    var searchWord = Console.ReadLine();
    allUserWords.Add(searchWord);
}
var result = allUserWords
    .AsParallel()
    .AsOrdered()
    .Select(searchWord =>
    {
        int startIndex = kvList.BinarySearch(new KeyValuePair<string, int>(searchWord, 0), strComparer);
        if (startIndex < 0)
            startIndex = ~startIndex;
        var matches = new List<KeyValuePair<string, int>>();
        for (int j = startIndex; j < kvListSize; j++)
        {
            if (kvList[j].Key.StartsWith(searchWord, StringComparison.Ordinal))
                matches.Add(kvList[j]);
            else
                break;
        }
        matches.Sort(intComparer);
        var res = matches.Select(s => s.Key).Take(10).ToList();
        return res;
    });
foreach (var adviceWords in result)
{
    foreach (var adviceWord in adviceWords)
    {
        Console.WriteLine(adviceWord);
    }
    Console.WriteLine();
}
6 sec (9 sec with LINQ instead of the manual loop)
You are not at all using any algorithmic strength of the dictionary. Ideally, you'd use a tree structure so that you can perform prefix lookups. On the other hand you are within 3.7x of your performance goal. I think you can reach that by just optimizing the constant factor in your algorithm.
Don't use LINQ in perf-critical code. Manually loop over all collections and collect results into a List<T>. That turns out to give a major speed-up in practice.
Don't use a dictionary at all. Just use a KeyValuePair<T1, T2>[] and run through it using a foreach loop. This is the fastest possible way to traverse a set of pairs.
Could look like this:
KeyValuePair<T1, T2>[] items;
List<KeyValuePair<T1, T2>> matches = new ...(); // Consider pre-sizing this.

// This could be a parallel loop as well.
// Make sure to not synchronize too much on matches.
// If there tend to be few matches a lock will be fine.
foreach (var item in items)
{
    if (IsMatch(item))
    {
        matches.Add(item);
    }
}

matches.Sort(...); // Sort in-place

return matches.Take(10); // Maybe matches.RemoveRange(10, matches.Count - 10) is better
That should exceed a 3.7x speedup.
If you need more, try stuffing the items into a dictionary keyed on the first char of Key. That way you can look up all items matching tempInputWord[0]. That should reduce search time by the selectivity of the first char of tempInputWord; for English text that would be on the order of 26 or 52. This is a primitive form of prefix lookup with one level of lookup. Not pretty, but maybe it is enough; a sketch follows.
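A sketch of that one-level bucket (helper names are mine; assumes non-empty keys and ordinal semantics):

using System;
using System.Collections.Generic;
using System.Linq;

static class PrefixBuckets
{
    // Bucket the pairs by first character; a query then scans only its own bucket.
    public static Dictionary<char, KeyValuePair<string, int>[]> Build(
        IEnumerable<KeyValuePair<string, int>> items)
    {
        return items.GroupBy(p => p.Key[0])
                    .ToDictionary(g => g.Key, g => g.ToArray());
    }

    public static List<KeyValuePair<string, int>> FindMatches(
        Dictionary<char, KeyValuePair<string, int>[]> buckets, string searchWord)
    {
        var matches = new List<KeyValuePair<string, int>>();
        if (buckets.TryGetValue(searchWord[0], out var bucket))
            foreach (var pair in bucket)
                if (pair.Key.StartsWith(searchWord, StringComparison.Ordinal))
                    matches.Add(pair);
        return matches;
    }
}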
I think the best way would be to use a Trie data structure instead of a dictionary. A Trie saves all the words in a tree structure; a node can represent all the words that start with the same letters. So if you look up your search word tempInputWord in a Trie, you get a node that represents all the words starting with tempInputWord, and you just have to traverse all the child nodes. So you have just one search operation. The link to the Wikipedia article also mentions some other advantages over hash tables (which is basically what a Dictionary is):
Looking up data in a trie is faster in the worst case, O(m) time (where m is the length of a search string), compared to an imperfect hash table. An imperfect hash table can have key collisions. A key collision is the hash function mapping of different keys to the same position in a hash table. The worst-case lookup speed in an imperfect hash table is O(N) time, but far more typically is O(1), with O(m) time spent evaluating the hash.
There are no collisions of different keys in a trie.
Buckets in a trie, which are analogous to hash table buckets that store key collisions, are necessary only if a single key is associated with more than one value.
There is no need to provide a hash function or to change hash functions as more keys are added to a trie.
A trie can provide an alphabetical ordering of the entries by key.
And here are some ideas for creating a trie in C#.
This should at least speed up the lookup; however, building the Trie might be slower.
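A minimal prefix trie along those lines might look like this (my own bare-bones sketch, not the Trie library used in the update below; no deletion, not thread-safe):

using System.Collections.Generic;

public sealed class SimpleTrie<TValue>
{
    private sealed class Node
    {
        public readonly Dictionary<char, Node> Children = new Dictionary<char, Node>();
        public readonly List<TValue> Values = new List<TValue>();
    }

    private readonly Node root = new Node();

    // Store a value under a word, one node per character.
    public void Add(string key, TValue value)
    {
        var node = root;
        foreach (char c in key)
        {
            if (!node.Children.TryGetValue(c, out var child))
                node.Children[c] = child = new Node();
            node = child;
        }
        node.Values.Add(value);
    }

    // Yield every value whose key starts with the given prefix.
    public IEnumerable<TValue> Retrieve(string prefix)
    {
        var node = root;
        foreach (char c in prefix)
            if (!node.Children.TryGetValue(c, out node))
                yield break;
        foreach (var v in Collect(node))
            yield return v;
    }

    private static IEnumerable<TValue> Collect(Node node)
    {
        foreach (var v in node.Values) yield return v;
        foreach (var child in node.Children.Values)
            foreach (var v in Collect(child))
                yield return v;
    }
}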
Update:
Ok, I tested it myself using a file with frequencies of English words that uses the same format as yours. This is my code, which uses the Trie class that you also tried to use.
static void Main(string[] args)
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    var trie = new Trie<KeyValuePair<string, int>>();
    // build trie with your value pairs
    var lines = File.ReadLines("en.txt");
    foreach (var line in lines.Take(100000))
    {
        var split = line.Split(' ');
        trie.Add(split[0], new KeyValuePair<string, int>(split[0], int.Parse(split[1])));
    }
    Console.WriteLine("Time needed to read file and build Trie with 100000 words: " + sw.Elapsed);
    sw.Reset();
    // test with 10000 search words
    sw.Start();
    foreach (string line in lines.Take(10000))
    {
        var searchWord = line.Split(' ')[0];
        var allPairs = trie.Retrieve(searchWord);
        var bestWords = allPairs.OrderByDescending(kv => kv.Value).ThenBy(kv => kv.Key).Select(kv => kv.Key).Take(10);
        var output = bestWords.Aggregate("", (s1, s2) => s1 + ", " + s2);
        Console.WriteLine(output);
    }
    Console.WriteLine("Time to process 10000 different searchWords: " + sw.Elapsed);
}
My results on a pretty similar machine:
Time needed to read file and build Trie with 100000 words: 00:00:00.7397839
Time to process 10000 different searchWords: 00:00:03.0181700
So I think you are doing something wrong that we cannot see, for example in the way you measure the time or the way you read the file. As my results show, this stuff should be really fast. The 3 seconds are mainly due to the Console output in the loop, which I needed so that the bestWords variable is used; otherwise it would have been optimized away.
Replace the dictionary with a List<KeyValuePair<string, decimal>>, sorted by the key.
For the search I use the fact that, with ordinal comparison, a prefix sorts directly before the strings that start with it. So I can use a binary search to find the first candidate, and since the candidates are contiguous I can replace Where with TakeWhile.
int startIndex = dictionary.BinarySearch(searchWord, comparer);
if (startIndex < 0)
    startIndex = ~startIndex;
var adviceWords = dictionary
    .Skip(startIndex)
    .TakeWhile(p => p.Key.StartsWith(searchWord, StringComparison.Ordinal))
    .OrderByDescending(ks => ks.Value)
    .ThenBy(ks => ks.Key)
    .Select(s => s.Key)
    .Take(10).ToList();
Make sure to use ordinal comparison for all operations, including the initial sort, the binary search and the StartsWith check.
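The comparer above is left undefined; assuming the KeyValuePair-based list from the accepted solution, an ordinal key comparer might look like this sketch (MyStringComparer in the final code plays this role):

using System;
using System.Collections.Generic;

// Ordinal comparer over the pair's key, usable both for the initial sort
// and for List<T>.BinarySearch.
sealed class KeyOrdinalComparer : IComparer<KeyValuePair<string, int>>
{
    public int Compare(KeyValuePair<string, int> x, KeyValuePair<string, int> y)
        => string.CompareOrdinal(x.Key, y.Key);
}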
I would call Console.ReadLine outside the parallel loop. Probably using AsParallel().Select(...) on the collection of search words instead of Parallel.For.
If you want profiling, separate the reading of the file and see how long that takes.
Data calculation, collection, and presentation could also be separate steps.
If you want concurrency AND a dictionary, look at ConcurrentDictionary, maybe even more for reliability than for performance, but probably for both:
http://msdn.microsoft.com/en-us/library/dd287191(v=vs.110).aspx
Assuming the 10 is constant, why is everyone storing the entire data set? Memory is not free. The fastest solution is to store the first 10 entries in a list and sort it. Then maintain that 10-element sorted list as you traverse the rest of the data set, removing the 11th element every time you insert one (a sketch follows).
The above method works best for small counts. If you had to take the first 5000 objects, consider using a binary heap instead of a list.
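A sketch of that running top-10 (names are mine; the list stays sorted ascending by frequency, and ties are handled arbitrarily):

using System.Collections.Generic;

static class TopK
{
    // Stream the data set once, keeping only the 10 best entries.
    // The weakest candidate is always at index 0.
    public static List<KeyValuePair<string, int>> TopTen(
        IEnumerable<KeyValuePair<string, int>> stream)
    {
        var best = new List<KeyValuePair<string, int>>(11);
        foreach (var item in stream)
        {
            int pos = best.BinarySearch(item, ByValue.Instance);
            if (pos < 0) pos = ~pos;
            best.Insert(pos, item);
            if (best.Count > 10)
                best.RemoveAt(0); // evict the lowest frequency
        }
        return best;
    }

    private sealed class ByValue : IComparer<KeyValuePair<string, int>>
    {
        public static readonly ByValue Instance = new ByValue();
        public int Compare(KeyValuePair<string, int> x, KeyValuePair<string, int> y)
            => x.Value.CompareTo(y.Value);
    }
}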

Comparing 2 huge lists using C# multiple times (with a twist)

Hey everyone, great community you got here. I'm an Electrical Engineer doing some "programming" work on the side to help pay for bills. I say this because I want you to take into consideration that I don't have proper Computer Science training, but I have been coding for the past 7 years.
I have several Excel tables with information (all numeric): basically "dialed phone numbers" in one column and the number of minutes to each of those numbers in another. Separately I have a list of "carrier prefix code numbers" for the different carriers in my country. What I want to do is separate all the "traffic" per carrier. Here is the scenario:
First dialed number row: 123456789ABCD,100 <-- That would be a 13 digit phone number and 100 minutes.
I have a list of 12,000+ prefix codes for carrier 1, these codes vary in length, and I need to check everyone of them:
Prefix Code 1: 1234567 <-- this code is 7 digits long.
I need to take the first 7 digits of the dialed number and compare them to the prefix code; if a match is found, I add the number of minutes to a subtotal for later use. Please consider that not all prefix codes are the same length; sometimes they are shorter or longer.
Most of this should be a piece of cake, and I should be able to do it, but I'm getting kind of scared by the massive amount of data: sometimes the dialed-number lists run up to 30,000 numbers, the "carrier prefix code" lists are around 13,000 rows long, and I usually check 3 carriers, which means a lot of "matches".
Does anyone have an idea of how to do this efficiently in C#? Or any other language, to be honest. I need to do this quite often, so designing a tool to do it would make much more sense. I need a good perspective from someone who does have that "Computer Scientist" background.
The lists don't need to be in Excel worksheets; I can export to a CSV file and work from there. I don't need an "MS Office" interface.
Thanks for your help.
Update:
Thank you all for taking the time to answer my question. I guess in my ignorance I exaggerated the word "efficient". I don't perform this task every few seconds; it's something I have to do once per day, and I hate doing it with Excel and VLOOKUPs, etc.
I've learned about new concepts from you guys and I hope I can build a solution(s) using your ideas.
UPDATE
You can do a simple trick - group the prefixes by their first digits into a dictionary and match the numbers only against the correct subset. I tested it with the following two LINQ statements, assuming every prefix has at least three digits.
const Int32 minimumPrefixLength = 3;

var groupedPrefixes = prefixes
    .GroupBy(p => p.Substring(0, minimumPrefixLength))
    .ToDictionary(g => g.Key, g => g);

var numberPrefixes = numbers
    .Select(n => groupedPrefixes[n.Substring(0, minimumPrefixLength)]
        .First(n.StartsWith))
    .ToList();
So how fast is this? 15,000 prefixes and 50,000 numbers took less than 250 milliseconds. Fast enough for two lines of code?
Note that the performance heavily depends on the minimum prefix length (MPL), hence on the number of prefix groups you can construct.
MPL   Runtime
-----------------
1     10,198 ms
2      1,179 ms
3        205 ms
4        130 ms
5        107 ms
Just to give a rough idea - I did just one run and had a lot of other stuff going on.
Original answer
I wouldn't care much about performance - an average desktop PC can quite easily deal with database tables of 100 million rows. Maybe it takes five minutes, but I assume you don't want to perform the task every other second.
I just made a test. I generated a list of 15,000 unique prefixes with 5 to 10 digits. From these prefixes I generated 50,000 numbers consisting of a prefix plus an additional 5 to 10 digits.
List<String> prefixes = GeneratePrefixes();
List<String> numbers = GenerateNumbers(prefixes);
Then I used the following LINQ to Object query to find the prefix of each number.
var numberPrefixes = numbers.Select(n => prefixes.First(n.StartsWith)).ToList();
Well, it took about a minute on my Core 2 Duo laptop at 2.0 GHz. So if one minute of processing time is acceptable, maybe two or three if you include aggregation, I would not try to optimize anything. Of course, it would be really nice if the program could do the task in a second or two, but that will add quite a bit of complexity and many things to get wrong. And it takes time to design, write, and test. The LINQ statement took me only seconds.
Test application
Note that generating many prefixes is really slow and might take a minute or two.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
namespace Test
{
    static class Program
    {
        static void Main()
        {
            // Set number of prefixes and calls to not more than 50 to get results
            // printed to the console.
            Console.Write("Generating prefixes");
            List<String> prefixes = Program.GeneratePrefixes(5, 10, 15);
            Console.WriteLine();
            Console.Write("Generating calls");
            List<Call> calls = Program.GenerateCalls(prefixes, 5, 10, 50);
            Console.WriteLine();
            Console.WriteLine("Processing started.");
            Stopwatch stopwatch = new Stopwatch();
            const Int32 minimumPrefixLength = 5;
            stopwatch.Start();
            var groupedPrefixes = prefixes
                .GroupBy(p => p.Substring(0, minimumPrefixLength))
                .ToDictionary(g => g.Key, g => g);
            var result = calls
                .GroupBy(c => groupedPrefixes[c.Number.Substring(0, minimumPrefixLength)]
                    .First(c.Number.StartsWith))
                .Select(g => new Call(g.Key, g.Sum(i => i.Duration)))
                .ToList();
            stopwatch.Stop();
            Console.WriteLine("Processing finished.");
            Console.WriteLine(stopwatch.Elapsed);
            if ((prefixes.Count <= 50) && (calls.Count <= 50))
            {
                Console.WriteLine("Prefixes");
                foreach (String prefix in prefixes.OrderBy(p => p))
                {
                    Console.WriteLine(String.Format(" prefix={0}", prefix));
                }
                Console.WriteLine("Calls");
                foreach (Call call in calls.OrderBy(c => c.Number).ThenBy(c => c.Duration))
                {
                    Console.WriteLine(String.Format(" number={0} duration={1}", call.Number, call.Duration));
                }
                Console.WriteLine("Result");
                foreach (Call call in result.OrderBy(c => c.Number))
                {
                    Console.WriteLine(String.Format(" prefix={0} accumulated duration={1}", call.Number, call.Duration));
                }
            }
            Console.ReadLine();
        }

        private static List<String> GeneratePrefixes(Int32 minimumLength, Int32 maximumLength, Int32 count)
        {
            Random random = new Random();
            List<String> prefixes = new List<String>(count);
            StringBuilder stringBuilder = new StringBuilder(maximumLength);
            while (prefixes.Count < count)
            {
                stringBuilder.Length = 0;
                for (int i = 0; i < random.Next(minimumLength, maximumLength + 1); i++)
                {
                    stringBuilder.Append(random.Next(10));
                }
                String prefix = stringBuilder.ToString();
                if (prefixes.Count % 1000 == 0)
                {
                    Console.Write(".");
                }
                if (prefixes.All(p => !p.StartsWith(prefix) && !prefix.StartsWith(p)))
                {
                    prefixes.Add(stringBuilder.ToString());
                }
            }
            return prefixes;
        }

        private static List<Call> GenerateCalls(List<String> prefixes, Int32 minimumLength, Int32 maximumLength, Int32 count)
        {
            Random random = new Random();
            List<Call> calls = new List<Call>(count);
            StringBuilder stringBuilder = new StringBuilder();
            while (calls.Count < count)
            {
                stringBuilder.Length = 0;
                stringBuilder.Append(prefixes[random.Next(prefixes.Count)]);
                for (int i = 0; i < random.Next(minimumLength, maximumLength + 1); i++)
                {
                    stringBuilder.Append(random.Next(10));
                }
                if (calls.Count % 1000 == 0)
                {
                    Console.Write(".");
                }
                calls.Add(new Call(stringBuilder.ToString(), random.Next(1000)));
            }
            return calls;
        }

        private class Call
        {
            public Call(String number, Decimal duration)
            {
                this.Number = number;
                this.Duration = duration;
            }

            public String Number { get; private set; }
            public Decimal Duration { get; private set; }
        }
    }
}
It sounds to me like you need to build a trie from the carrier prefixes. You'll end up with a single trie, where the terminating nodes tell you the carrier for that prefix.
Then create a dictionary from carrier to an int or long (the total).
Then for each dialed number row, just work your way down the trie until you find the carrier. Find the total number of minutes so far for the carrier, and add the current row - then move on.
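A minimal sketch of such a digit trie (names are my own illustration; a non-null Carrier marks the end of a registered prefix):

using System.Collections.Generic;

public sealed class CarrierTrie
{
    private sealed class Node
    {
        public readonly Node[] Children = new Node[10]; // one slot per digit
        public string Carrier; // non-null marks a terminating node
    }

    private readonly Node root = new Node();

    public void AddPrefix(string prefix, string carrier)
    {
        var node = root;
        foreach (char c in prefix)
            node = node.Children[c - '0'] ??= new Node();
        node.Carrier = carrier;
    }

    // Walk the dialed number digit by digit and return the carrier of the
    // longest matching prefix, or null if no prefix matches.
    public string FindCarrier(string dialedNumber)
    {
        var node = root;
        string match = null;
        foreach (char c in dialedNumber)
        {
            node = node.Children[c - '0'];
            if (node == null) break;
            if (node.Carrier != null) match = node.Carrier;
        }
        return match;
    }
}

The accumulation step is then a Dictionary<string, long> from carrier to total minutes, updated once per row with the FindCarrier result.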
The easiest data structure that would do this fairly efficiently would be a list of sets. Make a Set for each carrier to contain all the prefixes.
Now, to associate a call with a carrier:
foreach (Carrier carrier in carriers)
{
    bool found = false;
    for (int length = 1; length <= 7; length++)
    {
        int prefix = ExtractDigits(callNumber, length);
        if (carrier.Prefixes.Contains(prefix))
        {
            carrier.Calls.Add(callNumber);
            found = true;
            break;
        }
    }
    if (found)
        break;
}
If you have 10 carriers, there will be up to 70 set lookups per call. But a lookup in a set isn't slow (much faster than a linear search), so this should give you quite a big speed-up over a brute-force linear search.
You can go a step further and group the prefixes for each carrier by length. That way, if a carrier only has prefixes of length 7 and 4, you'd know to extract and look up only those lengths, each time looking in the set of prefixes of that length; see the sketch below.
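A sketch of that per-length grouping (illustrative names, assuming the prefixes are kept as strings):

using System.Collections.Generic;
using System.Linq;

sealed class CarrierPrefixes
{
    // One HashSet per prefix length this carrier actually uses.
    private readonly Dictionary<int, HashSet<string>> byLength;

    public CarrierPrefixes(IEnumerable<string> prefixes)
    {
        byLength = prefixes.GroupBy(p => p.Length)
                           .ToDictionary(g => g.Key, g => new HashSet<string>(g));
    }

    // Only extract and look up the lengths that exist for this carrier.
    public bool Matches(string dialedNumber)
    {
        foreach (var kv in byLength)
            if (dialedNumber.Length >= kv.Key &&
                kv.Value.Contains(dialedNumber.Substring(0, kv.Key)))
                return true;
        return false;
    }
}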
How about dumping your data into a couple of database tables and then query them using SQL? Easy!
CREATE TABLE dbo.dialled_numbers ( number VARCHAR(100), minutes INT )
CREATE TABLE dbo.prefixes ( prefix VARCHAR(100) )
-- now populate the tables, create indexes etc
-- and then just run your query...
SELECT p.prefix,
       SUM(n.minutes) AS total_minutes
FROM dbo.dialled_numbers AS n
INNER JOIN dbo.prefixes AS p
    ON n.number LIKE p.prefix + '%'
GROUP BY p.prefix
(This was written for SQL Server, but should be very simple to translate for any other DBMS.)
Maybe it would be simpler (not necessarily more efficient) to do it in a database instead of C#.
You could insert the rows on the database and on insert determine the carrier and include it in the record (maybe in an insert trigger).
Then your report would be a sum query on the table.
I would probably just put the entries in a List, sort it, then use a binary search to look for matches. Tailor the binary search match criteria to return the first item that matches, then iterate along the list until you find one that doesn't match. A binary search takes only around 15 comparisons to search a list of 30,000 items.
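A sketch of that sort-then-binary-search idea (my own illustration; with ordinal ordering, a matching prefix sorts at or directly before the dialed number itself, so the scan walks backwards within the number's leading-digit block):

using System;
using System.Collections.Generic;

static class SortedPrefixSearch
{
    public static bool MatchesAnyPrefix(List<string> sortedPrefixes, string number)
    {
        // Find where the number itself would sit among the sorted prefixes.
        int pos = sortedPrefixes.BinarySearch(number, StringComparer.Ordinal);
        if (pos < 0) pos = ~pos;
        // Any prefix of the number sorts before it, so walk backwards.
        for (int i = pos - 1; i >= 0; i--)
        {
            if (number.StartsWith(sortedPrefixes[i], StringComparison.Ordinal))
                return true;
            // Stop once entries no longer share the number's leading digit.
            if (string.CompareOrdinal(sortedPrefixes[i], 0, number, 0, 1) != 0)
                break;
        }
        return false;
    }
}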
You may want to use a Hashtable in C#.
This way you have key-value pairs, and your keys could be the phone numbers, and your value the total minutes. If a match is found in the key set, then modify the total minutes, else, add a new key.
You would then just need to modify your searching algorithm, to not look at the entire key, but only the first 7 digits of it.
