What's wrong with my implementation of the KMP algorithm? - c#

static void Main(string[] args)
{
string str = "ABC ABCDAB ABCDABCDABDE";//We should add some text here for
//the performance tests.
string pattern = "ABCDABD";
List<int> shifts = new List<int>();
Stopwatch stopWatch = new Stopwatch();
stopWatch.Start();
NaiveStringMatcher(shifts, str, pattern);
stopWatch.Stop();
Trace.WriteLine(String.Format("Naive string matcher {0}", stopWatch.Elapsed));
foreach (int s in shifts)
{
Trace.WriteLine(s);
}
shifts.Clear();
stopWatch.Restart();
int[] pi = new int[pattern.Length];
Knuth_Morris_Pratt(shifts, str, pattern, pi);
stopWatch.Stop();
Trace.WriteLine(String.Format("Knuth_Morris_Pratt {0}", stopWatch.Elapsed));
foreach (int s in shifts)
{
Trace.WriteLine(s);
}
Console.ReadKey();
}
static IList<int> NaiveStringMatcher(List<int> shifts, string text, string pattern)
{
int lengthText = text.Length;
int lengthPattern = pattern.Length;
for (int s = 0; s < lengthText - lengthPattern + 1; s++ )
{
if (text[s] == pattern[0])
{
int i = 0;
while (i < lengthPattern)
{
if (text[s + i] == pattern[i])
i++;
else break;
}
if (i == lengthPattern)
{
shifts.Add(s);
}
}
}
return shifts;
}
static IList<int> Knuth_Morris_Pratt(List<int> shifts, string text, string pattern, int[] pi)
{
int patternLength = pattern.Length;
int textLength = text.Length;
//ComputePrefixFunction(pattern, pi);
int j;
for (int i = 1; i < pi.Length; i++)
{
j = 0;
while ((i < pi.Length) && (pattern[i] == pattern[j]))
{
j++;
pi[i++] = j;
}
}
int matchedSymNum = 0;
for (int i = 0; i < textLength; i++)
{
while (matchedSymNum > 0 && pattern[matchedSymNum] != text[i])
matchedSymNum = pi[matchedSymNum - 1];
if (pattern[matchedSymNum] == text[i])
matchedSymNum++;
if (matchedSymNum == patternLength)
{
shifts.Add(i - patternLength + 1);
matchedSymNum = pi[matchedSymNum - 1];
}
}
return shifts;
}
Why does my implementation of the KMP algorithm work slower than the naive string-matching algorithm?

The KMP algorithm has two phases: first it builds a table, and then it does a search, directed by the contents of the table.
The naive algorithm has one phase: it does a search. It does that search much less efficiently in the worst case than the KMP search phase.
If the KMP is slower than the naive algorithm then that is probably because building the table is taking you longer than it takes to simply search the string naively in the first place. Naive string matching is usually very fast on short strings. There is a reason why we don't use fancy-pants algorithms like KMP inside the BCL implementations of string searching. By the time you set up the table, you could have done half a dozen searches of short strings with the naive algorithm.
KMP is only a win if you have enormous strings and you are doing lots of searches that allow you to re-use an already-built table. You need to amortize away the huge cost of building the table by doing lots of searches using that table.
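To make the amortization concrete, here is a minimal sketch (my own code, not the poster's; the helper names are invented) that pays for the table once and then reuses it across several searches:

    using System;
    using System.Collections.Generic;

    static class KmpReuse
    {
        // Standard prefix-function construction, O(m): on a mismatch, fall back
        // through the table instead of restarting the comparison at 0.
        public static int[] BuildPrefixTable(string pattern)
        {
            int[] pi = new int[pattern.Length];
            int j = 0;
            for (int i = 1; i < pattern.Length; i++)
            {
                while (j > 0 && pattern[i] != pattern[j])
                    j = pi[j - 1];
                if (pattern[i] == pattern[j])
                    j++;
                pi[i] = j;
            }
            return pi;
        }

        // O(n) search that takes an already-built table.
        public static IEnumerable<int> Search(string text, string pattern, int[] pi)
        {
            int matched = 0;
            for (int i = 0; i < text.Length; i++)
            {
                while (matched > 0 && pattern[matched] != text[i])
                    matched = pi[matched - 1];
                if (pattern[matched] == text[i])
                    matched++;
                if (matched == pattern.Length)
                {
                    yield return i - pattern.Length + 1;
                    matched = pi[matched - 1];
                }
            }
        }

        static void Main()
        {
            string pattern = "ABCDABD";
            int[] pi = BuildPrefixTable(pattern);          // paid once
            string[] texts = { "ABC ABCDAB ABCDABCDABDE", "XXABCDABDXX" };
            foreach (string text in texts)                 // amortized over every search
                foreach (int shift in Search(text, pattern, pi))
                    Console.WriteLine(String.Format("{0} in \"{1}\"", shift, text));
        }
    }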
And also, the naive algorithm only has bad performance in bizarre and unlikely scenarios. Most people are searching for words like "London" in strings like "Buckingham Palace, London, England", and not searching for strings like "BANANANANANANA" in strings like "BANAN BANBAN BANBANANA BANAN BANAN BANANAN BANANANANANANANANAN...". The naive search algorithm is optimal for the first problem and highly sub-optimal for the latter problem; but it makes sense to optimize for the former, not the latter.
Another way to put it: if the searched-for string is of length w and the searched-in string is of length n, then KMP is O(n) + O(w). The Naive algorithm is worst case O(nw), best case O(n + w). But that says nothing about the "constant factor"! The constant factor of the KMP algorithm is much larger than the constant factor of the naive algorithm. The value of n has to be awfully big, and the number of sub-optimal partial matches has to be awfully large, for the KMP algorithm to win over the blazingly fast naive algorithm.
That deals with the algorithmic complexity issues. Your methodology is also not very good, and that might explain your results. Remember, the first time you run code, the jitter has to jit the IL into assembly code. That can take longer than running the method in some cases. You really should be running the code a few hundred thousand times in a loop, discarding the first result, and taking an average of the timings of the rest.
If you really want to know what is going on you should be using a profiler to determine what the hot spot is. Again, make sure you are measuring the post-jit run, not the run where the code is jitted, if you want to have results that are not skewed by the jit time.
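A minimal harness in that spirit might look like this (a sketch of the methodology only, with invented names; for real investigations use a profiler or a dedicated benchmarking library):

    using System;
    using System.Diagnostics;

    static class Bench
    {
        // Run once to pay the JIT cost and discard that timing, then time many
        // iterations and report the average.
        public static TimeSpan Measure(Action action, int iterations)
        {
            action(); // warm-up: this run includes jitting
            Stopwatch sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
                action();
            sw.Stop();
            return TimeSpan.FromTicks(sw.Elapsed.Ticks / iterations);
        }
    }

Used as, say, Bench.Measure(() => { shifts.Clear(); NaiveStringMatcher(shifts, str, pattern); }, 100000) for each matcher in turn.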

Your example is too small, and it does not have enough repetition within the pattern for KMP's avoidance of backtracking to pay off.
KMP can be slower than the normal search in some cases.
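For example, an adversarial input along these lines (a sketch; the sizes are arbitrary) should show KMP pulling ahead:

    // Worst case for the naive matcher: the long runs of 'A' force it to
    // re-compare almost the whole pattern at every shift, approaching O(n*m)
    // comparisons, while KMP stays O(n + m).
    string pattern = new string('A', 10000) + "B";
    string str = new string('A', 10000000); // never matches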

A Simple KMPSubstringSearch Implementation.
https://github.com/bharathkumarms/AlgorithmsMadeEasy/blob/master/AlgorithmsMadeEasy/KMPSubstringSearch.cs
using System;
using System.Collections.Generic;
using System.Linq;
namespace AlgorithmsMadeEasy
{
class KMPSubstringSearch
{
public void KMPSubstringSearchMethod()
{
string text = System.Console.ReadLine();
char[] sText = text.ToCharArray();
string pattern = System.Console.ReadLine();
char[] sPattern = pattern.ToCharArray();
int forwardPointer = 1;
int backwardPointer = 0;
int[] tempStorage = new int[sPattern.Length];
tempStorage[0] = 0;
while (forwardPointer < sPattern.Length)
{
if (sPattern[forwardPointer].Equals(sPattern[backwardPointer]))
{
tempStorage[forwardPointer] = backwardPointer + 1;
forwardPointer++;
backwardPointer++;
}
else
{
if (backwardPointer == 0)
{
tempStorage[forwardPointer] = 0;
forwardPointer++;
}
else
{
int temp = tempStorage[backwardPointer];
backwardPointer = temp;
}
}
}
int pointer = 0;
int successPoints = sPattern.Length;
bool success = false;
for (int i = 0; i < sText.Length; i++)
{
if (sText[i].Equals(sPattern[pointer]))
{
pointer++;
}
else
{
if (pointer != 0)
{
int tempPointer = pointer - 1;
pointer = tempStorage[tempPointer];
i--;
}
}
if (successPoints == pointer)
{
success = true;
pointer = tempStorage[pointer - 1]; // reset; otherwise the next iteration indexes past the end of sPattern
}
}
if (success)
{
System.Console.WriteLine("TRUE");
}
else
{
System.Console.WriteLine("FALSE");
}
System.Console.Read();
}
}
}
/*
* Sample Input
abxabcabcaby
abcaby
*/

Related

How to remove duplicate lines from a large text file efficiently?

I want to edit a text file so that each line occurs in it only once. Each line is always exactly 10 characters long, and I generally work with 5-6 million lines, so the code I am currently using consumes too much RAM.
My code:
File.WriteAllLines(targetpath, File.ReadAllLines(sourcepath).Distinct())
So how can I make it consume less RAM and less time at the same time?
Taking into account how much memory a string takes in C#, and assuming a length of 10 characters for each of the 6 million records, we get:
size in bytes ~= 20 + (length / 2) * 4
total size in bytes ~= (20 + (10 / 2) * 4) * 6000000 = 240 000 000
total size in MB ~= 230
Now, 230 MB of space is not really a problem, even on x86 (32 bit system), so you can load all that data in memory.
For this, I would use a HashSet<string>, which will let you easily eliminate the duplicates by doing a lookup before adding an element.
In terms of big-O notation for time complexity, the average performance of a lookup in a hash set is O(1), which is the best you can get. In total, you would use lookup N times, totalling to N * O(1) = O(N)
In terms of big-O notation for space complexity, you would have O(N) space used, meaning that you use up memory proportional to number of elements, which is also the best you can get.
I'm not sure it is even possible to use up less space if you implement the algorithm in C# and not rely on any external components (that would also use at least O(N))
That being said, you can optimize for some scenarios by reading your file sequentially, line by line; see here.
This would give a better result if you have lots of duplicates, but in the worst-case scenario, when all the lines are distinct, it would consume the same amount of memory.
On a final note, if you look at how the Distinct method is implemented, you will see it also uses an implementation of a hash table (although not the same class), and the performance is still roughly the same; check out this question for more details.
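A sketch of the sequential variant mentioned above (hypothetical helper name), which holds only the distinct lines seen so far in memory:

    using System.Collections.Generic;
    using System.IO;

    // Reads the source lazily and writes a line the first time its content is
    // seen. Memory grows with the number of distinct lines, not the total count.
    static void DeduplicateFile(string sourcePath, string targetPath)
    {
        HashSet<string> seen = new HashSet<string>();
        using (StreamWriter writer = new StreamWriter(targetPath))
        {
            foreach (string line in File.ReadLines(sourcePath))
            {
                if (seen.Add(line)) // Add returns false for a duplicate
                    writer.WriteLine(line);
            }
        }
    }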
As ironstone13 corrected me, HashSet is OK, but does store the data.
Then this works fine too:
string[] arr = File.ReadAllLines("file.txt");
HashSet<string> hashes = new HashSet<string>();
for (int i = 0; i < arr.Length; i++)
{
if (!hashes.Add(arr[i])) arr[i] = null;
}
File.WriteAllLines("file2.txt", arr.Where(x => x != null));
This implementation was motivated by memory footprint and by hash collisions.
The main idea was to keep just the hashes; of course, it would have to go back to the file to get the line it sees as a hash collision/duplicate, to detect which one it is (that part is not implemented).
class Program
{
static string[] arr;
static Dictionary<int, int>[] hashes = new Dictionary<int, int>[1]
{ new Dictionary<int, int>() }
;
static int[] file_indexes = {-1};
static void AddHash(int hash, int index)
{
for (int h = 0; h < hashes.Length; h++)
{
Dictionary<int, int> dict = hashes[h];
if (!dict.ContainsKey(hash))
{
dict[hash] = index;
return;
}
}
hashes = hashes.Union(new[] {new Dictionary<int, int>() {{hash, index}}}).ToArray();
file_indexes = Enumerable.Range(0, hashes.Length).Select(x => -1).ToArray();
}
static int UpdateFileIndexes(int hash)
{
int updates = 0;
for (int h = 0; h < hashes.Length; h++)
{
int index;
if (hashes[h].TryGetValue(hash, out index))
{
file_indexes[h] = index;
updates++;
}
else
{
file_indexes[h] = -1;
}
}
return updates;
}
static bool IsDuplicate(int index)
{
string str1 = arr[index];
for (int h = 0; h < hashes.Length; h++)
{
int i = file_indexes[h];
if (i == -1 || index == i) continue;
string str0 = arr[i];
if (str0 == null) continue;
if (string.CompareOrdinal(str0, str1) == 0) return true;
}
return false;
}
static void Main(string[] args)
{
arr = File.ReadAllLines("file.txt");
for (int i = 0; i < arr.Length; i++)
{
int hash = arr[i].GetHashCode();
if (UpdateFileIndexes(hash) == 0) AddHash(hash, i);
else if (IsDuplicate(i)) arr[i] = null;
else AddHash(hash, i);
}
File.WriteAllLines("file2.txt", arr.Where(x => x != null));
Console.WriteLine("DONE");
Console.ReadKey();
}
}
Before you write your data, if your data is in a list or dictionary, you could run a LINQ query and use group by to group all like keys, then write each group to the output file, as sketched below.
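For instance (a sketch reusing the question's sourcepath and targetpath; note that for plain deduplication this does the same hashing work as Distinct, so it mainly pays off when you also need the groups, e.g. for duplicate counts):

    using System.IO;
    using System.Linq;

    // Group identical lines and keep one representative per group.
    var distinct = File.ReadAllLines(sourcepath)
                       .GroupBy(line => line)
                       .Select(g => g.Key);
    File.WriteAllLines(targetpath, distinct);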
Your question is a little vague as well. Are you creating a new text file every time, and do you have to store the data as text? There are better formats to use, such as XML or JSON.

Performing actions on a string using a StringBuilder

I am coding a C# application that appends, inserts, replaces and finds string content in a string called content. Multiple objects called editObjects perform these actions on the same string.
I am currently passing a StringBuilder object to each editObject, and each editObject then performs its actions on the StringBuilder.
This question has two parts:
Am I correct in saying that a StringBuilder is the most efficient way to perform multiple actions on a string?
I have found this post from 2013: Fastest search method in StringBuilder, and would like to know if there is some known code that is a more efficient way to find the index of a string in a StringBuilder?
public static class StringBuilderSearching
{
public static bool Contains(this StringBuilder haystack, string needle)
{
return haystack.IndexOf(needle) != -1;
}
public static int IndexOf(this StringBuilder haystack, string needle)
{
if(haystack == null || needle == null)
throw new ArgumentNullException();
if(needle.Length == 0)
return 0;//empty strings are everywhere!
if(needle.Length == 1)//can't beat just spinning through for it
{
char c = needle[0];
for(int idx = 0; idx != haystack.Length; ++idx)
if(haystack[idx] == c)
return idx;
return -1;
}
int m = 0;
int i = 0;
int[] T = KMPTable(needle);
while(m + i < haystack.Length)
{
if(needle[i] == haystack[m + i])
{
if(i == needle.Length - 1)
return m == needle.Length ? -1 : m; // -1 = "not found", the .NET convention
++i;
}
else
{
m = m + i - T[i];
i = T[i] > -1 ? T[i] : 0;
}
}
return -1;
}
private static int[] KMPTable(string sought)
{
int[] table = new int[sought.Length];
int pos = 2;
int cnd = 0;
table[0] = -1;
table[1] = 0;
while(pos < table.Length)
if(sought[pos - 1] == sought[cnd])
table[pos++] = ++cnd;
else if(cnd > 0)
cnd = table[cnd];
else
table[pos++] = 0;
return table;
}
}
StringBuilder is faster for most string manipulations, but that doesn't mean it is the best choice for every combination of actions performed on a string.
In your case, you want to find a string inside the StringBuilder, which requires you to do one of two things:
Doing a standard search at O(n) iterating the whole StringBuilder, which can be done in a pretty optimized way like the code you have posted in the question.
Indexing chars or strings on every addition and removal of data to/from the StringBuilder. Note that indexing means you will need to analyse every string you add to or remove from the StringBuilder, which creates a little overhead for each manipulation action.
You need to judge what your application does more often and what will hurt it more: an O(n) search instead of an O(log(n)) one, or O(n + m) manipulations instead of O(n) ones (where m is the overhead for each insertion/removal divided by the average inserted/removed string length).
Basic indexing illustration:
Index granularities in between are fine too for balance (index at most 2 characters, at most 3 characters, etc.).
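Since the original illustration didn't survive, here is a hypothetical sketch of the single-character indexing idea (removal and insertion, which would invalidate the stored positions, are deliberately omitted):

    using System.Collections.Generic;
    using System.Text;

    // Wraps a StringBuilder and records the positions of every character on
    // append, so a search only verifies candidate positions instead of scanning.
    class IndexedStringBuilder
    {
        private readonly StringBuilder sb = new StringBuilder();
        private readonly Dictionary<char, List<int>> index = new Dictionary<char, List<int>>();

        public void Append(string s)
        {
            foreach (char c in s)
            {
                List<int> positions;
                if (!index.TryGetValue(c, out positions))
                    index[c] = positions = new List<int>();
                positions.Add(sb.Length); // position the char is about to occupy
                sb.Append(c);
            }
        }

        public int IndexOf(string needle)
        {
            if (needle.Length == 0) return 0;
            List<int> candidates;
            if (!index.TryGetValue(needle[0], out candidates)) return -1;
            foreach (int start in candidates) // only positions where needle[0] occurs
            {
                if (start + needle.Length > sb.Length) break; // candidates are ascending
                int i = 1;
                while (i < needle.Length && sb[start + i] == needle[i]) i++;
                if (i == needle.Length) return start;
            }
            return -1;
        }
    }

The search now touches only the positions where the first character matches, at the price of extra work and memory on every append, which is exactly the trade-off described above.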

Code sample that shows casting to uint is more efficient than range check

So I am looking at this question, and the general consensus is that the uint cast version is more efficient than the range check against 0. Since the code is also in MS's implementation of List<T>, I assume it is a real optimization. However, I have failed to produce a code sample that results in better performance for the uint version. I have tried different tests, and either something is missing or some other part of my code is dwarfing the time for the checks. My last attempt looks like this:
class TestType
{
public TestType(int size)
{
MaxSize = size;
Random rand = new Random(100);
for (int i = 0; i < MaxIterations; i++)
{
indexes[i] = rand.Next(0, MaxSize);
}
}
public const int MaxIterations = 10000000;
private int MaxSize;
private int[] indexes = new int[MaxIterations];
public void Test()
{
var timer = new Stopwatch();
int inRange = 0;
int outOfRange = 0;
timer.Start();
for (int i = 0; i < MaxIterations; i++)
{
int x = indexes[i];
if (x < 0 || x > MaxSize)
{
throw new Exception();
}
inRange += indexes[x];
}
timer.Stop();
Console.WriteLine("Comparision 1: " + inRange + "/" + outOfRange + ", elapsed: " + timer.ElapsedMilliseconds + "ms");
inRange = 0;
outOfRange = 0;
timer.Reset();
timer.Start();
for (int i = 0; i < MaxIterations; i++)
{
int x = indexes[i];
if ((uint)x > (uint)MaxSize)
{
throw new Exception();
}
inRange += indexes[x];
}
timer.Stop();
Console.WriteLine("Comparision 2: " + inRange + "/" + outOfRange + ", elapsed: " + timer.ElapsedMilliseconds + "ms");
}
}
class Program
{
static void Main()
{
TestType t = new TestType(TestType.MaxIterations);
t.Test();
TestType t2 = new TestType(TestType.MaxIterations);
t2.Test();
TestType t3 = new TestType(TestType.MaxIterations);
t3.Test();
}
}
The code is a bit of a mess because I tried many things to make the uint check perform faster, like moving the compared variable into a field of a class, generating random index accesses, and so on, but in every case the result seems to be the same for both versions. So is this change applicable on modern x86 processors, and can someone demonstrate it somehow?
Note that I am not asking for someone to fix my sample or explain what is wrong with it. I just want to see the case where the optimization does work.
if (x < 0 || x > MaxSize)
The comparison is performed by the CMP processor instruction (Compare). You'll want to take a look at Agner Fog's instruction tables document (PDF); it lists the cost of instructions. Find your processor in the list, then locate the CMP instruction.
For mine, Haswell, CMP takes 1 cycle of latency and 0.25 cycles of throughput.
A fractional cost like that could use an explanation: Haswell has 4 integer execution units that can execute instructions at the same time. When a program contains enough integer operations, like CMP, without an interdependency, they can all execute at the same time, in effect making the program 4 times faster. You don't always manage to keep all 4 of them busy with your code; it is actually pretty rare. But you do keep 2 of them busy in this case. Or in other words, two comparisons take just as long as a single one: 1 cycle.
There are other factors at play that make the execution times identical. One thing that helps is that the processor can predict the branch very well; it can speculatively execute x > MaxSize in spite of the short-circuit evaluation. And it will in fact end up using the result, since the branch is never taken.
And the true bottleneck in this code is the array indexing; accessing memory is one of the slowest things the processor can do. So the "fast" version of the code isn't faster, even though it provides more opportunity for the processor to execute instructions concurrently. It isn't much of an opportunity today anyway; a processor has too many execution units to keep busy. That is otherwise the feature that makes HyperThreading work. In both cases the processor bogs down at the same rate.
On my machine, I have to write code that occupies more than 4 engines to make it slower. Silly code like this:
if (x < 0 || x > MaxSize || x > 10000000 || x > 20000000 || x > 3000000) {
outOfRange++;
}
else {
inRange++;
}
Using 5 compares, now I can see a difference: 61 vs 47 msec. Or in other words, this is a way to count the number of integer engines in the processor. Hehe :)
So this is a micro-optimization that probably used to pay off a decade ago. It doesn't anymore. Scratch it off your list of things to worry about :)
I would suggest attempting code which does not throw an exception when the index is out of range. Exceptions are incredibly expensive and can completely throw off your benchmark results.
The code below runs a timed-average benchmark: 1,000 iterations of 1,000,000 operations each.
using System;
using System.Diagnostics;
namespace BenchTest
{
class Program
{
const int LoopCount = 1000000;
const int AverageCount = 1000;
static void Main(string[] args)
{
Console.WriteLine("Starting Benchmark");
RunTest();
Console.WriteLine("Finished Benchmark");
Console.Write("Press any key to exit...");
Console.ReadKey();
}
static void RunTest()
{
int cursorRow = Console.CursorTop; int cursorCol = Console.CursorLeft;
long totalTime1 = 0; long totalTime2 = 0;
long invalidOperationCount1 = 0; long invalidOperationCount2 = 0;
for (int i = 0; i < AverageCount; i++)
{
Console.SetCursorPosition(cursorCol, cursorRow);
Console.WriteLine("Running iteration: {0}/{1}", i + 1, AverageCount);
int[] indexArgs = RandomFill(LoopCount, int.MinValue, int.MaxValue);
int[] sizeArgs = RandomFill(LoopCount, 0, int.MaxValue);
totalTime1 += RunLoop(TestMethod1, indexArgs, sizeArgs, ref invalidOperationCount1);
totalTime2 += RunLoop(TestMethod2, indexArgs, sizeArgs, ref invalidOperationCount2);
}
PrintResult("Test 1", TimeSpan.FromTicks(totalTime1 / AverageCount), invalidOperationCount1);
PrintResult("Test 2", TimeSpan.FromTicks(totalTime2 / AverageCount), invalidOperationCount2);
}
static void PrintResult(string testName, TimeSpan averageTime, long invalidOperationCount)
{
Console.WriteLine(testName);
Console.WriteLine(" Average Time: {0}", averageTime);
Console.WriteLine(" Invalid Operations: {0} ({1})", invalidOperationCount, (invalidOperationCount / (double)(AverageCount * LoopCount)).ToString("P3"));
}
static long RunLoop(Func<int, int, int> testMethod, int[] indexArgs, int[] sizeArgs, ref long invalidOperationCount)
{
Stopwatch sw = new Stopwatch();
Console.Write("Running {0} sub-iterations", LoopCount);
sw.Start();
long startTickCount = sw.ElapsedTicks;
for (int i = 0; i < LoopCount; i++)
{
invalidOperationCount += testMethod(indexArgs[i], sizeArgs[i]);
}
sw.Stop();
long stopTickCount = sw.ElapsedTicks;
long elapsedTickCount = stopTickCount - startTickCount;
Console.WriteLine(" - Time Taken: {0}", new TimeSpan(elapsedTickCount));
return elapsedTickCount;
}
static int[] RandomFill(int size, int minValue, int maxValue)
{
int[] randomArray = new int[size];
Random rng = new Random();
for (int i = 0; i < size; i++)
{
randomArray[i] = rng.Next(minValue, maxValue);
}
return randomArray;
}
static int TestMethod1(int index, int size)
{
return (index < 0 || index >= size) ? 1 : 0;
}
static int TestMethod2(int index, int size)
{
return ((uint)(index) >= (uint)(size)) ? 1 : 0;
}
}
}
You aren't comparing like with like.
The code you were talking about not only saved one branch by using the optimisation, but also 4 bytes of CIL in a small method.
In a small method 4 bytes can be the difference in being inlined and not being inlined.
And if the method calling that method is also written to be small, then that can mean two (or more) method calls are jitted as one piece of inline code.
And maybe, because it is inline and available for analysis by the jitter, some of it is then optimised further again.
The real difference is not between index < 0 || index >= _size and (uint)index >= (uint)_size, but between code that has repeated efforts to minimise the method body size and code that does not. Look for example at how another method is used to throw the exception if necessary, further shaving off a couple of bytes of CIL.
(And no, that's not to say that I think all methods should be written like that, but there certainly can be performance differences when one does).
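As an illustration of that style (my own sketch in the spirit of the BCL code, not the actual List<T> source):

    using System;

    class TinyList<T>
    {
        private T[] _items = new T[16];
        private int _size;

        // The hot path stays tiny so the jitter is more likely to inline it;
        // the cold throw lives in its own method, shaving CIL bytes off the caller.
        public T GetItem(int index)
        {
            if ((uint)index >= (uint)_size)
                ThrowIndexOutOfRange(); // a call is fewer CIL bytes than an inline throw
            return _items[index];
        }

        private static void ThrowIndexOutOfRange()
        {
            throw new ArgumentOutOfRangeException("index");
        }
    }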

For vs. Linq - Performance vs. Future

Very brief question. I have a large, randomly sorted string array (100K+ entries) in which I want to find the first occurrence of a desired string. I have two solutions.
From what I have read, my guess is that the for loop will currently give slightly better performance (but this margin could always change), while I also find the LINQ version much more readable. On balance, which method is generally considered current best coding practice, and why?
string matchString = "dsf897sdf78";
int matchIndex = -1;
for(int i=0; i<array.Length; i++)
{
if(array[i]==matchString)
{
matchIndex = i;
break;
}
}
or
int matchIndex = array.Select((r, i) => new { value = r, index = i })
.Where(t => t.value == matchString)
.Select(s => s.index).First();
The best practice depends on what you need:
Development speed and maintainability: LINQ
Performance (according to profiling tools): manual code
LINQ really does slow things down with all the indirection. Don't worry about it as 99% of your code does not impact end user performance.
I started with C++ and really learnt how to optimize a piece of code. LINQ is not suited to get the most out of your CPU. So if you measure a LINQ query to be a problem just ditch it. But only then.
For your code sample I'd estimate a 3x slowdown. The allocations (and subsequent GC!) and indirections through the lambdas really hurt.
Slightly better performance? A loop will give SIGNIFICANTLY better performance!
Consider the code below. On my system for a RELEASE (not debug) build, it gives:
Found via loop at index 999999 in 00:00:00.2782047
Found via linq at index 999999 in 00:00:02.5864703
Loop was 9.29700432810805 times faster than linq.
The code is deliberately set up so that the item to be found is right at the end. If it was right at the start, things would be quite different.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
namespace Demo
{
public static class Program
{
private static void Main(string[] args)
{
string[] a = new string[1000000];
for (int i = 0; i < a.Length; ++i)
{
a[i] = "Won't be found";
}
string matchString = "Will be found";
a[a.Length - 1] = "Will be found";
const int COUNT = 100;
var sw = Stopwatch.StartNew();
int matchIndex = -1;
for (int outer = 0; outer < COUNT; ++outer)
{
for (int i = 0; i < a.Length; i++)
{
if (a[i] == matchString)
{
matchIndex = i;
break;
}
}
}
sw.Stop();
Console.WriteLine("Found via loop at index " + matchIndex + " in " + sw.Elapsed);
double loopTime = sw.Elapsed.TotalSeconds;
sw.Restart();
for (int outer = 0; outer < COUNT; ++outer)
{
matchIndex = a.Select((r, i) => new { value = r, index = i })
.Where(t => t.value == matchString)
.Select(s => s.index).First();
}
sw.Stop();
Console.WriteLine("Found via linq at index " + matchIndex + " in " + sw.Elapsed);
double linqTime = sw.Elapsed.TotalSeconds;
Console.WriteLine("Loop was {0} times faster than linq.", linqTime/loopTime);
}
}
}
LINQ, in keeping with the declarative paradigm, expresses the logic of a computation without describing its control flow. The query is goal-oriented, self-describing and thus easy to analyse and understand. It is also concise. Moreover, using LINQ, one depends highly upon the abstraction of the data structure, which brings a high rate of maintainability and reusability.
The iteration approach follows the imperative paradigm. It gives fine-grained control, making it easier to obtain higher performance. The code is also simpler to debug. Sometimes a well-constructed iteration is more readable than a query.
There is always a dilemma between performance and maintainability, and usually (if there are no specific performance requirements) maintainability should win. Only if you have performance problems should you profile the application, find the source of the problem, and improve its performance (reducing maintainability at the same time; yes, that's the world we live in).
About your sample: LINQ is not a very good solution here, because it does not add much maintainability to your code. Actually, to me, projecting, filtering, and then projecting again looks even worse than a simple loop. What you need here is a simple Array.IndexOf, which is more maintainable than the loop and has almost the same performance:
Array.IndexOf(array, matchString)
Well, you gave the answer to your question yourself.
Go with a for loop if you want the best performance, or go with LINQ if you want readability.
Also perhaps keep in mind the possibility of using Parallel.Foreach() which would benefit from in-line lambda expressions (so, more closer to Linq), and that is much more readable then doing paralelization "manually".
I don't think either is considered best practice; some people prefer looking at LINQ and some don't.
If performance is an issue, I would profile both bits of code for your scenario, and if the difference is negligible, go with the one you feel more comfortable with; after all, it will most likely be you who maintains the code.
Also, have you thought about using PLINQ or making the loop run in parallel?
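For example, a PLINQ version of the search might look like this (a sketch reusing the question's array and matchString; the parallelism only pays off for large arrays or expensive predicates):

    using System.Linq;

    // AsOrdered() preserves source order, so FirstOrDefault still returns the
    // earliest match, at the cost of some coordination overhead.
    var match = array.AsParallel()
                     .AsOrdered()
                     .Select((value, index) => new { value, index })
                     .FirstOrDefault(x => x.value == matchString);
    int matchIndex = match == null ? -1 : match.index;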
Just an interesting observation. LINQ lambda queries certainly add a penalty over LINQ where queries or a for loop. In the following code, a list is filled with 1,000,001 multi-parameter objects and then searched for a specific item, which in this test is always the last one, using a LINQ lambda query, a LINQ where query, and a for loop. Each test iterates 100 times and then averages the times to get the results.
LINQ Lambda Query Average Time: 0.3382 seconds
LINQ Where Query Average Time: 0.238 seconds
For Loop Average Time: 0.2266 seconds
I've run this test over and over, and even increased the iterations, and the spread is pretty much identical, statistically speaking. Sure, we are talking about 1/10 of a second for what is essentially a million-item search. So in the real world, unless something is that intensive, I'm not sure you would even notice. But if you do, the LINQ lambda query does show a performance difference versus the LINQ where query. The LINQ where is nearly the same as the for loop.
private void RunTest()
{
try
{
List<TestObject> mylist = new List<TestObject>();
for (int i = 0; i <= 1000000; i++)
{
TestObject testO = new TestObject(string.Format("Item{0}", i), "1", Guid.NewGuid().ToString());
mylist.Add(testO);
}
mylist.Add(new TestObject("test", "29863", Guid.NewGuid().ToString()));
string searchtext = "test";
int iterations = 100;
// Linq Lambda Test
List<int> list1 = new List<int>();
for (int i = 1; i <= iterations; i++)
{
DateTime starttime = DateTime.Now;
TestObject t = mylist.FirstOrDefault(q => q.Name == searchtext);
int diff = (DateTime.Now - starttime).Milliseconds;
list1.Add(diff);
}
// Linq Where Test
List<int> list2 = new List<int>();
for (int i = 1; i <= iterations; i++)
{
DateTime starttime = DateTime.Now;
TestObject t = (from testO in mylist
where testO.Name == searchtext
select testO).FirstOrDefault();
int diff = (DateTime.Now - starttime).Milliseconds;
list2.Add(diff);
}
// For Loop Test
List<int> list3 = new List<int>();
for (int i = 1; i <= iterations; i++)
{
DateTime starttime = DateTime.Now;
foreach (TestObject testO in mylist)
{
if (testO.Name == searchtext)
{
TestObject t = testO;
break;
}
}
int diff = (DateTime.Now - starttime).Milliseconds;
list3.Add(diff);
}
double diff1 = list1.Average();
Debug.WriteLine(string.Format("LINQ Lambda Query Average Time: {0} seconds", diff1 / (double)100));
double diff2 = list2.Average();
Debug.WriteLine(string.Format("LINQ Where Query Average Time: {0} seconds", diff2 / (double)100));
double diff3 = list3.Average();
Debug.WriteLine(string.Format("For Loop Average Time: {0} seconds", diff3 / (double)100));
}
catch (Exception ex)
{
Debug.WriteLine(ex.ToString());
}
}
private class TestObject
{
public TestObject(string _name, string _value, string _guid)
{
Name = _name;
Value = _value;
GUID = _guid;
}
public string Name;
public string Value;
public string GUID;
}
The best option is to use the IndexOf method of the Array class. Since it is specialized for arrays, it will be significantly faster than both LINQ and the for loop.
Improving on Matt Watson's answer:
using System;
using System.Diagnostics;
using System.Linq;
namespace PerformanceConsoleApp
{
public class LinqVsFor
{
private static void Main(string[] args)
{
string[] a = new string[1000000];
for (int i = 0; i < a.Length; ++i)
{
a[i] = "Won't be found";
}
string matchString = "Will be found";
a[a.Length - 1] = "Will be found";
const int COUNT = 100;
var sw = Stopwatch.StartNew();
Loop(a, matchString, COUNT, sw);
First(a, matchString, COUNT, sw);
Where(a, matchString, COUNT, sw);
IndexOf(a, sw, matchString, COUNT);
Console.ReadLine();
}
private static void Loop(string[] a, string matchString, int COUNT, Stopwatch sw)
{
int matchIndex = -1;
for (int outer = 0; outer < COUNT; ++outer)
{
for (int i = 0; i < a.Length; i++)
{
if (a[i] == matchString)
{
matchIndex = i;
break;
}
}
}
sw.Stop();
Console.WriteLine("Found via loop at index " + matchIndex + " in " + sw.Elapsed);
}
private static void IndexOf(string[] a, Stopwatch sw, string matchString, int COUNT)
{
int matchIndex = -1;
sw.Restart();
for (int outer = 0; outer < COUNT; ++outer)
{
matchIndex = Array.IndexOf(a, matchString);
}
sw.Stop();
Console.WriteLine("Found via IndexOf at index " + matchIndex + " in " + sw.Elapsed);
}
private static void First(string[] a, string matchString, int COUNT, Stopwatch sw)
{
sw.Restart();
string str = "";
for (int outer = 0; outer < COUNT; ++outer)
{
str = a.First(t => t == matchString);
}
sw.Stop();
Console.WriteLine("Found via linq First at index " + Array.IndexOf(a, str) + " in " + sw.Elapsed);
}
private static void Where(string[] a, string matchString, int COUNT, Stopwatch sw)
{
sw.Restart();
string str = "";
for (int outer = 0; outer < COUNT; ++outer)
{
str = a.Where(t => t == matchString).First();
}
sw.Stop();
Console.WriteLine("Found via linq Where at index " + Array.IndexOf(a, str) + " in " + sw.Elapsed);
}
}
}
Output:
Found via loop at index 999999 in 00:00:01.1528531
Found via linq First at index 999999 in 00:00:02.0876573
Found via linq Where at index 999999 in 00:00:01.3313111
Found via IndexOf at index 999999 in 00:00:00.7244812
A bit of a non-answer, and really just an extension to https://stackoverflow.com/a/14894589, but I have, on and off, been working on an API-compatible replacement for LINQ-to-Objects for a while now. It still doesn't provide the performance of a hand-coded loop, but it is faster for many (most?) LINQ scenarios. It does create more garbage and has some slightly heavier up-front costs.
The code is available https://github.com/manofstick/Cistern.Linq
A nuget package is available https://www.nuget.org/packages/Cistern.Linq/ (I can't claim this to be battle hardened, use at your own risk)
Taking the code from Matthew Watson's answer (https://stackoverflow.com/a/14894589) with two slight tweaks, we get the time down to "only" ~3.5 times worse than the hand-coded loop. On my machine it takes about 1/3 of the time of the original System.Linq version.
The two changes to replace:
using System.Linq;
...
matchIndex = a.Select((r, i) => new { value = r, index = i })
.Where(t => t.value == matchString)
.Select(s => s.index).First();
With the following:
// a complete replacement for System.Linq
using Cistern.Linq;
...
// use a value tuple rather than anonymous type
matchIndex = a.Select((r, i) => (value: r, index: i))
.Where(t => t.value == matchString)
.Select(s => s.index).First();
So the library itself is a work in progress. It fails a couple of edge cases from corefx's System.Linq test suite. It also still needs a few functions to be converted over (they currently have the corefx System.Linq implementation, which is compatible from an API perspective, if not a performance perspective). But anyone who wants to help, comment, etc. would be appreciated...

c# ref for speed

I understand the ref keyword in .NET well.
Since the method uses the same variable, would it increase speed to pass it by ref instead of making a copy?
I find the bottleneck to be in password generation.
Here is my code:
protected internal string GetSecurePasswordString(string legalChars, int length)
{
Random myRandom = new Random();
string myString = "";
for (int i = 0; i < length; i++)
{
int charPos = myRandom.Next(0, legalChars.Length - 1);
myString = myString + legalChars[charPos].ToString();
}
return myString;
}
Is it better to use ref before legalChars?
Passing a string by value does not copy the string. It only copies the reference to the string. There's no performance benefit to passing the string by reference instead of by value.
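A tiny sketch that demonstrates this:

    using System;

    class Program
    {
        static string original = "some string";

        static void TakesByValue(string s)
        {
            // Prints True: only the reference was copied, not the character data.
            Console.WriteLine(object.ReferenceEquals(s, original));
        }

        static void Main()
        {
            TakesByValue(original);
        }
    }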
No, you shouldn't pass the string reference by reference.
However, you are creating several strings pointlessly. If you're creating long passwords, that could be why it's a bottleneck. Here's a faster implementation:
protected internal string GetSecurePasswordString(string legalChars, int length)
{
Random myRandom = new Random();
char[] chars = new char[length];
for (int i = 0; i < length; i++)
{
int charPos = myRandom.Next(0, legalChars.Length - 1);
chars[i] = legalChars[charPos];
}
return new string(chars);
}
However, it still has three big flaws:
It creates a new instance of Random each time. If you call this method twice in quick succession, you'll get the same password twice. Bad idea.
The upper bound specified in a Random.Next() call is exclusive - so you'll never use the last character of legalChars.
It uses System.Random, which is not meant to be in any way cryptographically secure. Given that this is meant to be for a "secure password" you should consider using something like System.Security.Cryptography.RandomNumberGenerator. It's more work to do so because the API is harder, but you'll end up with a more secure system (if you do it properly).
You might also want to consider using SecureString, if you get really paranoid.
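Putting those fixes together, a sketch using the cryptographic RNG might look like this (my illustration, not reviewed crypto code; drop it into the same class as the original method, with the usings at file level):

    using System;
    using System.Security.Cryptography;

    protected internal string GetSecurePasswordString(string legalChars, int length)
    {
        char[] chars = new char[length];
        byte[] buffer = new byte[length * 4];
        using (RandomNumberGenerator rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(buffer); // cryptographically strong random bytes
        }
        for (int i = 0; i < length; i++)
        {
            uint value = BitConverter.ToUInt32(buffer, i * 4);
            // Every character of legalChars is now reachable. Note: the modulo
            // introduces a slight bias unless legalChars.Length divides 2^32;
            // use rejection sampling if that matters for you.
            chars[i] = legalChars[(int)(value % (uint)legalChars.Length)];
        }
        return new string(chars);
    }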
Strings in .NET are immutable, so all modify operations on strings result in the creation (and later garbage collection) of new strings. No performance gain would be achieved by using ref in this case. Instead, use StringBuilder.
A word about the general performance gain of passing a string ByReference ("ref") instead of ByValue:
There is a performance gain, but it is very small!
Consider the program below, where a function is called 10,000,000 times with a string argument, by value and by reference. The average times measured were:
ByValue: 249 milliseconds
ByReference: 226 milliseconds
In general "ref" is a little faster, but often it's not worth worrying about it.
Here is my code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;
namespace StringPerformanceTest
{
class Program
{
static void Main(string[] args)
{
const int n = 10000000;
int k;
string time, s1;
Stopwatch sw;
// List for testing ("1", "2", "3" ...)
List<string> list = new List<string>(n);
for (int i = 0; i < n; i++)
list.Add(i.ToString());
// Test ByVal
k = 0;
sw = Stopwatch.StartNew();
foreach (string s in list)
{
s1 = s;
if (StringTestSubVal(s1)) k++;
}
time = GetElapsedString(sw);
Console.WriteLine("ByVal: " + time);
Console.WriteLine("123 found " + k + " times.");
// Test ByRef
k = 0;
sw = Stopwatch.StartNew();
foreach (string s in list)
{
s1 = s;
if (StringTestSubRef(ref s1)) k++;
}
time = GetElapsedString(sw);
Console.WriteLine("Time ByRef: " + time);
Console.WriteLine("123 found " + k + " times.");
}
static bool StringTestSubVal(string s)
{
if (s == "123")
return true;
else
return false;
}
static bool StringTestSubRef(ref string s)
{
if (s == "123")
return true;
else
return false;
}
static string GetElapsedString(Stopwatch sw)
{
if (sw.IsRunning) sw.Stop();
TimeSpan ts = sw.Elapsed;
return String.Format("{0:00}:{1:00}:{2:00}.{3:000}", ts.Hours, ts.Minutes, ts.Seconds, ts.Milliseconds);
}
}
}
