This question is more about an algorithm than actual code, but example code would be appreciated.
Let's say I have a two-dimensional array such as this:
    A B C D E
  -----------
1 | 0 2 3 4 5
2 | 1 2 4 5 6
3 | 1 3 4 5 6
4 | 2 3 4 5 6
5 | 1 2 3 4 5
I am trying to find the shortest list that would include a value from each row. Currently, I am going row by row and column by column, adding each value to a SortedSet and then checking the length of the set against the shortest set found so far. For example:
Adding cells {1A, 2A, 3A, 4A, 5A} would add the values {0, 1, 1, 2, 1} which would result in a sorted set {0, 1, 2}. {1B, 2A, 3A, 4A, 5A} would add the values {2, 1, 1, 2, 1} which would result in a sorted set {1, 2}, which is shorter than the previous set.
Obviously, adding {1D, 2C, 3C, 4C, 5D} or {1E, 2D, 3D, 4D, 5E} would be the shortest sets, having only one item each, and I could use either one.
I don't have to include every number in the array. I just need to find the shortest set while including at least one number from every row.
Keep in mind that this is just an example array, and the arrays that I'm using are much, much larger. The smallest is 495x28. Brute force will take a VERY long time (28^495 passes). Is there a shortcut that someone knows, to find this in the least number of passes? I have C# code, but it's kind of long.
Edit:
Posting current code, as per request:
// Set up an array of counters; seed CurrentSet with enough values to act as the initial worst case
int ListsCount = MatrixResults.Count();
int[] Counters = new int[ListsCount];
SortedSet<long> CurrentSet = new SortedSet<long>();
for (long X = 0; X < ListsCount; X++)
{
    Counters[X] = 0;
    CurrentSet.Add(X);
}
while (true)
{
    // Compile the set of values selected by the current counters from MatrixResults[]
    SortedSet<long> ThisSet = new SortedSet<long>();
    for (int X = 0; X < ListsCount; X++)
    {
        ThisSet.Add(MatrixResults[X][Counters[X]]);
    }
    // If this set is shorter than the current best, make it the new best
    if (ThisSet.Count() < CurrentSet.Count())
    {
        CurrentSet.Clear();
        foreach (long Value in ThisSet)
        {
            CurrentSet.Add(Value);
        }
    }
    // Increment the counters, odometer style
    int Index = 0;
    bool EndReached = false;
    while (true)
    {
        Counters[Index]++;
        if (Counters[Index] < MatrixResults[Index].Count()) break;
        Counters[Index] = 0;
        Index++;
        if (Index >= ListsCount)
        {
            EndReached = true;
            break;
        }
    }
    // If all counters have rolled over, every combination has been tried
    if (EndReached) break;
}
With all computations there is a tradeoff, and several factors are in play, such as whether you get paid for getting it perfect (in my case here, no). This is a case of the best being the enemy of the good: how long can we spend solving the problem, and will the result get close enough to satisfy the use case? When we can get the idea across without hand-painting pixels in UHD resolution, let's do that!
So my choice is an approach which produces a covering set that is small and, ahem... sometimes will even be the smallest :) To be spot on you would have to iterate over different strategies and compare the lengths of the sets they produce; for this evening of fun I chose to give one strategy, which I find defensible as coming close to, or equal to, the minimal set.
The strategy is to treat the multidimensional array as a sequence of lists, each with its own distinct value set. If we iteratively reduce the total collection of lists using the smallest list in the remainder, weeding out any unused values from that smallest list once the total set has been reduced in each iteration, we get a result that is close enough to the ideal to be effective, and it completes in milliseconds with this approach.
One critique of this approach up front: the direction in which you walk the minimal list really ought to be varied to pick the best option (left to right, right to left, in position sequences X, Y, Z, ...) because the amount of potential reduction is not equal. So to get close to the ideal, different orderings would have to be tried on every iteration until all combinations were covered, keeping the most reducing sequence. Right - but I chose left to right, only!
Now, I chose not to benchmark against your code, because your MatrixResults is an array of int arrays rather than a multidimensional array (which is what your drawing shows), so I went by your drawing and couldn't share a data source with your code. No matter, you can make that conversion if you wish. Onwards to generating sample data:
private int[,] CreateSampleArray(int xDimension, int yDimensions, Random rnd)
{
Debug.WriteLine($"Created sample array of dimensions ({xDimension}, {yDimensions})");
var array = new int[xDimension, yDimensions];
for (int x = 0; x < array.GetLength(0); x++)
{
for(int y = 0; y < array.GetLength(1); y++)
{
array[x, y] = rnd.Next(0, 4000);
}
}
return array;
}
Here is the overall structure, with some logging; I'm using xUnit to run the code:
[Fact]
public void SetCoverExperimentTest()
{
var rnd = new Random((int)DateTime.Now.Ticks);
var sw = Stopwatch.StartNew();
int[,] matrixResults = CreateSampleArray(rnd.Next(100, 500), rnd.Next(100, 500), rnd);
//So the first requirement is that you must have one element per row, so let's get our unique rows
var listOfAll = new List<List<int>>();
List<int> listOfRow;
for (int y = 0; y < matrixResults.GetLength(1); y++)
{
listOfRow = new List<int>();
for (int x = 0; x < matrixResults.GetLength(0); x++)
{
listOfRow.Add(matrixResults[x, y]);
}
listOfAll.Add(listOfRow.Distinct().ToList());
}
var setFound = new HashSet<int>();
List<List<int>> allUniquelyRequired = GetDistinctSmallestList(listOfAll, setFound);
// This set now has all rows that are either distinctly different
// or are a reordering of the distinct values of lists of that length.
// Our HashSet has the unique value range, meaning any combination of sets
// with those values, grabbing one per set and preferring already chosen values,
// should give a covering total set.
var leastSet = new LeastSetData
{
LeastSet = setFound,
MatrixResults = matrixResults,
};
List<Coordinate>? minSet = leastSet.GenerateResultsSet();
sw.Stop();
Debug.WriteLine($"Completed in {sw.Elapsed.TotalMilliseconds:0.00} ms");
Assert.NotNull(minSet);
//There is one for each row
Assert.False(minSet.Select(s => s.y).Distinct().Count() < minSet.Count());
//We took less than 25 milliseconds
var timespan = new TimeSpan(0, 0, 0, 0, 25);
Assert.True(sw.Elapsed < timespan);
//Outputting to debugger for the fun of it
var sb = new StringBuilder();
foreach (var coordinate in minSet)
{
sb.Append($"({coordinate.x}, {coordinate.y}) {matrixResults[coordinate.x, coordinate.y]},");
}
var debugLine = sb.ToString();
debugLine = debugLine.Substring(0, debugLine.Length - 1);
Debug.WriteLine("Resulting set: " + debugLine);
}
Now the more meaty iterative bits
private List<List<int>> GetDistinctSmallestList(List<List<int>> listOfAll, HashSet<int> setFound)
{
// Our smallest set must be a subset of the distinct union of all our smallest lists' values,
// plus unknowns
var listOfShortest = new List<List<int>>();
int shortest = int.MaxValue;
foreach (var list in listOfAll)
{
if (list.Count < shortest)
{
listOfShortest.Clear();
shortest = list.Count;
listOfShortest.Add(list);
}
else if (list.Count == shortest)
{
if (listOfShortest.Contains(list))
continue;
listOfShortest.Add(list);
}
}
var setFoundAddition = new HashSet<int>(setFound);
foreach (var list in listOfShortest)
{
foreach (var item in list)
{
if (setFound.Contains(item))
continue;
if (setFoundAddition.Contains(item))
continue;
setFoundAddition.Add(item);
}
}
//Now we can remove all rows with those found, we'll add the smallest later
var listOfAllRemainder = new List<List<int>>();
bool foundInList;
List<int> consumedWhenReducing = new List<int>();
foreach (var list in listOfAll)
{
foundInList = false;
foreach (int item in list)
{
if (setFound.Contains(item))
{
//Covered by data from last iteration(s)
foundInList = true;
break;
}
else if (setFoundAddition.Contains(item))
{
consumedWhenReducing.Add(item);
foundInList = true;
break;
}
}
if (!foundInList)
{
listOfAllRemainder.Add(list); //adding what lists did not have elements found
}
}
//Remove any values from these smallest lists that did not get consumed in the reduction pass above
if (consumedWhenReducing.Count == 0)
{
throw new Exception($"Shouldn't be possible to remove the row itself without using one of its values, please investigate");
}
var removeArray = setFoundAddition.Where(a => !consumedWhenReducing.Contains(a)).ToArray();
setFoundAddition.RemoveWhere(x => removeArray.Contains(x));
foreach (var value in setFoundAddition)
{
setFound.Add(value);
}
if (listOfAllRemainder.Count != 0)
{
//Do the whole thing again until there is no list left
listOfShortest.AddRange(GetDistinctSmallestList(listOfAllRemainder, setFound));
}
return listOfShortest; //Here we will ultimately have the sum of shortest lists per iteration
}
To conclude: I hope to have inspired you; at least I had fun coming up with a decent approximation, and should you feel like completing the code, you're very welcome to grab what you like.
Obviously we should really track the sequence in which we go through the shortest lists; after all, it matters whether we start reducing the total set of distinct lists by the element at position 0 or at position 0+N, and which one we reduce with afterwards. We must take one of those values either way, but since consuming each value removes most of the total list, what this really produces is a value range, and the order in which that range is consumed matters to the later iterations: a position we never reached until nothing else was left could potentially have removed more than some of the ones that were covered first. You get the picture, I'm sure.
And this is just one strategy; one may just as well have chosen the largest distinct list within the same framework, and if you do not iteratively cover enough strategies, there is only brute force left.
Anyway, you'd want an AI to act, just like a human, not to contemplate the existence of the universe first; after all, with silicon brains we can reconsider pretty often, as long as we can do so fast.
With any moving object, at least, I'd much rather be 90% on target and correct every second while taking 14 ms to get there than spend 2 seconds reaching 99% or the elusive 100%. That means stopping the vehicle before the concrete pillar or the pram, or conversely buying the equity when it is a good time to do so, not figuring out that we should have stopped once we are already on the other side of the obstacle, or that we should have bought 5 seconds ago, when by then the spot price has already jumped again...
Thus the defence rests on the notion that whether this solution is good enough, or simply incomplete at best, is a matter of opinion :D
I realize this is pretty rambling, but the point is: although this sketch is not indisputably correct, it is easy to read and maintain, and anyway the question is the wrong one B-] We will very rarely need the absolute minimal set, and when we do, the answer will be much longer :D
... woopsie, forgot the support classes
public struct Coordinate
{
public int x;
public int y;
public override string ToString()
{
return $"({x},{y})";
}
}
public struct CoordinateValue
{
public int Value { get; set; }
public Coordinate Coordinate { get; set; }
public override string ToString()
{
return string.Concat(Coordinate.ToString(), " ", Value.ToString());
}
}
public class LeastSetData
{
public HashSet<int> LeastSet { get; set; }
public int[,] MatrixResults { get; set; }
public List<Coordinate> GenerateResultsSet()
{
HashSet<int> chosenValueRange = new HashSet<int>();
var chosenSet = new List<Coordinate>();
for (int y = 0; y < MatrixResults.GetLength(1); y++)
{
var candidates = new List<CoordinateValue>();
for (int x = 0; x < MatrixResults.GetLength(0); x++)
{
if (LeastSet.Contains(MatrixResults[x, y]))
{
candidates.Add(new CoordinateValue
{
Value = MatrixResults[x, y],
Coordinate = new Coordinate { x = x, y = y }
}
);
continue;
}
}
if (candidates.Count == 0)
throw new Exception($"OMG Something's wrong! (this row did not have any of derived range [y: {y}])");
var done = false;
foreach (var c in candidates)
{
if (chosenValueRange.Contains(c.Value))
{
chosenSet.Add(c.Coordinate);
done = true;
break;
}
}
if (!done)
{
var firstCandidate = candidates.First();
chosenSet.Add(firstCandidate.Coordinate);
chosenValueRange.Add(firstCandidate.Value);
}
}
return chosenSet;
}
}
This problem is NP hard.
To show that, we have to take a known NP hard problem, and reduce it to this one. Let's do that with the Set Cover Problem.
We start with a universe U of things, and a collection S of sets that covers the universe. Assign each thing a row and each set a number, and put a set's number in a thing's row whenever that set contains the thing. This fills a different number of columns for each row, so pad the rows out to a rectangle with fresh numbers that appear nowhere else.
Now solve your problem.
For each padding number in your solution (one that didn't come from a set in the original problem), we can replace it with another number from the same row that did come from a set; every row has at least one such number because S covers the universe, and the swap never makes the solution larger.
And now we turn numbers back into sets and we have a solution to the Set Cover Problem.
The transformations from set cover to your problem and back again are both O(number_of_elements * number_of_sets) which is polynomial in the input. And therefore your problem is NP hard.
Conversely if you replace each number in the matrix with the set of rows covered, your problem turns into the Set Cover Problem. Using any existing solver for set cover then gives a reasonable approach for your problem as well.
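To make that last point concrete, here is a rough sketch of going in that direction (my illustration only, assuming the matrix is an int[][] with one array per row, and using System.Collections.Generic and System.Linq): convert each value into the set of rows it appears in, then run the standard greedy heuristic, which repeatedly picks the value covering the most still-uncovered rows.
static List<int> GreedyCover(int[][] rows)
{
    // For each value, collect the set of row indices it appears in.
    var valueToRows = new Dictionary<int, HashSet<int>>();
    for (int r = 0; r < rows.Length; r++)
    {
        foreach (int v in rows[r])
        {
            if (!valueToRows.TryGetValue(v, out var coveredRows))
                valueToRows[v] = coveredRows = new HashSet<int>();
            coveredRows.Add(r);
        }
    }

    var uncovered = new HashSet<int>(Enumerable.Range(0, rows.Length));
    var chosen = new List<int>();
    while (uncovered.Count > 0)
    {
        // Greedy step: pick the value that covers the most still-uncovered rows.
        int best = valueToRows.Keys
            .OrderByDescending(v => valueToRows[v].Count(uncovered.Contains))
            .First();
        chosen.Add(best);
        uncovered.ExceptWith(valueToRows[best]);
    }
    return chosen; // not guaranteed minimal, but within a ln(n) factor of optimal
}
Greedy is the textbook approximation for set cover; if the exact minimum is required, an ILP or branch-and-bound set-cover solver can be run over the same valueToRows structure instead.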
The code is not particularly tidy or optimised, but it illustrates the approach I think @btilly is suggesting in his answer (E&OE), using a bit of recursion (I was going for intuitive rather than ideal for scaling, so you may have to work out an iterative equivalent).
From the rows with their values make a "values with the rows that they appear in" counterpart. Now pick a value, eliminate all rows in which it appears and solve again for the reduced set of rows. Repeat recursively, keeping only the shortest solutions.
I know this is not terribly readable (or well explained), and I may come back to tidy it up in the morning, so let me know if it does what you want (i.e. is worth a bit more of my time ;-).
// Setup
var rowValues = new Dictionary<int, HashSet<int>>
{
[0] = new() { 0, 2, 3, 4, 5 },
[1] = new() { 1, 2, 4, 5, 6 },
[2] = new() { 1, 3, 4, 5, 6 },
[3] = new() { 2, 3, 4, 5, 6 },
[4] = new() { 1, 2, 3, 4, 5 }
};
Dictionary<int, HashSet<int>> ValueRows(Dictionary<int, HashSet<int>> rv)
{
var vr = new Dictionary<int, HashSet<int>>();
foreach (var row in rv.Keys)
{
foreach (var value in rv[row])
{
if (vr.ContainsKey(value))
{
if (!vr[value].Contains(row))
vr[value].Add(row);
}
else
{
vr.Add(value, new HashSet<int> { row });
}
}
}
return vr;
}
List<int> FindSolution(Dictionary<int, HashSet<int>> rAndV)
{
if (rAndV.Count == 0) return new List<int>();
var bestSolutionSoFar = new List<int>();
var vAndR = ValueRows(rAndV);
foreach (var v in vAndR.Keys)
{
var copyRemove = new Dictionary<int, HashSet<int>>(rAndV);
foreach (var r in vAndR[v])
copyRemove.Remove(r);
var solution = new List<int>{ v };
solution.AddRange(FindSolution(copyRemove));
if (bestSolutionSoFar.Count == 0 || solution.Count > 0 && solution.Count < bestSolutionSoFar.Count)
bestSolutionSoFar = solution;
}
return bestSolutionSoFar;
}
var solution = FindSolution(rowValues);
Console.WriteLine($"Optimal solution has values {{ {string.Join(',', solution)} }}");
Output:
Optimal solution has values { 4 }
I'm writing a program which uses the OpenCV neural networks module with C# and the OpenCvSharp library. It must recognise the face of a user, so in order to train the network, I need a set of samples. The problem is how to convert a sample image into an array suitable for training. What I've got is a 200x200 Bitmap image, and a network with 40000 input neurons, 200 hidden neurons and one output:
CvMat layerSizes = Cv.CreateMat(3, 1, MatrixType.S32C1);
layerSizes[0, 0] = 40000;
layerSizes[1, 0] = 200;
layerSizes[2, 0] = 1;
Network = new CvANN_MLP(layerSizes,MLPActivationFunc.SigmoidSym,0.6,1);
So then I'm trying to convert BitMap image into CvMat array:
private void getTrainingMat(int cell_count, CvMat trainMAt, CvMat responses)
{
CvMat res = Cv.CreateMat(cell_count, 10, MatrixType.F32C1);//10 is a number of samples
responses = Cv.CreateMat(10, 1, MatrixType.F32C1);//array of supposed outputs
int counter = 0;
foreach (Bitmap b in trainSet)
{
IplImage img = BitmapConverter.ToIplImage(b);
Mat imgMat = new Mat(img);
for (int i=0;i<imgMat.Height;i++)
{
for (int j = 0; j < imgMat.Width; j++)
{
int val =imgMat.Get<int>(i, j);
res[counter, 0] = imgMat.Get<int>(i, j);
}
responses[i, 0] = 1;
}
trainMAt = res;
}
}
And then, when trying to train it, I've got this exception:
input training data should be a floating-point matrix with the number of rows equal to the number of training samples and the number of columns equal to the size of 0-th (input) layer
Code for training:
trainMAt = Cv.CreateMat(inp_layer_size, 10, MatrixType.F32C1);
responses = Cv.CreateMat(inp_layer_size, 1, MatrixType.F32C1);
getTrainingMat(inp_layer_size, trainMAt, responses);
Network.Train(trainMAt, responses, new CvMat(),null, Parameters);
I'm new to OpenCV and I think I did something wrong in the conversion because of my lack of understanding of the CvMat structure. Where is my error, and is there another way of transforming the bitmap?
With the number of rows equal to the number of training samples
That's 10 samples.
and the number of columns equal to the size of 0-th (input) layer
That's inp_layer_size.
trainMAt = Cv.CreateMat(10, inp_layer_size, MatrixType.F32C1);
responses = Cv.CreateMat(10, 1, MatrixType.F32C1); // 10 labels for 10 samples
I primarily do C++, so forgive me if I'm misunderstanding, but your pixel loop will need adapting in addition.
Your inner loop looks broken, as you assign to val, but never use it, and also never increment your counter.
In addition, in your outer loop assigning trainMAt = res; for every image doesn't seem like a very good idea.
I am certain you will get it to operate correctly, just keep in mind the fact that the goal is to flatten each image into a single row, so you end up with 10 rows and inp_layer_size columns.
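For example, keeping to the same OpenCvSharp calls the question already uses, the loop could be restructured roughly like this (an untested sketch; the out parameters and the Get<byte> element type are my assumptions and may need adjusting to your actual image depth):
private void getTrainingMat(int cell_count, out CvMat trainMat, out CvMat responses)
{
    trainMat = Cv.CreateMat(10, cell_count, MatrixType.F32C1); // 10 samples, one row per sample
    responses = Cv.CreateMat(10, 1, MatrixType.F32C1);         // one label per sample

    int sample = 0;
    foreach (Bitmap b in trainSet)
    {
        IplImage img = BitmapConverter.ToIplImage(b);
        Mat imgMat = new Mat(img);
        int col = 0;
        for (int i = 0; i < imgMat.Height; i++)
        {
            for (int j = 0; j < imgMat.Width; j++)
            {
                // flatten the image: pixel (i, j) becomes column `col` of row `sample`
                trainMat[sample, col++] = imgMat.Get<byte>(i, j); // assumes an 8-bit single-channel image
            }
        }
        responses[sample, 0] = 1; // supposed output for this sample
        sample++;
    }
}
The essential part is only the shape: one row per sample and one column per pixel, which is exactly what the training call expects.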
I am new to advanced algorithms, so please bear with me. I am currently trying to get Dijkstra's algorithm to work, and have spent 2 days trying to figure this out. I also read the pseudo-code on Wikipedia and got this far. I want to get the shortest distance between two vertices. In the below sample, I keep getting the wrong distance. Please help?
Sample graph setup is as follow:
Graph graph = new Graph();
graph.Vertices.Add(new Vertex("A"));
graph.Vertices.Add(new Vertex("B"));
graph.Vertices.Add(new Vertex("C"));
graph.Vertices.Add(new Vertex("D"));
graph.Vertices.Add(new Vertex("E"));
graph.Edges.Add(new Edge
{
From = graph.Vertices.FindVertexByName("A"),
To = graph.Vertices.FindVertexByName("B"),
Weight = 5
});
graph.Edges.Add(new Edge
{
From = graph.Vertices.FindVertexByName("B"),
To = graph.Vertices.FindVertexByName("C"),
Weight = 4
});
graph.Edges.Add(new Edge
{
From = graph.Vertices.FindVertexByName("C"),
To = graph.Vertices.FindVertexByName("D"),
Weight = 8
});
graph.Edges.Add(new Edge
{
From = graph.Vertices.FindVertexByName("D"),
To = graph.Vertices.FindVertexByName("C"),
Weight = 8
});
//graph is passed as param with source and dest vertices
public int Run(Graph graph, Vertex source, Vertex destvertex)
{
Vertex current = source;
List<Vertex> queue = new List<Vertex>();
foreach (var vertex in graph.Vertices)
{
vertex.Weight = int.MaxValue;
vertex.PreviousVertex = null;
vertex.State = VertexStates.UnVisited;
queue.Add(vertex);
}
current = graph.Vertices.FindVertexByName(current.Name);
current.Weight = 0;
queue.Add(current);
while (queue.Count > 0)
{
Vertex minDistance = queue.OrderBy(o => o.Weight).FirstOrDefault();
queue.Remove(minDistance);
if (current.Weight == int.MaxValue)
{
break;
}
minDistance.Neighbors = graph.GetVerTextNeigbours(current);
foreach (Vertex neighbour in minDistance.Neighbors)
{
Edge edge = graph.Edges.FindEdgeByStartingAndEndVertex(minDistance, neighbour);
int dist = minDistance.Weight + (edge.Weight);
if (dist < neighbour.Weight)
{
//from this point onwards i get stuck
neighbour.Weight = dist;
neighbour.PreviousVertex = minDistance;
queue.Remove(neighbour);
queueVisited.Enqueue(neighbor);
}
}
minDistance.State = VertexStates.Visited;
}
//here i want to record all node that was visited
while (queueVisited.Count > 0)
{
Vertex temp = queueVisited.Dequeue();
count += temp.Neighbors.Sum(x => x.Weight);
}
return count;
}
There are several immediate issues with the above code.
The current variable is never re-assigned, nor is the object mutated, so graph.GetVerTextNeigbours(current) always returns the same set.
As a result, this will never even find the target vertex if more than one edge must be traversed.
The neighbor node is removed from the queue via queue.Remove(neighbour). This is incorrect and can prevent a node/vertex from being [re]explored. Instead, it should just have the weight updated in the queue.
In this case the "decrease-key v in Q" step from the pseudo-code amounts to a no-op in the implementation, due to object mutation and not using a priority queue (i.e. sorted) structure.
The visited queue, as per queueVisited.Enqueue(neighbor) and the later "sum of all neighbor weights of all visited nodes", is incorrect.
After the algorithm runs, the cost (Weight) of any node/vertex that was explored is the cost of the shortest path to that vertex. Thus, you can trivially ask the goal vertex what the cost is.
In addition, for the sake of algorithm simplicity, I recommend giving up the "Graph/Vertex/Edge" layout initially created and, instead, passing in a simple Node. This Node can enumerate its neighbor Nodes, and knows the weights to each neighbor, its own current weight, and the best-cost node leading to it.
Simply build this Node graph once, at the start, then pass the starting Node. This eliminates (or greatly simplifies) calls like GetVerTextNeighbors and FindEdgeByStartingAndEndVertex, and strange/unnecessary side-effects like assigning Neighbors each time the algorithm visits the vertex.
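A rough sketch of what that could look like (illustrative names only, not a drop-in replacement for your existing classes; the list-based queue keeps it simple rather than fast, and it needs System.Linq):
public class Node
{
    public string Name;
    public double Cost = double.PositiveInfinity; // best known cost from the source
    public Node Previous;                         // predecessor on the best path
    public Dictionary<Node, double> Neighbours = new Dictionary<Node, double>(); // neighbour -> edge weight
}

public static double ShortestPathCost(List<Node> allNodes, Node source, Node dest)
{
    foreach (var n in allNodes)
    {
        n.Cost = double.PositiveInfinity;
        n.Previous = null;
    }
    source.Cost = 0;

    var unvisited = new List<Node>(allNodes);
    while (unvisited.Count > 0)
    {
        // Take the cheapest unvisited node (a real priority queue would be faster).
        Node current = unvisited.OrderBy(n => n.Cost).First();
        unvisited.Remove(current);

        if (current == dest || double.IsPositiveInfinity(current.Cost))
            break; // reached the goal, or the remaining nodes are unreachable

        foreach (var edge in current.Neighbours)
        {
            double candidate = current.Cost + edge.Value;
            if (candidate < edge.Key.Cost)
            {
                edge.Key.Cost = candidate;   // the "decrease-key": just update the node's cost
                edge.Key.Previous = current; // remember how we got here
            }
        }
    }
    return dest.Cost; // infinity if dest was unreachable
}
Build the Node graph once from your edges (an edge A -> B with weight 5 becomes a.Neighbours[b] = 5), then call ShortestPathCost(nodes, a, d); the return value is the cost of the shortest path, and following Previous back from the destination recovers the path itself.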
I know that in order to clean up your code, you use loops. For example, instead of writing
myItem = myArray[0]
my2ndItem = myArray[1]
and so on up to element 100 or so, which would take 100 lines of code and would be ugly.
It can be slimmed down and look nice / be more efficient using a loop.
Let's say you're creating a game and you have 100 non player controlled enemies that spawn in an EXACT position, each position differing.
How on earth would you spawn every single one without using 100 lines of code?
I can understand if, say for example, you wanted them all 25 yards apart, you could just use a loop and increment by 25.
That said, hard-coding each position statically seems like the only approach I can see here, yet I know doing it dynamically is the way to go.
How do people do this? And if you could provide other similar examples that would be great.
var spawnLocs = new SpawnLocation[10];
if (RandomPosition)
{
for (int i = 0; i < spawnLocs.Length; i++)
{
spawnLocs[i] = // auto define using algorithm / logic
}
}
else
{
spawnLocs[0] = // manually define spawn location
spawnLocs[1] =
...
...
spawnLocs[9] =
}
Spawn new enemy:
void SpawnEnemy(int spawnedCount)
{
if (EnemyOnMap < 10 && spawnedCount < 100)
{
var rd = new Random();
Enemy e = new Enemy();
SpawnLocation spawnLoc = new SpawnLocation();
bool locationOk = false;
while(!locationOk)
{
spawnLoc = spawnLocs[rd.Next(0, spawnLocs.Length)];
if (!spawnLoc.IsOccupied)
{
locationOk = true;
}
}
e.SpawnLocation = spawnLoc;
this.Map.AddNewEnemy(e);
}
}
Just specify the positions as an array or list or whatever data-format suits the purpose, then implement a 3 line loop that reads from that input for each enemy, and finds and sets the corresponding position.
Most languages will have some kind of "shorthand" format for providing the data for an array or list directly, as in C# for instance:
var myList = new List<EnemyPosition>{
new EnemyPosition{ x = 0, y = 1 },
new EnemyPosition{ x = 45, y = 31 },
new EnemyPosition{ x = 12, y = 9 },
(...)
};
You could naturally put the same data in an XML file, or a database, or your grandmother's cupboard for that matter, just as long as you have some interface to retrieve it when needed.
Then you can set it up with a loop, just as in your question.
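For example (Enemy and Map here are placeholders for whatever types the game actually uses):
foreach (var pos in myList)
{
    var enemy = new Enemy();   // hypothetical game type
    enemy.X = pos.x;           // exact position straight from the data
    enemy.Y = pos.y;
    this.Map.AddNewEnemy(enemy);
}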
I've managed to recognize characters from an image. To do this:
I save all recognized blobs (images) in a List:
Bitmap bpt1 = new Bitmap(#"C:\2\torec1.png", true);
Bitmap bpt2 = new Bitmap(#"C:\2\torec2.png", true);
List<Bitmap> toRecognize = new List<Bitmap>();
toRecognize.Add(bpt1);
toRecognize.Add(bpt2);
I keep a library of known letters in a Dictionary:
Bitmap le = new Bitmap(#"C:\2\e.png", true);
Bitmap lg = new Bitmap(#"C:\2\g.png", true);
Bitmap ln = new Bitmap(#"C:\2\n.png", true);
Bitmap li = new Bitmap(#"C:\2\i.png", true);
Bitmap ls = new Bitmap(#"C:\2\s.png", true);
Bitmap lt = new Bitmap(#"C:\2\t.png", true);
var dict = new Dictionary<string, Bitmap>();
dict.Add("e", le);
dict.Add("g", lg);
dict.Add("n", ln);
dict.Add("i", li);
dict.Add("s", ls);
dict.Add("t", lt);
Then I create a new List with the images from the library:
var target = dict.ToList();
And then compare the images, using target[index].Key and target[index].Value:
for (int i = 0; i < x; i++)
{
for (int j = 0; j < y; j++)
{
if (CompareMemCmp(toRecognize[i], target[j].Value) == true)
{
textBox3.AppendText("Found!" + Environment.NewLine);
textBox2.AppendText(target[j].Key); //Letter is found - save it!
}
else {textBox3.AppendText("Don't match!" + Environment.NewLine); }
}
}
1. [removed]
2. Is the method that I used tolerable from a performance perspective? I'm planning to do the recognition of 10-20 images at the same time (the average letter count for each is 8), and the letter library will consist of the English alphabet (26 upper + 26 lower case), special letters (~10) and numbers (10).
So I have 80+ letters that have to be recognized and a pattern library which consists of ~70+ characters. Will the performance be at a good level?
Constructive criticism gladly accepted. ;)
Question 1:
[removed]
Question 2:
It depends.
First of all, if the performance is not good enough, what's your bottleneck?
I suspect it's the CompareMemCmp() function... so can you speed it up?
If not, given that each iteration of your loop seems independent from the previous ones, you could try to run it in parallel.
To do this, have a look at the Task Parallel Library methods in .NET Framework 4.0, in particular Parallel.For.
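For instance, something along these lines (a sketch only; it assumes CompareMemCmp is safe to call from multiple threads, and it collects results in a ConcurrentBag instead of touching the text boxes from worker threads):
var found = new ConcurrentBag<string>(); // System.Collections.Concurrent
Parallel.For(0, toRecognize.Count, i =>
{
    for (int j = 0; j < target.Count; j++)
    {
        if (CompareMemCmp(toRecognize[i], target[j].Value))
            found.Add(target[j].Key); // letter recognized for image i
    }
});
// append the collected results to the UI afterwards, on the UI thread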
EDIT :
If we are talking about perfect matching between images, you can try to use dictionary look-up to speed things up.
First, you can build a wrapper class for Bitmap that can be efficiently used as a Dictionary<> key, like this:
class BitmapWrapper
{
private readonly int hash;
public Bitmap Image { get; private set; }
public BitmapWrapper(Bitmap img)
{
this.Image = img;
this.hash = this.ComputeHash();
}
private int ComputeHash()
{
// you could turn this code into something unsafe to speed-up GetPixel
// e.g. using lockbits etc...
unchecked // Overflow is fine, just wrap
{
int h = 17;
for (int x = 0; x < this.Image.Size.Width; x++)
for (int y = 0; y < this.Image.Size.Height; y++)
h = h * 23 + this.Image.GetPixel(x, y).GetHashCode();
return h;
}
}
public override int GetHashCode()
{
return this.hash;
}
public override bool Equals(object obj)
{
var objBitmap = obj as BitmapWrapper;
if (objBitmap == null)
return false;
// use CompareMemCmp in case of hash collisions (compare the wrapped bitmaps)
return Utils.CompareMemCmp(this.Image, objBitmap.Image);
}
}
This class computes the hash code in the ComputeHash method, which is inspired by this answer (but you could just XOR every pixel). That can surely be improved by using unsafe code (something like in the CompareMemCmp method).
Once you have this class, you can build a look-up dictionary like this:
Bitmap le = new Bitmap(#"C:\2\e.png", true);
Bitmap lg = new Bitmap(#"C:\2\g.png", true);
...
var lookup = new Dictionary<BitmapWrapper, string>();
lookup.Add(new BitmapWrapper(le), "e");
lookup.Add(new BitmapWrapper(lg), "g");
...
then the search will be simply:
foreach(var imgToRecognize in toRecognize)
{
string letterFound;
if(lookup.TryGetValue(new BitmapWrapper(imgToRecognize), out letterFound))
{
textBox3.AppendText("Found!" + Environment.NewLine);
textBox2.AppendText(letterFound); //Letter is found - save it!
}
else
textBox3.AppendText("Don't match!" + Environment.NewLine);
}
The performance of this method definitely depends on the hash computation, but it can certainly save a lot of CompareMemCmp() calls.
Whether C# is the right tool for the job depends on how big your images are. A hash table is a nice approach, but you need to process the whole image before you can check whether you have a match. XORing the images is very fast, but you need to XOR all images until you find the matching ones, which is quite inefficient.
A better approach would be to choose a fingerprint which is designed so that you need to read only a minimal amount of data. E.g. you could generate a hash code from a vertical line in the middle of your image, which would produce a different value for each of your images. If not, adapt the approach until you arrive at an algorithm where you do not need to read the image as a whole, but only a few bytes, before you can assign the image to the right bucket. This only works if your input data contains nothing but the images in your dictionary; otherwise it is only a probabilistic method.
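A sketch of that idea (assuming all bitmaps share the same dimensions; GetPixel is slow, so LockBits would be the obvious next step):
// Cheap fingerprint: hash only the middle column of pixels instead of the whole image.
static int MiddleColumnFingerprint(Bitmap img)
{
    int x = img.Width / 2;
    unchecked
    {
        int h = 17;
        for (int y = 0; y < img.Height; y++)
            h = h * 23 + img.GetPixel(x, y).ToArgb();
        return h;
    }
}
Group your dictionary images into buckets by this fingerprint; an unknown image then only needs a full comparison against the images in its own bucket.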