EDIT
It was just pointed out to me that the requirements state peaks cannot be at the ends of the array.
So I ran across this site
http://codility.com/
It gives you programming problems and awards certificates if you can solve them in 2 hours. The very first question is one I have seen before, typically called the Peaks and Flags question. If you are not familiar with it:
A non-empty zero-indexed array A consisting of N integers is given. A peak is an array element which is larger than its neighbours. More precisely, it is an index P such that
0 < P < N - 1 and A[P - 1] < A[P] > A[P + 1].
For example, the following array A:
A[0] = 1
A[1] = 5
A[2] = 3
A[3] = 4
A[4] = 3
A[5] = 4
A[6] = 1
A[7] = 2
A[8] = 3
A[9] = 4
A[10] = 6
A[11] = 2
has exactly four peaks: elements 1, 3, 5 and 10.
You are going on a trip to a range of mountains whose relative heights are represented by array A. You have to choose how many flags you should take with you. The goal is to set the maximum number of flags on the peaks, according to certain rules.
Flags can only be set on peaks. What's more, if you take K flags, then the distance between any two flags should be greater than or equal to K. The distance between indices P and Q is the absolute value |P - Q|.
For example, given the mountain range represented by array A, above, with N = 12, if you take:
two flags, you can set them on peaks 1 and 5;
three flags, you can set them on peaks 1, 5 and 10;
four flags, you can set only three flags, on peaks 1, 5 and 10.
You can therefore set a maximum of three flags in this case.
Write a function that, given a non-empty zero-indexed array A of N integers, returns the maximum number of flags that can be set on the peaks of the array.
For example, given the array above
the function should return 3, as explained above.
Assume that:
N is an integer within the range [1..100,000];
each element of array A is an integer within the range [0..1,000,000,000].
Complexity:
expected worst-case time complexity is O(N);
expected worst-case space complexity is O(N), beyond input storage (not counting the
storage required for input arguments).
Elements of input arrays can be modified.
So this makes sense, but I failed it using this code
public int GetFlags(int[] A)
{
List<int> peakList = new List<int>();
for (int i = 0; i <= A.Length - 1; i++)
{
if ((A[i] > A[i + 1] && A[i] > A[i - 1]))
{
peakList.Add(i);
}
}
List<int> flagList = new List<int>();
int distance = peakList.Count;
flagList.Add(peakList[0]);
for (int i = 1, j = 0, max = peakList.Count; i < max; i++)
{
if (Math.Abs(Convert.ToDecimal(peakList[j]) - Convert.ToDecimal(peakList[i])) >= distance)
{
flagList.Add(peakList[i]);
j = i;
}
}
return flagList.Count;
}
EDIT
int[] A = new int[] { 7, 10, 4, 5, 7, 4, 6, 1, 4, 3, 3, 7 };
The correct answer is 3, but my application says 2
This I do not get, since there are 4 peaks (indices 1,4,6,8) and from that, you should be able to place a flag at 2 of the peaks (1 and 6)
Am I missing something here? Obviously my assumption is that the beginning or end of an Array can be a peak, is this not the case?
If this needs to go in Stack Exchange Programmers, I will move it, but thought dialog here would be helpful.
EDIT
Obviously my assumption is that the beginning or end of an Array can
be a peak, is this not the case?
Your assumption is wrong, since a peak is defined as:
0 < P < N - 1
When it comes to your second example, you can set 3 flags: on peaks 1, 4 and 8.
Here is a hint: If it is possible to set m flags, then there must be at least m * (m - 1) + 1 array elements. Given that N < 100,000, turning the above around should give you confidence that the problem can be efficiently brute-forced.
Here is a hint: If it is possible to set m flags, then there must be
at least m * (m - 1) + 1 array elements. Given that N < 100,000,
turning the above around should give you confidence that the problem
can be efficiently brute-forced.
No, that is wrong. Codility puts custom solutions through a series of tests, and brute forcing can easily fail on time.
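For reference, here is a hedged C# sketch of one approach that fits the stated limits (the method and variable names are mine, not from the thread): collect the peak indices, precompute for each position the next peak at or after it, then greedily test each candidate flag count K, using the observation that K flags need an index span of at least K*(K-1), so only K up to roughly sqrt(N) have to be tried.
public static int GetMaxFlags(int[] A)
{
    int n = A.Length;
    // nextPeak[i] = index of the first peak at position >= i, or -1 if there is none
    var nextPeak = new int[n + 1];
    nextPeak[n] = -1;
    for (int i = n - 1; i >= 0; i--)
    {
        bool isPeak = i > 0 && i < n - 1 && A[i] > A[i - 1] && A[i] > A[i + 1];
        nextPeak[i] = isPeak ? i : nextPeak[i + 1];
    }
    int best = 0;
    for (int k = 1; k * (k - 1) <= n; k++)
    {
        int placed = 0, pos = 0;
        while (pos < n && placed < k)
        {
            int p = nextPeak[pos];
            if (p == -1) break;
            placed++;
            pos = p + k; // the next flag must be at distance >= k
        }
        if (placed == k) best = k; // greedy placement succeeded for this k
    }
    return best;
}
For the second example above ({ 7, 10, 4, 5, 7, 4, 6, 1, 4, 3, 3, 7 }) this returns 3, placing flags on peaks 1, 4 and 8.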
Here is my solution to the task, implemented in C++, which scores 100% (correctness and performance) on Codility. To understand it, you must realize that for a given index span containing n peaks (for example, when the first peak is at index 2 and the last at index 58, the span is 56), there is an upper limit on the number of peaks that can hold flags under the condition described in the task.
#include <vector>
#include <math.h>
typedef unsigned int uint;
void flagPeaks(const std::vector<uint> & peaks,
std::vector<uint> & flaggedPeaks,
const uint & minDist)
{
flaggedPeaks.clear();
uint dist = peaks[peaks.size() - 1] - peaks[0];
if (minDist > dist / 2)
return;
flaggedPeaks.push_back(peaks[0]);
for (uint i = 0; i < peaks.size(); ) {
uint j = i + 1;
while (j < (peaks.size()) && ((peaks[j] - peaks[i]) < minDist))
++j;
if (j < (peaks.size()) && ((peaks[j] - peaks[i]) >= minDist))
flaggedPeaks.push_back(peaks[j]);
i = j;
}
}
int solution(std::vector<int> & A)
{
std::vector<uint> peaks;
uint min = A.size();
for (uint i = 1; i < A.size() - 1; i++) {
if ((A[i] > A[i - 1]) && (A[i] > A[i + 1])) {
peaks.push_back(i);
if (peaks.size() > 1) {
if (peaks[peaks.size() - 1] - peaks[peaks.size() - 2] < min)
min = peaks[peaks.size() - 1] - peaks[peaks.size() - 2];
}
}
}
// minimal distance between 2 peaks is 2
// so when we have less than 3 peaks we are done
if (peaks.size() < 3 || min >= peaks.size())
return peaks.size();
const uint distance = peaks[peaks.size() - 1] - peaks[0];
// parts are the number of pieces between peaks
// given n + 1 peaks we always have n parts
uint parts = peaks.size() - 1;
// calculate maximal possible number of parts
// for the given distance and number of peaks
double avgOptimal = static_cast<double>(distance) / static_cast<double> (parts);
while (parts > 1 && avgOptimal < static_cast<double>(parts + 1)) {
parts--;
avgOptimal = static_cast<double>(distance) / static_cast<double>(parts);
}
std::vector<uint> flaggedPeaks;
// check how many peaks we can flag for the
// minimal possible distance between two flags
flagPeaks(peaks, flaggedPeaks, parts + 1);
uint flags = flaggedPeaks.size();
if (flags >= parts + 1)
return parts + 1;
// reduce the minimal distance between flags
// until the condition fulfilled
while ((parts > 0) && (flags < parts + 1)) {
--parts;
flagPeaks(peaks, flaggedPeaks, parts + 1);
flags = flaggedPeaks.size();
}
// return the maximal possible number of flags
return parts + 1;
}
Problem statement:
Given an array of non-negative integers, count the number of unordered pairs of array elements, such that their bitwise AND is a power of 2.
Example:
arr = [10, 7, 2, 8, 3]
Answer: 6 (10&7, 10&2, 10&8, 10&3, 7&2, 2&3)
Constraints:
1 <= arr.Count <= 2*10^5
0 <= arr[i] <= 2^12
Here's my brute-force solution that I've come up with:
private static Dictionary<int, bool> _dictionary = new Dictionary<int, bool>();
public static long CountPairs(List<int> arr)
{
long result = 0;
for (var i = 0; i < arr.Count - 1; ++i)
{
for (var j = i + 1; j < arr.Count; ++j)
{
if (IsPowerOfTwo(arr[i] & arr[j])) ++result;
}
}
return result;
}
public static bool IsPowerOfTwo(int number)
{
if (_dictionary.TryGetValue(number, out bool value)) return value;
var result = (number != 0) && ((number & (number - 1)) == 0);
_dictionary[number] = result;
return result;
}
For small inputs this works fine, but for big inputs this works slow.
My question is: what is the optimal (or at least more optimal) solution for the problem? Please provide a graceful solution in C#.
One way to accelerate your approach is to compute the histogram of your data values before counting.
This will reduce the number of computations for long arrays because there are fewer options for value (4096) than the length of your array (200000).
Be careful when counting bins that are powers of 2 to make sure you do not overcount the number of pairs by including cases when you are comparing a number with itself.
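As a rough illustration of the histogram idea (a sketch only; the method name and the MaxValue constant are mine, and it reuses the IsPowerOfTwo helper from the question):
public static long CountPairsByHistogram(List<int> arr)
{
    const int MaxValue = 1 << 12; // values are bounded by 2^12 per the constraints
    var count = new long[MaxValue + 1];
    foreach (var x in arr) count[x]++;
    long result = 0;
    for (int v = 0; v <= MaxValue; v++)
    {
        if (count[v] == 0) continue;
        // pairs of two distinct elements that both have value v (note v & v == v)
        if (IsPowerOfTwo(v))
            result += count[v] * (count[v] - 1) / 2;
        // pairs of elements with different values v < w
        for (int w = v + 1; w <= MaxValue; w++)
            if (count[w] != 0 && IsPowerOfTwo(v & w))
                result += count[v] * count[w];
    }
    return result;
}
This examines on the order of 4096 * 4096 / 2 value pairs regardless of the array length, plus one pass over the array to build the histogram.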
We can adapt the bit-subset dynamic programming idea to have a solution with O(2^N * N^2 + n * N) complexity, where N is the number of bits in the range, and n is the number of elements in the list. (So if the integers were restricted to [1, 4096] or 2^12, with n at 100,000, we would have on the order of 2^12 * 12^2 + 100000*12 = 1,789,824 iterations.)
The idea is that we want to count instances for which we have overlapping bit subsets, with the twist of adding a fixed set bit. Given Ai -- for simplicity, take 6 = b110 -- if we were to find all partners that AND to zero, we'd take Ai's negation,
110 -> ~110 -> 001
Now we can build a dynamic program that takes a diminishing mask, starting with the full number and diminishing the mask towards the left
001
^^^
001
^^
001
^
Each set bit on the negation of Ai represents a zero, which can be ANDed with either 1 or 0 to the same effect. Each unset bit on the negation of Ai represents a set bit in Ai, which we'd like to pair only with zeros, except for a single set bit.
We construct this set bit by examining each possibility separately. So whereas to count pairs that would AND with Ai to zero we'd do something like
001 ->
001
000
we now want to enumerate
011 ->
011
010
101 ->
101
100
fixing a single bit each time.
We can achieve this by adding a dimension to the inner iteration. When the mask does have a set bit at the end, we "fix" the relevant bit by counting only the result for the previous DP cell that would have the bit set, and not the usual union of subsets that could either have that bit set or not.
Here is some JavaScript code (sorry, I do not know C#) to demonstrate, with testing at the end that compares against the brute-force solution.
var debug = 0;
function bruteForce(a){
let answer = 0;
for (let i = 0; i < a.length; i++) {
for (let j = i + 1; j < a.length; j++) {
let and = a[i] & a[j];
if ((and & (and - 1)) == 0 && and != 0){
answer++;
if (debug)
console.log(a[i], a[j], a[i].toString(2), a[j].toString(2))
}
}
}
return answer;
}
function f(A, N){
const n = A.length;
const hash = {};
const dp = new Array(1 << N);
for (let i=0; i<1<<N; i++){
dp[i] = new Array(N + 1);
for (let j=0; j<N+1; j++)
dp[i][j] = new Array(N + 1).fill(0);
}
for (let i=0; i<n; i++){
if (hash.hasOwnProperty(A[i]))
hash[A[i]] = hash[A[i]] + 1;
else
hash[A[i]] = 1;
}
for (let mask=0; mask<1<<N; mask++){
// j is an index where we fix a 1
for (let j=0; j<=N; j++){
if (mask & 1){
if (j == 0)
dp[mask][j][0] = hash[mask] || 0;
else
dp[mask][j][0] = (hash[mask] || 0) + (hash[mask ^ 1] || 0);
} else {
dp[mask][j][0] = hash[mask] || 0;
}
for (let i=1; i<=N; i++){
if (mask & (1 << i)){
if (j == i)
dp[mask][j][i] = dp[mask][j][i-1];
else
dp[mask][j][i] = dp[mask][j][i-1] + dp[mask ^ (1 << i)][j][i - 1];
} else {
dp[mask][j][i] = dp[mask][j][i-1];
}
}
}
}
let answer = 0;
for (let i=0; i<n; i++){
for (let j=0; j<N; j++)
if (A[i] & (1 << j))
answer += dp[((1 << N) - 1) ^ A[i] | (1 << j)][j][N];
}
for (let i=0; i<N + 1; i++)
if (hash[1 << i])
answer = answer - hash[1 << i];
return answer / 2;
}
var As = [
[10, 7, 2, 8, 3] // 6
];
for (let A of As){
console.log(JSON.stringify(A));
console.log(`DP, brute force: ${ f(A, 4) }, ${ bruteForce(A) }`);
console.log('');
}
var numTests = 1000;
for (let i=0; i<numTests; i++){
const N = 6;
const A = [];
const n = 10;
for (let j=0; j<n; j++){
const num = Math.floor(Math.random() * (1 << N));
A.push(num);
}
const fA = f(A, N);
const brute = bruteForce(A);
if (fA != brute){
console.log('Mismatch:');
console.log(A);
console.log(fA, brute);
console.log('');
}
}
console.log("Done testing.");
int[] numbers = new[] { 10, 7, 2, 8, 3 };
static bool IsPowerOfTwo(int n) => (n != 0) && ((n & (n - 1)) == 0);
long result = numbers.AsParallel()
.Select((a, i) => numbers
.Skip(i + 1)
.Select(b => a & b)
.Count(IsPowerOfTwo))
.Sum();
If I understand the problem correctly, this should work and should be faster.
First, for each number in the array we grab all elements in the array after it to get a collection of numbers to pair with.
Then we transform each pair with a bitwise AND and count the results that satisfy our IsPowerOfTwo predicate (implemented above).
Finally we simply get the sum of all the counts - our output from this case is 6.
I think this should be more performant than your dictionary based solution - it avoids having to perform a lookup each time you wish to check power of 2.
Also, given the numerical constraints of your inputs, I think it is fine to use int data types.
I had a go at a Project Euler coding challenge below. The answer given by the code is correct, but I do not understand why it's taking nearly a minute to run. It was finishing with similar times prior to using a sieve. Other users are reporting times as low as milliseconds.
I assume I am making a basic error somewhere...
// The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
// Find the sum of all the primes below two million.
public static long Ex010()
{
var sum = 0L;
var sieve = new bool[2000000];
var primes = new List<int>(10000);
for (int i = 2; i < sieve.Length; i++)
{
if (sieve[i-1])
continue;
var isPrime = true;
foreach (var prime in primes)
{
if (i % prime == 0) {
isPrime = false;
break;
}
}
if (isPrime) {
primes.Add(i);
sum += i;
for (var x = i * 2; x < sieve.Length; x += i) {
sieve[x-1] = true;
}
}
}
return sum;
}
EDIT:
The only thing that seemed to be missing was this optimization:
if (prime > Math.Sqrt(i))
break;
It brings the time down to 160 ms.
EDIT 2:
It finally clicked: I took out the foreach, as was suggested many times. It's now 12 ms. Final solution:
public static long Ex010()
{
var sum = 0L;
var sieve = new bool[2000000];
for (int i = 2; i < sieve.Length; i++)
{
if (sieve[i-1])
continue;
sum += i;
for (var x = i * 2; x < sieve.Length; x += i) {
sieve[x-1] = true;
}
}
return sum;
}
You are doing trial division in addition to a sieve.
The boolean array will already tell you if a number is prime, so you don't need the List of primes at all.
You can also speed it up by only sieving up to the square root of the limit.
If you want to save some memory also, you can use a BitArray instead of a boolean array.
public static long Ex010()
{
const int Limit = 2000000;
int sqrt = (int)Math.Sqrt(Limit);
var sum = 0L;
var isComposite = new bool[Limit];
for (int i = 2; i <= sqrt; i++) {
if (isComposite[i - 2])
continue;//This number is not prime, skip
sum += i;
for (var x = i * i; x < isComposite.Length; x += i) {
isComposite[x - 2] = true;
}
}
//Add the remaining prime numbers
for (int i = sqrt + 1; i < Limit; i++) {
if (!isComposite[i - 2]) {
sum += i;
}
}
return sum;
}
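The BitArray suggestion above is a direct swap; here is a minimal sketch of it (the indexing is simplified so that bit i stands for the number i, which is my own choice rather than the layout used in the code above):
public static long Ex010BitArray()
{
    const int Limit = 2000000;
    int sqrt = (int)Math.Sqrt(Limit);
    long sum = 0L;
    // one bit per candidate instead of one byte per candidate
    var isComposite = new System.Collections.BitArray(Limit);
    for (int i = 2; i <= sqrt; i++)
    {
        if (isComposite[i]) continue;
        sum += i;
        for (int x = i * i; x < Limit; x += i)
            isComposite[x] = true;
    }
    for (int i = sqrt + 1; i < Limit; i++)
        if (!isComposite[i]) sum += i;
    return sum;
}
The trade-off is that BitArray accesses are slightly slower than bool[] accesses, so this mainly helps when memory is the concern.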
(tl;dr: 2 million in 0.8 ms, 2 billion in 1.25 s; segmented odds-only SoE, presieving, wheeled striding)
As always, the limit of Euler task #10 seems designed to pose a mild challenge on a ZX81, Apple ][ or C64 but on modern hardware you generally have to multiply the limits by 1000 to make things even remotely interesting. Or set a time limit like 5 seconds and try to see by how many orders of magnitude the Euler limit can be exceeded...
Dennis_E's solution is simple and efficient, but I'd recommend applying two small improvements that give a marked performance boost without any effort at all.
Represent only odd numbers in the sieve
All even numbers except for the number 2 are composite. If you pull the number 2 out of thin air when needed then you can drop all even numbers from the sieve. This halves workload and memory footprint, for a doubling of performance at the marginal cost of writing << or >> in a few places (to convert between the realm of numbers and the realm of bit indices). This is usually known as an 'odds-only sieve' or 'wheeled representation mod 2'; it has the added advantage that it largely removes the need for guarding against index overflow.
Skip a few extra small primes during decoding
Skipping a few small primes ('applying a wheel') is much easier when going through a range of numbers incrementally, compared to hopping wildly about with different strides as during sieving. This skipping only involves applying a cyclic sequence of differences between consecutive numbers that are not multiples of the primes in question, like 4,2,4,2... for skipping multiples of 2 and 3 (the mod 6 wheel) or 6,4,2,4,2,4,6,2... for skipping multiples of 2, 3 and 5.
The mod 6 wheel sequence alternates between only two numbers, which can easily be achieved by XORing with a suitable value. On top of the odds-only sieve the distances are halved, so that the sequence becomes 2,1,2,1... This skipping reduces the work during decoding by 1/3rd (for stepping mod 3), and the skipped primes can also be ignored during the sieving. The latter can have a marked effect on sieve times, since the smallest primes do the greatest number of crossings-off during the sieving.
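For concreteness, here is a tiny illustration of the XOR stepping (my own snippet, not part of the code below): with d ^= 3 the step alternates 1, 2, 1, 2, ... because 1 ^ 3 == 2 and 2 ^ 3 == 1, so the loop visits exactly the odd numbers that are not multiples of 3.
for (int i = 5 >> 1, d = 1; i < 20; i += d, d ^= 3)
    Console.Write(((i << 1) + 1) + " "); // prints 5 7 11 13 17 19 23 25 29 31 35 37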
Here's a simple Sieve of Eratosthenes, with both suggestions applied. Note: here and in the following I generally go with the flow of C#/.Net and use signed integers where I would normally use unsigned integers in any sane language. That's because I don't have the time to vet the code for the performance implications (penalties) resulting from the use of unsigned types, like that the compiler suddenly forgets how to replace division by a constant with multiplication of the inverse and so on.
static long sum_small_primes_up_to (int n)
{
if (n < 7)
return (0xAA55200 >> (n << 2)) & 0xF;
int sqrt_n_halved = (int)Math.Sqrt(n) >> 1;
int max_bit = (int)(n - 1) >> 1;
var odd_composite = new bool[max_bit + 1];
for (int i = 5 >> 1; i <= sqrt_n_halved; ++i)
if (!odd_composite[i])
for (int p = (i << 1) + 1, j = p * p >> 1; j <= max_bit; j += p)
odd_composite[j] = true;
long sum = 2 + 3;
for (int i = 5 >> 1, d = 1; i <= max_bit; i += d, d ^= 3)
if (!odd_composite[i])
sum += (i << 1) + 1;
return sum;
}
The first if statement handles the small fry (n in 0..6) by returning a suitable element of a precomputed list of numbers, and it serves to get all the special cases out of the way in one fell swoop. All other occurrences of shift operators are for converting between the realm of numbers and the realm of indices into the odds-only sieve.
This is pretty much the same code that I normally use for sieving small primes, up to 64K or so (the potential least factors for numbers up to 32 bits). It does Euler's piddling 2 million in 4.5 milliseconds but throwing bigger numbers at it shows its Achilles heel: it does a lot of striding over large distances, which interacts badly with modern memory subsystems where decent access speed can only be got from caches. The performance drops markedly when the capacity of the level 1 cache (typically 32 KiByte) is exceeded significantly, and it goes down even further when exceeding the L2 and L3 capacities (typically several megabytes). How sharp the drop is depends on the quality (price tag) of the computer, of course...
Here are some timings taken on my laptop:
# benchmark: small_v0 ...
sum up to 2 * 10^4: 21171191 in 0,03 ms
sum up to 2 * 10^5: 1709600813 in 0,35 ms // 11,0 times
sum up to 2 * 10^6: 142913828922 in 4,11 ms // 11,7 times
sum up to 2 * 10^7: 12272577818052 in 59,36 ms // 14,4 times
sum up to 2 * 10^8: 1075207199997334 in 1.225,19 ms // 20,6 times
sum up to 2 * 10^9: 95673602693282040 in 14.381,29 ms // 11,7 times
In the middlish ranges there are time increases that go well beyond the expected factor of about 11, and then things stabilise again.
And here comes how to speed up the beast an order of magnitude...
Process the range in cache-sized segments
The remedy is easy enough: instead of striding each prime all the way from one end of the range to the other - and hence all across the memory space - we sieve the range in cache-sized strips, memorising the final positions for each prime so that the next round can continue right where the previous round left off. If we don't need a big bad sieve full of bits at the end then we can process a strip (extract its primes) after it has been sieved and then discard its data, reusing the sieve buffer for the next strip. Both are variations on the theme of segmented sieving but the offsets are treated differently during the processing; when the distinction matters then the first approach (big bad sieve for the whole range) is usually called a segmented sieve and the latter an iterated sieve. The terms 'moving' or 'sliding' sieve might fit the latter somewhat but should be avoided because they normally refer to a totally different class of sieves (also known as deque sieves) that are deceptively simple but whose performance is worse by at least an order of magnitude.
Here's an example of an iterated sieve, a slightly modified version of a function that I normally use for sieving primes in given ranges [m, n], just like in SPOJ's PRIMES1 and PRINT. Here the parameter m is implicitly 0, so it doesn't need to be passed.
Normally the function takes an interface that is responsible for processing the raw sieve (and any loose primes it may get passed), and which can get queried for the number of primes that the processor skips ('decoder order') so that the sieve can ignore those during the sieving. For this exposition I've replaced this with a delegate for simplicity.
The factor primes get sieved by a stock function that may look somewhat familiar, and I've changed the logic of the sieve from 'is_composite' to 'not_composite' (and to a base type that can participate in arithmetic) for reasons that will be explained later. decoder_order is the number of additional primes skipped by the decoder (which would be 1 for the function shown earlier, because it skips multiples of the prime 3 during the prime extraction/summing, over and above the wheel prime 2).
const int SIEVE_BITS = 1 << 15; // L1 cache size, 1 usable bit per byte
delegate long sieve_sum_func (byte[] sieve, int window_base, int window_bits);
static long sum_primes_up_to (int n, sieve_sum_func sum_func, int decoder_order)
{
if (n < 7)
return 0xF & (0xAA55200 >> (n << 2));
n -= ~n & 1; // make odd (n can't be 0 here)
int sqrt_n = (int)Math.Sqrt(n);
var factor_primes = small_primes_up_to(sqrt_n).ToArray();
int first_sieve_prime_index = 1 + decoder_order; // skip wheel primes + decoder primes
int m = 7; // this would normally be factor_primes[first_sieve_prime_index] + 2
int bits_to_sieve = ((n - m) >> 1) + 1;
int sieve_bits = Math.Min(bits_to_sieve, SIEVE_BITS);
var sieve = new byte[sieve_bits];
var offsets = new int[factor_primes.Length];
int sieve_primes_end = first_sieve_prime_index;
long sum = 2 + 3 + 5; // wheel primes + decoder primes
for (int window_base = m; ; )
{
int window_bits = Math.Min(bits_to_sieve, sieve_bits);
int last_number_in_window = window_base - 1 + (window_bits << 1);
while (sieve_primes_end < factor_primes.Length)
{
int prime = factor_primes[sieve_primes_end];
int start = prime * prime, stride = prime << 1;
if (start > last_number_in_window)
break;
if (start < window_base)
start = (stride - 1) - (window_base - start - 1) % stride;
else
start -= window_base;
offsets[sieve_primes_end++] = start >> 1;
}
fill(sieve, window_bits, (byte)1);
for (int i = first_sieve_prime_index; i < sieve_primes_end; ++i)
{
int prime = factor_primes[i], j = offsets[i];
for ( ; j < window_bits; j += prime)
sieve[j] = 0;
offsets[i] = j - window_bits;
}
sum += sum_func(sieve, window_base, window_bits);
if ((bits_to_sieve -= window_bits) == 0)
break;
window_base += window_bits << 1;
}
return sum;
}
static List<int> small_primes_up_to (int n)
{
int upper_bound_on_pi = 32 + (n < 137 ? 0 : (int)(n / (Math.Log(n) - 1.083513)));
var result = new List<int>(upper_bound_on_pi);
if (n < 2)
return result;
result.Add(2); // needs to be pulled out of thin air because of the mod 2 wheel
if (n < 3)
return result;
result.Add(3); // needs to be pulled out of thin air because of the mod 3 decoder
int sqrt_n_halved = (int)Math.Sqrt(n) >> 1;
int max_bit = (n - 1) >> 1;
var odd_composite = new bool[max_bit + 1];
for (int i = 5 >> 1; i <= sqrt_n_halved; ++i)
if (!odd_composite[i])
for (int p = (i << 1) + 1, j = p * p >> 1; j <= max_bit; j += p)
odd_composite[j] = true;
for (int i = 5 >> 1, d = 1; i <= max_bit; i += d, d ^= 3)
if (!odd_composite[i])
result.Add((i << 1) + 1);
return result;
}
static void fill<T> (T[] array, int count, T value, int threshold = 16)
{
Trace.Assert(count <= array.Length);
int current_size = Math.Min(threshold, count);
for (int i = 0; i < current_size; ++i)
array[i] = value;
for (int half = count >> 1; current_size <= half; current_size <<= 1)
Buffer.BlockCopy(array, 0, array, current_size, current_size);
Buffer.BlockCopy(array, 0, array, current_size, count - current_size);
}
Here's a sieve processor that is equivalent to the logic used in the function shown at the beginning, and a dummy function that can be used to measure the sieve time sans any decoding, for comparison:
static long prime_sum_null (byte[] sieve, int window_base, int window_bits)
{
return 0;
}
static long prime_sum_v0 (byte[] sieve, int window_base, int window_bits)
{
long sum = 0;
int i = window_base % 3 == 0 ? 1 : 0;
int d = 3 - (window_base + 2 * i) % 3;
for ( ; i < window_bits; i += d, d ^= 3)
if (sieve[i] == 1)
sum += window_base + (i << 1);
return sum;
}
This function needs to perform a bit of modulo magic to synchronise itself with the mod 3 sequence over the mod 2 sieve; the earlier function did not need to do this because its starting point was fixed, not a parameter. Here are the timings:
# benchmark: iter_v0 ...
sum up to 2 * 10^4: 21171191 in 0,04 ms
sum up to 2 * 10^5: 1709600813 in 0,28 ms // 7,0 times
sum up to 2 * 10^6: 142913828922 in 2,42 ms // 8,7 times
sum up to 2 * 10^7: 12272577818052 in 22,11 ms // 9,1 times
sum up to 2 * 10^8: 1075207199997334 in 223,67 ms // 10,1 times
sum up to 2 * 10^9: 95673602693282040 in 2.408,06 ms // 10,8 times
Quite a difference, n'est-ce pas? But we're not done yet.
Replace conditional branching with arithmetic
Modern processors like things to be simple and predictable; if branches are not predicted correctly then the CPU levies a heavy fine in extra cycles for flushing and refilling the instruction pipeline. Unfortunately, the decoding loop isn't very predictable because primes are fairly dense in the low number ranges we're talking about here:
if (!odd_composite[i])
++count;
If the average number of non-primes between primes times the cost of an addition is less than the penalty for a mispredicted branch then the following statement should be faster:
count += sieve[i];
This explains why I inverted the logic of the sieve compared to normal, because with 'is_composite' semantics I'd have to do
count += 1 ^ odd_composite[i];
And the rule is to pull everything out of inner loops that can be pulled out, so that I simply applied 1 ^ x to the whole array before even starting.
However, Euler wants us to sum the primes instead of counting them. This can be done in a similar fashion, by turning the value 1 into a mask of all 1 bits (which preserves everything when ANDed) while 0 zeroises any value. This is similar to the CMOV instruction, except that it works even on the oldest of CPUs and does not require a reasonably decent compiler:
static long prime_sum_v1 (byte[] sieve, int window_base, int window_bits)
{
long sum = 0;
int i = window_base % 3 == 0 ? 1 : 0;
int d = 3 - (window_base + 2 * i) % 3;
for ( ; i < window_bits; i += d, d ^= 3)
sum += (0 - sieve[i]) & (window_base + (i << 1));
return sum;
}
Result:
# benchmark: iter_v1 ...
sum up to 2 * 10^4: 21171191 in 0,10 ms
sum up to 2 * 10^5: 1709600813 in 0,36 ms // 3,6 times
sum up to 2 * 10^6: 142913828922 in 1,88 ms // 5,3 times
sum up to 2 * 10^7: 12272577818052 in 13,80 ms // 7,3 times
sum up to 2 * 10^8: 1075207199997334 in 157,39 ms // 11,4 times
sum up to 2 * 10^9: 95673602693282040 in 1.819,05 ms // 11,6 times
Unrolling and strength reduction
Now, a bit of overkill: a decoder with a fully unrolled wheel mod 15 (the unrolling can unlock some reserves of instruction-level parallelism).
static long prime_sum_v5 (byte[] sieve, int window_base, int window_bits)
{
Trace.Assert(window_base % 2 == 1);
int count = 0, sum = 0;
int residue = window_base % 30;
int phase = UpperIndex[residue];
int i = (SpokeValue[phase] - residue) >> 1;
// get into phase for the unrolled code (which is based on phase 0)
for ( ; phase != 0 && i < window_bits; i += DeltaDiv2[phase], phase = (phase + 1) & 7)
{
int b = sieve[i]; count += b; sum += (0 - b) & i;
}
// process full revolutions of the wheel (anchored at phase 0 == residue 1)
for (int e = window_bits - (29 >> 1); i < e; i += (30 >> 1))
{
int i0 = i + ( 1 >> 1), b0 = sieve[i0]; count += b0; sum += (0 - b0) & i0;
int i1 = i + ( 7 >> 1), b1 = sieve[i1]; count += b1; sum += (0 - b1) & i1;
int i2 = i + (11 >> 1), b2 = sieve[i2]; count += b2; sum += (0 - b2) & i2;
int i3 = i + (13 >> 1), b3 = sieve[i3]; count += b3; sum += (0 - b3) & i3;
int i4 = i + (17 >> 1), b4 = sieve[i4]; count += b4; sum += (0 - b4) & i4;
int i5 = i + (19 >> 1), b5 = sieve[i5]; count += b5; sum += (0 - b5) & i5;
int i6 = i + (23 >> 1), b6 = sieve[i6]; count += b6; sum += (0 - b6) & i6;
int i7 = i + (29 >> 1), b7 = sieve[i7]; count += b7; sum += (0 - b7) & i7;
}
// clean up leftovers
for ( ; i < window_bits; i += DeltaDiv2[phase], phase = (phase + 1) & 7)
{
int b = sieve[i]; count += b; sum += (0 - b) & i;
}
return (long)window_base * count + ((long)sum << 1);
}
As you can see, I performed a bit of strength reduction in order to make things easier for the compiler. Instead of summing window_base + (i << 1), I sum i and 1 separately and perform the rest of the calculation only once, at the end of the function.
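Concretely, if the flagged indices within the window are i1 .. ik, the required total is (window_base + 2*i1) + ... + (window_base + 2*ik) = window_base * k + 2 * (i1 + ... + ik), which is exactly the final expression (long)window_base * count + ((long)sum << 1).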
Timings:
# benchmark: iter_v5(1) ...
sum up to 2 * 10^4: 21171191 in 0,01 ms
sum up to 2 * 10^5: 1709600813 in 0,11 ms // 9,0 times
sum up to 2 * 10^6: 142913828922 in 1,01 ms // 9,2 times
sum up to 2 * 10^7: 12272577818052 in 11,52 ms // 11,4 times
sum up to 2 * 10^8: 1075207199997334 in 130,43 ms // 11,3 times
sum up to 2 * 10^9: 95673602693282040 in 1.563,10 ms // 12,0 times
# benchmark: iter_v5(2) ...
sum up to 2 * 10^4: 21171191 in 0,01 ms
sum up to 2 * 10^5: 1709600813 in 0,09 ms // 8,7 times
sum up to 2 * 10^6: 142913828922 in 1,03 ms // 11,3 times
sum up to 2 * 10^7: 12272577818052 in 10,34 ms // 10,0 times
sum up to 2 * 10^8: 1075207199997334 in 121,08 ms // 11,7 times
sum up to 2 * 10^9: 95673602693282040 in 1.468,28 ms // 12,1 times
The first set of timings is for decoder_order == 1 (i.e. not telling the sieve about the extra skipped prime), for direct comparison to the other decoder versions. The second set is for decoder_order == 2, which means the sieve could skip the crossings-off for the prime 5 as well. Here are the null timings (essentially the sieve time without the decode time), to put things a bit into perspective:
# benchmark: iter_null(1) ...
sum up to 2 * 10^8: 10 in 94,74 ms // 11,4 times
sum up to 2 * 10^9: 10 in 1.194,18 ms // 12,6 times
# benchmark: iter_null(2) ...
sum up to 2 * 10^8: 10 in 86,05 ms // 11,9 times
sum up to 2 * 10^9: 10 in 1.109,32 ms // 12,9 times
This shows that the work on the decoder has decreased decode time for 2 billion from 1.21 s to 0.35 s, which is nothing to sneeze at. Similar speedups can be realised for the sieving as well, but that is nowhere near as easy as it was for the decoding.
Low-hanging fruit: presieving
Lastly, a technique that can sometimes offer dramatic speedups (especially for packed bitmaps and/or higher-order wheels) is blasting a canned bit pattern over the sieve before commencing a round of sieving, such that the sieve looks as if it had already been sieved by a handful of small primes. This is usually known as presieving. In the current case the speedup is marginal (not even 20%) but I'm showing it because it is a useful technique to have in one's toolchest.
Note: I've ripped the presieving logic from another Euler project, so it doesn't fit organically into the code I wrote for this article. But it should demonstrate the technique well enough.
const byte CROSSED_OFF = 0; // i.e. composite
const byte NOT_CROSSED = 1 ^ CROSSED_OFF; // i.e. not composite
const int SIEVE_BYTES = SIEVE_BITS; // i.e. 1 usable bit per byte
internal readonly static byte[] TinyPrimes = { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31 };
internal readonly static int m_wheel_order = 3; // == number of wheel primes
internal static int m_presieve_level = 0; // == number of presieve primes
internal static int m_presieve_modulus = 0;
internal static byte[] m_presieve_pattern;
internal static void set_presieve_level (int presieve_primes)
{
m_presieve_level = Math.Max(0, presieve_primes);
m_presieve_modulus = 1;
for (int i = m_wheel_order; i < m_wheel_order + presieve_primes; ++i)
m_presieve_modulus *= TinyPrimes[i];
// the pattern needs to provide SIEVE_BYTES bytes for every residue of the modulus
m_presieve_pattern = new byte[m_presieve_modulus + SIEVE_BYTES - 1];
var pattern = m_presieve_pattern;
int current_size = 1;
pattern[0] = NOT_CROSSED;
for (int i = m_wheel_order; i < m_wheel_order + presieve_primes; ++i)
{
int current_prime = TinyPrimes[i];
int new_size = current_size * current_prime;
// keep doubling while possible
for ( ; current_size * 2 <= new_size; current_size *= 2)
Buffer.BlockCopy(pattern, 0, pattern, current_size, current_size);
// copy rest, if any
Buffer.BlockCopy(pattern, 0, pattern, current_size, new_size - current_size);
current_size = new_size;
// mark multiples of the current prime
for (int j = current_prime >> 1; j < current_size; j += current_prime)
pattern[j] = CROSSED_OFF;
}
for (current_size = m_presieve_modulus; current_size * 2 <= pattern.Length; current_size *= 2)
Buffer.BlockCopy(pattern, 0, pattern, current_size, current_size);
Buffer.BlockCopy(pattern, 0, pattern, current_size, pattern.Length - current_size);
}
For a quick test you can hack the presieving into the sieve function as follows:
- int first_sieve_prime_index = 1 + decoder_order; // skip wheel primes + decoder primes
+ int first_sieve_prime_index = 1 + decoder_order + m_presieve_level; // skip wheel primes + decoder primes
plus
- long sum = 2 + 3 + 5; // wheel primes + decoder primes
+ long sum = 2 + 3 + 5; // wheel primes + decoder primes
+
+ for (int i = 0; i < m_presieve_level; ++i)
+ sum += TinyPrimes[m_wheel_order + i];
plus
- fill(sieve, window_bits, (byte)1);
+ if (m_presieve_level == 0)
+ fill(sieve, window_bits, (byte)1);
+ else
+ Buffer.BlockCopy(m_presieve_pattern, (window_base >> 1) % m_presieve_modulus, sieve, 0, window_bits);
and
set_presieve_level(4) // 4 and 5 work well
in the static constructor or Main().
This way you can use m_presieve_level for turning presieving on and off. The BlockCopy also works correctly after calling set_presieve_level(0), though, because then the modulus is 1. m_wheel_order should reflect the actual wheel order (= 1) plus the decoder order; it's currently set to 3, so it'll work only with the v5 decoder at level 2.
Timings:
# benchmark: iter_v5(2) pre(7) ...
sum up to 2 * 10^4: 21171191 in 0,02 ms
sum up to 2 * 10^5: 1709600813 in 0,08 ms // 4,0 times
sum up to 2 * 10^6: 142913828922 in 0,78 ms // 9,6 times
sum up to 2 * 10^7: 12272577818052 in 8,78 ms // 11,2 times
sum up to 2 * 10^8: 1075207199997334 in 98,89 ms // 11,3 times
sum up to 2 * 10^9: 95673602693282040 in 1.245,19 ms // 12,6 times
sum up to 2^31 - 1: 109930816131860852 in 1.351,97 ms
I'm writing a program that has to find the smallest number using a tournament bracket. For example, there is an array
int[] a = new int[4] {4, 2, 1, 3}
and by comparing numbers standing next to each other I have to choose the smallest one (min(4, 2) -> 2, min(1, 3) -> 1, and then I compare the winners 2 and 1; 1 is the smallest, so it's the overall winner). But it's not possible to compare 2 and 1 directly; I can only compare a[0] with a[1], a[2] with a[3] and so on. In general, a[2*i] with a[(2*i)+1] for (int i = 0; i < a.Length/2; i++), something like this.
First question: If there are n numbers, the whole tree consists of 2n-1 brackets. Am I supposed to create an array of 4 or 7 elements? 4 seems like a better option.
Second question: if I'm comparing 4 and 2, and 2 is smaller, should I make a[0] = 2, and then while comparing 1 and 3 make a[1] = 1? Finally, compare a[0] with a[1] and put the smallest number into a[0]? A temporary int might be needed.
Last question: what do you propose as the simplest way to do it? I could hardly find any info about this algorithm. I hope you can point me toward a working algorithm.
Not much, but I'm posting my code:
int[] a = new int[4] { 4, 2, 1, 3 };
int tmp = 0;
for (int i = 0; i < (a.Length)/2; i++)
{
if (a[tmp] > a[tmp + 1])
{
a[i] = a[i + 1];
}
else if(a[tmp] < a[tmp +1])
{
a[i] = a[i + 1];
}
tmp = tmp + 2;
}
Can you point out what I'm doing right, and what should be improved?
If tournament style is a must, a recursive approach seems the most appropriate:
int Minimum (int [] values, int start, int end)
{
if (start == end)
return values [start];
if (end - start == 1)
if ( values [start] < values [end])
return values [start];
else
return values [end];
else
{
int middle = start + (end - start) / 2;
int min1 = Minimum (values, start, middle);
int min2 = Minimum (values, middle + 1, end);
if (min1 < min2)
return min1;
else
return min2;
}
}
EDIT: Code is untested and errors might have slipped in, since it's been typed on the Android app.
EDIT: Forgot to say how you call this method. Like so:
int min = Minimum (myArray, 0, myArray.Length -1);
EDIT: Or create another overload:
int Minimum (int [] values)
{
return Minimum (values, 0, values.Length -1);
}
And to call use just:
int min = Minimum (myArray);
EDIT: And here's a non-recursive method (bear in mind that this method actually modifies the array):
int Minimum(int[] values)
{
int step = 1;
do
{
for (int i = 0; i < values.Length - step; i += step)
if(values[i] > values[i + step])
values[i] = values[i + step];
step *= 2;
}
while(step < values.Length);
return values[0];
}
There are various simple solutions that utilize functions built into C#:
int min = myArray.Min();
//Call your array something other than 'a'; single-letter names are generally difficult to figure out later
Alternatively, this will loop through all of your values with a foreach:
int minint = myArray[0];
foreach (int value in myArray) {
if (value < minint) minint = value;
}
1 - What tree are you talking about? Your array has n values to start with, so it will have n values at most. If you mean that the number of values across all the arrays you create is 2n-1, that still doesn't mean you need to fit all of them in one array; create an array, use it, and then create another array. The C# GC will collect objects of a reference type that are no longer referenced (are not going to be used again), so it will be fine memory-wise, if that's your concern.
2 - Post your code. There are a few gotchas, but you will likely be fine changing the current array's values or creating a new array. A temporary int will not be needed.
3 - The algorithms posted above are the "simplest", using built-in functions available in C#. If this is a homework assignment, please post some code.
As a general direction, using a recursive function would likely be most elegant (and some general reading on merge sorts would prove useful to you going forward).
I'm learning bit masks, and I found an example but couldn't make it work.
I'm trying to calculate all sum combinations from one array.
The result should be
0 - 1 - 2 - 3 - 3 - 4 - 5 - 6
My problem is that (i & mask) should only result in {0,1}, and it isn't.
Instead it is producing:
0 - 1 - 4 - 5 - 12 - 13 - 16 - 17
int[] elem = new int[] { 1, 2, 3 };
double maxElem = Math.Pow(2, elem.Length);
for (int i = 0; i < maxElem; i++)
{
int mask = 1, sum = 0;
for (int run = 0; run < elem.Length; run++)
{
sum += elem[run] * (i & mask);
mask <<= 1;
}
Debug.Write(sum + " - ");
}
(i & mask) should only result in {0,1} and isn't
(i & mask) should return a result in {0,1} only when mask is 1 - that is, on the initial iteration. However, as soon as mask gets shifted by mask <<= 1 operation, the result of the next operation will be in {0,2}. As the mask gets shifted, possible results will become {0,4}, {0,8}, {0,16} and so on, because the only bit set to 1 in the mask would be moving to the left.
The reason why << operator doubles the number is the same as the reason why writing a zero after a decimal number has the effect of multiplying the number by ten: appending a zero to a number of any base is the same as multiplying that number by the value of base.
OK, I solved it by adding an IF.
int[] elem = new int[] { 1, 2, 3 };
double maxElem = Math.Pow(2, elem.Length);
for (int i = 0; i < maxElem; i++)
{
    int mask = 1, sum = 0;
    for (int run = 0; run < elem.Length; run++)
    {
        if ((i & mask) > 0) // ADD THIS LINE
        {
            sum += elem[run];
        }
        mask <<= 1;
    }
    Debug.Write(sum + " - ");
}
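An alternative fix (a sketch of mine, not from the thread) is to shift i instead of the mask, so the tested bit always ends up in position 0 and can be used directly as a 0/1 factor:
int[] elem = new int[] { 1, 2, 3 };
int maxElem = 1 << elem.Length; // 2^n subsets
for (int i = 0; i < maxElem; i++)
{
    int sum = 0;
    for (int run = 0; run < elem.Length; run++)
        sum += elem[run] * ((i >> run) & 1); // (i >> run) & 1 is always 0 or 1
    Debug.Write(sum + " - "); // prints 0 - 1 - 2 - 3 - 3 - 4 - 5 - 6 -
}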
I need to calculate the similarity between 2 strings. So what exactly do I mean? Let me explain with an example:
The real word: hospital
Mistaken word: haspita
Now my aim is to determine how many characters I need to modify in the mistaken word to obtain the real word. In this example, I need to modify 2 letters. So what would the percentage be? I always take the length of the real word. So it becomes 2 / 8 = 25%, and the similarity (DSM) of these two given strings is therefore 75%.
How can I achieve this with performance being a key consideration?
I just addressed this exact same issue a few weeks ago. Since someone is asking now, I'll share the code. In my exhaustive tests my code is about 10x faster than the C# example on Wikipedia, even when no maximum distance is supplied. When a maximum distance is supplied, this performance gain increases to 30x-100x+. Note a couple of key points for performance:
If you need to compare the same words over and over, first convert the words to arrays of integers. The Damerau-Levenshtein algorithm includes many >, <, == comparisons, and ints compare much faster than chars.
It includes a short-circuiting mechanism to quit if the distance exceeds a provided maximum
Use a rotating set of three arrays rather than a massive matrix as in all the implementations I've seen elsewhere
Make sure your arrays slice across the shorter word's width.
Code (it works exactly the same if you replace int[] with string in the parameter declarations):
/// <summary>
/// Computes the Damerau-Levenshtein Distance between two strings, represented as arrays of
/// integers, where each integer represents the code point of a character in the source string.
/// Includes an optional threshold which can be used to indicate the maximum allowable distance.
/// </summary>
/// <param name="source">An array of the code points of the first string</param>
/// <param name="target">An array of the code points of the second string</param>
/// <param name="threshold">Maximum allowable distance</param>
/// <returns>Int.MaxValue if threshold exceeded; otherwise the Damerau-Levenshtein distance between the strings</returns>
public static int DamerauLevenshteinDistance(int[] source, int[] target, int threshold) {
int length1 = source.Length;
int length2 = target.Length;
// Return trivial case - difference in string lengths exceeds threshold
if (Math.Abs(length1 - length2) > threshold) { return int.MaxValue; }
// Ensure arrays [i] / length1 use shorter length
if (length1 > length2) {
Swap(ref target, ref source);
Swap(ref length1, ref length2);
}
int maxi = length1;
int maxj = length2;
int[] dCurrent = new int[maxi + 1];
int[] dMinus1 = new int[maxi + 1];
int[] dMinus2 = new int[maxi + 1];
int[] dSwap;
for (int i = 0; i <= maxi; i++) { dCurrent[i] = i; }
int jm1 = 0, im1 = 0, im2 = -1;
for (int j = 1; j <= maxj; j++) {
// Rotate
dSwap = dMinus2;
dMinus2 = dMinus1;
dMinus1 = dCurrent;
dCurrent = dSwap;
// Initialize
int minDistance = int.MaxValue;
dCurrent[0] = j;
im1 = 0;
im2 = -1;
for (int i = 1; i <= maxi; i++) {
int cost = source[im1] == target[jm1] ? 0 : 1;
int del = dCurrent[im1] + 1;
int ins = dMinus1[i] + 1;
int sub = dMinus1[im1] + cost;
//Fastest execution for min value of 3 integers
int min = (del > ins) ? (ins > sub ? sub : ins) : (del > sub ? sub : del);
if (i > 1 && j > 1 && source[im2] == target[jm1] && source[im1] == target[j - 2])
min = Math.Min(min, dMinus2[im2] + cost);
dCurrent[i] = min;
if (min < minDistance) { minDistance = min; }
im1++;
im2++;
}
jm1++;
if (minDistance > threshold) { return int.MaxValue; }
}
int result = dCurrent[maxi];
return (result > threshold) ? int.MaxValue : result;
}
Where Swap is:
static void Swap<T>(ref T arg1,ref T arg2) {
T temp = arg1;
arg1 = arg2;
arg2 = temp;
}
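A hedged usage sketch (the ToCodePoints helper is mine, not part of the answer): convert each string to an int[] once, then reuse the arrays for repeated comparisons.
static int[] ToCodePoints(string s)
{
    var result = new int[s.Length];
    for (int i = 0; i < s.Length; i++)
        result[i] = s[i]; // UTF-16 code units; fine for ordinary text
    return result;
}

// ...
int[] source = ToCodePoints("hospital");
int[] target = ToCodePoints("haspita");
int distance = DamerauLevenshteinDistance(source, target, 3); // 2 for this pair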
What you are looking for is called edit distance or Levenshtein distance. The wikipedia article explains how it is calculated, and has a nice piece of pseudocode at the bottom to help you code this algorithm in C# very easily.
Here's an implementation from the first site linked below:
private static int CalcLevenshteinDistance(string a, string b)
{
if (String.IsNullOrEmpty(a) && String.IsNullOrEmpty(b)) {
return 0;
}
if (String.IsNullOrEmpty(a)) {
return b.Length;
}
if (String.IsNullOrEmpty(b)) {
return a.Length;
}
int lengthA = a.Length;
int lengthB = b.Length;
var distances = new int[lengthA + 1, lengthB + 1];
for (int i = 0; i <= lengthA; distances[i, 0] = i++);
for (int j = 0; j <= lengthB; distances[0, j] = j++);
for (int i = 1; i <= lengthA; i++)
for (int j = 1; j <= lengthB; j++)
{
int cost = b[j - 1] == a[i - 1] ? 0 : 1;
distances[i, j] = Math.Min
(
Math.Min(distances[i - 1, j] + 1, distances[i, j - 1] + 1),
distances[i - 1, j - 1] + cost
);
}
return distances[lengthA, lengthB];
}
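Tying this back to the question's metric (a sketch; the Similarity name is mine, and it assumes the method sits next to CalcLevenshteinDistance above): the percentage is one minus the edit distance divided by the length of the real word.
static double Similarity(string realWord, string mistaken)
{
    int distance = CalcLevenshteinDistance(realWord, mistaken);
    return 1.0 - (double)distance / realWord.Length;
}

// Similarity("hospital", "haspita") == 0.75, i.e. 75%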
There is a large number of string similarity distance algorithms that can be used. Some of them (not an exhaustive list) are:
Levenshtein
Needleman Wunch
Smith Waterman
Smith Waterman Gotoh
Jaro, Jaro Winkler
Jaccard Similarity
Euclidean Distance
Dice Similarity
Cosine Similarity
Monge Elkan
A library that contains implementations of all of these is called SimMetrics,
which has both Java and C# implementations.
I have found that Levenshtein and Jaro Winkler are great for small differences between strings such as:
Spelling mistakes; or
ö instead of o in a person's name.
However, when comparing something like article titles, where significant chunks of the text would be the same but with "noise" around the edges, Smith-Waterman-Gotoh has been fantastic.
Compare these 2 titles (which refer to the same article but are worded differently by different sources):
An endonuclease from Escherichia coli that introduces single polynucleotide chain scissions in ultraviolet-irradiated DNA
Endonuclease III: An Endonuclease from Escherichia coli That Introduces Single Polynucleotide Chain Scissions in Ultraviolet-Irradiated DNA
This site, which provides an algorithm comparison of the strings, shows:
Levenshtein: 81
Smith-Waterman Gotoh: 94
Jaro Winkler: 78
Jaro Winkler and Levenshtein are not as competent as Smith Waterman Gotoh in detecting the similarity. If we compare two titles that are not the same article, but have some matching text:
Fat metabolism in higher plants. The function of acyl thioesterases in the metabolism of acyl-coenzymes A and acyl-acyl carrier proteins
Fat metabolism in higher plants. The determination of acyl-acyl carrier protein and acyl coenzyme A in a complex lipid mixture
Jaro Winkler gives a false positive, but Smith Waterman Gotoh does not:
Levenshtein: 54
Smith-Waterman Gotoh: 49
Jaro Winkler: 89
As Anastasiosyal pointed out, SimMetrics has the Java code for these algorithms. I had success using the SmithWatermanGotoh Java code from SimMetrics.
Here is my implementation of Damerau-Levenshtein distance, which returns not only the similarity coefficient but also the error locations in the corrected word (this feature can be used in text editors). My implementation also supports different weights for the errors (substitution, deletion, insertion, transposition).
public static List<Mistake> OptimalStringAlignmentDistance(
string word, string correctedWord,
bool transposition = true,
int substitutionCost = 1,
int insertionCost = 1,
int deletionCost = 1,
int transpositionCost = 1)
{
int w_length = word.Length;
int cw_length = correctedWord.Length;
var d = new KeyValuePair<int, CharMistakeType>[w_length + 1, cw_length + 1];
var result = new List<Mistake>(Math.Max(w_length, cw_length));
if (w_length == 0)
{
for (int i = 0; i < cw_length; i++)
result.Add(new Mistake(i, CharMistakeType.Insertion));
return result;
}
for (int i = 0; i <= w_length; i++)
d[i, 0] = new KeyValuePair<int, CharMistakeType>(i, CharMistakeType.None);
for (int j = 0; j <= cw_length; j++)
d[0, j] = new KeyValuePair<int, CharMistakeType>(j, CharMistakeType.None);
for (int i = 1; i <= w_length; i++)
{
for (int j = 1; j <= cw_length; j++)
{
bool equal = correctedWord[j - 1] == word[i - 1];
int delCost = d[i - 1, j].Key + deletionCost;
int insCost = d[i, j - 1].Key + insertionCost;
int subCost = d[i - 1, j - 1].Key;
if (!equal)
subCost += substitutionCost;
int transCost = int.MaxValue;
if (transposition && i > 1 && j > 1 && word[i - 1] == correctedWord[j - 2] && word[i - 2] == correctedWord[j - 1])
{
transCost = d[i - 2, j - 2].Key;
if (!equal)
transCost += transpositionCost;
}
int min = delCost;
CharMistakeType mistakeType = CharMistakeType.Deletion;
if (insCost < min)
{
min = insCost;
mistakeType = CharMistakeType.Insertion;
}
if (subCost < min)
{
min = subCost;
mistakeType = equal ? CharMistakeType.None : CharMistakeType.Substitution;
}
if (transCost < min)
{
min = transCost;
mistakeType = CharMistakeType.Transposition;
}
d[i, j] = new KeyValuePair<int, CharMistakeType>(min, mistakeType);
}
}
int w_ind = w_length;
int cw_ind = cw_length;
while (w_ind >= 0 && cw_ind >= 0)
{
switch (d[w_ind, cw_ind].Value)
{
case CharMistakeType.None:
w_ind--;
cw_ind--;
break;
case CharMistakeType.Substitution:
result.Add(new Mistake(cw_ind - 1, CharMistakeType.Substitution));
w_ind--;
cw_ind--;
break;
case CharMistakeType.Deletion:
result.Add(new Mistake(cw_ind, CharMistakeType.Deletion));
w_ind--;
break;
case CharMistakeType.Insertion:
result.Add(new Mistake(cw_ind - 1, CharMistakeType.Insertion));
cw_ind--;
break;
case CharMistakeType.Transposition:
result.Add(new Mistake(cw_ind - 2, CharMistakeType.Transposition));
w_ind -= 2;
cw_ind -= 2;
break;
}
}
if (d[w_length, cw_length].Key > result.Count)
{
int delMistakesCount = d[w_length, cw_length].Key - result.Count;
for (int i = 0; i < delMistakesCount; i++)
result.Add(new Mistake(0, CharMistakeType.Deletion));
}
result.Reverse();
return result;
}
public struct Mistake
{
public int Position;
public CharMistakeType Type;
public Mistake(int position, CharMistakeType type)
{
Position = position;
Type = type;
}
public override string ToString()
{
return Position + ", " + Type;
}
}
public enum CharMistakeType
{
None,
Substitution,
Insertion,
Deletion,
Transposition
}
This code is a part of my project: Yandex-Linguistics.NET.
I wrote some tests, and it seems to me that the method is working, but comments and remarks are welcome.
Here is an alternative approach:
A typical method for finding similarity is Levenshtein distance, and there is no doubt a library with code available.
Unfortunately, this requires comparing to every string. You might be able to write a specialized version of the code to short-circuit the calculation if the distance is greater than some threshold, but you would still have to do all the comparisons.
Another idea is to use some variant of trigrams or n-grams. These are sequences of n characters (or n words or n genomic sequences or n whatever). Keep a mapping of trigrams to strings and choose the ones that have the biggest overlap. A typical choice of n is "3", hence the name.
For instance, English would have these trigrams:
Eng
ngl
gli
lis
ish
And England would have:
Eng
ngl
gla
lan
and
Well, 2 out of 7 (or 4 out of 10) match. If this works for you, you can index the trigram/string table and get a faster search.
You can also combine this with Levenshtein to reduce the set of comparisons to those that have some minimum number of n-grams in common.
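A minimal C# sketch of the trigram-overlap idea (the helper names are mine, not from the answer):
static HashSet<string> Trigrams(string s)
{
    var set = new HashSet<string>();
    for (int i = 0; i + 3 <= s.Length; i++)
        set.Add(s.Substring(i, 3));
    return set;
}

static double TrigramOverlap(string a, string b)
{
    var ta = Trigrams(a);
    var tb = Trigrams(b);
    if (ta.Count == 0 || tb.Count == 0) return 0.0;
    int shared = 0;
    foreach (var t in ta)
        if (tb.Contains(t)) shared++;
    return (double)shared / Math.Max(ta.Count, tb.Count); // fraction of trigrams in common
}

// TrigramOverlap("English", "England") == 0.4 (2 shared trigrams out of 5 per word)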
Here's a VB.net implementation:
Public Shared Function LevenshteinDistance(ByVal v1 As String, ByVal v2 As String) As Integer
Dim cost(v1.Length, v2.Length) As Integer
If v1.Length = 0 Then
Return v2.Length 'if string 1 is empty, the number of edits will be the insertion of all characters in string 2
ElseIf v2.Length = 0 Then
Return v1.Length 'if string 2 is empty, the number of edits will be the insertion of all characters in string 1
Else
'setup the base costs for inserting the correct characters
For v1Count As Integer = 0 To v1.Length
cost(v1Count, 0) = v1Count
Next v1Count
For v2Count As Integer = 0 To v2.Length
cost(0, v2Count) = v2Count
Next v2Count
'now work out the cheapest route to having the correct characters
For v1Count As Integer = 1 To v1.Length
For v2Count As Integer = 1 To v2.Length
'the first min term is the cost of editing the character in place (which will be the cost-to-date or the cost-to-date + 1, depending on whether a change is required),
'the second min term is the cost of inserting the correct character into string 1 (cost-to-date + 1), and
'the third min term is the cost of inserting the correct character into string 2 (cost-to-date + 1)
cost(v1Count, v2Count) = Math.Min(
cost(v1Count - 1, v2Count - 1) + If(v1.Chars(v1Count - 1) = v2.Chars(v2Count - 1), 0, 1),
Math.Min(
cost(v1Count - 1, v2Count) + 1,
cost(v1Count, v2Count - 1) + 1
)
)
Next v2Count
Next v1Count
'the final result is the cheapest cost to get the two strings to match, which is the bottom right cell in the matrix
'in the event of strings being equal, this will be the result of zipping diagonally down the matrix (which will be square as the strings are the same length)
Return cost(v1.Length, v2.Length)
End If
End Function