Imagine I have a list of several-hundred unique names, e.g.
["john", "maria", "joseph", "richard", "samantha", "isaac", ...]
What's the best way I can store these to provide a fast lookup-time by matching against a pattern?
I only need to match against "masks" (I can't think of a better word for it).
Basically, I get letters and their positions, e.g. ____a__ (where _ represents an unknown letter). I then need to find all values in the data structure that match that mask; in this case it would return "richard", but it should also be possible for multiple values to be returned.
Seems like a lot of work for "hundreds" of names. Doing a linear search on a list of hundreds of names will be very fast. Now, if you're talking hundreds of thousands or millions ...
In any case, you can speed this up using a dictionary. You can pre-process the data into a dictionary whose keys are a combination of character and position, and values are the words that contain that character at that position. For example, if you were to index "john" and "joseph", you would have:
{'j',0},{"john","jospeh"}
{'o',1},{"john","joseph"}
{'h',2},{"john"}
{'n',3},{"john}
{'s',2},{"joseph"}
{'e',3},{"joseph"}
{'p',4},{"joseph"}
{'h',5},{"joseph"}
Now let's say you're given the mask "jo...." (the dots are "don't care"). You'd turn that into two keys:
{'j',0}
{'o',1}
You query the dictionary for the list of words that has 'j' at index 0. Then you query the dictionary for the list of words that has 'o' at index 1. Then you intersect the lists to get your result.
It's a simple inverted index, but on character rather than on word.
The lists themselves will cost you a total of O(m * n) space, where m is the total number of words and n is the average word length in characters. At maximum, the number of dictionary entries will be 26*max_word_length. In practice, it will probably be much less.
If you make the values a HashSet<string> rather than List<string>, intersection will go much faster. It'll increase your memory footprint, though.
That should be faster than linear search if your masks contain only a few characters. The longer the mask, the more lists you'll have to intersect.
For the dictionary key, I'd recommend:
public struct Key
{
    public char KeyChar;
    public int Pos;

    public override int GetHashCode()
    {
        // The parentheses matter: '+' binds tighter than '<<' in C#,
        // so the shift must be grouped explicitly.
        return (int)KeyChar + (Pos << 16);
    }

    public override bool Equals(object obj)
    {
        if (!(obj is Key)) return false;
        var other = (Key)obj;
        return KeyChar == other.KeyChar && Pos == other.Pos;
    }
}
So your dictionary would be Dictionary<Key, HashSet<string>>.
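To make this concrete, here's a minimal sketch of building and querying such an index. The MaskIndex name is mine, as is the assumption that a match must be exactly as long as the mask:

using System.Collections.Generic;
using System.Linq;

class MaskIndex
{
    private readonly Dictionary<Key, HashSet<string>> index = new Dictionary<Key, HashSet<string>>();

    public MaskIndex(IEnumerable<string> words)
    {
        foreach (var word in words)
            for (int i = 0; i < word.Length; i++)
            {
                var key = new Key { KeyChar = word[i], Pos = i };
                if (!index.TryGetValue(key, out var set))
                    index[key] = set = new HashSet<string>();
                set.Add(word);
            }
    }

    // The mask uses '_' for unknown letters, e.g. "____a__".
    public IEnumerable<string> Match(string mask)
    {
        HashSet<string> result = null;
        for (int i = 0; i < mask.Length; i++)
        {
            if (mask[i] == '_') continue; // unknown positions contribute nothing
            if (!index.TryGetValue(new Key { KeyChar = mask[i], Pos = i }, out var set))
                return Enumerable.Empty<string>(); // no word has this letter here
            if (result == null) result = new HashSet<string>(set);
            else result.IntersectWith(set); // intersect the candidate sets
        }
        // Assumption: a match must have exactly the mask's length.
        // (A mask with no known letters at all would need separate handling.)
        return (result ?? new HashSet<string>()).Where(w => w.Length == mask.Length);
    }
}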
If the longest word has m letters, then you can keep m lists l[1], ..., l[m] such that the words in each list l[i] are sorted lexicographically starting from the i-th letter in every word (shorter words will not appear in that list). Then, if your query is ...ac., just perform a binary search in list l[4].
This will cost you O(mn) in memory and takes O(m n log n) time to build, but will give you O(log n) query time, which is the fastest you can get.
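Here's a rough sketch of that scheme, assuming the known letters form a single contiguous block. Note it uses 0-indexed positions, so the query ...ac. becomes Match(3, "ac"):

using System;
using System.Collections.Generic;
using System.Linq;

class SuffixLists
{
    private readonly List<string>[] lists;

    public SuffixLists(IReadOnlyList<string> words, int maxLen)
    {
        lists = new List<string>[maxLen];
        for (int i = 0; i < maxLen; i++)
        {
            int pos = i; // 0-indexed letter position
            lists[pos] = words.Where(w => w.Length > pos)
                              .OrderBy(w => w.Substring(pos), StringComparer.Ordinal)
                              .ToList();
        }
    }

    // All words whose letters at positions start .. start+block.Length-1 equal 'block'.
    public IEnumerable<string> Match(int start, string block)
    {
        var list = lists[start];
        int lo = 0, hi = list.Count;
        while (lo < hi) // binary search for the first suffix >= block
        {
            int mid = (lo + hi) / 2;
            if (string.CompareOrdinal(list[mid], start, block, 0, block.Length) < 0)
                lo = mid + 1;
            else
                hi = mid;
        }
        // Walk forward while the suffix still starts with 'block'.
        for (int k = lo; k < list.Count &&
             string.CompareOrdinal(list[k], start, block, 0, block.Length) == 0; k++)
            yield return list[k];
    }
}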
EDIT
Good news: I have recently stumbled upon range trees, which would allow you to perform this kind of query somewhat efficiently, namely in O(log^d(n) + k) time and requiring O(n log^(d-1)(n)) space, where d is the number of dimensions (here, the maximum word length) and k is the number of results.
They are not straightforward to implement, in the sense that you need to build a binary search tree sorting the words by the first letter, then build a binary search tree for every internal node which stores the words in the subtree of that node sorted by the second letter, and so on.
On the other hand, this would allow you to perform a wider range of queries, namely you can look for contiguous intervals of letters, i.e. a pattern like ..[a-c].[b-f].
Before you read my explanation: I need to optimize the processing time for comparing two huge C# lists, index by index, in a nested loop.
It's a .NET Core app, which I am creating with C#, of course.
In my algorithm I have to create a very long list holding ranges of integers, like this:
internal class Global
{
    public string ChromosomeName { get; set; }
    public int start { get; set; }
    public int end { get; set; }
    public string Cluster { get; set; }
    public string Data { get; set; }
}
var globals = new List<Global>(); // somewhere in my method
Now, this list will be very large. This is my main list, so it's named 'globals'; its values will look like this:
index 0 = start=1, end=400 ....
index 1 = start=401, end=800 ....
index (last) = start=45090000, end=45090400 ...
These are just rough estimates, so that you understand it's going to be a huge list.
Now, in my algorithm, what I actually have to do is this:
I take one text file, read it, and store its data in another list with exactly the same properties as shown in the code above.
Now I have two lists: the globals list and the other list I read from the file.
Both of them are very large.
I have to compare them index by index in a nested loop.
The outer loop iterates my globals list and the inner loop iterates the other list (the one I read from the file).
After I finish the nested loops once, I read another file, create another list, and compare that list with the globals list in the same manner.
So there will be one globals list, compared index by index in a nested loop with around 10 more lists, all of them nearly as large as the globals list itself.
Below is pseudocode shown for the nested foreach loops.
foreach (var item in globals)
{
    var value = 0;
    foreach (var item2 in otherHugeList)
    {
        compareMethod(item, item2);
        // Below is the actual comparison I am doing, in case you want to know:
        // I am finding the overlap between two ranges.
        // value += Math.Max(0, Math.Min(range1.end, EndList[i]) - Math.Max(range1.start, StartList[i]) + 1);
    }
}
What is the fastest way to do this? Right now it takes hours; I get frustrated and cancel the process because I don't know how long it's going to take, so I can't even get results for smaller files.
I need to know the fastest possible way to do this. Should I use a library compatible with .NET Core, or somehow use multithreading? I am not that good with threading concepts, though.
P.S.: I have used Parallel.ForEach and its effect on performance is negligible.
If you need to make element-by-element comparisons of two lists with 10^6 items each, there are 10^12 comparisons to make. That leaves you no hope of finishing in a sane amount of time, so the key to solving this problem is to drastically reduce the number of comparisons.
The exact approach to making the reduction depends on the kind of comparison that you are running, so let's use overlap computation from your post as an example.
You know that there is no overlap between ranges R and Q when one of the statements below is true:
Upper bound of R is below the lower bound of Q, or
Lower bound of R is above the upper bound of Q.
This wouldn't help if your ranges appear on the list in random order. However, if you sort your ranges on the lower bound, and resolve ties by the upper bound, you will be able to use binary search to find the relevant portion of the list for each range you compare, i.e. the elements for which the overlap is possible.
Assuming that there is little overlap among ranges on the same list, this will reduce the number of comparisons from roughly a million per element to well under a hundred per element, resulting in a 1000-fold increase in performance.
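Here's a sketch of the binary-search variant, reusing the Global class from the question. It assumes (as the comment below confirms) that ranges on a list never overlap one another, so a list sorted by start is also sorted by end:

using System;
using System.Collections.Generic;

static class OverlapSearch
{
    // 'sorted' must be ordered by start; with no self-overlap it is ordered by end too.
    public static long OverlapWith(Global r, List<Global> sorted)
    {
        int lo = 0, hi = sorted.Count;
        while (lo < hi) // binary search for the first range that does not end before r begins
        {
            int mid = (lo + hi) / 2;
            if (sorted[mid].end < r.start) lo = mid + 1;
            else hi = mid;
        }
        long total = 0;
        // Only ranges starting at or before r.end can still overlap r.
        for (int i = lo; i < sorted.Count && sorted[i].start <= r.end; i++)
            total += Math.Max(0, Math.Min(r.end, sorted[i].end)
                               - Math.Max(r.start, sorted[i].start) + 1);
        return total;
    }
}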
None of my lists will have self-overlapping ranges (comment)
Then you can use a variation of the merge algorithm by sorting both range lists, and then iterating them in a single loop. Set indexes into two arrays to zero, then walk both lists one step at a time. If the current range on the global list is below the start level of the current range on the comparison list, move on to the next element of the global list; otherwise, move on to the next element of the comparison list. The two indexes will "chase" each other until you reach the end of both lists after 2M increments.
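And a sketch of the merge-style walk itself, under the same sortedness and no-self-overlap assumptions; it accumulates the total overlap in a single O(n + m) pass:

using System;
using System.Collections.Generic;

static class MergeWalk
{
    // Both lists must be sorted by 'start' and be free of self-overlap.
    public static long TotalOverlap(List<Global> globals, List<Global> others)
    {
        long total = 0;
        int i = 0, j = 0;
        while (i < globals.Count && j < others.Count)
        {
            var g = globals[i];
            var o = others[j];
            // Contributes zero when the two current ranges are disjoint.
            total += Math.Max(0, Math.Min(g.end, o.end) - Math.Max(g.start, o.start) + 1);
            // Advance whichever range finishes first; the other may still
            // overlap the next range on the advanced list.
            if (g.end < o.end) i++; else j++;
        }
        return total;
    }
}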
I'm currently trying to process a number of data feeds that I have no control over, where I am using Regular Expressions in C# to extract information.
The originator of the data feed is extracting basic row data from their database (like a product name, price, etc), and then formatting that data within rows of English text. For each row, some of the text is repeated static text and some is the dynamically generated text from the database.
e.g.
Panasonic TV with FREE Blu-Ray Player
Sony TV with FREE DVD Player + Box Office DVD
Kenwood Hi-Fi Unit with $20 Amazon MP3 Voucher
So the format in this instance is: PRODUCT with FREEGIFT.
PRODUCT and FREEGIFT are dynamic parts of each row, and the "with" text is static. Each feed has about 2000 rows.
Creating a Regular Expression to extract the dynamic parts is trivial.
The problem is that the marketing bods in control of the data feed keep on changing the structure of the static text, usually once a fortnight, so this week I might have:
Brand new Panasonic TV and a FREE Blu-Ray Player if you order today
Brand new Sony TV and a FREE DVD Player + Box Office DVD if you order today
Brand new Kenwood Hi-Fi unit and a $20 Amazon MP3 Voucher if you order today
And next week it will probably be something different, so I have to keep modifying my Regular Expressions...
How would you handle this?
Is there an algorithm to determine static and variable text within repeating rows of strings? If so, what would be the best way to use the output of such an algorithm to programmatically create a dynamic Regular Expression?
Thanks for any help or advice.
This code isn't perfect, it certainly isn't efficient, and it's very likely to be too late to help you, but it does work. If given a set of strings, it will return the common content above a certain length.
However, as others have mentioned, an algorithm can only give you an approximation, as you could hit a bad batch where all products have the same initial word, and then the code would accidentally identify that content as static. It may also produce mismatches when dynamic content shares values with static content, but as the size of samples you feed into it grows, the chance of error will shrink.
I'd recommend running this on a subset of your data (20000 rows would be a bad idea!) with some sort of extra sanity checking (max # of static elements etc)
Final caveat: it may do a perfect job, but even if it does, how do you know which item is the PRODUCT and which one is the FREEGIFT?
The algorithm
If all strings in the set start with the same character, add that character to the "current match" set, then remove the leading character from all strings
If not, remove the first character from all strings whose first x (minimum match length) characters aren't contained in all the other strings
As soon as a mismatch is reached (case 2), yield the current match if it meets the length requirement
Continue until all strings are exhausted
The implementation
private static IEnumerable<string> FindCommonContent(string[] strings, int minimumMatchLength)
{
    string sharedContent = "";
    while (strings.All(x => x.Length > 0))
    {
        var item1FirstCharacter = strings[0][0];
        if (strings.All(x => x[0] == item1FirstCharacter))
        {
            sharedContent += item1FirstCharacter;
            for (int index = 0; index < strings.Length; index++)
                strings[index] = strings[index].Substring(1);
            continue;
        }
        if (sharedContent.Length >= minimumMatchLength)
            yield return sharedContent;
        sharedContent = "";
        // If the first minimumMatchLength characters of a string aren't contained
        // in all the other strings, consume the first character of that string.
        for (int index = 0; index < strings.Length; index++)
        {
            string testBlock = strings[index].Substring(0, Math.Min(minimumMatchLength, strings[index].Length));
            if (!strings.All(x => x.Contains(testBlock)))
                strings[index] = strings[index].Substring(1);
        }
    }
    if (sharedContent.Length >= minimumMatchLength)
        yield return sharedContent;
}
Output
Set 1 (from your example):
FindCommonContent(strings, 4);
=> "with "
Set 2 (from your example):
FindCommonContent(strings, 4);
=> "Brand new ", "and a ", "if you order today"
Building the regex
This should be as simple as:
"{.*}" + string.Join("{.*}", FindCommonContent(strings, 4)) + "{.*}";
=> "^{.*}Brand new {.*}and a {.*}if you order today{.*}$"
Although you could modify the algorithm to return information about where the matches are (between or outside the static content), this will be fine, as you know some will match zero-length strings anyway.
I think it would be possible with an algorithm, but the time it would take you to code it, versus simply updating the Regular Expression, might not be worth it.
You could, however, make your change process faster. If, instead of keeping your regex string inside your application, you put it in a text file somewhere, you wouldn't have to recompile and redeploy everything every time there's a change; you could simply edit the text file.
Depending on your project size and implementation, this could save you a generous amount of time.
I need help with an algorithm. I have randomly generated numbers with 6 digits, like:
123654
109431
There are approximately 1 million of them saved in a file, line by line. I have to filter them according to the rule I try to describe below.
Take a number and compare it to all the others digit by digit: for each digit position, increase that digit by one, and delete every number in the file that matches the result. Let me show it using numbers.
Our number is: 123456
Increase the first digit with 1, so the number becomes: 223456. Delete all the 223456s from the file.
Increase the second digit by 1, the number becomes: 133456. Delete all 133456s from the file, and so on...
I can do it just as I described, but I need it to be "FAST".
So can anyone help me on this?
Thanks.
First of all, since there are around 1 million numbers, you had better run the algorithm in RAM rather than on disk; that is, first load the contents into an array, then modify the array, then write the results back to the file.
I would suggest the following straightforward algorithm. Precalculate all the target numbers, in this case 223456, 133456, 124456, 123556, 123466, 123457. Now pass over the array, and if a number is NOT one of these, write it to another array. Alternatively, if it is one of these numbers, delete it (recommended if your data structure has O(1) removal).
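For illustration, here is a sketch of the variant precalculation for a single source number. The Variants name is mine, and I've assumed a 9 digit is simply skipped, since the question doesn't say whether it should wrap around to 0:

using System.Collections.Generic;

static class Precalc
{
    // The six "one digit bumped by one" targets of a six-digit number.
    public static IEnumerable<int> Variants(int number)
    {
        int power = 1;
        for (int digit = 0; digit < 6; digit++, power *= 10)
        {
            int d = (number / power) % 10;
            if (d < 9)                       // assumption: a 9 is skipped, not wrapped to 0
                yield return number + power; // adding 10^digit bumps exactly that digit
        }
    }
}

Variants(123456) yields 123457, 123466, 123556, 124456, 133456 and 223456; put those in a HashSet<int> and one pass over the array keeps every number not in the set.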
This algorithm will keep a lot of numbers around in memory, but it will process the file one number at a time so you don't actually need to read it all in at once. You only need to supply an IEnumerable<int> for it to operate on.
public static IEnumerable<int> FilterInts(IEnumerable<int> ints)
{
    var removed = new HashSet<int>();
    foreach (var i in ints)
    {
        var iStr = i.ToString("000000").ToCharArray();
        for (int j = 0; j < iStr.Length; j++)
        {
            var c = iStr[j];
            // Increment this digit (wrapping 9 to 0) and record the variant.
            if (c == '9')
                iStr[j] = '0';
            else
                iStr[j] = (char)(c + 1);
            removed.Add(int.Parse(new string(iStr)));
            iStr[j] = c; // restore the digit before moving on
        }
        if (!removed.Contains(i))
            yield return i;
    }
}
You can use this method to create an IEnumerable<int> from the file:
public static IEnumerable<int> ReadIntsFrom(string path)
{
    using (var reader = File.OpenText(path))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
            yield return int.Parse(line);
    }
}
Take all the numbers from the file into an ArrayList, then:
use as many threads as there are digits;
increment the first digit of the number in the first thread, the second digit in the second thread, and so on, then compare the result with the rest of the numbers.
It would be fast, as the work undergoes parallel processing...
All the suggestions (so far) require six comparisons per input line, which is not necessary. The numbers are coming in as strings, so use string comparisons.
Start with @Armen Tsirunyan's idea:
Precalculate all the target numbers,
in this case 223456, 133456, 124456,
123556, 123466, 123457.
But instead of single comparisons, make that into a string:
string arg = "223456 133456 124456 123556 123466 123457";
Then read through the input (either from file or in memory). Pseudocode:
foreach (string s in theBigListOfNumbers)
    if (arg.indexOf(s) == -1)
        print s;
This is just one comparison per input line, no dictionaries, maps, iterators, etc.
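In C#, the whole filter might look like this sketch (the input file name is hypothetical):

using System;
using System.IO;

class Filter
{
    static void Main()
    {
        // Precomputed targets for the source number 123456, space-separated.
        string arg = "223456 133456 124456 123556 123466 123457";
        foreach (string s in File.ReadLines("numbers.txt")) // hypothetical input file
            if (arg.IndexOf(s, StringComparison.Ordinal) == -1)
                Console.WriteLine(s);
    }
}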
Edited to add:
In x86 instruction set processors (not just the Intel brand), substring searches like this are very fast. To search for a character within a string, for example, is just one machine instruction.
I'll have to ask others to weigh in on alternate architectures.
For starters, I would just read all the numbers into an array.
When you are finally done, rewrite the file.
It seems like the rule you're describing is: for the target number abcdef, you want to find all numbers that contain a+1, b+1, c+1, d+1, e+1, or f+1 in the appropriate place. You can do this in O(n) by looping over the lines in the file and comparing each of the six digits to the corresponding digit in the target number; if no digit matches, write the number to an output file.
This sounds like a potential case for a multidimensional array, and possibly also unsafe C# code, so that you can use pointer math to iterate through such a large quantity of numbers.
I would have to dig into it further, but I would also probably use a Dictionary for non-linear lookups if you are comparing numbers that aren't in sequence.
How about this: process the numbers one by one, storing them in two hash tables, NumbersOK and NumbersNotOK.
Take one number.
If it's not in NumbersNotOK, place it in NumbersOK.
Compute its single-digit-increment variants and put them in NumbersNotOK.
Remove any members of NumbersOK that match one of the variants.
Repeat from step 1 until the end of the file.
Save NumbersOK to the file.
This way you'll pass the list just once. Hash tables are made for exactly this kind of purpose and will be very fast (no expensive comparison methods).
This algorithm is not complete, as it doesn't handle repeated numbers, but that can be dealt with by some tweaking...
Read all your numbers from the file and store them in a map where the number is the key and a boolean is the value signifying that the value hasn't been deleted. (True means exists, false means deleted).
Then iterate through your keys. For each key, set the map to false for the values you would be deleting from the list.
Iterate through your list one more time and get all the keys where the value is true. This is the list of remaining numbers.
public List<int> FilterNumbers(string fileName)
{
    Dictionary<int, bool> numbers = new Dictionary<int, bool>();
    using (StreamReader sr = File.OpenText(fileName))
    {
        string s;
        while ((s = sr.ReadLine()) != null)
        {
            int number = Int32.Parse(s);
            numbers[number] = true; // tolerate duplicate lines
        }
    }
    // Snapshot the keys so we can flip values while iterating.
    foreach (int number in new List<int>(numbers.Keys))
    {
        if (numbers[number])
        {
            if (numbers.ContainsKey(100000 + number))
                numbers[100000 + number] = false;
            if (numbers.ContainsKey(10000 + number))
                numbers[10000 + number] = false;
            if (numbers.ContainsKey(1000 + number))
                numbers[1000 + number] = false;
            if (numbers.ContainsKey(100 + number))
                numbers[100 + number] = false;
            if (numbers.ContainsKey(10 + number))
                numbers[10 + number] = false;
            if (numbers.ContainsKey(1 + number))
                numbers[1 + number] = false;
        }
    }
    List<int> validNumbers = new List<int>();
    foreach (int number in numbers.Keys)
    {
        if (numbers[number]) // keep only numbers never marked as deleted
            validNumbers.Add(number);
    }
    return validNumbers;
}
This may need to be tested, as I don't have a C# compiler on this computer and I'm a bit rusty. The algorithm takes a bit of memory, but it runs in linear time.
** EDIT **
This runs into problems whenever one of the digits is 9. I'll update the code later.
Still sounds like a homework question... The fastest sort on a million numbers will be n log(n), that is, 1000000 * log(1000000) = 6 * 1000000 (with base-10 logs), which is the same as comparing the 6 target numbers to each of the million numbers. So a direct comparison will be faster than sort-and-remove, because after sorting you still have to compare to remove. Unless, of course, my calculations have entirely missed the target.
Something else comes to mind: when you pick up a number, read it as hex rather than base 10; then maybe some bitwise operators could help somehow.
Still thinking about what can be done with this. Will update if it works.
EDIT: currently thinking along the lines of Gray code. 123456 (our original number) and 223456 or 133456 will be off by only one digit, and a Gray code converter would catch that fast. It's late at night here, so if someone else finds this useful and can give a solution...
I have a list of input words separated by commas. I want to sort these words alphabetically and by length. How can I do this without using the built-in sorting functions?
Good question!! Sorting is probably the most important concept to learn as an up-and-coming computer scientist.
There are actually lots of different algorithms for sorting a list.
When you break all of those algorithms down, the most fundamental operation is the comparison of two items in the list, defining their "natural order".
For example, in order to sort a list of integers, I'd need a function that tells me, given any two integers X and Y whether X is less than, equal to, or greater than Y.
For your strings, you'll need the same thing: a function that tells you which of the strings has the "lesser" or "greater" value, or whether they're equal.
Traditionally, these "comparator" functions look something like this:
int CompareStrings(String a, String b) {
    if (a < b)
        return -1;
    else if (a > b)
        return 1;
    else
        return 0;
}
I've left out some of the details (like, how do you compute whether a is less than or greater than b? clue: iterate through the characters), but that's the basic skeleton of any comparison function. It returns a value less than zero if the first element is smaller and a value greater than zero if the first element is greater, returning zero if the elements have equal value.
But what does that have to do with sorting?
A sorting routine will call that function for pairs of elements in your list, using the result of the function to figure out how to rearrange the items into a sorted list. The comparison function defines the "natural order", and the "sorting algorithm" defines the logic for calling and responding to the results of the comparison function.
Each algorithm is like a big-picture strategy for guaranteeing that ANY input will be correctly sorted. Here are a few of the algorithms that you'll probably want to know about:
Bubble Sort:
Iterate through the list, calling the comparison function for all adjacent pairs of elements. Whenever you get a result greater than zero (meaning that the first element is larger than the second one), swap the two values. Then move on to the next pair. When you get to the end of the list, if you didn't have to swap ANY pairs, then congratulations, the list is sorted! If you DID have to perform any swaps, go back to the beginning and start over. Repeat this process until there are no more swaps.
NOTE: this is usually not a very efficient way to sort a list, because in the worst cases, it might require you to scan the whole list as many as N times, for a list with N elements.
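Here's a minimal bubble sort over strings, using an ordinal comparison as the comparator; swap in whatever comparison function defines your natural order:

static class BubbleSorting
{
    public static void BubbleSort(string[] items)
    {
        bool swapped;
        do
        {
            swapped = false;
            for (int i = 0; i < items.Length - 1; i++)
            {
                // The comparison function says this pair is out of order: swap it.
                if (string.CompareOrdinal(items[i], items[i + 1]) > 0)
                {
                    var tmp = items[i];
                    items[i] = items[i + 1];
                    items[i + 1] = tmp;
                    swapped = true;
                }
            }
        } while (swapped); // a full pass with no swaps means the list is sorted
    }
}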
Merge Sort:
This is one of the most popular divide-and-conquer algorithms for sorting a list. The basic idea is that, if you have two already-sorted lists, it's easy to merge them. Just start from the beginning of each list and remove the first element of whichever list has the smallest starting value. Repeat this process until you've consumed all the items from both lists, and then you're done!
1 4 8 10
2 5 7 9
------------ becomes ------------>
1 2 4 5 7 8 9 10
But what if you don't have two sorted lists? What if you have just one list, and its elements are in random order?
That's the clever thing about merge sort. You can break any single list into smaller pieces, each of which is either an unsorted list, a sorted list, or a single element (which, if you think about it, is actually a sorted list of length 1).
So the first step in a merge sort algorithm is to divide your overall list into smaller and smaller sublists. At the tiniest levels (where each list has only one or two elements), they're very easy to sort. And once sorted, it's easy to merge any two adjacent sorted lists into a larger sorted list containing all the elements of the two sublists.
NOTE: This algorithm is much better than the bubble sort method, described above, in terms of its worst-case-scenario efficiency. I won't go into a detailed explanation (which involves some fairly trivial math, but would take some time to explain), but the quick reason for the increased efficiency is that this algorithm breaks its problem into ideal-sized chunks and then merges the results of those chunks. The bubble sort algorithm tackles the whole thing at once, so it doesn't get the benefit of "divide-and-conquer".
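Here's a compact merge sort sketch in the same spirit; it isn't tuned (each call allocates new arrays), but it shows the divide-and-merge shape:

using System.Linq;

static class MergeSorting
{
    public static string[] MergeSort(string[] items)
    {
        if (items.Length <= 1) return items; // a single element is already sorted
        var left = MergeSort(items.Take(items.Length / 2).ToArray());
        var right = MergeSort(items.Skip(items.Length / 2).ToArray());

        var merged = new string[items.Length];
        int l = 0, r = 0, m = 0;
        while (l < left.Length && r < right.Length) // take the smaller head each time
            merged[m++] = string.CompareOrdinal(left[l], right[r]) <= 0
                        ? left[l++] : right[r++];
        while (l < left.Length) merged[m++] = left[l++]; // drain whichever list remains
        while (r < right.Length) merged[m++] = right[r++];
        return merged;
    }
}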
Those are just two algorithms for sorting a list, but there are a lot of other interesting techniques, each with its own advantages and disadvantages: Quick Sort, Radix Sort, Selection Sort, Heap Sort, Shell Sort, and Bucket Sort.
The internet is overflowing with interesting information about sorting. Here's a good place to start:
http://en.wikipedia.org/wiki/Sorting_algorithms
Create a console application and paste this into the Program.cs as the body of the class.
public static void Main(string[] args)
{
    string[] strList = "a,b,c,d,e,f,a,a,b".Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
    foreach (string s in Sort(strList))
        Console.WriteLine(s);
}

public static string[] Sort(string[] strList)
{
    return strList.OrderBy(i => i).ToArray();
}
Notice that I do use a built in method, OrderBy. As other answers point out there are many different sort algorithms you could implement there and I think my code snippet does everything for you except the actual sort algorithm.
Some C# specific sorting tutorials
There is an entire area of study built around sorting algorithms. You may want to choose a simple one and implement it.
Though it won't be the most performant, it shouldn't take you too long to implement a bubble sort.
If you don't want to use built-in functions, you have to create one yourself. I would recommend bubble sort or some similar algorithm. Bubble sort is not an efficient algorithm, but it gets the work done and is easy to understand.
You will find much good reading on Wikipedia.
I would recommend reading the Wikipedia article on quicksort.
Still not sure why you don't want to use the built in sort?
Bubble sort damages the brain.
Insertion sort is at least as simple to understand and code, and is actually useful in practice (for very small data sets, and nearly-sorted data). It works like this:
Suppose that the first n items are already in order (you can start with n = 1, since obviously one thing on its own is "in the correct order").
Take the (n+1)th item in your array. Call this the "pivot". Starting with the nth item and working down:
- if it is bigger than the pivot, move it one space to the right (to create a "gap" to the left of it).
- otherwise, leave it in place, put the "pivot" one space to the right of it (that is, in the "gap" if you moved anything, or where it started if you moved nothing), and stop.
Now the first n+1 items in the array are in order, because the pivot is to the right of everything smaller than it, and to the left of everything bigger than it. Since you started with n items in order, that's progress.
Repeat, with n increasing by 1 at each step, until you've processed the whole list.
This corresponds to one way that you might physically put a series of folders into a filing cabinet in order: put one in; then put another one into its correct position by pushing everything that belongs after it over by one space to make room; repeat until finished. Nobody ever sorts physical objects by bubble sort, so it's a mystery to me why it's considered "simple".
All that's left now is to work out, given two strings, whether the first is greater than the second. I'm not quite sure what you mean by "alphabetical and length": alphabetical order is done by comparing one character at a time from each string. If they're not the same, that's your order. If they are the same, look at the next character, unless you run out of characters in one of the strings, in which case that string is the "smaller" one.
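Putting the pieces together, here's a sketch of insertion sort with one possible reading of "alphabetical and length" as the comparator (shorter words first, ties broken alphabetically; adjust CompareWords if you meant a different order):

static class WordSort
{
    // One guess at "alphabetical and length": length first, then alphabet.
    static int CompareWords(string a, string b)
    {
        return a.Length != b.Length ? a.Length - b.Length : string.CompareOrdinal(a, b);
    }

    public static void InsertionSort(string[] items)
    {
        for (int n = 1; n < items.Length; n++)
        {
            string pivot = items[n]; // the (n+1)th item from the description above
            int i = n - 1;
            while (i >= 0 && CompareWords(items[i], pivot) > 0)
            {
                items[i + 1] = items[i]; // shift bigger items right to open a gap
                i--;
            }
            items[i + 1] = pivot; // drop the pivot into the gap
        }
    }
}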
Use NSort
I ran across the NSort library a couple of years ago in the book Windows Developer Power Tools. The NSort library implements a number of sorting algorithms. The main advantage to using something like NSort over writing your own sort is that it is already tested and optimized.
Posting link to fast string sort code in C#:
http://www.codeproject.com/KB/cs/fast_string_sort.aspx
Another point:
The suggested comparator above is not recommended for non-English languages:
int CompareStrings(String a, String b) {
    if (a < b)
        return -1;
    else if (a > b)
        return 1;
    else
        return 0;
}
Checkout this link for non-English language sort:
http://msdn.microsoft.com/en-us/goglobal/bb688122
And as mentioned, use NSort for really gigantic arrays that don't fit in memory.