Check if an int is 10, 100, 1000, ... in C#

I have a part in my application which needs to do something (add a padding 0 in front of other numbers) when a specified number gains an additional digit, i.e. when it reaches 10, 100, 1000 and so on.
At the moment I use the following logic for that:
public static bool IsNewDigit(this int number)
{
    var numberString = number.ToString();
    return numberString.StartsWith("1")
        && numberString.Substring(1).All(c => c == '0');
}
Then I can do:
if (number.IsNewDigit()) { /* add padding 0 to other numbers */ }
This seems like a "hack" to me, using the string conversion.
Is there something better (maybe even built-in) to do this?
UPDATE:
One example where I need this:
I have an item with the following (simplified) structure:
public class Item
{
    public int Id { get; set; }
    public int ParentId { get; set; }
    public int Position { get; set; }
    public string HierarchicPosition { get; set; }
}
HierarchicPosition is the item's own position (with the padding) prefixed by the parent's HierarchicPosition. E.g. an item which is the 3rd of 12 children of an item at position 2 has 2.03 as its HierarchicPosition. This can as well be something more complicated, like 011.7.003.2.02.
This value is then used for sorting the items very easily in a "tree-view" like structure.
Now I have an IQueryable<Item> and want to add one item as the last child of another item. To avoid having to recreate all HierarchicPosition values, I would like to detect (with the logic in question) whether the new position adds a new digit:
Item newItem = GetNewItem();
IQueryable<Item> items = db.Items;
var maxPosition = items.Where(i => i.ParentId == newItem.ParentId)
                       .Max(i => i.Position);
newItem.Position = maxPosition + 1;
if (newItem.Position.IsNewDigit())
    UpdateAllPositions(items.Where(i => i.ParentId == newItem.ParentId));
else
    newItem.HierarchicPosition = GetHierarchicPosition(newItem);
UPDATE #2:
I query this position string from the DB like:
var items = db.Items.Where(...)
    .OrderBy(i => i.HierarchicPosition)
    .Skip(pageSize * pageNumber)
    .Take(pageSize);
Because of this I cannot use an IComparer (or anything else which sorts in code).
This will return items with HierarchicPosition like (pageSize = 10):
03.04
03.05
04
04.01
04.01.01
04.01.02
04.02
04.02.01
04.03
05
UPDATE #3:
I like the alternative solution with the double values, but I have some more complicated cases, like the following, which I am not sure I can solve with it:
I am building (as one part of many) an image gallery, which has Categories and Images. A category can have a parent and multiple children, and each image belongs to a category (I called them Holder and Assets in my logic - so each image has a holder and each category can have multiple assets). The images are sorted first by the category's position and then by their own position. I do this by combining the HierarchicPosition values like HolderHierarchicPosition#ItemHierarchicPosition. So in a category which has 02.04 as its position and 120 images, the 3rd image would get 02.04#003.
I even have some cases with "three levels" (or maybe more in the future), like 03.1#02#04.
Can I adapt the "double solution" to support such scenarios?
P.S.: I am also open to other solutions for my base problem.

You could check whether the base-10 logarithm of the number is an integer (10 -> 1, 100 -> 2, 1000 -> 3, ...).
This could also simplify your algorithm in general: instead of adding one 0 of padding every time you find something bigger, simply keep track of the maximum number you see, then take length = floor(log10(max)) + 1 and make sure everything is padded to that length. This part does not suffer from the floating-point arithmetic issues that the comparison to an integer does.
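A minimal sketch of that check (the class and method names here are made up for illustration). Rounding the logarithm and verifying via Math.Pow sidesteps cases where Math.Log10(1000) comes back as 2.9999999999999996:

```csharp
using System;

public static class PowerOfTenCheck
{
    // Sketch: true for 1, 10, 100, 1000, ...
    // Rounding the logarithm to the nearest integer, then re-checking with
    // Math.Pow, avoids picking a tolerance for the floating-point comparison.
    public static bool IsPowerOfTen(int number)
    {
        if (number < 1) return false;
        int digits = (int)Math.Round(Math.Log10(number));
        return (long)Math.Round(Math.Pow(10, digits)) == number;
    }
}
```

Note that 1 also passes (log10(1) = 0), so exclude it explicitly if you only care about 10 and above.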

From what you describe, it looks like your HierarchicPosition should maintain an ordering of items, and you run into the problem that when you have the ids 1..9 and add a 10, you get the order 1, 10, 2, 3, 4, 5, 6, ... somewhere and therefore want to pad-left to 01, 02, 03, ..., 10 - correct?
If I'm right, please have a look at this first: https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem
Because what you are trying to do is a workaround to solve the problem in a certain way - but there might be more efficient ways to actually solve it. (You would therefore have done better to ask about your actual problem rather than the solution you are trying to implement.)
See here for a solution using a custom IComparer to sort strings (that are actually numbers) in a natural way: http://www.codeproject.com/Articles/11016/Numeric-String-Sort-in-C
Update regarding your update:
By providing a sorting string like you do, you can insert an element "somewhere" without having ALL subsequent items reindexed, as would be necessary with an integer value. (This seems to be the purpose.)
Instead of building up a complex string, you could use a double value to achieve the very same result:
If you insert an item somewhere between 2 existing items, all you have to do is this.sortingValue = (prior.sortingValue + next.sortingValue) / 2, plus handle the case where you are inserting at the end of the list.
Let's assume you add Elements in the following order:
1 First Element   // pick a double value for sorting - 100.00 for example -> 100.00
2 Next Element    // this is the list end - let's just add another 100.00 -> 200.00
1.1 Child         // this should go "in between": (100 + 200) / 2 = 150.00
1.2 Another       // between 1.1 and 2: (150 + 200) / 2 = 175.00
When you now simply sort on that double field, the order would be:
100.00 -> 1
150.00 -> 1.1
175.00 -> 1.2
200.00 -> 2
Want to add 1.1.1? Great: position = (150.00 + 175.00) / 2;
You could simply multiply everything by 10 whenever your NEW value hits x.5 to ensure you don't run out of decimal places (but you don't have to - having .5, .25, .125, ... does not hurt the sorting):
So, after adding the 1.1.1, which would be 162.5, multiply all by 10:
1000.00 -> 1
1500.00 -> 1.1
1625.00 -> 1.1.1
1750.00 -> 1.2
2000.00 -> 2
So, whenever you move an item around, you only need to recalculate the position of n by looking at n-1 and n+1.
Depending on the expected number of children per entry, you could start with 1000.00, 10000.00, or whatever matches best.
What I didn't take into account: when you want to move "2" to the top, you would need to recalculate all children of "2" to have a value somewhere between the sorting value of "2" and the now-next item... That could cause some headaches :)
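The midpoint calculation above could be sketched as a small helper (the class and method names are made up for illustration; the 100.00 step is the arbitrary starting gap discussed above):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class SortingValue
{
    // Sort key for an item inserted between two existing items.
    public static double Between(double prior, double next) => (prior + next) / 2.0;

    // Sort key for an item appended at the end of the list; the step size
    // is arbitrary - a bigger gap just leaves more room for later inserts.
    public static double AtEnd(IEnumerable<double> existingKeys, double step = 100.0) =>
        existingKeys.Any() ? existingKeys.Max() + step : step;
}
```

With the example values above, SortingValue.Between(150.00, 200.00) yields 175.00.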
The solution with double values has some limitations, but will work for smaller sets of groups. However, you are talking about groups, subgroups, and pictures with counts around 100 - so another solution would be preferable:
First, you should refactor your database: currently you are trying to squeeze a tree into a list (data tables are basically lists).
To really reflect the complex layout of a tree with an infinite depth, you should use 2 tables and implement the composite pattern.
Then you can use a recursive approach to get a category, its subcategory, [...] and finally the elements of that category.
With that, you only need to store the position of each leaf within its current node.
Rearranging leaves will not affect any leaf of another node, or any node.
Rearranging nodes will not affect any subnode or leaf of those nodes.

You could check the sum of the squares of all digits of the input. 10, 100 and 1000 have in common that, if you sum the squares of all their digits, the result is exactly one:
10: 1^2 + 0^2 = 1
100: 1^2 + 0^2 + 0^2 = 1
and so on.
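In a single pass, that check amounts to "the squared digits sum to exactly 1", which holds only when the number is a single 1 followed by zeros. A sketch without string conversion (names are illustrative):

```csharp
using System;

public static class DigitSquares
{
    // True for 1, 10, 100, 1000, ...: the digit-square sum is 1 exactly
    // when there is one digit 1 and every other digit is 0.
    public static bool SquaredDigitSumIsOne(int number)
    {
        int sum = 0;
        for (int n = Math.Abs(number); n > 0; n /= 10)
        {
            int d = n % 10;   // extract the lowest digit
            sum += d * d;
        }
        return sum == 1;
    }
}
```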


Best way to group list of doubles, to add up to a specific value

I have a task that I'm unsure how to approach.
There's a list of doubles, and I need to group them together to add up to a specific value.
Say I have:
14.6666666666666,
14.6666666666666,
2.37499999999999,
1.04166666666665,
1.20833333333334,
1.20833333333334,
13.9583333333333,
1.20833333333334,
3.41666666666714,
3.41666666666714,
1.20833333333334,
1.20833333333334,
14.5416666666666,
1.20833333333335,
1.04166666666666,
And I would like to group them into set values such as 12, 14, 16.
I would like to take the highest value in the list, then group it with smaller ones to equal the closest value above it.
Example:
Take the double 14.6666666666666 and group it with 1.20833333333334 to bring me close to 16, and if there are any more small doubles left in the list, group them with it as well.
Then move on to the next double in the list.
That's literally the "cutting stock problem" (sometimes called the 1-dimensional bin packing problem). There are a number of well-documented solutions.
The only way to get the optimal solution (other than a quantum computer) is to cycle through every combination and select the best outcome.
A quicker way to get an "OK" solution is the "first fit" algorithm: it takes the requested values in the order they come and removes them from the first piece of material that can fulfill the request.
The first-fit algorithm can be slightly improved by pre-ordering the values from largest to smallest and the materials from smallest to largest. You could also use the material that is closest to being completely consumed by the request, instead of the first piece that can fulfill it.
A compromise, but one that requires more code, is a genetic algorithm. This is an oversimplification, but you could use the basic idea of first fit and randomly swap two of the values before each pass. If the efficiency increases, keep the change; if it decreases, go back to the last state. Repeat until a fixed amount of time has passed or until you're happy.
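The first-fit-decreasing idea above could be sketched like this (a rough illustration with made-up names, not tuned for performance - recomputing Sum() per bin is O(bin size) each time):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class BinPacking
{
    // First Fit Decreasing: take values largest-first and drop each into the
    // first bin with room; open a new bin when none fits.
    public static List<List<double>> Pack(IEnumerable<double> values, double capacity)
    {
        var bins = new List<List<double>>();
        foreach (var v in values.OrderByDescending(x => x))
        {
            var bin = bins.FirstOrDefault(b => b.Sum() + v <= capacity);
            if (bin == null)
            {
                bin = new List<double>();
                bins.Add(bin);
            }
            bin.Add(v);
        }
        return bins;
    }
}
```

For the sample data, values like 14.6 and 13.9 each seed a bin, and the small 1.2-ish values fill the remaining headroom toward 16.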
Put the doubles in a list and sort them. Grab the highest value that is less than the target to start. Then loop through from the start of the list adding the values until you reach a point where adding the value will put you over the limit.
var threshold = 16;
List<double> values = new List<double>();
values.Add(14.932034);
// etc...
Sort the list:
values = values.OrderBy(p => p).ToList();
Grab the highest value that is less than your threshold:
// Highest value under threshold
var highestValue = values.Where(x => x < threshold).Max();
Now perform your search and calculations until you reach your solution:
var currentValue = highestValue; // note: remove highestValue from the list first, or it may be counted twice
Console.WriteLine("Starting with: " + currentValue);
foreach (var val in values)
{
    if (currentValue + val <= threshold)
    {
        currentValue = currentValue + val;
        Console.WriteLine(" + " + val.ToString());
    }
    else
    {
        break;
    }
}
Console.WriteLine("Finished with: " + currentValue.ToString());
Console.ReadLine();
Repeat the process for the next value and so on until you've output all of the solutions you want.

Determining % of time above a certain value in a dataset

I have a dataset of voltages (sampled every 500ms). Let's say it looks something like this (in an array):
0ms -> 1.4v
500ms -> 1.3v
1000ms -> 1.2v
1500ms -> 1.5v
2000ms -> 1.3v
2500ms -> 1.3v
3000ms -> 1.2v
3500ms -> 1.3v
Assuming the transition between readings is linear (i.e. at 250ms the reading is 1.35v), how would I go about calculating the total % of time that the reading is at or above 1.3v?
I was initially going to just get the % of values that are >= 1.3v (i.e. 6/8 in the sample array), however this only works if the slope between points is fixed. I am assuming I have to do something like create a line from point 1 to point 2 and find its intercept with the baseline (1.3v), then do the same for point 2 and point 3 and find the distance between both intercepts (say 700ms), then repeat for all points and take it as a % of the total sample time.
EDIT
Maybe I wasn't clear when I initially asked. I need help with identifying how I can perform these calculations, IE: objects/classes that I can use to help me virtually graph these lines and perform these calculations or any 3rd party math packages that might offer these capabilities.
The important part is not to think in data points, but in intervals. Every interval (e.g. 0-500, 500-1000, ...) falls into one of three cases (starting with float variables above and below, both 0):
Trivial: both start and end point are below your threshold - below += 1
Trivial: both start and end point are above your threshold - above += 1
Interesting: one point is below and one above your threshold. Let's call the smaller value min and the higher value max. Now we do above += (max - threshold) / (max - min) and below += (threshold - min) / (max - min), so we linearly distribute this interval between both states.
Finally, normalize the results by dividing both above and below by the number of intervals. This gives you a pair of numbers representing the fractions, i.e. they add up to 1 modulo rounding errors. Of course, multiplying by 100 gives you the percentages.
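The three interval cases above might look like this in code (a sketch assuming equally spaced samples; names are illustrative):

```csharp
using System;

public static class ThresholdFraction
{
    // Fraction of total time with reading >= threshold, linearly
    // interpolating inside each interval between adjacent samples.
    public static double Above(double[] samples, double threshold)
    {
        double above = 0;
        int intervals = samples.Length - 1;
        for (int i = 0; i < intervals; i++)
        {
            double min = Math.Min(samples[i], samples[i + 1]);
            double max = Math.Max(samples[i], samples[i + 1]);
            if (min >= threshold)
                above += 1;                                // trivially above
            else if (max > threshold)
                above += (max - threshold) / (max - min);  // crossing interval
            // else: trivially below, contributes nothing
        }
        return above / intervals;                          // normalize
    }
}
```

For the sample data above (1.4, 1.3, 1.2, 1.5, 1.3, 1.3, 1.2, 1.3 at 500ms spacing) this gives 11/21, roughly 52.4% of the time at or above 1.3v.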
EDIT
@phoog pointed out in the comments that I did not mention an "equal" case. This is by design, as your question already contains that: you chose >= as the comparison, so I definitely meant to use the same comparison here.
If I've understood the problem correctly, you can use a class like this to hold each entry:
public class DataEntry
{
    public DataEntry(int time, double reading)
    {
        Time = time;
        Reading = reading;
    }

    public int Time { get; set; }
    public double Reading { get; set; }
}
And then the following LINQ statement to get the segments above 1.3:
var entries = new List<DataEntry>()
{
    new DataEntry(0, 1.4),
    new DataEntry(500, 1.3),
    new DataEntry(1000, 1.2),
    new DataEntry(1500, 1.5),
    new DataEntry(2000, 1.3),
    new DataEntry(2500, 1.3),
    new DataEntry(3000, 1.2),
    new DataEntry(3500, 1.3)
};

double totalTime = entries
    .OrderBy(e => e.Time)
    .Take(entries.Count - 1)
    .Where((t, i) => t.Reading >= 1.3 && entries[i + 1].Reading >= 1.3)
    .Sum(t => 500);

var perct = totalTime / entries.Max(e => e.Time);
This should give you the 500ms segments that remained above 1.3.

Pick up two numbers from an array so that the sum is a constant

I came across an algorithm problem. Suppose I receive a credit and would like to buy two items from a local store. I would like to buy two items that add up to the entire value of the credit. The input data has three lines.
The first line is the credit, the second line is the total number of items, and the third line lists all the item prices.
Sample data 1:
200
7
150 24 79 50 88 345 3
Which means I have $200 to buy two items, and there are 7 items. I should buy item 1 and item 4, as 200 = 150 + 50.
Sample data 2:
8
8
2 1 9 4 4 56 90 3
Which indicates that I have $8 to pick two items from a total of 8 articles. The answer is item 4 and item 5, because 8 = 4 + 4.
My thought is first to create the array, then pick any item, say item x, and create another array, say "remain", with x removed from the original array.
Subtract the price of x from the credit to get the remnant and check whether "remain" contains the remnant.
Here is my code in C#.
// Read lines from input file and create array price
foreach (string s in price)
{
    int x = Int32.Parse(s);
    string y = (credit - x).ToString();
    index1 = Array.IndexOf(price, s);
    index2 = Array.IndexOf(price, y);
    remain = price.ToList();     // copy of the prices
    remain.RemoveAt(index1);     // remove the current element
    if (remain.Contains(y))
    {
        break;
    }
}
// return something....
// return something....
My two questions:
How is the complexity? I think it is O(n^2).
Any improvement to the algorithm? When I use sample 2, I have trouble getting the correct indices: because there are two "4"s in the array, it always returns the first index, since IndexOf(String) reports the zero-based index of the first occurrence of the specified string.
You can simply sort the array in O(n log n) time. Then, for each element A[i], binary search for S - A[i]; that is O(log n) per element, O(n log n) overall.
EDIT: As pointed out by Heuster, you can solve the 2-SUM problem on the sorted array in linear time by using two pointers (one from the beginning and other from the end).
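The two-pointer variant mentioned in the edit could look like this (a sketch; names are illustrative):

```csharp
using System;

public static class TwoSum
{
    // After sorting, walk pointers inward from both ends: O(n log n) for the
    // sort, O(n) for the scan. Duplicate values (e.g. 8 = 4 + 4) are handled
    // naturally because both pointers index distinct array positions.
    public static (int a, int b)? FindPair(int[] prices, int target)
    {
        var sorted = (int[])prices.Clone();
        Array.Sort(sorted);
        int lo = 0, hi = sorted.Length - 1;
        while (lo < hi)
        {
            int sum = sorted[lo] + sorted[hi];
            if (sum == target) return (sorted[lo], sorted[hi]);
            if (sum < target) lo++;   // need a bigger sum
            else hi--;                // need a smaller sum
        }
        return null;
    }
}
```

On sample data 1 this finds (50, 150) for a credit of 200; on sample data 2 it finds (4, 4) for a credit of 8, avoiding the duplicate-index problem from the question.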
Create a HashSet<int> of the prices, then go through it sequentially. Something like:
HashSet<int> items = new HashSet<int>(itemsList);
int price1 = -1;
int price2 = -1;
foreach (int price in items)
{
    int otherPrice = 200 - price; // 200 = the credit (target sum)
    if (items.Contains(otherPrice))
    {
        // found a match.
        // NB: when otherPrice == price, this only proves the value occurs
        // once; verify it occurs twice in the original input.
        price1 = price;
        price2 = otherPrice;
        break;
    }
}
if (price2 != -1)
{
    // found a match.
    // price1 and price2 contain the values that add up to your target.
    // now remove the items from the HashSet
    items.Remove(price1);
    items.Remove(price2);
}
This is O(n) to create the HashSet. Because lookups in the HashSet are O(1), the foreach loop is O(n).
This problem is called 2-sum. See, for example, http://coderevisited.com/2-sum-problem/
Here is an algorithm with O(N) time complexity and O(N) space:
1. Put all numbers in a hash table.
2. For each number Arr[i], look up Sum - Arr[i] in the hash table in O(1).
3. If found, then (Arr[i], Sum - Arr[i]) is your pair that adds up to Sum.
Note: the only failing case is when Arr[i] = Sum/2; then you can get a false positive, but you can always check in O(N) whether Sum/2 occurs twice in the array.
I know I am posting this a year and a half later, but I just happened to come across this problem and wanted to add input.
If there exists a solution, then you know that both values in the solution must both be less than the target sum.
Perform a binary search in the array of values, searching for the target sum (which may or may not be there).
The binary search will end either by finding the sum, or at the closest value less than the sum. That is your starting high value when searching through the array with the previously mentioned solutions. Any value above this new starting high value cannot be part of the solution, as it is more than the target value.
At this point, you have eliminated a chunk of data in log(n) time, that would otherwise be eliminated in O(n) time.
Again, this is an optimization that may only be worth implementing if the data set calls for it.

How to transform one list to another semi-aggregated list via LINQ?

I'm a LINQ beginner, so I'm just looking for someone to let me know if the following is possible with LINQ and, if so, some pointers on how it could be achieved.
I want to transform one financial time series list into another, where the second list will be the same length as or shorter than the first (usually shorter, i.e. each of its elements aggregates information from one or more elements of the first list). How the first list collapses into the second depends on the data in the first list. The algorithm needs to track a calculation that gets reset whenever a new element is added to the second list. It may be easier to describe via an example:
List 1 (time ordered from beginning to end series of closing prices and volume):
{P=7,V=1}, {P=10,V=2}, {P=10,V=1}, {P=10,V=3}, {P=11,V=5}, {P=12,V=1}, {P=13,V=2}, {P=17,V=1}, {P=15,V=4}, {P=14,V=10}, {P=14,V=8}, {P=10,V=2}, {P=9,V=3}, {P=8,V=1}
List 2 (series of open/close price ranges and summation of volume for such range period using these 2 param settings to transform list 1 to list 2: param 1: Price Range Step Size = 3, param 2: Price Range Reversal Step Size = 6):
{O=7,C=10,V=1+2+1}, {O=10,C=13,V=3+5+1+2}, {O=13,C=16,V=0}, {O=16,C=10,V=1+4+10+8+2}, {O=10,C=8,V=3+1}
In list 2, I explicitly show the summation of the V attributes from list 1, but V is just a long, so it would be a single number in reality. So how this works: the opening time series price is 7. Then we look for the first price whose delta from 7 is 3 (param 1). In list 1, as we move through the list, the next step is an upwards move to 10, and thus we've established an "up trend". So now we build our first element in list 2 with Open=7, Close=10 and sum up the volume of all bars from list 1 used to reach this first step. The next element's starting point is 10. To build another up step, we need to advance another 3 upwards (param 1), or reverse and go downwards 6 (param 2). With the data from list 1, we reach 13 first, so that builds the second element of list 2, again summing all the V attributes used to get there. We continue this process until list 1 is fully processed.
Note the gap jump that happens in list 1. We still want to create a step element {O=13,C=16,V=0}. The V of 0 simply states that a range move went through this step with volume 0 (no actual prices from list 1 occurred here - the price jumped above it, but we want to build the full set of steps leading up to it).
Second to last entry in list 2 represents the reversal from up to down.
The final entry in list 2 just uses the final Close from list 1, even though it hasn't really finished establishing a full range step yet.
Thanks for any pointers on how this could be done via LINQ, if at all.
My first thought is: why try to use LINQ on this? It seems like a better situation for building a new enumerable with the yield keyword, partially processing and then spitting out an answer.
Something along the lines of this:
public struct PricePoint
{
    public ulong price;
    public ulong volume;
}

public struct RangePoint
{
    public ulong open;
    public ulong close;
    public ulong volume;
}

public static IEnumerable<RangePoint> calculateRanges(IEnumerable<PricePoint> pricePoints)
{
    if (pricePoints.Count() > 0)
    {
        ulong open = pricePoints.First().price;
        ulong volume = pricePoints.First().volume;
        foreach (PricePoint pricePoint in pricePoints.Skip(1))
        {
            volume += pricePoint.volume;
            if (pricePoint.price > open)
            {
                if ((pricePoint.price - open) >= STEP)
                {
                    // We have established an up-trend.
                    RangePoint rangePoint;
                    rangePoint.open = open;
                    rangePoint.close = pricePoint.price;
                    rangePoint.volume = volume;
                    open = pricePoint.price;
                    volume = 0;
                    yield return rangePoint;
                }
            }
            else
            {
                if ((open - pricePoint.price) >= REVERSAL_STEP)
                {
                    // We have established a reversal.
                    RangePoint rangePoint;
                    rangePoint.open = open;
                    rangePoint.close = pricePoint.price;
                    rangePoint.volume = volume;
                    open = pricePoint.price;
                    volume = 0;
                    yield return rangePoint;
                }
            }
        }
        RangePoint lastPoint;
        lastPoint.open = open;
        lastPoint.close = pricePoints.Last().price;
        lastPoint.volume = volume;
        yield return lastPoint;
    }
}
This isn't complete yet. For instance, it doesn't handle gapping, and there is an unhandled edge case where the last data point might already have been consumed and yet still produce a "lastPoint". But it should be enough to get started.

Ranges with Linq and dictionaries

I've created a Range type:
public class Range<T> where T : IComparable<T>
{
    public Range(T min, T max) : this(min, max, false) { }

    public Range(T min, T max, bool upperbound)
    {
        Min = min;
        Max = max;
        Upperbound = upperbound;
    }

    public bool Upperbound { get; private set; }
    public T Min { get; private set; }
    public T Max { get; private set; }

    public bool Between(T value)
    {
        return Upperbound
            ? (Min.CompareTo(value) < 0) && (value.CompareTo(Max) <= 0)
            : (Min.CompareTo(value) <= 0) && (value.CompareTo(Max) < 0);
    }
}
I want to use this as a key in a dictionary, to allow me to search based upon a range. And yes, ranges can overlap or there might be gaps, which is part of the design. It looks simple, but I want the search to be even a bit easier! I want to compare a range with a value of its type T, so I can use myRange == 10 instead of myRange.Between(10).
How? :-)
(I don't want to break my head over this. I'll probably find the answer, but maybe I'm re-inventing the wheel.) The things I want to do with this dictionary? Well, in general I will use the range itself as a key. A range will be used for more than just a dictionary. I'm dealing with lots of data that have a min/max range, and I need to group them together based on the same min/max values. The value in the dictionary is a list of the products that all have the same range. And by using a range, I can quickly find the proper list to add a product to (or create a new entry if no list is found).
Once I have the products grouped by range, I can start searching for values that fit within specific ranges. Basically, this could be a LINQ query over the dictionary for all entries where the provided value is between the key's Min and Max.
I am actually dealing with two such lists. In one, the range is upper-bound and in the other lower-bound. There could be more of these kinds of lists, where I first need to collect data based on their range and then find specific items within them.
Could I use a List instead? Probably, but then I would not have the distinct grouping of my data based on the range itself. A list of lists then? Possible, but then I'm considering the dictionary again. :) Range examples: I have multiple items where the range is 0 to 100. Other items where the range is 0 to 1, 1 to 2, 2 to 3, etc. More items where the range is 0 to 4, 4 to 6, 6 to 8, etc. I even have items with ranges 0 to 0.5, 0.5 to 1, 1 to 1.5, etc. So first I will group all items based on their ranges: all items with range 1 to 2 go together in one list, while all items with range 0 to 100 go in a different list. I've calculated that I'll be dealing with about 50 different ranges, which can overlap each other. However, I have over 25000 items which need to be grouped like this.
Next, I get a value from another source, for example 1.12, which I need to find. This time I use LINQ to search through the dictionary for all lists of items where 1.12 falls within the range of the key. Thus I'd find the ranges 1 to 2, 1 to 1.5 and even 0 to 100. Behind these ranges are lists of items which I need to process for this value. Then I move on to the next value, for about 4000 different values. And preferably everything should finish within 5 seconds.
Using a type as a dictionary key is a matter of overriding GetHashCode and Equals. Basically, you'd create a hash based on the minimum and maximum values and Upperbound. Typically you call GetHashCode on each component and combine them, e.g.:
public override int GetHashCode()
{
    int result = 17;
    result = result * 31 + Min.GetHashCode();
    result = result * 31 + Max.GetHashCode();
    result = result * 31 + (Upperbound ? 1 : 0); // parentheses needed: ?: binds looser than +
    return result;
}
You'd also need the equality test.
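A sketch of what the combined overrides might look like (the class is named RangeKey here purely to avoid clashing with the Range<T> above, and the equality semantics are an assumption based on the description):

```csharp
using System;

public class RangeKey<T> : IEquatable<RangeKey<T>> where T : IComparable<T>
{
    public RangeKey(T min, T max, bool upperbound = false)
    {
        Min = min;
        Max = max;
        Upperbound = upperbound;
    }

    public T Min { get; }
    public T Max { get; }
    public bool Upperbound { get; }

    // Two keys are equal when both bounds and the bound-direction flag match.
    public bool Equals(RangeKey<T> other) =>
        other != null
        && Upperbound == other.Upperbound
        && Min.CompareTo(other.Min) == 0
        && Max.CompareTo(other.Max) == 0;

    public override bool Equals(object obj) => Equals(obj as RangeKey<T>);

    // Must be consistent with Equals: equal keys produce equal hashes.
    public override int GetHashCode()
    {
        int result = 17;
        result = result * 31 + Min.GetHashCode();
        result = result * 31 + Max.GetHashCode();
        result = result * 31 + (Upperbound ? 1 : 0);
        return result;
    }
}
```

With both overrides in place, two independently constructed keys with the same bounds hit the same dictionary entry.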
I'm not sure what you mean by "to allow me to do a search based upon a range", though. Could you give some sample code showing how you'd like to use this ability? I'm not entirely sure it'll fit within the normal dictionary approach...
I suggest you don't overload the == operator to do a containment test with it, though. A range isn't equal to a value within the range, so code using that wouldn't be very intuitive.
(I'd personally rename Between to Contains as well.)
