I have user input values in my table.
If a user enters 3.2, it means 3 hours and 20 minutes.
I am showing the total hours that have been input by the user over the entire week.
Inputs are:
Sun: 3.2
Mon: 4.5
Tue: 5.0
Now, 3.2 + 4.5 + 5.0 = 12.70, which would mean 12 hours and 70 minutes.
However, I want the result to be 13.10 (which is 13 hours and 10 minutes) instead of 12.70.
I need the total in a select query that binds my grid along with the rest of the data.
Currently I am using the SUM function in the select query along with the other columns.
How do I do this?
Your input format won't work in general.
You are lucky it does in your example, but in most cases it just won't. For instance, if two people each input 1.50 ("1 hour and 50 minutes"):
1.50 + 1.50 = 3.00
You cannot read that as "three hours", since in reality it is "3 hours and 40 minutes".
As soon as the sum of the "minutes" parts exceeds 0.99, the result is wrong.
But in the lucky cases, you can do some arithmetic (if you want the result in the same "double" format as your input):
var inputList = new List<double>() { 3.2, 4.5, 5.0 };
double total = inputList.Sum();
// Whole hours as read from the summed double...
int baseHours = (int)Math.Floor(total);
// ...versus the whole hours that were actually entered.
int realBaseHours = (int)inputList.Sum(d => Math.Floor(d));
// If they differ, the fractional "minutes" parts summed past .99
// and the encoding became ambiguous.
if (baseHours > realBaseHours)
    throw new Exception("All hell breaks loose!");
// Math.Round guards against floating-point noise
// (12.7 * 100.0 is 1270.0000000000002, not 1270).
int baseMinutes = (int)Math.Round(total * 100.0 - baseHours * 100.0);
int finalHours = baseHours + baseMinutes / 60;
int finalMinutes = baseMinutes % 60;
double result = finalHours + finalMinutes / 100.0; // 13.1
It's not a good idea to save times in a double format, but to answer your question:
get all the times as an array of double values and do some arithmetic:
double[] times = { 3.2, 4.5, 5.0 };
int hours = 0;
int minutes = 0;
foreach (double t in times)
{
    // InvariantCulture guarantees '.' as the decimal separator
    // (requires using System.Globalization).
    string[] values = t.ToString("0.00", CultureInfo.InvariantCulture).Split('.');
    hours += int.Parse(values[0]);
    minutes += int.Parse(values[1]);
}
hours += minutes / 60; // carry whole hours out of the minutes total
minutes %= 60;         // keep only the remaining minutes
It works for any time expressed in this double format.
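If you can change the code rather than only the query, a safer route than this digit juggling is to decode each value into a TimeSpan before summing and only re-encode at the end. A minimal sketch (assuming the same hours.minutes convention as the question; this is an illustration, not the asker's SQL):

```csharp
using System;
using System.Linq;

// 3.2 means 3 hours 20 minutes, per the question's convention.
double[] times = { 3.2, 4.5, 5.0 };

// Decode each "hours.minutes" double into a real TimeSpan, then sum.
TimeSpan total = times
    .Select(t =>
    {
        int h = (int)Math.Floor(t);
        int m = (int)Math.Round((t - h) * 100.0); // .2 => 20 minutes
        return new TimeSpan(h, m, 0);
    })
    .Aggregate(TimeSpan.Zero, (acc, ts) => acc + ts);

// Re-encode into the same hours.minutes convention.
double result = (int)total.TotalHours + total.Minutes / 100.0; // 13.1
Console.WriteLine(result);
```

Summing TimeSpans can never produce an impossible value like "12 hours 70 minutes", which is the whole point of the exercise.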
Related
I am trying to calculate the net income based on a given gross income value. The rules are:
If grossValue is lower than or equal to 1000, no tax is applied.
10% tax is applied to the excess amount.
Example: given a gross value of 3400, we apply 10% tax to the excess, so 10% of 2400 is 240 => then we just return 2160 + 1000.
The problem is this line: double netSalary = exessAmout - (10 / 100 * exessAmout); For some reason the value doesn't change.
public double CalculateNetSalary(double grossSalary)
{
// Taxes dont apply, return grossValue
if(grossSalary <= 1000)
{
return grossSalary;
}
double exessAmout = grossSalary - 1000;
// Apply normal tax
double netSalary = exessAmout - (10 / 100 * exessAmout);
return netSalary + 1000;
}
I expected to receive 3160 given a value of 3400.
Why:
exessAmout = 3400 - 1000 => 2400
netSalary = 2400 - (10% of 2400)
return netSalary + 1000
Using a calculator to solve this I get the right answer, but running the code the value always stays the same.
You are doing integer division. When you divide an int by another int, the result is an int, which means that 10 / 100 is zero. Make them double literals, i.e. 10.0 / 100.0, and it should work.
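A quick way to see the difference between the two divisions (a minimal standalone sketch, not the asker's code):

```csharp
using System;

int a = 10 / 100;             // integer division truncates: a == 0
double b = 10.0 / 100.0;      // floating-point division: b == 0.1
double c = 10 / 100 * 2400.0; // the int division happens first, so c == 0

Console.WriteLine(a); // 0
Console.WriteLine(b); // 0.1
Console.WriteLine(c); // 0
```

Note the third case: assigning to a double does not help, because 10 / 100 is evaluated as an int before the multiplication ever happens.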
First, as jmcilhinney mentioned in his answer, you need to change 10 / 100 to double literals. But are you expecting 3160 as the answer? That expectation breaks here:
// Apply Social Tax
if (grossSalary > 3000)
{
netSalary -= (10.0 / 100.0 * netSalary);
Console.WriteLine(netSalary);
}
You have applied the social tax to netSalary (2160), so for a gross salary of 3400 the output should be 1944 + 1000 = 2944, not 3160.
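Putting the fix together, the corrected method without the social-tax block behaves exactly as the asker expected:

```csharp
using System;

double CalculateNetSalary(double grossSalary)
{
    // Taxes don't apply below the threshold.
    if (grossSalary <= 1000)
        return grossSalary;

    double excessAmount = grossSalary - 1000;
    // Double literals avoid the integer-division trap (10 / 100 == 0).
    double netSalary = excessAmount - (10.0 / 100.0 * excessAmount);
    return netSalary + 1000;
}

Console.WriteLine(CalculateNetSalary(3400)); // 3160
Console.WriteLine(CalculateNetSalary(800));  // 800, below the threshold
```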
I'm trying to use Math.NET Numerics to do interpolation of a DateTime - Value series. I started off with linear interpolation, but am getting some very odd-looking results.
Running this test:
public class script{
public void check_numerics()
{
var ticks = DateTime.Now.Ticks;
Console.WriteLine("Ticks: " + ticks);
var xValues = new double[] { ticks, ticks + 1000, ticks + 2000, ticks + 3000, ticks + 4000, ticks + 5000 };
var yValues = new double[] {0, 1, 2, 3, 4, 5};
var spline = Interpolate.LinearBetweenPoints(xValues, yValues);
var ticks2 = ticks;
for (int i = 0; i < 10; i++)
{
ticks2 += 500;
Console.WriteLine(spline.Interpolate(ticks2));
}
}
}
This gives:
Ticks: 635385235576843379
0.5
1
1.5
2
2.42857142857143 // this should be 2.5
3
3.5
4
4.5
5
Notice that 2.4285 is fairly wrong. At a different time (a different ticks value) a different entry will be "wrong". Is there a "bug" with large x values in Math.NET, or am I expecting too much?
Just confirming the comments above as the maintainer of Math.NET Numerics:
The distance (epsilon) between the closest numbers of this magnitude that can be represented at double precision is 128:
Precision.EpsilonOf(ticks); // 128
This means that if you add or subtract 128/2 - 1 = 63 from this number, you get back exactly the same number:
long ticks = DateTime.Now.Ticks // 635385606515570758
((long)(double)ticks) // 635385606515570816
((long)(63+(double)ticks)) // 635385606515570816
((long)(-63+(double)ticks)) // 635385606515570816
((long)(65+(double)ticks)) // 635385606515570944
((long)(-65+(double)ticks)) // 635385606515570688
The incremental steps of 500 are very close to this epsilon of 128 and effectively get rounded to multiples of 128 (e.g. 512), so it's not surprising that there are artifacts like this.
If you reduce the time precision to milliseconds by dividing the ticks by 10000, as suggested by James, you get an epsilon of 0.0078125, and accurate results even for steps of 1 instead of 500.
Precision.EpsilonOf(ticks/10000); // 0.0078125
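The practical fix, then, is to rebase the x-axis before interpolating: subtract a reference tick and/or scale down to milliseconds so the interpolator sees small, precisely representable doubles. A minimal sketch of just the rebasing (the MathNet Interpolate.LinearBetweenPoints call would then take these xValues as in the question):

```csharp
using System;

long ticks = DateTime.Now.Ticks;

// At this magnitude a double cannot represent every tick: adding 63
// (less than half the 128-tick epsilon) yields the very same double.
double asDouble = (double)ticks;
Console.WriteLine((long)(asDouble + 63) == (long)asDouble); // True

// Rebase before interpolating: subtract the first tick and scale to
// milliseconds, so the x-values are small and exactly representable.
double[] xValues = new double[6];
for (int i = 0; i < 6; i++)
    xValues[i] = (ticks + i * 1000L - ticks) / 10000.0; // 0, 0.1, 0.2 ... ms

Console.WriteLine(xValues[2]); // 0.2
```

The interpolated y-values are unaffected by the rebasing, since linear interpolation only depends on the relative spacing of the x-values.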
I am trying to round numbers to the nearest 10
ex:
6 becomes 10
4 becomes 0
11 becomes 10
14 becomes 10
17 becomes 20
How would I do this?
Math.Round doesn't work with this as far as I know.
For double (float and decimal will require additional casting):
value = Math.Round(value / 10) * 10;
For int:
value = (int) (Math.Round(value / 10.0) * 10);
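One caveat worth knowing: Math.Round uses banker's rounding (MidpointRounding.ToEven) by default, so exact midpoints like 5 and 25 round toward the even multiple of 10. If you want midpoints to always round up in magnitude, pass MidpointRounding.AwayFromZero. A small sketch (RoundToTen is a hypothetical helper name):

```csharp
using System;

Console.WriteLine(RoundToTen(6));  // 10
Console.WriteLine(RoundToTen(4));  // 0
Console.WriteLine(RoundToTen(17)); // 20
Console.WriteLine(RoundToTen(5));  // 10 (the default ToEven mode would give 0)
Console.WriteLine(RoundToTen(25)); // 30 (the default ToEven mode would give 20)

static int RoundToTen(int value) =>
    (int)(Math.Round(value / 10.0, MidpointRounding.AwayFromZero) * 10);
```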
I'm working on a time-decay algorithm for a post system based on Reddit's model here:
http://amix.dk/blog/post/19588
My working port is here:
public class Calculation
{
protected DateTime Epoch = new DateTime(1970, 1, 1);
protected long EpochSeconds(DateTime dt)
{
var ts = dt.Subtract(Convert.ToDateTime("1/1/1970 8:00:00 AM"));
return ((((((ts.Days * 24) + ts.Hours) * 60) + ts.Minutes) * 60) + ts.Seconds);
}
protected int Score(int upVotes, int downVotes)
{
return upVotes - downVotes;
}
public double HotScore(int upVotes, int downVotes, DateTime date)
{
var s = Score(upVotes, downVotes);
var order = Math.Log(Math.Max(Math.Abs(s), 1), 10);
var sign = Math.Sign(s);
var seconds = EpochSeconds(date) - 1134028003;
return Math.Round(order + sign * ((double)seconds / 45000), 7);
}
}
Based on the model output from the link provided, I should see gradual decay at 0-13 hours, and sharp decay after that.
What I'm seeing is very homogeneous decay, and scores much higher than the output from the original code (original code: 3480-3471).
Here is how I'm testing:
Calculation c = new Calculation();
double now = c.HotScore(100, 2, DateTime.Now);
double fivehoursago = c.HotScore(100, 2, DateTime.Now.AddHours(-5));
double tenhoursago = c.HotScore(100, 2, DateTime.Now.AddHours(-10));
double elevenhoursago = c.HotScore(100, 2, DateTime.Now.AddHours(-11));
double twelvehoursago = c.HotScore(100, 2, DateTime.Now.AddHours(-12));
double thirteenhoursago = c.HotScore(100, 2, DateTime.Now.AddHours(-13));
double fiftyhoursago = c.HotScore(100, 2, DateTime.Now.AddHours(-50));
double onehundredhoursago = c.HotScore(100, 2, DateTime.Now.AddHours(-100));
Console.WriteLine(now.ToString());
Console.WriteLine(fivehoursago.ToString());
Console.WriteLine(tenhoursago.ToString());
Console.WriteLine(elevenhoursago.ToString());
Console.WriteLine(twelvehoursago.ToString());
Console.WriteLine(thirteenhoursago.ToString());
Console.WriteLine(fiftyhoursago.ToString());
Console.WriteLine(onehundredhoursago.ToString());
Console.ReadLine();
Output values:
now: 4675.2993816
five hours: 4674.8993816
ten hours: 4674.4993816
eleven hours: 4674.4193816
twelve hours: 4674.3393816
thirteen hours: 4674.2593816
fifty hours: 4671.2993816
one-hundred hours: 4667.2993816
Clearly it's SORT of working, but something is off. It could be related to the lack of a true *nix epoch, or the lack of an analogous microseconds calculation, but something isn't quite right.
Possible reference resources:
http://blogs.msdn.com/b/brada/archive/2004/03/20/93332.aspx
http://codeclimber.net.nz/archive/2007/07/10/convert-a-unix-timestamp-to-a-.net-datetime.aspx
Your primary problem is that the hot algorithm is time dependent. You're calculating the hot score at DateTime.Now, whereas the article was written on 23 Nov 2010 (look at the bottom of the article).
With some trial and error, it seems the data was calculated at approximately 2010-11-23 07:35. Try using that value rather than DateTime.Now, and you should get about the same results as the data in the graph shown.
Mind you, you could make the following improvements to your code:
public class Calculation
{
private static readonly DateTime Epoch = new DateTime(1970, 1, 1);
private double EpochSeconds(DateTime dt)
{
return (dt - Epoch).TotalSeconds;
}
private int Score(int upVotes, int downVotes)
{
return upVotes - downVotes;
}
public double HotScore(int upVotes, int downVotes, DateTime date)
{
int s = Score(upVotes, downVotes);
double order = Math.Log(Math.Max(Math.Abs(s), 1), 10);
int sign = Math.Sign(s);
double seconds = EpochSeconds(date) - 1134028003;
return Math.Round(order + sign * seconds / 45000, 7);
}
}
My results:
3479.0956039
3478.6956039
3478.2956039
3478.2156039
3478.1356039
3478.0556039
3475.0956039
3471.0956039
Changes:
Used the declared Epoch rather than converting "1/1/1970 8:00:00 AM" each time (I think the 08:00 is a mistake).
You can subtract two dates with a - b, which is the same as a.Subtract(b) but more succinct, and it mirrors the original Python code.
A TimeSpan does give you microsecond precision (ticks are the smallest unit and equal 100 nanoseconds).
Also, TotalSeconds gives you the total number of seconds in a time span, so there is no need to recalculate that; its fractional part even preserves the sub-second precision.
By returning double from EpochSeconds, you keep this precision.
Made the data types explicit rather than var to clearly indicate which variable is what (they match the method signatures, so there is no implicit upcasting).
Changed the unneeded protected to private and made Epoch a static readonly field.
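To reproduce the article's numbers, the test harness then only needs a fixed reference date instead of DateTime.Now. A self-contained sketch of the same formula (the 2010-11-23 07:35 timestamp is the estimate from above):

```csharp
using System;

DateTime epoch = new DateTime(1970, 1, 1);
double EpochSeconds(DateTime dt) => (dt - epoch).TotalSeconds;

double HotScore(int upVotes, int downVotes, DateTime date)
{
    int s = upVotes - downVotes;
    double order = Math.Log(Math.Max(Math.Abs(s), 1), 10);
    double seconds = EpochSeconds(date) - 1134028003;
    return Math.Round(order + Math.Sign(s) * seconds / 45000, 7);
}

// Approximate time the article's data was generated.
DateTime reference = new DateTime(2010, 11, 23, 7, 35, 0);
Console.WriteLine(HotScore(100, 2, reference));                // 3479.0956039
Console.WriteLine(HotScore(100, 2, reference.AddHours(-100))); // 3471.0956039
```

Note that 100 hours is exactly 360000 seconds, i.e. 360000 / 45000 = 8 score points, which is why the two outputs differ by exactly 8.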
I need to measure the time taken for a division as accurately as possible, in nanoseconds. Please tell me how to do this.
At present I'm using a low-accuracy method. The problem is that the first calculation shows 87 or 65 milliseconds as the answer, but when the function is called a second time or more, it only shows 0 milliseconds.
The code is:
long startTick = DateTime.Now.Ticks;
double result = (double)22 / 7;
result = System.Math.Round(result, digit);
long endTick = DateTime.Now.Ticks;
long tick = endTick - startTick;
double milliseconds = tick / TimeSpan.TicksPerMillisecond;
time.Text = result + "\nThe division took " + milliseconds + " milliseconds to complete.";
digit is a parameter of the function and is variable. No matter what the value of digit is, the milliseconds value remains 0 after the first call of the function.
Please suggest a more accurate way, in which calling the same function with different decimal digits will give different time intervals, in C# for Windows Phone.
I think the memory flush should be done before and after each calculation, but I don't know how to do this.
I don't personally like this tick method for accuracy. I've tried Stopwatch too, but it's not working for me. Please suggest another method best suited to my case. I want results like 0.0345 or 0.0714 seconds.
Thanks
You are performing integer division on this line:
double milliseconds = tick / TimeSpan.TicksPerMillisecond;
Even though you are declaring it as a double, a long divided by a long will truncate the decimal. You are better off doing:
double milliseconds = (double)tick / TimeSpan.TicksPerMillisecond;
Or better yet, just ditch the tick stuff all together:
DateTime start = DateTime.Now;
double result = (double)22 / 7;
result = System.Math.Round(result, digit);
DateTime end = DateTime.Now;
double milliseconds = (end - start).TotalMilliseconds;
time.Text = result + "\nThe division took " + milliseconds + " milliseconds to complete.";
You won't be able to get micro or nano level precision, but you will get millisecond precision with a margin of error.
You still may get zero, however. You are trying to time how long a single division operation takes, and you could do millions of divisions in less than a second. You may want to do it 1,000,000 times, then divide the result by 1,000,000:
DateTime start = DateTime.Now;
for (var i = 0; i < 1000000; i++)
{
double result = (double)22 / 7;
result = System.Math.Round(result, digit);
}
DateTime end = DateTime.Now;
double milliseconds = (end - start).TotalMilliseconds / 1000000;
This still won't be completely realistic, but should get you an actual number.
Since you have the time in ticks, just increase the resolution by multiplying the denominator:
double microseconds = tick / (TimeSpan.TicksPerMillisecond * 1000.0);
Why are you not using the Stopwatch class to do your time calculation?
It is meant for exactly this kind of measurement.
Here is a link for your reference:
http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.aspx
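For completeness, a minimal sketch of a Stopwatch-based measurement, combined with the looping idea from the other answer (the digit value here is just a sample):

```csharp
using System;
using System.Diagnostics;

const int iterations = 1000000;
int digit = 4; // sample value for the rounding parameter

// Stopwatch uses the high-resolution performance counter when available,
// giving far better resolution than DateTime.Now.
Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
{
    double result = (double)22 / 7;
    result = Math.Round(result, digit);
}
sw.Stop();

// Average elapsed time per division, in milliseconds.
double perCall = sw.Elapsed.TotalMilliseconds / iterations;
Console.WriteLine(perCall);
```

The per-call figure is still an average over the loop, not a measurement of a single division, but that is as close as you can realistically get on this hardware.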
If you want to get the full milliseconds, you could also do something like this:
DateTime dateStartTime = Convert.ToDateTime(DateTime.Now.TimeOfDay.ToString());
// then, where your code ends, do this
DateTime dateEndTime = Convert.ToDateTime(DateTime.Now.TimeOfDay.ToString());
TimeSpan ddateDuration = dateEndTime - dateStartTime;
Then, to display what you are actually looking for in terms of milliseconds, do:
Console.WriteLine(ddateDuration.ToString().Substring(0, 8));
// or some other method that you are using to display the results