I'm trying to make a game, something like Don't Starve, where I have a variable called hunger. I want this to decrease over time, and if it reaches zero then the player's health starts decreasing as well. I would also like it to deplete faster if you are sprinting (sprinting already works). The main question is simply how to decrease the hunger variable faster and faster over time based on the last time you've eaten.
Something like this:
As a variant:
Assuming your game has some kind of manager thread which monitors passing time and from which all global events are launched:
Figure out a formula for your hunger-decrease function and its derivative - it can be discrete. Store not only the current value of "hunger" but also a timestamp of the last time the player ate. Here you could also add some kind of parameter, "food quality", that affects the decay speed. The difference between the current time and that timestamp gives you the interval over which to integrate your hunger-decrease function, and you subtract the result from "hunger".
The check steps might be irregular, in which case the derivative and the time passed alone are not enough... e.g. if there was a time skip because of sleeping (though such time skips might be emulated by increasing the pace of the monitor thread). You also have to store the time at which the last step was done.
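For illustration, here is a minimal C# sketch of that idea; the class and constant names (HungerSystem, BaseDecay, Acceleration, SprintMultiplier) are my own assumptions rather than anything from the question:

using System;

// Minimal sketch: hunger decays faster the longer it has been since the last meal.
public class HungerSystem
{
    public double Hunger = 100.0;
    public double Health = 100.0;
    private DateTime _lastMeal = DateTime.UtcNow;   // timestamp of the last time food was eaten
    private DateTime _lastTick = DateTime.UtcNow;   // time of the last update step

    private const double BaseDecay = 0.05;          // hunger lost per second right after eating
    private const double Acceleration = 0.001;      // extra decay per second since the last meal
    private const double SprintMultiplier = 2.0;    // sprinting drains hunger faster

    public void Eat(double foodQuality)
    {
        Hunger = Math.Min(100.0, Hunger + foodQuality);
        _lastMeal = DateTime.UtcNow;                // reset the "last eaten" timestamp
    }

    // Called from the manager thread / game loop; tolerates irregular step sizes.
    public void Tick(bool isSprinting)
    {
        DateTime now = DateTime.UtcNow;
        double dt = (now - _lastTick).TotalSeconds;        // possibly irregular step length
        double starved = (now - _lastMeal).TotalSeconds;   // time since the last meal

        double rate = BaseDecay + Acceleration * starved;  // decay rate grows over time
        if (isSprinting) rate *= SprintMultiplier;

        Hunger = Math.Max(0.0, Hunger - rate * dt);
        if (Hunger <= 0.0)
            Health -= 1.0 * dt;                            // starvation damage once hunger hits zero

        _lastTick = now;
    }
}

The linear growth of the decay rate is just the simplest choice; any increasing function of the time since the last meal would fit the same structure.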
This question is about System.Threading.Thread.Sleep(int). I know there is no overload that takes a decimal value, but I really need to work with decimals.
I have a device which takes 20.37 milliseconds to turn by 1 degree, so I need to put the code to sleep for an appropriate multiple of 20.37 (2 degrees = 20.37*2, etc.). Since the Thread class has no decimal Sleep method, how can I do this?
It does not work that way. Sleep guarantees that the thread sits idle for x time, but not that it won't stay idle for longer. The end of the sleep period means that the thread is available for the scheduler to run, but the scheduler may choose to run other threads/processes at that moment.
Get the initial instant, find the final instant, and calculate the current turn from the time that has passed. Also, do not forget to check how precise the time functions are.
Real-time programming has its own particularities, so I'd advise you to look for more info on the topic before trying to get something to work. It can be a pretty extensive subject (multiprocessing OS vs. monoprocessing, priorities, etc.).
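As a rough sketch of that approach (StartTurningDevice/StopTurningDevice are placeholder device commands, and the 20.37 ms/degree figure comes from the question):

using System.Diagnostics;
using System.Threading;

// Derive the angle from measured elapsed time instead of trusting Sleep's accuracy.
const double msPerDegree = 20.37;
double targetDegrees = 2;

var sw = Stopwatch.StartNew();
StartTurningDevice();                                           // placeholder: begin the turn

while (sw.Elapsed.TotalMilliseconds < targetDegrees * msPerDegree)
{
    Thread.Sleep(1);   // just yields; the actual wake-up time is up to the scheduler
}

StopTurningDevice();                                            // placeholder: end the turn
double degreesTurned = sw.Elapsed.TotalMilliseconds / msPerDegree;  // what actually happened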
Right, as pointed out in the comments, Thread.Sleep isn't 100% accurate. However, you can get it to (in theory) wait for 20.37 milliseconds by converting the milliseconds to ticks, making a new TimeSpan from them, and calling the overload that takes a TimeSpan, as follows:
Thread.Sleep(new TimeSpan(203700));
//203700 is 20.37 * TimeSpan.TicksPerMillisecond (which is 10,000)
Again, this is probably not going to be 100% accurate (as Thread.Sleep only guarantees AT LEAST that amount of time). But if that's accurate enough for you, it'll be fine.
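Applied to the question's 20.37 ms/degree device, a hedged usage example might look like this (degrees is an assumed variable):

int degrees = 2;
long ticks = (long)Math.Round(20.37 * degrees * TimeSpan.TicksPerMillisecond);
Thread.Sleep(TimeSpan.FromTicks(ticks)); // still only a lower bound on the actual wait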
You can simply divide the integer - I just figured that out.
I needed the thread to sleep for less than a millisecond, so I just divided that time by an integer. You can either define a constant or just type in:
System.Threading.Thread.Sleep(time / 100);
Or whatever number you want.
Alternatively, as mentioned, you can do it like this:
int thisIsTheNumberYouDivideTheTimeBy = 100;
Thread.Sleep(time / thisIsTheNumberYouDivideTheTimeBy);
It's actually quite simple. Hope that helped.
By the way, instead of
System.Threading.Thread.Sleep(x);
you can just type
Thread.Sleep(x);
as long as you have written
using System.Threading;
at the top of the file.
I had the same problem. As a workaround, I substitute the float value but convert it to an int value when passing it. The code rounds it off for me and the thread sleeps for that long. As I said, it's a workaround, and I'm not claiming it's accurate.
You can use a little bit of math as a workaround.
Let's assume that you don't want to be extremely precise,
but still need a float-precise sleep on average.
Thread.Sleep(new Random().Next(20, 22)); //the upper bound is exclusive, so this sleeps 20 or 21 ms
This should give you an average sleep of ~20.5 ms. Use your imagination now.
TotalSleeps / tries should equal the wanted value, but for a single sleep interval this will not be true.
Don't use new Random() every time - create one instance beforehand.
ETC = "Estimated Time of Completion"
I'm counting the time it takes to run through a loop and showing the user some numbers that tell him/her approximately how much time the full process will take. I feel like this is a common thing that everyone does on occasion, and I would like to know if you have any guidelines that you follow.
Here's an example I'm using at the moment:
int itemsLeft; //This holds the number of items to run through.
double timeLeft;
TimeSpan TsTimeLeft;
List<double> avrage = new List<double>();
double milliseconds; //This holds the time each loop takes to complete (accumulated in seconds by the profiler timer), reset every loop.
//The background worker calls this event once for each item. The total number
//of items are in the hundreds for this particular application and every loop takes
//roughly one second.
private void backgroundWorker1_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
//An item has been completed!
itemsLeft--;
avrage.Add(milliseconds);
//Get an average time per item and multiply it by the number of items left.
timeLeft = avrage.Sum() / avrage.Count * itemsLeft;
TsTimeLeft = TimeSpan.FromSeconds(timeLeft);
this.Text = String.Format("ETC: {0}:{1:D2}:{2:D2} ({3:N2}s/file)",
TsTimeLeft.Hours,
TsTimeLeft.Minutes,
TsTimeLeft.Seconds,
avrage.Sum() / avrage.Count);
//Only using the last 20-30 logs in the calculation to prevent an unnecessarily long List<>.
if (avrage.Count > 30)
avrage.RemoveRange(0, 10);
milliseconds = 0;
}
//this.profiler.Interval = 10;
private void profiler_Tick(object sender, EventArgs e)
{
milliseconds += 0.01; //10 ms per tick = 0.01 s; despite its name, this variable accumulates seconds
}
As I am a programmer at the very start of my career, I'm curious to see what you would do in this situation. My main concern is the fact that I recalculate and update the UI on every loop - is this bad practice?
Are there any dos/don'ts when it comes to estimations like this? Are there any preferred ways of doing it, e.g. update every second, update every ten logs, or calculate and update the UI separately? Also, when would an ETA/ETC be a good/bad idea?
The real problem with estimating the time taken by a process is quantifying the workload. Once you can quantify that, you can make a better estimate.
Examples of good estimates
File system I/O or network transfer. Whether or not the file system has bad performance is something you can find out in advance, you can quantify the total number of bytes to be processed, and you can measure the speed. Once you have these, and once you can monitor how many bytes you have transferred, you get a good estimate. Random factors may affect your estimate (i.e. an application starting up in the meantime), but you still get a significant value.
Encryption on large streams, for the reasons above. Even if you are computing an MD5 hash, you always know how many blocks have been processed, how many are left to process, and the total.
Item synchronization. This is a little trickier. If you can assume that the per-unit workload is constant, or you can make a good estimate of the time required to process an item when the variance is low or insignificant, then you can make another good estimate of the process. Take email synchronization: if you don't know the byte size of the messages (otherwise you fall into case 1), but common practice tells you that the majority of emails have roughly the same size, then you can use the mean time taken to download/upload the emails processed so far to estimate the time taken to process a single email. This won't work in 100% of cases and is subject to error, but you still see the progress bar progressing on a large account.
In general the rule is that you can make a good estimate of ETC/ETA (the ETA is actually the date and time the operation is expected to complete) if you have a homogeneous process whose numbers you know. Homogeneity guarantees that the time to process one work item is comparable to the others, i.e. the time taken to process previous items can be used to estimate future ones. The numbers are what you use to make the actual calculation, for example as sketched below.
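For example, a minimal sketch for the byte-transfer case (bytesDone, totalBytes and stopwatch are assumed to be maintained by the transfer loop, and bytesDone must be greater than zero):

double fractionDone = (double)bytesDone / totalBytes;
double remainingSeconds = stopwatch.Elapsed.TotalSeconds * (1 - fractionDone) / fractionDone;
TimeSpan etc = TimeSpan.FromSeconds(remainingSeconds);      // ETC: time still to go
DateTime eta = DateTime.Now.AddSeconds(remainingSeconds);   // ETA: expected completion instant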
Examples of bad estimates
Operations on a number of files of unknown size. This time you know only how many files you want to process (e.g. to download), but you don't know their size in advance. If the file sizes have high variance, you run into trouble. Having downloaded half of the files, when these happened to be the smallest and sum up to 10% of the total bytes, can you say you are halfway? No! You just see the progress bar grow quickly to 50% and then crawl.
Heterogeneous processes, e.g. Windows installations. As pointed out by @HansPassant, Windows installations provide a worse-than-bad estimate. Installing Windows software involves several processes, including file copying (this can be estimated), registry modifications (usually never estimated), and execution of transactional code. The real problem is the last one. Transactional processes involving execution of custom installer code are discussed below.
Execution of generic code. This can never be estimated. A code fragment involves conditional statements, and executing these takes different paths depending on conditions external to the code. This means, for example, that a program behaves differently depending on whether you have a printer installed or not, whether you have a local or a domain account, etc.
Conclusions
Estimating the duration of a software process is neither an impossible task nor an exact/deterministic one.
It's not impossible because, even in the case of code fragments, you can either find a model for your code (take LU factorization as an example; its cost can be estimated), or you might redesign your code by splitting it into an estimation phase - where you first determine the branch conditions - and an execution phase, where all the pre-determined branches are taken. I said might because this task is in practice impossible: most code determines branches as effects of previous conditions, meaning that estimating a branch actually involves running the code. A chicken-and-egg circle.
It's not a deterministic process. Computer systems, especially multitasking ones, are affected by a number of random factors that can impact your estimated process. You will never get a correct estimate before running your process. At most, you can detect external factors and re-estimate your process. The gap between your estimate and the real duration of the process converges to zero as you approach the end of the process (lim[x->N] |est(x) - real(x)| = 0, where N is the process duration).
If your user interface is so obscure that you have to explain that ETC doesn't mean Etcetera then you are doing it wrong. Every user understands what a progress bar does, don't help.
Nothing is quite as annoying as an inaccurate progress bar. Particularly ones that promise a quick finish but then don't deliver. I'd give the progress bar displayed by any installer on Windows as a good example of one that is fundamentally broken. Just not a shining example of an implementation that you should pursue.
Such a progress bar is broken because it is utterly impossible to guess up front how long it is going to take to install a program. File systems have very unpredictable performance. This is a very common problem with estimating execution time. Better UI models are the spinning dots you see in a video player and in many programs in Windows 8, or the marquee style supported by the common ProgressBar control - just feedback that says "I'm not dead, working on it". Even the hour-glass cursor is better than a bad estimate. If you have something to report beyond a technicality that no user is really interested in, then don't hesitate to display it, like the number of files you've processed or the number of kilobytes you've downloaded. The actual value of the number isn't that useful; seeing the rate at which it increases is the interesting tidbit.
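For reference, switching a WinForms progress bar to the marquee style and reporting a raw counter could look like the following sketch (progressBar1, statusLabel, filesDone and kilobytesDone are assumed names):

progressBar1.Style = ProgressBarStyle.Marquee;   // indeterminate "I'm not dead, working on it"
statusLabel.Text = String.Format("Processed {0} files ({1:N0} KB)", filesDone, kilobytesDone);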
I'm facing a problem and I'm having trouble deciding on an approach to solve it. The problem is the following:
Given N phone calls to be made, schedule them in a way that maximizes the number of calls placed.
Known Info:
Number of phone calls pending
Number callers (people who will talk on the phone)
Type of phone call (Reminder, billing, negotiation, etc...)
Estimated duration per phone call type (reminder: 1 min, billing: 3 min, negotiation: 15 min, etc...)
Ideal date for a given call
"Minimum" date of the a given call (can't happen before...)
"Maximum" date of the a given call (can't happen after...)
A day only have 8 hours
Rules:
Phone calls cannot be made before the "Minimum" or after the "Maximum" date
A reminder call placed awards 1 point; a reminder call missed, -2 points
A billing call placed awards 6 points; a billing call missed, -9 points
A negotiation call placed awards 20 points; a negotiation call missed, -25 points
A phone call to John must be placed by the first person who ever called him. Notice that it does not HAVE TO be, but that call will earn extra points if it is...
I know a little about A.I. and I can recognize this as a problem that fits the class, but I just don't know which approach to take... should I use neural networks? Graph search?
PS: this is not an academic question. This is a real-world problem that I'm facing.
PS2: The point system is still being created... the points sampled here are not the real ones...
PS3: The resulting algorithm can be executed several times (batch-job style) or it can run online, depending on performance...
PS4: My contract states that I will charge the client based on: (amount of calls I place) + (ratio * the duration of the call), but there's a clause about quality of service, and only placing reminder calls is not good for me, because even when reminded, people still forget to attend their appointments... which reduces the "quality" of the service I provide... I don't know the exact numbers yet
This does not seem like a problem for AI.
If it were me, I would create a set of rules, ordered by priority, and then start filling in the callers' schedules.
Maybe one of the rules is to assign the shortest-duration call types first (to satisfy the "maximum number of calls made" criterion).
This is sounding more and more like a knapsack problem, where you would substitute call duration and call points for weight and price; a rough greedy sketch follows.
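The sketch below assumes a hypothetical Call type with Points, EstimatedMinutes, MinDate and MaxDate members, and that pendingCalls and today already exist; note that a greedy pick by value density only approximates the true knapsack optimum:

using System.Collections.Generic;
using System.Linq;

// Fill one 8-hour day with the calls that are currently allowed and score best per minute.
var ordered = pendingCalls
    .Where(c => today >= c.MinDate && today <= c.MaxDate)
    .OrderByDescending(c => (double)c.Points / c.EstimatedMinutes);   // value density

double minutesLeft = 8 * 60;
var todaysSchedule = new List<Call>();
foreach (var call in ordered)
{
    if (call.EstimatedMinutes <= minutesLeft)
    {
        todaysSchedule.Add(call);
        minutesLeft -= call.EstimatedMinutes;
    }
}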
This is just a very basic answer, but you could try to "brute force" an optimum solution:
Use the Combinatorics library (it's in NuGet too) to generate every permutation of calls for a given person to make in a given time period (looking one week into the future, for instance).
For each permutation, group the calls into 8-hour chunks by estimated duration, and assign a date to them.
Iterate through the chunks - if you get to a call too early, discard that permutation. Otherwise add or subtract points based on whether the call was made before the end date. Store the total score as the score for that permutation.
Choose the permutation with the highest score.
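The scoring step (points 2-3 above) might look roughly like this; the Call type and its field names are assumptions, and the permutations themselves would come from the Combinatorics package or a hand-rolled generator:

using System;
using System.Collections.Generic;

// Walk one permutation through consecutive 8-hour days and compute its score.
static int ScorePermutation(IList<Call> permutation, DateTime startOfFirstDay)
{
    int score = 0;
    DateTime day = startOfFirstDay.Date;
    double minutesUsedToday = 0;

    foreach (var call in permutation)
    {
        if (minutesUsedToday + call.EstimatedMinutes > 8 * 60)   // day is full: move to the next day
        {
            day = day.AddDays(1);
            minutesUsedToday = 0;
        }

        if (day < call.MinDate.Date) return int.MinValue;        // too early: discard this permutation
        score += day <= call.MaxDate.Date ? call.PointsIfPlaced   // placed in time: award points
                                          : -call.PenaltyIfMissed; // past the maximum date: penalty
        minutesUsedToday += call.EstimatedMinutes;
    }
    return score;
}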
I have a code snippet like this:
while(true)
{
myStopWatch.Start();
DoMyJob();
myStopWatch.Stop();
FPS = 1000 / myStopWatch.Elapsed.TotalMilliseconds;
myStopWatch.Reset();
}
which works pretty well; I get an FPS of around 100 (+/-2). But sometimes I just want to focus on the performance of a certain part of DoMyJob() and add some feedback, so I split DoMyJob() into DoMyJob_1() and DoMyJob_2(): the first part is mainly calculation, the second part visualizes the calculation on the Form and updates some indicators.
So the code becomes:
while(true)
{
myStopWatch.Start();
DoMyJob_1();
myStopWatch.Stop();
FPS = 1000 / myStopWatch.Elapsed.TotalMilliseconds;
myStopWatch.Reset();
DoMyJob_2();
}
I did not expect anything to mess up the FPS, since DoMyJob_1 is almost the same as the original DoMyJob. But oops... it messed up. The FPS went frenzy, bouncing between 40 and up to 600 in a seemingly random manner. I removed DoMyJob_2() and the FPS went back to a steady 100.
As I examined the FPS sequence more closely, I found out the FPS values are not random at all - they fall into 4 or 5 distinct ranges; in my code, 30-50, 100-120, 300-360, 560-600, etc. Not a single number falls into the gaps. Then I tried the code on another laptop and the issue still exists, just with different ranges. I know Stopwatch uses the Win32 API. Is it buggy because I run the code on a 64-bit system?
BTW: what is the best way to measure FPS in a .NET Windows Forms app (when the FPS is 100 or more)?
If DoMyJob_2 takes a variable amount of time, then you have a slice of time from every second that is not being taken into account. You could use your method to calculate an average time to execute DoMyJob_1, but not to determine frames per second. For example:
loop 1:
task 1: 5ms
reported fps: 1000/5ms = 200
task 2: 15ms
real fps: 1000/20ms = 50
loop 2:
task 1: 5ms
reported fps: 1000/5ms = 200
task 2: 25ms
real fps: 1000/30ms = 33
...
So I'm not sure that's what you are seeing, but it seems possible. What you are describing (fluctuating reported fps) might actually make more sense if the total length of the job tends to be stable, but the way you split the job makes each part variable.
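A sketch of that idea: time the whole loop for FPS and each part separately for profiling (running, DoMyJob_1 and DoMyJob_2 come from the question; Stopwatch.Restart needs .NET 4 or later):

using System.Diagnostics;

var loopWatch = new Stopwatch();
var part1Watch = new Stopwatch();
while (running)
{
    loopWatch.Restart();

    part1Watch.Restart();
    DoMyJob_1();
    part1Watch.Stop();

    DoMyJob_2();

    loopWatch.Stop();
    double fps = 1000.0 / loopWatch.Elapsed.TotalMilliseconds;   // rate of the whole frame
    double part1Ms = part1Watch.Elapsed.TotalMilliseconds;       // cost of the calculation part only
}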
Ideally I would like to have something similar to the Stopwatch class but with an extra property called Speed which would determine how quickly the timer's minutes advance. I am not quite sure how to go about implementing this.
Edit
Since people don't quite seem to understand why I want to do this: consider playing a soccer game, or any sports game. The halves are measured in minutes, but the time-frame in which the game is played is significantly shorter, i.e. a 45-minute half is played in about 2.5 minutes.
Subclass it, call through to the superclass methods to do their usual work, but multiply all the return values by Speed as appropriate.
I would use the Stopwatch as it is, then just multiply the result, for example:
var Speed = 1.2; //Time progresses 20% faster in this example
var s = new Stopwatch();
s.Start();
//do things
s.Stop();
var parallelUniverseMilliseconds = s.ElapsedMilliseconds * Speed;
The reason your simple "multiplication" doesn't work is that it doesn't speed up the passing of time - the factor applies to all the time that has already passed, as well as the time that is passing.
So, if you set your speed factor to 3 and then wait 10 minutes, your clock will correctly read 30 minutes. But if you then change the factor to 2, your clock will immediately read 20 minutes, because the multiplication is applied to time already passed. That's obviously not correct.
I don't think the Stopwatch is the class you want to measure "system time" with. I think you want to measure it yourself and store the elapsed time in your own variable.
Assuming that your target project really is a game, you will likely have your "game loop" somewhere in code. Each time through the loop, you can use a regular stopwatch object to measure how much real-time has elapsed. Multiply that value by your speed-up factor and add it to a separate game-time counter. That way, if you reduce your speed factor, you only reduce the factor applied to passing time, not to the time you've already recorded.
You can wrap all this behaviour into your own stopwatch class if needs be. If you do that, then I'd suggest that you calculate/accumulate the elapsed time both "every time it's requested" and also "every time the factor is changed." So you have a class something like this (note that I've skipped field declarations and some simple private methods for brevity - this is just a rough idea):
public class SpeedyStopwatch
{
// This is the time that your game/system will run from
public TimeSpan ElapsedTime
{
get
{
CalculateElapsedTime();
return this._elapsedTime;
}
}
// This can be set to any value to control the passage of time
public double TimeFactor
{
get { return this._timeFactor; }
set
{
CalculateElapsedTime();
this._timeFactor = value;
}
}
private void CalculateElapsedTime()
{
// Find out how long (real-time) since we last called the method
TimeSpan lastTimeInterval = GetElapsedTimeSinceLastCalculation();
// Multiply this time by our factor
lastTimeInterval *= this._timeFactor;
// Add the multiplied time to our elapsed time
this._elapsedTime += lastTimeInterval;
}
}
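Hypothetical usage of a class along those lines, using the soccer example from the question (45 simulated minutes in 2.5 real minutes gives a factor of 18):

var clock = new SpeedyStopwatch();
clock.TimeFactor = 18.0;               // 45 simulated minutes play out in 2.5 real minutes
// ... later, wherever game time is needed ...
TimeSpan gameTime = clock.ElapsedTime;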
According to modern physics, what you need to do to make your timer go "faster" is to speed up the computer that your software is running on. I don't mean the speed at which it performs calculations, but its physical speed. The closer you get to the speed of light (the constant c), the greater the rate at which time passes for your computer, so as you approach the speed of light, time will "speed up" for you.
It sounds like what you might actually be looking for is an event scheduler, where you specify that certain events must happen at specific points in simulated time and you want to be able to change the relationship between real time and simulated time (perhaps dynamically). You can run into boundary cases when you start to change the speed of time in the process of running your simulation and you may also have to deal with cases where real time takes longer to return than normal (your thread didn't get a time slice as soon as you wanted, so you might not actually be able to achieve the simulated time you're targeting.)
For instance, suppose you wanted to update your simulation at least once per 50ms of simulated time. You can implement the simulation scheduler as a queue where you push events and use a scaled output from a normal Stopwatch class to drive the scheduler. The process looks something like this:
Push (simulate at t=0) event to event queue
Start stopwatch
lastTime = 0
simTime = 0
While running
simTime += scale*(stopwatch.Time - lastTime)
lastTime = stopwatch.Time
While events in queue that have past their time
pop and execute event
push (simulate at t=lastEventT + dt) event to event queue
This can be generalized to different types of events occurring at different intervals. You still need to deal with the boundary case where the event queue is ballooning because the simulation can't keep up with real time.
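A hedged C# translation of that pseudocode (the simulation callback, the running flag and the 50 ms interval are assumptions; duplicate keys in the SortedList are not handled in this sketch):

using System;
using System.Collections.Generic;
using System.Diagnostics;

var stopwatch = Stopwatch.StartNew();
double scale = 2.0;                              // simulated seconds per real second
double simTime = 0, lastTime = 0;
var events = new SortedList<double, Action>();   // simulated time -> handler
events.Add(0, () => Simulate());                 // push (simulate at t=0)

while (running)
{
    double now = stopwatch.Elapsed.TotalSeconds;
    simTime += scale * (now - lastTime);         // scaled output of the normal Stopwatch
    lastTime = now;

    while (events.Count > 0 && events.Keys[0] <= simTime)   // events past their simulated time
    {
        double t = events.Keys[0];
        Action handler = events.Values[0];
        events.RemoveAt(0);
        handler();
        events.Add(t + 0.05, handler);           // re-schedule 50 ms of simulated time later
    }
}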
I'm not entirely sure what you're looking to do (doesn't a minute always have 60 seconds?), but I'd utilize Thread.Sleep() to accomplish what you want.