I have an application with 6 timers. Each timer has a different interval: 1s, 1s, 3s, 3s, 3s, and 3s respectively. The CPU usage is always around 2% to 3%.
On my PC this is fine because of its capability.
I am sure it could cause the application problems if the PC's capability is low.
Is there a more effective way to use timers, or some other way to run this work in the background?
The reason I use timers is that they query the database (get the total amount) whenever a user adds, edits, or deletes a product record - and not just product records, any record.
One 1s timer shows the date and time on a label.
The other 1s timer interacts with a DataGridView, updating a whole column.
The remaining timers get data from the MySQL server. By my estimation, the maximum number of records is about 10.
Thanks
It's unclear why you think you need multiple timers here, and you don't even say which timer implementation you are using - and it would likely make a difference.
Employing a single timer that ticks at a reasonable minimal precision (1s, 100ms, etc.) would reduce the overall overhead and would likely serve your purpose better. Of course that's said without any indication of what you're actually trying to achieve.
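For illustration, a minimal sketch of that idea (the method names are placeholders, not the poster's code): one System.Timers.Timer ticking at the smallest interval you need, with the 3s work run on every third tick.

using System;
using System.Timers;

class SingleTimerDemo
{
    private static readonly Timer _timer = new Timer(1000); // 1s base interval
    private static int _tickCount;

    static void Main()
    {
        _timer.Elapsed += OnTick;
        _timer.Start();
        Console.ReadLine(); // keep the process alive for the demo
    }

    private static void OnTick(object sender, ElapsedEventArgs e)
    {
        _tickCount++;

        UpdateClockLabel();      // the 1s work
        RefreshGridColumn();     // the other 1s work

        if (_tickCount % 3 == 0) // every third tick covers the 3s timers
            QueryDatabaseTotals();
    }

    // Hypothetical placeholders for the work described in the question.
    private static void UpdateClockLabel() { }
    private static void RefreshGridColumn() { }
    private static void QueryDatabaseTotals() { }
}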
It sounds as if you may have multiple issues, but to answer your question: running multiple timers will not cause your application to crash. How you implement the timers, and whether you lock the code blocks that are called when a timer fires, are what matter. If you allow a code block to start executing before a previous call to it has finished, your application can become unstable. You should look at timers and perhaps even threads. Without knowing more about what you are doing, it is difficult to provide a more definitive answer to your question.
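To illustrate the point about overlapping callbacks, one common pattern (a sketch, not the poster's code) uses an Interlocked flag so a tick is simply skipped if the previous one is still running:

using System.Threading;

class GuardedTimer
{
    private static readonly System.Timers.Timer _timer = new System.Timers.Timer(1000);
    private static int _running; // 0 = idle, 1 = a tick is still executing

    static void Start()
    {
        _timer.Elapsed += (s, e) =>
        {
            // Skip this tick if the previous one hasn't finished yet.
            if (Interlocked.CompareExchange(ref _running, 1, 0) != 0)
                return;
            try
            {
                DoWork(); // the long-running query, etc.
            }
            finally
            {
                Interlocked.Exchange(ref _running, 0);
            }
        };
        _timer.Start();
    }

    private static void DoWork() { }
}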
Related
What happens when you call DateTime.Now?
I followed the property code in Reflector and it appears to add the time zone offset of the current locale to UtcNow. Following UtcNow led me, step by step, to a Win32 API call.
I reflected on it and asked a related question but haven't received a satisfactory response yet. From the links currently in the comments on that question, I infer that there is a hardware unit that keeps time. But I also want to know what unit it keeps time in and whether it uses the CPU to convert time into a human-readable unit. This will shed some light on whether retrieving date and time information is I/O-bound or compute-bound.
You are deeply in undocumented territory with this question. Time is provided by the kernel: the underlying native API call is NtQuerySystemTime(). This does get tinkered with across Windows versions - Windows 8 especially heavily altered the underlying implementation, with visible side-effects.
It is I/O bound in nature: time is maintained by the RTC (Real Time Clock) which used to be a dedicated chip but nowadays is integrated in the chipset. But there is very strong evidence that it isn't I/O bound in practice. Time updates in sync with the clock interrupt so very likely the interrupt handler reads the RTC and you get a copy of the value. Something you can see when you tinker with timeBeginPeriod().
And you can see when you profile it that it only takes ~7 nanoseconds on Windows 10 - entirely too fast to be I/O bound.
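A rough way to reproduce that measurement yourself (numbers will vary by machine; this is just a sketch, not a rigorous benchmark):

using System;
using System.Diagnostics;

class UtcNowBenchmark
{
    static void Main()
    {
        const int iterations = 10000000;
        DateTime sink = DateTime.MinValue; // keep the JIT from optimising the call away

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            sink = DateTime.UtcNow;
        sw.Stop();

        double nsPerCall = sw.Elapsed.TotalMilliseconds * 1000000.0 / iterations;
        Console.WriteLine("{0:F1} ns per DateTime.UtcNow call ({1})", nsPerCall, sink.Kind);
    }
}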
You seem to be concerned with blocking. There are two cases where you'd want to avoid that.
On the UI thread it's about latency. It does not matter what you do (IO or CPU), it can't take long. Otherwise it freezes the UI thread. UtcNow is super fast so it's not a concern.
Sometimes, non-blocking IO is used as a way to scale throughput as more load is added. Here, the only reason would be to save threads, because each thread consumes a lot of resources. Since there is no async way to call UtcNow, the question is moot: you just have to call it as is.
Since time on Windows usually advances with the clock interrupt (64 Hz, i.e. every 15.625 ms, by default), I'd assume that a call to UtcNow reads from an in-memory variable that is written at that rate. That makes it CPU bound. But it does not matter either way.
.NET relies on the Windows API. MSDN has this to say about the API:
https://msdn.microsoft.com/de-de/library/windows/desktop/ms724961(v=vs.85).aspx
When the system first starts, it sets the system time to a value based on the real-time clock of the computer and then regularly updates the time [...] GetSystemTime copies the time to a SYSTEMTIME [...]
I have found no reliable source to back up my claim that the time is stored as a SYSTEMTIME structure, updated in place, and simply copied into the receiving buffer of GetSystemTime when called. The smallest logical unit is 100 ns from the NtQuerySystemTime system call, but we end up with 1 millisecond in the CLR's DateTime object. Resolution is not always the same.
We might be able to figure that out for Mono on Linux, but hardly for Windows given that the API code itself is not public. So here is an assumption: Current time is a variable in the kernel address space. It will be updated by the OS (frequently by the system clock timer interrupt, less frequently maybe from a network source -- the documentation mentions that callers may not rely on monotonic behavior, as a network sync can correct the current time backwards). The OS will synchronize access to prevent concurrent writing but otherwise it will not be an I/O-expensive operation.
On recent computers, the timer interval is no longer fixed, and can be controlled by the BIOS and OS. Applications can even request lower or higher clock rates: https://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted
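For reference, that request is made through the timeBeginPeriod/timeEndPeriod pair in winmm.dll. A minimal P/Invoke sketch (use it sparingly, since a faster clock interrupt costs power, as the linked article explains):

using System.Runtime.InteropServices;
using System.Threading;

class TimerResolutionDemo
{
    [DllImport("winmm.dll")]
    static extern uint timeBeginPeriod(uint uPeriod);

    [DllImport("winmm.dll")]
    static extern uint timeEndPeriod(uint uPeriod);

    static void Main()
    {
        timeBeginPeriod(1);       // ask for a 1 ms clock interrupt
        try
        {
            Thread.Sleep(10);     // sleeps now wake up much closer to 10 ms
        }
        finally
        {
            timeEndPeriod(1);     // always undo the request
        }
    }
}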
Forgive me for this question, but I can't seem to find a good source of when to use which. Would be happy if you can explain it in simple terms.
Furthermore, I am facing this dilemma:
See, I am coding a simple application. I want it to show the elapsed time (hh:mm:ss format or something). But also, to be able to "speed up" or "slow down" its time intervals (i.e. speed up so that a minute in real time equals an hour in the app).
For example, in Youtube videos (* let's not consider the fact that we can jump to specific parts of the vid *), we see the actual time spent in watching that video on the bottom left corner of the screen, but through navigating in the options menu, we are able to speed the video up or down.
And we can actually see that the time gets updated in a manner that agrees with the speed factor (like, if you choose twice the speed, the timer below gets updated twice faster than normal), and you can change this speed rate whenever you want.
This is what I'm kinda after. Something like how Youtube videos measure the time elapsed and the fact that they can change the time intervals. So, which of the two do you think I should choose? Timer or StopWatch?
I'm just coding a Windows Form Application, by the way. I'm simulating something and I want the user to be able to speed up whenever he or she wishes to. Simple as this may be, I wish to implement a proper approach.
As far as I know the main differences are:
Timer
Timer is just a simple scheduler that runs some operation/method once in a while
It executes the method on a separate thread pool thread (for System.Timers.Timer / System.Threading.Timer), which prevents blocking of the main thread
Timer is good when we need to execute some task in certain time interval without blocking anything.
Stopwatch
Stopwatch by default runs on the same thread
It counts time and exposes the result as a TimeSpan struct (Elapsed), which is useful when we need additional information
Stopwatch is good when we need to measure time and get extra detail, such as how many processor ticks a method took, etc. (a minimal usage sketch of both follows after this list)
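A minimal illustration of both (using System.Timers.Timer; just a sketch, not a recommendation for the question above):

using System;
using System.Diagnostics;
using System.Timers;

class TimerVsStopwatch
{
    static void Main()
    {
        // Stopwatch: measures elapsed time; nothing runs periodically.
        var sw = Stopwatch.StartNew();

        // Timer: schedules a callback roughly every second.
        var timer = new Timer(1000);
        timer.Elapsed += (s, e) =>
            Console.WriteLine("Elapsed so far: {0:hh\\:mm\\:ss}", sw.Elapsed);
        timer.Start();

        Console.ReadLine(); // let a few ticks happen
    }
}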
This has already been covered in a number of other questions including
here. Basically, you can either use a Stopwatch with a speed factor, in which case the scaled result is your "elapsed time", or take a more complicated approach and use a Timer, changing its Interval property.
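A sketch of the Stopwatch-with-speed-factor idea (the class and the SpeedFactor property are mine, not from any library): the real Stopwatch keeps running untouched, and the displayed time is the real elapsed time multiplied by the current factor, with already-scaled time banked whenever the factor changes.

using System;
using System.Diagnostics;

class ScalableClock
{
    private readonly Stopwatch _stopwatch = Stopwatch.StartNew();
    private TimeSpan _accumulated = TimeSpan.Zero;

    public double SpeedFactor { get; private set; }

    public ScalableClock(double initialFactor = 1.0)
    {
        SpeedFactor = initialFactor;
    }

    // Changing speed banks the time scaled so far, so the change takes effect from "now".
    public void SetSpeed(double factor)
    {
        _accumulated += TimeSpan.FromTicks((long)(_stopwatch.Elapsed.Ticks * SpeedFactor));
        _stopwatch.Restart();
        SpeedFactor = factor;
    }

    public TimeSpan Elapsed
    {
        get { return _accumulated + TimeSpan.FromTicks((long)(_stopwatch.Elapsed.Ticks * SpeedFactor)); }
    }
}

// Usage: poll Elapsed from a UI timer and display it, e.g. clock.Elapsed.ToString(@"hh\:mm\:ss").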
Original Question
Is there a heuristic or algorithm to programmatically find out how many threads I can open in order to obtain maximum throughput of an async operation such as writing to a socket?
Further explained question
I'm assisting an algorithms professor at my college and he posted an assignment where the students are supposed to learn the basics of distributed computing, in his words: sockets... The assignment is to create a "server" that listens on a given port, receives a string, performs a simple operation on it (I think it's supposed to count its length) and returns Ok or Rejected... The "server" must be able to handle a minimum of 60k submissions per second... My job is to create a little app to simulate 60k clients...
I've managed to automate the distribution of the servers and the clients across a university lab in order to test 10 servers at a time (network infrastructure became the bottleneck). The problem here is: one lab is homogeneous, two labs are not! If not tuned correctly the "client" usually can't simulate 60k users and report back to me, especially when the lab is an older one, AND I would like to provide the client to the students so they can test their own "server" more reliably... The ability to determine the optimal number of threads to spawn has now become vital! PS: Fire-and-forget is not an option because the client also checks whether the returned value is correct, e.g. if I send "Short sentence" I know the result will be "Rejected" and I have to check it...
A class has 60 students... and there's a morning class and a night class, so each week there will be 120 "servers" to test, because as the semester moves along the "server" part will have to do more, while the client will not (it will always just send a string and receive "Ok"/"Rejected")... So there's enough work to be done to justify all this effort...
Edit1
- Changed from Console to an async operation
- I don't want the maximum number of threads, I want the number that will provide maximum throughput! I imagine that on a 6-core PC the number will be higher than on a 2-core PC.
Edit2
- I'm building a simple console app to perform some tests against another app... one of those is a specific kind of load test (a RUDY attack) where I have to simulate a lot of clients performing a specific attack... The thing is that there's a curve between throughput and number of threads, where after a given point opening more threads actually decreases my throughput...
Edit3
Added more context to the initial question...
The Windows console is not really meant to be used by more than one thread; otherwise you get interleaved writes. So the thread count for maximum console output would be one.
It's when you're doing computation that multiple threads make sense. Then, it's rarely useful to use more than one thread per logical processor - or one background thread plus one UI thread for UI apps on a single-core processor.
It depends entirely on the situation - so the actual answer to your question of "is there a magical algorithm that will give me the perfect setup for max throughput?" is ... no.
Sure, more cores means more threads that can run and less context-switching. That said, you've edited your question to include an IO-bound example. IO-bound operations generally make use of completion ports for async operations. So, in that particular case, removing your use of your own dedicated threads for such an operation would be your main concern towards achieving maximum throughput.
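For the IO-bound client case, something along these lines (a sketch; host, port, payload and reply are placeholders, and the .NET 4.5+ async socket calls ride on IO completion ports rather than dedicated threads):

using System;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class AsyncClient
{
    // Sends one string and checks the reply without tying up a thread while waiting.
    static async Task<bool> SendAndCheckAsync(string host, int port, string payload, string expected)
    {
        using (var client = new TcpClient())
        {
            await client.ConnectAsync(host, port);
            var stream = client.GetStream();

            byte[] request = Encoding.ASCII.GetBytes(payload);
            await stream.WriteAsync(request, 0, request.Length);

            byte[] buffer = new byte[256];
            int read = await stream.ReadAsync(buffer, 0, buffer.Length);
            return Encoding.ASCII.GetString(buffer, 0, read).Trim() == expected;
        }
    }

    static async Task Main()
    {
        // Fire many logical clients; the thread pool only needs a handful of threads for this.
        var checks = new Task<bool>[1000];
        for (int i = 0; i < checks.Length; i++)
            checks[i] = SendAndCheckAsync("localhost", 9000, "Short sentence", "Rejected");

        bool[] results = await Task.WhenAll(checks);
        Console.WriteLine("Correct replies: {0}", Array.FindAll(results, r => r).Length);
    }
}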
Since you changed the question, I'll provide another answer.
It depends on the workload. If you're doing compute-heavy tasks, then use every logical processor. If you're doing IO, then use async calls rather than spawning new threads.
Of course, .NET has a way of managing this for you - the Thread Pool. Use it. Don't worry about how many threads you need, just kick off tasks.
If you are actually trying to do something productive (instead of just printing to the console), you should use System.Threading.Tasks.Task.Factory.StartNew. You can start as many tasks as you want. The runtime will try to distribute them amongst the available hardware threads as well as it can.
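For instance (Process is a placeholder for the real work):

using System;
using System.Threading.Tasks;

class TaskFanOut
{
    static void Main()
    {
        var tasks = new Task[100];
        for (int i = 0; i < tasks.Length; i++)
        {
            int workItem = i; // copy so each task captures its own value
            tasks[i] = Task.Factory.StartNew(() => Process(workItem));
        }
        Task.WaitAll(tasks); // the thread pool decides how many run concurrently
    }

    static void Process(int item)
    {
        // placeholder for real work
        Console.WriteLine("Processed {0}", item);
    }
}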
I know there are some existing questions and they provide a very good general perspective on things. I'm hoping to get some details on the C#/VB.Net side for the actual implementation (not philosophy) of some of these perspectives.
My Particular Case
I have a WCF Service which, amongst other things, receives files. For most of the service's life this particular area is actually just sat doing nothing - when work does come it arrives in high bursts of greatly varying quantities.
For each file received (which at a max can be thousands per second) the service needs to work on the files for between 1-10 seconds (each) depending on a number of other services, local resources, and network IO wait times.
To help the service with these burst workloads I implemented a queue system. Those thousands of files received per second are placed onto the queue. A controller calculates the number of threads to use based on the size of the queue, until it reaches a "Peak Max Threads" setting which prevents it from creating additional threads. These threads are placed in a thread pool and reused to cycle through the queue. The controller will, at intervals, recalculate the number of threads required. If the queue size reduces, a relevant number of threads are released.
The age old problem
How many threads should I peak at? Clearly, adding a new thread every time a file is received would be silly, for lack of a better word - the performance, at best, would deteriorate. Capping the threads when CPU utilization is only 10% across each core also doesn't seem to be the best use of resources.
So, is there an appropriate way to determine how many threads to cap at? I would rather the service could determine this for itself by sampling available resources, but is there a performance hit from doing so? I know the common answer is to monitor workloads, adjust the counts through trial and error until I find a number I like, but due to the nature of this service (long periods of idle followed by high/burst workloads) it could take a long time to get that kind of information.
What then if we move the server's image to a different host which is faster/slower/different to the first? I have to re-sample the process all over again?
Ideally what I'm after, is for the co-ordinator to intelligently increase the size of the threadpool until CPU utilisation is at x% (would 80% be reasonable? 90%? 99%?). Clearly, I want to do this without adding more threads than is necessary to hit x% otherwise all I'll end up with is threads not just waiting on IO resources, but awaiting each other too.
Thanks in advance!
Related questions (if you want some generic ideas):
How many threads to create?
How many threads is too many?
How many threads to create and when?
A Complication for you
Where would be the fun if I didn't make the problem more difficult?
As it currently stands, the service regularly does hit 100% CPU during these bursts. The issue is the CPU utilisation spikes: it goes from idle (0-10%) to 100% and back down again. I'm not sure I can help that - ideally I wouldn't take it all the way to 100%. The problem exists because the files mentioned are in fact images, and part of the service's process is to pass the image through to the System.Windows.Media black box which does some complex image processing for me.
There are then lulls in between the spikes because of the IO waits and other processing that goes on. If the spikes hitting 100% can't be helped (and I'm all for knowing how to prevent that, or if I should) how should I aim for the CPU utilisation graph to look? Sat constantly at 100%? Bouncing between 50-100? If I do go through the effort of sampling to decide what does seem to work best, is it guaranteed that switching the virtual servers' host will also work best with the same graph?
This added complexity I won't take into consideration for those of you willing to answer. Feel free to ignore this section. However, any answer that also accounts for this complication, or even answers that just provide tips on how to handle it, I'll at the very least upvote!
Heck of a long question - sorry about that - and thanks for reading so much!!
PerformanceCounter allows you to query for processor usage.
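For example (note the counter needs a warm-up read; the first NextValue() call typically returns 0):

using System;
using System.Diagnostics;
using System.Threading;

class CpuSampler
{
    static void Main()
    {
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        cpu.NextValue();        // first call establishes a baseline
        Thread.Sleep(1000);     // sample over one second
        Console.WriteLine("Total CPU: {0:F1}%", cpu.NextValue());
    }
}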
However, have you tried something the framework provides?
foreach (var file in files)
{
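// local copy so the closure captures this iteration's file, not the shared loop variable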
var workitem = file;
Task.Factory.StartNew(() =>
{
// do work on workitem
}, TaskCreationOptions.LongRunning | TaskCreationOptions.PreferFairness);
}
You can tune the concurrency level for Tasks in the Task.Factory.
The .NET 4 thread pool will by default schedule the number of threads it finds performs best on the hardware where it runs, but you can change how that works via a custom TaskScheduler.
You probably need a custom solution, but it would be worthwhile to benchmark yours against the standard one.
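If you do want an explicit cap rather than leaving it to the scheduler, one framework-provided knob (a sketch; files and Process are placeholders) is ParallelOptions.MaxDegreeOfParallelism:

using System;
using System.Threading.Tasks;

class CappedParallelism
{
    static void Run(string[] files)
    {
        var options = new ParallelOptions
        {
            // leave one core free for the rest of the system
            MaxDegreeOfParallelism = Math.Max(1, Environment.ProcessorCount - 1)
        };

        Parallel.ForEach(files, options, file => Process(file));
    }

    static void Process(string file)
    {
        // placeholder for the per-file work
    }
}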
Edit: (comment note):
No links needed; I may have invented a term, since English is not my first language. What I mean is: keep a variable where you store the measurement from the last check (prevDelta), and call the difference since then delta. Each time you check, add the delta to a variable averageDelta and divide by 2, so averageDelta becomes a running average of all deltas; it will mostly stay low while there is no activity. Then keep a second set of delta variables that average only the deltas over a small time span (you will have to come up with an algorithm to calculate this short-term variance accurately). Once that is done you can compare the long-term average delta with the "temporal delta". The average delta will mostly be low and will climb slowly when bursts arrive, while over the same period the temporal delta will rise very fast. Then, when the burst stops, the average delta falls slowly and the temporal delta falls very fast.
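My reading of that description as a sketch (the names and the exact smoothing factors are my own interpretation): keep one slow running average of the per-check deltas and one fast short-window average, and treat a large gap between them as the start of a burst.

class BurstDetector
{
    private double _averageDelta;   // slow-moving average over all checks
    private double _temporalDelta;  // fast average over the last few checks
    private double _previousValue;

    // Call this periodically with the metric you watch (queue length, items/sec, ...).
    public bool Check(double currentValue)
    {
        double delta = currentValue - _previousValue;
        _previousValue = currentValue;

        _averageDelta  = (_averageDelta * 0.95) + (delta * 0.05); // slow smoothing
        _temporalDelta = (_temporalDelta * 0.5) + (delta * 0.5);  // reacts much faster

        // A burst is starting when the short-window delta races ahead of the long-run one.
        return _temporalDelta > _averageDelta * 2.0;
    }
}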
You could use I/O Completion Ports to asynchronously fetch your images without tying up any threads until it comes time to process what you have fetched.
You could then limit your thread pool based on the number of cores on your client PC, making sure to leave a core free for other processes to use.
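A sketch of those two suggestions combined (the file paths and the processing call are placeholders): a FileStream opened with useAsync: true uses overlapped IO / completion ports on Windows, and a SemaphoreSlim caps the CPU-bound processing at cores minus one.

using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

class ImagePipeline
{
    // One slot fewer than the core count, so other processes keep a core.
    private static readonly SemaphoreSlim _cpuSlots =
        new SemaphoreSlim(Math.Max(1, Environment.ProcessorCount - 1));

    static async Task HandleFileAsync(string path)
    {
        byte[] data;
        using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read,
                                       FileShare.Read, 4096, useAsync: true))
        {
            data = new byte[fs.Length];
            await fs.ReadAsync(data, 0, data.Length); // no thread is blocked while reading
        }

        await _cpuSlots.WaitAsync();
        try
        {
            ProcessImage(data); // the CPU-heavy part
        }
        finally
        {
            _cpuSlots.Release();
        }
    }

    static void ProcessImage(byte[] data)
    {
        // placeholder for the System.Windows.Media work
    }
}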
What about a dynamic thread manager that monitors the threads' overall performance and, based on that, spawns new threads or kills old ones? The main problem here is only how to define the performance measurement function. The rest can be done with a periodically scheduled job that increases or decreases the number of threads according to the previous thread count and the measured performance, or something like that. Maybe also in connection with resource utilization (CPU, disks, network...).
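A sketch of that idea, assuming the performance measure is simply items processed since the last check (that choice is mine, not part of the suggestion above):

using System;
using System.Threading;

class ThreadCountController
{
    private int _workerCount = 2;
    private long _processedSinceLastCheck;   // incremented by the workers
    private long _previousThroughput;

    public void RecordProcessed() { Interlocked.Increment(ref _processedSinceLastCheck); }

    // Run this from a periodic timer, e.g. every few seconds.
    public void Adjust()
    {
        long throughput = Interlocked.Exchange(ref _processedSinceLastCheck, 0);

        if (throughput > _previousThroughput)
            _workerCount++;                  // still scaling: try one more worker
        else if (_workerCount > 1)
            _workerCount--;                  // adding threads stopped helping

        _previousThroughput = throughput;
        Console.WriteLine("Target workers: {0}", _workerCount);
        // The caller is responsible for starting/stopping workers to match _workerCount.
    }
}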
I have an app that needs to fire off a couple of events at certain times during the day - the times are all defined by the users. I can think of a couple of ways of doing it but none of them sit too well. The timing doesn't have to be of a particularly high resolution - a minute or so each way is fine.
My ideas :
When the app starts up read all the times and start timers off that will Tick at the appropriate time
Start a timer off that'll check every minute or so for 'current events'
tia for any better solutions.
Store/index the events sorted by when they next need attention. This could be in memory or not according to how many there are, how often you make changes, etc. If all of your events fire once a day, this list is basically a circular buffer which only changes when users change their events.
Start a timer which will 'tick' at the time of the event at the head of the list. Round up to the next minute if you like.
When the timer fires, process all events which are now in the past [edit - and which haven't already been processed], re-insert them into the list if necessary (i.e. if you don't have the "circular buffer" optimisation), and set a new timer.
Obviously, when you change the set of events, or change the time for an existing event, then you may need to reset the timer to make it fire earlier. There's usually no point resetting it to fire later - you may as well just let it go off and do nothing. And if you put an upper limit of one minute on how long the timer can run (or just have a 1 minute recurring timer), then you can get within 1-minute accuracy without ever resetting. This is basically your option 2.
Arguably you should use an existing framework rather than rolling your own, but I don't know C# so I have no idea what's available. I'm generally a bit wary of the idea of setting squillions of timers, because some environments don't support that (or don't support it well). Hence this scheme, which requires only one. I don't know whether C# has any problems in that respect, but this scheme can easily be arranged to use O(1) RAM if necessary, which can't be beat.
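A compact sketch of that scheme (the event storage and names are mine), using a single System.Threading.Timer that is always armed for the earliest due time:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

class EventScheduler
{
    private readonly object _gate = new object();
    private readonly List<Tuple<DateTime, Action>> _events = new List<Tuple<DateTime, Action>>();
    private readonly Timer _timer;

    public EventScheduler()
    {
        _timer = new Timer(_ => Fire());
    }

    public void Add(DateTime dueUtc, Action action)
    {
        lock (_gate)
        {
            _events.Add(Tuple.Create(dueUtc, action));
            Rearm(); // may pull the next tick earlier
        }
    }

    private void Fire()
    {
        List<Action> due;
        lock (_gate)
        {
            var now = DateTime.UtcNow;
            due = _events.Where(e => e.Item1 <= now).Select(e => e.Item2).ToList();
            _events.RemoveAll(e => e.Item1 <= now); // one-shot events; re-add here if recurring
            Rearm();
        }
        foreach (var action in due) action();       // run the handlers outside the lock
    }

    private void Rearm()
    {
        if (_events.Count == 0) { _timer.Change(Timeout.Infinite, Timeout.Infinite); return; }
        var wait = _events.Min(e => e.Item1) - DateTime.UtcNow;
        if (wait < TimeSpan.Zero) wait = TimeSpan.Zero;
        _timer.Change(wait, Timeout.InfiniteTimeSpan);
    }
}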
Have a look at Quartz.Net. It is a scheduler framework (originally for Java).
This sounds like a classic case for a Windows Service. I think there is a Windows Service project type in VS2005/2008. The service coupled with a simple database and a front-end application to allow users to set the trigger times would be all you need.
If it won't change very often, Scheduled Tasks is also an option.
I've written a few programs along these lines.
I suggest #2. All you need to do is keep a list of times at which events are "due", and every X amount of time (depending on your resolution) check your list for "now" events. You can pick up some optimization if you can guarantee the list is sorted and that each event on the list is due exactly once. Otherwise, if you have recurring events, you have to make sure you cover your window. What I mean is: if you have an event that is due at 11:30 am and you're checking once a minute or so, it's possible that you check at 11:29:59 and then not again until 11:31:01, due to the imprecision of CPU time slices. So you need to be sure that one of those checks (11:29 or 11:31) still picks up the 11:30 event, and that ONLY one of them does (i.e. you don't fire it at both 11:29 and 11:31).
The advantage this approach has over checking only at times you know to be on your list is that it allows your list to be modified by third parties without your knowledge, and your event handler will continue to "just work".
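A sketch of covering the window correctly (the shape of the event list is my assumption): remember the time of the previous check and fire everything that falls in the half-open interval (lastCheck, now], so nothing is missed and nothing fires twice.

using System;
using System.Collections.Generic;

class WindowedChecker
{
    private DateTime _lastCheck = DateTime.Now;

    // Call this from a timer ticking roughly once a minute.
    public void CheckDueEvents(IEnumerable<Tuple<DateTime, Action>> events)
    {
        DateTime now = DateTime.Now;
        foreach (var evt in events)
        {
            // Strictly after the previous check, up to and including now:
            // each due time lands in exactly one window.
            if (evt.Item1 > _lastCheck && evt.Item1 <= now)
                evt.Item2();
        }
        _lastCheck = now;
    }
}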
The simplest way would likely be to use Windows scheduler.
Otherwise you need to use one of the Timer classes, calculating how long until the first event. This approach, unlike the scheduler, allows new events to be found by the running process (and, possibly, resetting the timer).
The problem with #1 is that the number of milliseconds before an event may be too large to store in the Timer's interval, and as the number of events increases, your number of timers could get unwieldy.
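One way around the interval limit (a sketch; MaxInterval is an arbitrary cap I chose): arm the timer for at most some maximum span and, whenever it fires early, simply re-arm it for the remainder until the real due time arrives.

using System;
using System.Timers;

class LongDelayTimer
{
    private static readonly TimeSpan MaxInterval = TimeSpan.FromDays(1);
    private readonly Timer _timer = new Timer();
    private readonly DateTime _due;
    private readonly Action _onDue;

    public LongDelayTimer(DateTime due, Action onDue)
    {
        _due = due;
        _onDue = onDue;
        _timer.AutoReset = false;            // fire once, then decide what to do next
        _timer.Elapsed += (s, e) => Arm();
        Arm();
    }

    private void Arm()
    {
        TimeSpan remaining = _due - DateTime.Now;
        if (remaining <= TimeSpan.Zero) { _onDue(); return; }

        // Never ask the timer for more than MaxInterval at once.
        _timer.Interval = Math.Min(remaining.TotalMilliseconds, MaxInterval.TotalMilliseconds);
        _timer.Start();
    }
}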
I don't see anything wrong with #2, but I would opt for a background worker or a thread.