Calculate event rate per second - C#

I have a game file with millions of events; the file size can be > 10 GB.
Each line is a game action, like:
player 1, action=kill, timestamp=xxxx (ms granularity)
player 1, action=jump, timestamp=xxxx
player 2, action=fire, timestamp=xxxx
The set of distinct actions is finite for this data set.
I want to perform analysis on this file, like the total number of events per second, while tracking the count of each individual action in that second.
My plan in semi pseudocode:
var lastReadGameEventTime = DateTime.MinValue;
var datapoints = new Dictionary<string, int>();
string line;
while ((line = GetNextLine()) != null)
{
    ParseValues(line, out var timestamp, out var action);
    if (lastReadGameEventTime == DateTime.MinValue)
    {
        lastReadGameEventTime = timestamp;
    }
    else if (timestamp.Subtract(lastReadGameEventTime).TotalSeconds >= 1)
    {
        NotifyPointsForThisSecond(datapoints);
        datapoints = new Dictionary<string, int>();
        lastReadGameEventTime = timestamp;
    }
    if (!datapoints.TryGetValue(action, out var count))
        datapoints[action] = 1;
    else
        datapoints[action] = count + 1;
}
My worry is that this is too naive. I was thinking of counting over an entire minute and taking the average per second, but then of course I would miss event spikes.
And if I want to calculate a 5-day average, it will degrade the result set even further.
Any clever ideas?

You're asking several different questions here, all related. Your requirements aren't very detailed, but I think I can point you in the right direction. I'm going to assume that all you want is the number of events per second, for some period in the past. So all we need is some way to hold an integer (a count of events) for every second during that period.
There are 86,400 seconds in a day. Let's say you want 10 days' worth of information. You can build a circular buffer of size 864,000 to hold 10 days' worth of counts:
const int SecondsPerDay = 86400;
const int TenDays = 10 * SecondsPerDay;
int[] TenDaysEvents = new int[TenDays];
So you always have the last 10 days' of counts.
Assuming you have an event handler that reads your socket data and passes the information to a function, you can easily keep your data updated:
DateTime lastEventTime = DateTime.MinValue;
int lastTimeIndex = 0;

void ProcessReceivedEvent(string eventData) // "event" is a reserved word in C#, so the parameter is renamed
{
    // parse the event string to get the DateTime, truncated to the whole second
    // so that counts land in per-second buckets
    DateTime eventTime = GetEventDate(eventData);
    eventTime = new DateTime(eventTime.Ticks - eventTime.Ticks % TimeSpan.TicksPerSecond);
    if (lastEventTime == DateTime.MinValue)
    {
        lastTimeIndex = 0;
    }
    else if (eventTime != lastEventTime)
    {
        // get the number of whole seconds since the last event
        var elapsedTime = eventTime - lastEventTime;
        var elapsedSeconds = (int)elapsedTime.TotalSeconds;
        // for each of those seconds, reset the count to 0
        for (int i = 1; i <= elapsedSeconds; ++i)
        {
            lastTimeIndex = (lastTimeIndex + 1) % TenDays; // wrap around if we get past the end
            TenDaysEvents[lastTimeIndex] = 0;
        }
    }
    lastEventTime = eventTime; // remember this event's second, so elapsed time is measured from here
    // now increment the count for the current time index
    ++TenDaysEvents[lastTimeIndex];
}
This keeps the last 10 days in memory at all times, and is easy to update. Reporting is a bit more difficult because the start might be in the middle of the array. That is, if the current index is 469301, then the oldest entry (the starting time) is at 469302. It's a circular buffer. The naive way to report on this would be to copy the circular buffer into another array or list, with the starting point at position 0 in the new collection, and then report on that. Or you could write a custom enumerator that starts just past the current position and walks forward, wrapping around. That wouldn't be especially difficult to create.
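For example, a minimal sketch of such an enumerator, assuming the TenDaysEvents array and lastTimeIndex variable from the snippets above (the oldest count comes out first):

IEnumerable<int> CountsInChronologicalOrder()
{
    // walk forward from the entry just past the current index,
    // wrapping around the end of the circular buffer
    for (int i = 1; i <= TenDays; ++i)
        yield return TenDaysEvents[(lastTimeIndex + i) % TenDays];
}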
The beauty of the above is that your array remains static. You allocate it once and just re-use it. You might want to add some extra entries, though, so that there's a "buffer" between the current time and the time from 10 days ago; that will prevent the data for 10 days ago from being changed while a query is reading it. An extra 300 items, for example, gives you a 5-minute buffer.
Another option is to create a linked list of entries. Again, one per second. With that, you add items to the end of the list, and remove older items from the front. Whenever an event comes in for a new second, add the event entry to the end of the list, and then remove entries that are more than 10 days (or whatever your threshold is) from the front of the list. You can still use LINQ to report on things, as recommended in another answer.
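A rough sketch of that linked-list idea, assuming one (second, count) entry per second and a 10-day window (the names are illustrative, not from the question):

readonly LinkedList<(DateTime Second, int Count)> window = new LinkedList<(DateTime, int)>();

void RecordEvent(DateTime eventTime)
{
    // truncate to the whole second this event belongs to
    var second = new DateTime(eventTime.Ticks - eventTime.Ticks % TimeSpan.TicksPerSecond);
    if (window.Last == null || window.Last.Value.Second != second)
        window.AddLast((second, 0));
    // increment the count for the current second
    window.Last.Value = (second, window.Last.Value.Count + 1);
    // trim entries older than the threshold from the front
    var cutoff = second.AddDays(-10);
    while (window.First != null && window.First.Value.Second < cutoff)
        window.RemoveFirst();
}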
You could use a hybrid, too. As each second goes by, write a record to the database, and keep the last minute, or hour, or whatever in memory. That way, you have up-to-the-second data available in memory for quick reports and real-time updates, but you can also use the database to report on any period since you first started collecting data.
Whatever you decide, you probably should keep some kind of database, because you can't guarantee that your system won't go down. In fact, you can pretty much guarantee that your system will go down at some point. It's no fun losing data, or having to scan through terabytes of log data to re-build the data that you've collected over time.

Related

Are DateTimes passed by reference in C#? If not, why is my object updating as I change a variable?

I have run into a problem which I simply cannot get my head around. When I debug the program, I can see that it works fine - this is the strange part.
The issue I am facing is that when I append the new object to a List, it seems to completely change. Let me explain better by showing my code.
System.DateTime timeNow = System.DateTime.Now;
List<Train> trainsOnNet = new List<Train>();
for (int i = 1; i <= 3; i++)
{
    Train t = new Train();
    t.NewCarrige(true, 'A');
    t.NewCarrige(false, 'B');
    t.addStation(App.Stations.FirstOrDefault(x => x.GetShortCode().Equals("MTG")));
    t.addStation(App.Stations.FirstOrDefault(x => x.GetShortCode().Equals("BNS")));
    t.addStation(App.Stations.FirstOrDefault(x => x.GetShortCode().Equals("BSH")));
    int minsToAdd = 5;
    t.GetStations().ForEach(x =>
    {
        timeNow = timeNow.AddMinutes(minsToAdd);
        x.SetArrivalTime(timeNow);
        minsToAdd += 10;
    });
    timeNow = timeNow.AddMinutes(15);
    trainsOnNet.Add(t);
}
When I add t to the trainsOnNet list, the time seems to change to the last time that the loop will generate, even before it generates it.
If I place a breakpoint on this line, I can see that the t instance has the correct time values (i.e. 5 minutes from the current execution time and then 10 minutes between each). However, when I then press continue and inspect the trainsOnNet list, the times have been changed to the next train's set of times.
It appears to me that timeNow is being passed by reference, and as the loop increases the time, the stored time updates until I am left with 3 trains all showing the same times.
For example, if I execute the program at 19:51 with a breakpoint on trainsOnNet.Add(t), I can see that t holds 3 stations, in which the first station's arrival time is 19:56 and the next two are 20:11 and 20:26. Perfect. I then hit continue and inspect the newly appended object in my list. On inspection, the t instance's arrival time properties have now changed to:
20:56, 21:11, 21:36
Doing the maths of my code, the next train should arrive 20 minutes after the previous train has arrived at the end station, which brings me to 20:46. This makes it even more confusing that the first train is being changed past the second train's expected times, let alone being updated without being told to.
Any ideas would be greatly appreciated.
Stopping on that line on the first execution, and again after pressing continue, the two screenshots (not reproduced here) show the values being changed - in this case, by a whole hour.
System.DateTime is a "value type". You can check that in IntelliSense in Visual Studio or in the documentation: it is declared as a struct, not a class (the latter are reference types).
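A minimal illustration of those copy semantics (the values are arbitrary):

DateTime a = new DateTime(2020, 1, 1, 12, 0, 0);
DateTime b = a;            // value types are copied on assignment
a = a.AddMinutes(30);      // AddMinutes returns a new DateTime; a is reassigned
Console.WriteLine(b);      // still the original 12:00 value - b was not affected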
Your problem, however, is not really about this.
It's not clear what exactly you intend to do, and the flaw is probably in your logic.
1 - You reuse the same variable timeNow across all iterations; it's hard (at least for me) to keep track of what its value is supposed to be after a few "mental" iterations.
2 - You set minsToAdd to 5, then increase it by 10 inside the loop. So after the first few iterations of the inner loop, minsToAdd takes the values 15, 25, 35, 45, etc.
Then you add this to timeNow. So imagine that timeNow is 00:00 at the beginning; in the inner loop it takes the values 00:05, then 00:05 + 15 min = 00:20, then 00:20 + 25 min = 00:55, then 00:55 + 35 min = 01:30, etc., and after the inner loop you add another 15 minutes to the last value.
Is that what you expect?
Try to change these two lines:
timeNow = timeNow.AddMinutes(minsToAdd);
x.SetArrivalTime(timeNow);
To just:
x.SetArrivalTime(timeNow.AddMinutes(minsToAdd));
As others explained, you keep overwriting the timeNow variable on each iteration. This change should give you the expected result.
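For clarity, here is the inner loop with that change applied (t, timeNow and minsToAdd are from the question's code):

int minsToAdd = 5;
t.GetStations().ForEach(x =>
{
    // timeNow is read but never reassigned, so every arrival time
    // is an offset from the same base time
    x.SetArrivalTime(timeNow.AddMinutes(minsToAdd));
    minsToAdd += 10;
});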

How to calculate intersections of timeranges?

I have got a text file where every line looks like:
8:30 8:50 1
..........
20:30 20:35 151
Every line is a new user connection with its time period in In-net.
The goal is to find the periods of time where the number of connections reaches its maximum.
So maybe someone knows an algorithm that can help me with this task (multiple intersections)? I find this task non-trivial (because I am new to programming); I have some ideas but I find them awful, so maybe I should start with the mathematical algorithm that best achieves my goal.
To begin, we have to make some assumptions:
Assume you are looking for the shortest period with the maximum number of simultaneous connections.
Assume every line represents one connection. It's not clear from your question what the integer numbers after the start and end times on each line mean, so I ignore them.
The lines are given in order of increasing period start time.
We are free to choose any local maximum as the answer in case we get several periods with the same number of simultaneous connections.
The first stage of the solution is parsing. Given a sequence of lines we get the sequence of pairs of System.DateTime – a pair for each period in order.
static Tuple<DateTime, DateTime> Parse(string line)
{
    var a = line.Split()
        .Take(2) // take the start and end times only
        .Select(p => DateTime.ParseExact(p, "H:m", CultureInfo.InvariantCulture))
        .ToArray();
    return Tuple.Create(a[0], a[1]);
}
The next stage is the algorithm itself. It has two parts. First, we find local maximums as triples of start time, end time and connection count. Second, we select the absolute maximum from the set produced by the first part:
For example, to get just the maximum simultaneous-connection count:
File.ReadLines(FILE_PATH).Select(Parse).GetLocalMaximums().Max(x => x.Item3)
To get the last period having that maximum count:
File.ReadLines(FILE_PATH).Select(Parse).GetLocalMaximums()
    .Aggregate((x, y) => x.Item3 > y.Item3 ? x : y)
Or the first such period:
File.ReadLines(FILE_PATH).Select(Parse).GetLocalMaximums()
    .Aggregate((x, y) => x.Item3 >= y.Item3 ? x : y)
The most sophisticated part is detection of a local maximum.
1. Take the first period A and write down its end time. Then write down its start time as the last known start time. Note there is one end time written down and there is one active connection.
2. Take the next period B and write down its end time. Compare the start time of B with the minimum of the end times written down.
3. If there is no written end time smaller than B's start time, then the number of connections increases at this time. So discard the previous value for the last known start time and replace it with B's start time, then proceed to the next period. Note again there is one more connection at this time and we have one more end time, so the number of active connections is always equal to the number of written-down end times.
4. If there is a written end time smaller than B's start time, we had a decrease in the connection count, which means we just passed a local maximum. We have to report it: yield the triple (the last known start time, the minimum of the written end times, the number of end times written minus one). We should not count the end time for B, which we have already written down. Then discard all the end times less than B's start time, replace the last known start time, and proceed to the next period.
5. When the minimum end time equals B's start time, it means we lost one connection and gained another at the same moment. We just discard that end time and proceed to the next period.
6. Repeat from step 2 for all the periods we have.
The source code for the local maximum detection:
static IEnumerable<Tuple<DateTime, DateTime, int>>
    GetLocalMaximums(this IEnumerable<Tuple<DateTime, DateTime>> ranges)
{
    DateTime lastStart = DateTime.MinValue;
    var queue = new List<DateTime>();
    foreach (var r in ranges)
    {
        queue.Add(r.Item2);
        var t = queue.Min();
        if (t < r.Item1)
        {
            yield return Tuple.Create(lastStart, t, queue.Count - 1);
            do
            {
                queue.Remove(t);
                t = queue.Min();
            } while (t < r.Item1);
        }
        if (t == r.Item1) queue.Remove(t);
        else lastStart = r.Item1;
    }
    // yield the last local maximum
    if (queue.Count > 0)
        yield return Tuple.Create(lastStart, queue.Min(), queue.Count);
}
While using List<T> was not the best decision performance-wise, it's easy to understand. Use a sorted collection (or a min-heap) for better performance; replacing the tuples with structs would also eliminate a lot of memory allocations.
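For example, here is a sketch of the same method using PriorityQueue<TElement, TPriority> (available since .NET 6) as a min-heap of end times, so finding and removing the earliest end time no longer scans the whole list:

static IEnumerable<Tuple<DateTime, DateTime, int>>
    GetLocalMaximums(this IEnumerable<Tuple<DateTime, DateTime>> ranges)
{
    DateTime lastStart = DateTime.MinValue;
    // min-heap keyed on end time; element and priority are the same value
    var endTimes = new PriorityQueue<DateTime, DateTime>();
    foreach (var r in ranges)
    {
        endTimes.Enqueue(r.Item2, r.Item2);
        var t = endTimes.Peek();
        if (t < r.Item1)
        {
            yield return Tuple.Create(lastStart, t, endTimes.Count - 1);
            do
            {
                endTimes.Dequeue();      // O(log n) instead of List.Remove's O(n)
                t = endTimes.Peek();
            } while (t < r.Item1);       // never empties, since r's own end >= r's start
        }
        if (t == r.Item1) endTimes.Dequeue();
        else lastStart = r.Item1;
    }
    // yield the last local maximum
    if (endTimes.Count > 0)
        yield return Tuple.Create(lastStart, endTimes.Peek(), endTimes.Count);
}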
You could do:
string[] lines = System.IO.File.ReadAllLines(filePath);
var connections = lines
    .Select(d => d.Split(' '))
    .Select(d => new
    {
        From = DateTime.Parse(d[0]),
        To = DateTime.Parse(d[1]),
        Connections = int.Parse(d[2])
    })
    .OrderByDescending(d => d.Connections)
    .ToList();
connections will contain the sorted list with the top results first.

How to group a time series by interval (OHLC bars) with LINQ

I've seen variation on this question before, but without a definite answer.
I have a list of object with a timestamp (stock trades data, or 'ticks'):
class Tick
{
    DateTime Timestamp;
    double Price;
}
I want to generate another list based on those values which is grouped by a certain interval, in order to create an OHLC bar (Open, High, Low, Close). These bars may be of any specified interval (1 minute, 5, 10, or even 1 hour). I also need to find an efficient way to sort new "ticks" into the list, as they may arrive at a high rate (3-5 ticks per second).
Would appreciate any thoughts on this. Thanks!
I want to generate another list based on those values which is grouped by certain interval in order to create an OHLC bar (Open, High, Low, Close). These bars may be of any interval specified (1 minute, 5, 10 or even 1 hour).
Unfortunately, you haven't specified:
What the phase of the bar-series will be.
Whether a bar's begin / end times are purely "natural-time" based (depend solely on a fixed schedule rather than on the timestamp of the first and last ticks in it) or not.
Assuming natural-time intra-day bars, the phases are usually clamped to midnight. So hourly bars will be 00:00 - 01:00, 01:00 - 02:00, etc. In this case, the begin / end-time of a bar can serve as its unique-key.
So then the problem becomes: to what bar begin / end time does a tick's timestamp belong? If we assume everything I've assumed above, that can be solved easily with some simple integer math. The query can then be something like (untested, algo only):
var bars = from tick in ticks
           // Calculate the chronological, natural-time, intra-day index
           // of the bar associated with a tick.
           let barIndexForDay = tick.Timestamp.TimeOfDay.Ticks / barSizeInTicks
           // Calculate the begin-time of the bar associated with a tick.
           // For example, turn 2011/04/28 14:23.45
           // into 2011/04/28 14:20.00, assuming 5 min bars.
           let barBeginDateTime = tick.Timestamp.Date.AddTicks(barIndexForDay * barSizeInTicks)
           // Produce raw tick-data for each bar by grouping.
           group tick by barBeginDateTime into tickGroup
           // Order prices for a group chronologically.
           let orderedPrices = tickGroup.OrderBy(t => t.Timestamp)
                                        .Select(t => t.Price)
           select new Bar
           {
               Open = orderedPrices.First(),
               Close = orderedPrices.Last(),
               High = orderedPrices.Max(),
               Low = orderedPrices.Min(),
               BeginTime = tickGroup.Key,
               EndTime = tickGroup.Key.AddTicks(barSizeInTicks)
           };
It's common to want to locate a bar by index / date-time as well as to enumerate all bars in a series chronologically. In this case, you might want to consider storing the bars in a collection such as a SortedList<DateTime, Bar> (where the key is a bar's begin or end time), which will fill all these roles nicely.
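For instance, a small sketch of that idea (Bar is the type from the query above; GetBeginTime is a hypothetical helper that applies the same integer math as barBeginDateTime in the query):

var barsByBeginTime = new SortedList<DateTime, Bar>();
foreach (var bar in bars)
    barsByBeginTime.Add(bar.BeginTime, bar);

// O(log n) lookup of the bar a given tick falls into
if (barsByBeginTime.TryGetValue(GetBeginTime(tick.Timestamp), out var match))
{
    // ... use match ...
}

// chronological enumeration comes for free, because the keys are sorted
foreach (var pair in barsByBeginTime)
    Console.WriteLine($"{pair.Key}: O={pair.Value.Open} C={pair.Value.Close}");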
I also need to find an efficient way to sort new "ticks" into the list, as they may arrive at high rate (3-5 ticks per second).
It depends on what you mean.
If these ticks are coming off a live price-feed (chronologically), you don't need a look-up at all - just store the current, incomplete, "partial" bar. When a new tick arrives, inspect its timestamp. If it is still part of the current "partial" bar, just update the bar with the new information (i.e. Close = tick.Price, High = Max(oldHigh, tick.Price) etc.). Otherwise, the "partial" bar is done - push it into your bar-collection. Do note that if you are using "natural-time" bars, the end of a bar could also be brought on by the passage of time rather than by a price-event (e.g. an hourly bar completes on the hour).
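A rough sketch of that update logic; it assumes a mutable Bar class with the properties from the query above, the hypothetical GetBeginTime helper, and the barsByBeginTime sorted list from the earlier sketch:

Bar partial; // the current, incomplete bar (null until the first tick arrives)

void OnTick(Tick tick)
{
    DateTime begin = GetBeginTime(tick.Timestamp);
    if (partial == null || begin != partial.BeginTime)
    {
        if (partial != null)
            barsByBeginTime.Add(partial.BeginTime, partial); // the old partial bar is done
        partial = new Bar
        {
            BeginTime = begin,
            EndTime = begin.AddTicks(barSizeInTicks),
            Open = tick.Price,
            High = tick.Price,
            Low = tick.Price,
            Close = tick.Price
        };
    }
    else
    {
        // still inside the current bar: update it in place
        partial.Close = tick.Price;
        partial.High = Math.Max(partial.High, tick.Price);
        partial.Low = Math.Min(partial.Low, tick.Price);
    }
}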
EDIT:
Otherwise, you'll need to do a lookup. If you're storing the bars in a sorted list (keyed by begin-time / end-time) as I've mentioned above, then you'll just need to calculate the bar begin-time / end-time associated with a tick. That should be easy enough; I've already given you a sample of how you might accomplish that in the LINQ query above.
For example:
myBars[GetBeginTime(tick.Timestamp)].Update(tick);

How to ensure a timestamp is always unique?

I'm using timestamps to temporally order concurrent changes in my program, and require that each timestamp of a change be unique. However, I've discovered that simply calling DateTime.Now is insufficient, as it will often return the same value if called in quick succession.
I have some thoughts, but nothing strikes me as the "best" solution to this. Is there a method I can write that will guarantee each successive call produces a unique DateTime?
Should I perhaps be using a different type for this, maybe a long? DateTime has the obvious advantage of being easily interpretable as a real time, unlike, say, an incremental counter.
Update: Here's what I ended up coding as a simple compromise solution that still allows me to use DateTime as my temporal key, while ensuring uniqueness each time the method is called:
private static long _lastTime; // records the 64-bit tick value of the last time
private static object _timeLock = new object();

internal static DateTime GetCurrentTime() {
    lock ( _timeLock ) { // prevent concurrent access to ensure uniqueness
        DateTime result = DateTime.UtcNow;
        if ( result.Ticks <= _lastTime )
            result = new DateTime( _lastTime + 1 );
        _lastTime = result.Ticks;
        return result;
    }
}
Because each tick value is only one ten-millionth of a second, this method only introduces noticeable clock skew when called on the order of 10 million times per second (which, incidentally, it is efficient enough to sustain), so it's perfectly acceptable for my purposes.
Here is some test code:
DateTime start = DateTime.UtcNow;
DateTime prev = Kernel.GetCurrentTime();
Debug.WriteLine( "Start time : " + start.TimeOfDay );
Debug.WriteLine( "Start value: " + prev.TimeOfDay );
for ( int i = 0; i < 10000000; i++ ) {
    var now = Kernel.GetCurrentTime();
    Debug.Assert( now > prev ); // no failures here!
    prev = now;
}
DateTime end = DateTime.UtcNow;
Debug.WriteLine( "End time: " + end.TimeOfDay );
Debug.WriteLine( "End value: " + prev.TimeOfDay );
Debug.WriteLine( "Skew: " + ( prev - end ) );
Debug.WriteLine( "GetCurrentTime test completed in: " + ( end - start ) );
...and the results:
Start time: 15:44:07.3405024
Start value: 15:44:07.3405024
End time: 15:44:07.8355307
End value: 15:44:08.3417124
Skew: 00:00:00.5061817
GetCurrentTime test completed in: 00:00:00.4950283
So in other words, in half a second it generated 10 million unique timestamps, and the final result was only pushed ahead by half a second. In real-world applications the skew would be unnoticeable.
One way to get a strictly ascending sequence of timestamps with no duplicates is the following code.
Compared to the other answers here this one has the following benefits:
The values track closely with actual real-time values (except in extreme circumstances with very high request rates when they would get slightly ahead of real-time).
It's lock-free and should perform better than the solutions using lock statements.
It guarantees ascending order (simply appending a looping counter does not).
public class HiResDateTime
{
    private static long lastTimeStamp = DateTime.UtcNow.Ticks;

    public static long UtcNowTicks
    {
        get
        {
            long original, newValue;
            do
            {
                original = lastTimeStamp;
                long now = DateTime.UtcNow.Ticks;
                newValue = Math.Max(now, original + 1);
            } while (Interlocked.CompareExchange(
                ref lastTimeStamp, newValue, original) != original);
            return newValue;
        }
    }
}
Also note that original = Interlocked.Read(ref lastTimeStamp); should be used for the read inside the loop, since 64-bit read operations are not atomic on 32-bit systems.
Er, the answer to your question is that "you can't," since if two operations occur at the same time (which they will on multi-core processors), they will have the same timestamp, no matter what precision you manage to gather.
That said, it sounds like what you want is some kind of auto-incrementing thread-safe counter. To implement this (presumably as a global service, perhaps in a static class), you would use the Interlocked.Increment method, and if you decided you needed more than int.MaxValue possible versions, also Interlocked.Read.
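A minimal sketch of such a counter (the class and member names are illustrative; using a long avoids the int.MaxValue concern):

using System.Threading;

public static class ChangeVersion
{
    private static long current;

    // returns a unique, strictly increasing value; safe to call
    // concurrently from any number of threads
    public static long Next() => Interlocked.Increment(ref current);
}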
DateTime.Now is only updated every 10-15ms.
Not a dupe per se, but this thread has some ideas on reducing duplicates/providing better timing resolution:
How to get timestamp of tick precision in .NET / C#?
That being said: timestamps are horrible keys for information; if things happen that fast you may want an index/counter that keeps the discrete order of items as they occur. There is no ambiguity there.
I find that the most foolproof way is to combine a timestamp and an atomic counter. You already know the problem with the poor resolution of a timestamp. Using an atomic counter by itself also has the simple problem of requiring its state be stored if you are going to stop and start the application (otherwise the counter starts back at 0, causing duplicates).
If you were just going for a unique id, it would be as simple as concatenating the timestamp and counter value with a delimiter between them. But because you want the values to always be in order, that will not suffice. Basically, all you need to do is use the atomic counter value to add additional fixed-width precision to your timestamp. I am a Java developer so I will not provide C# sample code just yet, but the problem is the same in both domains. So just follow these general steps (a C# sketch follows after them):
You will need a method to provide counter values cycling from 0-99999. 100000 is the maximum number of values possible when concatenating a millisecond-precision timestamp with a fixed-width value in a 64-bit long. So you are basically assuming that you will never need more than 100000 ids within a single timestamp resolution (15 ms or so). A static method using the Interlocked class to provide atomic incrementing and resetting to 0 is the ideal way.
Now to generate your id you simply concatenate your timestamp with your counter value padded to 5 characters. So if your timestamp was 13023991070123 and your counter was at 234, the id would be 1302399107012300234.
This strategy will work as long as you do not need ids faster than 6666 per ms (assuming 15 ms is your most granular resolution), and it will always work without having to save any state across restarts of your application.
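Since the steps above are language-agnostic, here is a minimal C# sketch of the same idea (the names are illustrative, and it assumes you never need more than 100000 ids within one millisecond):

using System;
using System.Threading;

public static class OrderedIdGenerator
{
    private const long CounterRange = 100000; // counter occupies 5 decimal digits
    private static long counter = -1;

    public static long NextId()
    {
        // atomic increment, cycling through 0..99999
        long count = Interlocked.Increment(ref counter) % CounterRange;
        long millis = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
        // "concatenate" by shifting the timestamp left five decimal digits
        return millis * CounterRange + count;
    }
}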
It can't be guaranteed to be unique, but perhaps using ticks is granular enough?
A single tick represents one hundred nanoseconds or one ten-millionth of a second. There are 10,000 ticks in a millisecond.
Not sure what you're trying to do entirely, but maybe look into using queues to process records sequentially.

Counter of type RateOfCountsPerSecond32 always shows 0

I have a Windows service that serves messages from a virtual queue via a WCF service interface.
I wanted to expose two performance counters -
The number of items on the queue
The number of items removed from the queue per second
The first one works fine; the second one always shows 0 in PerfMon.exe, despite the RawValue appearing to be correct.
I'm creating the counters as such -
internal const string PERF_COUNTERS_CATEGORY = "HRG.Test.GDSSimulator";
internal const string PERF_COUNTER_ITEMSINQUEUE_COUNTER = "# Messages on queue";
internal const string PERF_COUNTER_PNR_PER_SECOND_COUNTER = "# Messages read / sec";
if (!PerformanceCounterCategory.Exists(PERF_COUNTERS_CATEGORY))
{
    System.Diagnostics.Trace.WriteLine("Creating performance counter category: " + PERF_COUNTERS_CATEGORY);
    CounterCreationDataCollection counters = new CounterCreationDataCollection();

    CounterCreationData numberOfMessagesCounter = new CounterCreationData();
    numberOfMessagesCounter.CounterHelp = "This counter provides the number of messages exist in each simulated queue";
    numberOfMessagesCounter.CounterName = PERF_COUNTER_ITEMSINQUEUE_COUNTER;
    numberOfMessagesCounter.CounterType = PerformanceCounterType.NumberOfItems32;
    counters.Add(numberOfMessagesCounter);

    CounterCreationData messagesPerSecondCounter = new CounterCreationData();
    messagesPerSecondCounter.CounterHelp = "This counter provides the number of messages read from the queue per second";
    messagesPerSecondCounter.CounterName = PERF_COUNTER_PNR_PER_SECOND_COUNTER;
    messagesPerSecondCounter.CounterType = PerformanceCounterType.RateOfCountsPerSecond32;
    counters.Add(messagesPerSecondCounter);

    PerformanceCounterCategory.Create(PERF_COUNTERS_CATEGORY, "HRG Queue Simulator performance counters",
        PerformanceCounterCategoryType.MultiInstance, counters);
}
Then, on each service call, I increment the relevant counter; for the per-second counter this currently looks like this -
messagesPerSecCounter = new PerformanceCounter();
messagesPerSecCounter.CategoryName = QueueSimulator.PERF_COUNTERS_CATEGORY;
messagesPerSecCounter.CounterName = QueueSimulator.PERF_COUNTER_PNR_PER_SECOND_COUNTER;
messagesPerSecCounter.MachineName = ".";
messagesPerSecCounter.InstanceName = this.ToString().ToLower();
messagesPerSecCounter.ReadOnly = false;
messagesPerSecCounter.Increment();
As mentioned - if I put a breakpoint after the call to Increment, I can see the RawValue constantly increasing, consistent with the calls to the service (fairly frequently - more than once a second, I would think).
But the performance counter itself stays on 0.
The performance counter providing the count of items on the 'queue', which is implemented in the same way (although I assign the RawValue rather than call Increment), works just fine.
What am I missing?
I also initially had problems with this counter. MSDN has a full working example that helped me a lot:
http://msdn.microsoft.com/en-us/library/4bcx21aa.aspx
As their example was fairly long-winded, I boiled it down to a single method to demonstrate the bare essentials. When run, I see the expected value of 10 counts per second in PerfMon.
public static void Test()
{
    var ccdc = new CounterCreationDataCollection();

    // add the counter
    const string counterName = "RateOfCountsPerSecond64Sample";
    var rateOfCounts64 = new CounterCreationData
    {
        CounterType = PerformanceCounterType.RateOfCountsPerSecond64,
        CounterName = counterName
    };
    ccdc.Add(rateOfCounts64);

    // ensure category exists
    const string categoryName = "RateOfCountsPerSecond64SampleCategory";
    if (PerformanceCounterCategory.Exists(categoryName))
    {
        PerformanceCounterCategory.Delete(categoryName);
    }
    PerformanceCounterCategory.Create(categoryName, "",
        PerformanceCounterCategoryType.SingleInstance, ccdc);

    // create the counter
    var pc = new PerformanceCounter(categoryName, counterName, false);

    // send some sample data - roughly ten counts per second
    while (true)
    {
        pc.IncrementBy(10);
        System.Threading.Thread.Sleep(1000);
    }
}
I hope this helps someone.
When you are working with average-type performance counters there are two components - a numerator and a denominator. Because you are working with an average, the counter is calculated as 'x instances per y instances'. In your case you are working out 'number of items' per 'number of seconds'. In other words, you need to count both how many items you take out of the queue and how many seconds it takes to remove them.
Average-type performance counters actually create two counters - a numerator component called {name} and a denominator component called {name}Base. If you go to the Performance Counter snap-in you can view all the categories and counters, and check the name of the Base counter there. When the queue processing starts, you should:
begin a stopwatch
remove item(s) from the queue
stop the stopwatch
increment the {name} counter by the number of items removed from the queue
increment the {name}Base counter by the number of ticks on the stopwatch
The counter is supposed to automatically know to divide the first counter by the second to give the average rate. Check CodeProject for a good example of how this works.
It's quite likely that you don't want this type of counter, though. These average counters are used to determine how many instances happen per second of operation, e.g. the average number of seconds it takes to complete an order or some other complex transaction or process. What you may want is the average number of instances in 'real' time, as opposed to processing time.
Consider: if you had 1 item in your queue and it took 1 ms to remove, that's a rate of 1000 items per second. But after one second you have only removed 1 item (because that's all there was), so you are processing 1 item per second in 'real' time. Similarly, if there are a million items in the queue but you've only processed one because your server is busy doing other work, do you want to see the theoretical 1000 items/second or the real 1 item/second?
If you want this 'real' figure, as opposed to the theoretical throughput figure, then this scenario isn't really suited to performance counters; instead you need to know a start time, an end time, and the number of items processed. It can't really be done with a simple 'counter'. Instead you would store a system startup time somewhere and calculate (number of items) / (now - startup time), as in the sketch below.
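A minimal sketch of that 'real' calculation (the names are illustrative):

static readonly DateTime StartupUtc = DateTime.UtcNow;
static long itemsProcessed;

// call this each time an item is removed from the queue
static void OnItemProcessed() => Interlocked.Increment(ref itemsProcessed);

// the 'real' average rate since startup
static double RealItemsPerSecond()
{
    double elapsed = (DateTime.UtcNow - StartupUtc).TotalSeconds;
    return elapsed > 0 ? Interlocked.Read(ref itemsProcessed) / elapsed : 0;
}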
I had the same problem. In my testing, I believe the issue was some combination of multi-instance and rate-of-counts-per-second: if I used a single-instance category or a number-of-items counter, it worked. Something about the combination of multi-instance and rate-per-second caused it to always be zero.
As the question "Performance counter of type RateOfCountsPerSecond64 has always the value 0" mentions, a reboot may do the trick. It worked for me, anyway.
Another thing that worked for me was initializing the counter in a block like this:
counter.BeginInit();
counter.RawValue = 0;
counter.EndInit();
I think you need some way to persist the counter. It appears to me that each time the service call is initiated, the counter is recreated.
So you could save the counter to a DB, a flat file, or perhaps even a session variable if you wanted it unique to a user.
