According to documentation, NetworkInterface.GetIPv4Statistics().BytesReceived returns the number of bytes that were received on the interface.
The question is bytes received since when?
I've found nothing so far on the internet and on the official MSDN page.
The BytesReceived getter returns the inOctets value of the MibIfRow2 struct which maps to the MIB_IF_ROW2 WinAPI structure.
The values there are also used for SNMP queries, and from https://stackoverflow.com/a/8760781 we learn that those values are simply updated by adding each new reading to the previous inOctets value. The inOctets value will overflow its maximum value without error and wrap around to 0.
With this knowledge, there is no well-defined "when": the counter just accumulates and wraps around.
If you need the bytes received over a certain period, it is up to you to query the value at the start of your desired time period and then query it again later. Subtract the first value from the last value (checking that the counter hasn't overflowed/reset in the meantime) to get the number of bytes received in your time frame.
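To make the query-twice-and-subtract approach concrete, here is a minimal C# sketch. The wrap width used in `DeltaWithWrap` is an assumption (the classic SNMP counters are 32-bit; on modern systems they are often 64-bit and wrap so rarely it never matters), and the one-second window is just for illustration:

```csharp
using System;
using System.Linq;
using System.Net.NetworkInformation;

class TrafficSample
{
    // last - first, allowing for a single wrap-around of the counter.
    // wrapAt is an assumption: uint.MaxValue matches a 32-bit counter.
    public static long DeltaWithWrap(long first, long last, long wrapAt = uint.MaxValue)
    {
        return last >= first ? last - first : (wrapAt - first) + last + 1;
    }

    static void Main()
    {
        // Pick the first operational interface and sample BytesReceived twice.
        var nic = NetworkInterface.GetAllNetworkInterfaces()
            .First(n => n.OperationalStatus == OperationalStatus.Up);

        long first = nic.GetIPv4Statistics().BytesReceived;
        System.Threading.Thread.Sleep(1000);   // the measurement window
        long last = nic.GetIPv4Statistics().BytesReceived;

        Console.WriteLine($"Bytes received in ~1 s: {DeltaWithWrap(first, last)}");
    }
}
```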
Related
I am using the "PerformanceCounter" class in C# to read the two counters below, "Available Bytes" and "% Committed Bytes In Use", under the "Memory" category.
PerformanceCounter pc = new PerformanceCounter("Memory", "Available Bytes", true);
PerformanceCounter pc1 = new PerformanceCounter("Memory", "% Committed Bytes In Use", true);
var a = pc.RawValue;
var b = pc1.NextValue();
The issue I'm seeing here is that "RawValue" is used for the "Available Bytes" counter, whereas "NextValue()" is used for the "% Committed Bytes In Use" counter.
Is there any uniform method to calculate both or all counters?
It only varies per category because different categories contain different counter types. The PerformanceCounter.CounterType property defines what type of data the counter holds, and therefore how the data is calculated. It doesn't make sense for a counter that measures a difference over time to store that difference in the raw value, because the difference could span different time periods for different clients doing the measurement. See the Performance Counter Type Enumeration for more info on the different types.

If you really want to get into the details of how each type works, you have to resort to the Win32 documentation on which all of this is based. There used to be a single page with all of this, but I'm having trouble finding it at the moment. The closest I can find is here: https://technet.microsoft.com/en-us/library/cc960029.aspx.

Some performance counter types use a main counter and a "base" counter, and then use a formula based on the current and previous raw values of each (and possibly the system time as well) to compute NextValue(). RawValue may appear to be invalid for certain counter types because it just doesn't make sense to interpret it the same way as the computed value. For example, IIRC for % CPU used for the process, the raw value is the number of CPU ticks used since the program started, which, if interpreted as a percentage, is nonsense. It's only meaningful when compared to previous values and the elapsed time (from which you can also infer the maximum possible change).
Using RawValue makes sense for some counters, not for others. However, NextValue() often can't return a meaningful value the first time you call it, because when it's computed as a difference between samples, there is no previous sample to compare against. You can just ignore that, or you can set up your code to call it once during startup so that subsequent calls get real values.

Keep in mind that NextValue() is expected to be called on a timer. For example, if you're calling it on the Network Bytes Sent counter, it will return the number of bytes sent between the previous call and this one. So if you call NextValue() 2 seconds after the initial call and then again after 2 minutes, you're going to get very different values even if the network transfer is steady, because the call after 2 seconds returns the number of bytes transferred in 2 seconds, and the call after 2 minutes returns the number of bytes transferred in 2 minutes.
So, in short, you can use NextValue() for all counter types, but you must throw away or ignore the first value returned, and you must call NextValue() on a fixed interval for the results to make sense (just like the interactive Windows Performance Monitor program does).
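The discard-the-first-value, fixed-interval pattern described above can be sketched as follows. This assumes Windows (the PerformanceCounter class is Windows-only on modern .NET) and the standard "Processor" / "% Processor Time" / "_Total" counter, chosen here purely as an example of a rate-type counter:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class CpuSampler
{
    static void Main()
    {
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");

        cpu.NextValue();            // first call has no previous sample; discard it
        for (int i = 0; i < 5; i++)
        {
            Thread.Sleep(1000);     // fixed interval, like perfmon's refresh rate
            Console.WriteLine($"CPU: {cpu.NextValue():F1} %");
        }
    }
}
```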
In my experience, and per the MSDN documentation, it varies per performance counter category, and then again by the specific counter, such as "Available Bytes" or "% Committed Bytes In Use" in your case.
What might be what you are looking for is NextSample().
Perf Counter
Property: RawValue
Gets or sets the raw, or uncalculated, value of this counter.
^ meaning that it's not necessarily up to the developer who created it.
Method: NextValue()
Obtains a counter sample and returns the calculated value for it.
^ meaning that it's up to the developer who created it.
Method: NextSample()
Obtains a counter sample, and returns the raw, or uncalculated, value for it.
Also, something that was explained to me long ago (so take it with a grain of salt): the concept of RawValue is not always valid.
RawValues are used to create samples. NextSample() returns samples, which are averages of RawValues over time and therefore much more realistic. NextValue() is a cleaned-up sample, converted into a percentage or from bytes to kilobytes (depending on the context of the value and the developer's implementation).
So, in my humble opinion, even if the info is over 10 years old, abandon usage of RawValue and use NextSample() in its place, should you need a realistic/accurate value.
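If you do go the NextSample() route, a hedged sketch of how the pieces fit together: take two raw samples and let CounterSample.Calculate apply the formula for the counter's type, which is essentially what NextValue() does internally between successive calls. This is Windows-only, and the "% Committed Bytes In Use" counter is taken from the question:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class SampleCalc
{
    static void Main()
    {
        var pc = new PerformanceCounter("Memory", "% Committed Bytes In Use", true);

        CounterSample s1 = pc.NextSample();
        Thread.Sleep(1000);                 // sampling interval
        CounterSample s2 = pc.NextSample();

        // Applies the formula defined by pc.CounterType to the two raw samples.
        float value = CounterSample.Calculate(s1, s2);
        Console.WriteLine($"{pc.CounterType}: {value}");
    }
}
```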
I've got a Task that counts the number of packets it receives from some source.
Every 250 ms a timer fires, reads the count, and outputs it to the user. Right after that, I need to set the count back to 0.
My concern is that between reading and displaying the count, but BEFORE I set count = 0, count gets incremented in the other thread, so I end up losing counts by zeroing it out.
I am new to threading, so I have been looking at multiple options.
I looked into using Interlocked, but as far as I know it only gives me arithmetic operations; I don't see an option to actually set the variable to a value.
I was also looking into ReaderWriterLockSlim. What I need is the most efficient / lowest-overhead way to accomplish this, since there is a lot of data coming across.
You want Exchange:
int currentCount = System.Threading.Interlocked.Exchange(ref count, 0);
As per the docs:
Sets a 32-bit signed integer to a specified value and returns the original value, as an atomic operation.
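A minimal sketch of the read-and-reset pattern. The packet source is simulated with a plain loop here; in the real code, OnPacket would be called from the receiving thread and TakeAndReset from the 250 ms timer callback:

```csharp
using System;
using System.Threading;

class PacketCounter
{
    static int count;

    // Called from the receiving thread for every packet.
    public static void OnPacket() => Interlocked.Increment(ref count);

    // Called from the timer: atomically reads the current count and
    // resets it to zero, so no increment can slip in between the two steps.
    public static int TakeAndReset() => Interlocked.Exchange(ref count, 0);

    static void Main()
    {
        for (int i = 0; i < 1000; i++) OnPacket();
        Console.WriteLine(TakeAndReset()); // 1000
        Console.WriteLine(TakeAndReset()); // 0
    }
}
```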
[Background]
I am using the System.Diagnostics.Process class under .NET to track
the performance of a process.
I am taking 30 samples at an interval of 1 sec
Value of the peak working set populated as:
long peakWorkingSet = requiredProcess.PeakWorkingSet64; where
Process requiredProcess = Process.GetProcessesByName(processName).First();
Value of the working set populated with the same process instance as:
long workingSet = requiredProcess.WorkingSet64;
[Query]
I expect PeakWorkingSet64 to be the peak value of the memory represented by WorkingSet64 (please correct me if I am mistaken here).
But for some reason I see the value of PeakWorkingSet64 at 80K when the sample data suggests WorkingSet64 never reached that value; it fluctuated around 50K.
Any input is appreciated. Please help me understand.
Not sure what to say about that (or why I in particular should be able to divine the cause).
PeakWorkingSet64 indeed holds, as you expect, the peak value of the entire history of WorkingSet64 as specified in the MSDN docs:
The maximum amount of physical memory, in bytes, allocated for the associated process since it was started.
Note that this means "from since the process was spawned", which e.g. includes the period of time during which the runtime initializes.
Now, you have tried to measure the memory consumption by taking a small number (30) of discrete samples, in an interval of one second each. This is not a very reliable way of measuring. All you know from those samples is what the working set looked like at the precise time you looked at it, not what it looked like a moment earlier or a moment later.
The working set might incidentally always be around 50 KiB whenever you look at it once per second, but might be 80 KiB (or any other value) at some moment in between the samples. Working sets are not constant; they change all the time.
Further, and more likely, the working set might be much larger during startup (that is, before your code is even executed!). Thus, the peak value would naturally be higher, but you never get to "measure" such a high value with your samples, even if you do a million samples.
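A self-contained sketch of the sampling loop. It uses the current process rather than Process.GetProcessesByName so it runs anywhere; note the Process.Refresh() call, without which repeated reads of these properties return the values cached at the first read:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class MemorySampler
{
    static void Main()
    {
        var p = Process.GetCurrentProcess();

        for (int i = 0; i < 5; i++)
        {
            p.Refresh();   // discard cached values before each sample
            Console.WriteLine(
                $"working set: {p.WorkingSet64,12:N0}  peak: {p.PeakWorkingSet64,12:N0}");
            Thread.Sleep(1000);
        }
        // The peak covers the whole process lifetime, including startup,
        // so it can legitimately exceed every sampled WorkingSet64 value.
    }
}
```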
I'm writing some code using PacketDotNet and SharpPCap to parse H.225 packets for a VOIP phone system. I've been using Wireshark to look at the structure, but I'm stuck. I've been using This as a reference.
Most of the H.225 packets I see are user information type with an empty message body and the actual information apparently shows up as a list of NonStandardControls in Wireshark. I thought I'd just extract out these controls and parse them later, but I don't really know where they start.
In almost all cases, the items start at the 10th byte of the H.225 data. Each item appears to begin with the length which is recorded as 2 bytes. However, I am getting a packet that has items starting at the 11th byte.
The only difference I see in this packet is something in the message body supposedly called open type length which has a value of 1, whereas the rest all appear to be 0. Would the items start at 10 + open type length? Is there some document that explains what this open type length is for?
Thanks.
H.225 doesn't use a fixed-length encoding; it uses ASN.1 PER encoding (not BER).
You probably won't find a C# library. OPAL is adding a C API if you are able to use that.
I need to count the amount (in B/kB/MB/whatever) of data sent and received by my PC, by every running program/process.
Let's say I click "Start counting" and I get the sum of everything sent/received by my browser, FTP client, system updates etc. from that moment until I choose "Stop".
To make it simpler, I want to count data transferred via TCP only - if it matters.
For now, I've got the combo list of NICs in the PC (based on the comment in the link below).
I tried to change the code given here but I failed, getting strange out-of-nowhere values in dataSent/dataReceived.
I also read the answer at the question 442409 but as I can see it is about the data sent/received by the same program, which doesn't fit my requirements.
Perfmon should have counters for this type of thing that you want to do, so look there first.
Alright, I think I've found the solution, but maybe someone will suggest something better...
I made a timer (tested with a 10 ms interval) which reads the "Bytes Received/sec" PerformanceCounter value, adds it to a global "temporary" variable, and also increments a sample counter (in case there is any lag). Then I made a second timer with a 1 s interval, which takes the temporary sum, divides it by the sample counter, and adds the result to the overall amount (also global). Then it resets the temporary sum and the counter.
I'm just not sure if this is the right method, because I don't know how the "Bytes Received/sec" PerformanceCounter value varies within one second. Maybe I should build some kind of histogram and take the average value?
For now, downloading an 8.6 MB file gave me 9.2 MB overall. Is it possible the other processes generated that amount of network activity in less than 20 seconds?
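The averaging step of the two-timer idea can be isolated into a small pure function, which also makes it easy to sanity-check. This is a sketch rather than the exact code from the question; BytesForWindow and the sample values are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class RateAverager
{
    // Averages the per-second rate samples collected by the fast (10 ms)
    // timer, then converts the average rate into a byte count for the
    // slow timer's window (1 s by default).
    public static double BytesForWindow(IReadOnlyList<float> samples, double windowSeconds = 1.0)
    {
        if (samples.Count == 0) return 0;
        return samples.Average() * windowSeconds;
    }

    static void Main()
    {
        // Hypothetical "Bytes Received/sec" readings taken within one second.
        var samples = new List<float> { 1000f, 1200f, 800f, 1000f };
        Console.WriteLine(BytesForWindow(samples)); // 1000
    }
}
```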