Download speed for Open Hardware Monitor - c#

I'm making some changes to Open Hardware Monitor: I want to add the network adapter's download and upload speed. But when I calculate the download speed, the result is wrong.
I can't use a timer to calculate the download speed because of OHM's auto-update, which calls my code at times I don't control.
In the source below you can see how I calculate the download speed (in Mb/s).
In the constructor of the class I do:
IPv4InterfaceStatistics interfaceStats = netInterfaces.GetIPv4Statistics();
bytesSent = interfaceStats.BytesSent;
bytesReceived = interfaceStats.BytesReceived;
stopWatch = new Stopwatch();
stopWatch.Start();
When the update method is called (at irregular intervals) I do this:
IPv4InterfaceStatistics interfaceStats = netInterfaces.GetIPv4Statistics();
stopWatch.Stop();
long time = stopWatch.ElapsedMilliseconds;
if (time != 0)
{
    long bytes = interfaceStats.BytesSent;
    long bytesCalc = ((bytes - bytesSent) * 8);
    usedDownloadSpeed.Value = ((bytesCalc / time) * 1000) / 1024;
    bytesSent = bytes;
}
I hope someone can spot my issue.

There were a few conversion issues with my previous code.
With the source below it now works.
Thanks to all for answering.
interfaceStats = netInterfaces.GetIPv4Statistics();
//Calculate download speed
downloadSpeed.Value = Convert.ToInt32(interfaceStats.BytesReceived - bytesPreviousReceived) / 1024F;
bytesPreviousReceived = interfaceStats.BytesReceived;
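Since the updates do not arrive at a fixed interval, a variation that normalizes by the measured elapsed time might look like the following sketch; it reuses the Stopwatch and the field names from the snippets above and is untested.
//Sketch: KB/s from the bytes received since the last update, normalized by elapsed time
interfaceStats = netInterfaces.GetIPv4Statistics();
long elapsedMs = stopWatch.ElapsedMilliseconds;
if (elapsedMs > 0)
{
    long deltaBytes = interfaceStats.BytesReceived - bytesPreviousReceived;
    downloadSpeed.Value = (deltaBytes * 1000F) / (elapsedMs * 1024F); //KB/s
    bytesPreviousReceived = interfaceStats.BytesReceived;
    stopWatch.Restart(); //measure the next interval from zero
}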

The following changes should help...
speed = netInterfaces.Speed / 1048576L;
If I recall correctly, the Speed property is a long, and integer division like this truncates any fractional part of the result. Which brings us to a similar set of changes in your other calculation...
usedDownloadSpeed.Value = ((bytesCalc / time) * 1000L)/1024L;
... assuming that usedDownloadSpeed.Value is also a long, so that you're not losing anything to truncation or implicit conversion in your results or intermediate calculations. If you want to be doubly sure the casting is right, use Convert.ToInt64().
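To illustrate the truncation (the numbers here are made up):
long bytes = 1500; //bytes transferred in the interval
long time = 400;   //elapsed milliseconds
Console.WriteLine(bytes / time);           //3    - integer division truncates
Console.WriteLine((bytes * 1000L) / time); //3750 - scale first, then divide
Console.WriteLine(bytes / (double)time);   //3.75 - or divide as floating point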

Related

SharpDX XAudio2: 6 SourceVoice limit

I have been playing around with SharpDX.XAudio2 for a few days now, and while things have been largely positive (the odd software quirk here and there), the following problem has me completely stuck:
I am working in C# .NET using VS2015.
I am trying to play multiple sounds simultaneously.
To do this, I have made:
- Test.cs: Contains main method
- cSoundEngine.cs: Holds XAudio2, MasteringVoice, and sound management methods.
- VoiceChannel.cs: Holds a SourceVoice, and in future any sfx/ related data.
cSoundEngine:
List<VoiceChannel> sourceVoices;
XAudio2 engine;
MasteringVoice master;

public cSoundEngine()
{
    engine = new XAudio2();
    master = new MasteringVoice(engine);
    sourceVoices = new List<VoiceChannel>();
}
public VoiceChannel AddAndPlaySFX(string filepath, double vol, float pan)
{
    /**
     * Set up and start SourceVoice
     */
    NativeFileStream fileStream = new NativeFileStream(filepath, NativeFileMode.Open, NativeFileAccess.Read);
    SoundStream soundStream = new SoundStream(fileStream);
    SourceVoice source = new SourceVoice(engine, soundStream.Format);
    AudioBuffer audioBuffer = new AudioBuffer()
    {
        Stream = soundStream.ToDataStream(),
        AudioBytes = (int)soundStream.Length,
        Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
    };

    //Make voice wrapper
    VoiceChannel voice = new VoiceChannel(source);
    sourceVoices.Add(voice);

    //Volume
    source.SetVolume((float)vol);

    //Play sound
    source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
    source.Start();

    return voice;
}
Test.cs:
cSoundEngine engine = new cSoundEngine();
int total = 6;
for (int i = 0; i < total; i++)
{
    string filepath = System.IO.Directory.GetParent(System.IO.Directory.GetCurrentDirectory()).Parent.FullName + @"\Assets\Planet.wav";
    VoiceChannel sfx = engine.AddAndPlaySFX(filepath, 0.1, 0);
}
Console.Read(); //Input anything to end play.
There is currently nothing worth showing in VoiceChannel.cs - it holds 'SourceVoice source' which is the one parameter sent in the constructor!
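For reference, a minimal sketch of what that wrapper currently looks like, based on the description above (the extra constructor parameters used later in this post are omitted):
//Sketch: VoiceChannel only wraps the SourceVoice for now.
class VoiceChannel
{
    public SourceVoice Source { get; private set; }

    public VoiceChannel(SourceVoice source)
    {
        Source = source;
    }
}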
Everything runs fine with up to 5 sounds (total = 5). All you hear is the blissful drone of Planet.wav. Any higher than 5, however, causes the console to freeze for ~5 seconds and then close (likely a native error the debugger can't handle). Sadly there is no error message for us to look at.
From testing:
- Will not crash as long as you do not have more than 5 running SourceVoices.
- Changing the sample rate does not seem to help.
- Setting inputChannels for the master object to a different number makes no difference.
- MasteringVoice seems to say the max number of input voices is 64.
- Making each sfx play from a different wav file makes no difference.
- Setting the volume for the SourceVoices and/or master makes no difference.
From the XAudio2 API Documentation I found this quote: 'XAudio2 removes the 6-channel limit on multichannel sounds, and supports multichannel audio on any multichannel-capable audio card. The card does not need to be hardware-accelerated.'. This is the closest I have come to finding something that mentions this problem.
I am not well experienced with programming sfx and a lot of this is very new to me, so feel free to call me an idiot where appropriate but please try and explain things in layman terms.
Please, if you have any ideas or answers they would be greatly appreciated!
-Josh
As Chuck has suggested, I have created a data bank which holds the .wav data, and I just reference that single data store from each buffer. This has raised the sound limit to 20 - however it has not fixed the problem as a whole, likely because I have not implemented it properly.
Implementation:
class SoundDataBank
{
    /**
     * Holds a single byte array for each sound
     */
    Dictionary<eSFX, Byte[]> bank;
    string curdir => Directory.GetParent(Directory.GetCurrentDirectory()).Parent.FullName;

    public SoundDataBank()
    {
        bank = new Dictionary<eSFX, byte[]>();
        bank.Add(eSFX.planet, NativeFile.ReadAllBytes(curdir + @"\Assets\Planet.wav"));
        bank.Add(eSFX.base1, NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav"));
    }

    public Byte[] GetSoundData(eSFX sfx)
    {
        byte[] output = bank[sfx];
        return output;
    }
}
In SoundEngine we create a SoundBank object (initialised in SoundEngine constructor):
SoundDataBank soundBank;

public VoiceChannel AddAndPlaySFXFromStore(eSFX sfx, double vol)
{
    /**
     * sourcevoice will be automatically added to MasteringVoice and engine in the constructor.
     */
    byte[] buffer = soundBank.GetSoundData(sfx);
    MemoryStream memoryStream = new MemoryStream(buffer);
    SoundStream soundStream = new SoundStream(memoryStream);
    SourceVoice source = new SourceVoice(engine, soundStream.Format);
    AudioBuffer audioBuffer = new AudioBuffer()
    {
        Stream = soundStream.ToDataStream(),
        AudioBytes = (int)soundStream.Length,
        Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
    };

    //Make voice wrapper
    VoiceChannel voice = new VoiceChannel(source, engine, MakeOutputMatrix());

    //Volume
    source.SetVolume((float)vol);

    //Play sound
    source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
    source.Start();

    sourceVoices.Add(voice);
    return voice;
}
Following this implementation I can now play up to 20 sound effects - but NOT because we are playing from the sound bank. In fact, even running the old method for sound effects now gets up to 20 sfx instances.
The limit has improved to 20 because we now call NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav") in the constructor of the SoundBank.
I suspect NativeFile keeps a store of loaded file data, so regardless of whether you run the original SoundEngine.AddAndPlaySFX() or SoundEngine.AddAndPlaySFXFromStore(), both end up playing from memory?
Either way, this has quadrupled the limit from before, so it has been incredibly useful - but it requires further work.

How to make geo location retrieval process faster in UWP?

I am using the Geolocator class to find the current position of the device in a UWP app. The location retrieval works very fast on my computer, but when I run the same app on a real device, retrieving the position takes around 30 seconds.
I'm using the following code snippet:
var accessStatus = await Geolocator.RequestAccessAsync();
if (accessStatus == GeolocationAccessStatus.Allowed)
{
    Geolocator geolocator = new Geolocator
    {
        DesiredAccuracyInMeters = 500,
        DesiredAccuracy = PositionAccuracy.High
    };
    Geoposition pos = await geolocator.GetGeopositionAsync();
}
How can I make this process faster on my devices?
I already tried increasing the DesiredAccuracyInMeters value up to 2000 but couldn't see any improvement. Thanks in advance.
If you check the documentation, you can see that when you set both DesiredAccuracy and DesiredAccuracyInMeters, the one set last takes precedence:
When neither DesiredAccuracyInMeters nor DesiredAccuracy are set, your app will use an accuracy setting of 500 meters (which corresponds to the DesiredAccuracy setting of Default). Setting DesiredAccuracy to Default or High indirectly sets DesiredAccuracyInMeters to 500 or 10 meters, respectively. When your app sets both DesiredAccuracy and DesiredAccuracyInMeters, your app will use whichever accuracy value was set last.
So because you set DesiredAccuracy to High last, you are effectively overriding the meters setting. To make the search faster, do not set High accuracy; set only the meters value.
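In other words, the initialization from the question would become something like this sketch (keeping the 500 meter value):
Geolocator geolocator = new Geolocator
{
    //Set only the meter-based accuracy; leaving DesiredAccuracy alone
    //avoids overriding this with the 10 meter "High" setting.
    DesiredAccuracyInMeters = 500
};
Geoposition pos = await geolocator.GetGeopositionAsync();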
I will add to Martin's answer: you should first use the cached position and then call GetPositionAsync; you should get a faster localization of the user this way:
var locator = CrossGeolocator.Current;
locator.DesiredAccuracy = 500;

//Check if we have a cached position
var loc = await locator.GetLastKnownLocationAsync();
if (loc != null)
{
    CurrentPosition = new Position(loc.Latitude, loc.Longitude);
}

if (!locator.IsGeolocationAvailable || !locator.IsGeolocationEnabled)
{
    return;
}

//and if not we get a new one
var def = await locator.GetPositionAsync(TimeSpan.FromSeconds(10), null, true);
CurrentPosition = new Position(def.Latitude, def.Longitude);
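Note that CrossGeolocator.Current comes from the Geolocator plugin rather than from the UWP API directly. If you stay with Windows.Devices.Geolocation, a similar cached-first approach is possible, since GetGeopositionAsync has an overload that takes a maximum age and a timeout; roughly (the values here are only illustrative):
Geolocator geolocator = new Geolocator { DesiredAccuracyInMeters = 500 };

//Accept a position cached within the last 5 minutes, and give up after
//10 seconds instead of waiting indefinitely for a fresh fix.
Geoposition pos = await geolocator.GetGeopositionAsync(
    TimeSpan.FromMinutes(5), TimeSpan.FromSeconds(10));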

MonoTorrent peer download speed not updating

Hello all,
I'm having the following problem:
When I try to get the download speed for every peer in a torrent with the MonoTorrent library, it just returns zeroes. I get the download speed for every peer like this:
foreach (PeerId p in manager.GetPeers())
{
    nTorrentPeerStatus pStatus = new nTorrentPeerStatus();
    pStatus.Url = p.Peer.ConnectionUri.ToString();
    pStatus.DownloadSpeed = Math.Round(p.Monitor.DownloadSpeed / 1024.0, 2);
    pStatus.UploadSpeed = Math.Round(p.Monitor.UploadSpeed / 1024.0, 2);
    pStatus.RequestingPieces = p.AmRequestingPiecesCount;
    s.PeerStatuses.Add(pStatus);
}
This always returns zeroes for both the download and upload speed. But when I place a breakpoint on one of these lines and wait a few seconds before continuing, the values are non-zero. Does anyone have any idea why it works when I pause at a breakpoint, but not when I just read all the download and upload speeds at once?

Writing to MIDI file in C#

I have been trying to find a way to write to a MIDI file using the C# MIDI Toolkit. However, I am constantly running into problems with time synchronization. The resulting MIDI file is always off beat. More precisely, it has correct rhythm in relation to itself, but when imported into a sequencer, it does not seem to contain any tempo information (which is understandable, since I never specify it from within my program). There is no documentation on how to do this.
I am using the following code to insert notes into a track.
public const int NOTE_LENGTH = 32;

private static void InsertNote(Track t, int pitch, int velocity, int position, int duration, int channel)
{
    ChannelMessageBuilder builder = new ChannelMessageBuilder();
    builder.Command = ChannelCommand.NoteOn;
    builder.Data1 = pitch;
    builder.Data2 = velocity;
    builder.MidiChannel = channel;
    builder.Build();
    t.Insert(position * NOTE_LENGTH, builder.Result);

    builder.Command = ChannelCommand.NoteOff;
    builder.Build();
    t.Insert((position + duration) * NOTE_LENGTH, builder.Result);
}
I am sure the notes themselves are okay, since the resulting output is audible, but has no tempo information. How do I enter tempo information into the Sequence object that contains my tracks?
Stumbled upon an answer by brute-force trying: NOTE_LENGTH should be evenly divisible by 3.
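For the tempo part of the question: if I remember the C# MIDI Toolkit correctly, tempo is written into a track as a meta message built with TempoChangeBuilder; treat the following as an untested sketch.
//Sketch: insert a tempo meta message at the start of a track ("track" here is
//assumed to be the first Track of your Sequence).
//Tempo is in microseconds per quarter note; 500000 corresponds to 120 BPM.
TempoChangeBuilder tempoBuilder = new TempoChangeBuilder();
tempoBuilder.Tempo = 500000;
tempoBuilder.Build();
track.Insert(0, tempoBuilder.Result);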

Audio generation software or .NET library

I need to be able to play certain tones in a C# application. I don't care whether it generates them on the fly or plays them from a file, but I need SOME way to generate tones that have not only variable volume and frequency but also variable timbre. It would be especially helpful if whatever I use to generate these tones had many timbre presets, and it would be even better if they didn't all sound MIDI-ish (meaning some of them sounded like they might be recordings of actual instruments).
Any suggestions?
You might like to take a look at my question, Creating sine or square wave in C#.
Using NAudio in particular was a great choice.
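If you go the NAudio route, a basic tone generator can be as small as this sketch (assuming a NAudio version that includes SignalGenerator):
using System;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

//Sketch: play a 440 Hz sine tone for two seconds at 20% gain.
var sine = new SignalGenerator()
{
    Gain = 0.2,
    Frequency = 440,
    Type = SignalGeneratorType.Sin
}.Take(TimeSpan.FromSeconds(2));

using (var output = new WaveOutEvent())
{
    output.Init(sine);
    output.Play();
    while (output.PlaybackState == PlaybackState.Playing)
    {
        System.Threading.Thread.Sleep(100);
    }
}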
This article helped me with something similar:
http://social.msdn.microsoft.com/Forums/vstudio/en-US/18fe83f0-5658-4bcf-bafc-2e02e187eb80/beep-beep
The part in particular is the Beep Class:
public class Beep
{
    public static void BeepBeep(int Amplitude, int Frequency, int Duration)
    {
        //Amplitude is in tenths of a percent of full scale (1000 = maximum 16-bit amplitude).
        double A = ((Amplitude * (System.Math.Pow(2, 15))) / 1000) - 1;
        //Phase increment per sample for the requested frequency at 44.1 kHz.
        double DeltaFT = 2 * Math.PI * Frequency / 44100.0;

        int Samples = 441 * Duration / 10; //Duration is in milliseconds, 44100 samples per second
        int Bytes = Samples * 4;           //16-bit stereo: 4 bytes per sample frame

        //Minimal 44-byte RIFF/WAVE header written as little-endian ints:
        //"RIFF", chunk size, "WAVE", "fmt ", fmt length, format tag + channel count,
        //sample rate, byte rate, block align + bits per sample, "data", data length.
        int[] Hdr = { 0X46464952, 36 + Bytes, 0X45564157, 0X20746D66, 16, 0X20001, 44100, 176400, 0X100004, 0X61746164, Bytes };

        using (MemoryStream MS = new MemoryStream(44 + Bytes))
        {
            using (BinaryWriter BW = new BinaryWriter(MS))
            {
                for (int I = 0; I < Hdr.Length; I++)
                {
                    BW.Write(Hdr[I]);
                }
                for (int T = 0; T < Samples; T++)
                {
                    //Same sine sample written twice: left and right channel.
                    short Sample = System.Convert.ToInt16(A * Math.Sin(DeltaFT * T));
                    BW.Write(Sample);
                    BW.Write(Sample);
                }
                BW.Flush();
                MS.Seek(0, SeekOrigin.Begin);
                using (SoundPlayer SP = new SoundPlayer(MS))
                {
                    SP.PlaySync();
                }
            }
        }
    }
}
It can be used as follows
Beep.BeepBeep(100, 1000, 1000); /* 10% volume */
There's a popular article on CodeProject along these lines:
http://www.codeproject.com/KB/audio-video/CS_ToneGenerator.aspx
You might also check out this thread:
http://episteme.arstechnica.com/eve/forums/a/tpc/f/6330927813/m/197000149731
In order for your generated tones not to sound 'midi-ish', you'll have to use real-life samples and play them back. Try to find a good real-instrument sample bank, like http://www.sampleswap.org/filebrowser-new.php?d=INSTRUMENTS+single+samples%2F
Then, when you want to compose a melody from them, just alter the playback frequency relative to the original sample frequency.
Please drop me a line if you find this answer useful.
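As a rough illustration of altering the playback frequency of a sample, here is a sketch using NAudio (mentioned earlier in this thread); "sample.wav" is a placeholder path, the file is assumed to be plain PCM, and the 1.5x ratio is arbitrary:
using System.IO;
using NAudio.Wave;

//Sketch: crude pitch shift by re-declaring the sample rate of the raw PCM data.
byte[] pcm;
WaveFormat original;
using (var reader = new WaveFileReader("sample.wav"))
{
    original = reader.WaveFormat;
    using (var ms = new MemoryStream())
    {
        reader.CopyTo(ms);
        pcm = ms.ToArray();
    }
}

//Playing the same data at 1.5x the original rate raises the pitch
//(and shortens the sound accordingly).
var shifted = new WaveFormat((int)(original.SampleRate * 1.5), original.BitsPerSample, original.Channels);
using (var raw = new RawSourceWaveStream(new MemoryStream(pcm), shifted))
using (var output = new WaveOutEvent())
{
    output.Init(raw);
    output.Play();
    while (output.PlaybackState == PlaybackState.Playing)
    {
        System.Threading.Thread.Sleep(100);
    }
}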
