MonoTorrent PeerMonitor download speed not updating - C#

Hello all,
I'm having the following problem:
When I try to get the download speed for every peer in a torrent with the MonoTorrent library, it just returns zeroes. I get the download speed for every peer like this:
foreach (PeerId p in manager.GetPeers())
{
    nTorrentPeerStatus pStatus = new nTorrentPeerStatus();
    pStatus.Url = p.Peer.ConnectionUri.ToString();
    pStatus.DownloadSpeed = Math.Round(p.Monitor.DownloadSpeed / 1024.0, 2);
    pStatus.UploadSpeed = Math.Round(p.Monitor.UploadSpeed / 1024.0, 2);
    pStatus.RequestingPieces = p.AmRequestingPiecesCount;
    s.PeerStatuses.Add(pStatus);
}
This always returns zero for both the download and upload speed. But when I place a breakpoint on one of these lines, they return something other than zero. Does anyone have any idea why it works when I place a breakpoint and wait a few seconds before continuing, but not when I just read all the download and upload speeds at once?
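The breakpoint behaviour suggests the per-peer monitors report an average transfer rate and need some wall-clock time between samples before they have anything to average. A minimal sketch that polls on an interval instead of reading immediately - it reuses the manager, s, and nTorrentPeerStatus types from the question, assumes it runs inside an async method, and the two-second delay is an arbitrary choice:
// Poll repeatedly, giving the rate monitors time to accumulate samples
// between reads instead of reading once right after the peers connect.
while (manager.State == TorrentState.Downloading)
{
    s.PeerStatuses.Clear();
    foreach (PeerId p in manager.GetPeers())
    {
        nTorrentPeerStatus pStatus = new nTorrentPeerStatus();
        pStatus.Url = p.Peer.ConnectionUri.ToString();
        pStatus.DownloadSpeed = Math.Round(p.Monitor.DownloadSpeed / 1024.0, 2);
        pStatus.UploadSpeed = Math.Round(p.Monitor.UploadSpeed / 1024.0, 2);
        pStatus.RequestingPieces = p.AmRequestingPiecesCount;
        s.PeerStatuses.Add(pStatus);
    }
    await Task.Delay(TimeSpan.FromSeconds(2)); // let the monitors update
}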

Related

Does UnityWebRequest.uploadProgress have side effects?

I have the following upload code using Unity's UnityWebRequest API (Unity 2019.2.13f1):
public IEnumerator UploadJobFile(string jobId, string path)
{
    if (!File.Exists(path))
    {
        Debug.LogError("The given file to upload does not exist. Please re-create the recording and try again.");
        yield break;
    }

    UnityWebRequest upload = new UnityWebRequest(hostURL + "/jobs/upload/" + jobId);
    upload.uploadHandler = new UploadHandlerFile(path);
    upload.downloadHandler = new DownloadHandlerBuffer();
    upload.method = UnityWebRequest.kHttpVerbPOST;
    upload.SetRequestHeader("filename", Path.GetFileName(path));

    UnityWebRequestAsyncOperation op = upload.SendWebRequest();
    while (!upload.isDone)
    {
        //Debug.Log("Uploading file...");
        Debug.Log("Uploading file. Progress " + (int)(upload.uploadProgress * 100f) + "%"); // <-----------------
        yield return null;
    }

    if (upload.isNetworkError || upload.isHttpError)
    {
        Debug.LogError("Upload error:\n" + upload.error);
    }
    else
    {
        Debug.Log("Upload success");
    }

    // this is needed to clear resources on the file
    upload.Dispose();
}
string hostURL = "http://localhost:8080";
string jobId = "manualUploadTest";
string path = "E:/Videos/short.mp4";

void Update()
{
    if (Input.GetKeyDown(KeyCode.O))
    {
        Debug.Log("O key was pressed.");
        StartCoroutine(UploadJobFile(jobId, path));
    }
}
And the files I receive on the server side arrive broken, especially if they are larger (30 MB or more). They are missing bytes at the end and sometimes have entire byte blocks duplicated in the middle.
This happens both when testing client and server on the same machine and when running them on different machines.
The server does not complain - from its perspective, no transport errors happened.
I noticed that if I comment out the access to upload.uploadProgress (and e.g. instead use the commented-out debug line above it, which just prints a string literal), the files stay intact. Ditching the while loop altogether and replacing it with yield return op also works.
I tested this strange behavior repeatedly in an outer loop - usually after at most 8 repetitions with the "faulty" code, the file appears broken. If I use the "correct" variant, 100 uploads (update: 500) in a row were successful.
Does upload.uploadProgress have side effects? For what it's worth, the same happens if I print op.progress instead - the files are also broken.
This sounds like a real bug. uploadProgress obviously should not have side effects.
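Until the underlying bug is fixed, the workaround the question itself identified - yielding on the async operation instead of polling uploadProgress every frame - looks like the safest route. A minimal sketch of the changed middle section, using the same upload and op variables as above:
// Workaround sketch: do not touch upload.uploadProgress while the
// transfer is in flight; just wait for the operation to complete.
UnityWebRequestAsyncOperation op = upload.SendWebRequest();
yield return op; // resumes when the request is done, no per-frame polling

if (upload.isNetworkError || upload.isHttpError)
{
    Debug.LogError("Upload error:\n" + upload.error);
}
else
{
    Debug.Log("Upload success");
}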

How to read and write more than 25000 records/lines into a text file at a time?

I am connecting my application to a stock market live data provider using a web socket. When the market is live and the socket is open, it gives me nearly 45000 lines a minute. I deserialize it line by line, write each line into a text file, and also read the text file and remove its first line. Because of this, handling the other work on the socket becomes slow. Can you please help me make this process fast enough to handle nearly 25000 lines a minute?
string filePath = @"D:\Aggregate_Minute_AAPL.txt";
var records = (from line in File.ReadLines(filePath).AsParallel()
               select line);
List<string> str = records.ToList();
str.ForEach(x =>
{
    string result = x.TrimStart('[').TrimEnd(']');
    var jsonString = Newtonsoft.Json.JsonConvert.DeserializeObject<List<LiveAMData>>(result);
    foreach (var item in jsonString)
    {
        string value = "";
        string dirPath = @"D:\COMB1\MinuteAggregates";
        string[] fileNames = null;
        fileNames = System.IO.Directory.GetFiles(dirPath, item.sym + "_*.txt", System.IO.SearchOption.AllDirectories);
        if (fileNames.Length > 0)
        {
            string _fileName = fileNames[0];
            var lineList = System.IO.File.ReadAllLines(_fileName).ToList();
            lineList.RemoveAt(0);
            var _item = lineList[lineList.Count - 1];
            if (!_item.Contains(item.sym))
            {
                lineList.RemoveAt(lineList.Count - 1);
            }
            System.IO.File.WriteAllLines(_fileName, lineList.ToArray());
            value = $"{item.sym},{item.s},{item.o},{item.h},{item.c},{item.l},{item.v}{Environment.NewLine}";
            using (System.IO.StreamWriter sw = System.IO.File.AppendText(_fileName))
            {
                sw.Write(value);
            }
        }
    }
});
How can I make this process fast? With all this processing, the application only gets through roughly 3000 to 4000 symbols per minute, and without it, it handles 25000 lines per minute. So how can I increase the throughput of this code?
First you need to clean up your code to gain more visibility. I did a quick refactor and this is what I got:
const string FilePath = @"D:\Aggregate_Minute_AAPL.txt";

class SomeClass
{
    public string Sym { get; set; }
    public string Other { get; set; }
}

private void Something()
{
    File
        .ReadLines(FilePath)
        .AsParallel()
        .Select(x => x.TrimStart('[').TrimEnd(']'))
        .Select(JsonConvert.DeserializeObject<List<SomeClass>>)
        .ForAll(WriteRecord);
}

private const string DirPath = @"D:\COMB1\MinuteAggregates";
private const string Separator = @",";

private void WriteRecord(List<SomeClass> data)
{
    foreach (var item in data)
    {
        var fileNames = Directory
            .GetFiles(DirPath, item.Sym + "_*.txt", SearchOption.AllDirectories);
        foreach (var fileName in fileNames)
        {
            var fileLines = File.ReadAllLines(fileName)
                .Skip(1).ToList();
            var lastLine = fileLines.Last();
            if (!lastLine.Contains(item.Sym))
            {
                fileLines.RemoveAt(fileLines.Count - 1);
            }
            fileLines.Add(
                new StringBuilder()
                    .Append(item.Sym)
                    .Append(Separator)
                    .Append(item.Other)
                    .Append(Environment.NewLine)
                    .ToString()
            );
            File.WriteAllLines(fileName, fileLines);
        }
    }
}
From here it should be easier to play with AsParallel and check how, and with which parameters, the code is faster.
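For example (an illustrative tweak, not part of the original answer), PLINQ's WithDegreeOfParallelism lets you pin the number of worker threads while you measure:
// Illustrative: cap PLINQ at four workers while profiling; try other values.
File.ReadLines(FilePath)
    .AsParallel()
    .WithDegreeOfParallelism(4)
    .Select(x => x.TrimStart('[').TrimEnd(']'))
    .Select(JsonConvert.DeserializeObject<List<SomeClass>>)
    .ForAll(WriteRecord);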
Also:
- You are opening the file you write to twice.
- The removes are also somewhat expensive; removing at index 0 is the most expensive (although with few elements this may not make much difference).
- if (fileNames.Length > 0) is useless; use a foreach - if the list is empty, the loop will simply be skipped.
- You can try StringBuilder instead of string interpolation.
I hope these hints can help you improve your time! And I hope I have not forgotten anything.
Edit
We have nearly 10,000 files in our directory. So when the process is running, it throws an error saying "The process cannot access the file because it is being used by another process".
Well, is there a possibility that your processed lines contain duplicated file names?
If that is the case, you could try a simple approach: a retry after some milliseconds, something like
private const int SleepMillis = 5;
private const int MaxRetries = 3;

public void WriteFile(string fileName, string[] fileLines, int retries = 0)
{
    try
    {
        File.WriteAllLines(fileName, fileLines);
    }
    catch (Exception e) // Catch the specific exception type (e.g. IOException) if you can
    {
        if (retries >= MaxRetries)
        {
            Console.WriteLine("Too many tries with no success");
            throw; // rethrow exception
        }
        Thread.Sleep(SleepMillis);
        WriteFile(fileName, fileLines, ++retries); // try again
    }
}
I tried to keep it simple, but there are some annotations:
- If you can make your methods async, it could be an improvement to swap the sleep for a Task.Delay, but you need to know and understand well how async works.
- If collisions happen a lot, then you should try another approach, something like a concurrent map with semaphores; see the sketch below.
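A minimal sketch of that combined idea - an async retry via Task.Delay plus one SemaphoreSlim per file name held in a ConcurrentDictionary. The class and member names here are mine, not part of the original answer:
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical helper: serializes writers per file name so two workers
// never hold the same file open at once, and backs off asynchronously
// on IO errors instead of blocking a thread with Thread.Sleep.
static class SynchronizedFileWriter
{
    private const int DelayMillis = 5;
    private const int MaxRetries = 3;

    private static readonly ConcurrentDictionary<string, SemaphoreSlim> Locks =
        new ConcurrentDictionary<string, SemaphoreSlim>();

    public static async Task WriteFileAsync(string fileName, string[] fileLines)
    {
        // One gate per file name; GetOrAdd is thread-safe.
        SemaphoreSlim gate = Locks.GetOrAdd(fileName, _ => new SemaphoreSlim(1, 1));
        await gate.WaitAsync();
        try
        {
            for (int attempt = 0; ; attempt++)
            {
                try
                {
                    File.WriteAllLines(fileName, fileLines);
                    return;
                }
                catch (IOException) when (attempt < MaxRetries)
                {
                    await Task.Delay(DelayMillis); // non-blocking back-off
                }
            }
        }
        finally
        {
            gate.Release();
        }
    }
}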
Second edit
In the real scenario I am connecting to a websocket and receiving 70,000 to 100,000 (1 lakh) records every minute, and after that I am bifurcating those records from the live streaming data and storing each in its own file. And that becomes slower when I apply our concept with 11,000 files.
It is a hard problem. From what I understand, you're talking about roughly 1,166 records per second, and at that rate the little details can become big bottlenecks.
At that point I think it is better to consider other solutions; it could be too much I/O for the disk, too many threads or too few, the network...
You should start by profiling the app to check where it is spending the most time, and focus on that area. How many resources is it using? How many resources do you have? How are the memory, the processor, the garbage collector, the network doing? Do you have an SSD?
You need a clear view of what is slowing you down so you can attack it directly; it will depend on a lot of things, and it will be hard to help with that part :(.
There are tons of tools for profiling C# apps, and many ways to attack this problem (spread the load across several servers, use something like Redis to save data really quickly, use some event store so you can work with events...).
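As a first, very rough step before reaching for a full profiler, you can time the phases yourself. A crude sketch (the timed call is illustrative; data stands for one batch from the pipeline above):
// Crude instrumentation: wrap a suspect phase in a Stopwatch to see
// where the time actually goes before choosing what to optimize.
var sw = System.Diagnostics.Stopwatch.StartNew();
WriteRecord(data); // or the JSON deserialization, or the file reads...
sw.Stop();
Console.WriteLine($"WriteRecord took {sw.ElapsedMilliseconds} ms");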

SharpDX XAudio2: 6 SourceVoice limit

I have been playing around with SharpDX.XAudio2 for a few days now, and while things have been largely positive (the odd software quirk here and there), the following problem has me completely stuck:
I am working in C# .NET using VS2015.
I am trying to play multiple sounds simultaneously.
To do this, I have made:
- Test.cs: Contains main method
- cSoundEngine.cs: Holds XAudio2, MasteringVoice, and sound management methods.
- VoiceChannel.cs: Holds a SourceVoice, and in future any sfx/ related data.
cSoundEngine:
List<VoiceChannel> sourceVoices;
XAudio2 engine;
MasteringVoice master;

public cSoundEngine()
{
    engine = new XAudio2();
    master = new MasteringVoice(engine);
    sourceVoices = new List<VoiceChannel>();
}

public VoiceChannel AddAndPlaySFX(string filepath, double vol, float pan)
{
    /**
     * Set up and start SourceVoice
     */
    NativeFileStream fileStream = new NativeFileStream(filepath, NativeFileMode.Open, NativeFileAccess.Read);
    SoundStream soundStream = new SoundStream(fileStream);
    SourceVoice source = new SourceVoice(engine, soundStream.Format);
    AudioBuffer audioBuffer = new AudioBuffer()
    {
        Stream = soundStream.ToDataStream(),
        AudioBytes = (int)soundStream.Length,
        Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
    };

    //Make voice wrapper
    VoiceChannel voice = new VoiceChannel(source);
    sourceVoices.Add(voice);

    //Volume
    source.SetVolume((float)vol);

    //Play sound
    source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
    source.Start();

    return voice;
}
Test.cs:
cSoundEngine engine = new cSoundEngine();
int total = 6;
for (int i = 0; i < total; i++)
{
    string filepath = System.IO.Directory.GetParent(System.IO.Directory.GetCurrentDirectory()).Parent.FullName + @"\Assets\Planet.wav";
    VoiceChannel sfx = engine.AddAndPlaySFX(filepath, 0.1, 0);
}
Console.Read(); //Input anything to end play.
There is currently nothing worth showing in VoiceChannel.cs - it just holds the 'SourceVoice source' passed as its one constructor parameter!
Everything runs fine with up to 5 sounds (total = 5); all you hear is the blissful drone of Planet.wav. Anything higher than 5, however, causes the console to freeze for ~5 seconds and then close (likely a C++ error the debugger can't handle). Sadly there is no error message for us to look at.
From testing:
- Will not crash as long as you do not have more than 5 running SourceVoices.
- Changing the sample rate does not seem to help.
- Setting inputChannels for the master object to a different number makes no difference.
- MasteringVoice seems to say the max number of input voices is 64.
- Making each sfx play from a different wav file makes no difference.
- Setting the volume for the SourceVoices and/or master makes no difference.
From the XAudio2 API Documentation I found this quote: 'XAudio2 removes the 6-channel limit on multichannel sounds, and supports multichannel audio on any multichannel-capable audio card. The card does not need to be hardware-accelerated.'. This is the closest I have come to finding something that mentions this problem.
I am not well experienced with programming sfx and a lot of this is very new to me, so feel free to call me an idiot where appropriate but please try and explain things in layman terms.
Please, if you have any ideas or answers they would be greatly appreciated!
-Josh
As Chuck suggested, I have created a data bank which holds the .wav data, and I just reference the single data store with each buffer. This has improved the sound limit up to 20 - however it has not fixed the problem as a whole, likely because I have not implemented it properly.
Implementation:
class SoundDataBank
{
    /**
     * Holds a single byte array for each sound
     */
    Dictionary<eSFX, Byte[]> bank;
    string curdir => Directory.GetParent(Directory.GetCurrentDirectory()).Parent.FullName;

    public SoundDataBank()
    {
        bank = new Dictionary<eSFX, byte[]>();
        bank.Add(eSFX.planet, NativeFile.ReadAllBytes(curdir + @"\Assets\Planet.wav"));
        bank.Add(eSFX.base1, NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav"));
    }

    public Byte[] GetSoundData(eSFX sfx)
    {
        byte[] output = bank[sfx];
        return output;
    }
}
In SoundEngine we create a SoundDataBank object (initialised in the SoundEngine constructor):
SoundDataBank soundBank;

public VoiceChannel AddAndPlaySFXFromStore(eSFX sfx, double vol)
{
    /**
     * The SourceVoice will be automatically added to MasteringVoice and engine in the constructor.
     */
    byte[] buffer = soundBank.GetSoundData(sfx);
    MemoryStream memoryStream = new MemoryStream(buffer);
    SoundStream soundStream = new SoundStream(memoryStream);
    SourceVoice source = new SourceVoice(engine, soundStream.Format);
    AudioBuffer audioBuffer = new AudioBuffer()
    {
        Stream = soundStream.ToDataStream(),
        AudioBytes = (int)soundStream.Length,
        Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
    };

    //Make voice wrapper
    VoiceChannel voice = new VoiceChannel(source, engine, MakeOutputMatrix());

    //Volume
    source.SetVolume((float)vol);

    //Play sound
    source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
    source.Start();

    sourceVoices.Add(voice);
    return voice;
}
Following this implementation now lets me play up to 20 sound effects - but NOT because we are playing from the sound bank. In fact, even running the old method for sound effects now gets up to 20 sfx instances.
The limit has improved to 20 because we call NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav") in the SoundDataBank constructor.
I suspect NativeFile is holding a store of the loaded file data, so regardless of whether you run the original SoundEngine.AddAndPlaySFX() or SoundEngine.AddAndPlaySFXFromStore(), they are both playing from memory?
Either way, this has quadrupled the limit from before, so it has been incredibly useful - but it requires further work.

How to make geo location retrieval process faster in UWP?

I am using the Geolocator class to find the current position of the device in a UWP app. The location retrieval works very fast on my computer, but when I run the same app on a real device, retrieval takes around 30 seconds.
I'm using the following code snippet:
var accessStatus = await Geolocator.RequestAccessAsync();
if (accessStatus == GeolocationAccessStatus.Allowed)
{
    Geolocator geolocator = new Geolocator
    {
        DesiredAccuracyInMeters = 500,
        DesiredAccuracy = PositionAccuracy.High
    };
    Geoposition pos = await geolocator.GetGeopositionAsync();
}
How can I make this process faster on my devices?
I already tried increasing the DesiredAccuracyInMeters value up to 2000 but couldn't see any improvement. Thanks in advance.
If you check the documentation, you can see that when you set both DesiredAccuracy and DesiredAccuracyInMeters, the one set last takes precedence:
When neither DesiredAccuracyInMeters nor DesiredAccuracy are set, your app will use an accuracy setting of 500 meters (which corresponds to the DesiredAccuracy setting of Default). Setting DesiredAccuracy to Default or High indirectly sets DesiredAccuracyInMeters to 500 or 10 meters, respectively. When your app sets both DesiredAccuracy and DesiredAccuracyInMeters, your app will use whichever accuracy value was set last.
So because you are setting DesiredAccuracy to High, you are effectively overriding the meters setting. To make the search faster, do not set High accuracy; only set the meters value.
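In code, that means constructing the Geolocator with only the meters value; a minimal sketch based on the snippet from the question:
// Only DesiredAccuracyInMeters is set, so the coarser (and faster)
// 500 m accuracy is actually used instead of being overridden by High.
Geolocator geolocator = new Geolocator
{
    DesiredAccuracyInMeters = 500
};
Geoposition pos = await geolocator.GetGeopositionAsync();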
I will add to Martin's answer: you should first use the cached position and then call GetPositionAsync; you should get a faster localization of the user this way:
var locator = CrossGeolocator.Current;
locator.DesiredAccuracy = 500;

//Check if we have a cached position
var loc = await locator.GetLastKnownLocationAsync();
if (loc != null)
{
    CurrentPosition = new Position(loc.Latitude, loc.Longitude);
}

if (!locator.IsGeolocationAvailable || !locator.IsGeolocationEnabled)
{
    return;
}

//and if not we get a new one
var def = await locator.GetPositionAsync(TimeSpan.FromSeconds(10), null, true);
CurrentPosition = new Position(def.Latitude, def.Longitude);

Download speed for Open Hardware Monitor

I'm making some changes to Open Hardware Monitor. I will add the network adapter download and upload speed, but when I calculate the download speed I get a wrong result.
I can't use a timer to calculate the correct download speed because of the auto-update in OHM.
In the source below you can see how I calculate the download speed (in Mb/s).
In the constructor of the class I do:
IPv4InterfaceStatistics interfaceStats = netInterfaces.GetIPv4Statistics();
bytesSent = interfaceStats.BytesSent;
bytesReceived = interfaceStats.BytesReceived;
stopWatch = new Stopwatch();
stopWatch.Start();
When the update method is called (at random times) I do this:
IPv4InterfaceStatistics interfaceStats = netInterfaces.GetIPv4Statistics();
stopWatch.Stop();
long time = stopWatch.ElapsedMilliseconds;
if (time != 0)
{
    long bytes = interfaceStats.BytesSent;
    long bytesCalc = ((bytes - bytesSent) * 8);
    usedDownloadSpeed.Value = ((bytesCalc / time) * 1000) / 1024;
    bytesSent = bytes;
}
I hope someone can spot my issue.
(Screenshot added.)
There were a few conversion issues with my previous code.
Now I have the source below and it works.
Thanks all for answering.
interfaceStats = netInterfaces.GetIPv4Statistics();
//Calculate download speed
downloadSpeed.Value = Convert.ToInt32(interfaceStats.BytesReceived - bytesPreviousReceived) / 1024F;
bytesPreviousReceived = interfaceStats.BytesReceived;
The following changes should help...
speed = netInterfaces.Speed / 1048576L;
If I recall correctly, the Speed property is a long, and when you divide it by an int you end up with a truncated result - which brings us to a similar set of changes in your other calculation...
usedDownloadSpeed.Value = ((bytesCalc / time) * 1000L)/1024L;
... assuming that usedDownloadSpeed.Value is also a long, to make sure you're not getting any truncated values from implicit conversion of your results or calculations. If you want to be doubly sure you have the casting correct, use Convert.ToInt64().
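Putting the pieces together, here is a sketch of an update cycle that restarts the stopwatch each time, reads BytesReceived for the download side, and keeps the arithmetic in long. The field names follow the question's constructor; the Restart call is my addition, since the original snippet only ever stopped the stopwatch:
// Sketch of one update cycle: measure the bytes received since the last
// call and convert to kbit/s using long arithmetic throughout.
IPv4InterfaceStatistics interfaceStats = netInterfaces.GetIPv4Statistics();
long time = stopWatch.ElapsedMilliseconds;
stopWatch.Restart(); // start timing the next interval immediately

if (time != 0)
{
    long bytes = interfaceStats.BytesReceived;        // download side, not BytesSent
    long bitsTransferred = (bytes - bytesReceived) * 8L;
    usedDownloadSpeed.Value = ((bitsTransferred / time) * 1000L) / 1024L; // kbit/s
    bytesReceived = bytes;
}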
