UWP: Garbage collection Gen 2 still happens inside a NoGCRegion - c#

In my UWP app I need to run a critical section for a few seconds, during which I need to be sure that the garbage collector is not invoked.
So, I call:
const long TOTAL_GC_ALLOWED = 1024L * 1024 * 240; // 240 MB, it seems the max allowed is 244 for 64-bit workstations

try
{
    bool res = GC.TryStartNoGCRegion(TOTAL_GC_ALLOWED);
    if (!res)
        s_log.ErrorFormat("Cannot allocate noGC Region!");
}
catch (Exception)
{
    s_log.WarnFormat("Cannot start NoGCRegion");
}
Unfortunately, even when GC.TryStartNoGCRegion() returns true, I still see the same number of Gen 2 garbage collections as when I don't call the method at all.
Please also note that I am testing on a machine with 16 GB of RAM, of which only 9 GB were in use by the entire OS during my tests.
What am I doing wrong?
How can I suppress the GC for a limited amount of time?
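For reference, a no-GC region ends on its own as soon as allocations exceed the requested budget, at which point a collection runs. A minimal check like the following (my sketch, reusing the names from the snippet above; RunCriticalSection is a hypothetical placeholder) can show whether that happened during the critical section:

// requires: using System.Runtime; (GCSettings, GCLatencyMode)
bool res = GC.TryStartNoGCRegion(TOTAL_GC_ALLOWED);

RunCriticalSection(); // hypothetical placeholder for the few seconds of critical work

// If the 240 MB budget was exceeded, the runtime performed a collection and
// silently left the region, so the latency mode is no longer NoGCRegion.
if (res && GCSettings.LatencyMode != GCLatencyMode.NoGCRegion)
    s_log.WarnFormat("No-GC region ended early; allocations exceeded the budget");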

Related

UWP AudioGraph : Garbage Collector causes clicks in the audio output

I have a C# UWP application that uses the AudioGraph API.
I use a custom effect on a MediaSourceAudioInputNode.
I followed the sample on this page:
https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/custom-audio-effects
It works but I can hear multiple clicks per second in the speakers when the custom effect is running.
Here is the code for my ProcessFrame method:
public unsafe void ProcessFrame(ProcessAudioFrameContext context)
{
    if (context == null)
    {
        throw new ArgumentNullException(nameof(context));
    }

    AudioFrame frame = context.InputFrame;

    using (AudioBuffer inputBuffer = frame.LockBuffer(AudioBufferAccessMode.Read))
    using (IMemoryBufferReference inputReference = inputBuffer.CreateReference())
    {
        ((IMemoryBufferByteAccess)inputReference).GetBuffer(out byte* inputDataInBytes, out uint inputCapacity);

        Span<float> samples = new Span<float>(inputDataInBytes, (int)inputCapacity / sizeof(float));

        for (int i = 0; i < samples.Length; i++)
        {
            float sample = samples[i];
            // sample processing...
            samples[i] = sample;
        }
    }
}
I used the Visual Studio profiler to identify the cause of the problem.
It is clear that there is a memory problem: the garbage collector runs several times each second, and at each collection I can hear a click.
The Visual Studio profiler shows that the garbage-collected objects are of type ProcessAudioFrameContext.
These objects are created by the AudioGraph API before entering the ProcessFrame method and passed as a parameter to the method.
Is there something that I can do to avoid these frequent garbage collections?
The problem is not specific to custom effects, but it is a general problem with AudioGraph (current SDK is 1809).
Garbage collections can pause the AudioGraph thread for too long (more than 10 ms, which is the default audio buffer length), and the result is audible clicks in the output.
The use of custom effects puts a lot of pressure on the garbage collector.
I found a good workaround. It uses the GC.TryStartNoGCRegion method.
After this method is called, the clicks completely disappear. But the app keeps growing in memory until the GC.EndNoGCRegion method is called.
// at the beginning of playback...
// 240 MB is the amount of memory that can be allocated before a GC occurs
GC.TryStartNoGCRegion(240 * 1024 * 1024, true);
// ... at the end of playback
GC.EndNoGCRegion();
MSDN doc:
https://learn.microsoft.com/fr-fr/dotnet/api/system.gc.trystartnogcregion?view=netframework-4.7.2
And a good article:
https://mattwarren.org/2016/08/16/Preventing-dotNET-Garbage-Collections-with-the-TryStartNoGCRegion-API/
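One caveat worth adding here (my note, not part of the original answer): GC.EndNoGCRegion throws an InvalidOperationException if the runtime has already left the no-GC region, for example because playback allocated more than the requested budget, so a guarded version of the start/stop pair might look like this:

// at the beginning of playback...
try
{
    // second argument true: do not run a full blocking GC if the budget cannot be reserved up front
    GC.TryStartNoGCRegion(240 * 1024 * 1024, true);
}
catch (InvalidOperationException)
{
    // already inside a no-GC region; nothing to do
}

// ... at the end of playback
try
{
    GC.EndNoGCRegion();
}
catch (InvalidOperationException)
{
    // the runtime had already left the region (e.g. the 240 MB budget was exceeded)
}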
The garbage collector is probably reacting to you initializing the temporary memory for the samples every frame, which is then released after the frame. Try assigning the memory for holding the samples in your start-up code and just reuse it every frame, as sketched below.
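A rough illustration of that suggestion (everything below is my sketch: the field and sizing logic are assumptions; only the SetEncodingProperties/ProcessFrame members come from the IBasicAudioEffect interface used in the question):

// Allocate the per-frame scratch buffer once and reuse it on every frame.
private float[] _scratch;

public void SetEncodingProperties(AudioEncodingProperties encodingProperties, IDirect3DDevice device)
{
    // Assumption: size for the largest expected frame, e.g. 10 ms of audio.
    int maxSamples = (int)(encodingProperties.SampleRate / 100) * (int)encodingProperties.ChannelCount;
    _scratch = new float[maxSamples];
}

public unsafe void ProcessFrame(ProcessAudioFrameContext context)
{
    // ... lock the input buffer as in the code above, then do any intermediate
    // work in _scratch instead of allocating temporary arrays per frame.
}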

Memory leak TextBox.Text set with Windows 10 1809 (10.0.17763)

Since updating to Windows 10 1809, my application appears to be leaking memory on a WinForms C++/CLI TextBox.Text set call.
The textbox is a log updated from a separate device thread logging Bluetooth communications. After a random period of time (sometimes 10 minutes, sometimes a few hours) the GUI event handler locks up on the TextBox.Text set call, and WorkingSet memory increases linearly (~10 MB/s) for around 30 seconds to a minute before recovering. However, the allocated commit memory is retained and then built upon the next time it fails. This builds up until the application eventually crashes with a stack overflow or out-of-memory exception.
This is the bones of the Comms event handler:
System::Void MainForm::OnCommsMessageLog(System::Object^ sender, CustomEventArgs::LogMessageEventArgs^ plogMessageEventArgs)
{
    if (InvokeRequired)
    {
        Invoke(gcnew commsMessageDelegate(this, &MainForm::OnCommsMessageLog), sender, plogMessageEventArgs);
    }
    else
    {
        msclr::lock OnCommsMessageLogLock(mpOnCommsLogLock);
        String^ logMessage = plogMessageEventArgs->LogMessage;

        // Pipe out to file to check for log contents
        StreamWriter^ pWriter = gcnew StreamWriter(mLogFilePath + pBthDevice->Name + ".txt", true);
        pWriter->Write(logMessage);
        pWriter->Close();

        mpDeviceLog += logMessage; // Add new log line to String member variable
        if (mpDeviceLog->Length > MAX_LOG_LENGTH) // MAX_LOG_LENGTH = 100000
        {
            mpDeviceLog = mpDeviceLog->Substring(MAX_LOG_LENGTH / 5);
            logTextBox->Text = mpDeviceLog; // <-- LOCKS HERE
        }
        else
        {
            logTextBox->AppendText(logMessage);
            //logTextBox->Text += logMessage; // <-- ALTERNATIVELY LOCKS HERE IF THIS METHOD IS USED
        }
    }
}
This cannot be replicated on any non-1809 machine, which could be coincidental, but seems unlikely.
The machines in question are relatively low spec, running 2GB RAM with a Celeron 1.6GHz processor, but the application is pretty thin, and under full load typically only uses 50MB of WorkingSet memory.
The rate at which it leaks memory appears to be dependent on the contents of the text box at the time of the lockup.
Update
This appears to be an issue with TextBox.Multiline on any Windows 1809 machine. It can be replicated by creating a small application with a multiline text box and then running the following code:
logTextBox.Text = "a\r\n";
logTextBox.Text = "a\r"; // Will lock here for 30 to 60s
The issue can be rectified by adding in the following between the Text set operations:
logTextBox.Clear();
logTextBox.Multiline = false;
logTextBox.Multiline = true;
The issue has also been raised on MSDN and is awaiting a reply.

How does that thread cause a memory leak?

One of our programs suffered from a severe memory leak: its process memory rose by 1 GB per day at a customer site.
I could set up the scenario in our test center, and could get a memory leak of some 700 MB per day.
This application is a Windows service written in C# which communicates with devices over a CAN bus.
The memory leak does not depend on the rate of data the application writes to the CAN bus. But it clearly depends on the number of messages received.
The "unmanaged" side of reading the messages is:
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct CAN_MSG
{
    public uint time_stamp;
    public uint id;
    public byte len;
    public byte rtr;
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 8)]
    public byte[] a_data;
}

[DllImport("IEICAN02.dll", EntryPoint = "#3")]
public static extern int CAN_CountMsgs(ushort card_idx, byte can_no, byte que_type);
//ICAN_API INT32 _stdcall CAN_CountMsgs(UINT16 card_idx, UINT8 can_no, UINT8 que_type);

[DllImport("IEICAN02.dll", EntryPoint = "#10")]
public static extern int CAN_ReadMsg(ushort card_idx, byte can_no, ushort count, [MarshalAs(UnmanagedType.LPArray), Out()] CAN_MSG[] msg);
//ICAN_API INT32 _stdcall CAN_ReadMsg(UINT16 card_idx, UINT8 can_no, UINT16 count, CAN_MSG* p_obj);
We use it essentially as follows:
private void ReadMessages()
{
    while (keepRunning)
    {
        // get the number of messages in the queue
        int messagesCounter = ICAN_API.CAN_CountMsgs(_CardIndex, _PortIndex, ICAN_API.CAN_RX_QUE);
        if (messagesCounter > 0)
        {
            // create an array of appropriate size for those messages
            CAN_MSG[] canMessages = new CAN_MSG[messagesCounter];
            // read them
            int actualReadMessages = ICAN_API.CAN_ReadMsg(_CardIndex, _PortIndex, (ushort)messagesCounter, canMessages);
            // transform them into "our" objects
            CanMessage[] messages = TransformMessages(canMessages);
            Thread thread = new Thread(() => RaiseEventWithCanMessages(messages))
            {
                Priority = ThreadPriority.AboveNormal
            };
            thread.Start();
        }
        Thread.Sleep(20);
    }
}
// transformation process:
new CanMessage
{
    MessageData = (byte[])messages[i].a_data.Clone(),
    MessageId = messages[i].id
};
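For context, a full version of the transformation might look like this (my reconstruction from the snippet above; the CanMessage class itself is defined elsewhere in the application and is assumed here):

private static CanMessage[] TransformMessages(CAN_MSG[] messages)
{
    var result = new CanMessage[messages.Length];
    for (int i = 0; i < messages.Length; i++)
    {
        // copy the payload out of the marshalled struct into a managed object
        result[i] = new CanMessage
        {
            MessageData = (byte[])messages[i].a_data.Clone(),
            MessageId = messages[i].id
        };
    }
    return result;
}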
The loop is executed roughly once every ~30 milliseconds.
When I call RaiseEventWithCanMessages(messages) in the same thread, the memory leak disappears (well, not completely, some 10 MB per day - i.e. about 1% of the original leak - remain, but that other leak is likely unrelated).
I do not understand how this creation of threads can lead to a memory leak. Can you provide me with some information how the memory leak is caused?
Addendum 2018-08-16:
The application starts off with some 50 MB of memory and crashes at some 2 GB. That means that gigabytes of memory are available most of the time.
Also, CPU is at some 20% - 3 out of 4 cores are idle.
The number of threads used by the application remains rather constant around ~30 threads.
Overall, there are plenty of resources available for the Garbage Collection. Still, GC fails.
With some 30 threads per second and a memory leak of 700 MB per day, on average ~300 bytes of memory leak per freshly created thread; with ~5 messages per new thread, that is some ~60 bytes per message. The "unmanaged" struct does not make it into the new thread; its contents are copied into a newly instantiated class.
So: why does GC fail despite the enormous amount of resources available for it?
You're creating 2 arrays and a thread every ~30 milliseconds, without any coordination between them. The arrays could be a problem, but frankly I'm much more worried about the thread - creating threads is really, really expensive. You should not be creating them this frequently.
I'm also concerned about what happens if the read loop is out-pacing the thread - i.e. if RaiseEventWithCanMessages takes more time than the code that does the query/sleep. In that scenario, you'd have a constant growth of threads. And you'd probably also have all the various RaiseEventWithCanMessages fighting with each-other.
The fact that putting RaiseEventWithCanMessages inline "fixes" it suggests that the main problem here is either the sheer number of threads being created (bad), or the many overlapping and growing numbers of concurrent RaiseEventWithCanMessages.
The simplest fix would be: don't use the extra threads here.
If you actually want concurrent operations, I would have exactly two threads here - one that does the query, and one that does whatever RaiseEventWithCanMessages is, both in a loop. I would then coordinate between the threads such that the query thread waits for the previous RaiseEventWithCanMessages thing to be complete, such that it hands it over in a coordinated style - so there is always at most one outstanding RaiseEventWithCanMessages, and you stop running queries if it isn't keeping up.
Essentially:
CanMessage[] messages = TransformMessages(canMessages);
HandToConsumerBlockingUntilAvailable(messages); // TODO: implement
with the other thread basically doing:
var nextMessages = BlockUntilAvailableFromProducer(); // TODO: implement
A very basic implementation of this could be just:
void HandToConsumerBlockingUntilAvailable(CanMessage[] messages)
{
    lock (_queue)
    {
        if (_queue.Count != 0) Monitor.Wait(_queue); // block until space
        _queue.Enqueue(messages);
        if (_queue.Count == 1) Monitor.PulseAll(_queue); // wake consumer
    }
}

CanMessage[] BlockUntilAvailableFromProducer()
{
    lock (_queue)
    {
        if (_queue.Count == 0) Monitor.Wait(_queue); // block until work
        var next = _queue.Dequeue();
        Monitor.Pulse(_queue); // wake producer
        return next;
    }
}

private readonly Queue<CanMessage[]> _queue = new Queue<CanMessage[]>();
This implementation enforces that there is no more than 1 outstanding unprocessed Message[] in the queue.
This addresses the issues of creating lots of threads, and the issues of the query loop out-pacing the RaiseEventWithCanMessages code.
I might also look into using the ArrayPool<T>.Shared for leasing oversized arrays (meaning: you need to be careful not to read more data than you've actually written, since you might have asked for an array of 500 but been given one of size 512), rather than constantly allocating arrays.
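As a rough sketch of that last suggestion (ArrayPool<T>.Shared lives in System.Buffers; the way it is wired into the read loop below, including the extra count parameter on TransformMessages, is my assumption rather than code from the answer):

// inside the read loop, instead of new CAN_MSG[messagesCounter]:
CAN_MSG[] canMessages = ArrayPool<CAN_MSG>.Shared.Rent(messagesCounter); // may be larger than requested
try
{
    int actualReadMessages = ICAN_API.CAN_ReadMsg(_CardIndex, _PortIndex, (ushort)messagesCounter, canMessages);
    // only transform the entries that were actually filled in
    CanMessage[] messages = TransformMessages(canMessages, actualReadMessages);
    HandToConsumerBlockingUntilAvailable(messages);
}
finally
{
    ArrayPool<CAN_MSG>.Shared.Return(canMessages);
}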

monotorrent - writeRate/readRate not working

I'm using MonoTorrent to download a ~20 GB file. When MonoTorrent creates the files, memory and CPU usage hit their maximum, which slows the computer down and even overheats it, so I wanted to limit the memory usage by limiting the write rate.
Here's what I have tried:
I checked around and found that you can limit the read/write rate of the engine using this code:
EngineSettings engineSettings = new EngineSettings(downloadsPath, port);
engineSettings.PreferEncryption = true;
engineSettings.AllowedEncryption = EncryptionTypes.All;
engineSettings.MaxWriteRate = **maximum write rate in bytes**;
engineSettings.MaxReadRate = **maximum read rate in bytes**;
engineSettings.GlobalMaxDownloadSpeed = **max download in bytes**;
The download rate limit worked, but it didn't limit the memory usage, so I checked the write rate value at runtime using this code:
MessageBox.Show(engine.DiskManager.WriteRate.ToString());
It returned 0, so instead of setting MaxWriteRate on the EngineSettings I went into EngineSettings.cs and hard-coded a value for MaxWriteRate by changing this code:
public int MaxWriteRate
{
    get { return 5000; }
    set { maxWriteRate = 5000; }
}
That didn't limit the memory usage either, and WriteRate still returned 0, so I went into DiskManager.cs and hard-coded a value for WriteRate by changing this code:
public int WriteRate
{
    get { return 5000; }
}
Now WriteRate returned 5000, but it still didn't limit the memory usage. At that point I was stuck and couldn't find anything else to change.
Does anyone know why this isn't working? I'm starting to think that WriteRate isn't about limiting the write speed at all.
When downloading a torrent, the download speed is limited by three things:
1) The maximum allowed download speed for the TorrentManager
2) The maximum allowed download speed overall
3) No more than 4MB of data is held in memory while waiting to be written to disk.
Specifically on the third point, if there are more than 4MB of pieces held in memory then no further Socket.Receive calls will be made until that data is flushed. https://github.com/mono/monotorrent/blob/caac16cffd95749febe04c3f7cf22567c3e40432/src/MonoTorrent/MonoTorrent.Client/RateLimiters/DiskWriterLimiter.cs#L43-L46
This screenshot shows what happens today when you specify a max write rate of 2 * 1024 * 1024 (2,048 kB/sec):
The download rate auto-limits because the 4MB buffer fills up, which means setting the max disk write rate ends up limiting both download rate and memory consumption.
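In other words, setting the limit on the settings object before creating the engine should be enough, without patching EngineSettings.cs (a sketch based on the property and constructor names used in the question; the value is in bytes per second):

EngineSettings engineSettings = new EngineSettings(downloadsPath, port);
engineSettings.MaxWriteRate = 2 * 1024 * 1024; // ~2 MB/s to disk; the 4 MB pending-write buffer then throttles the download too
ClientEngine engine = new ClientEngine(engineSettings);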

How to increase or keep starting speed of copying a file

I am using this code to copy a big file:
const int CopyBufferSize = 64 * 1024;

string src = @"F:\Test\src\Setup.exe";
string dst = @"F:\Test\dst\Setup.exe";
public void CopyFile()
{
    Stream input = File.OpenRead(src);
    long length = input.Length;
    byte[] buffer = new byte[CopyBufferSize];
    Stopwatch swTotal = Stopwatch.StartNew();

    Invoke((MethodInvoker)delegate
    {
        progressBar1.Maximum = (int)Math.Abs(length / CopyBufferSize) + 1;
    });

    using (Stream output = File.OpenWrite(dst))
    {
        int bytesRead = 1;
        // This will finish silently if we couldn't read "length" bytes.
        // An alternative would be to throw an exception
        while (length > 0 && bytesRead > 0)
        {
            bytesRead = input.Read(buffer, 0, Math.Min(CopyBufferSize, buffer.Length));
            output.Write(buffer, 0, bytesRead);
            length -= bytesRead;

            Invoke((MethodInvoker)delegate
            {
                progressBar1.Value++;
                label1.Text = (100 * progressBar1.Value / progressBar1.Maximum).ToString() + " %";
                label3.Text = ((int)swTotal.Elapsed.TotalSeconds).ToString() + " Seconds";
            });
        }

        Invoke((MethodInvoker)delegate
        {
            progressBar1.Value = progressBar1.Maximum;
        });
    }

    Invoke((MethodInvoker)delegate
    {
        swTotal.Stop();
        Console.WriteLine("Total time: {0:N4} seconds.", swTotal.Elapsed.TotalSeconds);
        label3.Text += ((int)swTotal.Elapsed.TotalSeconds - int.Parse(label3.Text.Replace(" Seconds", ""))).ToString() + " Seconds";
    });
}
The file size is about 4 GB.
In the first 7 seconds it can copy up to 400 MB, then this hot speed calms down.
What is happening, and how can I keep this initial speed or even increase it?
Another question:
After the file is copied, Windows is still working on the destination file (for about 10 seconds).
Copy time: 116 seconds
Extra time: 10-15 seconds or even more
How can I remove or decrease this extra time?
What happens? Caching, mostly.
The OS pretends you copied 400 MiB in seven seconds, but you didn't. You just sent 400 MiB to the OS (or file system) to write in the future, and that's as much as the buffer can take. If you try to write a 400 MiB file and you pull the plug as soon as it's "done", your file will not be written. The same thing explains the "overtime": your application has sent everything it has to the buffer, but the buffer hasn't yet been written to the drive itself (either to its own buffer, or, even slower, to the actual physical platter).
This is especially visible with USB flash drives, which tend to use caching heavily. This makes working with the (usually very slow) drive much more pleasant, with the trade-off that you have to wait for the OS to finish writing everything before pulling the drive out (that's why you get the "Safe remove" icon).
So it should be obvious that you can't really make the total time shorter. All you can do is try and make the user interface reflect reality a bit better, so that the user doesn't see the "first 400 MiB are so fast!" thing... but it doesn't really work well. In any case, your read->write speed is ~30 MiB/s. The OS just hides the peaks to make it easier to deal with the slow hard drive - very useful when you're dealing with lots of small files, worthless when dealing with files bigger than the buffer.
You have a bit of control over this when you use the FileStream constructor directly, instead of using File.OpenWrite - you can use FileOptions.WriteThrough to instruct the OS to avoid any caching and write directly to disk[1], giving you a better idea of the real write speed. Do note that this usually makes the total time larger, though, and it may make concurrent access even worse. You definitely don't want to use it for small files.
[1] - Haha, right. The drive usually has caching of its own, and some ignore the OS' pleas. Tough luck.
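As a sketch of that option (FileOptions.WriteThrough is a real flag on the FileStream constructor; the FileMode/FileShare choices and buffer size below are my assumptions, not taken from the original code):

using (Stream output = new FileStream(dst, FileMode.Create, FileAccess.Write, FileShare.None,
                                      CopyBufferSize, FileOptions.WriteThrough))
{
    // same read/write loop as in the question; the progress bar now tracks the
    // physical write speed much more closely, but the total time may increase
}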
One thing you could try is to increase the buffer size. This really matters once the write cache can no longer keep up (as discussed in the other answer). Writing a lot of small blocks is often slower than writing a few large blocks. Instead of 64 kB, try 1 MB, 4 MB or even more:
const int CopyBufferSize = 1 * 1024 * 1024; // 1 MB
// or
const int CopyBufferSize = 4 * 1024 * 1024; // 4 MB
