Why is it nowhere written that HMACSHA256 can cause a System.AccessViolationException when accessed in parallel? An AccessViolationException is extremely hard to investigate, since it cannot be caught by a regular try-catch (from .NET 6.0 it cannot be caught at all) and since it is usually thrown somewhere other than where it was caused.
Or, even more concerning: why does the documentation nowhere state that an (HMAC)SHA256 instance must not be shared between threads at all? See the sketch below of the same strings being hashed differently.
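A minimal sketch of that symptom (whether you see wrong hashes or a crash first is timing-dependent; .NET 6 top-level statements and implicit usings assumed):

using System.Security.Cryptography;
using System.Text;

var key = Encoding.UTF8.GetBytes("abcdefghijklmnopqrstuvwxyz0123456789");
var data = Encoding.UTF8.GetBytes("The same string every time");

using var shared = new HMACSHA256(key);
byte[] expected = shared.ComputeHash(data); // correct value, computed with no contention

Parallel.For(0, 100_000, _ =>
{
    // Concurrent calls interleave the shared instance's internal state.
    byte[] actual = shared.ComputeHash(data);
    if (!actual.AsSpan().SequenceEqual(expected))
        Console.WriteLine("Mismatch: the same input produced a different hash");
});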
Why I am asking:
Our newly written app was randomly crashing after minutes or hours. In the Windows Event Viewer we then found:
Faulting application name: XXXXX.exe, version: 1.0.0.0, time stamp: 0x62571213
Faulting module name: coreclr.dll, version: 6.0.522.21309, time stamp: 0x625708f4
Exception code: 0xc0000005
Exception code 0xc0000005 is the code for AccessViolationException. It took days to determine what was causing the problem, and we were rather lucky to identify it:
since we do a lot of optimization, it seemed reasonable to cache the HMACSHA256 instance and reuse it. Eventually it was accessed by two threads at once, which crashed the application without any log message (there was only the error code in the Event Viewer, no stack trace).
Code example producing a System.AccessViolationException immediately:
using System.Security.Cryptography;
using System.Text;

var keyString = "abcdefghijklmnopqrstuvwxyz0123456789";
var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(keyString)); // one shared instance

var parallelOptions = new ParallelOptions { MaxDegreeOfParallelism = 4 };

static IEnumerable<string> EnumerateStrings()
{
    while (true)
    {
        yield return "A short string";
        yield return "Lorem ipsum dolor sit amet, consectetur adipiscing elit ...";
    }
}

// Every worker hammers the same instance; its internal state is not thread-safe.
Parallel.ForEach(EnumerateStrings(),
    parallelOptions,
    theString => hmac.ComputeHash(Encoding.UTF8.GetBytes(theString)));
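For contrast, a sketch of thread-safe alternatives. The static one-shot HMACSHA256.HashData is available from .NET 6 onwards; EnumerateStrings is the iterator from the snippet above:

using System.Security.Cryptography;
using System.Text;

var key = Encoding.UTF8.GetBytes("abcdefghijklmnopqrstuvwxyz0123456789");
var parallelOptions = new ParallelOptions { MaxDegreeOfParallelism = 4 };

Parallel.ForEach(EnumerateStrings(), parallelOptions, theString =>
{
    var bytes = Encoding.UTF8.GetBytes(theString);

    // Option 1 (.NET 6+): static one-shot API, safe for concurrent callers.
    byte[] hash1 = HMACSHA256.HashData(key, bytes);

    // Option 2 (any version): a fresh instance per use; never share one across threads.
    using var hmac = new HMACSHA256(key);
    byte[] hash2 = hmac.ComputeHash(bytes);
});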
Disclaimer: I do not require an answer to this question. I just wish somebody had asked exactly this question sometime before me - it could have given me a clue, and I would probably have spent less time hitting my head against the wall. I hope it will spare some other heads.
Related
We are getting an error on the server, and our service is automatically stopped on the server.
The application randomly crashes after approximately 1 hour with the below error:
Faulting application name: Chubb.Studio.Event.Processor.exe, version: 0.0.0.0, time stamp: 0x5c0ab1b7
Faulting module name: KERNELBASE.dll, version: 6.3.9600.19425, time stamp: 0x5d26b6e9
Exception code: 0xc0000005
Fault offset: 0x0000000000001556
Faulting process id: 0x115c
Faulting application start time: 0x01d5a35fd202f96d
Faulting application path: E:\WindowsService\DevInt\Chubb.Studio.EventProcessor\Chubb.Studio.Event.Processor.exe
Faulting module path: C:\Windows\system32\KERNELBASE.dll
Report Id: 762c15d4-0f5b-11ea-8120-005056a27597
Faulting package full name:
Faulting package-relative application ID:
Our code looks like this:
protected override void OnStarted()
{
    //IntializeEventsExecution();
    Task task = Task.Factory.StartNew(() => IntializeEventsExecution());
    base.OnStarted();
}

public void IntializeEventsExecution()
{
    StartEvents();
}

public void StartEvents()
{
    var eventList = GetEventTopics();
    Parallel.ForEach(eventList,
        new ParallelOptions { MaxDegreeOfParallelism = eventList.Count },
        (item, state, index) =>
        {
            StartProcessingEvent(eventList[(int)index]);
        });
}

/// <summary>
/// Starts processing a single event topic.
/// </summary>
/// <param name="topic">The event topic to process.</param>
public void StartProcessingEvent(EventTopic topic)
{
    try
    {
        Task task = Task.Factory.StartNew(() => ExecuteProcessingEvent(topic));
        task.Wait();
    }
    catch (Exception)
    {
    }
    finally
    {
        new _processingDelegate(StartProcessingEvent).Invoke(topic);
    }
}
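Note, independent of the answer below: StartProcessingEvent re-invokes itself synchronously from its finally block, so every processed event deepens the call stack. A hypothetical iterative rewrite that keeps the same keep-reprocessing behavior without the unbounded recursion (whether or not the recursion is the crash cause here):

public void StartProcessingEvent(EventTopic topic)
{
    // Loop instead of recursing from finally: the stack stays flat no matter
    // how many times the topic is reprocessed.
    while (true)
    {
        try
        {
            ExecuteProcessingEvent(topic); // StartNew + Wait added no concurrency, so call directly
        }
        catch (Exception)
        {
            // At minimum log here; silently swallowing exceptions hides the real failure.
        }
    }
}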
As Klaus says in his comment, a STATUS_ACCESS_VIOLATION exception is caused by a process reading or writing memory that it doesn't own. Given that this is C#, the most likely reason is either an incorrect use of P/Invoke or unsafe code.
The best approach to debugging something this vague is to isolate the issue by removing P/Invoke calls one by one until the exception no longer happens. It's hard to be more precise, because the exception may be triggered a long way from its cause (memory or stack corruption).
This SO answer gives a good list of the likely causes of an access violation in managed code:
Access violations in managed apps typically happen for one of these reasons:

- You P/Invoke a native API, passing in a handle to a managed object, and the native API uses that handle. If you get a collection and compaction while the native API is running, the managed object may move and the pointer becomes invalid (see the pinning sketch after this list).
- You P/Invoke something with a buffer that is too small, or smaller than the size you pass in, and the API overruns a read or write.
- A pointer (IntPtr, etc.) you pass to a P/Invoke call is invalid (-1 or 0) and the native code isn't checking it before use.
- You P/Invoke a native call, the native code runs out of memory (usually virtual), isn't checking for failed allocations, and reads/writes to an invalid address.
- You use a GCHandle that is not initialized, or that somehow points to an already finalized and collected object (so it's not pointing to an object, it's pointing to an address where an object used to be).
- Your app uses a handle to something that got invalidated by a sleep/wake. This is more esoteric but certainly happens. For example, if you're running an application off of a storage card, the entire app isn't loaded into RAM; the pieces in use are demand-paged in for execution. This is all well and good. Now if you power the device off, the drivers all shut down. When you power back up, many devices simply re-mount the storage devices. When your app needs to demand-page in more of the program, it's no longer where it was and it dies. Similar behavior can happen with databases on mounted stores: if you have an open handle to the database, the connection handle may no longer be valid after a sleep/wake cycle.
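A minimal sketch of the pinning fix for the first cause above. The native library and function (native.dll, FillBuffer) are hypothetical placeholders, not an API from the question:

using System;
using System.Runtime.InteropServices;

static class PinningExample
{
    // Hypothetical native function that fills a caller-supplied buffer.
    [DllImport("native.dll")]
    static extern int FillBuffer(IntPtr buffer, int length);

    static void CallNative()
    {
        byte[] managedBuffer = new byte[1024];

        // Pin the array so a GC compaction cannot move it while native code holds the pointer.
        GCHandle handle = GCHandle.Alloc(managedBuffer, GCHandleType.Pinned);
        try
        {
            FillBuffer(handle.AddrOfPinnedObject(), managedBuffer.Length);
        }
        finally
        {
            handle.Free(); // always release, or the array stays pinned forever
        }
    }
}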
A UWP app with a RichEditBox has memory and other kinds of issues in the release configuration with code optimization enabled. In debug, or in release with non-optimized code, it runs fine. The following code is inside a method running on the thread pool, via await Task.Run(() => MyMethod(richEditTextDocument));
// Getting the text for the first time works
richEditTextDocument.GetText(Windows.UI.Text.TextGetOptions.None, out string rtbOriginalText);

foreach (Match v in wordMatches)
{
    try
    {
        // In release mode with optimized code, at the very first iteration,
        // the line below throws "Insufficient memory to continue execution of the program"
        Windows.UI.Text.ITextRange selectedTextNew = richEditTextDocument.GetRange(v.Index, v.Index + v.Length);
    }
    catch
    {
        continue; // insufficient memory
    }
}

// In release with optimized code, calling GetText a second time throws
// (Exception from HRESULT: 0x8000FFFF)
//   at System.Runtime.InteropServices.McgMarshal.ThrowOnExternalCallFailed(Int32, RuntimeTypeHandle) + 0x21
//   at __Interop.ComCallHelpers.Call(__ComObject, RuntimeTypeHandle, Int32, TextGetOptions, Void *) + 0xc2
//   at __Interop.ForwardComStubs.Stub_67[TThis](__ComObject, TextGetOptions, String &, Int32) + 0x44
//   at Windows.UI.Text.RichEditTextDocument.GetText(TextGetOptions, String &) + 0x23

// Thinking that it could be fixed by setting the selection to 0 before GetText, but...
richEditTextDocument.Selection.StartPosition = 0; // this line throws "insufficient memory"
richEditTextDocument.Selection.EndPosition = 0;

// HRESULT: 0x8000FFFF (if execution reaches here, after deleting the two previous lines)
richEditTextDocument.GetText(Windows.UI.Text.TextGetOptions.None, out string rtbOriginalTextAnother);
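Possibly relevant: the RichEditBox and the ITextDocument it hands out are UI objects, so the calls may need to run on the UI thread rather than on the thread pool. A hedged sketch of marshaling them back (assuming richEditTextDocument belongs to the app's main view):

await Windows.ApplicationModel.Core.CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
    Windows.UI.Core.CoreDispatcherPriority.Normal,
    () =>
    {
        // All ITextDocument access happens on the UI thread inside this callback.
        richEditTextDocument.GetText(Windows.UI.Text.TextGetOptions.None, out string text);
        var range = richEditTextDocument.GetRange(0, text.Length);
    });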
Submission to the Store was rejected two times because of other minor errors, which were fixed, but the third time it passed the tests and was published without this error being noticed - and the error affects the main function of the app and leaves it "unusable" (as they (Microsoft) said the other times). Submitting a non-optimized build (but with the .NET Native toolchain) complains about missing DEBUG dlls. I had noticed the error but, since disabling code optimization when debugging the release build "fixed" it (as explained at https://devblogs.microsoft.com/devops/debugging-net-native-windows-universal-apps/, linked by the official Microsoft docs at https://learn.microsoft.com/en-us/windows/msix/package/packaging-uwp-apps), I forgot that it was only being "ignored". So, first time publishing, and I got an unusable app.
The app uses the NuGet packages Newtonsoft.Json, Win2D.uwp, and Microsoft.NETCore.UniversalWindowsPlatform, plus a "normal" reference to Microsoft.Advertising.Xaml (the app is also not showing ads in production; ErrorOccurred gives NoAdAvailable).
Thanks
I have the following code that throws an out-of-memory exception when writing large files. Is there something I'm missing?
I am not sure why it is throwing an out-of-memory error, as I thought the FileStream would only use a maximum of 4096 bytes for the buffer. I am not entirely sure what is meant by the buffer, to be honest, and any advice would be appreciated.
public static async Task CreateRandomFile(string pathway, int size, IProgress<int> prog)
{
    byte[] fileSize = new byte[size];
    new Random().NextBytes(fileSize);
    await Task.Run(() =>
    {
        using (FileStream fs = File.Create(pathway, 4096))
        {
            for (int i = 0; i < size; i++)
            {
                fs.WriteByte(fileSize[i]);
                prog.Report(i);
            }
        }
    });
}

public static void p_ProgressChanged(object sender, int e)
{
    int pos = Console.CursorTop;
    Console.WriteLine("Progress Copied: " + e);
    Console.SetCursorPosition(0, pos);
}

public static void Main()
{
    Console.WriteLine("Testing CopyLearning");
    //CopyFile()
    Progress<int> p = new Progress<int>();
    p.ProgressChanged += p_ProgressChanged;
    Task ta = CreateRandomFile(@"D:\Programming\Testing\RandomFile.asd", 99999999, p);
    ta.Wait();
}
Edit: the 99,999,999 was just chosen to create a 99 MB file.
Note: I have commented out prog.Report(i), and then it works fine.
It seems, for some reason, that the error occurs at the line
Console.WriteLine("Progress Copied: " + e);
I am not entirely sure why this causes an error. So might the error have been caused by the progress event?
Edit 2: I have followed advice to change the code so that it reports progress every 4000 bytes, by using the following:
if (i % 4000 == 0)
    prog.Report(i);
For some reason, I am now able to write files of up to 900 MB just fine.
I guess the question is: why would "Edit 2"'s code allow it to write up to 900 MB just fine? Is it because it's reporting progress and writing to the console up to 4000 times less often than before? I didn't realize the console would take up so much memory, especially because I'm assuming all it's doing is outputting "Progress Copied".
Edit 3:
For some reason, when I change the loop as follows:
for (int i = 0; i < size; i++)
{
    fs.WriteByte(fileSize[i]);
    Console.WriteLine(i);
    prog.Report(i);
}
where there is a Console.WriteLine() before the prog.Report(i), it works fine and copies the file, albeit taking a very long time to do so. This leads me to believe that this is a console-related issue for some reason, but I am not sure why.
fs.WriteByte(fileSize[i]);
prog.Report(i);
You created a fire-hose problem. After deadlocks and threading races, it is probably the third most likely problem caused by threads. And just as hard to diagnose.
Easiest to see by using the debugger's Debug + Windows + Threads window and looking at the thread that is executing CreateRandomFile(). With some luck, you'll see it has completed and has written all 99 MB. But the progress reported on the console is far behind this, having only reported about 125 KB written, give or take.
Core issue is the way Progress<>.Report() works. It uses SynchronizationContext.Post() to invoke the ProgressChanged event handler. In a console mode app that will call ThreadPool.QueueUserWorkItem(). That's quite fast, your CreateRandomFile() method won't be bogged down much by it.
But the event handler itself is quite a lot slower, console output is not very fast. So in effect, you are adding threadpool work requests at an enormous rate, 99 million of them in a handful of seconds. No way for the threadpool scheduler to keep up, you'll have roughly 4 of them executing at the same time. All competing to write to the console as well, only one of them can acquire the underlying lock.
So it is the threadpool scheduler that causes OOM, forced to store so many work requests.
And sure, when you call Report() less frequently, the fire-hose problem is a lot less severe. It's not actually that simple to ensure it never causes a problem, although directly calling Console.Write() is an obvious fix. Ultimately, simply create a usable UI that is useful to a human. Nobody likes a crazily scrolling window or a blur of text. Reporting progress no more frequently than 20 times per second is plenty good enough for the user's eyes, and the console has no trouble keeping up with that.
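A sketch of that kind of throttling, reusing the question's method and reporting at most ~20 times per second via a Stopwatch (the 50 ms interval is an assumption, not from the answer; this drops into the question's class):

public static async Task CreateRandomFile(string pathway, int size, IProgress<int> prog)
{
    byte[] data = new byte[size];
    new Random().NextBytes(data);

    await Task.Run(() =>
    {
        var clock = System.Diagnostics.Stopwatch.StartNew();
        using (FileStream fs = File.Create(pathway, 4096))
        {
            for (int i = 0; i < size; i++)
            {
                fs.WriteByte(data[i]);
                if (clock.ElapsedMilliseconds >= 50) // ~20 reports per second
                {
                    prog.Report(i);
                    clock.Restart();
                }
            }
        }
        prog.Report(size); // one final report so the last value shown is accurate
    });
}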
I am getting this error when loading large files into memory. What I don't understand is that my memory (as monitored by Task Manager) still shows only 7 GB used on a machine that has 32 GB. Is this memory exception referring to a constrained part of this memory? And if so, how do I allocate more? The code producing the error is below.
System.OutOfMemoryException occurred
HResult=-2147024882
Message=Exception of type 'System.OutOfMemoryException' was thrown.
Source=mscorlib
StackTrace:
at System.IO.File.InternalReadAllBytes(String path, Boolean checkHost)
InnerException:
Build platform: Active (x86), on 64-bit Windows 7.
Code:
public void LoadAllBinaries(string aKey)
{
    if (msDatas != null)
        return;
    msDatas = new SortedDictionary<string, byte[]>();
    var dataFiles = File.ReadAllLines(G.ConfigDir + @"\dates_out.txt");
    foreach (var df in dataFiles)
    {
        try
        {
            string fn = G.DataDir + "\\n" + aKey + df + ".dft";
            byte[] ba = File.ReadAllBytes(fn);
            msDatas.Add(fn, ba);
            ba = null;
        }
        catch (Exception e)
        {
            Console.WriteLine("LoadAllBinaries ERROR: " + e.Message);
        }
    }
}
You've either got a memory leak somewhere or a handle leak; both will give you the out-of-memory error.
You can see whether you've got a handle leak by opening Task Manager, going to View > Select Columns..., and turning on Handles. Start your process and let it run through a few iterations, and jot down the numbers you get. Then come back in a few hours and compare the numbers. If the handle numbers are considerably higher, you're not disposing an object that you should be.
Also ensure that you dispose of your objects. You can also try System.GC.Collect to force a garbage collection. Hope this helps.
Not exactly sure, but your exception stack trace says
at System.IO.File.InternalReadAllBytes(String path, Boolean checkHost)
which is the internals of File.ReadAllBytes, so the byte[] ba = File.ReadAllBytes(fn); line reading the .dft files is the likely culprit. The line below can hit the same problem if the size of dates_out.txt is large enough:
File.ReadAllLines(G.ConfigDir + @"\dates_out.txt")
What's the size of the dates_out.txt file you are reading? If it's gigabytes in size, try reading it line by line rather than reading the entire file at once (as suggested by others in the comments); a sketch follows below.
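A minimal sketch of the line-by-line variant, using File.ReadLines, which streams lazily instead of materializing every line up front:

// Only the current line is held in memory at a time.
foreach (var df in File.ReadLines(G.ConfigDir + @"\dates_out.txt"))
{
    // process one date entry, as in the loop from the question
}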
EDIT:
If you want to see the current memory consumption of a program's objects, you can get it using the GC.GetTotalMemory method. You can also use the windbg.exe tool or Microsoft's CLR Profiler.
long memoryUsed = GC.GetTotalMemory(true);
In Visual Studio, go to Project Properties and the Build screen. Where Any CPU is selected, check that the Prefer 32-bit checkbox for the platform target is not checked.
You can also explicitly select x64 there.
This resolved the same error for me.
I'm trying to use the code from the most popular answer to this question: Using C#, how does one figure out what process locked a file?
I'm testing this code in Windows 7 x64 using VS2010 and .NET v4.
I'm finding that the code excerpt...
var baTemp = new byte[nLength];
try
{
    Marshal.Copy(ipTemp, baTemp, 0, nLength);
    strObjectName = Marshal.PtrToStringUni(Is64Bits() ? new IntPtr(ipTemp.ToInt64()) : new IntPtr(ipTemp.ToInt32()));
}
catch (AccessViolationException)
{
    return null;
}
finally
{
    Marshal.FreeHGlobal(ipObjectName);
    Win32API.CloseHandle(ipHandle);
}
is what is causing my problems. Marshal.Copy can fail when the address created earlier is not valid. The code creating the address on x64 systems...
if (Is64Bits())
{
    ipTemp = new IntPtr(Convert.ToInt64(objObjectName.Name.Buffer.ToString(), 10) >> 32);
}
in one instance of my noted failures starts with a buffer string representation of 20588995036390572032, which translates to 0x1C92AA2089E00000. The code appears to strip the low-order word, leaving 0x1C92AA20 as the usable address.
Question 1: Why would we not simply use the 64-bit address provided by the buffer object, rather than shift out the low-order word and use just the high-order word, in a 64-bit app running on a 64-bit OS?
Question 2: Should the try/catch/finally block include more than just AccessViolationException?
Read the comments on that article. Unworkable code. Doesn't work. Don't use. Even the "suggested" corrected version is broken (and it doesn't work on my Win8 64-bit), and in the words of its author:
The following was produced based on Iain Ballard's code dump. It is broken: it will occasionally lock up when you retrieve the handle name. This code doesn't contain any work-arounds for that issue, and .NET leaves few options: Thread.Abort can no longer abort a thread that's currently in a native method.
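For the underlying goal - finding which processes hold a file open - the documented Restart Manager API (Vista and later) is the usual alternative to undocumented handle enumeration. A sketch with error handling trimmed; the signatures follow the documented rstrtmgr.dll API:

using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

static class FileLockers
{
    const int ERROR_MORE_DATA = 234;

    [StructLayout(LayoutKind.Sequential)]
    struct RM_UNIQUE_PROCESS
    {
        public int dwProcessId;
        public System.Runtime.InteropServices.ComTypes.FILETIME ProcessStartTime;
    }

    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    struct RM_PROCESS_INFO
    {
        public RM_UNIQUE_PROCESS Process;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 256)] public string strAppName;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 64)] public string strServiceShortName;
        public int ApplicationType;
        public uint AppStatus;
        public uint TSSessionId;
        [MarshalAs(UnmanagedType.Bool)] public bool bRestartable;
    }

    [DllImport("rstrtmgr.dll", CharSet = CharSet.Unicode)]
    static extern int RmStartSession(out uint pSessionHandle, int dwSessionFlags, string strSessionKey);

    [DllImport("rstrtmgr.dll")]
    static extern int RmEndSession(uint pSessionHandle);

    [DllImport("rstrtmgr.dll", CharSet = CharSet.Unicode)]
    static extern int RmRegisterResources(uint pSessionHandle, uint nFiles, string[] rgsFileNames,
        uint nApplications, RM_UNIQUE_PROCESS[] rgApplications, uint nServices, string[] rgsServiceNames);

    [DllImport("rstrtmgr.dll")]
    static extern int RmGetList(uint dwSessionHandle, out uint pnProcInfoNeeded, ref uint pnProcInfo,
        [In, Out] RM_PROCESS_INFO[] rgAffectedApps, ref uint lpdwRebootReasons);

    public static List<int> GetLockingProcessIds(string path)
    {
        var pids = new List<int>();
        if (RmStartSession(out uint session, 0, Guid.NewGuid().ToString("N")) != 0)
            throw new InvalidOperationException("Could not start a Restart Manager session.");
        try
        {
            RmRegisterResources(session, 1, new[] { path }, 0, null, 0, null);

            uint needed = 0, count = 0, reasons = 0;
            // The first call only sizes the array; ERROR_MORE_DATA is expected when the file is locked.
            int rc = RmGetList(session, out needed, ref count, null, ref reasons);
            if (rc == ERROR_MORE_DATA)
            {
                var info = new RM_PROCESS_INFO[needed];
                count = needed;
                if (RmGetList(session, out needed, ref count, info, ref reasons) == 0)
                    for (int i = 0; i < count; i++)
                        pids.Add(info[i].Process.dwProcessId);
            }
        }
        finally
        {
            RmEndSession(session);
        }
        return pids;
    }
}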