A UWP app with a RichEditBox has memory and other errors in the Release configuration with code optimization enabled. In Debug, or in Release with unoptimized code, it runs fine. The following code is inside a method running on the thread pool via await Task.Run(() => MyMethod(richEditTextDocument));
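For context, a minimal sketch of that calling pattern, reconstructed from the description above (myRichEditBox and MyMethod are placeholders, not from the original code):
// Hypothetical calling context: the document is obtained from the
// RichEditBox on the UI thread, then processed on the thread pool.
Windows.UI.Text.ITextDocument richEditTextDocument = myRichEditBox.Document;
await Task.Run(() => MyMethod(richEditTextDocument));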
// Getting the text for the first time works
richEditTextDocument.GetText(Windows.UI.Text.TextGetOptions.None, out string rtbOriginalText);
foreach (Match v in wordMatches)
{
try
{
// In Release mode with optimized code, at the very first iteration,
// the line below throws
// "Insufficient memory to continue execution of the program"
Windows.UI.Text.ITextRange selectedTextNew = richEditTextDocument.GetRange(v.Index, v.Index + v.Length);
}
catch
{
continue; //insufficient memory
}
}
// In Release with optimized code, calling GetText a second time throws
//(Exception from HRESULT: 0x8000FFFF)
// at System.Runtime.InteropServices.McgMarshal.ThrowOnExternalCallFailed(Int32, RuntimeTypeHandle) + 0x21
// at __Interop.ComCallHelpers.Call(__ComObject, RuntimeTypeHandle, Int32, TextGetOptions, Void *) + 0xc2
// at __Interop.ForwardComStubs.Stub_67[TThis](__ComObject, TextGetOptions, String &, Int32) + 0x44
// at Windows.UI.Text.RichEditTextDocument.GetText(TextGetOptions, String &) + 0x23
// Thought this could be fixed by setting the selection to 0 before GetText, but...
richEditTextDocument.Selection.StartPosition = 0; // this line throws "insufficient memory"
richEditTextDocument.Selection.EndPosition = 0;
// HRESULT: 0x8000FFFF (if execution reaches here after deleting the two previous lines)
richEditTextDocument.GetText(Windows.UI.Text.TextGetOptions.None, out string rtbOriginalTextAnother);
The submission to the Store was rejected twice because of other minor errors, which were fixed, but the third time it passed the test and was published, without this error being noticed, even though it affects the main function of the app and leaves it "unusable" (as they (Microsoft) said the other times). Submitting a build with unoptimized code (but with the .NET Native toolchain) fails with complaints about missing DEBUG DLLs. I had noticed the error, but since disabling code optimization when debugging the Release build "fixed" it (as explained at https://devblogs.microsoft.com/devops/debugging-net-native-windows-universal-apps/, linked from the official Microsoft docs at https://learn.microsoft.com/en-us/windows/msix/package/packaging-uwp-apps), I forgot that it was only being "ignored". So, first time publishing and I got an unusable app.
The app uses the NuGet packages Newtonsoft.Json, Win2D.uwp, and Microsoft.NETCore.UniversalWindowsPlatform, plus a regular reference to Microsoft.Advertising.Xaml (the app is also not showing ads in production; ErrorOccurred reports NoAdAvailable).
I am new to C# and trying to read a .sgy file that contains seismic data. I found a library known as Unplugged.SEGY for reading the file. My file is 4.12 GB, and I am getting "A first chance exception of type 'System.OutOfMemoryException' occurred in mscorlib.dll", then the program stops suddenly. This is my code:
using System;
using Unplugged.Segy;
namespace ABC
{
class abc
{
static void Main(String[] args)
{
var reader = new SegyReader();
ISegyFile line = reader.Read(@"D:\Major\Seismic.sgy");
ITrace trace = line.Traces[0];
double mean = 0;
double max = double.MinValue;
double min = double.MaxValue;
foreach (var sampleValue in trace.Values)
{
mean += sampleValue / trace.Values.Count;
if (sampleValue < min) min = sampleValue;
if (sampleValue > max) max = sampleValue;
}
Console.WriteLine(mean);
Console.WriteLine(min);
Console.WriteLine(max);
}
}
}
Please help me out.
EDIT: I am running the application as a 64-bit process.
Since you are running in 64-bit (and as long as you're on .NET 4.5+), I recommend making sure the gcAllowVeryLargeObjects flag is set to true.
In .NET, a 32-bit application is capped at roughly 2-4 GB of address space per process. A 64-bit application can consume much more per process.
However, in both 32-bit and 64-bit processes, a single object can normally consume at most 2 GB.
However, to trump that statement in turn: since .NET 4.5 you can flag your configuration to allow objects greater than 2 GB.
My final thought is that this flag needs to be set in your situation.
To have a .NET process use more than 4 GB, it must be a 64-bit process.
To have a single object greater than 2 GB, it must be in a 64-bit process, running .NET 4.5 or later, with the gcAllowVeryLargeObjects flag set to true.
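For reference, the flag goes in the application's App.config under the runtime element:
<configuration>
  <runtime>
    <!-- Allow single objects (e.g. arrays) larger than 2 GB on 64-bit -->
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>
Note that even with this flag, a single array dimension is still limited to roughly 2^31 elements, so a multi-gigabyte file may need to be processed in chunks regardless.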
Application/Code description:
My application is written in C# and uses SQL Server CE, and I've gotten this exception only twice, at the same code location. The crash with this exception did not occur before this version; the only change in this version was updating the .NET Framework to 4.5.2.
I'm getting an access violation exception on the dispose of a SqlCeConnection, with the following error:
Attempted to read or write protected memory. This is often an
indication that other memory is corrupt.
This exception is not intercepted by the try/catch clause of .NET; it causes a crash.
In my code I run the following:
try
{
var connectionString = string.Format("{0}{1}{2}", "Data Source=", _localDB, ";File Mode=Read Write;Max Database Size=4000;Persist Security Info=False;");
using (var sqlCeConnection = new SqlCeConnection(connectionString))
{
using (var sqlCeCommand = new SqlCeCommand())
{
sqlCeCommand.Connection = sqlCeConnection;
sqlCeCommand.CommandText = "SELECT * FROM Application";
sqlCeConnection.Open();
var result = (string)sqlCeCommand.ExecuteScalar();
isValid = !IsValid(result);
}
}
}
catch (Exception ex)
{
_log.Error("exception", ex);
}
call stack for the first crash:
ntdll!ZwWaitForMultipleObjects+a
KERNELBASE!WaitForMultipleObjectsEx+e8
kernel32!WaitForMultipleObjectsExImplementation+b3
kernel32!WerpReportFaultInternal+215
kernel32!WerpReportFault+77
kernel32!BasepReportFault+1f
kernel32!UnhandledExceptionFilter+1fc
ntdll! ?? ::FNODOBFM::`string'+2365
ntdll!_C_specific_handler+8c
ntdll!RtlpExecuteHandlerForException+d
ntdll!RtlDispatchException+45a
ntdll!KiUserExceptionDispatcher+2e
sqlcese35!__SafeRelease+c
sqlcese35!Column::`vector deleting destructor'+5c
sqlcese35!Object::DeleteObjects+39
sqlcese35!Table::`vector deleting destructor'+45
sqlcese35!Table::Release+27
sqlcese35!HashTable::~HashTable+2a
sqlcese35!Store::~Store+12b
sqlcese35!Store::Release+2a
sqlceme35!ME_SafeRelease+17
DomainBoundILStubClass.IL_STUB_PInvoke(IntPtr ByRef)+78
[[InlinedCallFrame] (System.Data.SqlServerCe.NativeMethods.SafeRelease)] System.Data.SqlServerCe.NativeMethods.SafeRelease(IntPtrByRef)
System.Data.SqlServerCe.SqlCeConnection.ReleaseNativeInterfaces()+147
System.Data.SqlServerCe.SqlCeConnection.Dispose(Boolean)+f1
System_ni!System.ComponentModel.Component.Dispose()+18
call stack for the second crash:
ntdll!NtWaitForMultipleObjects+a
KERNELBASE!WaitForMultipleObjectsEx+e8
kernel32!WaitForMultipleObjectsExImplementation+b3
kernel32!WerpReportFaultInternal+215
kernel32!WerpReportFault+77
kernel32!BasepReportFault+1f
kernel32!UnhandledExceptionFilter+1fc
ntdll! ?? ::FNODOBFM::`string'+2335
ntdll!_C_specific_handler+8c
ntdll!RtlpExecuteHandlerForException+d
ntdll!RtlDispatchException+45a
ntdll!KiUserExceptionDispatcher+2e
<Unloaded_sqlcese35.dll>+7c88c
<Unloaded_sqlceqp35.dll>+102790
0x06ccc898
0x06f9efc8
0x1eca8018
0x1f207400
<Unloaded_sqlcese35.dll>+228dc
0x00000004
0x2edff008
0x00000002
0x00000003
0x00000004
<Unloaded_sqlcese35.dll>+3fbd9
0x06ccc898
DomainBoundILStubClass.IL_STUB_PInvoke(IntPtr ByRef)+78
[[InlinedCallFrame] (System.Data.SqlServerCe.NativeMethods.SafeRelease)] System.Data.SqlServerCe.NativeMethods.SafeRelease(IntPtrByRef)
System.Data.SqlServerCe.SqlCeConnection.ReleaseNativeInterfaces()+147
System.Data.SqlServerCe.SqlCeConnection.Dispose(Boolean)+f1
System_ni!System.ComponentModel.Component.Dispose()+1b
I found some references on the internet that suggest some solutions:
Probable solution: check for a multithreading issue on the same connection (attempted to read write protected memory. this is often an indication that other memory is corrupt)
Rejection:
a. The connection is created inside the using block and doesn't get reused.
b. The calling method is called every 5 minutes, and I verified via the dump file that it was not called simultaneously.
Probable solution: SQL CE version mismatch (http://blogs.msdn.com/b/sqlservercompact/archive/2009/05/06/troubleshooting-access-violation-exception-while-using-sql-server-compact-database-with-ado-net-provider.aspx)
Probable rejection: I can see that the installed version is 3.5 SP2 (3.5.8080.0), and from the modules located in the dump I can see the sqlceme35.dll and System.Data.SqlServerCe.dll DLLs are version 3.05.8080.0.
The probable solution that is in question is the following:
https://stackoverflow.com/a/20492181/1447518
Probable rejection: it doesn't sound right from a statistical perspective. The code crashed twice in the same place, although there is another place in the application code that writes to and reads from a different DB, and the application didn't crash there.
The last thing I was thinking about may suggest a DLL unload problem (take a look at the second call stack). My guess is that the DLLs were unloaded from the application while the application still needed them in order to do the dispose, but it seems a bit blurry and a long shot.
My question is: what may cause the problem, and what is a probable solution?
Although this solution is not yet verified, it is as follows:
From the second call stack I can see that native DLLs are being unloaded; my guess was that the dispose method of the SQL connection was using one of the modules that had already been unloaded.
I verified through the process dump that all the SqlCeConnection instances were in the process of being disposed.
Seeing ErikEj's comment made me realize it would be better to look at the code differences between SQL CE 3.5 and 4.0 (System.Data.SqlServerCe.dll).
After viewing the code, I could see that the release call was moved to a later position inside the dispose method.
In addition, I could see that before calling SafeRelease there is now a check of whether the native DLLs needed for the safe release have already been unloaded, which throws an exception instead of crashing.
Bottom line: SQL CE 4.0 has two fixes for the same issue, and my guess is that this is exactly what caused the crash.
The workaround for now is to keep a connection (which needs no connection string) alive for the whole application life cycle; this causes the native DLLs to stay in memory for the application's lifetime.
The better solution is to move to SQL CE 4.0.
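A minimal sketch of that workaround, assuming the goal is simply to hold one SqlCeConnection instance from startup to shutdown (the class and method names are illustrative, not from the original code):
using System.Data.SqlServerCe;
// Illustrative sketch: hold one SqlCeConnection for the whole application
// lifetime so the native SQL CE DLLs (sqlcese35.dll etc.) stay loaded.
public static class SqlCeKeepAlive
{
    private static SqlCeConnection _keepAlive;

    public static void Start() // call at application startup
    {
        _keepAlive = new SqlCeConnection(); // no connection string needed
    }

    public static void Stop() // call at application shutdown
    {
        if (_keepAlive != null)
        {
            _keepAlive.Dispose();
            _keepAlive = null;
        }
    }
}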
I am trying to get virtual machine configuration info for a VM in VMware using the web services SDK approach. I was able to get the virtual machine configuration info from a simple console application and from the command-line interface (PowerShell) of my tool. However, when I tried to do the same in my UI (an MMC snap-in), I got a StackOverflowException. Can you please help me, or give me suggestions on how to debug the error?
Please note that the same code works from the console/command line (PowerShell), just not from the MMC UI (I took care of serialization). Is it something to do with stack limitations in MMC? I don't have any clue how to debug this. Any ideas/suggestions would really help.
I have given the code below. Please note that as soon as I uncomment the "config" property in the property collection, I get the stack overflow from the MMC snap-in (UI).
In other words, do I need to increase the stack size for the MMC UI?
Increasing the max stack size of the thread to 8 MB (8388608) stops the exception from being thrown. But I am not happy with this fix: what if bigger data comes?
In fact, setting a 1 MB stack size also works, so probably the default stack size for MMC threads is low. I am not sure whether increasing it to 1 MB causes any side effects, though. Any comments/thoughts? A sketch of the explicit-stack-size workaround is below.
By the way, the exception is coming from the VMware SDK (VimService/serializers/System.Xml), which I have no control over.
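For reference, a minimal sketch of that workaround (the 8 MB figure is the one quoted above; RetrieveVmConfiguration is a placeholder for the retrieval code below):
using System.Threading;
// Hypothetical sketch: run the VMware property retrieval on a worker
// thread with an explicit 8 MB stack instead of the MMC host's default.
var worker = new Thread(() => RetrieveVmConfiguration(), 8 * 1024 * 1024);
worker.Start();
worker.Join();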
Regards,
Naresh
// Traversal from a Datacenter down through its vmFolder
TraversalSpec datacenterVMTraversalSpec = new TraversalSpec();
datacenterVMTraversalSpec.type = "Datacenter";
datacenterVMTraversalSpec.name = "datacenterVMTraversalSpec";
datacenterVMTraversalSpec.path = "vmFolder";
datacenterVMTraversalSpec.skip = false;
datacenterVMTraversalSpec.selectSet = new SelectionSpec[] { new SelectionSpec() };
datacenterVMTraversalSpec.selectSet[0].name = "folderTraversalSpec";
// Recursive traversal through Folder child entities
TraversalSpec folderTraversalSpec = new TraversalSpec();
folderTraversalSpec.name = "folderTraversalSpec";
folderTraversalSpec.type = "Folder";
folderTraversalSpec.path = "childEntity";
folderTraversalSpec.skip = false;
folderTraversalSpec.selectSet = new SelectionSpec[] { new SelectionSpec(), datacenterVMTraversalSpec };
folderTraversalSpec.selectSet[0].name = "folderTraversalSpec";
// Select which VirtualMachine properties to retrieve
PropertyFilterSpec propFilterSpec = new PropertyFilterSpec();
propFilterSpec.propSet = new PropertySpec[] { new PropertySpec() };
propFilterSpec.propSet[0].all = false;
propFilterSpec.propSet[0].type = "VirtualMachine";
propFilterSpec.propSet[0].pathSet = new string[] { "name",
//"config", //TODO: investigate including config is throwing stack overflow exception in MMC UI.
"summary",
"datastore",
"resourcePool"
};
propFilterSpec.objectSet = new ObjectSpec[] { new ObjectSpec() };
propFilterSpec.objectSet[0].obj = this.ServiceUtil.GetConnection().Root;
propFilterSpec.objectSet[0].skip = false;
propFilterSpec.objectSet[0].selectSet = new SelectionSpec[] { folderTraversalSpec };
// Retrieve the selected properties through the property collector
VimService vimService = this.ServiceUtil.GetConnection().Service;
ManagedObjectReference objectRef = this.ServiceUtil.GetConnection().PropCol;
PropertyFilterSpec[] filterSpec = new PropertyFilterSpec[] { propFilterSpec };
ObjectContent[] ocArray = vimService.RetrieveProperties(objectRef, filterSpec);
The easiest way to get a stack overflow exception is infinite recursion, so that would be the first thing I would look for. Do you have a stack trace for your exception? That would let you know immediately whether that's the case.
For technical reasons, the amount of stack space is fixed. This means that particularly RAM-heavy recursive algorithms are in trouble: it is always possible that certain inputs will overrun the threshold, and the program will crash despite lots of free RAM still being available.
On Windows, the memory for the stack is reserved, but typically not committed (except in .NET). This means that if your program wants a 100 MB stack, only the address space is used up. Other programs can still use the same amount of RAM as they could before you declared that you might use up to 100 MB of stack.
In .NET, because the stack space is committed, the total amount of memory other programs can allocate does go down by 100 MB in this case, but no physical RAM is actually allocated until your algorithm genuinely needs it.
So increasing the stack size is not as bad as you might think, especially if you're not coding in .NET.
I've had an algorithm run into a stack limitation. Unfortunately, our algorithm was part of a library, and not wanting to require our callers to start their threads with a larger stack, we rewrote the algorithm to use an explicit stack in a loop instead of recursion. This made the algorithm slower and much harder to understand, and it made debugging nearly impossible. But it did the job.
So rewriting with an explicit stack is one possibility, but I recommend against it unless you absolutely must handle the incoming data no matter how large it is, up to the limit of the available RAM, and are not happy with setting a smaller hard limit. A sketch of the idea follows.
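For illustration only (this is not the library code mentioned above), here is the general shape of such a rewrite in C#: a directory walk whose pending work lives in a heap-allocated Stack<T> rather than in call frames:
using System.Collections.Generic;
using System.IO;
// Illustrative sketch: an explicit stack in a loop instead of recursion,
// so traversal depth is bounded by heap size, not by the thread stack.
static IEnumerable<string> EnumerateAllFiles(string root)
{
    var pending = new Stack<string>();
    pending.Push(root);
    while (pending.Count > 0)
    {
        string dir = pending.Pop();
        foreach (string sub in Directory.GetDirectories(dir))
            pending.Push(sub); // this would have been a recursive call
        foreach (string file in Directory.GetFiles(dir))
            yield return file;
    }
}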
My application traverses a directory tree, and in each directory it tries to open a file with a particular name (using File.OpenRead()). If this call throws FileNotFoundException, then it knows that the file does not exist. Should I rather have a File.Exists() call before that to check whether the file exists? Would this be more efficient?
Update
I ran these two methods in a loop and timed each:
void throwException()
{
try
{
throw new NotImplementedException();
}
catch
{
}
}
void fileOpen()
{
string filename = string.Format("does_not_exist_{0}.txt", random.Next());
try
{
File.Open(filename, FileMode.Open);
}
catch
{
}
}
void fileExists()
{
string filename = string.Format("does_not_exist_{0}.txt", random.Next());
File.Exists(filename);
}
Random random = new Random();
These are the results without the debugger attached, running a release build:
Method           Iterations per second
throwException   10100
fileOpen          2200
fileExists       11300
The cost of throwing an exception is a lot higher than I was expecting, and calling File.Open on a file that doesn't exist seems much slower than checking the existence of a file that doesn't exist.
In the case where the file will often not be present, it appears to be faster to check whether the file exists. I would imagine that in the opposite case, when the file is usually present, you will find it is faster to catch the exception. If performance is critical to your application, I suggest you benchmark both approaches on realistic data.
As mentioned in other answers, remember that even if you check for the existence of the file before opening it, you should be careful of the race condition where someone deletes the file after your existence check but just before you open it. You still need to handle the exception; see the sketch below.
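A minimal sketch of that combined pattern (TryOpenRead is an illustrative name, not from the question):
using System.IO;
// Use File.Exists as an optimization, but keep the handler for the race
// where the file disappears between the check and the open.
static FileStream TryOpenRead(string path)
{
    if (!File.Exists(path))
        return null;
    try
    {
        return File.OpenRead(path);
    }
    catch (FileNotFoundException)
    {
        return null; // deleted between the check and the open
    }
}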
No, don't. If you use File.Exists, you introduce a concurrency problem. If you wrote this code:
if file exists then
open file
then if another program deleted your file between your File.Exists check and the moment you actually open the file, the program will still throw an exception.
Second, even if a file exists, that does not mean you can actually open it; you might not have permission to open the file, or the file might be on a read-only filesystem so you can't open it in write mode, etc.
File I/O is much, much more expensive than an exception; there is no need to worry about the performance of exceptions.
EDIT:
Benchmarking Exception vs Exists in Python under Linux
import timeit
setup = 'import random, os'
s = '''
try:
open('does not exist_%s.txt' % random.randint(0, 10000)).read()
except Exception:
pass
'''
byException = timeit.Timer(stmt=s, setup=setup).timeit(1000000)
s = '''
fn = 'does not exist_%s.txt' % random.randint(0, 10000)
if os.path.exists(fn):
open(fn).read()
'''
byExists = timeit.Timer(stmt=s, setup=setup).timeit(1000000)
print('byException:', byException)  # byException: 23.2779269218
print('byExists:', byExists)  # byExists: 22.4937438965
Is this behavior truly exceptional? If it is expected, you should be testing with an if statement and not using exceptions at all. Performance isn't the only issue with this solution, and from the sound of what you are trying to do, performance should not be an issue. Therefore, style and a good approach should be the main concerns here.
So, to summarize, since you expect some tests to fail, do use the File.Exists to check instead of catching exceptions after the fact. You should still catch other exceptions that can occur, of course.
It depends!
If there's a high chance of the file being there (you know this for your scenario, but as an example, something like desktop.ini), I would prefer to directly try to open it.
Either way, if you use File.Exists you still need to put File.OpenRead in a try/catch for concurrency reasons, to avoid any run-time exception; but the check can considerably boost your application's performance if the chance of the file being there is low. (See the Ostrich algorithm.)
Wouldn't it be most efficient to run a directory search, find it, and then try to open it?
Dim Files() As String = System.IO.Directory.GetFiles("C:\", "SpecificName.txt", IO.SearchOption.AllDirectories)
Then you would get an array of strings that you know exist.
Oh, and as an answer to the original question: yes, try/catch would introduce more processor cycles, but I would also assume that the I/O peek actually takes longer than the processor-cycle overhead.
Running Exists first and then the open is two I/O operations against one for just trying to open it. So really, the overall performance is a judgment call on processor time vs. hard drive speed on the PC it will be running on. With a slower processor, I'd go with the check; with a fast processor, I might go with the try/catch on this one.
File.Exists is a good first line of defense. If the file doesn't exist, then you're guaranteed to get an exception if you try to open it. The existence check is cheaper than the cost of throwing and catching an exception. (Maybe not much cheaper, but a bit.)
There's another consideration, too: debugging. When you're running in the debugger, the cost of throwing and catching an exception is higher, because the IDE has hooks into the exception mechanism that increase your overhead. And if you've checked any of the "Break on thrown" checkboxes in Debug > Exceptions, then any avoidable exceptions become a huge pain point. For that reason alone, I would argue for preventing exceptions when possible.
However, you still need the try-catch, for the reasons pointed out by other answers here. The File.Exists call is merely an optimization; it doesn't save you from needing to catch exceptions due to timing, permissions, solar flares, etc.
I don't know about efficiency, but I would prefer the File.Exists check. The problem is all the other things that could happen: a bad file handle, etc. If your program logic knows that sometimes the file won't exist, and you want different behavior for existing vs. non-existing files, use File.Exists. If its lack of existence is the same as other file-related exceptions, just use exception handling.
Vexing Exceptions -- more about using exceptions well
Yes, you should use File.Exists. Exceptions should be used for exceptional situations not to control the normal flow of your program. In your case, a file not being there is not an exceptional occurrence. Therefore, you should not rely on exceptions.
UPDATE:
So everyone can try it for themselves, I'll post my test code. For non-existing files, relying on File.Open to throw an exception for you is about 50 times worse than checking with File.Exists.
class Program
{
static void Main(string[] args)
{
TimeSpan ts1 = TimeIt(OpenExistingFileWithCheck);
TimeSpan ts2 = TimeIt(OpenExistingFileWithoutCheck);
TimeSpan ts3 = TimeIt(OpenNonExistingFileWithCheck);
TimeSpan ts4 = TimeIt(OpenNonExistingFileWithoutCheck);
}
private static TimeSpan TimeIt(Action action)
{
int loopSize = 10000;
DateTime startTime = DateTime.Now;
for (int i = 0; i < loopSize; i++)
{
action();
}
return DateTime.Now.Subtract(startTime);
}
private static void OpenExistingFileWithCheck()
{
string file = #"C:\temp\existingfile.txt";
if (File.Exists(file))
{
using (FileStream fs = File.Open(file, FileMode.Open, FileAccess.Read))
{
}
}
}
private static void OpenExistingFileWithoutCheck()
{
string file = #"C:\temp\existingfile.txt";
using (FileStream fs = File.Open(file, FileMode.Open, FileAccess.Read))
{
}
}
private static void OpenNonExistingFileWithCheck()
{
string file = #"C:\temp\nonexistantfile.txt";
if (File.Exists(file))
{
using (FileStream fs = File.Open(file, FileMode.Open, FileAccess.Read))
{
}
}
}
private static void OpenNonExistingFileWithoutCheck()
{
try
{
string file = #"C:\temp\nonexistantfile.txt";
using (FileStream fs = File.Open(file, FileMode.Open, FileAccess.Read))
{
}
}
catch (Exception ex)
{
}
}
}
On my computer:
ts1 = .75 seconds (same with or without debugger attached)
ts2 = .56 seconds (same with or without debugger attached)
ts3 = .14 seconds (same with or without debugger attached)
ts4 = 14.28 seconds (with debugger attached)
ts4 = 1.07 (without debugger attached)
UPDATE:
I added details on whether a debugger was attached or not. I tested debug and release builds, but the only thing that made a difference was the one function that ended up throwing exceptions while the debugger was attached (which makes sense). Still, checking with File.Exists is the best choice.
I would say that, generally speaking, exceptions "increase" the overall "performance" of your system!
In your sample, anyway, it is better to use File.Exists...
The problem with using File.Exists first is that it accesses the file too, so you end up touching the file twice. I haven't measured it, but I would guess this additional access is more expensive than the occasional exception.
Whether the File.Exists check improves performance depends on the probability of the file existing: if the file likely exists, don't use File.Exists; if it usually doesn't exist, the additional check will improve performance. A rough cost model is sketched below.
The overhead of an exception is noticeable, but it's not significant compared to file operations.
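As a rough back-of-the-envelope model (using the approximate numbers from the benchmark earlier in this thread: about 0.09 ms per File.Exists call and about 0.45 ms per failed File.Open), let p be the probability that the file exists and C_open the cost of a successful open, which is paid either way:
cost with check    = 0.09 + p * C_open
cost without check = p * C_open + (1 - p) * 0.45
The check pays off when 0.09 < (1 - p) * 0.45, i.e. when p < 0.8. Under these assumptions, skipping the check only wins when the file exists more than roughly 80% of the time.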