I have a WinForms (.NET C#) OLTP application based on Oracle.
In our support environment we regularly lose connectivity to the database, and a minidump file is generated as a result (by what, I am not entirely certain). Apparently this does not crash the application, but in order to actually do anything you have to close it and start it again.
After many such minidumps have been created in the same directory, they suddenly start getting rather strange file names, names that are apparently "illegal" on Windows.
For instance we have a file name like:
"°÷ƒ
_minidump_default_pid_20248_tid_x19AC_2015_9_1_8_31_51.dmp"
And yes, the carriage return is PART of the file name.
We discovered this because log4net watches the directory and suddenly started throwing unhandled exceptions due to these invalid file names.
So we are trying to figure out why the minidump is generated in the first place, but the question here is: can we somehow prevent the minidump from being generated with an invalid file name, or otherwise control the naming process?
Secondly, does anybody know why it is even possible to create invalid file names in the first place?
Update:
For anyone looking at this and trying to figure out why the dump files are created in the first place: our issue was that Windows was generating them when it was close to running out of memory, but for some reason we wouldn't always get an OutOfMemoryException.
First, you should really try to find out how those dumps are generated. Microsoft, for example, provides a nice mechanism via a registry key called LocalDumps, which has been a great help for me. I am sure that this approach won't generate invalid file names like the ones above.
Second, if the application does not crash, it has probably registered an unhandled exception handler. That is basically OK and is designed to write crash dumps, but the unhandled exception is then handled by the crashing process itself. How can the code handling the situation be sure it is not itself affected by the crash? The better option is to let Windows, as the OS, handle the crash. Then the Windows kernel (which is not affected by the crash) can really deal with the situation. That's what LocalDumps does.
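As an illustration (not something from the question), a minimal C# sketch that sets the documented LocalDumps values under HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps; the dump folder path is just an example and the code has to run elevated:

using Microsoft.Win32;

// WER LocalDumps settings; the folder below is only an example path.
using (RegistryKey key = Registry.LocalMachine.CreateSubKey(
    @"SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps"))
{
    key.SetValue("DumpFolder", @"C:\CrashDumps", RegistryValueKind.ExpandString); // where dumps are written
    key.SetValue("DumpCount", 10, RegistryValueKind.DWord);                       // keep at most 10 dumps
    key.SetValue("DumpType", 2, RegistryValueKind.DWord);                         // 2 = full dump, 1 = mini dump
}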
Third, Windows allows more direct file system access via paths that start with \\.\ (or \\?\) when they are passed to the Windows API. Starting a path like that skips the usual file name checks, so it is possible to generate files with reserved characters such as *, ?, : or newlines, as you observed. The unhandled exception handler of your application is probably doing that, and is itself affected by the crash in a way that overwrites parts of the file name.
Chkdsk should be able to repair the file system.
Please check whether you are installing from a network path like \\remoteserver\d$\client.
If so, change it to \\remoteserver\d\client.
The "$" (administrative share) in the share path can cause issues when extracting files that require elevated permissions.
Related
When calling File.WriteAllText to generate files, the C# Windows Forms program intermittently hits a catch block due to an exception with the message "FileStream was asked to open a device that was not a file. For support for devices like 'com1:' or 'lpt1:', call CreateFile, then use the FileStream constructors that take an OS handle as an IntPtr."
The stack trace shows Microsoft.Win32.Win32Native.SafeCreateFile as the last frame it reached.
After days of researching why this was happening, most articles I found talk about reserved file names like "com" and so on. However, this exception is hit in the middle of exporting files, after several files have already been written. These files don't have any reserved names in them; they all have the same path except for the end of the file name, where a hyphen and a zero-padded sequence number are appended. Since the process gets through similarly named files, I don't believe reserved names are the actual cause. The other issue is that I can run the same export process multiple times and not reproduce it every time; out of 100 or so export runs it might happen only once, even though nothing in the name or path had changed.
Today I found a way to reproduce it almost every time. If I open, say, "a.singlediv.com" in Chrome to test with, start the export, and then repeatedly reload that site while the export is running, it will halt the export and hit the exception mentioned above 99% of the time.
Does anyone have any advice on why this exception is thrown in this scenario? Task Manager doesn't show any glaring memory or CPU overload, and a log entry is still posted to our SQL Server after hitting the exception, so I also don't believe it is a network issue.
Update
With more testing yesterday I was able to narrow down when the error happens. While refreshing the website repeatedly, I tried a couple of different ways of exporting to get more information.
A custom configuration file, a text file, and an Excel file are used as setup files for our program. I made identical folders on the Desktop and on a mapped drive that points to a folder on our server; these hold the setup files and are also where the exports are saved. I then tried different combinations of load and save locations.
Loading the setup files from the Desktop folder and saving to the Desktop, loading from the Desktop folder and saving to the mapped drive, or loading from the mapped drive and saving to the Desktop folder: none of these seem to hit the exception. So far the only way I have gotten the exception to occur is loading from the mapped drive and saving to the mapped drive while constantly refreshing the div website.
Attached below is a screenshot of the Exception.ToString() (excluding our custom methods that called File.WriteAllText() earlier in the stack trace). I manually added line breaks between each method call so that it is easier to read.
Workaround Update
Per Mason's suggestion I wrapped my File.WriteAllText call in a Polly Retry, which so far has allowed the export to finish every file while debugging the code.
I first did a RetryPolicy.Execute that wrapped the File.WriteAllText. Before File.WriteAllText I wrote the attempt # to the Output window, and after it I wrote a message about completing. I did get an "Attempt #2" once, but I was also getting "Attempt #1" and the completed message for every call, which was hard to follow since it happens over 2000 times. So I added an if statement so that the attempt # and a "retry handled exception" message would only be written to the Output window when it was not the first attempt. However, after adding these if statements I didn't get any attempt #s and the debugger didn't break on the exception, but the Output window did have a line about hitting the exception, so I assume that means the retry handled it? I would have expected to see "Attempt #2" and "Retry handled Exception" around the exception message in the Output window, since the export didn't stop and every needed file was created. I have added a code snippet of what I added below.
int attempt = 0;
Polly.Retry.RetryPolicy retryIfException = Policy.Handle<System.NotSupportedException>().Retry(3);
retryIfException.Execute(() =>
{
    // attempt starts at 0 and is only ever incremented inside this block,
    // so this condition is never true and no attempt number is ever written.
    if (attempt > 0)
    {
        Console.WriteLine($"Attempt #{++attempt}");
    }
    File.WriteAllText(saveLocation, this.currentFileText.ToString());
    // Never reached for the same reason: attempt never gets past 0.
    if (attempt > 1)
    {
        Console.WriteLine("Retry handled Exception");
    }
});
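For comparison, here is a sketch (not the questioner's code) of how the counter could be made to reflect what actually happens: the attempt counter is incremented on every execution, and Polly's onRetry callback (an overload of Retry) reports each handled exception.

int attempt = 0;

// Retry up to 3 times on NotSupportedException; onRetry runs after each handled failure.
Polly.Retry.RetryPolicy retryIfException = Policy
    .Handle<System.NotSupportedException>()
    .Retry(3, onRetry: (exception, retryCount) =>
        Console.WriteLine($"Retry #{retryCount} handled {exception.GetType().Name}"));

retryIfException.Execute(() =>
{
    attempt++;                      // counts every execution, including the first
    if (attempt > 1)
    {
        Console.WriteLine($"Attempt #{attempt}");
    }

    File.WriteAllText(saveLocation, this.currentFileText.ToString());
});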
The OpenFileByID line in test() is giving me a System.AccessViolationException: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt."
I am trying to replicate this code example (see the answer), which I'm running in Visual Studio Express 2013 for Windows Desktop. But this example doesn't seem to work for me. It is breaking on the OpenFileByID line in test().
In a nutshell, I am getting a file's ID, then trying to create a file handle from that ID. Later on I plan to use that handle to get information about the file. The reason I'm using IDs is so that I can repair broken links, since a target file's GUID is far more reliable than its presumed location. Help appreciated!
Edit: The file I'm trying to open is an ordinary text file on my Desktop, nothing special.
You're not checking whether you got a valid volume handle, which you might not have. That could be the source of your access violation.
When you're opening the root directory, the documentation says FILE_ATTRIBUTE_NORMAL must not be combined with any other flags, but you're using it with FILE_FLAG_BACKUP_SEMANTICS. To use FILE_FLAG_BACKUP_SEMANTICS you have to acquire the SE_BACKUP_NAME privilege, which means you have to be an administrator or a backup operator. I can't imagine that you need that flag.
You can get the volume handle by opening "\\.\C:" (for example), which is different from a handle to the root folder. I usually open it with GenericRead, but if all you need it for is OpenFileById, you can specify 0 for the access mask (see the sketch below).
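For illustration, a rough P/Invoke sketch of opening the volume handle that way (constants per the Win32 documentation; treat this as a sketch, not the code from the linked example):

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class VolumeHandleSketch
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern SafeFileHandle CreateFileW(
        string lpFileName, uint dwDesiredAccess, uint dwShareMode, IntPtr lpSecurityAttributes,
        uint dwCreationDisposition, uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    const uint FILE_SHARE_READ = 0x1;
    const uint FILE_SHARE_WRITE = 0x2;
    const uint OPEN_EXISTING = 3;

    // Opens the volume itself (e.g. "\\.\C:"), not the root directory.
    // An access mask of 0 is enough when the handle is only used with OpenFileById.
    public static SafeFileHandle OpenVolume(char driveLetter)
    {
        SafeFileHandle handle = CreateFileW(
            @"\\.\" + driveLetter + ":",
            0,
            FILE_SHARE_READ | FILE_SHARE_WRITE,
            IntPtr.Zero,
            OPEN_EXISTING,
            0,
            IntPtr.Zero);

        if (handle.IsInvalid)
            throw new Win32Exception(Marshal.GetLastWin32Error());
        return handle;
    }
}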
Also, adding object IDs to files isn't necessary: the file reference number (FRN) is the master file table identifier for the file, and it's the "other" kind of ID you can pass in a FILE_ID_DESCRIPTOR. You can get it from an open file handle by calling GetFileInformationByHandle; it's nFileIndexHigh and nFileIndexLow combined into a 64-bit value. When you move a file the FRN stays the same (only its parent's FRN changes), and when you rename a file the FRN doesn't change either. The benefit of using this over an object ID is that you're not altering the volume just to track a file, and you don't have to use DeviceIoControl, which is a bit of an interop bad dream.
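A sketch of reading the FRN from an open handle (the struct layout follows BY_HANDLE_FILE_INFORMATION from the Win32 headers; consider it illustrative rather than the answer's exact code):

using System;
using System.ComponentModel;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class FileReferenceNumberSketch
{
    [StructLayout(LayoutKind.Sequential)]
    struct BY_HANDLE_FILE_INFORMATION
    {
        public uint FileAttributes;
        public System.Runtime.InteropServices.ComTypes.FILETIME CreationTime;
        public System.Runtime.InteropServices.ComTypes.FILETIME LastAccessTime;
        public System.Runtime.InteropServices.ComTypes.FILETIME LastWriteTime;
        public uint VolumeSerialNumber;
        public uint FileSizeHigh;
        public uint FileSizeLow;
        public uint NumberOfLinks;
        public uint FileIndexHigh;
        public uint FileIndexLow;
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetFileInformationByHandle(
        SafeFileHandle hFile, out BY_HANDLE_FILE_INFORMATION lpFileInformation);

    // Returns the 64-bit file reference number (FRN) for an existing file.
    public static long GetFileReferenceNumber(string path)
    {
        using (FileStream fs = new FileStream(
            path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        {
            BY_HANDLE_FILE_INFORMATION info;
            if (!GetFileInformationByHandle(fs.SafeFileHandle, out info))
                throw new Win32Exception(Marshal.GetLastWin32Error());

            // Combine the high and low index parts into the FRN.
            return ((long)info.FileIndexHigh << 32) | info.FileIndexLow;
        }
    }
}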
One more thought - OpenFileByID didn't show up until Vista and Windows Server 2008. You're there, right?
I have the following piece of code in my application:
if (!Directory.Exists(myPath))
Directory.CreateDirectory(myPath);
If I run it in a regular unit test, sometimes it passes and sometimes it doesn't. The directory is always there (I made sure of it, so technically it will never be "created" by the code). But every once in a while Directory.Exists(myPath) returns false, which makes the code try to create the folder, and then I get an UnauthorizedAccessException!
The funny thing is that if I put a breakpoint on the CreateDirectory call and then drag the yellow arrow back up to re-run the Exists check, it returns true!
What's going on?
myPath is \\nameOfLocalMachine\sharedFolder. The share is reliable and constantly used... .NET 4.0
I just had Fiddler simulate 3000 sequential requests. 175 failed, all with the same message:
Access to the path '\\nameOfLocalMachine\sharedFolder\randomFileName.json' is denied
This mishap is pretty normal on Windows. Programs open a handle on a directory like this and specify delete sharing, which permits anybody to delete the directory even though the program is using it. The directory won't actually disappear from the file system until that handle is closed. It follows that trying to recreate the directory cannot work; it still exists. Windows generates an "access denied" error, reported in your C# program as an UnauthorizedAccessException.
While that sounds like an obscure feature, every program in Windows does this. Every process has a default working directory, the value of Environment.CurrentDirectory; holding a handle on that directory ensures it cannot disappear while the program is using it. There are other cases: FileSystemWatcher would be another example, or a program busy iterating the directory. Anti-malware scanners and search indexers are notorious as hard-to-diagnose sources of such errors.
Otherwise this is a standard hazard of a multitasking operating system: you are not the only one using the file system. Not repeatedly deleting and creating the same directory ought to be very high on your list. If it is absolutely necessary, then rename the directory before you delete it. You may still fail to delete the renamed directory, but you won't fail to recreate the original one; you can delete the renamed copy later, the next time you need to do this. The odds of trouble are much lower then, because more time has passed.
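A minimal sketch of that rename-first approach, with hypothetical names; it assumes the handles blocking the delete were opened with delete sharing, as described above, so the rename itself succeeds:

using System;
using System.IO;

// Hypothetical helper: move the old directory out of the way, then recreate the name.
static void RecreateDirectory(string path)
{
    if (Directory.Exists(path))
    {
        // Rename to a unique sibling name; the original name is now free again.
        string doomed = path + ".old." + Guid.NewGuid().ToString("N");
        Directory.Move(path, doomed);

        try { Directory.Delete(doomed, recursive: true); }
        catch (IOException) { /* still held open elsewhere; delete it on a later run */ }
        catch (UnauthorizedAccessException) { /* same, try again later */ }
    }

    Directory.CreateDirectory(path);
}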
I am copying several backup files, ranging from small files up to files larger than 1 TB, from a local folder to a network shared folder using the C# File.Copy() function. Previously this worked well, but recently I have been getting different types of exceptions at different times. To try to resolve the problem I also used the NET USE command to create a connection to the shared path, even though the credentials are the same.
File.Copy(sourceFilePath, destinationFilePath, overwrite);
The exceptions I am getting are:
- Error: Could not find file.
- Error: The handle is invalid.
- Error: There are currently no logon servers available to service the logon request.
- Error: The specified network name is no longer available.
NB: these exceptions are not caused by an invalid file path, because they occur after a portion of the file has already been copied, and the same code worked before for the same files.
Does anyone have an idea how to resolve this kind of situation?
There are known problems with Windows when copying very large files. See Windows file copy bug revisited, for example. The problem seems to be that Windows wants to cache the file, and it goes to heroic efforts to do so. It ends up allocating almost all memory to caching, finally causing fatal thrashing. This will cause a non-deterministic error on the other system (that's trying to copy the file).
The way to counter that is to copy the file without buffering, by calling CopyFileEx. Unfortunately there is no direct way to do that from the .NET Framework, so I wrote and published some code that does; see A Better File.Copy Replacement.
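As a sketch of what the unbuffered call looks like via P/Invoke (the linked article has a more complete implementation; COPY_FILE_NO_BUFFERING requires Vista / Server 2008 or later):

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

static class UnbufferedCopySketch
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    static extern bool CopyFileEx(
        string lpExistingFileName, string lpNewFileName,
        IntPtr lpProgressRoutine, IntPtr lpData, IntPtr pbCancel, uint dwCopyFlags);

    const uint COPY_FILE_FAIL_IF_EXISTS = 0x00000001;
    const uint COPY_FILE_NO_BUFFERING = 0x00001000;   // bypass the system cache for huge files

    public static void CopyLargeFile(string source, string destination, bool overwrite)
    {
        uint flags = COPY_FILE_NO_BUFFERING;
        if (!overwrite)
            flags |= COPY_FILE_FAIL_IF_EXISTS;

        if (!CopyFileEx(source, destination, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero, flags))
            throw new Win32Exception(Marshal.GetLastWin32Error());
    }
}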
So I have been writing a data file to
Environment.SpecialFolder.ApplicationData
and this data file needs to be deleted on uninstall. I am using Inno Setup to build my installer, and it works great for me. My data file lives in the path above because when I used to try to write it to
Application.ExecutablePath
certain boxes I tested on would throw a nasty error when trying to write data there. I did some research; that location is not always writable, and that is how I came up with Environment.SpecialFolder.ApplicationData.
That is why my data file now resides in SpecialFolder.ApplicationData. The trouble is that if the user uninstalls and reinstalls, I need that file gone. It might be a shortcoming of my knowledge of Inno Setup, but I cannot figure out how to tell Inno where that file will be.
So then I thought I had a clever solution: Inno Setup can run a file when it's done uninstalling, so I had my program create a file "uninstallData.bat" that says:
del "the file in my special folder application data path"
and I wrote it out to, drumroll,
Application.ExecutablePath
(yes, it was a while in development and I had forgotten that wasn't doable.)
So of course I am back to square one. I need to write a file to a path Inno Setup knows about ({app}), and I need it to be able to delete my data file in the special folder... I don't care how I do it, I just need that file gone.
Are there other Environment.* or Application.* approaches I have missed? Maybe somewhere that is visible to an uninstaller AND can be written to?
As an aside, I am not sure why the box I develop on can write to the application folder with no issue while other boxes cannot... weird.
Any input would be great; I'm sort of lost as to how to crack this nut.
The ApplicationData location is in the user profile. If there are multiple users on the machine and they all run the application, then a copy of the file will be in each profile.
The path also depends on the OS.
Regardless, the current user's app data location is pointed to by %APPDATA% and %LOCALAPPDATA%. These Windows environment variables should be available within Inno Setup.
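For illustration, this is roughly what the application side might look like (the subfolder and file names below are hypothetical); the same folder is what the installer can address via %APPDATA% or Inno Setup's {userappdata} constant:

using System;
using System.IO;

// Hypothetical folder and file names; substitute your real ones.
string dataFolder = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
    "MyCompany", "MyApp");
string dataFile = Path.Combine(dataFolder, "settings.dat");

Directory.CreateDirectory(dataFolder);      // safe even if the folder already exists
File.WriteAllText(dataFile, "some data");   // the file the uninstaller needs to remove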
Application.ExecutablePath is not writable per standard conventions; the Program Files folder should never be manipulated by running applications. There are a number of special folders for that. Nice that you finally found... what has been properly documented by Microsoft for a LONG time now (at least 10 years).
I suggest you get a proper installer; WiX comes to mind. Your problem is totally unrelated to C#, it seems to be entirely a "crappy installer" issue. Or provide a PROGRAM (not a bat file) to run at uninstall. What exactly is your problem there?