Explorer Won't Release Files After Using My Thumbnail Provider - C#

I've set up a thumbnail provider for a file type.
The project is built with C# on .NET 4.5, and I am running 64-bit Windows.
My provider successfully generates the thumbnail as expected, and I can move, delete, copy, etc. the file itself. The locking issue seems to be triggered by the file being placed in a folder: at that point, moving, deleting, etc. the folder all fail with the error "File In Use".
I have confirmed that Explorer is locking the file using Sysinternals' Process Explorer, if you are familiar with it.
I've tried two approaches to resolve this:
1) implemented IThumbnailProvider and IInitializeWithStream myself
2) used the third-party SharpShell library
Both suffer from the same issue: the file is not released.
An issue describing this has also been opened on SharpShell's GitHub:
https://github.com/dwmkerr/sharpshell/issues/78
I associate the file type in the registry like so:
HKEY_CLASSES_ROOT
    .qb
        shellex
            {e357fccd-a995-4576-b01f-234630154e96} : my CLSID...
I have also tried instead:
HKEY_CLASSES_ROOT
    .qb
        PersistentHandler : my CLSID...
Both result in the same locking issue.
If I were to implement IExtractImage instead, would I see the same issue?
I know C# isn't "officially" supported for writing shell extensions; is that where my issue lies? If I were to implement this in C++, would I wind up with the same issue?
EDIT:
I'd like to mention that after around one minute the file seems to get freed, and things go back to normal.
Thumbnail Creation
Some bytes are read into a buffer... then the image is generated from that.
public void GetThumbnail(int cx, out IntPtr hBitmap, out WTS_ALPHATYPE bitmapType)
{
    // ... bunch of other code
    using (MemoryStream stream = new MemoryStream(buffer))
    using (var image = new Bitmap(stream))
    using (var scaled = new Bitmap(image, cx, cx))
    {
        hBitmap = scaled.GetHbitmap();
    }
}
EDIT 2:
Doing some more testing, I called DeleteObject(hBitmap) (even though this destroys the thumbnail), and the file is still locked. I even removed all the code from GetThumbnail, and it gives the same result: file locked. There has to be something more going on.

Turns out you need to release the COM IStream object that you get from IInitializeWithStream.
I came to this conclusion by reading more about disposing COM objects.
Proper way of releasing COM objects?
I followed Microsoft's example of how to wrap IStream:
https://msdn.microsoft.com/en-us/library/jj200585%28v=vs.85%29.aspx
public class StreamWrapper : Stream
{
    private IStream m_stream;

    // initialize the wrapper with the COM IStream
    public StreamWrapper(IStream stream)
    {
        if (stream == null)
        {
            throw new ArgumentNullException("stream");
        }
        m_stream = stream;
    }

    // .... bunch of other code

    protected override void Dispose(bool disposing)
    {
        if (m_stream != null)
        {
            Marshal.ReleaseComObject(m_stream); // releases the file
            m_stream = null;
        }
        base.Dispose(disposing);
    }
}
Here's a sample.
Follow the link above to see StreamWrapper's implementation...
[ComVisible(true), ClassInterface(ClassInterfaceType.None)]
[ProgId("mythumbnailer.provider"), Guid("insert-your-guid-here")]
public class QBThumbnailProvider : IThumbnailProvider, IInitializeWithStream
{
    #region IInitializeWithStream
    private StreamWrapper stream { get; set; }

    public void Initialize(IStream stream, int grfMode)
    {
        // IStream passed to our wrapper, which handles our cleanup
        this.stream = new StreamWrapper(stream);
    }
    #endregion

    #region IThumbnailProvider
    public void GetThumbnail(int cx, out IntPtr hBitmap, out WTS_ALPHATYPE bitmapType)
    {
        hBitmap = IntPtr.Zero;
        bitmapType = WTS_ALPHATYPE.WTSAT_ARGB;
        try
        {
            //... bunch of other code
            // set the hBitmap somehow
            using (MemoryStream ms = new MemoryStream(buffer))
            using (var image = new Bitmap(ms))
            using (var scaled = new Bitmap(image, cx, cx))
            {
                hBitmap = scaled.GetHbitmap();
            }
        }
        catch (Exception)
        {
            // swallow: the shell will simply show no thumbnail
        }
        finally
        {
            // release the IStream COM object
            this.stream.Dispose();
        }
    }
    #endregion
}
Basically it comes down to two lines of code
Marshal.ReleaseComObject(your_istream); // releases the file
your_istream = null;
Side Note
The GDI bitmap created with scaled.GetHbitmap() probably needs to be disposed of, but I can't find a way to do that without losing the created thumbnail.
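For what it's worth, the IThumbnailProvider documentation says the calling application takes ownership of the returned HBITMAP on success, so explicit cleanup should only matter on a failure path. A minimal sketch of such cleanup, assuming the usual gdi32 DeleteObject import (the failure flag is hypothetical):
using System;
using System.Runtime.InteropServices;

internal static class Gdi32
{
    [DllImport("gdi32.dll")]
    internal static extern bool DeleteObject(IntPtr hObject);
}

// inside GetThumbnail, on a failure path only:
// if (thumbnailFailed && hBitmap != IntPtr.Zero)   // hypothetical flag
// {
//     Gdi32.DeleteObject(hBitmap); // free the HBITMAP we still own
//     hBitmap = IntPtr.Zero;
// }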

Related

Why does the streamwriter in C# write multiple lines correctly one time and not the second time? [duplicate]

I have some code, and when it executes, it throws an IOException, saying that
The process cannot access the file 'filename' because it is being used by
another process
What does this mean, and what can I do about it?
What is the cause?
The error message is pretty clear: you're trying to access a file, and it's not accessible because another process (or even the same process) is doing something with it (and it didn't allow any sharing).
Debugging
It may be pretty easy to solve (or pretty hard to understand), depending on your specific scenario. Let's look at a few.
Your process is the only one to access that file
You're sure the other process is your own process. If you know you open that file in another part of your program, then first of all you have to check that you properly close the file handle after each use. Here is an example of code with this bug:
var stream = new FileStream(path, FileMode.Open, FileAccess.Read);
var reader = new StreamReader(stream);
// Read data from this file; when I'm done I don't need it any more
File.Delete(path); // IOException: file is in use
Fortunately FileStream implements IDisposable, so it's easy to wrap all your code inside a using statement:
using (var stream = File.Open("myfile.txt", FileMode.Open)) {
// Use stream
}
// Here stream is not accessible and it has been closed (also if
// an exception is thrown and the stack unrolled)
This pattern will also ensure that the file won't be left open in case of exceptions (it may be the reason the file is in use: something went wrong, and no one closed it; see this post for an example).
If everything seems fine (you're sure you always close every file you open, even in case of exceptions) and you have multiple working threads, then you have two options: rework your code to serialize file access (not always doable and not always wanted) or apply a retry pattern. It's a pretty common pattern for I/O operations: you try to do something and in case of error you wait and try again (did you ask yourself why, for example, Windows Shell takes some time to inform you that a file is in use and cannot be deleted?). In C# it's pretty easy to implement (see also better examples about disk I/O, networking and database access).
private const int NumberOfRetries = 3;
private const int DelayOnRetry = 1000;
for (int i = 1; i <= NumberOfRetries; ++i) {
    try {
        // Do stuff with file
        break; // When done we can break the loop
    }
    catch (IOException) when (i < NumberOfRetries) {
        // You may check the error code to filter some exceptions; not every
        // error can be recovered. On the last attempt the exception propagates.
        Thread.Sleep(DelayOnRetry);
    }
}
Please note a common error we see very often on StackOverflow:
var stream = File.Open(path, FileMode.Open);
var content = File.ReadAllText(path);
In this case ReadAllText() will fail because the file is in use (File.Open() in the line before). To open the file beforehand is not only unnecessary but also wrong. The same applies to all File functions that don't return a handle to the file you're working with: File.ReadAllText(), File.WriteAllText(), File.ReadAllLines(), File.WriteAllLines() and others (like File.AppendAllXyz() functions) will all open and close the file by themselves.
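In other words, the corrected version of the snippet above is just the single call:
// File.ReadAllText opens, reads, and closes the file on its own
var content = File.ReadAllText(path);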
Your process is not the only one to access that file
If your process is not the only one to access that file, then interaction can be harder. A retry pattern will help (if the file shouldn't be open by anyone else but it is, then you need a utility like Process Explorer to check who is doing what).
Ways to avoid
When applicable, always use using statements to open files. As said in the previous paragraph, it'll actively help you avoid many common errors (see this post for an example of how not to use it).
If possible, try to decide who owns access to a specific file and centralize access through a few well-known methods. If, for example, you have a data file where your program reads and writes, then you should box all I/O code inside a single class. It'll make debugging easier (because you can always put a breakpoint there and see who is doing what), and it'll also be a synchronization point (if required) for multiple accesses.
Don't forget that I/O operations can always fail; a common example is this:
if (File.Exists(path))
File.Delete(path);
If someone deletes the file after File.Exists() but before File.Delete(), then it'll throw an IOException in a place where you may wrongly feel safe.
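A more defensive sketch (my addition, not part of the original answer): skip the existence check entirely and handle the failure instead, since File.Delete is already a no-op when the file does not exist:
try
{
    File.Delete(path); // no-op if the file is already gone
}
catch (IOException)
{
    // the file exists but is locked by another process:
    // retry later or surface the error
}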
Whenever it's possible, apply a retry pattern, and if you're using FileSystemWatcher, consider postponing action (because you'll get notified, but an application may still be working exclusively with that file).
Advanced scenarios
It's not always so easy, so you may need to share access with someone else. If, for example, you're reading from the beginning and writing to the end, you have at least two options.
1) Share the same FileStream with proper synchronization functions (because it is not thread-safe). See these posts for an example.
2) Use the FileShare enumeration to instruct the OS to allow other processes (or other parts of your own process) to access the same file concurrently.
using (var stream = File.Open(path, FileMode.Open, FileAccess.Write, FileShare.Read))
{
}
In this example I showed how to open a file for writing and share for reading; please note that when reading and writing overlaps, it results in undefined or invalid data. It's a situation that must be handled when reading. Also note that this doesn't make access to the stream thread-safe, so this object can't be shared with multiple threads unless access is synchronized somehow (see previous links). Other sharing options are available, and they open up more complex scenarios. Please refer to MSDN for more details.
In general, N processes can read from the same file all together while only one writes; in a controlled scenario you may even enable concurrent writes, but that can't be generalized in a few paragraphs inside this answer.
Is it possible to unlock a file used by another process? It's not always safe and not so easy, but yes, it's possible.
Using FileShare fixed my issue of opening a file even when it was already opened by another process.
using (var stream = File.Open(path, FileMode.Open, FileAccess.Write, FileShare.ReadWrite))
{
}
Problem
You are trying to open a file with System.IO.File.Open(path, FileMode) and want shared access to it, but
if you read the documentation of System.IO.File.Open(path, FileMode), it explicitly says it does not allow sharing.
Solution
You have to use the other overload, which takes a FileShare:
using FileStream fs = System.IO.File.Open(filePath, FileMode.Open, FileAccess.Read, FileShare.Read);
with FileShare.Read.
Had an issue while uploading an image and couldn't delete it afterwards; here's the solution I found. gl hf
//C# .NET
var image = Image.FromFile(filePath);
image.Dispose(); // this releases all resources, including the file handle
//later...
File.Delete(filePath); //now works
As other answers in this thread have pointed out, to resolve this error you need to carefully inspect the code, to understand where the file is getting locked.
In my case, I was sending out the file as an email attachment before performing the move operation.
So the file stayed locked for a couple of seconds until the SMTP client finished sending the email.
The solution I adopted was to move the file first, and then send the email. This solved the problem for me.
Another possible solution, as pointed out earlier by Hudson, would've been to dispose the object after use.
public static void SendEmail()
{
    MailMessage mMailMessage = new MailMessage();
    // setup other email stuff
    if (File.Exists(attachmentPath))
    {
        Attachment attachment = new Attachment(attachmentPath);
        mMailMessage.Attachments.Add(attachment);
        // ... send the message ...
        attachment.Dispose(); // disposing the Attachment releases the file
    }
}
I got this error because I was doing a File.Move to a destination path without a file name; you need to specify the full path, including the file name, in the destination.
The error indicates another process is trying to access the file. Maybe you or someone else has it open while you are attempting to write to it. "Read" or "Copy" usually doesn't cause this, but writing to it or calling delete on it would.
There are some basic things to avoid this, as other answers have mentioned:
In FileStream operations, place it in a using block with a FileShare.ReadWrite mode of access.
For example:
using (FileStream stream = File.Open(path, FileMode.Open, FileAccess.Write, FileShare.ReadWrite))
{
}
Note that FileAccess.ReadWrite is not possible if you use FileMode.Append.
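For the append case, a minimal sketch of the combination that does work (write-only access):
using (var stream = new FileStream(path, FileMode.Append, FileAccess.Write, FileShare.Read))
{
    // FileMode.Append requires FileAccess.Write;
    // other processes can still open the file for reading
}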
I ran across this issue when I was using an input stream to do a File.SaveAs while the file was in use. In my case I found I didn't actually need to save it back to the file system at all, so I ended up just removing that; but I probably could've tried creating a FileStream in a using statement with FileAccess.ReadWrite, much like the code above.
Saving your data to a different file, deleting the old one once it is found to be no longer in use, and then renaming the successfully saved file to the original name is one option. You test whether the file is in use via the
List<Process> lstProcs = ProcessHandler.WhoIsLocking(file);
line in my code below. This could be done in a Windows service on a loop, if you have a particular file you want to watch and delete regularly whenever you replace it. If it isn't always the same file, a text file or database table could be updated with the file names for the service to check; it would then find the locking processes, kill them, and delete the file, as I describe in the next option. Note that you'll need an account user name and password with Admin privileges on the given computer, of course, to perform the deletion and the ending of processes.
When you don't know whether a file will be in use at the time you save it, you can close all processes that could be using it ahead of the save; for example Word, if it's a Word document.
If it is local, you can do this:
ProcessHandler.localProcessKill("winword.exe");
If it is remote, you can do this:
ProcessHandler.remoteProcessKill(computerName, txtUserName, txtPassword, "winword.exe");
where txtUserName is in the form of DOMAIN\user.
Let's say you don't know the process name that is locking the file. Then, you can do this:
List<Process> lstProcs = ProcessHandler.WhoIsLocking(file);
foreach (Process p in lstProcs)
{
if (p.MachineName == ".")
ProcessHandler.localProcessKill(p.ProcessName);
else
ProcessHandler.remoteProcessKill(p.MachineName, txtUserName, txtPassword, p.ProcessName);
}
Note that file must be the UNC path (\\computer\share\yourdoc.docx) in order for the Process to figure out what computer it's on and for p.MachineName to be valid.
Below is the class these functions use, which requires adding a reference to System.Management. The code was originally written by Eric J.:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Runtime.InteropServices;
using System.Diagnostics;
using System.Management;
namespace MyProject
{
public static class ProcessHandler
{
[StructLayout(LayoutKind.Sequential)]
struct RM_UNIQUE_PROCESS
{
public int dwProcessId;
public System.Runtime.InteropServices.ComTypes.FILETIME ProcessStartTime;
}
const int RmRebootReasonNone = 0;
const int CCH_RM_MAX_APP_NAME = 255;
const int CCH_RM_MAX_SVC_NAME = 63;
enum RM_APP_TYPE
{
RmUnknownApp = 0,
RmMainWindow = 1,
RmOtherWindow = 2,
RmService = 3,
RmExplorer = 4,
RmConsole = 5,
RmCritical = 1000
}
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
struct RM_PROCESS_INFO
{
public RM_UNIQUE_PROCESS Process;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = CCH_RM_MAX_APP_NAME + 1)]
public string strAppName;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = CCH_RM_MAX_SVC_NAME + 1)]
public string strServiceShortName;
public RM_APP_TYPE ApplicationType;
public uint AppStatus;
public uint TSSessionId;
[MarshalAs(UnmanagedType.Bool)]
public bool bRestartable;
}
[DllImport("rstrtmgr.dll", CharSet = CharSet.Unicode)]
static extern int RmRegisterResources(uint pSessionHandle,
UInt32 nFiles,
string[] rgsFilenames,
UInt32 nApplications,
[In] RM_UNIQUE_PROCESS[] rgApplications,
UInt32 nServices,
string[] rgsServiceNames);
[DllImport("rstrtmgr.dll", CharSet = CharSet.Auto)]
static extern int RmStartSession(out uint pSessionHandle, int dwSessionFlags, string strSessionKey);
[DllImport("rstrtmgr.dll")]
static extern int RmEndSession(uint pSessionHandle);
[DllImport("rstrtmgr.dll")]
static extern int RmGetList(uint dwSessionHandle,
out uint pnProcInfoNeeded,
ref uint pnProcInfo,
[In, Out] RM_PROCESS_INFO[] rgAffectedApps,
ref uint lpdwRebootReasons);
/// <summary>
/// Find out what process(es) have a lock on the specified file.
/// </summary>
/// <param name="path">Path of the file.</param>
/// <returns>Processes locking the file</returns>
/// <remarks>See also:
/// http://msdn.microsoft.com/en-us/library/windows/desktop/aa373661(v=vs.85).aspx
/// http://wyupdate.googlecode.com/svn-history/r401/trunk/frmFilesInUse.cs (no copyright in code at time of viewing)
///
/// </remarks>
static public List<Process> WhoIsLocking(string path)
{
uint handle;
string key = Guid.NewGuid().ToString();
List<Process> processes = new List<Process>();
int res = RmStartSession(out handle, 0, key);
if (res != 0) throw new Exception("Could not begin restart session. Unable to determine file locker.");
try
{
const int ERROR_MORE_DATA = 234;
uint pnProcInfoNeeded = 0,
pnProcInfo = 0,
lpdwRebootReasons = RmRebootReasonNone;
string[] resources = new string[] { path }; // Just checking on one resource.
res = RmRegisterResources(handle, (uint)resources.Length, resources, 0, null, 0, null);
if (res != 0) throw new Exception("Could not register resource.");
//Note: there's a race condition here -- the first call to RmGetList() returns
// the total number of processes. However, when we call RmGetList() again to get
// the actual processes this number may have increased.
res = RmGetList(handle, out pnProcInfoNeeded, ref pnProcInfo, null, ref lpdwRebootReasons);
if (res == ERROR_MORE_DATA)
{
// Create an array to store the process results
RM_PROCESS_INFO[] processInfo = new RM_PROCESS_INFO[pnProcInfoNeeded];
pnProcInfo = pnProcInfoNeeded;
// Get the list
res = RmGetList(handle, out pnProcInfoNeeded, ref pnProcInfo, processInfo, ref lpdwRebootReasons);
if (res == 0)
{
processes = new List<Process>((int)pnProcInfo);
// Enumerate all of the results and add them to the
// list to be returned
for (int i = 0; i < pnProcInfo; i++)
{
try
{
processes.Add(Process.GetProcessById(processInfo[i].Process.dwProcessId));
}
// catch the error -- in case the process is no longer running
catch (ArgumentException) { }
}
}
else throw new Exception("Could not list processes locking resource.");
}
else if (res != 0) throw new Exception("Could not list processes locking resource. Failed to get size of result.");
}
finally
{
RmEndSession(handle);
}
return processes;
}
public static void remoteProcessKill(string computerName, string userName, string pword, string processName)
{
var connectoptions = new ConnectionOptions();
connectoptions.Username = userName;
connectoptions.Password = pword;
ManagementScope scope = new ManagementScope(@"\\" + computerName + @"\root\cimv2", connectoptions);
// WMI query
var query = new SelectQuery("select * from Win32_process where name = '" + processName + "'");
using (var searcher = new ManagementObjectSearcher(scope, query))
{
foreach (ManagementObject process in searcher.Get())
{
process.InvokeMethod("Terminate", null);
process.Dispose();
}
}
}
public static void localProcessKill(string processName)
{
foreach (Process p in Process.GetProcessesByName(processName))
{
p.Kill();
}
}
[DllImport("kernel32.dll")]
public static extern bool MoveFileEx(string lpExistingFileName, string lpNewFileName, int dwFlags);
public const int MOVEFILE_DELAY_UNTIL_REBOOT = 0x4;
}
}
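The MoveFileEx import at the bottom of this class is never called in the snippets above. For completeness, a hedged sketch of its classic use: passing null as the destination together with MOVEFILE_DELAY_UNTIL_REBOOT schedules a locked file for deletion at the next reboot (the path here is hypothetical; the call requires admin rights and takes effect only on restart):
// hypothetical path; the file is deleted when the machine next reboots
ProcessHandler.MoveFileEx(@"C:\temp\locked.docx", null,
    ProcessHandler.MOVEFILE_DELAY_UNTIL_REBOOT);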
I had this problem, and it was solved by the code below:
var _path=MyFile.FileName;
using (var stream = new FileStream
(_path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
// Your Code! ;
}
I had a very specific situation where I was getting an "IOException: The process cannot access the file 'file path'" on the line
File.Delete(fileName);
Inside an NUnit test that looked like:
Assert.Throws<IOException>(() =>
{
    using (var sr = File.OpenText(fileName)) {
        var line = sr.ReadLine();
    }
});
File.Delete(fileName);
It turns out NUnit 3 uses something they call an "isolated context" for exception assertions, which probably runs on a separate thread.
My fix was to put the File.Delete in the same context.
Assert.Throws<IOException>(() =>
{
    try
    {
        using (var sr = File.OpenText(fileName)) {
            var line = sr.ReadLine();
        }
    }
    catch
    {
        File.Delete(fileName);
        throw;
    }
});
I had the following scenario that was causing the same error:
- upload files to the server
- then get rid of the old files after they have been uploaded
Most files were small, but a few were large, so attempting to delete those resulted in the "cannot access file" error.
It was not easy to find, but the solution was as simple as waiting for the task to complete execution:
using (var wc = new WebClient())
{
var tskResult = wc.UploadFileTaskAsync(_address, _fileName);
tskResult.Wait();
}
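A variation on the same fix, sketched assuming an async calling context (same _address and _fileName as above): await the upload so the delete only runs once the handle has been released.
using (var wc = new WebClient())
{
    // the file stays open until the upload task completes
    await wc.UploadFileTaskAsync(_address, _fileName);
}
File.Delete(_fileName); // safe now: the upload has finished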
In my case this problem was solved by opening the file for shared writing/reading. Here are samples for shared writing and reading:
Stream Writer
using (FileStream fs = new FileStream("D:\\test.txt",
    FileMode.Append, FileAccess.Write, FileShare.ReadWrite))
using (StreamWriter sw = new StreamWriter(fs))
{
    sw.WriteLine("anything which you want to write");
}
Stream Reader
using (FileStream fs = new FileStream("D:\\test.txt", FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
using (StreamReader rr = new StreamReader(fs))
{
    rr.ReadLine();
}
The code below solved the issue for me, but a suggestion first:
you need to understand what is causing the issue, and the better solution is usually found by changing your code's structure.
If, after analysing what makes this happen, you still can't find a fix, you can fall back on the code below:
Start:
try
{
    // put your file access code here
}
catch (Exception ex)
{
    // handle the "file in use" case: wait 5 seconds for the file
    // to be freed, then start execution again
    if (ex.Message.StartsWith("The process cannot access the file"))
    {
        Thread.Sleep(5000);
        goto Start;
    }
    throw;
}


Printing a Local Report without Preview - Stream size exceeded or A generic error occurred in GDI+ C#

I am using this article to print my RDLC directly to the printer, but when I try to create the Metafile object by passing the stream, it gives me an error (A generic error occurred in GDI+).
Code:
using System;
using System.IO;
using System.Data;
using System.Text;
using System.Drawing;
using System.Drawing.Imaging;
using System.Drawing.Printing;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
using System.Collections.Generic;
using System.Windows.Forms;
using Microsoft.Reporting.WinForms;
public class Demo : IDisposable
{
private int m_currentPageIndex;
private IList<Stream> m_streams;
// Routine to provide to the report renderer, in order to
// save an image for each page of the report.
private Stream CreateStream(string name, string fileNameExtension, Encoding encoding, string mimeType, bool willSeek)
{
DataSet ds = new DataSet();
ds.Tables.Add(dsData.Tables[0].Copy());
using (MemoryStream stream = new MemoryStream())
{
IFormatter bf = new BinaryFormatter();
ds.RemotingFormat = SerializationFormat.Binary;
bf.Serialize(stream, ds);
data = stream.ToArray();
}
Stream stream1 = new MemoryStream(data);
m_streams.Add(stream1);
return stream1;
}
// Export the given report as an EMF (Enhanced Metafile) file.
private void Export(LocalReport report)
{
string deviceInfo =
#"<DeviceInfo>
<OutputFormat>EMF</OutputFormat>
<PageWidth>8.5in</PageWidth>
<PageHeight>11in</PageHeight>
<MarginTop>0.25in</MarginTop>
<MarginLeft>0.25in</MarginLeft>
<MarginRight>0.25in</MarginRight>
<MarginBottom>0.25in</MarginBottom>
</DeviceInfo>";
Warning[] warnings;
m_streams = new List<Stream>();
report.Render("Image", deviceInfo, CreateStream,
out warnings);
foreach (Stream stream in m_streams)
stream.Position = 0;
}
// Handler for PrintPageEvents
private void PrintPage(object sender, PrintPageEventArgs ev)
{
Metafile pageImage = new
Metafile(m_streams[m_currentPageIndex]);
// Adjust rectangular area with printer margins.
Rectangle adjustedRect = new Rectangle(
ev.PageBounds.Left - (int)ev.PageSettings.HardMarginX,
ev.PageBounds.Top - (int)ev.PageSettings.HardMarginY,
ev.PageBounds.Width,
ev.PageBounds.Height);
// Draw a white background for the report
ev.Graphics.FillRectangle(Brushes.White, adjustedRect);
// Draw the report content
ev.Graphics.DrawImage(pageImage, adjustedRect);
// Prepare for the next page. Make sure we haven't hit the end.
m_currentPageIndex++;
ev.HasMorePages = (m_currentPageIndex < m_streams.Count);
}
private void Print()
{
if (m_streams == null || m_streams.Count == 0)
throw new Exception("Error: no stream to print.");
PrintDocument printDoc = new PrintDocument();
if (!printDoc.PrinterSettings.IsValid)
{
throw new Exception("Error: cannot find the default printer.");
}
else
{
printDoc.PrintPage += new PrintPageEventHandler(PrintPage);
m_currentPageIndex = 0;
printDoc.Print();
}
}
// Create a local report for Report.rdlc, load the data,
// export the report to an .emf file, and print it.
private void Run()
{
LocalReport report = new LocalReport();
report.ReportPath = @"Reports\InvoiceReportTest.rdlc";
report.DataSources.Add(
new ReportDataSource("DataSet1", dsPrintDetails));
Export(report);
Print();
}
public void Dispose()
{
if (m_streams != null)
{
foreach (Stream stream in m_streams)
stream.Close();
m_streams = null;
}
}
public static void Main(string[] args)
{
using (Demo demo = new Demo())
{
demo.Run();
}
}
}
It gives me the error when the stream size exceeds some limit or when the RDLC has more static content.
I don't know whether static content should affect the stream size, but it doesn't give any error if I remove some content from the RDLC; when I add it back, it throws the error again (A generic error occurred in GDI+).
A generic error exception is a pretty lousy exception to diagnose. It conveys little info beyond "it did not work". The exception is raised whenever the Graphics class runs into trouble using drawing objects or rendering the drawing commands to the underlying device context. There is a clear and obvious reason for that in this code and from the things you did to troubleshoot it: the program ran out of memory.
The Graphics class treats its underlying device context as an unmanaged resource, which is the basic reason why you don't get the more obvious OutOfMemoryException. It usually is unmanaged, like when you use it to render to the screen or a printer; just not in this case, because it renders to a MemoryStream. Odds are that you can see the first-chance notification for it in the VS Output window. Adding the Commit Size column in Task Manager can provide an additional diagnostic; trouble starts when it heads north of a gigabyte.
What is especially notable about this code is that the program will always fail with this exception. Give it a report with too many pages or a data table with too many records and it is doomed: it will inevitably require too much memory to store the metafile records in the memory streams. The only thing you can do about it is make the program more memory-efficient so it can deal with production demands. Lots of opportunities here.
First observation is that you inherited some sloppiness from the MSDN code sample, which is common and something to beware of in general: such samples focus on demonstrating coding techniques, and making the code bullet-proof gets in the way of that mission; it is untested and left as an exercise to the reader. Notable is that it ignores the need to Dispose() too much. The provided Dispose() method does not actually accomplish anything; disposing a memory stream merely marks it as unreadable. What it does not do is properly dispose the Metafile, LocalReport and PrintDocument objects. Use the using statement to correct these omissions.
Second observation is that the addition to the CreateStream() method is hugely wasteful. Also the bad kind of waste: it is very rough on the Large Object Heap. There is no need to Copy() the DataTable; the report doesn't write to it. There is no need to convert the MemoryStream to an array and create a MemoryStream from the array again; the first MemoryStream is already good as-is. Don't wrap it in using; just set its Position to 0. This is pretty likely good enough to solve the problem.
If you still have trouble then you should consider using a FileStream instead of a MemoryStream. It will be just as efficient (the OS ensures it is); having to pick a name for the file is the only additional burden. Not a real issue here: use Path.GetTempFileName(). Note how the Dispose() method now becomes useful and necessary, since you'll also want to delete the file again. Or better, use the FileOptions.DeleteOnClose option when you open the file so it is automagic.
And last but not least, you'll want to take advantage of the OS capabilities, modern machines can provide terabytes of address space and LOH fragmentation is never a problem. Project > Properties > Build tab > untick the "Prefer 32-bit" checkbox. Repeat for the Release configuration. You never prefer it when you battle out-of-memory problems.
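To make that FileStream suggestion concrete, here is a minimal sketch of what CreateStream() could look like under that approach (my reading of the advice above, reusing the m_streams list from the question); FileOptions.DeleteOnClose removes the temp file when the stream is disposed:
private Stream CreateStream(string name, string fileNameExtension, Encoding encoding, string mimeType, bool willSeek)
{
    // temp file instead of a MemoryStream; deleted automatically on Dispose()
    var stream = new FileStream(Path.GetTempFileName(), FileMode.Create,
        FileAccess.ReadWrite, FileShare.None, 4096, FileOptions.DeleteOnClose);
    m_streams.Add(stream);
    return stream;
}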
At my end I was using the same functions you are and getting the same problem; I don't know why, but using the function below instead runs fine at my end, so it may solve your problem too:
private Stream CreateStream(string name, string fileNameExtension, Encoding encoding, string mimeType, bool willSeek)
{
Stream stream = new MemoryStream();
m_streams.Add(stream);
return stream;
}

C# MemoryStream slowing programme performance

I'm working on a project using WPF to display the Kinect ColorImageFrame and a skeleton representation. I also have to record those two videos.
I'm able to display and record them (using EmguCV), but I have some performance issues. It seems this part of my code is the reason for the loss of performance:
private void DrawSkeleton(Skeleton[] skeletons)
{
using (System.Drawing.Bitmap skelBitmap = new System.Drawing.Bitmap(640, 480))
{
foreach (Skeleton S in skeletons)
{
if (S.TrackingState == SkeletonTrackingState.Tracked)
{
DrawBonesAndJoints(S,skelBitmap);
}
else if (S.TrackingState == SkeletonTrackingState.PositionOnly)
{
}
}
_videoArraySkel.Add(ToOpenCVImage<Bgr, Byte>(skelBitmap));
BitmapSource source = ToWpfBitmap(skelBitmap);
this.skeletonStream.Source = source;
}
}
and, more precisely, from ToWpfBitmap, which allows me to display it in my window:
public static BitmapSource ToWpfBitmap(System.Drawing.Bitmap bitmap)
{
using (MemoryStream stream = new MemoryStream())
{
bitmap.Save(stream, System.Drawing.Imaging.ImageFormat.Bmp);
stream.Position = 0;
BitmapImage result = new BitmapImage();
result.BeginInit();
// According to MSDN, "The default OnDemand cache option retains access to the stream until the image is needed."
// Force the bitmap to load right now so we can dispose the stream.
result.CacheOption = BitmapCacheOption.OnLoad;
result.StreamSource = stream;
result.EndInit();
result.Freeze();
return result;
}
}
The loss of performance is characterized by:
- the videos displayed in the window are no longer fluid
- the video recording seems to miss some frames, which leads to a video playing faster/slower than normal
Can you help me by telling me where this problem may come from?
Try using RecyclableMemoryStream instead of MemoryStream. It was designed to solve memory issues like this by pooling the underlying buffers.
Check out this article for details: Announcing Microsoft.IO.RecyclableMemoryStream
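A minimal sketch of the swap, assuming the Microsoft.IO.RecyclableMemoryStream NuGet package; the manager pools the underlying buffers, so keep a single shared instance per process:
using System.IO;
using Microsoft.IO;

public static class StreamPool
{
    // one manager per process; it owns the pooled buffers
    private static readonly RecyclableMemoryStreamManager Manager =
        new RecyclableMemoryStreamManager();

    // drop-in replacement for "new MemoryStream()" in ToWpfBitmap
    public static MemoryStream GetStream()
    {
        return Manager.GetStream();
    }
}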
Have you tried doing the memory-write I/O in a separate thread, while keeping the data in a buffer such as a queue?
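For instance, a rough producer/consumer sketch of that idea (my construction, assuming frames arrive as byte[] buffers): the Kinect/UI thread only enqueues, and a single background task does the writing.
using System.Collections.Concurrent;
using System.Threading.Tasks;

var frames = new BlockingCollection<byte[]>(boundedCapacity: 30);

// writer: runs off the UI thread
var writer = Task.Run(() =>
{
    foreach (var frame in frames.GetConsumingEnumerable())
    {
        // hand the frame to the video writer / disk here
    }
});

// producer (e.g. the Kinect frame handler) calls: frames.Add(frameBytes);
// on shutdown: frames.CompleteAdding(); writer.Wait();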

Does my code properly clean up its List<MemoryStream>?

I've got a third-party component that does PDF file manipulation. Whenever I need to perform operations I retrieve the PDF documents from a document store (database, SharePoint, filesystem, etc.). To make things a little consistent I pass the PDF documents around as a byte[].
This 3rd party component expects a MemoryStream[] (MemoryStream array) as a parameter to one of the main methods I need to use.
I am trying to wrap this functionality in my own component so that I can use this functionality for a number of areas within my application. I have come up with essentially the following:
public class PdfDocumentManipulator : IDisposable
{
List<MemoryStream> pdfDocumentStreams = new List<MemoryStream>();
public void AddFileToManipulate(byte[] pdfDocument)
{
using (MemoryStream stream = new MemoryStream(pdfDocument))
{
pdfDocumentStreams.Add(stream);
}
}
public byte[] ManipulatePdfDocuments()
{
byte[] outputBytes = null;
using (MemoryStream outputStream = new MemoryStream())
{
ThirdPartyComponent component = new ThirdPartyComponent();
component.Manipuate(this.pdfDocumentStreams.ToArray(), outputStream);
//move to begining
outputStream.Seek(0, SeekOrigin.Begin);
//convert the memory stream to a byte array
outputBytes = outputStream.ToArray();
}
return outputBytes;
}
#region IDisposable Members
public void Dispose()
{
for (int i = this.pdfDocumentStreams.Count - 1; i >= 0; i--)
{
MemoryStream stream = this.pdfDocumentStreams[i];
this.pdfDocumentStreams.RemoveAt(i);
stream.Dispose();
}
}
#endregion
}
The calling code to my "wrapper" looks like this:
byte[] manipulatedResult = null;
using (PdfDocumentManipulator manipulator = new PdfDocumentManipulator())
{
manipulator.AddFileToManipulate(file1bytes);
manipulator.AddFileToManipulate(file2bytes);
manipulatedResult = manipulator.ManipulatePdfDocuments();
}
A few questions about the above:
Is the using clause in the AddFileToManipulate() method redundant and unnecessary?
Am I cleaning up things OK in my object's Dispose() method?
Is this an "acceptable" usage of MemoryStream? I am not anticipating very many files in memory at once...Likely 1-10 total PDF pages, each page about 200KB. App designed to run on server supporting an ASP.NET site.
Any comments/suggestions?
Thanks for the code review :)
AddFileToManipulate scares me.
public void AddFileToManipulate(byte[] pdfDocument)
{
using (MemoryStream stream = new MemoryStream(pdfDocument))
{
pdfDocumentStreams.Add(stream);
}
}
This code is adding a disposed stream to your pdfDocumentStream list. Instead you should simply add the stream using:
pdfDocumentStreams.Add(new MemoryStream(pdfDocument));
And dispose of it in the Dispose method.
Also you should look at implementing a finalizer to ensure stuff gets disposed in case someone forgets to dispose the top level object.
Is the using clause in the AddFileToManipulate() method redundant and unnecessary?
Worse, it's destructive. You're basically closing your memory stream before it's added in. See the other answers for details, but basically, dispose at the end, but not any other time. Every using with an object causes a Dispose to happen at the end of the block, even if the object is "passed off" to other objects via methods.
Am I cleaning up things OK in my object's Dispose() method?
Yes, but you're making life more difficult than it needs to be. Try this:
foreach (var stream in this.pdfDocumentStreams)
{
stream.Dispose();
}
this.pdfDocumentStreams.Clear();
This works just as well and is much simpler. Disposing an object does not delete it; it just tells it to free its internal, unmanaged resources. Calling Dispose on an object in this way is fine: the object stays uncollected, in the collection. You can do this and then clear the list in one shot.
Is this an "acceptable" usage of MemoryStream? I am not anticipating very many files in memory at once...Likely 1-10 total PDF pages, each page about 200KB. App designed to run on server supporting an ASP.NET site.
This depends on your situation. Only you can determine whether the overhead of having these files in memory is going to cause you problems. This is going to be a fairly heavy-weight object, though, so I'd use it carefully.
Any comments/suggestions?
Implement a finalizer. It's a good idea whenever you implement IDisposable. Also, you should rework your Dispose implementation to the standard one, or mark your class as sealed. For details on how this should be done, see this article. In particular, you should have a method declared as protected virtual void Dispose(bool disposing) that your Dispose method and your finalizer both call.
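For reference, a minimal sketch of that standard pattern applied to the PdfDocumentManipulator from the question (the pdfDocumentStreams name comes from the question's code):
public void Dispose()
{
    Dispose(true);
    GC.SuppressFinalize(this);
}

protected virtual void Dispose(bool disposing)
{
    if (disposing)
    {
        // managed cleanup: release the buffered PDF streams
        foreach (var stream in pdfDocumentStreams)
        {
            stream.Dispose();
        }
        pdfDocumentStreams.Clear();
    }
    // unmanaged cleanup (none in this class) would also go here
}

~PdfDocumentManipulator()
{
    Dispose(false);
}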
It looks to me like you misunderstand what using does.
It's just syntactic sugar for
MemoryStream ms = new MemoryStream();
try
{
    // use ms
}
finally
{
    if (ms != null)
        ms.Dispose();
}
Your usage in AddFileToManipulate is redundant. I'd set up the list of memorystreams in the constructor of PdfDocumentManipulator, then have PdfDocumentManipulator's dispose method call dispose on all the memorystreams.
Side note. This really seems like it calls for an extension method.
public static void DisposeAll<T>(this IEnumerable<T> enumerable)
where T : IDisposable {
foreach ( var cur in enumerable ) {
cur.Dispose();
}
}
Now your Dispose method becomes
public void Dispose() {
    // Enumerable.Reverse is needed: List<T>.Reverse() reverses in place and returns void
    Enumerable.Reverse(pdfDocumentStreams).DisposeAll();
    pdfDocumentStreams.Clear();
}
EDIT
You don't need the 3.5 framework to have extension methods; they will happily work with the 3.0 compiler down-targeted to 2.0:
http://blogs.msdn.com/jaredpar/archive/2007/11/16/extension-methods-without-3-5-framework.aspx
