I have an ASP.NET Core 3.0 API app that returns a PDF file of a WPF page. It generates the WPF page itself, converts it to XPS, and then converts the XPS to PDF. But when it is done, the API doesn't release the memory, so usage just builds up until the app crashes. I have called GC.Collect() each time it generates a PDF, but with no real success.
The class I use to generate the PDF from the WPF content, which implements IDisposable:
public QueryAndGenerate(int orderNumber, string XPSPath, string PDFPath, bool throwExceptions = true)
{
Helper.Log("QueryAndGenerate start");
this.XPSPath = XPSPath;
this.PDFPath = PDFPath;
List<byte[]> Bytes = new List<byte[]>();
var rows = QueryAndGenerate.GetDataRows(Properties.Resources.joborderQuery, new QueryAndGenerate.MySqlParameter("ORDERNUMBER", orderNumber));
PDFPaths = new List<string>();
Helper.Log(string.Format("rows from query: {0} length: {1}", rows, rows.Count));
try
{
foreach (var row in rows)
{
isMultipleGuidenote = true;
QueryAndGenerate queryAndGenerate = new QueryAndGenerate(orderNumber, row.Field<int>("JOBORDERNUMBER"), XPSPath, PDFPath, throwExceptions);
Bytes.Add(File.ReadAllBytes(PDFPath));
Helper.Log("generated file: "+ row);
}
}
catch (Exception e)
{
Helper.Log(e);
}
PdfDocument outputDocument = new PdfDocument();
foreach (byte[] pdfBytes in Bytes)
{
if (pdfBytes.Length != 0)
{
using (MemoryStream stream = new MemoryStream(pdfBytes))
{
PdfDocument inputDocument = PdfReader.Open(stream, PdfDocumentOpenMode.Import);
foreach (PdfPage page in inputDocument.Pages)
{
outputDocument.AddPage(page);
}
}
}
}
outputDocument.Save(this.PDFPath);
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
}
public void Dispose()
{
xpsControls = null;
jobRow = null;
checkRows = null;
warrantyRows = null;
subAssemblyRows = null;
detailRows = null;
detailToolRows = null;
detailItemRows = null;
PDFPaths = null;
cadmanCheck = null;
}
The issue you are probably facing is fragmentation of the large object heap (LOH). The article The Dangers of the Large Object Heap explains the problem very well. We faced the same issue in one of our REST APIs, which generates PDF files. The problem is that the byte[] buffers involved are most likely larger than 85 KB, which causes them to be allocated on the LOH. A plain call to GC.Collect() collects the heap but does not compact the large object heap by default. If you want to prevent this fragmentation from occurring, you should set the GC options before calling GC.Collect(), as mentioned in How to (not) use the large object heap in .Net under Getting rid of large object heap fragmentation.
So basically, replace your GC calls with this:
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(generation: 2, GCCollectionMode.Forced, blocking: true, compacting: true);
GC.WaitForPendingFinalizers();
GC.Collect(generation: 2, GCCollectionMode.Forced, blocking: true, compacting: true);
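For example, in the question's code this would replace the three GC calls right after outputDocument.Save(this.PDFPath). A small helper keeps it in one place (a sketch only; the helper name is mine, and the GC.Collect overload with the compacting parameter requires .NET Framework 4.6+ or .NET Core):

using System;
using System.Runtime;

static class LohCompactor
{
    // Performs a blocking, compacting Gen-2 collection and asks the runtime to
    // compact the large object heap on that collection. Forced full collections
    // are expensive, so call this sparingly, e.g. once per generated PDF.
    public static void CompactNow()
    {
        GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect(2, GCCollectionMode.Forced, blocking: true, compacting: true);
        GC.WaitForPendingFinalizers();
        GC.Collect(2, GCCollectionMode.Forced, blocking: true, compacting: true);
    }
}

Note that CompactOnce only applies to the next blocking Gen-2 collection; after that the setting reverts to Default.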
Related
Backstory: I'm generating CSV files as reports and am testing what happens when multiple big reports are generated. The code below produces roughly a 4 MB CSV file, so if I call this report endpoint twice, my PC chokes on its 16 GB of RAM. Even after the files have been returned, the program still uses all my RAM; only restarting the program frees it. Most of the RAM is taken up by instances of the SMS class.
My issue is that the tmp list is never cleaned up by the garbage collector, even after the controller call has finished, which results in large amounts of RAM staying in use.
I can see that RAM usage only increases while the tmp list is being generated, not while the CSV file is being created.
(Screenshots omitted: the console output, the Visual Studio memory diagnostics after clicking download once, and a snapshot of the memory.)
private static readonly Random random = new();
private string GenerateString(int length = 30)
{
StringBuilder str_build = new StringBuilder();
char letter;
for (int i = 0; i < length; i++)
{
double flt = random.NextDouble();
int shift = Convert.ToInt32(Math.Floor(25 * flt));
letter = Convert.ToChar(shift + 65);
str_build.Append(letter);
}
return str_build.ToString();
}
[HttpGet("SMS")]
public async Task<ActionResult> GetSMSExport([FromQuery]string phoneNumber)
{
Console.WriteLine($"Generating items: {DateTime.Now.ToLongTimeString()}");
Console.WriteLine($"Finished generating items: {DateTime.Now.ToLongTimeString()}");
Console.WriteLine($"Generating CSV: {DateTime.Now.ToLongTimeString()}");
var tmp = new List<SMS>();
// tmp never gets cleared by the garbage collector, even though it's not used after the call is finished
for (int i = 0; i < 1000000; i++)
{
tmp.Add(new SMS() {
GatewayName = GenerateString(),
Message = GenerateString(),
Status = GenerateString()
});
}
// 1048574 rows is the practical max; Excel allows 1048576, but the header and the separator line take up 2 of them
ActionResult csv = await ExportDataAsCSV(tmp, $"SMS_Report.csv");
Console.WriteLine($"Finished generating CSV: {DateTime.Now.ToLongTimeString()}");
return csv;
}
private async Task<ActionResult> ExportDataAsCSV(IEnumerable<object> listToExport, string fileName)
{
Console.WriteLine("Creating file");
if (listToExport is null || !listToExport.Any())
throw new ArgumentNullException(nameof(listToExport));
System.IO.File.Delete("Reports/" + GenerateString() + fileName);
var file = System.IO.File.Create("Reports/" + GenerateString() + fileName, 4096, FileOptions.DeleteOnClose);
var streamWriter = new StreamWriter(file, Encoding.UTF8);
await streamWriter.WriteAsync("sep=;");
await streamWriter.WriteAsync(Environment.NewLine);
var headerNames = listToExport.First().GetType().GetProperties();
foreach (var header in headerNames)
{
var displayAttribute = header.GetCustomAttributes(typeof(System.ComponentModel.DataAnnotations.DisplayAttribute),true);
if (displayAttribute.Length != 0)
{
var attribute = displayAttribute.Single() as System.ComponentModel.DataAnnotations.DisplayAttribute;
await streamWriter.WriteAsync(sharedLocalizer[attribute.Name] + ";");
}
else
await streamWriter.WriteAsync(header.Name + ";");
}
await streamWriter.WriteAsync(Environment.NewLine);
var newListToExport = listToExport.ToArray();
for (int j = 0; j < newListToExport.Length; j++)
{
object item = newListToExport[j];
var itemProperties = item.GetType().GetProperties();
for (int i = 0; i < itemProperties.Length; i++)
{
await streamWriter.WriteAsync(itemProperties[i].GetValue(item)?.ToString() + ";");
}
await streamWriter.WriteAsync(Environment.NewLine);
}
Helpers.LogHelper.Log(Helpers.LogHelper.LogType.Info, GetType(), $"User {User.Identity.Name} downloaded {fileName}");
await file.FlushAsync();
file.Position = 0;
return File(file, "text/csv", fileName);
}
public class SMS
{
[Display(Name = "Sent_at_text")]
public DateTime? SentAtUtc { get; set; }
[Display(Name = "gateway_name")]
public string GatewayName { get; set; }
[Display(Name = "message_title")]
[JsonProperty("messageText")]
public string Message { get; set; }
[Display(Name = "status_title")]
[JsonProperty("statusText")]
public string Status { get; set; }
}
The answer can be found in this post:
MVC memory issue, memory not getting cleared after controller call is finished (Example project included)
Copied answer:
The GitHub version is a fixed version for this issue, so you can explore what I did in the changeset.
Notes:
After generating a large file, you might need to download a few smaller files before C# releases the memory.
Adding forced garbage collection helped a lot.
Adding a few using statements also helped a lot (a sketch of what that might look like follows this list).
Smaller remaining issues:
If your object can't fit in RAM and it starts filling the pagefile, the pagefile will not shrink after use (restarting your PC helps a little, but won't clear the pagefile entirely).
I couldn't get it below 400 MB of RAM usage no matter what I tried, but whether the data took 5 GB or 1 GB of RAM, it would still come back down to roughly 400 MB.
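For reference, here is a sketch of how the using statement might look in the question's ExportDataAsCSV method. The FileStream itself has to stay open, because File(file, "text/csv", fileName) returns a FileStreamResult that disposes the stream after the response is written, so only the StreamWriter is wrapped, using its leaveOpen parameter:

// Sketch: disposing the StreamWriter flushes it; leaveOpen keeps the
// underlying FileStream alive for the FileStreamResult to consume.
var file = System.IO.File.Create("Reports/" + GenerateString() + fileName, 4096, FileOptions.DeleteOnClose);
using (var streamWriter = new StreamWriter(file, Encoding.UTF8, bufferSize: 4096, leaveOpen: true))
{
    await streamWriter.WriteAsync("sep=;");
    // ... write the header and rows exactly as in the question ...
}
file.Position = 0;
return File(file, "text/csv", fileName);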
I need to convert 5,000,000 records from the DB to JSON, but I'm running out of memory after about 4,000 records.
I'm using a Task, thinking that when the task completes the GC will clear everything the thread used from memory.
public class Program
{
public static void Main(string[] args)
{
Program p = new Program();
p.ExportUsingTask();
}
public void ExportUsingTask()
{
List<int> ids = Program.LoadID(); // trying not to keep DbContext references, so the GC can free memory
GC.Collect(); // GC can clear 130MB of memory, DBContext have no references anymore
GC.WaitForPendingFinalizers();
foreach (int item in ids)
{
Task job = new Task(() => new Program().Process(item));
job.RunSynchronously();
Task.WaitAll(job);
job.Dispose();
job = null;
GC.Collect(); // GC don't clear memory, uses more and more memory at each iteration, until OutOfMemoryException
GC.WaitForPendingFinalizers();
}
}
public static List<int> LoadID()
{
List<int> ids = new List<int>();
using (Context db = new Context())
{
ids = db.Alfa.Where(a => a.date.Year == 2019).Select(a => a.id).ToList<int>(); // load 500.000 id from DB, use 130MB of memory
// have some business logic here, but isn't the problem, memory is free after execution anyway
db.Dispose();
}
return ids;
}
public void Process(int id)
{
Beta b = GetBetaFromAlfa(id); // Beta is JSON model that I need save to file
string json = Newtonsoft.Json.JsonConvert.SerializeObject(b);
b = null;
using (StreamWriter sw = System.IO.File.AppendText(@"c:\MyFile.json"))
{
sw.Write(json);
sw.Close();
sw.Dispose();
}
GC.Collect(); // GC don't clear memory
GC.WaitForPendingFinalizers();
}
public static Beta GetBetaFromAlfa(int idAlfa)
{
Alfa a = null; // Alfa is my model in DB
Beta b = null; // Beta is JSON model that I need save to file
using (Context db = new Context())
{
a = db.Alfa.Single(x => x.id == idAlfa);
b = ConvertAlfaToBeta(a);
db.Dispose();
}
GC.Collect(); // GC don't clear memory
GC.WaitForPendingFinalizers();
return b;
}
public static Beta ConvertAlfaToBeta(Alfa alfa)
{
// business logic, something like:
// beta.id = alfa.id;
// beta.name = alfa.name;
// only simple type association (int, string, decimal, datetime, etc)
}
}
public class Alfa { ... }
public class Beta { ... }
In my first attempt I used a single loop, reading records one by one and appending the accumulated JSON to the file every 100 records. But I still ran out of memory at around 4,000 records, using this loop:
public void ExportUsingLoop()
{
List<int> ids = Program.LoadID(); // trying not to keep DbContext references, so the GC can free memory
GC.Collect(); // GC can clear 130MB of memory, DBContext have no references anymore
GC.WaitForPendingFinalizers();
int count = 0;
StringBuilder content = new StringBuilder();
foreach (int item in ids)
{
count++;
Beta b = GetBetaFromAlfa(item); // Beta is JSON model that I need save to file
string json = Newtonsoft.Json.JsonConvert.SerializeObject(b);
content.AppendLine(json);
b = null;
json = null;
if(count % 100 == 0)
{
using (StreamWriter sw = System.IO.File.AppendText(@"c:\MyFile.json"))
{
sw.Write(content.ToString());
content.Clear(); // just for clarification
sw.Close();
sw.Dispose();
}
GC.Collect(); // GC don't clear memory, uses more and more memory at each iteration, until OutOfMemoryException
GC.WaitForPendingFinalizers();
}
}
}
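For comparison, the memory pressure largely disappears if nothing is accumulated at all: keep one StreamWriter open for the whole export and serialize each record straight to it. This is only a sketch built on the question's own types (Context, Alfa, Beta, ConvertAlfaToBeta), and it assumes Entity Framework's AsNoTracking is available so the context doesn't keep every loaded entity alive:

public void ExportStreaming()
{
    List<int> ids = Program.LoadID();

    using (StreamWriter sw = System.IO.File.AppendText(@"c:\MyFile.json"))
    using (Context db = new Context())
    {
        foreach (int id in ids)
        {
            // AsNoTracking stops the change tracker from holding a reference
            // to every entity loaded during the export
            Alfa a = db.Alfa.AsNoTracking().Single(x => x.id == id);
            Beta b = ConvertAlfaToBeta(a);
            sw.WriteLine(Newtonsoft.Json.JsonConvert.SerializeObject(b));
        }
    }
}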
I have a simple program that just reads an XPS file. I've read the following post and it did solve part of the issue:
Opening XPS document in .Net causes a memory leak
class Program
{
static int intCounter = 0;
static object _intLock = new object();
static int getInt()
{
lock (_intLock)
{
return intCounter++;
}
}
static void Main(string[] args)
{
Console.ReadLine();
for (int i = 0; i < 100; i++)
{
Thread t = new Thread(() =>
{
var ogXps = File.ReadAllBytes(@"C:\Users\Nathan\Desktop\Objective.xps");
readXps(ogXps);
Console.WriteLine(getInt().ToString());
});
t.SetApartmentState(ApartmentState.STA);
t.Start();
Thread.Sleep(50);
}
Console.ReadLine();
}
static void readXps(byte[] originalXPS)
{
try
{
MemoryStream inputStream = new MemoryStream(originalXPS);
string memoryStreamUri = "memorystream://" + Path.GetFileName(Guid.NewGuid().ToString() + ".xps");
Uri packageUri = new Uri(memoryStreamUri);
Package oldPackage = Package.Open(inputStream);
PackageStore.AddPackage(packageUri, oldPackage);
XpsDocument xpsOld = new XpsDocument(oldPackage, CompressionOption.Normal, memoryStreamUri);
FixedDocumentSequence seqOld = xpsOld.GetFixedDocumentSequence();
//The following did solve some of the memory issue
//-----------------------------------------------
var docPager = seqOld.DocumentPaginator;
docPager.ComputePageCount();
for (int i = 0; i < docPager.PageCount; i++)
{
FixedPage fp = docPager.GetPage(i).Visual as FixedPage;
fp.UpdateLayout();
}
seqOld = null;
//-----------------------------------------------
xpsOld.Close();
oldPackage.Close();
oldPackage = null;
inputStream.Close();
inputStream.Dispose();
inputStream = null;
PackageStore.RemovePackage(packageUri);
}
catch (Exception e)
{
}
}
}
The program reads the XPS file a hundred times. (Memory profiler screenshots omitted: one taken before applying the fix and one after.)
The fix suggested in that post did eliminate some objects; however, I found that objects like Dispatcher, ContextLayoutManager and MediaContext still exist in memory, and there are exactly 100 of each. Is this normal behavior or a memory leak? How do I fix it? Thanks.
25/7/2018 Update
Adding the line Dispatcher.CurrentDispatcher.InvokeShutdown(); did get rid of the Dispatcher, ContextLayoutManager and MediaContext objects. I don't know whether this is the ideal way to fix it.
It looks like the classes you're left with come from the XpsDocument, which implements IDisposable, but you never call Dispose. A few more of the classes you use implement that same interface; as a rule of thumb, either wrap such objects in a using statement so their Dispose method is guaranteed to be called, or call Dispose yourself.
An improved version of your readXps method will look like this:
static void readXps(byte[] originalXPS)
{
try
{
using (MemoryStream inputStream = new MemoryStream(originalXPS))
{
string memoryStreamUri = "memorystream://" + Path.GetFileName(Guid.NewGuid().ToString() + ".xps");
Uri packageUri = new Uri(memoryStreamUri);
using(Package oldPackage = Package.Open(inputStream))
{
PackageStore.AddPackage(packageUri, oldPackage);
using(XpsDocument xpsOld = new XpsDocument(oldPackage, CompressionOption.Normal, memoryStreamUri))
{
FixedDocumentSequence seqOld = xpsOld.GetFixedDocumentSequence();
//The following did solve some of the memory issue
//-----------------------------------------------
var docPager = seqOld.DocumentPaginator;
docPager.ComputePageCount();
for (int i = 0; i < docPager.PageCount; i++)
{
FixedPage fp = docPager.GetPage(i).Visual as FixedPage;
fp.UpdateLayout();
}
seqOld = null;
//-----------------------------------------------
} // disposes XpsDocument
} // dispose Package
PackageStore.RemovePackage(packageUri);
} // dispose MemoryStream
}
catch (Exception e)
{
// really do something here, at least:
Debug.WriteLine(e);
}
}
This should at least clean up most of the objects. I'm not sure whether you'll see the effect in your profiling, as that depends on whether the objects are actually collected during your analysis. Profiling a debug build might give unanticipated results.
As the remaining object instances seem to be bound to the System.Windows.Threading.Dispatcher, I suggest keeping a reference to your threads (though at this point you might consider looking into Tasks) and, once all threads are done, calling the static ExitAllFrames on the Dispatcher.
Your main method will then look like this:
Console.ReadLine();
Thread[] all = new Thread[100];
for (int i = 0; i < all.Length; i++)
{
var t = new Thread(() =>
{
var ogXps = File.ReadAllBytes(@"C:\Users\Nathan\Desktop\Objective.xps");
readXps(ogXps);
Console.WriteLine(getInt().ToString());
});
t.SetApartmentState(ApartmentState.STA);
t.Start();
all[i] = t; // keep reference
Thread.Sleep(50);
}
foreach(var t in all) t.Join(); // https://stackoverflow.com/questions/263116/c-waiting-for-all-threads-to-complete
all = null; // meh
Dispatcher.ExitAllFrames(); // https://stackoverflow.com/a/41953265/578411
Console.ReadLine();
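Alternatively, building on what the question's 25/7/2018 update already found, each STA thread can shut down its own Dispatcher just before it finishes, which releases that thread's Dispatcher, ContextLayoutManager and MediaContext. A sketch of the thread body:

var t = new Thread(() =>
{
    var ogXps = File.ReadAllBytes(@"C:\Users\Nathan\Desktop\Objective.xps");
    readXps(ogXps);
    Console.WriteLine(getInt().ToString());
    // shut down this thread's Dispatcher so its WPF infrastructure can be collected
    Dispatcher.CurrentDispatcher.InvokeShutdown();
});
t.SetApartmentState(ApartmentState.STA);
t.Start();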
It is possible using OpenPop.dll:
Pop3Client objPOP3Client = new Pop3Client();
int intTotalEmail = 0;
DataTable dtEmail = new DataTable();
object[] objMessageParts;
try
{
dtEmail = GetAllEmailStructure();
if (objPOP3Client.Connected)
objPOP3Client.Disconnect();
objPOP3Client.Connect(strHostName, intPort, bulUseSSL);
try
{
objPOP3Client.Authenticate(strUserName, new Common()._Decode(strPassword));
intTotalEmail = objPOP3Client.GetMessageCount();
AddMapping();
for (int i = 1; i <= intTotalEmail; i++)
{
objMessageParts = GetMessageContent(i, ref objPOP3Client, dtExistMailList);
if (objMessageParts != null && objMessageParts[0].ToString() == "0")
{
AddToDtEmail(objMessageParts, i, dtEmail, dtUserList, dtTicketIDList, dtBlacklistEmails, dtBlacklistSubject, dtBlacklistDomains);
}
}
}
catch (Exception ex)
{
}
}
catch (Exception ex)
{
ParserLogError(ex, "GetAllEmail()");
}
finally
{
if (objPOP3Client.Connected)
objPOP3Client.Disconnect();
}
// function
public object[] GetMessageContent(int intMessageNumber, ref Pop3Client objPOP3Client, DataTable dtExistingMails)
{
object[] strArrMessage = new object[10];
Message objMessage;
MessagePart plainTextPart = null, HTMLTextPart = null;
string strMessageId = "";
try
{
strArrMessage[0] = "";
strArrMessage[1] = "";
strArrMessage[2] = "";
strArrMessage[3] = "";
strArrMessage[4] = "";
strArrMessage[5] = "";
strArrMessage[6] = "";
strArrMessage[7] = null;
strArrMessage[8] = null;
strArrMessage[7] = "";
strArrMessage[8] = "";
objMessage = objPOP3Client.GetMessage(intMessageNumber);
strMessageId = (objMessage.Headers.MessageId == null ? "" : objMessage.Headers.MessageId.Trim());
if (!IsExistMessageID(dtExistingMails, strMessageId)) //check in data base message id is exists or not
{
strArrMessage[0] = "0";
strArrMessage[1] = objMessage.Headers.From.Address.Trim(); // From EMail Address
strArrMessage[2] = objMessage.Headers.From.DisplayName.Trim(); // From EMail Name
strArrMessage[3] = objMessage.Headers.Subject.Trim();// Mail Subject
plainTextPart = objMessage.FindFirstPlainTextVersion();
strArrMessage[4] = (plainTextPart == null ? "" : plainTextPart.GetBodyAsText().Trim());
HTMLTextPart = objMessage.FindFirstHtmlVersion();
strArrMessage[5] = (HTMLTextPart == null ? "" : HTMLTextPart.GetBodyAsText().Trim());
strArrMessage[6] = strMessageId;
List<MessagePart> attachment = objMessage.FindAllAttachments();
strArrMessage[7] = null;
strArrMessage[8] = null;
if (attachment.Count > 0)
{
if (attachment[0] != null && attachment[0].IsAttachment)
{
strArrMessage[7] = attachment[0].FileName.Trim();
strArrMessage[8] = attachment[0];
}
}
}
else
{
strArrMessage[0] = "1";
}
}
catch (Exception ex)
{
ParserLogError(ex, "GetMessageContent()");
}
return strArrMessage;
}
But I want to make it faster than the OpenPop.dll approach above, so please let me know if there are any other techniques for parsing mails.
Please check the code and then tell me.
Thanks in advance.
But I want to make it faster than the OpenPop.dll approach above, so please let me
know if there are any other techniques for parsing mails.
In your GetMessageContent() method, the one place that consumes the vast majority of the time is:
objMessage = objPOP3Client.GetMessage(intMessageNumber);
The network I/O part of downloading a message cannot really be optimized, but OpenPop.NET's parser is slow (based on my own performance tests).
MimeKit is 25x faster than OpenPop.NET at parsing email messages.
One of the main performance problems in OpenPop.NET's MIME parser is the fact that it uses a StreamReader for parsing (which is slow due to unnecessary charset conversion, reading 1 line at a time, etc - I have an analysis of another email library that uses StreamReader for parsing here: https://stackoverflow.com/a/18787176/87117).
Then there's the problem that OpenPop.NET's parser also uses Regex to remove CFWS (Comments and Folding White Space) from a header string before parsing/decoding it. This is expensive. It's far better to write a good tokenizer that can deal with CFWS.
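To illustrate the idea (this is a simplified sketch, not MimeKit's actual code): skipping CFWS by hand only needs a small loop over the raw header characters, handling nested "(comments)" and backslash escapes, with no Regex and no intermediate string allocations:

// Advances 'index' past folding whitespace and (possibly nested) comments.
// Simplified: real RFC 5322 parsing has more corner cases.
static void SkipCommentsAndFoldingWhiteSpace(string text, ref int index)
{
    while (index < text.Length)
    {
        char c = text[index];

        if (char.IsWhiteSpace(c))
        {
            index++;
        }
        else if (c == '(')
        {
            int depth = 0;
            while (index < text.Length)
            {
                c = text[index++];
                if (c == '\\' && index < text.Length)
                    index++;                 // skip a quoted-pair character
                else if (c == '(')
                    depth++;
                else if (c == ')' && --depth == 0)
                    break;                   // end of the outermost comment
            }
        }
        else
        {
            break;                           // next real token starts here
        }
    }
}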
If you are interested in some of the other techniques I used to optimize MimeKit to be so fast (as fast or faster than highly optimized C implementations), I wrote some blog posts about this:
Optimization Tricks used by MimeKit: Part 1
The summary of the optimization I talk about in part 1 is replacing loops like this that scan for the end of a line:
while (*inptr != (byte) '\n')
inptr++;
with a faster loop, like this:
int* dword = (int*) inptr;
do {
mask = *dword++ ^ 0x0A0A0A0A;
mask = ((mask - 0x01010101) & (~mask & 0x80808080));
} while (mask == 0);
inptr = (byte*) (dword - 1);
while (*inptr != (byte) '\n')
inptr++;
which improved performance by 20% (although on non-x86 architectures, it requires 'dword' to be 4-byte aligned).
Optimization Tricks used by MimeKit: Part 2
In part 2, I talk about writing a more optimized version of System.IO.MemoryStream. The problem with MemoryStream is that it has to keep 1 contiguous block of memory with the content, which means that as you write more data to it and it has to resize its internal byte array, it has to copy the content to the new array (which is expensive, especially once the amount of data in the stream is large).
To work around this performance bottleneck, I wrote a MemoryBlockStream which does not need to use a contiguous block of memory - it uses a linked list of byte arrays. Instead of having to resize the byte array when you overflow the current buffer, it simply allocates another 2048-byte array that the data will overflow into and appends it to the linked list.
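The core of that idea can be sketched in a few lines (an illustration of the chained-block approach, not MimeKit's MemoryBlockStream itself): a Stream backed by a list of fixed-size blocks, so growing the stream only appends another block instead of copying everything into a larger array:

using System;
using System.Collections.Generic;
using System.IO;

class ChainedBlockStream : Stream
{
    const int BlockSize = 2048;
    readonly List<byte[]> blocks = new List<byte[]>();
    long length, position;

    public override bool CanRead => true;
    public override bool CanSeek => true;
    public override bool CanWrite => true;
    public override long Length => length;
    public override long Position { get => position; set => position = value; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        while (count > 0)
        {
            int block = (int)(position / BlockSize);
            int blockOffset = (int)(position % BlockSize);
            while (blocks.Count <= block)
                blocks.Add(new byte[BlockSize]);   // grow by appending, never by copying

            int n = Math.Min(BlockSize - blockOffset, count);
            Buffer.BlockCopy(buffer, offset, blocks[block], blockOffset, n);
            position += n; offset += n; count -= n;
            if (position > length) length = position;
        }
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int total = 0;
        while (count > 0 && position < length)
        {
            int block = (int)(position / BlockSize);
            int blockOffset = (int)(position % BlockSize);
            int n = (int)Math.Min(Math.Min(BlockSize - blockOffset, count), length - position);
            Buffer.BlockCopy(blocks[block], blockOffset, buffer, offset, n);
            position += n; offset += n; count -= n; total += n;
        }
        return total;
    }

    public override long Seek(long offset, SeekOrigin origin)
    {
        position = origin == SeekOrigin.Begin ? offset
                 : origin == SeekOrigin.Current ? position + offset
                 : length + offset;
        return position;
    }

    public override void SetLength(long value) => length = value;
    public override void Flush() { }
}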
Note: MimeKit itself only does email parsing, it doesn't do POP3 or SMTP or IMAP. If you want that kind of functionality, I've also written a library built on MimeKit that does that as well: MailKit
Update:
Sample code using MailKit (as requested) to download/parse all messages:
using System;
using System.Net;
using MailKit.Net.Pop3;
using MailKit;
using MimeKit;
namespace TestClient {
class Program
{
public static void Main (string[] args)
{
using (var client = new Pop3Client ()) {
client.Connect ("pop.gmail.com", 995, true);
// Note: since we don't have an OAuth2 token, disable
// the XOAUTH2 authentication mechanism.
client.AuthenticationMechanisms.Remove ("XOAUTH2");
client.Authenticate ("joey@gmail.com", "password");
int count = client.GetMessageCount ();
for (int i = 0; i < count; i++) {
var message = client.GetMessage (i);
Console.WriteLine ("Subject: {0}", message.Subject);
}
client.Disconnect (true);
}
}
}
}
I am calling the method below in a loop with the same xmlRequestPath and xmlResponsePath files. It executes fine for two iterations; on the third iteration I get the exception "The process cannot access the file because it is being used by another process."
public static void UpdateBatchID(String xmlRequestPath, String xmlResponsePath)
{
String batchId = "";
XDocument requestDoc = null;
XDocument responseDoc = null;
lock (locker)
{
using (var sr = new StreamReader(xmlRequestPath))
{
requestDoc = XDocument.Load(sr);
var element = requestDoc.Root;
batchId = element.Attribute("BatchID").Value;
if (batchId.Length >= 16)
{
batchId = batchId.Remove(0, 16).Insert(0, DateTime.Now.ToString("yyyyMMddHHmmssff"));
}
else if (batchId != "") { batchId = DateTime.Now.ToString("yyyyMMddHHmmssff"); }
element.SetAttributeValue("BatchID", batchId);
}
using (var sw = new StreamWriter(xmlRequestPath))
{
requestDoc.Save(sw);
}
using (var sr = new StreamReader(xmlResponsePath))
{
responseDoc = XDocument.Load(sr);
var elementResponse = responseDoc.Root;
elementResponse.SetAttributeValue("BatchID", batchId);
}
using (var sw = new StreamWriter(xmlResponsePath))
{
responseDoc.Save(sw);
}
}
Thread.Sleep(500);
requestDoc = null;
responseDoc = null;
}
The exception occurs at using (var sw = new StreamWriter(xmlResponsePath)) in the code above.
Exception:
The process cannot access the file 'D:\Projects\ESELServer20130902\trunk\Testing\ESL Server Testing\ESLServerTesting\ESLServerTesting\TestData\Assign\Expected Response\Assign5kMACResponse.xml' because it is being used by another process.
Maybe at the third iteration the stream is still being closed, so the file is reported as inaccessible. Try waiting a bit before calling the method again in the loop, for example:
while (...)
{
UpdateBatchID(xmlRequestPath, xmlResponsePath);
System.Threading.Thread.Sleep(500);
}
Or, explicitly close the stream instead of leaving that work to the garbage collector:
var sr = new StreamReader(xmlResponsePath);
responseDoc = XDocument.Load(sr);
....
sr.Close();
Instead of using two streams (one for writing and one for reading), try using a single FileStream, since the problem might be that after loading the file the read stream remains open until the garbage collector finalizes it.
using (FileStream f = new FileStream(xmlResponsePath, FileMode.Open, FileAccess.ReadWrite))
{
responseDoc = XDocument.Load(f);
var elementResponse = responseDoc.Root;
elementResponse.SetAttributeValue("BatchID", batchId);
f.SetLength(0); // truncate the old content before writing the updated document
responseDoc.Save(f);
}
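A simpler variant of the same idea: since XDocument can load from and save to a file path directly, you can avoid holding any reader or writer objects yourself (a sketch using the question's variables):

// XDocument opens and closes the file internally, so nothing is left
// open waiting for the garbage collector.
responseDoc = XDocument.Load(xmlResponsePath);
responseDoc.Root.SetAttributeValue("BatchID", batchId);
responseDoc.Save(xmlResponsePath);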