Log4Net Custom AdoNetAppender Buffer Issue - C#

I am using log4net and I have created my own appender based on the AdoNetAppender. My appender implements a kind of buffer that groups identical events into a single log entry (for thousands of identical errors, I will have only one row in the database).
Here is the code for easier comprehension (my appender has a BufferSize of 1):
class CustomAdoNetAppender : AdoNetAppender
{
//My Custom Buffer
private static List<LoggingEvent> unSendEvents = new List<LoggingEvent>();
private int customBufferSize = 5;
private double interval = 100;
private static DateTime lastSendTime = DateTime.Now;
protected override void SendBuffer(log4net.Core.LoggingEvent[] events)
{
LoggingEvent loggingEvent = events[0];
LoggingEvent l = unSendEvents.Find(delegate(LoggingEvent logg) { return GetKey(logg).Equals(GetKey(loggingEvent), StringComparison.OrdinalIgnoreCase); });
//If the events already exist in the custom buffer (unSendEvents) containing the 5 last events
if (l != null)
{
//Increment the count property
try
{
l.Properties["Count"] = (int)l.Properties["Count"] + 1;
}
catch
{
l.Properties["Count"] = 1;
}
}
//Else
else
{
//If the custom buffer (unSendEvents) contains 5 events
if (unSendEvents.Count() == customBufferSize)
{
//Persist the older event
base.SendBuffer(new LoggingEvent[] { unSendEvents.ElementAt(0) });
//Delete it from the buffer
unSendEvents.RemoveAt(0);
}
//Set count properties to 1
loggingEvent.Properties["Count"] = 1;
//Add the event to the pre-buffer
unSendEvents.Add(loggingEvent);
}
//If timer is over
TimeSpan timeElapsed = loggingEvent.TimeStamp - lastSendTime;
if (timeElapsed.TotalSeconds > interval)
{
//Persist all events contained in the unSendEvents buffer
base.SendBuffer(unSendEvents.ToArray());
//Update send time
lastSendTime = unSendEvents.ElementAt(unSendEvents.Count() - 1).TimeStamp;
//Flush the buffer
unSendEvents.Clear();
}
}
/// <summary>
/// Function to build a key (aggregation of important properties of a logging event) to facilitate comparison.
/// </summary>
/// <param name="logg">The logging event to get the key for.</param>
/// <returns>Formatted string representing the log event key.</returns>
private string GetKey(LoggingEvent logg)
{
return string.Format("{0}|{1}|{2}|{3}", logg.Properties["ErrorCode"] == null ? string.Empty : logg.Properties["ErrorCode"].ToString()
, logg.Level.ToString()
, logg.LoggerName
, logg.MessageObject.ToString()
);
}
}
The buffering and counting work well. My issue is that I lose the last 5 logs because the buffer is not flushed at the end of the program. The unSendEvents buffer is full but never flushed to the database, because no new logs arrive to "push" the older logs into the db.
Is there any solution for me? I have tried to use the Flush() method, but with no success.

The Smtp appender has a lossy parameter. If it's not set to false, you aren't guaranteed to get all of the logging messages. Sounds like that might be your problem. I use a config file, so this line is in my appender definition:
<lossy value="false" />

There are a couple ways I can think of to handle this. The first is to change your buffer size to one (it is at 5 right now). That would ensure that all entries get written right away. However, this might not be ideal. If that is the case, one work-around I can think of is to put five dummy log messages into your buffer. That will flush out the real ones and your dummy events will be the ones that get dropped.
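Another option, sketched below and not tested against your exact appender, is to override OnClose() in the custom appender and flush whatever is still sitting in unSendEvents when log4net shuts down. log4net calls OnClose() when the logging repository shuts down, which you can force at the end of the program with log4net.LogManager.Shutdown().
protected override void OnClose()
{
    //Persist any events still waiting in the custom buffer before the appender closes
    if (unSendEvents.Count > 0)
    {
        base.SendBuffer(unSendEvents.ToArray());
        unSendEvents.Clear();
    }
    base.OnClose();
}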

Related

Application Insights, and its maximum storage ability on telemetry

I have a middleware telemetry handler that has a method that awaits the execution of a request and then tries to store some key data values from the response body into custom dimension fields in Application Insights, so I can use Grafana and potentially other 3rd-party products to analyse my responses.
public class ResponseBodyHandler : IResponseBodyHandler
{
private readonly ITelemetryPropertyHandler _telemetryPropertyHandler = new TelemetryPropertyHandler();
public void TransformResponseBodyDataToTelemetryData(RequestTelemetry requestTelemetry, string responseBody)
{
SuccessResponse<List<Booking>> result = null;
try
{
result = JsonConvert.DeserializeObject<SuccessResponse<List<Booking>>>(responseBody);
}
catch (Exception e)
{
Log.Error("Telemetry response handler, failure to deserialize response body: " + e.Message);
return;
}
_telemetryPropertyHandler.CreateTelemetryProperties(requestTelemetry, result);
}
}
public class TelemetryPropertyHandler : ITelemetryPropertyHandler
{
private readonly ILabelHandler _labelHandler = new LabelHandler();
public void CreateTelemetryProperties(RequestTelemetry requestTelemetry, SuccessResponse<List<Booking>> result)
{
Header bookingHeader = result?.SuccessObject?.FirstOrDefault()?.BookingHeader;
requestTelemetry?.Properties.Add("ResponseClientId", "" + bookingHeader?.ConsigneeNumber);
Line line = bookingHeader?.Lines.FirstOrDefault();
requestTelemetry?.Properties.Add("ResponseProductId", "" + line?.PurchaseProductID);
requestTelemetry?.Properties.Add("ResponseCarrierId", "" + line?.SubCarrierID);
_labelHandler.HandleLabel(requestTelemetry, bookingHeader);
requestTelemetry?.Properties.Add("ResponseBody", JsonConvert.SerializeObject(result));
}
}
Now, inside: _labelHandler.HandleLabel(requestTelemetry, bookingHeader);
It extracts an image that is Base64 encoded, chunks the string into pieces of 8192 characters, and adds them to the Properties as: Image index 0 .. N (N being the total number of chunks).
I can debug and verify that the code works.
However, in Application Insights the entire "request" entry is missing, not just the custom dimensions.
I am assuming that this is due to a maximum size constraint and that I am likely trying to add more data than is "allowed"; however, I can't for the life of me find the documentation that enforces this restriction.
Can someone tell me what rule I am breaking, so I can either truncate the image out if it isn't possible to store that much data, or fix whatever else I am doing wrong?
I have validated that my code works fine as long as I truncate the data into a single property, which of course only partially stores the image (making said "feature" useless).
public class LabelHandler : ILabelHandler
{
private readonly IBase64Splitter _base64Splitter = new Base64Splitter();
public void HandleLabel(RequestTelemetry requestTelemetry, Header bookingHeader)
{
Label label = bookingHeader?.Labels.FirstOrDefault();
IEnumerable<List<char>> splitBase64String = _base64Splitter.SplitList(label?.Base64.ToList());
if (splitBase64String != null)
{
bool imageHandlingWorked = true;
try
{
int index = 0;
foreach (List<char> chunkOfImageString in splitBase64String)
{
string dictionaryKey = $"Image index {index}";
string chunkData = new string(chunkOfImageString.ToArray());
requestTelemetry?.Properties.Add(dictionaryKey, chunkData);
index++;
}
}
catch (Exception e)
{
imageHandlingWorked = false;
Log.Error("Error trying to store label in chunks: " + e.Message);
}
if (imageHandlingWorked && label != null)
{
label.Base64 = "";
}
}
}
}
The above code is responsible for adding the chunks to the requestTelemetry Properties field.
public class Base64Splitter : IBase64Splitter
{
private const int ChunkSize = 8192;
public IEnumerable<List<T>> SplitList<T>(List<T> originalList)
{
for (var i = 0; i < originalList.Count; i += ChunkSize)
yield return originalList.GetRange(i, Math.Min(ChunkSize, originalList.Count - i));
}
}
This is the method that creates chunks of characters corresponding to the Application Insights maximum size per custom dimension field.
Here is an image of the truncated field being added, if I limit myself to a single property and truncate the Base64-encoded value.
[I'm from the Application Insights team]
You can find field limits documented here: https://learn.microsoft.com/en-us/azure/azure-monitor/app/data-model-request-telemetry
On the ingestion side there is a limit of 64 * 1024 bytes for the overall JSON payload (we need to add this to the documentation).
You're facing something different though: the custom dimensions are removed completely. Maybe the SDK detects that 64 KB is exceeded and "mitigates" it this way. Can you try to limit the payload to a little less than 64 KB?
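One way to try that suggestion, sketched below under the assumption of a self-imposed budget a little under the 64 KB ingestion limit mentioned above. The helper name, the budget constant, and the rough per-key allowance are illustrative, not part of the SDK (requires System.Text, System.Linq, System.Collections.Generic and Microsoft.ApplicationInsights.DataContracts):
private const int MaxPayloadBytes = 60 * 1024; // assumed budget, just under the ~64 KB ingestion limit

private static void AddChunksWithinBudget(RequestTelemetry requestTelemetry, IEnumerable<string> chunks)
{
    // Bytes already consumed by existing custom dimensions (keys + values).
    int usedBytes = requestTelemetry.Properties
        .Sum(kvp => Encoding.UTF8.GetByteCount(kvp.Key) + Encoding.UTF8.GetByteCount(kvp.Value));
    int index = 0;
    foreach (string chunk in chunks)
    {
        int chunkBytes = Encoding.UTF8.GetByteCount(chunk) + 20; // rough allowance for the "Image index N" key
        if (usedBytes + chunkBytes > MaxPayloadBytes)
            break; // drop the remaining chunks rather than lose the whole request entry
        requestTelemetry.Properties.Add($"Image index {index}", chunk);
        usedBytes += chunkBytes;
        index++;
    }
}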

Best way to collect and store times and dates

This question might be a little ambiguous so I'll explain my goal and my previous implementation. I'm looking for some advice on how to improve my implementation.
My application needs a certain set of days and times (hours and minutes) to be used for criteria later in the program.
The days and times are variable and depend on whether a user is a member of a particular group or not.
My previous implementation was to get the name of the group that was selected and then go to the web server and download the appropriate file which contained the day and time. There was a file for each group.
The format of the text file was:
Day,Day,Day,etc..HH,HH,MM,MM
It was then read into two separate arrays with the positions hardcoded, e.g. indexes 0, 1, 2 were days, while 3, 4 were hours and 5, 6 were minutes.
This method also meant that I'd need a longer array for a group that had more days than another.
Obviously this was all very inefficient and the code wasn't very reusable or extendable. I'd have to alter it if a new group was introduced that had more or less data in its text file.
Edit - due to the vagueness of the question I have included code:
This method is passed the group name in the fileName parameter of CollectQualifyingTimes. The string looks like gtstimes.txt, gtsdates.txt, gectimes.txt or gecdates.txt.
internal static class DownloadQualifyingTimes
{
//TimeDate Arrays
public static readonly List<string> QDaysList = new List<string>();
public static readonly List<int> QTimesList = new List<int>();
private static StreamReader _reader;
private static string _line;
private static string _file;
/// <summary>
/// Collects the Times
/// </summary>
public static void CollectQualifyingTimes(string fileName)
{
Logger.Debug("About to download the " + fileName + " Qualifying Times and Dates");
FileDownload.DownloadOnlineFile(fileName);
OpenQualifyingFile(fileName);
}
/// <summary>
/// Open the qualifying file and read the values.
/// </summary>
/// <returns></returns>
private static void OpenQualifyingFile(string fileName)
{
try
{
_file = Path + "\\" + fileName;
using (_reader = new StreamReader(_file))
{
while ((_line = _reader.ReadLine()) != null)
{
if (fileName.Contains("Times"))
{
QTimesList.Add(Convert.ToInt16(_line));
Logger.Debug("Times " + _line);
}
else
{
QDaysList.Add(_line);
Logger.Debug("Days " + _line);
}
}
}
}
catch (WebException exception)
{
Logger.Error(exception);
}
}
}
//The method that calls the Downloading class looks like this:
/// <summary>
///
/// </summary>
/// <param name="selectedLeague"></param>
/// <returns></returns>
public static bool QualificationTimeCheck(string selectedLeague)
{
var currentUtcTime = DateTime.UtcNow;
//Day check regardless of league
if (DownloadQualifyingTimes.QDaysList.Contains(currentUtcTime.DayOfWeek.ToString()))
{
Logger.Debug("Qualifying day condition meet");
if (selectedLeague.IsOneOf("GTS", "CAT"))
{
Logger.Debug("GTS or CAT qualifying logic");
if (currentUtcTime.Hour ==
DownloadQualifyingTimes.QTimesList[0] ||
currentUtcTime.Hour ==
DownloadQualifyingTimes.QTimesList[1])
{
Logger.Debug("Qualifying hour condition meet");
if (((currentUtcTime.Minute > DownloadQualifyingTimes.QTimesList[2])
&& (currentUtcTime.Minute < DownloadQualifyingTimes.QTimesList[3])) || SessionObject.LastLapStartedMinute <= DownloadQualifyingTimes.QTimesList[3])
{
Logger.Debug("Qualifying minute condition meet");
return true;
}
I hope this illustrates the nature of my question and the problem.
Can you think of a better way to implement this process? If you need any more information regarding it please don't hesitate to ask.
Edit - I ended up implementing a List as per the first comment's suggestion.
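For illustration only (this is not the asker's final code), a structured schedule type like the sketch below would remove the hardcoded positions entirely; the class and property names are hypothetical, and the deserialization comment assumes Newtonsoft.Json:
// Hypothetical structured replacement for the positional Day/HH/MM text file.
public class QualifyingSchedule
{
    public string Group { get; set; }                  // e.g. "GTS"
    public List<DayOfWeek> Days { get; set; }          // qualifying days
    public List<TimeWindow> Windows { get; set; }      // qualifying time windows (UTC)
}

public class TimeWindow
{
    public int Hour { get; set; }
    public int StartMinute { get; set; }
    public int EndMinute { get; set; }

    // Mirrors the strict > / < minute comparison used in the question.
    public bool Contains(DateTime utcNow) =>
        utcNow.Hour == Hour && utcNow.Minute > StartMinute && utcNow.Minute < EndMinute;
}

// Usage sketch:
//   var schedule = JsonConvert.DeserializeObject<QualifyingSchedule>(json);
//   bool qualifies = schedule.Days.Contains(DateTime.UtcNow.DayOfWeek)
//                    && schedule.Windows.Any(w => w.Contains(DateTime.UtcNow));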

Undetectable memory leak

I have a Stock class which loads lots of stock data history from a file (about 100 MB). I have a Pair class that takes two Stock objects and calculates some statistical relations between the two then writes the results to file.
In my main method I have a loop going through a list of pairs of stocks (about 500). It creates 2 stock objects and then a pair object out of the two. At this point the pair calculations are written to file and I'm done with the objects. I need to free the memory so I can go on with the next calculation.
In addition to setting the 3 objects to null, I have added the following two lines at the end of the loop:
GC.Collect(GC.MaxGeneration);
GC.WaitForPendingFinalizers();
Stepping over these two lines seems to free up only 50 MB out of the 200-300 MB that is allocated on every loop iteration (viewing it from Task Manager).
The program gets through about eight or ten pairs before it gives me an out-of-memory exception. The memory usage steadily increases until it crashes at about 1.5 GB. (This is an 8 GB machine running Win7 Ultimate.)
I don't have much experience with garbage collection. Am I doing something wrong?
Here's my code, since you asked. (Note: the program has two modes: 1) add mode, in which new pairs are added to the system, and 2) regular mode, which updates the pair files in real time based on FileSystemWatcher events. The stock data is updated by an external app called QCollector.)
This is the segment in MainForm which runs in Add Mode:
foreach (string line in PairList)
{
string[] tokens = line.Split(',');
stockA = new Stock(QCollectorPath, tokens[0].ToUpper());
stockB = new Stock(QCollectorPath, tokens[1].ToUpper());
double ratio = double.Parse(tokens[2]);
Pair p = new Pair(QCollectorPath, stockA, stockB, ratio);
// at this point the pair is written to file (constructor handles this)
// commenting out the following lines of code since they don't fix the problem
// stockA = null;
// stockB = null;
// p = null;
// refraining from forced collection since that's not the problem
// GC.Collect(GC.MaxGeneration);
// GC.WaitForPendingFinalizers();
// so far this is the only way i can fix the problem by setting the pair classes
// references to StockA and StockB to null
p.Kill();
}
I am adding more code as per request. Stock and Pair are subclasses of TimeSeries, which has the common functionality:
public abstract class TimeSeries {
protected List<string> data;
// the following Create method must be implemented by subclasses (Stock, Pair, etc...)
// as each class is created differently, although their data formatting is identical
protected abstract List<string> Create();
// . . .
public void LoadFromFile()
{
data = new List<string>();
List<StreamReader> srs = GetAllFiles();
foreach (StreamReader sr in srs)
{
List<string> temp = new List<string>();
temp = TurnFileIntoListString(sr);
data = new List<string>(temp.Concat(data));
sr.Close();
}
}
// uses directory naming scheme (according to data month/year) to find files of a symbol
protected List<StreamReader> GetAllFiles()...
public static List<string> TurnFileIntoListString(StreamReader sr)
{
List<string> list = new List<string>();
string line;
while ((line = sr.ReadLine()) != null)
list.Add(line);
return list;
}
// this is the only mean to access a TimeSeries object's data
// this is to prevent deadlocks by time consuming methods such as pair's Create
public string[] GetListCopy()
{
lock (data)
{
string[] listCopy = new string[data.Count];
data.CopyTo(listCopy);
return listCopy;
}
}
}
public class Stock : TimeSeries
{
public Stock(string dataFilePath, string symbol, FileSystemWatcher fsw = null)
{
DataFilePath = dataFilePath;
Name = symbol.ToUpper();
LoadFromFile();
// to update stock data when external app updates the files
if (fsw != null) fsw.Changed += new FileSystemEventHandler(fsw_Changed);
}
protected override List<string> Create()
{
// stock files created by external application
}
// . . .
}
public class Pair : TimeSeries {
public Pair(string dataFilePath, Stock stockA, Stock stockB, double ratio)
{
// assign parameters to local members
// ...
if (FileExists())
LoadFromFile();
else
Create();
}
protected override List<string> Create()
{
// since stock can get updated by fileSystemWatcher's event handler
// a copy is obtained from the stock object's data
string[] listA = StockA.GetListCopy();
string[] listB = StockB.GetListCopy();
List<string> listP = new List<string>();
int i, j;
i = GetFirstValidBar(listA);
j = GetFirstValidBar(listB);
DateTime dtA, dtB;
dtA = GetDateTime(listA[i]);
dtB = GetDateTime(listB[j]);
// this hidden segment adjusts i and j until they are starting at same datetime
// since stocks can have different amount of data
while (i < listA.Length && j < listB.Length)
{
double priceA = GetPrice(listA[i]);
double priceB = GetPrice(listB[j]);
double priceP = priceA * ratio - priceB;
listP.Add(String.Format("{0},{1:0.00},{2:0.00},{3:0.00}"
, dtA
, priceP
, priceA
, priceB
));
if (i < j)
i++;
else if (j < i)
j++;
else
{
i++;
j++;
}
}
return listP;
}
public void Kill()
{
data = null;
stockA = null;
stockB = null;
}
}
Your memory leak is here:
if (fsw != null) fsw.Changed += new FileSystemEventHandler(fsw_Changed);
The instance of the stock object will be kept in memory as long as the FileSystemWatcher is alive, since it is responding to an event of the FileSystemWatcher.
I think that you want to either implement that event somewhere else, or at some other point in your code add a:
if (fsw != null) fsw.Changed -= fsw_Changed;
Given the way the code is written, it might be that the Stock object is intended to be created without a FileSystemWatcher in cases where bulk processing is done.
In the original code you posted, the constructors of the Stock class were being called with a FileSystemWatcher. You have changed that now. I think you will find that with a null FileSystemWatcher you can remove your Kill() call and you will not have a leak, since you are no longer listening to fsw.Changed.
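A sketch of that unsubscribe approach, building on the Stock class from the question; the _fsw field and the IDisposable implementation are additions for illustration, and fsw_Changed is the handler already defined in the original code:
public class Stock : TimeSeries, IDisposable
{
    private FileSystemWatcher _fsw; // hypothetical field keeping the watcher we subscribed to

    public Stock(string dataFilePath, string symbol, FileSystemWatcher fsw = null)
    {
        DataFilePath = dataFilePath;
        Name = symbol.ToUpper();
        LoadFromFile();
        if (fsw != null)
        {
            _fsw = fsw;
            _fsw.Changed += fsw_Changed;
        }
    }

    public void Dispose()
    {
        // Unhook the handler so the long-lived watcher no longer keeps this Stock (and its data) alive.
        if (_fsw != null)
        {
            _fsw.Changed -= fsw_Changed;
            _fsw = null;
        }
    }
}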

strange behavior of XamlReader.Load()?

I've got a very strange issue while parsing an external XAML file. The background is that I want to load an external XAML file with content to process, and I want to be able to load as many different files as I like. That happens by unloading the old one and loading the new one.
My issue is:
When I load a XAML file the first time, everything is fine, all as it should be.
But when I load the same XAML the second time, every entry of the object I'm loading is there twice. If I run it again, every object is there three times, and so on...
To debug the project yourself, download it here. The function starts at line 137 in the file "Control Panel.xaml.cs". I really don't know what this is. Is it my fault or simply a bug? If it's a bug, is there a workaround?
/// <summary>
/// Load a xaml file and parse it
/// </summary>
public void LoadPresentation()
{
this.Title = "Control Panel - " + System.IO.Path.GetFileName(global.file);
System.IO.FileStream XAML_file = new System.IO.FileStream(global.file, System.IO.FileMode.Open);
try
{
System.IO.StreamReader reader = new System.IO.StreamReader(XAML_file);
string dump = reader.ReadToEnd(); //This is only for debugging purposes because of the strange issue...
XAML_file.Seek(0, System.IO.SeekOrigin.Begin);
presentation = (ResourceDictionary)XamlReader.Load(XAML_file);
//Keys the resourceDictionary must have to be valid
if (presentation["INDEX"] == null || presentation["MAIN_GRID"] == null || presentation["CONTAINER"] == null || presentation["LAYOUTLIST"] == null)
{
throw new Exception();
}
//When this list is loaded, every item in it is there twice or three times or four... Why????
TopicList Index = null;
Index = (TopicList)presentation["INDEX"];
for (int i = 0; i < topics.Count; )
{
topics.RemoveAt(i);
}
foreach (TopicListItem item in Index.Topics)
{
topics.Insert(item.TopicIndex, (Topic)presentation[item.ResourceKey]);
}
lv_topics.SelectedIndex = 0;
selectedIndex = 0;
}
catch
{
System.Windows.Forms.MessageBox.Show("Failed to load XAML file \"" + global.file + "\"", "Parsing Error", System.Windows.Forms.MessageBoxButtons.OK, System.Windows.Forms.MessageBoxIcon.Error);
presentation = null;
}
finally
{
XAML_file.Close();
}
}
Edit:
I have tried to serialize the object that was read from the XamlReader, and nowhere in the output was there any child element... But if I pull the object out of the dictionary, the children are all there (duplicated and triplicated, but there).
I have already tried to clear the list with
topics.Clear();
and
topics=new ObservableCollection<TopicListItem>();
lv_topics.ItemsSource=topics;
Try Index.Topics.Clear() after loading the Topics into your topics object. That appears to get rid of the duplication.
//When this list is loaded, every item in it is there twice or three times or four... Why????
TopicList Index = null;
Index = (TopicList)presentation["INDEX"];
topics.Clear();
foreach (TopicListItem item in Index.Topics)
{
topics.Insert(item.TopicIndex, (Topic)presentation[item.ResourceKey]);
}
Index.Topics.Clear(); //Adding this will prevent the duplication
lv_topics.SelectedIndex = 0;
selectedIndex = 0;
In the code you posted, topics is not declared in LoadPresentation(), so naturally it will retain any prior values.
I know you said you tried topics = new ObservableCollection<TopicListItem>(); but please try again, and put that IN LoadPresentation():
public void LoadPresentation()
{
ObservableCollection<TopicListItem> topics = new ObservableCollection<TopicListItem>();
I would pass the filename:
public void LoadPresentation(string fileName)
I get that you may need to use topics outside LoadPresentation, but this is debugging. If you need topics outside, then return it:
public ObservableCollection<TopicListItem> LoadPresentation(string fileName)
If that does not fix it, I would put a try/catch block around XAML_file.Close(); to see whether something weird is going on.
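A minimal sketch of what those suggestions look like combined, assuming the same TopicList/Topic/TopicListItem types as the question and keeping only the collection-related parts of the original method (error handling and the validation of the other dictionary keys are omitted here):
public ObservableCollection<Topic> LoadPresentation(string fileName)
{
    var localTopics = new ObservableCollection<Topic>();
    using (var xamlFile = new System.IO.FileStream(fileName, System.IO.FileMode.Open))
    {
        var presentation = (ResourceDictionary)XamlReader.Load(xamlFile);
        var index = (TopicList)presentation["INDEX"];
        foreach (TopicListItem item in index.Topics)
        {
            localTopics.Insert(item.TopicIndex, (Topic)presentation[item.ResourceKey]);
        }
    }
    // Rebinding to a brand-new collection on each call avoids accumulating items from earlier loads.
    lv_topics.ItemsSource = localTopics;
    return localTopics;
}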

Adding AsParallel() call cause my code to break on writing a file

I'm building a console application that has to process a bunch of documents.
To keep it simple, the process is:
for each year between X and Y, query the DB to get a list of document references to process
for each of these references, process a local file
The process method is, I think, independent and can be parallelized as long as the input args are different:
private static bool ProcessDocument(
DocumentsDataset.DocumentsRow d,
string langCode
)
{
try
{
var htmFileName = d.UniqueDocRef.Trim() + langCode + ".htm";
var htmFullPath = Path.Combine(@"x:\path", htmFileName);
var missingHtmlFile = !File.Exists(htmFullPath);
if (!missingHtmlFile)
{
var html = File.ReadAllText(htmFullPath);
// ProcessHtml is quite long : it use a regex search for a list of reference
// which are other documents, then sends the result to a custom WS
ProcessHtml(ref html);
File.WriteAllText(htmFullPath, html);
}
return true;
}
catch (Exception exc)
{
Trace.TraceError("{0,8}Fail processing {1} : {2}","[FATAL]", d.UniqueDocRef, exc.ToString());
return false;
}
}
In order to enumerate my documents, I have this method:
private static IEnumerable<DocumentsDataset.DocumentsRow> EnumerateDocuments()
{
return Enumerable.Range(1990, 2020 - 1990).AsParallel().SelectMany(year => {
return Document.FindAll((short)year).Documents;
});
}
Document is a business class that wraps the retrieval of documents. The output of this method is a typed dataset (I'm returning the Documents table). The method takes a year, and I'm sure a document can't be returned by more than one year (year is actually part of the key).
Note the use of AsParallel() here, but I never had an issue with this one.
Now, my main method is :
var documents = EnumerateDocuments();
var result = documents.Select(d => {
bool success = true;
foreach (var langCode in new string[] { "-e","-f" })
{
success &= ProcessDocument(d, langCode);
}
return new {
d.UniqueDocRef,
success
};
});
using (var sw = File.CreateText("summary.csv"))
{
sw.WriteLine("Level;UniqueDocRef");
foreach (var item in result)
{
string level;
if (!item.success) level = "[ERROR]";
else level = "[OK]";
sw.WriteLine(
"{0};{1}",
level,
item.UniqueDocRef
);
//sw.WriteLine(item);
}
}
This code works as expected in this form. However, if I replace
var documents = EnumerateDocuments();
by
var documents = EnumerateDocuments().AsParallel();
it stops working, and I don't understand why.
The error appears exactly here (in my process method):
File.WriteAllText(htmFullPath, html);
It tells me that the file is already open in another program.
I don't understand what can cause my program not to work as expected. Since my documents variable is an IEnumerable returning unique values, why is my process method breaking?
Thanks for any advice.
[Edit] Code for retrieving document :
/// <summary>
/// Get all documents in data store
/// </summary>
public static DocumentsDS FindAll(short? year)
{
Database db = DatabaseFactory.CreateDatabase(connStringName); // MS Entlib
DbCommand cm = db.GetStoredProcCommand("Document_Select");
if (year.HasValue) db.AddInParameter(cm, "Year", DbType.Int16, year.Value);
string[] tableNames = { "Documents", "Years" };
DocumentsDS ds = new DocumentsDS();
db.LoadDataSet(cm, ds, tableNames);
return ds;
}
[Edit2] Possible source of my issue, thanks to mquander. If I write:
var test = EnumerateDocuments().AsParallel().Select(d => d.UniqueDocRef);
var testGr = test.GroupBy(d => d).Select(d => new { d.Key, Count = d.Count() }).Where(c=>c.Count>1);
var testLst = testGr.ToList();
Console.WriteLine(testLst.Where(x => x.Count == 1).Count());
Console.WriteLine(testLst.Where(x => x.Count > 1).Count());
I get this result:
0
1758
Removing the AsParallel returns the same output.
Conclusion: my EnumerateDocuments has something wrong and returns each document twice.
I have to dig into this, I think.
The source enumeration is probably the cause.
I suggest you have each task put the file data into a global queue, and have a separate thread take writing requests from the queue and do the actual writing.
Anyway, the performance of writing in parallel on a single disk is much worse than writing sequentially, because the disk needs to spin to seek the next writing location, so you are just bouncing the disk around between seeks. It's better to do the writes sequentially.
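A minimal sketch of that producer/consumer idea, assuming a BlockingCollection as the global queue and a single long-running writer task (the names here are illustrative, not from the question):
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

// Global queue of (path, content) write requests.
static readonly BlockingCollection<Tuple<string, string>> WriteQueue =
    new BlockingCollection<Tuple<string, string>>();

// Single consumer: all disk writes happen sequentially on this task.
static Task StartWriter()
{
    return Task.Factory.StartNew(() =>
    {
        foreach (var request in WriteQueue.GetConsumingEnumerable())
            File.WriteAllText(request.Item1, request.Item2);
    }, TaskCreationOptions.LongRunning);
}

// In ProcessDocument, instead of File.WriteAllText(htmFullPath, html):
//     WriteQueue.Add(Tuple.Create(htmFullPath, html));
// When all documents are processed:
//     WriteQueue.CompleteAdding();
//     writerTask.Wait();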
Is Document.FindAll((short)year).Documents threadsafe? Because the difference between the first and the second version is that in the second (broken) version, this call is running multiple times concurrently. That could plausibly be the cause of the issue.
Sounds like you're trying to write to the same file. Only one thread/program can write to a file at a given time, so you can't parallelize that part.
If you're reading from the same file, then you need to open the file with read-only permissions so as not to put a write lock on it.
The simplest way to fix the issue is to place a lock around your File.WriteAllText, assuming the writing is fast and it's worth parallelizing the rest of the code.
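A sketch of that last suggestion, assuming a single static lock object shared by all workers (_writeLock is an illustrative name):
private static readonly object _writeLock = new object();

// Inside ProcessDocument, replacing the unguarded write:
lock (_writeLock)
{
    // Only one thread writes at a time; the reads and ProcessHtml above can still run in parallel.
    File.WriteAllText(htmFullPath, html);
}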
