C# Reading XPS causing memory leak

I have a simple program that just reads an XPS file. I've read the following post, and it did solve part of the issue:
Opening XPS document in .Net causes a memory leak
class Program
{
static int intCounter = 0;
static object _intLock = new object();
static int getInt()
{
lock (_intLock)
{
return intCounter++;
}
}
static void Main(string[] args)
{
Console.ReadLine();
for (int i = 0; i < 100; i++)
{
Thread t = new Thread(() =>
{
var ogXps = File.ReadAllBytes(@"C:\Users\Nathan\Desktop\Objective.xps");
readXps(ogXps);
Console.WriteLine(getInt().ToString());
});
t.SetApartmentState(ApartmentState.STA);
t.Start();
Thread.Sleep(50);
}
Console.ReadLine();
}
static void readXps(byte[] originalXPS)
{
try
{
MemoryStream inputStream = new MemoryStream(originalXPS);
string memoryStreamUri = "memorystream://" + Path.GetFileName(Guid.NewGuid().ToString() + ".xps");
Uri packageUri = new Uri(memoryStreamUri);
Package oldPackage = Package.Open(inputStream);
PackageStore.AddPackage(packageUri, oldPackage);
XpsDocument xpsOld = new XpsDocument(oldPackage, CompressionOption.Normal, memoryStreamUri);
FixedDocumentSequence seqOld = xpsOld.GetFixedDocumentSequence();
//The following did solve some of the memory issue
//-----------------------------------------------
var docPager = seqOld.DocumentPaginator;
docPager.ComputePageCount();
for (int i = 0; i < docPager.PageCount; i++)
{
FixedPage fp = docPager.GetPage(i).Visual as FixedPage;
fp.UpdateLayout();
}
seqOld = null;
//-----------------------------------------------
xpsOld.Close();
oldPackage.Close();
oldPackage = null;
inputStream.Close();
inputStream.Dispose();
inputStream = null;
PackageStore.RemovePackage(packageUri);
}
catch (Exception e)
{
}
}
}
^ The program reads the XPS file a hundred times
^ (Screenshot) Memory profile before applying the fix
^ (Screenshot) Memory profile after applying the fix
So the fix suggested in the post did eliminate some objects. However, I found that objects such as Dispatcher, ContextLayoutManager and MediaContext still exist in memory, and their count is exactly 100. Is this normal behavior or a memory leak? How do I fix this? Thanks.
25/7/2018 Update
Adding the line Dispatcher.CurrentDispatcher.InvokeShutdown(); did get rid of the Dispatcher, ContextLayoutManager and MediaContext objects. I don't know if this is the ideal way to fix it.
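For reference, a minimal sketch of where that shutdown call would go, assuming the same readXps helper and file path as above. Shutting down the thread's Dispatcher releases the per-thread WPF singletons (Dispatcher, ContextLayoutManager, MediaContext) so they become collectible:

```csharp
Thread t = new Thread(() =>
{
    var ogXps = File.ReadAllBytes(@"C:\Users\Nathan\Desktop\Objective.xps");
    readXps(ogXps);
    // Shut down the Dispatcher that WPF implicitly created for this STA
    // thread; without this, the Dispatcher (and the ContextLayoutManager
    // and MediaContext hanging off it) stays rooted for the process lifetime.
    Dispatcher.CurrentDispatcher.InvokeShutdown();
});
t.SetApartmentState(ApartmentState.STA);
t.Start();
```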

It looks like the classes you're left with come from the XpsDocument, which implements IDisposable, but you never call Dispose. A few of the other classes you use implement that same interface too. As a rule of thumb, when a class implements IDisposable, either wrap the instance in a using statement so its Dispose method is guaranteed to be called, or call Dispose yourself.
An improved version of your readXps method would look like this:
static void readXps(byte[] originalXPS)
{
try
{
using (MemoryStream inputStream = new MemoryStream(originalXPS))
{
string memoryStreamUri = "memorystream://" + Path.GetFileName(Guid.NewGuid().ToString() + ".xps");
Uri packageUri = new Uri(memoryStreamUri);
using(Package oldPackage = Package.Open(inputStream))
{
PackageStore.AddPackage(packageUri, oldPackage);
using(XpsDocument xpsOld = new XpsDocument(oldPackage, CompressionOption.Normal, memoryStreamUri))
{
FixedDocumentSequence seqOld = xpsOld.GetFixedDocumentSequence();
//The following did solve some of the memory issue
//-----------------------------------------------
var docPager = seqOld.DocumentPaginator;
docPager.ComputePageCount();
for (int i = 0; i < docPager.PageCount; i++)
{
FixedPage fp = docPager.GetPage(i).Visual as FixedPage;
fp.UpdateLayout();
}
seqOld = null;
//-----------------------------------------------
} // disposes XpsDocument
} // dispose Package
PackageStore.RemovePackage(packageUri);
} // dispose MemoryStream
}
catch (Exception e)
{
// really do something here, at least:
Debug.WriteLine(e);
}
}
This should at least clean up most of the objects. I'm not sure whether you'll see the effect in your profiling, as that depends on whether the objects are actually collected during your analysis. Profiling a debug build might give unanticipated results.
As the remaining object instances seem to be bound to the System.Windows.Threading.Dispatcher, I suggest you keep a reference to your threads (although at this point you might consider looking into Tasks) and, once all threads are done, call the static ExitAllFrames method on the Dispatcher.
Your Main method will then look like this:
Console.ReadLine();
Thread[] all = new Thread[100];
for (int i = 0; i < all.Length; i++)
{
var t = new Thread(() =>
{
var ogXps = File.ReadAllBytes(@"C:\Users\Nathan\Desktop\Objective.xps");
readXps(ogXps);
Console.WriteLine(getInt().ToString());
});
t.SetApartmentState(ApartmentState.STA);
t.Start();
all[i] = t; // keep reference
Thread.Sleep(50);
}
foreach(var t in all) t.Join(); // https://stackoverflow.com/questions/263116/c-waiting-for-all-threads-to-complete
all = null; // meh
Dispatcher.ExitAllFrames(); // https://stackoverflow.com/a/41953265/578411
Console.ReadLine();

Related

how do you release a thread when it finishes

I have an application that recursively walks a very large (6 TB) folder. To speed things up, I create a new thread for each recursion. At one point my thread count was in excess of 12,000. As the task gets closer to completion, my internal thread count drops, but in Task Manager the thread count keeps climbing. I think that indicates that the threads are not being garbage collected when they finish.
At one point, my internal thread count showed 5575 threads while the Windows resource monitor showed the task using 33,023 threads.
static void Main(string[] args)
{
string folderName = Properties.Settings.Default.rootFolder;
ParameterizedThreadStart needleThreader = new ParameterizedThreadStart(needle);
Thread eye = new Thread(needleThreader);
threadcount = 1;
eye.Start(folderName);
}
static void needle(object objFolderName)
{
string folderName = (string)objFolderName;
FolderData folderData = getFolderData(folderName);
addToDB(folderData);
//since the above statement gets executed (my database table
//gets populated), I think the thread should get garbage collected
//here, but the windows thread count keeps climbing.
}
// recursive routine to walk directory structure and create annotated treeview
private static FolderData getFolderData(string folderName)
{
//Console.WriteLine(folderName);
long folderSize = 0;
string[] directories = new string[] { };
string[] files = new string[] { };
try
{
directories = Directory.GetDirectories(folderName);
}
catch { };
try
{
files = Directory.GetFiles(folderName);
}
catch { }
for (int f = 0; f < files.Length; f++)
{
try
{
folderSize += new FileInfo(files[f]).Length;
}
catch { } //cannot access file so skip;
}
FolderData folderData = new FolderData(folderName, directories.Length, files.Length, folderSize);
List<String> directoryList = directories.ToList<String>();
directoryList.Sort();
for (int d = 0; d < directoryList.Count; d++)
{
Console.Write(" " + threadcount + " ");
//threadcount is my internal counter. it increments here
//where i start a new thread and decrements when the thread ends
//see below
threadcount++;
ParameterizedThreadStart needleThreader = new ParameterizedThreadStart(needle);
Thread eye = new Thread(needleThreader);
eye.Start(directoryList[d]);
}
//thread is finished, so decrement
threadcount--;
return folderData;
}
Thanks to matt-dot-net's suggestion, I spent a few hours researching the TPL (Task Parallel Library), and it was well worth it.
Here is my new code. It works blazingly fast, does not peg the CPU (uses 41% which is a lot but still plays nice in the sandbox), uses only about 160MB of memory (instead of nearly all of the 4GB available) and uses a maximum of about 70 threads.
You'd almost think I knew what I was doing. But the .NET TPL handles all the hard stuff, like determining the correct number of threads and making sure they clean up after themselves.
class Program
{
static object padlock = new object();
static void Main(string[] args)
{
OracleConnection ora = new OracleConnection(Properties.Settings.Default.ora);
ora.Open();
new OracleCommand("DELETE FROM SCRPT_APP.S_DRIVE_FOLDERS", ora).ExecuteNonQuery();
ora.Close();
string folderName = Properties.Settings.Default.rootFolder;
Task processRoot = new Task((value) =>
{
getFolderData(value);
}, folderName);
//wait is like join; it waits for this asynchronous task to finish.
processRoot.Start();
processRoot.Wait();
}
// recursive routine to walk directory structure and create annotated treeview
private static void getFolderData(object objFolderName)
{
string folderName = (string)objFolderName;
Console.WriteLine(folderName);
long folderSize = 0;
string[] directories = new string[] { };
string[] files = new string[] { };
try
{
directories = Directory.GetDirectories(folderName);
}
catch { };
try
{
files = Directory.GetFiles(folderName);
}
catch { }
for (int f = 0; f < files.Length; f++)
{
try
{
folderSize += new FileInfo(files[f]).Length;
}
catch { } //cannot access file so skip;
}
FolderData folderData = new FolderData(folderName, directories.Length, files.Length, folderSize);
List<String> directoryList = directories.ToList<String>();
directoryList.Sort();
//create a task for each subdirectory
List<Task> dirTasks = new List<Task>();
for (int d = 0; d < directoryList.Count; d++)
{
dirTasks.Add(new Task((value) =>
{
getFolderData(value);
}, directoryList[d]));
}
//start all tasks
foreach (Task task in dirTasks)
{
task.Start();
}
//wait for them to finish
Task.WaitAll(dirTasks.ToArray());
addToDB(folderData);
}
private static void addToDB(FolderData folderData)
{
lock (padlock)
{
OracleConnection ora = new OracleConnection(Properties.Settings.Default.ora);
ora.Open();
OracleCommand addFolderData = new OracleCommand(
"INSERT INTO FOLDERS " +
"(PATH, FOLDERS, FILES, SPACE_USED) " +
"VALUES " +
"(:PATH, :FOLDERS, :FILES, :SPACE_USED) ",
ora);
addFolderData.BindByName = true;
addFolderData.Parameters.Add(":PATH", OracleDbType.Varchar2);
addFolderData.Parameters.Add(":FOLDERS", OracleDbType.Int32);
addFolderData.Parameters.Add(":FILES", OracleDbType.Int32);
addFolderData.Parameters.Add(":SPACE_USED", OracleDbType.Int64);
addFolderData.Prepare();
addFolderData.Parameters[":PATH"].Value = folderData.FolderName;
addFolderData.Parameters[":FOLDERS"].Value = folderData.FolderCount;
addFolderData.Parameters[":FILES"].Value = folderData.FileCount;
addFolderData.Parameters[":SPACE_USED"].Value = folderData.Size;
addFolderData.ExecuteNonQuery();
ora.Close();
}
}
}
}

Problem with passing constantly-read serial buffer data to another class

Long story short:
I have a class named Scope. This class contains all the logic for scope operations, etc. It also starts a background thread that constantly reads serial port data (in my case events were unreliable):
Thread BackgroundReader = new Thread(ReadBuffer);
BackgroundReader.IsBackground = true;
BackgroundReader.Start();
private void ReadBuffer()
{
SerialPort.DiscardInBuffer();
while (!_stopCapture)
{
int bufferSize = SerialPort.BytesToRead;
byte[] buffer = new byte[bufferSize];
if(bufferSize > 5)
{
SerialPort.Read(buffer, 0, bufferSize);
Port_DataReceivedEvent(buffer, null);
}
Thread.Sleep(_readDelay);
}
CurrentBuffer = null;
}
In the Scope class there is a public property named Buffer:
public byte[] Buffer
{
get
{
return CurrentBuffer;
}
}
And here is the event handler fired when new data is read:
private void Port_DataReceivedEvent(object sender, EventArgs e)
{
//populate buffer
Info(sender, null);
CurrentBuffer = ((byte[])sender);
foreach(byte data in CurrentBuffer)
{
DataBuffer.Enqueue(data);
}
if (DataBuffer.Count() > _recordLength)
{
GenerateFrame(DataBuffer.ToArray());
DataBuffer.Clear();
}
}
To make the code more manageable, I split it into several classes. One of these classes searches for a specific data pattern in the current stream and creates a specific object from that data. The code works by sending a specific command to the serial port and expecting a return frame. If the response is not received or is not OK, the send is performed again and again until the correct response arrives or a timeout occurs. The response is expected to be in the current buffer. The strange string manipulation is for debug purposes.
public class GetAcknowledgedFrame
{
byte[] WritedData;
string lastEx;
string stringData;
public DataFrame WriteAcknowledged(Type SendType, Type ReturnType, JyeScope scope)
{
var stopwatch = new Stopwatch();
stopwatch.Restart();
while (stopwatch.ElapsedMilliseconds < scope.TimeoutTime)
{
try
{
if (SendType == typeof(GetParameters))
{
WriteFrame(new ScopeControlFrames.GetParameters(), scope.SerialPort);
}
else if(SendType == typeof(GetConfig))
{
WriteFrame(new ScopeControlFrames.GetConfig(), scope.SerialPort);
}
else if (SendType == typeof(EnterUSBScopeMode))
{
WriteFrame(new ScopeControlFrames.EnterUSBScopeMode(), scope.SerialPort);
}
return ReturnFrame(ReturnType, scope.Buffer, scope.TimeoutTime);
}
catch (InvalidDataFrameException ex)
{
lastEx = ex.Message;
System.Threading.Thread.Sleep(10);
}
}
stringData = "";
foreach (var data in scope.Buffer)
{
stringData += data + ",";
}
stringData = stringData.Remove(stringData.Length - 1); // Remove returns a new string; the result must be assigned
throw new TimeoutException($"Timeout while waiting for frame acknowledge: " + SendType.ToString() + ", " + ReturnType.ToString() + Environment.NewLine+ "Add. err: "+lastEx);
}
private DataFrame ReturnFrame(Type FrameType, byte[] buffer, int timeoutTime)
{
if (FrameType == typeof(DataFrames.DSO068.CurrConfigDataFrame))
{
DataFrames.DSO068.CurrConfigDataFrame CurrConfig = new DataFrames.DSO068.CurrConfigDataFrame(buffer);
return CurrConfig;
}
else if (FrameType == typeof(DataFrames.DSO112.CurrConfigDataFrame))
{
DataFrames.DSO112.CurrConfigDataFrame CurrParam = new DataFrames.DSO112.CurrConfigDataFrame(buffer);
return CurrParam;
}
else if (FrameType == typeof(CurrParamDataFrame))
{
CurrParamDataFrame CurrParam = new CurrParamDataFrame(buffer);
return CurrParam;
}
else if (FrameType == typeof(DataBlockDataFrame))
{
DataBlockDataFrame CurrData = new DataBlockDataFrame(buffer);
return CurrData;
}
else if (FrameType == typeof(DataSampleDataFrame))
{
DataSampleDataFrame CurrData = new DataSampleDataFrame(buffer);
return CurrData;
}
else if (FrameType == typeof(ScopeControlFrames.ScopeReady))
{
ScopeControlFrames.ScopeReady ready = new ScopeControlFrames.ScopeReady(buffer);
return ready;
}
else
{
throw new InvalidOperationException("Wrong object type");
}
}
private bool WriteFrame(DataFrame frame, IStreamResource port)
{
WritedData = frame.Data;
port.Write(frame.Data, 0, frame.Data.Count());
return true;
}
}
From the main class (and main thread) I call a method of this class, for example:
var Ready = (ScopeControlFrames.ScopeReady)new GetAcknowledgedFrame().WriteAcknowledged
(typeof(ScopeControlFrames.EnterUSBScopeMode), typeof(ScopeControlFrames.ScopeReady), this);
The problem occurs when I pass the "this" object (which has a thread working in the background) to my helper class. It seems the helper class does not see the changing data in this object. The problem started when I separated the helper class code from the main class.
My questions:
- I know that objects are passed by reference, so I thought that when an object dynamically changes its state (in this case, the data buffer changing as new data is received), all classes holding a reference to that object would also see those changes. Maybe I'm missing something?
- I tried passing the array by ref (arrays are also reference types), but that did not help either. Maybe I'm missing something?
I also tried changing the helper class to static; it did not help.
Many thanks for any help.
The code below:
Info(sender, null);
CurrentBuffer = ((byte[])sender);
is storing a new reference in the variable CurrentBuffer: the field is re-pointed at a different array object. Any other code that copied the value of CurrentBuffer before this line runs still holds a reference to the old array, and will not see the new value when the field is reassigned.
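A minimal, self-contained illustration of that point (the names here are invented for the demo, not taken from the question's code):

```csharp
using System;

class ReferenceDemo
{
    static byte[] CurrentBuffer = new byte[] { 1, 2, 3 };

    static void Main()
    {
        // 'snapshot' copies the reference, so it points at the ORIGINAL array...
        byte[] snapshot = CurrentBuffer;

        // ...and when the field is re-pointed at a NEW array,
        // the snapshot still sees the old one.
        CurrentBuffer = new byte[] { 9, 9, 9 };

        Console.WriteLine(snapshot[0]);      // prints 1 (old data)
        Console.WriteLine(CurrentBuffer[0]); // prints 9 (new data)
    }
}
```

Mutating the array in place (e.g. CurrentBuffer[0] = 9) would be visible through both references; only reassignment of the variable breaks the link.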

Capture Slow Output from Method

I have a slow running utility method that logs one line of output at a time. I need to be able to output each of those lines and then read them from other locations in code. I have attempted using Tasks and Streams similar to the code below:
public static Task SlowOutput(Stream output)
{
Task result = new Task(() =>
{
using(StreamWriter sw = new StreamWriter(output))
{
for(var i = 0; i < int.MaxValue; i++)
{
sw.WriteLine(i.ToString());
System.Threading.Thread.Sleep(1000);
}
}
});
result.Start(); // the task must be started, or IsCompleted will never become true
return result;
}
And then called like this:
MemoryStream ms = new MemoryStream();
var t = SlowOutput(ms);
using (var sr = new StreamReader(ms))
{
while (!t.IsCompleted)
{
Console.WriteLine(sr.ReadLine());
}
}
But of course, sr.ReadLine() is always empty because as soon as the method's sw.WriteLine() is called, it changes the position of the underlying stream to the end.
What I'm trying to do is pipe the output of the method, perhaps by queueing up the lines it writes and then consuming them from outside the method. Streams don't seem to be the way to go.
Is there a generally accepted way to do this?
What I would do is switch to a BlockingCollection<string>.
public static Task SlowOutput(BlockingCollection<string> output)
{
return Task.Run(() =>
{
for (var i = 0; i < int.MaxValue; i++)
{
output.Add(i.ToString());
System.Threading.Thread.Sleep(1000);
}
output.CompleteAdding();
});
}
consumed by
var bc = new BlockingCollection<string>();
SlowOutput(bc);
// GetConsumingEnumerable blocks until an item is added, and exits the loop
// once CompleteAdding() has been called and the collection is drained.
foreach (var line in bc.GetConsumingEnumerable())
{
Console.WriteLine(line);
}

Trying to deserialize more than 1 object at the same time

I'm trying to send some objects from a server to a client.
My problem is that when I'm sending only one object, everything works correctly. But the moment I add another object, an exception is thrown: "binary stream does not contain a valid BinaryHeader" or "No map for object (random number)".
My thought is that the deserialization does not understand where the stream starts/ends, and I hoped you guys could help me out here.
Here's my deserialization code:
public void Listen()
{
try
{
bool offline = true;
Dispatcher.Invoke(System.Windows.Threading.DispatcherPriority.Normal,
new Action(() => offline = Offline));
while (!offline)
{
TcpObject tcpObject = new TcpObject();
IFormatter formatter = new BinaryFormatter();
tcpObject = (TcpObject)formatter.Deserialize(serverStream);
if (tcpObject.Command == Command.Transfer)
{
SentAntenna sentAntenna = (SentAntenna)tcpObject.Object;
int idx = 0;
foreach (string name in SharedProperties.AntennaNames)
{
if (name == sentAntenna.Name)
break;
idx++;
}
if (idx < 9)
{
PointCollection pointCollection = new PointCollection();
foreach (Frequency f in sentAntenna.Frequencies)
pointCollection.Add(new Point(f.Channel, f.Intensity));
SharedProperties.AntennaPoints[idx] = pointCollection;
}
}
}
}
catch (Exception ex)
{
MessageBox.Show(ex.Message); // raise an event
}
}
Serialization code:
case Command.Transfer:
Console.WriteLine("Transfering");
Thread transfer = new Thread(new ThreadStart(delegate
{
try
{
string aName = tcpObject.Object.ToString();
int indx = 0;
foreach (string name in names)
{
if (name == aName)
break;
indx++;
}
if (indx < 9)
{
while (true) // need to kill when the father thread terminates
{
if (antennas[indx].Frequencies != null)
{
lock (antennas[indx].Frequencies)
{
TcpObject sendTcpObject = new TcpObject();
sendTcpObject.Command = Command.Transfer;
SentAntenna sa = new SentAntenna(antennas[indx].Frequencies, aName);
sendTcpObject.Object = sa;
formatter.Serialize(networkStream, sendTcpObject);
}
}
}
}
}
catch (Exception ex) { Console.WriteLine(ex); }
}));
transfer.Start();
break;
Interesting. There's nothing particularly odd in your serialization code, and I've seen people use vanilla concatenation for multiple objects in the past, although I've always advised against it, as BinaryFormatter does not explicitly claim this scenario is OK. But if it isn't, the only thing I can suggest is to implement your own framing; your write code becomes:
serialize to an empty MemoryStream
note the length and write the length to the NetworkStream, for example as a simple fixed-width 32-bit network-byte-order integer
write the payload from the MemoryStream to the NetworkStream
rinse, repeat
And the read code becomes:
read exactly 4 bytes and compute the length
buffer that many bytes into a MemoryStream
deserialize from the MemoryStream
(Noting in both cases to set the MemoryStream's position back to 0 between write and read)
You can also implement a Stream subclass that caps the length if you want to avoid a buffer when reading, but that is more complex.
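The write/read steps above can be sketched roughly as follows. The Framing class and its method names are made up for illustration, and a plain Stream stands in for your NetworkStream:

```csharp
using System;
using System.IO;
using System.Net;

static class Framing
{
    // Write a frame: fixed-width 32-bit length prefix in network byte order,
    // then the payload bytes.
    public static void WriteFrame(Stream stream, byte[] payload)
    {
        byte[] prefix = BitConverter.GetBytes(IPAddress.HostToNetworkOrder(payload.Length));
        stream.Write(prefix, 0, 4);
        stream.Write(payload, 0, payload.Length);
    }

    // Read a frame: exactly 4 prefix bytes, then exactly that many payload bytes.
    public static byte[] ReadFrame(Stream stream)
    {
        byte[] prefix = ReadExactly(stream, 4);
        int length = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(prefix, 0));
        return ReadExactly(stream, length);
    }

    // Stream.Read may return fewer bytes than requested; loop until complete.
    static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }
}
```

On the write side you would serialize to an empty MemoryStream first and pass ms.ToArray() as the payload; on the read side you wrap the bytes returned by ReadFrame in a new MemoryStream and deserialize from that.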
Apparently I came up with a really simple solution. I just made sure only one thread is allowed to transfer data at a time, so I changed this line of code:
formatter.Serialize(networkStream, sendTcpObject);
to these lines of code:
if (!transfering) // making sure only one thread is transferring data
{
transfering = true;
formatter.Serialize(networkStream, sendTcpObject);
transfering = false;
}

What's the best approach to achieve uniqueness in an object shared by multiple threads?

I am interested in timing my calls to the database and other functions to build some metrics for my application's performance. I used a Stopwatch and a Metrics object, but it doesn't seem to give consistently correct values. Sometimes the elapsed time for calling a function is exactly the same for all the calls, which is unrealistic...
I have discovered that the problem is caused by the Metrics object's property values. The values of one Metrics object get overwritten when other instances of Metrics created by other threads are assigned values. It seems the property values are shared by reference, even though each thread creates a new instance.
What's the best approach to achieve uniqueness in an object shared by multiple threads?
Code below:
private Metrics Metrics;
private Stopwatch Stopwatch;
private int DegreeOfParallelism { get { return Convert.ToInt32(ConfigurationManager.AppSettings["DegreeOfParallelism"].ToString()); } }
var lOptions = new ParallelOptions() { MaxDegreeOfParallelism = DegreeOfParallelism };
Parallel.ForEach(RequestBag, lOptions, (lItem, loopState) =>
{
if (!string.IsNullOrEmpty(lItem.XmlRequest))
{
try
{
Metrics = new Metrics();
Stopwatch = new Stopwatch();
Stopwatch.Start();
ObjRef = new Object();
lItem.XmlRequest = ObjRef.GetDecision(Username, Password);
Stopwatch.Stop();
Metrics.ElapsedTime = string.Format("{0:0.00}", Stopwatch.Elapsed.TotalSeconds);
Stopwatch.Restart();
if (!string.IsNullOrEmpty(DBConnectionString))
{
DataAccess = new DataAccess2(DBConnectionString);
DataAccess.WriteToDB(lItem.XmlRequest);
}
Stopwatch.Stop();
Metrics.DbFuncCallTime = string.Format("{0:0.00}", Stopwatch.Elapsed.TotalSeconds);
}
catch (Exception pEx)
{
KeepLog(pEx);
Metrics.HasFailed = true;
}
finally
{
ProcessedIdsBag.Add(lItem.OrderId);
Metrics.ProcessedOrderId = lItem.OrderId;
Metrics.DegreeOfParallelism = DegreeOfParallelism;
Metrics.TotalNumOfOrders = NumberOfOrders;
Metrics.TotalNumOfOrdersProcessed = ProcessedIdsBag.Count;
pBackgroundWorker.ReportProgress(Metrics.GetProgressPercentage(NumberOfOrders, ProcessedIdsBag.Count), Metrics);
RequestBag.TryTake(out lItem);
}
}
});
Any help will be very much appreciated.
Thanks,
R
What it seems you want to do is create a Metrics object for each iteration and then aggregate them at the end:
private ConcurrentBag<Metrics> allMetrics = new ConcurrentBag<Metrics>();
private int DegreeOfParallelism { get { return Convert.ToInt32(ConfigurationManager.AppSettings["DegreeOfParallelism"].ToString()); } }
var lOptions = new ParallelOptions() { MaxDegreeOfParallelism = DegreeOfParallelism };
Parallel.ForEach(RequestBag, lOptions, (lItem, loopState) =>
{
if (!string.IsNullOrEmpty(lItem.XmlRequest))
{
try
{
var Metrics = new Metrics();
var Stopwatch = new Stopwatch();
Stopwatch.Start();
ObjRef = new Object();
lItem.XmlRequest = ObjRef.GetDecision(Username, Password);
Stopwatch.Stop();
Metrics.ElapsedTime = string.Format("{0:0.00}", Stopwatch.Elapsed.TotalSeconds);
Stopwatch.Restart();
if (!string.IsNullOrEmpty(DBConnectionString))
{
DataAccess = new DataAccess2(DBConnectionString);
DataAccess.WriteToDB(lItem.XmlRequest);
}
Stopwatch.Stop();
Metrics.DbFuncCallTime = string.Format("{0:0.00}", Stopwatch.Elapsed.TotalSeconds);
}
catch (Exception pEx)
{
KeepLog(pEx);
Metrics.HasFailed = true;
}
finally
{
ProcessedIdsBag.Add(lItem.OrderId);
Metrics.ProcessedOrderId = lItem.OrderId;
Metrics.DegreeOfParallelism = DegreeOfParallelism;
Metrics.TotalNumOfOrders = NumberOfOrders;
Metrics.TotalNumOfOrdersProcessed = ProcessedIdsBag.Count;
pBackgroundWorker.ReportProgress(Metrics.GetProgressPercentage(NumberOfOrders, ProcessedIdsBag.Count), Metrics);
RequestBag.TryTake(out lItem);
allMetrics.Add(Metrics);
}
}
});
// Aggregate everything in AllMetrics here
You need to change the scope of your Stopwatch and Metrics variables.
Currently, each thread shares the same Metrics variable. As soon as a thread enters the try block, it creates a new instance of Metrics (correctly) but puts it in a shared variable (incorrectly). All other threads will see that new instance when reading the shared variable, until the next thread comes along and starts the whole process over.
Move
private Metrics Metrics;
private Stopwatch Stopwatch;
to just inside your loop:
Parallel.ForEach(RequestBag, lOptions, (lItem, loopState) =>
{
Metrics Metrics; // locals inside the lambda take no access modifier
Stopwatch Stopwatch;
...
This will give each iteration of the loop its own variable in which to store its own instance of the object.
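To see why the scope matters, here is a small self-contained sketch (hypothetical names, not the poster's code) contrasting a shared field with a per-iteration local inside a parallel loop:

```csharp
using System;
using System.Threading.Tasks;

class ScopeDemo
{
    // The bug in miniature: one field shared by every iteration.
    static int Shared;

    static void Main()
    {
        int sharedSurvived = 0, localSurvived = 0;
        object gate = new object();

        Parallel.For(0, 1000, i =>
        {
            Shared = i;    // shared: another thread may overwrite this at any time
            int local = i; // local: private to this iteration
            for (int spin = 0; spin < 1000; spin++) { } // simulate some work
            lock (gate)
            {
                if (Shared == i) sharedSurvived++;
                if (local == i) localSurvived++;
            }
        });

        Console.WriteLine("shared field kept its value: " + sharedSurvived + "/1000");
        // The local always keeps its value: 1000/1000.
        Console.WriteLine("local variable kept its value: " + localSurvived + "/1000");
    }
}
```

On a multi-core machine the shared count typically falls below 1000 because other iterations overwrite the field between the write and the read, which is exactly what happened to the Metrics instances above.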
