Overhead in returning from a method - C#

I have a method A() that processes some queries. This method, from its opening bracket to just before its return statement, takes roughly 70 ms. About 50% of that comes from opening the connection, around 20% from the actual queries, 5-10% from some memory access, and the rest is (probably) spent disposing the connection, commands, and reader.
Although this big chunk of time used for handling the connection is annoying enough, what bothers me more is that when I call A() from method B():
B()
{
    var timer = Stopwatch.StartNew();
    A();
    timer.Stop(); // elapsed: +/- 250ms
    Debugger.Break();
}
Another 180ms of lag gets added and I can't seem to figure out why. I already tried having A return null, which changes nothing.
The only disk I/O and networking happens in A. I thought the transfer from disk and network to local memory would have completed inside A, and that a call to A from B should therefore not be affected by it, but apparently this is not the case. Is this network latency I'm experiencing here? If so, then why does this also happen when I just let B return null?
I have no other explanation at the moment...
Everything resides in the same assembly.
Measuring without a debugger attached changes nothing.
Returning null immediately shows 0 ms; returning null instead of the normal return value changes nothing (but reinforces the idea that this is somehow related to latency).
A is roughly implemented as follows, like any simple method accessing a database. It's contrived, but shows the basic idea and flow:
A()
{
    var totalTimer = Stopwatch.StartNew();
    var stuff = new Stuffholder();

    using (connection)
    {
        using (command)
        {
            using (reader)
            {
                // fill 'stuff'
            }
        }
    }

    totalTimer.Stop(); // elapsed: +/- 70ms
    return stuff;
}
Any ideas?

The overhead that you are seeing is due to just-in-time compilation. The first time method B() is called, method A() has not yet been compiled to native code (it exists in the DLL only as IL), so you see a lag while the JIT compiler translates A() into machine code.
When profiling a method call, it is important to call the method a large number of times and take the average time (discarding the first call if you like, although over enough calls the compilation overhead should become insignificant).
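For example, a minimal timing harness along these lines (MethodUnderTest is a stand-in for A()) performs one warm-up call to absorb the JIT cost, then averages over many iterations:
using System;
using System.Diagnostics;

class Benchmark
{
    static void Main()
    {
        MethodUnderTest(); // warm-up call: pays the JIT cost once, up front

        const int iterations = 100;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            MethodUnderTest();
        }
        sw.Stop();

        Console.WriteLine("Average: {0:F2} ms", sw.Elapsed.TotalMilliseconds / iterations);
    }

    static void MethodUnderTest()
    {
        // the code being profiled, e.g. the body of A()
    }
}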

Since you have database access in A(), you might be suffering from network name-resolution issues, so your first call takes a while longer to execute.


Why does my code not speed up with a multithreaded Parallel.For loop?

I tried to transform a simple sequential loop into a parallel loop with the System.Threading.Tasks library.
The code compiles and returns correct results, but it does not save any computational cost; on the contrary, it takes longer.
EDIT: Sorry guys, I probably oversimplified the question and made some errors doing that.
To add some information: I am running the code on an i7-4700QM, and it is referenced in a Grasshopper script.
Here is the actual code. I also switched to non-thread-local variables.
public static class LineNet
{
    public static List<Ray> SolveCpu(List<Speaker> sources, List<Receiver> targets, List<Panel> surfaces)
    {
        ConcurrentBag<Ray> rays = new ConcurrentBag<Ray>();
        for (int i = 0; i < sources.Count; i++)
        {
            Parallel.For(
                0,
                targets.Count,
                j =>
                {
                    Line path = new Line(sources[i].Position, targets[j].Position);
                    Ray ray = new Ray(path, i, j);
                    if (Utils.CheckObstacles(ray, surfaces))
                    {
                        rays.Add(ray);
                    }
                }
            );
        }
    }
}
The Grasshopper implementation just collects sources, targets, and surfaces, calls the Solve method, and returns rays.
I understand that dispatching workload to threads is expensive, but is it really that expensive?
Or is the ConcurrentBag just preventing parallel calculation?
Plus, my classes are immutable (?), but if I use a plain List the kernel aborts the operation and throws an exception. Can someone tell me why?
Without a good Minimal, Complete, and Verifiable code example that reliably reproduces the problem, it is not possible to provide a definitive answer. The code you posted does not even appear to be an excerpt of real code, because the type declared as the return type of the method isn't the same as the value actually returned by the return statement.
However, the code you posted certainly does not seem like a good use of Parallel.For(). Your Line constructor would have to be fairly expensive to justify parallelizing the task of creating the items. And to be clear, that's the only possible win here.
At the end, you still need to aggregate all of the Line instances that you created into a single list, so all those intermediate lists created for the Parallel.For() tasks are just pure overhead. And the aggregation is necessarily serialized (i.e. only one thread at a time can be adding an item to the result collection), and in the worst way (each thread only gets to add a single item before it gives up the lock and another thread has a chance to take it).
Frankly, you'd be better off storing each local List<T> in a collection, and then aggregating them all at once in the main thread after Parallel.For() returns. Not that that would be likely to make the code perform better than a straight-up non-parallelized implementation. But at least it would be less likely to be worse. :)
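A sketch of that suggestion, using the thread-local overload of Parallel.For(); Ray, Line, Utils, and the outer loop variable i are assumed from the question's code:
var perTaskLists = new ConcurrentBag<List<Ray>>();

Parallel.For(
    0,
    targets.Count,
    () => new List<Ray>(),                      // localInit: one private list per task
    (j, state, local) =>
    {
        Line path = new Line(sources[i].Position, targets[j].Position);
        Ray ray = new Ray(path, i, j);
        if (Utils.CheckObstacles(ray, surfaces))
        {
            local.Add(ray);
        }
        return local;
    },
    local => perTaskLists.Add(local));          // localFinally: runs once per task

// Single-threaded aggregation, no per-item contention.
var rays = new List<Ray>();
foreach (var list in perTaskLists)
    rays.AddRange(list);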
The bottom line is that you don't seem to have a workload that could benefit from parallelization. If you think otherwise, you'll need to explain the basis for that thought in a clearer, more detailed way.
if I use a common List the kernel aborts the operation and throws an exception, is someone able to tell why?
You're already using (it appears) List<T> as the local data for each task, and indeed that should be fine, as tasks don't share their local data.
But if you are asking why you get an exception if you try to use List<T> instead of ConcurrentBag<T> for the result variable, well that's entirely to be expected. The List<T> class is not thread safe, but Parallel.For() will allow each task it runs to execute the localFinally delegate concurrently with all the others. So you have multiple threads all trying to modify the same not-thread-safe collection concurrently. This is a recipe for disaster. You're fortunate you get the exception; the actual behavior is undefined, and it's just as likely you'll simply corrupt the data structure as cause a run-time exception.
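A minimal illustration of the hazard, assuming nothing beyond the BCL: the first loop races on a plain List<int> and may throw or silently corrupt the list, while the second uses ConcurrentBag<int> and is safe:
var unsafeList = new List<int>();
Parallel.For(0, 100000, i => unsafeList.Add(i)); // broken: List<T>.Add is not thread-safe

var safeBag = new ConcurrentBag<int>();
Parallel.For(0, 100000, i => safeBag.Add(i));    // fine: ConcurrentBag<T> is thread-safe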

Use of forcefully calling the garbage collection method [duplicate]

The general advice is that you should not call GC.Collect from your code, but what are the exceptions to this rule?
I can only think of a few very specific cases where it may make sense to force a garbage collection.
One example that springs to mind is a service, that wakes up at intervals, performs some task, and then sleeps for a long time. In this case, it may be a good idea to force a collect to prevent the soon-to-be-idle process from holding on to more memory than needed.
Are there any other cases where it is acceptable to call GC.Collect?
If you have good reason to believe that a significant set of objects - particularly those you suspect to be in generations 1 and 2 - are now eligible for garbage collection, and that now would be an appropriate time to collect in terms of the small performance hit.
A good example of this is if you've just closed a large form. You know that all the UI controls can now be garbage collected, and a very short pause as the form is closed probably won't be noticeable to the user.
UPDATE 2.7.2018
As of .NET 4.5 - there is GCLatencyMode.LowLatency and GCLatencyMode.SustainedLowLatency. When entering and leaving either of these modes, it is recommended that you force a full GC with GC.Collect(2, GCCollectionMode.Forced).
As of .NET 4.6 - there is the GC.TryStartNoGCRegion method (used to set the read-only value GCLatencyMode.NoGCRegion). This can itself perform a full blocking garbage collection in an attempt to free enough memory, but given we are disallowing GC for a period, I would argue it is also a good idea to perform a full GC before and after.
Source: Microsoft engineer Ben Watson's Writing High-Performance .NET Code, 2nd Ed., 2018.
See:
https://msdn.microsoft.com/en-us/library/system.runtime.gclatencymode(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/dn906204(v=vs.110).aspx
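A sketch of that pattern (assuming using System.Runtime): switch into a low-latency mode for a critical region, with forced full collections on entry and exit as recommended above:
var oldMode = GCSettings.LatencyMode;
GC.Collect(2, GCCollectionMode.Forced);               // clean slate before the critical region
GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
try
{
    // ... latency-sensitive work ...
}
finally
{
    GCSettings.LatencyMode = oldMode;
    GC.Collect(2, GCCollectionMode.Forced);           // reclaim what accumulated during the region
}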
I use GC.Collect only when writing crude performance/profiler test rigs; i.e. I have two (or more) blocks of code to test - something like:
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
TestA(); // may allocate lots of transient objects
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
TestB(); // may allocate lots of transient objects
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
...
So that TestA() and TestB() run with as similar a state as possible - i.e. TestB() doesn't get hammered just because TestA() left the heap very close to the tipping point.
A classic example would be a simple console exe (a Main method short enough to be posted here, for example) that shows the difference between looped string concatenation and StringBuilder; a sketch follows below.
If I need something precise, then these would be two completely independent tests - but often this is enough if we just want to minimize (or normalize) the GC during the tests to get a rough feel for the behaviour.
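A rough sketch of such a rig along the lines described (the iteration count is arbitrary):
using System;
using System.Diagnostics;
using System.Text;

class ConcatVsStringBuilder
{
    const int N = 20000;

    static void Main()
    {
        GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
        var sw = Stopwatch.StartNew();
        string s = "";
        for (int i = 0; i < N; i++) s += "x";   // allocates a new string each pass
        sw.Stop();
        Console.WriteLine("Concat:        {0} ms", sw.ElapsedMilliseconds);

        GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
        sw.Restart();
        var sb = new StringBuilder();
        for (int i = 0; i < N; i++) sb.Append("x");
        string s2 = sb.ToString();
        sw.Stop();
        Console.WriteLine("StringBuilder: {0} ms", sw.ElapsedMilliseconds);
    }
}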
During production code? I have yet to use it ;-p
The best practice is to not force a garbage collection in most cases. (Every system I have worked on that had forced garbage collections had underlying problems that, if solved, would have removed the need to force the garbage collection, and sped the system up greatly.)
There are a few cases when you know more about memory usage than the garbage collector does. This is unlikely to be true in a multi-user application, or a service that is responding to more than one request at a time.
However, in some batch-type processing you do know more than the GC. E.g. consider an application that:
Is given a list of file names on the command line
Processes a single file and then writes the result out to a results file
While processing the file, creates a lot of interlinked objects that cannot be collected until the processing of the file is complete (e.g. a parse tree)
Does not keep much state between the files it has processed
You may be able to make a case (after careful testing) that you should force a full garbage collection after you have processed each file.
Another case is a service that wakes up every few minutes to process some items, and does not keep any state while it's asleep. Then forcing a full collection just before going to sleep may be worthwhile; a minimal sketch follows.
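A minimal sketch of that service pattern (ProcessPendingItems and shuttingDown are hypothetical placeholders):
while (!shuttingDown)
{
    ProcessPendingItems();             // hypothetical batch step; drops its references when done

    // Full collection before going idle, so the sleeping process
    // doesn't sit on memory it no longer needs.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    Thread.Sleep(TimeSpan.FromMinutes(5));
}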
The only time I would consider forcing a collection is when I know that a lot of objects have been created recently and very few objects are currently referenced.
I would rather have a garbage collection API where I could give it hints about this type of thing without having to force a GC myself.
See also "Rico Mariani's Performance Tidbits"
These days I consider that some of the above cases would be better served by a short-lived worker process doing each batch of work, letting the OS do the resource recovery.
One case is when you are trying to unit test code that uses WeakReference.
In large 24/7 or 24/6 systems -- systems that react to messages, RPC requests or that poll a database or process continuously -- it is useful to have a way to identify memory leaks. For this, I tend to add a mechanism to the application to temporarily suspend any processing and then perform full garbage collection. This puts the system into a quiescent state where the memory remaining is either legitimately long lived memory (caches, configuration, &c.) or else is 'leaked' (objects that are not expected or desired to be rooted but actually are).
Having this mechanism makes it a lot easier to profile memory usage as the reports will not be clouded with noise from active processing.
To be sure you get all of the garbage, you need to perform two collections:
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
The first collection will cause any objects with finalizers to be finalized (but not actually collect these objects). The second GC then collects these finalized objects.
You can call GC.Collect() when you know something about the nature of the app the garbage collector doesn't.
As the author, it's often tempting to think this is likely or normal. However, the truth is the GC amounts to a pretty well-written and tested expert system, and it's rare you'll know something about the low level code paths it doesn't.
The best example I can think of where you might have some extra information is an app that cycles between idle periods and very busy periods. You want the best performance possible for the busy periods and therefore want to use the idle time to do some clean up.
However, most of the time the GC is smart enough to do this anyway.
One instance where it is almost necessary to call GC.Collect() is when automating Microsoft Office through Interop. COM objects for Office don't like to automatically release and can result in the instances of the Office product taking up very large amounts of memory. I'm not sure if this is an issue or by design. There's lots of posts about this topic around the internet so I won't go into too much detail.
When programming using Interop, every single COM object should be manually released, usually through the use of Marshal.ReleaseComObject(). In addition, calling garbage collection manually can help "clean up" a bit. Calling the following code when you're done with Interop objects seems to help quite a bit:
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
In my personal experience, using a combination of ReleaseComObject and manually calling garbage collection greatly reduces the memory usage of Office products, specifically Excel.
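A hedged sketch of that cleanup pattern for Excel Interop (the workbook path is hypothetical; assumes a reference to Microsoft.Office.Interop.Excel):
var app = new Microsoft.Office.Interop.Excel.Application();
var workbooks = app.Workbooks;
var workbook = workbooks.Open(@"C:\data\report.xlsx");  // hypothetical path
try
{
    // ... read or write cells ...
}
finally
{
    workbook.Close(false);
    app.Quit();

    // Release every COM object you touched, most-derived first.
    System.Runtime.InteropServices.Marshal.ReleaseComObject(workbook);
    System.Runtime.InteropServices.Marshal.ReleaseComObject(workbooks);
    System.Runtime.InteropServices.Marshal.ReleaseComObject(app);

    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();
}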
As a memory fragmentation solution.
I was getting out-of-memory exceptions while writing a lot of data into a memory stream (reading from a network stream). The data was written in 8K chunks. After reaching 128M there was an exception even though there was a lot of memory available (but it was fragmented). Calling GC.Collect() solved the issue. I was able to handle over 1G after the fix.
Have a look at this article by Rico Mariani. He gives two rules when to call GC.Collect (rule 1 is: "Don't"):
When to call GC.Collect()
I was doing some performance testing on array and list:
private static int count = 100000000;

private static List<int> GetSomeNumbers_List_int()
{
    var lstNumbers = new List<int>();
    for (var i = 1; i <= count; i++)
    {
        lstNumbers.Add(i);
    }
    return lstNumbers;
}

private static int[] GetSomeNumbers_Array()
{
    var lstNumbers = new int[count];
    for (var i = 1; i <= count; i++)
    {
        lstNumbers[i - 1] = i;
    }
    return lstNumbers;
}

private static int[] GetSomeNumbers_Enumerable_Range()
{
    return Enumerable.Range(1, count).ToArray();
}

static void performance_100_Million()
{
    var sw = new Stopwatch();

    sw.Start();
    var numbers1 = GetSomeNumbers_List_int();
    sw.Stop();
    //numbers1 = null;
    //GC.Collect();
    Console.WriteLine(String.Format("\"List<int>\" took {0} milliseconds", sw.ElapsedMilliseconds));

    sw.Reset();
    sw.Start();
    var numbers2 = GetSomeNumbers_Array();
    sw.Stop();
    //numbers2 = null;
    //GC.Collect();
    Console.WriteLine(String.Format("\"int[]\" took {0} milliseconds", sw.ElapsedMilliseconds));

    sw.Reset();
    sw.Start();
    // getting System.OutOfMemoryException in GetSomeNumbers_Enumerable_Range method
    var numbers3 = GetSomeNumbers_Enumerable_Range();
    sw.Stop();
    //numbers3 = null;
    //GC.Collect();
    Console.WriteLine(String.Format("\"int[]\" Enumerable.Range took {0} milliseconds", sw.ElapsedMilliseconds));
}
and I got an OutOfMemoryException in the GetSomeNumbers_Enumerable_Range method; the only workaround was to release the memory from the earlier runs first:
numbers = null;
GC.Collect();
You should try to avoid using GC.Collect(), since it's very expensive. Here is an example:
public void ClearFrame(ulong timeStamp)
{
    if (RecordSet.Count <= 0) return;
    if (Limit == false)
    {
        var seconds = (timeStamp - RecordSet[0].TimeStamp) / 1000;
        if (seconds <= _preFramesTime) return;
        Limit = true;
        do
        {
            RecordSet.Remove(RecordSet[0]);
        } while (((timeStamp - RecordSet[0].TimeStamp) / 1000) > _preFramesTime);
    }
    else
    {
        RecordSet.Remove(RecordSet[0]);
    }
    GC.Collect(); // AVOID
}
TEST RESULT: CPU USAGE 12%
When you change to this:
public void ClearFrame(ulong timeStamp)
{
    if (RecordSet.Count <= 0) return;
    if (Limit == false)
    {
        var seconds = (timeStamp - RecordSet[0].TimeStamp) / 1000;
        if (seconds <= _preFramesTime) return;
        Limit = true;
        do
        {
            RecordSet[0].Dispose(); // Bitmap destroyed!
            RecordSet.Remove(RecordSet[0]);
        } while (((timeStamp - RecordSet[0].TimeStamp) / 1000) > _preFramesTime);
    }
    else
    {
        RecordSet[0].Dispose(); // Bitmap destroyed!
        RecordSet.Remove(RecordSet[0]);
    }
    //GC.Collect();
}
TEST RESULT: CPU USAGE 2-3%
In your example, I think that calling GC.Collect isn't the issue, but rather there is a design issue.
If you are going to wake up at intervals, (set times) then your program should be crafted for a single execution (perform the task once) and then terminate. Then, you set the program up as a scheduled task to run at the scheduled intervals.
This way, you don't have to concern yourself with calling GC.Collect (which you should rarely, if ever, have to do).
That being said, Rico Mariani has a great blog post on this subject, which can be found here:
http://blogs.msdn.com/ricom/archive/2004/11/29/271829.aspx
One useful place to call GC.Collect() is in a unit test when you want to verify that you are not creating a memory leak (e. g. if you are doing something with WeakReferences or ConditionalWeakTable, dynamically generated code, etc).
For example, I have a few tests like:
WeakReference w = CodeThatShouldNotMemoryLeak();
Assert.IsTrue(w.IsAlive);
GC.Collect();
GC.WaitForPendingFinalizers();
Assert.IsFalse(w.IsAlive);
It could be argued that using WeakReferences is a problem in and of itself, but it seems that if you are creating a system that relies on such behavior then calling GC.Collect() is a good way to verify such code.
There are some situations where it is better safe than sorry.
Here is one situation.
It is possible to author an unmanaged DLL in C# using IL rewrites (because there are situations where this is necessary).
Now suppose, for example, the DLL creates an array of bytes at the class level, because many of the exported functions need access to it. What happens when the DLL is unloaded? Is the garbage collector automatically called at that point? I don't know, but being an unmanaged DLL it is entirely possible the GC isn't called, and it would be a big problem if it wasn't. When the DLL is unloaded, so too would be the garbage collector - so who is going to be responsible for collecting any possible garbage, and how would they do it? Better to employ C#'s garbage collector: have a cleanup function (available to the DLL client) where the class-level variables are set to null and the garbage collector is called.
Better safe than sorry.
The short answer is: never!
using (var stream = new MemoryStream())
{
    bitmap.Save(stream, ImageFormat.Png);
    techObject.Last().Image = Image.FromStream(stream);
    bitmap.Dispose();

    // Without this code, I had an OutOfMemory exception.
    GC.Collect();
    GC.WaitForPendingFinalizers();
}
Another reason is when you have a SerialPort opened on a USB COM port, and then the USB device is unplugged. Because the SerialPort was opened, the resource holds a reference to the previously connected port in the system's registry. The system's registry will then contain stale data, so the list of available ports will be wrong. Therefore the port must be closed.
Calling SerialPort.Close() on the port calls Dispose() on the object, but it remains in memory until garbage collection actually runs, causing the registry to remain stale until the garbage collector decides to release the resource.
From https://stackoverflow.com/a/58810699/8685342:
try
{
    if (port != null)
        port.Close(); // this will throw an exception if the port was unplugged
}
catch (Exception ex) // of type 'System.IO.IOException'
{
    System.GC.Collect();
    System.GC.WaitForPendingFinalizers();
}
port = null;
If you are creating a lot of new System.Drawing.Bitmap objects, the Garbage Collector doesn't clear them. Eventually GDI+ will think you are running out of memory and will throw a "The parameter is not valid" exception. Calling GC.Collect() every so often (not too often!) seems to resolve this issue.
I am still pretty unsure about this.
I have been working for seven years on an application server. Our bigger installations use 24 GB of RAM. It is highly multithreaded, and ALL calls to GC.Collect() ran into really terrible performance issues.
Many third-party components used GC.Collect() when they thought it was clever to do so right then.
So a simple bunch of Excel reports blocked the app server for all threads several times a minute.
We had to refactor all the third-party components to remove the GC.Collect() calls, and all worked fine after doing this.
But I am running servers on Win32 as well, and there I started to make heavy use of GC.Collect() after getting an OutOfMemoryException.
But I am also pretty unsure about this, because I often noticed that when I got an OOM on 32-bit and retried the same operation, without calling GC.Collect(), it just worked fine.
One thing I wonder about is the OOM exception itself...
If I had written the .NET Framework and couldn't allocate a memory block, I would use GC.Collect(), defragment memory (??), try again, and only if I still couldn't find a free memory block would I throw the OOM exception.
Or at least make this behavior a configurable option, given the performance drawbacks of GC.Collect().
Now I have lots of code like this in my app to "solve" the problem:
public static TResult ExecuteOOMAware<T1, T2, TResult>(Func<T1, T2, TResult> func, T1 a1, T2 a2)
{
    int oomCounter = 0;
    int maxOOMRetries = 10;
    do
    {
        try
        {
            return func(a1, a2);
        }
        catch (OutOfMemoryException)
        {
            oomCounter++;
            if (oomCounter >= maxOOMRetries)
            {
                throw;
            }
            else
            {
                Log.Info("OutOfMemory-Exception caught, trying to fix. Counter: " + oomCounter.ToString());
                System.Threading.Thread.Sleep(TimeSpan.FromSeconds(oomCounter * 10));
                GC.Collect();
            }
        }
    } while (oomCounter < maxOOMRetries);

    return default(TResult); // never reached
}
(Note that the Thread.Sleep() behavior is really app-specific, because we are running an ORM caching service, and the service takes some time to release all the cached objects if RAM exceeds some predefined values; so it waits a few seconds the first time, and waits longer with each occurrence of OOM.)
One good reason for calling GC is on small ARM computers with little memory, like the Raspberry Pi (running Mono).
If unallocated memory fragments use too much of the system RAM, the Linux OS can get unstable.
I have an application where I have to call GC every second (!) to get rid of memory-overflow problems.
Another good solution is to dispose of objects when they are no longer needed. Unfortunately, this is not so easy in many cases.
This isn't that relevant to the question, but for XSLT transforms in .NET (XslCompiledTransform) you might have no choice. Another candidate is the MSHTML control.
If you are using a version of .NET older than 4.5, manual collection may be inevitable (especially if you are dealing with many 'large objects').
This link describes why:
https://blogs.msdn.microsoft.com/dotnet/2011/10/03/large-object-heap-improvements-in-net-4-5/
Since there are the small object heap (SOH) and the large object heap (LOH), we can call GC.Collect() to clear dereferenced objects in the SOH and move live objects to the next generation. In .NET 4.5.1 and later, we can also compact the LOH by using GCSettings.LargeObjectHeapCompactionMode; a sketch follows.
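A minimal sketch of LOH compaction on .NET 4.5.1+ (assumes using System.Runtime):
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(); // the next full blocking collection also compacts the LOH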

"finally" not executed because of Async in "try"?

I have a bool (b) which appears only twice in my code (besides its declaration):
try
{
    b = true;
    // code including await SomeAsync();
}
catch { }
finally { b = false; }
yet sometimes the try block is started with b already true (before b = true), which should never happen, because the declaration leaves it false, as does the previous finally. At some point this code is executed in a loop which, in some cases, iterates quickly, and SomeAsync() tries to use too much memory. I assume that is throwing an exception of a type that can "break" a finally. (b is always false, as expected, when there's only a normal amount of data for SomeAsync() to process.)
I've tried verifying what Visual Studio shows me with Debug.WriteLine() both after the try and after the finally, and also by appending a different character to a string in those places, but then the finally was executed. So I assume the added delay was enough to prevent the exception.
Is this really possible? (Any ideas on how to verify it or fix it so that the finally always runs?)
(finally blocks can fail - cases: Conditions when finally does not execute in a .net try..finally block )
EDIT
A good point was raised in a comment and an answer - that this code might be executed several times concurrently during the awaits. Unfortunately, there's another detail (which those answers made me aware is pertinent): after all iterations, the state is that b is true. That is not explained by concurrency. I have also added an if (b) return; before the try block (to avoid calling the Async while the previous one is running). I am still left with a true b after all iterations are done.
This is a pretty common mistake. The code you showed can be run multiple times concurrently. For example, if you attach it to a button click event, the user can click the button ten times in a row, causing ten copies of the function to run at nearly the same time.
In this example they won't literally run at the same time, being tied to the UI thread, but they can overlap, with the scheduler jumping from one instance to the next every time it hits an await statement.
If you are running this in the background (e.g. Thread.Start) then each instance will get its own thread and you really will have multiple copies running at the same time.
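A minimal console sketch of that overlap, with Task.Delay standing in for SomeAsync(): the second invocation starts while the first is suspended at its await, so it observes b == true before the first one's finally runs:
using System;
using System.Threading.Tasks;

// top-level program
bool b = false;

// Starting the second call without awaiting the first reproduces the report:
await Task.WhenAll(RunAsync("first"), RunAsync("second"));
// prints: first sees b = False
//         second sees b = True

async Task RunAsync(string name)
{
    Console.WriteLine("{0} sees b = {1}", name, b);
    try
    {
        b = true;
        await Task.Delay(100); // stands in for SomeAsync()
    }
    finally { b = false; }
}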

Reactive Extensions Test Scheduler Simulating Time elapse

I am working with the Rx scheduler classes, using the recursive .Schedule(DateTimeOffset, Action<Action<DateTimeOffset>>) overload. Basically I have a scheduled action that can schedule itself again.
Code:
public SomeObject(IScheduler sch, Action variableAmountofTime)
{
    this.sch = sch;
    sch.Schedule(GetNextTime(), (Action<DateTimeOffset> runAgain) =>
    {
        // Something that takes an unknown, variable amount of time.
        variableAmountofTime();
        runAgain(GetNextTime());
    });
}

public DateTimeOffset GetNextTime()
{
    // Return some time offset based on the scheduler's current time,
    // which is irregular based on other inputs that I have left out.
    return this.sch.Now.AddMinutes(1);
}
My question concerns simulating the amount of time variableAmountofTime might take, and testing that my code behaves as expected and only triggers the call as expected.
I have tried advancing the test scheduler's time inside the delegate, but that does not work. Here is an example of code that I wrote that doesn't work (assume GetNextTime() is just scheduling one minute out):
[Test]
public void TestCallsAppropriateNumberOfTimes()
{
    var sch = new TestScheduler();
    var timesCalled = 0;
    Action variableAmountOfTime = () =>
    {
        sch.AdvanceBy(TimeSpan.FromMinutes(3).Ticks);
        timesCalled++;
    };
    var someObject = new SomeObject(sch, variableAmountOfTime);
    sch.AdvanceTo(TimeSpan.FromMinutes(3).Ticks);
    Assert.That(timesCalled, Is.EqualTo(1));
}
Since I am advancing 3 minutes into the future but the execution itself takes 3 minutes, I want to see this trigger only once; instead it triggers 3 times.
How can I simulate something taking time during execution using the test scheduler?
Good question. Unfortunately, this is currently not supported in Rx v1.x and Rx v2.0 Beta (but read on). Let me explain the complication of nested Advance* calls to you.
Basically, Advance* implies starting the scheduler to run work till the point specified. This involves running the work in order on a single logical thread that represents the flow of time in the virtual scheduler. Allowing nested Advance* calls raises a few questions.
First of all, should a nested Advance* call cause a nested worker loop to be run? If that were the case, we're no longer mimicking a single logical thread of execution as the current work item would be interrupted in favor of running the inner loop. In fact, Advance* would lead to an implicit yield where the rest of the work (that was due now) after the Advance* call would not be allowed to run until all nested work has been processed. This leads to the situation where future work cannot depend on (or wait for) past work to finish its execution. One way out is to introduce real physical concurrency, which defeats various design points of the virtual time and historical schedulers to begin with.
Alternatively, should a nested Advance* call somehow communicate to the top-most worker loop dispatching call (Advance* or Start) it may need to extend its due time because a nested invocation has asked to advance to a point beyond the original due time. Now all sorts of things are getting weird though. The clock doesn't reflect the changes after returning from Advance* and the top-most call no longer finishes at a predictable time.
For Rx v2.0 RC (coming next month), we took a look at this scenario and decided Advance* is not the right thing to emulate "time slippage" because it'd need an overloaded meaning depending on the context where it's invoked from. Instead, we're introducing a Sleep method that can be used to slip time forward from any context, without the side-effect of running work. Think of it as a way to set the Clock property but with safeguarding against going back in time. The name also reflects the intent clearly.
In addition to the above, to reduce the surprise factor of nested Advance* calls having no effect, we made it detect this situation and throw an InvalidOperationException in a nested context. Sleep, on the other hand, can be called from anywhere.
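To make that concrete, here is a hedged sketch of the earlier test rewritten against the Rx v2.0 API described above, with Sleep slipping virtual time inside the callback instead of AdvanceBy:
[Test]
public void TestCallsAppropriateNumberOfTimes()
{
    var sch = new TestScheduler();
    var timesCalled = 0;
    Action variableAmountOfTime = () =>
    {
        sch.Sleep(TimeSpan.FromMinutes(3).Ticks); // slips the clock forward; runs no work
        timesCalled++;
    };
    var someObject = new SomeObject(sch, variableAmountOfTime);

    sch.AdvanceTo(TimeSpan.FromMinutes(3).Ticks);

    Assert.That(timesCalled, Is.EqualTo(1));
}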
One final note. It turns out we needed exactly the same feature for work we're doing in Rx v2.0 RC with regards to our treatment of time. Several tests required a deterministic way to emulate slippage of time due to the execution of user code that can take arbitrarily long (think of the OnNext handler to e.g. Observable.Interval).
Hope this helps... Stay tuned for our Rx v2.0 RC release in the next few weeks!
-Bart (Rx team)

CPU performance when creating lots of objects with SqlDataReader

I have a stored procedure that returns several resultsets, with thousands of rows in (some of) them. I am executing this stored proc from x threads at the same time, with a maximum of y running concurrently.
When I just run through the data:
using (SqlDataReader reader = command.ExecuteReader(CommandBehavior.CloseConnection))
{
    do
    {
        while (reader.Read())
        {
            for (int i = 0; i < reader.FieldCount; i++)
            {
                var value = reader.GetValue(i);
            }
        }
    }
    while (reader.NextResult());
}
I get reasonable throughput - and CPU. Now obviously this is useless, so I need some objects! OK, now I change it to do something like this:
using (SqlDataReader reader = command.ExecuteReader(CommandBehavior.CloseConnection))
{
    while (reader.Read())
    {
        Bob b = new Bob(reader);
        this.bobs.Add(b);
    }
    reader.NextResult();
    while (reader.Read())
    {
        Clarence c = new Clarence(reader);
        this.clarences.Add(c);
    }
    // More fun
}
And my implementation of data classes:
public Bob(SqlDataReader reader)
{
    this.some = reader.GetInt32(0);
    this.parameter = reader.GetInt32(1);
    this.value = reader.GetString(2);
}
And this performs worse. Which is not surprising. What is surprising is that the CPU drops, by about 20-25% of total (i.e. not 25% of what it was using; it's close to 50% of what it was using)! Why would doing more work drop the CPU? I don't get it... It looks like there's some locking somewhere, but I don't understand where. I want to get the most out of the machine!
EDIT: changed code due to a bad demo code example. Oops.
ALSO: to test the constructor theory, I changed the implementation of the version that creates the objects. This time it also does a for loop over the fields, calls GetValue, and creates an empty object. So it doesn't do what I want, yes, but I wanted to see whether creating the large number of objects is the issue. It isn't.
SECOND EDIT: it looks like adding the objects to the list is the issue - CPU drops immediately as soon as I add these objects to the list. Not sure how to improve this... (or whether it's worth investigating; the first scenario is obviously stupid)
I can see the performance issue with your methodology, but I am not sure I can pinpoint the exact explanation of the cause. Essentially, you are tying the flow of the cursor to instantiation rather than to setting properties. If I were to guess, you are causing some context switching when you pull from the reader as part of your constructor.
If you think of a reader as a firehose cursor (it is), and think of the difference between the firehose being controlled by the user holding the hose (the normal method) and by the container being filled, you start to get a picture of the problem.
Not sure how the threading relates? But if you have multiple clients, and you are slowing the flow of the hose a bit more by moving bits over to a constructor rather than setting properties on a constructed object, and then contending for thread time across multiple requests, I can envision an even bigger issue under load.
Well, the reason the CPU isn't maxed out is simply that it's waiting on something else. Either the disk or the network.
Considering your first code block simply reads and immediately discards a value, there are several possibilities. One is that the compiler could be optimizing out the allocation of the value variable, because it is limited in scope and never used. This would mean the processor doesn't have to wait for memory allocation; instead it would be limited mainly by how fast you can get the data off of the network wire.
Normally memory allocation ought to be super fast; however, if the amount of memory you are allocating causes Windows to push stuff out to disk, then this will be held back by your hard drive's speed.
Looking at your second code block, you are constructing lots of objects (more memory usage) and keeping them around. Neither of which allows the compiler to optimize the calls out; and both of which put increased pressure on local system resources beyond just the processor.
To determine if the above is even close, you need to put watch counters on the amount of memory used by the app vs the amount of available RAM. Also you'll want counters on your disk system to see if it's being heavily accessed under either scenario.
Point is, try and watch EVERYTHING that is going on in the machine while the code blocks are running. That will give you a better idea of why the processor is waiting.
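As a rough starting point, the process's own numbers can be sampled inline while each scenario runs (perfmon gives the system-wide disk and memory view):
var proc = System.Diagnostics.Process.GetCurrentProcess();
Console.WriteLine("Working set: {0} MB", proc.WorkingSet64 / (1024 * 1024));
Console.WriteLine("GC heap:     {0} MB", GC.GetTotalMemory(false) / (1024 * 1024));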
Maybe it's just that the Bob and Clarence construction is inherently less CPU-intensive, and that part of the execution simply takes time because of I/O bottlenecks or something along those lines.
Your best bet is to run this through a profiler and take a look at the report.
