Generics vs Object performance - C#

I'm doing practice problems from the MCTS Exam 70-536 (Microsoft .NET Framework Application Development Foundation) book, and one of the problems is to create two classes, one generic and one object-based, that do the same thing; a loop then uses each class and iterates thousands of times, and using a timer you measure the performance of both. There was another post at C# generics question that asks the same thing, but no one replied.
Basically, if I run the generic class first, it takes longer to process. If I run the object class first, then the object class takes longer to process. The whole idea was to prove that generics perform faster.
I used the original user's code to save me some time. I didn't see anything particularly wrong with the code and was puzzled by the outcome. Can someone explain these unusual results?
Thanks,
Risho
Here is the code:
class Program
{
class Object_Sample
{
public Object_Sample()
{
Console.WriteLine("Object_Sample Class");
}
public long getTicks()
{
return DateTime.Now.Ticks;
}
public void display(Object a)
{
Console.WriteLine("{0}", a);
}
}
class Generics_Sample<T>
{
public Generics_Sample()
{
Console.WriteLine("Generics_Sample Class");
}
public long getTicks()
{
return DateTime.Now.Ticks;
}
public void display(T a)
{
Console.WriteLine("{0}", a);
}
}
static void Main(string[] args)
{
long ticks_initial, ticks_final, diff_generics, diff_object;
Object_Sample OS = new Object_Sample();
Generics_Sample<int> GS = new Generics_Sample<int>();
//Generic Sample
ticks_initial = 0;
ticks_final = 0;
ticks_initial = GS.getTicks();
for (int i = 0; i < 50000; i++)
{
GS.display(i);
}
ticks_final = GS.getTicks();
diff_generics = ticks_final - ticks_initial;
//Object Sample
ticks_initial = 0;
ticks_final = 0;
ticks_initial = OS.getTicks();
for (int j = 0; j < 50000; j++)
{
OS.display(j);
}
ticks_final = OS.getTicks();
diff_object = ticks_final - ticks_initial;
Console.WriteLine("\nPerformance of Generics {0}", diff_generics);
Console.WriteLine("Performance of Object {0}", diff_object);
Console.ReadKey();
}
}

Well, the first problem I can see is that you're using DateTime to measure time in your application (over a very small interval).
You should be using the Stopwatch class. It offers better precision when trying to benchmark code.
The second problem is that you're not allowing for JIT (Just-In-Time compilation). The first call to your code is going to take longer simply because it has to be JIT'd. After that, you'll get your results.
I would make a single call into your code before you start timing so you get an accurate idea of what is happening during the loop.
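For example, the timing part of Main could be reworked along these lines (same 50,000-iteration loops, reusing the GS and OS objects from the question; assumes using System.Diagnostics at the top of the file):
// Warm-up: call each method once so JIT compilation isn't included in the measurement.
GS.display(0);
OS.display(0);

Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i < 50000; i++)
{
    GS.display(i);
}
sw.Stop();
Console.WriteLine("Generics: {0} ms", sw.ElapsedMilliseconds);

sw = Stopwatch.StartNew();
for (int j = 0; j < 50000; j++)
{
    OS.display(j);
}
sw.Stop();
Console.WriteLine("Object:   {0} ms", sw.ElapsedMilliseconds);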

You should run both classes once before timing them, to allow the JITter to run.

Your test is incorrect. Here are your methods:
public void display(T a)
{
Console.WriteLine("{0}", a); // Console.WriteLine(string format, params object[] args) <- boxing is performed here
}
public void display(Object a)// <- boxing is performed here
{
Console.WriteLine("{0}", a);
}
So, in both cases boxing occurs. A better comparison would be to have each class keep a running total of the values instead of writing to the console, for example:
long Total;
// strongly typed version - no boxing
public void add(long a)
{
Total += a;
}
// object version - boxing happens at the call site
public void display(Object a)
{
Total += (long) a;// <- unboxing is performed here
}
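A more complete, self-contained sketch of a boxing-sensitive comparison (GenericHolder and ObjectHolder are made-up names, not from the exam question): the object-based version boxes on every Set and unboxes on every Get, while the generic version does neither.
class ObjectHolder
{
    private object _value;
    public void Set(object value) { _value = value; }  // an int argument is boxed at the call site
    public object Get() { return _value; }
}

class GenericHolder<T>
{
    private T _value;
    public void Set(T value) { _value = value; }        // no boxing for value types
    public T Get() { return _value; }
}

// Timed loops (warm up and use Stopwatch as described above):
var oh = new ObjectHolder();
var gh = new GenericHolder<int>();
long sum = 0;
for (int i = 0; i < 50000; i++)
{
    oh.Set(i);             // box
    sum += (int)oh.Get();  // unbox
}
for (int i = 0; i < 50000; i++)
{
    gh.Set(i);             // no box
    sum += gh.Get();       // no unbox
}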

Your timed code includes a Console.WriteLine(). That will take up 99.999999% of the time.
Your assumption that generic will be faster in this situation is wrong. You may have misinterpreted a remark about non-generic collection classes.
This won't be on the exam.

Why would it be faster? Both ints must be boxed in order to use Console.WriteLine(string, object).
Edit: ToString() itself does not seem to cause boxing:
http://weblogs.asp.net/ngur/archive/2003/12/16/43856.aspx
So when you use Console.WriteLine(a), which would call Console.WriteLine(Int32), that should work I guess (I would need to look into Reflector to confirm this).
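In other words, for the non-generic int case the boxing can be avoided by letting overload resolution pick the Console.WriteLine(int) overload:
int a = 42;
Console.WriteLine(a);          // binds to Console.WriteLine(int): no boxing
Console.WriteLine("{0}", a);   // binds to Console.WriteLine(string, object): the int is boxed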

Related

Trying to merge sort a linked list and getting a stackoverflow exception. How might I prevent that? [duplicate]

I would like to either prevent or handle a StackOverflowException that I am getting from a call to the XslCompiledTransform.Transform method within an Xsl Editor I am writing. The problem seems to be that the user can write an Xsl script that is infinitely recursive, and it just blows up on the call to the Transform method. (That is, the problem is not just the typical programmatic error, which is usually the cause of such an exception.)
Is there a way to detect and/or limit how many recursions are allowed? Or any other ideas to keep this code from just blowing up on me?
From Microsoft:
Starting with the .NET Framework version 2.0, a StackOverflowException object cannot be caught by a try-catch block and the corresponding process is terminated by default. Consequently, users are advised to write their code to detect and prevent a stack overflow. For example, if your application depends on recursion, use a counter or a state condition to terminate the recursive loop.
I'm assuming the exception is happening within an internal .NET method, and not in your code.
You can do a couple things.
Write code that checks the xsl for infinite recursion and notifies the user prior to applying a transform (Ugh).
Load the XslTransform code into a separate process (Hacky, but less work).
You can use the Process class to load the assembly that will apply the transform into a separate process, and alert the user of the failure if it dies, without killing your main app.
EDIT: I just tested, here is how to do it:
MainProcess:
// This is just an example, obviously you'll want to pass args to this.
Process p1 = new Process();
p1.StartInfo.FileName = "ApplyTransform.exe";
p1.StartInfo.UseShellExecute = false;
p1.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
p1.Start();
p1.WaitForExit();
if (p1.ExitCode == 1)
Console.WriteLine("StackOverflow was thrown");
ApplyTransform Process:
class Program
{
static void Main(string[] args)
{
AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);
throw new StackOverflowException();
}
// We trap this, we can't save the process,
// but we can prevent the "ILLEGAL OPERATION" window
static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
{
if (e.IsTerminating)
{
Environment.Exit(1);
}
}
}
NOTE: The question in the bounty by @WilliamJockusch and the original question are different.
This answer is about StackOverflowExceptions in the general case of third-party libraries and what you can and can't do with them. If you're looking for the special case with XslTransform, see the accepted answer.
Stack overflows happen because the data on the stack exceeds a certain limit (in bytes). The details of how this detection works can be found here.
I'm wondering if there is a general way to track down StackOverflowExceptions. In other words, suppose I have infinite recursion somewhere in my code, but I have no idea where. I want to track it down by some means that is easier than stepping through code all over the place until I see it happening. I don't care how hackish it is.
As I mentioned in the link, detecting a stack overflow from static code analysis would require solving the halting problem which is undecidable. Now that we've established that there is no silver bullet, I can show you a few tricks that I think helps track down the problem.
I think this question can be interpreted in different ways, and since I'm a bit bored :-), I'll break it down into different variations.
Detecting a stack overflow in a test environment
Basically the problem here is that you have a (limited) test environment and want to detect a stack overflow in an (expanded) production environment.
Instead of detecting the SO itself, I solve this by exploiting the fact that the stack depth can be set. The debugger will give you all the information you need. Most languages allow you to specify the stack size or the max recursion depth.
Basically I try to force a SO by making the stack depth as small as possible. If it doesn't overflow, I can always make it bigger (=in this case: safer) for the production environment. The moment you get a stack overflow, you can manually decide if it's a 'valid' one or not.
To do this, pass the stack size (in our case: a small value) to the Thread constructor and see what happens. The default stack size in .NET is 1 MB, and we're going to use a much smaller value:
class StackOverflowDetector
{
static int Recur()
{
int variable = 1;
return variable + Recur();
}
static void Start()
{
int depth = 1 + Recur();
}
static void Main(string[] args)
{
Thread t = new Thread(Start, 1);
t.Start();
t.Join();
Console.WriteLine();
Console.ReadLine();
}
}
Note: we're going to use this code below as well.
Once it overflows, you can set it to a bigger value until you get a SO that makes sense.
Creating exceptions before you SO
The StackOverflowException is not catchable. This means there's not much you can do when it has happened. So, if you believe something is bound to go wrong in your code, you can make your own exception in some cases. The only thing you need for this is the current stack depth; there's no need for a counter, you can use the real values from .NET:
class StackOverflowDetector
{
static void CheckStackDepth()
{
if (new StackTrace().FrameCount > 10) // some arbitrary limit
{
throw new StackOverflowException("Bad thread.");
}
}
static int Recur()
{
CheckStackDepth();
int variable = 1;
return variable + Recur();
}
static void Main(string[] args)
{
try
{
int depth = 1 + Recur();
}
catch (ThreadAbortException e)
{
Console.WriteLine("We've been a {0}", e.ExceptionState);
}
Console.WriteLine();
Console.ReadLine();
}
}
Note that this approach also works if you are dealing with third-party components that use a callback mechanism. The only thing required is that you can intercept some calls in the stack trace.
Detection in a separate thread
You explicitly suggested this, so here goes this one.
You can try detecting a SO in a separate thread, but it probably won't do you any good. A stack overflow can happen fast, even before you get a context switch. This means this mechanism isn't reliable at all, so I wouldn't recommend actually using it. It was fun to build though, so here's the code :-)
class StackOverflowDetector
{
static int Recur()
{
Thread.Sleep(1); // simulate that we're actually doing something :-)
int variable = 1;
return variable + Recur();
}
static void Start()
{
try
{
int depth = 1 + Recur();
}
catch (ThreadAbortException e)
{
Console.WriteLine("We've been a {0}", e.ExceptionState);
}
}
static void Main(string[] args)
{
// Prepare the execution thread
Thread t = new Thread(Start);
t.Priority = ThreadPriority.Lowest;
// Create the watch thread
Thread watcher = new Thread(Watcher);
watcher.Priority = ThreadPriority.Highest;
watcher.Start(t);
// Start the execution thread
t.Start();
t.Join();
watcher.Abort();
Console.WriteLine();
Console.ReadLine();
}
private static void Watcher(object o)
{
Thread towatch = (Thread)o;
while (true)
{
if (towatch.ThreadState == System.Threading.ThreadState.Running)
{
towatch.Suspend();
var frames = new System.Diagnostics.StackTrace(towatch, false);
if (frames.FrameCount > 20)
{
towatch.Resume();
towatch.Abort("Bad bad thread!");
}
else
{
towatch.Resume();
}
}
}
}
}
Run this in the debugger and have fun watching what happens.
Using the characteristics of a stack overflow
Another interpretation of your question is: "Where are the pieces of code that could potentially cause a stack overflow exception?". Obviously the answer to this is: all code with recursion. For each piece of code, you can then do some manual analysis.
It's also possible to determine this using static code analysis. What you need to do for that is to decompile all methods and figure out whether they can end up calling themselves again, directly or through a cycle of calls. Here's some code that does that for you:
// A simple decompiler that extracts all method tokens (that is: call, callvirt, newobj in IL)
internal class Decompiler
{
private Decompiler() { }
static Decompiler()
{
singleByteOpcodes = new OpCode[0x100];
multiByteOpcodes = new OpCode[0x100];
FieldInfo[] infoArray1 = typeof(OpCodes).GetFields();
for (int num1 = 0; num1 < infoArray1.Length; num1++)
{
FieldInfo info1 = infoArray1[num1];
if (info1.FieldType == typeof(OpCode))
{
OpCode code1 = (OpCode)info1.GetValue(null);
ushort num2 = (ushort)code1.Value;
if (num2 < 0x100)
{
singleByteOpcodes[(int)num2] = code1;
}
else
{
if ((num2 & 0xff00) != 0xfe00)
{
throw new Exception("Invalid opcode: " + num2.ToString());
}
multiByteOpcodes[num2 & 0xff] = code1;
}
}
}
}
private static OpCode[] singleByteOpcodes;
private static OpCode[] multiByteOpcodes;
public static MethodBase[] Decompile(MethodBase mi, byte[] ildata)
{
HashSet<MethodBase> result = new HashSet<MethodBase>();
Module module = mi.Module;
int position = 0;
while (position < ildata.Length)
{
OpCode code = OpCodes.Nop;
ushort b = ildata[position++];
if (b != 0xfe)
{
code = singleByteOpcodes[b];
}
else
{
b = ildata[position++];
code = multiByteOpcodes[b];
b |= (ushort)(0xfe00);
}
switch (code.OperandType)
{
case OperandType.InlineNone:
break;
case OperandType.ShortInlineBrTarget:
case OperandType.ShortInlineI:
case OperandType.ShortInlineVar:
position += 1;
break;
case OperandType.InlineVar:
position += 2;
break;
case OperandType.InlineBrTarget:
case OperandType.InlineField:
case OperandType.InlineI:
case OperandType.InlineSig:
case OperandType.InlineString:
case OperandType.InlineTok:
case OperandType.InlineType:
case OperandType.ShortInlineR:
position += 4;
break;
case OperandType.InlineR:
case OperandType.InlineI8:
position += 8;
break;
case OperandType.InlineSwitch:
int count = BitConverter.ToInt32(ildata, position);
position += count * 4 + 4;
break;
case OperandType.InlineMethod:
int methodId = BitConverter.ToInt32(ildata, position);
position += 4;
try
{
if (mi is ConstructorInfo)
{
result.Add((MethodBase)module.ResolveMember(methodId, mi.DeclaringType.GetGenericArguments(), Type.EmptyTypes));
}
else
{
result.Add((MethodBase)module.ResolveMember(methodId, mi.DeclaringType.GetGenericArguments(), mi.GetGenericArguments()));
}
}
catch { }
break;
default:
throw new Exception("Unknown instruction operand; cannot continue. Operand type: " + code.OperandType);
}
}
return result.ToArray();
}
}
class StackOverflowDetector
{
// This method will be found:
static int Recur()
{
CheckStackDepth();
int variable = 1;
return variable + Recur();
}
static void Main(string[] args)
{
RecursionDetector();
Console.WriteLine();
Console.ReadLine();
}
static void RecursionDetector()
{
// First decompile all methods in the assembly:
Dictionary<MethodBase, MethodBase[]> calling = new Dictionary<MethodBase, MethodBase[]>();
var assembly = typeof(StackOverflowDetector).Assembly;
foreach (var type in assembly.GetTypes())
{
foreach (var member in type.GetMembers(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.Instance).OfType<MethodBase>())
{
var body = member.GetMethodBody();
if (body!=null)
{
var bytes = body.GetILAsByteArray();
if (bytes != null)
{
// Store all the calls of this method:
var calls = Decompiler.Decompile(member, bytes);
calling[member] = calls;
}
}
}
}
// Check every method:
foreach (var method in calling.Keys)
{
// If method A -> ... -> method A, we have a possible infinite recursion
CheckRecursion(method, calling, new HashSet<MethodBase>());
}
}
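// Note: CheckRecursion isn't shown in the original listing; the following is a minimal
// sketch of what it might look like (an assumption, not the original author's code).
// It walks the call graph and reports any path that leads back to a method already visited:
static void CheckRecursion(MethodBase method, Dictionary<MethodBase, MethodBase[]> calling, HashSet<MethodBase> visited)
{
if (!visited.Add(method))
{
// We came back to a method that is already on the current path: a call cycle.
Console.WriteLine("Possible recursion cycle involving " + method.DeclaringType + "." + method.Name);
return;
}
MethodBase[] callees;
if (calling.TryGetValue(method, out callees))
{
foreach (var callee in callees)
{
CheckRecursion(callee, calling, visited);
}
}
visited.Remove(method);
}
} // closes the StackOverflowDetector class (its closing brace is missing from the listing above)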
Now, the fact that a method cycle contains recursion is by no means a guarantee that a stack overflow will happen - it's just the most likely precondition for your stack overflow exception. In short, this means that this code will determine the pieces of code where a stack overflow can occur, which should narrow down the search considerably.
Yet other approaches
There are some other approaches you can try that I haven't described here.
Handling the stack overflow by hosting the CLR yourself and dealing with the failure in the host. Note that you still cannot 'catch' it.
Changing all IL code, building another DLL, adding checks on recursion. Yes, that's quite possible (I've implemented it in the past :-); it's just difficult and involves a lot of code to get it right.
Use the .NET profiling API to capture all method calls and use that to figure out stack overflows. For example, you can implement checks that if you encounter the same method X times in your call tree, you give a signal. There's a project clrprofiler that will give you a head start.
I would suggest creating a wrapper around the XmlWriter object so that it counts the number of calls to WriteStartElement/WriteEndElement; if you limit the nesting depth to some number (e.g. 100), you would be able to throw a different exception, for example InvalidOperationException.
That should solve the problem in the majority of cases.
public class LimitedDepthXmlWriter : XmlWriter
{
private readonly XmlWriter _innerWriter;
private readonly int _maxDepth;
private int _depth;
public LimitedDepthXmlWriter(XmlWriter innerWriter): this(innerWriter, 100)
{
}
public LimitedDepthXmlWriter(XmlWriter innerWriter, int maxDepth)
{
_maxDepth = maxDepth;
_innerWriter = innerWriter;
}
public override void Close()
{
_innerWriter.Close();
}
public override void Flush()
{
_innerWriter.Flush();
}
public override string LookupPrefix(string ns)
{
return _innerWriter.LookupPrefix(ns);
}
public override void WriteBase64(byte[] buffer, int index, int count)
{
_innerWriter.WriteBase64(buffer, index, count);
}
public override void WriteCData(string text)
{
_innerWriter.WriteCData(text);
}
public override void WriteCharEntity(char ch)
{
_innerWriter.WriteCharEntity(ch);
}
public override void WriteChars(char[] buffer, int index, int count)
{
_innerWriter.WriteChars(buffer, index, count);
}
public override void WriteComment(string text)
{
_innerWriter.WriteComment(text);
}
public override void WriteDocType(string name, string pubid, string sysid, string subset)
{
_innerWriter.WriteDocType(name, pubid, sysid, subset);
}
public override void WriteEndAttribute()
{
_innerWriter.WriteEndAttribute();
}
public override void WriteEndDocument()
{
_innerWriter.WriteEndDocument();
}
public override void WriteEndElement()
{
_depth--;
_innerWriter.WriteEndElement();
}
public override void WriteEntityRef(string name)
{
_innerWriter.WriteEntityRef(name);
}
public override void WriteFullEndElement()
{
_innerWriter.WriteFullEndElement();
}
public override void WriteProcessingInstruction(string name, string text)
{
_innerWriter.WriteProcessingInstruction(name, text);
}
public override void WriteRaw(string data)
{
_innerWriter.WriteRaw(data);
}
public override void WriteRaw(char[] buffer, int index, int count)
{
_innerWriter.WriteRaw(buffer, index, count);
}
public override void WriteStartAttribute(string prefix, string localName, string ns)
{
_innerWriter.WriteStartAttribute(prefix, localName, ns);
}
public override void WriteStartDocument(bool standalone)
{
_innerWriter.WriteStartDocument(standalone);
}
public override void WriteStartDocument()
{
_innerWriter.WriteStartDocument();
}
public override void WriteStartElement(string prefix, string localName, string ns)
{
if (_depth++ > _maxDepth) ThrowException();
_innerWriter.WriteStartElement(prefix, localName, ns);
}
public override WriteState WriteState
{
get { return _innerWriter.WriteState; }
}
public override void WriteString(string text)
{
_innerWriter.WriteString(text);
}
public override void WriteSurrogateCharEntity(char lowChar, char highChar)
{
_innerWriter.WriteSurrogateCharEntity(lowChar, highChar);
}
public override void WriteWhitespace(string ws)
{
_innerWriter.WriteWhitespace(ws);
}
private void ThrowException()
{
throw new InvalidOperationException(string.Format("Result xml has more than {0} nested tags. It is possible that xslt transformation contains an endless recursive call.", _maxDepth));
}
}
This answer is for @WilliamJockusch.
I'm wondering if there is a general way to track down StackOverflowExceptions. In other words, suppose I have infinite recursion somewhere in my code, but I have no idea where. I want to track it down by some means that is easier than stepping through code all over the place until I see it happening. I don't care how hackish it is. For example, it would be great to have a module I could activate, perhaps even from another thread, that polled the stack depth and complained if it got to a level I considered "too high." For example, I might set "too high" to 600 frames, figuring that if the stack were that deep, it has to be a problem. Is something like that possible? Another example would be to log every 1000th method call within my code to the debug output. The chances this would get some evidence of the overflow would be pretty good, and it likely would not blow up the output too badly. The key is that it cannot involve writing a check wherever the overflow is happening, because the entire problem is that I don't know where that is. Preferably the solution should not depend on what my development environment looks like; i.e., it should not assume that I am using C# via a specific toolset (e.g. VS).
It sounds like you're keen to hear some debugging techniques to catch this StackOverflow so I thought I would share a couple for you to try.
1. Memory Dumps.
Pros: Memory dumps are a surefire way to work out the cause of a stack overflow. A C# MVP and I worked together troubleshooting a SO and he went on to blog about it here.
This method is the fastest way to track down the problem.
This method won't require you to reproduce problems by following steps seen in logs.
Cons: Memory dumps are very large and you have to attach AdPlus/ProcDump to the process.
2. Aspect-Oriented Programming.
Pros: This is probably the easiest way to implement code that checks the size of the call stack from any method without writing code in every method of your application. There are a bunch of AOP frameworks that allow you to intercept before and after calls.
Will tell you the methods that are causing the stack overflow.
Allows you to check StackTrace().FrameCount at the entry and exit of all methods in your application.
Cons: It will have a performance impact - the hooks are embedded into the IL of every method and you can't really "de-activate" them.
It somewhat depends on your development environment tool set.
3. Logging User Activity.
A week ago I was trying to hunt down several hard-to-reproduce problems. I posted this QA: User Activity Logging, Telemetry (and Variables in Global Exception Handlers). The conclusion I came to was to use a really simple user-actions logger so that, when any unhandled exception occurs, I can see how to reproduce the problem in a debugger.
Pros: You can turn it on or off at will (i.e. by subscribing to events).
Tracking the user actions doesn't require intercepting every method.
You can count the number of events methods are subscribed to far more simply than with AOP.
The log files are relatively small and focus on what actions you need to perform to reproduce the problem.
It can help you to understand how users are using your application.
Cons: It isn't suited to a Windows Service, and I'm sure there are better tools like this for web apps.
Doesn't necessarily tell you the methods that cause the stack overflow.
Requires you to step through logs manually to reproduce problems, rather than a memory dump that you can grab and debug straight away.
Maybe you could try all the techniques I mention above, and some that @atlaste posted, and tell us which ones you found were the easiest/quickest/dirtiest/most acceptable to run in a PROD environment/etc.
Anyway, good luck tracking down this SO.
If your application depends on third-party code (here, the XSL scripts), then you first have to decide whether you want to defend against bugs in it or not.
If you really want to defend against them, then I think you should execute the logic that is prone to external errors in separate AppDomains.
Catching StackOverflowException is not good.
Check also this question.
I had a stack overflow today, and after reading some of your posts I decided to help out the Garbage Collector.
I used to have a near-infinite loop like this:
class Foo
{
public Foo()
{
Go();
}
public void Go()
{
for (float i = float.MinValue; i < float.MaxValue; i+= 0.000000000000001f)
{
byte[] b = new byte[1]; // Causes stackoverflow
}
}
}
Instead let the resource run out of scope like this:
class Foo
{
public Foo()
{
GoHelper();
}
public void GoHelper()
{
for (float i = float.MinValue; i < float.MaxValue; i+= 0.000000000000001f)
{
Go();
}
}
public void Go()
{
byte[] b = new byte[1]; // Will get cleaned by GC
} // right now
}
It worked for me, hope it helps someone.
With .NET 4.0 you can add the HandleProcessCorruptedStateExceptions attribute from System.Runtime.ExceptionServices to the method containing the try/catch block. This really worked! Maybe not recommended, but it works.
using System;
using System.Reflection;
using System.Runtime.InteropServices;
using System.Runtime.ExceptionServices;
namespace ExceptionCatching
{
public class Test
{
public void StackOverflow()
{
StackOverflow();
}
public void CustomException()
{
throw new Exception();
}
public unsafe void AccessViolation()
{
byte b = *(byte*)(8762765876);
}
}
class Program
{
[HandleProcessCorruptedStateExceptions]
static void Main(string[] args)
{
Test test = new Test();
try {
//test.StackOverflow();
test.AccessViolation();
//test.CustomException();
}
catch
{
Console.WriteLine("Caught.");
}
Console.WriteLine("End of program");
}
}
}
@WilliamJockusch, if I understood your concern correctly, it's not possible (from a mathematical point of view) to always identify an infinite recursion, as that would mean solving the Halting problem. To solve it you'd need a super-recursive algorithm (like trial-and-error predicates, for example) or a machine that can hypercompute (an example is explained in the following section - available as preview - of this book).
From a practical point of view, you'd have to know:
How much stack memory you have left at the given time
How much stack memory your recursive method will need at the given time for the specific output.
Keep in mind that, on current machines, this data is extremely volatile due to multitasking, and I haven't heard of software that does this.
Let me know if something is unclear.
By the looks of it, apart from starting another process, there doesn't seem to be any way of handling a StackOverflowException. Before anyone else asks, I tried using AppDomain, but that didn't work:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Text;
using System.Threading;
namespace StackOverflowExceptionAppDomainTest
{
class Program
{
static void recursiveAlgorithm()
{
recursiveAlgorithm();
}
}
static void Main(string[] args)
{
if (args.Length > 0 && args[0] == "--child")
{
recursiveAlgorithm();
}
}
else
{
var domain = AppDomain.CreateDomain("Child domain to test StackOverflowException in.");
domain.ExecuteAssembly(Assembly.GetEntryAssembly().CodeBase, new[] { "--child" });
domain.UnhandledException += (object sender, UnhandledExceptionEventArgs e) =>
{
Console.WriteLine("Detected unhandled exception: " + e.ExceptionObject.ToString());
};
while (true)
{
Console.WriteLine("*");
Thread.Sleep(1000);
}
}
}
}
}
If you do end up using the separate-process solution, however, I would recommend using Process.Exited and Process.StandardOutput and handle the errors yourself, to give your users a better experience.
You can read the Environment.StackTrace property every few calls, and if the stack trace exceeds a specific threshold that you preset, you can return from the function.
You should also try to replace some recursive functions with loops.
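A rough sketch of the depth-check idea (SomeRecursiveMethod and the 600-frame threshold are just placeholders, and the check itself isn't free, so you wouldn't run it in a very hot path; assumes using System.Diagnostics):
static bool StackIsTooDeep()
{
    // FrameCount is cheaper to obtain than the full Environment.StackTrace string.
    return new StackTrace(false).FrameCount > 600;
}

static void SomeRecursiveMethod()
{
    if (StackIsTooDeep())
    {
        // Bail out (or throw your own, catchable exception) instead of overflowing.
        return;
    }
    // ... do the real work, then recurse ...
    SomeRecursiveMethod();
}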

cast to IEnumerable results in performance hit

While analyzing a performance hit in our Windows CE based software, I stumbled upon a very fascinating mystery:
Look at these two methods:
void Method1(List<int> list)
{
foreach (var item in list)
{
if (item == 2000)
break;
}
}
void Method2(List<int> list)
{
foreach (var item in (IEnumerable<int>)list)
{
if (item == 2000)
break;
}
}
void StartTest()
{
var list = new List<int>();
for (var i = 0; i < 3000; i++)
list.Add(i);
StartMeasurement();
Method1(list);
StopMeasurement(); // 1 ms
StartMeasurement();
Method2(list);
StopMeasurement(); // 721 ms
}
void StartMeasurement()
{
_currentStartTime = Environment.TickCount;
}
void StopMeasurement()
{
var time = Environment.TickCount - _currentStartTime;
Debug.WriteLine(time);
}
Method1 takes 1 ms to run. Method2 needs more than 700 ms!
Don't try to reproduce this performance hit: it won't appear in a normal program on PC.
Unfortunately we can reproduce it in our software on our smart devices very reliably. The program runs on Compact Framework 3.5, Windows Forms, Windows CE 6.0.
The measurement uses Environment.TickCount.
While it seems clear that there must be a strange bug in our software that slows down the enumerator, I simply can't imagine what kind of error could make the List class slow down only when the iteration goes through the IEnumerable interface of List.
And one more hint: After opening and closing a modal dialog (Windows Forms), suddenly both methods take the same time: 1 ms.
You need to run tests several times, because in a single run the CPU might get suspended, etc. It is for instance possible that while running Method2 you are moving your mouse, resulting in the OS temporarily letting the mouse driver run, or a network packet arrives, or the timer says it is time to let another application run, ... In other words, there are a lot of reasons why your program can suddenly stop running for a few milliseconds.
If I run the following program (note that using DateTime for timing is not recommended):
var list = new List<int>();
for (var i = 0; i < 3000; i++)
list.Add(i);
DateTime t0 = DateTime.Now;
for(int i = 0; i < 50000; i++) {
Method1(list);
}
DateTime t1 = DateTime.Now;
for(int i = 0; i < 50000; i++) {
Method2(list);
}
DateTime t2 = DateTime.Now;
Console.WriteLine(t1-t0);
Console.WriteLine(t2-t1);
I get:
00:00:00.6522770 (method1)
00:00:01.2461630 (method2)
Swapping the order of testing results in:
00:00:01.1278890 (method2)
00:00:00.5473190 (method1)
So it's only about 100% slower. Furthermore, whichever method runs first is probably penalized a bit, since the JIT compiler first needs to translate that code to machine instructions. In other words, the first method calls you make tend to be a bit slower than the ones later in the process.
The delay is probably caused by the fact that with a List<T>, the compiler can specialize the foreach loop: it already knows the concrete enumerator type at compile time (the List<T>.Enumerator struct) and can inline its methods if necessary.
If you use an IEnumerable<T>, the compiler must use virtual calls and look up the exact methods through a vtable. This explains the time difference, especially since you don't do much in your loop. In other words, the runtime must look up which method to actually call, since the IEnumerable<T> could be anything: a LinkedList<T>, a HashSet<T>, a data structure you made yourself, ...
A general rule is: the higher up the class hierarchy the static type is, the less the compiler knows about the real instance and the less it can optimize.
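A small illustration of what foreach binds to in each case (this is general .NET behaviour, the effect is simply far more pronounced on the Compact Framework; assumes using System.Collections.Generic):
var list = new List<int> { 1, 2, 3 };

// foreach over the List<int> itself uses the value-type List<int>.Enumerator:
// no allocation, and MoveNext()/Current are direct (non-virtual) calls.
List<int>.Enumerator e1 = list.GetEnumerator();

// foreach over IEnumerable<int> obtains the enumerator through the interface:
// the struct gets boxed and every MoveNext()/Current is a virtual interface call.
IEnumerator<int> e2 = ((IEnumerable<int>)list).GetEnumerator();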
Maybe generating the template code the first time for the IEnumerable?
Many thanks for all your comments.
Our development team has been analyzing this performance bug for nearly a week now. We ran these tests several times, in different order and in different modules of the program, with and without compiler optimization. @CommuSoft: You are right that the JIT needs more time to run code for the first time. Unfortunately the result is always the same: Method2 is about 700 times slower than Method1.
Maybe it's worth mentioning again that the performance hit appears until we open and close any modal dialog in the program. It doesn't matter which modal dialog. The performance hit goes away right after the Dispose() method of the base class Form has been called. (The Dispose method hasn't been overridden by a derived class.)
On my way to analyzing the bug I stepped deeply into the framework and have now found out that List can't be the reason for the performance hit. Look at this piece of code:
class Test
{
void Test()
{
var myClass = new MyClass();
StartMeasurement();
for (int i = 0; i < 5000; i++)
{
myClass.DoSth();
}
StopMeasurement(); // ==> 46 ms
var a = (IMyInterface)myClass;
StartMeasurement();
for (int i = 0; i < 5000; i++)
{
a.DoSth();
}
StopMeasurement(); // ==> 665 ms
}
}
public interface IMyInterface
{
void DoSth();
}
public class MyClass : IMyInterface
{
public void DoSth()
{
for (int i = 0; i < 10; i++ )
{
double a = 1.2345;
a = a / 19.44;
}
}
}
Calling a method via the interface takes much more time than calling it directly.
But of course, after closing our dubious dialog, we measured between 44 and 45 ms for both loops.

MethodBase.GetCurrentMethod() Performance?

I have written a log class with a function like this:
Log(System.Reflection.MethodBase methodBase, string message)
Every time I log something, I also log the method and class name from methodBase.Name and methodBase.DeclaringType.Name.
I read the following post, Using Get CurrentMethod, and I noticed that this method is slow.
Should I use this.GetType() instead of System.Reflection.MethodBase, or should I manually pass the class/method name to my log, e.g. Log("ClassName.MethodName", "log message")? What is the best practice?
It really depends.
If you use the this.GetType() approach you will lose the method information, but you will have a big performance gain (apparently a factor of 1200, according to your link).
If you offer an interface that lets the caller supply strings (e.g. Log("ClassName.MethodName", "log message")), you will probably gain even better performance, but this makes your API less friendly (the calling developer has to supply the class name and method name).
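In other words, the three API shapes being compared look roughly like this (Logger and LogInternal are placeholder names, not from the question; assumes using System and using System.Reflection):
public class Logger
{
    // 1. Reflection-based: the most information, but the slowest
    //    (MethodBase.GetCurrentMethod() at every call site).
    public void Log(MethodBase methodBase, string message)
    {
        LogInternal(methodBase.DeclaringType.Name + "." + methodBase.Name, message);
    }

    // 2. Type-based: this.GetType() at the call site; loses the method name but is much faster.
    public void Log(Type source, string message)
    {
        LogInternal(source.Name, message);
    }

    // 3. Caller-supplied string: fastest, but the caller has to keep the name accurate.
    public void Log(string source, string message)
    {
        LogInternal(source, message);
    }

    private void LogInternal(string source, string message)
    {
        // Write to the underlying sink.
    }
}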
I know this is an old question, but I figured I'd throw out a simple solution that seems to perform well and maintains symbols
static void Main(string[] args)
{
int loopCount = 1000000; // 1,000,000 (one million) iterations
var timer = new Timer();
timer.Restart();
for (int i = 0; i < loopCount; i++)
Log(MethodBase.GetCurrentMethod(), "whee");
TimeSpan reflectionRunTime = timer.CalculateTime();
timer.Restart();
for (int i = 0; i < loopCount; i++)
Log((Action<string[]>)Main, "whee");
TimeSpan lookupRunTime = timer.CalculateTime();
Console.WriteLine("Reflection Time: {0}ms", reflectionRunTime.TotalMilliseconds);
Console.WriteLine(" Lookup Time: {0}ms", lookupRunTime.TotalMilliseconds);
Console.WriteLine();
Console.WriteLine("Press Enter to exit");
Console.ReadLine();
}
public static void Log(Delegate info, string message)
{
// do stuff
}
public static void Log(MethodBase info, string message)
{
// do stuff
}
public class Timer
{
private DateTime _startTime;
public void Restart()
{
_startTime = DateTime.Now;
}
public TimeSpan CalculateTime()
{
return DateTime.Now.Subtract(_startTime);
}
}
Running this code gives me the following results:
Reflection Time: 1692.1692ms
Lookup Time: 19.0019ms
Press Enter to exit
For one million iterations, that's not bad at all, especially compared to straight-up reflection. Because the method group is cast to a Delegate type, you maintain a symbolic link all the way into the logging. No goofy magic strings.

Is using delegates excessively a bad idea for performance? [duplicate]

This question already has answers here:
Does using delegates slow down my .NET programs?
(4 answers)
Closed 9 years ago.
Consider the following code:
if (IsDebuggingEnabled) {
instance.Log(GetDetailedDebugInfo());
}
GetDetailedDebugInfo() may be an expensive method, so we only want to call it if we're running in debug mode.
Now, the cleaner alternative is to code something like this:
instance.Log(() => GetDetailedDebugInfo());
Where .Log() is defined such as:
public void Log(Func<string> getMessage)
{
if (IsDebuggingEnabled)
{
LogInternal(getMessage.Invoke());
}
}
My concern is with performance, preliminary testing doesn't show the second case to be particularly more expensive, but I don't want to run into any surprises if load increases.
Oh, and please don't suggest conditional compilation because it doesn't apply to this case.
(P.S.: I wrote the code directly in the StackOverflow Ask a Question textarea so don't blame me if there are subtle bugs and it doesn't compile, you get the point :)
No, it shouldn't have bad performance. After all, you'll be calling it only in debug mode, where performance is not at the forefront. Actually, you could remove the lambda and just pass the method group to remove the overhead of an unnecessary intermediate anonymous method.
Note that if you want to do this in Debug builds, you can add a [Conditional("DEBUG")] attribute to the log method.
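For completeness, a sketch of that [Conditional] approach (Logger and LogInternal are placeholder names). The compiler removes conditional calls entirely, including evaluation of their arguments, from builds where DEBUG is not defined:
using System.Diagnostics;

public class Logger
{
    [Conditional("DEBUG")]
    public void DebugLog(string message)
    {
        LogInternal(message);
    }

    private void LogInternal(string message)
    {
        // Write the message somewhere.
    }
}

// Call site: in a Release build this entire statement, including the expensive
// GetDetailedDebugInfo() call, is omitted by the compiler.
// logger.DebugLog(GetDetailedDebugInfo());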
There is a difference in performance. How significant it is will depend on the rest of your code so I would recommend profiling before embarking on optimisations.
Having said that for your first example:
if (IsDebuggingEnabled)
{
instance.Log(GetDetailedDebugInfo());
}
If IsDebuggingEnabled is static readonly then the check will be jitted away as it knows it can never change. This means that the above sample will have zero performance impact if IsDebuggingEnabled is false, because after the JIT is done the code will be gone.
instance.Log(() => GetDetailedDebugInfo());
public void Log(Func<string> getMessage)
{
if (IsDebuggingEnabled)
{
LogInternal(getMessage.Invoke());
}
}
The method will be called every time instance.Log is called, which will be slower.
But before spending time on this micro-optimization, you should profile your application or run some performance tests to make sure this is actually a bottleneck in your application.
I was hoping for some documentation regarding performance in such cases, but it seems that all I got were suggestions on how to improve my code... No one seems to have read my P.S. - no points for you.
So I wrote a simple test case:
public static bool IsDebuggingEnabled { get; set; }
static void Main(string[] args)
{
for (int j = 0; j <= 10; j++)
{
Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i <= 15000; i++)
{
Log(GetDebugMessage);
if (i % 1000 == 0) IsDebuggingEnabled = !IsDebuggingEnabled;
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);
}
Console.ReadLine();
for (int j = 0; j <= 10; j++)
{
Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i <= 15000; i++)
{
if (IsDebuggingEnabled) GetDebugMessage();
if (i % 1000 == 0) IsDebuggingEnabled = !IsDebuggingEnabled;
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);
}
Console.ReadLine();
}
public static string GetDebugMessage()
{
StringBuilder sb = new StringBuilder(100);
Random rnd = new Random();
for (int i = 0; i < 100; i++)
{
sb.Append(rnd.Next(100, 150));
}
return sb.ToString();
}
public static void Log(Func<string> getMessage)
{
if (IsDebuggingEnabled)
{
getMessage();
}
}
Timings seem to be exactly the same between the two versions.
I get 145 ms in the first case, and 145 ms in the second case
Looks like I answered my own question.
You can also do this:
// no need for a lambda
instance.Log(GetDetailedDebugInfo)
// Using these instance methods on the logger
public void Log(Func<string> detailsProvider)
{
if (!DebuggingEnabled)
return;
this.LogImpl(detailsProvider());
}
public void Log(string message)
{
if (!DebuggingEnabled)
return;
this.LogImpl(message);
}
protected virtual void LogImpl(string message)
{
....
}
Standard answers:
If you gotta do it, you gotta do it.
Loop it 10^9 times, look at a stopwatch, & that tells you how many nanoseconds it takes.
If your program is big, chances are you have bigger problems elsewhere.
Call the getMessage delegate directly instead of calling Invoke on it.
if(IsDebuggingEnabled)
{
LogInternal(getMessage());
}
You should also add a null check on getMessage.
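Putting both suggestions together, the method could look like this:
public void Log(Func<string> getMessage)
{
    if (getMessage == null)
        throw new ArgumentNullException("getMessage");
    if (IsDebuggingEnabled)
    {
        LogInternal(getMessage()); // direct invocation instead of .Invoke()
    }
}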
I believe delegates create a new thread, so you may be right about it increasing performance.
Why not set up a test run like Dav suggested, and keep a close eye on the number of threads spawned by your app, you can use Process Explorer for that.
Hang on! I've been corrected! Delegates only use threads when you use 'BeginInvoke'... so my above comments don't apply to the way you're using them.

Performance of calling delegates vs methods

Following this question - Pass Method as Parameter using C# and some of my personal experience I'd like to know a little more about the performance of calling a delegate vs just calling a method in C#.
Although delegates are extremely convenient, I had an app that did lots of callbacks via delegates and when we rewrote this to use callback interfaces we got an order of magnitude speed improvement. This was with .NET 2.0 so I'm not sure how things have changed with 3 and 4.
How are calls to delegates handled internally in the compiler/CLR and how does this affect performance of method calls?
EDIT - To clarify what I mean by delegates vs callback interfaces.
For asynchronous calls my class could provide an OnComplete event and associated delegate which the caller could subscribe to.
Alternatively I could create an ICallback interface with an OnComplete method that the caller implements and then registers itself with the class that will then call that method on completion (i.e. the way Java handles these things).
I haven't seen that effect - I've certainly never encountered it being a bottleneck.
Here's a very rough-and-ready benchmark which shows (on my box anyway) delegates actually being faster than interfaces:
using System;
using System.Diagnostics;
interface IFoo
{
int Foo(int x);
}
class Program : IFoo
{
const int Iterations = 1000000000;
public int Foo(int x)
{
return x * 3;
}
static void Main(string[] args)
{
int x = 3;
IFoo ifoo = new Program();
Func<int, int> del = ifoo.Foo;
// Make sure everything's JITted:
ifoo.Foo(3);
del(3);
Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i < Iterations; i++)
{
x = ifoo.Foo(x);
}
sw.Stop();
Console.WriteLine("Interface: {0}", sw.ElapsedMilliseconds);
x = 3;
sw = Stopwatch.StartNew();
for (int i = 0; i < Iterations; i++)
{
x = del(x);
}
sw.Stop();
Console.WriteLine("Delegate: {0}", sw.ElapsedMilliseconds);
}
}
Results (.NET 3.5; .NET 4.0b2 is about the same):
Interface: 5068
Delegate: 4404
Now I don't have particular faith that that means delegates are really faster than interfaces... but it makes me fairly convinced that they're not an order of magnitude slower. Additionally, this is doing almost nothing within the delegate/interface method. Obviously the invocation cost is going to make less and less difference as you do more and more work per call.
One thing to be careful of is that you're not creating a new delegate several times where you'd only use a single interface instance. This could cause an issue as it would provoke garbage collection etc. If you're using an instance method as a delegate within a loop, you will find it more efficient to declare the delegate variable outside the loop, create a single delegate instance and reuse it. For example:
Func<int, int> del = myInstance.MyMethod;
for (int i = 0; i < 100000; i++)
{
MethodTakingFunc(del);
}
is more efficient than:
for (int i = 0; i < 100000; i++)
{
MethodTakingFunc(myInstance.MyMethod);
}
Could this have been the problem you were seeing?
I find it completely implausible that a delegate is substantially faster or slower than a virtual method. If anything the delegate should be negligibly faster. At a lower level, delegates are usually implemented something like (using C-style notation, but please forgive any minor syntax errors as this is just an illustration):
struct Delegate {
void* contextPointer; // What class instance does this reference?
void* functionPointer; // What method does this reference?
}
Calling a delegate works something like:
struct Delegate myDelegate = somethingThatReturnsDelegate();
// Call the delegate in de-sugared C-style notation.
ReturnType returnValue =
(*((FunctionType) *myDelegate.functionPointer))(myDelegate.contextPointer);
A class, translated to C, would be something like:
struct SomeClass {
void** vtable; // Array of pointers to functions.
SomeType someMember; // Member variables.
}
To call a virtual function, you would do the following:
struct SomeClass *myClass = someFunctionThatReturnsMyClassPointer();
// Call the virtual function residing in the second slot of the vtable.
void* funcPtr = (myClass -> vtable)[1];
ReturnType returnValue = (*((FunctionType) funcPtr))(myClass);
They're basically the same, except that when using virtual functions you go through an extra layer of indirection to get the function pointer. However, this extra indirection layer is often free because modern CPU branch predictors will guess the address of the function pointer and speculatively execute its target in parallel with looking up the address of the function. I've found (albeit in D, not C#) that virtual function calls in a tight loop are not any slower than non-inlined direct calls, provided that for any given run of the loop they're always resolving to the same real function.
Since CLR v 2, the cost of delegate invocation is very close to that of virtual method invocation, which is used for interface methods.
See Joel Pobar's blog.
I did some tests (in .Net 3.5... later I will check at home using .Net 4).
The fact is:
Getting an object as an interface and then executing the method is faster than getting a delegate from a method and then calling the delegate.
Considering the variable is already of the right type (interface or delegate) and you simply invoke it, the delegate wins.
For some reason, getting a delegate over an interface method (maybe over any virtual method) is MUCH slower.
And, considering there are cases where we simply can't pre-store the delegate (like in dispatches, for example), that may justify why interfaces are faster.
Here are the results:
To get real results, compile this in Release mode and run it outside Visual Studio.
Checking direct calls twice
00:00:00.5834988
00:00:00.5997071
Checking interface calls, getting the interface at every call
00:00:05.8998212
Checking interface calls, getting the interface once
00:00:05.3163224
Checking Action (delegate) calls, getting the action at every call
00:00:17.1807980
Checking Action (delegate) calls, getting the Action once
00:00:05.3163224
Checking Action (delegate) over an interface method, getting both at
every call
00:03:50.7326056
Checking Action (delegate) over an interface method, getting the
interface once, the delegate at every call
00:03:48.9141438
Checking Action (delegate) over an interface method, getting both once
00:00:04.0036530
As you can see, the direct calls are really fast.
Storing the interface or delegate before, and then only calling it is really fast.
But having to get a delegate is slower than having to get an interface.
Having to get a delegate over an interface method (or virtual method, not sure) is really slow (compare the 5 seconds of getting an object as an interface to the almost 4 minutes of doing the same to get the action).
The code that generated those results is here:
using System;
namespace ActionVersusInterface
{
public interface IRunnable
{
void Run();
}
public sealed class Runnable:
IRunnable
{
public void Run()
{
}
}
class Program
{
private const int COUNT = 1700000000;
static void Main(string[] args)
{
var r = new Runnable();
Console.WriteLine("To get real results, compile this in Release mode and");
Console.WriteLine("run it outside Visual Studio.");
Console.WriteLine();
Console.WriteLine("Checking direct calls twice");
{
DateTime begin = DateTime.Now;
for (int i = 0; i < COUNT; i++)
{
r.Run();
}
DateTime end = DateTime.Now;
Console.WriteLine(end - begin);
}
{
DateTime begin = DateTime.Now;
for (int i = 0; i < COUNT; i++)
{
r.Run();
}
DateTime end = DateTime.Now;
Console.WriteLine(end - begin);
}
Console.WriteLine();
Console.WriteLine("Checking interface calls, getting the interface at every call");
{
DateTime begin = DateTime.Now;
for (int i = 0; i < COUNT; i++)
{
IRunnable interf = r;
interf.Run();
}
DateTime end = DateTime.Now;
Console.WriteLine(end - begin);
}
Console.WriteLine();
Console.WriteLine("Checking interface calls, getting the interface once");
{
DateTime begin = DateTime.Now;
IRunnable interf = r;
for (int i = 0; i < COUNT; i++)
{
interf.Run();
}
DateTime end = DateTime.Now;
Console.WriteLine(end - begin);
}
Console.WriteLine();
Console.WriteLine("Checking Action (delegate) calls, getting the action at every call");
{
DateTime begin = DateTime.Now;
for (int i = 0; i < COUNT; i++)
{
Action a = r.Run;
a();
}
DateTime end = DateTime.Now;
Console.WriteLine(end - begin);
}
Console.WriteLine();
Console.WriteLine("Checking Action (delegate) calls, getting the Action once");
{
DateTime begin = DateTime.Now;
Action a = r.Run;
for (int i = 0; i < COUNT; i++)
{
a();
}
DateTime end = DateTime.Now;
Console.WriteLine(end - begin);
}
Console.WriteLine();
Console.WriteLine("Checking Action (delegate) over an interface method, getting both at every call");
{
DateTime begin = DateTime.Now;
for (int i = 0; i < COUNT; i++)
{
IRunnable interf = r;
Action a = interf.Run;
a();
}
DateTime end = DateTime.Now;
Console.WriteLine(end - begin);
}
Console.WriteLine();
Console.WriteLine("Checking Action (delegate) over an interface method, getting the interface once, the delegate at every call");
{
DateTime begin = DateTime.Now;
IRunnable interf = r;
for (int i = 0; i < COUNT; i++)
{
Action a = interf.Run;
a();
}
DateTime end = DateTime.Now;
Console.WriteLine(end - begin);
}
Console.WriteLine();
Console.WriteLine("Checking Action (delegate) over an interface method, getting both once");
{
DateTime begin = DateTime.Now;
IRunnable interf = r;
Action a = interf.Run;
for (int i = 0; i < COUNT; i++)
{
a();
}
DateTime end = DateTime.Now;
Console.WriteLine(end - begin);
}
Console.ReadLine();
}
}
}
