Pointing a function to another - C#

Suppose I have two functions:
void DoesNothing(){}
void OnlyCalledOnce(){
//lines of code
}
Is it possible to call OnlyCalledOnce and have it actually run DoesNothing? I imagine something like this:
void DoesNothing(){}
void OnlyCalledOnce(){
//lines of code
OnlyCalledOnce = DoesNothing;
}
and after that last line, whenever I called OnlyCalledOnce it would run DoesNothing.
Is it possible?

You can simply return early in OnlyCalledOnce like this (assuming your DoesNothing example literally does nothing, so it isn't needed at all):
bool initialized = false;
void OnlyCalledOnce()
{
if (initialized) return;
// firsttimecode
initialized = true;
}
The initialized variable will be true after the first run, so every later call returns immediately.

Did you try using a delegate?
using System;
class Program
{
private static Action Call = OnlyCalledOnce;
public static void Main(string[] args)
{
Call();
Call();
Call();
Console.ReadKey();
}
static void DoesNothing()
{
Console.WriteLine("DoesNothing");
}
static void OnlyCalledOnce()
{
Console.WriteLine("OnlyCalledOnce");
Call = DoesNothing;
}
}

Another way you could solve this is to maintain a list of strings that represent the methods that have been called. The strings don't even have to be the method name, they just need to be unique to each method.
Then you can have a helper method called ShouldIRun that takes in the function's unique string and checks to see if it exists in the list. If it does, then the method returns false, and if it doesn't, then the method adds the string to the list and returns true.
The nice thing here is that you don't have to maintain a bunch of state variables, you can use this with as many methods as you want, and the methods themselves don't need any complicated logic - they just ask the helper if they should run or not!
using System;
using System.Collections.Generic;
public class Program
{
private static List<string> CalledMethods = new List<string>();
static bool ShouldIRun(string methodName)
{
if (CalledMethods.Contains(methodName)) return false;
CalledMethods.Add(methodName);
return true;
}
// Now this method can use the helper above to return early (do nothing) if it has already run
static void OnlyCalledOnce()
{
if (!ShouldIRun("OnlyCalledOnce")) return;
Console.WriteLine("You should only see this once.");
}
// Let's test it out
private static void Main()
{
OnlyCalledOnce();
OnlyCalledOnce();
OnlyCalledOnce();
Console.WriteLine("\nDone! Press any key to exit...");
Console.ReadKey();
}
}
Output:
You should only see this once.

Done! Press any key to exit...

As already stated, you can use this:
private bool isExecuted = false;
void DoesNothing(){}
void OnlyCalledOnce(){
if (!isExecuted)
{
isExecuted = true;
//lines of code
DoesNothing();
}
}
If multiple threads are involved, you can wrap the check in a lock.
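For example, a minimal sketch (the syncRoot field name is just illustrative, not from the original answer):
private readonly object syncRoot = new object();
private bool isExecuted = false;
void OnlyCalledOnce()
{
    lock (syncRoot)
    {
        // Only the first caller gets past this check; later calls return immediately
        if (isExecuted) return;
        isExecuted = true;
        //lines of code
        DoesNothing();
    }
}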

What's your problem with this?
void DoesNothing()
{
}
void OnlyCalledOnce()
{
DoesNothing();
}
It will run DoesNothing() once you run OnlyCalledOnce()


Call multiple methods in a defined order

Picture a case like this:
I have a controller action (or service method) where I need to call three methods in a specific consecutive order; each method has a single responsibility.
public return_type MyMethod(_params_) {
// .. some code
Method_1 (...);
Method_2 (...);
Method_3 (...);
// ... some more code
}
A developer can mistakenly call Method_2 before Method_1; nothing forces the correct order, and nothing throws an exception when the order isn't followed.
Now we can call Method_2 inside Method_1, and Method_3 inside Method_2, but that doesn't seem right when each method handles a completely different responsibility.
Is there a design pattern for this situation? Or any "clean" way to handle this?
This is exactly what the Facade pattern does.
Extract the three methods to another class and make them private. Expose a single method MyMethod that calls the other methods in the desired order. Clients should use Facade.MyMethod.
More details: https://en.m.wikipedia.org/wiki/Facade_pattern
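A rough sketch of that idea (the class name SomethingFacade and the method bodies are placeholders, not from the original post):
public class SomethingFacade
{
    // The individual steps are private, so callers cannot invoke them out of order
    private void Method_1() { /* responsibility 1 */ }
    private void Method_2() { /* responsibility 2 */ }
    private void Method_3() { /* responsibility 3 */ }

    // The only public entry point enforces the order
    public void MyMethod()
    {
        Method_1();
        Method_2();
        Method_3();
    }
}
The controller action would then call new SomethingFacade().MyMethod() instead of calling the three methods individually.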
I suggest keeping control of the execution order yourself and only letting callers specify which methods should be executed.
public interface IMethodsExecutor
{
void Execute();
void ShouldRunMethod1();
void ShouldRunMethod2();
void ShouldRunMethod3();
}
public class MethodsExecutor: IMethodsExecutor
{
private bool _runMethod1;
private bool _runMethod2;
private bool _runMethod3;
public MethodsExecutor()
{
_runMethod1 = false;
_runMethod2 = false;
_runMethod3 = false;
}
public void ShouldRunMethod1()
{
_runMethod1 = true;
}
public void ShouldRunMethod2()
{
_runMethod2 = true;
}
public void ShouldRunMethod3()
{
_runMethod3 = true;
}
private void Method1()
{
}
private void Method2()
{
}
private void Method3()
{
}
public void Execute()
{
if (_runMethod1)
{
Method1();
}
if (_runMethod2)
{
Method2();
}
if (_runMethod3)
{
Method3();
}
}
}
So that the usage will be:
IMethodsExecutor methodsExecutor = new MethodsExecutor();
methodsExecutor.ShouldRunMethod1();
methodsExecutor.ShouldRunMethod3();
methodsExecutor.Execute();

Does conditional compilation optimise away methods that generate input arguments?

In C#, we can perform conditional compilation using #if / #endif directives or the [Conditional] attribute. For example, the following code will print something only in a debug build:
public static void Main(string[] args)
{
CheckResult();
}
[Conditional("DEBUG")]
private static void CheckResult()
{
System.Console.WriteLine("everything is fine");
}
What happens, though, if this CheckResult() method accepts arguments, and we use it like so?
public static void Main(string[] args)
{
CheckResult(CalculateSomethingExpensive() == 100);
}
private static int CalculateSomethingExpensive()
{
result = //some sort of expensive operation here
return result;
}
[Conditional("DEBUG")]
private static void CheckResult(bool resultIsOK)
{
System.Console.WriteLine(resultIsOK ? "OK" : "not OK");
}
In this case, what compiler rules decide whether the expensive method is executed or optimised away? For example, is it guaranteed to be removed if it makes no changes to the state of any object?
I understand that the uncertainty can be removed by explicitly using #if but when one has a large code base with hundreds of Debug.Assert() statements, this can get unsightly very quickly.
So, with a little modification, here is what is compiled (under Release):
class Program
{
public static void Main(string[] args)
{
CheckResult(CalculateSomethingExpensive() == 100);
}
private static int CalculateSomethingExpensive()
{
var result = new Random().Next(100);//some sort of expensive operation here
return result;
}
[Conditional("DEBUG")]
private static void CheckResult(bool resultIsOK)
{
System.Console.WriteLine(resultIsOK ? "OK" : "not OK");
}
}
This is pretty much the same as your example, just modified to compile. Compiling it and then running it through a decompiler results in this:
internal class Program
{
public Program()
{
}
private static int CalculateSomethingExpensive()
{
return (new Random()).Next(100);
}
[Conditional("DEBUG")]
private static void CheckResult(bool resultIsOK)
{
Console.WriteLine((resultIsOK ? "OK" : "not OK"));
}
public static void Main(string[] args)
{
}
}
You can see that the only difference is that the CheckResult call has been removed from Main. The entire call, including the evaluation of its arguments, is removed, so even if CheckResult or CalculateSomethingExpensive had side effects, those would be removed as well.
The interesting thing is that the methods are still there in the compiled output; only the calls are removed. So don't use [Conditional("DEBUG")] to hide secrets used during debugging.
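Conversely, if you want the expensive computation to run even in Release builds and only the check itself to disappear, one option (a sketch, not from the original answer) is to evaluate the argument into a local first; only the call to the [Conditional] method is removed:
public static void Main(string[] args)
{
    // This statement survives in Release, because it is not part of the conditional call
    bool resultIsOK = CalculateSomethingExpensive() == 100;

    // Only this call is stripped when DEBUG is not defined
    CheckResult(resultIsOK);
}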

Force code to execute in order?

I am seeing a strange problem in my C# code. I have something like this:
public static class ErrorHandler {
public static int ErrorIgnoreCount = 0;
public static void IncrementIgnoreCount() {
ErrorIgnoreCount++;
}
public static void DecrementIgnoreCount() {
ErrorIgnoreCount--;
}
public static void DoHandleError() {
// actual error handling code here
}
public static void HandleError() {
if (ErrorIgnoreCount == 0) {
DoHandleError();
}
}
}
public class SomeClass {
public void DoSomething() {
ErrorHandler.IncrementIgnoreCount();
CodeThatIsSupposedToGenerateErrors(); // some method; not shown
ErrorHandler.DecrementIgnoreCount();
}
}
The problem is that the compiler often decides that the order of the three calls in the DoSomething() method is not important. For example, the decrement may happen before the increment. The result is that when the code that is supposed to generate errors runs, the error handling code fires, which I don't want.
How can I prevent that?
Add tracing or logging to your IncrementIgnoreCount, DecrementIgnoreCount and HandleError methods.
That will help you see the real call order.
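For example, a minimal sketch of that suggestion inside ErrorHandler (the message text is just illustrative):
public static void IncrementIgnoreCount() {
    // Trace the call together with the current thread, so the real ordering is visible
    System.Diagnostics.Trace.WriteLine(
        "IncrementIgnoreCount, count=" + ErrorIgnoreCount +
        ", thread=" + System.Threading.Thread.CurrentThread.ManagedThreadId);
    ErrorIgnoreCount++;
}
The same kind of line can be added to DecrementIgnoreCount and HandleError.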

How can I stop a method's execution using PostSharp?

Currently I am trying to develop a solution that checks whether a method has been executed and whether some time has passed since its last execution. Given that it has been executed and that time has passed, I would like to skip from the OnEntry method to OnExit without executing any code from the method body itself.
Sort of:
public class CacheThisMethod : OnMethodBoundaryAspect
{
public override void OnEntry(MethodExecutionArgs args)
{
if (isCached(args.Method.Name))
{
args.MethodExecutionTag = getReturnValue(args.Method.Name);
//jump to OnExit
}
else
{
//continue
}
}
public override void OnExit(MethodExecutionArgs args)
{
args.Method.ReturnValue = args.MethodExecutionTag;
}
}
How can I achieve this? Thanks.
The following modifications to your code show how to get what you want.
public class CacheThisMethod : OnMethodBoundaryAspect
{
public override void OnEntry(MethodExecutionArgs args)
{
if (isCached(args.Method.Name))
{
args.MethodExecutionTag = getReturnValue(args.Method.Name);
OnExit(args);
}
else
{
//continue
}
}
public override void OnExit(MethodExecutionArgs args)
{
//args.Method.ReturnValue = args.MethodExecutionTag;
args.ReturnValue = args.MethodExecutionTag;
args.FlowBehavior = FlowBehavior.Return;
}
}
However, if you are keying the cache per method name, you can use a simple property for the cached return value, since a separate instance of the aspect is created for each method you attach the advice to.
If there is no reason to jump to OnExit, just set the FlowBehavior and the return value directly in OnEntry, at the point where it currently calls OnExit.
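In other words, something like this sketch (still using the hypothetical isCached and getReturnValue helpers from the question):
public override void OnEntry(MethodExecutionArgs args)
{
    if (isCached(args.Method.Name))
    {
        // Skip the intercepted method body entirely and return the cached value
        args.ReturnValue = getReturnValue(args.Method.Name);
        args.FlowBehavior = FlowBehavior.Return;
    }
}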

read return value from delegate

I am not sure if I understood the usage of delegates correctly, but I would like to read the delegate's return value in the publisher class. The example is below with a description.
//Publisher class
public class ValidateAbuse
{
public delegate List<String> GetAbuseList();
public static GetAbuseList Callback;
public void Ip(string ip)
{
// I would like to read GetAbuseList value (List<String>) here. How to do that?
}
}
//Subscriber class
class Server
{
public static void Start()
{
ValidateAbuse.Callback = GetIpAbuseList;
ValidateAbuse.Ip(MyIp);
}
private static List<string> GetIpAbuseList()
{
//return List<String> to ValidateAbuse class and use return value in public void Ip(string ip) method
}
}
Inside ValidateAbuse, the Ip method can then invoke the delegate and read its return value:
public void Ip(string ip)
{
if (Callback != null)
{
List<String> valueReturnedByCallback = Callback();
}
}
Here's a version that does not use static for ValidateAbuse and that uses the built-in Func<T> delegate.
public class ValidateAbuse
{
private Func<List<string>> callback;
public ValidateAbuse(Func<List<string>> callback)
{
this.callback = callback;
}
public void Ip(string ip)
{
var result = callback();
}
}
public class Server
{
public static void Start()
{
var validateAbuse = new ValidateAbuse(GetIpAbuseList);
validateAbuse.Ip(MyIp);
}
private static List<string> GetIpAbuseList()
{
//return List<string> to ValidateAbuse class and use return value in public void Ip(string ip) method
return new List<string>(); // placeholder return so the sample compiles
}
}
I recommend you avoid static since that gives you a global state, which could later give you coupling problems and also makes it hard for you to unit test.
The other answers given so far have a guard clause checking Callback for null. Unless a null Callback is expected behaviour, I would avoid this. It's better to crash early than to get hard-to-debug errors later on.
I would also try to make the Server non-static.
It should be as simple as:
// Ip in your code sample is missing static
public static void Ip(string ip)
{
List<string> abuseList;
if (Callback != null)
abuseList = Callback();
}
However, you can avoid declaring a delegate type altogether by using a Func:
public static Func<List<string>> Callback;
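The subscriber side does not change; a method group can still be assigned directly to the Func (a small sketch, reusing the names from the question):
// In Server.Start()
ValidateAbuse.Callback = GetIpAbuseList;
ValidateAbuse.Ip(MyIp);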
Try this (read more here: http://msdn.microsoft.com/en-us/library/bb534960%28v=vs.110%29.aspx):
internal delegate int PowerOfTwo();
void Main(){
PowerOfTwo ch = new PowerOfTwo(CheckPower);
Console.WriteLine(ch());
}
int CheckPower(){
return 2*2;
}
Torbjörn Kalin's answer is good, but only if you have a single delegate whose return value you want. If you want to retrieve the return values of more than one delegate, this is how you do it:
//Publisher class
public class ValidateAbuse
{
public delegate List<String> GetAbuseList();
public static GetAbuseList Callback;
public void Ip(string ip)
{
foreach (GetAbuseList gal in Callback.GetInvocationList())
{
List<string> result = gal.Invoke(/*any arguments to the parameters go here*/);
//Do any processing on the result here
}
}
}
//Subscriber class
class Server
{
public static void Start()
{
//Use += to add to the delegate list
ValidateAbuse.Callback += GetIpAbuseList;
ValidateAbuse.Ip(MyIp);
}
private static List<string> GetIpAbuseList()
{
//return code goes here
return new List<String>();
}
}
This will invoke each delegate one after the other, and you can process the output of each delegate separately from each other.
The key here is using the += operator (not the = operator) and looping through the list that is retrieved by calling GetInvocationList() and then calling Invoke() on each delegate retrieved.
I figured this out after reading this page:
https://www.safaribooksonline.com/library/view/c-cookbook/0596003390/ch07s02.html
(although it was partially because I already had an idea of what to do, and I didn't start a free trial to read the rest)
Hope this helps!
