I have a client/server architecture rolled into the same executable project. It also supports user-code access via script. As such, in my code there are a lot of checks on critical methods to ensure the context in which they are being called is correct. For example, I have a whole lot of the following:
public void Spawn(Actor2D actor)
{
if (!Game.Instance.SERVER)
{
Util.Log(LogManager.LogLevel.Error, "Spawning of actors is only allowed on the server.");
return;
}
//Do stuff.
}
I would like to cut down on this duplication of code. Does there exist something in C# that would give me the functionality to do something like:
public void Spawn(Actor2D actor)
{
AssertServer("Spawning of actors is only allowed on the server.");
//Do stuff.
}
Even a generic message like "[MethodNameOfPreviousCallOnStack] can only be called on the server." would be acceptable. But it would also have to return from the caller (in this case Spawn()), so as to function like an abort. Similar to an assert, but instead of throwing an exception it just returns. Thanks!
You should consider going up another level of abstraction and adding metadata to the method to describe these constraints:
[ServerOnly]
public void Spawn(...)
{
...
}
Then use an AOP library like Dynamic Proxy to intercept calls to the method. If the method has the [ServerOnly] attribute, you can check the context you are running in and return early if it is incorrect.
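For illustration, here is a minimal sketch of how that could look with Castle DynamicProxy. It assumes the guarded methods are virtual (class proxies can only intercept virtual members) and reuses Game.Instance.SERVER and Util.Log from the question; the ActorManager name is hypothetical.
using Castle.DynamicProxy;
[AttributeUsage(AttributeTargets.Method)]
public sealed class ServerOnlyAttribute : Attribute { }
public class ServerOnlyInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        // Check whether the intercepted method is marked [ServerOnly].
        bool serverOnly = invocation.Method.IsDefined(typeof(ServerOnlyAttribute), true);
        if (serverOnly && !Game.Instance.SERVER)
        {
            Util.Log(LogManager.LogLevel.Error,
                invocation.Method.Name + " can only be called on the server.");
            return; // swallow the call, mimicking the early return in the question
        }
        invocation.Proceed(); // run the real method
    }
}
// Hypothetical wiring: callers receive a proxied instance instead of a plain one.
// var manager = new ProxyGenerator().CreateClassProxy<ActorManager>(new ServerOnlyInterceptor());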
This approach will get you pretty close:
public void RunOnServerOnly(Action execFunc, string errorMessage)
{
if (!Game.Instance.SERVER)
{
Util.Log(LogManager.LogLevel.Error, errorMessage);
}
else
{
execFunc();
}
}
Then you call it:
RunOnServerOnly(() => Spawn(newActor), "Spawning of actors is only allowed on the server.");
Explanation:
To get rid of the duplicated code, you have one function that performs the check and logging. You give it an action (a generic delegate) to perform if the check passes. It runs the check, logs an error if it isn't on the server, and otherwise just runs the function.
I would tend to agree that exceptions are probably the better route to go, but the above meets your requirements.
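If you would rather keep the check inside Spawn itself, a bool-returning helper is about as close as C# gets to the AssertServer from the question, since a callee cannot force its caller to return (a sketch only, reusing the question's Game and Util types):
public static bool AssertServer(string errorMessage)
{
    if (!Game.Instance.SERVER)
    {
        Util.Log(LogManager.LogLevel.Error, errorMessage);
        return false; // the caller decides to bail out
    }
    return true;
}
public void Spawn(Actor2D actor)
{
    if (!AssertServer("Spawning of actors is only allowed on the server.")) return;
    //Do stuff.
}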
If possible add a ServerContext object that describes the current server instance to the method argument list.
public void Spawn(ServerContext context, Actor2D actor)
{
// do stuff
}
This makes it difficult for the caller to execute this method without a valid context. In that way you are enforcing the rule at compile time.
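A rough sketch of the idea (the factory method name is an assumption): if only server-side code can obtain a ServerContext, callers on the client simply cannot supply one.
public sealed class ServerContext
{
    // Private constructor: only the factory below can create a context.
    private ServerContext() { }

    // Hypothetical factory: only server start-up code is expected to call this.
    public static ServerContext CreateForServer()
    {
        if (!Game.Instance.SERVER)
            throw new InvalidOperationException("A ServerContext can only be created on the server.");
        return new ServerContext();
    }
}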
In the context of callbacks, I am facing a strange situation now that I know that myDelegate.Target contains a reference to the instance of the class whose method it wraps. (I searched SO; my apologies if I missed a thread that already answers this.)
For example
public delegate void TravelPlanDelegate();
public class Traveller
{
//Papa is planning a tour, along with Mama
public void Planner()
{
//Asking me (delegate) to hold a letter about PlanA's detail
TravelPlanDelegate myPlan = PlanA;
//Sending me to TravelAgency office with letter
new TravelAgency().ExecuteTravelPlan(myPlan);
}
public void PlanA()
{
//Papa's open plan with Mama
Console.WriteLine("First Berline, then New Yark and finally Lahore");
}
public void PlanB()
{
//Papa's secret plan
Console.WriteLine("First Dubai, and then Lahore");
}
}
public class TravelAgency
{
public void ExecuteTravelPlan(TravelPlanDelegate tp)
{
Traveller traveller = (Traveller)tp.Target;
//Here it should execute plan
//tp.Target - a reference to the Traveller instance, which can lead the travel
//agency to Papa's secret plan (and expose it to Mama)
}
}
In this example, TravelAgency can get information from the delegate about Papa's secret plan too. Did I get the delegate concept right, or am I missing something?
Your assumption is correct. Unfortunately, however you try to "encapsulate" your object, there must always be a reference to it somewhere; otherwise it would be impossible to invoke its instance method.
As a countermeasure of sorts, you can proxy the method invocation through a lambda expression:
TravelPlanDelegate myPlan = () => PlanA();
This makes it less likely that rogue code will attempt to carry out some ill-intended operation on your object, since knowing what your code looks like in advance will not help it accomplish a thing.
Note that this does not guarantee anything, since the produced delegate still has a Target property pointing to a compiler-generated object which holds a reference to yours.
Attackers who are smart enough can still apply reflection to the generated class and obtain a reference to your object.
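For illustration, a rough sketch of what that reflection might look like, assuming the lambda was compiled into a closure class whose Target holds a Traveller field (the field walk here is hypothetical):
using System.Reflection;
object target = tp.Target; // the compiler-generated closure object
foreach (FieldInfo field in target.GetType().GetFields(
    BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
{
    if (field.FieldType == typeof(Traveller))
    {
        // The "hidden" Traveller is reachable after all.
        Traveller traveller = (Traveller)field.GetValue(target);
        traveller.PlanB();
    }
}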
Conclusion:
Only consume code you trust - it is not much of a problem in today's Open Source driven world.
I have created a class that handles encrypted database traffic. I have used external Event definitions successfully, allowing the calling process to properly handle errors that - while they occurred within the database module - actually originated from the calling procedure.
Using the external Error handler I would do the following:
public event EventHandler ErrorStatusChanged;
...and later, if/when such occurred, I would handle it like this:
if (ErrorStatusChanged != null)
{
ErrorStatusChanged(this, EventArgs.Empty);
}
...and that seems to work just fine. However, now I want to extend further using a callback function, but I have only a few clues how I might approach this situation. (I feel sure that it is possible to accomplish this, but I'm fairly lost/confused as to actual implementation...)
Something like:
public delegate void Update_System_Status (bool dbConnected, string textStatus);
...and then later (I'm sure I've got this wrong, the compiler flags it even before compile time):
if (Update_System_Status != null)
{
Update_System_Status(bConnFlag, sConnTextStatus);
}
I'd like to build a couple of callbacks - one that allows the datahandler class to inform the calling process that it has successfully connected (or not), and another to handle updating a progress bar during the longer mass-update processes. And after numerous searches using [callback] and/or [delegate] as keywords, I'm getting nowhere quickly. I am, however, getting really confused!
I had envisioned that I would provide some sort of interface, very similar to the EventHandler (above), and be able to determine later on, when it is needed, whether the calling procedure provided the proper function, and call it if/when possible. I know that not all programmers will want to provide a main-form update callback function to this database handler, so I figure I'll need to somehow "know" when one has been provided, and when not.
I have unsuccessfully employed the [delegate] keyword, and have no idea how to use the [interface] directive at all - all of the examples I keep running into illustrate the functions being internal to the class, which is exactly not what I am trying to achieve. I am trying to provide a framework that will allow an external process to provide a function that, when something happens inside the class/object that would be good to update the external (calling) system, could do so by calling the provided function (from outside).
Thanks (in advance) for your assistance - I'm lost!
I think your problem is that you are only declaring a delegate type; you are not declaring a property of that type.
Try something like this:
public delegate void MyDelegateType(bool dbConnected, string textStatus);
public MyDelegateType Update_System_Status { get; set; }
then you should be able to use it like you want:
if (Update_System_Status != null)
{
Update_System_Status(bConnFlag, sConnTextStatus);
}
But if you are going to use it like this, you might as well make it an event:
public event MyDelegateType MyEvent;
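For completeness, a hypothetical call site (the DataHandler and statusLabel names are assumptions) showing both the property and the event variants being wired up from the caller:
var handler = new DataHandler();
// Property variant: assign a callback directly.
handler.Update_System_Status = (connected, status) => statusLabel.Text = status;
// Event variant: subscribe with +=; the class raises it when the status changes.
handler.MyEvent += (connected, status) => statusLabel.Text = status;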
You declared the delegate correctly. What you are missing is a member of the class that will use that delegate declaration. The code you tried is valid for events (which are multicast delegates), but you omitted the event definition itself, which is why your code doesn't work. Callbacks are usually passed via method calls (but you can just as well pass one via a property):
public void DoSomething(Update_System_Status callback)
{
// call callback
callback?.Invoke(false, "some status");
}
or
public Update_System_Status Callback {get; set;}
// and then somewhere (using pre C# 6.0 syntax)
var callback = Callback;
if(callback != null)
callback(true, "some status");
If you don't need named parameters (in your case they have different types, so it's impossible to mix up which is which), then you can simply use Action<> without needing to declare a delegate first:
public void DoSomething(Action<bool, string> callback) { ... }
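A hypothetical call site for that version (the DataHandler name is an assumption):
var db = new DataHandler();
db.DoSomething((connected, status) =>
    Console.WriteLine("Connected: {0}, status: {1}", connected, status));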
I have a class which manipulates a resource which is shared by multiple threads. The threads pass around control of a mutex in order to manage access to the resource. I would like to manage control of the mutex using the RAII idiom via a disposable object.
There is an additional caveat. When the class begins an operation to manipulate the resource, it is possible that the operation is no longer necessary, or may no longer be performed. This is the result of user action which occurs after the operation has been scheduled to be carried out -- no way around it unfortunately. There are many different operations which might possibly be carried out, and all of them must acquire the mutex in this way. I'm imagining it will look something like this, for example:
public void DoAnOperation()
{
using(RAIIMutex raii = new RAIIMutex(TheMutex))
{
if(ShouldContinueOperation())
{
// Do operation-specific stuff
}
}
}
However, because I'm lazy, I'd like to not have to repeat that if(ShouldContinueOperation()) statement for each operation's function. Is there a way to do this while keeping the // Do operation-specific stuff in the body of the using statement? That seems like the most readable way to write it. For example, I don't want something like this (I'd prefer repeating the if statement if something like this is the only alternative):
public void DoAnOperation()
{
var blaarm = new ObjectThatDoesStuffWithRAIIMutex(TheMutex, ActuallyDoAnOperation);
blaarm.DoAnOperationWithTheMutex();
}
private void ActuallyDoAnOperation()
{
// Do operation-specific stuff
}
It is not entirely clear what ShouldContinueOperation depends on, but assuming that it can be a static function (based on the example code provided in the question), you might like something along the lines of:
public static void TryOperation(Mutex mutex, Action action)
{
using (RAIIMutex raii = new RAIIMutex(mutex))
{
if (ShouldContinueOperation())
action();
}
}
Which you can then use like:
RAIIMutex.TryOperation(TheMutex, () =>
{
// Do operation-specific stuff
});
This combines the using and the ShouldContinueOperation check in one line for the caller. I'm not quite sure about the readability of the lambda syntax used, but that's a matter of personal preference.
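For reference, a minimal sketch of what the RAIIMutex wrapper assumed above might look like: an IDisposable that acquires the mutex on construction and releases it when the using block ends.
using System;
using System.Threading;
public sealed class RAIIMutex : IDisposable
{
    private readonly Mutex mutex;

    public RAIIMutex(Mutex mutex)
    {
        this.mutex = mutex;
        this.mutex.WaitOne(); // acquire on construction
    }

    public void Dispose()
    {
        mutex.ReleaseMutex(); // release when the using block exits
    }
}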
I had a difficult time determining a good title, so feel free to change it if necessary. I wasn't really sure how to describe what I'm trying to achieve and the word "template" came to mind (obviously I'm not trying to use C++ templates).
If I have a class that performs some action in every method, let's pretend it's a try/catch and some other stuff:
public class SomeService
{
public bool Create(Entity entity)
{
try
{
this.repository.Add(entity);
this.repository.Save();
return true;
}
catch (Exception e)
{
return false;
}
}
}
Then I add another method:
public bool Delete(Entity entity)
{
try
{
this.repository.Remove(entity);
this.repository.Save();
return true;
}
catch (Exception e)
{
return false;
}
}
There's obviously a pattern in the methods here: try/catch with the return values. So I was thinking that since all methods on the service need to implement this pattern of working, could I refactor it into something like this instead:
public class SomeService
{
public bool Delete(Entity entity)
{
return this.ServiceRequest(() =>
{
this.repository.Remove(entity);
this.repository.Save();
});
}
public bool Create(Entity entity)
{
return this.ServiceRequest(() =>
{
this.repository.Add(entity);
this.repository.Save();
});
}
protected bool ServiceRequest(Action action)
{
try
{
action();
return true;
}
catch (Exception e)
{
return false;
}
}
}
This way all methods follow the same "template" for execution. Is this a bad design? Remember, the try/catch isn't all that could happen in each method. Think of adding validation: there would be the need to say if(!this.Validate(entity))... in each method.
Is this too difficult to maintain/ugly/bad design?
Using lambda expressions usually reduces readability, which basically means that in a few months someone will read this and get a headache.
If it's not necessary, or there's no real performance benefit, just use the 2 separate functions. IMO it's better to have readable code than to use nifty techniques.
This seems like a technique that would be limited to only small "actions" -- the more code in the "action" the less useful this would be as readability would be more and more compromised. In fact, the only thing you're really reusing here is the try/catch block which is arguably bad design in the first place.
That's not to say that it's necessarily a bad design pattern, just that your example doesn't really seem to be a good fit for it. LINQ, for example, uses this pattern extensively. In combination with extension methods and the fluent style it can be very handy and still remain readable. Again, though, it seems best suited to replace small "actions" -- anything more than a couple of lines of code and I think it gets pretty messy.
If you are going to do it you might want to make it more useful by passing in both the action and the entity the action uses as parameters instead of just the action. That would make it more likely that you could do additional, common computations in your action.
public bool Delete( Entity entity )
{
return this.ServiceRequest( e => {
this.repository.Remove( e );
this.repository.Save();
}, entity );
}
protected bool ServiceRequest( Action<Entity> action, Entity entity )
{
try
{
this.Validate( entity );
action( entity );
return true;
}
catch (SqlException) // only catch specific exceptions
{
return false;
}
}
I would look for a way to separate the repository action (add/update/delete) from the flush of repository changes (save).
Depending on how you use your code (web/windows) you might be able to use a 'session' manager for this. Having this separation will also allow you to have multiple actions flushed in a single transaction.
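A rough sketch of the benefit, reusing the names from the question: if the actions only queue changes and a single Save flushes them, several operations can land in one transaction.
public bool CreateAndDelete(Entity toAdd, Entity toRemove)
{
    return this.ServiceRequest(() =>
    {
        this.repository.Add(toAdd);
        this.repository.Remove(toRemove);
        this.repository.Save(); // one flush, so both changes go out in a single transaction
    });
}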
Another thing, related not to the topic but to the code: don't return true/false; either let exceptions pass through or return something that will allow you to distinguish the cause of failure (validation, etc.). You might want to throw on a contract breach (invalid data passed) and return a value for normal business-rule violations (so as not to use exceptions as business rules, since they are slow).
An alternative would be to create an interface, say IExample, which expresses more of your intent and decouples the action from the actor. In your example at the end you mention perhaps using this.Validate(entity). Clearly that won't work with your current design, as you'd have to pass both action and entity and then pass entity to the action.
If you express it as an interface on entity you simply pass any entity that implements IExample.
public interface IExample
{
bool Validate();
void Action();
}
Now the entity implements IExample, ServiceRequest takes an IExample as its parameter, and Bob's your uncle.
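A sketch of what that reworked ServiceRequest might look like, keeping the try/catch and bool result from the question:
protected bool ServiceRequest(IExample item)
{
    try
    {
        if (!item.Validate())
            return false; // validation failure, no exception needed
        item.Action();
        return true;
    }
    catch (Exception)
    {
        return false;
    }
}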
Your original design isn't bad at all (it's perfectly common, actually), but it becomes restrictive as requirements change (this time the action has to be called twice, that time it needs to call validation and then post-validation). By expressing it through an interface you decouple the pattern, and it can be moved to a helper class designed to replay this particular sequence; once extracted, it also becomes testable. It also means that if the requirements suddenly require post-validation to be called, you can revisit the entire design.
Finally, if the pattern isn't applied in many places (say you have perhaps three places in the class), it might not be worth the effort; just code each one long-hand. Interestingly, because of the way things are JIT-compiled it might be faster to have three distinct and complete methods rather than three that all share a common core...
I have designed the following class that should work kind of like a method (usually the user will just run Execute()):
public abstract class ??? {
protected bool hasFailed = false;
protected bool hasRun = false;
public bool HasFailed { get { return hasFailed; } }
public bool HasRun { get { return hasRun; } }
private void Restart() {
hasFailed = false;
hasRun = false;
}
public bool Execute() {
ExecuteImplementation();
bool returnValue = hasFailed;
Restart();
return returnValue;
}
protected abstract void ExecuteImplementation();
}
My question is: how should I name this class? Runnable? Method (sounds awkward)?
Naming a class is all about good design. You have to know which use cases this class will be part of, which responsibility it will take, and which collaborations it will take part in. Naming a class without context can only do harm. Naming a class after a pattern just because the pattern uses similar names is even worse, because it might confuse any reader who knows something about patterns, which is exactly the opposite of what patterns try to achieve: naming common decisions/solutions/designs/etc. Your class could be Runnable, Executable, Method, Procedure, Job, Worker, RestartableExecutor, Command, Block, Closure, Functor, or virtually anything else without further information.
Possibilities:
action
command
worker
method
I like action, personally.
I guess a good question to ask yourself would be what you are executing. Then you might get an idea of what to name it.
For example, if you are executing a file and folder scan, you could name the class FileAndFolderScan.
FileAndFolderScan.Execute();
It sure looks like a Task to me.
Usually the Command pattern uses classes with an Execute() method. Is that more or less what you're trying to accomplish? I guess it's close enough for me; I would call it a Command or Worker or something similar.
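For comparison, the classic Command shape is just an interface with Execute() (a sketch for illustration, not the asker's class; the concrete class name is hypothetical):
public interface ICommand
{
    void Execute();
}
// A hypothetical concrete command:
public class ScanFoldersCommand : ICommand
{
    public void Execute()
    {
        // do the work here
    }
}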
Do you know about BackgroundWorker?
The .NET Framework already has a class that does this (several, actually). They're called delegates.
If it's really doing a lot more than just executing a method - performing error handling or that sort of thing - then name it for what it actually does, not how it's implemented.
If you absolutely have to implement a totally generic and abstract class that does nothing but encapsulate an arbitrary method and some sort of success/failure status (why?), then... task, worker, command, instruction, request, activity, etc... pick any of the above, they all mean pretty much the same thing in this context.
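For illustration, a delegate already gives you "a method you can pass around and execute", e.g. a Func<bool> reporting success or failure (a sketch):
Func<bool> execute = () =>
{
    // do the work; return false on failure
    return true;
};
bool succeeded = execute();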
At my work we were stuck on .NET 2.0 for a while (pre-Action and Func delegates) and I had been using a bunch of overloaded generic delegates with the same signatures called Runner and Returner. Seems to me you could go with Runner and have a pretty clear self-describing class.
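For anyone on a similarly old framework, those stand-ins might look roughly like this (a sketch of the idea, not the exact code from that codebase):
public delegate void Runner();
public delegate void Runner<T>(T arg);
public delegate TResult Returner<TResult>();
public delegate TResult Returner<T, TResult>(T arg);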
Alternately, why not just go with something like Executable or Executor?
Task
Client code should look good with this.
//create instance of Task implementation
Task myTask = TaskFactory.CreateTask();
//Execute the task
myTask.Execute();
You could call it an Executioner. :)
Quick Answer:
You already have several suggestions like "Task", "Process", "Job", even "Command".
Complementary comments:
Any object has a life cycle: a "start" operation, usually the constructor; a "finish" operation, usually Dispose or a destructor; and at least one main operation like your Execute(), though there can be more.
In your code the constructor and destructor are managed internally by the compiler, but sometimes they do other work such as opening and closing files.
You may want to learn more about the Command design pattern; as mentioned in previous answers, it seems to fit your case.