Is there such a thing as too many events? - c#

public class Human
{
    public void Run() { }
    public void Jump() { }
    public void Eat() { }

    // Generalized approach
    public event EventHandler<HumanActivityProgressChanged> ActivityProgressChanged;
    public event EventHandler<HumanActivityCompleted> ActivityCompleted;

    // Per-method approach
    public event EventHandler<HumanActivityProgressChanged> Running;
    public event EventHandler<HumanActivityCompleted> Ran;
    public event EventHandler<HumanActivityProgressChanged> Jumping;
    public event EventHandler<HumanActivityCompleted> Jumped;
    public event EventHandler<HumanActivityProgressChanged> Eating;
    public event EventHandler<HumanActivityCompleted> Ate;
}
I have several methods that implement the Event-based Asynchronous Pattern. Each method raises a ProgressChanged event and a Completed event, and they all use the same EventArgs types (as shown in the code above).
Does it make sense to provide an event for each async method, or just a generalized pair of events for all async methods? Is there such a thing as too many events?

Both are valid. It really depends on what your intention is.
Will you expect a listener to want to listen to all events and respond in similar ways? Go for the generalized event.
If you are expecting a load of disparate listeners, each interested in different aspects and performing wildly different tasks on these events, go for the second.
The idea behind API design is not to impose a certain way of usage on the clients (Rails users may disagree), but you will give strong hints in the way you design it.

The fact that all events have the same EventArgs is an indication that you might replace these events with a single event and pass the activity as a property of the HumanActivityProgressChanged and HumanActivityCompleted types.
There is no law that tells you to do so. It all depends on what you wish to expose and what clients would expect/need.
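For illustration, a minimal sketch of that generalized shape (the ActivityType enum and the ProgressPercentage property are invented details, not part of the original question):
public enum ActivityType { Run, Jump, Eat }

public class HumanActivityProgressChanged : EventArgs
{
    public ActivityType Activity { get; private set; }
    public int ProgressPercentage { get; private set; }

    public HumanActivityProgressChanged(ActivityType activity, int progressPercentage)
    {
        Activity = activity;
        ProgressPercentage = progressPercentage;
    }
}

public class Human
{
    public event EventHandler<HumanActivityProgressChanged> ActivityProgressChanged;

    public void Run()
    {
        // every activity reports through the same event; subscribers can filter on e.Activity
        ActivityProgressChanged?.Invoke(this, new HumanActivityProgressChanged(ActivityType.Run, 50));
    }
}
A subscriber that only cares about running can simply check e.Activity == ActivityType.Run in its handler.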

It's a question of taste, but I personally prefer the first approach.
For one thing, maintenance is going to be much easier.
And it is simply more efficient to code that way.

Related

Testable code: Attach event handler in constructor

As far as I understand, part of writing (unit-)testable code is that a constructor should not do real work and should only assign fields. This has worked pretty well so far. But I came across a problem and I'm not sure what the best way to solve it is. See the code sample below.
class SomeClass
{
    private IClassWithEvent classWithEvent;

    public SomeClass(IClassWithEvent classWithEvent)
    {
        this.classWithEvent = classWithEvent;
        // (1) attach event handler in ctor.
        this.classWithEvent.Event += OnEvent;
    }

    public void ActivateEventHandling()
    {
        // (2) attach event handler in method
        this.classWithEvent.Event += OnEvent;
    }

    private void OnEvent(object sender, EventArgs args)
    {
    }
}
To me, option (1) sounds fine, but then the constructor does more than only assign fields. Option (2) feels a bit "too much".
Any help is appreciated.
A unit test would test SomeClass at most. Therefore you would typically mock classWithEvent. Using some kind of injection for classWithEvent in ctor is fine.
Just as Thomas Weller said, wiring is field assignment.
Option 2 is actually bad, IMHO. If you omit a call to ActivateEventHandling you end up with an improperly initialized class, and you need to transport the knowledge that ActivateEventHandling must be called in comments or some other way, which makes the class harder to use. It probably also results in a class usage you never tested: you called ActivateEventHandling in your tests, but an uninformed user who omits the activation ends up on a code path you certainly did not cover, right? :)
Edit: There may be alternative approaches here which are worth mentioning.
Depending on the paradigm it may be wise to avoid wiring events in the class at all. I need to qualify my comment on Stephen Byrne's answer.
Wiring can be regarded as context knowledge. The single responsibility principle says a class should do only one task. Furthermore, a class can be reused more flexibly if it does not have a dependency on something else. A very loosely coupled system would provide many classes which have events and handlers and do not know the other classes.
The environment is then responsible for wiring all the classes together to connect events properly with handlers.
The environment would create the context in which the classes interact with each-other in a meaningful way.
A class in this case therefore does not know to whom it will be bound, and it actually does not care. If it requires a value, it asks for it; whom it asks should be unknown to it. In that case there wouldn't even be an interface injected into the ctor, in order to avoid a dependency. This concept is similar to neurons in a brain: they also emit messages to the environment and expect answers without knowing their neighbouring neurons.
However, I regard a dependency on an interface that is injected by some means such as a dependency injection container as just another paradigm, and not any less wrong.
The non-trivial task of having the environment wire up all classes on start may lead to runtime errors (which can be mitigated by very good coverage of functional and integration tests, itself a hard task for large projects), and it gets very annoying if you need to wire dozens of classes and probably hundreds of events on startup manually.
While I agree that wiring in an environment and not in the class itself can be nice, it is not practical for large scale code.
Ralf Westphal (one of the founders of the Clean Code Developer initiative (sorry, German only)) has written software that performs this wiring automatically, in a concept called "event based components" (a term not necessarily coined by himself). It uses naming conventions and signature matching with reflection to bind events and handlers together.
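As a rough illustration of what such convention-based wiring can look like (this sketch is not Westphal's actual implementation; the "On" + event-name convention and the ConventionWirer name are assumptions made here):
using System;
using System.Reflection;

public static class ConventionWirer
{
    // For every event Foo on the publisher, look for a method named OnFoo on the handler
    // and subscribe it if the signature matches the event's delegate type.
    public static void Wire(object publisher, object handler)
    {
        foreach (EventInfo evt in publisher.GetType().GetEvents())
        {
            MethodInfo method = handler.GetType().GetMethod(
                "On" + evt.Name,
                BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
            if (method == null) continue;

            // CreateDelegate returns null (instead of throwing) when the signature does not match.
            Delegate d = Delegate.CreateDelegate(evt.EventHandlerType, handler, method, false);
            if (d != null) evt.AddEventHandler(publisher, d);
        }
    }
}
Wiring like this keeps the wiring knowledge out of the classes themselves, at the cost of compile-time safety.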
Wiring events is field assignment (because delegates are nothing but simple reference variables that point to methods).
So option(1) is fine.
The point of a constructor is not to "assign fields". It is to establish the invariants of your object, i.e. things that never change during its lifetime.
So if in other methods of class you depend on being always subscribed to some object, you'd better do it in the constructor.
On the other hand, if subscriptions come and go (probably not the case here), you can move this code to another method.
The single responsibility principle dictates that such wiring should be avoided. Your class should not care how, or from where, it receives data. It would make sense to rename the OnEvent method to something more meaningful, and make it public.
Then some other class (bootstrapper, configurator, whatever) should be responsible for the wiring. Your class should only be responsible for what happens when new data comes in.
Sketch:
public interface IEventProvider // your IClassWithEvent
{
    event EventHandler MyEvent;
}

public interface IEventResponder
{
    void OnEvent(object sender, EventArgs args);
}

public class EventResponder : IEventResponder
{
    public void OnEvent(object sender, EventArgs args) { /* react to the new data */ }
}

public class Bootstrapper
{
    public void WireEvent(IEventProvider eventProvider, IEventResponder eventResponder)
    {
        eventProvider.MyEvent += eventResponder.OnEvent;
    }
}
Note, the above is only a sketch; its purpose is just to describe the idea.
How your bootstrapper is actually implemented depends on many things. It could be your "main" method, or your global.asax, or whatever you have in place to configure and prepare your application.
The idea is that whatever is responsible for preparing the application to run should compose it, not the classes themselves; they should be as single-purpose as possible and should not care too much about how and where they are used.
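For example, a minimal sketch of such a composition root, assuming a concrete ClassWithEvent type that implements IEventProvider (both the type and that mapping are assumptions made here for illustration):
public static class Program
{
    public static void Main()
    {
        // the composition root creates the parts and wires them together
        IEventProvider provider = new ClassWithEvent();      // assumed implementation of IEventProvider
        IEventResponder responder = new EventResponder();

        new Bootstrapper().WireEvent(provider, responder);

        // from here on, provider and responder interact only through the wired event
    }
}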

Action<T> vs virtual methods

Consider I have a game-creation framework, with a basic class called Game.
// Action version
public class Game
{
    public Action<float> OnUpdate;

    // null-conditional invoke avoids a NullReferenceException when nothing is subscribed
    public void Update(float mFrameTime) { OnUpdate?.Invoke(mFrameTime); }
}

// Virtual version
public class Game
{
    public virtual void Update(float mFrameTime) { }
}
Which one is the best approach?
(design-wise and performance-wise)
Subscribing something to the OnUpdate action (and not inheriting the Game class), or inheriting the Game class and overriding the virtual method?
This depends on who the target audience is for the Update notification. If it's only for derived classes then a virtual method is a fine approach. If it's for derived classes and/or arbitrary consumers then the event-style pattern is the correct approach.
I would rather raise events than have the Action<T>. This is a common pattern in .NET:
public class Game
{
    public event EventHandler<SomeEventArgs> Update;

    protected virtual void OnUpdate(float mFrameTime)
    {
        // raise the event here, e.g. Update?.Invoke(this, new SomeEventArgs(mFrameTime));
    }
}
It depends what the rest of your framework looks like and what you plan to do with this class. If this class simply alerts other classes that it's time to update, then you'll want an event-based system. If this is a base class for all games to derive from, then you'll probably want an override. No answer is 100% correct - it all depends on what exactly you are trying to accomplish with your design.
Unfortunately I lack the knowledge to answer which of those options would be more efficient from a performance perspective, but from a design perspective I would advise overriding the Update method for the following reasons:
It keeps all your update logic in one spot.
It allows you to easily monitor the ordering of update actions.
As others have said, it depends on the purpose of the class, and I am making assumptions based on the fact that it's a 'game' class. There's no reason you couldn't implement both, with a virtual method that triggers an event (see the sketch below). This would allow the bulk of the update code to be located centrally, with the option for elements to subscribe to the update event as needed.
When I've approached the same problem myself I've typically adopted a virtual-method approach, though I've broken down update into multiple sections for convenience and control purposes.
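A rough sketch of that "implement both" idea (UpdateEventArgs is an invented name used only for illustration):
public class UpdateEventArgs : EventArgs
{
    public float FrameTime { get; private set; }
    public UpdateEventArgs(float frameTime) { FrameTime = frameTime; }
}

public class Game
{
    public event EventHandler<UpdateEventArgs> Updated;

    // Derived games override this for their central update logic...
    public virtual void Update(float frameTime)
    {
        // ...and the base implementation notifies outside subscribers,
        // as long as overrides remember to call base.Update(frameTime).
        Updated?.Invoke(this, new UpdateEventArgs(frameTime));
    }
}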

Code-friendly version of event signature in .NET

Preceding posts:
Event Signature in .NET — Using a Strong Typed 'Sender'?
In a C# event handler, why must the “sender” parameter be an object?
Microsoft's conventions and guidelines force .NET users to use a special pattern for creating, raising and handling events in .NET.
The event design guidelines (http://msdn.microsoft.com/en-us/library/ms229011.aspx) state that:
Citation:
The event-handler signature observes the following conventions:
The return type is Void.
The first parameter is named sender and is of type Object. This is the object that raised the event.
The second parameter is named e and is of type EventArgs or a derived class of EventArgs. This is the event-specific data.
The method takes exactly two parameters.
These conventions tell developers that the (following) shorter and more obvious code is evil:
public delegate void ConnectionEventHandler(Server sender, Connection connection);

public partial class Server
{
    protected virtual void OnClientConnected(Connection connection)
    {
        if (ClientConnected != null) ClientConnected(this, connection);
    }

    public event ConnectionEventHandler ClientConnected;
}
and the (following) longer and less obvious code is good:
public delegate void ConnectionEventHandler(object sender, ConnectionEventArgs e);

public class ConnectionEventArgs : EventArgs
{
    public Connection Connection { get; private set; }

    public ConnectionEventArgs(Connection connection)
    {
        this.Connection = connection;
    }
}

public partial class Server
{
    protected virtual void OnClientConnected(Connection connection)
    {
        if (ClientConnected != null) ClientConnected(this, new ConnectionEventArgs(connection));
    }

    public event ConnectionEventHandler ClientConnected;
}
These guidelines, however, do not state why it is so important to follow these conventions, making developers act like monkeys who don't know why or what they are doing.
IMHO, Microsoft's event signature conventions for .NET are bad for your code because they cause additional zero-efficiency effort to be spent on coding, coding, coding:
Coding "(MyObject)sender" casts (not speaking about 99% of situations that don't require sender at all)
Coding derived "MyEventArgs" for the data to be passed inside event handler.
Coding dereferences (calling "e.MyData" when the data is required instead of just "data")
It's not that hard to make this effort, but practically speaking, what are we losing by not conforming to Microsoft's conventions, except that people take you for a heretic because your confrontation with Microsoft's conventions verges on blasphemy? :)
Do you agree?
The problems you will have:
When you add another argument, you will have to change your event handler signature.
When a programmer first looks at your code, your event handlers will not look like event handlers.
Especially the latter can waste far more time than writing a 5-line class.
Regarding having a strongly-typed sender, I've often wondered that myself.
Regarding the EventArgs, I'd still recommend you use an intermediate EventArgs class because you may want to add event information in the future which you don't currently foresee. If you've used a specific EventArgs class all along, you can simply change the class itself and the code where it gets fired. If you pass the Connection as per your example, you'd have to refactor every event handler.
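To illustrate with the Connection example from the question, suppose you later need to expose when the connection happened (the Timestamp property is an invented addition, just for illustration):
public class ConnectionEventArgs : EventArgs
{
    public Connection Connection { get; private set; }
    public DateTime Timestamp { get; private set; }   // information added later

    public ConnectionEventArgs(Connection connection, DateTime timestamp)
    {
        this.Connection = connection;
        this.Timestamp = timestamp;
    }
}
Every existing handler keeps the same (object sender, ConnectionEventArgs e) signature and keeps compiling; only the EventArgs class and the code that raises the event change.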
Edit
Jim Mischel made a good point in his comments. By making the sender an object, we enable the same event method to potentially be reused to handle a variety of events. For example, let's say that a grid needs to update itself if:
the user clicks a "refresh" button, or
the system detects that a new entry has been loaded from the server.
You could then say something like this:
serverBus.EntryReceived += RefreshNeededHandler;
refreshButton.Click += RefreshNeededHandler;
...
public void RefreshNeededHandler(object sender, EventArgs args)
{
...
}
Of course, in practice, I have pretty much never had any call for this kind of reuse, whereas the first thing I tend to do in many, many cases is cast the sender to the object type that I know it has to be. If I want to reuse handlers like this, I think it would be easy enough to make two handlers that both call the same convenience method. For me, an event handler is conceptually supposed to handle a specific type of event on a particular group of objects. So I am not personally convinced that the object sender approach is the best convention.
However, I can imagine cases where this would be extremely handy, like if you want to log every event that gets fired.
The biggest problem I see in not following the convention is that you're going to confuse developers who are used to handling events in the way that the runtime library does. I won't say that the convention is good or bad, but it's certainly not evil. .NET developers know and understand how to work with events that are written in conformance with Microsoft's guidelines. Creating your own event handling mechanism on top of that may be more efficient at runtime and might even lead to code that you think is cleaner. But it's going to be different and you'll end up with two event handling "standards" in your program.
My position is that it's better to use a single less-than-ideal standard (as long as it's not horribly broken) than to have two competing standards.
I use strongly typed events (instead of object), as it saves me having to cast. It really isn't that hard to understand: "oh look, they've used a type that isn't object".
As for EventArgs, you should still use them in case the event data changes, as per StriplingWarrior's answer.
I don't understand why devs would get confused over it.
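For what it's worth, one way to get a strongly typed sender while keeping the familiar two-parameter shape is a custom generic delegate along these lines (TypedEventHandler is an invented name, not a framework type):
public delegate void TypedEventHandler<TSender, TArgs>(TSender sender, TArgs e) where TArgs : EventArgs;

public partial class Server
{
    public event TypedEventHandler<Server, ConnectionEventArgs> ClientConnected;

    protected virtual void OnClientConnected(Connection connection)
    {
        // handlers receive a Server directly, so no (Server)sender cast is needed
        var handler = ClientConnected;
        if (handler != null) handler(this, new ConnectionEventArgs(connection));
    }
}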

C# Observer pattern: Still tightly connected?

OK, so I am stuck on the observer pattern here; almost all tutorials I read have the subject class subscribe the observer(s).
But with encapsulation in mind, how is this not tightly coupled? They still depend on each other, don't they?
What I mean to say is that the subject Class must know the observer object to add him to the list of objects to notify.
Therefore a dependency is created, right?
What's the mistake I make?
Thanks!
Thanks everybody for the replies.
Now I have some new questions. If I understand correctly, the best way to deal with this is with interfaces, so I will do that ;)
But why is everyone always talking about delegates and events? Events ARE a form of delegates, so why aren't they just saying events?
When you say "know", you're right that the publisher must know about the observer in order to publish information to it.
However, it doesn't need to "know" about it in the sense that it is hardcoded that:
The observer is always going to be this particular class
There is always going to be these particular observers available
In its basic form, events are the publisher/observer pattern in play, so you can easily do this just with events:
public class Observer
{
}

public class Publisher
{
    public event EventHandler SomethingHappened;
}
You would then make the observer handle that event:
public class Observer
{
    public Observer(Publisher pub)
    {
        pub.SomethingHappened += Publisher_SomethingHappened;
    }

    private void Publisher_SomethingHappened(object sender, EventArgs e)
    {
    }
}

public class Publisher
{
    public event EventHandler SomethingHappened;
}
Whenever this event is raised from the publisher, the observer is informed about it. Realize that the act of hooking into an event is to "tell" that class about the observer, but the publisher doesn't have any hardcoded information about the observer, except that there is someone out there listening.
A different way would be using interfaces:
public class Observer : IObserver
{
    public Observer(Publisher pub)
    {
        pub.Observers.Add(this);
    }

    void IObserver.SomethingHappened()
    {
    }
}

public class Publisher
{
    public Publisher()
    {
        Observers = new List<IObserver>();   // initialize the list so observers can register
    }

    public List<IObserver> Observers { get; private set; }
}

public interface IObserver
{
    void SomethingHappened();
}
Again, the publisher will "know" about the observer in the sense that it has a reference to it, but again it has no hardcoded information about which class or how many instances there will be.
Just a word of warning: the code above is very flawed; at a minimum you should ensure that the observer "unhooks" from the publisher when you're done, otherwise you're going to have leaks in the system. If you don't understand what I mean by that, leave a comment and I'll edit in an example.
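As a rough example of that unhooking (using the event-based version above; putting it in IDisposable is just one possible choice):
public class Observer : IDisposable
{
    private readonly Publisher pub;

    public Observer(Publisher pub)
    {
        this.pub = pub;
        pub.SomethingHappened += Publisher_SomethingHappened;
    }

    private void Publisher_SomethingHappened(object sender, EventArgs e)
    {
    }

    public void Dispose()
    {
        // detach so the publisher no longer holds a reference to this observer
        pub.SomethingHappened -= Publisher_SomethingHappened;
    }
}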
The observable class can accept interfaces in the observe method. The interface is defined in the library where the subject class is defined and then implemented by the subscriber. That way, the classes only know what they are supposed to know.
An observable object in C# is an object that declares one or more events. One or more observing classes might or might not subscribe to these events at runtime. The observable part does not know and does not care.
The observed class does not have to maintain a list of objects to be notified. It just has to fire an event and otherwise is totally agnostic about who is listening.
So there's no dependency whatsoever from the observed class to the observing one. Only the observing one has to know about the events that it can observe.
Thomas
I don't really understand you. But if you are worried about the observer pattern in C#/.NET, the developers at Microsoft have already solved your problems in the form of events.
I think you have read this slightly wrong: yes, the subject does subscribe the observer, but it doesn't initiate the subscription, i.e. MySubjectClass.Observers += MyObserverClass;
By using an interface to define the contract between the Subject and the Observer you allow the Observer to be any class that implements the interface.
So you can see this is not tightly coupled, i.e. the Subject isn't instantiating concrete Observer classes.
All the observer object and the observed object know is that they are interacting with an IObservable and an IObserver object respectively. The exact type of these objects is irrelevent to them - they only care that they implement the IObserver and IObservable interfaces.
You ask "Whats the mistake I make ?" . Your following line is where I think you went wrong:
What I mean to say is that the subject Class must know the observer object to add him to the list of objects to notify.

Events convention - I don't get it

My class with an event:
public class WindowModel
{
    private WindowType currentWindow;

    public delegate void WindowChangedHandler(object source, WindowTypeEventArgs e);
    public event WindowChangedHandler WindowChanged;

    public void GotoWindow(WindowType windowType)
    {
        this.currentWindow = windowType;
        this.WindowChanged.Invoke(this, new WindowTypeEventArgs(windowType));
    }
}
Derived EventArgs class:
public class WindowTypeEventArgs : EventArgs
{
    public readonly WindowType windowType;

    public WindowTypeEventArgs(WindowType windowType)
    {
        this.windowType = windowType;
    }
}
Some other class that registers for the event:
private void SetupEvents()
{
    this.WindowModel.WindowChanged += this.ChangeWindow;
}

private void ChangeWindow(object sender, WindowTypeEventArgs e)
{
    // change window
}
What have I gained from following the .NET convention?
It would make more sense to have a contract like this
public delegate void WindowChangedHandler(WindowType windowType);
public event WindowChangedHandler WindowChanged;
Doing it this way, I don't need to create a new class, and it is easier to understand.
I am not coding a .NET library; this code is only going to be used in this project. I like conventions, but am I right when I say that in this example it does not make sense, or have I misunderstood something?
Viewed in isolation, yes, you're correct: the .NET conventional syntax is more verbose and less intuitive, but there are advantages:
Future changes to the information passed by your event do not automatically require changes to every consumer of the event. For example, if you wanted to add an additional piece of information to your event--say, a WindowTitle string--you'd have to modify the signature of every single function that gets attached to that event, regardless of whether or not they use it. With the EventArgs approach, you add the property to the arguments and only alter the functions that need to take advantage of the additional information.
Since .NET 2.0 introduced the EventHandler<TEventArgs> delegate type, you no longer need to define your own event delegates manually. In your example, you would type your event as EventHandler<WindowTypeEventArgs> instead of WindowChangedHandler.
The EventArgs approach makes it easy to pass multiple types of information back to the calling function. If you needed to do this in your alternative example (that passes the event parameter directly), you'd still end up creating your own tuple-like class to hold the information.
The impact of the first one is made more evident when you look at the actual pattern for .NET events in creating a protected virtual function that actually does the invoking. For example:
public event EventHandler<WindowTypeEventArgs> WindowChanged;

protected virtual void OnWindowChanged(WindowTypeEventArgs e)
{
    var evt = WindowChanged;
    if (evt != null) evt(this, e);
}
There are a couple of things I'd like to point out here:
Using the pattern of creating this event-invoking method allows you to avoid null checking throughout your code (an event without any functions attached to it will be null and will throw an exception if you try to invoke it)
This pattern also allows classes that inherit from you to control the order of invocation, allowing them to execute their code explicitly either before or after any outside consumers
This is especially important in multithreaded environments. If you just said if(WindowChanged != null) WindowChanged(this, e);, you would actually run the risk of the WindowChanged event becoming null between the time you check it and the time you call it. This isn't important to do in single-threaded scenarios, but is a great defensive habit to form.
I recognise your confusion! I had the same feeling when I first looked at this too.
The big thing to realise is that it doesn't give you much of an advantage programmatically speaking, but it is a convention that is well known in the framework. As such, there are plenty of tools that expect the void EventName(object sender, EventArgs e) signature. Some IoC containers, for example, can use this signature to auto wire events at construction time.
In short, it looks a bit weird, but it's a convention. Stick with it and the bulb will light up eventually!
You can use your own delegate; nobody will force you. It's just a good pattern for events.
If you use the standard sender/EventArgs pattern, you'll be able to use the same ChangeWindow handler for other events too.
