Events convention - I don't get it - C#

My class with an event:
public class WindowModel
{
    public delegate void WindowChangedHandler(object source, WindowTypeEventArgs e);

    public event WindowChangedHandler WindowChanged;

    private WindowType currentWindow;

    public void GotoWindow(WindowType windowType)
    {
        this.currentWindow = windowType;
        this.WindowChanged.Invoke(this, new WindowTypeEventArgs(windowType));
    }
}
Derived event class:
public class WindowTypeEventArgs : EventArgs
{
    public readonly WindowType windowType;

    public WindowTypeEventArgs(WindowType windowType)
    {
        this.windowType = windowType;
    }
}
Some other class that register it to the event:
private void SetupEvents()
{
    this.WindowModel.WindowChanged += this.ChangeWindow;
}

private void ChangeWindow(object sender, WindowTypeEventArgs e)
{
    //change window
}
What have I gained from following the .Net convention?
It would make more sense to have a contract like this
public delegate void WindowChangedHandler(WindowType windowType);
public event WindowChangedHandler WindowChanged;
Doing it this way, I don't need to create a new class, and it is easier to understand.
I am not coding a .NET library; this code is only going to be used in this project. I like conventions, but am I right when I say that in this example it does not make sense, or have I misunderstood something?

Viewed in isolation, yes, you're correct: the .NET conventional syntax is more verbose and less intuitive, but there are advantages:
Future changes to the information passed by your event do not automatically require changes to every consumer of the event. For example, if you wanted to add an additional piece of information to your event--say, a WindowTitle string--then with your alternative signature you would have to modify every single function that gets attached to that event, regardless of whether or not it uses it. With the EventArgs approach, you add the property to the arguments class and only alter the functions that need to take advantage of the additional information (a sketch of this follows the list below).
Since .NET 2.0 introduced the EventHandler<TEventArgs> delegate type, you no longer need to define your own event delegates manually. In your example, you would type your event as EventHandler<WindowTypeEventArgs> instead of WindowChangedHandler.
The EventArgs approach makes it easy to pass multiple pieces of information back to the calling function. If you needed to do this in your alternative example (the one that passes the event parameter directly), you'd still end up creating your own tuple class to hold the information.
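For instance, here is a rough sketch of the first point: adding a hypothetical WindowTitle to the EventArgs class (using properties and an optional constructor parameter, so this assumes C# 4 or later) leaves every existing handler signature untouched:
public class WindowTypeEventArgs : EventArgs
{
    public WindowType WindowType { get; private set; }

    // Hypothetical extra piece of information.
    public string WindowTitle { get; private set; }

    public WindowTypeEventArgs(WindowType windowType, string windowTitle = null)
    {
        this.WindowType = windowType;
        this.WindowTitle = windowTitle;
    }
}
Only the raising code and the handlers that actually care about the title need to change; every other subscriber keeps its (object sender, WindowTypeEventArgs e) signature as it is.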
The impact of the first point becomes more evident when you look at the actual pattern for .NET events, where you create a protected virtual method that does the invoking. For example:
public event EventHandler<WindowTypeEventArgs> WindowChanged;

protected virtual void OnWindowChanged(WindowTypeEventArgs e)
{
    var evt = WindowChanged;
    if (evt != null) evt(this, e);
}
There are a couple of things I'd like to point out here:
Using the pattern of creating this event-invoking method allows you to avoid null checking throughout your code (an event without any functions attached to it will be null and will throw an exception if you try to invoke it)
This pattern also allows classes that inherit from you to control the order of invocation, allowing them to execute their code explicitly either before or after any outside consumers
This is especially important in multithreaded environments. If you just said if(WindowChanged != null) WindowChanged(this, e);, you would actually run the risk of the WindowChanged event becoming null between the time you check it and the time you call it. This isn't important to do in single-threaded scenarios, but is a great defensive habit to form.
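As an aside, if you're on C# 6 or later, the null-conditional operator gives you the same copy-then-check behavior more concisely:
protected virtual void OnWindowChanged(WindowTypeEventArgs e)
{
    // Reads the field once and only invokes if it was non-null at that moment.
    WindowChanged?.Invoke(this, e);
}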

I recognise your confusion! I had the same feeling when I first looked at this too.
The big thing to realise is that it doesn't give you much of an advantage programmatically speaking, but it is a convention that is well known in the framework. As such, there are plenty of tools that expect the void EventName(object sender, EventArgs e) signature. Some IoC containers, for example, can use this signature to auto wire events at construction time.
In short, it looks a bit weird, but it's a convention. Stick with it and the bulb will light up eventually!

You can use your own delegate; nobody will force you. It's just a good pattern for events.
If you use the standard sender/EventArgs pattern, you'll be able to use the same ChangeWindow handler for other events too.

Related

Events and Delegates Vs Calling methods

I hope this question is not too closely related to others, but the others don't seem to fill the gap in knowledge.
Understanding events and delegates seems to be a hot topic, and after reading many SO questions and MSDN articles I'm afraid to say I still don't understand. After some years creating great web applications, I found myself getting extremely frustrated at not understanding them. Please can anyone clarify this in plain code? So the question is: why would you use events and delegates over calling a method?
Below is some basic code I wrote at work. Would I be able to leverage events and delegates here?
public class Email
{
    public string To { get; set; }
    //Omitted code
    public void Send()
    {
        //Omitted code that sends.
    }
}

public class SomeClass
{
    //Some props
    public void DoWork()
    {
        //Code that does some magic
        //Now send the email
        Email newEmail = new Email();
        newEmail.To = "me@me.com";
        newEmail.Send();
    }
}
This is probably not the best example, but is there any way that the DoWork() method can subscribe to the Email? Would this work? Any help for me to truly understand events and delegates would be greatly appreciated.
Regards,
The biggest reason I have found in real-world programming for the use of events and delegates is easing the task of code maintenance and encouraging code reuse.
When one class calls methods in another class, those classes are "tightly coupled". The more classes you have tightly coupled, the more difficult it becomes to make a change to one of them without having to also change several others. You may as well have written one big class at that point.
Using events instead makes things more "loosely coupled" and makes it much easier to change one class without having to disturb others.
Taking your example above, suppose we had a third class, Logger, that should log when an email is sent. It uses a method, LogEvent(string desc, DateTime time), to write an entry to the log:
public class Logger
{
    ...
    public void LogEvent(string desc, DateTime time)
    {
        ...//some sort of logging happens here
    }
}
If we use methods, we need to update your Email class' Send method to instantiate a Logger and call its LogEvent method:
public void Send()
{
    //Omitted code that sends.
    var logger = new Logger();
    logger.LogEvent("Sent message", DateTime.Now);
}
Now Email is tightly coupled to Logger. If we change the signature of that LogEvent method in Logger, we will also have to make changes to Email. Do you see how this can quickly become a nightmare when you are dealing with even a medium-sized project? Furthermore, no one wants to even try to use the LogEvent method because they know that if they need to make any sort of change to it, they will have to start changing other classes, and what should have been an afternoon's worth of work quickly turns into a week. So instead, they write a new method, or a new class, that then becomes tightly coupled to whatever else they are doing, things get bloated, and each programmer starts to get into their own little "ghetto" of their own code. This is very, very bad when you have to come in later and figure out what the hell the program is doing or hunt down a bug.
If you put some events on your Email class instead, you can loosely couple these classes:
public class Email
{
    public event EventHandler<EventArgs> Sent;

    private void OnSent(EventArgs e)
    {
        if (Sent != null)
            Sent(this, e);
    }

    public string To { get; set; }
    //Omitted code

    public void Send()
    {
        //Omitted code that sends.
        OnSent(new EventArgs()); //raise the event
    }
}
Now you can add an event handler to Logger and subscribe it to the Email.Sent event from just about anywhere in your application and have it do what it needs to do:
public class Logger
{
    ...
    public void Email_OnSent(object sender, EventArgs e)
    {
        LogEvent("Message Sent", DateTime.Now);
    }

    public void LogEvent(string desc, DateTime time)
    {
        ...//some sort of logging happens here
    }
}
and elsewhere:
var logger = new Logger();
var email = new Email();
email.Sent += logger.Email_OnSent;//subscribe to the event
Now your classes are very loosely coupled, and six months down the road, when you decide that you want your Logger to capture more or different information, or even do something totally different when an email is sent, you can change the LogEvent method or the event handler without having to touch the Email class. Furthermore, other classes can also subscribe to the event without having to alter the Email class, and you can have a whole host of things happen when an email is sent.
Now maintaining your code is much easier, and other people are much more likely to reuse your code, because they know they won't have to go digging through the guts of 20 different classes just to make a change to how something is handled.
BIG EDIT: More about delegates. If you read through here: Curiosity is Bliss: C# Events vs Delegates (I'll keep links to a minimum, I promise), you'll see how the author gets into the fact that events are basically special types of delegates. They expect a certain method signature (i.e. (object sender, EventArgs e)) and can have more than one method added to them (+=) to be executed when the event is raised. There are other differences as well, but these are the main ones you will notice. So what good is a delegate?
Imagine you wanted to give the client of your Email class some options for how to send the mail. You could define a series of methods for this:
public class Email
{
    public string To { get; set; }
    //Omitted code

    public void Send(MailMethod method)
    {
        switch (method)
        {
            case MailMethod.Imap:
                ViaImap();
                break;
            case MailMethod.Pop:
                ViaPop();
                break;
        }
    }

    private void ViaImap() { ... }
    private void ViaPop() { ... }
}
This works well, but if you want to add more options later, you have to edit your class (as well as the MailMethod enum that is assumed here). If you declare a delegate instead, you can defer this sort of decision to the client and make your class much more flexible:
public class Email
{
    public Email()
    {
        Method = ViaPop; //set the default method on instantiation
    }

    //define the delegate type
    public delegate void SendMailMethod(string title, string message);

    //declare a field of type SendMailMethod
    public SendMailMethod Method;

    public string To { get; set; }
    //Omitted code

    public void Send()
    {
        //assume title and message strings have been determined already
        Method(title, message);
    }

    public void SetToPop()
    {
        this.Method = ViaPop;
    }

    public void SetToImap()
    {
        this.Method = ViaImap;
    }

    //You can write some default methods that you foresee being needed
    private void ViaImap(string title, string message) { ... }
    private void ViaPop(string title, string message) { ... }
}
Now a client can use your class with its own methods or provide their own method to send mail just about however they choose:
var regularEmail = new Email();
regularEmail.SetToImap();
regularEmail.Send();
var reallySlowEmail = new Email();
reallySlowEmail.Method = ViaSnailMail;
public void ViaSnailMail(string title, string message) {...}
Now your classes are somewhat less tightly coupled and much easier to maintain (and write tests for!). There are certainly other ways to use delegates, and lambdas sort of take things up a notch, but this should suffice for a bare-bones introduction.
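As a small illustration of that last point, with the Email class above a client could hand in a lambda rather than a named method (hypothetical usage, relying only on the SendMailMethod delegate already defined there):
var email = new Email();

// Any code matching the SendMailMethod signature will do, including a lambda.
email.Method = (title, message) =>
    Console.WriteLine("Pretending to send '" + title + "': " + message);

email.Send();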
Ok, I understand this answer won't, strictly speaking, be correct, but I'll tell you how I came to understand them.
All functions have a memory address, and some functions are simple get/set for data. It helps to think of all variables as being functions with only two methods - get and set. You're quite comfortable passing variables by reference, which means (simplistically) you are passing a pointer to their memory, which enables some other code to call their get/set methods, implicitly by using "=" and "==".
Now translate that concept to functions and code. Some code and functions have names (like variable names) which you give them. You are used to executing those functions by calling their name; but the name is just a synonym for their memory location (simplistically). By calling the function you are dereferencing its memory address using its name, and then calling the method which lives at that memory address.
As I said, this is all very simplistic and in various ways, incorrect. But it helps me.
So - is it possible to pass the memory address of a function but not call it? In the same way you pass a reference to a variable without evaluating it? I.e., what is the equivalent of calling
DoSomeFunction(ref variablePointer)
Well, the reference to a function is called a delegate. But because a function can also take parameters (which a variable cannot), you need a calling syntax more elaborate than just ref. You set up the call you want to make as a delegate structure and pass that delegate structure to the recipient, who can either immediately evaluate (call) that delegate, or store it for later use.
It's the "store for later use" that is the key to understanding event handlers. The special (and somewhat confusing) syntax around event handlers is just another way of setting up a function pointer (delegate) and adding it to a list of function pointers that the recipient class can evaluate at some convenient time.
A simple way of looking at event handlers (using a plain list of delegates rather than the event keyword) would be:
class myClass
{
    // a list of "function pointers" (delegates) that will be called later;
    // Action is the built-in "no parameters, no return value" delegate type
    public List<Action> eventHandlers = new List<Action>();

    public void someMethod()
    {
        //... do some work
        //... then call the events
        foreach (Action d in eventHandlers)
        {
            // we have no idea what the method name is that the delegate
            // points to, but we don't need to know - the pointer to the
            // function is stored as a delegate, so we just execute the
            // delegate, which is a synonym for the function.
            d();
        }
    }
}

public class Program
{
    public static void Main()
    {
        myClass class1 = new myClass();

        // 'longhand' version of setting up a delegate callback
        class1.eventHandlers.Add(new Action(eventHandlerFunction));

        // This call will cause the eventHandlerFunction below to be
        // called
        class1.someMethod();

        // 'shorthand' way of setting up a delegate callback
        class1.eventHandlers.Add(() => eventHandlerFunction());
    }

    public static void eventHandlerFunction()
    {
        Console.WriteLine("I have been called");
    }
}
It gets slightly more complicated when you want the caller of the delegate to pass in some values to the function, but otherwise all delegate concepts are the same as the concepts of "ref" variables - they are references to code which will be executed at a later date, and typically you pass them as callbacks to other classes, who will decide when and whether to execute them. In earlier languages delegates were pretty much the same as "function pointers" or (in the beloved, long-departed Nantucket Clipper) "code blocks". It's all much more complex than simply passing around a memory address of a block of code, but if you hang on to that concept, you won't go far wrong.
Hope that helps.
The simplest way of thinking about the use of delegates is to think in terms of when you want to call a method, but you don't yet know which one (or ones).
Take a control's Click event handler, for example. It uses the EventHandler delegate. The signature is void EventHandler(object sender, EventArgs e);. What this delegate is there for is that when someone clicks the control I want to be able to call zero or more methods that have the EventHandler signature, but I don't know what they are yet. This delegate lets me effectively call unknown future methods.
Another example is LINQ's .Select(...) operator. It has the signature IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector). This method takes a delegate parameter, Func<TSource, TResult> selector. What it does is take a sequence of values from source and apply an as-yet-unknown projection to produce a sequence of TResult.
Finally, another good example is Lazy<T>. This object has a constructor with this signature: public Lazy(Func<T> valueFactory). The job of Lazy<T> is to delay the instantiation of T until the first time it is used, and then to retain that value for all future uses. It is presumably a costly instantiation that would be ideal to avoid if we don't need the object, but if we need it more than once we don't want to be hit with the cost every time. Lazy<T> handles all the thread locking, etc., to make sure that only one instance of T is created. But the value of T returned by Func<T> valueFactory can be anything - the creators of Lazy<T> have no idea what the delegate will be, and nor should they.
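A couple of quick illustrations of those two, using only the framework APIs mentioned above (assuming the usual using directives for System.Linq and System.Text):
// Select: the projection is a delegate; Select has no idea what it will be.
var names = new[] { "Alice", "Bob", "Carol" };
IEnumerable<int> lengths = names.Select(n => n.Length);   // 5, 3, 5

// Lazy<T>: the factory delegate only runs the first time Value is read.
var expensive = new Lazy<StringBuilder>(() => new StringBuilder("built on first use"));
StringBuilder sb = expensive.Value;   // factory runs here; the result is cached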
This, to me, is the most fundamental thing to understand about delegates.
why would you use events and delegates over calling a method?
In the context of the example you posted, if you wanted to send emails asynchronously, you would have to implement a notification mechanism.
See the SmtpClient implementation for an example: https://msdn.microsoft.com/en-us/library/system.net.mail.smtpclient.sendcompleted%28v=vs.110%29.aspx
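Roughly, the asynchronous version of the example looks like this (a sketch against the SmtpClient API linked above; the host and addresses are made up):
var client = new SmtpClient("smtp.example.com");   // hypothetical server
var message = new MailMessage("from@example.com", "me@me.com", "Hello", "Body text");

// The event is how SmtpClient tells us, later, that the asynchronous send finished.
client.SendCompleted += (sender, e) =>
{
    if (e.Error != null)
        Console.WriteLine("Send failed: " + e.Error.Message);
    else
        Console.WriteLine("Send finished.");
};

client.SendAsync(message, null);   // returns immediately; the event fires when done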
If more of an explanation is required, rather than code examples, I'll try to explain why you would use a Delegate or Event in the example you gave above.
Imagine that you wish to know if the email was sent or not after you called Email.Send().
In the Email Class you would have two events - one for a failed send, and one for a successful send.
When the Email Class sends without an error, it would look to see if there are any subscribers to the 'SuccessfulSend()' event, and if there are, it raises that event. This would then notify the subscribers that wanted to be informed if the send was successful so that they can perform some other task.
So you could have an event handler that is notified of the successful send, and in this handler, you could call another method (DoMoreWork()).
If the Email.Send() failed, you could be notified of this, and call another method that logs the failure for later reference.
With regards to delegates, if there were three different Email classes that used different functionality (or servers) to send mail, the client calling the Email.Send() method could supply the relevant Email class to use when sending the email.
The Email Class would use the IEmail interface, and the three Email Classes would implement IEmail (To, From, Subject, Body, Attachments, HTMLBody etc.), but could perform the interactions/rules in different ways.
One could require a Subject, another require an Attachment, one could use CDONTS, another use a different protocol.
The client could determine if it needs to use CDONTS depending on where it is installed, or it could be in an area of the app where an attachment is required, or would format the body in HTML.
What this does is remove the burden of that logic from the client, and from all of the places where these checks would otherwise have to be made, and move it into a single version of the relevant class.
Then the client simply calls the Email.Send() after providing the correct object to use in its constructor (or by using a settable property).
If a fix or change to a particular email object's code is required - it is carried out in one place rather than finding all areas in the client and updating there.
Imagine if your Email Class was used by several different applications...
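A bare-bones sketch of that idea (the interface members and the ImapEmail class here are hypothetical, just to show the shape):
public interface IEmail
{
    string To { get; set; }
    string Subject { get; set; }
    string Body { get; set; }
    void Send();
}

// One possible implementation; the POP and CDONTS versions would enforce their own rules.
public class ImapEmail : IEmail
{
    public string To { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }

    public void Send()
    {
        if (string.IsNullOrEmpty(Subject))
            throw new InvalidOperationException("This mailer requires a subject.");
        // ...protocol-specific sending goes here...
    }
}

// The client just picks an implementation and calls Send():
IEmail email = new ImapEmail { To = "me@me.com", Subject = "Hi", Body = "Hello" };
email.Send();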

Create a special dictionary<T, EventDelegate<T>> with generic delegate EventDelegate<T>(T e) where T : GameEventBase

I have a game with many classes that need to listen to events. But under certain circumstances, those classes are destroyed or disabled. When that happens, I need to remove their listening methods from the events manager delegate table.
I don't want to modify the EventsManager, and I would like each class that adds any listeners to it to know which ones it added.
I'm currently using something like this to add and remove the listeners in each class:
void AddEventsListeners() {
    EventsManager.AddListener<OnClickDown>(OnClickDownHandler);
    EventsManager.AddListener<OnClickUp>(OnClickUpHandler);
    EventsManager.AddListener<OnClick>(OnClickHandler);
}

void RemoveEventsListeners() {
    EventsManager.RemoveListener<OnClickDown>(OnClickDownHandler);
    EventsManager.RemoveListener<OnClickUp>(OnClickUpHandler);
    EventsManager.RemoveListener<OnClick>(OnClickHandler);
}
Those OnClick* event types are all derived from GameEventBase, and the handler methods are declared as
void OnClickDownHandler(OnClickDown e) {}
to match the delegate that is used in the EventsManager, which is declared as
delegate void EventDelegate<T>(T e) where T : GameEventBase;
I want to be able to fill a special hash table named, say, events, that has key-value pairs like
<T, EventDelegate<T>> where T : GameEventBase
That is, I want to be able to do events.Add(OnClick, OnClickHandler), where OnClickHandler is declared as
void OnClickHandler(OnClick e) {}
And I want adding to fail if OnClickHandler were defined, for example, as
void OnClickHandler(OtherGameEventBaseDerivedEvent e) {}
That requirement translates to me wanting type safety in that special dictionary.
One of my attempts involved not a dictionary, but a way to decide which method to call, between the AddListener and RemoveListener
I didn't like it because it introduces a parameter to the method and the code reads really weird with it. It does work, and does reduce the repetition, but is too ugly.
I created an AddOrRemoveAllListeners(AddOrRemove addOrRemove) method, which I populated with calls to AddOrRemoveListener for each event.
Now all I had to do was call AddOrRemoveAllListeners(AddOrRemove.Remove) or AddOrRemoveAllListeners(AddOrRemove.Add) to add or remove my listeners.
enum AddOrRemove {
    Remove,
    Add
}

void AddOrRemoveListener<T>(EventsManager.EventDelegate<T> del, AddOrRemove addOrRemove)
    where T : GameEventBase {
    switch (addOrRemove) {
        case AddOrRemove.Remove:
            EvMan.RemoveListener<T>(del);
            break;
        case AddOrRemove.Add:
            EvMan.AddListener<T>(del);
            break;
    }
}
Another attempt involved creating the type
class EventsDictionary<T> : Dictionary<T, EventsManager.EventDelegate<T>> where T : GameEventBase { }
And using it like this:
EventsDictionary<GameEventBase> events = new EventsDictionary<GameEventBase>();

void AddEventHandlerPairToEventsDictionary<T>(T e, EventsManager.EventDelegate<T> handler) where T : GameEventBase {
    if (!events.ContainsKey(e)) {
        events.Add(e, handler);
    }
}
But the events.Add(e, handler) fails and forces me to declare the handler as
EventsManager.EventDelegate<GameEventBase>
instead of
EventsManager.EventDelegate<T>
If I do that, I could add key-value pairs that don't make sense in that events dictionary, i.e., I lose the event-handling type safety.
I want to have such a structure because I don't like all those repetitions. It would be really bad if someone forgot to remove an event in the RemoveEventsListeners().
Having such a dictionary, I could use a foreach loop to add/remove the handlers to the EventsManager, which would be really nice.
As for performance, this is for a game and it needs to perform well. This adding/removing of listeners can happen a lot (sometimes hundreds of times per frame), because a lot of objects are destroyed (I can't leave null handlers in the EventsManager) or disabled (they need to stop listening to everything until enabled again) all the time. This means reflection, and anything that involves lots of casting/boxing or creates lots of garbage-collected objects, is out.
I'm, of course, open to suggestions as to other ways to approach this.
Thanks for your assistance!
I'm attaching the relevant parts of the EventsManager being used (the RemoveListener() is analogous to the AddListener()). The GameEventBase is just an empty shell. It isn't a .NET event, nor does it use EventArgs.
public class EventsManager : ManagedBase {
    public delegate void EventDelegate<T>(T e) where T : GameEventBase;
    private delegate void EventDelegate(GameEventBase e);

    private readonly Dictionary<Type, EventDelegate> delegates = new Dictionary<Type, EventDelegate>();
    private readonly Dictionary<Delegate, EventDelegate> delegateLookup = new Dictionary<Delegate, EventDelegate>();

    public void AddListener<T>(EventDelegate<T> del) where T : GameEventBase {
        // Early-out if we've already registered this delegate
        if (delegateLookup.ContainsKey(del)) {
            return;
        }

        // Create a new non-generic delegate which calls our generic one.
        // This is the delegate we actually invoke.
        EventDelegate internalDelegate = (e) => del((T) e);
        delegateLookup[del] = internalDelegate;

        EventDelegate tempDel;
        if (delegates.TryGetValue(typeof (T), out tempDel)) {
            delegates[typeof (T)] = tempDel + internalDelegate;
        }
        else {
            delegates[typeof (T)] = internalDelegate;
        }
    }

    public void Raise(GameEventBase e) {
        EventDelegate del;
        if (delegates.TryGetValue(e.GetType(), out del)) {
            del.Invoke(e);
        }
    }
}
Your problems seem to be solved if you use the EventAggregator pattern.
There is a short description of it by Martin Fowler
Some very good implementations of it already exist, for example in Caliburn.Micro and Microsoft Prism.
The general idea is that you simplify event registration and deregistration and have a single source of events for many objects.
I never had performance issues with it. You simply call _eventAggregator.Subscribe(this) when you want to start listening to events for an object and Unsubscribe when you want to stop. Wherever you want to fire an event, just publish it; the EventAggregator does the routing.
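To make the shape concrete, here is a minimal, hypothetical sketch of the pattern; it is not the Prism or Caliburn.Micro API (those subscribe whole objects rather than delegates), just the general idea:
public class EventAggregator
{
    private readonly Dictionary<Type, List<Delegate>> handlers = new Dictionary<Type, List<Delegate>>();

    public void Subscribe<TEvent>(Action<TEvent> handler)
    {
        List<Delegate> list;
        if (!handlers.TryGetValue(typeof(TEvent), out list))
            handlers[typeof(TEvent)] = list = new List<Delegate>();
        list.Add(handler);
    }

    public void Unsubscribe<TEvent>(Action<TEvent> handler)
    {
        List<Delegate> list;
        if (handlers.TryGetValue(typeof(TEvent), out list))
            list.Remove(handler);
    }

    public void Publish<TEvent>(TEvent message)
    {
        List<Delegate> list;
        if (handlers.TryGetValue(typeof(TEvent), out list))
            foreach (Action<TEvent> handler in list.ToArray())
                handler(message);
    }
}

// Usage with a hypothetical event type:
//   aggregator.Subscribe<UnitDestroyed>(e => PlayExplosion());
//   aggregator.Publish(new UnitDestroyed());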
This once again looks like an XY problem. The OP seems to want a central place to handle event handlers, registration and disposal. The OP has gone down the route of trying to create a pattern that deals with this in a generic way, but has not looked into the state of the art regarding how this problem is typically solved. He has now come up against a problem in his design and is asking for a solution to THAT problem, rather than the original problem of event handler lifecycle management.
There are two good solutions to event handler registration lifecycle management that I know of in .NET.
Weak Event Handler
You state that "It would be really bad if someone forgot to remove an event in the RemoveEventsListeners()", yet you do not actually mention WHY it is bad. Typically the only reason this is bad is that the event handler will keep alive an object that should otherwise be collected. With weak-reference event handlers, the GC will still be able to collect your object, even when it is subscribed to an object that is still alive.
Rx.Net
Rx.Net abstracts event registrations into IDisposables, which you can tie to the object's lifetime, assuming of course you want to control the lifetime of the registrations.
However, I actually find the IObservable pattern much nicer to work with than the event handler pattern, mostly because C# lacks first-class support for event handlers (this is not the case in F#).
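For example (assuming the System.Reactive package and a hypothetical unit object exposing an EventHandler<EventArgs> event called Destroyed), the registration comes back as an IDisposable you can keep and dispose when the listener is destroyed or disabled:
// Wrap an ordinary .NET event as an IObservable; subscribing returns an IDisposable.
IDisposable subscription = Observable
    .FromEventPattern<EventArgs>(h => unit.Destroyed += h, h => unit.Destroyed -= h)
    .Subscribe(pattern => Console.WriteLine("Destroyed: " + pattern.Sender));

// When the listener itself is destroyed or disabled:
subscription.Dispose();   // the underlying handler is detached for you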
F#
Most of your problems will have stemmed from the short-sighted design of the events keyword in C# (specifically, not making events a first-class construct). F#, however, does support first-class events, and thus should be able to support the pattern you are trying to construct.
Thus with this option you should scrap your code base and rewrite it in F#.
*EDIT: added tongue-in-cheek option of rewriting in F#.

Calling designer-built event handlers

I'm currently tasked with cleaning up, bug fixing and optimising a form in WinForms (3000 lines of code in one .cs file; it's getting a bit ugly!). I've noticed a few obvious bad practices and some redundant calls already which I could sort out relatively easily.
However, there's one thing that is popping up a lot which seems to me like bad practice, but I can't actually back it up with any documentation. I could be completely wrong.
private void datePicker_DateChanged(object sender, EventArgs e)
{
    tabControl_SelectedIndexChanged(sender, e);
}

private void comboBox_SelectedIndexChanged(object sender, EventArgs e)
{
    tabControl_SelectedIndexChanged(sender, e);
}
My first concern is that the method will receive a sender object that is the date picker or the combo box, but does this matter? I asked myself: what is the sender object there for? Perhaps this is why it's there? I also find EventArgs by itself is pretty useless (as far as I'm aware) unless the class is inherited.
I know that neither the sender nor the EventArgs is used in the tabControl_SelectedIndexChanged method, so the code works fine. But what about possible future implications when some code is changed?
Should I change these to 3 different event handlers that all point to a simple void loadCurrentTab() method? Or perhaps I should get all 3 controls to call the same event handler, such as loadCurrentTab(sender, e)? Or just leave it as it is? Is it that important?
Should I change these to 3 different event handlers that all point to a simple void loadCurrentTab() method?
This would actually be my preference, in this scenario. This makes the intent very clear - all three event handlers are routing to one set of logic which (by design) doesn't pay attention to the sender or EventArgs.
My biggest issue with calling event handlers directly (aside from it being poor practice) is what you see when you glance at a call stack: in your case, you would see the selected index of the tab control apparently changing when that did not actually happen.
It's also good practice not to have unused parameters in your method calls. There are a handful of exceptions (event handling for one), but for any code that I write, I try to be sure I always use what I'm providing.
To start with, it's not all that important. Consider not fixing what isn't broken.
The preferred approach would be to have several handlers all calling a special method (not an event handler).
private void datePicker_DateChanged(object sender, EventArgs e)
{
    // tabControl_SelectedIndexChanged(sender, e);
    RefreshCurrentTabControl();
}
Calling another event handler is obviously not good practice, because if tabControl_SelectedIndexChanged changes over time and starts to depend on its sender, the code will fail or start to work unexpectedly.
I'd separate the shared logic into a third method like you said, loadCurrentTab(). You may also consider using a command pattern to handle events; it's a good way to separate business logic from the GUI.
For example:
interface ICommand {
    void Execute();
}

interface IEventCommand : ICommand {
    void Execute(object sender, EventArgs e);
}

class CommandAttribute : Attribute {
    ...
}

class MyCommand : IEventCommand {
    ...
}
Implement some manager/executor, attach commands using reflection, etc...
And then you can attach command like this:
[Command(typeof(MyCommand), new [] {"Click"})]
private Button m_oButton = new Button();
All necessary parameters are auto-magically ;) passed to the command.
Yep, that's bad practice: you twiddle with one and break the other in ignorance. A classic error would be as simple as ((ComboBox)sender).SomeProperty, because "it can only be called from the ComboBox".
It depends on how far you can afford to take the refactoring, but my instant reaction would be at least a "RefreshTabControl" method to do the work, and then get the event handlers to call it. If you have to expand upon that, i.e. the sender or EventArgs matters, then some form of delegation as suggested, or (not as clever, but much better than what you have) an overload, would do.
e.g.
void RefreshTabControl(ComboBox argBox, EventArgs e)
and then a cast and null check in the event handler; at least that would be a statement of intent.
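i.e. something along these lines (hypothetical handler, wired to the overload above):
private void comboBox_SelectedIndexChanged(object sender, EventArgs e)
{
    var box = sender as ComboBox;
    if (box != null)            // statement of intent: only a ComboBox is expected here
        RefreshTabControl(box, e);
}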
What you have now is a bug waiting to happen.

Code-friendly version of event signature in .NET

Preceding posts:
Event Signature in .NET — Using a Strong Typed 'Sender'?
In a C# event handler, why must the “sender” parameter be an object?
Microsoft's conventions and guidelines force .NET users to use a special pattern for creating, raising and handling events in .NET.
Event design guidelines http://msdn.microsoft.com/en-us/library/ms229011.aspx state that
Citation:
The event-handler signature observes the following conventions:
The return type is Void.
The first parameter is named sender and is of type Object. This is the object that raised the event.
The second parameter is named e and is of type EventArgs or a derived class of EventArgs. This is the event-specific data.
The method takes exactly two parameters.
These conventions tell developers that the (following) shorter and more obvious code is evil:
public delegate void ConnectionEventHandler(Server sender, Connection connection);

public partial class Server
{
    protected virtual void OnClientConnected(Connection connection)
    {
        if (ClientConnected != null) ClientConnected(this, connection);
    }

    public event ConnectionEventHandler ClientConnected;
}
and the (following) longer and less obvious code is good:
public delegate void ConnectionEventHandler(object sender, ConnectionEventArgs e);

public class ConnectionEventArgs : EventArgs
{
    public Connection Connection { get; private set; }

    public ConnectionEventArgs(Connection connection)
    {
        this.Connection = connection;
    }
}

public partial class Server
{
    protected virtual void OnClientConnected(Connection connection)
    {
        if (ClientConnected != null) ClientConnected(this, new ConnectionEventArgs(connection));
    }

    public event ConnectionEventHandler ClientConnected;
}
These guidelines do not state why it is so important to follow these conventions, leaving developers to act like monkeys who don't know why or what they are doing.
IMHO, Microsoft's event signature conventions for .NET are bad for your code because they cause additional zero-efficiency effort to be spent on coding, coding, coding:
Coding "(MyObject)sender" casts (not to mention the 99% of situations that don't require the sender at all)
Coding a derived "MyEventArgs" class for the data to be passed inside the event handler.
Coding dereferences (calling "e.MyData" when the data is required, instead of just "data")
It's not that hard to make this effort, but practically speaking, what are we losing by not conforming to Microsoft's conventions, except that people take you for a heretic because your act of confrontation with Microsoft's conventions verges on blasphemy :)
Do you agree?
The problems you will have:
When you add another argument, you will have to change your event handler signature.
When a programmer first looks at your code, your event handlers will not look like event handlers.
The latter, especially, can waste far more time than writing a 5-line class.
Regarding having a strongly-typed sender, I've often wondered that myself.
Regarding the EventArgs, I'd still recommend you use an intermediate EventArgs class because you may want to add event information in the future which you don't currently foresee. If you've used a specific EventArgs class all along, you can simply change the class itself and the code where it gets fired. If you pass the Connection as per your example, you'd have to refactor every event handler.
Edit
Jim Mischel made a good point in his comments. By making the sender an object, we enable the same event method to potentially be reused to handle a variety of events. For example, let's say that a grid needs to update itself if:
the user clicks a "refresh" button, or
the system detects that a new entry has been loaded from the server.
You could then say something like this:
serverBus.EntryReceived += RefreshNeededHandler;
refreshButton.Click += RefreshNeededHandler;
...
public void RefreshNeededHandler(object sender, EventArgs args)
{
...
}
Of course, in practice, I have pretty much never had any call for this kind of reuse, whereas the first thing I tend to do in many, many cases is cast the sender to the object type that I know it has to be. If I want to reuse handlers like this, I think it would be easy enough to make two handlers that both call the same convenience method. For me, an event handler is conceptually supposed to handle a specific type of event on a particular group of objects. So I am not personally convinced that the object sender approach is the best convention.
However, I can imagine cases where this would be extremely handy, like if you want to log every event that gets fired.
The biggest problem I see in not following the convention is that you're going to confuse developers who are used to handling events in the way that the runtime library does. I won't say that the convention is good or bad, but it's certainly not evil. .NET developers know and understand how to work with events that are written in conformance with Microsoft's guidelines. Creating your own event handling mechanism on top of that may be more efficient at runtime and might even lead to code that you think is cleaner. But it's going to be different and you'll end up with two event handling "standards" in your program.
My position is that it's better to use a single less-than-ideal standard (as long as it's not horribly broken) than to have two competing standards.
I use strongly typed events (instead of object, as it saves me having to cast); it really isn't that hard to understand: "oh look, they've used a type that isn't an object".
As for the EventArgs, you should use an intermediate class in case the object changes, as per StriplingWarrior's answer.
I don't understand why devs would get confused over it?

Is it poor form for a C# class to subscribe to its own published events?

I'm probably just being neurotic, but I regularly find myself in situations in which I have a class that publishes an event, and I find it convenient to subscribe to this event from within the class itself (e.g. in the constructor), rather than only subscribing from external classes.
This sounds reasonable to me, but I can't help the nagging feeling that it's a poor practice, for the simple reason that I'm always faced with the question: "Why not perform the actions that you'd provide in the event handler in the code which fires the event?"
public class Button
{
    public Button()
    {
        this.Click += someHandler; // bad practice?
    }

    public event EventHandler Click;

    public void HandleInput()
    {
        if (someInputCondition)
        {
            // Perform necessary actions here rather than
            // subscribing in the constructor?
            this.Click(this, ...);
        }
    }
}
Are there any drawbacks to subscribing to your own events?
This sounds reasonable to me, but I can't help the nagging feeling that it's a poor practice, for the simple reason that I'm always faced with the question: "Why not perform the actions that you'd provide in the event handler in the code which fires the event?"
To answer that question, consider partial class scenarios. Suppose you have a base type B. You run an automated tool that decorates B by extending it to derived class D. Your tool generates a partial class so that developers consuming D can further customize it for their own purposes.
In that case, it seems perfectly reasonable that the user-authored side of D would want to sign up to be called when events declared by B or the machine-generated side of D are raised by the machine-generated side of D.
That was the scenario we found ourselves in when designing VSTO many years ago. As it turns out, it was not difficult to do this in C# but it was quite tricky to get it all working in VB. I believe VB has made some tweaks to their event subscription model to make this easier.
That said: if you can avoid this, I would. If you're just making an event for internal subscription that seems like a bad code smell. Partial methods in C# 3 help out greatly here, since they make it easy and low-cost for the machine-generated side to call little notification functions in the user-generated side, without having to go to the trouble of publishing an event.
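A rough sketch of what that partial-method hand-off looks like (the class and method names here are hypothetical):
// Machine-generated half:
public partial class D
{
    // Declaration only; if the user half never implements it, the call below is compiled away.
    partial void OnConnectionOpened();

    private void OpenConnection()
    {
        // ...generated plumbing...
        OnConnectionOpened();   // cheap notification hook, no event required
    }
}

// User-written half:
public partial class D
{
    partial void OnConnectionOpened()
    {
        Console.WriteLine("Connection opened");
    }
}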
I see no problem with this. But if you handle the events in the same class you could also override the event method:
protected override void OnClick(EventArgs e)
{
    base.OnClick(e);
}
This is more efficient and gives you the power to swallow the event if necessary (by simply not calling base.OnClick()).
There's a very exotic problem caused by an internal optimization when doing this: due to the optimization, adding/removing event handlers is not thread-safe. It only applies to events that are used by the declaring type, like in your example.
Fortunately this has been changed with 4.0, but if you're on a previous version, you could run into it.
The issue is that "someHandler" will change the state of your object. Do you want this state change to happen before or after any "external" code is run by the event?
It is not clear at what point the state change will be made if you subscribe to the event; calling it in "HandleInput()" makes it a lot clearer when it will happen.
(Also, it is more usual to call "HandleInput()" something like "OnClick" and make it virtual so subclasses can override it.)
Having said the above, normally there is no great harm in subscribing to your own event; in UI classes that represent forms it is very common, but otherwise it tends to "surprise" a lot of people who read your code.
If your button class should be the first to receive the click event, you should write your code in the event method, e.g.:
protected virtual void OnClick(EventArgs e)
{
    //insert your code here
    if (this.Click != null)
    {
        this.Click(this, e);
    }
}
but if it's not necessary that your class is the first receiver, you can subscribe to the event normally.
If you take the ordinary System.Windows.Forms.Form class as an example, when you want to handle the Form_Load event (using the Visual Studio designer), it is handled in the class of the Form itself:
this.Load += new System.EventHandler(this.Form1_Load);

private void Form1_Load(object sender, EventArgs e)
{
}
So I think it is not a problem at all!
