What's the point of DSLs / fluent interfaces - c#

I was recently watching a webcast about how to create a fluent DSL and I have to admit, I don't understand the reasons why one would use such an approach (at least for the given example).
The webcast presented an image resizing class, that allows you to specify an input-image, resize it and save it to an output-file using the following syntax (using C#):
Sizer sizer = new Sizer();
sizer.FromImage(inputImage)
.ToLocation(outputImage)
.ReduceByPercent(50)
.OutputImageFormat(ImageFormat.Jpeg)
.Save();
I don't understand how this is better than a "conventional" method that takes some parameters:
sizer.ResizeImage(inputImage, outputImage, 0.5, ImageFormat.Jpeg);
From a usability point of view, this seems a lot easier to use, since it clearly tells you what the method expects as input. In contrast, with the fluent interface, nothing stops you from omitting/forgetting a parameter/method-call, for example:
sizer.ToLocation(outputImage).Save();
So on to my questions:
1 - Is there some way to improve the usability of a fluent interface (i.e. tell the user what he is expected to do)?
2 - Is this fluent interface approach just a replacement for the non-existent named method parameters in C#? Would named parameters make fluent interfaces obsolete, e.g. something similar to what Objective-C offers:
sizer.Resize(from:input, to:output, resizeBy:0.5, ..)
3 - Are fluent interfaces over-used simply because they are currently popular?
4 - Or was it just a bad example that was chosen for the webcast? In that case, tell me what the advantages of such an approach are, where does it make sense to use it.
BTW: I know about jquery, and see how easy it makes things, so I'm not looking for comments about that or other existing examples.
I'm more looking for some (general) comments to help me understand (for example) when to implement a fluent interface (instead of a classical class-library), and what to watch out for when implementing one.

2 - Is this fluent interface approach just a replacement for the non-existent named method parameters in C#? Would named parameters make fluent interfaces obsolete, e.g. something similar to what Objective-C offers:
Well, yes and no. The fluent interface gives you more flexibility. Something that could not be achieved with named params is:
sizer.FromImage(i)
.ReduceByPercent(x)
.Pixalize()
.ReduceByPercent(x)
.OutputImageFormat(ImageFormat.Jpeg)
.ToLocation(o)
.Save();
The FromImage, ToLocation and OutputImageFormat in the fluent interface smell a bit to me. Instead, I would have done something along these lines, which I think is much clearer:
new Sizer("bob.jpeg")
.ReduceByPercent(x)
.Pixalize()
.ReduceByPercent(x)
.Save("file.jpeg",ImageFormat.Jpeg);
Fluent interfaces have the same problems many programming techniques have: they can be misused, overused or underused. I think that when this technique is used effectively it can create a richer and more concise programming model. Even StringBuilder supports it.
var sb = new StringBuilder();
sb.AppendLine("Hello")
.AppendLine("World");

I would say that fluent interfaces are slightly overdone and I would think that you have picked just one such example.
I find fluent interfaces particularly strong when you are constructing a complex model with them. By "model" I mean, for example, a complex relationship of instantiated objects. The fluent interface is then a way to guide the developer to correctly construct instances of the semantic model. Such a fluent interface is an excellent way to separate the mechanics and relationships of a model from the "grammar" that you use to construct the model, essentially shielding details from the end user and reducing the available verbs to maybe just those relevant in a particular scenario.
Your example seems a bit like overkill.
I have lately built a small fluent interface on top of the SplitContainer from Windows Forms. Arguably, the semantic model of a hierarchy of controls is somewhat complex to construct correctly. By providing a small fluent API a developer can now declaratively express how his SplitContainer should work. Usage goes like this:
var s = new SplitBoxSetup();
s.AddVerticalSplit()
    .PanelOne().PlaceControl(() => new Label())
    .PanelTwo()
        .AddHorizontalSplit()
        .PanelOne().PlaceControl(() => new Label())
        .PanelTwo().PlaceControl(() => new Panel());
form.Controls.Add(s.TopControl);
I have now reduced the complex mechanics of the control hierarchy to a couple of verbs that are relevant for the issue at hand.
Hope this helps

Consider:
sizer.ResizeImage(inputImage, outputImage, 0.5, ImageFormat.Jpeg);
What if you used less clear variable names:
sizer.ResizeImage(i, o, x, ImageFormat.Jpeg);
Imagine you've printed this code out. It's harder to infer what these arguments are, as you don't have access to the method signature.
With the fluent interface, this is clearer:
sizer.FromImage(i)
.ToLocation(o)
.ReduceByPercent(x)
.OutputImageFormat(ImageFormat.Jpeg)
.Save();
Also, the order of methods is not important. This is equivalent:
sizer.FromImage(i)
.ReduceByPercent(x)
.OutputImageFormat(ImageFormat.Jpeg)
.ToLocation(o)
.Save();
In addition, perhaps you might have defaults for the output image format, and the reduction, so this could become:
sizer.FromImage(i)
.ToLocation(o)
.Save();
Achieving the same effect with the conventional method would require overloads or optional parameters.
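To make the mechanics concrete, here is a minimal sketch of how such a Sizer might be implemented (an assumption, not the webcast's actual code; the field names and defaults are mine). Each call merely records a setting and returns this; all the real work happens in Save(), which is why the call order does not matter and why defaults need no extra overloads.
using System.Drawing;
using System.Drawing.Imaging;

public class Sizer
{
    private Image _input;
    private string _output;
    private double _factor = 1.0;                    // default: keep original size
    private ImageFormat _format = ImageFormat.Jpeg;  // default output format

    public Sizer FromImage(Image input)           { _input = input;   return this; }
    public Sizer ToLocation(string output)        { _output = output; return this; }
    public Sizer ReduceByPercent(int percent)     { _factor = 1.0 - percent / 100.0; return this; }
    public Sizer OutputImageFormat(ImageFormat f) { _format = f;      return this; }

    public void Save()
    {
        // All the real work happens here, once everything has been collected,
        // which is why the order of the preceding calls is irrelevant.
        using (var resized = new Bitmap(_input,
            (int)(_input.Width * _factor), (int)(_input.Height * _factor)))
        {
            resized.Save(_output, _format);
        }
    }
}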

It's one way to implement things.
For objects that do nothing but manipulate the same item over and over again, there's nothing really wrong with it. Consider C++ streams: they're the ultimate example of this kind of interface. Every operation returns the stream again, so you can chain together another stream operation.
If you're doing LINQ, and doing manipulation of an object over and over, this makes some sense.
However, in your design, you have to be careful. What should the behavior be if you want to deviate halfway through? I.e.,
var obj1 = original.Shrink(0.50); // obj1 is now 50% of original
var obj2 = original.Shrink(0.75); // is obj2 now 75% of obj1, or is it 75% of original?
If obj2 is 75% of the original object, then that means you're making a full copy of the object every time (which has its advantages in many cases, like when you're trying to make two instances of the same thing, but slightly different).
If the methods simply manipulate the original object, then this kind of syntax is somewhat disingenuous. Those are manipulations on the object instead of manipulations to create a changed object.
Not all classes work like this, nor does it make sense to do this kind of design. For example, this style of design would have little to no usefulness in the design of a hardware driver or the core of a GUI application. As long as the design involves nothing but manipulating some data, this pattern isn't a bad one.

You should read Domain-Driven Design by Eric Evans to get some idea of why a DSL is considered a good design choice.
The book is full of good examples, best-practice advice and design patterns. Highly recommended.

It's possible to use a variation on a fluent interface to enforce certain combinations of optional parameters (e.g. require that at least one parameter from a group is present, and require that if a certain parameter is specified, some other parameter must be omitted). For example, one could provide functionality similar to Enumerable.Range, but with a syntax like IntRange.From(5).Upto(19) or IntRange.From(5).LessThan(10).StepBy(2) or IntRange.From(3).Count(19).StepBy(17). Compile-time enforcement of overly complex parameter requirements may require defining an annoying number of intermediate-value structures or classes, but the approach can prove useful in simpler cases.
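A rough sketch of the intermediate-type idea (illustrative assumptions, not an existing API; only the From/Upto/LessThan/StepBy stages are shown): each stage exposes only the calls that are legal next, so an incomplete or contradictory chain simply fails to compile.
using System.Collections.Generic;
using System.Linq;

public static class IntRange
{
    // From() is the only entry point, so every chain must start with it.
    public static IntRangeFrom From(int start) { return new IntRangeFrom(start); }
}

public class IntRangeFrom
{
    private readonly int _start;
    internal IntRangeFrom(int start) { _start = start; }

    // Terminates the chain with an inclusive upper bound.
    public IEnumerable<int> Upto(int endInclusive)
    {
        return Enumerable.Range(_start, endInclusive - _start + 1);
    }

    // Moves to the next stage; StepBy() only becomes available once a bound is given.
    public IntRangeBounded LessThan(int endExclusive)
    {
        return new IntRangeBounded(_start, endExclusive);
    }
}

public class IntRangeBounded
{
    private readonly int _start;
    private readonly int _endExclusive;
    internal IntRangeBounded(int start, int endExclusive) { _start = start; _endExclusive = endExclusive; }

    public IEnumerable<int> StepBy(int step)
    {
        for (int i = _start; i < _endExclusive; i += step)
            yield return i;
    }
}

// Usage: IntRange.From(5).Upto(19)  or  IntRange.From(5).LessThan(10).StepBy(2)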

Further to #sam-saffron's suggestion regarding the flexibility of a Fluent Interface when adding a new operation:
If we needed to add a new operation, such as Pixalize(), then, in the 'method with multiple parameters' scenario, this would require a new parameter to be added to the method signature. This may then require a modification to every invocation of this method throughout the codebase in order to add a value for this new parameter (unless the language in use would allow an optional parameter).
Hence, one possible benefit of a Fluent Interface is limiting the impact of future change.
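As an illustration (hypothetical signatures, not code from the webcast): the fluent style absorbs a new operation without touching existing call sites, whereas the multi-parameter method forces every caller to change unless the new parameter can be made optional.
using System.Drawing;
using System.Drawing.Imaging;

public class FluentSizer
{
    // Existing chainable operation (body omitted in this sketch).
    public FluentSizer ReduceByPercent(int percent) { return this; }

    // New chainable operation: purely additive, so call chains written
    // before Pixalize() existed keep compiling unchanged.
    public FluentSizer Pixalize() { return this; }

    public void Save() { }
}

public static class ConventionalApi
{
    // Parameter style: growing the signature breaks existing callers,
    // unless the language supports optional parameters (C# 4.0 and later do).
    public static void ResizeImage(Image input, string output, double factor,
                                   ImageFormat format, bool pixalize = false)
    {
        // body omitted in this sketch
    }
}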

Related

Are fluent interfaces a violation of the Command Query Separation Principle?

I started writing a fluent interface and took a look at an older piece Martin Fowler wrote on fluent interfaces (I hadn't realized he and Eric Evans coined the term). In the piece, Martin mentions that setters usually return an instance of the object being configured or worked on, which he says is a violation of CQS.
The common convention in the curly brace world is that modifier methods are void, which I like because it follows the principle of CommandQuerySeparation. This convention does get in the way of a fluent interface, so I'm inclined to suspend the convention for this case.
So if my fluent interface does something like:
myObject
.useRepository("Stuff")
.withTransactionSupport()
.retries(3)
.logWarnings()
.logErrors();
Is this truly a violation of CQS?
UPDATE: I broke out my sample to show logging warnings and errors as separate behaviors.
Yes, it is. All those methods are obviously returning something, and equally obviously they have side effects (judging from the fact that you don't do anything with the return value, yet you do bother to call them). Since the definition of CQS states that mutators should not return a value, we have a clear-cut violation on our hands.
But does it matter to you that CQS is violated? If the fluent interface makes you more productive all things considered, and if you consider it a well-known pattern with equally well-known benefits and drawbacks, why should it matter that it violates principle X on paper?
It violates this principle when it changes objects but not when it only returns a new object.
var newObject = myObject
.useRepository("Stuff")
.withTransactionSupport()
.retries(3)
.logWarningsAndErrors();
If myObject is unchanged after this statement, everything is OK. Generally speaking, a fluent interface violates the CQS principle if, and only if, it has side effects.
However, the question is whether your example represents a query at all. Does "fluent" necessarily mean "query"? It could probably just be perceived as an action-oriented fluent interface where the same object is passed from one action to the next.
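For contrast, here is a sketch of the side-effect-free variant described above (a hypothetical type, not the questioner's actual class): every call returns a new instance, so myObject itself is never mutated and the chain stays CQS-friendly.
public sealed class RepositoryConfig
{
    public string Repository { get; private set; }
    public bool TransactionSupport { get; private set; }
    public int RetryCount { get; private set; }

    public RepositoryConfig() { }

    private RepositoryConfig(string repository, bool transactionSupport, int retryCount)
    {
        Repository = repository;
        TransactionSupport = transactionSupport;
        RetryCount = retryCount;
    }

    // Each "mutator" builds and returns a modified copy; the receiver is untouched.
    public RepositoryConfig UseRepository(string name)
    {
        return new RepositoryConfig(name, TransactionSupport, RetryCount);
    }

    public RepositoryConfig WithTransactionSupport()
    {
        return new RepositoryConfig(Repository, true, RetryCount);
    }

    public RepositoryConfig Retries(int count)
    {
        return new RepositoryConfig(Repository, TransactionSupport, count);
    }
}

// var newObject = new RepositoryConfig().UseRepository("Stuff")
//                                       .WithTransactionSupport()
//                                       .Retries(3);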
No. The pattern here is "Configuration". Such configuration commands return the configuration object itself, as opposed to something unrelated to the command. A violation of Command/Query Separation would occur if the commands which serve the configuration purpose returned some unrelated data, for example:
if (myObject.UseRepository("Stuff") > 1 && myObject.UseRepository("Bla") < 5) {
    // oh, good, some invisible stuff internal to myObject is in right interval...
}
I think it depends on what those methods are doing. If each one is its own command, then yes, it could be breaking CQS.
However, you could fix this easily in two different ways:
1) Just don't chain the commands. Just do myObject.useRepository(".."), then call the next one, etc. But if the next item in the chain requires information from the previous one, you would be in trouble.
2) Instead of making each of these its own command, have the chained calls simply update data on a DTO directly. Then at the end, you run a method called .Configure() that sends the DTO to a single command that does all of the processing (a sketch of this follows below).
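A sketch of option 2, with made-up names: the chained calls are just setters on a plain DTO, and the single real command fires in Configure().
public class RepositorySettings
{
    public string RepositoryName { get; set; }
    public bool TransactionSupport { get; set; }
    public int Retries { get; set; }
}

public class RepositoryConfigurator
{
    private readonly RepositorySettings _settings = new RepositorySettings();

    // The chained calls only record data; they neither query nor act.
    public RepositoryConfigurator UseRepository(string name)
    {
        _settings.RepositoryName = name;
        return this;
    }

    public RepositoryConfigurator WithTransactionSupport()
    {
        _settings.TransactionSupport = true;
        return this;
    }

    public RepositoryConfigurator Retries(int count)
    {
        _settings.Retries = count;
        return this;
    }

    // The single command: the completed DTO is handed off for processing here.
    public void Configure()
    {
        // e.g. pass _settings to whatever actually builds the repository (omitted in this sketch)
    }
}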
If we set aside type-system-level DSL design, a fluent interface is exactly the same as method chaining.
o.A().B() is equivalent to a = o.A(); a.B()
It does violate command-query separation in a mutable data structure: we have to explicitly add a superfluous return this in the method implementation (here a and o refer to the same object). By the way, I prefer method cascading in that case.
However, we also often see it in immutable data structures, because a pure function has to return its result (here a and o refer to different objects). In that case it does not violate command-query separation.

Immutable class appropriate when instances are used in a "what-if" tool?

I have a class that basically represents the parameters of a model, and encapsulates the logic to calculate values of the model with those parameters. I'm trying to decide if this class should be immutable. In practice, instances of the model will be generated by fitting to some data set, so in that sense it makes sense (to me at least) for that instance to be immutable since it's tied to external data.
On the other hand, there will be a GUI to let a user do a "what-if" wherein they can change the parameters to see how it changes model values. So I could make the model mutable to make this easy, or create new copies every time a parameter is changed. The latter seems awkward, especially if there are, say, 5 parameters that could be ticked up and down individually... it seems like I would have to implement a SetX() method for each parameter that returns a copy, right?
Am I overthinking this, or is there a proper pattern to use here? (This is C# code, though I guess not really language-specific)
Consider carefully how the object is going to actually be used for your speculative analysis. For straightforward, one-off, let's-mutate-the-field-try-something-and-change-it-back scenarios, sure, just make it mutable. But suppose you want to pull out the big guns; then it becomes much nicer to have an immutable model.
Data d = whatever;
// What if we mutate X and Y? Which one maximizes the value of Foo(d)?
var query = from x in Range(0, 100)
            from y in Range(0, 100)
            let mutated = d.MutateX(x).MutateY(y)
            orderby Foo(mutated) descending
            select mutated;
var max = query.First();
And so on. With an immutable pattern it becomes much easier to write speculative queries, it becomes much easier to parallelize those queries across multiple cores, and so on.
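For reference, the query above assumes an immutable model shaped roughly like this (a sketch; X and Y stand in for the real model parameters):
public sealed class Data
{
    public double X { get; private set; }
    public double Y { get; private set; }

    public Data(double x, double y) { X = x; Y = y; }

    // Each "mutator" returns a modified copy and leaves the original untouched,
    // so speculative what-if queries can fan out safely from one fitted instance.
    public Data MutateX(double x) { return new Data(x, Y); }
    public Data MutateY(double y) { return new Data(X, y); }
}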
The latter seems awkward
Well that's your answer. The goal of API design is to make writing code as easy as possible. If a particular pattern makes it more difficult or awkward than the alternative, the alternative is probably correct.
I think you're probably overthinking this a little bit. While there is probably a very elegant design pattern for this that uses eight classes and four interfaces, I think the simplest route forward would be to make it a normal, mutable class. Think about your intention: you want a Model that can be loaded from external data (perhaps a static method returning a Model instance) with parameters that can change according to user input. This seems like a use case for your everyday, garden variety Class.
You may also choose to separate your classes into a Data class and a Strategy class, the second of which contains the changeable parameters and uses something like a Strategy pattern to calculate the results.

Getting my head around object oriented programming

I am an entry-level .NET developer using it to develop web sites. I started with classic ASP and last year made the jump with a short C# book.
As I developed I learned more and started to see that, coming from classic ASP, I had always been using C# like a scripting language.
For example, in my last project I needed to encode video on the web server and wrote code like this:
public class Encoder
{
    Public static bool Encode(string videopath) {
        ...snip...
        return true;
    }
}
While searching for samples related to my project, I've seen people doing this:
public class Encoder
{
    Public static Encode(string videopath) {
        EncodedVideo encoded = new EncodedVideo();
        ...snip...
        encoded.EncodedVideoPath = outputFile;
        encoded.Success = true;
        ...snip...
    }
}
public class EncodedVideo
{
    public string EncodedVideoPath { get; set; }
    public bool Success { get; set; }
}
As I understand it, the second example is more object-oriented, but I don't see the point of using the EncodedVideo object.
Am I doing something wrong? Is it really necessary to use this sort of code in a web app?
Someone once explained OO to me using a soda can.
A soda can is an object, and an object has many properties and many methods. For example:
SodaCan.Drink();
SodaCan.Crush();
SodaCan.PourSomeForMyHomies();
etc...
The purpose of OO Design is theoretically to write a line of code once, and have abstraction between objects.
This means that Coder.Consume(SodaCan.contents); is relevant to your question.
An encoded video is not the same thing as an encoder. An encoder returns an encoded video. An encoded video may use an encoder, but they are two separate objects. Because they are two different entities serving different functions, they simply work together.
Much like me consuming a soda can does not mean that I am a soda can.
Neither example is really complete enough to evaluate. The second example seems to be more complex than the first, but without knowing how it will be used it's difficult to tell.
Object-oriented design is at its best when it allows you to either:
1) Keep related information and/or functions together (instead of using parallel arrays or the like).
Or
2) Take advantage of inheritance and interface implementation.
Your second example MIGHT be keeping the data together better, if it returns the EncodedVideo object AND the success or failure of the method needs to be kept track of after the fact. In this case you would be replacing a combination of a boolean "success" variable and a path with a single object, clearly documenting the relation of the two pieces of data.
Another possibility not touched on by either example is using inheritance to better organize the encoding process. You could have a single base class that handles the "grunt work" of opening the file, copying the data, etc. and then inherit from that class for each different type of encoding you need to perform. In this case much of your code can be written directly against the base class, without needing to worry about what kind of encoding is actually being performed.
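A sketch of that inheritance idea (hypothetical types, reusing the EncodedVideo class from the question): the base class owns the shared file handling, and each subclass supplies only the part that differs.
using System.IO;

public abstract class VideoEncoderBase
{
    public EncodedVideo Encode(string inputPath, string outputPath)
    {
        byte[] raw = File.ReadAllBytes(inputPath);      // shared "grunt work"
        byte[] encoded = EncodeFrames(raw);             // the part that varies
        File.WriteAllBytes(outputPath, encoded);
        return new EncodedVideo { EncodedVideoPath = outputPath, Success = true };
    }

    // Each concrete encoder implements only its own codec step.
    protected abstract byte[] EncodeFrames(byte[] rawVideo);
}

public class Mpeg4Encoder : VideoEncoderBase
{
    protected override byte[] EncodeFrames(byte[] rawVideo)
    {
        // The real codec work would go here; this sketch just passes the data through.
        return rawVideo;
    }
}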
Actually the first looks better to me, but shouldn't return anything (or return an encoded video object).
Usually we assume methods complete successfully without exceptional errors - if exceptional errors are encountered, we throw an exception.
Object oriented programming is fundamentally about organization. You can program in an OO way even without an OO language like C#. By grouping related functions and data together, it is easier to deal with increasingly complex projects.
You aren't necessarily doing something wrong. The question of what paradigm works best is highly debatable and isn't likely to have a clear winner, as there are so many different ways to measure "good" code, e.g. maintainable, scalable, performant, re-usable, modular, etc.
It isn't necessary, but it can be useful in some cases. Take a look at various MVC examples to see OO code. Generally, OO code has the advantage of being re-usable, so that what was written for one application can be used for others over and over again. For example, look at log4net, a logging framework that many people use.
The way you structure an OO program--which objects you use and how you arrange them--really depends on many factors: the age of the project, the overall size of the project, the complexity of the problem, and a bit of personal taste.
The best advice I can think of that will wrap all the reasons for OO into one quick lesson is something I picked up learning design patterns: "Encapsulate the parts that change." The value of OO is to reuse elements that will be repeated without writing additional code. But obviously you only care to "wrap up" code into objects if it will actually be reused or modified in the future, thus you should figure out what is likely to change and make objects out of it.
In your example, the reason to use the second set up may be that you can reuse the EncodedVideo object else where in the program. Anytime you need to deal with EncodedVideo, you don't concern yourself with the "how do I encode and use video", you just use the object you have and trust it to handle the logic. It may also be valuable to encapsulate the encoding logic if it's complex, and likely to change. Then you isolate changes to just one place in the code, rather than many potential places where you might have used the object.
(Brief aside: The particular example you posted isn't valid C# code. In the second example, the static method has no return type, though I assume you meant to have it return the EncodedVideo object.)
This is a design question, so the answer depends on what you need, meaning there's no right or wrong answer. The first method is simpler, but in the second case you encapsulate the encoding logic in the EncodedVideo class and can easily change the logic (based on the incoming video type, for instance) in your Encoder class.
I think the first example seems simpler, except I would avoid using statics whenever possible to increase testability.
public class Encoder
{
    private string videoPath;

    public Encoder(string videoPath) {
        this.videoPath = videoPath;
    }

    public bool Encode() {
        ...snip...
        return true;
    }
}
Is OOP necessary? No.
Is OOP a good idea? Yes.
You're not necessarily doing something wrong. Maybe there's a better way, maybe not.
OOP, in general, promotes modularity, extensibility, and ease of maintenance. This goes for web applications, too.
In your specific Encoder/EncodedVideo example, I don't know if it makes sense to use two discrete objects to accomplish this task, because it depends on a lot of things.
For example, is the data stored in EncodedVideo only ever used within the Encode() method? Then it might not make sense to use a separate object.
However, if other parts of the application need to know some of the information that's in EncodedVideo, such as the path or whether the status is successful, then it's good to have an EncodedVideo object that can be passed around in the rest of the application. In this case, Encode() could return an object of type EncodedVideo rather than a bool, making that data available to the rest of your app.
Unless you want to reuse the EncodedVideo class for something else, then (from the code you've given) I think your method is perfectly acceptable for this task. Unless there's unrelated functionality in the EncodedVideo and Encoder classes, or they form a massive lump of code that should be split up, you're not really lowering the cohesion of your classes, which is fine. Assuming you don't need to reuse EncodedVideo and the classes are cohesive, splitting them would probably just create unnecessary classes and increase coupling.
Remember: 1. the OO philosophy can be quite subjective and there's no single right answer, 2. you can always refactor later :p

Ab-using languages

Some time ago I had to address a certain C# design problem when I was implementing a JavaScript code-generation framework. One of the solutions I came up with was using the “using” keyword in a totally different (hackish, if you please) way. I used it as syntactic sugar (well, originally it is that anyway) for building a hierarchical code structure. Something that looked like this:
CodeBuilder cb = new CodeBuilder();
using(cb.Function("foo"))
{
    // Generate some function code
    cb.Add(someStatement);
    cb.Add(someOtherStatement);

    using(cb.While(someCondition))
    {
        cb.Add(someLoopStatement);
        // Generate some more code
    }
}
It works because the Function and While methods return an IDisposable object that, upon disposal, tells the builder to close the current scope. Such a thing can be helpful for any tree-like structure that needs to be hard-coded.
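Roughly, the mechanism looks like this (a simplified sketch rather than the real implementation):
using System;

public class CodeBuilder
{
    // The token handed back by Function()/While(); disposing it closes the scope.
    private sealed class ScopeCloser : IDisposable
    {
        private readonly CodeBuilder _owner;
        public ScopeCloser(CodeBuilder owner) { _owner = owner; }
        public void Dispose() { _owner.CloseScope(); }
    }

    public IDisposable Function(string name)
    {
        OpenScope("function " + name + "() {");
        return new ScopeCloser(this);
    }

    public IDisposable While(string condition)
    {
        OpenScope("while (" + condition + ") {");
        return new ScopeCloser(this);
    }

    public void Add(string statement)     { /* emit the statement at the current indent (omitted in this sketch) */ }

    private void OpenScope(string header) { /* write header and increase indent (omitted in this sketch) */ }
    private void CloseScope()             { /* decrease indent and write "}" (omitted in this sketch) */ }
}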
Do you think such “hacks” are justified? You could say that in C++, for example, many features such as templates and operator overloading get over-used and abused, and this behavior is encouraged by many (look at Boost, for example). On the other hand, you could say that many modern languages discourage such abuse and give you specific, much more restricted features.
My example is, of course, somewhat esoteric, but real. So what do you think about the specific hack and of the whole issue? Have you encountered similar dilemmas? How much abuse can you tolerate?
I think this is something that has carried over from languages like Ruby, which have much more extensive mechanisms for letting you create languages within your language (google for "dsl" or "domain specific languages" if you want to know more). C# is less flexible in this respect.
I think creating DSLs in this way is a good thing. It makes for more readable code. Using blocks can be a useful part of a DSL in C#. In this case, though, I think there are better alternatives. The use of using in this case strays a bit too far from its original purpose. This can confuse the reader. I like Anton Gogolev's solution better, for example.
Off-topic, but just take a look at how pretty this becomes with lambdas:
var codeBuilder = new CodeBuilder();
codeBuilder.DefineFunction("Foo", x =>
{
    codeBuilder.While(condition, y =>
    {
    });
});
It would be better if the disposable object returned from cb.Function(name) were the object on which the statements should be added. It is fine if, internally, this function builder passes the calls through to private/internal methods on the CodeBuilder; the point is that the sequence is clear to public consumers.
So long as the Dispose implementation would make the following code cause a runtime error:
CodeBuilder cb = new CodeBuilder();
var function = cb.Function("foo");
using(function)
{
    // Generate some function code
    function.Add(someStatement);
}
function.Add(something); // this should throw
Then the behaviour is intuitive and relatively reasonable, and correct usage (below) is encouraged and prevents this from happening:
CodeBuilder cb = new CodeBuilder();
using(var function = cb.Function("foo"))
{
    // Generate some function code
    function.Add(someStatement);
}
I have to ask why you are using your own classes rather than the provided CodeDomProvider implementations, though. (There are good reasons for this, notably that the current implementation lacks many of the C# 3.0 features), but since you don't mention it yourself...
Edit: I would second Anton's suggestion to use lambdas. The readability is much improved (and you have the option of allowing expression trees).
If you go by the strictest definitions of IDisposable then this is an abuse. It's meant to be used as a method for releasing native resources in a deterministic fashion by a managed object.
The use of IDisposable has evolved to essentially be used by "any object which should have a deterministic lifetime". I'm not saying this is right or wrong, but that's how many APIs and users are choosing to use IDisposable. Given that definition, it's not an abuse.
I wouldn't consider it terribly bad abuse, but I also wouldn't consider it good form because of the cognitive wall you're building for your maintenance developers. The using statement implies a certain class of lifetime management. This is fine in its usual uses and in slightly customized ones (like #heeen's reference to an RAII analogue), but those situations still keep the spirit of the using statement intact.
In your particular case, I might argue that a more functional approach like #Anton Gogolev's would be more in the spirit of the language as well as maintainable.
As to your primary question, I think each such hack must ultimately stand on its own merits as the "best" solution for a particular language in a particular situation. The definition of best is subjective, of course, but there are definitely times (especially when the external constraints of budgets and schedules are thrown into the mix) where a slightly more hackish approach is the only reasonable answer.
I often "abuse" using blocks. I think they provide a great way of defining scope. I have a whole series of objects that I use for capture and restoring state (e.g. of Combo boxes or the mouse pointer) during operations that may change the state. I also use them for creating and dropping database connections.
E.g.:
using(_cursorStack.ChangeCursor(System.Windows.Forms.Cursors.WaitCursor))
{
...
}
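_cursorStack and ChangeCursor are my own helpers; a minimal sketch of the same state-restoring idea (not the actual code) could look something like this:
using System;
using System.Windows.Forms;

public sealed class CursorScope : IDisposable
{
    private readonly Cursor _previous;

    public CursorScope(Cursor temporary)
    {
        _previous = Cursor.Current;   // remember the state we are about to change
        Cursor.Current = temporary;
    }

    public void Dispose()
    {
        Cursor.Current = _previous;   // restore it when the using block ends
    }
}

// using (new CursorScope(Cursors.WaitCursor))
// {
//     // long-running work
// }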
I wouldn't call it abuse. Looks more like a fancied up RAII technique to me. People have been using these for things like monitors.

When would I use a delegate in asp.net?

I'm always looking for a way to use all the tools I can and to stretch myself just beyond where I am at. But as much as I have read about delegates, I can never find a place to use them (like interfaces, generics, and a lot of other stuff, but I digress). I was hoping someone could show me when and how they have used a delegate in web programming for ASP.NET C# (2.0 and above).
Thank you and if this wrong for Stack Overflow, please just let me know.
bdukes is right about events. But you're not limited to just using delegates with events.
Study the classic Observer Pattern for more examples on using delegates. Some text on the pattern points toward an event model, but from a raw learning perspective, you don't have to use events.
One thing to remember: a delegate is just another type that can be used and passed around, similar to primitive types such as int. And just like int, a delegate has its own special characteristics that you can act on in your code when you consume the delegate type.
To get a really great handle on the subject and on some of its more advanced and detailed aspects, get Joe Duffy's book on the .NET Framework 2.0.
Well, whenever you handle an event, you're using a delegate.
To answer your second question first, I think this is a great question for StackOverflow!
On the first, one example would be sorting. The Sort() method on List<T> takes a delegate to do the sorting, as does the Find() method. I'm not a huge fan of sorting in the database, so I like to use Sort() on my result sets. After all, the order of a list is much more of a UI issue (typically) than a business rule issue.
Edit: I've added my reasons for sorting outside the DB to the appropriate question here.
Edit: The comparison function used in the sort routine is a delegate. Therefore, if you sort a List<T> using the .Sort(Comparison<T>) method, the Comparison<T> you pass to the sort function is a delegate. See the .Sort(Comparison<T>) documentation.
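For example (made-up data, but Sort and Find here are the real List<T> API):
using System;
using System.Collections.Generic;

class SortDemo
{
    static void Main()
    {
        var names = new List<string> { "Charlie", "alice", "Bob" };

        // The Comparison<T> argument is a delegate: Sort calls back into it
        // to decide the ordering, so the sorting rule lives at the call site.
        names.Sort((a, b) => string.Compare(a, b, StringComparison.OrdinalIgnoreCase));

        // Find takes a Predicate<T> delegate in the same spirit.
        string firstB = names.Find(n => n.StartsWith("B", StringComparison.OrdinalIgnoreCase));

        Console.WriteLine(string.Join(", ", names) + " / " + firstB);
    }
}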
Another quick example off the top of my head would be unit testing with Rhino Mocks. A lot of the things you can do with Rhino Mocks utilize delegates and lambda expressions.
You can use delegates whenever you know you will want to take some action, but the details of that action will depend on circumstances.
Among other things, we use delegates for:
Sorting and filtering, especially if the user can choose between different sorting/filtering criteria
Simplifying code. For example, a longish process where the beginning and end are always the same, but a small middle bit varies. Instead of having a hard-to-read if block in the middle, I have one method for the whole process, and pass in a delegate (Action) for the middle bit.
I have a very useful ToString method in my presentation layer which converts a collection of anything into a comma-separated list. The method parameters are an IEnumerable<T> and a Func<T, string> delegate for turning each T in the collection into a string (a rough sketch follows below). It works equally well for stringing together Users by their FirstName or for listing Projects by their ID.
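A rough sketch of what such a helper can look like (an approximation, not the exact method described above):
using System;
using System.Collections.Generic;
using System.Linq;

public static class PresentationHelpers
{
    // The Func<T, string> delegate lets the caller decide how each item is rendered.
    public static string ToCommaSeparatedList<T>(IEnumerable<T> items, Func<T, string> toText)
    {
        return string.Join(", ", items.Select(toText));
    }
}

// PresentationHelpers.ToCommaSeparatedList(users, u => u.FirstName);
// PresentationHelpers.ToCommaSeparatedList(projects, p => p.Id.ToString());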
There isn't anything special to ASP.NET related to delegates (besides considerations when using async stuff, which is a whole different question), so I will point you to other questions instead:
Delegate Usage : Business Applications
Where do I use delegates?
Another example would be to publish events for user controls.
E.g.:
// In your user control
public delegate void evtSomething(SomeData oYourData);
public event evtSomething OnSomething;

// In the page using your user control
ucYourUserControl.OnSomething += ucYourUserControl_OnSomething;

// Then implement the function
protected void ucYourUserControl_OnSomething(SomeData oYourData)
{
    ...
}
Recently I used delegates for "delegating" the checking of permissions.
public Func CheckPermission;
This way, the CheckPermission function can be shared by various controls or classes, say in a static class or a utilities class, and still be managed centrally, also avoiding interface explosion; just a thought.
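A sketch of the idea with an assumed delegate signature (the Func declaration above doesn't show its type arguments, so this is a guess):
using System;

public static class Permissions
{
    // Assumed signature: takes a permission name, returns whether the current user has it.
    public static Func<string, bool> CheckPermission;
}

// Wired up once, centrally, at application start:
//   Permissions.CheckPermission = name => currentUserRoles.Contains(name);
//
// Consumed from any control or class:
//   saveButton.Enabled = Permissions.CheckPermission("CanSave");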
