Best practice for Undo/Redo implementation in C# [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I need to implement an Undo/Redo framework for my Windows application (an editor similar to PowerPoint). What is the best practice to follow, and how should I handle all the property changes of my objects and their reflection in the UI?

There are two classic patterns to use. The first is the memento pattern which is used to store snapshots of your complete object state. This is perhaps more system intensive than the command pattern, but it allows rollback very simply to an older snapshot. You could store the snapshots on disk a la PaintShop/PhotoShop or keep them in memory for smaller objects that don't require persistence. What you're doing is exactly what this pattern was designed for, so it should fit the bill slightly better than the Command Pattern suggested by others.
Also, an additional note is that because it doesn't require you to have reciprocal commands to undo something that was previously done, it means that any potentially one way functions [such as hashing or encryption] which can't be undone trivially using reciprocal commands can still be undone very simply by just rolling back to an older snapshot.
As others have pointed out, there is also the command pattern, which is potentially less resource intensive, so I will concede that in specific cases where:
There is a large object state to be persisted and/or
There are no destructive methods and
Where reciprocal commands can be used very trivially to reverse any action taken
the command pattern may be a better fit [but not necessarily, it will depend very much on the situation]. In other cases, I would use the memento pattern.
I would probably refrain from using a mashup of the two because I tend to care about the developer that's going to come in behind me and maintain my code as well as it being my ethical responsibility to my employer to make that process as simple and inexpensive as possible. I see a mashup of the two patterns easily becoming an unmaintainable rat hole of discomfort that would be expensive to maintain.
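For illustration, a minimal memento-style sketch (the DocumentState type and its members are hypothetical, not from the question): the editor pushes a deep copy of the state before each change and swaps the whole state back on undo.

using System.Collections.Generic;

// Hypothetical document state; a memento is simply a deep copy of it.
public class DocumentState
{
    public string Text { get; set; }
    public int CaretPosition { get; set; }

    public DocumentState Clone()
    {
        return new DocumentState { Text = Text, CaretPosition = CaretPosition };
    }
}

public class DocumentHistory
{
    private readonly Stack<DocumentState> _undo = new Stack<DocumentState>();
    private readonly Stack<DocumentState> _redo = new Stack<DocumentState>();

    // Call before every mutating operation.
    public void Snapshot(DocumentState current)
    {
        _undo.Push(current.Clone());
        _redo.Clear();
    }

    public DocumentState Undo(DocumentState current)
    {
        if (_undo.Count == 0) return current;
        _redo.Push(current.Clone());
        return _undo.Pop();
    }

    public DocumentState Redo(DocumentState current)
    {
        if (_redo.Count == 0) return current;
        _undo.Push(current.Clone());
        return _redo.Pop();
    }
}

Restoring is just replacing the current state with the popped snapshot, which is why one-way operations such as hashing or encryption are no harder to undo than anything else.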

There are three approaches here that are viable. Memento Pattern (Snapshots), Command Pattern and State Diffing. They all have advantages and disadvantages and it really comes down to your use case, what data you are working with and what you are willing to implement.
I would go with State Diffing if you can get away with it as it combines memory reduction with ease of implementation and maintainability.
I'm going to quote an article describing the three approaches (Reference below).
Note that VoxelShop mentioned in the article is open source. So you can take a look at the complexity of the command pattern here:
https://github.com/simlu/voxelshop/tree/develop/src/main/java/com/vitco/app/core/data/history
Below is an adapted excerpt from the article. However I do recommend that you read it in full.
Memento Pattern
Each history state stores a full copy. An action creates a new state and a pointer is used to move between the states to allow for undo and redo.
Pros
Implementation is independent of the applied action. Once implemented we can add actions without worrying about breaking history.
It is fast to advance to a predefined position in history. This is interesting when the actions applied between current and desired history position are computationally expensive.
Cons
Memory Requirements can be significantly higher compared to other approaches.
Loading time can be slow if the snapshots are large.
Command Pattern
Similar to the Memento Pattern, but instead of storing the full state, only the difference between the states is stored. The difference is stored as actions that can be applied and un-applied. When introducing a new action, apply and un-apply need to be implemented.
Pros
Memory footprint is small. We only need to store the changes to the model and if these are small, then the history stack is also small.
Cons
We can not go to an arbitrary position directly, but rather need to un-apply the history stack until we get there. This can be time consuming.
Every action and its reverse needs to be encapsulated in an object. If your action is non-trivial this can be difficult. Mistakes in the (reverse) action are really hard to debug and can easily result in fatal crashes. Even simple-looking actions usually involve a good amount of complexity. E.g. in the case of the 3D editor, the object for adding to the model needs to store what was added, what color was currently selected, what was overwritten, whether mirror mode was active, etc.
Can be challenging to implement and memory intensive when actions do not have a simple reverse, e.g when blurring an image.
State Diffing
Similar to the Command Pattern, but the difference is stored independently of the action by simply XOR-ing the states. Introducing a new action does not require any special considerations.
Pros
Implementation is independent of the applied action. Once the history functionality is added we can add actions without worrying about breaking history.
Memory requirements are usually much lower than for the snapshot approach and in a lot of cases comparable to the Command Pattern approach. However, this highly depends on the type of actions applied. E.g. inverting the color of an image using the Command Pattern should be very cheap, while State Diffing would store the whole image. Conversely, when drawing a long free-form line, the Command Pattern approach might use more memory if it chained history entries for each pixel.
Cons / Limitations
We can not go to an arbitrary position directly, but rather need to un-apply the history stack until we get there.
We need to compute the diff between states. This can be expensive.
The XOR diff between model states might be hard to implement, depending on your data model.
Reference:
https://www.linkedin.com/pulse/solving-history-hard-problem-lukas-siemon
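To make the XOR idea above concrete, here is a hedged sketch that assumes the model state can be serialized into byte arrays of equal length before and after an action; the stored diff is the XOR of the two buffers, and applying the same diff to either buffer yields the other.

public static class StateDiff
{
    // diff = before XOR after. Applying the diff to either state buffer yields
    // the other one, so the same operation serves as both undo and redo.
    public static byte[] Compute(byte[] before, byte[] after)
    {
        var diff = new byte[after.Length];
        for (int i = 0; i < after.Length; i++)
            diff[i] = (byte)(before[i] ^ after[i]);
        return diff;
    }

    public static void ApplyInPlace(byte[] state, byte[] diff)
    {
        for (int i = 0; i < diff.Length; i++)
            state[i] ^= diff[i];
    }
}

In practice the diff is mostly zero bytes and compresses very well, which is where the memory savings over full snapshots come from.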

The classic practice is to follow the Command Pattern.
You can encapsulate any object that performs an action with a command, and have it perform the reverse action with an Undo() method. You store all the actions in a stack for an easy way of rewinding through them.
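A bare-bones sketch of that idea (the interface and class names here are illustrative, not from any particular framework):

using System.Collections.Generic;

public interface IUndoableCommand
{
    void Execute();
    void Undo();
}

public class CommandHistory
{
    private readonly Stack<IUndoableCommand> _undo = new Stack<IUndoableCommand>();
    private readonly Stack<IUndoableCommand> _redo = new Stack<IUndoableCommand>();

    public void Do(IUndoableCommand command)
    {
        command.Execute();
        _undo.Push(command);
        _redo.Clear();               // a new action invalidates the redo history
    }

    public void Undo()
    {
        if (_undo.Count == 0) return;
        var command = _undo.Pop();
        command.Undo();
        _redo.Push(command);
    }

    public void Redo()
    {
        if (_redo.Count == 0) return;
        var command = _redo.Pop();
        command.Execute();
        _undo.Push(command);
    }
}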

Take a look at the Command Pattern.
You have to encapsulate every change to your model into separate command objects.

I wrote a really flexible system to keep track of changes. I have a drawing program which implements 2 types of changes:
add/remove a shape
property change of a shape
Base class:
public abstract class Actie   // "Actie" = action, "Vorm" = shape (the code uses Dutch names)
{
    public Actie(Vorm[] Vormen)
    {
        vormen = Vormen;
    }

    private Vorm[] vormen = new Vorm[] { };
    public Vorm[] Vormen
    {
        get { return vormen; }
    }

    public abstract void Undo();
    public abstract void Redo();
}
Derived class for adding shapes:
public class VormenToegevoegdActie : Actie
{
    public VormenToegevoegdActie(Vorm[] Vormen, Tekening tek)
        : base(Vormen)
    {
        this.tek = tek;
    }

    private Tekening tek;

    public override void Redo()
    {
        tek.Vormen.CanRaiseEvents = false;
        tek.Vormen.AddRange(Vormen);
        tek.Vormen.CanRaiseEvents = true;
    }

    public override void Undo()
    {
        tek.Vormen.CanRaiseEvents = false;
        foreach (Vorm v in Vormen)
            tek.Vormen.Remove(v);
        tek.Vormen.CanRaiseEvents = true;
    }
}
Derived class for removing shapes:
public class VormenVerwijderdActie : Actie
{
    public VormenVerwijderdActie(Vorm[] Vormen, Tekening tek)
        : base(Vormen)
    {
        this.tek = tek;
    }

    private Tekening tek;

    public override void Redo()
    {
        tek.Vormen.CanRaiseEvents = false;
        foreach (Vorm v in Vormen)
            tek.Vormen.Remove(v);
        tek.Vormen.CanRaiseEvents = true;
    }

    public override void Undo()
    {
        tek.Vormen.CanRaiseEvents = false;
        foreach (Vorm v in Vormen)
            tek.Vormen.Add(v);
        tek.Vormen.CanRaiseEvents = true;
    }
}
Derived class for property changes:
public class PropertyChangedActie : Actie
{
    public PropertyChangedActie(Vorm[] Vormen, string PropertyName, object OldValue, object NewValue)
        : base(Vormen)
    {
        propertyName = PropertyName;
        oldValue = OldValue;
        newValue = NewValue;
    }

    private object oldValue;
    public object OldValue
    {
        get { return oldValue; }
    }

    private object newValue;
    public object NewValue
    {
        get { return newValue; }
    }

    private string propertyName;
    public string PropertyName
    {
        get { return propertyName; }
    }

    public override void Undo()
    {
        //Type t = base.Vorm.GetType();
        PropertyInfo info = Vormen.First().GetType().GetProperty(propertyName);
        foreach (Vorm v in Vormen)
        {
            v.CanRaiseVeranderdEvent = false;
            info.SetValue(v, oldValue, null);
            v.CanRaiseVeranderdEvent = true;
        }
    }

    public override void Redo()
    {
        //Type t = base.Vorm.GetType();
        PropertyInfo info = Vormen.First().GetType().GetProperty(propertyName);
        foreach (Vorm v in Vormen)
        {
            v.CanRaiseVeranderdEvent = false;
            info.SetValue(v, newValue, null);
            v.CanRaiseVeranderdEvent = true;
        }
    }
}
In each case, Vormen is the array of items affected by the change.
And it should be used like this:
Declaration of the stacks:
Stack<Actie> UndoStack = new Stack<Actie>();
Stack<Actie> RedoStack = new Stack<Actie>();
Adding a new shape (e.g. a Point)
VormenToegevoegdActie vta = new VormenToegevoegdActie(new Vorm[] { NieuweVorm }, this);
UndoStack.Push(vta);
RedoStack.Clear();
Removing a selected shape
VormenVerwijderdActie vva = new VormenVerwijderdActie(to_remove, this);
UndoStack.Push(vva);
RedoStack.Clear();
Registering a property change
PropertyChangedActie ppa = new PropertyChangedActie(new Vorm[] { (Vorm)e.Object }, e.PropName, e.OldValue, e.NewValue);
UndoStack.Push(ppa);
RedoStack.Clear();
Finally the Undo/Redo action
public void Undo()
{
    if (UndoStack.Count == 0) return;   // nothing to undo
    Actie a = UndoStack.Pop();
    RedoStack.Push(a);
    a.Undo();
}

public void Redo()
{
    if (RedoStack.Count == 0) return;   // nothing to redo
    Actie a = RedoStack.Pop();
    UndoStack.Push(a);
    a.Redo();
}
I think this is the most effective way of implementing a undo-redo algorithm.
For an example, look at this page on my website: DrawIt.
I implemented the undo redo stuff at around line 479 of the file Tekening.cs. You can download the source code. It can be implemented by any kind of application.

Related

Performance profiling of Razor views

I am working on a project where we render a kind of article list, with a lot of optional images and text properties and all in "fancy" non-tabular layout that is adapting to existing/missing properties and adding some randomness. Customer wanted a casual look and we ended up with a big tree of razor templates of depth 4 and 3-10 on each level. Each template is very simple and there is no "magic" going on.
During load tests we found that for large article lists we have a performance problem in the view rendering. Rendering 5 articles takes 20ms, but rendering 60 takes 1s.
Is there a smart way to measure each template's rendering duration? I would like to avoid adding Stopwatches everywhere manually. Is there a framework way? Is there a recommendation somewhere on how to debug such problems in general? I could not find anything.
Just found a good solution myself:
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DiagnosticAdapter;

public class RazorPerformanceDiagnosticListener
{
    private readonly IDictionary<string, TemplatePerformanceHolder> _timers = new ConcurrentDictionary<string, TemplatePerformanceHolder>();

    [DiagnosticName("Microsoft.AspNetCore.Mvc.Razor.BeginInstrumentationContext")]
    public virtual void OnBeginInstrumentationContext(HttpContext httpContext, string path, int position, int length, bool isLiteral)
    {
        if (_timers.ContainsKey(path))
        {
            _timers[path].ContextDepth++;
        }
        else
        {
            _timers[path] = new TemplatePerformanceHolder { ContextDepth = 1, Stopwatch = Stopwatch.StartNew() };
        }
    }

    [DiagnosticName("Microsoft.AspNetCore.Mvc.Razor.EndInstrumentationContext")]
    public virtual void OnEndInstrumentationContext(HttpContext httpContext, string path)
    {
        _timers[path].ContextDepth--;
        if (_timers[path].ContextDepth == 0)
        {
            _timers[path].Stopwatch.Stop();
            //log _timers[path].Stopwatch.Elapsed
            _timers.Remove(path);
        }
    }
}

public class TemplatePerformanceHolder
{
    public Stopwatch Stopwatch;
    public int ContextDepth;
}
and in Startup.cs
using System.Diagnostics;

public void Configure(DiagnosticListener diagnosticListener)
{
    diagnosticListener.SubscribeWithAdapter(new RazorPerformanceDiagnosticListener());
}
If you wonder why you need to keep track of this context depth: these BeginInstrumentationContext events are triggered multiple times within one Razor template (and so is EndInstrumentationContext), so each template effectively has a stack of nested contexts.
Also keep in mind that this code will run into issues with concurrent requests (or parallelized template rendering). To work around it, you would need to make the HttpContext and possibly the thread's ID part of the dictionary key. But I didn't need "production ready" code, so be wary.
Based on this.
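As a rough sketch of that concurrency workaround (the key shape here is an assumption, not something the diagnostics API prescribes), the timer dictionary could be keyed per request and per thread instead of per view path alone:

using System.Threading;
using Microsoft.AspNetCore.Http;

public static class TimerKey
{
    // Key the timers per request (and per thread, in case views render in parallel),
    // so concurrent requests no longer share a stopwatch for the same view path.
    public static string For(HttpContext httpContext, string path)
    {
        return $"{httpContext.TraceIdentifier}:{Thread.CurrentThread.ManagedThreadId}:{path}";
    }
}

The listener methods would then use TimerKey.For(httpContext, path) wherever they currently use path as the dictionary key.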

C# - Static events on non-static classes

There are situations where I'm quite fond of static events, but the fact that I rarely see them in other people's code makes me wonder if I'm missing something important. I found a lot of discussions about static events on this site, but most of them deal with situations that I'm not interested in (like on static classes) or where I wouldn't think of using them in the first place.
What I am interested in are situations where I might have many instances of something and a single instance of a long-living "manager" object that reacts to something on those instances. A very simple example to illustrate what I mean:
public class God {
    //the list of followers is really big and changes all the time,
    //it seems like a waste of time to
    //register/unregister events for each and every one...
    readonly List<Believer> Believers = new List<Believer>();

    God() {
        //...so instead let's have a static event and listen to that
        Believer.Prayed += this.Believer_Prayed;
    }

    void Believer_Prayed(Believer believer, string prayer) {
        //whatever
    }
}

public class Believer {
    public static event Action<Believer, string> Prayed;

    void Pray() {
        if (Prayed != null) {
            Prayed(this, "can i have stuff, please");
        }
    }
}
To me, this looks like a much cleaner and simpler solution than having an instance event and I don't have to monitor changes in the believers collection either. In cases where the Believer class can "see" the God-type class, I might sometimes use a NotifyGodOfPrayer()-method instead (which was the preferred answer in a few similar questions), but often the Believer-type class is in a "Models"-assembly where I can't or don't want to access the God class directly.
Are there any actual downsides to this approach?
Edit: Thanks to everyone who has already taken the time to answer.
My example may be bad, so I would like to clarify my question:
If I use this kind of static events in situations, where
I'm sure there will only ever be one instance of the subscriber-object
that is guaranteed to exist as long as the application is running
and the number of instances I'm watching is huge
then are there potential problems with this approach that I'm not aware of?
Unless the answer to that question is "yes", I'm not really looking for alternative implementations, though I really appreciate everyone trying to be helpful.
I'm not looking for the most pretty solution (I'd have to give that prize to my own version simply for being short and easy to read and maintain :)
One important thing to know about events is that they cause objects which are hooked to an event not to be garbage collected until event owner is garbage collected, or until event handler is unhooked.
To put it into your example, if you had a polytheistic pantheon with many gods, where you promoted and demoted gods such as
new God("Svarog");
new God("Svantevit");
new God("Perun");
gods would remain in your RAM while they are attached to Believer.Prayed. This would cause your application to leak gods.
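A minimal sketch of the obvious mitigation, assuming God gets some explicit end-of-life call (IDisposable here is just one way to model "demoting" a god) at which point it unhooks its handler:

using System;

public class God : IDisposable
{
    public God()
    {
        Believer.Prayed += Believer_Prayed;
    }

    private void Believer_Prayed(Believer believer, string prayer)
    {
        // handle the prayer
    }

    // Demoting a god must unhook the handler; otherwise the static event
    // keeps the instance reachable for the lifetime of the application.
    public void Dispose()
    {
        Believer.Prayed -= Believer_Prayed;
    }
}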
I'll comment on design decision also, but I understand that example you made is maybe not best copy of your real scenario.
It seems more reasonable to me not to create dependency from God to Believer, and to use events. Good approach would be to create an event aggregator which would stand between believers and gods. For example:
public interface IPrayerAggregator
{
    void Pray(Believer believer, string prayer);
    void RegisterGod(God god);
}

// god does
prayerAggregator.RegisterGod(this);

// believer does
prayerAggregator.Pray(this, "For the victory!");
Upon Pray method being called, event aggregator calls appropriate method of God class in turn. To manage references and avoid memory leaks, you could create UnregisterGod method or hold gods in collection of weak references such as
public class Priest : IPrayerAggregator
{
    private readonly List<WeakReference> _gods = new List<WeakReference>();

    public void Pray(Believer believer, string prayer)
    {
        // Iterate over a copy so dead references can be removed safely.
        foreach (WeakReference godRef in _gods.ToArray())
        {
            God god = godRef.Target as God;
            if (god != null)
                god.SomeonePrayed(believer, prayer);
            else
                _gods.Remove(godRef);   // the god has been garbage collected
        }
    }

    public void RegisterGod(God god)
    {
        _gods.Add(new WeakReference(god, false));
    }
}
Quick tip: Temporarily store event delegate as listeners might unhook their event handlers
void Pray() {
    var handler = Prayed;
    if (handler != null) {
        handler(this, "can i have stuff, please");
    }
}
Edit
Bearing in mind the details you added about your scenario (a huge number of event invokers, a single long-lived event watcher), I think you chose the right approach, purely for efficiency reasons. It creates the least memory and CPU overhead. I wouldn't take this approach in general, but for the scenario you described a static event is a very pragmatic solution that I might well take myself.
One downside I see is flow of control. If your event listener is created in single instance as you say, then I would leverage singleton (anti)pattern and invoke method of God from Believer directly.
God.Instance.Pray(this, "For the victory!");
//or
godInstance.Pray(this, "For the victory!");
Why? Because then you get more granular control over performing action of praying. If you decide down the line that you need to subclass Believer to a special kind that doesn't pray on certain days, then you would have control over this.
I actually think that having an instance event would be cleaner and definitely more readable.
It is much simpler to view it as: an instance is praying, so its Prayed event gets triggered. I don't see any downsides to that. Monitoring changes to the believers collection is not much more of a hassle than monitoring the static event, and it is the correct way to go...
Monitoring the list:
Change the list to be an ObservableCollection (and take a look at NotifyCollectionChangedEventArgs ).
Monitor it by:
public class God {
    readonly ObservableCollection<Believer> Believers = new ObservableCollection<Believer>();

    public God() {
        Believers.CollectionChanged += BelieversListChanged;
    }

    private void BelieversListChanged(object sender, NotifyCollectionChangedEventArgs e) {
        if ((e.Action == NotifyCollectionChangedAction.Remove || e.Action == NotifyCollectionChangedAction.Replace) && e.OldItems != null)
        {
            foreach (var oldItem in e.OldItems)
            {
                var bel = (Believer)oldItem;
                bel.Prayed -= Believer_Prayed;
            }
        }
        if ((e.Action == NotifyCollectionChangedAction.Add || e.Action == NotifyCollectionChangedAction.Replace) && e.NewItems != null)
        {
            foreach (var newItem in e.NewItems)
            {
                var bel = (Believer)newItem;
                bel.Prayed += Believer_Prayed;
            }
        }
    }

    void Believer_Prayed(Believer believer, string prayer) {
        //whatever
    }
}

How to enforce constraints between decoupled objects?

Note - I have moved the original post to the bottom because I think it is still of value to newcomers to this thread. What follows directly below is an attempt at rewriting the question based on feedback.
Completely Redacted Post
Ok, I'll try to elaborate a bit more on my specific problem. I realise I am blending domain logic with interfacing/presentation logic a little, but to be honest I am not sure where to separate it. Please bear with me :)
I am writing an application that (among other things) performs logistics simulations for moving stuff around. The basic idea is that the user sees a Project, similar to Visual Studio, where she can add, remove, name, organise, annotate and so on various objects which I am about to outline:
Items and Locations are basic behaviourless data items.
class Item { ... }
class Location { ... }
A WorldState is a Collection of item-location pairs. A WorldState is mutable: The user is able to add and remove items, or change their location.
class WorldState : ICollection<Tuple<Item,Location>> { }
A Plan represents the movement of items to different locations at desired times. These can either be imported into the Project or generated within the program. It references a WorldState to get the initial location of various objects. A Plan is also mutable.
class Plan : IList<Tuple<Item,Location,DateTime>>
{
    WorldState StartState { get; }
}
A Simulation then executes a Plan. It encapsulates a lot of rather complex behaviour, and other objects, but the end result is a SimulationResult which is a set of metrics that basically describe how much this cost and how well the Plan was fulfilled (think the Project Triangle)
class Simulation
{
    public SimulationResult Execute(Plan plan);
}

class SimulationResult
{
    public Plan Plan { get; }
}
The basic idea is that the users can create these objects, wire them together, and potentially re-use them. A WorldState may be used by multiple Plan objects. A Simulation may then be run over multiple Plans.
At the risk of being horribly verbose, an example
var bicycle = new Item();
var surfboard = new Item();
var football = new Item();
var hat = new Item();
var myHouse = new Location();
var theBeach = new Location();
var thePark = new Location();
var stuffAtMyHouse = new WorldState(new Dictionary<Item, Location>() {
    { hat, myHouse },
    { bicycle, myHouse },
    { surfboard, myHouse },
    { football, myHouse },
});
var gotoTheBeach = new Plan(StartState: stuffAtMyHouse, Plan: new [] {
    new [] { surfboard, theBeach, 1/1/2010 10AM },  // go surfing
    new [] { surfboard, myHouse, 1/1/2010 5PM },    // come home
});

var gotoThePark = new Plan(StartState: stuffAtMyHouse, Plan: new [] {
    new [] { football, thePark, 1/1/2010 10AM },    // play footy in the park
    new [] { football, myHouse, 1/1/2010 5PM },     // come home
});

var bigDayOut = new Plan(StartState: stuffAtMyHouse, Plan: new [] {
    new [] { bicycle, theBeach, 1/1/2010 10AM },    // cycle to the beach to go surfing
    new [] { surfboard, theBeach, 1/1/2010 10AM },
    new [] { bicycle, thePark, 1/1/2010 1PM },      // stop by park on way home
    new [] { surfboard, thePark, 1/1/2010 1PM },
    new [] { bicycle, myHouse, 1/1/2010 1PM },      // head home
    new [] { surfboard, myHouse, 1/1/2010 1PM },
});
var s1 = new Simulation(...);
var s2 = new Simulation(...);
var s3 = new Simulation(...);
IEnumerable<SimulationResult> results =
from simulation in new[] {s1, s2}
from plan in new[] {gotoTheBeach, gotoThePark, bigDayOut}
select simulation.Execute(plan);
The problem is when something like this is executed:
stuffAtMyHouse.RemoveItem(hat); // this is fine
stuffAtMyHouse.RemoveItem(bicycle); // BAD! bicycle is used in bigDayOut,
So basically when a user attempts to delete an item from a WorldState (and maybe the entire Project) via a world.RemoveItem(item) call, I want to ensure that the item is not referred to in any Plan objects which use that WorldState. If it is, I want to tell the user "Hey! The following Plan X is using this Item! Go and deal with that before trying to remove it!". The sort of behaviour I do not want from a world.RemoveItem(item) call is:
Deleting the item but still having the Plan reference it.
Deleting the item but having the Plan silently delete all elements in its list that refer to the item. (Actually, this is probably desirable, but only as a secondary option.)
So my question is basically how can such desired behaviour be implemented with in a cleanly decoupled fashion. I had considered making this a purview of the user interface (so when user presses 'del' on an item, it triggers a scan of the Plan objects and performs a check before calling world.RemoveItem(item)) - but (a) I am also allowing the user to write and execute custom scripts so they can invoke world.RemoveItem(item) themselves, and (b) I'm not convinced this behaviour is a purely "user interface" issue.
Phew. Well I hope someone is still reading...
Original Post
Suppose I have the following classes:
public class Starport
{
    public string Name { get; set; }
    public double MaximumShipSize { get; set; }
}

public class Spaceship
{
    public readonly double Size;
    public Starport Home;
}
So suppose a constraint exists whereby a Spaceship size must be smaller than or equal to the MaximumShipSize of its Home.
So how do we deal with this?
Traditionally I've done something coupled like this:
partial class Starport
{
    public HashSet<Spaceship> ShipsCallingMeHome; // assume this gets maintained properly

    private double _maximumShipSize;
    public double MaximumShipSize
    {
        get { return _maximumShipSize; }
        set
        {
            if (value == _maximumShipSize) return;
            foreach (var ship in ShipsCallingMeHome)
                if (ship.Size > value)
                    throw new ArgumentException();
            _maximumShipSize = value;
        }
    }
}
This is manageable for a simple example like this (so probably a bad example), but I'm finding that as the constraints get larger and more complex, and I want more related features (e.g. implement a method bool CanChangeMaximumShipSizeTo(double), or additional methods which will collect the ships which are too large), I end up writing more unnecessary bidirectional relationships (in this case SpaceBase-Spaceship is arguably appropriate) and complicated code which is largely irrelevant from the owner's side of the equation.
So how is this sort of thing normally dealt with? Things I've considered:
I considered using events, similar to the ComponentModel INotifyPropertyChanging/PropertyChanging pattern, except that the EventArgs would have some sort of Veto() or Error() capability (much like winforms allows you to consume a key or suppress a form exit). But I'm not sure whether this constitutes eventing abuse or not.
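For what it's worth, the veto idea from this first option can be expressed with the existing CancelEventArgs shape from System.ComponentModel; a rough sketch (the event and argument names are made up for illustration, not part of the question's design):

using System;
using System.ComponentModel;

public class MaximumShipSizeChangingEventArgs : CancelEventArgs
{
    public MaximumShipSizeChangingEventArgs(double proposedValue)
    {
        ProposedValue = proposedValue;
    }

    public double ProposedValue { get; private set; }
}

public partial class Starport
{
    public event EventHandler<MaximumShipSizeChangingEventArgs> MaximumShipSizeChanging;

    private double _maximumShipSize;
    public double MaximumShipSize
    {
        get { return _maximumShipSize; }
        set
        {
            if (value == _maximumShipSize) return;
            var handler = MaximumShipSizeChanging;
            var args = new MaximumShipSizeChangingEventArgs(value);
            if (handler != null) handler(this, args);
            if (args.Cancel)
                throw new ArgumentException("A subscriber vetoed the new maximum ship size.");
            _maximumShipSize = value;
        }
    }
}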
Alternatively, managing events myself via an explicitly defined interface, e.g
interface IStarportInterceptor
{
    bool RequestChangeMaximumShipSize(double newValue);
    void NotifyChangeMaximumShipSize(double newValue);
}

partial class Starport
{
    public HashSet<IStarportInterceptor> interceptors; // assume this gets maintained properly

    private double _maximumShipSize;
    public double MaximumShipSize
    {
        get { return _maximumShipSize; }
        set
        {
            if (value == _maximumShipSize) return;
            foreach (var interceptor in interceptors)
                if (!interceptor.RequestChangeMaximumShipSize(value))
                    throw new ArgumentException();
            _maximumShipSize = value;
            foreach (var interceptor in interceptors)
                interceptor.NotifyChangeMaximumShipSize(value);
        }
    }
}
But I'm not sure if this is any better. I'm also unsure if rolling my own events in this manner would have certain performance implications or there are other reasons why this might be a good/bad idea.
Third alternative is maybe some very wacky aop using PostSharp or an IoC/Dependency Injection container. I'm not quite ready to go down that path yet.
God object which manages all the checks and so forth - just searching stackoverflow for god object gives me the impression this is bad and wrong
My main concern is that this seems like a fairly obvious problem, and what I thought would be a reasonably common one, but I haven't seen any discussions about it (e.g. System.ComponentModel provides no facilities to veto PropertyChanging events - does it?); this makes me afraid that I've (once again) failed to grasp some fundamental concepts in coupling or (worse) object-oriented design in general.
Comments?
Based on the revised question:
I'm thinking the WorldState class needs a delegate... and Plan would register a method that should be called to test whether an item is in use. Sort of like:
delegate bool IsUsedDelegate(Item Item);

public class WorldState {
    public IsUsedDelegate CheckIsUsed;

    public bool RemoveItem(Item item) {
        if (CheckIsUsed != null) {
            foreach (IsUsedDelegate checkDelegate in CheckIsUsed.GetInvocationList()) {
                if (checkDelegate(item)) {
                    return false; // or throw exception
                }
            }
        }
        // Remove the item
        return true;
    }
}
Then, in the plan's constructor, set the delegate to be called
public class Plan {
    public Plan(WorldState state) {
        state.CheckIsUsed += CheckForItemUse;
    }

    public bool CheckForItemUse(Item item) {
        // Am I using it?
        return false; // placeholder: report whether this plan references the item
    }
}
This is very rough, of course, I'll try to add more after lunch : ) But you get the general idea.
(Post-Lunch :)
The downside is that you have to rely on the Plan to set the delegate... but there's simply no way to avoid that. There's no way for an Item to tell how many references there are to it, or to control its own usage.
The best you can have is an understood contract... WorldState agrees not to remove an item if a Plan is using it, and Plan agrees to tell WorldState that it's using an item. If a Plan doesn't hold up its end of the contract, then it may end up in an invalid state. Tough luck, Plan, that's what you get for not following the rules.
The reason you don't use events is because you need a return value. An alternative would be to have WorldState expose a method to add 'listeners' of type IPlan, where IPlan defines CheckItemForUse(Item item). But you'd still have to rely on a Plan notifying WorldState to ask before removing an item.
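That listener alternative might look roughly like this (a sketch reusing the names from the question; the registration method name is made up):

using System.Collections.Generic;

public interface IPlan
{
    bool CheckItemForUse(Item item);
}

public class WorldState
{
    private readonly List<IPlan> _listeners = new List<IPlan>();

    public void AddListener(IPlan plan)
    {
        _listeners.Add(plan);
    }

    public bool RemoveItem(Item item)
    {
        // Refuse the removal while any registered plan still uses the item.
        foreach (var plan in _listeners)
        {
            if (plan.CheckItemForUse(item))
                return false;
        }
        // ... actually remove the item here ...
        return true;
    }
}

It still relies on each Plan registering itself, which is exactly the contract described above.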
One huge gap that I'm seeing: In your example, the Plan you create is not tied to the WorldState stuffAtMyHouse. You could create a Plan to take your dog to the beach, for example, and Plan would be perfectly happy (you'd have to create a dog Item, of course). Edit: do you mean to pass stuffAtMyHouse to the Plan constructor, instead of myHouse?
Because they're not tied, you currently don't care if you remove bicycle from stuffAtMyHouse... because what you're currently saying is "I don't care where the bicycle starts, and I don't care where it belongs, just take it to the beach". But what you mean (I believe) is "Take my bicycle from my house and go to the beach." The Plan needs to have a starting WorldState context.
TLDR: The best decoupling you can hope for is to let Plan choose what method WorldState should query before removing an item.
HTH,
James
Original Answer
It's not 100% clear to me what your goal is, and maybe it's just the forced example. Some possibilities:
I. Enforcing the maximum ship size on methods such as SpaceBase.Dock(myShip)
Pretty straight-forward... the SpaceBase tracks the size when called and throws a TooBigToDockException to the ship attempting to dock if it's too big. In this case, there's not really any coupling... you wouldn't notify the ship of the new max ship size, because managing the max ship size isn't the ship's responsibility.
If the max ship size decreases, you would force the ship to undock... again, the ship doesn't need to know the new max size (though an event or interface to tell it that it's now floating in space might be appropriate). The ship would have no say or veto on the decision... The base has decided it's too big and has booted it.
Your suspicions are correct... God objects are usually bad; clearly-delineated responsibilities make them vanish from the design in puffs of smoke.
II. A queryable property of the SpaceBase
If you want to let a ship ask you if it's too big to dock, you can expose this property. Again, you're not really coupled... you're just letting the ship make a decision to dock or not dock based on this property. But the base doesn't trust the ship to not-dock if it's too big... the base will still check on a call to Dock() and throw an exception.
The responsibility for checking dock-related constraints lies firmly with the base.
III. As true coupling, when the information is necessary to both parties
In order to dock, the base may need to control the ship. Here an interface is appropriate, ISpaceShip, which might have methods such as Rotate(), MoveLeft(), and MoveRight().
Here you avoid coupling by the virtue of the interface itself... Every ship will implement Rotate() differently... the base doesn't care, so long as it can call Rotate() and have the ship turn in place. A NoSuchManeuverException might be thrown by the ship if it doesn't know how to rotate, in which case the base makes a decision to try something different or reject the dock. The objects communicate, but they are not coupled beyond the Interface (contract), and the base still has the responsibility of docking.
IV. Validation on the MaxShipSize setter
You talk about throwing an exception to the caller if it tries to set the MaxShipSize to smaller than the docked ships. I have to ask, though, who is trying to set the MaxShipSize, and why? Either the MaxShipSize should have been set in the constructor and be immutable, or setting the size should follow natural rules, e.g. you can't set the ship size smaller than its current size, because in the real world you would expand a SpaceBase, but never shrink it.
By preventing illogical changes, you render the forced undocking and the communication that goes along with it moot.
The point I'm trying to make is that when you feel like your code is getting unnecessarily complicated, you're almost always right, and your first consideration should be the underlying design. And that in code, less is always more. When you talk about writing Veto() and Error(), and additional methods to 'collect ships that are too large', I become concerned that the code will turn into a Rube Goldberg machine. And I think that separated responsibilities and encapsulation will whittle away much of the unnecessary complication you're experiencing.
It's like a sink with plumbing issues... you can put in all sorts of bends and pipes, but the right solution is usually simple, straight-forward, and elegant.
HTH,
James
You know that a Spaceship must have a Size; put the Size in the base class, and implement validation checks in the accessor there.
I know this seems excessively focused on your specific implementation, but the point here is that your expectations aren't as decoupled as you expect; if you have a hard expectation in the base class of something in the derived class, your base class is making a fundamental expectation of the derived class providing an implementation of that; might as well migrate that expectation directly to the base class, where you can manage the constraints better.
You could do something like C++ STL traits classes - implement a generic SpaceBase<Ship, Traits> which has two parameterizing Types - one that defines the SpaceShip member, and the other that constrains the SpaceBase and its SpaceShips using a SpaceBaseTraits class to encapsulate the characteristics of the base such as limitations on ships it can contain.
The INotifyPropertyChanging interface was designed for data binding, which explains why it doesn't have abilities you're looking for. I might try something like this:
interface ISpacebaseInterceptor<T>
{
    bool RequestChange(T newValue);
    void NotifyChange(T newValue);
}
You want to apply constraints on actions, but you are applying them to the data.
Firstly, why is changing Starport.MaximumShipSize allowed at all? When we "resize" the Starport, shouldn't all the ships take off?
Those are the kind of questions to understand better what needs to be done (and there is no "right and wrong" answer, there is "mine and yours").
Look at the problem from other angle:
public class Starport
{
    public string Name { get; protected set; }
    public double MaximumShipSize { get; protected set; }

    public AircarfDispatcher GetDispatcherOnDuty() {
        return new AircarfDispatcher(this); // It can be decoupled further, just an example
    }
}

public class Spaceship
{
    public double Size { get; private set; }
    public Starport Home { get; protected set; }
}

public class AircarfDispatcher
{
    private readonly Starport airBase;

    public AircarfDispatcher(Starport airBase) { this.airBase = airBase; }

    public bool CanLand(Spaceship ship) {
        if (ship.Size > airBase.MaximumShipSize)
            return false;
        return true;
    }

    public bool CanTakeOff(Spaceship ship) {
        return true;
    }

    public bool Land(Spaceship ship) {
        var canLand = CanLand(ship);
        if (!canLand)
            throw new ShipLandingException(airBase, this, ship, "Not allowed to land");

        // Do something with the capacity of the Starport
        return true;
    }
}
// Try to land my ship at the first available port
var ports = GetPorts();
var onDuty = ports.Select(p => p.GetDispatcherOnDuty())
                  .Where(d => d.CanLand(myShip)).First();
onDuty.Land(myShip);

// Try to resize! But NO, we cannot do that (the setter is protected),
// because it is not the responsibility of the Port, but of a building company :)
ports.First().MaximumShipSize = ports.First().MaximumShipSize / 2.0;

Good way to define a method

What is the best/good way to implement method calls?
For example: of the two options below, which is generally considered best practice? If both are bad, what would be considered best practice?
Option 1 :
private void BtnPostUpdate_Click(object sender, EventArgs e)
{
    getValue();
}

private void getValue()
{
    String FileName = TbxFileName.Text;
    int PageNo = Convert.ToInt32(TbxPageNo.Text);

    // get value from Business Layer
    DataTable l_dtbl = m_BLL.getValue(FileName, PageNo);

    if (l_dtbl.Rows.Count == 1)
    {
        TbxValue.Text = Convert.ToInt32(l_dtbl.Rows[0]["Value"]).ToString();
    }
    else
    {
        TbxValue.Text = "0";
    }
}
Option 2 :
private void BtnPostUpdate_Click(object sender, EventArgs e)
{
    String FileName = TbxFileName.Text;
    int PageNo = Convert.ToInt32(TbxPageNo.Text);

    int Value = getValue(FileName, PageNo);
    TbxValue.Text = Value.ToString();
}

private int getValue(string FileName, int PageNo)
{
    // get value from Business Layer
    DataTable l_dtbl = m_BLL.getValue(FileName, PageNo);

    if (l_dtbl.Rows.Count == 1)
    {
        return Convert.ToInt32(l_dtbl.Rows[0]["Value"]);
    }
    return 0;
}
I understand we can pass parameters directly without assigning to a local variable... My question is more about the method definition and the way it is handled.
If you're subscribing to the event automatically, I don't think it's particularly bad to have a method with the event handler signature which just delegates to a method which has the "real" signature you need (in this case, no parameters).
If you're subscribing manually, you can use a lambda expression instead:
postUpdateButton.Click += (sender, args) => PostUpdate();
and then do the work in PostUpdate. Whether you then split up the PostUpdate into two methods, one to deal with the UI interaction and one to deal with the BLL interaction is up to you. In this case I don't think it matters too much.
How you structure UI logic to make it testable is a whole different matter though. I've recently become a fan of the MVVM pattern, but I don't know how applicable that would be to your particular scenario (it's really designed around Silverlight and WPF).
A couple of other comments though:
Conventionally, parameters should be camelCased, not PascalCased
Do you genuinely believe you're getting benefit from prefixing local variables with l_? Isn't it obvious that they're local? Personally I'm not keen on most of the variable names shown here - consider naming variables after their meaning rather than their type.
Using a DataTable to return information is a somewhat error-prone way of doing things. Why can the BLL not return an int? to indicate the value (or a lack of value)?
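On that last point, the business layer signature might become something like the following (a sketch; the class and helper names are made up, and the data access is stubbed out):

using System;
using System.Data;

public class ValueService
{
    // Return the value, or null when no single row was found,
    // so the form never has to know about DataTable at all.
    public int? GetValue(string fileName, int pageNo)
    {
        DataTable table = LoadTable(fileName, pageNo);   // hypothetical data-access helper
        if (table.Rows.Count == 1)
            return Convert.ToInt32(table.Rows[0]["Value"]);
        return null;
    }

    private DataTable LoadTable(string fileName, int pageNo)
    {
        // existing data-access code would go here
        return new DataTable();
    }
}

The form-side call then collapses to something like TbxValue.Text = (m_BLL.GetValue(fileName, pageNo) ?? 0).ToString();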
Here is what I like to do if I don't implement MVC (and I'm assuming web here).
I'd do option 2 first, but instead of having the button's code set the text, I'd create a property to set the text box's value.
I do this because if something else sets the textbox value, you are going to duplicate code, which is bad if you change a name or control type.
According to your example, option 2 is the way to go. Option 1 knows about your form and how to display data on it, which violates the SRP.

Tiered Design With Analytical Widgets - Is This Code Smell? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
The idea I'm playing with right now is having a multi-leveled "tier" system of analytical objects which perform a certain computation on a common object and then create a new set of analytical objects depending on their outcome. The newly created analytical objects will then get their own turn to run and optionally create more analytical objects, and so on and so on. The point being that the child analytical objects will always execute after the objects that created them, which is relatively important. The whole apparatus will be called by a single thread so I'm not concerned with thread safety at the moment. As long as a certain base condition is met, I don't see this being an unstable design but I'm still a little bit queasy about it.
Is this some serious code smell or should I go ahead and implement it this way? Is there a better way?
Here is a sample implementation:
namespace WidgetTier
{
    public class Widget
    {
        private string _name;
        public string Name
        {
            get { return _name; }
        }

        private TierManager _tm;

        private static readonly Random random = new Random();

        static Widget()
        {
        }

        public Widget(string name, TierManager tm)
        {
            _name = name;
            _tm = tm;
        }

        public void DoMyThing()
        {
            if (random.Next(1000) > 1)
            {
                _tm.Add();
            }
        }
    }

    //NOT thread-safe!
    public class TierManager
    {
        private Dictionary<int, List<Widget>> _tiers;
        private int _tierCount = 0;
        private int _currentTier = -1;
        private int _childCount = 0;

        public TierManager()
        {
            _tiers = new Dictionary<int, List<Widget>>();
        }

        public void Add()
        {
            if (_currentTier + 1 >= _tierCount)
            {
                _tierCount++;
                _tiers.Add(_currentTier + 1, new List<Widget>());
            }
            _tiers[_currentTier + 1].Add(new Widget(string.Format("({0})", _childCount), this));
            _childCount++;
        }

        //Dangerous?
        public void Sweep()
        {
            _currentTier = 0;
            while (_currentTier < _tierCount) //_tierCount will start at 1 but keep increasing because child objects will keep adding more tiers.
            {
                foreach (Widget w in _tiers[_currentTier])
                {
                    w.DoMyThing();
                }
                _currentTier++;
            }
        }

        public void PrintAll()
        {
            for (int t = 0; t < _tierCount; t++)
            {
                Console.Write("Tier #{0}: ", t);
                foreach (Widget w in _tiers[t])
                {
                    Console.Write(w.Name + " ");
                }
                Console.WriteLine();
            }
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            TierManager tm = new TierManager();
            for (int c = 0; c < 10; c++)
            {
                tm.Add(); //create base widgets
            }
            tm.Sweep();
            tm.PrintAll();
            Console.ReadLine();
        }
    }
}
Yes, I call the following a code smell:
_currentTier = 0;
while (_currentTier < _tierCount) //_tierCount will start at 1 but keep increasing because child objects will keep adding more tiers.
{
    foreach (Widget w in _tiers[_currentTier])
    {
        w.DoMyThing();
    }
    _currentTier++;
}
You are iterating over a collection as it is changing. I mean the outer loop, not the inner foreach. You're obviously accounting for the change (hence the < _tierCount rather than a standard foreach), but it's still a smell, IMO.
Would I let it into production code? Possibly. Depends on the scenario. But I'd feel dirty about it.
Also: Your _tiers member could just as easily be a List<List<Widget>>.
The biggest potential issue here is that the Sweep method is iterating over a collection (_tiers) that could potentially change in a call to Widget.DoMyThing().
The .NET BCL collection classes do not allow a collection to change while it is being iterated. The way the code is structured exposes a risk that this may happen.
Beyond that, the other problem is that the structure of the program makes it difficult to understand what happens in what order. Perhaps you could separate the phase of the program that recursively assembles the model from the portion that visits the model and performs computations.
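A small sketch of one way to make the sweep safer, assuming the List<List<Widget>> layout suggested above: an index-based outer loop tolerates new tiers being appended during the pass, and each tier is copied before it is visited so DoMyThing() can never invalidate an enumerator.

public void Sweep()
{
    // _tiers is a List<List<Widget>> here.
    for (int tier = 0; tier < _tiers.Count; tier++)
    {
        // Snapshot the tier before visiting it; widgets created during the
        // visit land in later tiers and are picked up by the outer loop.
        Widget[] snapshot = _tiers[tier].ToArray();
        foreach (Widget w in snapshot)
        {
            w.DoMyThing();
        }
    }
}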
+1 to both Randolpho and LBushkin.
However, I gave it some thought and I think I know why this smells. The pattern as I have implemented it seems to be some kind of perversion of the Builder pattern. What would work better is creating a Composite out of a series of analytical steps which, taken as a whole, represents some kind of meaningful state. Each step of the analytical process (behavior) should be distinct from the output Composite (state). What I've implemented above meshes the state and the behavior together. Since the state-holders and state-analyzers are one and the same object, this also violates the Single Responsibility Principle. The approach of having a composite "build itself" opens up the possibility of creating a vicious loop, even though my prototype above has deterministic completion.
Links:
Builder Pattern
Composite Pattern
