I am currently working on a project where I am projecting events into materialized views. There is a potential where these views are later redefined (think adding/removing properties, changing types, etc). When this happens, I would like to re-process the events, rehydrating the views in their new forms. I would like to automate this process by evaluating the definition of the model itself, and comparing it to a known "fingerprint". If the fingerprint changed, the views need to be deleted, and re-projected from the events.
Before I try to implement something from scratch, is there a common way for determining this? I thought something like GetHashCode() may give something workable, but the results seem unstable, and not affected by changes to the class definition itself.
Is the best way to do this to use reflection, looking for the things I care about (property names, types, etc.) and computing some sort of fingerprint from those values, or is there a better way to achieve what I want that I don't know about?
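For illustration, here is a minimal sketch of the reflection idea described above (all names are my own): fold the public property names and types into a SHA-256 hash, which, unlike GetHashCode(), stays stable across runs and only changes when the class definition changes.

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class ModelFingerprint
{
    public static string Compute(Type modelType)
    {
        // Order the properties explicitly: reflection does not guarantee
        // member ordering, and the fingerprint must be deterministic.
        var signature = string.Join("|",
            modelType.GetProperties()
                     .OrderBy(p => p.Name, StringComparer.Ordinal)
                     .Select(p => p.Name + ":" + p.PropertyType.FullName));

        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(signature));
            return BitConverter.ToString(hash).Replace("-", "");
        }
    }
}
```

Store the resulting string alongside the views; when Compute(typeof(MyView)) no longer matches the stored value, delete the views and re-project from the events.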
For context, using C# inside the Unity3D Editor.
I have more and more often started using enums to loosely couple things to settings.
For example, I am setting up an item, and I want to give it a visual from a pool of defined visuals. That visual is basically a class that contains a sprite, a color, and a model, attached to an integer unique ID. From this unique ID, I generate an enum, and it takes some effort to verify that the unique ID is actually unique and to catch some edge cases around that.
The benefit of doing the above is that the enum is all that has to be stored on the item to link it to the visual. At runtime a dictionary is created to look up the enum and then request the stored visual to be loaded/used. This loosely couples the visuals to the item, so loading the item list does not automatically load all of the visual assets associated with the items. Loading everything up front is Unity's default behavior, which is really annoying: it slows down the game and consumes a massive amount of RAM.
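Stripped down, the pattern looks roughly like this (all names here are illustrative, not our actual code):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Generated from the unique IDs; values must stay in sync with the assets.
public enum ItemVisualId
{
    None = 0,
    RustySword = 1,
    IronShield = 2,
}

public class ItemVisual
{
    public Sprite Sprite;
    public Color Color;
    public GameObject ModelPrefab;
}

public static class VisualLookup
{
    static readonly Dictionary<ItemVisualId, ItemVisual> _visuals =
        new Dictionary<ItemVisualId, ItemVisual>();

    public static ItemVisual Get(ItemVisualId id)
    {
        // The runtime safety catch mentioned below: a removed or renumbered
        // enum value only surfaces here, when the lookup fails.
        ItemVisual visual;
        if (!_visuals.TryGetValue(id, out visual))
            throw new KeyNotFoundException("No visual registered for " + id);
        return visual;
    }
}
```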
As a result we have a lot of these enums for various purposes and a lot of lookup logic, and currently we are not having any big problems with it.
However, editing and generating those enums is error-prone in the sense that when values are removed, the items (and any other interested parties) are none the wiser; this then has to be caught either by testing before a build, or by a safety catch/error at runtime.
My question is: is this a blatant abuse of enums? And if so, what would be a better way of approaching this problem of loose coupling?
If it is not, what would be a better way to set up and manage these enums safely, so that alarm bells go off if anything using the enum now has an invalid value, or if a value's meaning changes? I imagine that is hardly possible without code all over the place to "self check" on recompile.
Or does this all just boil down to team discipline: managing the values well and knowing what the enums mean and represent? In that case, it would never be possible to make this designer-friendly unless I wrote a custom editor for each and every one of these enums.
Thanks for any insights you might be able to provide.
If I understand you correctly, you're trying to associate each item with one of multiple static visuals? If that is the case, you can simply write each visual as a static readonly object inside the visuals class. In your "item" objects you can then make a field called e.g. "visual" and set it to reference the right visual.
I don't know what makes the visuals load, but if the constructor does, then I believe they will load when the visual class is first used at runtime.
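A sketch of that suggestion (the types are placeholders for whatever your visual class actually holds):

```csharp
public class Visual
{
    public readonly string SpritePath;
    public Visual(string spritePath) { SpritePath = spritePath; }
}

public static class Visuals
{
    // Static initializers run when the class is first used at runtime,
    // so nothing here loads until somebody touches Visuals.
    public static readonly Visual RustySword = new Visual("Sprites/RustySword");
    public static readonly Visual IronShield = new Visual("Sprites/IronShield");
}

public class Item
{
    // A direct reference is checked at compile time: removing a visual
    // breaks the build instead of failing at runtime.
    public Visual Visual = Visuals.RustySword;
}
```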
Background:
I have 2 instances of an object of the same type. One object is populated with the configuration of a device I'm connected to, the other object is populated with a version of the configuration that I've stored on my hard drive.
The user can alter either, so I'd like to compare them and present the differences to the user.
Each object contains a number of ViewModel properties, all of which extend ViewModelBase, which are the ones I want to compare.
Question:
Is there a better way to do this than what I'm about to propose?
I'm thinking of using Reflection to inspect each property in my objects, and for each that extend ViewModelBase, I'll loop through each of those properties. For any that are different, I'll put the name and value into a list and then present that to the user.
Rather than reinventing this wheel, I'm wondering: is this a problem that's been solved before? Is there a better way to do it?
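For concreteness, a minimal sketch of the reflection approach described in the question (ViewModelBase is the base class mentioned above; null handling and deeper recursion are left out):

```csharp
using System.Collections.Generic;
using System.Reflection;

public static class ViewModelDiff
{
    // Returns "Property.SubProperty: 'left' vs 'right'" for every difference.
    public static List<string> Compare(object left, object right)
    {
        var differences = new List<string>();
        foreach (PropertyInfo vmProp in left.GetType().GetProperties())
        {
            // Only inspect the properties that extend ViewModelBase.
            if (!typeof(ViewModelBase).IsAssignableFrom(vmProp.PropertyType))
                continue;

            object leftVm = vmProp.GetValue(left, null);
            object rightVm = vmProp.GetValue(right, null);

            // Loop through each property of the nested ViewModel pair.
            foreach (PropertyInfo p in vmProp.PropertyType.GetProperties())
            {
                object a = p.GetValue(leftVm, null);
                object b = p.GetValue(rightVm, null);
                if (!Equals(a, b))
                    differences.Add(vmProp.Name + "." + p.Name +
                                    ": '" + a + "' vs '" + b + "'");
            }
        }
        return differences;
    }
}
```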
Depending on the number of properties to be compared, manual checking would be the more efficient option. However, if you have lots of properties or want the check to be dynamic (i.e. you just add new properties and it automagically works), then I think reflection is the way to go here.
Why not just implement the equals operator for your type?
http://msdn.microsoft.com/en-us/library/ms173147(v=vs.80).aspx
Edit: Having read more carefully I see what you're actually asking is what the most efficient way of doing the actual comparison is.
Doing it via reflection saves on code but is slower. Doing it with lots of manual comparisons is fairly quick but means more code.
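A sketch of the manual route, following the guidelines in the article linked above (DeviceConfig and its two properties are made up for the example):

```csharp
public class DeviceConfig
{
    public string Name { get; set; }
    public int Timeout { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as DeviceConfig;
        if (other == null) return false;
        return Name == other.Name && Timeout == other.Timeout;
    }

    // Equals and GetHashCode must be kept in sync.
    public override int GetHashCode()
    {
        return (Name ?? string.Empty).GetHashCode() ^ Timeout;
    }
}
```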
If you are fairly determined, and lazy in the good way, you can mix the benefits of both solutions. With the help of a tool like CCI you can emit a method that compares the properties. The beauty of this is that your reflection code is executed at compile time, leaving you with a straightforward method to execute at runtime. This allows you to change models as you see fit and not worry about the comparison code. The downside is learning CCI, which is quite challenging.
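CCI itself is a heavyweight dependency, so purely as an illustration of the same idea (pay the reflection cost once, then run a plain delegate), here is a sketch that uses compiled expression trees instead of CCI:

```csharp
using System;
using System.Linq.Expressions;

public static class ComparerBuilder<T>
{
    // Built once per closed type; after that, AreEqual is a plain delegate.
    public static readonly Func<T, T, bool> AreEqual = Build();

    static Func<T, T, bool> Build()
    {
        var left = Expression.Parameter(typeof(T), "left");
        var right = Expression.Parameter(typeof(T), "right");

        // AND together an equality check for every public property.
        // Note: reference-type properties without operator == compile
        // down to a reference comparison here.
        Expression body = Expression.Constant(true);
        foreach (var prop in typeof(T).GetProperties())
        {
            body = Expression.AndAlso(body,
                Expression.Equal(
                    Expression.Property(left, prop),
                    Expression.Property(right, prop)));
        }
        return Expression.Lambda<Func<T, T, bool>>(body, left, right).Compile();
    }
}

// Usage: bool same = ComparerBuilder<DeviceConfig>.AreEqual(a, b);
```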
As I design the models for a domain, they almost always end up having some .IsSomething functionality on them. IsNew and IsDirty are common for data persistence purposes, IsValid for business rule validation, even IsFraudulent in a current project (more business rule validation), etc. Whenever I see these implemented by others, they are almost invariably done so as methods. But I find myself wondering if there's a particular reason for that.
I tend to see properties as describing an object and methods as performing some kind of action. These don't really perform an action. They involve code because they're dynamically determined when called, and they're clearly read-only, but to me they still fit as properties rather than methods.
There could potentially be a serialization issue with properties, I suppose. Though a rich domain model tends not to serialize well anyway given that it contains logic and functionality, so any time I need to move something across a service boundary I generally flatten it into a defined DTO structure first anyway.
But I wonder if anybody else has any insight on the subject? Is there a good reason to implement these as methods rather than as properties?
(Tangentially related, though an answer has already been given, extension properties would really help with consistency on something like this. I have a number of IsSomething() extension methods, usually on System.String, for implementing domain-specific logic. But even if properties are the way to go, I may want to stick with methods just for consistency with the extensions.)
Assuming that accessing the property:
Has no side-effects
Is "reasonably speedy" (yeah, very woolly...)
then I see no reason not to make it a property. The serialization shouldn't be an issue - most serialization schemes provide ways of marking a property as transient (i.e. not-to-be-serialized).
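For example, the BCL's XML serializer lets you opt a property out with an attribute (the class here is invented; Json.NET's equivalent is [JsonIgnore]):

```csharp
using System.Xml.Serialization;

public class Order
{
    public decimal Total { get; set; }

    [XmlIgnore] // computed on demand; excluded from serialization
    public bool IsValid
    {
        get { return Total >= 0; }
    }
}
```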
I would use a property because:
It describes the object in some way, so conceptually it is a characteristic, a property
It does not ask for any parameters
It basically just retrieves certain data; it does not perform any standalone actions or modifications
This is mostly a request for comments on whether there is a reason I should not go down this road.
I have a multi-tiered, CodeSmith-generated application. At the UI level there need to be some fields that are required, and the required fields will vary depending on field values in the bound entity. What I am thinking of doing is adding a "PropertyRequired" custom attribute to each property in the entities, which I can set to true or false when I load the entity in its manager. Then I will use reflection to query the property and give visual feedback to the user at the UI level, and I can validate that all the required properties have a valid value in the manager before I save. I've worked this out as a proof of concept with one property in one entity, but before I try to extend it to the rest of the application I'd like to ask someone with more experience to either tell me to go for it, or tell me why I won't like it when I scale up. If this is a bad idea, or if you can suggest a better approach, please offer your opinion.
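A sketch of the proof of concept as I understand it (names are illustrative; note that the attribute arguments themselves are fixed at compile time):

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public class PropertyRequiredAttribute : Attribute
{
    public bool Required { get; private set; }
    public PropertyRequiredAttribute(bool required) { Required = required; }
}

public class Customer
{
    [PropertyRequired(true)]
    public string Name { get; set; }

    [PropertyRequired(false)]
    public string Nickname { get; set; }
}

public static class RequiredValidator
{
    // Returns the names of required string properties that are null/empty,
    // for the manager to check before saving.
    public static List<string> MissingFields(object entity)
    {
        var missing = new List<string>();
        foreach (PropertyInfo prop in entity.GetType().GetProperties())
        {
            var attr = (PropertyRequiredAttribute)Attribute.GetCustomAttribute(
                prop, typeof(PropertyRequiredAttribute));

            if (attr != null && attr.Required &&
                string.IsNullOrEmpty(prop.GetValue(entity, null) as string))
            {
                missing.Add(prop.Name);
            }
        }
        return missing;
    }
}
```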
It is a pretty reasonable way to do it (I've done something very similar before) - but there are always downsides:
any code needing the entity will need the extra reference (assuming that the attribute and entity are in different assemblies)
the values (unless you are clever about it) must be determined at compile-time
you can't use it on entities outside of your control
In most cases the above aren't a problem. If they are an issue, you might want to support an external metadata model - but unless you need it, this would be overkill. Don't do it unless you must (meaning: go ahead and use attributes; they are usually fine).
There is no inherent reason to avoid custom attributes. It is a supported CLR feature which is the backbone for many available products (Code Contracts, FxCop, etc ...).
This is not an unreasonable approach and healthier than baking this stuff into a UI tier. There are a couple of points worth considering before taking the full dive:
You are tightly coupling business logic with the business entity itself. Are there circumstances where a field being required or valid values could change? You may be limiting yourself or be faced with an inconsistent validation mechanism
Dynamic assignment is possible but more tricky - i.e. when you set a field to be required, that's what it will be unless you override it
Custom attributes can be quite inflexible if further down the line you want to do something more complicated - namely, if you need to pass state into an attribute-driven validation scheme. Attributes lend themselves to declarative assignment. Only having a true/false required property shouldn't be an issue here, though
Just being a devil's advocate really; in general, for a fairly simple application where you only care about required fields, this is quite a tidy way of doing it
Let’s say I have to use an unstable assembly that I cannot refactor. My code is supposed to be the client of a CustomerCollection that is (surprise) a collection of Customer instances. Each customer instance has further properties, like a collection of Order instances. I hope you get the idea.
Since the assembly does not behave that well, my approach is to wrap each class in a façade where I can deal with exceptions, workarounds, and all that stuff. (To make things more complicated, I'd like to design the wrapper to be usable with WPF regarding data binding.)
So my question is about the design of the wrapper, e.g. CustomerCollectionFacade. How to expose the object tree (customers, orders, properties of orders)? Is the CustomerWrapper collection stored in a field or do I create CustomerWrapper instances on the fly (in the get accessor of a property maybe)?
Any ideas welcome. Thanks!
Edit:
Unfortunately the way proposed by krosenvold is not an option in my case. Since the object tree’s behavior is very interactive (editing from multiple views, events fired if properties change) I will not opt to abandon the ‘source object’. These changes are supposed to propagate to the source. Thanks anyway.
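To make the design question concrete, here is a sketch of one of the options from the question: wrappers created on the fly in the accessor but cached, so the same source Customer always maps to the same CustomerWrapper (WPF bindings care about instance identity). It assumes the third-party CustomerCollection exposes an indexer; all details are invented.

```csharp
using System.Collections.Generic;

public class CustomerWrapper
{
    readonly Customer _source;

    public CustomerWrapper(Customer source) { _source = source; }

    public string Name
    {
        // The façade is where the exceptions and workarounds get handled.
        get { try { return _source.Name; } catch { return string.Empty; } }
    }
}

public class CustomerCollectionFacade
{
    readonly CustomerCollection _source;
    readonly Dictionary<Customer, CustomerWrapper> _wrappers =
        new Dictionary<Customer, CustomerWrapper>();

    public CustomerCollectionFacade(CustomerCollection source)
    {
        _source = source;
    }

    public CustomerWrapper this[int index]
    {
        get
        {
            Customer customer = _source[index];
            CustomerWrapper wrapper;
            if (!_wrappers.TryGetValue(customer, out wrapper))
                _wrappers[customer] = wrapper = new CustomerWrapper(customer);
            return wrapper;
        }
    }
}
```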
I generally try to isolate such transformations into one or more adapter classes and let them do the whole transformation at once. This is a good idea because it is easily testable, all the conversion logic ends up in one place, and you avoid littering the conversion logic "all over the place".
Sometimes there is state in the underlying (source) object that is going to be needed when/if you're updating the object. You might not be exposing this data in your cleaned-up api, so it's going to have to be hidden somewhere.
If you choose to encapsulate the original object, there's always the chance that someone will break that encapsulation sometime in the future and start leaking the gory details of the underlying object. That reason alone is usually enough for me not to keep an openly accessible reference to the original instance, so that I still understand what I'm doing six months later when I'm in a hurry. But if you keep the original somewhere else you'll need lifecycle management for it, so I usually end up stashing it away behind some secret interface on the "clean" object.
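A sketch of that "secret interface" trick, with the Customer type standing in for the original (the interface is internal, so only code in the adapter's own assembly can get at the source):

```csharp
internal interface IHasSource
{
    Customer Source { get; }
}

public class CustomerAdapter : IHasSource
{
    readonly Customer _source;

    public CustomerAdapter(Customer source) { _source = source; }

    // The clean, public surface.
    public string Name
    {
        get { return _source.Name; }
    }

    // The escape hatch: an explicit implementation of an internal interface,
    // reachable only via an explicit cast inside this assembly.
    Customer IHasSource.Source
    {
        get { return _source; }
    }
}
```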