The fluent builder is a well-known pattern for building objects with many properties:
Team team = teamBuilder.CreateTeam("Chelsea")
.WithNickName("The blues")
.WithShirtColor(Color.Blue)
.FromTown("London")
.PlayingAt("Stamford Bridge");
However, using it doesn't seem very clear to me, for one particular reason:
Every Team object has a minimal operational state, in other words, a set of mandatory properties which must be set so that the object is ready to use.
Now, how should the fluent builder approach be used, considering that you have to maintain this state?
Should the With_XYZ members only modify the parts of the object that can't affect this state?
Maybe there are some general rules for this situation?
Update:
If the CreateTeam method should take the mandatory properties as arguments, what happens next?
What happens if I (for example) omit the WithNickName call?
Does this mean that the nickname should be defaulted to some DefaultNickname?
Does this mean that the example (see the link) is bad, because the object can be left in an invalid state?
And, well, I suspect that in this case the fluent building approach actually loses its "beauty", doesn't it?
CreateTeam() should take the mandatory properties as parameters.
Team CreateTeam(string name, Color shirtColor, string town)
{
    return new Team(name, shirtColor, town);
}
Seems to me the points of a fluent interface are:
Minimize the number of parameters to zero in a constructor while still dynamically initializing certain properties upon creation.
Make the property/parameter-value association very clear; in a large parameter list, which value is for what? You can't tell without digging further.
The coding style of the instantiation is very clean, readable, and editable. Adding or deleting property settings with this formatting style is less error-prone: you delete an entire line rather than edit in the middle of a long parameter list, not to mention possibly editing the wrong parameter.
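One way to reconcile the two ideas is a minimal sketch like the following, where mandatory state goes through constructor-style arguments (so the object is never in an invalid state) and optional properties keep the fluent style with defaults. The Team shape and the extension methods here are my assumptions for illustration, not code from the question:

using System.Drawing;

public class Team
{
    public string Name { get; }
    public Color ShirtColor { get; }
    public string Town { get; }
    public string NickName { get; set; } = "";   // optional, defaulted
    public string Stadium { get; set; } = "";    // optional, defaulted

    // Mandatory properties are constructor arguments, so a Team can never
    // exist without its minimal operational state.
    public Team(string name, Color shirtColor, string town)
    {
        Name = name;
        ShirtColor = shirtColor;
        Town = town;
    }
}

public static class TeamBuilderExtensions
{
    // Optional setters return the team so calls can chain.
    public static Team WithNickName(this Team team, string nickName)
    {
        team.NickName = nickName;
        return team;
    }

    public static Team PlayingAt(this Team team, string stadium)
    {
        team.Stadium = stadium;
        return team;
    }
}

// Usage: mandatory values up front, optional ones fluently.
// Team team = new Team("Chelsea", Color.Blue, "London")
//     .WithNickName("The Blues")
//     .PlayingAt("Stamford Bridge");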
Let's say we have a custom attribute:
[Precondition(1, "Some precondition")]
This would implement [Test, Order(1), Description("Some precondition")]
Can I access and modify the Order attribute (or create one) for this method?
I can modify the Description and Author, but Order is not a possibility.
I have tried:
1. context.Test.Properties["Order"][0] = order;
2. method.CustomAttributes.GetEnumerator(), walking the stack frames with
Object[] attributes = method.GetCustomAttributes(typeof(PreconditionAttribute), false);
if (attributes.Length >= 1) { ... }
3. Getting the attribute directly:
OrderAttribute orderAttribute = (OrderAttribute)Attribute.GetCustomAttribute(i, typeof(OrderAttribute));
orderAttribute.Order = _order;
but Order is read-only. And if I try orderAttribute.Order = new OrderAttribute(myOrd), it doesn't do anything.
I have two answers to choose from. One is in the vein of "Don't do this" and the other is about how to do it. Just for fun, I'm putting both answers up, separately, so they can compete with one another. This one is about why I don't think this is a good idea.
It's easy enough to write either
[Test, Order(1), Description("xxx")] or the equivalent...
[Test(Description="xxx"), Order(1)]
The proposed attribute gives users a second way to specify order, making it possible to assign two different orders to a test. Which of two attributes will win the day depends on (1) how each one is implemented, (2) the order in which the attributes are listed and (3) the platform on which you are running. For all practical purposes, it's non-deterministic.
Keeping the two things separate allows devs to decide which they need independently... which is why NUnit keeps them separate.
Using the standard attributes means that devs can rely on the NUnit documentation to tell them what the attributes do. If you implement your own attribute, you should document both what it does by itself and what it does in the presence of the standard attributes... As stated above, that's difficult to predict.
I know this isn't a real answer in SO terms, but it's not pure opinion either. There are real technical issues in providing the kind of solution you want. I'd love to see what people think of it in comparison with "how to" I'm going to post next.
See my prior answer first! If you really want to do this, here's the how-to...
In order to combine the action of two existing attributes, you need equivalent code to those two attributes.
In this case both are extremely simple and both have about the same amount of code. DescriptionAttribute is based on PropertyAttribute so some of its code is hidden. OrderAttribute has a bit more logic because it checks to make sure the order has not already been set. Ultimately, both of them have code that implements the IApplyToTest interface.
Because they are both simple, I would copy the code, in order to avoid relying on implementation details that could change. Start with the slightly more complete OrderAttribute. Change its name. Modify the ApplyToTest method to set the description. You're done!
It will look something like this, depending on the names you use for properties...
public void ApplyToTest(Test test)
{
    // Respect an Order that another attribute may have already set.
    if (!test.Properties.ContainsKey(PropertyNames.Order))
        test.Properties.Set(PropertyNames.Order, Order);
    test.Properties.Set(PropertyNames.Description, Description);
}
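Filling in the surrounding class, a sketch of what the combined attribute could look like (the exact base class and constructor shape are my assumptions; check the current NUnit sources rather than relying on this):

using System;
using NUnit.Framework;
using NUnit.Framework.Interfaces;
using NUnit.Framework.Internal;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = false, Inherited = false)]
public class PreconditionAttribute : NUnitAttribute, IApplyToTest
{
    public int Order { get; }
    public string Description { get; }

    public PreconditionAttribute(int order, string description)
    {
        Order = order;
        Description = description;
    }

    // NUnit calls this for each test the attribute decorates.
    public void ApplyToTest(Test test)
    {
        if (!test.Properties.ContainsKey(PropertyNames.Order))
            test.Properties.Set(PropertyNames.Order, Order);
        test.Properties.Set(PropertyNames.Description, Description);
    }
}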
A comment on what you tried...
There is no reason to think that creating an attribute in your code will do anything. NUnit has no way to know about those attributes. Your attribute cannot modify the code so that the test magically has other attributes. The only way attributes communicate with NUnit is by having their interfaces (like IApplyToTest) called, and only attributes actually present in the code will receive such a call.
I have a (not quite valid) CSV file that contains rows of multiple types. Any record could be one of about 6 different types and each type has a different number of properties. The first part of any row contains the timestamp and the type of record, followed by a standard CSV of the data.
Example
1456057920 PERSON, Ted Danson, 123 Fake Street, 555-123-3214, blah
1476195120 PLACE, Detroit, Michigan, 12345
1440581532 THING, Bucket, Has holes, Not a good bucket
And to make matters more complex, I need to be able to do different things with the records depending on certain criteria. So a PERSON type can be automatically inserted into a DB without user input, but a THING type would be displayed on screen for the user to review and approve before adding to DB and continuing the parse, etc.
Normally, I would use a library like CsvHelper to map the records to a type, but in this case, since the types can be different and the first part uses a space instead of a comma, I don't know how to do that with a standard CSV library. So currently, what I am doing on each loop is:
String split based off comma.
Split the first array item by the space.
Use a switch statement to determine the type and create the object.
Put that object into a List of type object.
Get confused as to where to go now, because I now have a list of various types and will have to use yet another switch or if to determine the next parts.
I don't really know for sure if I will actually need that List but I have a feeling the user will want the ability to manually flip through records in the file.
By this point, this is starting to make for very long, confusing code, and my gut feeling tells me there has to be a cleaner way to do this. I thought maybe using Type.GetType(string) would help simplify the code some, but this seems like it might be terribly inefficient in a loop with 10k+ records and might make things even more confusing. I then thought maybe making some interfaces might help, but I'm not the greatest at using interfaces in this context and I seem to end up in about this same situation.
So what would be a more manageable way to parse this file? Are there any C# parsing libraries out there that would be able to handle something like this?
You can implement an IRecord interface that has a Timestamp property and a Process method (perhaps others as well).
Then, implement concrete types for each type of record.
Use a switch statement to determine the type and create and populate the correct concrete type.
Place each object in a List<IRecord>.
After that you can do whatever you need. Some examples:
Loop through each item and call Process() to handle it.
Use LINQ's .OfType<{concrete type}>() to segment the list. (Warning: with 10k records this would be slow, since it would traverse the entire list for each concrete type.)
Use an overridden ToString method to give a single text representation of the IRecord
If using WPF, you can define a datatype template for each concrete type, bind an ItemsControl derivative to a collection of IRecords and your "detail" display (e.g. ListItem or separate ContentControl) will automagically display the item using the correct DataTemplate
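A minimal sketch of that shape (the concrete record layouts and the parsing details are my assumptions based on the sample rows):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

public interface IRecord
{
    DateTimeOffset Timestamp { get; }
    void Process();
}

public class PersonRecord : IRecord
{
    public DateTimeOffset Timestamp { get; set; }
    public string Name { get; set; }
    public void Process() { /* insert into the DB without user input */ }
}

public class ThingRecord : IRecord
{
    public DateTimeOffset Timestamp { get; set; }
    public string Description { get; set; }
    public void Process() { /* show to the user for review before inserting */ }
}

public static class RecordParser
{
    public static IRecord Parse(string line)
    {
        string[] fields = line.Split(',');
        string[] header = fields[0].Split(' ');   // e.g. "1456057920 PERSON"
        var timestamp = DateTimeOffset.FromUnixTimeSeconds(long.Parse(header[0]));

        switch (header[1])
        {
            case "PERSON":
                return new PersonRecord { Timestamp = timestamp, Name = fields[1].Trim() };
            case "THING":
                return new ThingRecord { Timestamp = timestamp, Description = fields[2].Trim() };
            default:
                throw new FormatException("Unknown record type: " + header[1]);
        }
    }
}

// Usage:
// List<IRecord> records = File.ReadLines("data.csv").Select(RecordParser.Parse).ToList();
// foreach (IRecord record in records) record.Process();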
Continuing from my comment: well, that depends. What you described is actually pretty good for starters. You can of course expand it to a series of factories, one for each object type, so that you move from an explicit switch to searching for the first factory that can parse a line. That might prove useful if you are looking to add more object types in the future: you just add another factory for the new kind of object. It's up to you whether these objects should share a common interface. An interface is generally used to define a behavior, so it doesn't seem like it here. Maybe you should rather use a Dictionary? You need to ask yourself whether you actually need strongly typed objects here. Maybe what you need is a simple class with an ObjectType property and a Dictionary of properties, with some helper methods for easy typed property access like GetBool, GetInt, or a generic Get, as sketched below.
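A rough sketch of that property-bag alternative (all names here are invented for illustration):

using System;
using System.Collections.Generic;

public class GenericRecord
{
    public string ObjectType { get; }
    private readonly Dictionary<string, string> _properties;

    public GenericRecord(string objectType, Dictionary<string, string> properties)
    {
        ObjectType = objectType;
        _properties = properties;
    }

    // Typed helpers convert the raw strings on demand.
    public int GetInt(string key) => int.Parse(_properties[key]);
    public bool GetBool(string key) => bool.Parse(_properties[key]);
    public T Get<T>(string key) => (T)Convert.ChangeType(_properties[key], typeof(T));
}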
Let's say my C# model was updated while the corresponding collection still contains old documents. I want old and new documents to coexist in the collection, while using only the new version of the C# model to read them. I'd prefer to avoid inheritance if possible. So I wonder which of these issues are solvable, and how:
There is a new property in the C# model which is not present in the database. I think this should never be an issue: Mongo knows nothing about it, and it will be initialized with its default value. The only issue here is how to initialize it with a particular value for all old documents; does anybody know how?
One of the properties is gone from the model. I want MongoDB to find out that there is no longer a property in the C# class to map the field of an old document to, and to ignore it instead of crashing. This scenario probably sounds a bit strange, as it would mean some garbage is left in the database, but anyway: is this behavior possible to implement/configure?
The type of a property has changed, and the new type is convertible to the old one, like integer -> string. Is there any way to configure the mapping for old docs?
I can consider using inheritance for the second case if it is not solvable otherwise.
Most of the answers to your questions are found here.
BsonDefaultValue("abc") attribute on properties to handle values not present in the database, and to give them a default value upon deserialization
BsonIgnoreExtraElements attribute on the class to ignore extra elements found during deserialization (to avoid the exception)
A custom serializer is required to handle if the type of a member is changed, or you need to write an upgrade script to fix the data. It would probably be easier to leave the int on load, and save to a string as needed. (That will mean that you'll need a new property name for the string version of the property.)
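Putting the first two together, a small sketch (the document class and its members are assumed for illustration; the attributes come from MongoDB.Bson.Serialization.Attributes):

using MongoDB.Bson.Serialization.Attributes;

[BsonIgnoreExtraElements]       // old fields with no matching property are ignored, not fatal
public class TeamDocument
{
    public string Name { get; set; }

    [BsonDefaultValue("none")]  // applied when old documents lack this element
    public string NickName { get; set; }
}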
Hello fellow developers.
First of all I apologize beforehand for the wall of text that follows, but after a day going crazy on this, I need to call for help.
I've stumbled across a problem I cannot seem to solve. I'll try to describe the scenario in the best possible way.
Task at hand: in an existing Asp.Net Mvc application, create a lookup table for an integer field, and use the textual value from the lookup in the editing view. When saving, we must first check if the lookup already has a corresponding text value for the same Root ID. If there is, use that. Otherwise, create it and then use it.
The structure:
The data model is a graph of objects where we have the root object, a collection of level A child objects, and every level A child object has a collection of level B child objects, so something like this:
Root (with fields)
Level A child (with fields) x n
Level B child (with fields) x n
The field we have to handle is on the LevelB objects.
There is a single Mvc view that handles the whole data. For collection objects, all fields are named like levelA1levelB1MyField, levelA1levelB2MyField, etc., so every single field has a unique name during the post. When the post happens, all values are read through a formCollection parameter which has on average 120-130 keys. The keys are isolated by splitting them and looping on the numerical part of the names; values are read, parsed to the expected types, and assigned to the object graph.
The datalayer part backing the object graph is all stored procedures, and all the mapping (both object to sproc and sproc to object) is hand written. There's a single stored procedure for the read part, which gets multiple datasets, and the method calling it reads the datasets and creates the object graph.
For the saving, there are multiple sprocs, mainly a "CreateRoot" and "UpdateRoot". When the code has to perform such tasks, the following happens:
For the create scenario, "CreateRoot" is called, then the sprocs "CreateLevelA" and "CreateLevelB" are called in a loop for each element in the graph;
For the update scenario, "UpdateRoot" is called, which internally deletes all "LevelA" and "LevelB" items, then the code recreates them by calling the aforementioned sprocs in a loop.
Last useful piece of information is that the "business objects graph" is used directly as a viewmodel in the view, instead of being mapped to a plain "html friendly" viewmodel. This is maybe what is causing me the most trouble.
So now the textbox on the view handles an "integer" field. That field must now accept a string. The field on LevelB must remain an integer, only with a lookup table (with FK of course) and the text field from the lookup must be used.
The approaches I tried with no success:
My first thought was to change the datatype of the MyField property from integer to string on the object, then change the sprocs accordingly and handle the join at sproc level: I'd have a consistent object for my view, and the read/write sprocs could translate from string to integer and vice versa. But I can't do that, because the join keys needed to retrieve the integer when writing are part of the Root item (as I stated in the first lines of this wall of text), which I don't know in the CreateLevelB sproc, and changing the whole chain of calls to pass those parameters would have a huge impact on the rest of the application. So, no good.
My next try was to keep things "as they are" and call some "translation methods": when reading, pass the integer to the view and call the translation method there to display the text value; when saving, use the posted text to retrieve the integer. The save part would work, since I'd have all the parameters I need, but for the read part I'd have to instantiate the data access layer and call its methods at view level, and there's no need to explain why that is a very bad choice, so I ruled this out too.
Now I'm out of options (or ideas anyway). Any suggestion to solve this is very welcome, and also if something is not clear enough just point it out and I will edit my post with more accurate information.
Thanks.
This is not a real answer but you could rip out all sprocs and use the updating facilities of an OR mapper. This will resolve all the layering issues. You just update data how you see fit and submit at the end.
I guess this would also make the questions around "should I use an int or a string" go away.
Edit: After reading your comment I thought of the following: do not implement alternative 1. You'd rather sacrifice code quality in the view than in the data storage model; the latter is more important and more centrally used.
I would not be too concerned with messing up the view by calling the DAL from it or the like. Changes in a view are localized and do not mess up the application's architecture. They just degrade the view.
Maybe you could create a view model in your controller and do the translations between DAL-model and view model? Or is that pattern not allowed?
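To sketch that last idea (every type and member name below is my assumption): the controller owns the translation, so the view binds to text and never touches the DAL.

using System.Web.Mvc;

public class LevelBViewModel
{
    public string MyFieldText { get; set; }   // the textbox binds to this
}

public interface ILookupRepository
{
    string GetText(int lookupId);
    int GetOrCreateId(int rootId, string text); // reuses a row for this Root, else creates one
}

public class RootController : Controller
{
    private readonly ILookupRepository _lookup;

    public RootController(ILookupRepository lookup) { _lookup = lookup; }

    // Read: translate the integer to text before the view sees it.
    public ActionResult EditField(int rootId, int myFieldId)
    {
        var vm = new LevelBViewModel { MyFieldText = _lookup.GetText(myFieldId) };
        return View(vm);
    }

    // Save: translate the posted text back into the integer key.
    [HttpPost]
    public ActionResult EditField(int rootId, LevelBViewModel vm)
    {
        int myFieldId = _lookup.GetOrCreateId(rootId, vm.MyFieldText);
        // ...assign myFieldId to the LevelB object and call the existing sprocs...
        return RedirectToAction("EditField", new { rootId, myFieldId });
    }
}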
While reading Jon Skeet's article on fields vs properties he mentions that changing fields to properties is a breaking change.
I would like to understand the common scenarios in which this change can cause breaks. Along with the scenario, if you can, please provide any details.
For starters, the following points have been mentioned elsewhere:
You can't change fields to properties if you are using reflection on the class. This is obvious, even though I don't have details. Serialization is one scenario where reflection is used to iterate over the object, and changing fields to properties will break the serializer or change its output.
You can't easily bind against fields. (Why is this? I read it here)
???
EDIT: Robert has a comprehensive list of reasons for choosing properties over fields and also explains how switching between them can cause a breaking change.
If you have a public field and another assembly has code that uses it, that other assembly will need to be recompiled.
In other words, the definition of "breaking" includes "will need to be recompiled".
Properties can throw arbitrary exceptions, whereas fields can't (at least when the compiler knows about the field assignment at compile time).
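A minimal illustration (the Team class and Name property are hypothetical): while this was a field, callers never saw exceptions; as a property, the setter can throw.

using System;

public class Team
{
    private string _name;

    public string Name
    {
        get => _name;
        // A plain field assignment could never throw; this setter can.
        set => _name = value ?? throw new ArgumentNullException(nameof(value));
    }
}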
In Windows Forms at least, you can only databind things like DataGridViewColumns to properties on your business objects, not fields. So if your class was being used as a DataSource for a grid, its properties changing to fields would result in some new bugs for the grid owner.
You can pass a field as a ref or out parameter, or take its address in an unsafe context, whilst you cannot do these with a property.
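To illustrate (names invented for the example): the field version compiles, the property version does not, so the switch breaks any caller that did this.

public class Example
{
    public int Field;
    public int Property { get; set; }
}

public static class Demo
{
    static void Increment(ref int value) => value++;

    public static void Main()
    {
        var e = new Example();
        Increment(ref e.Field);        // fine: a field is a variable
        // Increment(ref e.Property);  // error CS0206: a property is not a variable
    }
}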