Maybe this is a stupid question, but it's really not obvious to me :(
var address = new Address { Id = 1, Name = "John Doe" };
// Configure AutoMapper
Mapper.CreateMap<Address, AddressViewModel>();
// Perform mapping
var viewModel = Mapper.Map<Address, AddressViewModel>(address);
Imho Mapper.CreateMap<Address, AddressViewModel>(); is not needed, because AutoMapper could gather this information later, at the point where the mapping is performed.
So why do I have to write those configurations?
Having a quick browse through the source, it appears that it stores its mappings in the engine so that any calls that follow load the mapping data from the engine.
This is probably due to the fact that reflection can be expensive and there's no way for AutoMapper to know how few (or many) times someone will call Mapper.Map. In a word: caching :)
Additionally this pattern is very useful for validating that your configuration is correct (see https://github.com/AutoMapper/AutoMapper/wiki/Configuration-validation). Some mappings require fairly specific configuration, and this catches any errors early.
If nothing else, AutoMapper doesn't necessarily get all of this information when the mapping is performed, because you may have specified any number of .ForMember / .AfterMap (etc.) chainings in your CreateMap expression. You don't want to have to repeat all of that in every mapping invocation.
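To illustrate that last point, here is a minimal sketch (DisplayName and MappedAt are made-up members) of configuration that a bare Map call could never infer on its own:

// Hypothetical members; the point is that this knowledge lives in CreateMap,
// not in the Map call itself.
Mapper.CreateMap<Address, AddressViewModel>()
    .ForMember(vm => vm.DisplayName, opt => opt.MapFrom(a => a.Name))  // custom member mapping
    .AfterMap((a, vm) => vm.MappedAt = DateTime.UtcNow);               // post-processing hook

// Every subsequent Map call reuses that cached plan:
var viewModel = Mapper.Map<Address, AddressViewModel>(address);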
Related
Ok folks, this is a rather long question, where I'm trying my best to describe the current situation and provide some meaningful context before coming to my actual question.
TL;DR
I need a way to identify invalid enum-to-enum mappings, which might cause runtime issues as their definitions have diverged over time.
Some Context
So my team and I are maintaining this rather complex set of REST-APIs...complex at least when it comes down to the actual object-graphs involved.
We have to deal with some hundreds of models in total.
To raise structural complexity, the original architects went with a full n-tier style on an inner API level.
On top of that, we're running multiple such architectured services, which sometimes need to call each other.
This is achieved through ordinary HTTP calls here, some messaging there, you get the idea.
To let one API communicate with another, and to maintain SOA and/or microservice principles, every API at least provides a corresponding client library, which manages communication with its respective API, regardless of the actual underlying protocol involved.
Boiling this down, this incorporates at least the following layers per API (top-down)
Client-Layer
API-Layer
Domain-Layer
Persistence-Layer
Additionally, all those layers maintain their own representations of the various models. Often, those are 1:1 representations, just in another namespace. Sometimes there are more significant differences between these layers. It depends...
To reduce boilerplate when communicating between these layers, we're falling back on AutoMapper most of the time (hate it or love it).
The problem:
As we evolve our overall system, we noticed more and more problems when mapping enum-to-enum properties within the various representations of the models.
Sometimes it's because some dev just forgot to add a new enum value in one of the layers; sometimes we re-generated an OpenAPI-based client, etc., which then leads to out-of-sync definitions of those enums. The primary issue is that a source enum may have more values than the target enum.
Another issue might occur when there are slight differences in naming, e.g. Executer vs. Executor.
Let's say we have these (very, very over-simplified) model representations:
public enum Source { A, B, C, D, Executer, A1, B2, C3 } // more values than below
public enum Destination { c, b, X, Y, A, Executor } // fewer values, different ordering and casing, no D, but X and Y, and a typo
class SourceType
{
public Source[] Enums { get; set; }
}
class DestinationType
{
public Destination[] Enums { get; set; }
}
Now let's say our AutoMapper config looks something like this:
var problematicMapper = new MapperConfiguration(config =>
{
config.CreateMap<SourceType, DestinationType>();
}).CreateMapper();
So mapping the following model is kind of a jeopardy, semantic-wise (or at least offers some very odd fun while debugging).
var destination = problematicMapper.Map<DestinationType>(new SourceType()
{
Enums = new []
{
Source.A,
Source.B,
Source.C,
Source.D,
Source.Executer,
Source.A1,
Source.B2,
Source.C3
}
});
var mappedValues = destination.Enums.Select(x => x.ToString()).ToArray();
testOutput.WriteLine(string.Join(Environment.NewLine, mappedValues));
/*
Source.A => A <- ✔️ ok
Source.B => b <- ✔️ok
Source.C => c <- ✔️ok
Source.D => Y <- 🤷♀️ whoops
Source.Executer => A <- 🧏♂️ wait, what?
Source.A1 => Executor <- 🙊 nah
Source.B2 => 6 <- 🙉 wtf?
Source.C3 => 7 <- 🙈 wth?
*/
Bear with me, as some situations here are staged and possibly more extreme than found in reality. I just wanted to point out some weird behavior, even with AutoMapper trying to gracefully handle most cases, like the re-orderings or different casings. Currently, we are facing either more values in the source enum, or slight differences in naming / typos.
Even less fun can be had when this ultimately causes some nasty production bugs, which may have more or less serious business impact - especially when this kind of issue only shows up at run-time, rather than at test- and/or build-time.
Additionally, the problem is not exclusive to n-tier-ish architectures, but could also be an issue in orthogonal/onion-/clean-ish architecture styles (whereas in such cases it should be more likely that such value types would be placed somewhere in the center of the APIs, rather than on every corner / outer ring / adapter layer or whatever the current terminology is).
A (temporary) solution
Despite trying to reduce the sheer amount of redundancy within the respective layers, or (manually) maintaining explicit enum values within the definitions themselves (both of which are valid options, but heck, this is a lot of PITA-work), there is not much left to do to mitigate this kind of issue.
Gladly, there is a nice option available which leverages mapping enum-to-enum properties per-name instead of per-value, as well as doing more customization on a very fine-granular, per-member basis.
[AutoMapper.Extensions.EnumMapping] to the rescue!
from the docs:
The package AutoMapper.Extensions.EnumMapping will map all values from Source type to Destination type if both enum types have the same value (or by name or by value)
and
This package adds an extra EnumMapperConfigurationExpressionExtensions.EnableEnumMappingValidation extension method to extend the existing AssertConfigurationIsValid() method to validate also the enum mappings.
To enable and customize mappings, one should just need to create the respective type-maps within the AutoMapper-configuration:
var mapperConfig = new MapperConfiguration(config =>
{
config.CreateMap<SourceType, DestinationType>();
config.CreateMap<Source, Destination>().ConvertUsingEnumMapping(opt => opt.MapByName());
config.EnableEnumMappingValidation();
});
mapperConfig.AssertConfigurationIsValid();
Which then would validate even enum-to-enum mappings.
The question (finally ^^)
As our team previously did not (need to) configure AutoMapper with maps for every enum-to-enum mapping (as was the case for dynamic maps in previous versions of AutoMapper), we're a bit lost on how to efficiently and deterministically discover every map that needs to be configured this way. Especially as we're dealing with possibly a couple dozen such cases per API (and per layer).
How could we possibly get to the point where we have validated and adapted our existing code-base, as well as prevent this kind of dumbery in the first place?
Leverage custom validation to discover missing mappings during test-time
Ok, now this approach leverages a multi-phased analysis, best fitted into a unit test (which may already be present in your solution(s) anyway).
It's not a golden gun to magically solve all the issues which may be prevalent, but it puts you into a very tight dev-loop which should help clean things up.
Period.
The steps involved are
enable validation of your AutoMapper-configuration
use AutoMapper custom-validation to discover missing type maps
add and configure missing type-maps
ensure maps are valid
adapt changes in enums, or mapping logic (whatever best fits)
this can be cumbersome and needs extra attention, depending on the issues discovered by this approach
rinse and repeat
Examples below use xUnit. Use whatever you might have at hand.
0. starting point
We're starting with your initial AutoMapper-configuration:
var mapperConfig = new MapperConfiguration(config =>
{
config.CreateMap<SourceType, DestinationType>();
});
1. enable validation of your AutoMapper-Configuration
Somewhere within your test-suite, ensure you are validating your AutoMapper-configuration:
[Fact]
public void MapperConfigurationIsValid() => mapperConfig.AssertConfigurationIsValid();
2. use AutoMapper custom-validation to discover missing type maps
Now modify your AutoMapper-configuration to this:
mapperConfig = new MapperConfiguration(config =>
{
config.CreateMap<SourceType, DestinationType>();
config.Advanced.Validator(context => {
if (!context.Types.DestinationType.IsEnum) return;
if (!context.Types.SourceType.IsEnum) return;
if (context.TypeMap is not null) return;
var message = $"config.CreateMap<{context.Types.SourceType}, {context.Types.DestinationType}>().ConvertUsingEnumMapping(opt => opt.MapByName());";
throw new AutoMapperConfigurationException(message);
});
config.EnableEnumMappingValidation();
});
This does a couple of things:
look for mappings, that map from an enum to an enum
which have no type-map associated with them (that is, they were "generated" by AutoMapper itself and hence lack an explicit CreateMap call)
if (!context.Types.DestinationType.IsEnum) return;
if (!context.Types.SourceType.IsEnum) return;
if (context.TypeMap is not null) return;
Raise an error whose message is the equivalent of the missing CreateMap call:
var message = $"config.CreateMap<{context.Types.SourceType}, {context.Types.DestinationType}>().ConvertUsingEnumMapping(opt => opt.MapByName());";
throw new AutoMapperConfigurationException(message);
3. add and configure missing type-maps
Re-running the previous test, which should now fail, should output something like this:
AutoMapper.AutoMapperConfigurationException : config.CreateMap<Sample.AutoMapper.EnumValidation.Source, Sample.AutoMapper.EnumValidation.Destination>().ConvertUsingEnumMapping(opt => opt.MapByName());
And boom, there you go. The missing type-map configuration call on a silver platter.
Now copy that line and place it somewhere suitable within your AutoMapper-configuration.
For this post I'm just putting it below the existing one:
config.CreateMap<SourceType, DestinationType>();
config.CreateMap<Sample.AutoMapper.EnumValidation.Source, Sample.AutoMapper.EnumValidation.Destination>().ConvertUsingEnumMapping(opt => opt.MapByName());
In a real-world scenario, this would be a line for every enum-to-enum mapping that does not already have a type-map associated with it within the AutoMapper-configuration. Depending on how you actually configure AutoMapper, this line might need to be slightly adapted to your needs, e.g. for usage in MappingProfiles, as sketched below.
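If you configure AutoMapper through profiles, the equivalent would look roughly like this (a sketch; the profile name is made up):

public class EnumMappingsProfile : Profile
{
    public EnumMappingsProfile()
    {
        // Same call as above, just hosted inside a profile.
        CreateMap<Source, Destination>().ConvertUsingEnumMapping(opt => opt.MapByName());
    }
}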
5. adapt changes in enums
Re-run the test from above, which should fail now, too, as there are incompatible enum-values.
The output should look something like this:
AutoMapper.AutoMapperConfigurationException : Missing enum mapping from Sample.AutoMapper.EnumValidation.Source to Sample.AutoMapper.EnumValidation.Destination based on Name
The following source values are not mapped:
- B
- C
- D
- Executer
- A1
- B2
- C3
There you go, AutoMapper discovered missing or un-mappable enum-values.
Note that we lost the automatic handling of differences in casing.
What's to do now heavily depends on your solution and cannot be covered in an SO post. So take appropriate actions to mitigate.
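One possible mitigation for cases like the Executer/Executor typo, where fixing the enum itself isn't immediately feasible, is an explicit per-value override; a hedged sketch using the package's MapValue option:

config.CreateMap<Source, Destination>().ConvertUsingEnumMapping(opt => opt
    .MapByName()
    .MapValue(Source.Executer, Destination.Executor)); // bridge the typo explicitly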
6. rinse and repeat
Go back to 3. until all issues are solved.
From then on, you should have a safety net in place that should prevent you from falling into that kind of trap in the future.
However, note that mapping per-name instead of per-value might have a negative impact, performance-wise. That should definitely be taken into account when applying this kind of change to your code-base. But with all those inter-layer mappings present, I would guess a possible bottleneck is in another castle, Mario ;)
A full wrap-up of the samples shown in this post can be found in this github-repo.
I wrote a validator that checks whether the enums match, so I don't have to add those enum mappings.
var problematicMapperConfiguration = new MapperConfiguration(config =>
{
config.Advanced.Validator(EnumMappingValidator.ValidateNamesMatch());
config.CreateMap<SourceType, DestinationType>();
});
problematicMapperConfiguration.AssertConfigurationIsValid();
With your example it would fail like that:
Expected enum AutoMapperEnumValidation.EarlocTests+Destination to contain enum "D".
Expected enum AutoMapperEnumValidation.EarlocTests+Destination to contain enum "Executer".
Expected enum AutoMapperEnumValidation.EarlocTests+Destination to contain enum "A1".
Expected enum AutoMapperEnumValidation.EarlocTests+Destination to contain enum "B2".
Expected enum AutoMapperEnumValidation.EarlocTests+Destination to contain enum "C3".
The validator is very simple and looks like this:
public class EnumMappingValidator
{
public static Action<ValidationContext> ValidateNamesMatch()
{
return validationContext =>
{
var sourceEnumType = GetEnumType(validationContext.Types.SourceType);
if (sourceEnumType == null)
return;
var destinationEnumType = GetEnumType(validationContext.Types.DestinationType);
if (destinationEnumType == null) throw new ArgumentException("Unexpected Enum to Non-Enum Map");
var sourceEnumNames = Enum.GetNames(sourceEnumType).ToList();
var destinationEnumNames = Enum.GetNames(destinationEnumType).ToList();
var errors = new List<string>();
foreach (var sourceEnumName in sourceEnumNames)
{
if (destinationEnumNames.All(x => x.ToLower() != sourceEnumName.ToLower()))
errors.Add($"Expected enum {destinationEnumType} to contain enum \"{sourceEnumName}\".");
}
if (errors.Any())
throw new ArgumentException(string.Join(Environment.NewLine, errors));
};
}
private static Type? GetEnumType(Type type)
{
if (type.IsEnum) return type;
var nullableUnderlyingType = Nullable.GetUnderlyingType(type);
if (nullableUnderlyingType?.IsEnum ?? false) return nullableUnderlyingType;
return null;
}
}
Github: https://github.com/matthiaslischka/AutoMapperEnumValidation
I have a page named "ReportController.aspx" whose purpose is to instantiate a report (class) based on query string parameters
switch (Request.QueryString["Report"])
{
case "ReportA":
CreateReportAReport("ReportA's Title");
break;
case "ReportB":
CreateReportBReport("ReportB's Title");
break;
case "ReportC":
CreateReportCReport("ReportC's Title");
break;
case "ReportD":
CreateReportDReport("ReportD's Title");
break;
...
Basically, each time a new report is needed there will be this overhead of adding a case and adding a method. This switch statement could get very, very long. I read that it is possible to use a Dictionary to map a Report to ?. How would this look using a Dictionary (assuming this is a better way)?
Also, each CreateReportXReport method basically passes a bunch of additional QueryString values to the report class's constructor (each report class has a different constructor).
There's no getting around having to type in the new information somewhere; the key is to get it out of the code, to avoid recompiling and redeploying for such a trivial change.
Some good options are to list these values in an XML config file or, better yet, your database.
You'll probably want to fill out a dictionary with this data, whatever the source. This will:
Make it easy to cache
Make for clean, fast code
When the time comes to pull your data out of configuration into code, you'd add items to the dictionary like so:
Dictionary<string, IReportCreator> reportCreators = configDataGetter.GetReportDataFromDB()
    .ToDictionary(r => r.Name, r => myReportCreatorFactory(r.ReportID));
This example assumes you're getting the data as entity objects of some kind, and using a factory that applies a strategy pattern for your report-creating code. There are a bazillion ways you could be doing this, of course.
I assume the reports are just so extensive, varied, and different in nature that you can't simply put SQL and styling building blocks in the db?
Edit based on OP's comments:
Ah, gotcha. Well, I don't know how much time you have, but as much as you push everything into some sort of factory now, you'll have better options later. I'm going to give you some thoughts that will hopefully help, based on similar things I've done. Each step is an improvement in itself, but also a baby step toward really separating your report logic from this shell code. Further, I can see you already know what you're doing, and I'm sure you know some of what I'll say below, but I don't know what you know, and it will be helpful for others.
First, pull out any and every bit of information from code to db (if you haven't already), and you'll add more db fields (and a table or two) as you improve your setup.
You might know about it already, but I'll mention it for others: check out the strategy pattern I referenced above. You can have the custom logic of each "report function" actually be in the constructor of your various strategy classes. They would all inherit from your base ReportGenerator (or sport a common IReportGenerator interface). They can and should share the same constructor; varying report parameters would be handled by a parameter of type dictionary. Each class's constructor implementation would know the types of the variables it needs (from db configuration) and would cast/use them accordingly.
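A rough sketch of that shared-constructor strategy idea (all names here are illustrative, not from the question):

public interface IReportGenerator
{
    void Render();
}

public class SalesReport : IReportGenerator
{
    private readonly int _year;

    // Every strategy shares this constructor shape; each implementation knows
    // (from its db configuration) which keys it needs and casts accordingly.
    public SalesReport(IDictionary<string, object> parameters)
    {
        _year = (int)parameters["Year"];
    }

    public void Render() { /* report-specific logic here */ }
}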
The next step might be to really get rid of the select statement in your factory, using reflection. You'd have to have the name of the class as part of your report configuration data in the db (and have a common constructor).
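That could look roughly like this, assuming the configuration row carries an assembly-qualified class name (reportConfig and parameters are hypothetical):

// The ClassName column from the db drives the instantiation.
var type = Type.GetType(reportConfig.ClassName, throwOnError: true);
var report = (IReportGenerator)Activator.CreateInstance(type, parameters);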
At this point, the way to add a new report is pretty clean, even though you've got to add a new class each time. That's good. It fulfills the single responsibility and open-closed principles.
Now, there's just the final step of removing the classes from your app, so they can be added/edited on the fly. Check out MEF. This is what it's made for. Some things you might find on the internet that you probably shouldn't use are CodeDom (great when there was nothing else, but MEF is better) and the compilation-as-a-service features coming in .NET 5. MEF is the way to go.
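A minimal MEF sketch of that final step (using System.ComponentModel.Composition; names are illustrative):

[Export(typeof(IReportGenerator))]
public class SalesReport : IReportGenerator { /* ... */ }

// Host side: compose from a plugin folder, so new reports can be dropped in
// without recompiling the shell application.
var catalog = new DirectoryCatalog("Reports");
var container = new CompositionContainer(catalog);
var reports = container.GetExportedValues<IReportGenerator>();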
Assuming that all reports implement IReport, you can do it using Func<IReport>, like this:
IDictionary<string, Func<IReport>> dictToReport = new Dictionary<string, Func<IReport>> {
{"ReportA", () => CreateReportAReport("ReportA's Title") }
, {"ReportB", () => CreateReportBReport("ReportB's Title") }
, ...
};
You can then replace the switch with this code:
var myReport = dictToReport[Request.QueryString["Report"]]();
I think it is better to re-design this code and convert it into some database table ("Reports"), keeping the list of reports and the ID of each report there.
That's it.
To do this with a Dictionary<string, Func<Report>>, you would simply build one up as a static cache in the containing type:
public class Container {
private static Dictionary<string, Func<Report>> ReportMap =
new Dictionary<string, Func<Report>>();
static Container() {
ReportMap["ReportA"] = () => CreateReportAReport("ReportA's Title");
ReportMap["ReportB"] = () => CreateReportBReport("ReportB's Title");
// etc ...
}
}
Now that the map is built you simply do a lookup in the function instead of a switch
Func<Report> func;
if (!ReportMap.TryGetValue(Request.QueryString["Report"], out func)) {
// Handle it not being present
throw new Exception(..);
}
Report report = func();
Fluent builder is a well-known pattern to build objects with many properties:
Team team = teamBuilder.CreateTeam("Chelsea")
.WithNickName("The blues")
.WithShirtColor(Color.Blue)
.FromTown("London")
.PlayingAt("Stamford Bridge");
However, using it doesn't seem very clear to me, for one particular reason:
Every Team object has its minimal operational state, in other words, set of properties which have to be set (mandatory), so that the object is ready to use.
Now, how should the Fluent builder approach be used considering that you have to maintain this state?
Should the With_XYZ members modify the part of the object, that can't affect this state?
Maybe there are some general rules for this situation?
Update:
If the CreateTeam method should take the mandatory properties as arguments, what happens next?
What happens if I (for example) omit the WithNickName call?
Does this mean that the nickname should be defaulted to some DefaultNickname?
Does this mean that the example (see the link) is bad, because the object can be left in invalid state?
And, well, I suspect that in this case the fluent building approach actually loses its "beauty", doesn't it?
CreateTeam() should take the mandatory properties as parameters.
Team CreateTeam(string name, Color shirtColor, string Town)
{
}
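With the mandatory values up front, the optional ones stay fluent, e.g. (a sketch reusing the question's builder):

Team team = teamBuilder.CreateTeam("Chelsea", Color.Blue, "London")
    .WithNickName("The blues")
    .PlayingAt("Stamford Bridge");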
Seems to me the points of Fluent Interface are:
Minimize the number of parameters to zero in a constructor while still dynamically initializing certain properties upon creation.
Makes the property/parameter-value association very clear - in a large parameter list, what value is for what? You can't tell without digging further.
The coding style of the instantiation is very clean, readable, and editable. Adding or deleting property settings with this formatting style is less error-prone, i.e. you delete an entire line rather than editing in the middle of a long parameter list - not to mention editing the wrong parameter.
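To make the second point concrete, compare a long positional constructor with the fluent version (illustrative only; the Team constructor here is hypothetical):

// Which string is which? You can't tell without checking the signature.
var t1 = new Team("Chelsea", "The blues", "London", "Stamford Bridge");

// Each value is self-describing, and a line can be added or removed safely.
var t2 = teamBuilder.CreateTeam("Chelsea")
    .WithNickName("The blues")
    .FromTown("London")
    .PlayingAt("Stamford Bridge");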
I have the following code in a model
base.Name = instance.Name;
base.SSN = instance.SSN;
base.DateModified = DateTime.Now;
base.ClienType = instance.ClientType;
If I add more properties to my base, then I have to update my model to update the properties. Is there an easier way to update the base properties instead of listing each of them and updating them one by one?
Yes, I know I am being lazy.
I'm not quite sure why you are doing this, but you might want to take a look at AutoMapper - if your properties are the same on both sides you can get it to automatically map one to the other without doing any real setup.
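A minimal sketch of that suggestion, using AutoMapper's classic static API (ClientInstance and ClientModel are hypothetical type names):

// One-time setup; maps every property whose name matches on both sides.
Mapper.CreateMap<ClientInstance, ClientModel>();

// The per-property boilerplate then collapses to a single call that maps
// instance onto the existing model object:
Mapper.Map(instance, this);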
You could use AutoMapper to automatically map where there is a naming convention.
Also beware that things like the following would not map, as one has one less 't':
base.ClienType = instance.ClientType;
I'm considering using PostSharp for an entity-to-DTO and DTO-to-entity mapper. Doing that task manually for about 100 entities would be a maintenance nightmare. I've looked at AutoMapper on codeplex, but I think the overhead might be a serious problem in my case; besides, I feel that PostSharp could give me some extra control over the mapping convention. If anyone can share any experiences with this kind of problem, that would be great.
The direction I'm thinking in is something like this (please somebody tell me if this is not possible):
The aspect that i am planing to stick to a class would fill the next two methods with content:
EntityType DTOToEntity(DTOType dto) {}
DTOType EntityToDTO(EntityType entity) {}
The first method would return an entity based on the DTO, the second one would do the opposite. Inside the aspect I'm planning to loop through each property, create a new target and assign the value of each property to its counterpart on the target object. Is this possible to do at compile-time without any runtime overhead?
If your DTOs' field names match your entity field names, then I'd use Duck Typing:
http://www.deftflux.net/blog/page/Duck-Typing-Project.aspx
http://haacked.com/archive/2007/08/19/why-duck-typing-matters-to-c-developers.aspx
Your code would work like this
UserDTO user = DuckTyping.Cast<UserDTO>(userEntity);
Basically, the duck typing library will map over the fields by matching the names. It uses dynamically generated IL to achieve this.
If that has the potential of being too slow, I'd probably try to get CodeSmith to generate the methods for me.
If it helps, there is a project called PostSharp4EF that basically implements support for POCO objects for Entity Framework 1. See http://www.codeplex.com/efcontrib.
Note that PostSharp is not very good at generating new code. It is good at mixing new code with existing code. If you need to generate code, I would recommend writing a C# code generator based on reflection and compiling the resulting code, or using a tool like CodeSmith, as mentioned previously.
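A hedged sketch of that reflection-based generator idea: reflect over matching property names and emit plain C# assignments, which you then compile into your project (so there is no runtime reflection cost afterwards). All names are illustrative:

using System;
using System.Linq;
using System.Text;

static string GenerateMapperMethod(Type source, Type target, string methodName)
{
    var sb = new StringBuilder();
    sb.AppendLine($"public static {target.Name} {methodName}({source.Name} source)");
    sb.AppendLine("{");
    sb.AppendLine($"    var result = new {target.Name}();");
    // Emit an assignment for every readable source property that has a
    // writable, same-named counterpart on the target type.
    foreach (var prop in source.GetProperties().Where(p => p.CanRead))
    {
        var targetProp = target.GetProperty(prop.Name);
        if (targetProp != null && targetProp.CanWrite)
            sb.AppendLine($"    result.{prop.Name} = source.{prop.Name};");
    }
    sb.AppendLine("    return result;");
    sb.AppendLine("}");
    return sb.ToString();
}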