We have a REST API which already uses "/v1/" in the controller routes, and we're planning to create a "/v2/" path and also take advantage of Web API 2. I was able to find a lot of information about versioning your controllers (attribute routing, custom resolvers, etc.), but one thing I have not been able to find any articles about is versioning your Model objects (a.k.a. data transfer objects). How are people versioning their Model objects?
In our codebase and problem domain, the controllers are "simple" (CRUD, really) and it's the Model objects which encode our domain expertise and upon which our core business logic operates. (I suspect this is true for many applications, so it's strange that most of the web articles about Web API 2 and versioning focus on controllers and elide concerns about the Model objects as if they'll take care of themselves.)
In a perfect world, I'd like to be able to just use the same classes for both API versions, and put attributes on properties to include or exclude them, things like "version 1 only", "version 2+ only", "deprecated in version 2", etc. I think I could implement this with a custom serializer that looks for attribute classes I create, but I want to know if there's built-in support for this or an open source library for it before I roll my own.
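Something like this is what I imagine rolling myself, assuming Json.NET as the serializer (SinceVersionAttribute and VersionedContractResolver are names I'm inventing for the sketch):

    using System;
    using System.Reflection;
    using Newtonsoft.Json;
    using Newtonsoft.Json.Serialization;

    [AttributeUsage(AttributeTargets.Property)]
    public class SinceVersionAttribute : Attribute
    {
        public SinceVersionAttribute(int version) { Version = version; }
        public int Version { get; private set; }
    }

    public class VersionedContractResolver : DefaultContractResolver
    {
        private readonly int _requestedVersion;

        public VersionedContractResolver(int requestedVersion)
        {
            _requestedVersion = requestedVersion;
        }

        protected override JsonProperty CreateProperty(
            MemberInfo member, MemberSerialization memberSerialization)
        {
            var property = base.CreateProperty(member, memberSerialization);
            var since = member.GetCustomAttribute<SinceVersionAttribute>();

            // Hide properties introduced after the requested API version.
            if (since != null && since.Version > _requestedVersion)
                property.ShouldSerialize = instance => false;

            return property;
        }
    }

A v1 response would then be produced with new JsonSerializerSettings { ContractResolver = new VersionedContractResolver(1) }.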
Another possible approach would be to derive the version 2 model classes from the version 1 model classes, but that only lets me add members, not remove them. I could instead derive both the version 1 and the version 2 classes from a base class, but any of these inheritance-based approaches requires (a) refactoring where the classes live and (b) a factory pattern so that the internals can create the correct derived type. I'd like to avoid this, but I would still prefer it over code duplication.
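Concretely, the shape I'm describing (hypothetical names), which shows why this can only add:

    public class PersonModelV1
    {
        public string Name { get; set; }
        public string FaxNumber { get; set; }   // unwanted in v2, but can't be removed
    }

    public class PersonModelV2 : PersonModelV1
    {
        public string MobileNumber { get; set; }  // new in v2
    }

    // The factory the internals would need to create the right derived type:
    public static class PersonModelFactory
    {
        public static PersonModelV1 Create(int apiVersion)
        {
            return apiVersion >= 2 ? new PersonModelV2() : new PersonModelV1();
        }
    }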
I suppose another approach is that we could hide our real Model objects and copy their values into "dumb" data transfer objects at the interface. This approach is simple and offers maximum flexibility, but it also maximizes the work.
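In its simplest form (hypothetical names), that is just per-version DTOs with hand-written copying:

    // Real domain model, hidden from the API surface (hypothetical).
    public class Person
    {
        public string Name { get; set; }
    }

    // "Dumb" DTO for v1, plus the hand-written mapping.
    public class PersonDtoV1
    {
        public string Name { get; set; }
    }

    public static class PersonMappings
    {
        public static PersonDtoV1 ToV1Dto(this Person person)
        {
            return new PersonDtoV1 { Name = person.Name };
        }
    }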
Is there an option I've missed? What approach are other people using?
Related
I currently have all my common utility methods in a base controller, which all of my controllers inherit from. These are methods for functionality like uploading files, resizing pictures, deleting files, sending e-mails, generating random passwords, hashing passwords, etc.
What is the recommended structure for these kinds of things?
In that case you shouldn't put all these utility functions inside your base controller. You will face a lot of problems if your project grows: changes and testing of these methods become difficult, all your inheriting controllers are forced to carry the same utility methods, and so on. Have a look at composition over inheritance for another approach.
Sometimes I organize my projects in the following manner, which might help you:
Simple helper methods: Create a folder and a namespace (e.g. namespace "[...].Common") inside your web project and put one or more public static classes inside it (e.g. "FileHelper.cs", "StringHelper.cs", etc.). If you need one of these methods in a controller action, simply put a "using ...Common" statement at the top of your controller class and call e.g. FileHelper.MethodName.
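A minimal sketch of that layout (FileHelper and DeleteIfExists are hypothetical names):

    namespace MyWebApp.Common
    {
        public static class FileHelper
        {
            // Deletes a file only if it exists; returns whether anything was deleted.
            public static bool DeleteIfExists(string path)
            {
                if (!System.IO.File.Exists(path))
                    return false;

                System.IO.File.Delete(path);
                return true;
            }
        }
    }

    // In a controller: using MyWebApp.Common; ... then FileHelper.DeleteIfExists(path);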
If I can define a closed subject area with a group of methods, I try to encapsulate these methods in a service class (maybe even outside the web project if I have a feeling that I might need this functionality in other projects too), define an interface for that class, and plug that functionality into controller classes by using dependency injection. If you don't know that concept you should definitely read Dependency injection in ASP.NET Core. Dependency injection is a widely used core concept in ASP.NET Core projects that brings you a lot of advantages and, if used correctly, steers your work into well-organized projects.
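A sketch of the service-class approach (IPasswordService and PasswordService are hypothetical names; the registration line assumes ASP.NET Core):

    using System;
    using Microsoft.AspNetCore.Mvc;

    public interface IPasswordService
    {
        string HashPassword(string plainText);
    }

    public class PasswordService : IPasswordService
    {
        public string HashPassword(string plainText)
        {
            // ... delegate to a vetted hashing implementation here ...
            throw new NotImplementedException();
        }
    }

    // Registration, e.g. in Program.cs:
    // builder.Services.AddScoped<IPasswordService, PasswordService>();

    // Consumption via constructor injection:
    public class AccountController : Controller
    {
        private readonly IPasswordService _passwords;

        public AccountController(IPasswordService passwords)
        {
            _passwords = passwords;
        }
    }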
More complex organizations are always possible depending on your needs. Have a look at multitier, hexagonal or onion architecture if your projects grow.
We would like to create a new project with a clean architecture. So our team decided to have:
Repository pattern
Data Access Layer
Business Access Layer
Common Layer (Abstractions such as IPersonRepository, IPersonService, ICSVExport)
Some core services, such as CSV file creation.
UnitTests
Now what we have is:
PersonsApp.Solution
--PersonsApp.WebUI
  --Controllers (PersonController)
--PersonApp.Persistence
  --Core folder
    -IGenericRepository.cs (Abstraction)
    -IUnitOfWork.cs (Abstraction)
  --Infrastructure folder
    -DbFactory.cs (Implementation)
    -Disposable.cs (Implementation)
    -IDbFactory.cs (Abstraction)
    -RepositoryBase.cs (Abstraction)
  --Models folder
    -DbContext and EF models (Implementation)
  --Repositories folder
    -PersonRepository.cs (Implementation)
--PersonApp.Service
  --Core folder
    -IPersonService.cs (Abstraction)
    -ICSVService.cs (Abstraction)
  --Business folder
    -PersonService.cs (Implementation)
  --System folder
    -CSVService.cs (Implementation)
--PersonApp.Test
In my view, our structure is a little bit messy.
The first problem is:
PersonApp.Service has abstractions (interfaces) and implementations in one class library.
The second problem is:
PersonApp.Persistence has abstractions (RepositoryBase) and implementations in one class library. But if I move RepositoryBase, IGenericRepository, and IUnitOfWork into a class library called PersonApp.Abstractions, then I will get circular reference errors between PersonApp.Abstractions and PersonApp.Persistence.
What is the best way to organize our solution?
This is probably not a good S.O. question given it's asking something that is opinion-based. When planning out project structure I aim to keep things simple. If an abstraction is for polymorphism I will consider moving interfaces into a separate "common" assembly. For example, if I want to provide several possible implementations of a thing, I will have a common assembly that declares the interface, then separate assemblies for the specific implementations. In most cases I use interfaces as contracts so that I can substitute the real implementation with mocks. In these cases I keep the interfaces nested beneath the concrete implementation. I use a VS add-in called NestIn to provide nesting support. This keeps the project structure nice and compact. However, a caveat: if you are using .NET Standard libraries, file nesting doesn't appear to be supported. (Hopefully this changes / has changed.)
So for a SomeService, my folder project structure would look like:
Services [folder]
  SomeService.cs [concrete]
  SomeService.dependencies.cs [partial] [nested]
  ISomeService.cs [nested]
The .dependencies.cs file is a partial class where I put all dependencies and the constructor. This keeps them tucked out of the way while I'm working on the implementation. I used to rely on #regions way back, but frankly I cannot stand them now; partial classes are much better IMO.
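A minimal sketch of the split (hypothetical names):

    // SomeService.dependencies.cs -- dependencies and constructor only.
    public partial class SomeService : ISomeService
    {
        private readonly IOrderRepository _orderRepository;

        public SomeService(IOrderRepository orderRepository)
        {
            _orderRepository = orderRepository;
        }
    }

    // SomeService.cs -- the actual implementation, free of constructor noise.
    public partial class SomeService
    {
        public void DoWork()
        {
            // ... implementation using _orderRepository ...
        }
    }

    public interface ISomeService
    {
        void DoWork();
    }

    public interface IOrderRepository { }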
My repositories live alongside my entities in a Domain assembly.
Entities [folder]
  Configuration [folder]
    OrderConfiguration.cs
  Order.cs
Repositories [folder]
  OrderManagementRepository.cs
  OrderManagementRepository.dependencies.cs
  IOrderManagementRepository.cs
MySystemDbContext.cs
I don't use generic repositories; rather, repositories are designed to pair up with the Controllers or Services that they serve. I might have some general-purpose repositories that serve more than one consumer (stuff like lookups, etc.). This pattern evolved for me from wanting to satisfy SRP. The biggest issue with things like generic repositories is that they need to serve multiple masters. While an OrderRepository might serve a single responsibility in being worried solely about Orders, the problem I see is that many different places will need access to Order information. This means different criteria and different amounts of data being wanted. So instead, if I have an OrderManagementService that deals with orders, order lines, etc. and touches on Products and other bits and bobs in the process of placing orders, I will use an OrderManagementRepository to serve virtually all data needed by the service and to manage the wrapping of supported operations for managing an order. This means my service typically needs only one repository dependency (rather than an OrderRepository, ProductRepository, etc.) and my OrderManagementRepository has only one reason to change. (But that's getting off topic. :)
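A sketch of what such a service-aligned repository might expose (hypothetical names; Order and Product stand in for the entities above):

    using System.Collections.Generic;

    public class Order { }
    public class Product { }

    // One repository shaped around everything OrderManagementService needs,
    // rather than one generic repository per entity.
    public interface IOrderManagementRepository
    {
        Order GetOrderWithLines(int orderId);
        IEnumerable<Product> GetOrderableProducts();
        void Add(Order order);
        void SaveChanges();
    }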
I started relying on nesting a while back, when you needed ReSharper or the like to get access to "Go to Implementation" for interfaces. "Go to Definition" would take you to the interface, which, when in a separate namespace or assembly, made navigating around dependencies a pain. By nesting interfaces under their concrete implementations, it's a quick click through from the interface to its concrete implementation and back. I make use of tracking the current code file in the Solution Explorer, so as I navigate through code my project view highlights/expands to the currently viewed file.
Ultimately, your project structure should reflect how you prefer to navigate through the code, making it as intuitive as possible to get around and find the bits you need. That will be different for different people, so partial classes and nesting work really well for me, as I am a very visual person who uses the project view a lot. It might not serve any benefit for people who are hotkey navigation wizards. Ultimately, though, I'd say keep it simple and adaptable. Trying to plan it out too much in the early stages is like premature optimization. Don't be afraid to move things around as a project grows. A project that grows simply by adding code will invariably turn into an unstable, confusing, tangled mess, no matter how well you try to plan ahead. Good code comes from constant refactoring, which means moving things around and deleting as well as adding. When your style is adaptable and you are building in a way that is constantly refining, so that code gets better through natural selection, the structure is free to evolve.
Hopefully that might give some food for thought. Good luck in the green fields!
Edit: Regarding polymorphic interfaces vs. contract interfaces. With polymorphic interfaces, where I want to have multiple substitutable concrete implementations, the interface (and any applicable base class) would reside in a separate assembly. The nesting solution applies to cases where the only substitution is for mocking purposes (unit testing). A recent example of the polymorphic case was when I needed to replace an in-built SMS service wrapper to support a new SMS provider. This resulted in refactoring a hard-coded concrete class from the original code into an SMSCore assembly containing the ISMSProvider interface and some general common definitions, then two assemblies for the implementations: SMSByMessageMedia and SMSBySoprano.
Other cases that come up might be around customizations. For instance, I have a number of personal libraries and such for general-purpose code, and when implementing them for a client there might be some client-specific "isms" I need to accommodate. These cases are typically resolved by extending the general-purpose implementation (Open-Closed Principle) through overriding, or by implementing a provided interface for the custom dependency that the general-purpose code can consume. In both of these cases, the client project is going to have a reference to the concrete implementation(s) anyway, so having extendable classes and dependency interfaces in that assembly/namespace doesn't pose any issues. This saves adding several different namespaces and assembly references.
I am building an application using DDD principles. I am now thinking about the namespace structure in the core of my application. Please see the idea below:
Company.Application.Core.Entities
Company.Application.Core.ValueObjects
However, I cannot find a single example of an application on GitHub which follows this convention. Is there a specific reason not to follow this naming convention?
I also have a base class for entities i.e. Company.Application.Core.Entities.Entity and a base class for value objects i.e. Company.Application.Core.ValueObjects.ValueObject
The alternative option is to put all Value Objects and Entities in: Company.Application.Core
Your approach will work, but such a composition tells a story about your code that is focused on DDD building blocks, not on the inherent features of your domain. In DDD we want to highlight the important things about the domain; technology issues are no longer the most important concern.
I suggest creating the following namespaces:
YourCompany.YourApplicationName.YourParticularBoundedContextName.Application
Here you keep all application-scope building blocks, i.e. Application Services and the DTOs used to transfer parameters to Application Services and return data from them.
YourCompany.YourApplicationName.YourParticularBoundedContextName.Domain
this is the namespace where you will create subnamespaces for Domain Scope building blocks.
YourCompany.YourApplicationName.YourParticularBoundedContextName.Domain.AggregateName
Each Aggregate has its own namespace, containing the Aggregate Root class, the Entities and VOs used internally by that Aggregate, the Repository interface, an Aggregate Factory if needed, etc.
I don't know whether this is possible in C#, but in Java there is another advantage of having a separate package (namespace) per Aggregate: you can make the Aggregate Root class public and give all other internally used Entities and VOs package scope, so they will not be visible outside the package (namespace). This way you build a public API for your Aggregate that no one can break, because there is a guardian: the compiler :)
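For what it's worth, C# has no namespace-level visibility; the closest analogue is the internal modifier, which scopes a type to its assembly. A minimal sketch under the assumption that each bounded context compiles into its own assembly (names hypothetical):

    using System.Collections.Generic;

    namespace YourCompany.YourApplicationName.Sales.Domain.Orders
    {
        // Aggregate Root: the public API of the Aggregate.
        public class Order
        {
            private readonly List<OrderLine> _lines = new List<OrderLine>();

            public void AddLine(string productId, int quantity)
            {
                _lines.Add(new OrderLine(productId, quantity));
            }
        }

        // Internal Entity: invisible outside the assembly, so the compiler
        // guards the Aggregate boundary much like Java's package scope.
        internal class OrderLine
        {
            internal OrderLine(string productId, int quantity)
            {
                ProductId = productId;
                Quantity = quantity;
            }

            internal string ProductId { get; private set; }
            internal int Quantity { get; private set; }
        }
    }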
YourCompany.YourApplicationName.YourParticularBoundedContextName.Infrastructure
Here is the place for repository implementations (each in a subnamespace of its corresponding Aggregate).
Base classes can be kept in:
YourCompany.YourApplicationName.Domain
and even in a separate project, since you may be able to reuse them in another application.
What is the advantage? When working with the code you focus on features and the domain rather than on technological aspects. You will more often face problems like "what does this process flow look like" than "I want to see all my Entities and VOs at once", so let your code structure support that. By separating Entities (actually Aggregate parts) and VOs (also Aggregate parts) into their own namespaces, you lose the information about what works with what. You can simply end up with a big ball of mud, because you will reuse something that shouldn't be reused.
Please look at:
https://github.com/BottegaIT/ddd-leaven-v2
it is a sample project in Java with packaging done this way. Maybe it will help you.
Another example is:
https://github.com/VaughnVernon/IDDD_Samples
which is a sample for Vaughn Vernon's book about DDD.
There is also an article that can be useful:
http://www.codingthearchitecture.com/2015/03/08/package_by_component_and_architecturally_aligned_testing.html
Using separate namespaces for your Entity types (that map to database tables etc.) and your DTO types (used for passing data between client and server layers of your application) is pretty standard practice, even if .Entities and .ValueObjects aren't particularly common choices. I don't think it's worth worrying about too much as long as you use them consistently.
At a current project I have to develop a .NET client application which uses a handful of SOAP web services to communicate with an external software.
Fortunately .NET makes it very easy to use a SOAP WS as it generates all the required objects when adding a service reference.
On the other hand, after playing around with these auto-generated classes for a while, I'm not sure whether it's better to use them directly in the business logic or whether I should map them to my own models (e.g. using something like a repository pattern).
Pros for mapping:
- Separation of business logic and data access (WS could change)
- Central point which calls the WS (can validate the responses and do a proper error handling)
- Sometimes WS types are cumbersome to use (e.g. WebService1.TypeA is not compatible with WebService2.TypeA).
- Generated classes cannot/should not be customized.
- ...
Cons for mapping:
Some of the WSDLs used have a complex structure and lots of nested types. If I map them to my own models, I have to duplicate many classes and properties. That is why I have concerns about this solution.
In short, I'm unsure whether duplicating the web service classes into my own namespaces and implementing a repository or facade pattern is the proper way to go, or whether it just blows up the architecture.
Are there any best practices or similar?
In my 20+ years of experience, adding a repository/service layer can be overkill if the lifetime of the project is uncertain or likely to be short-lived. There is the added concern of performance; however, SOAP itself would be more of a bottleneck than an object-mapping layer done correctly. Also, Naked Objects applications don't benefit from separation of concerns.
That being said, if you are connecting to a SOAP endpoint these days you are likely to be developing an enterprise application that should be built to be around for a few years and enhanced over time. That is, built to accept growing needs. So as far as your pros and cons, in my experience it depends on return on the time investment. From the information you posted here, the extra effort would be beneficial.
Generation can be a great tool when done right. I do a considerable amount of T4 generation in my projects for similar purposes. As far as best practices go, I generate my classes into a "Generated" sub-namespace and extend them. This way I can extend the functionality and structure without fear of them being overwritten. In the generated classes I mark everything partial and virtual so that I have options outside of inheritance. This may be overkill to do all at once, but it is something to consider. Leveraging partial classes could be another way to modify and extend the generated classes.
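A sketch of that layout (names hypothetical): the generated class is partial and virtual, so it can be extended either through a second partial file or, as here, through inheritance:

    namespace MyApp.Contracts.Generated
    {
        // Emitted by the T4 template; regenerated (and overwritten) on every run.
        public partial class CustomerDto
        {
            public virtual string FirstName { get; set; }
            public virtual string LastName { get; set; }
        }
    }

    namespace MyApp.Contracts
    {
        // Hand-written extension; survives regeneration because it lives
        // outside the generated namespace and file.
        public class Customer : Generated.CustomerDto
        {
            public string FullName
            {
                get { return FirstName + " " + LastName; }
            }
        }
    }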
You can even generate the extended/partial classes. I use T4Toolbox to generate external files and use the "PreserveExistingFile" option to prevent the file from being overwritten. T4Toolbox (if you aren't already using it) offers a great modular way to manage your generation, and it can even generate into other projects.
Even if you don’t add a repository layer, I would encourage you to apply the concepts of the Composite and Façade patterns to simplify the interaction with the external service.
So in review, best practices in my experience:
Repository:
- add one if the application needs to be long-lived and extendable
Generation:
- use a namespace and class-naming scheme that makes it clear that the class is generated and will be overwritten
- create partial classes, or classes that extend the generated ones, for flexibility
- use T4Toolbox if using T4: it keeps the generation modular and can preserve custom code
I'm a C# coder with a (Windows) sysadmin background. I've been looking at the various service frameworks in order to create a unified REST-API for various infrastructure components (windows management, hardware management, etc.). I've settled on using ServiceStack as my framework for this, but have a question on how to manage my DTOs. Most of the time my source data is from non-database objects, which include:
Other web services (usually SOAP based). I usually bring these in via "Add Web Reference" (most, but not all, are asmx).
.NET Objects (usually WMI/WinRM/PowerShell [System.Management], or Active Directory [System.DirectoryServices])...
In some unfortunate cases, raw text output I get as a result of invoking a command (via ssh or cmd).
In all of these cases, I will have to call some sort of Save() method to update properties. In addition, there might be some non-CRUD methods I would like to expose to the REST service. Usually I don't need everything from the source data (for example, in the case of web service data, I'm only interested in boxing up certain properties and methods of a particular proxy class). My understanding is that my DTOs should be clean and not have any dependencies. Since I don't believe I have an ORM I can use, what design pattern should I use to map my data to a DTO?
Apologies if I'm misusing any terminology here...
With a variety of backend services and data sources, I think it would be hard to use anything highly structured like a framework to map your data to DTOs. I would keep it simple:
Keep your DTO classes separate from any of your backend classes. Generally resist the temptation to try to reuse code, use inheritance, etc., in your DTOs (though sometimes I find it useful to declare interfaces for the DTOs to implement). This will keep the interface of your ServiceStack service clean and independent of backend details.
There are extension methods available in ServiceStack to easily map properties between two classes: TranslateTo, PopulateWith, PopulateWithNonDefaultValues, etc. The trick is that while your DTO classes should not subclass or directly reuse your backend classes, you will find it convenient to have the property names match up if you want to use these mapping methods.
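To illustrate the name-matching convention (a sketch; WmiDisk and DiskDto are hypothetical types, and TranslateTo is the older ServiceStack extension method; I believe newer versions expose the same idea as ConvertTo):

    // Backend class, e.g. shaped by WMI output (hypothetical).
    public class WmiDisk
    {
        public string Name { get; set; }
        public long SizeBytes { get; set; }
        public object NativeHandle { get; set; }   // backend detail, never exposed
    }

    // DTO: no inheritance from the backend type; property names deliberately match.
    public class DiskDto
    {
        public string Name { get; set; }
        public long SizeBytes { get; set; }
    }

    // Mapping by property-name convention (requires the ServiceStack extensions):
    // var dto = wmiDisk.TranslateTo<DiskDto>();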
Keep your ServiceStack service classes simple; their primary responsibility should be translating between DTO classes and lower level model classes, and making one or two method calls on business logic classes to do the actual work.
It sounds like it would be useful for the highest level of your business layer (the classes your ServiceStack services interact with) to present a clean interface that abstracts away the details of the source and format of a given type of data. So you may want three layers of model classes. From top to bottom: DTOs, business-layer POCO classes, and framework-specific classes for specific backend services, such as web-reference-generated code or whatever.
I think that's about all there is to it.
I recommend that you define DTOs that meet the requirements of your API, and then have a 'business logic' layer that mediates between the actual objects and your DTOs.
Your ServiceStack services will have a dependency on both the DTO definitions and the business logic layer, and the business logic layer will have a dependency on the DTO definitions and the real-world object definitions. In effect, your REST services and DTOs will act as a facade over the real-world APIs.
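A minimal sketch of that layering (all names hypothetical): the service depends only on the DTO and the business logic class, and the business logic class hides the real-world API behind a clean method.

    // DTO definitions: plain classes with no backend dependencies.
    public class ServerDto
    {
        public string HostName { get; set; }
        public bool IsOnline { get; set; }
    }

    // Business logic layer: mediates between the real-world objects and the DTOs.
    public class ServerManager
    {
        public ServerDto GetServer(string hostName)
        {
            // ... call WMI/SOAP/ssh here, then copy the results into the DTO ...
            return new ServerDto { HostName = hostName, IsOnline = true };
        }
    }

    // ServiceStack service: a thin facade over the business logic layer.
    public class ServerService
    {
        private readonly ServerManager _manager = new ServerManager();

        public ServerDto Get(string hostName)
        {
            return _manager.GetServer(hostName);
        }
    }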