I'm trying to migrate one of my modules from Postgres (with EF) to Cassandra.
Here is my best try for Cassandra mappings:
internal sealed class UserMappings : Mappings
{
public UserMappings()
{
For<User>().TableName("users")
.PartitionKey(x => x.Id)
.ClusteringKey(x => x.Id)
.Column(x => x.Id, x => x.WithDbType<Guid>().WithName("id"))
// I want to add mappings for password Hash here
}
}
The first problem is that I use value objects (VOs) for compile-time safety but want to store primitives in the database.
Example VO for entity id:
public record UserId
{
public Guid Value { get; }
public UserId(Guid value)
{
if (value == Guid.Empty) throw new Exception("Invalid UserId");
Value = value;
}
public static implicit operator Guid(UserId id) => id.Value;
public static implicit operator UserId(Guid id) => new(id);
}
Secondly, my entity has private fields and I don't know how to map them to the database.
internal class User
{
private User()
{
}
public User(/*...*/)
{
//...
}
private string _passwordHash;
public UserId Id { get; }
//...
}
Also, is a public parameterless constructor required?
It sounds like you want to have some business logic in the classes that you are using to map to your database. I would recommend creating new classes that have only public properties and no logic whatsoever (i.e. POCOs) and then mapping these objects into your domain objects either "manually" or with a library like AutoMapper. This has the benefit of keeping your domain objects separate from the database schema.
The DataStax C# driver mapper will not be able to map private fields or properties that don't have a setter. It is able to map properties with a private setter though so you might want to leverage that instead.
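For example, you could pair a persistence-only POCO (private setters, no logic) with a manual translation back to the domain object. A minimal sketch, assuming a hypothetical UserDto class and column names that match your table:
internal sealed class UserDto
{
    // Public properties with private setters, which the mapper can populate.
    public Guid Id { get; private set; }
    public string PasswordHash { get; private set; }
}
internal sealed class UserMappings : Mappings
{
    public UserMappings()
    {
        For<UserDto>()
            .TableName("users")
            .PartitionKey(x => x.Id)
            .Column(x => x.Id, c => c.WithName("id"))
            .Column(x => x.PasswordHash, c => c.WithName("password_hash"));
    }
}
internal static class UserTranslator
{
    // "Manual" mapping back to the domain object; the UserId constructor
    // re-applies its validation on the way in.
    public static User ToDomain(UserDto dto) =>
        new User(new UserId(dto.Id) /*, dto.PasswordHash, ... */);
}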
Also keep in mind that you will need to provide a custom TypeConverter to the Mapper or Table objects if you use custom types in your mapping. You might get away with not implementing a TypeConverter if you have implicit operators to convert these types (like your UserId class), but I'm not 100% sure.
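If you do go the TypeConverter route, the sketch below shows the general shape: a subclass that tells the mapper how to convert between the database type and your VO. The exact base-class signatures may differ between driver versions, so treat this as an outline rather than a drop-in:
public class VoTypeConverter : TypeConverter
{
    // DB -> POCO: Guid column into a UserId property.
    protected override Func<TDatabase, TPoco> GetUserDefinedFromDbConverter<TDatabase, TPoco>()
    {
        if (typeof(TDatabase) == typeof(Guid) && typeof(TPoco) == typeof(UserId))
            return (Func<TDatabase, TPoco>)(object)(Func<Guid, UserId>)(g => new UserId(g));
        return null;
    }

    // POCO -> DB: UserId property into a Guid column.
    protected override Func<TPoco, TDatabase> GetUserDefinedToDbConverter<TPoco, TDatabase>()
    {
        if (typeof(TPoco) == typeof(UserId) && typeof(TDatabase) == typeof(Guid))
            return (Func<TPoco, TDatabase>)(object)(Func<UserId, Guid>)(id => id.Value);
        return null;
    }
}
// Registered on the mapping configuration, e.g.:
// MappingConfiguration.Global.ConvertTypesUsing(new VoTypeConverter());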
On the constructor issue, I think having a private empty constructor is enough.
We are building a GraphQL server using Hot Chocolate 13.2. Our application has a very common scenario with a Code First approach and a SQL Server database. Operations are 100% CRUD, and for each model in the application there is a DbSet in a single shared context.
public DbSet<Currency> Currencies { get; set; }
public DbSet<Language> Languages { get; set; }
// ... and many more DbSets like these.
Since there is a large number of models, the number of mutation and query types in the code is growing too much. Furthermore, the process of creating new mutations is basically copy/paste, so we have ended up with too many classes doing exactly the same thing (except for the Type, which changes for each class).
The problem:
From what we know, Hot Chocolate does not allow different Query and Mutation types to expose the same method name across these classes, which is why we cannot use generic methods. Example:
public class BaseMutation<T> : Validator where T : BaseEntity
{
public static T Add([Service]MyDbContext context, ClaimsPrincipal claims, [Argument]T model)
{
// Do some stuff
context.Set<T>().Add(model);
context.SaveChanges();
return model;
}
}
Our question:
Does anyone know a way to represent different query and mutation types in a single class? Or even to create one class for the Query type objects and another for the Mutation types, with the possibility of having a generic implementation behind the calls?
By the way, I found a class attribute called "GraphQLName" that permits us to change the operation name.
Is there any way to dynamically change this value, so that all these queries and mutations can be represented in a single class?
[GraphQLName("dynamic-name")]
public static T <DYNAME_METHOD_NAME>([Service]MyContext context, ClaimsPrincipal claims, [Argument] T model)
{
// Do some stuff
context.Set<T>().Add(model);
context.SaveChanges();
return model;
}
You could use the fluent API and write extension methods on top of it.
public class QueryType : ObjectType
{
protected override void Configure(IObjectTypeDescriptor descriptor)
{
descriptor.AddEntity<Foo>("foo");
descriptor.AddEntity<Bar>("bar");
}
}
public static class QueryExtensionMethods
{
public static IObjectTypeDescriptor AddEntity<T>(
    this IObjectTypeDescriptor descriptor,
    string name) where T : class, IHasId
{
    descriptor.Field($"{name}ById")
        .Argument("id", a => a.ID(typeof(T).Name))
        .Resolve(ctx =>
        {
            var dbContext = ctx.Service<MyDbContext>();
            var id = ctx.ArgumentValue<Guid>("id");
            return dbContext.Set<T>().FirstOrDefault(x => x.Id == id);
        })
        .Type<ObjectType<T>>();

    return descriptor;
}
}
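The query type is then registered once at startup; a minimal sketch, assuming the standard Hot Chocolate server setup:
builder.Services
    .AddGraphQLServer()
    .AddQueryType<QueryType>();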
But to also mention this here: this is not really GraphQL best practice. In GraphQL you should think beyond CRUD and implement mutations and queries in the way you actually want them to be consumed, rather than exposing a generic solution.
You would rather have mutations like changeAddress or changeUserName than a combined mutation updateUser.
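For illustration, a purpose-built mutation could look like the sketch below; the User entity, its UserName property, and the error handling are assumptions:
public class Mutation
{
    public async Task<User> ChangeUserName(
        [Service] MyDbContext context,
        Guid userId,
        string newUserName)
    {
        var user = await context.Set<User>().FindAsync(userId)
            ?? throw new GraphQLException("User not found.");
        user.UserName = newUserName;
        await context.SaveChangesAsync();
        return user;
    }
}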
I am trying to create an app that can handle multiple database types.
So far I have created my interface, like so. It's very simple, and all the database layer will do is load and save a profile:
public interface IDataManager
{
Profile LoadProfile(int profileId);
bool SaveProfile(Profile profile);
bool CreateDatabase();
bool OpenConnection();
bool CloseConnection();
}
And let's just say the Profile class for the above looks like this:
public class Profile
{
public int Id { get; set; }
public string Name { get; set; }
}
My question is: what is the best way to make all the implementations of IDataManager return the same object types?
Here is an example of what I mean (this is not quality code, it's just an example).
I create an SQLite class that implements IDataManager and then create an instance.
public IDataManager DataManager = new SQLiteDataManager();
Later in the code I want to load a Profile, so I call LoadProfile:
Profile profile = DataManager.LoadProfile(1);
My SQLite implementation of the LoadProfile method looks like this
public Profile LoadProfile(int profileId)
{
// Copied and pasted from a WinRT app
using (var conn = new global::SQLite.Net.SQLiteConnection(new global::SQLite.Net.Platform.WinRT.SQLitePlatformWinRT(), _sqlpath))
{
var tmp = conn.Table<PROFILE>().First(x => x.ID == profileId);
}
// do something and return
}
Now as you can see, the return type from the query (tmp is of type PROFILE) is not the same as the LoadProfile method's return type (Profile).
Do I have to convert tmp to Profile? That would mean doing a conversion in every method with a return type, for every database implementation. Or is there a better way of doing this?
Hope this makes sense.
If you use Entity Framework, you can just use the provider for the database type you will be using, but keep all the API and models the same.
A list of Entity Framework providers for various databases
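If you stay with hand-rolled implementations, the usual answer to the question above is yes: each IDataManager implementation converts its own storage type to the shared model, so callers only ever see Profile. A sketch, assuming the SQLite row type PROFILE exposes matching ID and NAME columns:
public Profile LoadProfile(int profileId)
{
    using (var conn = new global::SQLite.Net.SQLiteConnection(
        new global::SQLite.Net.Platform.WinRT.SQLitePlatformWinRT(), _sqlpath))
    {
        var row = conn.Table<PROFILE>().First(x => x.ID == profileId);
        // The conversion lives inside the implementation, behind the interface.
        return new Profile { Id = row.ID, Name = row.NAME };
    }
}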
I am at a loss as to how to use the new IValueResolver interface in the new version of AutoMapper. Perhaps I used them improperly in the previous versions of AutoMapper...
I have a lot of model classes, some of them are generated from several databases on several database servers, using sqlmetal.
Some of these classes have a string property, PublicationCode, which identifies which publication the subscription, or offer, or invoice, or whatever it is, belongs to.
The publication can exist in either of two systems (the old and the new system), hence I have a bool property on the destination model classes which tells whether the publication is in the old or the new system.
Using the old version (<5?) of AutoMapper, I used a ValueResolver<string, bool> which took the PublicationCode as an input parameter, and returned a bool indicating the location of the publication (old or new system).
With the new version (5+?) of AutoMapper, this seems to no longer be possible. The new IValueResolver requires a unique implementation of each and every combination of source and destination models that I have, where src.PublicationCode needs to be resolved into a dst.IsInNewSystem.
Am I just trying to use the value resolvers in the wrong way? Is there a better way? The main reason I would like to use a resolver is that I would prefer to have services injected into the constructor, and not having to use DependencyResolver and the like in the code (I'm using Autofac).
Currently, I use it in the following way:
// Class from Linq-to-SQL, non-related properties removed.
public class FindCustomerServiceSellOffers {
public string PublicationCode { get; set; }
}
This is one of several data model classes I have which contain a PublicationCode property. This particular class is mapped to this view model:
public class SalesPitchViewModel {
public bool IsInNewSystem { get; set; }
}
The mapping definition for these two classes is (where expression is an IProfileExpression), non-related mappings removed:
expression.CreateMap<FindCustomerServiceSellOffers, SalesPitchViewModel>()
.ForMember(d => d.IsInNewSystem, o => o.ResolveUsing<PublicationSystemResolver>().FromMember(s => s.PublicationCode));
And the resolver:
public class PublicationSystemResolver : ValueResolver<string, bool>
{
private readonly PublicationService _publicationService;
public PublicationSystemResolver(PublicationService publicationService)
{
_publicationService = publicationService;
}
protected override bool ResolveCore(string publicationCode)
{
return _publicationService.IsInNewSystem(publicationCode);
}
}
And the use of the mapper:
var result = context.FindCustomerServiceSellOffers.Where(o => someCriteria).Select(_mapper.Map<SalesPitchViewModel>).ToList();
You can create a more general value resolver by implementing IMemberValueResolver<object, object, string, bool> and using that in your mapping configuration. You can provide a source property resolution function as before:
public class PublicationSystemResolver : IMemberValueResolver<object, object, string, bool>
{
private readonly PublicationService _publicationService;
public PublicationSystemResolver(PublicationService publicationService)
{
this._publicationService = publicationService;
}
public bool Resolve(object source, object destination, string sourceMember, bool destMember, ResolutionContext context)
{
return _publicationService.IsInNewSystem(sourceMember);
}
}
cfg.CreateMap<FindCustomerServiceSellOffers, SalesPitchViewModel>()
.ForMember(dest => dest.IsInNewSystem,
src => src.ResolveUsing<PublicationSystemResolver, string>(s => s.PublicationCode));
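Since constructor injection was the goal: AutoMapper 5 can be handed a service-constructor delegate so that resolvers come out of the container. A wiring sketch for Autofac, where MyMappingProfile is a hypothetical profile holding the map above:
var builder = new ContainerBuilder();
builder.RegisterType<PublicationService>().AsSelf();
builder.RegisterType<PublicationSystemResolver>().AsSelf();

var config = new MapperConfiguration(cfg => cfg.AddProfile<MyMappingProfile>());

// Let AutoMapper ask the container for resolver instances.
builder.Register(c =>
{
    var context = c.Resolve<IComponentContext>();
    return config.CreateMapper(context.Resolve);
}).As<IMapper>().SingleInstance();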
From my side, I want to add a few small things; compare these registrations:
builder.Services.AddAutoMapper(typeof(TransactionProfile).Assembly); // working
builder.Services.AddAutoMapper(x => x.AddProfile<TransactionProfile>()); // not working
builder.Services.AddAutoMapper(x => x.AddMaps("Outlay.Infrastructure")); // not working
I'm prototyping an ASP.NET Web API that needs to talk to several databases which are almost identical. Each of our customers have their own instance of our database structure, but some are specialized to integrate with other systems they have. So for example in one database the Client table might have the column AbcID to reference a table in another system, but other databases won't have this column. Other than that the two tables are identical in name and columns. The columns can also have different lengths, varchar(50) instead of varchar(40) for example. And in some databases there can be one extra table. I have focused on solving the different columns problem first.
I was hoping to use an ORM to handle the data access layer of the API, and right now I'm experimenting with Entity framework. I already solved how to dynamically connect to the different databases from an API-call, but right now they have to be completely identical in structure.
I have tried to set up two .edmx models with a Database First approach, but this causes conflicting class names between the models. So instead I tried Code First and came up with this (which isn't working).
DbContext extension:
In the constructor I check which database is being accessed and if it is one of the special ones I flag it for the model configuration.
public partial class MK_DatabaseEntities : DbContext
{
private string _dbType = "dbTypeDefault";
public DbSet<Client> Client { get; set; }
public DbSet<Resource> Resource { get; set; }
public MK_DatabaseEntities(string _companycode)
: base(GetConnectionString(_companycode))
{
if(_companycode == "Foo")
this._dbType = "dbType1";
}
// Add model configurations
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
modelBuilder.Configurations
.Add(new ClientConfiguration(_dbType))
.Add(new ResourceConfiguration());
}
public static string GetConnectionString(string _companycode)
{
string _dbName = "MK_" + _companycode;
// Start out by creating the SQL Server connection string
SqlConnectionStringBuilder sqlBuilder = new SqlConnectionStringBuilder();
sqlBuilder.DataSource = Properties.Settings.Default.ServerName;
sqlBuilder.UserID = Properties.Settings.Default.ServerUserName;
sqlBuilder.Password = Properties.Settings.Default.ServerPassword;
// The name of the database on the server
sqlBuilder.InitialCatalog = _dbName;
sqlBuilder.IntegratedSecurity = false;
sqlBuilder.ApplicationName = "EntityFramework";
sqlBuilder.MultipleActiveResultSets = true;
string sbstr = sqlBuilder.ToString();
return sbstr;
}
}
ClientConfiguration:
In the configuration for Client I check the flag before mapping properties to database columns. This however does not seem to work.
public class ClientConfiguration : EntityTypeConfiguration<Client>
{
public ClientConfiguration(string _dbType)
{
HasKey(k => k.Id);
Property(p => p.Id)
.HasColumnName("ID")
.HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
if (_dbType == "dbType1")
{
Property(p => p.AbcId).HasColumnName("AbcID");
}
Property(p => p.FirstName).HasColumnName("FirstName");
Property(p => p.LastName).HasColumnName("LastName");
}
}
Client class:
This is what my Client class looks like; nothing weird here.
public class Client : IIdentifiable
{
public int Id { get; set; }
public string AbcId { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
}
public interface IIdentifiable
{
int Id { get; }
}
Back-up solution is to use raw SQL queries to deal with the offending tables and ORM for the rest, but it would be awesome if there is some way to do this that I have not thought of. Right now I'm trying Entity framework, but I am not opposed to trying some other ORM if that one can do it better.
Code First supports this scenario:
1) Common entities for both models:
public class Table1
{
public int Id { get; set; }
public string Name { get; set; }
}
2) Base version of table 2
public class Table2A
{
public int Id { get; set; }
public int Name2 { get; set; }
public Table1 Table1 { get; set; }
}
3) "Extended" version of table 2, inherits version A, and adds an extra column
public class Table2B : Table2A
{
public int Fk { get; set; }
}
4) Base context, including only the common entities. Note that there is a constructor which accepts a connection string, so there is no parameterless constructor. This forces inheriting contexts to provide their particular connection string.
public class CommonDbContext : DbContext
{
public CommonDbContext(string connectionString)
:base(connectionString)
{
}
public IDbSet<Table1> Tables1 { get; set; }
}
5) The context A inherits the common context, adds Table2A, and ignores Table2B
public class DbContextA : CommonDbContext
{
public DbContextA() : base("SimilarA") { } // connection for A
public IDbSet<Table2A> Tables2A { get; set; }
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
base.OnModelCreating(modelBuilder);
modelBuilder.Ignore<Table2B>(); // Ignore Table B
}
}
6) The context B inherits the common context and includes Table2B
public class DbContextB: CommonDbContext
{
public DbContextB() :base("SimilarB") { } // Connection for B
public IDbSet<Table2B> Tables2B { get; set; }
}
With this setup, you can instantiate either DbContextA or DbContextB. One advantage is that both inherit CommonDbContext, so you can use a variable of this base class to access the common entities, no matter whether the concrete implementation is version A or B. You only need to cast to the concrete type to access the specific entities of A or B (Table2A or Table2B in this sample).
You can use a factory, or DI or whatever to get the required context depending on the DB. For example this could be your factory implementation:
public class CommonDbContextFactory
{
public static CommonDbContext GetDbContext(string contextVersion)
{
switch (contextVersion)
{
case "A":
return new DbContextA();
case "B":
return new DbContextB();
default:
throw new ArgumentException("Missing DbContext", "contextVersion");
}
}
}
NOTE: this is working sample code. You can of course adapt it to your particular case; I wanted to keep it simple to show how it works. For your case you'll probably need to change the factory implementation, expose the connection string in the A and B context constructors, and provide it in the factory method.
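For example, consuming code can stay typed against the base class no matter which version the factory returns:
using (CommonDbContext db = CommonDbContextFactory.GetDbContext("B"))
{
    // Common entities are available for both versions.
    var names = db.Tables1.Select(t => t.Name).ToList();
}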
Handling the different classes of your entities
The easiest way to handle the different entities of each DbContext is to use polymorphism and/or generics.
If you use polymorphism, you need to implement methods which use the type of the base class (as parameter and as return type). These parameters and variables will hold entities of either the base or the derived class (Table2A or Table2B). In this case, each context will receive an entity of the right type, and it will work directly without trouble.
The problem arises when your app is multi-layered, uses services, or is a web app. In those cases, when you use the base class, the polymorphic behavior can be lost and you'll need to handle entities of the base class. (For example, if you let the user edit an entity of the derived class in a web app form, the form can only take care of the properties of the base class, and when it's posted back, the properties of the derived class will be lost.) In this case, you need to handle it intelligently (see note below):
For reading purposes, if you have a Table2B, you have a direct cast to Table2A. You can implement functionality for Table2A and use it directly, i.e. you can return collections or individual values of the base class (in many cases implicit casting will be enough). No more worries.
For inserting/updating, you have to take extra steps, but it's not too difficult. You need to implement methods that receive/return Table2A parameters in your contexts, or in another layer, depending on your architecture. For example, you can make the base context abstract and define abstract methods for this (see example below). Then you make the right implementation for each particular case.
If you receive a Table2A but need to insert it into Table2B, simply map entity A into entity B with AutoMapper or ValueInjecter and fill the remaining properties with default values (beware of AutoMapper and EF dynamic proxies: it won't work).
If you receive a Table2A and need to update a Table2B, simply read the existing entity from the DB and repeat the mapping procedure (ValueInjecter will be less troublesome than AutoMapper for this case too).
This is a very simple example of what can be done, but you need to adapt it to your particular case:
Inside the CommonDbContext class (which must itself be declared abstract), declare abstract methods for the base type, like this:
public abstract Table2A GetTable2AById(int id);
public abstract void InsertTable2A(Table2A table);
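Each concrete context then supplies its own implementation (GetTable2AById would be overridden the same way). A sketch of the B version, where the default for the extra column is an assumption:
// Inside DbContextB from above:
public override void InsertTable2A(Table2A table)
{
    var entity = new Table2B
    {
        Name2 = table.Name2,
        Table1 = table.Table1,
        Fk = 0 // default value for the extra column
    };
    Tables2B.Add(entity);
    SaveChanges();
}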
You can also use generic interfaces/methods instead of the abstract class with abstract methods, like this:
public T GetTable2AById<T>(int id)
{
// The implementation
}
In this case you should add the necessary constraints to the T type, like where T : Table2A, or the ones you need (class, new()).
NOTE: It's not exact to say that polymorphism is lost in these cases, because you can still build polymorphic web services with WCF or Web API, adapt your UI to the real class of your entity (with templates for each case), and so on. That depends on what you need or want to achieve.
Been there, done that.
In all seriousness: dump EF in this specific case; it will bring a lot of pain and suffering for no benefit.
What you'll eventually end up doing (putting my Fortuneteller Hat on) is you'll rip out all the EF-based code, create an abstract object model and then write a series of backends that will map all the various database structures back and forth to said clean abstract object model. And you'll be either using raw SQL or something lightweight like Dapper or BLToolkit.
I have defined an enum in my Entity Framework 5 model, which I'm using to define the type of a field on a table, e.g.
public enum PrivacyLevel : byte {
Public = 1,
FriendsOnly = 2,
Private = 3,
}
And I have a table Publication that has a tinyint field PrivacyLevel, which I've mapped in the EF model to use the PrivacyLevel type defined above, using the method described here.
But I also want to be able to display a string description for each value of the enum. This I've done in the past for enums by decorating them with a Description attribute, e.g.
public enum PrivacyLevel : byte {
[Description("Visible to everyone")]
Public = 1,
[Description("Only friends can view")]
FriendsOnly = 2,
[Description("Only I can view")]
Private = 3,
}
I've got some code that converts enums to strings by checking whether they have a Description attribute, and that works well. But here, because I had to define the enum in my model, the underlying code is auto-generated and I don't have anywhere stable to put the attributes.
Any ideas for a workaround?
I'm not sure if this is exactly what you are after, but from what I understand: since you have a concrete Database First approach, you can abstract much of your entity model into view models using a DTO approach with AutoMapper.
Using AutoMapper profiles, you can quickly set up profiles for all sorts of environments and scenarios for flexibility and adaptability.
The enum causing the problem, its view model counterpart, and the page layout were shown as screenshots in the original post. Here is a simple mapping for the Account entity to a view model for Account:
public class AccountProfile : Profile
{
protected override void Configure()
{
// Map from Entity object to a View Model we need or use
// AutoMapper will automatically map any names that match it's conventions, ie properties from Entity to ViewModel have exact same name properties
Mapper.CreateMap<Account, AccountViewModel>()
.ForMember(model => model.CurrentPrivacy, opt => opt.MapFrom(account => (PrivacyLevelViewModel)account.PrivacyLevel));
Mapper.CreateMap<Account, EditAccountViewModel>()
.ForMember(model => model.SelectedPrivacyLevel, opt => opt.MapFrom(account => (PrivacyLevelViewModel) account.PrivacyLevel));
// From our View Model Changes back to our entity
Mapper.CreateMap<EditAccountViewModel, Account>()
.ForMember(entity => entity.Id, opt => opt.Ignore()) // We dont change id's
.ForMember(entity => entity.PrivacyLevel, opt => opt.MapFrom(viewModel => (PrivacyLevel)viewModel.NewSelectedPrivacyLevel));
}
}
Note that this does not only apply to MVC; it can be used in WPF or other applications not tied to the web. MVC just makes for a good way of explaining, which is why I used it for this example.
When I first get an HTTP GET request for my profile, I grab the entity from the database and map anything I actually need to the view:
public ActionResult Index()
{
// Retrieve account from db
var account = new Account() { Id = 1, Name = "Patrick", AboutMe = "I'm just another dude", ProfilePictureUrl = "", PrivacyLevel = PrivacyLevel.Private, Friends = new Collection<Account>() };
// ViewModel abstracts the Entities and ensures behavour that only matters to the UI
var accountViewModel = Mapper.Map<AccountViewModel>(account);
return View(accountViewModel); // strongly typed view model
}
So my profile index view can use my enum view model (the rendered output was shown as a screenshot in the original post).
Now, when I want to change my privacy setting, I can create a new EditAccountViewModel which allows me to submit a new value from a dropdown:
public class EditAccountViewModel
{
public int Id { get; set; }
public string Name { get; set; }
public string AboutMe { get; set; }
public int NewSelectedPrivacyLevel { get; set; }
public PrivacyLevelViewModel SelectedPrivacyLevel { get; set; }
public SelectList PrivacyLevels
{
get
{
var items = Enum.GetValues(typeof (PrivacyLevelViewModel))
.Cast<PrivacyLevelViewModel>()
.Select(viewModel => new PrivacyLevelSelectItemViewModel()
{
Text = viewModel.DescriptionAttr(),
Value = (int)viewModel,
});
//SelectPrivacyLevel was mapped by AutoMapper in the profile from
//original entity value to this viewmodel
return new SelectList(items, "Value", "Text", (int) SelectedPrivacyLevel);
}
}
}
Now, once I post the changed value back, the interesting part is how I modify the "real" entity from the db with the updated privacy setting. On submitting the form back to my edit action, I get the original db entity and then merge the changes if the view model state is valid.
AutoMapper allows you to configure how view models are mapped back to entities: whether some properties should change, from integer entities to string values for view models, or maybe you want an enum to really be a string in the view and only an enum for the db. AutoMapper lets you configure all these scenarios, and through convention you don't need to configure every single property if your view models use the same property names.
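The post-back action itself was not shown in the original post; here is a sketch of what it implies, where _repository and its methods are hypothetical stand-ins for your data access:
[HttpPost]
public ActionResult Edit(EditAccountViewModel viewModel)
{
    if (!ModelState.IsValid)
        return View(viewModel);

    var account = _repository.GetById(viewModel.Id); // the "real" db entity
    Mapper.Map(viewModel, account);                  // merge changes per the profile above
    _repository.Save(account);
    return RedirectToAction("Index");
}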
Lastly, before you can use these profiles, you must load them at the application entry point, such as Global.asax or Main. AutoMapper only needs to be configured once to load all the profiles defined in the application. With some reflection you can load every Profile in your assembly with this code:
public class AutoMapperConfig
{
public static void RegisterConfig()
{
Mapper.Initialize(config => GetConfiguration(Mapper.Configuration));
}
private static void GetConfiguration(IConfiguration configuration)
{
configuration.AllowNullDestinationValues = true;
configuration.AllowNullCollections = true;
IEnumerable<Type> profiles = Assembly.GetExecutingAssembly().GetTypes().Where(type => typeof(Profile).IsAssignableFrom(type));
foreach (var profile in profiles)
{
configuration.AddProfile(Activator.CreateInstance(profile) as Profile);
}
}
}
I call the configuration in my global.asax:
protected void Application_Start()
{
AreaRegistration.RegisterAllAreas();
WebApiConfig.Register(GlobalConfiguration.Configuration);
FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
RouteConfig.RegisterRoutes(RouteTable.Routes);
AutoMapperConfig.RegisterConfig(); // AutoMapperConfig.cs
}
More information about how to use AutoMapper and how it can benefit you can be found here: AutoMapper GitHub
In the end I came up with a much simpler solution: I just used an extension method to get the description of the enum. That also made localization a lot easier, since I could use a resource string.
// Extension methods must live in a static class.
public static class PrivacyLevelExtensions
{
    public static string Description(this PrivacyLevel level)
    {
        switch (level)
        {
            case PrivacyLevel.Public:
                return Resources.PrivacyPublic;
            case PrivacyLevel.FriendsOnly:
                return Resources.PrivacyFriendsOnly;
            case PrivacyLevel.Private:
                return Resources.PrivacyPrivate;
            default:
                throw new ArgumentOutOfRangeException("level");
        }
    }
}
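Usage is then a plain method call on the enum value:
var label = PrivacyLevel.FriendsOnly.Description(); // returns the localized resource string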
Another idea:
Use a byte property PrivacyLevelByte in your EF classes, and create an additional partial class for that particular model where you define the property:
// Assuming the EF-generated entity is named Publication (matching your table).
public partial class Publication
{
    public PrivacyLevel PrivacyLevelEnum
    {
        get { return (PrivacyLevel)PrivacyLevelByte; }
        set { PrivacyLevelByte = (byte)value; }
    }
}
Then define the PrivacyLevel enum in your own code rather than in the EF designer.
That allows you to apply any attributes you like, while still giving you enum properties on your EF models.