I'm currently creating objects for an application of mine, and this question came to mind. I know that using DBMLs instead of manually created classes (see the class below) can speed up application development, but I'm confused about what the other advantages and disadvantages of DBMLs over manually created classes like mine would be. Thanks to everyone who helps. :)
[Serializable]
public class Building
{
    public Building()
    {
        LastEditDate = DateTime.Now.Date;
        LastEditUser = GlobalData.CurrentUser.FirstName + " " + GlobalData.CurrentUser.LastName;
    }

    public int BuildingID { get; set; }
    public string BuildingName { get; set; }
    public bool IsActive { get; set; }
    public DateTime LastEditDate { get; set; }
    public string LastEditUser { get; set; }

    public static bool CheckIfBuildingNameExists(string buildingName, int buildingID = 0)
    {
        return BuildingsDA.CheckIfBuildingNameExists(buildingName, buildingID);
    }

    public static Building CreateTwin(Building building)
    {
        return CloningUtility.DeepCloner.CreateDeepClone(building);
    }

    public static List<Building> GetBuildingList()
    {
        return BuildingsDA.GetBuildingList();
    }

    public static List<Building> GetBuildingList(bool flag)
    {
        return BuildingsDA.GetBuildingList(flag).ToList();
    }

    public static Building SelectBuildingRecord(int buildingId)
    {
        return BuildingsDA.SelectBuilding(buildingId);
    }

    public static void InsertBuildingRecord(Building building)
    {
        BuildingsDA.InsertBuilding(building);
    }

    public static void UpdateBuildingRecord(Building building)
    {
        BuildingsDA.UpdateBuilding(building);
    }

    public static void DeleteBuildingRecord(int buildingId)
    {
        BuildingsDA.DeleteBuilding(buildingId);
    }
}
and my DAL is like this:
internal static class BuildingsDA
{
    internal static Building SelectBuilding(int buildingId)
    {
        SqlCommand commBuildingSelector = ConnectionManager.MainConnection.CreateCommand();
        commBuildingSelector.CommandType = CommandType.StoredProcedure;
        commBuildingSelector.CommandText = "Rooms.asp_RMS_Building_Select";
        commBuildingSelector.Parameters.AddWithValue("BuildingID", buildingId);

        // using guarantees the reader is closed even if an exception is thrown
        using (SqlDataReader dreadBuilding = commBuildingSelector.ExecuteReader())
        {
            if (!dreadBuilding.Read())
            {
                return null;
            }

            Building building = new Building();
            building.BuildingID = int.Parse(dreadBuilding.GetValue(0).ToString());
            building.BuildingName = dreadBuilding.GetValue(1).ToString();
            building.IsActive = dreadBuilding.GetValue(2).ToString() == "Active";
            building.LastEditDate = dreadBuilding.GetValue(3).ToString() != string.Empty
                ? DateTime.Parse(dreadBuilding.GetValue(3).ToString())
                : DateTime.MinValue;
            building.LastEditUser = dreadBuilding.GetValue(4).ToString();
            return building;
        }
    }
    ....................
}
I would also like to know which of the two approaches is faster. Thanks. :)
DBML
Pros:
You can get your job done fast!
Cons:
You can't shape your entity the way you want. For example, if you only need 5 columns from a table that has 10, you still get all 10, or at least their schema. If you don't care much about data volume, this may not matter.
Your client side will have a dependency on the DAL (Data Access Layer): if you change a property name or type in the DAL, you need to change it in both the BLL (Business Logic Layer) and the client (Presentation Layer).
If you create classes manually, it may take a little more time to code, but you get more flexibility: your client code will not depend on your DAL, and changes to the DAL will not break client code.
Creating your model classes manually, you can put additional attributes on properties (which cannot be done with DBML) and apply your own data validation (as far as I remember, this is possible with DBML via partial methods).
With many tables and associations, a DBML diagram can become hard to read.
The disadvantage of creating model classes manually is that you have to do all the DBML work yourself (attributes and a lot of code).
If you want to create model classes manually, take a look at Entity Framework Code First or Fluent NHibernate; both make it easy to build a model, as sketched below.
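For a concrete taste of the code-first route, here is a minimal sketch of the Building model from the question as an Entity Framework Code First class; the validation attributes and the context name are illustrative assumptions, not part of the original code:

using System;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity; // EF 6 Code First

public class Building
{
    [Key]
    public int BuildingID { get; set; }

    // Attributes like these cannot be attached directly to DBML-generated members.
    [Required, StringLength(100)]
    public string BuildingName { get; set; }

    public bool IsActive { get; set; }
    public DateTime LastEditDate { get; set; }
    public string LastEditUser { get; set; }
}

public class RmsContext : DbContext // hypothetical context name
{
    public DbSet<Building> Buildings { get; set; }
}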
Related
A little introduction: we have a complex entity and overgrown business logic around it, with various fields that we can change and fields that are updated from external project management software (PMS) like MS Project and others.
The problem is that it's hard to centralize the business logic for changing each field, because changing one field can affect others, and some fields are calculated but should only be calculated in certain business scenarios. Different synchronization processes also use different business logic that depends on the external data of the specific PMS.
At the moment we have the following ways of changing fields in our solution:
Constructor with parameters and private parameterless constructor
public class SomeEntity
{
    public string SomeField;

    private SomeEntity()
    {
    }

    public SomeEntity(string someField)
    {
        SomeField = someField;
    }
}
Private set with public method to change field value
public class SomeEntity
{
    public string SomeField { get; private set; }

    public void SetSomeField(string newValue)
    {
        // there may be some checks
        if (string.IsNullOrEmpty(newValue))
        {
            throw new Exception();
        }
        SomeField = newValue;
    }
}
Event methods that perform operations and set some fields
public class SomeEntity
{
    public string SomeField { get; private set; }
    public string SomePublishedField { get; private set; }

    public void PublishEntity(string publishValue)
    {
        SomeField = publishValue;
        SomePublishedField = $"{publishValue} {DateTime.Now}"; // DateTime.Now is a property, not a method
    }
}
Public setters
public class SomeEntity
{
    public string SomeField { get; set; }
}
Services that implement business logic:
public class SomeService : ISomeService
{
    private readonly DbContext _dbContext;
    private readonly ISomeApprovalsService _approvalsService;

    public SomeService(DbContext dbContext, ISomeApprovalsService approvalsService)
    {
        _dbContext = dbContext;
        _approvalsService = approvalsService;
    }

    public async Task UpdateFromMspAsync(MspSomeEntity mspEntity,
        CancellationToken cancellationToken = default)
    {
        var entity = await _dbContext.SomeEntities
            .Include(e => e.Process)
            .SingleAsync(e => e.MspId == mspEntity.Id, cancellationToken);

        switch (mspEntity.Status)
        {
            case MspStatusEnum.Cancelled:
                entity.Process.State = ProcessStateEnum.Rejected;
                entity.Status = EntityStatusEnum.Stopped;
                break;
            case MspStatusEnum.Accepted:
                _approvalsService.SendApprovals(entity.Process);
                entity.Status = EntityStatusEnum.Finished;
                break;
        }

        await _dbContext.SaveChangesAsync(cancellationToken);
    }
}
State machine inside entity
public class SomeEntity
{
    private StateMachine<TriggerEnum, StateEnum> _stateMachine;

    public SomeEntity()
    {
        ConfigureStateMachine();
    }

    public string SomeField1 { get; set; }
    public string SomeField2 { get; set; }
    public string SomeField3 { get; set; }

    private void ConfigureStateMachine()
    {
        _stateMachine.Configure(StateEnum.Processing)
            .OnEntry(s => SomeField1 = null)
            .Permit(TriggerEnum.Approve, StateEnum.Approved);
        _stateMachine.Configure(StateEnum.Approved)
            .OnEntry(s => SomeField1 = SomeField2 + SomeField3)
            .Permit(TriggerEnum.Publish, StateEnum.Finished)
            .Permit(TriggerEnum.Cancel, StateEnum.Canceled);
        // etc
    }

    public void Trigger(TriggerEnum trigger) => _stateMachine.Fire(trigger);
}
State machine as a service, to prevent business logic leaking inside the entity:
var machine = _services.GetService<IStateMachine<SomeEntity, TriggerEnum>>();
var entity = await _dbContext.SomeEntities.FirstAsync();
IAttachedStateMachine<TriggerEnum> attachedMachine = machine.AttachToEntity(entity);
attachedMachine.Trigger(TriggerEnum.Publish);
Having so many ways of changing values is architecturally wrong, and we want to refactor it; but to change the approach, a best practice must be chosen.
Please share your experience of resolving similar situations.
Update: I found a DDD approach called the "aggregate root". It looks good, but only on paper (in theory); it works well with simple examples like "user, customer, shopping cart, order". In practice, for every private setter you end up creating a setter method (as in #2 of my examples), plus different methods for every external system you work with. Not even talking about business logic inside a database entity, which violates SOLID's single responsibility principle.
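For readers unfamiliar with the term, here is a minimal sketch of the aggregate-root style the update refers to (the class, fields, and method are invented for illustration): every state change goes through an intention-revealing method on the root, one per business scenario, instead of per-field setters.

using System;

public class ProjectTask // hypothetical aggregate root
{
    public string Title { get; private set; }
    public ProjectTaskStatus Status { get; private set; }
    public DateTime? PublishedAt { get; private set; }

    // One method per business scenario, so the invariants live in one place.
    public void PublishFromMsp(string title)
    {
        if (string.IsNullOrEmpty(title))
            throw new ArgumentException("Title is required.", nameof(title));
        Title = title;
        Status = ProjectTaskStatus.Published;
        PublishedAt = DateTime.UtcNow;
    }
}

public enum ProjectTaskStatus { Draft, Published, Cancelled }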
I have always wondered what the best practice is for separating code in a class-based language. As an example, I made a project that handles API interaction with my web API. I want to know which is the right option to go with, or whether there's a better suggestion.
Example 1
Project Files
  Api.cs
  DataTypes
    Anime.cs
    Episode.cs
Api.cs
public class Api
{
    public static async Task<List<Anime>> GetAnimesByKeyword(string keyword)
    {
        // Execute api request to server
        return result;
    }

    public static async Task<List<Episode>> GetEpisodesByAnime(Anime anime)
    {
        // Execute api request to server
        return result;
    }
}
DataTypes -> Anime.cs
public class Anime
{
    public string Name { get; set; }
    public string Summary { get; set; }
    // Other properties
}
DataTypes -> Episode.cs
public class Episode
{
    public string Name { get; set; }
    public Date ReleaseDate { get; set; }
    // Other properties
}
Or example 2
Project Files
  Api.cs
  DataTypes
    Anime.cs
    Episode.cs
Api.cs
public class Api
{
    // Nothing for now
}
DataTypes -> Anime.cs
public class Anime
{
    public static async Task<Anime> GetById(int id)
    {
        return result;
    }

    public string Name { get; set; }
    public string Summary { get; set; }
    // Other properties
}
DataTypes -> Episode.cs
public class Episode
{
    public static async Task<List<Episode>> GetEpisodesByAnime(Anime anime)
    {
        return result;
    }

    public string Name { get; set; }
    public Date ReleaseDate { get; set; }
    // Other properties
}
Which of these two is the preferred way of structuring the code, or is there a better way to do this? It might seem insignificant, but it does matter to me.
Thanks for helping me out!
In general, follow the Single Responsibility Principle. In practice this means you have simple objects that are data-only and more complex service classes that do work like loading from an external service or database.
Your second example mixes concerns AND it binds these two classes together tightly (Episode now depends on Anime). You can also see how it's hard to decide which class to put that loading method on: should it be anime.GetEpisodes() or Episode.GetEpisodesByAnime()? As the object graph gets more complex this escalates.
Later you may well want a different data transfer object for an entity. Having simple data-only objects makes it easy to add these and to use Automapper to convert.
But (on your first example) don't use static methods, because they make your service class harder to test. One service may depend on another (use dependency injection), and to test each in isolation you don't want static methods; an instance-based sketch follows.
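To illustrate that last point, here is a hedged sketch of the Api class reworked without statics; the interface name, the HttpClient usage, and the JSON handling are assumptions for illustration, not the poster's actual request code:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public interface IAnimeApi
{
    Task<List<Anime>> GetAnimesByKeywordAsync(string keyword);
}

public class AnimeApi : IAnimeApi
{
    private readonly HttpClient _http;

    // HttpClient is injected, so a test can substitute a fake handler.
    // Assumes a BaseAddress is configured on the client.
    public AnimeApi(HttpClient http)
    {
        _http = http;
    }

    public async Task<List<Anime>> GetAnimesByKeywordAsync(string keyword)
    {
        var json = await _http.GetStringAsync("animes?keyword=" + Uri.EscapeDataString(keyword));
        return JsonSerializer.Deserialize<List<Anime>>(json);
    }
}

Consumers then depend on IAnimeApi and receive an instance through their constructor, which keeps each class testable in isolation.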
I've inherited an MVC project that uses Telerik Open Access to handle data instead of something I'm more familiar with, like Entity Framework. I'm trying to understand how to work with this data access method, but right now I just need to find out how to add a table. I've limited my code examples to one table, but in reality there are dozens of them.
So I see that the OpenAccessContext.cs class has a database connection string, but it also has an IQueryable property built from the class tblMaterial. The tblMaterial class is defined in tblMaterial.cs. I don't understand how this class is connected to the SQL database version of tblMaterial (so feel free to educate me on that).
I have a table called tblContacts in the SQL database. What do I need to do to connect it to my project? There's no "update from database" option when I right click any object in the solution (because they're all just classes). Will I need to create a new class manually called tblContacts.cs? If so, how do I connect it to the database version of tblContacts? Am I going to need to manually change multiple classes to add the table (OpenAccessContext, MetadataSources, Repository, etc.)?
I tried to keep this as one simple question (how do I add a table) so I don't get dinged, but any light you can shine on the Telerik Open Access would be helpful. (Please don't ding me for asking that!) I checked out the Telerik documentation here: http://docs.telerik.com/data-access/developers-guide/code-only-mapping/getting-started/fluent-mapping-getting-started-fluent-mapping-api , but it's related to setting up a new open access solution. I need to know how to modify one (without ruining the already working code). Thank you in advance for your help!
Here's the solution as seen in Visual Studio:
Open Access
Properties
References
OpenAccessContext.cs
OpenAccessMetadataSources.cs
Repository.cs
tblMaterial.cs
Here's the code:
OpenAccessContext.cs
namespace OpenAccess
{
    public partial class OpenAccessContext : Telerik.OpenAccess.OpenAccessContext // base class fully qualified; see the namespace warning below
    {
        static MetadataContainer metadataContainer = new OpenAccessMetadataSource().GetModel();
        static BackendConfiguration backendConfiguration = new BackendConfiguration()
        {
            Backend = "mssql"
        };

        private static string DbConnection = ConfigurationManager.ConnectionStrings["ConnString"].ConnectionString;
        private static int entity = ConfigurationManager.AppSettings["Entity"] == "" ? 0 : int.Parse(ConfigurationManager.AppSettings["Entity"]);

        public OpenAccessContext() : base(DbConnection, backendConfiguration, metadataContainer)
        {
        }

        public IQueryable<tblMaterial> tblMaterials
        {
            get
            {
                return this.GetAll<tblMaterial>(); //.Where(a => a.EntityId == entity);
            }
        }
    }
}
OpenAccessMetadataSources.cs
namespace OpenAccess
{
    public class OpenAccessMetadataSource : FluentMetadataSource
    {
        protected override IList<MappingConfiguration> PrepareMapping()
        {
            var configurations = new List<MappingConfiguration>();

            // tblMaterial
            var materialConfiguration = new MappingConfiguration<tblMaterial>();
            materialConfiguration.MapType(x => new
            {
                MaterialId = x.MaterialId,
                MaterialName = x.MaterialName,
                MaterialDescription = x.MaterialDescription,
                MaterialActive = x.MaterialActive,
                MaterialUsageType = x.MaterialUsageType,
                AddDate = x.AddDate,
                AddBy = x.AddBy,
                ModDate = x.ModDate,
                ModBy = x.ModBy
            }).ToTable("tblMaterial");
            materialConfiguration.HasProperty(x => x.MaterialId).IsIdentity(KeyGenerator.Autoinc);

            // the configuration must be added to the list, and the list returned
            configurations.Add(materialConfiguration);
            return configurations;
        }
    }
}
Repository.cs
namespace OpenAccess
{
    public class Repository : IRepository
    {
        #region private variables
        private static OpenAccessContext dat = null;
        #endregion private variables

        #region public constructor
        /// <summary>
        /// Constructor
        /// </summary>
        public Repository()
        {
            if (dat == null)
            {
                dat = new OpenAccessContext();
            }
        }
        #endregion public constructor

        #region Material (tblMaterials)
        public int CreateMaterial(tblMaterial itm)
        {
            try
            {
                dat.Add(itm);
                dat.SaveChanges();
                return itm.MaterialId;
            }
            catch (Exception)
            {
                return 0;
            }
        }
        #endregion Material (tblMaterials)
    }
}
tblMaterial.cs
namespace OpenAccess
{
    public class tblMaterial
    {
        public int MaterialId { get; set; }
        public string MaterialName { get; set; }
        public string MaterialDescription { get; set; }
        public bool MaterialActive { get; set; }
        public int MaterialUsageType { get; set; }
        public DateTime? AddDate { get; set; }
        public string AddBy { get; set; }
        public DateTime? ModDate { get; set; }
        public string ModBy { get; set; }
    }
}
In the case of tblContacts, I would suggest the following workflow for extending the model (a sketch of all three steps follows the list):
Add a new class file that will hold the definition of the tblContact POCO class. In this class, add properties that correspond to the columns of the table. The types of the properties should logically match the datatypes of the table columns.
In the OpenAccessMetadataSource class, add a new MappingConfiguration<tblContact> for the tblContact class and, using explicit mapping, provide the mapping details that logically connect the tblContact class with the tblContacts table. Make sure to add both the existing and the new mapping configurations to the configurations list.
Expose the newly added class through an IQueryable<tblContact> property in the context. This property will allow you to compose LINQ queries against the tblContacts table.
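A minimal sketch of those three steps, modeled on the existing tblMaterial code; the tblContact column names here are assumptions, so match them to the real schema:

// Step 1: new file tblContact.cs (hypothetical columns)
public class tblContact
{
    public int ContactId { get; set; }
    public string ContactName { get; set; }
    public string ContactEmail { get; set; }
}

// Step 2: inside OpenAccessMetadataSource.PrepareMapping(), next to the tblMaterial configuration
var contactConfiguration = new MappingConfiguration<tblContact>();
contactConfiguration.MapType(x => new
{
    ContactId = x.ContactId,
    ContactName = x.ContactName,
    ContactEmail = x.ContactEmail
}).ToTable("tblContacts");
contactConfiguration.HasProperty(x => x.ContactId).IsIdentity(KeyGenerator.Autoinc);
configurations.Add(contactConfiguration); // add it to the same list as materialConfiguration

// Step 3: inside OpenAccessContext
public IQueryable<tblContact> tblContacts
{
    get { return this.GetAll<tblContact>(); }
}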
Regarding the Repository class, it seems like it is related to the custom logic of the application. It surely is not a file generated by Data Access. Therefore, you need to discuss it in your team.
I also strongly advise you against using OpenAccess as a namespace in your application. It is known to interfere with Data Access's own namespaces at build time, and at some point it causes runtime errors.
I hope this helps.
We have been trying to write a C# client that seeds a Neo4j instance with some nodes and relationships. We are facing problems when trying to create relationship properties.
Here is the code to create the relationship with the flags property:
var s = clientConnection.CreateRelationship(root, new RelationshipPrincipleToContent("SECURITY", rootFolder) { flags = "+W" });
Here is the relationship class:
public class RelationshipPrincipleToContent : Relationship, IRelationshipAllowingSourceNode<Principles>, IRelationshipAllowingTargetNode<Content>
{
    public string flags { get; set; }

    string RelationshipName;

    public RelationshipPrincipleToContent(NodeReference targetNode) : base(targetNode)
    {
    }

    public RelationshipPrincipleToContent(string RelationshipName, NodeReference targetNode) : base(targetNode)
    {
        this.RelationshipName = RelationshipName;
    }

    public override string RelationshipTypeKey
    {
        get { return RelationshipName; }
    }
}
When we look at the data in the data browser tab, there are no properties on the relationships. We have also created a relationship index, though we're not sure whether that's required.
What are we missing or doing wrong?
Firstly, add a class (PayLoad.cs in this instance) that holds a public string property:
public class PayLoad
{
    public string Comment { get; set; }
}
Update your relationship class to use this PayLoad class:
public class RelationshipPrincipleToContent : Relationship<PayLoad>, IRelationshipAllowingSourceNode<Principles>, IRelationshipAllowingTargetNode<Content>
{
    string RelationshipName;

    public RelationshipPrincipleToContent(string RelationshipName, NodeReference targetNode, PayLoad pl)
        : base(targetNode, pl)
    {
        this.RelationshipName = RelationshipName;
    }

    public override string RelationshipTypeKey
    {
        get { return RelationshipName; }
    }
}
Now just update your method call on the relationship class:
clientConnection.CreateRelationship(AllPrincipals, new RelationshipPrincipleToContent("SECURITY", rootFolder, new PayLoad() { Comment = "+R" }));
(Context: I lead the Neo4jClient project.)
Shaun's answer is correct, however dated.
The direction of both Neo4j and Neo4jClient is towards Cypher as a unified approach to everything you need to do.
This Cypher query:
START root=node(0), rootFolder=node(123)
CREATE root-[:SECURITY { flags: '+W' }]->rootFolder
looks like this in C#:
client.Cypher
.Start(new { root = client.RootNode, rootFolder })
.Create("root-[:SECURITY {security}]->rootFolder")
.WithParam("security", new { flags = "+W" })
.ExecuteWithoutResults();
Some notes:
Using Cypher for this type of stuff might look a bit more complex to start with, but it will grow better for you. For example, a simple switch from Create to CreateUnique will ensure you don't create the same relationship twice (see the sketch after these notes); that would be much harder with the procedural approach.
Non-Cypher wrappers in Neo4jClient are a bit old and clunky, and will not see any significant investment moving forward
The C# approach uses WithParam to ensure that everything gets encoded properly, and you can still pass in nice objects
The C# approach uses WithParam to allow query plan caching
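As a sketch of that CreateUnique variant (assuming the CreateUnique method mirrors Create, as it does in Neo4jClient releases of that era):

client.Cypher
    .Start(new { root = client.RootNode, rootFolder })
    .CreateUnique("root-[:SECURITY {security}]->rootFolder") // no duplicate relationship on re-run
    .WithParam("security", new { flags = "+W" })
    .ExecuteWithoutResults();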
I am doing validation using DataAnnotation attributes on the Model classes, and the Model class is used for validation on both the Client and Server side of the application.
My problem is that I can't figure out how to lazy load my Model's properties without causing circular references.
The libraries involved are:
WCF Service Library
Client-Side DataAccess Library
Models Library
Because the Models library is used on both the Client and Server side for data validation, I cannot reference the DataAccess library from within the Models library. How, then, can I set up lazy loading?
For example, I have a ConsumerModel which has a property of PhoneNumbers which should be lazy loaded. How can I load the PhoneNumberModels from within the ConsumerModel without referencing the Client-Side DAL?
Client-side DAL:
using MyModels;
public class ConsumerDataAccess
{
    public ConsumerModel GetConsumerById(int id)
    {
        ConsumerDTO dto = WCFService.GetConsumer(id);
        return new ConsumerModel(dto);
    }
}
ConsumerModel:
public class ConsumerModel
{
    private ObservableCollection<PhoneNumberModel> _phoneNumbers;

    public ObservableCollection<PhoneNumberModel> PhoneNumbers
    {
        get
        {
            if (_phoneNumbers == null)
            {
                // Can't reference the DataAccess library since that would cause a circular reference
            }
            return _phoneNumbers;
        }
    }
}
What are some alternative ways I could make this architecture work?
I would prefer to keep Validation with the Models, and to use the models from both the Client and Server side for validation. I would also prefer to keep using DataAnnotation for Validation.
EDIT
Here's my final solution based on Lawrence Wenham's answer if anyone is interested. I ended up using a delegate instead of an event.
DAL:
public class ConsumerDataAccess
{
    public ConsumerModel GetConsumerById(int id)
    {
        ConsumerDTO dto = WCFService.GetConsumer(id);
        ConsumerModel rtnValue = new ConsumerModel(dto);
        rtnValue.LazyLoadData = LazyLoadConsumerData; // set on the instance, not the type
        return rtnValue;
    }

    private object LazyLoadConsumerData(string key, object args)
    {
        switch (key)
        {
            case "Phones":
                return PhoneDataAccess.GetByConsumerId((int)args);
            default:
                return null;
        }
    }
}
Model Library:
public class ConsumerModel
{
    public delegate object LazyLoadDataDelegate(string key, object args);

    public LazyLoadDataDelegate LazyLoadData { get; set; }

    private ObservableCollection<PhoneNumberModel> _phoneNumbers;

    public ObservableCollection<PhoneNumberModel> PhoneNumbers
    {
        get
        {
            if (_phoneNumbers == null && LazyLoadData != null)
            {
                _phoneNumbers = (ObservableCollection<PhoneNumberModel>)
                    LazyLoadData("Phones", ConsumerId);
            }
            return _phoneNumbers;
        }
    }
}
One way might be to raise an event in the get {} of your Model classes' properties, and then implement a lazy-loading manager on the client side that holds a reference to your DAL. E.g.:
public class LazyLoadEventArgs : EventArgs
{
    public object Data { get; set; }
    public string PropertyName { get; set; }
    public int Key { get; set; }
}
Then in your Model classes:
public event EventHandler<LazyLoadEventArgs> LazyLoadData;

public ObservableCollection<PhoneNumberModel> PhoneNumbers
{
    get
    {
        if (_phoneNumbers == null)
        {
            LazyLoadEventArgs args = new LazyLoadEventArgs
            {
                PropertyName = "PhoneNumbers",
                Key = this.Id
            };
            if (LazyLoadData != null) // guard against no subscribers
                LazyLoadData(this, args);
            if (args.Data != null)
                this._phoneNumbers = args.Data as ObservableCollection<PhoneNumberModel>;
        }
        return _phoneNumbers;
    }
}
The handler for the LazyLoadData event would have the job of fetching the data from the client side's DAL, then storing it in the .Data property of LazyLoadEventArgs. EG:
private void Model_HandleLazyLoadData(object sender, LazyLoadEventArgs e)
{
    switch (e.PropertyName)
    {
        case "PhoneNumbers":
            e.Data = DAL.LoadPhoneNumbers(e.Key);
            break;
        ...
    }
}
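What this answer leaves implicit is the subscription: whichever client-side component constructs the model must attach the handler before the property is first read. A minimal sketch, reusing the DAL from the question:

public ConsumerModel GetConsumerById(int id)
{
    ConsumerDTO dto = WCFService.GetConsumer(id);
    ConsumerModel model = new ConsumerModel(dto);
    model.LazyLoadData += Model_HandleLazyLoadData; // subscribe before PhoneNumbers is accessed
    return model;
}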
Do not use "lazy loading" with WCF. Network communication is expensive in time. If you plan to use PhoneNumbers, your service should expose a method that returns the Customer together with its phone numbers. Another approach is WCF Data Services, which offers client-side LINQ queries with the ability to define eager loading via the Expand method (see the sketch below).
You should reduce service calls to a minimum.
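A minimal sketch of the Expand approach; the context class, service URI, entity set, and navigation property names are assumptions for illustration:

// WCF Data Services client: one round trip returns the consumer and its phone numbers.
var context = new ConsumerServiceContext(new Uri("http://example.com/ConsumerService.svc"));
var consumer = context.Consumers
    .Expand("PhoneNumbers") // ask the server to include the related rows
    .Where(c => c.ConsumerId == id)
    .First();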
After reading your question again, I don't understand why you share the model between the service and the client. The model is strictly a client-side feature; the only shared part should be the DTOs.