Dapper UpdateAsync ignore column - c#

I am trying to update with Dapper.Contrib this table:
public class MyTable
{
    public int ID { get; set; }
    public int SomeColumn1 { get; set; }
    public int SomeColumn2 { get; set; }
    public int CreateUserID { get; set; }
    public int UpdateUserID { get; set; }
}
I don't want to update the CreateUserID column because this is an update method, so I want to ignore this column when calling Dapper.Contrib's UpdateAsync(entity) method.
I tried using the [NotMapped] and [UpdateIgnore] attributes, but they didn't help.
Note: I still want this column to be passed on insert operations; therefore, [Computed] and [Write(false)] are not appropriate.
Can someone help me figure out how to ignore this column when updating the table in the database?

Well, it's just not supported. Here is the related issue, and a solution is expected only in Dapper v2. You can also inspect the source code (it's pretty simple) and see that the properties to update are gathered as follows:
var allProperties = TypePropertiesCache(type);
keyProperties.AddRange(explicitKeyProperties);
var computedProperties = ComputedPropertiesCache(type);
var nonIdProps = allProperties.Except(keyProperties.Union(computedProperties)).ToList();
So all writable properties not marked with Key/ExplicitKey/Computed are included. The same happens for InsertAsync (except that properties marked with ExplicitKey are also included in inserts, but you cannot use that attribute in your situation, because your property is not a key after all).
So you have to either wait for this to be implemented, fork and implement it yourself, or just write your own UpdateAsync method. You can see from the source code that it's very simple and not hard to reimplement.
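If you go the route of writing your own update method, a minimal sketch could look like the following. It assumes a custom [UpdateIgnore] attribute (defined here, it is not part of Dapper.Contrib), that the table name matches the class name, and that the key property is named ID; it is not a drop-in replacement for Contrib's UpdateAsync.

using System;
using System.Data;
using System.Linq;
using System.Threading.Tasks;
using Dapper;

[AttributeUsage(AttributeTargets.Property)]
public class UpdateIgnoreAttribute : Attribute { }

public static class CustomUpdateExtensions
{
    public static Task<int> UpdateIgnoringAsync<T>(this IDbConnection connection, T entity)
    {
        var type = typeof(T);

        // Build the SET clause from all properties except the key and anything marked [UpdateIgnore].
        var props = type.GetProperties()
            .Where(p => p.Name != "ID" && !p.IsDefined(typeof(UpdateIgnoreAttribute), true))
            .ToList();

        var setClause = string.Join(", ", props.Select(p => p.Name + " = @" + p.Name));
        var sql = "UPDATE " + type.Name + " SET " + setClause + " WHERE ID = @ID";

        // Dapper maps the entity's properties to the @parameters in the SQL.
        return connection.ExecuteAsync(sql, entity);
    }
}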

As @Evk already mentioned in his answer, there is no solution implemented yet. He has also mentioned the workarounds.
Apart from that, you can choose to use Dapper (IDbConnection.Execute(...)) directly bypassing Dapper.Contrib for this particular case.
I had a similar problem (with DapperExtensions, though) with a particular column update and other complex queries that DapperExtensions either could not generate at all or needed much work to make happen.
I used Dapper directly instead of DapperExtensions for that particular case; other parts of the project still benefit from DapperExtensions. It is a trade-off, and such cases are very limited. I found this to be a better solution than tweaking/forcing DapperExtensions to do it, and it also saved me time and effort.
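For the table in the question, bypassing Dapper.Contrib for this one update might look roughly like this (column names taken from the MyTable class above; connection is assumed to be an open IDbConnection):

const string sql = @"
    UPDATE MyTable
    SET SomeColumn1 = @SomeColumn1,
        SomeColumn2 = @SomeColumn2,
        UpdateUserID = @UpdateUserID
    WHERE ID = @ID;";

// CreateUserID is simply left out of the SET list, so it is never touched.
await connection.ExecuteAsync(sql, entity);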

I suggest using the [Computed] attribute.
[Computed] - this property is computed and should not be part of updates
However, it appears that the documentation for Dapper.Contrib is worded in a confusing manner. The [Computed] attribute appears to be ignored on inserts as well, so this may or may not work for your use case.
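For completeness, applying it to the class from the question would look like this (keeping in mind the caveat above that Contrib skips [Computed] properties on insert as well):

using Dapper.Contrib.Extensions;

public class MyTable
{
    public int ID { get; set; }
    public int SomeColumn1 { get; set; }
    public int SomeColumn2 { get; set; }

    [Computed] // excluded from Update/UpdateAsync, but also from Insert/InsertAsync
    public int CreateUserID { get; set; }

    public int UpdateUserID { get; set; }
}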

Related

Set a value on an attribute at run time

I have a namespace that contains some classes. One of the classes I'm working on contains properties, where each property has an attribute associated with it, as follows:
namespace Local.Business
{
    [DynamoDBTable("myTableName")]
    public class Business
    {
        [DynamoDBHashKey("PK")]
        public string MunId { get; set; }

        [DynamoDBRangeKey("SK")]
        public string Id { get; set; }

        [DynamoDBProperty("Dba")]
        public string Dba { get; set; }
    }
}
the string "myTableName" need to be determined at runtime(by calling a function or reading it from other class's property)
How can I achieve that, please?
What you are trying to do is inherently flawed. You kinda can ish change attributes sort of, sometimes, but there's a good chance that whatever is consuming the attributes won't see the change, which makes it entirely pointless.
Basically: you need to find another way of reconfiguring DynamoDB at runtime. This isn't it.
For the "kinda can ish":
you can materialize attributes, and if they're mutable, change the copies you have; but when other code materializes the attributes, they'll get different unrelated versions, which will not have any changes you have made
there is an API that supports this concept (System.ComponentModel), but: most attribute consumers do not use that API - it is mostly just UI binding tools (think PropertyGrid, DataGridView, etc) that would take any notice of it - because they are expecting to work with things like DataTable that require a different approach to metadata and reflection
Set the table name value to empty string in the class file:
[DynamoDBTable("")]
During runtime, use the overloaded functions on DynamoDBMapper to pass DynamoDBMapperConfig configured with TableNameOverride
Actually, I deleted the table name attribute and passed the table name in the query instead:
dynamoDBOperationConfig = new DynamoDBOperationConfig();
dynamoDBOperationConfig.OverrideTableName = "tableName";
string munId = "1";
var search = dynamoDBcontext.QueryAsync<Business>(munId, dynamoDBOperationConfig);
and it works fine. Thank you all for helping.

Exporting EF Entity to Excel/PDF and How to Exclude Attributes without violating SRP?

I am working with Entity Framework as my ORM for a project at work, and I need to be able to write only some of the values of each entity to an existing Excel template.
The data is required to be formatted as Excel tables so that the end user can reference the information by using formulas like "=AVG(People_Table[Age])" (note: this is just a contrived example for simplicity). There is also a requirement to export the values to PDF as well.
I've decided that reflection is the way to go to export the information in the least painful manner possible. The problem now, however, is that I want to exclude certain properties from being written to the spreadsheet. I also might want to write the properties in a certain order and specify a display format.
One way I could do this is with defining specific Data Attributes on the properties. I liked this answer on ignoring specific attributes: Exclude property from getType().GetProperties(). So a possible solution could be:
// class I want to export
public class PersonEntity {

    [SkipAttribute] // per solution in the referenced answer
    public int PersonId { get; set; }

    [SkipAttribute]
    public int ForeignKeyId { get; set; }

    [Display(Order = 3)]
    public int Age { get; set; }

    [Display(Name = "First Name", Order = 1)]
    public string FirstName { get; set; }

    [Display(Name = "Last Name", Order = 2)]
    public string LastName { get; set; }

    /* additional properties removed for brevity */
}
The problem I see with the above solution is that this entity class is now doing two things: one, providing a mapping between EF and the database, which is its primary function; and two, providing information on how to consume the class for exporting to Excel. I see this getting messy and leading to confusion because it (possibly?) violates SRP. Also, I only need the SkipAttribute when exporting to Excel; most of the time I will just ignore this attribute.
An alternative solution that I see could be to create a separate set of classes that only contains the needed properties and to use this for exporting to Excel, and then using a tool like AutoMapper to map from EF Person to this class.
So, the export class would be:
public class PersonExportModel {

    [Display(Name = "First Name")]
    public string FirstName { get; set; }

    [Display(Name = "Last Name")]
    public string LastName { get; set; }

    public int Age { get; set; }

    /* additional properties removed for brevity */
}
And I would just use reflection to dump the values out to the specified format using ClosedXML or a PDF rendering library like ITextSharp.
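A rough sketch of that reflection dump might look like the following; it assumes a SkipAttribute as in the referenced answer and falls back to the property name when no [Display] attribute is present:

using System;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Reflection;

public static class ExportHelper
{
    // Returns the header text and a value accessor for each exportable property,
    // skipping anything marked [SkipAttribute] and ordering by [Display(Order = ...)].
    public static (string Header, Func<object, object> GetValue)[] GetExportColumns(Type type)
    {
        return type.GetProperties()
            .Where(p => !p.IsDefined(typeof(SkipAttribute), true))
            .Select(p => new { Property = p, Display = p.GetCustomAttribute<DisplayAttribute>() })
            .OrderBy(x => x.Display?.GetOrder() ?? int.MaxValue)
            .Select(x => (
                Header: x.Display?.Name ?? x.Property.Name,
                GetValue: (Func<object, object>)(entity => x.Property.GetValue(entity))))
            .ToArray();
    }
}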
My concern with the above solution is that it ends up with a lot of extra code just to ignore a few unwanted properties (mostly PKs, FKs, and some complex relationship properties). I also face the issue that any updates to the EF class, like removing a property, will require me to go through the other classes and remove the corresponding properties. But I like this solution because there is less confusion about what data is needed for exporting to Excel.
So I'm stuck between either bloating my EF class to describe how it should be exported, or creating other export models that are tightly coupled to the EF class and would be a pain to update if the underlying model changes. The whole mapping between classes is a real pain, which can be alleviated with AutoMapper; that comes, however, with its own set of problems around obfuscated mapping and performance penalties. I could live with these "problems" if it means I do not have to manually map between the two classes.
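If AutoMapper is acceptable, the mapping boilerplate stays fairly small; something along these lines (using the MapperConfiguration-based API, which may differ slightly between AutoMapper versions):

var config = new MapperConfiguration(cfg => cfg.CreateMap<PersonEntity, PersonExportModel>());
var mapper = config.CreateMapper();

// people is the EF query result; exportRows is what gets written to Excel/PDF.
var exportRows = people.Select(p => mapper.Map<PersonExportModel>(p)).ToList();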
I've thought about farming the work out to SSRS, but I need the ability to write the data to specific existing workbooks, which I understand is not possible. I'd also need the ability to create named tables, which I also understand is not possible out of the box with SSRS. And I'd need to create two reports, because the Excel output would look much different from the PDF format. So even SSRS would cause a lot of extra work.
Any suggestions on which solution might be best, or perhaps an alternative approach? The requirements of this project are in flux, so I'm looking for a solution that will be as painless as possible to update.

SetFields in Mongo C# driver

I'm using the C# Mongo driver and I have a users collection like the one below:
public class User
{
    public string Name { get; set; }
    public DateField Date { get; set; }

    /*
     * Some more properties
     */

    public List<string> Slugs { get; set; } // I just need to return this property
}
I'm writing a query that should return just the Slugs property.
To do this I'm trying to use the SetFields(...) method from the Mongo driver. SetFields returns a cursor of the User type; I was expecting something of my Slugs property's type, so that I don't return the whole set of properties when I just need one.
Is it possible?
Yes and no. You can use the aggregation framework's projection operator $project to change the structure of the data, but I wouldn't do that for two reasons:
MongoDB generally tries to preserve the structure unless you force it to do otherwise, particularly because that makes it easier to work with statically typed languages (the old object/relational mismatch: SQL queries don't 'answer' in users or blog posts, but in some wild chimaera of properties collected from various tables, which might require additional DTOs depending on the query itself, which is all a bit ugly).
Aggregation framework queries are a bit more complicated and a bit slower, and I wouldn't let the urge to do some micro-optimization dictate a lot of unnecessary complexity.
After all, omitting a few fields is a micro-optimization already (setting index covered queries aside), but on the client-side the cost of empty fields should be next to none.
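That said, if you do want the projection, a sketch against the legacy 1.x driver API (where SetFields lives, with the usual MongoDB.Driver.Builders usings) could look like this; the cursor still yields User instances, and the non-projected properties simply come back with their default values:

var cursor = usersCollection
    .FindAs<User>(Query<User>.EQ(u => u.Name, "some-user"))
    .SetFields(Fields<User>.Include(u => u.Slugs));

foreach (var user in cursor)
{
    // Only Slugs is populated here; Name, Date, etc. remain at their defaults.
    var slugs = user.Slugs;
}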

Add property to POCO class at runtime

I selected ServiceStack OrmLite for my project, which is a pure data-oriented application. I want to allow the end user to create their own object types, defined in an XML format, which will be used to generate classes at runtime using CodeDOM.
I will also be defining some "system" objects required by the application (e.g. User), but I cannot foresee all the properties the end user will use, and therefore I am looking for a way to allow extending the classes I create at design time. Sample below:
public class User
{
    public Guid Uid { get; set; }
    public String Username { get; set; }
    public String Password { get; set; }
}
The end user wants to have an Email and an Address. They should be able to add these two properties to the class above, so that the whole class becomes the following (which can still be used by OrmLite, since it allows overwriting):
public class User
{
    public Guid Uid { get; set; }
    public String Username { get; set; }
    public String Password { get; set; }
    public String Email { get; set; }
    public String Address { get; set; }
}
I know there is a risk that doing so could crash the system (if the class is already instantiated), so I am looking for the best way to avoid this issue while still meeting the need I have described.
It seems that there are two parts to what you're doing here. You need to create types dynamically to support the additional properties. You also need to ensure that you never end up with duplicate types in your AppDomain, i.e. two different definitions of User.
Runtime type generation
The various suggestions already given handle how to create the types. In one project, we had something similar. We created a base class that had the core properties and a dictionary to store the 'extension' properties. Then we used Reflection.Emit to create a derived type that had the desired properties. Each property definition simply read from or wrote to the dictionary in the base class. Since Reflection.Emit entails writing low-level IL code, it seems complex at first. We wrote some sample derived classes in another class library and compiled them. These were examples of what we'd actually need to achieve at runtime. Then we used ildasm.exe to see what code the compiler produced. This made it quite easy to work out how we could generate the same code at runtime.
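To make the shape of that concrete, here is a hand-written sketch of the kind of base class and derived class we targeted (names are illustrative, not from the original project); the runtime step was then to emit the derived type with Reflection.Emit instead of writing it by hand:

using System.Collections.Generic;

public abstract class ExtensibleEntity
{
    // Shared store for the 'extension' properties.
    protected readonly Dictionary<string, object> Extensions = new Dictionary<string, object>();

    protected T Get<T>(string name)
    {
        object value;
        return Extensions.TryGetValue(name, out value) ? (T)value : default(T);
    }

    protected void Set<T>(string name, T value)
    {
        Extensions[name] = value;
    }
}

// What a generated derived type looks like when written by hand;
// each emitted property just delegates to the dictionary in the base class.
public class UserExtended : ExtensibleEntity
{
    public string Email
    {
        get { return Get<string>("Email"); }
        set { Set("Email", value); }
    }
}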
Avoiding namespace collisions
Your second challenge is to avoid having duplicate type names. We appended a guid (with invalid characters removed) to the name of each generated type to make sure this never happened. Easy fix, though I don't know whether you could get away with that with your ORM.
If this is server code, you also need to consider the fact that assemblies are never unloaded in .NET. So if you're repeatedly generating new types at runtime, your process will continue to grow. The same will happen in client code, but this may be less of an issue if you don't expect the process to run for an extended period of time.
I said assemblies are not unloaded; however, you can unload an entire AppDomain. So if this is server code you could have the entire operation run in its own appdomain, then tear it down afterwards to ensure that the dynamically created types are unloaded.
Check out ExpandoObject, which provides dynamic language support for doing something like this. You can use it to add additional properties to your POCOs at runtime. Here's a link on using .NET's DLR features: http://msdn.microsoft.com/en-us/library/system.dynamic.expandoobject%28v=vs.100%29.aspx
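For example (a trivial illustration; whether OrmLite can persist this shape is a separate question):

using System.Collections.Generic;
using System.Dynamic;

dynamic user = new ExpandoObject();
user.Username = "jane";
user.Email = "jane@example.com";   // attached at runtime, no class change needed

// An ExpandoObject is also an IDictionary<string, object>, so the
// runtime-added members can be enumerated generically.
var asDictionary = (IDictionary<string, object>)user;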
Why not use a key value pair for all its properties, or at least the dynamic ones?
http://msdn.microsoft.com/en-us/library/system.collections.hashtable.aspx
You can do it the way you're describing with reflection, but it will take a performance hit; this approach also allows removing properties.
The project I'm currently working on has a similar requirement. We have a system already in production, and a client requested additional fields.
We solved this by simply adding a CustomFields property to our model.
public class Model : IHasId<Guid>
{
    [PrimaryKey]
    [Index(Unique = true)]
    public Guid Id { get; set; }

    // Other Fields...

    /// <summary>
    /// A store of extra fields not required by the data model.
    /// </summary>
    public Dictionary<string, object> CustomFields { get; set; }
}
We've been using this for a few weeks with no issues.
An additional benefit we found from this was that each row could have its own custom fields so we could handle them on a per record basis instead of requiring them for every record.
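Usage then looks something like this (OrmLite blobs the dictionary into a single text column by default), which is where the per-record flexibility comes from:

var a = new Model
{
    Id = Guid.NewGuid(),
    CustomFields = new Dictionary<string, object> { { "Email", "a@example.com" } }
};

var b = new Model
{
    Id = Guid.NewGuid(),
    CustomFields = new Dictionary<string, object> { { "Region", "EMEA" } }  // different fields per record
};

db.Save(a);
db.Save(b);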

What ways can I ensure that a string property is of a particular length?

I've created some classes that will be used to provide data to stored procedures in my database. The varchar parameters in the stored procs have length specifications (e.g. varchar(6)), and I'd like to validate the length of all string properties before passing them on to the stored procedures.
Is there a simple, declarative way to do this?
I have two conceptual ideas so far:
Attributes
public class MyDataClass
{
    [MaxStringLength = 50]
    public string CompanyName { get; set; }
}
I'm not sure what assemblies/namespaces I would need to use to implement this kind of declarative markup. I think something like this already exists, but I'm not sure where, or whether it's the best way to go.
Validation in Properties
public class MyDataClass
{
    private string _CompanyName;

    public string CompanyName
    {
        get { return _CompanyName; }
        set
        {
            if (value.Length > 50)
                throw new InvalidOperationException();
            _CompanyName = value;
        }
    }
}
This seems like a lot of work and will really make my currently simple classes look pretty ugly, but I suppose it will get the job done. It will also take a lot of copying and pasting to get right.
I'll post this as a different answer, because it is characteristically different from Code Contracts.
One approach you can use to have declarative validation is to use a dictionary or hash table as the property store, and share a utility method to perform validation.
For example:
using System;
using System.Collections;
using System.Diagnostics;

// Example attribute class for MaxStringLength
public class MaxStringLengthAttribute : Attribute
{
    public int MaxLength { get; set; }
    public MaxStringLengthAttribute(int length) { this.MaxLength = length; }
}

// Class using the dictionary store and shared validation routine.
public class MyDataClass
{
    private Hashtable properties = new Hashtable();

    public string CompanyName
    {
        get { return GetValue<string>("CompanyName"); }
        [MaxStringLength(50)]
        set { SetValue<string>("CompanyName", value); }
    }

    public TResult GetValue<TResult>(string key)
    {
        return (TResult)(properties[key] ?? default(TResult));
    }

    public void SetValue<TValue>(string key, TValue value)
    {
        // Example: retrieve the attribute from the calling setter.
        var attributes = new StackTrace()
            .GetFrame(1)
            .GetMethod()
            .GetCustomAttributes(typeof(MaxStringLengthAttribute), true);

        // With the attribute in hand, perform validation here...
        properties[key] = value;
    }
}
You can get at the calling property using reflection by working up your stack trace as demonstrated here. Reflect the property attributes, run your validation, and voila! One-liner getter/setters that share a common validation routine.
On an aside, this pattern is also convenient because you can design a class to use alternative dictionary-like property stores, such as ViewState or Session (in ASP.NET), by updating only GetValue and SetValue.
One additional note is, should you use this approach, you might consider refactoring validation logic into a validation utility class for shared use among all your types. That should help prevent your data class from getting too bulky in the SetValue method.
Well, whichever way you go, what gets executed is going to look like your second method. So the trick is getting your first method to act like your second.
First of all, it would need to be [MaxStringLength(50)]. Next, all that's doing is adding some data to the Type object for this class. You still need a way of putting that data to use.
One way would be a binary re-writer. After compilation (but before execution), the rewriter would read the assembly, look for that attribute, and when it finds it, add in the code for the check. The retail product PostSharp was designed to do exactly that type of thing.
Alternately, you could trigger it at run time. Something like:
public class MyDataClass
{
    private string _CompanyName;

    [MaxStringLength(50)]
    public string CompanyName
    {
        get { return _CompanyName; }
        set
        {
            ProcessValidation(value);
            _CompanyName = value;
        }
    }
}
That's still quite ugly, but it's a bit better if you have a number of validation attributes.
The first method, using attributes, sounds good.
Implement your attribute by inheriting from the System.Attribute class, and mark it with the AttributeUsage attribute so that your attribute can be applied to a property.
Then, using reflection, check for the presence and value of the attribute before sending the value to the SP.
That provides you with a lot more flexibility than the second method. If tomorrow you decide to let your SP receive only the first N chars of an over-long string, you won't have to modify all your code, only the code that interprets the attribute.
There are indeed some validation attributes in the framework, but I wouldn't use them here, because they could imply behaviour you don't expect and because you won't be able to modify them in any way (for example, if you want something like [MaxLength(50, true)] to specify that using only the first 50 chars is OK).
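A minimal sketch of that approach (the attribute and validator below are illustrative, not from an existing library):

using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public sealed class MaxStringLengthAttribute : Attribute
{
    public int MaxLength { get; private set; }
    public MaxStringLengthAttribute(int length) { MaxLength = length; }
}

public static class StringLengthValidator
{
    // Call this on the data object right before handing it to the stored procedure.
    public static void Validate(object instance)
    {
        foreach (var prop in instance.GetType().GetProperties())
        {
            var attr = prop.GetCustomAttribute<MaxStringLengthAttribute>();
            if (attr == null || prop.PropertyType != typeof(string))
                continue;

            var value = (string)prop.GetValue(instance);
            if (value != null && value.Length > attr.MaxLength)
                throw new InvalidOperationException(
                    prop.Name + " exceeds the maximum length of " + attr.MaxLength + ".");
        }
    }
}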
It sounds like a business rule, so I would put it in a Company class (since it is CompanyName) and do the validation there. I don't see why it would require copying and pasting if you have it encapsulated.
Either an attribute or your second example should be fine. The attribute allows for reuse in other classes with string length constraints, however.
Though not exactly the same thing, I recently became aware of .NET 4 Code Contracts in an MSDN article. They provide a convenient and elegant way of encoding and analyzing code assumptions. It's worth taking a look at.
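A brief illustration of what that might look like on the property from the question (this requires the Code Contracts rewriter to enforce the precondition at runtime):

using System.Diagnostics.Contracts;

public class MyDataClass
{
    private string _companyName;

    public string CompanyName
    {
        get { return _companyName; }
        set
        {
            // Precondition checked by Code Contracts.
            Contract.Requires(value != null && value.Length <= 50);
            _companyName = value;
        }
    }
}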
