OrderBy on mix of DateTime and DBNull values throws error - c#

I want to order a list of my 'SortObject' class. This class is meant to imitate a DataGrid Row by holding arbitrary data organized in a dictionary (named 'Cells'), where the key is analogous to a DataGrid Column. It is enforced that any given key is associated with only one data type, for example the key "Name" will only have values of the String type.
My code (below) actually works for a majority of the cases I've used:
// public Dictionary<string, object> Cells { get; set; } <-- relevant field from 'SortObject'
List<SortObject> sortObjects = GetSortObjects(); // This is simplified, the data has a few different sources
IOrderedEnumerable<SortObject> orderedList = sortObjects.OrderBy(p => p.Cells["ArbitraryKey"]);
SortObject firstObject = orderedList.First();
// other work with 'orderedList' follows
The problem occurs when I'm trying to OrderBy objects of the DateTime type, and some of those objects are not set and default to 'System.DBNull'. In this case an exception is thrown when calling 'orderedList.First()' or in any of the later references to 'orderedList'. The exception is simple: "Object must be of type DateTime", which seems to be a consequence of OrderBy trying to compare the type DateTime to the type DBNull.
I've tried two solutions that haven't worked so far:
Attempt One: Set DBNull to new DateTime. In theory this should work, but I would need to create not simply DateTime type objects, but objects of any arbitrary type on the fly. (I'd also need to take note of these SortObjects and set their data back to DBNull once I had the order correct; I can't be actually changing data after all).
Attempt Two: Organize just DBNull, then just DateTime, then slap them together. Again this might work in theory, but the "other work" mentioned in the code snippet is extensive, including reordering using ThenBy() an arbitrary number of times on any key(s). Doubling its complexity is not an elegant solution and I consider it a backup.
What is the best way to resolve this?
PS: For OrderBy and DateTime I'm using the Microsoft .NET Framework v4.6.2

Change the OrderBy statement to OrderBy(v => v is DBNull ? null : v)
OrderBy can handle nulls, but not DBNulls: the default comparer sorts null before every non-null value, whereas DBNull is just an ordinary object that DateTime's comparison logic rejects.
That code should work for all the data types.
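Applied to the code in the question, the fix might look like this (a minimal sketch; SortObject and GetSortObjects are the poster's own types):

```csharp
// Comparer<object>.Default sorts null before every non-null value, so
// mapping DBNull cells to null makes them sort first instead of throwing.
List<SortObject> sortObjects = GetSortObjects();
IOrderedEnumerable<SortObject> orderedList = sortObjects
    .OrderBy(p => p.Cells["ArbitraryKey"] is DBNull ? null : p.Cells["ArbitraryKey"]);
```

The same guard works inside any ThenBy() calls that follow.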

Related

How To Handle DBNull When Pulling data from DataAdapter

I've imported an MSSQL dataset into the DataSet Designer of Visual Studio but I'm having trouble with non-nullable types. My database allows null integers (and I'd like to keep it that way if possible), but the int value type in .NET cannot be null. I would love to use nullable integers in .NET, but that datatype doesn't seem to be an option in the drop-down list of the DataSet Designer's DataType property.
I'd prefer not to manually have to edit the DataSet.Designer class file every time I make a schema change in order to change the data-type to a nullable integer. I just want my property to return Null/Nothing instead of throwing a StrongTypingException when I reference a property that is DBNull.
I see that there is a NullValue column property in the DataSet Designer, but I am not allowed to select "Return Null" unless my datatype is an Object.
Is there a clean way to handle the DBNULL > Null conversion using settings in the designer or a neat wrapper function? I don't want to have to use the IsColumnNameNull() Functions for every column every single time I call one of these properties. I also don't want to have to cast Objects into their actual types every single time.
Let me know if I'm looking at this problem the wrong way. Any feedback is appreciated. Thanks!
You can use the generic extension method .Field&lt;T&gt;("ColumnName") from the System.Data.DataSetExtensions assembly.
int? intValue = datarow.Field<int?>("CustomerId"); // c#
Dim intValue As Integer? = datarow.Field(Of Integer?)("CustomerId") ' vb.net
Alternatively, you can create a class with the correct types to represent the data of one row, and instantiate it once per row.
Or get rid of the DataSet, DataTable and DataRow types altogether and use light "POCO" classes with properly typed properties.
By using classes you make your application a little more scalable for future changes, for example moving from DataSets to an ORM framework; with classes you can change the data-access code without affecting the main business logic.
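A "POCO" row class of that kind might look like the following (names are hypothetical; a nullable int absorbs DBNull naturally via Field&lt;T&gt;):

```csharp
public class Customer
{
    public int? CustomerId { get; set; }
    public string Name { get; set; }
}

// Materialize once per row using the Field<T> extension:
// var customer = new Customer
// {
//     CustomerId = row.Field<int?>("CustomerId"),
//     Name = row.Field<string>("Name")
// };
```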

Mapping Enum to string column with custom SqlMapper.ITypeHandler - Dapper

I have a large number of PL/SQL stored procs that return columns with single character strings representing some kind of status value from a fixed range. In the project I'm working on, these columns have been mapped by Dapper to string properties on the domain objects, which are awkward and unreliable to manage, so I'd like to switch to enums.
If I used enums with single character names like enum Foo {A, P} I'm pretty sure Dapper would map them correctly but I don't want that, I want enums with descriptive labels like so:
enum Foo {
[StringValue("A")]
Active,
[StringValue("P")]
Proposed
}
In the above example, StringValueAttribute is a custom attribute and I can use reflection to convert the "A" to Foo.Active, which works fine - except I need Dapper to perform that conversion logic for me. I wrote a custom type handler to do this:
public class EnumTypeHandler&lt;T&gt; : SqlMapper.TypeHandler&lt;T&gt;
{
    public override T Parse(object value)
    {
        if (value == null || value is DBNull) { return default(T); }
        return EnumHelper.FromStringValue&lt;T&gt;(value.ToString());
    }

    public override void SetValue(IDbDataParameter parameter, T value)
    {
        parameter.DbType = DbType.String;
        parameter.Value = EnumHelper.GetStringValue(value as Enum);
    }
}
//Usage:
SqlMapper.AddTypeHandler(typeof(Foo),
    (SqlMapper.ITypeHandler)Activator.CreateInstance(typeof(EnumTypeHandler&lt;&gt;).MakeGenericType(typeof(Foo))));
The registration with SqlMapper.AddTypeHandler() seems to work fine, but when my DbConnection.Query() code runs, I get an error saying that the value 'A' could not be converted - the error is thrown from Enum.Parse, suggesting that Dapper isn't actually calling my type handler at all despite it being registered. Does anyone know a way around this?
Another user has reported this as an issue on Dapper's github site. Seems like it's a deliberate optimisation specifically around enums in Dapper, so I've changed my database model rather than trying to change the mapping code. I looked at trying to modify Dapper itself, but the source code of Dapper is optimised like nothing I've ever seen, emitting opcodes to perform conversions in the most performant way possible - no way I want to start trying to work out how to make changes there.

Issues when using Reflection and IDataReader.GetSchemaTable to create a generic method that reads and processes an IDataReader's current result set

I am writing a class that encapsulates the complexity of retrieving data from a database using ADO.NET. Its core method is
private void Read&lt;T&gt;(Action&lt;T&gt; action) where T : class, new() {
    var matches = new LinkedList&lt;KeyValuePair&lt;int, PropertyInfo&gt;&gt;();
    // Read the current result set's metadata.
    using (DataTable schema = this.reader.GetSchemaTable()) {
        DataRowCollection fields = schema.Rows;
        // Retrieve the target type's properties.
        // This is functionally equivalent to typeof(T).GetProperties(), but
        // previously retrieved PropertyInfo[]s are memoized for efficiency.
        var properties = ReflectionHelper.GetProperties(typeof(T));
        // Attempt to match the target type's properties...
        foreach (PropertyInfo property in properties) {
            string name = property.Name;
            Type type = property.PropertyType;
            // ... with the current result set's fields...
            foreach (DataRow field in fields) {
                // ... according to their names and types.
                if ((string)field["ColumnName"] == name && field["DataType"] == type) {
                    // Store all successful matches in memory.
                    matches.AddLast(new KeyValuePair&lt;int, PropertyInfo&gt;((int)field["ColumnOrdinal"], property));
                    fields.Remove(field);
                    break;
                }
            }
        }
    }
    // For each row, create an instance of the target type and set its
    // properties to the row's values for their matched fields.
    while (this.reader.Read()) {
        T result = new T();
        foreach (var match in matches)
            match.Value.SetValue(result, this.reader[match.Key], null);
        action(result);
    }
    // Go to the next result set.
    this.reader.NextResult();
}
Regarding the method's correctness, which unfortunately I cannot test right now, I have the following questions:
When a single IDataReader is used to retrieve data from two or more result sets, does IDataReader.GetSchemaTable return the metadata of all result sets, or just the metadata corresponding to the current result set?
Are the column ordinals retrieved by IDataReader.GetSchemaTable equal to the ordinals used by the indexer IDataReader[int]? If not, is there any way to map the former into the latter?
Regarding the method's efficiency, I have the following question:
What is DataRowCollection's underlying data structure? Even if that question cannot be answered, at least, what is the asymptotic computational complexity of removing a DataRow from a DataRowCollection using DataRowCollection.Remove()?
And, regarding the method's evident ugliness, I have the following questions:
Is there any way to retrieve specific metadata (e.g., just the columns' ordinals, names and types), not the full blown schema table, from an IDataReader?
Is the cast to string in (string)field["ColumnName"] == name necessary? How does .NET compare an object variable that happens to contain a reference to a string to a string variable: by reference value or by internal data value? (When in doubt, I prefer to err on the side of correctness, thus the cast; but, when able to remove all doubt, I prefer to do so.)
Even though I am using KeyValuePair<int, PropertyInfo>s to represent pairs of matched fields and properties, those pairs are not actual key-value pairs. They are just plain-old ordinary 2-tuples. However, version 2.0 of the .NET Framework does not provide a tuple data type, and, if I were to create my own special purpose tuple, I still would not know where to declare it. In C++, the most natural place would be inside the method. But this is C# and in-method type definitions are illegal. What should I do? Cope with the inelegance of using a type that, by definition, is not the most appropriate (KeyValuePair<int, PropertyInfo>) or cope with the inability to declare a type where it fits best?
As far as A1, I believe that until IDataReader.NextResult() is invoked, GetSchemaTable will only return the information for the current result set.
Then when NextResult() is invoked, you would have to call GetSchemaTable again to get the information about the new current result set.
HTH.
I can answer a couple of these:
A2) Yes, the column ordinals that come out of GetSchemaTable are the same column ordinals that are used for the indexer.
B1) I'm not sure, but it won't matter, because you'll throw if you remove from the DataRowCollection while you're enumerating it in the foreach. If I were you, I'd make a hash table of the fields or the properties to help match them up instead of worrying about this linear-search-with-removal.
EDIT: I was wrong, this is a lie -- as Eduardo points out below, it won't throw. But it's still sort of slow if you think you might ever have a type with more than a few dozen properties.
C2) Yes, it's necessary, or else it would compare by reference.
C3) I would be inclined to use KeyValuePair anyway.
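The hash-table idea from B1 could be sketched like this (an illustrative rewrite of the matching loops from the question, reusing its ReflectionHelper; lookup per column becomes O(1) instead of a linear scan with removal):

```csharp
// Index the target type's properties by name once, then walk the schema
// rows a single time, looking each column up in the dictionary.
var propertiesByName = ReflectionHelper.GetProperties(typeof(T))
    .ToDictionary(p => p.Name);

foreach (DataRow field in schema.Rows)
{
    PropertyInfo property;
    if (propertiesByName.TryGetValue((string)field["ColumnName"], out property)
        && property.PropertyType == (Type)field["DataType"])
    {
        matches.AddLast(new KeyValuePair<int, PropertyInfo>(
            (int)field["ColumnOrdinal"], property));
    }
}
```

This also sidesteps the question of what DataRowCollection.Remove() costs, since nothing is removed.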

how to iterate through reader for notnull without repeating code over and over

First off, I am new to programming (especially with C#) and thanks for your help.
I have a static web form with about 150 form objects (most checkboxes). I decided to go 1 record per form submission in the sql db. So, for example, question X has a choice of 5 checkboxes. Each of these 5 checkboxes has a column in the db.
I have the post page complete(working) and am building an edit page where I load the record and then populate the form.
How I am doing this is by passing a stored proc the id and then putting all the returned column values into the according object properties, then setting the asp control object to them.
An example of setting the asp controls to the selected value:
questionX.Items[0].Selected = selectedForm.questionX0;
questionX.Items[1].Selected = selectedForm.questionX1;
questionX.Items[2].Selected = selectedForm.questionX2;
As you see, this is very tiresome since there are over 150 of these to do. Also, I just found out if the response is NULL then I get the error that it cant be converted to a string. So, I have added this line of code to get past it:
This is the part where I am populating the returned column values into the object properties (entity is the object):
if (!String.IsNullOrEmpty((string)reader["questionX0"].ToString()))
{entity.patientUnderMdTreatment = (string)reader["questionX0"];}
So, instead of having to add this if then statement 150+ times. There must be a way to do this more efficiently.
First of all, it seems that you are using string.IsNullOrEmpty(value), but this won’t check for the special DBNull value that is returned from databases when the data is null. You should use something more akin to value is DBNull.
The rest of your problem sounds complex, so please don’t be put off if my answer is complex too. Personally I would use custom attributes:
Declare a custom attribute
The following is a skeleton to give you the idea. You may want to use the “Attribute” code snippet in Visual Studio to find out more about how to declare these.
[AttributeUsage(AttributeTargets.Field, AllowMultiple = false)]
public sealed class QuestionColumnAttribute : Attribute
{
    public string ColumnName { get; private set; }

    public QuestionColumnAttribute(string columnName)
    {
        ColumnName = columnName;
    }
}
Use the custom attribute in the entity class
Where you declare your entity class, add this custom attribute to every field, for example where patientUnderMdTreatment is declared:
[QuestionColumn("questionX0")]
public string patientUnderMdTreatment;
Iterate over the fields
Instead of iterating over the columns in the reader, iterate over the fields. For each field that has a QuestionColumnAttribute on it, get the relevant column from the reader:
foreach (var field in entity.GetType().GetFields())
{
    var attributes = field.GetCustomAttributes(typeof(QuestionColumnAttribute), true);
    if (attributes.Length == 0)
        continue;
    var column = (QuestionColumnAttribute)attributes[0];
    object value = reader[column.ColumnName];
    if (!(value is DBNull))
        field.SetValue(entity, value.ToString());
}
For the first part of your question where you set the ASP controls, you can use a similar strategy iterating over the fields of selectedForm, and this is probably simpler because you don’t need a custom attribute — just take only the fields whose name starts with “questionX”.
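That second loop might be sketched as follows (an assumption-laden illustration: questionX is taken to be the list-style ASP control from the question, and the selectedForm fields are taken to be bool):

```csharp
// Walk selectedForm's fields; each field named "questionX{i}" drives
// the corresponding item of the questionX control.
foreach (var field in selectedForm.GetType().GetFields())
{
    if (!field.Name.StartsWith("questionX"))
        continue;
    int index = int.Parse(field.Name.Substring("questionX".Length));
    questionX.Items[index].Selected = (bool)field.GetValue(selectedForm);
}
```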
this is a quick & easy way of doing it.. there are some suggestions to investigate LINQ, and I'd go with those first.
for (int i = 0; i < 150; i++)
{
    if (!String.IsNullOrEmpty((string)reader["questionX" + i.ToString()].ToString()))
    {
        entity.patientUnderMdTreatment = (string)reader["questionX" + i.ToString()];
    }
}
... though this wouldn't be any good with the
questionX.Items[0].Selected = selectedForm.questionX0
questionX.Items[1].Selected = selectedForm.questionX1
questionX.Items[2].Selected = selectedForm.questionX2
lines
so I hear two questions:
- how to deal with null coming from IDataReader?
- how to deal with multiple fields?
Let's start with the simple one. Define yourself a helper method:
public static T IsDbNull&lt;T&gt;(object value, T defaultValue)
{
    return (T)(value is DBNull ? defaultValue : value);
}
then use it:
entity.patientUnderMdTreatment = IsDbNull<string>(reader["question"], null);
Now how to map entity fields to the form? Well that really is up to you. You can either hardcode it or use reflection. The difference of runtime mapping vs compile-time is likely to be completely irrelevant for your case.
It helps if your form fields have identical names to ones in the DB, so you don't have to do name mapping on top of that (as in Timwi's post), but in the end you'll likely find out that you have to do validation/normalization on many of them anyway at which point hardcoding is really what you need, since there isn't a way to dynamically generate logic according to the changing spec. It doesn't matter if you'll have to rename 150 db fields or attach 150 attributes - in the end it is always a O(n) solution where n is number of fields.
I am still a little unsure why you need to read the data back. If you need to preserve the user's input on form reload (due to a validation error?), wouldn't it be easier/better to reload it from the request? Also, are entity and selectedForm the same object type? I assume it's not a db entity (otherwise why use a reader at all?).
It's possible that there are some shortcuts you may take, but I am having a hard time following what you are reading and writing, and when.
I recommend using the NullableDataReader. It eliminates the issue.

Serializing Object - Replacing Values

I've got a collection of around 20,000 objects that need to get persisted to my database. Now, instead of doing 20,000 insert statements, I want to pass all the records in using an XML parameter.
As far as serializing the object and passing it into the procedure goes, I'm all set. However, I'm wondering if anyone has an elegant way to do the following:
In our C# code base; we have some static values that represent a NULL when saved to the database. For example, if an integer equals -1, or a DateTime equals DateTime.MinValue; save NULL. We have our own little custom implementation that handles this for us when saving objects.
Is there any way I can do something similar to this when performing the XML serialization? Right now it's outputting -1 and DateTime.MinValue in the XML. I do have an extension method (IsNull()) that will return true/false if the value being saved is the null default value.
Any suggestions? Tips/Tricks?
The XmlSerializer understands a number of different attributes; one of them is DefaultValueAttribute.
When it is included, the XmlSerializer will only serialize the value if it differs from the default, so all you should need is:
[DefaultValue(-1)]
public int SomeProperty { get; set; }
Also, if you haven't considered it, take a look at the SqlBulkCopy class, which is a highly-performant approach to sending a large number of records to SQL Server.
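DefaultValueAttribute can't express the DateTime.MinValue case, since attribute arguments must be compile-time constants, but the XmlSerializer also recognises the {PropertyName}Specified convention, which fits the question's sentinel values. A sketch (Record and SavedOn are hypothetical names):

```csharp
public class Record
{
    public DateTime SavedOn { get; set; }

    // XmlSerializer omits SavedOn from the output whenever this is false.
    [XmlIgnore]
    public bool SavedOnSpecified
    {
        get { return SavedOn != DateTime.MinValue; }
        set { } // setter required by the serializer; ignored here
    }
}
```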
You can implement IXmlSerializable to control an object's XML serialization. In particular, implement WriteXml to substitute blank or xsi:null values (however you want to handle this) for those properties/fields that contain your null signifier values.
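A minimal sketch of the IXmlSerializable approach (class and property names are hypothetical; -1 stands in for the poster's null sentinel):

```csharp
public class Record : IXmlSerializable
{
    public int CustomerId { get; set; }

    public XmlSchema GetSchema() { return null; }

    public void WriteXml(XmlWriter writer)
    {
        // Skip the element entirely when the value is the null sentinel.
        if (CustomerId != -1)
            writer.WriteElementString("CustomerId", CustomerId.ToString());
    }

    public void ReadXml(XmlReader reader)
    {
        // Inverse mapping (missing element -> -1) omitted for brevity.
    }
}
```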
