InvalidCastException - Cannot cast string to custom object even with operators - c#

I am probably trying to do something that isn't possible. I am working with Castle.ActiveRecord / NHibernate. I have been happy so far, but one thing I have always wanted to be able to do is have something like:
[Property(ColumnType = "StringClob")]
public IDictionary<string, string> MetaData { get; set; }
For obvious reasons, this isn't possible at the moment. I started playing around with things to make this work and have come down to the following.
I have created a custom class that extends Dictionary
public class SerializableDictionary<TK, TV> : Dictionary<TK, TV>
{
public static explicit operator SerializableDictionary<TK, TV>(System.String serialized)
{
return JsonConvert.DeserializeObject<SerializableDictionary<TK, TV>>(serialized);
}
public override string ToString()
{
return JsonConvert.SerializeObject(this);
}
}
And in my ActiveRecord model I have this:
[Property(ColumnType = "StringClob")]
public SerializableDictionary<string, string> MetaData { get; set; }
This works extremely well when I save the data. It gets serialized nicely and stores as a string. When I try to load a model though, I get the following error:
Unable to cast object of type 'System.String' to type 'MyNamespace.SerializableDictionary`2[System.String,System.String]'.
I have dug as deeply as possible (using dotPeek) into where this is happening. The stack trace led me to:
NHibernate.Bytecode.Lightweight.AccessOptimizer.SetPropertyValues(Object target, Object[] values)
I am at a complete loss. I have scoured google to see if what I am trying to do is even possible. Any help is much appreciated.

I was able to figure this out after being pointed in the right direction by Mike Christensen in the comments.
All of this code is probably overkill, and some things are probably inefficient because I was doing it as fast as I could. But here it is...
I started with the same extended class as in the question (I modified it a little, because I'm still not sure how to implement the generics).
public class SerializableDictionary : Dictionary<string, object>
{
public static explicit operator SerializableDictionary(String serialized)
{
return JsonConvert.DeserializeObject<SerializableDictionary>(serialized);
}
public static explicit operator String(SerializableDictionary dict)
{
return JsonConvert.SerializeObject(dict);
}
public override string ToString()
{
return JsonConvert.SerializeObject(this);
}
}
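As a quick illustration (my own hypothetical usage, not part of the original post), the two explicit operators give a round-trip like this:

```csharp
// Hypothetical round-trip through the operators above (assumes Json.NET).
var dict = new SerializableDictionary { { "color", "blue" } };

string json = (string)dict;                   // explicit operator to String
var restored = (SerializableDictionary)json;  // explicit operator from String
// restored["color"] holds "blue" again
```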
I next had to create a custom SQL type for NHibernate. This class had to implement IUserType, which came with a bunch of methods that had to be implemented (I'm not sure I implemented these the best way possible). This class now allows NHibernate to use NullSafeGet to build the model property from the string in the database. That is where I was able to do the deserialization.
public class SerializableDictionaryType : IUserType
{
public new bool Equals(object x, object y)
{
if (ReferenceEquals(x, y)) return true; // also treats two nulls as equal
return x != null && x.Equals(y);
}
public int GetHashCode(object x)
{
return x.GetHashCode();
}
public object NullSafeGet(IDataReader rs, string[] names, object owner)
{
string dbString = (string) NHibernateUtil.String.NullSafeGet(rs, names);
SerializableDictionary dict = JsonConvert.DeserializeObject<SerializableDictionary>(dbString);
return dict;
}
public void NullSafeSet(IDbCommand cmd, object value, int index)
{
if (value == null)
{
NHibernateUtil.String.NullSafeSet(cmd, null, index);
return;
}
value = value.ToString();
NHibernateUtil.String.NullSafeSet(cmd, value, index);
}
public object DeepCopy(object value)
{
if (value == null) return null;
SerializableDictionary newDict = new SerializableDictionary();
foreach (KeyValuePair<string, object> item in (SerializableDictionary)value)
{
newDict.Add(item.Key, item.Value);
}
return newDict;
}
public object Replace(object original, object target, object owner)
{
return original;
}
public object Assemble(object cached, object owner)
{
return JsonConvert.DeserializeObject<SerializableDictionary>(cached.ToString());
}
public object Disassemble(object value)
{
return JsonConvert.SerializeObject(value);
}
public SqlType[] SqlTypes
{
get
{
SqlType[] types = new SqlType[1];
types[0] = new SqlType(DbType.String);
return types;
}
}
public Type ReturnedType
{
get { return typeof (SerializableDictionary); }
}
public bool IsMutable
{
get { return false; }
}
}
Then on the model itself I had to put the following:
[Property(ColumnType = "MyNamespace.SerializableDictionaryType, MyNamespace")]
public SerializableDictionary UserData { get; set; }
This ColumnType refers to the custom type mapper that I created. If you notice I had to use the full namespace and class along with the assembly it's located in.
With all of this setup it works like a charm. I'm planning on optimizing things to make it look nicer and would still like to find a way to deal with generics instead of being stuck with:
<string, object>
I'll update the answer with anything I find.
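For the generics question left open above, one possible direction (an untested sketch of mine, not the author's code) is a generic variant of the user type, with the remaining IUserType members unchanged from the answer:

```csharp
// Untested sketch: a generic SerializableDictionaryType that closes over
// the key/value types instead of hard-coding <string, object>.
public class SerializableDictionaryType<TK, TV> : IUserType
{
    public object NullSafeGet(IDataReader rs, string[] names, object owner)
    {
        var dbString = (string)NHibernateUtil.String.NullSafeGet(rs, names);
        return JsonConvert.DeserializeObject<SerializableDictionary<TK, TV>>(dbString);
    }

    public Type ReturnedType
    {
        get { return typeof(SerializableDictionary<TK, TV>); }
    }

    // ... the other IUserType members are the same as in the non-generic version ...
}
```

One wrinkle: the ColumnType attribute string would then have to name the closed generic type in assembly-qualified form, which is considerably uglier than the non-generic mapping.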

Looking for a more elegant way to perform custom get/set functionality on class properties

I'm trying to find a way to refine some code that I have. I work with a 3rd party API that has a REALLY complicated API request object (I'll call it ScrewyAPIRequest) that has tons of repetition in it. Every time you want to set a particular property, it can take a page worth of code. So I built a library to provide a simplified wrapper around the setting/getting of its properties (and to handle some value preprocessing).
Here's a stripped-down view of how it works:
public abstract class LessScrewyWrapper
{
protected ScrewyAPIRequest _screwy = new ScrewyAPIRequest();
public void Set(string value)
{
Set(_getPropertyName(), value);
}
public void Set(string property, string value)
{
// Preprocess value and set the appropriate property on _screwy. This part
// has tons of code, but we'll just say it looks like this:
_screwy.Fields[property] = "[" + value + "]";
}
protected string _getPropertyName()
{
// This method looks at the Environment.StackTrace, finds the correct set_ or
// get_ method call and extracts the property name and returns it.
}
public string Get()
{
// Get the property name being accessed
string property = _getPropertyName();
// Search _screwy's structure for the value and return it. Again, tons of code,
// so let's just say it looks like this:
return _screwy.Fields[property];
}
public ScrewyAPIRequest GetRequest()
{
return _screwy;
}
}
Then I have a child class that represents one specific type of the screwy API request (there are multiple kinds that all have the same structure but different setups). Let's just say this one has two string properties, PropertyA and PropertyB:
public class SpecificScrewyAPIRequest : LessScrewyWrapper
{
public string PropertyA
{
get { return Get(); }
set { Set(value); }
}
public string PropertyB
{
get { return Get(); }
set { Set(value); }
}
}
Now when I want to go use this library, I can just do:
SpecificScrewyAPIRequest foo = new SpecificScrewyAPIRequest();
foo.PropertyA = "Hello";
foo.PropertyB = "World";
ScrewyAPIRequest request = foo.GetRequest();
This works fine and dandy, but there are different kinds of data types, which involves using generics in my Set/Get methods, and it just makes the child classes look a little kludgy when you're dealing with 50 properties and 50 copies of Get() and Set() calls.
What I'd LIKE to do is simply define fields, like this:
public class SpecificScrewyAPIRequest : LessScrewyWrapper
{
public string PropertyA;
public string PropertyB;
}
It would make the classes look a LOT cleaner. The problem is that I don't know of a way to have .NET make a callback to my custom handlers whenever the values of the fields are accessed and modified.
I've seen someone do something like this in PHP using the __set and __get magic methods (albeit in a way they were not intended to be used), but I haven't found anything similar in C#. Any ideas?
EDIT: I've considered using an indexed approach to my class with an object-type value that is cast to its appropriate type afterwards, but I'd prefer to retain the approach where the property is defined with a specific type.
Maybe in your case DynamicObject is a suitable choice:
public class ScrewyDynamicWrapper : DynamicObject
{
public override bool TryGetMember(GetMemberBinder binder, out object result)
{
// get your actual value based on the property name
Console.WriteLine("Get Property: {0}", binder.Name);
result = null;
return true;
}
public override bool TrySetMember(SetMemberBinder binder, object value)
{
// set your actual value based on the property name
Console.WriteLine("Set Property: {0} # Value: {1}", binder.Name, value);
return true;
}
}
And define your wrapper objects:
public class ScrewyWrapper
{
protected dynamic ActualWrapper = new ScrewyDynamicWrapper();
public int? PropertyA
{
get { return ActualWrapper.PropertyA; }
set { ActualWrapper.PropertyA = value; }
}
public string PropertyB
{
get { return ActualWrapper.PropertyB; }
set { ActualWrapper.PropertyB = value; }
}
}
However, you can't rely on the property type inside ScrewyDynamicWrapper with this approach, so it depends on your actual API requirements - maybe it won't work for you.
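For illustration (my own hypothetical usage, not from the answer), every access to the typed properties is dispatched through the dynamic wrapper at runtime:

```csharp
// Hypothetical usage: property access on the wrapper is routed to
// TrySetMember / TryGetMember on the inner DynamicObject at runtime.
var wrapper = new ScrewyWrapper();
wrapper.PropertyA = 42;       // dispatched to TrySetMember ("PropertyA")
int? a = wrapper.PropertyA;   // dispatched to TryGetMember; null with the stub above
```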
Instead of fields, if you define properties in the class, it will be easier.
public class SpecificScrewyAPIRequest
{
public string PropertyA { get; set; }
public string PropertyB { get; set; }
}
Then you can create a generic extension method to return the ScrewyAPIRequest object.
public static class Extensions
{
public static ScrewyAPIRequest GetRequest<T>(this T obj)
{
ScrewyAPIRequest _screwy = new ScrewyAPIRequest();
foreach (var prop in obj.GetType().GetProperties())
{
_screwy.Fields[prop.Name] = prop.GetValue(obj, null);
}
return _screwy;
}
}
Now you can easily get a ScrewyAPIRequest from any class object.
Your code will look like the following:
SpecificScrewyAPIRequest foo = new SpecificScrewyAPIRequest();
foo.PropertyA = "Hello";
foo.PropertyB = "World";
ScrewyAPIRequest request = foo.GetRequest();

How to have type-safe Mongo Object Ids in C#?

Say I have three collections in Mongo: flavor, color, and cupcake. Each collection has its own _id (obviously) and the cupcake collection references the _ids in flavor and color, like so:
{
"_id": ObjectId("123"),
"flavorId": ObjectId("234"),
"colorId": ObjectId("345"),
"moreData": {}
}
This is a toy example, of course, and there is more stuff in these collections. That's not important to this question, except that it's the moreData that I'm really looking for when I query.
I want to be able to look up cupcake objects by flavorId and by colorId (and they are appropriately indexed for such lookups). However, both fields are ObjectId, and I want to avoid somebody accidentally looking for a colorId with a flavorId. How can I design the object and a repository class such that colorId and flavorId will be different types so that the compiler will not allow interchanging them, but still store both ids as ObjectId?
My first thought was to extend ObjectId and pass the extended object around, but ObjectId is a struct which cannot be extended.
You won't be able to prevent those errors, but you can use number intervals to make it easier for "someone" to find the problem.
If I'm not mistaken you can set the ids, so you can use a "prefix" for every kind.
Colors could start with 1000, flavors with 2000 and so on...
Hmm, it is a kind of soft problem, because in most repositories an ID is something common (like integers). With this in mind, we could enforce passing an extra parameter instead of changing the base object, like this bulletproof solution:
cupcakeRepository.Find(ObjectId flavorId, ObjectType objectType = ObjectType.Flavor)
or just extend repository to be more verbose
cupcakeRepository.FindByColor(ObjectId id)
cupcakeRepository.FindByFlavor(ObjectId id)
So I ended up biting the bullet on building the Mongo-specific junk to make a custom class work for this. So here is my drop-in replacement for ObjectId:
public struct DocumentId<T> : IEquatable<DocumentId<T>>
{
static DocumentId()
{
BsonSerializer.RegisterSerializer(typeof(DocumentId<T>), DocumentIdSerializer<T>.Instance);
BsonSerializer.RegisterIdGenerator(typeof(DocumentId<T>), DocumentIdGenerator<T>.Instance);
}
public static readonly DocumentId<T> Empty = new DocumentId<T>(ObjectId.Empty);
public readonly ObjectId Value;
public DocumentId(ObjectId value)
{
Value = value;
}
public static DocumentId<T> GenerateNewId()
{
return new DocumentId<T>(ObjectId.GenerateNewId());
}
public static DocumentId<T> Parse(string value)
{
return new DocumentId<T>(ObjectId.Parse(value));
}
public bool Equals(DocumentId<T> other)
{
return Value.Equals(other.Value);
}
public override bool Equals(object obj)
{
if (ReferenceEquals(null, obj)) return false;
return obj is DocumentId<T> && Equals((DocumentId<T>)obj);
}
public static bool operator ==(DocumentId<T> left, DocumentId<T> right)
{
return left.Value == right.Value;
}
public static bool operator !=(DocumentId<T> left, DocumentId<T> right)
{
return left.Value != right.Value;
}
public override int GetHashCode()
{
return Value.GetHashCode();
}
public override string ToString()
{
return Value.ToString();
}
}
public class DocumentIdSerializer<T> : StructSerializerBase<DocumentId<T>>
{
public static readonly DocumentIdSerializer<T> Instance = new DocumentIdSerializer<T>();
public override DocumentId<T> Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)
{
return new DocumentId<T>(context.Reader.ReadObjectId());
}
public override void Serialize(BsonSerializationContext context, BsonSerializationArgs args, DocumentId<T> value)
{
context.Writer.WriteObjectId(value.Value);
}
}
public class DocumentIdGenerator<T> : IIdGenerator
{
public static readonly DocumentIdGenerator<T> Instance = new DocumentIdGenerator<T>();
public object GenerateId(object container, object document)
{
return DocumentId<T>.GenerateNewId();
}
public bool IsEmpty(object id)
{
var docId = id as DocumentId<T>? ?? DocumentId<T>.Empty;
return docId.Equals(DocumentId<T>.Empty);
}
}
The type parameter T can be anything; it is never used. It should be the type of your object, like so:
public class Cupcake {
[BsonId]
public DocumentId<Cupcake> Id { get; set; }
// ...
}
This way, your Flavor class has an Id of type DocumentId<Flavor> and your Color class has an Id of type DocumentId<Color>, and never shall the two be interchanged. Now I can create a CupcakeRepository with the following unambiguous methods as well:
public interface ICupcakeRepository {
IEnumerable<Cupcake> Find(DocumentId<Flavor> flavorId);
IEnumerable<Cupcake> Find(DocumentId<Color> colorId);
}
This should be safe with existing data as well because the serialized representation is exactly the same, just an ObjectId("1234567890abcef123456789").
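To make the payoff concrete, here is a hypothetical snippet (names assumed from the example above) showing how the compiler now keeps the ids apart:

```csharp
// Hypothetical: ids for different document types are now distinct types.
DocumentId<Flavor> flavorId = DocumentId<Flavor>.GenerateNewId();
DocumentId<Color> colorId = DocumentId<Color>.GenerateNewId();

IEnumerable<Cupcake> byFlavor = repository.Find(flavorId); // OK, Flavor overload
// Passing colorId to a method expecting DocumentId<Flavor> does not compile:
// DocumentId<Color> is a different closed type than DocumentId<Flavor>.
```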

JSON.NET Serialization - How does DefaultReferenceResolver compare equality?

I am using JSON.NET 6.0.3. I have changed PreserveReferences option as follows:
HttpConfiguration.Formatters.JsonFormatter.SerializerSettings.PreserveReferencesHandling = PreserveReferencesHandling.Objects;
My object graph resembles the following:
public class CarFromManufacturer
{
public int CarID { get; set; }
public string Make { get; set; }
public string Model { get; set; }
public CarManufacturer Manufacturer { get; set; }
}
public class CarManufacturer
{
public int ManufacturerID { get; set; }
public string Name { get; set; }
}
My WebAPI controller is returning the result set of IEnumerable<CarFromManufacturer>. So the result could be a list of 5 cars from two unique manufacturer objects. I am expecting the JSON result to list each manufacturer only once fully serialized and then subsequent uses of the same Manufacturer to be $ref ID to the original's $id. That is not happening.
Even though I can't find a single piece of documentation that speaks about how equality is established for the ReferenceResolver, I've implemented IEquatable<CarManufacturer> along with override of base.Equals and base.GetHashCode() with no luck.
I'd like to avoid implementing my own IReferenceResolver because have very similar object graphs working as expected in the same project.
The only thing I can think of is that I am using factory objects, and instead of creating each unique CarManufacturer first and then creating the instances of CarFromManufacturer passing in the CarManufacturer, I am creating a new instance of the CarManufacturer each time. This would explain why the objects aren't equal, but that's why I implemented IEquatable and overrides of base.Equals(object) and base.GetHashCode().
I've looked into the source for DefaultReferenceResolver and it uses the default constructor of BidirectionalDictionary, which uses EqualityComparer<T>.Default, which, per the MSDN documentation, uses T's implementation of IEquatable<T> if it exists, or otherwise T's base.Equals() implementation. All of this would lead me to believe that IEquatable in CarManufacturer should fix my problem. However, breakpoints placed in CarManufacturer.Equals() and GetHashCode() are never hit.
JSON.NET's logic for resolving references by default simply compares object references.
If you want to compare objects in a different manner, you'll have to implement a custom IReferenceResolver.
Here's an example that takes an IEqualityComparer<T> to accommodate your use case:
public class ReferenceResolver<T> : IReferenceResolver
{
private Dictionary<string, T> stringToReference;
private Dictionary<T, string> referenceToString;
private int referenceCount;
public ReferenceResolver(IEqualityComparer<T> comparer)
{
this.stringToReference = new Dictionary<string, T>();
this.referenceToString = new Dictionary<T, string>(comparer);
this.referenceCount = 0;
}
public void AddReference(
object context,
string reference,
object value)
{
this.referenceToString.Add((T)value, reference);
this.stringToReference.Add(reference, (T)value);
}
public string GetReference(
object context,
object value)
{
string result = null;
if (!this.referenceToString.TryGetValue((T)value, out result))
{
referenceCount++;
result = referenceCount.ToString(CultureInfo.InvariantCulture);
this.referenceToString.Add((T)value, result);
this.stringToReference.Add(result, (T)value);
}
return result;
}
public bool IsReferenced(
object context,
object value)
{
return this.referenceToString.ContainsKey((T)value);
}
public object ResolveReference(
object context,
string reference)
{
T r = default(T);
this.stringToReference.TryGetValue(reference, out r);
return r;
}
}
Json.NET will call the Equals method on the objects being compared. In certain scenarios you may not want this; for example, when it is checking for circular references it does the same, whereas it may be more ideal to check for reference equality. They do it this way to give the developer full control by overriding the Equals method in their classes.
You can override the default implementation. For example, to use reference equality you would do the following:
var settings = new JsonSerializerSettings
{
EqualityComparer = new DefaultEqualityComparer(),
};
public class DefaultEqualityComparer : IEqualityComparer
{
public bool Equals(object x, object y)
{
return ReferenceEquals(x, y);
}
public int GetHashCode(object obj)
{
return obj.GetHashCode();
}
}
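To plug the custom resolver from the earlier answer into the serializer, a hedged sketch might look like the following. Two assumptions of mine: ReferenceResolverProvider is the settings hook in recent Json.NET versions, and because the resolver above casts every tracked object to T, this only works when all reference-tracked objects actually share that type:

```csharp
// Hedged sketch: wire the IEqualityComparer-based resolver into the settings.
var settings = new JsonSerializerSettings
{
    PreserveReferencesHandling = PreserveReferencesHandling.Objects,
    ReferenceResolverProvider = () =>
        new ReferenceResolver<CarManufacturer>(
            EqualityComparer<CarManufacturer>.Default)
};
string json = JsonConvert.SerializeObject(cars, settings);
```

Note that with PreserveReferencesHandling.Objects the resolver sees every serialized object, not just CarManufacturer, so in practice the resolver may need a fallback path for objects that are not T.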

Override for fluent NHibernate for long text strings nvarchar(MAX) not nvarchar(255)

Whenever you set a string value in Fluent NHibernate, it always sets the DB values to nvarchar(255). I need to store quite a lot of long strings, which are based on user input, so 255 is impractical.
Just to add: this is an issue with the automapper, as I am using Fluent NHibernate to build the database.
Adding this convention will set the default length for string properties to 10000. As others have noted, this will be a nvarchar(max) column.
public class StringColumnLengthConvention : IPropertyConvention, IPropertyConventionAcceptance
{
public void Accept(IAcceptanceCriteria<IPropertyInspector> criteria)
{
criteria.Expect(x => x.Type == typeof(string)).Expect(x => x.Length == 0);
}
public void Apply(IPropertyInstance instance)
{
instance.Length(10000);
}
}
Conventions can be added to an automap configuration like this:
Fluently.Configure()
.Mappings( m =>
m.AutoMappings.Add( AutoMap.AssemblyOf<Foo>()
.Conventions.Add<StringColumnLengthConvention >()))
For more information, see Conventions in the Fluent NHibernate wiki.
Setting the length to anything over 4000 will generate an nvarchar(max) column...
.WithLengthOf(10000);
See here for more detail...
http://serialseb.blogspot.com/2009/01/fluent-nhibernate-and-nvarcharmax.html
With Fluent Nhibernate Automapper, one quickly realizes that the out-of-the-box behavior for varchar columns is less than ideal. First you discover that every string property was exported as varchar(255) and you need to make a column to be varchar(max). But ideally, you wouldn't have to make every string a varchar(max), right? So you head down that well-trodden path of finding the best way to exert control over the process without breaking out of the various elegant patterns at play...
If you want your resulting database varchar columns specified at different lengths, you look to convention classes to make it happen. You might try creating name-specific conditions, or generally using some naming pattern that you detect inside your convention class.
Neither is ideal. Overloading a name for the purpose of indicating an intended spec in another part of the code is unfortunate: your name should just be a name. Nor should you have to modify convention code every time you need to add or modify a limited-length class property. So how can you write a convention class that gives you control and provides that control in a simple and elegant way?
It'd be sweet if you could just decorate your property like I did for the Body property here:
using System;
using MyDomain.DBDecorations;
namespace MyDomain.Entities {
[Serializable]
public class Message
{
public virtual string MessageId { get; set; }
[StringLength(4000)] public virtual string Body { get; set; }
}
}
If this could work, we'd have the control over each string independently, and we'd be able to specify it directly in our entity.
Before I start a maelstrom over separation of database from application, let me point out that this is not specifically a database directive (I made a point of not calling the attribute 'Varchar'). I prefer to characterize this as an augmentation of the System.string, and in my own little universe I'm happy with that. Bottom line, I want a convenience!
To do this, we need to define the decoration we want to use:
using System;
namespace MyDomain.DBDecorations
{
[AttributeUsage(AttributeTargets.Property)]
public class StringLength : System.Attribute
{
public int Length = 0;
public StringLength(int taggedStrLength)
{
Length = taggedStrLength;
}
}
}
Finally, we need to use a string length convention to use the entity's property decoration. This part may not seem pretty, but it does the job, and the good news is that you won't have to look at it again!
StringColumnLengthConvention.cs:
using System.Reflection;
using FluentNHibernate.Conventions;
using FluentNHibernate.Conventions.AcceptanceCriteria;
using FluentNHibernate.Conventions.Inspections;
using FluentNHibernate.Conventions.Instances;
namespace MyMappings
{
public class StringColumnLengthConvention : IPropertyConvention, IPropertyConventionAcceptance
{
public void Accept(IAcceptanceCriteria<IPropertyInspector> criteria) { criteria.Expect(x => x.Type == typeof(string)).Expect(x => x.Length == 0); }
public void Apply(IPropertyInstance instance)
{
int leng = 255;
MemberInfo[] myMemberInfos = ((PropertyInstance)(instance)).EntityType.GetMember(instance.Name);
if (myMemberInfos.Length > 0)
{
object[] myCustomAttrs = myMemberInfos[0].GetCustomAttributes(false);
if (myCustomAttrs.Length > 0)
{
if (myCustomAttrs[0] is MyDomain.DBDecorations.StringLength)
{
leng = ((MyDomain.DBDecorations.StringLength)(myCustomAttrs[0])).Length;
}
}
}
instance.Length(leng);
}
}
}
Add this convention to your automapping configuration and there you have it: whenever you want a specific length to result during ExportSchema, you can just decorate the string property, and only that property, right in your entity!
One consistent way that I found is:
Map(x => x.LongText, "LongText").CustomType<VarcharMax>().Nullable();
in which the VarcharMax and BaseImmutableUserType classes are:
public class VarcharMax : BaseImmutableUserType<String>
{
public override object NullSafeGet(IDataReader rs, string[] names, object owner)
{
return (string)NHibernateUtil.String.NullSafeGet(rs, names[0]);
}
public override void NullSafeSet(IDbCommand cmd, object value, int index)
{
//Change the size of the parameter
((IDbDataParameter)cmd.Parameters[index]).Size = int.MaxValue;
NHibernateUtil.String.NullSafeSet(cmd, value, index);
}
public override SqlType[] SqlTypes
{
get { return new[] { new SqlType(DbType.String) }; }
}
}
public abstract class BaseImmutableUserType<T> : NHibernate.UserTypes.IUserType
{
public abstract object NullSafeGet(IDataReader rs, string[] names, object owner);
public abstract void NullSafeSet(IDbCommand cmd, object value, int index);
public abstract SqlType[] SqlTypes { get; }
public new bool Equals(object x, object y)
{
if (ReferenceEquals(x, y))
{
return true;
}
if (x == null || y == null)
{
return false;
}
return x.Equals(y);
}
public int GetHashCode(object x)
{
return x.GetHashCode();
}
public object DeepCopy(object value)
{
return value;
}
public object Replace(object original, object target, object owner)
{
return original;
}
public object Assemble(object cached, object owner)
{
return DeepCopy(cached);
}
public object Disassemble(object value)
{
return DeepCopy(value);
}
public Type ReturnedType
{
get { return typeof(T); }
}
public bool IsMutable
{
get { return false; }
}
}
I came across this question with the same problem. I have a slightly safer way of doing it, as I don't want all string fields to have 10000 chars by default.
First, I register Fluent NHibernate with some overrides:
...//snip
....Mappings(m => m.AutoMappings.Add(
AutoMap.AssemblyOf<Account>()
//Use my mapping overrides here
.UseOverridesFromAssemblyOf<MyMappingOverride>()
.Conventions.Add(new MyConventions()).IgnoreBase<Entity>
))
My Mapping override class looks like this:
public class MyMappingOverride : IAutoMappingOverride<MyClass> {
public void Override(AutoMapping<MyClass> mapping) {
mapping.Map(x => x.LongName).Length(765);
}
}
This is only required for the small subset of entities with long text values. Maybe someone else will find this useful?
You are probably using NHibernate Validator as well. If so, Fluent NHibernate will automatically consider all of the NHibernate Validator related data annotations, including string length, not null, etc.

Generic type conversion FROM string

I have a class that I want to use to store "properties" for another class. These properties simply have a name and a value. Ideally, what I would like is to be able to add typed properties, so that the "value" returned is always of the type that I want it to be.
The type should always be a primitive. This class subclasses an abstract class which basically stores the name and value as a string. The idea is that this subclass will add some type-safety to the base class (as well as saving me some conversion).
So, I have created a class which is (roughly) this:
public class TypedProperty<DataType> : Property
{
public DataType TypedValue
{
get { // Having problems here! }
set { base.Value = value.ToString();}
}
}
So the question is:
Is there a "generic" way to convert from string back to a primitive?
I can't seem to find any generic interface that links the conversion across the board (something like ITryParsable would have been ideal!).
I am not sure whether I understood your intentions correctly, but let's see if this one helps.
public class TypedProperty<T> : Property where T : IConvertible
{
public T TypedValue
{
get { return (T)Convert.ChangeType(base.Value, typeof(T)); }
set { base.Value = value.ToString();}
}
}
lubos hasko's method fails for nullables. The method below will work for nullables. I didn't come up with it, though. I found it via Google: http://web.archive.org/web/20101214042641/http://dogaoztuzun.com/post/C-Generic-Type-Conversion.aspx Credit to "Tuna Toksoz"
Usage first:
TConverter.ChangeType<T>(StringValue);
The class is below.
public static class TConverter
{
public static T ChangeType<T>(object value)
{
return (T)ChangeType(typeof(T), value);
}
public static object ChangeType(Type t, object value)
{
TypeConverter tc = TypeDescriptor.GetConverter(t);
return tc.ConvertFrom(value);
}
public static void RegisterTypeConverter<T, TC>() where TC : TypeConverter
{
TypeDescriptor.AddAttributes(typeof(T), new TypeConverterAttribute(typeof(TC)));
}
}
For many types (integer, double, DateTime etc), there is a static Parse method. You can invoke it using reflection:
MethodInfo m = typeof(T).GetMethod("Parse", new Type[] { typeof(string) } );
if (m != null)
{
return m.Invoke(null, new object[] { base.Value });
}
TypeDescriptor.GetConverter(PropertyObject).ConvertFrom(Value)
TypeDescriptor is a class with a GetConverter method that accepts a Type object; you can then call the ConvertFrom method to convert the value for that specified type.
With inspiration from Bob's answer, these extensions also support null value conversion and all primitive conversions back and forth.
public static class ConversionExtensions
{
public static object Convert(this object value, Type t)
{
Type underlyingType = Nullable.GetUnderlyingType(t);
if (underlyingType != null && value == null)
{
return null;
}
Type basetype = underlyingType == null ? t : underlyingType;
return System.Convert.ChangeType(value, basetype);
}
public static T Convert<T>(this object value)
{
return (T)value.Convert(typeof(T));
}
}
Examples
string stringValue = null;
int? intResult = stringValue.Convert<int?>();
int? intValue = null;
var strResult = intValue.Convert<string>();
You could possibly use a construct such as a traits class. In this way, you would have a parameterised helper class that knows how to convert a string to a value of its own type. Then your getter might look like this:
get { return StringConverter<DataType>.FromString(base.Value); }
Now, I must point out that my experience with parameterised types is limited to C++ and its templates, but I imagine there is some way to do the same sort of thing using C# generics.
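One way to realize that traits idea in C# (entirely my own sketch; the StringConverter<T> name comes from the getter above) is a static generic class that carries a per-type parse delegate:

```csharp
// Sketch of a traits-style converter: each closed StringConverter<T>
// holds its own string-parsing delegate, registered once at startup.
public static class StringConverter<T>
{
    public static Func<string, T> FromStringFunc { get; set; }

    public static T FromString(string value)
    {
        if (FromStringFunc == null)
            throw new InvalidOperationException(
                "No converter registered for " + typeof(T));
        return FromStringFunc(value);
    }
}

// Registration, e.g. at application startup:
//   StringConverter<int>.FromStringFunc = int.Parse;
//   StringConverter<bool>.FromStringFunc = bool.Parse;
```

Each closed generic type gets its own static delegate slot, which is roughly how C++ template specializations behave here.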
Check the static Nullable.GetUnderlyingType.
- If the underlying type is null, then the template parameter is not Nullable, and we can use that type directly
- If the underlying type is not null, then use the underlying type in the conversion.
Seems to work for me:
public object Get( string _toparse, Type _t )
{
// Test for Nullable<T> and return the base type instead:
Type undertype = Nullable.GetUnderlyingType(_t);
Type basetype = undertype == null ? _t : undertype;
return Convert.ChangeType(_toparse, basetype);
}
public T Get<T>(string _key)
{
return (T)Get(_key, typeof(T));
}
public void test()
{
int x = Get<int>("14");
int? nx = Get<Nullable<int>>("14");
}
I used lubos hasko's answer and it works, but I had a problem with the conversion of doubles because of the culture settings. So I added:
return (T)Convert.ChangeType(base.Value, typeof(T), CultureInfo.InvariantCulture);
public class TypedProperty<T> : Property
{
public T TypedValue
{
// Note: this double cast only succeeds when base.Value already holds a T
get { return (T)(object)base.Value; }
set { base.Value = value.ToString(); }
}
}
I convert via an object. It is a little bit simpler.
Yet another variation. Handles Nullables, as well as situations where the string is null and T is not nullable.
public class TypedProperty<T> : Property where T : IConvertible
{
public T TypedValue
{
get
{
if (base.Value == null) return default(T);
var type = Nullable.GetUnderlyingType(typeof(T)) ?? typeof(T);
return (T)Convert.ChangeType(base.Value, type);
}
set { base.Value = value.ToString(); }
}
}
You can do it in one line as below:
YourClass obj = (YourClass)Convert.ChangeType(YourValue, typeof(YourClass));
Happy coding ;)
