I have a WCF service that I cannot touch, which returns List<FilesWithSettings>.
I need to query a few PCs that are grouped together and retrieve a List<FilesWithSettings> for each one, along with its PCIdentifier. That leads me to either Dictionary<PCIdentifier, List<FilesWithSettings>>, or a List<PCIdentifier> paired with a List<List<FilesWithSettings>>, which isn't elegant and is unreadable.
Can you suggest a more elegant solution?
I guess you've got three options:
List<List<T>> // Which is pretty nasty
or:
Dictionary<PCIdentifier, List<T>>
Which better enunciates your intent or even:
class PCResult
{
    PCIdentifier Identifier { get; set; }
    List<T> Results { get; set; }
}
and
List<PCResult>
Personally I prefer the third, but the second is fine too.
I would have something like
[DataContract]
public class PCState // need a better name
{
[DataMember]
public PCIdentifier Identifier {get;set;}
[DataMember]
public List<FilesWithSettings> Files {get;set;}
}
and return a List<PCState>. This avoids all the issues with overly generic types, nested lists, etc., and is easily consumed.
Dictionary<PCIdentifier, List<FilesWithSettings>> is actually pretty elegant. You can clearly identify individual PCs and iterate over all of them, while still getting all the data you need for each PC.
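For illustration, consuming that shape might look like the sketch below; client, GetFilesForPc and groupedPcs are placeholders for your actual proxy, service operation and PC group, not names from the original code.
// Build the dictionary: one entry per PC in the group.
var results = new Dictionary<PCIdentifier, List<FilesWithSettings>>();
foreach (PCIdentifier pc in groupedPcs)
{
    results[pc] = client.GetFilesForPc(pc); // hypothetical WCF call returning List<FilesWithSettings>
}

// Later: iterate over all PCs, or index a single one directly.
foreach (var pair in results)
{
    Console.WriteLine("{0}: {1} file(s)", pair.Key, pair.Value.Count);
}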
Consider the following code:
public interface IIdentifiable<T>
{
T Id { get; set; }
}
public interface IViewModel
{
}
public class MyViewModel1 : IViewModel, IIdentifiable<int>
{
public string MyProperty { get; set; }
public int Id { get; set; }
}
public class MyViewModel2 : IViewModel, IIdentifiable<string>
{
public string MyProperty { get; set; }
public string Id { get; set; }
}
I also have a class that operates on ViewModels:
public class Loader<T> where T: IViewModel
{
public void LoadData()
{
/*some important stuff here*/
if (typeof(IIdentifiable<??>).IsAssignableFrom(typeof(T)))
{ // ^- here's the first problem
data = data.Where(d => _dataSource.All(ds => ((IIdentifiable<??>) ds).Id != ((IIdentifiable<??>) d).Id)).ToList();
} // ^---- and there the second ----^
/*some important stuff here too*/
}
}
Now, as you can see, the viewmodels I have might implement the IIdentifiable<> interface. I want to check for that, and if it's true,
I want to make sure my data list does not contain any entries that are already present in my _dataSource list.
So I have 2 questions:
I don't know what IIdentifiable<> has as its generic argument; it might be int, string or even Guid.
I tried typeof(IIdentifiable<>).IsAssignableFrom(typeof(T)), which is valid syntax, yet it always returns false.
Is there a way to check whether T is IIdentifiable<> without knowing the exact generic type?
If there is an answer to the first question, I would also like to know how I can compare the Id fields without knowing their type.
I found this answer quite useful, yet it doesn't cover my
specific case.
I know that I could probably solve the problem by making my Loader<T> class generic over two types, Loader<T,K>, where K would be the
type in IIdentifiable<>, yet I would like to know if there are other solutions.
P.S. In addition to my first question: I'm also curious why one can even write typeof(IIdentifiable<>).IsAssignableFrom(typeof(T)) if it always returns false when the generic argument of IIdentifiable<> is not specified.
Edit: I guess, in hindsight, I understand why I can't write the code this bluntly: there might be a collection ICollection<IViewModel> whose entries implement different types of IIdentifiable<> (or don't implement it at all), and a check like that would fail awkwardly. Yet maybe there is a way to do something like this with some restrictions, but without adding a second generic parameter to my Loader?
Try adding two methods to your Loader<T>:
public bool CanCast<TId>()
{
var identifiableT = typeof(IIdentifiable<>).MakeGenericType(typeof(TId));
return identifiableT.IsAssignableFrom(typeof(T));
}
public IEnumerable<T> Filter<TId>(IEnumerable<T> data)
{
return data.Where(d => _dataSource.All(
ds => !((IIdentifiable<TId>) ds).Id.Equals(((IIdentifiable<TId>) d).Id)));
}
Then in LoadData:
if (CanCast<int>())
    data = Filter<int>(data).ToList();
else if (CanCast<Guid>())
    data = Filter<Guid>(data).ToList();
// and so on
Well, I would suggest always using a string for identification. You can convert both int and Guid to a string, and if you want to ensure the proper type is used you can prefix the string with type information.
However, I think the performance of your algorithm would be very poor, as you would essentially loop over two containers, so it would be O(n * m).
Thus it would be best either to do an appropriate SQL query if both sources come from the database, or to use a dictionary if you do it in code. Alternatively, if the data is properly sorted, you could find duplicates more efficiently.
By the way, generics are quite limited in C#. Sometimes using Func<> can help, but even then you have to provide extra information to the algorithm.
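As a sketch of the dictionary/set suggestion above, building a HashSet of the existing ids first turns the O(n * m) scan into roughly O(n + m). This assumes both collections hold items implementing IIdentifiable<TId> (which is exactly what the question cannot guarantee in general), uses System.Linq, and the method name is mine:
// Filter out items whose Id already exists in the data source.
public static IEnumerable<T> ExceptExisting<T, TId>(IEnumerable<T> data, IEnumerable<T> dataSource)
    where T : IIdentifiable<TId>
{
    var existingIds = new HashSet<TId>(dataSource.Select(ds => ds.Id));
    return data.Where(d => !existingIds.Contains(d.Id));
}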
We should address your question in two steps (because there really are two problems to solve here).
First, make the following change to your interface IIdentifiable<T>:
public interface IIdentifiable<T>
where T : IEquatable<T>
{
T Id { get; set; }
}
This will ensure that you can compare Id properties correctly.
Secondly, in your LoadData() method, change the if statement to:
if (typeof(IIdentifiable<T>).IsAssignableFrom(typeof(T)))
{
    data = data.Where(d => _dataSource.All(ds => !((IIdentifiable<T>) ds).Id.Equals(((IIdentifiable<T>) d).Id))).ToList();
}
I'm working on a Model and am using an enum for a list of named items.
class Verse
{
public int Number { get; set; }
public string Text { get; set; }
}
class Chapter
{
public int Number { get; set; }
public List<Verse> Verses { get; set; }
}
class Book
{
public string Name { get; set; }
public List<Chapter> Chapters { get; set; }
}
class Bible
{
public Versions Version { get; set; }
public List<Book> Books { get; set; }
}
enum Versions
{
asv1901,
bbe,
darby,
kjv,
nasb,
niv,
nkjv,
nlt,
rsv,
web,
ylt
}
That seemed like a logical way to do it, but I'm finding that working with enum is adding unnecessary difficulty.
foreach (var chapter in chapters)
{
var bibleitem = new Bible();
bibleitem.Version = (Versions)Enum.Parse(typeof(Versions), chapter.version);
}
Would it make more sense to just use string[] or something? I'm sure there is some added benefit to an enum, but I question the benefit here.
The guidance from Microsoft is here:
https://msdn.microsoft.com/en-us/library/ms229058%28v=vs.100%29.aspx?f=255&MSPPError=-2147217396
In particular note: Do not use an enumeration for open sets
People write new Bibles all the time, so your set of enumerated values could change. You would be better off using string constants, for instance, where you could add more at will.
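For example, a string-constant holder might look like the sketch below (the class and field names are illustrative, not from the original code):
// An open set: new versions can be added at will without touching existing consumers.
public static class BibleVersions
{
    public const string Asv1901 = "asv1901";
    public const string Kjv = "kjv";
    public const string Niv = "niv";
    // ...add more here as new translations appear
}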
While we are at it, some additional critiques of your code.
class Verse
{
public int Number { get; set; }
public string Text { get; set; }
}
Why is this a class, and why are the properties settable? Do you envision having an existing Verse in hand, and wishing to change its number and text to something different? If not, then don't allow it. I would write this as
struct Verse
{
public int Number { get; private set; }
public string Text { get; private set; }
public Verse(int number, string text) : this()
{
this.Number = number;
this.Text = text;
}
}
Once it is created, it does not change. Also, this is a small immutable thing that is logically a value, so make it a struct.
class Chapter
{
public int Number { get; set; }
public List<Verse> Verses { get; set; }
}
Again, if you have an existing chapter, do you intend the set of verses to change? Because anyone can call Add on a list. Also, this constrains you to having the list available at all times, rather than computed lazily from a database. Make this IEnumerable<Verse>.
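A minimal sketch of that suggestion, assuming the verses are supplied when the chapter is constructed:
class Chapter
{
    public int Number { get; private set; }
    public IEnumerable<Verse> Verses { get; private set; }

    public Chapter(int number, IEnumerable<Verse> verses)
    {
        this.Number = number;
        this.Verses = verses; // could just as well be a lazy query over a database
    }
}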
enum Versions
{
asv1901,
bbe,
This violates both naming guidelines and general legibility. Spell things out! AmericanStandardVersion1901 is far better than asv1901.
You should use enums when you have a named list of constants in your code and you know that this particular list is not going to change over time (hence "named list of constants").
What benefits do you get?
READABILITY. Using enums increases the readability of your code. Consider a scenario where I have two employee types: Permanent and ContractBased. Without an enum I might write code like this:
if (employee.Type == 1)
{
    // deal with permanent employee
}
else if (employee.Type == 2)
{
    // deal with contract-based employee
}
Such code is hard to read and maintain, as no one can guess what employee.Type == 1 or employee.Type == 2 means.
If I define an enum instead like this:
enum EmployeeType { Permanent=1, ContractBased=2 }
my code becomes like this:
if (employee.Type == EmployeeType.Permanent)
{
    // deal with permanent employee
}
else if (employee.Type == EmployeeType.ContractBased)
{
    // deal with contract-based employee
}
The readability of the code is maximized, and I also have IntelliSense available.
The problem with strings:
1) you end up with hard-coded string literals scattered through your code
2) no IntelliSense
3) more memory consumption
How to deal with the added complexity?
Give chapter.Version an enum-typed property (which is currently missing) rather than an int or string; that way you wouldn't need to do the parsing at all.
but I'm finding that working with enum is adding unnecessary difficulty.
It depends on your needs. If your set will not change, an enum is the best way to go, as it gives you more control: a verbose description and a limited set that cannot be bypassed when many developers work on the same project.
But
if your set can change during the development of the solution and you can't foresee the full set, then a string would be the better way to go.
Enums usually work best when:
No one adds records to it or removes records from it anytime soon (hopefully never).
You don't need to use the real value behind your enum records.
You don't need to use the name of your records.
Enum.Parse can be used to get the enum record from a string, but as you noticed it's pretty ugly and I discourage you from using it. If you have the integral enum value you can simply perform a cast like this:
Versions version = (Versions)0;
But note that an enum is not guaranteed to be backed by int; it could also be any other integral type. int just happens to be the default. I do, however, also discourage you from relying on the enum's underlying integral value, because something like this is also possible:
public enum Versions
{
One = 1,
Two = 2,
Three = 3
}
public void Do()
{
    Versions version = (Versions)(-9);
    // version now holds the value -9, which is not a defined member of Versions.
    // What kind of version should -9 be?
}
The code above runs without errors because the runtime doesn't perform any checks on the value you are using for the cast.
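If you need to guard against such undefined values, one common check is Enum.IsDefined (shown here as a sketch):
Versions version = (Versions)(-9);
if (!Enum.IsDefined(typeof(Versions), version))
{
    // -9 does not correspond to any declared Versions member.
    throw new ArgumentOutOfRangeException("version", "Unknown Bible version.");
}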
The answer to your question depends on the nature of Versions. If you believe it will not change in the future, then it is a good candidate for an enum in most cases. But you should then use the enum consistently across your application. I see in your sample that you are using the version as a string and therefore need to perform an ugly Enum.Parse. Consistency is an important factor when using enums; well, it always is, but it doesn't hurt to point it out again.
If you think your records are of a more dynamic nature you are probably best suited with strings. In that case you should use strings consistently. (Just wanted to point it out once again)
Using an enum provides methods for comparing instances of the type, converting the value of an instance to its string representation, converting the string representation of a number to an instance of the type, and creating an instance of a specified enumeration and value.
Use it correctly in your class.
Example:
public enum Versions
{
asv1901,
bbe,
darby,
kjv,
nasb,
niv,
nkjv,
nlt,
rsv,
web,
ylt
}
Next, use it like this:
foreach (var chapter in chapters)
{
    var bibleitem = new Bible();
    bibleitem.Version = (Versions)Enum.Parse(typeof(Versions), chapter.version);
}
Enums also make for good programming practice and cleaner code.
Reference for using enums: the Enum Class documentation from Microsoft.
I'm using Redis Cache via the StackExchange.Redis library.
I used the CloudStructures library to work with RedisDictionary and RedisList.
The problem is that when I try to retrieve values, and the model has a null
value for one list property, it throws the exception below:
Jil.DeserializationException : Error occurred building a deserializer
for TestMainClass: Expected a
parameterless constructor for
System.Collections.Generic.ICollection`1[TestChildClass]
---- Jil.Common.ConstructionException : Expected a parameterless constructor for
System.Collections.Generic.ICollection`1[TestChildClass]
public class TestMainClass
{
    public TestMainClass() { }
    public int Id { get; set; }
    public virtual ICollection<TestChildClass> Mydata { get; set; }
    public string Title { get; set; }
}
public class TestChildClass
{
    public TestChildClass() { }
    public int Id { get; set; }
    public string Value { get; set; }
}
Redis code for retrieve value:
RedisDictionary<int, TestMainClass> dictionary =
new RedisDictionary<int, TestMainClass>("localhost", "mylocaldictionary");
var result = await dictionary.Get(121);
What if I'm not able to convert ICollection<T> into List<T>?
It might be a nice feature if the serialization library detected interfaces like ICollection<T> and IList<T> and implemented them with the concrete List<T> during deserialization, but ultimately: every feature needs to be thought of, considered (impact), designed, implemented, tested, documented and supported. It may be that the library author feels this is a great idea and should be implemented; it might not be high on the author's list, but they'd be more than happy to take a pull request; or there might be good reasons not to implement it.
In the interim, as a general rule that will solve virtually every serialization problem you will ever encounter with any library:
the moment the library doesn't work perfectly with your domain model: stop serializing your domain model - use a DTO instead
By which, I mean: create a separate class or classes that are designed with the specific choice of serializer in mind. If it wants List<T>: then use List<T>. If it wants public fields: use public fields. If it wants the types to be marked [Serializable]: mark the types [Serializable]. If it wants all type names to start with SuperMagic: then start the type name with SuperMagic. As soon as you divorce the domain model from the serialization model, all the problems go away. In addition: you can support multiple serializers in parallel, without getting into the scenario that A needs X and doesn't work with Y; B needs Y and doesn't work with X.
All you then need to do is write a few lines of code to map between the two similar models (or use libraries that do exactly that, like AutoMapper).
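For the classes above, a DTO pair might look like the sketch below; the Dto-suffixed names and the mapper are mine, not part of the original model, and the mapping could equally be done with AutoMapper. Note the concrete List<T> and the collection initialized in the constructor, so a null never reaches the serializer:
public class TestMainClassDto
{
    public int Id { get; set; }
    public List<TestChildClassDto> Mydata { get; set; }
    public string Title { get; set; }

    public TestMainClassDto()
    {
        Mydata = new List<TestChildClassDto>();
    }
}

public class TestChildClassDto
{
    public int Id { get; set; }
    public string Value { get; set; }
}

// Requires System.Linq and System.Collections.Generic.
public static class TestMainClassMapper
{
    public static TestMainClassDto ToDto(TestMainClass source)
    {
        return new TestMainClassDto
        {
            Id = source.Id,
            Title = source.Title,
            Mydata = (source.Mydata ?? Enumerable.Empty<TestChildClass>())
                .Select(c => new TestChildClassDto { Id = c.Id, Value = c.Value })
                .ToList()
        };
    }
}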
So I'm currently writing an API, but I've hit a roadblock. A series of values is needed constantly throughout, which means a lot of parameters have to be pushed into a series of classes and methods across the API.
That is neither elegant nor practical, as it induces a large amount of extra code.
My original thought was this:
public class CustomerProfile
{
public string ParentSite { get; private set; }
public string DynamicSite { get; private set; }
public string SiteDb { get; private set; }
public CustomerProfile(string parentSite, string dynamicSite, string siteDb)
{
        if (string.IsNullOrEmpty(parentSite) &&
            string.IsNullOrEmpty(dynamicSite) &&
            string.IsNullOrEmpty(siteDb))
        {
            throw new Exception("Error Message:" + "\n"
                + "Null value exception...");
        }
else
{
ParentSite = parentSite;
DynamicSite = dynamicSite;
SiteDb = siteDb;
}
}
}
So my thought was to have a nice class that will set the properties, will act like a container for these repeatable values.
However, my issue seems to come from the next class.
public class Configuration
{
public CustomerProfile profile;
public Configuration(string parentSite, string dynamicSite, string siteDb)
{
        profile = new CustomerProfile(parentSite, dynamicSite, siteDb);
}
}
This then works throughout the class; I would just use profile.SiteDb or whichever property I need from it.
But is this really the best approach?
I could use simple inheritance, but I'm not really sure that is cleaner or more efficient. Any thoughts on the matter would be terrific.
Is this the ideal approach for passing property values from one class to another, given that they will be used throughout several classes and several methods? I was looking for the cleanest way to do it.
So my question is:
Out of all the ways to pass properties, what way is the best and why?
I thought this approach would be best, but as I begin to use it
throughout, it seems like it may not be ideal.
Thank you.
Short Version
The MSDN documentation for Type.GetProperties states that the collection it returns is not guaranteed to be in alphabetical or declaration order, though running a simple test shows that in general it is returned in declaration order. Are there specific scenarios that you know of where this is not the case? Beyond that, what is the suggested alternative?
Detailed Version
I realize the MSDN documentation for Type.GetProperties states:
The GetProperties method does not return properties in a particular
order, such as alphabetical or declaration order. Your code must not
depend on the order in which properties are returned, because that
order varies.
so there is no guarantee that the collection returned by the method will be ordered any specific way. Based on some tests, I've found to the contrary that the properties returned appear in the order they're defined in the type.
Example:
class Simple
{
public int FieldB { get; set; }
public string FieldA { get; set; }
public byte FieldC { get; set; }
}
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Simple Properties:");
foreach (var propInfo in typeof(Simple).GetProperties())
Console.WriteLine("\t{0}", propInfo.Name);
}
}
Output:
Simple Properties:
FieldB
FieldA
FieldC
One case where this differs, though only slightly, is when the type in question has a parent that also has properties:
class Parent
{
public int ParentFieldB { get; set; }
public string ParentFieldA { get; set; }
public byte ParentFieldC { get; set; }
}
class Child : Parent
{
public int ChildFieldB { get; set; }
public string ChildFieldA { get; set; }
public byte ChildFieldC { get; set; }
}
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Parent Properties:");
foreach (var propInfo in typeof(Parent).GetProperties())
Console.WriteLine("\t{0}", propInfo.Name);
Console.WriteLine("Child Properties:");
foreach (var propInfo in typeof(Child).GetProperties())
Console.WriteLine("\t{0}", propInfo.Name);
}
}
Output:
Parent Properties:
ParentFieldB
ParentFieldA
ParentFieldC
Child Properties:
ChildFieldB
ChildFieldA
ChildFieldC
ParentFieldB
ParentFieldA
ParentFieldC
This means the GetProperties method walks the inheritance chain from the bottom up when discovering properties. That's fine and can be handled as such.
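As an aside, if you ever need to treat the two levels separately, reflection lets you restrict the search to the declared members; a small sketch using the standard BindingFlags combination for public instance properties (requires using System.Reflection):
// Only the properties declared on Child itself, excluding the inherited ones.
var declaredOnly = typeof(Child).GetProperties(
    BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly);

foreach (var propInfo in declaredOnly)
    Console.WriteLine("\t{0}", propInfo.Name);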
Questions:
Are there specific situations where the described behavior would differ that I've missed?
If depending on the order is not recommended then what is the recommended approach?
One seemingly obvious solution would be to define a custom attribute which indicates the order in which the properties should appear (Similar to the Order property on the DataMember attribute). Something like:
public class PropOrderAttribute : Attribute
{
public int SeqNbr { get; set; }
}
And then implement such as:
class Simple
{
[PropOrder(SeqNbr = 0)]
public int FieldB { get; set; }
[PropOrder(SeqNbr = 1)]
public string FieldA { get; set; }
[PropOrder(SeqNbr = 2)]
public byte FieldC { get; set; }
}
But as many have found, this becomes a serious maintenance problem if your type has 100 properties and you need to add one between the first 2.
UPDATE
The examples shown here are simply for demonstrative purposes. In my specific scenario, I define a message format using a class, then iterate through the properties of the class and grab their attributes to see how a specific field in the message should be demarshaled. The order of the fields in the message is significant so the order of the properties in my class needs to be significant.
It currently works by just iterating over the collection returned by GetProperties, but since the documentation states that this is not recommended, I was looking to understand why, and what other options I have.
The order simply isn't guaranteed; whatever happens.... Happens.
Obvious cases where it could change:
anything that implements ICustomTypeDescriptor
anything with a TypeDescriptionProvider
But a more subtle case: partial classes. If a class is split over multiple files, the order of their usage is not defined at all. See Is the "textual order" across partial classes formally defined?
Of course, it isn't defined even for a single (non-partial) definition ;p
But imagine
File 1
partial class Foo {
public int A {get;set;}
}
File 2
partial class Foo {
public int B {get;set;}
}
There is no formal declaration order here between A and B. See the linked post to see how it tends to happen, though.
Re your edit; the best approach there is to specify the marshal info separately; a common approach would be to use a custom attribute that takes a numeric order, and decorate the members with that. You can then order based on this number. protobuf-net does something very similar, and frankly I'd suggest using an existing serialization library here:
[ProtoMember(n)]
public int Foo {get;set;}
Where "n" is an integer. In the case of protobuf-net specifically, there is also an API to specify these numbers separately, which is useful when the type is not under your direct control.
For what it's worth, sorting by MetadataToken seemed to work for me.
GetType().GetProperties().OrderBy(x => x.MetadataToken)
Original Article (broken link, just listed here for attribution):
http://www.sebastienmahe.com/v3/seb.blog/2010/03/08/c-reflection-getproperties-kept-in-declaration-order/
I use custom attributes to add the necessary metadata myself (it's used with a REST-like service which consumes and returns CRLF-delimited Key=Value pairs).
First, a custom attribute:
class ParameterOrderAttribute : Attribute
{
public int Order { get; private set; }
public ParameterOrderAttribute(int order)
{
Order = order;
}
}
Then, decorate your classes:
class Response : Message
{
[ParameterOrder(0)]
public int Code { get; set; }
}
class RegionsResponse : Response
{
[ParameterOrder(1)]
public string Regions { get; set; }
}
class HousesResponse : Response
{
public string Houses { get; set; }
}
A handy method for converting a PropertyInfo into a sortable int:
private int PropertyOrder(PropertyInfo propInfo)
{
int output;
var orderAttr = (ParameterOrderAttribute)propInfo.GetCustomAttributes(typeof(ParameterOrderAttribute), true).SingleOrDefault();
output = orderAttr != null ? orderAttr.Order : Int32.MaxValue;
return output;
}
Even better, write it as an extension:
static class PropertyInfoExtensions
{
public static int PropertyOrder(this PropertyInfo propInfo)
{
int output;
var orderAttr = (ParameterOrderAttribute)propInfo.GetCustomAttributes(typeof(ParameterOrderAttribute), true).SingleOrDefault();
output = orderAttr != null ? orderAttr.Order : Int32.MaxValue;
return output;
}
}
Finally you can now query your Type object with:
var props = from p in type.GetProperties()
where p.CanWrite
orderby p.PropertyOrder() ascending
select p;
Relying on an implementation detail that is explicitly documented as being not guaranteed is a recipe for disaster.
The 'recommended approach' would vary depending on what you want to do with these properties once you have them. Just displaying them on the screen? MSDN docs group by member type (property, field, function) and then alphabetize within the groups.
If your message format relies on the order of the fields, then you'd need to either:
Specify the expected order in some sort of message definition. Google protocol buffers works this way, if I recall: the message definition is compiled, in that case from a .proto file, into a code file for use in whatever language you happen to be working with.
Rely on an order that can be independently generated, e.g. alphabetical order (see the sketch below).
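For that second option, an independently generated order can be as simple as an ordinal sort by name (a sketch, using System.Linq):
// Alphabetical (ordinal) order is stable across runs, frameworks, and partial-class layouts.
var orderedProps = typeof(Simple).GetProperties()
    .OrderBy(p => p.Name, StringComparer.Ordinal)
    .ToArray();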
1:
I've spent the last day troubleshooting a problem in an MVC 3 project, and it all came down to this particular issue. The code basically relied on the property order being the same throughout the session, but on some occasions a few of the properties switched places, messing up the site.
First the code called Type.GetProperties() to define column names in a dynamic jqGrid table, something that in this case occurs once per Page_Load. Subsequent calls to Type.GetProperties() populated the actual data for the table, and in some rare instances the properties switched places and messed up the presentation completely. In some instances other properties that the site relied on for a hierarchical subgrid got switched, i.e. you could no longer see the sub data because the ID column contained erroneous data. In other words: yes, this can definitely happen. Beware.
2:
If you need a consistent order throughout the system session, but not necessarily exactly the same order for all sessions, the workaround is dead simple: store the PropertyInfo[] array you get from Type.GetProperties() as a value in the web cache or in a dictionary, with the type (or type name) as the cache/dictionary key. Then, whenever you're about to call Type.GetProperties(), substitute it with HttpRuntime.Cache.Get(type/typeName) or Dictionary.TryGetValue(type, out PropertyInfo[]). That way you are guaranteed to always get the order you encountered the first time.
If you always need the same order (i.e. for all system sessions) I suggest you combine the above approach with some type of configuration mechanism, i.e. specify the order in the web.config/app.config, sort the PropertyInfo[] array you get from Type.GetProperties() according to the specified order, and then store it in cache/static dictionary.
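A minimal sketch of that caching idea (the static-dictionary variant; using ConcurrentDictionary for thread safety is my addition):
using System;
using System.Collections.Concurrent;
using System.Reflection;

static class PropertyCache
{
    // Cache the first-observed order per type so every later caller sees the same order.
    private static readonly ConcurrentDictionary<Type, PropertyInfo[]> _cache =
        new ConcurrentDictionary<Type, PropertyInfo[]>();

    public static PropertyInfo[] GetOrderedProperties(Type type)
    {
        return _cache.GetOrAdd(type, t => t.GetProperties());
    }
}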