We can pass data between functions by using class objects. For example, I have a class:
public class AddsBean
{
public long addId { get; set; }
public int bid { get; set; }
public long pointsAlloted { get; set; }
public string userId { get; set; }
public enum isApproved { YES, NO };
public DateTime approveDate { get; set; }
public string title { get; set; }
public string description { get; set; }
public string Link { get; set; }
public DateTime dateAdded { get; set; }
}
We can then call a function like public List<AddsBean> getAdds(string Id). This approach is fine when you need all the variables of the class, but what if you only need 2 or 3 of them?
Passing the whole object seems wasteful in terms of memory. Another possible solution is to define separate classes with fewer variables, but that is not practical.
What is the best solution, both for expressing the intent and for performance?
In Java - "References to objects are passed by value".. So, you dont pass the entire object, you just pass the reference to the object to the called function.
EG:
class A{
int i;
int j;
double k;
}
class B{
public static void someFunc(A a) // here 'a' is a reference to an object; we don't pass the object itself.
{
// some code
}
public static void main(String[] args){
A a = new A();
B.someFunc(a); // reference is being passed by value
}
}
First of all, since Java passes object references by value, there is no need to worry about wasted memory: only the reference is copied, not the object.
Next, as you mentioned, in some situations it is indeed not good to hand over the whole object when the callee does not need all of it, because you want to protect the data in the instance. In that case you can use classes of different granularity, for instance:
class A { id, name }
class B extends A { password, birthday }
By referring to different classes you can control the granularity yourself and give different clients different scopes of data.
But in some conditions you need a single instance that stores all the data for the whole application, like the configuration data in Hadoop or some other configuration-related instance.
Try to choose the most suitable scope!
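In C# terms the same idea might look like this (UserSummary and UserDetails are just hypothetical names): callers that only need the id and name accept the narrower base type, while callers that need everything accept the derived type.
public class UserSummary
{
    public long Id { get; set; }
    public string Name { get; set; }
}
public class UserDetails : UserSummary
{
    public string Password { get; set; }
    public DateTime Birthday { get; set; }
}
// A consumer that only needs the summary fields takes the narrower type:
public void Display(UserSummary user)
{
    Console.WriteLine(user.Id + ": " + user.Name);
}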
If you're sure that this is the source of problems and you don't want to define a new class with a subset of the properties, .NET provides the Tuple class for grouping a small number of related fields. For example, a Tuple<int, int, string> contains two integers and a string, in that order.
public Tuple<string, long, DateTime> GetPointsData()
{
AddsBean bean = ... // Get your AddsBean somehow
return Tuple.Create<string, long, DateTime>(bean.userId, bean.pointsAlloted, bean.approveDate);
}
Once this method returns, there is no longer a live reference to the object that bean referred to, and it will be collected by the garbage collector at some point in the future.
That said, unless you're sure that instances of the AddsBean class are having a noticeable negative effect on the performance of your app, you should not worry about it. The performance of your application is probably affected far more by other operations. Returning a reference type (a type defined with class instead of struct) only passes a reference to the object, not the data of the object itself.
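On the consuming side, the values come back positionally through Item1, Item2 and Item3; a short usage sketch for the GetPointsData method above:
Tuple<string, long, DateTime> data = GetPointsData();
string userId = data.Item1;       // first element, in creation order
long points = data.Item2;
DateTime approved = data.Item3;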
Related
How do you determine whether a class in .NET is big or small? Is it measured by how many attributes or fields it has, the data types of those attributes/fields, the return types of its methods, the parameters of its methods, the access modifiers of its methods, or its virtual methods? Thanks.
class A
{
string x { get; set; }
}
class B
{
int x { get; set; }
}
In this example, if I instantiate classes A and B like this:
A objA = new A();
B objB = new B();
Is objA the bigger one because it holds a string property while objB holds only an int, even though I didn't set any value on its property? Thanks.
EDIT: Just to clarify my question
Suppose I have a class:
public class Member
{
public string MainEmpId { get; set; }
public string EmpId { get; set; }
}
and another class
public class User
{
public string AccessLevel { get; set; }
public string DateActivated { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public string Mi { get; set; }
public string Password { get; set; }
public string UserId { get; set; }
public string UserName { get; set; }
public string Active { get; set; }
public string ProviderName { get; set; }
public string ContactPerson { get; set; }
public string Relation { get; set; }
public string Landline { get; set; }
public string MobileNo { get; set; }
public string Complaint { get; set; }
public string Remarks { get; set; }
public string Reason { get; set; }
public string RoomType { get; set; }
}
If I instantiate them like this:
Member A = new Member();
User B = new User()
is object A larger than object B?
I know it's an odd question, but I believe every instantiation of an object eats memory space.
The size of a class instance is determined by:
The amount of data actually stored in the instance
The padding needed between the values
Some extra internal data used by the memory management
So, typically a class containing a string property needs (on a 32 bit system):
8 bytes for internal data
4 bytes for the string reference
4 bytes of unused space (to get to the minimum 16 bytes that the memory manager can handle)
And typically a class containing an integer property needs:
8 bytes for internal data
4 bytes for the integer value
4 bytes of unused space (to get to the minimum 16 bytes that the memory manager can handle)
As you see, the string and integer properties take up the same space in the class, so in your first example they will use the same amount of memory.
The value of the string property is of course a different matter, as it might point to a string object on the heap, but that is a separate object and not part of the class pointing to it.
For more complicated classes, padding comes into play. A class containing a boolean and a string property would for example use:
8 bytes for internal data
1 byte for the boolean value
3 bytes of padding to get on an even 4-byte boundary
4 bytes for the string reference
Note that these are examples of memory layouts for classes. The exact layout varies depending on the version of the framework, the implementation of the CLR, and whether it's a 32-bit or 64-bit application. As a program can be run on either a 32-bit or 64-bit system, the memory layout is not even known to the compiler; it is decided when the code is JIT-compiled before execution.
In general, a class is larger when it has many instance (non-static) fields, regardless of their values; classes have a memory minimum of 12 bytes, and reference-type fields are 4 bytes on 32-bit systems and 8 bytes on 64-bit systems. Other fields may be laid out with padding to word boundaries, such that a class with four byte-sized fields may actually occupy four times 4 bytes in memory. But this all depends on the runtime.
Don't forget about the fields that may be hidden in, for example, your automatic property declarations. Since they are backed by a field internally, they'll add to the size of the class:
public string MyProperty
{ get; set; }
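For illustration, the MyProperty auto-property above roughly expands to the following hand-written version with an explicit backing field (the compiler actually generates its own, unspeakably named field; the name below is just a placeholder):
private string _myProperty; // placeholder name for the compiler-generated backing field
public string MyProperty
{
    get { return _myProperty; }
    set { _myProperty = value; }
}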
Note that the following property has no influence on the class size because it isn't backed by a field:
public bool IsValid
{ get { return true; } }
To get an idea of the in-memory size of a class or struct instance: apply the [StructLayout(LayoutKind.Sequential)] attribute on the class and call Marshal.SizeOf() on the type or instance.
[StructLayout(LayoutKind.Sequential)]
public class MyClass
{
public int myField0;
public int myField1;
}
int sizeInBytes = Marshal.SizeOf(typeof(MyClass));
However, because the runtime can layout the class in memory any way it wishes, the actual memory used by an instance may vary unless you apply the StructLayoutAttribute.
While the following article is old (.NET 1.1), the concepts explain clearly what the CLR is doing to allocate memory for objects instantiated in your application; which heaps are they placed in, where their object reference pointers are addressing, etc.
Drill Into .NET Framework Internals to See How the CLR Creates Runtime Objects
You can also check: how-much-memory-instance-of-my-class-uses.
There is an easy way to test the size of an object after its constructor has been called.
There's a project on github called dotnetex that uses some magic and shows the size of a class or object.
Usage is simple:
GCEx.SizeOf<Object>(); // size of the type
GCEx.SizeOf(someObject); // size of the object
Under the hood it uses some magic.
To get the size of a type, it casts the method table pointer to an internal MethodTableInfo struct and uses its Size property, like this:
public static unsafe Int32 SizeOf<T>()
{
return ((MethodTableInfo *)(typeof(T).TypeHandle.Value.ToPointer()))->Size;
}
To get the size of an object it uses true dark magic that is quite hard to follow :) Take a look at the code.
When one says class size, I would assume that it means how many members the class has, and how complex the class is.
However, your question is about the memory required when we create an instance of a class. We cannot be sure about the exact size, because the .NET Framework keeps the underlying memory management away from us (which is a good thing). Even if we had the correct size now, that value might not stay correct forever. Anyway, we can be sure that each of the following will take some space in memory:
Instance variables.
Automatic properties.
So it makes sense to say that the User class will take more memory than the Member class does.
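If you want a rough empirical check rather than reasoning about layout, one common trick is to allocate a large number of instances and compare GC.GetTotalMemory before and after; this is a sketch and gives approximate numbers only:
const int count = 100000;
var items = new object[count];
long before = GC.GetTotalMemory(true);
for (int i = 0; i < count; i++)
    items[i] = new User();   // or new Member(), to compare
long after = GC.GetTotalMemory(true);
Console.WriteLine("Approx. bytes per instance: {0}", (after - before) / count);
GC.KeepAlive(items);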
We have a Transaction class that's very loaded; so loaded that I originally ended up passing almost 20 arguments to the ctor. After extracting a few value objects, there are still 12 arguments left, which I still think is too much.
How would I go about avoiding this? I think it's reasonable that the arguments are passed to the constructor, since they're all required, and I want to make that explicit. I also like how, if I add a property, I can add it to the ctor and let the compiler find the places that broke, instead of having to rely on tests for this. I don't think object initializers or builders do the problem any good. It might become more obvious in the coming days which arguments belong together and could be composed, though.
public class MyEntity
{
public MyEntity(ValueType prop2, ValueType prop3, ...)
{
Id = Guid.NewGuid();
Prop2 = prop2;
Prop3 = prop3;
...
}
public Guid Id { get; private set; }
public ValueType Prop2 { get; private set; }
public ValueType Prop3 { get; private set; }
public ...
}
Are you sure that all the parameters are required? The word "required" is deceptive: the compiler may force me to provide a string argument, for example, but it can't force me to provide a value that is not null or empty.
The only way to truly force valid data to be provided is to validate it at the point of use. Sometimes this has to be in the constructor, e.g. a class that wraps something that only has meaning when initialised, like an I/O object. However, it's usually sufficient to allow the calling code to set properties any old way, then validate their values in the method call that requires them.
I'm rambling a bit. My point is, don't get hung up on constructor parameters as the only way to provide initialisation data to a class. They give very little additional compiler protection beyond simple properties.
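A sketch of that idea (the Post method and the property names are purely illustrative): plain settable properties, with the validation living in the method that actually consumes the values.
public class Transaction
{
    public decimal Amount { get; set; }
    public string Reference { get; set; }
    // Validate where the values are used, not in the constructor.
    public void Post()
    {
        if (Amount <= 0)
            throw new InvalidOperationException("Amount must be positive.");
        if (string.IsNullOrEmpty(Reference))
            throw new InvalidOperationException("A reference is required.");
        // ... actual posting logic
    }
}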
How about encapsulating the parameters in a structure, and passing the structure in?
public struct ParamsStruct
{
public Type1 param1;
public Type2 param2;
...
}
public static void Method(ParamsStruct p)
{
...
}
public static void Main(string[] args)
{
ParamsStruct p;
p.param1 = ...
p.param2 = ...
Method(p);
}
When you output the full transaction details in a user or system interface, you will need all the parts. This is unlikely to help you find a split.
But, have a look at your internal processing - are there situations where you use only a subset of the fields on the transaction? Are there places where you pass in a Transaction, but only use 4 of the fields? If you literally always use all fields, then keep them in one object.
In the case of a banking transaction, I would consider a split along these lines (a rough sketch follows the list):
Where the money came from
Where the money went to
How the money was moved - which payment instrument or facility was used
Why the money was moved - reference numbers, etc
Amount and currency
Date
Status of the transaction
(Obviously this depends on your exact domain).
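A rough sketch of that kind of split (all type and member names are hypothetical), where the constructor ends up taking a handful of value objects instead of a dozen scalars:
public class Party { /* account number, holder, bank, ... */ }
public class PaymentMethod { /* instrument or facility used */ }
public class Money { /* amount and currency */ }
public class Transaction
{
    public Transaction(Party source, Party destination, PaymentMethod method,
                       Money amount, string reference, DateTime date)
    {
        Source = source;
        Destination = destination;
        Method = method;
        Amount = amount;
        Reference = reference;
        Date = date;
    }
    public Party Source { get; private set; }
    public Party Destination { get; private set; }
    public PaymentMethod Method { get; private set; }
    public Money Amount { get; private set; }
    public string Reference { get; private set; }
    public DateTime Date { get; private set; }
}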
public class MyEntity
{
public ValueType Prop1 { get; set; }
public ValueType Prop2 { get; set; }
// And so on...
public MyEntity()
{
Id = Guid.NewGuid();
}
}
Then:
MyEntity entity = new MyEntity();
entity.Prop1 = prop1;
entity.Prop2 = prop2;
// And so on...
You can eventually consider two different design approaches:
The essence pattern.
The fluent interface pattern (a minimal sketch follows).
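For the fluent interface, a minimal sketch (the builder type and the WithXxx names are purely illustrative) where each setter returns the builder so calls can be chained, and Build produces the entity:
public class MyEntityBuilder
{
    private ValueType _prop2;
    private ValueType _prop3;
    public MyEntityBuilder WithProp2(ValueType value) { _prop2 = value; return this; }
    public MyEntityBuilder WithProp3(ValueType value) { _prop3 = value; return this; }
    public MyEntity Build()
    {
        return new MyEntity(_prop2, _prop3 /*, ... */);
    }
}
// Usage: var entity = new MyEntityBuilder().WithProp2(x).WithProp3(y).Build();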
Short Version
The MSDN documentation for Type.GetProperties states that the collection it returns is not guaranteed to be in alphabetical or declaration order, though running a simple test shows that in general it is returned in declaration order. Are there specific scenarios that you know of where this is not the case? Beyond that, what is the suggested alternative?
Detailed Version
I realize the MSDN documentation for Type.GetProperties states:
The GetProperties method does not return properties in a particular
order, such as alphabetical or declaration order. Your code must not
depend on the order in which properties are returned, because that
order varies.
so there is no guarantee that the collection returned by the method will be ordered any specific way. Based on some tests, I've found to the contrary that the properties returned appear in the order they're defined in the type.
Example:
class Simple
{
public int FieldB { get; set; }
public string FieldA { get; set; }
public byte FieldC { get; set; }
}
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Simple Properties:");
foreach (var propInfo in typeof(Simple).GetProperties())
Console.WriteLine("\t{0}", propInfo.Name);
}
}
Output:
Simple Properties:
FieldB
FieldA
FieldC
One case where this differs slightly is when the type in question has a parent that also has properties:
class Parent
{
public int ParentFieldB { get; set; }
public string ParentFieldA { get; set; }
public byte ParentFieldC { get; set; }
}
class Child : Parent
{
public int ChildFieldB { get; set; }
public string ChildFieldA { get; set; }
public byte ChildFieldC { get; set; }
}
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Parent Properties:");
foreach (var propInfo in typeof(Parent).GetProperties())
Console.WriteLine("\t{0}", propInfo.Name);
Console.WriteLine("Child Properties:");
foreach (var propInfo in typeof(Child).GetProperties())
Console.WriteLine("\t{0}", propInfo.Name);
}
}
Output:
Parent Properties:
ParentFieldB
ParentFieldA
ParentFieldC
Child Properties:
ChildFieldB
ChildFieldA
ChildFieldC
ParentFieldB
ParentFieldA
ParentFieldC
This means the GetProperties method walks the inheritance chain from the bottom up when discovering properties. That's fine and can be handled as such.
Questions:
Are there specific situations where the described behavior would differ that I've missed?
If depending on the order is not recommended then what is the recommended approach?
One seemingly obvious solution would be to define a custom attribute which indicates the order in which the properties should appear (Similar to the Order property on the DataMember attribute). Something like:
public class PropOrderAttribute : Attribute
{
public int SeqNbr { get; set; }
}
And then implement such as:
class Simple
{
[PropOrder(SeqNbr = 0)]
public int FieldB { get; set; }
[PropOrder(SeqNbr = 1)]
public string FieldA { get; set; }
[PropOrder(SeqNbr = 2)]
public byte FieldC { get; set; }
}
But as many have found, this becomes a serious maintenance problem if your type has 100 properties and you need to add one between the first 2.
UPDATE
The examples shown here are simply for demonstrative purposes. In my specific scenario, I define a message format using a class, then iterate through the properties of the class and grab their attributes to see how a specific field in the message should be demarshaled. The order of the fields in the message is significant, so the order of the properties in my class needs to be significant too.
It currently works by just iterating over the collection returned from GetProperties, but since the documentation states that this is not recommended, I was looking to understand why, and what other options I have.
The order simply isn't guaranteed; whatever happens... happens.
Obvious cases where it could change:
anything that implements ICustomTypeDescriptor
anything with a TypeDescriptionProvider
But a more subtle case: partial classes. If a class is split over multiple files, the order of their usage is not defined at all. See Is the "textual order" across partial classes formally defined?
Of course, it isn't defined even for a single (non-partial) definition ;p
But imagine
File 1
partial class Foo {
public int A {get;set;}
}
File 2
partial class Foo {
public int B {get;set;}
}
There is no formal declaration order here between A and B. See the linked post to see how it tends to happen, though.
Re your edit; the best approach there is to specify the marshal info separately; a common approach would be to use a custom attribute that takes a numeric order, and decorate the members with that. You can then order based on this number. protobuf-net does something very similar, and frankly I'd suggest using an existing serialization library here:
[ProtoMember(n)]
public int Foo {get;set;}
Where "n" is an integer. In the case of protobuf-net specifically, there is also an API to specify these numbers separately, which is useful when the type is not under your direct control.
For what it's worth, sorting by MetadataToken seemed to work for me.
GetType().GetProperties().OrderBy(x => x.MetadataToken)
Original Article (broken link, just listed here for attribution):
http://www.sebastienmahe.com/v3/seb.blog/2010/03/08/c-reflection-getproperties-kept-in-declaration-order/
I use custom attributes to add the necessary metadata myself (it's used with a REST-like service which consumes and returns CRLF-delimited Key=Value pairs).
First, a custom attribute:
class ParameterOrderAttribute : Attribute
{
public int Order { get; private set; }
public ParameterOrderAttribute(int order)
{
Order = order;
}
}
Then, decorate your classes:
class Response : Message
{
[ParameterOrder(0)]
public int Code { get; set; }
}
class RegionsResponse : Response
{
[ParameterOrder(1)]
public string Regions { get; set; }
}
class HousesResponse : Response
{
public string Houses { get; set; }
}
A handy method for converting a PropertyInfo into a sortable int:
private int PropertyOrder(PropertyInfo propInfo)
{
int output;
var orderAttr = (ParameterOrderAttribute)propInfo.GetCustomAttributes(typeof(ParameterOrderAttribute), true).SingleOrDefault();
output = orderAttr != null ? orderAttr.Order : Int32.MaxValue;
return output;
}
Even better, write it as an extension:
static class PropertyInfoExtensions
{
public static int PropertyOrder(this PropertyInfo propInfo)
{
int output;
var orderAttr = (ParameterOrderAttribute)propInfo.GetCustomAttributes(typeof(ParameterOrderAttribute), true).SingleOrDefault();
output = orderAttr != null ? orderAttr.Order : Int32.MaxValue;
return output;
}
}
Finally you can now query your Type object with:
var props = from p in type.GetProperties()
where p.CanWrite
orderby p.PropertyOrder() ascending
select p;
Relying on an implementation detail that is explicitly documented as being not guaranteed is a recipe for disaster.
The 'recommended approach' would vary depending on what you want to do with these properties once you have them. Just displaying them on the screen? MSDN docs group by member type (property, field, function) and then alphabetize within the groups.
If your message format relies on the order of the fields, then you'd need to either:
Specify the expected order in some sort of message definition. Google protocol buffers works this way, if I recall: the message definition is compiled in that case from a .proto file into a code file for use in whatever language you happen to be working with.
Rely on an order that can be independently generated, e.g. alphabetical order.
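If you go the independently-generated-order route, alphabetical ordering is a one-liner and is stable across runs:
var props = type.GetProperties().OrderBy(p => p.Name, StringComparer.Ordinal).ToArray();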
1:
I've spent the last day troubleshooting a problem in an MVC 3 project, and it all came down to this particular problem. It basically relied on the property order being the same throughout the session, but on some occasions a few of the properties switched places, messing up the site.
First, the code called Type.GetProperties() to define column names in a dynamic jqGrid table, something that in this case occurs once per page_load. Subsequent calls to Type.GetProperties() were made to populate the actual data for the table, and in some rare instances the properties switched places and messed up the presentation completely. In some instances other properties that the site relied upon for a hierarchical subgrid got switched, i.e. you could no longer see the sub data because the ID column contained erroneous data. In other words: yes, this can definitely happen. Beware.
2:
If you need a consistent order throughout the system session, but not necessarily exactly the same order for all sessions, the workaround is dead simple: store the PropertyInfo[] array you get from Type.GetProperties() as a value in the web cache or in a dictionary with the type (or type name) as the cache/dictionary key. Subsequently, whenever you're about to do a Type.GetProperties(), substitute it with HttpRuntime.Cache.Get(Type/Typename) or Dictionary.TryGetValue(Type/Typename, out PropertyInfo[]). That way you'll be guaranteed to always get the order you encountered the first time.
If you always need the same order (i.e. for all system sessions) I suggest you combine the above approach with some type of configuration mechanism, i.e. specify the order in the web.config/app.config, sort the PropertyInfo[] array you get from Type.GetProperties() according to the specified order, and then store it in cache/static dictionary.
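A sketch of that per-type cache (PropertyCache is just an illustrative name), keyed by Type so every later lookup returns the same array, and therefore the same order, that was captured on first access:
using System;
using System.Collections.Concurrent;
using System.Reflection;
static class PropertyCache
{
    private static readonly ConcurrentDictionary<Type, PropertyInfo[]> _cache =
        new ConcurrentDictionary<Type, PropertyInfo[]>();
    public static PropertyInfo[] GetProperties(Type type)
    {
        // Whatever order GetProperties returned the first time is what all callers see.
        return _cache.GetOrAdd(type, t => t.GetProperties());
    }
}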
Since immutability is not fully baked into C# to the degree it is for F#, or fully into the framework (BCL) despite some support in the CLR, what's a fairly complete solution for (im)mutability for C#?
My order of preference is a solution consisting of general patterns/principles compatible with
a single open-source library with few dependencies
a small number of complementary/compatible open-source libraries
something commercial
that
covers Lippert's kinds of immutability
offers decent performance (that's vague I know)
supports serialization
supports cloning/copying (deep/shallow/partial?)
feels natural in scenarios such as DDD, builder patterns, configuration, and threading
provides immutable collections
I'd also like to include patterns you as the community might come up with that don't exactly fit in a framework, such as expressing mutability intent through interfaces, where clients that shouldn't change something and clients that may want to change something can only do so through the interfaces, not the backing class (yes, I know this isn't true immutability, but it's sufficient):
public interface IX
{
int Y{ get; }
ReadOnlyCollection<string> Z { get; }
IMutableX Clone();
}
public interface IMutableX: IX
{
new int Y{ get; set; }
new ICollection<string> Z{ get; } // or IList<string>
}
// generally no one should get ahold of an X directly
internal class X: IMutableX
{
public int Y{ get; set; }
ICollection<string> IMutableX.Z { get { return z; } }
public ReadOnlyCollection<string> Z
{
get { return new ReadOnlyCollection<string>(z); }
}
public IMutableX Clone()
{
var c = (X)MemberwiseClone();
c.z = new List<string>(z);
return c;
}
private IList<string> z = new List<string>();
}
// ...
public void ContriveExample(IX x)
{
if (x.Y != 3 || x.Z.Count < 10) return;
var c = x.Clone();
c.Y++;
c.Z.Clear();
c.Z.Add("Bye, off to another thread");
// ...
}
Would the better solution be to just use F# where you want true immutability?
Use this T4 template I put together to solve this problem. It should generally suit your needs for whatever kinds of immutable objects you need to create.
There's no need to go with generics or use any interfaces. For my purposes, I do not want my immutable classes to be convertible to one another. Why would you? What common traits should they share that mean they should be convertible to one another? Enforcing a code pattern should be the job of a code generator (or better yet, a nice-enough type system that allows you to define general code patterns, which C# unfortunately does not have).
Here's some example output from the template to illustrate the basic concept at play (never mind the types used for the properties):
public sealed partial class CommitPartial
{
public CommitID ID { get; private set; }
public TreeID TreeID { get; private set; }
public string Committer { get; private set; }
public DateTimeOffset DateCommitted { get; private set; }
public string Message { get; private set; }
public CommitPartial(Builder b)
{
this.ID = b.ID;
this.TreeID = b.TreeID;
this.Committer = b.Committer;
this.DateCommitted = b.DateCommitted;
this.Message = b.Message;
}
public sealed class Builder
{
public CommitID ID { get; set; }
public TreeID TreeID { get; set; }
public string Committer { get; set; }
public DateTimeOffset DateCommitted { get; set; }
public string Message { get; set; }
public Builder() { }
public Builder(CommitPartial imm)
{
this.ID = imm.ID;
this.TreeID = imm.TreeID;
this.Committer = imm.Committer;
this.DateCommitted = imm.DateCommitted;
this.Message = imm.Message;
}
public Builder(
CommitID pID
,TreeID pTreeID
,string pCommitter
,DateTimeOffset pDateCommitted
,string pMessage
)
{
this.ID = pID;
this.TreeID = pTreeID;
this.Committer = pCommitter;
this.DateCommitted = pDateCommitted;
this.Message = pMessage;
}
}
public static implicit operator CommitPartial(Builder b)
{
return new CommitPartial(b);
}
}
The basic pattern is to have an immutable class with a nested mutable Builder class that is used to construct instances of the immutable class in a mutable way. The only way to set the immutable class's properties is to construct an ImmutableType.Builder, set it up in the normal mutable way, and then convert it to its containing ImmutableType class with the implicit conversion operator.
You can extend the T4 template to add a default public ctor to the ImmutableType class itself so you can avoid a double allocation if you can set all the properties up-front.
Here's an example usage:
CommitPartial cp = new CommitPartial.Builder() { Message = "Hello", OtherFields = value, ... };
or...
CommitPartial.Builder cpb = new CommitPartial.Builder();
cpb.Message = "Hello";
...
// using the implicit conversion operator:
CommitPartial cp = cpb;
// alternatively, using an explicit cast to invoke the conversion operator:
CommitPartial cp = (CommitPartial)cpb;
Note that the implicit conversion operator from CommitPartial.Builder to CommitPartial is used in the assignment. That's the part that "freezes" the mutable CommitPartial.Builder by constructing a new immutable CommitPartial instance out of it with normal copy semantics.
Personally, I'm not really aware of any third party or previous solutions to this problem, so my apologies if I'm covering old ground. But, if I were going to implement some kind of immutability standard for a project I was working on, I would start with something like this:
public interface ISnapshot<T>
{
T TakeSnapshot();
}
public class Immutable<T> where T : ISnapshot<T>
{
private readonly T _item;
public T Copy { get { return _item.TakeSnapshot(); } }
public Immutable(T item)
{
_item = item.TakeSnapshot();
}
}
This interface would be implemented something like:
public class Customer : ISnapshot<Customer>
{
public string Name { get; set; }
private List<string> _creditCardNumbers = new List<string>();
public List<string> CreditCardNumbers { get { return _creditCardNumbers; } set { _creditCardNumbers = value; } }
public Customer TakeSnapshot()
{
return new Customer() { Name = this.Name, CreditCardNumbers = new List<string>(this.CreditCardNumbers) };
}
}
And client code would be something like:
public void Example()
{
var myCustomer = new Customer() { Name = "Erik" };
var myImmutableCustomer = new Immutable<Customer>(myCustomer);
myCustomer.Name = null;
myCustomer.CreditCardNumbers = null;
//These guys do not throw exceptions
Console.WriteLine(myImmutableCustomer.Copy.Name.Length);
Console.WriteLine("Credit card count: " + myImmutableCustomer.Copy.CreditCardNumbers.Count);
}
The glaring deficiency is that the implementation is only as good as each client's implementation of ISnapshot.TakeSnapshot, but at least it would standardize things and you'd know where to go searching if you had issues related to questionable mutability. The burden would also be on potential implementors to recognize whether or not they could provide snapshot immutability and, if not, to refrain from implementing the interface (i.e. when the class returns a reference to a field that does not support any kind of clone/copy and thus cannot be snapshotted).
As I said, this is a start—how I'd probably start—certainly not an optimal solution or a finished, polished idea. From here, I'd see how my usage evolved and modify this approach accordingly. But, at least here I'd know that I could define how to make something immutable and write unit tests to assure myself that it was.
I realize that this isn't far removed from just implementing an object copy, but it standardizes copy vis a vis immutability. In a code base, you might see some implementors of ICloneable, some copy constructors, and some explicit copy methods, perhaps even in the same class. Defining something like this tells you that the intention is specifically related to immutability—I want a snapshot as opposed to a duplicate object because I happen to want n more of that object. The Immutable<T> class also centralizes the relationship between immutability and copies; if you later want to optimize somehow, like caching the snapshot until dirty, you needn't do it in all implementors of copying logic.
If the goal is to have objects which behave as unshared mutable objects, but which can be shared when doing so would improve efficiency, I would suggest having a private, mutable "fundamental data" type. Although anyone holding a reference to objects of this type would be able to mutate it, no such references would ever escape the assembly. All outside manipulations to the data must be done through wrapper objects, each of which holds two references:
UnsharedVersion: holds the only reference in existence to its internal data object, and is free to modify it.
SharedImmutableVersion: holds a reference to the data object, to which no references exist except in other SharedImmutableVersion fields; such objects may be of a mutable type, but will in practice be immutable because no references will ever be made available to code that would mutate them.
One or both fields may be populated; when both are populated, they should refer to instances with identical data.
If an attempt is made to mutate an object via the wrapper and the UnsharedVersion field is null, a clone of the object in SharedImmutableVersion should be stored in UnsharedVersion. Next, SharedImmutableVersion should be cleared and the object in UnsharedVersion mutated as desired.
If an attempt is made to clone an object, and SharedImmutableVersion is empty, a clone of the object in UnsharedVersion should be stored into SharedImmutableVersion. Next, a new wrapper should be constructed with its UnsharedVersion field empty and its SharedImmutableVersion field populated with the SharedImmutableVersion from the original.
If multiple clones are made of an object, whether directly or indirectly, and the object hasn't been mutated between the construction of those clones, all the clones will refer to the same object instance. Any of those clones may be mutated, however, without affecting the others. Any such mutation would generate a new instance and store it in UnsharedVersion.
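A compressed sketch of that scheme (all names are hypothetical; FundamentalData stands in for the internal mutable type, and its Clone method for whatever copy mechanism you use):
internal sealed class FundamentalData
{
    public int Value; // example field
    public FundamentalData Clone() { return new FundamentalData { Value = Value }; }
}
public sealed class DataWrapper
{
    private FundamentalData _unsharedVersion;        // exclusively owned, safe to mutate
    private FundamentalData _sharedImmutableVersion; // shared, never mutated
    public DataWrapper() { _unsharedVersion = new FundamentalData(); }
    private DataWrapper(FundamentalData shared) { _sharedImmutableVersion = shared; }
    public void SetValue(int value)
    {
        // Mutation: make sure we own an unshared copy, then drop the shared one.
        if (_unsharedVersion == null)
            _unsharedVersion = _sharedImmutableVersion.Clone();
        _sharedImmutableVersion = null;
        _unsharedVersion.Value = value;
    }
    public DataWrapper CloneWrapper()
    {
        // Cloning: make sure a shared immutable copy exists, then hand it out.
        if (_sharedImmutableVersion == null)
            _sharedImmutableVersion = _unsharedVersion.Clone();
        return new DataWrapper(_sharedImmutableVersion);
    }
}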
Bit of an odd one this...
Lets say I have the following class:
public class Wibble
{
public string Foo {get;set;}
public string Bar {get;set;}
}
This class is used in a process where the values of Foo and Bar are updated/changed. However, after a certain point in the process I want to "lock" the instance to prevent any changes from being made. So the question is: how best to do this?
A solution of sorts would be something like this:
public class Wibble
{
private string _foo;
private string _bar;
public bool Locked {get; set;}
public string Foo
{
get
{
return this._foo;
}
set
{
if (this.Locked)
{
throw new ObjectIsLockedException();
}
this._foo = value;
}
}
public string Bar
{
get
{
return this._bar;
}
set
{
if (this.Locked)
{
throw new ObjectIsLockedException();
}
this._bar = value;
}
}
}
However this seems a little inelegant.
The reason for wanting to do this is that I have an application framework that uses externally developed plugins that use the class. The Wibble class is passed into the plugins; however, some of them should never change the contents, while others can. The intention behind this is to catch development integration issues rather than runtime production issues. Having the object "locked" allows us to quickly identify plugins that are not coded as specified.
I've implemented something similar to your locked pattern, but with a read-only interface implemented by a private sub-class containing the actual class data, so that you could pass out what is clearly a read-only view of the data, one which can't be cast back to the original 'mutable version'. The locking was purely to prevent the data provider from making further changes after it had provided an immutable view.
It worked reasonably well, but was a bit awkward, as you've noted. I think it's actually cleaner to have mutable 'Builder' objects which can then generate immutable snapshots. Think StringBuilder and String. This means you duplicate some property code and have to write the routines to do the copying, but it's less awkward, in my opinion, than having a write-lock on every property. It's also evident at compile-time that the snapshot is supposed to be read-only and the user of the Builder cannot modify the snapshots that it created earlier.
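A condensed sketch of the read-only-interface variant described above (names hypothetical): the only mutable type is a private nested class, so consumers of the interface cannot cast their view back to something writable.
public interface IReadOnlyWibble
{
    string Foo { get; }
    string Bar { get; }
}
public class WibbleProvider
{
    // Private implementation: callers only ever see IReadOnlyWibble.
    private sealed class WibbleData : IReadOnlyWibble
    {
        public string Foo { get; set; }
        public string Bar { get; set; }
    }
    private readonly WibbleData _data = new WibbleData();
    public IReadOnlyWibble View { get { return _data; } }
    public void Update(string foo, string bar)
    {
        _data.Foo = foo;
        _data.Bar = bar;
    }
}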
I would recommend this:
An immutable base class:
public class Wibble
{
public string Foo { get; private set; }
public string Bar { get; private set; }
public Wibble(string foo, string bar)
{
this.Foo = foo;
this.Bar = bar;
}
}
Then a mutable class which you can change, and then create an immutable copy when the time comes.
public class MutableWibble
{
public string Foo { get; set; }
public string Bar { get; set; }
public Wibble CreateImmutableWibble()
{
return new Wibble(this.Foo, this.Bar);
}
}
I can't remember the C# syntax exactly, but you get the idea.
Further reading: http://msdn.microsoft.com/en-us/library/acdd6hb7%28v=vs.71%29.aspx
You cannot make an object immutable!
You can follow this post:
How do I create an immutable Class?
But I think you can always change property values by reflection!
Update:
"...Actually, string objects are not that immutable and as far as I know there are at least 2 ways to break string immutability. With pointers as shown by this code example and with some advanced System.Reflection usage..."
http://codebetter.com/patricksmacchia/2008/01/13/immutable-types-understand-them-and-use-them/
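For illustration, even a property with a private setter, such as those on the immutable Wibble class shown in an earlier answer, can be written through reflection in a full-trust scenario (a sketch of why the "immutability" is only skin deep):
var wibble = new Wibble("foo", "bar");
// The property is public but its setter is private; reflection reaches it anyway.
typeof(Wibble).GetProperty("Foo").SetValue(wibble, "changed", null);
Console.WriteLine(wibble.Foo); // prints "changed"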
The other option you have is to use the BinaryFormatter to create a memberwise clone of the object to be "locked". Though you're not locking the object, you're creating a snapshot that can be handed out and later discarded while the original remains unchanged.
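A sketch of that snapshot-by-serialization idea, assuming the type and everything it references are marked [Serializable] (DeepClone is just an illustrative helper name):
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
public static T DeepClone<T>(T source)
{
    // Serialize to a memory stream and deserialize to get a fully independent copy.
    var formatter = new BinaryFormatter();
    using (var stream = new MemoryStream())
    {
        formatter.Serialize(stream, source);
        stream.Position = 0;
        return (T)formatter.Deserialize(stream);
    }
}
Handing DeepClone(original) to a plugin gives it a disposable copy to work on, while the original instance stays untouched.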