A mechanism for obtaining one of several resource files - C#

Maybe I've come up with nonsense, but I'm wondering about the following approach.
There are several resource files in a library project called Resources:
Resources.File1
Resources.File2
Resources.File3
I add a class to the Resources project:
public static class Foo {
    static ? GetResource(int n) {
        switch (n) {
            case 1: return Resources.File1;
            case 2: return Resources.File2;
            case 3: return Resources.File3;
        }
    }
}
Of course what I have written is completely wrong, but I think it's obvious what I want to do.

The auto-generated Resources class exposes its underlying ResourceManager. You can simply use it manually:
var data = Resources.ResourceManager.GetObject("File" + n);
Make sure to use the appropriate function: GetString, GetStream etc.
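For example, the question's Foo class then collapses to a thin wrapper. A minimal sketch, assuming the resources are accessed as plain objects and that n is known to be valid:
public static class Foo
{
    public static object GetResource(int n)
    {
        // The generated Resources class exposes its ResourceManager,
        // so a resource can be looked up by its generated name at runtime.
        return Resources.ResourceManager.GetObject("File" + n);
    }
}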


How can this class be designed better?

We have a Web API library that calls into a Business/Service library (where our business logic is located), which in turn calls a data access library (Repository).
We use this type of data transfer object all over the place. It has a "Payers" property that we may have to filter (meaning, manipulate its value). I have implemented that check as shown below, but it feels dirty to me, as I'm calling the same function all over the place. I have thought about either:
Using an attribute filter to handle this or
Making the RequestData a property on the class, and do the filtering in the constructor.
Are there any additional thoughts or design patterns that would let this be designed more efficiently?
public class Example
{
private MyRepository _repo = new MyRepository();
private void FilterRequestData(RequestData data)
{
//will call into another class that may or may not alter RequestData.Payers
}
public List<ReturnData> GetMyDataExample1(RequestData data)
{
FilterRequestData(data);
return _repo.GetMyDataExample1(data);
}
public List<ReturnData> GetMyDataExample2(RequestData data)
{
FilterRequestData(data);
return _repo.GetMyDataExample2(data);
}
public List<ReturnData> GetMyDataExample3(RequestData data)
{
FilterRequestData(data);
return _repo.GetMyDataExample3(data);
}
}
public class RequestData
{
public List<string> Payers { get; set; }
}
One way of dealing with repeated code like that is to use a strategy pattern with a Func (and potentially some generics, depending on your specific case). You could refactor that into separate classes and everything, but the basic idea looks like this:
public class MyRepository
{
internal List<ReturnData> GetMyDataExample1(RequestData arg) { return new List<ReturnData>(); }
internal List<ReturnData> GetMyDataExample2(RequestData arg) { return new List<ReturnData>(); }
internal List<ReturnData> GetMyDataExample3(RequestData arg) { return new List<ReturnData>(); }
}
public class ReturnData { }
public class Example
{
private MyRepository _repo = new MyRepository();
private List<ReturnData> FilterRequestDataAndExecute(RequestData data, Func<RequestData, List<ReturnData>> action)
{
// call into another class that may or may not alter RequestData.Payers
// and then execute the actual code, potentially with some standardized exception management around it
// or logging or anything else really that would otherwise be repeated
return action(data);
}
public List<ReturnData> GetMyDataExample1(RequestData data)
{
// call the shared filtering/logging/exception mgmt/whatever code and pass some additional code to execute
return FilterRequestDataAndExecute(data, _repo.GetMyDataExample1);
}
public List<ReturnData> GetMyDataExample2(RequestData data)
{
// call the shared filtering/logging/exception mgmt/whatever code and pass some additional code to execute
return FilterRequestDataAndExecute(data, _repo.GetMyDataExample2);
}
public List<ReturnData> GetMyDataExample3(RequestData data)
{
// call the shared filtering/logging/exception mgmt/whatever code and pass some additional code to execute
return FilterRequestDataAndExecute(data, _repo.GetMyDataExample3);
}
}
public class RequestData
{
public List<string> Payers { get; set; }
}
This sort of thinking naturally leads to aspect oriented programming.
It's specifically designed to handle cross-cutting concerns (e.g. here, your filter function cuts across your query logic.)
As @dnickless suggests, you can do this in an ad-hoc way by refactoring your calls to remove the duplicated code.
More general solutions exist, such as PostSharp, which gives you a slightly cleaner way of structuring code along aspects. It is proprietary, but I believe the free tier gives you enough to investigate an example like this. At the very least it's interesting to see how it would look in PostSharp, and whether you think it improves the code at all! (It makes heavy use of attributes, which extends your first suggestion.)
(N.B. I'm not actually suggesting installing another library for a simple case like this, but highlighting how these types of problems might be examined in general.)
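For illustration, here is a rough sketch of the attribute-filter idea from the question as an ASP.NET Web API action filter; the attribute name is hypothetical and the actual filtering call is left as a comment:
using System.Web.Http.Controllers;
using System.Web.Http.Filters;
// Hypothetical attribute: runs the payer filtering before any action it decorates
public class FilterPayersAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        foreach (var arg in actionContext.ActionArguments.Values)
        {
            var data = arg as RequestData;
            if (data != null)
            {
                // call into the class that may or may not alter data.Payers
            }
        }
        base.OnActionExecuting(actionContext);
    }
}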

Is it possible to get the calling instance inside a method?

The general reason I want to do this is:
class MovieApiController : ApiController
{
public string CurrentUser {get;set;}
// ...
public string Index()
{
return Resources.GetText("Color");
}
}
class Resources
{
static string GetText(string id)
{
var caller = ??? as MovieApiController;
if (caller != null && caller.CurrentUser == "Bob")
{
return "Red";
}
else
{
return "Blue";
}
}
}
I don't need this to be 100% dependable. It seems like the call stack should have this information, but StackFrame doesn't seem to expose any information about the specific object on which each frame executes.
It is generally a bad idea for a method to try to "sniff" its surroundings, and produce different results based on who is making the call.
A better approach is to make your Resources class aware of whatever it needs to know in order to make its decision, and configure it in a place where all relevant information is known, for example
class MovieApiController : ApiController {
private string currentUser;
private Resources resources;
public string CurrentUser {
get {
return currentUser;
}
set {
currentUser = value;
resources = new Resources(currentUser);
}
}
// ...
public string Index() {
return resources.GetText("Color");
}
}
class Resources {
private string currentUser;
public Resources(string currentUser) {
this.currentUser = currentUser;
}
public string GetText(string id) {
if (currentUser == "Bob") {
return "Red";
} else {
return "Blue";
}
}
}
CurrentUser should be available at HttpContext.Current.User and you can leave your controller out of the resource class.
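A minimal sketch of that suggestion, assuming classic ASP.NET where HttpContext.Current is available:
using System.Web;
static class Resources
{
    public static string GetText(string id)
    {
        // Read the user from the ambient HTTP context rather than from the controller
        var user = HttpContext.Current != null ? HttpContext.Current.User : null;
        return (user != null && user.Identity.Name == "Bob") ? "Red" : "Blue";
    }
}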
It seems like the call stack should have this information,
Why? The call stack indicates what methods were called to get where you are; it does not have any information about instances.
Rethink your parameters by deciding what information the method needs to do its job. Reaching outside of the class (e.g. by using the call stack or taking advantage of static state like HttpContext.Current) limits the re-usability of your code.
From what you've shown, all you need is the current user name (you don't even show where you use the id value). If you want to return different things based on what's passed in, then maybe you need separate methods?
As a side note, the optimizer has a great deal of latitude in reorganizing code to make it more efficient, so there are no guarantees that the call stack even contains what you think it should from the source code.
Short answer - you can't, short of creating a custom controller factory that stores the current controller as a property of the current HttpContext, and even that could prove unpredictable.
But it's really not good for a class to behave differently by attempting to inspect its caller. When a method depends on another class it needs to get the correct behavior by depending on the right class, calling the right method, and passing the right parameters.
So in this case you could:
Have a parameter that you pass to GetText that tells it what it needs to know in order to return the correct string.
Create a more specific version of the Resources class that does what you need.
Or declare
public interface IResources
{
string GetText(string id);
}
And have multiple classes that implement IResources, then use dependency injection to provide the correct implementation to this controller. Ideally that's the best scenario: MovieApiController doesn't know anything about the implementation of IResources. It just knows that there's an instance of IResources that will do what it needs. And the resources class doesn't know anything about what is calling it; it behaves the same no matter what calls it.
That would look like this:
public class MovieApiController : ApiController
{
private readonly IResources _resources;
public MovieApiController(IResources resources)
{
_resources = resources;
}
public string Index()
{
return _resources.GetText("Color");
}
}
Notice how the controller doesn't know anything about the Resources class. It just knows that it has something that implements IResources and it uses it.
If you're using ASP.NET Core then dependency injection is built in. (There's some good reading in there on the general concept.) If you're using anything older then you can still add it in.
http://www.asp.net/mvc/overview/older-versions/hands-on-labs/aspnet-mvc-4-dependency-injection - This has a picture that is worth 1000 words for describing the concept.
http://www.c-sharpcorner.com/UploadFile/dacca2/implement-ioc-using-unity-in-mvc-5/
Some of these recommend understanding "inversion of control" first. You might find it easier to just implement something according to the example without trying to understand it first. The understanding comes when you see what it does.
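For a concrete picture, registering an implementation with ASP.NET Core's built-in container might look like this; UserAwareResources is a hypothetical class implementing IResources:
using Microsoft.Extensions.DependencyInjection;
// In Startup.ConfigureServices: the container will then inject IResources
// into MovieApiController's constructor automatically.
public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<IResources, UserAwareResources>();
    services.AddMvc();
}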

C# Singleton Pattern over Inherited Classes

I'll begin this question by apologizing for the length of the post. To save you some time: my problem is that the class pattern I've got stuck in my head is obviously flawed, and I can't see a good solution.
In a project I'm working on, I need to run algorithms on chunks of data; let's call them DataCache. Sometimes these algorithms return results that themselves need to be cached, and so I devised a scheme.
I have an Algorithm base class that looks like so
abstract class Algorithm<T>
{
protected abstract T ExecuteAlgorithmLogic(DataCache dataCache);
private readonly Dictionary<DataCache, WeakReference> _resultsWeak = new Dictionary<DataCache, WeakReference>();
private readonly Dictionary<DataCache, T> _resultsStrong = new Dictionary<DataCache, T>();
public T ComputeResult(DataCache dataCache, bool save = false)
{
if (_resultsStrong.ContainsKey(dataCache))
return _resultsStrong[dataCache];
if (_resultsWeak.ContainsKey(dataCache))
{
var temp = _resultsWeak[dataCache].Target;
if (temp != null) return (T) temp;
}
var result = ExecuteAlgorithmLogic(dataCache);
_resultsWeak[dataCache] = new WeakReference(result, true);
if (save) _resultsStrong[dataCache] = result;
return result;
}
}
If you call ComputeResult() and provide a DataCache, you can optionally choose to cache the result. Also, if you are lucky, the result might still be there if the GC hasn't collected it yet. The size of each DataCache is in the hundreds of megabytes, and before you ask: there are about 10 arrays in each, holding basic types such as int and float.
My idea here was that an actual algorithm would look something like this:
class ActualAgorithm : Algorithm<SomeType>
{
protected override SomeType ExecuteAlgorithmLogic(DataCache dataCache)
{
//Elves be here
}
}
And I would define tens of .cs files, each for one algorithm. There are two problems with this approach. Firstly, in order for this to work, I need to instantiate my algorithms and keep those instances around (or the results are not cached and the entire point is moot). But then I end up with an unsightly singleton pattern implementation in each derived class. It would look something like this:
class ActualAgorithm : Algorithm<SomeType>
{
protected override SomeType ExecuteAlgorithmLogic(DataCache dataCache)
{
//Elves and dragons be here
}
protected ActualAgorithm(){ }
private static ActualAgorithm _instance;
public static ActualAgorithm Instance
{
get
{
_instance = _instance ?? new ActualAgorithm();
return _instance;
}
}
}
So in each implementation I would have to duplicate the code for the singleton pattern. And secondly, tens of .cs files also sounds like a bit of overkill, since what I'm really after is just a single function returning some results that can be cached for various DataCache objects. Surely there must be a smarter way of doing this, and I would greatly appreciate a nudge in the right direction.
What I meant with my comment was something like this:
abstract class BaseClass<K,T> where T : BaseClass<K,T>, new()
{
private static T _instance;
public static T Instance
{
get
{
_instance = _instance ?? new T();
return _instance;
}
}
}
class ActualClass : BaseClass<int, ActualClass>
{
public ActualClass() {}
}
class Program
{
static void Main(string[] args)
{
Console.WriteLine(ActualClass.Instance.GetType().ToString());
Console.ReadLine();
}
}
The only problem here is that you'll have a public constructor.
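If the public constructor bothers you, one workaround is to drop the new() constraint and create the instance reflectively, which can reach a protected or private parameterless constructor. A sketch:
using System;
abstract class BaseClass<K, T> where T : BaseClass<K, T>
{
    private static T _instance;
    public static T Instance
    {
        get
        {
            // nonPublic: true lets Activator invoke a non-public constructor
            _instance = _instance ?? (T)Activator.CreateInstance(typeof(T), nonPublic: true);
            return _instance;
        }
    }
}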
I refined my previous answer, but as it is rather different from the other approach I proposed, I thought I might just make it another answer. First, we'll need to declare some interfaces:
// Where to find cached data
interface DataRepository {
    void cacheData(Key k, Data d);
    Data retrieveData(Key k, Data input);
}
// If by any chance we need an algorithm somewhere
interface AlgorithmRepository {
    Algorithm getAlgorithm(Key k);
}
// The algorithm that processes data
interface Algorithm {
    void processData(Data input, Data output);
}
Given these interfaces, we can define a basic implementation of the algorithm repository:
class BaseAlgorithmRepository : AlgorithmRepository {
    // The algorithm dictionary
    Dictionary<Key, Algorithm> algorithms = new Dictionary<Key, Algorithm>();
    // On init, we'll build our repository using this function
    public void setAlgorithmForKey(Key k, Algorithm a) {
        algorithms[k] = a;
    }
    // ... implement the other function of the interface
}
Then we can also implement something for the DataRepository:
class BaseDataRepository : DataRepository {
    AlgorithmRepository algorithmRepository;
    Dictionary<Key, Data> cache = new Dictionary<Key, Data>();
    public void cacheData(Key k, Data d) {
        cache[k] = d;
    }
    public Data retrieveData(Key k, Data input) {
        Data d;
        if (!cache.TryGetValue(k, out d)) {
            // Data not found in the cache, so we try to produce it ourselves
            d = new Data();
            Algorithm a = algorithmRepository.getAlgorithm(k);
            a.processData(input, d);
            // This is optional: you could simply throw an exception to say that the
            // data has not been cached and thus the algorithm succession did not
            // produce the necessary data. So instead of the above, you could simply:
            // throw new DataNotCached(k);
            // and thus halt the whole processing
        }
        return d;
    }
}
Finally, we get to implement algorithms:
abstract class BaseAlgorithm : Algorithm {
    protected DataRepository repository;
    public abstract void processData(Data input, Data output);
}
class SampleNoCacheAlgorithm : BaseAlgorithm {
    public override void processData(Data input, Data output) {
        // do something with input to compute output
    }
}
class SampleCacheProducerAlgorithm : BaseAlgorithm {
    // assumes Key can be created from a string
    public static readonly Key KEY = "SampleCacheProducerAlgorithm.myKey";
    public override void processData(Data input, Data output) {
        // do something with input to compute output
        // then call repository.cacheData(KEY, output);
    }
}
class SampleCacheConsumerAlgorithm : BaseAlgorithm {
    public override void processData(Data input, Data output) {
        // Data tmp = repository.retrieveData(SampleCacheProducerAlgorithm.KEY, input);
        // do something with input and tmp to compute output
    }
}
To build on this, I think you could also define some special kinds of algorithms that are in fact just composites of other algorithms but also implement the Algorithm interface. An example could be:
class AlgorithmChain : BaseAlgorithm {
    List<Algorithm> chain;
    public override void processData(Data input, Data output) {
        Data currentIn = input;
        Data currentOut = null;
        foreach (Algorithm a in chain) {
            currentOut = new Data();
            a.processData(currentIn, currentOut);
            currentIn = currentOut;
        }
        // in practice, copy currentOut's contents into output here, since
        // reassigning the parameter itself is not visible to the caller
        output = currentOut;
    }
}
One addition I would make to this is a DataPool, which would allow you to reuse existing but unused Data objects in order to avoid allocating lots of memory each time you create a new Data().
I think this set of classes could give a good basis for your whole architecture, with the additional benefit that it does not employ any singletons (references to the concerned objects are always passed explicitly). That also means that implementing dummy classes for unit tests would be rather easy.
You could have your algorithms independent of their results:
class Engine<T> {
    Dictionary<AlgorithmKey, Algorithm<T>> algorithms;
    Dictionary<AlgorithmKey, Data> algorithmsResultCache;
    public T processData(Data input) { ... }
}
interface Algorithm<T> {
    bool doesResultNeedToBeCached();
    T processData(Data input);
}
Then your Engine is responsible for instantiating the algorithms, which are only pieces of code where the input is data and the output is either null or some data. Each algorithm can say whether its result needs to be cached or not.
In order to refine my answer, I think you should give some more details about how the algorithms are to be run (is there an order, is it user-adjustable, do we know in advance which algorithms will be run, ...).
Can you register your algorithm instances with a combined repository/factory of algorithms that'll keep references to them? The repository could be a singleton, and if you give the repository control of algorithm instantiation, you could use it to ensure that only one instance of each exists.
public class AlgorithmRepository
{
//... use boilerplate singleton code
public void CreateAlgorithm(Algorithms algorithm)
{
//... add to some internal hash or map, checking that it hasn't been created already
//... Algorithms is just an enum telling it which to create (clunky factory
// implementation)
}
public void ComputeResult(Algorithms algorithm, DataCache datacache)
{
// Can lazy load algorithms here and make CreateAlgorithm private ..
CreateAlgorithm(algorithm);
//... compute and return.
}
}
This said, having a separate class (and .cs file) for each algorithm makes sense to me. You could break with convention and have multiple algorithm classes in a single .cs file if they're lightweight, which makes things easier to manage if you're worried about the number of files -- there are worse things to do. FWIW I'd just put up with the number of files ...
Typically when you create a singleton class you don't want to inherit from it. When you do, you lose some of the goodness of the singleton pattern (and what I hear from the pattern zealots is that an angel loses its wings every time you do something like this). But let's be pragmatic... sometimes you do what you have to do.
Regardless I do not think combining generics and inheritance will work in this instance anyway.
You indicated the number of algorithms will be in the tens (not hundreds). As long as this is the case, I would create a dictionary keyed off of System.Type and store references to your methods as the values of the dictionary. In this case I used
Func<DataCache, object> as the dictionary value signature.
When the class is instantiated for the first time, register all your available algorithms in the dictionary. At runtime, when the class needs to execute an algorithm for type T, it will get the Type of T and look up the algorithm in the dictionary.
If the code for the algorithms will be relatively involved I would suggest splitting them off into partial classes just to keep your code readable.
public sealed partial class Algorithm<T>
{
    private static object ExecuteForSomeType(DataCache dataCache)
    {
        return new SomeType();
    }
}
public sealed partial class Algorithm<T>
{
    private static object ExecuteForSomeOtherType(DataCache dataCache)
    {
        return new SomeOtherType();
    }
}
public sealed partial class Algorithm<T>
{
    private readonly Dictionary<System.Type, Func<DataCache, object>> _algorithms = new Dictionary<System.Type, Func<DataCache, object>>();
    private readonly Dictionary<DataCache, WeakReference> _resultsWeak = new Dictionary<DataCache, WeakReference>();
    private readonly Dictionary<DataCache, T> _resultsStrong = new Dictionary<DataCache, T>();
    private Algorithm() { }
    private static Algorithm<T> _instance;
    public static Algorithm<T> Instance
    {
        get
        {
            if (_instance == null)
            {
                _instance = new Algorithm<T>();
                _instance._algorithms.Add(typeof(SomeType), ExecuteForSomeType);
                _instance._algorithms.Add(typeof(SomeOtherType), ExecuteForSomeOtherType);
            }
            return _instance;
        }
    }
    public T ComputeResult(DataCache dataCache, bool save = false)
    {
        if (_resultsStrong.ContainsKey(dataCache))
            return _resultsStrong[dataCache];
        if (_resultsWeak.ContainsKey(dataCache))
        {
            var temp = _resultsWeak[dataCache].Target;
            if (temp != null) return (T)temp;
        }
        // look up and invoke the algorithm registered for T
        T returnValue = (T)_algorithms[typeof(T)](dataCache);
        _resultsWeak[dataCache] = new WeakReference(returnValue, true);
        if (save) _resultsStrong[dataCache] = returnValue;
        return returnValue;
    }
}
First off, I'd suggest renaming DataCache to something like DataInput for clarity, because it's easy to confuse it with the objects that really act as caches for the results (_resultsWeak and _resultsStrong).
Concerning the need for these caches to remain in memory for future use, maybe you should consider placing them in one of the wider scopes that exist in a .NET application beyond object scope, Application or Session for example.
You could also use an AlgorithmLocator (see ServiceLocator pattern) as a single point of access to all Algorithms to get rid of the singleton logic duplication in each Algorithm.
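A minimal sketch of such a locator; the names are hypothetical and thread safety is ignored:
using System;
using System.Collections.Generic;
// Single point of access that owns one instance per algorithm type
static class AlgorithmLocator
{
    private static readonly Dictionary<Type, object> _instances = new Dictionary<Type, object>();
    public static TAlgo Get<TAlgo>() where TAlgo : new()
    {
        object algo;
        if (!_instances.TryGetValue(typeof(TAlgo), out algo))
        {
            algo = new TAlgo();
            _instances[typeof(TAlgo)] = algo;
        }
        return (TAlgo)algo;
    }
}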
Other than that, I find your solution to be a nice one globally. Whether or not it is overkill will basically depend on the homogeneity of your algorithms. If they all have the same way of caching data, of returning their results... it will be a great benefit to have all that logic factored out in a single place. But we lack context here to judge.
Encapsulating the caching logic in a specific object held by the Algorithm (a CachingStrategy?) would also be an alternative to inheriting it, but it is maybe a bit awkward, since the caching object would have to access the cache before and after calculation, and would need to be able to trigger the algorithm calculation itself and have access to the results.
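A rough sketch of that alternative, reusing the weak/strong dictionaries from the question (the CachingStrategy name and API are hypothetical):
using System;
using System.Collections.Generic;
// Strategy object that owns the caches and falls back to the supplied
// computation when nothing usable is cached
class CachingStrategy<T>
{
    private readonly Dictionary<DataCache, WeakReference> _weak = new Dictionary<DataCache, WeakReference>();
    private readonly Dictionary<DataCache, T> _strong = new Dictionary<DataCache, T>();
    public T GetOrCompute(DataCache key, Func<DataCache, T> compute, bool save = false)
    {
        T result;
        if (_strong.TryGetValue(key, out result)) return result;
        WeakReference wr;
        if (_weak.TryGetValue(key, out wr) && wr.Target != null) return (T)wr.Target;
        result = compute(key);
        _weak[key] = new WeakReference(result, true);
        if (save) _strong[key] = result;
        return result;
    }
}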
[Edit] If you're concerned with having one .cs file per algorithm, you can always group all Algorithm classes pertaining to a particular T in the same file.

Initializing constructor from stored cache in C#

I'm not sure exactly how to describe this question, but here goes. I've got a class hierarchy of objects that are mapped to a SQLite database. I've already written all the non-trivial code that communicates between the .NET objects and the database.
I've got a base interface as follows:
public interface IBackendObject
{
void Read(int id);
void Refresh();
void Save();
void Delete();
}
These are the basic CRUD operations for any object. I've then implemented a base class that encapsulates much of the functionality.
public abstract class ABackendObject : IBackendObject
{
protected ABackendObject() { } // constructor used to instantiate new objects
protected ABackendObject(int id) { Read(id); } // constructor used to load object
public void Read(int id) { ... } // implemented here is the DB code
}
Now, finally, I have my concrete child objects, each of which have their own tables in the database:
public class ChildObject : ABackendObject
{
public ChildObject() : base() { }
public ChildObject(int id) : base(id) { }
}
This works fine for all my purposes so far. The child has several callback methods that are used by the base class to instantiate the data properly.
I now want to make this slightly more efficient. For example, in the following code:
public void SomeFunction1()
{
ChildObject obj = new ChildObject(1);
obj.Property1 = "blah!";
obj.Save();
}
public void SomeFunction2()
{
ChildObject obj = new ChildObject(1);
obj.Property2 = "blah!";
obj.Save();
}
In this case, I'll be constructing two completely separate in-memory instances, and depending on the order in which SomeFunction1 and SomeFunction2 are called, either Property1 or Property2 may not be saved. What I want to achieve is a way for both these instantiations to somehow point to the same memory location -- I don't think that will be possible if I'm using the "new" keyword, so I was looking for hints as to how to proceed.
Ideally, I'd want to store a cache of all loaded objects in my ABackendObject class and return memory references to the already loaded objects when requested, or load the object from memory if it doesn't already exist and add it to the cache. I've got a lot of code that is already using this framework, so I'm of course going to have to change a lot of stuff to get this working, but I just wanted some tips as to how to proceed.
Thanks!
If you want to store a "cache" of loaded objects, you could easily just have each type maintain a Dictionary<int, IBackendObject> which holds loaded objects, keyed by their ID.
Instead of using a constructor, build a factory method that checks the cache:
public abstract class ABackendObject<T> where T : class
{
public T LoadFromDB(int id) {
T obj = this.CheckCache(id);
if (obj == null)
{
obj = this.Read(id); // Load the object
this.SaveToCache(id, obj);
}
return obj;
}
}
If you make your base class generic, and Read virtual, you should be able to provide most of this functionality without much code duplication.
What you want is an object factory. Make the ChildObject constructor private, then write a static method ChildObject.Create(int index) which returns a ChildObject, but which internally ensures that different calls with the same index return the same object. For simple cases, a simple static hash of index => object will be sufficient.
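A minimal sketch of that factory, ignoring thread safety and cache eviction:
using System.Collections.Generic;
public class ChildObject : ABackendObject
{
    // One shared instance per id
    private static readonly Dictionary<int, ChildObject> _cache = new Dictionary<int, ChildObject>();
    private ChildObject(int id) : base(id) { }
    public static ChildObject Create(int id)
    {
        ChildObject obj;
        if (!_cache.TryGetValue(id, out obj))
        {
            obj = new ChildObject(id);
            _cache[id] = obj;
        }
        return obj;
    }
}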
If you're using .NET Framework 4, you may want to have a look at the System.Runtime.Caching namespace, which gives you a pretty powerful cache architecture.
http://msdn.microsoft.com/en-us/library/system.runtime.caching.aspx
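For example, a sketch using MemoryCache from that namespace; the key naming and expiration policy here are arbitrary:
using System;
using System.Runtime.Caching;
// Fetch from the cache, loading from the database only on a miss
var cache = MemoryCache.Default;
var obj = cache.Get("ChildObject:1") as ChildObject;
if (obj == null)
{
    obj = new ChildObject(1);
    cache.Set("ChildObject:1", obj,
        new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(10) });
}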
Sounds perfect for a reference count like this...
#region Begin/End Update
int refcount = 0;
ChildObject record;
protected ChildObject ActiveRecord
{
get
{
return record;
}
set
{
record = value;
}
}
public void BeginUpdate()
{
if (refcount == 0)
{
ActiveRecord = new ChildObject(1);
}
Interlocked.Increment(ref refcount);
}
public void EndUpdate()
{
int count = Interlocked.Decrement(ref refcount);
if (count == 0)
{
ActiveRecord.Save();
}
}
#endregion
#region operations
public void SomeFunction1()
{
BeginUpdate();
try
{
ActiveRecord.Property1 = "blah!";
}
finally
{
EndUpdate();
}
}
public void SomeFunction2()
{
BeginUpdate();
try
{
ActiveRecord.Property2 = "blah!";
}
finally
{
EndUpdate();
}
}
public void SomeFunction3()
{
BeginUpdate();
try
{
SomeFunction1();
SomeFunction2();
}
finally
{
EndUpdate();
}
}
#endregion
I think you're on the right track, more or less. You can either create a factory which creates your child objects (and can track "live" instances), or you can keep track of instances which have been saved, so that when you call your Save method it recognizes that your first instance of ChildObject is the same as your second instance of ChildObject and does a deep copy of the data from the second instance over to the first. Both of these are fairly non-trivial from a coding standpoint, and both probably involve overriding the equality methods on your entities. I tend to think that using the first approach would be less likely to cause errors.
One additional option would be to use an existing Object-Relational mapping package like NHibernate or Entity Framework to do your mapping between objects and your database. I know NHibernate supports SQLite, and in my experience it tends to be the one that requires the least amount of change to your entity structures. Going that route, you get the benefit of the ORM layer tracking instances for you (and generating SQL for you), plus you would probably get some more advanced features your current data access code may not have. The downside is that these frameworks tend to have a learning curve, and depending on which you go with there could be a not insignificant impact on the rest of your code. So it would be worth weighing the benefits against the cost of learning the framework and converting your code to use the API.

C# thread safety of global configuration settings

In a C# app, suppose I have a single global class that contains some configuration items, like so:
public class Options
{
int myConfigInt;
string myConfigString;
..etc.
}
static Options GlobalOptions;
The members of this class will be used across different threads:
Thread1: GlobalOptions.myConfigString = blah;
while
Thread2: string thingie = GlobalOptions.myConfigString;
Using a lock for access to the GlobalOptions object would also unnecessarily block when two threads are accessing different members, but on the other hand creating a sync object for every member seems a bit over the top too.
Also, using a lock on the global options would make my code less nice, I think;
if I have to write
string stringiwanttouse;
lock(GlobalOptions)
{
stringiwanttouse = GlobalOptions.myConfigString;
}
everywhere (and is this thread-safe, or is stringiwanttouse now just a pointer to myConfigString? Yeah, I'm new to C#...) instead of
string stringiwanttouse = GlobalOptions.myConfigString;
it makes the code look horrible.
So...
What is the best (and simplest!) way to ensure thread-safety ?
You could wrap the field in question (myConfigString in this case) in a property, and have code in the get/set that takes a lock (via the lock statement, i.e. Monitor, or a Mutex). Then accessing the property only locks that single field, and doesn't lock the whole class.
Edit: adding code
private static object obj = new object(); // only used for locking
public static string MyConfigString {
get {
lock(obj)
{
return myConfigString;
}
}
set {
lock(obj)
{
myConfigString = value;
}
}
}
The following was written before the OP's edit:
public static class Options
{
private static int _myConfigInt;
private static string _myConfigString;
private static volatile bool _initialized = false; // volatile so the double-checked locking below is safe
private static object _locker = new object();
private static void InitializeIfNeeded()
{
if (!_initialized) {
lock (_locker) {
if (!_initialized) {
ReadConfiguration();
_initialized = true;
}
}
}
}
private static void ReadConfiguration() { /* ... */ }
public static int MyConfigInt {
get {
InitializeIfNeeded();
return _myConfigInt;
}
}
public static string MyConfigString {
get {
InitializeIfNeeded();
return _myConfigString;
}
}
//..etc.
}
After that edit, I can say that you should do something like the above, and only set configuration in one place - the configuration class. That way, it will be the only class modifying the configuration at runtime, and only when a configuration option is to be retrieved.
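On .NET 4 and later, Lazy<T> gives you the same one-time, thread-safe initialization without hand-rolled double-checked locking. A sketch; the Configuration type here is hypothetical:
using System;
public static class Options
{
    // Lazy<T> is thread-safe by default (ExecutionAndPublication)
    private static readonly Lazy<Configuration> _config =
        new Lazy<Configuration>(ReadConfiguration);
    public static int MyConfigInt { get { return _config.Value.MyConfigInt; } }
    public static string MyConfigString { get { return _config.Value.MyConfigString; } }
    // Immutable snapshot of all settings, read once on first access
    private sealed class Configuration
    {
        public int MyConfigInt;
        public string MyConfigString;
    }
    private static Configuration ReadConfiguration()
    {
        // ... read from file/db; stubbed here
        return new Configuration { MyConfigInt = 0, MyConfigString = "" };
    }
}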
Your configurations may be 'global', but they should not be exposed as a global variable. If configurations don't change, they should be used to construct the objects that need the information - either manually or through a factory object. If they can change, then an object that watches the configuration file/database/whatever and implements the Observer pattern should be used.
Global variables (even those that happen to be a class instance) are a Bad Thing™
What do you mean by thread safety here? It's not the global object that needs to be thread-safe; it is the accessing code. If two threads write to a member variable at nearly the same instant, one of them will "win", but is that a problem? If your client code depends on the global value staying constant until it is done with some unit of processing, then you will need to create a synchronization object for each property that needs to be locked. There isn't any great way around that. You could just cache a local copy of the value to avoid problems, but the applicability of that fix will depend on your circumstances. Also, I wouldn't create a sync object for each property by default, but rather as you realize you will need it.
