store data in asp.net with signalr - c#

I am trying to store data in an ASP.NET application with SignalR. To illustrate my problem, here is an example.
I have a hub that informs the controller that a user has connected and sends his id.
public static event EventHandler<IdEventArgs> userConnected;
public void Connected()
{
Debug.WriteLine("Hub Connected Method");
var id = Context.ConnectionId;
userConnected(this, new IdEventArgs(id));
}
Then I have my controller that adds an event handler to the hub
public ActionResult Index()
{
LoadingHub.userConnected += new EventHandler<IdEventArgs>(UserConnected);
return View();
}
And finally my method that should save the obtained data
private void UserConnected(object sender, IdEventArgs e)
{
Debug.WriteLine("User Connected with Id: " + e.Id);
//To do:
//save data
}
I tried saving to Session, but the Session object is null here. I came across user profiles and maybe this would be a good solution - is it possible to create a new profile when a user connects to store the data in, and destroy that profile when he disconnects? Or maybe a completely different approach is more suitable here?

As already mentioned in the comments, Session is not available with SignalR. I'd say you have 2 main options:
use the state feature, as explained here; this way your data will go back and forth over the connection, but you'll have it around as long as the connection is alive (ideal for small payloads and if you don't mind clients accessing that info)
use dependency injection (check here) to pass a service to your hub, whose interface you define as you like (it could be a trivial pair of get/set methods) and whose implementations could be many: from an in-memory static (not necessarily literally) dictionary for dev scenarios up to any kind of persistent store you want to use to provide horizontal scalability (if needed). It's a bit of effort at the beginning, but then it gets very easy and flexible.
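A minimal sketch of the second option, assuming SignalR 2.x and a hypothetical IUserDataStore service (the interface, the in-memory implementation and the constructor injection are illustrative, not part of the question's code):
using System.Collections.Concurrent;
using Microsoft.AspNet.SignalR;

public interface IUserDataStore
{
    void Save(string connectionId, string value);
    string Get(string connectionId);
}

// Simple in-memory implementation for dev scenarios; swap it for a persistent store if you need scale-out.
public class InMemoryUserDataStore : IUserDataStore
{
    private readonly ConcurrentDictionary<string, string> _data = new ConcurrentDictionary<string, string>();
    public void Save(string connectionId, string value) { _data[connectionId] = value; }
    public string Get(string connectionId)
    {
        string value;
        return _data.TryGetValue(connectionId, out value) ? value : null;
    }
}

public class LoadingHub : Hub
{
    private readonly IUserDataStore _store;
    public LoadingHub(IUserDataStore store) { _store = store; }

    public void Connected()
    {
        // Store data keyed by the connection id instead of raising a static event to the controller.
        _store.Save(Context.ConnectionId, "connected");
    }
}
You would register a single InMemoryUserDataStore instance at startup (for example via GlobalHost.DependencyResolver.Register or your IoC container of choice) so all hub instances share it.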

Here is an example of a solution where you create a singleton service that holds the connections for each user. It is easy to modify it to hold any type of data; you just need to inject it into the required service or controller.
public class ConnectionMapping<TConnectionKey>
{
private readonly Dictionary<TConnectionKey, HashSet<string>> _connections =
new Dictionary<TConnectionKey, HashSet<string>>();
public int Count
{
get
{
return _connections.Count;
}
}
public void Add(TConnectionKey key, string connectionId)
{
lock (_connections)
{
HashSet<string> connections;
if (!_connections.TryGetValue(key, out connections))
{
connections = new HashSet<string>();
_connections.Add(key, connections);
}
lock (connections)
{
connections.Add(connectionId);
}
}
}
public IReadOnlyList<string> GetConnections(TConnectionKey key)
{
// Take the same lock as Add/Remove so we never read the dictionary while it is being modified.
lock (_connections)
{
HashSet<string> connections;
if (_connections.TryGetValue(key, out connections))
{
return new List<string>(connections);
}
return new List<string>();
}
}
public void Remove(TConnectionKey key, string connectionId)
{
lock (_connections)
{
HashSet<string> connections;
if (!_connections.TryGetValue(key, out connections))
{
return;
}
lock (connections)
{
connections.Remove(connectionId);
if (connections.Count == 0)
{
_connections.Remove(key);
}
}
}
}
public string ToJson()
{
// Quote each connection id separately so multiple ids per key still produce valid JSON.
var entries = _connections.Select(d =>
string.Format("\"{0}\": [\"{1}\"]", d.Key, string.Join("\",\"", d.Value)));
return "{" + string.Join(",", entries) + "}";
}
}
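One possible way to wire it up (the hub name, the string key type and the use of the authenticated user name are illustrative, not prescribed by the answer) is to keep a single shared instance and update it from the hub's connection events:
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class LoadingHub : Hub
{
    // One shared mapping for the whole application (it could instead be injected as a singleton).
    private static readonly ConnectionMapping<string> _connections = new ConnectionMapping<string>();

    public override Task OnConnected()
    {
        // Assumes authenticated users; otherwise key by Context.ConnectionId or a query-string value.
        _connections.Add(Context.User.Identity.Name, Context.ConnectionId);
        return base.OnConnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        _connections.Remove(Context.User.Identity.Name, Context.ConnectionId);
        return base.OnDisconnected(stopCalled);
    }
}
Anything else that gets the same instance (a controller, a background service) can then call GetConnections(userName) to target that user's connections.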


How to cache the connection string and access it?

I want to cache the connection string and use the cached object throughout my project. I tried the following:
public static void Demo()
{
Hashtable Hashtable = new Hashtable();
Hashtable.Add("WEBConnectionString", ConfigurationManager.ConnectionStrings["WEBConnectionString"].ConnectionString);
HttpContext.Current.Application["CachedValue"] = Hashtable;
}
public static string Method(string key)
{
string result = string.Empty;
Hashtable CachedObject = (Hashtable)HttpContext.Current.Application["CachedValue"];
if (CachedObject != null && CachedObject.ContainsKey(key))
{
result = CachedObject[key].ToString();
}
return result;
}
and I access it like this:
string conString = Utility.Method("WEBConnectionString");
but the CachedObject.ContainsKey(key) check returns false. What am I doing wrong here? Or is there another way to cache the connection string?
This should work (in a somewhat generic way):
public class HttpContextCache
{
public void Remove(string key)
{
HttpContext.Current.Cache.Remove(key);
}
public void Store(string key, object data)
{
HttpContext.Current.Cache.Insert(key, data);
}
public T Retrieve<T>(string key)
{
T itemStored = (T)HttpContext.Current.Cache.Get(key);
if (itemStored == null)
{
itemStored = default(T);
}
return itemStored;
}
}
Anywhere you find appropriate in your code:
// cache the connection string
HttpContextCache cache = new HttpContextCache();
cache.Store("WEBConnectionString", ConfigurationManager.ConnectionStrings["WEBConnectionString"].ConnectionString);
// ...
// get connection string from the cache
HttpContextCache cache = new HttpContextCache();
string conString = cache.Retrieve<string>("WEBConnectionString");
My first thought is why would you cache it? It's configuration data and should be quick enough to fetch every time you need it.
If you really need caching, there are more modern alternatives to HttpContext.Current.Application.
You can use an IoC container as suggested in the comments and configure the value as a singleton instance, though setting up an IoC container just for that purpose seems like overkill. If you have multiple servers and you want to make sure they share the same state, consider using a distributed cache like Redis.
Other alternatives are storing the connection string in a static variable, or using MemoryCache, HttpRuntime.Cache or HttpContext.Current.Cache.
Example using a lazy static variable:
private static Lazy<string> ConnectionString = new Lazy<string>(() => ConfigurationManager.ConnectionStrings["YourConnectionString"].ConnectionString);
// Access the connection string: var connectionString = ConnectionString.Value;
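If you go with one of the cache alternatives mentioned above, a minimal sketch using MemoryCache might look like this (the key name mirrors the question and is otherwise arbitrary):
using System;
using System.Configuration;
using System.Runtime.Caching;

public static class ConnectionStringCache
{
    public static string Get()
    {
        var cached = MemoryCache.Default["WEBConnectionString"] as string;
        if (cached == null)
        {
            cached = ConfigurationManager.ConnectionStrings["WEBConnectionString"].ConnectionString;
            // Cache for an hour; adjust the expiration policy to taste.
            MemoryCache.Default.Set("WEBConnectionString", cached, DateTimeOffset.Now.AddHours(1));
        }
        return cached;
    }
}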

Call code once when controller is first accessed [duplicate]

I have read lots of information about page caching and partial page caching in a MVC application. However, I would like to know how you would cache data.
In my scenario I will be using LINQ to Entities (entity framework). On the first call to GetNames (or whatever the method is) I want to grab the data from the database. I want to save the results in cache and on the second call to use the cached version if it exists.
Can anyone show an example of how this would work, where this should be implemented (model?) and if it would work.
I have seen this done in traditional ASP.NET apps, typically for very static data.
Here's a nice and simple cache helper class/service I use:
using System.Runtime.Caching;
public class InMemoryCache: ICacheService
{
public T GetOrSet<T>(string cacheKey, Func<T> getItemCallback) where T : class
{
T item = MemoryCache.Default.Get(cacheKey) as T;
if (item == null)
{
item = getItemCallback();
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.AddMinutes(10));
}
return item;
}
}
interface ICacheService
{
T GetOrSet<T>(string cacheKey, Func<T> getItemCallback) where T : class;
}
Usage:
cacheProvider.GetOrSet("cache key", (delegate method if cache is empty));
The cache provider will check whether there's anything under that cache key, and if there's not, it will call the delegate method to fetch the data and store it in the cache.
Example:
var products=cacheService.GetOrSet("catalog.products", ()=>productRepository.GetAll())
Reference the System.Web dll in your model and use System.Web.Caching.Cache
public string[] GetNames()
{
string[] names = Cache["names"] as string[];
if(names == null) //not in cache
{
names = DB.GetNames();
Cache["names"] = names;
}
return names;
}
A bit simplified but I guess that would work. This is not MVC specific and I have always used this method for caching data.
I'm referring to TT's post and suggest the following approach:
Reference the System.Web dll in your model and use System.Web.Caching.Cache
public string[] GetNames()
{
var noms = Cache["names"];
if(noms == null)
{
noms = DB.GetNames();
Cache["names"] = noms;
}
return ((string[])noms);
}
You should not return a value re-read from the cache, since you'll never know whether at that specific moment it is still in the cache. Even if you inserted it in the statement before, it might already be gone or might never have been added to the cache - you just don't know.
So you add the data read from the database and return it directly, not re-reading from the cache.
For .NET 4.5+ framework
add reference: System.Runtime.Caching
add using statement:
using System.Runtime.Caching;
public string[] GetNames()
{
var noms = System.Runtime.Caching.MemoryCache.Default["names"];
if(noms == null)
{
noms = DB.GetNames();
System.Runtime.Caching.MemoryCache.Default["names"] = noms;
}
return ((string[])noms);
}
In the .NET Framework 3.5 and earlier versions, ASP.NET provided an in-memory cache implementation in the System.Web.Caching namespace. In previous versions of the .NET Framework, caching was available only in the System.Web namespace and therefore required a dependency on ASP.NET classes. In the .NET Framework 4, the System.Runtime.Caching namespace contains APIs that are designed for both Web and non-Web applications.
More info:
https://msdn.microsoft.com/en-us/library/dd997357(v=vs.110).aspx
https://learn.microsoft.com/en-us/dotnet/framework/performance/caching-in-net-framework-applications
Steve Smith did two great blog posts which demonstrate how to use his CachedRepository pattern in ASP.NET MVC. It uses the repository pattern effectively and allows you to get caching without having to change your existing code.
http://ardalis.com/Introducing-the-CachedRepository-Pattern
http://ardalis.com/building-a-cachedrepository-via-strategy-pattern
In these two posts he shows you how to set up this pattern and also explains why it is useful. By using this pattern you get caching without your existing code seeing any of the caching logic. Essentially you use the cached repository as if it were any other repository.
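The gist of the pattern is a caching decorator that wraps the real repository behind the same interface. A rough sketch (the interface and class names here are hypothetical, not Steve Smith's actual types):
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public interface IAuthorRepository
{
    IList<string> GetNames();
}

public class CachedAuthorRepository : IAuthorRepository
{
    private readonly IAuthorRepository _inner;
    public CachedAuthorRepository(IAuthorRepository inner) { _inner = inner; }

    public IList<string> GetNames()
    {
        // Hit the real repository only on a cache miss.
        var names = MemoryCache.Default["author-names"] as IList<string>;
        if (names == null)
        {
            names = _inner.GetNames();
            MemoryCache.Default.Add("author-names", names, DateTime.Now.AddMinutes(10));
        }
        return names;
    }
}
Callers depend only on IAuthorRepository, so none of your existing code sees the caching logic.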
I have used it in this way and it works for me.
https://msdn.microsoft.com/en-us/library/system.web.caching.cache.add(v=vs.110).aspx
The link above describes the parameters for System.Web.Caching.Cache.Add.
public string GetInfo()
{
string name = string.Empty;
if(System.Web.HttpContext.Current.Cache["KeyName"] == null)
{
name = GetNameMethod();
System.Web.HttpContext.Current.Cache.Add("KeyName", name, null, DateTime.Now.AddMinutes(5), Cache.NoSlidingExpiration, CacheItemPriority.AboveNormal, null);
}
else
{
name = System.Web.HttpContext.Current.Cache["KeyName"] as string;
}
return name;
}
AppFabric Caching is a distributed, in-memory caching technology that stores data in key-value pairs using physical memory across multiple servers. AppFabric provides performance and scalability improvements for .NET Framework applications. Concepts and Architecture
Extending Hrvoje Hudo's answer...
Code:
using System;
using System.Runtime.Caching;
public class InMemoryCache : ICacheService
{
public TValue Get<TValue>(string cacheKey, int durationInMinutes, Func<TValue> getItemCallback) where TValue : class
{
TValue item = MemoryCache.Default.Get(cacheKey) as TValue;
if (item == null)
{
item = getItemCallback();
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.AddMinutes(durationInMinutes));
}
return item;
}
public TValue Get<TValue, TId>(string cacheKeyFormat, TId id, int durationInMinutes, Func<TId, TValue> getItemCallback) where TValue : class
{
string cacheKey = string.Format(cacheKeyFormat, id);
TValue item = MemoryCache.Default.Get(cacheKey) as TValue;
if (item == null)
{
item = getItemCallback(id);
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.AddMinutes(durationInMinutes));
}
return item;
}
}
interface ICacheService
{
TValue Get<TValue>(string cacheKey, int durationInMinutes, Func<TValue> getItemCallback) where TValue : class;
TValue Get<TValue, TId>(string cacheKeyFormat, TId id, int durationInMinutes, Func<TId, TValue> getItemCallback) where TValue : class;
}
Examples
Single item caching (when each item is cached based on its ID because caching the entire catalog for the item type would be too intensive).
Product product = cache.Get("product_{0}", productId, 10, productData.getProductById);
Caching all of something
IEnumerable<Categories> categories = cache.Get("categories", 20, categoryData.getCategories);
Why TId
The second helper is especially nice because most data keys are not composite. Additional methods could be added if you use composite keys often. In this way you avoid doing all sorts of string concatenation or string.Format calls to get the key to pass to the cache helper. It also makes passing the data access method easier because you don't have to pass the ID into the wrapper method... the whole thing becomes very terse and consistent for the majority of use cases.
Here's an improvement to Hrvoje Hudo's answer. This implementation has a couple of key improvements:
Cache keys are created automatically based on the function to update data and the object passed in that specifies dependencies
Pass in time span for any cache duration
Uses a lock for thread safety
Note that this has a dependency on Newtonsoft.Json to serialize the dependsOn object, but that can be easily swapped out for any other serialization method.
ICache.cs
public interface ICache
{
T GetOrSet<T>(Func<T> getItemCallback, object dependsOn, TimeSpan duration) where T : class;
}
InMemoryCache.cs
using System;
using System.Reflection;
using System.Runtime.Caching;
using Newtonsoft.Json;
public class InMemoryCache : ICache
{
private static readonly object CacheLockObject = new object();
public T GetOrSet<T>(Func<T> getItemCallback, object dependsOn, TimeSpan duration) where T : class
{
string cacheKey = GetCacheKey(getItemCallback, dependsOn);
T item = MemoryCache.Default.Get(cacheKey) as T;
if (item == null)
{
lock (CacheLockObject)
{
// Re-check inside the lock so only one thread runs the callback on a cache miss.
item = MemoryCache.Default.Get(cacheKey) as T;
if (item == null)
{
item = getItemCallback();
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.Add(duration));
}
}
}
return item;
}
private string GetCacheKey<T>(Func<T> itemCallback, object dependsOn) where T: class
{
var serializedDependants = JsonConvert.SerializeObject(dependsOn);
// Key on the callback's method (declaring type + name) so different update functions get different cache keys.
var method = itemCallback.Method;
return method.DeclaringType.FullName + "." + method.Name + serializedDependants;
}
}
Usage:
var order = _cache.GetOrSet(
() => _session.Set<Order>().SingleOrDefault(o => o.Id == orderId)
, new { id = orderId }
, new TimeSpan(0, 10, 0)
);
public sealed class CacheManager
{
private static volatile CacheManager instance;
private static object syncRoot = new Object();
private ObjectCache cache = null;
private CacheItemPolicy defaultCacheItemPolicy = null;
private CacheEntryRemovedCallback callback = null;
private bool allowCache = true;
private CacheManager()
{
cache = MemoryCache.Default;
callback = new CacheEntryRemovedCallback(this.CachedItemRemovedCallback);
defaultCacheItemPolicy = new CacheItemPolicy();
defaultCacheItemPolicy.AbsoluteExpiration = DateTime.Now.AddHours(1.0);
defaultCacheItemPolicy.RemovedCallback = callback;
allowCache = StringUtils.Str2Bool(ConfigurationManager.AppSettings["AllowCache"]); ;
}
public static CacheManager Instance
{
get
{
if (instance == null)
{
lock (syncRoot)
{
if (instance == null)
{
instance = new CacheManager();
}
}
}
return instance;
}
}
public IEnumerable GetCache(String Key)
{
if (Key == null || !allowCache)
{
return null;
}
try
{
String Key_ = Key;
if (cache.Contains(Key_))
{
return (IEnumerable)cache.Get(Key_);
}
else
{
return null;
}
}
catch (Exception)
{
return null;
}
}
public void ClearCache(string key)
{
// Remove the entry directly; adding a null value would throw and silently leave the old entry in place.
cache.Remove(key);
}
public bool AddCache(String Key, IEnumerable data, CacheItemPolicy cacheItemPolicy = null)
{
if (!allowCache) return true;
try
{
if (Key == null)
{
return false;
}
if (cacheItemPolicy == null)
{
cacheItemPolicy = defaultCacheItemPolicy;
}
String Key_ = Key;
// Lock on the private sync object; locking on a string is unsafe because strings can be interned and shared.
lock (syncRoot)
{
return cache.Add(Key_, data, cacheItemPolicy);
}
}
catch (Exception)
{
return false;
}
}
private void CachedItemRemovedCallback(CacheEntryRemovedArguments arguments)
{
String strLog = String.Concat("Reason: ", arguments.RemovedReason.ToString(), " | Key-Name: ", arguments.CacheItem.Key, " | Value-Object: ", arguments.CacheItem.Value.ToString());
LogManager.Instance.Info(strLog);
}
}
I use two classes. The first one is the core cache object:
public class Cacher<TValue>
where TValue : class
{
#region Properties
private Func<TValue> _init;
public string Key { get; private set; }
public TValue Value
{
get
{
var item = HttpRuntime.Cache.Get(Key) as TValue;
if (item == null)
{
item = _init();
HttpRuntime.Cache.Insert(Key, item); // use HttpRuntime.Cache consistently; HttpContext.Current can be null outside a request
}
return item;
}
}
#endregion
#region Constructor
public Cacher(string key, Func<TValue> init)
{
Key = key;
_init = init;
}
#endregion
#region Methods
public void Refresh()
{
HttpRuntime.Cache.Remove(Key);
}
#endregion
}
The second one is a list of cache objects:
public static class Caches
{
static Caches()
{
Languages = new Cacher<IEnumerable<Language>>("Languages", () =>
{
using (var context = new WordsContext())
{
return context.Languages.ToList();
}
});
}
public static Cacher<IEnumerable<Language>> Languages { get; private set; }
}
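Usage is then just a matter of reading the Value property, which lazily populates the cache on first access:
// Read (and cache on first access) the list of languages.
var languages = Caches.Languages.Value;

// Force a reload from the database on the next access.
Caches.Languages.Refresh();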
Implementing a singleton for this data-persistence issue can also be a solution, in case you find the previous solutions too complicated:
public class GPDataDictionary
{
private Dictionary<string, object> configDictionary = new Dictionary<string, object>();
/// <summary>
/// Configuration values dictionary
/// </summary>
public Dictionary<string, object> ConfigDictionary
{
get { return configDictionary; }
}
private static GPDataDictionary instance;
public static GPDataDictionary Instance
{
get
{
if (instance == null)
{
instance = new GPDataDictionary();
}
return instance;
}
}
// private constructor
private GPDataDictionary() { }
} // singleton
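Usage might look like this (the key and the value are placeholders):
// Store a value through the singleton and read it back later from anywhere in the application.
GPDataDictionary.Instance.ConfigDictionary["WEBConnectionString"] = "Server=.;Database=Web;Trusted_Connection=True;";
var connectionString = (string)GPDataDictionary.Instance.ConfigDictionary["WEBConnectionString"];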
HttpContext.Current.Cache.Insert("subjectlist", subjectlist);
You can also try the caching built into ASP.NET MVC:
Add the following attribute to the controller method you'd like to cache:
[OutputCache(Duration=10)]
In this case the ActionResult of that action will be cached for 10 seconds.
More on this here
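A minimal sketch of what that looks like on a controller (the controller and action names are made up):
using System;
using System.Web.Mvc;

public class ProductsController : Controller
{
    // The rendered output of this action is cached by ASP.NET MVC output caching for 10 seconds.
    [OutputCache(Duration = 10, VaryByParam = "none")]
    public ActionResult Index()
    {
        ViewBag.LoadedAt = DateTime.Now;
        return View();
    }
}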

SignalR - not using singleton pattern

In my application, I am sending real-time updates to each client based on his subscriptions.
So e.g. if a client is subscribed to items 1,2 & 3 he should only see updates from these items while at the same time if another client is subscribed to items 4,5 & 6 then he should be able to receive real-time updates for those items.
My problem is that as soon as I connect with the second id, SignalR forgets about the old group and starts sending notifications for the new ids only.
I guess it's because there's only one instance of the real-time updater running and it's shared among all connected clients. Is there a way for each connected client to have its own instance of the real-time object?
public class DataHub : Hub
{
private readonly RealTimeData data;
public DataHub() : this(RealTimeData.Instance) { }
public DataHub(RealTimeData rdata)
{
data = rdata; //can I instantiate RealTimeData object here?
}
public void Start(Int64 routerId)
{
data.StartTimer(routerId);
}
}
public class RealTimeData
{
private readonly static Lazy<RealTimeData> _instance = new Lazy<RealTimeData>(() => new RealTimeData(GlobalHost.ConnectionManager.GetHubContext<DataHub>().Clients)); //will avoiding this create a separate instance for each client?
private IHubConnectionContext Clients;
public Timer timer;
private readonly int updateInterval = 1000;
private readonly object updateRecievedDataLock = new object();
private bool updateReceivedData = false;
private Int64 routerId; // referenced in StartTimer and GetDataForAllItems but missing from the original snippet
List<Items> allItems = new List<Items>();
private RealTimeData()
{
}
private RealTimeData(IHubConnectionContext clients)
{
Clients = clients;
}
public static RealTimeData Instance
{
get
{
return _instance.Value;
}
}
public void StartTimer(Int64 routerId)
{
this.routerId = routerId;
timer = new Timer(GetDataForAllItems, null, updateInterval, updateInterval);
}
public void GetDataForAllItems(object state)
{
if (updateReceivedData)
{
return;
}
lock (updateRecievedDataLock)
{
if (!updateReceivedData)
{
updateReceivedData = true;
//get data from database
allItems = Mapper.Instance.GetDataForAllItems(routerId);
updateReceivedData = false;
//send it to the browser for update
BroadcastData(allItems);
}
}
}
}
I think you are attacking the problem from the wrong angle. Instead of polling the DB every second, let the action that saves to the DB publish a message on a message bus that you forward to the clients.
Have a look at this library,
https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy
Demo project
https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy/tree/master/SignalR.EventAggregatorProxy.Demo.MVC4
Disclaimer: I'm the author of the library
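If you prefer to wire this up by hand rather than pull in the library, a rough sketch of the idea (the Item type, the repository and the client-side updateDisplay handler are all assumptions) is to push through the hub context from the save path instead of polling:
public void SaveItem(Item item)
{
    repository.Save(item);

    // Notify only the clients subscribed to this item's group.
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<DataHub>();
    hubContext.Clients.Group(item.Id.ToString()).updateDisplay(item);
}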
Group subscriptions in SignalR v2 are handled using the this.Groups.Add and this.Groups.Remove methods inherited from the Hub base class.
These methods could be used to subscribe to groups 1, 2, 3 or 4, 5, 6 in your example by using something like this:
public class DataHub : Hub
{
public async Task Register(int groupNumber)
{
await this.Groups.Add(this.Context.ConnectionId, groupNumber.ToString());
}
public async Task Unregister(int groupNumber)
{
await this.Groups.Remove(this.Context.ConnectionId, groupNumber.ToString());
}
}
On the client this would be called by passing a group number to the Register/Unregister methods. An example might be:
this._DataHub.Invoke("Register", "1");
From your RealTimeData singleton, or whichever business class publishes the update, you would get GlobalHost.ConnectionManager.GetHubContext<DataHub>() and then use .Clients.Group("1").<clientMethod> to invoke the client-side method.
For example:
var hubContext = GlobalHost.ConnectionManager.GetHubContext<DataHub>();
hubContext.Clients.Group("1").UpdateDisplay("New tweet");
To read up a bit more of this have a look at http://www.asp.net/signalr/overview
I hope this answers the question.

Lock only on an Id

I have a method which needs to run a block of code exclusively, but I want to add this restriction only if it is really required. Depending on an Id value (an Int32) I would be loading/modifying distinct objects, so it doesn't make sense to lock access for all threads. Here's a first attempt at doing this:
private static readonly ConcurrentDictionary<int, Object> LockObjects = new ConcurrentDictionary<int, Object>();
void Method(int Id)
{
lock (LockObjects.GetOrAdd(Id, new Object()))
{
//Do the long running task here - db fetches, changes etc
Object Ref;
LockObjects.TryRemove(Id,out Ref);
}
}
I have my doubts whether this would work - the TryRemove can fail (which will cause the ConcurrentDictionary to keep growing).
A more obvious bug: TryRemove successfully removes the Object while other threads (for the same Id) are still waiting (locked out) on it, and a new thread with the same Id can then come in, add a new Object, and start processing immediately, since no one is waiting on the Object it just added.
Should I be using TPL or some sort of ConcurrentQueue to queue up my tasks instead? What's the simplest solution?
I use a similar approach to lock resources for related items rather than a blanket resource lock... It works perfectly.
You're almost there, but you really don't need to remove the object from the dictionary; just let the next caller with that id take the lock on the same object.
Surely there is a limit to the number of unique ids in your application? What is that limit?
The main semantic issue I see is that an object can be locked without being listed in the collection, because the last line in the lock removes it and a waiting thread can pick it up and lock it.
Change the collection to be a collection of objects that should guard a lock. Do not name it LockedObjects and do not remove the objects from the collection unless you no longer expect the object to be needed.
I always think of this type of object as a key rather than a lock or blocked object; the object itself is not locked, it is a key to locked sequences of code.
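A minimal sketch of that simplification (same ConcurrentDictionary as the question, but nothing is ever removed):
private static readonly ConcurrentDictionary<int, object> KeyedLocks = new ConcurrentDictionary<int, object>();

void Method(int id)
{
    // The lock object for each id lives for the lifetime of the process.
    lock (KeyedLocks.GetOrAdd(id, _ => new object()))
    {
        // Do the long-running, per-id work here: db fetches, changes etc.
    }
}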
I used the following approach: do not lock on the original ID, but compute a small int hash code from it and use that to pick an existing object to lock on. The number of lockers depends on your situation - the more lockers, the lower the probability of a collision.
class ThreadLocker
{
const int DEFAULT_LOCKERS_COUNTER = 997;
int lockersCount;
object[] lockers;
public ThreadLocker(int MaxLockersCount)
{
if (MaxLockersCount < 1) throw new ArgumentOutOfRangeException("MaxLockersCount", MaxLockersCount, "Counter cannot be less than 1");
lockersCount = MaxLockersCount;
lockers = Enumerable.Range(0, lockersCount).Select(_ => new object()).ToArray();
}
public ThreadLocker() : this(DEFAULT_LOCKERS_COUNTER) { }
public object GetLocker(int ObjectID)
{
var idx = (ObjectID % lockersCount + lockersCount) % lockersCount;
return lockers[idx];
}
public object GetLocker(string ObjectID)
{
var hash = ObjectID.GetHashCode();
return GetLocker(hash);
}
public object GetLocker(Guid ObjectID)
{
var hash = ObjectID.GetHashCode();
return GetLocker(hash);
}
}
Usage:
partial class Program
{
static ThreadLocker locker = new ThreadLocker();
static void Main(string[] args)
{
var id = 10;
lock(locker.GetLocker(id))
{
}
}
}
Of course, you can use any hash function to get the corresponding array index.
If you want to use the ID itself and not allow collisions caused by hash codes, you can use the following approach. Maintain a Dictionary of objects and track the number of threads that want to use each ID:
class ThreadLockerByID<T>
{
Dictionary<T, lockerObject<T>> lockers = new Dictionary<T, lockerObject<T>>();
public IDisposable AcquireLock(T ID)
{
lockerObject<T> locker;
lock (lockers)
{
if (lockers.ContainsKey(ID))
{
locker = lockers[ID];
}
else
{
locker = new lockerObject<T>(this, ID);
lockers.Add(ID, locker);
}
locker.counter++;
}
Monitor.Enter(locker);
return locker;
}
protected void ReleaseLock(T ID)
{
lock (lockers)
{
if (!lockers.ContainsKey(ID))
return;
var locker = lockers[ID];
locker.counter--;
if (Monitor.IsEntered(locker))
Monitor.Exit(locker);
if (locker.counter == 0)
lockers.Remove(locker.id);
}
}
class lockerObject<T> : IDisposable
{
readonly ThreadLockerByID<T> parent;
internal readonly T id;
internal int counter = 0;
public lockerObject(ThreadLockerByID<T> Parent, T ID)
{
parent = Parent;
id = ID;
}
public void Dispose()
{
parent.ReleaseLock(id);
}
}
}
Usage:
partial class Program
{
static ThreadLockerByID<int> locker = new ThreadLockerByID<int>();
static void Main(string[] args)
{
var id = 10;
using(locker.AcquireLock(id))
{
}
}
}
There are mini-libraries that do this for you, such as AsyncKeyedLock. I've used it and it saved me a lot of headaches.

.NET Remoting Singleton memory leak, TCP, Marshal by Reference

I am using the simplest example of remoting that I could find, sharing an object between a windows service and a windows forms program (client), running on the same machine.
The service instantiates the object like this:
serviceConfigRemote = new serviceConfigDataRemote();
serverChannel = new TcpServerChannel(9090);
ChannelServices.RegisterChannel(serverChannel, false);
RemotingServices.Marshal(this.serviceConfigRemote, "ServiceConfigData");
The client establishes a connection like this:
TcpClientChannel channel = new TcpClientChannel();
ChannelServices.RegisterChannel(channel, false);
configData = (serviceConfigDataRemote)Activator.GetObject(typeof(serviceConfigDataRemote), "tcp://localhost:9090/ServiceConfigData");
The idea is for the service to be able to make changes to some of the parameters of the object, and for the client to be able to read those changes.
The object itself is:
public sealed class serviceConfigDataRemote : MarshalByRefObject
{
private bool myConnectedFlag;
private bool mySendingFlag;
private bool myUpdateFlag;
private string myClientConfiguration;
static readonly serviceConfigDataRemote instance = new serviceConfigDataRemote();
static serviceConfigDataRemote()
{
}
public serviceConfigDataRemote()
{
myConnectedFlag = false;
mySendingFlag = false;
myUpdateFlag = false;
myClientConfiguration = "";
}
public static serviceConfigDataRemote Instance
{
get
{
return instance;
}
}
public override object InitializeLifetimeService()
{
return (null);
}
public bool Connected
{
get { return myConnectedFlag; }
set { myConnectedFlag = value; }
}
public bool Sending
{
get { return mySendingFlag; }
set { mySendingFlag = value; }
}
public bool CheckForUpdates
{
get{return myUpdateFlag;}
set { myUpdateFlag = value; }
}
public string ClientConfiguration
{
get { return myClientConfiguration; }
set { myClientConfiguration = value; }
}
}
While the service is running by itself, the Mem Usage in Task Manager stays constant, even though the service is continually updating the object with status information. When the client is started, both begin to increase in Mem Usage, and never go down.
This is the problem that I referred to in My Previous Question about finding memory leaks.
It is appearing differently on different machines; some show no memory increases, but the machines that do will reliably reproduce this problem. Running .NET Memory Profiler shows that on the service there is an ever-increasing number of "New instances", with only one or two "Removed", in the Types/Resources tab where Namespace/System is Kernel and Name/Resource is HeapMemory. I'm still trying to learn how to use the Memory Profiler, so I apologize if this is the wrong information; any tip on where else I should be looking would also be appreciated.
This object is instantiated once, with just a couple of parameters to read and write, no file io, no allocating of memory that I can see, and yet my memory usage only appears to go up the moment I start a connection from the client to that object and read its values. Any and all input would be appreciated, as I would like to avoid pulling this code and replacing it with named pipes or similar, but I'm quickly approaching that point as my only option.
Shouldn't where your service instantiates the object,
serviceConfigRemote = new serviceConfigDataRemote();
look like
serviceConfigRemote = serviceConfigDataRemote.Instance;
instead?
At the very least, the way you have it, you're creating two different instances on the server side: one in the static instance member initializer used by the Instance property, and another via the explicit new serviceConfigDataRemote() construction. It may also serve you well to add a private constructor to that class so nothing other than the static initializer can instantiate the singleton.
This may not be the solution to the ever-increasing memory, but it definitely appears to be something of an issue to address.
EDIT:
Here are a couple more tips I found scouring the 'nets:
Add [MTAThread] to the main method of the host service.
RemotingServices.Disconnect(this.serviceConfigRemote); when you're shutting down the host service.
Hope this may assist.
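For the second tip, a rough sketch of the shutdown path in the Windows service might look like this (the field names match the question's snippet; putting it in OnStop is an assumption):
protected override void OnStop()
{
    // Unpublish the remoted singleton so the remoting infrastructure stops tracking it.
    if (serviceConfigRemote != null)
    {
        RemotingServices.Disconnect(serviceConfigRemote);
    }

    ChannelServices.UnregisterChannel(serverChannel);
    base.OnStop();
}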
Have you tried using lazy instantiation on your singleton? It's possible that it doesn't like the way you're instantiating it.
public sealed class serviceConfigDataRemote : MarshalByRefObject
{
private bool myConnectedFlag;
private bool mySendingFlag;
private bool myUpdateFlag;
private string myClientConfiguration;
static volatile serviceConfigDataRemote instance;
static readonly object syncRoot = new object();
static serviceConfigDataRemote()
{
}
public serviceConfigDataRemote()
{
myConnectedFlag = false;
mySendingFlag = false;
myUpdateFlag = false;
myClientConfiguration = "";
}
public static serviceConfigDataRemote Instance
{
get
{
if (instance == null)
{
lock (syncRoot) // lock on a shared object; taking a lock on a brand-new Object() would not synchronize anything
{
if (instance == null)
{
instance = new serviceConfigDataRemote();
}
return instance;
}
}
return instance;
}
}
public override object InitializeLifetimeService()
{
return (null);
}
public bool Connected
{
get { return myConnectedFlag; }
set { myConnectedFlag = value; }
}
public bool Sending
{
get { return mySendingFlag; }
set { mySendingFlag = value; }
}
public bool CheckForUpdates
{
get { return myUpdateFlag; }
set { myUpdateFlag = value; }
}
public string ClientConfiguration
{
get { return myClientConfiguration; }
set { myClientConfiguration = value; }
}
}
Since the only OS you are seeing this bug on is XP, there are a couple of possible issues.
XP has an incoming connection limit of 10 (on Pro) or 5 (on Home), and this could play a part in the issue.
Ensure that all service packs/patches are installed. I know this may be a corny and cliche answer to any problems, but the fact this issue only appears in XP implies it is OS related.
Also, not sure how you're using the service, but Windows XP is a desktop OS, not a server OS. If you intend the service to be a server of some type, you really should be using 2000/2003/2008 etc, especially since it only has issues on XP.
