How to implement badges? - C#

I've given some thought to implementing badges (just like the badges here on Stack Overflow) and think it would be difficult without Windows services, but I'd like to avoid that if possible.
I came up with a plan to implement some examples:
Autobiographer: Check whether all fields in the profile are filled out.
Commenter: When a comment is made, check whether the number of comments equals 10; if so, award the badge.
Good Answer: When an answer is upvoted, check whether its vote score is 25 or higher.
How could this be implemented in the database? Or would another way be better?

A similar-to-Stack Overflow implementation is actually a lot simpler than you have described, based on bits of info dropped by the team every once in a while.
In the database, you simply store a collection of BadgeID-UserID pairs to track who has what (and a count or a rowID to allow multiple awards for some badges).
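One way to model that pair in C# (the names here are my own assumptions, not from the original post):
public class UserBadge
{
    public int UserId { get; set; }
    public int BadgeId { get; set; }
    public int AwardCount { get; set; } // > 1 for badges that can be awarded multiple times
}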
In the application, there is a worker object for each badge type. The object lives in the cache, and when its cache entry expires, the worker runs its own logic to determine who should get the badge, makes the updates, and then re-inserts itself into the cache:
public abstract class BadgeJob
{
    protected BadgeJob()
    {
        // start cycling on initialization
        Insert();
    }

    // override to provide specific badge logic
    protected abstract void AwardBadges();

    // how long to wait between iterations
    protected abstract TimeSpan Interval { get; }

    private void Callback(string key, object value, CacheItemRemovedReason reason)
    {
        if (reason == CacheItemRemovedReason.Expired)
        {
            this.AwardBadges();
            this.Insert();
        }
    }

    private void Insert()
    {
        HttpRuntime.Cache.Add(this.GetType().ToString(),
            this,
            null,
            Cache.NoAbsoluteExpiration,
            this.Interval,
            CacheItemPriority.Normal,
            this.Callback);
    }
}
And a concrete implementation:
public class CommenterBadge : BadgeJob
{
    public CommenterBadge() : base() { }

    protected override void AwardBadges()
    {
        // select all users who have more than x comments
        // and don't have the commenter badge yet
        // add badges
    }

    // run every 10 minutes
    protected override TimeSpan Interval
    {
        get { return new TimeSpan(0, 10, 0); }
    }
}
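The jobs only need to be constructed once at application startup, since the base constructor schedules the first cache callback. A minimal sketch for Global.asax.cs (the commented-out badge classes are hypothetical further implementations):
protected void Application_Start(object sender, EventArgs e)
{
    // Each constructor calls Insert(), which schedules the first cache
    // expiration and therefore the first AwardBadges() run.
    new CommenterBadge();
    //new AutobiographerBadge();
    //new GoodAnswerBadge();
}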

Jobs. That is the key. Out-of-process jobs that run at set intervals to check the criteria that you mention. I don't think you even need a Windows service unless it requires some external resources to set the levels. I actually think Stack Overflow uses jobs as well for their calculations.

You could use triggers and check upon update or insert; if your conditions are met, add the badge. That would handle it pretty seamlessly. Commence the trigger bashing in 3, 2, 1...

Comments must be stored within the database, right? Then I think there are two main ways to do this.
1) When a user logs in, get a count of their comments. This is obviously not the desired approach, as the count could take a lot of time.
2) When a user posts a comment, you could either do a count then and store it with the user details, or you could use a trigger that executes when a comment is added. The trigger would then get the details of the newly created comment, grab the user id, get a count, and store that against the user in a table of some sort.
I like the idea of a trigger, as your program can return without waiting for SQL Server to do its work.

Use static class in ASP.NET Web API

I am posting this question using an automatic translation.
Please forgive any grammatical errors.
I have built an application using the .NET Framework and ASP.NET Web API.
I have split the virtual path for each customer region within a site running on IIS and copied the same binary to run as separate applications.
The applications run in the same application pool.
Recently, some customers have been making a very large number of requests in a matter of minutes.
(I suspect a glitch in the system on the customer's end).
I am thinking of adding a static class to my current application that keeps track of the number of requests per customer in a given time period and blocks them if the threshold is exceeded.
From past Stack Overflow articles I have found that "information in a static class is lost if the application pool is recycled", but I have determined that this is not a problem in this case.
For my purposes, I only need to be able to retain information for a few minutes.
However, I still have a few questions that I can't find answers to, so I'd like to ask you all a few questions.
Even if the same binary is running in the same application pool, will the static class information be kept separately for different applications?
Will the static constructor of a static class be executed even after the application pool is recycled?
Is there a problem if I reference a field in Global.asax from within a static class?
Is there a problem with referencing the contents of web.config from within a static class?
Attached below is the source of my experimental implementation.
I plan to call the static method "ExcessiveRequestCheck.isExcessiveRequest" of this static class after the Web API receives the request and identifies the user ID.
Any advice would be sincerely appreciated.
P.S.
I understand that this approach does not work well in a load balancing environment. Currently my system only runs on one virtual machine. If you are moving to the cloud or deploying a load balancer, you will probably need a different approach than this one.
public static class ExcessiveRequestCheck
{
    private static Dictionary<string, ExcessiveRequestInfo> dicExcessiveRequestCheckInfo = new Dictionary<string, ExcessiveRequestInfo>();
    private static object initLock = new object();
    private static object dicExcessiveRequestCheckInfoLock = new object();

    // If possible, I want this process to be a static constructor
    public static Dictionary<int, int> dicExcessiveRequestSkipConditions
    {
        get
        {
            lock (initLock)
            {
                if (ExcessiveRequestCheck._dicExcessiveRequestSkipConditions == null)
                {
                    // if possible, I want to set this value from Web.config
                    ExcessiveRequestCheck._dicExcessiveRequestSkipConditions = new Dictionary<int, int>() {
                        { 5, 3 }, { 15, 5 }, { 45, 10 }, { 120, 20 }
                    };
                }
                return ExcessiveRequestCheck._dicExcessiveRequestSkipConditions;
            }
        }
    }
    private static Dictionary<int, int> _dicExcessiveRequestSkipConditions = null;

    public const int BUFFER_CLEAR_MINUTES = 5;

    public static bool isExcessiveRequest(string userId)
    {
        ExcessiveRequestCheck.refreshExcessiveRequestCheckInfo();
        lock (ExcessiveRequestCheck.dicExcessiveRequestCheckInfoLock)
        {
            if (ExcessiveRequestCheck.dicExcessiveRequestCheckInfo.ContainsKey(userId) == false)
            {
                ExcessiveRequestCheck.dicExcessiveRequestCheckInfo.Add(userId, new ExcessiveRequestInfo() { countRequest = 1 });
                return false;
            }

            bool doSkip = false;
            ExcessiveRequestCheck.dicExcessiveRequestCheckInfo[userId].countRequest++;

            foreach (KeyValuePair<int, int> pair in ExcessiveRequestCheck.dicExcessiveRequestSkipConditions)
            {
                if (ExcessiveRequestCheck.dicExcessiveRequestCheckInfo[userId].lastRequesttTime.AddSeconds(pair.Key) > DateTime.Now)
                {
                    if (ExcessiveRequestCheck.dicExcessiveRequestCheckInfo[userId].countRequest > pair.Value)
                    {
                        ExcessiveRequestCheck.dicExcessiveRequestCheckInfo[userId].wasRequestSkip = true;
                        doSkip = true;
                    }
                }
            }

            ExcessiveRequestCheck.dicExcessiveRequestCheckInfo[userId].lastRequesttTime = DateTime.Now;
            return doSkip;
        }
    }

    public static void refreshExcessiveRequestCheckInfo()
    {
        lock (ExcessiveRequestCheck.dicExcessiveRequestCheckInfoLock)
        {
            // ToList() snapshots the keys so entries can be removed while iterating
            var keyList = ExcessiveRequestCheck.dicExcessiveRequestCheckInfo.Keys.ToList();
            foreach (string key in keyList)
            {
                if (ExcessiveRequestCheck.dicExcessiveRequestCheckInfo.ContainsKey(key))
                {
                    var value = ExcessiveRequestCheck.dicExcessiveRequestCheckInfo[key];
                    if (value.lastRequesttTime.AddMinutes(BUFFER_CLEAR_MINUTES) < DateTime.Now)
                    {
                        if (value.wasRequestSkip)
                        {
                            // this NLog instance was created in Global.asax.cs
                            WebApiApplication.logger.Fatal("skip request! user id=" + key);
                        }
                        ExcessiveRequestCheck.dicExcessiveRequestCheckInfo.Remove(key);
                    }
                }
            }
        }
    }
}

class ExcessiveRequestInfo
{
    public DateTime requestStartTime { get; set; } = DateTime.Now;
    public DateTime lastRequesttTime { get; set; } = DateTime.Now;
    public int countRequest { get; set; } = 0;
    public bool wasRequestSkip { get; set; } = false;
}
Your questions
Even if the same binary is running in the same application pool, will the static class information be kept separately for different applications?
Yes, they are separate
Will the static constructor of a static class be executed even after the application pool is recycled?
Yes, the static constructor is guaranteed to be called before any of the static methods are executed
Is there a problem if I reference a field in Global.asax from within a static class?
No more than accessing it from anywhere else
Is there a problem with referencing the contents of web.config from within a static class?
No more than accessing it from anywhere else
Your general approach
DoS
If you're trying to mitigate a denial-of-service or credential-stuffing attack, your approach probably won't work: requests to your service will still result in load being added to your server, and a credential-stuffing attack will fill up your dictionary with millions of entries and possibly cause your application to crash.
If you want to mitigate a denial-of-service attack effectively, you will probably need a more network-oriented solution, such as a smart firewall or a WAF.
Rate limiting
If, on the other hand, you are attempting to throttle specific users' activities (i.e. rate limiting), again, your approach probably isn't the greatest, because it does not support load balancing: your list is held in in-process memory. For per-user rate limiting you will probably need to track user activity in a central data store accessible to all of your servers.
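To illustrate the idea (not the questioner's code): counting per user per fixed time window keeps the keys bounded and maps naturally onto a shared store later. The ConcurrentDictionary below is an in-process stand-in, and the limits and names are assumptions:
using System;
using System.Collections.Concurrent;

public static class FixedWindowLimiter
{
    private const int WindowSeconds = 60;
    private const int MaxRequestsPerWindow = 100;

    private static readonly ConcurrentDictionary<string, int> counts =
        new ConcurrentDictionary<string, int>();

    public static bool IsOverLimit(string userId)
    {
        // The key combines the user and the current window, so counters for
        // old windows simply stop growing (they still need periodic cleanup).
        long window = DateTime.UtcNow.Ticks / (TimeSpan.TicksPerSecond * WindowSeconds);
        string key = userId + ":" + window;
        int count = counts.AddOrUpdate(key, 1, (k, c) => c + 1);
        return count > MaxRequestsPerWindow;
    }
}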
Static constructors
As a general rule, you should try to avoid static constructors, or keep them very simple, as a failure in a static constructor will cause your entire application to fail to start. Be careful!
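For example, the skip-condition table the questioner wants to read from web.config could be initialized lazily instead of in a static constructor. A minimal sketch, assuming an appSettings key and a "seconds:count" format that are not part of the original code:
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;

public static class SkipConditionConfig
{
    private static readonly Lazy<Dictionary<int, int>> conditions =
        new Lazy<Dictionary<int, int>>(Load);

    public static Dictionary<int, int> Conditions
    {
        get { return conditions.Value; }
    }

    private static Dictionary<int, int> Load()
    {
        // e.g. <add key="ExcessiveRequestSkipConditions" value="5:3;15:5;45:10;120:20" />
        string raw = ConfigurationManager.AppSettings["ExcessiveRequestSkipConditions"];
        if (string.IsNullOrEmpty(raw))
        {
            // fall back to the hard-coded defaults from the question
            return new Dictionary<int, int> { { 5, 3 }, { 15, 5 }, { 45, 10 }, { 120, 20 } };
        }

        return raw.Split(';')
                  .Select(pair => pair.Split(':'))
                  .ToDictionary(p => int.Parse(p[0]), p => int.Parse(p[1]));
    }
}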
Even if the same binary is running in the same application pool, will the static class information be kept separately for different applications?
If by different applications you mean separate web sites, then yes, it will be kept separate for each web site you have running in that app pool.
Will the static constructor of a static class be executed even after the application pool is recycled?
Hum, that's a bit confusing. The constructor will only be executed when you touch the class. Since an instance of the class is never created, the usual "initialize/new" step is never used nor triggered. So any method with parameters will run and work fine, including the constructor. I would suggest that there is no "event" that gets triggered on first use; that would not and does not make sense in the context of a static class, since you never create an instance.
So a constructor in the sense of a new instance of the class makes no sense here (I did not even think that was possible with a static class).
There is no "new" event that triggers, so I fail to see how this issue can ever matter.
Is there a problem if I reference a field in Global.asax from within a static class?
Well, values in that class are global to ALL users, but those values can go out of scope at just about any time. As a result, relying on public members is not practical. An app-pool restart will reset those class values, and they can go out of scope at any time besides. They are global to each and every user. So persisting values, or attempting to persist values, in a static class is NOT a viable choice for production code. You can have methods (code) in that class, but any public persisted values really can't be relied upon to survive. I'm not 100% sure, but even general .NET garbage collection would likely cause a reset.
If you need this information to persist, then you can't use static; you have to create an instance of that class and persist it in Session(). And Session is per user.
A static class's public values apply to EVERY user, not just the current user. In effect those values are global to all users, but without any real control or guarantee that the values will persist. You have no control over this, and thus you can't adopt this concept and design for any system of practical value.
Is there a problem with referencing the contents of web.config from within a static class?
Reading values? No problem. Updating or modifying values? A MASSIVELY different issue: if you modify web.config, that will trigger an app pool restart.
So you are free to read any file (text files, XML or whatever), and that includes web.config. As long as you do not modify such files, there are no problems.
The main issue here?
It is simply not practical to assume, or build a design in which, public static class values will persist. You have ZERO control over when such values may go out of scope, and thus such designs can't rely on values persisting.
And of course many web hosting systems are now adopting cloud computing. This means that from one postback to the next you might be using a different server, and again that means such values can't persist in memory: from one postback, or one web service call, to the next you may well be hitting a different server anyway (and they don't share memory). So this suggests, say, using SQL Server-based sessions, or at the very least persisting such values in a database.
In fact, if you need such persisted values and data, then use a database. The WHOLE idea of web-based software is that you do NOT have state between postbacks. And you are attempting to go down an even worse road, hoping on a wing and a prayer that some global values "might", "sort of", "maybe" persist between calls to the web site.
Answer:
You really can't do this with any realm of reliability.
So most of your questions don't really matter. What matters is that these values are supposed to persist, and you can't rely on such a design. If you need some persisted values, then you have to adopt a system and design that supports that concept (ViewState, cookies, or Session()).
Now, I suppose you can give this a try and then come back with a detailed report on how your experience turned out. But there are too many pitfalls, and without any code or system control over persisting values in memory, I don't think I would go down this road.
In web land it makes next to no sense to have public variables that you attempt to persist in a static class. You can have code, you can have cool methods, you can use Session(). But the concept of persisting values in a static class is a design choice that does not make sense and can't be relied upon.
Web software is assumed to be stateless, and that is VERY much the assumption you have to make in regard to a static class, or in fact general use of such code.

Connect systems with events

Using the Entity-Component-System pattern, I want to connect some systems with events, so some systems shouldn't run in a loop; they should just run on demand.
Given the example of a Health system and a Death system, the Death system should only run when a component's health drops below 1.
I thought about having two types of systems. The first type is a periodic system; this runs once per frame, for example a Render or Movement system. The other type is an event-based system, as mentioned before: a connection between Health and Death.
First I created a basic interface used by both system types.
internal interface ISystem
{
    List<Guid> EntityCache { get; } // only relevant entities get stored in here
    ComponentRequirements ComponentRequirements { get; } // the required components for this system

    void InitComponentRequirements();
    void InitComponentPools(EntityManager entityManager);
    void UpdateCacheEntities(); // update all entities from the cache
    void UpdateCacheEntity(Guid cacheEntityId); // update a single entity from the cache
}
Further I created the interfaces
internal interface IReactiveSystem : ISystem
{
    // event based
}
and
internal interface IPeriodicSystem : ISystem
{
    // runs in a loop
}
but I'm not sure if they will be necessary. There is no problem using
foreach (ISystem system in entityManager.Systems)
{
    system.UpdateCacheEntities();
}
but I don't want to run a system if not needed.
There are two types of events, a ChangeEvent and an ExecuteEvent. The first gets triggered when a value of a component has changed. The second one gets triggered when something should be done with a specific entity.
If you need or want to, you can have a look at the EntityManager
https://pastebin.com/NnfBc0N9
the ComponentRequirements
https://pastebin.com/xt3YGVSv
and the usage of the ECS
https://pastebin.com/Yuze72xf
An example System would be something like this
internal class HealthSystem : IReactiveSystem
{
    public HealthSystem(EntityManager entityManager)
    {
        InitComponentRequirements();
        InitComponentPools(entityManager);
    }

    private Dictionary<Guid, HealthComponent> healthComponentPool;

    public List<Guid> EntityCache { get; } = new List<Guid>();
    public ComponentRequirements ComponentRequirements { get; } = new ComponentRequirements();

    public void InitComponentRequirements()
    {
        ComponentRequirements.AddRequiredType<HealthComponent>();
    }

    public void InitComponentPools(EntityManager entityManager)
    {
        healthComponentPool = entityManager.GetComponentPoolByType<HealthComponent>();
    }

    public void UpdateCacheEntities()
    {
        for (int i = 0; i < EntityCache.Count; i++)
        {
            UpdateCacheEntity(EntityCache[i]);
        }
    }

    public void UpdateCacheEntity(Guid cacheEntityId)
    {
        HealthComponent healthComponent = healthComponentPool[cacheEntityId];
        healthComponent.Value += 10; // just some tests
        // update UI
    }
}
How can I create ChangeEvents and ExecuteEvents for the different systems?
EDIT
Is there a way to add event delegates to the components to run a specific system for this entity on change if a change event is listening or on demand if an execute event is listening?
By mentioning ChangeEvent and ExecuteEvent I just mean event delegates.
Currently I could do something like this
internal class HealthSystem : IReactiveSystem
{
    // … other stuff

    IReactiveSystem deathSystem = entityManager.GetSystem<Death>(); // get a system by its type

    public void UpdateCacheEntity(Guid cacheEntityId)
    {
        // change the Health component
        // update UI
        if (currentHealth < 1) // call the death system if the entity will be dead
        {
            deathSystem.UpdateCacheEntity(cacheEntityId);
        }
    }
}
But I was hoping to achieve a better architecture by using event delegates to make systems communicate and share data between each other.
I am not an expert on this design pattern, but I have read up on it, and my advice is: try not to forget the real purpose of this pattern. This time I found the Wikipedia article really interesting.
It basically says (at least that is what I understood) that this pattern was "designed" to avoid creating too many dependencies and thereby losing decoupling. Here is an example I took from the article:
Suppose there is a drawing function. This would be a "System" that
iterates through all entities that have both a physical and a visible
component, and draws them. The visible component could typically have
some information about how an entity should look (e.g. human, monster,
sparks flying around, flying arrow), and use the physical component to
know where to draw it. Another system could be collision detection. It
would iterate through all entities that have a physical component, as
it would not care how the entity is drawn. This system would then, for
instance, detect arrows that collide with monsters, and generate an
event when that happens. It should not need to understand what an
arrow is, and what it means when another object is hit by an arrow.
Yet another component could be health data, and a system that manages
health. Health components would be attached to the human and monster
entities, but not to arrow entities. The health management system
would subscribe to the event generated from collisions and update
health accordingly. This system could also now and then iterate
through all entities with the health component, and regenerate health.
I think that you overcomplicated your architecture, losing the advantages that this pattern can give you.
First of all: why do you need the EntityManager? I quote again:
The ECS architecture handles dependencies in a very safe and simple
way. Since components are simple data buckets, they have no
dependencies.
Instead, your systems are constructed with the EntityManager dependency injected:
entityManager.AddSystem(new Movement(entityManager));
The outcome is a relatively complex internal structure to store entities and the associated components.
After fixing this, the question is: how can you "communicate" with the ISystems?
Again, the answer is in the article: the Observer pattern. Essentially each component has a set of attached systems, which are notified every time a certain action occurs.
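A minimal sketch of that observer idea (the names are mine, not from the linked code): the component only exposes an event, and interested systems subscribe to it, so the health logic never needs a direct reference to the death logic.
using System;

public class HealthComponent
{
    private int currentValue;

    public Guid EntityId { get; set; }

    // raised with the entity id and the new health value
    public event Action<Guid, int> ValueChanged;

    public int Value
    {
        get { return currentValue; }
        set
        {
            currentValue = value;
            ValueChanged?.Invoke(EntityId, currentValue);
        }
    }
}

public class DeathSystem
{
    public void Watch(HealthComponent health)
    {
        health.ValueChanged += (entityId, newValue) =>
        {
            if (newValue < 1)
            {
                // mark the entity as dead, remove its components, etc.
            }
        };
    }
}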
From what I'm getting at this, you want to have a repetitive, once-every-tick type of event alongside a once-in-a-year type of event (exaggerated, but clear). You can do this with a delegate callback function, i.e.:
public delegate void Event(object Sender, EventType Type, object EventData);

public event Event OnDeath;
public event Event OnMove;

public void TakeDamage(int a)
{
    Health -= a;
    if (Health < 1)
        OnDeath?.Invoke(this, EventType.PlayerDeath, null);
}

public void ThreadedMovementFunction()
{
    while (true)
    {
        int x, y;
        (x, y) = GetMovementDirection();
        if (x != 0 || y != 0)
            OnMove?.Invoke(this, EventType.PlayerMove, (x, y));
    }
}
You can implement this in an interface, then store the object class and only access the needed stuff like events and so on. But to be honest, I don't quite understand what you're looking for, so if you could elaborate on the exact issue or thing you need to solve, that would be greatly appreciated!

Taking a snapshot of an IObservable<T>

Suppose I have a service:
public interface ICustomersService
{
IObservable<ICustomer> Customers
{
get;
}
}
The implementation of the Customers property starts by grabbing all existing customers and passing them onto the observer, after which it only passes on customers that are added to the system later. Thus, it never completes.
Now suppose I wanted to grab a snapshot (as a List<ICustomer>) of the current customers, ignoring any that may be added in future. How do I do that? Any invocation of ToList() or its kin will block forever because the sequence never completes.
I figured I could write my own extension, so I tried this:
public static class RxExtensions
{
    public static List<T> ToSnapshot<T>(this IObservable<T> @this)
    {
        var list = new List<T>();
        using (@this.Subscribe(x => list.Add(x)));
        return list;
    }
}
This appears to work. For example:
var customers = new ReplaySubject<string>();
// snapshot has nothing in it
var snapshot1 = customers.ToSnapshot();
customers.OnNext("A");
customers.OnNext("B");
// snapshot has just the two customers in it
var snapshot2 = customers.ToSnapshot();
customers.OnNext("C");
// snapshot has three customers in it
var snapshot3 = customers.ToSnapshot();
I realize the current implementation depends on the scheduler being the current thread, otherwise ToSnapshot will likely close its subscription before items are received. However, I suspect I could also include a ToSnapshot override that takes an IScheduler and ensures any items scheduled there are received prior to ending the snapshot.
I can't find this sort of snapshot functionality built into Rx. Am I missing something?
You could try using a timeout on your observable
source.Customers.TakeUntil(DateTime.Now).ToEnumerable();
There are several ways to approach this. I have tried the following with success in commercial projects:
1) A separate method to get an enumerable of current customers as Chris demonstrated.
2) A method to combine a "state of the world" call with a live stream - this was somewhat more involved than Chris's example because in order to ensure no missed data one typically has to start listening to the live stream first, then get the snapshot, then combine the two with de-duping.
I achieved this with a custom Observable.Create implementation that cached the live stream until the history was retrieved and then merged the cache with the history before switching to live.
This returned Customers but wrapped with additional metadata that described the age of the data.
3) Most recently, it's been more useful to me to return IObservable<IEnumerable<Customer>> where the first event is the entire state of the world. The reason this has been more useful is that many systems I work on get updates in batches, and it's often faster to update a UI with an entire batch than item by item. It is otherwise similar to (2) except you can just use a FirstAsync() to get the snapshot you need.
I propose you consider this approach. You can always use a SelectMany(x => x) to flatten a stream of IObservable<IEnumerable<Customer>> to an IObservable<Customer> if you need to.
I'll see if I can dig out an example implementation when I get back to the home office!
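As a rough sketch of approach (3) in the meantime, assuming a batches observable whose first event is the full state of the world (the type and method names here are assumptions):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reactive.Linq;
using System.Threading.Tasks;

public static class CustomerSnapshots
{
    // Snapshot: materialize just the first (state-of-the-world) batch.
    public static async Task<List<ICustomer>> GetSnapshotAsync(
        IObservable<IEnumerable<ICustomer>> batches)
    {
        IEnumerable<ICustomer> first = await batches.FirstAsync();
        return first.ToList();
    }

    // Flatten back to a per-customer stream for consumers that want single items.
    public static IObservable<ICustomer> Flatten(
        IObservable<IEnumerable<ICustomer>> batches)
    {
        return batches.SelectMany(batch => batch);
    }
}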
What you've done here is actually pretty nifty. The reason ToSnapshot works is that the underlying implementation of your subscribe logic yields all of the customers to the observer before releasing control flow. Basically, Dispose is called only after the control flow is released, and the control flow is only released after you've yielded all pre-existing contacts.
While this is cool, it's also misleading. The method you've written, ToSnapshot, should really be named something like TakeSynchronousNotifications. The extension makes heavy assumptions about how the underlying observable works, and isn't really in the spirit of Rx.
To make things easier to understand for the consumer, I would expose additional properties which explicitly state what is being returned.
public interface ICustomersService
{
    IEnumerable<ICustomer> ExistingCustomers { get; }
    IObservable<ICustomer> NewCustomers { get; }
    IObservable<ICustomer> Customers { get; }
}

public class CustomerService : ICustomersService
{
    public IEnumerable<ICustomer> ExistingCustomers { get { ... } }
    public IObservable<ICustomer> NewCustomers { get { ... } }

    public IObservable<ICustomer> Customers
    {
        get
        {
            return this.ExistingCustomers.ToObservable().Concat(this.NewCustomers);
        }
    }
}
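Hypothetical usage of the split service (assuming the usual Rx and LINQ usings): the consumer chooses explicitly between the snapshot and the live stream, so nothing ever blocks.
ICustomersService service = new CustomerService();

// snapshot of the customers that exist right now
List<ICustomer> snapshot = service.ExistingCustomers.ToList();

// live updates only
IDisposable live = service.NewCustomers.Subscribe(
    c => Console.WriteLine("new customer: " + c));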
Edit:
Consider the following problem...
50 = x + y. Solve for and evaluate x.
The math just doesn't work unless you know what y is. In this example, y is the "new customers", x is the "existing customers", and 50 is the combination of the two.
By exposing only a combination of the existing and new customers, and not the existing and new customers themselves, you've lost too much data. You need to expose at least x or y to the consumer, otherwise there's no way to solve for the other.

.NET DateTime Precision

Within my .NET domain object I am tracking each state transition. This is done by putting each state that is set into a state history collection, so later on one can see a descending-ordered list to find out which state was changed at what time.
So there is a method like this:
private void SetState(RequestState state)
{
    var stateHistoryItem = new RequestStateHistoryItem(state, this);
    stateHistoryItems.Add(stateHistoryItem);
}
When a new RequestStateHistoryItem is instantiated, the current date is automatically assigned. Like this:
protected IdentificationRequestStateHistoryItem()
{
    timestamp = EntityTimestamp.New();
}
The EntityTimestamp object contains the appropriate user and the created and changed dates.
When listing the state history, I do a descending order with Linq:
public virtual IEnumerable<RequestStateHistoryItem> StateHistoryItems
{
    get { return stateHistoryItems.OrderByDescending(s => s.Timestamp.CreatedOn.Ticks); }
}
Now when a new Request is instantiated the first state Received is set in the constructor SetState(RequestState.Received). Then, without any delay and depending on some conditions, a new state Started is set. After some time (db operations) the state Finished is set.
Now when performing the descending ordering, the Received state always comes AFTER the Started state. When I am debugging slowly, or when I put a System.Threading.Thread.Sleep(1000) before setting the state to Started, the ordering works.
If not, as described above, the Started state's CreatedOn is OLDER than the Received CreatedOn date?!
TimeOfDay {17:04:42.9430318}  FINISHED
Ticks 634019366829430318

TimeOfDay {17:04:39.5376207}  RECEIVED
Ticks 634019366795376207

TimeOfDay {17:04:39.5367815}  STARTED
Ticks 634019366795367815
How can that be? I would understand it if the Received and Started dates were exactly the same, but I don't understand how one can even be BEFORE the other?
I already tried new DateTimePrecise().Now (see the DateTimePrecise class I found in another question). Same result.
Anyone knows what that could be?
Update
public virtual bool Finish()
{
    // when I put the SetState(State.Received) from the constructor into here,
    // the timestamp of finish still is BEFORE received
    SetState(IdentificationRequestState.Received);
    SetState(IdentificationRequestState.Finished);

    // when I put the SetState(State.Received) after Finished,
    // then the Received timestamp is BEFORE Finished
    SetState(IdentificationRequestState.Finished);
    SetState(IdentificationRequestState.Received);

    var match = ...

    if (match != null)
    {
        ...
    }
    else
    {
        ...
    }
}
DateTime.Now is not accurate to the millisecond. It is only updated at larger intervals, something like 30 or 15 milliseconds (which is just the way Windows' internal clock works, IIRC).
System.Diagnostics.Stopwatch is a more accurate way to measure time differences. It also doesn't have the overhead of UTC to local time conversions etc. The DateTimePrecise class uses a combination of DateTime and Stopwatch to give a more accurate time than DateTime.Now does.
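A minimal sketch of the difference (DoFirstThing and DoSecondThing are placeholders): ticks read from a single running Stopwatch never go backwards, unlike two closely spaced DateTime.Now reads, which can resolve to the same or an out-of-order value.
var sw = System.Diagnostics.Stopwatch.StartNew();

DoFirstThing();
long t1 = sw.ElapsedTicks;

DoSecondThing();
long t2 = sw.ElapsedTicks; // t2 >= t1 is guaranteed for the same Stopwatch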
You are retrieving the timestamp at an undetermined time before you add it to your collection.
The delay between retrieving it and adding it to the collection is variable - for example your thread may be pre-empted by the scheduler after getting the timestamp and before adding to the collection.
If you want strict ordering, you need to use synchronisation, something like the following every time you instantiate a history item:
lock (syncLock)
{
    // Timestamp is generated here...
    var stateHistoryItem = new RequestStateHistoryItem(state, this);

    // ... but an indeterminate time can pass before ...
    ...

    // ... it's added to the collection here.
    stateHistoryItems.Add(stateHistoryItem);
}
Have you tried setting both the Received and Started timestamps via the same approach (i.e. moving the Received stamp out of the constructor and setting it via a property or method to match how the Started status is set)?
I know it doesn't explain why, but constructors are somewhat special in the runtime. .NET constructors are designed to execute as fast as possible, so it wouldn't surprise me if there are some side effects of the focus on performance.

How to get a free entry in a C# dictionary

I am running a server, and I would like to have a users dictionary, and give each user a specific number.
Dictionary<int,ServerSideUser> users = new Dictionary<int,ServerSideUser>();
The key represents the user on the server, so when people send messages to that user, they send them to this number. I might as well have used the user's IP address, but that's not such a good idea.
I need to allocate such a number for each user, and I'm really not sure how to do so. Someone suggested something like
Enumerable.Range(int.MinValue, int.MaxValue)
.Except(users.Select(x => x.Key)).First();
but I really don't think it's the optimal way.
Also, I have the same problem with a List (or LinkedList) somewhere else.
Any ideas?
If the size of the "number" doesn't matter, take a Guid, it will always be unique and non-guessable.
If you want a dictionary that uses an arbitrary, ordered integer key, you may also be able to use a List<ServerSideUser>, in which the list index serves as the key.
Is there a specific reason you need to use a Dictionary?
Using a List<> or similar data structure definitely has limitations. Because of concurrency issues, you wouldn't want to remove users from the list at all, except when cycling the server. Otherwise, you might have a scenario in which user 255 sends a message to user 1024, who disconnects and is replaced by a new user 1024. New user 1024 then receives the message intended for old user 1024.
If you want to be able to manage the memory footprint of the user list, many of the other approaches here work; Will's answer is particularly good if you want to use ints rather than Guids.
Why don't you keep track of the current maximum number and increment that number by one every time a new user is added?
Another option: Use a factory to generate ServerSideUser instances, which assigns a new, unique ID to each user.
In this example, the factory is the class itself. You cannot instantiate the class directly; you must get a new instance by calling the static Create method on the type. It increments the ID generator and creates a new instance with this new id. There are many ways to do this in a thread-safe way; I'm doing it here in a rudimentary 1.1-compatible way (C# pseudocode that may actually compile):
public class ServerSideUser
{
    // user id
    public int Id { get; private set; }

    // private constructors
    private ServerSideUser() { }
    private ServerSideUser(int id) { Id = id; }

    // lock object for generating an id
    private static object _idgenLock = new Object();
    private static int _currentId = 0; // or whatever

    // retrieves the next id; thread safe
    private static int CurrentId
    {
        get { lock (_idgenLock) { _currentId += 1; return _currentId; } }
    }

    public static ServerSideUser Create()
    {
        return new ServerSideUser(CurrentId);
    }
}
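Usage would then be along these lines, with users being the dictionary from the question:
ServerSideUser user = ServerSideUser.Create(); // hands out the next unique id, thread safe
users.Add(user.Id, user);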
I suggest a combination of your approach and an incremental one.
Since your data is in memory, it is enough to have an identifier of type int.
Keep a variable for the next user id and a linked list of free identifiers, as sketched below.
When a new user is added, use an id from the free list. If the list is empty, use the variable and increment it.
When a user is removed, add its identifier to the free list.
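A minimal sketch of that scheme (names are my own): reuse released ids first, otherwise hand out the next fresh one.
using System.Collections.Generic;

public class IdAllocator
{
    private int nextId = 1;
    private readonly LinkedList<int> freeIds = new LinkedList<int>();
    private readonly object sync = new object();

    public int Allocate()
    {
        lock (sync)
        {
            if (freeIds.Count > 0)
            {
                int id = freeIds.First.Value;
                freeIds.RemoveFirst();
                return id;
            }
            return nextId++;
        }
    }

    public void Release(int id)
    {
        lock (sync)
        {
            freeIds.AddLast(id);
        }
    }
}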
P.S. Consider using a database.
First of all, I'd second the GUID suggestion. Secondly, I'd assume that you're persisting the user information on the server somehow, and that somehow is likely a database. If this is the case, why not let the database pick a unique ID for each user via a primary key? Maybe it's not the best choice for what you're trying to do here, but this is the kind of problem that databases have been handling for years, so why re-invent?
I think it depends on how you define the "uniqueness" of the clients.
For example, if you have two different clients from the same machine, do you consider them two clients or one?
I recommend you use a long value representing the time of connection establishment, like "hhmmss", or you can even include milliseconds.
Why not just start from 1 and count upwards?
lock (dict)
{
    int newId = dict.Count + 1;
    dict[newId] = new User();
}
If you're really concerned about half the world's population turning up at your one server, try using longs instead... :-D
Maybe a bit brutal, but could DateTime.Now.Ticks be something for you? As an added bonus, you know when the user was added to your dict.
From the MSDN docs on Ticks...
A single tick represents one hundred nanoseconds or one ten-millionth of a second. There are 10,000 ticks in a millisecond.
The value of this property represents the number of 100-nanosecond intervals that have elapsed since 12:00:00 midnight, January 1, 0001, which represents DateTime.MinValue.
