I understand that a static member will be shared by all users of an ASP.NET website; but in this particular case - that's exactly what I want.
It's a private-use webpage I threw together to facilitate web-based chatting between two users. I wanted to avoid persisting data to a database or a datafile, and thought I could store the last X messages in a static concurrent queue. This seems to work great on my development machine.
I'm very inexperienced with ASP.NET, but in all of the examples I've found, none use this approach. Is this bad practice? Are there 'gotchas' I should be aware of? The alternative, as far as I can see, is to use a database, but I felt that would be more effort and, I'd guess, more resources (I figure my 'buffer' of messages will take about 40 KB of memory and save quite a few trips to the database).
Assuming you make sure the entire thing is thread-safe, that will work.
However, IIS can recycle your AppDomain at any time, so your queue may get blown away when you don't expect it.
Even if IIS didn't flush and restart your AppDomain every now and then, using static variables for this purpose sounds like a smelly hack to me.
The HttpApplicationState class provides access to an application-wide cache you can use to store information.
ASP.NET Application State Overview
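For illustration, here's a minimal sketch of that approach; the "ChatMessages" key name and the 100-message cap are my own choices, not anything from the question:

using System.Collections.Generic;
using System.Web;

public static class ChatLog
{
    private const string Key = "ChatMessages";   // illustrative key name
    private const int MaxMessages = 100;

    public static void Append(HttpApplicationState app, string message)
    {
        app.Lock();                              // serializes writers across requests
        try
        {
            var messages = (List<string>)app[Key] ?? new List<string>();
            messages.Add(message);
            if (messages.Count > MaxMessages)
                messages.RemoveAt(0);            // drop the oldest entry
            app[Key] = messages;
        }
        finally
        {
            app.UnLock();
        }
    }
}

The same AppDomain-recycling caveat applies here, of course; application state lives in memory just like a static field does.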
This is perfectly fine as long as your requirements don't change and you are OK with randomly losing all messages on the server side.
I would slightly refactor the code to provide a "message storage" interface, to simplify testing of the code (with a potential benefit in the future if you decide to make it more complicated/persisted/multi-user), something like the sketch below.
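Something along these lines, for example (names are illustrative, not the poster's actual code):

using System.Collections.Concurrent;
using System.Collections.Generic;

public interface IMessageStore
{
    void Add(string message);
    IList<string> GetRecent();
}

public sealed class StaticMessageStore : IMessageStore
{
    private const int MaxMessages = 100;   // the "last X messages" buffer
    private static readonly ConcurrentQueue<string> Queue =
        new ConcurrentQueue<string>();

    public void Add(string message)
    {
        Queue.Enqueue(message);
        string discarded;
        // Trimming is approximate under concurrency, which is fine for a two-user chat.
        while (Queue.Count > MaxMessages && Queue.TryDequeue(out discarded)) { }
    }

    public IList<string> GetRecent()
    {
        return Queue.ToArray();            // snapshot copy
    }
}

A persisted or multi-user implementation can then be swapped in behind the same interface later.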
Pros of the static storage approach (or HttpApplicationState):
no issues with server-side storage of the messages - fewer privacy concerns. Nothing is stored forever, so you can say whatever you want.
extremely simple implementation.
perfect for IM / phone conversation.
unlikely to have performance problems in the single-server case.
Cons:
messages can be lost. This can be mitigated by storing history on the client (e.g. keeping the messages retrieved by AJAX queries on the same web page).
requires more care if the data is sensitive, more users are involved, or the application shares a process with other code, since static data is visible to everyone. That said, this is not much different from any other storage.
can't be directly migrated to a multiple-server/web-garden scenario - a really unlikely issue for a two-person chat server.
Sure, one gotcha I've seen in the past has been the use of static variables with Web Gardens.
See this SO question:
Web Garden and Static Objects difficult to understand
Note a key point from the discussion:
Static objects are not shared in web gardens/web farms.
Related
I am designing online time-tracking software to be used internally. I am fairly new to C# and .NET, though I have extensive PHP experience.
I am using Windows Forms Authentication, and once the user logs in using that, I create a Timesheet object (my own custom class).
As part of this class, I have a constructor that checks the SQL DB for information (recent entries by this user, user preferences, etc.)
Should I be storing this information in a session? And then checking the session object in the constructor first? That seems the obvious approach, but most examples I've looked at don't make much use of sessions. Is there something I don't know that others do (specifically related to .NET sessions of course)?
EDIT:
I forgot to mention two things:
1. My SQL DB is on another server (though I believe they are both on the same network, so not much of an issue).
2. There are certain constants that the user will not be able to change (only the admin can modify them), such as project tasks. These are used on every page but loaded from the DB only the first time. Should I be storing these in a session? If not, where else? The only other way I can think of is a local flat file that updates each time the table of projects is updated, but that seems like a hack. Am I trying too hard to minimize calls to the DB?
There is a good overview on ASP.NET Session here: ASP.NET Session State.
If you don't have thousands of clients but need "some state" stored server-side, this is very easy to use and works well. In multi-server scenarios it can also be stored in a database without changing a line of your code, just by configuration.
My advice would be not to store "big" values, or full object hierarchies, in there, as storing things in a session can be somewhat costly (if the session is shared among servers in a web farm via a database, for example). If you plan to have only one server, this is not really a problem, but you have to know that you won't be able to move to a multiple-server mode easily.
The worst thing to do is follow the guys who just say "session is bad, whooooo!", don't use it, and eventually rewrite your own system. If you need it, use it :-)
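As a hedge against the "big objects" problem, one pattern is to keep only small, serializable values in session and rebuild the rest on demand. A rough sketch (key and type names are mine, not the poster's):

using System;
using System.Web;

[Serializable]   // needed if you later move to StateServer or SQL Server session mode
public class UserPreferences { /* small fields only */ }

public static class UserPrefsCache
{
    public static UserPreferences GetPreferences(int userId)
    {
        var session = HttpContext.Current.Session;
        var prefs = session["UserPrefs"] as UserPreferences;
        if (prefs == null)
        {
            prefs = LoadPreferencesFromDb(userId);   // one DB round trip per session
            session["UserPrefs"] = prefs;            // small object, cheap to keep
        }
        return prefs;
    }

    private static UserPreferences LoadPreferencesFromDb(int userId)
    {
        // ... SELECT from the preferences table; omitted here.
        return new UserPreferences();
    }
}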
I would shy away from session objects. And actually, I would say look into ASP.NET MVC as well.
The reason I don't use the session is because I feel it can be a crutch for some developers.
I would save all of the information that you would have put into a session to a DB instead. This allows for better metrics tracking, supports Azure (off topic, but worth mentioning), and is cleaner IMO.
ASP developers know session state as a great feature, but one that is somewhat limited. These limitations include:
ASP session state exists in the process that hosts ASP; thus the actions that affect the process also affect session state. When the process is recycled or fails, session state is lost.
Server farm limitations. As users move from server to server in a Web server farm, their session state does not follow them. ASP session state is machine-specific. Each ASP server provides its own session state, and unless the user returns to the same server, the session state is inaccessible.
(Source: http://msdn.microsoft.com/en-us/library/ms972429.aspx)
One of the main problems with Session is, that by default, it is stored in memory. If you have many concurrent users that store data in the session this could easily lead to performance problems.
Another thing is that an application recycle will empty your in-memory session, which could lead to errors.
Of course you can move your session to SQL Server or a StateServer, but then you will lose some performance.
Look into the HttpContext.User (IPrincipal) property. This is where user information is stored in the request.
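For example (a minimal sketch; the "Admin" role name is just an illustration):

using System.Web;

public static class CurrentUser
{
    // Forms Authentication populates HttpContext.User on every request,
    // so the caller's identity is available without touching session state.
    public static string Name
    {
        get { return HttpContext.Current.User.Identity.Name; }
    }

    public static bool IsAdmin
    {
        get { return HttpContext.Current.User.IsInRole("Admin"); }   // example role name
    }
}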
Most people avoid session state simply because people like to avoid state in general. If you can find an algorithm or process which works all the time regardless of the previous state of an object, that process tends to be more foolproof against future maintenance and more easily testable.
I would say for this particular case, store your values in the database and read them from there any time you need that information. Once you have that working, take a look at the performance of the site. If it's performing fine then leave it alone (as this is the simplest case to program). If performance is an issue, look at using the IIS Cache (instead of session) or implementing a system like CQRS.
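For the admin-editable constants mentioned in the question (project tasks and the like), a read-through cache is a natural fit, since every user sees the same values. A sketch under assumed names (the key, timings, and loader are illustrative):

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class ProjectTaskCache
{
    public static IList<string> GetProjectTasks()
    {
        var cache = HttpRuntime.Cache;
        var tasks = cache["ProjectTasks"] as IList<string>;
        if (tasks == null)
        {
            tasks = LoadProjectTasksFromDb();              // hit the DB only on a cache miss
            cache.Insert("ProjectTasks", tasks, null,
                         DateTime.UtcNow.AddMinutes(10),   // refresh every 10 minutes
                         Cache.NoSlidingExpiration);
        }
        return tasks;
    }

    private static IList<string> LoadProjectTasksFromDb()
    {
        // ... SELECT Name FROM ProjectTasks; omitted.
        return new List<string>();
    }
}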
Session State Disadvantage
Session-state variables stay in memory until they are either removed or replaced, and can therefore degrade server performance. Session-state variables that contain blocks of information, such as large datasets, can adversely affect Web-server performance as server load increases. Think about what will happen if you have a significant number of users online simultaneously.
NOTE: I haven't mentioned the advantages because they are straightforward: simple implementation, session-specific events, data persistence, cookieless support, etc.
The core problem with sessions is scalability. If you have a small application, with a small number of users, that will only ever be on one server, then it may be a good route for you to save small amounts of data - maybe just the user ID - to allow quick access to preferences etc.
If you MAY want multiple web servers, or the application MAY grow, then don't use session. And only use it for small pieces of information.
I have used ASP.NET in mostly intranet scenarios and am pretty familiar with it, but for something such as a shopping cart or similar session data there are various possibilities. To name a few:
1) State-Server session
2) SQL Server session
3) Custom database session
4) Cookie
What have you used, and what are your success stories or lessons learnt? What would you recommend? This would obviously make a difference in a large-scale public website, so please comment on your experiences.
I have not mentioned in-proc since in a large-scale app this has no place.
Many thanks
Ali
The biggest lesson I learned was one I already knew in theory, but got to see in practice.
Removing all use of sessions entirely from an application (which does not necessarily mean from all of the site) is something we all know should bring a big improvement to scalability.
What I learnt was just how much of an improvement it could be. By removing the use of sessions, and adding some code to handle what had been handled by them before (each individual point now doing more work than it had, and hence a local performance loss), the overall gain was massive: actions one would measure in many seconds or even a couple of minutes became sub-second, CPU usage became a fraction of what it had been, and the number of machines and amount of RAM went from clearly not enough to cope to a rather over-indulgent amount of hardware.
If sessions cannot be removed entirely (people don't like the way browsers use HTTP authentication, alas), moving much of their use into a few well-defined spots, ideally in a separate application on the server, can have a bigger effect than which session-storage method is used.
In-proc certainly can have a place in a large-scale application; it just requires sticky sessions at the load-balancing level. In fact, the reduced maintenance cost and infrastructure overhead of using in-proc sessions can be considerable. Any enterprise-grade content switch you'd be using in front of your farm would certainly offer such functionality, and it's hard to argue for the cash and manpower of purchasing/configuring/integrating state servers versus just flipping a switch. I am using this in quite large-scale ASP.NET systems with no issues to speak of. RAM is far too cheap to ignore this as an option.
In-proc session (at least when using IIS 6) can recycle at any time and is therefore not very reliable, because sessions end when the server decides, not when the session actually times out. Sessions will also expire when you deploy a new version of the web site, which is not true of server-based session providers. This can potentially give your users a bad experience if you update in the middle of their session.
Using SQL Server is the best option because it is possible to have sessions that never expire. However, the cost of the server, disk space, maintenance, and performance all have to be considered. I was using one on my e-commerce app for several years until we changed providers to one with very little database space. It was a shame that it had to go.
We have been using the state service for about three years now and haven't had any issues. That said, we now have the session timeout set at an hour, and in e-commerce that is probably costing us some business versus the never-expire model.
When I worked for a large company, we used a clustered SQL Server for another application that was more critical to keep online. We had multiple redundancy on every part of the system, including the network cards. Keep in mind that adding a state server or service adds a potential single point of failure for the application, unless you go the clustered route, which is more expensive to maintain.
There was also an issue when we first switched to the SQL-based approach where binary objects couldn't be serialized into session state. I only had a few, and I modified the code so it wouldn't need the binary serialization, so I could get the site online. However, when I went back to fix the serialization issue a few weeks later, it suddenly didn't exist anymore. I am guessing it was fixed in a Windows update.
If you are concerned about security, state server is a no-no. State server performs absolutely no access checks; anybody who is granted access to the TCP port state server uses can access or modify any session state.
In proc is unreliable (and you mentioned that) so that's not to consider.
Cookies aren't really a session-state replacement, since you can't store much data in them.
I vote for database-based storage of some kind (if any is needed at all); it has the best potential to scale.
I work on a big project at a company. We collect data which we get via API methods of the CMS.
Example:
DataSet users = CMS.UserHelper.GetLoggedUser(); // returns a DataSet of users
Now on some pages we need a lot of different data: not just users, but also nodes of the CMS tree, or specific data of a subtree.
So we thought of writing our own "helper class", through which we can later get different data easily.
Example:
MyHelperClass.GetUsers();
MyHelperClass.Objects.GetSingleObject( ID );
Now the problem is our "helper class" is really big, and we'd like to collect different data through it and write them into a typed DataSet. Later we can give a repeater that typed DataSet, which contains data from different tables (data which even comes from the API methods I mentioned before).
The problem is: it is so slow now, even when just loading the page! Does it load or initialize the whole class?
By the way CMS is Kentico if anyone works with it.
I'm tired. I tried the whole night... but it's soooo slow. Please take a look at that architecture.
Maybe you'll find some crimes which are not allowed :S
I hope we get it work faster. Thank you.
(Class diagram: http://img705.imageshack.us/img705/3087/classj.jpg)
Bottlenecks usually come in a few forms:
Slow or flakey network.
Heavy reading/writing to disk, as disk IO is 1000s of times slower than reading or writing to memory.
CPU throttling caused by a long-running or inefficiently implemented algorithm.
Lots of things could affect this, including your database queries and indexes, the number of people accessing your site, lack of memory on your web server, lots of reflection in your code, just plain slow hardware etc. No one here can tell you why your site is slow, you need to profile it.
For what it's worth, you asked a question about your API architecture - from a code point of view, it looks fine. There's nothing wrong with copying fields from one class to another, and the performance penalty incurred by wrapper-class casting from object to Guid or bool is likely to be so tiny that it's negligible.
Since you asked about performance, it's not very clear why you're connecting class architecture to performance. There are really, really tiny micro-optimizations you could apply to your classes which may or may not affect performance - but the four or five nanoseconds you'd gain with those micro-optimizations have already been lost simply by reading this answer. Network latency and DB queries will absolutely dwarf the performance subtleties of your API.
In a comment, you stated "so there is no problem with static classes or a basic mistake of me". Performance-wise, no. From a web-app point of view, probably. In particular, static fields are global and initialized once per AppDomain, not per session - the variables mCurrentCultureCode and mcurrentSiteName sound session-specific, not global to the AppDomain. I'd double-check those to see that your site renders correctly when users with different culture settings access the site at the same time.
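To make the hazard concrete, here's a hypothetical before/after (not the poster's actual code):

using System.Web;

// The hazard: a static field is one value for the whole AppDomain, so two
// concurrent users can overwrite each other's culture.
public static class BadState
{
    public static string CurrentCultureCode;   // shared by ALL requests
}

// Safer: keep per-user values with the request (or in session), not in a static.
public static class RequestState
{
    public static string CurrentCultureCode
    {
        get { return (string)HttpContext.Current.Items["CultureCode"]; }
        set { HttpContext.Current.Items["CultureCode"] = value; }
    }
}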
Are you already using Caching and Session state?
The basic idea is to defer as much of the data loading to these storage mediums as possible and not do it on individual page loads. Caching especially can be useful if you only need to get the data once and want to share it between users and over time.
If you are already doing these things, or can't directly implement them, try deferring as much of this data gathering as possible, opting to short-circuit it and not do the loading up front. If the data is only occasionally used, this can also save you a lot of time in page loads.
I suggest you try to profile your application and see where the bottlenecks are:
Slow load from the DB?
Slow network traffic?
Slow rendering?
Too much traffic for the client?
Profiling should be part of almost every senior programmer's toolbox. Learn it, and you'll have the answers yourself.
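Even before reaching for a full profiler, a crude timing of the suspect calls will often point at the culprit. A throwaway sketch, wrapping the call from the question:

using System.Data;
using System.Diagnostics;

public static class CrudeProfiler
{
    public static DataSet TimedGetLoggedUser()
    {
        Stopwatch sw = Stopwatch.StartNew();
        DataSet users = CMS.UserHelper.GetLoggedUser();   // the call from the question
        sw.Stop();
        Trace.WriteLine("GetLoggedUser took " + sw.ElapsedMilliseconds + " ms");
        return users;
    }
}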
Cheers!
First things first... enable tracing for your application, try to optimize response size and caching, and work with some application and DB profilers... Just by looking at the code, I'm afraid no one will be able to help you much.
At the moment I am working on a project admin application in C# 3.5 on ASP.NET. In order to reduce hits to the database, I'm caching a lot of information using static variables. For example, a list of users is kept in memory in a static class. The class reads in all the information from the database on startup, and will update the database whenever changes are made, but it never needs to read from the database.
The class pings other webservers (if they exist) with updated information at the same time as a write to the database. The pinging mechanism is a Windows service to which the cache object registers using a random available port. It is used for other things as well.
The amount of data isn't all that great. At the moment I'm using it just to cache the users (password hashes, permissions, name, email etc.) It just saves a pile of calls being made to the database.
I was wondering if there are any pitfalls to this method and/or if there are better ways to cache the data?
A pitfall: A static field is scoped per app domain, and increased load will make the server generate more app domains in the pool. This is not necessarily a problem if you only read from the statics, but you will get duplicate data in memory, and you will get a hit every time an app domain is created or recycled.
Better to use the Cache object - it's intended for things like this.
Edit: Turns out I was wrong about AppDomains (as pointed out in comments) - more instances of the Application will be generated under load, but they will all run in the same AppDomain. (But you should still use the Cache object!)
As long as you can expect that the cache will never grow to a size greater than the amount of available memory, it's fine. Also, be sure that there will only be one instance of this application per database, or the caches in the different instances of the app could "fall out of sync."
Where I work, we have a homegrown O/RM, and we do something similar to what you're doing with certain tables which are not expected to grow or change much. So, what you're doing is not unprecedented, and in fact in our system, is tried and true.
Another pitfall you must consider is thread safety. All of your application requests run in the same AppDomain but may come in on different threads. Access to a static variable must account for it being accessed from multiple threads - probably a bit more overhead than you are looking for. The Cache object is better for this purpose.
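If you do stick with a static, the usual pattern is a lazily initialized field guarded by a lock. A rough sketch (the payload type and loader are illustrative):

using System.Collections.Generic;

public static class UserCache
{
    private static readonly object SyncRoot = new object();
    private static volatile Dictionary<string, string> _users;

    public static Dictionary<string, string> Users
    {
        get
        {
            // Double-checked locking: requests run on many threads, so the
            // one-time load must be guarded even though reads dominate.
            if (_users == null)
            {
                lock (SyncRoot)
                {
                    if (_users == null)
                        _users = LoadUsersFromDb();
                }
            }
            return _users;
        }
    }

    private static Dictionary<string, string> LoadUsersFromDb()
    {
        // ... one-time read of the users table; omitted.
        return new Dictionary<string, string>();
    }
}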
Hmmm... the "classic" method would be the application cache, but provided you never update the static variables, or you understand the locking issues if you do, and you understand that they can disappear at any time with an AppDomain restart, then I don't really see the harm in using a static.
I suggest you look into ways of having a distributed cache for your app. You can take a look at NCache or indeXus.Net
The reason I suggested that is because you rolled your own ad-hoc way of updating the information you're caching. Static variables/references are fine, but they don't update/refresh on their own (so you'll have to handle aging yourself), and you seem to have a distributed setup.
I'm developing a web service whose methods will be called from a "dynamic banner" that will show a sort of queue of messages read from a sql server table.
The banner will be under heavy pressure on the home pages of high-traffic sites; every time the banner is loaded, it will call my web service in order to obtain the new queue of messages.
Now: I don't want all this traffic to drive queries to the database every time the banner is loaded, so I'm thinking of using the ASP.NET cache (i.e. HttpRuntime.Cache[cacheKey]) to limit database accesses; I will try to have the cache refresh every minute or so.
Obviously I'll try to keep the messages as small as possible, to limit traffic.
But maybe there are other ways to deal with such a scenario; for example, I could write the latest version of the queue to the file system and have the web service access that file, or something mixing the two approaches...
The solution is a C# web service, ASP.NET 3.5, SQL Server 2000.
Any hint? Other approaches?
Thanks
Andrea
It depends on a lot of things:
If there is little change in the data (think of a backend with a "publish" button, or daily batches), then I would definitely use static files (updated via push from the backend). We used this solution on a couple of large sites and it worked really well.
If the data is small enough, in-memory caching (i.e. the Http Cache) is viable, but beware of locking issues, and also beware that the Http Cache will not work that well under heavy memory load, because items can be expired early if the framework needs memory. I have been bitten by it before! With the above caveats, the Http Cache works quite well.
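One way to soften the early-eviction problem is to mark the entry as not removable under memory pressure. The key name and timings below are illustrative, not from the question:

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class BannerCache
{
    public static void Store(IList<string> messages)
    {
        HttpRuntime.Cache.Insert(
            "BannerMessages",                 // illustrative key
            messages,
            null,                             // no dependency
            DateTime.UtcNow.AddMinutes(1),    // refresh roughly every minute
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,   // survive cache scavenging
            null);                            // no removal callback
    }
}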
I think caching is a reasonable approach and you can take it a step further and add a SQL Dependency to it.
ASP.NET Caching: SQL Cache Dependency With SQL Server 2000
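The gist of that approach, heavily abbreviated (the "BannerDb"/"Messages" names must match a sqlCacheDependency entry in web.config, and the table has to be enabled for change notifications with aspnet_regsql first):

using System.Web;
using System.Web.Caching;

public static class MessageQueueCache
{
    public static void Store(object messageQueue)
    {
        // The cached queue is evicted automatically when the Messages table changes.
        var dependency = new SqlCacheDependency("BannerDb", "Messages");
        HttpRuntime.Cache.Insert("MessageQueue", messageQueue, dependency);
    }
}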
If you go the file route, keep this in mind.
http://petesbloggerama.blogspot.com/2008/02/aspnet-writing-files-vs-application.html
Writing a file is a better solution IMHO - it's served by IIS kernel code, without the huge ASP.NET overhead, and you can copy the file to CDNs later.
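If you do write a file, swap it in atomically so a request never reads a half-written queue. A sketch (paths are illustrative):

using System.IO;

public static class BannerFile
{
    public static void Publish(string json, string path)
    {
        string temp = path + ".tmp";
        File.WriteAllText(temp, json);         // write the new queue beside the old one
        if (File.Exists(path))
            File.Replace(temp, path, null);    // atomic swap on the same NTFS volume
        else
            File.Move(temp, path);
    }
}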
AFAIK dependency caching is not very efficient with SQL Server 2000.
Also, one way to get around the memory limitation mentioned by Skliwz: if you are using this service outside of the normal application, you can isolate it in its own app pool. I have seen this done before, and it helps as well.
Thanks all. As the data is small in size but the underlying tables will change, I think I'll go the HttpCache way: I actually need a way to reduce DB access even while the data is changing (which is the reason for not using a direct SQL dependency, as suggested by #Bloodhound).
I'll do some stress testing before going public, I think.
Thanks again all.
Of course you could (should) also use the caching features in the SixPack library.
Forward (normal) cache, based on HttpCache, which works by putting attributes on your class. Simplest to use, but in some cases you have to wait for the content to actually be fetched from the database.
Pre-fetch cache, built from scratch, which after the first call will start refreshing the cache behind the scenes; in some cases you are guaranteed to get content without waiting.
More info on the SixPack library homepage. Note that the code (especially the forward cache) is load tested.
Here's an example of simple caching:
[Cached]
public class MyTime : ContextBoundObject
{
    [CachedMethod(1)]
    public DateTime Get()
    {
        Console.WriteLine("Get invoked.");
        return DateTime.Now;
    }
}