Can I have two Cassandra sessions in a C# backend application?

I need to talk to two Cassandra clusters from my backend in the stable environment. However, in beta I have only one cluster and its config is duplicated, so during startup we create two sessions.
Is it ok to have two sessions for one cluster?
Also we have multiple keyspaces, but only one connection for them. Should I make a new session for each keyspace?
I see that the session should be a singleton, but I think that's a recommendation rather than a hard requirement.

The recommendation to only create and reuse a single session per application is because session creation is very expensive.
Each time a session is created, the driver has to go through its standard initialisation process and open connection pools to every node in the cluster. Apart from the significant increase in memory usage, this will slow down your application for no benefit.
It makes no sense to create a session for each keyspace since a session can handle thousands of requests concurrently. All you need to do is specify the keyspace when referring to a table in a query, for example:
SELECT ... FROM keyspace_name.table_name WHERE ...
As you pointed out, there is no technical barrier that prevents your application from creating multiple sessions. But there is also no benefit to doing so, just a lot of disadvantages so we don't recommend it. Cheers!
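For illustration, a minimal sketch with the DataStax C# driver (the contact point and the keyspace/table names are assumptions):

using System.Linq;
using Cassandra;

var cluster = Cluster.Builder()
    .AddContactPoint("127.0.0.1") // assumption: a local node
    .Build();

// Create one session at startup and reuse it everywhere.
var session = cluster.Connect();

// The same session can serve queries against any keyspace.
var user = session.Execute("SELECT * FROM keyspace_a.users LIMIT 1").FirstOrDefault();
var order = session.Execute("SELECT * FROM keyspace_b.orders LIMIT 1").FirstOrDefault();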

Related

ASP.NET Core distributed caching

I am currently using MemoryCache _cache = new MemoryCache(new MemoryCacheOptions()); for caching data from the database that doesn't change often, but does change.
And on create/update/delete of that data I refresh the cache.
This works fine, but the problem is that in production we will have a few nodes, so when the method for creating a record is called, for instance, the cache will be refreshed only on that node, not on the others, and they will have stale data.
My question is: can I somehow fix this using MemoryCache, or do I need to do something else, and if so, what are the possible solutions?
I think what you are looking for is distributed caching.
Using the IDistributedCache interface you can use either Redis or SQL Server, and it supplies basic Get/Set/Remove methods. Changes made on one node will be available to other nodes.
Using Redis is a great way of sharing session-type data between servers in a load-balanced environment; SQL Server does not seem to be a great fit given that you seem to be caching to avoid DB calls.
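As a sketch, wiring up the Redis-backed implementation and a simple read-through helper might look like this (package Microsoft.Extensions.Caching.StackExchangeRedis; the connection string, key scheme, and LoadFromDatabaseAsync are assumptions):

// In Program.cs / Startup.ConfigureServices:
services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "localhost:6379"; // assumption: a local Redis instance
});

// In a class that receives IDistributedCache via dependency injection:
public async Task<string> GetCustomerJsonAsync(IDistributedCache cache, int id)
{
    string key = "customer:" + id;
    string json = await cache.GetStringAsync(key);
    if (json == null)
    {
        json = await LoadFromDatabaseAsync(id); // hypothetical DB call
        await cache.SetStringAsync(key, json, new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
        });
    }
    return json;
}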
It might also be worth considering whether you are actually complicating things by caching in the first place. When you have a single application you see the benefit, as keeping the records in application memory saves a request over the network, but in a load-balanced scenario you have to compare retrieving those records from a distributed cache vs retrieving them from the database.
If the data is just an in memory copy of a relatively small database table, then there is probably not a lot to choose performance wise between the two. If the data is based on a complicated expensive query then the cache is the way to go.
If you are making hundreds of requests a minute for the data, then any network request may be too much, but consider what the consequences are of the data being a little stale. For example, if you update a record and the new record is not available immediately on every server, does your application break? Or does the change just occur in a more phased way? In that case you could keep your in-process memory cache and just use a shorter time-to-live.
If you really need every change to propagate to every node straight away then you could consider using a library like CacheManager in conjunction with Redis, which can combine an in-memory cache with synchronisation against a remote cache.
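For illustration, a two-layer CacheManager setup along those lines might look roughly like this (a sketch based on CacheManager's builder API; the endpoint and handle names are assumptions):

using CacheManager.Core;

var cache = CacheFactory.Build<string>(settings => settings
    .WithSystemRuntimeCacheHandle("inProc")       // layer 1: in-memory
    .And
    .WithRedisConfiguration("redis", config => config
        .WithAllowAdmin()
        .WithDatabase(0)
        .WithEndpoint("localhost", 6379))         // assumption: a local Redis instance
    .WithRedisBackplane("redis")                  // propagates changes between nodes
    .WithRedisCacheHandle("redis", true));        // layer 2: distributed

cache.Put("mykey", "myvalue"); // becomes visible to other nodes via the backplane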
Somewhat dated question, but maybe still useful: I agree with what ste-fu said, well explained.
I'll only add that, on top of CacheManager, you may want to take a look at FusionCache ⚡🦥, which I recently released.
On top of supporting an optional distributed 2nd layer transparently managed for you, it also has some other nice features, like an optimization that prevents multiple concurrent factories for the same cache key from being executed (less load on the source database), a fail-safe mechanism, and advanced timeouts with background factory completion.
If you give it a chance, please let me know what you think.
/shameless-plug

ASP.NET MVC shared counter best practices

My ASP.NET MVC 4 project is using EF5 code-first, and some of the domain objects contain non-persisted counter properties which are updated according to incoming requests. These requests come very frequently, and a scenario in which multiple request sessions modify these counters is quite probable.
My question is, is there a best practice, not necessarily related to ASP.NET or to EF, to handle this scenario? I think (but I'm not sure) that for the sake of this discussion, we can treat the domain objects as simple POCOs (which they are).
EDIT: As requested, following is the actual scenario:
The system is a subscriber and content management system. Peer servers are issuing requests which my system either authorizes or denies. Authorized requests result in opening sessions in peer servers. When a session is closed in the peer server, it issues a request notifying that the session has been closed.
My system needs to provide statistics - for example, the number of currently open sessions for each content item (one of the domain entities) - and provide real-time figures as well as per-minute, hourly, daily, weekly etc. figures.
These figures can't be extracted by means of querying the database due to performance issues, so I've decided to implement the basic counters in-memory, persist them every minute to the database and take the hourly, daily etc. figures from there.
The issue above results from the fact that each peer server request updates these "counters".
I hope it's clearer now.
Sounds like your scenario still requires a solid persistence strategy.
Your counter objects can be persisted to the HttpRuntime.Cache.
Dan Watson has an exceptional writeup here:
http://www.dotnetguy.co.uk/post/2010/03/29/c-httpruntime-simple-cache/
Be sure to use CacheItemPriority.NotRemovable to ensure that it maintains state during memory reclamation. The cache is maintained within the scope of the app domain. You can retrieve and update counters (it's thread-safe!) in the cache and query their status from, presumably, a stats page or some other option. However, if the data needs to be persisted beyond the scope of the runtime, then the strategy you're already using is sufficient.
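A sketch of that approach (System.Web.Caching; the Counter holder and key scheme are assumptions). Note that while the cache itself is thread-safe, a read-modify-write increment is not atomic, hence the Interlocked call:

using System.Threading;
using System.Web;
using System.Web.Caching;

public class Counter { public int Value; }

public static class Counters
{
    private static readonly object Sync = new object();

    public static void Increment(string key)
    {
        var counter = HttpRuntime.Cache[key] as Counter;
        if (counter == null)
        {
            lock (Sync) // double-checked creation of the cache entry
            {
                counter = HttpRuntime.Cache[key] as Counter;
                if (counter == null)
                {
                    counter = new Counter();
                    HttpRuntime.Cache.Insert(key, counter, null,
                        Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration,
                        CacheItemPriority.NotRemovable, null);
                }
            }
        }
        Interlocked.Increment(ref counter.Value);
    }
}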
Actually, I think you have no need to worry about performance too much before you have enough information from tests and profiling tools.
But if you're working with EF, then you're dealing with a DataContext, which is an implementation of the Unit of Work pattern described by Martin Fowler in his book. The main idea of this pattern is to reduce the number of requests to the database by operating on the data in memory as much as possible until you commit all your changes. So my short advice would be to use your EF entities in the standard way, but not commit changes every time the data updates; instead, commit at intervals, for example after every 100 changes, storing the data between requests in Session, Application state, Cache, or somewhere else. The only thing you should take care of is that you use the proper DataContext object each time, and don't forget to dispose of it when you no longer need it.
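Applied to the counter scenario, a hedged sketch of that idea: accumulate changes in memory and commit them in one unit of work at intervals (StatsContext, ContentStats, and OpenSessionCount are hypothetical names):

using System.Collections.Concurrent;

public static class SessionCounters
{
    private static readonly ConcurrentDictionary<int, int> OpenSessions =
        new ConcurrentDictionary<int, int>();

    public static void Increment(int contentItemId)
    {
        OpenSessions.AddOrUpdate(contentItemId, 1, (id, count) => count + 1);
    }

    // Called by a timer, e.g. once per minute, as the question describes.
    public static void Flush()
    {
        using (var db = new StatsContext()) // hypothetical EF DbContext
        {
            foreach (var pair in OpenSessions)
            {
                var row = db.ContentStats.Find(pair.Key); // hypothetical DbSet
                if (row != null) row.OpenSessionCount = pair.Value;
            }
            db.SaveChanges(); // one commit for the whole batch
        }
    }
}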

Using InProc and Azure AppFabric Cache together

Just a bit of background first. I currently have a site hosted with Windows Azure, with multiple instances and also AppFabric as my sole caching provider.
Everything was going great until my traffic spiked earlier this morning. After the instances became overloaded and stopped responding, everything returned to normal once the new instances started.
However I started getting messages from AppFabric saying that I was being throttled because there were too many requests in a given hour. Which is fair enough, it certainly was giving it hell.
In order to avoid these messages in the future I was planning on implementing an in-proc cache with a very short lifespan. So it checks in-proc first; if the item isn't there, it goes to AppFabric; if it's not there either, it goes to the DB.
ObjectCache cache = MemoryCache.Default;
CacheItemPolicy policy = new CacheItemPolicy();
policy.AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5);
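Spelled out, the lookup chain described above might look roughly like this (a sketch; the AppFabric client types come from Microsoft.ApplicationServer.Caching, and GetFromDatabase is a hypothetical fallback):

using System;
using System.Runtime.Caching;
using Microsoft.ApplicationServer.Caching;

public class TieredCache
{
    // DataCacheFactory is expensive to create; build it once and reuse it.
    private static readonly DataCache AppFabric = new DataCacheFactory().GetDefaultCache();

    public object Get(string key)
    {
        ObjectCache local = MemoryCache.Default;
        object value = local.Get(key);
        if (value != null) return value;      // 1. in-proc hit

        value = AppFabric.Get(key);           // 2. AppFabric
        if (value == null)
        {
            value = GetFromDatabase(key);     // 3. database (hypothetical)
            AppFabric.Put(key, value);
        }

        // Keep a short-lived local copy to absorb request spikes.
        local.Set(key, value, new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5)
        });
        return value;
    }

    private object GetFromDatabase(string key) { /* hypothetical DB call */ return null; }
}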
The questions I have are:
Is this the best way to handle the situation?
Is this going to interfere with AppFabric Caching?
Any issues I am overlooking?
Update
I just wanted to say I chose the above method and it works well. I was using it only for general data storage and not session state. MemoryCache with session state would not work too well on Azure due to no server affinity (as mentioned by David below).
Update 16-03-2012
After realizing the obvious, I also disabled SessionState on most pages. Most of my pages don't need it, and this greatly decreases my calls to the cache under heavy load. I also disabled ViewState for most pages, just for that slightly quicker page load time.
Are you using cache to provide SessionState storage, or general data storage by your application, or both? It's not totally clear, because InProc usually refers to SessionState, but your sample code does not look like SessionState.
Assuming that you're storing data which can be safely cached locally, then I would recommend looking into AppFabric Local Caching. It does basically what you want, and doesn't require writing any separate code (I think...).
Otherwise, using MemoryCache as you outlined is a workable scheme. I've done this in my apps; you just need to be careful to avoid cache incoherence issues.
Depending on your application, you may also want to implement a per-request cache by storing data in the HttpContext.Items collection. This is helpful when different parts of your code might request the same data during a single request.
Try this: http://msdn.microsoft.com/en-us/magazine/hh708748.aspx
One thing I have done is use HttpContext.Items. This is only a per request cache but depending on the nature of your system can be useful.
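A minimal per-request cache along those lines (classic ASP.NET; the key and loader delegate are assumptions):

using System;
using System.Web;

public static class PerRequestCache
{
    public static T Get<T>(string key, Func<T> load)
    {
        var items = HttpContext.Current.Items;
        if (!items.Contains(key))
            items[key] = load(); // fetched at most once per request
        return (T)items[key];
    }
}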
I wouldn't suggest inproc, due to the fact there's no server affinity.
One option with Windows Azure Cache, to avoid the hourly quota throttling, is to bump up the cache size. Fortunately the price doesn't scale linearly. For instance: $45 for 128MB, $55 for 256MB. So one option is to bump up your cache to the next size. You'll need to monitor compute performance though, via perf counters, as there's no way to monitor cache usage in real time.
Another option is to move session state to SQL Azure, which is now an officially-supported session state provider as of Azure 1.4 (Aug. 2011 - see this article for more info). With the latest SQL Azure pricing updates, if the db stays below 100MB, it's a $4.99 monthly rate instead of the original $9.99 baseline. It's amortized daily, so even if you have transient spikes and go into 1+GB range, you still have quite an affordable cache repository.
Another possible solution would be to use Sticky Sessions like this example:
http://dunnry.com/blog/2010/10/14/StickyHTTPSessionRoutingInWindowsAzure.aspx

Should I use sessions?

I am designing an online time-tracking application to be used internally. I am fairly new to C# and .NET, though I have extensive PHP experience.
I am using Windows Forms Authentication, and once the user logs in using that, I create a Timesheet object (my own custom class).
As part of this class, I have a constructor that checks the SQL DB for information (recent entries by this user, user preferences, etc.)
Should I be storing this information in a session? And then checking the session object in the constructor first? That seems the obvious approach, but most examples I've looked at don't make much use of sessions. Is there something I don't know that others do (specifically related to .NET sessions of course)?
EDIT:
I forgot to mention two things. 1. My SQL DB is on another server (though I believe they are both on the same network, so not much of an issue). 2. There are certain constants that the user will not be able to change (only the admin can modify them), such as project tasks. These are used on every page, but loaded the first time from the DB. Should I be storing these in a session? If not, where else? The only other way I can think of is a local flat file that updates each time the table of projects is updated, but that seems like a hack solution. Am I trying too hard to minimize calls to the DB?
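For concreteness, the pattern being asked about looks roughly like this (a sketch; UserPreferences, LoadPreferencesFromDb, and the Preferences property are hypothetical names):

public Timesheet(HttpSessionStateBase session, int userId)
{
    // Check the session first; fall back to the database once per session.
    var prefs = session["UserPrefs"] as UserPreferences;
    if (prefs == null)
    {
        prefs = LoadPreferencesFromDb(userId); // hypothetical DB call
        session["UserPrefs"] = prefs;
    }
    Preferences = prefs; // hypothetical property on this Timesheet class
}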
There is a good overview on ASP.NET Session here: ASP.NET Session State.
If you don't have thousands of clients, but need "some state" stored server-side, this is very easy to use and works well. It can also be stored in the database in multi server scenarios, without changing a line in your code, just by configuration.
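For reference, that configuration switch looks roughly like this in web.config (a sketch, assuming the ASPState database has already been set up with aspnet_regsql.exe):

<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=dbserver;Integrated Security=SSPI;"
                timeout="20" />
</system.web>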
My advice would be not to store "big" or full object hierarchies in there, as storing in a session (if the session is shared among servers in a web farm via a database, for example) can be somewhat costly. If you plan to have only one server, this is not really a problem, but you have to know that you won't be able to move easily to a multi-server mode.
The worst thing to do is follow the guys who just say "session is bad, whooooo!", don't use it, and eventually rewrite your own system. If you need it, use it :-)
I would shy away from session objects. And actually I would say look into ASP.NET MVC as well.
The reason I don't use the session is because I feel it can be a crutch for some developers.
I would save all of the information that you would have put into a session into a DB. This will allow for better metrics tracking and support for Azure (off topic but worth mentioning), and is cleaner IMO.
ASP developers know session state as a great feature, but one that is somewhat limited. These limitations include:
ASP session state exists in the process that hosts ASP; thus the actions that affect the process also affect session state. When the process is recycled or fails, session state is lost.
Server farm limitations. As users move from server to server in a Web server farm, their session state does not follow them. ASP session state is machine-specific. Each ASP server provides its own session state, and unless the user returns to the same server, the session state is inaccessible. (Source: http://msdn.microsoft.com/en-us/library/ms972429.aspx)
One of the main problems with Session is, that by default, it is stored in memory. If you have many concurrent users that store data in the session this could easily lead to performance problems.
Another thing is that an application recycle will empty your in-memory session, which could lead to errors.
Of course you can move your session to SQL Server or a StateServer, but then you will lose some performance.
Look into the HttpContext.User (IPrincipal) property. This is where user information is stored in the request.
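For example (classic ASP.NET, reading the identity established by Forms Authentication):

string userName = HttpContext.Current.User.Identity.Name;
bool loggedIn = HttpContext.Current.User.Identity.IsAuthenticated;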
Most people avoid session state simply because people like to avoid state in general. If you can find an algorithm or process which works all the time regardless of the previous state of an object, that process tends to be more foolproof against future maintenance and more easily testable.
I would say for this particular case, store your values in the database and read them from there any time you need that information. Once you have that working, take a look at the performance of the site. If it's performing fine then leave it alone (as this is the simplest case to program). If performance is an issue, look at using the IIS Cache (instead of session) or implementing a system like CQRS.
Session State Disadvantage
Session-state variables stay in memory until they are either removed or replaced, and therefore can degrade server performance. Session-state variables that contain blocks of information, such as large datasets, can adversely affect Web-server performance as server load increases. Think about what will happen if you have a significant number of users simultaneously online.
NOTE: I haven't mentioned the advantages because they are straightforward: simple implementation, session-specific events, data persistence, cookieless support, etc.
The core problem with sessions is scalability. If you have a small application with a small number of users that will only ever be on one server, then it may be a good route for you to save small amounts of data - maybe just the user ID - to allow quick access to the preferences etc.
If you MAY want multiple web servers, or the application MAY grow, then don't use session. And only use it for small pieces of information.

ASP.NET Session - Use or not use and best practices for an e-commerce app

I have used ASP.NET in mostly intranet scenarios and am pretty familiar with it, but for something such as a shopping cart or similar session data there are various possibilities. To name a few:
1) State-Server session
2) SQL Server session
3) Custom database session
4) Cookie
What have you used, and what are your success stories or lessons learnt? What would you recommend? This would obviously make a difference in a large-scale public website, so please comment on your experiences.
I have not mentioned in-proc since in a large-scale app this has no place.
Many thanks
Ali
The biggest lesson I learned was one I already knew in theory, but got to see in practice.
Removing all use of sessions entirely from an application (not necessarily from all of the site) is something we all know should bring a big improvement to scalability.
What I learnt was just how much of an improvement it could be. By removing the use of sessions and adding some code to handle what they had handled before (which at each individual point was a performance loss, as each point was now doing more work than before), the overall gain was massive: actions one would measure in many seconds or even a couple of minutes became sub-second, CPU usage became a fraction of what it had been, and the number of machines and amount of RAM went from clearly not enough to cope to a rather over-indulgent amount of hardware.
If sessions cannot be removed entirely (people don't like the way browsers use HTTP authentication, alas), moving much of their use into a few well-defined spots, ideally in a separate application on the server, can have a bigger effect than the choice of session-storage method.
In-proc certainly can have a place in a large-scale application; it just requires sticky sessions at the load-balancing level. In fact, the reduced maintenance cost and infrastructure overhead of using in-proc sessions can be considerable. Any enterprise-grade content switch you'd be using in front of your farm would certainly offer such functionality, and it's hard to argue for the cash and manpower of purchasing/configuring/integrating state servers versus just flipping a switch. I am using this in quite large-scale ASP.NET systems with no issues to speak of. RAM is far too cheap to ignore this as an option.
In-proc session (at least when using IIS6) can recycle at any time and is therefore not very reliable because the sessions will end when the server decides, not when the session actually times out. The sessions will also expire when you deploy a new version of the web site, which is not true of server-based session providers. This can potentially give your users a bad experience if you update in the middle of their session.
Using SQL Server is the best option because it is possible to have sessions that never expire. However, the cost of the server, disk space, its maintenance, and performance all have to be considered. I was using one on my e-commerce app for several years until we changed providers to one with very little database space. It was a shame that it had to go.
We have been using the state service for about 3 years now and haven't had any issues. That said, we now have the session timeout set at an hour, and in e-commerce that is probably costing us some business vs. the never-expire model.
When I worked for a large company, we used a clustered SQL Server in another application that was more critical to remain online. We had multiple redundancy on every part of the system, including the network cards. Keep in mind that adding a state server or service adds a potential single point of failure for the application unless you go the clustered route, which is more expensive to maintain.
There was also an issue when we first switched to the SQL based approach where binary objects couldn't be serialized into session state. I only had a few and modified the code so it wouldn't need the binary serialization so I could get the site online. However, when I went back to fix the serialization issue a few weeks later, it suddenly didn't exist anymore. I am guessing it was fixed in a Windows Update.
If you are concerned about security, state server is a no-no. State server performs absolutely no access checks, anybody who is granted access to the tcp port state server uses can access or modify any session state.
In-proc is unreliable (as you mentioned), so that's not worth considering.
Cookies aren't really a session-state replacement since you can't store much data in them.
I vote for database-based storage of some kind (if needed at all); it has the best potential to scale.
