Asynchronous output buffer for calendar (C#)

OK, I'm currently writing a scheduling web app for a large company, and it needs to be fast. Normal fast (<1s) doesn't cut it with these guys, so we're aiming for <0.5s, which is hard to achieve when using postbacks.
My question is: does anyone have a suggestion of how best to buffer calendar/schedule data to speed load times?
My plan is to load the selected week's data plus one week on either side, and use these to buffer the output: the page will never have to load the week you've asked for, since that will already be in memory, and it will buffer the neighbouring weeks for when you next navigate.
However, I'm not sure exactly how best to achieve this. The async loading is simple with AJAX page methods; the question is where to store the data (temporarily) after it loads. I'm currently using a static class holding a Dictionary to do it, but this is probably not the best approach when it comes to scaling to a large userbase.
Any suggestions?
EDIT
The amount of data loaded is not particularly high: each appointment has only a few fields, which are converted into a small container class and processed to organise the dates and calculate concurrent appointments, and given the domain it's unlikely there'll be more than ~30 appointments a week. However, the database is under very heavy load from other areas of the application (this is a very large-scale system with thousands of users transferring large volumes of information).

So are you putting your buffered content on the client or the server here? I would think the thing to do would be to put the data for the previous and next weeks into a JavaScript data structure on the page and let the client arrange it for you. Then you just bounce back to the server asynchronously for the next week whenever one of the buffered neighbour weeks is opened, so you're always a week ahead, as you said, assuming the data will only be accessed week by week.
I would also, for the sake of experimentation, see what happens if you put a lot more calendar data into the page to process with JavaScript. This type of data can often be pretty small, with even a lot of information barely adding up to the equivalent of a small image in terms of data transfer, and you may well find that you can have quite a bit of information cached ahead of time.
It can be really easy to assume that because you have a tool like Ajax you should be using it the whole time, but then I do use a hammer for pretty much all jobs around the home, so I'm a fine one to talk on that front.

The buffering won't help on the first page, though - only on subsequent back/forward requests.
Tbh I don't think there's much point, as you'll want to support hyperlinks and redirects from other sources as much as or more than just back/forward. You might also want to "jump" to a month. Forcing users to page back and forwards to get to the month they want is actually going to take longer and be more frustrating than a <1s response time to go straight to the page they want.
You're better off caching data generally (using something like Velocity) so that you almost never hit the db, but even that's going to be hard with lots of users.
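As a concrete (in-process) sketch of that idea, here is a simple cache-aside helper with a sliding expiry. A distributed cache like Velocity or Memcached would replace the dictionary, but the read-through pattern is the same; the key format and timeout here are illustrative:

```csharp
using System;
using System.Collections.Concurrent;

public static class ReadThroughCache
{
    private class Entry
    {
        public string Value;
        public DateTime LastTouched;
    }

    private static readonly ConcurrentDictionary<string, Entry> store =
        new ConcurrentDictionary<string, Entry>();
    private static readonly TimeSpan Expiry = TimeSpan.FromMinutes(10); // illustrative

    // Returns the cached value, or runs 'load' (standing in for the real
    // DB query) on a miss or after the entry has gone stale.
    public static string Get(string key, Func<string> load)
    {
        Entry e;
        if (store.TryGetValue(key, out e) && DateTime.UtcNow - e.LastTouched < Expiry)
        {
            e.LastTouched = DateTime.UtcNow;
            return e.Value;
        }
        var fresh = new Entry { Value = load(), LastTouched = DateTime.UtcNow };
        store[key] = fresh;
        return fresh.Value;
    }
}
```

The point is that the database is only hit on a miss; with many web servers you'd swap the dictionary for the distributed cache so all servers share one copy.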
My recommendation is to get it working, then use a profiling tool (like ANTS Profiler) to see which bits of code you can optimise once it's functionally correct.

Related

How can I access a c# Memory Mapped File from Coldfusion 10?

I have a C# application that generates data every second (stock tick data) which can be discarded after each iteration.
I would like to pass this data to a ColdFusion (10) application. I have considered having the C# application write the data to a file every second and having the ColdFusion application read it, but this is likely to cause issues, with both applications potentially trying to read or write the file at the same time.
I was wondering if using memory-mapped files would be a better approach? If so, how could I access the memory-mapped file from ColdFusion?
Any advice would be greatly appreciated. Thanks.
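For reference, the C# writer side of a memory-mapped file is straightforward. This sketch uses a file-backed map (which a Java/ColdFusion process can map via java.nio and read without the open/close churn of plain file writes); the path and record layout (4-byte length prefix, then UTF-8 payload) are my own assumptions:

```csharp
using System;
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Text;

public static class TickWriter
{
    // Writes one tick into a small file-backed memory map.
    // Layout: 4-byte length prefix at offset 0, then the UTF-8 payload.
    public static void WriteTick(string path, string tick)
    {
        byte[] payload = Encoding.UTF8.GetBytes(tick);
        using (var mmf = MemoryMappedFile.CreateFromFile(
                   path, FileMode.OpenOrCreate, null, 4096))
        using (var view = mmf.CreateViewAccessor())
        {
            view.Write(0, payload.Length);
            view.WriteArray(4, payload, 0, payload.Length);
        }
    }
}
```

Note this only shows the C# half; the reader would still need some signal (or polling) to know a new tick has arrived, which is part of why the answer below leans towards sockets.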
We have produced a number of stock applications that include tick-by-tick tracking of watchlists, charting, etc. I think the idea of a file is probably not a great one unless you are talking about a single stock at regular intervals. In my experience a change every "second" probably understates the case considerably: some stocks (AAPL or GOOG are good examples) have hundreds of "ticks" per second during peak times.
So if you are NOT taking every tick but really are "updating the file" every second, then your idea has some merit, in that you could use a file-watching gateway to fire events and "see" that the file has been updated.
But keep in mind that you are in effect introducing something "in the middle": a file now stands between your Java or CF application and the quote engine. That's going to introduce latency no matter what you do (acquiring and releasing file handles, etc.), and the locks of one process may interfere with the other.
When you are dealing with Facebook updates, milliseconds don't really matter much, in spite of all the teenage girls who probably disagree with me :) With stock quotes, however, half the task is shaving off milliseconds to get your processes as close to real time as possible.
Our choice is usually sockets rather than something in the middle bridging the data. The quote engine keeps its watchlist and updates its arrays as normal, but also sends any updates downstream to the socket engine, which pushes them to something that can handle them (a charting application, a watchlist, a socket gateway for a web page, etc.).
Hope this helps - it's not a clear answer, but more a clarification of the hurdles you face.

Best practice - load a lot of stuff in the application_start? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I have a webshop with a lot of products and other content. Currently I load all content into a global list at Application_Start, which takes approx. 15-25 seconds.
This makes the site really fast, as I can get any product/content in O(1) time.
However, is this best practice?
Currently I'm on shared hosting rather than a VPS / dedicated server, so the application gets recycled from time to time, which gives random visitors load times of up to 15-25 seconds (a number that will only grow with more content). This is of course totally unacceptable, but I guess it would be solved with a VPS.
What is the normal way of doing this? I guess a webshop like Amazon probably doesn't load all its products into one huge list :-D
Any thoughts and ideas would be highly appreciated.
It looks like you've answered your own question for your case: "This is of course totally unacceptable".
If your goal is O(1), a normal single-product database request is effectively O(1) anyway, unless you need complicated joins between products. Consider dropping all your pre-caching logic and seeing whether you actually have a performance problem. You can limit the startup impact by caching lazily instead.
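Lazy caching can be as simple as wrapping each product lookup in a Lazy<T> stored in a concurrent dictionary, so the first request pays the load cost and later requests hit memory. A sketch (the product type and loader delegate are placeholders for the real entity and DB query):

```csharp
using System;
using System.Collections.Concurrent;

public static class ProductCache
{
    private static readonly ConcurrentDictionary<int, Lazy<string>> cache =
        new ConcurrentDictionary<int, Lazy<string>>();

    // 'load' stands in for the real single-product DB query.
    public static string Get(int productId, Func<int, string> load)
    {
        return cache.GetOrAdd(productId,
            id => new Lazy<string>(() => load(id))).Value;
    }
}
```

The Lazy<T> wrapper ensures the DB query runs at most once per product, even when several first requests for the same product arrive concurrently, and nothing is loaded at Application_Start at all.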
Large sites often use distributed caching, like Memcached.
A more scalable setup is to set up a web service to provide the content, which the website calls when it needs it. The web service will need to cache frequently needed content to achieve fast response times.
First of all, 15-20 seconds to load data is far too long, so I suspect one of these causes:
The time is spent compiling, not loading the data.
The data is too large and is filling up memory.
The method you use to read the data is very slow.
The data store itself is slow, or the data is in a text file that is slow to parse.
My opinion is that you should cache only a small amount of data that you need to use many times in a short period. The way you describe it is bad practice, for a few reasons:
If you run multiple application pools, you read the same data in every pool and spend memory for no reason.
The data you cache cannot be changed; it is read-only.
Even if you cache the data, you still need to render the page, and that is where the cache actually pays off: on the final render, not on the data.
What and how to cache:
We cache the final rendered page.
We also set client-side cache headers for the page and other elements.
We read and write the data from the database as it comes and let the database do the caching; it knows best.
If we do cache data, it is a small amount needed inside a long loop, to avoid calling the database many times.
We also cache on demand, and if an entry is not used for a long time, or memory is needed, that part of the cache is evicted. If some of the data comes from a complex combination of many tables, we build a temporary flat table that keeps it all together, one item per row; if we need too many of these we create a second, temporary database file to hold that part of the data.
How fast is a database read? So fast that you don't need to worry about it; check other points of delay instead, such as the full render of a page, or parts of the page.
What you need to worry about is a good database design, a good and fast way to retrieve your data, and well-optimised code to display it.
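In ASP.NET Web Forms, caching the final rendered page is a single directive at the top of the .aspx file; the duration and vary key below are illustrative:

```aspx
<%@ OutputCache Duration="60" VaryByParam="id" %>
```

With that in place, the page is rendered once per distinct id and served from the output cache for 60 seconds, which is exactly the "cache the final render" approach.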
Separation of responsibilities will help you scale for the future.
With your current setup, you are limited to the resources of your web server, and, like you said, your start up times will grow out of control as you continue adding more products.
If you share the burden of each page request with SQL Server, you open up your application to allow it to scale as needed. Over time, you may decide to add more web servers, cluster SQL Server, or switch to a new database back-end altogether. However, if all the burden is on the application pool, then you are drastically limiting yourself.

asp.net c# speed up of classes

I work on a big project in a company. We collect data which we get via the API methods of the CMS.
e.g.
DataSet users = CMS.UserHelper.GetLoggedUser(); // returns dataset with users
Now on some pages we need a lot of different data, not just users: also nodes of the CMS tree, or specific data of a subtree.
So we thought of writing our own "helper class" through which we can later get different data easily.
e.g.:
MyHelperClass.GetUsers();
MyHelperClass.Objects.GetSingleObject( ID );
Now the problem is that our "helper class" is really big, and we collect different data from it and write it into a typed DataSet. Later we can bind a Repeater to that typed DataSet, which contains data from different tables (which in turn comes from the API methods I mentioned).
The problem is: it's so slow now, even just loading the page! Does it load or initialise the whole class?
By the way CMS is Kentico if anyone works with it.
I'm tired. Tried the whole night... but it's soooo slow. Please take a look at the architecture.
Maybe you'll find some crimes in there which are not allowed :S
I hope we get it work faster. Thank you.
alt text http://img705.imageshack.us/img705/3087/classj.jpg
Bottlenecks usually come in a few forms:
Slow or flaky network.
Heavy reading/writing to disk, as disk IO is 1000s of times slower than reading or writing to memory.
CPU throttle caused by long-running or inefficiently implemented algorithm.
Lots of things could affect this, including your database queries and indexes, the number of people accessing your site, lack of memory on your web server, lots of reflection in your code, just plain slow hardware etc. No one here can tell you why your site is slow, you need to profile it.
For what it's worth, you asked a question about your API architecture, and from a code point of view it looks fine. There's nothing wrong with copying fields from one class to another, and the performance penalty of the wrapper class casting from object to Guid or bool is likely so tiny as to be negligible.
Since you asked about performance, it's not very clear why you're connecting class architecture to performance. There are really, really tiny micro-optimisations you could apply to your classes which may or may not help, but the four or five nanoseconds you'd gain have already been lost simply by reading this answer. Network latency and DB queries will absolutely dwarf the performance subtleties of your API.
In a comment, you stated "so there is no problem with static classes or a basic mistake of me". Performance-wise, no. From a web-app point of view, probably. In particular, static fields are global and initialised once per AppDomain, not per session; the variables mCurrentCultureCode and mcurrentSiteName sound session-specific, not global to the AppDomain. I'd double-check that the site renders correctly when users with different culture settings access it at the same time.
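To make the hazard concrete, here is a sketch (the class name is hypothetical; the field name is taken from the comment above): a static field has one value per AppDomain, so whichever request wrote it last wins for every user.

```csharp
// One copy per AppDomain, NOT per user session: if user A's request sets
// "en-GB" and user B's concurrent request then sets "de-DE", user A's
// next read will see "de-DE".
public static class SiteState
{
    public static string mCurrentCultureCode;
}
```

Session-specific values like this belong in Session state (or at least per-request storage), not in static fields.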
Are you already using Caching and Session state?
The basic idea is to defer as much of the data loading to these storage mediums as possible, rather than doing it on individual page loads. Caching especially can be useful if you only need to fetch the data once and want to share it between users and over time.
If you are already doing these things, or can't directly implement them, try deferring as much of this data gathering as possible, opting to short-circuit it rather than doing all the loading up front. If the data is only occasionally used, this can also save you a lot of time in page loads.
I suggest you try to profile your application and see where the bottlenecks are:
Slow load from the DB?
Slow network traffic?
Slow rendering?
Too much traffic for the client?
Profiling should be part of almost every senior programmer's general toolbox. Learn it, and you'll have the answers yourself.
Cheers!
First things first... enable Trace for your application, try to optimise the response size and caching, and work with some application and DB profilers. Just by looking at the code, I'm afraid no one will be able to help you much.

Real time data storage and access with .net

Does anyone have any experience with receiving and updating a large volume of data, storing it, sorting it, and visualizing it very quickly?
Preferably, I'm looking for a .NET solution, but that may not be practical.
Now for the details...
I will receive roughly 1000 updates per second, some updates to existing rows and some new rows of data. But it can also be very bursty, with sometimes 5000 updates and new rows at once.
By the end of the day, I could have 4 to 5 million rows of data.
I have to both store them and also show the user updates in the UI. The UI allows the user to apply a number of filters to the data to just show what they want. I need to update all the records plus show the user these updates.
I have a visual update rate of 1 fps.
Anyone have any guidance or direction on this problem? I can't imagine I'm the first one to have to deal with something like this...
At first thought, some sort of in-memory database seems right, but will it be fast enough for querying updates near the end of the day, once the data set is large? Or does that all depend on smart indexing and queries?
Thanks in advance.
It's a very interesting and also challenging problem.
I would go with a pipeline design, with processors implementing sorting, filtering, aggregation, etc. The pipeline needs an asynchronous (thread-safe) input buffer that is processed in a timely manner (under a second, given your 1 fps requirement). If it can't keep up, you need to queue the data somewhere, on disk or in memory, depending on the nature of your problem.
Consequently, the UI needs to be implemented in a pull style rather than push, you only want to update it every second.
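A minimal sketch of that input buffer: feed threads post updates into a thread-safe queue, and a timer-driven consumer (your 1 fps tick) drains whatever has accumulated and hands the batch to the sorting/filtering stages and then the UI. Types are simplified to strings here:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

public class UpdatePipeline
{
    private readonly BlockingCollection<string> buffer =
        new BlockingCollection<string>(new ConcurrentQueue<string>());

    // Called from the feed thread(s) for every tick/update; safe under bursts.
    public void Post(string update)
    {
        buffer.Add(update);
    }

    // Called once per second (e.g. from a timer): drain everything queued
    // so far as one batch for the downstream processors and the UI.
    public List<string> DrainBatch()
    {
        var batch = new List<string>();
        string item;
        while (buffer.TryTake(out item))
            batch.Add(item);
        return batch;
    }
}
```

Batching like this is what makes the pull-style UI work: a 5000-update burst becomes one batch per second rather than 5000 individual UI refreshes.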
For the data store you have several options. Using a database is not a bad idea, since you need the data persisted (and, I assume, queryable) anyway. If you are using an ORM, you may find NHibernate in combination with its excellent second-level cache a decent choice.
Many of the considerations might also be similar to those Ayende made when designing NHProf, a realtime profiler for NHibernate. He has written a series of posts about them on his blog.
Maybe Oracle is a more appropriate RDBMS solution for you. The problem with your question is that at these "critical" levels there are too many variables and conditions to deal with: not only software, but the hardware you can afford (it costs :)), connection speed, your expected common user system setup, and more and more and more...
Good Luck.

C# Datasets, paging and large amounts of data

I want to show a large amount of data in a DataSet: approx 100,000 records with 10 columns, which consumes a large amount of RAM (700 MB). I have also tried paging, which reduces this by about 15-20%, but I don't really like the Previous/Next buttons paging involves. I'm not writing the data to disk at present; should I be? If so, what is the most common method? The data isn't stored forever, just while it is being viewed; then a new query may be run and another 70,000 records could be viewed. What is the best way to proceed?
Thanks for the advice.
The reality is that the end user rarely needs to see the totality of their dataset, so I would use whichever method you like for presenting the data (a ListView, say) and build a custom pager, so that the DataSet is only fed the number of records desired. Otherwise, each page load would mean re-filling the whole DataSet.
Writing XML to a temp file, or using a temp table created through a stored proc, are alternatives, but you still have to sift through and present the data.
An important question is where this data comes from. That will help determine what options are available to you. Writing to disk would work, but it probably isn't the best choice, for three reasons:
As a user, I'd be pretty annoyed if your app suddenly chewed up 700 MB of disk space with no warning at all. Then again, I notice such things, and I suppose a lot of users wouldn't. Still: it's a lot of space.
Depending on the source of the data, even the initial transfer could take longer than you really want to allow.
Again, as a user, there's NO WAY I'm manually digging through 700 MB worth of data. That means you almost certainly never need to show all of it: you want to load only the requested page, one (or a couple of) pages at a time.
I would suggest memory-mapped files; .NET 4 adds built-in support for them in the System.IO.MemoryMappedFiles namespace.
That is a lot of data to be working with and keeping around in memory.
Is this an ASP.NET app? Or a Windows app?
I personally have found that a custom pager setup (controlling your own next/previous links) with paging at the database level is the only way to get the best performance: fetch only the data you need.
Implement paging in SQL if you want to reduce the memory footprint.
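On SQL Server 2005+, database-side paging is typically done with ROW_NUMBER(). The helper below builds such a query as a sketch; the table and column names are placeholders, and in real code the bounds should be SQL parameters rather than concatenated text:

```csharp
// Builds a SQL Server paging query: rows are numbered by the ORDER BY key,
// then only the requested page is returned, so roughly pageSize rows
// (not 100,000) cross the wire and sit in memory.
public static class PagingSql
{
    public static string BuildPageQuery(int page, int pageSize)
    {
        int first = (page - 1) * pageSize + 1;
        int last = page * pageSize;
        return
            "SELECT * FROM (" +
            "  SELECT *, ROW_NUMBER() OVER (ORDER BY Id) AS rn FROM Records" +
            ") AS numbered " +
            "WHERE rn BETWEEN " + first + " AND " + last;
    }
}
```

Bound to a custom pager, each next/previous click runs one of these small queries instead of re-materialising the full 700 MB DataSet.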
