ASP.NET - Storing Article View Count in Database - C#

I am building a Wiki / Blog style application, and I have a question about the best way to store the view count for each article. The requirement is that I only want to store the number of unique users who viewed the article, not the total view count. So far I have come up with three ways to accomplish this:
1. SQL Server stored procedure: the problem with this approach is that the data is stored in an XML data type, and it might be complicated to achieve the requirement this way. I am leaving this as a last resort.
2. MSMQ: this would work well, since I can process the requests serially. The only problem is that I cannot ensure that MSMQ is installed on the host server. This one is out of the question!
3. Using Application.Lock(): I know that with this method I can lock access to the Application object, update an entry in the application, update the database, and then call Application.UnLock(). While this sounds like a functional approach, it still feels like a workaround.
Does anyone have a suggestion on what I should do to achieve the requirement?

MSMQ and Application.Lock() are definitely not the options to consider for something as simple as what you want to do (Application.Lock() is a definite no-go).
I also see no reason for XML; a stored proc does not rely on XML.
Create a table:
[page, userip]
On every view of the page:
insert into <table> (page, userip) values (@page, @userip)
For the statistics, just issue a query:
select count(distinct userip) from <table> where page = @page
This identifies a user by IP, which is not completely failsafe, as multiple users can come from behind the same IP.
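A minimal C# sketch of the approach above; the PageViews table and column names are hypothetical, and the IF NOT EXISTS guard keeps one row per page/IP pair without any application-level locking:

using System.Data.SqlClient;

public static class ViewCounter
{
    public static void RecordView(string connStr, int pageId, string userIp)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "IF NOT EXISTS (SELECT 1 FROM PageViews WHERE Page = @page AND UserIp = @ip) " +
            "INSERT INTO PageViews (Page, UserIp) VALUES (@page, @ip)", conn))
        {
            cmd.Parameters.AddWithValue("@page", pageId);
            cmd.Parameters.AddWithValue("@ip", userIp);
            conn.Open();
            cmd.ExecuteNonQuery();   // records at most one row per page/IP pair
        }
    }

    public static int UniqueViews(string connStr, int pageId)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(DISTINCT UserIp) FROM PageViews WHERE Page = @page", conn))
        {
            cmd.Parameters.AddWithValue("@page", pageId);
            conn.Open();
            return (int)cmd.ExecuteScalar();   // unique viewers for the page
        }
    }
}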
But why not investigate Google Analytics? It gives you all the info you need (and more).

Related

SQL long process stats generation for charting best practice

I have some SQL Server stored procs that generate statistical data for charting in a C# web application.
Right now the user in the web app has to wait about 5 minutes to see these charts with updated data, and this is a pain in the neck for the user and for me.
Some of the stored procs take more than 5 minutes to generate the data, but the web users don't need to see the info on the fly. Maybe update the chart every 2-3 hours.
So, I don't know what the best practice is to solve this.
I was thinking of creating a Windows service that will call the SPs every 2-3 hours and then store the data in different tables (see the sketch below).
Any clue on how to deal with this?
Appreciate the help
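For reference, a minimal sketch of the scheduled-refresh idea from the question (dbo.RefreshChartData and the 2-hour interval are hypothetical; in a real Windows service, Start/Stop would be called from OnStart/OnStop):

using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

public static class ChartRefresher
{
    static Timer _timer;

    public static void Start(string connStr)
    {
        // Run immediately, then every 2 hours.
        _timer = new Timer(_ =>
        {
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand("dbo.RefreshChartData", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.CommandTimeout = 600;  // the proc may run for more than 5 minutes
                conn.Open();
                cmd.ExecuteNonQuery();     // proc materializes results into summary tables
            }
        }, null, TimeSpan.Zero, TimeSpan.FromHours(2));
    }

    public static void Stop()
    {
        if (_timer != null) _timer.Dispose();
    }
}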
As I said in the comments, indexed views (kind of like materialized views) can increase performance of certain common queries without having to make temporary tables and things like that.
The benefits of indexed views are performance and that they don't require much extra coding or effort. When you create an indexed view as opposed to a temp table, the query optimizer will (should) know when to take advantage of the view, without the end user needing to specify a temp or aggregate table.
Examples of the benefits of indexed views and how to implement them can be found here http://msdn.microsoft.com/en-us/library/dd171921(v=sql.100).aspx
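For concreteness, a hedged sketch of setting one up from C#; the dbo.Sales table, its columns, and the view name are all hypothetical. SQL Server requires SCHEMABINDING, a COUNT_BIG(*) column in grouped views, and a unique clustered index before the view is materialized.

using System.Data.SqlClient;

public static class IndexedViewSetup
{
    public static void Create(string connStr)
    {
        // CREATE VIEW must be the only statement in its batch,
        // so the two statements run as separate commands.
        string[] batches =
        {
            // Amount is assumed NOT NULL, as indexed views require for SUM.
            @"CREATE VIEW dbo.vSalesByDay WITH SCHEMABINDING AS
              SELECT SaleDate, COUNT_BIG(*) AS SaleCount, SUM(Amount) AS Total
              FROM dbo.Sales
              GROUP BY SaleDate",
            @"CREATE UNIQUE CLUSTERED INDEX IX_vSalesByDay
              ON dbo.vSalesByDay (SaleDate)"
        };

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            foreach (var sql in batches)
                using (var cmd = new SqlCommand(sql, conn))
                    cmd.ExecuteNonQuery();
        }
    }
}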
Here are some links on indexed views. As the comments said, views allow you to get information quickly, rather than always running the full select through a stored proc. Read the second link for a very good explanation of views.
MSDN
http://msdn.microsoft.com/en-ca/library/ms187864%28v=sql.105%29.aspx
Very well explained here
http://www.codeproject.com/Articles/199058/SQL-Server-Indexed-Views-Speed-Up-Your-Select-Quer

ASP.NET C# posts and comments permalinks or sql

I'm working on some kind of "social network" and have created my own comment system for each post. Since a large number of posts and comments is expected, I'm not sure if SQL is the best approach here.
Currently I have two tables in the SQL db:
Posts:
Columns: ID, PostText, DateTime, Username (that posted)
Comments:
Columns: ID, PostID (the post it belongs to), CommentText, DateTime, Username (that commented)
I would also like to implement a "like" system, but then I would have to make a new table?
Example:
Likes:
Columns: ID, PostID, Username (that liked) - to prevent double voting
There might be a lot of refreshing, and I'm a little worried that SQL might start giving me headaches with high traffic on the website.
But I need advice from someone who has experience with both SQL and permalinks. Which works better, given that permalinks can be crawled for links? I'm also not sure if a large number of files (permalinks) would be trouble for my host.
If you think permalinks are better, can you explain a way to format them and sort them somehow using a folder hierarchy, since I've never worked with that.
Thanks!
You really need a new table for likes.
I think if you have high traffic, you should have an additional history table where you keep old likes (for example, older than a week), or try partitioning your table.
But these are just first steps; there will be a lot of work with really high-traffic sites.
Morzel
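Picking up the likes table from the question: a hedged sketch of the basic like insert, assuming a UNIQUE constraint on (PostID, Username) so the database itself rejects double votes.

using System.Data.SqlClient;

public static class Likes
{
    // Assumes a Likes table (ID, PostID, Username) with a
    // UNIQUE constraint on (PostID, Username).
    public static bool TryAddLike(string connStr, int postId, string username)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "INSERT INTO Likes (PostID, Username) VALUES (@postId, @username)", conn))
        {
            cmd.Parameters.AddWithValue("@postId", postId);
            cmd.Parameters.AddWithValue("@username", username);
            conn.Open();
            try
            {
                cmd.ExecuteNonQuery();
                return true;                  // first like from this user
            }
            catch (SqlException ex) when (ex.Number == 2627 || ex.Number == 2601)
            {
                return false;                 // duplicate: user already liked this post
            }
        }
    }
}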

LinqToSQL Design Query / Worry

I wonder if somebody could point me in the right direction. I've recently started playing with LinqToSQL and love the strongly typed data objects etc.
I'm just struggling to understand the impact on database performance etc. For example, say I was developing a simple user profile page. The page shows basic information about the user, some information on their recent activity, and a list of unread notifications.
If I was developing a stored procedure for this page, I could create a single SP which returns multiple datatables covering all of the required information - resulting in a single db call.
However, using LinqToSQL, this could result in many calls - one for user info, at least one for activity, at least one for notifications; and if I then want further info on the notifications, this may result in further calls - multiple db calls.
Should I be worried about the number of db calls happening as a result of using this design pattern? I.e., are the multiple db handshakes etc. going to degrade my db?
I'd appreciate your thoughts on this!
Thanks
David
LINQ to SQL can consume multiple results from a stored proc if you need to go that route. Unfortunately, the designer has problems mapping them correctly, so you will probably need to create your mapping manually. See http://www.thinqlinq.com/Default/Using-LINQ-to-SQL-to-return-Multiple-Results.aspx.
You can configure LINQ to SQL to eagerly load the child records if you know that you're going to need them for every parent record. Use the DataLoadOptions and .LoadWith to configure it.
You can also project an object graph with multiple child collections in the Select clause of a LINQ query to reduce the number of DB hits you make. Both approaches are sketched below.
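A hedged sketch of both options; MyDataContext and the User/Activities/Notifications members are hypothetical stand-ins for your model.

using System.Data.Linq;
using System.Linq;

public static class ProfileQueries
{
    // Option 1: eager-load children so one logical query brings back
    // the user plus activity and notifications.
    public static User LoadProfile(MyDataContext db, int userId)
    {
        var options = new DataLoadOptions();
        options.LoadWith<User>(u => u.Activities);
        options.LoadWith<User>(u => u.Notifications);
        db.LoadOptions = options;   // must be set before the first query runs

        return db.Users.Single(u => u.UserID == userId);
    }

    // Option 2: project only the fields the page needs in the Select clause.
    public static object LoadProfileProjection(MyDataContext db, int userId)
    {
        return (from u in db.Users
                where u.UserID == userId
                select new
                {
                    u.UserName,
                    RecentActivity = u.Activities.OrderByDescending(a => a.Date).Take(10),
                    UnreadNotifications = u.Notifications.Where(n => !n.IsRead)
                }).Single();
    }
}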
Ultimately, you need to check a number of options to determine which route is the best performance for your situation. It's not a one size fits all scenario.
Is it worse from a performance standpoint? Yes, it should be. Multiple roundtrips are usually worse than a single one.
The real question is, do you mind? Is your application going to receive enough visits to warrant the added complexity of a stored procedure? Or do you value the simplicity of future modifications over raw performance?
In any case, if you need the performance, you can create a stored procedure and map it on your context. This will give you one single call, but return the data as objects.
Here is an article explaining a bit about that option:
linq-to-sql-returning-multiple-result-sets

Database triggers that tells my website that something has been updated

I am in the process of creating a friend list using ASP.NET/C# and MSSQL 08: a simple DataList that shows the profile image and name of each of my friends.
Next to the name, I have a label showing the current status of my friend, for instance Online, Offline, Away, etc.
My question is, how can I change the value of this label without having a timer that calls the database all the time asking for the current status?
I would like to have the database (SQL Server 2008) tell me when a change has occurred and tell my business logic to update the status label.
Is this possible?
Thanks!
To accomplish what you are looking for - and this is just how I would do it - create a view based on the table with only the items needed for the task, for instance UserID | Online_Status. Then make a call using AJAX; it would be so small that the user would not even notice the bandwidth usage/processing.
This is pretty much exactly what you said you didn't want, but even if you had 1 million users and spaced the calls 3-5 minutes apart, you should be OK, considering it would take milliseconds to perform the check.
Just my two cents.
I don't think you should do it like that. There are techniques to do this using Comet, but they will consume a lot of resources on your server, clearly reducing the number of users that can access your site/app. The problem is that the server and client need to keep a socket open for the server to be able to push data to the client.
What I would do is have the client ask if there are any updates, keeping the payload to a minimum. If the server says data has changed, the client makes another request to get that data.
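A minimal sketch of such a lightweight polling endpoint as an ASP.NET IHttpHandler (the cache key and version-stamp scheme are hypothetical); the client compares the returned stamp with the last one it saw and only requests the full friend list when it changes:

using System.Web;

public class StatusPollHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Tiny payload: just a version stamp, not the full friend list.
        object stamp = HttpRuntime.Cache["FriendStatusVersion"];
        context.Response.ContentType = "text/plain";
        context.Response.Write(stamp == null ? "0" : stamp.ToString());
    }
}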
You could use the SqlDependency class to get notified when the result of a database query changes.
There is an excellent article on MSDN explaining the SqlDependency class.
To use the SqlDependency class in the context of ASP.NET, consider the strategy explained in the following MIX 2011 video.
Hope this helps.
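For illustration, a minimal console sketch of SqlDependency (dbo.UserStatus and its columns are hypothetical; query notifications need an explicit column list, two-part table names, and Service Broker enabled on the database):

using System;
using System.Data.SqlClient;

class StatusWatcher
{
    static void Main()
    {
        const string connStr = "...";  // your connection string here
        SqlDependency.Start(connStr);  // start the notification listener once

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "SELECT UserID, OnlineStatus FROM dbo.UserStatus", conn))
        {
            var dep = new SqlDependency(cmd); // attach before executing
            dep.OnChange += (s, e) => Console.WriteLine("Change detected: " + e.Info);

            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* consume results to register the subscription */ }
            }
        }

        Console.ReadLine();            // a notification fires once; re-subscribe after it
        SqlDependency.Stop(connStr);
    }
}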
I believe this is what the SqlCacheDependency class is designed for. If you are using SQL Server 2005 or higher*, it implements a push-notification model from SQL Server to your application to notify you when a change occurs in your dataset. So each time the cache is invalidated you can get the latest data, but until then it will just read from your cached dataset and save a trip to the database. The documentation for it is here.
However, as stated in the comments and such, this isn't really what SQL Server is designed for at its core, and I don't know offhand how efficient this solution is. If I understand your problem correctly, you would need a cache dependency per user, which could very well be completely unscalable with this solution. Rather than second-guess what is going to be the most efficient solution, you really should develop, test, measure and find out for yourself. Every situation is going to be different; there is no "right way".
* In SQL Server 2000 and 7 it uses a pull model.
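A hedged sketch of the cached read path, assuming the same hypothetical dbo.UserStatus table and that SqlDependency.Start(connStr) was called once at application startup (e.g. Application_Start):

using System.Data;
using System.Data.SqlClient;
using System.Web;
using System.Web.Caching;

public static class FriendStatusCache
{
    public static DataTable GetStatuses(string connStr)
    {
        var cached = (DataTable)HttpRuntime.Cache["FriendStatus"];
        if (cached != null) return cached;   // served from cache, no DB trip

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "SELECT UserID, OnlineStatus FROM dbo.UserStatus", conn))
        {
            var dependency = new SqlCacheDependency(cmd); // create before executing
            var table = new DataTable();
            conn.Open();
            table.Load(cmd.ExecuteReader());

            // Entry is evicted automatically when SQL Server signals a change.
            HttpRuntime.Cache.Insert("FriendStatus", table, dependency);
            return table;
        }
    }
}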
All options given so far are valid, and that's how most websites do it today; however, the OP is asking for some sort of push notification mechanism as opposed to pull, and I think for that kind of thing, WebSockets are the way to go.

MongoDB, C# and NoRM + Denormalization

I am trying to use MongoDB, C# and NoRM to work on some sample projects, but at this point I'm having a much harder time wrapping my head around the data model. With RDBMSs, related data is no problem. In MongoDB, however, I'm having a difficult time deciding what to do with it.
Let's use StackOverflow as an example... I have no problem understanding that the majority of data on a question page should be included in one document. Title, question text, revisions, comments... all good in one document object.
Where I start to get hazy is on the question of user data like username, avatar, reputation (which changes especially often)... Do you denormalize and update thousands of document records every time there is a user change or do you somehow link the data together?
What is the most efficient way to accomplish a user relationship without causing tons of queries to happen on each page load? I noticed the DbReference<T> type in NoRM, but haven't found a great way to use it yet. What if I have nullable optional relationships?
Thanks for your insight!
The balance that I have found is using SQL as the normalized database and Mongo as the denormalized copy. I use an ESB to keep them in sync with each other. I use a concept that I call "prepared documents" and "stored documents". Stored documents are data that is only kept in Mongo - useful for data that isn't relational. The prepared documents contain data that can be rebuilt from the normalized database. They act as living caches in a way - they can be rebuilt from scratch if the data ever falls out of sync (for complicated documents this is an expensive process, because they require many queries to rebuild). They can also be updated one field at a time. This is where the service bus comes in: it responds to events sent after the normalized database has been updated and then updates the relevant Mongo prepared documents.
Use each database to its strengths. Allow SQL to be the write database that ensures data integrity. Let Mongo be the read-only database that is blazing fast and can contain sub-documents so that you need fewer queries.
** EDIT **
I just re-read your question and realized what you were actually asking for. I'm leaving my original answer in case it's helpful at all.
The way I would handle the Stack Overflow example you gave is to store the user id in each comment. You would load up the post, which would have all of the comments in it. That's one query.
You would then traverse the comment data and pull out an array of user ids that you need to load, and load those as a batch query (using the Q.In() query operator). That's two queries total. You would then need to merge the data together into a final form. There is a balance to strike between when to do it like this and when to use something like an ESB to manually update each document. Use what works best for each individual scenario of your data structure.
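The Q.In() operator above is NoRM's; here is a hedged sketch of the same two-query pattern written with the official MongoDB C# driver instead of NoRM (the document shapes are hypothetical):

using System.Collections.Generic;
using System.Linq;
using MongoDB.Driver;

// Hypothetical document shapes for the post/comment example.
public class Comment { public string UserId; public string Text; }
public class Post { public string Id; public List<Comment> Comments; }
public class User { public string Id; public string Name; public int Reputation; }

public static class PostLoader
{
    public static void LoadPostWithUsers(IMongoDatabase db, string postId)
    {
        // Query 1: the post document, with its comments embedded.
        var post = db.GetCollection<Post>("posts")
                     .Find(p => p.Id == postId)
                     .First();

        var userIds = post.Comments.Select(c => c.UserId).Distinct().ToList();

        // Query 2: batch-load every referenced user with $in.
        var users = db.GetCollection<User>("users")
                      .Find(Builders<User>.Filter.In(u => u.Id, userIds))
                      .ToList();

        // Merge users into the comment view model for display...
    }
}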
I think you need to strike a balance.
If I were you, I'd just reference the userid instead of their name/reputation in each post.
Unlike in an RDBMS, though, you would opt to have the comments embedded in the document.
Why do you want to avoid denormalization and updating 'thousands of document records'? MongoDB is designed for denormalization. Stack Overflow handles millions of pieces of data in the background, and some data can be stale for a short period - that's okay.
So the main idea of the above is that you should have denormalized documents in order to display them quickly in the UI.
You can't query by referenced document; either way you need denormalization.
I also suggest you have a look at the CQRS architecture. Investigate CQRS and event sourcing; these will allow you to update all this data via a queue.
