ASP.NET: working with a normalized database - C#

First of all, sorry for what may seem like a dumb question, but I have zero experience in this area.
At work I was given a database (which is way more normalized than it needs to be), and for each insert/update/delete/select I have a separate stored procedure.
As someone with zero experience I started creating my own stored procedures and displaying text instead of IDs, and it was all going well until I realized I have to update these records at some point :).
So my question is: can you give me directions on how to display "eye-friendly" information in the GridView and at the same time be able to edit/update that information?
Currently I am just calling a stored procedure and databinding the GridView to it.
Thanks in advance!

Study time! I would like to explain this directly, but it is much more useful to point you to existing tutorials that cover what you need; the topic is rich enough that it would be hard to do it justice in just a few sentences/examples.
The best option is to use the tools/classes that .NET already provides. Among them is the DataSet, which will help you enormously with the whole set of operations: selects, updates, deletes.
You can bind a DataSource to a GridView, which will auto-fill the data, and you can then allow certain kinds of modifications to it.
Another approach is Entity Framework, which lets you modify data the way you want - it does a lot of the work for you.
The last topic you should be interested in is LINQ - simply get the data and do queries/modifications in your app (not on the SQL server).
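Before the links, here's a minimal sketch of the DataTable/GridView pattern for your exact situation, with every procedure, table and column name made up: the SELECT proc already joins the lookup tables so the grid shows names instead of IDs, while DataKeyNames quietly keeps the real key around so an UPDATE proc can be called on edit.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.Web.UI.WebControls;

// Code-behind sketch; GridView1 is declared in the .aspx markup.
public partial class ProductsPage : System.Web.UI.Page
{
    private const string ConnString = "..."; // your connection string

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack) BindGrid();
    }

    private void BindGrid()
    {
        using (var conn = new SqlConnection(ConnString))
        using (var cmd = new SqlCommand("dbo.usp_Products_SelectFriendly", conn)) // hypothetical proc
        {
            cmd.CommandType = CommandType.StoredProcedure;
            var table = new DataTable();
            new SqlDataAdapter(cmd).Fill(table);            // Fill opens/closes the connection itself

            GridView1.DataKeyNames = new[] { "ProductId" }; // the real key, never shown to the user
            GridView1.DataSource = table;
            GridView1.DataBind();
        }
    }

    // Wire this to the GridView's RowUpdating event.
    protected void GridView1_RowUpdating(object sender, GridViewUpdateEventArgs e)
    {
        int id = (int)GridView1.DataKeys[e.RowIndex].Value; // key recovered from DataKeyNames
        // ... call your UPDATE stored procedure with id plus the edited values, then rebind
        BindGrid();
    }
}
```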
Check links below:
What is ADO.NET?
http://www.entityframeworktutorial.net/what-is-entityframework.aspx
This YT video is also worth a look, though be warned about the video/sound quality; try other videos with a similar name if it's too rough.
http://tutorialspoint.com/linq/

Related

Is it possible for Lucene to monitor a SQL table and keep itself updated?

I am trying to understand some basics of Lucene, the full text search engine. More specifically I am looking at Lucene.Net.
Today I have an old legacy .NET 4.8 web app. Some of it is MVC, but the newer parts follow a pretty nice API-first pattern. The app holds a lot of records (approx. half a million) with tons of different fields. The search functionality there is outdated, to say the least: a ton of old Linq2SQL queries that fan out into LIKE queries.
I would like to introduce a new and better way to search records, so I started looking at Lucene.Net. But I am trying to understand one key concept, and I can't seem to find the answer anywhere, and I think it might be because it cannot be done, but I would like to make sure.
Is it possible to set up Lucene to monitor a SQL table or view so I don't have to maintain the Lucene index from within my code? The code of this app does not lend itself to easily keeping a Lucene index updated when things are added, changed or deleted, but the database is a good source of truth. I can live with a small delay in having the index up to date. Basically, I would like to define for each business model which fields are part of the index and what the id is, and then be able to query with that index from the C# server-side code of my web app.
Is such a scenario even possible or am I asking too much?
It's totally possible, but not out of the box - you have to implement it yourself. Fundamentally you need three things:
1. A way to know every time a piece of relevant data in the SQL database changes.
2. A place to capture information about that change; call it a change log.
3. A routine that reads the change log, applies those changes to the Lucene.Net index, and then marks the record in the change log as processed.
There are of course lots of different ways to handle each of these.
The SO answer "Lucene.Net index updates, when a manual change is done in SQL Database" provides more detail on one way this can be accomplished.
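As a rough sketch of piece 3: everything below is an assumption for illustration - a dbo.ChangeLog table (Id, RecordId, Operation, Processed) populated by triggers on the monitored table, an index keyed by a stored "id" field, and BuildDocument/MarkProcessed as placeholders you would fill in.

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using Lucene.Net.Index;

public class ChangeLogIndexer
{
    // Reads unprocessed change-log rows, replays them into the Lucene.Net index,
    // then flags each row as processed. All schema and field names are hypothetical.
    public void ProcessChangeLog(SqlConnection conn, IndexWriter writer)
    {
        // Buffer the pending changes first so we aren't writing while a reader is open.
        var pending = new List<(long LogId, string RecordId, string Op)>();
        using (var cmd = new SqlCommand(
            "SELECT Id, RecordId, Operation FROM dbo.ChangeLog WHERE Processed = 0 ORDER BY Id", conn))
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                pending.Add((reader.GetInt64(0), reader.GetInt32(1).ToString(), reader.GetString(2)));
        }

        foreach (var change in pending)
        {
            if (change.Op == "D")
            {
                writer.DeleteDocuments(new Term("id", change.RecordId));
            }
            else // insert or update: rebuild the document from the source row
            {
                var doc = BuildDocument(change.RecordId);                     // placeholder: row -> Document
                writer.UpdateDocument(new Term("id", change.RecordId), doc); // delete-then-add by key
            }
            MarkProcessed(conn, change.LogId); // placeholder: UPDATE dbo.ChangeLog SET Processed = 1 ...
        }
        writer.Commit();
    }

    private Lucene.Net.Documents.Document BuildDocument(string recordId) =>
        throw new NotImplementedException(); // map your business fields here
    private void MarkProcessed(SqlConnection conn, long logId) =>
        throw new NotImplementedException();
}
```

Run that on a timer and you get the "small delay" eventual consistency the question says is acceptable.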

Best practice for generating long-running SQL stats for charting

I have some SQL Server stored procs that generate statistical data for charting in a C# web application.
Right now the user in the web app has to wait about 5 minutes to see these charts with updated data, and this is a pain in the neck for the user and for me.
Some of the stored procs take more than 5 minutes to generate the data, but the web user doesn't need to see the info on the fly - updating the charts every 2-3 hours would be fine.
So, I don't know what the best practice is to solve this.
I was thinking of creating a Windows service that would call the SPs every 2-3 hours and then store the data in separate tables.
Any clue on how to deal with this?
Appreciate the help
As I said in the comments, indexed views (kind of like materialized views) can increase the performance of certain common queries without your having to create temporary tables and the like.
The benefits of indexed views are performance and that they don't require much extra coding or effort. When you create an indexed view, as opposed to a temp table, the query optimizer will (should) know when to take advantage of it, without the end user needing to reference a temp or aggregate table.
Examples of the benefits of indexed views and how to implement them can be found here: http://msdn.microsoft.com/en-us/library/dd171921(v=sql.100).aspx
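For a flavor of what that looks like, a minimal sketch (the table, columns and view name are all made up). Note the SCHEMABINDING, the two-part names, and the COUNT_BIG(*) that SQL Server requires, and that it is the unique clustered index that actually materializes the view:

```csharp
using System.Data.SqlClient;

public static class IndexedViewSetup
{
    // Run once, e.g. from a migration. dbo.Orders and its columns are hypothetical.
    public static void Create(string connectionString)
    {
        const string createView = @"
            CREATE VIEW dbo.vOrderStats WITH SCHEMABINDING AS
            SELECT CustomerId,
                   COUNT_BIG(*) AS OrderCount,            -- COUNT_BIG(*) is mandatory with GROUP BY
                   SUM(ISNULL(Total, 0)) AS TotalSpent    -- SUM over a nullable column must be wrapped
            FROM dbo.Orders                               -- SCHEMABINDING requires two-part names
            GROUP BY CustomerId;";

        // The unique clustered index is what persists the view's result set.
        const string createIndex = @"
            CREATE UNIQUE CLUSTERED INDEX IX_vOrderStats ON dbo.vOrderStats (CustomerId);";

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            new SqlCommand(createView, conn).ExecuteNonQuery();
            new SqlCommand(createIndex, conn).ExecuteNonQuery();
        }
    }
}
```

One caveat: automatic view matching by the optimizer is an Enterprise edition feature; on other editions you query the view directly with the NOEXPAND hint.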
Here are some links on indexed views. As the comments said, views let you get at information quickly rather than always re-running a select through a stored proc. Read the second link for a very good explanation of views.
MSDN
http://msdn.microsoft.com/en-ca/library/ms187864%28v=sql.105%29.aspx
Very well explained here
http://www.codeproject.com/Articles/199058/SQL-Server-Indexed-Views-Speed-Up-Your-Select-Quer
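One more note: if some of your aggregations can't be expressed as indexed views (they have real restrictions - no outer joins, no subqueries, etc.), the Windows-service idea from the question is a perfectly reasonable fallback. A minimal sketch, assuming a hypothetical dbo.usp_RebuildChartStats proc that repopulates the summary tables the charts read from:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.ServiceProcess;
using System.Threading;

public class StatsRefreshService : ServiceBase
{
    private const string ConnString = "..."; // your connection string
    private Timer _timer;

    protected override void OnStart(string[] args)
    {
        // Run immediately, then every 2 hours.
        _timer = new Timer(_ => RefreshStats(), null, TimeSpan.Zero, TimeSpan.FromHours(2));
    }

    protected override void OnStop() => _timer?.Dispose();

    private void RefreshStats()
    {
        using (var conn = new SqlConnection(ConnString))
        using (var cmd = new SqlCommand("dbo.usp_RebuildChartStats", conn)) // hypothetical proc
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandTimeout = 600; // the proc legitimately runs for several minutes
            conn.Open();
            cmd.ExecuteNonQuery();    // repopulates the summary tables the charts query
        }
    }
}
```

The web pages then select from the small, pre-aggregated tables, so the five-minute cost is paid in the background instead of per request.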

MongoDB, C# and NoRM + Denormalization

I am trying to use MongoDB, C# and NoRM to work on some sample projects, but at this point I'm having a much harder time wrapping my head around the data model. With RDBMSs, related data is no problem. In MongoDB, however, I'm having a difficult time deciding what to do with it.
Let's use StackOverflow as an example... I have no problem understanding that the majority of data on a question page should be included in one document. Title, question text, revisions, comments... all good in one document object.
Where I start to get hazy is on the question of user data like username, avatar, reputation (which changes especially often)... Do you denormalize and update thousands of document records every time there is a user change or do you somehow link the data together?
What is the most efficient way to accomplish a user relationship without causing tons of queries to happen on each page load? I noticed the DbReference<T> type in NoRM, but haven't found a great way to use it yet. What if I have nullable optional relationships?
Thanks for your insight!
The balance that I have found is using SQL as the normalized database and Mongo as the denormalized copy, with an ESB keeping them in sync.

I use a concept I call "prepared documents" and "stored documents". Stored documents are data that is kept only in Mongo - useful for data that isn't relational. Prepared documents contain data that can be rebuilt from the normalized database. They act as living caches in a way: they can be rebuilt from scratch if the data ever falls out of sync (for complicated documents this is an expensive process, because rebuilding them requires many queries). They can also be updated one field at a time. That is where the service bus comes in: it responds to events sent after the normalized database has been updated and then updates the relevant Mongo prepared documents.

Use each database to its strengths. Let SQL be the write database that ensures data integrity. Let Mongo be the read-only database that is blazing fast and can contain sub-documents so that you need fewer queries.
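As a sketch of that sync step, with the bus wiring omitted and every name made up (the Mongo call uses the current official C# driver, not NoRM): a handler subscribed to a "user renamed" event patches the denormalized copy in every affected prepared document.

```csharp
using System.Threading.Tasks;
using MongoDB.Driver;

public class Post
{
    public string Id { get; set; }
    public string AuthorId { get; set; }
    public string AuthorName { get; set; } // denormalized copy of the user's name
}

public class UserRenamedEvent
{
    public string UserId { get; set; }
    public string NewName { get; set; }
}

public class UserRenamedHandler
{
    private readonly IMongoCollection<Post> _posts;
    public UserRenamedHandler(IMongoCollection<Post> posts) => _posts = posts;

    // The bus calls this after SQL (the source of truth) has committed the rename.
    public Task HandleAsync(UserRenamedEvent e) =>
        _posts.UpdateManyAsync(
            p => p.AuthorId == e.UserId,
            Builders<Post>.Update.Set(p => p.AuthorName, e.NewName));
}
```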
** EDIT **
I just re-read your question and realized what you were actually asking for. I'm leaving my original answer above in case it's helpful at all.
The way I would handle the Stack Overflow example you gave is to store the user id in each comment. You would load up the post, which would have all of the comments in it. That's one query.
You would then traverse the comment data and pull out an array of user ids that you need to load, and load those as a batch query (using the Q.In() query operator). That's two queries total. You would then need to merge the data together into its final form. There is a balance to strike between doing it like this and using something like an ESB to manually update each document; use whatever works best for each individual scenario of your data structure.
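In code, the two-query approach might look like this - sketched with the current official MongoDB C# driver rather than NoRM (whose API I won't vouch for), with made-up class names; Filter.In is the driver's equivalent of NoRM's Q.In:

```csharp
using System.Collections.Generic;
using System.Linq;
using MongoDB.Driver;

public class Comment { public string UserId { get; set; } public string Text { get; set; } }
public class Post    { public string Id { get; set; } public List<Comment> Comments { get; set; } }
public class User    { public string Id { get; set; } public string Name { get; set; } public int Reputation { get; set; } }

public static class QuestionPageLoader
{
    public static void Load(IMongoCollection<Post> posts, IMongoCollection<User> users, string postId)
    {
        // Query 1: the post, with all of its comments embedded.
        Post post = posts.Find(p => p.Id == postId).First();

        // Query 2: batch-load every referenced user in one $in query.
        var ids = post.Comments.Select(c => c.UserId).Distinct().ToList();
        var byId = users.Find(Builders<User>.Filter.In(u => u.Id, ids))
                        .ToList()
                        .ToDictionary(u => u.Id);

        // Merge in memory for display.
        foreach (var c in post.Comments)
        {
            var author = byId[c.UserId]; // author.Name, author.Reputation, ...
        }
    }
}
```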
I think you need to strike a balance.
If I were you, I'd just reference the user id in each post instead of their name/reputation.
Unlike in an RDBMS, though, you would opt to have the comments embedded in the document.
Why do you want to avoid denormalization and updating 'thousands of document records'? MongoDB is designed for denormalization. Stack Overflow handles millions of pieces of data in the background, and some data being stale for a short period is okay.
So the main idea of the above is that you should keep denormalized documents in order to display them quickly in the UI.
You can't query by a referenced document; either way you need denormalization.
I also suggest having a look at the CQRS and event-sourcing architectures. They will allow you to apply all these updates through a queue.

Populate .NET Report Viewer control from a data table returned from a web service

Thanks in advance for any assistance. It seems like what I'm trying to do should be very simple, but after literally days of scouring the internet I can't seem to find an answer that pulls it all together in a simple fashion suitable for an experienced .NET developer who is new to the Report Viewer.
Very simply, what I want is an example or step by step demo for the following question:
How do I populate a .NET Report Viewer control using a DataTable returned from a web service? Within the web service a stored proc is called that returns what eventually ends up in the DataTable that gets sent back to the application. I would prefer a C# answer, but VB is also fine, as I am fairly familiar with it.
Related questions to this are:
1. Does the returned DataTable need to have column names and types (does it need to be strongly typed)?
2. If so, do I need to know these column names/types when I am designing the report, or is there a way to create them dynamically?
If you're doing a client-side report (.rdlc), then I would recommend writing a class that holds the data for a row, and binding the report's data source (report.LocalReport.DataSources, I believe) to a collection of that type. You can use DataSets (which you could put your DataTable in), but in my opinion that mucks up a project and I don't care for it. I know you're looking for a step-by-step answer, and hopefully someone will give you one, but hopefully my response will get you to change direction a bit.
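For the DataTable case specifically, the binding is only a few lines. A sketch for the web-forms ReportViewer, where "ReportDataSet", the report path, and GetReportData are placeholders:

```csharp
using System;
using System.Data;
using Microsoft.Reporting.WebForms;

// Code-behind sketch; ReportViewer1 is declared in the .aspx markup.
public partial class ReportPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (IsPostBack) return;

        DataTable table = GetReportData(); // placeholder: the web-service call returning the DataTable

        ReportViewer1.ProcessingMode = ProcessingMode.Local;
        ReportViewer1.LocalReport.ReportPath = Server.MapPath("~/Reports/MyReport.rdlc");
        ReportViewer1.LocalReport.DataSources.Clear();
        ReportViewer1.LocalReport.DataSources.Add(
            new ReportDataSource("ReportDataSet", table)); // must equal the DataSet name inside the .rdlc
        ReportViewer1.LocalReport.Refresh();
    }

    private DataTable GetReportData() { /* call the web service here */ return new DataTable(); }
}
```

As far as I know, this also answers the two related questions: the DataTable's column names/types must match the fields the .rdlc was designed against, so the report is designed against a known schema rather than created dynamically at runtime.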
If you're talking about a server-side report (.rdl), then you'll need to rethink things a bit, as server-side reports PULL data; they do not have data pushed into them.
HTH,
Brian

What goes into rolling your own wiki using c# and sql?

I'd like to understand how a wiki works, at least at a high level. When a user saves changes, does it always insert a new row in the database for that wiki article (10 revisions, 10 rows in the database)?
I agree with all the answers. Wikis normally handle every edit as a new record inside the database.
You may be interested in checking out the full database layout diagram of MediaWiki, the wiki engine behind Wikipedia.
Note that the full text of each revision is stored in a MEDIUMBLOB field in the text table.
I just wrote a wiki in C# actually. One thing I would add to everyone else's comments is that you'll want to make sure you can compare two versions. For doing this in C# I strongly suggest the C# implementation of the diff_match_patch library from Google. It's quite fast and it's quite easy to extend if you need more in the way of pretty printing or handling of structured text like HTML.
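For reference, comparing two stored revisions with the C# port looks roughly like this:

```csharp
using System.Collections.Generic;
using DiffMatchPatch;

public static class RevisionDiffer
{
    // Produces <ins>/<del>-marked HTML showing how newText differs from oldText.
    public static string DiffAsHtml(string oldText, string newText)
    {
        var dmp = new diff_match_patch();
        List<Diff> diffs = dmp.diff_main(oldText, newText);
        dmp.diff_cleanupSemantic(diffs);   // merge character-level noise into readable chunks
        return dmp.diff_prettyHtml(diffs); // ready to render on a page-history view
    }
}
```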
Every edit in the wiki is a new entry in the database.
That way revisions can be tracked; it is all about the community and tracking.
Behind the scenes the database is storing the datetime, the changes made, etc.
Yes, it does. Otherwise it will be impossible to see full page history, which is what's expected from a Wiki implementation.
Yes.
...
Seems a bit short. Let's just say that you have to store the original article and then details about each change afterwards. So you might have an Article table and a Revision table; that way you can roll back to any prior state.
Of course, the design of the tables, and the logic behind stripping the revised text from the original and storing it separately, is pretty complex.
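A minimal sketch of that two-table shape as C# models (all names made up; storing the full text per revision is the simple approach - MediaWiki effectively does the same, and diff-based storage is an optimization):

```csharp
using System;

// One row per article: the stable identity plus a pointer to the current revision.
public class Article
{
    public int Id { get; set; }
    public string Title { get; set; }
    public int CurrentRevisionId { get; set; }
}

// One row per edit, append-only. Full history is "WHERE ArticleId = ? ORDER BY EditedAtUtc",
// and rollback is just repointing CurrentRevisionId at an older row.
public class Revision
{
    public int Id { get; set; }
    public int ArticleId { get; set; }
    public string Text { get; set; }        // full text of the page at this revision
    public string Author { get; set; }
    public DateTime EditedAtUtc { get; set; }
}
```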
Here is the dev blog for TWiki that might give you some useful information. http://twiki.org/cgi-bin/view/Blog/WebHome?category=Development.
Is SQL a requirement of the project? There is a lot of movement around NoSQL at the moment, and a wiki seems to fit nicely into a document-store database. Some information on this can be found here: http://nosql-database.org/.
There is an implementation on CodePlex at http://wikiplex.codeplex.com/. This comes from another Stack Overflow post: asp.net mvc wiki.
You might want to check whether a version-control engine could be used for the text parts (users etc. might still need a database), as most version-control systems already implement all the necessary functions (history, diffing, log entries for changes, ...), which would save you a lot of work.
