Refresh the view after an item is added or deleted in the SQL database - C#

As the title says, I am developing a C#/WPF application that runs as a client on many computers within a company and works with data in a SQL Server database. I want the item views on all computers using this application to refresh when one person adds or deletes something in the server's database. I know I might need to use SQL triggers, but I am not sure where to start.
Something along these lines:
private void Timer_Tick(object sender, EventArgs e)
{
    if (trigger1.triggered == true)
    {
        RefreshView();
        trigger1.triggered = false;
    }
}

It's not a trigger you want. It's Query Notification.
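For context, here is a minimal sketch of how Query Notifications are usually consumed from C# through the SqlDependency class. The connection string, the dbo.Items query and RefreshView() are placeholders; Service Broker must be enabled on the database, and the query has to follow the notification rules (explicit column list, two-part table names):

// Sketch only; requires using System.Data.SqlClient.
public void StartWatching(string connectionString)
{
    SqlDependency.Start(connectionString);   // call once per application
    Subscribe(connectionString);
}

private void Subscribe(string connectionString)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT Id, Name FROM dbo.Items", connection))
    {
        var dependency = new SqlDependency(command);
        dependency.OnChange += (s, e) =>
        {
            // A notification fires only once, so resubscribe and then reload.
            // Remember to marshal back to the UI thread before touching WPF controls.
            Subscribe(connectionString);
            RefreshView();               // your own method that reloads the WPF view
        };

        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            // Read the current data into your view model here.
        }
    }
}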

SQL triggers fire when something happens in the database and let you do things like delete related rows in tables without a foreign key, to maintain data integrity. I don't think they will be of any benefit here, because they are intended to act within the database rather than externally.
The problem with your request and your proposed way of doing it is that it could cause a lot of network traffic. If you don't have a problem with that, one way would be to use a timestamp. When the record changes, save the new timestamp in the database.
Now, in your timer on each machine, check to see if the timestamp has changed since the last time you checked. If it has, reload the data. If not, continue running.
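To illustrate the polling idea, a rough sketch (it assumes a LastChanged datetime column on a hypothetical dbo.Items table, an existing RefreshView() method, and a connectionString field; none of these names come from the question):

// Sketch only; requires using System.Data.SqlClient.
private DateTime _lastChangeSeen = DateTime.MinValue;

private void Timer_Tick(object sender, EventArgs e)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT MAX(LastChanged) FROM dbo.Items", connection))
    {
        connection.Open();
        var result = command.ExecuteScalar();
        if (result != DBNull.Value)
        {
            var latest = (DateTime)result;
            if (latest > _lastChangeSeen)
            {
                _lastChangeSeen = latest;
                RefreshView();   // only reload when something actually changed
            }
        }
    }
}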
There will be lots of things to think about with this proposition, though. For example, what if the data on your screen has changed but not been saved when someone else changes it? Do you lose your changes? Do you list the changes and ask what to do? Are you actually entitled to make those changes or does it require administrator approval?
It would be more normal when using Optimistic Locking (last record written is the correct one) to check for a clash at save time rather than polling for the changes. That way network traffic is reduced but you are told if the record has changed since you loaded it and given the available options on how to proceed.
In the case of lists of records, the simple way to avoid huge network traffic is a simple Refresh button to reload the list.
This article may give you some ideas on ensuring data integrity in multi-user environments: https://www.codeproject.com/Articles/1178358/What-You-See-Is-What-You-Update
I hope that this gives you some food for thought.

What I wanted was to avoid calling the refresh method on a timer because, as @oldcoder said, the network traffic would be unnecessarily large. I was just thinking of something like INSERT from pc1 -> DB -> send information about the insert to pc2, pc3. If this is not possible or too complicated, I will just refresh every 10 seconds or so.

Related

Uploading a log (.txt file) vs. inserting one record per event into the database: efficiency of recording logs

I'm trying to record logs in my database. My question is which approach puts less load on the database. I'm thinking of storing long-term logs, maybe 3-5 years maximum, for an inventory program.
Process: I'll be using a barcode scanner.
After scanning a barcode, I'll get all the details of who is logged in, the date and time, and the product details, then save them per piece.
I came up with two ideas.
After the scanning event, the record is saved to a DataTable; after finishing a batch, the DataTable is written to a *.txt file and then uploaded to my database.
After every scanned barcode, an INSERT query is executed. I suspect this option will be heavy on the server side, since I'm not the only one using this server.
What are the pros and cons of the two options?
Are there more efficient ways of storing logs?
Based on your use case, I think you also need to consider at least two additional factors. The first is how important it is that the scanned item is logged in the database immediately. If you need the scanned item to be logged because you'll be checking whether it's been scanned, for example to prevent duplicate scans, then doing a single insert is probably a very good idea. The second thing to consider is whether you will ever need to "unscan" an item, and at which part of the process. If the person scanning needs the ability to revert a scan immediately, it might be a good idea to wait until they are done with all their scanning before dumping the data to the database, as this will let you avoid ever having to delete from the table.
Overall I wouldn't worry too much about what the database can handle; SQL Server is very good at handling simultaneous single inserts into a table that's designed for that use case. If you're only going to be appending new data to the end of the table, and not updating or deleting existing records, performance is going to scale very well. The same goes for larger batch inserts: they're very efficient no matter how many rows you want to bring in, assuming your table is designed for that purpose.
So overall I would probably pick the more efficient solution from the application side for your specific use case, and then once you have decided that, you can shape the database around the code, rather than trying to shape your code around suspected limitations of the database.
What are the pros and cons of the two options?
Basically your question is which way is more efficient (bulk insert or multiple single inserts)?
The answer is always "it depends" and is situation-based, so unfortunately I don't think there's one right answer for you. Some things to consider:
The way you structure the log table.
If you choose bulk insert, how many rows do you want to insert at a time?
Is it a read-only table? And if you want to read from it, how often do you read it?
Do you need to scale it up?
etc...
Are there more efficient ways of storing logs?
There are some possible improvements I can think of (not all of them work together):
If you go with the first option, maybe you can schedule the insert to non-peak hours
If you go with the first option, chunk the log files and do the insert
Use another database to do the logging
If you go with the second option, do some load testing
Personally, I prefer the second option if the project is small to medium size and the logging is a critical part of the project.
Hope it helps.
Go with the second option and use transactions. That way the data is not committed to the database until you complete the transaction (which can be scheduled), and it also prevents broken data from getting into your database when a crash or something similar occurs.
Transactions in .net
Transaction Tutorial in C#
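As a rough sketch of how the batched insert could be wrapped in a transaction with plain ADO.NET (the ScanLog table, its columns and the ScanRecord type are made up for illustration):

// Sketch only; requires using System.Data.SqlClient and your own ScanRecord class.
static void SaveScanBatch(string connectionString, IEnumerable<ScanRecord> scans)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        {
            try
            {
                foreach (var scan in scans)
                {
                    using (var command = new SqlCommand(
                        "INSERT INTO ScanLog (Barcode, ScannedAt, UserName) VALUES (@barcode, @scannedAt, @userName)",
                        connection, transaction))
                    {
                        command.Parameters.AddWithValue("@barcode", scan.Barcode);
                        command.Parameters.AddWithValue("@scannedAt", scan.ScannedAt);
                        command.Parameters.AddWithValue("@userName", scan.UserName);
                        command.ExecuteNonQuery();
                    }
                }
                transaction.Commit();   // nothing becomes visible to other users until this point
            }
            catch
            {
                transaction.Rollback(); // a crash mid-batch leaves no partial data behind
                throw;
            }
        }
    }
}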

How to manage frequent data access in a .NET application?

I have three tables in my SQL database, say Specials, Businesses, and Comments. In my master page I have a prompt area where I need to display alternating data from these three tables based on certain conditions on each page refresh (the tables have more than 1000 records). In that case, what is the best option for retrieving the data from these tables?
Accessing data each time from the database is not a good idea, I know. Is there another good way to do this, like caching or some other technique to manage this effectively? Now it takes too much time to load the page after each page refresh.
Please give your suggestions.
At present what I was planning is to create a stored procedure for data retrieval and to keep the returned value in Session, so that we can access the data from the session rather than going to the DB each time on page refresh. But I don't know whether there is another, more effective way to accomplish the same thing.
Accessing data each time from the database is not a good idea
That's not always true; it depends on how frequently the data changes. If you choose to cache the data, you will have to invalidate the cache every time the data changes. I am assuming you do not want to display a static count or something that, once displayed, will not change. If that's not the case, you can simply store it in cookies and display it from there.
Now it takes too much time to load the page after each page refresh.
Do you know what takes too much time? Is it client-side code or server-side code (use Glimpse to find out)? If it is server-side, is it the code that hits the DB and the query execution time, or is it server-side in-memory manipulation?
Generally, the first step in improving performance is to measure it precisely; to solve issues like this, you need to know where the problem is.
Based on your first statement, if I were you I would display each count in a separate div that is refreshed asynchronously. You could update the data periodically using a timer or, even better, push it from the server (use SignalR). The update happens transparently, so no page reload is required.
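To give an idea of the push approach, here is a minimal server-side sketch using SignalR 2; the hub name, the BroadcastCounts helper and the updateCounts callback are all made up, and the JavaScript client would subscribe to updateCounts to refresh the divs:

// Sketch only; requires the Microsoft.AspNet.SignalR package and app.MapSignalR() in the OWIN startup class.
using Microsoft.AspNet.SignalR;

public class CountsHub : Hub
{
    // Clients simply connect to this hub; no server-side methods are needed for a pure push scenario.
}

// Call this from your data-access code whenever the underlying data changes:
public static void BroadcastCounts(object counts)
{
    var context = GlobalHost.ConnectionManager.GetHubContext<CountsHub>();
    context.Clients.All.updateCounts(counts);   // "updateCounts" is the client-side callback name
}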
Hope this helps.
I agree that 1000 records doesn't seem like a lot, but if you aren't concerned about a slight delay you may try the HttpContext.Cache object. It's very much like a dictionary with string keys and object values, with the addition that you can set expirations, etc.
Excuse typos, on mobile so no compile check:
var tableA = HttpContext.Current.Cache.Get("TableA");
if (tableA == null)
{
    // If it's null there was no copy in the cache, so create your
    // object using your database call.
    tableA = LoadTableA(); // array, List, however you store your data (LoadTableA is your own method)
    // Add the item to the cache with a sliding expiration of 1 minute.
    HttpContext.Current.Cache.Insert("TableA", tableA, null,
        System.Web.Caching.Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(1));
}
Now, no matter how many requests go through, you only hit the database once a minute, or once for however long you think is reasonable considering your needs. You can also trigger a removal of the item from cache, if some particular condition happens.
One suggestion is to think of your database as a mere repository to persist state. Your application tier could cache collections of your business objects, persist them when they change, and immediately return state to your presentation tier (the web page).
This assumes all updates to the data are coming from your page. If the database is being populated from different places, you'll need to either tie everything into a common application tier, or poll the database to update your cache.

Using a timestamp from SQL Server in Entity Framework to get only the changes rather than reloading the whole table/view

I would like an optimized version of my C# WinForms application for slower connections. For this reason I wanted to introduce a timestamp column into all tables (that change), load most things the first time they are needed, and then just read the updates/inserts/deletes that could have been made by other people using the application.
To give this question an example, I've added a timestamp column to a table called Konsultanci. Considering that this table might be large, I would like to load it once and then check for updates/inserts. The simple way to load it all looks like this:
private void KonsultantsListFill(ObjectListView listView)
{
    using (var context = new EntityBazaCRM(Settings.sqlDataConnectionDetailsCRM)) {
        ObjectSet<Konsultanci> listaKonsultantow = context.Konsultancis;
        GlobalnaListaKonsultantow = listaKonsultantow.ToList(); // assign to global variable to be used all around the WinForms code
    }
}
How would I go about checking whether anything in the table has changed? Also, how do I handle the updates in WinForms C#? Should I be checking for changes on each tab-page select, when opening new GUIs, on saving, on loading clients, consultants and so on? Should I be refreshing all tables all the time (for example firing a background thread on every single action the user takes), or should the check only run just before the data is actually needed?
What I'm looking for here is:
General advice on how to approach the timestamp problem and refresh data without having to load everything multiple times (slow connection issues)
A code example with Entity Framework using a timestamp column, ideally code to run just before executing something that requires the data
Timestamps are not well suited to help you detect when your cache needs to be updated. First off, they are not datetimes (read here) so they don't give you any clue as to when a record was updated. Timestamps are geared more towards assisting in optimistic locking and concurrency control, not cache management. When trying to update your cache you need a mechanism like a LastModified datetime field on your tables (make sure it's indexed!) and then a mechanism to periodically check for rows that have been modified since the last time you checked.
Regarding keeping your data fresh, you could run a separate query (possibly on another thread) that finds all records with a LastModified greater than the last time you checked and then "upsert" (update or insert) them into your cache context. Another mechanism in Entity Framework is the Context.Refresh() method.
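A rough sketch of that polling/upsert approach with the code from the question (assuming Konsultanci is given a LastModified datetime column and has an Id key; both names are assumptions, not part of the original model):

// Sketch only; requires using System.Linq.
private DateTime _lastSync = DateTime.MinValue;

private void RefreshKonsultanci()
{
    using (var context = new EntityBazaCRM(Settings.sqlDataConnectionDetailsCRM)) {
        var changed = context.Konsultancis
                             .Where(k => k.LastModified > _lastSync)
                             .ToList();

        foreach (var row in changed) {
            var index = GlobalnaListaKonsultantow.FindIndex(k => k.Id == row.Id);
            if (index >= 0)
                GlobalnaListaKonsultantow[index] = row;   // update the cached copy
            else
                GlobalnaListaKonsultantow.Add(row);       // newly inserted record
        }

        if (changed.Count > 0)
            _lastSync = changed.Max(k => k.LastModified);
    }
}

Note that this only picks up inserts and updates; deletes need a separate mechanism, such as a soft-delete flag or a deletions log.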

Providing "Totals" for custom SQL queries on a regular basis

I would like some advice on how to best go about what I'm trying to achieve.
I'd like to provide the user with a screen that displays one or more "icons" (so to speak) with a total next to each (a bit like the iPhone does). Don't worry about the UI; the question is not about that, it is more about how to handle the back end.
Let's say for argument sake, I want to provide the following:
Total number of unread records
Total number of waiting for approval
Total number of pre-approved
Total number of approved
etc...
I suppose the easiest way to describe the above would be "MS Outlook". Whenever emails arrive in your inbox, you see the number of unread emails updated immediately. I know it's local, so it's a bit different, but now imagine the same principle applied to the queries above.
This could vary from user to user, and while dynamic stored procedures are not ideal, I don't think I could write one stored procedure for each scenario; but again, that's not the issue here.
Now the recommendation part:
Should I create a timer that polls the database every minute (for example) and runs all my relevant SQL queries, which will then provide me with the relevant information?
Is there a way to do this in real time without a "polling" mechanism, i.e. whenever a query's result changes, the total/count is updated and pushed out to the relevant client(s)?
Should I have some sort of table storing these "totals" for each query, keep them updated immediately via SQL triggers, and then, when queried by a user, only read the "total" rather than trying to calculate it?
The problem with triggers is that they would have to be defined individually, and I'm really trying to keep this as generic as possible... Again, I'm not 100% clear on how to handle this, to be honest, so let me know what you think is best or how you would go about it.
Ideally, when a specific query is created, I'd like to provide two choices: a) general, where anyone can use it, and b) specific, where the "username" is used as part of the query and the returned count applies only to that user; but that's another issue.
The important part is really the notification part. While the polling is easy, I'm not sure I like it.
Imagine I had 50 queries to execute and 500 users (unlikely, but still!) looking at the screen with these icons. If 500 users each polled the database every minute and 50 queries were executed per poll, that could potentially be 25,000 queries per minute... It just doesn't sound right.
As mentioned, ideally a) I'd love to have the data change in real time rather than having to wait a minute to be notified of a new "count", and b) I want to reduce the number of queries to a minimum. Maybe I won't have a choice.
The idea behind this is that they will have a small icon for each of these queries, with a little number indicating how many records match the relevant query. When they click on it, it will bring them the relevant result data rather than just the count, and they can deal with it accordingly.
I don't know if I've explained this correctly, but if unclear, please ask, but hopefully I have and I'll be able to get some feedback on this.
Looking forward to your feedback.
Thanks.
I am not sure if this is the ideal solution, but it may be a decent one.
These are the assumptions I have made:
Your front end is a web application, i.e. ASP.NET.
The data that needs to be fetched on a regular basis is not huge.
The data that needs to be fetched does not change very frequently.
If I were in this situation, I would go with the following approach:
Implement SQL caching using the SqlCacheDependency class. The data is fetched from the database and stored in the application's cache, and the cache entry is invalidated whenever the data in the table on which the dependency is created changes, so the new data is fetched and cached again. You just read the data from the cache; everything else (polling the database, etc.) is done by ASP.NET itself. Here is a link that describes the steps to implement SQL caching, and believe me, it is not that difficult to implement.
Use AJAX to update the counts in the UI so that the user does not feel the pain of a postback.
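A minimal sketch of what the caching side could look like, assuming the database and a Totals table have been enabled for notifications with aspnet_regsql.exe and web.config contains a matching <sqlCacheDependency> entry named "CrmDatabase"; all of these names, and LoadTotalsFromDatabase(), are placeholders:

// Sketch only; requires using System.Data, System.Web and System.Web.Caching.
public static DataTable GetTotals()
{
    var totals = (DataTable)HttpContext.Current.Cache["Totals"];
    if (totals == null)
    {
        totals = LoadTotalsFromDatabase();   // your stored procedure / COUNT queries

        // The cached copy is invalidated automatically when the Totals table changes.
        var dependency = new SqlCacheDependency("CrmDatabase", "Totals");
        HttpContext.Current.Cache.Insert("Totals", totals, dependency);
    }
    return totals;
}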
What about "Improving Performance with SQL Server 2008 Indexed Views"?
"This is often particularly effective for aggregate views in decision
support or data warehouse environments"

Tracking open pages with ASP.NET

I'm sure that this question has already been asked, but I don't really see it.
Using ASP.NET and C#, how does one track which pages are open/closed?
I have tried all sorts of things, including:
modifying the application/session start/end handlers in the Global.asax file
setting a page's destructor to report back to the application
static variables (which persist globally rather than on a session by session basis)
Javascript window.onload and window.onbeforeunload event handlers
It's been educational, but so far no real solution has emerged.
The reason I want to do this is to prevent multiple users from modifying the same table at the same time. That is, I have a list of links to tables, and when a user clicks to modify a table, I would like to set that link to be locked so that NO USER can then modify that table. If the user closes the table modification page, I have no way to unlock the link to that table.
You should not worry about tracking pages open or closed. Once a webpage is rendered by IIS it's as good as "closed".
What you need to do is protect against two users updating your table at the same time by using locks. For example:
using (Mutex m = new Mutex(false, "Global\\TheNameOfTheMutex"))
{
    // If you want to wait for 5 seconds for the other page to finish,
    // you can do m.WaitOne(TimeSpan.FromSeconds(5), false)
    if (!m.WaitOne(TimeSpan.Zero, false))
        Response.Write("Another page is updating the database.");
    else
        try { UpdateDatabase(); }
        finally { m.ReleaseMutex(); } // release the lock so later requests can acquire it
}
What the snippet above does is prevent any other web page from calling the UpdateDatabase method while another page is already running the UpdateDatabase call, so no two pages can call UpdateDatabase at exactly the same time.
But this does not stop the second user from running UpdateDatabase AFTER the first call has finished, so you need to make sure your UpdateDatabase method has proper checks in place, i.e. it does not allow stale data to be written.
I think you're going about this the wrong way...
You really should be handling your concurrency via your business layer / DB and not relying on the interface, because people can and will find a way around whatever you implement.
I would recommend storing a 'key' in the page every time you serve up a page that can modify the table. The key acts as a version stamp of the last time the table was updated. Send this key along with your update and validate that they match before doing the update. If they don't, you know someone else came along and modified the table, and you should inform the user that there was a concurrency conflict, the data has changed, and ask whether they want to see the new data.
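A rough sketch of that version-stamp check with plain ADO.NET; the Items table, its columns and the variables are made up, and a SQL Server rowversion column works well as the stamp:

// Sketch only; requires using System.Data.SqlClient.
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "UPDATE Items SET Name = @name WHERE Id = @id AND RowVersion = @originalVersion",
    connection))
{
    command.Parameters.AddWithValue("@name", newName);
    command.Parameters.AddWithValue("@id", id);
    command.Parameters.AddWithValue("@originalVersion", originalVersion); // the stamp that was served with the page
    connection.Open();

    if (command.ExecuteNonQuery() == 0)
    {
        // No rows were updated: someone modified the row after the page was served.
        // Report the concurrency conflict and offer to reload the new data.
    }
}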
You should not use page requests to lock database tables. That won't work well for many reasons. Each request creates a new page object, and there are multiple application contexts, which may be on multiple threads/processes, etc., any of which may drop off the face of the earth at any point in time.
The highest level of tackling this issue is to find out why you need to lock the tables in the first place. One common way to avoid this is to accept all user table modifications and allow the users to resolve their conflicts.
If locking is absolutely necessary, you may do well with a lock table that is modified before and after changes. This table should have a way of expiring locks when users walk away without releasing them.
Eg. See http://www.webcheatsheet.com/php/record_locking_in_web_applications.php It's for PHP but the concept is the same.
