I'm sure this question has been asked before, but I couldn't find it.
Using ASP.NET and C#, how does one track the pages that are open/closed?
I have tried all sorts of things, including:
modifying the Global.asax application/session start/end handlers
using a page's destructor (finalizer) to report back to the application
static variables (which persist globally rather than on a session-by-session basis)
JavaScript window.onload and window.onbeforeunload event handlers
It's been educational, but so far no real solution has emerged.
The reason I want to do this is to prevent multiple users from modifying the same table at the same time. That is, I have a list of links to tables, and when a user clicks to modify a table, I would like that link to be locked so that no user can then modify that table. If the user closes the table-modification page, I have no way to unlock the link to that table.
You should not worry about tracking pages being opened or closed. Once a web page is rendered by IIS, it's as good as "closed".
What you need to do is protect against two users updating your table at the same time by using locks. For example:
using (Mutex m = new Mutex(false, "Global\\TheNameOfTheMutex"))
{
    // To wait up to 5 seconds for another page to finish, use
    // m.WaitOne(TimeSpan.FromSeconds(5), false) instead of TimeSpan.Zero.
    if (!m.WaitOne(TimeSpan.Zero, false))
    {
        Response.Write("Another page is updating the database.");
    }
    else
    {
        try { UpdateDatabase(); }
        finally { m.ReleaseMutex(); } // release so waiting pages are not abandoned
    }
}
What the snippet above does is prevent any other web page from calling the UpdateDatabase method while another page is already running it. So no two pages can call UpdateDatabase at exactly the same time.
But this does not protect the second user from running UpdateDatabase AFTER the first call has finished, so you need to make sure your UpdateDatabase method has proper checks in place, i.e. it does not allow stale data to be updated.
I think you're going the wrong way about this...
You really should be handling your concurrency via your business layer or database, and not relying on the interface, because people can and will find a way around whatever you implement.
I would recommend storing a 'key' in the page every time you serve up a page that can modify the table. The key is like a version stamp of the last time the table was updated. Send this key along with your update and validate that the keys match before doing the update. If they don't, you know someone else came along and modified that table, and you should inform the user that there was a concurrency conflict, the data has changed, and ask whether they want to see the new data.
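A minimal sketch of that version-stamp check, assuming a SQL Server rowversion column; the table and column names (Widgets, RowVersion) and the helper are hypothetical, not from the question:

using System.Data.SqlClient;

// Hypothetical: 'Widgets' has a rowversion column 'RowVersion'. The page renders the
// stamp into a hidden field and the client sends it back along with the update.
public bool TryUpdateWidget(SqlConnection conn, int id, string name, byte[] originalVersion)
{
    const string sql = @"
        UPDATE Widgets
        SET    Name = @name
        WHERE  Id = @id AND RowVersion = @version"; // only if nobody changed it since

    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@name", name);
        cmd.Parameters.AddWithValue("@id", id);
        cmd.Parameters.AddWithValue("@version", originalVersion);
        // 0 rows affected means the stamp no longer matches: someone else updated first,
        // so the caller should report a concurrency conflict instead of overwriting.
        return cmd.ExecuteNonQuery() == 1;
    }
}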
You should not use page requests to lock database tables. That won't work well for many reasons. Each request creates a new page object, and there are multiple application contexts, which may be on multiple threads/processes, etc. Any of which may drop off the face of the earth at any point in time.
The highest level of tackling this issue is to find out why you need to lock the tables in the first place. One common way to avoid this is to accept all user table modifications and allow the users to resolve their conflicts.
If locking is absolutely necessary, you may do well with a lock table that is modified before and after changes. This table should have a way of expiring locks when users walk away without releasing them.
E.g. see http://www.webcheatsheet.com/php/record_locking_in_web_applications.php. It's for PHP, but the concept is the same.
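A rough sketch of such a lock table, assuming SQL Server; the table name (TableLocks), its schema, and the helper below are all hypothetical:

using System;
using System.Data.SqlClient;

// Assumed schema: CREATE TABLE TableLocks (TableName nvarchar(128) PRIMARY KEY,
//                                          LockedBy nvarchar(64), ExpiresAt datetime2)
public static bool TryAcquireLock(SqlConnection conn, string table, string user, int ttlSeconds)
{
    // Clear any expired lock for this table first (the primary key is TableName).
    using (var clean = new SqlCommand(
        "DELETE FROM TableLocks WHERE TableName = @t AND ExpiresAt < SYSUTCDATETIME()", conn))
    {
        clean.Parameters.AddWithValue("@t", table);
        clean.ExecuteNonQuery();
    }

    // Insert a lock row only if no live lock exists; 1 row affected means we got the lock.
    // (Two racing inserts are still serialized by the primary key constraint.)
    using (var cmd = new SqlCommand(@"
        INSERT INTO TableLocks (TableName, LockedBy, ExpiresAt)
        SELECT @t, @u, DATEADD(SECOND, @ttl, SYSUTCDATETIME())
        WHERE NOT EXISTS (SELECT 1 FROM TableLocks WHERE TableName = @t)", conn))
    {
        cmd.Parameters.AddWithValue("@t", table);
        cmd.Parameters.AddWithValue("@u", user);
        cmd.Parameters.AddWithValue("@ttl", ttlSeconds);
        return cmd.ExecuteNonQuery() == 1;
    }
}

Releasing the lock is just a DELETE of the row; the ExpiresAt column is what saves you when a user closes the page without unlocking.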
Related
As in the title, I am developing an application using C# and WPF which acts as a client on many computers, handling data in a SQL database within a company. I want it to refresh the views of the items on all computers using this application when one person adds or deletes something from the server's DB. I know I might need to use SQL triggers, but I am kind of confused about where to start.
Just this type of idea:
private void Timer_Tick(object sender, EventArgs e)
{
    if (trigger1.triggered)
    {
        RefreshView();
        trigger1.triggered = false; // was '==', which compares instead of assigning
    }
}
It's not a trigger you want. It's Query Notification.
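A minimal query-notification sketch using SqlDependency; the connection string, the dbo.Items table, and the class names are assumptions. Note the database needs Service Broker enabled, and the query must use two-part table names with an explicit column list:

using System.Data.SqlClient;

// Sketch only: ItemWatcher, dbo.Items and RefreshView are illustrative names.
public class ItemWatcher
{
    private readonly string _connStr;

    public ItemWatcher(string connStr)
    {
        _connStr = connStr;
        SqlDependency.Start(_connStr); // call once per app domain
        Subscribe();
    }

    private void Subscribe()
    {
        using (var conn = new SqlConnection(_connStr))
        using (var cmd = new SqlCommand("SELECT Id, Name FROM dbo.Items", conn))
        {
            var dep = new SqlDependency(cmd);
            dep.OnChange += delegate
            {
                Subscribe();   // notifications are one-shot, so re-subscribe
                RefreshView(); // fires on a pool thread; marshal to the UI thread in WPF
            };
            conn.Open();
            cmd.ExecuteReader().Dispose(); // executing the command registers the notification
        }
    }

    private void RefreshView() { /* reload the list/grid here */ }
}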
SQL triggers fire when something happens in the database, and let you do things like delete related rows in tables without a foreign key, to maintain data integrity. I don't think they will be of any benefit here, because they are intended to do things within the database rather than externally.
The problem with your request and your proposed way of doing it is that it could cause a lot of network traffic. If you don't have a problem with that, one way would be to use a timestamp: when a record changes, save the new timestamp in the database.
Then, in your timer on each machine, check whether the timestamp has changed since the last time you checked. If it has, reload the data; if not, continue running.
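A rough sketch of that polling check; GetLastModified and the LastModified column are assumptions for illustration, not from the question:

private DateTime _lastSeen = DateTime.MinValue;

private void PollTimer_Tick(object sender, EventArgs e)
{
    // Hypothetical helper, e.g. SELECT MAX(LastModified) FROM dbo.Items
    DateTime current = GetLastModified();
    if (current > _lastSeen)
    {
        _lastSeen = current;
        RefreshView(); // reload only when something actually changed
    }
}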
There will be lots of things to think about with this proposition, though. For example, what if the data on your screen has changed but not been saved when someone else changes it? Do you lose your changes? Do you list the changes and ask what to do? Are you actually entitled to make those changes or does it require administrator approval?
It would be more normal when using optimistic locking (last record written wins) to check for a clash at save time rather than polling for changes. That way network traffic is reduced, but you are still told if the record has changed since you loaded it and given the available options on how to proceed.
In the case of lists of records, the simplest way to avoid huge network traffic is a Refresh button to reload the list.
This article may give you some ideas on ensuring data integrity in multi-user environments: https://www.codeproject.com/Articles/1178358/What-You-See-Is-What-You-Update
I hope that this gives you some food for thought.
What I wanted to do is avoid calling the method on a refresh because, as @oldcoder said, the network traffic will be unnecessarily big. I was just thinking of something like: INSERT from pc1 -> DB -> send information about the insert to pc2 and pc3. If this is not possible or too complicated, I will just refresh every 10 seconds or so.
I have three tables in my SQL database, say Specials, Businesses, and Comments. In my master page I have a prompt area where I need to display alternating data from these 3 tables, based on certain conditions, on each page refresh (these tables have more than 1000 records). In that case, what is the best option for retrieving data from these tables?
Accessing data each time from the database is not a good idea, I know. Is there another good way to do this, like caching or some other technique to manage it effectively? Now it takes too much time to load the page after each page refresh.
Please give your suggestions.
At present my plan is to create a stored procedure for data retrieval and to keep the returned value in the Session,
so that we can access the data from the session rather than going to the DB on each page refresh. But I don't know whether there is a more effective way to accomplish this.
Accessing data each time from the database is not a good idea
That's not always true; it depends on how frequently the data changes. If you choose to cache the data, you will have to invalidate the cache every time the data changes. I am assuming you do not want to display a static count or something that, once displayed, will not change. If that's not the case, you can simply store the value in cookies and display it from there.
Now it takes too much time to load the page after each page refresh.
Do you know what takes too much time? Is it client-side code or server-side code (use Glimpse to find out)? If server-side, is it the code that hits the DB and the query execution time, or is it server-side in-memory manipulation?
Generally, the first step in improving performance is to measure it precisely; to solve issues like this, you need to know where the problem is.
Based on your first statement, if I were you, I would display each count in a separate div that is refreshed asynchronously. You could choose to update the data periodically using a timer or, even better, push it from the server (using SignalR). The update happens transparently, so no page reload is required.
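A minimal sketch of that server push, assuming the Microsoft.AspNet.SignalR package with app.MapSignalR() in a Startup class; the hub name and the updateCounts client method are hypothetical:

using Microsoft.AspNet.SignalR;

// Clients subscribe to this hub; the server pushes fresh values through it.
public class PromptHub : Hub { }

public static class PromptNotifier
{
    // Call this wherever the underlying data changes on the server.
    public static void PushCounts(int specials, int businesses, int comments)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<PromptHub>();
        // 'updateCounts' is a JavaScript handler the pages register on the client side.
        context.Clients.All.updateCounts(specials, businesses, comments);
    }
}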
Hope this helps.
I agree that 1000 records doesn't seem like a lot, but if you really aren't concerned about a slight delay, you may try the HttpContext.Cache object. It's very much like a dictionary with string keys and object values, with the addition that you can set expirations, etc.
Excuse typos, on mobile so no compile check:
var tableA = (List<MyRow>)HttpContext.Current.Cache.Get("TableA");
if (tableA == null)
{
    // Null means there was no copy in the cache, so create your
    // object using your database call. (MyRow and LoadTableAFromDatabase
    // are placeholders for however you store and load your data.)
    tableA = LoadTableAFromDatabase();
    // Add the item to the cache with a sliding expiration of 1 minute.
    HttpContext.Current.Cache.Insert("TableA", tableA, null,
        System.Web.Caching.Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(1));
}
Now, no matter how many requests go through, you only hit the database once a minute, or once for however long you think is reasonable considering your needs. You can also trigger a removal of the item from cache, if some particular condition happens.
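For that conditional removal, eviction is a one-liner (same hypothetical "TableA" key as above):

// Evict the cached copy so the next request reloads fresh data from the database.
HttpContext.Current.Cache.Remove("TableA");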
One suggestion is to think of your database as a mere repository to persist state. Your application tier could cache collections of your business objects, persist them when they change, and immediately return state to your presentation tier (the web page).
This assumes all updates to the data are coming from your page. If the database is being populated from different places, you'll need to either tie everything into a common application tier, or poll the database to update your cache.
I have an ordering problem between two handlers, one deleting and one reordering pictures, and would like some advice on the best solution.
On the UI some pictures are deleted; the user clicks the delete button. The whole flow, from the delete command up to an event handler which actually deletes the physical files, is started.
Then the user immediately sorts the remaining pictures. A new flow, from the reorder command up to the reordering event handler for the file system, fires again.
There is already a concurrency problem: the reordering cannot be applied correctly without the deletion being done. At the moment this problem is handled with a sort of lock. A temp file is created and then deleted at the end of the deletion flow. While that file exists, the other thread (reordering or deletion, depending on the user's actions) waits.
This is not an ideal solution and I would like to change it.
The potential solution must also be pretty fast (of course the current one is not), as the UI is updated through a JSON call at the end of the ordering.
In a later implementation we are thinking of using a queue of events, but for the moment we are pretty stuck.
Any idea would be appreciated!
Thank you, mosu'!
Edit:
Other eventual-consistency problems we had were solved by using a JavaScript data manager on the client side. Basically being optimistic and tricking the user! :)
I'm starting to believe this is the way to go here as well. But then how would I know when the data has changed in the file system?
Max's suggestions are very welcome, and normally they apply.
It is sometimes hard to explain all the details of an implementation, but there is one detail that should be mentioned:
the way we store the pictures means that when they are reordered, all picture paths (and thus all links) change.
A colleague had the very good idea of simply removing this part. That means that even if the order changes, the path of the picture remains the same. On the UI side there will be a mapping between the picture's index in the display order and its path, which means there is no need to change the file system anymore, except when deleting.
As we want to be as permissive as possible with our users, this is the best solution for us.
I think, in general, it is also a good approach when there appears to be a concurrency issue. Can the concurrency be removed?
Here is one thought on this.
What exactly are you reordering? Pictures? Based on, say, date?
Why is there a command for this? Is the result of this command going to be seen by everyone, or just by this particular user?
I can only guess, but it looks like you've got a presentation question here. There is no need to store pictures in some order on the write side; it's just a list of names and links to the file storage. What you should do is store a little field somewhere in the user settings or collection settings: date ascending or name descending. So your Reorder command should change only this little field. Then, when you are loading the gallery, this field should be read first, and based on it you load one view or another. Since storage is cheap nowadays, you can keep differently sorted collections on the read side for every sort parameter you need.
To sum up: the Delete command changes the collection on the write side, but the Reorder command is just a user or collection setting. Hence, there is no concurrency here.
Update
Based on your comments and clarifications.
Of course you can, and probably should, restrict user actions to one at a time, if the time for deletion and reordering is reasonably short. It's always a question of what type of user experience you are asked to achieve. Take the usual example of an ordering system: after an order is placed, the user can see it in the UI almost immediately, with a status like InProcess. Most likely you won't let the user change the order in any way, which means you are not going to show any user controls like a Cancel button (of course this is just an example). Hence, you can use this approach here.
If 2 users can modify the same physical collection, you have no choice here: you are working with shared data and there needs to be some kind of synchronization. For instance, if you are using sagas, there can be a couple of them, a collection-reordering saga and a deletion saga, and they can cooperate. If the deletion process starts first, the collection aggregate is marked as 'deletion in progress', and when the reordering saga starts right after, it will attempt to begin the reordering process; but since the deletion saga is in process, it should wait for the DeletedEvent and continue afterwards. The same applies if the reordering operation starts first: the deletion saga should wait until some event arrives and continue after that.
Update
OK, so we have agreed not to touch the file system itself, but rather the aggregate which represents the picture collection. The most important concurrency issues can then be solved with an optimistic concurrency approach: in the data storage, a unique constraint based on aggregate id and aggregate version is usually used.
Here is the typical sequence of steps a command handler follows:
1. Validate the command on its own merits.
2. Load the aggregate.
3. Validate the command against the current state of the aggregate.
4. Create a new event and apply it to the aggregate in memory.
5. Attempt to persist the aggregate. If there's a concurrency conflict during this step, either give up or retry from step 2.
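A hedged sketch of those steps in C#; every name here (ReorderPictures, PictureCollection, ConcurrencyException, the _repository field) is hypothetical, standing in for whatever framework you use:

// Inside a command handler class that holds a hypothetical _repository.
public void Handle(ReorderPictures command)
{
    const int MaxRetries = 3;
    for (int attempt = 0; attempt < MaxRetries; attempt++)
    {
        // Step 1: validate the command on its own merits.
        if (command.NewOrder == null)
            throw new ArgumentException("No order given.");

        // Step 2: load the aggregate; its Version comes from the store.
        PictureCollection aggregate = _repository.Load(command.CollectionId);
        int loadedVersion = aggregate.Version;

        // Step 3: validate the command against the current state.
        if (aggregate.IsDeleting)
            throw new InvalidOperationException("Deletion in progress.");

        // Step 4: create the event and apply it to the aggregate in memory.
        aggregate.Apply(new PicturesReordered(command.CollectionId, command.NewOrder));

        // Step 5: persist; the store enforces a unique (AggregateId, Version) constraint.
        try
        {
            _repository.Save(aggregate, expectedVersion: loadedVersion);
            return; // success
        }
        catch (ConcurrencyException)
        {
            // Someone else saved first; loop back to step 2 and reload.
        }
    }
    throw new InvalidOperationException("Could not reorder after retries.");
}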
Here is the link which helped me a lot some time ago: http://www.cqrs.nu/
I would like to have an optimized version of my WinForms C# application for slower connections. For this reason I want to introduce a timestamp column into all tables (that change), load most things the first time they are needed, and then just read the updates/inserts/deletes that other people using the application may have made.
To give this question an example, I've added a timestamp column to a table called Konsultanci. Considering that this table might be large, I would like to load it once and then check for updates/inserts. To simply load it all, I do this:
private void KonsultantsListFill(ObjectListView listView)
{
using (var context = new EntityBazaCRM(Settings.sqlDataConnectionDetailsCRM)) {
ObjectSet<Konsultanci> listaKonsultantow = context.Konsultancis;
GlobalnaListaKonsultantow = listaKonsultantow.ToList(); // assign to global variable to be used all around the WinForms code.
}
}
How would I go about checking whether anything in the table has changed? Also, how do I handle updates in WinForms C#? Should I be checking for changes on each tab-page select, on opening new GUIs, on saving and loading of clients, consultants and so on? Should I be refreshing all tables all the time (like firing a background thread on every single action the user performs), or should the check only be executed prior to an eventual need for the data?
What I'm looking for here is:
General advice on how to approach the timestamp problem and refresh data without having to load everything multiple times (slow-connection issues)
A code example with Entity Framework that considers the timestamp column, eventually code to be used prior to executing something that requires the data
Timestamps are not well suited to help you detect when your cache needs to be updated. First off, they are not datetimes (read here) so they don't give you any clue as to when a record was updated. Timestamps are geared more towards assisting in optimistic locking and concurrency control, not cache management. When trying to update your cache you need a mechanism like a LastModified datetime field on your tables (make sure it's indexed!) and then a mechanism to periodically check for rows that have been modified since the last time you checked.
Regarding keeping your data fresh, you could run a separate query (possibly on another thread) that finds all records with the LastModified > than the last time you checked and then "upsert" (update or insert) them into your cache context. Another mechanism with Entity Framework is to use the Context.Refresh() method.
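A hedged sketch of that periodic check, reusing the context from the question; the LastModified column and the Id property are assumptions you would need to add to the model:

private DateTime _lastChecked = DateTime.MinValue;

private void RefreshKonsultanci()
{
    using (var context = new EntityBazaCRM(Settings.sqlDataConnectionDetailsCRM))
    {
        DateTime since = _lastChecked;
        List<Konsultanci> changed = context.Konsultancis
            .Where(k => k.LastModified > since) // assumed indexed datetime column
            .ToList();
        _lastChecked = DateTime.Now; // safer: take the MAX(LastModified) of the rows read

        foreach (Konsultanci k in changed)
        {
            // "Upsert" into the global list: replace existing entries, add new ones.
            int i = GlobalnaListaKonsultantow.FindIndex(x => x.Id == k.Id);
            if (i >= 0) GlobalnaListaKonsultantow[i] = k;
            else GlobalnaListaKonsultantow.Add(k);
        }
    }
}

Run this from a timer, or just before opening a screen that needs the data, rather than on every user action.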
When a user visits an .aspx page, I need to start some background calculations in a new thread. The results of the calculations need to be stored in the user's Session, so that on a callback, the results can be retrieved. Additionally, on the callback, I need to be able to see what the status of the background calculation is. (E.g. I need to check if the calculation is finished and completed successfully, or if it is still running) How can I accomplish this?
Questions
How would I check on the status of the thread? Multiple users could have background calculations running at the same time, so I'm unsure how to know which thread belongs to which user (though in my scenario, the only thread that matters is the thread originally started by user A, and user A does a callback to retrieve/check the status of that thread).
Am I correct in my assumption that passing the user's HttpSessionState "Session" variable to the new thread will work as I expect (i.e. I can then add stuff to their Session later)?
Thanks. Also I have to say, I might be confused about something but it seems like the SO login system is different now, so I don't have access to my old account.
Edit
I'm now thinking about using the approach described in this article which basically uses a class and a Singleton to manage a list of threads. Instead of storing my data in the database (and incurring the performance penalty associated with retrieving the data, as well as the extra table, maintenance, etc in the database), I'll probably store the data in my class as well.
Edit 2
The approach mentioned in my first edit worked well. Additionally, I used timers to ensure that the threads and their associated data were both cleaned up when the corresponding cleanup methods were called. The objects containing my data and the threads were stored in the singleton class. For some applications it might be appropriate to use the database for storage, but it seemed like overkill for mine, since my data is tied to a specific instance of a page and is useless outside of that page context.
I would not expect session state to keep working in this scenario; the worker may have no idea who the user is, and even if it does (or, more likely, you capture this data into the worker), it has no mechanism for storing anything (updating the session is a step towards the end of the request pipeline; but if you aren't in the pipeline...?).
I suspect you might need to store this data separately, keyed on some unique property of the user (their id or CN), or invent a GUID otherwise. On a single machine it may suffice to store this in a synchronized dictionary (or similar), but on a web farm/cluster you may need to push the data down a layer to your database or state server, and fetch it manually.
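A minimal sketch of the single-machine version; the names (JobRegistry, JobInfo) are made up for illustration, and the GUID is what you would keep in Session or send to the client for the callback:

using System;
using System.Collections.Concurrent;
using System.Threading;

public class JobInfo
{
    public volatile string Status = "Running";
    public object Result;
}

public static class JobRegistry
{
    // One registry per app domain; entries are keyed by a GUID handed to the page.
    private static readonly ConcurrentDictionary<Guid, JobInfo> Jobs =
        new ConcurrentDictionary<Guid, JobInfo>();

    public static Guid Start(Func<object> work)
    {
        Guid id = Guid.NewGuid();
        JobInfo info = new JobInfo();
        Jobs[id] = info;
        ThreadPool.QueueUserWorkItem(delegate
        {
            try { info.Result = work(); info.Status = "Done"; }
            catch (Exception ex) { info.Status = "Failed: " + ex.Message; }
        });
        return id; // store this id in Session (or a hidden field) for the callback
    }

    public static JobInfo Get(Guid id)
    {
        JobInfo info;
        return Jobs.TryGetValue(id, out info) ? info : null;
    }
}

On the callback, read the GUID back from Session, call JobRegistry.Get, and inspect Status/Result; remember to evict finished entries (e.g. with a timer) so the dictionary doesn't grow forever.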