I have an application with one DB which is used by many users. Whenever one user makes changes, we save the changes to the database.
Now, I need to notify other logged-in users about this change. How can this be done?
I'm thinking that when the application successfully saves or updates the data in the database, it could send a notification to the connected clients with the new or updated record.
I'm using C# and SQL Server database.
Your immediate options are push-based notifications with something like a message bus, or polling loops over known record IDs.
Message buses operate on a publish-subscribe model, which works well for Windows applications. Have a look at MassTransit or MSMQ as a starting point; there are plenty of options out there. They can be combined with web apps using something like SignalR, which manages the client-side transport for you (WebSockets where available, falling back to long polling).
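The publish-subscribe model those buses implement can be sketched minimally (Python here for brevity; the class and method names are illustrative, not any particular bus's API):

```python
from collections import defaultdict

class Bus:
    """Minimal in-process publish-subscribe bus (illustrative only)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Every subscriber to the topic is notified of the one event.
        for handler in self.subscribers[topic]:
            handler(message)

bus = Bus()
received = []
# Two logged-in clients subscribe to record changes.
bus.subscribe("record-changed", lambda m: received.append(("client A", m)))
bus.subscribe("record-changed", lambda m: received.append(("client B", m)))
# The saving application publishes once; both subscribers are notified.
bus.publish("record-changed", {"id": 42, "op": "UPDATE"})
print(received)
```

A real bus adds the important parts this sketch omits: delivery across processes and machines, durability, and retry.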
Polling-based options typically run on a timer and do quick timestamp or version-number checks against the database, reloading a record when a difference is found.
Push-based options are more efficient, but they only notify of changes made between participating applications (client to client): they don't detect changes made by applications that don't publish, nor changes made directly to the database.
Polling-based options cover all changes, but they are generally slower and require a schema with version information to work effectively.
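The polling approach above can be sketched like this (Python, with an in-memory dict standing in for the database; the table shape and record names are made up for illustration):

```python
# Fake "database" table: record id -> (version, payload). In a real app this
# would be a query like: SELECT id, row_version FROM orders
db = {1: (1, "order A"), 2: (1, "order B")}

def poll_for_changes(known_versions):
    """Compare cached version numbers against the store; reload changed rows."""
    changed = {}
    for record_id, (version, payload) in db.items():
        if known_versions.get(record_id) != version:
            changed[record_id] = payload          # reload the full record
            known_versions[record_id] = version   # remember what we saw
    return changed

cache = {}
print(poll_for_changes(cache))   # first pass: everything is "new"
db[2] = (2, "order B (edited)")  # another user saves a change
print(poll_for_changes(cache))   # second pass: only record 2 is reported
```

In the real application this function would run on a timer, and the version column (or a last-modified timestamp) is what makes the check cheap.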
I am new to .NET and am seeking help with Windows Service update notifications.
I have a use case somewhat similar to "https://stackoverflow.com/questions/41232170/c-sharp-show-notification-prompting-a-update".
The application is developed in C#/.NET and is deployed and running as a Windows Service.
However, the application is not installed with an MSI installer; I use a batch script that configures the Windows Service.
Now, I want to show the user notifications about updates to the Windows Service when the system restarts.
I came across suggestions to use WCF or the Task Scheduler, but I'm not sure which would be the better solution.
Please advise.
OK, there are (or were, because Microsoft disabled the first one I'm going to explain) two ways to notify your user about updates from a service.
First, the bad, ugly (and non-working in recent versions) way: interactive services.
You can configure a service as interactive: if you add the SERVICE_INTERACTIVE_PROCESS flag, the service will be able to create a GUI attached to Display_0. This presents a ton of problems (trying to show a GUI when there's no user session; if two sessions are open, only the first one will show the GUI; and so on), but it's a cheap, dirty way to show data to the user. Avoid it.
Second, the right way: a standalone GUI program.
In this case you create a secondary program that shows the data to the user. You can start it with the user session, or let the user decide whether to receive info by opening the application manually. This program needs to receive the updates from the service in some way; which way is best is up to you, but I would use UDP. That way your service doesn't need to care whether any GUI app is connected: you broadcast a UDP message and everyone listening receives it. You don't need to maintain a server that handles connections, you don't need storage to hold the event data, and it supports any number of instances of the GUI (if more than one user has a session open on the machine, all of them will be notified).
But as I said, that would be my preference; you can make it as fancy as you want. You can use pipes, write the event to a file and use a FileSystemWatcher to be notified when it changes, or even host an ASP.NET web app that implements a SignalR hub and build your GUI in HTML. It's up to you to decide which mechanism is best for your scenario.
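The UDP idea can be sketched like this (Python for brevity; in the answer's scenario the service would broadcast on the LAN, but this sketch uses the loopback address so it is self-contained, and the port number is arbitrary):

```python
import json
import socket

PORT = 50555  # arbitrary port, chosen for illustration

# GUI side: bind a UDP socket and wait for update messages.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", PORT))
listener.settimeout(5)  # don't block forever if nothing arrives

# Service side: fire-and-forget datagram. The service never needs to know
# whether zero, one, or many GUI instances are listening.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# For a real LAN broadcast you would enable SO_BROADCAST and send to the
# broadcast address instead of loopback:
#   sender.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
#   sender.sendto(payload, ("255.255.255.255", PORT))
payload = json.dumps({"event": "service-updated", "version": "1.2.3"}).encode()
sender.sendto(payload, ("127.0.0.1", PORT))

data, _addr = listener.recvfrom(4096)
message = json.loads(data)
print(message["event"])  # the GUI reacts to the notification here
```

The trade-off the answer describes is visible here: delivery is unreliable and unacknowledged, which is exactly why the service stays so simple.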
I'm currently working on a requirement to "replace the previously developed polling mechanism for database change notifications".
Let me elaborate a little:
We have an Oracle database with triggers on a table so we are notified of any changes. Using them, we fetch the changed data and convert it into XML/JSON, which becomes the request body of a Web API POST operation against another database.
The new requirement is to skip the polling mechanism and come up with something like "rather than us calling the database for notifications, it calls us every time it gets updated".
I did a little googling, and the commonly suggested approach is:
Database Change Notification. Here I need to grant permissions in Oracle and then create a .NET application where I can write a callback function for further processing. Up to this point I'm good, but my questions are:
Does the .NET application that communicates with the database have to be a web application that is always online? Can I create a console application to get notified, and if so, how will the database contact my application when a change occurs? What exactly happens internally when the database notifies my application of a change?
We are developing a desktop application in C# WinForms with a single backend database, PostgreSQL. The application will be used by multiple users on a local LAN or over a WAN, where my database server will be placed.
The users will perform updates, inserts, and deletes on the database tables. What I need is that when any user performs one of these three operations, other users who are logged in (or who log in some time later) automatically get to see, in the WinForms UI, the activity performed by that user.
Please help me with how I should proceed.
Regards
Vineet More
What you are looking for is a message queue service (MQS).
Workflow would go like this:
A custom service will use PostgreSQL's LISTEN to "collect" notifications from PostgreSQL and feed them as messages into the MQS.
Inside PostgreSQL you can use NOTIFY to notify the custom service, for example from within a trigger.
The application reads messages from the MQS and presents them to the users in the UI.
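The workflow above can be sketched end-to-end (Python, with an in-process `queue.Queue` standing in for the real message queue service and a plain function standing in for the LISTEN callback; the channel name and payload shape are made up for illustration):

```python
import json
import queue

mqs = queue.Queue()  # stands in for a real message queue service

def on_pg_notify(channel, payload):
    """What the custom service does when LISTEN delivers a notification
    (raised by NOTIFY inside a trigger): feed it into the MQS."""
    mqs.put(json.dumps({"channel": channel, "payload": payload}))

# The trigger fires, e.g.:
#   PERFORM pg_notify('table_changes', '{"op": "UPDATE", "id": 42}');
on_pg_notify("table_changes", {"op": "UPDATE", "id": 42})

# Application side: drain the queue and present the activity to the user.
while not mqs.empty():
    msg = json.loads(mqs.get())
    print(f"User sees: {msg['payload']['op']} on row {msg['payload']['id']}")
```

In the real setup the MQS is what decouples the pieces: the service keeps a single LISTEN connection to PostgreSQL, while any number of WinForms clients consume from the queue over the LAN/WAN.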
Regards
H
I need some approach suggestions from the community for a requirement I have. Below are the details.
Users on the client side need to retrieve data from my source database (say, a SQL Server database on my production server). The users access the data via an intermediary service layer (a WCF REST service). On another server (the Info Server) I have a SQL database (Info DB) which holds all the queries that can be requested. Since in some cases the data is huge, I give the user the option to schedule the data retrieval and look at the data later. The schedule information per user is also stored in the Info DB. I also allow the user to retrieve data in real time if he wants.
In both cases I want to query data from the source (production DB), store it in a file format (maybe CSV or Excel), and then send the data over to the client when the user wants it.
Since the queries are stored in the Info DB, I let the admin define a scheduled run time for every query. This enables the admin to push long-running queries to night time, when calls to the server are low. If the user demands a query in real time, I allow that.
As a solution architecture I have thought of this:
I will have a WCF REST service installed on the Info Server. This service acts as the calling point for users. When a user runs a query in real time, the service gets the results, saves them to a file, and transfers it over. If the user schedules the query, the service adds an entry for the user/query in the Info DB.
I will have a Windows Service on the Info Server. It periodically checks the Info DB for scheduled query entries, and when a query falls within its scheduled time it runs the query, saves the data to a file location, and adds the file path to the schedule entry. This lets me track which schedules are finished and where the data is available (file path).
Now here are my issues with this:
My data can be huge. Will a WCF REST service be good enough to transfer large files over the wire? Can I transfer files over the wire, or do I need to transfer the data as JSON? What is the best approach?
If I use a Windows Service, is this a good approach, or is there a better alternative? The reason I am asking is that, as I understand it, the Windows Service will have to run always, because I need to figure out which entries are scheduled. This means that at specific intervals the Windows Service would check the Info DB and see whether a schedule entry should be run or not. In the ideal scenario the Windows Service runs throughout the day, checking the database periodically without much action, because preferably all schedules would be at night.
I have used an intermediary service approach because if I need to move to the cloud tomorrow, I can easily move this solution. Am I right in my assumption?
If I move to the cloud tomorrow, would I be able to encrypt the data transfer (maybe data encryption or file encryption)? I have no idea about data encryption/decryption.
I need your suggestion(s) on this.
My data can be huge. Will a WCF REST service be good enough to transfer large files over the wire? Can I transfer files over the wire, or do I need to transfer the data as JSON? What is the best approach?
When you say huge, how huge? Are we talking gigabytes, megabytes, or kilobytes? I regularly serve 100 MB REST responses (you will probably have to tweak a few settings, such as increasing maxReceivedMessageSize, but that should be enough to get you going). I would use a streaming API though, especially if you are talking several megs of content.
If I use a Windows Service, is this a good approach, or is there a better alternative? The reason I am asking is that, as I understand it, the Windows Service will have to run always, because I need to figure out which entries are scheduled. This means that at specific intervals the Windows Service would check the Info DB and see whether a schedule entry should be run or not. In the ideal scenario the Windows Service runs throughout the day, checking the database periodically without much action, because preferably all schedules would be at night.
Beware of writing your own scheduler. You might be better off dropping jobs onto a queue for processing and firing up workers at the appropriate time. That way you can invoke the worker directly for your real-time call, and you can run jobs whenever the database is idle rather than on a fixed schedule. It's tricky "knowing" when a service will be idle, especially in a world of round-the-clock users.
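The queue-plus-workers idea can be sketched like this (Python, using a heap as a minimal due-time queue; all names are illustrative, and note how the real-time path simply calls the same worker directly):

```python
import heapq

pending = []  # min-heap of (due_time, query_id)

def schedule(query_id, due_time):
    """Enqueue a job for later instead of hand-rolling a scheduler loop."""
    heapq.heappush(pending, (due_time, query_id))

def run_query(query_id):
    """The worker: run the query and export results to a file.
    For a real-time request, call this directly."""
    return f"results-of-{query_id}.csv"  # stand-in for query + file export

def run_due_jobs(now):
    """Pop and run everything whose due time has passed."""
    finished = []
    while pending and pending[0][0] <= now:
        _due, query_id = heapq.heappop(pending)
        finished.append(run_query(query_id))
    return finished

schedule("nightly-report", due_time=100)
schedule("audit-extract", due_time=200)
print(run_due_jobs(now=150))   # only the first job is due
print(run_query("ad-hoc"))     # real-time path reuses the same worker
```

In practice an off-the-shelf queue (MSMQ, a cloud queue, etc.) replaces the heap, which also removes the "service must poll the Info DB all day" concern from the question.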
I have used an intermediary service approach because if I need to move to the cloud tomorrow, I can easily move this solution. Am I right in my assumption?
Yes, wrapping the endpoint in a REST service (WCF) will make moving to the cloud much easier.
If I move to the cloud tomorrow, would I be able to encrypt the data transfer (maybe data encryption or file encryption)? I have no idea about data encryption/decryption.
HTTPS is your friend here. Don't invent your own encryption or use a proprietary scheme; HTTPS is old, straightforward, and good.
I will try to explain exactly what I want to achieve first.
Imagine two users are using a Windows Forms application. When User A opens a particular form, a lock is applied to the data record underlying the form so that only that user can make changes at that time.
User B has a list of all records (in a grid) which, among others, contains a reference to the record already opened by User A. What we want is that when User A opens the record, User B's list is updated to show a lock icon next to the row, indicating the record is in use.
This is a trivial example of what we do with messaging, but you get the idea: User A does something, and User B needs to know about it.
I have implemented a system using Jabber-net for C# and an OpenFire Jabber server. Basically, when a message is to be sent, a new row is inserted into a messages table in the database. The messages table is watched by a service client using the SqlDependency object, so that when a new message is ready, the service builds the relevant message and sends it to the desired client via Jabber and the OpenFire server.
This works OK; however, OpenFire's out-of-the-box functionality is built for instant messaging, which obviously isn't what I'm trying to achieve. The problem I have is that if a user is logged in to two application contexts (i.e. Test and Live), OpenFire does not know which one to send a message to, because my use of the JID structure user@server/resource takes no notice of the resource.
Basically the way I'm currently using OpenFire and Jabber-net isn't quite right.
Is there a pattern I can use for achieving what I want, i.e. sending a message telling a client to do something, while being able to specify which client the message goes to? XMPP seemed like the answer because I can construct my own message types to be parsed.
My application is a Windows Forms, .NET 3.5 C# application.
I'd just add some more data to the message indicating which Application Context is affected, and have the clients decide whether they need to handle the message or not.
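That filtering can be sketched like this (Python; the field names are made up for illustration — the point is only that the message body carries the Application Context and each client compares it with its own):

```python
import json

MY_CONTEXT = "Live"  # this client's Application Context ("Test" or "Live")

def handle_message(raw):
    """Decide whether this client should act on an incoming message body."""
    msg = json.loads(raw)
    if msg.get("context") != MY_CONTEXT:
        return None  # meant for another context; ignore it
    return f"lock record {msg['record_id']}"  # e.g. show the lock icon

print(handle_message('{"context": "Live", "record_id": 7, "action": "lock"}'))
print(handle_message('{"context": "Test", "record_id": 7, "action": "lock"}'))
```

This keeps the server (OpenFire) completely out of the routing decision: every client for the user receives every message, and the cheap client-side check discards the ones that don't apply.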