I have a SQL Server database with information about files; specifically, custom properties: a category and a description for each file.
The Windows Forms application is for the user, but I will also make a Windows Service that will track any changes to the files. If a change happens (renamed, moved, deleted), the service has to update that same database accordingly, and I think it should do it right away, without any delay.
Now, this is going to be my first time writing a Windows Service, plus the first time I will have to handle concurrency (theoretically I know about threads and so on).
So:
First of all, is it OK if one process is updating a database that another process may be using at the same time? Do you need to handle that situation in the first place? (Probably; e.g. in our daily "user lives" we can't modify a file while it's being used by another process.)
Is it a good idea for these two to share one data source?
If it is, then how do I handle the concurrency? I can use WCF for the messages between the two, but does the solution then have something to do with WCF? Because I'm going to be using that for the first time as well :D.
Any help is appreciated. Thanks in advance for your time!
Since MS SQL is transactional, this is no big deal. You just have to watch out for data which might be read and then updated by one process; there it can be necessary to use a TransactionScope (that's a .NET class ;)).
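For illustration, here's a minimal TransactionScope sketch; the FileProperties table and its columns are my own assumptions, not something from the question:

```csharp
using System;
using System.Data.SqlClient;
using System.Transactions;

class FilePropertyUpdater
{
    // Hypothetical FileProperties table with Path, Category, Description columns.
    public static void MoveFile(string connectionString, string oldPath, string newPath)
    {
        using (var scope = new TransactionScope())
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open(); // enlists in the ambient transaction automatically

            using (var update = new SqlCommand(
                "UPDATE FileProperties SET Path = @new WHERE Path = @old", connection))
            {
                update.Parameters.AddWithValue("@new", newPath);
                update.Parameters.AddWithValue("@old", oldPath);

                if (update.ExecuteNonQuery() == 0)
                    throw new InvalidOperationException(
                        "Another process changed or removed the row first.");
            }

            scope.Complete(); // without this, the transaction rolls back on Dispose
        }
    }
}
```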
From a software architecture point of view, you should consider using a three-tier, not a two-tier, application:
Two Tier:
Essentially your system, with the persistence layer (DB) communicating with the clients directly.
Three Tier:
Persistence layer <--> logic layer (e.g. a WCF service handling the app logic) <--> clients (the service and the Forms app, triggering app logic and showing results)
When it comes to concurrency, it's going to be really straightforward. The MSSQL database engine handles just about all of it (e.g. locking and sharing). Further, if you leverage the SqlCommandBuilder to build your statements, the statements will automatically use optimistic concurrency.
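As a rough sketch of that (again, the table name is an assumption), letting SqlCommandBuilder generate the statements might look like this:

```csharp
using System.Data;
using System.Data.SqlClient;

class OptimisticUpdateExample
{
    public static void RenameCategory(string connectionString)
    {
        // Hypothetical FileProperties table. The UPDATE that SqlCommandBuilder
        // generates compares the originally-read values in its WHERE clause,
        // so a row changed by another process in the meantime won't match.
        var adapter = new SqlDataAdapter(
            "SELECT Id, Path, Category, Description FROM FileProperties",
            connectionString);
        var builder = new SqlCommandBuilder(adapter)
        {
            ConflictOption = ConflictOption.CompareAllSearchableValues // the default
        };

        var table = new DataTable();
        adapter.Fill(table);

        table.Rows[0]["Category"] = "Invoices";
        adapter.Update(table); // throws DBConcurrencyException on a conflict
    }
}
```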
As for the Windows service and how it gets notified, use a FileSystemWatcher; it's going to be more efficient, and you won't be publishing some service port on the local box.
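A bare-bones sketch of the watcher wiring; the path and the database helpers are placeholders, and in a real Windows Service this setup would live in OnStart rather than Main:

```csharp
using System;
using System.IO;

class FileChangeWatcher
{
    // Hypothetical handlers; in the real service these would run the
    // corresponding UPDATE/DELETE statements against the database.
    static void OnRenamed(object s, RenamedEventArgs e)
    {
        Console.WriteLine("UPDATE path: " + e.OldFullPath + " -> " + e.FullPath);
    }

    static void OnDeleted(object s, FileSystemEventArgs e)
    {
        Console.WriteLine("DELETE entry for: " + e.FullPath);
    }

    static void Main()
    {
        var watcher = new FileSystemWatcher(@"C:\WatchedFiles") // assumed path
        {
            IncludeSubdirectories = true,
            NotifyFilter = NotifyFilters.FileName | NotifyFilters.DirectoryName
        };
        watcher.Renamed += OnRenamed;
        watcher.Deleted += OnDeleted;
        watcher.EnableRaisingEvents = true; // start raising events

        Console.ReadLine(); // keep the process alive while watching
    }
}
```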
Related
I've created a BackgroundService in a WebAPI based on the code examples here: https://learn.microsoft.com/en-us/dotnet/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice . The article doesn't give any guidance for implementing this in a multi-server environment. My use-case involves a FileSystemWatcher monitoring a shared network folder for changes. It works great.
The issue is that there will be multiple instances of this, and I don't want all of the instances responding, just one. Is this feasible, and if so, what steps do I need to take? I've read about using queues, but I can't see how that would help. Also, Hangfire or similar is not an option. Do I need to re-examine my logic?
I can think of multiple ways to achieve this, with pros and cons.
Individual service
If you need only one instance of this, implement it as a standalone service and deploy it on one server only. True, you can't leverage background processes, but do you really need to?
Configuration
Have a config value indicating where to run the service; this could even be a comma-separated list of server names. It will require some deployment handling, though, to turn the config on for the server that should run the background service.
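For example, in an ASP.NET Core WebAPI this could be a check at registration time. The config key and FolderWatcherService are hypothetical names, the latter standing in for your existing BackgroundService:

```csharp
// Program.cs of the WebAPI (.NET 6+ minimal hosting assumed).
var builder = WebApplication.CreateBuilder(args);

// appsettings.json sketch: "Watcher": { "Servers": "SERVER01,SERVER02" }
var allowed = (builder.Configuration["Watcher:Servers"] ?? "")
    .Split(',', StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries);

// Register the watcher only on machines named in configuration.
if (allowed.Contains(Environment.MachineName, StringComparer.OrdinalIgnoreCase))
    builder.Services.AddHostedService<FolderWatcherService>();

var app = builder.Build();
app.Run();
```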
Persist value in db
If there is a single database somewhere, you can have the services coordinate through it. Have a table storing which server executes the background service; once the first one locks it, the others just sleep. Some keep-alive logic needs to be implemented as well.
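One hedged sketch of the "first one locks it" idea uses SQL Server's built-in sp_getapplock, so the keep-alive comes for free with the connection; the resource name is an assumption:

```csharp
using System.Data;
using System.Data.SqlClient;

static class LeaderLock
{
    // Returns true if this instance acquired the lock. With Session as the
    // owner, the lock lives as long as 'connection' stays open, which doubles
    // as the keep-alive: if the process dies, the connection closes and the
    // lock is released for another instance to grab.
    public static bool TryBecomeLeader(SqlConnection connection)
    {
        using (var cmd = new SqlCommand("sp_getapplock", connection))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Resource", "FolderWatcherLeader"); // assumed name
            cmd.Parameters.AddWithValue("@LockMode", "Exclusive");
            cmd.Parameters.AddWithValue("@LockOwner", "Session");
            cmd.Parameters.AddWithValue("@LockTimeout", 0); // don't wait, lose fast

            var result = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
            result.Direction = ParameterDirection.ReturnValue;

            cmd.ExecuteNonQuery();
            return (int)result.Value >= 0; // 0 or 1 = granted, negative = denied
        }
    }
}
```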
I would honestly go with solution one: individually scalable, deployable, and no workarounds needed.
A background service, if it's part of the application, is by design going to run on all instances. So you need to go with a microservice-style split:
One service uses the file watcher and puts a message on a queue.
Another service works off the queue messages (this one you can scale to multiple instances).
You can also add another service/microservice to keep an eye on the health of the file watcher and handle failover.
I am struggling with a C# website design concept.
Say I have a need for an application that increments an integer continuously all day (a simple representation of any continuous long-running process). I need to write a website that would allow me (and other users) to log on, view the current value, ideally watch it updating, possibly interact with it by, say, resetting it, and then log off, leaving the process running.
Can I write this as one website, or would I have to write a website to serve pages and a separate application to do the continuous work?
Personally, I would have the "work" be some kind of Windows Service that can be interacted with (through database state, or directly through some transport mechanism: WCF, a message queue, whatever). The website would then just talk to the existing service and do what it needs to do (get status, update, etc.).
You could have one web page, as there would be no need to serve multiple pages. The page could read the counter value from internal memory, a database, or a web service which is continuously updating (maybe add an Ajax UpdatePanel to show it ticking up). You could then code a function such as ResetCounter() which would connect to the database / web service and reset the count.
Is there a problem with storing the integer in an ACID-compliant database like SQL Server? Then you can interact with it from a web application you build, right? It seems the ideal way of handling a shared object like this integer value. ACID compliance means the integer will survive a hardware failure pretty well, you can log activity about who is tweaking the integer, etc. Writing your own service that keeps the value in shared memory probably doesn't offer a huge advantage compared to transacting with a database.
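As a rough sketch of that approach (the single-row Counter table is an assumption), the increment, read, and reset each become one atomic statement:

```csharp
using System.Data.SqlClient;

static class CounterStore
{
    // Hypothetical single-row table: CREATE TABLE Counter (Value int NOT NULL)
    public static void Increment(string connectionString)
    {
        Execute(connectionString, "UPDATE Counter SET Value = Value + 1");
    }

    public static void Reset(string connectionString)
    {
        Execute(connectionString, "UPDATE Counter SET Value = 0");
    }

    public static int Read(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Value FROM Counter", connection))
        {
            connection.Open();
            return (int)cmd.ExecuteScalar();
        }
    }

    static void Execute(string connectionString, string sql)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, connection))
        {
            connection.Open();
            cmd.ExecuteNonQuery(); // a single UPDATE statement is atomic
        }
    }
}
```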
There are a couple of routes you could take. I would separate this into 3 different roles:
State management: this layer simply stores the state of the counter or work. Determine what type of data store will be used (such as SQL Server).
Worker: this layer is the 'worker' role, responsible for incrementing the counter or doing whatever work needs to be done. This could be a Windows Service, as others have posted, but I would probably opt for Windows Workflow exposed as a WCF service. It would be much easier to manage the 'worker' this way, and it offers a more scalable solution (a rough contract sketch follows after this list).
UI: the next layer would be the actual website, such as an ASP.NET MVC application, which could subscribe to the service and make the various method calls.
See Workflow Services: http://msdn.microsoft.com/en-us/library/dd456797
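To make the layering concrete, here is a minimal plain-WCF contract sketch for the worker; the names are my own, and a Workflow Service would expose similar operations from the workflow definition:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface ICounterService
{
    // Read the current counter value for display in the UI.
    [OperationContract]
    int GetCurrentValue();

    // Reset the counter to zero without stopping the worker.
    [OperationContract]
    void Reset();
}
```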
I am requesting an implementation suggestion for:
OS: Windows Server 2008
Platform : ASP.NET, C#
DB: MS SQL 2005
Scenario is:
1. We have to implement a monitoring daemon which will monitor a set of DB tables in MS SQL at a frequent interval (say 10 seconds) and identify all critical entries.
2. After analysis, the entries will be associated with actions based on their criticality or category. So assume one of the critical entries is denoted as ACCT_USR_LIMIT_EXECEED : SomeName.
3. ACCT_USR_LIMIT_EXECEED : SomeName shall then be associated with an action, which could be an email dispatch, a DB table update query execution, a folder size measurement, deleting a folder, cleaning up some files on the local HDD, etc.
4. The number of critical entries to be analysed will be moderate as of now, but it has scope to increase too.
How do we approach this? The possibilities I see are:
Write one Windows Service for both the monitoring and the dispatching of actions, or
write two different services: one to monitor and push its analysis to an MSMQ, and another to read the queued entries and dispatch the actions.
Will having a single Windows Service help us, or what's the best approach for this?
Kindly suggest
I think it all depends on the expected volume of dispatches. If you're going to do 2000 dispatches every second, then I would think that a separation is a good idea, so one service won't impact the other, and you could possibly have a separate environment (server) for each of them. If the expected amount is something like 10 every minute, then I can't see why you should make it complicated; some threading and proper business layers will do just fine.
There is no straightforward answer to this, as it depends on a lot of parameters. I agree with what "Hyp" said.
As you said, some C# examples would help a lot. So, technically, if you want to achieve the Windows Service / MSMQ approach, you may have a look here:
http://stackoverflow.com/questions/1521841/receiving-msmq-messages-with-windows-service
http://stackoverflow.com/questions/3956467/how-to-create-a-c-sharp-listener-service-for-msmq-as-a-windows-service
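In the spirit of those links, here's a bare-bones sketch of a queue listener; the queue path and string message body are assumptions, and in a real Windows Service the BeginReceive call would live in OnStart:

```csharp
using System;
using System.Messaging;

class QueueListener
{
    static void Main()
    {
        // Assumed private queue; create it beforehand or via MessageQueue.Create.
        var queue = new MessageQueue(@".\Private$\MonitoringActions");
        queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

        queue.ReceiveCompleted += (s, e) =>
        {
            var message = queue.EndReceive(e.AsyncResult);
            Console.WriteLine("Dispatching action: " + message.Body);
            queue.BeginReceive(); // wait for the next message
        };

        queue.BeginReceive();
        Console.ReadLine(); // keep the listener alive
    }
}
```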
Hope this helps.
I have nearly completed a Quartz.NET-based Windows Service (using the ADO.NET job store, not RAM jobs). The service copies/moves files to various paths depending upon a schedule. I have some concerns, however. It is very important that this service has some sort of detection method/system that will detect when the program has failed for whatever reason, whether it's files failing to be copied or the whole scheduler crashing. Just wondering what you guys think is the best way to do this? I have a couple of vague ideas, but I'm looking to hear some more input.
Here are the methods that we use:
We monitor the Windows service itself using the IT monitoring system. We use one of those commercial products that monitor servers, services, databases, etc., but there are open source projects that can do this for you if you don't already have one in place.
We log fatal exceptions to a database table and have a separate service monitoring that table for exceptions (see the sketch just after this list).
We also use an ADO.NET store, so we also monitor the Quartz.NET tables for things like stuck triggers.
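For the exception-logging point, here is a hedged sketch using a Quartz.NET 2.x-style synchronous IJob; the JobErrors table and connection string are assumptions:

```csharp
using System;
using System.Data.SqlClient;
using Quartz;

public class FileCopyJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        try
        {
            // ... copy/move the files according to the schedule ...
        }
        catch (Exception ex)
        {
            LogFatal(context.JobDetail.Key.ToString(), ex);
            throw new JobExecutionException(ex); // let Quartz record the failure too
        }
    }

    static void LogFatal(string jobKey, Exception ex)
    {
        using (var connection = new SqlConnection("...")) // connection string assumed
        using (var cmd = new SqlCommand(
            "INSERT INTO JobErrors (JobKey, Error, LoggedAt) VALUES (@j, @e, GETUTCDATE())",
            connection))
        {
            cmd.Parameters.AddWithValue("@j", jobKey);
            cmd.Parameters.AddWithValue("@e", ex.ToString());
            connection.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```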
With things like this you can definitely go down the over-engineering path. Just keep in mind the cost/benefit of adding each of these options, and then decide how much work you want to put into monitoring vs. the cost of an outage.
Following on from this question...
What to do when you’ve really screwed up the design of a distributed system?
... the client has reluctantly asked me to quote for option 3 (the expensive one), so they can compare prices with a company in India.
So, they want me to quote (hmm). In order for me to get this as accurate as possible, I need to decide how I'm actually going to do it. Here are 3 scenarios...
Scenarios
Split the database
My original idea (perhaps the trickiest) will yield the best speed on both the website and the desktop application. However, it may require some synchronising between the two databases, as the two "systems" are so heavily connected. If not done properly and not tested thoroughly, I've learnt that synchronisation can be hell on earth.
Implement caching on the smallest system
To side-step the sync option (which I'm not fond of), I figured it may be more productive (and cheaper) to move the entire central database and web service to their office (i.e. in-house), and have the website (still on the hosted server) download data from the central office and store it in a small database (acting as a cache)...
Set up a new server in the customer's office (in-house).
Move the central database and web service to the new in-house server.
Keep the web site on the hosted server, but alter the web service URL so that it points to the office server.
Implement a simple cache system for images and the most frequently accessed data, such as product information (a rough sketch follows at the end of this scenario).
... the downside is that when the end user in the office updates something, their customers will effectively be downloading the data over a 60 KB/s upload connection (albeit only once, as it will then be cached).
Also, not all data can be cached; for example, a customer updating their order. Connection redundancy also becomes a huge factor here: what if the office connection is offline? Nothing to do but show an error message to the customers, which is nasty, but a necessary evil.
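For step 4 in the list above, here's a minimal sketch of the website-side cache using System.Runtime.Caching (.NET 4); the ProductInfo type, the key scheme, and the 30-minute expiry are all assumptions:

```csharp
using System;
using System.Runtime.Caching;

// Hypothetical DTO for the product data pulled from the office web service.
public class ProductInfo
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ProductCache
{
    static readonly MemoryCache Cache = MemoryCache.Default;

    // Serve from cache when possible; otherwise pay the slow office hop once.
    public static ProductInfo Get(int productId, Func<int, ProductInfo> fetchFromOffice)
    {
        string key = "product:" + productId;
        var cached = Cache.Get(key) as ProductInfo;
        if (cached != null)
            return cached;

        var fresh = fetchFromOffice(productId);
        Cache.Set(key, fresh, DateTimeOffset.Now.AddMinutes(30)); // assumed TTL
        return fresh;
    }
}
```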
Mystery option number 3
Suggestions welcome!
SQL replication
I had considered MSSQL replication, but I have no experience with it, so I'm worried about how conflicts are handled, etc. Is this an option, considering there are physical files involved, and so on? Also, I believe we'd need to upgrade from SQL Express to a non-free SQL Server edition, and buy two licences.
Technical
Components
ASP.NET website
ASP.NET web service
.NET desktop application
MSSQL 2008 Express database
Connections
Office connection: 8 Mbit down and 1 Mbit up, contended line (50:1)
Hosted virtual server: Windows 2008 with a 10 Mbit line
Having just now read your original question related to this, I'd say that you may have already laid the foundation for resolving the problem, simply because you are communicating with the database via a web service.
This web service may well be the saving grace as it allows you to split the communications without affecting the client.
A good while back I was involved in designing just such a system.
The first thing we identified was the data which rarely changes, and we immediately locked all of it out of consideration for distribution. A manual process, administered via the web server, was the only way to change this data.
The second thing we identified was the data that should be owned locally. By this I mean data that only one person or location at a time would need to update, but that may need to be viewed at other locations. We fixed all of the keys on the related tables to ensure that duplication could never occur, and no auto-incrementing fields were used.
The third item was the tables that were truly shared, and although we worried a lot about these during stages 1 & 2, in our case this part turned out to be straightforward.
When I'm talking about a server here I mean a DB Server with a set of web services that communicate between themselves.
As designed, our architecture had one designated 'master' server. This was the definitive source for resolving conflicts.
The rest of the servers were, in the first instance, a large cache of anything covered by item 1. In fact it wasn't so much a cache as a database duplicate, but you get the idea.
The second function of each non-master server was to coordinate changes with the master. This involved a very simplistic process of transparently passing most of the work through to the master server.
We spent a lot of time designing and optimising all of the above, only to discover that the single best performance improvement came from simply compressing the web service requests to reduce bandwidth (though it was over a single-channel ISDN line, which probably made the most difference).
The fact is that if you do have a web service, it will give you greater flexibility in how you implement this.
I'd probably start by investigating the feasibility of implementing one of the SQL Server replication methods.
Usual disclaimers apply:
Splitting the database will not help a lot, but it'll add a lot of headaches. IMO, you should first try to optimize the database: update some indexes or maybe add several more, optimize some queries, and so on. For database performance tuning I recommend reading some articles on simple-talk.com.
Also, in order to save bandwidth, you can add bulk processing to your Windows client and add zipping (compression) to your web service.
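For the zipping part, a minimal sketch with GZipStream; how the bytes travel is up to your service contract:

```csharp
using System.IO;
using System.IO.Compression;
using System.Text;

static class PayloadCompressor
{
    // Compresses a response body before it crosses the slow link.
    public static byte[] Compress(string payload)
    {
        var raw = Encoding.UTF8.GetBytes(payload);
        using (var output = new MemoryStream())
        {
            // Closing the GZipStream flushes the compressed data to 'output'.
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
                gzip.Write(raw, 0, raw.Length);
            return output.ToArray(); // send this instead of the raw XML
        }
    }
}
```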
And you should probably upgrade to MS SQL 2008 Express; it's also free.
It's hard to recommend a good solution for your problem with the information I have; it's not clear where the bottleneck is. I strongly recommend you profile your application to find the exact location of the bottleneck (e.g. is it in the database, or in a fully saturated channel, and so on) and add a description of it to the question.
EDIT 01/03:
When the bottleneck is the upload connection, you can only do the following:
1. Add compression (archiving) of messages on both the service and the client.
2. Implement bulk operations and use them.
3. Try to reduce the operation count per use case for the most frequent cases.
4. Add a local database to the Windows clients, perform all operations against it, and synchronize the local DB with the main one on a timer.
SQL replication will not help you much in this case. The fastest and cheapest solution is to upgrade the upload connection, because all the other options (except the first one) will take a lot of time.
If you choose to rewrite the service to support bulking, I recommend having a look at the Agatha project.
Actually, hearing how many users they have on that one connection, it may be time to up the bandwidth at the office (not at all my normal response). If you factor out the CRM system, what else is a top user of the bandwidth? It may be that they have simply reached the point of needing more bandwidth, period.
But I am still curious to see how much of the information you are passing is actually getting used. Make sure you are transferring efficiently, and if there's any chance, add some easy, quick measures to see how much data people actually consume when looking at it.