Current situation:
Multiple .NET Framework solutions (implementations) that each have:
Core logic library project
Windows Service project (builds as exe)
WPF UI (builds as exe)
Core logic is run on a timer that is kept alive by the Windows Service
Windows Service communicates over WCF to a WPF UI
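For context, the current pattern is roughly the following. This is only a minimal sketch; names such as CoreLogic and StatusService are hypothetical, since the actual code isn't shown here.

    using System;
    using System.ServiceModel;
    using System.ServiceProcess;
    using System.Timers;

    [ServiceContract]
    public interface IStatusService
    {
        [OperationContract]
        string GetLatestStatus();                        // polled by the WPF UI over WCF
    }

    public class StatusService : IStatusService
    {
        public string GetLatestStatus() { return CoreLogic.LatestStatus; }
    }

    // Stand-in for the core logic class library.
    public static class CoreLogic
    {
        public static string LatestStatus = "idle";
        public static void Run() { LatestStatus = "last ran at " + DateTime.Now; }
    }

    public class CoreLogicService : ServiceBase
    {
        private Timer _timer;
        private ServiceHost _host;

        protected override void OnStart(string[] args)
        {
            _timer = new Timer(60000);                       // keep the core logic alive
            _timer.Elapsed += (s, e) => CoreLogic.Run();
            _timer.Start();

            _host = new ServiceHost(typeof(StatusService));  // endpoint defined in App.config
            _host.Open();
        }

        protected override void OnStop()
        {
            if (_timer != null) _timer.Stop();
            if (_host != null) _host.Close();
        }
    }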
Desired situation:
Multiple .NET solutions that only include the unique portions of logic that cannot be reused between implementations
Reuse one common WPF UI
Reuse one common Windows Service across each individual implementation
I am struggling to figure out how one common Windows Service project could be maintained and then consumed by every implementing project (one Windows Service -> many implementations).
The main limitation here is that the Windows Service needs to build to an exe, with an installer to go with it, so that the service can be registered in the Windows registry.
Possibilities I could think of thus far:
Some strange pre-build process that pulls down pre-built components for the Windows Service from somewhere (GitHub, perhaps), adds them to the build directory, and uses those components in the build of the final installer
This sounds absolutely appalling to me and a nightmare to maintain
Maintain a common .NET project for the Windows Service. Have a rule that any new or existing solution must 'Add > Existing Project' and point to the common .NET Windows Service project (a library-based variation on this is sketched after the question below).
Is dependent on humans following a particular workflow each time they create or change a solution, which... I am not a fan of
Is a little counterintuitive, and it is definitely repetitive and redundant to have the exact same project in each individual solution when only the build output of that project is needed.
Abandon Windows Service + WCF. Adopt an architecture that more closely resembles microservices - run the logic that keeps the core logic alive and lets the UI get updates from it in a Docker container.
Definitely sounds like the 'right' and future-proof way to do this
The most effort to refactor a significant portion of the codebase
Introduces new, unknown, and potentially significant problems, given the app's highly restricted execution environment (Windows Server, sometimes with extremely restrictive privileges)
Is there some method of architecting such an application that I am not aware of/not thinking of?
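For illustration, here is roughly what the library-based variation could look like (all names below are hypothetical): the common Windows Service host lives in one shared class library - distributed as a project reference or a private NuGet package - and exposes an interface for the implementation-specific logic, while each implementation solution keeps only a thin exe project.

    // Shared class library, e.g. "Common.ServiceHost" - maintained in exactly one place.
    using System.ServiceProcess;
    using System.Timers;

    public interface ICoreLogic
    {
        void Execute();                                  // implementation-specific work
    }

    public class TimedLogicService : ServiceBase
    {
        private readonly ICoreLogic _logic;
        private Timer _timer;

        public TimedLogicService(ICoreLogic logic, string serviceName)
        {
            _logic = logic;
            ServiceName = serviceName;                   // unique per implementation
        }

        protected override void OnStart(string[] args)
        {
            _timer = new Timer(60000);
            _timer.Elapsed += (s, e) => _logic.Execute();
            _timer.Start();
        }

        protected override void OnStop()
        {
            if (_timer != null) _timer.Stop();
        }
    }

    // Per-implementation Windows Service project: nothing but a thin entry point.
    internal sealed class FooCoreLogic : ICoreLogic
    {
        public void Execute() { /* implementation-specific timer work */ }
    }

    internal static class Program
    {
        private static void Main()
        {
            ServiceBase.Run(new TimedLogicService(new FooCoreLogic(), "FooService"));
        }
    }

Each solution would then build its own, differently named exe plus installer, while the host and timer code is maintained only once.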
Related
Following a normal microservices approach, we would like to place each microservice in its own Git repo and then have one repository for the Service Fabric project. When we update one of the microservices, the thought is that the Service Fabric project would redeploy just that service.
Are there any examples of splitting a Service Fabric project up like this? I've noticed that in all of their examples everything is in one solution/repository.
tl;dr: Figure out what works best for your development team(s) in terms of managing code and releases of individual services. Use diff packages to upgrade only the changes in your Service Fabric applications. The smallest repo should be one Service Fabric application contained in one Visual Studio solution.
Longer version:
It is fully possible to split your Service Fabric application into multiple applications, the smallest being one Service Fabric application per microservice. Whether this is a good idea completely depends on the type of application you are trying to build. Are there any dependencies between the services? How do you partition services, and could there be any scenario where you want to do that in a coordinated manner? How are you planning to monitor your services? If you want to do that in a coordinated manner, then again it might make sense to have more services in the same application.
Splitting the code into repos that are smaller than your Visual Studio solution would likely only lead to trouble for you. You could technically work with Git submodules or subtrees to some effect, but the way Visual Studio handles project references inside solutions would likely make you end up in merge-hell very soon.
When it comes to upgrading your Service Fabric application, there is actually a way to upgrade only the changed services in your application, based on the version numbers in the service manifest. This is called a diff package, and it can be used to deploy an application to a cluster where that application has been deployed at least once (i.e. it is an upgrade, not an install). This can greatly reduce the upgrade time of your deployment if you have upgraded only a minority of the services in the application.
The full documentation for this can be found here. There is also a SO answer that describes it.
I would say that your choice is, as with much in development, a trade-off between different gains.
Splitting the services into more fine-grained applications containing fewer services could make upgrades easier (though to some extent this effect can also be achieved technically by using diff packages). The downside of this approach is that you would have to manage the dependencies between your services as strict interfaces. One approach for that would be to publish your service/actor interfaces to a private NuGet feed. This in turn introduces some additional complexity into your development pipeline.
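For illustration (hypothetical names), the kind of contract you would publish to such a feed is just the remoting interface, so consumers never need to reference the implementing service project:

    // Shared contracts assembly published to the private NuGet feed; both the
    // implementing service and its consumers reference only this package.
    using System.Threading.Tasks;
    using Microsoft.ServiceFabric.Services.Remoting;

    public interface IInventoryService : IService
    {
        Task<int> GetStockLevelAsync(string sku);
    }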
Keeping everything in the same repo, the same Visual Studio solution, and the same Service Fabric application can work for smaller solutions, but it will likely become hard to work with in the long run in terms of merges, versioning, and releases as your solution grows.
With our projects we follow a pattern similar to this, but not that fine-grained. Each SF application is contained in its own repo, but we'll have multiple specific microservices in an application. We separate our applications into specific pieces of functionality with respect to the end application (Data Tier, Middle Tier, Presentation, Analytics, etc.). When we upgrade, we'll upgrade specific applications at a time, not necessarily specific services. Upgrading specific services is a huge pain ops-wise. We still have a shared interfaces project, and we use SF remoting to communicate between the different applications; we are able to do that because we manage containers and interfaces in their own repo, which we then distribute via a private NuGet server. This makes things difficult workflow-wise, but in the end it's nice because it keeps us aware of interface compatibility between applications. We also have some core microservices that every application will have, which we distribute using SF NuGet. It's still young and has some sharp edges, but it's awesome.
Reading your question, it sounds like your repository split is mainly for deployment concerns, so I will focus on that aspect.
We are using one Git repository per Service Fabric application (which contains multiple services); this helps to simplify how Continuous Integration and Continuous Deployment are done: if there is a change in the repo (code or config), the SF application needs to be built and deployed.
If you are using the Build and Release features of VSTS online, you can easily leverage the build tasks available for Service Fabric to support differential upgrades. Using the “Update Service Fabric App Versions” task (https://www.visualstudio.com/en-us/docs/build/steps/utility/service-fabric-versioning) with the “Update only if changed” option, together with the deterministic compiler flag (https://blogs.msdn.microsoft.com/dotnet/2016/04/02/whats-new-for-c-and-vb-in-visual-studio/) to make sure that binaries are identical when the code is identical, you easily end up with differential upgrades per SF application.
You shouldn't necessarily think of a Service Fabric service as being a microservice.
The Service Fabric taxonomy of code/services/apps etc. gives you high flexibility in how you compose things to your needs (as already pointed out). Consider the fact that you can have multiple code packages running in one service; trying to translate that into a microservice definition just makes things even harder to cope with.
As the SF application is your unit of deployment (whether it contains one or more updated services), you should strive to structure your repo/solution/SF application setup so that you can contain most changes to one SF app (= one solution and one repo).
If you get in a situation where you constantly need to deploy multiple SF Apps to get a change out, you will not be productive.
I currently have a .NET console application which simply retrieves a lot of data via an API from a remote server and, using Entity Framework, saves it into a SQL database. The application takes 3-4 days to run, and I run it manually once a month or so.
The project is separated into a Models class and a Repository class, as well as the application itself.
I need to now build an ASP.NET MVC web application which allows users to view the data that has been retrieved and am looking for advice on how best to structure this.
Do I create a new ASP.NET MVC project in my solution and set that as the start up application, referencing the same Models and Repository classes? If so, how do I then run my console app? Or is it best to keep these as separate solutions, just referencing the same database?
Is there a better way of doing this as well? (i.e., is there some way the console application could be rebuilt as part of the front end, using queues or workers to fetch the data regularly?)
Thanks for your help,
Robbie.
Same solution. Different projects. By being in the same solution you gain the easy ability to reference shared components. I would actually recommend breaking out your entities, repositories, etc. into a third project, a class library, that then both your console app and MVC app will reference.
If you don't put everything in the same solution, then you're stuck in DLL hell, where you have to build one project, manually copy the DLL into the other project, add the reference, and then keep everything up to date whenever you make changes to that DLL. The more projects that get involved, the greater the entropy and the greater the likelihood that your projects all end up running on different versions of the DLL.
Another option is to create a NuGet package containing the shared components, host it in your own private repo, and then add it to each project that needs it. However, while it's pretty easy to set all this up, it's not 100% frictionless: you will have to remember to repackage and republish the NuGet package whenever you make changes, and then individually update the package in each referencing project.
Long and short, same solution is always the best way to go unless there's a very good reason not to. It's the only "it just works" approach.
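A rough sketch of that layout (all names hypothetical): the entities and repositories sit in a class library, and both the console importer and the MVC site take a project reference to it.

    // MyApp.Data (class library): entities + EF context + repository, shared by
    // both executables.
    using System.Data.Entity;                            // Entity Framework 6
    using System.Linq;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ImportContext : DbContext
    {
        public DbSet<Product> Products { get; set; }
    }

    public class ProductRepository
    {
        public void Save(Product product)
        {
            using (var db = new ImportContext())
            {
                db.Products.Add(product);
                db.SaveChanges();
            }
        }

        public Product[] GetAll()
        {
            using (var db = new ImportContext())
            {
                return db.Products.ToArray();
            }
        }
    }

    // MyApp.Importer (console app) and MyApp.Web (MVC app) each add a project
    // reference to MyApp.Data and call ProductRepository directly.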
Personally I would keep these as separate projects and separate solutions that just reference the same database, but move code that can be shared by both solutions into a separate class library.
The way your web application will present the modelled data will most likely be very different from how your console application uses it, so using the same models and repositories would most likely further couple your web application to your console application.
This is very similar to the way microservices work, where the microservice acts and grows independently of its consumers (in this instance, your web application) and they only communicate via a clearly defined API.
I've made a Windows Service with a timer (I've seen the discussions about timer-driven Windows Services vs. the Windows Task Scheduler, and still want to go with my own Windows Service) that runs some business logic.
To separate my concerns and make it easy to test and run manually, all my business logic is in a separate project that I also reference in a Windows Forms tester GUI.
Now I want to make another timer-driven Windows Service in another solution that runs some other business logic. I don't want to end up with several codebases for my Windows Service and timer, so I'd like to reuse them from this solution and just write the other business-logic project.
How does this work? Am I going to end up with the same DLL name for the service project in both solutions? If they run on the same server, that will cause problems. It's such a small piece of code, I almost feel like the service isn't worthwhile as its own project, or isn't worthy of reuse, but I also hate the idea of not reusing it.
Also, I dislike the notion of reusing, say, just one or two .cs files and not the whole project, not only because that seems to go against the intentions of .NET, but also because our Mercurial source control makes that cumbersome.
What's the right way to approach this?
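For illustration (a sketch, not something from the question): if the shared timer/service host lives in its own class library with one stable assembly name, each solution only needs a thin exe with its own assembly name and its own installer class, so two services built this way can coexist on the same server. A hypothetical installer for one of the implementations:

    using System.ComponentModel;
    using System.Configuration.Install;
    using System.ServiceProcess;

    [RunInstaller(true)]
    public class ProjectInstaller : Installer
    {
        public ProjectInstaller()
        {
            Installers.Add(new ServiceProcessInstaller { Account = ServiceAccount.LocalService });
            Installers.Add(new ServiceInstaller
            {
                ServiceName = "OtherBusinessLogicService",   // unique per solution
                StartType = ServiceStartMode.Automatic
            });
        }
    }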
I'm creating a new application using Prism and ClickOnce, but while testing ClickOnce's hash checking for delta-only updates I noticed that I would need to make some architecture changes to take full advantage of ClickOnce updates.
To be clear, I am deploying to machines with poor internet connections and I really want to publish small, quick updates with minimal bandwidth. Ideally, only modules that have been modified would be sent over the wire.
First, I noticed that the client application project's hash (the one that produces the .exe) always changed no matter what, so it was always re-downloaded. This led me to move Shell.xaml, ShellViewModel.cs, and some ResourceDictionaries out into a new ShellModule. That leaves AggregateModuleCatalog, Bootstrapper, App.xaml, and App.cs; each file is fairly small, so this is fine.
Second, I noticed that ClickOnce's programmatic updating could go into a module, so I have that in an AutoUpdateModule.
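For reference, the programmatic check that such an AutoUpdateModule typically wraps uses the standard System.Deployment.Application API; a minimal sketch (the class name here is made up):

    using System.Deployment.Application;

    public static class ClickOnceUpdater
    {
        // Returns true if an update was downloaded; the caller restarts the app afterwards.
        public static bool UpdateIfAvailable()
        {
            if (!ApplicationDeployment.IsNetworkDeployed)
                return false;                            // e.g. running from the debugger

            var deployment = ApplicationDeployment.CurrentDeployment;
            if (!deployment.CheckForUpdate())
                return false;

            deployment.Update();
            return true;
        }
    }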
Finally, my last concern is the Infrastructure project. Each module directly references Infrastructure, and if Infrastructure is modified all modules get new hash values. I believe this means that even if I add a line to an enum inside Infrastructure the whole app will be re-downloaded.
Is there an elegant solution to this issue? And if you've deployed Prism applications using ClickOnce what are some other architectural modifications that have helped or improved your deployment?
I'm unfamiliar with Prism; however, there is no way within ClickOnce to apply partial updates for standard applications.
I had a similar problem (I think) with a Windows app project that I was working on about five years ago. Specifically, my users often had spotty data connections (connecting over sometimes-poor cellular data connections), and I needed to be sure that my updates were as small as possible.
Even though I could prevent the full application from being re-downloaded after each update, I did find that there were numerous third-party DLLs that never changed. So I created a separate installer that placed those modules in the Windows GAC, and they were installed only once, when the prerequisites were installed. This cut my application size down from 25 MB to 2 MB, quite an improvement.
I'm not sure if this applies to your circumstance, but it might help.
I need to build an application in C# that will have multiple UIs: two for the web, and one that will be the same application but usable with no internet access. I am leaning towards MVC for the web, then MVVM/WPF for the Windows application (Silverlight is not an option). I should be able to inject a different repository implementation for the two paradigms, thus solving the disconnected-from-the-internet issue.
What I am wondering is how best to reuse as much presentation logic as possible. Ideally, I would like to be able to use the same controller/presenter-type entities to run both UIs. I'm looking for an example of a good solution to this problem. I don't see a clear path to reusing MVC's controllers (they seem too tightly bound to the MVC framework to work), but at the same time I'm not excited about the overhead involved in implementing a custom MVVM or MVP pattern for the web (which I fear is the answer).
Alternatively, am I crazy to even try to re-use those components? Is it not worth the hassle? We can easily share the services underpinning the UIs, but it seems a shame to write such similar UI code twice.
The right thing to do is to share only the Business Layer and Database Access Layer. At least you will have consistency between all the clients.
Then build the clients taking advantage of the benefits of each platform (richness in the desktop app, simplicity in the web app).
Of course it all depends on the budget.
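As a rough sketch of that split (all names hypothetical), the business/service layer is the only shared piece, and each UI stays idiomatic to its own framework:

    // Shared class library referenced by both the MVC site and the WPF app.
    public interface IOrderService
    {
        OrderSummary[] GetOpenOrders();
    }

    public class OrderSummary
    {
        public int Id { get; set; }
        public string Customer { get; set; }
    }

    // Web client: a thin ASP.NET MVC controller over the shared service.
    public class OrdersController : System.Web.Mvc.Controller
    {
        private readonly IOrderService _orders;
        public OrdersController(IOrderService orders) { _orders = orders; }

        public System.Web.Mvc.ActionResult Index()
        {
            return View(_orders.GetOpenOrders());
        }
    }

    // Desktop client: a thin WPF view model over the same service, with an
    // offline-capable repository injected behind IOrderService.
    public class OrdersViewModel
    {
        public OrderSummary[] OpenOrders { get; private set; }

        public OrdersViewModel(IOrderService orders)
        {
            OpenOrders = orders.GetOpenOrders();
        }
    }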
You have the option of using WPF for everything for maximum reuse. WPF can be deployed as partial-trust XBAPs.
There are downsides, though:
* Download size can be a problem
* Clients need the correct framework version, and XBAPs can only run in Internet Explorer (or Firefox through a plugin, which does not work on Windows 7)
I've tried it on a solution with a small XBAP client and a larger standalone application, and it is really only minor details that cannot be reused (Window in the app, Page in the XBAP, and so on). It makes for a nice, consistent layout too.
This is slightly hackish (and not really recommended unless you really understand what you are doing :)), but you could try creating a desktop app that embeds a browser. This enables you to reuse the GUI. You will also need to package a web server, which might be a problem if you are using C#/MVC/.NET.