I have a C# Windows Forms application which slows down, or even freezes, after a day or so when deployed at one particular customer's site. I would love to rewrite this inherited project from scratch, but for now the potential sources of the problem are rather widespread.
Can anyone suggest a way to perform some basic profiling (even "poor man's profiling"/break-and-sample) on a .NET application in a live environment without completely crippling its performance?
Given the severity of the slow-down, I guess that just a few data points should be enough to find the cause.
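One lightweight, in-process option (a sketch only; the 500 ms threshold, 5 s interval, and class name are illustrative, not from the original) is a "watchdog" thread that measures how long the UI thread takes to service a posted message and logs basic process stats whenever the message pump is sluggish:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Windows.Forms;

// Hypothetical watchdog sketch: periodically posts an empty delegate to the
// UI thread and times how long it takes to be serviced. A slow round-trip
// means the message pump is stalled; the log entry also records memory and
// handle counts, which often grow alongside long-running freezes.
public static class UiWatchdog
{
    public static void Start(Form mainForm)
    {
        var thread = new Thread(() =>
        {
            while (!mainForm.IsDisposed)
            {
                var sw = Stopwatch.StartNew();
                try
                {
                    // BeginInvoke posts to the UI thread; EndInvoke blocks
                    // until the UI thread has actually processed the call.
                    var ar = mainForm.BeginInvoke(new Action(() => { }));
                    mainForm.EndInvoke(ar);
                }
                catch (ObjectDisposedException) { break; }

                if (sw.ElapsedMilliseconds > 500)
                {
                    Trace.WriteLine(string.Format(
                        "UI thread took {0} ms; GC heap {1} KB; handles {2}",
                        sw.ElapsedMilliseconds,
                        GC.GetTotalMemory(false) / 1024,
                        Process.GetCurrentProcess().HandleCount));
                }
                Thread.Sleep(5000);
            }
        });
        thread.IsBackground = true;
        thread.Start();
    }
}
```

This doesn't replace a real sampling profiler, but it is cheap enough to leave running on the customer's machine for a day and correlates the freeze with handle/memory growth.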
I'm seeking advice about improving our release strategy for an in-house WPF application.
Currently, we are using ClickOnce to release new versions to our users.
This is still manual, but we are looking into using DevOps pipelines to streamline it.
We have noticed that, as our application grows, the risk increases of making breaking changes without noticing them during the testing phase. We have a small team and tight deadlines, so the testing phase is limited.
So, to improve our way of working, we have been investigating canary releases. This would mean first releasing a new version to a set of key users and, once they report no issues, launching it for everybody.
From an application perspective, I could make it work. But I'm not sure how we can make this work with our database.
Has anyone tried this approach with desktop applications already? Or are there better ways to do this kind of thing?
Any help is appreciated!
Kind Regards
Tim
We have set up a two-step release pipeline for this same purpose. We have an internal UAT stage that pushes the app out internally, via Microsoft's App Center, and then a second, full release stage which publishes to "production".
We're not using ClickOnce, but the principle is the same. You could have a UAT and a Production publish location, one for each stage.
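With two ClickOnce publish locations, it can help to log which feed a given install came from, so you can tell whether a machine is a canary or a production user. A minimal sketch using the standard `System.Deployment.Application` API (the log destination is up to you; this just writes to the console):

```csharp
using System;
using System.Deployment.Application;

// Sketch: report which ClickOnce publish location this install updates from.
// Only valid for a network-deployed (ClickOnce-installed) app, hence the
// IsNetworkDeployed guard.
public static class DeploymentInfo
{
    public static void Log()
    {
        if (!ApplicationDeployment.IsNetworkDeployed)
        {
            Console.WriteLine("Not a ClickOnce install (e.g. running from the IDE).");
            return;
        }

        var deployment = ApplicationDeployment.CurrentDeployment;
        Console.WriteLine("Update location:   " + deployment.UpdateLocation);
        Console.WriteLine("Installed version: " + deployment.CurrentVersion);
    }
}
```

Comparing `UpdateLocation` against your UAT URL is a cheap way to confirm the canary group is actually on the canary feed.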
We have a number of small ASP.NET MVC apps. All are basically a bunch of forms which capture data and store it in a SQL Server database; the data is usually then loaded into our data warehouse and used for reporting.
We are looking to rewrite all the small applications and apply a level of consistency and good practice to each. The applications are fairly similar, and from a user's perspective it would be better if they appeared to be part of one large application, so as part of the rewrite we are considering merging them together in some way.
Our two currently preferred options seem to be:
Create a separate portal application which will be the users' point of entry to the apps. Its home page could have 'tiles', one for each of the apps (which would be registered in this parent app), linking through to them. In this scenario all the apps would remain in separate projects and be compiled/deployed independently. This has the advantage of keeping them separate, so we can change and deploy one app without affecting the others; I could pull common code out into a class library. One thing that annoys me about this option is that the parent app must basically use hard-coded links to each app.
I looked into using 'areas' in ASP.NET MVC, with all the small apps as different areas in one big project. This seems cleaner in my head as they are all in one place; however, it has the disadvantage of requiring the whole app to be redeployed whenever any individual one is changed, and I have a feeling we would run into trouble after adding a number of apps into the mix.
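For the second option, each small app becomes a standard MVC area with its own registration class, which ASP.NET MVC discovers automatically. A sketch ("Timesheets" is an illustrative area name, not from the original):

```csharp
using System.Web.Mvc;

// Sketch of option 2: one small app hosted as an area inside a single MVC
// project. Each area gets a class like this; MVC finds and runs them all.
public class TimesheetsAreaRegistration : AreaRegistration
{
    public override string AreaName
    {
        get { return "Timesheets"; }
    }

    public override void RegisterArea(AreaRegistrationContext context)
    {
        // Routes all /Timesheets/... URLs to controllers in this area.
        context.MapRoute(
            "Timesheets_default",
            "Timesheets/{controller}/{action}/{id}",
            new { action = "Index", id = UrlParameter.Optional });
    }
}

// Called once from Application_Start in Global.asax:
//   AreaRegistration.RegisterAllAreas();
```

This also addresses the hard-coded-links concern from option 1, since in-app links can use the routing table (`Url.Action(..., new { area = "Timesheets" })`) rather than fixed URLs.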
We have a SharePoint installation, and someone suggested creating the portal-type app in SharePoint... This doesn't sound like the best idea to me, but I am willing to consider it if anyone can point out the advantages of this approach.
Are there any recommendations on the architecture for this? Has anyone completed similar projects in the past, and did something work well or badly?
We have 4 developers and we do not expect the apps to change too much once developed (except to fix potential bugs etc.). We will however plan to add new apps to the solution as time goes on.
Thank you
The advantage of MVC areas would be code sharing: the repeated, redundant parts of each app can be refactored to use the same infrastructure code (security, logging, data access, etc.).
But it will also mean more conflicts when initially merging the code.
The deployment concern can be mitigated with a continuous deployment tool (there are many on the market); or, if you deploy to an Azure Web App, deployment slots can give you zero-downtime deployment.
I am using TFS 2013 and MS Release Management vNext to provide some continuous deployment capability to the development team, but I am struggling to find a feature (or a way of achieving the capability) related to downstream (or chained) builds.
What I would like to achieve is the ability to trigger one build upon the successful completion of another.
The basic premise is that we sometimes have services that need to be deployed before a web application is subsequently deployed, but not in all cases (otherwise the build itself would simply deploy all of these components every single time); in roughly 80% of cases the web application will be deployed in isolation.
Has anyone achieved this in any way other than custom TFS build templates? Is there actually an undocumented feature somewhere in MS RM?
Thanks for your time in advance
If I understand correctly, you want to deploy to an environment based on some pre-condition being met. There is a nice story for that in the new release management pipeline; see the video below.
https://www.youtube.com/watch?v=OPuWRL4jORQ
We are faced with the problem of maintaining lots of Windows services.
The idea is to reorganize the Windows services into class libraries and host those libraries in one master Windows service. Is this a good idea? Any advice, please?
There is a framework called Topshelf for hosting "services" within a single Windows service. You might want to consider using it: https://github.com/Topshelf/Topshelf
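A minimal sketch of the Topshelf hosting model (class and service names here are illustrative): the former service becomes a plain class with start/stop methods, and Topshelf wires it up to the Windows service control manager.

```csharp
using System;
using Topshelf;

// Illustrative worker: a formerly stand-alone Windows service reduced to a
// plain class. Several such classes can be hosted by one Topshelf service.
public class ImportWorker
{
    public void Start() { Console.WriteLine("import worker started"); }
    public void Stop()  { Console.WriteLine("import worker stopped"); }
}

public class Program
{
    public static void Main()
    {
        HostFactory.Run(x =>
        {
            x.Service<ImportWorker>(s =>
            {
                s.ConstructUsing(name => new ImportWorker());
                s.WhenStarted(w => w.Start());
                s.WhenStopped(w => w.Stop());
            });
            x.RunAsLocalSystem();
            x.SetServiceName("MasterWorkerService");
            x.SetDisplayName("Master Worker Service");
        });
    }
}
```

A nice side effect is that the same executable runs as a console app during development and installs as a service (`MyApp.exe install`) in production.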
I am interpreting your question to be "We have tons of little Windows applications that run as services - how can we simplify them?".
In general, lots of smaller programs are better. A single monolithic application is difficult to maintain and test; when someone needs to make a small change, it can trigger catastrophic consequences for dozens of other components of the application. It can also make it impossible to change one small application without taking down the whole service, as Chris Knight comments above.
On the other hand, lots of small programs suffer from the breadth problem. You probably want to make sure all your little programs run on a consistent framework - i.e. they all log their results to the same place, they all use a standardized configuration system, and they are all managed in the same place.
I have seen situations where people write services because they need to run a task "when a particular condition happens", so they make it a constantly running service that continuously checks for that condition. Could you take some of your services and turn them into triggered launches of individual applications?
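As one concrete illustration of that point (a sketch; the folder path and file pattern are invented for the example): a service that polls a folder every minute can often be replaced by an event-driven `FileSystemWatcher`, which fires only when the condition actually occurs.

```csharp
using System;
using System.IO;

// Sketch: replacing a polling loop with an event-driven trigger. Instead of
// a service waking up on a timer to look for new files, the watcher raises
// an event the moment a matching file appears.
class Program
{
    static void Main()
    {
        using (var watcher = new FileSystemWatcher(@"C:\inbox", "*.csv"))
        {
            watcher.Created += (sender, e) =>
                Console.WriteLine("Process new file: " + e.FullPath);
            watcher.EnableRaisingEvents = true;

            Console.WriteLine("Waiting for files; press Enter to exit.");
            Console.ReadLine();
        }
    }
}
```

The same idea applies to scheduled-task triggers (event-log entries, logon, etc.), where the Task Scheduler launches a short-lived program instead of a resident service.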
If this isn't the correct interpretation, please let me know :)
What does Windows do to an app when it is run in Compatibility Mode?
Is there a way I can detect the compatibility mode settings in .NET?
What does Windows do to an app when it is run in Compatibility Mode?
It inserts a number of compatibility shims that mimic old behavior or bugs. Sometimes this is necessary: some programs' behavior depends on old bugs which have since been fixed, or they used undocumented functionality.
Joel's blog entry, "How Microsoft Lost the API War", gives a nice example of that:
I first heard about this from one of the developers of the hit game SimCity, who told me that there was a critical bug in his application: it used memory right after freeing it, a major no-no that happened to work OK on DOS but would not work under Windows where memory that is freed is likely to be snatched up by another running application right away. The testers on the Windows team were going through various popular applications, testing them to make sure they worked OK, but SimCity kept crashing. They reported this to the Windows developers, who disassembled SimCity, stepped through it in a debugger, found the bug, and added special code that checked if SimCity was running, and if it did, ran the memory allocator in a special mode in which you could still use memory after freeing it.
That's what compatibility shims are meant to do: insert legacy behavior, whether that is reporting a different version of Windows, making a certain API behave in a different way, or disabling some other feature of Windows that might cause problems, such as Aero.
The technical details of shims are here.
The Shim Infrastructure implements a form of application programming interface (API) hooking. Specifically, it leverages the nature of linking to redirect API calls from Windows itself to alternative code: the shim itself. The Windows Portable Executable (PE) and Common Object File Format (COFF) Specification includes several headers, and the data directories in this header provide a layer of indirection between the application and the linked file. Calls to external binary files take place through the Import Address Table (IAT).
Is there a way I can detect the compatibility mode settings in .NET?
The question "Is a Program Running in Compatibility Mode?" seems to give a relevant answer.
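In short, per-user compatibility settings are stored as flag strings under the `AppCompatFlags\Layers` registry key, keyed by the executable's full path. A sketch following that approach (the class name is illustrative; flag strings such as "WIN7RTM RUNASADMIN" are examples of what you may get back):

```csharp
using Microsoft.Win32;

// Sketch: read the compatibility-mode flags set for a given executable.
// Returns null when no compatibility settings exist for that path.
public static class CompatMode
{
    public static string GetLayers(string exePath)
    {
        const string keyPath =
            @"Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers";

        // Per-user settings live under HKEY_CURRENT_USER; the same subkey
        // under HKEY_LOCAL_MACHINE holds the "for all users" settings.
        using (var key = Registry.CurrentUser.OpenSubKey(keyPath))
        {
            if (key == null) return null;
            return key.GetValue(exePath) as string;
        }
    }
}

// Usage (WinForms): CompatMode.GetLayers(Application.ExecutablePath)
```

Note this only reveals the checkbox-style settings from the file's Compatibility tab; shims applied through a shim database installed by the Application Compatibility Toolkit won't show up here.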