Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
When I create projects in VS, I only add a setup project to my solution if the application is a service (I can't get it running without installing it first).
If the program isn't a service, I don't normally create a setup project; instead I copy the .exe file, along with all the DLLs the application needs, to a desired folder and run the application from there.
My question is whether there is any benefit, in terms of performance or anything else, to installing a program (through its setup) rather than just running it without having installed it first.
It depends highly on the needs of your application. An installer may, for instance:
Add necessary registry entries
Register file extensions for your application
Check prerequisites and (potentially) install missing libraries or frameworks
Check for potential problems that would prevent your application from working correctly
Allow the user to choose only a subset of the features your application offers (thus making the installation smaller)
Choose binary and library files for a specific environment (for example, 32- vs 64-bit). NVIDIA, for instance, now ships one unified installer for a whole series of graphics cards, and the installer then chooses the appropriate files to install.
Automatically add shortcuts to the Start menu/screen and desktop
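As a concrete illustration of the registry and file-extension points above, a setup project typically writes entries like these (.reg syntax; MyApp, .myext, and the install path are all made-up placeholders):

```reg
Windows Registry Editor Version 5.00

; Associate the hypothetical .myext extension with a ProgID
[HKEY_CLASSES_ROOT\.myext]
@="MyApp.Document"

; Tell the shell how to open files of that type
[HKEY_CLASSES_ROOT\MyApp.Document\shell\open\command]
@="\"C:\\Program Files\\MyApp\\MyApp.exe\" \"%1\""
```

An uninstaller would remove exactly these keys again, which is part of what makes a proper setup/uninstall cycle cleaner than doing this from application code.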
You can of course embed most of these actions in your application itself, but I'd vote against that: your application would carry boilerplate code that runs only once, and it might not even start because of missing prerequisites that a setup program could have resolved.
Also, it's less user-friendly. With a setup program, users can very quickly get the application ready to use and - equally quickly - remove it from their computer (along with all config files, registry entries, etc.).
If you plan to use the program only yourself, it's your choice. But if you want to publish it, I'd suggest at least offering the option to either install the program or run it in portable mode (without installation).
I think there are benefits to having a setup project from both the end-user and the developer perspective. Normally, when you finish your project, you want to distribute it easily. End users are used to downloading and installing applications in a common way, meaning:
selecting the destination path
selecting whether to install for all users or only the current user
choosing whether to create desktop/Programs menu icons
launching the program after installation
and all of this is easily accomplished by a setup project.
I think a regular user would find it harder to, say:
download a compressed file
extract the package (assuming an appropriate extraction tool is already installed; otherwise it has to be installed first)
create a desktop shortcut
The company where I am currently employed is struggling with an architectural decision for our range of applications. At the moment we have a couple of applications with common parts (think of something like a calendar module). Until now we kept copying code from existing applications, but in the future we want to evolve our applications toward a more modular design:
As you can see in the picture above it is possible to have different versions of modules per application.
We are considering two possible solutions:
Building a core application framework that we can install our modules into. We're thinking about a tool like NuGet to accomplish this.
Building one application where all our modules are included (one code base), but each customer only gets the functionality that is activated for him. We're foreseeing some problems with versioning here.
Any suggestions on this? We can't be the first company to struggle with this problem. All our applications are ASP.NET MVC 4/5 web applications, built with Razor templates or JavaScript templates (Knockout.js). All of our applications are deployed on Microsoft Azure, and we have extensive in-house knowledge of build scripts (MSBuild), CI servers, and so on.
Having a separate project/assembly for each module and delivering it as a NuGet package is definitely a good strategy.
Advantages:
You can maintain and release multiple versions; different clients get different versions.
Installation of the latest or a specific version is supported through NuGet. This helps during development, where the developer of App A can target version 2.0 of module A while the developer of App B targets 1.0.
A single source base with separate branches for each version. A client using 1.0 who requests a change gets code from the 1.0 branch with just the fix requested.
Each module can be released or updated independently.
Challenges:
Debugging, during development, assembly code that was installed via NuGet. NuGet has built-in support for this; we achieved it in our case (a framework used by multiple platforms).
Code changes required in the module code (a bug fix or a new feature). This is tricky:
Option 1: the same developer just goes ahead and makes the change, creates a new package, and installs the new version in his app. The change has to be authorized, as it is critical code.
Option 2: a designated team is responsible for fixing issues and change requests in the framework code.
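For instance, with the classic packages.config format, each application pins its own version of the shared module (ModuleA and the version numbers are placeholders here):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- App A pins 2.0.0 of the shared module; App B's file would say version="1.0.0" -->
<packages>
  <package id="ModuleA" version="2.0.0" targetFramework="net45" />
</packages>
```

Each app upgrades on its own schedule simply by changing the version attribute and restoring packages.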
You can also try a plugin architecture: build the different modules that make up the application as plugins, then build the module that is required by every application as a single code base. In this case, installing a component for any particular user is a matter of adding or pulling out plugins. Many large projects make use of this particular architecture, as it reduces copy-and-paste and increases reuse and speed of development. You can check nopCommerce, an open-source project, for an idea of how it is done.
I am about to start development of an application in C# and .NET. The application is going to be big in terms of how users can configure what is displayed on the screen.
I need some way to store configuration data, and I have only explored two options so far: XML files and INI files. Which of these is better? Is there any other, newer way to store data that works well with the .NET Framework?
This is a really broad question, but basically, the way I see it, there are a few prime places to store application data. Exactly what and how you store will depend on the type of data. The following is my personal short list:
Registry - The Windows registry can be used to store single key/value pairs or small amounts of read-only data. You can really only write these settings when your application is installed, unless you are running in administrator mode (which isn't a good idea).
App Config - Similar to the registry, this allows storage of application data that is generally best written during installation or configuration, but not much after that. The nice thing is that a system administrator can often find these files, and they are XML, which means they are easier to edit (and read) than other formats.
Isolated Storage - If you are storing application-, user- or machine-specific information and you don't mind writing your own file readers and writers (or you are interested in delving into XML storage), this is an excellent option. It allows user-specific settings and doesn't require the user to have special privileges on the computer.
Local Database - If you want values that you can look up easily, read and write often, and store simply, a local database can be excellent. You might consider looking into SQLite or a similar tool for this.
Network Database - This is pretty much as advanced as it gets. If you want user information to be processed automatically regardless of where the user opens your application, and you want to be able to share settings between computers, this is probably your best option. You can use MySQL for free, or SQL Server Express if you aren't storing gigabytes of settings. It does require a significant amount of setup, but it might be the best option anyway if you require this level of capability.
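To make the App Config option concrete, a minimal App.config with a couple of settings might look like this (the key names are invented for illustration; at runtime you would read them via ConfigurationManager.AppSettings):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- Example keys; use whatever names your application needs -->
    <add key="Theme" value="Dark" />
    <add key="AutoSaveMinutes" value="10" />
  </appSettings>
</configuration>
```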
Hopefully this gets you started. Best of luck!
EDIT - I'm sorry that I made this post more complicated than it should be. I won't delete it in case someone with a similar question needs help.
I recently became interested in building my own Linux distro (probably just for family and friends). I have researched extensively whether I should customize an existing distro (e.g. Arch Linux, Debian) or build from scratch (LFS). I have come to the conclusion that building from scratch would best fit my needs (I'm under no time constraint).
My main question is:
Would it be possible to build an application that functions as a full OS, and just program the Linux distro to run it at launch?
Second main question:
Would doing it this way restrict programs from being installed? Would developers have to make a custom version of their software to run on this mock OS?
Problems I see in this:
-If I use a language like C#, would that work, or does it require Windows natives?
-If I use Java (probably not), would I have to package the JRE with the distro?
-If I use something like Java, can I use libraries like LWJGL (for OpenGL, for things like window frames)?
-Does Java or C# use special file-system methods? Would I need to make the Linux base build the file system for the VM language to use, or can I arrange all that in the mock OS itself?
-Would there be performance problems with VM languages?
-Are there any legal problems with packaging things like the JRE, or with using Windows natives if I find a way to do so?
Additional notes:
-I have no time constraints, even if just the file system takes me 3 years.
-If building this from scratch is not possible, would I be able to customize a distro to function like this?
-I understand that I would have to let Linux handle things like hardware drivers, because communicating with hardware is not something in my ballpark. Would this mean I have to customize something like Arch Linux?
I am very sorry there are so many questions in this, and if I had enough reputation to add a 500-rep bounty, I would.
An operating system is really a huge undertaking. There have been attempts to build a system for creating custom OSs in C#, called Cosmos, which I've considered looking into several times. In most cases, though, applications would probably have to be built specifically for the OS. I may actually bother, now that you've reminded me.
Obviously, the result would not be Linux, but rather a custom OS built using Cosmos. That's probably the closest you can manage using C#.
So, in summary: if your goal is to execute managed code on a machine at the most basic level, Cosmos is probably what you want. You will still be doing almost everything yourself, but you'll have some insulation from the actual guts of the machine.
EDIT: Alternatives include MOSA, which does not use Visual Studio, and Singularity, which you can only use for research but which was produced by Microsoft directly.
The short answer is add this to your kernel command line:
init=/path/to/my/application
Typically the first process a Linux kernel starts is the init process. init takes care of running the startup scripts for everything else that needs to start at boot time - e.g. kernel modules, daemons, and the console login/X desktop.
You can tell the kernel to use any userspace binary you like instead of the default /sbin/init, as above, though you may still want some of the startup scripts to run.
More likely, though, you'll want to edit the startup scripts to run just one application. (Is your application text- or X-based?)
Alternatively, you could hack the kernel to run your application in kernel space and possibly never start any processes at all. But that would be a nightmare to debug and maintain.
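To make the init= approach concrete, here is a supervisor-style init sketched in Python (purely illustrative - a real init is normally a compiled binary or a shell script, and /opt/myapp/myapp is a made-up path). The key point is the shape: PID 1 must never exit, so it runs the single application in a loop and restarts it if it dies.

```python
"""Minimal init-style supervisor: run one application forever."""
import subprocess
import time

# Made-up path: the one application this "distro" exists to run.
APP = ["/opt/myapp/myapp"]

def should_respawn(returncode, clean_exit_codes=(0,)):
    """Restart the application unless it exited cleanly."""
    return returncode not in clean_exit_codes

def supervise(cmd, max_restarts=None, backoff_seconds=1):
    """Run cmd, restarting it on failure; returns the restart count."""
    restarts = 0
    while True:
        result = subprocess.run(cmd)
        if not should_respawn(result.returncode):
            break
        restarts += 1
        if max_restarts is not None and restarts > max_restarts:
            break
        time.sleep(backoff_seconds)  # avoid spinning if the app crashes instantly
    return restarts

# As PID 1 you would simply call: supervise(APP)
```

A real init would also need to mount filesystems and reap orphaned child processes, which this sketch deliberately leaves out.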
I need to deploy an application. I don't need the registry, so I thought about just copying the DLL and EXE files to the client desktops (there are only three). What are the disadvantages of this compared to using ClickOnce deployment from a memory stick?
You don't get Add/Remove Programs integration, but if you're working on something very small and constrained, that might not be a big deal. Is there something in particular that's prompting you to ask the question?
One main reason springs to mind: new versions. Assuming the 3 PCs are networked, ClickOnce deployment to a server (not a memory stick) has the benefit of automagically checking for updates, then downloading and installing new versions every time a user runs your app.
I have a three-tier application which is installed in corporate environments. With every server version update, all clients have to be updated, too. Currently, I provide an MSI package which is automatically deployed via Active Directory, however my customers (mostly with 20-300 users each) seem to hate the MSI solution because it is
Complicated to get running (little Active Directory knowledge);
The update process can't be triggered by the server, when a new version is detected;
Customers can't install multiple versions of the client (e.g. 2.3 and 2.4) at the same time to speak to different servers;
The update process itself doesn't always work as expected (sometimes very strange behaviour healing itself after a few hours)
I've now run a few experiments with ClickOnce, but that approach is too inflexible for me and too hard to integrate into my automated build process. Also, it produces cryptic error messages which would surely confuse my customers.
I would have no problem writing the update logic myself, but the trouble there is that the users running the self-updating application have rights too restricted to perform an update. I've found that they are able to write to their Local Application Data directory, but I don't think this would be the typical place to install application files.
Do you know a way to an update that "just works"?
You can somewhat replicate what ClickOnce does, just adjust it for your needs.
Create a lightweight executable that checks a network/web location for updates.
If there are updates, it copies them locally and replaces the "real" application files.
It runs the "real" application.
The location for the application files should be determined by permissions and the operating system. If users only have write permission to a limited set of folders, then you have no choice but to use one of those folders. Another option is to provide an initial installation package that installs the lightweight executable and grants read/write permission on a specific folder such as "C:\Program Files\MyApp". This approach usually requires buy-in from IT.
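The three steps above can be sketched roughly as follows (Python for brevity - a C#/.NET version would follow the same shape; the version.txt stamp and RealApp.exe name are invented conventions, and the remote directory stands in for your network share or web location):

```python
"""Sketch of a lightweight updater stub: check for a newer version,
copy it locally, then launch the 'real' application."""
import shutil
import subprocess
from pathlib import Path

def parse_version(text):
    """Turn '2.3.1' into (2, 3, 1) so versions compare numerically."""
    return tuple(int(part) for part in text.strip().split("."))

def update_available(local_version_file, remote_version_file):
    """Step 1: compare version stamps; no local stamp means 'install everything'."""
    local = Path(local_version_file)
    if not local.exists():
        return True
    remote = Path(remote_version_file)
    return parse_version(remote.read_text()) > parse_version(local.read_text())

def apply_update(remote_dir, local_dir):
    """Step 2: copy the new application files over the old ones."""
    shutil.copytree(remote_dir, local_dir, dirs_exist_ok=True)

def launch(local_dir, exe_name="RealApp.exe"):
    """Step 3: run the real application (the name is a placeholder)."""
    subprocess.run([str(Path(local_dir) / exe_name)])

def main(remote_dir, local_dir):
    if update_available(Path(local_dir) / "version.txt",
                        Path(remote_dir) / "version.txt"):
        apply_update(remote_dir, local_dir)
    launch(local_dir)
```

A production version would also need to handle files locked by a still-running instance and a failed half-copied update, which is where most of the real complexity lives.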
I hope this helps.
It is really hard to give you exact answers because critical information about the client-side installer isn't explicit. Do you install the client-side files into Program Files? Then you may run into problems when users are restricted.
You don't think Local Application Data is a folder to deploy an application to, but Google does. Its Chrome browser installs that way on Windows, and its automatic update process is even unnoticeable (which may sound horrible). So why not deploy your application into this folder for restricted users? You can find more about the Chrome installer here:
http://robmensching.com/blog/archive/2008/09/04/Dissecting-the-Google-Chrome-setup.aspx
Here's an open-source solution I wrote to address specific needs we had for WinForms and WPF apps. The general idea is to offer the greatest flexibility at the lowest possible overhead. It should give you all the flexibility you need for everything you have described.
Integration is super easy, and the library does pretty much everything for you, including synchronizing operations. It is also highly flexible, letting you determine what tasks to execute and under what conditions - you make the rules (or use some that are there already). Last but not least is the support for any update source (web, BitTorrent, etc.) and any feed format - whatever isn't implemented, you can just write yourself.
Cold updates (requiring an application restart) are also supported, and are done automatically unless "hot-swap" is specified for the task.
This boils down to one DLL, less than 70 KB in size.
More details at http://www.code972.com/blog/2010/08/nappupdate-application-auto-update-framework-for-dotnet/
Code is at http://github.com/synhershko/NAppUpdate (Licensed under the Apache 2.0 license)
I plan on extending it further when I get some more time, but honestly, you should be able to quickly enhance it yourself to cover whatever it currently doesn't support.
If you don't want to give your users too many rights, you can write a Windows service that runs on each computer under an account with the appropriate privileges and updates your application when a new version becomes available.