In my search for the meaning of life, I stumbled upon a blog post arguing that your deployment strategy is not your architecture; it is merely an implementation detail. As such, we should design systems to allow different deployment patterns, whether you deploy to one node, to many nodes, or in some other topology.
Do the latest versions of Visual Studio provide some kind of flexibility (besides Azure) for deploying services under a variety of strategies?
For example, let's say I have a solution
Acme Solution
--Acme Startup Proj
--Acme Service A.csproj
--Acme Service B.csproj
--Acme Service C.csproj
I want to be able to deploy this entire solution as a single unit, or deploy three separate binaries, one per microservice:
AcmeServiceA.exe
AcmeServiceB.exe
AcmeServiceC.exe
What does Visual Studio give you in terms of flexibility of deployment configuration?
Deployment techniques vary with the technologies your app is built on. For the sake of an example, I'm going to assume we're dealing with web services or sites.
You've specified two deployment scenarios: deploying a single project (e.g. microservice), and deploying all projects (full rollout). Let's start small...
Deploying an individual project
The main thing is to plan around your deployable atoms: a project, or a service plus its DB backend... whatever is small enough that you would prefer not to split it into smaller deployments.
For web projects (whether Web API projects or other types), Visual Studio's built-in options can be broadly summarized as: WebDeploy, Azure, and now, with .NET Core, Docker images. I'm not going to go into the details of each, because those are separate questions, but I may mention details worth researching if they sound interesting. (I'm more familiar conceptually with WebDeploy, so I'll refer to it a lot; I'm not advocating for or against it.)
If you were using WebDeploy for example, you could have each project produce a WebDeploy Package. (Again, look this up for more details on how to do it). This package can be crafted to contain a file payload (the site/service files) as well as a database payload, or other subatoms using the WebDeploy provider model. Visual Studio has pretty decent support for this scenario, and there is documentation on it.
Or you could generate a Docker image. From my understanding (and lack of experience with Docker as yet), if you wanted to deploy your web service and database, they ought to be in separate containers. You'll soon find yourself building these yourself outside of VS. That's not a bad thing, Docker sounds very flexible once you get the hang of it; but you are leaving the IDE for this.
Either way, now you can deploy the atomic package. This was the easy part.
Deploying the solution
So, you've got lots of these atomic deployment packages. How do you roll them all out?
Well, at this point VS doesn't provide a lot for you. And it's hard to justify what VS should do here. Almost every organization is going to come up with slightly different rules. Do you deploy from your CI? Do you create packages and deploy them to different environments in your release pipeline? Or do you do it in the cloud and hotswap environments (like Azure deployment slots)?
A VS-native solution has to be either extremely configurable (and hence extremely complicated), or too simple to fit most customers' needs. (As an aside, the initial support for WebDeploy back in VS2010 erred on the first of these. It was extremely configurable, and very difficult for customers, or even the product team, to wrap their heads around all the possible scenarios. Source: I was QA for that feature once upon a time.)
Really, at this point you need to determine how and when you roll out your deployments. You need something to orchestrate each of these deployments.
VS generally orchestrates things with MSBuild. Again, I'm not advocating this as your orchestration platform (I actually dislike it for that... it's OK for project configuration, but IMO not a good fit for task management), but if this is what you want to use, it can work. It's actually pretty simple if you're using it for the web project scenario. You can build your solution with the parameter /p:DeployOnBuild=true (e.g. msbuild AcmeSolution.sln /p:DeployOnBuild=true /p:PublishProfile=Production). If you are using WebDeploy to publish directly, you're done! If you're creating WebDeploy packages, you still need to push those, but at least you've created them all at once.
If you are using WebDeploy Packages, they will each generate a script to use for publishing. There are ways of passing in different WebDeploy parameters as well, so you can reuse the same package (build output) to publish to different environments. However, you'll have to write your own script to combine all of these into one megalithic deployment.
Ditto for Docker. You may end up with a set of images, but you still need something to orchestrate publishing all of them. Tools like Kubernetes can help you roll out, or, in the event of issues, roll back.
There are also more generic orchestration platforms, like Octopus Deploy.
How Unsatisfying!
Yeah, it kind of sucks that there isn't an out-of-the-box solution for large-scale deployments. But if there were, it wouldn't work for 95% of teams. Most of what VS does provide is enough for an individual or a very small team to get their code to their servers. With any larger team, you'll get better mileage out of building a system tailored to how your team operates. There are plenty of tools out there, and none of them work perfectly in all cases. Find one that works for you and you'll be fine. In the end, it all comes down to pushing files and running scripts. If you don't like one system or tool, you can try another.
If you are looking for an improved deployment experience in Visual Studio, check out Flexera's InstallShield Limited Edition in-box solution (ISLE, http://blogs.msdn.com/b/visualstudio/archive/2013/8/15/what-s-new-in-visual-studio-2013-and-installshield-limited-edition.aspx). ISLE is a good option for customers looking for capabilities not found in Visual Studio Installer Projects, such as TFS and MSBuild integration, support for creating new web sites, ISO 19770-2 tagging support, etc.
VS2015: https://marketplace.visualstudio.com/items?itemName=VisualStudioProductTeam.MicrosoftVisualStudio2015InstallerProjects
VS2017: https://marketplace.visualstudio.com/items?itemName=VisualStudioProductTeam.MicrosoftVisualStudio2017InstallerProjects
With the Setup and Deployment project templates you can choose to package all assemblies in the solution together, or each one individually as a microservice, using Setup, Web, CAB, or Merge Module projects, and then select which assemblies are included in each package.
How to achieve the flexibility you are asking for really depends on the exact use case and on your definition of an acceptable level of flexibility.
Taking your example, with the three executables as separate microservices (Service A, B, C) and a complete service (Startup), in the context of Web API you could do the following:
Each project (Service A, B, C) can be designed as a separate OWIN self-hosted executable (as outlined in Use OWIN to Self-Host ASP.NET Web API 2) and expose one or more endpoints.
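For illustration, a minimal self-host along those lines might look like this (using the Microsoft.AspNet.WebApi.OwinSelfHost NuGet package; the class names and port are illustrative, not from your solution):

```csharp
using System;
using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

// Hypothetical entry point for "Acme Service A" when deployed on its own.
public class ServiceAStartup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
        app.UseWebApi(config);
    }
}

public class Program
{
    public static void Main()
    {
        // Each microservice listens on its own base address when self-hosted.
        using (WebApp.Start<ServiceAStartup>("http://localhost:9001/"))
        {
            Console.WriteLine("Acme Service A running on http://localhost:9001/");
            Console.ReadLine();
        }
    }
}
```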
The main project (Startup) could also be an OWIN self-host, or a regular IIS Web API application, that references the three projects (Service A, B, C) and loads their respective endpoints in its own Startup routine (plus, optionally, additional endpoints of its own).
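A sketch of that composing Startup, assuming each service project exposes a registration hook (the Register methods are an assumption of this sketch, not part of Web API itself):

```csharp
using System.Web.Http;
using Owin;

// Hypothetical aggregate host: Acme Startup references Service A, B and C
// and composes their routes into one HttpConfiguration.
public class AcmeStartup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes();

        // Each service project would expose a hook like this (assumed here).
        AcmeServiceA.Registration.Register(config);
        AcmeServiceB.Registration.Register(config);
        AcmeServiceC.Registration.Register(config);

        app.UseWebApi(config);
    }
}
```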
You can then use a separate configuration project in Visual Studio (or an external project in a completely different environment) and make use of deployment technologies like Puppet, Chef, or whatever to deploy according to your scenarios.
Your code would then be unaffected by whichever deployment you actually wish to perform, and the respective configuration would be managed separately.
If this does not answer your question or if I have misunderstood your question, could you please clarify it and give more details?
Since we are talking about the meaning of life, here are two cents on it from a deployment (install) specialist. :-) The answer seems long, but it contains specific pointers on where to look for each point.
First of all, let's state that deployment is NOT a no-brainer, though many developers would like to see it that way. As a deployment specialist, I quite often observe stakeholders in the software development process thinking exactly that; simply put, deployment tends to be forgotten until the day before shipment. :-)
Compare it with Coke, for a bold and simple example: the "developers" produce the liquid, but it is quite easy to realize that the job isn't done yet. :-)
Visual Studio itself has no real support for deployment strategies. For the several areas of deployment covered in the following list, there are of course many technologies, some from Microsoft, that help with them.
What I would do is build setup bundles for different customers or scenarios, which install subsets of services, such as client/server scenarios or others (see no. 3 in the following list).
Second, as you may have seen from other answers, deployment is not deployment: the word covers very different activities.
Partly this depends on whether one sees deployment as just producing binaries with MSBuild, as deploying to a test system, or as deploying to the customer, e.g. by updating the production web site, producing DVDs, or uploading executables to the update web site...
There are several different areas which are certainly related, but each numbered area below is large and complicated enough to have its own specialists:
1. Deployment seen as part of the architecture has to deal with source and binary structures and entities, e.g. project and binary structure (how many .exe and .dll files, what their dependencies are, variation planning).
=> As you mentioned, here you are in the area of (Visual Studio, etc.) solutions and projects, as well as namespaces; especially in the WCF area you have contracts, etc., and you have (plain old C#) interfaces. You have NuGet or other tools to resolve and manage dependencies.
.NET offers the concept of the assembly to deal with this; the architecture itself, e.g. whether to deploy interfaces and contracts in their own assemblies, how to handle client/server scenarios, and how the assemblies depend on each other, is up to you.
Concerning services, there is the interesting subtask of how to host them. They can be hosted on a web server, self-hosted in an .exe, hosted with IIS or OWIN, etc. Links for more information:
Selfhosting in WCF:
https://msdn.microsoft.com/en-us/library/ee939340.aspx
Selfhosting in a Windows service, here with SignalR:
https://code.msdn.microsoft.com/windowsapps/SignalR-self-hosted-in-6ff7e6c3
Hosting with OWIN:
https://en.wikipedia.org/wiki/Open_Web_Interface_for_.NET
2. Deployment as part of local Windows (or other operating system) integration: you have to think about which system directories certain files or data generally belong in. You have to think about shared DLLs, shared data, project data, temporary data, user-specific data, the registry, the file system, Windows logo requirements, best practices, service configuration, etc.
3. Deployment as the process of creating setups, i.e. your own installers, which, besides other things, accomplish the actions mentioned in 2, with additional tasks like a graphical installation front-end (setup GUI), license acknowledgement, a what's-new section, selection of optional components (just think of the Visual Studio setup), uninstall/repair/modify options, and so on.
4. Deployment as a devops process, e.g. as part of continuous integration, continuous delivery, and/or continuous deployment. There are two main points here. The first, technically, is to have a defined process which performs the things mentioned in 2 and 3 (or alternatively web deploy steps) automatically as part of the build process (a "post-build step").
This can include creating setups or hierarchies of setups, or working without setups at all. The second is to enable testers, developers, and managers (or even customers) to see, at least every morning or even more often, an already-installed instance of the latest nightly or daily build, perhaps in several deployment variants (client/server? basic/professional?) or on different systems.
Here you are half in the developer world, half in the admin world.
The main point here is often not creating complicated setups as in 3, but primarily defining your own "pack" and copy (and sign, etc.) processes, and automating them as part of the development (and test and delivery) process. Puppet and Chef were already mentioned.
5. Deployment as web or cloud deployment (this can also be the endpoint of a devops process): others have already said something about that, so I will omit the details here, but an important distinction is whether you are talking about deployment to the customer or to an intermediate test or staging system.
One thing that makes this point worth mentioning in addition to devops is that deploying to online servers, server farms, or a cloud comes with its very own challenges.
6. Deployment seen primarily as the administrative process of distributing shippable, bought, and/or in-house software to all the thousands of PCs in a company and its subsidiaries. There are of course special tools for this, covering update strategy, monitoring, license management, and more. Here you are in the admin world, not the developer world anymore. Microservices will be a new and serious challenge for admins, who are mostly used to installing and distributing "large" packages like MS Office or Oracle or whatever.
This topic is not as boring for developers as it may seem, primarily because the two "worlds" of developers and admins are merging, and developers have to care about the customer's view of running the software in the real world. Devops is only the beginning. Everybody knows virtual machines, but now we have software-defined networking, virtual apps, virtual server farms, the cloud, etc. You can define a deployment architecture through dependencies, without any programming, just by configuration. So deployment should be part of your application architecture, but mostly it isn't (not sufficiently, anyway). In fact, until now the admin view has hardly been integrated anywhere with the view of the software producers/developers. Concerning Microsoft, a lot of work has been done here by the Windows team, especially in the server product line, but AFAIK it was never really strategically coordinated with the developer division (and this is probably true of every software shop to date :-)
Currently, many people publishing about devops or the various "continuous" buzzwords are not very experienced with setups. Building setups can be seen as a specialized technology among the other necessary steps.
Given that you are interested in knowing more about 3 (setups):
If you don't want merely to copy executables, but want the functionality of full setups, which do more than just copy files, part of your setup strategy can be to have bundle setups (sometimes called suite setups or bootstrapper setups) with their own selection features. These can invoke the underlying small setups, e.g. one per microservice.
Visual Studio itself no longer has built-in support for the more sophisticated setup types like MSI, and it never had support for grouping setups into bundles, which can be one way of deploying a bunch (or variants of bunches) of services. VS does have some support for ClickOnce deployment, but that was made more for database ("smart") clients than for services, let alone microservices.
ClickOnce: https://msdn.microsoft.com/de-de/library/31kztyey.aspx
A replacement for the lack of "real" setup creation in Visual Studio is the WiX toolset, an open-source project started by Microsoft employees. Another is InstallShield Express (free, but a limited variant of the commercial editions).
With both you can create full MSI setups, which are perhaps the most sophisticated setup type in the Windows setup zoo.
a) Of course there are other setup types besides MSI (aka Windows Installer); they come from third-party vendors and are more or less proprietary, but simpler, e.g. Nullsoft NSIS and InnoSetup.
I will not give links for creating single MSI setups, because they are easy to find from the links below about creating bundles of MSI setups:
b) The tool in the WiX "world" for creating setups that select and install other setups (defined subsets of the underlying ones) is called "Burn":
Creating bundles of setups with Burn:
http://wixtoolset.org/documentation/manual/v3/bundle/
Special (paid) support for this is available, for example, from the founder of WiX, who created a company especially for it:
https://www.firegiant.com/wix/tutorial/net-and-net/bootstrapping/
Rob Mensching, the founder, can also be found here on SO, answering dedicated questions.
c) InstallShield Suite setups:
Another option is the already-mentioned InstallShield, but for this you will need their InstallShield Premier edition, which costs real money:
http://helpnet.installshield.com/installshield21helplib/helplibrary/SteCreatingSuites.htm
d) Setup Factory:
https://www.indigorose.com/setup-factory/
e) I am sure many people would advise you to take a look at Docker.
Virtual applications are not just setups; they also isolate themselves in the "installed" state from other apps, like a sandbox.
See for example https://docs.docker.com/docker-for-windows/
f) The list would not be complete without mentioning App-V, a virtual-application installation technology which shares some, but not all, features with Docker. These technologies are not really made for orchestrating multiple deliveries, though; they deliver just one app.
And Microsoft has defined a new setup type called AppX.
In particular, you have to distinguish whether you want to create "legacy" (full) desktop applications for Windows, for which MSI setups are the established technology, or store apps, the new type since Windows 8 (aka Universal Windows apps, aka Windows Store apps, aka modern apps, aka Metro apps).
AppX:
https://msdn.microsoft.com/en-us/library/windows/desktop/hh446767(v=vs.85).aspx
AppX is a simpler setup type than MSI.
Universal Windows apps (UWP):
https://learn.microsoft.com/en-us/windows/uwp/get-started/whats-a-uwp
For anything more detailed, we would need to know more about your requirements.
Following a normal microservices approach, we would like to place each microservice in its own Git repo and then have one repository for the Service Fabric project. When we update one of the microservices, the thought is that the Service Fabric project would redeploy just that service.
Are there any examples of splitting up a Service Fabric project like this? I've noticed that in all of their examples everything is in one solution/repository.
tl;dr: Figure out what works best for your development team(s) in terms of managing code and releases of individual services. Use diff packages to upgrade only the changed parts of your Service Fabric applications. The smallest sensible repo is one Service Fabric application contained in one Visual Studio solution.
Longer version:
It is entirely possible to split your Service Fabric application into multiple applications, the smallest being one Service Fabric application per microservice. Whether this is a good idea depends completely on the type of application you are trying to build. Are there dependencies between the services? How do you partition services, and could there be a scenario where you want to do that in a coordinated manner? How are you planning to monitor your services? If you want to do that in a coordinated manner, then again it might make sense to keep more services in the same application.
Splitting the code into repos smaller than your Visual Studio solution would likely only cause you trouble. You could technically work with Git submodules or subtrees to some effect, but the way Visual Studio handles project references inside solutions would likely land you in merge hell very soon.
When it comes to upgrading your Service Fabric application, there is actually a way to upgrade only the changed services, based on the version numbers in the service manifests. This is called a diff package, and it can be used to deploy an application to a cluster where that application has already been deployed at least once (i.e. it is an upgrade, not an install). This can greatly reduce the upgrade time of your deployment if you have upgraded only a minority of the services in the application.
The full documentation for this can be found here. There is also a SO answer that describes it.
I would say that your choice is, like so much in development, a trade-off between different gains.
Splitting the services into more fine-grained applications containing fewer services could make upgrades easier (though that effect can, to some extent, also be achieved with diff packages). The downside of this approach is that you have to manage dependencies as strict interfaces between your services. One approach is to publish your service/actor interfaces to a private NuGet feed, which in turn introduces some additional complexity into your development pipeline.
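For example, the contract package pushed to such a feed might contain nothing but the remoting interfaces; a minimal sketch (all names are illustrative):

```csharp
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Remoting;

// Only this interface assembly is published to the private NuGet feed;
// consuming applications never reference the service implementation.
public interface IInventoryService : IService
{
    Task<int> GetStockLevelAsync(string sku);
}
```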
Keeping everything in the same repo, the same Visual Studio solution, and the same Service Fabric application can work for smaller solutions, but will likely become hard to work with in the long run as the solution grows, in terms of merges, versioning, and releases.
With our projects we follow a pattern similar to this, but not that fine-grained. Each SF application is contained in its own repo, but we'll have multiple related microservices in an application. We separate our applications into specific pieces of functionality with respect to the end application (Data Tier, Middle Tier, Presentation, Analytics, etc.). When we upgrade, we'll upgrade specific applications at a time, not necessarily specific services; upgrading specific services is a huge pain ops-wise. We still have a shared interfaces project, and we use SF remoting to communicate between the different applications; we are able to do that because we manage containers and interfaces in their own repo, which we then distribute via a private NuGet server. This makes the workflow more difficult, but in the end it's nice, because it keeps us aware of interface compatibility between applications. We also have some core microservices that every application includes, which we distribute using SF NuGet. It's still young and has some sharp edges, but it's awesome.
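To make the remoting part concrete, here is a rough sketch of one application calling a service in another application via a contract shared over such a NuGet feed (all names and the fabric URI are illustrative assumptions):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// Shared contract, distributed as a NuGet package to both applications.
public interface IPricingService : IService
{
    Task<decimal> GetPriceAsync(string sku);
}

// Presentation-tier code calling into the middle-tier application.
public static class PricingClient
{
    public static Task<decimal> GetPriceAsync(string sku)
    {
        var proxy = ServiceProxy.Create<IPricingService>(
            new Uri("fabric:/MiddleTierApp/PricingService"));
        return proxy.GetPriceAsync(sku);
    }
}
```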
Reading your question, it sounds like your repository split is mainly for deployment concerns, so I will focus on that aspect.
We are using one Git repository per Service Fabric application (which contains multiple services); this helps simplify continuous integration and continuous deployment: if there is a change in the repo (code or config), the SF application needs to be built and deployed.
If you are using the Build and Release features of VSTS online, you can easily leverage the build tasks available for Service Fabric to support differential upgrades. Use the "Update Service Fabric App Versions" task (https://www.visualstudio.com/en-us/docs/build/steps/utility/service-fabric-versioning) with its "Update only if changed" option, together with the deterministic compiler flag (https://blogs.msdn.microsoft.com/dotnet/2016/04/02/whats-new-for-c-and-vb-in-visual-studio/) to ensure binaries are identical when the code is identical, and you end up with differential upgrades per SF application.
You shouldn't necessarily think of a Service Fabric service as being a microservice.
The Service Fabric taxonomy of code/services/applications etc. gives you great flexibility in composing to your needs (as already pointed out). Consider that you can have several code packages running in one service; trying to translate that into a microservice definition just makes things even harder to cope with.
As the SF application is your unit of deployment (whether it contains one or more updated services), you should strive to structure your repo/solution/SF application setup so that most changes are contained within one SF app (= one solution and one repo).
If you get into a situation where you constantly need to deploy multiple SF apps to get a change out, you will not be productive.
In an effort to introduce reusable code at my new employer, I've elected to create a class library that will be referenced by 200+ existing small applications. This library contains logging, DB connection logic, etc.
Is there a way to set up TFS Online's build service to automatically determine which projects reference this common library as a NuGet package? I'd like them to build after (or as part of) the CI build for the common library.
The projects that will depend on the NuGet package do exist in the same TFS team project, but not in the same branches; each application has its own set of branches.
Not really, and I'd say that what you want to do kind of defeats the purpose of NuGet.
You have 200 applications consuming this common library. The common library presumably works. Awesome. When you release a new production-stable version of the package, you should bump its version number and let everything that's using the old version continue to do so.
It should be the responsibility of the consumer of that library to choose whether to update it or not when a newer version is made available. The team responsible for each application should be able to make a conscious decision to upgrade the component.
Also, keep the single responsibility principle in mind. Having a "god assembly" that contains logging, database logic, and other totally unrelated stuff sounds like a really bad idea, especially if these things are going to continue to evolve over time. You'll bump into a situation where an application needs New Feature X in the database piece, but unfortunately someone made Unrelated Breaking Change Y in the logger logic a few weeks ago. Now you have to integrate Unrelated Breaking Change Y into your application even if you don't want or need it.
I'm creating a new application using Prism and ClickOnce, but while testing ClickOnce's hash checking for delta-only updates I noticed that I would need to make some architecture changes to take full advantage of ClickOnce updates.
To be clear, I am deploying to machines with poor internet connections and I really want to publish small, quick updates with minimal bandwidth. Ideally, only modules that have been modified would be sent over the wire.
First, I noticed that the client application project's output (the one that produces the .exe) always changed hash no matter what, and was always re-downloaded. This led me to move Shell.xaml, ShellViewModel.cs, and some ResourceDictionaries out into a new ShellModule. That leaves AggregateModuleCatalog, Bootstrapper, App.xaml, and App.cs; each file is fairly small, so this is fine.
Second, I noticed that ClickOnce's programmatic updating could go into a module, so I have that in an AutoUpdateModule.
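For reference, the programmatic update logic that would live in such a module is roughly this (using the standard System.Deployment.Application API; the class name is illustrative):

```csharp
using System.Deployment.Application;

// Sketch of a ClickOnce update check, the kind of logic an
// AutoUpdateModule would wrap.
public static class AutoUpdater
{
    public static bool TryUpdate()
    {
        if (!ApplicationDeployment.IsNetworkDeployed)
            return false; // e.g. running from the IDE, not a ClickOnce install

        var deployment = ApplicationDeployment.CurrentDeployment;
        var info = deployment.CheckForDetailedUpdate();
        if (!info.UpdateAvailable)
            return false;

        deployment.Update(); // downloads only the files whose hashes changed
        return true;         // caller should restart the application
    }
}
```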
Finally, my last concern is the Infrastructure project. Each module directly references Infrastructure, and if Infrastructure is modified all modules get new hash values. I believe this means that even if I add a line to an enum inside Infrastructure the whole app will be re-downloaded.
Is there an elegant solution to this issue? And if you've deployed Prism applications using ClickOnce what are some other architectural modifications that have helped or improved your deployment?
I'm unfamiliar with Prism; however, there is no way within ClickOnce to apply partial updates for standard applications.
I had a similar problem (I think) with a Windows app project I was working on about 5 years ago. Specifically, my users often had spotty data connections (connecting over sometimes-poor cellular links), and I needed my updates to be as small as possible.
Even though I couldn't prevent the full application from being re-downloaded after each update, I did find that numerous third-party DLLs never changed. So I created a separate installer that placed those modules in the Windows GAC, where they were installed only once, along with the prerequisites. This cut my application download from 25 MB to 2 MB, quite an improvement.
I'm not sure if this applies to your circumstance, but it might help.
I would like to ask what experience you have with developing and deploying one application that has a set of standard features, but can also have customer-specific features.
For example:
Customer 1 has the standard features but also wants a search function.
Customer 2 has the standard features only.
Customer 3 has the standard features and also wants an employee calendar.
How would you solve this?
Would you have one project from which you deploy the whole application, and then some kind of config file to determine which features are available in each specific installation?
Would you have one project for each customer? This is how I'm doing it now, but the problem is that bugs in the standard features have to be fixed in every project.
Any other suggestions are very welcome.
The application is developed in Delphi and C#.
My company solves that problem by giving all customers all features. This keeps development simpler and allows us to spend more time working on improving the product and not have to spend time dealing with the complexities of optional features.
We sometimes meet mild resistance from clients who want a cheaper version with less functionality but that's never been a sales problem.
On the other hand, if you sell clients cheaper, less functional versions, they are liable to try to get away with those cheaper versions. This can then lead to them not liking the software as much as they should, because they bought the cheap, crippled version. I strongly believe in getting the best possible product to the user.
This advice may not be appropriate to your personal situation, but you did say that any opinions would be welcome.
One version per customer is not a good idea, IMHO. It will stall your sales sooner or later.
Better to release all features to all customers, i.e. maintain just one piece of software, but keep some features locked, for instance by a password. You can issue a unique licence number at installation to identify the customer (put their name in the licence), then compute passwords from this licence number to unlock individual features on request, once paid for. Generating these passwords can easily be automated via a web site, at minimal cost to you.
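A minimal sketch of such licence-derived unlock codes, assuming an HMAC over the licence number with a vendor-held secret (the scheme and all names are illustrative, not a hardened licensing system):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class FeatureUnlock
{
    // The vendor computes this code (e.g. on the web site) and hands it
    // to the customer; the application recomputes and compares it.
    public static string ComputeUnlockCode(string licenceNumber, string feature, byte[] vendorSecret)
    {
        using (var hmac = new HMACSHA256(vendorSecret))
        {
            var hash = hmac.ComputeHash(
                Encoding.UTF8.GetBytes(licenceNumber + ":" + feature));
            return BitConverter.ToString(hash, 0, 8).Replace("-", "");
        }
    }

    public static bool IsUnlocked(string licenceNumber, string feature,
                                  string enteredCode, byte[] vendorSecret)
    {
        return string.Equals(
            ComputeUnlockCode(licenceNumber, feature, vendorSecret),
            enteredCode, StringComparison.OrdinalIgnoreCase);
    }
}
```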
Or you can leave all functions available for testing, but lock printing or saving, just to make the customer think about spending some money on that "nice added feature".
Sometimes, having all the features tends to create a "gasworks" application. You'll probably need a separate setup application to customize the product to your customer's needs; that architecture is worth thinking about.
Even with revision control, multiple versions are a nightmare to maintain: for instance, just back-porting all hotfixes to a previous version takes a lot of time. If you don't have to branch (e.g. for regulatory purposes or certified versions), don't "branch" your software.
No, definitely do not have one project per customer. You could instead have one solution per customer, where you aggregate all the projects a given setup needs.
Just to give you an alternative to a plugin architecture, which is the right way to go but also usually fairly complex:
Option 1.
Put the common functionality in a main project (Core).
Put additional features like the calendar in separate DLL projects (one per feature).
Create VS solutions in which you aggregate all the projects for a specific setup plus Core. So customer 1 gets Customer1Solution with Core and all the additional projects they need, and customer 2 gets their own solution with Core and their additional projects.
Option 2.
Ship one big setup to everyone and, based on its configuration/license, enable or disable the user's access to the additional functionality, as sketched below.
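A minimal sketch of option 2's gate, assuming a plain app-settings file (the key names and wiring are illustrative; a real implementation would check a signed licence rather than an editable setting):

```csharp
using System;
using System.Configuration;

public static class Features
{
    // Reads e.g. <add key="Feature.EmployeeCalendar" value="true" />
    // from the application's .config file.
    public static bool IsEnabled(string feature)
    {
        var value = ConfigurationManager.AppSettings["Feature." + feature];
        return string.Equals(value, "true", StringComparison.OrdinalIgnoreCase);
    }
}

// Usage where an optional feature is wired up:
//   if (Features.IsEnabled("EmployeeCalendar")) { /* register the module */ }
```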
Depending on your resources (time, experience, the people you work with, your clients), you can choose the option most appropriate for you.
The plugin-based approach may be the best one, but it is complex, and it will take time to become familiar with it if you have never done something similar before.
Option 1 is easy and fast, but as the number of clients and configurations grows, you will run into scaling problems.
Option 2 is a middle ground between the two, but keep an eye on the size of your setup.
Given that your solutions reference projects and not DLLs, if you fix a problem in Core in one solution, it will also affect all the other solutions.
You have several options:
put the "standard features" into separate module(s) which can be used/linked by the other versions
use a "plugin-architecture" to load the optional features dynamically
In addition to what others have said, there is another option: conditional defines.
With conditional compilation you can wrap feature-specific code in IFDEFs (IFDEF EmployeeCalendar, IFDEF SearchFunction...). Then for each client you copy only the project file and set the conditional defines according to the features you want to include.
If a client wants/pays for an additional feature, you just add it to the conditional defines in that client's project file.
This is similar to the modules approach (BPL/DLL), but it avoids the added cost of deploying and managing extra files. The drawback is that the feature set is fixed at compile time.
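In C# terms (the Delphi {$IFDEF} equivalent is analogous), a per-customer build could look like this, with the symbols set per customer in the project file's DefineConstants or on the command line, e.g. csc /define:EMPLOYEE_CALENDAR; all names are illustrative:

```csharp
using System;

public static class Program
{
    public static void Main()
    {
        Console.WriteLine("Standard features loaded.");
#if SEARCH_FUNCTION
        Console.WriteLine("Search function loaded.");   // e.g. customer 1
#endif
#if EMPLOYEE_CALENDAR
        Console.WriteLine("Employee calendar loaded."); // e.g. customer 3
#endif
    }
}
```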
With BPL/DLL you could dynamically load additional modules at run time, but if that is not important in your case, then Conditional defines might be a good choice.
Of course, if your features are not easily separable, you can end up with a lot of IFDEFs in the code; but then your real problem is clean separation of features, and that would be a problem with the modules approach too.