My team recently inherited an ASP.NET solution that runs on Azure App Services. One project in the solution seems to define C# code that leverages the Azure WebJobs SDK to run several functions. I am not a C# or ASP.NET developer by trade, but I'm involved in developing the build and release pipelines for the project.
I have two App Service environments that need to run the WebJobs project, but in the case of one of those environments, some of the functions should not run.
Each function seems to have its own .cs file, and it seems these functions can inherit configuration from an App.config file (which can be transformed at runtime using files like App.Staging.config and App.Prod.config). An example of a function in the project might look like:
using Microsoft.Azure.WebJobs;
using System;
using System.Configuration;
using System.IO;
namespace My.Project.WebJobs
{
    public class SomeTask
    {
        public void ExecuteTask([TimerTrigger(typeof(CustomScheduleDaily3AM))]TimerInfo timerInfo, TextWriter log)
        {
            var unitOfWork = new UnitOfWork();
            var SomeSetting = int.Parse(ConfigurationManager.AppSettings["SomeSetting"]);
            unitOfWork.Execute.Something();
        }
    }
}
With my limited understanding of my options, the first idea that occurred to me was to add an enable/disable switch to the method(s) (i.e. SomeTask or ExecuteTask in the example above) that reads its true/false value from a setting defined in App.config. Not being well-versed in C#, though, I'm not confident this is possible. Done this way, the function would still run, but it would take no action because the method(s) are disabled.
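For illustration, this is roughly what I had in mind; treat it as a rough sketch from a non-C# developer, and note that the "SomeTaskEnabled" setting name is one I made up:

public void ExecuteTask([TimerTrigger(typeof(CustomScheduleDaily3AM))]TimerInfo timerInfo, TextWriter log)
{
    // Hypothetical on/off switch read from App.config (transformed per environment).
    var isEnabled = bool.Parse(ConfigurationManager.AppSettings["SomeTaskEnabled"] ?? "true");
    if (!isEnabled)
    {
        log.WriteLine("SomeTask is disabled in this environment; skipping.");
        return;
    }

    var unitOfWork = new UnitOfWork();
    var SomeSetting = int.Parse(ConfigurationManager.AppSettings["SomeSetting"]);
    unitOfWork.Execute.Something();
}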
I feel as if there may be a solution that relies more on Azure configuration as opposed to function-level code changes. Any help is appreciated.
After researching, I found three ways to meet your need:
Create a separate App Service instance per environment, and publish to each instance only the WebJobs it should run.
Use staging slots.
Use the 'ASPNETCORE_ENVIRONMENT' app setting. I'll show how below:
Configure ASPNETCORE_ENVIRONMENT in the portal. (On your local machine this setting lives in the launchSettings.json file.)
Modify your code like this (this only shows the idea; for more detail see the doc linked below):
if (_env.IsDevelopment())
{
    Console.WriteLine(_env.EnvironmentName); // modify with your function
}
else if (_env.IsStaging())
{
    Console.WriteLine(_env.EnvironmentName); // modify with your function
}
else
{
    Console.WriteLine("Not dev or staging"); // modify with your function
}
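For completeness, _env here is an injected hosting-environment service. A minimal sketch of how it could be supplied, assuming ASP.NET Core-style constructor injection (the class name below is just for illustration):

using Microsoft.AspNetCore.Hosting;

public class EnvironmentAwareTask
{
    private readonly IHostingEnvironment _env;

    // _env.EnvironmentName is populated from the ASPNETCORE_ENVIRONMENT setting.
    public EnvironmentAwareTask(IHostingEnvironment env)
    {
        _env = env;
    }
}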
Here is a reference on using multiple environments in ASP.NET Core.
Related
I am looking for an example of a simple WebJob:
the task would be to fetch the response from a web link and save it to a blob on a regular time interval.
First of all, the MS documentation is confusing me as far as timer triggers are concerned:
https://learn.microsoft.com/en-us/azure/app-service/webjobs-create#ncrontab-expressions
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-timer?tabs=csharp#example
And also, how exactly should I go about building the WebJob: should I use the Azure WebJob template (.NET 4.x), or a .NET Core console app?
https://learn.microsoft.com/en-us/azure/app-service/webjobs-sdk-how-to
https://github.com/Azure/azure-webjobs-sdk-samples/tree/master/BasicSamples
https://learn.microsoft.com/en-us/azure/app-service/webjobs-sdk-get-started
https://learn.microsoft.com/en-us/azure/app-service/webjobs-create
All these resources, and no simple example of a time-scheduled task that gets a web response, plus the confusion about how to build the WebJob in VS. I want to build a C# app in VS and deploy it to Azure as a WebJob via Azure DevOps.
I've wasted 3 days on this since I'm not a .NET developer...
WebJobs have changed and grown over the years, including contributions from Azure Functions, which is itself built on top of the WebJobs SDK. I can see how this gets confusing, but the short answer is that all of the different methods are still valid; some are just newer than others. Of the two timer-trigger styles you linked, the second is the more current one.
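For context, the first link covers the NCRONTAB "schedule" placed in a settings.job file deployed next to a triggered WebJob, while the second covers the TimerTrigger attribute from the WebJobs SDK extensions (the style used in the code further down). A minimal settings.job that runs every 15 minutes would look something like this:

{
    "schedule": "0 */15 * * * *"
}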
I generally recommend Functions instead of WebJobs for something like this, since it will save you some boilerplate code, but it is entirely up to you. As I mentioned, the foundations are very similar. You can deploy Functions apps to any App Service plan, including the Consumption plan; that plan is specific to Functions and is pay-per-use instead of the monthly fee you would need for WebJobs.
As far as .NET Framework vs. .NET Core goes, what you can use will depend on the runtime you chose when you set up your App Service. If you have a choice, I would recommend Core, since that is the only version moving forward. If you elect to use Functions, you will definitely want to use Core.
As far as the console app question goes, all WebJobs are essentially console apps. From a code perspective, they are console apps that use the WebJobs SDK; you could run them outside of Azure if you wanted to. Functions apps are different: the Functions host is what actually runs behind the scenes, and you are creating a class library that the host consumes.
Visual Studio vs. Visual Studio Code is very much a personal preference. I prefer VS for Webjobs and work with both VS and VS Code for Functions apps depending on which language I am working in.
The most basic version of a Webjob in .NET Core that pulls data from a webpage on a schedule and outputs it to blob storage would look something like this. A Function app would use exactly the same GetWebsiteData() method plus a [FunctionName("GetWebsiteData")] at the beginning, but you wouldn't need the Main method as that part is handled by the host process.
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public class Program
{
    static async Task Main(string[] args)
    {
        var builder = new HostBuilder();
        builder.ConfigureWebJobs(b =>
        {
            b.AddAzureStorageCoreServices();
            b.AddAzureStorage();
            b.AddTimers();
        });
        builder.ConfigureAppConfiguration((context, configurationBuilder) =>
        {
            configurationBuilder
                .AddJsonFile($"appsettings.json", optional: true);
        });
        var host = builder.Build();
        using (host)
        {
            await host.RunAsync();
        }
    }

    public static async Task GetWebsiteData(
        [TimerTrigger("0 */1 * * * *")] TimerInfo timerInfo,
        [Blob("data/websiteData", FileAccess.Write)] Stream outputBlob,
        ILogger logger)
    {
        using (var client = new HttpClient())
        {
            var url = "https://microsoft.com";
            var result = await client.GetAsync(url);
            // You may need to do some additional work here to get the output format you want.
            var stream = await result.Content.ReadAsStreamAsync();
            await stream.CopyToAsync(outputBlob);
        }
    }
}
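To illustrate the Functions comparison above, here is a sketch of the same method as an in-process Azure Function. It uses the same usings as the WebJobs example (the attributes come from the same WebJobs SDK packages), and the Main method above is replaced by the Functions host:

public static class GetWebsiteDataFunction
{
    [FunctionName("GetWebsiteData")]
    public static async Task Run(
        [TimerTrigger("0 */1 * * * *")] TimerInfo timerInfo,
        [Blob("data/websiteData", FileAccess.Write)] Stream outputBlob,
        ILogger logger)
    {
        using (var client = new HttpClient())
        {
            var result = await client.GetAsync("https://microsoft.com");
            // As above, you may need extra work here to shape the output format.
            var stream = await result.Content.ReadAsStreamAsync();
            await stream.CopyToAsync(outputBlob);
        }
    }
}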
We are trying to hack WebJobs to connect to storage as an MSI user to comply with some requirements. We are using this technique:
https://github.com/Azure/azure-webjobs-sdk/issues/2109
The problem is this line:
webJobConfiguration.Services.AddSingleton(new DistributedLockManagerContainerProvider
{
    InternalContainer = container
});
Apparently the Azure WebJobs API hasn't been updated to the Microsoft.Azure namespaces, and this still uses a container of type Microsoft.WindowsAzure.Storage.Blob.CloudBlobContainer. That wouldn't be a problem except that our entire API has already been converted to the Microsoft.Azure namespace classes.
Is there an easy way to switch between the old and new namespaces?
I've hunted for the answer to this one, on SO and beyond, but I've not seen any answers thus far.
We are looking at adding some reporting to an existing Windows Service / WPF EXE. Ideally we'd self-host a little vNext application that would expose reporting endpoints our app can use. This was possible with OWIN and ASP.NET 4.
Is this even possible with vNext?
I've tried a few samples, and the K runtime clearly seems to be a different runtime from the CLR. The build process is all rather different too... so I guess at the very least it would have to be a completely separate process... or am I barking up the wrong tree?
In particular it seems we need to invoke the K runtime (k web, or else a k pack'ed .cmd), which seems counter-intuitive as I'm already inside a running process (the main exe/service).
EDIT: I'm wondering if the answer is Nowin, referenced and providing the OWIN container. But I'm struggling to see if that's the best approach...
Here is a possible solution: How to Run DNX Applications in a Windows Service and How to Host ASP.NET in a Windows Service, thanks to Erez Testiler.
Basically the idea is to add the following references:
"Microsoft.AspNet.Hosting": "1.0.0-beta7" – Bootstraps the web server
"Microsoft.AspNet.Server.Kestrel": "1.0.0-beta7" – Web server implementation
"Microsoft.AspNet.StaticFiles": "1.0.0-beta7" – Hosts static files
"Microsoft.AspNet.Mvc": "6.0.0-beta7" – Includes all the MVC packages
And then programmatically configure and start the Server and ASP.NET:
using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Hosting;
using Microsoft.AspNet.Hosting.Internal;
using Microsoft.Framework.Configuration;
using Microsoft.Framework.Configuration.Memory;
using Microsoft.Framework.DependencyInjection;
using System;
using System.Diagnostics;
using System.Linq;
using System.ServiceProcess;

....

private readonly IServiceProvider _serviceProvider;
private IHostingEngine _hostingEngine;
private IDisposable _shutdownServerDisposable;

public Program(IServiceProvider serviceProvider)
{
    _serviceProvider = serviceProvider;
}

protected override void OnStart(string[] args)
{
    var configSource = new MemoryConfigurationSource();
    configSource.Add("server.urls", "http://localhost:5000");
    var config = new ConfigurationBuilder(configSource).Build();

    var builder = new WebHostBuilder(_serviceProvider, config);
    builder.UseServer("Microsoft.AspNet.Server.Kestrel");
    builder.UseServices(services => services.AddMvc());
    builder.UseStartup(appBuilder =>
    {
        appBuilder.UseDefaultFiles();
        appBuilder.UseStaticFiles();
        appBuilder.UseMvc();
    });

    _hostingEngine = builder.Build();
    _shutdownServerDisposable = _hostingEngine.Start();
}
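One detail the snippet leaves out is stopping the server. A minimal OnStop, assuming the same fields as above, might look like this:

protected override void OnStop()
{
    // Dispose the handle returned by Start() so Kestrel shuts down with the service.
    if (_shutdownServerDisposable != null)
    {
        _shutdownServerDisposable.Dispose();
    }
}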
It seems like quite a good solution to me.
Ok I spent some time on jabbr.net and had some help from the awesome #dfowl and a helpful if rather curt younger dev (those were the days).
#dfowl: that scenario Is pretty much dead
My take: as our Windows Service/WPF app runs under the full CLR and vNext runs under the new K runtime/CoreCLR, they are different runtimes.
There is a way to do it, based on an older version of the K runtime, and it's, er, hairy. File it under possible, but never something you'd put in production:
Alxandr's CLR Bootstrap for K runtime
I'd like to compile my source (or a part of it) on one of my webservers (much as some websites offer nightly builds of their program). I want my program to be customizable by a third party: they get their own standalone application with, say, their logo and some custom strings in it. My preferred solution would be a DLL file that is loaded into my application, so I can still update the main application while retaining the third party's customization.
So, the third party goes to my website, enters some fields, and a DLL file is generated (or do you have a better way to do this?). The DLL is then loaded by the application, which grabs the logo resource and some strings from it to show in the application.
How can this be done? I'd rather use Linux to build it, but if Windows is easier then that's not a problem either.
You can use the CSharpCodeProvider API for that; here is an example:
using System;
using System.CodeDom.Compiler;
using System.Collections.Generic;
using System.Linq;
using Microsoft.CSharp;

var csc = new CSharpCodeProvider(new Dictionary<string, string>() { { "CompilerVersion", "v3.5" } });
var parameters = new CompilerParameters(new[] { "mscorlib.dll", "System.Core.dll" }, "foo.exe", true);
parameters.GenerateExecutable = true;

// Note the typo ("Rnge") in the compiled source: the final line prints the resulting compiler error(s).
CompilerResults results = csc.CompileAssemblyFromSource(parameters,
@"using System.Linq;
class Program {
    public static void Main(string[] args) {
        var q = from i in Enumerable.Rnge(1,100)
                where i % 2 == 0
                select i;
    }
}");
results.Errors.Cast<CompilerError>().ToList().ForEach(error => Console.WriteLine(error.ErrorText));
If you want to use Linux, take a look at the Mono MCS C# compiler.
It's easy on either platform. Just splice the data into a template C# file (String.Replace will work fine), and then shell out to the compiler.
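A rough sketch of that approach; the template file name, placeholder tokens, output path and compiler arguments here are made up for illustration (csc on Windows, mcs if you build with Mono on Linux):

using System;
using System.Diagnostics;
using System.IO;

class CustomBuildRunner
{
    static void Main()
    {
        // Fill the customer-specific values into a template source file.
        var template = File.ReadAllText("CustomizationTemplate.cs");
        var source = template
            .Replace("{{COMPANY_NAME}}", "Contoso Ltd")
            .Replace("{{LOGO_PATH}}", "contoso-logo.png");
        File.WriteAllText("Customization.cs", source);

        // Shell out to the compiler and capture its output.
        var psi = new ProcessStartInfo
        {
            FileName = "csc", // or "mcs" when building with Mono on Linux
            Arguments = "/target:library /out:Customization.dll Customization.cs",
            UseShellExecute = false,
            RedirectStandardOutput = true
        };
        using (var compiler = Process.Start(psi))
        {
            Console.WriteLine(compiler.StandardOutput.ReadToEnd());
            compiler.WaitForExit();
        }
    }
}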
If you trust your third-party vendor, try TeamCity by JetBrains: committing changes to the SVN repository folder will trigger a recompilation of the project and give you a precompiled build.
How about this
Build a web interface to capture the third party's customisations into, say, a database.
Set up a Continuous Integration server to manage automated builds (for example Jenkins).
Then implement custom build steps in your CI solution to grab the customisations, drop them into copies of the source code, and have your CI do a build for each client - publishing the build artefacts somewhere where the client can see them (say, somewhere within the web interface)
You could set up custom triggers in your CI server to watch the database for new customisations. Or to be triggered by some operation in the web UI.
I have a project that is deployed to production as a Windows service. However, for local development purposes it would be useful to run it as a console application. At the moment I have a class called ReportingHost that provides my core functionality, and a class called ReportingServiceHost that inherits from ServiceBase and allows me to run the application as a service. There is also a Program class with a Main method that calls ServiceBase.Run on my ReportingServiceHost.
I think I need to write a ReportingConsoleHost class that allows me to run the functionality in a console. Then I need to modify my Main to react to a command line switch and choose one or the other. These are the two bits I am having trouble with.
I have had a look at this and attempted to use that code, but my app exits immediately: it doesn't show a console window and it doesn't wait for Enter before closing.
Part of the problem is that I don't have a deep understanding of how these things work. What I'm hoping for is a definitive pattern for splitting out my functionality, the two different ways of running it, and a Main method that chooses between them based on a command-line argument.
I suspect your test project was configured as a Windows exe, not a console exe. With a Windows exe, Console.ReadLine will return immediately.
To have a console exe that works both as a service and at the command line, start it as a service project (in Visual Studio) and add a check on Environment.UserInteractive, i.e.:
static void Main() {
    if (Environment.UserInteractive) {
        // code that starts the listener and waits on ReadLine
    } else {
        // run the service code that the VS template injected
    }
}
You can of course also use a command-line switch. I have an example on microsoft.public.dotnet.languages.csharp that acts as:
an installer / uninstaller
a service
a console-mode app
depending on the switches
I have done this before by implementing a normal Windows Service (by deriving from ServiceBase), but putting a check in the main method to check for a command line argument.
If the args contain /console, start the console version; otherwise start the service.
Something like this:
internal class MyService : ServiceBase
{
    internal static void Main(string[] args)
    {
        if (args.Length == 0)
        {
            // run as a service....
            ServiceBase[] servicesToRun = new ServiceBase[] { new MyService() };
            Run(servicesToRun);
        }
        else
        {
            // run as a console application....
        }
    }
}
My advice? Put all your service logic in a separate assembly (a class library, i.e. a DLL). Then create one project as a service, which references your class library and wraps it up as a service. Create a second console project which also references your class library but makes it available as a console application.
You would end up with three different projects in your solution but it does allow you to keep things separate. Actually, this would make it possible to extend your service in several other shapes too. You could, for example, create a 4th project as a web service and thus call your service from a web browser on a client system. Because the software logic is separated from the usage logic, you gain lots of control over it.
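A rough sketch of that split, reusing the class names from the question (the method names and project names here are illustrative):

// Reporting.Core (class library): all the real logic lives here.
public class ReportingHost
{
    public void Start() { /* start timers, listeners, etc. */ }
    public void Stop()  { /* clean up */ }
}

// Reporting.Console (console project): references Reporting.Core.
public static class ConsoleProgram
{
    public static void Main()
    {
        var host = new ReportingHost();
        host.Start();
        System.Console.WriteLine("Running; press Enter to stop.");
        System.Console.ReadLine();
        host.Stop();
    }
}

// Reporting.Service (Windows Service project): also references Reporting.Core.
public class ReportingServiceHost : System.ServiceProcess.ServiceBase
{
    private readonly ReportingHost _host = new ReportingHost();
    protected override void OnStart(string[] args) { _host.Start(); }
    protected override void OnStop() { _host.Stop(); }
}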
Be aware that a service will possibly run with more limitations than a console application. In general, services don't have network access by default, don't have a monitor assigned to them on which to display error messages, and typically run under a limited user account or the system account. Your service might work as a console application yet fail as a service because of this.
There are already two good answers above - but I thought I'd post a link to Brian Noyes' Debuggable Self-Host Windows Service Project blog post - it talks about WCF but should apply to any 'Windows Service'.
The best thing is the sample code - if you can't figure out where the above examples 'fit', grab the complete project and see how it works. Thanks Brian!