I tracked down a memory leak in an IDS/IPS traffic monitor console app I've been working on. I thought it was Entity Framework; it turns out it's the firewall code. The following code worked perfectly fine in .NET 4.6.2 and works mostly fine in .NET Core. However, it has a memory leak (specifically the second firewallRule line):
INetFwPolicy2 firewallPolicy = (INetFwPolicy2)Activator.CreateInstance(Type.GetTypeFromProgID("HNetCfg.FwPolicy2"));
INetFwRule firewallRule = firewallPolicy.Rules.OfType<INetFwRule>().Where(x => x.Name == fwRuleName).FirstOrDefault();
I have to restart the app daily because it will burn through as much memory as I let it. I've seen it as high as 12 GB when it should be running around 35 MB, and it only takes a day or so to get that high. So basically each call to that firewallRule line is eating roughly 5-10 MB. It seems crazy that a single (or null) record would use up that much memory.
I believe it's that firewallRule line because I've literally commented out every other line in the method being called (and in the app, for that matter), and when I finally comment out that line, the leak stops. Please let me know if my logic is flawed.
If you want to test it, you need to add a reference to NetFwTypeLib for the code to compile.
Does anyone know why this is happening and how to remedy it?
Update:
Here is the original code for those that believe I haven't commented it all out:
public static void FirewallSetup(string ip, string countryCode, bool isIpsString)
{
// FirewallControl fwCtrl = new FirewallControl();
// fwCtrl.Block(ip, countryCode, isIpsString);
//}
//private void Block(string ip, string countryCode, bool isIpsString)
//{
var remoteAddresses = "*";
string fwRuleName;
bool createNewFwRule = false;
//var fwRuleSuffix = SQLControl.GetFirewallRuleSuffix(countryCode).ToString();
//if (string.IsNullOrEmpty(fwRuleSuffix) || fwRuleSuffix == "0")
//{
fwRuleName = fwRuleNamePrefix + countryCode;
//}
//else
//{
// fwRuleName = fwRuleNamePrefix + countryCode + fwRuleSuffix;
//}
INetFwPolicy2 firewallPolicy = (INetFwPolicy2)Activator.CreateInstance(Type.GetTypeFromProgID("HNetCfg.FwPolicy2"));
INetFwRule firewallRule = firewallPolicy.Rules.OfType<INetFwRule>().Where(x => x.Name == fwRuleName).FirstOrDefault();
//if (firewallRule == null)
//{
// createNewFwRule = true;
//}
//else
//{
// //We need the following for creating the remoteAddresses string below
// //but, also need to count as Windows has a 5000 ip limit per rule
// remoteAddresses = firewallRule.RemoteAddresses;
// string[] aRemoteAddresses = remoteAddresses.Split(",");
// int remoteAddressesCount = aRemoteAddresses.Length;
// //Log.Debug(">>>Firewall remote addresses scope=" + remoteAddresses);
// Log.Warning(">>>" + fwRuleName + " Firewall remote addresses count:" + remoteAddressesCount);
// //5000 would be 0 to 4999 I think?
// if (remoteAddressesCount >= 4999)
// {
// //If remote ip scope is 5000, create a new fw rule
// var newFwRuleNameSuffix = SQLControl.GetNewFirewallRuleSuffix(countryCode).ToString();
// fwRuleName = fwRuleNamePrefix + countryCode + newFwRuleNameSuffix;
// createNewFwRule = true;
// }
//}
//If necessary, we create a new rule
//TODO: Create another method for this?
//if (createNewFwRule)
//{
// firewallRule = (INetFwRule)Activator.CreateInstance(Type.GetTypeFromProgID("HNetCfg.FWRule"));
// firewallRule.Name = fwRuleName;
// firewallPolicy.Rules.Add(firewallRule);
// firewallRule.Description = "Block inbound traffic from " + countryCode;
// firewallRule.Profiles = (int)NET_FW_PROFILE_TYPE2_.NET_FW_PROFILE2_ALL;
// firewallRule.Protocol = (int)NET_FW_IP_PROTOCOL_.NET_FW_IP_PROTOCOL_TCP;
// firewallRule.Direction = NET_FW_RULE_DIRECTION_.NET_FW_RULE_DIR_IN;
// firewallRule.Action = NET_FW_ACTION_.NET_FW_ACTION_BLOCK;
// firewallRule.Enabled = true;
// //firewallRule.RemoteAddresses = ip;
// //firewallPolicy.Rules.Add(firewallRule); //throws error and not needed anyway
// //firewallRule.LocalPorts = "4000";
// //firewallRule.Grouping = "#firewallapi.dll,-23255";
// //firewallRule.Profiles = firewallPolicy.CurrentProfileTypes;
//}
//if (isIpsString || remoteAddresses == "*")
//{
// firewallRule.RemoteAddresses = ip;
//}
//else
//{
// firewallRule.RemoteAddresses = remoteAddresses + "," + ip;
//}
}
In production, it simply executes this method "as is" and just eats memory. Again, it is specific to .NET Core, as I ran this same code in .NET 4.6.1 (uncommented) for more than a year without issue.
Now that I think about it and have commented it out, I can also put it through some loop and troubleshoot it here internally. It's a race...
The server it's currently running on is Windows Server 2016.
Update 2: It's reproducible on Windows 10 also. Here is the entire code to reproduce the error:
using System;
using System.Linq;
using NetFwTypeLib;
namespace FirewallLeakTesterCore
{
class Program
{
static void Main(string[] args)
{
int count = 0;
while (count < 1000)
{
Console.WriteLine(count += 1);
INetFwPolicy2 firewallPolicy = (INetFwPolicy2)Activator.CreateInstance(Type.GetTypeFromProgID("HNetCfg.FwPolicy2"));
INetFwRule firewallRule = firewallPolicy.Rules.OfType<INetFwRule>().Where(x => x.Name == "firewallRuleName").FirstOrDefault();
}
Console.ReadKey();
}
}
}
And here is the result in the Visual Studio Diagnostic Tools (screenshot not included):
Seems like a Microsoft bug? I guess I'll try to tackle one of these memory profilers, but even if I discover the issue, what can I do? This is a basic call to a basic Windows library, which I think is why some people in the comments are missing the point.
I'm not asking anyone to memory-profile my app or track down the issue (and never was; I've already done that). I'm asking for alternate solutions. Is there possibly a different structure, a LINQ issue, or another statement entirely I can use?
If you want to leave the question closed, then so be it, but I have a hard time believing no one else is going to run into this.
I see a lot of comments about your memory profiling and not a lot about addressing your problem... Here is an article from JetBrains you could use to test for memory leaks: https://blog.jetbrains.com/dotnet/2018/10/04/unit-testing-memory-leaks-using-dotmemory-unit/
As to your problem, I suggest you have a look at Marshal.ReleaseComObject (in System.Runtime.InteropServices).
I would release any COM object obtained like that with something like this:
if (_netFwPolicy is not null && OperatingSystem.IsWindows())
{
Marshal.ReleaseComObject(_netFwPolicy);
}
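For instance, the lookup from the question could be rewritten roughly like this; this is only a sketch (untested), fwRuleName is the variable from the question, and System.Runtime.InteropServices is assumed to be imported:
INetFwPolicy2 firewallPolicy = (INetFwPolicy2)Activator.CreateInstance(Type.GetTypeFromProgID("HNetCfg.FwPolicy2"));
INetFwRules rules = firewallPolicy.Rules;
INetFwRule match = null;
try
{
    foreach (INetFwRule rule in rules)
    {
        if (match == null && rule.Name == fwRuleName)
        {
            match = rule; // keep only the rule we are looking for
        }
        else
        {
            Marshal.ReleaseComObject(rule); // release every wrapper we do not keep
        }
    }
    // ... use match here (it may be null if no rule has that name) ...
}
finally
{
    if (match != null) Marshal.ReleaseComObject(match);
    Marshal.ReleaseComObject(rules);
    Marshal.ReleaseComObject(firewallPolicy);
}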
But what you really need to consider is wrapping the instances in a class that implements IDisposable; that way each item you pull out of a foreach can be freed as well. Looping over the items in your policy becomes an issue when it's used the way you are using it...
I know there are a lot of samples on the internet doing what you are doing, which doesn't by definition make it a good idea; perhaps consider playing with this wrapper.
public interface IFireWallRule :INetFwRule, IDisposable
{
}
class FireWallRule: IFireWallRule
{
private INetFwRule? _data;
private bool _disposedValue;
public FireWallRule(INetFwRule data)
{
_data=data;
}
internal INetFwRule Root => _data?? throw new ObjectDisposedException("This firewall rule is already disposed and can't be accessed");
public string Name { get => Root.Name; set => Root.Name=value; }
public string Description { get => Root.Description; set => Root.Description=value; }
public string ApplicationName { get => Root.ApplicationName; set => Root.ApplicationName=value; }
public string serviceName { get => Root.serviceName; set => Root.serviceName=value; }
public int Protocol { get => Root.Protocol; set => Root.Protocol=value; }
public string LocalPorts { get => Root.LocalPorts; set => Root.LocalPorts=value; }
public string RemotePorts { get => Root.RemotePorts; set => Root.RemotePorts=value; }
public string LocalAddresses { get => Root.LocalAddresses; set => Root.LocalAddresses=value; }
public string RemoteAddresses { get => Root.RemoteAddresses; set => Root.RemoteAddresses=value; }
public string IcmpTypesAndCodes { get => Root.IcmpTypesAndCodes; set => Root.IcmpTypesAndCodes=value; }
public NET_FW_RULE_DIRECTION_ Direction { get => Root.Direction; set => Root.Direction=value; }
public dynamic Interfaces { get => Root.Interfaces; set => Root.Interfaces=value; }
public string InterfaceTypes { get => Root.InterfaceTypes; set => Root.InterfaceTypes=value; }
public bool Enabled { get => Root.Enabled; set => Root.Enabled=value; }
public string Grouping { get => Root.Grouping; set => Root.Grouping=value; }
public int Profiles { get => Root.Profiles; set => Root.Profiles=value; }
public bool EdgeTraversal { get => Root.EdgeTraversal; set => Root.EdgeTraversal=value; }
public NET_FW_ACTION_ Action { get => Root.Action; set => Root.Action=value; }
protected virtual void Dispose(bool _)
{
if (!_disposedValue)
{
if (_data is not null && OperatingSystem.IsWindows())
{
Marshal.ReleaseComObject(_data);
_data=null;
}
_disposedValue=true;
}
}
~FireWallRule()
{
Dispose(false);
}
public void Dispose()
{
// Do not change this code. Put cleanup code in 'Dispose(bool disposing)' method
Dispose(true);
GC.SuppressFinalize(this);
}
}
To make life "easy", I'd consider some extension methods to make working with the wrapper class easier:
internal static class FireWallExtensionMethods
{
public static void Add(this INetFwPolicy2 policy, FireWallRule rule)
{
policy.Rules.Add(rule.Root);
}
public static void Add(this INetFwRules rules, FireWallRule rule)
{
rules.Add(rule.Root);
}
public static IEnumerable<FireWallRule> GetRules(this INetFwPolicy2 policy)
{
foreach (INetFwRule rule in policy.Rules)
{
yield return new FireWallRule(rule);
}
}
}
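A rough usage sketch (untested) of the wrapper and extensions above, using the hypothetical rule name from the repro code:
var firewallPolicy = (INetFwPolicy2)Activator.CreateInstance(Type.GetTypeFromProgID("HNetCfg.FwPolicy2"));
try
{
    foreach (var rule in firewallPolicy.GetRules())
    {
        using (rule) // each wrapper releases its underlying COM rule when disposed
        {
            if (rule.Name == "firewallRuleName") // hypothetical rule name
            {
                // work with the matching rule here
            }
        }
    }
}
finally
{
    if (OperatingSystem.IsWindows())
        Marshal.ReleaseComObject(firewallPolicy);
}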
Referencing this answer regarding regenerating a new SessionID, I created this code in my Global.asax.cs:
protected void Application_Start(object sender, EventArgs e)
{
Bootstrapper.Initialized += new EventHandler<ExecutedEventArgs>(Bootstrapper_Initialized);
}
void Bootstrapper_Initialized(object sender, Telerik.Sitefinity.Data.ExecutedEventArgs e)
{
if (e.CommandName == "Bootstrapped")
{
EventHub.Subscribe<ILoginCompletedEvent>(LoginCompletedEventVerification);
}
}
private void LoginCompletedEventVerification(ILoginCompletedEvent evt)
{
if (evt.LoginResult == UserLoggingReason.Success)
{
var manager = new SessionIDManager();
var oldId = manager.GetSessionID(Context);
var newId = manager.CreateSessionID(Context);
bool isAdd = false, isRedir = false;
manager.SaveSessionID(Context, newId, out isRedir, out isAdd);
var ctx = HttpContext.Current.ApplicationInstance;
var mods = ctx.Modules;
var ssm = (SessionStateModule)mods.Get("Session");
var fields = ssm.GetType().GetFields(BindingFlags.NonPublic | BindingFlags.Instance);
SessionStateStoreData rqItem = null;
SessionStateStoreProviderBase store = null;
FieldInfo rqIdField = null, rqLockIdField = null, rqStateNotFoundField = null;
foreach (var field in fields)
{
if (field.Name.Equals("_store")) store = (SessionStateStoreProviderBase)field.GetValue(ssm);
if (field.Name.Equals("_rqId")) rqIdField = field;
if (field.Name.Equals("_rqLockId")) rqLockIdField = field;
if (field.Name.Equals("_rqSessionStateNotFound")) rqStateNotFoundField = field;
if ((field.Name.Equals("_rqItem")))
{
rqItem = (SessionStateStoreData)field.GetValue(ssm);
}
}
var lockId = rqLockIdField.GetValue(ssm);
if ((lockId != null) && (oldId != null))
{
store.ReleaseItemExclusive(Context, oldId, lockId);
store.RemoveItem(Context, oldId, lockId, rqItem);
}
rqStateNotFoundField.SetValue(ssm, true);
rqIdField.SetValue(ssm, newId);
}
}
Please keep in mind that I am developing in a Sitefinity web application.
Every time my application hits LoginCompletedEventVerification during a successful login, Context comes up as null. Now, I initially wanted to add this snippet to the Sitefinity LoginWidget, but making that happen is a whole other story.
I did not include it in the code sample, but I do have Session_Start firing to create my application's "shopping cart." I am just trying to create a new SessionID for the cart after authentication.
Is there a reason I cannot get a value for Context during this event?
Thanks in advance. I appreciate any suggestions or criticism!
EDIT: Sitefinity knowledge base article where I got my Bootstrapper_Initialized code
I did not include it in the code sample, but I do have Session_Start firing to create my application's "shopping cart." I am just trying to create a new SessionID for the cart after authentication.
Nooooo. Forget about accessing HttpContext in the Application_Start event.
Alternatively you could do that in Application_BeginRequest:
private static object syncRoot = new object();
private static bool initialized = false;
public void Application_BeginRequest(object sender, EventArgs e)
{
if (!initialized)
{
lock (syncRoot)
{
if (!initialized)
{
// Do your stuff here with HttpContext
initialized = true;
}
}
}
}
Another thing you should be aware of is that HttpContext will not be available in any background threads that you might have spawned and in which the HTTP request has already finished executing. So you should be extremely careful where you are trying to access this HttpContext.
The LoginCompletedEvent event is synchronous; it is not fired in a background thread, it is rather part of the authentication request. You can access the current context either by calling HttpContext.Current directly or, since you are in the context of a Sitefinity application, by using the Sitefinity wrapper for the current context:
var currentContext = SystemManager.CurrentHttpContext;
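So in the handler from the question, the sketch below (untested) replaces the page's Context property with the current request's context; everything else is the questioner's own code:
private void LoginCompletedEventVerification(ILoginCompletedEvent evt)
{
    if (evt.LoginResult != UserLoggingReason.Success)
        return;
    // Use the context of the request that raised the event instead of this.Context
    var currentContext = HttpContext.Current; // or the Sitefinity wrapper mentioned above
    var manager = new SessionIDManager();
    var oldId = manager.GetSessionID(currentContext);
    var newId = manager.CreateSessionID(currentContext);
    // ... the rest of the regeneration logic from the question, using currentContext ...
}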
I am using DotNetOpenAuth in conjunction with Mono 2.10. When context.Application.Unlock() is called, an exception is thrown indicating the lock was never acquired in the first place. I've modified the code as shown below.
My question is, does the code serve the same purpose, and does mono under Apache even support locking in this way?
Original
context.Application.Lock();
try
{
if ((store = (IRelyingPartyApplicationStore)context.Application[ApplicationStoreKey]) == null)
{
context.Application[ApplicationStoreKey] = store = new StandardRelyingPartyApplicationStore();
}
}
finally
{
context.Application.UnLock();
}
My Modifications
lock (app)
{
try
{
if ((store = (IRelyingPartyApplicationStore)context.Application[ApplicationStoreKey]) == null)
{
context.Application[ApplicationStoreKey] = store = new StandardRelyingPartyApplicationStore();
}
}
finally
{
//context.Application.UnLock();
}
}
Actually, Application.Lock() is not the same thing as lock(app).
Application.Lock() locks across all threads in the pool, while lock(app) can only lock the threads of the current pool.
If you have problems with the Application data, then save it in a static variable and use lock() on that; it's faster and is what Microsoft suggests.
For more details, also read this similar answer: https://stackoverflow.com/a/10964038/159270
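As a rough illustration of that static-field approach (a sketch only; the store type is taken from the question):
private static readonly object storeSync = new object();
private static IRelyingPartyApplicationStore store;
public static IRelyingPartyApplicationStore GetStore()
{
    if (store == null)
    {
        lock (storeSync)
        {
            if (store == null)
                store = new StandardRelyingPartyApplicationStore();
        }
    }
    return store;
}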
By the way, this is the code behind Application.Lock():
public void Lock()
{
this._lock.AcquireWrite();
}
internal virtual void AcquireWrite()
{
lock (this)
{
while (this._lock != 0)
{
try
{
Monitor.Wait(this);
continue;
}
catch (ThreadInterruptedException)
{
continue;
}
}
this._lock = -1;
}
}
I am currently writing a little bootstrap code for a service that can also be run in the console. It essentially boils down to calling the OnStart() method instead of using ServiceBase to start and stop the service (because ServiceBase won't run the application if it isn't installed as a service, which makes debugging a nightmare).
Right now I am using Debugger.IsAttached to determine if I should use ServiceBase.Run or [service].OnStart, but I know that isn't the best idea, because sometimes end users want to run the service in a console (to see the output, etc. in real time).
Any ideas on how I could determine whether the Windows service controller started 'me' or whether the user started 'me' in the console? Apparently Environment.UserInteractive is not the answer. I thought about using command-line args, but that seems 'dirty'.
I could always wrap ServiceBase.Run in a try-catch, but that seems dirty too. Edit: try-catch doesn't work.
I have a solution: putting it up here for all the other interested stackers:
public void Run()
{
if (Debugger.IsAttached || Environment.GetCommandLineArgs().Contains<string>("-console"))
{
RunAllServices();
}
else
{
try
{
string temp = Console.Title;
ServiceBase.Run((ServiceBase[])ComponentsToRun);
}
catch
{
RunAllServices();
}
}
} // void Run
private void RunAllServices()
{
foreach (ConsoleService component in ComponentsToRun)
{
component.Start();
}
WaitForCTRLC();
foreach (ConsoleService component in ComponentsToRun)
{
component.Stop();
}
}
EDIT: There was another question on StackOverflow where the asker had problems with Environment.CurrentDirectory being "C:\Windows\System32"; it looks like that may be the answer. I will test today.
Another workaround: this way it can run as a WinForms app or as a Windows service.
var backend = new Backend();
if (Environment.UserInteractive)
{
backend.OnStart();
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new Frontend(backend));
backend.OnStop();
}
else
{
var ServicesToRun = new ServiceBase[] {backend};
ServiceBase.Run(ServicesToRun);
}
I usually flag my Windows service as a console application that takes a command-line parameter of "-console" to run using a console; otherwise it runs as a service. To debug, you just set the command-line parameters in the project options to "-console" and you're off!
This makes debugging nice and easy and means that the app functions as a service by default, which is what you'll want.
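A minimal sketch of that switch (untested; MyService and its console helpers are placeholders, not a real API):
static void Main(string[] args)
{
    var service = new MyService(); // hypothetical ServiceBase-derived class
    if (Array.Exists(args, a => a.Equals("-console", StringComparison.OrdinalIgnoreCase)))
    {
        service.StartInConsole();  // hypothetical wrapper that calls the OnStart logic
        Console.WriteLine("Running as a console app. Press ENTER to stop.");
        Console.ReadLine();
        service.StopFromConsole(); // hypothetical wrapper that calls the OnStop logic
    }
    else
    {
        ServiceBase.Run(service);
    }
}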
What works for me:
The class doing the actual service work is running in a separate thread.
This thread is started from within the OnStart() method, and stopped from OnStop().
The decision between service and console mode depends on Environment.UserInteractive
Sample code:
class MyService : ServiceBase
{
private static void Main()
{
if (Environment.UserInteractive)
{
startWorkerThread();
Console.WriteLine ("====== Press ENTER to stop threads ======");
Console.ReadLine();
stopWorkerThread() ;
Console.WriteLine ("====== Press ENTER to quit ======");
Console.ReadLine();
}
else
{
Run(new MyService()); // 'this' is not available in a static Main
}
}
protected override void OnStart(string[] args)
{
startWorkerThread();
}
protected override void OnStop()
{
stopWorkerThread() ;
}
}
Like Ash, I write all the actual processing code in a separate class library assembly, which is then referenced by the Windows service executable as well as by a console app.
However, there are occasions when it is useful to know whether the class library is running in the context of the service executable or the console app. The way I do this is to reflect on the base class of the hosting app. (Sorry for the VB, but I imagine the following could be C#-ified fairly easily; a rough translation follows the code.)
Public Class ExecutionContext
''' <summary>
''' Gets a value indicating whether the application is a windows service.
''' </summary>
''' <value>
''' <c>true</c> if this instance is service; otherwise, <c>false</c>.
''' </value>
Public Shared ReadOnly Property IsService() As Boolean
Get
' Determining whether or not the host application is a service is
' an expensive operation (it uses reflection), so we cache the
' result of the first call to this method so that we don't have to
' recalculate it every call.
' If we have not already determined whether or not the application
' is running as a service...
If IsNothing(_isService) Then
' Get details of the host assembly.
Dim entryAssembly As Reflection.Assembly = Reflection.Assembly.GetEntryAssembly
' Get the method that was called to enter the host assembly.
Dim entryPoint As System.Reflection.MethodInfo = entryAssembly.EntryPoint
' If the base type of the host assembly inherits from the
' "ServiceBase" class, it must be a windows service. We store
' the result ready for the next caller of this method.
_isService = (entryPoint.ReflectedType.BaseType.FullName = "System.ServiceProcess.ServiceBase")
End If
' Return the cached result.
Return CBool(_isService)
End Get
End Property
Private Shared _isService As Nullable(Of Boolean) = Nothing
End Class
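A rough C#-ified version of the same check (untested; same reflection-based idea as the VB above):
public static class ExecutionContext
{
    private static bool? _isService;
    // True if the entry assembly's entry point is declared on a type deriving from ServiceBase.
    public static bool IsService
    {
        get
        {
            if (_isService == null)
            {
                var entryPoint = System.Reflection.Assembly.GetEntryAssembly().EntryPoint;
                _isService = entryPoint.ReflectedType.BaseType.FullName == "System.ServiceProcess.ServiceBase";
            }
            return _isService.Value;
        }
    }
}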
Jonathan, not exactly an answer to your question, but I've just finished writing a windows service and also noted the difficulty with debugging and testing.
Solved it by simply writing all actual processing code in a separate class library assembly, which was then referenced by the windows service executable, as well as a console app and a test harness.
Apart from basic timer logic, all more complex processing happened in the common assembly and could be tested/run on demand incredibly easily.
I have modified the ProjectInstaller to append the command-line argument /service when it is being installed as a service:
static class Program
{
static void Main(string[] args)
{
if (Array.Exists(args, delegate(string arg) { return arg == "/install"; }))
{
System.Configuration.Install.TransactedInstaller ti = null;
ti = new System.Configuration.Install.TransactedInstaller();
ti.Installers.Add(new ProjectInstaller());
ti.Context = new System.Configuration.Install.InstallContext("", null);
string path = System.Reflection.Assembly.GetExecutingAssembly().Location;
ti.Context.Parameters["assemblypath"] = path;
ti.Install(new System.Collections.Hashtable());
return;
}
if (Array.Exists(args, delegate(string arg) { return arg == "/uninstall"; }))
{
System.Configuration.Install.TransactedInstaller ti = null;
ti = new System.Configuration.Install.TransactedInstaller();
ti.Installers.Add(new ProjectInstaller());
ti.Context = new System.Configuration.Install.InstallContext("", null);
string path = System.Reflection.Assembly.GetExecutingAssembly().Location;
ti.Context.Parameters["assemblypath"] = path;
ti.Uninstall(null);
return;
}
if (Array.Exists(args, delegate(string arg) { return arg == "/service"; }))
{
ServiceBase[] ServicesToRun;
ServicesToRun = new ServiceBase[] { new MyService() };
ServiceBase.Run(ServicesToRun);
}
else
{
Console.ReadKey();
}
}
}
ProjectInstaller.cs is then modified to override OnBeforeInstall() and OnBeforeUninstall():
[RunInstaller(true)]
public partial class ProjectInstaller : Installer
{
public ProjectInstaller()
{
InitializeComponent();
}
protected virtual string AppendPathParameter(string path, string parameter)
{
if (path.Length > 0 && path[0] != '"')
{
path = "\"" + path + "\"";
}
path += " " + parameter;
return path;
}
protected override void OnBeforeInstall(System.Collections.IDictionary savedState)
{
Context.Parameters["assemblypath"] = AppendPathParameter(Context.Parameters["assemblypath"], "/service");
base.OnBeforeInstall(savedState);
}
protected override void OnBeforeUninstall(System.Collections.IDictionary savedState)
{
Context.Parameters["assemblypath"] = AppendPathParameter(Context.Parameters["assemblypath"], "/service");
base.OnBeforeUninstall(savedState);
}
}
This thread is really old, but I thought I would throw my solution out there. Quite simply, to handle this type of situation, I built a "service harness" that is used in both the console and Windows service cases. As above, most of the logic is contained in a separate library, but this is more for testing and "linkability".
The attached code by no means represents the "best possible" way to solve this, just my own approach. Here, the service harness is called by the console app when in "console mode" and by the same application's "start service" logic when it is running as a service. By doing it this way, you can now call
ServiceHost.Instance.RunningAsService (Boolean)
from anywhere in your code to check if the application is running as a service or simply as a console.
Here is the code:
public class ServiceHost
{
private static Logger log = LogManager.GetLogger(typeof(ServiceHost).Name);
private static ServiceHost mInstance = null;
private static object mSyncRoot = new object();
#region Singleton and Static Properties
public static ServiceHost Instance
{
get
{
if (mInstance == null)
{
lock (mSyncRoot)
{
if (mInstance == null)
{
mInstance = new ServiceHost();
}
}
}
return (mInstance);
}
}
public static Logger Log
{
get { return log; }
}
public static void Close()
{
lock (mSyncRoot)
{
if (mInstance.mEngine != null)
mInstance.mEngine.Dispose();
}
}
#endregion
private ReconciliationEngine mEngine;
private ServiceBase windowsServiceHost;
private UnhandledExceptionEventHandler threadExceptionHandler = new UnhandledExceptionEventHandler(ThreadExceptionHandler);
public bool HostHealthy { get; private set; }
public bool RunningAsService {get; private set;}
private ServiceHost()
{
HostHealthy = false;
RunningAsService = false;
AppDomain.CurrentDomain.UnhandledException += threadExceptionHandler;
try
{
mEngine = new ReconciliationEngine();
HostHealthy = true;
}
catch (Exception ex)
{
log.FatalException("Could not initialize components.", ex);
}
}
public void StartService()
{
if (!HostHealthy)
throw new ApplicationException("Did not initialize components.");
try
{
mEngine.Start();
}
catch (Exception ex)
{
log.FatalException("Could not start service components.", ex);
HostHealthy = false;
}
}
public void StartService(ServiceBase serviceHost)
{
if (!HostHealthy)
throw new ApplicationException("Did not initialize components.");
if (serviceHost == null)
throw new ArgumentNullException("serviceHost");
windowsServiceHost = serviceHost;
RunningAsService = true;
try
{
mEngine.Start();
}
catch (Exception ex)
{
log.FatalException("Could not start service components.", ex);
HostHealthy = false;
}
}
public void RestartService()
{
if (!HostHealthy)
throw new ApplicationException("Did not initialize components.");
try
{
log.Info("Stopping service components...");
mEngine.Stop();
mEngine.Dispose();
log.Info("Starting service components...");
mEngine = new ReconciliationEngine();
mEngine.Start();
}
catch (Exception ex)
{
log.FatalException("Could not restart components.", ex);
HostHealthy = false;
}
}
public void StopService()
{
try
{
if (mEngine != null)
mEngine.Stop();
}
catch (Exception ex)
{
log.FatalException("Error stopping components.", ex);
HostHealthy = false;
}
finally
{
if (windowsServiceHost != null)
windowsServiceHost.Stop();
if (RunningAsService)
{
AppDomain.CurrentDomain.UnhandledException -= threadExceptionHandler;
}
}
}
private void HandleExceptionBasedOnExecution(object ex)
{
if (RunningAsService)
{
windowsServiceHost.Stop();
}
else
{
throw (Exception)ex;
}
}
protected static void ThreadExceptionHandler(object sender, UnhandledExceptionEventArgs e)
{
log.FatalException("Unexpected error occurred. System is shutting down.", (Exception)e.ExceptionObject);
ServiceHost.Instance.HandleExceptionBasedOnExecution((Exception)e.ExceptionObject);
}
}
All you need to do here is replace that ominous-looking ReconciliationEngine reference with whatever method bootstraps your own logic. Then in your application, use the ServiceHost.Instance.StartService() and ServiceHost.Instance.StopService() methods whether you are running in console mode or as a service, roughly as sketched below.
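For example, the entry point might route into the harness like this (a sketch only; MyWindowsService is a hypothetical ServiceBase wrapper whose OnStart calls StartService(this) and whose OnStop calls StopService()):
static void Main(string[] args)
{
    if (Environment.UserInteractive)
    {
        ServiceHost.Instance.StartService(); // console mode: no ServiceBase is passed in
        Console.WriteLine("Press ENTER to stop.");
        Console.ReadLine();
        ServiceHost.Instance.StopService();
    }
    else
    {
        ServiceBase.Run(new MyWindowsService()); // hypothetical ServiceBase wrapper
    }
}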
Maybe check whether the parent of the process is C:\Windows\system32\services.exe.
The only way I've found to achieve this is to check whether a console is attached to the process in the first place, by accessing any Console object property (e.g. Title) inside a try/catch block.
If the service is started by the SCM, there is no console, and accessing the property will throw a System.IO.IOException.
However, since this feels a bit too much like relying on an implementation-specific detail (what if the SCM on some platform someday decides to provide a console to the processes it starts?), I always use a command-line switch (-console) in production apps...
Here is a translation of chksr's answer to .NET, avoiding the bug that fails to recognize interactive services:
using System.Security.Principal;
var wi = WindowsIdentity.GetCurrent();
var wp = new WindowsPrincipal(wi);
var serviceSid = new SecurityIdentifier(WellKnownSidType.ServiceSid, null);
var localSystemSid = new SecurityIdentifier(WellKnownSidType.LocalSystemSid, null);
var interactiveSid = new SecurityIdentifier(WellKnownSidType.InteractiveSid, null);
// maybe check LocalServiceSid, and NetworkServiceSid also
bool isServiceRunningAsUser = wp.IsInRole(serviceSid);
bool isSystem = wp.IsInRole(localSystemSid);
bool isInteractive = wp.IsInRole(interactiveSid);
bool isAnyService = isServiceRunningAsUser || isSystem || !isInteractive;
This is a bit of a self-plug, but I've got a little app that will load up your service types in your app via reflection and execute them that way. I include the source code, so you could change it slightly to display standard output.
No code changes needed to use this solution. I have a Debugger.IsAttached type of solution as well that is generic enough to be used with any service. Link is in this article:
.NET Windows Service Runner
Well, there's some very old code (about 20 years or so, not mine but found in the wild, wild web, and in C rather than C#) that should give you an idea of how to do the job:
enum enEnvironmentType
{
ENVTYPE_UNKNOWN,
ENVTYPE_STANDARD,
ENVTYPE_SERVICE_WITH_INTERACTION,
ENVTYPE_SERVICE_WITHOUT_INTERACTION,
ENVTYPE_IIS_ASP,
};
enEnvironmentType GetEnvironmentType(void)
{
HANDLE hProcessToken = NULL;
DWORD groupLength = 300;
PTOKEN_GROUPS groupInfo = NULL;
SID_IDENTIFIER_AUTHORITY siaNt = SECURITY_NT_AUTHORITY;
PSID pInteractiveSid = NULL;
PSID pServiceSid = NULL;
DWORD dwRet = NO_ERROR;
DWORD ndx;
BOOL m_isInteractive = FALSE;
BOOL m_isService = FALSE;
// open the token
if (!::OpenProcessToken(::GetCurrentProcess(),TOKEN_QUERY,&hProcessToken))
{
dwRet = ::GetLastError();
goto closedown;
}
// allocate a buffer of default size
groupInfo = (PTOKEN_GROUPS)::LocalAlloc(0, groupLength);
if (groupInfo == NULL)
{
dwRet = ::GetLastError();
goto closedown;
}
// try to get the info
if (!::GetTokenInformation(hProcessToken, TokenGroups,
groupInfo, groupLength, &groupLength))
{
// if buffer was too small, allocate to proper size, otherwise error
if (::GetLastError() != ERROR_INSUFFICIENT_BUFFER)
{
dwRet = ::GetLastError();
goto closedown;
}
::LocalFree(groupInfo);
groupInfo = (PTOKEN_GROUPS)::LocalAlloc(0, groupLength);
if (groupInfo == NULL)
{
dwRet = ::GetLastError();
goto closedown;
}
if (!GetTokenInformation(hProcessToken, TokenGroups,
groupInfo, groupLength, &groupLength))
{
dwRet = ::GetLastError();
goto closedown;
}
}
//
// We now know the groups associated with this token. We want
// to look to see if the interactive group is active in the
// token, and if so, we know that this is an interactive process.
//
// We also look for the "service" SID, and if it's present,
// we know we're a service.
//
// The service SID will be present iff the service is running in a
// user account (and was invoked by the service controller).
//
// create comparison sids
if (!AllocateAndInitializeSid(&siaNt,
1,
SECURITY_INTERACTIVE_RID,
0, 0, 0, 0, 0, 0, 0,
&pInteractiveSid))
{
dwRet = ::GetLastError();
goto closedown;
}
if (!AllocateAndInitializeSid(&siaNt,
1,
SECURITY_SERVICE_RID,
0, 0, 0, 0, 0, 0, 0,
&pServiceSid))
{
dwRet = ::GetLastError();
goto closedown;
}
// try to match sids
for (ndx = 0; ndx < groupInfo->GroupCount ; ndx += 1)
{
SID_AND_ATTRIBUTES sanda = groupInfo->Groups[ndx];
PSID pSid = sanda.Sid;
//
// Check to see if the group we're looking at is one of
// the two groups we're interested in.
//
if (::EqualSid(pSid, pInteractiveSid))
{
//
// This process has the Interactive SID in its
// token. This means that the process is running as
// a console process
//
m_isInteractive = TRUE;
m_isService = FALSE;
break;
}
else if (::EqualSid(pSid, pServiceSid))
{
//
// This process has the Service SID in its
// token. This means that the process is running as
// a service running in a user account ( not local system ).
//
m_isService = TRUE;
m_isInteractive = FALSE;
break;
}
}
if ( !( m_isService || m_isInteractive ) )
{
//
// Neither Interactive or Service was present in the current
// users token, This implies that the process is running as
// a service, most likely running as LocalSystem.
//
m_isService = TRUE;
}
closedown:
if ( pServiceSid )
::FreeSid( pServiceSid );
if ( pInteractiveSid )
::FreeSid( pInteractiveSid );
if ( groupInfo )
::LocalFree( groupInfo );
if ( hProcessToken )
::CloseHandle( hProcessToken );
if (dwRet == NO_ERROR)
{
if (m_isService)
return(m_isInteractive ? ENVTYPE_SERVICE_WITH_INTERACTION : ENVTYPE_SERVICE_WITHOUT_INTERACTION);
return(ENVTYPE_STANDARD);
}
else
return(ENVTYPE_UNKNOWN);
}
It seems I am a bit late to the party, but an interesting difference when running as a service is that, at start, the current folder points to the system directory (C:\Windows\System32 by default). It's highly unlikely that a user app would start from the system folder in any real-life situation.
So, I use the following trick (C#):
protected static bool IsRunAsService()
{
string CurDir = Directory.GetCurrentDirectory();
if (CurDir.Equals(Environment.SystemDirectory, StringComparison.CurrentCultureIgnoreCase))
{
return true;
}
return (false);
}
For future extension, an additional check may be done for System.Environment.UserInteractive == false (but I do not know how it correlates with the 'Allow service to interact with desktop' service setting).
You may also check the window session via System.Diagnostics.Process.GetCurrentProcess().SessionId == 0 (again, I do not know how that correlates with the 'Allow service to interact with desktop' setting).
If you write portable code (say, with .NET Core) you may also check Environment.OSVersion.Platform to ensure that you are on Windows first. These checks can be combined, as in the sketch below.
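A rough combined heuristic (untested; the same caveats about the 'interact with desktop' setting apply, and System.IO plus System.Diagnostics are assumed to be imported):
protected static bool LooksLikeService()
{
    // Only meaningful on Windows
    if (Environment.OSVersion.Platform != PlatformID.Win32NT)
        return false;
    bool startedInSystemDir = Directory.GetCurrentDirectory()
        .Equals(Environment.SystemDirectory, StringComparison.OrdinalIgnoreCase);
    bool nonInteractive = !Environment.UserInteractive;
    bool sessionZero = System.Diagnostics.Process.GetCurrentProcess().SessionId == 0;
    return startedInSystemDir || nonInteractive || sessionZero;
}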
I know that in certain circumstances, such as long-running processes, it is important to lock the ASP.NET cache in order to prevent subsequent requests by another user for that resource from executing the long-running process again instead of hitting the cache.
What is the best way in c# to implement cache locking in ASP.NET?
Here's the basic pattern:
Check the cache for the value; return it if it's available
If the value is not in the cache, acquire a lock
Inside the lock, check the cache again; you might have been blocked by a thread that already populated it
Perform the value lookup and cache it
Release the lock
In code, it looks like this:
private static object ThisLock = new object();
public string GetFoo()
{
// try to pull from cache here
lock (ThisLock)
{
// cache was empty before we got the lock, check again inside the lock
// cache is still empty, so retrieve the value here
// store the value in the cache here
}
// return the cached value here
}
For completeness a full example would look something like this.
private static object ThisLock = new object();
...
object dataObject = Cache["globalData"];
if( dataObject == null )
{
lock( ThisLock )
{
dataObject = Cache["globalData"];
if( dataObject == null )
{
//Get Data from db
dataObject = GlobalObj.GetData();
Cache["globalData"] = dataObject;
}
}
}
return dataObject;
There is no need to lock the whole cache instance, rather we only need to lock the specific key that you are inserting for.
I.e. No need to block access to the female toilet while you use the male toilet :)
The implementation below allows for locking of specific cache-keys using a concurrent dictionary. This way you can run GetOrAdd() for two different keys at the same time - but not for the same key at the same time.
using System;
using System.Collections.Concurrent;
using System.Web.Caching;
public static class CacheExtensions
{
private static ConcurrentDictionary<string, object> keyLocks = new ConcurrentDictionary<string, object>();
/// <summary>
/// Get or Add the item to the cache using the given key. Lazily executes the value factory only if/when needed
/// </summary>
public static T GetOrAdd<T>(this Cache cache, string key, int durationInSeconds, Func<T> factory)
where T : class
{
// Try and get value from the cache
var value = cache.Get(key);
if (value == null)
{
// If not yet cached, lock the key value and add to cache
lock (keyLocks.GetOrAdd(key, new object()))
{
// Try and get from cache again in case it has been added in the meantime
value = cache.Get(key);
if (value == null && (value = factory()) != null)
{
// TODO: Some of these parameters could be added to method signature later if required
cache.Insert(
key: key,
value: value,
dependencies: null,
absoluteExpiration: DateTime.Now.AddSeconds(durationInSeconds),
slidingExpiration: Cache.NoSlidingExpiration,
priority: CacheItemPriority.Default,
onRemoveCallback: null);
}
// Remove temporary key lock
keyLocks.TryRemove(key, out object locker);
}
}
return value as T;
}
}
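A quick usage sketch of the extension above (the key, duration, and DAL call are placeholders):
// The factory delegate only runs when the key is not yet cached
var customers = HttpContext.Current.Cache.GetOrAdd(
    "customers",                // cache key
    300,                        // duration in seconds
    () => DAL.ListCustomers()); // hypothetical data-access call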
Just to echo what Pavel said, I believe this is the most thread-safe way of writing it:
private T GetOrAddToCache<T>(string cacheKey, GenericObjectParamsDelegate<T> creator, params object[] creatorArgs) where T : class, new()
{
T returnValue = HttpContext.Current.Cache[cacheKey] as T;
if (returnValue == null)
{
lock (this)
{
returnValue = HttpContext.Current.Cache[cacheKey] as T;
if (returnValue == null)
{
returnValue = creator(creatorArgs);
if (returnValue == null)
{
throw new Exception("Attempt to cache a null reference");
}
HttpContext.Current.Cache.Add(
cacheKey,
returnValue,
null,
System.Web.Caching.Cache.NoAbsoluteExpiration,
System.Web.Caching.Cache.NoSlidingExpiration,
CacheItemPriority.Normal,
null);
}
}
}
return returnValue;
}
Craig Shoemaker has made an excellent show on ASP.NET caching:
http://polymorphicpodcast.com/shows/webperformance/
I have come up with the following extension method:
private static readonly object _lock = new object();
public static TResult GetOrAdd<TResult>(this Cache cache, string key, Func<TResult> action, int duration = 300) {
TResult result;
var data = cache[key]; // Can't cast using as operator as TResult may be an int or bool
if (data == null) {
lock (_lock) {
data = cache[key];
if (data == null) {
result = action();
if (result == null)
return result;
if (duration > 0)
cache.Insert(key, result, null, DateTime.UtcNow.AddSeconds(duration), TimeSpan.Zero);
} else
result = (TResult)data;
}
} else
result = (TResult)data;
return result;
}
I have used both @John Owen's and @user378380's answers. My solution allows you to store int and bool values in the cache as well.
Please correct me if there are any errors or if it can be written a little better.
I saw one pattern recently called Correct State Bag Access Pattern, which seemed to touch on this.
I modified it a bit to be thread-safe.
http://weblogs.asp.net/craigshoemaker/archive/2008/08/28/asp-net-caching-and-performance.aspx
private static object _listLock = new object();
public List List() {
string cacheKey = "customers";
List myList = Cache[cacheKey] as List;
if(myList == null) {
lock (_listLock) {
myList = Cache[cacheKey] as List;
if (myList == null) {
myList = DAL.ListCustomers();
Cache.Insert(cacheKey, myList, null, SiteConfig.CacheDuration, TimeSpan.Zero);
}
}
}
return myList;
}
This article from CodeGuru explains various cache locking scenarios as well as some best practices for ASP.NET cache locking:
Synchronizing Cache Access in ASP.NET
I've written a library that solves that particular issue: Rocks.Caching
I've also blogged about this problem in detail and explained why it's important here.
I modified @user378380's code for more flexibility. Instead of returning TResult, it now returns object so that different types can be accepted in turn. It also adds some parameters for flexibility. All the credit belongs to @user378380.
private static readonly object _lock = new object();
//If getOnly is true, only get the existing cache value without updating it. If the cached value is null, set it first by running the action method; so this can return either the old value or the action's result.
//If getOnly is false, update the old value with the action's result. If the cached value is null, set it first by running the action method; so this always returns the action's result.
//With the oldValueReturned boolean the caller can cast the returned object (if it is not null) to the appropriate type.
public static object GetOrAdd<TResult>(this Cache cache, string key, Func<TResult> action,
DateTime absoluteExpireTime, TimeSpan slidingExpireTime, bool getOnly, out bool oldValueReturned)
{
object result;
var data = cache[key];
if (data == null)
{
lock (_lock)
{
data = cache[key];
if (data == null)
{
oldValueReturned = false;
result = action();
if (result == null)
{
return result;
}
cache.Insert(key, result, null, absoluteExpireTime, slidingExpireTime);
}
else
{
if (getOnly)
{
oldValueReturned = true;
result = data;
}
else
{
oldValueReturned = false;
result = action();
if (result == null)
{
return result;
}
cache.Insert(key, result, null, absoluteExpireTime, slidingExpireTime);
}
}
}
}
else
{
if(getOnly)
{
oldValueReturned = true;
result = data;
}
else
{
oldValueReturned = false;
result = action();
if (result == null)
{
return result;
}
cache.Insert(key, result, null, absoluteExpireTime, slidingExpireTime);
}
}
return result;
}
The accepted answer (recommending reading outside of the lock) is very bad advice, and it has been followed since 2008. It could work if the cache used a concurrent dictionary, but that itself has a lock for reads.
Reading outside of the lock means that other threads could be modifying the cache in the middle of read. This means that the read could be inconsistent.
For example, depending on the implementation of the cache (probably a dictionary whose internals are unknown), the item could be checked and found in the cache, at a certain index in the underlying array of the cache, then another thread could modify the cache so that the items from the underlying array are no longer in the same order, and then the actual read from the cache could be from a different index / address.
Another scenario is that the read could be from an index that is now outside of the underlying array (because items were removed), so you can get exceptions.
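The safer variant this answer implies keeps even the first read inside the lock, roughly like this (a sketch only; the names are taken from the earlier example):
private static readonly object cacheSync = new object();
public object GetGlobalData()
{
    lock (cacheSync)
    {
        // Both the read and the write happen under the same lock,
        // so no other thread can reorganize the cache mid-read.
        var data = Cache["globalData"];
        if (data == null)
        {
            data = GlobalObj.GetData();
            Cache["globalData"] = data;
        }
        return data;
    }
}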