C# static method called only on startup

I have this static method, which is called during application startup and at the start of every WebApi method (ASP.NET Core MVC, WebApi project):
private static object syncObj = new object();
private static string myProp;

public static string MyStaticMethod()
{
    lock (syncObj)
    {
        if (myProp == null)
        {
            myProp = Utils.GetMyPropFromRegistry();
            if (string.IsNullOrWhiteSpace(myProp))
            {
                throw new MyException("myProp value not set");
            }
        }
    }
    return myProp;
}
When the application starts and this method is called for the first time, I receive an exception, since I do not have a proper value written in the registry (the call to Utils.GetMyPropFromRegistry returns null). This is all OK. But if I add the value to the registry while the application is still running (in debug mode), this method never seems to get called again. It actually returns the same exception message through WebApi as the first time, only the debugger does not stop on any breakpoint (which it did on application startup). If I examine the stack trace, it correctly shows where the exception occurred; it just doesn't stop there and it doesn't read the new value from the registry.
It seems like C# (or ASP.NET Core) somehow remembers that the method throws an exception and doesn't even execute the code?
Is there anything special about static methods that throw an exception on startup? Or maybe it has something to do with the lock statement? I've used static methods before, but never ran into such an issue.
Places where this is called:
// 1: Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services
        .AddAuthentication(ConfigureJwtBearerAuthenticationScheme)
        .AddJwtBearer(options =>
        {
            var data = MyStaticMethod();
            // Do something with data
        });
}

// 2: WebApiController.cs:
public async Task<ActionResult<LoginResult>> Login(LoginModel model)
{
    var data = MyStaticMethod();
    var result = (do something with data);
    return result;
}
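For reference, the static/lock combination by itself does not cache exceptions. A minimal console sketch of the same pattern (with a hypothetical static field standing in for Utils and the registry) is re-entered on every call and succeeds as soon as the value appears, which suggests any caching is happening in whatever calls the method, not in the method itself:

using System;

static class Repro
{
    private static readonly object syncObj = new object();
    private static string myProp;
    private static string registryValue; // hypothetical stand-in for the registry

    public static string MyStaticMethod()
    {
        lock (syncObj)
        {
            if (myProp == null)
            {
                myProp = registryValue;
                if (string.IsNullOrWhiteSpace(myProp))
                    throw new InvalidOperationException("myProp value not set");
            }
        }
        return myProp;
    }

    static void Main()
    {
        try { MyStaticMethod(); }
        catch (Exception e) { Console.WriteLine(e.Message); } // first call throws

        registryValue = "value";             // simulate fixing the registry while running
        Console.WriteLine(MyStaticMethod()); // the next call re-runs the body and succeeds
    }
}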

Related

Unit testing - Operation is not valid due to the current state of the object when using a HasDbFunction()

I have added a HasDbFunction() to my modelBuilder to allow access to Oracle's LPad function, like so:
modelBuilder.HasDbFunction(() => OracleDbFunctions.LPad(default, default, default));
The OracleDbFunctions class is:
public static class OracleDbFunctions
{
    [DbFunction(IsBuiltIn = true)]
    public static string LPad(this string text, int totalWidth, string padding)
        => throw new InvalidOperationException();
}
It all works perfectly fine and does what I expect. However, now I am trying to unit test all my API calls which use this function, and I get the following error: "Operation is not valid due to the current state of the object".
I have ensured all the data my DbContextMock uses is populated, so I'm not sure why this error occurs. Would I have to somehow initialise OracleDbFunctions.LPad in my DbContextMock?
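"Operation is not valid due to the current state of the object" is the default message of an InvalidOperationException, so one possibility is that the mocked DbContext evaluates the query in memory, in which case the CLR body of LPad (which just throws) actually runs. A rough sketch of a workaround under that assumption is to give the function a real in-memory body, approximating Oracle's LPAD with PadLeft:

public static class OracleDbFunctions
{
    [DbFunction(IsBuiltIn = true)]
    public static string LPad(this string text, int totalWidth, string padding)
    {
        // Only executed when the query is evaluated in memory (e.g. in unit tests);
        // against Oracle the call is still translated to the LPAD SQL function.
        // Approximation: pads with the first character of 'padding' only.
        if (string.IsNullOrEmpty(padding))
            return text;
        return text.PadLeft(totalWidth, padding[0]);
    }
}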

Dependency Injection not resolving fast enough for use when a service relies on another service

I am injecting two services into my .NET Core web API; the main service relies on data in the helper service. The helper service populates this data in its constructor, but when the main service goes to use the data it is not ready, because the constructor of the helper service has not finished by the time it is needed.
I thought DI and the compiler would resolve and chain these services properly, so the helper service would not be used until it was fully instantiated.
How do I tell the main service to wait until the helper service is fully resolved and instantiated?
Generic sample code of what I am doing: I call DoSomething() in MainService; the HelperService calls out to an external API to get some data, and that data is needed in the MainService.
StartUp.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<IHelperService, HelperService>();
    services.AddScoped<IMainService, MainService>();
}
MainService.cs
public class MainService : IMainService
{
    private readonly IHelperService _helper;

    public MainService(IHelperService helperService)
    {
        _helper = helperService;
    }

    public void DoSomething()
    {
        string helperParameter = _helper.Param1; // This fails because the constructor of HelperService has not finished
    }
}
HelperService.cs
public class HelperService : IHelperService
{
    public HelperService()
    {
        GetParamData();
    }

    private async void GetParamData()
    {
        var response = await CallExternalAPIForParameters(); // This may take a second.
        Param1 = response.value;
    }

    private string _param1;
    public string Param1
    {
        get
        {
            return _param1;
        }
        private set
        {
            _param1 = value;
        }
    }
}
You are not awaiting the async method GetParamData() in the constructor; that is, of course, not possible. Your constructor should only initialize simple data. You could fix this by returning a Task from a method called (for example) Task<string> GetParam1() instead of exposing a property, and letting that method cache the string value.
For example:
public class HelperService : IHelperService
{
    private string _param1;

    // note: this is not thread-safe.
    public async Task<string> GetParam1()
    {
        if (_param1 != null)
            return _param1;

        var response = await CallExternalAPIForParameters(); // This may take a second.
        _param1 = response.value;
        return _param1;
    }
}
You could even return a ValueTask<string>, because most of the calls can complete synchronously.
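If several requests can hit the service at the same time, a thread-safe variant of the same caching idea is Lazy<Task<string>> - a sketch reusing the question's CallExternalAPIForParameters and response.value names - which guarantees the external call is started at most once:

public class HelperService : IHelperService
{
    private readonly Lazy<Task<string>> _param1;

    public HelperService()
    {
        // The factory runs only once, on the first call to GetParam1();
        // concurrent callers all await the same Task.
        _param1 = new Lazy<Task<string>>(async () =>
        {
            var response = await CallExternalAPIForParameters();
            return response.value;
        });
    }

    public Task<string> GetParam1() => _param1.Value;
}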
Pass a lambda to the helper service that initializes the variable in your main service, as in:
helperService.GetFirstParam(response =>
{
    firstParam = response.data;
});

while (firstParam == null)
    Thread.Sleep(100);

// now do your processing

Setup on Mock not returning expected value

Here is a simplified version of a problem I encountered:
public interface IService
{
    IProvider Provider { get; }
}

public interface IProvider
{
    List<int> Numbers { get; }
    string Text { get; }
}

[TestMethod]
public void ServiceTest()
{
    var service = new Mock<IService>();
    var provider = new Mock<IProvider>();

    service.Setup(s => s.Provider).Returns(provider.Object); // A
    service.Setup(s => s.Provider.Text).Returns("some text"); // B - incorrect

    // they actually meant to do this, instead of 'B':
    // provider.Setup(p => p.Text).Returns("some text");

    provider.Setup(p => p.Numbers).Returns(new List<int> { 1, 2, 3 });

    DummyApplicationCode(service.Object);
}

int DummyApplicationCode(IService service)
{
    // will throw, because the Provider was replaced at 'B'
    int shouldBeOne = service.Provider.Numbers.First();
    return shouldBeOne;
}
A unit test was failing because way down in the application code under test, the mocked IService was returning the wrong IProvider.
I eventually spotted the line which had caused it (bear in mind the code I was looking at was not as simple as the above), labelled 'B' above, which someone else had added due to a misunderstanding of the Moq Setup.
I'm aware that subsequent Setups on a mock will override previous ones, but I hadn't spotted this issue because the Return of the offending line was for a separate sub-property.
I expect this is by design, but it threw me as I hadn't anticipated someone would do this.
My question: since the Setup at 'B' is only concerned with the return of the provider's Text, why does the setup for the service's 'Provider' property need to replace the one defined at 'A'?
This is clearly intentional when looking at the source:
https://github.com/moq/moq4/blob/master/Source/Mock.cs
https://github.com/moq/moq4/blob/master/Source/Interceptor.cs
Setup creates a "call" using AddCall on the Interceptor. That method contains the following block of code which, as long as we're creating a non-conditional setup, removes all previous setups. It's even commented:
if (!call.IsConditional)
{
    lock (calls)
    {
        // if it's not a conditional call, we do
        // all the override setups.
        // TODO maybe add the conditionals to other
        // record like calls to be user friendly and display
        // somethig like: non of this calls were performed.
        if (calls.ContainsKey(key))
        {
            // Remove previous from ordered calls
            InterceptionContext.RemoveOrderedCall(calls[key]);
        }
        calls[key] = call;
    }
}
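In practical terms, the consistent version of the test is the one the comment in the question already hints at: keep 'A' as the only setup that touches s.Provider and put every sub-property setup on the provider mock, so nothing replaces it:

var service = new Mock<IService>();
var provider = new Mock<IProvider>();

// 'A' remains the only setup targeting s.Provider ...
service.Setup(s => s.Provider).Returns(provider.Object);

// ... and the member setups go on the provider mock itself,
// so the Provider setup above is never overridden.
provider.Setup(p => p.Text).Returns("some text");
provider.Setup(p => p.Numbers).Returns(new List<int> { 1, 2, 3 });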

Ninject scope when running Azure Webjob in parallel

I have a console application as my webjob to process notifications inside my application. The processes are triggered using a queue. The application interacts with an Azure SQL database using Entity Framework 6. The Process() method that's being called reads/writes data to the database.
I'm getting several errors when the queue messages are processed. They never get to the poison queue, since they are reprocessed successfully after 2-3 attempts. Mainly the errors are the following:
An unhandled exception of type 'System.StackOverflowException' occurred in mscorlib.dll
Error: System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
The default batch size is 16, so the messages are processed in parallel.
My guess is that the Ninject setup for processing messages in parallel is wrong, and that is why the messages fail when processed at the same time but eventually go through.
My question is: does this setup look OK? Should I maybe use InThreadScope(), since I don't know whether the parallel processing is also multi-threaded?
Here's the code for my application.
Program.cs
namespace NotificationsProcessor
{
    public class Program
    {
        private static StandardKernel _kernel;

        private static void Main(string[] args)
        {
            var module = new CustomModule();
            var kernel = new StandardKernel(module);
            _kernel = kernel;

            var config =
                new JobHostConfiguration(AzureStorageAccount.ConnectionString)
                {
                    NameResolver = new QueueNameResolver()
                };

            var host = new JobHost(config);
            //config.Queues.BatchSize = 1; //Process messages in parallel
            host.RunAndBlock();
        }

        public static void ProcessNotification([QueueTrigger("%notificationsQueueKey%")] string item)
        {
            var n = _kernel.Get<INotifications>();
            n.Process(item);
        }

        public static void ProcessPoison([QueueTrigger("%notificationsQueueKeyPoison%")] string item)
        {
            //Process poison message.
        }
    }
}
Here's the code for Ninject's CustomModule
namespace NotificationsProcessor.NinjectFiles
{
    public class CustomModule : NinjectModule
    {
        public override void Load()
        {
            Bind<IDbContext>().To<DataContext>(); //EF data context
            Bind<INotifications>().To<NotificationsService>();
            Bind<IEmails>().To<EmailsService>();
            Bind<ISms>().To<SmsService>();
        }
    }
}
Code for the processing method:
public void ProcessMessage(string message)
{
    try
    {
        var notificationQueueMessage = JsonConvert.DeserializeObject<NotificationQueueMessage>(message);

        //Grab message and check if it has to be processed
        var notification = _context.Set().Find(notificationQueueMessage.NotificationId);
        if (notification != null)
        {
            if (notification.NotificationType == NotificationType.AppointmentReminder.ToString())
            {
                notificationSuccess = SendAppointmentReminderEmail(notification); //Code that sends email using the SendGrid Api
            }
        }
    }
    catch (Exception ex)
    {
        _logger.LogError(ex + Environment.NewLine + message, LogSources.EmailsService);
        throw;
    }
}
Update - Added Exception
The exception is being thrown in the JSON serializer. Here's the stack trace:
Error: System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
   at System.String.CtorCharCount(Char c, Int32 count)
   at Newtonsoft.Json.JsonTextWriter.WriteIndent()
   at Newtonsoft.Json.JsonWriter.AutoCompleteClose(JsonContainerType type)
   at Newtonsoft.Json.JsonWriter.WriteEndObject()
   at Newtonsoft.Json.JsonWriter.WriteEnd(JsonContainerType type)
   at Newtonsoft.Json.JsonWriter.WriteEnd()
   at Newtonsoft.Json.JsonWriter.AutoCompleteAll()
   at Newtonsoft.Json.JsonTextWriter.Close()
   at Newtonsoft.Json.JsonWriter.System.IDisposable.Dispose()
   at Newtonsoft.Json.JsonConvert.SerializeObjectInternal(Object value, Type type, JsonSerializer jsonSerializer)
   at Newtonsoft.Json.JsonConvert.SerializeObject(Object value, Type type, Formatting formatting, JsonSerializerSettings settings)
   at Core.Services.Communications.EmailsService.SendAppointmentReminderEmail(Notificaciones email) in c:\ProjectsGreenLight\EAS\EAS\EAS\Core\Services\Communications\EmailsService.cs:line 489
   at Core.Services.Communications.EmailsService.ProcessMessage(String message) in c:\ProjectsGreenLight\EAS\EAS\EAS\Core\Services\Communications\EmailsService.cs:line 124
   at Core.Services.NotificacionesService.Process(String message) in c:\ProjectsGreenLight\EAS\EAS\EAS\Core\Services\NotificacionesService.cs:line 56
Since you're receiving OutOfMemoryExceptions and StackOverflowExceptions, I suspect there may be a recursive or deeply nested method. It would be extremely helpful to have a stack trace for the exception; sadly that's not available for a StackOverflowException. An OutOfMemoryException does have a stack trace, however, so you need to log it, see what it says, and add it to your question. Also, as far as the StackOverflowException goes, you can also try this.
Scoping
You should not use .InThreadScope() in such a scenario. Why? Usually a thread pool is used and threads are reused. That means scoped objects would live longer than the processing of a single message.
Currently you are using .InTransientScope() (that's the default if you don't specify anything else). Since you are using the IDbContext in only one place, that's OK. If you wanted to share the same instance across multiple objects, you'd have to use a scope or pass the instance along manually.
Now, what may be problematic in your case is that you may be creating a lot of new IDbContexts but not disposing of them after use. Depending on other factors, this may result in the garbage collector taking longer to clean up memory. See Do I have to call Dispose on DbContext.
However, I grant that this won't fix your issue; it might just speed up your application.
Here's how you can do it anyway. Have INotifications derive from IDisposable:
public interface INotifications : IDisposable
{
    (...)
}

internal class Notifications : INotifications
{
    private readonly IDbContext _context;

    public Notifications(IDbContext context)
    {
        _context = context;
    }

    (...)

    public void Dispose()
    {
        _context.Dispose();
    }
}
and alter your calling code to dispose of INotifications:
public static void ProcessNotification([QueueTrigger("%notificationsQueueKey%")] string item)
{
    using (var n = _kernel.Get<INotifications>())
    {
        n.Process(item);
    }
}
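As a variation on the same disposal theme (a sketch only, assuming Ninject 3.x where IKernel.BeginBlock() is available and the bindings deactivate and dispose their instances), an activation block releases everything resolved from it when the block is disposed, without INotifications having to implement IDisposable itself:

public static void ProcessNotification([QueueTrigger("%notificationsQueueKey%")] string item)
{
    // Instances resolved from the block are scoped to it and are
    // deactivated (disposed where applicable) when the block is disposed.
    using (var block = _kernel.BeginBlock())
    {
        var n = block.Get<INotifications>();
        n.Process(item);
    }
}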

Preventing simultaneous calls to a WCF function

I have a WCF service running as a Windows service. This service references a DLL containing a class X with a public method func1. func1 calls another (private) method func2 asynchronously using tasks (TPL). func2 performs a long-running task independently. The setup is:
WCF
public string wcfFunc()
{
    X obj = new X();
    return obj.func1();
}
DLL
public class X
{
    static bool flag;

    public X()
    {
        flag = true;
    }

    public string func1()
    {
        if (!flag)
            return "Already in action";

        Task t = null;
        t = Task.Factory.StartNew(() => func2(), TaskCreationOptions.LongRunning);
        return "started";
    }

    void func2()
    {
        try
        {
            flag = false;
            //Does a long running database processing work through .Net code
        }
        catch (Exception)
        {
        }
        finally
        {
            flag = true;
        }
    }
}
The WCF function is called from a website, which is used by multiple users. No two executions of the database processing in func2 are allowed. Any user can trigger it, but while an execution is in progress, any other user who attempts to trigger it should be told that the processing is already running.
I tried to use a static variable 'flag' to check this, but it does not seem to be working.
Any solutions? Thanks in advance.
You can read the following article. To prevent multiple simultaneous calls to the WCF service method, you will need to ensure that only one instance of your service can be created, in addition to setting the concurrency mode.
In short, make the following changes to your ServiceBehavior:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single, InstanceContextMode = InstanceContextMode.Single)]
public class YourService : IYourService
{ ... }
NOTE: This will disable concurrency for all the methods exposed by your service. If you do not want that, you will have to move the method in question to a separate service and configure that service as above.
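If disabling concurrency for the whole service is too heavy-handed, another option (not from the answer above, just a sketch of the flag idea from the question done safely) is to gate only the long-running work with an Interlocked flag. Note that the question's version also resets flag to true in the constructor on every call, which defeats the check:

using System.Threading;
using System.Threading.Tasks;

public class X
{
    // 0 = idle, 1 = running; static so it is shared across all service calls.
    private static int _running;

    public string func1()
    {
        // Atomically claim the flag; if it was already set, work is in progress.
        if (Interlocked.CompareExchange(ref _running, 1, 0) == 1)
            return "Already in action";

        Task.Factory.StartNew(() =>
        {
            try
            {
                func2(); // the long-running database processing
            }
            finally
            {
                Interlocked.Exchange(ref _running, 0); // release the flag
            }
        }, TaskCreationOptions.LongRunning);

        return "started";
    }

    private void func2()
    {
        // long-running work
    }
}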
