I have the following class:
/// <summary>
/// Represents an implementation of the <see cref="IAspNetCoreLoggingConfigurationBuilder"/> to configure the ASP.NET Core Logging.
/// </summary>
public class AspNetCoreLoggingConfigurationBuilder : IAspNetCoreLoggingConfigurationBuilder
{
#region Properties
/// <summary>
/// Gets the <see cref="ILogSource"/> that's used to write log entries.
/// </summary>
public ILogSource LogSource { get; private set; }
#endregion
#region IAspNetCoreLoggingConfigurationBuilder Members
/// <summary>
/// Sets the log source that should be used to save log entries.
/// </summary>
/// <param name="logSource">The log source that's used to write log entries.</param>
public void SetLogSource(ILogSource logSource)
{
LogSource = logSource;
}
#endregion
}
I also have a method in which I create an instance of this class:
/// <summary>
/// Adds logging to the <see cref="IApplicationBuilder"/> request execution pipeline.
/// </summary>
/// <param name="app">The <see cref="IApplicationBuilder"/> to configure the application's request pipeline.</param>
/// <param name="configuration">Builder used to configure the ASP.NET Core Logging.</param>
/// <returns>A reference to this instance after the operation has completed.</returns>
public static IApplicationBuilder UseAspNetCoreLogging(this IApplicationBuilder app, Action<IAspNetCoreLoggingConfigurationBuilder> configuration)
{
var aspNetLoggerConfiguration = new AspNetCoreLoggingConfigurationBuilder();
configuration(aspNetLoggerConfiguration);
// Add the registered ILogSource into the registered services.
_services.AddInstance(typeof (ILogSource), aspNetLoggerConfiguration.LogSource);
// The entire configuration for the middleware has been done, so return the middleware.
return app.UseMiddleware<AspNetCoreLoggingMiddleware>();
}
Notice the first line here: I'm creating an instance of the class.
However, when I inspect this variable in a watch while my cursor is on the line configuration(aspNetLoggerConfiguration);, I get the message that the variable does not exist in the current context.
Creating an instance of the class directly in the watch window does work, however.
Does anyone have a clue?
P.S. It's a DNX project which I'm testing in xUnit. The code is running in 'Debug' mode.
That's not a runtime error and not a compile error.
It's a problem of Visual Studio not being able to show the object in a debug window because it is a runtime object (something like that).
Another occurrence of this problem is in a WCF service client: create a new service client and try to show client.InnerChannel in the watch window. It won't work. You can, however, create a temp object (bool, string, etc.) and write the desired value into it to see your value.
#if DEBUG
var tmpLog = aspNetLoggerConfiguration.LogSource;
#endif
You should then see the LogSource in tmpLog when you hover over it.
I'll explain my use case, but I think it can be generalized as it is mainly a code design question.
So I use Serilog for logging in my application. What I want is to check, whenever the application actually logs something, whether the level of the log is above a maximum one, and if so force the application to exit (in my case any Error or Fatal logging triggers this):
public static void CheckForceExiting(LogEventLevel level, LogEventLevel maxLevel = MAX_LEVEL_BEFORE_EXIT)
{
//level is at most the maximum one, so keep the application running.
if (level <= maxLevel)
return;
//level is above maximum one, so Exit the App.
//Disable UI interaction
(Application.Current.MainWindow.DataContext as Main_VM).IsEnabled = false;
string msg = "An exception was caught !", title = "/!\\ERROR/!\\ ";
if (level > LogEventLevel.Error)
{
msg = "Unhandled exception occurred !";
title = "/!\\FATAL ERROR/!\\ ";
}
msg += " Closing SDC now.\n\nYou can find the log in ./Logs folder if needed.";
title += HelperUtils.AssemblyName.Name + ", Ver: " + HelperUtils.AssemblyName.Version.ToString();
MessageBox.Show(msg, title, MessageBoxButton.OK, MessageBoxImage.Error);
OnCloseLogging();
Environment.Exit(1);
}
My first, working, attempt was to code a full wrapper around Serilog's own static Log class that, for each logging method on that class, adds a call to my method above.
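A trimmed-down sketch of that first attempt (not my exact code, just to illustrate the idea; CheckedLog is a made-up name):
// Illustration of the wrapper approach: every Serilog.Log method gets a thin
// pass-through that also calls CheckForceExiting(). Only two methods are shown.
using Serilog;
using Serilog.Events;
public static class CheckedLog
{
public static void Error(string messageTemplate, params object[] propertyValues)
{
Log.Error(messageTemplate, propertyValues);
LoggingUtils.CheckForceExiting(LogEventLevel.Error);
}
public static void Fatal(string messageTemplate, params object[] propertyValues)
{
Log.Fatal(messageTemplate, propertyValues);
LoggingUtils.CheckForceExiting(LogEventLevel.Fatal);
}
// ...and so on for Verbose, Debug, Information and Warning.
}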
But it doesn't satisfy me to more or less copy/paste the Serilog Log class just to add a call to CheckForceExiting() in each of its logging methods.
So, instead, I preferred to write my own custom Serilog sink class that just calls CheckForceExiting() as its "logging" action:
/// <summary>
/// Custom <see cref="ILogEventSink"/> Sink class
/// that perform/call <see cref="LoggingUtils.CheckForceExiting(LogEventLevel, LogEventLevel)"/> check on each logging level
/// in order to force exiting the application if level is too high (default is Error or Fatal).
/// </summary>
class CheckLevelSink : ILogEventSink
{
/// <summary>
/// Default constructor
/// </summary>
public CheckLevelSink()
{
}
/// <summary>
/// Perform custom "logging".
/// Here we are just calling <see cref="LoggingUtils.CheckForceExiting(LogEventLevel, LogEventLevel)"/>
/// to force exiting the application if <paramref name="logEvent"/> level is above maximum allowed level
/// </summary>
/// <param name="logEvent">The logEvent to "log". Here we only check its level to decide whether to force exiting the application.</param>
public void Emit(LogEvent logEvent)
{
LoggingUtils.CheckForceExiting(logEvent.Level);
}
}
And I add this sink to the "chained logging" in the Serilog configuration:
/// <summary>
/// Creating a generic host to allow the creation and use of a Serilog logger
/// </summary>
/// <returns>The generic host created</returns>
public static IHost CreateHostBuilder()
{
return Host.CreateDefaultBuilder()
.UseSerilog((host, loggerConfig) =>
{
loggerConfig.WriteTo.File("Logs/log.txt", rollingInterval: RollingInterval.Day)
.Enrich.FromLogContext()
.MinimumLevel.Information();
loggerConfig.WriteTo.CheckLevel();
#if DEBUG
loggerConfig.WriteTo.Debug()
.MinimumLevel.Debug();
#endif
})
.ConfigureServices(services =>
{
})
.Build();
}
/// <summary>
/// Extension method pattern to allow an easy way to add the custom <see cref="CheckLevelSink"/> Sink in a <see cref="LoggerConfiguration"/>
/// </summary>
/// <param name="sinkConfiguration">The Sink configuration in which to "add" our custom <see cref="CheckLevelSink"/></param>
/// <param name="restrictedToMinimumLevel">The minimum <see cref="LogEventLevel"/> log level from which we allow logging in the <see cref="CheckLevelSink"/> Sink.</param>
/// <returns>The <see cref="LoggerConfiguration"/> configuration of the logger that have "added" the custom <see cref="CheckLevelSink"/> Sink.</returns>
public static LoggerConfiguration CheckLevel(
this LoggerSinkConfiguration sinkConfiguration,
LogEventLevel restrictedToMinimumLevel = LogEventLevel.Error)
{
return sinkConfiguration.Sink(new CheckLevelSink(), restrictedToMinimumLevel);
}
OK, I find it better, but it still seems a little overkill, no?
So, in a more generalized way: whenever you call an external API method and you want to automatically add your custom logic on top of (before and/or after) the API's own logic, what is the best practice?
(Note: I considered using inheritance to extend Serilog's file sink, but it is a sealed class, so I'm not sure inheritance is a generally applicable solution.)
Thanks.
In my project I have many database contexts:
1. MyContext1
2. MyContext2
3. MyContext3
I am currently using the database-first approach (EDMX based).
All these contexts are generated as part of the EDMX creation. I would like to disable lazy loading for all these contexts.
I thought of writing a partial class for this, so for each context there will be a partial class responsible for disabling lazy loading.
My current approach is something like the below:
[DbConfigurationType(typeof(InterceptorConfiguration))]
public partial class MyContext1 : DbContext
{
public static MyContext1 Create()
{
var applicationDbContext = new MyContext1();
applicationDbContext.Configuration.LazyLoadingEnabled = false;
return applicationDbContext;
}
}
Here I have a static method where I manually create an instance of the context, apply the configuration, and return it. Is there any other way to do this without creating a direct instance in the partial class?
Since there is already a default constructor in the auto-generated EDMX class, I cannot write a constructor in the partial class which I have created.
I could disable it in the service layer, but since this is an existing project I don't want to touch it everywhere. So is there any better solution to do the same?
Since this is an existing application with many EDMX files, I cannot edit/change anything in the EDMX, including the T4 templates.
I finally got a solution.
Since I am using the Simple Injector dependency injection package in my solution, I created a provider for getting the instance at run time.
public sealed class DbContextProvider<T> : IDbContextProvider<T>
where T : DbContext
{
/// <summary>
/// The producer
/// </summary>
private readonly InstanceProducer producer;
/// <summary>
/// Initializes a new instance of the <see cref="DbContextProvider{T}"/> class.
/// </summary>
/// <param name="container">The container.</param>
/// <exception cref="InvalidOperationException">You forgot to register {typeof(T).Name}. Please call: " +
/// $"container.Register<{typeof(T).Name}>(Lifestyle.Scope);</exception>
public DbContextProvider(Container container)
{
this.producer = container.GetCurrentRegistrations()
.FirstOrDefault(r => r.ServiceType == typeof(T))
?? throw new InvalidOperationException(
$"You forgot to register {typeof(T).Name}. Please call: " +
$"container.Register<{typeof(T).Name}>(Lifestyle.Scope);");
}
/// <summary>
/// Gets the context.
/// </summary>
/// <value>
/// The context.
/// </value>
public T Context
{
get
{
T dbContext = (T)this.producer.GetInstance();
// Dynamic proxies are used for change tracking and lazy loading.
// If DbContext.Configuration.ProxyCreationEnabled is set to false, the DbContext will not load child objects
// for a parent object unless the Include method is called on the parent object.
dbContext.Configuration.ProxyCreationEnabled = false;
return dbContext;
}
}
}
Then the interface
public interface IDbContextProvider<out T> where T : DbContext
{
/// <summary>
/// Gets the context.
/// </summary>
/// <value>
/// The context.
/// </value>
T Context { get; }
}
I can call this one from the service layer like this:
private readonly IDbContextProvider<MyDbContext> _baseContextProvider;
public MyService(IDbContextProvider<MyDbContext> baseContextProvider)
{
this._baseContextProvider = baseContextProvider;
}
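For completeness, here is roughly how the wiring in the composition root can look (the scoped lifestyle and the type names below are illustrative; adjust them to your setup):
// Rough sketch of the composition root; lifestyles and type names are illustrative.
using SimpleInjector;
using SimpleInjector.Lifestyles;
public static class Bootstrapper
{
public static Container BuildContainer()
{
var container = new Container();
container.Options.DefaultScopedLifestyle = new AsyncScopedLifestyle();
// Each EF context is registered per scope, which is what DbContextProvider<T> expects.
container.Register<MyContext1>(Lifestyle.Scoped);
container.Register<MyContext2>(Lifestyle.Scoped);
container.Register<MyContext3>(Lifestyle.Scoped);
// The open-generic provider hands out contexts with proxy creation disabled.
container.Register(typeof(IDbContextProvider<>), typeof(DbContextProvider<>), Lifestyle.Singleton);
container.Register<MyService>();
container.Verify();
return container;
}
}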
So I have a typical three-tiered application, layered as below:
DAL -> Repository -> Business -> Web.UI/API
I have been reading this article about registering dependencies by centralizing them via modules.
The web layer only has a reference to Business, which only has a reference to the Repository, which only has a reference to the lowest DAL layer. In this topology, since the UI/API layer knows nothing about the Repository and has no reference to it, I can't register the Repository's modules in the UI/API layer. Similarly, I can't register the modules present in the DAL in the Business layer. What I want to do is start the registration process in the topmost layer, which then sets off a cascading effect of registrations in the subsequent layers.
Typically this would look like each layer exposing a RegisterAllModules method that somehow triggers the RegisterAllModules method of the layer below it, as sketched below. Has something like this been done? Or is there another way to do this? At this point I don't know whether I should roll out my own logic as described above, since I don't know if there is a documented way to do something like this. Thoughts on how best to go forward here are what I am looking for.
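A purely illustrative sketch of what I have in mind (Autofac modules here; nothing below exists yet and the class names are made up):
// Illustrative only: one static entry point per layer, each registering its own modules
// and then cascading the call to the layer below it, which it does reference.
using Autofac;
public static class BusinessRegistration
{
public static void RegisterAllModules(ContainerBuilder builder)
{
builder.RegisterAssemblyModules(typeof(BusinessRegistration).Assembly);
RepositoryRegistration.RegisterAllModules(builder); // cascade one layer down
}
}
public static class RepositoryRegistration
{
public static void RegisterAllModules(ContainerBuilder builder)
{
builder.RegisterAssemblyModules(typeof(RepositoryRegistration).Assembly);
DalRegistration.RegisterAllModules(builder); // cascade one layer down
}
}
public static class DalRegistration
{
public static void RegisterAllModules(ContainerBuilder builder)
{
builder.RegisterAssemblyModules(typeof(DalRegistration).Assembly); // lowest layer, chain stops here
}
}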
Thanks.
Mmmm... I don't know if what follows is a proper response, but I'm going to try to give you the tools for a solution that suits your exact requirements.
Have you looked into JSON/XML module configuration? You do not need to know the assemblies through cross-references; you just need to know the names of the assemblies in app.config (or web.config). E.g. you can register one module for repositories in the Repo assembly and one module for business services in Business.dll. This completely removes the need to cross-reference the various assemblies (for module scanning; you will still need references for method calls, but that is expected anyway). See here for details: http://docs.autofac.org/en/latest/configuration/xml.html#configuring-with-microsoft-configuration
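For instance, with the Autofac.Configuration package the composition root can pull the module list from a config file by assembly-qualified name; a minimal sketch, where the file name and module types are placeholders:
// Minimal sketch of loading modules from a JSON config file with Autofac.Configuration.
// autofac.json would contain something like:
// { "modules": [ { "type": "Repo.RepositoryModule, Repo" },
//                { "type": "Business.BusinessModule, Business" } ] }
using Autofac;
using Autofac.Configuration;
using Microsoft.Extensions.Configuration;
public static class CompositionRoot
{
public static IContainer Build()
{
var config = new ConfigurationBuilder()
.AddJsonFile("autofac.json")
.Build();
var builder = new ContainerBuilder();
builder.RegisterModule(new ConfigurationModule(config));
return builder.Build();
}
}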
If you want to enforce that no call is made from (say) the UI to the Repository, you can leverage the "Instance Per Matching Lifetime Scope" feature (see http://docs.autofac.org/en/latest/lifetime/instance-scope.html#instance-per-matching-lifetime-scope). You can use that registration method to enforce a unit-of-work approach. E.g. a repository can only be resolved in a "repository" lifetime scope, and only Business components open scopes tagged "repository".
An alternative approach to tagged scopes is the "Instance per Owned<>" pattern. In this way, each Business service would require an Owned<Repository>.
Something like:
var builder = new ContainerBuilder();
// The generic type arguments were lost in formatting; the names below are illustrative placeholders.
builder.RegisterType<BusinessService>();
builder.RegisterType<Repository>().InstancePerOwned<BusinessService>();
AFAICT, a correct approach would be to register the components through modules, referenced by the JSON/XML config, and each module should target specific lifetime scopes.
When a class calls the underlying layer, it should open a new LifetimeScope("underlying layer"), something like the sketch below.
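A sketch of that call pattern (CustomerService and ICustomerRepository are placeholder names):
// A Business component opens a tagged scope before using the Repository layer,
// so anything registered InstancePerMatchingLifetimeScope("repository") lives only inside it.
using Autofac;
public interface ICustomerRepository
{
void Save();
}
public class CustomerService
{
private readonly ILifetimeScope _scope;
public CustomerService(ILifetimeScope scope)
{
_scope = scope;
}
public void DoWork()
{
using (var repositoryScope = _scope.BeginLifetimeScope("repository"))
{
var repository = repositoryScope.Resolve<ICustomerRepository>();
repository.Save();
}
}
}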
I will elaborate further, if you want advice on implementation strategies.
Best,
Alberto Chiesa
Edit:
I didn't know the meaning of "composition root". Well, thanks for the info!
I favor a SIMPLE configuration file (be it the .config file or a separate .json or .xml file), because I feel that a list of modules to be imported is more simply expressed as a list than as a class. But this is opinion.
What is not an opinion is that you can import modules from assemblies that are not referenced by the "composition root" assembly, in a simple and tested way.
So, I would go with modules for every component registration, but with a textual configuration file for module registration. YMMV.
Now, let me show you an example of the Unit of Work pattern that I'm using in many live projects.
In our architecture we make heavy use of a Service Layer, which holds responsibility for opening connections to the db and disposing them when finished, etc.
It's a simpler design than what you're after (I prefer shallow over deep), but the concept is the same.
If you are "out" of the Service Layer (e.g. in an MVC Controller, or in the UI), you need a ServiceHandle in order to access the Service layer. The ServiceHandle is the only class that knows about Autofac and is responsible for service resolution, invocation and disposal.
The access to the Service Layer is done in this way:
non service classes can require only a ServiceHandle
invocation is done through _serviceHandle.Invoke(Func)
Autofac injects the ready to use handles via constructor injection.
This is done through the use of the BeginLifetimeScope(tag) method, and by registering services (in a module) in this way:
// register every service except for ServiceBase
Builder.RegisterAssemblyTypes(_modelAssemblies)
.Where(t => typeof(IService).IsAssignableFrom(t) && (t != typeof(ServiceBase)))
.InstancePerDependency();
// register generic ServiceHandle
Builder.RegisterGeneric(typeof(ServiceHandle<>))
.AsSelf()
.AsImplementedInterfaces()
.InstancePerDependency();
And by registering every shared resource as InstancePerMatchingLifetimeScope("service").
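For instance (Session here is a placeholder for whatever resource you actually share, e.g. a database session):
// The shared resource is created once per "service" scope and reused by every
// component resolved within that scope. "Session" is a placeholder type.
Builder.RegisterType<Session>()
.AsSelf()
.InstancePerMatchingLifetimeScope("service");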
So, an example invocation would be:
... in the constructor:
public YourUiClass(ServiceHandle<MyServiceType> myserviceHandle)
{
this._myserviceHandle = myserviceHandle;
}
... in order to invoke the service:
var result = _myserviceHandle.Invoke(s => s.myServiceMethod(parameter));
This is the ServiceHandle implementation:
/// <summary>
/// Provides a managed interface to access Model Services
/// </summary>
/// <typeparam name="TServiceType">The Type of the parameter to be managed</typeparam>
public class ServiceHandle<TServiceType> : IServiceHandle<TServiceType> where TServiceType : IService
{
static private readonly ILog Log = LogManager.GetLogger(typeof(ServiceHandle<TServiceType>));
private readonly ILifetimeScope _scope;
/// <summary>
/// True if there were exceptions caught during the last Invoke execution.
/// </summary>
public bool ErrorCaught { get; private set; }
/// <summary>
/// List of the errors caught during execution
/// </summary>
public List<String> ErrorsCaught { get; private set; }
/// <summary>
/// Contains the exception that was thrown during the
/// last Invoke execution.
/// </summary>
public Exception ExceptionCaught { get; private set; }
/// <summary>
/// Default constructor
/// </summary>
/// <param name="scope">The current Autofac scope</param>
public ServiceHandle(ILifetimeScope scope)
{
if (scope == null)
throw new ArgumentNullException("scope");
_scope = scope;
ErrorsCaught = new List<String>();
}
/// <summary>
/// Invoke a method to be performed using a
/// service instance provided by the ServiceHandle
/// </summary>
/// <param name="command">
/// Void returning action to be performed
/// </param>
/// <remarks>
/// The implementation simply wraps the Action into
/// a Func returning an Int32; the returned value
/// will be discarded.
/// </remarks>
public void Invoke(Action<TServiceType> command)
{
Invoke(s =>
{
command(s);
return 0;
});
}
/// <summary>
/// Invoke a method to be performed using a
/// service instance provided by the ServiceHandle
/// </summary>
/// <typeparam name="T">Type of the data to be returned</typeparam>
/// <param name="command">Action to be performed. Returns T.</param>
/// <returns>A generically typed T, returned by the provided function.</returns>
public T Invoke<T>(Func<TServiceType, T> command)
{
ErrorCaught = false;
ErrorsCaught = new List<string>();
ExceptionCaught = null;
T retVal;
try
{
using (var serviceScope = GetServiceScope())
using (var service = serviceScope.Resolve<TServiceType>())
{
try
{
retVal = command(service);
service.CommitSessionScope();
}
catch (RollbackException rollbackEx)
{
retVal = default(T);
if (System.Web.HttpContext.Current != null)
ErrorSignal.FromCurrentContext().Raise(rollbackEx);
Log.InfoFormat(rollbackEx.Message);
ErrorCaught = true;
ErrorsCaught.AddRange(rollbackEx.ErrorMessages);
ExceptionCaught = rollbackEx;
DoRollback(service, rollbackEx.ErrorMessages, rollbackEx);
}
catch (Exception genericEx)
{
if (service != null)
{
DoRollback(service, new List<String>() { genericEx.Message }, genericEx);
}
throw;
}
}
}
catch (Exception ex)
{
if (System.Web.HttpContext.Current != null)
ErrorSignal.FromCurrentContext().Raise(ex);
var msg = (Log.IsDebugEnabled) ?
String.Format("There was an error executing service invocation:\r\n{0}\r\nAt: {1}", ex.Message, ex.StackTrace) :
String.Format("There was an error executing service invocation:\r\n{0}", ex.Message);
ErrorCaught = true;
ErrorsCaught.Add(ex.Message);
ExceptionCaught = ex;
Log.ErrorFormat(msg);
retVal = default(T);
}
return retVal;
}
/// <summary>
/// Performs a rollback on the provided service instance
/// and records exception data for error retrieval.
/// </summary>
/// <param name="service">The Service instance whose session will be rolled back.</param>
/// <param name="errorMessages">A List of error messages.</param>
/// <param name="ex"></param>
private void DoRollback(TServiceType service, List<string> errorMessages, Exception ex)
{
service.RollbackSessionScope();
}
/// <summary>
/// Creates a Service Scope overriding Session resolution:
/// all the service instances share the same Session object.
/// </summary>
/// <returns></returns>
private ILifetimeScope GetServiceScope()
{
return _scope.BeginLifetimeScope("service");
}
}
Hope it helps!
I have a problem with loading MEF parts under IIS. The load method looks like this:
private void LoadPlugins(string path)
{
var aggregateCatalog = new AggregateCatalog();
var directoryCatalogExe = new DirectoryCatalog(path, "*.exe");
aggregateCatalog.Catalogs.Add(directoryCatalogExe);
var container = new CompositionContainer(aggregateCatalog);
container.ComposeParts(this);
}
The method works perfectly in a console application or in Cassini. Under IIS the parts count is 0 - no error, no exception in the event log, nothing...
I have absolutely no idea what is going on. The path is 100% correct.
I would agree with @stakx's assessment. I use a different approach to container creation to make it more environment-agnostic. I create an interface:
/// <summary>
/// Defines the required contract for implementing a composition container factory.
/// </summary>
public interface ICompositionContainerFactory
{
#region Methods
/// <summary>
/// Creates an instance of <see cref="CompositionContainer"/>.
/// </summary>
/// <returns>An instance of <see cref="CompositionContainer"/>.</returns>
CompositionContainer CreateCompositionContainer();
#endregion
}
With a default implementation (which works in console apps, service hosts):
public class DefaultCompositionContainerFactory : ICompositionContainerFactory
{
#region Methods
/// <summary>
/// Creates an instance of <see cref="CompositionContainer"/>.
/// </summary>
/// <returns>
/// An instance of <see cref="CompositionContainer"/>.
/// </returns>
public CompositionContainer CreateCompositionContainer()
{
var domain = AppDomain.CurrentDomain;
string path = domain.BaseDirectory;
// Use the base directory from where the application is running.
var catalog = new DirectoryCatalog(path);
// Create the container.
var container = new CompositionContainer(catalog);
return container;
}
#endregion
}
And a web specific implementation:
public class WebCompositionContainerFactory : ICompositionContainerFactory
{
#region Methods
/// <summary>
/// Creates an instance of <see cref="CompositionContainer"/>.
/// </summary>
/// <returns>
/// An instance of <see cref="CompositionContainer"/>.
/// </returns>
public CompositionContainer CreateCompositionContainer()
{
string path = HttpRuntime.BinDirectory;
// Use the base directory from where the application is running.
var catalog = new DirectoryCatalog(path);
// Create the container.
var container = new CompositionContainer(catalog);
return container;
}
#endregion
}
Which I wire up through configuration.
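The wiring can be as simple as reading the factory type name from appSettings and activating it; a sketch (the appSettings key and the type name in the comment are only examples):
// Sketch: choose the ICompositionContainerFactory implementation from configuration.
using System;
using System.Configuration;
public static class CompositionContainerFactoryResolver
{
public static ICompositionContainerFactory Resolve()
{
// e.g. <add key="compositionContainerFactory"
//          value="MyApp.Web.WebCompositionContainerFactory, MyApp.Web" />
string typeName = ConfigurationManager.AppSettings["compositionContainerFactory"];
var factoryType = Type.GetType(typeName, throwOnError: true);
return (ICompositionContainerFactory)Activator.CreateInstance(factoryType);
}
}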
The other thing to consider is that you are passing *.exe as your catalog filter; are you actually using executable assemblies in your web application?
One possible cause of this might be a wrong value for path.
For example, you should not assume that the current directory will be your code's "bin" directory, so passing "." might be a bad idea.
If that is what you're doing, try specifying a path based on Assembly.GetExecutingAssembly().Location:
// using System.IO;
// using System.Reflection;
string binPath = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
LoadPlugins(binPath);
The difference between running in a console app or Cassini and running under IIS is the security context.
When running the console app or Cassini, the security context is that of the logged-on user, which is you.
When running under IIS, the security context is the identity of the application pool, which by default is NETWORK SERVICE.
It is probable that your MEF parts are in a directory that NETWORK SERVICE does not have access to.
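A quick way to tell a path problem from a permissions problem is to trace what the catalog actually manages to see; a sketch using the same variables as the question's LoadPlugins method:
// Log what the DirectoryCatalog sees under IIS. An empty LoadedFiles list with a
// correct FullPath usually points at a permissions (or filter) problem.
var directoryCatalogExe = new DirectoryCatalog(path, "*.exe");
System.Diagnostics.Trace.WriteLine("Catalog path: " + directoryCatalogExe.FullPath);
System.Diagnostics.Trace.WriteLine("Files matched: " + directoryCatalogExe.LoadedFiles.Count);
System.Diagnostics.Trace.WriteLine("Parts found: " + directoryCatalogExe.Parts.Count()); // needs System.Linq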
My question is very similar to this issue: AntiForgery Exception: A required anti-forgery token was not supplied or was invalid
but I am using MVC3 with Razor.
The controller has
[ValidateAntiForgeryToken]
specified.
In the HTML, an <input name="__RequestVerificationToken"... is rendered using @Html.AntiForgeryToken().
I also observed that if I remove the authorization cookie in the browser, and the controller method does not have [Authorize], I don't have any problems with the anti-forgery token. Why?
Check your cookies and make sure that you are seeing the __RequestVerificationToken cookie being set correctly. I have run into this before where the cookies for the site were all set to be SSL-only and I was trying to run the site over regular HTTP locally, so the cookie was never being accepted because it was being transmitted over an insecure channel.
For me, this meant changing a line in the web.config under system.web/httpCookies to requireSSL="false"... but if this isn't what you are seeing, I would still look at things that might be messing with your cookies in the system (e.g. session resets, manually clearing the cookies somewhere, etc.). If you have the validation attribute on the controller methods correctly, and are still getting this, it is likely due to something modifying or removing that cookie!
Edit: Also, if you have this on the controller instead of only on the POST methods, that would be why... This is only applicable to form POSTs to the server.
Here's a simple custom version that you CAN apply at the controller level and that will automatically validate ALL POST action methods:
/// <summary>
/// Custom Implementation of the Validate Anti Forgery Token Attribute.
/// </summary>
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)]
public class CustomValidateAntiForgeryTokenAttribute : FilterAttribute, IAuthorizationFilter
{
/// <summary>
/// The ValidateAntiForgeryTokenAttribute.
/// </summary>
private readonly ValidateAntiForgeryTokenAttribute _validator;
/// <summary>
/// The AcceptVerbsAttribute.
/// </summary>
private readonly AcceptVerbsAttribute _verbs;
/// <summary>
/// Initializes a new instance of the <see cref="CustomValidateAntiForgeryTokenAttribute"/> class.
/// </summary>
/// <param name="verbs">The verbs.</param>
public CustomValidateAntiForgeryTokenAttribute(HttpVerbs verbs) : this(verbs, null)
{
}
/// <summary>
/// Initializes a new instance of the <see cref="CustomValidateAntiForgeryTokenAttribute"/> class.
/// </summary>
/// <param name="verbs">The verbs.</param>
/// <param name="salt">The salt.</param>
public CustomValidateAntiForgeryTokenAttribute(HttpVerbs verbs, string salt)
{
_verbs = new AcceptVerbsAttribute(verbs);
_validator = new ValidateAntiForgeryTokenAttribute
{
Salt = salt
};
}
/// <summary>
/// Called when authorization is required.
/// </summary>
/// <param name="filterContext">The filter context.</param>
public void OnAuthorization(AuthorizationContext filterContext)
{
var httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride();
var found = false;
foreach (var verb in _verbs.Verbs)
{
if (verb.Equals(httpMethodOverride, StringComparison.OrdinalIgnoreCase))
{
found = true;
}
}
if (found && !filterContext.RequestContext.RouteData.Values["action"].ToString().StartsWith("Json"))
{
_validator.OnAuthorization(filterContext);
}
}
}
Then you can just add the following to all of your controllers, or to your base controller if you override and inherit from one:
[CustomValidateAntiForgeryToken(HttpVerbs.Post)]
The anti-forgery token is tied to the user identity. If you change the currently logged-in user's identity between generating and validating the token, the token will not validate successfully. That also explains why everything works for you in anonymous mode.