Testing WCF Service that Uses Impersonation - c#

I am converting some existing integration tests of a legacy WCF service to be automated via NUnit. The current tests call a deployed version of the WCF service; what I would like to do is have the tests hit the service class (MyService.svc.cs) directly/internally.
The problem I am having is that the service uses impersonation:
//this is a method in MyService.svc.cs
public SomeObject GetSomeObject()
{
    using (GetWindowsIdentity().Impersonate())
    {
        //do some stuff
    }
    return null;
}

private WindowsIdentity GetWindowsIdentity()
{
    var callerWinIdentity = ServiceSecurityContext.Current.WindowsIdentity;

    var cf = new ChannelFactory<IMyService>();
    cf.Credentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;

    return callerWinIdentity;
}
The problem is that ServiceSecurityContext.Current is always null when I call it from a unit test.
The impersonation matters to downstream operations, so I can't simply bypass this code and call what is inside the using block. It might be possible to wrap my test code in WindowsIdentity.GetCurrent().Impersonate() and then call what is within the using block (bypassing the MyService.svc.cs code), but this would be less than ideal, as it would not be a complete end-to-end test.
I do not need to fake different users to impersonate; I just need the test runner's user context to be available in ServiceSecurityContext.Current.
Is this possible?

I'd still be interested in a better and less invasive way of doing this, but this seems to work for now.
I created a second constructor for MyService to allow the use of WindowsIdentity.GetCurrent().
private readonly bool _useLocalIdentity;

public MyService(bool useLocalIdentity) : this()
{
    _useLocalIdentity = useLocalIdentity;
}

private WindowsIdentity GetWindowsIdentity()
{
    if (_useLocalIdentity)
    {
        return WindowsIdentity.GetCurrent();
    }

    var callerWinIdentity = ServiceSecurityContext.Current.WindowsIdentity;
    if (callerWinIdentity == null)
    {
        throw new InvalidOperationException("Caller not authenticated");
    }

    var cf = new ChannelFactory<IMyService>();
    cf.Credentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;

    return callerWinIdentity;
}
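With that in place, an NUnit test can construct the service directly and run under the test runner's own identity. A rough sketch; the test name and the assertion are illustrative only:
[Test]
public void GetSomeObject_RunsUnderTheTestRunnersIdentity()
{
    // useLocalIdentity: true makes the service impersonate WindowsIdentity.GetCurrent()
    // instead of reading ServiceSecurityContext.Current, which is null outside a WCF host.
    var service = new MyService(useLocalIdentity: true);

    var result = service.GetSomeObject();

    // Assert whatever the impersonated downstream work is expected to produce;
    // the sample method above returns null, so this assertion is just a placeholder.
    Assert.That(result, Is.Null);
}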

Integration tests - what would you test for in this controller?

I'm applying NUnit integration tests to our controller endpoints in a .NET Web API 2 project whose models and controllers are generated via Entity Framework Code First from the database.
I'm having trouble deciding which parts of the controller I should test. In the end, we'd just like to be able to automate "can a user with role 'x' get this data?"
Looking in the GET portion of this controller, what parts would you test and what's your reasoning?
namespace api.Controllers.myNamespace
{
    public class myController : ApiController
    {
        private string strUserName;
        private string strError = "";
        private string strApiName = "myTable";
        private myDatabase db = new myDatabase();

        // ----------------------------------------------------------------------
        // GET: api/path
        public IQueryable<myTable> GetmyTable()
        {
            try
            {
                this.strUserName = this.getUserName();
                if
                (
                    // ----- authorize -----
                    db.view_jnc_role_api_permission.Count
                    (
                        view =>
                        (
                            view.permission == "get"
                            && view.apiName == this.strApiName
                            && view.userName == this.strUserName
                        )
                    ) == 1
                    // ----- /authorize -----
                )
                {
                    // ----- get -----
                    IQueryable<myTable> data =
                        from tbl in db.myTable
                        where tbl.deleted == null
                        select tbl;
                    // ----- /get -----
                    return data;
                }
                else
                {
                    strError = "Unauthorized.";
                    throw new HttpResponseException(HttpStatusCode.Forbidden);
                }
            }
            catch (Exception ex)
            {
                if (strError.Length == 0)
                {
                    if (this.showException())
                    {
                        strError = ex.ToString();
                    }
                }
                throw new HttpResponseException(ControllerContext.Request.CreateErrorResponse(HttpStatusCode.Forbidden, strError));
            }
        }
    }
}
For reference, here's what I have so far. Some of these private fields I'm defining shouldn't be here - currently trying to get access to private methods from my test project via AssemblyInfo.cs to fix this:
namespace api.myNamespace
{
    [TestFixture]
    public class myController : ApiController
    {
        private string strUserName;
        private string strError = "";
        private string strApiName = "myTable";
        private myDb db = new myDb();

        // Using TransactionScope to (hopefully) prevent the integration test's changes to the database from persisting
        protected TransactionScope TransactionScope;

        // Instantiate _controller field
        private myController _controller;

        [SetUp]
        public void SetUp()
        {
            TransactionScope = new TransactionScope(TransactionScopeOption.RequiresNew);
            // One test may leave state that could impact subsequent tests,
            // so we reinstantiate _controller at the start of each test:
            _controller = new myController();
        }

        [TearDown]
        public void TearDown()
        {
            TransactionScope.Dispose();
        }

        //------ TESTS -------//
        // CanSetAndGetUserName
        // AuthorizedUserCanGetData
        // UnauthorizedUserCannotGetData
        // AuthorizedUserCanPutData
        // UnauthorizedUserCannotPutData
        // AuthorizedUserCanPostData
        // UnauthorizedUserCannotPostData
        // AuthorizedUserCanDeleteData
        // UnauthorizedUserCannotDeleteData

        [Test]
        public void CanGetAndSetUsername()
        {
            // ARRANGE
            var user = _controller.getUserName();

            // ACT

            // ASSERT
            Assert.That(user, Is.EqualTo("my-internal-username"));
        }

        [Test]
        public void UnauthorizedUserCannotGetData()
        {
            var user = "Mr Unauthorized";
            // Unfinished - integration testing here is abstract, subjective, and time-consuming.
            Assert.That(user, Is.EqualTo());
        }
    }
}
Integration testing means several things:
1) you set up your test data in the database, via a script for example;
2) you call the endpoint under test, knowing exactly what data you should call it with and what you should get back, all based on the test data you set up in step 1;
3) you compare the expected data with what you actually got back.
This is an integration test, as it touches everything: both the API and the database.
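A minimal sketch of such a test, assuming NUnit 3, the Microsoft.AspNet.WebApi.Client package (for ReadAsAsync), Windows authentication for the caller, and placeholder values for the base URL, route and seeded row count:
[Test]
public async Task GetmyTable_ReturnsOnlySeededNonDeletedRows()
{
    // 1) test data has already been inserted by a seed script:
    //    known rows in myTable and a "get" permission row for the test account.

    // 2) call the endpoint exactly as a real client would.
    var handler = new HttpClientHandler { UseDefaultCredentials = true };
    using (var http = new HttpClient(handler) { BaseAddress = new Uri("http://localhost:59349/") })
    {
        var response = await http.GetAsync("api/path");

        // 3) compare what came back with what the seed script promised.
        Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.OK));
        var rows = await response.Content.ReadAsAsync<List<myTable>>();
        Assert.That(rows.Count, Is.EqualTo(3)); // three non-deleted rows were seeded
    }
}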
Now, you said you are having trouble deciding which parts of the controller to test. This suggests you are confusing integration tests with unit tests.
Integration tests we already covered.
Unit tests cover parts of functionality. You do not test controllers, forget about that.
What you really need to consider doing is this:
First, separate your code from the controller. Keep the controller very basic: it receives a call, validates the request model and passes it on to a class library where the actual functionality happens. This lets you stop worrying about "testing the controller" and focus on your functionality instead. Unit tests will help here, and your test cases will become something like this:
I have a user, set up in a certain way.
I have some data, set up in a certain way.
When I call method X, then I should get this response.
With such a setup in place, you can set up your test data any way you like and check every single test case; a small sketch of this separation follows below.
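A minimal sketch of that separation. The IMyData and MyTableReader names are invented for illustration, and Moq (already used elsewhere on this page) is assumed for the fake:
// Invented abstraction so the logic can be exercised without a real database.
public interface IMyData
{
    bool UserHasPermission(string userName, string apiName, string permission);
    List<myTable> GetNonDeletedRows();
}

// The functionality moves out of the controller into a plain class library type.
public class MyTableReader
{
    private readonly IMyData _data;

    public MyTableReader(IMyData data)
    {
        _data = data;
    }

    public List<myTable> GetRowsFor(string userName)
    {
        if (!_data.UserHasPermission(userName, "myTable", "get"))
            throw new UnauthorizedAccessException();

        return _data.GetNonDeletedRows();
    }
}

// "I have a user set up in a certain way, I have some data set up in a certain way,
//  when I call method X, then I should get this response."
[Test]
public void UnauthorizedUserCannotGetData()
{
    var data = new Mock<IMyData>();
    data.Setup(d => d.UserHasPermission("Mr Unauthorized", "myTable", "get")).Returns(false);
    var reader = new MyTableReader(data.Object);

    Assert.Throws<UnauthorizedAccessException>(() => reader.GetRowsFor("Mr Unauthorized"));
}
The controller then only translates the result (or the exception) into an HTTP response, which leaves very little in it worth testing.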
The only reason you wonder how to test your controller is that you dumped all your code into it, which of course makes everything hard. Think SOLID, think SoC (separation of concerns).
One piece of advice: never return IQueryable from an endpoint. That's not data, it's simply a query that hasn't run yet. Return a List, an IEnumerable, a single object, whatever you need, just make sure you execute the query first, for example by calling ToList() on your IQueryable expression.
So, the steps are like this:
Set up your IQueryable first.
Execute it by calling ToList(), First(), FirstOrDefault(), whatever is appropriate, and return the result of that.
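Applied to the GET action from the question, that amounts to something like the following sketch (the return type changes from IQueryable<myTable> to List<myTable>):
public List<myTable> GetmyTable()
{
    // ... authorization exactly as before ...

    List<myTable> data = (from tbl in db.myTable
                          where tbl.deleted == null
                          select tbl).ToList(); // the query runs here, inside the API
    return data;
}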

Switch between production and test Webservice

I have two versions of the same web service definition. Each version has its own database, URL, etc.
MyLib.FooWebServicePROD.FooWebService _serviceProd;
MyLib.FooWebServiceTEST.FooWebService _serviceTest;
For now, to switch from one to the other, I use the Rename option in Visual Studio.
I would like to wrap all my instances and definitions in a layer of abstraction so the program does not have to be edited every time.
So I made my own singleton (public sealed class FooBarWrap), but it has a huge amount of duplication, like:
public bool Close()
{
    if (_serviceProd != null)
    {
        _serviceProd.logout(guid);
        log4N.Info("Closing PROD");
    }
    if (_serviceTest != null)
    {
        _serviceTest.logout(guid);
        log4N.Info("Closing TEST");
    }
    return true;
}

public bool Login()
{
    try
    {
        log4N.Info("Connection to FooBar webservice...");
        if (isProd)
        {
            _serviceProd = new MyLib.FooWebServicePROD.FooWebService();
            _serviceProd.Timeout = System.Threading.Timeout.Infinite;
            _serviceProd.Logon(guid);
        }
        else
        {
            _serviceTest = new MyLib.FooWebServiceTEST.FooWebService();
            _serviceTest.Timeout = System.Threading.Timeout.Infinite;
            _serviceTest.Logon(guid);
        }
        log4N.Info("done");
        return true;
    }
    catch (Exception ex)
    {
        log4N.Info("failed !");
        log4N.Error("Echec connexion au webservice FooBar", ex); // "Failed to connect to the FooBar web service"
        return false;
    }
}
Is there a simpler way to achieve this, without the client having a reference to one or the other web service, and without the heavy code duplication?
if (FooBarWrap.Instance.Login())
{
    //DoSomething
    var ClientResult = FooBarWrap.Instance.SomeRequest();
}
Is there a simpler way to achieve this? Without the client having a reference to one or the other web service, and without the heavy code duplication?
There is.
You could use conditional dependency injection: depending on the environment, or on any other condition such as host name, port number or URL path, the container provides one or the other implementation of the same service interface.
kernel.Bind<ISomeService>().To<SomeService1>();
kernel.Bind<ISomeService>().To<SomeService2>().When(x => HttpContext.Current[host|port|url path] == "some value");
Ninject calls that kind of injection contextual binding
https://github.com/ninject/ninject/wiki/Contextual-Binding
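Applied to the question, a rough sketch of what that could look like. The IFooWebService interface, the adapter class names, the Guid handling and the "Environment" app setting are all invented for illustration; only the Logon/logout/Timeout calls come from the original wrapper:
// Common interface the rest of the program codes against.
public interface IFooWebService
{
    bool Login();
    bool Close();
    // ...plus SomeRequest() and whatever other operations you need.
}

// One thin adapter per generated proxy; the duplication is confined to these two classes.
public class ProdFooWebService : IFooWebService
{
    private readonly MyLib.FooWebServicePROD.FooWebService _svc = new MyLib.FooWebServicePROD.FooWebService();
    private readonly Guid _guid = Guid.NewGuid(); // or however the session guid is obtained

    public bool Login()
    {
        _svc.Timeout = System.Threading.Timeout.Infinite;
        _svc.Logon(_guid);
        return true;
    }

    public bool Close()
    {
        _svc.logout(_guid);
        return true;
    }
}

public class TestFooWebService : IFooWebService
{
    private readonly MyLib.FooWebServiceTEST.FooWebService _svc = new MyLib.FooWebServiceTEST.FooWebService();
    private readonly Guid _guid = Guid.NewGuid();

    public bool Login() { _svc.Timeout = System.Threading.Timeout.Infinite; _svc.Logon(_guid); return true; }
    public bool Close() { _svc.logout(_guid); return true; }
}

// Composition root (e.g. in Main): pick the binding once, based on configuration.
var kernel = new StandardKernel();
kernel.Bind<IFooWebService>().To<ProdFooWebService>()
      .When(_ => ConfigurationManager.AppSettings["Environment"] == "PROD");
kernel.Bind<IFooWebService>().To<TestFooWebService>()
      .When(_ => ConfigurationManager.AppSettings["Environment"] != "PROD");

var service = kernel.Get<IFooWebService>();
if (service.Login())
{
    //DoSomething, then service.Close() when finished.
}
The rest of the program only ever sees IFooWebService, so switching between PROD and TEST becomes a configuration change instead of a code edit, and nothing outside the two adapters references the generated proxies.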

Castle Windsor WCF Facility is not processing one way operations

I currently have a simple use case.
1) A client app that connects to a WCF Service using Castle's AsWcfClient option.
2) WCF Service "A" that is hosted using Castle and is injecting a single dependency. This dependency is a client proxy to another WCF Service (Call it Service "B").
3) Service "B" does some work.
To visualize: Client -> Service "A" with Castle injected proxy to -> Service "B"
Simple right? Works without issue IF, and that's a big if, the Service "B" host is up and running.
The behavior I have seen and can reproduce on demand is that if Service "B" is down, the call chain completes without any hint that there is any issue. To say it another way, there is no resolution exception thrown by Castle nor any WCF exception. I have isolated this to how IsOneWay=true operations are handled.
This is a major issue because you think that everything has executed correctly but in reality none of your code has been executed!
Is this expected behavior? Is there a way I can turn on some option in Castle so that it will throw an exception when a WCF client proxy is the resolved dependency? Some other option?
One more note: the only clue you have that this issue is occurring is when/if you do a Container.Release() on the client proxy, as it throws an exception. That can't be depended upon, though, for various reasons not worth getting into here.
Thanks!
Additionally, below is the code that recreates this issue. To run it:
1) Create a new Unit Test project in Visual Studio
2) Add the Castle Windsor WCF Integration Facility via NuGet
3) Paste the code from below into a .cs file, everything is in one to make it easy.
4) Run the two unit tests. SomeOperation_With3Containers_NoException() works, as the dependency service (Service "B" from above) is running; SomeOperation_With2Containers_NoException() fails at the .Release call.
5) Set break points and you can see that no code is hit in the implementations.
UPDATE: The primary way this needs to be handled is with an IErrorHandler implementation (as mentioned by Roman in the comments below). Details and an example can be found here: http://msdn.microsoft.com/en-us/library/system.servicemodel.dispatcher.ierrorhandler(v=vs.110).aspx
Use such an implementation to log any exception from the one-way operation and use that data to take the appropriate action.
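A minimal sketch of such an error handler and the IServiceBehavior that attaches it; the class names are invented, and the Trace call stands in for whatever logger you actually use:
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

// Logs every unhandled exception on the service, including ones thrown while
// processing one-way operations, which would otherwise be swallowed silently.
public class LoggingErrorHandler : IErrorHandler
{
    public bool HandleError(Exception error)
    {
        System.Diagnostics.Trace.TraceError("Service operation failed: {0}", error);
        return true; // true = treated as handled, the session is not aborted
    }

    public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
    {
        // Nothing to send back for one-way operations.
    }
}

// Service behavior that installs the handler on every channel dispatcher.
public class LoggingErrorHandlerBehavior : IServiceBehavior
{
    public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase,
        System.Collections.ObjectModel.Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters) { }

    public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        foreach (ChannelDispatcher dispatcher in serviceHostBase.ChannelDispatchers)
        {
            dispatcher.ErrorHandlers.Add(new LoggingErrorHandler());
        }
    }

    public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase) { }
}
Since the installers below already register a ServiceDebugBehavior through Component.For<IServiceBehavior>().Instance(...), the same registration style can be used to add this behavior to the Castle-hosted services.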
using Castle.Facilities.WcfIntegration;
using Castle.MicroKernel.Registration;
using Castle.Windsor;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System;
using System.ServiceModel;
using System.ServiceModel.Description;
namespace UnitTestProject1
{
    [ServiceContract]
    public interface IServiceContractA
    {
        [OperationContract(IsOneWay = true)]
        void SomeOperation();
    }

    [ServiceContract]
    public interface IServiceDependancy
    {
        [OperationContract]
        void SomeOperation();
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
    public class ServiceContractAImplementation : IServiceContractA
    {
        private IServiceDependancy ServiceProxy;

        public ServiceContractAImplementation() { }

        public ServiceContractAImplementation(IServiceDependancy dep)
        {
            ServiceProxy = dep;
        }

        public void SomeOperation()
        {
            ServiceProxy.SomeOperation();
        }
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
    public class ServiceDependancyImplementation : IServiceDependancy
    {
        public void SomeOperation()
        {
            //do nothing, just want to see if we can create an instance and hit the operation.
            //if we need to do something, do something you can see like: System.IO.File.Create(@"d:\temp\" + Guid.NewGuid().ToString());
        }
    }
    public class ServiceCastleInstaller : IWindsorInstaller
    {
        public void Install(Castle.Windsor.IWindsorContainer container, Castle.MicroKernel.SubSystems.Configuration.IConfigurationStore store)
        {
            container.AddFacility<WcfFacility>(f => f.CloseTimeout = TimeSpan.Zero);

            var returnFaults = new ServiceDebugBehavior { IncludeExceptionDetailInFaults = true, HttpHelpPageEnabled = true };
            container.Register(Component.For<IServiceBehavior>().Instance(returnFaults));

            //local in-proc service hosting
            var namedPipeBinding = new NetNamedPipeBinding();

            //it works using Named Pipes
            var serviceModelPipes = new DefaultServiceModel().AddEndpoints(
                WcfEndpoint.BoundTo(namedPipeBinding).At("net.pipe://localhost/IServiceContractA")
            ).Discoverable();

            container.Register(Component.For<IServiceContractA>()
                .ImplementedBy<ServiceContractAImplementation>()
                .LifeStyle.PerWcfOperation()
                .AsWcfService(serviceModelPipes)
            );

            //our service (IServiceContractA) has a dependancy on another service so needs a client to access it.
            container.Register(Castle.MicroKernel.Registration.Component.For<IServiceDependancy>()
                .AsWcfClient(WcfEndpoint.BoundTo(namedPipeBinding)
                    .At(@"net.pipe://localhost/IServiceDependancy")).LifeStyle.Transient);
        }
    }
    public class ServiceDependancyCastleInstaller : IWindsorInstaller
    {
        public void Install(Castle.Windsor.IWindsorContainer container, Castle.MicroKernel.SubSystems.Configuration.IConfigurationStore store)
        {
            container.AddFacility<WcfFacility>(f => f.CloseTimeout = TimeSpan.Zero);

            var returnFaults = new ServiceDebugBehavior { IncludeExceptionDetailInFaults = true, HttpHelpPageEnabled = true };
            container.Register(Component.For<IServiceBehavior>().Instance(returnFaults));

            //local in-proc service hosting
            var namedPipeBinding = new NetNamedPipeBinding();

            var serviceModel = new DefaultServiceModel().AddEndpoints(
                WcfEndpoint.BoundTo(namedPipeBinding).At("net.pipe://localhost/IServiceDependancy")
            ).Discoverable();

            container.Register(Component.For<IServiceDependancy>()
                .ImplementedBy<ServiceDependancyImplementation>()
                .LifeStyle.PerWcfOperation()
                .AsWcfService(serviceModel)
            );
        }
    }
    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void SomeOperation_With3Containers_NoException()
        {
            //setup the container that is going to host the service dependancy
            using (var dependancyContainer = new WindsorContainer().Install(new ServiceDependancyCastleInstaller()))
            {
                //container that hosts the service that the client will call.
                using (var serviceContainer = new WindsorContainer().Install(new ServiceCastleInstaller()))
                {
                    //client container, nice and simple so doing it in the test here.
                    using (var clientContainer = new WindsorContainer())
                    {
                        clientContainer.AddFacility<WcfFacility>();
                        var endpoint = WcfEndpoint.BoundTo(new NetNamedPipeBinding())
                            .At("net.pipe://localhost/IServiceContractA");
                        clientContainer.Register(Castle.MicroKernel.Registration.Component.For<IServiceContractA>()
                            .AsWcfClient(endpoint).LifeStyle.Transient);

                        var proxy = clientContainer.Resolve<IServiceContractA>();
                        proxy.SomeOperation();
                        clientContainer.Release(proxy);
                    }
                }
            }
        }

        [TestMethod]
        public void SomeOperation_With2Containers_NoException()
        {
            //this one fails.
            //this test omits the dependancy that IServiceContractA has.
            //Note that all seems to work; the only hint you have that it doesn't
            //is that the .Release call throws an exception.

            //container that hosts the service that the client will call.
            using (var serviceContainer = new WindsorContainer().Install(new ServiceCastleInstaller()))
            {
                //client container, nice and simple so doing it in the test here.
                using (var clientContainer = new WindsorContainer())
                {
                    clientContainer.AddFacility<WcfFacility>();
                    var endpoint = WcfEndpoint.BoundTo(new NetNamedPipeBinding())
                        .At("net.pipe://localhost/IServiceContractA");
                    clientContainer.Register(Castle.MicroKernel.Registration.Component.For<IServiceContractA>()
                        .AsWcfClient(endpoint).LifeStyle.Transient);

                    var proxy = clientContainer.Resolve<IServiceContractA>();

                    //this call seems like it works but any break points
                    //in the implementation don't get hit.
                    proxy.SomeOperation();

                    //this throws an exception
                    clientContainer.Release(proxy);
                }
            }
        }
    }
}
One-way operations exist for "fire and forget" scenarios. You don't care about the result, whether it succeeded or not, and you don't have to wait for the server to respond (only for the initial TCP handshake if it's an HTTP binding). With one-way operations the client only gets confidence that the server received the message successfully on the wire; the server makes no guarantee that it will succeed in processing the message. This is true for HTTP. With other transports, like Microsoft MSMQ or IBM MQ, the server doesn't even need to be online at the same time as the client.
In your scenario, the client does not receive an exception because service A is up and running. If service A were down, you would have seen an error (again, assuming HTTP, or in your case net.pipe). The condition of service B does not matter, because service B is an implementation detail of service A, and your client doesn't care about service A's return values. If you were to debug service A (by attaching to it) while service B is down, you would see first-chance, and maybe even unhandled, exceptions (depending on the implementation of service A).
Castle should not have thrown an exception anyway, because it successfully resolved a proxy for service B in service A. The fact that service B is down is no concern of Castle, or of any other DI container for that matter.

Client compatibility check management

I'm working on a client-server application (.NET 4, WCF) that must support backwards compatibility. In other words, old clients should be compatible with new servers and vice versa. As a result, our client code is littered with statements such as:
if (_serverVersion > new Version(2, 1, 3))
{
    //show/hide something or call method Foo()...
}
else
{
    //show/hide something or call method Foo2()...
}
Obviously, this becomes somewhat of a maintenance nightmare. Luckily, we're allowed to break backwards compatibility with every minor release. When we get to a point where compatibility can be broken, I'd like to clean up the code that's in the form of the example above.
My questions:
(1) Is there a way to easily identify code blocks such as these when they are no longer "valid"? My initial thoughts were to somehow conditionally apply an Obsolete attribute based on the assembly's version. When we get to a new minor version, the Obsolete attribute would "kick-in", and all of a sudden we would have several compiler warnings pointing to these code blocks... Has anyone done anything like this? Or is there a better way to manage this?
(2) I cringe every time I see hard-coded versions such as new Version(2, 1, 3). What makes things worse is that during development, we don't know the final Version that's being released, so the version checks are based on the current build number + 1 when the developer adds the check. Although this works, it's not very clean. Any ideas on how this could be improved?
Thanks!
I would suggest at least creating a method where you can do the logic like this:
public static class ServerUtilities
{
    public static bool IsValidToRun(Version desiredVersion)
    {
        if (_serverVersion >= desiredVersion)
            return true;
        else if (/* your other logic to determine if they're in some acceptable range */)
            return true;

        return false;
    }
}
Then, use it like this:
if (ServerUtilities.IsValidToRun(new Version(2, 1, 3)))
{
    // Do new logic
}
else
{
    // Do old logic
}
If you need to centralize the versions, have a static repository of feature-to-version mappings, so that you can call:
if (ServerUtilities.IsValidToRun(ServerFeatures.FancyFeatureRequiredVersion))
{
    ...
}

public static class ServerFeatures
{
    public static Version FancyFeatureRequiredVersion
    {
        get { return new Version(2, 1, 3); }
    }
}
An alternative would be to implement versioning of your service contracts: at that point you could leverage WCF's own features to ignore minor changes which do not break the client, as listed on this Versioning Strategies page.
In Figure 1 of that page you can see that the client is unaffected when you add new parameters to an operation signature, remove parameters from an operation signature, or add new operations.
In case there are still breaking changes, or your client has to support both versions (please correct me if I'm wrong, since I don't know your deployment strategy), you could offer different versions of the service on different endpoints and have a WCF client factory in your client code, which could then be configured to return the client for the appropriate endpoint.
At this point you have isolated the different implementations in different clients, which is probably cleaner and less of a maintenance nightmare than what you have now.
Very basic sample implementation to clear things up: suppose that we have two different contracts for our service, an old one and a new one.
[ServiceContract(Name = "Service", Namespace = "http://stackoverflow.com/2012/03")]
public interface IServiceOld
{
    [OperationContract]
    void DoWork();
}

[ServiceContract(Name = "Service", Namespace = "http://stackoverflow.com/2012/04")]
public interface IServiceNew
{
    [OperationContract]
    void DoWork();

    [OperationContract]
    void DoAdditionalWork();
}
Note how both services have the same name but different namespaces.
Let's handle the issue of having a client that has to be able to support both the extended and new service and the old one. Let's assume that we want to call the DoAdditionalWork method when we previously just called DoWork, and that we want to handle the situation client-side, because hypothetically DoAdditionalWork could require some extra arguments from the client. Then the configuration of the service could be something like this:
<service name="ConsoleApplication1.Service">
  <endpoint address="http://localhost:8732/test/new" binding="wsHttpBinding" contract="ConsoleApplication1.IServiceNew" />
  <endpoint address="http://localhost:8732/test/old" binding="wsHttpBinding" contract="ConsoleApplication1.IServiceOld" />
  ...
</service>
Fine, we have the service side; now to the interesting part: we would like to communicate with the services using the same interface. In this case I will use the old one, but you might need to put an adapter in between. Ideally, in our client code, we would do something like this:
IServiceOld client = *Magic*
client.DoWork();
The magic in this case is a simple factory like this:
internal class ClientFactory
{
    public IServiceOld GetClient()
    {
        string service = ConfigurationManager.AppSettings["Service"];
        if (service == "Old")
            return new ClientOld();
        else if (service == "New")
            return new ClientNew();

        throw new NotImplementedException();
    }
}
I delegated the decision of which client to use to the app.config, but you could insert your version check there. The implementation of ClientOld is just a regular WCF client for IServiceOld:
public class ClientOld : IServiceOld
{
    private IServiceOld m_Client;

    public ClientOld()
    {
        var factory = new ChannelFactory<IServiceOld>(new WSHttpBinding(), "http://localhost:8732/test/old");
        m_Client = factory.CreateChannel();
    }

    public void DoWork()
    {
        m_Client.DoWork();
    }

    ...
}
ClientNew instead implements the behavior we were wishing for, namely calling the DoAdditionalWork operation:
public class ClientNew : IServiceOld
{
    private IServiceNew m_Client;

    public ClientNew()
    {
        var factory = new ChannelFactory<IServiceNew>(new WSHttpBinding(), "http://localhost:8732/test/new");
        m_Client = factory.CreateChannel();
    }

    public void DoWork()
    {
        m_Client.DoWork();
        m_Client.DoAdditionalWork();
    }

    ...
}
That's it, now our client can be used like in the following example:
var client = new ClientFactory().GetClient();
client.DoWork();
What have we achieved? The code using the client is abstracted from what additional work the actual WCF client has to do and the decision about which client to use is delegated to a factory. I hope some variation/expansion of this sample suits your needs.

Unit test and Session?

I have a routine something like this:
public bool IsValidEmployee(string email, string password)
{
    bool valid = false;
    var employee = dataAccess.GetEmployee(email, password);
    if (employee != null)
    {
        valid = true;
        HttpContext.Current.Session["Employee"] = employee;
    }
    return valid;
}
My unit test:
[TestMethod()]
[HostType("ASP.NET")]
[AspNetDevelopmentServerHost(@"C:\Projects", "/")]
[UrlToTest("http://localhost:59349/")]
public void GetEmployeeTest()
{
    Domain target = new Domain();
    var mockHttpContext = new Mock<HttpContextBase>();
    mockHttpContext.SetupSet(c => c.Session["Employee"] = It.IsAny<object>());
    Assert.IsTrue(target.IsValidEmployee("sam@gmail.com", "test"));
}
The code fails with a null reference exception on:
HttpContext.Current.Session["Employee"] = employee;
Any suggestions on how I can fix this error?
I don't believe just mocking out HttpSession is enough for the session in your method to take on the mocked behavior. You need a way to inject that dependency.
You could redesign your function to take in the session object as a parameter, which would make your method testable.
For example
public bool IsValidEmployee(string email, string password, HttpSessionStateBase session)
{
    bool valid = false;
    var employee = dataAccess.GetEmployee(email, password);
    if (employee != null)
    {
        valid = true;
        session["Employee"] = employee;
    }
    return valid;
}
Additionally, you could create a "SessionManager" that implements an ISessionManager interface, wrapping all your access to session state, and pass that around. This makes the code even more testable and decouples the responsibility of how and where to persist session state from validating an employee.
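A minimal sketch of that idea; the ISessionManager shape and the HttpSessionManager and FakeSessionManager names are invented for illustration:
public interface ISessionManager
{
    object CurrentEmployee { get; set; }
}

// Production implementation backed by ASP.NET session state.
public class HttpSessionManager : ISessionManager
{
    public object CurrentEmployee
    {
        get { return HttpContext.Current.Session["Employee"]; }
        set { HttpContext.Current.Session["Employee"] = value; }
    }
}

// Inside the domain class, the method now depends only on the interface:
public bool IsValidEmployee(string email, string password, ISessionManager session)
{
    var employee = dataAccess.GetEmployee(email, password);
    if (employee == null)
        return false;

    session.CurrentEmployee = employee;
    return true;
}

// In a test, a trivial fake (or a Moq mock) stands in for the real session:
public class FakeSessionManager : ISessionManager
{
    public object CurrentEmployee { get; set; }
}
A test can then pass a FakeSessionManager and assert that CurrentEmployee was set, without needing any HttpContext at all.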
Moles will allow you to intercept and substitute the calls to session.
http://research.microsoft.com/en-us/projects/pex/getstarted.pdf
Using session should be avoided if at all possible due to the burden it places on the server.
That code looks pretty error prone, but maybe that's why you are adding unit tests.
I think you also need to mock the getter for the session...
public void GetEmployeeTest()
{
    Domain target = new Domain();
    var mockHttpContext = new Mock<HttpContextBase>();
    var mockSession = new Mock<HttpSessionStateBase>();
    mockHttpContext.SetupGet(c => c.Session).Returns(mockSession.Object);
    mockHttpContext.SetupSet(c => c.Session["Employee"] = It.IsAny<object>());
    Assert.IsTrue(target.IsValidEmployee("sam@gmail.com", "test"));
}
Which code fails? Is this a test failure due to an error in your method, or does the error occur in the test code itself?
Regardless of how you might improve your test (and this may sound a bit obvious), I wonder whether you have set a breakpoint on the setup line in your test method and actually checked that the object you are trying to assign a value to is null. If that's the case, you need to check your setup; as others have said, you'll probably need to mock your session in such a way that the setup portion of your test won't fail.
Cheers.
