How to make an existing C# dll available as a web service - c#

In principle this looks like a simple job, but I wonder if anyone can take me through the basic steps?
I have an application API, implemented as a C# class library project in the application solution. People can thus write their own conventional .Net applications using this API by referencing the dll directly.
I now need to make exactly the same functionality available as a web service so applications can be written to remotely access the same API over http. Ideally I would just like to tag the API classes and methods with appropriate web service attributes, but I suspect there is more to it than that. I also must have the API dll continue to work as an API for desktop applications as it does at present.
Is this do-able? If so, what are the steps I need to take?

The web service can be composed mostly of wrapper methods. Take the simple case...
If your API method in the assembly is
public void DoFoo(string bar)
Then your web API method (your choice of implementation, such as WebAPI, ASMX web service, etc) will look like
public void DoFoo(string bar) {
    // ... initialization or validation
    try {
        refToDll.DoFoo(bar);
    } catch (Exception e) {
        // implementation specific return of error.
    }
}
If your API consists mostly of static methods or methods taking primitive types, this is fairly easy. If your API defines its own types, it becomes harder: you will need to change the type signatures and reimplement the methods. Without seeing your API it is difficult to make specific suggestions, but there are several options. If you had
public class BazClass {
    public string GetScore() {
        return scores.Sum().ToString();
    }
}
You basically need to ensure that the remote side (the web API) can reconstruct the context from your client side. You have to pass in a serializable instance or other representation of BazClass and let the remote API work on it; it just doesn't exist on the server otherwise. You could also create a set of methods that store state on the server while the client works with a "handle" or object reference, but that is a design decision (think of interop with native libraries and handles, translated to a network boundary). Example:
public string BazGetScore(Transport.BazClass baz) {
    // Depending on the framework and class (all public getters/setters),
    // your framework may allow for transparent serialization
    BazClass bazReal = bazFactory(baz);
    string score = bazReal.GetScore();
    return score;
}
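For completeness, here is a sketch of the "glue" the wrapper above assumes: a serializable transport type holding only state, and a factory that rebuilds the real API object on the server. The shape of BazClass's state and the constructor used here are assumptions; adjust them to whatever your API actually exposes.
namespace Transport {
    [Serializable]
    public class BazClass {
        // state only, no behaviour, so it can cross the wire
        public List<int> Scores { get; set; }
    }
}
public static class BazFactories {
    // Rebuild the real API object from the transported state.
    public static BazClass bazFactory(Transport.BazClass transport) {
        return new BazClass(transport.Scores); // assumes such a constructor exists
    }
}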
How much of your source API is based on interfaces? This may make the creation of a Proxy class much more transparent to your end user. If you have
public class Baz : IBaz { ... }
Then you can create a Proxy class that acts just like an IBaz but calls the remote API instead of acting locally. Depending on your framework and tooling, this may be able to be facilitated by the tools.
namespace RemoteAPIProxy {
    public class Baz : IBaz {
        public string GetScore() {
            // initialization of network, API, etc.
            Transport.Baz baz = Transport.Baz.From(this);
            string score = CallRemoteAPI("BazGetScore", baz);
            return score;
        }
    }
}
In summary, you may have to create some intermediate classes, depending on whether you need to support state, non-public methods, or the full scope of the API. The "how" can mostly be treated as just another wrapper, but you need to be conscious of how you get your local state over the wire and into the context of the remote API. Use interfaces, serialization helpers, and lightweight transport objects for state to help with the "glue". Remember, the only "I" in "API" is for "Interface", so you might want to make sure you have some. Good luck!

Related

How do I implement AOP in an Azure Mobile App Services client?

On an Azure Mobile App Services server side app using MVC 5, Web API 2.0, and EF Core 1.0, controllers can be decorated like so to implement token based authentication:
// Server-side EF Core 1.0 / Web API 2 REST API
[Authorize]
public class TodoItemController : TableController<TodoItem>
{
    protected override void Initialize(HttpControllerContext controllerContext)
    {
        base.Initialize(controllerContext);
        DomainManager = new EntityDomainManager<TodoItem>(context, Request);
    }

    // GET tables/TodoItem
    public IQueryable<TodoItem> GetAllTodoItems()
    {
        return Query();
    }
    ...
}
I want to be able to do something similar on the client side, where I decorate a method with something like the [Authorize] attribute from above, perhaps with a [Secured] decoration as shown below:
public class TodoItem
{
    string id;
    string name;
    bool done;

    [JsonProperty(PropertyName = "id")]
    public string Id
    {
        get { return id; }
        set { id = value; }
    }
    [JsonProperty(PropertyName = "text")]
    public string Name
    {
        get { return name; }
        set { name = value; }
    }
    [JsonProperty(PropertyName = "complete")]
    public bool Done
    {
        get { return done; }
        set { done = value; }
    }
    [Version]
    public string Version { get; set; }
}
// Client side code calling GetAllTodoItems from above
[Secured]
public async Task<ObservableCollection<TodoItem>> GetTodoItemsAsync()
{
    try
    {
        IEnumerable<TodoItem> items = await todoTable
            .Where(todoItem => !todoItem.Done)
            .ToEnumerableAsync();
        return new ObservableCollection<TodoItem>(items);
    }
    catch (MobileServiceInvalidOperationException msioe)
    {
        Debug.WriteLine(@"Invalid sync operation: {0}", msioe.Message);
    }
    catch (Exception e)
    {
        Debug.WriteLine(@"Sync error: {0}", e.Message);
    }
    return null;
}
Where [Secured] might be defined something like this:
public class SecuredFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Check if the user is logged in; if not, redirect to the login page.
    }
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Check some globally accessible member to see if the user is logged out.
    }
}
Unfortunately, the above code only works in Controllers in MVC 1.0 applications and above according to the Microsoft article on "Creating Custom Action Filters": https://msdn.microsoft.com/en-us/library/dd381609(v=vs.100).aspx
How do I implement something like a "Custom Action Filter" that allows me to use the "[Secured]" decoration in a Mobile App Service client instead of the server? The answer will help me create custom authentication from the client side and keep the code in one location without complicating the implementation, i.e., it is a cross-cutting concern like performance metrics, custom execution plans for repeated attempts, logging, etc.
Complicating the scenario, the client also uses Xamarin.Forms for iOS and must follow an Ahead-of-Time compilation pattern because iOS requires native code; JIT compilation is not yet possible.
The reason attributes work in the scenarios you describe is that other code is responsible for actually invoking the methods or reading the properties, and that other code looks for the attributes and modifies behaviour accordingly. When you are just running C# code, you don't normally get that; there isn't a native way to, say, execute the code in an attribute before a method is executed.
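To make that concrete, here is a minimal sketch (with purely illustrative names, not any real framework's API) of what a framework does on your behalf: the calling code inspects the method's attributes via reflection and runs their logic before invoking the method itself.
using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class SecuredAttribute : Attribute
{
    public void OnExecuting()
    {
        // check login state here; throw if the user is not authenticated
    }
}

public static class AttributeAwareInvoker
{
    public static object Call(object target, string methodName, params object[] args)
    {
        MethodInfo method = target.GetType().GetMethod(methodName);
        // the attribute's code runs only because this invoker chooses to run it
        foreach (SecuredAttribute attr in method.GetCustomAttributes(typeof(SecuredAttribute), true))
        {
            attr.OnExecuting();
        }
        return method.Invoke(target, args);
    }
}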
From what you are describing, it sounds like you are after Aspect Oriented Programming. See What is the best implementation for AOP in .Net? for a list of frameworks.
In essence, using an appropriate AOP framework, you can add attributes or other markers and have code executed or inserted at compile time. There are many approaches to it, hence why I am not being very specific, sorry.
You do need to understand that the AOP approach is different from how things like ASP.NET MVC work, as AOP will typically modify your runtime code (in my understanding anyway, and I'm sure there are variations on that as well).
As to whether AOP is really the way to go will depend on your requirements, but I would proceed with caution - it's not for the faint of heart.
One completely alternative solution to this problem is to look at something like MediatR or similar to break your logic into a set of commands which you call via a message bus. The reason that helps is that you can decorate your message bus (or pipeline) with various types of logic, including authorization logic. That solution is very different from what you are asking for, but may be preferable anyway.
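As a rough, hand-rolled sketch of that idea (MediatR's pipeline behaviours play the same role; all names below are illustrative):
using System;
using System.Threading.Tasks;

public interface ICommand { }

public interface IHandler<TCommand> where TCommand : ICommand
{
    Task Handle(TCommand command);
}

// A decorator placed in the pipeline so authorization runs before every handler.
public class AuthorizingHandler<TCommand> : IHandler<TCommand> where TCommand : ICommand
{
    private readonly IHandler<TCommand> inner;
    private readonly Func<bool> isLoggedIn;

    public AuthorizingHandler(IHandler<TCommand> inner, Func<bool> isLoggedIn)
    {
        this.inner = inner;
        this.isLoggedIn = isLoggedIn;
    }

    public Task Handle(TCommand command)
    {
        if (!isLoggedIn())
            throw new UnauthorizedAccessException("User must be logged in.");
        return inner.Handle(command);
    }
}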
Or just add a single-line authorisation call as the first line inside each method instead of doing it as an attribute...
What you are describing is more generally known by a few different names/terms. The first that comes to mind is "Aspect Oriented Programming" (or AOP for short). It deals with what are known as cross-cutting concerns. I'm willing to bet you want to do one of two things:
Log exceptions/messages in a standardized meaningful way
Record times/performance of areas of your system
And in the general sense, yes, C# is able to do such things. There are countless online tutorials on how to do so; it is much too broad to answer fully here.
However, the authors of ASP.NET MVC have very much thought of these things and supply you with many attributes just as you describe. These can be extended as you please and provide easy access to the pipeline, giving the developer all the information they need (such as the current route, any parameters, any exception, any authorization/authentication request, and so on).
This would be a good place to start: http://www.strathweb.com/2015/06/action-filters-service-filters-type-filters-asp-net-5-mvc-6/
This also looks good: http://www.dotnetcurry.com/aspnet-mvc/976/aspnet-mvc-custom-action-filter

One DLL with two implementations for two applications

I have a DLL with some classes and methods, and two applications using it.
One admin application that needs almost every method, and a client application that only needs part of the functionality, although big parts are used by both. Now I want to make one DLL with the admin stuff and one with the client stuff.
Duplicating the DLL and editing things manually every time is horrible.
Maybe conditional compilation helps me, but I don't know how to compile the DLL twice with different conditions in one solution with the three projects.
Is there a better approach for this issue than having two different DLLs and manually editing on every change?
In general, you probably don't want admin code exposed on the client side. Since it's a DLL, that code is just waiting to be exploited, because those methods are, by necessity, public. Not to mention decompiling a .NET DLL is trivial and may expose inner-workings of your admin program you really don't want a non-administrator to see.
The best, though not necessarily the "easiest" thing to do, if you want to minimize code duplication, is to have 3 DLLs:
A common library that contains ONLY functions that BOTH applications use
A library that ONLY the admin application will use (or else compile it straight into the application if nothing else uses those functions at all)
A library that ONLY the client application will use (with same caveat as above)
A project that consists of a server, client, and admin client should likely have 3-4 libraries:
Common library, used by all 3
Client library, used by client and server
Admin library, used by server and admin client
Server library, used only by server (or else compile the methods directly into the application)
Have you considered using dependency injection on the common library, with some form of constructor injection to determine the rules that need to be applied during execution?
Here's a very simple example:
public interface IWorkerRule
{
    string FormatText(string input);
}

internal class AdminRules : IWorkerRule
{
    public string FormatText(string input)
    {
        return input.Replace("!", "?");
    }
}

internal class UserRules : IWorkerRule
{
    public string FormatText(string input)
    {
        return input.Replace("!", ".");
    }
}

public class Worker
{
    private IWorkerRule Rule { get; set; }

    public Worker(IWorkerRule rule)
    {
        Rule = rule;
    }

    public string FormatText(string text)
    {
        //generic shared formatting applied to any consumer
        text = text.Replace("#", "*");
        //here we apply the injected logic
        text = Rule.FormatText(text);
        return text;
    }
}

class Program
{
    static void Main()
    {
        const string sampleText = "This message is #Important# please do something about it!";
        //inject the admin rules
        var worker = new Worker(new AdminRules());
        Console.WriteLine(worker.FormatText(sampleText));
        //inject the user rules
        worker = new Worker(new UserRules());
        Console.WriteLine(worker.FormatText(sampleText));
        Console.ReadLine();
    }
}
When run, it produces this output:
This message is *Important* please do something about it?
This message is *Important* please do something about it.

Dealing with concurrency and complex WCF services interacting with objects of the overall application

I am enjoying creating and hosting WCF services.
Up until now I can create services, defining contracts for the service and data (interfaces) and defining hosts and configuration options to reach them (endpoint specifications).
Well, consider this piece of code defining a service and using it (endpoints are defined in app.config, which is not shown here):
[ServiceContract]
public interface IMyService {
    [OperationContract]
    string Operation1(int param1);
    [OperationContract]
    string Operation2(int param2);
}

public class MyService : IMyService {
    public string Operation1(int param1) { ... }
    public string Operation2(int param2) { ... }
}

public class Program {
    public static void Main(string[] args) {
        using (ServiceHost host = new ServiceHost(typeof(MyService))) {
            host.Open();
            ...
            host.Close();
        }
    }
}
Well, this structure is good for creating something that could be called a standalone service.
What if I needed my service to use objects of a larger application?
For example, I need a service that does something based on a certain collection defined somewhere in my program (which is hosting the service). The service must look into this collection and search for and return a particular element.
The list I am talking about is managed by the program, which edits and modifies it.
I have the following questions:
1) How can I build a service able to handle this list?
I know that a possible option is using the overloaded ServiceHost constructor that accepts an Object instead of a Type.
So I could pass my list there. Is that good?
[ServiceContract]
public interface IMyService {
    [OperationContract]
    string Operation1(int param1);
    [OperationContract]
    string Operation2(int param2);
}

public class MyService : IMyService {
    private List<> myinternallist;
    public MyService(List<> mylist) {
        // Constructing the service passing the list
    }
    public string Operation1(int param1) { ... }
    public string Operation2(int param2) { ... }
}

public class Program {
    public static void Main(string[] args) {
        List<> thelist;
        ...
        MyService S = new MyService(thelist);
        using (ServiceHost host = new ServiceHost(S)) {
            host.Open();
            ...
            host.Close();
            // Here my application creates functions and other things that manage the list.
            // For this reason my application will edit the list (via a thread or
            // callbacks from the user interface).
        }
    }
}
This example should clarify.
Is this a good way of doing it? Am I doing it right?
2) How to handle conflicts on this shared resource between my service and my application?
When my application runs, hosting the service, it can insert items into the list and delete them, and the service can do the same. Do I need a mutex? How do I handle this?
Please note that the concurrency issue concerns two actors: the main application and the service. It is true that the service is a singleton, but the application acts on the list!
I assume that the service is called by an external entity; when this happens, the application is still running, right? Is there concurrency in this case?
Thank you
Regarding point 2, you can use Concurrent Collections to manage most of the thread safety required.
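A minimal sketch, with illustrative names, of swapping a shared List<T> for a concurrent collection that both the host application and the service operations can touch safely:
using System.Collections.Concurrent;

public static class SharedData
{
    // Thread-safe replacement for a plain List<T> shared by host and service.
    public static readonly ConcurrentDictionary<int, string> Items =
        new ConcurrentDictionary<int, string>();

    // Called from the host application:
    public static void Add(int id, string value)
    {
        Items.TryAdd(id, value);
    }

    // Called from a service operation:
    public static string Find(int id)
    {
        string value;
        return Items.TryGetValue(id, out value) ? value : null;
    }
}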
I'm not sure what you mean by point 1. It sounds like you're describing basic polymorphism, but perhaps you could clarify with an example please?
EDIT: In response to comments you've made to Sixto's answer, consider using WCF's sessions. From what you've described, it sounds to me like the WCF service should sit in a separate host application. The application you are currently using should have a service reference to that service, and by using sessions you would be able to call an operation mimicking your requirement of instantiating the service with a list defined by the current client application.
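A rough sketch of that per-session shape (contract, class, and member names are illustrative; this assumes System.ServiceModel and a session-capable binding such as netTcpBinding or wsHttpBinding):
[ServiceContract(SessionMode = SessionMode.Required)]
public interface IListService
{
    [OperationContract]
    void Add(string item);

    [OperationContract]
    List<string> GetAll();
}

// One service instance per client session, so the list below is per-client state.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class ListService : IListService
{
    private readonly List<string> items = new List<string>();

    public void Add(string item) { items.Add(item); }

    public List<string> GetAll() { return items; }
}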
Combine this with my comment on exposing operations that allow interaction with this list, and you'll be able to run multiple client machines, each working on its own session-stored list.
Hope that's explained well enough.
Adding the constructor to MyService for passing the list certainly will work as you'd expect. Like I said in my comment to the question however, the ServiceHost will only ever contain a single instance of the MyService class so the list will not be shared because only one service instance will consume it.
I would look at a dependency injection (DI) container for WCF to do what you are trying to do. Let the DI container provide the singleton list instance to your services. Also, @Smudge202 is absolutely correct that the concurrent collection functionality is what you need to implement the list.
UPDATE based on the comments thread:
The DI approach works by getting all of an object's dependencies from the DI container instead of creating them manually in code. You register all the types that the container will provide as part of the application start-up. When the application (or WCF) needs a new object instance, it requests it from the container instead of "newing" it up. The Castle Windsor WCF integration library, for example, implements all the wiring needed to provide WCF with a service instance from the container. This post explains the details of how to use the Microsoft Unity DI container with WCF if you want to roll your own WCF integration.
The shared list referenced in this question would be registered in the container as an already-instantiated object from your application. When a WCF service instance is spun up from the DI container, all the constructor parameters will be provided, including a reference to the shared list. There is a lot of information out there on dependency injection and inversion of control, but this Martin Fowler article is a good place to start.
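Here is a minimal, container-agnostic sketch of the wiring a DI container automates: the application owns one list instance and every service instance receives a reference to that same instance. The names and the operation bodies are illustrative only.
using System.Collections.Concurrent;
using System.Linq;

public static class ApplicationState
{
    // Owned by the hosting application; a DI container would hold this
    // registration instead of a static field.
    public static readonly ConcurrentBag<string> TheList = new ConcurrentBag<string>();
}

public class MyService : IMyService
{
    private readonly ConcurrentBag<string> list;

    // WCF (or the container's instance provider) supplies the shared list here.
    public MyService() : this(ApplicationState.TheList) { }
    public MyService(ConcurrentBag<string> list) { this.list = list; }

    public string Operation1(int param1)
    {
        // search the shared list and return a matching element
        return list.FirstOrDefault(x => x.Length == param1);
    }

    public string Operation2(int param2) { return null; }
}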

.NET Remoting: how to access server application objects from remotable object?

I'm writing a windows service application, which will be accessed through .NET Remoting.
The problem is I can't figure out how to access service objects from remotable class.
For example, I've a handler class:
class Service_console_handler
{
    public int something_more = 20;

    public Service_console_handler()
    {
        //some code...
        TcpChannel serverChannel = new TcpChannel(9090);
        ChannelServices.RegisterChannel(serverChannel);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(RemoteObject), "RemoteObject.rem",
            WellKnownObjectMode.Singleton);
        //from here on RemoteObject is accessible by clients.

        //some more code doing something and preparing the data...
    }
}
And I've a remotable class:
public class RemoteObject : MarshalByRefObject
{
private int something = 10;
public int Get_something()
{
return something;
}
}
Clients can access data in RemoteObject with no problem. But how can I access the Service_console_handler object (i.e., to retrieve useful info from something_more)?
Sorry for dumb questions and thanks in advance.
What you want is somehow to access the instance of ServiceConsoleHandler via a RemoteObject instance, which is visible for the client.
For this you need to consider two things: (1) get control over the construction of the RemoteObject instance and make it accessible, and (2) modify ServiceConsoleHandler so it can be accessed remotely.
(1)
How would you construct a RemoteObject instance in ServiceConsoleHandler, if you don’t need to consider remoting?
I guess you would do something like this:
class ServiceConsoleHandler
{
    …
    RemoteObject remoteObject = new RemoteObject();
    // now assume that you also already have
    // modified the RemoteObject class so it can hold
    // a reference to your server:
    remoteObject.Server = this;
    …
}
It would be nice if you could make this object accessible for the client. You can do this by using RemotingServices.Marshal instead of RemotingConfiguration.RegisterWellKnownServiceType:
class ServiceConsoleHandler
{
    …
    TcpServerChannel channel = new TcpServerChannel(9090);
    ChannelServices.RegisterChannel(channel, true);
    RemoteObject remoteObject = new RemoteObject();
    remoteObject.Server = this;
    RemotingServices.Marshal(remoteObject, "RemoteObject.rem");
    …
}
(2)
If you execute the code right now and access remoteObject.Server in the client code, you will get a remoting exception, because the class ServiceConsoleHandler cannot be accessed remotely. Therefore you need to add the [Serializable] attribute:
[Serializable]
class ServiceConsoleHandler
{ … }
Reason: types which should be accessed remotely need to be marshaled into some special transferable representation. This way they can be squeezed through the TCP port and transferred via the TCP protocol. Basic data types can be marshaled by the framework, so you don't need to think about them. For custom types you need to state how this should be done. One way is to subclass MarshalByRefObject, which is exactly what you have already done with RemoteObject. Another way is to mark your custom classes as [Serializable], as shown above.
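Sketched briefly, the practical difference between the two: a MarshalByRefObject stays on the server and the client talks to it through a proxy, while a [Serializable] type is copied by value and the client gets its own detached copy.
// Client calls travel over the wire to the single server-side instance.
public class StaysOnServer : MarshalByRefObject
{
    public int Counter;
}

// The whole object is serialized and copied; changes made on the client
// do not affect the server's original.
[Serializable]
public class CopiedToClient
{
    public int Counter;
}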
That’s it. Now you should be able to access the server’s field in the client code. Note that you don’t need your existing code for object activation:
TcpClientChannel channel = new TcpClientChannel();
ChannelServices.RegisterChannel(channel, true);
RemoteObject remoteObject = (RemoteObject)Activator.GetObject(
typeof(RemoteObject), "tcp://localhost:9090/RemoteObject.rem");
Console.WriteLine(remoteObject.Server.SomethingMore);
For me, .NET Remoting is full of funny surprises and sleepless nights. To counter this, make yourself familiar with the remoting concepts (which are, from my point of view, poorly documented). Dig into the serialization concepts (MarshalByRefObject vs. [Serializable]). If you want to turn this into production code, think of a good way to handle remoting exceptions. Also consider multithreading: there could be more than one client using the remote object at once.
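For example, a defensive version of the client call shown above might look like this sketch; RemotingException (System.Runtime.Remoting) and SocketException (System.Net.Sockets) are the typical failures when the server is down or unreachable.
try
{
    Console.WriteLine(remoteObject.Server.SomethingMore);
}
catch (RemotingException ex)
{
    Console.WriteLine("Remoting failure: " + ex.Message);
}
catch (SocketException ex)
{
    Console.WriteLine("Network failure: " + ex.Message);
}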
Have fun!
Thank you! I very much appreciate the thoroughness and clarity of your answer.
The most bizarre thing is that I didn't even know you could publish an object instance. The dozen or so simple tutorials I studied presented RemotingConfiguration.RegisterWellKnownServiceType as the only way to do remoting. Stupid me.
Now remoting looks much more useful to me. I just wrote a quick test application and it worked. Thanks again.

WCF Service authorization patterns

I'm implementing a secure WCF service. Authentication is done using username / password or Windows credentials. The service is hosted in a Windows Service process. Now, I'm trying to find out the best way to implement authorization for each service operation.
For example, consider the following method:
public EntityInfo GetEntityInfo(string entityId);
As you may know, in WCF there is an OperationContext object from which you can retrieve the security credentials passed in by the caller/client. Now, authentication will already have finished by the time the first line of the method is called. However, how do we implement authorization if the decision depends on the input data itself? For example, in the above case, say 'admin' users (whose permissions etc. are stored in a database) are allowed to get entity info and other users are not... where do we put the authorization checks?
Say we put it in the first line of the method like so:
CheckAccessPermission(PermissionType.GetEntity, user, entityId) //user is pulled from the current OperationContext
Now, there are a couple of questions:
Do we validate the entityId (for example, check for null/empty values, etc.) BEFORE the authorization check or INSIDE the authorization check? In other words, if authorization checks have to be included in every method, is that a good pattern? Which should happen first: argument validation or authorization?
How do we unit test a WCF service when authorization checks are all over the place like this and we don't have an OperationContext in the unit test? (Assume I'm trying to test this service class implementation directly, without any of the WCF setup.)
Any ideas guys?
For question 1, it's best to perform authorization first. That way, you don't leak validation error messages back to unauthorized users.
BTW, instead of using a home-grown authentication method (which I assume is what your CheckAccessPermission is), you might be able to hook up to WCF's out-of-the-box support for ASP.NET role providers. Once this is done, you perform authorization via OperationContext.Current.ServiceSecurityContext.PrimaryIdentity.IsInRole(). The PrimaryIdentity is an IPrincipal.
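A declarative variant of that role check, sketched here under the assumption that the service is configured to populate the caller's principal from the ASP.NET role provider (e.g. principalPermissionMode="UseAspNetRoles") and that LoadEntityInfo is a hypothetical helper; PrincipalPermission lives in System.Security.Permissions:
[PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
public EntityInfo GetEntityInfo(string entityId)
{
    // Only reached when the caller is in the "Admin" role; otherwise a
    // SecurityException is thrown before the body runs.
    return LoadEntityInfo(entityId);
}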
About question #2, I would do this using Dependency Injection and set up your service implementation something like this:
class MyService : IMyService
{
    public MyService() : this(new UserAuthorization()) { }
    public MyService(IAuthorization auth) { _auth = auth; }

    private IAuthorization _auth;

    public EntityInfo GetEntityInfo(string entityId)
    {
        _auth.CheckAccessPermission(PermissionType.GetEntity,
            user, entityId);
        //Get the entity info
    }
}
Note that IAuthorization is an interface that you would define.
Because you are going to be testing the service type directly (that is, without running it inside the WCF hosting framework), you simply set up your service to use a dummy IAuthorization type that allows all calls. However, an even BETTER test is to mock the IAuthorization and verify that it is called when, and with the parameters, you expect. This lets you test that your calls to the authorization methods are valid, along with the method itself.
Separating the authorization into its own type also allows you to more easily test that it is correct in isolation. In my (albeit limited) experience, using DI "patterns" gives you vastly better separation of concerns and testability in your types, as well as leading to a cleaner interface (this is obviously open to debate).
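As a sketch of that mocking idea without committing to a particular framework (the IAuthorization signature and the IPrincipal parameter below are assumptions, since you define that interface yourself):
// Hand-rolled spy that records calls instead of enforcing anything.
class SpyAuthorization : IAuthorization
{
    public int Calls;

    public void CheckAccessPermission(PermissionType permission, IPrincipal user, string entityId)
    {
        Calls++;
    }
}

class MyServiceTests
{
    public void GetEntityInfo_asks_for_authorization()
    {
        var auth = new SpyAuthorization();
        var service = new MyService(auth);
        service.GetEntityInfo("42");
        Debug.Assert(auth.Calls == 1); // authorization was consulted exactly once
    }
}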
My preferred mocking framework is RhinoMocks, which is free and has a very nice fluent interface, but there are plenty of others out there. If you'd like to know more about DI, here are some good primers and .NET frameworks:
Martin Fowler on DI
Jeremy Miller on DI
Scott Hanselman's List of DI Containers
My personal favorite DI container: The Castle Project Windsor Container
For question 1, absolutely do authorization first. No code (within your control) should execute before authorization to maintain the tightest security. Paul's example above is excellent.
For question 2, you could handle this by subclassing your concrete service implementation. Make the true business-logic implementation an abstract class with an abstract CheckPermissions method, as you mention above. Then create two subclasses: one for WCF use, and one (isolated in a non-deployed DLL) which returns true (or whatever you'd like it to do in your unit testing).
Example (note, these shouldn't be in the same file or even DLL though!):
public abstract class MyServiceImpl
{
    public void MyMethod(string entityId)
    {
        CheckPermissions(entityId);
        //move along...
    }

    protected abstract bool CheckPermissions(string entityId);
}

public class MyServiceUnitTest : MyServiceImpl
{
    protected override bool CheckPermissions(string entityId)
    {
        return true;
    }
}

public class MyServiceMyAuth : MyServiceImpl
{
    protected override bool CheckPermissions(string entityId)
    {
        //do some custom authentication
        return true;
    }
}
Then your WCF deployment uses the class "MyServiceMyAuth", and you do your unit testing against the other.
