C# - Can WebMethodAttribute adversely affect performance?

I just noticed my Excel service running much faster, though I'm not sure whether some environmental condition is at play. I did make a change to the method. Before, it was
class WebServices {
    [WebMethod( /*...*/ )]
    public string Method() {}
}
Now the attribute is removed and the method has moved into another class:
class NotWebService {
    public string Method() {}
}
I did this because Method is not called or used as a service. Instead it was called directly, via
WebServices service = new WebServices();
service.Method();
and inside the same assembly. Now when I call the method
NotWebService notService = new NotWebService();
notService.Method();
The response time appears to have gone up. Does the WebMethodAttribute have the potential to slow local calls?

Indeed, the WebMethod attribute adds a lot of functionality in order to expose the method through an XML web service.
Part of the overhead comes from the following configurable features of a web method:
BufferResponse
CacheDuration
Session Handling
Transaction Handling
For more information, just check the WebMethodAttribute documentation.

I know this is an old question, but to avoid misinformation I feel the need to answer it anyway.
I disagree with wacdany's assessment.
A method marked as a webmethod should have no additional overhead if called directly as a method, rather than via HTTP. After all, it compiles to exactly the same intermediate language, other than the presence of a custom attribute.
Now, adding a custom attribute can impact performance if it is one that is special to the compiler or runtime; WebMethodAttribute is not.
I would next consider whether there is any special overhead to constructing the web service object. If you have added a constructor there might be some, but by default there is no real overhead, since the constructors of the base classes are trivial.
Therefore, if you really were calling the method directly, there should be no real overhead, despite it also being accessible as a web service action. If you experienced a significant difference, it would be wise to verify that you were constructing the real WebServices class, and not somehow inadvertently using a web service proxy, perhaps due to adding a web service reference to your project.
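To see that the attribute is inert metadata for a direct call, here is a minimal sketch. It uses a hypothetical stand-in attribute so it compiles without an ASP.NET reference; the real attribute lives in System.Web.Services:

```csharp
using System;
using System.Reflection;

// Stand-in for System.Web.Services.WebMethodAttribute, used here only so
// the sample compiles without an ASP.NET reference.
[AttributeUsage(AttributeTargets.Method)]
class WebMethodAttribute : Attribute { }

class WebServices
{
    [WebMethod]
    public string Method() { return "result"; }
}
```

A direct call (`new WebServices().Method()`) never consults the attribute; it is plain metadata that only reflective callers, such as the ASP.NET dispatcher, read via `typeof(WebServices).GetMethod("Method").GetCustomAttribute<WebMethodAttribute>()`.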

Related

Provide whole library's functionality via WCF

I want to build a WCF Service application which is supposed to use a library of mine, making all of the library's methods available to the service's clients. There must be a better way than explicitly writing an OperationContract for each method of my library, which acts as a kind of proxy and calls the library's actual method on the server side in order to get the return value and deliver it back to the client.
If you want access to those methods, you'll need to create operation contracts for them.
You can make this easier by creating a small app that loops through the code files, finds the method signatures, and then formats them for the interface. Then you'd just need to copy that code into the interface.
There must be a better way than explicitly writing an
OperationContract for each method of my library
No, not really.
Also remember that a library is often stateful: you instantiate an object, and calls to instance methods rely on state saved in private members at the instance level.
Only static methods could be 'directly' mapped to service operations.
Most probably you will want to write your WCF contract from scratch to make it service-friendly (i.e. stateless), and possibly interoperable (faults instead of exceptions, etc.).
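To make the shape concrete, here is a minimal sketch of such a stateless wrapper, assuming a reference to System.ServiceModel; MathLibrary and the contract names are hypothetical:

```csharp
using System.ServiceModel;

// Hypothetical library type being wrapped.
public class MathLibrary
{
    public int Add(int a, int b) { return a + b; }
}

// Each exposed method still needs its own OperationContract.
[ServiceContract]
public interface ICalculatorService
{
    [OperationContract]
    int Add(int a, int b);
}

// The wrapper is deliberately stateless: it creates the library object
// per call instead of holding it in a field.
public class CalculatorService : ICalculatorService
{
    public int Add(int a, int b)
    {
        return new MathLibrary().Add(a, b);
    }
}
```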

If I test a service wrapper, should I abstract everything?

I have a service manager class used to abstract my calls from my MVC project to my REST service.
All the manager class does is set up the Rest calls (using RestSharp) and return the service data back to the MVC application.
So, at first I thought about not testing such a trivial class, but have decided that tests will safeguard against future changes that might be more complex.
However, here is my dilemma. How far should I abstract things just so that I can test in isolation?
So, I am having my RestClient injected into my manager class by MVC. I am letting the MVC injector set the base URL.
All of this I am ok with, but here are the questions I have:
For my method call, should I have my method take in a parameter (userId) and an IRestRequest?
My problem with this is that all of a sudden my generic service manager becomes REST-specific, as my interface would need to include both parameters.
If I do not inject the IRestRequest into the method and let the implementation create it, is this ok since this will be ignored as the main method being tested is the RestClient.Execute, which will be stubbed out and not care about the actual RestRequest?
In fact, as this is part of the implementation, I could maybe mock and verify that the Execute method is being sent in the appropriate RestRequest object?
Or, should I not inject the IRestRequest, but instead inject an IRequestResolver into my constructor? Then in my method call, I can just use the IRequestResolver, which will take in a string representing the method. This will then be used to figure out the RestRequest parameters and return a RestRequest object filled in appropriately for the method.
Or, should I just essentially do the sub-bullet under my first bullet, and use the concrete implementation?
Any other options I am missing?
I am leaning towards the fourth bullet, as it gets to the actual solution being tested.
Let me know if you need any more details to help me resolve my dilemma.
After discussing this with a friend, I have decided to go with using the concrete implementation.
The reason being that this object is really just a POCO, and there is no need to abstract it out (even though an abstraction is provided). This is my concrete implementation of my service caller, so the actual solution being tested is the call. I will probably mock this out and verify that the RestRequest is being built the way it should be, though.
But, the short answer we came up with is that POCOs have no need to be abstracted away.
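A sketch of that mock-and-verify approach, assuming RestSharp's v106-era IRestClient/RestRequest API and Moq; UserManager is a hypothetical stand-in for the real service manager:

```csharp
using Moq;
using RestSharp;

// Hypothetical manager under test.
public class UserManager
{
    private readonly IRestClient _client;
    public UserManager(IRestClient client) { _client = client; }

    public string GetUser(int userId)
    {
        // The RestRequest is created inside the implementation;
        // tests never need to inject it.
        var request = new RestRequest("users/{userId}", Method.GET);
        request.AddUrlSegment("userId", userId.ToString());
        return _client.Execute(request).Content;
    }
}

public class UserManagerTests
{
    public void Execute_receives_a_correctly_built_request()
    {
        var clientMock = new Mock<IRestClient>();
        clientMock.Setup(c => c.Execute(It.IsAny<IRestRequest>()))
                  .Returns(new RestResponse { Content = "{\"id\":42}" });

        new UserManager(clientMock.Object).GetUser(42);

        // The concrete RestRequest is observable through the stubbed client.
        clientMock.Verify(c => c.Execute(It.Is<IRestRequest>(
            r => r.Resource == "users/{userId}" && r.Method == Method.GET)),
            Times.Once);
    }
}
```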
I know this dilemma well; and have been here a few times.
When it comes down to it, you could abstract everything, but you end up with meaningless tests and you find that you're writing test framework that just doesn't matter or you're actually bypassing accepted framework norms in search of the 'one true test methodology'. I have actually been in conversations where people have honestly debated passing abstract interfaces where you'd just need to put an integer.
Whilst writing this I've seen your own answer; and I agree completely.
As long as you can validate assumptions and test behaviour then you're doing enough; like you say, you only need to check that things have changed, and you yourself know the boundaries of your own context. Will the provider realistically ever change? No? Then don't abstract it.
Recently I've architected some large scale Microsoft Dynamics CRM solutions for my employer; ultimately my tests assume that the CRM API is OK and they just test the behaviour of my wrappers.
Anyway, that's just the ballpark as I see it, I hope this is of some value to you!

What is the best design practice when you have a class where properties are dependent on a web service call? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
If I have a class with some read-only properties that are populated by a web service call, what is considered the best way to design this?
Is it considered proper for the property getters to make the web service call? It seems that the downside of this is that the getter is doing more than one thing and obscures the expense of the call. I realize that any of the property getters only needs to make the web service call once (by checking for nulls or a flag before making the call). But a single property getter potentially setting the private fields for other properties seems like a smell to me.
On the other hand, if I have a public method (i.e. InitWebServiceVals) that calls the web service and updates the private fields, I am creating a temporal dependency between the method and the property getters. So the API obscures the fact that you shouldn't read a property until InitWebServiceVals has been called.
Or is there some other method or pattern that addresses this? For example, making the webservice call in the constructor? Or is this generally indicative of a design issue?
I have run into this issue a number of times and I always ended up preferring the second method to the first.
Any thoughts?
Seth
I would throw one other option at you. You could use a factory (either a class or a static method) to instantiate your class. The factory would be responsible for making the web service calls and handing off the property values to the class (either through a parameterized constructor on your class that accepts the values, or by declaring the setters as internal).
This would have the added benefit of decoupling the "how does my class get those values" part from the class itself.
So:
var myClass = MyClass.Create(); // where create is a static
// or
var myClass = MyClassFactory.Create(); // using a separate factory
// or
var myClass = MyClass.CreateFromTestData(value1, value2, value3); // etc
I would use lazy initializers. There is full support for them baked into the .NET Framework. See Lazy&lt;T&gt;.
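A minimal sketch of the lazy-initializer approach; the class and the stand-in FetchFromService are hypothetical:

```csharp
using System;

class ContactInfo
{
    // The expensive call runs at most once, on first access, and the
    // result is cached for all subsequent reads.
    private readonly Lazy<string> _details;

    public ContactInfo()
    {
        _details = new Lazy<string>(FetchFromService);
    }

    public string Details { get { return _details.Value; } }

    // Stand-in for the real web service call.
    private static string FetchFromService()
    {
        Console.WriteLine("service called"); // printed only once
        return "details-from-service";
    }
}
```

Lazy&lt;T&gt; is also thread-safe by default, so concurrent first reads still trigger only one service call.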
I would use a form of the Factory pattern and return the class from a "Create"-style static method. This allows you to separate out the web service side should you change how you retrieve the data (from a web service to something RESTful, etc.). It also makes it easier to implement unit testing, asynchronous lazy loading, and so on. You could also easily use an IoC container or dependency injection to inject the service API at runtime.
To clarify the testing: if you define an interface with the Create method, you can simply "swap" or "inject" the interface implementation.
public interface IMyClassFactory
{
    MyClass Create(/* any parameters required */);
}

public class MyClassFactory : IMyClassFactory
{
    public MyClass Create(/* any parameters required */)
    {
        // Call the web service and return a populated MyClass.
        return new MyClass();
    }
}

IMyClassFactory factory = new MyClassFactory();
MyClass webServiceClass = factory.Create();
In my experience, having property getters (or setters) doing anything computationally expensive is a bad idea. When I see a property, I generally assume that it is going to be a fast, simple operation.
I would avoid the second solution (InitWebServiceVals), because of the requirement that the consumer know about it and take an extra action. I would either make the web service call in the object's constructor or when the first property is accessed, depending on when you want to take the hit of accessing the web service.
Having the access of Property A make the service call and also set the values for Properties B, C, and D is okay, it's lazy instantiation and perfectly justified if the call is better deferred until first needed.
EDIT:
So after some additional thought, there's a third option which I like a little better, depending on the intended use of the object. If there are potentially multiple web services generating the property values, or property values that don't come from web services and should be available immediately, or it's likely the object will be instantiated but not accessed immediately, then the constructor can make the service call asynchronously, with the property getters smart enough to wait for that call to finish. This offloads the cost of the service call to another thread.
I would recommend making the web service call in the constructor, so that the performance hit is taken before anyone asks for a property. If you don't want to take the hit immediately on construction, then you could at least start the web service call async in the constructor, and have the property get block until the data is available.
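A minimal sketch of that start-async-in-the-constructor idea, with a simulated service call standing in for the real one:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Contact
{
    private readonly Task<string> _load;

    public Contact()
    {
        // Kick off the (simulated) service call immediately on a worker thread.
        _load = Task.Run(() => FetchFromService());
    }

    // The getter blocks only if the call has not finished yet.
    public string Name { get { return _load.Result; } }

    // Stand-in for the real web service call.
    private static string FetchFromService()
    {
        Thread.Sleep(50); // simulated network latency
        return "Rob";
    }
}
```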
Rico Mariani has an excellent post about how you shouldn't do ANYTHING expensive in a property get, since getting a property value should be a cheap operation.
http://blogs.msdn.com/b/ricom/archive/2011/12/19/performance-guidelines-for-properties.aspx
If possible, I would avoid creating the object and then setting properties. A better solution is to create a higher-level provider that returns the object you want in a fully initialized state. If that provider needs to call into a web service then so be it. This lets you avoid the temporal dependency and has the advantage of communicating that getting the object is an expensive operation.
Example:
ISomeService.Get() returns a Widget
ISuperWidgetProvider.Get() returns a SuperWidget by calling into ISomeService, getting a few properties, and getting the rest from some other data source.
The answer depends on multiple factors:
reaction time of the getter (what response delay is acceptable)
whether service is sync or async
how "heavy" is service call
how often the results of service call change
how often you expect your code to call this getter
The answer is pretty clear and simple once you answer these questions.
There are three main strategies to help you:
caching
lazy initialization
constructor initialization
Update: I feel like you are trying to solve a caching problem inside some "working" class that is not supposed to do this. If that is the case, you need to introduce a CachingMyService wrapper around the service that will be used by your client code.

Custom code access permissions

We have a server written in C# (Framework 3.5 SP1). Customers write client applications using our server API. Recently, we created several levels of license schemes: Basic, Intermediate and All. If you have a Basic license then you can call a few methods on our API. Similarly, if you have Intermediate you get some extra methods to call, and if you have All then you can call all the methods.
When server starts it gets the license type. Now in each method I have to check the type of license and decide whether to proceed further with the function or return.
For example, a method InterMediateMethod() can only be used with an Intermediate or All license. So I have to do something like this:
public void InterMediateMethod()
{
    if (licenseType == "Basic")
    {
        throw new Exception("Access denied");
    }
}
This looks like a very lame approach to me. Is there a better way to do this? Is there a declarative way, by defining some custom attributes? I looked at creating a custom CodeAccessSecurityAttribute but did not have much success.
Since you are adding the "if" logic in every method (and god knows what else), you might find it easier to use PostSharp (AOP framework) to achieve the same, but personally, I don't like either of the approaches...
I think it would be much cleaner if you maintained three different branches (source code), one for each license. This may add a little overhead in terms of maintenance (maybe not), but at least it keeps things clean and simple.
I'm also interested what others have to say about it.
Good post, I like it...
Possibly one easy and clean approach would be to add a proxy API that duplicates all your API methods and exposes them to the client. When called, the proxy would either forward the call to your real method, or return a "not licensed" error. The proxies could be built into three separate (basic, intermediate, all) classes, and your server would create instances of the appropriate proxy for your client's licence. This has the advantage of having minimal performance overhead (because you only check the licence once). You may not even need to use a proxy for the "all" level, so it'll get maximum performance. It may be hard to slip this in depending on your existing design though.
Another possibility may be to redesign and break up your APIs into basic/intermediate/all "sections", and put them in separate assemblies, so the entire assembly can be enabled/disabled by the licence, and attempting to call an unlicensed method can just return a "method not found" error (e.g. a TypeLoadException will occur automatically if you simply haven't loaded the needed assembly). This will make it much easier to test and maintain, and again avoids checking at the per-method level.
If you can't do this, at least try to use a more centralised system than an "if" statement hand-written into every method.
Examples (which may or may not be compatible with your existing design) would include:
Add a custom attribute to each method and have the server dispatch code check this attribute using reflection before it passes the call into the method.
Add a custom attribute to mark the method, and use PostSharp to inject a standard bit of code into the method that will read and test the attribute against the licence.
Use PostSharp to add code to test the licence, but put the licence details for each method in a more data driven system (e.g. use an XML file rather than attributes to describe the method permissions). This will allow you to easily change the licensing across the entire server by editing a single file, and allow you to easily add whole new levels or types of licences in future.
Hope that gives you some ideas.
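As a sketch of the first idea, a custom attribute checked by the dispatch code via reflection. The attribute, enum, and dispatcher names below are hypothetical, not part of any framework:

```csharp
using System;
using System.Reflection;

enum LicenseLevel { Basic = 0, Intermediate = 1, All = 2 }

[AttributeUsage(AttributeTargets.Method)]
class RequiresLicenseAttribute : Attribute
{
    public LicenseLevel Level { get; private set; }
    public RequiresLicenseAttribute(LicenseLevel level) { Level = level; }
}

class Api
{
    [RequiresLicense(LicenseLevel.Intermediate)]
    public void InterMediateMethod() { }
}

static class Dispatcher
{
    // Centralised check the server runs before forwarding a call;
    // unmarked methods are allowed at any level.
    public static bool IsAllowed(MethodInfo method, LicenseLevel current)
    {
        var required = method.GetCustomAttribute<RequiresLicenseAttribute>();
        return required == null || current >= required.Level;
    }
}
```

The licence rule then lives in one place, and each method declares its own requirement instead of hand-writing an "if".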
You might really want to consider buying a licensing solution rather than rolling your own. We use Desaware and are pretty happy with it.
Doing licensing at the method level is going to take you into a world of hurt. Maintenance on that would be a nightmare, and it won't scale at all.
You should really look at componentizing your product. Your code should roughly fall into "features", which can be bundled into "components". The trick is to make each component do a license check, and have a licensing solution that knows if a license includes a component.
Components for our products are generally on the assembly level, but for our web products they can get down to the ASP.Net server control level.
I wonder how people license SOA services. They can be licensed per service or per endpoint, which can be very hard to maintain.
You can try using the strategy pattern.
This can be your starting point.
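A minimal sketch of the strategy-pattern idea, with all names hypothetical: the licence check happens once, when the strategy is chosen at startup, rather than inside every method.

```csharp
using System;

interface ILicenseStrategy
{
    string IntermediateOperation();
}

class BasicLicense : ILicenseStrategy
{
    public string IntermediateOperation()
    {
        throw new UnauthorizedAccessException("Not licensed");
    }
}

class IntermediateLicense : ILicenseStrategy
{
    public string IntermediateOperation() { return "done"; }
}

class Server
{
    private readonly ILicenseStrategy _license;

    // The strategy is chosen once, at startup, from the licence type;
    // individual methods no longer contain any "if" checks.
    public Server(ILicenseStrategy license) { _license = license; }

    public string InterMediateMethod()
    {
        return _license.IntermediateOperation();
    }
}
```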
I agree with the answer from @Ostati that you should keep 3 branches of your code.
What I would add is that I would then expose 3 different services (preferably WCF services) and issue certificates that grant access to the specific service. That way, if anyone tried to access the higher-level functionality they simply could not access the service, period.

ASP.NET Web Service Results, Proxy Classes and Type Conversion

I'm still new to the ASP.NET world, so I could be way off base here, but so far this is to the best of my (limited) knowledge!
Let's say I have a standard business object "Contact" in the Business namespace. I write a Web Service to retrieve a Contact's info from a database and return it. I then write a client application to request said details.
Now, I also create a utility method that takes a Contact and does some magic with it, say Utils.BuyContactNewHat(), which of course takes a Contact of type Business.Contact.
I then go back to my client application and want to utilise the BuyContactNewHat method, so I add a reference to my Utils namespace and there it is. However, a problem arises with:
Contact c = MyWebService.GetContact("Rob");
Utils.BuyContactNewHat(c); // << Error Here
Since the return type of GetContact is of MyWebService.Contact and not Business.Contact as expected. I understand why this is because when accessing a web service, you are actually programming against the proxy class generated by the WSDL.
So, is there an "easier" way to deal with this type of mismatch? I was considering creating a generic converter class that uses reflection to ensure the two objects have the same structure and then simply transfers the values across from one to the other.
You are on the right track. To get the data from the proxy object back into one of your own objects, you have to do left-hand-right-hand code, i.e. copy property values. I'll bet you there is already a generic method out there that uses reflection.
Some people will use something other than a web service (.net remoting) if they just want to get a business object across the wire. Or they'll use binary serialization. I'm guessing you are using the web service for a reason, so you'll have to do property copying.
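A minimal reflection-based property copier along those lines. This is a sketch only: it handles flat, same-named, same-typed properties and nothing else (no nested objects, collections, or type coercion), and the two Contact classes are stand-ins:

```csharp
using System;

static class PropertyCopier
{
    // Copies values of same-named, same-typed, readable/writable
    // properties from source into a new TTarget instance.
    public static TTarget Copy<TTarget>(object source) where TTarget : new()
    {
        var target = new TTarget();
        foreach (var sourceProp in source.GetType().GetProperties())
        {
            var targetProp = typeof(TTarget).GetProperty(sourceProp.Name);
            if (targetProp != null && targetProp.CanWrite && sourceProp.CanRead
                && targetProp.PropertyType == sourceProp.PropertyType)
            {
                targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
            }
        }
        return target;
    }
}

// Hypothetical proxy and business types with matching shapes.
class ProxyContact { public string Name { get; set; } }
class BusinessContact { public string Name { get; set; } }
```

Usage would look like `var b = PropertyCopier.Copy<BusinessContact>(proxyContact);` after the web service call returns.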
You don't actually have to use the generated class that the WSDL gives you. If you take a look at the code that it generates, it's just making calls into some .NET framework classes to submit SOAP requests. In the past I have copied that code into a normal .cs file and edited it. Although I haven't tried this specifically, I see no reason why you couldn't drop the proxy class definition and use the original class to receive the results of the SOAP call. It must already be doing reflection under the hood, it seems a shame to do it twice.
I would recommend that you look at writing a Schema Importer Extension, which you can use to control proxy code generation. This approach can be used to (gracefully) resolve your problem without kludges (such as copying around objects from one namespace to another, or modifying the proxy generated reference.cs class only to have it replaced the next time you update the web reference).
Here's a (very) good tutorial on the subject:
http://www.microsoft.com/belux/msdn/nl/community/columns/jdruyts/wsproxy.mspx
