I am creating an application that consumes a SOAP web service in C#. I generated a proxy class for the web service WSDL using the svcutil tool.
I added the proxy class to my code and I am using it to make calls to the web service and get results asynchronously.
Everything works fine when the client has Internet access. However, if I attempt a call while the application has no Internet access, it crashes with the following exception:
An exception of type 'System.ServiceModel.EndpointNotFoundException' occurred in
System.ServiceModel.Internals.dll but was not handled in user code
I am trying to catch this exception to prevent the application from crashing and to show the user a friendlier error message. However, since I am making async web calls, simply surrounding the web service calls with a try-catch does not help.
According to the exception details, it happens in the End_FunctionName function defined inside the auto-generated proxy file.
Any tips on how to gracefully handle this exception?
It's pretty difficult to know exactly what is happening; however, I'm going to assume you have a web service like this:
[ServiceContract]
public interface IMyService
{
[OperationContract]
String Hello(String Name);
[OperationContract]
Person GetPerson();
}
You probably have a proxy like this:
public class MyPipeClient : IMyService, IDisposable
{
ChannelFactory<IMyService> myServiceFactory;
public MyPipeClient()
{
//This is likely where your culprit will be.
myServiceFactory = new ChannelFactory<IMyService>(new NetNamedPipeBinding(), new EndpointAddress(Constants.myPipeService + @"/" + Constants.myPipeServiceName));
}
public String Hello(String Name)
{
//But this is where you will get the exception
return myServiceFactory.CreateChannel().Hello(Name);
}
public Person GetPerson()
{
return myServiceFactory.CreateChannel().GetPerson();
}
public void Dispose()
{
((IDisposable)myServiceFactory).Dispose();
}
}
If there is a connection error, you will get it not when you create the channel factory but when you actually call a function.
To fix this problem, you can put a try-catch around every single function call and handle async calls manually.
Alternatively, you can have a function like init() that is called synchronously every time you instantiate a connection; if that call succeeds, you know you have a connection.
If you are at risk of the connection dropping at any time, I advise you to go with the former option.
Anyway, here is an example of how you'd fix it:
public class MyPipeClient : IMyService, IDisposable
{
ChannelFactory<IMyService> myServiceFactory;
public MyPipeClient()
{
myServiceFactory = new ChannelFactory<IMyService>(new NetNamedPipeBinding(), new EndpointAddress(Constants.myPipeService + @"/" + Constants.myPipeServiceName + 2));
}
public String Hello(String Name)
{
try
{
return Channel.Hello(Name);
}
catch
{
return String.Empty;
}
}
public Person GetPerson()
{
try
{
return Channel.GetPerson();
}
catch
{
return null;
}
}
public Task<Person> GetPersonAsync()
{
// Task.Run starts the task; new Task<T>(...) would create a task that never runs.
return Task.Run(() => GetPerson());
}
public Task<String> HelloAsync(String Name)
{
return Task.Run(() => Hello(Name));
}
public void Dispose()
{
myServiceFactory.Close();
}
public IMyService Channel
{
get
{
return myServiceFactory.CreateChannel();
}
}
}
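For completeness, here is a rough sketch of the init()-style check mentioned above, written as a method you could add to MyPipeClient. It assumes a using System.ServiceModel; directive and simply reuses Hello as a cheap probe call; adapt it to whatever inexpensive operation your contract has:

public bool Init()
{
    try
    {
        // Any real call forces the channel to connect; Hello is only used as a probe here.
        Channel.Hello(String.Empty);
        return true;
    }
    catch (EndpointNotFoundException)
    {
        // The endpoint is unreachable: report it so the UI can show a friendly message.
        return false;
    }
}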
I uploaded the source I wrote so that you can download the full project here: https://github.com/Aelphaeis/MyWcfPipeExample
PS: This repository throws the exception you are getting. To remove it, just go to MyPipeClient and remove the + 2 in the constructor.
If you are using a duplex service, consider this repository:
https://github.com/Aelphaeis/MyWcfDuplexPipeExample
I'm currently in the process of writing a UWP application. I'm using a WCF service to talk to a business layer that uses Entity Framework to talk to the SQL Server database. It's all in the very early stages, as I've never used UWP before.
I got the UWP app to consume the WCF service and return data from the DB. This was working fine when I returned a string. However, now that I want to return objects, it no longer works: the UWP app just receives a reference and no object is returned. When I run the WCF service in debug mode, the object is returned and I can see all the fields with data.
Could one of you geniuses help me, as I'm stuck and don't know what to do?
Here is my code. My class in the business layer:
namespace Business.Models
{
[DataContract(IsReference = true)]
public class SystemUser
{
public tblSystemUser User { get; set; }
// public tblRole Role { get; set; }
}
}
WCF code:
namespace ActiveCareWCF
{
[ServiceContract]
public interface IUser
{
[OperationContract]
Task<SystemUser> DoLogin(string userName);
}
public class Service : IUser
{
public async Task<SystemUser> DoLogin(string userName)
{
SystemUser systemUser = new SystemUser();
userName = #"username";
try
{
ServiceCalls serviceCalls = new ServiceCalls();
systemUser = serviceCalls.AuthorizeUser(userName);
return systemUser; // this is returning an object
}
catch (Exception ex)
{
throw ex;
}
}
}
}
The SVC file:
public class ActiveCareService : IUser
{
public async Task<SystemUser> DoLogin(string userName)
{
Service service = new Service();
var user = service.DoLogin(userName);
return user.Result;
}
}
And finally the call in UWP.
private async void Login_Click(object sender, RoutedEventArgs e)
{
ActiveCareService.UserClient client = new ActiveCareService.UserClient();
var userFromService = client.DoLoginAsync(@"username").Result;
await client.CloseAsync();
var dialog = new MessageDialog("Logged in as " + userFromService.FirstName); // This just returns a string of the type of object it is, not the actual object. This is what I'm struggling with.
await dialog.ShowAsync();
Frame.Navigate(typeof(YourSites));
}
Thanks in advance.
I'm doing the same thing. For now my services always return JSON strings, which means I must serialize in the service and deserialize in the UWP app.
You must have the model for the serialized classes on both the service side and the UWP side. It works pretty well.
In my service I have the full model, and in the UWP app I create a reduced model that matches my needs.
Hope this helps.
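A minimal sketch of that approach, assuming Newtonsoft.Json is available on both sides; SystemUserDto and its properties are made-up names for the reduced model:

// Reduced model, duplicated on both the service and the UWP side.
public class SystemUserDto
{
    public string UserName { get; set; }
    public string FirstName { get; set; }
}

// Service side: map the EF entity to the reduced model and return JSON.
public string DoLogin(string userName)
{
    var systemUser = new ServiceCalls().AuthorizeUser(userName);
    var dto = new SystemUserDto
    {
        UserName = userName,
        FirstName = systemUser.User.FirstName // assumes tblSystemUser exposes FirstName
    };
    return Newtonsoft.Json.JsonConvert.SerializeObject(dto);
}

// UWP side (inside an async method):
// var json = await client.DoLoginAsync(@"username");
// var userFromService = Newtonsoft.Json.JsonConvert.DeserializeObject<SystemUserDto>(json);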
I wanted to understand whether there is a better way to do logging or error handling in a customized way with WCF.
Here is the scenario.
I have a service as below:
namespace IntegrationServices.Contract.SomeServices
{
[ServiceContract(Name = "SomeServices")]
public interface ISomeService
{
//Having 30+ contracts; below is one of them
[OperationContract]
[WebInvoke(UriTemplate = "/GetOnlineSomething")]
SomeTransactionResponse GetOnlineSomething(string someNumber);
}
}
which is implemented by the class below:
namespace IntegrationServices.Service.PaymentServices
{
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
[GlobalErrorBehaviorAttribute(typeof(GlobalErrorHandler), Project.Name)]
public class PaymentService : ISomeService
{
public SomeTransactionResponse GetOnlineSomething(string someNumber)
{
//we have authentication code here which is OK
var response = new SomeTransactionResponse();
//Logging the request
_transactionKey = Guid.NewGuid();
TransactionRequest(/*some message and some parameter*/);
try
{
//do something
}
catch (Exception ex)
{
LogHelper.WriteErrorLogAsync(/*logging some more information*/);
response.ErrorMessage = Project.PHAPICommonErrorMessage;
}
//Logging the response
TransactionResponse(/*some parameter and error message from catch block*/);
return response;
}
}
}
The logging functions are as below:
private void TransactionRequest(string xmlObject, Guid? groupKey, string name)
{
//writing to DB
}
private void TransactionResponse(string xmlObject, Guid? groupKey, string name)
{
//writing to DB
}
Now my question is: I would have to write code like the above in all 30+ functions to log the request and response.
Can anybody suggest how I can improve this, or whether I need to redesign the whole approach?
I've had great success using PostSharp for logging in my code bases. In the context of WCF it's similar to the IServiceBehavior approach suggested by Aleksey L in that it gives you "hooks" that execute before and after a method's execution, in which you can add your logging. The benefit is that you can also use the PostSharp logging attribute outside the context of a WCF call.
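Here is a rough sketch of what such an aspect could look like. LogTransactionAttribute is a made-up name, the Trace calls stand in for your existing TransactionRequest/TransactionResponse DB writes, and the exact attribute/serialization requirements depend on the PostSharp version you use:

using System.Diagnostics;
using PostSharp.Aspects;
using PostSharp.Serialization;

// Illustrative aspect: OnEntry/OnExit/OnException are woven around every method
// the attribute is applied to, so none of the 30+ operations needs inline logging code.
[PSerializable]
public class LogTransactionAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        // Replace with the existing TransactionRequest(...) DB write.
        Trace.WriteLine("Request: " + args.Method.Name);
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        // Replace with the existing TransactionResponse(...) DB write.
        Trace.WriteLine("Response: " + args.Method.Name);
    }

    public override void OnException(MethodExecutionArgs args)
    {
        // Replace with LogHelper.WriteErrorLogAsync(...).
        Trace.WriteLine("Error in " + args.Method.Name + ": " + args.Exception);
    }
}

// Usage: decorate each operation (or the whole service class) once, e.g.
//   [LogTransaction]
//   public SomeTransactionResponse GetOnlineSomething(string someNumber) { ... }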
I have a console application as my WebJob to process notifications inside my application. The processing is triggered using a queue. The application interacts with a SQL Azure database using Entity Framework 6. The Process() method that's being called reads/writes data in the database.
I'm getting several errors when the queue messages are processed. They never get to the poison queue, since they are reprocessed successfully after 2-3 attempts. Mainly the errors are the following:
An unhandled exception of type 'System.StackOverflowException' occurred in mscorlib.dll
Error: System.OutOfMemoryException: Exception of type ‘System.OutOfMemoryException’ was thrown.
The default batch size is 16, so the messages are processed in parallel.
My guess is that the Ninject setup for processing messages in parallel is wrong. Therefore, when they are processed at the same time, there are some errors, and eventually the messages are processed successfully.
My question is: does this setup look OK? Should I maybe use InThreadScope(), since I don't know whether the parallel processing is also multi-threaded?
Here's the code for my application.
Program.cs
namespace NotificationsProcessor
{
public class Program
{
private static StandardKernel _kernel;
private static void Main(string[] args)
{
var module = new CustomModule();
var kernel = new StandardKernel(module);
_kernel = kernel;
var config =
new JobHostConfiguration(AzureStorageAccount.ConnectionString)
{
NameResolver = new QueueNameResolver()
};
var host = new JobHost(config);
//config.Queues.BatchSize = 1; // Uncomment to process one message at a time instead of in parallel
host.RunAndBlock();
}
public static void ProcessNotification([QueueTrigger("%notificationsQueueKey%")] string item)
{
var n = _kernel.Get<INotifications>();
n.Process(item);
}
public static void ProcessPoison([QueueTrigger("%notificationsQueueKeyPoison%")] string item)
{
//Process poison message.
}
}
}
Here's the code for Ninject's CustomModule
namespace NotificationsProcessor.NinjectFiles
{
public class CustomModule : NinjectModule
{
public override void Load()
{
Bind<IDbContext>().To<DataContext>(); //EF datacontext
Bind<INotifications>().To<NotificationsService>();
Bind<IEmails>().To<EmailsService>();
Bind<ISms>().To<SmsService>();
}
}
}
Code for process method.
public void ProcessMessage(string message)
{
try
{
var notificationQueueMessage = JsonConvert.DeserializeObject<NotificationQueueMessage>(message);
//Grab message and check if it has to be processed
var notification = _context.Set<Notificaciones>().Find(notificationQueueMessage.NotificationId);
if (notification != null)
{
if (notification.NotificationType == NotificationType.AppointmentReminder.ToString())
{
var notificationSuccess = SendAppointmentReminderEmail(notification); //Code that sends email using the SendGrid API
}
}
}
catch (Exception ex)
{
_logger.LogError(ex + Environment.NewLine + message, LogSources.EmailsService);
throw;
}
}
Update - Added Exception
The exception is being thrown at the Json serializer. Here's the stack trace:
Error: System.OutOfMemoryException: Exception of type ‘System.OutOfMemoryException’ was thrown.
at System.String.CtorCharCount(Char c, Int32 count)
at Newtonsoft.Json.JsonTextWriter.WriteIndent()
at Newtonsoft.Json.JsonWriter.AutoCompleteClose(JsonContainerType type)
at Newtonsoft.Json.JsonWriter.WriteEndObject()
at Newtonsoft.Json.JsonWriter.WriteEnd(JsonContainerType type)
at Newtonsoft.Json.JsonWriter.WriteEnd()
at Newtonsoft.Json.JsonWriter.AutoCompleteAll()
at Newtonsoft.Json.JsonTextWriter.Close()
at Newtonsoft.Json.JsonWriter.System.IDisposable.Dispose()
at Newtonsoft.Json.JsonConvert.SerializeObjectInternal(Object value, Type type, JsonSerializer jsonSerializer)
at Newtonsoft.Json.JsonConvert.SerializeObject(Object value, Type type, Formatting formatting, JsonSerializerSettings settings)
at Core.Services.Communications.EmailsService.SendAppointmentReminderEmail(Notificaciones email) in c:\ProjectsGreenLight\EAS\EAS\EAS\Core\Services\Communications\EmailsService.cs:line 489
at Core.Services.Communications.EmailsService.ProcessMessage(String message) in c:\ProjectsGreenLight\EAS\EAS\EAS\Core\Services\Communications\EmailsService.cs:line 124
at Core.Services.NotificacionesService.Process(String message) in c:\ProjectsGreenLight\EAS\EAS\EAS\Core\Services\NotificacionesService.cs:line 56
Since you're receiving OutOfMemoryExceptions and StackOverflowExceptions, I suspect there may be a recursive or deeply nested method. It would be extremely helpful to have a stack trace for the exception; sadly that's not possible for StackOverflowExceptions. However, OutOfMemoryException has a stack trace, so you need to log it, see what it says, and add it to your question. Also, as far as the StackOverflowException goes, you can also try this.
Scoping
You should not use .InThreadScope() in such a scenario. Why? Usually the thread pool is used and threads are reused. That means thread-scoped objects would live longer than the processing of a single message.
Currently you are using .InTransientScope() (that's the default if you don't specify anything else). Since you are using the IDbContext in only one place, that's OK. If you wanted to share the same instance across multiple objects, you'd have to use a scope or pass the instance along manually.
Now, what may be problematic in your case is that you may be creating a lot of new IDbContexts but not disposing of them after you've used them. Depending on other factors, this may result in the garbage collector taking longer to clean up memory. See Do I have to call Dispose on DbContext.
However, I grant that this won't fix your issue; it might just speed up your application.
Here's how you can do it anyway:
What you should do is have INotifications derive from IDisposable:
public interface INotifications : IDisposable
{
(...)
}
internal class Notifications : INotifications
{
private readonly IDbContext _context;
public Notifications(IDbContext context)
{
_context = context;
}
(...)
public void Dispose()
{
_context.Dispose();
}
}
and alter your calling code to dispose of INotifications:
public static void ProcessNotification([QueueTrigger("%notificationsQueueKey%")] string item)
{
using(var n = _kernel.Get<INotifications>())
{
n.Process(item);
}
}
I have a legacy System.Web.Services.WebService (not WCF) that I have to maintain.
Occasionally I run into some weird behaviour that I would describe as a race condition.
Sometimes the service hangs and has to be restarted.
Sometimes I get this exception:
System.NotSupportedException: Multiple simultaneous connections
or connections with different connection strings inside the same
transaction are not currently supported.
at MySql.Data.MySqlClient.MySqlConnection.Open()
...
I know what the root cause is. The service utilizes a library that talks to MySQL and was not designed with web services in mind. Unfortunately, I cannot change this library.
One example web method looks like this:
[WebMethod(EnableSession = true)]
public void DoSomething()
{
var login = this.Session["login"] as LoginDetails;
ExternalLib.SetLoginData(login.Schema, login.User, login.Pass);
ExternalLib.PerformTask();
}
So the problem here is this:
ExternalLib.SetLoginData just sets some global variables.
ExternalLib.PerformTask performs database calls, some inside a transaction.
The process is roughly: 1. Create a MySqlConnection or take it from the cache. 2. Create a MySqlCommand. 3. Execute the command. 4. Dispose of the command.
Client a) calls DoSomething() and I initialize his connection. Halfway through his job, client b) calls DoSomething(), which changes the login data for client a), so the next call inside the transaction uses client b)'s login, which causes the transaction error.
Anyway, I know this is a bad design, but my question is how to work around it.
Currently (since I only have 10 clients) I have created dedicated websites on different ports that all point to the same root directory, but this is an awkward solution.
Maybe there is a possibility to run every session inside its own realm. Any suggestions? If I understand this page correctly, for WCF this is the default behaviour: http://msdn.microsoft.com/en-us/magazine/cc163590.aspx
Per-Call Services
Per-call services are the Windows Communication
Foundation default instantiation mode. When the service type is
configured for per-call activation, a service instance, a common
language runtime (CLR) object, exists only while a client call is in
progress. Every client request gets a new dedicated service instance.
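For reference, this is how that per-call mode is selected in WCF (IMyContract is just a placeholder contract here; this sketch does not apply to the legacy ASMX service as it stands):

using System.ServiceModel;

// Every request gets its own service instance, so per-request state
// such as login data is never shared between clients.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class MyWcfService : IMyContract
{
    // ...
}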
Seeing as this is probably a threading issue, you can lock around the ExternalLib calls to prevent separate requests from running that code at the same time.
public class ExternalLibWrapper
{
private static object Locker = new object();
public void DoSomething(LoginDetails details)
{
lock(Locker)
{
ExternalLib.SetLoginData(details.Schema, details.User, details.Pass);
ExternalLib.PerformTask();
}
}
}
I already wrapped all my public methods in a neat execute wrapper to provide global exception logging.
This forces my web service to process one request after another, but as I mentioned, the maximum number of simultaneous clients is 10.
public class MyService : System.Web.Services.WebService
{
[WebMethod(EnableSession = true)]
public int Add(int value1, int value2)
{
return Execute(() =>
{
var calculator = new Calculator();
return calculator.Add(value1, value2);
});
}
private static Logger logger =
LogManager.GetLogger(typeof(MyService).Name);
private static System.Threading.SemaphoreSlim ss =
new System.Threading.SemaphoreSlim(1, 1);
private void Execute(Action method)
{
ss.Wait();
try { method.Invoke(); }
catch (Exception ex)
{
logger.FatalException(method.Method + " failed", ex); throw;
}
finally { ss.Release(); }
}
private T Execute<T>(Func<T> method)
{
ss.Wait();
try { return method.Invoke(); }
catch (Exception ex)
{
logger.FatalException(method.Method + " failed", ex); throw;
}
finally
{
ss.Release();
}
}
}
What would be a better approach for providing a WCF client with the call result?
1. Wrapping the result in an object:
public enum DefinedResult : short {
Success = 0,
TimeOut = 1,
ServerFailure = 2,
UserNotFound = 3,
Unknown = 4,
//etc.
}
[DataContract]
public class ServiceResult {
readonly DefinedResult dResult;
public ServiceResult(DefinedResult result) {
this.dResult = result;
}
[DataMember]
public bool IsSuccess
{
get {return this.dResult == DefinedResult.Success;}
}
}
//Client:
WcfClient client = new WcfClient();
ServiceResult result = client.DoWork();
2. Throwing a custom Exception:
[Serializable]
public class UserNotFoundException : Exception {
public UserNotFoundException(string message): base(message) {}
}
//client:
WcfClient client = new WcfClient();
try {
result = client.DoWork();
}
catch(FaultException<ExceptionDetail> ex) {
switch(ex.Detail.Type)
{
case "MyCompany.Framework.Exceptions.UserNotFound":
//handle
break;
case "MyCompany.Framework.Exceptions.ServerError":
//handle
break;
}
}
Now, the client can be another .NET process (server side), or the same service can be called from JavaScript. Hence the question: which of these (or maybe something better) is the best approach to let the client know what happened with the call?
First of all, it depends: if you want to return a condition which is not exceptional, then use a result value. Otherwise, use exceptions. In WCF, it goes like this:
Create a custom exception class:
[DataContract]
class MyException : FaultException<mydetails>
Define that your service throws it:
[FaultContract(...)]
void mymethod()...
Throw MyException in your service method.
Then you can catch it on the client with catch (FaultException<mydetails>).
This is the nicest way there is.
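A minimal sketch of that pattern; MyDetails, IUserService and the Reason property are placeholders, and for brevity the service throws FaultException<MyDetails> directly instead of a derived MyException class:

using System.Runtime.Serialization;
using System.ServiceModel;

// Fault detail that travels across the wire to the client.
[DataContract]
public class MyDetails
{
    [DataMember]
    public string Reason { get; set; }
}

[ServiceContract]
public interface IUserService
{
    // Declare on the operation which fault it may return.
    [OperationContract]
    [FaultContract(typeof(MyDetails))]
    void DoWork(string userName);
}

public class UserService : IUserService
{
    public void DoWork(string userName)
    {
        // Throw the typed fault; WCF serializes the detail back to the caller.
        throw new FaultException<MyDetails>(
            new MyDetails { Reason = "User not found" },
            new FaultReason("User not found"));
    }
}

// Client side:
// try { client.DoWork("someUser"); }
// catch (FaultException<MyDetails> ex) { /* inspect ex.Detail.Reason */ }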
FaultExceptions are swallowed by WebHttpBinding (which is required for JSON/REST services). In that case, if you want to provide detailed info to your client, option 1 is better.
If JSON is not involved, I would recommend option 2.