Ordered delivery with NetNamedPipeBinding using oneWay calls - c#

Is it possible to guarantee ordered delivery with one-way calls using the named pipe binding?
I have a WCF service and client communicating over the named pipe binding. The client exposes a callback contract in which all the methods are marked as OneWay, something like this:
[ServiceContract(CallbackContract = typeof(IMyServiceCallback))]
public interface IMyService
{
    [OperationContract]
    void MyOperation();
}

public interface IMyServiceCallback
{
    [OperationContract(IsOneWay = true)]
    void MyCallback1();

    [OperationContract(IsOneWay = true)]
    void MyCallback2();
}
At the server side, the implementation of the MyOperation method always calls MyCallback1 first and then MyCallback2, but I am observing that the client sometimes receives the calls in the wrong order (MyCallback2 first, then MyCallback1).
While searching the internet I found that ordering is not guaranteed with one-way operations, as mentioned here, and also that there is something called a reliable session which ensures message ordering.
All the examples I can find for reliable sessions use the TCP binding (not a single one uses NetNamedPipeBinding), and NetTcpBinding has a ReliableSession property that is not present on NetNamedPipeBinding. So I am not sure whether reliable sessions are even supposed to work with NetNamedPipeBinding.
Question:
Do reliable sessions work with NetNamedPipeBinding? If yes, how? If not, is there another approach that guarantees ordered delivery?

http://msdn.microsoft.com/en-us/library/aa480191.aspx
Introduction to Reliable Messaging with the Windows Communication Foundation
...
The NetNamedPipeBinding sits on top of the Windows operating system's support for reliable message delivery and reliable streams through named pipes. Because named pipes are connection-oriented, readily support sessions, are reliable by design, and are typically not bridged, there is no need for WS-RM support in this binding.
Chances are your messages are being delivered in the order the server sends them, and that sending order is what you need to work with: the server may be executing concurrently and offers no guarantee of ordered dispatch.
Then again, I could be wrong. As the link above describes, there are attributes you can specify on your contract and implementation that control ordered delivery.
This question has some more information as well.
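For illustration, here is a hedged sketch of those attributes, reusing the contract names from the question: DeliveryRequirements asks WCF to verify that the binding preserves message order (which the connection-oriented named pipe transport does), and CallbackBehavior with ConcurrencyMode.Single keeps the client from dispatching callbacks concurrently.

using System.ServiceModel;

// Sketch only: makes WCF check at runtime that the binding assures ordered delivery.
[ServiceContract(CallbackContract = typeof(IMyServiceCallback))]
[DeliveryRequirements(RequireOrderedDelivery = true)]
public interface IMyService
{
    [OperationContract]
    void MyOperation();
}

// Client side: dispatch callbacks one at a time so MyCallback1 is handled
// before MyCallback2 is dispatched.
[CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Single,
                  UseSynchronizationContext = false)]
public class MyServiceCallback : IMyServiceCallback
{
    public void MyCallback1() { /* ... */ }
    public void MyCallback2() { /* ... */ }
}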

Related

Filtering connections on my ASP.NET Core sever between Razor/MVC and gRPC, depending on Kestrel endpoint

In my .NET 6 ASP.NET application I plan on exposing 3 types of network endpoints:
- a Razor Pages-based web UI, listening either on a TCP port or a Unix socket
- a public gRPC API, listening on TCP or a Unix socket as well
- a server-management gRPC API for internal use only, listening only on a Unix socket for security reasons
The public parts (Razor and public gRPC) could be used behind a reverse proxy, which is why I plan on allowing both TCP and Unix sockets (in case the proxy is on the same machine, but I can make them TCP-only). I also need to have them all in the same Host, so they can access the same services (same instances, not just services configured in similar ways), and so I can also register regular Hosted Services. Lastly, since both Microsoft and gRPC documentation say to not use Grpc.Core, I do not wish to use it, even though it would completely solve my problem.
Since it is not possible to have several WebHosts inside a single Generic Host, my best bet now is to rely on MapWhen to direct requests depending on which endpoint received the connection. However, the only data I have for the MapWhen predicate indicates the connection origin via the IPAddress class (through HttpContext.Connection), which cannot represent a Unix socket, so I can't use it to reliably identify the connection source.
If I go the MapWhen route, the code would have the following structure:
public void KestrelSetupCallback(KestrelServerOptions options)
{
    // The two *Endpoint variables below are either an IPEndPoint or a UnixDomainSocketEndPoint,
    // obtained from app configuration
    options.Listen(razorEndpoint, o => o.Protocols = HttpProtocols.Http1AndHttp2);
    options.Listen(apiEndpoint, o => o.Protocols = HttpProtocols.Http2);
    options.ListenUnixSocket(managementSocketPath, o => o.Protocols = HttpProtocols.Http2);
}
public void WebappSetupCallback(IApplicationBuilder app)
{
    // Additional middleware can be added here if needed by the predicates
    app.MapWhen(RazorPredicate, RazorSetupCallback);
    app.MapWhen(ApiPredicate, ApiSetupCallback);
    app.MapWhen(ManagementPredicate, ManagementSetupCallback);
}
I have found the ListenOptions.Use method, which I could call on each endpoint's ListenOptions to add a connection middleware that sets a feature identifying the endpoint. I am not sure whether that is actually possible, though (whether the IFeatureCollection is writable at that point), and I would like to know if there are other options.
Would this approach work, given my use case? Would I need to alter the code structure?
Aside from that, what approach could I take to implement the *Predicate methods?
Is there a better alternative that achieves my use case? This feels very much like an XY problem to me, and I'm afraid I'm missing a feature designed precisely for this.
There are roughly 3 ways to do this (there are more, but let's start here):
1. Split the pipeline like you have above, using MapWhen to decide which conditions run which branch (see the connection-tagging sketch after this list).
2. Use RequireHost on endpoints/controllers etc. to determine which ones get matched depending on the incoming host.
3. Boot up multiple hosts in the same process and treat them like separate islands. They'll have separate config, DI, logging etc.
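To make option 1 concrete, here is a hedged sketch of the ListenOptions.Use idea from the question. The IEndpointKindFeature type, the Tag/IsKind helpers, and the "razor" kind name are invented for illustration, and it assumes Kestrel surfaces connection-level features through HttpContext.Features, which is how connection middleware usually hands data to the HTTP pipeline.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Server.Kestrel.Core;

// Hypothetical marker feature used to tag each connection with the endpoint it arrived on.
public interface IEndpointKindFeature { string Kind { get; } }

public sealed class EndpointKindFeature : IEndpointKindFeature
{
    public EndpointKindFeature(string kind) => Kind = kind;
    public string Kind { get; }
}

public static class EndpointTagging
{
    // Connection middleware registered per ListenOptions; it runs once per connection
    // and stores the endpoint kind in the connection's feature collection.
    public static ListenOptions Tag(this ListenOptions listenOptions, string kind)
    {
        listenOptions.Use(next => context =>
        {
            context.Features.Set<IEndpointKindFeature>(new EndpointKindFeature(kind));
            return next(context);
        });
        return listenOptions;
    }

    // MapWhen predicate helper: reads the tag back from the request's feature collection.
    public static bool IsKind(this HttpContext ctx, string kind) =>
        ctx.Features.Get<IEndpointKindFeature>()?.Kind == kind;
}

// Usage inside the callbacks from the question:
//   options.Listen(razorEndpoint, o => { o.Protocols = HttpProtocols.Http1AndHttp2; o.Tag("razor"); });
//   app.MapWhen(ctx => ctx.IsKind("razor"), RazorSetupCallback);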

wcf oneway non blocking operation

I need the following scenario: the client sends a message to the server, does not wait for a response, and does not care whether the message was sent properly.
using (host.RemoteService client = new host.RemoteService())
{
    client.Open();
    client.SendMessage("msg");
}
In the scenario where the firewall is on, or there is no internet connection, the client dies at SendMessage: the program stops responding. I want the program not to care about the result. If there is no connection, it should simply continue, skipping SendMessage or something like that.
What should I do? Is there any solution for a non-blocking call?
Try something like this in your service contract:
[OperationContract(IsOneWay=true)]
void Send(string message);
See the following link:
One Way Operation in WCF
Edit: OP was already using my suggested solution.
Suggested approaches to solve the issue - taken from MSDN (One-Way Services):
Clients Blocking with One-Way Operations
It is important to realize that while some one-way applications return as soon as the outbound data is written to the network connection, in several scenarios the implementation of a binding or of a service can cause a WCF client to block using one-way operations. In WCF client applications, the WCF client object does not return until the outbound data has been written to the network connection. This is true for all message exchange patterns, including one-way operations; this means that any problem writing the data to the transport prevents the client from returning. Depending upon the problem, the result could be an exception or a delay in sending messages to the service.
You can mitigate some of this problem by inserting a buffer between the client object and the client transport's send operation. For example, using asynchronous calls or using an in-memory message queue can enable the client object to return quickly. Both approaches may increase functionality, but the size of the thread pool and the message queue still enforce limits.
It is recommended, instead, that you examine the various controls on the service as well as on the client, and then test your application scenarios to determine the best configuration on either side. For example, if the use of sessions is blocking the processing of messages on your service, you can set the System.ServiceModel.ServiceBehaviorAttribute.InstanceContextMode property to PerCall so that each message can be processed by a different service instance, and set the ConcurrencyMode to Multiple in order to allow more than one thread to dispatch messages at a time. Another approach is to increase the read quotas of the service and client bindings.
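As a sketch of the "asynchronous calls" mitigation mentioned above: the call is pushed onto the thread pool with short timeouts, so the caller returns immediately and transport failures are swallowed. RemoteServiceClient, its Send operation, and the endpoint address are assumptions for illustration, standing in for your generated proxy.

using System;
using System.ServiceModel;
using System.Threading.Tasks;

public static class FireAndForgetSender
{
    public static void Send(string message)
    {
        // Run the call on the thread pool so the caller never blocks,
        // even if the transport cannot be opened (firewall, no network).
        Task.Run(() =>
        {
            var binding = new BasicHttpBinding
            {
                OpenTimeout = TimeSpan.FromSeconds(5),
                SendTimeout = TimeSpan.FromSeconds(5)
            };
            var client = new RemoteServiceClient(
                binding, new EndpointAddress("http://example.invalid/RemoteService"));
            try
            {
                client.Send(message);   // the one-way operation from the contract above
                client.Close();
            }
            catch (CommunicationException)
            {
                client.Abort();         // fire-and-forget: discard transport failures
            }
            catch (TimeoutException)
            {
                client.Abort();
            }
        });
    }
}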
Modify your attribute
[OperationContract(IsOneWay=true)]

delegates across different machines

It seems like it should be dead easy, but I couldn't find anything on Google about it:
I have a video store server, and it has multiple client applications, installed on users' machines, communicating via (let's say) web services.
When a DVD is returned, I'd like to be able to notify users who have been waiting for that DVD.
When dealing with a single application, then that's no problem using delegates.
My question is: can this approach work with remote clients as well?
You can use a duplex WCF service for that.
But if it really is a DVD-handling service where the user doesn't need to be notified immediately, I would recommend a solution where the users' clients poll the server every, say, 10 minutes. It is far simpler to implement.
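For the duplex route, here is a minimal sketch of what the contracts could look like; the contract and operation names are invented for illustration.

using System.ServiceModel;

// Callback contract implemented by each client; the server calls it when a DVD comes back.
public interface IDvdNotification
{
    [OperationContract(IsOneWay = true)]
    void DvdReturned(string title);
}

[ServiceContract(CallbackContract = typeof(IDvdNotification))]
public interface IDvdStore
{
    // The client registers interest in a title; the server keeps
    // OperationContext.Current.GetCallbackChannel<IDvdNotification>() for later use.
    [OperationContract]
    void WaitFor(string title);
}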
Yes - you can use .NET remoting. See this article for a simple example:
http://www.codeproject.com/KB/IP/remotingandevents.aspx
If you want a client application that provides a delegate that people can wire up to, then yes, you would use .NET Remoting for that.
I used this example: http://www.codeproject.com/KB/dotnet/DotNetRemotingEventsExpl.aspx
Basically, what you are going to do is expose a remoting server that publishes a known object. The trick with events is that the server has to know about the type the client wires the event handlers to. So in that case you also provide an abstract class as an event sink.
Basically that class will look something like this:
public abstract class MyEventSinkClass : MarshalByRefObject
{
    public abstract void MyAbstractEventHandler(string arg1, string arg2);

    public void MyEventHandler(string arg1, string arg2)
    {
        MyAbstractEventHandler(arg1, arg2);
    }
}
Then, on the client side, they create a class that inherits from MyEventSinkClass and put their logic for handling the event in the override of MyAbstractEventHandler. When they wire up the remoted instance, instead of attaching a handler the way they normally would, they attach the MyEventHandler method of their class that inherits MyEventSinkClass. When the event fires, it eventually calls into the overridden method and executes their code (see the client-side sketch below).
You can find the details of how to set up a remoting server and client in the link I gave; it isn't difficult.
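A client-side sketch of that wiring; the DvdServer type and its DvdReturned event are hypothetical stand-ins for whatever remoted object your server publishes.

using System;

// Concrete sink: inherits the abstract class from the answer above, so the
// instance is a MarshalByRefObject and the event call can marshal back to the client.
public class DvdEventSink : MyEventSinkClass
{
    public override void MyAbstractEventHandler(string arg1, string arg2)
    {
        Console.WriteLine($"DVD returned: {arg1} ({arg2})");
    }
}

public static class ClientWiring
{
    public static void Subscribe(DvdServer server)
    {
        var sink = new DvdEventSink();
        // Wire to the sink's MyEventHandler rather than to a plain local method.
        server.DvdReturned += sink.MyEventHandler;
    }
}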
If you don't want to reinvent the wheel, use a message queuing tool.
Then, when a DVD is returned, you post a message to a queue. Users register to the queues of the DVDs they are interested in.
The communication is then persistent and asynchronous; users get notifications even if they are offline (they'll receive them once they connect and poll the queue).
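For instance, a minimal sketch using the classic System.Messaging MSMQ API; the queue path, message text, and one-queue-per-title layout are made up for illustration (WCF's netMsmqBinding would be another option).

using System;
using System.Messaging;  // classic MSMQ API (.NET Framework)

class DvdQueueDemo
{
    const string QueuePath = @".\private$\dvd-the-matrix";  // one queue per title, for illustration

    // Server side: post a notification when the DVD comes back.
    static void NotifyReturned()
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
            queue.Send("The Matrix is back in stock", "DvdReturned");
    }

    // Client side: poll the queue whenever the user is online.
    static void PollForNotification()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            // Receive blocks until a message arrives or the timeout elapses
            // (a MessageQueueException is thrown if nothing arrives in time).
            var message = queue.Receive(TimeSpan.FromSeconds(5));
            Console.WriteLine(message.Body);
        }
    }
}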

WCF service instance will not close despite only calling oneway method

I have a WCF service running inside a windows service on a remote machine.
In the WCF service's contract, I have a method that takes a long time to run set up as
[OperationContract(IsOneWay = true)]
void Update(myClass[] stuff);
Everything works fine: the method gets called, and I can see the work it needs to do start getting done.
The problem is that when I go to close the instance of the WCF service in my code, it times out and I get:
The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:02:00'.
I thought the one-way contract allowed me to fire and move on. Is there something I am missing? If not, are there workarounds for this?
The ServiceContract attribute on your service's interface definition defaults the SessionMode property to SessionMode.Allowed, i.e.,
[ServiceContract(SessionMode = SessionMode.Allowed)]
public interface IMyContract
{
    [OperationContract(IsOneWay = true)]
    void Update(myClass[] stuff);
}
According to Juval Lowy's Programming WCF Services,
...when the SessionMode property is configured with SessionMode.Allowed, it merely allows transport sessions, but does not enforce it. The exact resulting behavior is a product of the service configuration and the binding used.
Thus, if you are using WSHttpBinding with security or reliable messaging, NetTcpBinding, or NetNamedPipeBinding, the service will behave as a per-session service. This simply means that as long as the client proxy has not been closed, a session will still be in place between the service and the client. Closing the client proxy, as suggested by Shiraz, should fix this.
Juval's book also says this with regard to one-way operations:
If the number of queued messages has exceeded the queue's capacity, then the client will block, even when issuing a one-way call. However, once the call is queued, the client is unblocked and can continue executing, while the service processes the operation in the background.
So while one-way operations do allow for fire-and-forget operation, you can still run into cases where your client may block.
Your Update method is an operation on the service.
When you open the WCF client, a connection to the service remains open until you call Close (or Abort).
You are probably not calling Close, so the connection stays open until it times out.
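For reference, a sketch of the usual close pattern on the client side; UpdateServiceClient is an assumed name for your generated proxy, and stuff is the myClass[] from the question.

using System;
using System.ServiceModel;

static class UpdateSender
{
    public static void SendUpdate(myClass[] stuff)
    {
        var client = new UpdateServiceClient();
        try
        {
            client.Update(stuff);   // one-way: returns once the message is written to the transport
            client.Close();         // completes the session instead of waiting for a timeout
        }
        catch (CommunicationException)
        {
            client.Abort();         // tear the channel down if a graceful close fails
        }
        catch (TimeoutException)
        {
            client.Abort();
        }
    }
}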

Can a WCF service create its own host?

I have a client/server type of application and I'd like the server object to create its own host. It looks something like this:
public class Server : IServer
{
    private ServiceHost m_Host;

    public Server()
    {
        m_Host = new ServiceHost(this);
        m_Host.Open();
    }
}
It seems to work fine when there are few message transfers occurring. But when things speed up (my application requires that data is transferred every 50 ms), the server hangs and the transfers stop after a few seconds without throwing an exception.
So, is it possible for an object to create its own host? Or do I really have to create it in Main() or do something else?
EDIT: I think the problem in this case is that I want the object that implements the service to create its own ServiceHost.
There's nothing really stopping any object from creating an instance of ServiceHost.
The big question then is: can you guarantee that your object containing the service host stays alive? Or has it been garbage collected by any chance?
We use Windows (NT) Services to host our own custom service host classes to provide around-the-clock availability for WCF services - works just fine.
Marc
To be a WCF service it simply needs to implement the service contract. There's nothing to stop you adding more methods to open and close an instance of itself as a service.
Check out the ServiceBehaviorAttribute, which allows you to specify how your service ... behaves. ;) The ConcurrencyMode property defines the support for multithreading and defaults to single-threaded mode, and the InstanceContextMode defines whether the service object is per-session, per-call, or a singleton.
Quote from ConcurrencyMode:
Setting ConcurrencyMode to Single instructs the system to restrict instances of the service to one thread of execution at a time, which frees you from dealing with threading issues. A value of Multiple means that service objects can be executed by multiple threads at any one time. In this case, you must ensure thread safety.
Quote from InstanceContextMode:
If the InstanceContextMode value is set to Single the result is that your service can only process one message at a time unless you also set the ConcurrencyMode value to Multiple.
We could really use some code examples of your service to further debug the behavior you're describing. For example, is your service object expensive to construct (assuming a non-singleton implementation), or does the operation slow down? Do you know where the time is spent: is it in your code, or could it be a firewall limiting the connection? What protocol do you use?
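For reference, a minimal self-hosting sketch along the lines described above. The IServer operation shown is hypothetical (the real one is whatever your contract defines), and the endpoints are assumed to come from app.config; note that passing this to ServiceHost requires the singleton instance mode.

using System.ServiceModel;

[ServiceContract]
public interface IServer
{
    [OperationContract(IsOneWay = true)]
    void Push(byte[] data);   // hypothetical operation standing in for the real contract
}

// ConcurrencyMode.Multiple lets more than one message be dispatched at a time,
// which matters at a 50 ms message rate; the implementation must then be thread-safe.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class Server : IServer
{
    private readonly ServiceHost m_Host;

    public Server()
    {
        m_Host = new ServiceHost(this);   // endpoints/bindings come from configuration in this sketch
        m_Host.Open();
    }

    public void Push(byte[] data)
    {
        // handle the incoming data
    }

    public void Shutdown()
    {
        m_Host.Close();
    }
}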
