WCF - The socket connection was aborted - C#

I'm getting the following exception when I run a WCF service on a remote host with TCP binding; basicHttpBinding works, however.
It also works when I host the service on the same machine.
And when I use the test client to connect to the remote machine, TCP works as well.
Why do you think I'm getting this exception, and what's the fix?
Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
at IService1.GetSectionEntity(String id)
at Service1Client.GetSectionEntity(String id) in C:\src\Service1.cs:line 2318
at GetSectionSync(Id sectionId, Boolean loadFromDbIfNotInCache)
2011-02-22 10:41:13,888|WARN | HttpHandler|The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:00:29.9870000'.
2011-02-22 10:41:13,891|ERROR|HttpHandler|The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:00:29.9870000'.
System.ServiceModel.CommunicationException: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:00:29.9870000'. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
at System.ServiceModel.Channels.SocketConnection.ReadCore(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout, Boolean closing)
--- End of inner exception stack trace ---

When in doubt, turn on WCF diagnostics: http://msdn.microsoft.com/en-us/library/ms733025.aspx
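For reference, a minimal tracing setup along the lines of that article; the listener name, switch values, and log path are illustrative:
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <add name="traceListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="c:\log\Traces.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>
The resulting .svclog file can be opened with the Service Trace Viewer (SvcTraceViewer.exe) to see which side dropped the connection and why.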

Related

Inserting a large amount of data in Apache Ignite

I am using the Apache Ignite .NET thin client 2.8.1 to insert a large amount of data into an Apache Ignite node. Ignite is hosted on an Amazon Linux AMI.
I am trying to insert over 500,000 records using the PutAllAsync method:
await cacheClient.PutAllAsync(entities); // ICacheClient<int, T>
After that I see the following exception in client logs:
2021-02-03 07:23:37.9917 - NO_TRACE - ****************** - Error: Could not get data from cache
Exception has been thrown by the target of an invocation. System.Reflection.TargetInvocationException System.Object InvokeMethod(System.Object, System.Object[], System.Signature, Boolean) System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.IO.IOException: Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
--- End of inner exception stack trace ---
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
at Apache.Ignite.Core.Impl.Client.ClientSocket.SocketRead(Byte[] buf, Int32 pos, Int32 len)
at Apache.Ignite.Core.Impl.Client.ClientSocket.ReceiveBytes(Int32 size)
at Apache.Ignite.Core.Impl.Client.ClientSocket.ReceiveMessage()
at Apache.Ignite.Core.Impl.Client.ClientSocket.SendRequest(RequestMessage& reqMsg)
at Apache.Ignite.Core.Impl.Client.ClientSocket.DoOutInOp[T](ClientOp opId, Action`1 writeAction, Func`2 readFunc, Func`3 errorFunc)
at Apache.Ignite.Core.Impl.Client.Cache.CacheClient`2.DoOutInOp[T](ClientOp opId, Action`1 writeAction, Func`2 readFunc)
at Apache.Ignite.Core.Impl.Client.Cache.CacheClient`2.DoOutOp(ClientOp opId, Action`1 writeAction)
And the following error in the Ignite logs:
[07:44:53,529][WARNING][grid-timeout-worker-#22][ClientListenerNioListener] Unable to perform handshake within timeout [timeout=10000, remoteAddr=/172.31.56.14:52631]
Are there any best practices for inserting this many records into an Ignite cache? Should I insert batches within a transaction?
"Ignite is hosted on Amazon Linux AMI"
Where do you run the thin client? I suspect the issue may simply be a slow connection between the server and the client.
Can you test the connection speed?
What is the size of the data, in megabytes?
How much time does PutAll take for 10, 100, 1000 entries?
There is no DataStreamer in the thin client (yet), so my suggestions are:
Split one big PutAll into multiple smaller ones (e.g. 100 entries at a time; see the sketch after this list)
Increase timeouts on server and client
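A minimal sketch of the batching approach, assuming the cacheClient and entities from the question (the batch size of 100 and the MyEntity type are illustrative; tune the batch size for your payload):
const int batchSize = 100;
var batch = new Dictionary<int, MyEntity>(batchSize); // MyEntity stands in for your T

foreach (var pair in entities)
{
    batch[pair.Key] = pair.Value;
    if (batch.Count == batchSize)
    {
        await cacheClient.PutAllAsync(batch); // ICacheClient<int, MyEntity>
        batch.Clear();
    }
}

if (batch.Count > 0)
    await cacheClient.PutAllAsync(batch); // flush the remainder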
UPDATE: DataStreamer is now available in the Ignite.NET thin client:
IIgniteClient.GetDataStreamer
https://ptupitsyn.github.io/Whats-New-In-Ignite-Net-2.11/
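A sketch of what that looks like, going by the linked post; the cache name and entity type are placeholders:
using (var streamer = client.GetDataStreamer<int, MyEntity>("myCache"))
{
    foreach (var pair in entities)
        streamer.Add(pair.Key, pair.Value);
} // disposing the streamer flushes any buffered entries to the cluster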
Maybe you actually need to use some sort of streaming mode.
If you have persistence, consider disabling WAL beforehand.
Maybe you have hit a specific issue such as IGNITE-14076; see if you can work around it by using smaller batches.
Try adjusting the thin-client timeouts, both in ClientConnectorConfiguration on the server and in the client configuration; a sketch follows.
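For example, something along these lines; the values are illustrative, not recommendations:
// Client side: raise the socket timeout (the default is 5 seconds).
var clientCfg = new IgniteClientConfiguration
{
    Endpoints = new[] { "server-host:10800" },
    SocketTimeout = TimeSpan.FromSeconds(30)
};
var client = Ignition.StartClient(clientCfg);

// Server side: raise the handshake timeout seen in the log (default 10000 ms).
var serverCfg = new IgniteConfiguration
{
    ClientConnectorConfiguration = new ClientConnectorConfiguration
    {
        HandshakeTimeout = TimeSpan.FromSeconds(30)
    }
};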

Apache NMS throws 'An established connection was aborted by the software in your host machine' under heavy use

Background:
A C# WPF application talking to a Java server running on Linux via ActiveMQ/JSON.
Five connections in total:
Queues: 2
Topics: 3 (1 producer, 2 consumers)
Problem:
Under heavy use (a throughput of around 200 messages sent/received in under 500 ms, with a memory working set around 1-1.2 GB), the client throws ‘An established connection was aborted by the software in your host machine’.
Sample stack:
Apache.NMS.NMSException: Unable to read data from the transport connection: An established connection was aborted by the software in your host machine. ---> System.IO.IOException: Unable to read data from the transport connection: An established connection was aborted by the software in your host machine. ---> System.Net.Sockets.SocketException: An established connection was aborted by the software in your host machine
at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
--- End of inner exception stack trace ---
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
at System.IO.BufferedStream.Read(Byte[] array, Int32 offset, Int32 count)
at System.IO.BinaryReader.FillBuffer(Int32 numBytes)
at System.IO.BinaryReader.ReadInt32()
at Apache.NMS.Util.EndianBinaryReader.ReadInt32()
at Apache.NMS.ActiveMQ.OpenWire.OpenWireFormat.Unmarshal(BinaryReader dis)
at Apache.NMS.ActiveMQ.Transport.Tcp.TcpTransport.ReadLoop()
Tried so far:
Switched off inactivity monitoring to reduce traffic across the 5 connections, mostly because the application has its own heartbeat implementation.
Set ConnectionFactory.OptimizeAcknowledge to true to batch acknowledgements. (A sketch of both settings follows.)
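For reference, a sketch of those two settings with Apache.NMS.ActiveMQ; the broker URI is a placeholder, and wireFormat.MaxInactivityDuration=0 is assumed here as the URI parameter for switching the inactivity monitor off:
var factory = new Apache.NMS.ActiveMQ.ConnectionFactory(
    "tcp://broker-host:61616?wireFormat.MaxInactivityDuration=0");
factory.OptimizeAcknowledge = true; // batch message acknowledgements

using (var connection = factory.CreateConnection())
{
    connection.Start();
    // create sessions, producers and consumers as usual...
}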

Reproduce connection reset in C#

How can I reproduce the following exception for testing purposes:
Exception: System.IO.IOException
Message: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
Nested Exception
Exception: System.Net.Sockets.SocketException Message: An existing connection was forcibly closed by the remote host
Source: System
at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
It occurs intermittently. I have written some code that uses the retry pattern; however, I'd like to be able to test that my fix works.
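One way to force this locally is an abortive close: a zero-linger Close sends a TCP RST instead of a graceful FIN, so the peer's next Receive fails with "An existing connection was forcibly closed by the remote host" (SocketException 10054). A minimal sketch, with the port chosen arbitrarily:
using System.Net;
using System.Net.Sockets;

var listener = new TcpListener(IPAddress.Loopback, 9050);
listener.Start();

using (var victim = listener.AcceptTcpClient())
{
    // Zero linger turns Close into an abortive close (RST)
    // rather than a graceful shutdown (FIN).
    victim.Client.LingerState = new LingerOption(true, 0);
    victim.Client.Close();
}

listener.Stop();
Point the code under test at 127.0.0.1:9050; its next read on that connection should throw the IOException with the nested SocketException above.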

Can Linq AsParallel() dispose of SoapHttpClientProtocol objects prematurely?

In an ASP.NET MVC 4 web application that I'm working on, I have one page that basically generates a report by getting data from a SOAP service.
My code basically looks like this:
List<CustomThings> serverInfos = ServerInfos;
serverInfos.AsParallel().ForAll(srvInfo =>
{
    SoapHttpClientProtocol soapProxy = CreateProxy(srvInfo);
    // make SOAP calls through the proxy
    // store results in the proper places
});
The reason I'm doing AsParallel here is that doing several requests over HTTP in a serial fashion takes forever. I should add that this code does work, although only sporadically.
Is it possible that things are getting disposed of in an unpredictable fashion, and PLINQ is not a good solution for what I'm trying to do here?
Is it possible that another threading issue could cause an error which makes the soap client "give up"?
Additional Info
This particular SOAP proxy is talking to an ArcGIS Server. Normally, you can check the server logs and see when particular requests are initiated and whether they failed. There is nothing showing in these logs.
Here's an example of an inner exception stack trace I get from the AsParallel code.
Exception: System.AggregateException: One or more errors occurred. ---> System.Net.WebException: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
--- End of inner exception stack trace ---
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead)
--- End of inner exception stack trace ---
at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
at ESRI.ArcGIS.SOAP.FeatureServerProxy.Query(Int32 LayerOrTableID, String DefinitionExpression, QueryFilter QueryFilter, ServiceDataOptions ServiceDataOptions, String GdbVersion, Double MaximumAllowableOffset)
at System.Linq.Parallel.SelectQueryOperator`2.SelectQueryOperatorResults.GetElement(Int32 index)
at System.Linq.Parallel.QueryResults`1.get_Item(Int32 index)
at System.Linq.Parallel.PartitionedDataSource`1.ListContiguousIndexRangeEnumerator.MoveNext(T& currentElement, Int32& currentKey)
at System.Linq.Parallel.PipelineSpoolingTask`2.SpoolingWork()
at System.Linq.Parallel.SpoolingTaskBase.Work()
at System.Linq.Parallel.QueryTask.BaseWork(Object unused)
at System.Linq.Parallel.QueryTask.<.cctor>b__0(Object o)
at System.Threading.Tasks.Task.InnerInvoke()
at System.Threading.Tasks.Task.Execute()
PLINQ does not even know your connection object exists. It cannot close it.
Read the message carefully:
An existing connection was forcibly closed by the remote host.
The server closed the connection in an unexpected way. Your client is not at fault.
Interpreting the exception precisely is an essential debugging skill. This information was right there in the exception message.
Maybe you are generating too much load. Set a sustainable degree of parallelism. The default heuristics are for CPU work, not for IO.
.WithDegreeOfParallelism(10)
"A connection that was expected to be kept alive was closed by the server."
This could mean that the server does not support HTTP keep-alive.
I don't think you are doing anything terribly wrong by using AsParallel for your SOAP HTTP requests, and I don't think it is a threading issue.
However, the parallel requests evidently push your client/server to their connection limits, and that is why you are seeing the connections getting closed.
I would bet your client, server, or both are not configured to handle the number of concurrent connections you are issuing; that is why it works when you run the requests serially.
I am guessing you don't have access to the server config, so one thing you can do is control the number of parallel requests you issue to the server at the same time with the ParallelEnumerable.WithDegreeOfParallelism setting, as in the following snippet:
serverInfos
    .AsParallel()
    .WithDegreeOfParallelism(15)
    .ForAll(srvInfo => { /* ... */ });
That way you control the parallelism, and don't risk overloading the server with a large number of requests on a small number of connections.
Regarding the client, make sure you have set the maximum number of concurrent client connections to an appropriate value, so that your requests can use separate connections to the server and avoid reusing connections, which could cause your keep-alive issues.
The server may close a connection if the number of requests over it exceeds the keep-alive maximum, or if it exceeds the timeout settings.
You can set the client connection limit programmatically using the ServicePointManager.DefaultConnectionLimit setting. E.g. you could set it to 50:
System.Net.ServicePointManager.DefaultConnectionLimit = 50;
Here is an example setting the maximum number of connections to 50 using the config file:
<configuration>
  <system.net>
    <connectionManagement>
      <add address="*" maxconnection="50" />
    </connectionManagement>
  </system.net>
</configuration>
I used "50" just as an example; you should determine/measure the best setting for your setup.
Also make sure you are disposing of your HTTP connections properly after each request to prevent connection timeouts.

Remoting server forcibly closing client connections in the middle of remote calls

I have a system consisting of a server accepting remoting calls from clients, with TCP as the underlying transport layer. It normally works like a charm, but if I increase the number of clients, the server starts closing TCP connections at random in the middle of calls. Not all calls are interrupted this way.
That is really unexpected behaviour... I get no exceptions on the server side, just this client-side exception:
System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
Server stack trace:
at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
at System.Runtime.Remoting.Channels.SocketStream.Read(Byte[] buffer, Int32 offset, Int32 size)
at System.Runtime.Remoting.Channels.SocketHandler.ReadFromSocket(Byte[] buffer, Int32 offset, Int32 count)
at System.Runtime.Remoting.Channels.SocketHandler.Read(Byte[] buffer, Int32 offset, Int32 count)
at System.Runtime.Remoting.Channels.SocketHandler.ReadAndMatchFourBytes(Byte[] buffer)
at System.Runtime.Remoting.Channels.Tcp.TcpSocketHandler.ReadAndMatchPreamble()
at System.Runtime.Remoting.Channels.Tcp.TcpSocketHandler.ReadVersionAndOperation(UInt16& operation)
at System.Runtime.Remoting.Channels.Tcp.TcpClientSocketHandler.ReadHeaders()
at System.Runtime.Remoting.Channels.Tcp.TcpClientTransportSink.ProcessMessage(IMessage msg, ITransportHeaders requestHeaders, Stream requestStream, ITransportHeaders& responseHeaders, Stream& responseStream)
at System.Runtime.Remoting.Channels.BinaryClientFormatterSink.SyncProcessMessage(IMessage msg)
Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
at EBH.GuG.AgentKit.Transports.RemotingAgentHostEndPoint.SyncInvoke(Agent a, Int32 port)
Are you running on Windows XP/2000/98?
If so, XP has a built-in throttling mechanism of 10 outbound sockets (to stop you from using desktop machines as servers and force you to pay for Windows Server). My hunch is that you may be hitting this limit.
Additional:
Perhaps you could rearchitect your calls with a callback, so that they don't hold sockets open while work is being performed; that should improve your concurrent throughput. A sketch follows.
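A sketch of that callback shape with .NET Remoting; all the types here are hypothetical stand-ins, not the poster's actual API:
// The client passes a MarshalByRefObject the server can call back on,
// so the original call returns immediately instead of holding a socket
// open for the duration of the work.
public interface IWorkCallback
{
    void OnCompleted(string result); // hypothetical result type
}

public class WorkCallback : MarshalByRefObject, IWorkCallback
{
    public void OnCompleted(string result)
    {
        // handle the result on the client
    }
}

public interface IAgentHost
{
    // Returns at once; the server invokes callback.OnCompleted when done.
    void BeginWork(string workItem, IWorkCallback callback);
}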
How well do you know the network hardware between your clients and your server?
Every time I've had this kind of problem, it's always turned out to be caused by a misconfigured firewall, load balancer etc.
If you set up a test environment with your clients and server connected to the same switch, you should be able to perform a load test to work out if your code fails without any other network hardware involved...
