Slow response sending data to IoT Hub using DeviceClient - C#

I'm using a .NET web app as a buffer between an IoT device and IoT Hub. The IoT device sends data to the web app which then routes that data to Azure IoT Hub. I'm using the Device Client to do this. It worked fine until the number of requests started growing as we went into production.
Here is the bit of code that sends data to IoT Hub.
public async Task SendDataToIoTHub(Message eventMessage, string deviceId, string deviceKey)
{
    string connectionString = $"HostName={IoTHubHostname};DeviceId={deviceId};SharedAccessKey={deviceKey}";
    using (DeviceClient deviceClient = DeviceClient.CreateFromConnectionString(connectionString))
    {
        if (deviceClient == null)
        {
            return;
        }
        await deviceClient.SendEventAsync(eventMessage);
    }
}
I get about 3,000-4,000 requests per minute, and the code above is called for each one. It has become really unreliable since the request rate reached that level. At one point there was a 'too many TCP connections' warning, and the entire application started slowing down dramatically.
I'm wondering if this initialization of the device client is correct or if there is a better way to do it.
Thanks!

This could be due to reaching the maximum outbound TCP connections. The limits are:
1,920 connections per B1/S1/P1 instance
3,968 connections per B2/S2/P2 instance
8,064 connections per B3/S3/P3 instance
16,000 connections per I1/I2/I3 instance
Source (This post also shows how to view the number of TCP connections over time)
You're opening a new connection every time you create a new DeviceClient, so with those rates, you will hit some limits. You could scale up your instance, but maybe you should consider keeping a single DeviceClient per device (if you don't have thousands of devices).
There might be a reason for this "web app as a buffer" as you call it, and if you want to continue using it, you might consider using the REST API to send your device events instead. You don't need an AMQP connection if you're only going to send one event.
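If you do keep the buffer and a DeviceClient per device, a minimal caching sketch (the ConcurrentDictionary cache is an illustrative choice, not the poster's code; assumes Microsoft.Azure.Devices.Client and System.Collections.Concurrent):
// One cached DeviceClient per device instead of one per request.
private static readonly ConcurrentDictionary<string, DeviceClient> _clients =
    new ConcurrentDictionary<string, DeviceClient>();

public async Task SendDataToIoTHub(Message eventMessage, string deviceId, string deviceKey)
{
    // GetOrAdd caches one client per device ID, so requests reuse a long-lived
    // connection rather than opening a new TCP connection every time.
    DeviceClient deviceClient = _clients.GetOrAdd(deviceId, id =>
        DeviceClient.CreateFromConnectionString(
            $"HostName={IoTHubHostname};DeviceId={id};SharedAccessKey={deviceKey}"));

    await deviceClient.SendEventAsync(eventMessage);
}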

Related

Back-end for Location sharing between two clients

I want to create one back-end for location sharing between clients; the back-end should be in .net-core with MS-SQL.
The simple approach is: user1 sends x and y coordinates, which are saved to the DB every 2 seconds, and user2 calls the GET API to fetch the x and y coordinates every 2 seconds.
Issue - If 1 million users register, then 1 million requests will hit every 2 seconds. Not good for the servers or MS-SQL.
Question - Is it possible to create a web socket for every user who sends their location, push the data to that socket every 2 seconds, and attach any other user who wants to see the location to that socket?
Or is there any other approach?
You can use an in-memory database (Redis, for example) to store the data (both to save it and to answer reads).
Use web sockets as the transport, but even with web sockets you need to store the data somewhere; storing it in the application itself is not a good idea.
Maybe you can use SignalR: receive data from one client and send it to all connected clients, though there will be some difficulties with many connections.
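For the Redis suggestion above, a minimal sketch of saving and reading the latest coordinates (assuming StackExchange.Redis; the key pattern and the LocationStore name are illustrative):
using System.Threading.Tasks;
using StackExchange.Redis;

public class LocationStore
{
    private readonly IDatabase _db;

    public LocationStore(ConnectionMultiplexer redis)
    {
        _db = redis.GetDatabase();
    }

    // Called roughly every 2 seconds by the user sharing their location.
    public Task SaveAsync(string userId, double x, double y)
    {
        return _db.HashSetAsync($"loc:{userId}", new[]
        {
            new HashEntry("x", x),
            new HashEntry("y", y)
        });
    }

    // Called by the user who wants to see the location.
    public async Task<(double x, double y)?> GetAsync(string userId)
    {
        HashEntry[] entries = await _db.HashGetAllAsync($"loc:{userId}");
        if (entries.Length == 0)
        {
            return null;
        }
        double x = 0, y = 0;
        foreach (var entry in entries)
        {
            if (entry.Name == "x") x = (double)entry.Value;
            if (entry.Name == "y") y = (double)entry.Value;
        }
        return (x, y);
    }
}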
You could use ASP.NET SignalR; it can send messages to all connected clients simultaneously (for example, a chat room), or send messages to specific clients or groups of clients.
The SignalR Hubs API provides the following method to send messages to clients:
SendMessage sends a message to all connected clients, using Clients.All.
SendMessageToCaller sends a message back to the caller, using Clients.Caller.
SendMessageToGroup sends a message to all clients in the SignalR Users group, using Clients.Group.
public Task SendMessage(string user, string message)
{
    return Clients.All.SendAsync("ReceiveMessage", user, message);
}

public Task SendMessageToCaller(string user, string message)
{
    return Clients.Caller.SendAsync("ReceiveMessage", user, message);
}

public Task SendMessageToGroup(string user, string message)
{
    return Clients.Group("SignalR Users").SendAsync("ReceiveMessage", user, message);
}
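Note that SendMessageToGroup only reaches clients that have joined the group; a minimal sketch of an extra hub method for that (the AddToGroup name is illustrative):
public async Task AddToGroup()
{
    // Adds the calling connection to the "SignalR Users" group, so it receives
    // messages sent via Clients.Group("SignalR Users").
    await Groups.AddToGroupAsync(Context.ConnectionId, "SignalR Users");
}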
For more detailed information about using it, check the following links:
Tutorial: Get started with ASP.NET Core SignalR
How can I make one to one chat system in Asp.Net.Core Mvc Signalr?.
Mapping SignalR Users to Connections

Azure Cloud to Device direct message using Cloud Service-Worker Role

We are working on a project which requires a high volume of messages/commands to be exchanged between devices.
We are using a Cloud Service Worker Role for processing commands and sending them to the relevant devices via Cloud to Device Direct Methods.
The worker role configuration is A2v2 (2 cores with 4 GB RAM). There is no problem with worker role capacity; CPU and memory are all under control.
For a lower number of messages/commands it works fine (e.g. 500 messages), but when the number of messages increases we face a performance issue (e.g. 1,000 messages). We are targeting <5 sec latency. When we logged in to the Worker Role VM we found that the number of TCP connections keeps increasing, which causes slowness while sending messages/commands to devices.
The following is the code we are using to send messages via Direct Method. We are looking for a better way to dispose of the ServiceClient object after each direct method call.
var methodInvocation = new CloudToDeviceMethod(methodInfo.MethodName) { ResponseTimeout = TimeSpan.FromSeconds(methodInfo.ResponseTimeout) };
// set the payload
methodInvocation.SetPayloadJson(methodInfo.Payload);
// invokes the direct method
var response = _serviceClient.InvokeDeviceMethodAsync(methodInfo.DeviceId, methodInvocation);
if (_serviceClient != null)
{
    // closes the service client connection
    _serviceClient.CloseAsync();
    _serviceClient.Dispose();
}
At last we found the solution. The Azure ServiceClient object was not being properly closed and disposed. We now close and dispose of the ServiceClient object explicitly after each successful direct method call:
await _serviceClient.InvokeDeviceMethodAsync(methodInfo.DeviceId, methodInvocation);
// closes the service client connection
await _serviceClient.CloseAsync();
_serviceClient.Dispose();
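Putting it together, a minimal sketch of the whole send path under that approach (the SendDirectMethodAsync name and the DirectMethodInfo parameter type are stand-ins for the question's methodInfo object; the calls are from the Microsoft.Azure.Devices service SDK):
public async Task<CloudToDeviceMethodResult> SendDirectMethodAsync(DirectMethodInfo methodInfo, string iotHubConnectionString)
{
    ServiceClient serviceClient = ServiceClient.CreateFromConnectionString(iotHubConnectionString);
    try
    {
        var methodInvocation = new CloudToDeviceMethod(methodInfo.MethodName)
        {
            ResponseTimeout = TimeSpan.FromSeconds(methodInfo.ResponseTimeout)
        };
        methodInvocation.SetPayloadJson(methodInfo.Payload);

        // Await the invocation so the connection is not torn down mid-call.
        return await serviceClient.InvokeDeviceMethodAsync(methodInfo.DeviceId, methodInvocation);
    }
    finally
    {
        // Close and dispose so the underlying TCP connection is released.
        await serviceClient.CloseAsync();
        serviceClient.Dispose();
    }
}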

What could be causing a delay in TCP ACK in my app

I have a .NET windows service that uses an open source library (Asterisk.net - C#) to listen to TCP connections on a specific port.
This service is deployed on a number of VM instances of Windows 7 (all from the same source image). The connections all come from the VM host (CentOS). On one (and only one) of these instances, the ACK response from Windows to the connecting client is delayed by three seconds on the occasional incoming connection. Other times, the ACK is sent immediately.
This delay causes the client to time out (the original post included a packet capture of the exchange).
I'm no expert at TCP sockets, but it seems from debugging that these ACKs are sent before the connection is handed to the app (or in this case the library), even if the accepting thread is blocked. So is this a problem at the Windows or .NET library level?
The code that handles the inbound connections quickly hands it off to a thread and returns.
public IO.SocketConnection Accept()
{
    if (tcpListener != null)
    {
        TcpClient tcpClient = tcpListener.AcceptTcpClient();
        if (tcpClient != null)
            return new IO.SocketConnection(tcpClient, encoding);
    }
    return null;
}
So, what could be causing this infuriating delay? What am I missing?
The three packets you recorded are the connection establishment (the TCP three-way handshake). See https://en.m.wikipedia.org/wiki/Transmission_Control_Protocol#Connection_establishment.
However, if it happens on only one VM, then it's not a programming issue. Try loading the VMs in a different order, check firewall settings, etc.
Also, test with network terminal software such as HW group's Hercules, and monitor the connections using Sysinternals TCPView.

How to send updates from server to clients?

I am building a C#/WPF project.
Its architecture is this:
A console application which will be on a virtual machine (or my home computer) that will be the server side.
A wpf application that will be the client app.
Now my problem is this - I want the server to be able to send changes to the clients. If, for example, I have a change for client ABC, I want the server to know how to call a service on that client's computer.
The problem is, that I don't know how the server will call the clients.
A small example in case I didn't explain it well:
The server is on computer 1, and there are two clients, on computers 2 and 3.
Client 2 has a Toyota car and client 3 has a BMW car.
The server on computer 1 wants to tell client 2 that it has a new car, an Avenger.
How do I keep track and call services on the clients?
I thought of saving their IP address (from calling ipconfig from the cmd) in the DB - but isn't that dependent on the Wi-Fi/network they are connected to?
Thanks for any help!
You could try implementing SignalR. It is a great library that uses web sockets to push data to clients.
Edit:
SignalR can help you solve your problem by allowing you to set up Hubs on your console app (server) that WPF application (clients) can connect to. When the clients start up you will register them with a specified Hub. When something changes on the server, you can push from the server Hub to the client. The client will receive the information from the server and allow you to handle it as you see fit.
Rough mockup of some code:
namespace Server {
    public class YourHub : Hub {
        public void SomeHubMethod(string userName) {
            //clientMethodToCall is a method in the WPF application that
            //will be called. Client needs to be registered to hub first.
            Clients.User(userName).clientMethodToCall("This is a test.");

            //One issue you may face is mapping client connections.
            //There are a couple different ways/methodologies to do this.
            //Just figure out what will work best for you.
        }
    }
}
namespace Client {
    public class HubService {
        // Expose the connection so the caller can start it after wiring up handlers.
        public HubConnection Connection { get; private set; }

        public IHubProxy CreateHubProxy() {
            Connection = new HubConnection("http://serverAddress:serverPort/");
            IHubProxy yourHubProxy = Connection.CreateHubProxy("YourHub");
            return yourHubProxy;
        }
    }
}
Then in your WPF window:
var hubService = new HubService();
var yourHubProxy = hubService.CreateHubProxy();
// Register handlers before starting the connection so no messages are missed.
yourHubProxy.On("clientMethodToCall", () => DoSomethingWithServerData());
hubService.Connection.Start().Wait();
You need to create some kind of subscription model for the clients to the server to handle a Publish-Subscribe channel (see http://www.enterpriseintegrationpatterns.com/patterns/messaging/PublishSubscribeChannel.html). The basic architecture is this:
Client sends a request to the messaging channel to register itself as a subscriber to a certain kind of message/event/etc.
Server sends messages to the channel to be delivered to subscribers to that message.
There are many ways to handle this. You could use some of the Azure services (like Event hub, or Topic) if you don't want to reinvent the wheel here. You could also have your server application track all of these things (updates to IP addresses, updates to subscription interest, making sure that messages don't get sent more than once; taking care of message durability [making sure messages get delivered even if the client is offline when the message gets created]).
In general, whatever solution you choose is plagued with a common problem - clients hide behind firewalls and have dynamic IP addresses. This makes it difficult (I've heard of technologies claiming to overcome this but haven't seen any in action) for a server to push to a client.
In reality, the client talks and the server listens and responds. However, you can use this approach to simulate a push by:
1. polling (the client periodically asks for information)
2. long polling (the client asks for information and the server holds onto the request until information arrives or a timeout occurs)
3. sockets (the client requests server connection that is used for bi-directional communication for a period of time).
Knowing those terms, your next choice is to write your own or use a third-party service (Azure, Amazon, or another) to deliver messages for you. I personally like long polling because it is easy to implement. In my application, I have the following setup.
A web API server on Azure with an endpoint that listens for message requests
A simple loop inside the server code that checks the database for new messages every 100ms.
A client that calls the API, handling the response.
As mentioned, there are many ways to do this. In your particular case, one way would be as follows.
Client A calls server API to listen for message
Server holds onto call, waiting for new message entry in database
Client B calls server API to post new message
Server saves message to database
Server instance from step 2 sees new message
Server returns message to Client A.
Also, the message doesn't have to be stored in a database - it just depends on your needs.
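A minimal sketch of that long-polling endpoint, assuming ASP.NET Core Web API here; IMessageStore, the 100 ms poll interval, and the 30-second timeout are illustrative choices, not the poster's actual code:
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/messages")]
public class MessagesController : ControllerBase
{
    private readonly IMessageStore _store; // hypothetical data-access abstraction

    public MessagesController(IMessageStore store)
    {
        _store = store;
    }

    // Client A calls this; the request is held open until a message arrives
    // or the timeout elapses (long polling).
    [HttpGet("{clientId}")]
    public async Task<IActionResult> GetNextMessage(string clientId)
    {
        var timeout = TimeSpan.FromSeconds(30);
        var started = DateTime.UtcNow;

        while (DateTime.UtcNow - started < timeout)
        {
            var message = await _store.TryDequeueAsync(clientId);
            if (message != null)
            {
                return Ok(message);    // new message found: return immediately
            }
            await Task.Delay(100);     // check the store again in 100 ms
        }

        return NoContent();            // timeout: the client simply polls again
    }

    // Client B posts a new message for some recipient.
    [HttpPost("{clientId}")]
    public async Task<IActionResult> PostMessage(string clientId, [FromBody] string message)
    {
        await _store.EnqueueAsync(clientId, message);
        return Ok();
    }
}

// Hypothetical store interface; could be backed by a database or an in-memory queue.
public interface IMessageStore
{
    Task<string> TryDequeueAsync(string clientId);
    Task EnqueueAsync(string clientId, string message);
}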
Sounds like you want to track users à la https://www.simple-talk.com/dotnet/asp.net/tracking-online-users-with-signalr/ , but in a desktop app in the sense of http://www.codeproject.com/Articles/804770/Implementing-SignalR-in-Desktop-Applications or damienbod.wordpress.com/2013/11/20/signalr-a-complete-wpf-client-using-mvvm/ .

Azure Service Bus Subscriber regularly phoning home?

We have a pub/sub application that involves an external client subscribing to a Web Role publisher via an Azure Service Bus Topic. Our current billing cycle indicates we've sent/received >25K messages, while our dashboard indicates we've sent <100. We're investigating our implementation and checking our assumptions in order to understand the disparity.
As part of our investigation we've gathered wireshark captures of client<=>service bus traffic on the client machine. We've noticed a regular pattern of communication that we haven't seen documented and would like to better understand. The following exchange occurs once every 50s when there is otherwise no activity on the bus:
The client pushes ~200B to the service bus.
10s later, the service bus pushes ~800B to the client. The client registers the receipt of an empty message (determined via breakpoint.)
The client immediately responds by pushing ~1000B to the service bus.
Some relevant information:
This occurs when our web role is not actively pushing data to the service bus.
Upon receiving a legit message from the Web Role, the pattern described above will not occur again until a full 50s has passed.
Both client and server connect to sb://namespace.servicebus.windows.net via TCP.
Our application messages are <64 KB
Questions
What is responsible for the regular, 3-packet message exchange we're seeing? Is it some sort of keep-alive?
Do each of the 3 packets count as a separately billable message?
Is this behavior configurable or otherwise documented?
EDIT:
This is the code the receives the messages:
private void Listen()
{
    _subscriptionClient.ReceiveAsync().ContinueWith(MessageReceived);
}

private void MessageReceived(Task<BrokeredMessage> task)
{
    if (task.Status != TaskStatus.Faulted && task.Result != null)
    {
        task.Result.CompleteAsync();
        // Do some things...
    }
    Listen();
}
I think what you are seeing is the Receive call in the background. Behind the scenes the Receive calls all use long polling: they call out to the Service Bus endpoint and ask for a message. The Service Bus service gets that request and, if it has a message, returns it immediately. If it doesn't have a message, it holds the connection open for a time period in case a message arrives. If a message arrives within that time frame it is returned to the client; if not, a response is sent to the client indicating that no message was there (aka, your null BrokeredMessage). If you call Receive with no overloads (like you've done here), it immediately makes another request. This loop continues until a message is received.
Thus, what you are seeing is the number of times the client requests a message when there isn't one there. The long polling makes this nicer than Windows Azure Storage Queues, which just immediately return a null result if there is no message. For both technologies it is common to implement an exponential back-off for requests; there are lots of examples out there of how to do this. This cuts back on how often you need to go check the queue and can reduce your transaction count.
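For illustration, a minimal back-off sketch around the same _subscriptionClient (the 30-second server wait time and the one-minute cap are illustrative; ReceiveAsync(TimeSpan) comes from the same WindowsAzure.ServiceBus SDK used above):
private async Task ListenWithBackOffAsync()
{
    TimeSpan delay = TimeSpan.Zero;
    TimeSpan maxDelay = TimeSpan.FromMinutes(1);

    while (true)
    {
        BrokeredMessage message = await _subscriptionClient.ReceiveAsync(TimeSpan.FromSeconds(30));

        if (message != null)
        {
            await message.CompleteAsync();
            // Do some things...
            delay = TimeSpan.Zero;   // reset the back-off after real work
        }
        else
        {
            // No message: wait before asking again, doubling the delay up to a cap,
            // which cuts down the number of billable idle receives.
            delay = delay == TimeSpan.Zero
                ? TimeSpan.FromSeconds(1)
                : TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, maxDelay.Ticks));
            await Task.Delay(delay);
        }
    }
}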
To answer your questions:
Yes, this is normal expected behaviour.
No, this is only one transaction. For Service Bus you get charged a transaction each time you put a message on a queue and each time a message is requested (which can be a little opaque, given that Receive makes multiple calls in the background). Note that the docs point out that you get charged for each idle transaction (meaning a null result from a Receive call).
Again, you can implement a back-off methodology so that you aren't hitting the queue so often. Another suggestion I've recently heard: if you have a queue that isn't seeing a lot of traffic, you could check the queue depth to see if it is > 0 before entering the processing loop, and if you get no messages back from a Receive call you could go back to watching the queue depth. I've not tried that, and I'd think you could get throttled if you did the queue depth check too often.
If these are your production numbers then your subscription isn't really processing a lot of messages. It would likely be a really good idea to have a back-off policy up to a wait time that is acceptable before a message is processed. For example, if it is okay for a message to sit for more than 10 minutes, create a back-off approach that eventually checks for a message only every 10 minutes; when it gets one, process it and immediately check again.
Oh, there is a Receive overload that takes a timeout, but I'm not 100% sure whether that is a server timeout or a local timeout. If it is local then it could still be making calls to the service every X seconds. I think this is based on the OperationTimeout value set on the Messaging Factory settings when creating the SubscriptionClient. You'd have to test that.
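If you want to experiment with that, a minimal sketch of where OperationTimeout is set (the key name, the 60-second value, and the topic/subscription names are placeholders; the types are from the WindowsAzure.ServiceBus SDK):
var settings = new MessagingFactorySettings
{
    // Placeholder credentials; use your own SAS key name and key.
    TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider("keyName", "key"),
    OperationTimeout = TimeSpan.FromSeconds(60)
};
var factory = MessagingFactory.Create(new Uri("sb://namespace.servicebus.windows.net"), settings);
var subscriptionClient = factory.CreateSubscriptionClient("topicPath", "subscriptionName");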
