I have a question about the SendEventAsync() method.
I tested what happens when the LAN cable is unplugged and plugged back in.
_sendDeviceClient.SetRetryPolicy(new NoRetry()); // do not retry
_sendDeviceClient.OperationTimeoutInMilliseconds = xxx; // wait xxx milliseconds
foreach (var message in messages) // Message1, Message2, ...
{
try
{
await _sendDeviceClient.SendEventAsync(message);
//Message send. Do success process
}
catch(Exception e)
{
//Message failed. Do failed process
}
}
My log says "Message send", but IoT Hub did not receive the message.
Sometimes the log says "Message failed", yet IoT Hub did receive the message.
I don't know why this happens.
In any case, is it a problem to implement this with try/catch?
In this scenario, I am assuming you don't want to break the loop until all your messages have been sent to their respective destinations. I would suggest you use an AggregateException, which can tell you about all the messages and their status: collect the failures in a List as you go, then at the end of your loop pass that List to its constructor and throw it:
AggregateException aggregateEx = new AggregateException(errors);
throw aggregateEx;
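Putting that together, a minimal sketch (assuming the asker's `_sendDeviceClient` and a `messages` collection) might look like:

```csharp
var errors = new List<Exception>();

foreach (var message in messages) // Message1, Message2, ...
{
    try
    {
        await _sendDeviceClient.SendEventAsync(message);
        // Message sent: do success processing
    }
    catch (Exception e)
    {
        // Remember the failure but keep sending the remaining messages
        errors.Add(e);
    }
}

// After the loop, surface all failures at once
if (errors.Count > 0)
{
    throw new AggregateException(errors);
}
```

The caller can then inspect `AggregateException.InnerExceptions` to see every message that failed, instead of only the first one.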
An application that runs on a device has to manage the mechanisms for connection, re-connection, and the retry logic for sending and receiving messages. Also, the retry strategy requirements depend heavily on the device's IoT scenario, context, and capabilities.
The Azure IoT Hub device SDKs aim to simplify connecting and communicating from cloud-to-device and device-to-cloud. These SDKs provide a robust way to connect to Azure IoT Hub and a comprehensive set of options for sending and receiving messages.
Most likely, message delivery fails due to a connection failure, which can happen at several levels:
1) Network errors: disconnected socket and name resolution errors
2) Protocol-level errors for HTTP, AMQP, and MQTT transport: detached links or expired sessions
3) Application-level errors that result from either local mistakes (such as invalid credentials) or service behavior (for example, exceeding the quota or being throttled)
The device SDKs detect errors at all three levels. OS-related errors and hardware errors are not detected or handled by the device SDKs. The SDK design is based on the Transient Fault Handling guidance from the Azure Architecture Center.
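As a rough illustration (not the asker's exact code), the .NET device SDK marks errors it considers retryable via `IotHubException.IsTransient`, which you could use to decide whether a failed send is worth retrying:

```csharp
try
{
    await _sendDeviceClient.SendEventAsync(message);
}
catch (IotHubException ex) when (ex.IsTransient)
{
    // Transient error (e.g. a network blip): retrying the same message may succeed
}
catch (IotHubException ex)
{
    // Non-transient error (e.g. bad credentials, quota exceeded): retrying won't help
}
```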
I can see that you have opted for a no-retry policy, which suggests you have bandwidth or cost concerns.
Ideally, one should implement proper retry logic to help ensure delivery. Here you can take a look at the complete sample for IoT Hub.
You can read more about retry guidance here.
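With the device SDK, a retry policy can be attached in one line; as a sketch (the retry count and delays below are arbitrary placeholders, not recommendations):

```csharp
// Retry transient failures with exponential backoff:
// up to 3 attempts, waiting between 1 and 10 seconds, jittered by ~100 ms
_sendDeviceClient.SetRetryPolicy(new ExponentialBackoff(
    retryCount: 3,
    minBackoff: TimeSpan.FromSeconds(1),
    maxBackoff: TimeSpan.FromSeconds(10),
    deltaBackoff: TimeSpan.FromMilliseconds(100)));
```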
Hope it helps.
Related
We have two windows services that live on a Corporate On-Premise Server and that continually send messages to Azure Service Bus in the cloud. Although the messages do end up on the service bus eventually, there are periods of time where the messages just seem to never make it through for a long stretch of time.
This is causing delay issues for us, as we depend on the message arriving onto the service bus and being processed within a minute. However, as can be seen below, a message can be 'blocked' for stretches of up to 30-40 minutes before making its way through to Azure Service Bus. This happens every day, and almost at some time during every hour.
The errors are mainly one of the following (example logs at end of this post):
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 191.239.XX.XXX:443
Error during communication with Service Bus. Check the connection information, then retry.
No such host is known
The request operation did not complete within the allotted timeout of 00:01:10. The time allotted to this operation may have been a portion of a longer timeout. TrackingId:f2db6377-e17d-401a-b339-11fbb51c7bf7, Timestamp:19/05/2017 12:47:36 AM
The way that we send messages to the service bus is as follows, simplified below:
private TopicClient _azureTopic;
...
<Begin Loop>
if (_azureTopic == null)
{
    var connectionString = "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=managerfiddev;SharedAccessKey=AABBCCDDEEFFGGHHHASDFADFAadfadfdfz=;EntityPath=mytopic";
    _azureTopic = TopicClient.CreateFromConnectionString(connectionString);
    _azureTopic.RetryPolicy = RetryPolicy.NoRetry;
}

var brokeredMessage = new BrokeredMessage(message.Message)
{
    MessageId = message.Id.ToString()
};
brokeredMessage.Properties["ReceivedTimestamp"] = DateTime.Now;
_azureTopic.Send(brokeredMessage);
<End Loop>
Note:
There is a deliberate reason why we have a NoRetry policy. Without wanting to add too much noise to the question, the same message that failed will be tried again in the next iteration (it sends the message to subscribers in a round robin fashion).
Example log of errors during a small window of time.
20:31:51 Event.WindowsService Event.WindowsService::PublishAzureServiceBusTopicMessage()
error trying to synchronise message with Azure. Message ID: 1191251
Error during communication with Service Bus. Check the connection
information, then retry.
20:32:00 Event.WindowsService Event.WindowsService::PublishAzureServiceBusTopicMessage()
error trying to synchronise message with Azure. Message ID: 1191251
No such host is known
20:32:00 RFID.WindowsService RFID.WindowsService::PublishAzureServiceBusTopicMessage()
error trying to synchronise message with Azure. Message ID: 1930029
No such host is known
20:32:10 RFID.WindowsService RFID.WindowsService::PublishAzureServiceBusTopicMessage()
error trying to synchronise message with Azure. Message ID: 1930029
No such host is known
20:32:10 Event.WindowsService Event.WindowsService::PublishAzureServiceBusTopicMessage()
error trying to synchronise message with Azure. Message ID: 1191251
No such host is known
20:32:10 RFID.WindowsService RFID.WindowsService::PublishAzureServiceBusTopicMessage()
error trying to synchronise message with Azure. Message ID: 1930029
No such host is known
20:34:00 RFID.WindowsService RFID.WindowsService::PublishAzureServiceBusTopicMessage()
error trying to synchronise message with Azure. Message ID: 1930034
Error during communication with Service Bus. Check the connection
information, then retry.
20:38:34 Event.WindowsService Event.WindowsService::PublishAzureServiceBusTopicMessage()
error trying to synchronise message with Azure. Message ID: 1191269
Error during communication with Service Bus. Check the connection
information, then retry.
20:38:51 RFID.WindowsService RFID.WindowsService::PublishAzureServiceBusTopicMessage()
error trying to synchronise message with Azure. Message ID: 1930043
Error during communication with Service Bus. Check the connection
information, then retry.
Service Bus has native retry capabilities on NamespaceManager, MessagingFactory, and the clients (see Retry guidance for specific services).
Because it handles transient exceptions, you shouldn't end up with duplicate sent messages.
If you want to retry only once, you can configure it like this:
var connectionString = "myconnectionstring";
var client = TopicClient.CreateFromConnectionString(connectionString);
client.RetryPolicy = new RetryExponential(
    minBackoff: TimeSpan.FromSeconds(2),
    maxBackoff: TimeSpan.FromSeconds(2),
    maxRetryCount: 1);
This should do the trick.
If you want to ensure deduplication, look into Azure Service Bus duplicate detection.
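As a sketch, duplicate detection is enabled on the topic itself, for example via the `NamespaceManager` from the same SDK (the topic name and time window below are placeholders):

```csharp
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

var topicDescription = new TopicDescription("mytopic")
{
    // Messages whose MessageId was already seen in this window are dropped by the broker
    RequiresDuplicateDetection = true,
    DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10)
};

namespaceManager.CreateTopic(topicDescription);
```

Since your sending code already sets `MessageId` from your own id, a resend of the same message within the window would be deduplicated by the broker.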
I'm integrating RMQ into my project, in order to implement work queues.
I understand that if the module succeeds, it calls the ack method so RMQ will know about it.
What about failures?
I read that only when connection or channel are closed, RMQ knows we've failed and re-push the message to the queue.
I'd like, however, to make RMQ re-push messages whenever I have an internal error, regardless of whether I crash or not (e.g. a failure to insert into the DB; I handle that gracefully without crashing, but I want the whole job to be retried).
Do I have to manually close and open the channel again in order to trigger that?
You can use negative ACK, or rejects. Info here.
The AMQP specification defines the basic.reject method that allows
clients to reject individual, delivered messages, instructing the
broker to either discard them or requeue them.
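With the RabbitMQ .NET client, for example, the consumer can nack a message with `requeue: true` after a handled internal error, without closing the channel. A sketch, assuming a standard `EventingBasicConsumer` setup (`InsertIntoDatabase` is a hypothetical stand-in for your processing):

```csharp
consumer.Received += (sender, ea) =>
{
    try
    {
        InsertIntoDatabase(ea.Body); // hypothetical processing step; may throw
        channel.BasicAck(ea.DeliveryTag, multiple: false);
    }
    catch (Exception)
    {
        // Handled gracefully, but tell the broker to redeliver the message
        channel.BasicNack(ea.DeliveryTag, multiple: false, requeue: true);
    }
};
```

Note that a nacked-and-requeued message will come back immediately, so guard against a poison message looping forever (e.g. by counting attempts and dead-lettering after a limit).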
I'm sending a message to a private queue via c# :
MessageQueue msgQ = new MessageQueue(@".\private$\aaa");
msgQ.Formatter = new XmlMessageFormatter(new[] { typeof (String) });
msgQ.Send(msg);
It does work and I do see the message in the queue.
However, is there any way to get an ACK confirming that the message reached the queue successfully?
P.S.
BeginPeek and PeekCompleted raise an event when a message becomes available in the queue or when the specified interval of time has expired. That does not help me, because I need to know whether the message I sent was received by MSMQ. The event will also be raised if someone else puts a message into the queue, and the last thing I want is to check via BeginPeek which sender each message came from.
How can I do that?
P.S. 2
Or maybe I don't have to worry, since msgQ.Send(msg); will raise an exception if the message wasn't inserted...?
I think what you are trying to do should not be handled in code. When you send the message, it is placed in the outgoing queue. There are numerous reasons why it might not reach the destination, such as a network partition or the destination queue being full. But this should not matter to your application: as far as it is concerned, it sent the message, it committed the transaction, and it received no error. It is the responsibility of the underlying infrastructure to do the rest, and that infrastructure should be monitored to make sure there are no technical issues.
Now what should really be important to your application is the delivery guarantees. I assume from the scenario that you are describing that you need durable transactional queues to ensure that the message is not lost. More about the options available can be read here
Also, if you need some identifier to display to the user as a confirmation, a common practice is to generate it in the sending code and place it in the message itself. Then the handling code would use the id to do the required work.
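A sketch of that practice, using the message's `Label` as one simple place to carry the generated id (any property or the body itself would do):

```csharp
// Generate a confirmation id up front and embed it in the message itself
var confirmationId = Guid.NewGuid();

var msg = new Message(body)
{
    // The receiving code reads this id back out and correlates its work with it
    Label = confirmationId.ToString()
};
msgQ.Send(msg);

// confirmationId can now be shown to the user as the reference for this message
```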
Using transactional queues and having all your machines enroll in DTC transactions likely would provide what you're looking for. However, it's kinda a pain in the butt and DTC has side effects - like all transactions are enrolled together, including DB transactions.
Perhaps a better solution would be to use a framework like MassTransit or NServiceBus and do a request-response, allowing the receiver to respond with an actual confirmation message, saying not only "this has been delivered" but also "I acknowledge this", with timeout options.
As Oleksii has explained, this is about reliable delivery.
However, it can affect performance.
What I can suggest is:
Why not create an MSMQ server on the machine that is sending messages to the other system?
What I am thinking is:
Server 1 sends an MSMQ message to Server 2
Server 2 receives it and adds it to its queue
Server 2 processes the queue / fires your code here to send an MSMQ message back to Server 1
Server 1 receives the message (a success message with the message ID)
Do your further tasks
This approach may be going the extra mile, but it will keep your servers free of performance lag.
We have pub/sub application that involves an external client subscribing to a Web Role publisher via an Azure Service Bus Topic. Our current billing cycle indicates we've sent/received >25K messages, while our dashboard indicates we've sent <100. We're investigating our implementation and checking our assumptions in order to understand the disparity.
As part of our investigation we've gathered wireshark captures of client<=>service bus traffic on the client machine. We've noticed a regular pattern of communication that we haven't seen documented and would like to better understand. The following exchange occurs once every 50s when there is otherwise no activity on the bus:
The client pushes ~200B to the service bus.
10s later, the service bus pushes ~800B to the client. The client registers the receipt of an empty message (determined via breakpoint.)
The client immediately responds by pushing ~1000B to the service bus.
Some relevant information:
This occurs when our web role is not actively pushing data to the service bus.
Upon receiving a legit message from the Web Role, the pattern described above will not occur again until a full 50s has passed.
Both client and server connect to sb://namespace.servicebus.windows.net via TCP.
Our application messages are <64 KB
Questions
What is responsible for the regular, 3-packet message exchange we're seeing? Is it some sort of keep-alive?
Do each of the 3 packets count as a separately billable message?
Is this behavior configurable or otherwise documented?
EDIT:
This is the code that receives the messages:
private void Listen()
{
    _subscriptionClient.ReceiveAsync().ContinueWith(MessageReceived);
}

private void MessageReceived(Task<BrokeredMessage> task)
{
    if (task.Status != TaskStatus.Faulted && task.Result != null)
    {
        task.Result.CompleteAsync();
        // Do some things...
    }
    Listen();
}
I think what you are seeing is the Receive call working in the background. Behind the scenes, the Receive calls all use long polling, which means they call out to the Service Bus endpoint and ask for a message. The Service Bus service gets that request and, if it has a message, returns it immediately. If it doesn't have a message, it holds the connection open for a time period in case a message arrives. If a message arrives within that time frame, it is returned to the client. If no message is available by the end of the time frame, a response is sent to the client indicating that no message was there (aka your null BrokeredMessage). If you call Receive with no overloads (as you've done here), it immediately makes another request. This loop continues until a message is received.
Thus, what you are seeing are the number of times the client requests a message but there isn't one there. The long polling makes it nicer than what the Windows Azure Storage Queues have because they will just immediately return a null result if there is no message. For both technologies it is common to implement an exponential back off for requests. There are lots of examples out there of how to do this. This cuts back on how often you need to go check the queue and can reduce your transaction count.
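An exponential back-off around the receive loop might look roughly like this (a sketch, not the asker's code; the one-second floor and ten-minute cap are arbitrary):

```csharp
var delay = TimeSpan.FromSeconds(1);
var maxDelay = TimeSpan.FromMinutes(10);

while (true)
{
    var message = _subscriptionClient.Receive();
    if (message != null)
    {
        message.Complete();
        // Process the message...
        delay = TimeSpan.FromSeconds(1); // traffic is flowing: reset the back-off
    }
    else
    {
        // Nothing there: wait, then double the wait, up to the cap
        Thread.Sleep(delay);
        delay = TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, maxDelay.Ticks));
    }
}
```

The trade-off is latency on a quiet queue: a message arriving just after a null result waits up to the current back-off before being picked up.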
To answer your questions:
Yes, this is normal expected behaviour.
No, this is only one transaction. For Service Bus you get charged a transaction each time you put a message on a queue and each time a message is requested (which can be a little opaque given that Receive makes multiple calls in the background). Note that the docs point out that you get charged for each idle transaction (meaning a null result from a Receive call).
Again, you can implement a back-off methodology so that you aren't hitting the queue so often. Another suggestion I've recently heard: if you have a queue that isn't seeing a lot of traffic, you could also check the queue depth to see if it is > 0 before entering the processing loop, and if you get no messages back from a receive call you could go back to watching the queue depth. I've not tried that, and it is possible that you could get throttled if you did the queue depth check too often, I'd think.
If these are your production numbers then your subscription isn't really processing a lot of messages. It would likely be a really good idea to have a back off policy to a time that is acceptable to wait before it is processed. Like, if it is okay that a message sits for more than 10 minutes then create a back off approach that will eventually just be checking for a message every 10 minutes, then when it gets one process it and immediately check again.
Oh, there is a Receive overload that takes a timeout, but I'm not 100% sure whether that is a server timeout or a local timeout. If it is local, then it could still be making calls every X seconds to the service. I think this is based on the OperationTimeout value set on the MessagingFactory settings when creating the SubscriptionClient. You'd have to test that.
We are planning to use NServiceBus in our application for dispatching messages.
In our case each message has a timeToLive property, defining the period of time in which the message should be processed.
If message handling was unsuccessful on the first attempt, our plan is to move the message to a specific retry storage (retry queue) and then retry it (with some delay between retries) until it is successfully handled or its timeToLive has expired.
If the timeToLive has expired, we plan to log the message content and discard the message.
Actually, this retry behaviour is mostly determined by the protocol we are implementing.
Is there any way to achieve such behaviour with NServiceBus? I see that unsuccessful messages go to a specific error queue. Is it possible to create a separate bus pointing to the error queue?
I would suggest that you have a separate process that monitors the error queue and performs retries according to the logic you describe. Take a look at the ReturnToSourceQueue tool that comes with NServiceBus for inspiration:
http://nservicebus.svn.sourceforge.net/viewvc/nservicebus/trunk/src/tools/management/Errors/ReturnToSourceQueue/NServiceBus.Tools.Management.Errors.ReturnToSourceQueue/Class1.cs?view=markup
I have a blog post on how to handle failures that might give you some ideas as well:
http://andreasohlund.net/2010/03/15/errorhandling-in-a-message-oriented-world/
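A sketch of the retry logic the question describes, as it might run inside that separate monitoring process (every name here, `retryStore`, `Handle`, and the message fields, is hypothetical):

```csharp
foreach (var failed in retryStore.GetDueMessages()) // hypothetical retry storage
{
    if (DateTime.UtcNow > failed.SentAt + failed.TimeToLive)
    {
        // TTL expired: log the content and discard, as the question specifies
        logger.Warn("timeToLive expired, discarding: " + failed.Content);
        retryStore.Discard(failed);
        continue;
    }

    try
    {
        Handle(failed); // re-dispatch to the original handler
        retryStore.Discard(failed); // success: no more retries needed
    }
    catch (Exception)
    {
        // Still failing: schedule the next attempt after a delay;
        // the message stays in the store until its timeToLive runs out
        retryStore.Reschedule(failed, delay: TimeSpan.FromMinutes(1));
    }
}
```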
Hope this helps!