I was following the How to use Service Bus Queues documentation on Azure.
I created two web applications to test out using queues for a project I am working on: one publishes messages into the queue, and the other is supposed to listen to the queue and process the messages.
I was able to publish messages to the queue, and the queue length in the Azure portal says 3, so there should be 3 messages waiting for me to process. However, when I run the web application that calls QueueClient.OnMessage, no messages are pushed to it. Is there something else that I am missing?
var client = QueueClient.CreateFromConnectionString(ConnectionString, "TestQueue");

var options = new OnMessageOptions
{
    AutoComplete = false,
    AutoRenewTimeout = TimeSpan.FromMinutes(1)
};

// Callback to handle received messages.
client.OnMessage((message) =>
{
    try
    {
        // Process the message from the queue.
        var messageBody = message.GetBody<string>();

        // Remove the message from the queue.
        message.Complete();
    }
    catch (Exception)
    {
        // Indicates a problem; unlock the message in the queue.
        message.Abandon();
    }
}, options);
The connection strings are the same in both applications, so there is no difference there.
Here is the code that I am using to connect and send a message to the Queue.
var client = QueueClient.CreateFromConnectionString(ConnectionString, "TestQueue");
var message = new BrokeredMessage(value);
message.Properties["TestProperty"] = "This is a test";
message.Properties["UserId"] = "TestUser";
client.Send(message);
If anyone has any insights into why this is happening it would be greatly appreciated.
So it turns out that I was running into a 407 Proxy Authentication Required error when trying to connect to the Service Bus, which I noticed while trying to debug this through a different application.
All I had to do was add the following to the system.net section of the web.config; once I did that, I was able to receive messages.
<defaultProxy useDefaultCredentials="true">
<proxy bypassonlocal="True" usesystemdefault="True"/>
</defaultProxy>
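If you'd rather not touch web.config, the same default-proxy behaviour can (as far as I know) also be set in code at application startup. This is a minimal sketch using the classic System.Net API; the class and method names are my own:

```csharp
using System.Net;

public static class ProxySetup
{
    // Equivalent in spirit to <defaultProxy useDefaultCredentials="true">:
    // attach the current user's credentials to the system default proxy
    // so outbound Service Bus calls can pass proxy authentication.
    public static void UseDefaultProxyCredentials()
    {
        IWebProxy proxy = WebRequest.GetSystemWebProxy();
        proxy.Credentials = CredentialCache.DefaultCredentials;
        WebRequest.DefaultWebProxy = proxy;
    }
}
```

Calling this once before creating the QueueClient should have the same effect as the config change.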
First, let me explain what I have: an Azure Service Bus with an Azure Function App. The Service Bus is set up to use SQL filters to push specific message types into specific topics. My Azure Function App then gets the newest message and processes it.
A basic example:
1: I send a request to my EmailAPI
2: EmailAPI then pushes a new message into the Service Bus with a type of "Email"
3: The SQL filter sees the type is "Email" and places the message into the email topic in the Service Bus
4: The EmailListener Azure Function monitors the Service Bus and notices a new message
5: The function gathers the Service Bus message and processes it (basically, it just sends the email using the information provided)
Now let's say that for some reason the SMTP server connection is a little broken and we sometimes get a TimeoutException when attempting to send the email (in EmailListener). When an exception is thrown, the EmailListener Function App instantly attempts to send the message again, with no wait between attempts. It does this a total of 10 times and then tells the Service Bus to place the message in the dead-letter queue.
What I am attempting to do is, when an exception is thrown (such as a TimeoutException), wait X amount of time before attempting to process the same message again. I have looked at many different posts about setting this in host.json, but those settings have not worked. I have found a solution, but it requires you to create a clone of the message, push it back into the Service Bus, and give it a delayed processing time. I would prefer not to implement my own manual delay system if Azure Service Bus / Function Apps can handle retries themselves.
The biggest issue I am having (which is probably down to my understanding) is: whose responsibility is this? Is it the Service Bus settings that handle the retry policy, or is it the Azure Function App that should deal with retrying after X time?
I have provided some code, but I feel code isn't really going to help explain my question.
// Pseudo code
public static class EmailListenerTrigger
{
    [FunctionName("EmailListenerTrigger")]
    public static void Run([ServiceBusTrigger("messages", "email", Connection = "ConnectionString")] string mySbMsg, TraceWriter log)
    {
        var emailLauncher = new EmailLauncher("SmtpAddress", "SmtpPort", "FromAddress");

        try
        {
            emailLauncher.SendServiceBusMessage(mySbMsg);
        }
        catch (Exception ex)
        {
            log.Info($"Audit Log: {mySbMsg}, Exception: {ex.Message}");
        }
    }
}
reference one: https://blog.kloud.com.au/2017/05/22/message-retry-patterns-in-azure-functions/ (Thread.Sleep doesn't seem like a good idea)
reference two: https://github.com/Azure/azure-functions-host/issues/2192 (Manually implemented retry)
reference three: https://www.feval.ca/posts/function-queue-retry/ (This seems to refer to queues when I am using topics)
reference four: Can the Azure Service Bus be delayed before retrying a message? (Talks about deferring the message, but then you need to manually get it back out of the queue/topic.)
You might be able to solve your issue with Durable Functions. There is, for example, a built-in method CallActivityWithRetryAsync() that can retry when the activity function throws an exception.
https://learn.microsoft.com/en-us/sandbox/functions-recipes/durable-diagnostics#calling-activity-functions-with-retry
Your flow would probably look something like this:
Service Bus triggered Function. This one starts an Orchestrator Function
The orchestrator calls your activity function (using the aforementioned method)
Your email sending is implemented in an Activity Function and can throw exceptions as needed
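For reference, CallActivityWithRetryAsync takes a RetryOptions (first retry interval, max number of attempts, optional backoff coefficient). As a rough standalone model of the delay schedule that produces, assuming a plain exponential backoff (this is an illustration, not the exact runtime behaviour):

```csharp
using System;
using System.Collections.Generic;

public static class RetrySchedule
{
    // Models the delay before each retry as
    // firstRetryInterval * backoffCoefficient^(retryIndex).
    public static List<TimeSpan> Delays(TimeSpan firstRetryInterval, double backoffCoefficient, int maxNumberOfAttempts)
    {
        var delays = new List<TimeSpan>();

        // One delay before each retry; the first attempt has no delay.
        for (int retryIndex = 0; retryIndex < maxNumberOfAttempts - 1; retryIndex++)
        {
            double factor = Math.Pow(backoffCoefficient, retryIndex);
            delays.Add(TimeSpan.FromTicks((long)(firstRetryInterval.Ticks * factor)));
        }

        return delays;
    }
}
```

With a first interval of 5 seconds, a coefficient of 2 and 4 attempts, this yields delays of 5, 10 and 20 seconds, which is the kind of spacing the question is asking for.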
While there is no native support for what you want to do, it is still doable without a lot of custom development. You can basically add a Service Bus output binding to your Azure Function that is connected to the same queue your function consumes messages from. Then use a custom property to track the number of retries. The following is an example:
private static TimeSpan[] BackoffDurationsBetweenFailures = new TimeSpan[] { }; // add delays here

[FunctionName("retrying-poc")]
public async Task Run(
    [ServiceBusTrigger("myQueue")] Message rawRequest,
    IDictionary<string, object> userProperties,
    [ServiceBus("myQueue")] IAsyncCollector<Message> collector)
{
    var request = GetRequest(rawRequest);
    var retryCount = GetRetryCount(userProperties);
    var shouldRetry = false;

    try
    {
        await _unreliableService.Call(request);
    }
    catch (Exception ex)
    {
        // I don't retry if it is a timeout, but that's my own choice.
        shouldRetry = !(ex is TimeoutException) && retryCount < BackoffDurationsBetweenFailures.Length;
    }

    if (shouldRetry)
    {
        var retryMessage = new Message(rawRequest.Body);
        retryMessage.UserProperties.Add("RetryCount", retryCount + 1);
        retryMessage.ScheduledEnqueueTimeUtc = DateTime.UtcNow.Add(BackoffDurationsBetweenFailures[retryCount]);
        await collector.AddAsync(retryMessage);
    }
}

private MyBusinessObject GetRequest(Message rawRequest)
    => JsonConvert.DeserializeObject<MyBusinessObject>(Encoding.UTF8.GetString(rawRequest.Body));

private int GetRetryCount(IDictionary<string, object> properties)
    => properties.TryGetValue("RetryCount", out var value) && int.TryParse(value.ToString(), out var retryCount)
        ? retryCount
        : 0;
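To make that concrete: the empty BackoffDurationsBetweenFailures array could be filled with, say, three increasing delays (the values here are purely illustrative), and the retry count then indexes into it:

```csharp
using System;

public static class BackoffDemo
{
    // Illustrative delays; the answer deliberately leaves the array empty.
    static readonly TimeSpan[] BackoffDurationsBetweenFailures =
    {
        TimeSpan.FromSeconds(30),
        TimeSpan.FromMinutes(2),
        TimeSpan.FromMinutes(10),
    };

    // Same condition as above: retry while retryCount < array length;
    // null means "give up" (the message will eventually be dead-lettered).
    public static TimeSpan? NextDelay(int retryCount)
        => retryCount < BackoffDurationsBetweenFailures.Length
            ? BackoffDurationsBetweenFailures[retryCount]
            : (TimeSpan?)null;
}
```

So the first failure reschedules the message 30 seconds out, the second 2 minutes out, the third 10 minutes out, and the fourth gives up.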
Is there a way to invoke a .NET Core Web API method whenever a message is added to an Azure Service Bus queue? I would like to implement this without any sort of timer-based polling.
I can manually call an api endpoint to process the queue like this:
[HttpGet]
public async Task<IActionResult> ProcessQue()
{
    List<string> reList = new List<string>();

    try
    {
        // Register a message handler callback.
        queueClient.RegisterMessageHandler(
            async (message, token) =>
            {
                // Process the message.
                reList.Add($"Received message: SequenceNumber:{message.SystemProperties.SequenceNumber} Body:{Encoding.UTF8.GetString(message.Body)}");

                // Complete the message so that it is not received again.
                // This can be done only if the queueClient is opened in ReceiveMode.PeekLock mode.
                await queueClient.CompleteAsync(message.SystemProperties.LockToken);
            },
            new MessageHandlerOptions(args => Task.CompletedTask) { MaxConcurrentCalls = 1, AutoComplete = false });
    }
    catch (Exception exception)
    {
        Console.WriteLine($"{DateTime.Now} > Exception: {exception.Message}");
    }

    return Ok(reList);
}
I am looking for a way for this method to fire automatically when a message is added to a queue. Azure Functions is probably the right way to do this, but I haven't been able to connect an Azure Function to a Service Bus queue.
Any suggestions, advice, pocket lint, anything is much appreciated.
You can use Azure Functions to invoke the Web API. You need to set the Azure Service Bus queue as the trigger binding. You can get more information on how to use the Azure Functions Service Bus bindings on MSDN.
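For illustration, the body of such a function could simply forward the queue message to your API with HttpClient. This is a sketch with hypothetical names (the API URL and class name are mine); in a real Function App the method would additionally carry [FunctionName] and [ServiceBusTrigger] attributes as described in the binding docs:

```csharp
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class QueueToApiForwarder
{
    // Called with the raw message body delivered by the Service Bus trigger;
    // posts it to the Web API endpoint and returns the response status code.
    public static async Task<HttpStatusCode> ForwardAsync(
        HttpClient client, string apiUrl, string messageBody)
    {
        var content = new StringContent(messageBody, Encoding.UTF8, "application/json");
        HttpResponseMessage response = await client.PostAsync(apiUrl, content);
        return response.StatusCode;
    }
}
```

This keeps the queue-handling logic in the Function App and leaves the API itself unchanged.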
I am working with a C# program within my network and am able to post messages to an Azure Service Bus queue. When receiving them, I get an exception on MessageReceiver.Receive(). The code and error are below:
MessagingFactory factory = MessagingFactory.CreateFromConnectionString(QueueConnectionString);
//Receiving a message
MessageReceiver testQueueReceiver = factory.CreateMessageReceiver(QueueName);
using (BrokeredMessage retrievedMessage = testQueueReceiver.Receive(new TimeSpan(0, 0, 20)))
{
try
{
var message = new StreamReader(retrievedMessage.GetBody<Stream>(), Encoding.UTF8).ReadToEnd();
retrievedMessage.Complete();
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
retrievedMessage.Abandon();
}
}
The error gets thrown on the 'using' line at
testQueueReceiver.Receive(...);
The server rejected the upgrade request. 400 This endpoint is only for web-socket requests
I can't find anything on the web about this exception, with the exception of one post that seems to suggest it is a firewall/ports issue. I have all the Azure Service Bus outbound ports (9350-9354, 80, 443) open locally, but there is a chance the 9000s are blocked at the firewall. Should it require these ports? Any pointers would be greatly appreciated!
I'm just wondering: why don't you use OnMessage instead of polling the queue?
var connectionString = "";
var queueName = "samplequeue";
var client = QueueClient.CreateFromConnectionString(connectionString, queueName);

client.OnMessage(message =>
{
    Console.WriteLine(String.Format("Message body: {0}", message.GetBody<String>()));
    Console.WriteLine(String.Format("Message id: {0}", message.MessageId));
    message.Complete();
});
This turned out to be a proxy issue.
The code was running under a service account. I needed to log in as that account, open IE, go to the connection (LAN) settings, and clear the proxy checkboxes (automatically detect settings, etc.). Once this was done, the code bypassed the proxy and worked fine.
I'm trying to create an application using MassTransit and Azure Service Bus following this article http://docs.masstransit-project.com/en/latest/advanced/turnout.html.
After I started the application, two queues were created in Azure Service Bus (one of them expired). After I started the subscriber, a turnout queue was created and messages were moved into it from the main queue. While the subscriber is running I can retrieve messages, but if I stop it (kill the process or shut down the machine), messages remain in the turnout queue. The next time I run the subscriber, it creates a new turnout queue, and I do not receive the messages that were being processed but were not completed. So, how can I avoid losing messages? Also, how can I set a limit on the maximum number of messages processed on one node?
_busControl = Bus.Factory.CreateUsingAzureServiceBus(cfg =>
{
    var host = cfg.Host("********", h =>
    {
        //h.OperationTimeout = TimeSpan.FromMinutes(1);
    });

    cfg.MaxConcurrentCalls = 1;
    cfg.UseServiceBusMessageScheduler();

    cfg.TurnoutEndpoint<ISimpleRequest>(host, "test_longruning", e =>
    {
        e.SuperviseInterval = TimeSpan.FromSeconds(30);
        e.PartitionCount = 1;
        e.SetJobFactory(async context =>
        {
            Console.WriteLine($"{DateTime.Now} Start Message: {context.Command.CustomerId}");
            await Task.Delay(TimeSpan.FromMinutes(7), context.CancellationToken);
            Console.WriteLine($"{DateTime.Now} End Message: {context.Command.CustomerId}");
        });
    });
});
First, I should warn you that Turnout is very pre-production at this point. While it works in the happy path, the handling of service failures is not yet up to snuff. While the message time to live settings should end up with the commands back in the right queues, it hasn't been extensively tested.
That said, you can use ServiceBusExplorer to move messages back into the proper queues, that's how I do it. It's manual, but it's the only tool that really gives you complete control over your service bus environment.
We have been using RabbitMQ as the messaging service in the project. We push a message into a queue; it is received by the message consumer, which tries to make an entry in the database. Once the values are entered into the database, we send a positive acknowledgement back to the RabbitMQ server; if not, we send a negative acknowledgement.
I have created the message consumer as a Windows service. A message was successfully sent and well received by the message consumer (it made an entry in the table), but with an exception in the log: "Shared Queue closed".
Please find the code block.
while (true)
{
    try
    {
        if (!Connection.IsOpen || !Channel.IsOpen)
        {
            CreateConnection(existConnectionConfig, QueueName);
            consumer = new QueueingBasicConsumer(Channel);
            consumerTag = Channel.BasicConsume(QueueName, false, consumer);
        }

        BasicDeliverEventArgs e = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
        IBasicProperties props = e.BasicProperties;
        byte[] body = e.Body;

        bool ack = onMessageReceived(body);
        if (ack)
        {
            Channel.BasicAck(e.DeliveryTag, false);
        }
        else
        {
            Channel.BasicNack(e.DeliveryTag, false, true);
        }
    }
    catch (Exception ex)
    {
        // Logged the exception in a text file, where I could see the
        // message "Shared queue closed".
    }
}
I have searched the net too but couldn't figure out what the problem is. It would be great if anyone is able to help me out.
Thanks in advance,
Selva
In answer to your question: I have experienced the same problem when my web client reset the connection due to app pool recycling, or some other underlying reason beyond your control that caused the connection to be dropped. You may need to build in a retry mechanism to cope with this.
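Such a retry mechanism can be as simple as a generic wrapper that re-runs the connect-and-consume action with a growing delay. A minimal sketch (the attempt count and delays are illustrative):

```csharp
using System;
using System.Threading;

public static class RetryHelper
{
    // Runs the action, retrying up to maxAttempts times with a delay
    // that doubles after each failure (simple exponential backoff).
    public static T RetryWithBackoff<T>(Func<T> action, int maxAttempts, TimeSpan initialDelay)
    {
        TimeSpan delay = initialDelay;
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return action();
            }
            catch (Exception)
            {
                if (attempt >= maxAttempts)
                    throw; // out of attempts; surface the failure

                Thread.Sleep(delay);
                delay = TimeSpan.FromTicks(delay.Ticks * 2);
            }
        }
    }
}
```

In the consumer loop above, the connection setup (CreateConnection, BasicConsume) would be the action passed in, so a dropped connection leads to a re-subscribe with backoff rather than a tight failure loop.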
You might want to look at MassTransit. I have used this with RabbitMQ and it makes things a lot easier by effectively providing a management layer to RabbitMQ. MassTransit takes away the headache of retry mechanisms - see Connection management. It also provides a nice multi threaded concurrent consumer configuration.
This has the bonus of your implementation being more portable - you could easily change things to MSMQ should the requirement come along.