I have created an Azure WebJob which sends a strongly typed message to a Service Bus queue, and it sends successfully.
I want to create another WebJob that should be triggered whenever there is a message in the Service Bus queue. Please find below the code I am trying. For some reason, even though there are messages in the Service Bus queue, the WebJob is not getting triggered, and I get an error when I run the WebJob locally.
Error:
System.InvalidOperationException
{"Missing value for trigger parameter 'blobIinfo'."}
Code:
public static void Main()
{
    var config = new JobHostConfiguration
    {
        NameResolver = new QueueNameResolver(),
        ServiceBusConnectionString = ApplicationSettings.ServiceBusConnectionString
    };
    var host = new JobHost(config);
    host.Call(typeof(BankLineFileProcessorWebJob).GetMethod("ProcessQueueMessage"));
}

[NoAutomaticTrigger]
public static void ProcessQueueMessage(
    TextWriter log,
    [ServiceBusTrigger("testsftppollingqueue")] SftpQueueMessage blobIinfo)
{
    while (true)
    {
        log.WriteLine("Queue message refers to blob: " + blobIinfo.BlobUri);
        Thread.Sleep(TimeSpan.FromMinutes(PollingInterval));
    }
}
Can anyone help me solve this?
Thanks
You have to use
host.RunAndBlock();
instead of
host.Call(typeof(BankLineFileProcessorWebJob).GetMethod("ProcessQueueMessage"));
Also, please take out the NoAutomaticTrigger attribute.
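Putting those two changes together, a minimal sketch of the corrected WebJob could look like the following. It keeps the asker's own types and queue name (QueueNameResolver, ApplicationSettings, SftpQueueMessage, "testsftppollingqueue") and the same JobHostConfiguration setup as in the question:

public static void Main()
{
    var config = new JobHostConfiguration
    {
        NameResolver = new QueueNameResolver(),
        ServiceBusConnectionString = ApplicationSettings.ServiceBusConnectionString
    };
    var host = new JobHost(config);
    host.RunAndBlock(); // block and let the SDK listen for queue messages
}

// No [NoAutomaticTrigger] and no polling loop: the SDK invokes this method once per message.
public static void ProcessQueueMessage(
    [ServiceBusTrigger("testsftppollingqueue")] SftpQueueMessage blobIinfo,
    TextWriter log)
{
    log.WriteLine("Queue message refers to blob: " + blobIinfo.BlobUri);
}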
I have found the following question (How to configure the RequiresDuplicateDetection for AzureServiceBus topics) about how to set the RequiresDuplicateDetection property when configuring a publish topic from a producer application in MassTransit. However, I have not been able to find out how to do it for commands that are transmitted to a queue with Send rather than Publish.
Additionally, I have found that when configuring a consumer of one of the queues in question I can set the property easily, as shown below. This, however, is not ideal for my use case; if possible I would much rather the producer set this property when it starts and creates the queue.
cfg.ReceiveEndpoint(queue, e =>
{
    e.RequiresDuplicateDetection = true;
    e.ConfigureConsumer<JobEventConsumer>(registrationContext, consumerConfig =>
    {
        consumerConfig.UseMessageRetry(r =>
        {
            r.Interval(10, TimeSpan.FromMilliseconds(200));
            r.Ignore<ValidationException>();
        });
    });
});
Update: After a bit more investigation I have also found that setting the property to true at the global config level doesn't seem to work either. The code is shown below:
class Program
{
    static async Task Main(string[] args)
    {
        EndpointConvention.Map<ExtractionRequest>(new Uri("queue:test-queue"));

        var busControl = Bus.Factory.CreateUsingAzureServiceBus(cfg =>
        {
            cfg.Host("My connection string");
            cfg.RequiresDuplicateDetection = true;
            cfg.EnablePartitioning = true;
        });

        await busControl.StartAsync();
        try
        {
            do
            {
                string value = await Task.Run(() =>
                {
                    Console.WriteLine("Enter message (or quit to exit)");
                    Console.Write("> ");
                    return Console.ReadLine();
                });

                if ("quit".Equals(value, StringComparison.OrdinalIgnoreCase)) break;

                await busControl.Send<ExtractionRequest>(new {});
            }
            while (true);
        }
        finally
        {
            await busControl.StopAsync();
        }
    }
}

public interface ExtractionRequest {}
Any advice on how to turn RequiresDuplicateDetection on for a queue from the producer is welcome.
Thanks in advance, James.
You can't set queue properties from a message sender; that's the responsibility of the receive endpoint.
The receive endpoint is the responsible component because it is the one declaring the queue and its related attributes.
Publish is different because topics can be configured by the producer, since there may be multiple consumer subscriptions on a single topic.
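As a minimal sketch based on the receive-endpoint configuration from the question, the endpoint that owns the queue is where duplicate detection would be declared. The DuplicateDetectionHistoryTimeWindow property is an assumption about the Azure Service Bus endpoint configurator, so verify it against your installed MassTransit version:

cfg.ReceiveEndpoint(queue, e =>
{
    // Declared by the endpoint that owns the queue, not by the sender
    e.RequiresDuplicateDetection = true;
    e.DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10); // assumed property name

    e.ConfigureConsumer<JobEventConsumer>(registrationContext);
});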
I have a job that imports files into a system. Every time a file is imported, we create a blob in Azure and send a message with instructions to a queue so that the data is persisted in SQL accordingly. We do this using Azure WebJobs and the WebJobs SDK.
We experienced an issue in which, after the messages failed more than 7 times, they didn't move to the poison queue as expected. The code is the following:
Program.cs
public class Program
{
    static void Main()
    {
        // Set up DI
        var module = new CustomModule();
        var kernel = new StandardKernel(module);

        // Configure JobHost
        var storageConnectionString = AppSettingsHelper.Get("StorageConnectionString");
        var config = new JobHostConfiguration(storageConnectionString)
        {
            JobActivator = new JobActivator(kernel),
            NameResolver = new QueueNameResolver()
        };
        config.Queues.MaxDequeueCount = 7;
        config.UseTimers();

        // Pass configuration to JobHost
        var host = new JobHost(config);
        host.RunAndBlock();
    }
}
Functions.cs
public class Functions
{
    private readonly IMessageProcessor _fileImportQueueProcessor;

    public Functions(IMessageProcessor fileImportQueueProcessor)
    {
        _fileImportQueueProcessor = fileImportQueueProcessor;
    }

    public async void FileImportQueue([QueueTrigger("%fileImportQueueKey%")] string item)
    {
        await _fileImportQueueProcessor.ProcessAsync(item);
    }
}
_fileImportQueueProcessor.ProcessAsync(item) threw an exception, the message's dequeue count was properly increased, and the message was re-processed. However, it was never moved to the poison queue. I attached a screenshot of the queues with the dequeue counts at over 50.
After multiple failures the WebJob was stuck in a Pending Restart state and I was unable to either stop or start it, so I ended up deleting it completely. After running the WebJob locally, I saw messages being processed (I assumed that the ones with a dequeue count of over 7 should have been moved to the poison queue).
Any ideas on why this is happening and what can be done to get the desired behavior?
Thanks,
Update
Vivien's solution below worked.
Matthew was kind enough to do a PR that will address this. You can check out the PR here.
Fred,
The FileImportQueue method being an async void is the source of your problem.
Update it to return a Task:
public class Functions
{
    private readonly IMessageProcessor _fileImportQueueProcessor;

    public Functions(IMessageProcessor fileImportQueueProcessor)
    {
        _fileImportQueueProcessor = fileImportQueueProcessor;
    }

    public async Task FileImportQueue([QueueTrigger("%fileImportQueueKey%")] string item)
    {
        await _fileImportQueueProcessor.ProcessAsync(item);
    }
}
The reason the dequeue count got to over 50 is that when _fileImportQueueProcessor.ProcessAsync(item) threw an exception from the async void method, it crashed the whole process. That means the WebJobs SDK couldn't execute the next step, which is what moves the message to the poison queue.
When the message becomes available again in the queue, the SDK processes it again, and so on.
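To illustrate the difference (a minimal sketch, not taken from the original post): an exception escaping an async void method is rethrown on the thread pool and tears the process down, while an exception from an async Task method is captured in the returned task, where the WebJobs SDK can observe it and apply its dequeue/poison-queue handling.

public static async void FireAndForget()
{
    await Task.Yield();
    throw new InvalidOperationException("boom"); // crashes the process; no caller can catch it
}

public static async Task Awaitable()
{
    await Task.Yield();
    throw new InvalidOperationException("boom"); // surfaces through the awaited Task to the caller
}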
I'm developing two WebJobs for Azure: one that puts messages into the Service Bus using a topic, and another that is subscribed via ServiceBusTrigger to the same topic.
The messages are sent to the Service Bus correctly, but when I run the WebJob subscribed to the ServiceBusTrigger, those messages are not processed on a FIFO basis.
The code for the WebJob which puts messages in the service bus queue is the following:
NamespaceManager namespaceManager = NamespaceManager.Create();

// Delete if exists
if (namespaceManager.TopicExists("SampleTopic"))
{
    namespaceManager.DeleteTopic("SampleTopic");
}

TopicDescription td = new TopicDescription("SampleTopic");
td.SupportOrdering = true;

TopicDescription myTopic = namespaceManager.CreateTopic(td);
SubscriptionDescription myAuditSubscription = namespaceManager.CreateSubscription(myTopic.Path, "ImporterSubscription");

TopicClient topicClient = TopicClient.Create("SampleTopic");
for (int i = 1; i <= 10; i++)
{
    var message = new BrokeredMessage("message" + i);
    topicClient.Send(message);
}
topicClient.Close();
The WebJob which is subscribed to the Service Bus trigger has the following code:
namespace HO.Importer.Azure.WebJob.TGZProcessor
{
    public class Program
    {
        static void Main(string[] args)
        {
            JobHostConfiguration config = new JobHostConfiguration();
            config.UseServiceBus();
            JobHost host = new JobHost(config);
            host.RunAndBlock();
        }

        public static void WriteLog([ServiceBusTrigger("SampleTopic", "ImporterSubscription")] string message,
            TextWriter logger)
        {
            Console.WriteLine(message);
        }
    }
}
How can I process the messages from the queue in FIFO order?
Thanks in advance!
Use SessionId or PartitionKey; that will ensure the messages are handled by the same message broker.
See: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-partitioning
"SessionId: If a message has the BrokeredMessage.SessionId property set, then Service Bus uses this property as the partition key. This way, all messages that belong to the same session are handled by the same message broker. This enables Service Bus to guarantee message ordering as well as the consistency of session states."
While Azure Service Bus provides a FIFO feature (sessions), it is better not to assume this kind of behavior with a broker-based queuing system. Ben Morris has a good post, "Don't assume message ordering in Azure Service Bus", on why assuming ordering with asynchronous messaging is almost a fallacy, and the reasons behind that.
We are using Azure Service Bus in our project and are reading messages from a Service Bus topic/subscription.
We are using subscriptionClient.OnMessageAsync in conjunction with the onMessageOptions.ExceptionReceived event.
Let me write down the steps we followed to reproduce the issue we are facing.
Create a Service Bus namespace with the default config in the Azure portal
Create a topic inside it with the default config in the Azure portal
Create a subscription inside it with the default config in the Azure portal
Create a console app and paste the code added below
Connect to the service bus using Service Bus Explorer
Run the console app
Send a few test messages from Service Bus Explorer and watch the console app window
Though the messages are processed successfully, control enters the ExceptionReceived method every time.
Here's the code:
class Program
{
    static void Main()
    {
        var subscriptionClient = SubscriptionClient.CreateFromConnectionString
        (
            "servicebusendpointaddress",
            "topicname",
            "subscriptionname",
            ReceiveMode.PeekLock
        );

        var onMessageOptions = new OnMessageOptions();
        onMessageOptions.ExceptionReceived += OnMessageError;
        subscriptionClient.OnMessageAsync(OnMessageReceived, onMessageOptions);

        Console.ReadKey();
    }

    private static void OnMessageError(object sender, ExceptionReceivedEventArgs e)
    {
        if (e != null && e.Exception != null)
        {
            Console.WriteLine("Hey, there's an error!" + e.Exception.Message + "\r\n\r\n");
        }
    }

    private static async Task OnMessageReceived(BrokeredMessage arg)
    {
        await arg.CompleteAsync();
        Console.WriteLine("Message processing done!");
    }
}
Are we missing something here?
Also, one point to mention is that if we enable AutoComplete and remove the await arg.CompleteAsync(); call, this does not happen.
var onMessageOptions = new OnMessageOptions() { AutoComplete = true};
In both cases the messages are processed successfully and removed from the subscription immediately.
You might be getting this because you are debugging and stepping through the code, i.e. the lock expires. The LockDuration is 60 seconds by default.
You can try setting your OnMessageOptions() like this to test:
var onMessageOptions = new OnMessageOptions() { AutoRenewTimeout = TimeSpan.FromMinutes(1) };
At this line of code I am getting the error I mentioned.
I declared MSMQ_NAME as a string as follows:
private const string MSMQ_NAME = ".\\private$\\ASPNETService";

private void DoSomeMSMQStuff()
{
    using (MessageQueue queue = new MessageQueue(MSMQ_NAME))
    {
        queue.Send(DateTime.Now); // Exception is raised here
        queue.Close();
    }
}
Can you first verify that a queue with the name 'ASPNETService' exists at the location below?
Computer Management -> Services and Applications -> Message Queuing -> Private Queues
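If you prefer to check programmatically rather than through the MMC snap-in, a quick sketch using the same path as in the question would be:

if (!MessageQueue.Exists(".\\private$\\ASPNETService"))
{
    // Queue is missing: create it, or fix the path/permissions
    Console.WriteLine("Queue '.\\private$\\ASPNETService' was not found.");
}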
I had a similar problem. I was confused because my code worked on my local development machine, but not in production. Even stranger, the queues were created the exact same way.
It turns out that IIS doesn't have access to them by default. I just opened up the permissions.
Computer Management -> Private Queues -> right-click queue name -> Properties -> Security Tab -> click "Everyone" user -> click Full Control/Allow checkbox -> click OK
This fixed it for me, and in my case it's not an issue, but you may want to think about the ramifications of just opening it up for all users.
Also, I had to do this across all queues on all servers. There doesn't seem to be a way to multi-select queues or folders in order to set permissions for multiple queues simultaneously.
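As a workaround for the lack of multi-select, a rough sketch like the one below can grant access to every private queue on a machine in one pass. It mirrors the Everyone/Full Control steps above, requires System.Messaging and sufficient rights on the box, and a narrower principal (such as the IIS app pool identity) would be safer in production:

foreach (var queue in MessageQueue.GetPrivateQueuesByMachine("."))
{
    // Same effect as the manual Security tab change, applied to each queue
    queue.SetPermissions("Everyone",
        MessageQueueAccessRights.FullControl,
        AccessControlEntryType.Allow);
}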
I was having the same problem.
I had created a new private queue and gave Full Permission to Everyone.
But I was still catching a "Queue does not exist or you do not have sufficient permissions to perform the operation" exception when trying to Send() to the queue, and I was able to verify that MessageQueue.Exists(".\\private$\\myqueue") was returning true.
Restarting the Message Queuing service resolved the problem for me.
I had the same problem, and I handled it as shown below by checking whether the queue exists. If it does, I send the message; otherwise I create the queue and then send the message.
MessageQueue msgQueue = null;
string queuePath = ".\\Private$\\billpay";

Payment newPayment = new Payment()
{
    Payee = txtPayee.Text,
    Payor = txtPayor.Text,
    Amount = Convert.ToInt32(txtAmount.Text),
    DueDate = dpDueDate.SelectedDate.Value.ToShortDateString()
};

Message msg = new Message();
msg.Body = newPayment;
msg.Label = "Gopala - Learning Message Queue";

if (MessageQueue.Exists(queuePath) == false)
{
    // Queue does not exist, so create it
    msgQueue = MessageQueue.Create(queuePath);
}
else
{
    msgQueue = new MessageQueue(queuePath);
}

msgQueue.Send(msg);
I was facing the same problem, and I resolved it using the following class to create the queue:
private MessageQueue messageQueue;

public const string DEFAULT_QUEUE_NAME = "newQueue";
public const string QUEUENAME_PREFIX = ".\\Private$\\";

public static string QueueName
{
    get
    {
        string result = string.Format("{0}{1}", QUEUENAME_PREFIX, DEFAULT_QUEUE_NAME);
        return result;
    }
}

public void SendMessage()
{
    string queuePath = QueueName;

    // Create the queue only if it does not already exist
    messageQueue = MessageQueue.Exists(queuePath)
        ? new MessageQueue(queuePath)
        : MessageQueue.Create(queuePath);

    messageQueue.Send("msg");
}
Create the message queue in the same manner for receiving messages.
For others struggling with this and pulling their hair out like I have been, I finally found something that works when all of the upvoted suggestions failed.
Even if you think the host name of your target queue's hosting system is being resolved correctly, don't believe it. Try replacing the host name with an IP address and see if it works. It does for me. I can WRITE to a public queue using a host name on my remote server without problems, but trying to READ from it produces exactly the error listed for this question.
For example, if I declare the following:
private static string QueueName = @"FormatName:DIRECT=TCP:SOMEHOST\MyQueue";
private static System.Messaging.MessageQueue Queue = new System.Messaging.MessageQueue(QueueName);
Where "MyQueue" is a public queue on server SOMEHOST, the following code will successfully insert messages to the queue, but always fails on the Receive():
Queue.Formatter = new XmlMessageFormatter(new Type[] { typeof(String) });

// The Receive() call here is a blocking call. We'll wait if there is no message in the queue,
// and processing is halted until there IS a message in the queue.
try
{
    Queue.Send("hello world", System.Messaging.MessageQueueTransactionType.Single);
    var msg = Queue.Receive(MessageQueueTransactionType.Single);
}
catch (Exception ex)
{
    // todo error handling
}
One simple change in how I specify the queue location is all that's needed to make the Receive() stop failing with the dreaded "queue does not exist or you do not have sufficient permissions" error:
private static string QueueName = @"FormatName:DIRECT=TCP:192.168.1.100\MyQueue";
(Obviously I've obfuscated IP addresses and other sensitive info.) Using the IP address is obviously not a production-worthy scenario, but it did point me to some type of name-resolution problem as the possible cause of the error. I cannot explain why Send() works but Receive() does not when I am using a host name instead of an IP, but I can reproduce these results consistently. Until I can figure out what's going on with the name resolution, I'm no longer wasting a day trying to read messages from a queue.