I have been attempting to create an Azure Service Bus queue in the Azure portal, but the queue automatically disappears from the portal after a few hours of use.
I am creating the queue manually through my Azure portal account, which has sufficient funds. The Service Bus offering on Azure is currently in preview.
In code, I am only publishing messages to and receiving messages from the queue.
A queue could be deleted as a result of:
Custom code that performs namespace management operations and deletes the entity.
AutoDeleteOnIdle being enabled with a relatively short timespan, causing the entity to be removed if it sees no activity.
I suspect AutoDeleteOnIdle is set to a relatively low value. By default it is TimeSpan.MaxValue, which would not cause this issue.
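To verify that second possibility, the queue's setting can be inspected with the management API. A minimal sketch, assuming the classic Microsoft.ServiceBus NamespaceManager (the connection string and queue name are placeholders):

using System;
using Microsoft.ServiceBus;

class CheckAutoDeleteOnIdle
{
    static void Main()
    {
        // Placeholder connection string for the Service Bus namespace.
        var namespaceManager = NamespaceManager.CreateFromConnectionString(
            "Endpoint=sb://<your-namespace>.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...");

        // Fetch the queue description and inspect the idle-delete window.
        var queue = namespaceManager.GetQueue("myqueue");

        // TimeSpan.MaxValue is the default; anything short means the queue
        // disappears after that much inactivity.
        Console.WriteLine("AutoDeleteOnIdle: {0}", queue.AutoDeleteOnIdle);
    }
}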
It appears that the queue was getting recycled overnight by some nightly Azure process because nothing indicated to Microsoft Azure that it was in use. After I configured the SAS key information at the queue level and also left a message in the queue overnight, I no longer see the queue getting recycled.
Thanks to Sean Feldman for providing useful information which helped me through the process.
Related
I know I can cancel a scheduled message from C# code like this:
Scheduled messages can be removed by calling CancelScheduledMessageAsync(sequenceNumber)
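For reference, a minimal sketch of that call, assuming the Microsoft.Azure.ServiceBus client (the connection string and queue name are placeholders):

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class CancelScheduledMessage
{
    static async Task Main()
    {
        var client = new QueueClient("<connection-string>", "myqueue");

        // Scheduling returns the sequence number needed for cancellation.
        var message = new Message(Encoding.UTF8.GetBytes("hello"));
        long sequenceNumber = await client.ScheduleMessageAsync(
            message, DateTimeOffset.UtcNow.AddHours(1));

        // Remove the scheduled message before it is ever enqueued.
        await client.CancelScheduledMessageAsync(sequenceNumber);

        await client.CloseAsync();
    }
}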
But I can't seem to figure out how to do it with Service Bus Explorer or in the Azure dashboard. Is it possible with either?
It's possible to delete specific messages using QueueExplorer (I'm the author). It is a commercial tool, but if it's a one-off thing you can use the free trial.
https://www.cogin.com/mq/
Btw, we are a bit lucky with scheduled messages, since the Azure Service Bus API has that CancelScheduledMessageAsync function. It's more problematic for regular messages. All we can do, whether from a script or from QueueExplorer, is receive all the messages ahead of the one we want deleted and then "abandon" each of them. That's not only slow; it also increments their DeliveryCount, and they could end up in the dead-letter queue. It would be great if Azure Service Bus had a "delete message" operation in its API.
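To illustrate that workaround, here is a rough sketch (assuming the Microsoft.Azure.ServiceBus client and placeholder names) that receives until it reaches the target sequence number, completes that message, and abandons everything in front of it:

using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

class RemoveSpecificMessage
{
    static async Task RemoveBySequenceNumberAsync(long targetSequenceNumber)
    {
        var receiver = new MessageReceiver("<connection-string>", "myqueue", ReceiveMode.PeekLock);

        Message message;
        while ((message = await receiver.ReceiveAsync()) != null)
        {
            if (message.SystemProperties.SequenceNumber == targetSequenceNumber)
            {
                // "Delete" the target by completing it.
                await receiver.CompleteAsync(message.SystemProperties.LockToken);
                break;
            }

            // Put everything else back; note this increments DeliveryCount.
            await receiver.AbandonAsync(message.SystemProperties.LockToken);
        }

        await receiver.CloseAsync();
    }
}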
Neither the Azure dashboard nor Service Bus Explorer supports this option.
For Service Bus Explorer you can raise a feature request here.
I have two applications for my client: one is a web application where users schedule some actions, and the other is a Windows service that runs every 3 minutes, executes the scheduled tasks, and emails the client. Both work independently of each other.
Previously, both applications were hosted in an Azure VM.
Recently I converted the web application into an Azure Web Role to achieve scalability.
Now I am working on implementing a Worker Role for the Windows service, but I have a few questions. The client web project needed scalability, so I converted it into an Azure Web Role.
But in what case is running the Windows service as an Azure Worker Role better than running the service inside a VM?
Should I continue running the service in an Azure VM?
But in what case is running the Windows service as an Azure Worker Role better than running the service inside a VM?
There are several benefits to moving a Windows Service to an Azure Worker Role - IMHO, the top three are:
Azure Worker Roles are PaaS (Platform as a Service) - you are provided with a 'platform' on which your code runs (a Windows Server VM, but that is abstracted away from you) - there is no need to manage the underlying infrastructure OS, networking, disks etc. This means that you can focus on your code and how that works / scales / performs etc. without having to worry whether the VM is up and running. Furthermore, the underlying Azure Fabric will manage failures for you, starting a new instance of a worker if the underlying hardware fails.
Azure Worker Roles give you the benefit of scale - Because they run on the Azure PaaS platform, your code 'package' can be scaled to multiple instances through the Azure Portal with a few mouse-clicks. Scaling can be automatic (triggered by the underlying Azure Fabric) on a queue length (if you are receiving messages from a queue) or based on average CPU usage; alternatively, you can scale manually or on a set schedule (e.g. 'we do lots of processing overnight so increase the number of workers to 4 from 1am - 6am and then back to 2 for the rest of the day'). See https://azure.microsoft.com/en-gb/documentation/articles/cloud-services-how-to-scale/ for more information on scaling Worker Roles (aka. 'Cloud Services')
Similar API to Windows Services - the API for an Azure Worker Role is almost exactly the same as that exposed by a Windows Service - you have OnStart(), OnStop() and Run() methods^, allowing you to easily port an existing Windows Service to a Worker Role with minimum fuss.
^ OK, these might not be quite right, as it's a couple of months or so since I last worked with Worker Roles and I can't remember the interface exactly, but you get the idea ;-)
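For reference, a Worker Role skeleton looks roughly like this (a sketch assuming the Microsoft.WindowsAzure.ServiceRuntime RoleEntryPoint base class; the 30-second polling interval is an arbitrary placeholder):

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    private readonly ManualResetEvent _stopSignal = new ManualResetEvent(false);

    public override bool OnStart()
    {
        // One-time initialisation: configuration, connections, diagnostics.
        return base.OnStart();
    }

    public override void Run()
    {
        // Main loop: keep working until the role is asked to stop.
        while (!_stopSignal.WaitOne(TimeSpan.FromSeconds(30)))
        {
            // ... poll a queue / execute scheduled tasks ...
        }
    }

    public override void OnStop()
    {
        _stopSignal.Set();
        base.OnStop();
    }
}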
Should I continue running the service in an Azure VM?
Let me answer that question in the context of your problem (my emphasis):
I have two applications for my client: one is a web application where users schedule some actions, and the other is a Windows service that runs every 3 minutes, executes the scheduled tasks, and emails the client.
I think (IMHO) that you need to think about developing for the cloud, rather than the traditional model of development. What I read from this is that you have a web-application that writes something to a persistent store (probably a database table); you then have a second service (that you are looking to migrate to an Azure Worker Role) that polls the persistent store at a specific interval, detects whether there are any new clients to e-mail and sends out the e-mail.
If we were to re-architect this for the cloud, I would keep Worker Roles in the mix, but do the following:
The web app publishes a message to a queue to indicate that a client needs to be e-mailed - this message could contain their first and last names, e-mail address and possibly some data that would go into the e-mail message body (if required).
The Worker Role would poll this queue for messages. For each message received, the Worker Role would send an e-mail based on the content of the message via your preferred e-mail provider (hopefully they have a nice .NET API - no raw SMTP please!). Once the e-mail was successfully sent, the Worker Role would delete the message from the queue (a sketch of this flow appears below).
This approach would be both scalable and repeatable - a true cloud architecture!
FYI, if you're interested in using the queue approach, either Azure Storage Queues or Azure Service Bus Queues could work here. It sounds like you have simple queuing requirements, and as such Storage Queues would be a perfect fit. Take a look at their comparison here: https://azure.microsoft.com/en-gb/documentation/articles/service-bus-azure-and-service-bus-queues-compared-contrasted/.
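To make that flow concrete, here is a rough sketch using the classic Microsoft.WindowsAzure.Storage SDK; the connection string, queue name and EmailRequest shape are assumptions, not your actual code:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
using Newtonsoft.Json;

public class EmailRequest
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string EmailAddress { get; set; }
}

public static class EmailQueue
{
    static CloudQueue GetQueue()
    {
        var account = CloudStorageAccount.Parse("<storage-connection-string>");
        var queue = account.CreateCloudQueueClient().GetQueueReference("email-requests");
        queue.CreateIfNotExists();
        return queue;
    }

    // Called from the Web Role when the user schedules an action.
    public static void Publish(EmailRequest request)
    {
        GetQueue().AddMessage(new CloudQueueMessage(JsonConvert.SerializeObject(request)));
    }

    // Called from the Worker Role's Run() loop.
    public static void ProcessNext()
    {
        var queue = GetQueue();

        // Hide the message from other workers for 5 minutes while we process it.
        var message = queue.GetMessage(TimeSpan.FromMinutes(5));
        if (message == null) return;

        var request = JsonConvert.DeserializeObject<EmailRequest>(message.AsString);
        // ... send the e-mail via your provider's API ...

        // Only delete once the e-mail was sent successfully.
        queue.DeleteMessage(message);
    }
}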
Hope this helps!
I have been running an Azure worker role deployment that uses the Microsoft.ServiceBus 2.2 library to respond to jobs posted from other worker roles and web roles. Recently (suspiciously around the time of the OS update discussed here), the instances of the cluster started constantly recycling, rebooting, running for a short period of time, and then recycling again.
I can confirm that the role instances make it all the way through the OnStart() method of my RoleEntryPoint from the trace messages I have in my diagnostics. Occasionally, the Instances pane of the Azure Management Portal would mention that a recycling role had experienced an "unhandled exception," but would not give more detail. After logging in with remote desktop to one of the instances, the two clues I have are:
Performance counters indicate that \Processor(_Total)\% Processor Time is hovering at 100%, periodically dropping to the mid-80s coinciding with drops in \TCPv4\Connections Established. Some drops in \TCPv4\Connections Established do not correlate with drops in \Processor(_Total)\% Processor Time.
I was able to find, in the Local Server Events in the Server Manager of one of the instances, the following message:
Application: WaWorkerHost.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info: Microsoft.ServiceBus.Common.CallbackException
Stack:
at Microsoft.ServiceBus.Common.Fx+IOCompletionThunk.UnhandledExceptionFrame(UInt32, UInt32, System.Threading.NativeOverlapped*)
at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32, UInt32, System.Threading.NativeOverlapped*)
There have been no permissions configuration changes associated with the Service Bus during this time, and this message occurs despite our not having updated any of our VMs. Nonetheless, it also appears that our service is still functioning: jobs are being processed and removed from the Service Bus queues they are listening to.
Most Googling on these issues turns up suggestions that this is somehow related to IntelliTrace; however, these VMs do not have IntelliTrace enabled on them.
Does anyone have any ideas on what is going on here?
The Service Bus exceptions turned out to be a red herring as far as the crashing is concerned - the real cause was a namespace conflict in one of the data contracts being sent between two different VM roles that were published at different times. Adding additional tracing to exceptions thrown during one of the receive retries revealed it. It's still a mystery why the system works at all, and the role recycling has not ceased - only the Service Bus exception has.
I had a similar issue. The main reason was that it could not resolve the Service Bus DLL version: make sure the version you are redirecting to in your config file's binding redirect and the version you actually referenced are the same.
It may occur with any DLL mismatch, not only with the Service Bus DLL...
I am working to port an application which was designed to work in a non-Azure environment. One of the elements of the architecture is a singleton which does not scale, and which I'm hoping to replace w/ multiple worker processes serving the resource that the singleton currently provides.
I have the necessary changes in place to replace the singleton, and am at the point of constructing the communications framework to provide interconnection from the UI servers to the resource workers and I'm wondering if I should just use a TCP binding on a WCF service or whether using the Azure Service Bus would make more sense. The TCP/WCF is easy, but doesn't solve the complete problem: how do I ensure that only one worker processes a UI request?
From reading the available documentation, it sounds like the service bus will solve this, but I've yet to see a concrete example of implementation. I'm hoping someone here can assist and/or point me in the right direction.
It seems that Azure Service Bus queues are the right solution for you.
Azure Service Bus can be used in 3 different ways:
Queues
Topics
Relays
From the Windows Azure site:
Service Bus queues provide one-way asynchronous queuing. A sender sends a message to a Service Bus queue, and a receiver picks up that message at some later time. A queue can have just a single receiver
You can find more info at:
http://www.windowsazure.com/en-us/develop/net/fundamentals/hybrid-solutions/
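As a rough sketch of how that looks in code (assuming the Microsoft.Azure.ServiceBus client; the connection string and queue name are placeholders): the UI servers send messages, and whichever worker locks a message first processes and completes it, so no other worker ever sees it.

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class CompetingConsumers
{
    // UI server side: enqueue a work request.
    static async Task SendRequestAsync(string payload)
    {
        var sender = new QueueClient("<connection-string>", "ui-requests");
        await sender.SendAsync(new Message(Encoding.UTF8.GetBytes(payload)));
        await sender.CloseAsync();
    }

    // Worker side: every worker instance runs this; the queue hands each
    // locked message to exactly one of them.
    static void StartWorker()
    {
        var receiver = new QueueClient("<connection-string>", "ui-requests", ReceiveMode.PeekLock);

        receiver.RegisterMessageHandler(
            async (message, cancellationToken) =>
            {
                // ... do the work the singleton used to do ...
                await receiver.CompleteAsync(message.SystemProperties.LockToken);
            },
            new MessageHandlerOptions(args => Task.CompletedTask)
            {
                AutoComplete = false,
                MaxConcurrentCalls = 1
            });
    }
}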
Adding to Davide's answer.
Another alternative would be to use Windows Azure Queues. They are designed to facilitate asynchronous communication between web and worker roles. From your web role you push messages into a queue which is polled by your worker roles.
Your worker role can "get" one or more messages from a queue and work on those messages. When you get a message from a queue, you can instruct the queue service to make that message invisible to other callers for a certain amount of time (known as the message visibility timeout). That ensures that only one worker role instance gets to work on a message at a time.
Once the worker role has completed the work, it can simply delete the message. If there's an error in processing the message, the message automatically reappears in the queue once the visibility timeout has expired. You may find this link helpful: http://www.windowsazure.com/en-us/develop/net/how-to-guides/queue-service/.
Azure queues are not designed for inter-process communication, but for inter-application communication. The message delivery latency is substantial, and delivery timing cannot be guaranteed. WebSockets or NetTcpBinding is more suitable for applications that talk to each other in real time. Although I must admit, you get some free stuff with queues, especially the locking mechanisms. Just my 2 cents.
This is driving me crazy. We use a fairly large number of private MSMQ queues in our C#/ASP.NET web application at work, and we have a common library to send and receive messages from our queues. Yesterday, this stopped working for me altogether, but none of the other developers I work with are running into this issue, which makes me think it has something to do with my local dev environment or my Windows account settings.
I am now always getting "Timeout for the requested operation has expired" exceptions when the following line of messaging code is called:
var returnMessage = fromMessageQueue.ReceiveByCorrelationId(strCorrelationID, tsWait);
We basically have an "Inbound" and "Outbound" queue for each of our (business) clients. The Inbound queues look clean, but when I look in the Outbound queues, I can see "stuck" messages that are the responses I need.
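For context, the pattern in play looks roughly like this (a hypothetical sketch: the queue paths, the wait time, and the responding service's behaviour are stand-ins, not our actual library code):

using System;
using System.Messaging;

class CorrelatedRequestResponse
{
    static Message SendAndWaitForReply(object body)
    {
        var toQueue = new MessageQueue(@".\private$\client-inbound");
        var fromQueue = new MessageQueue(@".\private$\client-outbound");

        // Send the request; MSMQ assigns the Id when the message is sent.
        var request = new Message(body) { Recoverable = true };
        toQueue.Send(request);

        // The service that processes the request is expected to reply to the
        // outbound queue with CorrelationId set to the request's Id.
        string strCorrelationID = request.Id;
        TimeSpan tsWait = TimeSpan.FromSeconds(5);

        // Throws "Timeout for the requested operation has expired" if no
        // matching response arrives within the wait period.
        return fromQueue.ReceiveByCorrelationId(strCorrelationID, tsWait);
    }
}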
I've even written a small test console application against a dummy queue I set up for troubleshooting, and it still gets the same timeout exceptions.
I've checked the permissions on the private queues I've been troubleshooting with; EVERYONE and ANONYMOUS LOGON have full control over the queues. I've even granted my own domain login account full control on a few queues, but that didn't work either.
I'm afraid I'm very stuck until I can get this resolved.
I usually get this when I have installed the software and have it running as a service whilst also trying to run a debug copy through Visual Studio (two services running on one queue).