Azure Service Charge For A Worker Role - c#

I'm just wondering what would happen if I have a worker role hosted in Azure, and instead of
public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            Thread.Sleep(10000);
            Trace.TraceInformation("Working", "Information");
        }
    }

    //Other code removed for brevity
}
I do
public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
        }
    }

    //Other code removed for brevity
}
I know that the second code snippet spins the CPU all the time, which is bad. But is there any other difference in terms of money?
Thanks.

As long as you don't produce network transfer, that second while(true) loop would simply block your worker role's main thread, spinning through iterations that each take only a few milliseconds.
You can check how worker roles are billed with the Azure Pricing Calculator for cloud services; you'll find that you pay a fixed monthly price based on CPU power and RAM, plus a charge for network bandwidth.

Related

.NET client-side WCF with queued requests

Background
I'm working on updating a legacy software library. The legacy code uses an infinitely looping System.Threading.Thread that executes processes from a queue. These processes perform multiple requests against another legacy system that can only process one request at a time.
I'm trying to modernize, but I'm new to WCF services, and there may be a big hole in my knowledge that, once filled, would simplify things.
WCF Client-Side Host
In modernizing, I'm trying to move to a client-side WCF service. The WCF service allows requests to be queued from multiple applications. The service takes a request and returns a GUID so that I can properly associate the callbacks with the original request.
public class SomeService : ISomeService
{
    public Guid AddToQueue(Request request)
    {
        // Code to add the request to a queue, return a Guid, etc.
    }
}

public interface ISomeCallback
{
    void NotifyExecuting(Guid guid);
    void NotifyComplete(Guid guid);
    void NotifyFault(Guid guid, byte[] data);
}
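For reference, a duplex callback contract like this is normally tied to the service contract via the CallbackContract property, with the callback operations marked one-way so the service doesn't block on the client. A minimal sketch, assuming an ISomeService shape the question elides:

[ServiceContract(CallbackContract = typeof(ISomeCallback))]
public interface ISomeService
{
    [OperationContract]
    Guid AddToQueue(Request request);
}

public interface ISomeCallback
{
    // One-way notifications: the service doesn't wait for a reply.
    [OperationContract(IsOneWay = true)]
    void NotifyExecuting(Guid guid);

    [OperationContract(IsOneWay = true)]
    void NotifyComplete(Guid guid);

    [OperationContract(IsOneWay = true)]
    void NotifyFault(Guid guid, byte[] data);
}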
WCF Client Process Queues
The problem I'm having is that the legacy processes can include more than one request. Process 1 might do Request X then Request Y, and based on those results follow up with Request Z. With the legacy system, there might be Processes 1-10 queued up.
I have a kludgy model where the process is executed. I'm handling events on the process to know when it finishes or fails. But it just feels really kludgy...
public class ActionsQueue
{
    public IList<Action> PendingActions { get; private set; } = new List<Action>();
    public Action CurrentAction { get; private set; }

    public void Add(Action action)
    {
        PendingActions.Add(action);
        if (CurrentAction is null)
            ExecuteNextAction();
    }

    private void ExecuteNextAction()
    {
        if (PendingActions.Count > 0)
        {
            CurrentAction = PendingActions[0];
            PendingActions.RemoveAt(0);
            CurrentAction.Completed += OnActionCompleted;
            CurrentAction.Execute();
        }
    }

    private void OnActionCompleted(object sender, EventArgs e)
    {
        CurrentAction.Completed -= OnActionCompleted;
        CurrentAction = default;
        ExecuteNextAction();
    }
}
public class Action
{
    internal void Execute()
    {
        // Instantiate the first request
        // Add handlers to the first request
        // Send it to the service
    }

    internal void OnRequestXComplete()
    {
        // Use the data that's come back from the request
        // Proceed with future requests
    }
}
With the client-side callback, the GUID is matched up to the original request, and a related event is raised on that request. Again, the implementation here feels really kludgy.
I've seen examples of async methods for the host, having a Task returned and then using an await on the Task. But I've also seen recommendations not to do this.
Any recommendations on how to untangle this mess into something more usable are appreciated. Again, it's possible that there's a hole in my knowledge here that's keeping me from a better solution.
Thanks
Queued communication between the client and the server in WCF is usually done with NetMsmqBinding, which ensures persistent communication between the client and the server. See this article for specific examples.
If you need efficient and fast message processing, use a non-transactional queue and set the ExactlyOnce attribute to False, but this trades away delivery assurances. Check the docs for further info.
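For illustration, a rough sketch of such a binding configured in code; note that MSMQ-bound contracts must use one-way operations, and the queue address here is made up:

[ServiceContract]
public interface ISomeQueuedService
{
    // MSMQ transports require one-way operations.
    [OperationContract(IsOneWay = true)]
    void AddToQueue(Request request);
}

// Non-transactional, non-durable: faster, but messages can be lost
// or duplicated because exactly-once assurance is turned off.
var binding = new NetMsmqBinding(NetMsmqSecurityMode.None)
{
    ExactlyOnce = false,
    Durable = false
};

var address = new EndpointAddress("net.msmq://localhost/private/SomeServiceQueue");
var factory = new ChannelFactory<ISomeQueuedService>(binding, address);
var channel = factory.CreateChannel();
channel.AddToQueue(new Request());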
In case anyone comes along later with a similar issue, this is a rough sketch of what I ended up with:
[ServiceContract(Name = "MyService", SessionMode = SessionMode.Required)]
public interface IMyServiceContract
{
    [OperationContract]
    Task<string> ExecuteRequestAsync(Request request);
}

public class MyService : IMyServiceContract
{
    private TaskQueue queue = new TaskQueue();

    public async Task<string> ExecuteRequestAsync(Request request)
    {
        return await queue.Enqueue(() => request.Execute());
    }
}
public class TaskQueue
{
    private SemaphoreSlim semaphore;

    public TaskQueue()
    {
        semaphore = new SemaphoreSlim(1);
    }

    public async Task<T> Enqueue<T>(Func<T> function)
    {
        // The semaphore guarantees only one queued function runs at a time.
        await semaphore.WaitAsync();
        try
        {
            return await Task.Run(() => function());
        }
        finally
        {
            semaphore.Release();
        }
    }
}
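As a rough usage sketch (assuming Request.Execute() returns a string): because of the semaphore, concurrent callers are serialized, so the single-request-at-a-time legacy system only ever sees one Execute() call at once:

var queue = new TaskQueue();

// These can be awaited concurrently, but the enqueued functions
// run strictly one after the other.
var first = queue.Enqueue(() => "result of request X");
var second = queue.Enqueue(() => "result of request Y");

await Task.WhenAll(first, second);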

Multi Thread Worker Service in .Net Core

I'm trying to build a worker service on .NET 5.0. My project tree is basically:
1. Program.cs
2. Worker.cs
3. MyStartUp.cs
4. Client.cs
In MyStartUp.cs I get a list and call the Client class for some servers according to that list.
In the Client class, I connect to the devices and write the data I read to the database.
There are nearly 1,200 devices, connected over TCP/IP.
What is your best suggestion for writing a worker service like this?
How can I best use threads in it?
Below is my first try. It works, but it's very slow for 1,000 different clients because each client does so many reads.
public class Worker : BackgroundService
{
    private readonly ILogger<Worker> _logger;

    public Worker(ILogger<Worker> logger)
    {
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        StartUp startUp = new StartUp();
    }
}
public class StartUp
{
    public StartUp()
    {
        //... get client data and initialize client objects
        StartClients();
    }

    public void StartClients()
    {
        foreach (var item in ClientList)
        {
            new Thread(item.Run).Start();
        }
    }
}
public class Client
{
    System.Timers.Timer timer;

    public Client()
    {
        timer = new System.Timers.Timer();
        timer.Interval = 100;
        timer.Elapsed += Timer_Elapsed;
        //... initialize client connection and database
    }

    public void Run()
    {
        timer.Start();
    }

    private void Timer_Elapsed(object sender, ElapsedEventArgs e)
    {
        //... write values read from the client to the database
    }
}
Say that you have 1k timers that fire every 100ms, and that each timer tick takes 50ms to execute. That means each timer needs 500ms of CPU per second, or 50% of one core, so you would need 500 cores to keep up with the work. You probably do not have that many cores, nor the IO capacity to process the requests, and that means the work will start piling up and your computer will more or less freeze, with no time left over for anything else.
50ms might be an overestimate, but even at 5ms you would probably have issues unless you are running this on a monster server.
The solution is to decrease the polling frequency to something more reasonable, say every 100s instead of every 100ms, or to have one or more threads that poll your devices as fast as they can. For example, something like:
private BlockingCollection<MyClient> clients = new ();
private List<Task> workers = new ();

public void StartWorker()
{
    workers.Add(Task.Run(Run));

    void Run()
    {
        foreach (var client in clients.GetConsumingEnumerable())
        {
            // Do processing

            // Re-add the client to the queue so it is polled again,
            // unless shutdown has started.
            if (!clients.IsAddingCompleted)
                clients.Add(client);
        }
    }
}

public void CloseAllWorkers()
{
    clients.CompleteAdding();
    Task.WhenAll(workers).Wait();
}
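A rough usage sketch under the same assumptions as the snippet above: seed the queue with every client once, then start a handful of workers that cycle through them as fast as they can:

// Seed the queue, then start e.g. 4 workers for ~1,200 clients.
foreach (var client in ClientList)
    clients.Add(client);

for (int i = 0; i < 4; i++)
    StartWorker();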
I would note that usage of Thread has mostly been superseded by tasks, and that creating a thread just to start a System.Timers.Timer is completely useless, since the timer will run its tick event on the thread pool regardless of the thread that started it, at least unless a synchronization object was specified.

Consumer Producer- Producer thread never executes assigned function

I have a .NET Core Web API solution. In each call, I need to perform some database operations, and the issue is that multiple db connections get opened and closed at once. To avoid this, I want to implement a queue of objects to be sent to the database, and then have a separate thread perform the db operations.
I've tried the code below, but the consumer thread never executes the assigned function. There is no separate thread for the producer; I simply feed the queue with objects.
What modifications should I make? I need some guidance, as I'm new to threading.
public static class BlockingQueue
{
    public static Queue<WebServiceLogModel> queue;

    static BlockingQueue()
    {
        queue = new Queue<WebServiceLogModel>();
    }

    public static object Dequeue()
    {
        lock (queue)
        {
            while (queue.Count == 0)
            {
                Monitor.Wait(queue);
            }
            return queue.Dequeue();
        }
    }

    public static void Enqueue(WebServiceLogModel webServiceLog)
    {
        lock (queue)
        {
            queue.Enqueue(webServiceLog);
            Monitor.Pulse(queue);
        }
    }

    public static void ConsumerThread(IConfiguration configuration)
    {
        WebServiceLogModel webServiceLog = (WebServiceLogModel)Dequeue();
        webServiceLog.SaveWebServiceLog(configuration);
    }

    public static void ProducerThread(WebServiceLogModel webServiceLog)
    {
        Enqueue(webServiceLog);
        Thread.Sleep(100);
    }
}
I've created and started the thread in Startup.cs:
public Startup(IConfiguration configuration)
{
    Thread t = new Thread(() => BlockingQueue.ConsumerThread(configuration));
    t.Start();
}
In the controller, I've written code to feed the queue:
[HttpGet]
[Route("abc")]
public IActionResult GetData()
{
    BlockingQueue.ProducerThread(logModel);
    return StatusCode(HttpContext.Response.StatusCode = (int)HttpStatusCode.NotFound, ApplicationConstants.Message.NoBatchHistoryInfo);
}
First of all, try to avoid static classes and methods; use the singleton pattern instead in that case (and only if you really need it).
Second, try to avoid lock and Monitor: those concurrency primitives can significantly lower your performance.
In such a situation you can use BlockingCollection<>, as 'Adam G' mentioned above, or you can develop your own solution.
public class Service : IDisposable
{
    private readonly BlockingCollection<WebServiceLogModel> _packets =
        new BlockingCollection<WebServiceLogModel>();
    private Task _task;
    private volatile bool _active;
    private static readonly TimeSpan WaitTimeout = TimeSpan.FromSeconds(1);

    public Service()
    {
        _active = true;
        _task = ExecTaskInternal();
    }

    public void Enqueue(WebServiceLogModel model)
    {
        _packets.Add(model);
    }

    public void Dispose()
    {
        _active = false;
    }

    private async Task ExecTaskInternal()
    {
        while (_active)
        {
            if (_packets.TryTake(out WebServiceLogModel model))
            {
                // TODO: whatever you need
            }
            else
            {
                await Task.Delay(WaitTimeout);
            }
        }
    }
}
public class MyController : Controller
{
    [HttpGet]
    [Route("abc")]
    public IActionResult GetData([FromServices] Service service)
    {
        // receive model from somewhere
        WebServiceLogModel model = FetchModel();

        // enqueue model
        service.Enqueue(model);

        // TODO: return what you need
    }
}
And in Startup:
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSingleton<Service>();
        // TODO: other init stuff
    }
}
You can even add Start/Stop methods to the service instead of implementing IDisposable, and start your service in the startup class in the Configure(IApplicationBuilder app) method.
I think your consumer thread handles just one item and then returns: Dequeue blocks until something is in the queue, the item is saved, and the method exits. If you want a thread doing work in the background that is started just once, it should never return and should catch all exceptions. Your BlockingQueue.ConsumerThread is invoked once in Startup and then returns.
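A minimal sketch of that fix, reusing the question's BlockingQueue members: loop forever and catch exceptions per item, so one failure doesn't silently kill the thread:

public static void ConsumerThread(IConfiguration configuration)
{
    // Never return: keep draining the queue for the process lifetime.
    while (true)
    {
        try
        {
            var webServiceLog = (WebServiceLogModel)Dequeue();
            webServiceLog.SaveWebServiceLog(configuration);
        }
        catch (Exception)
        {
            // TODO: log and continue; an unhandled exception here
            // would terminate the background thread.
        }
    }
}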
Also please be aware that such a solution is not safe. ASP.NET doesn't guarantee that background threads keep running if there are no requests coming in. Your application pool can recycle (by default it recycles after 20 minutes of inactivity, and periodically about every 29 hours), so there is a chance that your background code won't be executed for some queue items.
Also, while it doesn't solve every issue, I would suggest using https://www.hangfire.io/ for background tasks in an ASP.NET server. It has a persistence layer, can retry jobs, and has a simple API. In your request handler you can push new jobs to Hangfire and then have just one job-processor thread.
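For a rough idea of the shape this takes (a sketch assuming the Hangfire.AspNetCore and SQL Server storage packages; LogJob is a hypothetical job class):

// In Startup.ConfigureServices: register Hangfire with persistent storage.
services.AddHangfire(config => config.UseSqlServerStorage("<connection string>"));
services.AddHangfireServer();

// A hypothetical job class; Hangfire serializes the call and its argument.
public class LogJob
{
    public void Save(WebServiceLogModel model)
    {
        // TODO: write to the database
    }
}

// In the request handler: enqueue instead of touching the db inline.
// The job is persisted and retried on failure.
BackgroundJob.Enqueue<LogJob>(job => job.Save(logModel));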

NServiceBus events lost when published in separate thread

I've been working on getting long-running messages working with NServiceBus on an Azure transport. Based on this document, I thought I could get away with firing off the long process in a separate thread, marking the event handler task as complete, and then listening for custom OperationStarted or OperationComplete events. I noticed the OperationComplete event is not received by my handlers in most cases. In fact, the only time it is received is when I publish it immediately after the OperationStarted event is published. Any actual processing in between somehow prevents the completion event from being received. Here is my code:
Abstract class used for long running messages
public abstract class LongRunningOperationHandler<TMessage> : IHandleMessages<TMessage> where TMessage : class
{
    protected ILog _logger => LogManager.GetLogger<LongRunningOperationHandler<TMessage>>();

    public Task Handle(TMessage message, IMessageHandlerContext context)
    {
        var opStarted = new OperationStarted
        {
            OperationID = Guid.NewGuid(),
            OperationType = typeof(TMessage).FullName
        };
        var errors = new List<string>();

        // Fire off the long running task in a separate thread
        Task.Run(() =>
        {
            try
            {
                _logger.Info($"Operation Started: {JsonConvert.SerializeObject(opStarted)}");
                context.Publish(opStarted);
                ProcessMessage(message, context);
            }
            catch (Exception ex)
            {
                errors.Add(ex.Message);
            }
            finally
            {
                var opComplete = new OperationComplete
                {
                    OperationType = typeof(TMessage).FullName,
                    OperationID = opStarted.OperationID,
                    Errors = errors
                };
                context.Publish(opComplete);
                _logger.Info($"Operation Complete: {JsonConvert.SerializeObject(opComplete)}");
            }
        });

        return Task.CompletedTask;
    }

    protected abstract void ProcessMessage(TMessage message, IMessageHandlerContext context);
}
Test Implementation
public class TestLongRunningOpHandler : LongRunningOperationHandler<TestCommand>
{
    protected override void ProcessMessage(TestCommand message, IMessageHandlerContext context)
    {
        // If I remove this, or lessen it to something like 200 milliseconds, the
        // OperationComplete event gets handled
        Thread.Sleep(1000);
    }
}
Operation Events
public sealed class OperationComplete : IEvent
{
    public Guid OperationID { get; set; }
    public string OperationType { get; set; }
    public bool Success => !Errors?.Any() ?? true;
    public List<string> Errors { get; set; } = new List<string>();
    public DateTimeOffset CompletedOn { get; set; } = DateTimeOffset.UtcNow;
}

public sealed class OperationStarted : IEvent
{
    public Guid OperationID { get; set; }
    public string OperationType { get; set; }
    public DateTimeOffset StartedOn { get; set; } = DateTimeOffset.UtcNow;
}
Handlers
public class OperationHandler : IHandleMessages<OperationStarted>,
    IHandleMessages<OperationComplete>
{
    static ILog logger = LogManager.GetLogger<OperationHandler>();

    public Task Handle(OperationStarted message, IMessageHandlerContext context)
    {
        return PrintJsonMessage(message);
    }

    public Task Handle(OperationComplete message, IMessageHandlerContext context)
    {
        // This is not hit if ProcessMessage takes too long
        return PrintJsonMessage(message);
    }

    private Task PrintJsonMessage<T>(T message) where T : class
    {
        var msgObj = new
        {
            Message = typeof(T).Name,
            Data = message
        };
        logger.Info(JsonConvert.SerializeObject(msgObj, Formatting.Indented));
        return Task.CompletedTask;
    }
}
I'm certain that the context.Publish() calls are being hit because the _logger.Info() calls are printing messages to my test console. I've also verified they are hit with breakpoints. In my testing, anything that runs longer than 500 milliseconds prevents the handling of the OperationComplete event.
If anyone can offer suggestions as to why the OperationComplete event is not hitting the handler when any significant amount of time has passed in the ProcessMessage implementation, I'd be extremely grateful to hear them. Thanks!
-- Update --
In case anyone else runs into this and is curious about what I ended up doing:
After an exchange with the developers of NServiceBus, I decided on a watchdog saga that implements the IHandleTimeouts interface to periodically check for job completion. I used saga data, updated when the job finished, to determine whether to fire the OperationComplete event in the timeout handler. This presented another issue: when using in-memory persistence, the saga data was not persisted across threads, even when it was locked by each thread. To get around this, I created an interface specifically for long-running, in-memory data persistence. This interface was injected into the saga as a singleton and used to read/write saga data across threads for long-running operations.
I know that in-memory persistence is not recommended, but for my needs configuring another type of persistence (like Azure tables) was overkill; I simply want the OperationComplete event to fire under normal circumstances. If a reboot happens during a running job, I don't need to persist the saga data: the job will be cut short anyway, and the saga timeout will fire the OperationComplete event with an error if the job runs longer than a set maximum time.
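For anyone curious, a very rough sketch of that watchdog shape (the 30-second interval, the OperationTimeoutCheck message, and the JobFinished lookup against the injected in-memory store are all made up for illustration):

public class OperationWatchdogData : ContainSagaData
{
    public Guid OperationID { get; set; }
}

public class OperationTimeoutCheck { }

public class OperationWatchdogSaga : Saga<OperationWatchdogData>,
    IAmStartedByMessages<OperationStarted>,
    IHandleTimeouts<OperationTimeoutCheck>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<OperationWatchdogData> mapper)
    {
        mapper.ConfigureMapping<OperationStarted>(m => m.OperationID)
              .ToSaga(s => s.OperationID);
    }

    public Task Handle(OperationStarted message, IMessageHandlerContext context)
    {
        Data.OperationID = message.OperationID;
        // Check back periodically until the job reports completion.
        return RequestTimeout<OperationTimeoutCheck>(context, TimeSpan.FromSeconds(30));
    }

    public async Task Timeout(OperationTimeoutCheck state, IMessageHandlerContext context)
    {
        if (JobFinished(Data.OperationID)) // hypothetical check of the shared store
        {
            await context.Publish(new OperationComplete { OperationID = Data.OperationID });
            MarkAsComplete();
        }
        else
        {
            await RequestTimeout<OperationTimeoutCheck>(context, TimeSpan.FromSeconds(30));
        }
    }
}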
The cause of this is that if ProcessMessage is fast enough, you might use the current context before it gets invalidated, for example by being disposed.
By returning from Handle successfully, you're telling NServiceBus "I'm done with this message", so it may do what it wants with the context as well, such as invalidating it. In the background processor, you need an endpoint instance, not a message context.
By the time the new task starts running, you don't know whether Handle has returned or not, so you should just consider the message already consumed and thus unrecoverable: if errors happen in your separate task, you can't retry them.
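A minimal sketch of that idea: publish through a long-lived endpoint instance (EndpointHolder here is a hypothetical static holder for the IEndpointInstance) instead of the per-message context:

Task.Run(async () =>
{
    try
    {
        // The IMessageHandlerContext is invalid once Handle returns;
        // the endpoint instance stays valid for the process lifetime.
        await EndpointHolder.Instance.Publish(opStarted);
        ProcessMessage(message);
        await EndpointHolder.Instance.Publish(new OperationComplete
        {
            OperationID = opStarted.OperationID
        });
    }
    catch (Exception)
    {
        // Log and compensate here: NServiceBus recoverability
        // no longer applies to this detached work.
    }
});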
Avoid long-running processes without persistence. The sample you mention has a server that stores a work item from a message, and a process that polls this storage for work items. Perhaps not ideal if you scale out processors, but it won't lose messages.
To avoid constant polling, merge the server and the processor: poll unconditionally once at startup, and in Handle schedule a polling task. Take care that this task only polls if no other polling task is running, otherwise it may become worse than constant polling; you may use a semaphore to control this, as in the sketch below.
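For instance, a sketch of that guard with SemaphoreSlim (PollWorkItemsAsync is a stand-in for the actual polling routine):

private static readonly SemaphoreSlim pollGate = new SemaphoreSlim(1, 1);

private async Task TryPollAsync()
{
    // WaitAsync(0) returns immediately; false means a poll is already running.
    if (!await pollGate.WaitAsync(0))
        return;

    try
    {
        await PollWorkItemsAsync();
    }
    finally
    {
        pollGate.Release();
    }
}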
To scale out, you must have more servers. You need to measure whether the cost of N processors polling is greater than sending to N servers in a round-robin fashion, for some N, to know which approach actually performs better. In practice, polling is good enough for a low N.
Modifying the sample for multiple processors may require less deployment and configuration effort: you just add or remove processors, whereas adding or removing servers means changing their endpoints in all the places (e.g. config files) that point to them.
Another approach would be to break the long process into steps. NServiceBus has sagas: an approach usually implemented for a known or bounded number of steps. For an unknown number of steps it's still feasible, although some might consider it an abuse of the seemingly intended purpose of sagas.

Execute function every 5 minutes in background

I have a function which reads data from a web service. With that data I create bitmaps, which I send to panels (displays) that show them. Done manually, it works like a charm. What I need now is for my application to run this function automatically in the background every 5 minutes.
My application is running under IIS. How can I do that? Can someone help me with that?
You don't have to depend on the asp.net project for this; you can use a cache callback to do it.
I found a nice approach for this. I don't remember the link, so I'll give you the code that I use:
public abstract class Job
{
    protected Job()
    {
        Run();
    }

    protected abstract void Execute();

    protected abstract TimeSpan Interval { get; }

    private void Callback(string key, object value, CacheItemRemovedReason reason)
    {
        if (reason == CacheItemRemovedReason.Expired)
        {
            Execute();
            Run();
        }
    }

    protected void Run()
    {
        HttpRuntime.Cache.Add(GetType().ToString(), this, null,
            Cache.NoAbsoluteExpiration, Interval, CacheItemPriority.Normal, Callback);
    }
}
Here is an implementation:
public class EmailJob : Job
{
    protected override void Execute()
    {
        // TODO: send email to all registered users
    }

    protected override TimeSpan Interval
    {
        get { return new TimeSpan(0, 10, 0); }
    }
}
An ASP.NET application is not the correct framework for a task like this.
You should probably create a dedicated service for this type of task.
Another option is to create a scheduled task that runs every X minutes.
On a side note, if you must do this through your ASP.NET application, I recommend reading up on how to simulate a Windows service using ASP.NET to run scheduled jobs.
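As an illustration of the dedicated-service route, a minimal sketch of a Windows service with a 5-minute timer (UpdatePanels stands in for the poster's bitmap logic):

using System.ServiceProcess;
using System.Timers;

public class PanelUpdateService : ServiceBase
{
    private Timer timer;

    protected override void OnStart(string[] args)
    {
        timer = new Timer(TimeSpan.FromMinutes(5).TotalMilliseconds);
        timer.Elapsed += (s, e) => UpdatePanels();
        timer.Start();
    }

    protected override void OnStop()
    {
        timer.Stop();
        timer.Dispose();
    }

    private void UpdatePanels()
    {
        // Hypothetical: read data from the web service, build the
        // bitmaps, and push them to the panels.
    }
}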
