Azure Storage 'Lease' - correct exception handling approach - C#

I am developing a .NET Core web app and I am using blob storage to store objects that can be modified during a request. I had to prevent parallel access to a single object, so I added a 'lease' to my storage integration.
In practice, when I receive a request, one object is fetched from blob storage with a lease for some period. At the end of the request the object is updated in storage and the lease is released - pretty simple.
But what is the correct exception handling?
I am facing the problem that when an exception occurs in the middle of a request, the lease is not released. I tried to implement the release in Dispose (in the class where I control fetching and leasing from blob storage), but this is not executed when an unhandled exception is thrown.
Adding try/catch/finally does not seem clean to me. My question is: do you know a common best-practice approach for releasing the lease at the end of the request? Thank you.

Per your description, I wrote a simple demo for you about acquiring and breaking a lease; just try the code below:
using System;
using System.Threading;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;
using Microsoft.AspNetCore.Mvc;

namespace getSasTest.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class editBlob : ControllerBase
    {
        [HttpGet]
        public string get()
        {
            var connstr = "";
            var container = "";
            var blob = "";
            var blobClient = new BlobContainerClient(connstr, container).GetBlobClient(blob);
            var leaseClient = new BlobLeaseClient(blobClient);
            try
            {
                // auto-break the lease after 15s
                var duration = new TimeSpan(0, 0, 15);
                leaseClient.Acquire(duration, null);
            }
            // if some error occurs, the request ends here
            catch (Azure.RequestFailedException e)
            {
                if (e.ErrorCode.Equals("LeaseAlreadyPresent"))
                {
                    return "Blob is under process, it will take some time, please try again later";
                }
                else
                {
                    return "some other Azure request errors: " + e.Message;
                }
            }
            catch (Exception e)
            {
                return "some other errors: " + e.Message;
            }
            // mock time consumption to process the blob
            Thread.Sleep(10000);
            // break the lease explicitly if processing finishes within 15s
            leaseClient.Break();
            return "Done";
        }
    }
}
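If you want the release to be guaranteed even when a later part of the request throws, one option in ASP.NET Core is to centralize it in middleware, since a finally block around await _next(context) runs on both the success and the unhandled-exception path. A minimal sketch - ILeaseTracker here is a hypothetical per-request service that remembers which leases the handler acquired, not an Azure SDK type:

using System.Collections.Generic;
using System.Threading.Tasks;
using Azure.Storage.Blobs.Specialized;
using Microsoft.AspNetCore.Http;

// Hypothetical per-request registry of acquired leases (assumed, not part of the SDK).
public interface ILeaseTracker
{
    IReadOnlyCollection<BlobLeaseClient> ActiveLeases { get; }
}

public class LeaseReleaseMiddleware
{
    private readonly RequestDelegate _next;

    public LeaseReleaseMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context, ILeaseTracker tracker)
    {
        try
        {
            await _next(context);
        }
        finally
        {
            // Runs whether the request succeeded or threw.
            foreach (var lease in tracker.ActiveLeases)
            {
                await lease.ReleaseAsync();
            }
        }
    }
}

It is still a try/finally underneath, but it lives in one place instead of in every action.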

So, based on your requirements, couldn't you authorize a party - a cron job or an event (registered event/TTL handlers?) - to break a lease when some weirdness (whatever that means for you) is detected? A sketch of that idea follows at the end of this answer. It looks like you are really worried about the "pattern" over correctness; the two should complement each other.
In practice, an exception handling strategy should lead to sufficient actionable information.
For some, that means:
E -> E-1 (digest or no digest) -> E-2 (digest or no digest) -> ...
such that:
E-n: an exception at a certain level of nesting
digest: would you propagate this, or digest it and move forward?
Breaking the lease (essentially the correctness of the program) should not mean you hamper your version of elegance.
Every service is usually a family of services:
the service itself
service clean-up handlers 1 through n
service edge-case handlers 1 through n
a primer for the service
and so on...
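To make the cron/event idea concrete, here is a rough sketch of such an authorized clean-up party: a hosted background job that breaks leases it considers stale. The "leasedAtUtc" metadata stamp is an assumed application convention (Azure does not expose lease age), so this mainly matters for infinite leases - fixed-duration leases expire on their own:

using System;
using System.Threading;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Blobs.Specialized;
using Microsoft.Extensions.Hosting;

public class StaleLeaseBreaker : BackgroundService
{
    private static readonly TimeSpan MaxLeaseAge = TimeSpan.FromMinutes(5);
    private readonly BlobContainerClient _container;

    public StaleLeaseBreaker(BlobContainerClient container) => _container = container;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            await foreach (var blob in _container.GetBlobsAsync(BlobTraits.Metadata, cancellationToken: stoppingToken))
            {
                // "leasedAtUtc" is our own convention, written when the lease is acquired.
                if (blob.Metadata.TryGetValue("leasedAtUtc", out var stamp)
                    && DateTimeOffset.TryParse(stamp, out var leasedAt)
                    && DateTimeOffset.UtcNow - leasedAt > MaxLeaseAge)
                {
                    var leaseClient = new BlobLeaseClient(_container.GetBlobClient(blob.Name));
                    await leaseClient.BreakAsync(cancellationToken: stoppingToken);
                }
            }
            await Task.Delay(TimeSpan.FromMinutes(1), stoppingToken);
        }
    }
}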

Related

Windows service with NHibernate is increasing used memory

I'm debugging an existing Windows service (written in C#) that needs to be manually restarted every few months because it keeps eating memory.
The service is not very complicated. It requests a JSON file from an external server, which holds products.
Next it parses this JSON file into a list of products.
For each of these products it checks whether the product already exists in the database. If not, it will be added; if it does exist, its properties will be updated.
The database is a PostgreSQL database and we use NHibernate v3.2.0 as the ORM.
I've been using JetBrains dotMemory to profile the service while it runs:
The service starts, and after 30s it starts doing its work. Snapshot #1 is made before the first run.
Snapshot #6 was made after the 5th run.
The other snapshots are also made after a run.
As you can see, after each run the number of objects increases by approx. 60k and the memory used increases by a few MBs.
Looking closer at Snapshot #6 shows that the retained size is mostly used by NHibernate session objects:
Here's my OnStart code:
try
{
    // Trying to fix certificate errors:
    ServicePointManager.ServerCertificateValidationCallback += delegate
    {
        _logger.Debug("Cert validation work around");
        return true;
    };
    _timer = new Timer(_interval)
    {
        AutoReset = false // makes it fire only once, restart when work is done to prevent multiple runs
    };
    _timer.Elapsed += DoServiceWork;
    _timer.Start();
}
catch (Exception ex)
{
    _logger.Error("Exception in OnStart: " + ex.Message, ex);
}
And my DoServiceWork:
try
{
    // Call execute
    var processor = new SAPProductProcessor();
    processor.Execute();
}
catch (Exception ex)
{
    _logger.Error("Error in DoServiceWork", ex);
}
finally
{
    // Next round:
    _timer.Start();
}
In SAPProductProcessor I use two DB calls, both in a loop.
I loop through all products from the JSON file and check whether the product is already in the table, using the product code:
ProductDto dto;
using (var session = SessionFactory.OpenSession())
{
    using (var transaction = session.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        var criteria = session.CreateCriteria<ProductDto>();
        criteria.Add(Restrictions.Eq("Code", code));
        dto = criteria.UniqueResult<ProductDto>();
        transaction.Commit();
    }
}
return dto;
And when the productDto is updated I save it using:
using (var session = SessionFactory.OpenSession())
{
    using (var transaction = session.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        session.SaveOrUpdate(item);
        transaction.Commit();
    }
}
I'm not sure how to change the code above to stop the increase in memory and in the number of objects.
I already tried using var session = SessionFactory.GetCurrentSession(); instead of using (var session = SessionFactory.OpenSession()), but that didn't stop the increase in memory.
Update
In the constructor of my data access class, a MultiSessionFactoryProvider sessionFactoryProvider is injected, and the base class is called with base(sessionFactoryProvider.GetFactory("data")). This base class has a method BeginSession:
ISession session = _sessionFactory.GetCurrentSession();
if (session == null)
{
    session = _sessionFactory.OpenSession();
    ThreadLocalSessionContext.Bind(session);
}
And an EndSession:
ISession session = ThreadLocalSessionContext.Unbind(_sessionFactory);
if (session != null)
{
    session.Close();
}
In my data access class I call base.BeginSession at the start and base.EndSession at the end.
The suggestion about the singleton made me have a closer look at my data access class.
I thought that creating this class anew on every run would free the NHibernate memory when it went out of scope. I even added a dispose call in the class's destructor, but that didn't work - or, more likely, I'm not doing it correctly.
I now save my data access class in a static field and re-use it. Now my memory doesn't increase anymore and, more importantly, the number of open objects stays the same. I just ran the service under dotMemory again for over an hour, calling the run around 150 times; the memory of the last snapshot is still around 105MB, the number of objects is still 117k, and my SessionFactory dictionary is now just 4MB instead of 150*4MB.
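For reference, the usual shape of the fix the update describes is to treat the ISessionFactory as a process-wide singleton (it is heavyweight and thread-safe) while keeping sessions short-lived. A minimal sketch, with the configuration details elided:

using NHibernate;
using NHibernate.Cfg;

public static class NHibernateHelper
{
    // Built exactly once per process; building a factory per run leaks memory
    // because each factory holds its own caches and mapping metadata.
    private static readonly ISessionFactory Factory =
        new Configuration().Configure().BuildSessionFactory();

    // Sessions are cheap: open one per unit of work and dispose it promptly.
    public static ISession OpenSession() => Factory.OpenSession();
}

The per-operation using (var session = ...) blocks from the question are then fine as-is, as long as they all draw from this single factory.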

Better performance of a service call than a library was not expected

I was trying to figure out some performance values for two scenarios. I thought I would only be confirming the obvious, but when I got the results I was a little confused, and now I am looking for a justification.
I have a library which makes a couple of queries against a MongoDB database and Active Directory services, then returns the results to the client. They are:
GetUserType - to MongoDB - there is a collection in which every document has username and type fields. In the query I give the username and ask for the type field.
LoginCheck - to Active Directory - given the username and the password from the client, I create a PrincipalContext object to access the AD server and call ValidateCredentials on it.
This job is performed by an existing MVC application at the moment, and we are going to create a new desktop application and employ it for the same job.
We were curious about how differently these two scenarios would perform. We assumed without hesitation that a direct call to a library, without any HTTP connection, would perform better than a service request. But we still wondered how much difference there is, and if it was acceptable we were going to make it work through the REST MVC service - because of reasons :)
Hence we tested out the following architectures:
Scenario 1:
Scenario 2:
Basically, what I do for the performance test is this:
For scenario 1:
for (var i = 0; i < 10000; i++)
{
    new Class1().HeavyMethod();
}
For scenario 2:
// client side
for (var i = 0; i < 10000; i++)
{
    using (var client = new HttpClient())
    {
        var values = new Dictionary<string, string>();
        var content = new FormUrlEncodedContent(values);
        var response = client.PostAsync("http://localhost:654/Home/HeavyLift", content).Result;
        var responseString = response.Content.ReadAsStringAsync().Result;
    }
}

// MVC rest service
public class HomeController : Controller
{
    public JsonResult HeavyLift()
    {
        return Json(new Class1().HeavyMethod(), JsonRequestBehavior.AllowGet);
    }
}
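As a side note on the harness itself: creating a new HttpClient per iteration is a known anti-pattern - each instance opens fresh connections, which can exhaust sockets and skew timings. A fairer scenario-2 client would share one instance; a sketch, against the same endpoint as above:

using System.Collections.Generic;
using System.Net.Http;

public static class Scenario2Client
{
    // One shared HttpClient: it is thread-safe for concurrent requests
    // and reuses pooled connections instead of re-handshaking per call.
    private static readonly HttpClient Client = new HttpClient();

    public static void Run()
    {
        for (var i = 0; i < 10000; i++)
        {
            var content = new FormUrlEncodedContent(new Dictionary<string, string>());
            var response = Client.PostAsync("http://localhost:654/Home/HeavyLift", content).Result;
            var responseString = response.Content.ReadAsStringAsync().Result;
        }
    }
}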
Common Class:
public class Class1
{
    public string HeavyMethod()
    {
        var userName = "asdfasdfasd";
        var password = "asdfasdfasdf";
        try
        {
            // this call is to MongoDB
            var userType = Personnel.GetPersonnelsType(userName).Result;
            // this call is to Active Directory
            var user = new ADUser(new Session
            {
                UserType = userType.Type,
                UserName = userName,
                Password = password
            });
            return userType.Type + "-" + user.Auth();
        }
        catch (Exception e)
        {
            return e.Message;
        }
    }
}
The results for 10,000 consecutive calls are confusingly shocking:
Scenario 1: 159181 ms
Scenario 2: 13952 ms
Scenario 1 starts off pretty quickly for the first few dozen calls, then it starts to slow down.
Scenario 2, though, offers a constant response time throughout the 10k calls.
What is actually happening here?
Note: I checked the memory and CPU usage of the server these scenarios run on (everything runs on the same server), but there is nothing interesting there; they behave just the same in terms of memory and CPU resources.

How to Wait on Service Request (RQS)

Note: Cross-posted at the LabVIEW forums: http://forums.ni.com/t5/LabVIEW/C-VISA-wait-on-RQS/td-p/3122939
I'm trying to write a simple C# (.NET 4.0) program to control a Keithley 2400 SMU over VISA GPIB, and I'm having trouble getting the program to wait for the service request that the Keithley sends at the end of the sweep.
The sweep is a simple linear voltage sweep, controlled internally by the Keithley unit. I've set the unit up to send a service request signal at the end of the sweep or when compliance is reached.
I'm able to send the commands to the SMU and read the data buffer, but only if I manually add a timeout between the sweep start command and the read data command.
One issue I'm having is that I'm quite new to C# - I'm using this project (porting parts of my LabVIEW code) to learn it.
Here is what I have so far for my C# code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using NationalInstruments.VisaNS;

private void OnServiceRequest(object sender, MessageBasedSessionEventArgs e)
{
    Console.WriteLine("Service Request Received!");
}

// The following code is in a class method, but
public double[,] RunSweep()
{
    // Create the session and message-based session
    MessageBasedSession mbSession = null;
    Session mySession = null;
    string responseString = null;
    // open the address
    Console.WriteLine("Sending Commands to Instrument");
    instrAddr = "GPIB0::25::INSTR";
    mySession = ResourceManager.GetLocalManager().Open(instrAddr);
    // Cast to message-based session
    mbSession = (MessageBasedSession)mySession;
    // Here's where things get iffy for me... enabling the event and whatnot
    mbSession.ServiceRequest += new MessageBasedSessionEventHandler(OnServiceRequest);
    MessageBasedSessionEventType srq = MessageBasedSessionEventType.ServiceRequest;
    mbSession.EnableEvent(srq, EventMechanism.Handler);
    // Start the sweep (SMU was set up earlier)
    Console.WriteLine("Starting Sweep");
    mbSession.Write(":OUTP ON;:INIT");
    int timeout = 10000; // milliseconds
    // Thread.Sleep(10000); // using this line works fine, but it means the test always takes 10s even if compliance is hit early
    // This raises an error saying that the event is not enabled.
    mbSession.WaitOnEvent(srq, timeout);
    // Turn off the SMU.
    Console.WriteLine("I hope the sweep is done, cause I'm tired of waiting");
    mbSession.Write(":OUTP OFF;:TRAC:FEED:CONT NEV");
    // Get the data
    string data = mbSession.Query(":TRAC:DATA?");
    // Close session
    mbSession.Dispose();
    // For now, create a dummy array, 3x3, to return. The array after is the starting value.
    double[,] dummyArray = new double[3, 3] { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
    return dummyArray;
}
All the above is supposed to mimic this LabVIEW code:
So, any ideas on where I'm going wrong?
Thanks,
Edit:
After a little more fiddling, I've found that the Service Request function OnServiceRequest is actually fired at the right time ("Service Request Received!" is printed to the console).
It turns out that I need to enable the event as a Queue rather than a handler. This line:
mbSession.EnableEvent(srq, EventMechanism.Handler);
Should actually be:
mbSession.EnableEvent(srq, EventMechanism.Queue);
Source: The documentation under "Remarks". It was a pain to find the docs on it... NI needs to make it easier :-(.
With this change, I also don't need to create the MessageBasedSessionEventHandler.
The final, working code looks like:
var rm = ResourceManager.GetLocalManager().Open("GPIB0::25::INSTR");
MessageBasedSession mbSession = (MessageBasedSession)rm;
MessageBasedSessionEventType srq = MessageBasedSessionEventType.ServiceRequest;
mbSession.EnableEvent(srq, EventMechanism.Queue); // Note QUEUE, not HANDLER
int timeout = 10000;
// Start the sweep
mbSession.Write(":OUTP ON;:INIT");
// This waits for the Service Request
mbSession.WaitOnEvent(srq, timeout);
// After the Service Request, turn off the SMUs and get the data
mbSession.Write(":OUTP OFF;:TRAC:FEED:CONT NEV");
string data = mbSession.Query(":TRAC:DATA?");
mbSession.Dispose();
What you're doing looks correct to me, so it's possible that there's an issue with the NI library.
The only thing I can think of to try is waiting for "all events" instead of just ServiceRequest, like this:
mbSession.WaitOnEvent(MessageBasedSessionEventType.AllEnabledEvents, timeout);
Note that it doesn't look like you can "enable" all events (so don't change that part).
I also looked for some examples of how other people do Keithley sweeps, and I found this and this (Matlab example). As I suspected, in both cases they don't use events to determine when the sweep is finished, but rather a while loop that keeps polling the Keithley (the first link actually uses threads, but it's the same idea). This makes me think that maybe that's your best bet. So you could just do this:
int timeout = 10000;
int cycleWait = 1000;
for (int i = 0; i < timeout / cycleWait; i++)
{
    try
    {
        string data = mbSession.Query(":TRAC:DATA?");
        break;
    }
    catch
    {
        Thread.Sleep(cycleWait);
    }
}
(You may also have to check whether data is null, but there has to be some way of knowing when the sweep is finished.)
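A variation on the same polling idea, if your VISA version exposes it, is to poll the instrument's status byte instead of provoking exceptions with premature reads. This sketch assumes a ReadStatusByte method and a RequestingService flag exist in your NI VISA .NET version - verify the names against your library, as they differ between VisaNS and the newer Ivi.Visa API:

int timeout = 10000;
int cycleWait = 250;
for (int elapsed = 0; elapsed < timeout; elapsed += cycleWait)
{
    // Bit 6 (RQS/MSS) of the status byte signals a pending service request.
    var status = mbSession.ReadStatusByte();
    if ((status & StatusByteFlags.RequestingService) != 0)
        break;
    Thread.Sleep(cycleWait);
}
string data = mbSession.Query(":TRAC:DATA?");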

Closing WCF Service from Async method?

I have a service layer project on an ASP.NET MVC 5 application I am creating on .NET 4.5.2, which calls out to an external 3rd-party WCF service to get information asynchronously. An original method to call the external service is below. There are 3 of these in total, all similar, which I call in order from my GetInfoFromExternalService method (note it isn't actually called that - just naming it for illustration):
private async Task<string> GetTokenIdForCarsAsync(Car[] cars)
{
    try
    {
        if (_externalpServiceClient == null)
        {
            _externalpServiceClient = new ExternalServiceClient("WSHttpBinding_IExternalService");
        }
        string tokenId = await _externalpServiceClient.GetInfoForCarsAsync(cars).ConfigureAwait(false);
        return tokenId;
    }
    catch (Exception ex)
    {
        //TODO plug in log4net
        throw new Exception("Failed" + ex.Message);
    }
    finally
    {
        CloseExternalServiceClient(_externalpServiceClient);
        _externalpServiceClient = null;
    }
}
So that meant that when each async call had completed, the finally block ran - the WCF client was closed and set to null, and then newed up when another request was made. This was working fine until a change needed to be made whereby, if the number of cars passed in by the user exceeds 1000, I create a split function and then call my GetInfoFromExternalService method in a WhenAll with each batch of 1000 - as below:
if (cars.Count > 1000)
{
    const int packageSize = 1000;
    var packages = SplitCarss(cars, packageSize);
    //kick off the number of split packages we got above in Parallel and await until they all complete
    await Task.WhenAll(packages.Select(GetInfoFromExternalService));
}
However, this now falls over: if I have 3000 cars, the method call to GetTokenId news up the WCF service, but the finally block closes it, so the second batch of 1000 that attempts to run throws an exception. If I remove the finally block the code works OK - but it is obviously not good practice to leave this WCF client unclosed.
I had tried putting it after the if/else block where cars.Count is evaluated - but if a user uploads e.g. 2000 cars and that completes and runs in, say, 1 min, in the meantime (as the user had control of the webpage) they could upload another 2000, or another user could upload, and again it falls over with an exception.
Is there a good way anyone can see to correctly close the external service client?
Based on the related question of yours, your "split" logic doesn't seem to give you what you're trying to achieve. WhenAll still executes requests in parallel, so you may end up running more than 1000 requests at any given moment. Use SemaphoreSlim to throttle the number of simultaneously active requests and limit that number to 1000. This way, you don't need to do any splits.
Another issue might be in how you handle the creation/disposal of the ExternalServiceClient. I suspect there might be a race condition there.
Lastly, when you re-throw from the catch block, you should at least include a reference to the original exception.
Here's how to address these issues (untested, but it should give you the idea):
const int MAX_PARALLEL = 1000;
SemaphoreSlim _semaphoreSlim = new SemaphoreSlim(MAX_PARALLEL);
volatile int _activeClients = 0;
readonly object _lock = new Object();
ExternalServiceClient _externalpServiceClient = null;

ExternalServiceClient GetClient()
{
    lock (_lock)
    {
        if (_activeClients == 0)
            _externalpServiceClient = new ExternalServiceClient("WSHttpBinding_IExternalService");
        _activeClients++;
        return _externalpServiceClient;
    }
}

void ReleaseClient()
{
    lock (_lock)
    {
        _activeClients--;
        if (_activeClients == 0)
        {
            _externalpServiceClient.Close();
            _externalpServiceClient = null;
        }
    }
}
private async Task<string> GetTokenIdForCarsAsync(Car[] cars)
{
    var client = GetClient();
    try
    {
        await _semaphoreSlim.WaitAsync().ConfigureAwait(false);
        try
        {
            string tokenId = await client.GetInfoForCarsAsync(cars).ConfigureAwait(false);
            return tokenId;
        }
        catch (Exception ex)
        {
            //TODO plug in log4net
            throw new Exception("Failed" + ex.Message, ex);
        }
        finally
        {
            _semaphoreSlim.Release();
        }
    }
    finally
    {
        ReleaseClient();
    }
}
Updated based on the comment:

the External WebService company can accept me passing up to 5000 car objects in one call - though they recommend splitting into batches of 1000 and running up to 5 in parallel at one time - so when I mention 7000 I don't mean GetTokenIdForCarAsync would be called 7000 times - with my code currently it should be called 7 times, i.e. giving me back 7 token ids - I am wondering can I use your SemaphoreSlim to run the first 5 in parallel and then 2
The changes are minimal (but untested). First:
const int MAX_PARALLEL = 5;
Then, using Marc Gravell's ChunkExtension.Chunkify, we introduce GetAllTokenIdForCarsAsync, which in turn will be calling GetTokenIdForCarsAsync from above:
private async Task<string[]> GetAllTokenIdForCarsAsync(Car[] cars)
{
    var chunks = cars.Chunkify(1000);
    var tasks = chunks.Select(chunk => GetTokenIdForCarsAsync(chunk)).ToArray();
    await Task.WhenAll(tasks);
    return tasks.Select(task => task.Result).ToArray();
}
Now you can pass all 7000 cars into GetAllTokenIdForCarsAsync. This is a skeleton; it can be improved with retry logic in case any of the batch requests fails (I'm leaving that up to you).
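For completeness, a Chunkify extension in the spirit of the one referenced might look like this (a minimal sketch, not Marc Gravell's exact code):

using System.Collections.Generic;

public static class ChunkExtension
{
    // Splits a sequence into consecutive arrays of at most 'size' elements.
    public static IEnumerable<T[]> Chunkify<T>(this IEnumerable<T> source, int size)
    {
        var chunk = new List<T>(size);
        foreach (var item in source)
        {
            chunk.Add(item);
            if (chunk.Count == size)
            {
                yield return chunk.ToArray();
                chunk.Clear();
            }
        }
        if (chunk.Count > 0)
            yield return chunk.ToArray();
    }
}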

Multithreaded linq2sql applications TransactionScope difficulties

I've created a file processing service which reads and imports XML files from a specific directory.
The service starts several workers which poll a file queue for new files and use linq2sql for data access. Each worker thread has its own DataContext.
The files being processed contain several orders, and each order contains several addresses (customer/contractor/subcontractor).
I've defined a TransactionScope around the handling of each file. This way I want to ensure that the whole file is handled correctly, or that the whole file is rolled back when an exception occurs:
try
{
    using (var tx = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        foreach (var order in orders)
        {
            HandleType1Order(order);
        }
        tx.Complete();
    }
}
catch (SqlException ex)
{
    if (ex.Number == SqlErrorNumbers.Deadlock)
    {
        throw new FileHandlerException("File Caused a Deadlock, retrying later", ex, true);
    }
    else
        throw;
}
One of the requirements for the service is that it creates or updates addresses found in the XML files. So I've created an address service which is responsible for address management. The following piece of code gets executed for each order (within the method HandleType1Order()) in the XML import file, and thus is part of the TransactionScope for the entire file:
using (var tx = new TransactionScope())
{
    address = GetAddressByReference(number);
    if (address != null) //address is already known
    {
        Log.Debug("Found address {0} - {1}. Updating...", address.Code, address.Name);
        UpdateAddress(address, name, number, isContractor, isSubContractor, isCustomer);
    }
    else
    {
        //address not known, so create it
        Log.Debug("Address {0} not known, creating address", number);
        address = CreateAddress(name, number, sourceSystemId, isContractor, isSubContractor, isCustomer);
        _addressRepository.Save(address);
    }
    _addressRepository.Flush();
    tx.Complete();
}
What I'm trying to do here is to create or update an address, with the number being unique.
The method GetAddressByReference(string reference) returns a known address, or null when the address is not found:
public virtual Address GetAddressByReference(string reference)
{
    return _addressRepository.GetAll().SingleOrDefault(a => a.Code == reference);
}
When I run the service, however, it creates multiple addresses with the same number. The method GetAddressByReference() gets called concurrently and should return a known address when a second thread executes it with the same address number; however, it returns null. There is probably something wrong with my transaction boundaries or isolation level, but I can't seem to get it to work.
Can someone point me in the right direction? Help is much appreciated!!
p.s. I have no problem with the transactions deadlocking and causing a rollback; the file will just be retried when a deadlock occurs.
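For reference, a TransactionScope's isolation level can be pinned explicitly through TransactionOptions rather than relying on the default (which is Serializable for new scopes); a minimal sketch of the file-level scope with an explicit level:

using System.Transactions;

var options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.Serializable, // choose deliberately; Serializable is also the TransactionScope default
    Timeout = TransactionManager.DefaultTimeout
};
using (var tx = new TransactionScope(TransactionScopeOption.RequiresNew, options))
{
    // ... handle the file ...
    tx.Complete();
}

Note that nested scopes joining an ambient transaction must request a compatible isolation level, or the scope constructor throws.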
Edit 1 - threading code:
public void Work()
{
    _isRunning = true;
    while (true)
    {
        ImportFileTask task = _queue.Dequeue(); //dequeue blocks on an empty queue
        if (task == null)
            break; //shut down the worker when a null task is read from the queue
        IFileImporter importer = null;
        try
        {
            using (new LockFile(task.FilePath).Acquire()) //create a file lock to sync access to the file across all processes
            {
                importer = _kernel.Resolve<IFileImporter>();
                Log.DebugFormat("Processing file {0}", task.FilePath);
                importer.Import(task.FilePath);
                Log.DebugFormat("Done Processing file {0}", task.FilePath);
            }
        }
        catch (Exception ex)
        {
            Log.Fatal(
                "A Fatal exception occured while handling {0} --> {1}".FormatWith(task.FilePath, ex.Message), ex);
        }
        finally
        {
            if (importer != null)
                _kernel.ReleaseComponent(importer);
        }
    }
    _isRunning = false;
}
The above method runs in all of our worker threads. It uses Castle Windsor to resolve the FileImporter, which has a transient lifestyle (and is thus not shared across threads).
You didn't post your threading code, so it's difficult to say what the issue is. I'm assuming you have started the DTC (Distributed Transaction Coordinator)?
Are you using a ThreadPool? Are you using the "lock" keyword?
http://msdn.microsoft.com/en-us/library/c5kehkcz.aspx
