I have an Excel add-in written in C#, targeting .NET 4.5. It sends many web service requests to a server to get data, e.g. 30,000 requests. When the data for a request comes back, the add-in plots it in Excel.
Originally I issued all the requests asynchronously, but sometimes I would get an OutOfMemoryException.
So I changed the code to send the requests one by one, but that is too slow; it takes a long time to finish all the requests.
I wonder if there is a way to send 100 requests at a time asynchronously, and once the data from all 100 has come back and been plotted in Excel, send the next 100.
Thanks
Edit
My add-in has a ribbon button "Refresh"; when it is clicked, the refresh process starts.
On the main UI thread, when the ribbon button is clicked, it calls the web service BuildMetaData.
Once that returns, its callback MetaDataCompleteCallback sends another web service call.
Once that returns, its callback DataRequestJobFinished calls Plot to put the data in Excel. See below:
RefreshBtn_Click()
{
    if (cells == null) return;
    Range firstOccurence = null;
    firstOccurence = cells.Find(functionPattern, null,
                                null, null,
                                XlSearchOrder.xlByRows,
                                XlSearchDirection.xlNext,
                                null, null, null);
    DataRequest request = null;
    _reportObj = null;
    Range currentOccurence = null;
    while (!Helper.RefreshCancelled)
    {
        if (firstOccurence == null || IsRangeEqual(firstOccurence, currentOccurence)) break;
        found = true;
        currentOccurence = cells.FindNext(currentOccurence ?? firstOccurence);
        try
        {
            var excelFormulaCell = new ExcelFormulaCell(currentOccurence);
            if (excelFormulaCell.HasValidFormulaCell)
            {
                request = new DataRequest(_unityContainer, XLApp, excelFormulaCell);
                request.IsRefreshClicked = true;
                request.Workbook = Workbook;
                request.Worksheets = Worksheets;
                _reportObj = new ReportBuilder(_unityContainer, XLApp, request, index, false);
                _reportObj.ParseParameters();
                _reportObj.GenerateReport();
                //this is necessary b/c error message is wrapped in valid object DataResponse
                //if (!string.IsNullOrEmpty(_reportObj.ErrorMessage)) //Clear previous error message
                {
                    ErrorMessage = _reportObj.ErrorMessage;
                    Errors.Add(ErrorMessage);
                    AddCommentToCell(_reportObj);
                    Errors.Remove(ErrorMessage);
                }
            }
        }
        catch (Exception ex)
        {
            ErrorMessage = ex.Message;
            Errors.Add(ErrorMessage);
            _reportObj.ErrorMessage = ErrorMessage;
            AddCommentToCell(_reportObj);
            Errors.Remove(ErrorMessage);
            Helper.LogError(ex);
        }
    }
}
In the class that generates the report:
public void GenerateReport()
{
    Request.ParseFunction();
    Request.MetacompleteCallBack = MetaDataCompleteCallback;
    Request.BuildMetaData();
}

public void MetaDataCompleteCallback(int id)
{
    try
    {
        if (Request.IsRequestCancelled)
        {
            Request.FormulaCell.Dispose();
            return;
        }
        ErrorMessage = Request.ErrorMessage;
        if (string.IsNullOrEmpty(Request.ErrorMessage))
        {
            _queryJob = new DataQueryJob(UnityContainer, Request.BuildQueryString(), DataRequestJobFinished, Request);
        }
        else
        {
            ModifyCommentOnFormulaCellPublishRefreshEvent();
        }
    }
    catch (Exception ex)
    {
        ErrorMessage = ex.Message;
        ModifyCommentOnFormulaCellPublishRefreshEvent();
    }
    finally
    {
        Request.MetacompleteCallBack = null;
    }
}
public void DataRequestJobFinished(DataRequestResponse response)
{
    Dispatcher.Invoke(new Action<DataRequestResponse>(DataRequestJobFinishedUI), response);
}

public void DataRequestJobFinishedUI(DataRequestResponse response)
{
    try
    {
        if (Request.IsRequestCancelled)
        {
            return;
        }
        if (response.status != Status.COMPLETE)
        {
            ErrorMessage = ManipulateStatusMsg(response);
        }
        else // COMPLETE
        {
            var tmpReq = Request as DataRequest;
            if (tmpReq == null) return;
            new VerticalTemplate(tmpReq, response).Plot();
        }
    }
    catch (Exception e)
    {
        ErrorMessage = e.Message;
        Helper.LogError(e);
    }
    finally
    {
        //if (token != null)
        //    this.UnityContainer.Resolve<IEventAggregator>().GetEvent<DataQueryJobComplete>().Unsubscribe(token);
        ModifyCommentOnFormulaCellPublishRefreshEvent();
        Request.FormulaCell.Dispose();
    }
}
In the plot class:
public void Plot()
{
    ...
    attributeRange.Value2 = headerArray;
    DataRange.Value2 = ....
    DataRange.NumberFormat = ...
}
The OutOfMemoryException is not about too many requests being sent simultaneously; it is about freeing your resources the right way. In my experience there are two main problems behind this exception:
1. Incorrect handling of immutable structures or the System.String class
2. Not disposing your disposable resources, especially graphics objects and WCF requests
In the case of reporting, in my opinion, you have the second type of problem. DataRequest and DataRequestResponse are good places to start investigating for such objects.
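As a minimal sketch of what deterministic disposal looks like (this assumes your DataRequest wraps disposable resources such as a WCF client and implements IDisposable; the ctor, Execute, and Plot shown here are placeholders, not your actual API):

    // Sketch only: release each request's disposables deterministically,
    // instead of letting them pile up until the GC runs finalizers.
    public void ProcessSingleRequest(string query)
    {
        using (var request = new DataRequest(query)) // assumes DataRequest : IDisposable
        {
            var response = request.Execute(); // hypothetical call that does the work
            Plot(response);
        } // Dispose() runs here even if Execute() throws
    }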
If this doesn't help, try using the Task Parallel Library with the async/await pattern; you can find good examples here:
// Signature specifies Task<TResult>
async Task<int> TaskOfTResult_MethodAsync()
{
    int hours;
    // . . .
    // Return statement specifies an integer result.
    return hours;
}

// Calls to TaskOfTResult_MethodAsync
Task<int> returnedTaskTResult = TaskOfTResult_MethodAsync();
int intResult = await returnedTaskTResult;

// or, in a single statement
int intResult = await TaskOfTResult_MethodAsync();

// Signature specifies Task
async Task Task_MethodAsync()
{
    // . . .
    // The method has no return statement.
}

// Calls to Task_MethodAsync
Task returnedTask = Task_MethodAsync();
await returnedTask;

// or, in a single statement
await Task_MethodAsync();
In your code I see a while loop, in which you could collect your tasks into a Task[] of size 100 and then use the WaitAll method on it; that should solve the problem. Sorry, but your code is large enough that I can't give you a more exact example against it.
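Still, here is a minimal sketch of the batching idea (FetchDataAsync, DataResponse, and PlotInExcel are placeholders for your per-request web service call, its result type, and your plotting code, not your actual API):

    // Sketch: fire 100 requests, await them all, plot, then move on to the next 100.
    public async Task ProcessInBatchesAsync(IReadOnlyList<string> queries)
    {
        const int batchSize = 100;
        for (int i = 0; i < queries.Count; i += batchSize)
        {
            // Start the whole batch; Task.WhenAll completes when every request has.
            var batch = queries.Skip(i).Take(batchSize).Select(q => FetchDataAsync(q));
            DataResponse[] responses = await Task.WhenAll(batch);

            // Awaiting without ConfigureAwait(false) resumes on the UI thread,
            // so it is safe to touch Excel here.
            foreach (var response in responses)
            {
                PlotInExcel(response);
            }
        }
    }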
I'm having a lot of trouble parsing your code to figure out what is being iterated for your requests, but the basic template for batching asynchronously is going to be something like this:
const int batchSize = 100;

public async Task<IEnumerable<Result>> GetDataInBatches(IEnumerable<RequestParameters> parameters)
{
    if (!parameters.Any())
        return Enumerable.Empty<Result>();
    var batchResults = await Task.WhenAll(parameters.Take(batchSize).Select(doQuery));
    return batchResults.Concat(await GetDataInBatches(parameters.Skip(batchSize)));
}
where doQuery is something with the signature
async Task<Result> doQuery(RequestParameters parameters)
{
    //.. however you do the query
}
I wouldn't use this for a million requests since it's recursive, but your case would only generate a call stack about 300 deep, so you'll be fine.
Note that this also assumes that your data request work is done asynchronously and returns a Task. Most libraries have been updated to do this (look for methods with the Async suffix). If the library doesn't expose that API, you might want to create a separate question on how to get it to play nicely with the TPL.
I'm wondering how I should handle domain exceptions properly.
Should all of my consumer's code be wrapped in a try/catch block, or should I just throw an exception that will be handled by the appropriate FaultConsumer?
Consider these two samples:
Example-1 - the whole operation is wrapped in a try...catch block.
public async Task Consume(ConsumeContext<CreateOrder> context)
{
    try
    {
        //Consumer that creates order
        var order = new Order();
        var product = store.GetProduct(command.ProductId); // check if requested product exists
        if (product is null)
        {
            throw new DomainException(OperationCodes.ProductNotExist);
        }
        order.AddProduct(product);
        store.SaveOrder(order);
        context.Publish<OrderCreated>(new OrderCreated
        {
            OrderId = order.Id
        });
    }
    catch (Exception exception)
    {
        if (exception is DomainException domainException)
        {
            context.Publish<CreateOrderRejected>(new CreateOrderRejected
            {
                ErrorCode = domainException.Code
            });
        }
    }
}
Example-2 - MassTransit handles the DomainException by pushing the message onto the CreateOrder_error queue. Another service subscribes to that queue and processes the message once it arrives:
public async Task Consume(ConsumeContext<CreateOrder> context)
{
    //Consumer that creates order
    var order = new Order();
    var product = store.GetProduct(command.ProductId); // check if requested product exists
    if (product is null)
    {
        throw new DomainException(OperationCodes.ProductNotExist);
    }
    order.AddProduct(product);
    store.SaveOrder(order);
    context.Publish<OrderCreated>(new OrderCreated
    {
        OrderId = order.Id
    });
}
Which approach is better?
I know that I can use request/response and get information about the error immediately, but in my case it must be done via the message broker.
In your first example, you are handling a domain condition (in your example, a product not existing in the catalog) by producing an event that the order was rejected for an unknown product. This makes complete sense.
Now, if the database query to check the product couldn't connect to the database, that's a temporary situation that may resolve itself, and thus using a retry or scheduled redelivery makes sense - to try again before giving up entirely. Those are exceptions you would want to throw.
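As a sketch of configuring that with MassTransit's retry middleware (the endpoint name and CreateOrderConsumer are placeholders, and exact overloads vary by version):

    // Sketch: retry transient failures a few times before the message is
    // moved to the _error queue and a fault is published.
    cfg.ReceiveEndpoint("create-order", e =>
    {
        e.UseMessageRetry(r => r.Interval(5, TimeSpan.FromSeconds(10))); // 5 retries, 10s apart
        e.Consumer<CreateOrderConsumer>();
    });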
The business exception, on the other hand, you'd want to catch and handle by publishing an event:
public async Task Consume(ConsumeContext<CreateOrder> context)
{
    try
    {
        var order = new Order();
        var product = store.GetProduct(command.ProductId); // check if requested product exists
        if (product is null)
        {
            throw new DomainException(OperationCodes.ProductNotExist);
        }
        order.AddProduct(product);
        store.SaveOrder(order);
        await context.Publish<OrderCreated>(new OrderCreated
        {
            OrderId = order.Id
        });
    }
    catch (DomainException domainException)
    {
        await context.Publish<CreateOrderRejected>(new CreateOrderRejected
        {
            ErrorCode = domainException.Code
        });
    }
}
My take on this is that you seem to be heading toward the fire-and-forget commands mess. Of course, it is very context-specific; there are scenarios, especially integration, where you don't have a user on the other side sitting and wondering whether their command was eventually executed and what the outcome was.
So, for integration scenarios, I concur with Chris' answer: publishing a domain exception event makes perfect sense.
For user-interaction scenarios, however, I'd rather suggest using request-response, which can return different kinds of responses, such as a positive and a negative response, as described in the documentation. Here is the snippet from the docs:
Service side:
public class CheckOrderStatusConsumer :
IConsumer<CheckOrderStatus>
{
public async Task Consume(ConsumeContext<CheckOrderStatus> context)
{
var order = await _orderRepository.Get(context.Message.OrderId);
if (order == null)
await context.RespondAsync<OrderNotFound>(context.Message);
else
await context.RespondAsync<OrderStatusResult>(new
{
OrderId = order.Id,
order.Timestamp,
order.StatusCode,
order.StatusText
});
}
}
Client side:
var (statusResponse, notFoundResponse) =
    await client.GetResponse<OrderStatusResult, OrderNotFound>(new { OrderId = id });

// both tuple values are Task<Response<T>>, need to find out which one completed
if (statusResponse.IsCompletedSuccessfully)
{
    var orderStatus = await statusResponse;
    // do something
}
else
{
    var notFound = await notFoundResponse;
    // do something else
}
I have been working on a web scraping project.
I am having two issues. One is presenting the number of URLs processed as a percentage, but a far larger issue is that I cannot figure out how to know when all the threads I am creating are totally finished.
NOTE: I am aware that a parallel foreach, once done, moves on, BUT this is within a recursive method.
My code below:
public async Task Scrape(string url)
{
    var page = string.Empty;
    try
    {
        page = await _service.Get(url);
        if (page != string.Empty)
        {
            if (regex.IsMatch(page))
            {
                Parallel.For(0, regex.Matches(page).Count,
                    index =>
                    {
                        try
                        {
                            if (regex.Matches(page)[index].Groups[1].Value.StartsWith("/"))
                            {
                                var match = regex.Matches(page)[index].Groups[1].Value.ToLower();
                                if (!links.Contains(BaseUrl + match) && !Visitedlinks.Contains(BaseUrl + match))
                                {
                                    Uri ValidUri = WebPageValidator.GetUrl(match);
                                    if (ValidUri != null && HostUrls.Contains(ValidUri.Host))
                                        links.Enqueue(match.Replace(".html", ""));
                                    else
                                        links.Enqueue(BaseUrl + match.Replace(".html", ""));
                                }
                            }
                        }
                        catch (Exception e)
                        {
                            log.Error("Error occured: " + e.Message);
                            Console.WriteLine("Error occured, check log for further details.");
                        }
                    });

                WebPageInternalHandler.SavePage(page, url);
                var context = CustomSynchronizationContext.GetSynchronizationContext();

                Parallel.ForEach(links, new ParallelOptions { MaxDegreeOfParallelism = 25 },
                    webpage =>
                    {
                        try
                        {
                            if (WebPageValidator.ValidUrl(webpage))
                            {
                                string linkToProcess = webpage;
                                if (links.TryDequeue(out linkToProcess) && !Visitedlinks.Contains(linkToProcess))
                                {
                                    ShowPercentProgress();
                                    Thread.Sleep(15);
                                    Visitedlinks.Enqueue(linkToProcess);
                                    Task d = Scrape(linkToProcess);
                                    Console.Clear();
                                }
                            }
                        }
                        catch (Exception e)
                        {
                            log.Error("Error occured: " + e.Message);
                            Console.WriteLine("Error occured, check log for further details.");
                        }
                    });

                Console.WriteLine("parallel finished");
            }
        }
    }
    catch (Exception e)
    {
        log.Error("Error occured: " + e.Message);
        Console.WriteLine("Error occured, check log for further details.");
    }
}
NOTE that Scrape gets called multiple times (recursively).
I call the method like this:
public Task ExecuteScrape()
{
    var context = CustomSynchronizationContext.GetSynchronizationContext();
    Scrape(BaseUrl).ContinueWith(x =>
    {
        Visitedlinks.Enqueue(BaseUrl);
    }, context).Wait();
    return null;
}
which in turn gets called like so:
static void Main(string[] args)
{
    RunScrapper();
    Console.ReadLine();
}

public static void RunScrapper()
{
    try
    {
        _scrapper.ExecuteScrape();
    }
    catch (Exception e)
    {
        Console.WriteLine(e);
        throw;
    }
}
My result: (screenshot of the console output not included)
How do I solve this?
(Is it ethical for me to answer a question about web page scraping?)
Don't call Scrape recursively. Place the list of urls you want to scrape in a ConcurrentQueue and begin processing that queue. As the process of scraping a page returns more urls, just add them into the same queue.
I wouldn't use just a string, either. I recommend creating a class like
public class UrlToScrape //because naming things is hard
{
    public string Url { get; set; }
    public int Depth { get; set; }
}
Regardless of how you execute this, it's recursive, so you have to somehow keep track of how many levels deep you are. A website could deliberately generate URLs that send you into infinite recursion. (If they did this then they don't want you scraping their site. Does anybody want people scraping their site?)
When your queue is empty, that doesn't mean you're done. The queue could be empty, but the process of scraping the last url dequeued could still add more items back into that queue, so you need a way to account for that.
You could use a thread safe counter (int using Interlocked.Increment/Decrement) that you increment when you start processing a url and decrement when you finish. You're done when the queue is empty and the count of in-process urls is zero.
This is a very rough model to illustrate the concept, not what I'd call a refined solution. For example, you still need to account for exception handling, and I have no idea where the results go, etc.
public class UrlScraper
{
    private readonly ConcurrentQueue<UrlToScrape> _queue = new ConcurrentQueue<UrlToScrape>();
    private int _inProcessUrlCounter;
    private readonly List<string> _processedUrls = new List<string>();

    public UrlScraper(IEnumerable<string> urls)
    {
        foreach (var url in urls)
        {
            _queue.Enqueue(new UrlToScrape { Url = url, Depth = 1 });
        }
    }

    public void ScrapeUrls()
    {
        while (_queue.TryDequeue(out var dequeuedUrl) || _inProcessUrlCounter > 0)
        {
            if (dequeuedUrl != null)
            {
                // Make sure you don't go more levels deep than you want to.
                if (dequeuedUrl.Depth > 5) continue;
                if (_processedUrls.Contains(dequeuedUrl.Url)) continue;
                _processedUrls.Add(dequeuedUrl.Url);
                Interlocked.Increment(ref _inProcessUrlCounter);
                var url = dequeuedUrl;
                Task.Run(() => ProcessUrl(url));
            }
        }
    }

    private void ProcessUrl(UrlToScrape url)
    {
        try
        {
            // As the process discovers more urls to scrape,
            // pretend that this is one of those new urls.
            var someNewUrl = "http://discovered";
            _queue.Enqueue(new UrlToScrape { Url = someNewUrl, Depth = url.Depth + 1 });
        }
        catch (Exception ex)
        {
            // whatever you want to do with this
        }
        finally
        {
            Interlocked.Decrement(ref _inProcessUrlCounter);
        }
    }
}
If I were doing this for real, the ProcessUrl method would be its own class, and it would take HTML, not a URL. In this form it's difficult to unit test. If it were in a separate class, you could pass in HTML, verify that it outputs results somewhere, and verify that it calls a method to enqueue the new URLs it finds.
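As a rough sketch of that separation (all names here are invented for illustration):

    // Sketch: a processor that takes HTML plus an "enqueue" callback, so a unit
    // test can feed it a canned page and assert which URLs it tries to enqueue.
    public interface IPageProcessor
    {
        void Process(string html, int currentDepth, Action<UrlToScrape> enqueue);
    }

    public class PageProcessor : IPageProcessor
    {
        public void Process(string html, int currentDepth, Action<UrlToScrape> enqueue)
        {
            // ...parse html, write results somewhere, then for each discovered link:
            // enqueue(new UrlToScrape { Url = discoveredUrl, Depth = currentDepth + 1 });
        }
    }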
It's also not a bad idea to maintain the queue as a database table instead. Otherwise, if you're processing a bunch of urls and you have to stop, you'd have to start all over again.
Can't you add all tasks Task d to some type of concurrent collection that you thread through all recursive calls (via a method argument) and then simply call Task.WhenAll(tasks).Wait()?
You'd need an intermediate method (it makes things cleaner) that launches the base Scrape call and passes in the empty task collection. When the base call returns, you have all the tasks in hand and simply wait them out.
public async Task Scrape(string url)
{
    var tasks = new ConcurrentQueue<Task>();

    // Call your implementation, but change it so that
    // every launched task d is added to tasks.
    Scrape(url, tasks);

    // 1st option: Wait().
    // This will block the caller until all tasks finish.
    Task.WhenAll(tasks).Wait();

    // Or 2nd option: await.
    // This won't block and will return to the caller.
    // Once all tasks are finished the method resumes at the WriteLine.
    await Task.WhenAll(tasks);

    Console.WriteLine("Finished!");
}
Simple rule: if you want to know when something finishes, the first step is to keep track of it. In your current implementation you are essentially firing and forgetting all launched tasks...
I am building an ASP.NET Web API service. One API call needs more than 2 minutes to retrieve the desired data, so I implemented a cache mechanism: for every request sent to the API server, the server returns the cached data and meanwhile starts a new thread to load fresh data into the cache. The issue is that if I submit a lot of requests, a lot of threads will be running and they eventually crash the server. I want a mechanism that allows only one such thread at any given time, but I know ASP.NET Web API is inherently multi-threaded. How do I tell the other requests to wait while one thread is already retrieving the new set of data?
[Dependency]
public ICacheManager<OrderArray> orderArrayCache { get; set; }

private ReadOrderService Service = new ReadOrderService();
private const string _ckey = "all";

public dynamic Get()
{
    try
    {
        OrderArray cache = orderArrayCache.Get(_ckey);
        if (cache == null || cache.orders.Length == 0)
        {
            OrderArray data = Service.GetAllOrders();
            orderArrayCache.Add(_ckey, data);
            return data;
        }
        else
        {
            Caching();
            return cache;
        }
    }
    catch (Exception error)
    {
        ErrorLog.WriteLog(Config._SystemName, this.GetType().Name, System.Reflection.MethodBase.GetCurrentMethod().Name, error.ToString());
        return 0;
    }
}

public void Caching()
{
    Thread worker = new Thread(() => CacheWorker());
    worker.Start();
}

public void CacheWorker()
{
    try
    {
        //ActivityLog.WriteLog(Config._SystemName, this.GetType().Name, System.Reflection.MethodBase.GetCurrentMethod().Name, "Cache Worker Is Starting to Work");
        OrderArray data = Service.GetAllOrders();
        orderArrayCache.Put(_ckey, data);
        //ActivityLog.WriteLog(Config._SystemName, this.GetType().Name, System.Reflection.MethodBase.GetCurrentMethod().Name, "Cache Worker Is Working Hard");
    }
    catch (Exception error)
    {
        //ActivityLog.WriteLog(Config._SystemName, this.GetType().Name, System.Reflection.MethodBase.GetCurrentMethod().Name, error.ToString());
    }
}
Without commenting on the overall architecture, it's as trivial as setting a flag that you're working, and not starting the thread if that flag is set.
Of course, in the ASP.NET MVC/Web API context, a controller instance is created for every request, so a simple field won't work. You could make it static, but that'll only work per AppDomain; one application can run in multiple AppDomains by using multiple worker processes.
You could solve that by using a mutex, but then your application could be in a server farm, introducing a whole shebang of new problems.
That being said, the naive, static approach:
private static bool _currentlyRetrievingCacheableData = false;

public void Caching()
{
    if (_currentlyRetrievingCacheableData)
    {
        return;
    }

    Thread worker = new Thread(() => CacheWorker());
    worker.Start();
}

public void CacheWorker()
{
    try
    {
        _currentlyRetrievingCacheableData = true;
        // ...
    }
    catch (Exception error)
    {
        // ...
    }
    finally
    {
        _currentlyRetrievingCacheableData = false;
    }
}
There's still a race issue here, but at most two threads can be accessing the CacheWorker() method. You can prevent that by using a lock statement.
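For illustration, one way to close even that window is an atomic test-and-set via Interlocked.CompareExchange instead of the plain bool (a sketch; note the flag handling moves out of CacheWorker and into the wrapper):

    // Sketch: atomically claim the "refreshing" slot; only the thread that
    // flips the flag from 0 to 1 starts the worker, everyone else backs off.
    private static int _refreshing; // 0 = idle, 1 = refresh in progress

    public void Caching()
    {
        if (Interlocked.CompareExchange(ref _refreshing, 1, 0) != 0)
        {
            return; // another thread is already refreshing
        }

        var worker = new Thread(() =>
        {
            try
            {
                CacheWorker(); // CacheWorker no longer touches the flag itself
            }
            finally
            {
                Interlocked.Exchange(ref _refreshing, 0); // always release the slot
            }
        });
        worker.Start();
    }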
Do note that all of these are workarounds for doing the obvious: let the cache-refreshing mechanism live outside your web application code, for example in a Windows Service or a scheduled task.
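As a sketch of that last suggestion, reusing the types from the question (the class shape and the five-minute interval are assumptions, not a prescription):

    // Sketch: refresh the cache on a timer in a Windows Service or scheduled
    // console app, so web requests only ever read from the cache.
    public class CacheRefresher
    {
        private readonly ICacheManager<OrderArray> _cache;
        private readonly ReadOrderService _service = new ReadOrderService();
        private readonly Timer _timer; // keep a reference so the timer isn't collected

        public CacheRefresher(ICacheManager<OrderArray> cache)
        {
            _cache = cache;
            _timer = new Timer(_ => Refresh(), null, TimeSpan.Zero, TimeSpan.FromMinutes(5));
        }

        private void Refresh()
        {
            try
            {
                _cache.Put("all", _service.GetAllOrders());
            }
            catch
            {
                // log and try again on the next tick
            }
        }
    }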
I am new to TAP and async/await in practice in C#, so I may have some bad code smells here; be gentle. :-)
I have a service method that looks as follows:
public OzCpAddOrUpdateEmailAddressToListOutput AddOrUpdateEmailAddressToList(
    OzCpAddOrUpdateEmailAddressToListInput aParams)
{
    var result = new OzCpAddOrUpdateEmailAddressToListOutput();
    try
    {
        var mailChimManager = new MailChimpManager(aParams.MailChimpApiKey);
        Task<Member> mailChimpResult =
            mailChimManager.Members.AddOrUpdateAsync(
                aParams.Listid,
                new Member
                {
                    EmailAddress = aParams.EmailAddress
                });

        //Poll the async task until it completes.
        //Give it at most 8 seconds to do what it needs to do.
        var outOfTime = DateTime.Now.AddSeconds(8);
        while (!mailChimpResult.IsCompleted)
        {
            if (DateTime.Now > outOfTime)
            {
                throw new Exception("Timed out waiting for MailChimp API.");
            }
        }

        //Should there have been a problem with the call then we raise an exception
        if (mailChimpResult.IsFaulted)
        {
            throw new Exception(
                mailChimpResult.Exception?.Message ??
                "Unknown mail chimp library error.",
                mailChimpResult.Exception);
        }
        else
        {
            //Call to the API returned without failing, but unless we have
            //the email address subscribed we have an issue
            if (mailChimpResult.Result.Status != Status.Subscribed)
            {
                throw new Exception(
                    $"There was a problem subscribing the email address {aParams.EmailAddress} to the mailchimp list id {aParams.Listid}");
            }
        }
    }
    catch (Exception ex)
    {
        result.ResultErrors.AddFatalError(PlatformErrors.UNKNOWN, ex.Message);
    }
    return result;
}
But when I call it from an MVC controller action, mailChimpResult.IsCompleted always returns false and eventually I hit the timeout.
I realise this is because I am not chaining the async calls (as per HttpClient IsComplete always return false) and that, because of the different threads involved, this behaviour is "expected".
However, I want my service method to hide the complexity of the async nature of what it is doing and merely make what appears to be a synchronous call in my action method, namely:
var mailChimpResult =
    _PlatformMailChimpService.AddOrUpdateEmailAddressToList(
        new OzCpAddOrUpdateEmailAddressToListInput
        {
            EmailAddress = aFormCollection["aEmailAddress"],
            Listid = ApplicationSettings.Newsletter.MailChimpListId.Value,
            MailChimpApiKey = ApplicationSettings.Newsletter.MailChimpApiKey.Value
        });

if (mailChimpResult.Result == true)
{
    //Do something
}
Ideally you should avoid the .Result and .IsFaulted properties of the Task and Task<T> objects; that was code smell number one. When you're using these objects you should use async and await through the entire stack. Consider your service written this way instead:
public async Task<OzCpAddOrUpdateEmailAddressToListOutput>
    AddOrUpdateEmailAddressToList(
        OzCpAddOrUpdateEmailAddressToListInput aParams)
{
    var result = new OzCpAddOrUpdateEmailAddressToListOutput();
    try
    {
        var mailChimManager = new MailChimpManager(aParams.MailChimpApiKey);
        Member mailChimpResult =
            await mailChimManager.Members.AddOrUpdateAsync(
                aParams.Listid,
                new Member
                {
                    EmailAddress = aParams.EmailAddress
                });
    }
    catch (Exception ex)
    {
        result.ResultErrors.AddFatalError(PlatformErrors.UNKNOWN, ex.Message);
    }
    return result;
}
Notice that I was able to remove all of that unnecessary polling and examining of properties. We mark the method as Task<OzCpAddOrUpdateEmailAddressToListOutput> returning and decorate it with the async keyword. This allows us to use the await keyword in the method body. We await the .AddOrUpdateAsync which yields the Member.
The consuming call into the service looks similar, following the same paradigm of async and await keywords with Task or Task<T> return types:
var mailChimpResult =
    await _PlatformMailChimpService.AddOrUpdateEmailAddressToList(
        new OzCpAddOrUpdateEmailAddressToListInput
        {
            EmailAddress = aFormCollection["aEmailAddress"],
            Listid = ApplicationSettings.Newsletter.MailChimpListId.Value,
            MailChimpApiKey = ApplicationSettings.Newsletter.MailChimpApiKey.Value
        });

if (mailChimpResult.Result == true)
{
    //Do something
}
It is considered best practice to suffix the method name with "Async" to signify that it is asynchronous, i.e. AddOrUpdateEmailAddressToListAsync.
I recently encountered this very strange problem.
Initially I had this block of code:
public async Task<string> Fetch(string module, string input)
{
    if (module != this._moduleName)
    {
        return null;
    }
    try
    {
        var db = new SQLiteAsyncConnection(_dbPath);
        ResponsePage storedResponse = new ResponsePage();
        Action<SQLiteConnection> trans = connect =>
        {
            storedResponse = connect.Get<ResponsePage>(input);
        };
        await db.RunInTransactionAsync(trans);
        string storedResponseString = storedResponse.Response;
        return storedResponseString;
    }
    catch (Exception e)
    {
        return null;
    }
}
However, control is never handed back to my code after the transaction finishes running. I traced the program, and it seems that after the lock is released, the flow of the program stops. Then I switched to using the GetAsync method of the SQLiteAsyncConnection class; basically it did the same thing, so I was still blocked at the await. Then I removed the async calls and used the synchronous API like below:
public async Task<string> Fetch(string module, string input)
{
    if (module != this._moduleName)
    {
        return null;
    }
    try
    {
        var db = new SQLiteConnection(_dbPath);
        ResponsePage storedResponse = new ResponsePage();
        lock (_dbLock)
        {
            storedResponse = db.Get<ResponsePage>(input);
        }
        string storedResponseString = storedResponse.Response;
        return storedResponseString;
    }
    catch (Exception e)
    {
        return null;
    }
}
Only then does control flow back to my code, and I can't figure out why.
Another question: for this kind of simple query, is there any gain in query time from using the async API instead of the sync API? If not, I'll stick with the sync version.
You are most likely calling Result (or Wait) further up the call stack from Fetch. This will cause a deadlock, as I explain on my blog and in a recent MSDN article.
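For illustration, the classic shape of that deadlock in a UI or classic ASP.NET synchronization context (a sketch, not your exact code):

    // Sketch: the blocking call holds the single synchronization-context thread,
    // while the await inside Fetch needs that same context to resume => deadlock.
    public string FetchBlocking()
    {
        return Fetch("module", "input").Result; // blocks the context thread
    }

    public async Task<string> Fetch(string module, string input)
    {
        await Task.Delay(100); // stand-in for the SQLite work
        // Resuming here requires the captured context, which .Result is holding.
        return "response";
    }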
For your second question, there is some overhead from async, so for extremely fast asynchronous operations, the synchronous version will be faster. There is no way to tell whether this is the case in your code unless you do profiling.