I am developing a UWP MVVM application which resolves an address from a location:
private async void ResolveAddress()
{
    // TODO: Manage cancel
    Debug.WriteLine("Resolving address ...");
    var result = await MapLocationFinder.FindLocationsAtAsync(SelectedLocation);
    if (result.Status == MapLocationFinderStatus.Success && result.Locations.Count > 0)
    {
        Address = result.Locations[0].Address.FormattedAddress;
        Debug.WriteLine("Address resolved: " + Address);
    }
    else
    {
        Debug.WriteLine("Resolve failed");
    }
}
This call can occur really often (based on the location selected by the user), so the method may not have finished running when another call is made.
// Binding property
public Geopoint SelectedLocation
{
    get { return _selectedLocation; }
    set
    {
        Debug.WriteLine("Location change");
        _selectedLocation = value;
        ResolveAddress();
        RaisePropertyChanged();
    }
}
The Address field is also a bound property.
I encounter two problems with this implementation:
I am not sure the Address field will contain the last selected location (call N-1 can finish after call N).
The Address field progressively shows every address resolved; I only want the last one.
I found a way to cancel an async task:
https://msdn.microsoft.com/fr-fr/library/jj155759.aspx
But MapLocationFinder.FindLocationsAtAsync does not take a cancellation token as a parameter.
What is the best way to accomplish this?
Thanks.
I am not sure the Address field will contain the last selected location (call N-1 can finish after call N).
You can solve this with an asynchronous callback token. I describe it in more detail in a very old blog post where I call them "asynchronous callback contexts". Given how overloaded the term "context" is today, I now prefer the term "token".
private object _addressCallbackToken;

private async void ResolveAddress()
{
    Debug.WriteLine("Resolving address ...");
    var token = _addressCallbackToken = new object();
    var result = await MapLocationFinder.FindLocationsAtAsync(SelectedLocation);

    // Another resolution started while we were awaiting; discard this result.
    if (token != _addressCallbackToken)
        return;

    if (result.Status == MapLocationFinderStatus.Success && result.Locations.Count > 0)
    {
        Address = result.Locations[0].Address.FormattedAddress;
        Debug.WriteLine("Address resolved: " + Address);
    }
    else
    {
        Debug.WriteLine("Resolve failed");
    }
}
However, I strongly recommend not using async void in this fashion. You may find my article on async MVVM data binding useful.
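As a minimal sketch of one alternative (the Task-holding field below is only illustrative, not the article's exact pattern): expose the work as a Task-returning method and keep the returned Task somewhere its exceptions can be observed, for example by the kind of data-binding-friendly wrapper the article describes.

private Task _resolveAddressTask;

private async Task ResolveAddressAsync()
{
    var token = _addressCallbackToken = new object();
    var result = await MapLocationFinder.FindLocationsAtAsync(SelectedLocation);
    if (token != _addressCallbackToken)
        return; // superseded by a newer request
    if (result.Status == MapLocationFinderStatus.Success && result.Locations.Count > 0)
        Address = result.Locations[0].Address.FormattedAddress;
}

public Geopoint SelectedLocation
{
    get { return _selectedLocation; }
    set
    {
        _selectedLocation = value;
        // Keep the Task instead of using async void, so failures are not silently lost.
        _resolveAddressTask = ResolveAddressAsync();
        RaisePropertyChanged();
    }
}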
I have created a chatbot using the Web Chat channel and Direct Line.
When I tested it in the Bot Framework Emulator I got the right response, but when I test the same intent on localhost (Web Chat) I get a different response.
I will show you an example:
call an agent
give me your customer number
(after the customer number is sent) are you sure?
If you click Yes, the data is stored in the database (SQL Server).
If you do the same on localhost you get: "You cancelled the form" (in fact I haven't cancelled any form).
Here is the LuisDialog where I call the form:
[LuisIntent("human")]
public async Task human(IDialogContext context, LuisResult result)
{
    var form = new FormDialog<Human>(
        new Human(),
        Human.BuildForm,
        FormOptions.PromptInStart,
        result.Entities);
    context.Call<Human>(form, LossFormCompleted);
}
private async Task LossFormCompleted(IDialogContext context, IAwaitable<Human> result)
{
    Human form = null;
    try
    {
        form = await result;
    }
    catch (OperationCanceledException)
    {
    }

    if (form == null)
    {
        await context.PostAsync("You cancelled the form.");
    }
    else
    {
        // Call the LossForm service to complete the form fill.
        var message = $"Your data are stored in database";
        await context.PostAsync(message);
    }

    context.Wait(this.MessageReceived);
}
The form model is:
[Serializable]
public class Human
{
    [Prompt("What is your contract number?")]
    public string contract;

    public static IForm<Human> BuildForm()
    {
        OnCompletionAsyncDelegate<Human> wrapUpRequest = async (context, state) =>
        {
            using (BotModelDataContext BotDb = new BotModelDataContext())
            {
                tblBot bot = BotDb.tblBots.SingleOrDefault(q => q.Reference == state.contract);
                if (bot != null)
                {
                    using (bbbserviceSoapClient cws = new bbbserviceSoapClient())
                    {
                        viewc a = new viewc();
                        a.Lastname = bot.Lastname;
                    }
                }
            }
        };

        return new FormBuilder<Human>()
            .Message("can you send us some info ?")
            .Field(nameof(contract))
            .OnCompletion(wrapUpRequest)
            .Confirm("Are you sure: Yes or No. ")
            .Build();
    }
}
Can someone help me figure out where I'm wrong? What can I do to get the same response? Is it a timeout problem, or what do you think?
I did a test based on the code that you provided, with slight modifications, and I found that if an exception occurs in the wrapUpRequest method, it shows "You cancelled the form" instead of the message "Your data are stored in database".
So I suspect that an exception occurring in the wrapUpRequest method (perhaps a database query issue, or the request sent by bbbserviceSoapClient timing out, etc.) when you test via Web Chat is what causes the issue.
To troubleshoot the issue, you can implement/write custom logging to detect whether any exception occurs within the wrapUpRequest method when you test via Web Chat.
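For example, a minimal sketch of what I mean (the Trace call is only an illustration; substitute whatever logging you already have):

OnCompletionAsyncDelegate<Human> wrapUpRequest = async (context, state) =>
{
    try
    {
        using (BotModelDataContext BotDb = new BotModelDataContext())
        {
            tblBot bot = BotDb.tblBots.SingleOrDefault(q => q.Reference == state.contract);
            // ... your existing completion logic ...
        }
    }
    catch (Exception ex)
    {
        // Log the real cause instead of letting the form silently end as "cancelled".
        System.Diagnostics.Trace.TraceError("wrapUpRequest failed: " + ex);
        throw;
    }
};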
I have been working on a web scraping project.
I am having two issues: one is presenting the number of URLs processed as a percentage, but the far larger issue is that I cannot figure out how to know when all the threads I am creating are totally finished.
NOTE: I am aware that a parallel foreach moves on once it is done, BUT this is within a recursive method.
My code below:
public async Task Scrape(string url)
{
var page = string.Empty;
try
{
page = await _service.Get(url);
if (page != string.Empty)
{
if (regex.IsMatch(page))
{
Parallel.For(0, regex.Matches(page).Count,
index =>
{
try
{
if (regex.Matches(page)[index].Groups[1].Value.StartsWith("/"))
{
var match = regex.Matches(page)[index].Groups[1].Value.ToLower();
if (!links.Contains(BaseUrl + match) && !Visitedlinks.Contains(BaseUrl + match))
{
Uri ValidUri = WebPageValidator.GetUrl(match);
if (ValidUri != null && HostUrls.Contains(ValidUri.Host))
links.Enqueue(match.Replace(".html", ""));
else
links.Enqueue(BaseUrl + match.Replace(".html", ""));
}
}
}
catch (Exception e)
{
log.Error("Error occured: " + e.Message);
Console.WriteLine("Error occured, check log for further details."); ;
}
});
WebPageInternalHandler.SavePage(page, url);
var context = CustomSynchronizationContext.GetSynchronizationContext();
Parallel.ForEach(links, new ParallelOptions { MaxDegreeOfParallelism = 25 },
webpage =>
{
try
{
if (WebPageValidator.ValidUrl(webpage))
{
string linkToProcess = webpage;
if (links.TryDequeue(out linkToProcess) && !Visitedlinks.Contains(linkToProcess))
{
ShowPercentProgress();
Thread.Sleep(15);
Visitedlinks.Enqueue(linkToProcess);
Task d = Scrape(linkToProcess);
Console.Clear();
}
}
}
catch (Exception e)
{
log.Error("Error occured: " + e.Message);
Console.WriteLine("Error occured, check log for further details.");
}
});
Console.WriteLine("parallel finished");
}
}
catch (Exception e)
{
log.Error("Error occured: " + e.Message);
Console.WriteLine("Error occured, check log for further details.");
}
}
Note that Scrape gets called multiple times (it is recursive).
I call the method like this:
public Task ExecuteScrape()
{
var context = CustomSynchronizationContext.GetSynchronizationContext();
Scrape(BaseUrl).ContinueWith(x => {
Visitedlinks.Enqueue(BaseUrl);
}, context).Wait();
return null;
}
which in turn gets called like so:
static void Main(string[] args)
{
RunScrapper();
Console.ReadLine();
}
public static void RunScrapper()
{
try
{
_scrapper.ExecuteScrape();
}
catch (Exception e)
{
Console.WriteLine(e);
throw;
}
}
my result:
How do I solve this?
(Is it ethical for me to answer a question about web page scraping?)
Don't call Scrape recursively. Place the list of urls you want to scrape in a ConcurrentQueue and begin processing that queue. As the process of scraping a page returns more urls, just add them into the same queue.
I wouldn't use just a string, either. I recommend creating a class like
public class UrlToScrape //because naming things is hard
{
public string Url { get; set; }
public int Depth { get; set; }
}
Regardless of how you execute this it's recursive, so you have to somehow keep track of how many levels deep you are. A website could deliberately generate URLs that send you into infinite recursion. (If they did this then they don't want you scraping their site. Does anybody want people scraping their site?)
When your queue is empty that doesn't mean you're done. The queue could be empty, but the process of scraping the last url dequeued could still add more items back into that queue, so you need a way to account for that.
You could use a thread safe counter (int using Interlocked.Increment/Decrement) that you increment when you start processing a url and decrement when you finish. You're done when the queue is empty and the count of in-process urls is zero.
This is a very rough model to illustrate the concept, not what I'd call a refined solution. For example, you still need to account for exception handling, and I have no idea where the results go, etc.
public class UrlScraper
{
private readonly ConcurrentQueue<UrlToScrape> _queue = new ConcurrentQueue<UrlToScrape>();
private int _inProcessUrlCounter;
private readonly List<string> _processedUrls = new List<string>();
public UrlScraper(IEnumerable<string> urls)
{
foreach (var url in urls)
{
_queue.Enqueue(new UrlToScrape {Url = url, Depth = 1});
}
}
public void ScrapeUrls()
{
while (_queue.TryDequeue(out var dequeuedUrl) || _inProcessUrlCounter > 0)
{
if (dequeuedUrl != null)
{
// Make sure you don't go more levels deep than you want to.
if (dequeuedUrl.Depth > 5) continue;
if (_processedUrls.Contains(dequeuedUrl.Url)) continue;
_processedUrls.Add(dequeuedUrl.Url);
Interlocked.Increment(ref _inProcessUrlCounter);
var url = dequeuedUrl;
Task.Run(() => ProcessUrl(url));
}
}
}
private void ProcessUrl(UrlToScrape url)
{
try
{
// As the process discovers more urls to scrape,
// pretend that this is one of those new urls.
var someNewUrl = "http://discovered";
_queue.Enqueue(new UrlToScrape { Url = someNewUrl, Depth = url.Depth + 1 });
}
catch (Exception ex)
{
// whatever you want to do with this
}
finally
{
Interlocked.Decrement(ref _inProcessUrlCounter);
}
}
}
If I was doing this for real the ProcessUrl method would be its own class, and it would take HTML, not a URL. In this form it's difficult to unit test. If it were in a separate class then you could pass in HTML, verify that it outputs results somewhere, and that it calls a method to enqueue new URLs it finds.
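As a rough sketch of that separation (the interface and class names below are invented for the example), the page-processing piece could look like this, so a unit test can feed it canned HTML and a fake queue and assert on what gets enqueued:

// Sketch: page processing isolated from networking so it can be tested with raw HTML.
public interface IUrlQueue
{
    void Enqueue(UrlToScrape url);
}

public class PageProcessor
{
    private readonly IUrlQueue _queue;

    public PageProcessor(IUrlQueue queue)
    {
        _queue = queue;
    }

    // Takes HTML (not a URL) plus the depth of the page it came from.
    public void Process(string html, int currentDepth)
    {
        // ... parse the HTML and write the results wherever they belong ...
        // For every link discovered, hand it back to the queue:
        // _queue.Enqueue(new UrlToScrape { Url = discoveredUrl, Depth = currentDepth + 1 });
    }
}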
It's also not a bad idea to maintain the queue as a database table instead. Otherwise, if you're processing a bunch of urls and you have to stop, you'd have to start all over again.
Can't you add all the tasks (Task d) to some type of concurrent collection that you thread through all the recursive calls (via a method argument) and then simply call Task.WhenAll(tasks).Wait()?
You'd need an intermediate method (makes it cleaner) that launches the base Scrape call and passes in the empty task collection. When the base call returns you have in hand all tasks and you simply wait them out.
public async Task Scrape(string url)
{
    var tasks = new ConcurrentQueue<Task>();

    // Call your implementation, but change it so that it adds
    // every launched task d to 'tasks'.
    Scrape(url, tasks);

    // 1st option: Wait(). This will block the caller until all tasks finish.
    Task.WhenAll(tasks).Wait();

    // Or 2nd option: await. This won't block and will return to the caller.
    // Once all tasks are finished the method will resume at the WriteLine.
    await Task.WhenAll(tasks);

    Console.WriteLine("Finished!");
}
Simple rule: if you want to know when something finishes, the first step is to keep track of it. In your current implementation you are essentially firing and forgetting all launched tasks...
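As a rough sketch of what that bookkeeping looks like (assuming your existing fields such as _service; only the task collection is new), the recursive overload records every task it launches:

// Sketch only: thread the collection through the recursion so no task is lost.
private async Task Scrape(string url, ConcurrentQueue<Task> tasks)
{
    var page = await _service.Get(url);   // your existing download call

    // ... your existing parsing and Parallel.ForEach stay the same, except that
    // wherever you currently write "Task d = Scrape(linkToProcess);" you write:
    //     tasks.Enqueue(Scrape(linkToProcess, tasks));
    // so the launched task is registered before the loop iteration moves on.
}

One caveat: Task.WhenAll copies the collection when it is called, so tasks enqueued after that point are not awaited; in practice you may need to keep draining the queue (TryDequeue and await in a loop) until it stays empty.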
I have an Excel add-in written in C# on .NET 4.5. It sends many web service requests to a web server to get data, e.g. 30,000 requests. When the data for a request comes back, the add-in plots the data in Excel.
Originally I sent all the requests asynchronously, but sometimes I would get an OutOfMemoryException.
So I changed it to send the requests one by one, but that is too slow and takes a long time to finish all the requests.
I wonder if there is a way I can send 100 requests at a time asynchronously, and once the data for all 100 requests has come back and been plotted in Excel, send the next 100 requests.
Thanks
Edit
In my add-in there is a ribbon button "Refresh"; when it is clicked, the refresh process starts.
On the main UI thread, when the ribbon button is clicked, it calls the web service BuildMetaData;
once that returns, its callback MetaDataCompleteCallback sends another web service call.
Once that returns, its callback DataRequestJobFinished calls Plot to plot the data in Excel. See below.
RefreshBtn_Click()
{
if (cells == null) return;
Range firstOccurence = null;
firstOccurence = cells.Find(functionPattern, null,
null, null,
XlSearchOrder.xlByRows,
XlSearchDirection.xlNext,
null, null, null);
DataRequest request = null;
_reportObj = null;
Range currentOccurence = null;
while (!Helper.RefreshCancelled)
{
if(firstOccurence == null ||IsRangeEqual(firstOccurence, currentOccurence)) break;
found = true;
currentOccurence = cells.FindNext(currentOccurence ?? firstOccurence);
try
{
var excelFormulaCell = new ExcelFormulaCell(currentOccurence);
if (excelFormulaCell.HasValidFormulaCell)
{
request = new DataRequest(_unityContainer, XLApp, excelFormulaCell);
request.IsRefreshClicked = true;
request.Workbook = Workbook;
request.Worksheets = Worksheets;
_reportObj = new ReportBuilder(_unityContainer, XLApp, request, index, false);
_reportObj.ParseParameters();
_reportObj.GenerateReport();
//this is necessary b/c error message is wrapped in valid object DataResponse
//if (!string.IsNullOrEmpty(_reportObj.ErrorMessage)) //Clear previous error message
{
ErrorMessage = _reportObj.ErrorMessage;
Errors.Add(ErrorMessage);
AddCommentToCell(_reportObj);
Errors.Remove(ErrorMessage);
}
}
}
catch (Exception ex)
{
ErrorMessage = ex.Message;
Errors.Add(ErrorMessage);
_reportObj.ErrorMessage = ErrorMessage;
AddCommentToCell(_reportObj);
Errors.Remove(ErrorMessage);
Helper.LogError(ex);
}
}
}
On the class that generates the report:
public void GenerateReport()
{
Request.ParseFunction();
Request.MetacompleteCallBack = MetaDataCompleteCallback;
Request.BuildMetaData();
}
public void MetaDataCompleteCallback(int id)
{
try
{
if (Request.IsRequestCancelled)
{
Request.FormulaCell.Dispose();
return;
}
ErrorMessage = Request.ErrorMessage;
if (string.IsNullOrEmpty(Request.ErrorMessage))
{
_queryJob = new DataQueryJob(UnityContainer, Request.BuildQueryString(), DataRequestJobFinished, Request);
}
else
{
ModifyCommentOnFormulaCellPublishRefreshEvent();
}
}
catch (Exception ex)
{
ErrorMessage = ex.Message;
ModifyCommentOnFormulaCellPublishRefreshEvent();
}
finally
{
Request.MetacompleteCallBack = null;
}
}
public void DataRequestJobFinished(DataRequestResponse response)
{
Dispatcher.Invoke(new Action<DataRequestResponse>(DataRequestJobFinishedUI), response);
}
public void DataRequestJobFinishedUI(DataRequestResponse response)
{
try
{
if (Request.IsRequestCancelled)
{
return;
}
if (response.status != Status.COMPLETE)
{
ErrorMessage = ManipulateStatusMsg(response);
}
else // COMPLETE
{
var tmpReq = Request as DataRequest;
if (tmpReq == null) return;
new VerticalTemplate(tmpReq, response).Plot();
}
}
catch (Exception e)
{
ErrorMessage = e.Message;
Helper.LogError(e);
}
finally
{
//if (token != null)
// this.UnityContainer.Resolve<IEventAggregator>().GetEvent<DataQueryJobComplete>().Unsubscribe(token);
ModifyCommentOnFormulaCellPublishRefreshEvent();
Request.FormulaCell.Dispose();
}
}
On the plot class:
public void Plot()
{
...
attributeRange.Value2 = headerArray;
DataRange.Value2 = ....
DataRange.NumberFormat = ...
}
OutOfMemoryException is not about too many requests being sent simultaneously. It is about freeing your resources the right way. In my practice there are two main problems when you get such an exception:
Working incorrectly with immutable structures or the System.String class
Not disposing your disposable resources, especially graphics objects and WCF requests.
In the case of reporting, in my opinion, you have the second type of problem. DataRequest and DataRequestResponse are good places to start the investigation for such objects.
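For example, a sketch only (I don't know the internals of DataRequest/DataRequestResponse, so the wrapper below is hypothetical): make whatever holds the response buffers or service clients disposable, and dispose it as soon as the data has been plotted.

// Hypothetical wrapper; the point is deterministic disposal, not the exact shape.
public sealed class DataRequestResponseWrapper : IDisposable
{
    private System.IO.MemoryStream _payload = new System.IO.MemoryStream();

    public void Dispose()
    {
        if (_payload != null)
        {
            _payload.Dispose();
            _payload = null;
        }
    }
}

// Usage (inside whatever method does the plotting):
// using (var response = new DataRequestResponseWrapper())
// {
//     ... plot the data; the buffers are released when the using block ends ...
// }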
If this doesn't help, try using the Task library with the async/await pattern; you can find good examples here:
// Signature specifies Task<TResult>
async Task<int> TaskOfTResult_MethodAsync()
{
int hours;
// . . .
// Return statement specifies an integer result.
return hours;
}
// Calls to TaskOfTResult_MethodAsync
Task<int> returnedTaskTResult = TaskOfTResult_MethodAsync();
int intResult = await returnedTaskTResult;
// or, in a single statement
int intResult = await TaskOfTResult_MethodAsync();
// Signature specifies Task
async Task Task_MethodAsync()
{
// . . .
// The method has no return statement.
}
// Calls to Task_MethodAsync
Task returnedTask = Task_MethodAsync();
await returnedTask;
// or, in a single statement
await Task_MethodAsync();
In your code I see a while loop in which you can collect a Task[] of size 100 and call the WaitAll method on it, and the problem should be solved. Sorry, but your code is large enough that I can't give you an example tied directly to it.
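In general terms, though, the batching shape is something like this rough sketch (formulaCells and SendRequestAsync are placeholders for your own loop and web service call):

// Sketch: fire 100 requests, wait for the whole batch, then move on to the next 100.
var batch = new List<Task>(100);
foreach (var cell in formulaCells)            // stands in for your while/Find loop
{
    batch.Add(SendRequestAsync(cell));        // hypothetical async request per cell
    if (batch.Count == 100)
    {
        Task.WaitAll(batch.ToArray());        // block until this batch has finished
        batch.Clear();
    }
}
if (batch.Count > 0)
    Task.WaitAll(batch.ToArray());            // flush the final partial batch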
I'm having a lot of trouble parsing your code to figure out what is being iterated for your requests, but the basic template for batching asynchronously is going to be something like this:
const int batchSize = 100;

public async Task<IEnumerable<Results>> GetDataInBatches(IEnumerable<RequestParameters> parameters) {
    if (!parameters.Any())
        return Enumerable.Empty<Results>();
    var batchResults = await Task.WhenAll(parameters.Take(batchSize).Select(doQuery));
    return batchResults.Concat(await GetDataInBatches(parameters.Skip(batchSize)));
}
where doQuery is something with the signature
async Task<Results> doQuery(RequestParameters parameters) {
    //.. however you do the query
}
I wouldn't use this for a million requests since it's recursive, but your case would only generate a call stack about 300 deep, so you'll be fine.
Note that this also assumes that your data request stuff is done asynchronously and returns a Task. Most libraries have been updated to do this (look for methods with the Async suffix). If it doesn't expose that api you might want to create a separate question for how to specifically get your library to play nice with the TPL.
I'm having a problem with my Windows Phone 8 weather app's background agent.
Whenever the background agent is run, a new http weather request is made when certain conditions (that are not relevant to the problem I'm having) are met. When these conditions are unmet, cached weather data is used instead.
Furthermore, if you have set your live tile's location to your "Current Location", the background agent will use reverse geocoding to determine the name of the area for the location you're currently at. This is done whether new or cached data is used i.e. every time my app's background agent is run.
The problem I'm having is that whenever cached data is used, the live tile does not update. But it doesn't appear to be caused by an exception, because the app's background agent never gets blocked, even after the live tile has failed to update more than two times in a row.
This is the relevant excerpt from the background agent's view model's "public async Task getWeatherForTileLocation()" method that's called from the scheduled agent:
Scheduled agent excerpt:
protected async override void OnInvoke(ScheduledTask task)
{
LiveTileViewModel viewModel = new LiveTileViewModel();
await viewModel.getWeatherForTileLocation();
// Etc.
}
getWeatherForTileLocation() excerpt:
// If the default location is 'Current Location', then update its coordinates.
if ((int)IsolatedStorageSettings.ApplicationSettings["LocationDefaultId"] == 1)
{
try
{
// Get new coordinates for current location.
await this.setCoordinates();
}
catch (Exception e)
{
}
}
// Depending on the time now, since last update (and many other factors),
// must decide whether to use cached data or fresh data
if (this.useCachedData(timeNow, timeLastUpdated))
{
this.ExtractCachedData(); // This method works absolutely fine, trust me. But the live tile never updates when it's run outside debugging.
// Not because of what it does, but because of how fast it executes.
}
else
{
// a httpClient.GetAsync() call is made here that also works fine.
}
The setCoordinates method, as well as the reverse-geocoding methods that are called from it:
public async Task<string> setCoordinates()
{
// Need to initialise the tracking mechanism.
Geolocator geolocator = new Geolocator();
// Location services are off.
// Get out - don't do anything.
if (geolocator.LocationStatus == PositionStatus.Disabled)
{
return "gps off";
}
// Location services are on.
// Proceed with obtaining longitude + latitude.
else
{
// Setup the desired accuracy in meters for data returned from the location service.
geolocator.DesiredAccuracyInMeters = 50;
try
{
// Taken from: http://bernhardelbl.wordpress.com/2013/11/26/geolocator-getgeopositionasync-with-correct-timeout/
// Because sometimes GetGeopositionAsync does not return. So you need to add a timeout procedure by your self.
// get the async task
var asyncResult = geolocator.GetGeopositionAsync();
var task = asyncResult.AsTask();
// add a race condition - task vs timeout task
var readyTask = await Task.WhenAny(task, Task.Delay(10000));
if (readyTask != task) // timeout wins
{
return "error";
}
// position found within timeout
Geoposition geoposition = await task;
// Retrieve latitude and longitude.
this._currentLocationLatitude = Convert.ToDouble(geoposition.Coordinate.Latitude.ToString("0.0000000000000"));
this._currentLocationLongitude = Convert.ToDouble(geoposition.Coordinate.Longitude.ToString("0.0000000000000"));
// Reverse geocoding to get your current location's name.
Deployment.Current.Dispatcher.BeginInvoke(() =>
{
this.setCurrentLocationName();
});
return "success";
}
// If there's an error, may be because the ID_CAP_LOCATION in the app manifest wasn't include.
// Alternatively, may be because the user hasn't turned on the Location Services.
catch (Exception ex)
{
if ((uint)ex.HResult == 0x80004004)
{
return "gps off";
}
else
{
// Something else happened during the acquisition of the location.
// Return generic error message.
return "error";
}
}
}
}
/**
* Gets the name of the current location through reverse geocoding.
**/
public void setCurrentLocationName()
{
// Must perform reverse geocoding i.e. get location from latitude/longitude.
ReverseGeocodeQuery query = new ReverseGeocodeQuery()
{
GeoCoordinate = new GeoCoordinate(this._currentLocationLatitude, this._currentLocationLongitude)
};
query.QueryCompleted += query_QueryCompleted;
query.QueryAsync();
}
/**
* Event called when the reverse geocode call returns a location result.
**/
void query_QueryCompleted(object sender, QueryCompletedEventArgs<IList<MapLocation>> e)
{
foreach (var item in e.Result)
{
if (!item.Information.Address.District.Equals(""))
this._currentLocation = item.Information.Address.District;
else
this._currentLocation = item.Information.Address.City;
try
{
IsolatedStorageSettings.ApplicationSettings["LiveTileLocation"] = this._currentLocation;
IsolatedStorageSettings.ApplicationSettings.Save();
break;
}
catch (Exception ee)
{
//Console.WriteLine(ee);
}
}
}
I've debugged the code many times, and have found no problems when I have. The http request when called is good, cached data extraction is good, reverse geocoding does always return a location (eventually).
But I did notice that when I'm using cached data, the name of the current location is retrieved AFTER the scheduled task has created the updated live tile but before the scheduled task has finished.
That is, the name of the location is retrieved after this code in the scheduled agent is run:
extendedData.WideVisualElement = new LiveTileWideFront_Alternative()
{
Icon = viewModel.Location.Hourly.Data[0].Icon,
Temperature = viewModel.Location.Hourly.Data[0].Temperature,
Time = viewModel.Location.Hourly.Data[0].TimeFull.ToUpper(),
Summary = viewModel.Location.Hourly.Data[0].Summary + ". Feels like " + viewModel.Location.Hourly.Data[0].ApparentTemperature + ".",
Location = IsolatedStorageSettings.ApplicationSettings["LiveTileLocation"].ToString().ToUpper(),
PrecipProbability = viewModel.Location.Hourly.Data[0].PrecipProbabilityInt
};
But before:
foreach (ShellTile tile in ShellTile.ActiveTiles)
{
LiveTileHelper.UpdateTile(tile, extendedData);
break;
}
NotifyComplete();
Obviously due to memory constraints I can't create an updated visual element at this point.
For comparison, when I'm not using cached data, the reverse geocoding query always manages to return a location before the http request code has finished.
So as the view model's getWeatherForTileLocation() method is using "await" in the scheduled agent, I decided to make sure that the method doesn't return anything until the current location's name has been retrieved. I added a simple while loop to the method's footer that would only terminate after the _currentLocation field has received a value i.e. the reverse geocoding has completed:
// Keep looping until the reverse geocoding has given your current location a name.
while( this._currentLocation == null )
{
}
// You can exit the method now, as you can create an updated live tile with your current location's name now.
return true;
When I debugged, I think this loop ran around 3 million iterations (a very big number anyway). But this hack (I don't know how else to describe it) seemed to work when I'm debugging. That is, when the target of my build was my Lumia 1020, and when I created a live tile fresh from it, which calls:
ScheduledActionService.Add(periodicTask);
ScheduledActionService.LaunchForTest(periodicTaskName, TimeSpan.FromSeconds(1));
To ensure I don't have to wait for the first scheduled task. When I debugged this first scheduled task, everything works fine: 1) a reverse geocoding request is made, 2) cached data extracted correctly, 3) hacky while loop keeps iterating, 4) stops when the reverse geocoding has returned a location name, 5) tile gets updated successfully.
But subsequent background agent calls that do use cached data don't appear to update the tile. It's only when non-cached data is used that the live tile updates. I should remind you at this point the reverse geocoding query always manages to return a location before the http request code has finished i.e. the hacky loop iterates only once.
Any ideas on what I need to do in order to ensure that the live tile updates correctly when cached data is used (read: when the processing of data, after the reverse geocoding query is made, is much faster than a http request)? Also, is there a more elegant way to stop the getWeatherForTileLocation() from exiting than my while loop? I'm sure there is!
Sorry for the long post but wanted to be as thorough as possible!
This has been giving me sleepless nights (literally) for the last 72 hours, so your help and guidance would be most appreciated.
Many thanks.
Bardi
You have done a great job of providing lots of detail, but it is very disconnected, so it is a little hard to follow. I think the root of your problem is the following:
// Reverse geocoding to get your current location's name.
Deployment.Current.Dispatcher.BeginInvoke(() =>
{
this.setCurrentLocationName();
});
You are attempting to get the location name, but your setCoordinates method will have already completed by the time the setCurrentLocationName method gets around to executing.
Now, because you need to be on the UI thread to do any tile updating anyway, I would suggest just dispatching from the beginning:
protected async override void OnInvoke(ScheduledTask task)
{
    Deployment.Current.Dispatcher.BeginInvoke(async () =>
    {
        LiveTileViewModel viewModel = new LiveTileViewModel();
        await viewModel.getWeatherForTileLocation();
    });
}
This would remove the need to do any other dispatching in the future.
Two more things:
Generally weather data includes the name of the location you are getting data for. If this is the case, just use that data rather than doing the reverse geocode. This will save you some memory and save time.
If you do need to get the location, I might suggest pulling out a "LocationService" that can get the data for you. In this class you can use a TaskCompletionSource to await the event rather than having code follow many different paths.
public class LocationService
{
public static Task<Location> ReverseGeocode(double lat, double lon)
{
TaskCompletionSource<Location> completionSource = new TaskCompletionSource<Location>();
var geocodeQuery = new ReverseGeocodeQuery();
geocodeQuery.GeoCoordinate = new GeoCoordinate(lat, lon);
EventHandler<QueryCompletedEventArgs<IList<MapLocation>>> query = null;
query = (sender, args) =>
{
geocodeQuery.QueryCompleted -= query;
MapLocation mapLocation = args.Result.FirstOrDefault();
var location = Location.FromMapLocation(mapLocation);
completionSource.SetResult(location);
};
geocodeQuery.QueryCompleted += query;
geocodeQuery.QueryAsync();
return completionSource.Task;
}
}
Using a TaskCompletionSource allows you to await the method rather than using the event.
var location = await LocationService.ReverseGeocode(lat, lon);
This example uses another Location class that I created to just hold things like City and State.
The key thing with background agents is to ensure that code always flows "synchronously". This doesn't mean code cannot be asynchronous, but does mean that code needs to be called one after the other. So if you have something that has events, you could continue all other code after the event.
Hope that helps!
I don't see your deferral call. When you use async you have to tell the task that you're deferring completion till later. Can't remember the method off the top of my head but it's either on the base class of your background task or on the parameter you get.
The reason it probably works with cache data is that it probably isn't actually an async operation.
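For a Windows Phone ScheduledTaskAgent the completion call is NotifyComplete() on the agent base class; the key point is to call it only after every awaited operation has finished. A rough sketch:

protected async override void OnInvoke(ScheduledTask task)
{
    LiveTileViewModel viewModel = new LiveTileViewModel();
    await viewModel.getWeatherForTileLocation();   // make sure all async work is awaited first

    // ... build and push the updated tile here ...

    NotifyComplete();                              // signal completion only at the very end
}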
I think this is sorted now! Thanks so much Shawn for the help. The setLocationName() method call is now awaited, and it looks like this now:
public Task<string> setLocationName()
{
var reverseGeocode = new ReverseGeocodeQuery();
reverseGeocode.GeoCoordinate = new System.Device.Location.GeoCoordinate(this._currentLocationLatitude, this._currentLocationLongitude );
var tcs = new TaskCompletionSource<string>();
EventHandler<QueryCompletedEventArgs<System.Collections.Generic.IList<MapLocation>>> handler = null;
handler = (sender, args) =>
{
MapLocation mapLocation = args.Result.FirstOrDefault();
string l;
if (!mapLocation.Information.Address.District.Equals(""))
l = mapLocation.Information.Address.District;
else
l = mapLocation.Information.Address.City;
try
{
System.DateTime t = System.DateTime.UtcNow.AddHours(1.0);
if (t.Minute < 10)
IsolatedStorageSettings.ApplicationSettings["LiveTileLocation"] = l + " " + t.Hour + ":0" + t.Minute;
else
IsolatedStorageSettings.ApplicationSettings["LiveTileLocation"] = l + " " + t.Hour + ":" + t.Minute;
IsolatedStorageSettings.ApplicationSettings.Save();
this._currentLocationName = IsolatedStorageSettings.ApplicationSettings["LiveTileLocation"].ToString();
}
catch (Exception ee)
{
//Console.WriteLine(ee);
}
reverseGeocode.QueryCompleted -= handler;
tcs.SetResult(l);
};
reverseGeocode.QueryCompleted += handler;
reverseGeocode.QueryAsync();
return tcs.Task;
}
I'm trying to test the accuracy of Apple's forward geocoding service to cover the rare case when the latitude/longitude we have on our data model is missing or invalid, but we still know the physical address. Monotouch provides CLGeocoder.GeocodeAddress(String, CLGeocodeCompletionHandler) and GeocodeAddressAsync(String) but when I call them the completionHandler is never called and the async method never returns.
Nothing in my application log indicates a problem and enclosing the call in a try-catch block didn't turn up any exceptions. Maps integration is enabled in the project options. Short of capturing network traffic (which is probably SSL anyway) I'm out of ideas.
Here's the code which loads placemarks and tries to geocode addresses:
private void ReloadPlacemarks()
{
// list to hold any placemarks which come back with empty/invalid coordinates
List<ServiceCallWrapper> geoList = new List<ServiceCallWrapper> ();
mapView.ClearPlacemarks ();
List<MKPlacemark> placemarks = new List<MKPlacemark>();
if (serviceCallViewModel.ActiveServiceCall != null) {
var serviceCall = serviceCallViewModel.ActiveServiceCall;
if (serviceCall.dblLatitude != 0 && serviceCall.dblLongitude != 0) {
placemarks.Add (serviceCall.ToPlacemark ());
} else {
// add it to the geocode list
geoList.Add (serviceCall);
}
}
foreach (var serviceCall in serviceCallViewModel.ServiceCalls) {
if (serviceCall.dblLatitude != 0 && serviceCall.dblLongitude != 0) {
placemarks.Add (serviceCall.ToPlacemark ());
} else {
//add it to the geocode list
geoList.Add (serviceCall);
}
}
if (placemarks.Count > 0) {
mapView.AddPlacemarks (placemarks.ToArray ());
}
if (geoList.Count > 0) {
// attempt to forward-geocode the street address
foreach (ServiceCallWrapper s in geoList) {
ServiceCallWrapper serviceCall = GeocodeServiceCallAddressAsync (s).Result;
mapView.AddPlacemark (serviceCall.ToPlacemark());
}
}
}
private async Task<ServiceCallWrapper> GeocodeServiceCallAddressAsync(ServiceCallWrapper s)
{
CLGeocoder geo = new CLGeocoder ();
String addr = s.address + " " + s.city + " " + s.state + " " + s.zip;
Console.WriteLine ("Attempting forward geocode for service call UID: " + s.call_uid + " with address: " + addr);
//app hangs on this
CLPlacemark[] result = await geo.GeocodeAddressAsync(addr);
//code updating latitude and longitude (omitted)
return s;
}
Your problem is this line:
ServiceCallWrapper serviceCall = GeocodeServiceCallAddressAsync (s).Result;
By calling Task<T>.Result, you are causing a deadlock. I explain this fully on my blog, but the gist of it is that await will (by default) capture a "context" when it yields control, and will use that context to complete the async method. In this case, the "context" is the UI context. So, the UI thread is blocked (waiting on Result) when the response comes in, and the async method cannot continue because it's waiting to run on the UI thread.
The solution is to use async all the way. In other words, replace every Task<T>.Result and Task.Wait with await:
private async Task ReloadPlacemarksAsync()
{
...
ServiceCallWrapper serviceCall = await GeocodeServiceCallAddressAsync (s);
...
}
Note that your void ReloadPlacemarks is now Task ReloadPlacemarksAsync, so this change affects your callers. async will "grow" through the codebase, and this is normal. For more information, see my MSDN article on async best practices.
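For example, a sketch of what a caller might look like after the change (the handler name here is just illustrative; async void is acceptable at the very top of the call chain, such as an event handler):

// Hypothetical caller: it awaits the renamed method instead of blocking on it.
private async void RefreshButton_Clicked(object sender, EventArgs e)
{
    await ReloadPlacemarksAsync();   // no .Result, no .Wait(), so no deadlock
}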