Hangfire get last execution time - C#

I'm using Hangfire 1.5.3. In my recurring job I want to call a service that uses the time since the last run. Unfortunately, LastExecution is set to the current time, because the job data is updated before the job executes.
Job
public abstract class RecurringJobBase
{
protected RecurringJobDto GetJob(string jobId)
{
using (var connection = JobStorage.Current.GetConnection())
{
return connection.GetRecurringJobs().FirstOrDefault(p => p.Id == jobId);
}
}
protected DateTime GetLastRun(string jobId)
{
var job = GetJob(jobId);
if (job != null && job.LastExecution.HasValue)
{
return job.LastExecution.Value.ToLocalTime();
}
return DateTime.Today;
}
}
public class NotifyQueryFilterSubscribersJob : RecurringJobBase
{
public const string JobId = "NotifyQueryFilterSubscribersJob";
private readonly IEntityFilterChangeNotificationService _notificationService;
public NotifyQueryFilterSubscribersJob(IEntityFilterChangeNotificationService notificationService)
{
_notificationService = notificationService;
}
public void Run()
{
var lastRun = GetLastRun(JobId);
_notificationService.CheckChangesAndSendNotifications(DateTime.Now - lastRun);
}
}
Register
RecurringJob.AddOrUpdate<NotifyQueryFilterSubscribersJob>(NotifyQueryFilterSubscribersJob.JobId, job => job.Run(), Cron.Minutely, TimeZoneInfo.Local);
I know, that it is configured as minutely, so I could calculate the time roughly. But I'd like to have a configuration independent implementation. So my Question is: How can I implement RecurringJobBase.GetLastRun to return the time of the previous run?

To address my comment above, where you might have more than one type of recurring job running but want to check previous states, you can verify that the previous job info actually relates to this type of job as follows (although this feels a bit hacky/convoluted).
If you're passing the PerformContext into the job method, then you can use this:
var jobName = performContext.BackgroundJob.Job.ToString();
var currentJobId = int.Parse(performContext.BackgroundJob.Id);
JobData jobFoundInfo = null;
using (var connection = JobStorage.Current.GetConnection()) {
var decrementId = currentJobId;
while (decrementId > currentJobId - 50 && decrementId > 1) { // try up to 50 jobs previously
decrementId--;
var jobInfo = connection.GetJobData(decrementId.ToString());
if (jobInfo?.Job != null && jobInfo.Job.ToString().Equals(jobName)) { // **THIS IS THE CHECK**
jobFoundInfo = jobInfo;
break;
}
}
if (jobFoundInfo == null) {
throw new Exception($"Could not find the previous run for job with name {jobName}");
}
return jobFoundInfo;
}
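For reference, here is a minimal sketch of how a PerformContext can reach the job method in the first place: Hangfire substitutes the real context at execution time when it is declared as a parameter, and the null passed at registration is just a placeholder. The job class name and schedule below are illustrative, not from the question:

using Hangfire;
using Hangfire.Server;

public class MyRecurringJob
{
    // Hangfire injects the actual PerformContext when the job runs.
    public void Run(PerformContext performContext)
    {
        var currentJobId = performContext.BackgroundJob.Id;
        // ... walk back through previous job ids as shown above ...
    }
}

// Registration; the explicit null is replaced by Hangfire at execution time.
RecurringJob.AddOrUpdate<MyRecurringJob>("my-recurring-job", j => j.Run(null), Cron.Minutely());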

You could take advantage of the fact you already stated - "Unfortunately the LastExecution is set to the current time, because the job data was updated before executing the job".
The job includes the "LastJobId" property, which seems to be an incremented Id. Hence, you should be able to get the "real" previous job by decrementing LastJobId and querying the job data for that Id.
var currentJob = connection.GetRecurringJobs().FirstOrDefault(p => p.Id == CheckForExpiredPasswordsId);
if (currentJob == null)
{
return null; // Or whatever suits you
}
var previousJob = connection.GetJobData((Convert.ToInt32(currentJob.LastJobId) - 1).ToString());
return previousJob.CreatedAt;
Note that this is the time of creation, not execution, but it might be accurate enough for you. Bear in mind the edge case of the very first run, when there will be no previous job.
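Put together, a hedged sketch of a helper based on this idea; note that decrementing the id assumes numeric, sequential job ids (true for the SQL Server storage, but not guaranteed by every storage provider):

// GetRecurringJobs() is an extension method from the Hangfire.Storage namespace.
protected DateTime? GetPreviousRunCreatedAt(string recurringJobId)
{
    using (var connection = JobStorage.Current.GetConnection())
    {
        var recurringJob = connection.GetRecurringJobs()
            .FirstOrDefault(j => j.Id == recurringJobId);

        // First run: nothing has been enqueued yet.
        if (recurringJob?.LastJobId == null)
            return null;

        var previousJob = connection.GetJobData(
            (Convert.ToInt32(recurringJob.LastJobId) - 1).ToString());

        return previousJob?.CreatedAt; // creation time, not execution time
    }
}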

After digging around, I came up with the following solution.
var lastSucceeded = JobStorage.Current.GetMonitoringApi()
    .SucceededJobs(0, 1000)
    .OrderByDescending(j => j.Value.SucceededAt)
    .FirstOrDefault(j => j.Value.Job.Method.Name == "MethodName"
        && j.Value.Job.Type.FullName == "NameSpace.To.Class.Containing.The.Method")
    .Value;
var lastExec = lastSucceeded.SucceededAt?.AddMilliseconds(Convert.ToDouble(-lastSucceeded.TotalDuration));
It's not perfect, but I think it's a little cleaner than the other solutions.
Hopefully they will implement an official way soon.
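If more than 1000 succeeded jobs may be retained, a sketch that pages through the monitoring API instead (method and type names are placeholders; like the snippet above, this assumes the storage returns succeeded jobs roughly newest first):

static DateTime? FindLastSucceededAt(string methodName, string typeFullName)
{
    var monitoring = JobStorage.Current.GetMonitoringApi();
    const int pageSize = 500;

    for (var offset = 0; ; offset += pageSize)
    {
        var page = monitoring.SucceededJobs(offset, pageSize);
        if (page.Count == 0)
            return null; // ran out of jobs without finding a match

        var match = page.FirstOrDefault(j =>
            j.Value.Job?.Method.Name == methodName &&
            j.Value.Job?.Type.FullName == typeFullName);

        if (match.Key != null)
            return match.Value.SucceededAt;
    }
}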

The answer by @Marius Steinbach is often good enough, but if you have thousands of job executions (my case), loading all of them from the DB doesn't seem that great. So finally I decided to write a simple SQL query and use it directly (this is for PostgreSQL storage, though changing it to SQL Server should be straightforward):
private async Task<DateTime?> GetLastSuccessfulExecutionTime(string jobType)
{
await using var conn = new NpgsqlConnection(_connectionString);
await conn.OpenAsync();
await using var cmd = new NpgsqlCommand(@"
SELECT s.data FROM hangfire.job j
LEFT JOIN hangfire.state s ON j.stateid = s.id
WHERE j.invocationdata LIKE $1 AND j.statename = $2
ORDER BY s.createdat DESC
LIMIT 1", conn)
{
Parameters =
{
new() { Value = $"%{jobType}%" } ,
new() { Value = SucceededState.StateName }
}
};
var result = await cmd.ExecuteScalarAsync();
if (result is not string data)
return null;
var stateData = JsonSerializer.Deserialize<Dictionary<string, string>>(data);
return JobHelper.DeserializeNullableDateTime(stateData?.GetValueOrDefault("SucceededAt"));
}
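For comparison, a rough SQL Server equivalent might look like this (untested sketch; assumes the default HangFire schema created by SqlServerStorage and reuses the same state-data parsing):

// Uses Microsoft.Data.SqlClient; JobHelper/SucceededState come from Hangfire as above.
private async Task<DateTime?> GetLastSuccessfulExecutionTime(string jobType)
{
    using var conn = new SqlConnection(_connectionString);
    await conn.OpenAsync();
    using var cmd = new SqlCommand(@"
        SELECT TOP 1 s.Data
        FROM [HangFire].[Job] j
        LEFT JOIN [HangFire].[State] s ON j.StateId = s.Id
        WHERE j.InvocationData LIKE @jobType AND j.StateName = @state
        ORDER BY s.CreatedAt DESC", conn);
    cmd.Parameters.AddWithValue("@jobType", $"%{jobType}%");
    cmd.Parameters.AddWithValue("@state", SucceededState.StateName);

    var result = await cmd.ExecuteScalarAsync();
    if (result is not string data)
        return null;

    var stateData = JsonSerializer.Deserialize<Dictionary<string, string>>(data);
    return JobHelper.DeserializeNullableDateTime(stateData?.GetValueOrDefault("SucceededAt"));
}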

Use this method, which returns the last execution time and the next execution time of a job:
public static (DateTime?, DateTime?) GetExecutionDateTimes(string jobName)
{
DateTime? lastExecutionDateTime = null;
DateTime? nextExecutionDateTime = null;
using (var connection = JobStorage.Current.GetConnection())
{
var job = connection.GetRecurringJobs().FirstOrDefault(p => p.Id == jobName);
if (job != null && job.LastExecution.HasValue)
lastExecutionDateTime = job.LastExecution;
if (job != null && job.NextExecution.HasValue)
nextExecutionDateTime = job.NextExecution;
}
return (lastExecutionDateTime, nextExecutionDateTime);
}
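Usage is then a simple tuple deconstruction (job id taken from the question):

var (lastRun, nextRun) = GetExecutionDateTimes("NotifyQueryFilterSubscribersJob");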

Related

Problem trying to clone a Project Server database using OData and Entity Framework [duplicate]

I am having trouble updating my entities with Parallel.ForEach. The program works fine when I use a plain foreach to update the entities, but if I use Parallel.ForEach it gives me an error like: "ArgumentException: An item with the same key has already been added". I have no idea why this happens; shouldn't it be thread safe? Why am I getting this error, and how can I resolve it?
The program itself gets some data from one database and copies it to another. If a datarow with the same GUID exists (see below) and the status is unchanged, the matching datarow in the second database must be updated. If there's a match but the status has changed, the modifications must be ignored. Finally, if there is no match in the second database, the datarow is inserted into it (i.e. the two databases are synchronized). I just want to speed up the process somehow, which is why I first thought of parallel processing.
(I am using Autofac as an IoC container and dependency injection if that matters)
Here is the code snippet which tries to update:
/* #param reports: data from the first database */
public string SynchronizeData(List<Reports> reports, int statusid)
{
// reportdataindatabase - the second database data, List() actually selects all, see next code snippet
List<Reports> reportdataindatabase = unitOfWorkTAFeedBack.ReportsRepository.List().ToList();
int allcount = reports.Count;
int insertedcount = 0;
int updatedcount = 0;
int ignoredcount = 0;
// DOES NOT WORK, GIVES THE ERROR
Parallel.ForEach(reports, r =>
{
var guid = reportdataindatabase.FirstOrDefault(x => x.AssignmentGUID == r.AssignmentGUID);
if (guid == null)
{
unitOfWorkTAFeedBack.ReportsRepository.Add(r); // an insert on the repository
insertedcount++;
}
else
{
if (guid.StatusId == statusid)
{
r.ReportsID = guid.ReportsID;
unitOfWorkTAFeedBack.ReportsRepository.Update(r); // update on the repo
updatedcount++;
}
else
{
ignoredcount++;
}
}
});
/* WORKS PERFECTLY BUT RELATIVELY SLOW - takes 80 seconds to update 1287 records
foreach (Reports r in reports)
{
var guid = reportdataindatabase.FirstOrDefault(x => x.AssignmentGUID == r.AssignmentGUID); // find match between the two databases
if (guid == null)
{
unitOfWorkTAFeedBack.ReportsRepository.Add(r); // no match, insert
insertedcount++;
}
else
{
if (guid.StatusId == statusid)
{
r.ReportsID = guid.ReportsID;
unitOfWorkTAFeedBack.ReportsRepository.Update(r);
updatedcount++;
}
else
{
ignoredcount++;
}
}
} */
unitOfWorkTAFeedBack.Commit(); // this only calls SaveChanges() on DbContext object
int allprocessed = insertedcount + updatedcount + ignoredcount;
string result = "Synchronization finished. " + allprocessed + " reports processed out of " + allcount + ", "
+ insertedcount + " has been inserted, " + updatedcount + " has been updated and "
+ ignoredcount + " has been ignored. \n Press a button to dismiss this window." ;
return result;
}
The program breaks in the Update method of this repository class (with Parallel.ForEach; no problem with the standard foreach):
public class EntityFrameworkReportsRepository : IReportsRepository
{
private readonly TAFeedBackContext tAFeedBackContext;
public EntityFrameworkReportsRepository(TAFeedBackContext tAFeedBackContext)
{
this.tAFeedBackContext = tAFeedBackContext;
}
public void Add(Reports r)
{
tAFeedBackContext.Reports.Add(r);
}
public void Delete(int Id)
{
var obj = tAFeedBackContext.Reports.Find(Id);
tAFeedBackContext.Reports.Remove(obj);
}
public Reports Get(int Id)
{
var obj = tAFeedBackContext.Reports.Find(Id);
return obj;
}
public IQueryable<Reports> List()
{
return tAFeedBackContext.Reports.AsNoTracking();
}
public void Update(Reports r)
{
var entry = tAFeedBackContext.Entry(r); // The Program Breaks At This Point!
if (entry.State == EntityState.Detached)
{
tAFeedBackContext.Reports.Attach(r);
tAFeedBackContext.Entry(r).State = EntityState.Modified;
}
else
{
tAFeedBackContext.Entry(r).CurrentValues.SetValues(r);
}
}
}
Please bear in mind it's hard to give a complete answer, as there are things I need clarity on, but the comments should help with building a picture.
Parallel.ForEach(reports, r => //Parallel.ForEach is not the answer..
{
//reportdataindatabase is built before the loop, so it's OK here
// do you really want FirstOrDefault vs SingleOrDefault
var guid = reportdataindatabase.FirstOrDefault(x => x.AssignmentGUID == r.AssignmentGUID);
if (guid == null)
{
// this is done on the context, not the DB; nothing is executed yet
unitOfWorkTAFeedBack.ReportsRepository.Add(r); // an insert on the repository
//insertedcount++; you would need a lock here
}
else
{
if (guid.StatusId == statusid)
{
r.ReportsID = guid.ReportsID;
// this is done on the context, not the DB; nothing is executed yet
unitOfWorkTAFeedBack.ReportsRepository.Update(r); // update on the repo
//updatedcount++; you would need a lock here
}
else
{
//ignoredcount++; you would need a lock here
}
}
});
The issue here is that reportdataindatabase can contain the same key twice, and the context is only updated after the fact, i.e. when it gets to unitOfWorkTAFeedBack.Commit(), so Update may have been called twice for the same entity.
The Commit is also where the real work is: doing the add/update above in parallel won't save you any real time, as that part is quick. That said, 80 seconds to update 1287 records does seem long.
P.S. Please add how reports is retrieved. You want something like:
TAFeedBackContext db = new TAFeedBackContext();
var remoteReports = DataFromAnotherPlace; // include how this was retrieved
var localReports = db.Reports.ToList(); // these are tracked (by default)
foreach (var item in remoteReports)
{
//I assume more than one match is invalid.
var localEntity = localReports.SingleOrDefault(x => x.AssignmentGUID == item.AssignmentGUID);
if (localEntity == null)
{
//add, as it doesn't exist
db.Reports.Add(new Reports() { /* set fields */ });
}
else
{
if (localEntity.StatusId == statusid) //only update if status is the passed in status.
{
//why are you modifying the remote entity?
item.ReportsID = localEntity.ReportsID;
//update the remote entity? I get the impression it's from a different context;
//if not then cool, but you need to show how reports is retrieved
}
else
{
}
}
}
db.SaveChanges();
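As an aside: if the counters ever do need updating from parallel code, Interlocked increments avoid both the race and an explicit lock. A minimal sketch:

int insertedcount = 0, updatedcount = 0, ignoredcount = 0;

Parallel.ForEach(reports, r =>
{
    // ... matching logic as above ...
    Interlocked.Increment(ref insertedcount); // atomic; no lock needed
});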

Why is my .NET Core API cancelling requests?

I have an async method that is called in a loop:
private Task<HttpResponseMessage> GetResponseMessage(Region region, DateTime startDate, DateTime endDate)
{
var longLatString = $"q={region.LongLat.Lat},{region.LongLat.Long}";
var startDateString = $"{startDateQueryParam}={ConvertDateTimeToApixuQueryString(startDate)}";
var endDateString = $"{endDateQueryParam}={ConvertDateTimeToApixuQueryString(endDate)}";
var url = $"http://api?key={Config.Key}&{longLatString}&{startDateString}&{endDateString}";
return Client.GetAsync(url);
}
I then take the response and save it to my EF Core database. However, in some instances I get this exception message: "The operation was canceled".
I really don't understand it. Is this a TCP handshake issue?
Edit:
For context, I am making many of these calls and passing each response to the method that writes to the DB (which is also so slow it's unbelievable):
private async Task<int> WriteResult(Response apiResponse, Region region)
{
// since a DbContext is not thread safe, we ensure we have a new one for each insert;
// a .NET Core app can insert data at the same time from different users,
// so different instances of the context must be used
using (var context = new DalContext(ContextOptions))
{
var batch = new List<HistoricalWeather>();
foreach (var forecast in apiResponse.Forecast.Forecastday)
{
// avoid inserting duplicates
var existingRecord = context.HistoricalWeather
.FirstOrDefault(x => x.RegionId == region.Id &&
IsOnSameDate(x.Date.UtcDateTime, forecast.Date));
if (existingRecord != null)
{
continue;
}
var newHistoricalWeather = new HistoricalWeather
{
RegionId = region.Id,
CelsiusMin = forecast.Day.Mintemp_c,
CelsiusMax = forecast.Day.Maxtemp_c,
CelsiusAverage = forecast.Day.Avgtemp_c,
MaxWindMph = forecast.Day.Maxwind_mph,
PrecipitationMillimeters = forecast.Day.Totalprecip_mm,
AverageHumidity = forecast.Day.Avghumidity,
AverageVisibilityMph = forecast.Day.Avgvis_miles,
UvIndex = forecast.Day.Uv,
Date = new DateTimeOffset(forecast.Date),
Condition = forecast.Day.Condition.Text
};
batch.Add(newHistoricalWeather);
}
context.HistoricalWeather.AddRange(batch);
var inserts = await context.SaveChangesAsync();
return inserts;
}
}
Edit: I am making 150,000 calls. I know this is questionable, since it all accumulates in memory before the save, but this is where I got to in trying to make this run faster... only I guess my actual writing code is blocking :/
var dbInserts = await Task.WhenAll(
getTasks // the list of all api get requests
.Select(async x => {
// parsed can be null if get failed
var parsed = await ParseApixuResponse(x.Item1); // readcontentasync and just return the deserialized json
return new Tuple<ApiResult, Region>(parsed, x.Item2);
})
.Select(async x => {
var finishedGet = await x;
if(finishedGet.Item1 == null)
{
return 0;
}
return await WriteResult(finishedGet.Item1, finishedGet.Item2);
})
);
.NET Core has a DefaultConnectionLimit setting (ServicePointManager.DefaultConnectionLimit), as answered in the comments.
This limits outgoing connections to a specific domain, to ensure all ports are not taken, etc.
I did my parallel work incorrectly, causing it to go over the limit (which everything I read says should not be 2 on .NET Core, but it was), and that caused connections to close before responses were received.
I made the limit greater, did the parallel work correctly, then lowered it again.
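For illustration, a sketch of both levers; the concrete numbers and the regions collection are placeholders, not recommendations:

using System.Net;
using System.Threading;

// Raise the per-host cap on concurrent outgoing connections (once, at startup).
ServicePointManager.DefaultConnectionLimit = 50;

// And/or throttle your own parallelism so the cap is never exceeded.
var throttle = new SemaphoreSlim(20);
var tasks = regions.Select(async region =>
{
    await throttle.WaitAsync();
    try
    {
        return await GetResponseMessage(region, startDate, endDate);
    }
    finally
    {
        throttle.Release();
    }
});
var responses = await Task.WhenAll(tasks);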

Stop Hangfire job from enqueuing if already enqueued

Is there a simple way of stopping a Hangfire.io job from enqueuing if one is already enqueued?
Looking at JobFilterAttribute, nothing stands out as a way to get the state of anything on the server. Can I use the connection objects and query the store?
Thanks
Have a look at the following gist by the library owner: https://gist.github.com/odinserj/a8332a3f486773baa009
It should prevent the same job from being enqueued more than once by querying a fingerprint.
You can activate it per background job by decorating the method with the attribute [DisableMultipleQueuedItemsFilter].
Or you can enable it globally: GlobalJobFilters.Filters.Add(new DisableMultipleQueuedItemsFilter());
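A sketch of both registration styles (the attribute type comes from the gist above, not from the Hangfire package itself):

// Per background job: decorate the method that gets enqueued.
public class NotificationJobs
{
    [DisableMultipleQueuedItemsFilter]
    public void SendNotifications() { /* ... */ }
}

// Or globally, for every background job, at application startup:
GlobalJobFilters.Filters.Add(new DisableMultipleQueuedItemsFilter());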
I am using the following code block to check whether to add a new job, depending on its current state. (I know that I am currently looking at only the first 1000 jobs; you can adapt the logic to your needs.)
private static bool IsOKToAddJob(string JobName, string QueueName, out string NotOKKey)
{
try
{
var monapi = JobStorage.Current.GetMonitoringApi();
var processingJobs = monapi.ProcessingJobs(0, 1000);
NotOKKey = processingJobs.Where(j => j.Value.Job.ToString() == JobName).FirstOrDefault().Key;
if (!string.IsNullOrEmpty(NotOKKey)) return false;
var scheduledJobs = monapi.ScheduledJobs(0, 1000);
NotOKKey = scheduledJobs.Where(j => j.Value.Job.ToString() == JobName).FirstOrDefault().Key;
if (!string.IsNullOrEmpty(NotOKKey)) return false;
var enqueuedJobs = monapi.EnqueuedJobs(QueueName, 0, 1000);
NotOKKey = enqueuedJobs.Where(j => j.Value.Job.ToString() == JobName).FirstOrDefault().Key;
if (!string.IsNullOrEmpty(NotOKKey)) return false;
NotOKKey = null;
return true;
}
catch (Exception ex)
{
//LOG your Exception;
throw; // rethrow so that all code paths return a value
}
}
And the usage is simple:
if (IsOKToAddJob(YOURJOBNAME, QueueName, out var notOkKey))
{
var id = BackgroundJob.Enqueue(() => YOURMETHOD());
}
//rest

EF and MVC - approach to work together

I have used the following approach for a long time (approx. 5 years):
Create one big class that initializes XXXEntities in the controller, with one method for each DB action. Example:
public class DBRepository
{
private MyEntities _dbContext;
public DBRepository()
{
_dbContext = new MyEntities();
}
public NewsItem NewsItem(int ID)
{
var q = from i in _dbContext.News where i.ID == ID select new NewsItem() { ID = i.ID, FullText = i.FullText, Time = i.Time, Topic = i.Topic };
return q.FirstOrDefault();
}
public List<Screenshot> LastPublicScreenshots()
{
var q = from i in _dbContext.Screenshots where i.isPublic == true && i.ScreenshotStatus.Status == ScreenshotStatusKeys.LIVE orderby i.dateTimeServer descending select i;
return q.Take(5).ToList();
}
public void SetPublicScreenshot(string filename, bool val)
{
var screenshot = Get<Screenshot>(p => p.filename == filename);
if (screenshot != null)
{
screenshot.isPublic = val;
_dbContext.SaveChanges();
}
}
public void SomeMethod()
{
SomeEntity1 s1 = new SomeEntity1() { field1="fff", field2="aaa" };
_dbContext.SomeEntity1.Add(s1);
SomeEntity2 s2 = new SomeEntity2() { SE1 = s1 };
_dbContext.SomeEntity2.Add(s2);
_dbContext.SaveChanges();
}
}
And some external code creates a DBRepository object and calls its methods.
It worked fine. But now async operations have come in. So, if I use code like
public async void AddStatSimplePageAsync(string IPAddress, string login, string txt)
{
DateTime dateAdded2MinsAgo = DateTime.Now.AddMinutes(-2);
if ((from i in _dbContext.StatSimplePages where i.page == txt && i.dateAdded > dateAdded2MinsAgo select i).Count() == 0)
{
StatSimplePage item = new StatSimplePage() { IPAddress = IPAddress, login = login, page = txt, dateAdded = DateTime.Now };
_dbContext.StatSimplePages.Add(item);
await _dbContext.SaveChangesAsync();
}
}
then there can be a situation where the following code executes before SaveChangesAsync has completed, and one more entity is added to _dbContext that should not yet be saved. For example, some code:
DBRepository _rep = new DBRepository();
_rep.AddStatSimplePageAsync("A", "b", "c");
_rep.SomeMethod();
I worry that SaveChanges will be called after the line
_dbContext.SomeEntity1.Add(s1);
but before
_dbContext.SomeEntity2.Add(s2);
(i.e. these two actions should be one atomic operation)
Am I right? Is my approach wrong now? Which approach should be used?
P.S. As I understand it, the sequence will be the following:
1. AddStatSimplePageAsync is called.
2. await _dbContext.SaveChangesAsync() starts inside AddStatSimplePageAsync.
3. SomeMethod() starts while _dbContext.SaveChangesAsync() from AddStatSimplePageAsync is still executing (conceptually in another, child thread).
4. _dbContext.SaveChangesAsync() completes in the child thread while the main thread is executing something in SomeMethod().
OK, this time I think I've got your problem.
First, it's weird that you have two separate calls to the SaveChanges method. Usually you should try to call it once at the end of all your operations and then dispose the context.
Even so, your concerns are justified, but some clarifications are needed here.
When encountering async or await, do not think about threads, but about tasks; they are two different concepts.
Have a read of this great article. There is an image there that will practically explain everything.
To say it in a few words: if you do not await an async method, you risk that a subsequent operation could "harm" the execution of the first one. To solve it, simply await it.
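In code, and assuming AddStatSimplePageAsync is changed to return Task rather than void (an async void method cannot be awaited), the caller would become:

var rep = new DBRepository();
await rep.AddStatSimplePageAsync("A", "b", "c"); // the save completes first
rep.SomeMethod();                                // now safe to touch the context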

How to manage a mutex in an asynchronous method

I have ported my old HttpHandler (.ashx) TwitterFeed code to a WebAPI application. The core of the code uses the excellent Linq2Twitter package (https://linqtotwitter.codeplex.com/). Part of the port involved upgrading this component from version 2 to version 3, which now provides a number of asynchronous method calls that are new to me. Here is the basic controller:
public async Task<IEnumerable<Status>>
GetTweets(int count, bool includeRetweets, bool excludeReplies)
{
var auth = new SingleUserAuthorizer
{
CredentialStore = new SingleUserInMemoryCredentialStore
{
ConsumerKey = ConfigurationManager.AppSettings["twitterConsumerKey"],
ConsumerSecret = ConfigurationManager.AppSettings["twitterConsumerKeySecret"],
AccessToken = ConfigurationManager.AppSettings["twitterAccessToken"],
AccessTokenSecret = ConfigurationManager.AppSettings["twitterAccessTokenSecret"]
}
};
var ctx = new TwitterContext(auth);
var tweets =
await
(from tweet in ctx.Status
where (
(tweet.Type == StatusType.Home)
&& (tweet.ExcludeReplies == excludeReplies)
&& (tweet.IncludeMyRetweet == includeRetweets)
&& (tweet.Count == count)
)
select tweet)
.ToListAsync();
return tweets;
}
This works fine, but previously I had cached the results to avoid over-calling the Twitter API. It is here that I have run into a problem (more to do with my lack of understanding of asynchronous code than anything else, I suspect).
In overview, what I want to do is first check the cache; if the data doesn't exist there, rehydrate the cache and return the data to the caller (web page). Here is my attempt at the code:
public class TwitterController : ApiController {
private const string CacheKey = "TwitterFeed";
public async Task<IEnumerable<Status>>
GetTweets(int count, bool includeRetweets, bool excludeReplies)
{
var context = System.Web.HttpContext.Current;
var tweets = await GetTweetData(context, count, includeRetweets, excludeReplies);
return tweets;
}
private async Task<IEnumerable<Status>>
GetTweetData(HttpContext context, int count, bool includeRetweets, bool excludeReplies)
{
var cache = context.Cache;
Mutex mutex = null;
bool iOwnMutex = false;
IEnumerable<Status> data = (IEnumerable<Status>)cache[CacheKey];
// Start check to see if available on cache
if (data == null)
{
try
{
// Lock base on resource key
mutex = new Mutex(true, CacheKey);
// Wait until it is safe to enter (someone else might already be
// doing this), but also add 30 seconds max.
iOwnMutex = mutex.WaitOne(30000);
// Now let's see if some one else has added it...
data = (IEnumerable<Status>)cache[CacheKey];
// They did, so send it...
if (data != null)
{
return data;
}
if (iOwnMutex)
{
// Still not there, so now is the time to look for it!
data = await CallTwitterApi(count, includeRetweets, excludeReplies);
cache.Remove(CacheKey);
cache.Add(CacheKey, data, null, GetTwitterExpiryDate(),
TimeSpan.Zero, CacheItemPriority.Normal, null);
}
}
finally
{
// Release the Mutex.
if ((mutex != null) && (iOwnMutex))
{
// The following line throws the error:
// Object synchronization method was called from an
// unsynchronized block of code.
mutex.ReleaseMutex();
}
}
}
return data;
}
private DateTime GetTwitterExpiryDate()
{
string szExpiry = ConfigurationManager.AppSettings["twitterCacheExpiry"];
int expiry = Int32.Parse(szExpiry);
return DateTime.Now.AddMinutes(expiry);
}
private async Task<IEnumerable<Status>>
CallTwitterApi(int count, bool includeRetweets, bool excludeReplies)
{
var auth = new SingleUserAuthorizer
{
CredentialStore = new SingleUserInMemoryCredentialStore
{
ConsumerKey = ConfigurationManager.AppSettings["twitterConsumerKey"],
ConsumerSecret = ConfigurationManager.AppSettings["twitterConsumerKeySecret"],
AccessToken = ConfigurationManager.AppSettings["twitterAccessToken"],
AccessTokenSecret = ConfigurationManager.AppSettings["twitterAccessTokenSecret"]
}
};
var ctx = new TwitterContext(auth);
var tweets =
await
(from tweet in ctx.Status
where (
(tweet.Type == StatusType.Home)
&& (tweet.ExcludeReplies == excludeReplies)
&& (tweet.IncludeMyRetweet == includeRetweets)
&& (tweet.Count == count)
&& (tweet.RetweetCount < 1)
)
select tweet)
.ToListAsync();
return tweets;
}
}
The problem occurs in the finally code block where the Mutex is released (though I have concerns about the overall pattern and approach of the GetTweetData() method):
if ((mutex != null) && (iOwnMutex))
{
// The following line throws the error:
// Object synchronization method was called from an
// unsynchronized block of code.
mutex.ReleaseMutex();
}
If I comment out the line, the code works correctly, but (I assume) I should release the Mutex having created it. From what I have found out, this problem is related to the thread changing between creating and releasing the mutex.
Because of my lack of general knowledge on asynchronous coding, I am not sure a) if the pattern I'm using is viable and b) if it is, how I address the problem.
Any advice would be much appreciated.
Using a mutex like that isn't going to work. For one thing, a Mutex is thread-affine, so it can't be used with async code.
Other problems I noticed:
Cache is thread-safe, so it shouldn't need a mutex (or any other protection) anyway.
Asynchronous methods should follow the Task-based Asynchronous Pattern.
There is one major tip regarding caching: when you just have an in-memory cache, then cache the task rather than the resulting data. On a side note, I have to wonder whether HttpContext.Cache is the best cache to use, but I'll leave it as-is since your question is more about how asynchronous code changes caching patterns.
So, I'd recommend something like this:
private const string CacheKey = "TwitterFeed";
public Task<IEnumerable<Status>> GetTweetsAsync(int count, bool includeRetweets, bool excludeReplies)
{
var context = System.Web.HttpContext.Current;
return GetTweetDataAsync(context, count, includeRetweets, excludeReplies);
}
private Task<IEnumerable<Status>> GetTweetDataAsync(HttpContext context, int count, bool includeRetweets, bool excludeReplies)
{
var cache = context.Cache;
Task<IEnumerable<Status>> data = cache[CacheKey] as Task<IEnumerable<Status>>;
if (data != null)
return data;
data = CallTwitterApiAsync(count, includeRetweets, excludeReplies);
cache.Insert(CacheKey, data, null, GetTwitterExpiryDate(), TimeSpan.Zero);
return data;
}
private async Task<IEnumerable<Status>> CallTwitterApiAsync(int count, bool includeRetweets, bool excludeReplies)
{
...
}
There's a small possibility that if two different requests (from two different sessions) ask for the same Twitter feed at exactly the same time, the feed will be requested twice. But I wouldn't lose sleep over it.
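If that duplicate request ever did matter, the async-compatible substitute for a Mutex is SemaphoreSlim, which has no thread affinity and can be awaited. A minimal sketch on top of the code above:

private static readonly SemaphoreSlim CacheLock = new SemaphoreSlim(1, 1);

private async Task<IEnumerable<Status>> GetTweetDataAsync(
    HttpContext context, int count, bool includeRetweets, bool excludeReplies)
{
    var cache = context.Cache;
    Task<IEnumerable<Status>> dataTask;

    await CacheLock.WaitAsync(); // async-friendly: no thread affinity
    try
    {
        // The lock only guards the check-and-insert, never the API call itself.
        dataTask = cache[CacheKey] as Task<IEnumerable<Status>>;
        if (dataTask == null)
        {
            dataTask = CallTwitterApiAsync(count, includeRetweets, excludeReplies);
            cache.Insert(CacheKey, dataTask, null, GetTwitterExpiryDate(), TimeSpan.Zero);
        }
    }
    finally
    {
        CacheLock.Release();
    }
    return await dataTask;
}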
