I want to validate whether my understanding is correct. Having read the documentation about ICacheEntryProcessor, it says that if we want to update a field in a cache entry, we implement this interface and use Invoke on the cache to update the field in the cache record atomically.
When I implement the above approach, the record does not seem to be updated; there are no exceptions thrown in the Process method.
Here's my implementation:
public class UserConnectionUpdateProcessor : ICacheEntryProcessor<string, User, UserConnection, bool>
{
    /// <summary>
    /// Processes the update.
    /// </summary>
    /// <param name="entry">The mutable cache entry for the user.</param>
    /// <param name="arg">The connection being added, touched or removed.</param>
    /// <returns>TRUE when the entry was processed.</returns>
    public bool Process(IMutableCacheEntry<string, User> entry, UserConnection arg)
    {
        var connection = (from conn in entry.Value.Connections
                          where conn.ConnectionId == arg.ConnectionId
                          select conn).FirstOrDefault();

        if (connection == null)
        {
            // This is a new connection
            entry.Value.Connections.Add(arg);
            return true;
        }

        if (arg.Disconnected)
        {
            entry.Value.Connections.Remove(connection);
        }
        else
        {
            connection.LastActivity = DateTime.Now;
        }

        return true;
    }
}
I enabled Ignite trace logs and this got printed:
2020-06-21 21:09:54.1732|DEBUG|org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache|<usersCache> Entry did not pass the filter or conflict resolution (will skip write) [entry=GridDhtCacheEntry [rdrs=ReaderId[] [], part=358, super=GridDistributedCacheEntry [super=GridCacheMapEntry [key=KeyCacheObjectImpl [part=358,
I was also going through the Ignite source to understand which operations are performed, but no luck yet:
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicCache.java
Your code is fine, but since you only change the data inside the entry.Value object, Ignite does not detect those changes and does not update the entry.
Add entry.Value = entry.Value right before return true.
Ignite uses the Value property setter to mark the entry as updated.
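For reference, the end of the Process method with that fix applied might look like this sketch (the rest of the body is unchanged from the question):

```csharp
public bool Process(IMutableCacheEntry<string, User> entry, UserConnection arg)
{
    // ... mutate entry.Value.Connections exactly as before ...

    // Reassigning the same object invokes the Value setter, which is
    // what marks the entry as updated so Ignite writes it back.
    entry.Value = entry.Value;
    return true;
}
```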
Related
I'm using swagger-codegen to generate a C# client, but I've noticed that sortParamsByRequiredFlag is not applied to model generation.
For example, here is a sample config file:
{
    "packageVersion": "1.0.0",
    "sortParamsByRequiredFlag": true,
    "optionalProjectFile": false
}
Here is the generated (truncated) code for the model's constructor:
/// <summary>
/// Initializes a new instance of the <see cref="V2alpha1CronJobSpec" /> class.
/// </summary>
/// <param name="ConcurrencyPolicy">Specifies how to treat concurrent executions of a Job. Defaults to Allow..</param>
/// <param name="FailedJobsHistoryLimit">The number of failed finished jobs to retain. This is a pointer to distinguish between explicit zero and not specified..</param>
/// <param name="JobTemplate">Specifies the job that will be created when executing a CronJob. (required).</param>
/// <param name="Schedule">The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron. (required).</param>
/// <param name="StartingDeadlineSeconds">Optional deadline in seconds for starting the job if it misses scheduled time for any reason. Missed jobs executions will be counted as failed ones..</param>
/// <param name="SuccessfulJobsHistoryLimit">The number of successful finished jobs to retain. This is a pointer to distinguish between explicit zero and not specified..</param>
/// <param name="Suspend">This flag tells the controller to suspend subsequent executions, it does not apply to already started executions. Defaults to false..</param>
public V2alpha1CronJobSpec(string ConcurrencyPolicy = default(string), int? FailedJobsHistoryLimit = default(int?), V2alpha1JobTemplateSpec JobTemplate = default(V2alpha1JobTemplateSpec), string Schedule = default(string), long? StartingDeadlineSeconds = default(long?), int? SuccessfulJobsHistoryLimit = default(int?), bool? Suspend = default(bool?))
{
    // to ensure "JobTemplate" is required (not null)
    if (JobTemplate == null)
    {
        throw new InvalidDataException("JobTemplate is a required property for V2alpha1CronJobSpec and cannot be null");
    }
    else
    {
        this.JobTemplate = JobTemplate;
    }
    // to ensure "Schedule" is required (not null)
    if (Schedule == null)
    {
        throw new InvalidDataException("Schedule is a required property for V2alpha1CronJobSpec and cannot be null");
    }
    else
    {
        this.Schedule = Schedule;
    }
    this.ConcurrencyPolicy = ConcurrencyPolicy;
    this.FailedJobsHistoryLimit = FailedJobsHistoryLimit;
    this.StartingDeadlineSeconds = StartingDeadlineSeconds;
    this.SuccessfulJobsHistoryLimit = SuccessfulJobsHistoryLimit;
    this.Suspend = Suspend;
}
As you can see from the swagger spec, JobTemplate and Schedule are required parameters. However, the params in the constructor are sorted alphabetically.
I've been sorting through the swagger-codegen code base, and I think sortParamsByRequiredFlag only applies to the generated API methods.
Is this by design? Am I missing some config that I should be setting?
Here is the GitHub issue I opened but haven't heard anything on it.
This is not an answer to why Swagger is not generating the correct code, but an answer to your problem on GitHub.
There is absolutely no need for you to use this:
var jobSpec = new V2alpha1CronJobSpec(null, null, new V2alpha1JobTemplateSpec(), "stringValue", null, ...);
You can just use this (called named parameters):
var jobSpec = new V2alpha1CronJobSpec(JobTemplate: new V2alpha1JobTemplateSpec(), Schedule: "stringValue");
For the C# client this flag is ignored in the main codebase.
You can create your own extension and override fromModel method to sort the properties.
Have a look at below code on how to do it.
@Override
public CodegenModel fromModel(String name, Model model, Map<String, Model> allDefinitions) {
    CodegenModel codegenModel = super.fromModel(name, model, allDefinitions);
    if (sortParamsByRequiredFlag) {
        Collections.sort(codegenModel.readWriteVars, new Comparator<CodegenProperty>() {
            @Override
            public int compare(CodegenProperty one, CodegenProperty another) {
                if (one.required == another.required) return 0;
                else if (one.required) return -1;
                else return 1;
            }
        });
        System.out.println("***Sorting based on required params in model....***");
    }
    return codegenModel;
}
I have already developed an application with EF6 and MS SQL Server.
Everywhere in my application where I needed to insert, update or delete data, I wrote code like the following:
Code:
using (DemoEntities objContext = GetDemoEntities())
{
    using (TransactionScope objTransaction = new TransactionScope())
    {
        Demo1(objContext);
        Demo2(objContext);

        // Commit the changes in the database.
        objTransaction.Complete();
    }
}

public void Demo1(DemoEntities objContext)
{
    Demo1 objDemo1 = new Demo1();
    objDemo1.Title = "ABC";
    objContext.Demo1.Add(objDemo1);
    objContext.SaveChanges();
}

public void Demo2(DemoEntities objContext)
{
    Demo2 objDemo2 = new Demo2();
    objDemo2.Title = "ABC";
    objContext.Demo2.Add(objDemo2);
    objContext.SaveChanges();
}
My application is running on one server and the database on another server in AWS.
The application works smoothly, but two or three times weekly I get an error like the one below.
System.Data.Entity.Core.EntityException: The underlying provider
failed on Open. ---> System.Data.SqlClient.SqlException: A
network-related or instance-specific error occurred while establishing
a connection to SQL Server. The server was not found or was not
accessible. Verify that the instance name is correct and that SQL
Server is configured to allow remote connections. (provider: Named
Pipes Provider, error: 40 - Could not open a connection to SQL Server)
---> System.ComponentModel.Win32Exception: Access is denied
On the first request I get the above error, but the immediately following request succeeds without any error.
After some googling I came across Connection Resiliency. I implemented it in my application, and it works: it retries the query a specific number of times after a specific delay.
But it fails in the cases where I use my own transactions, as in the code above. It throws an error like this:
System.InvalidOperationException: The configured execution strategy
'MYExecutionStrategy' does not support user initiated transactions.
See http://go.microsoft.com/fwlink/?LinkId=309381 for additional
information.
I configured the Execution Strategy like this:
public class MYExecutionStrategy : DbExecutionStrategy
{
    /// <summary>
    /// The default retry limit is 5, which means that the total amount of time spent
    /// between retries is 26 seconds plus the random factor.
    /// </summary>
    public MYExecutionStrategy()
    {
    }

    /// <summary>
    /// Creates a strategy with an explicit retry count and delay.
    /// </summary>
    /// <param name="maxRetryCount">Maximum number of retries.</param>
    /// <param name="maxDelay">Maximum delay between retries.</param>
    public MYExecutionStrategy(int maxRetryCount, TimeSpan maxDelay)
        : base(maxRetryCount, maxDelay)
    {
    }

    /// <summary>
    /// Decides whether the given exception is transient and worth retrying.
    /// </summary>
    /// <param name="exception">The exception thrown by the failed operation.</param>
    /// <returns>TRUE if the operation should be retried.</returns>
    protected override bool ShouldRetryOn(Exception exception)
    {
        bool bRetry = false;

        SqlException objSqlException = exception as SqlException;
        if (objSqlException != null)
        {
            List<int> lstErrorNumbersToRetry = new List<int>()
            {
                5 // SQL Server is down or not reachable
            };

            if (objSqlException.Errors.Cast<SqlError>().Any(A => lstErrorNumbersToRetry.Contains(A.Number)))
            {
                bRetry = true;
            }
        }

        return bRetry;
    }
}
And the DbConfiguration like this:
/// <summary>
/// Registers the retrying execution strategy with EF6.
/// </summary>
public class MYConfiguration : DbConfiguration
{
    public MYConfiguration()
    {
        SetExecutionStrategy("System.Data.SqlClient", () => new MYExecutionStrategy(3, TimeSpan.FromSeconds(1)));
    }
}
Question:
How can I use Connection Resiliency in Entity Framework 6 with custom transactions?
I was unable to find the DatabaseFacade class or namespace.
After doing some R&D and reading around the web, I found a solution like this:
var strategy = new MYExecutionStrategy(3, TimeSpan.FromSeconds(1));
strategy.Execute(() =>
{
    using (DemoEntities objContext = GetDemoEntities())
    {
        using (TransactionScope obj = new TransactionScope())
        {
            Demo1 objDemo1 = new Demo1();
            objDemo1.Title = "ABC";
            objContext.Demo1.Add(objDemo1);
            objContext.SaveChanges(); // First SaveChanges() call.

            Demo2 objDemo2 = new Demo2();
            objDemo2.Title = "ABC";
            objContext.Demo2.Add(objDemo2);
            objContext.SaveChanges(); // Second SaveChanges() call.

            obj.Complete();
        }
    }
});
With a custom transaction if you get a connection failure on the second SaveChanges(), the first SaveChanges() will also be rolled back. How do you propose to retry that? You can't. That's why EF retry only supports a single SaveChanges() per transaction.
One way forward is to remove the SaveChanges() from Demo1() and Demo2(), and have a single SaveChanges() in the calling method instead of a transaction.
Another way is for you to catch the exception in the calling method and orchestrate the retry there.
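The first suggestion could look like this sketch, reusing the DemoEntities context and entity types from the question: both inserts are queued in the change tracker and committed by one SaveChanges(), which runs as a single implicit transaction the configured execution strategy can retry as a whole.

```csharp
using (DemoEntities objContext = GetDemoEntities())
{
    // Queue up both inserts without saving in between.
    objContext.Demo1.Add(new Demo1 { Title = "ABC" });
    objContext.Demo2.Add(new Demo2 { Title = "ABC" });

    // One SaveChanges() == one implicit transaction; no TransactionScope
    // is needed, so the retrying execution strategy is free to re-run it.
    objContext.SaveChanges();
}
```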
Background
Hi, everyone. I have an abstract class called BaseRecordFetcher<TEntity>. It has one method that takes read/sort/translate/move-next delegates from the child class and yield returns the results as model entities.
When I read multiple data rows, it correctly does a yield return for each entity, then reaches the Trace message after the do...while without any problems.
Problem
I have noticed, however, that when I use IEnumerable.FirstOrDefault() on the collection, the system assumes no more yield returns are required and does not finish executing this method!
Other than the trace-output on how many records were returned, I do not have too much of a problem with this behaviour, but it got me thinking: "What if I do need to have some... let's call it finally code?"
Question
Is there a way to always ensure the system runs some post-processing code after yield return?
Code
/// <summary>
/// Retrieve translated entities from the database. The methods used to do
/// this are specified from the child class as parameters (i.e. Action or
/// Func delegates).
/// </summary>
/// <param name="loadSubsetFunc">
/// Specify how to load a set of database records. Return boolean
/// confirmation that records were found.
/// </param>
/// <param name="preIterationAction">
/// Specify what should happen to sort the results.
/// </param>
/// <param name="translateRowFunc">
/// Specify how a database record should translate to a model entity.
/// Return the new entity.
/// </param>
/// <param name="moveNextFunc">
/// Specify how the database row pointer should move on. Return FALSE on a
/// call to the final row.
/// </param>
/// <returns>
/// A set of translated entities from the database.
/// </returns>
/// <example><code>
///
/// return base.FetchRecords(
/// _dOOdad.LoadFacilitySites,
/// () => _dOOdad.Sort = _dOOdad.GetAutoKeyColumn(),
/// () =>
/// {
/// var entity = new FacilitySite();
/// return entity.PopulateLookupEntity(
/// _dOOdad.CurrentRow.ItemArray);
/// },
/// _dOOdad.MoveNext);
///
/// </code></example>
protected virtual IEnumerable<TEntity> FetchRecords(
    Func<bool> loadSubsetFunc, Action preIterationAction,
    Func<TEntity> translateRowFunc, Func<bool> moveNextFunc)
{
    // If records are found, sort them and return the set of entities
    if (loadSubsetFunc())
    {
        Trace.WriteLine(string.Format(
            "# FOUND one or more records: Returning {0}(s) as a set.",
            typeof(TEntity).Name));

        int recordCount = 0;
        preIterationAction();

        do
        {
            recordCount++;
            var entity = translateRowFunc();
            yield return entity;
        }
        while (moveNextFunc());

        // This code never gets reached if FirstOrDefault() is used on the set,
        // because the system will assume no more entities need to be returned
        Trace.WriteLine(string.Format(
            "# FINISHED returning records: {0} {1}(s) returned as a set.",
            recordCount, typeof(TEntity).Name));
    }
    else
    {
        Trace.WriteLine(string.Format(
            "# ZERO records found: Returning an empty set of {0}.",
            typeof(TEntity).Name));
    }
}
EDIT (added solution; thank you @Servy and @BenRobinson):
try
{
    do
    {
        recordCount++;
        var entity = translateRowFunc();
        yield return entity;
    }
    while (moveNextFunc());
}
finally
{
    // This code always executes, even when FirstOrDefault() is used on the set.
    Trace.WriteLine(string.Format(
        "# FINISHED returning records: {0} {1}(s) returned as a set.",
        recordCount, typeof(TEntity).Name));
}
Yes, you can provide finally blocks within an iterator block, and yes, they are designed to handle this exact case.
IEnumerator<T> implements IDisposable. When the compiler transforms the iterator block into an implementation, the Dispose method of the enumerator will execute any finally blocks that should be executed, based on where the enumerator was when it was disposed.
This means that as long as whoever is iterating the IEnumerator makes sure to always dispose of their enumerators, you can be sure that your finally blocks will run.
Also note that using blocks will be transformed into a try/finally block, and so will work essentially as one would hope within iterator blocks. They will clean up the given resource, if needed, when the iterator is disposed.
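A minimal self-contained sketch of this behaviour (hypothetical names, not from the question):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class IteratorFinallyDemo
{
    static bool finallyRan;

    static IEnumerable<int> Numbers()
    {
        try
        {
            yield return 1;
            yield return 2;
        }
        finally
        {
            // Runs when iteration completes OR when the enumerator is
            // disposed early, e.g. by FirstOrDefault() taking one element.
            finallyRan = true;
        }
    }

    static void Main()
    {
        // FirstOrDefault() iterates once, then disposes the enumerator.
        var first = Numbers().FirstOrDefault();
        Console.WriteLine(first);      // 1
        Console.WriteLine(finallyRan); // True
    }
}
```

FirstOrDefault() enumerates via foreach internally, so the enumerator is always disposed, which is exactly what triggers the finally block here.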
I have read everywhere that the Add method fails if it already exists but does it throw an exception or does it fail silently?
I am writing a multithreaded web application where the item should not already exist, and it will cause problems if I overwrite the cache, so I can't use the Insert method.
Would this be something I could do:
try
{
    HttpContext.Current.Cache.Add("notifications", notifications, null,
        System.Web.Caching.Cache.NoAbsoluteExpiration, TimeSpan.FromHours(8),
        System.Web.Caching.CacheItemPriority.High, null);
}
catch
{
    // do whatever if notifications already exist
}
Thanks for any answers :)
System.Web.Caching.Cache is designed to be thread-safe in a multithreaded web application, and multiple threads may be in contention to add the same key to the cache. So it depends on how you want to handle such race conditions.
In many cases, you will be inserting immutable data into the cache and won't care which thread 'wins' the race. So you can use Add or Insert.
If you want "first one wins", use the Add method, if you want "last one wins (and overwrites)" use the Insert method.
There is no point in checking for existence before inserting/adding. Another thread may insert the item after your check and before you attempt to add/insert.
Neither Add nor Insert will throw an exception if the key already exists. It wouldn't make sense to do so, as the Cache is designed for thread-safe insertion without locking. Add will fail silently, and Insert will overwrite.
Incidentally, when reading from the Cache, don't check for existence then read:
if (Cache["MyKey"] == null)
{
    // ... handle missing value
}
else
{
    // ... a race condition means the item may have been removed here
    // before you get a chance to read it
    MyType value = (MyType) Cache["MyKey"];
}
Instead, read the value from the cache and check for null:
MyType value = Cache["MyKey"] as MyType; // for reference types
if (value == null)
{
    // ... handle missing value
}
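One more detail that helps with "first one wins": Cache.Add returns the object already stored under that key, or null if the key was free, so the losing thread can still observe the winner's value. A sketch using the question's notifications example (the cast type is assumed):

```csharp
object existing = HttpContext.Current.Cache.Add(
    "notifications", notifications, null,
    System.Web.Caching.Cache.NoAbsoluteExpiration, TimeSpan.FromHours(8),
    System.Web.Caching.CacheItemPriority.High, null);

if (existing != null)
{
    // Another thread won the race; use its value instead of ours.
    notifications = (Notifications)existing;
}
```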
This is something that would be easy enough to test yourself, but it will throw an error. A good utility function to use:
// Assumes a static lock object elsewhere in the class, e.g.:
// private static readonly object LOCK = new object();

/// <summary>
/// Places an item into the cache using absolute expiration.
/// </summary>
/// <param name="key">Key of the item being inserted</param>
/// <param name="item">Item to insert</param>
/// <param name="expireTime">Absolute expiration time of the item</param>
public static void InsertIntoCacheAbsoluteExpiration(string key, object item, DateTime expireTime)
{
    if (HttpContext.Current == null)
    {
        return;
    }

    lock (LOCK)
    {
        HttpRuntime.Cache.Remove(key);
        HttpRuntime.Cache.Add(
            key,
            item,
            null,
            expireTime,
            System.Web.Caching.Cache.NoSlidingExpiration,
            System.Web.Caching.CacheItemPriority.Normal,
            null);
    }
}
Remove will not complain if the key doesn't exist. If you would like to test whether the key is already there, you can always check
HttpRuntime.Cache.Get(key) != null
As long as you use a locking variable, you should not run into an issue where your check says something is NOT in the cache and it then shows up between the check and the add.
It fails silently.
using System;
using System.Web;
using System.Web.Caching;

class Program
{
    static void Main(string[] args)
    {
        Cache cache = HttpRuntime.Cache;
        string object1 = "Object1";
        string object2 = "Object2";

        cache.Add("Key", object1, null, Cache.NoAbsoluteExpiration, new TimeSpan(0, 10, 0), CacheItemPriority.Normal, null);
        cache.Add("Key", object2, null, Cache.NoAbsoluteExpiration, new TimeSpan(0, 10, 0), CacheItemPriority.Normal, null);

        string cachedstring = (string)cache.Get("Key");
        Console.WriteLine(cachedstring);
        Console.ReadLine();
    }
}
The output of this console app is Object1.
Today I got a nice answer here that will resolve my problem. Unfortunately, I forgot to ask about locking.
The issue is simple: on the server, each connected client gets a unique (reusable) ID from 1 to 500, which is the maximum number of clients.
The answer was to create a queue, have new connections take waiting elements, and just return them when they are released.
I'm not sure whether I understand correctly: should I init the queue with 500 elements (ints), take them one by one, and return them once released?
If so, what about locking here? My question is mainly about performance, because I was using lock.
If you can use the Parallel Extensions, go for a ConcurrentQueue.
If you can't, you can find out how to implement your own in this article by Herb Sutter.
By the way, Jon Skeet's answer is telling you to just try to dequeue, and if the queue is empty, generate a new id and use it. When you're done with an id, simply return it by enqueuing it. There's no reference to 500 ids. If you need to limit yourself to 500 ids, then you will need to initialize the queue with your 500 ids and then, instead of generating new ones, BLOCK when the queue is empty. For that a producer/consumer pattern is better suited.
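If you want the bounded, blocking variant without hand-rolling the synchronization, a BlockingCollection<int> over a ConcurrentQueue<int> (both from the Parallel Extensions) gives you both; a minimal sketch (IdPool is a hypothetical name):

```csharp
using System;
using System.Collections.Concurrent;

class IdPool
{
    private readonly BlockingCollection<int> _ids;

    public IdPool(int count)
    {
        // Pre-fill the pool with ids 1..count.
        var queue = new ConcurrentQueue<int>();
        for (int i = 1; i <= count; i++)
            queue.Enqueue(i);
        _ids = new BlockingCollection<int>(queue);
    }

    // Blocks until an id is available or the timeout expires.
    public bool TryTake(out int id, int millisecondsTimeout)
    {
        return _ids.TryTake(out id, millisecondsTimeout);
    }

    // Return an id to the pool when the client disconnects.
    public void Return(int id)
    {
        _ids.Add(id);
    }
}

class Program
{
    static void Main()
    {
        var pool = new IdPool(500);

        int id;
        pool.TryTake(out id, 1000); // takes an id from the pool
        Console.WriteLine(id);      // 1
        pool.Return(id);            // id is reusable again
    }
}
```

Compared to the SlotQueue below, this pushes all the lock/event bookkeeping into the framework.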
I do not understand what you want to do.
You can populate a hash table of used numbers.
You can store it in an application-wide variable and use the Application.Lock and Application.Unlock methods while accessing that resource.
Something like this?
/// <summary>
/// Thread safe queue of client ids
/// </summary>
internal class SlotQueue
{
    private readonly AutoResetEvent _event = new AutoResetEvent(false);
    private readonly Queue<int> _items = new Queue<int>();
    private int _waitCount;

    /// <summary>
    /// Initializes a new instance of the <see cref="SlotQueue"/> class.
    /// </summary>
    /// <param name="itemCount">The item count.</param>
    public SlotQueue(int itemCount)
    {
        // Create queue items
        for (int i = 0; i < itemCount; ++i)
            _items.Enqueue(i);
    }

    /// <summary>
    /// Gets number of clients waiting in the queue.
    /// </summary>
    public int QueueSize
    {
        get { return _waitCount; }
    }

    /// <summary>
    /// Dequeue an id, waiting for one to be released if none is free.
    /// </summary>
    /// <param name="waitTime">Number of milliseconds to wait for an id</param>
    /// <returns>An id, or -1 on timeout</returns>
    public int Dequeue(int waitTime)
    {
        // Thread safe check if we got any free items
        lock (_items)
        {
            if (_items.Count > 0)
                return _items.Dequeue();
        }

        // Number of waiting clients.
        Interlocked.Increment(ref _waitCount);

        // wait for an item to get enqueued
        // false = timeout
        bool res = _event.WaitOne(waitTime);
        if (!res)
        {
            Interlocked.Decrement(ref _waitCount);
            return -1;
        }

        // try to fetch a queued item
        lock (_items)
        {
            if (_items.Count > 0)
            {
                Interlocked.Decrement(ref _waitCount);
                return _items.Dequeue();
            }
        }

        // another thread got the last item between WaitOne and the lock.
        Interlocked.Decrement(ref _waitCount);
        return -1;
    }

    /// <summary>
    /// Enqueue a client ID
    /// </summary>
    /// <param name="id">Id to enqueue</param>
    public void Enqueue(int id)
    {
        lock (_items)
        {
            _items.Enqueue(id);
            if (_waitCount > 0)
                _event.Set();
        }
    }
}