Today I got a nice answer here that will resolve my problem. Unfortunately, I forgot to ask about locking.
The issue is simple - in the server, each connected client gets a unique, reusable ID from 1 to 500, which is the maximum number of clients.
The answer was to create a queue, take waiting elements from it for new connections, and just return them when they are released.
I'm not sure whether I understand correctly - so I should initialize the queue with 500 elements (ints), take them one by one, and return them once released?
If so, what about locking here? My question is mainly aimed at performance, because I was using lock.
If you can use Parallel Extensions, go for a ConcurrentQueue.
If you can't, you can find out how to implement your own in this article by Herb Sutter.
By the way, Jon Skeet's answer is telling you to just try to dequeue and, if the queue is empty, generate a new id and use it. When you're done with an id, simply return it by enqueuing it. There's no reference to 500 ids. If you do need to limit yourself to 500 ids, then you will need to initialize the queue with your 500 ids and then, instead of generating new ones, block when the queue is empty. For that, a producer/consumer pattern is better suited.
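For the non-blocking variant, a minimal sketch of the ConcurrentQueue approach might look like this (the pre-fill with 500 ids and the TryAcquire/Release names are illustrative assumptions, not code from the answer):
using System.Collections.Concurrent;

// Rough sketch only: a pre-filled id pool built on ConcurrentQueue (no blocking).
internal class IdPool
{
    private readonly ConcurrentQueue<int> _free = new ConcurrentQueue<int>();

    public IdPool(int maxClients)
    {
        // Pre-fill with ids 1..maxClients (500 in the question).
        for (int i = 1; i <= maxClients; i++)
            _free.Enqueue(i);
    }

    // True if a free id was available; the id comes back via the out parameter.
    public bool TryAcquire(out int id)
    {
        return _free.TryDequeue(out id);
    }

    // Put a released id back into the pool.
    public void Release(int id)
    {
        _free.Enqueue(id);
    }
}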
I do not understand what you want to do.
You could populate a hash table with the used numbers.
You can store it in an application-wide variable and use the Application.Lock and Application.UnLock methods while accessing that resource.
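In classic ASP.NET that idea would look roughly like this (the "UsedIds" key, the HashSet<int> type and the clientId variable are assumptions for illustration):
// Rough sketch of the Application.Lock approach inside a page or handler.
Application.Lock();
try
{
    var used = (HashSet<int>)Application["UsedIds"] ?? new HashSet<int>();
    used.Add(clientId); // clientId is the id being claimed (assumed in scope)
    Application["UsedIds"] = used;
}
finally
{
    Application.UnLock();
}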
Something like this?
/// <summary>
/// Thread safe queue of client ids
/// </summary>
internal class SlotQueue
{
private readonly AutoResetEvent _event = new AutoResetEvent(false);
private readonly Queue<int> _items = new Queue<int>();
private int _waitCount;
/// <summary>
/// Initializes a new instance of the <see cref="SlotQueue"/> class.
/// </summary>
/// <param name="itemCount">The item count.</param>
public SlotQueue(int itemCount)
{
// Create queue items
for (int i = 0; i < itemCount; ++i)
_items.Enqueue(i);
}
/// <summary>
/// Gets number of clients waiting in the queue.
/// </summary>
public int QueueSize
{
get { return _waitCount; }
}
/// <summary>
/// Dequeues a client id, waiting for one to be released if none is currently free.
/// </summary>
/// <param name="waitTime">Number of milliseconds to wait for an id</param>
/// <returns>A client id, or -1 if the wait timed out</returns>
public int Dequeue(int waitTime)
{
// Thread safe check if we got any free items
lock (_items)
{
if (_items.Count > 0)
return _items.Dequeue();
}
// Number of waiting clients.
Interlocked.Increment(ref _waitCount);
// wait for an item to get enqueued
// false = timeout
bool res = _event.WaitOne(waitTime);
if (!res)
{
Interlocked.Decrement(ref _waitCount);
return -1;
}
// try to fetch queued item
lock (_items)
{
if (_items.Count > 0)
{
Interlocked.Decrement(ref _waitCount);
return _items.Dequeue();
}
}
// another thread got the last item between waitOne and the lock.
Interlocked.Decrement(ref _waitCount);
return -1;
}
/// <summary>
/// Enqueue a client ID
/// </summary>
/// <param name="id">Id to enqueue</param>
public void Enqueue(int id)
{
lock (_items)
{
_items.Enqueue(id);
if (_waitCount > 0)
_event.Set();
}
}
}
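A minimal usage sketch of the class above (the variable names and the timeout value are illustrative):
// Illustrative usage: 500 reusable client ids.
var slots = new SlotQueue(500);

// On client connect: wait up to one second for a free id.
int clientId = slots.Dequeue(1000);
if (clientId == -1)
{
    // No slot became free within the timeout; reject the connection.
}

// On client disconnect: return the id to the pool.
slots.Enqueue(clientId);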
I want to validate whether my understanding is correct. Having read the documentation about ICacheEntryProcessor, it says that if we want to update a field in a cache entry, we implement this interface and use Invoke on the cache to update the field in the cache record atomically.
When I implement the above approach, the record does not seem to be updated. There are no exceptions thrown in the Process method.
Here's my implementation:
public class UserConnectionUpdateProcessor : ICacheEntryProcessor<string, User, UserConnection, bool>
{
/// <summary>
/// Processes the update
/// </summary>
/// <param name="entry"></param>
/// <param name="arg"></param>
/// <returns></returns>
public bool Process(IMutableCacheEntry<string, User> entry, UserConnection arg)
{
var connection = (from conn in entry.Value.Connections
where conn.ConnectionId == arg.ConnectionId
select conn).FirstOrDefault();
if(connection == null)
{
//this is a new connection
entry.Value.Connections.Add(arg);
return true;
}
if(arg.Disconnected)
{
entry.Value.Connections.Remove(connection);
}
else
{
connection.LastActivity = DateTime.Now;
}
return true;
}
}
I enabled Ignite Trace logs and this got printed
2020-06-21 21:09:54.1732|DEBUG|org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache|<usersCache> Entry did not pass the filter or conflict resolution (will skip write) [entry=GridDhtCacheEntry [rdrs=ReaderId[] [], part=358, super=GridDistributedCacheEntry [super=GridCacheMapEntry [key=KeyCacheObjectImpl [part=358,
I was also going through the Ignite source to understand what operations are performed, but no luck yet:
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/atomic/GridDhtAtomicCache.java
Your code is fine, but since you only change the data inside the entry.Value object, Ignite does not detect those changes and does not update the entry.
Add entry.Value = entry.Value right before return true.
Ignite uses the Value property setter to mark the entry as updated.
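Condensed, the fix slots into the processor above like this (everything except the extra assignment stays as it was):
public bool Process(IMutableCacheEntry<string, User> entry, UserConnection arg)
{
    // ... add/remove/update the connection on entry.Value as before ...

    // Reassigning Value invokes the setter, which tells Ignite the entry changed.
    entry.Value = entry.Value;
    return true;
}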
I'm using swagger-codegen to generate a C# client, but I'm noticing that the sortParamsByRequiredFlag is not applied to model generation.
For example, here is a sample config file:
{
"packageVersion" : "1.0.0",
"sortParamsByRequiredFlag": true,
"optionalProjectFile" : false
}
Here is the truncated generated code for the model's constructor:
/// <summary>
/// Initializes a new instance of the <see cref="V2alpha1CronJobSpec" /> class.
/// </summary>
/// <param name="ConcurrencyPolicy">Specifies how to treat concurrent executions of a Job. Defaults to Allow..</param>
/// <param name="FailedJobsHistoryLimit">The number of failed finished jobs to retain. This is a pointer to distinguish between explicit zero and not specified..</param>
/// <param name="JobTemplate">Specifies the job that will be created when executing a CronJob. (required).</param>
/// <param name="Schedule">The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron. (required).</param>
/// <param name="StartingDeadlineSeconds">Optional deadline in seconds for starting the job if it misses scheduled time for any reason. Missed jobs executions will be counted as failed ones..</param>
/// <param name="SuccessfulJobsHistoryLimit">The number of successful finished jobs to retain. This is a pointer to distinguish between explicit zero and not specified..</param>
/// <param name="Suspend">This flag tells the controller to suspend subsequent executions, it does not apply to already started executions. Defaults to false..</param>
public V2alpha1CronJobSpec(string ConcurrencyPolicy = default(string), int? FailedJobsHistoryLimit = default(int?), V2alpha1JobTemplateSpec JobTemplate = default(V2alpha1JobTemplateSpec), string Schedule = default(string), long? StartingDeadlineSeconds = default(long?), int? SuccessfulJobsHistoryLimit = default(int?), bool? Suspend = default(bool?))
{
// to ensure "JobTemplate" is required (not null)
if (JobTemplate == null)
{
throw new InvalidDataException("JobTemplate is a required property for V2alpha1CronJobSpec and cannot be null");
}
else
{
this.JobTemplate = JobTemplate;
}
// to ensure "Schedule" is required (not null)
if (Schedule == null)
{
throw new InvalidDataException("Schedule is a required property for V2alpha1CronJobSpec and cannot be null");
}
else
{
this.Schedule = Schedule;
}
this.ConcurrencyPolicy = ConcurrencyPolicy;
this.FailedJobsHistoryLimit = FailedJobsHistoryLimit;
this.StartingDeadlineSeconds = StartingDeadlineSeconds;
this.SuccessfulJobsHistoryLimit = SuccessfulJobsHistoryLimit;
this.Suspend = Suspend;
}
As you can see from the swagger spec, JobTemplate and Schedule are required parameters. However, the parameters in the constructor are sorted alphabetically.
I've been sorting through the swagger-codegen code base, and I think sortParamsByRequiredFlag only applies to generated API methods.
Is this by design? I'm not sure if I'm missing some config that I should be setting.
Here is the GitHub issue I opened, but I haven't heard anything on it.
This is not an answer to why Swagger is not generating the correct code, but an answer to your problem on GitHub.
There is absolutely no need for you to use this:
var jobSpec = new V2alpha1CronJobSpec(null, null, new V2alpha1JobTemplateSpec(), "stringValue", null, ...);
You can just use this (called named parameters):
var jobSpec = new V2alpha1CronJobSpec(JobTemplate: new V2alpha1JobTemplateSpec(), Schedule: "stringValue");
For the C# client, this flag is ignored in the main codebase.
You can create your own extension and override the fromModel method to sort the properties.
Have a look at the code below to see how to do it.
@Override
public CodegenModel fromModel(String name, Model model, Map<String, Model> allDefinitions) {
CodegenModel codegenModel = super.fromModel(name, model, allDefinitions);
if (sortParamsByRequiredFlag)
{
Collections.sort(codegenModel.readWriteVars, new Comparator<CodegenProperty>() {
@Override
public int compare(CodegenProperty one, CodegenProperty another) {
if (one.required == another.required) return 0;
else if (one.required) return -1;
else return 1;
}
});
System.out.println("***Sorting based on required params in model....***");
}
return codegenModel;
}
I'm writing a Portable Class Library that is going to be used by WPF, Windows Phone and possibly WinRT apps and I'm doing some work on background threads that occasionally need to call back to the UI. I instantiate the classes doing this in the UI thread, so I can easily save the SynchronizationContext and use it to call back to the UI.
However, in a PCL, SynchronizationContext.Send() is obsolete because it's not supported by WinRT, and SynchronizationContext.Post() (which runs asynchronously) is not always appropriate.
I figured I'd just wait until the delegate passed to Post() has run, but all my attempts ended with a deadlock if Post() was invoked from the same thread the saved SynchronizationContext referred to.
I've managed to fix this by checking whether it's the same thread and simply calling my delegate directly if it is, but the check is incredibly ugly: it involves reflecting out the values of private fields of the API. I'm hoping someone can help me find a more proper way.
Here is my current code if you'd like to see some gore:
/// <summary>
/// Invokes the passed callback on this SynchronizationContext and waits for its execution. Can be used even if
/// SynchronizationContext.Send is not available. Throws the exceptions thrown in the delegate.
/// </summary>
/// <param name="context">the context to run the method</param>
/// <param name="d">the method to run</param>
/// <param name="state">the parameter of the method to run</param>
public static void InvokeSynchronized( this SynchronizationContext context, SendOrPostCallback d, object state )
{
if ( !context.Match( SynchronizationContext.Current ) )
{
ManualResetEvent waitHandle = new ManualResetEvent( false );
Exception error = null;
// replicate SynchronizationContext.Send with .Post as Send is obsolete in the Portable Class Library
context.Post( ( o ) =>
{
try
{
d( o );
}
catch ( Exception exc )
{
error = exc;
}
finally
{
waitHandle.Set();
}
},
state );
waitHandle.WaitOne();
if ( error != null )
{
throw error;
}
}
else
{
d( state );
}
}
/// <summary>
/// Checks if the two SynchronizationContexts refer to the same thread
/// </summary>
/// <param name="sc1"></param>
/// <param name="sc2"></param>
/// <returns></returns>
public static bool Match(this SynchronizationContext sc1, SynchronizationContext sc2)
{
if ( sc2 == null )
{
return false;
}
else if ( sc1 == sc2 || sc1.Equals(sc2) )
{
return true;
}
// check if the two contexts run on the same thread
// proper equality comparison is generally not supported, so some hacking is required
return sc1.FindManagedThreadId() == sc2.FindManagedThreadId();
}
/// <summary>
/// Finds the ManagedThreadId of the thread associated with the passed SynchronizationContext
/// </summary>
/// <param name="sc"></param>
/// <returns></returns>
public static int FindManagedThreadId(this SynchronizationContext sc)
{
// here be dragons
try
{
BindingFlags bindFlags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static;
switch ( sc.GetType().FullName )
{
case "System.Windows.Threading.DispatcherSynchronizationContext":
// sc._dispatcher.Thread.ManagedThreadId
var _dispatcher_field = sc.GetType().GetField( "_dispatcher", bindFlags );
var _dispatcher_value = _dispatcher_field.GetValue( sc );
var Thread_property = _dispatcher_value.GetType().GetProperty( "Thread", bindFlags );
var Thread_value = Thread_property.GetValue( _dispatcher_value, null ) as Thread;
return Thread_value.ManagedThreadId;
}
}
catch ( Exception e )
{
throw new InvalidOperationException( "ManagedThreadId could not be obtained for SynchronizationContext of type " + sc.GetType().FullName, e );
}
throw new InvalidOperationException( "ManagedThreadId not found for SynchronizationContext of type " + sc.GetType().FullName );
}
Thanks!
I think that SynchronizationContext.Send is being deprecated by Microsoft for a good reason. They really want the new Windows Store and WP8 apps to fully embrace the asynchronous programming model and make blocking code a thing of the past.
So, something that was:
void SendDataToUI()
{
_synchronizationContext.Send(_callback, data);
}
Should now become:
async Task SendDataToUIAsync()
{
var tcs = new TaskCompletionSource<object>();
_synchronizationContext.Post(a =>
{
try
{
_callback(a);
tcs.SetResult(Type.Missing);
}
catch (Exception ex)
{
tcs.SetException(ex);
}
}, data);
await tcs.Task;
}
That said, I suppose you have your own good reasons to use SynchronizationContext.Send in your PCL library.
The first part of your logic looks good, and you could cut out the reflection part by simply memorizing the Thread.CurrentThread.ManagedThreadId of the UI thread at the same place where you memorize the UI thread's SynchronizationContext.Current. Then, in your implementation of InvokeSynchronized, you just compare it to the Thread.CurrentThread.ManagedThreadId of the current thread and use waitHandle.WaitOne() if you are on a non-UI thread.
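A sketch of that simplification (the class and member names are assumptions, not part of the original code): capture the UI thread's id next to its SynchronizationContext, then compare plain integers instead of reflecting into private fields.
using System;
using System.Threading;

// Sketch only: remembers the UI SynchronizationContext together with the UI
// thread id so InvokeSynchronized can detect "already on the UI thread" cheaply.
public class UiInvoker
{
    private readonly SynchronizationContext _uiContext;
    private readonly int _uiThreadId;

    // Must be constructed on the UI thread.
    public UiInvoker()
    {
        _uiContext = SynchronizationContext.Current;
        _uiThreadId = Thread.CurrentThread.ManagedThreadId;
    }

    public void InvokeSynchronized(SendOrPostCallback d, object state)
    {
        if (Thread.CurrentThread.ManagedThreadId == _uiThreadId)
        {
            d(state); // already on the UI thread, run inline to avoid deadlocking
            return;
        }

        Exception error = null;
        using (var waitHandle = new ManualResetEvent(false))
        {
            _uiContext.Post(o =>
            {
                try { d(o); }
                catch (Exception exc) { error = exc; }
                finally { waitHandle.Set(); }
            }, state);

            waitHandle.WaitOne();
        }

        if (error != null)
            throw error;
    }
}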
Background
Hi, everyone. I have an abstract class called BaseRecordFetcher<TEntity>. It has one method that takes read/sort/translate/move-next methods from the child class and yield returns the results as model entities.
When I read multiple data rows, it correctly does a yield return for each entity, then reaches the Trace message after the do...while without any problems.
Problem
I have noticed, however, that when I use IEnumerable.FirstOrDefault() on the collection, the system assumes no more yield returns are required and does not finish executing this method!
Other than the trace output on how many records were returned, I do not have too much of a problem with this behaviour, but it got me thinking: "What if I do need to have some... let's call it finally code?"
Question
Is there a way to always ensure the system runs some post-processing code after yield return?
Code
/// <summary>
/// Retrieve translated entities from the database. The methods used to do
/// this are specified from the child class as parameters (i.e. Action or
/// Func delegates).
/// </summary>
/// <param name="loadSubsetFunc">
/// Specify how to load a set of database records. Return boolean
/// confirmation that records were found.
/// </param>
/// <param name="preIterationAction">
/// Specify what should happen to sort the results.
/// </param>
/// <param name="translateRowFunc">
/// Specify how a database record should translate to a model entity.
/// Return the new entity.
/// </param>
/// <param name="moveNextFunc">
/// Specify how the database row pointer should move on. Return FALSE on a
/// call to the final row.
/// </param>
/// <returns>
/// A set of translated entities from the database.
/// </returns>
/// <example><code>
///
/// return base.FetchRecords(
/// _dOOdad.LoadFacilitySites,
/// () => _dOOdad.Sort = _dOOdad.GetAutoKeyColumn(),
/// () =>
/// {
/// var entity = new FacilitySite();
/// return entity.PopulateLookupEntity(
/// _dOOdad.CurrentRow.ItemArray);
/// },
/// _dOOdad.MoveNext);
///
/// </code></example>
protected virtual IEnumerable<TEntity> FetchRecords(
Func<bool> loadSubsetFunc, Action preIterationAction,
Func<TEntity> translateRowFunc, Func<bool> moveNextFunc)
{
// If records are found, sort them and return set of entities
if (loadSubsetFunc())
{
Trace.WriteLine(string.Format(
"# FOUND one or more records: Returning {0}(s) as a set.",
typeof(TEntity).Name));
int recordCount = 0;
preIterationAction();
do
{
recordCount++;
var entity = translateRowFunc();
yield return entity;
}
while (moveNextFunc());
// This code never gets reached if FirstOrDefault() is used on the set,
// because the system will assume no more entities need to be returned
Trace.WriteLine(string.Format(
"# FINISHED returning records: {0} {1}(s) returned as a set.",
recordCount, typeof(TEntity).Name));
}
else
{
Trace.WriteLine(string.Format(
"# ZERO records found: Returning an empty set of {0}.",
typeof(TEntity).Name));
}
}
EDIT (added solution; thank you @Servy and @BenRobinson):
try
{
do
{
recordCount++;
var entity = translateRowFunc();
yield return entity;
}
while (moveNextFunc());
}
finally
{
// This code always executes, even when you use FirstOrDefault() on the set.
Trace.WriteLine(string.Format(
"# FINISHED returning records: {0} {1}(s) returned as a set.",
recordCount, typeof(TEntity).Name));
}
Yes, you can provide finally blocks within an iterator block, and yes, they are designed to handle this exact case.
IEnumerator<T> implements IDisposable. When the compiler transforms the iterator block into an implementation, the Dispose method of the enumerator will execute any finally blocks that should run, based on where the enumerator was when it was disposed.
This means that as long as whoever is iterating the IEnumerator makes sure to always dispose of their enumerators, you can be sure that your finally blocks will run.
Also note that using blocks will be transformed into a try/finally block, and so will work essentially as one would hope within iterator blocks. They will clean up the given resource, if needed, when the iterator is disposed.
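A small self-contained illustration of that behaviour (a sketch, not the original fetcher):
using System;
using System.Collections.Generic;
using System.Linq;

class IteratorFinallyDemo
{
    // The finally block runs when the enumerator is disposed, which LINQ's
    // FirstOrDefault() does as soon as it has taken the first element.
    static IEnumerable<int> Numbers()
    {
        try
        {
            yield return 1;
            yield return 2;
        }
        finally
        {
            Console.WriteLine("finally ran");
        }
    }

    static void Main()
    {
        // FirstOrDefault() disposes the enumerator, so "finally ran" is printed
        // before the value below.
        int first = Numbers().FirstOrDefault();
        Console.WriteLine(first); // 1
    }
}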
I am using the NuGet package System.Threading.Tasks for Silverlight 4 (a port from Mono). I keep getting an InvalidOperationException ("The underlying Task is already in one of the three final states: RanToCompletion, Faulted, or Canceled.") from the following:
var tasks = new Task<DeploymentCatalog>[2];
//Catalog the XAP downloaded by the Application Loader, the BiggestBox Index:
var uri0 = new Uri(Application.Current.
Host.InitParams["LoaderInfoXapPath"], UriKind.Relative);
tasks[0] = CompositionUtility.DownloadCatalogAsync(uri0);
//Catalog the XAP with the BiggestBox Index Part:
var uri1 = new Uri(Application.Current
.Host.InitParams["BiggestBoxIndexXapPath"], UriKind.Relative);
tasks[1] = CompositionUtility.DownloadCatalogAsync(uri1);
//tasks did not run by default...
//tasks[0].Start();
//tasks[1].Start();
Task.WaitAll(tasks);
this.AddToAggregateCatalog(tasks[0].Result);
this.AddToAggregateCatalog(tasks[1].Result);
base.Compose();
where DownloadCatalogAsync is:
/// <summary>
/// Downloads the catalog
/// in a <see cref="System.Threading.Task"/>.
/// </summary>
/// <param name="location">The location.</param>
[SuppressMessage("Microsoft.Reliability", "CA2000:DisposeObjectsBeforeLosingScope",
Justification = "Reliable disposal depends on callers.")]
public static Task<DeploymentCatalog> DownloadCatalogAsync(Uri location)
{
return DownloadCatalogAsTask(location, null);
}
/// <summary>
/// Downloads the catalog
/// in a <see cref="System.Threading.Task"/>.
/// </summary>
/// <param name="location">The location.</param>
/// <param name="downloadCompleteAction">The download complete action.</param>
/// <remarks>
/// For details, see the “Converting an Event-Based Pattern” section in
/// “Simplify Asynchronous Programming with Tasks”
/// by Igor Ostrovsky
/// [http://msdn.microsoft.com/en-us/magazine/ff959203.aspx]
/// </remarks>
[SuppressMessage("Microsoft.Reliability", "CA2000:DisposeObjectsBeforeLosingScope",
Justification = "Reliable disposal depends on callers.")]
public static Task<DeploymentCatalog> DownloadCatalogAsync(Uri location,
Action<object, AsyncCompletedEventArgs> downloadCompleteAction)
{
var completionSource = new TaskCompletionSource<DeploymentCatalog>();
var catalog = new DeploymentCatalog(location);
catalog.DownloadCompleted += (s, args) =>
{
if(args.Error != null) completionSource.SetException(args.Error);
else if(args.Cancelled) completionSource.SetCanceled();
else
{
completionSource.SetResult(s as DeploymentCatalog); //exception thrown here
if(downloadCompleteAction != null)
downloadCompleteAction.Invoke(s, args);
}
};
catalog.DownloadAsync();
return completionSource.Task;
}
I use this same pattern with WebClient and it works fine (though I don't have to Start() the tasks explicitly). However, I did not test WebClient with the Mono-ported version of the Task Parallel Library for Silverlight. I guess I should do that...
You're calling Start on a Task that came from a TaskCompletionSource, but a TCS Task has already been started and, in the case where you get this exception, it has already completed too. Generally you want to design methods like DownloadCatalogAsTask to return already-started Task instances, so callers can expect them to be started and avoid calling Start themselves.
A couple of other suggestions:
I would consider renaming the method to DownloadCatalogAsync to align with the .NET 4.5 naming guidelines.
I would not pass the Action<> into the async method. The whole point of returning a Task is that you can chain on continuations using familiar patterns like an explicit ContinueWith or the await keyword in C# 5.0. Here you're inventing your own approach.
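For example, the calling code from the question could chain the completion on instead of handing the callback to the download method (a sketch; pass a UI TaskScheduler to ContinueWith if AddToAggregateCatalog must run on the UI thread):
// Sketch: chain the continuation instead of passing an Action<> into the method.
CompositionUtility.DownloadCatalogAsync(uri1)
    .ContinueWith(t => this.AddToAggregateCatalog(t.Result));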
I had the same problem, i.e. Mono TPL on Silverlight 4, with the exception thrown at completionSource.SetResult().
It was resolved when I used completionSource.TrySetResult() instead.
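In other words, the Try* methods return false instead of throwing when the task has already reached a final state, so the completion handler becomes tolerant to the event firing more than once (a sketch of the relevant part):
catalog.DownloadCompleted += (s, args) =>
{
    // Try* variants return false instead of throwing if the task has already
    // completed, been cancelled, or faulted.
    if (args.Error != null) completionSource.TrySetException(args.Error);
    else if (args.Cancelled) completionSource.TrySetCanceled();
    else completionSource.TrySetResult(s as DeploymentCatalog);
};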