C# properties: why check for equality before assignment?

Why do I see people implement properties like this?
What is the point of checking if the value is equal to the current value?
public double? Price
{
    get
    {
        return _price;
    }
    set
    {
        if (_price == value)
            return;
        _price = value;
    }
}

In this case it would be moot; however, in the case where there is an associated side-effect (typically an event), it avoids trivial events. For example:
set
{
    if (_price == value)
        return;
    _price = value;
    OnPriceChanged(); // invokes the Price event
}
Now, if we do:
foo.Price = 16;
foo.Price = 16;
foo.Price = 16;
foo.Price = 16;
we don't get 4 events; we get at most 1 (maybe 0 if it is already 16).
In more complex examples there could be validation, pre-change actions and post-change actions. All of these can be avoided if you know that it isn't actually a change.
set
{
    if (_price == value)
        return;
    if (value < 0 || value > MaxPrice)
        throw new ArgumentOutOfRangeException();
    OnPriceChanging();
    _price = value;
    OnPriceChanged();
}
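For completeness, here is a minimal sketch of the whole pattern in one place; the Foo class, the MaxPrice constant and the PriceChanged event are assumptions for illustration, not from the question:

using System;

public class Foo
{
    private double? _price;
    public const double MaxPrice = 1000;

    // Raised only when Price actually changes value.
    public event EventHandler PriceChanged;

    public double? Price
    {
        get { return _price; }
        set
        {
            if (_price == value)
                return; // no change: skip validation and the event entirely
            if (value < 0 || value > MaxPrice)
                throw new ArgumentOutOfRangeException("value");
            _price = value;
            var handler = PriceChanged;
            if (handler != null)
                handler(this, EventArgs.Empty); // at most once per actual change
        }
    }
}

So subscribing with foo.PriceChanged += (s, e) => Console.WriteLine("changed"); and then setting foo.Price = 16; four times prints "changed" once.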

This is not an answer; rather, it is an evidence-based response to the claim (in another answer) that it is quicker to check than to assign. In short: no, it isn't. There is no difference whatsoever. I get (for non-nullable int):
AutoProp: 356ms
Field: 356ms
BasicProp: 357ms
CheckedProp: 356ms
(with some small variations on successive runs, but essentially they all take exactly the same time within any sensible rounding; when doing something 500 MILLION times, we can ignore a 1ms difference)
In fact, if we change to int? I get:
AutoProp: 714ms
Field: 536ms
BasicProp: 714ms
CheckedProp: 2323ms
or double? (like in the question):
AutoProp: 535ms
Field: 535ms
BasicProp: 539ms
CheckedProp: 3035ms
so this is not a performance helper! (The nullable versions are presumably slower for CheckedProp because the lifted == on int?/double? has to check HasValue and compare the underlying values, rather than doing a single compare-and-branch.)
with tests
using System;
using System.Diagnostics;

class Test
{
    static void Main()
    {
        var obj = new Test();
        Stopwatch watch;
        const int LOOP = 500000000;

        watch = Stopwatch.StartNew();
        for (int i = 0; i < LOOP; i++)
        {
            obj.AutoProp = 17;
        }
        watch.Stop();
        Console.WriteLine("AutoProp: {0}ms", watch.ElapsedMilliseconds);

        watch = Stopwatch.StartNew();
        for (int i = 0; i < LOOP; i++)
        {
            obj.Field = 17;
        }
        watch.Stop();
        Console.WriteLine("Field: {0}ms", watch.ElapsedMilliseconds);

        watch = Stopwatch.StartNew();
        for (int i = 0; i < LOOP; i++)
        {
            obj.BasicProp = 17;
        }
        watch.Stop();
        Console.WriteLine("BasicProp: {0}ms", watch.ElapsedMilliseconds);

        watch = Stopwatch.StartNew();
        for (int i = 0; i < LOOP; i++)
        {
            obj.CheckedProp = 17;
        }
        watch.Stop();
        Console.WriteLine("CheckedProp: {0}ms", watch.ElapsedMilliseconds);

        Console.ReadLine();
    }

    public int AutoProp { get; set; }
    public int Field;

    private int basicProp;
    public int BasicProp
    {
        get { return basicProp; }
        set { basicProp = value; }
    }

    private int checkedProp;
    public int CheckedProp
    {
        get { return checkedProp; }
        set { if (value != checkedProp) checkedProp = value; }
    }
}

Let's suppose we don't handle any change-related events.
I don't think comparing is faster than assignment. It depends on the data type. Say you have a string: in the worst case, a comparison is much more expensive than a simple assignment, where the member just takes the reference of the new string.
So my guess is that in that case it's better to assign right away.
For simple data types it doesn't have a real impact.
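To illustrate, here is a minimal benchmark sketch of my own (not from the answer above): string equality has to compare characters when two distinct instances have equal content, while assignment only copies a reference, whatever the length:

using System;
using System.Diagnostics;

class StringCompareVsAssign
{
    static void Main()
    {
        // Two distinct but equal strings, so == must compare all characters.
        string a = new string('x', 10000);
        string b = new string('x', 10000);
        string target = null;
        const int LOOP = 1000000;

        var watch = Stopwatch.StartNew();
        bool eq = false;
        for (int i = 0; i < LOOP; i++)
        {
            eq = a == b; // worst case: O(length) character comparison
        }
        watch.Stop();
        Console.WriteLine("Compare: {0}ms (eq={1})", watch.ElapsedMilliseconds, eq);

        watch = Stopwatch.StartNew();
        for (int i = 0; i < LOOP; i++)
        {
            target = b; // reference copy, independent of string length
        }
        watch.Stop();
        Console.WriteLine("Assign: {0}ms", watch.ElapsedMilliseconds);
        GC.KeepAlive(target);
    }
}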

This way you don't have to re-assign the same value. Comparing values is just faster execution, AFAIK.

Related

C# 1D-byte array to 2D-double array

I'm dealing with a C# concurrent queue and multi-threading in TCP/IP socket programming.
First, I'm already done with the socket programming itself; that is, I've finished the code for the client, the server and the communication between them.
The basic structure is pipelined (the producer-consumer problem) and now I'm working on the bit conversion.
Below is a brief summary of my code:
client-socket -> server-socket -> concurrent_queue_1 (of type byte[65536], Thread_1 processes this) -> concurrent_queue_2 (of type double[40,3500], Thread_2 processes this) -> display data or other work (it can be GPU work)
*(double[40,3500] can be changed to another size)
So far I've implemented putting data into queue1 (Thread1) and just dequeuing it all (Thread2), and its speed is about 700 Mbps.
The reason I use two concurrent queues is that I want the communication and the type-conversion work to be processed in the background, independently of the main procedure that handles control.
Here is the code for my own concurrent queue with blocking:
using System;
using System.Collections.Concurrent;
using System.Threading;

public class BlockingConcurrentQueue<T> : IDisposable
{
    private readonly ConcurrentQueue<T> _internalQueue;
    private AutoResetEvent _autoResetEvent;
    private long _consumed;
    private long _isAddingCompleted = 0;
    private long _produced;
    private long _sleeping;

    public BlockingConcurrentQueue()
    {
        _internalQueue = new ConcurrentQueue<T>();
        _produced = 0;
        _consumed = 0;
        _sleeping = 0;
        _autoResetEvent = new AutoResetEvent(false);
    }

    public bool IsAddingCompleted
    {
        get
        {
            return Interlocked.Read(ref _isAddingCompleted) == 1;
        }
    }

    public bool IsCompleted
    {
        get
        {
            if (Interlocked.Read(ref _isAddingCompleted) == 1 && _internalQueue.IsEmpty)
                return true;
            else
                return false;
        }
    }

    public void CompleteAdding()
    {
        Interlocked.Exchange(ref _isAddingCompleted, 1);
    }

    public void Dispose()
    {
        _autoResetEvent.Dispose();
    }

    public void Enqueue(T item)
    {
        _internalQueue.Enqueue(item);
        if (Interlocked.Read(ref _isAddingCompleted) == 1)
            throw new InvalidOperationException("Adding Completed.");
        Interlocked.Increment(ref _produced);
        if (Interlocked.Read(ref _sleeping) == 1)
        {
            Interlocked.Exchange(ref _sleeping, 0);
            _autoResetEvent.Set();
        }
    }

    public bool TryDequeue(out T result)
    {
        if (Interlocked.Read(ref _consumed) == Interlocked.Read(ref _produced))
        {
            Interlocked.Exchange(ref _sleeping, 1);
            _autoResetEvent.WaitOne();
        }
        if (_internalQueue.TryDequeue(out result))
        {
            Interlocked.Increment(ref _consumed);
            return true;
        }
        return false;
    }
}
My question is this:
As I mentioned above, concurrent_queue1's element type is byte[65536], and 65536 bytes = 8192 doubles
(40 * 3500 = 140000 = 8192 * 17.08984375).
I want to merge multiple batches of 8192 doubles into a double[40,3500] (the size can change) and enqueue it to concurrent_queue2 on Thread2.
It's easy to do with a naive approach (many nested for loops), but that is slow because it copies all the data and exposes it to the upper class or layer.
I'm looking for a method that automatically enqueues with the matching size, the way a foreach loop automatically iterates through a 2D array in row-major order, but I haven't found one yet.
Is there a fast way to merge a 1D byte array into a 2D double array and enqueue it?
Thanks for your help!
I tried to understand your conversion rule and wrote this conversion code. It uses Parallel to speed up the calculation.
int maxSize = 65536;
byte[] dim1Array = new byte[maxSize];
for (int i = 0; i < maxSize; ++i)
{
    dim1Array[i] = (byte)(i % 256);
}

int dim2Row = 40;
int dim2Column = 3500;
int byteToDoubleRatio = 8;
int toDoubleSize = maxSize / byteToDoubleRatio;
double[,] dim2Array = new double[dim2Row, dim2Column];

Parallel.For(0, toDoubleSize, i =>
{
    int row = i / dim2Column;
    int col = i % dim2Column;
    int originByteIndex = row * dim2Column * byteToDoubleRatio + col * byteToDoubleRatio;
    dim2Array[row, col] = BitConverter.ToDouble(dim1Array, originByteIndex);
});
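If the per-element BitConverter conversion is still too slow, a bulk copy may help: a C# 2D array is one contiguous row-major block, so Buffer.BlockCopy can blit the raw bytes of each dequeued chunk straight into the double[,] with no per-element work. This is a sketch under the question's sizes; MergeChunk is a hypothetical helper, and it assumes the bytes are already in the machine's native endianness:

static void MergeChunk(byte[] source, double[,] target, int doubleOffset)
{
    // target.Length is the total element count (40 * 3500 = 140000).
    // Clamp the copy so the last chunk fits, since 140000 is not a
    // multiple of the 8192 doubles per chunk.
    int byteCount = Math.Min(source.Length, (target.Length - doubleOffset) * sizeof(double));
    Buffer.BlockCopy(source, 0, target, doubleOffset * sizeof(double), byteCount);
}

A possible consumer loop on Thread2: dequeue a byte[65536] chunk, call MergeChunk(chunk, dim2Array, doublesWritten), advance doublesWritten by chunk.Length / sizeof(double), and enqueue the double[40,3500] to queue2 once it is full.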

How to recognize an event with OnSensorChanged by comparing two float values

I want to read data from my accelerometer sensor and compare values when the first digit of the float changes. I have trouble unregistering the listener (or pausing) in order to compare, and to see whether the average/Note has increased by a whole integer; for example, "Note: 5.677" increasing to "Note: 6.234" should be recognized as an event.
I have converted the values to ints so I can check whether they are equal or not.
I have already tried to delay with Thread.Sleep and pause, but it doesn't work.
Maybe because of lock (_syncLock)?
public void OnSensorChanged(SensorEvent e)
{
    lock (_syncLock)
    {
        avg = e.Values.Average();
        _sensorTextView.Text = string.Format("x={0:f}, y={1:f}, z={2:f}", e.Values[0], e.Values[1], e.Values[2]);
        _sensorTextView2.Text = string.Format("Note: {0}", avg);
        note1 = e.Values.Average();
        //Thread.Sleep(500);
        //base.OnPause();
        //_sensorManager.UnregisterListener(this);
        //base.OnResume();
        //_sensorManager.RegisterListener(this, _sensorManager.GetDefaultSensor(SensorType.Accelerometer), SensorDelay.Ui);
        //_sensorTextView.Text = string.Format("x={0:f}, y={1:f}, z={2:f}", e.Values[0], e.Values[1], e.Values[2]);
        //avg = e.Values.Average();
        //_sensorTextView2.Text = string.Format("Note: {0}", avg);
        //note2 = e.Values.Average();
    }

    int noteInt1 = Convert.ToInt32(note1);
    int noteInt2 = Convert.ToInt32(note2);
    //Thread.Sleep(2000); // show it every 2 seconds ("zeige alle 2 sekunden an")
    List<double> eventnumbers = new List<double> { };
    if (noteInt1 != noteInt2)
    {
        avg = e.Values.Average();
        //Console.WriteLine("bye");
        _sensorTextView2.Text = string.Format("Note: {0}", avg);
        //foreach(SensorEvent e){
        eventnumbers.Add(new double()); // value of avg is the value of the eventnumber element
        eventnumbers.ForEach(Console.WriteLine);
    }
}
UPDATE:
public void OnSensorChanged(SensorEvent e)
{
    lock (_syncLock)
    {
        _sensorTextView.Text = string.Format("x={0:f}, y={1:f}, z={2:f}", e.Values[0], e.Values[1], e.Values[2]);
        //_sensorTextView2.Text = string.Format("Note: {0}", e.Values.Average());
        avg = e.Values.Average();
    }
    eventCounter();
}

public void eventCounter()
{
    eventnumbers.Add(avg); // add number to our buffer
    Console.WriteLine(avg);
    eventnumbers.Add(avg + 1); // add a number different from the first one to see if the method works
    Console.WriteLine(avg + 1);
    while (eventnumbers.Count >= 1) // as long as two elements are included
    {
        for (int puffer = 0; puffer <= 10; puffer++) // buffer of 10 elements
        {
            for (int i = 0; i < eventnumbers.Count; i++) // compare first number of our buffer
            {
                note1 = eventnumbers[i]; // remember value of first element
                Console.WriteLine(note1);
                Console.WriteLine("i ist:" + i);
                for (int j = 1; j <= eventnumbers.Count; j++) // with the second number of our buffer
                {
                    Console.WriteLine("j ist:" + j);
                    note2 = eventnumbers[j]; // remember value of second element
                    Console.WriteLine(note2);
                }
            }
            while (eventnumbers.Count >= 1) // as long as two elements are included
            {
                try
                {
                    noteInt1 = Convert.ToInt32(note1); // double to int
                    noteInt2 = Convert.ToInt32(note2);
                    if (noteInt1 != noteInt2) // we convert to int in order to check whether the value has changed
                    {
                        _sensorTextView2.Text = string.Format("Note: {0}", avg); // if yes, update the displayed mark/number
                        eventcounters.Add(avg); // add element to elementcounter
                    }
                    break;
                }
                catch (System.ArgumentOutOfRangeException) // in case there are fewer than 2 elements to compare
                {
                    eventcounters.ForEach(Console.WriteLine); // print all marks (numbers) from our eventcounter
                }
                break;
            }
            break;
        }
        //throw new NotImplementedException();
    }
    eventnumbers.Clear();
}
My problem is that I do not get another (different) value when I call e.Values.Average(). I need to do it outside the method and call OnSensorChanged another time, but I don't know how to do that, since I would need to create another event, right? That's why I used e1 and e2, but I think I'm missing something with the initialization.
This is how I have tried creating two events so far:
public class MainActivity : Activity, ISensorEventListener
{
    static readonly object _syncLock = new Object();
    SensorManager _sensorManager;
    TextView _sensorTextView;
    TextView _sensorTextView2;
    double note1;
    double note2;
    double avg;
    SensorEvent e1;
    SensorEvent e2;
My OnCreate():
protected override void OnCreate(Bundle bundle)
{
    base.OnCreate(bundle);
    SetContentView(Resource.Layout.activity_main);
    _sensorManager = (SensorManager)GetSystemService(SensorService);
    _sensorTextView = FindViewById<TextView>(Resource.Id.accelerometer_text);
    _sensorTextView2 = FindViewById<TextView>(Resource.Id.accelerometer_note);
    e1 = (SensorEvent)GetSystemService(ISensorEventListener);
    e2 = (SensorEvent)GetSystemService(ISensorEventListener);
    displayMark();
(ISensorEventListener is marked as wrong there, by the way.)
public void displayMark()
{
    OnSensorChanged(e1);
    note1 = e1.Values.Average();
    OnSensorChanged(e2);
    note2 = e2.Values.Average();
    int noteInt1 = Convert.ToInt32(note1);
    int noteInt2 = Convert.ToInt32(note2);
    Console.WriteLine(noteInt1);
    Console.WriteLine(noteInt2);
    List<double> eventnumbers = new List<double> { };
    if (noteInt1 != noteInt2)
    {
        avg = e2.Values.Average();
        Console.WriteLine("bye");
        _sensorTextView2.Text = string.Format("Note: {0}", avg);
        //foreach(SensorEvent e){
        eventnumbers.Add(new double()); // value of avg is the value of the eventnumber element
    }
    eventnumbers.ForEach(Console.WriteLine);
I'm just going to deal with the general approach here. Rather than trying to wait in an event for some new data, I would use the events to record a value and then compare the new one to the last one. So outside the function there would be a variable, something like:
private float previousValue;
Then when the function is called it compares the 'new' value to this previous one:
public void OnSensorChanged(SensorEvent e)
{
    // Can't tell if this is required with the information available
    lock (_syncLock)
    {
        // Capture the 'new' data in a local variable
        var newValue = e.Values.Average();
        if (newValue != previousValue)
        {
            // Value has changed, so do something here
        }
        // Update the previous value so next time we use this as our reference
        previousValue = newValue;
    }
}
This isn't a full solution but should be a better starting point for developing one.
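Building on that approach, here is a sketch of the whole-integer check the question actually asks for (detect when the average moves into a new integer). It reuses the question's field names; the truncating cast is an assumption, use Math.Round if you prefer rounding:

private float previousValue;

public void OnSensorChanged(SensorEvent e)
{
    lock (_syncLock)
    {
        float newValue = e.Values.Average();
        // Compare only the integer parts: 5.677 -> 6.234 changes 5 to 6.
        if ((int)newValue != (int)previousValue)
        {
            _sensorTextView2.Text = string.Format("Note: {0}", newValue);
        }
        previousValue = newValue;
    }
}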

Thread safety Parallel.For c#

I'm French, so first of all sorry for my English.
I get an error in Visual Studio (index out of range). I have this problem only with Parallel.For, not with a classic for.
I think one thread wants to access my array[i] while another thread does too.
It's code that computes K-means clustering for building links between documents (with cosine similarity).
More information:
The IndexOutOfRange is on similarityMeasure[i] = .....
I have a computer with 2 processors (12 logical cores).
With the classic for, CPU usage is 9-14% and the time for 1 iteration is 9 min.
With Parallel.For, CPU usage is 70-90% =p and the time for 1 iteration is ~1 min 30 s.
Sometimes it works longer before generating the error.
My function is:
private static int FindClosestClusterCenter(List<Centroid> clustercenter, DocumentVector obj)
{
    float[] similarityMeasure = new float[clustercenter.Count()];
    float[] copy = similarityMeasure;
    object sync = new Object();
    Parallel.For(0, clustercenter.Count(), (i) => //for (int i = 0; i < clustercenter.Count(); i++)
    {
        similarityMeasure[i] = SimilarityMatrics.FindCosineSimilarity(clustercenter[i].GroupedDocument[0].VectorSpace, obj.VectorSpace);
    });
    int index = 0;
    float maxValue = similarityMeasure[0];
    for (int i = 0; i < similarityMeasure.Count(); i++)
    {
        if (similarityMeasure[i] > maxValue)
        {
            maxValue = similarityMeasure[i];
            index = i;
        }
    }
    return index;
}
My function is called here:
do
{
    prevClusterCenter = centroidCollection;
    DateTime starttime = DateTime.Now;
    foreach (DocumentVector obj in documentCollection) //Parallel.ForEach(documentCollection, parallelOptions, obj =>
    {
        int ind = FindClosestClusterCenter(centroidCollection, obj);
        resultSet[ind].GroupedDocument.Add(obj);
    }
    TimeSpan tempsecoule = DateTime.Now.Subtract(starttime);
    Console.WriteLine(tempsecoule);
    //Console.ReadKey();
    InitializeClusterCentroid(out centroidCollection, centroidCollection.Count());
    centroidCollection = CalculMeanPoints(resultSet);
    stoppingCriteria = CheckStoppingCriteria(prevClusterCenter, centroidCollection);
    if (!stoppingCriteria)
    {
        // initialize the result for the next iteration
        InitializeClusterCentroid(out resultSet, centroidCollection.Count);
    }
} while (stoppingCriteria == false);
_counter = counter;
return resultSet;
FindCosineSimilarity:
public static float FindCosineSimilarity(float[] vecA, float[] vecB)
{
    var dotProduct = DotProduct(vecA, vecB);
    var magnitudeOfA = Magnitude(vecA);
    var magnitudeOfB = Magnitude(vecB);
    float result = dotProduct / (float)Math.Pow((magnitudeOfA * magnitudeOfB), 2);
    // when 0 is divided by 0 the result is NaN, so return 0 in that case
    if (float.IsNaN(result))
        return 0;
    else
        return (float)result;
}
CalculMeanPoints:
private static List<Centroid> CalculMeanPoints(List<Centroid> _clust)
{
    for (int i = 0; i < _clust.Count(); i++)
    {
        if (_clust[i].GroupedDocument.Count() > 0)
        {
            for (int j = 0; j < _clust[i].GroupedDocument[0].VectorSpace.Count(); j++)
            {
                float total = 0;
                foreach (DocumentVector vspace in _clust[i].GroupedDocument)
                {
                    total += vspace.VectorSpace[j];
                }
                _clust[i].GroupedDocument[0].VectorSpace[j] = total / _clust[i].GroupedDocument.Count();
            }
        }
    }
    return _clust;
}
You may have some side effects in FindCosineSimilarity; make sure it does not modify any field or input parameter. Example: resultSet[ind].GroupedDocument.Add(obj);. If resultSet is not a reference to a locally instantiated array, then that is a side effect.
That may fix it. But FYI you could use AsParallel for this rather than Parallel.For:
similarityMeasure = clustercenter
    .AsParallel().AsOrdered()
    .Select(c => SimilarityMatrics.FindCosineSimilarity(c.GroupedDocument[0].VectorSpace, obj.VectorSpace))
    .ToArray();
You realize that if you synchronize the whole content of the Parallel.For, it's just the same as having a normal synchronous for loop, right? Meaning the code as it is doesn't do anything in parallel, so I don't think you'll have any problems with concurrency. My guess, from what I can tell, is that clustercenter[i].GroupedDocument is probably an empty array.
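If that guess is right, a quick way to confirm it is a guard inside the parallel body; this is a diagnostic sketch of my own, not code from either answer, and it assumes GroupedDocument behaves like a list:

Parallel.For(0, clustercenter.Count(), i =>
{
    var docs = clustercenter[i].GroupedDocument;
    if (docs == null || docs.Count() == 0)
        throw new InvalidOperationException("Centroid " + i + " has no grouped documents.");
    similarityMeasure[i] = SimilarityMatrics.FindCosineSimilarity(docs[0].VectorSpace, obj.VectorSpace);
});

This surfaces the first failing index with a clear message instead of a generic IndexOutOfRangeException somewhere inside the loop.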

Thread.MemoryBarrier and lock difference for a simple property

For the following scenario, is there any difference regarding thread safety, result and performance between using MemoryBarrier
private SomeType field;
public SomeType Property
{
    get
    {
        Thread.MemoryBarrier();
        SomeType result = field;
        Thread.MemoryBarrier();
        return result;
    }
    set
    {
        Thread.MemoryBarrier();
        field = value;
        Thread.MemoryBarrier();
    }
}
and lock statement (Monitor.Enter and Monitor.Exit)
private SomeType field;
private readonly object syncLock = new object();
public SomeType Property
{
    get
    {
        lock (syncLock)
        {
            return field;
        }
    }
    set
    {
        lock (syncLock)
        {
            field = value;
        }
    }
}
Because reference assignment is atomic, I think that in this scenario we don't need any locking mechanism.
Performance
The MemoryBarrier version is about 2x faster than the lock implementation in a Release build. Here are my test results:
Lock
Normally: 5397 ms
Passed as interface: 5431 ms

Double Barrier
Normally: 2786 ms
Passed as interface: 3754 ms

volatile
Normally: 250 ms
Passed as interface: 668 ms

Volatile Read/Write
Normally: 253 ms
Passed as interface: 697 ms

ReaderWriterLockSlim
Normally: 9272 ms
Passed as interface: 10040 ms

Single Barrier: freshness of Property
Normally: 1491 ms
Passed as interface: 2510 ms

Single Barrier: other not reordering
Normally: 1477 ms
Passed as interface: 2275 ms
Here is how I tested it in LINQPad (with optimization set in Preferences):
void Main()
{
    "Lock".Dump();
    string temp;
    var a = new A();
    var watch = Stopwatch.StartNew();
    for (int i = 0; i < 100000000; ++i)
    {
        temp = a.Property;
        a.Property = temp;
    }
    Console.WriteLine("Normally: " + watch.ElapsedMilliseconds + " ms");
    Test(a);

    "Double Barrier".Dump();
    var b = new B();
    watch.Restart();
    for (int i = 0; i < 100000000; ++i)
    {
        temp = b.Property;
        b.Property = temp;
    }
    Console.WriteLine("Normally: " + watch.ElapsedMilliseconds + " ms");
    Test(b);

    "volatile".Dump();
    var c = new C();
    watch.Restart();
    for (int i = 0; i < 100000000; ++i)
    {
        temp = c.Property;
        c.Property = temp;
    }
    Console.WriteLine("Normally: " + watch.ElapsedMilliseconds + " ms");
    Test(c);

    "Volatile Read/Write".Dump();
    var d = new D();
    watch.Restart();
    for (int i = 0; i < 100000000; ++i)
    {
        temp = d.Property;
        d.Property = temp;
    }
    Console.WriteLine("Normally: " + watch.ElapsedMilliseconds + " ms");
    Test(d);

    "ReaderWriterLockSlim".Dump();
    var e = new E();
    watch.Restart();
    for (int i = 0; i < 100000000; ++i)
    {
        temp = e.Property;
        e.Property = temp;
    }
    Console.WriteLine("Normally: " + watch.ElapsedMilliseconds + " ms");
    Test(e);

    "Single Barrier: freshness of Property".Dump();
    var f = new F();
    watch.Restart();
    for (int i = 0; i < 100000000; ++i)
    {
        temp = f.Property;
        f.Property = temp;
    }
    Console.WriteLine("Normally: " + watch.ElapsedMilliseconds + " ms");
    Test(f);

    "Single Barrier: other not reordering".Dump();
    var g = new G();
    watch.Restart();
    for (int i = 0; i < 100000000; ++i)
    {
        temp = g.Property;
        g.Property = temp;
    }
    Console.WriteLine("Normally: " + watch.ElapsedMilliseconds + " ms");
    Test(g);
}

void Test(I a)
{
    string temp;
    var watch = Stopwatch.StartNew();
    for (int i = 0; i < 100000000; ++i)
    {
        temp = a.Property;
        a.Property = temp;
    }
    Console.WriteLine("Passed as interface: " + watch.ElapsedMilliseconds + " ms\n");
}
interface I
{
    string Property { get; set; }
}

class A : I
{
    private string field;
    private readonly object syncLock = new object();
    public string Property
    {
        get
        {
            lock (syncLock)
            {
                return field;
            }
        }
        set
        {
            lock (syncLock)
            {
                field = value;
            }
        }
    }
}

class B : I
{
    private string field;
    public string Property
    {
        get
        {
            Thread.MemoryBarrier();
            string result = field;
            Thread.MemoryBarrier();
            return result;
        }
        set
        {
            Thread.MemoryBarrier();
            field = value;
            Thread.MemoryBarrier();
        }
    }
}

class C : I
{
    private volatile string field;
    public string Property
    {
        get
        {
            return field;
        }
        set
        {
            field = value;
        }
    }
}

class D : I
{
    private string field;
    public string Property
    {
        get
        {
            return Volatile.Read(ref field);
        }
        set
        {
            Volatile.Write(ref field, value);
        }
    }
}

class E : I
{
    private string field;
    private ReaderWriterLockSlim locker = new ReaderWriterLockSlim();
    public string Property
    {
        get
        {
            locker.EnterReadLock();
            string result = field;
            locker.ExitReadLock();
            return result;
        }
        set
        {
            locker.EnterWriteLock(); // take the write lock when mutating
            field = value;
            locker.ExitWriteLock();
        }
    }
}

class F : I
{
    private string field;
    public string Property
    {
        get
        {
            Thread.MemoryBarrier();
            return field;
        }
        set
        {
            field = value;
            Thread.MemoryBarrier();
        }
    }
}

class G : I
{
    private string field;
    public string Property
    {
        get
        {
            string result = field;
            Thread.MemoryBarrier();
            return result;
        }
        set
        {
            Thread.MemoryBarrier();
            field = value;
        }
    }
}
is there any difference regarding thread safety?
Both ensure that appropriate barriers are set up around the read and write.
result?
In both cases two threads can race to write a value. However, reads and writes cannot move forwards or backwards in time past either the lock or the full fences.
performance?
You've written the code both ways. Now run it. If you want to know which is faster, run it and find out! If you have two horses and you want to know which is faster, race them. Don't ask strangers on the Internet which horse they think is faster.
That said, a better technique is to set a performance goal, write the code to be clearly correct, and then test to see if you met your goal. If you did, don't waste your valuable time trying to optimize further code that is already fast enough; spend it optimizing something else that isn't fast enough.
A question you didn't ask:
What would you do?
I'd not write a multithreaded program, that's what I'd do. I'd use processes as my unit of concurrency if I had to.
If I had to write a multithreaded program then I would use the highest-level tool available. I'd use the Task Parallel Library, I'd use async-await, I'd use Lazy<T> and so on. I'd avoid shared memory; I'd treat threads as lightweight processes that returned a value asynchronously.
If I had to write a shared-memory multithreaded program then I would lock everything, all the time. We routinely write programs these days that fetch a billion bytes of video over a satellite link and send it to a phone. Twenty nanoseconds spent taking a lock isn't going to kill you.
I am not smart enough to try to write low-lock code, so I wouldn't do that at all. If I had to then I would use that low-lock code to build a higher-level abstraction and use that abstraction. Fortunately I don't have to because someone already has built the abstractions I need.
As long as the variable in question is one of the limited set of variables that can be fetched/set atomically (i.e. reference types), then yes, the two solutions are applying the same thread-related constraints.
That said, I would honestly expect the MemoryBarrier solution to perform worse than a lock. Accessing an uncontested lock block is very fast; it has been optimized specifically for that case. On the other hand, introducing a memory barrier, which affects not only the access to that one variable (as is the case for a lock) but all memory, could very easily have significant negative performance implications throughout other aspects of the application. You would of course need to do some testing to be sure (of your real application, because testing these two in isolation isn't going to reveal that the memory barrier forces all of the rest of the application's memory to be synchronized, not just this one variable).
There is no difference as far as thread safety goes. However, I would prefer:
private SomeType field;
public SomeType Property
{
    get
    {
        return Volatile.Read(ref field);
    }
    set
    {
        Volatile.Write(ref field, value);
    }
}
Or,
private volatile SomeType field;
public SomeType Property
{
    get
    {
        return field;
    }
    set
    {
        field = value;
    }
}

Find and FindIndex painfully slow for List<Object>, why?

I am working on a project that uses List intensively, and I try to find objects via their name (which is a member of the object).
My code worked without the built-in search, using a single for-next loop (function find1), but I found that the same can be done using the built-in Find, and the code works. However, it feels a bit slow. So I made a project to test the speed.
I have the following code:
public List<MyObject> varbig = new List<MyObject>();
public Dictionary<string, string> myDictionary = new Dictionary<string, string>();

public Form1() {
    InitializeComponent();
}

private void button1_Click(object sender, EventArgs e) {
    myDictionary.Clear();
    varbig.Clear();
    for (int i = 0; i < 5000; i++) {
        varbig.Add(new MyObject("name" + i.ToString(), "value" + i.ToString()));
        myDictionary.Add("name" + i.ToString(), i.ToString());
    }
    // first test
    var start1 = Environment.TickCount;
    for (int i = 0; i < 3000; i++) {
        var ss = find1("name499");
    }
    var end1 = Environment.TickCount;
    Console.WriteLine("time 1 :" + (end1 - start1));
    // second test
    var start2 = Environment.TickCount;
    for (int i = 0; i < 3000; i++) {
        var ss = find2("name499");
    }
    var end2 = Environment.TickCount;
    Console.WriteLine("time 2 :" + (end2 - start2));
    // third test
    var start3 = Environment.TickCount;
    for (int i = 0; i < 3000; i++) {
        var ss = find3("name499");
    }
    var end3 = Environment.TickCount;
    Console.WriteLine("time 3 :" + (end3 - start3));
    // first test b
    var start1b = Environment.TickCount;
    for (int i = 0; i < 3000; i++) {
        var ss = find1("name4999");
    }
    var end1b = Environment.TickCount;
    Console.WriteLine("timeb 1 :" + (end1b - start1b));
    // second test b
    var start2b = Environment.TickCount;
    for (int i = 0; i < 3000; i++) {
        var ss = find2("name4999");
    }
    var end2b = Environment.TickCount;
    Console.WriteLine("timeb 2 :" + (end2b - start2b));
    // third test b
    var start3b = Environment.TickCount;
    for (int i = 0; i < 3000; i++) {
        var ss = find3("name4999");
    }
    var end3b = Environment.TickCount;
    Console.WriteLine("timeb 3 :" + (end3b - start3b));
}

public int find1(string name) {
    for (int i = 0; i < varbig.Count; i++) {
        if (varbig[i].Name == name) {
            return i;
        }
    }
    return -1;
}

public int find2(string name) {
    int idx = varbig.FindIndex(tmpvar => Name == name);
    return idx;
}

public int find3(string name) {
    var ss = myDictionary[name];
    return int.Parse(ss);
}
And I use the following object:
public class MyObject {
    private string _name = "";
    private string _value = "";

    public MyObject() {}

    public MyObject(string name, string value) {
        _name = name;
        _value = value;
    }

    public string Name {
        get { return _name; }
        set { _name = value; }
    }

    public string Value {
        get { return _value; }
        set { _value = value; }
    }
}
Mostly it does the following:
I create an array with 5000 elements.
time 1 = search for the 499th object using a simple for-next loop.
time 2 = search for the 499th using the built-in FindIndex of List.
time 3 = search for the 499th element using the dictionary.
timeb 1, timeb 2 and timeb 3 do the same, but search for the 4999th element instead of the 499th.
I ran it a couple of times:
time 1 :141
time 2 :1248
time 3 :0
timeb 1 :811
timeb 2 :1170
timeb 3 :0
time 1 :109
time 2 :1170
time 3 :0
timeb 1 :796
timeb 2 :1170
timeb 3 :0
(the smaller, the faster)
And, to my surprise, the built-in function FindIndex is absurdly slow (in some cases close to 10x slower). Also, the dictionary approach is almost instant.
My question is: why? Is it because of the predicate?
The problem is in this line:
int idx = varbig.FindIndex(tmpvar => Name == name);
Name == name is wrong; you should write tmpvar.Name == name instead.
In your code you're comparing the name argument with the Name property of your form; they are obviously different, and so the method always examines the whole list instead of stopping when the searched value is found. In fact, as you can see from the numbers, the time spent by find2() is basically always the same.
About the dictionary: it's obviously faster than the other methods, because dictionaries are memory structures specifically built to provide fast keyed access.
In fact they get close to O(1) time complexity, while looping over a list has O(n) time complexity.
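For completeness, here is a sketch of replacing the linear search with a keyed index; the indexByName dictionary, buildIndex and find4 are illustrative names, not code from the question:

private Dictionary<string, int> indexByName = new Dictionary<string, int>();

// Call once after filling varbig (or update it whenever items are added).
public void buildIndex() {
    indexByName.Clear();
    for (int i = 0; i < varbig.Count; i++) {
        indexByName[varbig[i].Name] = i;
    }
}

// O(1) average lookup instead of an O(n) scan; returns -1 like find1/find2.
public int find4(string name) {
    int idx;
    return indexByName.TryGetValue(name, out idx) ? idx : -1;
}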
Find1 uses a simple for (i = 0 to count) loop.
Find2 uses the built-in FindIndex method (which does essentially what find1 does), except that you pass a predicate along with it, which I believe slows it down.
Find3, using a dictionary, I would assume is the fastest even without any timers, because a dictionary uses hash tables under the covers, which have O(1) (constant-time) lookup.
There is an error in your code: the find2 method uses the Form.Name for the comparison instead of your collection objects' names. It should look like this:
public int find2(string name) {
    return varbig.FindIndex((obj) => obj.Name == name);
}
The results without using the Form.Name are more consistent:
time 1 :54
time 2 :50
time 3 :0
timeb 1 :438
timeb 2 :506
timeb 3 :0
You don't need the for loop to search in find2...
Just call find2 directly, and the result will be 0.
