Pointers and bit operators in GPU kernels - c#

I want to perform a double threshold on a volume, using a GPU kernel. I send my volume, per slice, as read_only image2d_t. My output volume is a binary volume, where each bit specifies if its related voxel is enabled or disabled. My kernel checks if the current pixel value is within the lower/upper threshold range, and enables its corresponding bit in the binary volume.
For debugging purposes, I have left the actual threshold check commented out for now and simply use the passed slice number to decide whether the bit should be on or off: the first 14 slices are set to "on", the rest to "off". I have also verified this logic on the CPU side; that code is pasted at the bottom of this post and shows both paths, with the CPU variant currently commented out.
The CPU code works as intended; the following image is returned after rendering the volume with the binary mask applied:
Running the exact same logic in my GPU kernel returns incorrect results (first image: 3D view, second: slice view):
What goes wrong here? I read that OpenCL does not support bit fields, but as far as I can tell from the OpenCL specs it does support bitwise operators. My bit logic, which selects the right bit from the 32-bit word and flips it, is supported, right? Or is my simple flag considered a bit field? What it does is select bit voxel % 32 counted from the left (not the right, hence the subtraction).
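To illustrate, the mask computation in isolation (the same expression used in both listings below, just pulled out into a tiny helper):
// x & 0x1f picks the bit index within the 32-bit word; (31 - index) makes
// bit 0 the leftmost (most significant) bit of the word.
uint MaskFor(int x) => (uint)(0x1 << (31 - (x & 0x1f)));
// MaskFor(0)  == 0x80000000 (leftmost bit)
// MaskFor(31) == 0x00000001 (rightmost bit)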
Another possibility is that the uint pointer passed to my kernel is different from what I expect. I assumed this would be a valid way of using pointers and passing data to my kernel. The arithmetic applied to the "uint* word" part in the kernel accounts for padding words per row and padding rows per slice. The CPU variant confirmed that the pointer calculation is valid, though.
Below, the code:
uint wordsPerRow = (uint)BinaryVolumeWordsPerRow(volume.Geometry.NumberOfVoxels);
uint wordsPerPlane = (uint)BinaryVolumeWordsPerPlane(volume.Geometry.NumberOfVoxels);
int[] dims = new int[3];
dims[0] = volume.Geometry.NumberOfVoxels.X;
dims[1] = volume.Geometry.NumberOfVoxels.Y;
dims[2] = volume.Geometry.NumberOfVoxels.Z;
uint[] arrC = dstVolume.BinaryData.ObtainArray() as uint[];
unsafe {
fixed(int* dimPtr = dims) {
fixed(uint *arrcPtr = arrC) {
// pick Cloo Platform
ComputePlatform platform = ComputePlatform.Platforms[0];
// create context with all gpu devices
ComputeContext context = new ComputeContext(ComputeDeviceTypes.Gpu,
new ComputeContextPropertyList(platform), null, IntPtr.Zero);
// load opencl source
StreamReader streamReader = new StreamReader(@"C:\views\pii-sw113v1\PMX\ADE\Philips\PmsMip\Private\Viewing\Base\BinaryVolumes\kernels\kernel.cl");
string clSource = streamReader.ReadToEnd();
streamReader.Close();
// create program with opencl source
ComputeProgram program = new ComputeProgram(context, clSource);
// compile opencl source
program.Build(null, null, null, IntPtr.Zero);
// Create the event wait list. An event list is not really needed for this example but it is important to see how it works.
// Note that events (like everything else) consume OpenCL resources and creating a lot of them may slow down execution.
// For this reason their use should be avoided if possible.
ComputeEventList eventList = new ComputeEventList();
// Create the command queue. This is used to control kernel execution and manage read/write/copy operations.
ComputeCommandQueue commands = new ComputeCommandQueue(context, context.Devices[0], ComputeCommandQueueFlags.None);
// Create the kernel function and set its arguments.
ComputeKernel kernel = program.CreateKernel("LowerThreshold");
int slicenr = 0;
foreach (IntPtr ptr in pinnedSlices) {
/*// CPU VARIANT FOR TESTING PURPOSES
for (int y = 0; y < dims[1]; y++) {
for (int x = 0; x < dims[0]; x++) {
long pixelOffset = x + y * dims[0];
ushort* ushortPtr = (ushort*)ptr;
ushort pixel = *(ushortPtr + pixelOffset);
int BinaryWordShift = 5;
int BinaryWordBits = 32;
if (
(0 <= x) &&
(0 <= y) &&
(0 <= slicenr) &&
(x < dims[0]) &&
(y < dims[1]) &&
(slicenr < dims[2])
) {
uint* word =
arrcPtr + 1 + (slicenr * wordsPerPlane) +
(y * wordsPerRow) +
(x >> BinaryWordShift);
uint mask = (uint)(0x1 << ((BinaryWordBits - 1) - (byte)(x & 0x1f)));
//if (pixel > lowerThreshold && pixel < upperThreshold) {
if (slicenr < 15) {
*word |= mask;
} else {
*word &= ~mask;
}
}
}
}*/
ComputeBuffer<int> dimsBuffer = new ComputeBuffer<int>(
context,
ComputeMemoryFlags.ReadOnly | ComputeMemoryFlags.CopyHostPointer,
3,
new IntPtr(dimPtr));
ComputeImageFormat format = new ComputeImageFormat(ComputeImageChannelOrder.Intensity, ComputeImageChannelType.UnsignedInt16);
ComputeImage2D image2D = new ComputeImage2D(
context,
ComputeMemoryFlags.ReadOnly,
format,
volume.Geometry.NumberOfVoxels.X,
volume.Geometry.NumberOfVoxels.Y,
0,
ptr
);
// The output buffer doesn't need any data from the host. Only its size is specified (arrC.Length).
ComputeBuffer<uint> c = new ComputeBuffer<uint>(
context, ComputeMemoryFlags.WriteOnly, arrC.Length);
kernel.SetMemoryArgument(0, image2D);
kernel.SetMemoryArgument(1, dimsBuffer);
kernel.SetValueArgument(2, wordsPerRow);
kernel.SetValueArgument(3, wordsPerPlane);
kernel.SetValueArgument(4, slicenr);
kernel.SetValueArgument(5, lowerThreshold);
kernel.SetValueArgument(6, upperThreshold);
kernel.SetMemoryArgument(7, c);
// Execute the kernel "count" times. After this call returns, "eventList" will contain an event associated with this command.
// If eventList == null or typeof(eventList) == ReadOnlyCollection<ComputeEventBase>, a new event will not be created.
commands.Execute(kernel, null, new long[] { dims[0], dims[1] }, null, eventList);
// Read back the results. If the command-queue has out-of-order execution enabled (default is off), ReadFromBuffer
// will not execute until any previous events in eventList (in our case only eventList[0]) are marked as complete
// by OpenCL. By default the command-queue will execute the commands in the same order as they are issued from the host.
// eventList will contain two events after this method returns.
commands.ReadFromBuffer(c, ref arrC, false, eventList);
// A blocking "ReadFromBuffer" (if 3rd argument is true) will wait for itself and any previous commands
// in the command queue or eventList to finish execution. Otherwise an explicit wait for all the opencl commands
// to finish has to be issued before "arrC" can be used.
// This explicit synchronization can be achieved in two ways:
// 1) Wait for the events in the list to finish,
//eventList.Wait();
//}
// 2) Or simply use
commands.Finish();
slicenr++;
}
}
}
}
And my kernel code:
const sampler_t smp = CLK_FILTER_NEAREST | CLK_ADDRESS_CLAMP | CLK_NORMALIZED_COORDS_FALSE;
kernel void LowerThreshold(
read_only image2d_t image,
global int* brickSize,
uint wordsPerRow,
uint wordsPerPlane,
int slicenr,
int lower,
int upper,
global write_only uint* c )
{
int4 coord = (int4)(get_global_id(0),get_global_id(1),slicenr,1);
uint4 pixel = read_imageui(image, smp, coord.xy);
uchar BinaryWordShift = 5;
int BinaryWordBits = 32;
if (
(0 <= coord.x) &&
(0 <= coord.y) &&
(0 <= coord.z) &&
(coord.x < brickSize[0]) &&
(coord.y < brickSize[1]) &&
(coord.z < brickSize[2])
) {
global uint* word =
c + 1 + (coord.z * wordsPerPlane) +
(coord.y * wordsPerRow) +
(coord.x >> BinaryWordShift);
uint mask = (uint)(0x1 << ((BinaryWordBits - 1) - (uchar)(coord.x & 0x1f)));
//if (pixel.w > lower && pixel.w < upper) {
if (slicenr < 15) {
*word |= mask;
} else {
*word &= ~mask;
}
}
}

Two issues:
You've declared "c" as "write_only" yet use the "|=" and "&=" operators, which perform read-modify-write.
As the other posters mentioned, if two work items access the same word, there is a race condition between those read-modify-write operations that will cause errors. Atomic operations would avoid this, but they are much slower than non-atomic operations, so while possible, they are not recommended.
I'd recommend making your output 8x larger and using bytes rather than bits. This would make your output truly write-only and would also remove contention and therefore the race conditions.
Or (if data compactness or format is important) process 8 elements at a time per work item and write the composite 8-bit output as a single byte. This would be write-only, with no contention, and would still keep your data compact.
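As a CPU-side illustration of that second option (plain C#, not the kernel itself; ThresholdRow and its parameters are made up for the example): each worker owns one output byte, packs eight pixel results into it MSB-first to match the original bit order, and writes it exactly once, so there is no read-modify-write and no contention.
static void ThresholdRow(ushort[] pixels, byte[] output, int lower, int upper)
{
    for (int byteIndex = 0; byteIndex < output.Length; byteIndex++)
    {
        byte packed = 0;
        for (int bit = 0; bit < 8; bit++)
        {
            int x = byteIndex * 8 + bit;
            if (x < pixels.Length && pixels[x] > lower && pixels[x] < upper)
                packed |= (byte)(0x80 >> bit); // MSB-first, matching the bit order used above
        }
        output[byteIndex] = packed; // single write; exactly one writer per byte
    }
}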

Related

for loop from intPtr, how?

How do I write a for loop to iterate over an array of floats, given the intPtr for the start of the array?
It's C# in Unity, so I know a float is 4 bytes, but I only get crashes when trying to step from the IntPtr by simply using the number 4 as the increment.
This is what's not working:
float myFloatVar = 42.42f
for ( int i = varIntPtr ; i < varIntPtr + 12 ; i+=4 ) {
presumedToBeAnArrayLocation[i] = myFloatVar * i;
}
given the intPtr for the start of the array?
If you have a pointer to the start of an array, then unless that array is externally pinned: your code is already irretrievably broken - an unmanaged pointer doesn't get updated with GC movement, so you now have undefined behaviour.
If we assume that it is pinned, or is unmanaged memory (and therefore not subject to GC movement), then something like:
float* typed = (float*)ptr.ToPointer();
for (int i = 0; i < count; i++)
{
float v = typed[i];
}
However, it is usually preferable to use spans when possible:
var typed = new Span<float>(ptr.ToPointer(), count);
foreach (var v in typed) {
}
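And if what you actually have is a normal managed float[] that is not already pinned, a sketch of pinning it first with GCHandle so the pointer stays valid (hypothetical example; free the handle as soon as you are done, since pinned objects hinder the GC):
using System;
using System.Runtime.InteropServices;

float[] data = { 1f, 2f, 3f };
GCHandle handle = GCHandle.Alloc(data, GCHandleType.Pinned);
try
{
    unsafe
    {
        // AddrOfPinnedObject is only valid while the handle is allocated and pinned.
        float* typed = (float*)handle.AddrOfPinnedObject().ToPointer();
        for (int i = 0; i < data.Length; i++)
        {
            Console.WriteLine(typed[i]);
        }
    }
}
finally
{
    handle.Free();
}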

.NET BitArray cardinality [duplicate]

I am implementing a library where I am extensively using the .Net BitArray class and need an equivalent to the Java BitSet.Cardinality() method, i.e. a method which returns the number of bits set. I was thinking of implementing it as an extension method for the BitArray class. The trivial implementation is to iterate and count the bits set (like below), but I wanted a faster implementation as I would be performing thousands of set operations and counting the answer. Is there a faster way than the example below?
count = 0;
for (int i = 0; i < mybitarray.Length; i++)
{
if (mybitarray [i])
count++;
}
This is my solution based on the "best bit counting method" from http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel
public static Int32 GetCardinality(BitArray bitArray)
{
Int32[] ints = new Int32[(bitArray.Count >> 5) + 1];
bitArray.CopyTo(ints, 0);
Int32 count = 0;
// fix for not truncated bits in last integer that may have been set to true with SetAll()
ints[ints.Length - 1] &= ~(-1 << (bitArray.Count % 32));
for (Int32 i = 0; i < ints.Length; i++)
{
Int32 c = ints[i];
// magic (http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel)
unchecked
{
c = c - ((c >> 1) & 0x55555555);
c = (c & 0x33333333) + ((c >> 2) & 0x33333333);
c = ((c + (c >> 4) & 0xF0F0F0F) * 0x1010101) >> 24;
}
count += c;
}
return count;
}
According to my tests, this is around 60 times faster than the simple foreach loop and still 30 times faster than the Kernighan approach with around 50% bits set to true in a BitArray with 1000 bits. I also have a VB version of this if needed.
you can accomplish this pretty easily with Linq
BitArray ba = new BitArray(new[] { true, false, true, false, false });
var numOnes = (from bool m in ba
where m
select m).Count();
BitArray myBitArray = new BitArray(...
int
bits = myBitArray.Count,
size = ((bits - 1) >> 3) + 1,
counter = 0,
x,
c;
byte[] buffer = new byte[size];
myBitArray.CopyTo(buffer, 0);
for (x = 0; x < size; x++)
for (c = 0; buffer[x] > 0; buffer[x] >>= 1)
counter += buffer[x] & 1;
Taken from "Counting bits set, Brian Kernighan's way" and adapted for bytes. I'm using it for bit arrays of 1 000 000+ bits and it's superb.
If your bits are not n*8 then you can count the mod byte manually.
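For example (reusing the bits, size and buffer variables from the snippet above, and assuming you simply want to ignore the spare bits rather than count them), you can mask the unused high bits of the last byte before the counting loop, since operations such as SetAll(true) or Not() can leave them set in the copied storage:
int tail = bits % 8;
if (tail != 0)
    buffer[size - 1] &= (byte)((1 << tail) - 1); // clear the unused high bits of the last byte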
I had the same issue, but had more than just the one Cardinality method to convert. So, I opted to port the entire BitSet class. Fortunately it was self-contained.
Here is the Gist of the C# port.
I would appreciate if people would report any bugs that are found - I am not a Java developer, and have limited experience with bit logic, so I might have translated some of it incorrectly.
A faster and simpler version than the accepted answer, thanks to the use of System.Numerics.BitOperations.PopCount:
C#
Int32[] ints = new Int32[(bitArray.Count >> 5) + 1];
bitArray.CopyTo(ints, 0);
Int32 count = 0;
for (Int32 i = 0; i < ints.Length; i++) {
count += BitOperations.PopCount((uint)ints[i]);
}
Console.WriteLine(count);
F#
let ints = Array.create ((bitArray.Count >>> 5) + 1) 0u
bitArray.CopyTo(ints, 0)
ints
|> Array.sumBy BitOperations.PopCount
|> printfn "%d"
See more details in Is BitOperations.PopCount the best way to compute the BitArray cardinality in .NET?
You could use Linq, but it would be useless and slower:
var sum = mybitarray.OfType<bool>().Count(p => p);
There is no faster way using just BitArray - it comes down to counting them. You could use LINQ to do that or write your own loop, but there is no method offered by BitArray, and the underlying data structure is an int[] array (as seen with Reflector), so this will always be O(n), n being the number of bits in the array.
The only way I could think of making it faster is using reflection to get a hold of the underlying m_array field, then you can get around the boundary checks that Get() uses on every call (see below) - but this is kinda dirty, and might only be worth it on very large arrays since reflection is expensive.
public bool Get(int index)
{
if ((index < 0) || (index >= this.Length))
{
throw new ArgumentOutOfRangeException("index", Environment.GetResourceString("ArgumentOutOfRange_Index"));
}
return ((this.m_array[index / 0x20] & (((int) 1) << (index % 0x20))) != 0);
}
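For illustration, a hedged sketch of that reflection approach (m_array is an internal implementation detail, as visible in the decompiled Get() above, so the field may be missing or renamed on other runtimes; BitOperations.PopCount, used for the counting itself, needs .NET Core 3.0 or later):
using System.Collections;
using System.Numerics;
using System.Reflection;

static int CountViaReflection(BitArray bits)
{
    // Pull the underlying int[] out of the BitArray, bypassing Get()'s bounds checks.
    var field = typeof(BitArray).GetField("m_array", BindingFlags.NonPublic | BindingFlags.Instance);
    int[] words = (int[])field.GetValue(bits);

    int count = 0;
    int fullWords = bits.Count >> 5;
    for (int i = 0; i < fullWords; i++)
        count += BitOperations.PopCount((uint)words[i]);

    // Count the last, partially used word through the indexer, which sidesteps the
    // "spare bits set by SetAll(true)" problem mentioned in the accepted answer.
    for (int i = fullWords << 5; i < bits.Count; i++)
        if (bits[i]) count++;

    return count;
}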
If this optimization is really important to you, you should create your own class for bit manipulation, that internally could use BitArray, but keeps track of the number of bits set and offers the appropriate methods (mostly delegate to BitArray but add methods to get number of bits currently set) - then of course this would be O(1).
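A minimal sketch of such a wrapper (CountingBitArray is a made-up name; only indexed get/set is shown, other BitArray members would be delegated the same way):
using System.Collections;

public sealed class CountingBitArray
{
    private readonly BitArray _bits;

    public CountingBitArray(int length) => _bits = new BitArray(length);

    // Maintained on every write, so reading the cardinality is O(1).
    public int Cardinality { get; private set; }

    public bool this[int index]
    {
        get => _bits[index];
        set
        {
            if (_bits[index] != value)
            {
                Cardinality += value ? 1 : -1;
                _bits[index] = value;
            }
        }
    }
}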
If you really want to maximize the speed, you could pre-compute a lookup table where given a byte-value you have the cardinality, but BitArray is not the most ideal structure for this, since you'd need to use reflection to pull the underlying storage out of it and operate on the integral types - see this question for a better explanation of that technique.
Another, perhaps more useful technique, is to use something like the Kernighan trick, which is O(m) for an n-bit value of cardinality m.
static readonly BitArray ZERO = new BitArray (0);
static readonly BitArray NOT_ONE = new BitArray (1).Not ();
public static int GetCardinality (this BitArray bits)
{
int c = 0;
var tmp = new BitArray (bits);
for (; tmp != ZERO; c++)
tmp = tmp.And (tmp.And (NOT_ONE));
return c;
}
This too is a bit more cumbersome than it would be in, say, C, because there are no operations defined between integer types and BitArrays (tmp &= tmp - 1, for example, to clear the least significant set bit, has been translated to tmp &= (tmp & ~0x1)).
I have no idea if this ends up being any faster than naively iterating for the case of the BCL BitArray, but algorithmically speaking it should be superior.
EDIT: cited where I discovered the Kernighan trick, with a more in-depth explanation
If you don't mind copying the code of System.Collections.BitArray into your project and editing it, you can write it as follows:
(I think it's the fastest. I've also tried using BitVector32[] to implement my BitArray, but it was still slow.)
public void Set(int index, bool value)
{
if ((index < 0) || (index >= this.m_length))
{
throw new ArgumentOutOfRangeException("index", "Index Out Of Range");
}
SetWithOutAuth(index,value);
}
//When in batch setting values,we need one method that won't auth the index range
private void SetWithOutAuth(int index, bool value)
{
int v = ((int)1) << (index % 0x20);
index = index / 0x20;
bool NotSet = (this.m_array[index] & v) == 0;
if (value && NotSet)
{
CountOfTrue++;//Count the True values
this.m_array[index] |= v;
}
else if (!value && !NotSet)
{
CountOfTrue--;//Count the True values
this.m_array[index] &= ~v;
}
else
return;
this._version++;
}
public int CountOfTrue { get; internal set; }
public void BatchSet(int start, int length, bool value)
{
if (start < 0 || start >= this.m_length || length <= 0)
return;
for (int i = start; i < length && i < this.m_length; i++)
{
SetWithOutAuth(i,value);
}
}
I wrote my own version after not finding one that uses a look-up table:
private int[] _bitCountLookup;
private void InitLookupTable()
{
_bitCountLookup = new int[256];
for (var byteValue = 0; byteValue < 256; byteValue++)
{
var count = 0;
for (var bitIndex = 0; bitIndex < 8; bitIndex++)
{
count += (byteValue >> bitIndex) & 1;
}
_bitCountLookup[byteValue] = count;
}
}
private int CountSetBits(BitArray bitArray)
{
var result = 0;
var numberOfFullBytes = bitArray.Length / 8;
var numberOfTailBits = bitArray.Length % 8;
var tailByte = numberOfTailBits > 0 ? 1 : 0;
var bitArrayInBytes = new byte[numberOfFullBytes + tailByte];
bitArray.CopyTo(bitArrayInBytes, 0);
for (var i = 0; i < numberOfFullBytes; i++)
{
result += _bitCountLookup[bitArrayInBytes[i]];
}
for (var i = (numberOfFullBytes * 8); i < bitArray.Length; i++)
{
if (bitArray[i])
{
result++;
}
}
return result;
}
The problem is naturally O(n), as a result your solution is probably the most efficient.
Since you are trying to count an arbitrary subset of bits, you cannot count the bits as they are set (which would provide a speed boost if you are not setting the bits too often).
You could check to see if the processor you are using has an instruction which will return the number of set bits. For example, a processor with SSE4 could use the POPCNT instruction according to this post. This would probably not work for you since .NET does not allow inline assembly (because it is platform independent). Also, ARM processors may not have an equivalent.
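(As an aside: newer runtimes, from .NET Core 3.0 on, do expose that instruction without assembly through hardware intrinsics, which is what the BitOperations.PopCount answer above builds on. A minimal sketch:)
using System.Numerics;
using System.Runtime.Intrinsics.X86;

static int PopCount(uint value)
{
    // Popcnt maps to the x86 POPCNT instruction when the CPU supports it;
    // BitOperations.PopCount performs the same check internally and falls back
    // to a software bit-hack otherwise, so the explicit branch is only illustrative.
    return Popcnt.IsSupported ? (int)Popcnt.PopCount(value) : BitOperations.PopCount(value);
}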
Probably the best solution would be a look-up table (or a switch, if you could guarantee the switch will be compiled to a single jump to currentLocation + byteValue). This would give you the count for the whole byte. Of course BitArray does not give access to the underlying data type, so you would have to make your own BitArray. You would also have to guarantee that all the bits in the byte will always be part of the intersection, which does not sound likely.
Another option would be to use an array of booleans instead of a BitArray. This has the advantage not needing to extract the bit from the others in the byte. The disadvantage is the array will take up 8x as much space in memory meaning not only wasted space, but also more data push as you iterate through the array to perform your count.
The difference between a standard array look up and a BitArray look up is as follows:
Array:
1. offset = index * indexSize
2. Get memory at location + offset and save to value
BitArray:
1. index = index / indexSize
2. offset = index * indexSize
3. Get memory at location + offset and save to value
4. position = index % indexSize
5. Shift value position bits
6. value = value and 1
With the exception of #2 for Arrays and #3 for BitArrays, most of these commands take 1 processor cycle to complete. Some of the commands can be combined into 1 command on x86/x64 processors, though probably not on ARM since it uses a reduced instruction set.
Which of the two (array or BitArray) perform better will be specific to your platform (processor speed, processor instructions, processor cache sizes, processor cache speed, amount of system memory (Ram), speed of system memory (CAS), speed of connection between processor and RAM) as well as the spread of indexes you want to count (are the intersections most often clustered or are they randomly distributed).
To summarize: you could probably find a way to make it faster, but your solution is the fastest you will get for your data set using a bit per boolean model in .NET.
Edit: make sure you are accessing the indexes you want to count in order. If you access indexes 200, 5, 150, 151, 311, 6 in that order then you will increase the amount of cache misses resulting in more time spent waiting for values to be retrieved from RAM.

GLSL Spinlock Blocking Forever

I am trying to implement a Spinlock in GLSL. It will be used in the context of Voxel Cone Tracing. I try to move the information, which stores the lock state, to a separate 3D texture which allows atomic operations. In order to not waste memory I don't use a full integer to store the lock state but only a single bit. The problem is that without limiting the maximum number of iterations, the loop never terminates. I implemented the exact same mechanism in C#, created a lot of tasks working on shared resources and there it works perfectly.
The book Euro Par 2017: Parallel Processing Page 274 (can be found on Google) mentions possible caveats when using locks on SIMT devices. I think the code should bypass those caveats.
Problematic GLSL Code:
void imageAtomicRGBA8Avg(layout(RGBA8) volatile image3D image, layout(r32ui) volatile uimage3D lockImage,
ivec3 coords, vec4 value)
{
ivec3 lockCoords = coords;
uint bit = 1<<(lockCoords.z & (4)); //1<<(coord.z % 32)
lockCoords.z = lockCoords.z >> 5; //Division by 32
uint oldValue = 0;
//int counter=0;
bool goOn = true;
while (goOn /*&& counter < 10000*/)
//while(true)
{
uint newValue = oldValue | bit;
uint result = imageAtomicCompSwap(lockImage, lockCoords, oldValue, newValue);
//Writing is allowed if could write our value and if the bit indicating the lock is not already set
if (result == oldValue && (result & bit) == 0)
{
vec4 rval = imageLoad(image, coords);
rval.rgb = (rval.rgb * rval.a); // Denormalize
vec4 curValF = rval + value; // Add
curValF.rgb /= curValF.a; // Renormalize
imageStore(image, coords, curValF);
//Release the lock and set the flag such that the loops terminate
bit = ~bit;
oldValue = 0;
while (goOn)
{
newValue = oldValue & bit;
result = imageAtomicCompSwap(lockImage, lockCoords, oldValue, newValue);
if (result == oldValue)
goOn = false; //break;
oldValue = result;
}
//break;
}
oldValue = result;
//++counter;
}
}
Working C# code with identical functionality
public static void Test()
{
int buffer = 0;
int[] resource = new int[2];
Action testA = delegate ()
{
for (int i = 0; i < 100000; ++i)
imageAtomicRGBA8Avg(ref buffer, 1, resource);
};
Action testB = delegate ()
{
for (int i = 0; i < 100000; ++i)
imageAtomicRGBA8Avg(ref buffer, 2, resource);
};
Task[] tA = new Task[100];
Task[] tB = new Task[100];
for (int i = 0; i < tA.Length; ++i)
{
tA[i] = new Task(testA);
tA[i].Start();
tB[i] = new Task(testB);
tB[i].Start();
}
for (int i = 0; i < tA.Length; ++i)
tA[i].Wait();
for (int i = 0; i < tB.Length; ++i)
tB[i].Wait();
}
public static void imageAtomicRGBA8Avg(ref int lockImage, int bit, int[] resource)
{
int oldValue = 0;
int counter = 0;
bool goOn = true;
while (goOn /*&& counter < 10000*/)
{
int newValue = oldValue | bit;
int result = Interlocked.CompareExchange(ref lockImage, newValue, oldValue); //imageAtomicCompSwap(lockImage, lockCoords, oldValue, newValue);
if (result == oldValue && (result & bit) == 0)
{
//Now we hold the lock and can write safely
resource[bit - 1]++;
bit = ~bit;
oldValue = 0;
while (goOn)
{
newValue = oldValue & bit;
result = Interlocked.CompareExchange(ref lockImage, newValue, oldValue); //imageAtomicCompSwap(lockImage, lockCoords, oldValue, newValue);
if (result == oldValue)
goOn = false; //break;
oldValue = result;
}
//break;
}
oldValue = result;
++counter;
}
}
The locking mechanism should work essentially the same as the one described in OpenGL Insights, Chapter 22, Octree-Based Sparse Voxelization Using the GPU Hardware Rasterizer by Cyril Crassin and Simon Green. They just use integer textures to store the colors for every voxel, which I would like to avoid because it complicates mipmapping and other things.
I hope the post is understandable, I get the feeling it is already becoming too long...
Why does the GLSL implementation not terminate?
If I understand you correctly, you use lockImage as a thread lock: a given value at given coords means "only this shader instance may perform the next operations" (changing data in the other image at those coords). Right.
The key is imageAtomicCompSwap. We know it did the job when it was able to store that value (let's say 0 means "free" and 1 means "locked"). We know it because the returned value (the original value) is "free", i.e. the swap operation happened:
bool goOn = true;
uint oldValue = 0; //free
uint newValue = 1; //locked
//Wait for other shader instance to free the simulated lock
while ( goOn )
{
uint result = imageAtomicCompSwap(lockImage, lockCoords, oldValue, newValue);
if ( result == oldValue ) //it was free, now it's locked
{
//Just this shader instance executes next lines now.
//Other instances will find a "locked" value in 'lockImage' and will wait
...
//release our simulated lock
imageAtomicCompSwap(lockImage, lockCoords, newValue, oldValue);
goOn = false;
}
}
I think your code loops forever because you complicated your life with the bit variable and misused oldValue and newValue.
EDIT:
If the 'z' of the lockImage is a multiple of 32 (just a hint for understanding, it need not be an exact multiple), you try to pack 32 voxel-locks into one integer. Let's call this integer 32C.
A shader instance ("SI") may want to change its bit in 32C, to lock or unlock. So you must (A) get the current value and (B) change only your bit.
Other SIs are trying to change their bits. Some with the same bit, others with different bits.
Between two calls to imageAtomicCompSwap in one SI, another SI may have changed not your bit (it's locked, no?) but other bits in the same 32C value. You don't know what the current value is, you know only your bit. Thus you have nothing (or an old, wrong value) to compare with in the imageAtomicCompSwap call. It likely fails to set a new value. Several SIs failing leads to "deadlocks" and the while-loop never ends.
You try to avoid using an old, wrong value by oldValue = result and trying again with imageAtomicCompSwap. This is the (A)-(B) I wrote before. But between (A) and (B) another SI may still have changed the 32C value, ruining your idea.
IDEA:
You can use my simple approach (just 0 or 1 values in lockImage), without the bits thing. The result is that lockImage stays small. But all shader instances trying to update any of the 32 image coords related to one 32C value in lockImage will wait until the one that locked that value frees it.
Using another lockImage2 just to lock/unlock the 32C value for a bit update seems like too much spinning.
I have written an article about how to implement a per-pixel mutex in a fragment shader, along with code. I think you can refer to that; you are doing a pretty similar thing to what I have explained there. Here we go:
Getting Over Draw Count and Per Pixel Mutex
what is overdraw count ?
Especially on embedded hardware, a major cause of performance drops can be overdraw. Basically, one pixel on screen is shaded multiple times by the GPU due to the nature of the geometry or scene we are drawing; this is called overdraw. There are many tools to visualize the overdraw count.
Details about overdraw?
When we draw some vertices, they are transformed to clip space and then to window coordinates. The rasterizer then maps these coordinates to pixels/fragments, and for each pixel/fragment the GPU calls the pixel shader. There can be cases where we are drawing multiple instances of geometry and blending them, so the same pixel is drawn multiple times. This leads to overdraw and can degrade performance.
Strategies to avoid overdraw?
Consider frustum culling - Do frustum culling on the CPU so that objects outside the camera's field of view are not rendered.
Sort objects based on z - Draw objects from front to back; this way, the z-test fails for later objects and their fragments are not written.
Enable back-face culling - Using this we avoid rendering faces that point away from the camera.
If you look at point 2, for blending we render in exactly the reverse order: from back to front. We need to do this because blending happens after the z-test. If a fragment fails the z-test, then even though it is at the back it should still contribute because blending is on, but it would be completely ignored, giving artifacts. Hence we need to maintain back-to-front order, and because of this the overdraw count increases when blending is enabled.
Why we need Per Pixel Mutex?
The GPU is parallel by nature, so the shading of pixels can be done in parallel and many instances of the pixel shader run at the same time. These instances may be shading, and hence accessing, the same pixel, which can lead to synchronization issues and unwanted effects. In this application I maintain the overdraw count in an image buffer initialized to 0. The operations I perform are in the following order:
1. Read the ith pixel's count from the image buffer (which will be zero the first time)
2. Add 1 to the counter value read in step 1
3. Store the new counter value at the ith pixel position in the image buffer
As I said, multiple instances of the pixel shader could be working on the same pixel, and since these steps of the algorithm are not atomic, the counter variable can get corrupted. I could have used the built-in function imageAtomicAdd(), but I wanted to show how we can implement a per-pixel mutex, so I have not used it.
#version 430
layout(binding = 0,r32ui) uniform uimage2D overdraw_count;
layout(binding = 1,r32ui) uniform uimage2D image_lock;
void mutex_lock(ivec2 pos) {
uint lock_available;
do {
lock_available = imageAtomicCompSwap(image_lock, pos, 0, 1);
} while (lock_available != 0);
}
void mutex_unlock(ivec2 pos) {
imageStore(image_lock, pos, uvec4(0));
}
out vec4 color;
void main() {
mutex_lock(ivec2(gl_FragCoord.xy));           
uint count = imageLoad(overdraw_count, ivec2(gl_FragCoord.xy)).x + 1;
imageStore(overdraw_count, ivec2(gl_FragCoord.xy), uvec4(count));
mutex_unlock(ivec2(gl_FragCoord.xy)); 
}
Fragment_Shader.fs
About Demo.
In the demo video you can see we are rendering many teapots with blending on, so pixels with higher intensity indicate a higher overdraw count.
on youtube
Note: On android you can see this overdraw count in debug GPU options.
source: Per Pixel Mutex

Bug in Microsoft's internal PriorityQueue<T>?

In the .NET Framework in PresentationCore.dll, there is a generic PriorityQueue<T> class whose code can be found here.
I wrote a short program to test the sorting, and the results weren't great:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using MS.Internal;
namespace ConsoleTest {
public static class ConsoleTest {
public static void Main() {
PriorityQueue<int> values = new PriorityQueue<int>(6, Comparer<int>.Default);
Random random = new Random(88);
for (int i = 0; i < 6; i++)
values.Push(random.Next(0, 10000000));
int lastValue = int.MinValue;
int temp;
while (values.Count != 0) {
temp = values.Top;
values.Pop();
if (temp >= lastValue)
lastValue = temp;
else
Console.WriteLine("found sorting error");
Console.WriteLine(temp);
}
Console.ReadLine();
}
}
}
Results:
2789658
3411390
4618917
6996709
found sorting error
6381637
9367782
There is a sorting error, and if the sample size is increased, the number of sorting errors increases somewhat proportionally.
Have I done something wrong? If not, where is the bug in the code of the PriorityQueue class located exactly?
The behavior can be reproduced using the initialization vector [0, 1, 2, 4, 5, 3]. The result is:
[0, 1, 2, 4, 3, 5]
(we can see that 3 is incorrectly placed)
The Push algorithm is correct. It builds a min-heap in a straightforward way:
Start from the bottom right
If the value is greater than the parent node then insert it and return
Otherwise, put the parent in the bottom-right position instead, then try inserting the value at the parent's place (and keep swapping up the tree until the right place has been found) - a short C# sketch of this sift-up follows these steps
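For reference, a minimal sketch of that push/sift-up in C#, assuming the same _heap, _count and _comparer fields as the class under discussion (array growth and the class's lazy-heapify flag are omitted):
internal void Push(T value)
{
    int index = _count++;
    while (index > 0)
    {
        int parent = (index - 1) / 2;
        if (_comparer.Compare(value, _heap[parent]) >= 0)
            break;                      // value belongs below this parent; stop here
        _heap[index] = _heap[parent];   // move the parent down into the gap
        index = parent;
    }
    _heap[index] = value;               // drop the value into the final gap
}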
The resulting tree is:
      0
     / \
    /   \
   1     2
  / \   /
 4   5 3
The issue is with the Pop method. It starts by considering the top node as a "gap" to fill (since we popped it):
      *
     / \
    /   \
   1     2
  / \   /
 4   5 3
To fill it, it searches for the lowest immediate child (in this case: 1). It then moves the value up to fill the gap (and the child is now the new gap):
      1
     / \
    /   \
   *     2
  / \   /
 4   5 3
It then does the exact same thing with the new gap, so the gap moves down again:
      1
     / \
    /   \
   4     2
  / \   /
 *   5 3
When the gap has reached the bottom, the algorithm... takes the bottom-rightmost value of the tree and uses it to fill the gap:
      1
     / \
    /   \
   4     2
  / \   /
 3   5 *
Now that the gap is at the bottom-rightmost node, it decrements _count to remove the gap from the tree:
      1
     / \
    /   \
   4     2
  / \
 3   5
And we end up with... A broken heap.
To be perfectly honest, I don't understand what the author was trying to do, so I can't fix the existing code. At most, I can swap it with a working version (shamelessly copied from Wikipedia):
internal void Pop2()
{
if (_count > 0)
{
_count--;
_heap[0] = _heap[_count];
Heapify(0);
}
}
internal void Heapify(int i)
{
int left = (2 * i) + 1;
int right = left + 1;
int smallest = i;
if (left <= _count && _comparer.Compare(_heap[left], _heap[smallest]) < 0)
{
smallest = left;
}
if (right <= _count && _comparer.Compare(_heap[right], _heap[smallest]) < 0)
{
smallest = right;
}
if (smallest != i)
{
var pivot = _heap[i];
_heap[i] = _heap[smallest];
_heap[smallest] = pivot;
Heapify(smallest);
}
}
The main issue with that code is the recursive implementation, which could break if the number of elements is very large. I strongly recommend using an optimized third-party library instead.
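If you want to keep that replacement but avoid the recursion, an iterative rewrite of the Heapify above (same bounds and comparisons, only the tail recursion turned into a loop; the name HeapifyIterative is mine) might look like this:
internal void HeapifyIterative(int i)
{
    while (true)
    {
        int left = (2 * i) + 1;
        int right = left + 1;
        int smallest = i;
        if (left <= _count && _comparer.Compare(_heap[left], _heap[smallest]) < 0)
        {
            smallest = left;
        }
        if (right <= _count && _comparer.Compare(_heap[right], _heap[smallest]) < 0)
        {
            smallest = right;
        }
        if (smallest == i)
        {
            break; // heap property holds at this node; we're done
        }
        var pivot = _heap[i];
        _heap[i] = _heap[smallest];
        _heap[smallest] = pivot;
        i = smallest; // keep sifting down from the position we swapped into
    }
}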
Edit: I think I found out what is missing. After taking the bottom-rightmost node, the author just forgot to rebalance the heap:
internal void Pop()
{
Debug.Assert(_count != 0);
if (_count > 1)
{
// Loop invariants:
//
// 1. parent is the index of a gap in the logical tree
// 2. leftChild is
// (a) the index of parent's left child if it has one, or
// (b) a value >= _count if parent is a leaf node
//
int parent = 0;
int leftChild = HeapLeftChild(parent);
while (leftChild < _count)
{
int rightChild = HeapRightFromLeft(leftChild);
int bestChild =
(rightChild < _count && _comparer.Compare(_heap[rightChild], _heap[leftChild]) < 0) ?
rightChild : leftChild;
// Promote bestChild to fill the gap left by parent.
_heap[parent] = _heap[bestChild];
// Restore invariants, i.e., let parent point to the gap.
parent = bestChild;
leftChild = HeapLeftChild(parent);
}
// Fill the last gap by moving the last (i.e., bottom-rightmost) node.
_heap[parent] = _heap[_count - 1];
// FIX: Rebalance the heap
int index = parent;
var value = _heap[parent];
while (index > 0)
{
int parentIndex = HeapParent(index);
if (_comparer.Compare(value, _heap[parentIndex]) < 0)
{
// value is a better match than the parent node so exchange
// places to preserve the "heap" property.
var pivot = _heap[index];
_heap[index] = _heap[parentIndex];
_heap[parentIndex] = pivot;
index = parentIndex;
}
else
{
// Heap is balanced
break;
}
}
}
_count--;
}
Kevin Gosse's answer identifies the problem. Although his re-balancing of the heap will work, it's not necessary if you fix the fundamental problem in the original removal loop.
As he pointed out, the idea is to replace the item at the top of the heap with the lowest, right-most item, and then sift it down to the proper location. It's a simple modification of the original loop:
internal void Pop()
{
Debug.Assert(_count != 0);
if (_count > 0)
{
--_count;
// Logically, we're moving the last item (lowest, right-most)
// to the root and then sifting it down.
int ix = 0;
while (ix < _count/2)
{
// find the smallest child
int smallestChild = HeapLeftChild(ix);
int rightChild = HeapRightFromLeft(smallestChild);
if (rightChild < _count && _comparer.Compare(_heap[rightChild], _heap[smallestChild]) < 0)
{
smallestChild = rightChild;
}
// If the item is less than or equal to the smallest child item,
// then we're done.
if (_comparer.Compare(_heap[_count], _heap[smallestChild]) <= 0)
{
break;
}
// Otherwise, move the child up
_heap[ix] = _heap[smallestChild];
// and adjust the index
ix = smallestChild;
}
// Place the item where it belongs
_heap[ix] = _heap[_count];
// and clear the position it used to occupy
_heap[_count] = default(T);
}
}
Note also that the code as written has a memory leak. This bit of code:
// Fill the last gap by moving the last (i.e., bottom-rightmost) node.
_heap[parent] = _heap[_count - 1];
Does not clear the value from _heap[_count - 1]. If the heap is storing reference types, then the references remain in the heap and cannot be garbage collected until the memory for the heap is garbage collected. I don't know where this heap is used, but if it's large and lives for any significant amount of time, it could cause excess memory consumption. The answer is to clear the item after it's copied:
_heap[_count - 1] = default(T);
My replacement code incorporates that fix.
Not reproducible in .NET Framework 4.8
Trying to reproduce this issue in 2020 with the .NET Framework 4.8 implementation of the PriorityQueue<T> as linked in the question using the following XUnit test ...
public class PriorityQueueTests
{
[Fact]
public void PriorityQueueTest()
{
Random random = new Random();
// Run 1 million tests:
for (int i = 0; i < 1000000; i++)
{
// Initialize PriorityQueue with default size of 20 using default comparer.
PriorityQueue<int> priorityQueue = new PriorityQueue<int>(20, Comparer<int>.Default);
// Using 200 entries per priority queue ensures possible edge cases with duplicate entries...
for (int j = 0; j < 200; j++)
{
// Populate queue with test data
priorityQueue.Push(random.Next(0, 100));
}
int prev = -1;
while (priorityQueue.Count > 0)
{
// Assert that previous element is less than or equal to current element...
Assert.True(prev <= priorityQueue.Top);
prev = priorityQueue.Top;
// remove top element
priorityQueue.Pop();
}
}
}
}
... succeeds in all 1 million test cases:
So it seems like Microsoft fixed the bug in their implementation:
internal void Pop()
{
Debug.Assert(_count != 0);
if (!_isHeap)
{
Heapify();
}
if (_count > 0)
{
--_count;
// discarding the root creates a gap at position 0. We fill the
// gap with the item x from the last position, after first sifting
// the gap to a position where inserting x will maintain the
// heap property. This is done in two phases - SiftDown and SiftUp.
//
// The one-phase method found in many textbooks does 2 comparisons
// per level, while this method does only 1. The one-phase method
// examines fewer levels than the two-phase method, but it does
// more comparisons unless x ends up in the top 2/3 of the tree.
// That accounts for only n^(2/3) items, and x is even more likely
// to end up near the bottom since it came from the bottom in the
// first place. Overall, the two-phase method is noticeably better.
T x = _heap[_count]; // lift item x out from the last position
int index = SiftDown(0); // sift the gap at the root down to the bottom
SiftUp(index, ref x, 0); // sift the gap up, and insert x in its rightful position
_heap[_count] = default(T); // don't leak x
}
}
As the link in the question only points to the most recent version of Microsoft's source code (currently .NET Framework 4.8), it's hard to say what exactly was changed, but most notably there is now an explicit comment not to leak memory, so we can assume the memory leak mentioned in Jim Mischel's answer has been addressed as well, which can be confirmed using the Visual Studio Diagnostic tools:
If there was a memory leak we'd see some changes here after a couple of million Pop() operations...

Simple interval/range intersection with overflow

I'm writing a physical memory manager that gets some intervals of memory from the BIOS that are not used by crucial system data. Each interval has 0 <= start <= 2^32 - 1 and 0 <= length <= 2^32. I have already filtered out the zero-length intervals.
Given two intervals S and T, I want to detect how they intersect. For example, does S start before T and end within T (picture a)? Or does S start before T and end after T (picture c)?
You'd think the solution is trivial:
uint s_end = s_start + s_length;
uint t_end = t_start + t_length;
if (s_start < t_start)
// S starts before T
else if (s_start < t_end)
// S starts within T
else
// S starts after T
if (s_end <= t_start)
// S ends before T
else if (s_end <= t_end)
// S ends within T
else
// S ends after T
The problem is overflow: I am technically limited to a 32-bit integer and the intervals can (and often do) use the whole range of available integers. For example in figure b, t_end equals 0 due to overflow. Or even, as in figure f t_start = t_end = s_start = 0 while t_length != 0.
How can I make these interval intersection conditions work with overflow taken into account?
The overflow screws up my conditions, but I really can't use a 64-bit integer for this (that would be easiest). I know it must be possible using some clever reshuffling of my conditions and using addition and subtraction, but after making endless diagrams and thinking about it for hours, I can't seem to be able to wrap my head around it.
While my problem is with 32-bit integers, in this image I used 4-bit integers just to simplify it. The problem remains the same.
OK, the issue is: if you want your ranges to span all of the n bits, any calculation based on start/end has the potential to overflow.
So the trick is to do a linear transform to a place where your start/end calculations do not overflow, do your calcs, and then linear transform back.
NOTES
Below the "we can safely call end() now" line, you can perform the ordering checks (your original code) and it will be safe, since ordering is preserved by a linear transform.
Also, as I noted in the previous post, there is a special boundary case where even if you do this transform, you will overflow (where you span the entire line) - but you can code for that special boundary condition.
OUTPUT
5 11
CODE
#include <algorithm>
#include <cstdint>
#include <iostream>
using type = uint8_t;
struct segment
{
type start, length;
type end() const { return start + length; }
};
static segment
intersect( segment s, segment t )
{
type shift = std::min( s.start, t.start );
// transform so we can safely call end()
s.start -= shift; // doesn't affect length
t.start -= shift; // doesn't affect length
// we can safely call end() now ----------------------------------------------
type u_start = std::max( s.start, t.start );
type u_end = std::min( s.end(), t.end() );
type u_length = u_end - u_start;
segment u{ u_start, u_length };
// transform back
u.start += shift;
return u;
}
int main()
{
segment s{ 3, 13 }, t{ 5, 11 };
segment u = intersect( s, t );
std::cerr << uint32_t( u.start ) << " " << uint32_t( u.length ) << std::endl;
return 0;
}
Your example code does not enumerate all the cases. For example the intervals could also start or end at the same point.
To solve the overflow problem you could use different math depending on the start comparison, so that the ends are never computed at all. Something like:
if (s_start < t_start)
{
// S starts before T
uint start_offset = t_start - s_start;
if (start_offset < s_length)
{
if (s_length - start_offset < t_length)
{
// ...
}
else ...
} else ...
}
One solution is to treat an end of 0 as a special case. Weaving this into the if-statements, it becomes:
uint s_end = s_start + s_length;
uint t_end = t_start + t_length;
if (s_start < t_start)
// S starts before T
else if (t_end == 0 || s_start < t_end)
// S starts within T
else
// S starts after T
if (s_end != 0 && s_end <= t_start)
// S ends before T
else if (t_end == 0 || s_end == t_end
|| (s_end != 0 && s_end <= t_end))
// S ends within T
else
// S ends after T
This looks correct.
I don't know what you do with conditions like (f), since the 32-bit t_length will be 0 there.
Assuming you've managed this case somehow when you were filtering out length=0, which can mean both 0 and 2^32, the basic idea is this:
bool s_overflow=false;
if(s_start>0)//can't have overflow with s_start==0,
{
uint32 s_max_length=_UI32_MAX-s_start+1;
if(s_length==s_max_length) s_overflow=true;
}
bool t_overflow=false;
if(t_start>0)
{
uint32 t_max_length=_UI32_MAX-t_start+1;
if(t_length==t_max_length) t_overflow=true;
}
Then you just do your calculations, but if s_overflow is true, you don't calculate s_end -- you don't need it, since you already know it's 0x100000000. The same for t_overflow. Since these are already special cases, just like start=0, they shouldn't complicate your code much.
