ArgumentOutOfRangeException on SerialPort.ReadTo() - c#

My code intermittently throws an ArgumentOutOfRangeException ("Non-negative number required.") when invoking the ReadTo() method of the SerialPort class:
public static void RetrieveCOMReadings(List<SuperSerialPort> ports)
{
    Parallel.ForEach(ports,
        port => port.Write(port.ReadCommand));

    Parallel.ForEach(ports,
        port =>
        {
            try
            {
                // this is the offending line.
                string readto = port.ReadTo(port.TerminationCharacter);
                port.ResponseData = port.DataToMatch.Match(readto).Value;
            }
            catch (Exception ex)
            {
                Debug.WriteLine(ex.Message);
                port.ResponseData = null;
            }
        });
}
SuperSerialPort is an extension of the SerialPort class, primarily to hold information required for communications specific to each device on the port.
A port always has the TerminationCharacter defined; most of the time it's a newline character.
I don't understand why this is happening.
If ReadTo fails to find the specified character(s) in the input buffer, shouldn't it just time out and return nothing?
The stack trace points to an offending function in mscorlib, in the definition of the SerialPort class:
System.ArgumentOutOfRangeException occurred
HResult=-2146233086
Message=Non-negative number required.
Parameter name: byteCount
Source=mscorlib
ParamName=byteCount
StackTrace:
at System.Text.ASCIIEncoding.GetMaxCharCount(Int32 byteCount)
InnerException:
I followed it and here's what I found:
private int ReadBufferIntoChars(char[] buffer, int offset, int count, bool countMultiByteCharsAsOne)
{
    Debug.Assert(count != 0, "Count should never be zero. We will probably see bugs further down if count is 0.");

    int bytesToRead = Math.Min(count, CachedBytesToRead);

    // There are lots of checks to determine if this really is a single byte encoding with no
    // funky fallbacks that would make it not single byte
    DecoderReplacementFallback fallback = encoding.DecoderFallback as DecoderReplacementFallback;
    // ----> THIS LINE
    if (encoding.IsSingleByte && encoding.GetMaxCharCount(bytesToRead) == bytesToRead &&
        fallback != null && fallback.MaxCharCount == 1)
    {
        // kill ASCII/ANSI encoding easily.
        // read at least one and at most *count* characters
        decoder.GetChars(inBuffer, readPos, bytesToRead, buffer, offset);
bytesToRead is getting assigned a negative number because CachedBytesToRead is negative. The inline comments specify that CachedBytesToRead can never be negative, yet here it clearly is:
private int readPos = 0;   // position of next byte to read in the read buffer. readPos <= readLen
private int readLen = 0;   // position of first unreadable byte => CachedBytesToRead is the number of readable bytes left.

private int CachedBytesToRead {
    get {
        return readLen - readPos;
    }
}
Anyone have any rational explanation for why this is happening?
I don't believe I'm doing anything illegal in terms of reading/writing/accessing the SerialPorts.
This gets thrown constantly, with no good way to reproduce it.
There are bytes available in the input buffer; inspecting the key properties when it breaks (readLen, readPos, BytesToRead, CachedBytesToRead) shows CachedBytesToRead going negative.
Am I doing something glaringly wrong?
EDIT: I verified that the same port isn't being accessed concurrently from the loop.

This is technically possible; in general it is a common issue with .NET classes that are not thread-safe. The SerialPort class is not thread-safe, and there's no practical case where it needs to be.
The rough diagnostic is that two separate threads are calling ReadTo() on the same SerialPort object concurrently. A standard threading race condition occurs in the code that updates the readPos variable. Both threads copy the same data from the buffer and each increments readPos, in effect advancing readPos twice as far as it should. Kaboom on the next call, when readPos is larger than readLen, producing a negative value for the number of available bytes in the buffer.
The simple explanation is that your List<SuperSerialPort> collection contains the same port more than once. The Parallel.ForEach() statement triggers the race. Works just fine for a while, until two threads execute the decoder.GetChars() method simultaneously and both arrive at the next statement:
readPos += bytesToRead;
The best way to test the hypothesis is to add code that verifies that the list does not contain the same port more than once. Roughly:
#if DEBUG
    for (int ix = 0; ix < ports.Count - 1; ++ix)
        for (int jx = ix + 1; jx < ports.Count; ++jx)
            if (ports[ix].PortName == ports[jx].PortName)
                throw new InvalidOperationException("Port used more than once");
#endif
A second explanation is that your method is being called by more than one thread. That can't work; your method isn't thread-safe. Short of protecting it with a lock, making sure that only one thread ever calls it at a time is the logical fix.
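If you do go the lock route, a minimal sketch of what that could look like (the wrapper name and lock object here are illustrative, not part of your code):
// Sketch: serialize every call so two threads can never touch the same
// SerialPort buffers concurrently. Assumes all callers use this wrapper.
private static readonly object comReadingsLock = new object();

public static void RetrieveCOMReadingsSafe(List<SuperSerialPort> ports)
{
    lock (comReadingsLock)
    {
        RetrieveCOMReadings(ports);
    }
}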

It can be caused by setting a termination character and then using this character for ReadTo. Instead, try to use ReadLine or remove the termination character.
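If you try ReadLine, a minimal sketch using the port variable from your loop (assuming TerminationCharacter is the string the device sends at the end of each response):
// Assumption: TerminationCharacter is the device's line terminator.
// ReadLine() blocks until NewLine is received or ReadTimeout elapses,
// in which case it throws a TimeoutException rather than returning nothing.
port.NewLine = port.TerminationCharacter;
string response = port.ReadLine();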


SpinWait.SpinUntil taking MUCH longer than timeout to exit while waiting for Selenium element to exist

I have a relatively simple method to wait until an element exists and is displayed. The method handles the situation where more than a single element is returned for the given By (usually we only expect one of them to be displayed but in any case the method will return the first displayed element found).
The issue I'm having is that when there is no matching element on the page (at all), it takes orders of magnitude longer* than the TimeSpan specified, and I can't figure out why.
*I just tested with a 30s timeout and it took a little over 5m
code:
/// <summary>
/// Returns the (first) element that is displayed when multiple elements are found on page for the same by
/// </summary>
public static IWebElement FindDisplayedElement(By by, int secondsToWait = 30)
{
    WebDriver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(secondsToWait);

    // Wait for an element to exist and also displayed
    IWebElement element = null;
    bool success = SpinWait.SpinUntil(() =>
    {
        var collection = WebDriver.FindElements(by);
        if (collection.Count <= 0)
            return false;
        element = collection.ToList().FirstOrDefault(x => x.Displayed == true);
        return element != null;
    }
    , TimeSpan.FromSeconds(secondsToWait));

    if (success)
        return element;

    // if element still not found
    throw new NoSuchElementException("Could not find visible element with by: " + by.ToString());
}
You would call it with something like this:
[Test]
public void FindDisplayedElement()
{
    webDriver.Navigate().GoToUrl("https://stackoverflow.com/questions");
    var nonExistentElementBy = By.CssSelector("#custom-header99");
    FindDisplayedElement(nonExistentElementBy, 10);
}
If you run the test (with 10s timeout) you will find it takes about 100 seconds to actually exit.
It looks like it might have something to do with the implicit wait built into WebDriver.FindElements() being wrapped inside a SpinWait.SpinUntil().
Would like to hear what you guys think about this conundrum.
Cheers!
That's because SpinWait.SpinUntil is implemented roughly as follows:
public static bool SpinUntil(Func<bool> condition, TimeSpan timeout) {
    int millisecondsTimeout = (int) timeout.TotalMilliseconds;
    long num = 0;
    if (millisecondsTimeout != 0 && millisecondsTimeout != -1)
        num = Environment.TickCount;
    SpinWait spinWait = new SpinWait();
    while (!condition())
    {
        if (millisecondsTimeout == 0)
            return false;
        spinWait.SpinOnce();
        // HERE
        if (millisecondsTimeout != -1 && spinWait.NextSpinWillYield && millisecondsTimeout <= (Environment.TickCount - num))
            return false;
    }
    return true;
}
Note the condition under the "HERE" comment above. It only checks whether the timeout has expired IF spinWait.NextSpinWillYield returns true. What that means is: if the next spin will result in a context switch and the timeout has expired, give up and return. Otherwise, keep spinning without even checking the timeout.
The NextSpinWillYield result depends on the number of previous spins. Basically this construct spins X times (10, I believe), then starts to yield (giving up the current thread's time slice to other threads).
In your case, the condition inside SpinUntil takes a VERY long time to evaluate, which is completely against the design of SpinWait - it expects condition evaluation to take almost no time at all (and where SpinWait is actually applicable, that is true). Say one evaluation of the condition takes 5 seconds in your case. Then, even if the timeout is 1 second, it will spin 10 times first (50 seconds total) before even checking the timeout. That's because SpinWait is not designed for the thing you are trying to use it for. From the documentation:
System.Threading.SpinWait is a lightweight synchronization type that
you can use in low-level scenarios to avoid the expensive context
switches and kernel transitions that are required for kernel events.
On multicore computers, when a resource is not expected to be held for
long periods of time, it can be more efficient for a waiting thread to
spin in user mode for a few dozen or a few hundred cycles, and then
retry to acquire the resource. If the resource is available after
spinning, then you have saved several thousand cycles. If the resource
is still not available, then you have spent only a few cycles and can
still enter a kernel-based wait. This spinning-then-waiting
combination is sometimes referred to as a two-phase wait operation.
None of which is applicable to your situation, in my opinion. Another part of the documentation states that "SpinWait is not generally useful for ordinary applications".
In this case, with such a long condition evaluation time, you can just run the condition in a loop without additional waiting or spinning, and manually check whether the timeout has expired on each iteration.
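For example, a rough sketch of such a loop, reusing WebDriver, by, and secondsToWait from the question and checking the deadline with a Stopwatch:
// Sketch only: poll FindElements until a displayed element shows up or the
// deadline passes. Each FindElements call may itself block for the implicit
// wait, so keep that timeout short if you use this approach.
var deadline = TimeSpan.FromSeconds(secondsToWait);
var stopwatch = Stopwatch.StartNew();
do
{
    var element = WebDriver.FindElements(by).FirstOrDefault(x => x.Displayed);
    if (element != null)
        return element;
} while (stopwatch.Elapsed < deadline);
throw new NoSuchElementException("Could not find visible element with by: " + by);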
Doing some further testing, I found out that reducing the WebDriver implicit wait timeout to a low number (e.g. 100 ms) fixes the issue. This corresponds to the explanation @Evk provided as to why using SpinUntil doesn't work.
I've changed the function to use WebDriverWait instead (as shown in this answer to a different question) and it now works correctly. This removed the need to use the implicit wait timeout at all.
/// <summary>
/// Returns the (first) element that is displayed when multiple elements are found on page for the same by
/// </summary>
/// <exception cref="NoSuchElementException">Thrown when either an element is not found or none of the found elements is displayed</exception>
public static IWebElement FindDisplayedElement(By by, int secondsToWait = DEFAULT_WAIT)
{
    var wait = new WebDriverWait(WebDriver, TimeSpan.FromSeconds(secondsToWait));
    try
    {
        return wait.Until(condition =>
        {
            return WebDriver.FindElements(by).ToList().FirstOrDefault(x => x.Displayed == true);
        });
    }
    catch (WebDriverTimeoutException ex)
    {
        throw new NoSuchElementException("Could not find visible element with by: " + by.ToString(), ex);
    }
}

I need to access global (unmanaged) memory in c#

I am pretty new to C# and could use some help on an audio project.
My audio input buffers call a method when they are filled up. In the method I marshal the buffers to a local float[] and pass it to a function where some audio processing is done. After processing, the function returns the manipulated float[], which I pass via Marshal.Copy to the audio out buffer. It works, but it is pretty hard to get the audio processing done fast enough to pass the result back without ending up with ugly glitches. If I enlarge the audio buffers it gets better, but I get intolerably high latency in the signal chain.
One problem is the GC. My DSP routine is doing some FFT and the methods frequently need to allocate local variables. I think this is slowing down my process a lot.
So I need a way to allocate (and re-access) a few pieces of unmanaged memory once, keep this memory for the entire runtime, and just reference it from the methods.
I found e.g:
IntPtr hglobal = Marshal.AllocHGlobal(8192);
Marshal.FreeHGlobal(hglobal);
So what I tried is to define a global static class "Globals" with a static member and assign that IntPtr to it:
Globals.mem1 = hglobal;
From within any other method I can now access this, e.g.:
int[] f = new int[2];
f[0] = 111;
f[1] = 222;
Marshal.Copy(f, 0, Globals.mem1, 2);
Now comes my problem:
If I want to access this int[] from the example above in another method, how could I do this?
Thank you for your fast help.
It seems I was a little imprecise, sorry.
My audio device driver raises a buffer-filled event which I catch; in pseudocode (since I don't have access to my home desktop right now) it looks like:
void buffer(....)
{
    byte[] buf = new byte[asiobuffersize];
    Marshal.Copy(asioinbuffers, 0, buf, asiobufferlength);
    buf = manipulate(buf);
    Marshal.Copy(buf, 0, asiooutbuffer, asiobufferlength);
}
The manipulate function does some conversion from byte to float, then some math (FFT), and transforms back to byte; it looks like, e.g.:
private byte[] manipulate(byte[] buf, Complex[] filter)
{
    float[] bu = convertTofloat(buf);        // conversion from byte to audio float here
    Complex[] inbuf = new Complex[bu.Length];
    Complex[] midbuf = new Complex[bu.Length];
    Complex[] mid2buf = new Complex[bu.Length];
    Complex[] outbuf = new Complex[bu.Length];

    for (n....)
    {
        inbuf[n] = bu[n];                    // Copy to Complex
    }

    midbuf = FFT(inbuf);                     // Doing FFT transform

    for (n....)
    {
        mid2buf[n] = midbuf[n] * filter[n];  // Multiply with filter
    }

    outbuf = iFFT(mid2buf);                  // inverse iFFT transform
    byte[] outbytes = convertTobyte(outbuf); // conversion from float back to audio byte
    return outbytes;
}
Here is where I expect my speed issue to be. So I thought the problem could be solved if the manipulate function could just "get" a fixed piece of unmanaged memory where (once pre-created) all those variables (Complex, e.g.) and pre-allocated memory sit fixed, so that I don't have to create new ones each time the function is called. I first expected the reason for my glitches to be wrong FFT or math, but it happens at "sharp" time intervals of a few seconds, so it is not connected to audio signal issues like clipping. I think the issue happens when the GC is doing some serious work and eats exactly the few milliseconds I'm missing to get the output buffer filled in time.
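To make the idea concrete, here is roughly the kind of reuse I have in mind (just a sketch, assuming the buffer length never changes between calls):
// Sketch: allocate the work buffers once as fields and reuse them on every
// call, instead of newing up four Complex[] arrays per buffer event.
private Complex[] inbuf, midbuf, mid2buf, outbuf;

private void EnsureBuffers(int length)
{
    if (inbuf == null || inbuf.Length != length)
    {
        inbuf   = new Complex[length];
        midbuf  = new Complex[length];
        mid2buf = new Complex[length];
        outbuf  = new Complex[length];
    }
}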
I really doubt the issues you are experiencing are induced by your managed buffer creation/copying. Instead I think your problem is that you have your data capture logic coupled with your DSP logic. Usually, captured data resides in a circular buffer, where the data is rewritten after some period, so you should be fetching this data as soon as possible.
The problem is that you don't fetch the next available data block until after your DSP is done; you already know FFT operations are really CPU intensive! If you have a processing peak, you may not be able to retrieve data before it is rewritten by the capture driver.
One possibility to address your issue is to try to increase, if possible, the size and/or number of capture buffers. This buys you more time before the captured data is rewritten. The other possibility, and the one that I favor, is decoupling your processing stage from your capture stage; this way, if new data becomes available while you are busy performing your DSP computations, you can still grab it and buffer it almost as soon as it becomes available. You become much more resilient to garbage-collection-induced pauses or computing peaks inside your manipulate method.
This would involve creating two threads: the capture thread and the processing thread. You would also need an "Event" that signals the processing thread that new data is available, and a queue, that will serve as a dynamic, expandable buffer.
The capture thread would look something like this:
// the following are class members
AutoResetEvent _bufQueueEvent = new AutoResetEvent(false);
Queue<byte[]> _bufQueue = new Queue<byte[]>();
bool _keepCaptureThreadAlive;
Thread _captureThread;

void CaptureLoop() {
    while( _keepCaptureThreadAlive ) {
        byte[] asioinbuffers = WaitForBuffer();
        byte[] buf = new byte[asioinbuffers.Length];
        Array.Copy(asioinbuffers, buf, asioinbuffers.Length);
        lock( _bufQueue ) {
            _bufQueue.Enqueue(buf);
        }
        _bufQueueEvent.Set(); // notify processing thread new data is available
    }
}

void StartCaptureThread() {
    _keepCaptureThreadAlive = true;
    _captureThread = new Thread(new ThreadStart(CaptureLoop));
    _captureThread.Name = "CaptureThread";
    _captureThread.IsBackground = true;
    _captureThread.Start();
}

void StopCaptureThread() {
    _keepCaptureThreadAlive = false;
    _captureThread.Join(); // wait until the thread exits.
}
The processing thread would look something like this
// the following are class members
bool _keepProcessingThreadAlive;
Thread _processingThread;

void ProcessingLoop() {
    while( _keepProcessingThreadAlive ) {
        _bufQueueEvent.WaitOne(); // thread will sleep until fresh data is available
        if( !_keepProcessingThreadAlive ) {
            break; // check if thread is being woken for termination.
        }
        int queueCount;
        lock( _bufQueue ) {
            queueCount = _bufQueue.Count;
        }
        for( int i = 0; i < queueCount; i++ ) {
            byte[] buffIn;
            lock( _bufQueue ) {
                // only lock during the dequeue operation; this way the capture thread will
                // be able to enqueue fresh data even if we are still doing DSP processing
                buffIn = _bufQueue.Dequeue();
            }
            byte[] buffOut = manipulate(buffIn); // you are safe if this stage takes more time than normal, you will still get the incoming data
            // additional logic using manipulate() return value
            ...
        }
    }
}

void StartProcessingThread() {
    _keepProcessingThreadAlive = true;
    _processingThread = new Thread(new ThreadStart(ProcessingLoop));
    _processingThread.Name = "ProcessingThread";
    _processingThread.IsBackground = true;
    _processingThread.Start();
}

void StopProcessingThread() {
    _keepProcessingThreadAlive = false;
    _bufQueueEvent.Set(); // wake up thread in case it is waiting for data
    _processingThread.Join();
}
At my job we also perform a lot of DSP and this pattern has really helped us with the kind of issues you are experiencing.

Why doesn't this code produce torn reads?

When going through the CLR/CLI specs and memory models etc, I noticed the wording around atomic reads/writes according to the ECMA CLI spec:
A conforming CLI shall guarantee that read and write access to
properly aligned memory locations no larger than the native word size
(the size of type native int) is atomic when all the write accesses to
a location are the same size.
Specifically the phrase 'properly aligned memory' caught my eye. I wondered if I could somehow get torn reads with a long type on a 64-bit system with some trickery. So I wrote the following test-case:
unsafe class Program {
    const int NUM_ITERATIONS = 200000000;
    const long STARTING_VALUE = 0x100000000L + 123L;
    const int NUM_LONGS = 200;

    private static int prevLongWriteIndex = 0;
    private static long* misalignedLongPtr = (long*) GetMisalignedHeapLongs(NUM_LONGS);

    public static long SharedState {
        get {
            Thread.MemoryBarrier();
            return misalignedLongPtr[prevLongWriteIndex % NUM_LONGS];
        }
        set {
            var myIndex = Interlocked.Increment(ref prevLongWriteIndex) % NUM_LONGS;
            misalignedLongPtr[myIndex] = value;
        }
    }

    static unsafe void Main(string[] args) {
        Thread writerThread = new Thread(WriterThreadEntry);
        Thread readerThread = new Thread(ReaderThreadEntry);

        writerThread.Start();
        readerThread.Start();
        writerThread.Join();
        readerThread.Join();

        Console.WriteLine("Done");
        Console.ReadKey();
    }

    private static IntPtr GetMisalignedHeapLongs(int count) {
        const int ALIGNMENT = 7;
        IntPtr reservedMemory = Marshal.AllocHGlobal(new IntPtr(sizeof(long) * count + ALIGNMENT - 1));
        long allocationOffset = (long) reservedMemory % ALIGNMENT;
        if (allocationOffset == 0L) return reservedMemory;
        return reservedMemory + (int) (ALIGNMENT - allocationOffset);
    }

    private static void WriterThreadEntry() {
        for (int i = 0; i < NUM_ITERATIONS; ++i) {
            SharedState = STARTING_VALUE + i;
        }
    }

    private static void ReaderThreadEntry() {
        for (int i = 0; i < NUM_ITERATIONS; ++i) {
            var sharedStateLocal = SharedState;
            if (sharedStateLocal < STARTING_VALUE) Console.WriteLine("Torn read detected: " + sharedStateLocal);
        }
    }
}
However, no matter how many times I run the program I never legitimately see the line "Torn read detected!". So why not?
I allocated multiple longs in a single block in the hopes that at least one of them would spill between two cache lines; and the 'start point' for the first long should be misaligned (unless I'm misunderstanding something).
Also I know that the nature of multithreading errors means they can be hard to force, and that my 'test program' isn't as rigorous as it could be, but I've run the program almost 30 times now with no results- each with 200000000 iterations.
There are a number of flaws in this program that hide torn reads. Reasoning about the behavior of unsynchronized threads is never simple and hard to explain; the odds of accidental synchronization are always high.
var myIndex = Interlocked.Increment(ref prevLongWriteIndex) % NUM_LONGS;
There is nothing very subtle about Interlocked; unfortunately it affects the reader thread a great deal as well. It is pretty hard to see, but you can use Stopwatch to time the execution of the threads. You'll see that Interlocked on the writer slows down the reader by a factor of ~2. That is enough to affect the timing of the reader and not repro the problem: accidental synchronization.
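For example, a quick and rough way to see that is to bracket each thread's loop with a Stopwatch (reader loop from the program shown here) and compare runs with and without the Interlocked call in the writer:
// Rough timing sketch: wrap the reader loop and print the elapsed time.
var sw = Stopwatch.StartNew();
for (int i = 0; i < NUM_ITERATIONS; ++i) {
    var sharedStateLocal = SharedState;
    if (sharedStateLocal < STARTING_VALUE) Console.WriteLine("Torn read detected: " + sharedStateLocal);
}
Console.WriteLine("Reader took " + sw.Elapsed);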
Simplest way to eliminate the hazard and maximize the odds of detecting a torn read is to just always read and write from the same memory location. Fix:
var myIndex = 0;
if (sharedStateLocal < STARTING_VALUE)
This test doesn't help much to detect torn reads; there are many that simply don't trigger the test. Having so many binary zeros in STARTING_VALUE makes it extra unlikely. A good alternative that maximizes the odds of detection is to alternate between 1 and -1, ensuring the byte values are always different and making the test very simple. Thus:
private static void WriterThreadEntry() {
    for (int i = 0; i < NUM_ITERATIONS; ++i) {
        SharedState = 1;
        SharedState = -1;
    }
}

private static void ReaderThreadEntry() {
    for (int i = 0; i < NUM_ITERATIONS; ++i) {
        var sharedStateLocal = SharedState;
        if (Math.Abs(sharedStateLocal) != 1) {
            Console.WriteLine("Torn read detected: " + sharedStateLocal);
        }
    }
}
That quickly gets you several pages of torn reads in the console in 32-bit mode. To get them in 64-bit as well you need to do extra work to get the variable mis-aligned. It needs to straddle the L1 cache-line boundary so the processor has to perform two reads and writes, like it does in 32-bit mode. Fix:
private static IntPtr GetMisalignedHeapLongs(int count) {
    const int ALIGNMENT = -1;
    IntPtr reservedMemory = Marshal.AllocHGlobal(new IntPtr(sizeof(long) * count + 64 + 15));
    long cachelineStart = 64 * (((long)reservedMemory + 63) / 64);
    long misalignedAddr = cachelineStart + ALIGNMENT;
    if (misalignedAddr < (long)reservedMemory) misalignedAddr += 64;
    return new IntPtr(misalignedAddr);
}
Any ALIGNMENT value between -1 and -7 will now produce torn reads in 64-bit mode as well.
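To see why those offsets work, a small sketch of the arithmetic (using cachelineStart from the function above): an 8-byte long that starts one byte before a 64-byte cache-line boundary has its first byte in one line and the remaining seven in the next, so the processor needs two memory operations and the value can tear.
// Sketch: check that a long placed at cachelineStart - 1 straddles two
// 64-byte cache lines (first and last byte fall in different lines).
long addr = cachelineStart - 1;                    // ALIGNMENT = -1
long firstLine = addr / 64;                        // cache line of byte 0
long lastLine = (addr + sizeof(long) - 1) / 64;    // cache line of byte 7
Console.WriteLine(firstLine != lastLine);          // True: the value straddles the boundary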

Try...catch mysteriously continues executing code (only sometimes...)

In the C# code below, assume that sample is, say, 20000.
Then the line which calls Convert.ToInt16 will throw an overflow exception as it tries to convert 20000 * 3.2 = 64000, which is greater than Int16.MaxValue (32767).
In the exception handler, it sets a bool flag (dataErrorWritten) to true, which is supposed to stop the error message from being written more than once.
However, what I am seeing in some cases is that the error message is written every time, and it appears the dataErrorWritten flag is not doing anything. How can this be possible? Once the exception is caught the first time, dataErrorWritten[i] will be set to true, and when it comes around the next time it should not print any error. In fact, this works 99% of the time, but under certain strange circumstances it does not.
ProcessInData is running on its own thread within a static class.
Any ideas would be greatly appreciated. Thank you.
I have already tracked down one multi-threading bug in this program (shared data queue without lock), and I guess that this might have something to do with multi-threading but I can't see how.
private static bool[] dataErrorWritten;

private static void ProcessInData()
{
    short[][] bufferedData = new short[800][];
    short sample;

    // initialise this bool array to false
    dataErrorWritten = new bool[32];
    for (int i = 0; i < 32; i++)
    {
        dataErrorWritten[i] = false;
    }

    // create buffered data arrays
    for (int i = 0; i < bufferedData.Length; i++)
    {
        bufferedData[i] = new short[32];
    }

    // loop always true
    while (processData)
    {
        //... Increment bufferLocation
        for (int i = 0; i < 32; i++) {
            // Get a sample of data
            sample = GetSampleFromSomewhere();
            try {
                bufferedData[bufferLocation][i] = Convert.ToInt16((float)sample * 3.2);
                dataErrorWritten[i] = false;
            }
            catch (Exception) {
                if (!dataErrorWritten[i]) {
                    EventLog.WriteToErrorLogFile("Invalid UDP sample value " + sample + " received for channel " + (i + 1).ToString() + ".");
                    dataErrorWritten[i] = true;
                }
                // ... set buffered data within bounds
            }
        }
    }
}
Your code sets the flag to false after a successful convert and to true in the catch handler.
If your code has a mix of good and bad converts, the flag will toggle and you will get multiple errors.
You could remove the set back to "false" to ensure each error is only printed once. Perhaps use a counter instead of a bool. Only output the error when the counter is 0. That way you know how many times the error occurred (for debug purposes), but only report it once.
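A rough sketch of the counter idea (the field name here is invented):
// Hypothetical replacement for the bool[] flags: count failures per channel
// and only write to the log on the first one.
private static int[] dataErrorCount = new int[32];

// ... inside the catch block, instead of the bool check:
if (dataErrorCount[i]++ == 0)
{
    EventLog.WriteToErrorLogFile("Invalid UDP sample value " + sample +
        " received for channel " + (i + 1) + ".");
}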

XNA/Mono Effect Throwing Runtime Cast Exception

As a foreword, the exact same code works just fine in XNA, but MonoGame throws an exception. This likely requires someone familiar with the MonoGame rendering pipeline.
During the Draw section of my game, there's a ShadowmapResolver that renders out a texture that is the final calculated light pattern from a given light. It's receiving an exception when rendering, from what is essentially EffectPass.Apply(), complaining that somewhere within MonoGame there's an attempted cast from Int32[] to Single[]. Here's my code that calls it:
private void ExecuteTechnique(Texture2D source, RenderTarget2D destination,
    string techniqueName, Texture2D shadowMap)
{
    graphicsDevice.SetRenderTarget(destination);
    graphicsDevice.Clear(Color.Transparent);

    resolveShadowsEffect.Parameters["renderTargetSizeX"].SetValue((float)baseSizeX);
    resolveShadowsEffect.Parameters["renderTargetSizeY"].SetValue((float)baseSizeY);

    if (source != null)
        resolveShadowsEffect.Parameters["InputTexture"].SetValue(source);
    if (shadowMap != null)
        resolveShadowsEffect.Parameters["ShadowMapTexture"].SetValue(shadowMap);

    resolveShadowsEffect.CurrentTechnique = resolveShadowsEffect
        .Techniques[techniqueName];

    try
    {
        foreach (EffectPass pass in resolveShadowsEffect.CurrentTechnique.Passes)
        {
            pass.Apply(); // <--- InvalidCastException re-enters my program here
            quadRender.Render(Vector2.One * -1, Vector2.One);
        }
    }
    catch (Exception ex)
    {
        Util.Log(LogManager.LogLevel.Critical, ex.Message);
    }

    graphicsDevice.SetRenderTarget(null);
}
And here is the stacktrace:
at Microsoft.Xna.Framework.Graphics.ConstantBuffer.SetData(Int32 offset, Int32 rows, Int32 columns, Object data)
at Microsoft.Xna.Framework.Graphics.ConstantBuffer.SetParameter(Int32 offset, EffectParameter param)
at Microsoft.Xna.Framework.Graphics.ConstantBuffer.Update(EffectParameterCollection parameters)
at Microsoft.Xna.Framework.Graphics.EffectPass.Apply()
at JASG.ShadowmapResolver.ExecuteTechnique(Texture2D source, RenderTarget2D destination, String techniqueName, Texture2D shadowMap) in C:\Users\[snip]\dropbox\Projects\JASG2\JASG\JASG\Rendering\ShadowmapResolver.cs:line 253
So it would appear that one of the parameters of my shader which I am trying to set is confusing MonoGame somehow, but I don't see what it could be. I'm pushing floats, not int arrays. I even tried changing the RenderTarget2D.SurfaceFormat from Color to Single for all my targets and textures; it still gives the exact same error.
Outside of the function I gave, in a broader scope, there are no other parameters being set since the previous EffectPass.Apply(). There are multiple other effects that render without error before this one.
In case it helps, here's the source for the MonoGame Framework regarding ConstantBuffer.SetData()
private void SetData(int offset, int rows, int columns, object data)
{
    // Shader registers are always 4 bytes and all the
    // incoming data objects should be 4 bytes per element.
    const int elementSize = 4;
    const int rowSize = elementSize * 4;

    // Take care of a single element.
    if (rows == 1 && columns == 1)
    {
        // EffectParameter stores all values in arrays by default.
        if (data is Array)
            Buffer.BlockCopy(data as Array, 0, _buffer, offset, elementSize);
        else
        {
            // TODO: When we eventually expose the internal Shader
            // API then we will need to deal with non-array elements.
            throw new NotImplementedException();
        }
    }
    // Take care of the single copy case!
    else if (rows == 1 || (rows == 4 && columns == 4))
        Buffer.BlockCopy(data as Array, 0, _buffer, offset, rows*columns*elementSize);
    else
    {
        var source = data as Array;
        var stride = (columns*elementSize);
        for (var y = 0; y < rows; y++)
            Buffer.BlockCopy(source, stride*y, _buffer, offset + (rowSize*y),
                columns*elementSize);
    }
}
Is this some sort of marshaling problem? Thanks for your time!
Edit: P.S.: The exception is an InvalidCastException and not a NotImplementedException.
Not sure if this helps you or not, but the only casting I see being done is the data as Array. I would bet that it is crashing on the line:
Buffer.BlockCopy(data as Array, 0, _buffer, offset, rows*columns*elementSize);
or
var source = data as Array;
because they don't do any type checking before casting there. If that is the line it is crashing on, it is because they don't seem to support non-array data values. I don't know this framework well enough to give you a solid answer on how to work around this. I would probably report this as a bug to the makers here.
Try the 2MGFX tool, which optimizes shaders for MonoGame: MGFX tool tips
