Does anybody know why device creation for DirectX 11 succeeds even though the hardware doesn't support it? I have the following simplified code (this uses SlimDX, but I guess the library is of secondary importance here):
var form = frm;
form.Resize += form_Resize;

var desc = new SwapChainDescription()
{
    BufferCount = 1,
    ModeDescription = new ModeDescription(form.ClientSize.Width, form.ClientSize.Height, new Rational(60, 1), Format.R8G8B8A8_UNorm),
    IsWindowed = true,
    OutputHandle = form.Handle,
    SampleDescription = new SampleDescription(1, 0),
    SwapEffect = SwapEffect.Discard,
    Usage = Usage.RenderTargetOutput
};

Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.None, desc, out device, out swapChain);
device.Factory.SetWindowAssociation(form.Handle, WindowAssociationFlags.IgnoreAll);

Resize(form.ClientSize);

sprite = new SpriteRendererSlimDx(device, m_lock);
var textBlock = new TextBlockRendererSlimDx(sprite, "Arial", FontWeight.Bold, SlimDX.DirectWrite.FontStyle.Normal, FontStretch.Normal, 16);

MessagePump.Run(form, () =>
{
    device.ImmediateContext.ClearRenderTargetView(renderView, Color.DarkBlue);
    textBlock.DrawString("Hello World", Vector2.Zero, Color.Red);
    sprite.Flush();
    swapChain.Present(0, PresentFlags.None);
});
The code opens a window and draws some text. It works fine on graphics cards supporting Feature Level 11.0, but if I run the same code on my aging ATI Radeon HD 4550, which only supports Feature Level 10.1, the device driver locks up and Windows restarts the display driver.
The issue is not with the text rendering (not shown in the example); it happens when rendering triangles, too. Debugging shows that the lockup occurs on the swapChain.Present() call.
DirectX 11 Compatibility suggests that I might have a problem with using Shader Model 5 (which I am trying to use), but more generally my question is: why don't I get an ordinary error message if something is not supported? The driver shouldn't attempt to compile Shader Model 5 code if the card cannot handle it, should it?
Update
The error also seems to happen if I only use Shader Model 4.
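For what it's worth, the direction I am experimenting with now is to request an explicit list of feature levels at creation time and check what the device actually supports before touching any SM5 shaders. This is only a sketch, and it assumes my SlimDX build exposes the CreateWithSwapChain overload taking a FeatureLevel array:

// Request 11.0 but allow fallback to 10.x; the device comes back with the
// highest level the hardware supports.
var levels = new[] { FeatureLevel.Level_11_0, FeatureLevel.Level_10_1, FeatureLevel.Level_10_0 };
Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.None, levels, desc, out device, out swapChain);

if (device.FeatureLevel < FeatureLevel.Level_11_0)
{
    // Card can't run SM5; fall back to vs_4_0 / ps_4_0 shader profiles here.
}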
I have been playing around with SharpDX.XAudio2 for a few days now, and while things have been largely positive (the odd software quirk here and there), the following problem has me completely stuck:
I am working in C# .NET using VS2015.
I am trying to play multiple sounds simultaneously.
To do this, I have made:
- Test.cs: contains the main method.
- cSoundEngine.cs: holds XAudio2, MasteringVoice, and sound management methods.
- VoiceChannel.cs: holds a SourceVoice and, in future, any sfx-related data.
cSoundEngine:
List<VoiceChannel> sourceVoices;
XAudio2 engine;
MasteringVoice master;

public cSoundEngine()
{
    engine = new XAudio2();
    master = new MasteringVoice(engine);
    sourceVoices = new List<VoiceChannel>();
}
public VoiceChannel AddAndPlaySFX(string filepath, double vol, float pan)
{
    /**
     * Set up and start SourceVoice
     */
    NativeFileStream fileStream = new NativeFileStream(filepath, NativeFileMode.Open, NativeFileAccess.Read);
    SoundStream soundStream = new SoundStream(fileStream);
    SourceVoice source = new SourceVoice(engine, soundStream.Format);

    AudioBuffer audioBuffer = new AudioBuffer()
    {
        Stream = soundStream.ToDataStream(),
        AudioBytes = (int)soundStream.Length,
        Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
    };

    // Make voice wrapper
    VoiceChannel voice = new VoiceChannel(source);
    sourceVoices.Add(voice);

    // Volume
    source.SetVolume((float)vol);

    // Play sound
    source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
    source.Start();

    return voice;
}
Test.cs:
cSoundEngine engine = new cSoundEngine();
int total = 6;

for (int i = 0; i < total; i++)
{
    string filepath = System.IO.Directory.GetParent(System.IO.Directory.GetCurrentDirectory()).Parent.FullName + @"\Assets\Planet.wav";
    VoiceChannel sfx = engine.AddAndPlaySFX(filepath, 0.1, 0);
}

Console.Read(); // Input anything to end play.
There is currently nothing worth showing in VoiceChannel.cs - it just holds 'SourceVoice source', which is the one parameter passed to the constructor!
Everything runs fine with up to 5 sounds (total = 5); all you hear is the blissful drone of Planet.wav. Anything higher than 5, however, causes the console to freeze for ~5 seconds and then close (likely a C++ error which the debugger can't handle). Sadly, there is no error message for us to look at.
From testing:
- It will not crash as long as there are no more than 5 running SourceVoices.
- Changing the sample rate does not seem to help.
- Setting inputChannels on the master object to a different number makes no difference.
- MasteringVoice seems to report a maximum of 64 input voices.
- Making each sfx play from a different .wav file makes no difference.
- Setting the volume of the source voices and/or the master makes no difference.
From the XAudio2 API documentation I found this quote: 'XAudio2 removes the 6-channel limit on multichannel sounds, and supports multichannel audio on any multichannel-capable audio card. The card does not need to be hardware-accelerated.' This is the closest I have come to finding something that mentions this problem.
I am not very experienced with programming sfx and a lot of this is very new to me, so feel free to call me an idiot where appropriate, but please try to explain things in layman's terms.
Please, if you have any ideas or answers they would be greatly appreciated!
-Josh
Update
As Chuck has suggested, I have created a data bank which holds the .wav data, and I just reference the single data store with each buffer. This has raised the sound limit to 20 - however, it has not fixed the problem as a whole, likely because I have not implemented it properly.
Implementation:
class SoundDataBank
{
    /**
     * Holds a single byte array for each sound
     */
    Dictionary<eSFX, Byte[]> bank;
    string curdir => Directory.GetParent(Directory.GetCurrentDirectory()).Parent.FullName;

    public SoundDataBank()
    {
        bank = new Dictionary<eSFX, byte[]>();
        bank.Add(eSFX.planet, NativeFile.ReadAllBytes(curdir + @"\Assets\Planet.wav"));
        bank.Add(eSFX.base1, NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav"));
    }

    public Byte[] GetSoundData(eSFX sfx)
    {
        return bank[sfx];
    }
}
In SoundEngine we create a SoundDataBank object (initialised in the SoundEngine constructor):
SoundDataBank soundBank;

public VoiceChannel AddAndPlaySFXFromStore(eSFX sfx, double vol)
{
    /**
     * The SourceVoice will be automatically added to the MasteringVoice and engine in the constructor.
     */
    byte[] buffer = soundBank.GetSoundData(sfx);
    MemoryStream memoryStream = new MemoryStream(buffer);
    SoundStream soundStream = new SoundStream(memoryStream);
    SourceVoice source = new SourceVoice(engine, soundStream.Format);

    AudioBuffer audioBuffer = new AudioBuffer()
    {
        Stream = soundStream.ToDataStream(),
        AudioBytes = (int)soundStream.Length,
        Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
    };

    // Make voice wrapper
    VoiceChannel voice = new VoiceChannel(source, engine, MakeOutputMatrix());

    // Volume
    source.SetVolume((float)vol);

    // Play sound
    source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
    source.Start();

    sourceVoices.Add(voice);
    return voice;
}
This implementation now lets me play up to 20 sound effects - but NOT because we are playing from the sound bank. In fact, even running the old method for sound effects now gets up to 20 sfx instances.
The limit has improved to 20 because we call NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav") in the constructor of the SoundDataBank.
I suspect NativeFile is holding a store of loaded file data, so regardless of whether you run the original SoundEngine.AddAndPlaySFX() or SoundEngine.AddAndPlaySFXFromStore(), both are effectively running from memory?
Either way, this has quadrupled the limit, so it has been incredibly useful - but it requires further work.
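One avenue for that further work might be releasing voices once they finish playing, so the engine isn't holding on to every voice ever created. This is just an untested sketch of mine (finishedVoices and the VoiceChannel.Source accessor are assumptions); SharpDX raises SourceVoice.StreamEnd on XAudio2's callback thread, so the actual teardown is deferred to the main loop:

// Assumed field on cSoundEngine (requires using System.Collections.Concurrent;):
ConcurrentQueue<VoiceChannel> finishedVoices = new ConcurrentQueue<VoiceChannel>();

// In AddAndPlaySFXFromStore, after creating the voice:
source.StreamEnd += () => finishedVoices.Enqueue(voice); // fires on XAudio2's callback thread

// Called periodically from the main/update loop:
public void ReapFinishedVoices()
{
    VoiceChannel v;
    while (finishedVoices.TryDequeue(out v))
    {
        sourceVoices.Remove(v);
        v.Source.DestroyVoice(); // 'Source' is an assumed accessor for the wrapped SourceVoice
        v.Source.Dispose();
    }
}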
I am trying to create a simple application which will be able to switch the flash light of a Windows media device on and off.
I have initialized the camera as follows:
var devices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
var rearCamera = devices.FirstOrDefault(item => item.EnclosureLocation != null &&
    item.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Back);

if (rearCamera != null)
{
    DeviceName.Content = rearCamera.Name;
    FlashButton.Visibility = System.Windows.Visibility.Visible;

    mediaCapture = new MediaCapture();
    await mediaCapture.InitializeAsync(new MediaCaptureInitializationSettings
    {
        VideoDeviceId = rearCamera.Id
    });

    LowLagPhotoCapture lowLagCaptureMgr = null;

    // Image properties
    ImageEncodingProperties imgFormat = ImageEncodingProperties.CreateJpeg();

    // Create LowLagPhotoCapture object
    lowLagCaptureMgr = await mediaCapture.PrepareLowLagPhotoCaptureAsync(imgFormat);
}
And to switch on the flash I have written the following code:
var MyVideoDeviceController = mediaCapture.VideoDeviceController;
var MyTorch = MyVideoDeviceController.TorchControl;
var MyFlash = MyVideoDeviceController.FlashControl;

if (MyTorch.Supported)
{
    MyTorch.PowerPercent = 100;
    MyTorch.Enabled = true;
}
else
{
    if (MyFlash.Supported)
    {
        MyFlash.PowerPercent = 100;
        MyFlash.Enabled = true;
    }
    else
    {
        MessageBox.Show("No Flash and Torch Support", "Flash and Torch");
    }
}
But it seems both TorchControl and FlashControl report as not supported. I am not sure if I am using the right APIs, either. I am trying to run this on a Motion F5m tablet PC.
Thanks in advance
The TorchControl is used for constant video light, so if you're taking a photograph it's not the most appropriate control to use. One reason is that on many devices the video light will be dimmer than a photo flash, but more importantly, on some devices the torch will only turn on while a video recording is in progress. Depending on the capabilities of the device, this may interfere with the ability to take photos.
You have the right idea setting MyFlash.Enabled = true, but just to be safe, I would also set MyFlash.Auto = false, so that the flash will fire each time, and not only when it's dark.
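In code, that would look roughly like this (a minimal sketch against the same VideoDeviceController you already have; the PowerSupported check is my addition, since not every driver allows setting PowerPercent):

var flash = mediaCapture.VideoDeviceController.FlashControl;
if (flash.Supported)
{
    flash.Auto = false; // fire on every capture, not only in low light
    flash.Enabled = true;
    if (flash.PowerSupported)
        flash.PowerPercent = 100;
}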
The CameraManualControls sample on the Microsoft GitHub repository shows you how to use the Flash and Torch controls, and many more. It targets Windows 10, though, so if you're on 8.1 you'll have to adapt the code or upgrade your tablet.
Now, all of the above is assuming that the device you're running your app on has flash support in the first place. When you say that the controls are not supported, that means that the camera driver on the device is not advertising the capability to Windows. I assume that the built-in Microsoft Camera app doesn't allow you to use the flash either?
I see the manufacturer of your tablet lists an "Illuminator Light" on their camera specs list, but there is a chance that the only way to control it is through their proprietary application. In that case you'd have to reach out to them for support.
I’m working on a prototype that integrates WPF, Direct3D9 (using Microsoft’s D3DImage WPF class), and CUDA (I need to be able to generate a texture for the D3DImage on the GPU).
The problem is, CUDA doesn't update my texture. No error codes are returned; the texture just stays unchanged. Even if I read back after my own write, I don't see any changes. How do I update my D3D9 texture?
I'm not even running any CUDA kernels; for debugging purposes I'm only using the cuMemcpy2D API to write the CUDA memory, copying some fake data from the CPU.
Here’s the code, it’s C# but I’ve placed native APIs in the comments:
static void updateTexture( Texture tx )
{
    var size = tx.getSize();

    using( CudaDirectXInteropResource res = new CudaDirectXInteropResource( tx.NativePointer, CUGraphicsRegisterFlags.None, CudaContext.DirectXVersion.D3D9 ) ) // cuGraphicsD3D9RegisterResource
    {
        res.Map(); // = cuGraphicsMapResources
        using( CudaArray2D arr = res.GetMappedArray2D( 0, 0 ) ) // cuGraphicsSubResourceGetMappedArray, cuArrayGetDescriptor. The size is correct here, BTW
        {
            // Debug code below - don't run any kernels for now, just call cuMemcpy2D to write the GPU memory
            uint[] arrWhite = new uint[ size.Width * size.Height ];
            for( int i = 0; i < arrWhite.Length; i++ )
                arrWhite[ i ] = 0xFF0000FF;
            arr.CopyFromHostToThis( arrWhite ); // cuMemcpy2D

            uint[] test = new uint[ size.Width * size.Height ];
            arr.CopyFromThisToHost( test ); // The values here are correct
        }
        res.UnMap(); // cuGraphicsUnmapResources
    }

    tx.AddDirtyRectangle();

    // Map again and check what's in the resource
    using( CudaDirectXInteropResource res = new CudaDirectXInteropResource( tx.NativePointer, CUGraphicsRegisterFlags.None, CudaContext.DirectXVersion.D3D9 ) )
    {
        res.Map();
        using( CudaArray2D arr = res.GetMappedArray2D( 0, 0 ) )
        {
            uint[] test = new uint[ size.Width * size.Height ];
            arr.CopyFromThisToHost( test ); // All zeros :-(
            Debug.WriteLine( "First pixel: {0:X}", test[ 0 ] );
        }
        res.UnMap();
    }
}
As hinted by the commenter, I’ve tried creating a single instance of CudaDirectXInteropResource along with the D3D texture.
It worked.
It’s counter-intuitive and undocumented, but it looks like cuGraphicsUnregisterResource destroys the newly written data.
At least, that is the case on my machine with a GeForce GTX 960, CUDA 7.0 and Windows 8.1 x64.
So, the solution: call cuGraphicsD3D9RegisterResource once per texture, and use the cuGraphicsMapResources / cuGraphicsUnmapResources APIs to allow CUDA to access the texture data.
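In code, the working pattern looks roughly like this (a sketch distilled from the snippet above; the TextureInterop wrapper and its names are mine):

// Register once per texture lifetime, map/unmap around every update.
class TextureInterop : IDisposable
{
    readonly CudaDirectXInteropResource res;

    public TextureInterop( Texture tx )
    {
        // cuGraphicsD3D9RegisterResource - called exactly once per texture
        res = new CudaDirectXInteropResource( tx.NativePointer, CUGraphicsRegisterFlags.None, CudaContext.DirectXVersion.D3D9 );
    }

    public void Update( uint[] pixels )
    {
        res.Map(); // cuGraphicsMapResources
        using( CudaArray2D arr = res.GetMappedArray2D( 0, 0 ) )
            arr.CopyFromHostToThis( pixels ); // cuMemcpy2D
        res.UnMap(); // cuGraphicsUnmapResources
    }

    public void Dispose()
    {
        res.Dispose(); // cuGraphicsUnregisterResource - only on shutdown, not per frame
    }
}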
If I create my Device and my SwapChain like this:
SwapChain _swapChain;
Device _device;

// SwapChain description
var desc = new SwapChainDescription()
{
    BufferCount = 1,
    ModeDescription = new ModeDescription(500, 300, new Rational(60, 1), Format.R8G8B8A8_UNorm),
    IsWindowed = true,
    OutputHandle = _windowHandle,
    SampleDescription = new SampleDescription(1, 0),
    Usage = Usage.RenderTargetOutput
};

Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.Debug,
    desc, out _device, out _swapChain);
I get the expected debugging text:
Now if I create my Device and my SwapChain like this:
Factory _factory = new Factory();
Adapter adapter = _factory.GetAdapter(0);

SwapChain _swapChain;
Device _device = new Device(adapter, DeviceCreationFlags.Debug);

// SwapChain description
var desc = new SwapChainDescription()
{
    BufferCount = 1,
    ModeDescription = new ModeDescription(500, 300, new Rational(60, 1), Format.R8G8B8A8_UNorm),
    IsWindowed = true,
    OutputHandle = _windowHandle,
    SampleDescription = new SampleDescription(1, 0),
    Usage = Usage.RenderTargetOutput
};

_swapChain = new SwapChain(_factory, _device, desc);
I don't get the expected debugging text:
In addition to not getting the expected debugging text, I get a load of new messages in my output:
First-chance exception at 0x7631C42D in Tester.exe: Microsoft C++ exception: _com_error at memory location 0x064EEC28.
First-chance exception at 0x7631C42D in Tester.exe: Microsoft C++ exception: _com_error at memory location 0x064EED6C.
First-chance exception at 0x7631C42D in Tester.exe: Microsoft C++ exception: [rethrow] at memory location 0x00000000.
D3D11 ERROR: ID3D11Device::OpenSharedResource: Returning E_INVALIDARG, meaning invalid parameters were passed. [ STATE_CREATION ERROR #381: DEVICE_OPEN_SHARED_RESOURCE_INVALIDARG_RETURN]
D3D11 WARNING: ID3D11DeviceContext::OMSetRenderTargets: Resource being set to OM RenderTarget slot 0 is inaccessible because of a previous call to ReleaseSync or GetDC. [ STATE_SETTING WARNING #9: DEVICE_OMSETRENDERTARGETS_HAZARD]
D3D11 WARNING: ID3D11DeviceContext::Draw: The Pixel Shader expects a Render Target View bound to slot 0, but none is bound. This is OK, as writes of an unbound Render Target View are discarded. It is also possible the developer knows the data will not be used anyway. This is only a problem if the developer actually intended to bind a Render Target View here. [ EXECUTION WARNING #3146081: DEVICE_DRAW_RENDERTARGETVIEW_NOT_SET]
D3D11 WARNING: ID3D11DeviceContext::OMSetRenderTargets: Resource being set to OM RenderTarget slot 0 is inaccessible because of a previous call to ReleaseSync or GetDC. [ STATE_SETTING WARNING #9: DEVICE_OMSETRENDERTARGETS_HAZARD]
Where the last message repeats itself every frame...
Why is this happening? What's the difference between the 2 creation methods? How can I use the second creation method properly?
PS: I want to use the second creation method because I need the device to determine anti-aliasing settings...
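(For reference, this is the kind of query I need the device for before I can fill in the SampleDescription - a minimal sketch:)

// Ask the device how many quality levels it supports for 4x MSAA
int quality = _device.CheckMultisampleQualityLevels(Format.R8G8B8A8_UNorm, 4);
SampleDescription sampleDesc = quality > 0
    ? new SampleDescription(4, quality - 1)
    : new SampleDescription(1, 0);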
PPS: In case it's needed, here's the code that creates the RenderTargetView:
using (Texture2D backBuffer = _swapChain.GetBackBuffer<Texture2D>(0))
{
    _renderTargetView = new RenderTargetView(_device, backBuffer);
}

_context = _device.ImmediateContext;
_context.OutputMerger.SetRenderTargets(_renderTargetView);
_context.Rasterizer.SetViewport(0, 0, 500, 300);
Using the second method of device creation, replacing
Device _device = new Device(adapter, DeviceCreationFlags.Debug);
with
Device _device = new Device(DriverType.Hardware, DeviceCreationFlags.Debug);
makes the debug text appear again. It also seems to remove the warning messages and the first-chance exceptions.
I must say this is an extremely subtle difference, which I only found by comparing the two Graphics Event Lists...
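For completeness, a related pattern worth trying (a sketch, not verified against this exact problem): instead of creating a second Factory up front, query the factory from the device itself, so the swap chain is guaranteed to be built on the same DXGI chain as the device:

var device = new Device(DriverType.Hardware, DeviceCreationFlags.Debug);
using (var dxgiDevice = device.QueryInterface<SharpDX.DXGI.Device>())
using (var adapter = dxgiDevice.Adapter)
using (var factory = adapter.GetParent<SharpDX.DXGI.Factory>())
{
    var swapChain = new SwapChain(factory, device, desc);
}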
I am searching for a contour, and at least one is found, as I can see with the debugger.
When I try to transform the sequence into an array of points, a StackOverflowException (that's why I am here :-) is thrown.
I am sure the reason is some wrong allocation of buffers; I am a bit confused because the examples I have found are in C and C++, not in C#. OpenCvSharp uses generics, which I have never used before.
Platform: Windows 7 on x86, SharpDevelop 4.2.2
Here follows the code snippet:
OpenCvSharp.CvMemStorage allContours = new OpenCvSharp.CvMemStorage();
OpenCvSharp.CvSeq<OpenCvSharp.CvPoint> contour = null;
OpenCvSharp.CvPoint[] border;

i = OpenCvSharp.Cv.FindContours(image[EDGE], allContours, out contour,
    OpenCvSharp.CvContour.SizeOf, OpenCvSharp.ContourRetrieval.List,
    OpenCvSharp.ContourChain.ApproxNone,
    OpenCvSharp.Cv.Point(0, 0));

if (i != 1)
{
    Forms.MessageBox.Show(i.ToString() + " instead of 1 contours found", "Info");
    return;
}

border = new OpenCvSharp.CvPoint[contour.Total];
OpenCvSharp.Cv.CvtSeqToArray<OpenCvSharp.CvPoint>(contour, out border);
It is the last line where the program crashes.
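For what it's worth, OpenCvSharp's CvSeq<T> also appears to expose a ToArray() helper that sizes and fills the array itself; if my manual buffer handling is the culprit, something like this might sidestep it (untested sketch):

// Let OpenCvSharp allocate and fill the array instead of pre-allocating it
OpenCvSharp.CvPoint[] border = contour.ToArray();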