I want to create a staging Texture3D, bind it to an unordered access view, perform some calculations using DirectCompute, and then read the results on the CPU. Unfortunately, I get an error when creating the Texture3D with the following description:
Texture3DDescription desc = new Texture3DDescription()
{
BindFlags = BindFlags.UnorderedAccess,
CpuAccessFlags = CpuAccessFlags.Read | CpuAccessFlags.Write,
Depth = sunAngleIterations,
Format = SharpDX.DXGI.Format.R32G32B32_Float,
Height = viewAngleIterations,
MipLevels = 1,
OptionFlags = ResourceOptionFlags.None,
Usage = ResourceUsage.Staging,
Width = heightIterations
};
texture = new Texture3D(device, desc);
The exception thrown is:
{HRESULT: [0x80070057], Module: [General], ApiCode: [E_INVALIDARG/Invalid Arguments], Message: The parameter is incorrect.}
Any ideas what's wrong here?
Staging textures can't be bound to the shader pipeline, so you first need to create a default-usage texture (also, not all cards support 3-channel formats, so I changed the format as well; trying to sample a 3-channel texture might not work or might crash your driver):
Texture3DDescription desc = new Texture3DDescription()
{
BindFlags = BindFlags.UnorderedAccess,
CpuAccessFlags = CpuAccessFlags.None,
Depth = sunAngleIterations,
Format = SharpDX.DXGI.Format.R32G32B32A32_Float,
Height = viewAngleIterations,
MipLevels = 1,
OptionFlags = ResourceOptionFlags.None,
Usage = ResourceUsage.Default,
Width = heightIterations
};
texture = new Texture3D(device, desc);
Then perform your calculations and use a staging texture to retrieve the data:
Texture3DDescription stagingdesc = new Texture3DDescription()
{
BindFlags = BindFlags.None,
CpuAccessFlags = CpuAccessFlags.Read,
Depth = sunAngleIterations,
Format = SharpDX.DXGI.Format.R32G32B32A32_Float,
Height = viewAngleIterations,
MipLevels = 1,
OptionFlags = ResourceOptionFlags.None,
Usage = ResourceUsage.Staging,
Width = heightIterations
};
stagingtexture = new Texture3D(device, stagingdesc);
You then need to use deviceContext.CopyResource to copy the content of your default texture to your staging texture.
Once done, you can use deviceContext.MapSubresource (with the read flag) to access your texture data.
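A minimal sketch of those two steps (not part of the original answer), assuming device, texture (the default-usage texture) and stagingtexture from the snippets above:
var context = device.ImmediateContext;
// Copy the GPU results into the CPU-readable staging texture
context.CopyResource(texture, stagingtexture);
// Map subresource 0 for reading; the returned DataBox carries the row and slice pitches
var box = context.MapSubresource(stagingtexture, 0, MapMode.Read, SharpDX.Direct3D11.MapFlags.None);
try
{
    // Rows are box.RowPitch bytes apart and depth slices box.SlicePitch bytes apart,
    // which may be larger than Width * 16 bytes (R32G32B32A32_Float = 16 bytes per texel).
    IntPtr firstTexel = box.DataPointer;
    // ... read your results here ...
}
finally
{
    context.UnmapSubresource(stagingtexture, 0);
}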
I am a C#, SharpDX and DirectX newbie. Please excuse my ignorance. I am following up on an old post: Exception of Texture2D.FromMemory() in SharpDX code. It was very helpful.
My goal:
Build a Texture2d from softwarebitmap.
Make the texture available to HLSL.
The way I approached it:
Using IMemoryBufferByteAccess, I was able to retrieve the byte pointer and the total capacity of the frame. From the previous post, it seems I need to use a DataRectangle to point to the byte data.
Use two textures with different descriptors. Texture1 (_staging_texture): no bind flags, CPU read and write access, staging usage; I created this texture with the DataRectangle pointing to the byte data. Texture2 (_final_texture): shader-resource bind flag, no CPU access, default usage; this texture would eventually be made available to the shader. The intention was to use CopyResource to copy from Texture1 to Texture2.
Below, I copy my unpolished code for reference:
bitmap = latestFrame.SoftwareBitmap;
Windows.Graphics.Imaging.BitmapBuffer bitmapBuffer= bitmap.LockBuffer(Windows.Graphics.Imaging.BitmapBufferAccessMode.Read);
Windows.Foundation.IMemoryBufferReference bufferReference = bitmapBuffer.CreateReference();
var staging_descriptor = new Texture2DDescription
{
Width = Width,
Height = Height,
MipLevels = 1,
ArraySize = 1,
Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
Usage = ResourceUsage.Staging,
BindFlags = BindFlags.None,
CpuAccessFlags = CpuAccessFlags.Read | CpuAccessFlags.Write,
OptionFlags = ResourceOptionFlags.None
};
var final_descriptor = new Texture2DDescription
{
Width = Width,
Height = Height,
MipLevels = 1,
ArraySize = 1,
Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
Usage = ResourceUsage.Default,
BindFlags = BindFlags.ShaderResource,
CpuAccessFlags = CpuAccessFlags.None,
OptionFlags = ResourceOptionFlags.None
};
var dataRectangle = new SharpDX.DataRectangle();
unsafe
{
byte* dataInBytes;
uint capacityInBytes;
((InteropStatics.IMemoryBufferByteAccess)bufferReference).GetBuffer(out dataInBytes, out capacityInBytes);
dataRectangle.DataPointer = (IntPtr)dataInBytes;
dataRectangle.Pitch = 4;
}
Texture2D _stagingTexture = new Texture2D(device, staging_descriptor, dataRectangle);
Texture2D _finalTexture = new Texture2D(device, final_descriptor);
_stagingTexture.Device.ImmediateContext.CopyResource(_stagingTexture, _finalTexture);
My question is two fold:
The DataRectangle takes an IntPtr, while what I retrieve from the interface is a byte pointer. Is this a problem, or does the Pitch member of the DataRectangle address it? For now I cast the byte pointer to an IntPtr.
Would this approach work, or is there a better way to handle this?
Any pointers, suggestions or constructive criticisms would be much appreciated!
A while ago I was looking for the same thing, and I came up with this function, which has always worked fine for my use case:
public static Texture2D CreateTexture2DFrombytes(Device device, byte[] RawData, int width, int height)
{
Texture2DDescription desc;
desc.Width = width;
desc.Height = height;
desc.ArraySize = 1;
desc.BindFlags = BindFlags.ShaderResource;
desc.Usage = ResourceUsage.Immutable;
desc.CpuAccessFlags = CpuAccessFlags.None;
desc.Format = Format.B8G8R8A8_UNorm;
desc.MipLevels = 1;
desc.OptionFlags = ResourceOptionFlags.None;
desc.SampleDescription.Count = 1;
desc.SampleDescription.Quality = 0;
DataStream s = DataStream.Create(RawData, true, true);
DataRectangle rect = new DataRectangle(s.DataPointer, width * 4);
Texture2D t2D = new Texture2D(device, desc, rect);
return t2D;
}
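A hypothetical usage of that helper (GetFramePixels, device, width and height are placeholders; the byte array is assumed to be tightly packed 32-bit BGRA data, width * height * 4 bytes long):
byte[] rawPixels = GetFramePixels(); // placeholder for your own source of BGRA pixel data
Texture2D texture = CreateTexture2DFrombytes(device, rawPixels, width, height);
// Expose it to HLSL as a shader resource
ShaderResourceView srv = new ShaderResourceView(device, texture);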
I followed this solution for my project: How to create bitmap from Surface (SharpDX)
I don't have enough reputation to comment, so I'm opening a new question here.
My project is basically in Direct2D; I have a Surface buffer and a swap chain. I want to put my buffer into a DataStream, read its values into a bitmap, and save it to disk (like a screen capture), but my code doesn't work: all the byte values are 0 (which is black), which doesn't make sense since my image is fully white with a bit of blue.
Here is my code:
SwapChainDescription description = new SwapChainDescription()
{
ModeDescription = new ModeDescription(this.Width, this.Height, new Rational(60, 1), Format.B8G8R8A8_UNorm),
SampleDescription = new SampleDescription(1, 0),
Usage = Usage.RenderTargetOutput,
BufferCount = 1,
SwapEffect = SwapEffect.Discard,
IsWindowed = true,
OutputHandle = this.Handle
};
Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.Debug | DeviceCreationFlags.BgraSupport, description, out device, out swapChain);
SharpDX.DXGI.Device dxgiDevice = device.QueryInterface<SharpDX.DXGI.Device>();
SharpDX.DXGI.Adapter dxgiAdapter = dxgiDevice.Adapter;
SharpDX.Direct2D1.Device d2dDevice = new SharpDX.Direct2D1.Device(dxgiDevice);
d2dContext = new SharpDX.Direct2D1.DeviceContext(d2dDevice, SharpDX.Direct2D1.DeviceContextOptions.None);
SharpDX.Direct3D11.DeviceContext d3DeviceContext = new SharpDX.Direct3D11.DeviceContext(device);
properties = new BitmapProperties(new SharpDX.Direct2D1.PixelFormat(SharpDX.DXGI.Format.B8G8R8A8_UNorm, SharpDX.Direct2D1.AlphaMode.Premultiplied),
96, 96);
Surface backBuffer = swapChain.GetBackBuffer<Surface>(0);
d2dTarget = new SharpDX.Direct2D1.Bitmap(d2dContext, backBuffer, properties);
d2dContext.Target = d2dTarget;
playerBitmap = this.LoadBitmapFromContentFile(@"C:\Users\ndesjardins\Desktop\wave.png");
//System.Drawing.Bitmap bitmapCanva = new System.Drawing.Bitmap(1254, 735);
d2dContext.BeginDraw();
d2dContext.Clear(SharpDX.Color.White);
d2dContext.DrawBitmap(playerBitmap, new SharpDX.RectangleF(0, 0, playerBitmap.Size.Width, playerBitmap.Size.Height), 1f, SharpDX.Direct2D1.BitmapInterpolationMode.NearestNeighbor);
SharpDX.Direct2D1.SolidColorBrush brush = new SharpDX.Direct2D1.SolidColorBrush(d2dContext, SharpDX.Color.Green);
d2dContext.DrawRectangle(new SharpDX.RectangleF(200, 200, 100, 100), brush);
d2dContext.EndDraw();
swapChain.Present(1, PresentFlags.None);
Texture2D backBuffer3D = backBuffer.QueryInterface<SharpDX.Direct3D11.Texture2D>();
Texture2DDescription desc = backBuffer3D.Description;
desc.CpuAccessFlags = CpuAccessFlags.Read;
desc.Usage = ResourceUsage.Staging;
desc.OptionFlags = ResourceOptionFlags.None;
desc.BindFlags = BindFlags.None;
var texture = new Texture2D(device, desc);
d3DeviceContext.CopyResource(backBuffer3D, texture);
byte[] data = null;
using (Surface surface = texture.QueryInterface<Surface>())
{
DataStream dataStream;
var map = surface.Map(SharpDX.DXGI.MapFlags.Read, out dataStream);
int lines = (int)(dataStream.Length / map.Pitch);
data = new byte[surface.Description.Width * surface.Description.Height * 4];
dataStream.Position = 0;
int dataCounter = 0;
// width of the surface - 4 bytes per pixel.
int actualWidth = surface.Description.Width * 4;
for (int y = 0; y < lines; y++)
{
for (int x = 0; x < map.Pitch; x++)
{
if (x < actualWidth)
{
data[dataCounter++] = dataStream.Read<byte>();
}
else
{
dataStream.Read<byte>();
}
}
}
dataStream.Dispose();
surface.Unmap();
int width = surface.Description.Width;
int height = surface.Description.Height;
byte[] bytewidth = BitConverter.GetBytes(width);
byte[] byteheight = BitConverter.GetBytes(height);
Array.Copy(bytewidth, 0, data, 0, 4);
Array.Copy(byteheight, 0, data, 4, 4);
}
Do you have any idea why the byte array returned at the end is full of 0 when it should be mostly 255? All I did in my backbuffer was draw a bitmap image and a rectangle. The Array.Copy calls add the width and height header to the byte array so that I can create a bitmap out of it.
I answered in a comment, but the formatting is horrible, so apologies!
https://gamedev.stackexchange.com/a/112978/29920 looks promising, but as you said in reply to my comment, this was some time ago and I'm pulling this out of thin air. If it doesn't work, either someone with more current knowledge will have to answer, or I'll have to grab some source code and try it myself.
SharpDX.Direct2D1.Bitmap dxbmp = new SharpDX.Direct2D1.Bitmap(renderTarget,
new SharpDX.Size2(bmpWidth, bmpHeight), new
BitmapProperties(renderTarget.PixelFormat));
dxbmp.CopyFromMemory(bmpBits, bmpWidth * 4);
This looks kind of like what you need. I'm assuming bmpBits here is either a byte array or a memory stream, either of which could then be saved off, or at least give you something to look at to see whether you're actually getting pixel data.
I'm trying to create an off-screen bitmap to draw on, and then draw it with Direct2D1.RenderTarget.DrawBitmap. So I create a Texture2D and get the Bitmap from it. But I receive the error
[D2DERR_UNSUPPORTED_PIXEL_FORMAT/UnsupportedPixelFormat]
on the last line of the code. Please help me understand what I have done wrong here.
m_texture = new Texture2D(
context.Device,
new Texture2DDescription() {
ArraySize = 1,
BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
CpuAccessFlags = CpuAccessFlags.None,
Format = Format.B8G8R8A8_UNorm,
Height = bitmapSize.Height,
Width = bitmapSize.Width,
MipLevels = 1,
OptionFlags = ResourceOptionFlags.None,
SampleDescription = new SampleDescription() {
Count = 1,
Quality = 0
},
Usage = ResourceUsage.Default
}
);
m_surface = m_texture.QueryInterface<Surface>();
using (SharpDX.Direct2D1.Factory factory = new SharpDX.Direct2D1.Factory()) {
m_renderTarget = new RenderTarget(
factory,
m_surface,
new RenderTargetProperties() {
DpiX = 0.0f, // default dpi
DpiY = 0.0f, // default dpi
MinLevel = SharpDX.Direct2D1.FeatureLevel.Level_DEFAULT,
Type = RenderTargetType.Hardware,
Usage = RenderTargetUsage.None,
PixelFormat = new SharpDX.Direct2D1.PixelFormat(
Format.Unknown,
AlphaMode.Premultiplied
)
}
);
}
m_bitmap = new SharpDX.Direct2D1.Bitmap(m_renderTarget, m_surface);
public static SharpDX.Direct2D1.Bitmap GetBitmapFromSRV(SharpDX.Direct3D11.ShaderResourceView srv, RenderTarget renderTarget)
{
using (var texture = srv.ResourceAs<Texture2D>())
using (var surface = texture.QueryInterface<Surface>())
{
var bitmap = new SharpDX.Direct2D1.Bitmap(renderTarget, surface, new SharpDX.Direct2D1.BitmapProperties(new SharpDX.Direct2D1.PixelFormat(
Format.R8G8B8A8_UNorm,
SharpDX.Direct2D1.AlphaMode.Premultiplied)));
return bitmap;
}
}
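A hypothetical usage of that helper (it assumes srv and renderTarget already exist, the underlying texture is not multisampled, and its format is one the render target accepts):
using (SharpDX.Direct2D1.Bitmap d2dBitmap = GetBitmapFromSRV(srv, renderTarget))
{
    renderTarget.BeginDraw();
    // Draw the texture contents into the render target
    renderTarget.DrawBitmap(d2dBitmap,
        new SharpDX.RectangleF(0, 0, d2dBitmap.Size.Width, d2dBitmap.Size.Height),
        1f, SharpDX.Direct2D1.BitmapInterpolationMode.Linear);
    renderTarget.EndDraw();
}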
I'm trying to create a SharpDX.Direct3D11.Texture2D from in-memory data but always get a SharpDXException (HRESULT: 0x80070057, "The parameter is incorrect."). I have used a Texture1D for this purpose before which can be created without a problem.
I have reduced the code to this sample which still produces the exception:
using (var device = new Device(DriverType.Hardware, DeviceCreationFlags.Debug)) {
// empty stream sufficient for example
var stream = new DataStream(16 * 4, true, true);
var description1D = new Texture1DDescription() {
Width = 16,
ArraySize = 1,
Format = Format.R8G8B8A8_UNorm,
MipLevels = 1,
};
using (var texture1D = new Texture1D(device, description1D, new[] { new DataBox(stream.DataPointer) })) {
// no exception on Texture1D
}
var description2D = new Texture2DDescription() {
Width = 8,
Height = 2,
ArraySize = 1,
MipLevels = 1,
Format = Format.R8G8B8A8_UNorm,
SampleDescription = new SampleDescription(1, 0),
};
using (var texture2D = new Texture2D(device, description2D, new[] { new DataBox(stream.DataPointer) })) {
// HRESULT: [0x80070057], Module: [Unknown], ApiCode: [Unknown/Unknown], Message: The parameter is incorrect.
}
}
Creating the texture without passing the data works fine. Can someone tell me how to fix the Texture2D initialization?
You need to pass the row stride of a texture 2D into the DataBox. Something like:
new DataBox(stream.DataPointer, 8 * 4)
Or in a more generic manner:
new DataBox(stream.DataPointer, description2D.Width
* (int)FormatHelper.SizeOfInBytes(description2D.Format))
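Putting that into the failing snippet from the question, the Texture2D creation would look roughly like this (same code as above, just with the row pitch supplied):
int rowPitch = description2D.Width * (int)FormatHelper.SizeOfInBytes(description2D.Format); // 8 * 4 bytes for R8G8B8A8_UNorm
using (var texture2D = new Texture2D(device, description2D, new[] { new DataBox(stream.DataPointer, rowPitch) }))
{
    // no exception now that the row pitch is provided
}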
I'm trying to use SlimDX and DirectX10 or 11 to control the stereoization process on the nVidia 3D Vision Kit. Thanks to this question I've been able to make it work in DirectX 9. However, due to some missing methods I've been unable to make it work under DirectX 10 or 11.
The algorithm goes like this:
Render left eye image
Render right eye image
Create a texture able to contain them both PLUS an extra row (so the texture size would be 2 * width, height + 1)
Write this NV_STEREO_IMAGE_SIGNATURE value
Render this texture on the screen
My test code skips the first two steps, as I already have a stereo texture. It was originally a .JPS file, specifically one of those included in the sample pictures that come with the nvidia 3D kit. Step 5 uses a full-screen quad and a shader to render the stereoized texture onto it through an ortho-projection matrix. The sample code I've seen for DX9 doesn't need this and simply calls StretchRect(...) to copy the texture back onto the backbuffer. So maybe that is the reason it isn't working? Is there a similar method to accomplish this in DX10? I thought that rendering onto the backbuffer would theoretically be the same as copying (or StretchRect-ing) a texture onto it, but maybe it is not?
Here follows my code (slimdx):
Stereoization procedure
static Texture2D Make3D(Texture2D stereoTexture)
{
// stereoTexture contains a stereo image with the left eye image on the left half
// and the right eye image on the right half
// this staging texture will have an extra row to contain the stereo signature
Texture2DDescription stagingDesc = new Texture2DDescription()
{
ArraySize = 1,
Width = 3840,
Height = 1081,
BindFlags = BindFlags.None,
CpuAccessFlags = CpuAccessFlags.Write,
Format = SlimDX.DXGI.Format.R8G8B8A8_UNorm,
OptionFlags = ResourceOptionFlags.None,
Usage = ResourceUsage.Staging,
MipLevels = 1,
SampleDescription = new SampleDescription(1, 0)
};
Texture2D staging = new Texture2D(device, stagingDesc);
// Identify the source texture region to copy (all of it)
ResourceRegion stereoSrcBox = new ResourceRegion { Front = 0, Back = 1, Top = 0, Bottom = 1080, Left = 0, Right = 3840 };
// Copy it to the staging texture
device.CopySubresourceRegion(stereoTexture, 0, stereoSrcBox, staging, 0, 0, 0, 0);
// Open the staging texture for writing
DataRectangle box = staging.Map(0, MapMode.Write, SlimDX.Direct3D10.MapFlags.None);
// Go to the last row
box.Data.Seek(stereoTexture.Description.Width * stereoTexture.Description.Height * 4, System.IO.SeekOrigin.Begin);
// Write the NVSTEREO header
box.Data.Write(data, 0, data.Length);
staging.Unmap(0);
// Create the final stereoized texture
Texture2DDescription finalDesc = new Texture2DDescription()
{
ArraySize = 1,
Width = 3840,
Height = 1081,
BindFlags = BindFlags.ShaderResource,
CpuAccessFlags = CpuAccessFlags.Write,
Format = SlimDX.DXGI.Format.R8G8B8A8_UNorm,
OptionFlags = ResourceOptionFlags.None,
Usage = ResourceUsage.Dynamic,
MipLevels = 1,
SampleDescription = new SampleDescription(1, 0)
};
// Copy the staging texture on a new texture to be used as a shader resource
Texture2D final = new Texture2D(device, finalDesc);
device.CopyResource(staging, final);
staging.Dispose();
return final;
}
NV_STEREO_IMAGE_SIGNATURE data
// The NVSTEREO header.
static byte[] data = new byte[] {0x4e, 0x56, 0x33, 0x44, //NVSTEREO_IMAGE_SIGNATURE = 0x4433564e;
0x00, 0x0F, 0x00, 0x00, //Screen width * 2 = 1920*2 = 3840 = 0x00000F00;
0x38, 0x04, 0x00, 0x00, //Screen height = 1080 = 0x00000438;
0x20, 0x00, 0x00, 0x00, //dwBPP = 32 = 0x00000020;
0x02, 0x00, 0x00, 0x00}; //dwFlags = SIH_SCALE_TO_FIT = 0x00000002
Main
private static Device device;
[STAThread]
static void Main()
{
// Device creation
var form = new RenderForm("Stereo test") {ClientSize = new Size(1920, 1080)};
var desc = new SwapChainDescription()
{
BufferCount = 1,
ModeDescription = new ModeDescription(1920, 1080, new Rational(120, 1), Format.R8G8B8A8_UNorm),
IsWindowed = true,
OutputHandle = form.Handle,
SampleDescription = new SampleDescription(1, 0),
SwapEffect = SwapEffect.Discard,
Usage = Usage.RenderTargetOutput
};
SwapChain swapChain;
Device.CreateWithSwapChain(null, DriverType.Hardware, DeviceCreationFlags.Debug, desc, out device, out swapChain);
//Stops Alt+Enter from causing fullscreen screwiness.
Factory factory = swapChain.GetParent<Factory>();
factory.SetWindowAssociation(form.Handle, WindowAssociationFlags.IgnoreAll);
Texture2D backBuffer = Resource.FromSwapChain<Texture2D>(swapChain, 0);
RenderTargetView renderView = new RenderTargetView(device, backBuffer);
ImageLoadInformation info = new ImageLoadInformation()
{
BindFlags = BindFlags.None,
CpuAccessFlags = CpuAccessFlags.Read,
FilterFlags = FilterFlags.None,
Format = SlimDX.DXGI.Format.R8G8B8A8_UNorm,
MipFilterFlags = FilterFlags.None,
OptionFlags = ResourceOptionFlags.None,
Usage = ResourceUsage.Staging,
MipLevels = 1
};
// Make texture 3D
Texture2D sourceTexture = Texture2D.FromFile(device, "medusa.jpg", info);
Texture2D stereoizedTexture = Make3D(sourceTexture);
ShaderResourceView srv = new ShaderResourceView(device, stereoizedTexture);
// Create a quad that fills the whole screen
ushort[] idx;
TexturedVertex[] quad = CreateTexturedQuad(Vector3.Zero, 1920, 1080, out idx);
// fill vertex and index buffers
DataStream stream = new DataStream(4*24, true, true);
stream.WriteRange(quad);
stream.Position = 0;
Buffer vertices = new SlimDX.Direct3D10.Buffer(device, stream, new BufferDescription()
{
BindFlags = BindFlags.VertexBuffer,
CpuAccessFlags = CpuAccessFlags.None,
OptionFlags = ResourceOptionFlags.None,
SizeInBytes = 4*24,
Usage = ResourceUsage.Default
});
stream.Close();
stream = new DataStream(6*sizeof (ushort), true, true);
stream.WriteRange(idx);
stream.Position = 0;
Buffer indices = new SlimDX.Direct3D10.Buffer(device, stream, new BufferDescription()
{
BindFlags = BindFlags.IndexBuffer,
CpuAccessFlags = CpuAccessFlags.None,
OptionFlags = ResourceOptionFlags.None,
SizeInBytes = 6*sizeof (ushort),
Usage = ResourceUsage.Default
});
// Create world view (ortho) projection matrices
QuaternionCam qCam = new QuaternionCam();
// Load effect from file. It is a basic effect that renders a full screen quad through
// an ortho-projection matrix
Effect effect = Effect.FromFile(device, "Texture.fx", "fx_4_0", ShaderFlags.Debug, EffectFlags.None);
EffectTechnique technique = effect.GetTechniqueByIndex(0);
EffectPass pass = technique.GetPassByIndex(0);
InputLayout layout = new InputLayout(device, pass.Description.Signature, new[]
{
new InputElement("POSITION", 0,
Format.
R32G32B32A32_Float,
0, 0),
new InputElement("TEXCOORD", 0,
Format.
R32G32_Float,
16, 0)
});
effect.GetVariableByName("mWorld").AsMatrix().SetMatrix(
Matrix.Translation(Layout.OrthographicTransform(Vector2.Zero, 90, new Size(1920, 1080))));
effect.GetVariableByName("mView").AsMatrix().SetMatrix(qCam.View);
effect.GetVariableByName("mProjection").AsMatrix().SetMatrix(qCam.OrthoProjection);
effect.GetVariableByName("tDiffuse").AsResource().SetResource(srv);
// Set RT and Viewports
device.OutputMerger.SetTargets(renderView);
device.Rasterizer.SetViewports(new Viewport(0, 0, form.ClientSize.Width, form.ClientSize.Height, 0.0f, 1.0f));
// Create solid rasterizer state
RasterizerStateDescription rDesc = new RasterizerStateDescription()
{
CullMode = CullMode.None,
IsDepthClipEnabled = true,
FillMode = FillMode.Solid,
IsAntialiasedLineEnabled = true,
IsFrontCounterclockwise = true,
IsMultisampleEnabled = true
};
RasterizerState rState = RasterizerState.FromDescription(device, rDesc);
device.Rasterizer.State = rState;
// Main Loop
MessagePump.Run(form, () =>
{
device.ClearRenderTargetView(renderView, Color.Cyan);
device.InputAssembler.SetInputLayout(layout);
device.InputAssembler.SetPrimitiveTopology(PrimitiveTopology.TriangleList);
device.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertices, 24, 0));
device.InputAssembler.SetIndexBuffer(indices, Format.R16_UInt, 0);
for (int i = 0; i < technique.Description.PassCount; ++i)
{
// Render the full screen quad
pass.Apply();
device.DrawIndexed(6, 0, 0);
}
swapChain.Present(0, PresentFlags.None);
});
// Dispose resources
vertices.Dispose();
layout.Dispose();
effect.Dispose();
renderView.Dispose();
backBuffer.Dispose();
device.Dispose();
swapChain.Dispose();
rState.Dispose();
stereoizedTexture.Dispose();
sourceTexture.Dispose();
indices.Dispose();
srv.Dispose();
}
Thanks in advance!
I eventually managed to fix it. The key was using CopySubresourceRegion to copy the stereoized texture back to the backbuffer, specifying the backbuffer's dimensions (i.e. 1920 x 1080 instead of 3840 x 1081).
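The fix is described in prose only, so here is a minimal sketch of what that copy might look like, using the device, stereoizedTexture, backBuffer and swapChain objects from the Main method above (the exact region values are my assumption based on the description):
// Copy only a backbuffer-sized region of the 3840x1081 stereo texture into the backbuffer
ResourceRegion region = new ResourceRegion { Front = 0, Back = 1, Top = 0, Bottom = 1080, Left = 0, Right = 1920 };
device.CopySubresourceRegion(stereoizedTexture, 0, region, backBuffer, 0, 0, 0, 0);
swapChain.Present(0, PresentFlags.None);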