I am a C#, SharpDX and DirectX newbie, so please excuse my ignorance. I am following up on an old post, "Exception of Texture2D.FromMemory() in SharpDX code", which was very helpful.
My goal:
Build a Texture2D from a SoftwareBitmap.
Make the texture available to HLSL.
The way I approached it:
Using IMemoryBufferByteAccess, I was able to retrieve the pointer to the bytes and the total capacity of the frame. From the previous post, it seems I need to use a DataRectangle to point at the byte array.
Have two textures with different descriptors. Texture1 (_stagingTexture) has no bind flags, CPU read/write access, and staging usage; I create this texture with the DataRectangle pointing at the byte array. Texture2 (_finalTexture) has the shader-resource bind flag, no CPU access, and default usage; this is the texture that is eventually made available to the shader. The intention is to use CopyResource to copy from Texture1 to Texture2.
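For reference, the COM interop declaration I use for IMemoryBufferByteAccess is the standard documented one (the GUID below is the published interface ID; only the surrounding InteropStatics class name is my own). A minimal sketch:

using System;
using System.Runtime.InteropServices;

[ComImport]
[Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
public unsafe interface IMemoryBufferByteAccess
{
    // Returns a raw pointer to the underlying pixel buffer and its capacity in bytes.
    // Requires the project to allow unsafe code.
    void GetBuffer(out byte* buffer, out uint capacity);
}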
Below, I copy my unpolished code for reference:
bitmap = latestFrame.SoftwareBitmap;
Windows.Graphics.Imaging.BitmapBuffer bitmapBuffer = bitmap.LockBuffer(Windows.Graphics.Imaging.BitmapBufferAccessMode.Read);
Windows.Foundation.IMemoryBufferReference bufferReference = bitmapBuffer.CreateReference();

var staging_descriptor = new Texture2DDescription
{
    Width = Width,
    Height = Height,
    MipLevels = 1,
    ArraySize = 1,
    Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
    SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
    Usage = ResourceUsage.Staging,
    BindFlags = BindFlags.None,
    CpuAccessFlags = CpuAccessFlags.Read | CpuAccessFlags.Write,
    OptionFlags = ResourceOptionFlags.None
};

var final_descriptor = new Texture2DDescription
{
    Width = Width,
    Height = Height,
    MipLevels = 1,
    ArraySize = 1,
    Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
    SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None
};

var dataRectangle = new SharpDX.DataRectangle();
unsafe
{
    byte* dataInBytes;
    uint capacityInBytes;
    ((InteropStatics.IMemoryBufferByteAccess)bufferReference).GetBuffer(out dataInBytes, out capacityInBytes);
    dataRectangle.DataPointer = (IntPtr)dataInBytes;
    dataRectangle.Pitch = 4;
}

Texture2D _stagingTexture = new Texture2D(device, staging_descriptor, dataRectangle);
Texture2D _finalTexture = new Texture2D(device, final_descriptor);
_stagingTexture.Device.ImmediateContext.CopyResource(_stagingTexture, _finalTexture);
My question is twofold:
The DataRectangle uses an IntPtr, while the pointer retrieved from the interface is a byte pointer. Is this not a problem, or does the Pitch member of the DataRectangle address this? For now I cast the byte pointer to IntPtr.
Would this approach work, or is there a better way to handle this?
Any pointers, suggestions or constructive criticisms would be much appreciated!
A while ago I was looking for the same thing, and I came up with this function, which has always worked fine for my use case:
public static Texture2D CreateTexture2DFrombytes(Device device, byte[] RawData, int width, int height)
{
    Texture2DDescription desc;
    desc.Width = width;
    desc.Height = height;
    desc.ArraySize = 1;
    desc.BindFlags = BindFlags.ShaderResource;
    desc.Usage = ResourceUsage.Immutable;
    desc.CpuAccessFlags = CpuAccessFlags.None;
    desc.Format = Format.B8G8R8A8_UNorm;
    desc.MipLevels = 1;
    desc.OptionFlags = ResourceOptionFlags.None;
    desc.SampleDescription.Count = 1;
    desc.SampleDescription.Quality = 0;

    DataStream s = DataStream.Create(RawData, true, true);
    DataRectangle rect = new DataRectangle(s.DataPointer, width * 4);
    Texture2D t2D = new Texture2D(device, desc, rect);
    return t2D;
}
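To connect this to the question above, here is a hedged sketch of the glue code, using dataInBytes/capacityInBytes from GetBuffer and the Width/Height from the question. Note that the pitch passed to the DataRectangle is width * 4 bytes per row, not 4, and that B8G8R8A8 matches the BGRA layout a SoftwareBitmap typically uses:

// Copy the unmanaged frame buffer into a managed array, then build the texture.
byte[] raw = new byte[capacityInBytes];
System.Runtime.InteropServices.Marshal.Copy((IntPtr)dataInBytes, raw, 0, (int)capacityInBytes);
Texture2D texture = CreateTexture2DFrombytes(device, raw, Width, Height);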
I have C# TensorFlow.NET working in Unity, but it uses an image from the file system. I want to be able to use an image from memory (a Texture2D) instead.
I tried to follow some examples of people using TensorFlowSharp, but that didn't work.
What am I doing wrong?
Note: with both functions I am using the same image. The image is 512x512, but the results of the two functions are different.
// Doesn't work
private NDArray FromTextureToNDArray(Texture2D texture) {
    Color32[] pixels = texture.GetPixels32();
    byte[] floatValues = new byte[(texture.width * texture.height) * 3];
    for (int i = 0; i < pixels.Length; i++) {
        var color = pixels[i];
        floatValues[i * 3] = color.r;
        floatValues[i * 3 + 1] = color.g;
        floatValues[i * 3 + 2] = color.b;
    }
    Shape shape = new Shape(1, texture.width, texture.height, 3);
    NDArray image = new NDArray(floatValues, shape);
    return image;
}
// Works
private NDArray ReadFromFile(string fileName) {
    var graph = new Graph().as_default();
    // Change image
    var file_reader = tf.read_file(fileName, "file_reader");
    var decodeJpeg = tf.image.decode_jpeg(file_reader, channels: 3, name: "DecodeJpeg");
    var casted = tf.cast(decodeJpeg, TF_DataType.TF_UINT8);
    var dims_expander = tf.expand_dims(casted, 0);
    using (var sess = tf.Session(graph)) {
        return sess.run(dims_expander);
    }
}
I ended up using this code from Shaqian: https://github.com/shaqian/TF-Unity/blob/master/TensorFlow/Utils.cs
Add that script to your project, and then you can use it like this:
// Get image
byte[] imageData = Utils.DecodeTexture(texture, texture.width, texture.height, 0, Flip.VERTICAL);
Shape shape = new Shape(1, texture.width, texture.height, 3);
NDArray image = new NDArray(imageData, shape);
Use Barracuda as a step in between:
var encoder = new Unity.Barracuda.TextureAsTensorData(your_2d_texture);
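A slightly fuller sketch, assuming the Barracuda package is installed (your_2d_texture is your source Texture2D; as far as I know, Barracuda's Tensor can also be built directly from a texture, which goes through the same TextureAsTensorData path):

using Unity.Barracuda;

// Build a (1, height, width, channels) tensor straight from the texture.
Tensor tensor = new Tensor(your_2d_texture, channels: 3);

// ... feed 'tensor' to an IWorker here ...

// Barracuda tensors hold native memory, so dispose when done.
tensor.Dispose();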
I am trying to create a movie from images.
I am following these links (from the first one, I am trying option 2):
https://www.leadtools.com/support/forum/posts/t11084- and https://social.msdn.microsoft.com/Forums/sqlserver/en-US/b61726a4-4b87-49c7-b4fc-8949cd1366ac/visual-c-visual-studio-2017-how-do-you-convert-jpg-images-to-video-in-visual-c?forum=csharpgeneral
void convert()
{
    bmp = new Bitmap(320, 240, System.Drawing.Imaging.PixelFormat.Format24bppRgb);

    // create sample source object
    SampleSource smpsrc = new SampleSource();
    ConvertCtrl convertCtrl = new ConvertCtrl();

    // create a new media type wrapper
    MediaType mt = new MediaType();
    double AvgTimePerFrame = (10000000 / 15);

    // set the type to 24-bit RGB video
    mt.Type = Constants.MEDIATYPE_Video;
    mt.SubType = Constants.MEDIASUBTYPE_RGB24;

    // set the format
    mt.FormatType = Constants.FORMAT_VideoInfo;
    VideoInfoHeader vih = new VideoInfoHeader();
    int bmpSize = GetBitmapSize(bmp);

    // setup the video info header
    vih.bmiHeader.biCompression = 0; // BI_RGB
    vih.bmiHeader.biBitCount = 24;
    vih.bmiHeader.biWidth = bmp.Width;
    vih.bmiHeader.biHeight = bmp.Height;
    vih.bmiHeader.biPlanes = 1;
    vih.bmiHeader.biSizeImage = bmpSize;
    vih.bmiHeader.biClrImportant = 0;
    vih.AvgTimePerFrame.lowpart = (int)AvgTimePerFrame;
    vih.dwBitRate = bmpSize * 8 * 15;
    mt.SetVideoFormatData(vih, null, 0);

    // set fixed size samples matching the bitmap size
    mt.SampleSize = bmpSize;
    mt.FixedSizeSamples = true;

    // assign the source media type
    smpsrc.SetMediaType(mt);

    // select the LEAD compressor
    convertCtrl.VideoCompressors.MCmpMJpeg.Selected = true;
    convertCtrl.SourceObject = smpsrc;
    convertCtrl.TargetFile = @"D:\Projects\LEADTool_Movie_fromImage\ImageToVideo_LeadTool\ImageToVideo_LeadTool\Images\Out\aa.avi";
    //convertCtrl.TargetFile = "C:\\Users\\vipul.langalia\\Documents\\count.avi";
    convertCtrl.TargetFormat = TargetFormatType.WMVMux;
    convertCtrl.StartConvert();

    BitmapData bmpData;
    int i = 1;
    byte[] a = new byte[bmpSize];
    System.Drawing.Rectangle rect = new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height);
    var imgs = GetAllFiles();
    foreach (var item in imgs)
    {
        bmpSize = GetBitmapSize(item);
        MediaSample ms = smpsrc.GetSampleBuffer(30000);
        ms.SyncPoint = true;
        bmpData = item.LockBits(rect, ImageLockMode.ReadWrite, item.PixelFormat);
        Marshal.Copy(bmpData.Scan0, a, 0, bmpSize);
        item.UnlockBits(bmpData);
        ms.SetData(bmpSize, a);
        SetSampleTime(ms, i, AvgTimePerFrame);
        smpsrc.DeliverSample(1000, ms);
        i++;
    }
    smpsrc.DeliverEndOfStream(1000);
}
byte[] GetByteArrayFromWritableBitmap(WriteableBitmap bitmapSource)
{
    var width = bitmapSource.PixelWidth;
    var height = bitmapSource.PixelHeight;
    var stride = width * ((bitmapSource.Format.BitsPerPixel + 7) / 8);
    var bitmapData = new byte[height * stride];
    bitmapSource.CopyPixels(bitmapData, stride, 0);
    return bitmapData;
}

private int GetBitmapSize(WriteableBitmap bmp)
{
    int BytesPerLine = (((int)bmp.Width * 24 + 31) & ~31) / 8;
    return BytesPerLine * (int)bmp.Height;
}

private int GetBitmapSize(Bitmap bmp)
{
    int BytesPerLine = ((bmp.Width * 24 + 31) & ~31) / 8;
    return BytesPerLine * bmp.Height;
}
It throws an out-of-memory exception when the ms.SetData(bmpSize, a); statement executes. Also, if I directly pass a byte[] obtained via var a = System.IO.File.ReadAllBytes(imagePath); to ms.SetData(bmpSize, a);, it does not throw an error, but the video file is not created properly.
Can anybody please help me?
There are a couple of problems with your code:
Are all your images 320x240 pixels? If not, you should resize them to these exact dimensions before delivering them as video samples to the Convert control; see the sketch below.
If you want to use a different size, you can, but it must be the same size for all images, and you should modify the code accordingly.
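A minimal sketch of that resize step with System.Drawing, assuming the foreach loop and the item variable from your code (the Bitmap(Image, Size) constructor rescales on copy):

// Inside the loop: force every frame to the 320x240 size declared in the media type.
using (var frame = new System.Drawing.Bitmap(item, new System.Drawing.Size(320, 240)))
{
    bmpData = frame.LockBits(rect, ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
    Marshal.Copy(bmpData.Scan0, a, 0, bmpSize);
    frame.UnlockBits(bmpData);
}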
You are setting the TargetFormat property to WMVMux, but the output file name has an “.avi” extension. If you want to save AVI files, set TargetFormat = TargetFormatType.AVI.
If you still face problems after this, feel free to contact support@leadtools.com and provide full details about what you tried and what errors you got. Email support is free for LEADTOOLS SDK owners and also for free evaluation users.
I am trying to capture a desktop screenshot using SharpDX. My application is able to capture a screenshot, but the labels in Windows Explorer are missing.
I tried two solutions, but nothing changed. I also tried to find any relevant information in the documentation, without luck.
Here is my code:
public void SCR()
{
    uint numAdapter = 0; // # of graphics card adapter
    uint numOutput = 0;  // # of output device (i.e. monitor)

    // create device and factory
    var device = new SharpDX.Direct3D11.Device(SharpDX.Direct3D.DriverType.Hardware);
    var factory = new Factory1();

    // creating CPU-accessible texture resource
    var texdes = new SharpDX.Direct3D11.Texture2DDescription
    {
        CpuAccessFlags = SharpDX.Direct3D11.CpuAccessFlags.Read,
        BindFlags = SharpDX.Direct3D11.BindFlags.None,
        Format = Format.B8G8R8A8_UNorm,
        Height = factory.Adapters1[numAdapter].Outputs[numOutput].Description.DesktopBounds.Height,
        Width = factory.Adapters1[numAdapter].Outputs[numOutput].Description.DesktopBounds.Width,
        OptionFlags = SharpDX.Direct3D11.ResourceOptionFlags.None,
        MipLevels = 1,
        ArraySize = 1
    };
    texdes.SampleDescription.Count = 1;
    texdes.SampleDescription.Quality = 0;
    texdes.Usage = SharpDX.Direct3D11.ResourceUsage.Staging;
    var screenTexture = new SharpDX.Direct3D11.Texture2D(device, texdes);

    // duplicate output stuff
    var output = new Output1(factory.Adapters1[numAdapter].Outputs[numOutput].NativePointer);
    var duplicatedOutput = output.DuplicateOutput(device);
    SharpDX.DXGI.Resource screenResource = null;
    SharpDX.DataStream dataStream;
    Surface screenSurface;

    var i = 0;
    var milliseconds = 2500000;
    while (true)
    {
        i++;

        // try to get a duplicated frame within the given time
        try
        {
            SharpDX.DXGI.OutputDuplicateFrameInformation duplicateFrameInformation;
            duplicatedOutput.AcquireNextFrame(milliseconds, out duplicateFrameInformation, out screenResource);
        }
        catch (SharpDX.SharpDXException e)
        {
            if (e.ResultCode.Code == SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code)
            {
                // this has not been a successful capture
                // thanks @Randy
                // keep retrying
                continue;
            }
            else
            {
                throw;
            }
        }

        device.ImmediateContext.CopyResource(screenResource.QueryInterface<SharpDX.Direct3D11.Resource>(), screenTexture);
        screenSurface = screenTexture.QueryInterface<Surface>();
        // screenSurface.Map(SharpDX.DXGI.MapFlags.Read, out dataStream);

        int width = output.Description.DesktopBounds.Width;
        int height = output.Description.DesktopBounds.Height;
        var boundsRect = new System.Drawing.Rectangle(0, 0, width, height);
        var mapSource = device.ImmediateContext.MapSubresource(screenTexture, 0, MapMode.Read, SharpDX.Direct3D11.MapFlags.None);

        using (var bitmap = new System.Drawing.Bitmap(width, height, PixelFormat.Format32bppArgb))
        {
            // Copy pixels from screen capture Texture to GDI bitmap
            var bitmapData = bitmap.LockBits(boundsRect, ImageLockMode.WriteOnly, bitmap.PixelFormat);
            var sourcePtr = mapSource.DataPointer;
            var destinationPtr = bitmapData.Scan0;
            for (int y = 0; y < height; y++)
            {
                // Copy a single line
                Utilities.CopyMemory(destinationPtr, sourcePtr, width * 4);

                // Advance pointers
                sourcePtr = IntPtr.Add(sourcePtr, mapSource.RowPitch);
                destinationPtr = IntPtr.Add(destinationPtr, bitmapData.Stride);
            }

            // Release source and dest locks
            bitmap.UnlockBits(bitmapData);
            device.ImmediateContext.UnmapSubresource(screenTexture, 0);
            bitmap.Save(string.Format(@"d:\scr\{0}.png", i));
        }

        // var image = FromByte(ToByte(dataStream));
        //var image = getImageFromDXStream(1920, 1200, dataStream);
        //image.Save(string.Format(@"d:\scr\{0}.png", i));
        // dataStream.Close();
        //screenSurface.Unmap();
        screenSurface.Dispose();
        screenResource.Dispose();
        duplicatedOutput.ReleaseFrame();
    }
}
After a few hours of research and googling, I found a working solution: change the bitmap pixel format
From:
PixelFormat.Format32bppArgb
To:
PixelFormat.Format32bppRgb
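Applied to the code above, it is a one-line change where the GDI bitmap is created (the duplicated desktop's alpha channel is not meaningful, and saving it as ARGB can leave parts of the PNG transparent):

// Opaque 32-bit format: alpha from the duplicated desktop is ignored.
using (var bitmap = new System.Drawing.Bitmap(width, height, PixelFormat.Format32bppRgb))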
I'm trying to use SlimDX and DirectX 10 or 11 to control the stereoization process on the nVidia 3D Vision Kit. Thanks to this question I've been able to make it work in DirectX 9. However, due to some missing methods I've been unable to make it work under DirectX 10 or 11.
The algorithm goes like this:
1. Render the left eye image.
2. Render the right eye image.
3. Create a texture able to contain them both PLUS an extra row (so the texture size would be 2 * width, height + 1).
4. Write the NV_STEREO_IMAGE_SIGNATURE value into that extra row.
5. Render this texture on the screen.
My test code skips the first two steps, as I already have a stereo texture. It was a former .JPS file, specifically one of those included in the sample pictures that come with the nvidia 3D kit. Step 5 uses a full-screen quad and a shader to render the stereoized texture onto it through an orthographic projection matrix. The sample code I've seen for DX9 doesn't need this and simply calls the StretchRect(...) method to copy the texture back onto the backbuffer, so maybe that is why mine is not working? Is there a similar method to accomplish this in DX10? I thought that rendering onto the backbuffer would theoretically be the same as copying (or StretchRect-ing) a texture onto it, but maybe it is not?
Here is my code (SlimDX):
Stereoization procedure
static Texture2D Make3D(Texture2D stereoTexture)
{
    // stereoTexture contains a stereo image with the left eye image on the left half
    // and the right eye image on the right half.
    // This staging texture will have an extra row to contain the stereo signature.
    Texture2DDescription stagingDesc = new Texture2DDescription()
    {
        ArraySize = 1,
        Width = 3840,
        Height = 1081,
        BindFlags = BindFlags.None,
        CpuAccessFlags = CpuAccessFlags.Write,
        Format = SlimDX.DXGI.Format.R8G8B8A8_UNorm,
        OptionFlags = ResourceOptionFlags.None,
        Usage = ResourceUsage.Staging,
        MipLevels = 1,
        SampleDescription = new SampleDescription(1, 0)
    };
    Texture2D staging = new Texture2D(device, stagingDesc);

    // Identify the source texture region to copy (all of it)
    ResourceRegion stereoSrcBox = new ResourceRegion { Front = 0, Back = 1, Top = 0, Bottom = 1080, Left = 0, Right = 3840 };

    // Copy it to the staging texture
    device.CopySubresourceRegion(stereoTexture, 0, stereoSrcBox, staging, 0, 0, 0, 0);

    // Map the staging texture for writing
    DataRectangle box = staging.Map(0, MapMode.Write, SlimDX.Direct3D10.MapFlags.None);

    // Go to the last row
    box.Data.Seek(stereoTexture.Description.Width * stereoTexture.Description.Height * 4, System.IO.SeekOrigin.Begin);

    // Write the NVSTEREO header
    box.Data.Write(data, 0, data.Length);
    staging.Unmap(0);

    // Create the final stereoized texture
    Texture2DDescription finalDesc = new Texture2DDescription()
    {
        ArraySize = 1,
        Width = 3840,
        Height = 1081,
        BindFlags = BindFlags.ShaderResource,
        CpuAccessFlags = CpuAccessFlags.Write,
        Format = SlimDX.DXGI.Format.R8G8B8A8_UNorm,
        OptionFlags = ResourceOptionFlags.None,
        Usage = ResourceUsage.Dynamic,
        MipLevels = 1,
        SampleDescription = new SampleDescription(1, 0)
    };

    // Copy the staging texture to a new texture to be used as a shader resource
    Texture2D final = new Texture2D(device, finalDesc);
    device.CopyResource(staging, final);
    staging.Dispose();
    return final;
}
NV_STEREO_IMAGE_SIGNATURE data
// The NVSTEREO header.
static byte[] data = new byte[] {
    0x4e, 0x56, 0x33, 0x44, // NVSTEREO_IMAGE_SIGNATURE = 0x4433564e
    0x00, 0x0F, 0x00, 0x00, // Screen width * 2 = 1920*2 = 3840 = 0x00000F00
    0x38, 0x04, 0x00, 0x00, // Screen height = 1080 = 0x00000438
    0x20, 0x00, 0x00, 0x00, // dwBPP = 32 = 0x00000020
    0x02, 0x00, 0x00, 0x00  // dwFlags = SIH_SCALE_TO_FIT = 0x00000002
};
Main
private static Device device;

[STAThread]
static void Main()
{
    // Device creation
    var form = new RenderForm("Stereo test") { ClientSize = new Size(1920, 1080) };
    var desc = new SwapChainDescription()
    {
        BufferCount = 1,
        ModeDescription = new ModeDescription(1920, 1080, new Rational(120, 1), Format.R8G8B8A8_UNorm),
        IsWindowed = true,
        OutputHandle = form.Handle,
        SampleDescription = new SampleDescription(1, 0),
        SwapEffect = SwapEffect.Discard,
        Usage = Usage.RenderTargetOutput
    };
    SwapChain swapChain;
    Device.CreateWithSwapChain(null, DriverType.Hardware, DeviceCreationFlags.Debug, desc, out device, out swapChain);

    // Stops Alt+Enter from causing fullscreen screwiness.
    Factory factory = swapChain.GetParent<Factory>();
    factory.SetWindowAssociation(form.Handle, WindowAssociationFlags.IgnoreAll);

    Texture2D backBuffer = Resource.FromSwapChain<Texture2D>(swapChain, 0);
    RenderTargetView renderView = new RenderTargetView(device, backBuffer);
    ImageLoadInformation info = new ImageLoadInformation()
    {
        BindFlags = BindFlags.None,
        CpuAccessFlags = CpuAccessFlags.Read,
        FilterFlags = FilterFlags.None,
        Format = SlimDX.DXGI.Format.R8G8B8A8_UNorm,
        MipFilterFlags = FilterFlags.None,
        OptionFlags = ResourceOptionFlags.None,
        Usage = ResourceUsage.Staging,
        MipLevels = 1
    };

    // Make texture 3D
    Texture2D sourceTexture = Texture2D.FromFile(device, "medusa.jpg", info);
    Texture2D stereoizedTexture = Make3D(sourceTexture);
    ShaderResourceView srv = new ShaderResourceView(device, stereoizedTexture);

    // Create a quad that fills the whole screen
    ushort[] idx;
    TexturedVertex[] quad = CreateTexturedQuad(Vector3.Zero, 1920, 1080, out idx);

    // fill vertex and index buffers
    DataStream stream = new DataStream(4 * 24, true, true);
    stream.WriteRange(quad);
    stream.Position = 0;
    Buffer vertices = new SlimDX.Direct3D10.Buffer(device, stream, new BufferDescription()
    {
        BindFlags = BindFlags.VertexBuffer,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.None,
        SizeInBytes = 4 * 24,
        Usage = ResourceUsage.Default
    });
    stream.Close();

    stream = new DataStream(6 * sizeof(ushort), true, true);
    stream.WriteRange(idx);
    stream.Position = 0;
    Buffer indices = new SlimDX.Direct3D10.Buffer(device, stream, new BufferDescription()
    {
        BindFlags = BindFlags.IndexBuffer,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.None,
        SizeInBytes = 6 * sizeof(ushort),
        Usage = ResourceUsage.Default
    });

    // Create world, view and (ortho) projection matrices
    QuaternionCam qCam = new QuaternionCam();

    // Load effect from file. It is a basic effect that renders a full screen quad through
    // an ortho projection matrix
    Effect effect = Effect.FromFile(device, "Texture.fx", "fx_4_0", ShaderFlags.Debug, EffectFlags.None);
    EffectTechnique technique = effect.GetTechniqueByIndex(0);
    EffectPass pass = technique.GetPassByIndex(0);
    InputLayout layout = new InputLayout(device, pass.Description.Signature, new[]
    {
        new InputElement("POSITION", 0, Format.R32G32B32A32_Float, 0, 0),
        new InputElement("TEXCOORD", 0, Format.R32G32_Float, 16, 0)
    });
    effect.GetVariableByName("mWorld").AsMatrix().SetMatrix(
        Matrix.Translation(Layout.OrthographicTransform(Vector2.Zero, 90, new Size(1920, 1080))));
    effect.GetVariableByName("mView").AsMatrix().SetMatrix(qCam.View);
    effect.GetVariableByName("mProjection").AsMatrix().SetMatrix(qCam.OrthoProjection);
    effect.GetVariableByName("tDiffuse").AsResource().SetResource(srv);

    // Set RT and Viewports
    device.OutputMerger.SetTargets(renderView);
    device.Rasterizer.SetViewports(new Viewport(0, 0, form.ClientSize.Width, form.ClientSize.Height, 0.0f, 1.0f));

    // Create solid rasterizer state
    RasterizerStateDescription rDesc = new RasterizerStateDescription()
    {
        CullMode = CullMode.None,
        IsDepthClipEnabled = true,
        FillMode = FillMode.Solid,
        IsAntialiasedLineEnabled = true,
        IsFrontCounterclockwise = true,
        IsMultisampleEnabled = true
    };
    RasterizerState rState = RasterizerState.FromDescription(device, rDesc);
    device.Rasterizer.State = rState;

    // Main Loop
    MessagePump.Run(form, () =>
    {
        device.ClearRenderTargetView(renderView, Color.Cyan);
        device.InputAssembler.SetInputLayout(layout);
        device.InputAssembler.SetPrimitiveTopology(PrimitiveTopology.TriangleList);
        device.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertices, 24, 0));
        device.InputAssembler.SetIndexBuffer(indices, Format.R16_UInt, 0);
        for (int i = 0; i < technique.Description.PassCount; ++i)
        {
            // Render the full screen quad
            pass.Apply();
            device.DrawIndexed(6, 0, 0);
        }
        swapChain.Present(0, PresentFlags.None);
    });

    // Dispose resources
    vertices.Dispose();
    layout.Dispose();
    effect.Dispose();
    renderView.Dispose();
    backBuffer.Dispose();
    device.Dispose();
    swapChain.Dispose();
    rState.Dispose();
    stereoizedTexture.Dispose();
    sourceTexture.Dispose();
    indices.Dispose();
    srv.Dispose();
}
Thanks in advance!
I eventually managed to fix it. The key was using the CopySubresourceRegion method to copy the stereoized texture back to the backbuffer, specifying its dimensions (e.g. 1920 x 1080 instead of 3840 x 1081).
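In SlimDX terms, a minimal sketch of that final copy as I understand it, reusing stereoizedTexture, backBuffer, device and swapChain from the code above (the region matches the 1920x1080 backbuffer rather than the full 3840x1081 staging size):

// Copy the stereo texture into the backbuffer instead of drawing a quad.
ResourceRegion backBufferRegion = new ResourceRegion
{
    Left = 0, Right = 1920,
    Top = 0, Bottom = 1080,
    Front = 0, Back = 1
};
device.CopySubresourceRegion(stereoizedTexture, 0, backBufferRegion, backBuffer, 0, 0, 0, 0);
swapChain.Present(0, PresentFlags.None);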