IGraphBuilder use of ColorConverter - c#

I am trying to use this code to get pictures from my webcam:
IGraphBuilder _graph = null;
ISampleGrabber _grabber = null;
IBaseFilter _sourceObject = null;
IBaseFilter _grabberObject = null;
IMediaControl _control = null;
// Create the main graph
_graph = Activator.CreateInstance(Type.GetTypeFromCLSID(FilterGraph)) as IGraphBuilder;
// Create the webcam source
_sourceObject = FilterInfo.CreateFilter(_monikerString);
// Create the grabber
_grabber = Activator.CreateInstance(Type.GetTypeFromCLSID(SampleGrabber)) as ISampleGrabber;
_grabberObject = _grabber as IBaseFilter;
// Add the source and grabber to the main graph
_graph.AddFilter(_sourceObject, "source");
_graph.AddFilter(_grabberObject, "grabber");
IPin pin = _sourceObject.GetPin(PinDirection.Output, 0);
IAMStreamConfig streamConfig = pin as IAMStreamConfig;
int count = 0, size = 0;
streamConfig.GetNumberOfCapabilities(out count, out size);
int width = 0, height = 0;
AMMediaType mediaType = null;
AMMediaType mediaTypeCandidate = null;
for (int index = 0; index < count; index++) {
    VideoStreamConfigCaps scc = new VideoStreamConfigCaps();
    int test = streamConfig.GetStreamCaps(index, out mediaTypeCandidate, scc);
    if (mediaTypeCandidate.MajorType == MediaTypes.Video && mediaTypeCandidate.SubType == MediaSubTypes.YUY2) {
        VideoInfoHeader header = (VideoInfoHeader)Marshal.PtrToStructure(mediaTypeCandidate.FormatPtr, typeof(VideoInfoHeader));
        if (header.BmiHeader.Width == 1280 && header.BmiHeader.Height == 720) {
            width = header.BmiHeader.Width;
            height = header.BmiHeader.Height;
            if (mediaType != null)
                mediaType.Dispose();
            mediaType = mediaTypeCandidate;
        } else
            mediaTypeCandidate.Dispose();
    } else
        mediaTypeCandidate.Dispose();
}
streamConfig.SetFormat(mediaType);
It works, but I do not see the image which is generated by this code:
uint pcount = (uint)(_capGrabber.Width * _capGrabber.Height * PixelFormats.Bgr32.BitsPerPixel / 8);
// Create a file mapping
_section = CreateFileMapping(new IntPtr(-1), IntPtr.Zero, 0x04, 0, pcount, null);
_map = MapViewOfFile(_section, 0xF001F, 0, 0, pcount);
// Get the bitmap
BitmapSource = Imaging.CreateBitmapSourceFromMemorySection(_section, _capGrabber.Width,
    _capGrabber.Height, PixelFormats.Bgr32, _capGrabber.Width * PixelFormats.Bgr32.BitsPerPixel / 8, 0) as InteropBitmap;
_capGrabber.Map = _map;
// Invoke event
if (NewBitmapReady != null)
{
    NewBitmapReady(this, null);
}
This happens because the media subtype is YUY2. How can I add a converter to this code? I have read about a Color Converter filter that can be added to the IGraphBuilder. How does that work?

I would not expect CreateBitmapSourceFromMemorySection to accept anything other than flavors of RGB, and it is even less likely to accept a YUY2 media type, so you need the DirectShow pipeline to convert the video stream to RGB before you export it as a managed bitmap/imaging object.
To achieve this, you typically add a Sample Grabber filter initialized to a 24-bit RGB subtype and let DirectShow provide the necessary converters automatically.
See detailed explanation and code snippets here: DirectShow: Examples for Using SampleGrabber for Grabbing a Frame and...
media.majorType = MediaType.Video;
media.subType = MediaSubType.RGB24;
media.formatPtr = IntPtr.Zero;
hr = sampGrabber.SetMediaType(media);
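Applied to the code in the question, that means forcing the grabber's accepted type before the pins are connected. A minimal sketch reusing the question's wrapper types (MediaSubTypes.RGB32 is assumed to exist in that wrapper; RGB32 is chosen here instead of RGB24 so the later Bgr32 bitmap code can stay unchanged):
// Tell the Sample Grabber it only accepts RGB32; when the graph connects
// source -> grabber, DirectShow inserts the required color space converter
// (e.g. the stock Color Converter) automatically.
AMMediaType grabberType = new AMMediaType();
grabberType.MajorType = MediaTypes.Video;
grabberType.SubType = MediaSubTypes.RGB32; // assumed wrapper constant for MEDIASUBTYPE_RGB32
_grabber.SetMediaType(grabberType);
// connect/render the pins only after this call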

Related

Create movie from images c# using leadTools

I am trying to create a movie from images.
I am following these links:
https://www.leadtools.com/support/forum/posts/t11084- (I am trying option 2 mentioned there) and https://social.msdn.microsoft.com/Forums/sqlserver/en-US/b61726a4-4b87-49c7-b4fc-8949cd1366ac/visual-c-visual-studio-2017-how-do-you-convert-jpg-images-to-video-in-visual-c?forum=csharpgeneral
void convert()
{
    bmp = new Bitmap(320, 240, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
    // create sample source object
    SampleSource smpsrc = new SampleSource();
    ConvertCtrl convertCtrl = new ConvertCtrl();
    // create a new media type wrapper
    MediaType mt = new MediaType();
    double AvgTimePerFrame = (10000000 / 15);
    // set the type to 24-bit RGB video
    mt.Type = Constants.MEDIATYPE_Video;
    mt.SubType = Constants.MEDIASUBTYPE_RGB24;
    // set the format
    mt.FormatType = Constants.FORMAT_VideoInfo;
    VideoInfoHeader vih = new VideoInfoHeader();
    int bmpSize = GetBitmapSize(bmp);
    // setup the video info header
    vih.bmiHeader.biCompression = 0; // BI_RGB
    vih.bmiHeader.biBitCount = 24;
    vih.bmiHeader.biWidth = bmp.Width;
    vih.bmiHeader.biHeight = bmp.Height;
    vih.bmiHeader.biPlanes = 1;
    vih.bmiHeader.biSizeImage = bmpSize;
    vih.bmiHeader.biClrImportant = 0;
    vih.AvgTimePerFrame.lowpart = (int)AvgTimePerFrame;
    vih.dwBitRate = bmpSize * 8 * 15;
    mt.SetVideoFormatData(vih, null, 0);
    // set fixed size samples matching the bitmap size
    mt.SampleSize = bmpSize;
    mt.FixedSizeSamples = true;
    // assign the source media type
    smpsrc.SetMediaType(mt);
    // select the LEAD compressor
    convertCtrl.VideoCompressors.MCmpMJpeg.Selected = true;
    convertCtrl.SourceObject = smpsrc;
    convertCtrl.TargetFile = @"D:\Projects\LEADTool_Movie_fromImage\ImageToVideo_LeadTool\ImageToVideo_LeadTool\Images\Out\aa.avi";
    //convertCtrl.TargetFile = "C:\\Users\\vipul.langalia\\Documents\\count.avi";
    convertCtrl.TargetFormat = TargetFormatType.WMVMux;
    convertCtrl.StartConvert();
    BitmapData bmpData;
    int i = 1;
    byte[] a = new byte[bmpSize];
    System.Drawing.Rectangle rect = new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height);
    var imgs = GetAllFiles();
    foreach (var item in imgs)
    {
        bmpSize = GetBitmapSize(item);
        MediaSample ms = smpsrc.GetSampleBuffer(30000);
        ms.SyncPoint = true;
        bmpData = item.LockBits(rect, ImageLockMode.ReadWrite, item.PixelFormat);
        Marshal.Copy(bmpData.Scan0, a, 0, bmpSize);
        item.UnlockBits(bmpData);
        ms.SetData(bmpSize, a);
        SetSampleTime(ms, i, AvgTimePerFrame);
        smpsrc.DeliverSample(1000, ms);
        i++;
    }
    smpsrc.DeliverEndOfStream(1000);
}
byte[] GetByteArrayFroMWritableBitmap(WriteableBitmap bitmapSource)
{
    var width = bitmapSource.PixelWidth;
    var height = bitmapSource.PixelHeight;
    var stride = width * ((bitmapSource.Format.BitsPerPixel + 7) / 8);
    var bitmapData = new byte[height * stride];
    bitmapSource.CopyPixels(bitmapData, stride, 0);
    return bitmapData;
}
private int GetBitmapSize(WriteableBitmap bmp)
{
    // (width * 24 + 31) & ~31 rounds each scan line up to a multiple of 32 bits,
    // the DWORD alignment GDI bitmaps require
    int BytesPerLine = (((int)bmp.Width * 24 + 31) & ~31) / 8;
    return BytesPerLine * (int)bmp.Height;
}
private int GetBitmapSize(Bitmap bmp)
{
    int BytesPerLine = ((bmp.Width * 24 + 31) & ~31) / 8;
    return BytesPerLine * bmp.Height;
}
It throws an out-of-memory exception when the ms.SetData(bmpSize, a); statement executes. If I instead pass a byte[] obtained via var a = System.IO.File.ReadAllBytes(imagePath); to ms.SetData(bmpSize, a);, it does not throw, but the video file is not created properly.
Can anybody please help me?
There are a couple of problems with your code:
1. Are all your images 320x240 pixels? If not, you should resize them to those exact dimensions before delivering them as video samples to the convert control. You can use a different size if you want, but it must be the same size for all images, and you should modify the code accordingly.
2. You are setting the TargetFormat property to WMVMux, but the output file name has an ".avi" extension. If you want to save AVI files, set TargetFormat = TargetFormatType.AVI (see the sketch below).
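For the first point, a resize step before LockBits could look like this (Resize is a hypothetical helper, not part of the question's code):
// Scale each source image to the fixed 320x240 sample size (hypothetical helper)
Bitmap Resize(Bitmap src, int width, int height)
{
    var dst = new Bitmap(width, height, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
    using (var g = System.Drawing.Graphics.FromImage(dst))
        g.DrawImage(src, 0, 0, width, height);
    return dst;
}
// ...and for the second point:
convertCtrl.TargetFormat = TargetFormatType.AVI; // matches the .avi extension of TargetFile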
If you still face problems after this, feel free to contact support@leadtools.com and provide full details about what you tried and what errors you're getting. Email support is free for LEADTOOLS SDK owners and also for free evaluation users.

Unity: Converting Texture2D to YUV420P and sending with UDP using FFmpeg

In my Unity game, each frame is rendered into a texture and then put together into a video using FFmpeg. My question is whether I am doing this right, because avcodec_send_frame throws an exception every time.
I am pretty sure that I am doing something wrong, doing it in the wrong order, or simply missing something.
Here is the code for capturing the texture:
void Update() {
    //StartCoroutine(CaptureFrame());
    if (rt == null)
    {
        rect = new Rect(0, 0, captureWidth, captureHeight);
        rt = new RenderTexture(captureWidth, captureHeight, 24);
        frame = new Texture2D(captureWidth, captureHeight, TextureFormat.RGB24, false);
    }
    Camera camera = this.GetComponent<Camera>(); // NOTE: added because there was no reference to camera in original script; must add this script to Camera
    camera.targetTexture = rt;
    camera.Render();
    RenderTexture.active = rt;
    frame.ReadPixels(rect, 0, 0);
    frame.Apply();
    camera.targetTexture = null;
    RenderTexture.active = null;
    byte[] fileData = null;
    fileData = frame.GetRawTextureData();
    encoding(fileData, fileData.Length);
}
And here is the code for encoding and sending the byte data:
private unsafe void encoding(byte[] bytes, int size)
{
    Debug.Log("Encoding...");
    AVCodec* codec;
    codec = ffmpeg.avcodec_find_encoder(AVCodecID.AV_CODEC_ID_H264);
    int ret, got_output = 0;
    AVCodecContext* codecContext = null;
    codecContext = ffmpeg.avcodec_alloc_context3(codec);
    codecContext->bit_rate = 400000;
    codecContext->width = captureWidth;
    codecContext->height = captureHeight;
    //codecContext->time_base.den = 25;
    //codecContext->time_base.num = 1;
    AVRational timeBase = new AVRational();
    timeBase.num = 1;
    timeBase.den = 25;
    codecContext->time_base = timeBase;
    //AVStream* videoAVStream = null;
    //videoAVStream->time_base = timeBase;
    AVRational frameRate = new AVRational();
    frameRate.num = 25;
    frameRate.den = 1;
    codecContext->framerate = frameRate;
    codecContext->gop_size = 10;
    codecContext->max_b_frames = 1;
    codecContext->pix_fmt = AVPixelFormat.AV_PIX_FMT_YUV420P;
    AVFrame* inputFrame;
    inputFrame = ffmpeg.av_frame_alloc();
    inputFrame->format = (int)codecContext->pix_fmt;
    inputFrame->width = captureWidth;
    inputFrame->height = captureHeight;
    inputFrame->linesize[0] = inputFrame->width;
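    // NOTE: likely issues to check around this point:
    // 1. avcodec_open2(codecContext, codec, null) is never called, so the
    //    encoder is still closed when avcodec_send_frame runs below.
    // 2. The frame's data buffers are never allocated; FFmpeg normally needs
    //    e.g. ffmpeg.av_frame_get_buffer(inputFrame, 32) after format,
    //    width and height are set.
    // 3. Update() captures the texture as RGB24, while src_pix_fmt below is
    //    AV_PIX_FMT_RGBA, and lineSize[0] is computed from captureHeight
    //    where a bytes-per-row value (captureWidth * 4 for RGBA) is expected.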
    AVPixelFormat dst_pix_fmt = AVPixelFormat.AV_PIX_FMT_YUV420P, src_pix_fmt = AVPixelFormat.AV_PIX_FMT_RGBA;
    int src_w = 1920, src_h = 1080, dst_w = 1920, dst_h = 1080;
    SwsContext* sws_ctx;
    GCHandle pinned = GCHandle.Alloc(bytes, GCHandleType.Pinned);
    IntPtr address = pinned.AddrOfPinnedObject();
    sbyte** inputData = (sbyte**)address;
    sws_ctx = ffmpeg.sws_getContext(src_w, src_h, src_pix_fmt,
                                    dst_w, dst_h, dst_pix_fmt,
                                    0, null, null, null);
    fixed (int* lineSize = new int[1])
    {
        lineSize[0] = 4 * captureHeight;
        // Convert RGBA to YUV420P
        ffmpeg.sws_scale(sws_ctx, inputData, lineSize, 0, codecContext->width, inputFrame->extended_data, inputFrame->linesize);
    }
    inputFrame->pts = counter++;
    if (ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0)
        throw new ApplicationException("Error sending a frame for encoding!");
    AVPacket pkt;
    pkt = new AVPacket();
    //pkt.data = inData;
    AVPacket* packet = &pkt;
    ffmpeg.av_init_packet(packet);
    Debug.Log("pkt.size " + pkt.size);
    pinned.Free();
    AVDictionary* options = null;
    ffmpeg.av_dict_set(&options, "pkt_size", "1300", 0);
    ffmpeg.av_dict_set(&options, "buffer_size", "65535", 0);
    AVIOContext* server = null;
    ffmpeg.avio_open2(&server, "udp://192.168.0.1:1111", ffmpeg.AVIO_FLAG_WRITE, null, &options);
    Debug.Log("encoded");
    ret = ffmpeg.avcodec_encode_video2(codecContext, &pkt, inputFrame, &got_output);
    ffmpeg.avio_write(server, pkt.data, pkt.size);
    ffmpeg.av_free_packet(&pkt);
    pkt.data = null;
    pkt.size = 0;
}
And every time I start the game
if (ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0)
    throw new ApplicationException("Error sending a frame for encoding!");
throws the exception.
Any help in fixing the issue would be greatly appreciated :)

FileLabels are not visible

I am trying to capture a desktop screenshot using SharpDX. My application is able to capture a screenshot, but the file labels in Windows Explorer are missing from the captured image.
I tried two solutions, but nothing changed, and I could not find anything helpful in the documentation.
Here is my code:
public void SCR()
{
    uint numAdapter = 0; // # of graphics card adapter
    uint numOutput = 0; // # of output device (i.e. monitor)
    // create device and factory
    var device = new SharpDX.Direct3D11.Device(SharpDX.Direct3D.DriverType.Hardware);
    var factory = new Factory1();
    // creating CPU-accessible texture resource
    var texdes = new SharpDX.Direct3D11.Texture2DDescription
    {
        CpuAccessFlags = SharpDX.Direct3D11.CpuAccessFlags.Read,
        BindFlags = SharpDX.Direct3D11.BindFlags.None,
        Format = Format.B8G8R8A8_UNorm,
        Height = factory.Adapters1[numAdapter].Outputs[numOutput].Description.DesktopBounds.Height,
        Width = factory.Adapters1[numAdapter].Outputs[numOutput].Description.DesktopBounds.Width,
        OptionFlags = SharpDX.Direct3D11.ResourceOptionFlags.None,
        MipLevels = 1,
        ArraySize = 1
    };
    texdes.SampleDescription.Count = 1;
    texdes.SampleDescription.Quality = 0;
    texdes.Usage = SharpDX.Direct3D11.ResourceUsage.Staging;
    var screenTexture = new SharpDX.Direct3D11.Texture2D(device, texdes);
    // duplicate output stuff
    var output = new Output1(factory.Adapters1[numAdapter].Outputs[numOutput].NativePointer);
    var duplicatedOutput = output.DuplicateOutput(device);
    SharpDX.DXGI.Resource screenResource = null;
    SharpDX.DataStream dataStream;
    Surface screenSurface;
    var i = 0;
    var milliseconds = 2500000;
    while (true)
    {
        i++;
        // try to get duplicated frame within given time
        try
        {
            SharpDX.DXGI.OutputDuplicateFrameInformation duplicateFrameInformation;
            duplicatedOutput.AcquireNextFrame(milliseconds, out duplicateFrameInformation, out screenResource);
        }
        catch (SharpDX.SharpDXException e)
        {
            if (e.ResultCode.Code == SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code)
            {
                // this has not been a successful capture
                // thanks @Randy
                // keep retrying
                continue;
            }
            else
            {
                throw e;
            }
        }
        device.ImmediateContext.CopyResource(screenResource.QueryInterface<SharpDX.Direct3D11.Resource>(), screenTexture);
        screenSurface = screenTexture.QueryInterface<Surface>();
        // screenSurface.Map(SharpDX.DXGI.MapFlags.Read, out dataStream);
        int width = output.Description.DesktopBounds.Width;
        int height = output.Description.DesktopBounds.Height;
        var boundsRect = new System.Drawing.Rectangle(0, 0, width, height);
        var mapSource = device.ImmediateContext.MapSubresource(screenTexture, 0, MapMode.Read, SharpDX.Direct3D11.MapFlags.None);
        using (var bitmap = new System.Drawing.Bitmap(width, height, PixelFormat.Format32bppArgb))
        {
            // Copy pixels from screen capture Texture to GDI bitmap
            var bitmapData = bitmap.LockBits(boundsRect, ImageLockMode.WriteOnly, bitmap.PixelFormat);
            var sourcePtr = mapSource.DataPointer;
            var destinationPtr = bitmapData.Scan0;
            for (int y = 0; y < height; y++)
            {
                // Copy a single line
                Utilities.CopyMemory(destinationPtr, sourcePtr, width * 4);
                // Advance pointers
                sourcePtr = IntPtr.Add(sourcePtr, mapSource.RowPitch);
                destinationPtr = IntPtr.Add(destinationPtr, bitmapData.Stride);
            }
            // Release source and dest locks
            bitmap.UnlockBits(bitmapData);
            device.ImmediateContext.UnmapSubresource(screenTexture, 0);
            bitmap.Save(string.Format(@"d:\scr\{0}.png", i));
        }
        // var image = FromByte(ToByte(dataStream));
        //var image = getImageFromDXStream(1920, 1200, dataStream);
        //image.Save(string.Format(@"d:\scr\{0}.png", i));
        // dataStream.Close();
        //screenSurface.Unmap();
        screenSurface.Dispose();
        screenResource.Dispose();
        duplicatedOutput.ReleaseFrame();
    }
}
After a few hours of research and googling, I found a working solution: change
PixelFormat.Format32bppArgb
to
PixelFormat.Format32bppRgb
The desktop texture delivered by DXGI output duplication is B8G8R8A8, but its alpha channel is not meaningful: GDI-rendered content such as icon label text leaves alpha at 0, so those pixels become invisible when the bitmap is saved as ARGB. Format32bppRgb simply ignores the alpha byte.
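That is, the bitmap creation line becomes:
using (var bitmap = new System.Drawing.Bitmap(width, height, PixelFormat.Format32bppRgb))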

BufferCB not being called by SampleGrabber

I'm using a SampleGrabber to get audio data, but my BufferCB method is not being executed. What am I doing wrong?
//add Sample Grabber
IBaseFilter pSampleGrabber = (IBaseFilter)Activator.CreateInstance(Type.GetTypeFromCLSID(CLSID_SampleGrabber));
hr = pGraph.AddFilter(pSampleGrabber, "SampleGrabber");
checkHR(hr, "Can't add Sample Grabber");
AMMediaType pSampleGrabber_pmt = new AMMediaType();
//pSampleGrabber_pmt.majorType = MediaType.Audio;
pSampleGrabber_pmt.subType = MediaSubType.PCM;
pSampleGrabber_pmt.formatType = FormatType.WaveEx;
pSampleGrabber_pmt.fixedSizeSamples = true;
pSampleGrabber_pmt.formatSize = 18;
pSampleGrabber_pmt.sampleSize = 2;
WaveFormatEx pSampleGrabber_Format = new WaveFormatEx();
pSampleGrabber_Format.wFormatTag = 1;
pSampleGrabber_Format.nChannels = 1;
pSampleGrabber_Format.nSamplesPerSec = 48000;
pSampleGrabber_Format.nAvgBytesPerSec = 96000;
pSampleGrabber_Format.nBlockAlign = 2;
pSampleGrabber_Format.wBitsPerSample = 16;
pSampleGrabber_pmt.formatPtr = Marshal.AllocCoTaskMem(Marshal.SizeOf(pSampleGrabber_Format));
Marshal.StructureToPtr(pSampleGrabber_Format, pSampleGrabber_pmt.formatPtr, false);
hr = ((ISampleGrabber)pSampleGrabber).SetMediaType(pSampleGrabber_pmt);
DsUtils.FreeAMMediaType(pSampleGrabber_pmt);
checkHR(hr, "Can't set media type to sample grabber");
ISampleGrabber pGrabber = new SampleGrabber() as ISampleGrabber;
pGrabber = (ISampleGrabber)pSampleGrabber;
pGrabber.SetCallback(null, 1);
My BufferCB method looks like this:
public int BufferCB(double SampleTime, IntPtr pBuffer, int BufferLen)
{
    return 0;
}
You created and configured one instance, pSampleGrabber, and then attached your callback to another, unused instance, pGrabber.
You need
pSampleGrabber as ISampleGrabber
instead of
new SampleGrabber() as ISampleGrabber
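A minimal sketch of the corrected wiring (note two things: the callback is attached to the already-configured pSampleGrabber, and an actual callback object is passed; the question passes null, which also prevents BufferCB from ever firing, and the class containing BufferCB must implement ISampleGrabberCB):
ISampleGrabber pGrabber = (ISampleGrabber)pSampleGrabber;
hr = pGrabber.SetCallback(this, 1); // 1 = call BufferCB, 0 = SampleCB
checkHR(hr, "Can't set sample grabber callback");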

Setting up the constant buffer using SlimDX

I've been following the Microsoft Direct3D 11 tutorials, but using C# and SlimDX. I'm trying to set the constant buffer but am not sure how to create or set it.
I'm simply trying to set three matrices (world, view and projection) using a constant buffer, but I'm struggling at every stage: creation, filling it with data, and passing it to the shader.
The HLSL on MSDN (which I've essentially copied) is:
cbuffer ConstantBuffer : register( b0 )
{
    matrix World;
    matrix View;
    matrix Projection;
}
The C++ code on MSDN is:
ID3D11Buffer* g_pConstantBuffer = NULL;
XMMATRIX g_World;
XMMATRIX g_View;
XMMATRIX g_Projection;
//set up the constant buffer
D3D11_BUFFER_DESC bd;
ZeroMemory( &bd, sizeof(bd) );
bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(ConstantBuffer);
bd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
bd.CPUAccessFlags = 0;
if( FAILED( g_pd3dDevice->CreateBuffer( &bd, NULL, &g_pConstantBuffer ) ) )
    return hr;
//
// Update variables
//
ConstantBuffer cb;
cb.mWorld = XMMatrixTranspose( g_World );
cb.mView = XMMatrixTranspose( g_View );
cb.mProjection = XMMatrixTranspose( g_Projection );
g_pImmediateContext->UpdateSubresource( g_pConstantBuffer, 0, NULL, &cb, 0, 0 );
Does anybody know how to translate this to SlimDX? Or if anybody knows any SlimDX tutorials or resources that would also be useful.
Thanks.
Something similar to this should work:
var buffer = new Buffer(device, new BufferDescription {
    Usage = ResourceUsage.Default,
    SizeInBytes = sizeof(ConstantBuffer),
    BindFlags = BindFlags.ConstantBuffer
});
var cb = new ConstantBuffer();
cb.World = Matrix.Transpose(world);
cb.View = Matrix.Transpose(view);
cb.Projection = Matrix.Transpose(projection);
var data = new DataStream(sizeof(ConstantBuffer), true, true);
data.Write(cb);
data.Position = 0;
context.UpdateSubresource(new DataBox(0, 0, data), buffer, 0);
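For this to work, ConstantBuffer must be a sequential-layout struct; note also that sizeof(ConstantBuffer) only compiles in an unsafe context, with Marshal.SizeOf as the safe alternative. A sketch, including the binding step the question asks about (slot 0 corresponds to register(b0) in the HLSL):
// requires: using System.Runtime.InteropServices; Matrix is SlimDX's matrix type
[StructLayout(LayoutKind.Sequential)]
struct ConstantBuffer
{
    public Matrix World;
    public Matrix View;
    public Matrix Projection;
}

// safe size computation, if you don't want an unsafe context:
int cbSize = Marshal.SizeOf(typeof(ConstantBuffer));
// bind the buffer to the vertex shader stage, slot 0 (register b0):
context.VertexShader.SetConstantBuffer(buffer, 0);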
