DirectShowLib - Save video with overlay text - c#

How do I save the graph's output to an AVI file after processing? I have managed to grab frames with the overlaid text. I know there is a method SetOutputFileName(), but how do I use it here?
private Bitmap bitmapOverlay;
private IFilterGraph2 m_FilterGraph;
void GO()
{
    SetupGraph("C:\\Export.avi");
    SetupBitmap();
    IMediaControl mediaCtrl = m_FilterGraph as IMediaControl;
    int hr = mediaCtrl.Run();
    DsError.ThrowExceptionForHR(hr);
}
private void SetupGraph(string FileName)
{
    int hr;
    IBaseFilter ibfRenderer = null;
    ISampleGrabber sampGrabber = null;
    IBaseFilter capFilter = null;
    IPin iPinInFilter = null;
    IPin iPinOutFilter = null;
    IPin iPinInDest = null;

    // Get the graph builder object
    m_FilterGraph = new FilterGraph() as IFilterGraph2;
    // Get the SampleGrabber interface
    sampGrabber = new SampleGrabber() as ISampleGrabber;
    // Add the video source
    hr = m_FilterGraph.AddSourceFilter(FileName, "Ds.NET FileFilter", out capFilter);
    DsError.ThrowExceptionForHR(hr);
    // Hopefully this will be the video pin
    IPin iPinOutSource = DsFindPin.ByDirection(capFilter, PinDirection.Output, 0);
    IBaseFilter baseGrabFlt = sampGrabber as IBaseFilter;
    ConfigureSampleGrabber(sampGrabber);
    iPinInFilter = DsFindPin.ByDirection(baseGrabFlt, PinDirection.Input, 0);
    iPinOutFilter = DsFindPin.ByDirection(baseGrabFlt, PinDirection.Output, 0);
    // Add the frame grabber to the graph
    hr = m_FilterGraph.AddFilter(baseGrabFlt, "Ds.NET Grabber");
    DsError.ThrowExceptionForHR(hr);
    hr = m_FilterGraph.Connect(iPinOutSource, iPinInFilter);
    DsError.ThrowExceptionForHR(hr);
    // Get the default video renderer
    ibfRenderer = (IBaseFilter)new VideoRendererDefault();
    // Add it to the graph
    hr = m_FilterGraph.AddFilter(ibfRenderer, "Ds.NET VideoRendererDefault");
    DsError.ThrowExceptionForHR(hr);
    iPinInDest = DsFindPin.ByDirection(ibfRenderer, PinDirection.Input, 0);
    // Connect the graph. Many other filters automatically get added here
    hr = m_FilterGraph.Connect(iPinOutFilter, iPinInDest);
    DsError.ThrowExceptionForHR(hr);
    SaveSizeInfo(sampGrabber);
}
I process the video by drawing text on each frame.
cc.Save("C:\\Test\\img" + m_Count + ".jpg") - this is how I get snapshots with the superimposed text.
How can I make the processed video get saved to an AVI file?
int ISampleGrabberCB.BufferCB(double SampleTime, IntPtr pBuffer, int BufferLen)
{
    Graphics g;
    String s;
    float sLeft;
    float sTop;
    SizeF d;

    g = Graphics.FromImage(bitmapOverlay);
    g.Clear(System.Drawing.Color.Transparent);
    g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
    // Prepare to put the specified string on the image
    g.DrawRectangle(System.Drawing.Pens.Blue, 0, 0, m_videoWidth - 1, m_videoHeight - 1);
    g.DrawRectangle(System.Drawing.Pens.Blue, 1, 1, m_videoWidth - 3, m_videoHeight - 3);
    d = g.MeasureString(m_String, fontOverlay);
    sLeft = (m_videoWidth - d.Width) / 2;
    sTop = (m_videoHeight - d.Height) / 2;
    g.DrawString(m_String, fontOverlay, System.Drawing.Brushes.Red,
        sLeft, sTop, System.Drawing.StringFormat.GenericTypographic);
    g.Dispose();

    Bitmap v;
    v = new Bitmap(m_videoWidth, m_videoHeight, m_stride,
        PixelFormat.Format32bppArgb, pBuffer);
    v.RotateFlip(RotateFlipType.Rotate180FlipX);
    g = Graphics.FromImage(v);
    g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
    // Draw the overlay bitmap over the video's bitmap
    g.DrawImage(bitmapOverlay, 0, 0, bitmapOverlay.Width, bitmapOverlay.Height);
    Bitmap cc = new Bitmap(v);
    cc.Save("C:\\Test\\img" + m_Count + ".jpg");
    g.Dispose();
    v.Dispose();
    m_Count++;
    return 0;
}

Typically it should look like:
[File reader] -> [AVI demuxer] -> (video pin) -> [Video decoder] -> [Sample grabber] -> [Video encoder] -> [AVI muxer] -> [File writer]
                              \-> (audio pin) -----------------------------------------------------------------^
An AVI file is a media container, so you need to demultiplex it into separate streams and, at the end, multiplex the (modified) streams back into an AVI container. The video stream you get typically contains encoded video, so to modify it you need to decode it, and after modification encode it back to the same format. You don't need to do anything with the audio stream; just route it from the demuxer straight to the muxer. The [File writer] filter lets you specify the output file name.
I don't know what "Ds.NET FileFilter" is or how it can demux and then decode the video, but apparently it can, because you can see your modified picture. The AVI muxer is a standard Microsoft filter (the "AVI Mux"). You still need to choose a video encoder. I'd recommend first building a simple graph in GraphEdit that doesn't modify the picture but just does read -> demux -> decode -> encode -> mux -> write, to verify that you have everything you need and that it works. Then try to play the resulting AVI file.
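To make this concrete in DirectShowLib, one common approach (a sketch, untested here) is to let ICaptureGraphBuilder2 insert the AVI Mux and File Writer for you via its SetOutputFileName method, and route the Sample Grabber's output into the mux instead of into the video renderer. The file name "C:\\Output.avi" is just an example:

```csharp
// Sketch: assumes m_FilterGraph and baseGrabFlt from SetupGraph() above,
// with the grabber's output NOT yet connected to the video renderer.
ICaptureGraphBuilder2 capBuilder = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
int hr = capBuilder.SetFiltergraph((IGraphBuilder)m_FilterGraph);
DsError.ThrowExceptionForHR(hr);

// Adds an "AVI Mux" and a "File Writer" to the graph and sets the file name.
IBaseFilter aviMux;
IFileSinkFilter fileSink;
hr = capBuilder.SetOutputFileName(MediaSubType.Avi, "C:\\Output.avi", out aviMux, out fileSink);
DsError.ThrowExceptionForHR(hr);

// Render the grabber's output into the mux. The 4th argument can be a
// video compressor filter; null writes uncompressed frames.
hr = capBuilder.RenderStream(null, null, baseGrabFlt, null, aviMux);
DsError.ThrowExceptionForHR(hr);
```

The SetOutputFileName mentioned in the question is this ICaptureGraphBuilder2 method: it both creates the File Writer and passes the name to its IFileSinkFilter interface.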

Related

How to modify the Orientation exif tag in a jpg file without changing anything else in the file, using Winforms C# or C++ .Net

I'm writing a program to help me organise the thousands of digital photos I have taken over the years. One feature I want is to be able to rotate an image by modifying the Orientation EXIF tag, without changing anything else in the file. I KNOW this is possible because if you right-click on the file in Windows Explorer and select Rotate Left/Right then exactly that happens - one byte is modified to match the new orientation value. I specifically do NOT want to modify the picture itself.
However everything I have tried either has no effect or significantly changes the file (e.g. reduces it by 14k bytes, presumably by re-encoding it). I have read many posts on several websites and nobody seems to have an answer about my specific problem - mostly they talk about adding extra tags, and the need to add padding, but surely I don't need to add padding if I'm only trying to modify one existing byte (especially as I know that Windows Explorer can do it).
I'm using a C# Windows Forms application running Framework 4.5.2 under Windows 10 Pro. Also tried doing it from C++. Thanks to all the contributors whose examples I have built upon.
Here are 5 bare-bones console app examples:
Basic C# using System.Drawing.Image class. This sets the Orientation tag OK but reduces the size i.e. re-encodes the picture.
static void Main(string[] args)
{
    const int EXIF_ORIENTATION = 0x0112;
    try
    {
        using (Image image = Image.FromFile("Test.jpg"))
        {
            System.Drawing.Imaging.PropertyItem orientation = image.GetPropertyItem(EXIF_ORIENTATION);
            byte o = 6; // Rotate 90 degrees clockwise
            orientation.Value[0] = o;
            image.SetPropertyItem(orientation);
            image.Save("Test2.jpg");
        }
    }
    catch (Exception ex)
    {
    }
}
The InPlaceBitmapMetadataWriter class looks like exactly what I need, and the debug lines suggest it is modifying the EXIF tag, but the file is not modified, i.e. the changes are not written out.
static void Main(string[] args)
{
    try
    {
        Stream stream = new System.IO.FileStream("Test.JPG", FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite);
        JpegBitmapDecoder jpegDecoder = new JpegBitmapDecoder(stream, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default);
        BitmapFrame frame = jpegDecoder.Frames[0];
        InPlaceBitmapMetadataWriter inplace = frame.CreateInPlaceBitmapMetadataWriter();
        ushort u = 6; // Rotate 90 degrees clockwise
        object i1 = inplace.GetQuery("/app1/ifd/{ushort=274}"); // DEBUG - this is what it was before - 1
        if (inplace.TrySave() == true)
        {
            inplace.SetQuery("/app1/ifd/{ushort=274}", u);
        }
        object i2 = inplace.GetQuery("/app1/ifd/{ushort=274}"); // DEBUG - this is what it is after - 6
        stream.Close();
    }
    catch (Exception ex)
    {
    }
}
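One detail worth checking in the snippet above: SetQuery() is only called after TrySave() has already committed, so the orientation change never reaches the file, it only updates the in-memory writer (which is why the debug line reads back 6). The documented order is the reverse - set the query first, then try to save in place. A sketch of the corrected sequence (untested):

```csharp
InPlaceBitmapMetadataWriter inplace = frame.CreateInPlaceBitmapMetadataWriter();
inplace.SetQuery("/app1/ifd/{ushort=274}", (ushort)6); // set the value first...
if (!inplace.TrySave())                                // ...then commit in place
{
    // Not enough metadata padding to save in place;
    // fall back to the re-encoding approach shown next.
}
stream.Close();
```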
An evolution of the above, which explicitly writes out the file. This sets the Orientation tag and the file displays OK but reduces the size i.e. re-encodes the picture.
static void Main(string[] args)
{
BitmapCreateOptions createOptions = BitmapCreateOptions.PreservePixelFormat | BitmapCreateOptions.IgnoreColorProfile;
using (Stream originalFile = File.Open("Test.JPG", FileMode.Open, FileAccess.ReadWrite))
{
BitmapDecoder original = BitmapDecoder.Create(originalFile, createOptions, BitmapCacheOption.None);
if (!original.CodecInfo.FileExtensions.Contains("jpg"))
{
Console.WriteLine("The file you passed in is not a JPEG.");
return;
}
JpegBitmapEncoder output = new JpegBitmapEncoder();
BitmapFrame frame = original.Frames[0];
BitmapMetadata metadata = frame.Metadata.Clone() as BitmapMetadata;
ushort u = 6;
object i1 = metadata.GetQuery("/app1/ifd/{ushort=274}"); // DEBUG - this is what it was before - 1
metadata.SetQuery("/app1/ifd/{ushort=274}", u);
object i2 = metadata.GetQuery("/app1/ifd/{ushort=274}"); // DEBUG - this is what it was after - 6
output.Frames.Add(BitmapFrame.Create(original.Frames[0], original.Frames[0].Thumbnail, metadata, original.Frames[0].ColorContexts));
using (Stream outputFile = File.Open("Test2.JPG", FileMode.Create, FileAccess.ReadWrite))
{
output.Save(outputFile);
}
}
}
Tried using C++ instead, with some alternate techniques using GDI+. This sets the Orientation tag OK but reduces the size i.e. re-encodes the picture.
// ConsoleApplication4.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <windows.h>
#include <gdiplus.h>
#include <stdio.h>
using namespace Gdiplus;
/*
This rotates the file and saves under a different name, but the file size has been shrunk by 18 KB from 3446 KB to 3428 KB
*/
int GetEncoderClsid(const WCHAR* format, CLSID* pClsid)
{
UINT num = 0; // number of image encoders
UINT size = 0; // size of the image encoder array in bytes
ImageCodecInfo* pImageCodecInfo = NULL;
GetImageEncodersSize(&num, &size);
if (size == 0)
return -1; // Failure
pImageCodecInfo = (ImageCodecInfo*)(malloc(size));
if (pImageCodecInfo == NULL)
return -1; // Failure
GetImageEncoders(num, size, pImageCodecInfo);
for (UINT j = 0; j < num; ++j)
{
if (wcscmp(pImageCodecInfo[j].MimeType, format) == 0)
{
*pClsid = pImageCodecInfo[j].Clsid;
free(pImageCodecInfo);
return j; // Success
}
}
free(pImageCodecInfo);
return -1; // Failure
}
int RotateImage()
{
// Initialize GDI+.
GdiplusStartupInput gdiplusStartupInput;
ULONG_PTR gdiplusToken;
GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);
Status stat;
CLSID clsid;
unsigned short v;
Bitmap* bitmap = new Bitmap(L"Test.JPG");
PropertyItem* propertyItem = new PropertyItem;
// Get the CLSID of the JPEG encoder.
GetEncoderClsid(L"image/jpeg", &clsid);
propertyItem->id = PropertyTagOrientation;
propertyItem->length = 2; // value length in bytes: one unsigned short
propertyItem->type = PropertyTagTypeShort;
v = 6; // Rotate 90 degrees clockwise
propertyItem->value = &v;
bitmap->SetPropertyItem(propertyItem);
stat = bitmap->Save(L"Test2.JPG", &clsid, NULL);
if (stat != Ok) printf("Error saving.\n");
delete propertyItem;
delete bitmap;
GdiplusShutdown(gdiplusToken);
return 0;
}
int main()
{
RotateImage();
return 0;
}
This is a whopper and fairly low-level. This sets the Orientation tag OK but reduces the size i.e. re-encodes the picture.
// ConsoleApplication5.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <Windows.h>
#include <wincodecsdk.h>
/*
This rotates the file and saves under a different name, but the file size has been shrunk by 18 KB from 3446 KB to 3428 KB
*/
int RotateImage()
{
// Initialize COM.
HRESULT hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
IWICImagingFactory *piFactory = NULL;
IWICBitmapDecoder *piDecoder = NULL;
// Create the COM imaging factory.
if (SUCCEEDED(hr))
{
hr = CoCreateInstance(CLSID_WICImagingFactory,
NULL, CLSCTX_INPROC_SERVER,
IID_PPV_ARGS(&piFactory));
}
// Create the decoder.
if (SUCCEEDED(hr))
{
hr = piFactory->CreateDecoderFromFilename(L"Test.JPG", NULL, GENERIC_READ,
WICDecodeMetadataCacheOnDemand, //For JPEG lossless decoding/encoding.
&piDecoder);
}
// Variables used for encoding.
IWICStream *piFileStream = NULL;
IWICBitmapEncoder *piEncoder = NULL;
IWICMetadataBlockWriter *piBlockWriter = NULL;
IWICMetadataBlockReader *piBlockReader = NULL;
WICPixelFormatGUID pixelFormat = { 0 };
UINT count = 0;
double dpiX, dpiY = 0.0;
UINT width, height = 0;
// Create a file stream.
if (SUCCEEDED(hr))
{
hr = piFactory->CreateStream(&piFileStream);
}
// Initialize our new file stream.
if (SUCCEEDED(hr))
{
hr = piFileStream->InitializeFromFilename(L"Test2.jpg", GENERIC_WRITE);
}
// Create the encoder.
if (SUCCEEDED(hr))
{
hr = piFactory->CreateEncoder(GUID_ContainerFormatJpeg, NULL, &piEncoder);
}
// Initialize the encoder
if (SUCCEEDED(hr))
{
hr = piEncoder->Initialize(piFileStream, WICBitmapEncoderNoCache);
}
if (SUCCEEDED(hr))
{
hr = piDecoder->GetFrameCount(&count);
}
if (SUCCEEDED(hr))
{
// Process each frame of the image.
for (UINT i = 0; i < count && SUCCEEDED(hr); i++)
{
// Frame variables.
IWICBitmapFrameDecode *piFrameDecode = NULL;
IWICBitmapFrameEncode *piFrameEncode = NULL;
IWICMetadataQueryReader *piFrameQReader = NULL;
IWICMetadataQueryWriter *piFrameQWriter = NULL;
// Get and create the image frame.
if (SUCCEEDED(hr))
{
hr = piDecoder->GetFrame(i, &piFrameDecode);
}
if (SUCCEEDED(hr))
{
hr = piEncoder->CreateNewFrame(&piFrameEncode, NULL);
}
// Initialize the encoder.
if (SUCCEEDED(hr))
{
hr = piFrameEncode->Initialize(NULL);
}
// Get and set the size.
if (SUCCEEDED(hr))
{
hr = piFrameDecode->GetSize(&width, &height);
}
if (SUCCEEDED(hr))
{
hr = piFrameEncode->SetSize(width, height);
}
// Get and set the resolution.
if (SUCCEEDED(hr))
{
piFrameDecode->GetResolution(&dpiX, &dpiY);
}
if (SUCCEEDED(hr))
{
hr = piFrameEncode->SetResolution(dpiX, dpiY);
}
// Set the pixel format.
if (SUCCEEDED(hr))
{
piFrameDecode->GetPixelFormat(&pixelFormat);
}
if (SUCCEEDED(hr))
{
hr = piFrameEncode->SetPixelFormat(&pixelFormat);
}
// Check that the destination format and source formats are the same.
bool formatsEqual = false;
if (SUCCEEDED(hr))
{
GUID srcFormat;
GUID destFormat;
hr = piDecoder->GetContainerFormat(&srcFormat);
if (SUCCEEDED(hr))
{
hr = piEncoder->GetContainerFormat(&destFormat);
}
if (SUCCEEDED(hr))
{
if (srcFormat == destFormat)
formatsEqual = true;
else
formatsEqual = false;
}
}
if (SUCCEEDED(hr) && formatsEqual)
{
// Copy metadata using metadata block reader/writer.
if (SUCCEEDED(hr))
{
piFrameDecode->QueryInterface(IID_PPV_ARGS(&piBlockReader));
}
if (SUCCEEDED(hr))
{
piFrameEncode->QueryInterface(IID_PPV_ARGS(&piBlockWriter));
}
if (SUCCEEDED(hr))
{
piBlockWriter->InitializeFromBlockReader(piBlockReader);
}
}
if (SUCCEEDED(hr))
{
hr = piFrameEncode->GetMetadataQueryWriter(&piFrameQWriter);
}
if (SUCCEEDED(hr))
{
// Set Orientation.
PROPVARIANT value;
value.vt = VT_UI2;
value.uiVal = 6; // Rotate 90 degrees clockwise
hr = piFrameQWriter->SetMetadataByName(L"/app1/ifd/{ushort=274}", &value);
}
if (SUCCEEDED(hr))
{
hr = piFrameEncode->WriteSource(
static_cast<IWICBitmapSource*> (piFrameDecode),
NULL); // Using NULL enables JPEG loss-less encoding.
}
// Commit the frame.
if (SUCCEEDED(hr))
{
hr = piFrameEncode->Commit();
}
if (piFrameDecode)
{
piFrameDecode->Release();
}
if (piFrameEncode)
{
piFrameEncode->Release();
}
if (piFrameQReader)
{
piFrameQReader->Release();
}
if (piFrameQWriter)
{
piFrameQWriter->Release();
}
}
}
if (SUCCEEDED(hr))
{
piEncoder->Commit();
}
if (SUCCEEDED(hr))
{
piFileStream->Commit(STGC_DEFAULT);
}
if (piFileStream)
{
piFileStream->Release();
}
if (piEncoder)
{
piEncoder->Release();
}
if (piBlockWriter)
{
piBlockWriter->Release();
}
if (piBlockReader)
{
piBlockReader->Release();
}
return 0;
}
int main()
{
RotateImage();
return 0;
}
Again, there are a lot of posts on various sites that are similar but not close enough, and I have tried to apply what they suggest without success. Please accept my apologies if this has indeed been answered elsewhere.
I know I can just live with the slight change to the file, and once it has been changed once it doesn't seem to change again - if I rotate the file by 90 degrees 5 times, it produces the same binary as if I rotate just once. But I can't see why it changes at all when all I want to do is modify the orientation tag, and I know that's possible because Windows Explorer can do it!
The way to do this programmatically is to read the APP1 marker segment, which should come shortly after the SOI marker at the start of the file. Get the JPEG/EXIF documentation for the marker structure.
Once you have the APP1 segment, change the orientation however you want it.
Then write the SOI marker, your modified APP1 segment, and the rest of the JPEG stream after the APP1 segment to a new file.
That's all there is to it. The only complexity is navigating the EXIF documentation to find the orientation entry.
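As a concrete illustration of that marker-level approach, here is a minimal C# sketch of my own (untested against real camera files). It patches the Orientation entry (tag 0x0112, type SHORT) without touching any other byte, and assumes the common case: the APP1/EXIF segment comes before the image data and the Orientation entry lives in IFD0. A robust version would also handle nested IFDs and markers without length fields.

```csharp
// Requires: using System; using System.IO;
static void SetJpegOrientation(string path, ushort orientation)
{
    byte[] d = File.ReadAllBytes(path);
    if (d[0] != 0xFF || d[1] != 0xD8) throw new InvalidDataException("no SOI marker");

    for (int pos = 2; pos + 4 <= d.Length; )
    {
        int marker = (d[pos] << 8) | d[pos + 1];
        int segLen = (d[pos + 2] << 8) | d[pos + 3];       // includes the length bytes
        if (marker == 0xFFE1 && d[pos + 4] == (byte)'E' && d[pos + 5] == (byte)'x')
        {
            int tiff = pos + 10;                            // TIFF header after "Exif\0\0"
            bool le = d[tiff] == (byte)'I';                 // "II" = little-endian, "MM" = big
            Func<int, int> u16 = o => le ? d[o] | (d[o + 1] << 8) : (d[o] << 8) | d[o + 1];
            Func<int, int> u32 = o => le ? u16(o) | (u16(o + 2) << 16) : (u16(o) << 16) | u16(o + 2);

            int ifd0 = tiff + u32(tiff + 4);                // offsets are relative to the TIFF header
            int count = u16(ifd0);
            for (int i = 0; i < count; i++)
            {
                int entry = ifd0 + 2 + 12 * i;              // 12-byte IFD entries
                if (u16(entry) == 0x0112)                   // Orientation, type SHORT, stored inline
                {
                    int v = entry + 8;
                    d[v]     = le ? (byte)(orientation & 0xFF) : (byte)(orientation >> 8);
                    d[v + 1] = le ? (byte)(orientation >> 8)   : (byte)(orientation & 0xFF);
                    File.WriteAllBytes(path, d);            // every other byte unchanged
                    return;
                }
            }
            throw new InvalidDataException("no Orientation entry in IFD0");
        }
        pos += 2 + segLen;                                  // skip to the next marker
    }
    throw new InvalidDataException("no EXIF APP1 segment");
}
```

Because only two bytes inside the existing APP1 segment change, the file size stays identical, which is the behaviour Windows Explorer shows.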
It is not possible to do this losslessly unless the JPEG's width and height are both multiples of 16. If this operation is done in GDI+ and the width and height are not multiples of 16, GDI+ will do its best to keep the compression quality the same. It's the same in .NET.
See also
Transforming a JPEG Image Without Loss of Information
Note, your GDI+ code will only rotate the thumbnail. To rotate the image, use the code below:
void RotateImage()
{
    // new/delete operators are not necessary unless
    // GDI+ startup/shutdown is in the same scope
    Gdiplus::Image image(L"source.jpg");
    if ((image.GetWidth() % 16) != 0 || (image.GetHeight() % 16) != 0)
        wprintf(L"Lossless compression is not possible\n");

    Gdiplus::EncoderParameters encoder_params;
    encoder_params.Count = 1;
    encoder_params.Parameter[0].Guid = Gdiplus::EncoderTransformation;
    encoder_params.Parameter[0].Type = Gdiplus::EncoderParameterValueTypeLong;
    encoder_params.Parameter[0].NumberOfValues = 1;

    // rotate
    ULONG transformation = Gdiplus::EncoderValueTransformRotate90;
    encoder_params.Parameter[0].Value = &transformation;

    CLSID clsid;
    GetEncoderClsid(L"image/jpeg", &clsid);
    auto stat = image.Save(L"destination.jpg", &clsid, &encoder_params);
    wprintf(L"Save %s\n", (stat == Gdiplus::Ok) ? L"succeeded" : L"failed");
}

int main()
{
    Gdiplus::GdiplusStartupInput gdiplusStartupInput;
    ULONG_PTR gdiplusToken;
    GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);
    RotateImage();
    Gdiplus::GdiplusShutdown(gdiplusToken);
    return 0;
}

How to use EZrgb24 filter

Context
I'm trying to apply filters such as contrast, color change, and brightness to every frame of an .avi video.
The video plays just fine with DirectShow.NET and C#.
After a couple of hours of research, I found out that BufferCB was not the way to do the job.
Apparently, EZrgb24 is a filter I can add to my graph that does exactly what I want.
However, I can't get it to work.
Added at the beginning of my class:
[DllImport("ole32.dll", EntryPoint = "CoCreateInstance", CallingConvention = CallingConvention.StdCall)]
static extern UInt32 CoCreateInstance([In, MarshalAs(UnmanagedType.LPStruct)] Guid rclsid,
IntPtr pUnkOuter, UInt32 dwClsContext, [In, MarshalAs(UnmanagedType.LPStruct)] Guid riid,
[MarshalAs(UnmanagedType.IUnknown)] out object rReturnedComObject);
Here is relevant code that works
int hr = 0;
IBaseFilter ibfRenderer = null;
ISampleGrabber sampGrabber = null;
IBaseFilter capFilter = null;
IPin iPinInFilter = null;
IPin iPinOutFilter = null;
IPin iPinInDest = null;
Type comType = null;
object comObj = null;
m_FilterGraph = new FilterGraph() as IFilterGraph2;
try
{
// Get the SampleGrabber interface
sampGrabber = new SampleGrabber() as ISampleGrabber;
// Add the video source
hr = m_FilterGraph.AddSourceFilter(_videoPath, "Ds.NET FileFilter", out capFilter);
DsError.ThrowExceptionForHR(hr);
// Hopefully this will be the video pin
IPin iPinOutSource = DsFindPin.ByDirection(capFilter, PinDirection.Output, 0);
IBaseFilter baseGrabFlt = sampGrabber as IBaseFilter;
ConfigureSampleGrabber(sampGrabber);
iPinInFilter = DsFindPin.ByDirection(baseGrabFlt, PinDirection.Input, 0);
iPinOutFilter = DsFindPin.ByDirection(baseGrabFlt, PinDirection.Output, 0);
// Add the frame grabber to the graph
hr = m_FilterGraph.AddFilter(baseGrabFlt, "Ds.NET Grabber");
DsError.ThrowExceptionForHR(hr);
hr = m_FilterGraph.Connect(iPinOutSource, iPinInFilter);
DsError.ThrowExceptionForHR(hr);
// Get the default video renderer
ibfRenderer = (IBaseFilter)new VideoRendererDefault();
// Add it to the graph
hr = m_FilterGraph.AddFilter(ibfRenderer, "Ds.NET VideoRendererDefault");
DsError.ThrowExceptionForHR(hr);
iPinInDest = DsFindPin.ByDirection(ibfRenderer, PinDirection.Input, 0);
// Connect the graph. Many other filters automatically get added here
hr = m_FilterGraph.Connect(iPinOutFilter, iPinInDest);
DsError.ThrowExceptionForHR(hr);
SaveSizeInfo(sampGrabber);
// HERE WE WANT TO ADD THE EZRGB24 FILTER.
Code that doesn't work:
/*
// { 8B498501-1218-11cf-ADC4-00A0D100041B }
DEFINE_GUID(CLSID_EZrgb24,
0x8b498501, 0x1218, 0x11cf, 0xad, 0xc4, 0x0, 0xa0, 0xd1, 0x0, 0x4, 0x1b);
*/
unsafe
{
Guid IUnknownGuid = new Guid("00000000-0000-0000-C000-000000000046"); // Can this be written in a prettier style?
Guid ezrgbclsid = new Guid(0x8b498501, 0x1218, 0x11cf, 0xad, 0xc4, 0x0, 0xa0, 0xd1, 0x0, 0x4, 0x1b);
uint hr1 = CoCreateInstance(ezrgbclsid, IntPtr.Zero, (uint)(CLSCTX.CLSCTX_INPROC_HANDLER), ezrgbclsid, out x);//CLSCTX_LOCAL_SERVER
IIPEffect myEffect = (IIPEffect)x;// as IIPEffect;
if (hr1 != 0)
{
int iError = Marshal.GetLastWin32Error();
Console.Write("CoCreateInstance Error = {0}, LastWin32Error = {1}", hr1, iError);
}
myEffect.put_IPEffect(1004, 0, 100); //for this filter, look at resource.h for what the int should be, in this case 1002 is the emboss effect
}
My diagnostic
I found out that the value returned in hr1 is the HRESULT for "class not registered".
Which means to me that EZrgb24 is not registered on my computer.
How I tried to solve the problem
Found and downloaded EZRGB.ax on some obscure web site.
Executed the commands:
cd \windows\syswow64
regsvr32 c:\ezrgb24.ax
A message box appeared with "DllRegisterServer in c:\ezrgb24.ax succeeded."
Still doesn't work.
I am using DirectShow.NET; however, this is also tagged directshow, as I feel the solution will work for either C# or C++.
You can use SampleCB instead of BufferCB; the former gives you access to the actual sample data that is streamed onward, so you can modify it.
The typical problem with registration is that you built a 32-bit DLL and you are trying to use it from 64-bit code. The bitnesses have to match.
You need CLSCTX_ALL or CLSCTX_INPROC_SERVER, not CLSCTX_INPROC_HANDLER.
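Beyond the CLSCTX value, note that the failing call also passes the filter CLSID as the riid argument; riid must be an interface IID such as IUnknown, otherwise the call fails even once the DLL is registered. Using the CoCreateInstance declaration from the question, a corrected call might look like this sketch (untested, and it assumes the 32/64-bit registration mismatch is resolved):

```csharp
Guid IID_IUnknown = new Guid("00000000-0000-0000-C000-000000000046");
Guid CLSID_EZrgb24 = new Guid(0x8b498501, 0x1218, 0x11cf,
    0xad, 0xc4, 0x0, 0xa0, 0xd1, 0x0, 0x4, 0x1b);
const uint CLSCTX_INPROC_SERVER = 0x1;

object x;
uint hr1 = CoCreateInstance(CLSID_EZrgb24, IntPtr.Zero,
    CLSCTX_INPROC_SERVER, IID_IUnknown, out x); // riid = IUnknown, not the CLSID
if (hr1 != 0)
    Marshal.ThrowExceptionForHR((int)hr1);

// Add the filter to the graph, then reconnect the graph so it sits
// between the sample grabber and the renderer.
IBaseFilter ezFilter = (IBaseFilter)x;
int hr = m_FilterGraph.AddFilter(ezFilter, "EZrgb24");
DsError.ThrowExceptionForHR(hr);

IIPEffect myEffect = (IIPEffect)x;
myEffect.put_IPEffect(1004, 0, 100);
```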

How can I show how much time is left to play while a video file is playing?

For example, on a label or in a textBox.
This is the code I'm trying now, using DirectShowLib-2005.dll:
private void button5_Click(object sender, EventArgs e)
{
    f = new WmvAdapter(_videoFile);
    TimeSpan ts = TimeSpan.FromTicks(f._duration);
    MessageBox.Show(ts.ToString());
    int t = 1;
    const int WS_CHILD = 0x40000000;
    const int WS_CLIPCHILDREN = 0x2000000;
    _videoFile = Options_DB.get_loadedVideo();
    FilgraphManager graphManager = new FilgraphManager();
    graphManager.RenderFile(_videoFile);
    videoWindow = (IVideoWindow)graphManager;
    videoWindow.Owner = (int)pictureBox1.Handle;
    videoWindow.WindowStyle = WS_CHILD | WS_CLIPCHILDREN;
    videoWindow.SetWindowPosition(
        pictureBox1.ClientRectangle.Left,
        pictureBox1.ClientRectangle.Top,
        pictureBox1.ClientRectangle.Width,
        pictureBox1.ClientRectangle.Height);
    mc = (IMediaControl)graphManager;
    mc.Run();
}
When I click the button the file plays, and first I see the duration in the MessageBox.Show, which shows me: 00:02:47.4800000.
So the first thing is that the duration is wrong, since the file's play length is 00:04:36 when I look at the file on the hard disk.
My goal is to show, on a progressBar or for now just on a label, the time left while the video plays, counting backwards. If the duration is 00:04:36, I want to see it go back 00:04:35 ... 00:04:34 and so on.
The variable _duration is a long and I tried to convert it to a TimeSpan.
But the video length is not the same as it is when I look at the file on the hard disk.
This is the function (which I didn't write; I'm just using it from the WmvAdapter class):
private void SetupGraph(string file)
{
ISampleGrabber sampGrabber = null;
IBaseFilter capFilter = null;
IBaseFilter nullrenderer = null;
_filterGraph = (IFilterGraph2)new FilterGraph();
_mediaCtrl = (IMediaControl)_filterGraph;
_mediaEvent = (IMediaEvent)_filterGraph;
_mSeek = (IMediaSeeking)_filterGraph;
var mediaFilt = (IMediaFilter)_filterGraph;
try
{
// Add the video source
int hr = _filterGraph.AddSourceFilter(file, "Ds.NET FileFilter", out capFilter);
DsError.ThrowExceptionForHR(hr);
// Get the SampleGrabber interface
sampGrabber = new SampleGrabber() as ISampleGrabber;
var baseGrabFlt = sampGrabber as IBaseFilter;
ConfigureSampleGrabber(sampGrabber);
// Add the frame grabber to the graph
hr = _filterGraph.AddFilter(baseGrabFlt, "Ds.NET Grabber");
DsError.ThrowExceptionForHR(hr);
// ---------------------------------
// Connect the file filter to the sample grabber
// Hopefully this will be the video pin; we could check by reading its media type
IPin iPinOut = DsFindPin.ByDirection(capFilter, PinDirection.Output, 0);
// Get the input pin from the sample grabber
IPin iPinIn = DsFindPin.ByDirection(baseGrabFlt, PinDirection.Input, 0);
hr = _filterGraph.Connect(iPinOut, iPinIn);
DsError.ThrowExceptionForHR(hr);
// Add the null renderer to the graph
nullrenderer = new NullRenderer() as IBaseFilter;
hr = _filterGraph.AddFilter(nullrenderer, "Null renderer");
DsError.ThrowExceptionForHR(hr);
// ---------------------------------
// Connect the sample grabber to the null renderer
iPinOut = DsFindPin.ByDirection(baseGrabFlt, PinDirection.Output, 0);
iPinIn = DsFindPin.ByDirection(nullrenderer, PinDirection.Input, 0);
hr = _filterGraph.Connect(iPinOut, iPinIn);
DsError.ThrowExceptionForHR(hr);
// Turn off the clock. This causes the frames to be sent
// thru the graph as fast as possible
hr = mediaFilt.SetSyncSource(null);
DsError.ThrowExceptionForHR(hr);
// Read and cache the image sizes
SaveSizeInfo(sampGrabber);
//Edit: get the duration
hr = _mSeek.GetDuration(out _duration);
DsError.ThrowExceptionForHR(hr);
}
finally
{
if (capFilter != null)
{
Marshal.ReleaseComObject(capFilter);
}
if (sampGrabber != null)
{
Marshal.ReleaseComObject(sampGrabber);
}
if (nullrenderer != null)
{
Marshal.ReleaseComObject(nullrenderer);
}
GC.Collect();
}
}
The duration before I converted it to a TimeSpan was 1674800000 in the variable _duration.
I have tried a lot of examples, but I couldn't get far beyond the TimeSpan conversion.
How can I do it, please?
Thank you.
This question seems related: Determine length of audio file using DirectShow
The answer there states:
GetDuration returns a 64-bit integer value for how long it would take to play the file.
You will need to call the GetTimeFormat method to find out what units the duration is in. The most likely default value is TIME_FORMAT_MEDIA_TIME, which is tenths of a microsecond (100-nanosecond units).
In that case you would divide the duration by 10*1000*1000 to get seconds.
You can also call SetTimeFormat before calling GetDuration if you want to force the units.
So in your case, I'd use GetTimeFormat() to figure out the units and use that to convert the value correctly for a TimeSpan object.
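Two things follow from this. First, a 100-nanosecond media-time unit is exactly one .NET tick, so for TIME_FORMAT_MEDIA_TIME the TimeSpan.FromTicks conversion is already consistent: 1674800000 units / 10,000,000 = 167.48 s, i.e. 00:02:47.48, which matches the MessageBox output. The surprise is the duration the graph reports, not the arithmetic. Second, a countdown just needs the current position subtracted from the duration. A sketch (untested; labelTimeLeft is a hypothetical WinForms label you would update from a Timer tick, and _mSeek is the IMediaSeeking from SetupGraph):

```csharp
// Check the units first, then compute remaining time.
Guid format;
int hr = _mSeek.GetTimeFormat(out format);
DsError.ThrowExceptionForHR(hr);

long duration, position;
hr = _mSeek.GetDuration(out duration);
DsError.ThrowExceptionForHR(hr);
hr = _mSeek.GetCurrentPosition(out position);
DsError.ThrowExceptionForHR(hr);

if (format == TimeFormat.MediaTime)
{
    // One media-time unit is 100 ns, which is exactly one .NET tick.
    TimeSpan remaining = TimeSpan.FromTicks(duration - position);
    labelTimeLeft.Text = remaining.ToString();
}
```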

IGraphBuilder use of ColorConverter

I'm trying to use this code to get pictures from my cam:
IGraphBuilder _graph = null;
ISampleGrabber _grabber = null;
IBaseFilter _sourceObject = null;
IBaseFilter _grabberObject = null;
IMediaControl _control = null;
// Create the main graph
_graph = Activator.CreateInstance(Type.GetTypeFromCLSID(FilterGraph)) as IGraphBuilder;
// Create the webcam source
_sourceObject = FilterInfo.CreateFilter(_monikerString);
// Create the grabber
_grabber = Activator.CreateInstance(Type.GetTypeFromCLSID(SampleGrabber)) as ISampleGrabber;
_grabberObject = _grabber as IBaseFilter;
// Add the source and grabber to the main graph
_graph.AddFilter(_sourceObject, "source");
_graph.AddFilter(_grabberObject, "grabber");
IPin pin = _sourceObject.GetPin(PinDirection.Output, 0);
IAMStreamConfig streamConfig = pin as IAMStreamConfig;
int count = 0, size = 0;
streamConfig.GetNumberOfCapabilities(out count, out size);
int width = 0, height = 0;
AMMediaType mediaType = null;
AMMediaType mediaTypeCandidate = null;
for(int index = 0; index < count; index++) {
VideoStreamConfigCaps scc = new VideoStreamConfigCaps();
int test = streamConfig.GetStreamCaps(index, out mediaTypeCandidate, scc);
if(mediaTypeCandidate.MajorType == MediaTypes.Video && mediaTypeCandidate.SubType == MediaSubTypes.YUY2) {
VideoInfoHeader header = (VideoInfoHeader)Marshal.PtrToStructure(mediaTypeCandidate.FormatPtr, typeof(VideoInfoHeader));
if(header.BmiHeader.Width == 1280 && header.BmiHeader.Height == 720) {
width = header.BmiHeader.Width;
height = header.BmiHeader.Height;
if(mediaType != null)
mediaType.Dispose();
mediaType = mediaTypeCandidate;
} else
mediaTypeCandidate.Dispose();
} else
mediaTypeCandidate.Dispose();
}
streamConfig.SetFormat(mediaType);
And it works, but I do not see the image that is generated by this code:
uint pcount = (uint)(_capGrabber.Width * _capGrabber.Height * PixelFormats.Bgr32.BitsPerPixel / 8);
// Create a file mapping
_section = CreateFileMapping(new IntPtr(-1), IntPtr.Zero, 0x04, 0, pcount, null);
_map = MapViewOfFile(_section, 0xF001F, 0, 0, pcount);
// Get the bitmap
BitmapSource = Imaging.CreateBitmapSourceFromMemorySection(_section, _capGrabber.Width,
_capGrabber.Height, PixelFormats.Bgr32, _capGrabber.Width * PixelFormats.Bgr32.BitsPerPixel / 8, 0) as InteropBitmap;
_capGrabber.Map = _map;
// Invoke event
if (NewBitmapReady != null)
{
NewBitmapReady(this, null);
}
That is because the media subtype is YUY2. How can I add a converter to this code? I have read something about a Color Converter filter, which can be added to the IGraphBuilder. How does that work?
I would not expect CreateBitmapSourceFromMemorySection to accept anything other than flavors of RGB. It is even less likely to accept a YUY2 media type, so you need the DirectShow pipeline to convert the video stream to RGB before you export it as a managed bitmap/imaging object.
To achieve this, you typically add a Sample Grabber filter initialized to a 24-bit RGB subtype and let DirectShow insert the necessary converters automatically.
See detailed explanation and code snippets here: DirectShow: Examples for Using SampleGrabber for Grabbing a Frame and...
media.majorType = MediaType.Video;
media.subType = MediaSubType.RGB24;
media.formatPtr = IntPtr.Zero;
hr = sampGrabber.SetMediaType(media);
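Using DirectShowLib types, that setup might be sketched as follows (untested; sampGrabber is the ISampleGrabber, configured before it is connected):

```csharp
// Force the Sample Grabber's input to RGB24 before connecting it, so
// DirectShow inserts the required YUY2 -> RGB converter automatically.
AMMediaType media = new AMMediaType();
media.majorType = MediaType.Video;
media.subType = MediaSubType.RGB24;
media.formatType = FormatType.VideoInfo;
int hr = sampGrabber.SetMediaType(media);
DsError.ThrowExceptionForHR(hr);
DsUtils.FreeAMMediaType(media);
// Now connect source -> grabber -> renderer; converters are added as needed.
```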

BufferCB not being called by SampleGrabber

I'm using a SampleGrabber to get audio data; however, my BufferCB method is not being executed. What am I doing wrong?
//add Sample Grabber
IBaseFilter pSampleGrabber = (IBaseFilter)Activator.CreateInstance(Type.GetTypeFromCLSID(CLSID_SampleGrabber));
hr = pGraph.AddFilter(pSampleGrabber, "SampleGrabber");
checkHR(hr, "Can't add Sample Grabber");
AMMediaType pSampleGrabber_pmt = new AMMediaType();
//pSampleGrabber_pmt.majorType = MediaType.Audio;
pSampleGrabber_pmt.subType = MediaSubType.PCM;
pSampleGrabber_pmt.formatType = FormatType.WaveEx;
pSampleGrabber_pmt.fixedSizeSamples = true;
pSampleGrabber_pmt.formatSize = 18;
pSampleGrabber_pmt.sampleSize = 2;
WaveFormatEx pSampleGrabber_Format = new WaveFormatEx();
pSampleGrabber_Format.wFormatTag = 1;
pSampleGrabber_Format.nChannels = 1;
pSampleGrabber_Format.nSamplesPerSec = 48000;
pSampleGrabber_Format.nAvgBytesPerSec = 96000;
pSampleGrabber_Format.nBlockAlign = 2;
pSampleGrabber_Format.wBitsPerSample = 16;
pSampleGrabber_pmt.formatPtr = Marshal.AllocCoTaskMem(Marshal.SizeOf(pSampleGrabber_Format));
Marshal.StructureToPtr(pSampleGrabber_Format, pSampleGrabber_pmt.formatPtr, false);
hr = ((ISampleGrabber)pSampleGrabber).SetMediaType(pSampleGrabber_pmt);
DsUtils.FreeAMMediaType(pSampleGrabber_pmt);
checkHR(hr, "Can't set media type to sample grabber");
ISampleGrabber pGrabber = new SampleGrabber() as ISampleGrabber;
pGrabber = (ISampleGrabber)pSampleGrabber;
pGrabber.SetCallback(null, 1);
My BufferCB method looks like:
public int BufferCB(double SampleTime, IntPtr pBuffer, int BufferLen)
{
    return 0;
}
You created and configured one instance, pSampleGrabber, and then you attach your callback to another, unused, idle instance, pGrabber.
You need
pSampleGrabber as ISampleGrabber
instead of
new SampleGrabber() as ISampleGrabber
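Combined with that instance fix, a working configuration might look like the sketch below (untested). Note also that the posted code passes null as SetCallback's first argument; per the ISampleGrabber documentation a null callback pointer cancels the callback, which by itself would keep BufferCB from ever firing:

```csharp
// 'this' is assumed to be the class implementing ISampleGrabberCB.
// Use the instance that is actually in the graph, and pass the callback
// object itself rather than null.
ISampleGrabber pGrabber = (ISampleGrabber)pSampleGrabber;
hr = pGrabber.SetCallback(this, 1);   // 1 = BufferCB, 0 = SampleCB
checkHR(hr, "Can't set SampleGrabber callback");
```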
