Ok I have been at this for 2 days and need help with this last part.
I have a Microsoft LifeCam Cinema camera and I use the .NET DirectShowLib to capture the video stream. Well, actually I use WPFMediaKit, but I am now in its source code dealing directly with the DirectShow library.
What I have working is:
- View the video output of the camera
- Record the video output of the camera in ASF or AVI (the only two media types ICaptureGraphBuilder2 supports for the output file)
The problem is: I can save it as a .avi. This works fine at a resolution of 1280x720, but it saves the file as raw (uncompressed) video, which comes out to about 50-60 MB per second. Way too high.
Or I can switch it to .asf and it outputs a WMV, but when I do this the capture and the output both drop to 320x240.
In WPFMediaKit there is a function I changed, because a lot of people apparently hit this problem with Microsoft LifeCam Cinema cameras: instead of creating or changing the AMMediaType, you iterate through the camera's capabilities and use the matching one to call SetFormat.
/* Make the VIDEOINFOHEADER 'readable' */
var videoInfo = new VideoInfoHeader();

int iCount = 0, iSize = 0;
videoStreamConfig.GetNumberOfCapabilities(out iCount, out iSize);

IntPtr TaskMemPointer = Marshal.AllocCoTaskMem(iSize);

AMMediaType pmtConfig = null;
for (int iFormat = 0; iFormat < iCount; iFormat++)
{
    videoStreamConfig.GetStreamCaps(iFormat, out pmtConfig, TaskMemPointer);

    videoInfo = (VideoInfoHeader)Marshal.PtrToStructure(pmtConfig.formatPtr, typeof(VideoInfoHeader));

    if (videoInfo.BmiHeader.Width == DesiredWidth && videoInfo.BmiHeader.Height == DesiredHeight)
    {
        /* Setup the VIDEOINFOHEADER with the parameters we want */
        videoInfo.AvgTimePerFrame = DSHOW_ONE_SECOND_UNIT / FPS;

        if (mediaSubType != Guid.Empty)
        {
            /* Build the FOURCC from the first four bytes of the subtype GUID */
            int fourCC = 0;
            byte[] b = mediaSubType.ToByteArray();
            fourCC = b[0];
            fourCC |= b[1] << 8;
            fourCC |= b[2] << 16;
            fourCC |= b[3] << 24;

            videoInfo.BmiHeader.Compression = fourCC;
            // pmtConfig.subType = mediaSubType;
        }

        /* Copy the data back to unmanaged memory */
        Marshal.StructureToPtr(videoInfo, pmtConfig.formatPtr, true);

        hr = videoStreamConfig.SetFormat(pmtConfig);
        break;
    }
}

/* Free memory */
Marshal.FreeCoTaskMem(TaskMemPointer);
DsUtils.FreeAMMediaType(pmtConfig);

if (hr < 0)
    return false;

return true;
With that implemented I could finally view the captured video at 1280x720, as long as I passed MediaSubType.Avi to SetOutputFileName.
If I set it to MediaSubType.Asf, both the capture and the output drop back to 320x240.
The AVI, on the other hand, comes out at the correct resolution but as raw video, hence the very large file size. I have attempted to add a compressor to the graph, but with no luck; this is far outside my experience.
I am looking for one of two answers:
- Recording the ASF at 1280x720
- Adding a compressor to the graph so that the file size of my outputted AVI is small
I figured this out. So I am posting it here for any other poor soul who passes by wondering why it doesn't work.
Download the source of the WPFMediaKit, you are going to need to change some code.
Go to Folder DirectShow > MediaPlayers and open up VideoCapturePlayer.cs
Find the function SetVideoCaptureParameters and replace it with this:
/// <summary>
/// Sets the capture parameters for the video capture device
/// </summary>
private bool SetVideoCaptureParameters(ICaptureGraphBuilder2 capGraph, IBaseFilter captureFilter, Guid mediaSubType)
{
    /* The stream config interface */
    object streamConfig;

    /* Get the stream's configuration interface */
    int hr = capGraph.FindInterface(PinCategory.Capture,
                                    MediaType.Video,
                                    captureFilter,
                                    typeof(IAMStreamConfig).GUID,
                                    out streamConfig);

    DsError.ThrowExceptionForHR(hr);

    var videoStreamConfig = streamConfig as IAMStreamConfig;

    /* If QueryInterface fails... */
    if (videoStreamConfig == null)
    {
        throw new Exception("Failed to get IAMStreamConfig");
    }

    /* Make the VIDEOINFOHEADER 'readable' */
    var videoInfo = new VideoInfoHeader();

    int iCount = 0, iSize = 0;
    videoStreamConfig.GetNumberOfCapabilities(out iCount, out iSize);

    IntPtr TaskMemPointer = Marshal.AllocCoTaskMem(iSize);

    AMMediaType pmtConfig = null;
    for (int iFormat = 0; iFormat < iCount; iFormat++)
    {
        videoStreamConfig.GetStreamCaps(iFormat, out pmtConfig, TaskMemPointer);

        videoInfo = (VideoInfoHeader)Marshal.PtrToStructure(pmtConfig.formatPtr, typeof(VideoInfoHeader));

        if (videoInfo.BmiHeader.Width == DesiredWidth && videoInfo.BmiHeader.Height == DesiredHeight)
        {
            /* Setup the VIDEOINFOHEADER with the parameters we want */
            videoInfo.AvgTimePerFrame = DSHOW_ONE_SECOND_UNIT / FPS;

            if (mediaSubType != Guid.Empty)
            {
                /* Build the FOURCC from the first four bytes of the subtype GUID */
                int fourCC = 0;
                byte[] b = mediaSubType.ToByteArray();
                fourCC = b[0];
                fourCC |= b[1] << 8;
                fourCC |= b[2] << 16;
                fourCC |= b[3] << 24;

                videoInfo.BmiHeader.Compression = fourCC;
                // pmtConfig.subType = mediaSubType;
            }

            /* Copy the data back to unmanaged memory */
            Marshal.StructureToPtr(videoInfo, pmtConfig.formatPtr, true);

            hr = videoStreamConfig.SetFormat(pmtConfig);
            break;
        }
    }

    /* Free memory */
    Marshal.FreeCoTaskMem(TaskMemPointer);
    DsUtils.FreeAMMediaType(pmtConfig);

    if (hr < 0)
        return false;

    return true;
}
Now that will sort out your screen display at whatever resolution you want, provided that your camera supports it.
Next you will soon figure out that this newly corrected capture format isn't applied when writing the video to disk.
Since the ICaptureGraphBuilder2 method only supports Avi and Asf (which is WMV), you need to set your media subtype to one of them.
hr = graphBuilder.SetOutputFileName(MediaSubType.Asf, this.m_fileName, out mux, out sink);
You will find that line in the SetupGraph function.
Asf will only output 320x240, while Avi outputs at the desired resolution but uncompressed (meaning 50-60 MB per second for a 1280x720 feed), which is too high.
So that leaves you with two options:
- Figure out how to add an encoder (compression filter) to the Avi output
- Figure out how to change the WMV profile
I tried option 1 with no success, mainly because this is my first time working with DirectShow and I only just grasp how graphs work.
But I was successful with option 2, and here is how I did it.
Special thanks to http://www.codeproject.com/KB/audio-video/videosav.aspx; I pulled the code I needed from there.
Create a new class in the same folder called WMLib.cs. Download the demo project from http://www.codeproject.com/KB/audio-video/videosav.aspx and copy the contents of its WMLib.cs into your new file (change the namespace as necessary).
Create a function in the VideoCapturePlayer.cs class
/// <summary>
/// Configures the Asf file writer with a profile loaded from a .prx file
/// </summary>
/// <param name="asfWriter"></param>
/// <param name="filename"></param>
/// <returns></returns>
public bool ConfigProfileFromFile(IBaseFilter asfWriter, string filename)
{
    int hr;
    //string profilePath = "test.prx";
    // Set the profile to be used for conversion
    if ((filename != null) && (File.Exists(filename)))
    {
        // Load the profile XML contents
        string profileData;
        using (StreamReader reader = new StreamReader(File.OpenRead(filename)))
        {
            profileData = reader.ReadToEnd();
        }

        // Create an appropriate IWMProfile from the data
        // Open the profile manager
        IWMProfileManager profileManager;
        IWMProfile wmProfile = null;
        hr = WMLib.WMCreateProfileManager(out profileManager);
        if (hr >= 0)
        {
            // error message: The profile is invalid (0xC00D0BC6)
            // E.g. no <prx> tags
            hr = profileManager.LoadProfileByData(profileData, out wmProfile);
        }

        if (profileManager != null)
        {
            Marshal.ReleaseComObject(profileManager);
            profileManager = null;
        }

        // Configure only if a profile was retrieved
        if (hr >= 0)
        {
            // Set the profile on the writer
            IConfigAsfWriter configWriter = (IConfigAsfWriter)asfWriter;
            hr = configWriter.ConfigureFilterUsingProfile(wmProfile);
            if (hr >= 0)
            {
                return true;
            }
        }
    }
    return false;
}
In the SetupGraph function, find the SetOutputFileName call and below it put:
ConfigProfileFromFile(mux, @"c:\wmv.prx");
Now create a file called wmv.prx on your c: drive and place the relevant information in it.
You can see a sample of a PRX file from the demo project here: http://www.codeproject.com/KB/audio-video/videosav.aspx (Pal90.prx)
And now enjoy your .wmv file outputted at the right size.
Yes I know the code I placed in was rather scrappy but I will leave it up to you to polish it up.
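For completeness, if you do want to try option 1 instead, the rough idea (untested, so treat it as a sketch) is to enumerate the video compressor category, add one of those filters to the graph, and let RenderStream wire it between the capture pin and the AVI mux created by SetOutputFileName. The names m_graph, graphBuilder, captureFilter and mux below are assumed to match what SetupGraph already uses; adjust as needed.

/* Sketch only (untested): pick an installed video compressor and let
 * RenderStream connect capture pin -> compressor -> AVI mux. */
IBaseFilter compressor = null;
DsDevice[] codecs = DsDevice.GetDevicesOfCat(FilterCategory.VideoCompressorCategory);
if (codecs.Length > 0)
{
    int hr = ((IFilterGraph2)m_graph).AddSourceFilterForMoniker(
        codecs[0].Mon, null, codecs[0].Name, out compressor);
    DsError.ThrowExceptionForHR(hr);

    /* mux comes from SetOutputFileName(MediaSubType.Avi, ...) */
    hr = graphBuilder.RenderStream(PinCategory.Capture, MediaType.Video,
                                   captureFilter, compressor, mux);
    DsError.ThrowExceptionForHR(hr);
}

Whether the first enumerated codec is any good is another matter; in practice you would probably pick a specific one by name.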
LifeCam cameras are known for unobvious behavior when setting the capture format (more specifically, falling back to other formats). See previous discussions, which are likely to suggest a solution:
Can't make IAMStreamConfig.SetFormat() to work with LifeCam Studio
IAMStreamConfig settings are ignored by stream - Microsoft LifeCam Cinema records only on 640x480
Can Microsoft LifeCam record at a frame rates lower than 30 fps?
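As a quick sanity check, after calling SetFormat you can read the negotiated format back and see whether the camera actually kept it. A sketch, reusing the videoStreamConfig from the code above (Debug.WriteLine needs System.Diagnostics):

/* Sketch: verify what the capture pin actually negotiated, since the
 * LifeCam may silently fall back to a different resolution. */
AMMediaType currentFormat;
int hr = videoStreamConfig.GetFormat(out currentFormat);
DsError.ThrowExceptionForHR(hr);

var vih = (VideoInfoHeader)Marshal.PtrToStructure(currentFormat.formatPtr, typeof(VideoInfoHeader));
Debug.WriteLine("Capture pin is now " + vih.BmiHeader.Width + "x" + vih.BmiHeader.Height);

DsUtils.FreeAMMediaType(currentFormat);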
To provide a little bit of context: I am trying to output live audio from a camera in my C# application. After doing some research, the obvious route seemed to be a managed C++ DLL. I chose the XAudio2 API because it should be fairly easy to implement and works well with dynamic audio content.
So the idea is to create the XAudio2 device in C++ with an empty buffer and push in the audio from the C# side. The audio chunks are pushed every 50 ms because I want to keep the latency as small as possible.
// SampleRate = 44100; Channels = 2; BitPerSample = 16;
var blockAlign = (Channels * BitsPerSample) / 8;
var avgBytesPerSecond = SampleRate * blockAlign;
var avgBytesPerMillisecond = avgBytesPerSecond / 1000;
var bufferSize = avgBytesPerMillisecond * Time;
_sampleBuffer = new byte[bufferSize];
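As a side note, with those values blockAlign is 4 bytes and avgBytesPerSecond is 176,400, so the integer division by 1000 truncates 176.4 down to 176 bytes per millisecond; a 50 ms chunk then comes out as 8,800 bytes instead of the exact 8,820. A variant that sidesteps the truncation (same variables as above):

// Sketch: compute the chunk size in one step so nothing is lost to integer division
// (SampleRate = 44100, Channels = 2, BitsPerSample = 16, Time = 50 ms).
var blockAlign = (Channels * BitsPerSample) / 8;          // 4 bytes per sample frame
var bufferSize = (SampleRate * blockAlign * Time) / 1000; // 8820 bytes for 50 ms
_sampleBuffer = new byte[bufferSize];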
Every time the timer fires, it gets the pointer to the audio buffer, reads the data from the audio source, copies the data to that pointer and calls the PushAudio method.
I am also using a stopwatch to check how long the processing took and recalculate the timer interval to account for the processing time.
private void PushAudioChunk(object sender, ElapsedEventArgs e)
{
    unsafe
    {
        _pushAudioStopWatch.Reset();
        _pushAudioStopWatch.Start();

        var audioBufferPtr = Output.AudioCapturerBuffer();
        FillBuffer(_sampleBuffer);
        Marshal.Copy(_sampleBuffer, 0, (IntPtr)audioBufferPtr, _sampleBuffer.Length);
        Output.PushAudio();

        _pushTimer.Interval = Time - _pushAudioStopWatch.ElapsedMilliseconds;
        _pushAudioStopWatch.Stop();

        DIX.Log.WriteLine("Push audio took: {0}ms", _pushAudioStopWatch.ElapsedMilliseconds);
    }
}
This is the implementation of the C++ part.
Following the documentation on MSDN, I created an XAudio2 device and added the mastering voice and source voice. The buffer is empty at first because the C# part is responsible for pushing in the audio data.
namespace Audio
{
    using namespace System;

    template <class T> void SafeRelease(T **ppT)
    {
        if (*ppT)
        {
            (*ppT)->Release();
            *ppT = NULL;
        }
    }

    WAVEFORMATEXTENSIBLE wFormat;
    XAUDIO2_BUFFER buffer = { 0 };
    IXAudio2* pXAudio2 = NULL;
    IXAudio2MasteringVoice* pMasterVoice = NULL;
    IXAudio2SourceVoice* pSourceVoice = NULL;

    WaveOut::WaveOut(int bufferSize)
    {
        audioBuffer = new Byte[bufferSize];

        wFormat.Format.wFormatTag = WAVE_FORMAT_PCM;
        wFormat.Format.nChannels = 2;
        wFormat.Format.nSamplesPerSec = 44100;
        wFormat.Format.wBitsPerSample = 16;
        wFormat.Format.nBlockAlign = (wFormat.Format.nChannels * wFormat.Format.wBitsPerSample) / 8;
        wFormat.Format.nAvgBytesPerSec = wFormat.Format.nSamplesPerSec * wFormat.Format.nBlockAlign;
        wFormat.Format.cbSize = 0;
        wFormat.SubFormat = KSDATAFORMAT_SUBTYPE_PCM;

        HRESULT hr = XAudio2Create(&pXAudio2, 0, XAUDIO2_DEFAULT_PROCESSOR);
        if (SUCCEEDED(hr))
        {
            hr = pXAudio2->CreateMasteringVoice(&pMasterVoice);
        }
        if (SUCCEEDED(hr))
        {
            hr = pXAudio2->CreateSourceVoice(&pSourceVoice, (WAVEFORMATEX*)&wFormat,
                0, XAUDIO2_DEFAULT_FREQ_RATIO, NULL, NULL, NULL);
        }

        buffer.pAudioData = (BYTE*)audioBuffer;
        buffer.AudioBytes = bufferSize;
        buffer.Flags = 0;

        if (SUCCEEDED(hr))
        {
            hr = pSourceVoice->Start(0);
        }
    }

    WaveOut::~WaveOut()
    {
    }

    WaveOut^ WaveOut::CreateWaveOut(int bufferSize)
    {
        return gcnew WaveOut(bufferSize);
    }

    uint8_t* WaveOut::AudioCapturerBuffer()
    {
        if (!audioBuffer)
        {
            throw gcnew Exception("Audio buffer is not initialized. Did you forget to set up the audio container?");
        }
        return (BYTE*)audioBuffer;
    }

    int WaveOut::PushAudio()
    {
        HRESULT hr = pSourceVoice->SubmitSourceBuffer(&buffer);
        if (FAILED(hr))
        {
            return -1;
        }
        return 0;
    }
}
The problem I am facing is that I always get some crackling in the output. I tried increasing the timer interval and increasing the buffer size a bit. Every time, the same result.
What am I doing wrong?
Update:
I created 3 buffers the XAudio2 engine can cycle through. The crackling went away. The missing part now is to fill the buffers at the right time from the C# side, to avoid buffers containing the same data.
void Render(void* param)
{
    std::vector<byte> audioBuffers[BUFFER_COUNT];
    size_t currentBuffer = 0;

    XAUDIO2_VOICE_STATE state; // declaration was missing from the original snippet

    while (BackgroundThreadRunning && pSourceVoice)
    {
        // Get the current state of the source voice
        if (pSourceVoice)
        {
            pSourceVoice->GetState(&state);
        }

        while (state.BuffersQueued < BUFFER_COUNT)
        {
            std::vector<byte> resultData;
            resultData.resize(DATA_SIZE);
            CopyMemory(&resultData[0], pAudioBuffer, DATA_SIZE);

            // Retrieve the next buffer to stream from the MF Music Streamer
            audioBuffers[currentBuffer] = resultData;

            // Submit the new buffer
            XAUDIO2_BUFFER buf = { 0 };
            buf.AudioBytes = static_cast<UINT32>(audioBuffers[currentBuffer].size());
            buf.pAudioData = &audioBuffers[currentBuffer][0];
            pSourceVoice->SubmitSourceBuffer(&buf);

            // Advance the buffer index
            currentBuffer = (currentBuffer + 1) % BUFFER_COUNT;

            // Get the updated state
            pSourceVoice->GetState(&state);
        }

        Sleep(30);
    }
}
XAudio2 does not copy the source data buffer at the time you submit it via SubmitSourceBuffer. You must keep that data (which is in your application memory) valid, and the buffer allocated, for the entire time XAudio2 needs to read out of it to process the data. This is done for efficiency, to avoid an extra copy, but it puts the multi-threaded burden on you of keeping the memory available until playback of that buffer is done. It also means you can't modify a buffer while it is playing.
Your current code is just reusing the same buffer, which causes the popping because you change the data while it's playing. You can solve this by having two or three buffers you rotate between. An XAudio2 source voice exposes status information you can use to determine when it's done playing a buffer, or you can register for explicit callbacks which tell you when a buffer is no longer being used.
See DirectX Tool Kit for Audio and classic XAudio2 samples for examples of using XAudio2.
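On the C# side that means the chunk you just copied in must not be touched again until XAudio2 has finished with it. A minimal sketch of the rotation idea, where AudioCapturerBuffer(int index) and PushAudio(int index) are hypothetical indexed versions of the wrapper methods shown above, each addressing one of three fixed native buffers:

// Sketch only: rotate through three native buffers so a buffer that is
// still queued in XAudio2 is never overwritten. The indexed
// AudioCapturerBuffer/PushAudio methods are hypothetical extensions of
// the WaveOut wrapper shown earlier.
private const int BufferCount = 3;
private int _currentBuffer;

private void PushAudioChunk(object sender, ElapsedEventArgs e)
{
    unsafe
    {
        var audioBufferPtr = Output.AudioCapturerBuffer(_currentBuffer);

        FillBuffer(_sampleBuffer);
        Marshal.Copy(_sampleBuffer, 0, (IntPtr)audioBufferPtr, _sampleBuffer.Length);

        Output.PushAudio(_currentBuffer);
        _currentBuffer = (_currentBuffer + 1) % BufferCount;
    }
}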
Why is Windows documentation so lacking? It seems impossible to find an example of how StartPreviewToCustomSinkAsync is supposed to work.
What I am trying to do is get a preview image from a video source (via MediaCapture), but I can't understand how this method works (especially what the second parameter, IMediaExtension, is supposed to be/do).
Any chance any of you can help me with this ?
If all you need is to get a preview frame every now and then, there is a sample on the Microsoft github page that is relevant, although they target Windows 10. You may be interested in migrating your project to get this functionality.
GetPreviewFrame: This sample will capture preview frames as opposed to full-blown photos. Once it has a preview frame, it can edit the pixels on it.
Here is the relevant part:
private async Task GetPreviewFrameAsSoftwareBitmapAsync()
{
    // Get information about the preview
    var previewProperties = _mediaCapture.VideoDeviceController.GetMediaStreamProperties(MediaStreamType.VideoPreview) as VideoEncodingProperties;

    // Create the video frame to request a SoftwareBitmap preview frame
    var videoFrame = new VideoFrame(BitmapPixelFormat.Bgra8, (int)previewProperties.Width, (int)previewProperties.Height);

    // Capture the preview frame
    using (var currentFrame = await _mediaCapture.GetPreviewFrameAsync(videoFrame))
    {
        // Collect the resulting frame
        SoftwareBitmap previewFrame = currentFrame.SoftwareBitmap;

        // Add a simple green filter effect to the SoftwareBitmap
        EditPixels(previewFrame);
    }
}

private unsafe void EditPixels(SoftwareBitmap bitmap)
{
    // Effect is hard-coded to operate on BGRA8 format only
    if (bitmap.BitmapPixelFormat == BitmapPixelFormat.Bgra8)
    {
        // In BGRA8 format, each pixel is defined by 4 bytes
        const int BYTES_PER_PIXEL = 4;

        using (var buffer = bitmap.LockBuffer(BitmapBufferAccessMode.ReadWrite))
        using (var reference = buffer.CreateReference())
        {
            // Get a pointer to the pixel buffer
            byte* data;
            uint capacity;
            ((IMemoryBufferByteAccess)reference).GetBuffer(out data, out capacity);

            // Get information about the BitmapBuffer
            var desc = buffer.GetPlaneDescription(0);

            // Iterate over all pixels
            for (uint row = 0; row < desc.Height; row++)
            {
                for (uint col = 0; col < desc.Width; col++)
                {
                    // Index of the current pixel in the buffer (defined by the next 4 bytes, BGRA8)
                    var currPixel = desc.StartIndex + desc.Stride * row + BYTES_PER_PIXEL * col;

                    // Read the current pixel information into b,g,r channels (leave out alpha channel)
                    var b = data[currPixel + 0]; // Blue
                    var g = data[currPixel + 1]; // Green
                    var r = data[currPixel + 2]; // Red

                    // Boost the green channel, leave the other two untouched
                    data[currPixel + 0] = b;
                    data[currPixel + 1] = (byte)Math.Min(g + 80, 255);
                    data[currPixel + 2] = r;
                }
            }
        }
    }
}
And declare this outside your class:
[ComImport]
[Guid("5b0d3235-4dba-4d44-865e-8f1d0e4fd04d")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
unsafe interface IMemoryBufferByteAccess
{
    void GetBuffer(out byte* buffer, out uint capacity);
}
And of course, your project will have to allow unsafe code for all of this to work.
Have a closer look at the sample to see how to get all the details. Or you can watch the camera session from a recent //build/ conference, which walks through some of the camera samples.
Alternatively, if you're bound to 8.1, you can look into the Lumia Imaging SDK, which can notify you when there's a new preview frame available.
There are plenty of examples on GitHub. If you are developing for Windows Phone 8.1, the samples are here.
According to this example, recording looks like this:
private StspMediaSinkProxy mediaSink;
mediaSink = new StspMediaSinkProxy();
MediaEncodingProfile encodingProfile = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Qvga);
var mfExtension = await mediaSink.InitializeAsync(encodingProfile.Audio, encodingProfile.Video);
await mediaCapture.StartRecordToCustomSinkAsync(encodingProfile, mfExtension);
So from this example you can see how to get the IMediaExtension that goes along with the MediaEncodingProfile (it comes from the custom sink's InitializeAsync).
You haven't posted any code, but setting up the preview should be similar to the code I have provided.
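For the preview case it should presumably look almost the same, with StartPreviewToCustomSinkAsync taking the place of the record call (an untested sketch; StspMediaSinkProxy is the custom sink from that sample):

var mediaSink = new StspMediaSinkProxy();
MediaEncodingProfile encodingProfile = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Qvga);

// The IMediaExtension comes from the custom sink's InitializeAsync
var mfExtension = await mediaSink.InitializeAsync(encodingProfile.Audio, encodingProfile.Video);

await mediaCapture.StartPreviewToCustomSinkAsync(encodingProfile, mfExtension);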
I want to capture and process data with Bass.NET using the BASS_ChannelGetData method. The examples I've seen play audio files through the Bass.NET library and then sample that; however, I wish to sample the data my sound card outputs, so that I can capture and process audio from third-party audio players, for example Spotify.
Bass.BASS_ChannelGetData(handle, buffer, (int)BASSData.BASS_DATA_FFT256);
How would I get a handle that would allow me to process this data?
Bass.BASS_RecordStart does return a handle, but if you look closely at the documentation, in the sample it is only used for playing (actually starting) the record channel; the audio samples themselves are retrieved in a callback.
Take a look at Bass.BASS_RecordStart Method documentation.
private RECORDPROC _myRecProc; // make it global, so that the GC can not remove it
private int _byteswritten = 0;
private byte[] _recbuffer; // local recording buffer
...
if (Bass.BASS_RecordInit(-1))
{
    _myRecProc = new RECORDPROC(MyRecording);
    int recHandle = Bass.BASS_RecordStart(44100, 2, BASSFlag.BASS_RECORD_PAUSE, _myRecProc, IntPtr.Zero);
    ...
    // start recording
    Bass.BASS_ChannelPlay(recHandle, false);
}
...
private bool MyRecording(int handle, IntPtr buffer, int length, IntPtr user)
{
    bool cont = true;
    if (length > 0 && buffer != IntPtr.Zero)
    {
        // increase the rec buffer as needed
        if (_recbuffer == null || _recbuffer.Length < length)
            _recbuffer = new byte[length];

        // copy from unmanaged to managed memory
        Marshal.Copy(buffer, _recbuffer, 0, length);
        _byteswritten += length;

        // write to file
        ...

        // stop recording after a certain amount (just to demo)
        if (_byteswritten > 800000)
            cont = false; // stop recording
    }
    return cont;
}
Note that you should be able to use BASS_ChannelGetData inside that callback instead of Marshal.Copy.
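For example, to get FFT data rather than raw bytes, something along these lines inside the callback should work (a sketch; handle is the record channel handle passed into the RECORDPROC):

// Sketch: grab 128 FFT bins from the record channel inside the callback.
private readonly float[] _fft = new float[128]; // BASS_DATA_FFT256 yields 128 floats

private bool MyRecording(int handle, IntPtr buffer, int length, IntPtr user)
{
    if (Bass.BASS_ChannelGetData(handle, _fft, (int)BASSData.BASS_DATA_FFT256) > 0)
    {
        // process _fft here (e.g. drive a spectrum visualisation)
    }
    return true; // keep recording
}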
Did you mean resample instead of sample? If so, the BassMix class will handle that job.
I'm writing an app in C# that uses EDSDK to interface a digital camera with a PC. The user snaps a picture; that triggers an event in the software; and then the software copies the new image to the PC. That's working great.
Now, I'd like for the software to be able to gracefully handle scenarios where no PC is available or the camera somehow loses connection with the PC. So, whenever a user starts a new session with the software, it first checks to see if there are any images on the camera and, if so, copies them locally. In order to do that, I need a way to get pointers to each individual directory item. So far, I haven't been able to find anything in the documentation or online on how to do this.
Is there any way for EDSDK to get a list of existing files from the camera?
I wrote a function to do this, purely to teach myself the SDK. As such, the code isn't used in anything that's in production, so here is the relevant function:
/// <summary>
/// Gets a list of files on the CF Card. If dest is given, downloads them
/// </summary>
/// <param name="dest">string.Empty if you just want a list of file names, otherwise the folder in which you want the files</param>
/// <returns>a list of file names</returns>
public List<string> GetPicturesOnCard(string dest)
{
    IntPtr volumeRef = IntPtr.Zero;
    IntPtr dirRef = IntPtr.Zero;
    IntPtr subDirRef = IntPtr.Zero;
    List<string> result = new List<string>();

    try
    {
        openSession();

        int nFiles;

        if (EDSDK.EdsGetChildAtIndex(camera, 0, out volumeRef) != EDSDK.EDS_ERR_OK)
            throw new CannonException("Couldn't access Memory Card");

        // first child is DCIM
        if (EDSDK.EdsGetChildAtIndex(volumeRef, 0, out dirRef) != EDSDK.EDS_ERR_OK)
            throw new CannonException("Memory Card formatting Error");

        // first child of DCIM has the actual photos
        if (EDSDK.EdsGetChildAtIndex(dirRef, 0, out subDirRef) != EDSDK.EDS_ERR_OK)
            throw new CannonException("Memory Card formatting Error");

        EDSDK.EdsGetChildCount(subDirRef, out nFiles);

        for (int i = 0; i < nFiles; i++)
        {
            IntPtr fileRef = IntPtr.Zero;
            EDSDK.EdsDirectoryItemInfo fileInfo;
            try
            {
                EDSDK.EdsGetChildAtIndex(subDirRef, i, out fileRef);
                EDSDK.EdsGetDirectoryItemInfo(fileRef, out fileInfo);

                if (dest != string.Empty)
                {
                    IntPtr fStream = IntPtr.Zero; // it's a Canon SDK file stream, not a managed stream
                    uint fSize = fileInfo.Size;
                    try
                    {
                        EDSDK.EdsCreateMemoryStream(fSize, out fStream);

                        if (EDSDK.EdsDownload(fileRef, fSize, fStream) != EDSDK.EDS_ERR_OK)
                            throw new CannonException("Image Download failed");
                        if (EDSDK.EdsDownloadComplete(fileRef) != EDSDK.EDS_ERR_OK)
                            throw new CannonException("Image Download failed to complete");

                        byte[] buffer = new byte[fSize];
                        IntPtr imgLocation;
                        if (EDSDK.EdsGetPointer(fStream, out imgLocation) == EDSDK.EDS_ERR_OK)
                        {
                            Marshal.Copy(imgLocation, buffer, 0, (int)fSize);
                            File.WriteAllBytes(dest + fileInfo.szFileName, buffer);
                        }
                        else
                            throw new CannonException("Internal Error #1"); // because the exception text could end up in a message box somewhere
                    }
                    finally
                    {
                        EDSDK.EdsRelease(fStream);
                    }
                }
            }
            finally
            {
                EDSDK.EdsRelease(fileRef);
            }

            result.Add(fileInfo.szFileName);
        }
    }
    finally
    {
        EDSDK.EdsRelease(subDirRef);
        EDSDK.EdsRelease(dirRef);
        EDSDK.EdsRelease(volumeRef);
        closeSession();
    }

    return result;
}
Really what you do is: Initialise the SDK, then the camera, then open a session.
With that done, you get a reference to the card, then the folder (the folder you want is the first child of the first child), then iterate through all the children thereof; they're the files.
The SDK documentation is no fun to read, but it will tell you the bits I've missed out (how to initialise the camera and so on, which I'm assuming you know how to do from the rest of your project).
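The openSession/closeSession calls above are my own helpers; roughly, and assuming the stock EDSDK.cs wrapper, they boil down to something like this sketch:

// Rough sketch of the session helpers, assuming the stock EDSDK.cs wrapper.
private IntPtr camera = IntPtr.Zero;

private void openSession()
{
    EDSDK.EdsInitializeSDK();

    IntPtr cameraList;
    EDSDK.EdsGetCameraList(out cameraList);
    EDSDK.EdsGetChildAtIndex(cameraList, 0, out camera); // first connected camera
    EDSDK.EdsRelease(cameraList);

    EDSDK.EdsOpenSession(camera);
}

private void closeSession()
{
    EDSDK.EdsCloseSession(camera);
    EDSDK.EdsRelease(camera);
    EDSDK.EdsTerminateSDK();
}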
I have been using the latest version of the WPFMediaKit. What I am trying to do is write a sample application that will use the Samplegrabber to capture the video frames of video files so I can have them as individual Bitmaps.
So far, I have had good luck with the following code when constructing and rendering my graph. However, when I use this code to play back a .wmv file with the SampleGrabber attached, playback is jumpy and choppy. If I comment out the line where I add the SampleGrabber filter, it works fine. And it works correctly with the SampleGrabber for AVI/MPEG, etc.
protected virtual void OpenSource()
{
    FrameCount = 0;

    /* Make sure we clean up any remaining mess */
    FreeResources();

    if (m_sourceUri == null)
        return;

    string fileSource = m_sourceUri.OriginalString;

    if (string.IsNullOrEmpty(fileSource))
        return;

    try
    {
        /* Creates the GraphBuilder COM object */
        m_graph = new FilterGraphNoThread() as IGraphBuilder;

        if (m_graph == null)
            throw new Exception("Could not create a graph");

        /* Add our preferred audio renderer */
        InsertAudioRenderer(AudioRenderer);

        var filterGraph = m_graph as IFilterGraph2;

        if (filterGraph == null)
            throw new Exception("Could not QueryInterface for the IFilterGraph2");

        IBaseFilter renderer = CreateVideoMixingRenderer9(m_graph, 1);

        IBaseFilter sourceFilter;

        /* Have DirectShow find the correct source filter for the Uri */
        var hr = filterGraph.AddSourceFilter(fileSource, fileSource, out sourceFilter);
        DsError.ThrowExceptionForHR(hr);

        /* We will want to enum all the pins on the source filter */
        IEnumPins pinEnum;

        hr = sourceFilter.EnumPins(out pinEnum);
        DsError.ThrowExceptionForHR(hr);

        IntPtr fetched = IntPtr.Zero;
        IPin[] pins = { null };

        /* Counter for how many pins successfully rendered */
        int pinsRendered = 0;

        m_sampleGrabber = (ISampleGrabber)new SampleGrabber();
        SetupSampleGrabber(m_sampleGrabber);
        hr = m_graph.AddFilter(m_sampleGrabber as IBaseFilter, "SampleGrabber");
        DsError.ThrowExceptionForHR(hr);

        /* Loop over each pin of the source filter */
        while (pinEnum.Next(pins.Length, pins, fetched) == 0)
        {
            if (filterGraph.RenderEx(pins[0],
                                     AMRenderExFlags.RenderToExistingRenderers,
                                     IntPtr.Zero) >= 0)
                pinsRendered++;

            Marshal.ReleaseComObject(pins[0]);
        }

        Marshal.ReleaseComObject(pinEnum);
        Marshal.ReleaseComObject(sourceFilter);

        if (pinsRendered == 0)
            throw new Exception("Could not render any streams from the source Uri");

        /* Configure the graph in the base class */
        SetupFilterGraph(m_graph);

        HasVideo = true;

        /* Sets the NaturalVideoWidth/Height */
        //SetNativePixelSizes(renderer);
    }
    catch (Exception ex)
    {
        /* This exception will usually happen if the media does
         * not exist or could not be opened due to not having the
         * proper filters installed */
        FreeResources();

        /* Fire our failed event */
        InvokeMediaFailed(new MediaFailedEventArgs(ex.Message, ex));
    }

    InvokeMediaOpened();
}
And:
private void SetupSampleGrabber(ISampleGrabber sampleGrabber)
{
    FrameCount = 0;

    var mediaType = new AMMediaType
    {
        majorType = MediaType.Video,
        subType = MediaSubType.RGB24,
        formatType = FormatType.VideoInfo
    };

    int hr = sampleGrabber.SetMediaType(mediaType);

    DsUtils.FreeAMMediaType(mediaType);
    DsError.ThrowExceptionForHR(hr);

    hr = sampleGrabber.SetCallback(this, 0);
    DsError.ThrowExceptionForHR(hr);
}
I have read a few things saying that the .wmv or .asf formats are asynchronous or something. I have tried inserting a WMAsfReader to decode, which works, but once it goes to the VMR9 it gives the same behavior. Also, I have gotten it to work correctly when I comment out the IBaseFilter renderer = CreateVideoMixingRenderer9(m_graph, 1); line and use filterGraph.Render(pins[0]); instead -- the only drawback is that it then renders in an ActiveMovie window of its own instead of in my control, but the SampleGrabber functions correctly and without any skipping. So I am thinking the bug is somewhere in the VMR9 / sample grabbing.
Any help? I am new to this.
Some decoders will use hardware acceleration using DXVA. This is implemented by negotiating a partly-decoded format, and passing this partly decoded data to the renderer to complete decoding and render. If you insert a sample grabber configured to RGB24 between the decoder and the renderer, you will disable hardware acceleration.
That, I'm sure, is the crux of the problem. The details are still a little vague, I'm afraid, such as why it works when you use the default VMR-7 but fails when you use the VMR-9. I would guess that the decoder is trying to use DXVA and failing in the VMR-9 case, but has a reasonable software-only fallback that works well with the VMR-7.
I'm not familiar with the WPFMediaKit, but I would think the simplest solution is to replace the explicit VMR-9 creation with an explicit VMR-7 creation. That is, if the decoder works software-only with the VMR-7, then use that and concentrate on fixing the window-reparenting issue.
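In DirectShowLib terms that swap might look roughly like this (a sketch only; WPFMediaKit's own renderer setup does more work than this, such as windowless configuration and surface allocation):

/* Sketch: add an explicit VMR-7 instead of calling CreateVideoMixingRenderer9.
 * Hosting/reparenting its output is still left to you, as noted above. */
IBaseFilter vmr7 = (IBaseFilter)new VideoMixingRenderer();
int hr = m_graph.AddFilter(vmr7, "VMR-7");
DsError.ThrowExceptionForHR(hr);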
It turned out that the code I posted (which was itself fairly shamelessly adapted from Jeremiah Morrill's WPFMediaKit source code) was in fact adequate to render the .WMV files and have them sample-grabbed.
It seems the choppiness has something to do with running through the VS debugger, or VS2008 itself. After messing around with the XAML in the visual editor for a while and then running the app, the choppy behavior appears. Shutting down VS2008 seems to remedy it. :P
So not much of an answer, but at least an (annoying) fix when it crops up: restart VS2008.