I connected a Canon EOS camera to my PC. I have an application that can take a picture remotely and download the image to the PC, but when I remove the SD card from the camera, I can't download the image from the buffer to the PC.
// Register the object event callback
err = EDSDK.EdsSetObjectEventHandler(obj.camdevice, EDSDK.ObjectEvent_All, objectEventHandler, new IntPtr(0));
if (err != EDSDK.EDS_ERR_OK)
    Debug.WriteLine("Error registering object event handler");
public uint objectEventHandler(uint inEvent, IntPtr inRef, IntPtr inContext)
{
    switch (inEvent)
    {
        case EDSDK.ObjectEvent_DirItemCreated:
            this.getCapturedItem(inRef);
            Debug.WriteLine("dir item created");
            break;
        case EDSDK.ObjectEvent_DirItemRequestTransfer:
            this.getCapturedItem(inRef);
            Debug.WriteLine("file transfer request event");
            break;
        default:
            Debug.WriteLine(String.Format("ObjectEventHandler: event {0}", inEvent));
            break;
    }
    return 0;
}
Can anyone help me figure out why this event is not called, or how to download the image from the buffer to the PC without an SD card in the camera? Thanks.
You probably ran into the same problem as I did yesterday: the camera tries to store the image for a later download, finds no memory card to store it to and instantly discards the image.
To get your callback to fire, you need to set the camera to save images to the PC (kEdsSaveTo_Host) at some point during your camera initialization routine. In C++, it worked like this:
EdsInt32 saveTarget = kEdsSaveTo_Host;
err = EdsSetPropertyData( _camera, kEdsPropID_SaveTo, 0, 4, &saveTarget );
You probably need to build an IntPtr for this. At least, that's what Dmitriy Prozorovskiy did (prompted by a certain akadunno) in this thread.
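In C#, that property write might look like the following sketch. This is an assumption-heavy illustration: it assumes a wrapper whose EdsSetPropertyData accepts a raw pointer for the data argument (wrapper signatures vary; some marshal an object directly, in which case you can pass the uint itself).

// Hedged sketch: tell the camera to save captured images to the host PC.
// Assumes an EDSDK.cs-style wrapper where EdsSaveTo.Host maps to kEdsSaveTo_Host.
// Requires: using System.Runtime.InteropServices;
uint saveTarget = (uint)EDSDK.EdsSaveTo.Host;
IntPtr ptr = Marshal.AllocHGlobal(sizeof(uint));
try
{
    Marshal.WriteInt32(ptr, (int)saveTarget);
    err = EDSDK.EdsSetPropertyData(obj.camdevice, EDSDK.PropID_SaveTo, 0, sizeof(uint), ptr);
    if (err != EDSDK.EDS_ERR_OK)
        Debug.WriteLine("Error setting SaveTo property");
}
finally
{
    Marshal.FreeHGlobal(ptr);
}

One more thing worth knowing: when saving to the host, the EDSDK also expects you to report free disk space via EdsSetCapacity; without that call, some camera bodies refuse to release the shutter.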
The SDK (as far as I know) only exposes the picture-taking event in the form of the object being created on the file system of the camera (i.e. the SD card). There is no way that I know of to capture from the buffer. In a way this makes sense: in an environment with only a small amount of onboard memory, it is important to keep the volatile memory clear so the camera can continue to take photographs. Once the buffer has been flushed to nonvolatile memory, you are then clear to interact with those bytes. Limiting, I know, but it is what it is.
The question asks for C#, but in Java you would set the property like this:
NativeLongByReference number = new NativeLongByReference( new NativeLong( EdSdkLibrary.EdsSaveTo.kEdsSaveTo_Host ) );
EdsVoid data = new EdsVoid( number.getPointer() );
NativeLong l = EDSDK.EdsSetPropertyData(edsCamera, new NativeLong(EdSdkLibrary.kEdsPropID_SaveTo), new NativeLong(0), new NativeLong(NativeLong.SIZE), data);
After that, the usual download will work.
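In C#, matching the question's code, that download step might look like the sketch below for the getCapturedItem handler the question references. This assumes the stock EDSDK.cs wrapper that ships with the Canon SDK samples; treat the exact enum and struct names as assumptions.

// Hedged sketch of getCapturedItem: pull the directory item to a file on the PC.
// Assumes the stock EDSDK.cs wrapper from the Canon SDK samples.
private void getCapturedItem(IntPtr dirItem)
{
    uint err = EDSDK.EdsGetDirectoryItemInfo(dirItem, out EDSDK.EdsDirectoryItemInfo itemInfo);
    if (err != EDSDK.EDS_ERR_OK)
        return;

    // Create a file stream on the host and download the bytes into it.
    err = EDSDK.EdsCreateFileStream(itemInfo.szFileName,
        EDSDK.EdsFileCreateDisposition.CreateAlways,
        EDSDK.EdsAccess.ReadWrite, out IntPtr stream);
    if (err == EDSDK.EDS_ERR_OK)
        err = EDSDK.EdsDownload(dirItem, itemInfo.Size, stream);
    if (err == EDSDK.EDS_ERR_OK)
        err = EDSDK.EdsDownloadComplete(dirItem);   // tell the camera we're done with the item

    if (stream != IntPtr.Zero)
        EDSDK.EdsRelease(stream);
}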
I am trying to record a video from a camera while using MediaFrameReader. For my application I need the MediaFrameReader instance to process each frame individually. I tried applying LowLagMediaRecording additionally to simply save the camera stream to a file, but it seems you cannot use both methods simultaneously. That means I am stuck with MediaFrameReader alone, where I can access each frame in the FrameArrived handler.
I have tried several approaches and found two working solutions using the MediaComposition class with MediaClip objects. You can either save each frame as a JPEG file and finally render all images to a video file; this process is very slow since you constantly access the hard drive. Or you can create a MediaStreamSample object from the Direct3DSurface of a frame; this way the data stays in RAM (first in GPU RAM, then in system RAM once that is full) instead of on the hard drive, which in theory is much faster. The problem is that calling the RenderToFileAsync method of the MediaComposition class requires all MediaClips to already be in the internal list. This exceeds the available RAM after a rather short recording time: after collecting about 5 minutes of data, Windows had already created a 70 GB swap file, which defeats the reason for choosing this path in the first place.
I also tried the third-party library OpenCvSharp to save the processed frames as a video. I have done that previously in Python without any problems. In UWP, though, I am not able to interact with the file system without a StorageFile object, so all I get from OpenCvSharp is an UnauthorizedAccessException when I try to save the rendered video to the file system.
So, to summarize: what I need is a way to render the data of my camera frames to a video while the data is still coming in, so I can dispose of every frame after it has been processed, like a Python OpenCV implementation would do. I am very thankful for every hint. Here are parts of my code to better understand the context:
private void ColorFrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
{
    MediaFrameReference colorFrame = sender.TryAcquireLatestFrame();
    if (colorFrame != null)
    {
        if (currentMode == StreamingMode)
        {
            colorRenderer.Draw(colorFrame, true);
        }
        if (currentMode == RecordingMode)
        {
            MediaStreamSample sample = MediaStreamSample.CreateFromDirect3D11Surface(colorFrame.VideoMediaFrame.Direct3DSurface, new TimeSpan(0, 0, 0, 0, 33));
            ColorComposition.Clips.Add(MediaClip.CreateFromSurface(sample.Direct3D11Surface, new TimeSpan(0, 0, 0, 0, 33)));
        }
    }
}
private async Task CreateVideo(MediaComposition composition, string outputFileName)
{
    try
    {
        await mediaFrameReaderColor.StopAsync();
        mediaFrameReaderColor.FrameArrived -= ColorFrameArrived;
        mediaFrameReaderColor.Dispose();

        StorageFolder folder = await documentsFolder.GetFolderAsync(directory);
        StorageFile vid = await folder.CreateFileAsync(outputFileName + ".mp4", CreationCollisionOption.GenerateUniqueName);

        Stopwatch stopwatch = new Stopwatch();
        stopwatch.Start();
        await composition.RenderToFileAsync(vid, MediaTrimmingPreference.Precise);
        stopwatch.Stop();
        Debug.WriteLine("Video rendered: " + stopwatch.ElapsedMilliseconds);

        composition.Clips.Clear();
        composition = null;
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.Message);
    }
}
Please create a custom video effect and add it to the MediaCapture object to implement your requirement. Using a custom video effect lets you process the frames in the context of the MediaCapture object without using the MediaFrameReader, so all of the MediaFrameReader limitations you mentioned go away.
Besides, there are also a number of built-in effects for analyzing camera frames; for more information, please check the following articles:
#MediaCapture.AddVideoEffectAsync
https://learn.microsoft.com/en-us/uwp/api/windows.media.capture.mediacapture.addvideoeffectasync
#Custom video effects
https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/custom-video-effects
#Effects for analyzing camera frames
https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/scene-analysis-for-media-capture
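To make the shape of such an effect concrete, here is a minimal sketch (the class name and body are illustrative, not taken from the docs verbatim). Per the linked articles, the class must live in a Windows Runtime Component so the MediaCapture pipeline can activate it by class name.

using System.Collections.Generic;
using Windows.Foundation.Collections;
using Windows.Graphics.DirectX.Direct3D11;
using Windows.Media.Effects;
using Windows.Media.MediaProperties;

public sealed class FrameProcessingEffect : IBasicVideoEffect
{
    public bool IsReadOnly => true;                 // we only inspect frames, never modify them
    public bool TimeIndependent => false;
    public MediaMemoryTypes SupportedMemoryTypes => MediaMemoryTypes.GpuAndCpu;

    // An empty list lets the pipeline pick its default encoding (see the docs above).
    public IReadOnlyList<VideoEncodingProperties> SupportedEncodingProperties =>
        new List<VideoEncodingProperties>();

    public void SetEncodingProperties(VideoEncodingProperties encodingProperties,
                                      IDirect3DDevice device) { }

    public void ProcessFrame(ProcessVideoFrameContext context)
    {
        // Per-frame processing goes here; context.InputFrame is only valid for
        // the duration of this call, so copy out anything you need to keep.
    }

    public void SetProperties(IPropertySet configuration) { }
    public void DiscardQueuedFrames() { }
    public void Close(MediaEffectClosedReason reason) { }
}

You then attach it to the record stream with something like await mediaCapture.AddVideoEffectAsync(new VideoEffectDefinition(typeof(FrameProcessingEffect).FullName), MediaStreamType.VideoRecord); and recording keeps working alongside it.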
Thanks.
Requirement:
I am trying to capture audio/video of the Windows screen using the SharpAvi example together with a loopback audio stream from the NAudio example.
I am using C# and WPF to achieve this.
A couple of NuGet packages:
SharpAvi - for video capturing
NAudio - for audio capturing
What has been achieved:
I have successfully integrated the provided samples, and I'm trying to capture the audio through NAudio alongside the SharpAvi video stream so that the video is recorded together with audio.
Issue:
Whatever I write to the audio stream of the SharpAvi video, the output is recorded with video only and the audio is empty.
Checking audio alone to make sure:
But when I capture the audio as a separate file called "output.wav", it is recorded as expected and I can hear the recorded audio. So, for now, I conclude that the issue is only in the integration with video via SharpAvi.
writterx = new WaveFileWriter("Out.wav", audioSource.WaveFormat);
Full code to reproduce the issue:
https://drive.google.com/open?id=1H7Ziy_yrs37hdpYriWRF-nuRmmFbsfe-
Code glimpse from Recorder.cs
NAudio Initialization:
audioSource = new WasapiLoopbackCapture();
audioStream = CreateAudioStream(audioSource.WaveFormat, encodeAudio, audioBitRate);
audioSource.DataAvailable += audioSource_DataAvailable;
Capturing audio bytes and writing them to the SharpAvi audio stream:
private void audioSource_DataAvailable(object sender, WaveInEventArgs e)
{
    var signalled = WaitHandle.WaitAny(new WaitHandle[] { videoFrameWritten, stopThread });
    if (signalled == 0)
    {
        audioStream.WriteBlock(e.Buffer, 0, e.BytesRecorded);
        audioBlockWritten.Set();
        Debug.WriteLine("Bytes: " + e.BytesRecorded);
    }
}
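For reference, the SharpAvi sample pairs this handler with a video-side loop along the lines of the sketch below (GetScreenshot and the stream/handle names are assumptions). Note that if the video thread never sets videoFrameWritten, the handler above never writes any audio, which would also produce a silent track.

// Hedged sketch of the video-side loop that pairs with the handler above,
// modeled on the SharpAvi sample's wait-handle pattern; GetScreenshot is assumed.
private void RecordScreenThread()
{
    while (!stopThread.WaitOne(0))
    {
        byte[] frame = GetScreenshot();
        videoStream.WriteFrame(true, frame, 0, frame.Length);

        // Signal the audio handler that a frame is out, then wait until it has
        // written a matching audio block so the two streams stay interleaved.
        videoFrameWritten.Set();
        WaitHandle.WaitAny(new WaitHandle[] { audioBlockWritten, stopThread });
    }
}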
Can you please help me out with this? Any other way to achieve my requirement is also welcome.
Let me know if any further details are needed.
Obviously the author doesn't need this anymore, but since I ran into the same problem, others might need it.
The problem in my case was that I was getting audio every 0.1 seconds and attempted to write both new video and audio at the same time. Getting the new video data (taking a screenshot) took too long, so each frame was added every 0.3 seconds instead of every 0.1 seconds. That put the audio stream out of sync with the video, so it wasn't played properly by video players. After optimizing the code a little to stay within 0.1 seconds, the problem was gone.
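A sketch of what "within 0.1 seconds" can look like in code; this Stopwatch-based cadence is illustrative, not my actual fix:

// Hedged sketch: hold the capture loop to a fixed 100 ms cadence so the video
// timestamps line up with the 0.1 s audio blocks (names are illustrative).
TimeSpan interval = TimeSpan.FromMilliseconds(100);
Stopwatch clock = Stopwatch.StartNew();
TimeSpan next = TimeSpan.Zero;
while (!stopRequested)
{
    CaptureAndWriteFrame();            // must finish well under 100 ms
    next += interval;
    TimeSpan wait = next - clock.Elapsed;
    if (wait > TimeSpan.Zero)
        Thread.Sleep(wait);            // idle out the rest of the time slot
    // A negative wait means the frame took too long: exactly the desync
    // described above, so that is the code path to optimize.
}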
Morning all,
I'm developing a C# WPF application which continuously reads barcodes (about one every minute) from a DATALOGIC scanner (DS4800-1000) and sends them to a server, which replies with details about each specific barcode. The scanner is connected to a tablet running Windows 8.1 (non-RT) through a USB-to-serial converter from MOXA (model UPort 1100).
Whenever a new barcode is read, the DataReceived event is fired and handled with the following method:
private void port1_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    Log.log(Log.LogLevel.Info, "MainScreen.port1_DataReceived");
    Thread.Sleep(100);
    String data = "";

    // If the com port has been closed, do nothing
    if (!comport1.IsOpen)
    {
        Log.log(Log.LogLevel.Info, "MainScreen.port1_DataReceived - COM CLOSED");
        data = "COM CLOSED"; // Must be < 16 chars
    }
    else
    {
        // Obtain the number of bytes waiting in the port's buffer
        int bytes = comport1.BytesToRead;

        // Create a byte array buffer to hold the incoming data
        byte[] buffer = new byte[bytes];

        // Read the data from the port and store it in our buffer
        comport1.Read(buffer, 0, bytes);
        data = Encoding.Default.GetString(buffer);
        Log.log(Log.LogLevel.Info, "Data received from barcode scanner number 1: " + data);
    }

    // COM port is handled by a different thread; this.Dispatcher calls the original thread
    this.Dispatcher.Invoke((Action)(() =>
    {
        ExtractBarcodeData(data);
    }));
}
I'm observing strange behavior: at random times, I see no reaction at all in the application although the scanner actually reads a new barcode, when I would expect a new DataReceived event as with the previous barcodes. The logs tell me that the port is actually open, and I can also close it using a specific button which closes and reopens it. That's where the exception comes up (on the Open() call): A device attached to the system is not functioning.
I cannot reproduce this error in any way; it's totally unpredictable and random! Does anyone have any idea why the DataReceived event is not triggering?
Thanks,
FZ
Most USB-to-serial converters have this problem: they may disappear from the system and appear again, and all open handles become invalid when that happens.
Please open Device Manager and check the Power Management tab for each USB hub there. The system should not be allowed to power off the hub.
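If you cannot prevent the drop-outs entirely, a watchdog that disposes the stale handle and reopens the port is a common workaround. A rough sketch (the port name and baud rate are assumptions):

// Hedged sketch: recover when the USB-to-serial adapter drops off the bus.
// Call this periodically from a timer; port name and settings are assumed.
private void EnsurePortOpen()
{
    if (comport1 != null && comport1.IsOpen)
        return;
    try
    {
        comport1?.Dispose();                      // release the stale handle
        comport1 = new SerialPort("COM3", 9600);
        comport1.DataReceived += port1_DataReceived;
        comport1.Open();
    }
    catch (IOException)
    {
        // "A device attached to the system is not functioning" lands here
        // while the adapter re-enumerates; just retry on the next tick.
    }
}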
private void ViewReceivedImage(byte[] buffer)
{
    try
    {
        // Decode the received bytes into a bitmap and show it
        MemoryStream ms = new MemoryStream(buffer);
        BitmapImage bi = new BitmapImage();
        bi.SetSource(ms);
        MyImage.Source = bi;
        ms.Close();
    }
    catch (Exception) { } // swallow decode errors (e.g. incomplete frames) and keep going
    finally
    {
        StartReceiving(); // wait for the next frame
    }
}
I developed this code to get the image from the PC screen and show it on WP7, and it works fine on the WP7 emulator.
This is the video of it working on the emulator.
But when I install the XAP on a WP7 device, it does not show all the images; the refresh is so fast that only the top part of each image is shown.
I think maybe the WP7 hardware is just slow compared to my PC.
If I add a wait time, where should I put it? Or is there any other solution?
I use a TCP socket.
So try sending less data to speed it up: send half the number of frames, send less colour data, or compress it before sending (zip it or something along those lines).
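For the compression option, a rough PC-side sketch: JPEG-encode each screenshot and length-prefix it so the phone knows where one frame ends (the method and framing are illustrative). The length prefix also guards against the partial-image symptom, since TCP is a stream and a single read on the phone may return only part of a frame.

// Hedged sketch (PC side): JPEG-encode the screenshot and length-prefix it.
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Net.Sockets;

static void SendFrame(NetworkStream stream, Bitmap screenshot)
{
    using (MemoryStream ms = new MemoryStream())
    {
        screenshot.Save(ms, ImageFormat.Jpeg);     // much smaller than raw pixels
        byte[] length = BitConverter.GetBytes((int)ms.Length);
        stream.Write(length, 0, 4);                // 4-byte length prefix
        stream.Write(ms.GetBuffer(), 0, (int)ms.Length);
    }
}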
We're working on a SIP softphone, and we get audio feedback when we call from one phone to the other. However, when we call from a normal SIP phone (software or hardware) to our app, it all works fine; the problem only appears when calling from one phone using the app to another. Here is the code we use to initialize RIL audio:
public static void InitRILAudio()
{
    IntPtr res;
    RILRESULTCALLBACK result = new RILRESULTCALLBACK(f_result);
    RILNOTIFYCALLBACK notify = new RILNOTIFYCALLBACK(f_notify);

    res = RIL_Initialize(1, result, notify, (0x00010000 | 0x00020000 | 0x00080000), 0, out hRil);
    if (res != IntPtr.Zero)
        return;

    RILAUDIODEVICEINFO audioDeviceInfo = new RILAUDIODEVICEINFO();
    audioDeviceInfo.cbSize = 16;
    audioDeviceInfo.dwParams = 0x00000003;   // RIL_PARAM_ADI_ALL
    audioDeviceInfo.dwRxDevice = 0x00000001; // RIL_AUDIO_HANDSET
    audioDeviceInfo.dwTxDevice = 0x00000001; // RIL_AUDIO_HANDSET
    res = RIL_SetAudioDevices(hRil, audioDeviceInfo);
}
We are using SipEk (http://voipengine.googlepages.com/sipeksdk) for the SIP stack. Basically, we just use a callback delegate from the SDK for the audio. Has anyone else experienced problems with audio feedback loops like this, either with RIL audio or SipEk? Any suggestions?
Thanks in advance!
Feedback means that you're not using echo cancellation (line and/or acoustic, depending on whether it's working as a speakerphone or not), or if you are, the delay in your system (jitter buffers, network, encode/decode, etc) is greater than the echo canceller can handle. Excessive gain/clipping in the wrong places can also defeat any echo canceller (they don't like non-linear effects).
Sounds like you're just dumping the audio off to some other layers. SipEk is just a wrapper for pjsip, but I assume you're doing audio via the Microsoft RIL/etc stuff, not via pjmedia. You need to have a good understanding of your audio paths - where stuff gets sampled, if/how it's acoustic/line echo-cancelled, what the echo tail is, how it gets encoded and packetized, how it's received, jitter-buffered, loss-concealed, and decoded and played back.