I'm trying to develop an application on Windows CE. My device has a camera and I would like to handle it in my application. I've found many samples for Windows Mobile and tried to implement them, but without success.
Is there a specific class or assembly in Windows CE for camera handling?
Thanks
No, there is no generic "camera class" for using camera data under CE. Windows Mobile has the CameraCaptureDialog, but it requires a specific software interface that is only required of WinMo OEMs. Since cameras can have a wide variety of software interfaces, and since there is no requirement for any OEM to use a specific one, there wasn't a way for the CF team to wrap it in a control.
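For contrast, this is roughly how the Windows Mobile path looks. CameraCaptureDialog lives in Microsoft.WindowsMobile.Forms.dll and will not be present on a generic CE image unless the OEM implemented the required capture interface; treat the snippet as a sketch of that API rather than anything available to you on plain CE.

    using System.Windows.Forms;
    using Microsoft.WindowsMobile.Forms;

    class PhotoCapture
    {
        // Launches the Windows Mobile camera UI and returns the captured file path,
        // or null if the user cancelled.
        public static string CaptureStill()
        {
            var dialog = new CameraCaptureDialog();
            dialog.Mode = CameraCaptureMode.Still;   // still image rather than video

            if (dialog.ShowDialog() == DialogResult.OK)
                return dialog.FileName;              // path of the captured picture on the device

            return null;
        }
    }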
In many cases the camera input stream can be obtained through DirectShow. If your device has DirectShow and the camera driver provides a DirectShow interface (two big ifs), then you can probably create a filter graph to import it. Doing so involves a fair bit of complex COM interop, so it's not what I'd call simple, but it's at least achievable.
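To give a feel for the interop involved, here is a rough sketch, assuming a CE image that includes DirectShow and a .NET Compact Framework version with COM interop support (2.0 or later). The GUIDs are the standard DirectShow values and should be checked against your platform SDK; the real work of adding the camera driver's capture filter and rendering its pin is only indicated in comments.

    using System;
    using System.Runtime.InteropServices;

    [ComImport, Guid("56A868A9-0AD4-11CE-B03A-0020AF0BA770"),
     InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
    interface IGraphBuilder
    {
        // IFilterGraph methods, declared only so the vtable layout stays correct.
        void AddFilter(IntPtr pFilter, [MarshalAs(UnmanagedType.LPWStr)] string pName);
        void RemoveFilter(IntPtr pFilter);
        void EnumFilters(out IntPtr ppEnum);
        void FindFilterByName([MarshalAs(UnmanagedType.LPWStr)] string pName, out IntPtr ppFilter);
        void ConnectDirect(IntPtr pinOut, IntPtr pinIn, IntPtr mediaType);
        void Reconnect(IntPtr pin);
        void Disconnect(IntPtr pin);
        void SetDefaultSyncSource();

        // IGraphBuilder methods (the later ones are omitted because they are not used here).
        void Connect(IntPtr pinOut, IntPtr pinIn);
        void Render(IntPtr pinOut);
        void RenderFile([MarshalAs(UnmanagedType.LPWStr)] string file,
                        [MarshalAs(UnmanagedType.LPWStr)] string playList);
    }

    static class FilterGraphSketch
    {
        [DllImport("ole32.dll")]
        static extern int CoInitializeEx(IntPtr reserved, uint coInit);

        [DllImport("ole32.dll")]
        static extern int CoCreateInstance(ref Guid clsid, IntPtr outer, uint clsContext,
            ref Guid iid, [MarshalAs(UnmanagedType.Interface)] out IGraphBuilder graph);

        static Guid CLSID_FilterGraph = new Guid("E436EBB3-524F-11CE-9F53-0020AF0BA770");
        static Guid IID_IGraphBuilder = new Guid("56A868A9-0AD4-11CE-B03A-0020AF0BA770");

        public static IGraphBuilder CreateGraph()
        {
            CoInitializeEx(IntPtr.Zero, 0);   // COINIT_MULTITHREADED; CE COM is free-threaded
            IGraphBuilder graph;
            int hr = CoCreateInstance(ref CLSID_FilterGraph, IntPtr.Zero,
                1 /* CLSCTX_INPROC_SERVER */, ref IID_IGraphBuilder, out graph);
            if (hr != 0)
                throw new Exception("CoCreateInstance failed, HRESULT 0x" + hr.ToString("X8"));

            // Next steps (not shown): create the capture filter exposed by your camera driver,
            // graph.AddFilter(...) it, then Render() its capture pin into a renderer or sample grabber.
            return graph;
        }
    }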
I am making a C# app to capture graphics from an AverMedia PCIe capture card.
But it seems that there are no out-of-the-box tools to do so.
So I made a C++ DirectShow app to do the capture, which is a console app and opens a capture window when running.
How can I redirect the output to a C# app, for example to a CaptureElement?
So you want to have a XAML CaptureElement connected to an AverMedia PCIe capture card. This sounds like a well-understood challenge overall; however, every other piece of technology you mention ends up being a bad choice: DirectShow, multiple apps with piping and redirection, and fitting custom code to the XAML CaptureElement control.
Microsoft has intentionally limited the ways you can integrate different APIs, so there are not many ways to get everything working together.
Let us go over the intended integration path. The capture card is supposed to ship with a compatible driver:
Video capture devices are supported through the UVC class driver and must be compatible with UVC 1.1
When this is the case, such devices are visible to the Media Foundation API, which handles video capture among other tasks. A XAML CaptureElement is able to see a video capture device through this API, and this way everything is supposed to work without your having to fit anything together yourself.
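For reference, here is a minimal sketch of what the supported path looks like when the driver side is in order, assuming a UVC-compliant device that Media Foundation can enumerate. PreviewElement stands for a CaptureElement declared in the page's XAML; the name is my own, not part of the API.

    using System.Threading.Tasks;
    using Windows.Media.Capture;

    public sealed partial class MainPage
    {
        MediaCapture _mediaCapture;

        async Task StartPreviewAsync()
        {
            _mediaCapture = new MediaCapture();
            await _mediaCapture.InitializeAsync();   // picks the default video capture device

            PreviewElement.Source = _mediaCapture;   // <CaptureElement x:Name="PreviewElement"/> in XAML
            await _mediaCapture.StartPreviewAsync();
        }
    }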
If this is not happening, it suggests you are dealing with an unsupported device that comes without a suitable or compatible driver.
The previous media API in Windows was DirectShow, but its days are gone. It still works perfectly well as a legacy framework, and a lot of applications out there still rely on it, but it will not integrate with newer technology like XAML and UWP. More than that, even Media Foundation itself, the current media API, lags behind in its public offering when it comes to fitting in with the most recent technology. That said, it is a good idea to stay well clear of DirectShow here if at all possible.
I see no need for a cross-process design with video travelling between processes through pipes. There is no good reason for such a design, and even though inter-process sharing can work efficiently (Windows itself proves it can perform well with its so-called Frame Server service), it should not be built on piping. In your case it is unlikely that it has to be built on multiple processes at all. Instead, you can develop a native-code DLL that takes care of video acquisition and connects to managed code via a suitable glue layer: C++/CLI, COM, C++/WinRT and the like.
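As an illustration of that layout, here is a sketch of the managed side of the glue layer, using plain P/Invoke. The DLL name avcapture.dll and its OpenDevice/ReadFrame/CloseDevice exports are hypothetical placeholders for your own native acquisition library wrapping the vendor SDK; only the pattern is the point.

    using System;
    using System.Runtime.InteropServices;

    static class NativeCapture
    {
        // Opens the capture device with the given index; returns 0 on success (hypothetical export).
        [DllImport("avcapture.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern int OpenDevice(int deviceIndex);

        // Blocks until the next frame is available and copies it into the caller's buffer (hypothetical export).
        [DllImport("avcapture.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern int ReadFrame(byte[] bgraBuffer, int bufferSize,
                                           out int width, out int height);

        // Releases the device (hypothetical export).
        [DllImport("avcapture.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern void CloseDevice();
    }

The managed side then simply loops, calling ReadFrame and handing each buffer to the presentation layer, for example the SoftwareBitmap route sketched below.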
The next thing is fitting this to the XAML CaptureElement. The control is designed to work with the Windows.Media.Capture.MediaCapture class, which talks to the hardware, and you don't have suitable hardware because you plan to implement your own acquisition layer. Long story short, you are not supposed to forward external data to CaptureElement, and you would have a hard time doing so. Your best strategy is to upload the externally obtained data into a Windows.Graphics.Imaging.SoftwareBitmap or the like and accept the performance impact involved. That is, you will be dealing with video frames as images.
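For example, here is a minimal sketch of pushing an externally acquired frame into XAML, assuming the frame is already a BGRA8 byte array: instead of CaptureElement, you bind a plain Image control to a SoftwareBitmapSource. PreviewImage is an assumed x:Name in the page's XAML, not an API member.

    using System.Threading.Tasks;
    using System.Runtime.InteropServices.WindowsRuntime;
    using Windows.Graphics.Imaging;
    using Windows.UI.Xaml.Media.Imaging;

    public sealed partial class MainPage
    {
        async Task ShowFrameAsync(byte[] bgraFrame, int width, int height)
        {
            // SoftwareBitmapSource only accepts BGRA8 with premultiplied alpha.
            SoftwareBitmap bitmap = SoftwareBitmap.CreateCopyFromBuffer(
                bgraFrame.AsBuffer(),
                BitmapPixelFormat.Bgra8,
                width,
                height,
                BitmapAlphaMode.Premultiplied);

            var source = new SoftwareBitmapSource();
            await source.SetBitmapAsync(bitmap);

            PreviewImage.Source = source;   // <Image x:Name="PreviewImage"/> in XAML
        }
    }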
An alternative is to upload the acquired video frames into Direct3D 11 textures, which opens a more performant path of integration with video-related controls such as Windows.UI.Xaml.Controls.SwapChainPanel; however, it also requires considerably more development effort.
I want to detect whether any audio is playing from any application, using Windows API functions such as waveOut... or the mixer functions. I am going to use this in a program I am writing for Windows CE 6 using C#. Note that I am programming for Windows CE, so there are some limitations, and I don't want to use DirectX.
We need more details about the device that will be used.
It's not wise to use anything else: DirectX is well prepared for the common problems you will run into.
Also, have a look at Platform Builder. It may be more complex, but you will learn what kind of inputs and outputs you will be working with.
I am trying to develop a Windows Forms application (not WPF) where I need to preview the cameras available on a tablet or PC, take pictures, and then save the pictures on the device.
I am very new to this kind of application development and recently came across MediaCapture, but I cannot find a good lead to start with.
Can anyone let me know how to approach this, how I can build an application with the aforementioned features, or provide a good lead?
P.S. I found a good example at https://code.msdn.microsoft.com/windowsapps/media-capture-sample-adf87622/ but it uses XAML, not a Windows Forms application...
What kind of cameras do you have? If the cameras support ONVIF, there is good ONVIF camera software you could try. You can handle many cameras with it and take snapshots, so I guess it could work for you.
I'm writing a program in C#/Mono to be run on an ARM computer. It needs to be able to get the state of an Xbox controller plugged into the system. I have tried a number of libraries (XInputDotNet, for one), but they have all had various issues (such as trying to use incompatible native C++ DLLs).
How can I read the state of an Xbox controller on an ARM machine?
P.S. Although there are similar questions, this is not a duplicate. All the solutions for previous questions fail for one reason or another on ARM.
To speak more directly to your task: you need to interface with a USB device. Typically this means opening a handle to the USB driver, giving you a pipe through which you can read and write data to the USB device. libusb is a great way to do that in Linux userspace applications.
Given that you're going to be doing this in C# on Mono, you're going to need to P/Invoke into libusb. If you're not familiar with P/Invoke yet, I'd recommend practicing on something small, since it can get complicated very fast; in particular, read about SafeHandles.
There is at least one existing project you could base your work on: xboxdrv, a userland C++ application built on top of libusb that reads and writes the controller's data and presents it as a standard joystick, so that any joystick-aware program can use it.
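To make the pieces concrete, here is a rough sketch of polling the pad through libusb-1.0 from Mono. The VID/PID and the 0x81 interrupt-IN endpoint are the commonly reported values for the wired Xbox 360 controller (they are what xboxdrv and xusb use); treat them as assumptions and verify with lsusb on your device. Error handling and SafeHandles are left out to keep it short, and on some distributions the DllImport name may need to be the full "libusb-1.0.so.0".

    using System;
    using System.Runtime.InteropServices;

    static class XpadReader
    {
        const string Lib = "libusb-1.0";

        [DllImport(Lib)] static extern int libusb_init(out IntPtr ctx);
        [DllImport(Lib)] static extern void libusb_exit(IntPtr ctx);
        [DllImport(Lib)] static extern IntPtr libusb_open_device_with_vid_pid(IntPtr ctx, ushort vid, ushort pid);
        [DllImport(Lib)] static extern void libusb_close(IntPtr handle);
        [DllImport(Lib)] static extern int libusb_set_auto_detach_kernel_driver(IntPtr handle, int enable);
        [DllImport(Lib)] static extern int libusb_claim_interface(IntPtr handle, int iface);
        [DllImport(Lib)] static extern int libusb_release_interface(IntPtr handle, int iface);
        [DllImport(Lib)] static extern int libusb_interrupt_transfer(IntPtr handle, byte endpoint,
            byte[] data, int length, out int transferred, uint timeout);

        static void Main()
        {
            IntPtr ctx;
            libusb_init(out ctx);

            // 0x045E:0x028E is the wired Xbox 360 controller (assumed; check with lsusb).
            IntPtr pad = libusb_open_device_with_vid_pid(ctx, 0x045E, 0x028E);
            if (pad == IntPtr.Zero) { Console.WriteLine("Controller not found"); return; }

            libusb_set_auto_detach_kernel_driver(pad, 1);   // let libusb detach the kernel xpad driver
            libusb_claim_interface(pad, 0);

            byte[] report = new byte[20];                   // 360 pad input reports are 20 bytes
            int got;
            if (libusb_interrupt_transfer(pad, 0x81, report, report.Length, out got, 1000) == 0)
            {
                // Bytes 2-3 carry the digital button bitmask; see xboxdrv/xusb for the full layout.
                Console.WriteLine("Read {0} bytes, buttons = 0x{1:X2}{2:X2}", got, report[3], report[2]);
            }

            libusb_release_interface(pad, 0);
            libusb_close(pad);
            libusb_exit(ctx);
        }
    }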
The libusb library, which can be compiled on any Linux platform, including ARM, actually has a sample program (xusb) that can report the status of an Xbox controller.
See https://github.com/libusb/libusb/blob/master/examples/xusb.c#L945.
Is it possible to have multiple applications using the same Kinect device?
Multiple applications can't use the same Kinect device, since a Kinect sensor can only be acquired by one app at a time. You can, however, choose one of the approaches listed below:
Use multiple Kinects, so each app uses a separate Kinect
Use a single Kinect for one of the apps and share its data with the other applications using inter-process communication (a sketch of this option follows the list)
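To illustrate the second option, here is a minimal sketch of sharing depth frames through a memory-mapped file. The map name, frame size, and class are my own choices rather than Kinect SDK API, and synchronization (for example an EventWaitHandle signalling that a new frame was written) is left out for brevity.

    using System.IO.MemoryMappedFiles;

    static class DepthFrameSharing
    {
        const string MapName = "KinectDepthFrame";
        const int FrameBytes = 640 * 480 * 2;   // 640x480 depth frame, 16 bits per pixel

        // Producer side: the single app that owns the sensor creates the map once...
        public static MemoryMappedFile CreateMap()
        {
            return MemoryMappedFile.CreateNew(MapName, FrameBytes);
        }

        // ...and copies each depth frame into it as the frames arrive.
        public static void PublishFrame(MemoryMappedFile map, short[] depthPixels)
        {
            using (var accessor = map.CreateViewAccessor(0, FrameBytes))
                accessor.WriteArray(0, depthPixels, 0, depthPixels.Length);
        }

        // Consumer side: any other app opens the same map read-only and copies the frame out.
        public static short[] ReadFrame()
        {
            using (var map = MemoryMappedFile.OpenExisting(MapName, MemoryMappedFileRights.Read))
            using (var accessor = map.CreateViewAccessor(0, FrameBytes, MemoryMappedFileAccess.Read))
            {
                var pixels = new short[FrameBytes / 2];
                accessor.ReadArray(0, pixels, 0, pixels.Length);
                return pixels;
            }
        }
    }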
Multiple applications cannot share the device itself; however, using the Kinect Service you can build one application that acts as a bridge between two apps, sharing the color, skeletal, and depth data. The only thing you won't be able to do out of the box is tilt the Kinect.
Another workaround is to have multiple applications or windows tied to one project, using a separate Kinect class to provide access to the SDK functions.