private void ViewReceivedImage(byte[] buffer)
{
    try
    {
        // Wrap the received bytes in a stream and decode them into a bitmap for the UI.
        MemoryStream ms = new MemoryStream(buffer);
        BitmapImage bi = new BitmapImage();
        bi.SetSource(ms);
        MyImage.Source = bi;
        ms.Close();
    }
    catch (Exception) { }
    finally
    {
        // Queue up the next read regardless of whether this frame decoded.
        StartReceiving();
    }
}
I developed this code to capture the PC screen image and show it on WP7, and it works fine on the WP7 emulator.
Here is the video of it working on the emulator.
But when I install the XAP on a WP7 device, it does not show the complete images; the refresh is so fast that only the top part of each image is shown.
I think the WP7 hardware may simply be much slower than my PC.
If I add a wait time, where should I put it? Or is there another solution?
I am using a TCP socket.
So try sending less data to speed it up:
only send half the number of frames,
send less colour data,
or compress it before sending (zip it or something along those lines; see the sketch below).
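For the compression/quality idea, a rough sketch of what the PC-side sender could do: re-encode each captured screen bitmap as a lower-quality JPEG before writing it to the socket. The quality value (40L) and the EncodeFrame/screenShot names are illustrative, not from the original project:
// Needs: using System.Drawing; using System.Drawing.Imaging; using System.IO; using System.Linq;
static byte[] EncodeFrame(Bitmap screenShot)
{
    // Find the built-in JPEG encoder.
    ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
        .First(c => c.FormatID == ImageFormat.Jpeg.Guid);

    using (var ms = new MemoryStream())
    using (var parameters = new EncoderParameters(1))
    {
        // Lower quality means smaller frames, which means a faster refresh on the phone.
        parameters.Param[0] = new EncoderParameter(Encoder.Quality, 40L);
        screenShot.Save(ms, jpegCodec, parameters);
        return ms.ToArray();
    }
}
Skipping every other captured frame before calling something like this halves the data again.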
I am creating an application which streams audio over UDP. Currently it works fine; however, it uses a lot of network bandwidth (up to 500 kbps). Is this normal? Is there a way I can compress or slightly reduce the quality of the audio so it uses less bandwidth?
WasapiCapture capture = new WasapiLoopbackCapture();
capture.Initialize();
capture.Start();
capture.DataAvailable += (object sender, DataAvailableEventArgs e) =>
{
// Send data here (works fine)
};
Yes, this is normal. It's capturing at the default rate. You need to set your bitrate and codec.
This is what I am doing:
_soundIn = new WasapiLoopbackCapture(0, new CSCore.WaveFormat(44100, 16, 1, CSCore.AudioEncoding.MpegLayer3));
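For a rough sense of scale (plain arithmetic, not a measurement of this app): uncompressed PCM bandwidth is roughly sample rate × bits per sample × channels, so 44,100 Hz × 16 bits × 1 channel ≈ 706 kbps before any compression, and 16-bit stereo at the same rate is about 1.4 Mbps. That is why lowering the sample rate or channel count, or switching to a compressed codec such as MP3, makes such a large difference to network usage.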
I am trying to record a video of a camera while using MediaFrameReader. For my application I need the MediaFrameReader instance to process each frame individually. I have additionally tried to apply LowLagMediaRecording to simply save the camera stream to a file, but it seems that you cannot use both methods simultaneously. That means I am stuck with MediaFrameReader only, where I can access each frame in the Frame_Arrived method.
I have tried several approaches and found two working solutions with the MediaComposition class creating MediaClip objects. You can either save each frame as a JPEG file and finally render all the images to a video file; this process is very slow since you constantly need to access the hard drive. Or you can create a MediaStreamSample object from the Direct3DSurface of a frame; this way you can keep the data in RAM (first in GPU RAM, then in system RAM once that is full) instead of on the hard drive, which in theory is much faster. The problem is that calling the RenderToFileAsync method of the MediaComposition class requires all MediaClips to have already been added to the internal list. This leads to exceeding the RAM after a fairly short recording time: after collecting data for approximately 5 minutes, Windows had already created a 70 GB swap file, which defeats the reason for choosing this path in the first place.
I also tried the third-party library OpenCvSharp to save the processed frames as a video. I have done that previously in Python without any problems. In UWP, though, I am not able to interact with the file system without a StorageFile object, so all I get from OpenCvSharp is an UnauthorizedAccessException when I try to save the rendered video to the file system.
So, to summarize: what I need is a way to render the data of my camera frames to a video while the data is still coming in, so I can dispose of every frame after it has been processed, like a Python OpenCV implementation would. I am very thankful for every hint. Here are parts of my code to better understand the context:
private void ColorFrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
{
    MediaFrameReference colorFrame = sender.TryAcquireLatestFrame();
    if (colorFrame != null)
    {
        if (currentMode == StreamingMode)
        {
            colorRenderer.Draw(colorFrame, true);
        }
        if (currentMode == RecordingMode)
        {
            // Wrap the frame's Direct3D surface in a ~33 ms (30 fps) sample/clip and queue it for later rendering.
            MediaStreamSample sample = MediaStreamSample.CreateFromDirect3D11Surface(colorFrame.VideoMediaFrame.Direct3DSurface, new TimeSpan(0, 0, 0, 0, 33));
            ColorComposition.Clips.Add(MediaClip.CreateFromSurface(sample.Direct3D11Surface, new TimeSpan(0, 0, 0, 0, 33)));
        }
    }
}
private async Task CreateVideo(MediaComposition composition, string outputFileName)
{
    try
    {
        await mediaFrameReaderColor.StopAsync();
        mediaFrameReaderColor.FrameArrived -= ColorFrameArrived;
        mediaFrameReaderColor.Dispose();

        StorageFolder folder = await documentsFolder.GetFolderAsync(directory);
        StorageFile vid = await folder.CreateFileAsync(outputFileName + ".mp4", CreationCollisionOption.GenerateUniqueName);

        Stopwatch stopwatch = new Stopwatch();
        stopwatch.Start();
        await composition.RenderToFileAsync(vid, MediaTrimmingPreference.Precise);
        stopwatch.Stop();
        Debug.WriteLine("Video rendered: " + stopwatch.ElapsedMilliseconds);

        composition.Clips.Clear();
        composition = null;
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.Message);
    }
}
I am trying to record a video of a camera while using MediaFrameReader. For my application I need the MediaFrameReader instance to process each frame individually.
Please create a custom video effect and add it to the MediaCapture object to implement your requirement. Using a custom video effect allows you to process the frames in the context of the MediaCapture object without using the MediaFrameReader; this way, all of the limitations of the MediaFrameReader that you have mentioned go away.
Besides, there are also a number of built-in effects that allow you to analyze camera frames. For more information, please check the following articles:
#MediaCapture.AddVideoEffectAsync
https://learn.microsoft.com/en-us/uwp/api/windows.media.capture.mediacapture.addvideoeffectasync
#Custom video effects
https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/custom-video-effects
#Effects for analyzing camera frames
https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/scene-analysis-for-media-capture
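For illustration, here is a minimal sketch of such an effect and of hooking it up, assuming the effect class lives in a separate Windows Runtime Component and that names like ExampleVideoEffect and mediaCapture are placeholders for your own types and instances:
using System;
using System.Collections.Generic;
using Windows.Foundation.Collections;
using Windows.Graphics.DirectX.Direct3D11;
using Windows.Media.Effects;
using Windows.Media.MediaProperties;

// Pass-through IBasicVideoEffect; per-frame processing goes in ProcessFrame.
public sealed class ExampleVideoEffect : IBasicVideoEffect
{
    public void ProcessFrame(ProcessVideoFrameContext context)
    {
        // Access context.InputFrame / context.OutputFrame here instead of using MediaFrameReader.
    }

    public void SetEncodingProperties(VideoEncodingProperties encodingProperties, IDirect3DDevice device) { }
    public void SetProperties(IPropertySet configuration) { }
    public void Close(MediaEffectClosedReason reason) { }
    public void DiscardQueuedFrames() { }

    public bool IsReadOnly => false;
    public bool TimeIndependent => false;
    public MediaMemoryTypes SupportedMemoryTypes => MediaMemoryTypes.GpuAndCpu;
    public IReadOnlyList<VideoEncodingProperties> SupportedEncodingProperties =>
        new List<VideoEncodingProperties> { new VideoEncodingProperties { Subtype = "ARGB32" } };
}

// Somewhere in your async setup code, register the effect on the existing MediaCapture instance:
await mediaCapture.AddVideoEffectAsync(
    new VideoEffectDefinition(typeof(ExampleVideoEffect).FullName),
    MediaStreamType.VideoRecord);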
Thanks.
I'm developing a UWP application which has to receive a video stream from a remote PC.
Right now I'm capturing the video from the PC's webcam and sending it to a remote server, which returns it to me over a TCP socket.
I've been able to successfully do this with an audio stream.
The problem occurs when I receive a portion of the video stream as a byte array and try to create a SoftwareBitmap which has to be shown in a XAML Image element.
The source code is structured to fire an event when a video frame has been captured, then convert it to a byte[] and write it to the TCP socket; when a message is received on the socket, another event is fired in order to feed the UI with a single image.
Here is the portion of the code in which I get the exception:
var reader = (DataReader)sender;
try
{
    SoftwareBitmap img = new SoftwareBitmap(BitmapPixelFormat.Bgra8, 1280, 720);
    img.CopyFromBuffer(reader.ReadBuffer(reader.UnconsumedBufferLength));
    ImageReadyEvent(SoftwareBitmap.Convert(img,
                                           BitmapPixelFormat.Bgra8,
                                           BitmapAlphaMode.Ignore), null);
}
catch (Exception ex)
{
    throw;
}
The exception is thrown when img.CopyFromBuffer(reader.ReadBuffer(reader.UnconsumedBufferLength)); is called.
At that moment the value of reader.UnconsumedBufferLength is 55,000 bytes.
The same code works perfectly if I execute it right after the video frame is ready, without sending it over the socket.
I've also tried a BitmapDecoder, but it fails every time, with both of the possible overloads of BitmapDecoder.CreateAsync().
I can't figure out how to solve this issue; does anyone have advice to make this work?
Your code is correct; the buffer is probably getting mismatched when transferred over the TCP socket. Note that a full 1280x720 Bgra8 frame is 1280 × 720 × 4 ≈ 3.7 MB, while you are only reading 55,000 bytes, so CopyFromBuffer cannot be given a complete frame. Please try to compare the received data with the source data, and optimize your transport protocol.
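One common way to "optimize the transport protocol" is to frame each image explicitly. Here is a rough sketch, assuming the sender is changed to write a 4-byte length prefix before each frame's bytes (ReadFrameAsync and the fixed 1280x720 Bgra8 size are illustrative):
// Needs: using Windows.Graphics.Imaging; using Windows.Storage.Streams;
// Reads exactly one length-prefixed frame from the socket's DataReader.
private async Task<SoftwareBitmap> ReadFrameAsync(DataReader reader)
{
    // Read the 4-byte length prefix written by the sender.
    // Make sure reader.ByteOrder matches how the sender writes the length.
    await reader.LoadAsync(sizeof(uint));
    uint frameLength = reader.ReadUInt32();

    // TCP is a byte stream, so keep loading until the whole frame has arrived.
    uint loaded = 0;
    while (loaded < frameLength)
    {
        loaded += await reader.LoadAsync(frameLength - loaded);
    }

    IBuffer frameBuffer = reader.ReadBuffer(frameLength);

    var img = new SoftwareBitmap(BitmapPixelFormat.Bgra8, 1280, 720, BitmapAlphaMode.Ignore);
    img.CopyFromBuffer(frameBuffer);
    return img;
}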
I am currently trying to save not only the skeletal data but also the color frame images, for post-processing reasons. Currently, this is the section of the code that handles the color video and outputs a color image in the UI. I figure this is where the saving of the color frame images has to take place.
private void ColorFrameEvent(ColorImageFrameReadyEventArgs colorImageFrame)
{
    // Get raw image
    using (ColorImageFrame colorVideoFrame = colorImageFrame.OpenColorImageFrame())
    {
        if (colorVideoFrame != null)
        {
            // Create array for pixel data and copy it from the image frame
            Byte[] pixelData = new Byte[colorVideoFrame.PixelDataLength];
            colorVideoFrame.CopyPixelDataTo(pixelData);

            // Set alpha to 255
            for (int i = 3; i < pixelData.Length; i += 4)
            {
                pixelData[i] = (byte)255;
            }

            using (colorImage.GetBitmapContext())
            {
                colorImage.FromByteArray(pixelData);
            }
        }
    }
}
I have tried reading up on OpenCV, EmguCV, and multithreading, but I am pretty confused. It would be nice to have a solid explanation in one location. However, I feel like the best way to do this without losing frames per second would be to save all the images in a list of arrays, perhaps, and then when the program finishes do some post-processing to convert arrays -> images -> video in Matlab.
Can someone comment on how I would go about saving the color image stream to a file?
The ColorImageFrameReady event is triggered 30 times a second (30 fps) if everything goes smoothly. I think it's rather heavy to save every picture the moment it arrives.
I suggest you use a BackgroundWorker: check whether the worker is busy, and if not, just pass the bytes to the BackgroundWorker and do your magic there; a sketch is shown after the links below.
You can easily save or make an image from a byte[]. Just google this.
http://www.codeproject.com/Articles/15460/C-Image-to-Byte-Array-and-Byte-Array-to-Image-Conv
How to compare two images using byte arrays
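A minimal sketch of that BackgroundWorker idea, assuming 640x480 Bgr32 Kinect color frames; the frame size, JPEG format and file-name scheme are illustrative assumptions, not taken from the original code:
// Needs: using System.ComponentModel; using System.IO; using System.Windows.Media; using System.Windows.Media.Imaging;
private readonly BackgroundWorker saveWorker = new BackgroundWorker();
private int frameCounter;

private void InitSaveWorker()
{
    saveWorker.DoWork += (s, e) =>
    {
        byte[] pixels = (byte[])e.Argument;

        // Rebuild a bitmap from the raw pixel bytes and encode it as JPEG.
        var bitmap = BitmapSource.Create(640, 480, 96, 96, PixelFormats.Bgr32, null, pixels, 640 * 4);
        var encoder = new JpegBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(bitmap));

        using (var fs = new FileStream(string.Format("frame_{0:D6}.jpg", frameCounter++), FileMode.Create))
        {
            encoder.Save(fs);
        }
    };
}

// Inside ColorFrameEvent, after pixelData has been filled:
// if (!saveWorker.IsBusy) saveWorker.RunWorkerAsync(pixelData);   // frames arriving while busy are simply skipped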
I connected a Canon EOS camera to the PC.
I have an application with which I can take a picture remotely and download the image to the PC,
but when I remove the SD card from the camera, I can't download the image from the buffer to the PC.
// Register the object event callback
err = EDSDK.EdsSetObjectEventHandler(obj.camdevice, EDSDK.ObjectEvent_All, objectEventHandler, new IntPtr(0));
if (err != EDSDK.EDS_ERR_OK)
    Debug.WriteLine("Error registering object event handler");
public uint objectEventHandler(uint inEvent, IntPtr inRef, IntPtr inContext)
{
    switch (inEvent)
    {
        case EDSDK.ObjectEvent_DirItemCreated:
            this.getCapturedItem(inRef);
            Debug.WriteLine("dir item created");
            break;
        case EDSDK.ObjectEvent_DirItemRequestTransfer:
            this.getCapturedItem(inRef);
            Debug.WriteLine("file transfer request event");
            break;
        default:
            Debug.WriteLine(String.Format("ObjectEventHandler: event {0}", inEvent));
            break;
    }
    return 0;
}
Could anyone help me understand why this event is not called,
or how I can download the image from the camera's buffer to the PC without an SD card in the camera?
Thanks
You probably ran into the same problem as I did yesterday: the camera tries to store the image for a later download, finds no memory card to store it to and instantly discards the image.
To get your callback to fire, you need to set the camera to save images to the PC (kEdsSaveTo_Host) at some point during your camera initialization routine. In C++, it worked like this:
EdsInt32 saveTarget = kEdsSaveTo_Host;
err = EdsSetPropertyData( _camera, kEdsPropID_SaveTo, 0, 4, &saveTarget );
You probably need to build an IntPtr for this. At least, that's what Dmitriy Prozorovskiy did (prompted by a certain akadunno) in this thread.
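For reference, a rough C# sketch of building that IntPtr; the constant names (PropID_SaveTo, kEdsSaveTo_Host) and the assumption that your EdsSetPropertyData declaration takes the data pointer as an IntPtr follow the commonly used EDSDK.cs wrapper and may differ in yours, so treat this as an untested sketch:
// Needs: using System.Runtime.InteropServices;
// Tell the camera to save captured images to the host PC instead of a memory card.
uint saveTarget = 2;                                   // kEdsSaveTo_Host in the EDSDK headers
IntPtr data = Marshal.AllocHGlobal(sizeof(uint));
try
{
    Marshal.WriteInt32(data, (int)saveTarget);
    err = EDSDK.EdsSetPropertyData(obj.camdevice, EDSDK.PropID_SaveTo, 0, sizeof(uint), data);
    if (err != EDSDK.EDS_ERR_OK)
        Debug.WriteLine("Error setting kEdsPropID_SaveTo");
}
finally
{
    Marshal.FreeHGlobal(data);
}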
The SDK (as far as I know) only exposes the picture-taking event in the form of the object being created on the file system of the camera (i.e. the SD card). There is no way that I am aware of to capture from the buffer. In a way this makes sense, because in an environment where there is only a small amount of onboard memory, it is important to keep the volatile memory clear so that the camera can continue to take photographs. Once the buffer has been flushed to nonvolatile memory, you are then clear to interact with those bytes. Limiting, I know, but it is what it is.
The question asks for C#, but in Java one has to set the property like this:
NativeLongByReference number = new NativeLongByReference( new NativeLong( EdSdkLibrary.EdsSaveTo.kEdsSaveTo_Host ) );
EdsVoid data = new EdsVoid( number.getPointer() );
NativeLong l = EDSDK.EdsSetPropertyData(edsCamera, new NativeLong(EdSdkLibrary.kEdsPropID_SaveTo), new NativeLong(0), new NativeLong(NativeLong.SIZE), data);
And then the usual download will do.