I am creating an application for HoloLens with Unity 2020.3.12f and the MixedRealityToolkit.
I have a problem loading a file into a byte array in the background on the HoloLens.
To visualize PNG graphics (> 100 files) in my HoloLens app, I create sprites dynamically from byte arrays.
Loading one image (~96 KB) into a byte array with File.ReadAllBytes takes around 300 ms, which is very slow.
To keep the process of updating the shown image fast, the byte arrays of all images should be preloaded in a GameObject's script.
I decided to load 20 files into byte arrays and show the first image as a sprite.
After this, the remaining files should be loaded in the background, so the app still allows interaction. The function I call from a button should look like this:
public void LoadSomething()
{
    UnityEngine.WSA.Application.InvokeOnUIThread(async () =>
    {
        // Load the first byte arrays and show the first image
        // Load the rest of the byte arrays without blocking interaction in the app
        LoadBytes();
    }, false);
}
The files to load are stored as StorageFiles on a GameObject "Manager".
The loading function could look like this:
private void LoadBytes()
{
    foreach (StorageFile storageFile in manager.listOfStorageFiles)
    {
        byte[] fileData = File.ReadAllBytes(storageFile.Path);
        manager.byteList.Add(fileData);
    }
}
I tried to start the function with Task.Run(() => LoadBytes()), but this freezes the UI and nothing happens. I think there is a problem with accessing a GameObject from a background thread.
Does anyone have a solution for calling this function from the UI thread while keeping my UI interactive?
Since your needs are I/O-bound, it is recommended to rewrite LoadBytes() as an asynchronous method that performs a non-blocking wait on the background job. For more information, please see: Asynchronous programming patterns.
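A minimal sketch of that suggestion, reusing the manager fields from the question (the method name LoadBytesAsync and the buffered read loop are illustrative, not the poster's actual code):

private async Task LoadBytesAsync()
{
    foreach (StorageFile storageFile in manager.listOfStorageFiles)
    {
        using (var stream = new FileStream(storageFile.Path, FileMode.Open,
            FileAccess.Read, FileShare.Read, 4096, useAsync: true))
        {
            byte[] fileData = new byte[stream.Length];
            int offset = 0;
            // Read in a loop; ReadAsync may return fewer bytes than requested.
            while (offset < fileData.Length)
            {
                offset += await stream.ReadAsync(fileData, offset, fileData.Length - offset);
            }
            // In Unity, the await resumes on the main thread via the
            // SynchronizationContext, so touching the manager object here is safe.
            manager.byteList.Add(fileData);
        }
    }
}

The async lambda in LoadSomething() would then call await LoadBytesAsync(); instead of LoadBytes();, so the file reads overlap with UI interaction instead of blocking it.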
Related
I am trying to record video from a camera while using MediaFrameReader. For my application, I need the MediaFrameReader instance to process each frame individually. I have tried to additionally apply LowLagMediaRecording to simply save the camera stream to a file, but it seems you cannot use both methods simultaneously. That means I am stuck with MediaFrameReader only, where I can access each frame in the FrameArrived handler.
I have tried several approaches and found two working solutions using the MediaComposition class and MediaClip objects. You can either save each frame as a JPEG file and finally render all images to a video file; this process is very slow, since you constantly need to access the hard drive. Or you can create a MediaStreamSample object from the Direct3DSurface of a frame. This way you can keep the data in RAM (first in GPU RAM, then in system RAM once that is full) instead of on the hard drive, which in theory is much faster. The problem is that calling the RenderToFileAsync method of the MediaComposition class requires all MediaClips to already be in the internal list. This exceeds the available RAM after a rather short recording time: after collecting roughly 5 minutes of data, Windows had already created a 70 GB swap file, which defeats the purpose of choosing this path in the first place.
I also tried the third-party library OpenCvSharp to save the processed frames as a video. I have done that previously in Python without any problems. In UWP, though, I am not able to interact with the file system without a StorageFile object, so all I get from OpenCvSharp is an UnauthorizedAccessException when I try to save the rendered video to the file system.
So, to summarize: what I need is a way to render the data of my camera frames to a video while the data is still coming in, so I can dispose of every frame after it has been processed, like a Python OpenCV implementation would. I am very thankful for every hint. Here are parts of my code to better understand the context:
private void ColorFrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
{
    MediaFrameReference colorFrame = sender.TryAcquireLatestFrame();
    if (colorFrame != null)
    {
        if (currentMode == StreamingMode)
        {
            colorRenderer.Draw(colorFrame, true);
        }
        if (currentMode == RecordingMode)
        {
            MediaStreamSample sample = MediaStreamSample.CreateFromDirect3D11Surface(colorFrame.VideoMediaFrame.Direct3DSurface, new TimeSpan(0, 0, 0, 0, 33));
            ColorComposition.Clips.Add(MediaClip.CreateFromSurface(sample.Direct3D11Surface, new TimeSpan(0, 0, 0, 0, 33)));
        }
    }
}
private async Task CreateVideo(MediaComposition composition, string outputFileName)
{
    try
    {
        await mediaFrameReaderColor.StopAsync();
        mediaFrameReaderColor.FrameArrived -= ColorFrameArrived;
        mediaFrameReaderColor.Dispose();
        StorageFolder folder = await documentsFolder.GetFolderAsync(directory);
        StorageFile vid = await folder.CreateFileAsync(outputFileName + ".mp4", CreationCollisionOption.GenerateUniqueName);
        Stopwatch stopwatch = new Stopwatch();
        stopwatch.Start();
        await composition.RenderToFileAsync(vid, MediaTrimmingPreference.Precise);
        stopwatch.Stop();
        Debug.WriteLine("Video rendered: " + stopwatch.ElapsedMilliseconds);
        composition.Clips.Clear();
        composition = null;
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.Message);
    }
}
Please create a custom video effect and add it to the MediaCapture object to implement your requirement. Using a custom video effect allows you to process the frames in the context of the MediaCapture object itself, without using MediaFrameReader; this way, all of the MediaFrameReader limitations you mentioned go away.
In addition, there are a number of built-in effects that allow you to analyze camera frames. For more information, please check the following articles:
MediaCapture.AddVideoEffectAsync: https://learn.microsoft.com/en-us/uwp/api/windows.media.capture.mediacapture.addvideoeffectasync
Custom video effects: https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/custom-video-effects
Effects for analyzing camera frames: https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/scene-analysis-for-media-capture
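As a rough illustration of the pattern those docs describe (the class name FrameProcessingEffect and the empty method bodies are placeholder assumptions; the effect class must live in a separate Windows Runtime Component so that MediaCapture can activate it):

using System;
using System.Collections.Generic;
using Windows.Foundation.Collections;
using Windows.Graphics.DirectX.Direct3D11;
using Windows.Media.Effects;
using Windows.Media.MediaProperties;

public sealed class FrameProcessingEffect : IBasicVideoEffect
{
    public void ProcessFrame(ProcessVideoFrameContext context)
    {
        // Per-frame processing happens here, inside the capture pipeline;
        // context.InputFrame and context.OutputFrame expose the frame data.
    }

    public void SetEncodingProperties(VideoEncodingProperties encodingProperties, IDirect3DDevice device) { }
    public void SetProperties(IPropertySet configuration) { }
    public void DiscardQueuedFrames() { }
    public void Close(MediaEffectClosedReason reason) { }

    public bool IsReadOnly => false;
    public bool TimeIndependent => false;
    public MediaMemoryTypes SupportedMemoryTypes => MediaMemoryTypes.GpuAndCpu;

    // An empty list means the effect accepts any encoding format.
    public IReadOnlyList<VideoEncodingProperties> SupportedEncodingProperties => new List<VideoEncodingProperties>();
}

The effect is then attached to the record stream, for example:

await mediaCapture.AddVideoEffectAsync(new VideoEffectDefinition(typeof(FrameProcessingEffect).FullName), MediaStreamType.VideoRecord);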
Thanks.
I'm developing a wallpaper app using Xamarin.Forms. There is a scroll view with all available wallpapers, and above it the user sees the selected one enlarged.
The problem is that when I start the app, I can see it lagging a bit while the images load into the view, and I don't like that; it doesn't look good. So I started a new Task in which I load the images from the embedded resources. That doesn't cost much time, though; the time-consuming part is displaying the images in the view, and I cannot do that inside the task, because it has to be the UI thread that accesses the view.
I want it like this:
1. The app starts and you see a running activity indicator.
2. The images are loaded from the resources without blocking the UI thread.
3. The images are loaded into the view without blocking the UI thread.
4. The activity indicator disappears and the main grid comes to the front.
So here is my code:
Check this pastebin for code.
Thank you.
OK, I didn't fix the problem, but I have a workaround:
Task.Factory.StartNew(() =>
{
    return InitializeImages();
}).ContinueWith(r =>
{
    Device.BeginInvokeOnMainThread(() =>
    {
        if (r.Result == GlobalSettings.ImageCount)
            ButtonsGrid.ColumnDefinitions[1].Width = 0;
        else
            ButtonsGrid.ColumnDefinitions[1].Width = GridLength.Star;
        for (int i = 0; i < r.Result; i++)
        {
            ImagesGrid.Children.Add(_previewImages[i]);
            ImagesStackLayout.Children.Add(_images[i]);
        }
    });
    Thread.Sleep(2000);
    Device.BeginInvokeOnMainThread(async () =>
    {
        await ShowWallpaperInPreview();
        await Loading(false);
    });
});
I let it lag in the background, and after a short time I switch from the loading view to the normal view. This way the lagging happens in the background (who cares =)).
A very good candidate to keep your app responsive, and to cover cases not mentioned in your question, is an open-source project named FFImageLoading. Here is the list of supported features:
Configurable disk and memory caching
Multiple image views using the same image source (url, path, resource) will use only one bitmap which is cached in memory (less memory usage)
Error and loading placeholders support
Images can be automatically downsampled to specified size (less memory usage)
Can retry image downloads (RetryCount, RetryDelay)
and much more!
Check the Advanced Usage doc and Wiki for more details.
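As a small sketch of what this looks like in code (the property values here are illustrative assumptions, not taken from the question):

using FFImageLoading.Forms;

var image = new CachedImage
{
    Source = "wallpaper_01.png",        // URL, file path, or resource
    LoadingPlaceholder = "loading.png", // shown while the image loads
    ErrorPlaceholder = "error.png",     // shown if loading fails
    DownsampleToViewSize = true,        // decode at view size to save memory
    RetryCount = 3,                     // retry failed downloads
    RetryDelay = 250                    // delay between retries, in ms
};

A CachedImage replaces a plain Image in the layout, and the caching and downsampling happen without any extra threading code in the page.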
I googled my way through lots of questions all day but I didn't seem to find an answer fitting my problem, so I hope somebody can help me.
I want to display several image streams in a Unity scene. I have several GameObjects with the loading-script attached, e.g.
GameObject1 with script Loader.cs (calls coroutine in Start)
GameObject2 with script Loader.cs (calls coroutine in Start)
GameObject3 with script Loader.cs (calls coroutine in Start)
and load my images via a coroutine in this script:
IEnumerator Load()
{
    Texture2D tex = new Texture2D(4, 4, TextureFormat.DXT1, false);
    while (true)
    {
        WWW www = new WWW(url);
        yield return www;
        www.LoadImageIntoTexture(tex);
        img.texture = tex;
    }
}
(Here img.texture is my image in the scene and url is the URL I'm loading from.)
This works fine for loading the images, but I noticed the images load/stream faster once I start more coroutines. To clarify: with one image stream, the images update at a certain speed; but with e.g. three image streams (each with its own loading coroutine), suddenly all three streams load images faster than the single stream did.
I tried adding a yield return new WaitForFixedUpdate(); at the end of the coroutine, but it didn't help, and I have no idea how to achieve a regular pace in loading new images, independent of how many streams/coroutines run at the same time.
I hope it's clear what I mean, and any help is appreciated!
Starting multiple coroutines updates the image faster because what you have is downloading the image at different times from different threads: the native side of the WWW API is implemented with threads. Note that a coroutine is not the same as a thread.
The point of using a coroutine with the WWW API is to be able to wait for the WWW request to finish before accessing the output (the image). Basically, you have 3 coroutines that run forever; while one is downloading an image, another is likely uploading the one it has already downloaded into its Texture2D.
Regardless, this is not the correct way to stream images. The WWW API cannot be used for this purpose. It may look fine to you, but it is a bad hack.
You have two options:
1. Use Unity's newer UnityWebRequest with its DownloadHandlerScript extension.
2. Use the standard C# HttpWebRequest API with a Thread, then send the result to the main thread using this technique.
No matter which option you choose, the technique for reading the JPEG stream remains the same (a code sketch follows the steps below):
1. Make a connection to the url.
2. Read from that stream.
3. Search for byte 0xFF followed by 0xD8.
4. When you find 0xFF followed by 0xD8, start reading the stream and keep storing the bytes in a byte array/List.
5. While reading the JPEG stream, keep searching for byte 0xFF followed by 0xD9.
6. When you find 0xFF followed by 0xD9, stop reading.
7. Your JPEG image is now the byte array constructed from 0xFF, 0xD8, followed by the whole byte array from step #4, and then 0xFF, 0xD9.
8. Finally, use Texture2D.LoadImage to load the byte array into a Texture2D and display it on the screen.
9. Jump back to step #3 and keep repeating, since a stream does not have an end unless the server is down.
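A minimal sketch of steps 1-9 using option 2 (HttpWebRequest on a background thread), assuming an MJPEG-style stream; ReadJpegFrames and the url parameter are illustrative names, and the yielded byte arrays must still be handed to Texture2D.LoadImage on Unity's main thread:

using System.Collections.Generic;
using System.IO;
using System.Net;

public static IEnumerable<byte[]> ReadJpegFrames(string url)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    using (WebResponse response = request.GetResponse())
    using (Stream stream = response.GetResponseStream())
    {
        List<byte> frame = null;
        int prev = -1;
        int curr;
        while ((curr = stream.ReadByte()) != -1)
        {
            if (frame == null && prev == 0xFF && curr == 0xD8)
            {
                // Start-of-image marker found: begin collecting a frame.
                frame = new List<byte> { 0xFF, 0xD8 };
            }
            else if (frame != null)
            {
                frame.Add((byte)curr);
                if (prev == 0xFF && curr == 0xD9)
                {
                    // End-of-image marker found: one complete JPEG.
                    yield return frame.ToArray();
                    frame = null;
                }
            }
            prev = curr;
        }
    }
}

Each returned array is one complete JPEG, ready for step #8.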
If you run into problems, you can always create a new post with the code you made from this answer. Here is a post that should also get you started.
I'm trying to manipulate an image using System.Drawing in GTK#. I want the UI to update the image on screen as soon as the user updates a textbox. To implement this, I tried using the BackgroundWorker from WinForms; it worked, but when the textbox is updated at a higher speed, the application gets stuck with no error.
So I took a look at multithreading in GTK here, http://www.mono-project.com/docs/gui/gtksharp/responsive-applications/, and created a thread:
void textboxchanged()
{
    Thread thr = new Thread(new ThreadStart(ThreadRoutine));
    thr.Start();
}

static void ThreadRoutine()
{
    LargeComputation();
}

static void LargeComputation()
{
    image = new Bitmap(backupimage);
    // Long image processing
}
It works worse than the BackgroundWorker: it throws an "object is currently in use elsewhere" error at image = new Bitmap(backupimage); whenever the typing in the textbox is even a little fast. What am I doing wrong?
Update 1:
I'm not processing the same image from 2 different threads doing 2 different operations at the same time; I'm starting a thread that does the same operation before the old thread has completed. As with the BackgroundWorker, I need a way to check whether the old thread has finished working before launching the new one. So basically, what I'm looking for is a way to check if an instance of the same thread is running. In WinForms I used to do: if (backgroundworker.IsBusy == false) then do stuff.
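A minimal sketch of one way to get an IsBusy-style check with plain threads (the _isBusy field is an assumption added for illustration, reusing the method names above):

using System.Threading;

static int _isBusy = 0; // 0 = idle, 1 = a worker thread is running

void textboxchanged()
{
    // Atomically flip 0 -> 1; if it was already 1, a worker is still busy.
    if (Interlocked.CompareExchange(ref _isBusy, 1, 0) == 0)
    {
        new Thread(() =>
        {
            try { ThreadRoutine(); }
            finally { Interlocked.Exchange(ref _isBusy, 0); }
        }).Start();
    }
}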
Update 2: a solution, with performance degradation
As suggested by @voo, replacing the global bitmap helped solve the issue. What I did: instead of using a global Bitmap, I created a global string (the filename), and now I use img = new Bitmap(filename). I tried typing as fast as I can and no error came up. To update the GUI, I used Invoke as suggested here: mono-project.com/docs/gui/gtksharp/responsive-applications/. No error comes up and the image gets updated, but when the typing is fast enough there is a wait involved; performance got degraded. This was not the case with the BackgroundWorker. Is there a way to improve performance?
At the end of the long image-processing method, I added this to update the GUI:
Gtk.Application.Invoke(delegate {
    // Cannot directly convert a Bitmap to a Pixbuf, so go through a stream
    MemoryStream istream = new MemoryStream();
    img.Save(istream, System.Drawing.Imaging.ImageFormat.Png);
    istream.Position = 0;
    workimagepixbuff = new Gdk.Pixbuf(istream);
    image1.Pixbuf = workimagepixbuff.ScaleSimple(400, 300, Gdk.InterpType.Bilinear);
});
The problem here is that you are processing an image in two places (two threads) at the same time, and the image operations in .NET (GDI+, that is) do not allow this.
Because you did not provide very much information, I'm just guessing here.
When manipulating bitmap images in GDI+, there is BitmapData behind the scenes that needs to be locked and unlocked. This mechanism makes the picture available in memory for reading/writing. But AFAIK, when you lock BitmapData that is already locked, you get a similar exception: System.InvalidOperationException, "Bitmap region is already locked".
To me it sounds like you are getting this kind of error, just in other words, because you are not explicitly locking the bitmap data bits yourself. GDI+ is telling you: "I would have to lock the bitmap data bits, but I can't, because the object is already in use (locked) elsewhere."
The solution may be to synchronize bitmap usage (wherever bit locking may be involved) between threads, using the lock keyword or a similar mechanism.
So try something like the following:
private static object _imageLock = new object();

static void LargeComputation()
{
    lock (_imageLock)
    {
        image = new Bitmap(backupimage);
        // Long image processing ...
    }
}

static void AnotherImageOperationSomewhereElse()
{
    lock (_imageLock)
    {
        // Another image processing on backupimage or something derived from it...
    }
}
I am currently trying to save not only the skeletal data but also the color frame images, for post-processing reasons. Currently, this is the section of the code that handles the color video and outputs a color image in the UI. I figure this is where the saving of the color frame images has to take place.
private void ColorFrameEvent(ColorImageFrameReadyEventArgs colorImageFrame)
{
    // Get raw image
    using (ColorImageFrame colorVideoFrame = colorImageFrame.OpenColorImageFrame())
    {
        if (colorVideoFrame != null)
        {
            // Create array for pixel data and copy it from the image frame
            Byte[] pixelData = new Byte[colorVideoFrame.PixelDataLength];
            colorVideoFrame.CopyPixelDataTo(pixelData);

            // Set alpha to 255
            for (int i = 3; i < pixelData.Length; i += 4)
            {
                pixelData[i] = (byte)255;
            }

            using (colorImage.GetBitmapContext())
            {
                colorImage.FromByteArray(pixelData);
            }
        }
    }
}
I have tried reading up on OpenCV, EmguCV, and multithreading, but I am pretty confused; it would be nice to have a solid explanation in one place. However, I feel like the best way to do this without losing frames per second would be to save all the images in a list of arrays, and then, when the program finishes, do some post-processing to convert arrays -> images -> video in Matlab.
Can someone comment on how I would go about implementing saving the color image stream into a file?
The ColorImageFrameReady event is triggered 30 times a second (30 fps) if everything goes smoothly, so I think it's rather heavy to save every picture the moment it arrives.
I suggest you use a BackgroundWorker: check whether the worker is busy, and if it isn't, pass the bytes to the BackgroundWorker and do your magic there (a sketch follows the links below).
You can easily save or create an image from a byte[]. Just google this:
http://www.codeproject.com/Articles/15460/C-Image-to-Byte-Array-and-Byte-Array-to-Image-Conv
How to compare two images using byte arrays
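A minimal sketch of the BackgroundWorker idea from this answer, assuming the WPF imaging APIs and the Kinect color stream's 640x480 BGRA format (the frame size and file naming are illustrative assumptions):

using System.ComponentModel;
using System.IO;
using System.Windows.Media;
using System.Windows.Media.Imaging;

BackgroundWorker saveWorker = new BackgroundWorker();
int frameIndex = 0;

void InitSaveWorker()
{
    saveWorker.DoWork += (s, e) =>
    {
        byte[] pixels = (byte[])e.Argument;
        int width = 640, height = 480, stride = width * 4; // BGRA, 4 bytes per pixel
        BitmapSource bmp = BitmapSource.Create(width, height, 96, 96, PixelFormats.Bgra32, null, pixels, stride);
        var encoder = new PngBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(bmp));
        using (var fs = File.Create("frame_" + (frameIndex++) + ".png"))
            encoder.Save(fs);
    };
}

// Inside ColorFrameEvent, after filling pixelData:
// if (!saveWorker.IsBusy) saveWorker.RunWorkerAsync(pixelData);

Frames that arrive while the worker is busy are simply skipped, which keeps the UI responsive at the cost of not saving every single frame.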