I have around 200 images I need to display one at a time in Unity. These need to be held in the application and can't be downloaded from the web.
Right now I have a list set up which takes all of my images and stores them. Then, at the press of a button, I iterate through the list, showing each image one at a time. My code looks like this:
public static List<Texture2D> images = new List<Texture2D>();
private static int _frameCount; // index of the image currently shown

void Start()
{
    System.Object[] textures = Resources.LoadAll("");
    foreach (object o in textures)
    {
        images.Add(o as Texture2D);
    }
}

public static void MoveForward()
{
    if (_frameCount < images.Count - 1)
    {
        _frameCount++;
    }
    else
    {
        _frameCount = 0;
    }
}
However, due to the number of images I need to store, it's eating my iPad's memory. I was wondering if there is a better way to do this where I don't need to load every image at run time, hopefully speeding up the application.
You can resolve this problem in two ways:
1) If you need extra responsiveness and no loading time, you can put all your assets into one (or more) spritesheets, then load the spritesheet and show the specific sprites you need. You can optimize memory usage by compressing the sheet.
2) If you can stand a little loading time (or no noticeable delay, if your images are not big), you can load the needed image when the button is clicked, or keep the next image preloaded - then you only ever have two images loaded at a time. Afterwards, when you move on and no longer need an image, you can unload it with Resources.UnloadAsset.
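A minimal sketch of the second approach, assuming the images live under a Resources/Slides folder and are named sequentially (slide_0, slide_1, ...) - the folder and naming scheme here are placeholders, not part of the original code:

```
using UnityEngine;

public class SlideShow : MonoBehaviour
{
    private const int SlideCount = 200; // total number of images
    private int _index;
    private Texture2D _current;

    void Start()
    {
        // Only the first image is loaded up front.
        _current = Resources.Load<Texture2D>("Slides/slide_0");
    }

    public void MoveForward()
    {
        // Drop the old texture so Unity can reclaim its memory.
        if (_current != null)
        {
            Resources.UnloadAsset(_current);
        }
        _index = (_index + 1) % SlideCount;
        _current = Resources.Load<Texture2D>("Slides/slide_" + _index);
    }
}
```

This keeps at most one slide resident at a time; preloading the next slide in the background would trade a little memory for smoother transitions.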
I am trying to compare two bitmaps to one another. One is premade, and the other consists of a small capture of the main screen, filtered for everything besides full white. I now need a way to compare the number of white pixels in the live bitmap to the number in the premade bitmap (101 white pixels). One way I know of would be the Bitmap.GetPixel/SetPixel methods, but they are really slow, and as this is used in a time-critical application, unsuitable.
Especially since I could cut down the filtering process by a factor of 70 by following this guide.
https://www.codeproject.com/articles/617613/fast-pixel-operations-in-net-with-and-without-unsa
I also can't just compare the two bitmaps directly, as the live one will usually not have the pixels in the same positions, but will share the same number of white pixels.
It'd be great if someone had a time-efficient solution to this problem.
Edit
Huge oversight on my part. Looking at the filtering method, it becomes apparent that one can simply increment a counter every time a pixel is not filtered out.
So I just changed this line of code in the filter function
row[rIndex] = row[bIndex] = row[gIndex] = distance > toleranceSquared ? unmatchingValue : matchingValue;
to this
if (distance > toleranceSquared)
{
    row[rIndex] = row[bIndex] = row[gIndex] = unmatchingValue;
}
else
{
    row[rIndex] = row[bIndex] = row[gIndex] = matchingValue;
    WhitePixelCount += 1;
}
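For reference, a self-contained sketch of the same counting idea using LockBits instead of GetPixel (the 24bpp format and the exact-white test are assumptions; the real filter from the linked article walks the raw rows in the same way):

```
using System.Drawing;
using System.Drawing.Imaging;

static int CountWhitePixels(Bitmap bmp)
{
    int count = 0;
    BitmapData data = bmp.LockBits(
        new Rectangle(0, 0, bmp.Width, bmp.Height),
        ImageLockMode.ReadOnly,
        PixelFormat.Format24bppRgb);
    try
    {
        unsafe
        {
            for (int y = 0; y < data.Height; y++)
            {
                byte* row = (byte*)data.Scan0 + y * data.Stride;
                for (int x = 0; x < data.Width; x++)
                {
                    // 24bpp byte layout per pixel is B, G, R.
                    if (row[x * 3] == 255 && row[x * 3 + 1] == 255 && row[x * 3 + 2] == 255)
                    {
                        count++;
                    }
                }
            }
        }
    }
    finally
    {
        bmp.UnlockBits(data);
    }
    return count;
}
```

This requires the /unsafe compiler option, just like the approach in the linked article.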
I have implemented an annotation feature which is similar to drawing in VR. The drawing is a Unity trail, and its shape depends on its trajectory. This is where the real problem comes in. We are synchronising the drawing in real time using PhotonTransformView, which syncs the world position of the trail. But the synchronised drawing ends up looking very different from the original one.
Here is sync configuration code:
public void SetupSync(int viewId, int controllingPlayer)
{
    if (PhotonNetwork.inRoom)
    {
        photonView = gameObject.AddComponent<PhotonView>();
        photonView.ownershipTransfer = OwnershipOption.Takeover;
        photonView.synchronization = ViewSynchronization.ReliableDeltaCompressed;
        photonView.viewID = viewId;
        photonTransformView = gameObject.AddComponent<PhotonTransformView>();
        photonTransformView.m_PositionModel.SynchronizeEnabled = true;
        photonView.ObservedComponents = new List<Component>();
        photonView.ObservedComponents.Add(photonTransformView);
        photonView.TransferOwnership(controllingPlayer);
    }
}
How can we make the drawing on the two systems more similar? I have seen cases where people have been able to synchronise these perfectly. Check this. What are they doing?
Yes, PhotonTransformView is not suitable for this.
You could send a reliable RPC every x milliseconds with the list of points accumulated since the last RPC. That covers the live drawing; when the drawing is finished, you cache the whole drawing definition in a database under a drawing ID. Drawings can then be retrieved later by players joining the room after the drawing was done, loaded from a list of drawings, or fetched by any arbitrary logic.
All in all you need two different systems: one for while the drawing is live, and one for after the drawing is done.
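A minimal sketch of the live-drawing half, relying on PUN's built-in serialization of Vector3 arrays (the method names, send interval, and the receiving side's trail handling are placeholders, not from the original code):

```
using System.Collections.Generic;
using UnityEngine;

public class DrawingSync : Photon.MonoBehaviour
{
    private readonly List<Vector3> _pending = new List<Vector3>();
    private float _lastSend;
    private const float SendInterval = 0.1f; // seconds between RPCs

    // Call this from the drawing code each time a trail point is added.
    public void AddPoint(Vector3 worldPos)
    {
        _pending.Add(worldPos);
    }

    void Update()
    {
        if (_pending.Count > 0 && Time.time - _lastSend >= SendInterval)
        {
            // Reliable RPC: no points are lost, and only the new points are sent.
            photonView.RPC("ReceivePoints", PhotonTargets.Others, (object)_pending.ToArray());
            _pending.Clear();
            _lastSend = Time.time;
        }
    }

    [PunRPC]
    private void ReceivePoints(Vector3[] points)
    {
        // Append the received points to the remote copy of the trail here.
    }
}
```

Because every point travels over the wire, the remote trail follows the exact same trajectory instead of an interpolated approximation of the transform.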
Martjin Pieters's answer is the correct way to do it.
But for those who have the same problem in a different situation, it comes from this line:
photonView.synchronization = ViewSynchronization.ReliableDeltaCompressed;
It basically compresses the data and doesn't send a new value if it's too close to the last one sent. Just switch it to Unreliable and all the data will be sent directly.
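In the setup code from the question, that is a one-line change:

```
photonView.synchronization = ViewSynchronization.Unreliable;
```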
I am working on a small project similar to popcorn time, but I have faced some issues.
First of all, when I call the function which gets all available movies and adds them to a FlowLayoutPanel with their cover photos, it uses a huge amount of RAM - around 3 GB! Is there any way to fix this (i.e. the code below)? (Note that I have checked the function that gets all movies from the database on its own, and it uses only 43 MB.)
List<TVShows> T1 = TVShows.GetAll();
TVShowsFlowPanel.Controls.Clear();
for (int i = 0; i < T1.Count; i++)
{
    TVShowControl P1 = new TVShowControl(T1[i]);
    TVShowsFlowPanel.Controls.Add(P1);
}
Second, when I set PictureBox1.picture=http://..... to get the cover picture, is there a way to check whether the picture has been fully downloaded and shown before I move on to the next movie? And why does it take so long to get the picture, even though I am using the same API as popcorn time on the same internet connection?
Last, when I add around 80 movies to that flow layout panel, why isn't scrolling smooth even after loading has finished? The pictures show irritating, random lines until I stop at a specific spot.
Thanks for any help, I really appreciate it!
I was able to successfully use the following code to highlight text in an existing PDF:
private static void highlightDiff(PdfStamper stamper, Rectangle rectangle, int page)
{
    float[] quadPoints = { rectangle.Left, rectangle.Bottom, rectangle.Right, rectangle.Bottom,
                           rectangle.Left, rectangle.Top, rectangle.Right, rectangle.Top };
    PdfAnnotation highlight = PdfAnnotation.CreateMarkup(stamper.Writer, rectangle, null, PdfAnnotation.MARKUP_HIGHLIGHT, quadPoints);
    highlight.Color = BaseColor.RED;
    stamper.AddAnnotation(highlight, page);
}
The problem is I'm highlighting one character at a time, and my guess is that a new layer is added every time I call this function, because the resulting file size is significantly larger after the program has finished running.
I tried adding the following line at the end of the function; maybe it's just me, but it seemed to speed up how long the PDF takes to load when I view it, though the file size still remains exceedingly large.
stamper.FreeTextFlattening = true;
I may try to make my code more efficient and decrease the number of calls I make (if the characters I'm highlighting are next to each other, find the combined rectangle and make one call), but I was wondering if there was another way around this. Thanks in advance!
Each time you execute highlightDiff you add a new highlight annotation to the PDF. Inside the PDF such an annotation is an object like this:
1 0 obj
<<
/Rect[204.68 705.11 211.2 716.11]
/Subtype/Highlight
/Contents()
/QuadPoints[204.68 716.11 211.2 716.11 204.68 705.11 211.2 705.11]
/C[1 0 0]
/P 2 0 R
>>
Furthermore, there needs to be a reference to this object from the page description, plus an entry in the internal cross-reference table.
Thus, each such call makes the PDF grow by nearly 200 bytes. If you highlight many individual characters, the file will indeed grow considerably.
I may try to make my code more efficient and decrease the number of calls I make (if the characters I'm highlighting are next to each other, find the combined rectangle and call) but was wondering if there was another way around this.
If you indeed want your highlighting to be done using highlight annotations, there is no way around this.
If, on the other hand, you would also accept highlighting rectangles drawn into the regular page content, you may see less file-size growth with that approach. Even then, though, first combining neighboring rectangles would reduce file size (and PDF viewer resource requirements) considerably.
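A sketch of the combining idea - collapsing runs of adjacent character rectangles into one annotation per run (the adjacency tolerance is an assumption, and highlightDiff is the method from the question; the input rectangles are assumed to arrive in reading order):

```
using System;
using System.Collections.Generic;
using iTextSharp.text;
using iTextSharp.text.pdf;

private static void highlightMerged(PdfStamper stamper, List<Rectangle> rects, int page)
{
    const float tolerance = 1.0f; // max gap (in points) to treat rectangles as adjacent
    Rectangle current = null;
    foreach (Rectangle r in rects)
    {
        if (current != null
            && Math.Abs(r.Bottom - current.Bottom) < tolerance
            && r.Left - current.Right < tolerance)
        {
            // Same line and touching: extend the run instead of emitting a new annotation.
            current = new Rectangle(current.Left, current.Bottom, r.Right, current.Top);
        }
        else
        {
            if (current != null) highlightDiff(stamper, current, page);
            current = r;
        }
    }
    if (current != null) highlightDiff(stamper, current, page);
}
```

With one annotation per run instead of one per character, both the ~200-byte-per-object growth and the viewer's per-annotation overhead drop roughly in proportion to the average run length.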
I've made a program that analyzes the first pixel of an image and notes its values in a List; this is for checking whether the image is black & white or in color. Now, does anyone know an efficient way of reading high-res images? Right now I'm using Bitmaps, but they are highly inefficient. The images are around 18 megapixels each and I want to analyze around 400 photos. Code below:
Bitmap b;
foreach (FileInfo f in files)
{
    // HUGE MEMORY LEAK
    System.GC.Collect();
    System.GC.WaitForPendingFinalizers();
    b = (Bitmap)Bitmap.FromFile(f.FullName);
    // reading pixel (0,0) from bitmap
}
When I run my program it says:
"An unhandled exception of type 'System.OutOfMemoryException' occurred in System.Drawing.dll
Additional information: There is no available memory."
I've tried with System.GC.Collect() to clean up, as you can see, but the exception doesn't go away. If I analyse a folder containing only a few photos, the program runs fine and gladly does its job.
Using the first pixel of an image to check if it is colour or not is the wrong way to do this.
If you have an image with a black background (pixel value 0,0,0 in RGB), how do you know the image is black and white, and not colour with a black background?
Placing the bitmap in a using block is the correct approach, as it will be disposed properly.
The following will do the trick.
class Program
{
    static void Main(string[] args) {
        List<String> ImageExtensions = new List<string> { ".JPG", ".JPE", ".BMP", ".GIF", ".PNG" };
        String rootDir = "C:\\Images";
        foreach (String fileName in Directory.EnumerateFiles(rootDir)) {
            if (ImageExtensions.Contains(Path.GetExtension(fileName).ToUpper())) {
                try {
                    // Image.FromFile will work just as well here.
                    using (Image i = Bitmap.FromFile(fileName)) {
                        if (i.PixelFormat == PixelFormat.Format16bppGrayScale) {
                            // Grey scale...
                        } else if (i.PixelFormat == PixelFormat.Format1bppIndexed) {
                            // 1-bit colour (possibly b/w, but could be other indexed colours)
                        }
                    }
                } catch (Exception e) {
                    Console.WriteLine("Error - " + e.Message);
                }
            }
        }
    }
}
The reference for PixelFormat is found here - https://msdn.microsoft.com/en-us/library/system.drawing.imaging.pixelformat%28v=vs.110%29.aspx
Objects in C# are limited to 2 GB, so I doubt that an individual image is causing the problem.
I would also suggest that you should NEVER manually call the GC to solve a memory leak (though this is not technically a leak, just heavy memory usage).
Using statements are perfect for ensuring that an object is marked for disposal, and the GC is very good at cleaning up.
We perform intensive image processing in our software, and have never had issues with memory using the approach I have shown.
While simply reading the header to find the image data is a perfectly correct solution, it means a lot of extra work to decode different file types, which is not necessary unless you are working with vast numbers of images in very little memory (although if that is your aim, straight C is a better way to do it than C#. Horses for courses and all that jazz!)
EDIT - I just ran this on a directory containing over 5000 high-res TIFFs with no memory issues. The slowest part of the process was the console output!
If you only need the first pixel, it isn't necessary to read the whole file. You could take just the first pixel from the bitmap's byte array and work with that: find the pixel array manually and read its first entry.
How do you find the pixel array? It depends on the file format - you need the specification for each format you want to support. Here is an example for a BMP reader.
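A sketch of that idea for an uncompressed 24-bit BMP. The header offsets are from the BMP specification; note that BMP rows are normally stored bottom-up, so the first pixel stored in the file is the bottom-left one, not pixel (0,0) of the displayed image:

```
using System;
using System.IO;

static byte[] ReadFirstStoredPixel(string path)
{
    using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
    using (var reader = new BinaryReader(fs))
    {
        // Bytes 10-13 of the file header hold the offset of the pixel array.
        fs.Seek(10, SeekOrigin.Begin);
        uint pixelArrayOffset = reader.ReadUInt32();

        // Bytes 28-29 hold the bits per pixel; this sketch only handles 24bpp.
        fs.Seek(28, SeekOrigin.Begin);
        ushort bitsPerPixel = reader.ReadUInt16();
        if (bitsPerPixel != 24)
            throw new NotSupportedException("Only uncompressed 24bpp BMPs are handled here.");

        // Jump straight to the pixel data and read one pixel (stored as B, G, R).
        fs.Seek(pixelArrayOffset, SeekOrigin.Begin);
        return reader.ReadBytes(3);
    }
}
```

Only a handful of bytes are read regardless of the image size, which avoids the memory pressure of decoding an 18-megapixel bitmap just to inspect one pixel.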