Unity iOS performance issues (low frame rate) - C#

Currently, I have a relatively simple 2D game which should not be too taxing on a CPU. It runs fine on my computer, but when I build it to my iPhone or iPad the game becomes quite jittery with a poor frame rate. Does anyone have suggestions on how I could improve the performance? I already use object pooling et cetera; I think it has something to do with my draw calls / graphics.
EDIT: it turns out the renderer is using most of my CPU.

If the issue is, as you say, too many draw calls, then one simple way to reduce them is to pack your sprites using the Sprite Packer. This packs all your sprites tightly into an atlas, which reduces the number of draw calls; if they don't all fit nicely, or there are too many of them, they will be split across subpages. If you'd like to see how to use the Unity texture packer in simple steps, I would direct you to this blog post that talks about it in more depth. Here's a simple step-by-step guide based on the blog post linked, followed by a small editor script for tagging sprites in bulk.
Step 1: Select all the sprites that you want to pack together.
Step 2: Give them a Packing Tag in the Texture Importer.
Step 3: Open the Sprite Packer window (Window > Sprite Packer).
Step 4: Click the Pack button.
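If you have a lot of sprites to tag, an editor script can apply the tag in bulk. Here's a minimal sketch, assuming the legacy Sprite Packer workflow; the menu path and the "GameplayAtlas" tag name are placeholders, and the file must live in an Editor folder:

    using UnityEditor;
    using UnityEngine;

    public static class PackingTagAssigner
    {
        // Hypothetical menu path and tag name - adjust to your project.
        [MenuItem("Tools/Assign Packing Tag To Selected Sprites")]
        static void AssignTag()
        {
            foreach (Object obj in Selection.objects)
            {
                string path = AssetDatabase.GetAssetPath(obj);
                var importer = AssetImporter.GetAtPath(path) as TextureImporter;
                if (importer == null)
                    continue; // skip anything that isn't a texture
                importer.spritePackingTag = "GameplayAtlas";
                importer.SaveAndReimport();
            }
        }
    }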
There are also third-party tools like TexturePacker that can do this, and they have more sophisticated packing algorithms, which gives them an edge over Unity's built-in version.
It's also possible there are issues with your game's performance not relating directly to draw calls; I would recommend taking a look at the official Unity guide for mobile optimisation.

Related

AR Foundation / ARCore / ARKit fiducial markers generation

I am trying to develop an AR application in Unity using the new AR Foundation.
This application would need to use two features:
It needs to use a large number of tracking images
It needs to properly identify the tracked image (marker); only one image will be visible at any given moment
What I need is to dynamically generate the fiducial markers, preferably with the tracking part the same for all of them and only a specific part carrying the ID of the marker. Preferably the markers would be similar to ARToolKit's square fiducials.
Do these markers work well with AR Foundation (the abstraction over ARCore and ARKit)?
Let's say I'll add 100 of these generated codes into the XRImage. Is it possible that AR Foundation image targets get "confused" and mix up tracked images? Could I, in theory, use QR codes as markers and simply encode the ID information into the QR code?
In a project I searched for a good way to implement a lot of different markers to identify a vast number of different real-world objects. At first I tried QR codes and added them to the Image Database in ARFoundation.
It worked, but sometimes markers got mixed up, and this already happened with only 4 QR codes containing the words "left", "right", "up", "down". The problem is that ARFoundation relies on ARCore, ARKit, etc. (depending on the platform you build for).
Excerpt from the ARCore guide:
Avoid images that contain a large number of geometric features, or very few features (e.g. barcodes, QR codes, logos and other line art), as this will result in poor detection and tracking performance.
The next thing I tried was to combine OpenCV with ARFoundation and use ArUco markers for detection. The detection works much better and faster than the image recognition. This was done by accessing the camera image and using OpenCV's marker detection. In ARFoundation you can access the camera image by using public bool TryAcquireLatestCpuImage(out XRCpuImage cpuImage).
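For reference, here is a minimal sketch of that camera-image path, following the pattern in the AR Foundation manual; it needs "Allow unsafe code" enabled, and the MyOpenCvWrapper call at the end is a made-up placeholder for whatever OpenCV binding you use:

    using System;
    using Unity.Collections;
    using Unity.Collections.LowLevel.Unsafe;
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;
    using UnityEngine.XR.ARSubsystems;

    public class ArUcoFrameReader : MonoBehaviour
    {
        [SerializeField] ARCameraManager cameraManager;

        unsafe void Update()
        {
            if (!cameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
                return;

            int width = image.width;
            int height = image.height;
            var conversion = new XRCpuImage.ConversionParams
            {
                inputRect = new RectInt(0, 0, width, height),
                outputDimensions = new Vector2Int(width, height),
                outputFormat = TextureFormat.R8 // grayscale is enough for ArUco
            };

            int size = image.GetConvertedDataSize(conversion);
            var buffer = new NativeArray<byte>(size, Allocator.Temp);
            image.Convert(conversion, new IntPtr(buffer.GetUnsafePtr()), buffer.Length);
            image.Dispose(); // release the native frame as soon as possible

            // Hand the grayscale pixels to your OpenCV binding, e.g.:
            // MyOpenCvWrapper.DetectMarkers(buffer, width, height);

            buffer.Dispose();
        }
    }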
The problem with this method:
This is a resource-intensive process that impacts performance...
On an iPad Pro 13" 2020, the performance in my application dropped from a constant 60 FPS to around 25 FPS. For me, this was too severe a performance drop.
A solution could be to create a collection of images with large variation and a perfect score, but I am unsure how images with all these aspects in mind could be generated. (There is probably also a limit of 1000 images per reference database; see the ARCore guide.)
If you want to check whether these markers work well in ARCore, go to this link and download the arcoreimg tool.
The tool will give you a score that tells you whether an image is trackable or not. Though the site recommends a score of 75, I have tested it with scores as low as 15. Here is a quick demo if you are interested. The router image in the demo has a score of 15.
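For reference, evaluating an image with the tool looks like this (the file name is just an example); it prints a score from 0 to 100:

    arcoreimg eval-img --input_image_path=my_marker.png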

MonoGame performance issues on iOS

My friends and I are developing a game for iPad/iPhone using the MonoGame framework. We are at the final stage of development and we are having some issues concerning the deployment of the game.
This is our first iOS game and we really need help to make this game happen.
This is our website: http://www.dumativa.com.br/index.php/pt-BR/projetos/dragon-festival (This isn't the newest version of the game but you can see what it looks like)
Since this is our first game we made some mistakes that we are now trying to fix. Our textures were made targeting the iPad resolution (1024x768). We don't really know if that was the right approach. We thought it would be better to scale them down for iPhone rather than the other way around.
We need help with the following topics:
1 - The PVRTC compression.
1.1 - Is it really necessary to implement this in order to improve the game's performance?
1.2 - How does transparency work with these compressed textures? We have some textures with a transparency gradient and we wonder how they will look after compression.
1.3 - Do you recommend we remake all the textures as texture atlases using power-of-2 sizes in order to make the PVRTC compression work?
2 - ARMv7
2.1 - How does it help to improve the game's performance?
2.2 - How do we make sure that it is working after enabling it in MonoDevelop?
3 - Texture Size/Resolution
3.1 - Are we right about the scaling, or should we develop textures for iPhone resolution and then scale them up for iPad?
3.2 - Should we have two different apps (one for iPhone and one for iPad) with the same textures but sized for each device?
4 - Suggestions
Do you have any suggestions, or points that we are missing? We don't really know how to improve further than the topics listed above. We need every possible direction.
Basically we need to decide which way to go to improve our performance and playing experience. We would really appreciate your help, and you won't regret it once this game is launched :)!
Thank you very much.
The answer to most of your questions is going to be "it depends". There is no substitute for testing on an actual target device and seeing if it's acceptable or not.
I have two general suggestions: (a) make sure you have as few textures/spritesheets as possible, ideally one per scene for a 2D game. And (b) yes, I would create resized/resampled graphics for different target devices instead of relying on dynamic scaling. This is very important for non-Retina hardware like the iPad mini, where it is a terrible idea to load huge images unnecessarily. Better to have a slightly larger download up front followed by an optimal playing experience.
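On point (b), here is a minimal sketch of selecting a per-device content set at runtime in a MonoGame iOS project; the Content/hd and Content/sd folder names and the asset name are made up for illustration:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;
    using MonoTouch.UIKit; // era-appropriate Xamarin.iOS binding

    public class MyGame : Game
    {
        GraphicsDeviceManager graphics;
        Texture2D background;

        public MyGame()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void LoadContent()
        {
            // Full-size art under Content/hd for iPad, pre-downsized
            // copies under Content/sd for iPhone.
            bool isIpad = UIDevice.CurrentDevice.UserInterfaceIdiom
                          == UIUserInterfaceIdiom.Pad;
            string root = isIpad ? "hd" : "sd";
            background = Content.Load<Texture2D>(root + "/background");
        }
    }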

2D images to 3D view

I need to be able to generate a 3D perspective from a bunch of 2D images of a pipe.
Basically... We have written software that interprets combined data from laser and sonar units to give us an image slice from a section of pipe. These units travel through the pipe and scan the inside of the pipe every 100mm.
All of this is working great. My client now wants to take all these 2D image slices and generate a 3D view so they can "travel" through the pipe looking at defects etc. that are picked up by the scans. We can see the defects in the 2D images, but there can be hundreds of images in a single inspection - hence the requirement to be able to look through the pipe.
I am doing this in VS2010 on the .NET 4 platform in C#.
I am honestly clueless as to where to start here. I am not a graphics developer so this is all new territory to me. I see it as a great challenge but need some help kicking off - and a bit of direction.
Any help appreciated :)
Mike
Well, every 10cm isn't very detailed. However, you need to scan the pixels of the pipe, creating a list of closed polygons, then just use a triangle strip to connect one set to the next, all the way down the pipe.
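Here is a sketch of that stitching step, using XNA's Vector3 since XNA is one of the options suggested below; ringA and ringB are assumed to be the ordered outline points of two successive slices:

    using System.Collections.Generic;
    using Microsoft.Xna.Framework; // Vector3; any 3-float struct works

    static class TubeBuilder
    {
        // Alternate points from two rings; drawn as a TriangleStrip this
        // forms a closed band of quads (two triangles each) between slices.
        public static Vector3[] BuildTubeStrip(Vector3[] ringA, Vector3[] ringB)
        {
            var strip = new List<Vector3>();
            for (int i = 0; i <= ringA.Length; i++)
            {
                int j = i % ringA.Length; // wrap around to close the tube
                strip.Add(ringA[j]);
                strip.Add(ringB[j]);
            }
            return strip.ToArray();
        }
    }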
Try to start with very basic 2D instead of full-blown 3D rendering; it may be good enough. A pipe, when you look at it from inside, can be represented as several trapezoids. Assuming your images are small cylindrical portions of a pipe, map each stripe to trapezoids (4 would be a good start; easy to position) and draw them in a circular pattern. You may draw several stripes this way at the same time. To move back/forward, just reassign the images to the trapezoids.
If you need full 3D, consider whether WPF would work; if not, XNA or some OpenGL library will give you full 3D.
You don't specify the context: 100mm sample intervals may be sparse (a 1m pipe) or detailed (a 10km pipe). Nor do you specify how many sample points there are (the number of cross sections and the size of each cross-section image).
A simple way to show the data is to use voxels where each pixel on a cross section is treated as a cube and adjacent samples form adjacent cubes (think Minecraft). The result will look blocky but as it's an engineering / scientific application this is probably preferable. Interpolating the model to produce a smooth surface may hide defects or make areas appear to be defective. Also, rendering a cross section through a voxel is a bit easier than a polygon surface.

Web Page - 3D earthquake visualization - Silverlight?

I have never written any Silverlight apps, but I am looking to write a 3D viewer for earthquakes and have it run from my website.
I would like to create a simple viewer so the user can change the "camera", i.e. their perspective. The view could contain up to 10,000 objects in the 3D space.
I want the ability to quickly view this - I have seen this on a Power Basic application and want to do this for the web.
I have a current website at http://canterburyquakelive.co.nz for earthquakes in Canterbury, New Zealand, and I want to learn the basics so that it can be more interactive.
I want to, for example (to start), place 2 objects in a "space" that I can define and move the camera in real time.
The system must support up to 10,000 objects at the end of the day.
Each object can be a simple circle; no need for special pixel shaders.
I am unsure of the exact functionality of the system at the moment, so if I can find a tutorial that shows me how to place something (a circle) into a 3D world (space) and change the camera, that would be good.
Any ideas appreciated - there seems to be so much about 3D and Silverlight that I may be getting lost in the "gloss" of new features, when I need some basics that I can learn and adapt over time.
** Added comment + image **
Basically I am wanting to create a page that looks like the image added above, using Silverlight. But I am open to any technology.
I've never done 3D in Silverlight, so I can't exactly answer your question as asked, but in general, displaying geographic markers on 'real' 3D terrain is quite involved. At a minimum you're probably looking at:
Obtaining binary height data files (last time I looked, NASA gives this away)
Reading and interpreting said files to get 'bitmap' height data
Choosing and dealing with projections (e.g. UTM)
Deciding how to tessellate your bitmap height data
If you want it textured you'll need to also obtain satellite data for that, again converting or processing it to account for projection.
You could ignore the terrain height, but that may not simplify things depending on how 'bumpy' your terrain is.
For a small enough pre-defined area, you could perhaps pre-author a 3D model of the terrain in some 3D package, but displaying your markers will still require a projection from long/lat into your 3D space, and you'll still need to know the terrain height (unless you do mesh collision with the static model).
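As a sketch of that long/lat projection step, a flat-earth (equirectangular) approximation is usually enough for a region the size of Canterbury; the constant below is approximate and the method name is mine:

    using System;

    static class GeoProjection
    {
        // Flat-earth approximation: fine for placing markers in a small
        // region, not for large areas or precise survey work.
        public static void LatLonToLocalMeters(double lat, double lon,
            double originLat, double originLon, out double x, out double y)
        {
            const double MetersPerDegreeLat = 111320.0; // ~1 degree of latitude
            y = (lat - originLat) * MetersPerDegreeLat;
            // Longitude degrees shrink with latitude, hence the cosine factor.
            x = (lon - originLon) * MetersPerDegreeLat
                * Math.Cos(originLat * Math.PI / 180.0);
        }
    }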
Rendering the markers is pretty straightforward by comparison; choose from the options below (a sketch of the quad option follows the list):
Use a 3D model e.g. a 'pin head' (simple but not always visible)
Render a regular n-gon with 'viewer facing' polygons (resolution independent but maybe ugly)
Render a quad with a circle texture on it (low poly but what size texture to choose?)
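A minimal sketch of the quad option, using the XNA-style types that Silverlight 5's 3D API borrows; the camera right/up vectors are assumed to come from your view matrix:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    static class MarkerQuad
    {
        // Build a camera-facing quad; draw the four vertices as a
        // TriangleStrip with a circle texture (transparent corners).
        public static VertexPositionTexture[] Build(
            Vector3 center, float size, Vector3 cameraRight, Vector3 cameraUp)
        {
            float s = 0.5f * size;
            return new[]
            {
                new VertexPositionTexture(center - cameraRight * s + cameraUp * s, new Vector2(0, 0)),
                new VertexPositionTexture(center + cameraRight * s + cameraUp * s, new Vector2(1, 0)),
                new VertexPositionTexture(center - cameraRight * s - cameraUp * s, new Vector2(0, 1)),
                new VertexPositionTexture(center + cameraRight * s - cameraUp * s, new Vector2(1, 1)),
            };
        }
    }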
There are probably libraries that do some or all of this for you, so if you are set on rolling your own then some of the things I've mentioned could form the basis for your search.
However, given what you've described of your site and situation I suspect you'd be better off avoiding all that work by using a pre-existing solution. E.g. the Google Earth API.
You could consider 3D web plugins that, granted, take you away from Silverlight but might speed up your development process. I'm thinking in particular of the Blender 3D web plugin. I can understand the urge to write your own viewer, but think twice before you reinvent the wheel. Good luck!

XNA vs SlimDX for offscreen renderer

I realise there are numerous questions on here asking about choosing between XNA and SlimDX, but these all relate to game programming.
A little background: I have an application that renders scenes from XML descriptions. Currently I am using WPF 3D and this mostly works, except that WPF has no way to render scenes offscreen (i.e. on a server, without displaying them in a window), and rendering to a bitmap also causes WPF to fall back to software rendering.
So I'm faced with having to write my own renderer. Here are the requirements:
Mix of 3D and 2D elements.
Relatively few elements per scene (tens of meshes, tens of 2D elements).
Large scenes (up to 3000px square for print).
Only a single frame will be rendered (i.e. FPS is not an issue).
Opacity masks.
Pixel shaders.
Software fallback (servers may or may not have a decent gfx card).
Possibility of being rendered offscreen.
As you can see it's pretty simple stuff and WPF can manage it quite nicely except for the not-being-able-to-export-the-scene problem.
In particular, I don't need many of the things usually needed in game development. So bearing that in mind, would you choose XNA or SlimDX? The non-rendering portion of the code is already written in C#, so I want to stick with that.
I haven't used SlimDX, but based on my experience with XNA and reading about SlimDX's objectives, I'd suggest SlimDX. XNA, while it can be used for other things, is primarily a game framework, not a rendering engine. It has lots of optimizations and methodology geared specifically towards games.
Also, XNA likes to pre-build its resources through its Content Pipeline (into .xnb files); if you're working with dynamic files, I think SlimDX is the better choice for you.
XNA and SlimDX are very close in nature, but there are some differences:
XNA requires a GPU with at least pixel/vertex shader 1.1 support, while I think SlimDX does not.
SlimDX supports DirectX 10 and 11, while XNA only supports DirectX 9.
XNA is cross-platform across Windows, Xbox 360, Zune and Windows Phone 7, while SlimDX is not.
XNA has a strong community (creators.xna.com) with tons of tutorials and help materials.
I would go with XNA.
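For what it's worth, the offscreen requirement maps naturally to XNA's RenderTarget2D. A minimal sketch (XNA 4.0, inside a Game subclass); note that XNA still needs a working graphics device, so the software-fallback requirement remains a question mark on servers, and 3000px targets need the HiDef profile (4096 max texture size):

    using System.IO;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    // Render one frame offscreen and save it as a PNG.
    void RenderSceneToFile(string path, int width, int height)
    {
        var target = new RenderTarget2D(GraphicsDevice, width, height, false,
            SurfaceFormat.Color, DepthFormat.Depth24);

        GraphicsDevice.SetRenderTarget(target);
        GraphicsDevice.Clear(Color.White);
        // ... draw the 3D and 2D elements of the scene here ...
        GraphicsDevice.SetRenderTarget(null); // back to the backbuffer

        using (var stream = File.Create(path))
            target.SaveAsPng(stream, width, height);
    }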
