I was experimenting in Unity with just 2 GameObjects, no textures, and only 40 lines of code, but when I built it for Android it was about 17 MB. I then changed the managed stripping level to High and it went down to 11 MB, but after that it wouldn't go down any further, even if I deleted every GameObject and script in the project. What can I do to make it smaller?
That's the size of the engine; you can't get lower than that. I guess you're looking to make an instant game. You can't do that in Unity! Unity is working on a dedicated runtime named "Unity Tiny" for exactly this purpose, but it hasn't been released yet.
I think you did everything you needed to do, but make it a habit to always compress textures, meshes, and animation clips, ship only the assets you actually need, and keep an eye on each object's vertex and polygon counts.
To check each asset's contribution to the built file size, go to the Console window (menu: Window > General > Console), click the small drop-down panel in the top right, select Open Editor Log, and review the per-asset size breakdown for optimization.
And the last thing you can do to reduce size is to shrink the .NET library footprint. Unity supports two .NET API compatibility levels: .NET 4.x and .NET Standard 2.0. .NET Standard 2.0 restricts you to a smaller subset of the .NET API, which can help keep the build size down.
If that doesn't help, you've reached the smallest possible size, which is just the engine's rendering and Mono runtime code.
You need to build for only one platform and use IL2CPP:
Edit -> Project Settings -> Player -> Configuration
Check only ARMv7.
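If you prefer to apply these from code, here is a minimal editor-script sketch, assuming a recent Unity (2018.3 or later, where these PlayerSettings APIs exist); the menu path is just an example:

using UnityEditor;

public static class ShrinkBuildSettings
{
    // Hypothetical menu entry; applies the same options described above, via script.
    [MenuItem("Tools/Apply Small-Build Settings")]
    static void Apply()
    {
        PlayerSettings.SetScriptingBackend(BuildTargetGroup.Android, ScriptingImplementation.IL2CPP);
        PlayerSettings.SetManagedStrippingLevel(BuildTargetGroup.Android, ManagedStrippingLevel.High);
        PlayerSettings.SetApiCompatibilityLevel(BuildTargetGroup.Android, ApiCompatibilityLevel.NET_Standard_2_0);
        PlayerSettings.Android.targetArchitectures = AndroidArchitecture.ARMv7; // ARMv7 only
    }
}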
I can clearly remember building 7 MB applications.
I am trying to develop an AR application in Unity using the new AR Foundation.
This application would need to use two features:
It needs to use a large number of tracking images
It needs to properly identify the tracked image (marker); only one image will be visible at any given moment
What I need is to dynamically generate the fiducial markers, preferably with the tracking part the same for all markers and only a specific part carrying the marker's ID. Ideally the AR code would be similar to the ARToolKit one from this image:
Do these markers work well with ARFoundation (the abstraction over ARCore and ARKit)?
Let's say I add 100 of these generated codes into the XR image library. Is it possible that AR Foundation image targets get "confused" and mix up tracked images? Could I, in theory, use QR codes as markers and simply encode the ID information into the QR code?
In a project I searched for a good way to implement a lot of different markers to identify a vast number of different real-world objects. At first I tried QR codes and added them to the image database in ARFoundation.
It worked, but sometimes markers got mixed up, and this happened even with only 4 QR codes containing the words "left", "right", "up", and "down". The problem is that ARFoundation relies on ARCore, ARKit, etc. (depending on the platform you build for).
Excerpt from the ARCore guide:
Avoid images that contain a large number of geometric features, or very few features (e.g. barcodes, QR codes, logos and other line art), as this will result in poor detection and tracking performance.
The next thing I tried was to combine OpenCV with ARFoundation and use ArUco markers for detection. The detection works much better and faster than the image recognition. This was done by accessing the camera image and using OpenCV's marker detection. In ARFoundation you can access the camera image with public bool TryAcquireLatestCpuImage(out XRCpuImage cpuImage).
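For reference, here is a minimal sketch of that camera-image access, assuming ARFoundation 4.x; the class name and the grayscale conversion are illustrative, not part of the original project:

using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class CpuImageGrabber : MonoBehaviour
{
    [SerializeField] ARCameraManager cameraManager; // assign in the Inspector

    void Update()
    {
        // May fail if no new camera frame is available yet.
        if (!cameraManager.TryAcquireLatestCpuImage(out XRCpuImage cpuImage))
            return;

        using (cpuImage) // XRCpuImage must be disposed to free its native buffer
        {
            // Convert to single-channel grayscale, which is what most
            // OpenCV marker detectors expect.
            var conversionParams = new XRCpuImage.ConversionParams(cpuImage, TextureFormat.R8);
            var buffer = new NativeArray<byte>(cpuImage.GetConvertedDataSize(conversionParams), Allocator.Temp);
            cpuImage.Convert(conversionParams, buffer);
            // ...hand `buffer` to the OpenCV ArUco detection here...
            buffer.Dispose();
        }
    }
}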
The problem with this method:
This is a resource-intensive process that impacts performance...
On an iPad Pro 13" 2020, the performance in my application dropped from a constant 60 FPS to around 25 FPS. For me, that was too severe a performance drop.
A solution could be to create a collection of images with large variations and a perfect score, but I am unsure how images meeting all these criteria could be generated. (You are probably also limited to 1,000 images per reference database; see the ARCore guide.)
If you want to check whether these markers work well in ARCore, go to this link and download the arcoreimg tool.
The tool gives you a score that tells you whether the image is trackable. Although the site recommends a score of 75, I have tested it with scores as low as 15. Here is a quick demo if you are interested. The router image in the demo has a score of 15.
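For reference, evaluating a candidate marker with the tool looks like this on the command line (the image path is a placeholder):

arcoreimg eval-img --input_image_path=/path/to/marker.png

It prints a quality score from 0 to 100 for the image.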
I am having issues running my project on low-end machines. When I run a Windows build on the following machine, most of the 3D objects don't show up and some appear pink:
DELL OptiPlex 745
Intel(R) Core(TM)2 Duo 2.20 GHz
OS: Windows 7
RAM: 2.00 GB
Video Card: Intel(R) Q965/Q963
Unity Version: 5.6.1f1
Upon investigating, I learned that Unity's minimum requirement for Windows is DirectX 9 with Shader Model 3.0, whereas this system only supports Shader Model 2.0. I guess that is why it isn't working on this particular system. I tried creating a separate project and used some of the built-in shaders in it. Some of them work in that separate build (for example, the Standard shader works if the Emission property is turned off; if I turn Emission on, the object doesn't show up in the scene), but when I add that same scene to my main project it doesn't work. I have also tried changing all the materials to Standard and turning off Emission in my main project, but it still doesn't work.
Can anyone guide me on how to resolve this issue? Is there a way to run my application on systems that don't support Shader Model 3.0, or how can I set up a Unity project that supports Shader Model 2.0?
In short, games typically are not locked to a specific graphical fidelity; rather, the user is able to choose options like AA, bloom, ambient occlusion, and so forth, which in reality results in the application either choosing a preset of shaders (those marked low detail, for example) or regenerating new shaders based on the chosen options, tailored to the particular machine, before the game launches.
You can always tell when the latter is occurring, because a AAA game will say "optimising shaders for your machine".
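In Unity terms, a simple version of that preset switch could branch on the reported shader model at runtime. A minimal sketch, assuming you have authored separate low- and high-detail materials (the field names here are hypothetical):

using UnityEngine;

public class ShaderLevelFallback : MonoBehaviour
{
    // Hypothetical material variants you would author and assign in the Inspector.
    [SerializeField] Material lowDetailMaterial;
    [SerializeField] Material highDetailMaterial;

    void Awake()
    {
        // graphicsShaderLevel reports the shader model times ten,
        // e.g. 20 for SM 2.0, 30 for SM 3.0.
        bool supportsSm3 = SystemInfo.graphicsShaderLevel >= 30;
        GetComponent<Renderer>().sharedMaterial = supportsSm3 ? highDetailMaterial : lowDetailMaterial;
    }
}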
XNA had the concept of Reach and HiDef graphic profiles.
For more info, ask over at gamedev.net.
I'm coming here because I haven't found a solution to my problem, and I would like your opinion. I'm building a Unity SteamVR project with a specified quality level (Fantastic, for example).
However, the quality of the built game does not correspond to the specified one (I can see it in the shadows and the aliasing, which are lower quality than expected). I'm sure I selected the right quality level; it's marked with a green tick.
Do I need to specify the quality level in another way? Is this a well-known bug? Is it related to SteamVR?
I am using Unity 2017.
Brett
The quality settings for VR are different from a normal standalone build. VR quality depends heavily on the render scale, which controls the texel-to-pixel ratio before lens correction is applied. Note that this will also affect performance.
You can change it with:
VRSettings.renderScale
Last time I checked, the default render scale on SteamVR is 1. The more you increase it, the better the quality you get. I recommend changing it to about 1.5 and checking whether performance is still fine at that value. If the quality is still not good enough, keep increasing it.
using UnityEngine;
using UnityEngine.VR; // Unity 2017 namespace; later versions use UnityEngine.XR.XRSettings

public class SetRenderScale : MonoBehaviour
{
    void Awake() { VRSettings.renderScale = 1.5f; }
}
You can learn more about Unity's render scale and see examples of VR image output at various values here.
I found a (bad) solution. If I deactivate all the quality levels except the one I targeted, the build process uses the expected quality.
On the left, the specified quality level is not used when I build my game. On the right, the specified quality level is used, but I first had to deactivate all the other levels.
I also found a workaround: I set the settings manually via script, for example anti-aliasing to 8.
QualitySettings.antiAliasing = 8;
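A slightly fuller sketch of that workaround, forcing the settings once at startup; the specific values, and index 5 for "Fantastic", are assumptions based on Unity's default quality list:

using UnityEngine;

public class ForceQuality : MonoBehaviour
{
    void Awake()
    {
        // Index 5 is "Fantastic" in Unity's default quality levels (check your project's list).
        QualitySettings.SetQualityLevel(5, true);
        QualitySettings.antiAliasing = 8; // 8x MSAA
        QualitySettings.shadowResolution = ShadowResolution.VeryHigh;
    }
}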
So I'm making this 3D tic-tac-toe game, and I'd love to add a 3D interactive cube for live feedback, because the game as it is may be hard to grasp for a wider audience.
I've chosen Visual Studio 2013 for my project and I write it in C#. The game looks like 7 isolated squares containing 7^2 buttons each. My idea is to add a 7^3 cube of 343 cells to the form for better navigation. Obviously, each cell within the cube would have to be linked to its corresponding WinForms button.
So far I've spent a good deal of time searching the internet, and even my IT teacher was unable to answer this, so I come to you. Is there any way to do it?
You could try SlimDX: http://slimdx.org/ It is a free, open-source framework that enables developers to easily build DirectX applications using C#.
OpenTK http://www.opentk.com/ is another option.
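As a rough illustration of the OpenTK route, a window you could render the cube into might start out like this (classic OpenTK 3.x API; the actual drawing and the click-to-button mapping are left out):

using OpenTK;
using OpenTK.Graphics.OpenGL;

class CubeWindow : GameWindow
{
    protected override void OnRenderFrame(FrameEventArgs e)
    {
        base.OnRenderFrame(e);
        GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
        // ...draw the 7x7x7 grid of cells here and map clicks back to the form's buttons...
        SwapBuffers();
    }
}

// Usage: new CubeWindow().Run(60.0);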
My friends and I are developing a game for iPad/iPhone using the MonoGame framework. We are at the final stage of development and are having some issues with deploying the game.
This is our first iOS game and we really need help to make it happen.
This is our website: http://www.dumativa.com.br/index.php/pt-BR/projetos/dragon-festival (this isn't the newest version of the game, but you can see what it looks like).
Since this is our first game, we made some mistakes that we are trying to fix. Our textures were made targeting the iPad size (1024x768). We don't really know if that was the right approach. We thought it would be better to scale them down for iPhone rather than the other way around.
We need help with the following topics:
1 - PVRTC compression
1.1 - Is it really necessary to implement this in order to improve the game's performance?
1.2 - How does transparency work with these compressed textures? We have some textures with a transparency gradient, and we wonder how they will look after compression.
1.3 - Do you recommend we rebuild all the textures into power-of-2 texture atlases in order to make PVRTC compression work?
2 - ARMv7
2.1 - How does it help to improve the game's performance?
2.2 - How do we make sure it is working after enabling it in MonoDevelop?
3 - Texture Size/Resolution
3.1 - Are we right about the scaling, or should we develop textures at iPhone resolution and then scale them up for iPad?
3.2 - Should we have two different apps (one for iPhone and one for iPad) with the same textures but sized differently for each device?
4 - Suggestions
Do you have any suggestions, or points we are missing? We don't really know how to improve beyond the topics listed above. We need every possible direction.
Basically we need to decide which way to go to improve our performance and playing experience. We would really appreciate your help, and you won't regret it once this game is launched :)!
Thank you very much.
The answer to most of your questions is going to be "it depends". There is no substitute for testing on an actual target device and seeing if it's acceptable or not.
I have two general suggestions: (a) make sure you have as few textures/spritesheets as possible, ideally one per scene for a 2D game. And (b) yes, I would create resized/resampled graphics for different target devices instead of relying on dynamic scaling. This is very important for non-Retina hardware like the iPad mini, where it is a terrible idea to load huge images unnecessarily. Better to have a slightly larger download up front followed by an optimal playing experience.
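A minimal sketch of suggestion (b) in MonoGame, choosing the sprite sheet by display width at load time; the asset names and the 1024-pixel threshold are assumptions for illustration:

using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

public static class SheetLoader
{
    public static Texture2D LoadMainSheet(ContentManager content, GraphicsDevice device)
    {
        // Hypothetical per-device assets, built at different resolutions
        // so small devices never load the full iPad-sized sheet.
        bool isTablet = device.DisplayMode.Width >= 1024;
        string assetName = isTablet ? "Sheets/main_ipad" : "Sheets/main_iphone";
        return content.Load<Texture2D>(assetName);
    }
}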