XNA 3D Blender models not keeping scale - C#

So I am making all my models in Blender and then exporting them to .fbx format using File -> Export, with the XNA Strict Options box checked. This works just great, except that when I put my model in XNA it has been stretched along the up-down axis, and it is always the same scale. No matter how much I scale it in Blender, it is always the same size in game. Any ideas? Also, I'm not sure if this is related, but if I have a model with multiple parts, only one part of it will show. Any help is appreciated!

I am more familiar with 3ds Max than Blender, but if this were happening in Max I would know what was going on, so I'll explain it in case Blender works in a similar fashion.
When you modify something in a 3D modeling app (like scaling the model on a particular axis), it does not actually change the positions of the vertices as you might think. It only creates a transform matrix that is applied to the original vertex positions at render time to make the model appear the way you expect.
So when you import the model into XNA, you are importing it with its unscaled vertex positions plus all the transform matrices it takes to render the model the way you would expect. But you have to apply those transforms in your XNA code, or the model won't appear the way it did in your modeling app (the issue you are having).
The way you apply the transforms is by calling XNA's Model.CopyAbsoluteBoneTransformsTo(Matrix[]). Make sure you do not call Model.CopyBoneTransformsTo(Matrix[]); you need the one with the word 'Absolute' in it.
Here is a tutorial that shows how to implement that method: http://msdn.microsoft.com/en-us/library/bb203933(v=xnagamestudio.40).aspx
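For reference, here is a minimal sketch of the usual XNA draw loop that applies those absolute bone transforms, assuming a model that uses BasicEffect and world/view/projection matrices you already have. Looping over every ModelMesh also means every part of a multi-part model gets drawn:

    // Bake each bone's absolute transform so the model appears as it did in Blender.
    Matrix[] boneTransforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(boneTransforms);

    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            // Combine the mesh's bone transform with your own world matrix.
            effect.World = boneTransforms[mesh.ParentBone.Index] * world;
            effect.View = view;
            effect.Projection = projection;
            effect.EnableDefaultLighting();
        }
        mesh.Draw();
    }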

Related

Unity VR Render Textures, Scene space rendering, and blending layers

[using unity 2020.3]
I'm trying to slowly blend different layers in and out in VR, with both layers visible while the fade occurs. Right now I am using two cameras, one as the main camera and one rendering to a render texture (each rendering only its respective layers). Then I use UI to fade the render texture in and out. This looks and works great in 2D view (including builds), but UI components do not render in VR.
I am aware that rendering this in VR will require 4 sets of rendering (two for each eye), but I'd still like to know how to generate and display a render texture for each eye using unity.
This effect can be done in other ways and I'm open to suggestions. There are a lot of different types of elements I wish to fade in and out, so I'm aware of one solution of adding transparent shaders and fading particles, but this can be tedious and requires a lot of setup (I'd like more of a permanent solution for any project). That being said, I'd still like to know how to manipulate what is being rendered out to the VR headset.
I'm fairly certain that the "Screen space effects" section of the Unity Single Pass Stereo rendering (Double-Wide rendering) doc -- { https://docs.unity3d.com/Manual/SinglePassStereoRendering.html } -- is what I'm looking for; however, this still doesn't answer how to get the render texture for each eye (and I'm a little confused about how to use what they have written).
I'm happy to elaborate more and test some things out! Thank you in advance!
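For context, here is a minimal sketch of the 2D fade path described above, assuming a secondary camera that writes its layer into a RenderTexture displayed by a screen-space RawImage with a CanvasGroup on it; the component and field names are made up for illustration, and as noted this kind of overlay is exactly the part that does not show up in the headset:

    using System.Collections;
    using UnityEngine;

    // Hypothetical helper for the two-camera fade: the secondary camera renders
    // its layer into a RenderTexture, a RawImage displays that texture, and this
    // script fades the CanvasGroup sitting on that RawImage in or out over time.
    public class LayerFader : MonoBehaviour
    {
        [SerializeField] private CanvasGroup overlayGroup; // CanvasGroup on the RawImage
        [SerializeField] private float fadeDuration = 1f;

        public IEnumerator FadeTo(float targetAlpha)
        {
            float startAlpha = overlayGroup.alpha;
            for (float t = 0f; t < fadeDuration; t += Time.deltaTime)
            {
                overlayGroup.alpha = Mathf.Lerp(startAlpha, targetAlpha, t / fadeDuration);
                yield return null; // wait one frame
            }
            overlayGroup.alpha = targetAlpha;
        }
    }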

Get texture from images through 3D projection

In my (limited) experiences on 3D programming, usually we set up a 3D model with materials and texture, then set up the light and camera. Finally we can get a 2D view through the camera.
But I need to reverse this procedure: given a 2D view image, a camera setup, and a 3D model without texture, I want to find the texture for the model such that it produces the same 2D view. To simplify, we ignore the light and materials, assuming they are uniform.
Although not easy, I think I could write a program to do this. But are there any existing wheels out there so I don't have to reinvent them? (C#, WPF 3D, or OpenCV)
Helix3d Toolkit for WPF has an interesting example called "ContourDemo". If you download the whole source you get a very comprehensive example app showcasing its capabilities.
This particular example uses a number of helper methods to generate a contour mesh from a given 3D model file (.3ds, .obj, .stl).
With some extension this could possibly be the basis for reverse-calculating the UV mapping.
Even if there is nothing suitable to perform the core requirement (extracting the texture), it is a great toolkit for displaying your original files and any outputs you have generated.
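As a rough illustration of that reverse-UV idea (this is not part of the Helix Toolkit; it uses System.Numerics, assumes a simple pinhole camera matching the one that produced the 2D view, and ignores occlusion entirely): project each mesh vertex through the camera and use the resulting image coordinates as that vertex's UV, so the texture is sampled straight from the view image.

    using System.Numerics;

    // Sketch: derive a UV per vertex by projecting it into the source image with
    // the same camera that produced the 2D view. Occlusion and lighting ignored;
    // the camera parameters are assumptions you would fill in from your setup.
    static Vector2[] ProjectVerticesToUv(
        Vector3[] worldVertices, Vector3 cameraPos, Vector3 cameraTarget,
        float fovRadians, float aspectRatio)
    {
        Matrix4x4 view = Matrix4x4.CreateLookAt(cameraPos, cameraTarget, Vector3.UnitY);
        Matrix4x4 proj = Matrix4x4.CreatePerspectiveFieldOfView(fovRadians, aspectRatio, 0.1f, 100f);
        Matrix4x4 viewProj = view * proj; // System.Numerics uses the row-vector convention

        var uvs = new Vector2[worldVertices.Length];
        for (int i = 0; i < worldVertices.Length; i++)
        {
            Vector4 clip = Vector4.Transform(new Vector4(worldVertices[i], 1f), viewProj);
            float x = clip.X / clip.W; // normalized device coordinates, -1..1
            float y = clip.Y / clip.W;
            uvs[i] = new Vector2((x + 1f) * 0.5f,  // U runs left-to-right across the image
                                 (1f - y) * 0.5f); // flip Y so the image top is V = 0
        }
        return uvs;
    }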

How to Combine Vertices and edges into one In Unity

I'm new to Unity and I'm making a car racing Game. Now, I'm stuck at some point. I was looking for some solution of my problem, but couldn't succeed.
My problem is:
When I run my game on my phone it stutters badly: whenever there are several buildings in front of the car camera, such as one building behind another, it lags. The reason is that there are so many vertices and edges at that moment that the car camera cannot render all of that stuff at the same time.
How do I preload the 2nd scene while loading the 1st scene?
I am using Unity free version.
In graphics programming there is a common technique of simply not drawing objects that aren't in the field of view (frustum culling). I'm sure Unity can handle this. Check this link: Unity's description of this topic
I'm not hugely knowledgeable about Unity, but as a 3D modeller there's a bunch of things you can do to improve performance:
Create a simplified version of your buildings with fewer polygons for use when buildings are a long way away. A skyscraper, for example, can be as simple as a textured box.
If you've done that already, reduce the distance at which the simpler impostors are substituted for the complex versions (see the LOD sketch after this list).
Reduce the number of polygons by other means. A good example is if you've got a window ledge sticking out of the side of a building, don't try and make it an extension of the body. Instead, make it a separate box, delete the facet that won't be seen, and move it to intersect with the rest of the building.
Another good trick is to use bump maps or normal maps to approximate smaller features, rather than trying to model everything.
Opaqueness. Try not to have transparent windows in your buildings. It's computationally cheaper to make them just reflect the skybox or a suitably blurred reflection imposter. Also make sure that the material's shader is in Opaque mode, if it supports this.
You might also benefit a little from checking the 'Static' box on the game object, assuming the buildings can't be moved (e.g. by smashing through them with a bulldozer).
Collision detection can also be a big drain. Be sure to use the simplest possible detection mesh you can - either a box, cylinder, sphere or a combination.
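To make the first two points concrete, here is a minimal sketch of a Unity LODGroup setup, assuming you already have a detailed renderer and a simplified renderer for a building; the threshold values are illustrative and need tuning:

    using UnityEngine;

    // Sketch: a LODGroup swaps in a simplified mesh when the building is small
    // on screen, and culls it entirely below the lowest threshold.
    public class BuildingLodSetup : MonoBehaviour
    {
        [SerializeField] private Renderer detailedRenderer;   // full building mesh
        [SerializeField] private Renderer simplifiedRenderer; // textured-box version

        private void Awake()
        {
            var lodGroup = gameObject.AddComponent<LODGroup>();
            LOD[] lods =
            {
                // Screen-relative height thresholds (fractions of screen height).
                new LOD(0.5f, new[] { detailedRenderer }),   // close: full detail
                new LOD(0.1f, new[] { simplifiedRenderer })  // far: simple box
                // Below 10% of screen height the building is culled entirely.
            };
            lodGroup.SetLODs(lods);
            lodGroup.RecalculateBounds();
        }
    }

If you want a global way to pull those switch distances in rather than editing each building, Unity's Quality Settings also expose an LOD Bias that scales every LODGroup's thresholds at once.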

Web Page - 3d earthquake visualization - Silverlight?

I have never written any silverlight apps but I am looking to write a 3d viewer for earthquakes and have it run from my web site.
I would like to create a simple viewer so the user can change the "camera", i.e. their perspective. The view could contain up to 10,000 objects in the 3D space.
I want the ability to quickly view this - I have seen this on a Power Basic application and want to do this for the web.
I have a current web site at http://canterburyquakelive.co.nz for earthquakes in Canterbury, New Zealand, and I want to learn the basics so that it can be more interactive.
I want to say for example (to start) place 2 objects in a "space" that I can define and move the camera in real time.
The system must support up to 10,000 objects at the end of the day.
Each object can be a simple circle - no need for special pixel shaders
I am unsure of the exact functionality of the system at the moment, so if I can find a tutorial that lets me place something (a circle) into a 3D world (space) and change the camera, that would be good.
Any ideas appreciated - there seems to be so much about 3d and silverlight that I may be getting lost in the "gloss" of new features where I need some basics and I can learn and adapt over time.
** Added comment + image **
Basically I am wanting to create a page that looks like this using Silverlight. But I am open to any technology.
I've never done 3D in Silverlight, so I can't exactly answer your question as asked, but in general displaying geographic markers on a 'real' 3D terrain is quite involved. At a minimum you're probably looking at:
Obtaining binary height data files (last time I looked, NASA gives this away)
Reading and interpreting said files to get 'bitmap' height data
Choosing and dealing with projections (e.g. UTM)
Deciding how to tessellate your bitmap height data
If you want it textured you'll need to also obtain satellite data for that, again converting or processing it to account for projection.
You could ignore the terrain height, but that may not simplify things depending on how 'bumpy' your terrain is.
For a pre-defined small enough area, you could perhaps pre-author a 3d model of the terrain in some 3D package but displaying your markers will still require a projection from long/lat into your 3D space, and you'll still need to know terrain height (unless you do mesh collision with the static model).
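To give a feel for that projection step, here is a rough sketch of converting long/lat into local metres around a reference origin using a simple equirectangular approximation (not a proper UTM projection, but adequate for a region the size of Canterbury; the names and constants are illustrative):

    using System;

    // Sketch: convert latitude/longitude (degrees) to local X/Z metres around a
    // reference origin with an equirectangular approximation. Height (Y) would
    // come from your terrain data.
    public static class GeoProjection
    {
        private const double EarthRadiusMetres = 6371000.0;

        public static void ToLocalMetres(
            double latDeg, double lonDeg,
            double originLatDeg, double originLonDeg,
            out double x, out double z)
        {
            double degToRad = Math.PI / 180.0;

            // East-west distance shrinks with the cosine of latitude.
            x = (lonDeg - originLonDeg) * degToRad
                * EarthRadiusMetres * Math.Cos(originLatDeg * degToRad);
            z = (latDeg - originLatDeg) * degToRad * EarthRadiusMetres;
        }
    }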
Rendering the markers is pretty straightforward by comparison, choose from:
Use a 3D model e.g. a 'pin head' (simple but not always visible)
Render a regular n-gon with 'viewer facing' polygons (resolution independent but maybe ugly)
Render a quad with a circle texture on it (low poly but what size texture to choose?)
There are probably libraries that do some or all of this for you, so if you are set on rolling your own then some of the things I've mentioned could form the basis for your search.
However, given what you've described of your site and situation I suspect you'd be better off avoiding all that work by using a pre-existing solution. E.g. the Google Earth API.
You could consider 3D web plugins that, granted, take you away from Silverlight but might speed up your development process. I'm thinking in particular of e.g. the Blender 3D web plugin. I can understand the wish to write your own viewer, but think twice before you reinvent the wheel. Good luck!

Simple 3D Graphics in C#

I'm currently working on an application where I need to do some visualization, and the most complicated thing I'll be doing is displaying point-like objects.
Anything beyond that is complete overkill for my purposes, since I won't be doing anything but drawing point-like objects.
That being said, what would be the simplest solution to my needs?
The simplest is probably to use WPF 3D. This is a retained-mode graphics system, so if you don't have huge needs (i.e. special shaders for effects, etc.), it's very easy to set up and use directly.
Otherwise, a more elaborate 3D system, such as XNA, may be more appropriate. This will be more work to set up, but gives you much more control.
I recommend you take a look at Microsoft XNA for C#.
Are they to be rendered as true points or as spheres? (where you see which 'points' are closer using the visible size of the sphere as a reference). In the former case, I would recommend simply multiplying the appropriate transformation matrices yourself to project the points onto your viewing plane, rather than using a full-blown 3D engine (as you're not rendering any triangles or performing lighting/shading).
For some theoretical background on 3D projection to a 2D plane, see this Wiki article. If you use XNA, it has Matrix helper functions that generate the appropriate transformation matrices for you, even if you don't use it for any actual rendering. The problem becomes very trivial for points, as there are no normals to consider. You simply multiply the composed view-projection matrix by each point, clip any points that lie outside the viewing frustum (i.e. behind the viewing plane, too far away, or outside the 2D range of your viewport), and render the points at their X,Y. The calculation does give you feedback as to how 'deep' each point is relative to your viewing plane, so you could use this to scale or color the points appropriately, as otherwise it's very difficult to quickly understand the 3D placement of the points.
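Here is a minimal sketch of that approach, using XNA's Matrix helpers only for the math with no rendering pipeline involved (the method name, camera parameters, and near/far values are illustrative assumptions):

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    // Sketch: project world-space points to 2D screen coordinates by hand.
    // Returns X,Y in pixels plus a 0..1 depth you can use to scale or color dots.
    static List<Vector3> ProjectPoints(
        IEnumerable<Vector3> points, Vector3 cameraPos, Vector3 cameraTarget,
        float aspectRatio, int viewportWidth, int viewportHeight)
    {
        Matrix view = Matrix.CreateLookAt(cameraPos, cameraTarget, Vector3.Up);
        Matrix proj = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.PiOver4, aspectRatio, 0.1f, 1000f);
        Matrix viewProj = view * proj; // XNA composes matrices row-vector style

        var result = new List<Vector3>();
        foreach (Vector3 p in points)
        {
            Vector4 clip = Vector4.Transform(new Vector4(p, 1f), viewProj);
            if (clip.W <= 0f) continue; // behind the camera

            float x = clip.X / clip.W; // normalized device coordinates, -1..1
            float y = clip.Y / clip.W;
            float depth = clip.Z / clip.W; // 0 (near plane) .. 1 (far plane)
            if (x < -1f || x > 1f || y < -1f || y > 1f || depth < 0f || depth > 1f)
                continue; // outside the viewing frustum

            result.Add(new Vector3(
                (x + 1f) * 0.5f * viewportWidth,
                (1f - y) * 0.5f * viewportHeight, // flip Y for screen space
                depth));
        }
        return result;
    }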
