XNA: drawing dynamic alpha maps - C#

I need to draw simple alpha maps in XNA that depend on the current game state. I cannot store every possible alpha map in the resources, so I want to draw them dynamically.
For example, a sprite can be partially hidden behind a wall.
I thought about using System.Drawing, but when I include that DLL there are conflicts with the XNA namespaces. Is there another way to draw simple bitmaps in XNA?
Thank you for your time.

You should not use the System.Drawing classes. They work on images that live in system RAM. Instead, use XNA to manipulate a texture that lives in GPU memory.
You can use SetRenderTarget() to set a texture as the render target. Subsequent draw calls will then be executed against that texture, so you can draw everything you need onto it. However, you will not have methods for drawing circles, squares, etc. To achieve this, you should either create vertex buffers with the appropriate geometry or use a sprite template.
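A minimal sketch of that approach with the XNA 4.0 API; the texture size is arbitrary, and 'maskSprite'/'maskPosition' are hypothetical placeholders for whatever sprite template you stamp onto the map:

```csharp
// Minimal sketch: draw an alpha map into a texture at runtime (XNA 4.0).
// 'maskSprite' is a hypothetical white sprite template used as a stamp.
RenderTarget2D alphaMap = new RenderTarget2D(GraphicsDevice, 256, 256);

// Redirect all drawing to the texture.
GraphicsDevice.SetRenderTarget(alphaMap);
GraphicsDevice.Clear(Color.Transparent);

spriteBatch.Begin();
spriteBatch.Draw(maskSprite, maskPosition, Color.White);
spriteBatch.End();

// Switch back to the back buffer; 'alphaMap' is now usable as a Texture2D.
GraphicsDevice.SetRenderTarget(null);
```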

Related

How to use pathfinding on a map image in Unity 2D?

In my 2D Android game, I have an image of a map as the background. I want to spawn objects and make them walk to a given destination, but along the actual roads. I tried using a pathfinding grid and a navmesh; however, they are not accurate enough to detect the small roads on the map. Any idea how I can achieve this?
Edit: I have a high-res image with no labels. I also tried using a PNG of the road layout and marking it as walkable, but since the roads are small and too close to one another it doesn't generate the navmesh on them.
(red is the player, green is the given destination, blue is the path automatically created for red to walk on after the destination is given)
"I have an image of a map as the background"
This is the wrong type of data to start off with. To do any kind of navigation you will need some type of graph data, either to create a navmesh from or to implement your own pathfinding (A* is not that difficult to implement). Creating a graph from just an image of a map is doomed to failure, since roads will be covered by labels and other visual noise. It might be a feasible approach if you could get a "clean" image of sufficient resolution, ideally without anti-aliasing or other processing that could interfere with the analysis.
You would be much better off starting with an object representation of your map and either converting it to an image or correlating the object positions with coordinates on the image. You could, for example, use the OpenStreetMap API for this kind of data, or just create your graph by hand if you only have a single small map.
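For the "clean image" route, a hedged sketch of what sampling a road mask into a walkable grid might look like in Unity; all names are hypothetical, and the texture must be marked readable in its import settings:

```csharp
// Hypothetical sketch: turn a clean, unlabeled road mask into a walkable grid.
// Assumes 'roadMask' is a readable Texture2D where white pixels mark roads.
bool[,] BuildWalkableGrid(Texture2D roadMask, int cellSize)
{
    int w = roadMask.width / cellSize;
    int h = roadMask.height / cellSize;
    var walkable = new bool[w, h];
    Color[] pixels = roadMask.GetPixels();

    for (int x = 0; x < w; x++)
    for (int y = 0; y < h; y++)
    {
        // Sample the centre pixel of each cell; treat bright pixels as road.
        int px = x * cellSize + cellSize / 2;
        int py = y * cellSize + cellSize / 2;
        walkable[x, y] = pixels[py * roadMask.width + px].grayscale > 0.5f;
    }
    return walkable;
}
```

The resulting grid can then feed a hand-rolled A* or any grid-based pathfinder.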

Unity VR Render Textures, Scene space rendering, and blending layers

[using unity 2020.3]
I'm trying to slowly blend different layers in and out in VR, with both layers visible while the fade occurs. Right now, I am using two cameras: one as the main camera and one rendering to a render texture (each rendering only its respective layers). Then I use UI to fade the render texture in and out. This looks and works great in 2D view (including builds), but UI components do not render in VR.
I am aware that rendering this in VR will require 4 sets of rendering (two for each eye), but I'd still like to know how to generate and display a render texture for each eye using Unity.
This effect can be done in other ways and I'm open to suggestions. There are many different types of elements I wish to fade in and out. I'm aware of one solution, adding transparent shaders and fading particles, but this can be tedious and requires a lot of setup (I'd like a more permanent solution for any project). That being said, I'd still like to know how to manipulate what is being rendered out to the VR headset.
I'm fairly certain that the "Screen space effects" section of the Unity documentation on Single Pass Stereo rendering (Double-Wide rendering) -- https://docs.unity3d.com/Manual/SinglePassStereoRendering.html -- is what I'm looking for; however, it still doesn't explain how to get the render texture for each eye (and I'm a little confused about how to use what they have written).
I'm happy to elaborate more and test some things out! Thank you in advance!
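For reference, a minimal sketch of the 2D setup described above (class and field names are hypothetical); it reproduces the working non-VR behaviour but does not solve the per-eye case:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.UI;

// Renders one layer through a second camera into a RenderTexture and
// fades it over the main view via a full-screen RawImage.
public class LayerFader : MonoBehaviour
{
    public Camera layerCamera;    // culling mask set to the layer being faded
    public RawImage overlayImage; // full-screen RawImage on a screen-space canvas
    public float fadeDuration = 1f;

    RenderTexture layerRT;

    void Start()
    {
        layerRT = new RenderTexture(Screen.width, Screen.height, 24);
        layerCamera.targetTexture = layerRT;
        overlayImage.texture = layerRT;
    }

    public IEnumerator FadeOut()
    {
        for (float t = 0f; t < fadeDuration; t += Time.deltaTime)
        {
            Color c = overlayImage.color;
            c.a = 1f - t / fadeDuration;
            overlayImage.color = c;
            yield return null;
        }
    }
}
```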

Get texture from images through 3D projection

In my (limited) experience of 3D programming, we usually set up a 3D model with materials and textures, then set up the light and camera. Finally, we get a 2D view through the camera.
But I need to reverse this procedure: given a 2D view image, a camera setup, and a 3D model without a texture, I want to find the texture for the model such that it reproduces the same 2D view. To simplify, we ignore the light and materials, assuming they are uniform.
Although not easy, I think I could write a program to do this. But are there any existing wheels out there, so I don't have to reinvent them? (C#, WPF 3D, or OpenCV)
The Helix Toolkit for WPF has an interesting example called "ContourDemo". If you download the whole source, you get a very comprehensive example app showcasing its capabilities.
This particular example uses a number of helper methods to generate a contour mesh from a given 3D model file (.3ds, .obj, .stl).
With some extension, this could possibly be the basis for reverse-calculating the UV mapping.
Even if there is nothing suitable to perform the core requirement (extracting the texture), it is a great toolkit for displaying your original files and any outputs you have generated.
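The core of that reverse calculation is just a projection: each vertex is projected through the camera, and the resulting image coordinates become its UVs, so the source image itself acts as the texture. A hedged sketch using WPF 3D types; the view-projection matrix is assumed to come from your camera setup:

```csharp
using System.Windows;
using System.Windows.Media.Media3D;

static Point GetUvForVertex(Point3D vertex, Matrix3D viewProjection)
{
    // Project the vertex into clip space, then perform the perspective divide.
    Point4D clip = viewProjection.Transform(
        new Point4D(vertex.X, vertex.Y, vertex.Z, 1.0));
    double ndcX = clip.X / clip.W;
    double ndcY = clip.Y / clip.W;

    // Map NDC [-1,1] to texture space [0,1]; image origin is top-left, so flip Y.
    return new Point(0.5 * (ndcX + 1.0), 0.5 * (1.0 - ndcY));
}
```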

How to Import or Draw a Plane in XNA?

I'm implementing a small piece of software and am having trouble drawing geometry. How can I do this?
Thank you.
Your question is very vague. If you want to draw some vertices, you have to use the GraphicsDevice draw methods; if you need to draw sprites, use the SpriteBatch Draw method; and if you want to import a model, you can use the Draw method of the Model class (see the sketch after the tutorial list below).
If you're starting from scratch, I recommend you take a look at these step-by-step tutorials:
Your First Game - XNA Game Studio in 2D
Displaying a 3D Model on the Screen
Draw a Textured Quad
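As promised, a minimal sketch (XNA 4.0) of drawing a flat plane via the GraphicsDevice; 'effect' is assumed to be a BasicEffect with World/View/Projection set and VertexColorEnabled = true:

```csharp
// Four corners of a flat plane on the XZ axis, ordered for a triangle strip.
VertexPositionColor[] vertices =
{
    new VertexPositionColor(new Vector3(-1, 0, -1), Color.Green),
    new VertexPositionColor(new Vector3(-1, 0,  1), Color.Green),
    new VertexPositionColor(new Vector3( 1, 0, -1), Color.Green),
    new VertexPositionColor(new Vector3( 1, 0,  1), Color.Green),
};

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    // Two triangles sharing an edge form the plane.
    GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, vertices, 0, 2);
}
```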

Simple 3D Graphics in C#

I'm currently working on an application where I need to do some visualization, and the most complicated thing I'll be doing is displaying point-like objects.
Anything beyond that is complete overkill for my purposes, since I won't be doing anything but drawing point-like objects.
That being said, what would be the simplest solution to my needs?
The simplest is probably to use WPF 3D. This is a retained-mode graphics system, so if you don't have heavyweight needs (i.e. special shaders for effects, etc.), it's very easy to set up and use directly.
Otherwise, a more elaborate 3D system, such as XNA, may be more appropriate. This will take more work to set up, but gives you much more control.
I recommend you take a look at Microsoft XNA for C#.
Are they to be rendered as true points, or as spheres (where you can judge which 'points' are closer using the visible size of the sphere as a reference)? In the former case, I would recommend simply multiplying the appropriate transformation matrices yourself to project the points onto your viewing plane, rather than using a full-blown 3D engine, as you're not rendering any triangles or performing lighting/shading.
For some theoretical background on projecting 3D points onto a 2D plane, see this Wiki article. If you use XNA, it has Matrix helper functions that generate the appropriate transformation matrices for you, even if you don't use it for any actual rendering. The problem becomes very trivial for points, as there are no normals to consider. You simply multiply each point by the composed view-projection matrix, clip any points that lie outside the viewing frustum (i.e. behind the viewing plane, too far away, or outside the 2D range of your viewport), and render the points at their X,Y coordinates. The calculation also gives you feedback on how 'deep' each point is relative to your viewing plane, so you could use this to scale or colour the points appropriately; otherwise it's very difficult to quickly grasp the 3D placement of the points.
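A hedged sketch of that pipeline using XNA's matrix helpers purely for the math; the camera parameters, point list, and 2D draw call are all placeholder assumptions:

```csharp
// Build the composed view-projection matrix with XNA's helpers.
Matrix view = Matrix.CreateLookAt(cameraPos, target, Vector3.Up);
Matrix proj = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, aspectRatio, 0.1f, 100f);
Matrix viewProj = view * proj;

foreach (Vector3 point in points)
{
    // Transform to clip space and perform the perspective divide.
    Vector4 clip = Vector4.Transform(point, viewProj);
    if (clip.W <= 0) continue; // behind the viewing plane
    float x = clip.X / clip.W;
    float y = clip.Y / clip.W;
    float depth = clip.Z / clip.W;
    if (x < -1 || x > 1 || y < -1 || y > 1 || depth < 0 || depth > 1)
        continue; // outside the viewing frustum

    // Map NDC to screen pixels; 'depth' can drive point size or colour.
    float screenX = (x + 1) * 0.5f * viewportWidth;
    float screenY = (1 - y) * 0.5f * viewportHeight;
    DrawPoint(screenX, screenY, depth); // hypothetical 2D draw call
}
```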
