I am trying to create a function in one of my helper classes that takes a texture (left) and generates an alpha mask from it (right).
Here's what I have so far:
public Texture2D CreateAlphaMask(Texture2D texture)
{
    if (texture == null)
        return null;

    RenderTarget2D target = new RenderTarget2D(Device, texture.Width, texture.Height);
    Device.SetRenderTarget(target);
    Device.Clear(Color.Black);

    using (SpriteBatch batch = new SpriteBatch(Device))
    {
        BlendState blendState = new BlendState()
        {
            AlphaBlendFunction = BlendFunction.Max,
            AlphaSourceBlend = Blend.One,
            AlphaDestinationBlend = Blend.One,
            ColorBlendFunction = BlendFunction.Add,
            ColorSourceBlend = Blend.InverseDestinationColor,
            ColorDestinationBlend = Blend.Zero,
            BlendFactor = Color.White,
            ColorWriteChannels = ColorWriteChannels.All
        };

        batch.Begin(0, blendState);
        batch.Draw(texture, Vector2.Zero, Color.White);
        batch.End();
    }

    Device.SetRenderTarget(null);
    return target;
}
What should be happening is that if alpha=0, then the pixel is black, and if alpha=1, then the pixel is white (and interpolated between these values if needed).
However, I can't seem to make it go "whiter" than the base image on the left. That is, if I set it to blend white, then at most it will go to the grey tones that I have, but never brighter. This isn't something I can create in advance, either, as it must be calculated during the game.
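(For reference, the mapping I'm after is trivial to express on the CPU. This is only a sketch assuming the texture data can be read back with GetData, which is too slow to do every frame but is good enough to verify what the mask should look like; Device is the same graphics device used above.)

public Texture2D CreateAlphaMaskCpu(Texture2D texture)
{
    if (texture == null)
        return null;

    // read the pixel data back to the CPU (assumes the texture is readable)
    Color[] pixels = new Color[texture.Width * texture.Height];
    texture.GetData(pixels);

    // alpha 0 -> black, alpha 255 -> white, linearly in between
    for (int i = 0; i < pixels.Length; i++)
    {
        byte a = pixels[i].A;
        pixels[i] = new Color(a, a, a);
    }

    Texture2D mask = new Texture2D(Device, texture.Width, texture.Height);
    mask.SetData(pixels);
    return mask;
}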
I want to implement a user login in my Unity game, but I am unable to get the user's profile picture from their Facebook ID. The username is showing but not the profile picture; it shows up blank. I am also not getting any errors!
The sprite of the image is changing, but it is not displayed on the screen.
Here is the code:
void DealWithFbMenus(bool isLoggedIn)
{
    if (isLoggedIn)
    {
        FB.API("/me?fields=first_name", HttpMethod.GET, DisplayUsername);
        FB.API("/me/picture?type=med", HttpMethod.GET, DisplayProfilePic);
    }
}
void DisplayUsername(IResult result)
{
    if (result.Error == null)
    {
        string name = "" + result.ResultDictionary["first_name"];
        FB_userName.text = name;
        Debug.Log("" + name);
    }
    else
    {
        Debug.Log(result.Error);
    }
}
void DisplayProfilePic(IGraphResult result)
{
    if (result.Error == null)
    {
        Debug.Log("Profile Pic");
        FB_userDp.sprite = Sprite.Create(result.Texture, new Rect(0, 0, 128, 128), new Vector2());
    }
    else
    {
        Debug.Log(result.Error);
    }
}
Sprite.Create takes
rect Rectangular section of the texture to use for the sprite.
I suspect that your hard-coded 128 x 128 pixel rect covers only a section of the texture, not the entire thing, depending on the actual dimensions of the downloaded picture.
And it also takes
pivot Sprite's pivot point relative to its graphic rectangle.
You are using new Vector2(), which means the bottom-left corner. In general, for profile pictures I would rather assume that the pivot should be the center of the texture and use
Vector2.one * 0.5f
or
new Vector2(0.5f, 0.5f)
So, assuming the download itself actually works and, as you say, you don't get any errors, you would probably rather use e.g.
FB_userDp.sprite = Sprite.Create(result.Texture, new Rect(0, 0, result.Texture.width, result.Texture.height), Vector2.one * 0.5f);
or, if your goal is to use a square section no matter the dimensions, you could use
var size = Mathf.Min(result.Texture.width, result.Texture.height);
FB_userDp.sprite = Sprite.Create(result.Texture, new Rect(0, 0, size, size), Vector2.one * 0.5f);
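Putting both changes together, the callback from the question could end up looking roughly like this (the extra null check on result.Texture is just a defensive assumption, since a failed download may hand you a result without a texture):
void DisplayProfilePic(IGraphResult result)
{
    if (result.Error != null)
    {
        Debug.Log(result.Error);
        return;
    }

    if (result.Texture == null)
    {
        Debug.Log("Profile picture request returned no texture");
        return;
    }

    // use the whole downloaded texture and pivot at its center
    FB_userDp.sprite = Sprite.Create(
        result.Texture,
        new Rect(0, 0, result.Texture.width, result.Texture.height),
        Vector2.one * 0.5f);
}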
I have to create a 2D map in Unity using a single image. I have one .png file with 5 different pieces out of which I have to create a map, and I am not allowed to crop the image. So, how do I create this map using only one image?
I am a bit new to Unity; I tried searching but didn't find exactly what I am looking for. I also tried a Tilemap using a Tile Palette but couldn't figure out how to extract only one portion of the image.
You can create various Sprites from the given texture on the fly in code.
You can define which part of a given Texture2D shall be used for the Sprite using Sprite.Create, providing the rect in pixel coordinates of the given image. Remember, however, that in Unity texture coordinates start at the bottom left.
Example: use the given pixel-coordinate section of a texture for the attached UI.Image component:
[RequireComponent(typeof(Image))]
public class Example : MonoBehaviour
{
    // your texture e.g. from a public field via the inspector
    public Texture2D texture;

    // define which pixel coordinates to use for this sprite also via the inspector
    public Rect pixelCoordinates;

    private void Start()
    {
        var newSprite = Sprite.Create(texture, pixelCoordinates, Vector2.one / 2f);
        GetComponent<Image>().sprite = newSprite;
    }

    // called every time something is changed in the Inspector
    private void OnValidate()
    {
        if (!texture)
        {
            pixelCoordinates = new Rect();
            return;
        }

        // reset to valid rect values
        pixelCoordinates.x = Mathf.Clamp(pixelCoordinates.x, 0, texture.width);
        pixelCoordinates.y = Mathf.Clamp(pixelCoordinates.y, 0, texture.height);
        pixelCoordinates.width = Mathf.Clamp(pixelCoordinates.width, 0, texture.width - pixelCoordinates.x);
        pixelCoordinates.height = Mathf.Clamp(pixelCoordinates.height, 0, texture.height - pixelCoordinates.y);
    }
}
Or you could make a kind of manager class for generating all the needed sprites once, e.g. in a list like
public class Example : MonoBehaviour
{
    // your texture e.g. from a public field via the inspector
    public Texture2D texture;

    // define which pixel coordinates to use for each sprite also via the inspector
    public List<Rect> pixelCoordinates = new List<Rect>();

    // OUTPUT
    public List<Sprite> resultSprites = new List<Sprite>();

    private void Start()
    {
        foreach (var coordinates in pixelCoordinates)
        {
            var newSprite = Sprite.Create(texture, coordinates, Vector2.one / 2f);
            resultSprites.Add(newSprite);
        }
    }

    // called every time something is changed in the Inspector
    private void OnValidate()
    {
        if (!texture)
        {
            for (var i = 0; i < pixelCoordinates.Count; i++)
            {
                pixelCoordinates[i] = new Rect();
            }
            return;
        }

        for (var i = 0; i < pixelCoordinates.Count; i++)
        {
            // reset to valid rect values
            var rect = pixelCoordinates[i];
            rect.x = Mathf.Clamp(pixelCoordinates[i].x, 0, texture.width);
            rect.y = Mathf.Clamp(pixelCoordinates[i].y, 0, texture.height);
            rect.width = Mathf.Clamp(pixelCoordinates[i].width, 0, texture.width - rect.x);
            rect.height = Mathf.Clamp(pixelCoordinates[i].height, 0, texture.height - rect.y);
            pixelCoordinates[i] = rect;
        }
    }
}
Example:
I have 4 Image instances and configured them so the pixelCoordinates are:
imageBottomLeft: X=0, Y=0, W=100, H=100
imageTopLeft: X=0, Y=100, W=100, H=100
imageBottomRight: X=100, Y=0, W=100, H=100
imageTopRight: X=100, Y=100, W=100, H=100
The texture I used is 386 x 395, so I'm not using all of it here (I just added frames showing the parts the Sprites are going to use).
So when hitting Play, the following sprites are created:
In a graphics application I am rendering an image to a texture, then I use that texture on a 3D model.
My problem is the following:
When the application starts everything is fine, but if I resize the view where I do the rendering and make it bigger, the texture on the 3D model disappears (it doesn't turn black; I think all values become 1). Making it smaller doesn't make the texture disappear, but it is shown incorrectly (not resized).
Here are some explanatory images:
Resize smaller
Not resized
Resize bigger: 1 pixel bigger is enough to make the image disappear.
The code that generates the render view is this:
private void CreateRenderToTexture(Panel view)
{
    Texture2DDescription t2d = new Texture2DDescription()
    {
        Height = view.Height,
        Width = view.Width,
        Format = Format.R32G32B32A32_Float,
        BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget, //| BindFlags.UnorderedAccess,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.None,
        SampleDescription = new SampleDescription(_multisample, 0),
        MipLevels = 1,
        Usage = ResourceUsage.Default,
        ArraySize = 1,
    };
    _svgTexture = new Texture2D(_device, t2d);
    _svgRenderView = new RenderTargetView(_device, _svgTexture);
}
private void RenderSVGToTexture()
{
    _camera.SetDefaultProjection();
    UpdatePerFrameBuffers();

    _dc.OutputMerger.SetTargets(_depthStencil, _svgRenderView); // depth stencil has same dimensions as all other buffers
    _dc.ClearRenderTargetView(_svgRenderView, new Color4(1.0f, 1.0f, 1.0f));
    _dc.ClearDepthStencilView(_depthStencil, DepthStencilClearFlags.Depth | DepthStencilClearFlags.Stencil, 1.0f, 0);

    Entity e;
    if (RenderingManager.Scene.Entity2DExists("svgimage"))
    {
        RenderingManager.Scene.GetEntity2D("svgimage", out e);
        e.Draw(_dc);
    }

    _swapChain.Present(0, PresentFlags.None);
}
When rendering the 3D scene, I call this function before rendering the model:
private void SetTexture()
{
    Entity e;
    if (!RenderingManager.Scene.GetEntity3D("model3d", out e))
        return;

    e.ShaderType = ResourceManager.ShaderType.MAIN_MODEL;
    if (ResourceManager.SVGTexture == null)
    {
        e.ShaderType = ResourceManager.ShaderType.PNUVNOTEX;
        return;
    }

    SamplerDescription a = new SamplerDescription();
    a.AddressU = TextureAddressMode.Wrap;
    a.AddressV = TextureAddressMode.Wrap;
    a.AddressW = TextureAddressMode.Wrap;
    a.Filter = Filter.MinPointMagMipLinear;
    SamplerState b = SamplerState.FromDescription(ResourceManager.Device, a);

    ShaderResourceView svgTexResourceView = new ShaderResourceView(ResourceManager.Device, Texture2D.FromPointer(ResourceManager.SVGTexture.ComPointer));
    ResourceManager.Device.ImmediateContext.PixelShader.SetShaderResource(svgTexResourceView, 0);
    ResourceManager.Device.ImmediateContext.PixelShader.SetSampler(b, 0);

    b.Dispose();
    svgTexResourceView.Dispose();
}
Pixel shader:
Texture2D svg : register(t0);
Texture2D errorEstimate : register(t1);
SamplerState ss : register(s0);

float4 main(float4 position : SV_POSITION, float4 color : COLOR, float2 uv : UV) : SV_Target
{
    return color * svg.Sample(ss, uv); // *errorEstimate.Sample(ss, uv);
}
I don't understand what I am doing wrong; I hope you can help me see the mistake I am making. Thank you, and sorry for the bad English!
As it (almost) always turns out, I was making a very stupid mistake.
I wasn't calling the correct resize function.
Basically, the Renderer2D class has a DoResize function that resizes only the 2D buffers, while the abstract Renderer base class resizes the rest of the buffers. The mistake is that in the Renderer2D override I was calling the wrong base resize function!
Derived class (Renderer2D):
protected override void DoResize(uint width, uint height)
{
    if (width == 0 || height == 0)
        return;

    base.DoResize(width, height); // here I was calling base.Resize (which was deprecated after a change in the application architecture)

    _camera.Width = width;
    _camera.Height = height;

    _svgTexture.Dispose();
    _svgRenderView.Dispose();
    CreateRenderToTexture(_viewReference);

    ResizePending = false;
}
Base class (abstract Renderer):
protected virtual void DoResize(uint width, uint height)
{
    Width = width;
    Height = height;
    _viewport = new Viewport(0, 0, Width, Height);

    _renderTarget.Dispose();
    if (_swapChain.ResizeBuffers(2, (int)width, (int)height, Format.Unknown, SwapChainFlags.AllowModeSwitch).IsFailure)
        Console.WriteLine("An error occurred while resizing buffers.");

    using (var resource = Resource.FromSwapChain<Texture2D>(_swapChain, 0))
        _renderTarget = new RenderTargetView(_device, resource);

    _depthStencil.Dispose();
    CreateDepthBuffer();
}
Maybe the code I posted can be of help for someone who is trying to do some render-to-texture work, since I see there are always people who can't make it work :)
I'm having trouble getting my head around the colour/material system of C# WPF projects. Currently I am updating the colour of the entire system of points on each update of the model, when I would instead like to update the colour of just a single point (as it is added).
AggregateSystem Class
public class AggregateSystem {
    // stack to store each particle in aggregate
    private readonly Stack<AggregateParticle> particle_stack;
    private readonly GeometryModel3D particle_model;
    // positions, indices and texture co-ordinates for particles
    private readonly Point3DCollection particle_positions;
    private readonly Int32Collection triangle_indices;
    private readonly PointCollection tex_coords;
    // brush to apply to particle_model.Material
    private RadialGradientBrush rad_brush;
    // ellipse for rendering
    private Ellipse ellipse;
    private RenderTargetBitmap render_bitmap;

    public AggregateSystem() {
        particle_stack = new Stack<AggregateParticle>();
        particle_model = new GeometryModel3D { Geometry = new MeshGeometry3D() };
        ellipse = new Ellipse {
            Width = 32.0,
            Height = 32.0
        };
        rad_brush = new RadialGradientBrush();
        // fill ellipse interior using rad_brush
        ellipse.Fill = rad_brush;
        ellipse.Measure(new Size(32, 32));
        ellipse.Arrange(new Rect(0, 0, 32, 32));
        render_bitmap = new RenderTargetBitmap(32, 32, 96, 96, PixelFormats.Pbgra32);
        ImageBrush img_brush = new ImageBrush(render_bitmap);
        DiffuseMaterial diff_mat = new DiffuseMaterial(img_brush);
        particle_model.Material = diff_mat;
        particle_positions = new Point3DCollection();
        triangle_indices = new Int32Collection();
        tex_coords = new PointCollection();
    }

    public Model3D AggregateModel => particle_model;

    public void Update() {
        // get the most recently added particle
        AggregateParticle p = particle_stack.Peek();
        // compute position index for triangle index generation
        int position_index = particle_stack.Count * 4;
        // create points associated with particle for circle generation
        Point3D p1 = new Point3D(p.position.X, p.position.Y, p.position.Z);
        Point3D p2 = new Point3D(p.position.X, p.position.Y + p.size, p.position.Z);
        Point3D p3 = new Point3D(p.position.X + p.size, p.position.Y + p.size, p.position.Z);
        Point3D p4 = new Point3D(p.position.X + p.size, p.position.Y, p.position.Z);
        // add points to particle positions collection
        particle_positions.Add(p1);
        particle_positions.Add(p2);
        particle_positions.Add(p3);
        particle_positions.Add(p4);
        // create points for texture co-ords
        Point t1 = new Point(0.0, 0.0);
        Point t2 = new Point(0.0, 1.0);
        Point t3 = new Point(1.0, 1.0);
        Point t4 = new Point(1.0, 0.0);
        // add texture co-ords points to texcoords collection
        tex_coords.Add(t1);
        tex_coords.Add(t2);
        tex_coords.Add(t3);
        tex_coords.Add(t4);
        // add position indices to indices collection
        triangle_indices.Add(position_index);
        triangle_indices.Add(position_index + 2);
        triangle_indices.Add(position_index + 1);
        triangle_indices.Add(position_index);
        triangle_indices.Add(position_index + 3);
        triangle_indices.Add(position_index + 2);
        // update colour of points - **NOTE: UPDATES ENTIRE POINT SYSTEM**
        // -> want to just apply colour to single particles added
        rad_brush.GradientStops.Add(new GradientStop(p.colour, 0.0));
        render_bitmap.Render(ellipse);
        // set particle_model Geometry model properties
        ((MeshGeometry3D)particle_model.Geometry).Positions = particle_positions;
        ((MeshGeometry3D)particle_model.Geometry).TriangleIndices = triangle_indices;
        ((MeshGeometry3D)particle_model.Geometry).TextureCoordinates = tex_coords;
    }

    public void SpawnParticle(Point3D _pos, Color _col, double _size) {
        AggregateParticle agg_particle = new AggregateParticle {
            position = _pos, colour = _col, size = _size
        };
        // push most-recently-added particle to stack
        particle_stack.Push(agg_particle);
    }
}
where AggregateParticle is a POD class consisting of Point3D position, Color colour and double size fields, which are self-explanatory.
Is there any simple and efficient method to update the colour of the single particle as it is added in the Update method rather than the entire system of particles? Or will I need to create a List (or similar data structure) of DiffuseMaterial instances for each and every particle in the system and apply brushes for the necessary colour to each?
[The latter is something I want to avoid at all costs, partly because it would require large structural changes to my code, and I am certain that there is a better way to approach this than that - i.e. there MUST be some simple way to apply colour to a set of texture co-ordinates, surely?!]
Further Details
AggregateModel is a single Model3D instance corresponding to the field particle_model which is added to a Model3DGroup of the MainWindow.
I should note that what I am trying to achieve, specifically, here is a "gradient" of colours for each particle in an aggregate structure where a particle has a Color in a "temperature-gradient" (computed elsewhere in the program) which is dependent upon order in which it was generated - i.e. particles have a colder colour if generated earlier and a warmer colour if generated later. This colour list is pre-computed and passed to each particle in the Update method as can be seen above.
One solution I attempted involved creating a separate AggregateComponent instance for each particle, where each of these objects has an associated Model3D and thus a corresponding brush. An AggregateComponentManager class then contained the List of each AggregateComponent. This solution works; however, it is horrendously slow, as each component has to be updated every time a particle is added, so memory usage explodes. Is there a way to adapt this so that I can cache already-rendered AggregateComponents without having to call their Update method each time a particle is added?
Full source code (C# code in the DLAProject directory) can be found on GitHub: https://github.com/SJR276/DLAProject
We create WPF 3D models for smallish point clouds (+/- 100 k points) where each point is added as an octahedron (8 triangles) to a MeshGeometry3D.
To allow different colors for different points (we use this for selecting one or a subset of points) in such a point cloud, we assign texture coordinates from a small Bitmap.
At a high level we have some code like this:
BitmapSource bm = GetColorsBitmap(new List<Color> { BaseColor, SelectedColor });
ImageBrush ib = new ImageBrush(bm)
{
    ViewportUnits = BrushMappingMode.Absolute,
    Viewport = new Rect(0, 0, 1, 1) // Matches the pixels in the bitmap.
};
GeometryModel3D model = new GeometryModel3D { Material = new DiffuseMaterial(ib) };
and now the texture coordinates are just
new Point(0, 0);
new Point(1, 0);
... etc.
The colors Bitmap comes from:
// Creates a bitmap that has a single row containing single pixels with the given colors.
// At most 256 colors.
public static BitmapSource GetColorsBitmap(IList<Color> colors)
{
    if (colors == null) throw new ArgumentNullException("colors");
    if (colors.Count > 256) throw new ArgumentOutOfRangeException("colors", "More than 256 colors");

    int size = colors.Count;
    for (int j = colors.Count; j < 256; j++)
    {
        colors.Add(Colors.White);
    }
    var palette = new BitmapPalette(colors);

    byte[] pixels = new byte[size];
    for (int i = 0; i < size; i++)
    {
        pixels[i] = (byte)i;
    }

    var bm = BitmapSource.Create(size, 1, 96, 96, PixelFormats.Indexed8, palette, pixels, 1 * size);
    bm.Freeze();
    return bm;
}
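Applied to the Update method from the question, each particle's quad only ever needs to sample a single pixel of that one-row bitmap. A sketch of that idea (colourIndex and colourCount are assumptions here: the particle's index into the precomputed temperature gradient and the number of colours passed to GetColorsBitmap; the + 0.5 targets the pixel centre so filtering doesn't bleed into a neighbouring colour):

// give all four corners of the particle's quad the same texture coordinate,
// so the whole quad samples exactly one colour from the one-row bitmap
double u = (colourIndex + 0.5) / colourCount;
Point colour_coord = new Point(u, 0.5);

tex_coords.Add(colour_coord);
tex_coords.Add(colour_coord);
tex_coords.Add(colour_coord);
tex_coords.Add(colour_coord);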
We also go to some effort to cache and reuse the internal Geometry structure when updating the point cloud.
Finally we display this with the awesome Helix Toolkit.
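The caching part isn't shown in the answer, but one common WPF pattern for it (an assumption on my side, not necessarily what their code does) is to keep the collections around and detach them from the mesh while appending, so WPF doesn't react to every single Add:

// detach the cached collections so WPF does not re-render on each individual Add
var mesh = (MeshGeometry3D)particle_model.Geometry;
mesh.Positions = null;
mesh.TriangleIndices = null;
mesh.TextureCoordinates = null;

// ... append the new particle's 4 positions, 6 indices and 4 texture
// coordinates to particle_positions / triangle_indices / tex_coords ...

// reattach once per update
mesh.Positions = particle_positions;
mesh.TriangleIndices = triangle_indices;
mesh.TextureCoordinates = tex_coords;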
I'm trying to add shadows to my MonoGame-based 2D game. At first I just rendered semitransparent black textures to the frame, but they sometimes overlap and it looks nasty:
I tried to render all shadows into the stencil buffer first, and then use a single semitransparent texture to draw all shadows at once using the stencil buffer. However, it doesn't work as expected:
The two problems are:
The shadows are rendered into the scene
The stencil buffer is seemingly unaffected: the semitransparent black texture covers the entire screen
Here's the initialization code:
StencilCreator = new DepthStencilState
{
    StencilEnable = true,
    StencilFunction = CompareFunction.Always,
    StencilPass = StencilOperation.Replace,
    ReferenceStencil = 1
};

StencilRenderer = new DepthStencilState
{
    StencilEnable = true,
    StencilFunction = CompareFunction.Equal,
    ReferenceStencil = 1,
    StencilPass = StencilOperation.Keep
};

var projection = Matrix.CreateOrthographicOffCenter(0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height, 0, 0, 1);
var halfPixel = Matrix.CreateTranslation(-0.5f, -0.5f, 0);

AlphaEffect = new AlphaTestEffect(GraphicsDevice)
{
    DiffuseColor = Color.White.ToVector3(),
    AlphaFunction = CompareFunction.Greater,
    ReferenceAlpha = 0,
    World = Matrix.Identity,
    View = Matrix.Identity,
    Projection = halfPixel * projection
};

MaskingTexture = new Texture2D(GameEngine.GraphicsDevice, 1, 1);
MaskingTexture.SetData(new[] { new Color(0f, 0f, 0f, 0.3f) });

ShadowTexture = ResourceCache.Get<Texture2D>("Skins/Common/wall-shadow");
And the following code is the body of my Draw method:
// create stencil
GraphicsDevice.Clear(ClearOptions.Stencil, Color.Black, 0f, 0);

var batch = new SpriteBatch(GraphicsDevice);
batch.Begin(SpriteSortMode.Immediate, null, null, StencilCreator, null, AlphaEffect);
foreach (Vector2 loc in ShadowLocations)
{
    batch.Draw(
        ShadowTexture,
        loc,
        null,
        Color.White,
        0f,
        Vector2.Zero,
        2f,
        SpriteEffects.None,
        0f
    );
}
batch.End();

// render shadow texture through stencil
batch.Begin(SpriteSortMode.Immediate, null, null, StencilRenderer, null);
batch.Draw(MaskingTexture, GraphicsDevice.Viewport.Bounds, Color.White);
batch.End();
What could possibly be the problem? The same code worked fine in my XNA project.
I worked around the issue by using a RenderTarget2D instead of the stencil buffer. Drawing solid black shadows to a RenderTarget2D and then drawing that render target into the scene with a semitransparent color does the trick, and it is much simpler to implement.
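For reference, a rough sketch of that workaround (shadowTarget and the SpriteBatch here are placeholders you would create once and reuse; ShadowTexture, ShadowLocations and the 0.3 opacity are the values from the question):

// once, at load time: a render target the size of the back buffer
shadowTarget = new RenderTarget2D(GraphicsDevice, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height);

// each frame: draw all shadows fully opaque into the render target;
// overlapping shadows no longer darken each other, because a covered pixel
// is simply covered no matter how many shadows touch it
GraphicsDevice.SetRenderTarget(shadowTarget);
GraphicsDevice.Clear(Color.Transparent);
batch.Begin();
foreach (Vector2 loc in ShadowLocations)
{
    batch.Draw(ShadowTexture, loc, null, Color.White, 0f, Vector2.Zero, 2f, SpriteEffects.None, 0f);
}
batch.End();
GraphicsDevice.SetRenderTarget(null);

// ... draw the rest of the scene, then composite all shadows in one pass,
// tinted down to the desired semitransparent strength
batch.Begin();
batch.Draw(shadowTarget, Vector2.Zero, Color.White * 0.3f);
batch.End();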