Texture appears grey when rendered - C#

I'm currently working my way through "Beginning C# Programming", and have hit a problem in chapter 7 when drawing textures.
I have used the same code as on the demo CD, although I had to change the texture path to an absolute one; when rendered, the texture appears grey.
I have debugged the program to write the loaded texture out to a file, and it is fine - no problems there. So something after that point is going wrong.
Here are some snippets of code:
public void InitializeGraphics()
{
    // set up the parameters
    Direct3D.PresentParameters p = new Direct3D.PresentParameters();
    p.SwapEffect = Direct3D.SwapEffect.Discard;
    ...
    graphics = new Direct3D.Device( 0, Direct3D.DeviceType.Hardware, this,
        Direct3D.CreateFlags.SoftwareVertexProcessing, p );
    ...
    // set up various drawing options
    graphics.RenderState.CullMode = Direct3D.Cull.None;
    graphics.RenderState.AlphaBlendEnable = true;
    graphics.RenderState.AlphaBlendOperation = Direct3D.BlendOperation.Add;
    graphics.RenderState.DestinationBlend = Direct3D.Blend.InvSourceAlpha;
    graphics.RenderState.SourceBlend = Direct3D.Blend.SourceAlpha;
    ...
}
public void InitializeGeometry()
{
    ...
    texture = Direct3D.TextureLoader.FromFile(
        graphics, "E:\\Programming\\SharpDevelop_Projects\\AdvancedFrameworkv2\\texture.jpg",
        0, 0, 0, 0, Direct3D.Format.Unknown,
        Direct3D.Pool.Managed, Direct3D.Filter.Linear,
        Direct3D.Filter.Linear, 0 );
    ...
}
protected virtual void Render()
{
    graphics.Clear( Direct3D.ClearFlags.Target, Color.White, 1.0f, 0 );
    graphics.BeginScene();
    // set the texture
    graphics.SetTexture( 0, texture );
    // set the vertex format
    graphics.VertexFormat = Direct3D.CustomVertex.TransformedTextured.Format;
    // draw the triangles
    graphics.DrawUserPrimitives( Direct3D.PrimitiveType.TriangleStrip, 2, vertexes );
    graphics.EndScene();
    graphics.Present();
    ...
}
I can't figure out what is going wrong here. The texture displays fine if I open it in Windows, so the file itself is OK - either the code examples in the book have a problem, or, more likely, something is wrong with my environment.

You're using a really old technology there... I'm guessing you're trying to make a game (as we all did when we started out!); try using XNA instead. My best guess is that it's your graphics driver. I know that sounds like a cop-out, but seriously, I've seen this before, and once I swapped out my old graphics card for a new one it worked! I'm not saying your card is broken, or that it's impossible to get this working, but my two best suggestions would be to:
1) Start using XNA and use the tutorials on http://www.xnadevelopment.com/tutorials.shtml
2) Replace your graphics card (if you want to carry on with what you are doing now).
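If you want to test the driver theory before spending money on hardware, Managed DirectX also lets you create the device with the reference rasterizer - extremely slow, but it bypasses the driver entirely. A sketch, assuming the same fields as the book's InitializeGraphics:

```csharp
// Hypothetical variation on InitializeGraphics: swap the hardware device
// for the reference rasterizer. If the texture renders correctly here,
// the driver (or card) is the likely culprit; if it is still grey,
// the problem is in the code.
Direct3D.PresentParameters p = new Direct3D.PresentParameters();
p.SwapEffect = Direct3D.SwapEffect.Discard;
p.Windowed = true;

graphics = new Direct3D.Device( 0, Direct3D.DeviceType.Reference, this,
    Direct3D.CreateFlags.SoftwareVertexProcessing, p );
```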

Related

Facial detection coordinates using a camera

I need a way to grab the coordinates of the face in C# for Windows Phone 8.1 in the camera view. I haven't been able to find anything on the web so I'm thinking it might not be possible. What I need is the x and y (and possibly area) of the "box" that forms around the face when it is detected in the camera view. Has anyone done this before?
Code snippet (bear in mind this is part of an app from the tutorial linked below the code; it's not copy-pasteable, but it should provide some help):
const string MODEL_FILE = "haarcascade_frontalface_alt.xml";
FaceDetectionWinPhone.Detector m_detector;

public MainPage()
{
    InitializeComponent();
    m_detector = new FaceDetectionWinPhone.Detector(System.Xml.Linq.XDocument.Load(MODEL_FILE));
}

void photoChooserTask_Completed(object sender, PhotoResult e)
{
    if (e.TaskResult == TaskResult.OK)
    {
        BitmapImage bmp = new BitmapImage();
        bmp.SetSource(e.ChosenPhoto);
        WriteableBitmap btmMap = new WriteableBitmap(bmp);
        //find faces from the image
        List<FaceDetectionWinPhone.Rectangle> faces =
            m_detector.getFaces(
                btmMap, 10f, 1f, 0.05f, 1, false, false);
        //go through each face, and draw a red rectangle on top of it.
        foreach (var r in faces)
        {
            int x = Convert.ToInt32(r.X);
            int y = Convert.ToInt32(r.Y);
            int width = Convert.ToInt32(r.Width);
            int height = Convert.ToInt32(r.Height);
            //note: the second corner is (x + width, y + height);
            //the original snippet had width and height swapped
            btmMap.FillRectangle(x, y, x + width, y + height, System.Windows.Media.Colors.Red);
        }
        //update the bitmap before drawing it.
        btmMap.Invalidate();
        facesPic.Source = btmMap;
    }
}
This is taken from developer.nokia.com
To do this in real time, you need to intercept the viewfinder image, perhaps using the NewCameraFrame method. (EDIT: I'm not sure whether you should use this method or PhotoCamera.GetPreviewBufferArgb32 as described below - I have to leave that to your research.)
So basically your task has 2 parts:
Get the viewfinder image
Detect faces on it (using something like the code above)
If I were you, I'd first do step 2 on an image loaded from disk, and once you can detect faces on that, I'd see how to obtain the current viewfinder image and detect faces on it. X,Y coordinates are easy enough to obtain once you've detected the face - see the code above.
(EDIT): I think you should try the PhotoCamera.GetPreviewBufferArgb32 method to obtain the viewfinder image. Look at the MSDN documentation, and be sure to search through the MSDN docs and tutorials as well. This should be more than enough to complete step 1.
A lot of face detection algorithms use Haar classifiers, Viola-Jones algorithm etc. If you're familiar with that, you'll feel more confident in what you're doing, but you can do without. Also, read the materials that I linked - they seem fairly good.
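A sketch of step 1, assuming the Windows Phone `PhotoCamera` API and the `m_detector` field from the snippet above (converting the ARGB preview buffer into a `WriteableBitmap` this way is something you should verify yourself):

```csharp
// Hypothetical viewfinder capture: copy the current preview frame into
// a WriteableBitmap, then run the same detector on it as above.
PhotoCamera camera = new PhotoCamera();
int[] previewBuffer;

void ProcessCurrentFrame()
{
    int width = (int)camera.PreviewResolution.Width;
    int height = (int)camera.PreviewResolution.Height;
    if (previewBuffer == null)
        previewBuffer = new int[width * height];

    camera.GetPreviewBufferArgb32(previewBuffer);

    WriteableBitmap frame = new WriteableBitmap(width, height);
    previewBuffer.CopyTo(frame.Pixels, 0);
    frame.Invalidate();

    // step 2: the same call as in the photo-chooser example above
    var faces = m_detector.getFaces(frame, 10f, 1f, 0.05f, 1, false, false);
}
```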

WPF c# Drawing Thick curve with Lines or Alternative

I'm currently plotting XY data on a canvas and drawing a curve with it. So far it is simple and working for a thin line but when I increase the thickness a peculiar effect happens due to how the lines are drawn to form a curve.
I've attached an example image that shows a nice smooth line that works fine when the line is thin. But when the line is thicker you can obviously see the problem.
Is there a way to connect these endpoints to make a nice smooth line?
If not, is there another drawing tool that is useful in creating a nice line?
I'm not happy about the implementation as it is, because the canvas quickly becomes cluttered with hundreds, if not thousands, of Line objects. This seems like an awful way of doing it, but I haven't found a better one as of yet. I'd much rather go with another route that creates a single curve object.
Any help is appreciated as always.
Thanks!
Point previousPoint;

public void DrawLineToBox(DrawLineAction theDrawAction, Point drawPoint)
{
    Line myLine = new Line();
    myLine.Stroke = new SolidColorBrush(Color.FromArgb(255, 0, 0, 0));
    myLine.StrokeThickness = 29;
    if (theDrawAction == DrawLineAction.KeepDrawing)
    {
        myLine.X1 = previousPoint.X; //draw from this point
        myLine.Y1 = previousPoint.Y;
    }
    else if (theDrawAction == DrawLineAction.StartDrawing)
    {
        myLine.X1 = drawPoint.X; //draw from same point
        myLine.Y1 = drawPoint.Y;
    }
    myLine.X2 = drawPoint.X; //draw to this point
    myLine.Y2 = drawPoint.Y;
    canvasToDrawOn.Children.Add(myLine); //add to canvas
    previousPoint.X = drawPoint.X; //set current point as last point
    previousPoint.Y = drawPoint.Y;
}
Try adding the following two lines:
myLine.StrokeStartLineCap = PenLineCap.Round;
myLine.StrokeEndLineCap = PenLineCap.Round;
Also, you really should use a Polyline or Path object to do what you are currently doing. Personally, I always set StrokeStartLineCap and StrokeEndLineCap to PenLineCap.Round, and StrokeLineJoin to PenLineJoin.Round, for the Polyline objects I use.
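A minimal sketch of the Polyline version (assuming the same canvasToDrawOn and drawPoint stream as the code above); one Polyline replaces the hundreds of Line objects:

```csharp
// One Polyline holds the entire curve; each new sample appends a point.
Polyline curve = new Polyline
{
    Stroke = new SolidColorBrush(Color.FromArgb(255, 0, 0, 0)),
    StrokeThickness = 29,
    StrokeStartLineCap = PenLineCap.Round,
    StrokeEndLineCap = PenLineCap.Round,
    StrokeLineJoin = PenLineJoin.Round // rounds the joints between segments
};

public void StartCurve()
{
    canvasToDrawOn.Children.Add(curve); // added once, not once per segment
}

public void DrawPointToBox(Point drawPoint)
{
    curve.Points.Add(drawPoint); // no previousPoint bookkeeping needed
}
```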

XNA Hardware Instancing: Mesh not rendered completely

I have implemented basic Hardware model instancing method in XNA code by following this short tutorial:
http://www.float4x4.net/index.php/2011/07/hardware-instancing-for-pc-in-xna-4-with-textures/
I have created the needed shader (without texture atlas though, single texture only) and I am trying to use this method to draw a simple tree I generated using 3DS Max 2013 and exported via FBX format.
The results I'm seeing have left me without a clue as to what is going on.
Back when I was using no instancing methods, but simply calling Draw on a mesh (for every tree on a level), the whole tree was shown:
I have made absolutely sure that the Model contains only one Mesh and that Mesh contains only one MeshPart.
I am using the vertex extraction method, via the Model's vertex and index buffer GetData<>() methods, with the correct number of vertices and indices, so the correct number of primitives is rendered. Correct texture coordinates and normals for lighting are also extracted, as is visible in the part of the tree that is rendered.
The parts of the tree are also in their correct places.
They are simply missing some 1,000 or so polygons for absolutely no reason whatsoever. I have breakpointed every step of vertex extraction and shader parameter generation, and I cannot for the life of me figure out what I am doing wrong.
My Shader's Vertex Transformation function:
VertexShaderOutput VertexShaderFunction2(VertexShaderInput IN, float4x4 instanceTransform : TEXCOORD1)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(IN.Position, transpose(instanceTransform));
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.texCoord = IN.texCoord;
    output.Normal = IN.Normal;
    return output;
}
Vertex bindings and index buffer generation:
instanceBuffer = new VertexBuffer(Game1.graphics.GraphicsDevice, Core.VertexData.InstanceVertex.vertexDeclaration, counter, BufferUsage.WriteOnly);
instanceVertices = new Core.VertexData.InstanceVertex[counter];
for (int i = 0; i < counter; i++)
{
    instanceVertices[i] = new Core.VertexData.InstanceVertex(locations[i]);
}
instanceBuffer.SetData(instanceVertices);
bufferBinding[0] = new VertexBufferBinding(vBuffer, 0, 0);
bufferBinding[1] = new VertexBufferBinding(instanceBuffer, 0, 1);
The vertex extraction method used to get all vertex info (I'm sure this part works correctly, as I have used it before to load test geometric shapes - boxes, spheres, etc. - into levels for testing various shaders, and to construct bounding boxes around them from the extracted data):
public void getVertexData(ModelMeshPart part)
{
    modelVertices = new VertexPositionNormalTexture[part.NumVertices];
    rawData = new Vector3[modelVertices.Length];
    modelIndices32 = new uint[rawData.Length];
    modelIndices16 = new ushort[rawData.Length];
    int stride = part.VertexBuffer.VertexDeclaration.VertexStride;
    VertexPositionNormalTexture[] vertexData = new VertexPositionNormalTexture[part.NumVertices];
    part.VertexBuffer.GetData(part.VertexOffset * stride, vertexData, 0, part.NumVertices, stride);
    if (part.IndexBuffer.IndexElementSize == IndexElementSize.ThirtyTwoBits)
        part.IndexBuffer.GetData<uint>(modelIndices32);
    if (part.IndexBuffer.IndexElementSize == IndexElementSize.SixteenBits)
        part.IndexBuffer.GetData<ushort>(modelIndices16);
    for (int i = 0; i < modelVertices.Length; i++)
    {
        rawData[i] = vertexData[i].Position;
        modelVertices[i].Position = rawData[i];
        modelVertices[i].TextureCoordinate = vertexData[i].TextureCoordinate;
        modelVertices[i].Normal = vertexData[i].Normal;
        counter++;
    }
}
This is the rendering code for the object batch (trees in this particular case):
public void RenderHW()
{
    Game1.graphics.GraphicsDevice.RasterizerState = rState;
    treeBatchShader.CurrentTechnique.Passes[0].Apply();
    Game1.graphics.GraphicsDevice.SetVertexBuffers(bufferBinding);
    Game1.graphics.GraphicsDevice.Indices = iBuffer;
    Game1.graphics.GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList, 0, 0, treeMesh.Length, 0, primitive, counter);
    Game1.graphics.GraphicsDevice.RasterizerState = rState2;
}
If anybody has any idea where to even start looking for errors, just post all ideas that come to mind, as I'm completely stumped as to what's going on.
This even runs counter to all my previous experience: whenever I messed something up in shader code or vertex generation, I'd get an absolute mess on screen - elongated triangles originating where the mesh should be but with one tip stretching back to (0,0,0), black textures, incorrect positioning (often outside the skybox or below the terrain), incorrect scaling...
This is something different; it almost works. The part of the tree that is visible is correct in every single aspect (location, rotation, scale, texture, shading), except that a part is missing. What makes it weirder is that the missing part is seemingly logically segmented: only the tree trunk's primitives and some leaves off the lowest branches are missing, leaving all other primitives correctly rendered with no artifacts. Basically, they're... correctly missing.
Solved. Of course, it was the one part I was 100% sure was correct that was not:
modelIndices32 = new uint[rawData.Length];
modelIndices16 = new ushort[rawData.Length];
Change that into:
modelIndices32 = new uint[part.IndexBuffer.IndexCount];
modelIndices16 = new ushort[part.IndexBuffer.IndexCount];
Now I just have to figure out why 3 draw calls rendering 300 trees are slower than 300 draw calls rendering 1 tree each (i.e. why I wasted an entire afternoon creating a new problem).
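For anyone hitting the same thing: in an indexed triangle list the index count is usually larger than the vertex count, because vertices are shared between triangles, so sizing the index arrays from the vertex array silently truncates the tail of the index buffer - which shows up as exactly this kind of "correctly missing" chunk of mesh. A sketch of the corrected sizing, using the same part as getVertexData above:

```csharp
// The index count must come from the index buffer itself, never from the
// vertex count. A cube, for instance, can have 24 vertices but 36 indices.
int indexCount = part.IndexBuffer.IndexCount;
modelIndices32 = new uint[indexCount];
modelIndices16 = new ushort[indexCount];

if (part.IndexBuffer.IndexElementSize == IndexElementSize.ThirtyTwoBits)
    part.IndexBuffer.GetData<uint>(modelIndices32);
else
    part.IndexBuffer.GetData<ushort>(modelIndices16);
```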

Issues with depth rendering in a XNA voxel engine

For the last few days we have been working on our voxel engine. We run into depth rendering problems when we draw our cubes. See the following YouTube video: http://youtu.be/lNDAqO7yHBQ
We have already searched around this problem and found different approaches, but none of them solved it.
GraphicsDevice.Clear(ClearOptions.DepthBuffer | ClearOptions.Target, Color.CornflowerBlue, 1.0f, 0);
GraphicsDevice.BlendState = BlendState.Opaque;
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;
Our LoadContent() Method:
protected override void LoadContent()
{
    // Create a new SpriteBatch, which can be used to draw textures.
    _spriteBatch = new SpriteBatch(GraphicsDevice);
    // TODO: use this.Content to load your game content here
    _effect = new BasicEffect(GraphicsDevice);
    _vertexBuffer = new VertexBuffer(GraphicsDevice, Node.VertexPositionColorNormal.VertexDeclaration, _chunkManager.Vertices.Length, BufferUsage.WriteOnly);
    _vertexBuffer.SetData(_chunkManager.Vertices); // copies the data from our local vertices array into the memory on our graphics card
    _indexBuffer = new IndexBuffer(GraphicsDevice, typeof(int), _chunkManager.Indices.Length, BufferUsage.WriteOnly);
    _indexBuffer.SetData(_chunkManager.Indices);
}
Our Draw() Method:
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);
    GraphicsDevice.BlendState = BlendState.Opaque;
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    GraphicsDevice.RasterizerState = RasterizerState.CullClockwise;
    // Set object and camera info
    //_effect.World = Matrix.Identity;
    _effect.View = _camera.View;
    _effect.Projection = _camera.Projection;
    _effect.VertexColorEnabled = true;
    _effect.EnableDefaultLighting();
    // Begin effect and draw for each pass
    foreach (var pass in _effect.CurrentTechnique.Passes)
    {
        pass.Apply();
        GraphicsDevice.SetVertexBuffer(_vertexBuffer);
        GraphicsDevice.Indices = _indexBuffer;
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _chunkManager.Vertices.Count(), 0, _chunkManager.Indices.Count() / 3);
    }
    base.Draw(gameTime);
}
Our View and Projection setup:
Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, (float)Game.Window.ClientBounds.Width / Game.Window.ClientBounds.Height, 1, 500);
View = Matrix.CreateLookAt(CameraPosition, CameraPosition + _cameraDirection, _cameraUp);
We use the Camera (http://www.filedropper.com/camera_1) from Aaron Reed's book (http://shop.oreilly.com/product/0636920013709.do).
Did you see something we missed? Or do you have an idea to solve this problem?
Today we worked on this topic. The coordinates of the voxels in our original code were around (X:600'000, Y:750, Z:196'000). After we relocated all voxels closer to the origin (X:0, Y:0, Z:0), the described problem disappeared. We assume this has something to do with the float datatype used by XNA. According to MSDN (http://msdn.microsoft.com/en-us/library/b1e65aza.aspx), a float only has a precision of about 7 digits. We concluded that if you place your voxels at coordinates that need all 7 of those digits, and since the XNA depth buffer works with floats, you get the effect we described above.
Could maybe someone confirm our assumption?
Thank you!
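The assumption is easy to confirm in isolation: near the origin a float can still represent tiny offsets, but around 600,000 the spacing between adjacent float values is already 0.0625, so small positional (and view-space depth) differences round away entirely. A quick sketch:

```csharp
// float has a 24-bit significand (~7 decimal digits of precision). At a
// magnitude of 600,000 the next representable value is 0.0625 away, so
// an offset of 0.03 simply disappears.
float nearOrigin = 100f;
float farAway = 600000f;

Console.WriteLine(nearOrigin + 0.03f == nearOrigin); // False
Console.WriteLine(farAway + 0.03f == farAway);       // True
```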
Yes it's a well-known problem in gaming, particularly large world simulations such as flight and space sims. Essentially as the camera moves too far away from the origin, floating point inaccuracies arise which play havoc, particularly during rotations.
The solution is a method known as floating origin: instead of moving the eye, you essentially move the universe around it. The article I have in mind is for Unity3D, but since that is also .NET you can convert the approach to XNA. You can read more about it in this excellent article.
Also, there is only so much you can squeeze into a finite number of bits in a z-buffer whilst viewing the very close and the very far at the same time; for that you need a logarithmic z-buffer instead of a linear one. I blogged about it some time back when I was working on my space sim.

RenderTarget2D tints all alpha purple

So, I'm trying to render my gameplay to a RenderTarget2D in XNA so I can apply shaders to the scene. It's working, to some extent, but anything that was drawn with an alpha level other than 255 seems to be tinted purple. The alpha effect itself works, but there is a purple tint along with it. I've tried looking around for a solution, but the ones I can find are for the full screen being rendered purple, or for the alpha being replaced with purple.
My issue is not quite either of those...
This is a scene I threw together to show you what's going on. As you can see, the alpha effect is working, but the object is tinted purple.
Here's the part where I post my render code:
gameTarget = new RenderTarget2D(GraphicsDevice, (int)screenSize.X, (int)screenSize.Y, 1, SurfaceFormat.Color, RenderTargetUsage.PreserveContents);
gameDepthBuffer = new DepthStencilBuffer(GraphicsDevice, (int)screenSize.X, (int)screenSize.Y, GraphicsDevice.DepthStencilBuffer.Format);
This is the initialisation I'm using.
GraphicsDevice g = GraphicsDevice;
DepthStencilBuffer d = g.DepthStencilBuffer;
g.SetRenderTarget(0, gameTarget);
g.DepthStencilBuffer = gameDepthBuffer;
g.Clear(Color.Black);
GameBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.SaveState);
level.Draw(GameBatch);
GameBatch.End();
g.SetRenderTarget(0, null);
g.DepthStencilBuffer = d;
GameBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.SaveState);
if (renderEffect != null)
{
    renderEffect.Begin();
    renderEffect.CurrentTechnique.Passes[0].Begin();
}
Sprite.Draw(GameBatch, gameTarget.GetTexture(), new Rectangle(0, 0, (int)assumedSize.X, (int)assumedSize.Y), Color.White);
if (renderEffect != null)
{
    renderEffect.CurrentTechnique.Passes[0].End();
    renderEffect.End();
}
GameBatch.End();
renderEffect is the effect file; Sprite is a class that deals with drawing relative to an assumed screen size (to cope with varying resolutions).
I'm working in XNA 3.1. I know I should be using 4.0 by now, but I'm not, because I have a book on 3.1, which is helpful in certain circumstances.
Anyway, some help here would be greatly appreciated...
Generally, purple is the default color to which render targets are cleared.
With that in mind, I see that you are not clearing the back buffer after setting the render target to null. Your code should look like:
g.SetRenderTarget(0, null);
g.Clear(Color.Transparent);//or black
Fixed! I needed to set some alpha parameters:
GraphicsDevice.RenderState.SeparateAlphaBlendEnabled = true;
GraphicsDevice.RenderState.AlphaDestinationBlend = Blend.One;
GraphicsDevice.RenderState.AlphaSourceBlend = Blend.SourceAlpha;
GraphicsDevice.RenderState.SourceBlend = Blend.SourceAlpha;
GraphicsDevice.RenderState.DestinationBlend = Blend.InverseSourceAlpha;
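For context on why those states help (a sketch, assuming the same immediate-mode batch as above - with SpriteSortMode.Immediate, render states set after Begin apply to the following draws): without separate alpha blending, translucent sprites also blend down the alpha channel stored in the render target, leaving semi-transparent "holes" that show the purple underneath when the target is later drawn to the back buffer.

```csharp
GameBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.SaveState);

// Blend the color channels normally, but accumulate alpha (Blend.One) so
// the render target keeps full coverage wherever anything was drawn.
GraphicsDevice.RenderState.SeparateAlphaBlendEnabled = true;
GraphicsDevice.RenderState.AlphaSourceBlend = Blend.SourceAlpha;
GraphicsDevice.RenderState.AlphaDestinationBlend = Blend.One;

level.Draw(GameBatch);
GameBatch.End();
```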
