I've written some code using SlimDX and WPF where I would expect the end result to be a red screen.
Unfortunately all I get is a black screen.
This is on Windows 7.
Can anyone see anything major I'm missing?
The reason I'm using a separate surface as the backbuffer for the D3DImage is that I'm going to need multiple viewports. I thought that rendering to separate surfaces instead of the device's initial backbuffer would be the best way to achieve that.
Anyway, on with the code...
Disclaimer: Please ignore the bad code; this is throw-away code written just so I can figure out how to achieve what I'm after.
Here's my window class:
namespace SlimDXWithWpf
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        SlimDXRenderer controller;

        public MainWindow()
        {
            InitializeComponent();

            controller = new SlimDXRenderer();
            controller.Initialize();

            D3DImage image = new D3DImage();
            image.Lock();
            controller.RenderToSurface();
            image.SetBackBuffer(D3DResourceType.IDirect3DSurface9, controller.SurfacePointer);
            image.AddDirtyRect(new Int32Rect(0, 0, image.PixelWidth, image.PixelHeight));
            image.Unlock();

            Background = new ImageBrush(image);
        }
    }
}
And here's my "renderer" class:
namespace SlimDXWithWpf
{
    public class SlimDXRenderer : IDisposable
    {
        Direct3DEx directX;
        DeviceEx device;
        Surface surface;
        Surface backBuffer;
        IntPtr surfacePointer;

        public IntPtr SurfacePointer
        {
            get { return surfacePointer; }
        }

        public void Initialize()
        {
            directX = new Direct3DEx();

            HwndSource hwnd = new HwndSource(0, 0, 0, 0, 0, 640, 480, "SlimDXControl", IntPtr.Zero);

            PresentParameters pp = new PresentParameters()
            {
                BackBufferCount = 1,
                BackBufferFormat = Format.A8R8G8B8,
                BackBufferWidth = 640,
                BackBufferHeight = 480,
                DeviceWindowHandle = hwnd.Handle,
                PresentationInterval = PresentInterval.Immediate,
                Windowed = true,
                SwapEffect = SwapEffect.Discard
            };

            device = new DeviceEx(directX, 0, DeviceType.Hardware, hwnd.Handle, CreateFlags.HardwareVertexProcessing, pp);
            backBuffer = device.GetRenderTarget(0);

            surface = Surface.CreateRenderTarget(device, 1024, 768, Format.A8R8G8B8, MultisampleType.None, 1, false);
            surfacePointer = surface.ComPointer;
        }

        public void RenderToSurface()
        {
            device.SetRenderTarget(0, surface);
            device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, new Color4(Color.Red), 0f, 0);
            device.BeginScene();
            device.EndScene();
        }

        public void Dispose()
        {
            surface.Dispose();
            device.Dispose();
            directX.Dispose();
        }
    }
}
Edit: For a second I thought I'd solved it, but it seems it only works when my second render target (the one I'm trying to clear to red) is 640x480. Any thoughts?
Did you base some of this code on the SlimDX WPF sample? It looks like you might have, which is why your Clear() call is using 0.0f for the Z clear value... which is a bug in our sample. It should be 1.0f.
Beyond that, the only potential issue I see is that your surface render target is a different size than your back buffer, but that should not actually cause problems. Have you tried rendering to the device's backbuffer (Device.GetBackBuffer()) instead of a new surface to see what impact that has?
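For reference, handing the device's own back buffer to the D3DImage looks something like this (a rough sketch against your code; GetBackBuffer takes a swap chain index and a back buffer index):

// Use the swap chain's back buffer for the D3DImage instead of a separate surface.
Surface bb = device.GetBackBuffer(0, 0);
surfacePointer = bb.ComPointer;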
In your device.Clear call, change the depth argument from 0f to 1f. That's the z-depth, which ranges from 0 to 1. Clearing the depth buffer to 0 fills it with the nearest possible depth value, so anything drawn afterwards fails the depth test.
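Applied to the RenderToSurface method above, that's:

device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, new Color4(Color.Red), 1f, 0);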
I'm trying to create a screenshot/bitmap of my screen. I wrote this function:
public static Bitmap CreateScreenshot(Rectangle bounds)
{
    var bmpScreenshot = new Bitmap(bounds.Width, bounds.Height,
                                   PixelFormat.Format32bppArgb);
    var gfxScreenshot = Graphics.FromImage(bmpScreenshot);

    gfxScreenshot.CopyFromScreen(bounds.X, bounds.Y,
                                 0, 0,
                                 new Size(bounds.Size.Width, bounds.Size.Height),
                                 CopyPixelOperation.SourceCopy);

    return bmpScreenshot;
}
This function is called from my overlay form, which should draw the bitmap onto itself. I'm currently using GDI+ for the whole process.
private void ScreenshotOverlay_Load(object sender, EventArgs e)
{
    foreach (Screen screen in Screen.AllScreens)
        Size += screen.Bounds.Size;

    Location = Screen.PrimaryScreen.Bounds.Location;
    _screenshot = BitmapHelper.CreateScreenshot(new Rectangle(new Point(0, 0), Size));
    Invalidate(); // The screenshot/bitmap is drawn here
}
Yep, I dispose the bitmap later, so don't worry. ;)
On my laptop and desktop computer this works fine. I've tested this with different resolutions and the calculations are correct. I can see an image of the screen on the form.
The problem starts with the Surface 3. There, all elements are scaled by a factor of 1.5 (150%), which means the DPI changes. If I try to take a screenshot, it only captures roughly the upper-left part of the screen, not the whole thing.
I've made my way through Google and StackOverflow and tried out different things:
Get the DPI, divide it by 96 and multiply the size components (X and Y) of the screen by this factor.
Add an entry to application.manifest to make the application DPI-aware.
The first way did not bring the desired result. The second one did, but then the whole application would have to be adjusted, which is quite complicated in Windows Forms.
Now my question is: is there any way to capture a screenshot of the whole screen, even when it has a scaling factor greater than 1 (higher DPI)?
There must be a way to make this work everywhere, but my searching so far hasn't turned up anything that helps.
Thanks in advance.
Try this, which comes from SharpAVI's library. It works well on devices regardless of the resolution scale, and I have tested it on a Surface 3 at 150%.
System.Windows.Media.Matrix toDevice;
using (var source = new HwndSource(new HwndSourceParameters()))
{
    toDevice = source.CompositionTarget.TransformToDevice;
}

screenWidth = (int)Math.Round(SystemParameters.PrimaryScreenWidth * toDevice.M11);
screenHeight = (int)Math.Round(SystemParameters.PrimaryScreenHeight * toDevice.M22);
SharpAVI can be found here: https://github.com/baSSiLL/SharpAvi. It is for videos, but it uses a similar CopyFromScreen call when grabbing each frame:
graphics.CopyFromScreen(0, 0, 0, 0, new System.Drawing.Size(screenWidth, screenHeight));
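Putting the two pieces together, a minimal full-screen capture helper might look like this (the method name is mine; it simply combines the TransformToDevice lookup above with CopyFromScreen):

public static Bitmap CaptureFullScreen()
{
    // Ask WPF for the transform to device units to get the physical pixel size.
    System.Windows.Media.Matrix toDevice;
    using (var source = new HwndSource(new HwndSourceParameters()))
    {
        toDevice = source.CompositionTarget.TransformToDevice;
    }

    int screenWidth = (int)Math.Round(SystemParameters.PrimaryScreenWidth * toDevice.M11);
    int screenHeight = (int)Math.Round(SystemParameters.PrimaryScreenHeight * toDevice.M22);

    var bmp = new Bitmap(screenWidth, screenHeight, PixelFormat.Format32bppArgb);
    using (var graphics = Graphics.FromImage(bmp))
    {
        graphics.CopyFromScreen(0, 0, 0, 0, new System.Drawing.Size(screenWidth, screenHeight));
    }
    return bmp;
}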
Before taking your screenshot, you can make the process DPI aware:
[System.Runtime.InteropServices.DllImport("user32.dll")]
public static extern bool SetProcessDPIAware();

private static Bitmap Screenshot()
{
    SetProcessDPIAware();

    var screen = System.Windows.Forms.Screen.PrimaryScreen;
    var rect = screen.Bounds;
    var size = rect.Size;

    Bitmap bmpScreenshot = new Bitmap(size.Width, size.Height);
    Graphics g = Graphics.FromImage(bmpScreenshot);
    g.CopyFromScreen(0, 0, 0, 0, size);

    return bmpScreenshot;
}
I'd like to write some unit tests that produce JPG images. I'm working on a game using XNA and want to have some rendering tests I can run and visually inspect that I haven't broken anything etc.
I have a Unit Test project that calls into my code to generate XNA objects like Triangle Strips, but all the rendering examples seem to assume I have a Game object and am rendering to the associated GraphicsDevice which is a window on the screen.
If possible, I'd like to render in memory and just save JPG images that I can inspect after the tests finish running. Alternatively, I guess I could instantiate a Game object in the unit tests, render to the screen and then dump to a JPG as described here: How to make screenshot using C# & XNA?
But that doesn't sound very efficient.
So, put simply, is there a way to instantiate a GraphicsDevice object outside of an XNA Game that writes to a buffer in memory?
I ended up instantiating a Control object from System.Windows.Forms. I made this part of my Unit Test project so my game doesn't depend on Windows Forms.
This is the class I use to provide functionality to produce JPGs for my tests:
public class JPGGraphicsDeviceProvider
{
    private Control _c;
    private RenderTarget2D _renderTarget;
    private int _width;
    private int _height;

    public JPGGraphicsDeviceProvider(int width, int height)
    {
        _width = width;
        _height = height;
        _c = new Control();

        PresentationParameters parameters = new PresentationParameters()
        {
            BackBufferWidth = width,
            BackBufferHeight = height,
            BackBufferFormat = SurfaceFormat.Color,
            DepthStencilFormat = DepthFormat.Depth24,
            DeviceWindowHandle = _c.Handle,
            PresentationInterval = PresentInterval.Immediate,
            IsFullScreen = false,
        };

        GraphicsDevice = new GraphicsDevice(GraphicsAdapter.DefaultAdapter,
                                            GraphicsProfile.Reach,
                                            parameters);

        // Got this idea from here: http://xboxforums.create.msdn.com/forums/t/67895.aspx
        _renderTarget = new RenderTarget2D(GraphicsDevice,
                                           GraphicsDevice.PresentationParameters.BackBufferWidth,
                                           GraphicsDevice.PresentationParameters.BackBufferHeight);
        GraphicsDevice.SetRenderTarget(_renderTarget);
    }

    /// <summary>
    /// Gets the current graphics device.
    /// </summary>
    public GraphicsDevice GraphicsDevice { get; private set; }

    public void SaveCurrentImage(string jpgFilename)
    {
        GraphicsDevice.SetRenderTarget(null);

        int w = GraphicsDevice.PresentationParameters.BackBufferWidth;
        int h = GraphicsDevice.PresentationParameters.BackBufferHeight;

        using (Stream stream = new FileStream(jpgFilename, FileMode.Create))
        {
            _renderTarget.SaveAsJpeg(stream, w, h);
        }

        GraphicsDevice.SetRenderTarget(_renderTarget);
    }
}
Then in my tests, I just instantiate this and use the GraphicsDevice provided:
[TestClass]
public class RenderTests
{
    private JPGGraphicsDeviceProvider _jpgDevice;
    private RenderPanel _renderPanel;

    public RenderTests()
    {
        _jpgDevice = new JPGGraphicsDeviceProvider(512, 360);
        _renderPanel = new RenderPanel(_jpgDevice.GraphicsDevice);
    }

    [TestMethod]
    public void InstantiatePrism6()
    {
        ColorPrism p = new ColorPrism(6, Color.RoyalBlue, Color.Pink);
        _renderPanel.Add(p);
        _renderPanel.Draw();
        _jpgDevice.SaveCurrentImage("six-Prism.jpg");
    }
}
I'm sure it's not bug-free, but it seems to work for now.
I have a function that gets called several times in a loop and returns a view controller (each view controller is like a slide of a presentation). The UIViewController has subviews like a scroll view, a UIWebView and a UIImageView. As the number of view controllers increases, I get memory warnings and the app crashes. I have wrapped the function in an NSAutoreleasePool and I am using "using" blocks to dispose the UIImages, but to no avail.
Can you look at the code and tell me what's wrong and how I can solve the memory problem?
public UIViewController GetPage(int i)
{
    using (var pool = new NSAutoreleasePool ()) {
        UIViewController c = new UIViewController ();
        c.View.Frame = new RectangleF (0, 0, 1024, 743);

        UIScrollView scr = new UIScrollView (new RectangleF (0, 0, 1024, 743));
        scr.ContentSize = new SizeF (1024, 748);

        string contentDirectoryPath = // path string

        UIWebView asset = new UIWebView (new RectangleF (0, 0, 1024, 748));
        asset.ScalesPageToFit = true;

        UIScrollView scrl = new UIScrollView (new RectangleF (0, 0, 1024, 748));
        scrl.ContentSize = new SizeF (1024, 768);

        using (UIImageView imgView = new UIImageView (new RectangleF (0, 0, 1024, 748))) {
            using (var img = UIImage.FromFile (contentDirectoryPath)) {
                imgView.Image = img;
                img.Dispose ();
            }
            //UIImage img = UIImage.FromFile (contentDirectoryPath);
            //imgView.Image = img;
            imgView.ContentMode = UIViewContentMode.ScaleAspectFit;

            float widthRatio = imgView.Bounds.Size.Width / imgView.Image.Size.Width;
            float heightRatio = imgView.Bounds.Size.Height / imgView.Image.Size.Height;
            float scale = Math.Min (widthRatio, heightRatio);
            float imageWidth = scale * imgView.Image.Size.Width;
            float imageHeight = scale * imgView.Image.Size.Height;
            imgView.Frame = new RectangleF (0, 0, imageWidth, imageHeight);

            scrl.AddSubview (imgView);
            imgView.Center = imgView.Superview.Center;
            imgView.Dispose ();
        }

        asset.AddSubview (scrl);
        scrl.Center = scrl.Superview.Center;
        asset.ScalesPageToFit = true;

        scr.AddSubview (asset);
        c.View.AddSubview (scr);
        scr.Center = scr.Superview.Center;

        return c;
    }
}
Using an NSAutoreleasePool won't help you, and neither will disposing the image or the image view. (In fact, your code disposes the UIImageView you're adding to the UIScrollView; I'm actually wondering why this works at all.)
If you are creating tons of instances of a UIViewController that hosts a lot of subviews and especially large images, you will eventually run out of memory as long as you reference these controllers from somewhere.
From what I see, your image is 1024x768, probably at 32-bit color depth; that's 3 megabytes per image in memory.
You will have to get rid of the view controllers you're creating, or make them smarter (load and dispose the images in ViewWillAppear() and ViewWillDisappear(), for instance).
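For instance, a rough sketch of the lazy-loading idea (the class and field names are mine, not from your code):

public class PageViewController : UIViewController
{
    string imagePath;
    UIImageView imgView;

    public PageViewController (string imagePath)
    {
        this.imagePath = imagePath;
    }

    public override void ViewDidLoad ()
    {
        base.ViewDidLoad ();
        imgView = new UIImageView (new RectangleF (0, 0, 1024, 748));
        imgView.ContentMode = UIViewContentMode.ScaleAspectFit;
        View.AddSubview (imgView);
    }

    public override void ViewWillAppear (bool animated)
    {
        base.ViewWillAppear (animated);
        // Load the image only when this page is about to become visible.
        if (imgView.Image == null)
            imgView.Image = UIImage.FromFile (imagePath);
    }

    public override void ViewWillDisappear (bool animated)
    {
        base.ViewWillDisappear (animated);
        // Release the bitmap as soon as the page goes off screen.
        UIImage img = imgView.Image;
        imgView.Image = null;
        if (img != null)
            img.Dispose ();
    }
}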
The question is why you are creating multiple view controllers with the same view (but different content) in the first place. Why not create a single view controller and change its content in ViewDidLoad()? I don't see you using the passed integer anywhere, so is this your actual code or just a sample of what you are doing?
If you want to change the content of the controller without reloading, create a function to change the values. For example:
public void Bind(UIImage image)
{
    this.imgView.Image = image;
}
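A slightly safer variant (my addition, not part of the answer above) would also dispose the image it replaces:

public void Bind (UIImage image)
{
    UIImage old = this.imgView.Image;
    this.imgView.Image = image;
    if (old != null)
        old.Dispose (); // release the previous page's bitmap right away
}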
I have to write a function that sets pixels in WPF, because I need to draw some pictures. With the attached code I get a blurry effect (as in the screenshot).
Can you tell me what is wrong, or which methods I should use?
namespace DisplayAppCS
{
    public partial class MainWindow : Window
    {
        WriteableBitmap _bitmap = new WriteableBitmap(100, 200, 1, 1, PixelFormats.Bgr32, null);

        public MainWindow()
        {
            InitializeComponent();

            image1.SnapsToDevicePixels = true;
            image1.Source = _bitmap;

            int[] ColorData = { 0xFFFFFF }; // B G R

            Int32Rect rect = new Int32Rect(1, 60, 1, 1);

            _bitmap.WritePixels(rect, ColorData, 4, 0);
        }
    }
}
Your bitmap is 100x200 but your window is much larger. Your image is being stretched to the size of the window, thus creating the "blurring" effect. You need to either change the size of the window or tell the image not to stretch:
<Image Stretch="None"/>
That said, you could be going down completely the wrong path using a writeable bitmap. It really depends on your requirements. Could you get away with just using built-in WPF shapes, for example?
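For example, the single pixel from the question could be drawn with a plain shape instead (a hypothetical sketch, not the asker's code):

<Canvas SnapsToDevicePixels="True">
    <!-- A 1x1 white rectangle at (1, 60), mirroring the WritePixels call above -->
    <Rectangle Canvas.Left="1" Canvas.Top="60" Width="1" Height="1" Fill="White"/>
</Canvas>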
You can try the SnapsToDevicePixels property.
I'm stuck at not being able to map a texture to a square in OpenGL ES. I'm trying to display a JPG image on the screen, and in order to do that I draw a square that I then want to map the image onto. However, all I get as output is a white square. I don't know what I'm doing wrong, and this problem is preventing me from moving forward with my project. I'm using the Managed OpenGL ES wrapper for Windows Mobile.
I verified that the texture is loading correctly, but I can't apply it to my object. I uploaded a sample project that shows my problem here. You would need VS2008 with the Windows Mobile 6 SDK to be able to run it. I'm also posting the code of the Form that renders and textures the object here. Any suggestions would be much appreciated, since I've been stuck on this problem for a while and I can't figure out what I'm doing wrong.
public partial class Form1 : Form
{
    [DllImport("coredll")]
    extern static IntPtr GetDC(IntPtr hwnd);

    EGLDisplay myDisplay;
    EGLSurface mySurface;
    EGLContext myContext;

    public Form1()
    {
        InitializeComponent();

        myDisplay = egl.GetDisplay(new EGLNativeDisplayType(this));

        int major, minor;
        egl.Initialize(myDisplay, out major, out minor);

        EGLConfig[] configs = new EGLConfig[10];
        int[] attribList = new int[]
        {
            egl.EGL_RED_SIZE, 5,
            egl.EGL_GREEN_SIZE, 6,
            egl.EGL_BLUE_SIZE, 5,
            egl.EGL_DEPTH_SIZE, 16,
            egl.EGL_SURFACE_TYPE, egl.EGL_WINDOW_BIT,
            egl.EGL_STENCIL_SIZE, egl.EGL_DONT_CARE,
            egl.EGL_NONE, egl.EGL_NONE
        };

        int numConfig;
        if (!egl.ChooseConfig(myDisplay, attribList, configs, configs.Length, out numConfig) || numConfig < 1)
            throw new InvalidOperationException("Unable to choose config.");

        EGLConfig config = configs[0];
        mySurface = egl.CreateWindowSurface(myDisplay, config, Handle, null);
        myContext = egl.CreateContext(myDisplay, config, EGLContext.None, null);
        egl.MakeCurrent(myDisplay, mySurface, mySurface, myContext);

        gl.ClearColor(0, 0, 0, 0);
        InitGL();
    }

    void InitGL()
    {
        gl.ShadeModel(gl.GL_SMOOTH);
        gl.ClearColor(0.0f, 0.0f, 0.0f, 0.5f);
        gl.BlendFunc(gl.GL_SRC_ALPHA, gl.GL_ONE_MINUS_SRC_ALPHA);
        gl.Hint(gl.GL_PERSPECTIVE_CORRECTION_HINT, gl.GL_NICEST);
    }

    public unsafe void DrawGLScene()
    {
        gl.MatrixMode(gl.GL_PROJECTION);
        gl.LoadIdentity();
        gl.Orthof(0, ClientSize.Width, ClientSize.Height, 0, 0, 1);
        gl.Disable(gl.GL_DEPTH_TEST);
        gl.MatrixMode(gl.GL_MODELVIEW);
        gl.LoadIdentity();

        Texture myImage;
        Bitmap Image = new Bitmap(@"\Storage Card\Texture.jpg");
        using (MemoryStream ms = new MemoryStream())
        {
            Image.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp);
            myImage = Texture.LoadStream(ms, false);
        }

        float[] rectangle = new float[] {
            0, 0,
            myImage.Width, 0,
            0, myImage.Height,
            myImage.Width, myImage.Height
        };

        float[] texturePosition = new float[] {
            0, 0,
            myImage.Width, 0,
            0, myImage.Height,
            myImage.Width, myImage.Height
        };

        // Bind texture
        gl.BindTexture(gl.GL_TEXTURE_2D, myImage.Name);
        gl.TexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MIN_FILTER, gl.GL_LINEAR);
        gl.TexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MAG_FILTER, gl.GL_LINEAR);

        gl.EnableClientState(gl.GL_TEXTURE_COORD_ARRAY);
        gl.EnableClientState(gl.GL_VERTEX_ARRAY);

        // Draw square and texture it.
        fixed (float* rectanglePointer = &rectangle[0], positionPointer = &texturePosition[0])
        {
            gl.TexCoordPointer(2, gl.GL_FLOAT, 0, (IntPtr)positionPointer);
            gl.VertexPointer(2, gl.GL_FLOAT, 0, (IntPtr)rectanglePointer);
            gl.DrawArrays(gl.GL_TRIANGLE_STRIP, 0, 4);
        }

        gl.DisableClientState(gl.GL_TEXTURE_COORD_ARRAY);
        gl.DisableClientState(gl.GL_VERTEX_ARRAY);
    }

    protected override void OnPaintBackground(PaintEventArgs e)
    {
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        gl.Clear(gl.GL_COLOR_BUFFER_BIT);
        DrawGLScene();
        egl.SwapBuffers(myDisplay, mySurface);
        gl.Clear(gl.GL_COLOR_BUFFER_BIT);
    }

    protected override void OnClosing(CancelEventArgs e)
    {
        if (!egl.DestroySurface(myDisplay, mySurface))
            throw new Exception("Error while destroying surface.");
        if (!egl.DestroyContext(myDisplay, myContext))
            throw new Exception("Error while destroying context.");
        if (!egl.Terminate(myDisplay))
            throw new Exception("Error while terminating display.");
        base.OnClosing(e);
    }
}
You need to enable texturing:
glEnable( GL_TEXTURE_2D );
before rendering the square.
If you work with OpenGL ES, also take a look at whether the glDrawTexImage extension is supported (well, it should be; it's a core extension and required, but you never know...).
It won't help with your problem directly (you still have to enable texturing), but glDrawTexImage is a hell of a lot more efficient than polygon rendering, and it takes less code to write as well.
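In raw GL code the draw_texture path looks roughly like this (a sketch; textureName, textureWidth and textureHeight are placeholders, and the managed wrapper may expose these OES entry points under different names, if at all):

// The crop rectangle selects the part of the texture to draw: x, y, width, height.
GLint crop[4] = { 0, 0, textureWidth, textureHeight };

glEnable( GL_TEXTURE_2D );
glBindTexture( GL_TEXTURE_2D, textureName );
glTexParameteriv( GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, crop );

// Draws the texture straight into window coordinates; no vertex data needed.
glDrawTexiOES( 0, 0, 0, textureWidth, textureHeight );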
If you are loading textures from PNG or JPG files using UIImage, CGImage and CGContext, it is very important to set GL_TEXTURE_MIN_FILTER to GL_LINEAR or GL_NEAREST before creating textures, because if you don't do it, all your textures except the last bound will be set to blank white.
Thanks for the help! However, your suggestion didn't fix the issue. Now the square is black instead of white, but there's still no texture. I've tried adding gl.Enable(gl.GL_TEXTURE_2D) at every possible position, but the result is still a black square.
EDIT:
Oops, sorry. The top-left corner of my image was black, which is why I didn't see anything. I changed the image to have different colors, and now I can see part of the image rendered. It's not mapped properly, but I can figure that part out.
Thanks a lot for the help!!!