I'd like to write some unit tests that produce JPG images. I'm working on a game using XNA and want some rendering tests whose output I can visually inspect to confirm I haven't broken anything.
I have a Unit Test project that calls into my code to generate XNA objects like triangle strips, but all the rendering examples seem to assume I have a Game object and am rendering to the associated GraphicsDevice, which is a window on the screen.
If possible, I'd like to render in memory and just save JPG images that I can inspect after the tests finish running. Alternatively, I guess I could instantiate a Game object in the unit tests, render to the screen and then dump to a JPG as described here: How to make screenshot using C# & XNA?
But that doesn't sound very efficient.
So, put simply, is there a way to instantiate a GraphicsDevice object outside of an XNA Game that writes to a buffer in memory?
I ended up instantiating a Control object from System.Windows.Forms. I made this part of my Unit Test project so my game doesn't depend on Windows Forms.
This is the class I use to provide functionality to produce JPGs for my tests:
using System.IO;
using System.Windows.Forms;
using Microsoft.Xna.Framework.Graphics;

public class JPGGraphicsDeviceProvider
{
    private Control _c;
    private RenderTarget2D _renderTarget;
    private int _width;
    private int _height;

    public JPGGraphicsDeviceProvider(int width, int height)
    {
        _width = width;
        _height = height;

        // A dummy Windows Forms control supplies the window handle
        // that the GraphicsDevice constructor requires.
        _c = new Control();
        PresentationParameters parameters = new PresentationParameters()
        {
            BackBufferWidth = width,
            BackBufferHeight = height,
            BackBufferFormat = SurfaceFormat.Color,
            DepthStencilFormat = DepthFormat.Depth24,
            DeviceWindowHandle = _c.Handle,
            PresentationInterval = PresentInterval.Immediate,
            IsFullScreen = false,
        };
        GraphicsDevice = new GraphicsDevice(GraphicsAdapter.DefaultAdapter,
                                            GraphicsProfile.Reach,
                                            parameters);

        // Got this idea from here: http://xboxforums.create.msdn.com/forums/t/67895.aspx
        _renderTarget = new RenderTarget2D(GraphicsDevice,
                                           GraphicsDevice.PresentationParameters.BackBufferWidth,
                                           GraphicsDevice.PresentationParameters.BackBufferHeight);
        GraphicsDevice.SetRenderTarget(_renderTarget);
    }

    /// <summary>
    /// Gets the current graphics device.
    /// </summary>
    public GraphicsDevice GraphicsDevice { get; private set; }

    public void SaveCurrentImage(string jpgFilename)
    {
        // Unbinding the render target resolves it so its contents can be read back.
        GraphicsDevice.SetRenderTarget(null);
        int w = GraphicsDevice.PresentationParameters.BackBufferWidth;
        int h = GraphicsDevice.PresentationParameters.BackBufferHeight;
        using (Stream stream = new FileStream(jpgFilename, FileMode.Create))
        {
            _renderTarget.SaveAsJpeg(stream, w, h);
        }
        // Re-bind the render target for subsequent drawing.
        GraphicsDevice.SetRenderTarget(_renderTarget);
    }
}
Then in my tests, I just instantiate this and use the GraphicsDevice provided:
[TestClass]
public class RenderTests
{
    private JPGGraphicsDeviceProvider _jpgDevice;
    private RenderPanel _renderPanel;

    public RenderTests()
    {
        _jpgDevice = new JPGGraphicsDeviceProvider(512, 360);
        _renderPanel = new RenderPanel(_jpgDevice.GraphicsDevice);
    }

    [TestMethod]
    public void InstantiatePrism6()
    {
        ColorPrism p = new ColorPrism(6, Color.RoyalBlue, Color.Pink);
        _renderPanel.Add(p);
        _renderPanel.Draw();
        _jpgDevice.SaveCurrentImage("six-Prism.jpg");
    }
}
I'm sure it's not bug-free, but it seems to work for now.
I'm following a snake tutorial right now and I wrote exactly what it said, but it won't even show the rectangles for the snake and the food.
I'm using a Windows Forms application.
I made separate classes: Food, Snake, and one for the form.
// Snake class
public Rectangle[] Body;
private int x = 0, y = 0, width = 20, height = 20;

public Snake()
{
    Body = new Rectangle[1];
    Body[0] = new Rectangle(x, y, width, height);
}

public void Draw()
{
    // Shift the body segments toward the tail (the original condition
    // was i < 0, which is never true, so the loop never ran).
    for (int i = Body.Length - 1; i > 0; i--)
        Body[i] = Body[i - 1];
}

public void Draw(Graphics graphics)
{
    graphics.FillRectangles(Brushes.AliceBlue, Body);
}

public void Move(int direction)
{
    Draw();
    // (rest of the method omitted)
}
// Food class
public class Food
{
    public Rectangle Piece;
    private int x, y, width = 20, height = 20;

    public Food(Random rand)
    {
        Generate(rand); // Generate sets x and y (not shown)
        Piece = new Rectangle(x, y, width, height);
    }

    public void Draw(Graphics graphics)
    {
        Piece.X = x;
        Piece.Y = y;
        graphics.FillRectangle(Brushes.Red, Piece);
    }
}
Your Snake and Food classes don't inherit from any drawable control, so their draw routines have to be called explicitly every time something changes. You appear to be trying to mimic the structure of a WinForms UI component, which has a built-in paint routine that is invoked automatically whenever the UI needs to update.
Also, calling Draw without any parameters would be an error unless you have an overload somewhere that doesn't require parameters; in that case, there is no graphics context to draw to.
I'm no expert in game graphics, but I do know that constantly redrawing an entire UI component is exceptionally inefficient. There are overloads of the Invalidate() method that take rectangles, so you can invalidate only small portions of the component; that helps the redraw rate.
I would suggest having a single renderable UI component on screen that links to your data objects, and overriding that component's OnPaint method so that it draws the entire game board (or only portions of it, based on the invalidated regions) from the data stored in your game objects.
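For example, here's a minimal sketch of that idea (GamePanel and the timer interval are hypothetical; it assumes the Snake and Food classes shown above):

using System;
using System.Drawing;
using System.Windows.Forms;

// One double-buffered panel draws the whole game from the data objects;
// a timer advances the game and invalidates the panel to trigger repaints.
public class GamePanel : Panel
{
    private readonly Snake snake = new Snake();
    private readonly Food food = new Food(new Random());

    public GamePanel()
    {
        DoubleBuffered = true; // avoids flicker on frequent redraws

        var timer = new Timer { Interval = 100 }; // ~10 updates per second
        timer.Tick += (s, e) =>
        {
            snake.Move(0); // update the game state
            Invalidate();  // request a repaint; Invalidate(Rectangle) limits the region
        };
        timer.Start();
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        snake.Draw(e.Graphics); // the overload that takes a Graphics context
        food.Draw(e.Graphics);
    }
}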
I coded up the example from https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/custom-video-effects
Part of the code:
public void ProcessFrame(ProcessVideoFrameContext context)
{
    using (CanvasBitmap inputBitmap = CanvasBitmap.CreateFromDirect3D11Surface(canvasDevice, context.InputFrame.Direct3DSurface))
    using (CanvasRenderTarget renderTarget = CanvasRenderTarget.CreateFromDirect3D11Surface(canvasDevice, context.OutputFrame.Direct3DSurface))
    using (CanvasDrawingSession ds = renderTarget.CreateDrawingSession())
    {
        var gaussianBlurEffect = new GaussianBlurEffect
        {
            Source = inputBitmap,
            BlurAmount = (float)BlurAmount,
            Optimization = EffectOptimization.Speed
        };
        ds.DrawImage(gaussianBlurEffect);
    }
}
The problem is: I want to draw points (bitmaps) on the frames, but I have no idea how to pass specific coordinates to the ProcessFrame function. As input I have x and y coordinates for every frame where a point should be drawn, and as output I want the video with the points added to every frame.
Thanks for the help.
EDIT:
The code below is not a suitable solution, since ProcessFrame(ProcessVideoFrameContext context) is part of an interface implementation, so its signature cannot be changed.
My next proposal is to create a custom effect, similar to the GaussianBlurEffect and many others. An example here:
https://github.com/Microsoft/Win2D-Samples/blob/master/ExampleGallery
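Until then, a rough way to get coordinates into the effect is through the configuration property set that IBasicVideoEffect already receives via SetProperties, matching each frame by its presentation time. This is only a sketch under those assumptions (the "Points" key, the dictionary type, and the exact-time lookup are hypothetical, not a tested implementation):

// Inside the IBasicVideoEffect implementation. The host app stores the
// per-frame coordinates in the effect's configuration before playback.
private Dictionary<TimeSpan, Vector2> pointsByTime;

public void SetProperties(IPropertySet configuration)
{
    pointsByTime = (Dictionary<TimeSpan, Vector2>)configuration["Points"];
}

public void ProcessFrame(ProcessVideoFrameContext context)
{
    using (CanvasBitmap inputBitmap = CanvasBitmap.CreateFromDirect3D11Surface(canvasDevice, context.InputFrame.Direct3DSurface))
    using (CanvasRenderTarget renderTarget = CanvasRenderTarget.CreateFromDirect3D11Surface(canvasDevice, context.OutputFrame.Direct3DSurface))
    using (CanvasDrawingSession ds = renderTarget.CreateDrawingSession())
    {
        ds.DrawImage(inputBitmap); // pass the frame through unchanged

        // Look up the point for this frame by its presentation timestamp.
        Vector2 p;
        if (context.InputFrame.RelativeTime.HasValue &&
            pointsByTime.TryGetValue(context.InputFrame.RelativeTime.Value, out p))
        {
            ds.FillCircle(p, 5, Colors.Red); // draw the point on top of the frame
        }
    }
}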
~~~
Below the original answer for reference.
You can pass in the X and Y parameters and access the pixels of the image.
public void ProcessFrame(ProcessVideoFrameContext context, int X, int Y)
{
    using (CanvasBitmap inputBitmap = CanvasBitmap.CreateFromDirect3D11Surface(canvasDevice, context.InputFrame.Direct3DSurface))
    using (CanvasRenderTarget renderTarget = CanvasRenderTarget.CreateFromDirect3D11Surface(canvasDevice, context.OutputFrame.Direct3DSurface))
    using (CanvasDrawingSession ds = renderTarget.CreateDrawingSession())
    {
        Color[] Pixels = inputBitmap.GetPixelColors();
        // Manipulate the array using X and Y with the Width parameter of the bitmap

        var gaussianBlurEffect = new GaussianBlurEffect
        {
            Source = inputBitmap,
            BlurAmount = (float)BlurAmount,
            Optimization = EffectOptimization.Speed
        };
        ds.DrawImage(gaussianBlurEffect);
    }
}
More info: https://microsoft.github.io/Win2D/html/M_Microsoft_Graphics_Canvas_CanvasBitmap_GetPixelColors.htm
I did not check whether the Color[] is a pointer to the live buffer or a copy. If it is a copy, then you have to write the buffer back with SetPixelColors.
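That write-back might look roughly like this (untested; it assumes a bitmap created from a Direct3D surface accepts SetPixelColors):

// GetPixelColors returns the pixels as a flat, row-major Color array.
Color[] pixels = inputBitmap.GetPixelColors();
int width = (int)inputBitmap.SizeInPixels.Width;

// Recolor the pixel at (X, Y), then push the whole array back.
pixels[Y * width + X] = Colors.Red;
inputBitmap.SetPixelColors(pixels);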
I am making a battleship game in the console and I would like to implement a replay system, where I could rewatch games. For this I use a StreamWriter to record the player's moves, and in a different loop a StreamReader uses these to 'replay' the game on screen. However, the positions of the ships are not the same, because they are generated when a new game is started.
The NEWGAME loop is the same as REPLAY, but in the replay the computer automatically makes the moves via the StreamReader; however, the map is different because it is always randomly generated.
My question is: how can I save an exact copy of a 'map', which in this situation is a class?
Thanks in advance! If you have any questions I can answer.
switch (x)
{
    case "newgame":
        Mezo Játékos2Hajói = new Mezo();
        Mezo Játékos1AmitLát = new Mezo();
        Mezo Játékos1Hajói = new Mezo();
        Mezo Játékos2AmitLát = new Mezo();
        ....
This generates the field for the new game; then comes the rest, player attacks and so on.
The replay case works the same as a new game, but the attacks are inserted by the computer from a StreamReader.
    case "replay":
        Mezo Játékos2VHajói = new Mezo();
        Mezo Játékos1VAmitLát = new Mezo();
        Mezo Játékos1VHajói = new Mezo();
        Mezo Játékos2VAmitLát = new Mezo();
The problem is here. This case generates a new map for itself, because otherwise it wouldn't work, but here I would like to use the map generated in "newgame".
This is exactly the kind of thing that the Command Pattern is great for. Here's a simplified version of something you could do:
interface ICommand { void Execute(); }

class PlaceShip : ICommand
{
    int x;
    int y;
    Ship ship;

    public PlaceShip(int x, int y, Ship ship)
    {
        // Initialize fields
    }

    public void Execute()
    {
        // Place the ship
    }
}

class Fire : ICommand
{
    int x;
    int y;
    Player player;

    public Fire(int x, int y, Player player)
    {
        // Initialize fields
    }

    public void Execute()
    {
        // Try to hit enemy
    }
}
Then you can keep a history of ICommand objects which you can replay easily by just iterating through the list again.
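For example, a minimal sketch of the recording and replay loop (GameLog is a hypothetical name):

using System.Collections.Generic;

// Record every action as it is executed, then replay by re-executing.
class GameLog
{
    private readonly List<ICommand> history = new List<ICommand>();

    public void Do(ICommand command)
    {
        command.Execute();
        history.Add(command); // remember the move for later replays
    }

    public void Replay()
    {
        // Re-running the same commands in the same order reproduces the
        // whole game, including the original ship placements.
        foreach (ICommand command in history)
            command.Execute();
    }
}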
You can try using XmlSerializer to export the whole class to structured XML automatically. It's a way to make a full copy of the object in XML.
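As a rough sketch, assuming Mezo is public with a parameterless constructor and public fields or properties:

using System.IO;
using System.Xml.Serialization;

// After "newgame" generates the map, save an exact copy of it.
var serializer = new XmlSerializer(typeof(Mezo));
using (var writer = new StreamWriter("map.xml"))
    serializer.Serialize(writer, Játékos2Hajói);

// In the "replay" case, load that copy instead of generating a new map.
Mezo Játékos2VHajói;
using (var reader = new StreamReader("map.xml"))
    Játékos2VHajói = (Mezo)serializer.Deserialize(reader);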
I'm working with a system that has 4 outputs (monitors), each at e.g. 1280x1024 pixels. I need a screenshot of the whole desktop and all open applications on it.
I tried GetDesktopWindow() (MSDN), but it doesn't work properly. Some forms don't show up in the captured picture.
I tried the GetDesktopWindow() function but it doesn't work properly.
Of course not.
The GetDesktopWindow function returns a handle to the desktop window. It doesn't have anything to do with capturing an image of that window.
Besides, the desktop window is not the same thing as "the entire screen". It refers specifically to the desktop window. See this article for more information and what can go wrong when you abuse the handle returned by this function.
I'm working with a system that has 4 outputs (monitors) with e.g. 1280x1024 for each output. I need a screenshot of the whole desktop and all open applications on it.
This is relatively simple to do in the .NET Framework using the Graphics.CopyFromScreen method. You don't even need to do any P/Invoke!
The only trick in this case is making sure that you pass the appropriate dimensions. Since you have 4 monitors, passing only the dimensions of the primary screen won't work. You need to pass the dimensions of the entire virtual screen, which contains all of your monitors. Retrieve this by querying the SystemInformation.VirtualScreen property, which returns the bounds of the virtual screen. As the documentation indicates, this is the bounds of the entire desktop on a multiple monitor system.
Sample code:
// Determine the size of the "virtual screen", which includes all monitors.
int screenLeft = SystemInformation.VirtualScreen.Left;
int screenTop = SystemInformation.VirtualScreen.Top;
int screenWidth = SystemInformation.VirtualScreen.Width;
int screenHeight = SystemInformation.VirtualScreen.Height;

// Create a bitmap of the appropriate size to receive the screenshot.
using (Bitmap bmp = new Bitmap(screenWidth, screenHeight))
{
    // Draw the screenshot into our bitmap.
    using (Graphics g = Graphics.FromImage(bmp))
    {
        g.CopyFromScreen(screenLeft, screenTop, 0, 0, bmp.Size);
    }

    // Do something with the Bitmap here, like save it to a file:
    bmp.Save(savePath, ImageFormat.Jpeg);
}
Edit:
Please check your solution with a WPF application in a thread that is not your main thread. I tried it; it doesn't work!
Hmm, I didn't see a WPF tag on the question or mentioned anywhere in the body.
No matter, though. The code I posted works just fine in a WPF application, as long as you add the appropriate references and using declarations. You will need System.Windows.Forms and System.Drawing. There might be a more WPF-esque way of doing this that doesn't require a dependency on these WinForms assemblies, but I wouldn't know what it is.
It even works on another thread. There is nothing here that would require the UI thread.
Yes, I tested it. Here is my full test code:
using System.Windows;
using System.Windows.Forms; // also requires a reference to this assembly
using System.Drawing;       // also requires a reference to this assembly
using System.Drawing.Imaging;
using System.Threading;

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
    }

    private void button1_Click(object sender, RoutedEventArgs e)
    {
        // Create a new thread for demonstration purposes.
        Thread thread = new Thread(() =>
        {
            // Determine the size of the "virtual screen", which includes all monitors.
            int screenLeft = SystemInformation.VirtualScreen.Left;
            int screenTop = SystemInformation.VirtualScreen.Top;
            int screenWidth = SystemInformation.VirtualScreen.Width;
            int screenHeight = SystemInformation.VirtualScreen.Height;

            // Create a bitmap of the appropriate size to receive the screenshot.
            using (Bitmap bmp = new Bitmap(screenWidth, screenHeight))
            {
                // Draw the screenshot into our bitmap.
                using (Graphics g = Graphics.FromImage(bmp))
                {
                    g.CopyFromScreen(screenLeft, screenTop, 0, 0, bmp.Size);
                }

                // Do something with the Bitmap here, like save it to a file:
                bmp.Save("G:\\TestImage.jpg", ImageFormat.Jpeg);
            }
        });
        thread.SetApartmentState(ApartmentState.STA);
        thread.Start();
    }
}
I created a tiny helper because I needed this today and tried many different approaches. Independently of the number of monitors, you can save the screenshot as a file on disk or store it in a binary database field with the following code blocks.
ScreenShotHelper.cs
using System.ComponentModel; // Required only for Win32Exception; remove it if you catch exceptions in another layer.
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

namespace Company.Core.Helpers.Win32 {

    public static class ScreenShotHelper {

        private static Bitmap CopyFromScreen(Rectangle bounds) {
            try {
                var image = new Bitmap(bounds.Width, bounds.Height);
                using var graphics = Graphics.FromImage(image);
                // Copy starting at the bounds origin, which can be negative
                // on multi-monitor setups where a screen sits left of or
                // above the primary one.
                graphics.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);
                return image;
            }
            catch (Win32Exception) { // When the screen saver is active.
                // Note: the callers below will throw if this returns null.
                return null;
            }
        }

        public static Image Take(Rectangle bounds) {
            return CopyFromScreen(bounds);
        }

        public static byte[] TakeAsByteArray(Rectangle bounds) {
            using var image = CopyFromScreen(bounds);
            using var ms = new MemoryStream();
            image.Save(ms, ImageFormat.Png);
            return ms.ToArray();
        }

        public static void TakeAndSave(string path, Rectangle bounds, ImageFormat imageFormat) {
            using var image = CopyFromScreen(bounds);
            image.Save(path, imageFormat);
        }
    }
}
Usage - Binary Field
var bounds = new Rectangle();
bounds = Screen.AllScreens.Aggregate(bounds, (current, screen)
    => Rectangle.Union(current, screen.Bounds));
_card.ScreenShot = Convert.ToBase64String(ScreenShotHelper.TakeAsByteArray(bounds));
Usage - Disk file
var bounds = new Rectangle();
bounds = Screen.AllScreens.Aggregate(bounds, (current, screen)
    => Rectangle.Union(current, screen.Bounds));
ScreenShotHelper.TakeAndSave(@"d:\screenshot.png", bounds, ImageFormat.Png);
I've written some code using SlimDX and WPF where I would expect the end result to be a red screen.
Unfortunately all I get is a black screen.
This is on Windows 7.
Can anyone see anything major I'm missing?
The reason I'm using a separate surface as the backbuffer for the D3DImage is that I'm going to need multiple viewports. I thought that rendering to separate surfaces, instead of the device's initial backbuffer, would be the best way to achieve that.
Anyway, on with the code...
Disclaimer: Please ignore the bad code, this is written entirely as throw-away code just so I can figure out how to do achieve what I'm after.
Here's my window class:
namespace SlimDXWithWpf
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        SlimDXRenderer controller;

        public MainWindow()
        {
            InitializeComponent();

            controller = new SlimDXRenderer();
            controller.Initialize();

            D3DImage image = new D3DImage();
            image.Lock();
            controller.RenderToSurface();
            image.SetBackBuffer(D3DResourceType.IDirect3DSurface9, controller.SurfacePointer);
            image.AddDirtyRect(new Int32Rect(0, 0, image.PixelWidth, image.PixelHeight));
            image.Unlock();

            Background = new ImageBrush(image);
        }
    }
}
And here's my "renderer" class:
namespace SlimDXWithWpf
{
    public class SlimDXRenderer : IDisposable
    {
        Direct3DEx directX;
        DeviceEx device;
        Surface surface;
        Surface backBuffer;
        IntPtr surfacePointer;

        public IntPtr SurfacePointer
        {
            get { return surfacePointer; }
        }

        public void Initialize()
        {
            directX = new Direct3DEx();

            HwndSource hwnd = new HwndSource(0, 0, 0, 0, 0, 640, 480, "SlimDXControl", IntPtr.Zero);

            PresentParameters pp = new PresentParameters()
            {
                BackBufferCount = 1,
                BackBufferFormat = Format.A8R8G8B8,
                BackBufferWidth = 640,
                BackBufferHeight = 480,
                DeviceWindowHandle = hwnd.Handle,
                PresentationInterval = PresentInterval.Immediate,
                Windowed = true,
                SwapEffect = SwapEffect.Discard
            };

            device = new DeviceEx(directX, 0, DeviceType.Hardware, hwnd.Handle, CreateFlags.HardwareVertexProcessing, pp);
            backBuffer = device.GetRenderTarget(0);

            surface = Surface.CreateRenderTarget(device, 1024, 768, Format.A8R8G8B8, MultisampleType.None, 1, false);
            surfacePointer = surface.ComPointer;
        }

        public void RenderToSurface()
        {
            device.SetRenderTarget(0, surface);
            device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, new Color4(Color.Red), 0f, 0);
            device.BeginScene();
            device.EndScene();
        }

        public void Dispose()
        {
            surface.Dispose();
            device.Dispose();
            directX.Dispose();
        }
    }
}
-- Edit: For a second I thought I'd solved it, but it seems it only works when my second render target (the one I'm trying to clear to red) is 640x480. Any thoughts?
Did you base some of this code on the SlimDX WPF sample? It looks like you might have, which is why your Clear() call is using 0.0f for the Z clear value... which is a bug in our sample. It should be 1.0f.
Beyond that, the only potential issue I see is that your surface render target is a different size than your back buffer, but that should not actually cause problems. Have you tried rendering to the device's backbuffer (Device.GetBackBuffer()) instead of a new surface to see what impact that has?
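That diagnostic would look something like this (a sketch against the SlimDX API, reusing the fields from the renderer class above):

// Diagnostic: render to the device's own back buffer instead of the
// separately created surface, and expose that pointer to the D3DImage.
backBuffer = device.GetBackBuffer(0, 0); // swap chain 0, back buffer 0
device.SetRenderTarget(0, backBuffer);
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, new Color4(Color.Red), 1f, 0);
surfacePointer = backBuffer.ComPointer; // hand this to image.SetBackBuffer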
In your device.Clear call, change the first numeric argument from 0f to 1f. That's the z-depth which ranges from 0 to 1. Specifying a z-depth of 0 effectively does nothing.
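Applied to the RenderToSurface method above, the corrected call reads:

// Clear the depth buffer to 1.0 (the far plane) instead of 0.0.
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, new Color4(Color.Red), 1f, 0);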