WPF Efficient Bitmap render management - c#

I am implementing a security camera. I have the following methods; for simplicity, let's call them:
capture(...) // captures the frame from the camera; the frame is used by the following methods
comparetemplates(...) // if it detects any motion, it triggers the following method
detection(...) // applies Haar cascade detection and recognition through EmguCV
Basically I capture() the frame, check the boolean comparetemplates(), and render the captured frame with detection() applied to it.
The Problem:
capture() renders to the screen in real time at 30 fps; however, the other methods require more time.
All operations use System.Drawing.Bitmap, with conversions to a WPF image source for rendering.
The Question:
I am willing to sacrifice real-time rendering for a smooth 30 fps rendering delayed by up to 6 seconds. I am not necessarily asking for code, but for the actual principle for calling the methods and applying delays (threads, etc.).
Thank you.
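One common principle for this is a producer/consumer pipeline: the capture thread pushes frames into a bounded queue, a worker thread runs the slow detection work, and frames are rendered in queue order, so playback stays smooth but lags by however long processing takes. A minimal sketch (the class and method names are hypothetical stand-ins for the methods described above, and the 180-frame capacity assumes ~6 s at 30 fps):

```csharp
using System;
using System.Collections.Concurrent;
using System.Drawing;
using System.Threading;
using System.Threading.Tasks;

class FramePipeline
{
    // Bounded queue: ~6 s of frames at 30 fps. Add() blocks when full,
    // so capture naturally throttles instead of exhausting memory.
    private readonly BlockingCollection<Bitmap> _pending =
        new BlockingCollection<Bitmap>(boundedCapacity: 180);

    public void Start(CancellationToken token)
    {
        // Producer: grab frames at ~30 fps and queue them.
        Task.Run(() =>
        {
            while (!token.IsCancellationRequested)
            {
                Bitmap frame = Capture();            // your capture(...)
                _pending.Add(frame, token);
                Thread.Sleep(33);                    // ~30 fps pacing
            }
        }, token);

        // Consumer: process and render in queue order. Frames come out
        // at a steady rate, delayed by however long detection takes.
        Task.Run(() =>
        {
            foreach (Bitmap frame in _pending.GetConsumingEnumerable(token))
            {
                if (MotionDetected(frame))           // comparetemplates(...)
                    ApplyDetection(frame);           // detection(...)
                Render(frame);                       // marshal to WPF via Dispatcher
                frame.Dispose();
            }
        }, token);
    }

    // Stubs for the methods described in the question.
    private Bitmap Capture() => new Bitmap(640, 480);
    private bool MotionDetected(Bitmap f) => false;
    private void ApplyDetection(Bitmap f) { }
    private void Render(Bitmap f) { }
}
```

The key design choice is the bounded capacity: if detection falls more than 6 seconds behind, the capture side blocks (or you could drop frames instead), so the delay stays within your budget.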

Related

Performance-wise, is it Necessary to Stop Drawing Textures That Aren't On Screen Anymore?

I pretty much put the question in the title. Would it help performance-wise if I stopped drawing targets that aren't on the screen anymore? What I mean by this is:
if (textureLocation is on the screen)
{
    // draw code here
}
Or is it so insignificant (if at all) that it doesn't matter?
Thanks,
Shyy
Depends. Ultimately, time spent comes down to three things: sending data to the GPU, vertex shading, and pixel shading.
If the texture is located on a spritesheet that has other textures that are being drawn on screen, and the off-screen draw call is within the same .Begin() .End() block as those others, it won't hurt performance, since it takes just as long to send data and set the GPU up for the spritesheet. The 4 off-screen vertices will run through the vertex shader, but that is not a bottleneck. The graphics pipeline culls off-screen objects between the vertex shader and pixel shader, so it won't spend any time in the pixel shader.
But if it is a stand-alone texture or in its own .Begin() .End() block, it will cost time sending its data to the GPU even though the GPU will cull it.
Whether it is significant or not, only profiling can tell you.
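For the stand-alone-texture case, the check in the question is just a rectangle intersection against the viewport before issuing the draw call. A sketch of that idea in XNA (the helper name and parameters are illustrative):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static class SpriteCulling
{
    // Skip the draw call entirely when the sprite's bounds don't
    // intersect the viewport. Only worth doing for stand-alone
    // textures / separate Begin()..End() blocks, per the answer above.
    public static void DrawIfVisible(
        SpriteBatch batch, Texture2D texture, Vector2 position, Viewport viewport)
    {
        var bounds = new Rectangle(
            (int)position.X, (int)position.Y, texture.Width, texture.Height);
        var screen = new Rectangle(0, 0, viewport.Width, viewport.Height);

        if (screen.Intersects(bounds))
            batch.Draw(texture, position, Color.White);
    }
}
```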

Floor draw method in isometric engine

I'm working on an isometric 2D tile engine for an RTS game. I have two ways I can draw the floor. One option is one big image (for example, 8000px x 8000px, about 10 MB); the second option is to draw images tile by tile, only in the visible area.
My question is: which is better (for performance)?
Performance-wise and memory-wise, a tiled approach is better.
Memory-wise: If you can use a single spritesheet to hold the textures of every tile you need to render, then the amount of memory used would decrease tremendously - as opposed to redefining textures for tiles you want to render more than once. Also, on every texture there is an attribute called "pitch". This attribute tells us how much more memory is being used than the image actually needs. What? Why would my program be doing this? Back in the good old days, when Ben Kenobi was still called Obi Wan Kenobi, textures took up the memory they were supposed to. But now, with hardware acceleration, the GPU adds some padding to your texture to make it align with boundaries that it can process faster. This is memory you can reduce with the use of a spritesheet.
From a performance standpoint: Whenever you draw a regular sprite to the screen, the graphics hardware requires three main pieces of information: 1) The texture you want to render from. 2) What part of that texture you want to render from. 3) Where on the screen you want to render to. Repeat for every object you want to render. With a spritesheet, it only passes data once - a big performance increase because passing data from the CPU to the GPU (and vice-versa) is really slow.
And I disagree with the two comments, actually. Making a change of this caliber would be difficult when your program is mature.
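The tile-by-tile approach boils down to computing the range of tile indices that overlap the camera rectangle and drawing only those, with each tile's source rectangle taken from one spritesheet. A sketch, assuming a square tile size and a single row of tiles in the sheet (the method and map layout are illustrative):

```csharp
using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static class TileRenderer
{
    public static void DrawVisibleTiles(SpriteBatch batch, Texture2D spriteSheet,
                                        int[,] map, int tileSize, Rectangle camera)
    {
        // Range of tile indices that overlap the camera, clamped to the map.
        int firstCol = Math.Max(0, camera.Left / tileSize);
        int firstRow = Math.Max(0, camera.Top / tileSize);
        int lastCol  = Math.Min(map.GetLength(1) - 1, camera.Right / tileSize);
        int lastRow  = Math.Min(map.GetLength(0) - 1, camera.Bottom / tileSize);

        for (int row = firstRow; row <= lastRow; row++)
        for (int col = firstCol; col <= lastCol; col++)
        {
            int tileId = map[row, col];
            // Source rectangle inside the spritesheet (one row of tiles assumed).
            var src  = new Rectangle(tileId * tileSize, 0, tileSize, tileSize);
            var dest = new Vector2(col * tileSize - camera.X,
                                   row * tileSize - camera.Y);
            batch.Draw(spriteSheet, dest, src, Color.White);
        }
    }
}
```

Even on an 8000x8000 map, the inner loop only touches the few hundred tiles that fit on screen.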

Windows Phone 7 onDraw

I am developing a game. I would like to draw a couple of textures on the screen. I successfully achieved that by drawing a Texture2D using SpriteBatch.
The thing is, the textures are displayed only if I put the code in the onDraw method. The onDraw method is bound to the timer, so it executes many times. I would like to draw my rectangles only once.
When I put the code in the constructor, the rectangles are not displayed; they show up only when I put the code in the onDraw function. How can I avoid that?
There isn't really any sensible option for doing this in XNA. Most games want to re-draw the entire screen each frame - so that is how XNA is structured. (And, without a really compelling reason, this is how you should structure your game too.)
XNA is double-buffered (I don't think there's a way to turn that off). You do your drawing on the back-buffer and then swap it with the front buffer. You never draw to the screen directly.
So, while you don't have to clear the screen and re-draw it on each frame, if you don't you must manually keep the contents of these two buffers in-sync - otherwise you will get severe flickering. This is not worth the effort.
What you may be looking for is the Game.SuppressDraw method (call it from Update - or - as an alternative: override BeginDraw and return false). This will prevent Draw from being called for that particular frame, and prevent the back-buffer from being swapped to the front. So the previous frame simply stays on-screen.
But it's generally easier to simply draw every single frame.
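The SuppressDraw route can be sketched in a few lines; the `_sceneChanged` flag here is a hypothetical dirty flag you would set whenever something actually needs redrawing:

```csharp
using Microsoft.Xna.Framework;

public class MyGame : Game
{
    private bool _sceneChanged = true;  // draw at least the first frame

    protected override void Update(GameTime gameTime)
    {
        if (!_sceneChanged)
            SuppressDraw();   // skip Draw and the buffer swap this frame;
                              // the previous frame stays on screen

        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        // ... draw the rectangles here ...
        _sceneChanged = false;  // nothing new until the flag is set again
        base.Draw(gameTime);
    }
}
```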

Aforge Blob Detection

How do I detect non-moving blobs in a video?
Let's say I have a video and an initial background frame bitmap. Is it possible to detect a blob/object which is NOT MOVING, and draw a rectangle around that object?
This reminds me of an algorithm to detect forgotten objects on a subway. If I'm not wrong, you want to detect objects that are not moving AND that were not in the initial background, right? You can apply this approach:
With an initial image like this (I couldn't find a truly empty subway image):
And an image with an added static object (the waste can), the subway moving, and a person waiting, probably moving a little:
After an Image > Thresholded Difference (http://www.aforgenet.com/framework/docs/html/322123cf-39df-0ae8-6434-29cceb6a54e1.htm) we will get something like:
Note how the waste can appears along with other objects that were not there. If you apply this same process several times, let's say every 10 seconds, and then an Image > Intersect (http://www.aforgenet.com/framework/docs/html/7244211d-e882-09b1-965d-f820375af8be.htm), you will end up with something like this after a few minutes:
You can easily get the coordinates of this object with an Image > Connected Component Labeling (http://www.aforgenet.com/framework/docs/html/240525ea-c114-8b0a-f294-508aae3e95eb.htm).
Drawbacks of this approach:
Needs some time (minutes if you take a snapshot every 10 seconds, seconds for more frequent snapshots) to detect the objects.
Will take even more time to detect an object that has a similar color to the background; you can easily notice this drawback in the upper part of the can, which is also white, like the wall.
This is a solution I have in mind, and I'm not sure it works properly:
1. Run any pre-required filters and algorithms to prepare for blob detection.
2. Run the blob detection algorithm and save all the blobs in an array.
3. Find the center and the area size of each blob.
4. Compare the current frame's blob data with the previous blobs (their centers and sizes).
5. If the changes are within an acceptable range, they are the unmoved blobs.
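The first approach can be sketched with AForge.NET's filters; the filter and class names below are from the AForge docs linked above, while the snapshot cadence, threshold value, and class name are assumptions:

```csharp
using System.Drawing;
using AForge.Imaging;
using AForge.Imaging.Filters;

class StaticObjectDetector
{
    private readonly Bitmap _background;   // initial empty-scene frame, grayscale
    private Bitmap _accumulated;           // running intersection of differences

    public StaticObjectDetector(Bitmap background)
    {
        _background = Grayscale.CommonAlgorithms.BT709.Apply(background);
    }

    // Call periodically, e.g. every 10 seconds, with the current frame.
    public void AddSnapshot(Bitmap frame)
    {
        Bitmap gray = Grayscale.CommonAlgorithms.BT709.Apply(frame);

        // Thresholded Difference: pixels that differ from the background.
        var diffFilter = new ThresholdedDifference(15) { OverlayImage = _background };
        Bitmap diff = diffFilter.Apply(gray);

        if (_accumulated == null)
        {
            _accumulated = diff;            // first snapshot seeds the accumulator
        }
        else
        {
            // Intersect: keep only pixels present in every snapshot,
            // i.e. objects that never moved.
            var intersect = new Intersect(diff);
            _accumulated = intersect.Apply(_accumulated);
        }
    }

    // Connected component labeling: bounding boxes of the static blobs.
    public Rectangle[] GetStaticObjectRectangles()
    {
        var counter = new BlobCounter();
        counter.ProcessImage(_accumulated);
        return counter.GetObjectsRectangles();
    }
}
```

You would then draw each returned Rectangle over the video frame to mark the non-moving objects.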

discrete-event simulation example

I want to perform a discrete-event simulation in C#. I want three balls rolling on the screen simultaneously, following a random walk pattern. At time 1, ball 1 should appear and start rolling; at time 5, ball 2; and at time 10, ball 3 should appear. When any two balls come close enough, the color of the balls should change (as long as they stay close).
I am very new to discrete-event simulation, and I want to understand how to do it in C#. What steps are required in creating the model? I know the graphics and other stuff.
Newcomers, be advised:
Using operating system timers or threads is NOT the way discrete event simulations should work. Using one of these as a building block might be misleading or plainly wrong.
Read the Wikipedia article about Discrete Event Simulation (DES) first.
There are "models", so-called "formalisms", that are mathematically proven to work in event simulation. You need to implement one (for example, DEVS).
You may want to look at the List of discrete event simulation software on Wikipedia.
You may also find the sigmawiki useful (programs, examples, tutorials) about DES. SharpSim and React.NET are DES implementations in C#.
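The core of any DES, regardless of formalism, is a simulation clock plus a time-ordered future-event list: pop the earliest event, advance the clock to its timestamp, execute its handler (which may schedule further events). A minimal sketch of that principle, not a full DEVS implementation (the class and the ball-spawning usage are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Simulation
{
    // Future-event list: a time-ordered map of event handlers.
    private readonly SortedDictionary<double, Queue<Action>> _events =
        new SortedDictionary<double, Queue<Action>>();

    public double Clock { get; private set; }

    public void Schedule(double time, Action handler)
    {
        if (!_events.TryGetValue(time, out var queue))
            _events[time] = queue = new Queue<Action>();
        queue.Enqueue(handler);
    }

    public void Run()
    {
        while (_events.Count > 0)
        {
            // Pop the earliest event, jump the clock to its time, execute.
            var first = _events.First();
            Clock = first.Key;
            var handler = first.Value.Dequeue();
            if (first.Value.Count == 0)
                _events.Remove(first.Key);
            handler();  // handlers may call Schedule() to create new events
        }
    }
}

// Usage: the balls appear as scheduled events, not on wall-clock timers.
//   var sim = new Simulation();
//   sim.Schedule(1,  () => SpawnBall(1));
//   sim.Schedule(5,  () => SpawnBall(2));
//   sim.Schedule(10, () => SpawnBall(3));
//   sim.Run();
```

Note that the clock jumps from event to event; simulated time is decoupled from real time, which is the defining property of DES that timer-based approaches lose.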
Use a Timer (drag one from the Toolbox over to your form in the designer, or instantiate it in code if you prefer). Double click the timer to set a _Tick event in your code which will fire every N milliseconds (the .Interval property of the timer governs this). Set the .Interval to 1000 (1 second), and use objects that keep track of their own position in X and Y coordinates.
Use a Random object to generate the direction of the next position change of the ball, and within the _Tick event of the timer, update the position variables for each of the balls.
Using raw threads is a possibility, too, but the Timer gives you some of that power without having to manage everything yourself.
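A sketch of that Timer approach for a single ball (the class, field names, and step size are illustrative; each ball would keep its own position):

```csharp
using System;
using System.Windows.Forms;

class BallWalk
{
    private readonly Timer _timer = new Timer { Interval = 1000 }; // 1 second
    private readonly Random _rng = new Random();
    private int _x, _y;

    public void Start()
    {
        _timer.Tick += (sender, e) =>
        {
            // Random walk: step -1, 0, or +1 on each axis per tick.
            _x += _rng.Next(-1, 2);
            _y += _rng.Next(-1, 2);
            // invalidate/redraw the ball at (_x, _y) here
        };
        _timer.Start();
    }
}
```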
