image processing techniques - direct manipulation of destination image or virtualized? - c#

I need to re-project a series of aerial images that have been referenced in a geographical coordinate system into a UTM projection. I have read that using GetPixel and SetPixel might be slow - should I set up a series of two-dimensional arrays for intermediate access and then flush the values to the destination image when I am done?
Is this how this sort of image processing is normally done by professionals?

Most image processing is feature detection, segmentation of a scene, fault finding, classification and tracking ....
You might want to take a peek at these books:
Image Processing in C (applicable for other languages too)
Image Processing - Principles and Applications
Both describe many fast and effective image transformations; they helped me a lot when I was processing images :)
If I understand your question ... If you are re-aligning or assembling many images, and you don't have orientation as well as position, you can use these algorithms for re-alignment of edges and common features. If you are stitching by position then these algorithms will help in re-sampling/resizing your images for more efficient assembly. There are also some open source libraries for these kinds of things. (OpenCV comes to mind)
edit: If I were re-projecting large images into new projections based on position conversion (and it were dynamic, not static) I would look into building an on-demand application that will refactor images given required resolution and desired position. The application can then pull the nearest resolution of the relative neighbourhood images and provide a result at the desired resolution.
Without more background, I hope this helps!
edit 2:
Comment from answer below:
Depends on the images. If they are fixed size then an array might be good. If they vary then it might be better to implement a system that provides get/setpixel using relative sampling/averaging to match up images of differing res?
I don't know the ins and outs of the images you are working with, and what you are doing, but often abstracting what a 'pixel' is, rather than accessing values in an array, is useful. This way you can implement conversion, sampling, rotating, and correcting algorithms on the backend, like GetVPixel() or SetVPixel(). This may be more useful when working with multiple images of differing resolution/format, e.g.
SetVPixel(img1, coord1, GetVPixel(img2, coord2))
Obviously in an OOP/C# manner. img1 and img2 can be different in size, resolution, geographics, alignment or anything else, provided your backend understands both.
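A minimal sketch of that virtual-pixel abstraction (the interface and member names are illustrative, not an existing API):
using System.Drawing;

// Coordinates are expressed in a shared reference frame (e.g. geographic),
// not raw array indices; each implementation handles its own resolution,
// pixel format and alignment internally.
public interface IVirtualImage
{
    Color GetVPixel(double u, double v);              // sample, interpolating if needed
    void SetVPixel(double u, double v, Color color);
}
With that in place, copying between two images of different size, resolution or projection is just img1.SetVPixel(u1, v1, img2.GetVPixel(u2, v2)), and the re-sampling details stay hidden in the implementations.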

If you don't mind using unsafe code, you can wrap the Bitmap's BitmapData in an object that allows you to efficiently get and set pixels. The below code is mostly taken from a gaussian blur filter, with a couple of modifications of my own. It's not the most flexible code if your bitmap formats differ but I hope it illustrates how you can manipulate bitmaps more efficiently.
public unsafe class RawBitmap : IDisposable
{
    private BitmapData _bitmapData;
    private byte* _begin;

    public RawBitmap(Bitmap originBitmap)
    {
        OriginBitmap = originBitmap;
        _bitmapData = OriginBitmap.LockBits(new Rectangle(0, 0, OriginBitmap.Width, OriginBitmap.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
        _begin = (byte*)(void*)_bitmapData.Scan0;
    }

    #region IDisposable Members

    public void Dispose()
    {
        OriginBitmap.UnlockBits(_bitmapData);
    }

    #endregion

    public unsafe byte* Begin
    {
        get { return _begin; }
    }

    public unsafe byte* this[int x, int y]
    {
        get { return _begin + y * _bitmapData.Stride + x * 3; }
    }

    public unsafe byte* this[int x, int y, int offset]
    {
        get { return _begin + y * _bitmapData.Stride + x * 3 + offset; }
    }

    public unsafe void SetColor(int x, int y, Color color)
    {
        byte* p = this[x, y];
        p[0] = color.B;
        p[1] = color.G;
        p[2] = color.R;
    }

    public unsafe Color GetColor(int x, int y)
    {
        byte* p = this[x, y];
        return Color.FromArgb(p[2], p[1], p[0]);
    }

    public int Stride
    {
        get { return _bitmapData.Stride; }
    }

    public int Width
    {
        get { return _bitmapData.Width; }
    }

    public int Height
    {
        get { return _bitmapData.Height; }
    }

    public int GetOffset()
    {
        return _bitmapData.Stride - _bitmapData.Width * 3;
    }

    public Bitmap OriginBitmap { get; private set; }
}

The FreeImage library is pretty fast and offers a Cut and Paste that might be useful. The distribution comes with a C# wrapper.

AFAIK the overhead of GetPixel/SetPixel is the call itself; when accessing an array there is no call, hence less overhead.
You should start with GetPixel/SetPixel; you can always override those calls later to use direct data access.

Related

What governs DC scaling?

This code gets different scaling depending on which computer I run it on.
Metafile image;
IntPtr dib;
var memoryHdc = Win32Utils.CreateMemoryHdc(IntPtr.Zero, 1, 1, out dib);
try
{
    image = new Metafile(memoryHdc, EmfType.EmfOnly);
    using (var g = Graphics.FromImage(image))
    {
        Render(g, html, left, top, maxWidth, cssData, stylesheetLoad, imageLoad);
    }
}
finally
{
    Win32Utils.ReleaseMemoryHdc(memoryHdc, dib);
}
Going into the Render method, the Metafile object has a PixelFormat of DontCare and consequently does not have valid vertical or horizontal resolutions.
Coming out of the Render method, it has a value of Format32bppRgb and PhysicalDimension.Width and PhysicalDimension.Height have increased to accommodate the rendered image.
How can I make scaling independent of local settings?
Here's the implementation of CreateMemoryHdc (I didn't write it, it's from an OSS library).
public static IntPtr CreateMemoryHdc(IntPtr hdc, int width, int height, out IntPtr dib)
{
    // Create a memory DC so we can work off-screen
    IntPtr memoryHdc = CreateCompatibleDC(hdc);
    SetBkMode(memoryHdc, 1);

    // Create a device-independent bitmap and select it into our DC
    var info = new BitMapInfo();
    info.biSize = Marshal.SizeOf(info);
    info.biWidth = width;
    info.biHeight = -height;
    info.biPlanes = 1;
    info.biBitCount = 32;
    info.biCompression = 0; // BI_RGB
    IntPtr ppvBits;
    dib = CreateDIBSection(hdc, ref info, 0, out ppvBits, IntPtr.Zero, 0);
    SelectObject(memoryHdc, dib);

    return memoryHdc;
}
As you can see, the width, height and bit depth passed to the DC constructor are constant. Creating the metafile produces different physical dimensions. Right after executing this
image = new Metafile(memoryHdc, EmfType.EmfOnly);
the metafile has PhysicalDimension.Height (and width) of 26.43 on my workstation and 31.25 on the server to which I am deploying, so the difference in scaling is already evident and therefore probably not a consequence of anything in the rendering.
This may be relevant. BitMapInfo is defined in the OSS library and looks like this:
internal struct BitMapInfo
{
    public int biSize;
    public int biWidth;
    public int biHeight;
    public short biPlanes;
    public short biBitCount;
    public int biCompression;
    public int biSizeImage;
    public int biXPelsPerMeter;
    public int biYPelsPerMeter;
    public int biClrUsed;
    public int biClrImportant;
    public byte bmiColors_rgbBlue;
    public byte bmiColors_rgbGreen;
    public byte bmiColors_rgbRed;
    public byte bmiColors_rgbReserved;
}
so possibly setting biXPelsPerMeter and biYPelsPerMeter will help. The above code doesn't set them and may be allowing platform values.
Unfortunately, setting these values doesn't seem to make any difference. MSDN says:
biXPelsPerMeter
The horizontal resolution, in pixels-per-meter, of the
target device for the bitmap. An application can use this value to
select a bitmap from a resource group that best matches the
characteristics of the current device.
So these settings are used when loading a bitmap from a resource. No help here.
This all looks pertinent: https://www.codeproject.com/Articles/177394/Working-with-Metafile-Images-in-NET
It may help to know that this code does not run in an application. It renders HTML as a metafile for printing, and it lives inside a Web API webservice.
There is no user interface so I'm not sure how to interpret the question of whether it is DPI Aware. The evidence suggests it's DPI affected so the question may be pertinent.
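For reference, here's a quick way to check what DPI the memory DC actually reports (the P/Invoke declaration is added here for illustration; GetDeviceCaps and the LOGPIXELSX/Y indices are standard GDI):
[System.Runtime.InteropServices.DllImport("gdi32.dll")]
static extern int GetDeviceCaps(IntPtr hdc, int nIndex);

const int LOGPIXELSX = 88;
const int LOGPIXELSY = 90;

// ... right after CreateMemoryHdc:
int dpiX = GetDeviceCaps(memoryHdc, LOGPIXELSX);
int dpiY = GetDeviceCaps(memoryHdc, LOGPIXELSY);
Console.WriteLine("Memory DC reports {0} x {1} DPI", dpiX, dpiY);
Since CreateCompatibleDC(IntPtr.Zero) creates a DC compatible with the screen, the DPI it reports follows the machine's display settings, which would explain why the scaling differs between the workstation and the server.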
GDI doesn't scale. Use GDI+ for device independence. You will lose antialiasing but most print devices are high DPI anyway.
Does the library in use have an option to use GDI+ instead?
(In my own case, yes. Problem solved.)

Bitmap.Clone(Rectangle, PixelFormat) - OutOfMemoryException [duplicate]

Why am I getting an out of memory exception?
So this dies in C# on the first time through:
splitBitmaps.Add(neededImage.Clone(rectDimensions, neededImage.PixelFormat));
Where splitBitmaps is a List<Bitmap>, BUT this works in VB for at least 4 iterations:
arlSplitBitmaps.Add(Image.Clone(rectDimensions, Image.PixelFormat))
Where arlSplitBitmaps is a simple ArrayList. (And yes, I've tried ArrayList in C#.)
This is the full section:
for (Int32 splitIndex = 0; splitIndex <= numberOfResultingImages - 1; splitIndex++)
{
    Rectangle rectDimensions;
    if (splitIndex < numberOfResultingImages - 1)
    {
        rectDimensions = new Rectangle(splitImageWidth * splitIndex, 0,
            splitImageWidth, splitImageHeight);
    }
    else
    {
        rectDimensions = new Rectangle(splitImageWidth * splitIndex, 0,
            sourceImageWidth - (splitImageWidth * splitIndex), splitImageHeight);
    }

    splitBitmaps.Add(neededImage.Clone(rectDimensions, neededImage.PixelFormat));
}
neededImage is a Bitmap by the way.
I can't find any useful answers on the intarweb, especially not why it works just fine in VB.
Update:
I actually found a reason (sort of) for this working but forgot to post it. It has to do with converting the image to a Bitmap instead of just trying to clone the raw Image, if I remember correctly.
Clone() may also throw an Out of memory exception when the coordinates specified in the Rectangle are outside the bounds of the bitmap. It will not clip them automatically for you.
I found that I was using Image.Clone to crop a bitmap and the width took the crop outside the bounds of the original image. This causes an Out of Memory error. Seems a bit strange, but it can be worth knowing.
I got this too when I tried to use the Clone() method to change the pixel format of a bitmap. If memory serves, I was trying to convert a 24 bpp bitmap to an 8 bit indexed format, naively hoping that the Bitmap class would magically handle the palette creation and so on. Obviously not :-/
This is a reach, but I've often found that if pulling images directly from disk that it's better to copy them to a new bitmap and dispose of the disk-bound image. I've seen great improvement in memory consumption when doing so.
Dave M. is on the money too... make sure to dispose when finished.
I struggled to figure this out recently - the answers above are correct. Key to solving this issue is to ensure the rectangle is actually within the boundaries of the image. See example of how I solved this.
In a nutshell, I checked whether the area being cloned fell outside the area of the image.
// Clamp the crop rectangle so it never extends past the image bounds.
int totalWidth = rect.Left + rect.Width;        // think: the same as the Right property
int finalWidth = rect.Width;

if (totalWidth > localImage.Width)
{
    finalWidth = localImage.Width - rect.Left;
}
rect.Width = finalWidth;

int totalHeight = rect.Top + rect.Height;       // think: the same as the Bottom property
int finalHeight = rect.Height;

if (totalHeight > localImage.Height)
{
    finalHeight = localImage.Height - rect.Top;
}
rect.Height = finalHeight;

cropped = ((Bitmap)localImage).Clone(rect, System.Drawing.Imaging.PixelFormat.DontCare);
Make sure that you're calling .Dispose() properly on your images, otherwise unmanaged resources won't be freed up. I wonder how many images are you actually creating here -- hundreds? Thousands?

SVGs in C#, draw multiple complex rectangles

I'm creating a Gantt chart to show hundreds of calendars for individual instances of orders, currently using an algorithm to draw lines and rectangles to create a grid. The problem is the bitmaps are becoming far too large to draw and are taking up RAM. I've tried multiple different methods, including drawing the bitmaps at half size and scaling them up (which comes out horribly fuzzy), and they are still too large.
I want to be able to draw SVGs, as I figure something that draws large simple shapes should reduce the size dramatically compared to bitmaps.
The problem is I can't find anything on MSDN that includes any sort of C# library for drawing SVGs, and I don't want to use external code.
Do I need to create it in XAML, or is there a library similar to how bitmaps are drawn?
Windows Forms = GDI / GDI+
WPF/XAML = DirectX (where possible)
Best bet is to go with WPF/XAML which supports scalable vector graphics (not the same as the .svg file format)
You will need 3rd party code to do SVG in WinForms.
If you are sticking with WinForms, then bitmapping is the only way this can be achieved really. Take a look at PixelFormat - you might be able to reduce the size of your bitmap in memory by using a format which requires fewer bits-per-pixel for example.
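For example, the same canvas at a lower bits-per-pixel format uses roughly half the memory of the default 32bpp bitmap (the sizes and format below are just an illustration; indexed formats are smaller still but can't be drawn on with Graphics):
using System.Drawing;
using System.Drawing.Imaging;

// 8000 x 2000 at 32bpp is ~61 MB; the same canvas at 16bpp is ~30 MB.
using (var chart = new Bitmap(8000, 2000, PixelFormat.Format16bppRgb565))
using (var g = Graphics.FromImage(chart))
{
    g.Clear(Color.White);
    g.FillRectangle(Brushes.SteelBlue, 100, 40, 400, 20); // one Gantt bar
}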
There is no need to use external tools or SVGs. With a bit of simple math you can easily just render the necessary parts you want to display. All you need is to know the grid size, the range of dates and the range of line-items that are visible in your view. Let's call them:
DateTime dispStartDate;
DateTime dispEndDate;
int dispStartItem;
int dispEndItem;
int GridSize = 10; //nifty if you'd like a magnification factor
Let's also say you have a class for a Gantt chart item:
class gItem
{
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
    public int LineNumber { get; set; }
    public int Length { get { return (EndDate - StartDate).Days; } }
    //some other code and stuff you'd like to add
}
Now you need a list containing all of your Gantt chart entries:
List<gItem> GanttItems;
By now you should have assigned values to each of the above variables, now it's time to generate a list of rectangles that would be visible in the view and draw them:
List<Rectangle> EntryRects = new List<Rectangle>();

void UpdateDisplayBounds()
{
    foreach (gItem gEntry in GanttItems)
    {
        if (gEntry.StartDate < dispEndDate && gEntry.EndDate > dispStartDate
            && gEntry.LineNumber >= dispStartItem && gEntry.LineNumber <= dispEndItem)
        {
            int x = (gEntry.StartDate - dispStartDate).Days * GridSize;
            int y = (gEntry.LineNumber - dispStartItem) * GridSize;
            int width = gEntry.Length * GridSize;
            int height = GridSize;
            EntryRects.Add(new Rectangle(x, y, width, height));
        }
    }
}
Now you have a list of rectangles only within the display bounds which you can render. So let's draw:
void DrawRectangles(Graphics canvas) // use a PictureBox's Graphics or similar for the canvas
{
    canvas.Clear(this.BackColor);
    using (SolidBrush b = new SolidBrush(Color.Blue)) // choose your color
    {
        foreach (Rectangle r in EntryRects)
        {
            canvas.FillRectangle(b, r);
        }
    }
}
The above code should get you started. With this you have a list of rectangles that you render on request and the only image taking space in memory is the currently displayed one.

c# screen transfer over socket efficient improve ways

That's how I wrote your beautiful code (some simple changes for easier understanding):
private void Form1_Load(object sender, EventArgs e)
{
    prev = GetDesktopImage(); // get a screenshot of the desktop
    cur = GetDesktopImage();  // get a screenshot of the desktop
    var locked1 = cur.LockBits(new Rectangle(0, 0, cur.Width, cur.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
    var locked2 = prev.LockBits(new Rectangle(0, 0, prev.Width, prev.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
    ApplyXor(locked1, locked2);
    compressionBuffer = new byte[1920 * 1080 * 4];

    // Compressed buffer -- where the data goes that we'll send.
    int backbufSize = LZ4.LZ4Codec.MaximumOutputLength(this.compressionBuffer.Length) + 4;
    backbuf = new CompressedCaptureScreen(backbufSize);

    MessageBox.Show(compressionBuffer.Length.ToString());
    int length = Compress();
    MessageBox.Show(backbuf.Data.Length.ToString()); // prints the new buffer size
}
The compression buffer length is, for example, 8294400, and backbuf.Data.Length is 8326947.
I didn't like the compression suggestions, so here's what I would do.
You don't want to compress a video stream (so MPEG, AVI, etc are out of the question -- these don't have to be real-time) and you don't want to compress individual pictures (since that's just stupid).
Basically what you want to do is detect if things change and send the differences. You're on the right track with that; most video compressors do that. You also want a fast compression/decompression algorithm; especially if you go to more FPS that will become more relevant.
Differences. First off, eliminate all branches in your code, and make sure memory access is sequential (e.g. iterate x in the inner loop). The latter will give you cache locality. As for the differences, I'd probably use a 64-bit XOR; it's easy, branchless and fast.
If you want performance, it's probably better to do this in C++: The current C# implementation doesn't vectorize your code, and that will help you a great deal here.
Do something like this (I'm assuming 32bit pixel format):
for (int y = 0; y < height; ++y) // change to Parallel.For if you like
{
    ulong* row1 = (ulong*)(image1BasePtr + image1Stride * y);
    ulong* row2 = (ulong*)(image2BasePtr + image2Stride * y);
    for (int x = 0; x < width / 2; ++x) // each ulong covers 2 pixels of 32bpp data
        row2[x] ^= row1[x];
}
Fast compression and decompression usually means simpler compression algorithms. https://code.google.com/p/lz4/ is such an algorithm, and there's a proper .NET port available for that as well. You might want to read on how it works too; there is a streaming feature in LZ4 and if you can make it handle 2 images instead of 1 that will probably give you a nice compression boost.
All in all, if you're trying to compress white noise, it simply won't work and your frame rate will drop. One way to solve this is to reduce the colors if you have too much 'randomness' in a frame. A measure for randomness is entropy, and there are several ways to get a measure of the entropy of a picture ( https://en.wikipedia.org/wiki/Entropy_(information_theory) ). I'd stick with a very simple one: check the size of the compressed picture -- if it's above a certain limit, reduce the number of bits; if below, increase the number of bits.
Note that increasing and decreasing bits is not done with shifting in this case; you don't need your bits to be removed, you simply need your compression to work better. It's probably just as good to use a simple 'AND' with a bitmask. For example, if you want to drop 2 bits, you can do it like this:
for (int y = 0; y < height; ++y) // change to Parallel.For if you like
{
    ulong* row1 = (ulong*)(image1BasePtr + image1Stride * y);
    ulong* row2 = (ulong*)(image2BasePtr + image2Stride * y);
    ulong mask = 0xFFFCFCFCFFFCFCFC;
    for (int x = 0; x < width / 2; ++x) // each ulong covers 2 pixels of 32bpp data
        row2[x] = (row2[x] ^ row1[x]) & mask;
}
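If you use the compressed size as the entropy measure, a rough sketch of that feedback loop could look like this (the mask table and threshold values are placeholders, not tuned numbers):
// Masks that keep 4, 5, 6 or all 8 bits per colour channel (alpha untouched).
ulong[] masks =
{
    0xFFF0F0F0FFF0F0F0,
    0xFFF8F8F8FFF8F8F8,
    0xFFFCFCFCFFFCFCFC,
    0xFFFFFFFFFFFFFFFF
};
int maskIndex = masks.Length - 1;

void AdjustQuality(int compressedSize)
{
    const int upperLimit = 256 * 1024;  // frame compressed too big: drop a bit per channel
    const int lowerLimit = 64 * 1024;   // plenty of headroom: add a bit back

    if (compressedSize > upperLimit && maskIndex > 0)
        maskIndex--;
    else if (compressedSize < lowerLimit && maskIndex < masks.Length - 1)
        maskIndex++;
}
masks[maskIndex] is then the mask you plug into the XOR loop above for the next frame.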
PS: I'm not sure what I would do with the alpha component, I'll leave that up to your experimentation.
Good luck!
The long answer
I had some time to spare, so I just tested this approach. Here's some code to support it all.
This code normally runs at over 130 FPS with a nice constant memory pressure on my laptop, so the bottleneck shouldn't be here anymore. Note that you need LZ4 to get this working and that LZ4 is aimed at high speed, not high compression ratios. A bit more on that later.
First we need something that we can use to hold all the data we're going to send. I'm not implementing the sockets stuff itself here (although that should be pretty simple using this as a start), I mainly focused on getting the data you need to send something over.
// The thing you send over a socket
public class CompressedCaptureScreen
{
    public CompressedCaptureScreen(int size)
    {
        this.Data = new byte[size];
        this.Size = 4;
    }

    public int Size;
    public byte[] Data;
}
We also need a class that will hold all the magic:
public class CompressScreenCapture
{
Next, if I'm running high performance code, I make it a habit to preallocate all the buffers first. That'll save you time during the actual algorithmic stuff. 4 buffers of 1080p is about 33 MB, which is fine - so let's allocate that.
public CompressScreenCapture()
{
    // Initialize with black screen; get bounds from screen.
    this.screenBounds = Screen.PrimaryScreen.Bounds;

    // Initialize 2 buffers - 1 for the current and 1 for the previous image
    prev = new Bitmap(screenBounds.Width, screenBounds.Height, PixelFormat.Format32bppArgb);
    cur = new Bitmap(screenBounds.Width, screenBounds.Height, PixelFormat.Format32bppArgb);

    // Clear the 'prev' buffer - this is the initial state
    using (Graphics g = Graphics.FromImage(prev))
    {
        g.Clear(Color.Black);
    }

    // Compression buffer -- we don't really need this but I'm lazy today.
    compressionBuffer = new byte[screenBounds.Width * screenBounds.Height * 4];

    // Compressed buffer -- where the data goes that we'll send.
    int backbufSize = LZ4.LZ4Codec.MaximumOutputLength(this.compressionBuffer.Length) + 4;
    backbuf = new CompressedCaptureScreen(backbufSize);
}

private Rectangle screenBounds;
private Bitmap prev;
private Bitmap cur;
private byte[] compressionBuffer;
private int backbufSize;
private CompressedCaptureScreen backbuf;
private int n = 0;
First thing to do is capture the screen. This is the easy part: simply fill the bitmap of the current screen:
private void Capture()
{
    // Fill 'cur' with a screenshot
    using (var gfxScreenshot = Graphics.FromImage(cur))
    {
        gfxScreenshot.CopyFromScreen(screenBounds.X, screenBounds.Y, 0, 0, screenBounds.Size, CopyPixelOperation.SourceCopy);
    }
}
As I said, I don't want to compress 'raw' pixels. Instead, I'd much rather compress XOR masks of previous and the current image. Most of the times this will give you a whole lot of 0's, which is easy to compress:
private unsafe void ApplyXor(BitmapData previous, BitmapData current)
{
    byte* prev0 = (byte*)previous.Scan0.ToPointer();
    byte* cur0 = (byte*)current.Scan0.ToPointer();

    int height = previous.Height;
    int width = previous.Width;
    int halfwidth = width / 2;

    fixed (byte* target = this.compressionBuffer)
    {
        ulong* dst = (ulong*)target;

        for (int y = 0; y < height; ++y)
        {
            ulong* prevRow = (ulong*)(prev0 + previous.Stride * y);
            ulong* curRow = (ulong*)(cur0 + current.Stride * y);

            for (int x = 0; x < halfwidth; ++x)
            {
                *(dst++) = curRow[x] ^ prevRow[x];
            }
        }
    }
}
For the compression algorithm I simply pass the buffer to LZ4 and let it do its magic.
private int Compress()
{
    // Grab the backbuf in an attempt to update it with new data
    var backbuf = this.backbuf;

    backbuf.Size = LZ4.LZ4Codec.Encode(
        this.compressionBuffer, 0, this.compressionBuffer.Length,
        backbuf.Data, 4, backbuf.Data.Length - 4);

    Buffer.BlockCopy(BitConverter.GetBytes(backbuf.Size), 0, backbuf.Data, 0, 4);

    return backbuf.Size;
}
One thing to note here is that I make it a habit to put everything in my buffer that I need to send over the TCP/IP socket. I don't want to move data around if I can easily avoid it, so I'm simply putting everything that I need on the other side there.
As for the sockets itself, you can use a-sync TCP sockets here (I would), but if you do, you will need to add an extra buffer.
The only thing that remains is to glue everything together and put some statistics on the screen:
public void Iterate()
{
    Stopwatch sw = Stopwatch.StartNew();

    // Capture a screen:
    Capture();
    TimeSpan timeToCapture = sw.Elapsed;

    // Lock both images:
    var locked1 = cur.LockBits(new Rectangle(0, 0, cur.Width, cur.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
    var locked2 = prev.LockBits(new Rectangle(0, 0, prev.Width, prev.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
    try
    {
        // Xor screen:
        ApplyXor(locked2, locked1);
        TimeSpan timeToXor = sw.Elapsed;

        // Compress screen:
        int length = Compress();
        TimeSpan timeToCompress = sw.Elapsed;

        if ((++n) % 50 == 0)
        {
            Console.Write("Iteration: {0:0.00}s, {1:0.00}s, {2:0.00}s " +
                          "{3} Kb => {4:0.0} FPS \r",
                          timeToCapture.TotalSeconds, timeToXor.TotalSeconds,
                          timeToCompress.TotalSeconds, length / 1024,
                          1.0 / sw.Elapsed.TotalSeconds);
        }

        // Swap buffers:
        var tmp = cur;
        cur = prev;
        prev = tmp;
    }
    finally
    {
        cur.UnlockBits(locked1);
        prev.UnlockBits(locked2);
    }
}
Note that I reduce Console output to ensure that's not the bottleneck. :-)
Simple improvements
It's a bit wasteful to compress all those 0's, right? It's pretty easy to track the min and max y position that has data using a simple boolean.
ulong tmp = curRow[x] ^ prevRow[x];
*(dst++) = tmp;
hasdata |= tmp != 0;
You also probably don't want to call Compress if you don't have to.
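A sketch of how the inner loop of ApplyXor could track that (hasdata, minDirtyY and maxDirtyY are illustrative names; Iterate would then skip Compress, or compress only rows minDirtyY..maxDirtyY, when appropriate):
bool hasdata = false;
int minDirtyY = int.MaxValue, maxDirtyY = -1;

for (int y = 0; y < height; ++y)
{
    ulong* prevRow = (ulong*)(prev0 + previous.Stride * y);
    ulong* curRow = (ulong*)(cur0 + current.Stride * y);

    bool rowHasData = false;
    for (int x = 0; x < halfwidth; ++x)
    {
        ulong tmp = curRow[x] ^ prevRow[x];
        *(dst++) = tmp;
        rowHasData |= tmp != 0;
    }

    if (rowHasData)
    {
        hasdata = true;
        if (y < minDirtyY) minDirtyY = y;
        if (y > maxDirtyY) maxDirtyY = y;
    }
}

// Back in Iterate(): only compress (and send) when something actually changed.
if (hasdata)
{
    int length = Compress();
}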
After adding this feature you'll get something like this on your screen:
Iteration: 0.00s, 0.01s, 0.01s 1 Kb => 152.0 FPS
Using another compression algorithm might also help. I stuck to LZ4 because it's simple to use, it's blazing fast and compresses pretty well -- still, there are other options that might work better. See http://fastcompression.blogspot.nl/ for a comparison.
If you have a bad connection or if you're streaming video over a remote connection, all this won't work. Best to reduce the pixel values here. That's quite simple: apply a simple 64-bit mask during the xor to both the previous and current picture... You can also try using indexed colors - anyhow, there's a ton of different things you can try here; I just kept it simple because that's probably good enough.
You can also use Parallel.For for the xor loop; personally I didn't really care about that.
A bit more challenging
If you have 1 server that is serving multiple clients, things will get a bit more challenging, as they will refresh at different rates. We want the fastest-refreshing client to determine the server speed - not the slowest. :-)
To implement this, the relation between the prev and cur has to change. If we simply 'xor' away like here, we'll end up with a completely garbled picture at the slower clients.
To solve that, we don't want to swap prev anymore, as it should hold key frames (that you'll refresh when the compressed data becomes too big) and cur will hold incremental data from the 'xor' results. This means you can basically grab an arbitrary 'xor'red frame and send it over the line - as long as the prev bitmap is recent.
H264 or Equivalent Codec Streaming
There are various compressed streaming formats available which do almost everything you can do to optimize screen sharing over a network. There are many open source and commercial libraries for streaming.
Screen transfer in Blocks
H264 already does this, but if you want to do it yourself, you have to divide your screen into smaller blocks of 100x100 pixels, compare these blocks with the previous version, and send only the changed blocks over the network (a rough sketch follows below).
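For example, the block comparison could look like this (the block size, 32bpp layout and helper name are assumptions made here for illustration):
using System;
using System.Collections.Generic;
using System.Drawing;

static class DirtyBlocks
{
    // Compares raw 32bpp frame buffers in 100x100 blocks and returns the
    // rectangles that changed and therefore need to be sent.
    public static List<Rectangle> Find(byte[] cur, byte[] prev, int width, int height, int stride)
    {
        const int block = 100;
        var dirty = new List<Rectangle>();

        for (int by = 0; by < height; by += block)
        {
            for (int bx = 0; bx < width; bx += block)
            {
                int bw = Math.Min(block, width - bx);
                int bh = Math.Min(block, height - by);

                bool changed = false;
                for (int y = by; y < by + bh && !changed; y++)
                {
                    int offset = y * stride + bx * 4;          // 4 bytes per pixel
                    for (int i = 0; i < bw * 4; i++)
                    {
                        if (cur[offset + i] != prev[offset + i]) { changed = true; break; }
                    }
                }

                if (changed)
                    dirty.Add(new Rectangle(bx, by, bw, bh));  // only these go on the wire
            }
        }

        return dirty;
    }
}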
Window Render Information
Microsoft RDP does a lot better; it does not send the screen as a raster image. Instead, it analyzes the screen and creates screen blocks based on the windows on the screen. It then analyzes the contents of the screen and sends an image only if needed; if it is a text box with some text in it, RDP sends the information to render a text box with that text, along with font and other information. So instead of sending an image, it sends information on what to render.
You can combine all techniques and make a mixed protocol to send screen blocks with image and other rendering information.
Instead of handling data as an array of bytes, you can handle it as an array of integers.
int* p = (int*)((byte*)scan0.ToPointer() + y * stride);
int* p2 = (int*)((byte*)scan02.ToPointer() + y * stride2);

for (int x = 0; x < nWidth; x++)
{
    // always get the complete pixel when differences are found
    if (*p2 != 0)
        *p = *p2;

    ++p;
    ++p2;
}

Draw a single pixel on Windows Forms

I'm stuck trying to turn on a single pixel on a Windows Form.
graphics.DrawLine(Pens.Black, 50, 50, 51, 50); // draws two pixels
graphics.DrawLine(Pens.Black, 50, 50, 50, 50); // draws no pixels
The API really should have a method to set the color of one pixel, but I don't see one.
I am using C#.
This will set a single pixel:
e.Graphics.FillRectangle(aBrush, x, y, 1, 1);
The Graphics object doesn't have this, since it's an abstraction and could be used to cover a vector graphics format. In that context, setting a single pixel wouldn't make sense. The Bitmap image format does have GetPixel() and SetPixel(), but not a graphics object built on one. For your scenario, your option really seems like the only one because there's no one-size-fits-all way to set a single pixel for a general graphics object (and you don't know EXACTLY what it is, as your control/form could be double-buffered, etc.)
Why do you need to set a single pixel?
Just to show complete code for Henk Holterman answer:
Brush aBrush = (Brush)Brushes.Black;
Graphics g = this.CreateGraphics();
g.FillRectangle(aBrush, x, y, 1, 1);
Where I'm drawing lots of single pixels (for various customised data displays), I tend to draw them to a bitmap and then blit that onto the screen.
The Bitmap GetPixel and SetPixel operations are not particularly fast because they do an awful lot of boundschecking, but it's quite easy to make a 'fast bitmap' class which has quick access to a bitmap.
MSDN Page on GetHdc
I think this is what you are looking for. You will need to get the HDC and then use GDI calls to use SetPixel. Note that a COLORREF in GDI is a DWORD storing a BGR color. There is no alpha channel, and it is not RGB like the Color structure of GDI+.
This is a small section of code that I wrote to accomplish the same task:
public class GDI
{
    [System.Runtime.InteropServices.DllImport("gdi32.dll")]
    internal static extern bool SetPixel(IntPtr hdc, int X, int Y, uint crColor);
}

// ... inside the form/control class:
{
    ...
    private void OnPanel_Paint(object sender, PaintEventArgs e)
    {
        int renderWidth = GetRenderWidth();
        int renderHeight = GetRenderHeight();
        IntPtr hdc = e.Graphics.GetHdc();

        for (int y = 0; y < renderHeight; y++)
        {
            for (int x = 0; x < renderWidth; x++)
            {
                Color pixelColor = GetPixelColor(x, y);

                // NOTE: GDI colors are BGR, not ARGB.
                uint colorRef = (uint)((pixelColor.B << 16) | (pixelColor.G << 8) | (pixelColor.R));
                GDI.SetPixel(hdc, x, y, colorRef);
            }
        }

        e.Graphics.ReleaseHdc(hdc);
    }
    ...
}
Drawing a 2px line using a Pen with DashStyle.Dot draws a single pixel.
private void Form1_Paint(object sender, PaintEventArgs e)
{
    using (Pen p = new Pen(Brushes.Black))
    {
        p.DashStyle = System.Drawing.Drawing2D.DashStyle.Dot;
        e.Graphics.DrawLine(p, 10, 10, 11, 10);
    }
}
If you are drawing on a graphic with SmoothingMode = AntiAlias, most drawing methods will draw more than one pixel. If you only want one pixel drawn, create a 1x1 bitmap, set the bitmap's pixel to the desired color, then draw the bitmap on the graphic.
using (var pixel = new Bitmap(1, 1, e.Graphics))
{
    pixel.SetPixel(0, 0, color);
    e.Graphics.DrawImage(pixel, x, y);
}
The absolute best method is to create a bitmap and pass it an IntPtr (pointer) to an existing array. This allows the array and the bitmap data to share the same memory... no need to use Bitmap.LockBits/Bitmap.UnlockBits, both of which are slow.
Here's the broad outline:
Mark your function 'unsafe' and set your project's build settings to allow 'unsafe' code! (C# pointers)
Create your array[,], either using UInt32s, bytes, or a struct that permits access by both a UInt32 OR individual bytes (by using explicit field offsets).
Use System.Runtime.InteropServices.Marshal.UnsafeAddrOfPinnedArrayElement to obtain the IntPtr to the start of the (pinned) array.
Create the Bitmap using the constructor that takes an IntPtr and stride. This will overlap the new bitmap with the existing array data.
You now have permanent direct access to the pixel data!
The underlying array
The underlying array would likely be a 2D array of a user-struct Pixel. Why? Well... structs can allow multiple member variables to share the same space by using explicit fixed offsets! This means the struct can have 4 single-byte members (.R, .G, .B, and .A), 3 overlapping UInt16s (.AR, .RG, and .GB)... and a single UInt32 (.ARGB)... this can make colour-plane manipulations MUCH faster.
As R, G, B, AR, RG, GB and ARGB all access different parts of the same 32-bit pixel, you can manipulate pixels in a highly flexible way (see the sketch below).
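A sketch of such a struct, assuming the usual little-endian 32bpp layout (byte 0 = B, byte 3 = A); the member names are the ones suggested above:
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
public struct Pixel
{
    [FieldOffset(0)] public uint ARGB;   // the whole 32-bit pixel

    [FieldOffset(0)] public byte B;      // individual channels
    [FieldOffset(1)] public byte G;
    [FieldOffset(2)] public byte R;
    [FieldOffset(3)] public byte A;

    [FieldOffset(0)] public ushort GB;   // overlapping 16-bit views
    [FieldOffset(1)] public ushort RG;
    [FieldOffset(2)] public ushort AR;
}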
Because the array of Pixel[,] shares the same memory as the Bitmap itself, Graphics operations immediately update the Pixel array - and Pixel[,] operations on the array immediately update the bitmap! You now have multiple ways of manipulating the bitmap.
Remember, by using this technique you do NOT need to use 'lockbits' to marshal the bitmap data in and out of a buffer... Which is good, because lockbits is very VERY slow.
You also don't need to use a brush and call complex framework code capable of drawing patterned, scalable, rotatable, translatable, aliasable rectangles... just to write a single pixel. Trust me - all that flexibility in the Graphics class makes drawing a single pixel using Graphics.FillRectangle a very slow process.
Other benefits
Super-smooth scrolling! Your Pixel buffer can be larger than your canvas/bitmap, in both height, and width! This enables efficient scrolling!
How?
Well, when you create a Bitmap from the array you can point the bitmap's upper-left coordinate at some arbitrary [y,x] coordinate by taking the IntPtr of that Pixel[,].
Then, by deliberately setting the Bitmap's 'stride' to match the width of the array (not the width of the bitmap) you can render a predefined subset rectangle of the larger array... whilst drawing (ahead of time) into the unseen margins! This is the principle of "offscreen drawing" in smooth scrollers (sketched below).
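A sketch of that viewport trick, assuming a pinned uint[] backing buffer (all the sizes and names below are illustrative):
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

int bufferWidth = 2048, bufferHeight = 1024;   // backing array, larger than the view
int viewWidth = 800, viewHeight = 600;         // what actually gets shown on screen

uint[] buffer = new uint[bufferWidth * bufferHeight];
GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);   // keep it pinned

int scrollX = 100, scrollY = 50;               // top-left of the visible window
IntPtr topLeft = Marshal.UnsafeAddrOfPinnedArrayElement(buffer, scrollY * bufferWidth + scrollX);

// Stride is the *buffer* width in bytes, not the view width, so the Bitmap is a
// window into the larger array; drawing into the array outside that window fills
// the unseen margins ready for the next scroll step.
using (var view = new Bitmap(viewWidth, viewHeight, bufferWidth * 4,
                             PixelFormat.Format32bppArgb, topLeft))
{
    // e.g. e.Graphics.DrawImageUnscaled(view, 0, 0);  // blit the current viewport
}

handle.Free();   // only after the Bitmap is no longer in use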
Finally
You REALLY should wrap the Bitmap and Array into a FastBitmap class. This will help you control the lifetime of the array/bitmap pair. Obviously, if the array goes out of scope or is destroyed - the bitmap will be left pointing at an illegal memory address. By wrapping them up in a FastBitmap class you can ensure this can't happen...
... It's also a really handy place to put the various utilities you'll inevitably want to add... Such as scrolling, fading, working with colour planes, etc.
Remember:
Creating Bitmaps from a MemoryStream is very slow
Using Graphics.FillRectangle to draw pixels is painfully inefficient
Accessing underlying bitmap data with LockBits/UnlockBits is very slow
And, if you're using System.Runtime.InteropServices.Marshal.Copy, just stop!
Mapping the Bitmap onto some existing array memory is the way to go. Do it right, and you'll never need/want to use a framework Bitmap again.
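A minimal sketch of such a FastBitmap wrapper, assuming a 32bpp ARGB layout and a flat uint[] backing store (the class and member names are illustrative):
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

public sealed class FastBitmap : IDisposable
{
    private readonly uint[] _pixels;   // backing store shared with the Bitmap
    private GCHandle _handle;          // keeps the array pinned for the Bitmap's lifetime

    public Bitmap Bitmap { get; private set; }
    public int Width { get; private set; }
    public int Height { get; private set; }

    public FastBitmap(int width, int height)
    {
        Width = width;
        Height = height;
        _pixels = new uint[width * height];
        _handle = GCHandle.Alloc(_pixels, GCHandleType.Pinned);

        // Overlay a Bitmap on the pinned array: stride = width * 4 bytes.
        Bitmap = new Bitmap(width, height, width * 4,
                            PixelFormat.Format32bppArgb,
                            _handle.AddrOfPinnedObject());
    }

    // Direct pixel access - no LockBits/UnlockBits, no Graphics.FillRectangle.
    public uint this[int x, int y]
    {
        get { return _pixels[y * Width + x]; }
        set { _pixels[y * Width + x] = value; }
    }

    public void Dispose()
    {
        Bitmap.Dispose();
        if (_handle.IsAllocated) _handle.Free();
    }
}
Anything drawn onto FastBitmap.Bitmap shows up in the array immediately, and values written through the indexer show up in the bitmap, which is exactly the shared-memory behaviour described above.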
Apparently DrawLine draws a line that is one pixel short of the actual specified length. There doesn't seem to be a DrawPoint/DrawPixel/whatnot, but instead you can use DrawRectangle with width and height set to 1 to draw a single pixel.
You should put your drawing code inside the Paint event; otherwise it will not be refreshed and will be erased on the next repaint.
Example:
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
    }

    private void Form1_Load(object sender, EventArgs e)
    {
    }

    private void Form1_Paint(object sender, PaintEventArgs e)
    {
        Brush aBrush = (Brush)Brushes.Red;
        e.Graphics.FillRectangle(aBrush, 10, 10, 1, 1);
    }
}
