Modify bytes in image to add black zones left and right - C#

I have a byte[] object containing the bytes of an image. I need to pad the image by adding black zones to the left and right.
My image is 512 pixels high and 384 pixels wide, and I need to make it 512x512; that is, I need to add 128 columns, 64 to the left and 64 to the right.
I think I need to first copy all the image bytes to columns 65 to 448 (that covers my 384-pixel width), then fill 64 columns on the left and 64 columns on the right.
I'm not quite sure how to do this; I imagine a nested for loop would suffice, but I'm not sure.
I'm programming in C#.

I've tested this with a raw image generated by Photoshop and it seems to work OK. Obviously it's designed to work only for your specific case, as I'm not sure exactly what you're trying to achieve, but I'm sure you could improve upon it :)
// Requires: using System.Collections.Generic; and using System.Linq;
public byte[] FixImage(byte[] imageData, int bitsPerPixel)
{
    int bytesPerPixel = bitsPerPixel / 8;
    List<byte> data = new List<byte>();
    // Walk the source one 384-pixel scan line at a time
    for (int i = 0; i < imageData.Length; i += 384 * bytesPerPixel)
    {
        data.AddRange(new byte[64 * bytesPerPixel]);                 // 64 black columns on the left
        data.AddRange(imageData.Skip(i).Take(384 * bytesPerPixel));  // the original scan line
        data.AddRange(new byte[64 * bytesPerPixel]);                 // 64 black columns on the right
    }
    return data.ToArray();
}
If you end up using more complicated formats than raw byte arrays, it may be worth investigating using the GDI functions in System.Drawing. Let me know if you want an example of that.
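For the record, the System.Drawing route for this specific padding job might look something like the sketch below (it assumes the source already exists as a Bitmap rather than a raw byte array; the 384x512 and 64-pixel figures come from the question):

```csharp
using System.Drawing;

static class ImagePadder
{
    // Draw the 384x512 source onto a black 512x512 canvas, offset
    // 64 pixels from the left, giving black zones on both sides.
    public static Bitmap PadToSquare(Bitmap source)
    {
        var padded = new Bitmap(512, 512);
        using (var g = Graphics.FromImage(padded))
        {
            g.Clear(Color.Black);                 // the black zones
            g.DrawImage(source, 64, 0, 384, 512); // the original image, centred
        }
        return padded;
    }
}
```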


Yet another issue with stride

According to the documentation, the stride for a bitmap created from a byte array needs to be:
The stride is the width of a single row of pixels (a scan line), rounded up to a four-byte boundary. If the stride is positive, the bitmap is top-down. If the stride is negative, the bitmap is bottom-up.
I have a byteArray for an image of 640 x 512 pixels. The byte array is created from an arraybuffer that is coming in from a live camera. I am creating the image for the first time. The image format is PixelFormat.Format24bppRgb so one byte per Red, Green and Blue for a total of three bytes per pixel. That makes one line of the image 640 * 3 bytes = 1,920 bytes. Checking to see if this is divisible by four I get 1,920/4 = 480.
When I use a stride of 640 * 3 = 1,920 I get a bad result (a garbled image). What am I doing wrong?
I have verified that my byteArray has the right number of elements and looks correct. My code looks like this:
int stride = 1920;
using (Bitmap image = new Bitmap(640, 512, stride, PixelFormat.Format24bppRgb, new IntPtr(ptr)))
{
    return (Bitmap)image.Clone();
}
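As a sanity check, the rounding rule from the documentation is easy to express in code; for 640 pixels at 3 bytes per pixel it yields exactly 1,920, confirming the stride above (ComputeStride is a hypothetical helper, not a framework method):

```csharp
// Stride = bytes per scan line, rounded up to the next 4-byte boundary.
static int ComputeStride(int widthInPixels, int bytesPerPixel)
{
    int rowBytes = widthInPixels * bytesPerPixel;
    return (rowBytes + 3) & ~3; // round up to a multiple of 4
}

// ComputeStride(640, 3) == 1920  (1,920 is already a multiple of 4)
// ComputeStride(638, 3) == 1916  (1,914 rounded up)
```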
EDIT
It sounds like a stride of 1920 is correct, and you are all confirming that I understand how stride works. My guess at this point is that my pixel layout (ARGB vs. RGB, or something like that) is not correct.
This is how the image should look (using a free GenICam viewer): [screenshot omitted]
How the "garbled" image looks: [screenshot omitted]
Solved!
Understanding how stride works was the key. Since I had the correct stride, I started looking elsewhere. It turned out I was passing an array with the wrong values when creating the bitmap. The values should have been temperatures:
23.5, 25.2, 29.8 ...
The values actually being passed were in the thousands range:
7501, 7568, 7592 ...
Garbage in, garbage out...
Rambling thoughts:
I need to learn how to set up unit tests when doing things like this. Unit tests are easy when you are testing something like a method to calculate the circumference of a circle. I guess I need to figure out a way to set up a raw data file that simulates the raw data from the camera and run it through all the conversions to get to an image.
Thank you for all your help.

converting 8 bytes into one long

I am currently developing a C# 2D sandbox-based game. The game world is filled with tiles/blocks. Since the world is so large, the game can sometimes use more memory than is allowed for a 32-bit application.
My tiles consist of the following data inside a struct:
public byte type;
public byte typeWall;
public byte liquid;
public byte typeLiquid;
public byte frameX;
public byte frameY;
public byte frameWallX;
public byte frameWallY;
I am looking to encapsulate all this data within one "long" (64-bit integer).
I want properties to get and set each piece of data using bit shifting, etc... (I have never done this).
Would this save space? Would it increase processing speed? If so, how can it be accomplished?
Thanks.
I am looking to encapsulate all this data within one "long" (64-bit integer).
You can use StructLayoutAttribute with LayoutKind.Explicit and then decorate fields with FieldOffsetAttribute specifying the exact position.
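A sketch of that approach, using the field names from the question (note the struct occupies 8 bytes with or without the attribute, so this is about convenient access rather than saving space; also, which bits of packed each field maps to depends on the machine's endianness):

```csharp
using System.Runtime.InteropServices;

// Union-style layout: 'packed' shares its 8 bytes of storage with the
// eight single-byte fields, so writing a field updates 'packed' and
// vice versa.
[StructLayout(LayoutKind.Explicit)]
public struct Tile
{
    [FieldOffset(0)] public long packed;

    [FieldOffset(0)] public byte type;
    [FieldOffset(1)] public byte typeWall;
    [FieldOffset(2)] public byte liquid;
    [FieldOffset(3)] public byte typeLiquid;
    [FieldOffset(4)] public byte frameX;
    [FieldOffset(5)] public byte frameY;
    [FieldOffset(6)] public byte frameWallX;
    [FieldOffset(7)] public byte frameWallY;
}
```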
I want properties to get and set each piece of data using bit shifting, etc... (I have never done this).
Then use shift left (<<), shift right (>>) and masking with 0xFF to separate the individual bytes: bitwise AND (&) to extract, bitwise OR (|) to write (after first clearing any non-zero bits in the target byte). Note that these are the bitwise operators, not the logical && and ||. Read more about bitwise operations here.
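Accessors along those lines might look like the sketch below (only the first two fields shown; the remaining six follow the same pattern at shifts of 16, 24, 32, 40, 48 and 56):

```csharp
public struct PackedTile
{
    private long data;

    // Byte 0 of the long: mask off the low 8 bits to read; to write,
    // clear the target byte first, then OR in the new value.
    public byte Type
    {
        get { return (byte)(data & 0xFF); }
        set { data = (data & ~0xFFL) | value; }
    }

    // Byte 1: same pattern, shifted left by 8 bits.
    public byte TypeWall
    {
        get { return (byte)((data >> 8) & 0xFF); }
        set { data = (data & ~(0xFFL << 8)) | ((long)value << 8); }
    }

    // ...and so on; frameWallY would live at shift 56.
}
```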
Would this save space? Would it increase processing speed?
Did you measure it? Did you discover a performance / memory consumption problem? If yes, go optimize it. If not, do not optimize prematurely. In other words, don't blindly try without measuring first.
I don't know why you want to do this, but you can do it in this way:
byte type = 4;
byte typeWall = 45;
byte liquid = 45;
byte typeLiquid = 234;
byte frameX = 23;
byte frameY = 23;
byte frameWallX = 22;
byte frameWallY = 221;
byte[] bytes = new[] { type, typeWall, liquid, typeLiquid, frameX, frameY, frameWallX, frameWallY };
long packed = BitConverter.ToInt64(bytes, 0);
or using << (shift) operator.
As you can see by pasting the following code into LINQPad:
void Main()
{
    sizeof(byte).Dump("byte size");
    sizeof(Int32).Dump("int 32");
    sizeof(Int64).Dump("int 64");
    sizeof(char).Dump("for good measure, a char:");
}
You'll get:
byte size 1
int 32 4
int 64 8
for good measure, a char: 2
So packing 8 bytes into an Int64 will take the same space, but you'll have to play with the bits yourself (if that's your thing, by all means go for it :)

computing 31 bit number / ignoring most significant bit

I am working on a piece of software that analyzes E01 bitstream images. Basically these are forensic data files that allow a user to compress all the data on a disk into a single file. The E01 format embeds data about the original data, including MD5 hash of the source and resulting data, etc. If you are interested in some light reading, the EWF/E01 specification is here. Onto my problem:
The e01 file contains a "table" section which is a series of 32 bit numbers that are offsets to other locations within the e01 file where the actual data chunks are located. I have successfully parsed this data out into a list doing the following:
this.ChunkLocations = new List<int>();
// hack: Will this overflow? We are adding two integers to a long.
long currentReadLocation = TableSectionDescriptorRef.OffsetFromFileStart + c_SECTION_DESCRIPTOR_LENGTH + c_TABLE_HEADER_LENGTH;
byte[] currReadBytes;
using (var fs = new FileStream(E01File.FullName, FileMode.Open))
{
    fs.Seek(currentReadLocation, SeekOrigin.Begin);
    for (int i = 0; i < NumberOfEntries; i++)
    {
        currReadBytes = new byte[c_CHUNK_DATA_OFFSET_LENGTH];
        fs.Read(currReadBytes, 0, c_CHUNK_DATA_OFFSET_LENGTH);
        this.ChunkLocations.Add((int)BitConverter.ToUInt32(currReadBytes, 0));
    }
}
c_CHUNK_DATA_OFFSET_LENGTH is 4 bytes, i.e. a "32-bit" number.
According to the EWF/E01 specification, "The most significant bit in the chunk data offset indicates if the chunk is compressed (1) or uncompressed (0)". This appears to be borne out by the fact that, if I convert the offsets to ints, the results contain large negative numbers (for the compressed chunks, no doubt). Most of the other offsets appear to be correctly incremented, but every once in a while there is crazy data. The data in ChunkLocations looks something like this:
346256
379028
-2147071848
444556
477328
510100
Where with -2147071848 it appears the MSB was set to indicate compression.
QUESTIONS: So, if the MSB is used to flag the presence of compression, then really I'm dealing with a 31-bit number, right?
1. How do I ignore the MSB and compute the 31-bit offset value?
2. This seems to be a strange standard, since it would seem to significantly limit the size of the offsets you could have, so I'm wondering whether I'm missing something. The offsets do seem correct when I navigate to those locations within the e01 file.
Thanks for any help!
This sort of thing is typical when dealing with binary formats. As dtb pointed out, 31 bits is probably plenty large for this application, because it can address offsets up to 2 GiB. So they use that extra bit as a flag to save space.
You can just mask off the bit with a bitwise AND:
const UInt32 COMPRESSED = 0x80000000; // Only bit 31 on
UInt32 raw_value = 0x80004000; // test value
bool compressed = (raw_value & COMPRESSED) > 0;
UInt32 offset = raw_value & ~COMPRESSED;
Console.WriteLine("Compressed={0} Offset=0x{1:X}", compressed, offset);
Output:
Compressed=True Offset=0x4000
If you just want to strip off the leading bit, perform a bitwise AND (&) of the value with 0x7FFFFFFF.
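Applied to the question's parse loop, splitting each raw table entry might look like this (SplitEntry is a hypothetical helper; note that -2147071848 reinterpreted as an unsigned value gives an offset of 411800, which slots neatly between the neighbouring offsets 379028 and 444556):

```csharp
const uint CompressedFlag = 0x80000000; // bit 31

// Split a raw 32-bit table entry into its compression flag and
// the remaining 31-bit chunk offset.
static void SplitEntry(uint raw, out bool compressed, out uint offset)
{
    compressed = (raw & CompressedFlag) != 0;
    offset = raw & 0x7FFFFFFF; // same as raw & ~CompressedFlag
}
```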

How to convert a bitmap image to black and white in c#? [duplicate]

Possible Duplicate:
convert image to Black-White or Sepia in c#
I'm writing a C# application that opens an image and, on clicking a button, displays it in black and white only.
I was sure I'd find plenty of information on the net, but my searches didn't turn up much that was understandable and useful.
I have no idea how to proceed. Does anyone have any advice, or know of a tutorial on the net?
I once found a function that converts a bitmap to grayscale
// Requires: using System.Drawing;
public void ToGrayScale(Bitmap Bmp)
{
    int rgb;
    Color c;
    for (int y = 0; y < Bmp.Height; y++)
    {
        for (int x = 0; x < Bmp.Width; x++)
        {
            c = Bmp.GetPixel(x, y);
            // Weighted luminance: the eye is most sensitive to green
            rgb = (int)Math.Round(.299 * c.R + .587 * c.G + .114 * c.B);
            Bmp.SetPixel(x, y, Color.FromArgb(rgb, rgb, rgb));
        }
    }
}
The function accepts a bitmap as a parameter, and changes it to its grayscale version.
I hope this helps.
Edit See fuller answer here: convert image to Black-White or Sepia in c#.
There are many ways to desaturate a color image. In fact, there is probably no one "true" or "correct" way to do it, though some ways are more correct than others.
I assume that your image is in RGB (Red-Green-Blue) format (though BGR is also common).
The simplest way, which should work for most photos (but less so for synthetic images), is to just use the Green channel of the 3 RGB channels. Humans are most sensitive to variations in the green part of the spectrum, so the green channel covers most of the visible range and is a good approximation to the grayscale image you want.
A better way to generate a grayscale image is to use a weighted average of the 3 RGB channels. Choosing equal weights (0.33*R + 0.33*G + 0.33*B) will give a pretty good grayscale image. Other weights will give different results, some of which may be considered more aesthetically pleasing, and some of which take perceptual parameters into consideration.
You could always convert the image to another color space which has only a single grayscale channel (and 2 "color" channels), such as HSV (V is the grayscale), YUV (Y is the grayscale) or Lab (L is the grayscale). The differences should not be very big.
The term "desaturation" comes from the HSV space. If you convert your image to HSV, set the S (Saturation) channel to all zeros, and render the image, you will get a 3-channel desaturated "color" image.
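Since the question asks for black and white rather than grayscale, it's worth one more step: threshold the gray value. A sketch (the cutoff of 128 is an arbitrary assumption; an adaptive method such as Otsu's would pick it from the histogram instead):

```csharp
using System;
using System.Drawing;

static class BlackAndWhite
{
    // Convert to pure black/white: compute perceptual luminance per
    // pixel, then apply a fixed threshold. GetPixel/SetPixel is slow
    // but simple.
    public static void Apply(Bitmap bmp, int threshold = 128)
    {
        for (int y = 0; y < bmp.Height; y++)
        {
            for (int x = 0; x < bmp.Width; x++)
            {
                Color c = bmp.GetPixel(x, y);
                int gray = (int)Math.Round(.299 * c.R + .587 * c.G + .114 * c.B);
                bmp.SetPixel(x, y, gray < threshold ? Color.Black : Color.White);
            }
        }
    }
}
```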

Finding file size in bits

I need the exact size of a file.
Is there any way to find the file size in bits (not bytes)?
I do not want to find System.IO.FileInfo.Length and convert it to bits, because this loses one byte of information.
It is not possible (in any operating system that I know of) to have files whose size is not a whole number of bytes. In other words, it is not possible for a file to have a size like "3 bytes and 2 bits". That is why all the functionality to retrieve file length returns it in bytes. You can multiply this by 8 to get the number of bits:
long lengthInBits = new FileInfo(fileName).Length * 8;
Well, using FileInfo.Length and multiplying by 8 is the most natural way of finding a file's size in bits. It's not like it can have a length which isn't a multiple of 8 bits:
long length = new FileInfo(fileName).Length * 8;
If you want to be fancy, you could left-shift it instead:
long length = new FileInfo(fileName).Length << 3;
That's not going to make any real difference though.
There are some other ways of finding the length of the file - opening it as a stream and asking the stream for its length, for example - but using FileInfo.Length is probably the simplest. Why don't you want to use the most obvious approach? It's like asking how to add two numbers together, saying that you don't want to use the + operator, but not giving a reason why...
8 bits in a byte:
long numOfBits = myFileInfo.Length * 8;
Why do you find this to be a problem?
If you don't like the magic number:
public const int BitsInByte = 8;
long numOfBits = myFileInfo.Length * BitsInByte;
