I have read the first frame of a DICOM CINE image, and now I want to read the second frame, and so on. By how many bytes should I advance the file pointer to get to the next frame (the frame size is width=640, height=480)?
By DICOM cine image, you mean multi-frame DICOM files, right?
May I know: which platform are you on, and which DICOM library/SDK are you using? And has your DICOM image been decompressed, e.g. to BMP (32-bit/24-bit)?
If your DICOM file holds 24-bit (3 bytes per pixel) pixel data, then each frame of pixel data occupies 640 * 480 * 3 = 921,600 bytes, so that is how far you would seek.
Assuming you are dealing with an uncompressed (native) multi-frame DICOM file. In that case, you need to extract the following information before calculating the size of each image frame:

- Transfer Syntax (0002, 0010): to confirm the dataset is not using an encapsulated/compressed transfer syntax.
- Samples per Pixel (0028, 0002): the number of samples (planes) in the image. For example, 24-bit RGB has a value of 3.
- Number of Frames (0028, 0008): the total number of frames.
- Rows (0028, 0010)
- Columns (0028, 0011)
- Bits Allocated (0028, 0100): the number of bits allocated for each pixel sample.
- Planar Configuration (0028, 0006): a conditional element indicating whether the pixel data are sent color-by-plane or color-by-pixel; required if Samples per Pixel (0028, 0002) is greater than 1.
You would then calculate the frame size as follows:

Frame size in bytes = Rows * Columns * (Bits Allocated * Samples per Pixel / 8)
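For the question's 640x480, 24-bit RGB case, that works out to 640 * 480 * (8 * 3 / 8) = 921,600 bytes per frame. A minimal C# sketch, assuming the elements above have already been parsed out of the dataset and that pixelDataStart holds the (file-specific) offset where the Pixel Data (7FE0, 0010) values begin; the file name is a placeholder:

using System;
using System.IO;

class DicomFrameSeeker
{
    static void Main()
    {
        // Values read from the dataset; these match the question's assumptions.
        int rows = 480, columns = 640;
        int bitsAllocated = 8, samplesPerPixel = 3;
        long pixelDataStart = 0; // file-specific; supplied by your DICOM library

        long frameSize = (long)rows * columns * (bitsAllocated * samplesPerPixel / 8);
        Console.WriteLine($"Frame size: {frameSize} bytes"); // 921,600

        using (var fs = File.OpenRead("cine.dcm")) // hypothetical file name
        {
            int frameIndex = 1; // 0-based: the second frame
            fs.Seek(pixelDataStart + frameIndex * frameSize, SeekOrigin.Begin);

            byte[] frame = new byte[frameSize];
            int total = 0, n;
            // Loop because Read may return fewer bytes than requested.
            while (total < frame.Length &&
                   (n = fs.Read(frame, total, frame.Length - total)) > 0)
                total += n;
        }
    }
}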
Basically, I want to get the byte position of the MediaElement at its current playback position. For example, when it is at 5 seconds, the byte position might be 1024 KB. I don't want to multiply the bitrate by the current time, as that is not accurate.
All I need is to get the byte position at certain durations.
So is there any way I could get this? I'm open to other options. (Does FFprobe support this?)
I've tried everything, and there is no way to do this directly with MediaElement.
The only workaround is to compute the frame number of the video by multiplying the frame rate by the timecode whose byte position you want.
Then use a program like BmffViewer, which analyzes the moov atom of the video header. Go to the stco entries of the track you want to analyze and read the chunk offset for the frame you calculated earlier; a sketch of doing this programmatically follows.
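A rough C# sketch of the same lookup without BmffViewer, assuming a plain (non-fragmented) MP4 with ordinary 32-bit box sizes; the frame rate value and input path are assumptions, and a full frame-to-chunk mapping would also need the stsc table:

using System;
using System.IO;
using System.Text;

class StcoDump
{
    static uint ReadUInt32BE(BinaryReader r)
    {
        byte[] b = r.ReadBytes(4);
        return (uint)((b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3]);
    }

    // Depth-first search for the first box of the given type, descending only
    // into the containers on the path to the sample tables. Assumes plain
    // 32-bit box sizes (no 64-bit "largesize" boxes).
    static bool SeekToBox(BinaryReader r, long end, string target)
    {
        while (r.BaseStream.Position + 8 <= end)
        {
            long start = r.BaseStream.Position;
            uint size = ReadUInt32BE(r);
            string type = Encoding.ASCII.GetString(r.ReadBytes(4));
            if (type == target) return true;
            if (type == "moov" || type == "trak" || type == "mdia" ||
                type == "minf" || type == "stbl")
            {
                if (SeekToBox(r, start + size, target)) return true;
            }
            r.BaseStream.Seek(start + size, SeekOrigin.Begin);
        }
        return false;
    }

    static void Main(string[] args) // pass the MP4 path as the first argument
    {
        double frameRate = 30.0; // from the video's metadata (assumed here)
        double seconds = 5.0;    // the playback position of interest
        Console.WriteLine($"Frame number: {(int)(frameRate * seconds)}");

        using (var r = new BinaryReader(File.OpenRead(args[0])))
        {
            if (!SeekToBox(r, r.BaseStream.Length, "stco")) return;
            ReadUInt32BE(r);                 // version + flags
            uint entries = ReadUInt32BE(r);
            for (uint i = 0; i < entries; i++)
                Console.WriteLine($"chunk {i}: byte offset {ReadUInt32BE(r)}");
            // Note: mapping a frame to its chunk also needs the stsc
            // (sample-to-chunk) table; this just dumps the raw offsets.
        }
    }
}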
I have a List as such:

List<Bitmap> imgList = new List<Bitmap>();

I need to scroll through that list very quickly and stitch all the images into one, like so:
using (Bitmap b = new Bitmap(imgList[0].Width, imgList[0].Height * imgList.Count, System.Drawing.Imaging.PixelFormat.Format24bppRgb))
{
    using (Graphics g = Graphics.FromImage(b))
    {
        for (int i = 0; i < imgList.Count; i++)
        {
            // Stack each source image below the previous one.
            g.DrawImage(imgList[i], 0, i * imgList[i].Height);
        }
    }
    b.Save(fileName, ImageFormat.Bmp);
}
imgList.Clear();
The problem that I'm running into is that the images are 2000 wide by 2 high, and there could be 30,000-100,000 images in the list. When I try to create a blank bitmap that size, I get a "parameter is not valid" error. Any help would be GREATLY appreciated.
The size of the block of memory you need is 2,000 x (2 x 100,000) pixels x # bytes per pixel, which is going to be 1,200,000,000 bytes at 24 bits per pixel or 1,600,000,000 bytes at 32 bits per pixel. In other words, ~1.2 GB or ~1.6 GB in one contiguous allocation. A 32-bit address space is just too darn small for that, so sucks to be you.
Or does it?
Since you want to create a file from this, you should be concerned with the file limits rather than your own memory limits. Lucky for you, the integer type used for image dimensions in a BMP is 32-bit, which means that a 2,000 x 200,000 image is well within those limits. So whether or not you can make that image in memory, you can make the file.
This involves writing your own version of a BMP encoder. It's not that bad: it's mostly writing a BMP file header plus DIB header (54 bytes total for the common BITMAPINFOHEADER layout) and then raw pixel data. Of course, once done you'll be hard-pressed to find code that will open it, but that's someone else's problem, right?
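A minimal sketch of such an encoder, assuming 24bpp output and a hypothetical getRowBgr callback that supplies one BGR row at a time, so the full image never has to exist in memory. Writing the height as a negative number marks the rows as top-down, letting them be streamed in the order they are produced (the total file size must still stay under 4 GB, since BMP's size fields are 32-bit):

using System;
using System.IO;

class HugeBmpWriter
{
    public static void Write(string path, int width, int height,
                             Func<int, byte[]> getRowBgr) // hypothetical row source
    {
        int rowBytes = width * 3;
        int stride = (rowBytes + 3) & ~3;          // rows padded to 4 bytes
        long pixelBytes = (long)stride * height;
        long fileSize = 54 + pixelBytes;           // 14-byte file header + 40-byte DIB

        using (var w = new BinaryWriter(File.Create(path)))
        {
            // BITMAPFILEHEADER
            w.Write((byte)'B'); w.Write((byte)'M');
            w.Write((uint)fileSize);
            w.Write((uint)0);                      // reserved
            w.Write((uint)54);                     // offset to pixel data
            // BITMAPINFOHEADER
            w.Write((uint)40);
            w.Write(width);
            w.Write(-height);                      // negative = top-down row order
            w.Write((ushort)1);                    // planes
            w.Write((ushort)24);                   // bits per pixel
            w.Write((uint)0);                      // BI_RGB, uncompressed
            w.Write((uint)pixelBytes);
            w.Write(2835); w.Write(2835);          // ~72 DPI in pixels/metre
            w.Write((uint)0); w.Write((uint)0);    // palette fields, unused

            byte[] pad = new byte[stride - rowBytes];
            for (int y = 0; y < height; y++)
            {
                w.Write(getRowBgr(y));             // one BGR row at a time
                if (pad.Length > 0) w.Write(pad);
            }
        }
    }
}

In the question's case you would call Write with width 2000 and height 200,000, and have getRowBgr pull pixels from the small 2000x2 source bitmaps as needed.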
Using C# and Emgu CV.
I am working with jpegs which will ultimately end up in a browser.
I am already lowering the quality of the image (to 60%) to reduce the byte array size.
Whenever I get the chance, I spend some time looking around for ideas to reduce the size of this byte array even more.
I do know that the brighter the image, the more bytes it seems to take, and the more contrast the image has, the more bytes it seems to take.
Upon googling I came across this:
http://en.kioskea.net/download/download-666-greycstoration
It seemed to imply that by smoothing out minor pixel variations in an image, I can reduce the size of the byte array defining these JPEG images.
So, to my approach (and understanding)...
Do I iterate through all the pixels and 'average' each group of pixels, say over a 4x4 area? Or am I missing the meaning entirely here? I ask because I have already done this, but it made no difference to the image size (in bytes).
I could post my code (and will), but it was just a mock-up and not fit for production.
I am more interested in understanding the meaning/implementation of all this. I can code this myself as soon as it is clarified (and will post back with the code here)...
Could not add comment due to rep points.
First, if you are going to average or apply some other per-pixel filter over a pixel's nearest neighbours, you will most likely want a filter with odd dimensions, say 5x5. That way the target pixel sits at the centre of the filter.
You will end up with smaller or equal-sized files, but only testing will show how much smaller; a sketch of such a filter follows.
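A deliberately naive 5x5 averaging filter (radius 2) as a sketch; it uses GetPixel/SetPixel for clarity, which is far too slow for production, and clamps the window at the image edges:

using System;
using System.Drawing;

class MeanFilter
{
    public static Bitmap Smooth(Bitmap src, int radius = 2) // radius 2 => 5x5 window
    {
        var dst = new Bitmap(src.Width, src.Height);
        for (int y = 0; y < src.Height; y++)
        for (int x = 0; x < src.Width; x++)
        {
            int r = 0, g = 0, b = 0, n = 0;
            for (int dy = -radius; dy <= radius; dy++)
            for (int dx = -radius; dx <= radius; dx++)
            {
                // Clamp the window to the image so edges are handled.
                int px = Math.Min(src.Width - 1, Math.Max(0, x + dx));
                int py = Math.Min(src.Height - 1, Math.Max(0, y + dy));
                Color c = src.GetPixel(px, py);
                r += c.R; g += c.G; b += c.B; n++;
            }
            dst.SetPixel(x, y, Color.FromArgb(r / n, g / n, b / n));
        }
        return dst;
    }
}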
EDIT:
So I think you could also use a simple reduction factor that would give you fewer colors per image. The algorithm I saw was data[i] = (data[i] / N) * N + N / 2, where N is the reduction factor and data[i] is each component of each pixel in your image. The pixels keep the same 0-255 range for each of the RGB channels, but certain values in that range become unavailable to the pixels, depending on the reduction factor!
This should give you a smaller JPEG because you will have fewer distinct colors; a sketch follows.
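A minimal sketch of that quantization step on a System.Drawing bitmap, assuming 24bpp data; the clamp guards against results above 255 for large N:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

class ColorReducer
{
    // Applies data[i] = (data[i] / n) * n + n / 2 to every channel byte.
    public static void Reduce(Bitmap bmp, int n)
    {
        var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
        BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadWrite,
                                       PixelFormat.Format24bppRgb);
        int bytes = Math.Abs(data.Stride) * bmp.Height;
        byte[] buf = new byte[bytes];
        Marshal.Copy(data.Scan0, buf, 0, bytes);

        for (int i = 0; i < bytes; i++)
            buf[i] = (byte)Math.Min(255, (buf[i] / n) * n + n / 2);

        Marshal.Copy(buf, data.Scan0, 0, bytes);
        bmp.UnlockBits(data);
    }
}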
Does anyone know how I can retrieve the frame dimensions of an MPEG-4 video (non-H.264, i.e. MPEG-4 Part 2) from the raw video bitstream?
I'm currently writing a custom media source for Windows Media Foundation; I have to provide a media type, which needs the frame size. It doesn't work without it.
Any ideas?
Thanks
I am not sure I follow. Are you trying to find the width and the height of the video being streamed? If so (and I guess that is the "dimension" you are looking for), here's how:
Parse the stream for the integer 000001B0 (hex); it's always the first thing you get streamed. If not, check the SDP of the stream (if you have one) and search for the config= field, and there it is... only now it is a Base16 string!
Read all the bytes until you get to the integer 000001B6 (hex).
You should get something like this (hex): 000001B0F5000001B5891300000100000001200086C40FA28 A021E0A2
This is the "stream configuration header" or frame or whatever, exact name is Video Object Sequence. It holds all the info a decoder would need to decode the video stream.
Read the last 4 bytes (in my example they are separated by one space -- A021E0A2)
Now observe these bytes as one 32-bit unsigned integer...
To get width read the first 8 bits, and then multiply what you get with 4
Skip next 7 bits
To get height read next 9 bits
In pseudocode:
WIDTH = readBitsUnsigned(array, 8) * 4;
readBitsUnsigned(array, 7);
HEIGHT = readBitsUnsigned(array, 9);
There you go... width and height. (:
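A runnable C# version of the pseudocode, applied to the last four bytes of the example header above; it mirrors the simplified bit layout the steps describe rather than a full VOL-header parser, and indeed yields 640 x 480 for A021E0A2:

using System;

class VolDimensions
{
    static int bitPos;

    // Reads `count` bits MSB-first from the byte array as an unsigned value.
    static int ReadBitsUnsigned(byte[] data, int count)
    {
        int value = 0;
        for (int i = 0; i < count; i++, bitPos++)
            value = (value << 1) | ((data[bitPos / 8] >> (7 - bitPos % 8)) & 1);
        return value;
    }

    static void Main()
    {
        // The last four bytes from the example header: A0 21 E0 A2.
        byte[] tail = { 0xA0, 0x21, 0xE0, 0xA2 };
        int width = ReadBitsUnsigned(tail, 8) * 4;  // 0xA0 = 160; 160 * 4 = 640
        ReadBitsUnsigned(tail, 7);                  // skip 7 bits
        int height = ReadBitsUnsigned(tail, 9);     // 480
        Console.WriteLine($"{width} x {height}");
    }
}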
I'm writing some map software for my smartphone and have hit a problem: I don't want to load the whole of each (large) image file into memory when only a portion will be displayed.
Is there a way to read only a subsection (the viewable portion) of a big image, given that you know the x and y offsets and the width? I know it's probably possible by reading the file a byte at a time, but I'm not sure how to do this.
Thank you,
Nico
It's going to depend at least in part on what format(s) your images are saved in. If you have raw image files or bitmaps, it may be possible, but if your data is compressed in any manner, such as JPEG or PNG, it's going to be a lot more difficult to read just a subsection.
If you truly don't want to ever load the full data into memory, you'll have to write your own IO routine that reads the file. For anything more complex than BMP, your decompression algorithm could get complicated.
If it's a BMP file, it shouldn't be that hard.
First you read the header from the file; if I recall correctly it's 54 bytes, but you can confirm that against a specification found on the web.
The header contains information such as how many bytes there are per pixel, the total width and height, and how many bytes per scan line. Normally the bitmap is stored upside down, so you would calculate where in the file the first pixel of the bottom line is and skip to that location. Then you read the pixels you want from that line and skip to the correct pixel on the next line.
The FileStream class has what you need: a Read method for reading and a Seek method for skipping to a given position. A sketch of the whole routine follows.
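A sketch of that routine for an uncompressed 24bpp BMP, assuming the standard header layout (pixel-data offset at byte 10, width and height at bytes 18 and 22) and bottom-up row order; no bounds checking is done on the requested rectangle:

using System;
using System.IO;

class BmpRegionReader
{
    // Returns w*h BGR triplets for the region at (x, y), measured top-down.
    public static byte[] ReadRegion(string path, int x, int y, int w, int h)
    {
        using (var fs = File.OpenRead(path))
        using (var r = new BinaryReader(fs))
        {
            fs.Seek(10, SeekOrigin.Begin);
            uint pixelOffset = r.ReadUInt32();   // start of pixel data
            fs.Seek(18, SeekOrigin.Begin);
            int width = r.ReadInt32();
            int height = r.ReadInt32();          // positive = bottom-up rows
            int stride = (width * 3 + 3) & ~3;   // rows padded to 4 bytes

            byte[] region = new byte[w * h * 3];
            for (int row = 0; row < h; row++)
            {
                // Convert the requested top-down row to the file's bottom-up order.
                int fileRow = height - 1 - (y + row);
                fs.Seek(pixelOffset + (long)fileRow * stride + x * 3,
                        SeekOrigin.Begin);
                byte[] rowBuf = r.ReadBytes(w * 3);
                Buffer.BlockCopy(rowBuf, 0, region, row * w * 3, rowBuf.Length);
            }
            return region;
        }
    }
}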
Couldn't you cut the image up into sections beforehand?
Splitting it into many 256x256-pixel tiles means you'd only have to load a couple of them and stitch them back together on the viewable canvas. To name one implementation, Google Maps uses this technique; the tile arithmetic is sketched below.
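A small sketch of that arithmetic, where tile_{col}_{row}.png is a hypothetical naming scheme for the pre-cut tiles; it lists which tiles intersect a viewport and where each one lands on the canvas:

using System;

class TileMath
{
    const int TileSize = 256;

    public static void TilesForViewport(int viewX, int viewY, int viewW, int viewH)
    {
        // Index range of tiles overlapping the viewport.
        int firstCol = viewX / TileSize;
        int firstRow = viewY / TileSize;
        int lastCol  = (viewX + viewW - 1) / TileSize;
        int lastRow  = (viewY + viewH - 1) / TileSize;

        for (int row = firstRow; row <= lastRow; row++)
            for (int col = firstCol; col <= lastCol; col++)
                Console.WriteLine($"load tile_{col}_{row}.png at " +
                    $"({col * TileSize - viewX}, {row * TileSize - viewY})");
    }
}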
This is something I have done with bitmaps...
public Bitmap CropBitmap(Bitmap fullBitmap, Rectangle rectangle)
{
    // Clone copies just the requested region, preserving the pixel format.
    return fullBitmap.Clone(rectangle, fullBitmap.PixelFormat);
}