I am trying to hide a string of text in a bitmap using the LSB algorithm, which replaces the least significant bit of the RGB values of each pixel. So far I have looped through the pixels of the image and cleared the LSB of each pixel. The part I am struggling with is inserting the new LSB values that come from the string.
This is what I have done so far; any pointers on where to go next would be helpful.
string text = txtEncrypt.Text;
//Gets the ascii value of each character from the string
var n = ASCIIEncoding.ASCII.GetBytes(text);
Bitmap myBitmap = new Bitmap(myPictureBox.Image);
byte[] rgbBytes = new byte[0];
int R=0, G=0, B=0;
for (int i = 0; i < myBitmap.Width; i++)
{
for (int j = 0; j < myBitmap.Height; j++)
{
Color pixel = myBitmap.GetPixel(i, j);
// now, clear the least significant bit (LSB) from each pixel element
//That leaves three spare bits in each pixel
R = pixel.R - pixel.R % 2;
G = pixel.G - pixel.G % 2;
B = pixel.B - pixel.B % 2;
// Need to insert new values
}
}
Although you can do bit manipulation using "regular" arithmetic (the kind they teach in first grade), it is more common to use bitwise operators to achieve the same goal.
For example, writing R = pixel.R & ~1 is a lot more common than subtracting pixel.R % 2.
You don't need to clear the bit before setting it. To force a bit into 1 use R = pixel.R | 1. To force it into zero use the R = pixel.R & ~1 mentioned above.
To iterate the bits of the message, stored as a sequence of N bytes, use this check:
if ((message[pos / 8] & (1 << (pos % 8))) != 0) {
// bit at position pos is 1
} else {
// bit at position pos is 0
}
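Putting the pieces together, a rough sketch of the embedding loop could look like the following (NextBit is a made-up helper, and a real implementation would also need to store the message length somewhere so it can be extracted again):
byte[] message = ASCIIEncoding.ASCII.GetBytes(txtEncrypt.Text);
Bitmap myBitmap = new Bitmap(myPictureBox.Image);
int totalBits = message.Length * 8;
int pos = 0; // index of the message bit being embedded

for (int i = 0; i < myBitmap.Width && pos < totalBits; i++)
{
    for (int j = 0; j < myBitmap.Height && pos < totalBits; j++)
    {
        Color pixel = myBitmap.GetPixel(i, j);

        // clear the LSB of each channel, then OR in the next message bit
        int r = (pixel.R & ~1) | NextBit(message, ref pos, totalBits);
        int g = (pixel.G & ~1) | NextBit(message, ref pos, totalBits);
        int b = (pixel.B & ~1) | NextBit(message, ref pos, totalBits);

        myBitmap.SetPixel(i, j, Color.FromArgb(pixel.A, r, g, b));
    }
}

// hypothetical helper: returns the message bit at position pos and advances pos,
// or 0 once the whole message has been written
static int NextBit(byte[] message, ref int pos, int totalBits)
{
    if (pos >= totalBits) return 0;
    int bit = (message[pos / 8] >> (pos % 8)) & 1;
    pos++;
    return bit;
}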
Bitwise operators make this easy to do:
Set the last bit to 1:
var newR = pixel.R | 0b00000001;
Set the last bit to 0:
var newR = pixel.R & 0b11111110;
How this works: | merges bits the way a logical OR would, bit by bit, and & merges bits the way a logical AND would (pseudocode):
10101000 | 00000001 = 10101001
10101001 & 11111110 = 10101000
I'm trying to display a waveform (I recorded myself with the microphone).
I keep the data in a byte array.
My sample rate is 44100 and the sample size is 16 bits.
My array contains only positive values, but in some examples I saw that the values are between -1.0 and 1.0. Why is that?
In addition, if my sample size is 16 bits and I'm using a byte array, I need to do a conversion.
This is what I have tried so far:
double[] x = new double[Arc.Buffer.Count / 2];
byte[] y = new byte[2];
for (int i = 0; i < x.Length; i++)
{
    Array.Copy(Arc.Buffer.ToArray(), 2 * i, y, 0, 2);
    x[i] = (double)((y[1] << 8) | (y[0] & 0x00FF));
}
But after this code, my x array contains huge values (not values between -1.0 and 1.0).
I'm new to this subject and would appreciate any help! (:
Assuming this is PCM data, there are a number of WAV/RIFF formats possible. If your sample size is 16 bits, the values you're reading are going to be signed, between -32,768 and 32,767.
A quick search for "wav riff formats" turned up a resource regarding parsing digital audio data.
Since you are processing 16-bit signed samples, the values are going to be between -32768 and 32767. To get them into the -1.0 to 1.0 double-precision range you need to divide by 32768.0:
double[] x = new double[Arc.Buffer.Count / 2];
byte[] y = new byte[2];
for (int i = 0; i < x.Length; i++)
{
    Array.Copy(Arc.Buffer.ToArray(), 2 * i, y, 0, 2);
    // reassemble the two bytes as a signed 16-bit sample, then scale
    x[i] = (short)((y[1] << 8) | y[0]) / 32768.0;
}
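If the samples are little-endian (the usual layout for 16-bit PCM in WAV), BitConverter can also do the byte pairing for you; a small sketch, assuming Arc.Buffer exposes the raw bytes as in the question and the machine is little-endian:
byte[] raw = Arc.Buffer.ToArray();
double[] x = new double[raw.Length / 2];
for (int i = 0; i < x.Length; i++)
{
    short sample = BitConverter.ToInt16(raw, 2 * i); // signed 16-bit sample
    x[i] = sample / 32768.0;                         // scale to the -1.0..1.0 range
}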
EDIT
It was just pointed out that the requirements state peaks cannot be at the ends of the array.
So I ran across this site:
http://codility.com/
It gives you programming problems and awards certificates if you can solve them in 2 hours. The very first question is one I have seen before, typically called the Peaks and Flags question. If you are not familiar with it:
A non-empty zero-indexed array A consisting of N integers is given. A peak is an array element which is larger than its neighbours. More precisely, it is an index P such that
0 < P < N − 1 and A[P − 1] < A[P] > A[P + 1].
For example, the following array A:
A[0] = 1
A[1] = 5
A[2] = 3
A[3] = 4
A[4] = 3
A[5] = 4
A[6] = 1
A[7] = 2
A[8] = 3
A[9] = 4
A[10] = 6
A[11] = 2
has exactly four peaks: elements 1, 3, 5 and 10.
You are going on a trip to a range of mountains whose relative heights are represented by array A. You have to choose how many flags you should take with you. The goal is to set the maximum number of flags on the peaks, according to certain rules.
Flags can only be set on peaks. What's more, if you take K flags, then the distance between any two flags should be greater than or equal to K. The distance between indices P and Q is the absolute value |P − Q|.
For example, given the mountain range represented by array A, above, with N = 12, if you take:
two flags, you can set them on peaks 1 and 5;
three flags, you can set them on peaks 1, 5 and 10;
four flags, you can set only three flags, on peaks 1, 5 and 10.
You can therefore set a maximum of three flags in this case.
Write a function that, given a non-empty zero-indexed array A of N integers, returns the maximum number of flags that can be set on the peaks of the array.
For example, given the array above
the function should return 3, as explained above.
Assume that:
N is an integer within the range [1..100,000];
each element of array A is an integer within the range [0..1,000,000,000].
Complexity:
expected worst-case time complexity is O(N);
expected worst-case space complexity is O(N), beyond input storage (not counting the
storage required for input arguments).
Elements of input arrays can be modified.
So this makes sense, but I failed it using this code
public int GetFlags(int[] A)
{
List<int> peakList = new List<int>();
for (int i = 0; i <= A.Length - 1; i++)
{
if ((A[i] > A[i + 1] && A[i] > A[i - 1]))
{
peakList.Add(i);
}
}
List<int> flagList = new List<int>();
int distance = peakList.Count;
flagList.Add(peakList[0]);
for (int i = 1, j = 0, max = peakList.Count; i < max; i++)
{
if (Math.Abs(Convert.ToDecimal(peakList[j]) - Convert.ToDecimal(peakList[i])) >= distance)
{
flagList.Add(peakList[i]);
j = i;
}
}
return flagList.Count;
}
EDIT
int[] A = new int[] { 7, 10, 4, 5, 7, 4, 6, 1, 4, 3, 3, 7 };
The correct answer is 3, but my application says 2
This I do not get, since there are 4 peaks (indices 1,4,6,8) and from that, you should be able to place a flag at 2 of the peaks (1 and 6)
Am I missing something here? Obviously my assumption is that the beginning or end of an Array can be a peak, is this not the case?
If this needs to go in Stack Exchange Programmers, I will move it, but thought dialog here would be helpful.
EDIT
Obviously my assumption is that the beginning or end of an Array can
be a peak, is this not the case?
Your assumption is wrong, since a peak is defined as:
0 < P < N − 1
As for your second example, you can set 3 flags, on peaks 1, 4 and 8.
Here is a hint: If it is possible to set m flags, then there must be at least m * (m - 1) + 1 array elements. Given that N < 100,000, turning the above around should give you confidence that the problem can be efficiently brute-forced.
Here is a hint: If it is possible to set m flags, then there must be
at least m * (m - 1) + 1 array elements. Given that N < 100,000,
turning the above around should give you confidence that the problem
can be efficiently brute-forced.
No, that is wrong. Codility puts custom solutions through a series of tests, and brute forcing can easily fail on time.
Here is my solution to the task, implemented in C++, which scores 100% (correctness and performance) on Codility. To understand the solution you must realize that, for a given index span (for example, when the first peak is at index 2 and the last peak at index 58, the span is 56) containing n peaks, there is an upper limit on the number of those peaks that can hold flags under the condition described in the task.
#include <vector>
#include <math.h>
typedef unsigned int uint;
void flagPeaks(const std::vector<uint> & peaks,
std::vector<uint> & flaggedPeaks,
const uint & minDist)
{
flaggedPeaks.clear();
uint dist = peaks[peaks.size() - 1] - peaks[0];
if (minDist > dist / 2)
return;
flaggedPeaks.push_back(peaks[0]);
for (uint i = 0; i < peaks.size(); ) {
uint j = i + 1;
while (j < (peaks.size()) && ((peaks[j] - peaks[i]) < minDist))
++j;
if (j < (peaks.size()) && ((peaks[j] - peaks[i]) >= minDist))
flaggedPeaks.push_back(peaks[j]);
i = j;
}
}
int solution(std::vector<int> & A)
{
std::vector<uint> peaks;
uint min = A.size();
for (uint i = 1; i < A.size() - 1; i++) {
if ((A[i] > A[i - 1]) && (A[i] > A[i + 1])) {
peaks.push_back(i);
if (peaks.size() > 1) {
if (peaks[peaks.size() - 1] - peaks[peaks.size() - 2] < min)
min = peaks[peaks.size() - 1] - peaks[peaks.size() - 2];
}
}
}
// minimal distance between 2 peaks is 2
// so when we have less than 3 peaks we are done
if (peaks.size() < 3 || min >= peaks.size())
return peaks.size();
const uint distance = peaks[peaks.size() - 1] - peaks[0];
// parts are the number of pieces between peaks
// given n + 1 peaks we always have n parts
uint parts = peaks.size() - 1;
// calculate maximal possible number of parts
// for the given distance and number of peaks
double avgOptimal = static_cast<double>(distance) / static_cast<double> (parts);
while (parts > 1 && avgOptimal < static_cast<double>(parts + 1)) {
parts--;
avgOptimal = static_cast<double>(distance) / static_cast<double>(parts);
}
std::vector<uint> flaggedPeaks;
// check how many peaks we can flag for the
// minimal possible distance between two flags
flagPeaks(peaks, flaggedPeaks, parts + 1);
uint flags = flaggedPeaks.size();
if (flags >= parts + 1)
return parts + 1;
// reduce the minimal distance between flags
// until the condition fulfilled
while ((parts > 0) && (flags < parts + 1)) {
--parts;
flagPeaks(peaks, flaggedPeaks, parts + 1);
flags = flaggedPeaks.size();
}
// return the maximal possible number of flags
return parts + 1;
}
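For comparison, here is a compact C# sketch of the commonly used approach (my own sketch, not the C++ answer above): collect the peak indices, then for each candidate K greedily try to place K flags with spacing at least K. Since K flags need the first and last flag to be about K*(K-1) apart, only on the order of sqrt(N) candidates have to be tried; whether that is fast enough for Codility's exact limits depends on the test data.
// requires: using System; using System.Collections.Generic;
public static int MaxFlags(int[] A)
{
    var peaks = new List<int>();
    for (int i = 1; i < A.Length - 1; i++)
        if (A[i] > A[i - 1] && A[i] > A[i + 1])
            peaks.Add(i);
    if (peaks.Count == 0)
        return 0;

    int best = 1;
    // K flags need a span of at least K * (K - 1), which bounds K by roughly sqrt(N)
    for (int k = 2; (long)k * (k - 1) < A.Length; k++)
    {
        int placed = 0;
        int last = -k; // ensures the first peak is always taken
        foreach (int p in peaks)
        {
            if (p - last >= k)
            {
                placed++;
                last = p;
                if (placed == k) break;
            }
        }
        if (placed == k)
            best = Math.Max(best, k);
    }
    return best;
}
For the array in the EDIT above ({ 7, 10, 4, 5, 7, 4, 6, 1, 4, 3, 3, 7 }) the peaks are 1, 4, 6, 8 and this sketch returns 3 (flags on 1, 4 and 8), matching the accepted answer.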
I'm trying to implement an algorithm to Howellize a matrix, in the way explained on page 5 of this paper (google docs link) (link to the pdf).
Most of it is pretty obvious to me, I think, but I'm not sure about line 16, does >> mean a right shift there? If it does, then how does it even work? Surely it would mean that bits are being chopped off? As far as I know there's no guarantee at that point that the number it is shifting is being shifted by an amount that preserves the information.
And if it doesn't mean a right shift, what does it mean?
If anyone can spare the time, I'd also like to have a test case (I don't trust myself to come up with one, I don't understand it well enough).
I've implemented it like this, is that correct? (I don't have a test case, so how can I find out?)
int j = 0;
for (int i = 0; i < 2 * k + 1; i++)
{
var R = (from row in rows
where leading_index(row) == i
orderby rank(row[i]) ascending
select row).ToList();
if (R.Count > 0)
{
uint[] r = R[0];
int p = rank(r[i]); // rank counts the trailing zeroes
uint u = r[i] >> p;
invert(r, u); // multiplies each element of r by the
// multiplicative inverse of u
for (int s = 1; s < R.Count; s++)
{
int t = rank(R[s][i]);
uint v = R[s][i] >> t;
if (subtract(R[s], r, v << (t - p)) == 0)
// subtracts (v<<(t-p)) * r from R[s],
// removes if all elements are zero
rows.Remove(R[s]);
}
swap(rows, rows.IndexOf(r), j);
for (int h = 0; h < j - 1; h++)
{
uint d = rows[h][i] >> p;
subtract(rows[h], r, d);
}
if (r[i] != 1)
// shifted returns r left-shifted by 32-p
rows.Add(shifted(r, 32 - p));
j++;
}
}
For a test case, this may help you (page 2). Also try this.
I think that you are right about the right shift. To get the Howell form, they want the values other than the leading value in a column to be smaller than the leading value. Right shifting seems fruitful for that.
Line 16 says:
Pick d so that 0 <= G(h,i) - d * ri < ri
Consider
G(h,i) - d * ri = 0
G(h,i) = d * ri
G(h,i) = d * (2 ^ p) ... as the comment on line 8 says, ri = 2^p.
So d = G(h,i) / (2 ^ p)
Right shifting G(h,i) by p positions is the quickest way to compute the value of d.
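In code the whole division collapses to a shift; a tiny C# illustration with made-up numbers:
uint g = 0xB7;       // G(h,i) = 183
int p = 4;           // ri = 2^p = 16
uint d = g >> p;     // floor(183 / 16) = 11, and 0 <= g - d * 16 < 16 holds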
I have a table with different codes, and their IDs are powers of 2 (2^0, 2^1, 2^2, 2^3, ...).
Based on different conditions my application will assign a value to the "Status" variable.
For example:
Status = 272 (which is 2^8 + 2^4)
Status = 21 (which is 2^4 + 2^2 + 2^0)
If Status = 21 then my method (C#) should tell me that 21 is the sum of 16 + 4 + 1.
You can test each bit of the input value to see whether it is set:
int value = 21;
for (int i = 0; i < 32; i++)
{
int mask = 1 << i;
if ((value & mask) != 0)
{
Console.WriteLine(mask);
}
}
Output:
1
4
16
for (uint currentPow = 1; currentPow != 0; currentPow <<= 1)
{
if ((currentPow & QStatus) != 0)
Console.WriteLine(currentPow); //or save or print some other way
}
For QStatus == 21 it will give:
1
4
16
Explanation:
A power of 2 has exactly one 1 bit in its binary representation. We start with that bit in the rightmost (least significant) position and iteratively push it leftwards (towards more significant positions) until the number overflows and becomes 0. Each time we check whether currentPow & QStatus is non-zero.
This can probably be done much cleaner with an enum with the [Flags] attribute set.
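For illustration, a minimal sketch of that [Flags] idea; the enum name and members here are invented, not from the question:
// requires: using System;
[Flags]
enum StatusCodes : uint
{
    None  = 0,
    CodeA = 1 << 0,  // 1
    CodeB = 1 << 2,  // 4
    CodeC = 1 << 4,  // 16
}

var status = (StatusCodes)21;   // CodeA | CodeB | CodeC
foreach (StatusCodes flag in Enum.GetValues(typeof(StatusCodes)))
{
    if (flag != StatusCodes.None && status.HasFlag(flag))
        Console.WriteLine($"{flag} = {(uint)flag}");
}
This prints CodeA = 1, CodeB = 4 and CodeC = 16 for a status of 21.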
This is basically binary decomposition (your IDs are base-2 digits). You can bit-shift values around!
uint i = 87;
for (int j = 0; j < 32; j++)   // 32 bits in a uint (sizeof(uint) would give bytes, not bits)
{
    uint mask = 1u << j;
    if ((i & mask) != 0)
    {
        // 2^j is one of the terms
    }
}
You can use bitwise operators for this (assuming that you have few enough codes that the values stay in an integer variable).
a & (a - 1) gives you back a with its lowest set bit cleared. You can use that to get the value of the corresponding flag, like:
while (QStatus != 0) {
    uint nxtStatus = QStatus & (QStatus - 1);
    processFlag(QStatus ^ nxtStatus); // the isolated lowest set bit
    QStatus = nxtStatus;
}
processFlag will be called with the set values in increasing order (e.g. 1, 4, 16 if QStatus is originally 21).
I am looking for a faster algorithm than the one below for the following problem: given a sequence of 64-bit unsigned integers, return a count of the number of times each of the sixty-four bits is set across the sequence.
Example:
4608 = 0000000000000000000000000000000000000000000000000001001000000000
4097 = 0000000000000000000000000000000000000000000000000001000000000001
2048 = 0000000000000000000000000000000000000000000000000000100000000000
counts 0000000000000000000000000000000000000000000000000002101000000001
Example:
2560 = 0000000000000000000000000000000000000000000000000000101000000000
530 = 0000000000000000000000000000000000000000000000000000001000010010
512 = 0000000000000000000000000000000000000000000000000000001000000000
counts 0000000000000000000000000000000000000000000000000000103000010010
Currently I am using a rather obvious and naive approach:
static int bits = sizeof(ulong) * 8;
public static int[] CommonBits(params ulong[] values) {
int[] counts = new int[bits];
int length = values.Length;
for (int i = 0; i < length; i++) {
ulong value = values[i];
for (int j = 0; j < bits && value != 0; j++, value = value >> 1) {
counts[j] += (int)(value & 1UL);
}
}
return counts;
}
A small speed improvement might be achieved by first OR'ing the integers together, then using the result to determine which bits you need to check. You would still have to iterate over each bit, but only once over bits where there are no 1s, rather than values.Length times.
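A rough sketch of that suggestion, keeping the question's CommonBits signature (renamed only so the two are easy to tell apart):
public static int[] CommonBitsOrFirst(params ulong[] values)
{
    int[] counts = new int[64];

    // OR everything together first: a bit that is zero here is zero in every value,
    // so its column can be skipped entirely
    ulong any = 0;
    foreach (ulong v in values)
        any |= v;

    for (int j = 0; j < 64; j++)
    {
        if ((any & (1UL << j)) == 0)
            continue; // this bit is never set in the input

        foreach (ulong v in values)
            counts[j] += (int)((v >> j) & 1UL);
    }
    return counts;
}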
I'll direct you to the classic Bit Twiddling Hacks. Your goal seems slightly different from typical bit counting (your 'counts' variable is in a really unusual format), but maybe it'll be useful.
The best I can do here is just get silly with it and unroll the inner loop... that seems to have cut the running time in half (roughly 4 seconds as opposed to the 8 in yours, to process 100 ulongs 100,000 times). I used a quick command-line app to generate the following code:
for (int i = 0; i < length; i++)
{
ulong value = values[i];
if (0ul != (value & 1ul)) counts[0]++;
if (0ul != (value & 2ul)) counts[1]++;
if (0ul != (value & 4ul)) counts[2]++;
//etc...
if (0ul != (value & 4611686018427387904ul)) counts[62]++;
if (0ul != (value & 9223372036854775808ul)) counts[63]++;
}
That was the best I could do... As per my comment, you'll waste some amount (I know not how much) running this in a 32-bit environment. If you're that concerned about performance, it may benefit you to first convert the data to uint.
Tough problem... it may even benefit you to marshal it into C++, but that entirely depends on your application. Sorry I couldn't be more help; maybe someone else will see something I missed.
Update: a few more profiler sessions show a steady 36% improvement. Shrug. I tried.
OK, let me try again :D
Change each byte of the 64-bit integer into a separate 64-bit integer by spreading each bit into its own byte (shift bit n of the byte left by n*8).
For instance:
10110101 -> 0000000100000000000000010000000100000000000000010000000000000001
(use a lookup table for that translation)
Then just sum everything together in the right way and you get an array of byte-wide counters, one per bit position.
You have to do 8 * (number of 64-bit integers) summations.
Code in C:
#include <stdlib.h>

typedef long long int64;

// LOOKUPTABLE is external and is an int64[256]: it expands each byte value
// so that every bit of the byte occupies its own byte of the result
extern int64 LOOKUPTABLE[256];

unsigned char* bitcounts(int64* int64array, int len)
{
    int64* array64;
    int64 tmp;
    array64 = (int64*)malloc(8 * sizeof(int64)); // 8 accumulators, one per byte position
    for (int i = 0; i < 8; i++) array64[i] = 0;  // set to 0
    for (int j = 0; j < len; j++)
    {
        tmp = int64array[j];
        for (int i = 7; tmp; i--)   // stop early once the remaining high bytes are zero
        {
            array64[i] += LOOKUPTABLE[tmp & 0xFF];
            tmp = tmp >> 8;
        }
    }
    return (unsigned char*)array64; // 64 byte-wide bit counters
}
This reduces the work compared to the naive implementation by a factor of 8, because it counts 8 bits at a time.
EDIT:
I fixed the code to break out earlier on small integers, but I am still unsure about the endianness.
Also, this only works for up to 255 input values, because it uses unsigned char to store the counts. For longer inputs you can change the code to hold counts up to 2^16, at roughly half the speed.
const unsigned int BYTESPERVALUE = 64 / 8;
unsigned int bcount[BYTESPERVALUE][256];
memset(bcount, 0, sizeof bcount);
for (int i = values.length; --i >= 0; )
for (int j = BYTESPERVALUE ; --j >= 0; ) {
const unsigned int jth_byte = (values[i] >> (j * 8)) & 0xff;
bcount[j][jth_byte]++; // count byte value (0..255) instances
}
unsigned int count[64];
memset(count, 0, sizeof count);
for (int i = BYTESPERVALUE; --i >= 0; )
for (int j = 256; --j >= 0; ) // check each byte value instance
for (int k = 8; --k >= 0; ) // for each bit in a given byte
if (j & (1 << k)) // if bit was set, then add its count
count[i * 8 + k] += bcount[i][j];
Another approach that might be profitable, would be to build an array of 256 elements,
which encodes the actions that you need to take in incrementing the count array.
Here is a sample for a 4 element table, which does 2 bits instead of 8 bits.
int bitToSubscript[4][3] =
{
{0}, // No Bits set
{1,0}, // Bit 0 set
{1,1}, // Bit 1 set
{2,0,1} // Bit 0 and bit 1 set.
}
The algorithm then degenerates to:
pick the 2 right hand bits off of the number.
Use that as a small integer to index into the bitToSubscriptArray.
In that array, pull off the first integer. That is the number of elements in the count array, that you need to increment.
Based on that count, Iterate through the remainder of the row, incrementing count, based on the subscript you pull out of the bitToSubscript array.
Once that loop is done, shift your original number two bits to the right.... Rinse Repeat as needed.
Now there is one issue I ignored, in that description. The actual subscripts are relative. You need to keep track of where you are in the count array. Every time you loop, you add two to an offset. To That offset, you add the relative subscript from the bitToSubscript array.
It should be possible to scale up to the size you want, based on this small example. I would think that another program could be used, to generate the source code for the bitToSubscript array, so that it can be simply hard coded in your program.
There are other variation on this scheme, but I would expect it to run faster on average than anything that does it one bit at a time.
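A rough C# sketch of that table-driven idea, using the 2-bit table from the example above (a jagged array already knows its length, so the leading element count from the description is dropped; a real version would use a 256-entry table and 8-bit chunks):
static readonly int[][] bitToSubscript =
{
    new int[0],        // 00: no bits set
    new[] { 0 },       // 01: bit 0 set
    new[] { 1 },       // 10: bit 1 set
    new[] { 0, 1 },    // 11: bits 0 and 1 set
};

public static int[] CommonBitsTable(params ulong[] values)
{
    int[] counts = new int[64];
    foreach (ulong v in values)
    {
        ulong value = v;
        // offset is the position in counts of the current 2-bit chunk
        for (int offset = 0; value != 0; offset += 2, value >>= 2)
        {
            foreach (int sub in bitToSubscript[(int)(value & 3)])
                counts[offset + sub]++;
        }
    }
    return counts;
}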
Good Hunting.
Evil.
I believe this should give a nice speed improvement:
const ulong mask = 0x1111111111111111;
public static int[] CommonBits(params ulong[] values)
{
int[] counts = new int[64];
ulong accum0 = 0, accum1 = 0, accum2 = 0, accum3 = 0;
int i = 0;
foreach( ulong v in values ) {
if (i == 15) {
for( int j = 0; j < 64; j += 4 ) {
counts[j] += ((int)accum0) & 15;
counts[j+1] += ((int)accum1) & 15;
counts[j+2] += ((int)accum2) & 15;
counts[j+3] += ((int)accum3) & 15;
accum0 >>= 4;
accum1 >>= 4;
accum2 >>= 4;
accum3 >>= 4;
}
i = 0;
}
accum0 += (v) & mask;
accum1 += (v >> 1) & mask;
accum2 += (v >> 2) & mask;
accum3 += (v >> 3) & mask;
i++;
}
for( int j = 0; j < 64; j += 4 ) {
counts[j] += ((int)accum0) & 15;
counts[j+1] += ((int)accum1) & 15;
counts[j+2] += ((int)accum2) & 15;
counts[j+3] += ((int)accum3) & 15;
accum0 >>= 4;
accum1 >>= 4;
accum2 >>= 4;
accum3 >>= 4;
}
return counts;
}
Demo: http://ideone.com/eNn4O (needs more test cases)
http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetNaive
One of them:
unsigned int v; // count the number of bits set in v
unsigned int c; // c accumulates the total bits set in v
for (c = 0; v; c++)
{
v &= v - 1; // clear the least significant bit set
}
Keep in mind that this method loops once per set bit, so its complexity is at most about O(log2(n)), where n is the number whose bits are being counted; for 10 (binary 1010) it needs only 2 iterations.
You could also take the method for counting 32 bits using 64-bit arithmetic and apply it to each half of the 64-bit word, which would take about 2*15 + 4 instructions:
// option 3, for at most 32-bit values in v:
c = ((v & 0xfff) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;
c += (((v & 0xfff000) >> 12) * 0x1001001001001ULL & 0x84210842108421ULL) %
0x1f;
c += ((v >> 24) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;
If you have an SSE4.2-capable processor you can use the POPCNT instruction.
http://en.wikipedia.org/wiki/SSE4