I have a message:
x2400\x1100\x2001\x1020\x2100\x0900\x2008\x2012\x0900\x1001\x2001\x1010\x2001\x0900\x0802\x0812\x1200\x2010\x0802\x1004\x0820\x1010\x2100\x2002\x1012
It's in IBM column binary format. I read some documentation, but I can't decode it myself.
https://www.masswerk.at/keypunch/?q=Mr.%20Donald%20F.%20Draper,%20104%20WAVERLY%20PLACE,%20APT%203R,%20NEW%20YORK,%20NY
The decoded message is:
ALIMCTF(TRINITY'KEYPUNCH)
In order to decode the message, the following must be considered:
Each character is encoded as two bytes, e.g. \x2400 corresponds to A.
In the first step, the two bytes are decoded separately. For this, the IBM/360 Column Binary format (cbf) from the posted link (Advanced Usage section) is used. Each bit is assigned to a specific punch position; e.g. if byte 1 has the value 0x20, that corresponds to bit 5 and thus to position Y. Analogously for byte 2: e.g. if byte 2 has the value 0x12 = 0x10 + 0x02, that corresponds to bits 4 and 1 and thus to positions 5 and 8. In total, this gives the positions Y, 5 and 8, or Y58 for short.
In the second step, the character determined by those positions must be identified. For this, the IBM 029 keypunch chart from the posted link (Usage section) is used. E.g. the positions Y, 5 and 8 determine the character (.
If this is done for the entire message, the following table results:
Message   Byte1 hex   Byte2 hex   Byte1 cbf   Byte2 cbf   Position   Character (IBM 029)
\x2400 24 00 Y1 0 Y1 A
\x1100 11 00 X3 0 X3 L
\x2001 20 01 Y 9 Y9 I
\x1020 10 20 X 4 X4 M
\x2100 21 00 Y3 0 Y3 C
\x0900 09 00 03 0 03 T
\x2008 20 08 Y 6 Y6 F
\x2012 20 12 Y 58 Y58 (
\x0900 09 00 03 0 03 T
\x1001 10 01 X 9 X9 R
\x2001 20 01 Y 9 Y9 I
\x1010 10 10 X 5 X5 N
\x2001 20 01 Y 9 Y9 I
\x0900 09 00 03 0 03 T
\x0802 08 02 0 8 08 Y
\x0812 08 12 0 58 58 '
\x1200 12 00 X2 0 X2 K
\x2010 20 10 Y 5 Y5 E
\x0802 08 02 0 8 08 Y
\x1004 10 04 X 7 X7 P
\x0820 08 20 0 4 04 U
\x1010 10 10 X 5 X5 N
\x2100 21 00 Y3 0 Y3 C
\x2002 20 02 Y 8 Y8 H
\x1012 10 12 X 58 X58 )
where the decoded message is in the last column (read from top to bottom).
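The two decoding steps above can be sketched in code. This is a minimal, illustrative implementation: the bit-to-row mapping is the one inferred from the table (byte 1: 0x20=Y, 0x10=X, 0x08=0, 0x04=1, 0x02=2, 0x01=3; byte 2: 0x20=4 down to 0x01=9), and the punch-to-character chart is deliberately partial, containing only the combinations that occur in this message, transcribed from the table above.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class ColumnBinaryDecoder
{
    // Bit-to-row mapping inferred from the table above.
    static readonly (int Mask, string Row)[] Byte1Rows =
        { (0x20, "Y"), (0x10, "X"), (0x08, "0"), (0x04, "1"), (0x02, "2"), (0x01, "3") };
    static readonly (int Mask, string Row)[] Byte2Rows =
        { (0x20, "4"), (0x10, "5"), (0x08, "6"), (0x04, "7"), (0x02, "8"), (0x01, "9") };

    // Partial IBM 029 chart: only the punch combinations used in this message.
    static readonly Dictionary<string, char> Ibm029 = new Dictionary<string, char>
    {
        ["Y1"] = 'A', ["Y3"] = 'C', ["Y5"] = 'E', ["Y6"] = 'F', ["Y8"] = 'H', ["Y9"] = 'I',
        ["X2"] = 'K', ["X3"] = 'L', ["X4"] = 'M', ["X5"] = 'N', ["X7"] = 'P', ["X9"] = 'R',
        ["03"] = 'T', ["04"] = 'U', ["08"] = 'Y',
        ["Y58"] = '(', ["X58"] = ')', ["058"] = '\'' // 0-8-5, listed as ' in the table above
    };

    public static string Decode(ushort[] columns)
    {
        var sb = new System.Text.StringBuilder();
        foreach (ushort col in columns)
        {
            int b1 = col >> 8, b2 = col & 0xFF;
            // Collect the punch positions set in each byte, zone rows first.
            string positions =
                string.Concat(Byte1Rows.Where(r => (b1 & r.Mask) != 0).Select(r => r.Row)) +
                string.Concat(Byte2Rows.Where(r => (b2 & r.Mask) != 0).Select(r => r.Row));
            sb.Append(Ibm029[positions]);
        }
        return sb.ToString();
    }

    static void Main()
    {
        ushort[] message =
        {
            0x2400, 0x1100, 0x2001, 0x1020, 0x2100, 0x0900, 0x2008, 0x2012, 0x0900,
            0x1001, 0x2001, 0x1010, 0x2001, 0x0900, 0x0802, 0x0812, 0x1200, 0x2010,
            0x0802, 0x1004, 0x0820, 0x1010, 0x2100, 0x2002, 0x1012
        };
        Console.WriteLine(Decode(message)); // ALIMCTF(TRINITY'KEYPUNCH)
    }
}
```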
I want to create a method with signature:
void InitMatrixLinear(int[,] matrix)
but with only one loop. I don't want to produce the same pattern with two nested loops; I need to produce it with a single loop. How can I do this?
Like this I want to create:
1 2 3 4 5 6 7 8
9 10 11 12 13 14 15 16
17 18 19 20 21 22 23 24
25 26 27 28 29 30 31 32
33 34 35 36 37 38 39 40
41 42 43 44 45 46 47 48
49 50 51 52 53 54 55 56
57 58 59 60 61 62 63 64
Assuming the matrix passed in is 8x8 (since we want [1,2,...,64] as the elements):
for (int i = 0; i < 64; i++)
{
    matrix[i % 8, i / 8] = i + 1;
}
or
for (int i = 0; i < 64; i++)
{
    matrix[i / 8, i % 8] = i + 1;
}
depending on the desired orientation of the matrix. The first fills column by column; the second fills row by row, which matches the layout shown above.
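The same single-loop idea generalizes to any rectangular matrix by reading the dimensions off the array itself instead of hard-coding 8. A sketch (the method name follows the signature requested in the question):

```csharp
using System;

class MatrixDemo
{
    // Row-major fill for any rectangular int[,], not just 8x8:
    // element i of the flat sequence 1..N lands at row i/cols, column i%cols.
    public static void InitMatrixLinear(int[,] matrix)
    {
        int cols = matrix.GetLength(1);
        int total = matrix.Length; // rows * cols
        for (int i = 0; i < total; i++)
        {
            matrix[i / cols, i % cols] = i + 1;
        }
    }

    static void Main()
    {
        var m = new int[8, 8];
        InitMatrixLinear(m);
        Console.WriteLine(m[0, 0] + " " + m[0, 7] + " " + m[7, 7]); // 1 8 64
    }
}
```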
I am trying to distribute a set of items across number of buckets. I am looking for following properties:
Bucket assignment needs to be deterministic: across different runs, the same input should end up in the same bucket.
Distribution of data between the buckets should be uniform.
This should work for a fairly small number of inputs (e.g. if I distribute 50 inputs across 25 buckets, ideally each bucket will have 2 items).
My first try was to generate an MD5 hash of the input data and form the bucket from the first bytes of the hash. I am not too satisfied with the uniformity. It works well when the input set is large, but not so well for a small one. E.g., distributing 100 items across 64 buckets:
List<string> l = new List<string>();
for (int i = 0; i < 100; i++)
{
    l.Add(string.Format("data{0}.txt", i));
}

int[] buckets = new int[64];
var md5 = MD5.Create();
foreach (string str in l)
{
    byte[] hash = md5.ComputeHash(Encoding.Default.GetBytes(str));
    uint bucket = BitConverter.ToUInt32(hash, 0) % 64;
    buckets[bucket]++;
}
Any suggestions on what I could do to achieve higher uniformity? Thanks.
Leaving aside the efficiency of using MD5 for this purpose (see the discussion here and in the marked duplicate of that question), basically the answer is that what you have is what a uniform distribution really looks like.
That might seem counter-intuitive, but it's easily demonstrable either mathematically or by experiment.
As a kind of motivating example, consider the task of choosing exactly 64 numbers in the range 0-63. The odds that you will get one per bucket are very close to 0. There are 64^64 possible sequences, of which 64! contain all 64 numbers. The odds of getting one of these sequences is about one in 3.1×10^26. In fact, the odds of getting a sequence in which no element appears three times is less than one in a thousand (about 0.000658). So it's almost certain that a random uniform sample of 64 numbers in the range 0-63 will have some triplets, and it's pretty likely that there will be some quadruplet. If the sample is 100 numbers, those probabilities just get bigger.
But the maths are not so easy to compute in general, so here I chose to illustrate by experiment :-), using random.org, which is a pretty reliable source of random numbers. I asked it for 100 numbers in the range 0-63, and counted them (using bash, so my "graph" is not as pretty as yours). Here are two runs:
First run:
Random numbers:
44 17 50 11 16 4 24 29 12 36
27 32 12 63 4 30 19 60 28 39
22 40 19 16 23 2 46 31 52 41
13 2 42 17 29 39 43 9 20 50
45 40 38 33 17 45 28 6 48 12
56 26 34 33 35 40 28 44 22 10
50 55 49 43 63 62 22 50 15 52
48 54 53 26 4 53 13 56 42 60
49 30 14 55 29 62 15 13 35 40
22 38 37 36 10 36 5 41 43 53
Counts:
X X X
X XX X X XX X X X X X
X X X XX XXX X X X XXX X XX XXXXXXXX XXX XX XX X XX
X XXX XXXXXXXXX XX XXX XXXXXXXXXXXXXXXXXXXXX XXX XXXXX X XX
----------------------------------------------------------------
1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 4 4 4 4 4 5 5 5 5 5 6 6
0 2 4 6 8 0 2 4 6 8 0 2 4 6 8 0 2 4 6 8 0 2 4 6 8 0 2 4 6 8 0 2
Second run:
Random numbers:
41 31 16 40 1 51 17 41 27 46
24 14 21 33 25 43 4 36 1 14
40 22 11 22 30 19 23 63 39 61
8 55 40 6 21 13 55 13 3 52
17 52 53 53 7 21 47 13 45 57
25 27 30 48 38 55 55 22 61 11
11 28 45 63 43 0 41 51 15 2
33 2 46 14 35 41 5 2 11 37
28 56 15 7 18 12 57 36 59 51
42 5 46 32 10 8 0 46 12 9
Counts:
X X X X
X X XX XX XX X X X
XXX X XX XXXXX X XX X XX X X X XX X XX XXX X X X X
XXXXXXXXXXXXXXXXXXXX XXXXX XX XXXX XXXXXXXXX XXXX XXX XXX X X X
----------------------------------------------------------------
1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 4 4 4 4 4 5 5 5 5 5 6 6
0 2 4 6 8 0 2 4 6 8 0 2 4 6 8 0 2 4 6 8 0 2 4 6 8 0 2 4 6 8 0 2
You could try this with your favourite random number generator, playing around with the size of the distribution. You'll get the same sort of shape.
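If you'd rather experiment in code than with random.org, a small sketch like the following (using System.Random as a stand-in for any uniform source; the class and method names are just illustrative) shows the same effect: even a perfectly uniform assignment of 100 items to 64 buckets reliably produces empty buckets and buckets with 3 or more items.

```csharp
using System;
using System.Linq;

class UniformClumping
{
    // Throw n items into k buckets uniformly at random and return the counts.
    public static int[] Simulate(int n, int k, int seed)
    {
        var rng = new Random(seed); // fixed seed so runs are repeatable
        var buckets = new int[k];
        for (int i = 0; i < n; i++)
            buckets[rng.Next(k)]++;
        return buckets;
    }

    static void Main()
    {
        int[] buckets = Simulate(100, 64, seed: 42);
        // Expect some empty buckets and a max well above the "ideal" 100/64.
        Console.WriteLine("empty buckets:  " + buckets.Count(c => c == 0));
        Console.WriteLine("max per bucket: " + buckets.Max());
    }
}
```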
This question already has answers here:
Splitting string based on variable number of white spaces
(2 answers)
Closed 6 years ago.
Here is my source code at the moment:
CODE:
static void InputValues()
{
    int row, col;
    string[] words;

    matrixName = fileIn.ReadLine();
    words = fileIn.ReadLine().Split(' ');
    dimenOne = int.Parse(words[0]);
    dimenTwo = int.Parse(words[1]);
    matrix = new int[dimenOne + 1, dimenTwo + 1];

    for (row = 1; row <= dimenOne; row++)
    {
        words = fileIn.ReadLine().Split(' ');
        for (col = 1; col <= dimenTwo; col++)
        {
            matrix[row, col] = int.Parse(words[col - 1]);
        }
    }
}
My program crashes after it reads in the first value (45) at matrix[row, col] = int.Parse(words[col-1]);. There are 3 spaces between values in the text file, which is posted below. How do I populate the 2-D array without crashing?
TXT FILE
3
Matrix One
5 7
45 38 5 56 18 34 4
87 56 23 41 75 87 97
45 97 86 7 6 8 85
67 6 79 65 41 37 4
7 76 57 68 8 78 2
Matrix Two
6 8
45 38 5 56 18 34 4 30
87 56 23 41 75 87 97 49
45 97 86 7 6 8 85 77
67 6 79 65 41 37 4 53
7 76 57 68 8 78 2 14
21 18 46 99 17 3 11 73
Matrix Three
6 6
45 38 5 56 18 34
87 56 23 41 75 87
45 97 86 7 6 8
67 6 79 65 41 37
7 76 57 68 8 78
21 18 46 99 17 3
Either test whether you can convert the value to an integer (using TryParse), or better, use a regular expression to parse the input string. Your problem is that the Split call returns more results than you expect (this is easily seen if you set a breakpoint after words = fileIn.ReadLine().Split(' ')).
If you have a variable number of spaces in your lines, you should eliminate them.
words = fileIn.ReadLine()
.Split(' ')
.Where(x => !string.IsNullOrWhiteSpace(x))
.ToArray();
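Equivalently, the Split overload that takes StringSplitOptions.RemoveEmptyEntries drops the empty strings produced by runs of spaces in a single call. A short sketch (the sample line is modeled on the question's file):

```csharp
using System;

class SplitDemo
{
    // Split on spaces and discard the empty entries that runs of
    // multiple spaces would otherwise produce.
    public static string[] SplitRow(string line)
    {
        return line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    }

    static void Main()
    {
        // Three spaces between the first two values, as in the question's file.
        string[] words = SplitRow("45   38  5");
        Console.WriteLine(words.Length);            // 3
        Console.WriteLine(string.Join(",", words)); // 45,38,5
    }
}
```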
var numberFormat = new NumberFormatInfo();
numberFormat.NumberDecimalSeparator = ".";
numberFormat.NumberDecimalDigits = 2;
decimal a = 10.00M;
decimal b = 10M;
Console.WriteLine(a.ToString(numberFormat));
Console.WriteLine(b.ToString(numberFormat));
Console.WriteLine(a == b ? "True": "False");
In console:
10.00
10
True
Why is the output different? More importantly, how do I call ToString() to ensure the same output no matter how the variable was initialized?
The question of how to make it output consistently has been answered, but here is why they output differently in the first place:
A decimal value contains, internally, fields for a scale and a coefficient. In the case of 10M, the value encoded has a coefficient of 10 and a scale of 0:
10M = 10 * 10^0
In the case of 10.00M, the value encoded has a coefficient of 1000 and a scale of 2:
10.00M = 1000 * 10^(-2)
You can sort of see this by inspecting the values in-memory:
unsafe
{
    fixed (decimal* array = new decimal[2])
    {
        array[0] = 10M;
        array[1] = 10.00M;
        byte* ptr = (byte*)array;

        Console.Write("10M:    ");
        for (int i = 0; i < 16; i++)
            Console.Write(ptr[i].ToString("X2") + " ");
        Console.WriteLine("");

        Console.Write("10.00M: ");
        for (int i = 16; i < 32; i++)
            Console.Write(ptr[i].ToString("X2") + " ");
    }
}
Outputs
10M: 00 00 00 00 00 00 00 00 0A 00 00 00 00 00 00 00
10.00M: 00 00 02 00 00 00 00 00 E8 03 00 00 00 00 00 00
(0xA is 10 in hex, and 0x3E8 is 1000 in hex)
This behaviour is outlined in section 2.4.4.3 of the C# spec:
A real literal suffixed by M or m is of type decimal. For example, the literals 1m, 1.5m, 1e10m, and 123.456M are all of type decimal. This literal is converted to a decimal value by taking the exact value, and, if necessary, rounding to the nearest representable value using banker's rounding (§4.1.7). Any scale apparent in the literal is preserved unless the value is rounded or the value is zero (in which latter case the sign and scale will be 0). Hence, the literal 2.900m will be parsed to form the decimal with sign 0, coefficient 2900, and scale 3.
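The same internals can be inspected without unsafe code via decimal.GetBits, which returns the three 32-bit words of the coefficient plus a flags word whose bits 16-23 hold the scale. A sketch:

```csharp
using System;

class DecimalScaleDemo
{
    // decimal.GetBits returns int[4]: bits[0..2] are the 96-bit coefficient
    // (low word first) and bits 16-23 of bits[3] are the scale, i.e. the
    // power of ten the coefficient is divided by.
    public static int Scale(decimal d)
    {
        int[] bits = decimal.GetBits(d);
        return (bits[3] >> 16) & 0xFF;
    }

    static void Main()
    {
        Console.WriteLine(decimal.GetBits(10M)[0] + ", scale " + Scale(10M));       // 10, scale 0
        Console.WriteLine(decimal.GetBits(10.00M)[0] + ", scale " + Scale(10.00M)); // 1000, scale 2
    }
}
```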
The NumberDecimalDigits property is used with the "F" and "N" standard format strings, not the ToString method called without a format string.
You can use:
Console.WriteLine(a.ToString("N", numberFormat));
Try this:
Console.WriteLine(String.Format("{0:0.00}", a));
Console.WriteLine(String.Format("{0:0.00}", b));
The output will always have 2 decimal places. More examples here:
http://www.csharp-examples.net/string-format-double/
I'm moving from Java to C# and can't seem to find any information on this. I'm trying to read a file as hex. In Java I use...
String s = Integer.toHexString(hexIn);
if(s.length() < 2){
s = "0" + Integer.toHexString(hexIn);
}
As I'm sure you know, that's so that if the byte read in is only one character long, a leading zero is added to make it a two-digit hex value. I want to do the same thing in C#; so far I have...
StreamReader reader = new StreamReader(fileDirectory);
long stickNoteLength = fileDirectory.Length;
int hexIn;
String hex = "";
for (int i = 0; (hexIn = reader.Read()) != -1; i++)
{
}
Now I'm stuck, sorry if this is a simple question and thanks for you help :)
string hex = String.Format("{0:X2}", hexIn);
With this format mask you will get, for numbers from 0 to 31 (for example):
00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F
10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F
I added the 2 in {0:X2} because you mentioned that you are reading bytes.
Note that to represent hex numbers it is also correct to add 0x at the beginning of the string:
string hex = String.Format("0x{0:X2}", hexIn);
Try this
hex = hexIn.ToString("X");
or
hex = Convert.ToString(hexIn,16);
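Putting the pieces together, here is one way the whole loop could look. This is a sketch, not the only approach: it reads the file as raw bytes with File.ReadAllBytes (StreamReader decodes text, which is not what you want for a hex dump), and the file path is hypothetical.

```csharp
using System;
using System.IO;
using System.Text;

class HexDump
{
    // Format each byte as two uppercase hex digits, space-separated.
    public static string ToHex(byte[] data)
    {
        var sb = new StringBuilder();
        foreach (byte b in data)
            sb.Append(b.ToString("X2")).Append(' ');
        return sb.ToString().TrimEnd();
    }

    static void Main()
    {
        byte[] data = File.ReadAllBytes("somefile.bin"); // hypothetical path
        Console.WriteLine(ToHex(data));
    }
}
```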