Substring "index and length must refer to a location within the string" exception in C#

I'm making a simple program in C# just for fun, where I'm writing a simple hex-to-string translation method like this:
private String translate(String hex)
{
    StringBuilder output = new StringBuilder();
    int i = 0;
    while (i + 2 <= hex.Length)
    {
        String str = hex.Substring(i, i + 2);
        output.Append((char)Convert.ToInt32(str, 16));
        i += 2;
    }
    return output.ToString();
}
Every time I run the application, I get an unhandled out-of-range exception with a message saying that the index and length parameters of Substring have to refer to a location within the hex string. I added an if statement before the Substring line with a condition that is true only if i + 2 is less than or equal to hex.Length, but that did not work properly. The application only works when the input is a single hex byte, i.e. one ASCII character. Can anyone help me with this so I can move forward with my project?

Use:
hex.Substring(i, 2)
The second argument of Substring is a length, not an end index.
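For reference, the whole corrected method would look like this (a minimal sketch, assuming the input always contains an even number of hex digits):
private String translate(String hex)
{
    StringBuilder output = new StringBuilder();
    for (int i = 0; i + 2 <= hex.Length; i += 2)
    {
        String str = hex.Substring(i, 2); // take the 2 characters starting at index i
        output.Append((char)Convert.ToInt32(str, 16));
    }
    return output.ToString();
}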

Related

Convert string with special characters to hex - C#

Hi, I'm trying to transform a string containing special characters like û and ….
In my research and tests I almost succeeded using the following function:
public static string ToHex(this string input)
{
    char[] values = input.ToCharArray();
    string hex = "0x";
    string add = "";
    foreach (char c in values)
    {
        int value = Convert.ToInt32(c);
        add = String.Format("{0:X}", value).Length == 1 ?
            "0" + String.Format("{0:X}", value) + "00"
            : String.Format("{0:X}", value) + "00";
        hex += add;
    }
    return hex;
}
If I try to convert ´o¸sçPQ^ûË\u000f±d it works correctly and turns it into 0xB4006F00B8007300E700500051005E00FB00CB000F00B1006400,
but when I try to convert ´o¸sçPQ](ÂF\u0012…a it fails and turns it into 0xB4006F00B8007300E700500051005D002800C200460012002026006100 instead of
0xB4006F00B8007300E700500051005D002800C2004600120026206100.
With a minimum of debugging I saw that the string is transformed from
´o¸sçPQ](ÂF\u0012…a to ´o¸sçPQ](ÂF.a; I wouldn't want that to be the problem, but I'm not sure.
EDIT
0xB4006F00B8007300E700500051005D002800C2004600120026206100 ´o¸sçPQ](ÂF…a CORRECT
0xB4006F00B8007300E700500051005D002800C200460012002026006100 ´o¸sçPQ](ÂF.a MY OUTPUT
0xB4006F00B8007300E700500051005D003D00CB0042000C00A50061006000AD004500BB00 ´o¸sçPQ]=ËB¥a`­E» CORRECT
0xB4006F00B8007300E700500051005D003D00CB0042000C00A50061006000AD004500BB00 ´o¸sçPQ]=ËB¥a`­E» MY OUTPUT
0xB4006F00B8007300E700500051005D002F00D30042001900B7006E006100 ´o¸sçPQ]/ÓB·na CORRECT
0xB4006F00B8007300E700500051005D002F00D30042001900B7006E006100 ´o¸sçPQ]/ÓB·na MY OUTPUT
0xB4006F00B8007300E700500051005F001A20BC006B0021003500DD00 ´o¸sçPQ_‚¼k!5Ý CORRECT
0xB4006F00B8007300E700500051005F00201A00BC006B0021003500DD00 ´o¸sçPQ_'¼k!5Ý MY OUTPUT
0xB4006F00B8007300E700500051005D002F00EE006B00290014204E004100 ´o¸sçPQ]/îk)—NA CORRECT
0xB4006F00B8007300E700500051005D002F00EE006B0029002014004E004100 ´o¸sçPQ]/îk)-NA MY OUTPUT
0xB4006F00B8007300E700500051005D003800E600690036001C204C004F00 ´o¸sçPQ]8æi6“LO CORRECT
0xB4006F00B8007300E700500051005D003800E60069003600201C004C004F00 ´o¸sçPQ]8æi6"LO MY OUTPUT
0xB4006F00B8007300E700500051005D002F00F3006200390014204E004700C602 ´o¸sçPQ]/ób9—NGˆ CORRECT
0xB4006F00B8007300E700500051005D002F00F300620039002014004E0047002C600 ´o¸sçPQ]/ób9-NG^ MY OUTPUT
0xB4006F00B8007300E700500051005D003B00EE007200330078014100 ´o¸sçPQ];îr3ŸA CORRECT
0xB4006F00B8007300E700500051005D003B00EE0072003300178004100 ´o¸sçPQ];îr3YA MY OUTPUT
0xB4006F00B8007300E700500051005D003000F20064003E009D004B00 ´o¸sçPQ]0òd>K CORRECT
0xB4006F00B8007300E700500051005D003000F20064003E009D004B00 ´o¸sçPQ]0òd>?K MY OUTPUT
0xB4006F00B8007300E700500051005D002F00E60075003E00 ´o¸sçPQ]/æu> CORRECT
0xB4006F00B8007300E700500051005D002F00E60075003E00 ´o¸sçPQ]/æu> MY OUTPUT
0xB4006F00B8007300E700500051005D002F00EE006A003000DC024500 ´o¸sçPQ]/îj0˜E CORRECT
0xB4006F00B8007300E700500051005D002F00EE006A0030002DC004500 ´o¸sçPQ]/îj0~E MY OUTPUT
I thank you in advance for every reply or comment. Greetings.
This is due to endianness, and different integer and string encodings.
char cc = '…';
Console.WriteLine(cc);
// …
Console.WriteLine(((int)cc).ToString("x"));
// 2026 <-- note, hex value differs from the byte representations shown below
Console.WriteLine(BytesToHex(BitConverter.GetBytes((int)cc)));
// 26200000
Console.WriteLine(BytesToHex(Encoding.GetEncoding("utf-16").GetBytes(new[] { cc })));
// 2620
You should not treat chars as integers. There are plenty of different ways to encode strings; .NET internally uses UTF-16. All encodings work with bytes, not with integers, so explicitly converting chars to integers can lead to unexpected results like yours. Why not get the encoding you need and work with bytes via Encoding.GetBytes?
void Main()
{
    // output you expect: 0xB4006F00B8007300E700500051005D002800C2004600120026206100
    Console.WriteLine(BytesToHex(Encoding.GetEncoding("utf-16").GetBytes("´o¸sçPQ](ÂF\u0012…a")));
}
public static string BytesToHex(byte[] bytes)
{
    // whatever way to convert bytes to hex
    return "0x" + BitConverter.ToString(bytes).Replace("-", "");
}

How to perform multiple Replace calls at once

I have a bit of a weird question at hand. I have a text that's encoded in such a way that each character is replaced by another character, and I'm creating an application that will replace each character with the correct one. But I've come across a problem that I'm having trouble solving. Let me show you with an example:
Original text: This is a line.
Encoded text: (.T#*T#*%*=T50;
Now, as I said, each character represents another character: '(' is 'T', '.' is actually an 'h', and so on.
Now I could just go with
string decoded = encoded.Replace('(','T'); //T.T#*T#*%*=T50;
And that will solve one problem, but when I reach the character 'T', which is actually the encoded character 'i', I will have to replace all 'T's with 'i', which means that all previously decoded letter 'T's (that were once '(') will also change along with the encoded 'T'.
//T.T#*T#*%*=T50; -> i.i#*i#*%*=i50;
In this situation it's obvious that I should've just gone the other way around, first changing 'T' to 'i' and then '(' to 'T', but in the text I'm changing, that kind of analysis is not an option.
What's the alternative here that I could do to perform the task correctly?
Thank you!
One possible solution is to not use the string Replace method at all.
Instead you can create a method which, for every encoded character, outputs the decoded one, then go through your string as an array of chars and apply that "decryption" method to each character; this way you'll receive the decoded string.
For example (using a StringBuilder to create the new string):
private static char Decode(char source)
{
    if (source == '(')
        return 'T';
    else if (source == '.')
        return 'h';
    //.... and so on
    return source; // fallback for unmapped characters, so every code path returns a value
}
string source = "ABC";
var builder = new StringBuilder();
foreach (var c in source)
    builder.Append(Decode(c));
var result = builder.ToString();
Using .Replace() probably isn't the way to go in the first place, since, as you're finding, it covers the whole string every time. And once you've modified the whole string once, the encoding is lost.
Instead, loop over the string one time and replace characters individually.
Create a function that accepts a char and returns the replaced char. For simplicity, I'll just show the signature:
private char Decode(char c);
Then just loop over the string and call that function on each character. LINQ can make short work of that:
var decodedString = new string(encodedString.Select(c => Decode(c)).ToArray());
(This is freehand and untested, you may or may not need that .ToArray() for the string constructor to be happy, I'm not certain. But you get the idea.)
If it's easier to read you can also just loop manually over the string and perhaps use a StringBuilder with each successive char to build the final decoded result.
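For illustration, here's a minimal sketch of that manual loop, using a hypothetical decodeMap dictionary to hold the character mapping (the pairs shown are placeholders taken from the example above, not the real cipher):
var decodeMap = new Dictionary<char, char>
{
    { '(', 'T' }, // placeholder pairs; fill in the real mapping
    { '.', 'h' },
};
var builder = new StringBuilder(encoded.Length);
foreach (char c in encoded)
{
    // Characters with no mapping pass through unchanged.
    builder.Append(decodeMap.TryGetValue(c, out char decoded) ? decoded : c);
}
string decodedString = builder.ToString();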
Without knowledge of your encryption algorithm, this answer assumes that it's a simple character translation akin to the Caesar Cipher.
Pass in your encrypted string, the method loops over each character, adjusting it by the value of shiftDelta and returns the resulting string.
private string Decrypt(string input)
{
    const int shiftDelta = 10;
    var inputChars = input.ToCharArray();
    var outputChars = new char[inputChars.Length];
    for (var i = 0; i < outputChars.Length; i++)
    {
        // Perform character translation here
        outputChars[i] = (char)(inputChars[i] + shiftDelta);
    }
    return new string(outputChars); // note: char[].ToString() would return "System.Char[]"
}

Converting long hex string to binary string throws OverflowException

Trying to convert a huge hex string to a binary string, but an OverflowException keeps getting thrown. This is my code to convert an image file to a hex string (which, when used with a FlowDocument, works perfectly!):
string h = new System.Runtime.Remoting.Metadata.W3cXsd2001.SoapHexBinary(System.IO.File.ReadAllBytes(Path)).ToString();
Now, however, I want to take this hex string and convert it to a binary string so that it may also be displayed in a FlowDocument. First, I tried writing it to a temp text file and then attempting to read it into a byte array:
string TempPath = System.IO.Path.Combine(System.IO.Path.GetTempPath(), "Text.txt");
using (System.IO.StreamWriter sw = new System.IO.StreamWriter(TempPath))
{
    sw.WriteLine(Convert.ToString(Convert.ToInt64(h, 16), 2).PadLeft(12, '0'));
}
byte[] c = System.IO.File.ReadAllBytes(TempPath);
When that didn't work, I tried reading it into a string:
string c = System.IO.File.ReadAllText(TempPath);
Neither worked; both still throw an OverflowException. I have also tried just doing this, skipping writing to a file altogether:
string s = Convert.ToString(Convert.ToInt64(h, 16), 2).PadLeft(12, '0');
And no matter what approach I take, I still get an exception thrown. How are large strings like this normally handled?
Update
I've modified my algorithm to convert one character at a time, so now it looks like this:
string NewBinary = "";
try
{
    int i = 0;
    foreach (char c in h)
    {
        if (i == 100) break;
        NewBinary = string.Concat(NewBinary, Convert.ToString(Convert.ToInt64(c.ToString(), 16), 2).PadLeft(12, '0'));
        i++;
    }
}
catch { } // exception handling omitted
The problem with this is that the string is always going to be super long, and the code above takes a LONG time to generate the binary string. I limited the length to 100 to test the conversion, so the conversion itself is not the issue, only the speed.
An Int64 is represented by at most 16 hex characters, which is why attempting to convert a "huge string" causes an OverflowException: the value is more than an Int64 can represent. You will need to break the string up into groups of at most 16 chars, convert those to binary, and concatenate them.
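For example, a rough sketch of that chunked conversion, with a hypothetical HexToBinary helper (assuming the input contains only valid hex digits):
public static string HexToBinary(string hex)
{
    var result = new StringBuilder();
    for (int i = 0; i < hex.Length; i += 16)
    {
        string chunk = hex.Substring(i, Math.Min(16, hex.Length - i));
        // Pad each chunk to 4 bits per hex digit so the concatenated pieces line up.
        result.Append(Convert.ToString(Convert.ToInt64(chunk, 16), 2).PadLeft(chunk.Length * 4, '0'));
    }
    return result.ToString();
}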
You could convert a nibble at a time using a lookup array, for example:
public static string HexStringToBinaryString(string hexString)
{
    var result = new StringBuilder();
    string[] lookup =
    {
        "0000", "0001", "0010", "0011",
        "0100", "0101", "0110", "0111",
        "1000", "1001", "1010", "1011",
        "1100", "1101", "1110", "1111"
    };
    foreach (char nibble in hexString.Select(char.ToUpper))
        result.Append((nibble > '9') ? lookup[10 + nibble - 'A'] : lookup[nibble - '0']);
    return result.ToString();
}
This converts each hex character of the string into its corresponding binary pattern (e.g. A becomes 1010, etc.).

Get only numbers from line in file

So I have this file with a number that I want to use.
This line is as follows:
TimeAcquired=1433293042
I only want to use the number part, but not the part that explains what it is.
So the output is:
1433293042
I just need the numbers.
Is there any way to do this?
Follow these steps:
read the complete line
split the line at the = character using string.Split()
extract the second field of the string array
convert the string to an integer using int.Parse() or int.TryParse(), as sketched below
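A minimal sketch of those steps (assuming the line always has the Name=Value shape shown above):
string line = "TimeAcquired=1433293042";
string[] parts = line.Split('=');  // ["TimeAcquired", "1433293042"]
if (parts.Length == 2 && int.TryParse(parts[1], out int value))
{
    Console.WriteLine(value);  // prints 1433293042
}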
There is a very simple way to do this: call Split() on the string and take the last part. Like so, if you want to keep it as a string:
var myValue = theLineString.Split('=').Last();
If you need this as an integer:
int myValue = 0;
var numberPart = theLineString.Split('=').Last();
int.TryParse(numberPart, out myValue);
string setting = sr.ReadLine();
int start = setting.IndexOf('=');
setting = setting.Substring(start + 1); // everything after the '='
A good approach to extract only the numbers, wherever they appear, would be:
var MyNumbers = "TimeAcquired=1433293042".Where(x=> char.IsDigit(x)).ToArray();
var NumberString = new String(MyNumbers);
This is good when the format of the string is not known, for instance when you do not know how the numbers are separated from the letters.
You can do it using the Split() function as given below:
string theLineString = "your string";
string[] collection = theLineString.Split('=');
Your string gets divided into two parts:
1) the part before "="
2) the part after "="
You can then access each part by its index. If you want the numeric one, simply do this:
string answer = collection[1];
Try:
string t = "TimeAcquired=1433293042";
t = t.Replace("TimeAcquired=", string.Empty);
Then just parse it:
int mrt = int.Parse(t);

How to prevent conversion of Windows-1252 argument into a Unicode string?

I've written my first COM classes. My unit tests work fine, but my first use of the COM objects has hit a snag.
The COM classes provide methods which accept a string, manipulate it and return a string. The consumer of the COM objects is a dBASE PLUS program.
When the input string contains common keyboard characters (ASCII 127 or lower), the COM methods work fine. However, if the string contains characters beyond the ASCII range, some of them get remapped from Windows-1252 to C#'s Unicode. This table shows the mapping that takes place: http://www.unicode.org/Public/MAPPINGS/VENDORS/MICSFT/WINDOWS/CP1252.TXT
For example, if the dBASE program calls the COM object with:
oMyComObject.MyMethod("It will cost€123") where the € is hex 80,
the C# method receives it as Unicode:
public string MyMethod(string source)
{
    // source is Unicode and now the Euro symbol is hex 20AC
    ...
}
I would like to avoid this remapping because I want the original hex content of the string.
I've tried adding the following to MyMethod to convert the string back to Windows-1252, but the Euro symbol gets lost because it becomes a question mark:
byte[] UnicodeBytes = Encoding.Unicode.GetBytes(source.ToString());
byte[] Win1252Bytes = Encoding.Convert(Encoding.Unicode, Encoding.GetEncoding(1252), UnicodeBytes);
string Win1252 = Encoding.GetEncoding(1252).GetString(Win1252Bytes);
Is there a way to prevent this conversion of the "source" parameter to Unicode? Or, is there a way to convert it 100% from Unicode back to Windows-1252?
Yes, I'm answering my own question. The answer by "Jigsore" put me on the right track, but I want to explain more clearly in case someone else makes the same mistake I made.
I eventually figured out that I had misdiagnosed the problem. dBASE was passing the string fine and C# was receiving it fine. It was how I checked the contents of the string that was in error.
This turnkey example builds on Jigsore's answer:
void Main()
{
    string unicodeText = "\u20AC\u0160\u0152\u0161";
    byte[] unicodeBytes = Encoding.Unicode.GetBytes(unicodeText);
    byte[] win1252bytes = Encoding.Convert(Encoding.Unicode, Encoding.GetEncoding(1252), unicodeBytes);
    for (int i = 0; i < win1252bytes.Length; i++)
        Console.Write("0x{0:X2} ", win1252bytes[i]); // output: 0x80 0x8A 0x8C 0x9A

    // win1252String represents the string passed from dBASE to C#
    string win1252String = Encoding.GetEncoding(1252).GetString(win1252bytes);
    Console.WriteLine("\r\nWin1252 string is " + win1252String); // output: Win1252 string is €ŠŒš

    Console.WriteLine("looking at the code of the first character the wrong way: " + (int)win1252String[0]);
    // output: looking at the code of the first character the wrong way: 8364

    byte[] bytes = Encoding.GetEncoding(1252).GetBytes(win1252String[0].ToString());
    Console.WriteLine("looking at the code of the first character the right way: " + bytes[0]);
    // output: looking at the code of the first character the right way: 128

    // Warning: if your input contains character codes larger than what a byte can hold
    // (e.g. multi-byte Chinese characters), then you will need to look at more than just bytes[0].
}
The reason the first method was wrong is that casting (int)win1252String[0] (or the converse of casting an integer j to a character with (char)j) involves an implicit conversion with the Unicode character set C# uses.
I consider this resolved and would like to thank each person who took the time to comment or answer for their time and trouble. It is appreciated!
Actually you're doing the Unicode to Win-1252 conversion correctly, but you're performing an extra step. The original Win1252 codes are in the Win1252Bytes array!
Check the following code:
string unicodeText = "\u20AC\u0160\u0152\u0161";
byte[] unicodeBytes = Encoding.Unicode.GetBytes(unicodeText);
byte[] win1252bytes = Encoding.Convert(Encoding.Unicode, Encoding.GetEncoding(1252), unicodeBytes);
for (int i = 0; i < win1252bytes.Length; i++)
    Console.Write("0x{0:X2} ", win1252bytes[i]);
The output shows the Win-1252 codes for the unicodeText string; you can check this by looking at the CP1252.TXT table.
