I just found an interesting problem while translating some code:
VB.NET: CByte(4) << 8 Returns 4
But C#: (byte)4 << 8 Returns 1024
Namely, why does VB.NET: (CByte(4) << 8).GetType() return type {Name = "Byte" FullName = "System.Byte"}
Yet C#: ((byte)4 << 8).GetType() returns type {Name = "Int32" FullName = "System.Int32"}
Is there a reason why these two don't treat the binary shift the same way? Following from that, is there any way to make the C# bit shift behave like VB.NET's (to make VB.NET behave like C# you just do CInt(_____) << 8)?
According to http://msdn.microsoft.com/en-us/library/a1sway8w.aspx, byte does not have << defined on it in C# (only int, uint, long and ulong do). This means that an implicit conversion is applied to a type that does support it, so the byte is converted to int before the shift is performed.
http://msdn.microsoft.com/en-us/library/7haw1dex.aspx says that VB defines the operation on Bytes. To prevent overflow, it applies a mask to your shift count to bring it within an appropriate range, so in this case it is actually shifting by nothing at all (8 And 7 = 0).
As to why C# doesn't define shifting on bytes I can't tell you.
To actually make it behave the same for other data types, you just need to mask your shift count with 7 for bytes or 15 for shorts (see the second link for details).
To apply the same in C#, you would use
static byte LeftShiftVBStyle(byte value, int count)
{
    return (byte)(value << (count & 7));
}
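A quick sanity check of the difference, assuming the helper above is in scope:
byte b = 4;
int csharpResult = b << 8;                   // 1024: C# promotes the byte to int before shifting
byte vbStyleResult = LeftShiftVBStyle(b, 8); // 4: the shift count is masked to 8 & 7 = 0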
As for why VB took that approach... it's just a different language with different rules (to be fair, it is a natural extension of the way C# masks shift counts for int with & 31 and for long with & 63).
Chris already nailed it: VB.NET defines shift operators for the Byte and Short types, C# does not. The C# spec is very similar to C here and is also a good match for the MSIL definitions of OpCodes.Shl, Shr and Shr_Un, which only accept int32, int64 and intptr operands. Accordingly, any byte- or short-sized operands are first converted to int32 via their implicit conversion.
That's a limitation the VB.NET compiler has to work with; it needs to generate extra code to make the byte- and short-specific versions of the operators work. The Byte operator is implemented like this:
Dim result As Byte = CByte(leftOperand << (rightOperand And 7))
and the short operator:
Dim result As Short = CShort(leftOperand << (rightOperand And 15))
What C# does corresponds to this VB.NET code:
Dim result As Integer = CInt(leftOperand) << CInt(rightOperand)
Or CLng() if required. Implicit in C# code is that the programmer always has to cast the result back to the desired result type. There are a lot of SO questions about that from programmers who don't find it very intuitive. VB.NET has another feature that makes the automatic casts more survivable: it has overflow checking enabled by default, although that's not applicable to shifts.
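Following the same pattern, a C# helper that mimics the VB.NET Short operator might look like this (just a sketch; the name mirrors the byte helper shown earlier):
static short LeftShiftVBStyle(short value, int count)
{
    // VB.NET masks the shift count with 15 for Short, then narrows the result back
    return (short)(value << (count & 15));
}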
Related
I have an array of bytes and a struct. I want to set some of the struct members from some of the array members this way:
mystruct.m1 = (myarr[0] << 8) | myarr[1]; //join bytes
So mystruct.m1 is going to be a 16-bit integer (ushort). However, if I define mystruct.m1 as ushort, I have to cast (myarr[0] << 8) | myarr[1] to ushort, or Visual Studio says it cannot implicitly convert type 'int' to 'ushort'. But if I use a plain int, no cast is needed.
Which one is better and more efficient?
A byte in C# is unsigned (values 0 to 255). Therefore I would go with ushort as well.
Which one is better?
The one that provides better semantics to future readers. With a ushort, it's immediately clear that only 2 bytes will fit in it. If you use an int instead, one might wonder what the other 16 bits are good for.
Also, with int, bit-shift operations may behave differently if the value happens to be negative for whatever reason.
Which one is more efficient?
Why care about efficiency if you have no performance problem?
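For reference, a hedged sketch of the ushort approach (myarr and mystruct come from the question; the struct type name MyStruct and the sample bytes are made up for the example):
struct MyStruct
{
    public ushort m1;
}

byte[] myarr = { 0x12, 0x34 };
var mystruct = new MyStruct();

// The shift and OR are computed as int, so the result must be cast back to ushort
mystruct.m1 = (ushort)((myarr[0] << 8) | myarr[1]); // 0x1234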
I'm struggling to convert an apparently easy piece of code from Python to C#, shown below:
def computeIV(self, lba):
    iv = ""
    lba &= 0xffffffff
    for _ in xrange(4):
        if (lba & 1):
            lba = 0x80000061 ^ (lba >> 1)
        else:
            lba = lba >> 1
        iv += struct.pack("<L", lba)
    return iv
I'm used to C# logic and I really can't understand the array/bitmask handling here...
You can make use of the BitArray class in C#, which manages a compact array of bit values represented as Booleans, where true indicates that the bit is on (1) and false indicates that it is off (0).
It offers AND, OR, NOT, SET and XOR functions.
For Shift operation, a possible solution can be found here:
BitArray - Shift bits or Shifting a BitArray
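Alternatively, since the routine only ever works with 32-bit values, a rough C# translation using plain uint arithmetic might look like this (the method name mirrors the Python one; the byte-array return type and the little-endian assumption, matching struct.pack("<L", ...), are mine):
static byte[] ComputeIV(uint lba)
{
    var iv = new byte[16]; // 4 iterations x 4 bytes, like the string built up in Python
    for (int i = 0; i < 4; i++)
    {
        if ((lba & 1) != 0)
            lba = 0x80000061 ^ (lba >> 1); // mix in the constant when the low bit is set
        else
            lba >>= 1;

        // struct.pack("<L", lba) emits 4 little-endian bytes;
        // BitConverter.GetBytes matches that on a little-endian platform
        BitConverter.GetBytes(lba).CopyTo(iv, i * 4);
    }
    return iv;
}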
I just started coding in C++ and I saw this symbol in some example code: <<
Is there an equivalent in C#? If so, what is it?
Thank you in advance.
Disclaimer: I don't know anything about C#; this answer just describes the operator in C++.
It depends on context; that operator is often overloaded to mean different things for different types.
For integer types, it's the bitwise left shift operator; it takes the bit pattern of a value, and moves it to the left, inserting zero into the less significant bits:
unsigned x = 6; // decimal 6, binary 00110
unsigned y = x << 2; // decimal 24, binary 11000
In general, a left-shift by N bits is equivalent to multiplying by 2^N (so here, shifting by 2 bits multiplies by 4).
I'm fairly sure this use of the operator is the same in C# as in C++.
The standard library overloads the operator to insert a value into an output stream, in order to produce formatted output on the console, or in files, or in other ways.
#include <iostream> // declare standard input/output streams
std::cout << 42 << std::endl; // print 42 to the console, end the line, and flush.
I think C# has a TextWriter or something to handle formatted output, with Console.Out or something being equivalent to std::cout; but C# uses normal method calls rather than an overloaded operator.
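For comparison, a small C# sketch of both uses mentioned above (the shift works the same way; the output side goes through ordinary method calls, as noted):
uint x = 6;
uint y = x << 2;       // 24: same left-shift behaviour as the C++ example
Console.WriteLine(y);  // output via a method call rather than an overloaded operator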
operator<< means exactly the same in C++ as it does in C#; it is the left-shift operator and moves all the bits in a number to the left by the specified number of places.
But, in C++, you can overload most operators to make them do whatever you like for user-defined types. Perhaps most commonly, the left- and right-shift operators are overloaded for streams to mean 'stuff this thing into that stream' (left-shift) or 'extract a variable of this type from that stream' (right-shift).
I'm porting several thousand lines of cryptographic C# functions to a Java project. The C# code extensively uses unsigned values and bitwise operations.
I am aware of the necessary Java work-arounds to support unsigned values. However, it would be much more convenient if there were implementations of unsigned 32bit and 64bit Integers that I could drop into my code. Please link to such a library.
Quick google queries reveal several that are part of commercial applications:
http://www.teamdev.com/downloads/jniwrapper/javadoc/com/jniwrapper/UInt64.html
http://publib.boulder.ibm.com/infocenter/rfthelp/v7r0m0/index.jsp?topic=/com.rational.test.ft.api.help/ApiReference/com/rational/test/value/UInt64.html
Operations with signed and unsigned integers are mostly identical, when using two's complement notation, which is what Java does. What this means is that if you have two 32-bit words a and b and want to compute their sum a+b, the same internal operation will produce the right answer regardless of whether you consider the words as being signed or unsigned. This will work properly for additions, subtractions, and multiplications.
The operations which must be sign-aware include:
Right shifts: a signed right shift duplicates the sign bit, while an unsigned right shift always inserts zeros. Java provides the ">>>" operator for unsigned right-shifting.
Divisions: an unsigned division is distinct from a signed division. When using 32-bit integers, you can convert the values to the 64-bit long type ("x & 0xFFFFFFFFL" does the "unsigned conversion" trick).
Comparisons: if you want to compare a with b as two 32-bit unsigned words, then you have two standard idioms:
if ((a + Integer.MIN_VALUE) < (b + Integer.MIN_VALUE)) { ... }
if ((a & 0xFFFFFFFFL) < (b & 0xFFFFFFFFL)) { ... }
Knowing that, the signed Java types are not a big hassle for cryptographic code. I have implemented many cryptographic primitives in Java, and the signed types are not an issue provided that you understand what you are writing. For instance, have a look at sphlib: this is an opensource library which implements many cryptographic hash functions, both in C and in Java. The Java code uses Java's signed types (int, long...) quite seamlessly, and it simply works.
Java does not have operator overloading, so Java-only "solutions" to get unsigned types will involve custom classes (such as the UInt64 class you link to), which will imply a massive performance penalty. You really do not want to do that.
Theoretically, one could define a Java-like language with unsigned types and implement a compiler which produces bytecode for the JVM (internally using the tricks I detail above for shifts, divisions and comparisons). I am not aware of any available tool which does that; and, as I said above, Java's signed types are just fine for cryptographic code (in other words, if you have trouble with such signed types, then I daresay that you do not know enough to implement cryptographic code securely, and you should refrain from doing so; instead, use existing opensource libraries).
This is a language feature, not a library feature, so there is no way to extend Java to support this functionality unless you change the language itself, in which case you'd need to make your own compiler.
However, if you need unsigned right-shifts, Java supports the >>> operator which works like the >> operator for unsigned types.
You can, however, make your own methods to perform arithmetic with signed types as though they were unsigned; this should work, for example:
static int multiplyUnsigned(int a, int b)
{
    final boolean highBitA = a < 0, highBitB = b < 0;
    final long a2 = a & ~(1 << 31), b2 = b & ~(1 << 31);
    final long result = (highBitA ? a2 | (1L << 31) : a2)
                      * (highBitB ? b2 | (1L << 31) : b2);
    return (int)result;
}
Edit:
Thanks to @Ben's comment, we can simplify this:
static int multiplyUnsigned(int a, int b)
{
    final long mask = (1L << 32) - 1;
    return (int)((a & mask) * (b & mask));
}
Neither of these methods works, though, for the long type. You'd have to cast to a double, negate, multiply, and cast it back again in that case, which would likely kill any and all of your optimizations.
Possible Duplicate:
byte + byte = int… why?
I have a method like this:
void Method(short parameter)
{
    short localVariable = 0;
    var result = localVariable - parameter;
}
Why is the result an Int32 instead of an Int16?
It's not just subtraction: there simply exists no short (or byte/sbyte) arithmetic.
short a = 2, b = 3;
short c = a + b;
Will give the error that it cannot convert int (a+b) to short (c).
One more reason to almost never use short.
Additional: in any calculation, short, sbyte, byte and ushort will always be 'widened' to int. This behavior goes back to K&R C (and is probably even older than that).
The (old) reason for this was, afaik, efficiency and overflow problems when dealing with char. That last reason doesn't hold as strongly for C# anymore, where a char is 16 bits and an int is not implicitly convertible to char. But it is very fortunate that C# numerical expressions remain compatible with C and C++ to a very high degree.
All operations with integral numbers smaller than Int32 are widened to 32 bits before calculation by default. The reason the result is Int32 is simply to leave it as it is after the calculation. If you check the MSIL arithmetic opcodes, the only integral numeric types they operate on are Int32 and Int64. It's "by design".
If you desire the result back in Int16 format, it is irrelevant whether you perform the cast in code or the compiler (hypothetically) emitted the conversion "under the hood".
Also, the example above can easily be solved with the cast
short a = 2, b = 3;
short c = (short) (a + b);
The two numbers would expand to 32 bits, get subtracted, then truncated back to 16 bits, which is how MS intended it to be.
The advantage of using short (or byte) is primarily storage in cases where you have massive amounts of data (graphical data, streaming, etc.)
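A small illustration of that expand-then-truncate behaviour (the values are arbitrary, picked so the sum no longer fits in 16 bits):
short a = 30000, b = 30000;
int wide = a + b;               // 60000: the addition is performed in 32 bits
short narrow = (short)(a + b);  // -5536: truncating back to 16 bits wraps around (unchecked by default)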
P.S. Oh, and the article is "a" for words whose pronunciation starts with a consonant sound, and "an" for words whose pronunciation starts with a vowel sound. A number, AN int. ;)
The other answers given within this thread, as well as the discussions linked here, are instructive:
(1) Why is a cast required for byte subtraction in C#?
(2) byte + byte = int… why?
(3) Why is a cast required for byte subtraction in C#?
But just to throw another wrinkle into it, it can depend on which operators you use. The increment (++) and decrement (--) operators, as well as the addition assignment (+=) and subtraction assignment (-=) operators, are overloaded for a variety of numeric types, and they perform the extra step of converting the result back to the operand's type when returning it.
For example, using short:
short s = 0;
s++; // <-- Ok
s += 1; // <-- Ok
s = s + 1; // <-- Compile time error!
s = s + s; // <-- Compile time error!
Using byte:
byte b = 0;
b++; // <-- Ok
b += 1; // <-- Ok
b = b + 1; // <-- Compile time error!
b = b + b; // <-- Compile time error!
If they didn't do it this way, calls using the increment operator (++) would be impossible and calls to the addition assignment operator would be awkward at best, e.g.:
short s = 0;
s += (short)1;
Anyway, just another aspect to this whole discussion...
I think it's done automatically to avoid overflow.
Let's say you do something like this:
short result = short.MaxValue + short.MaxValue;
The result clearly wouldn't fit in a short.
One thing I don't understand, then, is why not do the same for Int32, which would automatically convert to long?
The effect you are seeing...
short - short = int
...is discussed extensively in this Stack Overflow question: byte + byte = int… why?
There is a lot of good information and some interesting discussions as to why it is that way.
Here is a highly-voted answer:
I believe it's basically for the sake of performance. (In terms of "why it happens at all" it's because there aren't any operators defined by C# for arithmetic with byte, sbyte, short or ushort, just as others have said. This answer is about why those operators aren't defined.)

Processors have native operations to do arithmetic with 32 bits very quickly. Doing the conversion back from the result to a byte automatically could be done, but would result in performance penalties in the case where you don't actually want that behaviour.

-- Jon Skeet