Calculating offset from two hex strings [closed] - c#

I have a program that calculates an offset, but I do not have its source code. I am trying to reproduce the calculation in C#. The first input hex byte cannot be less than 0x80. This is what I have written so far, but it does not calculate the offset correctly:
private void getOffSet()
{
    // Default offset
    int defaultOffset = 0x0512;

    // Input
    byte byte1input = 0x80;
    byte byte2input = 0x00;
    int inValue = byte2input + (byte1input << 8);

    // Calculate offset
    int outValue = inValue + defaultOffset;

    // Convert the integer to a hex string
    string hexValue = outValue.ToString("X");
}
Any help correcting this function so it calculates the correct offset would be greatly appreciated. Thank you beforehand.

The following function matches your list. It is, of course, a wild guess, as there is no way I can check it against the original program and I have no idea what these offsets mean.
private void getOffSet(byte one, byte two)
{
    byte baseByte = 0x80;
    int defaultOffset = 0x0418;
    int mul = (one - baseByte) % 8;
    int result = mul * 0x2000 + defaultOffset;
    result += two * 0x0020;
    Console.WriteLine(result.ToString("X"));
}
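For example, calling the guessed function with a couple of inputs produces the following values (these are simply what the formula above yields, not values verified against the original program):

getOffSet(0x80, 0x00); // prints 418  -> (0 % 8) * 0x2000 + 0x0418
getOffSet(0x81, 0x01); // prints 2438 -> (1 % 8) * 0x2000 + 0x0418 + 0x0020
getOffSet(0x88, 0x00); // prints 418  -> wraps around because of the % 8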

C# Remove all preceding zeros from a double (no strings) [closed]

I need a method that returns the first non-zero digits of a double in the following way: any number >= 1 or == 0 is returned unchanged; everything else is returned as per the following examples:
(Please note that I am using double because the potential imprecision is irrelevant in the use case whereas saving memory is relevant).
double NumberA = 123.2; // Returns 123.2
double NumberB = 1.2; // Returns 1.2
double NumberC = 0.000034; // Returns 3.4
double NumberD = 0.3; // Returns 3.0
double NumberE = -0.00000087; // Returns -8.7
One option would be to iteratively multiply by 10 until the absolute value is at least 1:
public double RemoveLeadingZeros(double num)
{
    if (num == 0) return 0;
    while (Math.Abs(num) < 1) { num *= 10; }
    return num;
}
A more direct, but less intuitive, way is to use logarithms:
public double RemoveLeadingZeros(double num)
{
    if (num == 0) return 0;
    if (Math.Abs(num) < 1)
    {
        // Log10 needs the absolute value, otherwise negative inputs produce NaN.
        double pow = Math.Floor(Math.Log10(Math.Abs(num)));
        double scale = Math.Pow(10, -pow);
        num = num * scale;
    }
    return num;
}
It's the same idea, but multiplying once by a power of 10 rather than multiplying several times.
Note that double arithmetic is not always precise; you may end up with something like 3.40000000001 or 3.3999999999. If you want consistent decimal representation then you can use decimal instead, or string manipulation.
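A minimal sketch of the same loop using decimal instead of double (just one way to apply the suggestion above; decimal keeps exact decimal digits, so 0.3 scales to exactly 3):

public decimal RemoveLeadingZeros(decimal num)
{
    if (num == 0) return 0;
    // decimal multiplication by 10 is exact, so no binary rounding noise appears.
    while (Math.Abs(num) < 1) { num *= 10; }
    return num;
}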
I would start with converting to a string. Something like this:
string doubleString = NumberC.ToString();
I don't know exactly what this will output; you will have to check. But if, for example, it is "0.000034", you can easily manipulate the string to meet your needs, as sketched below.
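A rough sketch of that string manipulation (this assumes ToString() produced a plain "0.xxxx" form; very small doubles can print in scientific notation such as "8.7E-07", which this sketch does not handle):

static double TrimLeadingZeros(double value)
{
    if (value == 0 || Math.Abs(value) >= 1) return value;

    // e.g. 0.000034 -> "0.000034"
    string s = Math.Abs(value).ToString(System.Globalization.CultureInfo.InvariantCulture);

    // Drop the leading "0." and the run of zeros: "000034" -> "34"
    string digits = s.Substring(2).TrimStart('0');

    // Reinsert the decimal point after the first digit: "34" -> "3.4"
    string rebuilt = digits.Length > 1
        ? digits.Substring(0, 1) + "." + digits.Substring(1)
        : digits;

    double result = double.Parse(rebuilt, System.Globalization.CultureInfo.InvariantCulture);
    return value < 0 ? -result : result;
}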

combine two 16 bit integer to 32 bit float value c# [closed]

I have to combine two 16-bit integers and convert the result into a 32-bit float value in my service, but I was not able to get it working. I finally figured out how to do it.
Int16 val1 = 0;
Int16 val2 = 16880;
The output should be:
30
Int16 val1 = 0;
Int16 val2 = 16880;

// Get the little-endian bytes of each 16-bit word.
var byteval1 = BitConverter.GetBytes(val1);
var byteval2 = BitConverter.GetBytes(val2);

// Assemble the 4-byte pattern: val1 supplies the low word, val2 the high word.
byte[] temp2 = new byte[4];
temp2[0] = byteval1[0];
temp2[1] = byteval1[1];
temp2[2] = byteval2[0];
temp2[3] = byteval2[1];

// Reinterpret the 4 bytes as an IEEE 754 single: 0x41F00000 -> 30.0f
float myFloat = System.BitConverter.ToSingle(temp2, 0);
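A more compact variant of the same idea (a sketch only, assuming as above that val1 is the low word and val2 the high word):

Int16 val1 = 0;
Int16 val2 = 16880;

// Pack the two words into one 32-bit pattern, then reinterpret it as a float.
int bits = ((ushort)val2 << 16) | (ushort)val1;
float myFloat = BitConverter.ToSingle(BitConverter.GetBytes(bits), 0);
Console.WriteLine(myFloat); // 30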

Confused about accessing elements from IntPtr in C# [closed]

I am using Kinect v2 which contains the following CameraSpacePoint struct:
public struct CameraSpacePoint : IEquatable<CameraSpacePoint>
{
    public float X;
    public float Y;
    public float Z;
}
The CameraSpacePoint struct also contains a few methods (Equals, GetHashCode, etc.), which are not shown above to keep the post clean and short.
Well, I define cameraSpacePoints in the class constructor as follows:
IntPtr cameraSpacePoints = Marshal.AllocHGlobal(512 * 424 * 4 * 3);
Below is the explanation of the above memory allocation:
512: width
424: height
4: bytes needed for a single 'float'
3: three variables, i.e. 'X', 'Y' and 'Z'
Later, I copied values to cameraSpacePoints using CoordinateMapper as follows:
coordinateMapper.MapDepthFrameToCameraSpaceUsingIntPtr(depthFrameData,
512 * 424 * 2,
cameraSpacePoints,
512 * 424 * 4 * 3);
It seems perfect. Now I want to read the values back from cameraSpacePoints, so I used the following code inside an unsafe block:
float* cameraSpacePoint = (float*)cameraSpacePoints;
for (var index = 0; index < 512 * 424; index++)
{
    float X = cameraSpacePoint[index];
    float Y = cameraSpacePoint[index + 1];
    float Z = cameraSpacePoint[index + 2];
}
It doesn't seem to be working, which I realized while visualizing the data. It appears to me that there is some confusion in how elements are accessed from cameraSpacePoints via the IntPtr. What is missing here? Any suggestions, please?
In your initial code, you are casting the IntPtr (which points to an array of CameraSpacePoint) to a raw float pointer. If you interpret the IntPtr as raw floats, then since each point occupies 3 floats (x, y and z), you need to advance the index by 3 on every iteration and loop over 512 * 424 * 3 floats in total, e.g. (I've renamed variables for clarity):
var floats = (float*)cameraSpacePoints;
for (var index = 0; index < 512 * 424 * 3; index += 3)
{
    var x = floats[index];
    var y = floats[index + 1];
    var z = floats[index + 2];
    var myCameraSpacePoint = new CameraSpacePoint
    {
        X = x,
        Y = y,
        Z = z
    };
    // use myCameraSpacePoint here
}
But that's a horribly inefficient way of handling the data, given that the data was originally an array of CameraSpacePoint in any event. Much better would be to cast the pointer directly back to the actual element type:
var points = (CameraSpacePoint*)cameraSpacePoints;
for (var index = 0; index < 512 * 424; index++)
{
    var cameraSpacePoint = points[index];
    // Do something with cameraSpacePoint
}
By casting to the correct type (CameraSpacePoint), we're also improving the robustness of the code - e.g. if, in future, additional fields are added to a new version of CameraSpacePoint, then recompiling your code against the new version will still work, whereas accessing the floats directly breaks the encapsulation and makes maintenance difficult.
The reason we no longer need to advance the index by 3 is that when we use the subscript operation points[index], the compiler knows to find element n at an offset of n * sizeof(CameraSpacePoint) from points[0], and sizeof(CameraSpacePoint) is the size of 3 floats.
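If you would rather avoid the unsafe block altogether, a managed alternative is to let the marshaller copy each struct out of the unmanaged buffer (a sketch only; it is slower than pointer access but needs no unsafe context):

int count = 512 * 424;
int stride = Marshal.SizeOf(typeof(CameraSpacePoint)); // 12 bytes: 3 floats

for (int index = 0; index < count; index++)
{
    // Compute the address of element 'index' and copy it into a managed struct.
    IntPtr element = IntPtr.Add(cameraSpacePoints, index * stride);
    var point = (CameraSpacePoint)Marshal.PtrToStructure(element, typeof(CameraSpacePoint));
    // use point.X, point.Y, point.Z here
}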

Is the following an encryption algorithm? If so, is it reversible? [closed]

I came across the following method in a programming example. Is this really an encryption algorithm, or is it more of a hex hashing/irreversible encoding algorithm? I see the use of bitwise shifts and bitwise ANDs, which leads me to believe that the method loses data and is an irreversible hex encoding algorithm.
private string Encrypt(string key, string message)
{
    string result = "";
    var hexValues = "0123456789abcdef";
    for (int i = 0, j = 0; i < message.Length; i++)
    {
        var a = (Int32)message[i];
        var b = (Int32)key[j] & 10;
        var encChar = a ^ b;
        if (++j == key.Length)
        {
            j = 0;
        }
        result += hexValues[(encChar >> 4) & 15];
        result += hexValues[encChar & 15];
    }
    return result;
}
At its heart, this algorithm is performing XOR encryption, a weak and easily broken form of encryption.
var encChar = a ^ b;
The bit shifts are used to convert each "encrypted" character into its two hex digits.
result += hexValues[(encChar >> 4) & 15];
result += hexValues[encChar & 15];
The & mask is used to select a value to XOR the character at the given position against. It is providing a "hidden" change to the key, a practice sometimes called security through obscurity (which does not add much to the actual security).
var b = (Int32)key[j] & 10;
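Because both hex digits of each XORed byte are written out, no information is lost and the process is reversible, at least for characters that fit in a single byte (characters above 0xFF would lose their upper bits in the hex step). A sketch of a matching decrypt, assuming the same key is available:

private string Decrypt(string key, string hex)
{
    // Sketch: assumes the ciphertext came from Encrypt above and the message characters fit in one byte.
    string result = "";
    for (int i = 0, j = 0; i < hex.Length; i += 2)
    {
        // Rebuild the XORed byte from its two hex digits.
        int encChar = Convert.ToInt32(hex.Substring(i, 2), 16);

        // Apply the same masked key byte; XOR is its own inverse.
        int b = (Int32)key[j] & 10;
        result += (char)(encChar ^ b);

        if (++j == key.Length)
        {
            j = 0;
        }
    }
    return result;
}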

Retrieving a value that is stored as MB [closed]

I have a table that stores the amount of RAM a server has in a bigint column, with values such as 2470208.
How can I apply a data annotation or other validation to show only 2 instead of 2470208?
I mean always dividing by roughly a million and keeping only the part to the left of the decimal point.
1) Use this for an automatic thousands unit:
string GetByteString(long n)
{
    int k = 0;
    string u = " kMGTPE"; // (none), kilo, mega, giga, tera, peta, exa
    while (n >= 1024)
    {
        n >>= 10;
        k++;
    }
    return n.ToString() + u[k];
}
Call:
string s= GetByteString(1234567890123);
Debug.WriteLine(s);
2) But if you simply always want MB just shift by 20:
long n = 123456789;
string MB = (n>>20).ToString();
But this will show 0 if n is below 1 MB.
Reason:
1 kB = 2^10 = 1<<10 = 1024;
1 MB = 2^20 = 1<<20 = 1024*1024 = 1048576;
1 GB = 2^30 = 1<<30 = 1024*1024*1024 = 1073741824;
You tagged C# but mentioned a bigint column so it isn't clear whether you're looking for a database or C# solution. The following C# method will take the number of bytes as an integer and return a formatted string...
public string FormattedBytes(long bytes)
{
    string units = " kMGT";
    double logBase = Math.Log((double)bytes, 1024.0);
    double floorBase = Math.Floor(logBase);
    return String.Format("{0:N2}{1}b",
        Math.Pow(1024.0, logBase - floorBase),
        units.Substring((int)floorBase, 1));
}
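For the value from the question, a quick check of both approaches (the outputs are simply what the code above produces, assuming the column really stores bytes):

long ram = 2470208;
Debug.WriteLine(FormattedBytes(ram)); // "2.36Mb"
Debug.WriteLine(ram >> 20);           // 2 (whole megabytes only)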
