I am writing a C# solution that generates a C++ file based on some configuration. For this I am using Scriban as a template engine. I have seen the following statement before in Jinja2:
uint16_t {{"%25s"|format(device.name)}} = {{"0x%08x"|format(device.address)}};
device.name is a string and device.address contains a hexadecimal value (0x50060800).
I tried this:
uint16_t {{device.name | object.format "%25s"}} = {{device.address | math.format "0x%08x"}};
And I received the following errors:
<input>(15,50) : error : Unexpected `RNG`. Must be a formattable object
<input>(15,71) : error : Unexpected `0x50060800`. Must be a formattable object
This is the result I was expecting:
uint16_t RNG = 0x50060800;
How can I implement the above statement in Scriban?
I answered in your GitHub issue. I'll paste it here for posterity.
Keep in mind:
Jinja is written for Python. So, it uses Python conventions for things like format strings.
Scriban is written in C# - it uses C# conventions for format strings.
Based on this StackOverflow post, it seems that by using "%25s" you are attempting to pad device.name to a width of 25 characters. The way to do that in Scriban is with the string.pad_right (or string.pad_left) function.
Additionally, your hex format string is incorrect (see the C# documentation on the X specifier for numeric format strings)
Putting all that together:
uint16_t {{device.name | string.pad_right 25 }} = 0x{{device.address | math.format "X8"}};
Note, however, that numeric format strings only work on numeric types. So, if the device.address property is a string in decimal representation (i.e., "1342572544"), you must first convert it to a number, like so:
0x{{device.address | string.to_int | math.format "X8"}};
If the device.address property is a string in hexadecimal representation, it gets a bit trickier. I don't see a built-in function that converts it for you, so you'll have to write your own. See an example here
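For what it's worth, here is a rough sketch of how such a custom conversion could be wired up on the C# side. The hex_to_int helper name and the device values are made up for illustration; check the Scriban documentation for ScriptObject.Import before relying on the exact signatures.

using System;
using Scriban;
using Scriban.Runtime;

class TemplateDemo
{
    static void Main()
    {
        // Expose a custom "hex_to_int" function (hypothetical name) to the template,
        // so a hexadecimal string such as "0x50060800" can be turned into a number.
        var globals = new ScriptObject();
        globals.Import("hex_to_int", new Func<string, long>(s => Convert.ToInt64(s, 16)));

        // Stand-in device data; in the real project this would come from the configuration.
        var device = new ScriptObject();
        device["name"] = "RNG";
        device["address"] = "0x50060800";
        globals["device"] = device;

        var context = new TemplateContext();
        context.PushGlobal(globals);

        var template = Template.Parse(
            "uint16_t {{device.name | string.pad_right 25}} = 0x{{device.address | hex_to_int | math.format \"X8\"}};");

        // Prints: uint16_t RNG (padded to 25 characters) = 0x50060800;
        Console.WriteLine(template.Render(context));
    }
}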
Related
I am translating a Python communication library into C#, and am having trouble interpreting how the string gets formatted before being sent over TCP.
The relevant portion of the code is as follows:
struct.pack(
'!HHBH'+str(var_name_len)+'s',
self.msg_id,
req_len,
flag,
var_name_len,
self.varname
)
Then it gets sent with: sendall()
I have looked at the Python documentation (https://docs.python.org/2/library/struct.html) but am still drawing a blank regarding the first line, '!HHBH'+str(var_name_len)+'s'. I understand this is where the formatting is set, but what it is being formatted to is beyond me.
The Python code that I am translating can be found at the following link:
https://github.com/linuxsand/py_openshowvar/blob/master/py_openshowvar.py
Any Python and C# vigilantes out there who can help me build this bridge?
Edit: Based on jas' answer, I have written the following C# struct:
public struct messageFormat
{
ushort messageId;
ushort reqLength;
char functionType;
ushort varLengthHex;
string varname;
...
Once I populate it, I will need to send it over TCP. I have an open socket, but I assume I need to convert it to a byte[] so I can use Socket.Send(byte[])?
Thanks
What's being formatted are the five arguments following the format string. Each argument has a corresponding element in the format string.
For the sake of the explanation, let's assume that var_name_len has the value 12 (presumably because var_name is a string of length 12 in this hypothetical case).
So the format string will be
!HHBH12s
Breaking that down according to the docs:
! Big-endian byte ordering will be used
H self.msg_id will be packed as a two-byte unsigned short
H req_len will be packed as above
B flag will be packed as a one-byte unsigned char
H var_name_len will be packed as a two-byte unsigned short
12s self.varname will be packed as a 12-byte string
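To connect this back to the C# side of the question, here is a minimal sketch (my own, not from the original thread) of building the same big-endian byte[] by hand, so it can be handed to Socket.Send. The field values in Main are made up.

using System;
using System.Text;

class MessagePacker
{
    // Packs the fields in the same layout as struct.pack('!HHBH' + str(len(varName)) + 's', ...).
    static byte[] Pack(ushort msgId, ushort reqLen, byte flag, string varName)
    {
        byte[] nameBytes = Encoding.ASCII.GetBytes(varName);
        ushort varNameLen = (ushort)nameBytes.Length;

        byte[] buffer = new byte[2 + 2 + 1 + 2 + nameBytes.Length];
        int offset = 0;

        // Writes a ushort in big-endian (network) byte order, like '!H'.
        void WriteUInt16(ushort value)
        {
            buffer[offset++] = (byte)(value >> 8);
            buffer[offset++] = (byte)value;
        }

        WriteUInt16(msgId);                 // H  msg_id
        WriteUInt16(reqLen);                // H  req_len
        buffer[offset++] = flag;            // B  flag
        WriteUInt16(varNameLen);            // H  var_name_len
        nameBytes.CopyTo(buffer, offset);   // {n}s  varname

        return buffer;
    }

    static void Main()
    {
        byte[] packed = Pack(1, 17, 0, "MyVariableName");
        Console.WriteLine(BitConverter.ToString(packed));
        // The resulting byte[] is what you would pass to socket.Send(...).
    }
}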
I figure the best way to really get a feel for double precision numbers is to play around with them a bit, and one of the things I want to do is look at their (almost) binary representation. For this, in C#, the function BitConverter.DoubleToInt64Bits is very useful, as it (after converting to hexadecimal) gives me a look at what the "real" nature of the floating point number is.
The problem is that I don't seem to be able to find the equivalent function in Python, is there a way to do the same things as BitConverter.DoubleToInt64Bits in a python function?
Thank you.
EDIT:
An answer below suggested using binascii.hexlify(struct.pack('d', 123.456)) to convert the double into a hexadecimal representation, but I am still getting strange results.
For example,
binascii.hexlify(struct.pack('d', 123.456))
does indeed return '77be9f1a2fdd5e40' but if I run the code that should be equivalent in C#, i.e.
BitConverter.DoubleToInt64Bits(123.456).ToString("X")
I get a completely different number: "405EDD2F1A9FBE77". Where have I made my mistake?
How about using struct.pack and binascii.hexlify?
>>> import binascii
>>> import struct
>>> struct.pack('d', 0.0)
'\x00\x00\x00\x00\x00\x00\x00\x00'
>>> binascii.hexlify(struct.pack('d', 0.0))
'0000000000000000'
>>> binascii.hexlify(struct.pack('d', 1.0))
'000000000000f03f'
>>> binascii.hexlify(struct.pack('d', 123.456))
'77be9f1a2fdd5e40'
The struct format character d represents the C double type (8 bytes = 64 bits). For other formats, see Format characters.
UPDATE
By specifying @, =, <, >, or ! as the first character of the format, you can indicate the byte order. (Byte Order, Size, and Alignment)
>>> binascii.hexlify(struct.pack('<d', 123.456)) # little-endian
'77be9f1a2fdd5e40'
>>> binascii.hexlify(struct.pack('>d', 123.456)) # big-endian
'405edd2f1a9fbe77'
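To tie this back to the C# side, here is a small sketch (not from the original answer) showing where the two orderings come from:

using System;

class DoubleBitsDemo
{
    static void Main()
    {
        double d = 123.456;

        // DoubleToInt64Bits reinterprets the 8 bytes as a long; printing it as hex
        // shows the most significant byte first, matching struct.pack('>d', ...).
        long bits = BitConverter.DoubleToInt64Bits(d);
        Console.WriteLine(bits.ToString("X16"));    // 405EDD2F1A9FBE77

        // BitConverter.GetBytes returns the bytes in machine order (little-endian
        // on x86/x64), matching struct.pack('<d', ...) and the default struct.pack('d', ...).
        byte[] bytes = BitConverter.GetBytes(d);
        Console.WriteLine(BitConverter.ToString(bytes).Replace("-", "").ToLowerInvariant());
        // 77be9f1a2fdd5e40
    }
}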
I'm converting an ancient VB6 program to C# and I came across some VB6 code that looked like this . . .
Format(part(PI).far_xdev, " #0.000;-#0.000")
at first I was confused about the two format fields separated by a semicolon. But it turns out that in VB6 this means that it uses the first one if the value of the number being formatted is 0 or positive, and the second if it's negative. If there were three formatting fields it would be positive, negative, and zero; four would be positive, negative, zero and null.
What's the equivalent of this in C# string formatting? How do I say "use this formatting string for a positive number and that one for a negative number"?
(To whoever added the "This question may already have an answer": the problem with that link is that the linked question was asked in reference to some version of BASIC (based on the syntax) and did not explicitly say he was looking for an answer in C#, and neither of the two answers given specifically say they are in C#. We are left to surmise the languages involved based only on the tags. I think this new question and the resulting answers are much more clear, explicit and detailed)
Use the same semicolon separation. Read more about that separator on MSDN (it supports up to three sections).
Console.WriteLine("{0:positive;negative;zero}", +1); //prints positive
Console.WriteLine("{0:positive;negative;zero}", -1); //prints negative
Console.WriteLine("{0:positive;negative;zero}", -0); //prints zero
You can use ToString on a numeric value and pass the format there:
string formatted = 1.ToString("positive;negative;zero"); //will return "positive"
or use string.Format as shown below; you still need to pass the positional placeholder {0} to it.
string formatted = string.Format("{0:positive;negative;zero}", 1);
In order to check for null, you can use the null-coalescing operator (the cast to object is required, since there is no implicit conversion from int? to string). It becomes quite messy, so I would recommend considering a simple if statement instead.
int? v = null;
var formatted = string.Format("{0:positive;negative;zero}", (object) v ?? "null");
C# supports the same custom formatting:
string.Format("{0:#0.000;-#0.000}",part(PI).far_xdev);
Note that the format you are using is essentially the same as the standard fixed-point format with three decimal places:
string.Format("{0:F3}",part(PI).far_xdev);
I need to migrate some Delphi code to C# (.NET) for my mv4 application, which will replace some functionality of the existing Delphi application, but I need to use some specific functions.
The main problem is when I try to get a char from a string like:
FText = "123456";
i = 1;
Delphi:
a := Integer(FText[i]);
C#:
a = (int)FText[i];
but C# returns 50 and Delphi returns 49.
Delphi has historically used "length-prefixed" strings, where the length indicator was stored at string[0], placing the first character of the string at index 1. Since the introduction of "long strings" in Delphi, the byte count is no longer at index 0, but strings continue to use 1-based indexing.
C# uses zero-based indexing for strings. When you convert any string code from Delphi to C#, you will need to deal with the different indexing scheme.
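In other words, the Delphi expression Integer(FText[1]) maps to FText[0] in C#. A tiny sketch:

using System;

class IndexDemo
{
    static void Main()
    {
        string fText = "123456";

        // Delphi's FText[1] is the FIRST character, '1' (49);
        // the equivalent in C#'s zero-based indexing is fText[0].
        Console.WriteLine((int)fText[0]);   // 49

        // fText[1] in C# is the SECOND character, '2' (50),
        // which is why the original C# code printed 50.
        Console.WriteLine((int)fText[1]);   // 50
    }
}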
The C# code works as expected: FText[1] == '2', so it's 50 in ASCII.
I don't know a lot about Delphi, but maybe strings there are not 0-indexed, so FText[1] == '1', which is 49?
I am a novice C# learner. I know the basic concepts of this language. While revising the concepts, I stumbled upon one problem - How does Int32.Parse() exactly work?
Now I know what it does and the output and the overloads. What I need is the exact way in which this parsing is accomplished.
I searched on the MSDN site. It gives a very generalized definition of this method (Converts the string representation of a number to its 32-bit signed integer equivalent.) So my question is - How does it convert the string into a 32-bit signed integer?
On reading more, I found out 2 things -
The string parameter is interpreted using the "NumberStyles" enumeration
The string parameter is formatted and parsed using the "NumberFormatInfo" class
I need the theory behind this concept. Also, I did not understand the term - "culture-specific information" from the definition of the NumberFormatInfo class.
Here is the relevant code, which you can view under the terms of the MS-RSL.
"Culture-specific information" refers to the ways numbers can be written in different cultures. For example, in the US, you might write 1 million as:
1,000,000
But other cultures use the comma as a decimal separator and group digits with a different character, so you might see
1'000'000
or:
1 000 000
or, of course (in any culture):
1000000
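Here is a short sketch (not from the original answer) of how that culture-specific information actually reaches Int32.Parse through NumberStyles and CultureInfo:

using System;
using System.Globalization;

class CultureParseDemo
{
    static void Main()
    {
        // en-US groups digits with a comma ...
        int us = int.Parse("1,000,000", NumberStyles.AllowThousands,
                           CultureInfo.GetCultureInfo("en-US"));

        // ... while de-DE uses a period as the group separator
        // (and a comma as the decimal separator).
        int de = int.Parse("1.000.000", NumberStyles.AllowThousands,
                           CultureInfo.GetCultureInfo("de-DE"));

        Console.WriteLine(us);  // 1000000
        Console.WriteLine(de);  // 1000000
    }
}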