if I wish to marshal an int in C# (Int32) to/from a native (C++) library, what's the best way to declare the relevant variable in C++ code?
I could use a standard int but I'd rather be explicit about the width of that variable (I know that it's 32-bit on most platforms anyway).
So far, I can see two options:
int32_t in <cstdint>
__int32 (an MSVC++ identifier); however, I'd like to remain platform independent if I can
I seem to recall hearing that C++11 has some new library for this, but I can't seem to find any mention of it.
Thank you.
The int keyword in the currently shipping C# and C++ compilers is a type alias, for System.Int32 and __int32 respectively; these are the concrete types used by their back ends. I've been writing code for 30 years and have used 8-bit, 16-bit, 32-bit and 64-bit processors. And I used int 30 years ago like I do today. And expended very little effort to port programs to the next generation of architecture or operating system version.
You see this in the winapi as well. Every type used for a function argument or return value is a type alias. The CreateWindow() function in Windows version 1.0 looks exactly the same as the one you use in the 64-bit version of Windows 8.1.
I have no illusion that this progression has suddenly stopped today. 128-bit processors are already bread-and-butter for IBM. Languages use type aliases to prevent themselves from becoming rapidly outdated and forgotten. That is true for languages like C and C++, and true for C# as well, although it is certainly going to require moving a bigger rock in the case of C#: the int = 32 bits identity is engraved in most any C# programmer's mind right now.
Intentionally not using type aliases makes your program less portable.
You can use int32_t which is exactly 32 bits. It is possible for there to be a C++ implementation for which int32_t is not defined, but in that case, all bets are off.
On every platform that I know of which supports C#, C/C++ int is 32 bits, so you may be over-thinking this.
Another thing to consider is what type your C++ code accepts. If it accepts int, and you use a platform where int is not 32 bits, then you still have a problem.
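For example, a matching pair of declarations might look like this (NativeLib and add_one are made-up names for illustration, not anything from a real library):

using System.Runtime.InteropServices;

static class Native
{
    // Matching C++ export, shown for reference:
    //   extern "C" __declspec(dllexport) int32_t add_one(int32_t value);
    // C# int is always 32 bits wide, so it lines up with int32_t on every platform.
    [DllImport("NativeLib.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern int add_one(int value);
}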
I'm trying to prototype an unsigned BigInteger type with fixed byte size. The general idea is to have a struct with a fixed buffer of UInt64 parts.
In order to do addition of those big integers efficiently, I could use a C# equivalent of the _addcarry_u64 intrinsic function, as supported by MSVC or ICC in the C++ world.
So far, I unfortunately couldn't find a baked-in equivalent in .NET Core 2.1.
Is there any equivalent in the C# world already (maybe with .NET Core 3.0 intrinsics? I can't test that on my machine unfortunately)?
Alternatively, is there any reasonable way for a custom implementation?
The only way I can think of would be to generate assembly for these instructions, but the only way I've seen to invoke assembly from C# at all is via P/Invoke, and (still to be tested) I'd strongly suspect the overhead of that would be considerably larger than the performance gained from the intrinsic in the first place.
Thanks
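One portable fallback is to compute the carry manually in plain C#. A minimal sketch (my own helper, not a BCL API; the JIT may or may not lower it to an adc instruction):

static ulong AddCarry(ulong carryIn, ulong a, ulong b, out ulong sum)
{
    // 64-bit add-with-carry: equivalent logic to _addcarry_u64, in plain C#.
    ulong partial = a + b;
    sum = partial + carryIn;
    // Carry out if a + b wrapped around, or if adding carryIn wrapped the partial sum.
    return ((partial < a) || (sum < partial)) ? 1UL : 0UL;
}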
I'm looking to optimize some C# code and I'm curious whether my optimizer will perform any copy elision, and if so which kinds; I haven't been able to find any information on this by googling. If not, I may want to switch my methods to take struct arguments by reference. The environment I care about is Unity 3D, which is odd in that it uses either the Mono 2.0 runtime or Unity's IL2CPP transpiler (which I should probably ask them about directly, as it's closed-source). But it would be interesting to know for Microsoft's optimizer as well, and whether this type of optimization is generally allowed by the standard.
Side note: If this is not supported by the optimizer, it would be awfully nice if I could get the equivalent of the C++ const ref, but it appears this doesn't exist.
I can speak for IL2CPP, and say that it does not perform this optimization: struct arguments are not converted to pass-by-reference. Even without access to the IL2CPP source code, you can see this by inspecting the generated C++ code.
Note that a C# struct is represented by a C++ struct, and that C++ struct is passed by value. We've discussed the possibility of using const references in this case, but we've not implemented it yet (and we may not ever).
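So if the copies matter, the change has to be made by hand on the C# side. A minimal sketch of the by-value versus by-ref difference (names are illustrative):

struct BigStruct
{
    public double A, B, C, D, E, F, G, H; // 64 bytes, expensive to copy
}

static double SumByValue(BigStruct s) // copies all 64 bytes on every call
{
    return s.A + s.B + s.C + s.D + s.E + s.F + s.G + s.H;
}

static double SumByRef(ref BigStruct s) // passes a reference; no copy
{
    return s.A + s.B + s.C + s.D + s.E + s.F + s.G + s.H;
}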
In Visual Basic, the nominal storage allocation of Object is system dependent:
4 bytes on 32-bit platform
8 bytes on 64-bit platform
http://msdn.microsoft.com/en-us/library/47zceaw7.aspx
My question is: what is the nominal storage allocation of object in C#, and is it system dependent?
It is exactly the same. Remember that both languages are high-level, "platform-independent" languages that are compiled to MSIL; this is inherent to any CLI language. That is, neither C# nor VB runs on your machine directly; it is the MSIL that gets compiled at runtime, so in the end all of them get "translated" to the same language. Normally you shouldn't need to care about this; chances are that if you need to be in control of this kind of detail, you need a lower-level language where you do memory management yourself, such as C++ or C.
There is no difference. Why? Because VB and C# in the end both use .NET, and the .NET type (the second column in your link) will always behave the way you described, regardless of the actual language that led to it.
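You can observe this from C# itself. A minimal sketch:

using System;

class ReferenceSize
{
    static void Main()
    {
        // IntPtr.Size is the size of a native pointer, which is also the size
        // of an object reference: 4 in a 32-bit process, 8 in a 64-bit process.
        Console.WriteLine(IntPtr.Size);
    }
}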
So far I've been using the C# Mersenne Twister found here to generate random numbers:
http://www.centerspace.net/resources.php
I just discovered SFMT which is supposed to be twice as fast here:
http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/
Can anyone point me at a C# implementation of SFMT?
My requirements are to generate an integer between (and including) 0 and 2^20 (1048576).
I need to do this trillions of times every day for a simulation running on a 24-hour clock, so I am prepared to spend days tweaking this to perfection.
Currently I've tweaked the Center Space Mersenne Twister by adding a new method to fit my requirements:
public uint Next20()
{
    return (uint)(genrand_int32() >> 12);
}
Using the method genrand_int32() I'd like to produce my own version, genrand_int20(), that generates an integer between (and including) 0 and 2^20, to save the cast and shift above, but I don't understand the mathematics. Exactly how can I do this?
Also, is using a uint going to be faster than an int, or is it just a matter of addressable numbers? Because I only need up to 1048576, I am only concerned with speed.
Also this will be running on a Windows Server 2003 R2 SP2 (32bit) box with .NET 2. Processor is AMD Opteron 275 (4 core).
What you can do is download the source from the link you discovered on Code Project. Unzip it, load the solution in Visual Studio and compile it. This will give you the source, an unmanaged C DLL and a .lib file.
You can P/Invoke the functions in this DLL (there are only 5 simple functions exported, of which you need only two), or you can use this DLL, the .lib, and the SFMT header file to create a managed wrapper DLL you can use in C# without P/Invoke. I just tried this method and it was very simple to do. There was no explicit marshalling involved.
Here's how. Once you have downloaded and compiled the source (you need the header and the .lib file that is created in addition to the DLL), create a new C++ CLR Class Library project; call it WrapSFMT or something. Go to the project properties. Under C++/Precompiled Headers, change to "Not Using Precompiled Headers". Under Linker/General/Additional Library Directories, enter the path to SFMT.lib. Under Linker/Input/Additional Dependencies, add SFMT.lib. Close the property pages. Copy SFMT.h to your project folder and include it in the project.
Edit WrapSFMT.h to read as follows:
#pragma once
#include "SFMT.h"
using namespace System;

namespace WrapSFMT {
    // Thin managed wrapper around the native SFMT generator.
    public ref class SRandom
    {
    public:
        SRandom(UInt32);     // seeds the native generator
        UInt32 Rand32(void); // returns the next 32-bit random number
    };
}
These declare the methods that will be in your class. Now edit WrapSFMT.cpp to read:
#include "WrapSFMT.h"
namespace WrapSFMT {
SRandom::SRandom(UInt32 seed)
{
init_gen_rand(seed);
}
UInt32 SRandom::Rand32()
{
return gen_rand32();
}
}
These implement the methods you declared in the header file. All you are doing is calling functions from the SFMT.dll, and C++/CLI is automatically handling the conversion from unmanaged to managed. Now you should be able to build the WrapSFMT.dll and reference it in your C# project. Make sure the SFMT.dll is in the path, and you should have no problems.
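Usage from C# would then look something like this (a sketch, assuming the wrapper above):

WrapSFMT.SRandom rng = new WrapSFMT.SRandom(12345u); // seed the generator
uint sample = rng.Rand32() >> 12;                    // keep the top 20 bits: 0 .. 1048575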
You can find a C# implementation of SFMT (plus other RNG algorithms) at...
http://rei.to/random.html
The page and source code comments are in Japanese but you should be able to figure it out.
You can also find a Google-translated version (to English) of the page at...
http://translate.google.com/translate?hl=en&sl=ja&u=http://rei.to/random.html
I don't really see your problem with speed here. On my machine (Core 2 Duo T7200 @ 2 GHz) generating a random integer with MT19937 or MT19937-64 takes around 20 ns (on average, when drawing 50000 numbers). So that'd be around 4.32 × 10^12 (so around 4 trillion numbers) a day. And that's for one core. With Java. So I think you can expect the performance to be more than adequate for your needs.
To actually answer your question: I don't know of a C# implementation of SFMT, but conversion of the C code to C# should be fairly straightforward. However, you're not gaining much, as SFMT is optimized for SIMD and C# currently doesn't support this directly.
Is there a reason you can't compile the C implementation into a DLL and call this from your C# code?
EDIT:
I'm sorry, but I have only very limited knowledge of C (and indeed C#). The "how to create a C DLL" part may be answered here: http://www.kapilik.com/2007/09/17/how-to-create-a-simple-win32-dll-using-visual-c-2005/ and how fast it is can be checked by profiling the code.
Maybe this is what you're looking for?
There is a list of several implementations.
Specifically, this one (by Cory Nelson) might be useful.
I need to do some large integer math. Are there any classes or structs out there that represent a 128-bit integer and implement all of the usual operators?
BTW, I realize that decimal can be used to represent a 96-bit int.
It's here in System.Numerics. "The BigInteger type is an immutable type that represents an arbitrarily large integer whose value in theory has no upper or lower bounds."
var i = System.Numerics.BigInteger.Parse("10000000000000000000000000000000");
While BigInteger is the best solution for most applications, if you have performance critical numerical computations, you can use the complete Int128 and UInt128 implementations in my Dirichlet.Numerics library. These types are useful if Int64 and UInt64 are too small but BigInteger is too slow.
System.Int128 and System.UInt128 have been available since .NET 7 Preview 5.
They were implemented in the PR "Add support for Int128 and UInt128 data types".
I don't know why they aren't in the .NET 7 Preview 5 announcement, but the upcoming .NET 7 Preview 6 announcement will also include Int128Converter and UInt128Converter for the new types from Preview 5.
They don't have C# language support yet though, just like System.Half, so you'll have to use Int128 explicitly instead of a native C# keyword.
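A minimal sketch of using the type directly (requires .NET 7 or later):

using System;

class Int128Demo
{
    static void Main()
    {
        // No C# keyword alias exists (unlike int or long), so spell the type out.
        Int128 a = Int128.MaxValue;          // 2^127 - 1
        Int128 b = a / 1000;
        Console.WriteLine(b);
        Console.WriteLine(UInt128.MaxValue); // 2^128 - 1
    }
}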
No, there's nothing in .NET <= 3.5. I'm hoping/expecting that BigInteger will make its return in .NET 4.0. (It was cut from .NET 3.5.)
BigInteger is now a standard part of C# and friends in .NET 4.0. See: Gunnar Peipman's ASP.NET blog.
Note that the CPU can generally work with ordinary integers much more quickly and in constant time, especially when using the usual math operators (+, -, /, ...) because these operators typically map directly to single CPU instructions.
With BigInteger, even the most basic math operations are much slower function calls to methods whose runtime varies with the size of the number. This is because BigInteger implements arbitrary precision arithmetic, which adds considerable but necessary overhead.
The benefit is that BigIntegers are not limited to 64 or even 128 bits, but only by available system memory (or about 2^64 bits of precision, whichever comes first).
Read here.
GUID is backed by a 128-bit integer in the .NET Framework, though it doesn't come with any of the typical integer-type methods.
I've written a handler for GUID before to treat it as a 128 bit integer, but this was for a company I worked for ~8 years ago. I no longer have access to the source code.
So if you need native support for a 128-bit integer, and don't want to rely on BigInteger for whatever reason, you could probably hack GUID to serve your purposes.
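The basic idea is easy to demonstrate, though all the arithmetic would be on you. A sketch:

using System;

class GuidAs128Bits
{
    static void Main()
    {
        Guid g = Guid.NewGuid();
        byte[] bits = g.ToByteArray();  // 16 bytes = 128 bits of raw storage
        Console.WriteLine(bits.Length); // 16
        // Guid exposes no integer operators; add/multiply/compare-as-integer
        // would have to be implemented over these bytes by hand.
    }
}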
If you don't mind referencing the J# library (vjslib.dll, included with VS by default), there is already an implementation of BigInteger in .NET:
using java.math;

public static void Main()
{
    BigInteger biggy = new BigInteger(....);
}
C# PCL library for computations with big numbers such as Int128 and Int256.
https://github.com/lessneek/BigMath
I believe Mono has a BigInteger implementation that you should be able to track down the source for.