C# - any way to use _addcarry_u64 intrinsic?

I'm trying to prototype an unsigned BigInteger type with fixed byte size. The general idea is to have a struct with a fixed buffer of UInt64 parts.
In order to do addition of those big integers efficiently, I could use a C# equivalent of the _addcarry_u64 intrinsic function, as supported by MSVC or ICC in the C++ world.
So far, I unfortunately couldn't find a baked-in equivalent in .NET Core 2.1.
Is there any equivalent in the C# world already (maybe with .NET Core 3.0 intrinsics? I can't test that on my machine unfortunately)?
Alternatively, is there any reasonable way for a custom implementation?
The only way I can think of would be to emit the assembly for these instructions myself, but the only way I've seen to invoke assembly from C# at all is via P/Invoke, and (still to be tested) I'd strongly suspect the overhead of that would be considerably larger than whatever the intrinsic gains in the first place.
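For reference, the manual fallback I have in mind looks like this: a minimal sketch of emulating add-with-carry in plain C# (the helper name is my own invention). The carry out of a 64-bit add is recovered by checking whether the wrapped sum came out smaller than an operand:
static ulong AddWithCarry(ulong a, ulong b, ulong carryIn, out ulong carryOut)
{
    ulong sum = a + b;                       // may wrap (unchecked by default)
    ulong c1 = sum < a ? 1UL : 0UL;          // did a + b wrap?
    sum += carryIn;                          // carryIn is 0 or 1
    ulong c2 = sum < carryIn ? 1UL : 0UL;    // did adding the carry wrap?
    carryOut = c1 | c2;                      // at most one of the two can wrap
    return sum;
}
Chaining it across the fixed buffer of UInt64 parts is then just r0 = AddWithCarry(a0, b0, 0, out carry); r1 = AddWithCarry(a1, b1, carry, out carry); and so on.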
Thanks

Related

Do C# optimizers perform copy elision?

I'm looking to optimize some C# code and I'm curious whether my optimizer is going to perform any copy elision, and if so which kinds. I haven't been able to find any information on this by googling. If not, I may want to switch my methods to take struct arguments by reference. The environment I care about is Unity 3D, which is odd in that it uses either the Mono 2.0 runtime or Unity's IL2CPP transpiler (which I should probably ask them about directly, as it's closed-source). But it would be interesting to know for Microsoft's optimizer as well, and whether this type of optimization is generally allowed by the standard.
Side note: If this is not supported by the optimizer, it would be awfully nice if I could get the equivalent of the C++ const ref, but it appears this doesn't exist.
I can speak for IL2CPP, and say that it does not do anything special to pass struct arguments by reference. Even without access to the IL2CPP source code, you can see this by inspecting the generated C++ code.
Note that a C# struct is represented by a C++ struct, and that C++ struct is passed by value. We've discussed the possibility of using const references in this case, but we've not implemented it yet (and we may not ever).
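As for the side note above: a readonly reference does exist in C# as of 7.2, in the form of in parameters, though whether your target runtime's compiler supports that language version is another matter. A minimal sketch with a hypothetical struct:
struct Big
{
    public double A, B, C, D;                               // 32 bytes; copying this is measurable
}
static double SumByValue(Big v) => v.A + v.B + v.C + v.D;   // copies the whole struct per call
static double SumByRef(in Big v) => v.A + v.B + v.C + v.D;  // C# 7.2+: readonly reference, no copy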

C++ / C# PInvoke - Marshalling explicitly sized numerics

If I wish to marshal an int in C# (Int32) to/from a native (C++) library, what's the best way to declare the relevant variable in C++ code?
I could use a standard int but I'd rather be explicit about the width of that variable (I know that it's 32-bit on most platforms anyway).
So far, I can see two options:
int32_t in <cstdint>
__int32 (MSVC++ identifier) ... However I'd like to remain platform independent if I can
I seem to recall hearing that C++11 has some new library for this, but I can't seem to find any mention of it.
Thank you.
The int keywords in the currently shipping C# and C++ compilers are type aliases, for System.Int32 and __int32 respectively, the concrete types used by their back ends. I've been writing code for 30 years and have used 8-bit, 16-bit, 32-bit and 64-bit processors, and I used int 30 years ago just like I do today, expending very little effort to port programs to the next generation architecture or operating system version.
You see this in the winapi as well: every type used for a function argument or return value is a type alias. The CreateWindow() function in Windows version 1.0 looks exactly the same as the one you use in the 64-bit version of Windows 8.1.
I have no illusion that this progression has suddenly stopped today; 128-bit processors are already bread-and-butter for IBM. Languages use type aliases to keep themselves from becoming rapidly outdated and forgotten. That's true for languages like C and C++, and true for C# as well, although it is certainly going to require moving a bigger rock in the case of C#: the int-equals-32-bits identity is engraved in most any C# programmer's mind right now.
Intentionally not using type aliases makes your program less portable.
You can use int32_t which is exactly 32 bits. It is possible for there to be a C++ implementation for which int32_t is not defined, but in that case, all bets are off.
On every platform that I know of which supports C#, C/C++ int is 32 bits, so you may be over-thinking this.
Another thing to consider is what type your C++ code accepts. If it accepts int, and you use a platform where int is not 32 bits, then you still have a problem.
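For completeness, a minimal sketch of the managed side of such a binding; the library and function names here are hypothetical, and the native signature (shown in the comment) uses int32_t from <cstdint>. C# int is always 32 bits, so the two line up exactly:
// Native side (C++): extern "C" int32_t add_one(int32_t x) { return x + 1; }
using System.Runtime.InteropServices;
static class Native
{
    [DllImport("native", CallingConvention = CallingConvention.Cdecl)]
    public static extern int add_one(int x);   // C# int <-> C++ int32_t
}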

Is C++/CLI faster than C#

Is C++/CLI faster than C#? In which type of operations is it faster?
Not necessarily. However, C++/CLI takes away much of the syntactic sugar around non-performant ways of doing things that are present in C# (boxing, for example).
Also, C++/CLI gives you much cleaner interop with unmanaged code, actually allowing you to mix managed and unmanaged code, which in a performance-crucial environment may be of benefit.
EDIT:
See this post for some of the differences: http://msdn.microsoft.com/en-us/library/ms379617(VS.80).aspx
Since they both run on the .NET Framework, I'd say any performance difference would be negligible. Any difference will almost certainly come down to how well whichever compilers you are using work.
Well, the short answer is no. Why? Reference types in C++/CLI are compiled to MSIL, same as in C#.
The nice thing about C++/CLI (and the long answer) though, is that you can easily call into native code, which (in many cases) is faster. That being said, if you write a native C++ class and expect it to be executed natively when called by someone in a managed class, that native C++ class must be compiled without CLR support (this question goes into how to do that).
Any managed code written in C++/CLI will essentially be exactly the same as the equivalent C#, assuming compiler accuracy, as they'll both end up as intermediate language instructions. However, C++/CLI makes it easy to mix unmanaged code in with the managed portion which may provide considerable speed benefits if well optimised.
Seeing as they are both .NET languages that get compiled into the same byte code that in turn gets run on the same virtual machine I'd say in general, no.
C++/CLI is really only intended to provide language interop between .NET and C++.
Yes, because except for a few details you can use .NET like any other library while you continue to have the full power of C++: mixing C++ classes with .NET classes, inline assembly, creating a driver if you want, etc. In C# you can use unsafe code, but you can't access the Direct2D API directly, for example; it is limited.
Direct2dTest::MyForm^ form = gcnew Direct2dTest::MyForm();
// Use (HWND)form->Handle.ToPointer() to access the HWND.
form->Show();
// You can either use
// System::Windows::Forms::Application::Run(form);
// or a classic Win32 message loop:
MSG msg;
while (GetMessage(&msg, NULL, 0, 0)) {
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}

What type should I use for a 128-bit number in .NET?

I need to do some large integer math. Are there any classes or structs out there that represent a 128-bit integer and implement all of the usual operators?
BTW, I realize that decimal can be used to represent a 96-bit int.
It's here in System.Numerics. "The BigInteger type is an immutable type that represents an arbitrarily large integer whose value in theory has no upper or lower bounds."
var i = System.Numerics.BigInteger.Parse("10000000000000000000000000000000");
While BigInteger is the best solution for most applications, if you have performance critical numerical computations, you can use the complete Int128 and UInt128 implementations in my Dirichlet.Numerics library. These types are useful if Int64 and UInt64 are too small but BigInteger is too slow.
System.Int128 and System.UInt128 have been available since .NET 7 Preview 5.
They were implemented in the PR "Add support for Int128 and UInt128 data types".
I don't know why they aren't in the .NET 7 Preview 5 announcement, but the upcoming .NET 7 Preview 6 announcement will also cover Int128Converter and UInt128Converter for the new types from Preview 5.
They don't have C# language support yet, though, just like System.Half, so you'll have to write Int128 explicitly instead of using a native C# keyword.
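A quick sketch of what using them looks like (requires .NET 7 or later):
using System;

UInt128 a = UInt128.MaxValue - 5;
UInt128 b = 10;
UInt128 sum = a + b;                 // wraps modulo 2^128, like the other unsigned types
Console.WriteLine(sum);              // 4
Console.WriteLine(Int128.MaxValue);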
No, there's nothing in .NET <= 3.5. I'm hoping/expecting that BigInteger will make its return in .NET 4.0. (It was cut from .NET 3.5.)
BigInteger is now a standard part of C# and friends in .NET 4.0. See: Gunnar Peipman's ASP.NET blog.
Note that the CPU can generally work with ordinary integers much more quickly and in constant time, especially when using the usual math operators (+, -, /, ...) because these operators typically map directly to single CPU instructions.
With BigInteger, even the most basic math operations are much slower function calls to methods whose runtime varies with the size of the number. This is because BigInteger implements arbitrary precision arithmetic, which adds considerable but necessary overhead.
The benefit is that BigIntegers are not limited to 64 or even 128 bits, but only by available system memory (or about 2^64 bits of precision, whichever comes first).
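To make that concrete, a rough timing sketch; the numbers will vary by machine and runtime, but the gap is reliably large:
using System;
using System.Diagnostics;
using System.Numerics;

long n = 1;
BigInteger big = 1;
var sw = Stopwatch.StartNew();
for (int i = 0; i < 10_000_000; i++) n += 3;      // one CPU add per iteration
Console.WriteLine($"long:       {sw.ElapsedMilliseconds} ms (n = {n})");
sw.Restart();
for (int i = 0; i < 10_000_000; i++) big += 3;    // method call + allocation per iteration
Console.WriteLine($"BigInteger: {sw.ElapsedMilliseconds} ms (big = {big})");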
Read here.
GUID is backed by a 128-bit integer in the .NET Framework, though it doesn't come with any of the typical integer-type methods.
I've written a handler for GUID before to treat it as a 128-bit integer, but this was for a company I worked for about 8 years ago; I no longer have access to the source code.
So if you need native support for a 128-bit value, and don't want to rely on BigInteger for whatever reason, you could probably hack GUID to serve your purposes.
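A rough sketch of that idea: a Guid gives you 128 bits of storage, but any arithmetic on those bits is yours to implement:
using System;

Guid g = Guid.NewGuid();
byte[] bytes = g.ToByteArray();               // 16 bytes = 128 bits
ulong lo = BitConverter.ToUInt64(bytes, 0);   // split into two 64-bit halves
ulong hi = BitConverter.ToUInt64(bytes, 8);
// ...do your own add/subtract/compare on (hi, lo), then repack:
Guid packed = new Guid(bytes);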
If you don't mind referencing the J# library (vjslib.dll, included with Visual Studio by default), there is already an implementation of BigInteger in .NET:
using java.math;

public static void Main()
{
    BigInteger biggy = new BigInteger(...);
}
C# PCL library for computations with big numbers such as Int128 and Int256.
https://github.com/lessneek/BigMath
I believe Mono has a BigInteger implementation that you should be able to track down the source for.

Big integers in C#

Currently I am borrowing java.math.BigInteger from the J# libraries as described here. Having never used a library for working with large integers before, this seems slow: on the order of 10 times slower, even for ulong-sized numbers. Does anyone have any better (preferably free) libraries, or is this level of performance normal?
As of .NET 4.0 you can use the System.Numerics.BigInteger class. See documentation here: http://msdn.microsoft.com/en-us/library/system.numerics.biginteger(v=vs.110).aspx
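For example:
using System;
using System.Numerics;

BigInteger a = BigInteger.Pow(2, 100);                    // far beyond ulong range
BigInteger b = BigInteger.Parse("12345678901234567890");
Console.WriteLine(a * b);                                 // the usual operators are defined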
Another alternative is the IntX class.
IntX is an arbitrary precision integers library written in pure C# 2.0 with fast, O(N * log N), multiplication/division algorithm implementations. It provides all the basic operations on integers like addition, multiplication, comparison, bitwise shifting, etc.
F# also ships with one. You can get it at Microsoft.FSharp.Math.
The System.Numerics.BigInteger class in .NET 4.0 is based on Microsoft.SolverFoundation.Common.BigInteger from Microsoft Research.
The Solver Foundation's BigInteger class looks very performant. I am not sure about which license it is released under, but you can get it here (download and install Solver Foundation and find the Microsoft.Solver.Foundation.dll).
I reckon you could optimize the implementation by performing any operation whose result will fit in a native type (e.g. Int64) on the native types, and only dealing with the big array when the result would overflow.
Edit:
This implementation on CodeProject seems only 7 times slower ... but with the above optimization you could get it to perform almost identically to native types for small numbers.
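A minimal sketch of that fast-path idea (the helper is illustrative, not from any particular library):
using System.Numerics;

static BigInteger AddFast(long a, long b)
{
    long sum = unchecked(a + b);
    // Signed overflow occurred iff a and b share a sign that sum does not.
    if (((a ^ sum) & (b ^ sum)) < 0)
        return (BigInteger)a + b;   // rare slow path
    return sum;                     // implicit long -> BigInteger conversion
}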
Here are several implementations of BigInteger in C#.
I've used Mono's BigInteger implementation; it works pretty fast (I've used it in the Compact Framework):
Bouncy Castle
Mono
I'm not sure about the performance, but IronPython also has a BigInteger class. It is in the Microsoft.Scripting.Math namespace.
Yes, it will be slow, and 10x difference is about what I'd expect. BigInt uses an array to represent an arbitrary length, and all the operations have to be done manually (as opposed to most math which can be done directly with the CPU)
I don't even know if hand-coding it in assembly will give you much of a performance gain over 10x; that's pretty damn close. I'd look for other ways to optimize it. Sometimes, depending on your math problem, there are little tricks you can do to make it quicker.
I used BigInteger at a previous job. I don't know what kind of performance needs you have; I did not use it in a performance-intensive situation, but I never had any problems with it.
This may sound like a strange suggestion, but have you tested the decimal type to see how fast it works?
The decimal range is ±1.0 × 10^−28 to ±7.9 × 10^28, so it may still not be large enough, but it is larger than a ulong.
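For what it's worth, a quick look at the actual ceiling (decimal carries 96 bits of integer precision plus a scale):
using System;

Console.WriteLine(decimal.MaxValue);   // 79228162514264337593543950335 (~7.9e28)
Console.WriteLine(ulong.MaxValue);     // 18446744073709551615 (~1.8e19), for comparison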
This won't help you, but there was supposed to be a BigInteger class in .NET 3.5; it got cut, but from statements made at PDC, it will be in .NET 4.0. They apparently have spent a lot of time optimizing it, so the performance should be much better than what you're getting now.
Further, this question is essentially a duplicate of How can I represent a very large integer in .NET?
See the answers in this thread. You will need to use one of the third-party big integer libraries/classes available, or wait for .NET 4.0, which will include a native BigInteger datatype.
This looks very promising. It is a C# wrapper over GMP:
http://web.rememberingemil.org/Projects/GnuMpDotNet/GnuMpDotNet.html
There are also other BigInteger options for .NET here; in particular, Mpir.Net.
You can also use the Math.Gmp.Native NuGet package that I wrote. Its source code is available on GitHub, and documentation is available here. It exposes to .NET all of the functionality of the GMP library, which is known as a highly optimized arbitrary-precision arithmetic library.
Arbitrary-precision integers are represented by the mpz_t type, and operations on these integers all begin with the mpz_ prefix, for example mpz_add or mpz_cmp. Source code examples are given for each operation.
