convert double value to binary value - c#

How can I convert a double value to binary?
I have a value like 125252525235558554452221545332224587265 that I want to convert to binary format (1s and 0s), so I am keeping it in a double and then trying to convert it to binary. I am using C#/.NET.

Well, you haven't specified a platform or what sort of binary value you're interested in, but in .NET there's BitConverter.DoubleToInt64Bits which lets you get at the IEEE 754 bits making up the value very easily.
In Java there's Double.doubleToLongBits which does the same thing.
Note that if you have a value such as "125252525235558554452221545332224587265" then you've got more information than a double can store accurately in the first place.
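For example, a minimal sketch (note that by the time the literal below is stored in a double, it has already been rounded to the nearest representable value):
using System;

double d = 125252525235558554452221545332224587265d; // already rounded to the nearest double
long bits = BitConverter.DoubleToInt64Bits(d);
// Print all 64 IEEE 754 bits: sign, exponent, mantissa.
Console.WriteLine(Convert.ToString(bits, 2).PadLeft(64, '0'));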

In C, you can do it this way, for instance - a classic use of the union construct:
#include <stdio.h>
int main(void)
{
    int i;
    union {
        double x;
        unsigned char byte[sizeof (double)];
    } converter;
    converter.x = 5.5555555555556e18;
    for (i = 0; i < sizeof converter.byte; i++)
        printf("%02x ", converter.byte[i]);
    printf("\n");
}
If you compile and run it, it might print something like this:
~/src> gcc -o floatbits floatbits.c
~/src> ./floatbits
ba b5 f6 15 53 46 d3 43
Note though that this, of course, is platform-dependent in its endianness. The above is from a Linux system running on a Sempron CPU, i.e. it's little endian.
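For comparison, a rough C# equivalent of the same trick; BitConverter.GetBytes returns the bytes in the platform's byte order, so the commented output assumes a little-endian machine:
byte[] bytes = BitConverter.GetBytes(5.5555555555556e18);
// Bytes in host order, e.g. BA-B5-F6-15-53-46-D3-43 on little-endian hardware.
Console.WriteLine(BitConverter.ToString(bytes));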

A decade late but hopefully this will help someone:
// Converts a double value to a string in base 2 for display.
// Example: 123.5 --> "0:10000000101:1110111000000000000000000000000000000000000000000000"
// Created by Ryan S. White in 2020, Released under the MIT license.
string DoubleToBinaryString(double val)
{
    long v = BitConverter.DoubleToInt64Bits(val);
    string binary = Convert.ToString(v, 2);
    return binary.PadLeft(64, '0').Insert(12, ":").Insert(1, ":");
}
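Sample usage (the colons separate the sign, exponent, and mantissa fields):
Console.WriteLine(DoubleToBinaryString(123.5));
// 0:10000000101:1110111000000000000000000000000000000000000000000000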

If you mean you want to do it yourself, then this is not a programming question.
If you want to make a computer do it, the easiest way is to use a floating point input routine and then display the result in its hex form. In C++:
double f = atof("5.5555555555556E18");
unsigned char *b = (unsigned char *) &f;
for (int j = 0; j < 8; ++j)
    printf(" %02x", b[j]);

A double value already IS a binary value. It is just a matter of the representation that you wish it to have. In a programming language when you call it a double, then the language that you use will interpret it in one way. If you happen to call the same chunk of memory an int, then it is not the same number.
So it depends what you really want... If you need to write it to disk or to network, then you need to think about BigEndian/LittleEndian.
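For example, a minimal sketch of writing a double in big-endian (network) byte order regardless of the host's endianness (the method name is my own):
using System;
using System.IO;

static void WriteBigEndian(Stream stream, double value)
{
    byte[] bytes = BitConverter.GetBytes(value); // host byte order
    if (BitConverter.IsLittleEndian)
        Array.Reverse(bytes);                    // flip to big-endian
    stream.Write(bytes, 0, bytes.Length);
}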

For these huge numbers (which cannot be represented accurately in a double) you need some specialized class to hold the information.
C# provides the Decimal type:
The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding. For example, the following code produces a result of 0.9999999999999999999999999999 rather than 1.
If you need more precision than that, you need to make your own class, I guess. There is one for integers here: http://sourceforge.net/projects/cpp-bigint/ although it seems to be for C++.
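(Since .NET 4 there is also System.Numerics.BigInteger, which handles arbitrarily large integers. A minimal sketch for the number from the question:)
using System;
using System.Numerics;
using System.Text;

// Parse the full integer without losing precision, then print its bits.
BigInteger n = BigInteger.Parse("125252525235558554452221545332224587265");
byte[] bytes = n.ToByteArray(); // little-endian, two's-complement bytes
var sb = new StringBuilder();
for (int i = bytes.Length - 1; i >= 0; i--)
    sb.Append(Convert.ToString(bytes[i], 2).PadLeft(8, '0'));
Console.WriteLine(sb.ToString().TrimStart('0'));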

Related

Calculators working with larger numbers than 18446744073709551615

When I initialize a ulong with the value 18446744073709551615 and then add 1 to it and display it to the console, it displays 0, which is totally expected.
I know this question sounds stupid but I have to ask it: if my computer has a 64-bit CPU, how is my calculator able to work with numbers larger than 18446744073709551615?
I suppose floating point has a lot to do here.
I would like to know exactly how this happens.
Thank you.
working with larger numbers than 18446744073709551615
"if my Computer has a 64-bit architecture CPU" --> The architecture bit size is largely irrelevant.
Consider how you are able to add 2 decimal digits whose sum is more than 9. There is a carry generated and then used when adding the next most significant decimal place.
The CPU can do the same but with base 18446744073709551616 instead of base 10. It uses a carry bit as well as a sign and overflow bit to perform extended math.
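A minimal C# sketch of one step of that idea (the method is illustrative; real big-integer libraries do this with optimized carry handling):
// Add two 64-bit "limbs" plus an incoming carry (0 or 1), producing a sum
// limb and an outgoing carry - exactly like column addition in base 10.
static (ulong Sum, ulong CarryOut) AddWithCarry(ulong a, ulong b, ulong carryIn)
{
    ulong partial = unchecked(a + b);
    ulong carry1 = partial < a ? 1UL : 0UL;    // a + b wrapped past 2^64
    ulong sum = unchecked(partial + carryIn);
    ulong carry2 = sum < partial ? 1UL : 0UL;  // adding the carry wrapped
    return (sum, carry1 | carry2);
}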
I suppose floating-point has a lot to do here.
This has nothing to do with floating point.
You say you're using ulong, which means you're using unsigned 64-bit arithmetic. The largest value you can store is therefore "all ones" for 64 bits - aka UInt64.MaxValue, as you've discovered: https://learn.microsoft.com/en-us/dotnet/api/system.uint64.maxvalue
If you want to store arbitrarily large numbers, there are APIs for that - for example BigInteger. However, arbitrary size comes at a cost, so it isn't the default, and it certainly isn't what you get when you use ulong (or double, or decimal, etc. - all the compiler-level numeric types have a fixed size).
So: consider using BigInteger
Either way, you have a 64-bit processor limited to 64-bit math, so your problem is easiest to explain with an explicit example of how this is solved by BigInteger in the System.Numerics namespace, available in .NET Framework 4.8 for example. The basic idea is to 'decompose' the number into an array representation.
'Decompose' here has the mathematical meaning:
"express (a number or function) as a combination of simpler components."
Internally, BigInteger uses an internal array (actually multiple internal constructs) and a helper class called BigIntegerBuilder. It can implicitly convert a UInt64 without problems; for even bigger numbers you can use the + operator, for example.
BigInteger bignum = new BigInteger(18446744073709551615);
bignum += 1;
You can read about the implicit operator here:
https://referencesource.microsoft.com/#System.Numerics/System/Numerics/BigInteger.cs
public static BigInteger operator +(BigInteger left, BigInteger right)
{
    left.AssertValid();
    right.AssertValid();

    if (right.IsZero) return left;
    if (left.IsZero) return right;

    int sign1 = +1;
    int sign2 = +1;
    BigIntegerBuilder reg1 = new BigIntegerBuilder(left, ref sign1);
    BigIntegerBuilder reg2 = new BigIntegerBuilder(right, ref sign2);

    if (sign1 == sign2)
        reg1.Add(ref reg2);
    else
        reg1.Sub(ref sign1, ref reg2);

    return reg1.GetInteger(sign1);
}
In the code above from Reference Source you can see that the BigIntegerBuilder is used to add the left and right parts, which are also BigInteger values.
Interestingly, BigInteger keeps its internal state in a private array called "_bits", and that is the answer to your question: it tracks an array of 32-bit integers and is therefore able to handle big integers, even beyond 64 bits.
You can drop this code into a console application or LINQPad (which has the .Dump() method I use here) and inspect it:
BigInteger bignum = new BigInteger(18446744073709551615);
bignum.GetType().GetField("_bits",
    BindingFlags.NonPublic | BindingFlags.Instance).GetValue(bignum).Dump();
A detail about BigInteger is revealed in a comment in its source code on Reference Source:
// For values int.MinValue < n <= int.MaxValue, the value is stored in sign
// and _bits is null. For all other values, sign is +1 or -1 and the bits are in _bits
So for values that fit in an int, BigInteger stores the value in the _sign field; for all other values the _bits array is used.
Obviously, the internal array then needs to be converted into a base-10 representation so humans can read it; the ToString() method does that, converting the BigInteger to a string.
For a deeper understanding, consider using .NET source stepping to step into the code and see how the mathematics is carried out. But for a basic understanding: BigInteger uses an internal representation composed of an array of 32-bit integers, which is transformed into a readable format, and that is what allows numbers bigger than even Int64.

Why use decimal(int [ ]) constructor?

I am maintaining a C# desktop application, on Windows 7, using Visual Studio 2013. Somewhere in the code there is the following line, which tries to create a 0.01 decimal value using the Decimal(Int32[]) constructor:
decimal d = new decimal(new int[] { 1, 0, 0, 131072 });
First question is, is it different from the following?
decimal d = 0.01M;
If it is not different, why did the developer go through the trouble of coding it like that?
I need to change this line in order to create dynamic values. Something like:
decimal d = (decimal) (1 / Math.Pow(10, digitNumber));
Am I going to cause some unwanted behavior this way?
It seems useful to me when the source of the decimal consists of bits.
The decimal used in .NET has an implementation that is based on a sequence of bit parameters (not just one stream of bits like an int), so it can be useful to construct a decimal from bits when you communicate with other systems that return a decimal as a blob of bytes (over a socket, from a piece of memory, etc.).
It is then easy to convert the set of bits to a decimal, with no need for fancy conversion code. Also, you can construct a decimal from the inputs defined in the standard, which makes it convenient for testing the .NET Framework too.
The decimal(int[] bits) constructor allows you to give a bitwise definition of the decimal you're creating. bits must be a 4-element int array where:
bits[0], bits[1], and bits[2] make up the 96-bit integer number.
bits[3] contains the scale factor and sign.
It just allows you to get really precise with the definition of the decimal; judging from your example, I don't think you need that level of precision.
See here for more detail on using that constructor, or here for other constructors that may be more appropriate for you.
To answer your question more specifically: if digitNumber is the scale factor (0 to 28), then decimal d = new decimal(new int[] { 1, 0, 0, digitNumber << 16 }); does what you want, since the scale goes in bits 16 - 23 of the last int in the array.
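As an aside, a sketch of an arguably clearer alternative (still assuming digitNumber is the scale, 0 to 28) is the Decimal(Int32, Int32, Int32, Boolean, Byte) constructor, which takes the scale directly:
// low, mid, high parts of the 96-bit integer; sign; scale (digits after
// the decimal point). Equivalent to the int[] version above.
decimal d = new decimal(1, 0, 0, false, (byte)digitNumber);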
The definition in the XML documentation is:
//
// Summary:
// Initializes a new instance of System.Decimal to a decimal value represented
// in binary and contained in a specified array.
//
// Parameters:
// bits:
// An array of 32-bit signed integers containing a representation of a decimal
// value.
//
// Exceptions:
// System.ArgumentNullException:
// bits is null.
//
// System.ArgumentException:
// The length of bits is not 4. -or- The representation of the decimal value
// in bits is not valid.
So for some unknown reason the original developer wanted to initialize his decimal this way. Maybe he just wanted to confuse someone in the future.
It can't possibly affect your code if you change this to
decimal d = 0.01m;
because
(new decimal(new int[] { 1, 0, 0, 131072})) == 0.01m
You should know exactly how a decimal is stored in memory.
You can use this method to generate the desired value:
public static decimal Base10FractionGenerator(int digits)
{
    if (digits < 0 || digits > 28)
        throw new ArgumentException($"'{nameof(digits)}' must be between 0 and 28");

    return new decimal(new[] { 1, 0, 0, digits << 16 });
}
Use it like
Console.WriteLine(Base10FractionGenerator(0));
Console.WriteLine(Base10FractionGenerator(2));
Console.WriteLine(Base10FractionGenerator(5));
Here is the result
1
0.01
0.00001
The particular constructor you're talking about generates a decimal from four 32-bit values. Unfortunately, newer versions of the Common Language Infrastructure (CLI) leave its exact format unspecified (presumably to allow implementations to support different decimal formats) and now merely guarantee at least a specific precision and range of decimal numbers. However, earlier versions of the CLI do define that format exactly as Microsoft's implementation does, so it's probably kept that way in Microsoft's implementation for backward compatibility. Still, it's not ruled out that other implementations of the CLI will interpret the four 32-bit values of the Decimal constructor differently.
Decimals are exact numerics; you can use == or != to test for equality.
Perhaps, this line of code comes from some other place where it made sense at some particular point of time.
I'd clean it up.

Why does this simple double assertion fail in C#

The following test will fail in C#
Assert.AreEqual<double>(10.0d, 16.1d - 6.1d);
The problem appears to be a floating point error.
16.1d - 6.1d == 10.000000000000002
This is causing me headaches in writing unit tests for code that uses double. Is there a way to fix this?
There is no exact conversion between the decimal system and the binary representation of a double (see the excellent comment by @PatriciaShanahan below on why).
In this case the .1 part of the numbers is the problem; it cannot be represented finitely in a double (just as 1/3 can't be represented finitely as a decimal number).
A code snippet to explain what happens:
double larger = 16.1d;          // Assign the closest double representation of 16.1.
double smaller = 6.1;           // Assign the closest double representation of 6.1.
double diff = larger - smaller; // Assign the closest difference between larger and
                                // smaller; but since a smaller value has greater
                                // precision, the result has better precision than
                                // larger but worse than smaller. The difference
                                // shows up as the ...000002.
Always use an assertion overload that takes a delta parameter when comparing doubles.
Alternatively, if you really need exact decimal arithmetic, use the decimal data type, which uses a different representation and would return exactly 10 in your example.
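For example, with MSTest (which the Assert.AreEqual<double> call in the question suggests), there is an overload that takes a delta; the tolerance below is my own choice:
// Passes: the two values differ by far less than the 1e-9 tolerance.
Assert.AreEqual(10.0d, 16.1d - 6.1d, 1e-9);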
Floating-point numbers are an approximation of the actual value, stored as a mantissa and an exponent, so the test fails correctly. If you require exact equivalence of two decimal numbers, you may need to check out the decimal data type.
If you are using NUnit, please use the Within option. Here you can find additional information: http://www.nunit.org/index.php?p=equalConstraint&r=2.6.2.
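A minimal example of that constraint (the tolerance value is my own choice):
// NUnit's fluent equality constraint with an absolute tolerance.
Assert.That(16.1d - 6.1d, Is.EqualTo(10.0d).Within(1e-9));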
I agree with Anders Abel. There won't be a way to do this using a float representation. As a direct consequence of IEEE 754-1985, only numbers that can be represented in the form x = m * 2^e (with a mantissa m and exponent e of limited width) can be stored and computed precisely (as long as the chosen bit width allows this).
For example: 1024 * 1.75 * 183.375 <-- will be stored precisely
10 / 1.1 <-- won't be stored precisely
If you really need exact representation of rational numbers, you could write your own number implementation using fractions.
This could be done by saving the numerator, denominator, and sign. Then operations like multiply, subtract, etc. need to be implemented (it is very hard to ensure good performance). A string-formatting method could look like this (I assume cachedRepresentation, cachedDotIndex, and cachedNumerator to be member variables):
public string GetString(int digits)
{
    if (this.cachedRepresentation == "")
    {
        this.cachedRepresentation += this.positiveSign ? "" : "-";
        this.cachedRepresentation += this.numerator / this.denominator;
        this.cachedNumerator = 10 * (this.numerator % this.denominator);
        this.cachedDotIndex = this.cachedRepresentation.Length;
        this.cachedRepresentation += ".";
    }

    if ((this.cachedDotIndex + digits) < this.cachedRepresentation.Length)
        return this.cachedRepresentation.Substring(0, this.cachedDotIndex + digits + 1);

    while ((this.cachedDotIndex + digits) >= this.cachedRepresentation.Length)
    {
        this.cachedRepresentation += this.cachedNumerator / this.denominator;
        this.cachedNumerator = 10 * (this.cachedNumerator % this.denominator);
    }

    return this.cachedRepresentation;
}
This worked for me. In the operations themselves with long numbers I got some problems with too-small data types (usually I don't use C#). I think for an experienced C# developer it should be no problem to implement this without such issues.
If you implement this, you should reduce the fraction at initialization and before operations, using Euclid's greatest-common-divisor algorithm.
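A minimal sketch of that reduction step (names are illustrative):
// Euclid's algorithm: greatest common divisor of two longs.
static long Gcd(long a, long b)
{
    a = Math.Abs(a);
    b = Math.Abs(b);
    while (b != 0)
        (a, b) = (b, a % b);
    return a;
}
// Usage: long g = Gcd(numerator, denominator);
//        numerator /= g; denominator /= g;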
Irrational numbers can (in every case I know) be approximated by an algorithm that comes as close to the exact representation as you want (and the computer allows).

When is it beneficial to convert from float to double via decimal

Our existing application reads some floating point numbers from a file. The numbers are written there by some other application (let's call it Application B). The format of this file was fixed long time ago (and we cannot change it). In this file all the floating point numbers are saved as floats in binary representation (4 bytes in the file).
In our program as soon as we read the data we convert the floats to doubles and use doubles for all calculations because the calculations are quite extensive and we are concerned with the spread of rounding errors.
We noticed that when we convert floats via decimal (see the code below) we get more precise results than when we convert directly. Note: Application B also uses doubles internally and only writes them into the file as floats. Let's say Application B had the number 0.012, written to the file as a float. If, after reading, we convert it to decimal and then to double, we get exactly 0.012; if we convert it directly, we get 0.0120000001043081.
This can be reproduced without reading from a file - with just an assignment:
float readFromFile = 0.012f;
Console.WriteLine("Read from file: " + readFromFile);
//prints 0.012
double forUse = readFromFile;
Console.WriteLine("Converted to double directly: " + forUse);
//prints 0.0120000001043081
double forUse1 = (double)Convert.ToDecimal(readFromFile);
Console.WriteLine("Converted to double via decimal: " + forUse1);
//prints 0.012
Is it always beneficial to convert from float to double via decimal, and if not, under what conditions is it beneficial?
EDIT: Application B can obtain the values which it saves in two ways:
Value can be a result of calculations
Value can be typed in by user as a decimal fraction (so in the example above the user had typed 0.012 into an edit box and it got converted to double, then saved to float)
we get exactly 0.012
No you don't. Neither float nor double can represent 3/250 exactly. What you do get is a value that is rendered by the string formatter Double.ToString() as "0.012". But this happens because the formatter doesn't display the exact value.
Going through decimal is causing rounding. It is likely much faster (not to mention easier to understand) to just use Math.Round with the rounding parameters you want. If what you care about is the number of significant digits, see:
Round a double to x significant figures
For what it's worth, 0.012f (which means the 32-bit IEEE-754 value nearest to 0.012) is exactly
0x3C449BA6
or
0.012000000104308128
and this is exactly representable as a System.Decimal. But Convert.ToDecimal(0.012f) won't give you that exact value -- per the documentation there is a rounding step.
The Decimal value returned by this method contains a maximum of seven significant digits. If the value parameter contains more than seven significant digits, it is rounded using rounding to nearest.
As strange as it may seem, conversion via decimal (with Convert.ToDecimal(float)) may be beneficial in some circumstances.
It will improve the precision if it is known that the original numbers were provided by users in decimal representation and users typed no more than 7 significant digits.
To prove it I wrote a small program (see below). Here is the explanation:
As you recall from the OP this is the sequence of steps:
Application B has doubles coming from two sources:
(a) results of calculations; (b) converted from user-typed decimal numbers.
Application B writes its doubles as floats into the file - effectively
doing binary rounding from the 52 mantissa bits of an IEEE 754 double to the 23 mantissa bits of an IEEE 754 single.
Our application reads that float and converts it to a double in one of two ways:
(a) direct assignment to double - effectively padding a 23-bit number to a 52-bit number with binary zeros (29 zero-bits);
(b) via conversion to decimal with (double)Convert.ToDecimal(float).
As Ben Voigt properly noticed Convert.ToDecimal(float) (see MSDN in the Remark section) rounds the result to 7 significant decimal digits. In Wikipedia's IEEE 754 article about Single we can read that precision is 24 bits - equivalent to log10(pow(2,24)) ≈ 7.225 decimal digits. So, when we do the conversion to decimal we lose that 0.225 of a decimal digit.
So, in the generic case, when there is no additional information about the doubles, conversion to decimal will in most cases make us lose some precision.
But (!) if there is the additional knowledge that originally (before being written to the file as floats) the doubles were decimals with no more than 7 digits, then the rounding errors introduced by the decimal rounding (step 3(b) above) will compensate for the rounding errors introduced by the binary rounding (step 2 above).
In the program proving the statement for the generic case, I randomly generate doubles, cast them to float, then convert back to double (a) directly and (b) via decimal, and I measure the distance between the original double and double (a) and double (b). If double (a) is closer to the original than double (b), I increment the pro-direct-conversion counter; in the opposite case I increment the pro-via-decimal counter. I do this in a loop of 1 million iterations, then print the ratio of the two counters. The ratio turns out to be about 3.7, i.e. in approximately 4 cases out of 5 the conversion via decimal will spoil the number.
To prove the case when the numbers are typed in by users, I used the same program with the only change that I apply Math.Round(originalDouble, N) to the doubles. Because I get the original doubles from the Random class, they are all between 0 and 1, so the number of significant digits coincides with the number of digits after the decimal point. I placed this method in a loop over N from 1 to 15 significant digits typed by the user, then plotted how many times the direct conversion is better than the conversion via decimal, as a function of the number of significant digits typed by the user.
[Graph: ratio of direct-conversion wins to via-decimal wins vs. number of user-typed significant digits.]
As you can see, for 1 to 7 typed digits the conversion via decimal is always better than the direct conversion. To be exact, for a million random numbers only 1 or 2 are not improved by conversion to decimal.
Here is the code used for the comparison:
private static void CompareWhichIsBetter(int numTypedDigits)
{
    Console.WriteLine("Number of typed digits: " + numTypedDigits);
    Random rnd = new Random(DateTime.Now.Millisecond);

    int countDecimalIsBetter = 0;
    int countDirectIsBetter = 0;
    int countEqual = 0;

    for (int i = 0; i < 1000000; i++)
    {
        double origDouble = rnd.NextDouble();
        //Use the line below for the user-typed-in-numbers case.
        //double origDouble = Math.Round(rnd.NextDouble(), numTypedDigits);

        float x = (float)origDouble;
        double viaFloatAndDecimal = (double)Convert.ToDecimal(x);
        double viaFloat = x;

        double diff1 = Math.Abs(origDouble - viaFloatAndDecimal);
        double diff2 = Math.Abs(origDouble - viaFloat);

        if (diff1 < diff2)
            countDecimalIsBetter++;
        else if (diff1 > diff2)
            countDirectIsBetter++;
        else
            countEqual++;
    }

    Console.WriteLine("Decimal better: " + countDecimalIsBetter);
    Console.WriteLine("Direct better: " + countDirectIsBetter);
    Console.WriteLine("Equal: " + countEqual);
    Console.WriteLine("Betterness of direct conversion: " + (double)countDirectIsBetter / countDecimalIsBetter);
    Console.WriteLine("Betterness of conv. via decimal: " + (double)countDecimalIsBetter / countDirectIsBetter);
    Console.WriteLine();
}
Here's a different answer - I'm not sure that it's any better than Ben's (almost certainly not), but it should produce the right results:
float readFromFile = 0.012f;
decimal forUse = Convert.ToDecimal(readFromFile.ToString("0.000"));
So long as .ToString("0.000") produces the "correct" number (which should be easy to spot-check), then you'll get something you can work with and not have to worry about rounding errors. If you need more precision, just add more 0's.
Of course, if you actually need to work with 0.012f out to the maximum precision, then this won't help, but if that's the case, then you don't want to be converting it from a float in the first place.

c#: sum of two double numbers problem [duplicate]

Possible Duplicate:
Why is floating point arithmetic in C# imprecise?
Hi. I've got the following problem:
43.65+61.11=104.75999999999999
for decimal is correct:
(decimal)43.65+(decimal)61.11=104.76
Why result for double is wrong?
This question and its answers are a wealth of info on this - Difference between decimal, float and double in .NET?
To quote:
For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.
For values which are more artefacts of nature which can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.
Short answer: floating-point representation (such as double) is inherently inexact. So is fixed point (such as decimal), but the inaccuracy in fixed-point representation is of a different kind. Here's one short explanation: http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm
You can google "floating point inaccuracy" for more.
It isn't exactly wrong. It's the closest decimal representation to the binary floating-point number that results from the sum.
The problem is that IEEE floats cannot represent 43.65 + 61.11 exactly, due to the use of a binary mantissa. Some systems (such as Python 2.7 and Visual C++'s standard I/O libraries) will round to the simplest decimal that resolves to the same binary, and will print the expected 104.76. But all these systems arrive at exactly the same answer internally.
Interestingly, decimal notation can finitely represent any finite binary fraction, whereas the opposite doesn't hold. If humans had two fingers and computers used ten-state memory, we wouldn't have this problem. :-)
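A quick illustration of that asymmetry (the long literal below is the well-known exact decimal expansion of the double nearest to 0.1):
// 1/16 (binary 0.0001) has the finite decimal form 0.0625, but 1/10 has no
// finite binary form; the stored double is a nearby binary fraction whose
// exact decimal expansion is finite, just long:
Console.WriteLine(0.1 == 0.1000000000000000055511151231257827021181583404541015625); // True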
Because double uses a binary fractional model: every value is expressed as an integer mantissa over a power of two (m / 2^e). Given that, some numbers can only be approximated. Use decimal, not double, for high-precision decimal calculations.
See here for some light reading :)
It comes down to the fact that floats are stored as binary fractions, and just as in base 10, there are some numbers which can't be stored without truncation. Take for example 1/3 in base 10: that is .3 recurring. The numbers you are dealing with are recurring when converted to binary.
I disagree that floats or doubles are more or less accurate than decimal representations. They are as accurate as the precision you give them allows. They are a different representation, however, and a different set of numbers can be expressed exactly than in base 10.
Decimal stores numbers in base 10. That will probably give you the result you expect.
Decimal arithmetic is well-suited for base-10 representations of numbers, as base-10 numbers can be exactly represented in decimal. (Which is why currency is always stored in currency appropriate classes, or stored in int with a scaling factor to represent 'pennies' or 'pence' or other similar decimal currency.)
IEEE 754 binary numbers cannot represent .1 or .01 exactly. So floating-point formats approximate the inputs, and you get approximate outputs back, which is perfectly acceptable for what floating point was designed to handle: physical simulations, scientific data, and fast numerical methods.
Note this simple program and output:
#include <stdio.h>

int main(int argc, char* argv[])
{
    float f, g;
    double d, e;
    long double l, m;

    f = 0.1;
    g = f * f;
    d = 0.1;
    e = d * d;
    l = 0.1;
    m = l * l;

    printf("%40.40f, %40.40f, %40.40Lf\n", g, e, m);
    return 0;
}
$ ./fp
0.0100000007078051567077636718750000000000,
0.0100000000000000019428902930940239457414,
0.0100000000000000011102569059430467124372
None of the three gives the exact answer, 0.01, but all of them give numbers quite close to it.
But use decimal arithmetic for currency.
Really? The code below returned 104.76 as expected:
class Program
{
    static void Main(string[] args)
    {
        double d1 = 43.65;
        double d2 = 61.11;
        double d3 = d1 + d2;
        Console.WriteLine(d3);
        Console.ReadLine();
    }
}
whereas the code below returned 104.76000213623:
class Program
{
    static void Main(string[] args)
    {
        float d1 = 43.65f;
        float d2 = 61.11f;
        double d3 = d1 + d2;
        Console.WriteLine(d3);
        Console.ReadLine();
    }
}
Check if you are converting from float to double which may have caused this issue.
