System.NotSupportedException: No data is available for encoding 1252 - c#

I'm working through a Trust Commerce tutorial on how to generate a payment token that will allow customers to use the TC Trustee Host payment form. Their dev team gave me an example of how to retrieve this token.
using System;
using System.Net;
using System.IO;
using System.Text;
using System.Collections;
using System.Web;
/** #class TCToken
 * An example class for generating a TrustCommerce Trustee Token
 */
public class TCToken
{
    public static void Main(string[] args)
    {
        string custid = "123456";
        string password = "XXXXXX";
        try {
            // Adapted from http://www.west-wind.com/presentations/dotnetWebRequest/dotnetWebRequest.htm
            string gateway_post_address = "https://vault.trustcommerce.com/trustee/token.php";
            HttpWebRequest req = (HttpWebRequest) WebRequest.Create(gateway_post_address);
            // A sixty second timeout.
            req.Timeout = 60000;
            string post_data = "custid=" + HttpUtility.UrlEncode(custid) +
                "&password=" + HttpUtility.UrlEncode(password);
            req.Method = "POST";
            byte[] buf = System.Text.Encoding.GetEncoding(1252).GetBytes(post_data);
            req.ContentLength = buf.Length;
            req.ContentType = "application/x-www-form-urlencoded";
            Stream s = req.GetRequestStream();
            s.Write(buf, 0, buf.Length);
            s.Close();
            HttpWebResponse rep = (HttpWebResponse) req.GetResponse();
            Encoding enc = System.Text.Encoding.GetEncoding(1252);
            StreamReader rs = new StreamReader(rep.GetResponseStream(), enc);
            string token = rs.ReadToEnd();
            Console.WriteLine(token);
            rep.Close();
            rs.Close();
        } catch (Exception e) {
            Console.WriteLine(e);
        }
    }
}
I made a new console application in Visual Studio, copied this code, and replaced the username and password with the correct credentials. When I try to run this, I get the following error in the console.
System.NotSupportedException: No data is available for encoding 1252.
For information on defining a custom encoding, see the documentation for the Encoding.RegisterProvider method.
   at System.Text.Encoding.GetEncoding(Int32 codepage)
   at TCToken.Program.Main(String[] args) in C:\Users\xxxx\source\repos\TCToken\TCToken\Program.cs:line 29
I've tried to google this error and most of the responses are a little above my understanding. I'm certainly not a C# expert.

What ckuri said. Just to be clear, you need the following line of code before opening the stream (steps 2 and 3 of the documentation steps quoted below):
System.Text.Encoding.RegisterProvider(System.Text.CodePagesEncodingProvider.Instance);
ExcelDataReader - Important note on .NET Core
By default, ExcelDataReader throws a NotSupportedException "No data is
available for encoding 1252." on .NET Core.
To fix, add a dependency to the package System.Text.Encoding.CodePages
and then add code to register the code page provider during
application initialization (f.ex in Startup.cs):
System.Text.Encoding.RegisterProvider(System.Text.CodePagesEncodingProvider.Instance);
This is required to parse strings in binary BIFF2-5 Excel documents
encoded with DOS-era code pages. These encodings are registered by
default in the full .NET Framework, but not on .NET Core.

.NET Core supports only the ASCII, ISO-8859-1 and Unicode encodings by default, whereas the full .NET Framework supports many more.
However, .NET Core can be extended to support additional encodings like Windows-1252, Shift-JIS, GB2312 by registering the CodePagesEncodingProvider from the System.Text.Encoding.CodePages NuGet package.
After the NuGet package is installed, the following steps, as described in the documentation for the CodePagesEncodingProvider class, must be done to register the provider:
Add a reference to the System.Text.Encoding.CodePages.dll assembly to your project.
Retrieve a CodePagesEncodingProvider object from the static Instance property.
Pass the CodePagesEncodingProvider object to the Encoding.RegisterProvider method.
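Put together, a minimal sketch of those steps (the sample form data is just a placeholder):
using System;
using System.Text;

class Program
{
    static void Main()
    {
        // Register the code page provider once, early in application startup.
        Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

        // After registration, code page 1252 (Windows-1252) resolves normally on .NET Core.
        Encoding win1252 = Encoding.GetEncoding(1252);
        byte[] buf = win1252.GetBytes("custid=123456&password=XXXXXX");
        Console.WriteLine(buf.Length);
    }
}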

nuget:
Install-Package System.Text.Encoding.CodePages -Version 5.0.0
code:
Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

I was experiencing a similar issue when I was trying to read and convert an xlsx file to a DataTable. I found out that encoding 1252 is not available by default in .NET Core, so I had to add the NuGet package separately.
Below is the method where I convert the data from a memory stream.
private static DataTableCollection ExcelToDataTable(MemoryStream stream)
{
    System.Text.Encoding.RegisterProvider(System.Text.CodePagesEncodingProvider.Instance);
    using (var reader = ExcelReaderFactory.CreateReader(stream))
    {
        var result = reader.AsDataSet(new ExcelDataSetConfiguration()
        {
            ConfigureDataTable = (data) => new ExcelDataTableConfiguration()
            {
                UseHeaderRow = true
            }
        });
        return result.Tables;
    }
}
I registered the encoding provider from the NuGet package at the start of the method and it worked fine for me. This answer is late, but it might help people who are reading data from streams.
System.Text.Encoding.RegisterProvider(System.Text.CodePagesEncodingProvider.Instance);

Solution:
Add the System.Text.Encoding.CodePages Package to your project.
Write this code in your program:
System.Text.Encoding.RegisterProvider(System.Text.CodePagesEncodingProvider.Instance);

I installed the System.Text.Encoding.CodePages package as the other posts said, and this code worked for me:
System.Text.Encoding.RegisterProvider(
    System.Text.CodePagesEncodingProvider.Instance);
Encoding srcEncoding = Encoding.GetEncoding(1251);
using (var reader = new StreamReader(@"D:\someFile.csv", encoding: srcEncoding))
{
    List<string> listA = new List<string>();
    while (!reader.EndOfStream)
    {
        var line = reader.ReadLine();
        var values = line.Split(';');
        listA.Add(values[0]);
    }
}

Related

Using protobuf CodedInputStream to read from byte[]

In the following code I want to use a predefined protobuf message in C#. I found that I was able to take a message that has been created and serialize it into a byte[]:
ContainerMessage containerMessage = new ContainerMessage();
containerMessage.Type = CommandType.Connect;
containerMessage.Connect = new Connect();
containerMessage.Connect.ClientName = "TemporaryClientName";
byte[] stream = new byte[containerMessage.CalculateSize()];
using (Google.Protobuf.CodedOutputStream outstream = new Google.Protobuf.CodedOutputStream(stream))
{
    containerMessage.WriteTo(outstream);
}
This works as expected, and I can inspect the message: the values are as expected, as are the values in the byte[]. But if I try to deserialize even this simple byte[] that I have just created:
using (Google.Protobuf.CodedInputStream instream = new Google.Protobuf.CodedInputStream(stream))
{
    instream.ReadMessage(containerMessage);
}
It fails with:
An unhandled exception of type 'Google.Protobuf.InvalidProtocolBufferException' occurred in Google.Protobuf.dll
Additional information: Protocol message contained an invalid tag (zero).
Is this way of deserializing from a byte[] correct for protobuf?
The Protobuf Definition is:
message ContainerMessage {
  CommandType type = 1;
  bool commandSuccess = 2;
  oneof message {
    Connect connect = 3;
  }
}
enum CommandType {
  START = 0;
  CONNECT = 2;
}
message Connect {
  string ClientName = 1;
  uint32 PushPullPort = 2;
}
And the CS file is generated with the command line:
protoc.exe -I=%CD% --csharp_out=..\GeneratedCsMessaging\ Connect.proto
The CodedOutputStream and CodedInputStream classes are mainly intended to be used by the compiled proto classes. The API documentation for CodedOutputStream says as much, and mentions that if you have manually-written code calling either of the two classes, you need to use their WriteTag method before each value.
However, since you want to use Google Protobuf for serializing and parsing, any System.IO.Stream will do the job as intended. This is well documented and described in the Parsing and serialization section of the Protocol Buffer Basics for C# guide. The examples in Google Protobuf's GitHub repository can be quite helpful for getting the hang of Google Protobuf quickly. There you can see that a MemoryStream is used to serialize the object, while the Parser.ParseFrom method can be used to parse an object out of the serialized data.
As you've mentioned in the comments to your question, the directive using Google.Protobuf; is essential to be able to use Google Protobuf's features.
EDIT: A sample usage in your case could look something like this:
byte[] serializedBytes;
ContainerMessage containerMessage = new ContainerMessage()
{
    Connect = new Connect()
    {
        ClientName = "TemporaryClientName",
    },
    Type = CommandType.Connect,
};
using (MemoryStream stream = new MemoryStream())
{
    containerMessage.WriteTo(stream);
    serializedBytes = stream.ToArray();
}
ContainerMessage parsedCopy = ContainerMessage.Parser.ParseFrom(serializedBytes);

Extract properties from a CRL file using C#

I'd like to write a program which monitors CRL (Certificate Revocation List) expiration date.
Therefore, I'd like to read the following properties from a CRL file:
1) Effective Date
2) Next Update
3) Next CRL Publish
How can I accomplish my task?
I've only managed to find types like X509Certificate2, X509Chain, X509RevocationMode, etc.
You can use the X509Certificate2 class to get the information needed.
Example: to handle one certificate file:
X509Certificate2 x509 = new X509Certificate2();
byte[] rawData = ReadFile(fname);
x509.Import(rawData);
var validDate = x509.NotBefore;
var expireDate = x509.NotAfter;

// Reads a file.
internal static byte[] ReadFile(string fileName)
{
    FileStream f = new FileStream(fileName, FileMode.Open, FileAccess.Read);
    int size = (int)f.Length;
    byte[] data = new byte[size];
    size = f.Read(data, 0, size);
    f.Close();
    return data;
}
reference:
https://msdn.microsoft.com/en-us/library/system.security.cryptography.x509certificates.x509certificate2(v=vs.110).aspx
Edit:
You can use the BouncyCastle.Crypto library to handle CRLs.
Download the library and reference BouncyCastle.Crypto.dll, or install the NuGet package:
Install-Package BouncyCastle
// reference library BouncyCastle.Crypto
// http://www.bouncycastle.org/csharp/
// Load a CRL file and access its properties
public void GetCrlInfo(string fileName, Org.BouncyCastle.Math.BigInteger serialNumber, Org.BouncyCastle.X509.X509Certificate cert)
{
    try
    {
        byte[] buf = ReadFile(fileName);
        X509CrlParser xx = new X509CrlParser();
        X509Crl ss = xx.ReadCrl(buf);
        var nextupdate = ss.NextUpdate;
        var isRevoked = ss.IsRevoked(cert);
        Console.WriteLine("{0} {1}", nextupdate, isRevoked);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
}
Although the question is answered, I would add that there is another good open-source project that extends the native .NET Framework with cryptography objects that are missing from .NET: https://github.com/Crypt32/pkix.net
Regarding CRLs, I developed an X509CRL2 class in a similar way to the built-in X509Certificate2: X509CRL2 Class. The usage is pretty simple:
// reference the System.Security.Cryptography.X509Certificates namespace
var crl = new X509CRL2(@"C:\temp\crlfile.crl");
// Effective date:
var effective = crl.ThisUpdate;
// Next update:
var nextupdate = crl.NextUpdate;
// Next publish:
var nextPublishExtension = crl.Extensions["1.3.6.1.4.1.311.21.4"];
if (nextPublishExtension != null) { var nextPublish = nextPublishExtension.Format(1); }
The class supports CRL files in multiple formats, including pure binary, Base64, and even hex.
By using this class you can not only read CRL properties but also generate version 2 CRLs.
Note: the pkix.net library relies on another of my open-source projects, https://github.com/Crypt32/Asn1DerParser.NET, which is used to parse ASN.1 structures.
In addition to M.Hassan's post: using BouncyCastle.X509 you must convert the System.Security.Cryptography.X509Certificates X509Certificate2 to a BouncyCastle certificate. The missing piece between the initial code and the edit could be:
using System.Security.Cryptography.X509Certificates;

public static Org.BouncyCastle.X509.X509Certificate Convert(X509Certificate2 certificate)
{
    var certificateParser = new Org.BouncyCastle.X509.X509CertificateParser();
    var rawData = certificate.GetRawCertData();
    var bouncyCertificate = certificateParser.ReadCertificate(rawData);
    return bouncyCertificate;
}
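For example, tying this together with the GetCrlInfo method from the earlier answer (assuming both methods live in the same class; the file paths below are hypothetical placeholders):
var winCert = new X509Certificate2(@"C:\temp\client.cer");
Org.BouncyCastle.X509.X509Certificate bcCert = Convert(winCert);

// Build the serialNumber argument for GetCrlInfo from the certificate's hex serial number.
var serial = new Org.BouncyCastle.Math.BigInteger(winCert.SerialNumber, 16);
GetCrlInfo(@"C:\temp\list.crl", serial, bcCert);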
We can use the CertEnroll Win32 COM APIs. The code can be:
var bytes = File.ReadAllBytes(crlFile);
var base64 = System.Convert.ToBase64String(bytes);
CX509CertificateRevocationList crl = new CX509CertificateRevocationList();
crl.InitializeDecode(base64, EncodingType.XCN_CRYPT_STRING_BASE64_ANY);
Add the following to the .csproj to include the CertEnroll COM reference:
<ItemGroup>
  <COMReference Include="CERTENROLLLib">
    <WrapperTool>tlbimp</WrapperTool>
    <VersionMinor>0</VersionMinor>
    <VersionMajor>1</VersionMajor>
    <Guid>728ab348-217d-11da-b2a4-000e7bbb2b09</Guid>
    <Lcid>0</Lcid>
    <Isolated>false</Isolated>
    <EmbedInteropTypes>true</EmbedInteropTypes>
  </COMReference>
</ItemGroup>
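Once decoded, you should be able to read the dates from the crl object; I believe the underlying IX509CertificateRevocationList interface exposes ThisUpdate, NextUpdate and NextPublish properties, but treat these member names as an assumption and verify them against the CertEnroll documentation:
// Assumed property names on CERTENROLLLib.CX509CertificateRevocationList; verify against the docs.
Console.WriteLine("Effective date:   {0}", crl.ThisUpdate);
Console.WriteLine("Next update:      {0}", crl.NextUpdate);
Console.WriteLine("Next CRL publish: {0}", crl.NextPublish);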

Serialize object in UWP app

I'm trying to pass an object over a StreamSocket, so I need to serialize it before I can do that.
I looked at BinaryFormatter, but that doesn't seem to be available in UWP?
Any suggestions on how to serialize an object, preferably in the most compact way?
You can serialize your own .NET objects; JSON.NET is a very popular choice, but the built-in DataContractSerializer should also work if you don't want any dependencies.
You cannot serialize any WinRT objects.
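For instance, a minimal DataContractSerializer sketch (the Person type is just a made-up example) that turns an object into bytes you can write to the socket and back:
using System.IO;
using System.Runtime.Serialization;

[DataContract]
public class Person
{
    [DataMember] public string Name { get; set; }
    [DataMember] public int Age { get; set; }
}

public static byte[] Serialize(Person person)
{
    var serializer = new DataContractSerializer(typeof(Person));
    using (var ms = new MemoryStream())
    {
        serializer.WriteObject(ms, person);   // writes the object as XML
        return ms.ToArray();                  // bytes you can send over the socket
    }
}

public static Person Deserialize(byte[] data)
{
    var serializer = new DataContractSerializer(typeof(Person));
    using (var ms = new MemoryStream(data))
    {
        return (Person)serializer.ReadObject(ms);
    }
}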
If you want to serialize an object to JSON, first add Newtonsoft.Json to your project.
To install Json.NET, run the following command in the Package Manager Console:
PM> Install-Package Newtonsoft.Json
Then use this code to pass your object:
(Note: in this sample I assume your server and client are the same machine and the port you are using is 1800.)
try
{
    Windows.Networking.Sockets.StreamSocket socket = new Windows.Networking.Sockets.StreamSocket();
    Windows.Networking.HostName serverHost = new Windows.Networking.HostName("127.0.0.1");
    string serverPort = "1800";
    // ConnectAsync must be awaited, so this needs to run inside an async method.
    await socket.ConnectAsync(serverHost, serverPort);

    // myObject is the object instance you want to send.
    var json = JsonConvert.SerializeObject(myObject);

    Stream streamOut = socket.OutputStream.AsStreamForWrite();
    StreamWriter writer = new StreamWriter(streamOut);
    writer.WriteLine(json);
    writer.Flush();

    Stream streamIn = socket.InputStream.AsStreamForRead();
    StreamReader reader = new StreamReader(streamIn);
    string response = reader.ReadLine();
}
catch (Exception e)
{
    var err = e.Message;
    // Handle the exception here.
}
Let me know if you have any other questions.

Encoding string issue reading a CSV file in C#

I am currently developing a Windows Phone 8 application in which I have to download a CSV file from a web service and convert the data to C# business objects (I do not use a library for this part).
Downloading the file and converting the data to C# business objects is not an issue using RestSharp.Portable and the StreamReader and MemoryStream classes.
The issue I am facing is the bad encoding of the string fields.
With the RestSharp.Portable library, I retrieve the CSV file content as a byte array and then convert the data to a string with the following code (where response is a byte array):
using (var streamReader = new StreamReader(new MemoryStream(response)))
{
    while (streamReader.Peek() >= 0)
    {
        var csvLine = streamReader.ReadLine();
    }
}
but instead of "Jérome", my csvLine variable contains J�rome. I tried several things to obtain Jérome, but without success, like:
using (var streamReader = new StreamReader(new MemoryStream(response), true))
or
using (var streamReader = new StreamReader(new MemoryStream(response), Encoding.UTF8))
When I open the CSV file with a simple text editor like Notepad++, I obtain Jérome only when the file is interpreted as ANSI. But if I try the following code in C#:
using (var streamReader = new StreamReader(new MemoryStream(response), Encoding.GetEncoding("ANSI")))
I have the following exception :
'ANSI' is not a supported encoding name.
Can someone help me decode my CSV file correctly?
Thank you in advance for your help or advice!
You need to pick one of these.
https://msdn.microsoft.com/en-us/library/windows/desktop/dd317756(v=vs.85).aspx
If you don't know, you can try to guess it. Guessing isn't a perfect solution, per the answer here.
You can't detect the codepage, you need to be told it. You can analyse the bytes and guess it, but that can give some bizarre (sometimes amusing) results.
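As a very rough illustration of such guessing (only a sketch, and deliberately naive): check for a BOM, then attempt a strict UTF-8 decode and fall back to ISO-8859-1 when it throws.
static Encoding GuessEncoding(byte[] data)
{
    // Byte-order marks are the only reliable signal.
    if (data.Length >= 3 && data[0] == 0xEF && data[1] == 0xBB && data[2] == 0xBF) return Encoding.UTF8;
    if (data.Length >= 2 && data[0] == 0xFF && data[1] == 0xFE) return Encoding.Unicode;
    if (data.Length >= 2 && data[0] == 0xFE && data[1] == 0xFF) return Encoding.BigEndianUnicode;

    try
    {
        // Strict UTF-8: throws on byte sequences that are not valid UTF-8.
        new UTF8Encoding(false, true).GetString(data);
        return Encoding.UTF8;
    }
    catch (DecoderFallbackException)
    {
        // Not valid UTF-8; assume a Latin-1 style single-byte encoding.
        return Encoding.GetEncoding("ISO-8859-1");
    }
}
For a file like the one in the question, the single 0xE9 byte for "é" is not valid UTF-8, so this heuristic would fall back to ISO-8859-1, which matches the working solution below.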
From the link in Lawtonfogle's answer I tried to use:
using (var streamReader = new StreamReader(new MemoryStream(response), Encoding.GetEncoding("Windows-1252")))
But I had the following error :
'Windows-1252' is not a supported encoding name.
Searching for the reason online, I finally found a thread with the following answer, which works for me.
So here is the working solution in my case:
using (var streamReader = new StreamReader(new MemoryStream(response), Encoding.GetEncoding("ISO-8859-1")))
{
while (streamReader.Peek() >= 0)
{
var csvLine = streamReader.ReadLine();
}
}

Why does mono sometimes truncate http downloads?

I use the following code to download text (json):
var request = WebRequest.Create(url);
using (var response = request.GetResponse())
{
    string charset = null;
    var httpResponse = response as HttpWebResponse;
    if (httpResponse != null)
    {
        if (httpResponse.StatusCode != HttpStatusCode.OK)
        {
            throw new System.Net.WebException("Status code was: " + httpResponse.StatusCode);
        }
        charset = httpResponse.CharacterSet;
    }
    Encoding enc = charset != null ? Encoding.GetEncoding(charset) : null;
    using (var reader = new StreamReader(response.GetResponseStream(), enc, true))
    {
        return reader.ReadToEnd();
    }
}
On Windows (.NET) it works fine. On Linux (Mono runtime) it sometimes returns truncated data: the JSON parser crashes because it can't find the closing delimiter for strings, and similar errors. It is not a problem with the parser: I have tried two different ones. It does not seem to be a problem with encoding either, because it sometimes works and sometimes doesn't for the exact same downloaded data.
Why would mono behave this way and how can I avoid this problem?
Edit: I added a console print for debugging purposes. The string coming directly from the code above is definitely truncated.
Edit2: Here is how I use the result:
string json = DownloadTextFile(url);
dynamic obj = Json.Decode(json); // Decoding fails here, because the string is truncated.
The problem occurs much less frequently when I let the program run on a server with a very good connection to the net (after a few thousand downloads instead of after a few hundred). That is good enough for my purposes.
Checking the content length does not help much, because it is -1 more often than not. It is sad that the network stack is implemented so poorly in Mono. (On .NET the same code works flawlessly even with a bad connection.)
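A possible mitigation sketch (the DownloadTextChecked helper name is made up for illustration, not tested code): read the raw bytes and, when the server did advertise a Content-Length, retry if fewer bytes arrived than promised. When the header is -1 there is nothing to compare against, so this only helps for non-chunked responses.
static string DownloadTextChecked(string url, int maxAttempts = 3)
{
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        var request = WebRequest.Create(url);
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var ms = new MemoryStream())
        {
            response.GetResponseStream().CopyTo(ms);
            byte[] data = ms.ToArray();

            // ContentLength is -1 for chunked responses; then we cannot verify completeness.
            bool complete = response.ContentLength < 0 || data.Length == response.ContentLength;
            if (complete || attempt == maxAttempts)
            {
                var charset = response.CharacterSet;
                var enc = string.IsNullOrEmpty(charset) ? Encoding.UTF8 : Encoding.GetEncoding(charset);
                return enc.GetString(data);
            }
        }
    }
    throw new WebException("Download kept returning truncated data: " + url);
}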
