In Java, this works as expected:
public static void testwrite(String filename) throws IOException {
    FileOutputStream fs = new FileOutputStream(new File(filename), false);
    DeflaterOutputStream fs2 = new DeflaterOutputStream(fs, new Deflater(3));
    for (int i = 0; i < 50; i++)
        for (int j = 0; j < 40; j++)
            fs2.write((byte) (i + 0x30));
    fs2.close();
}
public static void testread(String filename) throws IOException {
    FileInputStream fs = new FileInputStream(new File(filename));
    InflaterInputStream fs2 = new InflaterInputStream(fs);
    int c, n = 0;
    while ((c = fs2.read()) >= 0) {
        System.out.print((char) c);
        if (n++ % 40 == 0) System.out.println("");
    }
    fs2.close();
}
The first method compresses the 2,000 characters into a 106-byte file, and the second reads it back fine.
The equivalent in C# would seem to be
private static void testwritecs(String filename) {
    FileStream fs = new FileStream(filename, FileMode.OpenOrCreate);
    DeflateStream fs2 = new DeflateStream(fs, CompressionMode.Compress, false);
    for (int i = 0; i < 50; i++) {
        for (int j = 0; j < 40; j++)
            fs2.WriteByte((byte)(i + 0x30));
    }
    fs2.Flush();
    fs2.Close();
}
But it generates a file of 2,636 bytes (larger than the raw data, even though the data has very low entropy), and the file is not readable with the Java testread() method above. Any ideas?
Edited: The implementation is indeed not standard/portable (the bit of the docs calling it "an industry standard algorithm" seems a joke), and it is very crippled. Among other things, its behaviour changes radically depending on whether one writes the bytes one at a time or in blocks (which goes against the concept of a "stream"); if I replace the above
for (int j = 0; j < 40; j++)
    fs2.WriteByte((byte)(i + 0x30));
with
byte[] buf = new byte[40];
for (int j = 0; j < 40; j++)
    buf[j] = (byte)(i + 0x30);
fs2.Write(buf, 0, buf.Length);
the compression becomes (slightly) more reasonable. Shame.
Don't use DeflateStream on anything except plain ASCII text, because it uses statically defined, hardcoded Huffman trees built for plain ASCII text. See my prior answer for more detail, or just use SharpZipLib and forget it.
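One likely compatibility wrinkle on top of that: Java's DeflaterOutputStream wraps the deflate data in the zlib format (RFC 1950), while .NET's DeflateStream emits raw deflate (RFC 1951), so Java's InflaterInputStream will reject the .NET output regardless of compression quality. SharpZipLib's DeflaterOutputStream mirrors the Java API and writes the zlib format by default; a minimal sketch, assuming the ICSharpCode.SharpZipLib package is referenced:
using System.IO;
using ICSharpCode.SharpZipLib.Zip.Compression;
using ICSharpCode.SharpZipLib.Zip.Compression.Streams;

private static void testwritesharpzip(String filename) {
    // same payload as the Java testwrite(), compressed at level 3
    using (FileStream fs = new FileStream(filename, FileMode.Create))
    using (DeflaterOutputStream fs2 = new DeflaterOutputStream(fs, new Deflater(3))) {
        for (int i = 0; i < 50; i++)
            for (int j = 0; j < 40; j++)
                fs2.WriteByte((byte)(i + 0x30));
    }
}
The output of this sketch should be readable by the Java testread() above, since both sides then agree on the zlib framing.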
In my application I have to create an XML element where I need to insert this value after some checking (which is why I have to convert it):
int currentXMLTimeValue = Convert.ToInt32(elemList[i].Attributes["tTime"].Value);
When debugging I get stuck at this line; I can wait over 10 minutes and then get a timeout exception. As you might have guessed, I'm running it through a loop, so it is crucial that I don't spend that much time on this action.
My other conversion isn't slow at all:
int currentEDFTimeValue = Convert.ToInt32(lines[j][2]);
It must then be the Value call that takes the time. How could I otherwise get this value, or how can I speed it up?
elemList[i].Attributes["tTime"].Value
Here are the rest of the for loops:
XmlWriter writer = null;
XmlDocument doc = new XmlDocument();
doc.Load(oldFile);
XmlNodeList elemList = doc.GetElementsByTagName("test");
for (int j = 0; j < lines.Length; j++)
{
    int currentEDFTimeValue = Convert.ToInt32(lines[j][2]);
    for (int i = 0; i < elemList.Count; i++)
    {
        XmlElement newElem = doc.CreateElement(replacementElement);
        int currentXMLTimeValue = Int32.Parse(elemList[i].Attributes["tTime"].Value);
        if (currentEDFTimeValue != currentXMLTimeValue)
        {
            newElem.SetAttribute("tTime", currentEDFTimeValue.ToString());
            if (currentEDFTimeValue > currentXMLTimeValue)
            {
                eventNode.InsertAfter(newElem, elemList[i]);
            }
            else if (currentEDFTimeValue < currentXMLTimeValue)
            {
                eventNode.InsertBefore(newElem, elemList[i]);
            }
        }
    }
}
These XML files might have more than 50,000 lines.
EDIT:
Did some testing and found that this is very fast:
var test = elemList;
But getting a specific XML node is very slow:
var test = elemList[i];
Could it be because I add nodes/elements to the list while using it?
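If that guess is right, one plausible mechanism: GetElementsByTagName returns a live XmlNodeList, so indexing into it after the document has been mutated can force the list to be re-evaluated on each access. A hedged sketch of a workaround, snapshotting the nodes into a plain list before inserting anything (assumes using System.Linq):
// snapshot the live node list once, so later InsertAfter/InsertBefore calls
// don't invalidate it and force a re-walk on every elemList[i] access
List<XmlNode> snapshot = doc.GetElementsByTagName("test").Cast<XmlNode>().ToList();
for (int i = 0; i < snapshot.Count; i++)
{
    int currentXMLTimeValue = Int32.Parse(snapshot[i].Attributes["tTime"].Value);
    // ... rest of the loop body unchanged
}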
EDIT2:
I have another function that is more or less the same, but it is also very fast:
List<List<int>> offsets = new List<List<int>>();
for (int i = 0; i < elemList.Count; i++)
{
    var xmlDur = Convert.ToInt32(elemList[i].Attributes["duration"].Value);
    List<int> offset = new List<int>();
    for (int j = 0; j < lines.Length; j++)
    {
        var edfDur = Convert.ToInt32(lines[j][4]);
        if (Math.Abs(edfDur - xmlDur) <= 2)
        {
            for (int h = 0; h < offsets[offsets.Count - 2].Count; h++)
            {
                startTimeDistance = Convert.ToInt32(elemList[i].Attributes["tTime"].Value) - Convert.ToInt32(lines[j][3]);
                offset.Add(startTimeDistance);
            }
        }
    }
    if (offset.Count > 0) offsets.Add(offset);
}
short[] sArray = new short[100];
I have a lot of 16-bit values in sArray, so I want to write them using the BinaryWriter class.
But BinaryWriter has Write(byte[]) and Write(char[]) array overloads only.
How do I write 16-bit (short[]) data to the file?
You need to write each value individually using the BinaryWriter.Write(short) method.
Writing:
binaryWriter.Write(sArray.Length); // BinaryWriter.Write(Int32) overload
for (int i = 0; i < sArray.Length; i++)
{
    binaryWriter.Write(sArray[i]); // BinaryWriter.Write(Int16) overload
}
Reading:
short[] result = new short[binaryReader.ReadInt32()];
for (int i = 0; i < result.Length; i++)
{
    result[i] = binaryReader.ReadInt16();
}
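If the per-value calls feel too chatty, an alternative sketch copies the shorts into a byte[] with Buffer.BlockCopy and writes them in one call (note this stores the values in the machine's native byte order, just like the loop above):
// writing
byte[] bytes = new byte[sArray.Length * sizeof(short)];
Buffer.BlockCopy(sArray, 0, bytes, 0, bytes.Length); // raw byte copy of the shorts
binaryWriter.Write(sArray.Length);                   // element count prefix, as above
binaryWriter.Write(bytes);                           // single Write(byte[]) call
// reading
byte[] raw = binaryReader.ReadBytes(binaryReader.ReadInt32() * sizeof(short));
short[] result2 = new short[raw.Length / sizeof(short)];
Buffer.BlockCopy(raw, 0, result2, 0, raw.Length);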
I have been assigned to convert a C++ app to C#.
I want to convert the following code to C#, where rate_buff is a double[3,9876] two-dimensional array:
if ((fread((char*) rate_buff,
           (size_t) record_size,
           (size_t) record_count,
           stream)) == (size_t) record_count)
If I correctly guessed your requirements, this is what you want:
int record_size = 9876;
int record_count = 3;
double[,] rate_buff = new double[record_count, record_size];
// open the file
using (Stream stream = File.OpenRead("some file path"))
{
    // create byte buffer for stream reading that is record_size * sizeof(double) in bytes
    byte[] buffer = new byte[record_size * sizeof(double)];
    for (int i = 0; i < record_count; i++)
    {
        // read one record
        if (stream.Read(buffer, 0, buffer.Length) != buffer.Length)
            throw new InvalidDataException();
        // copy the doubles out of the byte buffer into the two dimensional array
        // note this assumes machine-endian byte order
        for (int j = 0; j < record_size; j++)
            rate_buff[i, j] = BitConverter.ToDouble(buffer, j * sizeof(double));
    }
}
Or more concisely with a BinaryReader:
int record_size = 9876;
int record_count = 3;
double[,] rate_buff = new double[record_count, record_size];
// open the file
using (BinaryReader reader = new BinaryReader(File.OpenRead("some file path")))
{
    for (int i = 0; i < record_count; i++)
    {
        // read the doubles directly from the stream into the two dimensional array
        // note this assumes machine-endian byte order
        for (int j = 0; j < record_size; j++)
            rate_buff[i, j] = reader.ReadDouble();
    }
}
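Both versions assume the doubles were written in the machine's byte order, as the comments note. If the original C++ app ran on a platform with the opposite endianness, each value's bytes would need to be reversed before conversion; a sketch of the inner statement for that case, assuming a big-endian file read on a little-endian machine:
byte[] one = reader.ReadBytes(sizeof(double)); // 8 bytes per double
if (BitConverter.IsLittleEndian)               // machine order differs from file order
    Array.Reverse(one);                        // flip to machine order
rate_buff[i, j] = BitConverter.ToDouble(one, 0);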
I have a text file that has the following in it (without the quotation marks and "Empty Space"):
##############
# Empty Space#
# Empty Space#
# Empty Space#
# Empty Space#
##############
I want to add this whole file row by row into a list:
FileStream FS = new FileStream(@"FilePath", FileMode.Open);
StreamReader SR = new StreamReader(FS);
List<string> MapLine = new List<string>();
string line;
while ((line = SR.ReadLine()) != null)
{
    MapLine.Add(line);
}
foreach (var x in MapLine)
{
    Console.Write(x);
}
Here comes my problem: I want to put this into a two-dimensional array. I tried:
string[,] TwoDimentionalArray = new string[100, 100];
for (int i = 0; i < MapLine.Count; i++)
{
    for (int j = 0; j < MapLine.Count; j++)
    {
        TwoDimentionalArray[j, i] = MapLine[j].Split('\n').ToString();
    }
}
I am still new to C#, so any help will be appreciated.
You can try this:
// File.ReadAllLines method serves exactly the purpose you need
List<string> lines = File.ReadAllLines(@"Data.txt").ToList();
// lines.Max(line => line.Length) is used to find out the length of the longest line read from the file
string[,] twoDim = new string[lines.Count, lines.Max(line => line.Length)];
for (int lineIndex = 0; lineIndex < lines.Count; lineIndex++)
    for (int charIndex = 0; charIndex < lines[lineIndex].Length; charIndex++)
        twoDim[lineIndex, charIndex] = lines[lineIndex][charIndex].ToString();
for (int lineIndex = 0; lineIndex < lines.Count; lineIndex++)
{
    for (int charIndex = 0; charIndex < lines[lineIndex].Length; charIndex++)
        Console.Write(twoDim[lineIndex, charIndex]);
    Console.WriteLine();
}
Console.ReadKey();
This is going to save each character of the file content into its own position in a two-dimensional array. For that purpose, char[,] might also have been used.
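For the char[,] variant, the same idea without the per-character string allocations would look like this:
char[,] twoDimChars = new char[lines.Count, lines.Max(line => line.Length)];
for (int lineIndex = 0; lineIndex < lines.Count; lineIndex++)
    for (int charIndex = 0; charIndex < lines[lineIndex].Length; charIndex++)
        twoDimChars[lineIndex, charIndex] = lines[lineIndex][charIndex];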
Currently, you are going through all the lines of your file, and for every line, you go through all the lines again to split them on \n, which is already done by putting them in MapLine.
If you want every single char of a line in an array, and those arrays in an array again, it should look approximately like this:
string[,] TwoDimentionalArray = new string[100, 100];
for (int i = 0; i < MapLine.Count; i++)
{
    for (int j = 0; j < MapLine[i].Length; j++)
    {
        TwoDimentionalArray[i, j] = MapLine[i].Substring(j, 1);
    }
}
I did this without testing, so it might be faulty. The point is that you need to iterate through each line first, then through each letter in that line. From there, you can use Substring.
Also, I hope I've understood your question correctly.
Hello, I have an array with a bunch of grayscale values:
var test="...0,222,254,254,254,254,241,198,198,198,198,198,198,198,198,170,52...".Split(',');
And I want to create a bitmap from those values:
int c = 1;
var bmp = new Bitmap(28, 28);
for (int i = 0; i < 28; i++)
    for (int j = 0; j < 28; j++)
    {
        bmp.SetPixel(i, j, Color.FromArgb(Convert.ToInt32(test[c]), Convert.ToInt32(test[c]), Convert.ToInt32(test[c])));
        c++;
    }
However, when I try to save it to disk:
bmp.Save(#"E:\r\0.jpg",ImageFormat.Jpeg);
I get the generic GDI+ error ("A generic error occurred in GDI+").
I have tried:
Checking file permissions
Changing ImageFormat
Cloning the bitmap
Sorry, but I just tried this and it works well:
Bitmap bmp = new Bitmap(28, 28);
for (int i = 0; i < 28; i++)
{
    for (int j = 0; j < 28; j++)
    {
        bmp.SetPixel(i, j, Color.FromArgb(i, i, i));
    }
}
bmp.Save("test.jpg", ImageFormat.Jpeg);
Are you sure the problem is in Save?
OK, I'm a dumbass: the problem is that I was saving the file to a folder that did not exist; I thought it would be created.
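For anyone who hits the same thing: GDI+ reports a missing target folder as the generic error rather than a DirectoryNotFoundException. A small sketch of the fix, creating the folder before saving:
string path = @"E:\r\0.jpg";                             // path from the question
Directory.CreateDirectory(Path.GetDirectoryName(path));  // no-op if it already exists
bmp.Save(path, ImageFormat.Jpeg);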