It's me again, and I have another problem. Somewhere, I found the following code:
private T DeepDeserialize<T>(string fileName)
{
    T returnValue;
    using (FileStream str = new FileStream(fileName, FileMode.Open))
    {
        BinaryFormatter binaryFormatter = new BinaryFormatter();
        returnValue = (T)binaryFormatter.Deserialize(str);
    }
    return returnValue;
}
I've modified some classes today, and now it always throws an error which roughly translates to "End of stream encountered before parsing was completed" (the message is localized on my machine, so the exact English wording may differ).
I've tried inserting str.Position = 0; between the two lines inside the using block, a suggestion I found somewhere here, but it doesn't help.
Can someone help me make it work again? I have no idea what to do...
You have changed the binary layout of your files but are most likely trying to deserialize old files. That is not going to work; you have to serialize new versions first.
P.S. If you consider versioning and a custom formatter at an early stage, you might be able to deserialize old data with new classes, depending on how drastic your change was.
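As a hedged illustration of the versioning idea: BinaryFormatter has limited built-in version tolerance via [OptionalField], so fields added in a later version don't break deserialization of files written by an older version. The class and field names below are hypothetical:

using System;
using System.Runtime.Serialization;

[Serializable]
public class LegacyDocument
{
    public string Title; // existed in version 1 of the class

    [OptionalField(VersionAdded = 2)] // added later; old files simply lack it
    public string Author;

    [OnDeserializing]
    private void SetDefaults(StreamingContext context)
    {
        // Runs before deserialization, so old files get a sensible default
        Author = "unknown";
    }
}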
Related
I have a C# solution with an ASMX startup project and some other projects. One of these projects (say ThirdProject) has a class (say DataReader) with a method (say ReadData()) which deserializes a stream. The stream itself is fine: it comes from an embedded resource, it can be read without error by a StreamReader into a string, and it really is a valid XML string. But the deserialization throws a StackOverflowException.
Now comes the weirdness. For test purposes, I've added a WinForms project to this C# solution. If I set this WinForms project as the startup project of the solution, it calls the ThirdProject.DataReader.ReadData() method without any error! The deserialization completes!
I've repeated my experiments with different bitness (x86 or x64) and different target .NET Framework versions (from 4.0 to 4.7.2), but the result is always the same.
Where should I look for the cause of this error? Any hint would be appreciated.
Edit.
The code part in question is this:
using (Stream stream = assembly.GetManifestResourceStream(xmlname))
{
    var l = stream.Length;
    var ret_obj = ktAntragsdatenAbrufenXmlFormat.Deserialize(stream);
    ...
}
The stream.Length is 18671. ktAntragsdatenAbrufenXmlFormat is a static System.Xml.Serialization.XmlSerializer.
This code part runs without error:
using (Stream stream = assembly.GetManifestResourceStream(xmlname))
{
    var l = stream.Length;
    StreamReader reader = new StreamReader(stream);
    string text = reader.ReadToEnd();
}
Since reader.ReadToEnd() seems to work fine, try reading the resource into a string first and deserializing from that:

using (Stream stream = assembly.GetManifestResourceStream(xmlname))
{
    var l = stream.Length;
    StreamReader streamReader = new StreamReader(stream);
    string text = streamReader.ReadToEnd();
    using (TextReader textReader = new StringReader(text))
    {
        var ret_obj = ktAntragsdatenAbrufenXmlFormat.Deserialize(textReader);
    }
}
Even if you have valid XML, it can contain reference loops.
If your web service uses JSON, you can set a reference-loop handling strategy like ReferenceLoopHandling.Ignore somewhere; I haven't used it myself, but there must be an option for that.
If it's not JSON, the data just won't go through your web service as XML; I have no idea whether such an option even exists in that case. You may have to remove the reference loops manually before sending and rebuild them once the data arrives on the other side.
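For reference, a minimal sketch of the Json.NET setting mentioned above (assuming the Newtonsoft.Json package; someObject is a placeholder for whatever you are serializing):

using Newtonsoft.Json;

var settings = new JsonSerializerSettings
{
    // Skip properties that would recurse back into an object that is
    // already being serialized, instead of throwing an exception
    ReferenceLoopHandling = ReferenceLoopHandling.Ignore
};

string json = JsonConvert.SerializeObject(someObject, settings);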
Never mind, it's not your case... I'll leave this here anyway.
I have a Neurotec NTemplate with one finger record. Now I want to serialize it in C# with protobuf-net. I don't get any exception, but my MemoryStream is empty.
What might be the problem?
The code is below (where tenPrintTemplate is an NTemplate):
tenPrintTemplate.AddFingers(fingerPrintTemplate.Save());
//start Proto Buffer serialization
MemoryStream stream = new MemoryStream();
RuntimeTypeModel.Default.InferTagFromNameDefault = true;
RuntimeTypeModel.Default.Add(typeof(NTemplate), false);
ProtoBuf.Serializer.Serialize<NTemplate>(stream, tenPrintTemplate);
Here you've told it not to apply any standard pattern / configuration logic:
RuntimeTypeModel.Default.Add(typeof(NTemplate), false);
so you have basically told it "serialize nothing". If you specify false, it expects you to tell it how you want it to work, for example by using Add on the MetaType that is returned. I suspect you could also just specify true if it has suitable attributes.
Note that 0 is a perfectly reasonable length for protobuf-net and an object that doesn't have anything interesting to mention on the wire.
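A hedged sketch of what the manual MetaType configuration can look like; the member names "Fingers" and "Size" are hypothetical here, since NTemplate's actual members aren't shown:

using ProtoBuf.Meta;

// Register the type without automatic configuration...
MetaType meta = RuntimeTypeModel.Default.Add(typeof(NTemplate), false);

// ...then map each member you want on the wire to an explicit field number.
meta.Add(1, "Fingers");
meta.Add(2, "Size");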
I have found the solution for serializing Neurotec's NTemplate using C# protobuf-net. I'm adding the solution code below; if anyone faces the same problem, please use it as your solution.
// It's an NTemplate of TenPrint
tenPrintTemplate.AddFingers(fingerPrintTemplate.Save());
// start Protocol Buffers serialization
MemoryStream stream = new MemoryStream();
int tenPrintTemplateSize = tenPrintTemplate.GetSize();
NBuffer buffer = new NBuffer(tenPrintTemplateSize);
// save the fingers template to the buffer
tenPrintTemplate.Save(buffer);
// serialize the raw template bytes rather than the NTemplate object itself
ProtoBuf.Serializer.Serialize<byte[]>(stream, buffer.ToArray());
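For completeness, a sketch of the reverse step under the same assumptions (I haven't verified the Neurotec side of this):

// rewind and read the raw template bytes back out of the protobuf stream
stream.Position = 0;
byte[] templateBytes = ProtoBuf.Serializer.Deserialize<byte[]>(stream);
// templateBytes can then be handed back to the Neurotec API to rebuild the template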
I'm using Visual Studio 2010, and I'm getting "File is used by another process" almost randomly when trying to read a file. I'm reading about 10 XML files into memory with the same procedure.
The code that breaks is
private static TextReader CreateTextReader(IsolatedStorageFile isolatedStorageFolder, string path)
{
    TextReader textReader = null;
    if (isolatedStorageFolder == null)
        textReader = new StreamReader(path);
    else
        textReader = new StreamReader(new IsolatedStorageFileStream(path, FileMode.Open, isolatedStorageFolder));
    return textReader;
}
The code breaks 10 percent of the time on
textReader = new StreamReader(path);
I personally think it's some kind of garbage collection problem. Does anyone have tips on how to debug this kind of problem?
Be sure to call .Dispose() or .Close() on all stream reader operations that could lock the file. That might be your problem, as that code works for me as a flat program.
You need to dispose of the TextReader. Use a using statement, like:
using (TextReader r = CreateTextReader(...))
{
}
Otherwise the file will remain open when you close your application.
EDIT
You say in your comments to the question that you're actually already using using; could it be that the file you're trying to read is actually opened by another application? Sometimes antivirus solutions lock files while scanning them, and similar things happen. Does it work again after a short while, or do you have to reboot?
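If another process (a virus scanner, for instance) holds the file only briefly, a short retry loop is a common workaround. A minimal sketch, with an arbitrary retry count and delay:

using System.IO;
using System.Threading;

private static TextReader CreateTextReaderWithRetry(string path, int maxAttempts)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return new StreamReader(path);
        }
        catch (IOException)
        {
            if (attempt >= maxAttempts)
                throw; // still locked after several tries; give up
            Thread.Sleep(100); // wait briefly, then try again
        }
    }
}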
We are having an issue with one server and its use of the StreamWriter class. Has anyone experienced something similar to the issue below? If so, what was the solution?
using (StreamWriter logWriter = File.CreateText(logFileName))
{
    for (int i = 0; i < 500; i++)
        logWriter.WriteLine("Process completed successfully.");
}
When writing out the file the following output is generated:
Process completed successfully.
... (497 more lines)
Process completed successfully.
Process completed s
I tried adding logWriter.Flush() before closing, without any help. The more lines of text I write out, the more data loss occurs.
I had a very similar issue myself. I found that if I enabled AutoFlush before doing any writes to the stream, it started working as expected:
logWriter.AutoFlush = true;
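A minimal sketch of where that line goes, using the code from the question:

using (StreamWriter logWriter = File.CreateText(logFileName))
{
    logWriter.AutoFlush = true; // flush the buffer after every Write/WriteLine
    for (int i = 0; i < 500; i++)
        logWriter.WriteLine("Process completed successfully.");
}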
Sometimes, even if you call Flush(), it just won't do the magic, because Flush() writes most of the data in the stream but can leave the last block of its buffer behind.

try
{
    // ... write method
    // I don't recommend using 'using' for unmanaged resources
}
finally
{
    stream.Flush();
    stream.Close();
    stream.Dispose();
}
Cannot reproduce this.
Under normal conditions, this should not and will not fail.
Is this the actual code that fails? The text "Process completed" suggests it's an extract.
Any threading involved?
Network drive or local?
etc.
This certainly appears to be a "flushing" problem to me, even though you say you added a call to Flush(). The problem may be that your StreamWriter is just a wrapper for an underlying FileStream object.
I don't typically use the File.CreateText method to create a stream for writing to a file; I usually create my own FileStream and then wrap it with a StreamWriter if desired. Regardless, I've run into situations where I've needed to call Flush on both the StreamWriter and the FileStream, so I imagine that is your problem.
Try adding the following code:
logWriter.Flush();
if (logWriter.BaseStream != null)
    logWriter.BaseStream.Flush();
In my case, this is what I found with the output file:
Case 1: without Flush() and without Close():
Character Length = 23,371,776
Case 2: with Flush() and without Close():
logWriter.Flush();
Character Length = 23,371,201
Case 3: properly closed:
logWriter.Close();
Character Length = 23,375,887 (required)
So, in order to get the proper result, you always need to close the writer instance.
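The simplest way to guarantee Close() always runs, even if an exception is thrown, is a using block; Dispose() flushes the buffer and closes the file:

using (StreamWriter logWriter = File.CreateText(logFileName))
{
    logWriter.WriteLine("Process completed successfully.");
} // Dispose() is called here, which flushes and closes the writer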
I faced the same problem. The following worked for me:

using (StreamWriter tw = new StreamWriter(@"D:\Users\asbalach\Desktop\NaturalOrder\NatOrd.txt"))
{
    tw.Write(abc.ToString()); // + Environment.NewLine
}
Using Framework 4.6.1 and under heavy stress, it still has this problem. I'm not sure why it does this, though I found a way to solve it very differently (which strengthens my feeling that it's indeed a .NET bug).
In my case, I tried to write huge jagged arrays to disk (video caching).
Since the jagged array is quite large, it had to do a lot of repeated writes to store a large set of video frames, and although they were uncompressed and each cache file got exactly 1000 frames, the logged cache files all had different sizes.
I had the problem when I used this:

// note: generateLogfileName is just a function to create a filename
using (FileStream fs = new FileStream(generateLogfileName(), FileMode.OpenOrCreate))
{
    using (StreamWriter sw = new StreamWriter(fs))
    {
        // do your stuff, but it will be unreliable
    }
}
However, when I provided it an Encoding type, all logged files got an equal size, and the problem was gone:

using (FileStream fs = new FileStream(generateLogfileName(), FileMode.OpenOrCreate))
{
    using (StreamWriter sw = new StreamWriter(fs, Encoding.Unicode))
    {
        // all data written correctly, no data lost
    }
}
Note: read the file back with the same encoding type!
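A minimal sketch of the matching read side (generateLogfileName is the same hypothetical helper as above):

using (FileStream fs = new FileStream(generateLogfileName(), FileMode.Open))
{
    // must match the encoding used when writing, or the data will be misread
    using (StreamReader sr = new StreamReader(fs, Encoding.Unicode))
    {
        string contents = sr.ReadToEnd();
    }
}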
This did the trick for me:
streamWriter.Flush();
I'm trying to add file compression to an application. The application has been around for a while, so it needs to be able to read uncompressed documents written by previous versions. I expected that DeflateStream would be able to process an uncompressed file, but for GZipStream I get the "The magic number in GZip header is not correct" error, and for DeflateStream I get "Found invalid data while decoding". I guess it does not find the header that marks the file as the type it is.
If it's not possible to simply process an uncompressed file, then second best would be a way to determine whether a file is compressed, and to choose the method of reading accordingly. I've found this link: http://blog.somecreativity.com/2008/04/08/how-to-check-if-a-file-is-compressed-in-c/, but this is very implementation-specific and doesn't feel like the right approach. It can also produce false positives (I'm sure this would be rare, but it does indicate that it's not the right approach).
A third option I've considered is to attempt to use DeflateStream and fall back to normal stream IO if an exception occurs. This also feels messy, and it causes VS to break at the exception (unless I untick that exception, which I don't really want to have to do).
Of course, I may simply be going about it the wrong way. This is the code I've tried in .NET 3.5:
Stream reader = new FileStream(fileName, FileMode.Open, readOnly ? FileAccess.Read : FileAccess.ReadWrite, readOnly ? FileShare.ReadWrite : FileShare.Read);
using (DeflateStream decompressedStream = new DeflateStream(reader, CompressionMode.Decompress))
{
    workspace = (Workspace)new XmlSerializer(typeof(Workspace)).Deserialize(decompressedStream);
    if (readOnly)
    {
        reader.Close();
        workspace.FilePath = fileName;
    }
    else
        workspace.SetOpen(reader, fileName);
}
Any ideas?
Thanks!
Luke.
Doesn't your file format have a header? If not, now is the time to add one (you're changing the file format by supporting compression, anyway). Pick a good magic value, make sure the header is extensible (add a version field, or use specific magic values for specific versions), and you're ready to go.
Upon loading, check for the magic value. If not present, use your current legacy loading routines. If present, the header will tell you whether the contents are compressed or not.
Update
Compressing the stream means the file is no longer an XML document, and thus there's not much reason to expect the file can't contain more than your data stream. You really do want a header identifying your file :)
The below is example (pseudo-)code; I don't know if .NET has a "substream", so SubRangeStream is likely something you'll have to code yourself (DeflateStream probably adds its own header, so a substream might not be necessary; it could turn out useful further down the road, though).
Int64 oldPosition = reader.Position;
byte[] magic = new byte[4];
reader.Read(magic, 0, magic.Length);
if (IsRightMagicValue(magic))
{
    Header header = ReadHeader(reader);
    Stream furtherReader = new SubRangeStream(reader, reader.Position, header.ContentLength);
    if (header.IsCompressed)
    {
        furtherReader = new DeflateStream(furtherReader, CompressionMode.Decompress);
    }
    XmlSerializer xml = new XmlSerializer(typeof(Workspace));
    workspace = (Workspace)xml.Deserialize(furtherReader);
}
else
{
    reader.Position = oldPosition;
    LegacyLoad(reader);
}
In real life, I would do things a bit differently: proper error handling and cleanup, for instance. Also, I wouldn't put the new loader code directly in the IsRightMagicValue block; rather, I'd dispatch the work either based on the magic value (one magic value per file version), or I would keep a "common header" portion with fields common to all versions. Either way, I'd use a factory method to return an IWorkspaceReader depending on the file version.
Can't you just create a wrapper class/function for reading the file and catch the exception? Something like:

try
{
    // Try to return a decompressed stream
}
catch (InvalidDataException)
{
    // Assume it is already decompressed and return it as it is
}