Multiple (Sequential) XmlReader instances on same stream - c#

My question should be relatively straightforward:
Is it (in any way) possible to create multiple XmlReader objects for the same stream in sequence, without the first reader advancing the stream to the end once it is disposed?
Sample code (Note that the second call to ReadElement will fail because the first reader advanced the stream to the end, for whatever reason):
private static void DoTest()
{
using (var stream = new MemoryStream())
{
WriteElement("Test", stream);
Console.WriteLine("Stream Length after first write: {0}", stream.Length);
WriteElement("Test2", stream);
Console.WriteLine("Stream Length after second write: {0}", stream.Length);
stream.Position = 0;
Console.WriteLine(ReadElement(stream));
Console.WriteLine("Position is now: {0}/{1}", stream.Position, stream.Length);
Console.WriteLine(ReadElement(stream)); // Note that this will fail due to the stream position now being at the end.
}
}
private static string ReadElement(Stream source)
{
string result;
using (var reader = XmlReader.Create(source, new XmlReaderSettings
{
ConformanceLevel = ConformanceLevel.Fragment,
CloseInput = false
}))
{
reader.Read();
result = reader.Name;
reader.Read();
}
return result;
}
private static void WriteElement(string name, Stream target)
{
using (var writer = XmlWriter.Create(target, new XmlWriterSettings
{
ConformanceLevel = ConformanceLevel.Fragment,
WriteEndDocumentOnClose = false,
OmitXmlDeclaration = true,
}))
{
writer.WriteStartElement(name);
writer.WriteEndElement();
}
}
If this is not possible with 'pure .NET', are there any alternative (lightweight) XML parser libraries out there that support this behaviour?

1. Messy way
If you're able to save the length of each fragment, you could do the following:
int len = fragmentLength; // len is the length of the interval; fragmentLength is assumed to be known
int i = 0;
byte[] buffer = new byte[len];
while (len-- > 0)
    buffer[i++] = (byte)stream.ReadByte(); // copies the interval
This copies an interval of bytes out of the MemoryStream and saves it into a separate buffer. You can then juggle the buffer, wrapping it in a new MemoryStream or decoding it to a string (see the XmlReader.Create overloads).
The problem is that the first reader's read operations are greedy and consume well past the element they return, which is why each interval has to be isolated like this.
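A minimal sketch of that approach (a hypothetical helper, not from the original answer; it assumes the caller tracks each fragment's byte length, e.g. by recording stream.Length after each WriteElement call in the question's DoTest):
private static string ReadElementFromInterval(Stream source, int fragmentLength)
{
    // Copy only the current fragment's bytes into an isolated buffer...
    byte[] buffer = new byte[fragmentLength];
    int read = source.Read(buffer, 0, fragmentLength);
    // ...and let the XmlReader work on its own MemoryStream, so its greedy
    // buffering cannot advance the shared source stream past the fragment.
    using (var ms = new MemoryStream(buffer, 0, read))
    using (var reader = XmlReader.Create(ms, new XmlReaderSettings
    {
        ConformanceLevel = ConformanceLevel.Fragment
    }))
    {
        reader.Read();
        return reader.Name;
    }
}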
2. Original way
Write your own stream to suit your needs!
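For example (a sketch of that idea, not from the original answer): a minimal read-only wrapper that exposes only the next N bytes of the underlying stream, so the XmlReader's greedy buffering cannot run past the current fragment. BoundedReadStream is a hypothetical name, and the length of each fragment still has to be known up front.
class BoundedReadStream : Stream
{
    private readonly Stream _inner;
    private long _remaining;

    public BoundedReadStream(Stream inner, long length)
    {
        _inner = inner;
        _remaining = length;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        if (_remaining <= 0)
            return 0; // report end-of-stream at the fragment boundary
        int read = _inner.Read(buffer, offset, (int)Math.Min(count, _remaining));
        _remaining -= read;
        return read;
    }

    public override bool CanRead { get { return true; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return false; } }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
}
Each ReadElement call would then wrap the shared stream in a new BoundedReadStream for the next fragment's length and pass that to XmlReader.Create with CloseInput = false.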

Related

Prevent JsonTextReader from consuming the stream during deserialization

I'm using Json.Net to consume some seekable streams.
// reset the input stream, in case it was previously read
inputStream.Position = 0;
using (var textReader = new StreamReader(inputStream))
{
using (var reader = new JsonTextReader(textReader))
{
deserialized = serializer.Deserialize(reader, expectedType);
}
}
However, this method 'consumes' the stream, meaning the first valid JSON token contained in it can no longer be read again afterwards.
That is very annoying, and arguably meaningless: a stream's Position exists precisely so consumption can be emulated, and 'reading' generally implies 'not modifying'.
Of course, I can dump the stream into a MemoryStream to protect my precious source stream, but that's a huge overhead, especially when doing trial-and-error on a deserialization.
If there is a way to just 'read' and not 'read-and-consume', I'd appreciate the help; I could not find documentation about that (and I hope this post will help others google the solution ^^).
JsonTextReader is a forward-only reader, meaning it cannot be set back to a position earlier in the JSON to re-read a portion of it, even if the underlying stream supports seeking. However, the reader does not actually "consume" the stream, as you said. If you set the CloseInput property on the reader to false to prevent it from closing the underlying reader and stream when it is disposed, you can position the stream back to the beginning and open a new reader on the same stream to re-read the JSON. Here is a short program to demonstrate reading the same stream twice:
class Program
{
static void Main(string[] args)
{
string json = @"{ ""name"": ""foo"", ""size"": ""10"" }";
MemoryStream inputStream = new MemoryStream(Encoding.UTF8.GetBytes(json));
JsonSerializer serializer = new JsonSerializer();
using (var textReader = new StreamReader(inputStream))
{
for (int i = 0; i < 2; i++)
{
inputStream.Position = 0;
using (var reader = new JsonTextReader(textReader))
{
reader.CloseInput = false;
Widget w = serializer.Deserialize<Widget>(reader);
Console.WriteLine("Name: " + w.Name);
Console.WriteLine("Size: " + w.Size);
Console.WriteLine();
}
}
}
}
}
class Widget
{
public string Name { get; set; }
public int Size { get; set; }
}
Output:
Name: foo
Size: 10
Name: foo
Size: 10
Fiddle: https://dotnetfiddle.net/fftZV7
A stream may be consumed as it is read. The solution could be to copy it to a memory or file stream first, as below:
MemoryStream ms = new MemoryStream();
inputStream.CopyTo(ms);
ms.Position = 0;
using (var textReader = new StreamReader(ms))
(...)
Please let me know if it works.

Get length of StreamReader

How can I get the length of a StreamReader, given that I know nothing more will be written to it?
I thought that maybe I could pass all the data to a MemoryStream, which has a Length property, but I got stuck on how to append a byte[] to a MemoryStream.
private void Cmd(string command, string parameter, object stream)
{
StreamWriter writer = (StreamWriter)stream;
StreamWriter input;
StreamReader output;
Process process = new Process();
try
{
process.StartInfo.UseShellExecute = false;
process.StartInfo.CreateNoWindow = true;
process.StartInfo.RedirectStandardOutput = true;
process.StartInfo.RedirectStandardInput = true;
process.StartInfo.FileName = "cmd";
process.Start();
input = process.StandardInput;
output = process.StandardOutput;
input.WriteLine(command + " " + parameter);
input.WriteLine("exit");
using (MemoryStream ms = new MemoryStream())
{
int length = 1024;
char[] charbuffer = new char[length];
byte[] bytebuffer = new byte[length];
while (!output.EndOfStream)
{
output.Read(charbuffer, 0, charbuffer.Length);
for (int i = 0; i < length; i++)
{
bytebuffer[i] = Convert.ToByte(charbuffer[i]);
}
//append bytebuffer to memory stream here
}
long size = ms.Length;
writer.WriteLine(size);
writer.Flush(); //send size of the following message
//send message
}
}
catch (Exception e)
{
InsertLog(2, "Could not run CMD command");
writer.WriteLine("Not valid. Ex: " + e.Message);
}
writer.Flush();
}
So, how can I dynamically append a byte[] to a MemoryStream?
Is there any better way to get the length of the output, so I can tell the other end the size of the message that will be sent?
Does this work for you?
StreamReader sr = new StreamReader(FilePath);
long x = sr.BaseStream.Length;
Stream has a Length property, but it will throw an exception if the stream doesn't support seek operations. A network stream, for example, will throw an exception if you try to read .Length. In your code, you're reading the standard output stream of a process. Consider if that were user input - how would you know the length until you had completely finished reading?
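A small sketch of that check (TryGetLength is a hypothetical helper name):
static long? TryGetLength(Stream s)
{
    // Length is only meaningful when the stream supports seeking (FileStream,
    // MemoryStream, ...); process/network streams report CanSeek == false.
    return s.CanSeek ? s.Length : (long?)null;
}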
If you're reading a file, you can get the length with stream.Length.
In .NET 4+, you can copy one stream to another with Stream.CopyTo, eg:
inputStream.CopyTo(outputStream);
You can also load the bytes into memory with:
byte[] data;
using (var ms = new MemoryStream())
{
stream.CopyTo(ms);
data = ms.ToArray();
}
MemoryStream has a MemoryStream.Write method, which writes a byte array to the stream:
ms.Write(bytebuffer,0,bytebuffer.Length);
So, you can call it to add another portion of bytes to the output stream. However, remember that to be able to read from the MemoryStream after all write operations are done, you'll have to use the MemoryStream.Seek method to set the position back to the beginning of the stream:
//all write operations
ms.Seek(0, SeekOrigin.Begin);
//now ready to be read
This is the easiest approach. Of course, you could also move around the stream dynamically while reading/writing, but that is more error-prone.
To get the length of the stream, i.e. ms.Length, you don't have to seek back to the beginning of the stream.
It's also worth noting that if bytebuffer is used only to hold bytes before copying them into the MemoryStream, you could use the MemoryStream.WriteByte method instead. That would let you do away with bytebuffer entirely.
for (int i = 0; i < length; i++)
{
ms.WriteByte(Convert.ToByte(charbuffer[i]));
}
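Putting the pieces together, here is a minimal sketch of the question's loop (assuming output is the process's StandardOutput and writer is the StreamWriter from the question). It appends only the characters that Read actually returned, rather than the whole fixed-size buffer, and it uses Encoding.UTF8 rather than the per-character Convert.ToByte - an assumption about the desired encoding:
using (MemoryStream ms = new MemoryStream())
{
    char[] charbuffer = new char[1024];
    int charsRead;
    while ((charsRead = output.Read(charbuffer, 0, charbuffer.Length)) > 0)
    {
        // Encode only the chars that were read and append them to the MemoryStream.
        byte[] chunk = Encoding.UTF8.GetBytes(charbuffer, 0, charsRead);
        ms.Write(chunk, 0, chunk.Length);
    }
    long size = ms.Length; // total size of the buffered output
    writer.WriteLine(size);
    writer.Flush(); // send size of the following message
}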

Cannot access a closed stream ASP.net v2.0

We have a very odd problem. The code below works fine on all developers' machines and on our two test servers, both from source and as a built version. However, when it runs on a virtual machine with Windows Server 2003 and ASP.NET v2.0, it throws the error
Cannot access a closed stream.
public String convertResultToXML(CResultObject[] state)
{
MemoryStream stream = null;
TextWriter writer = null;
try
{
stream = new MemoryStream(); // read xml in memory
writer = new StreamWriter(stream, Encoding.Unicode);
// get serialise object
XmlSerializer serializer = new XmlSerializer(typeof(CResultObject[]));
serializer.Serialize(writer, state); // read object
int count = (int)stream.Length; // saves object in memory stream
byte[] arr = new byte[count];
stream.Seek(0, SeekOrigin.Begin);
// copy stream contents in byte array
stream.Read(arr, 0, count);
UnicodeEncoding utf = new UnicodeEncoding(); // convert byte array to string
return utf.GetString(arr).Trim();
}
catch
{
return string.Empty;
}
finally
{
if (stream != null) stream.Close();
if (writer != null) writer.Close();
}
}
Any idea why would it do this?
For your Serialize call, use a using block so the stream does not remain open.
Something like this:
using (StreamWriter streamWriter = new StreamWriter(fullFilePath))
{
xmlSerializer.Serialize(streamWriter, toSerialize);
}
I originally thought it was because you're closing the stream and then closing the writer - you should just close the writer, because it will close the stream as well: http://msdn.microsoft.com/en-us/library/system.io.streamwriter.close(v=vs.80).aspx.
However, despite MSDN's claim, I can't see any evidence when reflecting over the code that it actually does this.
Looking at your code, though, I can't see why you're using the writer in the first place. I'll bet that if you change your code as follows (I've taken out the bad exception swallowing too), it'll be alright:
public String convertResultToXML(CResultObject[] state)
{
using (var stream = new MemoryStream())
{
// get serialise object
XmlSerializer serializer = new XmlSerializer(typeof(CResultObject[]));
serializer.Serialize(stream, state); // read object
int count = (int)stream.Length; // saves object in memory stream
byte[] arr = new byte[count];
stream.Seek(0, SeekOrigin.Begin);
// copy stream contents in byte array
stream.Read(arr, 0, count);
UnicodeEncoding utf = new UnicodeEncoding(); // convert byte array to string
return utf.GetString(arr).Trim();
}
}
Now you're working with the stream directly, and it'll only get closed once - most definitely getting rid of this strange error, which I'd wager has something to do with a service pack or something similar.

StreamReader ReadToEnd() returns empty string on first attempt

I know this question has been asked before on Stackoverflow, but could not find an explanation.
When I try to read a string from a compressed byte array, I get an empty string on the first attempt; on the second I succeed and get the string.
Code example:
public static string Decompress(byte[] gzBuffer)
{
if (gzBuffer == null)
return null;
using (var ms = new MemoryStream(gzBuffer))
{
using (var decompress = new GZipStream(ms, CompressionMode.Decompress))
{
using (var sr = new StreamReader(decompress, Encoding.UTF8))
{
string ret = sr.ReadToEnd();
// this is the extra check that is needed !?
if (ret == "")
ret = sr.ReadToEnd();
return ret;
}
}
}
}
All suggestions are appreciated.
- Victor Cassel
I found the bug. It was, as Michael suggested, in the compression routine: I had forgotten to call Close() on the GZipStream.
public static byte[] Compress(string text)
{
if (string.IsNullOrEmpty(text))
return null;
byte[] raw = Encoding.UTF8.GetBytes(text);
using (var ms = new MemoryStream())
{
using (var compress = new GZipStream (ms, CompressionMode.Compress))
{
compress.Write(raw, 0, raw.Length);
compress.Close();
return ms.ToArray();
}
}
}
What happened was that the data seemed to get saved in a bad state that required two calls to ReadToEnd() in the decompression routine later on to extract the same data. Very odd!
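For comparison, an equivalent sketch (not from the original post) that relies on the using block instead of an explicit Close(): disposing the GZipStream flushes the gzip footer into the MemoryStream, and MemoryStream.ToArray() remains valid even after the stream has been closed.
public static byte[] Compress(string text)
{
    if (string.IsNullOrEmpty(text))
        return null;
    byte[] raw = Encoding.UTF8.GetBytes(text);
    using (var ms = new MemoryStream())
    {
        using (var compress = new GZipStream(ms, CompressionMode.Compress))
        {
            compress.Write(raw, 0, raw.Length);
        } // disposing the GZipStream here flushes the gzip footer into ms
        return ms.ToArray(); // ToArray() still works after ms has been closed
    }
}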
Try adding ms.Position = 0; before string ret = sr.ReadToEnd();
Where is gzBuffer coming from? Did you also write the code that is producing the compressed data?
Perhaps the buffer data you have is invalid or somehow incomplete, or perhaps it consists of multiple deflate streams concatenated together.
I hope this helps.
For ByteArray:
static byte[] CompressToByte(string data)
{
MemoryStream outstream = new MemoryStream();
GZipStream compressionStream =
new GZipStream(outstream, CompressionMode.Compress, true);
StreamWriter writer = new StreamWriter(compressionStream);
writer.Write(data);
writer.Close();
return StreamToByte(outstream);
}
static string Decompress(byte[] data)
{
MemoryStream instream = new MemoryStream(data);
GZipStream compressionStream =
new GZipStream(instream, CompressionMode.Decompress);
StreamReader reader = new StreamReader(compressionStream);
string outtext = reader.ReadToEnd();
reader.Close();
return outtext;
}
public static byte[] StreamToByte(Stream stream)
{
stream.Position = 0;
byte[] buffer = new byte[128];
using (MemoryStream ms = new MemoryStream())
{
while (true)
{
int read = stream.Read(buffer, 0, buffer.Length);
if (!(read > 0))
return ms.ToArray();
ms.Write(buffer, 0, read);
}
}
}
You can replace if(!(read > 0)) with if(read <= 0).
For Stream:
static Stream CompressToStream(string data)
{
MemoryStream outstream = new MemoryStream();
GZipStream compressionStream =
new GZipStream(outstream, CompressionMode.Compress, true);
StreamWriter writer = new StreamWriter(compressionStream);
writer.Write(data);
writer.Close();
return outstream;
}
static string Decompress(Stream data)
{
data.Position = 0;
GZipStream compressionStream =
new GZipStream(data, CompressionMode.Decompress);
StreamReader reader = new StreamReader(compressionStream);
string outtext = reader.ReadToEnd();
reader.Close();
return outtext;
}
The MSDN Page on the function mentions the following:
If the current method throws an OutOfMemoryException, the reader's position in the underlying Stream object is advanced by the number of characters the method was able to read, but the characters already read into the internal ReadLine buffer are discarded. If you manipulate the position of the underlying stream after reading data into the buffer, the position of the underlying stream might not match the position of the internal buffer. To reset the internal buffer, call the DiscardBufferedData method; however, this method slows performance and should be called only when absolutely necessary.
Perhaps try calling DiscardBufferedData() before your ReadToEnd() and see what it does (I know you aren't getting the exception, but it's all I can think of...)?
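A minimal sketch of that suggestion, applied to the question's Decompress (sr being the StreamReader opened over the GZipStream):
// Drop whatever the StreamReader has buffered internally before re-reading.
// (This mainly matters if the underlying stream's position was changed after
// the reader had already buffered data.)
sr.DiscardBufferedData();
string ret = sr.ReadToEnd();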

How can I read an Http response stream twice in C#?

I am trying to read an Http response stream twice via the following:
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
stream = response.GetResponseStream();
RssReader reader = new RssReader(stream);
do
{
element = reader.Read();
if (element is RssChannel)
{
feed.Channels.Add((RssChannel)element);
}
} while (element != null);
StreamReader sr = new StreamReader(stream);
feed._FeedRawData = sr.ReadToEnd();
However when the StreamReader code executes there is no data returned because the stream has now reached the end. I tried to reset the stream via stream.Position = 0 but this throws an exception (I think because the stream can't have its position changed manually).
Basically, I would like to parse the stream for XML and have access to the raw data (in string format).
Any ideas?
Copy it into a new MemoryStream first. Then you can re-read the MemoryStream as many times as you like:
Stream responseStream = CopyAndClose(resp.GetResponseStream());
// Do something with the stream
responseStream.Position = 0;
// Do something with the stream again
private static Stream CopyAndClose(Stream inputStream)
{
const int readSize = 256;
byte[] buffer = new byte[readSize];
MemoryStream ms = new MemoryStream();
int count = inputStream.Read(buffer, 0, readSize);
while (count > 0)
{
ms.Write(buffer, 0, count);
count = inputStream.Read(buffer, 0, readSize);
}
ms.Position = 0;
inputStream.Close();
return ms;
}
Copying the stream to a MemoryStream as suggested by Iain is the right approach. But since
.NET Framework 4 (released 2010) we have Stream.CopyTo. Example from the docs:
// Create the streams.
MemoryStream destination = new MemoryStream();
using (FileStream source = File.Open(@"c:\temp\data.dat",
FileMode.Open))
{
Console.WriteLine("Source length: {0}", source.Length.ToString());
// Copy source to destination.
source.CopyTo(destination);
}
Console.WriteLine("Destination length: {0}", destination.Length.ToString());
Afterwards you can read destination as many times as you like:
// re-set to beginning and convert stream to string
destination.Position = 0;
StreamReader streamReader = new StreamReader(destination);
string text = streamReader.ReadToEnd();
// re-set to beginning and read again
destination.Position = 0;
RssReader rssReader = new RssReader(destination);
(I have seen Endy's comment but since it is an appropriate, current answer, it should have its own answer entry.)
Have you tried resetting the stream position?
If that does not work, you can copy the stream to a MemoryStream, where you can reset the position (e.g. to 0) as often as you want.
