I need the file's stream in two different places. In my code the IFormFile is already passed as a parameter to the two methods. I considered either modifying the methods to call OpenReadStream once at the beginning and pass the stream as a parameter, or calling OpenReadStream separately in each method.
I inspected the disassembled code, and OpenReadStream does this:
return new ReferenceReadStream(_baseStream, _baseStreamOffset, Length);
and the ReferenceReadStream class does this in the constructor:
public ReferenceReadStream(Stream inner, long offset, long length)
{
    if (inner == null)
    {
        throw new ArgumentNullException("inner");
    }

    _inner = inner;
    _innerOffset = offset;
    _length = length;
    _inner.Position = offset;
}
My understanding is that the base stream is the same, so it shouldn't matter whether OpenReadStream is called multiple times.
What worries me is whether I'll run into problems once I start using the Seek method.
Does anyone know the correct usage of OpenReadStream in this scenario?
Apparently it's not safe to call OpenReadStream multiple times.
When the Read method is called, it calls this method:
private void VerifyPosition()
{
    if (_inner.Position == _innerOffset + _position)
    {
        return;
    }

    throw new InvalidOperationException("The inner stream position has changed unexpectedly.");
}
I was able to trigger this exception with the following code:
var s = file.OpenReadStream();
s.Seek(10, SeekOrigin.Begin);
var b = new byte[2];
var c = s.Read(b);
var s2 = file.OpenReadStream(); // resets the shared base stream's position to the start
c = s.Read(b); // throws InvalidOperationException: the inner stream position has changed
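One way to sidestep the shared-position problem, assuming the upload fits comfortably in memory (a sketch; `ReadAllAsync` is an illustrative name, not an IFormFile API), is to read the file once and hand each consumer its own independent stream:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public static class FormFileBuffering
{
    // Read the upload exactly once into a byte[].
    public static async Task<byte[]> ReadAllAsync(IFormFile file)
    {
        using (var source = file.OpenReadStream()) // single OpenReadStream call
        using (var buffer = new MemoryStream())
        {
            await source.CopyToAsync(buffer);
            return buffer.ToArray();
        }
    }
}

// Each caller then gets a private, freely seekable stream over the same bytes:
// var bytes = await FormFileBuffering.ReadAllAsync(file);
// using var s1 = new MemoryStream(bytes, writable: false);
// using var s2 = new MemoryStream(bytes, writable: false);
```

Because the two MemoryStreams share only the immutable byte array, Seek on one can no longer invalidate the other's position.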
I would like to use the new Span<T> to send unmanaged data straight to the socket using SocketAsyncEventArgs, but it seems that SocketAsyncEventArgs only accepts Memory<byte>, which cannot be initialized from a byte* or IntPtr.
So, is there a way to use Span with SocketAsyncEventArgs?
Thank you for your help.
As already mentioned in the comments, Span is the wrong tool here; have you looked at using Memory instead? As you stated, the SetBuffer method does accept that as a parameter. Is there a reason you can't use it?
See also this article for a good explanation of how stack vs. heap allocation applies to Span and Memory. It includes this example, using a readonly Memory<Foo> buffer:
public struct Enumerable : IEnumerable<Foo>
{
    readonly Stream stream;

    public Enumerable(Stream stream)
    {
        this.stream = stream;
    }

    public IEnumerator<Foo> GetEnumerator() => new Enumerator(this);

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();

    public struct Enumerator : IEnumerator<Foo>
    {
        static readonly int ItemSize = Unsafe.SizeOf<Foo>();

        readonly Stream stream;
        readonly Memory<Foo> buffer;
        bool lastBuffer;
        long loadedItems;
        int currentItem;

        public Enumerator(Enumerable enumerable)
        {
            stream = enumerable.stream;
            buffer = new Foo[100]; // alloc items buffer
            lastBuffer = false;
            loadedItems = 0;
            currentItem = -1;
        }

        public Foo Current => buffer.Span[currentItem];
        object IEnumerator.Current => Current;

        public bool MoveNext()
        {
            if (++currentItem != loadedItems) // increment current position and check if we reached the end of the buffer
                return true;

            if (lastBuffer) // check if it was the last buffer
                return false;

            // get the next buffer
            var rawBuffer = MemoryMarshal.Cast<Foo, byte>(buffer.Span);
            var bytesRead = stream.Read(rawBuffer);
            lastBuffer = bytesRead < rawBuffer.Length;
            currentItem = 0;
            loadedItems = bytesRead / ItemSize;
            return loadedItems != 0;
        }

        public void Reset() => throw new NotImplementedException();

        public void Dispose()
        {
            // nothing to do
        }
    }
}
You should copy the data to managed memory first, using the Marshal or Buffer class.
If you don't, think about what happens to the data being sent when the C code deletes the returned pointer.
There's a complete example (and implementation of the class) on the MSDN page for SocketAsyncEventArgs (just follow the link). It shows the proper use of the class and may give you the guidance you're looking for.
Also, as Shingo said, it should all be in managed code, not in pointers.
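A minimal sketch of the copy-to-managed approach described above (the buffer length and the simulated unmanaged allocation are illustrative; `SetBuffer(Memory<byte>)` is the overload the question refers to):

```csharp
using System;
using System.Net.Sockets;
using System.Runtime.InteropServices;

class UnmanagedSendSketch
{
    static void Main()
    {
        // Simulate unmanaged data; in practice this pointer would come from the C side.
        const int length = 16;
        IntPtr unmanaged = Marshal.AllocHGlobal(length);
        try
        {
            // Copy the unmanaged bytes into a managed array...
            var managed = new byte[length];
            Marshal.Copy(unmanaged, managed, 0, length);

            // ...which can be wrapped as Memory<byte> for SocketAsyncEventArgs.
            var args = new SocketAsyncEventArgs();
            args.SetBuffer(managed.AsMemory());

            // The managed copy stays valid even if the C side later frees the pointer.
        }
        finally
        {
            Marshal.FreeHGlobal(unmanaged);
        }
    }
}
```

The copy costs one pass over the data, but it removes any lifetime dependency on the unmanaged allocation while the send is in flight.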
I'm busy writing a program that writes JSON-serialized objects to a JSON file.
Now I have a problem with it. I want to write each object inside a list separately to the JSON file. Of course, in a real scenario the program should add all objects to a list and then write the list to the JSON file at once, but for now I want to do it one by one. The way I'm trying to do that produces I/O errors, and it's not clear to me why.
readonly BaseStore<TestObject> _store2 = new BaseStore<TestObject>("test", Settings.DatabaseBasePath);

private async void testToolStripMenuItem_Click(object sender, EventArgs e)
{
    await Write();
}

private async Task Write()
{
    for (int i = 0; i < 1000; i++)
    {
        await _store2.Save(new TestObject
        {
            Property = "value"
        });
    }
}
Code of the Save method:
public async Task Save(T obj)
{
    var collection = _database.Load<T>(_collection);
    // do some collection changes
    await _database.Save(collection, _collection);
}
Above method will call the following method:
public async Task Save(object obj, string collectionName)
{
    string fileName = string.Format("{0}{1}", collectionName, ".json");
    var json = JsonConvert.SerializeObject(obj, Formatting.None);
    byte[] encodedText = Encoding.Unicode.GetBytes(json);

    using (FileStream sourceStream = new FileStream(Path.Combine(_basePath, fileName),
        FileMode.Append, FileAccess.Write, FileShare.None,
        bufferSize: 4096, useAsync: true))
    {
        await sourceStream.WriteAsync(encodedText, 0, encodedText.Length);
    }
}
The code above throws the following errors:
Exception has been thrown by the target of an invocation.
The requested operation cannot be performed on a file with a user-mapped section open.
I think the problem may be that the Save method inside the for loop tries to access the JSON file while it is still open from the previous Save call, but I don't know how to solve that.
Can somebody point me in the right direction, please?
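One way to rule out overlapping file access is to serialize the writes with a SemaphoreSlim (a sketch of the Save method above; the `_fileLock` field is an assumption, and this does not explain the "user-mapped section" message itself, which is often caused by an external process such as a virus scanner or file-sync tool holding the file):

```csharp
// Illustrative: one lock guarding the file so no two Save calls overlap.
private static readonly SemaphoreSlim _fileLock = new SemaphoreSlim(1, 1);

public async Task Save(object obj, string collectionName)
{
    string fileName = collectionName + ".json";
    var json = JsonConvert.SerializeObject(obj, Formatting.None);
    byte[] encodedText = Encoding.Unicode.GetBytes(json);

    await _fileLock.WaitAsync();
    try
    {
        using (var stream = new FileStream(Path.Combine(_basePath, fileName),
            FileMode.Append, FileAccess.Write, FileShare.None,
            bufferSize: 4096, useAsync: true))
        {
            await stream.WriteAsync(encodedText, 0, encodedText.Length);
        }
    }
    finally
    {
        _fileLock.Release();
    }
}
```

With the semaphore in place, the FileStream from one iteration is guaranteed to be closed before the next iteration opens the file.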
I have a webapi that returns some protobuf-net serialized items after being transformed from a database column. The number of items can be rather large, so I'd like to avoid materializing them all in memory and instead stream them out. However, I never get a single one out before my Stream throws an already-disposed exception. If I materialize the list it does work, but I'd like to avoid that.
Here is what I'm doing:
private async Task<IEnumerable<MyObj>> GetRawData(int id)
{
    using (var sqlConn = GetOpenConnection())
    using (var sqlReader = (SqlDataReader)await sqlConn.ExecuteReaderAsync(
        _workingQuery, new { id = id }, 60).ConfigureAwait(false))
    {
        await sqlReader.ReadAsync().ConfigureAwait(false);
        using (var stream = sqlReader.GetStream(0))
        {
            return _serializerModel.DeserializeItems<MyObj>(stream, PrefixStyle.Base128, 1);
        }
    }
}
private async Task<IHttpActionResult> TransformData(int id)
{
    var data = await GetRawData(id).ConfigureAwait(false);
    return Ok(new { Data = data.Where(m => ...).Select(m => ...) });
}

[HttpGet, Route("api/myroute/{id}")]
public async Task<IHttpActionResult> GetObjs(int id)
{
    return await TransformData(id);
}
However, I end up getting an error about reading a disposed stream. How can I avoid this error?
The long and short of it is that you are returning a non-enumerated sequence, and closing (disposing) everything it needs before it ever gets to the caller. You're going to need to either:
enumerate the sequence eagerly (buffer in memory) - for example, adding .ToList()
restructure the code so that nothing is disposed until the end of the iteration
For the latter, I would be tempted to use an iterator block for the central part (after all the await). Something like:
bool dispose = true;
SqlConnection conn = null;
// ...others
try
{
    conn = ...
    ...await...
    var result = InnerMethod(conn, stream, ...);
    // ^^^ everything now known / obtained from DB
    dispose = false; // handing lifetime management to the inner method
    return result;
}
finally
{
    if (dispose)
    {
        using (conn) {}
        // ...
    }
}

IEnumerable<...> Inner(...) // not async
{
    using (conn)
    using (stream)
    {
        foreach (var item in ...)
        {
            yield return item;
        }
    }
}
This line:
return _serializerModel.DeserializeItems<MyObj>(stream, PrefixStyle.Base128, 1)
returns just a lazy iterator. As you seem to be aware, the result is not yet materialized. However, right after the return, the using block disposes your Stream.
The solution would be to have GetRawData return the Stream. Then, inside TransformData, inside a using block for the Stream, you deserialize, filter and return the results.
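A hedged sketch of that restructuring (the tuple shape and names are assumptions, and the `...` placeholders stand for the question's elided lambdas): GetRawData hands back the open resources instead of disposing them, and TransformData takes ownership via using blocks.

```csharp
private async Task<(SqlConnection Conn, SqlDataReader Reader, Stream Stream)> GetRawData(int id)
{
    var sqlConn = GetOpenConnection();
    var sqlReader = (SqlDataReader)await sqlConn.ExecuteReaderAsync(
        _workingQuery, new { id = id }, 60).ConfigureAwait(false);
    await sqlReader.ReadAsync().ConfigureAwait(false);
    // Caller now owns all three resources.
    return (sqlConn, sqlReader, sqlReader.GetStream(0));
}

private async Task<IHttpActionResult> TransformData(int id)
{
    var (conn, reader, stream) = await GetRawData(id).ConfigureAwait(false);
    using (conn)
    using (reader)
    using (stream)
    {
        // Enumeration must finish before the using blocks dispose the stream,
        // hence the ToList() here.
        var data = _serializerModel
            .DeserializeItems<MyObj>(stream, PrefixStyle.Base128, 1)
            .Where(m => ...)
            .Select(m => ...)
            .ToList();
        return Ok(new { Data = data });
    }
}
```

Note that this variant still materializes the filtered result; for true streaming to the client you would combine it with the HttpResponseMessage approach described below.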
The second potential issue is that your current approach processes the whole result and then sends it to the client all at once.
In order to have Web API send a streamed result, you need to use HttpResponseMessage and pass a Stream to its Content property.
Here's an example: http://weblogs.asp.net/andresv/asynchronous-streaming-in-asp-net-webapi
When you want to stream a Stream to another Stream (I know, it sounds funny), you need to keep the ReaderStream open until the WriterStream is done writing. This is a high level representation:
using (ReaderStream)
{
    using (WriterStream)
    {
        // write object from ReaderStream to WriterStream
    }
}
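A concrete version of that sketch, using Stream.CopyToAsync (file names are illustrative, and this assumes an async context):

```csharp
using System.IO;
using System.Threading.Tasks;

class CopySketch
{
    static async Task Main()
    {
        using (var reader = File.OpenRead("input.bin"))   // ReaderStream
        using (var writer = File.Create("output.bin"))    // WriterStream
        {
            // The reader stays open until the copy completes;
            // both streams are disposed only after the await finishes.
            await reader.CopyToAsync(writer);
        }
    }
}
```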
I'm struggling here. Normally I'd read a book but there aren't any yet. I've found countless examples of various things to do with reading streams using RX but I'm finding it very hard to get my head around.
I know I can use Observable.FromAsyncPattern to create a wrapper of the Stream's BeginRead/EndRead or BeginReadLine/EndReadLine methods.
But this only reads once -- when the first observer subscribes.
I want an Observable which will keep reading and pumping OnNext until the stream errors or ends.
In addition to this, I'd also like to know how I can then share that observable with multiple subscribers so they all get the items.
You can use Repeat in order to keep reading lines until the end of the stream and Publish or Replay in order to control sharing across multiple readers.
An example of a simple, full Rx solution for reading lines from any stream until the end would be:
public static IObservable<string> ReadLines(Stream stream)
{
    return Observable.Using(
        () => new StreamReader(stream),
        reader => Observable.FromAsync(reader.ReadLineAsync)
                            .Repeat()
                            .TakeWhile(line => line != null));
}
This solution also takes advantage of the fact that ReadLineAsync returns null when the end of the stream is reached.
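For the sharing part of the question, a hypothetical usage sketch with Publish/RefCount (the file name is illustrative; note that a subscriber arriving after reading has started only sees subsequent lines, so use Replay instead if latecomers must catch up):

```csharp
using System;
using System.IO;
using System.Reactive.Linq;

// One shared pass over the stream; RefCount connects on the first
// subscription and disconnects when the last subscriber is gone.
var lines = ReadLines(File.OpenRead("log.txt"))
    .Publish()
    .RefCount();

lines.Subscribe(line => Console.WriteLine($"A: {line}"));
lines.Subscribe(line => Console.WriteLine($"B: {line}"));
```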
Adding to Lee's answer, using Rxx:

using (new FileStream(@"filename.txt", FileMode.Open)
    .ReadToEndObservable()
    .Subscribe(x => Console.WriteLine(x.Length)))
{
    Console.ReadKey();
}

The lengths of the read buffers will be output.
Heh - gonna reuse one of my other answers here (well, part of it, anyways):
Ref: Reading from NetworkStream corrupts the buffer
In that, I've got an extension method like this:
public static class Ext
{
    public static IObservable<byte[]> ReadObservable(this Stream stream, int bufferSize)
    {
        // to hold read data
        var buffer = new byte[bufferSize];

        // Step 1: async signature => observable factory
        var asyncRead = Observable.FromAsyncPattern<byte[], int, int, int>(
            stream.BeginRead,
            stream.EndRead);

        return Observable.While(
            // while there is data to be read
            () => stream.CanRead,
            // iteratively invoke the observable factory, which will
            // "recreate" it such that it will start from the current
            // stream position - hence "0" for offset
            Observable.Defer(() => asyncRead(buffer, 0, bufferSize))
                .Select(readBytes => buffer.Take(readBytes).ToArray()));
    }
}
You can probably use this as written in a form like so:
// Note: ToEnumerable works here because your filestream
// has a finite length - don't do this with infinite streams!
var blobboData = stream
    .ReadObservable(bufferSize)
    // take while we're still reading data
    .TakeWhile(returnBuffer => returnBuffer.Length > 0)
    .ToEnumerable()
    // mash them all together
    .SelectMany(buffer => buffer)
    .ToArray();
The solution is to use Observable.Create.
Here is an example which can be adapted for reading any kind of stream:
public static IConnectableObservable<Command> GetReadObservable(this CommandReader reader)
{
    return Observable.Create<Command>(async (subject, token) =>
    {
        try
        {
            while (true)
            {
                if (token.IsCancellationRequested)
                {
                    subject.OnCompleted();
                    return;
                }

                // this part here can be changed to something like this:
                // int received = await Task.Factory.FromAsync<int>(innerSocket.BeginReceive(data, offset, size, SocketFlags.None, null, null), innerSocket.EndReceive);
                Command cmd = await reader.ReadCommandAsync();
                subject.OnNext(cmd);
            }
        }
        catch (Exception ex)
        {
            try
            {
                subject.OnError(ex);
            }
            catch (Exception)
            {
                Debug.WriteLine("An exception was thrown while trying to call OnError on the observable subject -- this means you're not catching exceptions everywhere");
                throw;
            }
        }
    }).Publish();
}
Don't forget to call Connect() on the returned IConnectableObservable
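For instance, usage might look like this (CommandReader and Command are the question's own types; the handlers are illustrative):

```csharp
using System;

// Subscribe first; nothing is read from the underlying reader yet.
var commands = reader.GetReadObservable();

var subscription = commands.Subscribe(
    cmd => Console.WriteLine("received command"),
    ex  => Console.WriteLine($"stream failed: {ex.Message}"),
    ()  => Console.WriteLine("stream completed"));

// Reading only starts once Connect is called;
// dispose the returned handle to stop the underlying loop.
var connection = commands.Connect();
```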
As mentioned in a couple other posts (see References below) I am attempting to create response filters in order to modify content being produced by another web application.
I have the basic string transformation logic working and encapsulated into Filters that derive from a common FilterBase. However, the logic must operate on the full content, not chunks of content. Therefore I need to cache the chunks as they are written and perform the filter when all the writes are completed.
As shown below I created a new ResponseFilter derived from MemoryStream. On Write, the content is cached to another MemoryStream. On Flush, the full content, now in the MemoryStream is converted to a string and the Filter logic kicks in. The modified content is then written back out to the originating stream.
However, on every second request (basically when a new filter is instantiated over the previous one), the previous filter's Flush method is being executed. At this point the application crashes in the _outputStream.Write() method because _cachedStream is empty.
The order of events is as follows:
First Request
Write method is called
Flush method is called
Close method is called
Close method is called
At this point the app returns and the proper content is displayed.
Second Request
Flush method is called
Application crashes on _outputStream.Write. ArgumentOutOfRangeException (offset).
Continue through crash (w/ in Visual Studio)
Close method is called
There are a couple of questions I have:
Why is Close called twice?
Why is Flush called after Closed was called?
To Jay's point below, if Flush may be called before the stream is completely read, where should the filter logic reside? In Close? In Flush, but guarded by an "if closing" check?
What is the proper implementation for a Response Filter that works on the entire content at once?
Note: I experience the exact same behavior (minus Close events) if I do not override the Close method.
public class ResponseFilter : MemoryStream
{
    private readonly Stream _outputStream;
    private MemoryStream _cachedStream = new MemoryStream(1024);
    private readonly FilterBase _filter;

    public ResponseFilter(Stream outputStream, FilterBase filter)
    {
        _outputStream = outputStream;
        _filter = filter;
    }

    // Flush is called on the second, fourth, and so on, page request (second request) with empty content.
    public override void Flush()
    {
        Encoding encoding = HttpContext.Current.Response.ContentEncoding;
        string cachedContent = encoding.GetString(_cachedStream.ToArray());

        // Filter the cached content
        cachedContent = _filter.Filter(cachedContent);

        byte[] buffer = encoding.GetBytes(cachedContent);
        _cachedStream = new MemoryStream();
        _cachedStream.Write(buffer, 0, buffer.Length);

        // Write new content to stream
        _outputStream.Write(_cachedStream.ToArray(), 0, (int)_cachedStream.Length);
        _cachedStream.SetLength(0);
        _outputStream.Flush();
    }

    // Write is called on the first, third, and so on, page request.
    public override void Write(byte[] buffer, int offset, int count)
    {
        // Cache the content.
        _cachedStream.Write(buffer, 0, count);
    }

    public override void Close()
    {
        _outputStream.Close();
    }
}

// Example usage in a custom HTTP Module on the BeginRequest event.
FilterBase transformFilter = new MapServiceJsonResponseFilter();
response.Filter = new ResponseFilter(response.Filter, transformFilter);
References:
How do I deploy a managed HTTP Module Site Wide?
Creating multiple (15+) HTTP Response filters, Inheritance vs. Composition w/ Injection
Thanks to a tip from Jay regarding Flush being called for incremental writes, I was able to make the filter work as desired by performing the filtering logic only when the filter is closing and has not yet closed. This ensures that the filter flushes only once, as the stream is closing. I accomplished this with a few simple fields, _isClosing and _isClosed, as shown in the final code below.
public class ResponseFilter : MemoryStream
{
    private readonly Stream _outputStream;
    private MemoryStream _cachedStream = new MemoryStream(1024);
    private readonly FilterBase _filter;
    private bool _isClosing;
    private bool _isClosed;

    public ResponseFilter(Stream outputStream, FilterBase filter)
    {
        _outputStream = outputStream;
        _filter = filter;
    }

    public override void Flush()
    {
        if (_isClosing && !_isClosed)
        {
            Encoding encoding = HttpContext.Current.Response.ContentEncoding;
            string cachedContent = encoding.GetString(_cachedStream.ToArray());

            // Filter the cached content
            cachedContent = _filter.Filter(cachedContent);

            byte[] buffer = encoding.GetBytes(cachedContent);
            _cachedStream = new MemoryStream();
            _cachedStream.Write(buffer, 0, buffer.Length);

            // Write new content to stream
            _outputStream.Write(_cachedStream.ToArray(), 0, (int)_cachedStream.Length);
            _cachedStream.SetLength(0);
            _outputStream.Flush();
        }
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        _cachedStream.Write(buffer, 0, count);
    }

    public override void Close()
    {
        _isClosing = true;
        Flush();
        _isClosed = true;
        _isClosing = false;
        _outputStream.Close();
    }
}
I have not yet found answers to my other questions above, so I will not mark this answer as accepted at this time.
Flush is not being called explicitly. Perhaps it is called when the code realizes a new object is needed, or perhaps as a result of a finalizer.
I think one can call Flush after any incremental write, so I'm not sure a call to Flush is an adequate indication of a complete message anyway.