I'm familiar with using .Read() to detect the EOF
using (IDataReader reader = SqlHelper.ExecuteReader(_connectionString, "dbo.GetOrders"))
{
AssertOrder(reader);
while (reader.Read())
{
yield return FillRecord<Order>(reader, StringComparer.OrdinalIgnoreCase);
}
reader.Close();
}
Due to some weird situation I got myself into, FillRecord actually advances the reader. So now the .Read() in the while loop causes this function to skip rows, because we are advancing twice.
I wish there was an IDataReader.EOF, but there isn't. Any thoughts?
I think I probably made my opinion clear in the comments... but, since you asked for "Any Thoughts"... here ya go:
Obviously a two second hack job, but you'll get the idea:
(You'd surely want to implement IDisposable, since IDataReader does, etc. Probably just make the class implement IDataReader and act as a facade. Or get real cool and implement a transparent proxy which hijacks the Read method.)
public class DataReaderWithEOF
{
public bool EOF { get; private set; }
private IDataReader reader;
public DataReaderWithEOF(IDataReader reader)
{
this.reader = reader;
}
public bool Read()
{
bool result = reader.Read();
this.EOF = !result;
return result;
}
}
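Rough usage might look like this (untested, and assuming the wrapper is fleshed out to implement IDataReader so that AssertOrder and FillRecord from the question can accept it):
using (IDataReader inner = SqlHelper.ExecuteReader(_connectionString, "dbo.GetOrders"))
{
    var reader = new DataReaderWithEOF(inner);
    AssertOrder(reader);
    reader.Read(); // position on the first row
    while (!reader.EOF)
    {
        // FillRecord advances the reader itself; the wrapper keeps EOF up to date,
        // so there is no second Read() here to skip rows.
        yield return FillRecord<Order>(reader, StringComparer.OrdinalIgnoreCase);
    }
}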
using (IDataReader reader = SqlHelper.ExecuteReader(_connectionString, "dbo.GetOrders"))
{
if (!((SqlDataReader)reader).HasRows) // HasRows is on SqlDataReader/DbDataReader, not IDataReader
{
Response.Write("EOF"); // empty result set
}
while (reader.Read()) // Read() returns false once there are no more rows
{
yield return FillRecord<Order>(reader, StringComparer.OrdinalIgnoreCase);
}
reader.Close();
}
Every time you call .Read() it moves to the next record.
So if you then do if (reader.Read()), you would already be advancing a record.
An alternative to Steve's solution is to call reader.Close() in FillRecord when Read() returns false. Then you can check reader.IsClosed in your main loop.
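Roughly (untested; FillRecord is the hypothetical helper from the question, now made responsible for closing the reader when it runs out of rows):
// Inside FillRecord<T>, after populating the record from the current row:
T record = default(T);  // populate from the current row as before
if (!reader.Read())     // FillRecord already advances the reader (the "weird situation")
{
    reader.Close();     // signals end-of-data to the caller
}
return record;

...and the main loop then becomes:
using (IDataReader reader = SqlHelper.ExecuteReader(_connectionString, "dbo.GetOrders"))
{
    AssertOrder(reader);
    if (reader.Read()) // position on the first row once
    {
        while (!reader.IsClosed) // FillRecord closes the reader when the data runs out
        {
            yield return FillRecord<Order>(reader, StringComparer.OrdinalIgnoreCase);
        }
    }
}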
I have some code (C#/ADO.NET) where I get 2 or more readers (IDataReader instances) - each of which could be a reader on multiple datasets meant to be enumerated through the NextResult API.
My task is to combine these into a single reader and return it to my caller, so that they can enumerate all the different results through this single reader - calling NextResult as necessary.
(Note that each of these datasets has different kinds of data.)
Seems like a valid use case. There should be some way to do this?
Just for the fun of it I tried creating a class, below. It would definitely be a hassle to test.
Before explaining why it would be a pain, I'll offer you an excuse for saving yourself the trouble. If your class is creating IDataReaders, there's likely not a good reason to pass them to the caller. You can just read the data from them and pass that to the caller. Is there any good reason why the callers need readers and not just the actual data? Opening a datareader and closing it is something you want to accomplish without getting too many hands in the process, so if you can open it, get what you need, and then close it, that's ideal.
When we advance from one result set to the next within an IDataReader it's usually in the context of making a single call, so it's easier to follow what the result sets are. We just called procedure XYZ, it returns two result sets, so we have to check for both result sets. I wouldn't want to deal with an IDataReader that had lots and lots of result sets, especially when it's a bunch of smaller ones artificially combined. You'd have to keep track of a lot of result sets and switch between different methods for reading them since they contain different columns.
And there's also the issue of those open connections. We usually close a connection when we're done with a reader. But now it's less clear when that connection would get closed. How do you know which connection belongs to which reader? What if you close the connection for a reader while a different reader is still using it?
So here's a rough idea of what it might look like. I obviously didn't test this. You would have to keep track of which one is current and handle advancing to the next reader if NextResult is called and there isn't a next result within the current reader. And you'd have to close the readers and make sure they all get disposed. This could be tested, but just the testing would be a headache, and that's often a good warning not to do something.
public class AggregateDataReader : IDataReader
{
private readonly Queue<IDataReader> _readers;
private IDataReader _current;
public AggregateDataReader(IEnumerable<IDataReader> readers)
{
_readers = new Queue<IDataReader>(readers);
if (_readers.Count > 0) _current = _readers.Dequeue(); // start positioned on the first reader
}
private bool AdvanceToNextReader()
{
_current?.Dispose();
var moreReaders = _readers.Any();
if (moreReaders) _current = _readers.Dequeue();
return moreReaders;
}
public bool NextResult()
{
if (_current == null) return false;
if (_current.NextResult()) return true;
return AdvanceToNextReader();
}
public bool Read()
{
return _current.Read();
}
public void Dispose()
{
_current?.Dispose();
while (_readers.Any()) _readers.Dequeue().Dispose();
}
public string GetName(int i)
{
return _current.GetName(i);
}
... lots of these...
public byte GetByte(int i)
{
return _current.GetByte(i);
}
public long GetBytes(int i, long fieldOffset, byte[] buffer, int bufferoffset, int length)
{
return _current.GetBytes(i, fieldOffset, buffer, bufferoffset, length);
}
... etc...
public void Close()
{
_current?.Close();
while (_readers.Any()) _readers.Dequeue().Close();
}
... etc...
}
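Hypothetical usage, just to show the shape (the SqlHelper call and procedure names are borrowed from the earlier question; the aggregate takes ownership of the readers and disposes them itself):
IDataReader[] readers =
{
    SqlHelper.ExecuteReader(_connectionString, "dbo.GetOrders"),
    SqlHelper.ExecuteReader(_connectionString, "dbo.GetCustomers")
};

using (IDataReader combined = new AggregateDataReader(readers))
{
    do
    {
        while (combined.Read())
        {
            // read the columns appropriate to whichever result set we are on
        }
    } while (combined.NextResult()); // rolls over to the next reader automatically
}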
I have a webapi that returns some protobuf-net serialized items after being transformed from a database column. The number of items can be rather large, so I'd like to avoid materializing them all in memory and rather stream them out. However, I never get a single one out before my Stream throws an already-disposed exception. If I materialize the list it does work, but I'd like to avoid that.
Here is what I'm doing:
private async Task<IEnumerable<MyObj>> GetRawData(int id)
{
using(var sqlConn = GetOpenConnection())
{
using (var sqlReader =(await sqlConn.ExecuteReaderAsync(
_workingQuery,new {id = id}, 60)
.ConfigureAwait(false) as SqlDataReader))
{
await sqlReader.ReadAsync().ConfigureAwait(false);
using (var stream = sqlReader.GetStream(0))
{
return _serializerModel.DeserializeItems<MyObj>(stream, PrefixStyle.Base128, 1);
}
}
}
}
private async Task<IHttpActionResult> TransformData(int id)
{
var data = await GetRawData(id).ConfigureAwait(false);
return Ok(new { Data = data.Where(m => ...).Select(m => ...) });
}
[HttpGet, Route("api/myroute/{id}")]
public async Task<IHttpActionResult> GetObjs(int id)
{
return await TransformData(id);
}
However, I end up getting an error about reading a disposed stream. How can I avoid this error?
The long and short of it is that you are returning a non-enumerated sequence, and closing (disposing) everything it needs before it ever gets to the caller. You're going to need to either:
enumerate the sequence eagerly (buffer in memory) - for example, adding .ToList()
restructure the code so that nothing is disposed until the end of the iteration
For the latter, I would be tempted to use an iterator block for the central part (after all the awaits). Something like:
bool dispose = true;
SqlConnection conn = null;
//...others
try {
conn = ...
...await...
var result = InnerMethod(conn, stream, ...);
// ^^^ everything now known / obtained from DB
dispose = false; // handing lifetime management to the inner method
return result;
} finally {
if(dispose) {
using(conn) {}
// ...and the other disposables
}
}
IEnumerable<...> Inner(...) { // not async
using (conn)
using (stream) {
foreach(var item in ...) {
yield return item;
}
}
}
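Applied to the method from the question, that might look something like this (a sketch, untested; it keeps the question's GetRawData, GetOpenConnection, _workingQuery and _serializerModel names, and mirrors the original ExecuteReaderAsync call as-is):
private async Task<IEnumerable<MyObj>> GetRawData(int id)
{
    bool dispose = true;
    SqlConnection sqlConn = null;
    SqlDataReader sqlReader = null;
    Stream stream = null;
    try
    {
        sqlConn = GetOpenConnection();
        sqlReader = await sqlConn.ExecuteReaderAsync(
            _workingQuery, new { id = id }, 60).ConfigureAwait(false) as SqlDataReader;
        await sqlReader.ReadAsync().ConfigureAwait(false);
        stream = sqlReader.GetStream(0);

        var result = InnerGetRawData(sqlConn, sqlReader, stream);
        dispose = false; // lifetime management handed to the iterator
        return result;
    }
    finally
    {
        if (dispose)
        {
            using (sqlConn) using (sqlReader) using (stream) { }
        }
    }
}

private IEnumerable<MyObj> InnerGetRawData(SqlConnection sqlConn, SqlDataReader sqlReader, Stream stream)
{
    // Not async: a plain iterator block, so nothing is disposed until enumeration finishes.
    using (sqlConn)
    using (sqlReader)
    using (stream)
    {
        foreach (var item in _serializerModel.DeserializeItems<MyObj>(stream, PrefixStyle.Base128, 1))
        {
            yield return item;
        }
    }
}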
This line
return _serializerModel.DeserializeItems<MyObj>(stream, PrefixStyle.Base128, 1);
returns just a lazy iterator; as you seem to be aware, the result is not yet materialized. However, right after the return, the using block disposes your Stream.
The solution would be to have GetRawData return the Stream. Then, inside TransformData, inside a using block for the Stream, you deserialize, filter and return the results.
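A rough, untested sketch of that restructuring, keeping the question's names. Note the ToList(): the filtering and projection have to be enumerated while the Stream is still open, otherwise the disposed-stream error just moves to response-serialization time. (The SqlConnection/SqlDataReader lifetimes still need handling of their own, glossed over here.)
// GetRawData now returns the column stream; the caller owns its lifetime.
private async Task<Stream> GetRawData(int id)
{
    var sqlConn = GetOpenConnection();
    var sqlReader = await sqlConn.ExecuteReaderAsync(
        _workingQuery, new { id = id }, 60).ConfigureAwait(false) as SqlDataReader;
    await sqlReader.ReadAsync().ConfigureAwait(false);
    return sqlReader.GetStream(0);
}

private async Task<IHttpActionResult> TransformData(int id)
{
    using (var stream = await GetRawData(id).ConfigureAwait(false))
    {
        var data = _serializerModel
            .DeserializeItems<MyObj>(stream, PrefixStyle.Base128, 1)
            .Where(m => true /* your filter */)
            .Select(m => m /* your projection */)
            .ToList(); // materialize while the stream is open
        return Ok(new { Data = data });
    }
}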
The second potential issue is that your current approach processes the whole result and then sends it to the client all at once.
In order to have Web API send a streamed result you need to return an HttpResponseMessage and set its Content to an HttpContent wrapping your Stream (for example StreamContent or PushStreamContent).
Here's an example: http://weblogs.asp.net/andresv/asynchronous-streaming-in-asp-net-webapi
When you want to stream a Stream to another Stream (I know, it sounds funny), you need to keep the ReaderStream open until the WriterStream is done writing. This is a high level representation:
using (ReaderStream)
{
using (WriterStream)
{
// write object from ReaderStream to WriterStream
}
}
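As a hedged illustration of that shape in Web API (PushStreamContent and Request.CreateResponse live in System.Net.Http; this assumes the Stream-returning GetRawData sketched above, the route is made up, and it just copies the raw bytes rather than applying the Where/Select transform):
[HttpGet, Route("api/myroute/{id}/raw")]
public HttpResponseMessage GetObjsStreamed(int id)
{
    var response = Request.CreateResponse(HttpStatusCode.OK);
    response.Content = new PushStreamContent(async (writerStream, content, context) =>
    {
        // Keep the ReaderStream (the SQL column stream) open until the WriterStream
        // (the HTTP response body) has been written, then dispose both.
        using (var readerStream = await GetRawData(id).ConfigureAwait(false))
        using (writerStream)
        {
            await readerStream.CopyToAsync(writerStream).ConfigureAwait(false);
        }
    }, "application/octet-stream");
    return response;
}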
So I am currently disposing many objects when I close my form, even though it probably disposes them automatically. But still, I prefer to follow the "rules" of disposing; hopefully it will stick and help prevent mistakes.
So here is how I currently dispose, which works.
if (connect == true)
{
Waloop.Dispose();
connect = false;
UninitializeCall();
DropCall();
}
if (KeySend.Checked || KeyReceive.Checked)
{
m_mouseListener.Dispose();
k_listener.Dispose();
}
if (NAudio.Wave.AsioOut.isSupported())
{
Aut.Dispose();
}
if (Wasout != null)
{
Wasout.Dispose();
}
if (SendStream != null)
{
SendStream.Dispose();
}
So basically, the first block only runs if a bool is true; if it isn't, those objects can be ignored, as I don't think they have been created yet.
The others are just ways for me to dispose of an object if it's there. But it's not a very good way; I would like to have it in one big function, meaning:
Dispose it if it's NOT already disposed, or something like that.
I know that many of them have an "IsDisposed" bool, so it should be possible if I can check every object and dispose it if that's false.
How about a helper method which takes objects which implement IDisposable as params?
void DisposeAll(params IDisposable[] disposables)
{
foreach (IDisposable id in disposables)
{
if (id != null) id.Dispose();
}
}
When you want to dispose multiple objects, call the method with whatever objects you want to dispose.
this.DisposeAll(Wasout, SendStream, m_mouseListener, k_listener);
If you want to avoid calling them explicitly, then store them all in a List<>:
private List<IDisposable> _disposables;
void DisposeAll() {
foreach(IDisposable id in _disposables) {
if(id != null) id.Dispose();
}
}
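Registering is then just a matter of adding each object to the list as it is created (field names here are the ones from the question):
_disposables = new List<IDisposable> { m_mouseListener, k_listener };
// ...later, once they have actually been created:
_disposables.Add(Waloop);
_disposables.Add(Wasout);
_disposables.Add(SendStream);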
You can implement a Disposer class, that will do the work for you, along these lines:
public class Disposer
{
private List<IDisposable> disposables = new List<IDisposable>();
public void Register(IDisposable item)
{
disposables.Add(item);
}
public void Unregister(IDisposable item)
{
disposables.Remove(item);
}
public void DisposeAll()
{
foreach (IDisposable item in disposables)
{
item.Dispose();
}
disposables.Clear();
}
}
Then, instead of the ugly code in your main class, you can have something like:
public class Main
{
//member field
private Disposer m_disposer;
//constructor
public Main()
{
....
m_disposer = new Disposer();
//register any available disposables
m_disposer.Register(m_mouseListener);
m_disposer.Register(k_listener);
}
...
public bool Connect()
{
...
if (isConnected)
{
Waloop = ...
Wasout = ...
// register additional disposables as they are created
m_disposer.Register(Waloop);
m_disposer.Register(Wasout);
}
}
...
public void Close()
{
//disposal
m_disposer.DisposeAll();
}
}
I suggest you use the using statement. So with your code, it would look something like this:
using (WaloopClass Waloop = new WaloopClass())
{
// Some other code here I know nothing about.
connect = false; // Testing the current value of connect is redundant.
UninitializeCall();
DropCall();
}
Note there is now no need to explicitly Dispose Waloop, as it happens automatically at the end of the using statement.
This will help to structure your code, and makes the scope of the Waloop much clearer.
I am going to suppose that the only problem you’re trying to solve is how to write the following in a nicer way:
if (Wasout != null)
Wasout.Dispose();
if (SendStream != null)
SendStream.Dispose();
This is a lot of logic already implemented by the using keyword. using checks that the variable is not null before calling Dispose() for you. Also, using guarantees that thrown exceptions (perhaps by Wasout.Dispose()) will not interrupt the attempts to call Dispose() on the other listed objects (such as SendStream). It seems that using was intended to allow management of resources based on scoping rules: using using as an alternative way to write o.Dispose() may be considered an abuse of the language. However, the benefits of using’s behavior and the concision it enables are quite valuable. Thus, I recommend replacing such statically-written batches of “if (o != null) o.Dispose()” with an “empty” using:
using (
IDisposable _Wasout = Wasout,
_SendStream = SendStream)
{}
Note that the order in which Dispose() is called is the reverse of how the objects are listed in the using block. This follows the pattern of cleaning up objects in reverse of their instantiation order. (The idea is that an object instantiated later may refer to an object instantiated earlier. E.g., if you are using a ladder to climb a house, you might want to keep the ladder around so that you can climb back down before putting it away; the ladder gets instantiated first and cleaned up last. Uhm, analogies… but, basically, the above is shorthand for nested using blocks. And unlike objects can be smashed into the same using block by writing the using in terms of IDisposable.)
dotnetfiddle of using managing exceptions.
I'm trying to implement the 'AsyncPattern' within a WCF data service. I define the two methods BeginGetExperiments(...) and EndGetExperiments(...) in the interface and implement them as can be seen below.
public class GmdProfileService : IGmdProfileService
{
IAsyncResult IGmdProfileService.BeginGetExperiments(AsyncCallback callback, object state)
{
//IAsyncResult res = Experiment.GetExperimentsAsync(callback, state, Properties.Settings.Default.gmdConnectionString);
//return res;
System.Data.SqlClient.SqlConnectionStringBuilder csb = new System.Data.SqlClient.SqlConnectionStringBuilder(Properties.Settings.Default.gmdConnectionString);
csb.AsynchronousProcessing = true;
System.Data.SqlClient.SqlConnection conn = new System.Data.SqlClient.SqlConnection(csb.ConnectionString);
conn.Open();
System.Data.SqlClient.SqlCommand cmd = conn.CreateCommand();
cmd.CommandText = "SELECT id, name, comment, date, doi FROM tf.TagList WITH(NOLOCK) WHERE proprietary=0;";
cmd.CommandType = System.Data.CommandType.Text;
return new SqlCommandAsyncResult(cmd, callback, state);
}
public List<Experiment> EndGetExperiments(IAsyncResult result)
{
List<Experiment> res = new List<Experiment>();
SqlCommandAsyncResult myresult = result as SqlCommandAsyncResult;
using (System.Data.SqlClient.SqlDataReader reader = myresult.cmd.EndExecuteReader(myresult.originalState as IAsyncResult))
{
try
{
while (reader.Read())
{
res.Add(new Experiment(reader));
}
}
catch (Exception ex)
{
throw; // rethrow without losing the original stack trace
}
finally
{
// Closing the reader also closes the connection, because this reader was created using the CommandBehavior.CloseConnection value.
if (reader != null)
{
reader.Close();
}
}
}
return res;
}
BeginGetExperiments returns a class SqlCommandAsyncResult implementing the IAsyncResult interface in addition to holding a reference to my SqlCommand for later access.
public class SqlCommandAsyncResult : IAsyncResult
{
public SqlCommand cmd { get; private set; }
public IAsyncResult originalState { get; private set; }
public SqlCommandAsyncResult(SqlCommand cmd, AsyncCallback callback, object state)
{
this.cmd = cmd;
this.originalState = cmd.BeginExecuteReader(callback,
state,
System.Data.CommandBehavior.SequentialAccess | // doesn't load whole column into memory
System.Data.CommandBehavior.CloseConnection // close connection immediately after read
);
}
public object AsyncState
{
get { return originalState.AsyncState; }
}
public WaitHandle AsyncWaitHandle
{
get { return originalState.AsyncWaitHandle; }
}
public bool CompletedSynchronously
{
get { return false; }
}
public bool IsCompleted
{
get { return AsyncWaitHandle.WaitOne(0); }
}
}
The difficulties I face are in the EndGetExperiments method. I don't know how to access the SqlCommand to call EndExecuteReader(...).
Normally I would use the state object in BeginExecuteReader to pass on the command. But if I do so, I get the exception:
"IAsyncResult's State must be the state argument passed to your Begin call."
So I try to use the IAsyncResult to pass the SqlCommand forward to EndGetExperiments. Here, the point I don’t understand is that in EndGetExperiments the variable result is either of type IAsyncResult or of type SqlCommandAsyncResult depending on the value of CompletedSynchronously in the SqlCommandAsyncResult class.
Setting CompletedSynchronously = false makes my code fail because I can't access the SqlCommand, whereas setting CompletedSynchronously = true makes the code work like a charm, but I have an odd feeling that something might go wrong under the hood.
I appreciate any help, guidance and example code how to make this code working and even more important in helping me to understand the problem at hand.
Thank you very much.
Jan
Today WCF Data Services doesn't support asynchronous processing on the server. Please vote/add a feature request for it here: http://data.uservoice.com/forums/72027-wcf-data-services-feature-suggestions/topics/72603-wcf-data-services-feature-suggestions
If you are using C#/.NET 4.0, this might be easier to achieve using Task<T>. Task<T> executes a Func<T>, where T is the return value. You can define a continuation task which fetches Parent.Result. I know this answer might not be what you are looking for, but please consider it as an alternative. The code will be cleaner, easier to maintain, and easier to debug (using the Parallel Tasks and Parallel Stacks windows, etc.).
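A rough sketch of that idea (hedged: this sidesteps the WCF Begin/End pattern entirely and just runs the query on a Task from System.Threading.Tasks, reusing the Experiment class, connection string and query from the question):
// Run the query on a background task...
Task<List<Experiment>> queryTask = Task.Factory.StartNew(() =>
{
    var results = new List<Experiment>();
    using (var conn = new SqlConnection(Properties.Settings.Default.gmdConnectionString))
    using (var cmd = conn.CreateCommand())
    {
        cmd.CommandText = "SELECT id, name, comment, date, doi FROM tf.TagList WITH(NOLOCK) WHERE proprietary=0;";
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                results.Add(new Experiment(reader));
            }
        }
    }
    return results;
});

// ...and chain a continuation that consumes the parent task's Result.
queryTask.ContinueWith(parent =>
{
    List<Experiment> experiments = parent.Result;
    // hand the experiments to whoever needs them
});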
In .NET 4.0 and LINQ to SQL, I am trying to use a partial class to "trigger" changes from within an update method (an existing DBML method). For simplicity, imagine a table Things with columns Id and Value.
The auto-generated DBML contains a method OnValueChanged; I'll extend that and, as an exercise, try to change one value in one other row:
public partial class Things
{
partial void OnValueChanged()
{
MyAppDataContext dc = new MyAppDataContext();
var q = from o in dc.GetTable<Things>() where o.Id == 13 select o;
foreach (Things o in q)
{
o.Value = "1"; // try to change some other row
}
try
{
dc.SubmitChanges();
}
catch (Exception)
{
// SQL timeout occurs
}
}
}
A SQL timeout error occurs. I suspect that the DataContext is getting confused trying to SubmitChanges() before the current OnValueChanged() method has disposed of its DataContext, but I am not sure.
Mostly I cannot find an example of a good pattern for triggering updates against a DB within an existing DBML generated method.
Can anyone provide any pointers on why this doesn't work and how I can accomplish something that works OK? (I realize I can trigger in the SQL database, but do not want to take that route.)
Thanks!
First, you aren't disposing of the DataContext at all in your function. Wrap it in a using statement.
The actual issue is coming from the fact that you're recursively calling yourself by setting the Value property on the retrieved values. You're just running into the timeout before you can hit a StackOverflowException.
It's unclear what you're trying to do here; if you're trying to allow different behavior between when you set the Value property here versus anywhere else, then it's simple enough to use a flag. In your partial class, declare an internal instance boolean auto property called UpdatingValue, and set it to true on each item inside your foreach block before you update the value, then set it to false after you update the value. Then, as the first line in OnValueChanged, check to ensure that UpdatingValue is false.
Like this:
public partial class Things
{
internal bool UpdatingValue { get; set; }
partial void OnValueChanged()
{
if (UpdatingValue) return;
using(MyAppDataContext dc = new MyAppDataContext())
{
var q = from o in dc.GetTable<Things>() where o.Id == 13 select o;
foreach (Things o in q)
{
o.UpdatingValue = true;
o.Value = "1"; // try to change some other row
o.UpdatingValue = false;
}
dc.SubmitChanges();
}
}
}
I would suspect that you may have introduced infinite recursion by changing the values of Things in the OnValueChanged event handler of Things.
To me, a cleaner solution to your problem is not to generate your class in a DBML file, but instead use LinqToSql attributes on a class you create. By doing so you can do your "trigger" modifications in the setters of your properties/columns.
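A hedged sketch of what that could look like (attribute mapping lives in System.Data.Linq.Mapping; the table and column names are the made-up ones from the question, and the class name Thing is just for illustration):
[Table(Name = "Things")]
public class Thing
{
    private string _value;

    [Column(IsPrimaryKey = true)]
    public int Id { get; set; }

    [Column(Name = "Value", Storage = "_value")]
    public string Value
    {
        get { return _value; }
        set
        {
            _value = value;
            // "Trigger" logic goes here instead of in a DBML OnValueChanged partial:
            // e.g. record the change, or queue work to run after SubmitChanges.
        }
    }
}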
I had a similar issue. I don't think it is a bug in your code; I'm leaning toward a bug in how SqlDependency works. I did the same thing as you, but I tested it incrementally. If the select statement returned 1-100 rows, then it worked fine. If the select statement returned 1000 rows, then I would get the SqlException (timeout).
It is not a stack overflow issue (at least not in this client code). Putting a break point at the OnValueChanged event handler reveals that it does not get called again while the SubmitChanges call is hanging.
It is possible that there is a requirement that the OnValueChanged call must return before you can call SubmitChanges. Maybe calling SubmitChanges on a different thread might help.
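Purely as a speculative sketch of that idea (untested, and you would still want something like the UpdatingValue guard from the earlier answer so the background update doesn't re-trigger itself):
partial void OnValueChanged()
{
    // Let OnValueChanged return immediately; do the secondary update on a thread-pool thread.
    System.Threading.ThreadPool.QueueUserWorkItem(_ =>
    {
        using (var dc = new MyAppDataContext())
        {
            var q = from o in dc.GetTable<Things>() where o.Id == 13 select o;
            foreach (Things o in q)
            {
                o.Value = "1";
            }
            dc.SubmitChanges(); // now runs after the original OnValueChanged has returned
        }
    });
}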
My solution was to wrap the code in a big try/catch block to catch the SqlException. If it happens, then I perform the same query, but I don't use an SqlDependency and don't attach it to the command. This does not hang the SubmitChanges call anymore. Then right after that, I recreate the SqlDependency and then make the query again, to reregister the dependency.
This is not ideal, but at least it will process all the rows eventually. The problem only occurs if there are a lot of rows to be selected, and if the program is working smoothly, this should not happen as it is constantly catching up.
public Constructor(string connString, CogTrkDBLog logWriter0)
{
connectionString = connString;
logWriter = logWriter0;
using (SqlConnection conn = new SqlConnection(connString))
{
conn.Open();
using (SqlCommand cmd = new SqlCommand("SELECT is_broker_enabled FROM sys.databases WHERE name = 'cogtrk'", conn))
{
bool r = (bool) cmd.ExecuteScalar();
if (!r)
{
throw new Exception("is_broker_enabled was false");
}
}
}
if (!CanRequestNotifications())
{
throw new Exception("Not enough permission to run");
}
// Remove any existing dependency connection, then create a new one.
SqlDependency.Stop(connectionString);
SqlDependency.Start(connectionString);
if (connection == null)
{
connection = new SqlConnection(connectionString);
connection.Open();
}
if (command == null)
{
command = new SqlCommand(GetSQL(), connection);
}
GetData(false);
GetData(true);
}
private string GetSQL()
{
return "SELECT id, command, state, value " +
" FROM dbo.commandqueue WHERE state = 0 ORDER BY id";
}
void dependency_OnChange(object sender, SqlNotificationEventArgs e)
{
// Remove the handler, since it is only good
// for a single notification.
SqlDependency dependency = (SqlDependency)sender;
dependency.OnChange -= dependency_OnChange;
GetData(true);
}
void GetData(bool withDependency)
{
lock (this)
{
bool repeat = false;
do {
repeat = false;
try
{
GetDataRetry(withDependency);
}
catch (SqlException)
{
if (withDependency) {
GetDataRetry(false);
repeat = true;
}
}
} while (repeat);
}
}
private void GetDataRetry(bool withDependency)
{
// Make sure the command object does not already have
// a notification object associated with it.
command.Notification = null;
// Create and bind the SqlDependency object
// to the command object.
if (withDependency)
{
SqlDependency dependency = new SqlDependency(command);
dependency.OnChange += dependency_OnChange;
}
Console.WriteLine("Getting a batch of commands");
// Execute the command.
using (SqlDataReader reader = command.ExecuteReader())
{
using (CommandQueueDb db = new CommandQueueDb(connectionString))
{
foreach (CommandEntry c in db.Translate<CommandEntry>(reader))
{
Console.WriteLine("id:" + c.id);
c.state = 1;
db.SubmitChanges();
}
}
}
}