SSMS SMO Objects: Get query results - C#

I came across this tutorial to understand how to execute SQL scripts with GO statements.
Now I want to know how I can get the output of the Messages tab.
With several GO statements, the output would be like this:
1 rows affected
912 rows affected
...
But server.ConnectionContext.ExecuteNonQuery() can return only an int, while I need all the text. If some part of the query fails, that error should appear in the output as well.
Any help would be appreciated.

The easiest thing is possibly to just print the number you get back from ExecuteNonQuery():
int rowsAffected = server.ConnectionContext.ExecuteNonQuery(/* ... */);
if (rowsAffected != -1)
{
    Console.WriteLine("{0} rows affected.", rowsAffected);
}
This should work, but will not honor the SET NOCOUNT setting of the current session/scope.
Otherwise, you would do it like you would with "plain" ADO.NET: don't use the ServerConnection.ExecuteNonQuery() method, but create an SqlCommand object by accessing the underlying SqlConnection object, and subscribe to its StatementCompleted event.
using (SqlCommand command = server.ConnectionContext.SqlConnectionObject.CreateCommand())
{
    // Set other properties for "command", like CommandText, etc.
    command.StatementCompleted += (s, e) =>
    {
        Console.WriteLine("{0} row(s) affected.", e.RecordCount);
    };
    command.ExecuteNonQuery();
}
Using StatementCompleted (instead, say, manually printing the value that ExecuteNonQuery() returned) has the benefit that it works exactly like SSMS or SQLCMD.EXE would:
For commands that do not have a ROWCOUNT it will not be called at all (e.g. GO, USE).
If SET NOCOUNT ON was set, it will not be called at all.
If SET NOCOUNT OFF was set, it will be called for every statement inside a batch.
(Sidebar: it looks like StatementCompleted is exactly what the TDS protocol talks about when DONE_IN_PROC event is mentioned; see Remarks of the SET NOCOUNT command on MSDN.)
Personally, I have used this approach with success in my own "clone" of SQLCMD.EXE.
UPDATE: It should be noted that this approach (of course) requires you to manually split the input script/statements at the GO separator, because you're back to using SqlCommand.Execute*(), which cannot handle multiple batches at a time. For this, there are multiple options:
Manually split the input on lines starting with GO (caveat: GO can be called like GO 5, for example, to execute the previous batch 5 times); see the sketch after this list.
Use the ManagedBatchParser class/library to help you split the input into single batches, in particular implementing ICommandExecutor.ProcessBatch with the code above (or something resembling it).
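For the first option, here is a minimal sketch (a hypothetical helper of my own, not the ManagedBatchParser): it splits on lines consisting solely of GO with an optional repeat count, and deliberately ignores complications such as GO inside comments or string literals.
using System;
using System.Collections.Generic;
using System.Text;
using System.Text.RegularExpressions;

static class GoSplitter
{
    // Yields (batch text, repeat count) pairs; "GO 5" repeats the preceding batch 5 times.
    public static IEnumerable<Tuple<string, int>> Split(string script)
    {
        var batch = new StringBuilder();
        foreach (var line in script.Split('\n'))
        {
            var match = Regex.Match(line, @"^\s*GO(?:\s+(\d+))?\s*$", RegexOptions.IgnoreCase);
            if (match.Success)
            {
                if (batch.Length > 0)
                {
                    int count = match.Groups[1].Success ? int.Parse(match.Groups[1].Value) : 1;
                    yield return Tuple.Create(batch.ToString(), count);
                }
                batch.Clear();
            }
            else
            {
                batch.AppendLine(line.TrimEnd('\r'));
            }
        }
        if (batch.Length > 0)
            yield return Tuple.Create(batch.ToString(), 1);
    }
}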
I chose the latter option, which was quite some work, given that it is not particularly well documented and examples are rare (Google a bit and you'll find some, or use Reflector to see how the SMO assemblies use that class).
The benefit (and maybe burden) of using the ManagedBatchParser is that it will also parse all other constructs of T-SQL scripts (intended for SQLCMD.EXE) for you, including :setvar, :connect, :quit, etc. You don't have to implement the respective ICommandExecutor members if your scripts don't use them, of course. But mind you, you may not be able to execute "arbitrary" scripts.
Well, where did that leave you? From the "simple question" of how to print "... rows affected" to the realization that it is not trivial to do in a robust and general manner (given the background work required). YMMV, good luck.
Update on ManagedBatchParser Usage
There seems to be no good documentation or example of how to implement IBatchSource, so here is what I went with.
internal abstract class BatchSource : IBatchSource
{
    private string m_content;

    public void Populate()
    {
        m_content = GetContent();
    }

    public void Reset()
    {
        m_content = null;
    }

    protected abstract string GetContent();

    public ParserAction GetMoreData(ref string str)
    {
        str = null;
        if (m_content != null)
        {
            str = m_content;
            m_content = null;
        }
        return ParserAction.Continue;
    }
}
internal class FileBatchSource : BatchSource
{
    private readonly string m_fileName;

    public FileBatchSource(string fileName)
    {
        m_fileName = fileName;
    }

    protected override string GetContent()
    {
        return File.ReadAllText(m_fileName);
    }
}

internal class StatementBatchSource : BatchSource
{
    private readonly string m_statement;

    public StatementBatchSource(string statement)
    {
        m_statement = statement;
    }

    protected override string GetContent()
    {
        return m_statement;
    }
}
And this is how you would use it:
var source = new StatementBatchSource("SELECT GETUTCDATE()");
source.Populate();
var parser = new Parser();
parser.SetBatchSource(source);
/* other parser.Set*() calls */
parser.Parse();
Note that both implementations, whether for direct statements (StatementBatchSource) or for a file (FileBatchSource), have the problem that they read the complete text into memory at once. I had one case where that blew up, having a huge(!) script with gazillions of generated INSERT statements. Even though I don't think that is a practical issue in general, SQLCMD.EXE could handle it. But for the life of me, I couldn't figure out how exactly you would need to form the chunks returned from IBatchSource.GetMoreData() so that the parser can still work with them (it looks like they would need to be complete statements, which would sort of defeat the purpose of the parser in the first place...).

Related

Combining multiple data readers into one

I have some code (C#/ADO.NET) where I get 2 or more readers (IDataReader instances), each of which could be a reader over multiple result sets meant to be enumerated through the NextResult API.
My task is to combine these into a single reader and return it to my caller, so that they can enumerate all the different results through this single reader, calling NextResult as necessary.
(Note that each of these result sets may contain different kinds of data.)
Seems like a valid use case. There should be some way to do this?
Just for the fun of it I tried creating a class, below. It would definitely be a hassle to test.
Before explaining why it would be a pain, I'll offer you an excuse for saving yourself the trouble. If your class is creating IDataReaders, there's likely not a good reason to pass them to the caller. You can just read the data from them and pass that to the caller. Is there any good reason why the callers need readers and not just the actual data? Opening a datareader and closing it is something you want to accomplish without getting too many hands in the process, so if you can open it, get what you need, and then close it, that's ideal.
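For what it's worth, here is a minimal sketch of that alternative, assuming you are free to materialize the data: DataTable.Load() consumes one result set at a time and closes the reader once the last one has been read, so each reader can be drained into plain DataTables and those handed back instead.
using System.Collections.Generic;
using System.Data;

static List<DataTable> Drain(IDataReader reader)
{
    var tables = new List<DataTable>();
    using (reader)
    {
        // Load() reads the current result set and advances to the next;
        // the reader is closed automatically after the last result set.
        while (!reader.IsClosed)
        {
            var table = new DataTable();
            table.Load(reader);
            tables.Add(table);
        }
    }
    return tables;
}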
When we advance from one result set to the next within an IDataReader it's usually in the context of making a single call, so it's easier to follow what the result sets are. We just called procedure XYZ, it returns two result sets, so we have to check for both result sets. I wouldn't want to deal with an IDataReader that had lots and lots of result sets, especially when it's a bunch of smaller ones artificially combined. You'd have to keep track of a lot of result sets and switch between different methods for reading them since they contain different columns.
And there's also the issue of those open connections. We usually close a connection when we're done with a reader. But now it's less clear when that connection would get closed. How do you know which connection belongs to which reader? What if you close the connection for a reader while a different reader is still using it?
So here's a rough idea of what it might look like. I obviously didn't test this. You would have to keep track of which one is current and handle advancing to the next reader if NextResult is called and there isn't a next result within the current reader. And you'd have to close the readers and make sure they all get disposed. This could be tested, but just the testing would be a headache, and that's often a good warning not to do something.
public class AggregateDataReader : IDataReader
{
    private readonly Queue<IDataReader> _readers;
    private IDataReader _current;

    public AggregateDataReader(IEnumerable<IDataReader> readers)
    {
        _readers = new Queue<IDataReader>(readers);
        AdvanceToNextReader(); // position on the first reader so Read() works right away
    }

    private bool AdvanceToNextReader()
    {
        _current?.Dispose();
        var moreReaders = _readers.Any();
        // clear _current when exhausted so NextResult() doesn't touch a disposed reader
        _current = moreReaders ? _readers.Dequeue() : null;
        return moreReaders;
    }

    public bool NextResult()
    {
        if (_current == null) return false;
        if (_current.NextResult()) return true;
        return AdvanceToNextReader();
    }

    public bool Read()
    {
        return _current.Read();
    }

    public void Dispose()
    {
        _current?.Dispose();
        while (_readers.Any()) _readers.Dequeue().Dispose();
    }

    public string GetName(int i)
    {
        return _current.GetName(i);
    }

    // ... lots of these...

    public byte GetByte(int i)
    {
        return _current.GetByte(i);
    }

    public long GetBytes(int i, long fieldOffset, byte[] buffer, int bufferoffset, int length)
    {
        return _current.GetBytes(i, fieldOffset, buffer, bufferoffset, length);
    }

    // ... etc...

    public void Close()
    {
        _current?.Close();
        while (_readers.Any()) _readers.Dequeue().Close();
    }

    // ... etc...
}
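A hypothetical usage sketch (cmd1 and cmd2 are placeholders for commands on their own open connections, which must both stay open for as long as the combined reader is in use):
using (var combined = new AggregateDataReader(new[] { cmd1.ExecuteReader(), cmd2.ExecuteReader() }))
{
    do
    {
        while (combined.Read())
        {
            // process the current row; the column layout can differ per result set
        }
    } while (combined.NextResult());
}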

Pass a LogRecordSet into a method

I have a small command-line app written in C# that uses LogParser, and I was looking to clean it up a little because it is all in one massive method.
I run my query and I get a LogRecordSet object:
// run the query against wowza log
LogRecordSet logSet = logQuery.Execute(sql, new W3CInputFormat());
All good. Now I want to pass logSet into a method where I will evaluate everything:
private static IEnumerable<Unprocessed> CreateRecords(LogRecordSet logRecordset)
{
    for (; !logRecordset.atEnd(); logRecordset.moveNext())
    {
        ...
    }
}
And I call it like so:
var records = CreateRecords(logSet);
This compiles fine; however, it just sort of ignores the CreateRecords method and skips over it. I admittedly know very little about C# command-line applications, but I would be interested to know why this is happening, and I wasn't really sure what to Google.
Edit
I have looked into it a little more, and the problem seems to stem from the fact that my method uses
yield return log;
Can I not use yield return in this context?
private static IEnumerable<Unprocessed> CreateRecords(LogRecordSet logRecordset)
{
    for (; !logRecordset.atEnd(); logRecordset.moveNext())
    {
        yield return ...;
    }
}
Your CreateRecords() looks OK; just make sure you start enumerating the IEnumerable it returns and you'll see it get invoked. For example:
var records = CreateRecords(logSet).ToArray();
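The underlying mechanism is deferred execution: the body of an iterator method (anything containing yield return) does not run until the returned sequence is enumerated. A minimal standalone demo (assuming the usual System, System.Collections.Generic and System.Linq usings):
static IEnumerable<int> Numbers()
{
    Console.WriteLine("iterator body runs now");
    yield return 1;
}

var seq = Numbers();      // prints nothing; the body has not executed yet
var arr = seq.ToArray();  // "iterator body runs now" is printed here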

Current state object - C#

For my current 'testing the waters' project, I'm trying to not use any Try-Catch blocks but instead catch each error (other than fatal) in other ways.
Now, when I say catch errors: my very contrived program makes one error which is easy to avoid; it tries to divide by 0, and this can be prevented with an if statement. To keep it simple I have only one C# file, with one class and two methods. I guess this is like a template, where the constructor starts a process:
public class myObject
{
    public myObject()
    {
        Object objOne = methodOne();
        procesObjects(objOne);
    }

    public object methodOne()
    {
        // logic to create a return object
        int x = 0;
        // I've added a condition to ensure the maths is possible, to avoid raising
        // an exception when, for this example, it fails
        if (x > 0)
        {
            int y = 5 / x;
        }
        return new object(); // placeholder for the object this method really builds
    }

    public void procesObjects(Object objOne)
    {
        // logic
    }
}
So, as you can see in methodOne(), I've added the if statement to check that the maths isn't dividing by 0. However, since I've merely avoided the error, my application continues, which is not desired. I need a way to cease the application and log the failure for debugging.
So, this is what I think could work:
Create a class called Tracking which, for this example, is very simple (or would a struct be better?).
public class Tracking
{
    public StringBuilder logMessage = new StringBuilder();
    public bool hasFailed; // not readonly: it is set when a failure is recorded
}
I can then update my code to:
public class myObject
{
    Tracking tracking = new Tracking();

    public myObject()
    {
        Object objOne = methodOne();
        if (!tracking.hasFailed)
            procesObjects(objOne);
        if (tracking.hasFailed)
            ExternalCallToLog(tracking);
    }

    public object methodOne()
    {
        // logic
        int x = 0;
        // I've added a condition to ensure the maths is possible, to avoid raising
        // an exception when, for this example, it fails
        if (x > 0)
        {
            int y = 5 / x;
        }
        else
        {
            tracking.hasFailed = true;
            tracking.logMessage.AppendLine("Cannot divide by 0");
        }
        // may also need to check that the object is OK to return
        return new object(); // placeholder
    }

    public void procesObjects(Object objOne)
    {
        // logic
    }
}
So, I hope you can see what I'm trying to achieve, but I have 3 questions.
Should my tracking object (as it is in this example) be a class or a struct?
I'm concerned my code is going to become very noisy. I'm wondering whether it would be better if, when the system fails, the Tracking object raised an event which logs the failure and then somehow closes the program?
Any other ideas are very welcome.
Again, I appreciate it may be simpler and easier to use Try-Catch blocks but I'm purposely trying to avoid them for my own education.
EDIT
The reason for the above was due to reading this blog: Vexing exceptions - Fabulous Adventures In Coding - Site Home - MSDN Blogs
Seriously, Dave - try catch blocks are there for a reason. Use them.
Reading between the lines, it looks like you want to track custom information when something goes wrong. Have you considered extending System.Exception to create your own bespoke implementation suited to your needs?
Something along the lines of:-
public class TrackingException : System.Exception
{
    // put custom properties here.
}
That way, when you detect that something has gone wrong, you can still use try/catch handling, but throw an exception that contains pertinent information for your needs.
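A rough end-to-end sketch of that idea; the FailedStep property and the ExternalCallToLog logger are hypothetical placeholders for whatever context and logging you need:
public class TrackingException : System.Exception
{
    public string FailedStep { get; private set; } // example custom property

    public TrackingException(string message, string failedStep)
        : base(message)
    {
        FailedStep = failedStep;
    }
}

// At the failure site:
if (x == 0)
    throw new TrackingException("Cannot divide by 0", "methodOne");

// Once, near the entry point:
try
{
    var instance = new myObject();
}
catch (TrackingException ex)
{
    ExternalCallToLog(ex.FailedStep + ": " + ex.Message);
}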

Is there a cleaner way to implement boolean logic that passes if either of two things happens?

This happens to me every once in a while and I always end up solving it the same way and then wishing for a cleaner way.
I start out with calls to related utility functions, followed by an update call.
SynchA();
SynchB();
UpdateLastTime();
Then later I add check boxes so I have:
if (synchA.Checked)
{
    SynchA();
}
if (synchB.Checked)
{
    SynchB();
}
BUT now I only want to call UpdateLastTime() if ONE OR BOTH of the two executed, so invariably I end up with:
bool synchHappened = false;
if (synchA.Checked)
{
    SynchA();
    synchHappened = true;
}
if (synchB.Checked)
{
    SynchB();
    synchHappened = true;
}
if (synchHappened)
{
    UpdateLastTime();
}
That final step always bothers me because I'm spreading this one bool around to three branches of logic.
Is there some obvious better approach to the above logic/scenario that I could use?
The main goal would be that, each time the logic changes, as little code as possible is affected.
So you structure such things once, and then it works for you.
In your particular case I would suggest keeping it simple (no Strategy pattern, and so on):
extract and encapsulate the logic of the switches into properties. Then each time the requirements change, you update either the logic of a particular switch or the main logic itself.
Switches with encapsulated rules:
bool IsUpdateLastTime
{
    get
    {
        // logic here even can be fully or partially injected
        // as Func<bool>
        return this.IsSyncA || this.IsSyncB;
    }
}

bool IsSyncA { get { return synchA.Checked; } }
bool IsSyncB { get { return synchB.Checked; } }

Main logic:
if (this.IsUpdateLastTime)
{
    this.UpdateLastTime();
}
This kind of problem is where Rx is really useful, as you can combine several events into a single one. It can be done with something like this.
(Assuming WinForms for this example, but it's similar with WPF etc.; just change the event names/handler types passed to FromEventPattern.)
var synchAchecked = Observable.FromEventPattern(h => synchA.CheckedChanged += h, h => synchA.CheckedChanged -= h);
var synchBchecked = Observable.FromEventPattern(h => synchB.CheckedChanged += h, h => synchB.CheckedChanged -= h);
var merged = Observable.Merge(synchAchecked, synchBchecked);
synchAchecked.Subscribe(x => SynchA());
synchBchecked.Subscribe(x => SynchB());
merged.Subscribe(x => UpdateLastTime());
One might consider this pattern more compact (and possibly readable) although it still requires the separate variable.
bool syncHappened = false;
// the assignment inside the condition records that a sync ran; testing the
// Checked property (not the accumulated flag) keeps the calls correct
if (synchA.Checked && (syncHappened = true)) SynchA();
if (synchB.Checked && (syncHappened = true)) SynchB();
if (syncHappened) UpdateLastTime();
Some may not like this, but I find it very useful and clean. It's a bit odd, so the decision to use it is up to you.
UpdateLastTime(Either(SynchA(synchA.Checked), SynchB(synchB.Checked)));
private bool Either(bool first, bool second)
{
    return first || second;
}
This requires that SynchA(), SynchB() and UpdateLastTime() be modified to do no work if shouldRun is false, and return true or false depending whether the synch occurred.
A bit C-centric, but:
if (checked_syncs & SYNC_A)
    SyncA();
if (checked_syncs & SYNC_B)
    SyncB();
if (checked_syncs)
    UpdateLastTime();
This has the advantage that the last check doesn't have to change (unless you run out of bits, in which case you can switch to a larger primitive type, or just use more than one). It also has the advantage of effectively doing all the ORs for UpdateLastTime() in parallel, so it is also a fast solution.
NOTE: Of course SYNC_A and SYNC_B have to be unique powers of two, and yes - this may break your encapsulation a bit, and assumes you can |= the condition when the check occurs (may not be possible, or beneficial over setting a boolean if you are talking about a specific GUI toolkit).
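For completeness, a rough C# rendering of the same idea, assuming a [Flags] enum populated from the checkboxes:
[Flags]
enum Syncs { None = 0, A = 1, B = 2 }

var checkedSyncs = (synchA.Checked ? Syncs.A : Syncs.None)
                 | (synchB.Checked ? Syncs.B : Syncs.None);

if ((checkedSyncs & Syncs.A) != 0) SynchA();
if ((checkedSyncs & Syncs.B) != 0) SynchB();
if (checkedSyncs != Syncs.None) UpdateLastTime();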

Change object type at runtime maintaining functionality

Long story short
Say I have the following code:
// a class like this
// a class like this
class FirstObject {
    public Object OneProperty {
        get;
        set;
    }
    // (other properties)
    public Object OneMethod() {
        // logic
    }
}

// and another class with property and method names
// which are similar or exactly the same if needed
class SecondObject {
    public Object OneProperty {
        get;
        set;
    }
    // (other properties)
    public Object OneMethod(String canHaveParameters) {
        // logic
    }
}

// the consuming code would be something like this
public static void Main(String[] args) {
    FirstObject myObject = new FirstObject();
    // Use its properties and methods
    Console.WriteLine("FirstObject.OneProperty value: " + myObject.OneProperty);
    Console.WriteLine("FirstObject.OneMethod returned value: " + myObject.OneMethod());
    // Now, for some reason, continue to use the
    // same object but with another type
    // -----> CHANGE FirstObject to SecondObject HERE <-----
    // Continue to use properties and methods, but
    // this time the calls would be made to SecondObject properties and methods
    Console.WriteLine("SecondObject.OneProperty value: " + myObject.OneProperty);
    Console.WriteLine("SecondObject.OneMethod returned value: " + myObject.OneMethod(oneParameter));
}
Is it possible to change FirstObject's type to SecondObject and continue to use its properties and methods?
I've total control over FirstObject, but SecondObject is sealed and totally out of my scope!
Might I achieve this through reflection? How? What do you think of the work it might take? Obviously both classes can be a LOT more complex than the example above.
Both classes can be generic, like FirstObject<T> and SecondObject<T>, which intimidates me about using reflection for such a task!
Problem in reality
I've tried to state my problem in the simplest way possible, to try to extract some knowledge to solve it, but looking at the answers it seems obvious that, to help me, you need to understand my real problem, because changing the object type is only the tip of the iceberg.
I'm developing a Workflow Definition API. The main objective is to have an API that is reusable on top of any engine I might want to use (CLR through WF4, NetBPM, etc.).
By now I'm writing the middle layer to translate that API to WF4 to run workflows through the CLR.
What I've already accomplished
The API concept, at this stage, is somewhat similar to WF4, with ActivityStates that have In/Out Arguments and Data (variables) flowing through the ActivityStates via their arguments.
Very simplified API in pseudo-code:
class Argument {
    object Value;
}

class Data {
    String Name;
    Type ValueType;
    object Value;
}

class ActivityState {
    String DescriptiveName;
}

class MyIf : ActivityState {
    InArgument Condition;
    ActivityState Then;
    ActivityState Else;
}

class MySequence : ActivityState {
    Collection<Data> Data;
    Collection<ActivityState> Activities;
}
My initial approach to translating this to WF4 was to run through the ActivityState graph and do a more or less direct assignment of properties, using reflection where needed.
Again simplified pseudo-code, something like:
new Activities.If() {
    DisplayName = myIf.DescriptiveName,
    Condition = TranslateArgumentTo_WF4_Argument(myIf.Condition),
    Then = TranslateActivityStateTo_WF4_Activity(myIf.Then),
    Else = TranslateActivityStateTo_WF4_Activity(myIf.Else)
}

new Activities.Sequence() {
    DisplayName = mySequence.DescriptiveName,
    Variables = TranslateDataTo_WF4_Variables(mySequence.Variables),
    Activities = TranslateActivitiesStatesTo_WF4_Activities(mySequence.Activities)
}
At the end of the translation I would have an executable System.Activities.Activity object. I've already accomplished this easily.
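(For reference, a sketch of how such a translated activity can then be run in-process; TranslateActivityStateTo_WF4_Activity and mySequence refer to the pseudo-code above:
Activity translated = TranslateActivityStateTo_WF4_Activity(mySequence);
WorkflowInvoker.Invoke(translated); // System.Activities, synchronous execution
)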
The big issue
A big issue with this approach appeared when I began the Data-to-System.Activities.Variable translation. The problem is that WF4 separates the workflow execution from the context. Because of that, both Arguments and Variables are LocationReferences that must be accessed through the var.Get(context) function so the engine knows where they are at runtime.
Something like this is easily accomplished using WF4:
Variable<string> var1 = new Variable<string>("varname1", "string value");
Variable<int> var2 = new Variable<int>("varname2", 123);

return new Sequence {
    Name = "Sequence Activity",
    Variables = new Collection<Variable> { var1, var2 },
    Activities = new Collection<Activity>() {
        new Write() {
            Name = "WriteActivity1",
            Text = new InArgument<string>(
                context =>
                    String.Format("String value: {0}", var1.Get(context)))
        },
        new Write() {
            //Name = "WriteActivity2",
            Text = new InArgument<string>(
                context =>
                    String.Format("Int value: {0}", var2.Get(context)))
        }
    }
};
but if I want to represent the same workflow through my API:
Data<string> var1 = new Data<string>("varname1", "string value");
Data<int> var2 = new Data<int>("varname2", 123);

return new Sequence() {
    DescriptiveName = "Sequence Activity",
    Data = new Collection<Data> { var1, var2 },
    Activities = new Collection<ActivityState>() {
        new Write() {
            DescriptiveName = "WriteActivity1",
            Text = "String value: " + var1 // <-- BIG PROBLEM !!
        },
        new Write() {
            DescriptiveName = "WriteActivity2",
            Text = "Int value: " + Convert.ToInt32(var2) // ANOTHER BIG PROBLEM !!
        }
    }
};
I end up with a BIG PROBLEM when using Data objects as Variables. I really don't know how to allow the developer using my API to use Data objects wherever he wants (just like in WF4) and later translate that Data to System.Activities.Variable.
Solutions come to mind
If you now understand my problem: the FirstObject and SecondObject are the Data and System.Activities.Variable, respectively. Like I said, translating Data to Variable is just the tip of the iceberg, because I might use Data.Get() in my code and don't know how to translate it to Variable.Get(context) while doing the translation.
Solutions that I've tried or thought of:
Solution 1
Instead of a direct translation of properties, I would develop NativeActivities for each flow-control activity (If, Sequence, Switch, ...) and make use of the CacheMetadata() function to specify Arguments and Variables. The problem remains, because they are both accessed through var.Get(context).
Solution 2
Give my Data class its own Get() function. It would be only an abstract method, without logic inside, that would somehow translate to the Get() function of System.Activities.Variable. Is this even possible using C#? Guess not! Another problem is that Variable.Get() takes a parameter.
Solution 3
The worst solution that I thought of was CIL manipulation: try to replace the code where Data/Argument is used with Variable/Argument code. This smells like a nightmare to me. I know next to nothing about System.Reflection.Emit, and even if I learn it, my guess is that it would take ages ... and it might not even be possible.
Sorry if I ended up introducing a bigger problem, but I'm really stuck here and desperately need a tip/path to go on.
This is called "duck typing" (if it looks like a duck and quacks like a duck you can call methods on it as though it really were a duck). Declare myObject as dynamic instead of as a specific type and you should then be good to go.
EDIT: to be clear, this requires .NET 4.0
dynamic myObject = new FirstObject();
// do stuff
myObject = new SecondObject();
// do stuff again
Reflection isn't necessarily the right tool for this. If SecondObject is out of your control, your best option is likely to just write an extension method that instantiates a new copy of it and copies the data across, property by property.
You could use reflection for the copying process, and work that way, but that is really a separate issue.
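A hedged sketch of such an extension method, assuming SecondObject has a parameterless constructor and that copying same-named, readable/writable, assignment-compatible properties is good enough:
using System.Reflection;

static class FirstObjectExtensions
{
    public static SecondObject ToSecondObject(this FirstObject source)
    {
        var target = new SecondObject();
        foreach (PropertyInfo sp in typeof(FirstObject).GetProperties())
        {
            PropertyInfo tp = typeof(SecondObject).GetProperty(sp.Name);
            if (tp != null && sp.CanRead && tp.CanWrite
                && tp.PropertyType.IsAssignableFrom(sp.PropertyType))
            {
                tp.SetValue(target, sp.GetValue(source, null), null);
            }
        }
        return target;
    }
}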
