See parameters of UPDATEs in Fluent NHibernate when outputting statements to Console - c#

Introduction
Following on from How to configure Fluent NHibernate to output queries to Trace or Debug instead of Console?, the answer provided there nicely outputs information to the console, except that it prints ? instead of the actual parameter values.
Additionally, using ShowSql() does not output any UPDATE statements.
Question
Is there a way to view the UPDATEs, parameters and all in the debug console?
Details of implementations
Using Interceptor
From How to configure Fluent NHibernate to output queries to Trace or Debug instead of Console?, I have implemented the following:
private class Interceptor : EmptyInterceptor
{
    public override SqlString OnPrepareStatement(SqlString sql)
    {
        var s = base.OnPrepareStatement(sql);
        Debug.WriteLine(s.ToString());
        return s;
    }
}
//...
var factory = Fluently.Configure()
    // ...
    .ExposeConfiguration(c => c.SetInterceptor(new Interceptor()))
    // ...
which results in output like
UPDATE [User] SET Email = ?, HashedPassword = ?, Name = ? WHERE Id = ?
Using ShowSql()
From this blog I have implemented the following
public class CustomDebugWriter : System.IO.TextWriter
{
    public override void WriteLine(string value)
    {
        Debug.WriteLine(value);
        base.WriteLine(value);
    }

    public override void Write(string value)
    {
        Debug.Write(value);
        base.Write(value);
    }

    public override System.Text.Encoding Encoding
    {
        get { return new UTF8Encoding(); }
    }
}
// ...
Console.SetOut(new CustomDebugWriter());
var dbConfig = MsSqlConfiguration.MsSql2012.ConnectionString(
    c => c.FromConnectionStringWithKey(connectionStringKey));
dbConfig.ShowSql();
which doesn't output UPDATE statements at all.

This is a workaround rather than a true answer.
If you're building a web app, you can use the Glimpse and NHibernate.Glimpse NuGet packages to examine what database calls are being made.
The parameters are shown.
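For reference, both are installed from NuGet; a typical Package Manager Console install might look like this (package IDs assumed from the names above):
Install-Package Glimpse.AspNet
Install-Package NHibernate.Glimpse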

It has to do with ISession's pseudo Unit of Work and batching.
With Fluent-NHibernate you need to set the AdoNetBatchSize property:
dbConfig.AdoNetBatchSize(0);
dbConfig.ShowSql();
dbConfig.FormatSql();
Then after you do your update, you need to call Flush() to flush the "batch".
entity.Title = "test title";
Session.Update(entity);
Session.Flush();
It really depends on your architecture and where you call this, or whether you are using your own Unit of Work implementation. I only worry about the SQL output in my integration tests project, so it's easy: I just call Flush() in TearDown. It's probably not something you want to just throw into your app; it's usually best to let NHibernate handle the batch lifecycle and do its thing.
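Putting the pieces together, a minimal configuration sketch might look like this (the connection-string key is a placeholder, the Interceptor is the one from the question, and mappings are omitted):
var factory = Fluently.Configure()
    .Database(MsSqlConfiguration.MsSql2012
        .ConnectionString(c => c.FromConnectionStringWithKey(connectionStringKey))
        .AdoNetBatchSize(0) // disable batching so each statement is flushed and logged individually
        .ShowSql()
        .FormatSql())
    .ExposeConfiguration(c => c.SetInterceptor(new Interceptor()))
    .BuildSessionFactory();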

Related

How does one set a variable in a command using Mediatr with Moq?

I have a simple functional-style test for the output of a command that I've written using Mediatr's IRequest and IRequestHandler<>:
[Fact]
public void TestReturnValuesAsync()
{
    // Arrange
    var handler = new Mock<IRequestHandler<SyncSubmerchantDataCommand, CommandResult<int>>>();
    handler.Setup(x => x.Handle(It.IsAny<SyncSubmerchantDataCommand>(), It.IsAny<CancellationToken>()))
           .ReturnsAsync(new CommandResult<int>(0, ResultStatus.Success, "string"));

    // Act
    var result = handler.Object.Handle(new SyncSubmerchantDataCommand(), new CancellationToken());

    // Assert
    result.Result.Data.ShouldBe(0);
    result.Result.Status.ShouldBe(ResultStatus.Success);
    result.Result.Message.ShouldBe("string");
}
Since this command runs as a background task, I don't want it interrupted. I have a variable, submerchantList, of type List<T>, which is used in a foreach loop to do work. The work is wrapped in a try-catch because, as I said before, I don't want the command interrupted. I want to test what is written to my logs (_log.info) if an exception is thrown during this process.
public class CommandNameHandler : IRequestHandler<SyncSubmerchantDataCommand, CommandResult<int>>
{
    // constructors and private fields omitted

    public async Task<CommandResult<int>> Handle(SyncSubmerchantDataCommand request, CancellationToken token)
    {
        var submerchantList = /* db call */.ToList();
        foreach (var item in submerchantList)
        {
            try
            {
                // does work
            }
            catch (Exception e)
            {
                if (item != null)
                    _log.info($"{e} - {item.Id}");
            }
        }
        return /* some output */;
    }
}
The problem is that I can't figure out how to set the value of any variable, such as submerchantList, within Handle in order to throw the exception for my next test. I'm stumped.
Any help would be greatly appreciated.
SOLUTION:
Here was the solution: stubbing the database call by injecting an in-memory DbSet. I used this resource: learn.microsoft.com/en-us/ef/ef6/fundamentals/testing/…
The issue was the db call's ToList(). It looked something like this:
_db.Table.Include(x => x.Foreign).Where(x => x.Foreign.Field == Enum.Value).ToListAsync()
While setting up the mock DbSet, I had to use the string version of Include, not the LINQ-chain version, in the unit test. That means:
mockDbset.Setup(x => x.Table.Include("Foreign")).Returns(myCustomDbSet);
Hope that helps someone!
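For a fuller picture, here is a minimal sketch of such an in-memory DbSet stub with Moq, following the EF6 testing documentation linked above; the Submerchant entity, MyDbContext, and property names are illustrative, not from the original question:
var data = new List<Submerchant> { new Submerchant { Id = 1 } }.AsQueryable();

var mockSet = new Mock<DbSet<Submerchant>>();
mockSet.As<IQueryable<Submerchant>>().Setup(m => m.Provider).Returns(data.Provider);
mockSet.As<IQueryable<Submerchant>>().Setup(m => m.Expression).Returns(data.Expression);
mockSet.As<IQueryable<Submerchant>>().Setup(m => m.ElementType).Returns(data.ElementType);
mockSet.As<IQueryable<Submerchant>>().Setup(m => m.GetEnumerator()).Returns(data.GetEnumerator());

// Use the string overload of Include, as noted above:
var mockContext = new Mock<MyDbContext>();
mockContext.Setup(c => c.Submerchants.Include("Foreign")).Returns(mockSet.Object);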

Integration tests - what would you test for in this controller?

I'm applying NUnit integration tests to our controller endpoints in a .NET Web API 2 project whose models and controllers are generated via Entity Framework code-first from the database.
I'm having trouble thinking of what parts of the controller I should test. In the end, we'd just like to be able to automate the question "can a user with role X get this data?".
Looking at the GET portion of this controller, what parts would you test, and what's your reasoning?
namespace api.Controllers.myNamespace
{
    public class myController : ApiController
    {
        private string strUserName;
        private string strError = "";
        private string strApiName = "myTable";
        private myDatabase db = new myDatabase();

        // ----------------------------------------------------------------------
        // GET: api/path
        public IQueryable<myTable> GetmyTable()
        {
            try
            {
                this.strUserName = this.getUserName();
                if
                (
                    // ----- authorize -----
                    db.view_jnc_role_api_permission.Count
                    (
                        view =>
                        (
                            view.permission == "get"
                            && view.apiName == this.strApiName
                            && view.userName == this.strUserName
                        )
                    ) == 1
                    // ----- /authorize -----
                )
                {
                    // ----- get -----
                    IQueryable<myTable> data =
                        from tbl in db.myTable
                        where tbl.deleted == null
                        select tbl;
                    // ----- /get -----
                    return data;
                }
                else
                {
                    strError = "Unauthorized.";
                    throw new HttpResponseException(HttpStatusCode.Forbidden);
                }
            }
            catch (Exception ex)
            {
                if (strError.Length == 0)
                {
                    if (this.showException())
                    {
                        strError = ex.ToString();
                    }
                }
                throw new HttpResponseException(ControllerContext.Request.CreateErrorResponse(HttpStatusCode.Forbidden, strError));
            }
        }
    }
}
For reference, here's what I have so far. Some of the private fields I'm defining shouldn't be here; I'm currently trying to get access to private methods from my test project via AssemblyInfo.cs to fix this:
namespace api.myNamespace
{
    [TestFixture]
    public class myController : ApiController
    {
        private string strUserName;
        private string strError = "";
        private string strApiName = "myTable";
        private myDb db = new myDb();

        // Using TransactionScope to (hopefully) prevent the integration test's changes to the database from persisting
        protected TransactionScope TransactionScope;

        // Instantiate _controller field
        private myController _controller;

        [SetUp]
        public void SetUp()
        {
            TransactionScope = new TransactionScope(TransactionScopeOption.RequiresNew);
            // One test may leave state that could impact subsequent tests, so reinstantiate _controller at the start of each test:
            _controller = new myController();
        }

        [TearDown]
        public void TearDown()
        {
            TransactionScope.Dispose();
        }

        //------ TESTS -------//
        // CanSetAndGetUserName
        // AuthorizedUserCanGetData
        // UnauthorizedUserCannotGetData
        // AuthorizedUserCanPutData
        // UnauthorizedUserCannotPutData
        // AuthorizedUserCanPostData
        // UnauthorizedUserCannotPostData
        // AuthorizedUserCanDeleteData
        // UnauthorizedUserCannotDeleteData

        [Test]
        public void CanGetAndSetUsername()
        {
            // ARRANGE
            var user = _controller.getUserName();
            // ACT
            // ASSERT
            Assert.That(user, Is.EqualTo("my-internal-username"));
        }

        [Test]
        public void UnauthorizedUserCannotGetData()
        {
            var user = "Mr Unauthorized";
            // Unfinished: integration testing is abstract, subjective, and time-consuming.
            Assert.That(user, Is.EqualTo());
        }
    }
}
An integration test means several things:
1. You set up your test data in the database, via a script for example.
2. You call the endpoint under test, knowing exactly what data you should call it with and what you should get back. This is all based on the test data you set up in step 1.
3. You compare the data you expected with the data you actually got back.
This is an integration test, as it touches everything: both the API and the database.
Now, you said you are having trouble deciding which parts of the controller to test. This suggests you are confusing integration tests with unit tests.
Integration tests we already covered.
Unit tests cover parts of functionality. You do not test controllers, forget about that.
What you really need to consider doing is this:
First, separate your code from the controller. Keep the controller very basic: it receives a call, validates the request model, and passes it on to a class library where the functionality happens. This lets you forget about "testing the controller" and focus on your functionality instead. Unit tests will help here, and your test cases will become something like this:
I have a user, set up in a certain way.
I have some data, set up in a certain way.
When I call method X, then I should get this response.
With such a setup in place, you can set your test data any way you like and check every single test case.
The only reason you're wondering how to test your controller is that you dumped all your code into it, which of course makes everything hard. Think SOLID, think SoC (separation of concerns); a sketch of that separation follows below.
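To illustrate, here is a hedged sketch of that split, reusing names from the question; IMyTableService and GetActiveRows are assumed names, not from the original code:
public interface IMyTableService
{
    List<myTable> GetActiveRows(string userName);
}

public class myController : ApiController
{
    private readonly IMyTableService _service;

    public myController(IMyTableService service)
    {
        _service = service;
    }

    // The controller stays thin; authorization and data access live in the
    // service, which can be unit tested without the Web API pipeline.
    public IHttpActionResult GetmyTable()
    {
        return Ok(_service.GetActiveRows(this.getUserName()));
    }
}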
One piece of advice: never ever return IQueryable from an endpoint. That's not data; it's simply a query that hasn't run yet. Return a List, an IEnumerable, a single object, whatever you need, but make sure you execute the query first, for example by calling ToList() on your IQueryable expression.
So, the steps are like this:
Set up your IQueryable first.
Execute it by calling ToList(), First(), FirstOrDefault(), whatever is appropriate, and return the result of that.
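As a minimal sketch of that advice applied to the GET above (the return type changes from IQueryable<myTable> to List<myTable>):
// Execute the query inside the endpoint so callers receive data, not a pending query.
public List<myTable> GetmyTable()
{
    return db.myTable
             .Where(tbl => tbl.deleted == null)
             .ToList();
}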

Decoupling issue - improvements and alternatives

I'm learning the SOLID principles, especially inversion of control / DI / decoupling, and as I was reviewing some of my code, this one method (see below) caught my attention.
This code is called by any method that needs to read the JSON file; it accepts string values that are used for lookups in that file. But as you can see (I simplified the code and excluded the exception handling for the sake of this topic), there are a lot of initializations, or dependencies(?), happening, and I'm not sure where to start.
Is this method/scenario a good candidate to start with? Which parts do you think I should retain, and which need to be decoupled?
Thanks.
public async Task<object> ReadJsonByKey(string jsonPath, string jsonKey)
{
    // First - is it okay to have an initialization at this stage?
    var value = new object();

    // Second - is it fine to have this in the scope of this method?
    using (TextReader reader = File.OpenText(jsonPath))
    {
        // Third - calling JObject.LoadAsync with a new instance of JsonTextReader
        var jObject = await JObject.LoadAsync(new JsonTextReader(reader));
        value = jObject.SelectToken(jsonKey);
    }

    return value;
}
The reason I ask is that (based on accepted standards) loosely coupled code can be easily tested, i.e. unit tested:
[UnitTestSuite]
[TestCase1]
// Method should only be able to accept ".json" or ".txt" file
[TestCase2]
// JsonPath file is valid file system
[TestCase3]
// Method should be able to retrieve a node value based from a specific json and key
[TestCase4]
// Json-text file is not empty
It looks like you're trying to decouple an infrastructural concern from your application code.
Assuming that's the case you need a class which is responsible for reading the data:
public interface IDataReader
{
    Task<object> ReadJsonByKey(string jsonPath, string jsonKey);
}
The implementation of which would be your above code:
public class DataReader : IDataReader
{
    public async Task<object> ReadJsonByKey(string jsonPath, string jsonKey)
    {
        var value = new object();
        using (TextReader reader = File.OpenText(jsonPath))
        {
            var jObject = await JObject.LoadAsync(new JsonTextReader(reader));
            value = jObject.SelectToken(jsonKey);
        }
        return value;
    }
}
However, this class is now doing both file reading and deserialization, so you could separate those concerns further:
public class DataReader : IDataReader
{
    private readonly IDeserializer _deserializer;

    public DataReader(IDeserializer deserializer)
    {
        _deserializer = deserializer;
    }

    public async Task<object> ReadJsonByKey(string jsonPath, string jsonKey)
    {
        var json = File.ReadAllText(jsonPath);
        return _deserializer.Deserialize(json, jsonKey);
    }
}
This means you could now unit test your IDeserializer implementation independently of the file-system dependency.
However, the main benefit should be that you can now mock the IDataReader implementation when unit testing your application code.
Make the function signature like:
public async Task<object> ReadJsonByKey(TextReader reader, string jsonKey)
Now the function works with any TextReader implementation, so you can pass one that reads from a file, from memory, or from any other data source.
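As a hedged usage sketch (the JSON content, file name, and key are made up for illustration, and the calls are assumed to run inside an async method):
// In a unit test, feed the method from an in-memory string...
object fromMemory = await ReadJsonByKey(new StringReader("{ \"name\": \"value\" }"), "name");

// ...and in production code, from a file.
object fromFile = await ReadJsonByKey(File.OpenText("settings.json"), "name");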
The only thing that prevents you from unit testing this properly is the File reference, which is static. You won't be able to provide the method with a file, because it would have to physically exist. There are two ways to solve this.
First, if it's possible, you could pass something other than a path to the method - a FileStream, for example.
Second, and arguably better, you could abstract the file system (I recommend using System.IO.Abstractions and the related TestingHelpers package) behind a private field and pass the dependency in via constructor injection:
private readonly IFileSystem fileSystem;

public MyClass(IFileSystem fileSystem)
{
    this.fileSystem = fileSystem;
}
And then in your method you'd use
fileSystem.File.OpenText(jsonPath);
This should allow you to unit test the method with ease, by passing in a MockFileSystem and creating a JSON file in memory for the method to read. Unit-testability is actually a good indicator that a method is maintainable and has a well-defined purpose: if you can test it easily with a not-so-complicated unit test, it's probably good; if you can't, it's definitely bad.
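A minimal sketch of such a test, assuming NUnit and System.IO.Abstractions.TestingHelpers; MyClass is the class from the snippet above, and the path and JSON content are illustrative:
[Test]
public async Task ReadJsonByKey_ReturnsValueForKey()
{
    var fileSystem = new MockFileSystem(new Dictionary<string, MockFileData>
    {
        { @"C:\data\settings.json", new MockFileData("{ \"name\": \"value\" }") }
    });
    var sut = new MyClass(fileSystem);

    var result = await sut.ReadJsonByKey(@"C:\data\settings.json", "name");

    Assert.That(result.ToString(), Is.EqualTo("value"));
}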

Custom PSHostUserInterface is ignored by Runspace

The Background
I'm writing an application that programmatically executes PowerShell scripts. The application has a custom PSHost implementation to allow scripts to output logging statements. Currently, the behavior I'm seeing is that some requests are properly forwarded to my custom PSHost and others are flat-out ignored.
Things get even stranger when I inspect the $Host variable in my scripts, which seems to suggest that my custom PSHost isn't even being used.
The Code
I have some code that's executing PowerShell within a .NET application:
var state = InitialSessionState.CreateDefault();
state.AuthorizationManager = new AuthorizationManager("dummy"); // Disable execution policy

var host = new CustomPsHost(new CustomPsHostUI());

using (var runspace = RunspaceFactory.CreateRunspace(host, state))
{
    runspace.Open();
    using (var powershell = PowerShell.Create())
    {
        powershell.Runspace = runspace;
        var command = new Command(filepath);
        powershell.Invoke(command);
    }
}
The implementation for CustomPsHost is very minimal, only containing what's needed to forward the PSHostUserInterface:
public class CustomPsHost : PSHost
{
    private readonly PSHostUserInterface _hostUserInterface;

    public CustomPsHost(PSHostUserInterface hostUserInterface)
    {
        _hostUserInterface = hostUserInterface;
    }

    public override PSHostUserInterface UI
    {
        get { return _hostUserInterface; }
    }

    // Methods omitted for brevity
}
The CustomPsHostUI is used as a wrapper for logging:
public class CustomPsHostUI : PSHostUserInterface
{
    public override void Write(string value) { Debug.WriteLine(value); }
    public override void Write(ConsoleColor foregroundColor, ConsoleColor backgroundColor, string value) { Debug.WriteLine(value); }
    public override void WriteLine(string value) { Debug.WriteLine(value); }
    public override void WriteErrorLine(string value) { Debug.WriteLine(value); }
    public override void WriteDebugLine(string message) { Debug.WriteLine(message); }
    public override void WriteProgress(long sourceId, ProgressRecord record) { }
    public override void WriteVerboseLine(string message) { Debug.WriteLine(message); }

    // Other methods omitted for brevity
}
In my PowerShell script, I am trying to write information to the host:
Write-Warning "This gets outputted to my CustomPSHostUI"
Write-Host "This does not get outputted to the CustomPSHostUI"
Write-Warning $Host.GetType().FullName # Says System.Management.Automation.Internal.Host.InternalHost
Write-Warning $Host.UI.GetType().FullName # Says System.Management.Automation.Internal.Host.InternalHostUserInterface
Why am I getting the strange behavior with my CustomPSHostUI?
You need to provide an implementation for PSHostRawUserInterface.
Write-Host ends up calling your version of Write(ConsoleColor, ConsoleColor, string). PowerShell relies on the raw UI implementation for the foreground and background colors.
I have verified this with sample code. Instead of calling out to a ps1 file, I invoked Write-Host directly:
powershell.AddCommand("Write-Host").AddArgument("Testing...");
By running a script, PowerShell was handling the exceptions for you. By invoking the command directly, you can more easily see the exceptions. If you had inspected $error in your original example, you would have seen a helpful error.
Note that the value of $host is never the actual implementation. PowerShell hides the actual implementation by wrapping it. I forget the exact details of why it's wrapped.
For anyone else still struggling after implementing PSHostUserInterface and PSHostRawUserInterface and finding that WriteErrorLine() is being completely ignored when you call Write-Error, even though Warning, Debug, and Verbose make it to the PSHostUserInterface, here's how to get your errors:
Pay close attention to https://msdn.microsoft.com/en-us/library/ee706570%28v=vs.85%29.aspx and add these two lines right before your .Invoke() call, like so:
powershell.AddCommand("out-default");
powershell.Commands.Commands[0].MergeMyResults(PipelineResultTypes.Error, PipelineResultTypes.Output);
powershell.Invoke(); // you had this already
This merges the error stream into your console output; otherwise it apparently doesn't go there. I don't have a detailed understanding of why (so perhaps I shouldn't be implementing a custom PSHost to begin with), but there is some further explanation to be had out there:
http://mshforfun.blogspot.com/2006/07/why-there-is-out-default-cmdlet.html
https://msdn.microsoft.com/en-us/library/system.management.automation.runspaces.command.mergemyresults%28v=vs.85%29.aspx
Also, assuming your host is not a console app and you're not implementing your own cmd-style character-mode display, you'll need to give it a fake buffer size, because PowerShell consults this before giving you the Write-Error output. (Don't give it 0,0, or you'll get a never-ending torrent of blank lines as it struggles to fit the output into a nothing-sized buffer.) I'm using:
class Whatever : PSHostRawUserInterface
{
    public override Size BufferSize
    {
        get { return new Size(300, 5000); }
        set { }
    }

    // ...
}
If you ARE a console app, just use Console.BufferWidth and Console.BufferHeight.
Update: If you'd rather get your errors as ErrorRecord objects than as lines of pre-formatted error text going to your WriteErrorLine override, hook the PowerShell.Streams.Error.DataAdding event and read the ItemAdded property on the event args. That's far less unruly to work with if you're doing anything other than simple line-by-line output in your GUI.
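A minimal sketch of that hook, assuming the powershell instance from the code above; subscribe before calling Invoke():
powershell.Streams.Error.DataAdding += (sender, args) =>
{
    // Each ErrorRecord arrives as it is added to the error stream.
    var record = (ErrorRecord)args.ItemAdded;
    Debug.WriteLine($"ERROR: {record.Exception?.Message} ({record.FullyQualifiedErrorId})");
};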

SSMS SMO Objects: Get query results

I came across this tutorial to understand how to execute SQL scripts containing GO statements.
Now I want to know how I can get the output of the Messages tab.
With several GO statements, the output would be like this:
1 rows affected
912 rows affected
...
But server.ConnectionContext.ExecuteNonQuery() can return only an int, while I need all the text. If there is an error in some part of the query, that should appear in the output as well.
Any help would be appreciated.
The easiest thing is possibly to just print the number you get back from ExecuteNonQuery():
int rowsAffected = server.ConnectionContext.ExecuteNonQuery(/* ... */);
if (rowsAffected != -1)
{
    Console.WriteLine("{0} rows affected.", rowsAffected);
}
This should work, but will not honor the SET NOCOUNT setting of the current session/scope.
Otherwise you would do it like you would with "plain" ADO.NET: don't use the ServerConnection.ExecuteNonQuery() method, but create a SqlCommand object by accessing the underlying SqlConnection object, and subscribe to its StatementCompleted event.
using (SqlCommand command = server.ConnectionContext.SqlConnectionObject.CreateCommand())
{
    // Set other properties for "command", like CommandText, etc.
    command.StatementCompleted += (s, e) =>
    {
        Console.WriteLine("{0} row(s) affected.", e.RecordCount);
    };
    command.ExecuteNonQuery();
}
Using StatementCompleted (instead, say, manually printing the value that ExecuteNonQuery() returned) has the benefit that it works exactly like SSMS or SQLCMD.EXE would:
For commands that do not have a ROWCOUNT it will not be called at all (e.g. GO, USE).
If SET NOCOUNT ON was set, it will not be called at all.
If SET NOCOUNT OFF was set, it will be called for every statement inside a batch.
(Sidebar: it looks like StatementCompleted is exactly what the TDS protocol talks about when DONE_IN_PROC event is mentioned; see Remarks of the SET NOCOUNT command on MSDN.)
Personally, I have used this approach with success in my own "clone" of SQLCMD.EXE.
UPDATE: It should be noted that this approach (of course) requires you to manually split the input script/statements at the GO separator, because you're back to using SqlCommand.Execute*(), which cannot handle multiple batches at a time. For this, there are multiple options:
Manually split the input on lines starting with GO (caveat: GO can take an argument, like GO 5, to execute the previous batch five times).
Use the ManagedBatchParser class/library to help you split the input into single batches; in particular, implement ICommandExecutor.ProcessBatch with the code above (or something resembling it).
I chose the latter option, which was quite some work, given that it is not well documented and examples are rare (google a bit and you'll find some stuff, or use Reflector to see how the SMO assemblies use that class).
The benefit (and maybe burden) of using the ManagedBatchParser is that it will also parse all the other constructs of T-SQL scripts (intended for SQLCMD.EXE) for you, including :setvar, :connect, :quit, etc. You don't have to implement the respective ICommandExecutor members if your scripts don't use them, of course. But mind you that you may not be able to execute "arbitrary" scripts.
Well, where did that leave you? From the "simple question" of how to print "... rows affected" to the realization that it isn't trivial to do in a robust and general manner (given the background work required). YMMV, good luck.
Update on ManagedBatchParser usage
There seems to be no good documentation or example of how to implement IBatchSource, so here is what I went with:
internal abstract class BatchSource : IBatchSource
{
    private string m_content;

    public void Populate()
    {
        m_content = GetContent();
    }

    public void Reset()
    {
        m_content = null;
    }

    protected abstract string GetContent();

    public ParserAction GetMoreData(ref string str)
    {
        str = null;
        if (m_content != null)
        {
            str = m_content;
            m_content = null;
        }
        return ParserAction.Continue;
    }
}

internal class FileBatchSource : BatchSource
{
    private readonly string m_fileName;

    public FileBatchSource(string fileName)
    {
        m_fileName = fileName;
    }

    protected override string GetContent()
    {
        return File.ReadAllText(m_fileName);
    }
}

internal class StatementBatchSource : BatchSource
{
    private readonly string m_statement;

    public StatementBatchSource(string statement)
    {
        m_statement = statement;
    }

    protected override string GetContent()
    {
        return m_statement;
    }
}
And this is how you would use it:
var source = new StatementBatchSource("SELECT GETUTCDATE()");
source.Populate();

var parser = new Parser();
parser.SetBatchSource(source);
/* other parser.Set*() calls */
parser.Parse();
Note that both implementations, whether for direct statements (StatementBatchSource) or for a file (FileBatchSource), have the problem that they read the complete text into memory at once. I had one case where that blew up: a huge(!) script with gazillions of generated INSERT statements. Even though I don't think that's a practical issue in general, SQLCMD.EXE could handle it. But for the life of me, I couldn't figure out how you would need to form the chunks returned from IBatchSource.GetMoreData() so that the parser could still work with them (it looks like they would need to be complete statements, which would sort of defeat the purpose of the parser in the first place...).
