I have a SQL query which takes longer than 30 seconds to execute. I'm aware I need to set the CommandTimeout on the command object to overcome this. However, the first place a command object appears is within the 'LoadDataSet' method inside the Enterprise Library.
I don't think I want to be modifying it there.
Could someone please suggest an appropriate place to set it?
Thanks!
Try this:
dcCommand = dDatabase.GetSqlStringCommand(sSQLCommand);
dcCommand.CommandTimeout = 60; // <-- this is the key statement
dDatabase.LoadDataSet(dcCommand, dsDataSet, saTableNames);
instead of this:
dDatabase.LoadDataSet(CommandType.Text, sSQLCommand, dsDataSet, saTableNames);
I started using the Microsoft Enterprise Library a long time ago; in the normal case, the DB operations provided by the Database class cover most needs. Sometimes, though, for a long-running query, a developer wants to set the CommandTimeout property of the SqlCommand (or DbCommand) class, which allows the query to run for as long as the value set in the command timeout.
By default the Data Access Application Block does not accept a simple CommandTimeout parameter in its method calls (there are many workaround samples available on the net). To achieve this with minimal changes, I added a simple method named WithCommandTimeOut, taking a timeOutSeconds parameter, to the Microsoft.Practices.EnterpriseLibrary.Data.Database class; it returns the same Database instance so calls can be chained. Refer to the updated code snippet below. Hope this solves the timeout problem.
// Class-level static variables.
// Used to reset the timeout back to the default after it has been
// applied in the "PrepareCommand" static method.
static int DEFAULT_COMMAND_TIMEOUT_RESET = 30;
// Value in effect when "WithCommandTimeOut" has not been called.
static int COMMAND_TIMEOUT_FOR_THIS_CALL = DEFAULT_COMMAND_TIMEOUT_RESET;
public Database WithCommandTimeOut(int timeOutSeconds)
{
COMMAND_TIMEOUT_FOR_THIS_CALL = timeOutSeconds;
return this;
}
protected static void PrepareCommand(DbCommand command, DbConnection connection)
{
if (command == null) throw new ArgumentNullException("command");
if (connection == null) throw new ArgumentNullException("connection");
//Here is the magical code ----------------------------
command.CommandTimeout = COMMAND_TIMEOUT_FOR_THIS_CALL;
//Here is the magical code ----------------------------
command.Connection = connection;
//Resetting value to default as this is static and subsequent
//db calls should work with default timeout i.e. 30
COMMAND_TIMEOUT_FOR_THIS_CALL = DEFAULT_COMMAND_TIMEOUT_RESET;
}
Ex.:
Database db = EnterpriseLibraryContainer.Current.GetInstance<Database>("SmartSoftware");
db.WithCommandTimeOut(0).ExecuteDataSet(CommandType.Text, query);
I have a WCF service FooService.
My service implements a method LoginAsync which takes a User object as parameters.
public async Task<Token> LoginAsync(User user)
{
var result = await _userManager.GetModelAsync(user.Uname, user.Pword);
if (result != null && result.Id > 0)
return await _tokenManager.GetModelAsync(result);
return null;
}
Inside this method we call _userManager.GetModelAsync(string, string) which is implemented as follows:
public async Task<User> GetModelAsync(string username, string password)
{
var result =
(from m in await _userRepo.GetModelsAsync()
where m.Uname.Equals(username, StringComparison.InvariantCulture)
&& m.Pword.Equals(password, StringComparison.InvariantCulture)
select m).ToList();
if (result.Any() && result.Count == 1)
{
var user = result.First();
user.Pword = null;
return user;
}
return null;
}
To mention it again: this is all server-side code.
I never want my service to send back the Pword field, even though it is not clear text. I just don't want that information to end up in my client-side code.
This is why I set the property to NULL once I have found a User by comparing username and password.
Here's how _userRepo.GetModelsAsync() is implemented (don't confuse _userManager with _userRepo):
public async Task<IList<User>> GetModelsAsync()
{
return await MediaPlayer.GetModelsAsync<User>(_getQuery);
}
private readonly string _getQuery = "SELECT ID, Uname, DateCreated, Pword FROM dbo.[User] WITH(READUNCOMMITTED)";
And here is MediaPlayer.GetModelsAsync<T>(string, params DbParameter[]):
public static async Task<IList<T>> GetModelsAsync<T>(string query, params DbParameter[] parameters)
{
IList<T> models;
using (SqlConnection con = new SqlConnection(Builder.ConnectionString))
using (SqlCommand command = Db.GetCommand(query, CommandType.Text, parameters))
{
await con.OpenAsync();
command.Connection = con;
using (SqlDataReader dr = await command.ExecuteReaderAsync(CommandBehavior.SequentialAccess))
models = ReadModels<T>(dr);
}
return models;
}
This code works fine the first time it is executed after publishing or restarting the service (the service is consumed by a WPF application).
But calling FooService.LoginAsync(User) a second time, without publishing or restarting the service again, throws a NullReferenceException in my _userManager.GetModelAsync LINQ because Pword is NULL.
This is really strange to me because, as you can see, there is no logic that explicitly stores my data in memory.
Normally my code should execute a SQL query every time this method is called, but somehow it doesn't. It seems like WCF does not get its data from my database but instead reuses it from memory.
Can someone explain this behavior to me, and what can I do against it?
Edit 26.09.2018
To add some more details:
The method _userManager.GetModelAsync(string, string) always gets called, and the same goes for _userRepo.GetModelsAsync. I did some file logging at different points in my server-side code. I also took the result of _userRepo.GetModelsAsync, iterated through every object in it, and logged Uname and Pword. Only Pword was NULL (I did this logging before my LINQ).
I also logged the parameters _userManager.GetModelAsync(user.Uname, user.Pword) receives. user.Uname and user.Pword are not NULL.
I just noticed that this question was reposted. My diagnosis is the same:
What I am thinking right now, is that my service keeps my IList with the cleared Pword in memory and uses it the next time without performing a new sql query.
LINQ to SQL (and EF) reuse the same entity objects keyed on primary key. This is a very important feature.
Translate will give you preexisting objects if you use it to query an entity type. You can avoid that by querying with a DTO type (e.g. class UserDTO { public string UserName; }).
It is best practice to treat entity objects as a synchronized mirror of the database. Do not make temporary edits to them.
Make sure that your DataContext has the right scope. Normally, you want one context per HTTP request. All code inside one request should share one context and no context should ever be shared across requests.
So maybe there are two issues: you are modifying entities, and you are reusing a DataContext across requests.
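To illustrate the DTO point with a sketch (UserDto and the context/property names here are hypothetical, not from the question):

```csharp
// Hypothetical DTO: projecting into a fresh type forces LINQ to SQL/EF
// to materialize new objects instead of handing back the identity-mapped
// entity instances it caches per primary key.
public class UserDto
{
    public long Id { get; set; }
    public string UserName { get; set; }
    // Note: no Pword member at all, so it can never leak to the client.
}

// Inside a per-request context scope:
var users = (from u in context.Users
             where u.Uname == username
             select new UserDto { Id = u.ID, UserName = u.Uname }).ToList();
```

Because the projection never reads Pword into the result, there is also no need to null it out afterwards, which avoids mutating the shared entity in the first place.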
Is there a better way than the code below to detect whether the value retrieved from a database differs from the last retrieved value?
I have a feeling that something better than an infinite polling loop is available out there.
public void CheckForNewModificationDate(string username)
{
while(true)
{
OdbcConnection sql = null;
if (!DBClass.Instance.OpenConn(ref sql))
throw new DatabaseConnectionException();
try
{
string query = "SELECT MODIFIED_ON FROM USER_DTLS WHERE USERNAME=?";
using (var cmd = new OdbcCommand(query, sql))
{
cmd.Parameters.Add("USERNAME", OdbcType.VarChar, 50).Value = username;
using (var reader = cmd.ExecuteReader())
{
if (reader.Read())
{
if( OldValue != reader.GetString(0))
{
//use INotifyPropertyChange
}
}
}
}
}
finally
{
DBClass.Instance.CloseConn(ref sql);
}
}
}
Short answer: you would have to employ a polling (looping) mechanism like you suggested.
Or, you could do something crazy with triggers on the database and have the trigger execute a custom function or web service that uses an event bus or WCF to notify your application of a change in data, but I would highly recommend not pursuing this approach.
As recommended by @TimSchmelter, a SqlDependency is the best approach I have found so far. It causes SQL Server to detect changes made to tables associated with a query and fire events based on that:
A SqlDependency object can be associated with a SqlCommand in order to
detect when query results differ from those originally retrieved. You
can also assign a delegate to the OnChange event, which will fire when
the results change for an associated command. You must associate the
SqlDependency with the command before you execute the command. The
HasChanges property of the SqlDependency can also be used to determine
if the query results have changed since the data was first retrieved.
This eliminates the need to have a separate thread with an infinite loop constantly polling to detect changes.
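As a rough sketch of the SqlDependency wiring (the connection string is a placeholder; the database needs Service Broker/query notifications enabled, and query-notification queries must use two-part table names with an explicit column list):

```csharp
using System;
using System.Data.SqlClient;

class ModifiedDateWatcher
{
    const string ConnStr = "...";  // placeholder connection string

    public static void Watch(string username)
    {
        // Start the dependency listener once per application.
        SqlDependency.Start(ConnStr);

        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT MODIFIED_ON FROM dbo.USER_DTLS WHERE USERNAME = @username", conn))
        {
            cmd.Parameters.AddWithValue("@username", username);

            // Associate the dependency with the command BEFORE executing it.
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) =>
            {
                // Fires once when the result set changes; to keep watching,
                // re-run the query with a fresh SqlDependency.
                Console.WriteLine("Change detected: " + e.Info);
            };

            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                if (reader.Read())
                    Console.WriteLine(reader.GetDateTime(0)); // current value
            }
        }
    }
}
```

Note the subscription is one-shot: the OnChange handler typically re-issues the query with a new SqlDependency to re-subscribe.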
I have an application leveraging Entity Framework 6. For queries that are relatively fast (e.g. taking less than a minute to execute) it is working fine.
But I have a stored procedure that queries a table without appropriate indices, so the query has been clocked at anywhere between 55 and 63 seconds. Obviously, indexing the table would bring that time down, but unfortunately I don't have the luxury of controlling the situation and have to play the hand I was dealt.
What I am seeing is that when EF6 is used to call the stored procedure, it continues through the code in less than 3 seconds total and returns a result of 0 records, when I know the sproc returns 6 records when executed directly in the database.
There are no errors whatsoever, so the code appears to execute fine.
As a test, I constructed some code using the SqlClient library and made the same call, and it returned 6 records. I also noted that, unlike the EF6 execution, it took a few seconds longer, as if it were actually waiting to receive a response.
Setting the CommandTimeout on the context doesn't appear to make any difference either, and I suspect that's because it isn't timing out but rather not waiting for the result before continuing through the code.
I don't recall seeing this behavior in prior versions, but then again maybe my prior queries executed within whatever time EF expected.
Is there a way to set the actual time that EF will wait for a response before continuing through the code? Or is there a way to enforce an asynchronous operation, since it seems to run as a synchronous task by default? Or is there a potential flaw in the code?
Sample of Code exhibiting (synchronous) execution: No errors but no records returned
public static List<Orphan> GetOrphanItems()
{
try
{
using (var ctx = new DBEntities(_defaultConnection))
{
var orphanage = from orp in ctx.GetQueueOrphans(null)
select orp;
var orphans = orphanage.Select(o => new Orphan
{
ServiceQueueId = o.ServiceQueueID,
QueueStatus = o.QueueStatus,
OrphanCode = o.OrphanCode,
Status = o.Status,
EmailAddress = o.EmailAddress,
TemplateId = o.TemplateId
}).ToList();
return orphans;
}
}
catch (Exception exc)
{
// Handle the error, then rethrow so the failure isn't silently
// swallowed (otherwise the method has no return value on this path)
throw;
}
}
Sample Code using SqlClient Library (asynchronous) takes slightly longer to execute but returns 6 records
public static List<Orphan> GetOrphanItems()
{
long ServiceQueueId = 0;
bool QueueStatus;
var OrphanCode = String.Empty;
DateTime Status;
var EmailAddress = String.Empty;
int TemplateId = 0;
var orphans = new List<Orphan> ();
SqlConnection conn = new SqlConnection(_defaultConnection);
try
{
var cmdText = "EXEC dbo.GetQueueOrphans";
SqlCommand cmd = new SqlCommand(cmdText, conn);
conn.Open();
SqlDataReader reader;
reader = cmd.ExecuteReader();
while(reader.Read())
{
long.TryParse(reader["ServiceQueueId"].ToString(), out ServiceQueueId);
bool.TryParse(reader["QueueStatus"].ToString(), out QueueStatus);
OrphanCode = reader["OrphanCode"].ToString();
DateTime.TryParse(reader["Status"].ToString(), out Status);
EmailAddress = reader["EmailAddress"].ToString();
int.TryParse(reader["TemplateId"].ToString(), out TemplateId);
orphans.Add(new Orphan { ServiceQueueId = ServiceQueueId, QueueStatus=QueueStatus, OrphanCode=OrphanCode,
EmailAddress=EmailAddress, TemplateId=TemplateId});
}
return orphans;
}
catch (Exception exc)
{
// Handle the error
throw;
}
finally
{
conn.Close();
}
}
Check the signature of the calling method.
private async void MyMethod()
{
db.ExecuteProcedureAsync();
}
Forgetting to await a task in an async void method can cause the described behavior without any IntelliSense warning.
Fix:
private async Task MyMethod()
{
await db.ExecuteProcedureAsync();
}
Or just use db.ExecuteProcedureAsync().Wait() if you want to run in synchronous mode.
OracleCommand cmd =
new OracleCommand("select * from Test WHERE TestFLAG = 1 or TestFLAGis not null", con);
When there is a change to the table, my .NET project receives a notification no matter what the condition is.
The second issue: after I receive a notification the first time, any subsequent changes to the table are not notified. Why?
Any solution for my problem?
public class MyNotificationSample
{
static string constr = "your db INFO";
public static bool IsNotified = false;
static OracleDependency dep = null;
public static void Main(string[] args)
{
//To Run this sample, make sure that the change notification privilege
//is granted to scott.
OracleConnection con = null;
try
{
con = new OracleConnection(constr);
OracleCommand cmd = new OracleCommand("select * from Test WHERE TestFLAG = 1 or TestFLAGis not null", con);
con.Open();
// Set the port number for the listener to listen for the notification
// request
OracleDependency.Port = 1005;
// Create an OracleDependency instance and bind it to an OracleCommand
// instance.
// When an OracleDependency instance is bound to an OracleCommand
// instance, an OracleNotificationRequest is created and is set in the
// OracleCommand's Notification property. This indicates subsequent
// execution of command will register the notification.
// By default, the notification request is using the Database Change
// Notification.
dep = new OracleDependency(cmd);
// Add the event handler to handle the notification. The
// OnMyNotification method will be invoked when a notification message
// is received from the database
dep.OnChange += OnMyNotificaton;
// The notification registration is created and the query result sets
// associated with the command can be invalidated when there is a
// change. When the first notification registration occurs, the
// notification listener is started and the listener port number
// will be 1005.
cmd.ExecuteNonQuery();
}
catch (Exception e)
{
Console.WriteLine(e.Message);
}
con.Close();
Console.Write("Press Any Key to Quit");
Console.ReadLine();
// Loop while waiting for notification
}
public static void OnMyNotificaton(object src,
OracleNotificationEventArgs arg)
{
if (dep.HasChanges)
{
Console.WriteLine("Notification Received");
DataTable changeDetails = arg.Details;
Console.WriteLine("Data has changed in {0}",
changeDetails.Rows[0]["ResourceName"]);
}
}
Latest update: to make the listener never expire:
new OracleDependency(cmd, false, 0, true);
But my query still doesn't work...
add this to your code
cmd.Notification.IsNotifiedOnce = false;
Your query has this WHERE clause: TestFLAG = 1 or TestFLAGis not null.
There's probably a missing space between TestFLAG and is not null. With the space restored, the first part of the expression is unnecessary, since when TestFLAG = 1 it is also not null.
Maybe the problem is that your query covers much more ground than you intended.
Apart from that, Oracle's Database Change Notification feature does not guarantee that you will only get notifications for the rows actually returned by the query. It guarantees that you will get notifications for those rows, but you can also get "false positives", i.e. notifications for rows which do not actually match your query.
This may be a good explanation from the Oracle Docs (emphasis mine):
Query-based registrations have two modes: guaranteed mode and
best-effort mode. In guaranteed mode, any database change notification
ensures that a change occurred to something contained in the queried
result set. However, if a query is complex, then it cannot be
registered in guaranteed mode. Best-effort mode is used in such cases.
Best-effort mode simplifies the query for query-based registration. No
notifications are lost from the simplification. However, the
simplification may cause false positives, as the simpler version's
query result could change when the original query result would
not. There still remain some restrictions on which queries can have
best-effort mode query-based registrations. In such cases, developers
can use object-based registrations, which can register most query
types. Object-based registrations generate notifications when the
query object changes, even if the actual query result does not. This
also means that object-based registrations are more prone to false
positives than query-based registrations. Developers should be aware
of the relative strengths and weaknesses of each database change
notification option and choose the one that best suits their
requirements.
On the second issue, as @user1415516 wrote, you need to set your notification to not get unregistered after the first notification, so add cmd.Notification.IsNotifiedOnce = false; before you execute the command.
I came across this tutorial to understand how to execute SQL scripts with GO statements.
Now I want to know how I can get the output of the Messages tab.
With several GO statements, the output would be like this:
1 rows affected
912 rows affected
...
But server.ConnectionContext.ExecuteNonQuery() can return only an int, while I need all the text. If there is an error in some part of the query, that should go into the output as well.
Any help would be appreciated.
The easiest thing is possibly to just print the number you get back for ExecuteNonQuery:
int rowsAffected = server.ConnectionContext.ExecuteNonQuery(/* ... */);
if (rowsAffected != -1)
{
Console.WriteLine("{0} rows affected.", rowsAffected);
}
This should work, but will not honor the SET NOCOUNT setting of the current session/scope.
Otherwise you would do it like you would do with "plain" ADO.NET. Don't use the ServerConnection.ExecuteNonQuery() method, but create an SqlCommand object by accessing the underlying SqlConnection object. On that subscribe to the StatementCompleted event.
using (SqlCommand command = server.ConnectionContext.SqlConnectionObject.CreateCommand())
{
// Set other properties for "command", like CommandText, etc.
command.StatementCompleted += (s, e) => {
Console.WriteLine("{0} row(s) affected.", e.RecordCount);
};
command.ExecuteNonQuery();
}
Using StatementCompleted (instead, say, manually printing the value that ExecuteNonQuery() returned) has the benefit that it works exactly like SSMS or SQLCMD.EXE would:
For commands that do not have a ROWCOUNT it will not be called at all (e.g. GO, USE).
If SET NOCOUNT ON was set, it will not be called at all.
If SET NOCOUNT OFF was set, it will be called for every statement inside a batch.
(Sidebar: it looks like StatementCompleted is exactly what the TDS protocol talks about when DONE_IN_PROC event is mentioned; see Remarks of the SET NOCOUNT command on MSDN.)
Personally, I have used this approach with success in my own "clone" of SQLCMD.EXE.
UPDATE: It should be noted that this approach (of course) requires you to manually split the input script/statements at the GO separator, because you're back to using SqlCommand.Execute*(), which cannot handle multiple batches at a time. For this, there are multiple options:
Manually split the input on lines starting with GO (caveat: GO can be called like GO 5, for example, to execute the previous batch 5 times).
Use the ManagedBatchParser class/library to help you split the input into single batches, especially implement ICommandExecutor.ProcessBatch with the code above (or something resembling it).
I chose the latter option, which was quite some work, given that it is not very well documented and examples are rare (google a bit and you'll find some stuff, or use Reflector to see how the SMO assemblies use that class).
The benefit (and maybe burden) of using the ManagedBatchParser is that it will also parse all other constructs of T-SQL scripts (intended for SQLCMD.EXE) for you, including :setvar, :connect, :quit, etc. You don't have to implement the respective ICommandExecutor members if your scripts don't use them, of course. But mind you that you may not be able to execute "arbitrary" scripts.
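For contrast, a naive version of the first (manual) option might look like the sketch below. It splits only on lines that consist of GO (optionally GO <n>) and deliberately ignores complications such as GO appearing inside comments or string literals, which is part of why ManagedBatchParser exists:

```csharp
using System.Collections.Generic;
using System.Text.RegularExpressions;

static class BatchSplitter
{
    // Splits a script on batch-separator lines ("GO" or "GO <count>") and
    // yields each batch together with the number of times to execute it.
    public static IEnumerable<(string Batch, int Count)> Split(string script)
    {
        var current = new List<string>();
        foreach (var line in script.Replace("\r\n", "\n").Split('\n'))
        {
            var m = Regex.Match(line.Trim(), @"^GO(\s+(?<n>\d+))?\s*$",
                                RegexOptions.IgnoreCase);
            if (m.Success)
            {
                int count = m.Groups["n"].Success
                    ? int.Parse(m.Groups["n"].Value)
                    : 1;
                if (current.Count > 0)
                    yield return (string.Join("\n", current), count);
                current.Clear();
            }
            else
            {
                current.Add(line);
            }
        }
        if (current.Count > 0)
            yield return (string.Join("\n", current), 1); // trailing batch without GO
    }
}
```

Each yielded batch can then be fed to SqlCommand.ExecuteNonQuery() the indicated number of times.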
Well, where does that leave you? From the "simple question" of how to print "... rows affected" to the fact that it is not trivial to do in a robust and general manner (given the background work required). YMMV, good luck.
Update on ManagedBatchParser Usage
There seems to be no good documentation or example of how to implement IBatchSource, so here is what I went with.
internal abstract class BatchSource : IBatchSource
{
private string m_content;
public void Populate()
{
m_content = GetContent();
}
public void Reset()
{
m_content = null;
}
protected abstract string GetContent();
public ParserAction GetMoreData(ref string str)
{
str = null;
if (m_content != null)
{
str = m_content;
m_content = null;
}
return ParserAction.Continue;
}
}
internal class FileBatchSource : BatchSource
{
private readonly string m_fileName;
public FileBatchSource(string fileName)
{
m_fileName = fileName;
}
protected override string GetContent()
{
return File.ReadAllText(m_fileName);
}
}
internal class StatementBatchSource : BatchSource
{
private readonly string m_statement;
public StatementBatchSource(string statement)
{
m_statement = statement;
}
protected override string GetContent()
{
return m_statement;
}
}
And this is how you would use it:
var source = new StatementBatchSource("SELECT GETUTCDATE()");
source.Populate();
var parser = new Parser();
parser.SetBatchSource(source);
/* other parser.Set*() calls */
parser.Parse();
Note that both implementations, either for direct statements (StatementBatchSource) or for a file (FileBatchSource), have the problem that they read the complete text into memory at once. I had one case where that blew up, having a huge(!) script with gazillions of generated INSERT statements. Even though I don't think that is a practical issue, SQLCMD.EXE could handle it. But for the life of me, I couldn't figure out exactly how you would need to form the chunks returned from IBatchSource.GetMoreData() so that the parser could still work with them (it looks like they would need to be complete statements, which would sort of defeat the purpose of the parser in the first place...).