I have a C# function that persists data to SQL Server through a stored procedure with a little more than 100 parameters. (Yes, I know, that's probably not a good design, but it's legacy code, so let's not get into that here.) ;-)
For some reason this code now comes up with this error:
System.Data.SqlClient.SqlException was unhandled
Message=Error converting data type int to tinyint.
Now, I've taken a first look through the parameter and variable pairs, and there is no obvious culprit. (The parameters are of many data types, not just tinyint.)
So what is the fastest way to determine the rogue variable? And does anybody know of a good way to handle this proactively, either by making the C# side verify each variable against its parameter at run time, or by somehow extending the error message to tell exactly which parameter is failing?
Implementing an ORM is not an option at this stage.
Edit: I've started writing a function CheckParameters that will initially just loop through the parameters and list each one with its corresponding value. I'm thinking it could be extended with actual knowledge of the different data types, but for now I just want it to aid in finding the bad variable.
Structure is as follows:
try
{
    cmdStat.ExecuteNonQuery();
}
catch (SqlException)
{
    CheckParameters(cmdStat.Parameters);
    throw; // re-throw
}

private void CheckParameters(SqlParameterCollection parameterCollection)
{
    foreach (SqlParameter parameter in parameterCollection)
    {
        // Trace.Write has no format-string overload, so format explicitly.
        Trace.WriteLine(string.Format("{0}, {1}, {2}, {3}",
            parameter.ParameterName, parameter.DbType, parameter.SqlDbType, parameter.Value));
    }
}
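One possible extension, just as a sketch (only a few integer types are covered, and the only "knowledge" baked in is the value range of each SQL Server type): a helper that flags values which clearly cannot fit the declared SqlDbType. CheckParameters could call it for each parameter and trace a warning for any it flags.

// Sketch only: flags integral values that cannot fit the parameter's declared
// SqlDbType. Needs System.Data, System.Data.SqlClient and System.Globalization.
// Extend the switch for other types as needed.
private static bool LooksOutOfRange(SqlParameter parameter)
{
    if (parameter.Value == null || parameter.Value == DBNull.Value)
        return false;

    long value;
    if (!long.TryParse(Convert.ToString(parameter.Value, CultureInfo.InvariantCulture), out value))
        return false; // not an integral value, skip it

    switch (parameter.SqlDbType)
    {
        case SqlDbType.TinyInt:  return value < byte.MinValue  || value > byte.MaxValue;   // tinyint is 0..255
        case SqlDbType.SmallInt: return value < short.MinValue || value > short.MaxValue;
        case SqlDbType.Int:      return value < int.MinValue   || value > int.MaxValue;
        default:                 return false;
    }
}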
What I would do is replace the proc with a stub.
Then I'd binary chop: change half the parameters in the stub to a type wide enough to cope with an int. If the error went away, I'd chop that half again and check; if it didn't, I'd do the same with the other half.
I suspect it would take fewer than 10 attempts at most to find it.
I am trying to figure out whether there is a more efficient way than what I'm doing now to build up a message coming in on a serial port and validate that it is the right message before I parse it. A complete message starts with a $ and ends with a CR/LF. I use an event handler to get the characters as they show up at the serial port, so the message will not necessarily come in as one complete block. Just to confuse things, there are a bunch of other messages that come in on the serial port that don't necessarily start with a $ or end with a CR/LF. I want to see those but not parse them.
I understand that concatenating strings is probably not a good idea, so I use a StringBuilder to build the message, and then I use a couple of .ToString() calls to make sure I've got the right message to parse. Do the .ToString() calls generate much garbage? Is there a better way?
I'm not a particularly experienced programmer so thanks for the help.
private void SetText(string text)
{
    //This is the original approach
    //this.rtbIncoming.Text += text;

    //First post the raw data to the console rtb
    rtbIncoming.AppendText(text);

    //Now clean up the text and only post messages to the CPFMessages rtb
    //that start with a $ and end with a LF
    incomingMessage.Append(text);

    //Make sure the message starts with a $
    int stxIndex = incomingMessage.ToString().IndexOf('$');
    if (stxIndex == 0)
    { }
    else
    {
        if (stxIndex > 0)
            incomingMessage.Remove(0, stxIndex);
    }

    //If the message is terminated with a LF: 1) post it to the CPFMessage textbox,
    //                                        2) remove it from incomingMessage,
    //                                        3) parse and display fields
    int etxIndex = incomingMessage.ToString().IndexOf('\n');
    if (etxIndex >= 0)
    {
        rtbCPFMessages.AppendText(incomingMessage.ToString(0, etxIndex));
        incomingMessage.Remove(0, etxIndex);
        parseCPFMessage();
    }
}
Do the .ToString calls generate much garbage?
Every time you call ToString(), you get a new String object instance. Whether that's "much garbage" depends on your definition of "much garbage" and what you do with those instances.
Is there a better way?
You can inspect the contents of StringBuilder directly, but you'll have to write your own methods to do that. You could use state-machine-based techniques to monitor the stream of data.
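For example, here is one rough sketch (names invented, and not tested against your exact message framing) that scans the StringBuilder through its indexer and only materializes a string once a complete message is present:

// Sketch only: scan the StringBuilder with its indexer instead of calling
// ToString() on the whole buffer for every chunk of serial data.
private static string TryExtractMessage(StringBuilder buffer)
{
    int start = -1;
    for (int i = 0; i < buffer.Length; i++)
    {
        char c = buffer[i];
        if (start < 0)
        {
            if (c == '$')
                start = i;            // first character of a candidate message
        }
        else if (c == '\n')
        {
            // Found "$ ... \n": copy just that slice and drop it from the buffer.
            string message = buffer.ToString(start, i - start + 1);
            buffer.Remove(0, i + 1);
            return message;
        }
    }

    // No complete message yet; optionally trim junk before the first '$'.
    if (start > 0)
        buffer.Remove(0, start);
    return null;
}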
Whether any of that would be "better" than your current implementation depends on a number of factors, including but not limited to:
Are you seeing a specific performance issue now?
If so, what specific performance goal are you trying to achieve?
What other overhead exists in your code?
The first question above is very important. Your first priority should be code that works. If your code is working now, and does not have a specific performance issue that you know you need to solve, then you can safely ignore the GC issues for now. .NET's GC system is designed to perform well in scenarios just like this one, and usually will. Only in unusual situations would you need to do extra work to solve a performance problem here.
Without a good, minimal, complete code example that clearly illustrates the above and any other relevant issues, it would not be possible to say with any specificity whether there is in fact "a better way". If the above answers don't provide the information you're looking for, consider improving your question so that it is not so broad.
Is there any side effect of passing an extra argument to the string.Format function in C#? I was looking at the string.Format documentation on MSDN ( http://msdn.microsoft.com/en-us/library/b1csw23d.aspx) but was unable to find an answer.
E.g.:
string str = string.Format("Hello_{0}", 255, 555);
Now, as you can see, according to the format string we are supposed to pass only one argument after it, but I have passed two.
EDIT:
I have tried it on my end and everything looks fine to me. Since I am new to C# and come from a C background, I just want to make sure that it will not cause any problems later on.
Looking in Reflector, it will allocate a little more memory for building the string, but there's no massive repercussion for passing in an extra object.
There's also the "side effect" that, if you accidentally included a {n} in your format string where n was too large, and then added some spare arguments, you'd no longer get an exception but would instead get a string with unexpected items in it.
If you look at the exceptions section of the link you provided for string.Format:
"The index of a format item is less than zero, or greater than or equal to the length of the args array."
Microsoft doesn't indicate that it can throw if you have too many arguments, so it won't. The effect is a small waste of memory due to an unused parameter.
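A quick illustration of both points (just a sketch, values arbitrary):

// Extra arguments are simply ignored; only a format item that references a
// missing index causes a FormatException.
string ok = string.Format("Hello_{0}", 255, 555);   // "Hello_255" - the 555 is never used
Console.WriteLine(ok);

// string bad = string.Format("Hello_{0}_{1}", 255); // this one would throw FormatException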
Here are two screenshots from when I try to debug my code in Visual Studio 2005.
I want to save the string value in the variable check into the variable a, but it saves -1 rather than the actual string, which is something like "<username>admin</username>".
If you want to save the value of check in a, then your assignment is the wrong way round. Currently it's converting the value of a to a string, and storing the result in check.
Of course, you haven't specified the type of a - it may be converted to a string one way in the debugger, but the actual ToString method may be overridden to do something different.
If you actually meant to describe the question the other way round, you need to provide a lot more information - a short but complete program to demonstrate the problem would be ideal.
String assignment very definitely works in C# - so the chances are incredibly high that you're doing something strange in the code that you haven't shown us.
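Purely as a hypothetical reconstruction (the real code is only visible in the screenshots), the difference would look something like this:

// Hypothetical example - variable names taken from the question.
string check = "<username>admin</username>";
object a = -1;

// check = a.ToString();   // wrong way round: overwrites check with "-1"
a = check;                 // copies the string held in check into a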
Today I read an article which said that we should always use TryParse(string, out MMM) for conversion rather than Convert.ToMMM().
I agree with the article, but then I got stuck on one scenario.
What if there will always be a valid value in the string? Then we could also use Convert.ToMMM(), because it won't throw any exception in that case.
What I would like to know is: is there any performance impact to using TryParse when I know the value is always going to be valid and I could use Convert.ToMMM() rather than TryParse(string, out MMM)?
What do you think?
If you know the value can be converted, just use Parse(). If you 'know' that it can be converted, and it can't, then an exception being thrown is a good thing.
EDIT: Note, this is in comparison to using TryParse or Convert without error checking. If you use either of the other methods with proper error checking then the point is moot. I'm just worried about your assumption that you know the value can be converted. If you want to skip the error checking, use Parse and die immediately on failure rather than possibly continuing and corrupting data.
When the input to TryParse/Convert.ToXXX comes from user input, I'd always use TryParse. In case of database values, I'd check why you get strings from the database (maybe bad design?). If string values can be entered in the database columns, I'd also use TryParse as you can never be sure that nobody modifies data manually.
EDIT
Reading Matthew's reply: If you are unsure and would wrap the conversion in a try-catch-block anyway, you might consider using TryParse as it is said to be way faster than doing a try-catch in that case.
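For reference, these are the two patterns being compared (a sketch; input is just a placeholder string):

string input = "123";

// TryParse: no exception on bad input, just a false return value.
int value;
if (int.TryParse(input, out value))
{
    // use value
}

// Parse wrapped in try/catch: much more expensive when the parse actually fails.
try
{
    value = int.Parse(input);
}
catch (FormatException)
{
    // handle bad input
}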
There is a significant difference depending on the approach you use.
Convert: converts a "primitive" value into another type and corresponding format, with multiple options.
Case in point: converting an integer into its bit-by-bit representation, or a hexadecimal number (as a string) into an integer, etc.
Error messages: conversion-specific error messages, for problems in multiple cases and at multiple stages of the conversion process.
TryParse: exception-free transfer from one data format to another, returning true/false to indicate whether the conversion was possible.
Error messages: none.
NB: if the parse fails, the out variable still receives a value - the default of the type we are trying to parse into.
Parse: in essence, takes data in one format and transfers it into another. No representation options and nothing fancy.
Error messages: format-oriented.
P.S. Correct me if I missed something or did not explain it well enough.
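A small sketch of the behavioural differences described above:

int a = int.Parse("42");                 // 42; int.Parse("abc") throws FormatException,
                                         // int.Parse(null) throws ArgumentNullException

int b;
bool ok = int.TryParse("abc", out b);    // ok == false, b == 0 (default of int), no exception

int c = Convert.ToInt32("42");           // 42; Convert.ToInt32("abc") throws FormatException,
int d = Convert.ToInt32((string)null);   // but a null string returns 0 instead of throwing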
I'm constantly running up against this when writing queries with LINQ-to-XML: the Value property of an XElement is a string, but the data may actually be an integer, boolean, etc.
Let's say I have a "where" clause in my query that checks if an ID stored in an XElement matches a local (integer) variable called "id". There are two ways I could do this.
1. Convert "id" to string
string idString = id.ToString();

IEnumerable<XElement> elements =
    from b in TableDictionary["bicycles"].Elements()
    where b.Element(_ns + "id").Value == idString
    select b;
2. Convert element value to int
IEnumerable<XElement> elements =
    from b in TableDictionary["bicycles"].Elements()
    where int.Parse(b.Element(_ns + "id").Value) == id
    select b;
I like option 2 because it does the comparison on the correct type. Technically, I could see a scenario where converting a decimal or double to a string would cause me to compare "1.0" to "1" (which would be unequal) versus Decimal(1.0) to Decimal(1) (which would be equal). Although a where clause involving decimals is probably pretty rare, I could see an OrderBy on a decimal column--in that case, this would be a very real issue.
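To make that concrete:

Console.WriteLine("1.0" == "1");   // False - compared as strings
Console.WriteLine(1.0m == 1m);     // True  - compared as decimal values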
A potential downside of this strategy, however, is that parsing tons of strings in a query could result in a performance hit (although I have no idea if it would be significant for a typical query). It might be more efficient to only parse element values when there is a risk that a string comparison would result in a different result than a comparison of the correct value type.
So, do you parse your element values religiously or only when necessary? Why?
Thanks!
EDIT:
I discovered a much less cumbersome syntax for doing the conversion.
3. Cast element to int
IEnumerable<XElement> elements =
    from b in TableDictionary["bicycles"].Elements()
    where (int)b.Element(_ns + "id") == id
    select b;
I think this will be my preferred method from now on...unless someone talks me out of it :)
EDIT II:
It occurred to me since posting my question that: THIS IS XML. If I really had enough data for performance to be an issue, I would probably be using a real database. So, yet another reason to go with casting.
It's difficult to assess the performance issues here without measuring, but I think you have two scenarios.
If you need to use most (or all) of the values in an expression sooner or later, then it is probably best to pay the CPU costs of converting to native types up front - discarding the XML string data early.
If you are only going to touch (evaluate or use) a few of the values, then it will most likely be cheaper in terms of CPU time to convert the string data to native types lazily, at (or close to) the time of consumption.
Now, this is just the CPU time considerations. I suggest that it is likely that the data itself will take up considerably less memory once converted to native value types. This lets you discard the string (XML) data early.
In short, it is rare for questions like this to have black or white answers: it will depend on your scenario, the complexity of the data, how much data there is, and when it will be used (touched or evaluated).
Update
In Dan's comment on my original answer, he asks for a general rule of thumb for cases where there is no time, or no reason, to do detailed measurements.
My suggestion is to prefer conversion to native types at XML parsing time, rather than keeping the string data around and parsing it lazily. Here is my reasoning:
The code will already be burning some CPU, I/O, and memory resources at parsing time.
The code is likely to be simpler if it does the conversions at load time (rather than at another time), as this can all be coded in a simple procedural way.
This is likely to be more memory efficient as well.
When the data needs to be used, it is already in a native format - this will be much better performing than dealing with string data at consumption time: comparisons and computation with native types will usually be much more efficient than dealing with data in string format. This is likely to keep the consuming code simpler as well.
Again, I'm suggesting this as a rule of thumb :) There will be scenarios where another approach is more optimal from a performance standpoint, or will make the code 'better' in some way (more cohesive, modular, easier to maintain, etc).
This is one of those cases where you will most likely need to measure the results to be sure you are doing the right thing.
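To make the rule of thumb concrete, here is a sketch of converting at load time (the Bicycle class and the price element are invented for illustration; TableDictionary and _ns are from the question):

class Bicycle
{
    public int Id { get; set; }
    public decimal Price { get; set; }
}

// Convert the XML strings to native types once, up front, then query the
// strongly typed list from here on.
List<Bicycle> bicycles =
    (from b in TableDictionary["bicycles"].Elements()
     select new Bicycle
     {
         Id = (int)b.Element(_ns + "id"),
         Price = (decimal)b.Element(_ns + "price")
     }).ToList();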
I agree with your second edit. If performance is an issue, you will gain much more by using a more queryable data structure (or just cache a dictionary by ID from your XML for repeated lookups) than by changing how you compare/parse values.
That said, my preference would be to use the various explicit conversion operators defined on XElement. Also, if your ID element could ever be missing (better safe than sorry), you can also do an efficient cast to int?.
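For example (a sketch; the nullable cast yields null when the id element is missing, so a fallback can be supplied inline):

IEnumerable<XElement> elements =
    from b in TableDictionary["bicycles"].Elements()
    where ((int?)b.Element(_ns + "id") ?? -1) == id
    select b;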