Dapper with Multiple Records from Stored Procedure - c#

I have seen the answer to this question, How to map multiple records from a single SP with Dapper-dot-net, but it doesn't seem to work for my scenario.
Dummy Tables for illustration...
I have a SP that returns multiple record sets, and the first one looks like...
Column1 (int), Column2 (int)
and I have a class...
public class Columns
{
public int Column1 { get; set; }
public int Column2 { get; set; }
}
Then, I am trying to build a list of columns...
using (var conn = new SqlConnection(...))
{
using (var multi = conn.QueryMultiple("SpData",
commandType: CommandType.StoredProcedure))
{
var cols = multi.Read<Columns>().ToList();
}
}
When I call this, nothing seems to be populated in the cols variable, but I am not getting any errors. Also, if I break and look at what multi contains, I can see the data there. Can somebody see where I am going wrong?

I'm not sure whether to delete the question or just add this as an answer, as I spotted the obvious error (after I woke up).
I wasn't interested in the first two record sets from the SP, so I needed to skip them using Read...
multi.Read();//skip first recordset
multi.Read();//skip second recordset
It might help someone else, otherwise I will delete it soon.
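For completeness, the full pattern then looks roughly like this (a minimal sketch; connectionString is a placeholder, SpData and the Columns class are from the question):
using (var conn = new SqlConnection(connectionString))
{
    using (var multi = conn.QueryMultiple("SpData",
        commandType: CommandType.StoredProcedure))
    {
        multi.Read(); // skip first recordset
        multi.Read(); // skip second recordset
        var cols = multi.Read<Columns>().ToList(); // third recordset maps to Columns
    }
}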

Related

SQLite error Insufficient parameters supplied to the command

When trying to change the Active column in my SubjectTable, I get the error
Error insufficient parameters supplied to the command
and I cannot figure out for the life of me what is wrong with the code. Please help.
private void dataGridView1_SelectionChanged_1(object sender, EventArgs e)
{
SQLiteConnection sqlConnection = new SQLiteConnection();
sqlConnection.ConnectionString = "datasource = SubjectTable.db";
if (dataGridView1.SelectedRows.Count > 0)
{
ID = dataGridView1.SelectedRows[0].Cells[1].Value.ToString();
//Define SELECT Statement
string commandText = "SELECT * FROM SubjectTable WHERE ID=" + ID;
//Create a datatable to save data in memory
var datatable = new DataTable();
SQLiteDataAdapter myDataAdapter = new SQLiteDataAdapter(commandText, sqlConnection);
sqlConnection.Open();
//Fill data from database into datatable
myDataAdapter.Fill(datatable);
//Fill data from datatable into form controls
CMBactive.Text = datatable.Rows[0]["Active"].ToString();
TBsubjectBUD.Text = datatable.Rows[0]["Budget"].ToString();
sqlConnection.Close();
}
}
You are getting these problems because you are trying to do too many things in one procedure. You should separate your concerns.
Separate fetching the data from the database from displaying that data; also separate both of these from the fact that they happen on the SelectionChanged event.
This has the advantage that you can reuse your code more easily: if you want to do the same thing on a Button press, you can reuse the code, and after that, adding a menu item that does the same is a one-liner.
It is easier to test your code if you have a separate method to query the database, or a separate method to fill your controls CMBactive and TBsubjectBUD.
It is easier to change your code: if, for instance, you stop using SQLite and switch to Entity Framework to fetch your data, the displaying and the button handling won't notice; only the procedure that fetches the data needs to change.
This makes unit testing easier: instead of a real database, you can mock the database with a Dictionary for the tests.
Finally: when using Winforms, don't fiddle with the Cells directly, use the DataSource of the DataGridView to fill and read the data. Again: separate the data from how it is displayed.
First, your actual problem: query the data
So you have an Id, and you want to fetch the values of columns Active and Budget from all Subjects from the database that have this Id. Don't fetch properties that you won't use!
Database handling
First we need a class Subject to hold the data that you fetch from table SubjectTable. If you put all columns of this table in it, you can reuse the class for other queries. However, you don't have to fill in all fields; how often you will call this method determines whether it is wise to fill all properties or only some.
Some people don't like this; the alternatives are to always fetch all columns (inefficient) or to create separate classes for different queries (a lot of work).
class Subject
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime StartDate { get; set; }
    public string Active { get; set; }
    public decimal Budget { get; set; }
}
Create a method that fetches the Active and Budget of the Subject with a given Id, or returns null if there is no subject with this Id.
Put all your database queries in a separate class, for instance a class Repository. This hides the fact that the data is in a database: if in the future you want to save it in a CSV file or in JSON format, no one will notice (nice if you want to use it in a unit test!).
private Subject FetchBudgetOrDefault(int id)
{
    const string sqlText = @"SELECT Active, Budget FROM SubjectTable WHERE ID = @Id";
    using (var dbConnection = new SQLiteConnection(this.dbConnectionString))
    {
        using (var dbCommand = dbConnection.CreateCommand())
        {
            dbCommand.CommandText = sqlText;
            dbCommand.Parameters.AddWithValue("@Id", id);
            dbConnection.Open();
            using (var dbReader = dbCommand.ExecuteReader())
            {
                if (dbReader.Read())
                {
                    // There is a Subject with this id:
                    return new Subject()
                    {
                        Id = id,
                        Active = dbReader.GetString(0),
                        Budget = dbReader.GetInt64(1) / 100m,
                    };
                }
                else
                {
                    // no subject with this Id
                    return null;
                }
            }
        }
    }
}
I assumed that the decimal Budget is saved as long * 100 on purpose, to show you that by separating your concerns it is fairly easy to change the database layout without having to change all users: if you want to save this decimal in SQLite as a REAL, then the queries are the only place where you have to change the data.
By the way: this method also solves your problem: Id is an int parameter now, so it can't be an empty string!
If you won't do this query 1000 times a second, consider fetching all columns of Subject. This is a bit less efficient, but easier to test, reuse, and maintain.
Display the fetched data in your form
Currently you display the data in a ComboBox and a TextBox. If you separate your concerns, there will only be one place where you do this. If you want to display the data in a Table, or do something else with it, you only have to change one place:
public void Display(Subject subject)
{
this.comboBoxActive.Text = subject.Active;
this.textBoxBudget.Text = subject.Budget.ToString(...);
}
Bonus points: if you want to change the format of the displayed budget, you'll only have to do this here.
Read and Write the DataGridView
It is seldom a good idea to read and write the cells of a DataGridView directly. It is way too much work: you'll have to do all type checking yourself, and it takes a lot of effort to test and to implement even small changes in the displayed data.
It is way easier to use the DataSource.
In the DataSource of the DataGridView you put a sequence of similar items. If you only want to display the data once, an ICollection<TSource> will be enough (Array, List). If you want changes to be updated automatically, use a BindingList<TSource>.
In the DataGridView, add columns. Use the property DataGridViewColumn.DataPropertyName to indicate which property should be displayed in that column.
Usually it is enough to use visual studio designer to add the columns.
If your DataGridView displays Subjects, the code will look like:
DataGridView dgv1 = new DataGridView();
DataGridViewColumn columnId = new DataGridViewColumn
{
    DataPropertyName = nameof(Subject.Id),
    ...
};
DataGridViewColumn columnName = new DataGridViewColumn
{
    DataPropertyName = nameof(Subject.Name),
    ...
};
... // other columns
dgv1.Columns.Add(columnId);
dgv1.Columns.Add(columnName);
...
In your form's class:
private BindingList<Subject> DisplayedSubjects { get; set; } = new BindingList<Subject>();

// Constructor:
public MyForm()
{
    InitializeComponent();
    this.dgv1.DataSource = this.DisplayedSubjects;
}
void FillDataGridView()
{
    using (var repository = new Repository())
    {
        IEnumerable<Subject> fetchedSubjects = repository.FetchAllSubjects();
        this.DisplayedSubjects = new BindingList<Subject>(fetchedSubjects.ToList());
        // re-point the grid at the new list, otherwise it keeps showing the old one
        this.dgv1.DataSource = this.DisplayedSubjects;
    }
}
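The Repository class itself is only referenced above; a minimal sketch of what it could look like, assuming it owns the connection string (the value shown is a placeholder), that the SubjectTable columns match the Subject properties, and that Budget is stored as cents in a long as in FetchBudgetOrDefault:
class Repository : IDisposable
{
    private readonly string dbConnectionString;

    public Repository(string dbConnectionString = "Data Source=SubjectTable.db")
    {
        this.dbConnectionString = dbConnectionString;
    }

    public IEnumerable<Subject> FetchAllSubjects()
    {
        const string sqlText = @"SELECT Id, Name, StartDate, Active, Budget FROM SubjectTable";
        using (var dbConnection = new SQLiteConnection(this.dbConnectionString))
        using (var dbCommand = dbConnection.CreateCommand())
        {
            dbCommand.CommandText = sqlText;
            dbConnection.Open();
            using (var dbReader = dbCommand.ExecuteReader())
            {
                var subjects = new List<Subject>();
                while (dbReader.Read())
                {
                    subjects.Add(new Subject
                    {
                        Id = dbReader.GetInt32(0),
                        Name = dbReader.GetString(1),
                        StartDate = dbReader.GetDateTime(2),
                        Active = dbReader.GetString(3),
                        Budget = dbReader.GetInt64(4) / 100m,
                    });
                }
                return subjects;
            }
        }
    }

    public void Dispose()
    {
        // nothing to release here yet; present because FillDataGridView uses a using-block
    }
}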
This is all that is needed to display all fetched subjects. If the operator changes any cell value, the corresponding value in this.DisplayedSubjects is automatically updated.
This works both ways: if you change any value in this.DisplayedSubjects, the displayed value in the DataGridView is automatically updated.
No need to read the cells directly. If you allow column reordering, or if you implement row sorting, then everything still works two ways. Because you separated the fetched data from the displayed data, you can change the display without having to change the fetched data.
Put it all together
When you get the event from the DataGridView that the selection has changed, you want to update Active and Budget. Let's do this from the selected item:
void OnSelectionChanged(object sender, EventArgs e)
{
    // Get the selected Subject
    var selectedSubject = this.SelectedSubject;
    this.Display(selectedSubject); // described above
}
Subject SelectedSubject => this.dgv1.SelectedRows.Cast<DataGridViewRow>()
    .Select(row => (Subject)row.DataBoundItem)
    .FirstOrDefault();
Because you separated concerns, each method is easy to understand, easy to test, and easy to reuse and change slightly: if you want to update after a button press or from a menu item, the code will be a one-liner. If you want to display other items than just Active/Budget: small changes. If you want to fetch by Name instead of Id: only limited changes needed.

Simple Sql query conversion to Linq To Sql

I tried to convert this simple SQL query to a LINQ to SQL query
SELECT * INTO temptable from EFESRDP0
ALTER TABLE temptable DROP COLUMN PASSWORD,SECONDPASS
SELECT * FROM temptable
DROP TABLE temptable
But I couldn't. Any help would be appreciated, thank you.
Since Linq to SQL has no equivalent for the table operations you're trying to perform, the short answer is you can't do that.
From the structure of the query though it looks like the following is happening:
All records from EFESRDP0 are added to a previously non-existent table temptable
A few columns are dropped from temptable
The remaining data is returned as a recordset
The temporary table is dropped
Which is a long-winded way of specifying a list of columns to return from the original table, isn't it? Bad SQL shouldn't be turned into even worse LINQ, it should be fixed.
In query syntax the simple form would be:
var results =
from row in context.EFESRDP0
select new { row.ID, row.Name, row.LastLoginTime /* or whatever */ };
This will result in an SQL query similar to:
SELECT ID, Name, LastLoginTime
FROM EFESRDP0;
Which is a whole lot simpler than the SQL you posted and appears to do basically the same thing without all the table gymnastics.
Since your SQL statements effectively return all columns except PASSWORD and SECONDPASS, you can do that with a simple LINQ query like the one Corey gave, creating a new anonymous type. You can also define a type with all the columns except those two so you get a typed result, i.e. this sample returns only 3 columns from the sample Northwind..Customers table:
void Main()
{
    DataContext db = new DataContext(@"server=.\SQLexpress;trusted_connection=yes;database=Northwind");
    Table<Customer> Customers = db.GetTable<Customer>();
    // for LinqPad
    //Customers.Dump();
}
[Table(Name = "Customers")]
public class Customer
{
[Column]
public string CustomerID { get; set; }
[Column]
public string ContactName { get; set; }
[Column]
public string CompanyName { get; set; }
}
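A possible usage of the typed table, to be placed inside Main above where the Dump call sits (only the three mapped columns end up in the generated SELECT):
List<Customer> result = Customers.ToList(); // LINQ to SQL selects only CustomerID, ContactName, CompanyName
foreach (Customer c in result)
{
    Console.WriteLine($"{c.CustomerID}: {c.ContactName} ({c.CompanyName})");
}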

C# Comparing values from 2 tables in SQL Server Database 2005 and displaying in Gridview in VS2005

I am using VS2005 C# and SQL Server Database 2005.
I am trying to compare values between 2 tables.
I am able to retrieve the variable [StudentName] from the tStudent table via a SELECT ... WHERE SQL statement, as follows:
Now, I have another table named StudentDetails. It has 3 columns: StudentName, Address and ContactNum:
The situation is that I want to take the result of the first SQL query on tStudent, which returns a list of Students whose [Status] = DELETED.
And from the list of Students queried, I want to take one Student at a time and search through my [StudentDetails] table.
If it exists in [StudentDetails], I want a way to store the variable [StudentName] from the StudentDetails table and display it in a GridView on my webpage.
(I am open to many solutions here: store in a database, display the result in a GridView, store in an array, etc.)
May I know what ways and steps I can take to achieve this result?
A step-by-step guide and code snippets are very much appreciated, because I am quite weak in C# programming.
You can do it like this:
Use Visual Studio to create a DataSet named StudentDS and create a table named "Student" in this DataSet; this table will contain 3 columns: String StudentName, String Address, String ContactNum.
Fill deleted students into this DataSet:
DataSet dset = new StudentDS();
String connectionString = ""; // depends on your database system, refer to http://www.connectionstrings.com
using (OdbcConnection connection = new OdbcConnection(connectionString))
{
    connection.Open();
    OdbcCommand command = connection.CreateCommand();
    command.CommandText = "select StudentName, Address, ContactNum from tStudent WHERE status = 'DELETED'";
    OdbcDataAdapter da = new OdbcDataAdapter();
    da.SelectCommand = command;
    da.Fill(dset, "Student");
}
After you get this DataSet, you can iterate over its rows to do what you want.
if (dset.Tables[0].Rows != null)
{
    for (int i = 0; i < dset.Tables[0].Rows.Count; i++)
    {
        if (!ExistInStudentDetail(dset.Tables[0].Rows[i]["StudentName"].ToString()))
        {
            dset.Tables[0].Rows.RemoveAt(i);
            i--;
        }
    }
}
// here, bool ExistInStudentDetail(String studentName) is a helper method; you can create the SQL for it in the same way as above.
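A possible sketch of that helper, using the same ODBC approach with a parameterized query (the StudentDetails table and StudentName column names are taken from the question; everything else is an assumption):
private bool ExistInStudentDetail(String studentName)
{
    String connectionString = ""; // same connection string as above
    using (OdbcConnection connection = new OdbcConnection(connectionString))
    {
        connection.Open();
        OdbcCommand command = connection.CreateCommand();
        // ODBC uses positional '?' placeholders; the parameter name itself is ignored
        command.CommandText = "select count(*) from StudentDetails where StudentName = ?";
        command.Parameters.AddWithValue("@StudentName", studentName);
        int count = Convert.ToInt32(command.ExecuteScalar());
        return count > 0;
    }
}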
In your form, add a new DataGridView named "StudentForm", add 1 column to this DataGridView named "StudentName", set its binding property to "StudentName" (the same column name as in the DataSet), and then set the DataSource of this grid:
StudentForm.DataSource = dset.Tables["Student"];
HTH.
This is a fairly simple issue but the scope is pretty large. So here goes:
First, you should really make sure you have unique ID columns in the tables you are searching; this allows you to modify individual rows and be certain you are modifying the correct one. I didn't see any ID columns in the screenshot, so I just wanted to cover this.
Second, I would create a class for students. In it I would create fields or properties for all the information that I want.
class Student
{
public string Name { get; private set; }
public string Address { get; private set; }
public string ContactNum { get; private set; }
}
You can either use a constructor in the above class to fill the properties, or fill in each property from your SELECT; your choice.
Third, I would create a List<Student>; this will be used as your reference list:
List<Student> deletedStudents = SQL Select Statement;
Fourth, I would then create another List<Student> detailedStudents;
Finally, I would compare the two lists and do something when a match is found, as sketched below.
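A minimal sketch of that comparison, assuming both lists have already been filled and Name is the matching key (GridView1 is a placeholder for the grid on the webpage; plain loops are used since VS2005 targets .NET 2.0, which has no LINQ):
// students whose Status was DELETED and that also exist in StudentDetails
List<Student> matches = new List<Student>();
foreach (Student deleted in deletedStudents)
{
    foreach (Student detailed in detailedStudents)
    {
        if (detailed.Name == deleted.Name)
        {
            matches.Add(detailed);
            break;
        }
    }
}

// bind the matches to the grid on the webpage
GridView1.DataSource = matches;
GridView1.DataBind();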

Fast insert relational(normalized) data tables into SQL Server 2008 database

I'm trying to find a better and faster way to insert a pretty massive amount of data (~50K rows) than the LINQ approach I'm using now.
The data I'm trying to write to a local db is a list of ORM-mapped objects, serialized and received from WCF.
I'm keen on using SqlBulkCopy, but the problem is that the tables are normalized and are actually a sequence of interconnected tables with one-to-many relationships.
Here's some code that illustrates my point:
foreach (var meeting in meetingsList)
{
    int meetingId = dbAccess.InsertM(value1, value2...);
    foreach (var competition in meeting.COMPETITIONs)
    {
        int competitionId = dbAccess.InsertC(meetingId, value1, value2...);
        foreach (var competitor in competition.COMPETITORs)
        {
            int competitorId = dbAccess.InsertCO(competitionId, value1, ...);
            // and so on
        }
    }
}
where dbAccess.InsertM looks something like this:
// check if meeting exists
int meetingId = GetMeeting(meeting, date);
if (meetingId == 0)
{
    // if meeting doesn't exist insert new
    var m = new MEETING
    {
        NAME = name,
        DATE = date
    };
    _db.InsertOnSubmit(m);
    _db.SubmitChanges();
}
Thanks in advance for any answers.
Bojan
I would still use SqlBulkCopy to quickly copy your data from the external source into a staging table that has the same (flat) structure as that source (you'll need to create that table ahead of time).
Once it's loaded, you can split up the data across multiple tables using e.g. a stored procedure; that should be pretty fast since everything is on the server already.
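A minimal sketch of that idea, assuming a pre-created flat staging table named MeetingStaging and a stored procedure SplitStaging that distributes the rows into the normalized tables (the staging column names, the NAME properties on COMPETITION/COMPETITOR, and connectionString are assumptions):
// flatten the object graph into one DataTable matching the staging table
var staging = new DataTable("MeetingStaging");
staging.Columns.Add("MeetingName", typeof(string));
staging.Columns.Add("MeetingDate", typeof(DateTime));
staging.Columns.Add("CompetitionName", typeof(string));
staging.Columns.Add("CompetitorName", typeof(string));

foreach (var meeting in meetingsList)
    foreach (var competition in meeting.COMPETITIONs)
        foreach (var competitor in competition.COMPETITORs)
            staging.Rows.Add(meeting.NAME, meeting.DATE, competition.NAME, competitor.NAME);

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    // one bulk insert instead of ~50K individual inserts
    using (var bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "dbo.MeetingStaging";
        bulkCopy.WriteToServer(staging);
    }

    // let the server split the flat rows into the normalized tables
    using (var command = new SqlCommand("dbo.SplitStaging", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        command.ExecuteNonQuery();
    }
}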

SQLException : String or binary data would be truncated

I have C# code which does a lot of insert statements in a batch. While executing these statements, I got the "String or binary data would be truncated" error and the transaction rolled back.
To find out which insert statement caused this, I would need to run the inserts one by one in SQL Server until I hit the error.
Is there a clever way to find out which statement and which field caused this issue using exception handling (SqlException)?
In general, there isn't a way to determine which particular statement caused the error. If you're running several, you could watch profiler and look at the last completed statement and see what the statement after that might be, though I have no idea if that approach is feasible for you.
In any event, one of your parameter variables (and the data inside it) is too large for the field it's trying to store data in. Check your parameter sizes against column sizes and the field(s) in question should be evident pretty quickly.
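One quick way to do that check, assuming the inserts go through a SqlCommand with named parameters (command here is a placeholder for your own command object), is to dump each parameter and the length of its value when the exception fires:
try
{
    command.ExecuteNonQuery();
}
catch (SqlException)
{
    // print every parameter and the length of its value so the oversized one stands out
    foreach (SqlParameter p in command.Parameters)
    {
        string value = p.Value?.ToString() ?? "<null>";
        Console.WriteLine($"{p.ParameterName} (Size={p.Size}): value length {value.Length}");
    }
    throw;
}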
This type of error occurs when the datatype of the SQL Server column has a length which is less than the length of the data entered into the entry form.
This type of error generally occurs when you try to put in more characters or values than you have specified in the database table, as in this case: you specify
transaction_status varchar(10)
but you are actually trying to store
_transaction_status
which contains 19 characters. That's why you faced this type of error in this code.
Generally it means that you are inserting a value that is longer than the maximum allowed. For example, a column can only hold up to 200 characters, but you are inserting a 201-character string.
BEGIN TRY
    INSERT INTO YourTable (col1, col2) VALUES (@val1, @val2)
END TRY
BEGIN CATCH
    --print or insert into error log or return param or etc...
    PRINT '@val1='+ISNULL(CONVERT(varchar,@val1),'')
    PRINT '@val2='+ISNULL(CONVERT(varchar,@val2),'')
END CATCH
For SQL 2016 SP2 or higher follow this link
For older versions of SQL do this:
Get the query that is causing the problems (you can also use SQL Profiler if you don't have the source)
Remove all WHERE clauses and other unimportant parts until you are basically just left with the SELECT and FROM parts
Add WHERE 0 = 1 (this will select only table structure)
Add INTO [MyTempTable] just before the FROM clause
You should end up with something like
SELECT
Col1, Col2, ..., [ColN]
INTO [MyTempTable]
FROM
[Tables etc.]
WHERE 0 = 1
This will create a table called MyTempTable in your DB that you can compare to your target table structure i.e. you can compare the columns on both tables to see where they differ. It is a bit of a workaround but it is the quickest method I have found.
It depends on how you are making the Insert Calls. All as one call, or as individual calls within a transaction? If individual calls, then yes (as you iterate through the calls, catch the one that fails). If one large call, then no. SQL is processing the whole statement, so it's out of the hands of the code.
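A sketch of the individual-calls approach, assuming the batch is available as a sequence of SqlCommand objects (insertCommands and connectionString are placeholders):
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        foreach (SqlCommand command in insertCommands)
        {
            command.Connection = connection;
            command.Transaction = transaction;
            try
            {
                command.ExecuteNonQuery();
            }
            catch (SqlException ex)
            {
                // this is the statement that caused the truncation
                Console.WriteLine($"Failed: {command.CommandText} ({ex.Message})");
                transaction.Rollback();
                throw;
            }
        }
        transaction.Commit();
    }
}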
I have created a simple way of finding offending fields by:
Getting the column width of all the columns of a table where we're trying to make this insert/ update. (I'm getting this info directly from the database.)
Comparing the column widths to the width of the values we're trying to insert/ update.
Assumptions/ Limitations:
The column names of the table in the database match the C# entity fields. For example, if you have a column like this in the database:
You need to have your Entity with the same column name:
public class SomeTable
{
// Other fields
public string SourceData { get; set; }
}
You're inserting/ updating 1 entity at a time. It'll be clearer in the demo code below. (If you're doing bulk inserts/ updates, you might want to either modify it or use some other solution.)
Step 1:
Get the column width of all the columns directly from the database:
// For this, I took help from Microsoft docs website:
// https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlconnection.getschema?view=netframework-4.7.2#System_Data_SqlClient_SqlConnection_GetSchema_System_String_System_String___
private static Dictionary<string, int> GetColumnSizesOfTableFromDatabase(string tableName, string connectionString)
{
var columnSizes = new Dictionary<string, int>();
using (var connection = new SqlConnection(connectionString))
{
// Connect to the database then retrieve the schema information.
connection.Open();
// You can specify the Catalog, Schema, Table Name, Column Name to get the specified column(s).
// You can use four restrictions for Column, so you should create a 4 members array.
String[] columnRestrictions = new String[4];
// For the array, 0-member represents Catalog; 1-member represents Schema;
// 2-member represents Table Name; 3-member represents Column Name.
// Now we specify the Table_Name and Column_Name of the columns what we want to get schema information.
columnRestrictions[2] = tableName;
DataTable allColumnsSchemaTable = connection.GetSchema("Columns", columnRestrictions);
foreach (DataRow row in allColumnsSchemaTable.Rows)
{
var columnName = row.Field<string>("COLUMN_NAME");
//var dataType = row.Field<string>("DATA_TYPE");
var characterMaxLength = row.Field<int?>("CHARACTER_MAXIMUM_LENGTH");
// I'm only capturing columns whose Datatype is "varchar" or "char", i.e. their CHARACTER_MAXIMUM_LENGTH won't be null.
if(characterMaxLength != null)
{
columnSizes.Add(columnName, characterMaxLength.Value);
}
}
connection.Close();
}
return columnSizes;
}
Step 2:
Compare the column widths with the width of the values we're trying to insert/ update:
public static Dictionary<string, string> FindLongBinaryOrStringFields<T>(T entity, string connectionString)
{
var tableName = typeof(T).Name;
Dictionary<string, string> longFields = new Dictionary<string, string>();
var objectProperties = GetProperties(entity);
//var fieldNames = objectProperties.Select(p => p.Name).ToList();
var actualDatabaseColumnSizes = GetColumnSizesOfTableFromDatabase(tableName, connectionString);
foreach (var dbColumn in actualDatabaseColumnSizes)
{
var maxLengthOfThisColumn = dbColumn.Value;
var currentValueOfThisField = objectProperties.Where(f => f.Name == dbColumn.Key).First()?.GetValue(entity, null)?.ToString();
if (!string.IsNullOrEmpty(currentValueOfThisField) && currentValueOfThisField.Length > maxLengthOfThisColumn)
{
longFields.Add(dbColumn.Key, $"'{dbColumn.Key}' column cannot take the value of '{currentValueOfThisField}' because the max length it can take is {maxLengthOfThisColumn}.");
}
}
return longFields;
}
public static List<PropertyInfo> GetProperties<T>(T entity)
{
//The DeclaredOnly flag makes sure you only get properties of the object, not from the classes it derives from.
var properties = entity.GetType()
.GetProperties(System.Reflection.BindingFlags.Public
| System.Reflection.BindingFlags.Instance
| System.Reflection.BindingFlags.DeclaredOnly)
.ToList();
return properties;
}
Demo:
Let's say we're trying to insert someTableEntity of SomeTable class that is modeled in our app like so:
public class SomeTable
{
[Key]
public long TicketID { get; set; }
public string SourceData { get; set; }
}
And it's inside our SomeDbContext like so:
public class SomeDbContext : DbContext
{
    public SomeDbContext(string connectionString) : base(connectionString) { }

    public DbSet<SomeTable> SomeTables { get; set; }
}
This table in Db has SourceData field as varchar(16) like so:
Now we'll try to insert value that is longer than 16 characters into this field and capture this information:
public void SaveSomeTableEntity()
{
var connectionString = "server=SERVER_NAME;database=DB_NAME;User ID=SOME_ID;Password=SOME_PASSWORD;Connection Timeout=200";
using (var context = new SomeDbContext(connectionString))
{
var someTableEntity = new SomeTable()
{
SourceData = "Blah-Blah-Blah-Blah-Blah-Blah"
};
context.SomeTables.Add(someTableEntity);
try
{
context.SaveChanges();
}
catch (Exception ex)
{
if (ex.GetBaseException().Message == "String or binary data would be truncated.\r\nThe statement has been terminated.")
{
var badFieldsReport = "";
List<string> badFields = new List<string>();
// YOU GOT YOUR FIELDS RIGHT HERE:
var longFields = FindLongBinaryOrStringFields(someTableEntity, connectionString);
foreach (var longField in longFields)
{
badFields.Add(longField.Key);
badFieldsReport += longField.Value + "\n";
}
}
else
throw;
}
}
}
The badFieldsReport will have this value:
'SourceData' column cannot take the value of
'Blah-Blah-Blah-Blah-Blah-Blah' because the max length it can take is
16.
It could also be because you're trying to put a null value back into the database, so one of your transactions could have nulls in it.
Most of the answers here are to do the obvious check, that the length of the column as defined in the database isn't smaller than the data you are trying to pass into it.
Several times I have been bitten by going to SQL Management Studio, doing a quick:
sp_help 'mytable'
and be confused for a few minutes until I realize the column in question is an nvarchar, which means the length reported by sp_help is really double the real length supported because it's a double byte (unicode) datatype.
i.e. if sp_help reports nvarchar Length 40, you can store 20 characters max.
Check out this gist:
https://gist.github.com/mrameezraja/9f15ad624e2cba8ac24066cdf271453b
public Dictionary<string, string> GetEvilFields(string tableName, object instance)
{
Dictionary<string, string> result = new Dictionary<string, string>();
var tableType = this.Model.GetEntityTypes().First(c => c.GetTableName().Contains(tableName));
if (tableType != null)
{
int i = 0;
foreach (var property in tableType.GetProperties())
{
var maxlength = property.GetMaxLength();
var prop = instance.GetType().GetProperties().FirstOrDefault(_ => _.Name == property.Name);
if (prop != null)
{
var length = prop.GetValue(instance)?.ToString()?.Length;
if (length > maxlength)
{
result.Add($"{i}.Evil.Property", prop.Name);
result.Add($"{i}.Evil.Value", prop.GetValue(instance)?.ToString());
result.Add($"{i}.Evil.Value.Length", length?.ToString());
result.Add($"{i}.Evil.Db.MaxLength", maxlength?.ToString());
i++;
}
}
}
}
return result;
}
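A hedged usage sketch: assuming GetEvilFields is defined on your EF Core DbContext (it uses this.Model) and reusing the SomeTable names from the earlier demo, you could call it when SaveChanges fails:
try
{
    context.SaveChanges();
}
catch (DbUpdateException)
{
    // ask the context which string properties exceed their configured max length
    var evilFields = context.GetEvilFields("SomeTable", someTableEntity);
    foreach (var entry in evilFields)
    {
        Console.WriteLine($"{entry.Key}: {entry.Value}");
    }
    throw;
}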
With LINQ to SQL I debugged by logging the context, e.g. Context.Log = Console.Out
Then I scanned the SQL to check for any obvious errors; there were two:
-- @p46: Input Char (Size = -1; Prec = 0; Scale = 0) [some long text value1]
-- @p8: Input Char (Size = -1; Prec = 0; Scale = 0) [some long text value2]
The last one I found by scanning the table schema against the values; the field was nvarchar(20) but the value was 22 chars:
-- @p41: Input NVarChar (Size = 4000; Prec = 0; Scale = 0) [1234567890123456789012]
In our own case I increased the allowable character/field size in the SQL table, which was less than the total number of characters posted from the front end. That resolved the issue.
Simply use this:
MessageBox.Show(cmd4.CommandText.ToString());
in C#.NET; this will show you the main query. Copy it and run it in the database.
