I am having a problem checking two values in my database at the same time, on the same row. My table has a composite primary key (Date and TagNumber), and before inserting any new data I want to check for duplicate records.
I need to make sure I am not inserting any new data with the same date and the same tag number.
For example: Current Record
Date: 25/03/2015
TagNumber:111
When new data is available I need to check that the Date and the TagNumber do not already exist on another record (as this would be a duplicate).
So if the new data is
Date:25/03/2015
TagNumber:111
This record would already exist and would skip inserting a new record. However if the new data was:
Date:27/03/2015
TagNumber:111
This would be a new record and would proceed to insert the data.
Code:
foreach (DataGridViewRow row in dataGridView1.Rows)
{
    string constring = @"Data Source=(LocalDB)\MSSQLLocalDB;AttachDbFilename=C:\Users\koni\Documents\Visual Studio 2015\Projects\t\Project\DB.mdf;Integrated Security=True";
    using (SqlConnection con = new SqlConnection(constring))
    {
        using (SqlCommand sqlCommand = new SqlCommand("SELECT * from ResultsTable where TagNumber=@TagNumber AND Date=@Date", con))
        {
            con.Open();
            string smdt1 = row.Cells["Exposure Date"].Value.ToString();
            string format1 = "dd.MM.yyyy";
            DateTime dt1 = DateTime.ParseExact(smdt1, format1, CultureInfo.InvariantCulture, DateTimeStyles.AssumeUniversal);
            sqlCommand.Parameters.AddWithValue("@Date", dt1);
            sqlCommand.Parameters.AddWithValue("@TagNumber", row.Cells["Device #"].Value);
        }
    }
}
I have already tried ExecuteScalar(), but it did not work properly - it only worked with one parameter.
Firstly, it's not really clear what's in this table or what your data types are. Let's assume your data types are TagNumber: int and Date: datetime.
Next, your problem is probably with the date field.
DateTime dt1 = DateTime.ParseExact(smdt1, format1, CultureInfo.InvariantCulture, DateTimeStyles.AssumeUniversal);
Will parse the date as you would expect. However, it also returns a time. So in your query the @Date parameter will automatically include the time portion as well (place a breakpoint and have a look). Because you supplied DateTimeStyles.AssumeUniversal, the time is set to 00:00:00 UTC, which is then translated to the current time zone (being here in Australia puts that at 10:30:00).
sqlCommand.Parameters.AddWithValue("@Date", dt1.Date); // parsed date at midnight 00:00:00
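To see that shift for yourself, here is a minimal console sketch (the hard-coded date and the printed local time are only illustrative; the offset depends on your machine's time zone):

using System;
using System.Globalization;

class ParseDemo
{
    static void Main()
    {
        // Parse the grid value exactly as the original code does.
        DateTime dt1 = DateTime.ParseExact(
            "25.03.2015", "dd.MM.yyyy",
            CultureInfo.InvariantCulture, DateTimeStyles.AssumeUniversal);

        // AssumeUniversal means "midnight UTC", which is then converted to local time,
        // so the time portion is usually not 00:00:00 on your machine.
        Console.WriteLine(dt1);      // e.g. 25/03/2015 10:30:00 AM in Adelaide
        Console.WriteLine(dt1.Date); // 25/03/2015 12:00:00 AM - the local date at midnight
    }
}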
Now, IMHO, using a stored procedure would be your best bet, as you can check and insert with a single call.
A sample procedure:
CREATE PROCEDURE InsertNewRecord
    @TagNumber int,
    @Date datetime
AS
BEGIN
    SET NOCOUNT ON;
    IF NOT EXISTS (SELECT TOP 1 1 FROM ResultsTable WHERE [Date] = @Date AND TagNumber = @TagNumber)
        INSERT INTO ResultsTable (TagNumber, [Date]) VALUES (@TagNumber, @Date)
END
GO
Next, you can easily call this (note: just using test data).
var tagNumber = "111";
var date = DateTime.ParseExact("28.01.2017", "dd.MM.yyyy", CultureInfo.InvariantCulture, DateTimeStyles.AssumeUniversal);
using (var con = new SqlConnection(connectionString))
{
    using (var cmd = new SqlCommand("EXEC InsertNewRecord @TagNumber, @Date", con))
    {
        cmd.Parameters.AddWithValue("@TagNumber", tagNumber);
        cmd.Parameters.AddWithValue("@Date", date.Date);
        con.Open();
        cmd.ExecuteNonQuery();
        con.Close();
    }
}
As you can see from the stored procedure, we simply query first (using NOT EXISTS and selecting a true result, limited to a single row for performance). SELECT TOP 1 1 FROM ... returns a single row containing 1 if both the tag number and date already exist on a record.
Now, you could also change your column's data type from datetime to date, which eliminates the time portion of your @Date parameter. However, this requires you to ensure your data is clean, and the table would have to be rebuilt.
One final option is to cast your datetime field to a date in the query and change the @Date parameter to type date, then compare the two, like so:
ALTER PROCEDURE InsertNewRecord
    @TagNumber int,
    @Date date
AS
BEGIN
    SET NOCOUNT ON;
    IF NOT EXISTS (SELECT TOP 1 1 FROM ResultsTable WHERE CAST([Date] AS date) = @Date AND TagNumber = @TagNumber)
        INSERT INTO ResultsTable (TagNumber, [Date]) VALUES (@TagNumber, @Date)
END
GO
For completeness, if for some reason you don't want to use a stored procedure, the following will check whether the record exists (note the use of the .Date property).
using (var con = new SqlConnection(connectionString))
{
    bool exists = false;
    using (var cmd = new SqlCommand("SELECT TOP 1 1 FROM ResultsTable WHERE TagNumber=@TagNumber AND [Date]=@Date", con))
    {
        cmd.Parameters.AddWithValue("@TagNumber", tagNumber);
        cmd.Parameters.AddWithValue("@Date", date.Date);
        con.Open();
        var result = cmd.ExecuteScalar(); // returns null (object) if the record doesn't exist
        con.Close();
        exists = result != null; // result will be 1 or null
    }
    if (!exists)
    {
        // INSERT RECORD
    }
}
Either way, I would say the issue lies in the time portion of the data; however, without more information we can only guess.
This should be done on the SQL side. Pass your parameters to a stored procedure that checks whether a record already exists in the table and, if so, either returns an error or discards the record. If it doesn't exist, insert it. You can't reliably do this on the client side, as you won't have the full table in memory.
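If the caller needs to know whether the row was actually inserted or silently discarded, one possible sketch (reusing the InsertNewRecord procedure shown earlier and assuming it is changed to end with RETURN @@ROWCOUNT) reads the procedure's return value:

// Assumes InsertNewRecord ends with "RETURN @@ROWCOUNT": 1 if a row was inserted, 0 if it was skipped.
using (var con = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("InsertNewRecord", con))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@TagNumber", tagNumber);
    cmd.Parameters.AddWithValue("@Date", date.Date);

    var returnValue = cmd.Parameters.Add("@ReturnVal", SqlDbType.Int);
    returnValue.Direction = ParameterDirection.ReturnValue;

    con.Open();
    cmd.ExecuteNonQuery();

    bool inserted = (int)returnValue.Value == 1;
}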
As @Nico explained, creating a stored procedure is a better way of doing this.
I have a procedure which updates some records. When I execute it I get the following exception
"String or binary data would be truncated.\r\nThe statement has been terminated."
I found that this occurs when a parameter's length is larger than the variable's length. I checked again after changing the sizes, but it didn't work; I got the same exception again. How can I solve this? Please help.
Here is my code for the update:
        bool isFinished = dba.update(desingnation, title, initials, surname, fullname, callingName, civilSatatus, natinality, nic, birthday, passport,
            hometp, mobiletp, province, district, division, electorate, gramaNiladhari, takafull, p_city,
            c_city, p_hno, c_hno, tokens_P, tokens_C, previousEmployeements, bank, branch, type, account, gender, educatinalQ, languageE, languageS, languageT, empNo, appNo);
        if (isFinished)
        {
            WebMsgBox.Show("Successfully Inserted!");
        }
        else
        {
            WebMsgBox.Show("Some Errors Occured");
        }
    }
    else
    {
        WebMsgBox.Show("Some feilds are not valid");
    }
}
This is the code for passing parameters to the stored procedure:
try
{
    using (SqlCommand cmd = new SqlCommand())
    {
        cmd.CommandType = CommandType.Text;
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Connection = connection;
        cmd.CommandTimeout = 0;
        cmd.Transaction = transactions;

        /*=======================Update employee details================================*/
        cmd.CommandText = "update_HS_HR_EMPLOYEE_AADM";
        cmd.Parameters.Add("@appNo", SqlDbType.Int).Value = appNo;
        cmd.Parameters.Add("@CALLING_NAME", SqlDbType.VarChar).Value = callingName;
        cmd.Parameters.Add("@INITIALS", SqlDbType.VarChar).Value = initials;
        cmd.Parameters.Add("@SURNAME", SqlDbType.VarChar).Value = surname;
        cmd.Parameters.Add("@TITLE", SqlDbType.VarChar).Value = title;
        cmd.Parameters.Add("@NAME", SqlDbType.VarChar).Value = fullname;
        cmd.Parameters.Add("@FULLNAME", SqlDbType.VarChar).Value = fullname + " " + surname;
        cmd.Parameters.Add("@NIC", SqlDbType.VarChar).Value = nic;
        cmd.Parameters.Add("@BDY", SqlDbType.VarChar).Value = birthday;
        cmd.Parameters.Add("@GENDER", SqlDbType.VarChar).Value = gender;
        cmd.Parameters.Add("@NATIONALITY", SqlDbType.VarChar).Value = natinality;
        cmd.Parameters.Add("@CIVILSTATUS", SqlDbType.VarChar).Value = civilSatatus;
        cmd.Parameters.Add("@DESIGNATION", SqlDbType.VarChar).Value = desingnation;
        cmd.Parameters.Add("@P_ADD1", SqlDbType.VarChar).Value = p_hno;
        cmd.Parameters.Add("@P_ADD2", SqlDbType.VarChar).Value = tokens_P[0];
        if (tokens_P.Length > 1)
            cmd.Parameters.Add("@P_ADD3", SqlDbType.VarChar).Value = tokens_P[1];
        else
            cmd.Parameters.Add("@P_ADD3", SqlDbType.VarChar).Value = "";
        cmd.Parameters.Add("@P_CITY", SqlDbType.VarChar).Value = p_city;
        cmd.Parameters.Add("@TP_HOME", SqlDbType.VarChar).Value = hometp;
        cmd.Parameters.Add("@TP_MOBILE", SqlDbType.VarChar).Value = mobiletp;
        cmd.Parameters.Add("@PROVINCE", SqlDbType.VarChar).Value = province;
        cmd.Parameters.Add("@DISTRICT", SqlDbType.VarChar).Value = district;
        cmd.Parameters.Add("@C_ADD1", SqlDbType.VarChar).Value = c_hno;
        cmd.Parameters.Add("@C_ADD2", SqlDbType.VarChar).Value = tokens_C[0];
        cmd.Parameters.Add("@PER_GNDIV_CODE", SqlDbType.VarChar).Value = gramaNiladhari;
        cmd.Parameters.Add("@PER_DSDIV_CODE", SqlDbType.VarChar).Value = division;
        cmd.Parameters.Add("@TAKAFUL", SqlDbType.VarChar).Value = takafull;
        cmd.Parameters.Add("@PASSPORT_NO", SqlDbType.VarChar).Value = passport;
        if (tokens_C.Length > 1)
            cmd.Parameters.Add("@C_ADD3", SqlDbType.VarChar).Value = tokens_C[1];
        else
            cmd.Parameters.Add("@C_ADD3", SqlDbType.VarChar).Value = "";
        cmd.Parameters.Add("@C_CITY", SqlDbType.VarChar).Value = c_city;
        cmd.Parameters.Add("@ELECTORATE", SqlDbType.VarChar).Value = electorate;
        //int appNO = int.Parse((cmd.ExecuteScalar().ToString()));
        cmd.ExecuteNonQuery();
        cmd.Parameters.Clear();
    }
}
This is the stored procedure
ALTER PROCEDURE [dbo].[update_HS_HR_EMPLOYEE_AADM]
    @appNo Int,
    @CALLING_NAME VARCHAR(50),
    @INITIALS VARCHAR(50),
    @SURNAME VARCHAR(50),
    @TITLE VARCHAR(50),
    @NAME VARCHAR(50),
    @FULLNAME VARCHAR(100),
    @NIC VARCHAR(15),
    @BDY VARCHAR(50),
    @GENDER CHAR(1),
    @NATIONALITY VARCHAR(50),
    @CIVILSTATUS VARCHAR(50),
    @DESIGNATION VARCHAR(50),
    @P_ADD1 VARCHAR(50),
    @P_ADD2 VARCHAR(50),
    @P_ADD3 VARCHAR(50),
    @P_CITY VARCHAR(50),
    @TP_HOME VARCHAR(50),
    @TP_MOBILE VARCHAR(50),
    @PROVINCE VARCHAR(50),
    @DISTRICT VARCHAR(50),
    @C_ADD1 VARCHAR(50),
    @C_ADD2 VARCHAR(50),
    @C_ADD3 VARCHAR(50),
    @C_CITY VARCHAR(50),
    @ELECTORATE VARCHAR(50),
    @PER_GNDIV_CODE VARCHAR(50),
    @PER_DSDIV_CODE VARCHAR(50),
    @TAKAFUL VARCHAR(50),
    @PASSPORT_NO VARCHAR(50)
AS
BEGIN
    UPDATE [HS_HR_EMPLOYEE_AADM]
    SET
        [EMP_CALLING_NAME]=@CALLING_NAME
        ,[EMP_MIDDLE_INI]=@INITIALS
        ,[EMP_SURNAME]=@SURNAME
        ,[EMP_TITLE]=@TITLE
        ,[EMP_NAMES_BY_INI]=@NAME
        ,[EMP_FULLNAME]=@FULLNAME
        ,[EMP_NIC_NO]=@NIC
        ,[EMP_BIRTHDAY]=@BDY
        ,[EMP_GENDER]=@GENDER
        ,[NAT_CODE]=@NATIONALITY
        ,[EMP_MARITAL_STATUS]=@CIVILSTATUS
        ,[EMP_DATE_JOINED]=GETDATE()
        ,[EMP_CONFIRM_FLG]=0
        ,[CT_CODE]='000008'
        ,[DSG_CODE]=@DESIGNATION
        ,[CAT_CODE]='000001'
        ,[EMP_PER_ADDRESS1]=@P_ADD1
        ,[EMP_PER_ADDRESS2]=@P_ADD2
        ,[EMP_PER_ADDRESS3]=@P_ADD3
        ,[EMP_PER_CITY]=@P_CITY
        ,[EMP_PER_TELEPHONE]=@TP_HOME
        ,[EMP_PER_MOBILE]=@TP_MOBILE
        ,[EMP_PER_PROVINCE_CODE]=@PROVINCE
        ,[EMP_PER_DISTRICT_CODE]=@DISTRICT
        ,[EMP_TEM_ADDRESS1]=@C_ADD1
        ,[EMP_TEM_ADDRESS2]=@C_ADD2
        ,[EMP_PER_ELECTORATE_CODE]=@ELECTORATE
        ,[EMP_TEM_ADDRESS3]=@C_ADD3
        ,[EMP_TEM_CITY]=@C_CITY
        ,[EMP_PER_GNDIV_CODE]=@PER_GNDIV_CODE
        ,[EMP_PER_DSDIV_CODE]=@PER_DSDIV_CODE
        ,[EMP_PASSPORT_NO]=@TAKAFUL
        ,[EMP_TAK]=@PASSPORT_NO
    WHERE App_no = @appNo
END
Specify the varchar size in SqlDbType.VarChar in your C# code, matching the size specified in the stored procedure, e.g.
cmd.Parameters.Add("@CALLING_NAME", SqlDbType.VarChar, 50).Value = callingName;
corresponding to the parameter @CALLING_NAME VARCHAR(50) in the stored procedure.
This ensures that the size is not exceeded when the value is passed to the stored procedure.
If no length is specified for a string parameter, ADO.NET picks an arbitrary length, which may exceed the size specified in the stored procedure's VARCHAR parameters.
Also, on the front end, ensure that the number of characters entered in the textboxes does not exceed the corresponding parameter's size.
This can be done using the MaxLength attribute, or by prompting the user with a message using jQuery/JavaScript if the size is exceeded.
Do the same for the other parameters and check again.
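To keep those sizes in one place rather than repeating them at every call site, a small helper along these lines can be used (a sketch; the extension method name is my own):

using System;
using System.Data;
using System.Data.SqlClient;

public static class SqlParameterExtensions
{
    // Adds a VARCHAR parameter with an explicit length, so ADO.NET sends
    // exactly the size the stored procedure declares.
    public static SqlParameter AddVarChar(this SqlCommand cmd, string name, string value, int size)
    {
        var p = cmd.Parameters.Add(name, SqlDbType.VarChar, size);
        p.Value = (object)value ?? DBNull.Value;
        return p;
    }
}

Calls then become cmd.AddVarChar("@CALLING_NAME", callingName, 50);, mirroring @CALLING_NAME VARCHAR(50) in the procedure.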
The specified error, "String or binary data would be truncated.\r\nThe statement has been terminated.", appears when you try to insert a value that is larger than the declared size of the column. Looking at the given procedure, we can't see the sizes of the underlying table columns, so it would be better to cross-check the column sizes against the values you are passing.
I can say that @GENDER may cause a similar issue: it is defined as @GENDER CHAR(1) in the procedure, but you are taking a string in the method and passing it as SqlDbType.VarChar. For that particular field you should pass the value as a single character.
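For that one parameter, a hedged sketch of what the call might look like (assuming gender arrives as a non-empty string such as "M" or "F"):

// Send GENDER as a fixed-length CHAR(1) so only a single character reaches the procedure.
cmd.Parameters.Add("@GENDER", SqlDbType.Char, 1).Value = gender.Substring(0, 1);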
The String or binary data would be truncated error is telling you that you are losing data. One of the annoying things about this error is that it doesn't tell you which column(s) the problem relates to, and in a scenario like this (with lots of columns), it makes it hard to diagnose.
If you have a suitable version of SQL Server, you can turn on trace flag 460 (this may require a restart), which makes the error message tell you exactly which table and column the problem relates to.
If not, here's my more manual approach, after which there is some information about how your parameters can be silently truncated without this error (which is not good).
Notice that for each column there is a value in a C# variable, a declared parameter type (in both the C# code and the stored proc), and the size of the column in the table (the definition of which is missing from the question, which may explain why there isn't an accepted answer yet). All of these maximum lengths and types need to tie up, for all of the columns. You really need to check all of them; but we all like shortcuts, so...
My tip for finding which column(s) are causing the problem is to find a scenario where it occurs so that you can easily repeat it; this is particularly easy to do if you have a unit test of this method. Now modify the stored proc to comment out half of the columns, and try again.
If it works, then you know the uncommented columns are fine (for this particular set of data), and the problem was in one of the columns that was commented out, so uncomment half of the lines, and try again.
If it didn't work, then the problem is with the uncommented columns, so comment out half of the remaining columns and try again.
Repeat until you've worked out which columns have problems. I say 'columns', because although it may only be one column having this problem, there could be more than that.
Now put everything back as it was when you started.
Now that you've worked out which column(s) have problems check each column's definition in the table against the stored proc parameter definition, the C# parameter definition, and the value in the C# variable. You may need to track this all the way back to where the value was entered in the user interface, and ensure that suitable restrictions are in place to prevent the value being too big.
As a bonus tip, I like having unit tests that check that my parameter sizes correspond with the type and size of the column they relate to. I also have constants representing the maximum length of each string field and the maximum value of numeric fields. These constants are unit tested against the column in the database, and are used when restricting the values given by the user in the user interface. They can also be used by the unit test of that method, to prove that inserting the largest possible value for each column actually works.
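A sketch of what such a test could look like (the FieldSizes constant, the test class and the placeholder connection string are mine; the table and column names come from the question, and INFORMATION_SCHEMA.COLUMNS is standard SQL Server metadata):

using System.Data.SqlClient;
using Xunit; // or your preferred test framework

public static class FieldSizes
{
    // Shared by the UI, the parameter definitions and this test.
    public const int CallingNameMaxLength = 50;
}

public class SchemaTests
{
    private const string connectionString = "<your connection string>";

    [Fact]
    public void CallingName_constant_matches_column_size()
    {
        using (var con = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            @"SELECT CHARACTER_MAXIMUM_LENGTH
              FROM INFORMATION_SCHEMA.COLUMNS
              WHERE TABLE_NAME = 'HS_HR_EMPLOYEE_AADM'
                AND COLUMN_NAME = 'EMP_CALLING_NAME'", con))
        {
            con.Open();
            int columnSize = (int)cmd.ExecuteScalar();
            Assert.Equal(FieldSizes.CallingNameMaxLength, columnSize);
        }
    }
}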
However, note that it is worth making your varchar, nvarchar and varbinary parameters larger than your column sizes, due to the silent truncation that occurs with parameter coercion:
SQL Server will silently coerce your values to be whatever type the parameter is. For example...
DECLARE @Varchar VARCHAR(8) = 'I will be truncated';
DECLARE @Decimal92 DECIMAL(9,2) = 123.456;
DECLARE @Int INT = 123.456;
SELECT @Varchar, @Decimal92, @Int;
will output...
I will b 123.46 123
This may come as a surprise, given that SQL will complain about something like this:
BEGIN TRANSACTION;
CREATE TABLE tbl_Test(MyColumn NVARCHAR(5) NOT NULL);
INSERT INTO dbo.tbl_Test (MyColumn) VALUES (N'I will be truncated');
ROLLBACK TRANSACTION;
by saying String or binary data would be truncated. And yet the following code does not complain, silently coerces the value and inserts the record:
BEGIN TRANSACTION;
CREATE TABLE tbl_Test(MyColumn NVARCHAR(5) NOT NULL);
DECLARE @MyColumn NVARCHAR(5)=N'I will be truncated'
INSERT INTO dbo.tbl_Test (MyColumn) VALUES (@MyColumn);
ROLLBACK TRANSACTION;
So if you want to be aware of truncation issues occurring, you need to be sure that your parameter has a larger capacity than the column it is going into. For example, if we just change one character...
BEGIN TRANSACTION;
CREATE TABLE tbl_Test(MyColumn NVARCHAR(5) NOT NULL);
DECLARE @MyColumn NVARCHAR(6)=N'I will be truncated'
INSERT INTO dbo.tbl_Test (MyColumn) VALUES (@MyColumn);
ROLLBACK TRANSACTION;
...will give the truncation error. However, notice that this isn't a complete solution because if I try it with a different value...
BEGIN TRANSACTION;
CREATE TABLE tbl_Test(MyColumn NVARCHAR(5) NOT NULL);
DECLARE @MyColumn NVARCHAR(6)=N'Never complain'
INSERT INTO dbo.tbl_Test (MyColumn) VALUES (@MyColumn);
ROLLBACK TRANSACTION;
Then it will silently coerce, truncating after the space, then (because it's a varchar) the value has trailing spaces removed, and it inserts without complaining.
So the only way to be sure is to make your parameter several characters larger than it needs to be, because there probably won't be multiple spaces in a row. You could use VARCHAR(MAX) for everything, but there is some concern that this could have performance impacts.
One place where this is particularly important is encrypted values. If encrypted values are truncated, you can't decrypt them. So you need to be sure that your VARBINARY parameters are sized larger than the relevant column, so that you get errors instead of inserting truncated values. In this case, I believe a single character larger is sufficient, since there is no trimming of VARBINARY values. Well, apparently VARBINARY will trim trailing "nul" (ASCII 0) bytes from the end, but only if ANSI_PADDING is set to OFF; as Microsoft say, it should always be set to ON. The ANSI_PADDING documentation also covers what trimming occurs for different field types under the different settings.
It's also worth saying that SQL isn't even consistent with how it does this. If we re-try the original example with a DECIMAL...
BEGIN TRANSACTION;
CREATE TABLE tbl_Test(MyColumn DECIMAL(9,2) NOT NULL);
INSERT INTO dbo.tbl_Test (MyColumn) VALUES (123.456);
SELECT * FROM tbl_Test
ROLLBACK TRANSACTION;
It doesn't complain about the value having too many decimal places, it just silently coerces it. And the same is true if I do it through a parameter which has more decimal places than the column...
BEGIN TRANSACTION;
CREATE TABLE tbl_Test(MyColumn DECIMAL(9,2) NOT NULL);
DECLARE @MyColumn DECIMAL(9,3)=123.456
SELECT @MyColumn
INSERT INTO dbo.tbl_Test (MyColumn) VALUES (@MyColumn);
SELECT * FROM tbl_Test
ROLLBACK TRANSACTION;
And yet if I give it a value that is too large...
BEGIN TRANSACTION;
CREATE TABLE tbl_Test(MyColumn DECIMAL(9,2) NOT NULL);
INSERT INTO dbo.tbl_Test (MyColumn) VALUES (1234567890.456);
SELECT * FROM tbl_Test
ROLLBACK TRANSACTION;
Then it will complain with Arithmetic overflow error converting numeric to data type numeric. (as it would if I put that value into a parameter first).
One other thing to mention relates to parameter coercion and encryption. Imagine a scenario where you have an SQL column typed DECIMAL(9,2) and a parameter of the same type, and you are giving it a dot net "decimal" from your C# code. If the "decimal" in your code has lots of decimal places, this silent coercion will effectively be asking SQL to do the rounding for you. Which is fine... Until you decide to encrypt that column, because now the value you are encrypting would be a much longer value than the SQL DECIMAL column would have been able to hold, so is probably larger than you have allowed (in terms of your VARBINARY length). In this scenario you would need to ensure the value was rounded to the correct number of decimal places before encryption.
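A sketch of that pre-rounding step, assuming a DECIMAL(9,2) column as in the examples above:

using System;

class RoundBeforeEncrypt
{
    static void Main()
    {
        decimal amount = 123.456789m;

        // Round to the column's scale *before* encrypting, so the encrypted bytes
        // represent the same value SQL Server would have stored.
        decimal forStorage = Math.Round(amount, 2, MidpointRounding.AwayFromZero);

        Console.WriteLine(forStorage); // 123.46
    }
}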
Regarding the trimming of trailing spaces from the parameter: it only trims as much as it needs to. The following shows that it has trimmed one space from the parameter, but left the remaining 4 after the N.
BEGIN TRANSACTION;
CREATE TABLE tbl_Test(MyColumn NVARCHAR(5) NOT NULL);
DECLARE @MyColumn NVARCHAR(6)=N'N complain'
SELECT @MyColumn +'|',LEN(@MyColumn),DATALENGTH(@MyColumn)
INSERT INTO dbo.tbl_Test (MyColumn) VALUES (@MyColumn);
SELECT MyColumn +'|',LEN(MyColumn),DATALENGTH(MyColumn) FROM dbo.tbl_Test
ROLLBACK TRANSACTION;
Yet another learning point about this: the same concept applies to user-defined table types, which correlate to table definitions.
Here is an example script to demonstrate the issue. Notice that the creation and dropping of the type has to be done outside the transaction.
CREATE TYPE dbo.MyTableType AS TABLE (MyColumn NVARCHAR(5) NOT NULL);
GO
BEGIN TRANSACTION;
DECLARE @MyColumn NVARCHAR(5)=N'I will be truncated'
DECLARE @MyTable AS dbo.MyTableType;
INSERT INTO @MyTable (MyColumn) VALUES (@MyColumn);
CREATE TABLE dbo.tbl_Test (MyColumn NVARCHAR(5) NOT NULL);
INSERT INTO dbo.tbl_Test SELECT MyColumn FROM @MyTable;
ROLLBACK TRANSACTION;
GO
DROP TYPE dbo.MyTableType;
I have a few suggestions to help identify this. (Choose any one, and then move to another if the issue still exists. The order doesn't matter; start with whichever suggestion you find easiest.)
Suggestion 1: Open SQL Profiler and track the requests. Get the query from Profiler and run it directly on SQL Server; you may get more details about the error.
Steps:
Start debugging, and stop just before you are going to call the db.
Open Profiler, connect to DB.
Click on Clear Trace Window.
Continue debugging so that the database call is made.
Stop the profiler as soon as you have captured a few requests.
Identify the query and run it on SQL Server.
Suggestion 2: Try to insert the data using SQL Server Management Studio.
Steps:
Right-click on the table -> Script Table as -> INSERT To.
Now compare this with your input to check that you are passing the values correctly.
Suggestion 3: Use the following code to automatically truncate string values to each column's maximum length before they are passed to the DB.
Steps:
Create this Class DbContextExtension.
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata;

namespace Data.Context
{
    public class DbContextExtension
    {
        private Dictionary<IProperty, int> _maxLengthMetadataCache;

        public void AutoTruncateStringToMaxLength(DbContext db)
        {
            var entries = db?.ChangeTracker?.Entries();
            if (entries == null)
            {
                return;
            }
            var maxLengthMetadata = PopulateMaxLengthMetadataCache(db);
            foreach (var entry in entries)
            {
                var propertyValues = entry.CurrentValues.Properties.Where(p => p.ClrType == typeof(string));
                foreach (var prop in propertyValues)
                {
                    if (entry.CurrentValues[prop.Name] != null)
                    {
                        var stringValue = entry.CurrentValues[prop.Name].ToString();
                        if (maxLengthMetadata.ContainsKey(prop))
                        {
                            var maxLength = maxLengthMetadata[prop];
                            stringValue = TruncateString(stringValue, maxLength);
                        }
                        entry.CurrentValues[prop.Name] = stringValue;
                    }
                }
            }
        }

        private Dictionary<IProperty, int> PopulateMaxLengthMetadataCache(DbContext db)
        {
            _maxLengthMetadataCache ??= new Dictionary<IProperty, int>();
            var entities = db.Model.GetEntityTypes();
            foreach (var entityType in entities)
            {
                foreach (var property in entityType.GetProperties())
                {
                    var annotation = property.GetAnnotations().FirstOrDefault(a => a.Name == "MaxLength");
                    if (annotation != null)
                    {
                        var maxLength = Convert.ToInt32(annotation.Value);
                        if (maxLength > 0 && !_maxLengthMetadataCache.ContainsKey(property))
                        {
                            _maxLengthMetadataCache[property] = maxLength;
                        }
                    }
                }
            }
            return _maxLengthMetadataCache;
        }

        private static string TruncateString(string value, int maxLength)
        {
            if (string.IsNullOrEmpty(value)) return value;
            return value.Length <= maxLength ? value : value.Substring(0, maxLength);
        }
    }
}
Use it like this, before calling your SaveChanges:
public class DocumentRepository : IDocumentRepository
{
    private readonly DbContext _context;

    public DocumentRepository(DbContext context)
    {
        _context = context;
    }

    public async Task CreateDocument(Document obj)
    {
        // Feel free to make it an extension method or resolve it via DI. To keep the example simple, I am creating the object here.
        var dbExtensions = new DbContextExtension();
        dbExtensions.AutoTruncateStringToMaxLength(_context);
        await _context.Documents.AddAsync(obj);
        await _context.SaveChangesAsync();
    }
}
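If you would rather not call the helper from every repository method, a sketch of an alternative wiring (the AppDbContext name and Documents DbSet are assumptions for the example) is to override SaveChangesAsync on your own context:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    private readonly DbContextExtension _truncator = new DbContextExtension();

    public DbSet<Document> Documents { get; set; }

    public override Task<int> SaveChangesAsync(CancellationToken cancellationToken = default)
    {
        // Truncate tracked string properties before every save.
        _truncator.AutoTruncateStringToMaxLength(this);
        return base.SaveChangesAsync(cancellationToken);
    }
}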
I'm stuck with a problem. I'm developing an ASP.NET MVC application that manages file uploads to a DB. Not that big of a deal. But every time I execute my SQL command, it tells me that I need to convert to VARBINARY first.
That problem is asked a lot here and on the Internet, but I still can't get it working.
Here's what I've got:
The SQL table:
DocID INT IDENTITY(1,1) NOT NULL,
DocName VARCHAR(512) NOT NULL,
DocData VARBINARY(max) NOT NULL,
ContentType NVARCHAR(100) NOT NULL,
ContentLength BIGINT NOT NULL,
InsertionDate DATETIME NOT NULL DEFAULT GETDATE(),
CONSTRAINT PK_DOC_STORE PRIMARY KEY NONCLUSTERED (DocID)
Read the file to a byte[] with BinaryReader.
var reader = new BinaryReader(file.InputStream);
var data = reader.ReadBytes(file.ContentLength);
And the INSERT INTO C# code:
sqlConnection.Open();
var sqlCommand = new SqlCommand
(
    "INSERT INTO DocStore VALUES ('@DocumentName', '@DocumentData', '@DocumentType', '@DocumentSize', '@DocumentDate')"
    , sqlConnection
);
sqlCommand.Parameters.AddWithValue("@DocumentName", file.FileName);
sqlCommand.Parameters.AddWithValue("@DocumentData", data);
sqlCommand.Parameters.AddWithValue("@DocumentType", file.ContentType);
sqlCommand.Parameters.AddWithValue("@DocumentSize", file.ContentLength);
sqlCommand.Parameters.AddWithValue("@DocumentDate", DateTime.Now);
var success = sqlCommand.ExecuteNonQuery();
sqlConnection.Close();
What's wrong here? I can't see the problem. Shouldn't the byte[] work in a parameterized command string like this for the VARBINARY part?
You put quotes around the parameter names, which turns them into string literals.
Also, I would suggest specifying the columns in the INSERT statement. If you don't specify the columns, the statement uses the exact column order from your table definition (excluding the ID field, since it is auto-incremented), and it is possible to break your query if a column is later inserted in between.
INSERT INTO DocStore (DocName, DocData, ContentType, ContentLength, InsertionDate)
VALUES (@DocumentName, @DocumentData, @DocumentType, @DocumentSize, @DocumentDate)
Solution
Instead of
var sqlCommand = new SqlCommand
(
    "INSERT INTO DocStore VALUES ('@DocumentName', '@DocumentData', '@DocumentType', '@DocumentSize', '@DocumentDate')"
    , sqlConnection
);
I would use
var sqlCommand = new SqlCommand
(
    "INSERT INTO DocStore VALUES (@DocumentName, @DocumentData, @DocumentType, @DocumentSize, @DocumentDate)"
    , sqlConnection
);
Why?
Because "INSERT INTO ... '#DocumentData' ... " string contain a T-SQL statement. Within T-SQL, single quotes ('bla') are used to delimit the start and the end of string constant and also, in some cases, it can be used for column delimiters. So '#DocumentData' represents a string / VARCHAR constant from the point of view of SQL Server. In this case, it tries to do an implicit conversion of VARCHAR values ('#D...') to VARBINARY (data type of DocData colum; first column is skipped because it has IDENTITY property). But according to
between VARCHAR and VARBINARY are allowed only explicit conversions.
Note: as a best practice I would explicit define the list of target columns for INSERT statement.
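As an extra safeguard (a sketch, not required to fix the quoting issue), you can also declare the parameter types and sizes explicitly instead of relying on AddWithValue's type inference; the sizes below mirror the table definition in the question:

// Explicitly typed parameters for the corrected INSERT statement above.
sqlCommand.Parameters.Add("@DocumentName", SqlDbType.VarChar, 512).Value = file.FileName;
sqlCommand.Parameters.Add("@DocumentData", SqlDbType.VarBinary, -1).Value = data; // -1 = VARBINARY(MAX)
sqlCommand.Parameters.Add("@DocumentType", SqlDbType.NVarChar, 100).Value = file.ContentType;
sqlCommand.Parameters.Add("@DocumentSize", SqlDbType.BigInt).Value = (long)file.ContentLength;
sqlCommand.Parameters.Add("@DocumentDate", SqlDbType.DateTime).Value = DateTime.Now;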
I have a SQL query into which I am passing a C# variable, run against my Oracle DB.
I am having trouble passing a C# DateTime variable, "PROCESS_DATE", into the query in my application: I do not get any records back. If I copy the query into my Oracle developer tool, TOAD, it works fine and I get multiple records.
Here is the query I am using in my application:
String SelectAllSQL = "SELECT * FROM REALMS_AUDIT.R2_GROUP_QUERY_RPT WHERE PROCESS_DATE = :pPROCESS_DATE";
I also tried converting the DateTime variable with ToShortDateString() so it matches the database exactly, and then used the TO_DATE function, which I have to use when querying dates directly in TOAD, without any luck. ToShortDateString() changes my date into 1/16/2016, which is what I need, but the OracleDataReader does not like it. Here is the query with the TO_DATE function:
String SelectAllSQL = "SELECT * FROM REALMS_AUDIT.R2_GROUP_QUERY_RPT WHERE PROCESS_DATE = TO_DATE(:pPROCESS_DATE, 'MM-DD-YYYY'";
:pPROCESS_DATE is a DateTime variable that is passed in.
There must be a breakdown between C# and Oracle in how the DateTime variable is handled. I am using an OracleDataReader to process the query.
OracleDataReader dataReader = mDataAccess.SelectSqlRows ( oracleConnection, oracleCommand, sqlCommand, parameters );
while ( dataReader.Read ( ) )
{
groupEntityFacilityRptList.Add ( ReadRecord ( dataReader ) );
}
If I use the TO_DATE function, the application will not step into the while loop. If I use the original query, it does but returns no data.
The DateTime variable PROCESS_DATE looks like this:
1/16/2016 12:00:00 AM
I notice it has a timestamp on it, so I'm not sure if that is the problem or not.
The data in Oracle is like this:
1/16/2016
Unless I've totally misunderstood your issue, I think you might be making this harder than it needs to be. ODP.net handles all of that dirty work for you. If PROCESS_DATE is an actual DATE datatype in Oracle, then you just need to pass an actual C# DateTime variable to it and let ODP.net do the heavy lifting. There is no need to do conversion of any type, provided you are passing an actual date:
DateTime testDate = new DateTime(2015, 7, 16);
OracleCommand cmd = new OracleCommand(
"SELECT * FROM REALMS_AUDIT.R2_GROUP_QUERY_RPT WHERE PROCESS_DATE = :pPROCESS_DATE",
conn);
cmd.Parameters.Add(new OracleParameter("pPROCESS_DATE", OracleDbType.Date));
cmd.Parameters[0].Value = testDate;
OracleDataReader reader = cmd.ExecuteReader();
while (reader.Read())
{
object o = reader.IsDBNull(0) ? null : reader.GetValue(0);
}
reader.Close();
If your data in C# is not a date, I'd recommend making it one before even trying:
DateTime testDate;
if (DateTime.TryParse(testDateString, out testDate))
{
// run your query
}
As per my comment, please try the below and see if it resolves this.
TRUNC(TO_DATE(:pPROCESS_DATE,'MM-DD-YYYY HH:MI:SS AM')) if the pPROCESS_DATE format is 1/16/2016 12:00:00 AM.
TRUNC(TO_DATE(:pPROCESS_DATE,'DD-MM-YYYY HH:MI:SS AM')) if the pPROCESS_DATE format is 16/1/2016 12:00:00 AM.
First, I learned that my code will not go into the code below unless I actually have records returned to me.
OracleDataReader dataReader = mDataAccess.SelectSqlRows ( oracleConnection, oracleCommand, sqlCommand, parameters );
while ( dataReader.Read ( ) )
{
groupEntityFacilityRptList.Add ( ReadRecord ( dataReader ) );
}
Second, to get ProcessDate to work, I needed to take the string coming from my View, convert it to a DateTime, and then format it back as a string. It may not be best practice, but it worked.
public JsonResult GetGroupReportData(String reportDate)
{
    DateTime processDate = DateTime.Parse(reportDate);
    var monthlyReport = SelectAllGroupRprt(processDate.ToString("MM/dd/yyyy"));
    return new JsonResult()
    {
        Data = monthlyReport,
        MaxJsonLength = Int32.MaxValue
    };
}
I am working with ASP.NET MVC 4, using C# and SQL Server.
I am selecting a row of data from the following table
CREATE TABLE [dbo].[Mem_Basic] (
[Id] INT IDENTITY (1, 1) NOT NULL,
[Mem_NA] VARCHAR (100) NOT NULL,
[Mem_Occ] VARCHAR (200) NOT NULL,
[Mem_Role] VARCHAR (200) NOT NULL,
[Mem_Email] VARCHAR (50) NULL,
[Mem_MPh] VARCHAR (15) NULL,
[Mem_DOB] DATE NULL,
[Mem_BGr] NCHAR (10) NULL,
[Mem_WAnn] DATE NULL,
[Mem_Spouse] VARCHAR (75) NULL,
PRIMARY KEY CLUSTERED ([Id] ASC)
);
using the following code
public MemberBasicData GetMemberProfile(int id)
{
    MemberBasicData mb = new MemberBasicData();
    using (SqlConnection con = new SqlConnection(Config.ConnectionString))
    {
        using (SqlCommand cmd = new SqlCommand("SELECT * FROM Mem_Basic WHERE Id=" + id + "", con))
        {
            try
            {
                con.Open();
                SqlDataReader reader = cmd.ExecuteReader();
                if (reader.Read() == true)
                {
                    mb.Id = (int)reader["Id"];
                    mb.Mem_NA = (string)reader["Mem_NA"];
                    mb.Mem_Occ = (string)reader["Mem_Occ"];
                    mb.Mem_Role = (string)reader["Mem_Role"];
                    mb.Mem_Email = (string)reader["Mem_Email"];
                    mb.Mem_MPh = (string)reader["Mem_MPh"];
                    mb.Mem_DOB = Convert.ToDateTime(reader["Mem_DOB"]);
                    mb.Mem_BGr = (string)reader["Mem_BGr"];
                    mb.Mem_WAnn = Convert.ToDateTime(reader["Mem_WAnn"]);
                    mb.Mem_Spouse = (string)reader["Mem_Spouse"];
                }
            }
            catch (Exception e) { throw e; }
            finally { if (con.State == System.Data.ConnectionState.Open) con.Close(); }
        }
    }
    return mb;
}
This shows the error
Unable to cast object of type 'System.DBNull' to type 'System.String'.
(Mem_Email, Mem_MPh, etc. sometimes contain a NULL value; if the value is NULL I want to return null.) Can anybody please help me?
Just use a short conditional; you should do the same for all the other variables:
mb.Mem_Email = reader["Mem_Email"] == System.DBNull.Value ? null : (string) reader["Mem_Email"];
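If you would rather not repeat that conditional for every column, a small pair of extension methods like the following does the same check in one place (a sketch; the method names are my own):

using System;
using System.Data.SqlClient;

public static class DataReaderExtensions
{
    // Returns null instead of throwing when the column holds DBNull.
    public static string GetStringOrNull(this SqlDataReader reader, string column)
    {
        object value = reader[column];
        return value == DBNull.Value ? null : (string)value;
    }

    // Same idea for value types, e.g. GetValueOrNull<DateTime>("Mem_DOB").
    public static T? GetValueOrNull<T>(this SqlDataReader reader, string column) where T : struct
    {
        object value = reader[column];
        return value == DBNull.Value ? (T?)null : (T)value;
    }
}

Usage then becomes mb.Mem_Email = reader.GetStringOrNull("Mem_Email");.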
You could save yourself a serious amount of pain here with a tool like dapper (http://www.nuget.org/packages/Dapper):
public MemberBasicData GetMemberProfile(int id)
{
    using (var con = new SqlConnection(Config.ConnectionString))
    {
        return con.Query<MemberBasicData>(
            "SELECT * FROM Mem_Basic WHERE Id=@id",
            new { id } // full parameterization, done the easy way
        ).FirstOrDefault();
    }
}
things this does:
does correct parameterization (for both performance and safety), but without any inconvenience
does all the materialization, handling nulls (both in parameters and columns) for you
is insanely optimized (basically, it is measurably the same speed as writing all that code yourself, except fewer things to get wrong)
As an alternative to King King's answer, you can write code like this:
mb.Mem_Email = reader["Mem_Email"] as string;
For value types, if the column allows nulls, it's good practice to map them to nullable value types in C#, so that code like reader["Mem_DOB"] as DateTime? works.
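A sketch of what the model looks like if the date fields are made nullable (the property list is trimmed to the relevant columns; making the dates DateTime? is the assumption here):

using System;

public class MemberBasicData
{
    public int Id { get; set; }
    public string Mem_NA { get; set; }
    public string Mem_Email { get; set; }  // reference type: "as string" already yields null
    public DateTime? Mem_DOB { get; set; } // nullable, so "as DateTime?" can yield null
    public DateTime? Mem_WAnn { get; set; }
}

// Mapping:
// mb.Mem_Email = reader["Mem_Email"] as string;
// mb.Mem_DOB   = reader["Mem_DOB"] as DateTime?;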
For all columns that might be NULL, change this
mb.Mem_NA = (string)reader["Mem_NA"];
to this
mb.Mem_NA = reader["Mem_NA"].ToString();
Treat the nullable fields:
mb.Mem_Email = System.DBNull.Value.Equals(reader["Mem_Email"])?"":
(string)reader["Mem_Email"];
Do the same for:
mb.Mem_MPh, mb.Mem_BGr and mb.Mem_Spouse.
I don't mean to sound like a SQL bigot (which of course means I DO mean to sound like a SQL bigot), but if you followed SQL best practices and used a column list instead of SELECT *, you could resolve this problem by using COALESCE on the nullable columns, like this:
SELECT
    [Id],
    [Mem_NA],
    [Mem_Occ],
    [Mem_Role],
    COALESCE( [Mem_Email], '' ) AS [Mem_Email],
    COALESCE( [Mem_MPh], '' ) AS [Mem_MPh],
    COALESCE( [Mem_DOB], CAST( '1753-1-1' AS DATE ) ) AS [Mem_DOB],
    COALESCE( [Mem_BGr], '' ) AS [Mem_BGr],
    COALESCE( [Mem_WAnn], CAST( '1753-1-1' AS DATE ) ) AS [Mem_WAnn],
    COALESCE( [Mem_Spouse], '' ) AS [Mem_Spouse]
FROM
    [dbo].[Mem_Basic];
Your C# code can now dependably process the result set without having to account for outliers. The exception is the dates: you should check for whatever default you use in the COALESCE (I used 1753-1-1, the minimum allowable SQL Server datetime value, in the example above) and handle it appropriately.
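A sketch of that sentinel check on the C# side, assuming the 1753-1-1 default used in the query above and a nullable date property on the model:

// Treat the COALESCE default as "no value" when reading the row.
DateTime sentinel = new DateTime(1753, 1, 1);
DateTime dob = Convert.ToDateTime(reader["Mem_DOB"]);
mb.Mem_DOB = (dob == sentinel) ? (DateTime?)null : dob;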
Additionally, you can get rid of the finally block in your C# code. You wrapped the connection in a using block, and it will automatically close the connection when it goes out of scope (that is the purpose of the using block).