I have a database table that contains names with accented characters, like ä and so on. I need to get all records from the table using EF4 whose name contains some substring, regardless of accents.
So the following code:
myEntities.Items.Where(i => i.Name.Contains("a"));
should return all items with a name containing a, but also all items containing ä, â and so on. Is this possible?
If you set an accent-insensitive collation order on the Name column then the queries should work as required.
Setting an accent-insensitive collation will fix the problem. You can change the collation for a column in SQL Server and Azure SQL Database with the following query:
ALTER TABLE TableName
ALTER COLUMN ColumnName NVARCHAR (100)
COLLATE SQL_LATIN1_GENERAL_CP1_CI_AI NOT NULL
SQL_LATIN1_GENERAL_CP1_CI_AI is the collation where LATIN1_GENERAL is English (United States), CP1 is code page 1252, CI is case-insensitive, and AI is accent-insensitive.
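As a quick sanity check (table and column names here follow the question; this is a sketch, not a definitive script), you can force the accent-insensitive collation on a single query before changing the column itself:

```sql
-- With a CI_AI collation, '%a%' also matches accented variants such as 'ä' and 'â'
SELECT Name
FROM Items
WHERE Name LIKE '%a%' COLLATE SQL_Latin1_General_CP1_CI_AI;
```

Once the column's own collation is changed with the ALTER TABLE above, the COLLATE clause is no longer needed and plain `Contains` queries from EF pick up the accent-insensitive behavior.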
I know this is not the cleanest solution, but after reading this I tried something like the following:
var query = this.DataContext.Users.SqlQuery(string.Format("SELECT * FROM dbo.Users WHERE LastName like '%{0}%' COLLATE Latin1_general_CI_AI", parameters.SearchTerm));
After that you can still call methods like Count, OrderBy, Skip, etc. on the query object. (Note that building the SQL with string.Format is vulnerable to SQL injection; prefer passing SearchTerm as a query parameter.)
You could create a SQL function to remove the diacritics by applying the collation SQL_Latin1_General_CP1253_CI_AI to the input string, like so:
CREATE FUNCTION [dbo].[RemoveDiacritics] (
@input varchar(max)
) RETURNS varchar(max)
AS BEGIN
DECLARE @result VARCHAR(max);
select @result = @input collate SQL_Latin1_General_CP1253_CI_AI
return @result
END
Then add it in the DB context (in this case ApplicationDbContext) by mapping it with the attribute DbFunction:
public class ApplicationDbContext : IdentityDbContext<CustomIdentityUser>
{
[DbFunction("RemoveDiacritics", "dbo")]
public static string RemoveDiacritics(string input)
{
throw new NotImplementedException("This method can only be used with LINQ.");
}
public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
: base(options)
{
}
}
And use it in a LINQ query, for example:
var query = await db.Users.Where(a => ApplicationDbContext.RemoveDiacritics(a.Name).Contains(ApplicationDbContext.RemoveDiacritics(filter))).ToListAsync();
Accent-insensitive collation, as Stuart Dunkeld suggested, is definitely the best solution ...
But maybe good to know:
Michael Kaplan once posted about stripping diacritics:
static string RemoveDiacritics(string stIn)
{
string stFormD = stIn.Normalize(NormalizationForm.FormD);
StringBuilder sb = new StringBuilder();
for(int ich = 0; ich < stFormD.Length; ich++)
{
UnicodeCategory uc = CharUnicodeInfo.GetUnicodeCategory(stFormD[ich]);
if(uc != UnicodeCategory.NonSpacingMark)
{
sb.Append(stFormD[ich]);
}
}
return(sb.ToString().Normalize(NormalizationForm.FormC));
}
Source
So your code would be (note that EF cannot translate this method to SQL, so the filter runs in memory):
myEntities.Items.Where(i => RemoveDiacritics(i.Name).Contains("a"));
I'm trying to find out if there is any match between an array of input strings and comma-separated strings stored inside SQL Server:
class Meeting
{
public int Id { get; set; }
public string? MeetingName { get; set; }
public string DbColumnCommaSeparated { get; set; }
}
meetingQuery.Where(x => ArrayOfInputString
.Any(y => x.DbColumnCommaSeparated.Split(",").Contains(y)))
Is it feasible to do this in an EF Core query using DbFunctions and SQL STRING_SPLIT?
What I can suggest with this particular database design is of course based on EF Core's Mapping a queryable function to a table-valued function, similar to @GertArnold's suggestion. However, since the built-in SQL Server STRING_SPLIT function is already a TVF, it can be mapped directly, eliminating the need for a custom db function and a migration.
First, we'll need a simple keyless entity type (class) to hold the result, which according to the docs is a table whose records have a single string column called "value":
[Keyless]
public class StringValue
{
[Column("value")]
public string Value { get; set; }
}
Next is the function itself. It could be defined as an instance method of your db context, but I find it most appropriate as a static method of some custom class, for instance called SqlFunctions:
public static class SqlFunctions
{
[DbFunction(Name = "STRING_SPLIT", IsBuiltIn = true)]
public static IQueryable<StringValue> Split(string source, string separator)
=> throw new NotSupportedException();
}
Note that this is just a "prototype" which is never supposed to be called (hence the throw in the "implementation"); it just describes the translation of the actual db function call inside the query. Also, the attribute values (name, built-in, etc.) can instead be configured fluently. I've put them here for clarity and simplicity, since they are enough in this case and don't need the flexibility provided by the fluent API.
Finally you have to register the db function for your model, by adding the following to the OnModelCreating override:
modelBuilder.HasDbFunction(() => SqlFunctions.Split(default, default));
The HasDbFunction overload used is the simplest and typesafe way of providing the information about your method using strongly typed expression rather than reflection.
And that's it. Now you can use
var query = db.Set<Meeting>()
.Where(m => SqlFunctions.Split(m.DbColumnCommaSeparated, ",")
.Any(e => ArrayOfInputString.Contains(e.Value)));
which will be translated to something like this:
SELECT [m].[Id], [m].[DbColumnCommaSeparated], [m].[MeetingName]
FROM [Meeting] AS [m]
WHERE EXISTS (
SELECT 1
FROM STRING_SPLIT([m].[DbColumnCommaSeparated], N',') AS [s]
WHERE [s].[value] IN (N'a', N'b', N'c'))
with the IN clause differing depending on the ArrayOfInputString content. I'm kind of surprised it does not get parameterized as in a "normal" Contains translation, but at least it gets translated to something that can be executed server side.
One thing to note is that you need to flip ArrayOfInputString and the split result set in the LINQ query, since another EF Core limitation prevents translation of anything other than the Contains method on in-memory collections. Since you are looking for an intersection here, it really doesn't matter which one comes first, so putting the queryable first avoids that limitation.
Now that you have a solution for this db design: is it good or not? This is somewhat opinion-based, but in general normalized tables and joins are preferred, since they allow db query optimizers to use indexes and statistics when generating execution plans. Joins are very efficient since they almost always use indexed scans, so in my (and most people's) opinion you should not count them against the design. In this particular case, though, I'm not sure whether a normalized detail table with an indexed single text value would produce a better execution plan than the above (which, lacking an index, would do a full table scan, evaluating the filter for each row), but it's worth trying, and I guess it won't be worse at least.
Also, all this applies to relational databases. Non-relational databases can in fact contain "embedded" arrays or lists of values, which can then be used to store and process such data instead of a comma-separated string. In either case, I would prefer a normalized design, storing a list of values either "embedded" or in a related detail table instead of a single comma-separated string. But again, that's just a general opinion/preference. The single string (containing a tag list, for instance) is a valid approach which may outperform the other for some operations, so use whatever is appropriate for you. Also note that STRING_SPLIT is not a standard db function, so if you need to work with a database other than SQL Server, you'll have the problem of finding a similar function, if one exists at all.
It's actually pretty easy thanks to EF Core's smooth support for mapping database functions. In this case we need a table-valued function (TVF) that can be called in C# code.
It starts with adding a TVF to the database, which, when using migrations, requires an addition to migration code:
CREATE FUNCTION [dbo].[SplitCsv] (@string nvarchar(max))
RETURNS TABLE
AS
RETURN SELECT ROW_NUMBER() OVER (ORDER BY Item) AS ID, Item
FROM (SELECT Item = [value] FROM string_split(@string, ',')) AS items
This TVF is mapped to a simple class in EF's class model:
class CsvItem
{
public long ID { get; set; }
public string? Item { get; set; }
}
For this mapping to succeed, the SQL function must always return unique ID values (which is why ROW_NUMBER is added).
Then a method, only to be used in expressions, is added to the context:
public IQueryable<CsvItem> SplitCsv(string csv) => FromExpression(() => SplitCsv(csv));
...and added to the model:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.HasDbFunction(typeof(MyContext)
.GetMethod(nameof(SplitCsv), new[] { typeof(string) })!);
}
That's all! Now you can execute queries like these (in Linqpad, db is a context instance):
db.Meetings.Where(m => db.SplitCsv(m.DbColumnCommaSeparated).Any(i => i.Item == "c")).Dump();
var csv = "a,d,d";
db.Meetings.Where(m => db.SplitCsv(m.DbColumnCommaSeparated)
.Intersect(db.SplitCsv(csv)).Any()).Dump();
Generated SQL:
SELECT [m].[Id], [m].[DbColumnCommaSeparated], [m].[MeetingName]
FROM [Meetings] AS [m]
WHERE EXISTS (
SELECT 1
FROM [dbo].[SplitCsv]([m].[DbColumnCommaSeparated]) AS [s]
WHERE [s].[Item] = N'c')
SELECT [m].[Id], [m].[DbColumnCommaSeparated], [m].[MeetingName]
FROM [Meetings] AS [m]
WHERE EXISTS (
SELECT 1
FROM (
SELECT [s].[ID], [s].[Item]
FROM [dbo].[SplitCsv]([m].[DbColumnCommaSeparated]) AS [s]
INTERSECT
SELECT [s0].[ID], [s0].[Item]
FROM [dbo].[SplitCsv](@__csv_1) AS [s0]
) AS [t])
One remark. Although it can be done, I wouldn't promote it. It's usually much better to store such csv values as a table in the database. It's much easier in maintenance (data integrity!) and querying.
EF Core has FromSqlRaw, so why not use it? Here is an extension method that works with DbSet<T>:
public static class EntityFrameworkExtensions
{
// Note: the values are concatenated directly into the SQL text, so this is only
// safe for trusted values; string values would additionally need quoting.
private const string _stringSplitTemplate = "SELECT * FROM {0} t1 WHERE EXISTS (SELECT 1 FROM STRING_SPLIT(t1.{1}, ',') s WHERE s.[value] IN({2}));";
public static IQueryable<TEntity> StringSplit<TEntity, TValue>(this DbSet<TEntity> entity, Expression<Func<TEntity, TValue>> keySelector, IEnumerable<TValue> values)
where TEntity : class
{
var columnName = (keySelector.Body as MemberExpression)?.Member?.Name;
if (columnName == null) return entity;
var queryString = string.Format(_stringSplitTemplate, entity.EntityType.GetTableName(), columnName, string.Join(',', values));
return entity.FromSqlRaw(queryString);
}
}
Usage:
var result = context.Meetings.StringSplit(x => x.DbColumnCommaSeparated, ArrayOfInputString).ToList();
This generates the following SQL:
SELECT *
FROM table t1
WHERE EXISTS (
SELECT 1
FROM STRING_SPLIT(t1.column, ',') s
WHERE
s.value IN(...)
);
ANSWER 1: SQL CLR Approach
STEP 1: Test SQL Schema
create table Meeting
(
Id int identity(1,1) primary key,
MeetingName nvarchar(max) null,
DbColumnCommaSeparated nvarchar(max) not null
)
go
truncate table Meeting
insert into Meeting
values('one','1,2,3,4');
insert into Meeting
values('two','5,6,7,8');
insert into Meeting
values('three','1,2,7,8');
insert into Meeting
values('four','11,22,73,84');
insert into Meeting
values('five','14,25,76,87');
STEP 2: Write the SQL CLR function
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Data.Linq;
public partial class UserDefinedFunctions
{
[Microsoft.SqlServer.Server.SqlFunction]
public static SqlBoolean FilterCSVFunction(string source, string item)
{
return new SqlBoolean(Array.Exists(source.Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries), i => i == item));
}
}
STEP 3: Enable SQL CLR for database
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'clr strict security', 0;
RECONFIGURE;
STEP 4: Actual Query
DECLARE @y nvarchar(max)
SET @y = '7'
SELECT * FROM Meeting
WHERE dbo.FilterCSVFunction(DbColumnCommaSeparated, @y) = 1
STEP 5: You can import the function in EF.
ANSWER 2 Updated
DECLARE @ArrayOfInputString TABLE(y INT);
INSERT INTO @ArrayOfInputString(y) VALUES(7);
INSERT INTO @ArrayOfInputString(y) VALUES(8);
SELECT DISTINCT M1.* FROM Meeting M1 JOIN
(
SELECT ID, Split.a.value('.', 'VARCHAR(100)') As Data FROM
(SELECT M2.ID, CAST ('<M>' + REPLACE(DbColumnCommaSeparated, ',', '</M><M>') + '</M>' AS XML) as DbXml FROM Meeting M2) A
CROSS APPLY A.DbXml.nodes ('/M') AS Split(a)
) F ON M1.ID = F.ID
WHERE F.Data IN (SELECT y FROM @ArrayOfInputString)
I am using Dynamic LINQ to parse some conditions. I wrote a stored procedure and want to filter it dynamically.
this is my procedure:
;WITH cte
AS
(
SELECT
ID
,[NO]
,Firstname
,Lastname
,PersonalNO
,ReferanceID
,CAST('' AS VARCHAR(MAX)) AS ReferanceNO
FROM dbo.Employees WHERE ReferanceID IS NULL
UNION ALL
SELECT
c.ID
,c.[NO]
,c.Firstname
,c.Lastname
,c.PersonalNO
,c.ReferanceID
,CASE
WHEN ct.ReferanceNO = ''
THEN CAST(ct.[NO] AS VARCHAR(MAX))
ELSE CAST(ct.[NO] AS VARCHAR(MAX))
END
FROM dbo.Employees c
INNER JOIN cte ct ON ct.ID = c.ReferanceID
)
SELECT * FROM cte
and in C# I am calling this procedure:
public List<Employees> GetEmployees(string searchValue, int skip, int pageSize, string sortColumn, string sortColumnDir)
{
var query = DB.sp_GetConsultants().ToList();
var totalRecords = query.Count;
query = query.Where(searchValue).ToList(); // if searchValue is
//"PersonalNO.Contains(\"15\")" it filters; with a value like
//"Lastname.Contains(\"fish\")" it does not, but with "Fish" it does. Is it a matter of uppercase?
}
and I uploaded a picture of the table:
What is the problem?
string.Contains is case-sensitive; as you noticed, searching for "fish" won't return "Fisher", even though searching for "Fish" will. On .NET Framework there is no case-insensitive overload (newer .NET versions do add Contains(string, StringComparison)).
As a workaround, you can convert both strings to lowercase or uppercase (ToLower / ToUpper) before comparing. This might have some issues with certain non-Latin characters, however.
I think there is also a collation option in SQL Server which lets you specify the case sensitivity for strings, if you want to do the comparison at the database level instead.
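For example (a sketch only; the table and column names follow the question, and the exact collation to use depends on your server's default), a single query can be forced case-insensitive with a COLLATE clause:

```sql
-- CI = case-insensitive: 'fish' matches 'Fisher' here even if the
-- column's own collation is case-sensitive
SELECT *
FROM Employees
WHERE Lastname LIKE '%fish%' COLLATE SQL_Latin1_General_CP1_CI_AS;
```

This only helps if the filtering happens in the database, though; with Dynamic LINQ filtering an already-materialized list, the ToLower/ToUpper workaround above is the practical option.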
I use SQL Server 2014 with collation SQL_Latin1_General_CP1_CI_AS.
I have a C# program that inserts Chinese character into my database, for example :
"你","好".
In SQL Server Management Studio I can see it clearly, and I can also search for it with N'你'.
The issue is that for some characters it doesn't work:
"〇", "㐄".
When my C# program starts to insert these two characters, a UNIQUE constraint exception is raised (because I put a unique constraint on the Chinese column in my database).
InnerException = {"Violation of UNIQUE KEY constraint 'AK_Dictionary_Chinese'. Cannot insert duplicate key in object 'dbo.Dictionary'. The duplicate key value is (㐄).\r\nThe statement has been terminated."
And here is my issue: it seems that these two Chinese characters (and I have around 70 similar cases) are not compared correctly. If I remove the unique constraint, then of course I can insert them into my database and see them through SQL Server Management Studio. But when I search for the character using N'〇', the database returns multiple matches: "〇", "㐄"...
So how can I deal with that? I tried changing the collation to a Chinese one, but I have the same issue...
Thanks for your help.
Here is how I add the Chinese characters in my C# program.
My entity object :
public partial class Dictionary
{
public int Id { get; set; }
public string Chinese { get; set; }
public string Pinyin { get; set; }
public string English { get; set; }
}
I just add a new entity object to my database and call SaveChanges();
var word1 = new Dictionary()
{
Chinese = "〇",
Pinyin = "a",
English = "b",
};
var word2 = new Dictionary()
{
Chinese = "㐄",
Pinyin = "c",
English = "bdsqd",
};
// We insert it into our Db
using (var ctx = new DBEntities())
{
ctx.Dictionaries.Add(word1);
ctx.Dictionaries.Add(word2);
ctx.SaveChanges();
}
If you want to try at home, here is a small SQL script that reproduces the issue. You can execute it through SQL Server Management Studio:
DECLARE @T AS TABLE
(
ID INT IDENTITY PRIMARY KEY,
Value NVARCHAR(256)
);
INSERT INTO @T(Value) VALUES (N'〇'), (N'㐄');
SELECT * FROM @T;
SELECT * FROM @T WHERE Value = N'〇';
I found the answer. In fact, these Chinese characters are "traditional" and SQL Server needs a bit of help. Collation was the right idea, but I had to specify Chinese Traditional (to find it, I just ran the sample against all possible collations...).
DECLARE @T AS TABLE
(
ID INT IDENTITY PRIMARY KEY,
Value NVARCHAR(256)
);
INSERT INTO @T(Value) VALUES (N'〇'), (N'㐄');
SELECT * FROM @T;
SELECT * FROM @T WHERE Value = N'㐄' COLLATE Chinese_Traditional_Pinyin_100_CI_AS;
Sorry for wasting your time; I really tried with a Chinese collation, but not the traditional one (I didn't know these were traditional characters...).
Fixed.
Try this, without specifying the collation every time!
DECLARE @T AS TABLE
(
ID INT IDENTITY PRIMARY KEY,
Value NVARCHAR(256)
);
INSERT INTO @T(Value) VALUES (N'〇'), (N'㐄'), (N'㐄〇'), (N'〇㐄');
SELECT * FROM @T;
SELECT * FROM @T WHERE CAST(Value AS varbinary) like CAST(N'〇%' AS varbinary)
SELECT * FROM @T WHERE CAST(Value AS varbinary) like CAST(N'㐄%' AS varbinary)
I'm trying to migrate a MySQL-based app over to Microsoft SQL Server 2005 (not by choice, but that's life).
In the original app, we used almost entirely ANSI-SQL compliant statements, with one significant exception -- we used MySQL's group_concat function fairly frequently.
group_concat, by the way, does this: given a table of, say, employee names and projects...
SELECT empName, projID FROM project_members;
returns:
ANDY | A100
ANDY | B391
ANDY | X010
TOM | A100
TOM | A510
... and here's what you get with group_concat:
SELECT
empName, group_concat(projID SEPARATOR ' / ')
FROM
project_members
GROUP BY
empName;
returns:
ANDY | A100 / B391 / X010
TOM | A100 / A510
So what I'd like to know is: Is it possible to write, say, a user-defined function in SQL Server which emulates the functionality of group_concat?
I have almost no experience using UDFs, stored procedures, or anything like that, just straight-up SQL, so please err on the side of too much explanation :)
No REAL easy way to do this. Lots of ideas out there, though.
Best one I've found:
SELECT table_name, LEFT(column_names, LEN(column_names)-1) AS column_names
FROM information_schema.columns AS extern
CROSS APPLY
(
SELECT column_name + ','
FROM information_schema.columns AS intern
WHERE extern.table_name = intern.table_name
FOR XML PATH('')
) pre_trimmed (column_names)
GROUP BY table_name, column_names;
Or a version that works correctly if the data might contain characters such as <
WITH extern
AS (SELECT DISTINCT table_name
FROM INFORMATION_SCHEMA.COLUMNS)
SELECT table_name,
LEFT(y.column_names, LEN(y.column_names) - 1) AS column_names
FROM extern
CROSS APPLY (SELECT column_name + ','
FROM INFORMATION_SCHEMA.COLUMNS AS intern
WHERE extern.table_name = intern.table_name
FOR XML PATH(''), TYPE) x (column_names)
CROSS APPLY (SELECT x.column_names.value('.', 'NVARCHAR(MAX)')) y(column_names)
I may be a bit late to the party but this method works for me and is easier than the COALESCE method.
SELECT STUFF(
(SELECT ',' + Column_Name
FROM Table_Name
FOR XML PATH (''))
, 1, 1, '')
SQL Server 2017 introduces a new aggregate function,
STRING_AGG ( expression, separator ).
Concatenates the values of string expressions and places separator
values between them. The separator is not added at the end of string.
The concatenated elements can be ordered by appending WITHIN GROUP (ORDER BY some_expression)
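Applied to the project_members example from the question, that looks like:

```sql
-- STRING_AGG replaces the XML trick on SQL Server 2017+;
-- WITHIN GROUP controls the order of the concatenated elements
SELECT empName,
       STRING_AGG(projID, ' / ') WITHIN GROUP (ORDER BY projID) AS projIDs
FROM project_members
GROUP BY empName;
-- ANDY | A100 / B391 / X010
-- TOM  | A100 / A510
```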
For versions 2005-2016 I typically use the XML method in the accepted answer.
This can fail in some circumstances however. e.g. if the data to be concatenated contains CHAR(29) you see
FOR XML could not serialize the data ... because it
contains a character (0x001D) which is not allowed in XML.
A more robust method that can deal with all characters would be to use a CLR aggregate. However applying an ordering to the concatenated elements is more difficult with this approach.
The method of assigning to a variable is not guaranteed and should be avoided in production code.
Possibly too late to be of benefit now, but is this not the easiest way to do things?
SELECT empName, projIDs = replace
((SELECT Surname AS [data()]
FROM project_members
WHERE empName = a.empName
ORDER BY empName FOR xml path('')), ' ', REQUIRED SEPARATOR)
FROM project_members a
WHERE empName IS NOT NULL
GROUP BY empName
Have a look at the GROUP_CONCAT project on GitHub; I think it does exactly what you are searching for:
This project contains a set of SQLCLR User-defined Aggregate functions (SQLCLR UDAs) that collectively offer similar functionality to the MySQL GROUP_CONCAT function. There are multiple functions to ensure the best performance based on the functionality required...
To concatenate all the project manager names from projects that have multiple project managers write:
SELECT a.project_id,a.project_name,Stuff((SELECT N'/ ' + first_name + ', '+last_name FROM projects_v
where a.project_id=project_id
FOR
XML PATH(''),TYPE).value('text()[1]','nvarchar(max)'),1,2,N''
) mgr_names
from projects_v a
group by a.project_id,a.project_name
With the below code you have to set PermissionLevel=External on your project properties before you deploy, and change the database to trust external code (be sure to read elsewhere about security risks and alternatives [like certificates]) by running ALTER DATABASE database_name SET TRUSTWORTHY ON.
using System;
using System.Collections.Generic;
using System.Data.SqlTypes;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
using Microsoft.SqlServer.Server;
[Serializable]
[SqlUserDefinedAggregate(Format.UserDefined,
MaxByteSize=8000,
IsInvariantToDuplicates=true,
IsInvariantToNulls=true,
IsInvariantToOrder=true,
IsNullIfEmpty=true)]
public struct CommaDelimit : IBinarySerialize
{
[Serializable]
private class StringList : List<string>
{ }
private StringList List;
public void Init()
{
this.List = new StringList();
}
public void Accumulate(SqlString value)
{
if (!value.IsNull)
this.Add(value.Value);
}
private void Add(string value)
{
if (!this.List.Contains(value))
this.List.Add(value);
}
public void Merge(CommaDelimit group)
{
foreach (string s in group.List)
{
this.Add(s);
}
}
void IBinarySerialize.Read(BinaryReader reader)
{
IFormatter formatter = new BinaryFormatter();
this.List = (StringList)formatter.Deserialize(reader.BaseStream);
}
public SqlString Terminate()
{
if (this.List.Count == 0)
return SqlString.Null;
const string Separator = ", ";
this.List.Sort();
return new SqlString(String.Join(Separator, this.List.ToArray()));
}
void IBinarySerialize.Write(BinaryWriter writer)
{
IFormatter formatter = new BinaryFormatter();
formatter.Serialize(writer.BaseStream, this.List);
}
}
I've tested this using a query that looks like:
SELECT
dbo.CommaDelimit(X.value) [delimited]
FROM
(
SELECT 'D' [value]
UNION ALL SELECT 'B' [value]
UNION ALL SELECT 'B' [value] -- intentional duplicate
UNION ALL SELECT 'A' [value]
UNION ALL SELECT 'C' [value]
) X
And yields: A, B, C, D
I tried these, but for my purposes in MS SQL Server 2005 the following was most useful, which I found at xaprb:
declare @result varchar(8000);
set @result = '';
select @result = @result + name + ' '
from master.dbo.systypes;
select rtrim(@result);
@Mark, as you mentioned, it was the space character that caused issues for me.
About J Hardiman's answer, how about:
SELECT empName, projIDs=
REPLACE(
REPLACE(
(SELECT REPLACE(projID, ' ', '-somebody-puts-microsoft-out-of-his-misery-please-') AS [data()] FROM project_members WHERE empName=a.empName FOR XML PATH('')),
' ',
' / '),
'-somebody-puts-microsoft-out-of-his-misery-please-',
' ')
FROM project_members a WHERE empName IS NOT NULL GROUP BY empName
By the way, is the use of "Surname" a typo, or am I not understanding a concept here?
Anyway, thanks a lot guys, because it saved me quite some time :)
2021
@AbdusSalamAzad's answer is the correct one.
SELECT STRING_AGG(my_col, ',') AS my_result FROM my_tbl;
If the result is too big, you may get the error "STRING_AGG aggregation result exceeded the limit of 8000 bytes. Use LOB types to avoid result truncation.", which can be fixed by changing the query to this:
SELECT STRING_AGG(convert(varchar(max), my_col), ',') AS my_result FROM my_tbl;
For my fellow Googlers out there, here's a very simple plug-and-play solution that worked for me after struggling with the more complex solutions for a while:
SELECT
distinct empName,
NewColumnName=STUFF((SELECT ','+ CONVERT(VARCHAR(10), projID )
FROM returns
WHERE empName=t.empName FOR XML PATH('')) , 1 , 1 , '' )
FROM
returns t
Notice that I had to convert the ID into a VARCHAR in order to concatenate it as a string. If you don't have to do that, here's an even simpler version:
SELECT
distinct empName,
NewColumnName=STUFF((SELECT ','+ projID
FROM returns
WHERE empName=t.empName FOR XML PATH('')) , 1 , 1 , '' )
FROM
returns t
All credit for this goes to here:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/9508abc2-46e7-4186-b57f-7f368374e084/replicating-groupconcat-function-of-mysql-in-sql-server?forum=transactsql
For SQL Server 2017+, use STRING_AGG() function
SELECT STRING_AGG(Genre, ',') AS Result
FROM Genres;
Sample result:
Result
Rock,Jazz,Country,Pop,Blues,Hip Hop,Rap,Punk
Is it possible to perform an insert or update with the following constraints:
Dapper is used in the project
The Primary Key is auto-incrementing positive
The newly inserted data may not have a PK value (or has a value of 0)
The data needs to have the PK value on a newly inserted row.
The query is being generated procedurally, and generalizing it would be preferable
Something like:
int newId = db.QueryValue<int>( <<insert or update>>, someData );
I have read about different solutions, and the best solution seems to be this one :
merge tablename as target
using (values ('new value', 'different value'))
as source (field1, field2)
on target.idfield = 7
when matched then
update
set field1 = source.field1,
field2 = source.field2,
...
when not matched then
insert ( idfield, field1, field2, ... )
values ( 7, source.field1, source.field2, ... )
but:
it seems to fail on the third constraint, and
it does not guarantee to return the newly generated id.
Because of the 5th constraint (or preference), a stored procedure seems overly complicated.
What are the possible solutions? Thanks!
If your table has an auto-increment field, you can't assign a value to that field when inserting a record. OK you can, but it's normally a bad idea :)
Using the T-SQL MERGE statement you can put all of the values into the source table, including your default invalid identity value, then write the insert clause as:
when not matched then
insert (field1, field2, ...)
values (source.field1, source.field2, ...)
...and use the OUTPUT clause to get the inserted identity value:
OUTPUT inserted.idfield
That said, I think you might be complicating your SQL code generation a little, especially for tables with a lot of fields. It is often better to generate distinct UPDATE and INSERT queries... especially if you've got some way of tracking the changes to the object so that you can only update the changed fields.
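Putting those two pieces together, here is a sketch of the MERGE from the question with the identity left entirely to the database and OUTPUT returning it (table, column, and parameter names follow the question; untested against a real schema):

```sql
-- @id is the possibly-0 key from the incoming data; 0 never matches an
-- identity value, so an unsaved row falls through to the INSERT branch
MERGE tablename AS target
USING (VALUES ('new value', 'different value'))
    AS source (field1, field2)
ON target.idfield = @id
WHEN MATCHED THEN
    UPDATE SET field1 = source.field1,
               field2 = source.field2
WHEN NOT MATCHED THEN
    INSERT (field1, field2)
    VALUES (source.field1, source.field2)
OUTPUT inserted.idfield;  -- returns the existing or newly generated key
```

The single result column maps straight onto the `db.QueryValue<int>` call the question asks for.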
Assuming you're working on MS SQL, you can use SCOPE_IDENTITY() function after the INSERT statement to get the value of the identity field for the record in a composite statement:
INSERT INTO tablename(field1, field2, ...)
VALUES('field1value', 'field2value', ...);
SELECT CAST(SCOPE_IDENTITY() AS INT) ident;
When you execute this SQL statement you'll get back a resultset with the inserted identity in a single column. Your db.QueryValue<int> call will then return the value you're after.
For standard integer auto-increment fields the above is fine. For other field types, or for a more general case, try casting SCOPE_IDENTITY() result to VARCHAR(MAX) and parse the resultant string value to whichever type your identity column expects - GUID, etc.
In the general case, try this in your db class:
public string InsertWithID(string insertQuery, params object[] parms)
{
string query = insertQuery + "\nSELECT CAST(SCOPE_IDENTITY() AS VARCHAR(MAX)) ident;\n";
return this.QueryValue<string>(query, parms);
}
And/or:
public int InsertWithIntID(string insertQuery, params object[] parms)
{
string query = insertQuery + "\nSELECT CAST(SCOPE_IDENTITY() AS INT) ident;\n";
return this.QueryValue<int>(query, parms);
}
That way you can just prepare your insert query and call the appropriate InsertWithID method to get the resultant identity value. That should satisfy your 5th constraint with luck :)