Dapper trimming string result even if less than 4000 chars [duplicate] - c#

DECLARE @result NVARCHAR(max);
SET @result = (SELECT * FROM table
FOR JSON AUTO, ROOT('Data'))
SELECT @result;
This returns a json string of ~43000 characters, with some results truncated.
SET @result = (SELECT * FROM table
FOR JSON AUTO, ROOT('Data'))
This returns a json string of ~2000 characters. Is there any way to prevent truncation, even when dealing with big data where the string runs to millions of characters?

I didn't find an 'official' answer, but it seems that this is an issue with the new FOR JSON clause, which splits the result into lines 2,033 characters long.
As recommended here, the best option so far is to iterate through the results, concatenating the returned rows:
var result = new StringBuilder();
while (reader.Read())
{
    // append each ~2,033-character chunk returned by FOR JSON
    result.Append(Convert.ToString(reader[0]));
}
string json = result.ToString();
BTW, it seems that the latest versions of SSMS already apply a workaround like this to present the result in a single row.

I was able to get the full, non-truncated string by using print instead of select in SQL Server 2017 (version 14.0.2027):
DECLARE @result NVARCHAR(max);
SET @result = (SELECT * FROM table
FOR JSON AUTO, ROOT('Data'))
PRINT @result;
Another option would be to download and use Azure Data Studio which I think is a multi-platform re-write of SSMS (similar to how Visual Studio was re-written as VS Code). It seems to spit out the entire, non-truncated json string as expected out of the box!

This will also work if you insert into a table variable - since nothing is presented, the SSMS truncation does not apply.
Might be useful if you need to calculate several values.
declare @json table (j nvarchar(max));
insert into @json select * from (select * from Table where Criteria1 for json auto) a(j)
insert into @json select * from (select * from Table where Criteria2 for json auto) a(j)
select * from @json

I know this is an old thread but I have had success with this issue by sending the result to an XML variable. The advantage of using an XML variable is that the size is not stated as character length but by size of the string in memory, which can be changed in the options. Therefore Brad C's response would now look like...
DECLARE @result XML
SET @result = (SELECT * FROM table
FOR JSON AUTO, ROOT('Data'))
SELECT @result
or...
PRINT CONVERT(NVARCHAR(MAX), @result);

Here is the answer to JSON truncation:
SQL Server divides the JSON result into chunks roughly 2 KB in size (at least my SQL Server 2016 installation does), one chunk in the first column of each row in the result set. To get the entire result, your client code has to loop through the result set and concatenate the first column of each record. When you've gotten to the end of the rows, voila, your entire JSON result is retrieved, uncut.
When I first encountered the truncation problem I was baffled, and wrote off FOR JSON for several years as an unserious feature suited only to the smallest of datasets. I learned that I need to read the entire recordset only from the FOR XML documentation, and never actually saw it mentioned in the FOR JSON docs.
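The concatenation loop described above is language-agnostic. Here is a minimal sketch in Python, where a plain list of strings stands in for the data reader's rows (that substitution is an assumption purely for illustration):

```python
import json

def read_full_json(rows):
    """Concatenate the chunked FOR JSON rows back into one string.

    `rows` stands in for the result set: each element is the first
    column of one returned row (a ~2 KB chunk of the JSON text).
    """
    return "".join(rows)

# Simulate a JSON document split into fixed-size chunks, as the server returns it.
full = json.dumps({"Data": [{"id": i} for i in range(500)]})
chunks = [full[i:i + 2033] for i in range(0, len(full), 2033)]

restored = read_full_json(chunks)
assert json.loads(restored) == json.loads(full)
```

The same pattern applies regardless of client language: keep reading rows until the reader is exhausted, then parse the concatenated string.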

The easiest workaround to avoid the truncation is to wrap the query in another select:
select (
<your query> FOR JSON PATH [or FOR JSON AUTO]
) as json

We've seen similar issues in SSMS: without using a variable, SSMS truncates at 2033 characters.
With an nvarchar(max) variable the query itself actually works OK, but SSMS truncates the output in the query results view at 43697.
A possible solution I've tested is outputting Query results to a file, using BCP:
bcp "DECLARE @result NVARCHAR(max); SET @result = (SELECT * FROM table FOR JSON AUTO, ROOT('Data')); SELECT @result as Result;" queryout "D:\tmp\exportOutput.txt" -S SQL_SERVER_NAME -T -w
See BCP docs for specifying server name\instance and authentication options

It's difficult to determine exactly what the problem you're having without posting the data, but I had a similar problem when I was attempting to export a query in JSON format. The solution that worked for me was to go to Query/Query Options/Results/Text/Set "Maximum number of characters displayed in each column:" to 8192 (max value AFAIK).
This probably won't help much with your first query, but that potentially could be broken into smaller queries and executed successfully. I would anticipate that you could effectively run your second query after changing that setting.

If your data length is less than 65535, you should use the suggestion of @dfundako, who commented on the first post:
Try going to Tools, Options, Query Results, SQL Server, Results to Grid, and set Non-XML data to the max amount (I think 65535)
In my case the data length was 21k characters, so after outputting to the grid I copied the value and it was fine, not truncated. Still, it doesn't solve the issue for those with bigger amounts of data.

Try Visual Studio Code with the Microsoft SQL extension. I got 6800 characters of JSON without truncation. It seems it is SSMS that truncates results.

Related

Two questions in transferring data from an XML document to SQL Server: an error emerging in the with clause and a concern about datetime

I am new at this so please bear with me. I am using SQL Server and am trying to transfer data from my XML document into SQL Server.
I have two questions.
Question #1: I have an error for 'where'. If I remove the where line completely the error moves onto 'session'.
Question #2: I want the aptly named datetime column to feature a datetime data type. When I look at the xml document though I see that there is a T for what datetime equals. I believe this T symbolizes the word time. I am worried that this T will cause problems since I have never seen the data type datetime in SQL Server having a letter in the middle of the date and time. Is this a possibility? If so what should I do? I assume that I will have to change my data type if there is a problem (what should it be changed to if so?). Changing the xml document itself to remove the T is not an option.
Here is the copy-paste of my query:
DECLARE @x xml
SELECT @x = p
FROM OPENROWSET
(BULK 'C:\Users\Owner\Documents\congress\House votes\114 congress 2015\Passage\705.xml', SINGLE_BLOB) AS HouseVote705(p)
DECLARE @hdoc int
EXEC sp_xml_preparedocument @hdoc OUTPUT, @x
SELECT *
FROM OPENXML (@hdoc, '/roll', 1)
WITH (
'where' char,
'session' tinyint,
'year' smallint,
roll smallint,
'datetime' datetime)
EXEC sp_xml_removedocument @hdoc
Here is the copypaste of my xml document:
<roll where="house" session="114" datetime="2015-12-18T09:49:00-05:00"> </roll>
A screenshot of my query and the xml document I am working with.
To answer your first question, use [ ] brackets to wrap those reserved keywords, instead of handling them as strings. As some already stated in the comments, try to avoid the use of keywords for attributes.
Also, as long as you don't have an XSD (XML schema definition), try to interpret types as little as possible. The [where] is definitely not a single character. And are you sure the session will always fit in a TINYINT?
For your second question, the datetime provided is in ISO 8601 format. Just read it as a varchar and convert it afterwards.
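As an aside, the T in the value is simply the ISO 8601 date/time separator and is harmless once converted. A quick language-agnostic sketch in Python (used here only for illustration) showing that such a string, offset and all, parses cleanly:

```python
from datetime import datetime

# ISO 8601 timestamps such as the one in the XML parse directly,
# including the "-05:00" UTC offset.
ts = datetime.fromisoformat("2015-12-18T09:49:00-05:00")
assert ts.year == 2015
assert ts.utcoffset().total_seconds() == -5 * 3600
```

SQL Server's CONVERT with style 126 does the equivalent job server-side, as the example below shows.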
Here is an example based on your screenshot:
DECLARE @x XML
SELECT @x = CAST(N'<roll where="house" session="114" year="2015" roll="705" source="house.gov" datetime="2015-12-18T09:49:00-05:00" updated="2016-12-25T10:03:32-05:00"/>' AS XML)
DECLARE @hdoc INT
EXEC sp_xml_preparedocument @hdoc OUTPUT, @x, '<roll xmlns:xyz="urn:MyNamespace"/>';
SELECT [where] AS DocWhere
, [session] AS DocSession
, [year] AS DocYear
, roll
, CONVERT(datetime2, [datetime], 126) AS DocDatetime
FROM OPENXML (@hdoc, '/roll', 1)
WITH (
[where] VARCHAR(MAX)
, [session] INT
, [year] INT
, [roll] INT
, [datetime] VARCHAR(30)
)
As in your other questions: FROM OPENXML is outdated and should not be used anymore (rare exceptions exist)...
Rather use the modern XML methods for this:
DECLARE @xml XML=
N'<roll where="house" session="114" datetime="2015-12-18T09:49:00-05:00"> </roll>';
SELECT @xml.value(N'(/roll/@where)[1]',N'nvarchar(max)') AS roll_where
,@xml.value(N'(/roll/@session)[1]',N'int') AS roll_session
,@xml.value(N'(/roll/@datetime)[1]',N'datetime') AS roll_datetime
The result
roll_where roll_session roll_datetime
house 114 2015-12-18 14:49:00.000
UPDATE hundreds of files...
If this is a one time action it doesn't matter how you do this. This can be slow, ugly and dirty - as long as your result is correct...
Do you have the file paths/names in a table or are they following a computable schema? It should be easy to use a CURSOR or a WHILE to work this down in a loop.
You can use the command at the top of your question to load each XML file's content directly into the variable @xml, and use SELECT a,b,... INTO #tmpTbl FROM... to write the content of all your files in one go into a staging table.
In most cases it is best, to create the whole statement as string (don't forget to double all quotes!) within your loop and use EXEC to execute this. Otherwise you'd get a problem with the file's path in OPENROWSET...

Determine SQL Server query (stored procedure) result type

I am planning to organize my data in SQL Server as a small ORM of my own, creating metadata classes for each object in my code.
For the tests I am hard-coding the objects; the next step is to generate the properties of each using SQL Server queries about those objects.
And now that I am dealing with the stored procedures section of my code in C#,
I was wondering how it is possible to use SQL Server to query the result type of the command executed?
For example, here we know what it's doing even by reading its name ...
[dbo].[GtallDrLFilesCount]
but another could return some other type, such as a rowset, a string, etc.
Using the above stored procedure will return an int:
if(@AllDrives=1) Begin
Select
* From [dbo].[HddFolderFiles]
End
but the one above selects all of the content rather than the row count.
I was planning to access SQL Server and query its objects, and as I do not plan to set a return parameter (OUT), is there a more elegant way to achieve this than parsing the .sql file of the stored procedure?
Like if the text contains SELECT * (this is a rowset), expect it with a DataTable;
if the text contains SELECT COUNT(*) (this is an int), prepare an int variable.
In the case where I did not assign an out parameter to my stored procedures, can SQL Server somehow tell the return type anyway, even though it has no out parameter to make it easy?
I think you would have to execute the SProc to get its columns, but you can do it without actually returning data using SET FMTONLY.
Even sprocs that return a single value (e.g. an int) return a table when called from C#, so you just need to look at the reader's columns to get the data you want.
So:
set fmtonly on
exec [dbo].[MyStoredProc] 0
set fmtonly off
Will return a recordset which you can examine in c#
var adoCon = new System.Data.SqlClient.SqlConnection(_sConnectStr);
var adoCmd = new System.Data.SqlClient.SqlCommand("your SQL (above)", adoCon);
adoCon.Open(); // the connection must be open before ExecuteReader
var Rows = adoCmd.ExecuteReader();
DataTable dtSchema = Rows.GetSchemaTable();
Now - you can wander through dtSchema to get columns. It's not pure SQL, but it's a c# + SQL approach. [dbo].[GtallDrLFilesCount] will return a single column table (column of type int).
Obviously - use a SQL command (not string). The next trick is translating SQL types into native c# types (easy for some data types and tricky for others ... take a look at ADOCommand's ReturnProviderSpecificTypes option).
Hope that helps!
From SQL Server 2012+ you can use sys.dm_exec_describe_first_result_set to read metadata about resultset:
This dynamic management function takes a Transact-SQL statement as a
parameter and describes the metadata of the first result set for the
statement.
SELECT *
FROM sys.dm_exec_describe_first_result_set(
N'EXEC [dbo].[MyProcedure]', NULL, 0);
SELECT *
FROM sys.dm_exec_describe_first_result_set(
N'SELECT * FROM [dbo].[tab]', NULL, 0);
SqlFiddleDemo
This method has limitations; for more info, read the Remarks section.

Fix html encoded text stored in the database

I have a SQL Server db that has a table which stores a plain text value in an nvarchar column. Unfortunately there was a bug in the C# code that was running Encoder.HtmlEncode() on Chinese characters before inserting them into the table, e.g. the text value 您好 is being stored in the table as &#x60A8;&#x597D;
Is there any way I clean up this data using just T-sql? This database is heavily locked down, so I can't easily run any code against it other than T-sql.
Given what the problem seems to be, you have an option.
You could create a lookup table that stores the HTML entity for each character. As an example:
CREATE TABLE dbo.TempHost
(
Entity varchar(255),
[Character] nvarchar(255)
)
Then you can find the data as CSV online (http://www.khngai.com/chinese/charmap/tbluni.php?page=0, or copy and paste into Excel) and import it into the table. From there on, all you need to do is scan the data, call the REPLACE() function, and update.
This is a fun challenge, and by fun I mean not really fun. T-SQL is quite bad at string manipulation. To make it even better, HTML entities actually encode a Unicode code point, and there is no simple way of converting that to a Unicode character in T-SQL.
Using a lookup table is probably the most viable method, in that it's likely to be more efficient than what I'm going to propose here: use a function to do the entity replacement. Warning: scalar-valued functions perform horribly in T-SQL and string manipulation is none too fast either. Nevertheless, I present this for, um, inspirational purposes:
CREATE FUNCTION dbo._ConvertEntities(@in NVARCHAR(MAX)) RETURNS NVARCHAR(MAX) AS BEGIN
WHILE 1 = 1 BEGIN;
DECLARE @entityStart INT = CHARINDEX('&#x', @in);
IF @entityStart = 0 BREAK;
DECLARE @entityEnd INT = CHARINDEX(';', @in, @entityStart);
DECLARE @entity VARCHAR(MAX) = SUBSTRING(@in, @entityStart + LEN('&#x'), @entityEnd - @entityStart - LEN('&#x'));
IF @entity NOT LIKE '[0-9A-F][0-9A-F][0-9A-F][0-9A-F]' RETURN @in;
DECLARE @entityChar NCHAR(1) = CONVERT(NCHAR(1), CONVERT(BINARY(2), REVERSE(CONVERT(BINARY(2), @entity, 2))));
SET @in = STUFF(@in, @entityStart, @entityEnd - @entityStart + 1, @entityChar);
END;
RETURN @in;
END;
Aside from performance issues, this function has the major shortcoming that it only works for entities of the form &#x????;, with ???? four hexadecimal digits. It fails quite badly on other entities (like those needing surrogates, those coded as decimal, or special entities like &quot;). I've made it bail out in this case. Although it's fairly easy to extend it to handle single-byte entities, extending it to more than four hex digits would be agony.
Realistically, you want to do this in client software using a real programming language. Even if the database is sufficiently locked down that you cannot directly execute queries, you are presumably able to query data if it's not too much, and you can insert data back using generated statements (a lot of them if need be). Terribly slow, but more or less viable.
For completeness, I also mention the option of running CLR code in SQL Server using CLR integration. This requires that the server already allows this or that you can reconfigure it to allow it (improbable if it's "heavily locked down"). The main reason this would be attractive is because it's definitely easier and faster to decode the entities in CLR code, and using CLR integration means you're not using client code (so the data doesn't leave the server). On the other hand, since you need administrative access to the machine to deploy the assembly, this would seem to be a theoretical advantage at best. As far as performance goes, though, it probably can't be beat.
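For comparison, the same decoding loop is a one-liner in a general-purpose language, which is why the client-side or CLR route is attractive. A Python sketch of the equivalent logic (illustration only, with the same four-hex-digit limitation as the T-SQL function above):

```python
import re

def decode_hex_entities(s):
    """Replace &#xNNNN; numeric character references with the character.

    Mirrors the T-SQL function's scope: only the basic hexadecimal
    &#x....; form is handled (no surrogate pairs, decimal references,
    or named entities).
    """
    return re.sub(r"&#x([0-9A-Fa-f]{1,4});",
                  lambda m: chr(int(m.group(1), 16)), s)

assert decode_hex_entities("&#x60A8;&#x597D; world") == "\u60a8\u597d world"
```

In client code the heavy lifting is done by the regex engine and `chr()`, so none of the byte-reversal gymnastics the T-SQL version needs apply.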
You could take advantage of the fact that the stored entities all start with "&#x" and are eight characters long. You could loop through the table, cutting out the bad characters using something like the example below.
DECLARE @str VARCHAR(100)
SET @str = 'Hello &#x60A8;&#x597D;World'
DECLARE @pos int SELECT @pos = CHARINDEX('&#x', @str)
WHILE @pos > 0
BEGIN
SET @str = LEFT(@str, @pos - 1) + RIGHT(@str, LEN(@str) - @pos - 7)
SELECT @pos = CHARINDEX('&#x', @str)
END
SELECT @str
HTML encoding is not the same as XML encoding, but thanks to this question, I've realized there is an embarrassingly simple way of achieving this:
SELECT
REPLACE(
CONVERT(NVARCHAR(MAX),
CONVERT(XML,
REPLACE(REPLACE(_column_, '<', '&lt;'), '"', '&quot;')
)
),
'&lt;', '<'
)
Stick this in an UPDATE and you're done. Well, almost -- if the code contains non-XML escaped entities like &eacute;, you'd need to replace those separately. Also, we do need to dance around the issue of XML escaping (hence the &lt; replacing, in case there's a literal < somewhere).
It may still need some refinement, but this sure looks a lot more promising than a scalar-valued function. :-)

Get Description of Columns Returned

In SQL Server and/or its C# API, is there a mechanism by which I can get a description (i.e. column names and data types) of the result of executing arbitrary SQL/prepared statement/stored procedure without actually executing it?
Example...
select * from my_table
Desired result...
col_1 col_2 col_3 ... col_n
integer float varchar(32) datetime2
Or some equivalent information?
SET FMTONLY is what you're looking for:
Returns only metadata to the client. Can be used to test the format of the response without actually running the query.
E.g.:
SET FMTONLY ON
GO
select * from my_table
GO
SET FMTONLY OFF
GO
Will produce an empty result set.
In C#, with an SqlCommand object, you can specify the SchemaOnly CommandBehavior and you'll similarly get an empty result set that you can then examine.
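The same look-at-the-metadata-without-fetching-rows idea exists in most database APIs. As an illustration only (sqlite3 here, not SQL Server, and a `WHERE 1 = 0` predicate standing in for schema-only mode):

```python
import sqlite3

# Run the statement so it returns no rows, then inspect the cursor's
# column metadata instead of the data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (col_1 INTEGER, col_2 REAL, col_3 TEXT)")
cur = conn.execute("SELECT * FROM my_table WHERE 1 = 0")  # no rows come back
columns = [d[0] for d in cur.description]
assert columns == ["col_1", "col_2", "col_3"]
```

The C# SchemaOnly behaviour is the more precise tool because the server never materializes any rows at all, but the shape of the client code is the same: execute, then read the column descriptions.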
I think you might get use out of this:
EXEC sp_help my_table
Or see my question/answer here: How do I get a list of columns in a table or view?

Using IN operator with Stored Procedure Parameter

I am building a website in ASP.NET 2.0. A short description of the page I am working on:
a ListView displaying a table (of posts) from my Access db, and a ListBox with Multiple select mode used to filter rows (by forum name, value=forumId).
I am converting the ListBox's selected values into a List, then running the following query.
Parameter:
new OleDbParameter("@Q", list.ToString());
Procedure:
SELECT * FROM sp_feedbacks WHERE forumId IN ([@Q])
The problem is, well, it doesn't work. Even when I run it from MSACCESS 2007 with the string 1,4, "1","4" or "1,4" I get zero results. The query works when only one forum is selected. (In (1) for instance).
SOLUTION?
So I guess I could use a WHERE with many OR's, but I would really like to avoid that option.
Another solution is to convert the DataTable into a list and then filter it using LINQ, which seems like a very messy option.
Thanks in advance,
BBLN.
I see 2 problems here:
1) list.ToString() doesn't do what you expect. Try this:
List<int> foo = new List<int>();
foo.Add(1);
foo.Add(4);
string x = foo.ToString();
The value of "x" will be "System.Collections.Generic.List`1[System.Int32]" not "1,4"
To create a comma separated list, use string.Join().
2) OleDbParameter does not understand arrays or lists. You have to do something else. Let me explain:
Suppose that you successfully use string.Join() to create the parameter. The resulting SQL will be:
SELECT * FROM sp_feedbacks WHERE forumId IN ('1,4')
The OLEDB provider knows that strings must have quotation marks around them. This is to protect you from SQL injection attacks. But you didn't want to pass a string: you wanted to pass either an array, or a literal unchanged value to go into the SQL.
You aren't the first to ask this question, but I'm afraid OLEDB doesn't have a great solution. If it were me, I would discard OLEDB entirely and use dynamic SQL. However, a Google search for "parameterized SQL array" resulted in some very good solutions here on Stack Overflow:
WHERE IN (array of IDs)
Passing an array of parameters to a stored procedure
Good Luck! Post which approach you go with!
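The ToString() trap in point 1 has an exact analogue in other languages; a tiny Python illustration (for comparison only) of repr-versus-join:

```python
ids = [1, 4]
# str(list) gives the list's repr, not a comma-separated list --
# the same trap as calling .ToString() on a List<int> in C#.
assert str(ids) == "[1, 4]"
# The equivalent of string.Join(",", ids):
assert ",".join(map(str, ids)) == "1,4"
```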
When you have:
col in ('1,4')
This tests that col is equal to the string '1,4'. It is not testing for the values individually.
One way to solve this is using like:
where ',' & @Q & ',' like '*,' & col & ',*'
The idea is to add delimiters to each string. So, a value of "1" becomes ",1," in the column. A value of "1,4" for @Q becomes ",1,4,". Now when you do the comparison, there is no danger that "1" will match "10".
Note (for those who do not know). The wildcard for like is * rather than the SQL standard %. However, this might differ depending on how you are connecting, so use the appropriate wildcard.
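The delimiter-wrapping trick is easy to sanity-check outside SQL. A small Python sketch of the same comparison (a plain substring test standing in for LIKE):

```python
def in_list(value, csv_list):
    """Delimiter-wrapping membership test, mirroring the LIKE trick above."""
    return f",{value}," in f",{csv_list},"

assert in_list("1", "1,4") is True
assert in_list("10", "1,4") is False  # "1" does not falsely match "10"
assert in_list("4", "1,4") is True
```

Wrapping both sides in delimiters is what prevents the partial-match false positives.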
Passing such a condition to a query has always been a problem. To a stored procedure it is worse because you can't even adjust the query to suit. 2 options currently:
use a table valued parameter and pass in multiple values that way (a bit of a nuisance to be honest)
write a "split" multi-value function as either a UDF or via SQL/CLR and call that from the query
For the record, "dapper" makes this easy for raw commands (not sprocs) via:
int[] ids = ...
var list = conn.Query<Foo>(
"select * from Foo where Id in @ids",
new { ids } ).ToList();
It figures out how to turn that into parameters etc for you.
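What dapper does under the hood can be sketched in a few lines. The following Python is an illustration of the idea only, not dapper's actual code: rewrite "in @name" as an expanded placeholder list and build the matching parameter set.

```python
def expand_in(sql, name, values):
    """Sketch of list-parameter expansion: rewrite "in @name" as
    "in (@name0, @name1, ...)" and build the matching parameter dict.
    """
    params = {f"{name}{i}": v for i, v in enumerate(values)}
    placeholders = ", ".join(f"@{name}{i}" for i in range(len(values)))
    return sql.replace(f"@{name}", f"({placeholders})"), params

sql, params = expand_in("select * from Foo where Id in @ids", "ids", [1, 4])
assert sql == "select * from Foo where Id in (@ids0, @ids1)"
assert params == {"ids0": 1, "ids1": 4}
```

Each value still travels as its own parameter, so the expansion stays safe against SQL injection.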
Just in case anyone is looking for an SQL Server Solution:
CREATE FUNCTION [dbo].[SplitString]
(
@Input NVARCHAR(MAX),
@Character CHAR(1)
)
RETURNS @Output TABLE (
Item NVARCHAR(1000)
)
AS BEGIN
DECLARE @StartIndex INT, @EndIndex INT
SET @StartIndex = 1
IF RIGHT(@Input, 1) <> @Character
BEGIN
SET @Input = @Input + @Character
END
WHILE CHARINDEX(@Character, @Input) > 0
BEGIN
SET @EndIndex = CHARINDEX(@Character, @Input)
INSERT INTO @Output(Item)
SELECT SUBSTRING(@Input, @StartIndex, @EndIndex - 1)
SET @Input = SUBSTRING(@Input, @EndIndex + 1, LEN(@Input))
END
RETURN
END
Given an array of strings, I convert it to a comma-separated list of strings using the following code
var result = string.Join(",", arr);
Then I can pass the parameter as follows
Command.Parameters.AddWithValue("@Parameter", result);
Then, in the stored procedure definition, I use the parameter from above as follows
select * from [dbo].[WhateverTable] where [WhateverColumn] in (select Item from dbo.SplitString(@Parameter, ','))
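The round trip in this answer (join client-side, split server-side) can be sanity-checked in any language; a Python sketch of the invariant, for illustration only:

```python
arr = ["alpha", "beta", "gamma"]
joined = ",".join(arr)           # what string.Join(",", arr) produces
assert joined == "alpha,beta,gamma"
assert joined.split(",") == arr  # what the SplitString function undoes server-side
```

Note this only holds as long as the values themselves cannot contain the delimiter.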
