Cannot insert sdo_geometry with more than 500 vertices - c#

I have the following table
CREATE TABLE MYTABLE (MYID VARCHAR2(5), MYGEOM MDSYS.SDO_GEOMETRY );
and the SQL statement below:
INSERT INTO MYTABLE (MYID,MYGEOM) VALUES
( 255, SDO_GEOMETRY(2003, 2554, NULL, SDO_ELEM_INFO_ARRAY(1,1003,1),
SDO_ORDINATE_ARRAY(-34.921816571,-8.00119170599993,
...,-34.921816571,-8.00119170599993)));
Even after reading several articles about possible solutions, I couldn't figure out how to insert this sdo_geometry object.
Oracle complains with this message:
ORA-00939: too many arguments for function
I know that it's not possible to insert more than 999 values at once.
I tried stored-procedure solutions, but I'm not an Oracle expert, and maybe I missed something.
Could someone give me an example of code in C# or PL/SQL (or both), with or without a stored procedure, to insert that row?
I'm using Oracle 11g and OracleDotNetProvider v 12.1.400 on VS2015, and my source of spatial data is an external JSON file (so, no database-to-database transfer). I can only use solutions that go through this provider, without datafiles or direct database handling.
I'm using SQL Developer to test the queries.
Please don't point me to articles unless you are sure they work with this row/value.

I finally found an effective solution. Here: Constructing large sdo_geometry objects in Sql Developer and SqlPlus. Pls-00306 Error
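For reference, here is a minimal C# sketch of that approach, assuming the MYTABLE definition above and the ODP.NET managed provider. The ordinates are emitted as individual assignments inside an anonymous PL/SQL block, so the SDO_ORDINATE_ARRAY constructor never receives hundreds of arguments and the 999-argument limit is never hit. This is only a sketch of the linked technique, not guaranteed production code.
using System.Globalization;
using System.Text;
using Oracle.ManagedDataAccess.Client;

static void InsertPolygon(OracleConnection conn, string myId, double[] ordinates)
{
    var sb = new StringBuilder();
    sb.AppendLine("DECLARE");
    sb.AppendLine("  ords MDSYS.SDO_ORDINATE_ARRAY := MDSYS.SDO_ORDINATE_ARRAY();");
    sb.AppendLine("BEGIN");
    sb.AppendLine($"  ords.EXTEND({ordinates.Length});");

    // Assigning the ordinates one by one sidesteps the 999-argument limit
    // of the SDO_ORDINATE_ARRAY constructor.
    for (int i = 0; i < ordinates.Length; i++)
        sb.AppendLine($"  ords({i + 1}) := {ordinates[i].ToString(CultureInfo.InvariantCulture)};");

    // Same geometry metadata as in the question (2D polygon, SRID 2554).
    sb.AppendLine("  INSERT INTO MYTABLE (MYID, MYGEOM)");
    sb.AppendLine("  VALUES (:myid, MDSYS.SDO_GEOMETRY(2003, 2554, NULL,");
    sb.AppendLine("          MDSYS.SDO_ELEM_INFO_ARRAY(1, 1003, 1), ords));");
    sb.AppendLine("END;");

    using (var cmd = new OracleCommand(sb.ToString(), conn))
    {
        cmd.BindByName = true;
        cmd.Parameters.Add("myid", OracleDbType.Varchar2).Value = myId;
        cmd.ExecuteNonQuery();
    }
}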

The limitation you see is old. It is based on the idea that no-one would ever write a function that would have more than 1000 parameters (actually 999 input parameters and 1 return value).
However, with the advent of multi-valued attributes (VARRAYs) and objects, this is no longer true. In particular for spatial types, the SDO_ORDINATES attribute is really an object type (implemented as a VARRAY), and the reference to SDO_ORDINATE_ARRAY is a call to the constructor of that type. Its input can be an array (when used from a programming language) or a list of numbers, each one being considered a parameter to a function - hence the limit of 999 numbers.
That happens only if you hard-code the numbers in your SQL statement, which is bad practice in general. The better practice is to use bind variables, and object types are no exception: the proper way is to construct an array with the coordinates you want to insert and pass it to the insert statement, or to construct the entire SDO_GEOMETRY object as a bind variable.
And of course, the very idea of constructing a complex geometry entirely by hand, hardcoding the coordinates, is absurd. That shape will either be loaded from a file (and a loading tool will take care of that), or captured by someone drawing a shape over a map - and then your GIS/capture tool will pass the coordinates to your application for insertion into your database.
In other words, that limit of 999 attributes/numbers is rarely hit in real life. When it is, it usually reflects a misunderstanding of how these things work.

Related

Store values in separate, C# type-specific columns or all in one column?

I'm building a C# project configuration system that will store configuration values in a SQL Server db.
I was originally going to set the table up as such:
KeyId int
FieldName varchar
DataType varchar
StringValue varchar
IntValue int
DecimalValue decimal
...
Values would be stored and retrieved with the value in the DataType column determining which Value column to use, but I really don't like that design. So I thought I'd go this route:
KeyId int
FieldName varchar
DataType varchar
Value varbinary
Here the value in DataType would still determine the type of Value brought back, but it would all be in one column and I wouldn't have to write a ton of overloads to accommodate the different types like I would have with the previous solution. I would just pull the Value in as a byte array and use DataType to perform whatever conversion(s) necessary to get my Value.
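For illustration, the DataType-driven conversion could look roughly like this (the names and the set of supported types are just examples, and the decoding has to mirror however the values were encoded when written):
using System;
using System.Text;

static object MaterializeValue(string dataType, byte[] value)
{
    switch (dataType)
    {
        case "int":
            return BitConverter.ToInt32(value, 0);
        case "decimal":
            // assumes the decimal was written as its invariant-culture string, UTF-8 encoded
            return decimal.Parse(Encoding.UTF8.GetString(value),
                                 System.Globalization.CultureInfo.InvariantCulture);
        case "string":
            return Encoding.UTF8.GetString(value);
        default:
            throw new NotSupportedException($"Unknown DataType '{dataType}'.");
    }
}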
Is the varbinary approach going to cause any performance issues or is it just bad practice to drop all these different types of data into a varbinary? I've been searching around for about an hour and I can't get to a definitive answer.
Also, if there is a more preferred method anyone can think of to reach the same conclusion, I'm all ears (or eyes).
You could serialize your settings as JSON and just store that as a string. Then you have all the settings within one row and your clients can deserialize as needed. This is also a safe way to add additional settings at any time without any modifications to your database.
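A minimal sketch of that idea, using Json.NET purely as an example serializer (the setting names here are made up):
using System;
using System.Collections.Generic;
using Newtonsoft.Json;

var settings = new Dictionary<string, object>
{
    ["Timeout"] = 30,
    ["Theme"]   = "dark",
    ["Ratio"]   = 1.5
};

// Serialize once and store the whole blob in a single NVARCHAR(MAX) column.
string json = JsonConvert.SerializeObject(settings);

// On the client, deserialize and convert individual values as needed.
var restored = JsonConvert.DeserializeObject<Dictionary<string, object>>(json);
int timeout = Convert.ToInt32(restored["Timeout"]);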
We are using the second solution and it works well. Remember that disk access is orders of magnitude more expensive than, for example, a cast operation (milliseconds vs. nanoseconds, see the reference), so do not look for the bottleneck there.
A solution could be to implement a polymorphic association (1, 2), but I don't think there is a need for that, or that you should do it. The second solution is close to a NoSQL-style store - you can dump anything in as a value, even the entire HTML markup of a page. It should be the caller's responsibility to know what to do with the data.
Also, see these threads on how to store settings in a DB: 1, 2, and 3 for critique.

Stored Procedure sometimes returns short, sometimes returns int

I'm working with a legacy codebase and need to call a stored procedure that I'm not allowed to modify. This stored procedure returns a row or multiple rows of validation data.
Example of result set (two columns, code and text):
0 "success"
OR
3 "short error"
4 "detailed error"
In the procedure itself, the message is selected simply as:
Select 0 as code, 'success' as text
Problem:
I'm using Entity Framework to map the result of this stored procedure to a custom class:
public class ValidationResult
{
public int code { get; set; }
public string text { get; set; }
}
The call itself:
var result = context.Database.SqlQuery<ValidationResult>(@"old_sproc").ToList();
I've written some integration tests, and have noticed that when the procedure returns the success message, the 0 comes across as a short. When it returns a non-zero message, it comes across as an int. I assumed that with code declared as an int, the short would fit into it. Unfortunately, I get the following exception for my success test:
The specified cast from a materialized 'System.Int16' type to the 'System.Int32' type is not valid.
When I switch code to a short to make my success test pass, my failure test fails with the following exception:
The specified cast from a materialized 'System.Int32' type to the 'System.Int16' type is not valid.
ADO.NET is an answer
One solution is to fall back to ADO.NET's SqlDataReader object, so I have that as a fallback solution. I'm wondering if there is something I can do on the EF side to get this working, though.
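For completeness, a sketch of that ADO.NET fallback (connectionString is assumed to be the same connection string the EF context uses; Convert.ToInt32 copes with the value coming back as either SMALLINT or INT):
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

static List<ValidationResult> RunOldSproc(string connectionString)
{
    var results = new List<ValidationResult>();
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("old_sproc", conn) { CommandType = CommandType.StoredProcedure })
    {
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                results.Add(new ValidationResult
                {
                    code = Convert.ToInt32(reader["code"]),   // handles both short and int
                    text = (string)reader["text"]
                });
            }
        }
    }
    return results;
}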
(This is a follow-up to my previous answer. It is only relevant for sql-server-2012 and later.)
Short answer:
var sql = "EXECUTE old_sproc WITH RESULT SETS ((code INT, text VARCHAR(MAX)))";
var result = context.Database.SqlQuery<ValidationResult>(sql).ToList();
Approach taken in this answer:
This answer will follow in your footsteps and use SqlQuery to execute your stored procedure. (Why not an altogether different approach? Because there might not be any alternative. I'll go into this further below.)
Let's start with an observation about your current code:
var result = context.Database.SqlQuery<ValidationResult>(@"old_sproc").ToList();
The query text "old_sproc" is really abbreviated T-SQL for "EXECUTE old_sproc". I am mentioning this because it's easy to think that SqlQuery somehow treats the name of a stored procedure specially; but no, this is actually a regular T-SQL statement.
In this answer, we will modify your current SQL only a tiny bit.
Implicit type conversions with the WITH RESULT SETS clause:
So let's stay with what you're already doing: EXECUTE the stored procedure via SqlQuery. Starting with SQL Server 2012, the EXECUTE statement supports an optional clause called WITH RESULT SETS that allows you to specify what result sets you expect to get back. SQL Server will attempt to perform implicit type conversions if the actual result sets do not match that specification.
In your case, you might do this:
var sql = "EXECUTE old_sproc WITH RESULT SETS ((code INT, text VARCHAR(MAX)))";
var result = context.Database.SqlQuery<ValidationResult>(sql).ToList();
The added clause states that you expect to get back one result set having a code INT and a text VARCHAR(MAX) column. The important bit is code INT: If the stored procedure happens to produce SMALLINT values for code, SQL Server will perform the conversion to INT for you.
Implicit conversions could take you even further: For example, you could specify code as VARCHAR(…) or even NUMERIC(…) (and change your C# properties to string or decimal, respectively).
If you're using Entity Framework's SqlQuery method, it's unlikely to get any neater than that.
For quick reference, here are some quotes from the linked-to MSDN reference page:
"The actual result set being returned during execution can differ from the result defined using the WITH RESULT SETS clause in one of the following ways: number of result sets, number of columns, column name, nullability, and data type."
"If the data types differ, an implicit conversion to the defined data type is performed."
Do I have to write a SQL query? Isn't there another (more ORM) way?
None that I am aware of.
Entity Framework has been evolving in a "Code First" direction in the recent past (it's at version 6 at this time of writing), and that trend is likely to continue.
The book "Programming Entity Framework Code First" by Julie Lerman & Rowan Miller (published in 2012 by O'Reilly) has a short chapter "Working with Stored Procedures", which contains two code examples; both of which use SqlQuery to map a stored procedure's result set.
I guess that if these two EF experts do not show another way of mapping stored procedures, then perhaps EF currently does not offer any alternative to SqlQuery.
(P.S.: Admittedly the OP's main problem is not stored procedures per se; it's making EF perform an automatic type conversion. Even then, I am not aware of another way than the one shown here.)
If you can't alter the stored procedure itself, you could create a wrapper stored procedure which alters the data in some way, and have EF call that.
Not ideal of course, but may be an option.
(Note: If you're working with SQL Server 2012 or later, see my follow-up answer, which shows a much shorter, neater way of doing the same thing described here.)
Here's a solution that stays in EF land and does not require any database schema changes.
Since you can pass any valid SQL to the SqlQuery method, nothing stops you from passing it a multi-statement script that:
DECLAREs a temporary table;
EXECUTEs the stored procedure and INSERTs its result into the temporary table;
SELECTs the final result from that temporary table.
The last step is where you can apply any further post-processing, such as a type conversion.
const string sql = @"DECLARE @temp TABLE ([code] INT, [text] VARCHAR(MAX));
                     INSERT INTO @temp EXECUTE [old_sproc];
                     SELECT CONVERT(INT, [code]) AS [code], [text] FROM @temp;";
// The CONVERT(INT, [code]) might not actually be necessary, since @temp.[code]
// is already declared INT, i.e. SQL Server might already have coerced SMALLINT
// values to INT values during the INSERT.
var result = context.Database.SqlQuery<ValidationResult>(sql).ToList();
In the entity framework data modeler page (Model Browser), either change the functional mapping to a specific int which works for the ValidationResult class or create a new functional mapping result class which has the appropriate int and use that as the resulting DTO class.
I leave this process a touch vague because I do not have access to the actual database; instead I provide the process to either create a new functional mapping or modify an existing one. Trial and error will help you overcome the incorrect functional mapping.
Another trick to have EF generate the right information is to temporarily drop the stored proc and have a new one return a stub select such as:
select 1 AS Code, 'Text' as text
RETURN @@ROWCOUNT
The reasoning for this is that sometimes EF can't determine what the stored procedure ultimately returns. If that is the case, temporarily creating the stub return and generating EF from it provides a clear picture for the mappings. Then returning the sproc to its original code after an update sometimes does the trick.
Ignore the int/short issue. The text is always the same for the same number, right? Get just the text and use a switch case. Yes, it's a hack, but unless you can fix the root of the problem (and you say you are not allowed to), you should go with the hack that takes the least amount of time to create and will not cause problems down the road for the next person maintaining the code. If this stored proc is legacy, it will not return any new kinds of results in the future, and this solution together with a nice comment solves the problem and lets you go back to creating value somewhere else.
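A sketch of that hack (the code/text pairs come from the example result sets in the question; the helper class is just for illustration):
public class ValidationText
{
    public string text { get; set; }
}

// Map only the text column (extra columns in the result set are ignored),
// then derive the code from the text on the C# side.
var texts = context.Database.SqlQuery<ValidationText>("old_sproc").ToList();
var results = texts.Select(t => new ValidationResult
{
    text = t.text,
    code = t.text == "success" ? 0
         : t.text == "short error" ? 3
         : t.text == "detailed error" ? 4
         : -1   // unknown message
}).ToList();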
Cast the static message code to an int:
Select cast(0 as int) as code, 'success' as text
This ensures the literal returned is consistent with the int returned by the other query. Leave the ValidationResult.code declared as an int.
Note: I know I missed the part in the question saying the SP can't be modified, but since that constraint makes the answer quite complicated, I'm leaving this here for others who may have the same problem but are able to solve it much more easily by modifying the SP. This does work if you have a return-type inconsistency in the SP and modifying it is an option.
There is a workaround you could use if you don't find a better solution. Let it be an int. It will work for all error codes. If you get an exception, you know the result was a success, so you can add a try/catch for that specific exception. It's not pretty, and depending on how often this runs it might impact performance.
Another idea: have you tried changing the type of code to object?

Which datatype and which insertion parameter for large data

A field in my data records can exceed the 8000-character limit of nvarchar, so I'm looking for a somewhat larger data type, e.g. about 9000 characters. Any ideas?
At first I was using NVarChar(8000); after finding that some records could pass this boundary, I switched to NText
to see what would happen next. With Entity Framework it seemed to do the job as expected, without my defining any insert statement or data adapter. During development the system changed to a data adapter, so now I have to do the job with an insert command, and the parameter is currently defined like this:
cmdIns.Parameters.Add("@story", SqlDbType.NText, 16, "Story")
It seems that the size limit of 16 is increased automatically when EF is used, but not with the data adapter (which just inserts 16 characters of the data).
I really don't know (can't remember) whether the EF test passed even for items larger than 8000 characters.
If so, I'm curious about the reason.
The question is which data type is appropriate, and what the equivalent working parameter is, for inserting this large data field.
Note: SQL Server CE is used here.
Edit:
Sorry, I had to leave at that time.
The data type that should be used here is NTEXT, with no alternative,
but defining the insertion statement and parameter is a bit of a hassle:
unfortunately, none of the suggested methods could do the desired job in the same way as the snippet I gave.
Without defining the length, it gives run-time errors,
and using AddWithValue I couldn't use the DataAdapter and do the insertion in bulk.
Maybe I should put this in another question, but it is part of this question, and a working answer here would be the complete one.
Any ideas?
If I understood your question correctly you should be fine doing something like this, omitting the size as it isn't necessary:
cmdIns.Parameters.Add(new SqlParameter("story", SqlDbType.NText)
{
    Value = yourVariable
});
Use AddWithValue whenever you want to add a parameter by specifying its name and value, like this: command.Parameters.AddWithValue("@story", story);

How do I keep my stored procedure inputs from being silently truncated?

We use the standard System.Data classes, DbConnection and DbCommand, to connect to SQL Server from C#, and we have many stored procedures that take VARCHAR or NVARCHAR parameters as input. We found that neither SQL Server nor our C# application throws any kind of error or warning when a string longer than maximum length of a parameter is passed in as the value to that parameter. Instead, the value is silently truncated to the maximum length of the parameter.
So, for example, if the stored procedure input is of type VARCHAR(10) and we pass in 'U R PRETTY STUPID', the stored procedure receives the input as 'U R PRETTY', which is very nice but totally not what we meant to say.
What I've done in the past to detect these truncations, and what others have likewise suggested, is to make the parameter input length one character larger than required, and then check if the length of the input is equal to that new max length. So in the above example my input would become VARCHAR(11) and I would check for input of length 11. Any input of length 11 or more would be caught by this check. This works, but feels wrong. Ideally, the data access layer would detect these problems automatically.
Is there a better way to detect that the provided stored procedure input is longer than allowed? Shouldn't DbCommand already be aware of the input length limits?
Also, as a matter of curiosity, what is responsible for silently truncating our inputs?
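For what it's worth, the kind of automatic check I'd like the data access layer to perform might look like this sketch, assuming each DbParameter.Size is set to the length declared on the stored procedure parameter (e.g. 10 for VARCHAR(10)); the helper name is made up:
using System;
using System.Data.Common;

static void ThrowIfInputWouldBeTruncated(DbCommand command)
{
    foreach (DbParameter p in command.Parameters)
    {
        if (p.Value is string s && p.Size > 0 && s.Length > p.Size)
            throw new ArgumentException(
                $"Parameter '{p.ParameterName}' allows {p.Size} characters but was " +
                $"given {s.Length}; SQL Server would silently truncate it.");
    }
}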
Use VARCHAR(8000), NVARCHAR(4000), or even N/VARCHAR(MAX) for all the variables and parameters. This way you do not need to worry about truncation when assigning @variables and @parameters. Truncation may occur at the actual data write (insert or update), but that is not silent; it will trigger a hard error and you'll find out about it. You also get the added benefit that the stored procedure code does not have to change with schema changes (change a column length and the code is still valid). And you also get better plan-cache behavior from using consistent parameter lengths; see How Data Access Code Affects Database Performance.
Be aware that there is a slight performance hit for using MAX types for @variables/@parameters; see Performance comparison of varchar(max) vs. varchar(N).

Converting logic of DateTime.FromBinary method in TSQL query

I have a table that contain column with VARBINARY(MAX) data type. That column represents different values for different types in my c# DB layer class. It can be: int, string and datetime. Now I need to convert that one column into three by it's type. So values with int type go to new column ObjectIntValue and so on for every new column.
But I have a problem transferring data to the datetime column, because the old column contains the datetime value as a long, produced by the C# DateTime.ToBinary method when the data was saved.
I have to do this in T-SQL and can't use .NET to convert the value into the new column. Do you have any ideas?
Thanks for any advice!
Use CLR in T-SQL.
Basically, you use CREATE ASSEMBLY to register the DLL with your function(s) in it,
then create a user-defined function to call it, and then you can use it.
There are several rules depending on what you want to do, but since you basically only need DateTime.FromBinary(), it shouldn't be too hard to figure out.
Never done it myself, but these guys seem to know what they are talking about
CLR in TSQL tutorial
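For a rough idea of what that CLR function could look like (a sketch only: you would compile it into an assembly, register it with CREATE ASSEMBLY, and expose it with CREATE FUNCTION as the tutorial describes; on the T-SQL side the VARBINARY bytes must first be turned back into the original BIGINT, which depends on how they were written):
using System;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public static class DateTimeConversions
{
    [SqlFunction(IsDeterministic = true, IsPrecise = true)]
    public static SqlDateTime FromBinary(SqlInt64 value)
    {
        if (value.IsNull)
            return SqlDateTime.Null;

        // DateTime.FromBinary reverses DateTime.ToBinary, restoring the ticks
        // and the DateTimeKind encoded in the 64-bit value.
        return new SqlDateTime(DateTime.FromBinary(value.Value));
    }
}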
This is a one-off conversion, right? Your response to @schglurps is a bit of a concern.
If I understand you correctly, there would have to be a break in your update script: the one you have would work up to the point where you implement this change, then you'd have a one-off procedure for this manoeuvre, and then you would be updating from a new version.
If you want to validate it, just check for the existence or non-existence of the new columns.
Another option would be to write a wee application that fills in the new columns from the old one and invoke it. Ugh...
If this isn't one off and you want to keep and maintain the old column, then you have problems.
