Converting the logic of the DateTime.FromBinary method into a T-SQL query - C#

I have a table that contains a column with the VARBINARY(MAX) data type. That column holds values of different types for my C# DB layer class: it can be an int, a string, or a datetime. Now I need to split that one column into three, by type, so that int values go into a new ObjectIntValue column, and so on for each new column.
But I have a problem migrating data to the datetime column, because the old column stores the datetime value as a long produced by the C# DateTime.ToBinary method when the data was saved.
I have to do this in T-SQL and can't use .NET to convert the value for the new column. Do you have any ideas?
Thanks for any advice!
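For context, here is a small C# sketch of what DateTime.ToBinary actually stores (the sample date is purely illustrative): the Kind goes into the top two bits and the 100-nanosecond tick count since 0001-01-01 into the lower 62 bits, which is what any T-SQL replacement has to unpack.
// Illustrative only: what the long saved by DateTime.ToBinary contains
// for a UTC value (local times get an extra UTC adjustment and are trickier).
using System;

class ToBinaryDemo
{
    static void Main()
    {
        var stamp  = new DateTime(2012, 6, 1, 10, 30, 0, DateTimeKind.Utc);
        long saved = stamp.ToBinary();                 // this long ended up in the VARBINARY(MAX) column

        long ticks = saved & 0x3FFFFFFFFFFFFFFF;       // strip the two Kind flag bits
        Console.WriteLine(new DateTime(ticks) == stamp);        // True: the ticks alone recover the value
        Console.WriteLine(DateTime.FromBinary(saved) == stamp); // True
    }
}
In T-SQL terms that means casting the VARBINARY to BIGINT, masking off the top two bits, and turning the remaining ticks into a date (for example, ticks / 864000000000 gives whole days since 0001-01-01, and the remainder gives the time of day).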

Use CLR in T-SQL.
Basically, you use CREATE ASSEMBLY to register the DLL containing your function(s),
then create a user-defined function to call it, and then you can use it.
There are several rules depending on what you want to do, but since basically all you need is DateTime.FromBinary(), it shouldn't be too hard to figure out.
I've never done it myself, but these guys seem to know what they are talking about:
CLR in TSQL tutorial
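As a hedged sketch (not tested against a specific SQL Server version), the CLR function itself can be as small as this; compile it into a class library, register it with CREATE ASSEMBLY, and expose it with CREATE FUNCTION ... EXTERNAL NAME:
using System;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public static class DateTimeBinary
{
    // Unpacks the long produced by DateTime.ToBinary back into a datetime.
    [SqlFunction]
    public static SqlDateTime FromBinary(SqlInt64 value)
    {
        if (value.IsNull)
            return SqlDateTime.Null;
        return new SqlDateTime(DateTime.FromBinary(value.Value));
    }
}
On the SQL side you would then call it with something like dbo.FromBinary(CAST(OldColumn AS bigint)) - the names are placeholders, and depending on how the long was originally written to the VARBINARY column you may need to reverse the byte order first.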
This is a one-off conversion, right? Your response to @schglurps is a bit of a concern.
If I understand you correctly, there would have to be a break in your update script: the one you have would work up to the point where you implement this change, then you'd have a one-off procedure for this manoeuvre, and then you would be updating from a new version.
If you want to validate it, just check for the existence or non-existence of the new columns.
Another option would be to write a wee application that fills in the new columns from the old one and invoke it. Ugh...
If this isn't a one-off and you want to keep and maintain the old column, then you have problems.

Related

Cannot insert sdo_geometry with more than 500 vertices

I have the following table
CREATE TABLE MYTABLE (MYID VARCHAR2(5), MYGEOM MDSYS.SDO_GEOMETRY );
and the SQL statement below:
INSERT INTO MYTABLE (MYID,MYGEOM) VALUES
( 255, SDO_GEOMETRY(2003, 2554, NULL, SDO_ELEM_INFO_ARRAY(1,1003,1),
SDO_ORDINATE_ARRAY(-34.921816571,-8.00119170599993,
...,-34.921816571,-8.00119170599993)));
Even after reading several articles about possible solutions, I couldn't figure out how to insert this sdo_geometry object.
Oracle complains with this message:
ORA-00939 - "too many arguments for function"
I know that it's not possible to insert more than 999 values at once.
I tried stored procedure solutions, but I'm not an Oracle expert, and maybe I missed something.
Could someone give me a code example in C# or PL/SQL (or both), with or without a stored procedure, to insert that row?
I'm using Oracle 11g and OracleDotNetProvider v12.1.400 on VS2015, and my spatial data comes from an external JSON source (so no database-to-database transfer). I can only use solutions based on this provider, without data files or direct database handling.
I'm using SQL Developer to test the queries.
Please don't point me to articles unless you are sure they work with this row/value.
I finally found an effective solution. Here: Constructing large sdo_geometry objects in Sql Developer and SqlPlus. Pls-00306 Error
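Driven from C# with ODP.NET, that approach might look roughly like the sketch below: the ordinates are assigned one by one inside an anonymous PL/SQL block, and since assignments are not function arguments the 999-argument limit never comes into play. The table and column names match the question; everything else (method name, managed driver, connection handling) is illustrative.
using System.Globalization;
using System.Text;
using Oracle.ManagedDataAccess.Client;

static class GeometryInsert
{
    public static void InsertPolygon(string connectionString, string myId, double[] ordinates)
    {
        var sb = new StringBuilder();
        sb.AppendLine("DECLARE");
        sb.AppendLine("  l_ords SDO_ORDINATE_ARRAY := SDO_ORDINATE_ARRAY();");
        sb.AppendLine("BEGIN");
        sb.AppendLine("  l_ords.EXTEND(" + ordinates.Length + ");");
        for (int i = 0; i < ordinates.Length; i++)
        {
            // PL/SQL collections are 1-based; format with the invariant culture.
            sb.AppendLine("  l_ords(" + (i + 1) + ") := " + ordinates[i].ToString(CultureInfo.InvariantCulture) + ";");
        }
        sb.AppendLine("  INSERT INTO MYTABLE (MYID, MYGEOM) VALUES (:id,");
        sb.AppendLine("    SDO_GEOMETRY(2003, 2554, NULL, SDO_ELEM_INFO_ARRAY(1, 1003, 1), l_ords));");
        sb.AppendLine("END;");

        using (var conn = new OracleConnection(connectionString))
        using (var cmd = new OracleCommand(sb.ToString(), conn))
        {
            cmd.Parameters.Add("id", OracleDbType.Varchar2).Value = myId;
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
For very large geometries the generated block can get long; the linked answer discusses splitting the assignments into chunks, while the sketch above keeps it simple.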
The limitation you see is old. It is based on the idea that no-one would ever write a function that would have more than 1000 parameters (actually 999 input parameters and 1 return value).
However, with the advent of multi-valued attributes (VARRAYs) and objects, this is no longer true. In particular, for spatial types the SDO_ORDINATE attribute is really an object type (implemented as a VARRAY), and the reference to SDO_ORDINATE is the constructor of that object type. Its input can be an array (when used from a programming language) or a list of numbers, each one being considered a parameter to a function - hence the limit of 999 numbers.
That happens only if you hard-code the numbers in your SQL statement. But that is a bad practice generally. The better practice is to use bind variables, and object types are no exception. The proper way is to construct an array with the coordinates you want to insert and pass those to the insert statement. Or construct the entire SDO_GEOMETRY object as a bind variable.
And of course, the very idea of constructing a complex geometry entirely by hand, hardcoding the coordinates, is absurd. That shape will either be loaded from a file (and a loading tool will take care of that) or captured by someone drawing a shape over a map - and then your GIS/capture tool will pass the coordinates to your application for insertion into your database.
In other words, the limitation to 999 attributes/numbers is rarely seen in real life. When it is, it reflects a misunderstanding of how those things work.

Store values in separate, C# type-specific columns or all in one column?

I'm building a C# project configuration system that will store configuration values in a SQL Server db.
I was originally going to set the table up as such:
KeyId int
FieldName varchar
DataType varchar
StringValue varchar
IntValue int
DecimalValue decimal
...
Values would be stored and retrieved with the value in the DataType column determining which Value column to use, but I really don't like that design. So I thought I'd go this route:
KeyId int
FieldName varchar
DataType varchar
Value varbinary
Here the value in DataType would still determine the type of Value brought back, but it would all be in one column and I wouldn't have to write a ton of overloads to accommodate the different types like I would have with the previous solution. I would just pull the Value in as a byte array and use DataType to perform whatever conversion(s) necessary to get my Value.
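To illustrate the kind of conversion that paragraph implies, here is a rough sketch (the class name and DataType strings are made up; the encoding choices would have to match whatever wrote the bytes):
using System;
using System.Text;

static class ValueCodec
{
    // Turns the stored byte[] back into a CLR value, driven by the DataType column.
    public static object Decode(string dataType, byte[] value)
    {
        switch (dataType)
        {
            case "int":      return BitConverter.ToInt32(value, 0);
            case "string":   return Encoding.UTF8.GetString(value);
            case "datetime": return DateTime.FromBinary(BitConverter.ToInt64(value, 0));
            case "decimal":  return new decimal(new[]
                             {
                                 BitConverter.ToInt32(value, 0),
                                 BitConverter.ToInt32(value, 4),
                                 BitConverter.ToInt32(value, 8),
                                 BitConverter.ToInt32(value, 12),
                             });
            default: throw new NotSupportedException("Unknown DataType: " + dataType);
        }
    }
}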
Is the varbinary approach going to cause any performance issues or is it just bad practice to drop all these different types of data into a varbinary? I've been searching around for about an hour and I can't get to a definitive answer.
Also, if there is a more preferred method anyone can think of to reach the same conclusion, I'm all ears (or eyes).
You could serialize your settings as JSON and just store that as a string. Then you have all the settings within one row and your clients can deserialize as needed. This is also a safe way to add additional settings at any time without any modifications to your database.
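A minimal sketch of that idea (the settings class and property names are invented, and Newtonsoft.Json is just one possible serializer):
using Newtonsoft.Json;

public class AppSettings
{
    // Example settings; new properties can be added later without a schema change.
    public string Theme { get; set; }
    public int CacheMinutes { get; set; }
    public decimal PriceFactor { get; set; }
}

public static class SettingsStore
{
    // Store: serialize the whole object into a single varchar/nvarchar column.
    public static string ToJson(AppSettings settings) => JsonConvert.SerializeObject(settings);

    // Load: clients deserialize as needed; properties missing from older rows fall back to defaults.
    public static AppSettings FromJson(string json) => JsonConvert.DeserializeObject<AppSettings>(json);
}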
We are using the second solution and it works well. Remember that disk access is orders of magnitude slower than, say, a casting operation (milliseconds vs. nanoseconds; see ref), so don't look for the bottleneck here.
One solution could be to implement a polymorphic association (1, 2), but I don't think there is a need for that, or that you should do it. The second solution is close to a NoSQL database: you can dump anything in as a value, even the entire HTML markup for a page. It should be the caller's responsibility to know what to do with the data.
Also, see these threads on how to store settings in a DB: 1, 2, and 3 for critique.

How to keep previous and new value of data?

I'm currently working on a project where we need to archive and trace all modified data.
When a modification occurs, we have to keep this information:
Who modified the data?
When?
And - that's why I'm asking this question - the previous and the new value of the data.
In short, I have to trace every modification of every piece of data.
Example:
I have a name field with the value "Morgan".
When I modify this value, I have to be able to tell the user that on the 6th of January, XXX changed the value from "Morgan" to "Robert"...
I have to find a clean and generic way to do this, because a large amount of data is affected by this behavior.
My program is in C# (.NET 4) and we are using SQL Server 2008 R2 and NHibernate for the object mapping.
Do you have any ideas, experience, or solutions for doing something like that?
I am a little confused about the point at which you want the old vs. new data available, but this can be done within a database trigger, as in the following question:
trigger-insert-old-values-values-that-was-updated
NHibernate Envers is what you want :)
You must use NHibernate 3.2+ (3.2 is the current release).
It's as easy as
enversConf.Audit<Person>();
You can get info here and here
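In context, the wiring looks roughly like this (a sketch against the NHibernate.Envers fluent API of that era; exact namespaces and method names may differ between versions):
using NHibernate.Cfg;
using NHibernate.Envers.Configuration;
using NHibernate.Envers.Configuration.Fluent;

public static class AuditBootstrap
{
    public static Configuration ConfigureWithAuditing()
    {
        var cfg = new Configuration().Configure();   // your normal NHibernate configuration

        var enversConf = new FluentConfiguration();
        enversConf.Audit<Person>();                  // Person is the audited entity from the answer above

        cfg.IntegrateWithEnvers(enversConf);         // hook Envers into the session factory
        return cfg;
    }
}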
I've been in the same situation as you. I ended up doing it this way:
Save an ActivityEntry in the database containing an identity column (if you have multiple objects that change), an action indicator (e.g. "User changed first name", stored as an int), a date field, a userId, and, most importantly, a parameter field.
By combining the values from the parameter field with the action indicator, I'm able to build strings like "{0} changed {1}'s first name from {2} to {3}", where my parameter values could be "John;Joe".
I know it feels kind of wrong to save these totally loosely typed values in the database, but I believe it's the only way around it without keeping a copy of each table.
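A rough illustration of that shape (all names are invented; the rendering uses string.Format exactly as described above):
using System;

public class ActivityEntry
{
    public int Id { get; set; }              // identity column
    public int ActionType { get; set; }      // e.g. 1 = "User changed first name"
    public DateTime OccurredAt { get; set; }
    public int UserId { get; set; }
    public string Parameters { get; set; }   // loosely typed, e.g. "John;Joe" (old;new)
}

public static class ActivityFormatter
{
    public static string Render(ActivityEntry entry, string userName, string targetName)
    {
        var p = entry.Parameters.Split(';');
        switch (entry.ActionType)
        {
            case 1:
                return string.Format("{0} changed {1}'s first name from {2} to {3}",
                                     userName, targetName, p[0], p[1]);
            default:
                return "Unknown action";
        }
    }
}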

How to define a specific function for saving and retrieving varchar fields in/from DB?

We have an old DB that we cannot change due to compatibility issues, so most of the varchar fields contain non-Unicode characters that are read through a charset - cp1251, to be exact.
We are developing a new application on the old DB using EF 4.1. The problem is that the data is stored in cp1251 but has to be displayed in UTF-8. Unfortunately, I'm new to EF, so I'm having trouble all over the place.
I'm looking for a way to implement two functions that convert strings between cp1251 and UTF-8 right at the point of retrieval from and input to the DB.
Let me put it this way: I need some way to intercept EF's attempt to save a varchar field, take its current data and convert it to cp1251, and vice versa when retrieving - regardless of the field, table, or DB currently in use. It would be more of a connection-specific implementation.
We don't have a Data Access Layer or Business Logic layer; we go straight from the UI to EF 4.1, and any business logic that needs implementing we just put in the DbContext class.
I just don't know what to look for online, or where to begin.
Any pointers welcome. Thanks in advance.
Just make sure that all fields that use cp1251 are marked as non-Unicode and try it; IMHO it should work. There is no extension point for adding a custom conversion function for a data type.
To make a property non-Unicode in the EDMX, simply set it in the property's property pages. To make it non-Unicode in code-first mapping, use:
modelBuilder.Entity<YourEntityType>()
.Property(p => p.YourStringProperty)
.IsUnicode(false);

Best Practice - Handling multiple fields, user roles, and one stored procedure

I have multiple fields, both asp:DropDownList and asp:TextBox controls. I also have a number of user roles that change the Visible property of certain controls so the user cannot edit them. All of this data is saved with a stored procedure call on PostBack. The problem is that when I send in the parameters and a control was not on the page, there obviously isn't a value for it, so in the stored procedure I have those parameters default to null. But then the previous value in the database, which I didn't want changed, is overwritten with null.
This seems to be a pretty common problem, but I didn't have a good way of explaining it. So my question is: how should I go about keeping some fields off the page while preserving their values in the database, all with one stored procedure?
Apply the same logic when choosing what data to update as the logic you're actually using when choosing what data (and its associated UI) to render.
I think the problem is that you want to update all fields in a single SQL update, regardless of their values.
I think you should do some sanity checks on your input before the update, even if that implies doing individual updates for certain parameters.
Without an example, it is a little difficult to know your exact circumstances, but here is a fictitious statement that will hopefully give you some ideas. It uses T-SQL (MS SQL Server) since you did not mention a specific SQL dialect:
UPDATE SomeImaginaryTable
SET FakeMoneyColumn = COALESCE(@FakeMoneyValue, FakeMoneyColumn)
WHERE FakeRowID = @FakeRowID
This basically updates a column to the parameter value, unless the parameter is null, in which case it uses the column's existing value.
Generally, to overcome this in my update function:
I load the current values for the user,
replace any loaded values with the newly changed values from the form,
and update the row in the DB.
This way I have all the current values, and everything that has been changed gets changed (see the sketch below).
This logic will also work for an add form, because all the fields start as null and then get replaced with new values before being sent to the DB. You would of course just have to check whether to do an insert or an update.
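As promised above, a rough sketch of that load-merge-update flow (the repository, entity, and control names are invented; the point is that only controls actually rendered for the current role overwrite the loaded values):
// Hypothetical WebForms handler illustrating the flow described above.
protected void Save_Click(object sender, EventArgs e)
{
    var current = _settingsRepository.Load(_userId);     // 1. load the current values

    if (txtFirstName.Visible)                             // 2. overwrite only the fields this role can edit
        current.FirstName = txtFirstName.Text;
    if (ddlRole.Visible)
        current.RoleId = int.Parse(ddlRole.SelectedValue);

    _settingsRepository.Save(current);                    // 3. one update, nothing unintentionally nulled
}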
