I’m currently working on a project where we need to archive and trace all modifications to the data.
When a modification occurs, we have to keep this information:
Who has modified the data?
When?
And, the reason I’m asking this question: the previous and the new value of the data.
In short, I have to trace every modification of every piece of data.
Example:
I have a name field with the value “Morgan”.
When I modify this value, I have to be able to tell the user that on the 6th of January, XXX changed the value from “Morgan” to “Robert” …
I have to find a clean and generic way to do this, because a large amount of data is affected by this behavior.
My program is in C# (.NET 4) and we are using Sql Server 2008 R2 and NHibernate for the object mapping.
Do you have any ideas, experience, or solutions for how to do something like this?
I am a little confused about the point at which you want to have the old vs. new data available. But this can be done within a database trigger, as in the following question:
trigger-insert-old-values-values-that-was-updated
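To illustrate the idea, here is a minimal sketch of such an audit trigger. All names (Person, PersonAudit, the Id and Name columns) are hypothetical; adapt them to your schema:

CREATE TRIGGER trg_Person_Audit
ON Person
AFTER UPDATE
AS
BEGIN
    -- "deleted" holds the old rows, "inserted" the new ones.
    INSERT INTO PersonAudit (PersonId, OldName, NewName, ModifiedBy, ModifiedAt)
    SELECT d.Id, d.Name, i.Name, SUSER_SNAME(), GETDATE()
    FROM deleted d
    INNER JOIN inserted i ON i.Id = d.Id
    WHERE ISNULL(d.Name, '') <> ISNULL(i.Name, '');
END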
NHibernate Envers is what you want :)
You must use NHibernate 3.2+ (3.2 is the current release).
It's as easy as:
enversConf.Audit<Person>();
You can get info here and here
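For context, a minimal configuration sketch, assuming the fluent API from the NHibernate.Envers documentation (verify the exact type and method names against your Envers version):

// Wire Envers into an existing NHibernate configuration.
var cfg = new NHibernate.Cfg.Configuration().Configure();
var enversConf = new NHibernate.Envers.Configuration.Fluent.FluentConfiguration();
enversConf.Audit<Person>();            // audit every change to Person
cfg.IntegrateWithEnvers(enversConf);   // extension method from NHibernate.Envers
var sessionFactory = cfg.BuildSessionFactory();

Envers then records old and new values in audit tables for you, along with revision metadata you can extend with the user and timestamp.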
I've been in the same situation as you. I ended up doing it this way:
Save an ActivityEntry in the database containing an identity column (if you have multiple objects that change), an action indicator (could be "User changed firstname", stored as an int), a date field, a userId and, most importantly, a parameter field.
Combining the values from the parameter field and the action indicator, I'm able to build strings like "{0} changed {1}'s firstname from {2} to {3}", where my parameter values could be "John;Joe".
I know it feels kind of wrong saving these totally loosely typed values in the database, but I believe it's the only way around it without keeping a copy of each table.
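A minimal sketch of the reconstruction step, with a hypothetical entry shape and template lookup:

public class ActivityEntry
{
    public int ActionIndicator { get; set; }  // selects a format template
    public DateTime Date { get; set; }
    public int UserId { get; set; }
    public string Parameters { get; set; }    // e.g. "John;Joe"
}

// template e.g. "{0} changed the firstname from {1} to {2}",
// looked up by ActionIndicator.
public static string Describe(ActivityEntry entry, string template, string userName)
{
    string[] args = entry.Parameters.Split(';');
    return string.Format(template, userName, args[0], args[1]);
}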
Related
I'm building a C# project configuration system that will store configuration values in a SQL Server db.
I was originally going to set the table up as such:
KeyId int
FieldName varchar
DataType varchar
StringValue varchar
IntValue int
DecimalValue decimal
...
Values would be stored and retrieved with the value in the DataType column determining which Value column to use, but I really don't like that design. So I thought I'd go this route:
KeyId int
FieldName varchar
DataType varchar
Value varbinary
Here the value in DataType would still determine the type of Value brought back, but it would all be in one column and I wouldn't have to write a ton of overloads to accommodate the different types like I would have with the previous solution. I would just pull the Value in as a byte array and use DataType to perform whatever conversion(s) necessary to get my Value.
Is the varbinary approach going to cause any performance issues or is it just bad practice to drop all these different types of data into a varbinary? I've been searching around for about an hour and I can't get to a definitive answer.
Also, if anyone can think of a better method to reach the same result, I'm all ears (or eyes).
You could serialize your settings as JSON and just store that as a string. Then you have all the settings within one row and your clients can deserialize as needed. This is also a safe way to add additional settings at any time without any modifications to your database.
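For instance, a sketch using Json.NET (the settings class and its properties are hypothetical):

// All settings live in one object, serialized into a single column.
public class AppSettings
{
    public int RetryCount { get; set; }
    public decimal PriceThreshold { get; set; }
    public string Endpoint { get; set; }
}

var settings = new AppSettings { RetryCount = 3, Endpoint = "http://example" };
string json = Newtonsoft.Json.JsonConvert.SerializeObject(settings);
// ... store json in a single nvarchar(max) column ...
AppSettings restored = Newtonsoft.Json.JsonConvert.DeserializeObject<AppSettings>(json);

Adding a new setting is then just a new property on the class; old rows simply deserialize with the default value.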
We are using the second solution and it works well. Remember that disk access is orders of magnitude slower than, for example, a casting operation (milliseconds vs. nanoseconds, see ref), so don't look for a bottleneck here.
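As a sketch of the read-back step for the varbinary design (the DataType tags and byte layouts are assumptions about how you write the values):

using System;
using System.Text;

static object ReadValue(string dataType, byte[] raw)
{
    switch (dataType)
    {
        case "int":
            return BitConverter.ToInt32(raw, 0);
        case "string":
            return Encoding.UTF8.GetString(raw);
        case "decimal":
            // decimal has no BitConverter overload; one option is to
            // store its four Int32 parts (16 bytes) via decimal.GetBits.
            int[] bits = new int[4];
            Buffer.BlockCopy(raw, 0, bits, 0, 16);
            return new decimal(bits);
        default:
            throw new NotSupportedException(dataType);
    }
}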
Another solution would be to implement a polymorphic association (1, 2), but I don't think there is a need for that, or that you should do it. The second solution is close to a non-SQL database: you can dump anything in as a value, it might as well be the entire HTML markup of a page. It should be the caller's responsibility to know what to do with the data.
Also, see these threads on how to store settings in a DB: 1, 2 and 3 for critique.
I'm new to both UniData and UniObjects, so if I ask something obvious I apologize.
I'm trying to write a tool that will let me export contacts from our ERP (Manage2000) that runs on UniData (v. 6.1) and can then import them into AD/Exchange.
The primary issue I'm having is that I don't know which fields (columns?) in the table (file?) are for what. I know that there is a dictionary that has this information in it, but I'm not sure how to get what I want out of it.
I found that there is a LIST.METADATA command in the current UniData documentation from Rocket, but it seems that either the version of UniData we are using is so old that it doesn't have this command, or it was removed from the VOC file for some unknown reason.
Does anyone know how or have any tips to pull out the structure of a table so that I can know which fields are for what data?
Thanks in advance!
At TCL:
LIST DICT contact.master
Please note that the database file name (e.g. contact.master) is case sensitive. I don't have a UniData instance at the moment to provide example output; however, it should be similar to UniVerse's output:
Field......... Type & Field........ Conversion.. Column......... Output Depth &
Name.......... Field. Definition... Code........ Heading........ Format Assoc..
Number
AMOUNT.WEBB A 1 MR22 Amt WEBB 10R M
PANDAS.COST A 3 MD2Z Pandass Cost 10R M
CREDIT.EXP.DT A 6 D4/ Cred Exp Date 10R M
For the example above, you can generally tell the "data type" of the field by looking at the conversion code. "D4/" is the conversion code for a date. "MD2Z" is a numeric conversion code, which we can guess is for monetary amounts. I'm glossing over the power of conversion codes, so please make sure to reference Rocket's documentation for these codes to truly understand what these fields would output. If you don't have the documentation in front of you, you can also reference this site:
http://www.koretech.com/kr_help/KU2/30/en/KMK_Prog_Conversions.htm
If you wanted to use UniObjects and C# to retrieve the field names in a file, you could use the following code:
// Assumes an open UniObjects session named activeSession.
UniCommand fieldSelectCommand = activeSession.CreateUniCommand();
fieldSelectCommand.Command = "SELECT DICT contact.master";
fieldSelectCommand.Execute();

// Select list 0 now holds the dictionary record IDs, i.e. the field names.
UniSelectList resultList = activeSession.CreateUniSelectList(0);
String[] allFieldNames = resultList.ReadListAsStringArray();
Having answered your question, I would also like to make a recommendation that you check out Rocket's U2 Toolkit for .NET if you're mostly going to be selecting data from the database instead of reading and manipulating individual records:
http://www.rocketsoftware.com/products/rocket-u2-toolkit-net
Not only does it present an ADO.NET way of accessing the database, it also has a better-performing version of the UniObjects library under the U2.Data.Client.UO namespace.
The Dictionary, in my opinion, is a recommendation of how the schema should behave. However, there are cases where it's not 100% accurate. You could run "LIST CONTACT.MASTER TOXML TO MYFILE.XML", which would create an XML file that you could parse.
See https://u2devzone.rocketsoftware.com/accelerate/articles/u2-xml/u2-xml#section-0 for more information.
I am working on optimizing some code I have been assigned from a previous employee's code base. Beyond the fact that the code is pretty well "spaghettified", I ran into an issue where I'm not sure how to optimize properly.
The below snippet is not an exact replication, but should detail the question fairly well.
He is taking one DataTable from an Excel spreadsheet and placing its rows into a consistently formatted DataTable which later updates the database. This seems logical to me; however, the way he is copying data seems convoluted, and it is a royal pain to modify, maintain or add new formats.
Here is what I'm seeing:
private void VendorFormatOne()
{
//dtSubmit is declared with its column schema elsewhere
for (int i = 0; i < dtFromExcelFile.Rows.Count; i++)
{
dtSubmit.Rows.Add(i);
dtSubmit.Rows[i]["reference_no"] = dtFromExcelFile.Rows[i]["VENDOR REF"];
dtSubmit.Rows[i]["customer_name"] = dtFromExcelFile.Rows[i]["END USER ID"];
//etc etc etc
}
}
To me this is completely overkill for mapping columns to a different schema, but I can't think of a way to do this more gracefully. In the actual solution, there are about 20 of these methods, all using different formats from dtFromExcelFile and the column list is much longer. The column schema of dtSubmit remains the same across the board.
I am looking for a way to avoid having to manually map these columns every time the company needs to load a new file from a vendor. Is there a way to do this more efficiently? I'm sure I'm overlooking something here, but did not find any relevant answers on SO or elsewhere.
This might be overkill, but you could define an XML file that describes which Excel column maps to which database field, then input that along with each new Excel file. You'd want to whip up a class or two for parsing and consuming that file, and perhaps another class for validating the Excel file against the XML file.
Depending on the size of your organization, this may give you the added bonus of being able to offload that tedious mapping to someone less skilled. However, it is quite a bit of setup work and if this happens only sparingly, you might not get a significant return on investment for creating so much infrastructure.
Alternatively, if you're using MS SQL Server, this is basically what SSIS is built for, though in my experience, most programmers find SSIS quite tedious.
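As a rough sketch of the XML-mapping idea (the file layout, method and column names are all hypothetical):

// Hypothetical mapping file, e.g. VendorOne.xml:
// <mappings>
//   <map excelColumn="VENDOR REF"  dbField="reference_no" />
//   <map excelColumn="END USER ID" dbField="customer_name" />
// </mappings>
using System.Data;
using System.Linq;
using System.Xml.Linq;

static void ApplyMapping(DataTable source, DataTable target, string mappingFile)
{
    var maps = XDocument.Load(mappingFile)
        .Descendants("map")
        .Select(m => new
        {
            From = (string)m.Attribute("excelColumn"),
            To = (string)m.Attribute("dbField")
        })
        .ToList();

    foreach (DataRow sourceRow in source.Rows)
    {
        DataRow targetRow = target.NewRow();
        foreach (var map in maps)
            targetRow[map.To] = sourceRow[map.From];
        target.Rows.Add(targetRow);
    }
}

Each of the 20 vendor-specific methods then collapses into a call like ApplyMapping(dtFromExcelFile, dtSubmit, "VendorOne.xml").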
I had originally intended this just as a comment but ran out of space. It's in reply to Micah's answer and your first comment therein.
The biggest problem here is the amount of XML mapping would equal that of the manual mapping in code
Consider building a small tool that, given an Excel file with two columns, produces the XML mapping file. Now you can offload the mapping work to the vendor, or an intern, or indeed anyone who has a copy of the requirement doc for a particular vendor project.
Since the file is then loaded at runtime in your import app or whatever, you can change the mappings without having to redeploy the app.
Having used exactly this kind of system many, many times in the past, I can tell you this: you will be very glad you took the time to do it - especially the first time you get a call right after deployment along the lines of "oops, we need to add a new column to the data we've given you, and we realised that we've misspelled the 19th column by the way."
About the only thing that can perhaps go wrong is data type conversions, but you can build that into the mapping file (type from/to) and generalise your import routine to perform the conversions for you.
Just my 2c.
A while ago I ran into a similar problem where I had over 400 columns from 30-odd tables to be mapped to about 60 in the actual table in the database. I had the same dilemma of whether to go with a schema or write something custom.
There was so much duplication that I ended up writing a simple helper class with a couple of overloaded methods that basically took in a column name from the import table and spat out the database column name. Also, for column names, I built a separate class of the format
public static class ColumnName
{
public const string FirstName = "FirstName";
public const string LastName = "LastName";
...
}
Same thing goes for TableNames as well.
This made it much simpler to maintain table names and column names. It also handled duplicate columns across different tables really well, avoiding duplicate code.
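A sketch of the helper idea (the import-column names here are made up):

using System.Collections.Generic;

public static class ColumnMapper
{
    // Import-table column name -> database column name.
    private static readonly Dictionary<string, string> Map =
        new Dictionary<string, string>
        {
            { "FIRST NAME", ColumnName.FirstName },
            { "SURNAME",    ColumnName.LastName },
        };

    public static string ToDbColumn(string importColumn)
    {
        return Map[importColumn];
    }
}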
I have metadata in a DB table that I want to use in code.
The metadata is different sorts of time types for reporting time spent.
The data can be:
NormalTime
OverTime
Vacation
Illness
etc
The data has an ID, a name, a description and some other stuff:
ID = 1
Name = "Regular time"
Description = "Normal work time"
What is a good way to relate to this data in my code?
If, for example, I want to create a method that sums all the NormalTime reported (I have another table that stores used time, containing the NormalTime ID, an amount and some other stuff), how do I do that?
I don't want to hardcode the ID:
Select * from xyz where TimeType = 1
What I want to do is:
Select * from xyz where TimeType = NormalTime.
Otherwise the code becomes very hard to read.
In my current solution I have hardcoded string consts that correlate to the IDs.
The problem with this is that if someone changes the description of the TimeType from NormalTime to something else, the hardcoded string const says one thing and the DB data says something else.
And yes, this has happened, as I don't have control over the DB content :(
So, how do I solve this in the most maintainable and readable way, where changes can occur in the DB table and the code doesn't become very hard to read?
Where someone can add TimeTypes to the DB and later I can add methods that use them in code.
One way to do this would be to use Visual Studio's T4 text generation templates.
(Entity Framework uses these for its code generation.)
You can create a template file which contains code to pull the metadata tables from the database and generate classes with static constants in them.
They do need to be re-run whenever the database changes, though. But I think you might be able to set them up so they regenerate every time your code is built.
A question about T4 templates
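As an illustration, a stripped-down .tt file along these lines (the connection string and table name are assumptions):

<#@ template language="C#" #>
<#@ assembly name="System.Data" #>
<#@ import namespace="System.Data.SqlClient" #>
namespace MyApp
{
    // Generated constants, one per row of the TimeType table.
    public static class TimeTypes
    {
<#
    using (var conn = new SqlConnection(@"Data Source=.;Initial Catalog=MyDb;Integrated Security=True"))
    {
        conn.Open();
        var cmd = new SqlCommand("SELECT ID, Name FROM TimeType", conn);
        using (var reader = cmd.ExecuteReader())
            while (reader.Read())
            {
#>
        public const int <#= ((string)reader["Name"]).Replace(" ", "") #> = <#= reader["ID"] #>;
<#
            }
    }
#>
    }
}

Running the template against the sample row above would emit public const int RegularTime = 1;.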
You could have an enum type on the C# side that maps to a table in the database.
http://www.codeproject.com/Articles/41746/Mapping-NET-Enumerations-to-the-Database
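A sketch of that approach (the numeric values are assumptions and must match the rows in the table):

public enum TimeType
{
    NormalTime = 1,
    OverTime = 2,
    Vacation = 3,
    Illness = 4
}

// Usage: no magic numbers in the query.
string sql = "SELECT * FROM xyz WHERE TimeType = @timeType";
// command.Parameters.AddWithValue("@timeType", (int)TimeType.NormalTime);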
I have a table that contains a column with the VARBINARY(MAX) data type. That column represents different values for different types in my C# DB layer class. It can be: int, string or datetime. Now I need to split that one column into three by its type, so values with the int type go to a new column ObjectIntValue, and so on for every new column.
But I have a problem migrating the data to the datetime column, because the old column contains the datetime value as a long, produced by C#'s DateTime.ToBinary method when the data was saved.
I have to do this in T-SQL and can't use .NET to convert the value into the new column. Do you have any ideas?
Thanks for any advice!
Use CLR in T-SQL.
Basically, you use CREATE ASSEMBLY to register the DLL with your function(s) in it,
then create a user-defined function to call it, and then you can use it.
There are several rules depending on what you want to do, but since basically all you want is DateTime.FromBinary(), it shouldn't be too hard to figure out.
Never done it myself, but these guys seem to know what they are talking about
CLR in TSQL tutorial
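A sketch of what the CLR side might look like (all names are assumptions; note that DateTime.FromBinary can yield dates outside the SQL datetime range, which you would have to guard against):

using System;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public class DateTimeConversions
{
    // Scalar UDF wrapping DateTime.FromBinary for use from T-SQL.
    [SqlFunction]
    public static SqlDateTime FromBinary(SqlInt64 raw)
    {
        if (raw.IsNull) return SqlDateTime.Null;
        return new SqlDateTime(DateTime.FromBinary(raw.Value));
    }
}

// Registration, roughly:
//   CREATE ASSEMBLY DateTimeConversions FROM 'C:\...\DateTimeConversions.dll';
//   CREATE FUNCTION dbo.FromBinary(@raw bigint) RETURNS datetime
//       AS EXTERNAL NAME DateTimeConversions.DateTimeConversions.FromBinary;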
This is a one-off conversion, right? Your response to #schglurps is a bit of a concern.
If I get you, there would have to be a break in your update script, i.e. the one you have would work up to the point where you implement this change, then you'd have a one-off procedure for this manoeuvre, and then you would be updating from a new version.
If you want to validate it, just check for the existence or non-existence of the new columns.
Another option would be to write a wee application that fills in the new columns from the old one and invoke it. Ugh...
If this isn't one off and you want to keep and maintain the old column, then you have problems.