Use metadata defined in DB in code - C#

I have metadata in a DB table that I want to use in code.
The metadata describes different sorts of time types for reporting spent time.
The data can be:
NormalTime
OverTime
Vacation
Illness
etc.
Each type has an ID, a name, a description and some other fields, for example:
ID = 1
Name = "Regular time"
Description = "Normal work time"
What is a good way to refer to this data in my code?
If, for example, I want to create a method that sums all the NormalTime reported (I have another table that stores used time, with the NormalTime ID, an amount and some other fields), how do I do that?
I don't want to hardcode the ID:
Select * from xyz where TimeType = 1
What I want to write is:
Select * from xyz where TimeType = NormalTime
Otherwise the code becomes very hard to read.
In my current solution I have hardcoded string constants that correlate to the IDs.
The problem with this is that if someone changes the description of the TimeType from NormalTime to something else, the hardcoded string constant says one thing and the DB data says something else.
And yes, this has happened, as I don't have control over the DB content :(
So, how do I solve this in the most maintainable and readable way, so that changes can occur in the DB table without the code becoming hard to read, and so that someone can add TimeTypes to the DB and I can later add methods that use them in code?

One way to do this would be to use Visual Studio's T4 text generation templates (Entity Framework uses these for its code generation).
You can create a template file which contains code to pull the metadata tables from the database and generate classes with static constants in them.
The templates do need to be re-run whenever the database changes, though, but you should be able to set them up so they regenerate every time your code is built.
See also: A question about T4 templates
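A minimal sketch of what such a template might look like - the table name, column names and connection string are assumptions about your schema, so adjust them to fit:

<#@ template language="C#" #>
<#@ output extension=".cs" #>
<#@ assembly name="System.Data" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Data.SqlClient" #>
<#@ import namespace="System.Linq" #>
// Generated file - regenerate instead of editing by hand.
public static class TimeTypes
{
<#
    // Hypothetical connection string and table - point these at your metadata table.
    using (var conn = new SqlConnection(@"Server=.;Database=MyDb;Integrated Security=true"))
    {
        conn.Open();
        var cmd = new SqlCommand("SELECT Id, Name FROM TimeType", conn);
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // Strip characters that are not valid in a C# identifier.
                var name = new string(reader.GetString(1).Where(char.IsLetterOrDigit).ToArray());
#>
    public const int <#= name #> = <#= reader.GetInt32(0) #>;
<#
            }
        }
    }
#>
}

With the generated constants in place, your queries can use TimeTypes.NormalTime (or whatever the Name column yields) instead of a magic number, and a rename in the table shows up as a compile error on the next regeneration instead of silently diverging.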

You could have an enum type on the C# side that maps to a table in the database.
http://www.codeproject.com/Articles/41746/Mapping-NET-Enumerations-to-the-Database
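For example - a minimal sketch, where the numeric values mirror the table IDs (only NormalTime = 1 comes from the question; the other IDs and the ReportedTime shape are invented):

using System.Linq;

// Mirrors the TimeType table; keep the numeric values in sync with the row IDs.
public enum TimeType
{
    NormalTime = 1,
    OverTime = 2,
    Vacation = 3,
    Illness = 4
}

// Assumed shape of the "used time" table from the question.
public class ReportedTime
{
    public int TimeTypeId { get; set; }
    public decimal Amount { get; set; }
}

public static class TimeReports
{
    // Sums all reported NormalTime without hardcoding the raw ID in the query.
    public static decimal SumNormalTime(IQueryable<ReportedTime> reportedTime)
    {
        return reportedTime
            .Where(r => r.TimeTypeId == (int)TimeType.NormalTime)
            .Sum(r => r.Amount);
    }
}

If the names in the table can drift, a startup check that compares the enum names against the TimeType rows will surface mismatches early instead of letting the code and the data quietly disagree.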

Best approach to track Amount field on Invoice table when InvoiceItem items change?

I'm building an app where I need to store invoices from customers so we can track who has paid and who has not, and if not, see how much they owe in total. Right now my schema looks something like this:
Customer
- Id
- Name
Invoice
- Id
- CreatedOn
- PaidOn
- CustomerId
InvoiceItem
- Id
- Amount
- InvoiceId
Normally I'd fetch all the data using Entity Framework and calculate everything in my C# service (or even do the calculation on SQL Server), something like so:
var amountOwed = Invoice.Where(i => i.CustomerId == customer.Id)
                        .SelectMany(i => i.InvoiceItems)
                        .Select(ii => ii.Amount)
                        .Sum();
But calculating everything every time I need to generate a report doesn't feel like the right approach this time, because down the line I'll have to generate reports that calculate what all the customers owe (and sometimes go even higher up the hierarchy).
For this scenario I was thinking of adding an Amount field on my Invoice table and possibly an AmountOwed on my Customer table which will be updated or populated via the InvoiceService whenever I insert/update/delete an InvoiceItem. This should be safe enough and make the report querying much faster.
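Roughly, I imagine the service doing something like this (AppDbContext, the set names and the decimal types are just illustrative):

using System.Linq;

public class InvoiceService
{
    private readonly AppDbContext _db;

    public InvoiceService(AppDbContext db) { _db = db; }

    // Called after an InvoiceItem is inserted, updated or deleted.
    public void OnInvoiceItemChanged(int invoiceId)
    {
        // Recalculate the denormalized totals from scratch instead of applying
        // deltas - slower, but much harder to get wrong.
        var invoice = _db.Invoices.Find(invoiceId);
        invoice.Amount = _db.InvoiceItems
                            .Where(ii => ii.InvoiceId == invoiceId)
                            .Sum(ii => (decimal?)ii.Amount) ?? 0m;

        var customer = _db.Customers.Find(invoice.CustomerId);
        customer.AmountOwed = _db.Invoices
                                 .Where(i => i.CustomerId == customer.Id && i.PaidOn == null)
                                 .Sum(i => (decimal?)i.Amount) ?? 0m;

        _db.SaveChanges();
    }
}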
But I've also been searching some on this subject and another recommended approach is using triggers on my database. I like this method best because even if I were to directly modify a value using SQL and not the app services, the other tables would automatically update.
My question is:
How do I add a trigger to update all the parent tables whenever an InvoiceItem is changed?
And from your experience, is this the best (safer, less error-prone) solution to this problem, or am I missing something?
There are many examples of triggers that you can find on the web. Many are poorly written, unfortunately. And for future reference, post the DDL for your tables, not an abbreviated list; no one should need to ask about the constraints and relationships you have (or should have) defined.
To start, how would you write a query to calculate the total amount at the invoice level? Presumably you know the T-SQL to do that. So write it, test it, verify it. Then add your amount column to the invoice table. Now how would you write an update statement to set that new amount column to the sum of the associated item rows? Again - write it, test it, verify it. At this point you have all the code you need to implement your trigger.
Since this process involves changes to the item table, you will need to write triggers to handle all three types of DML statement - insert, update, and delete. Write a separate trigger for each to simplify your learning and debugging. Triggers have access to special tables - go learn about them. And go learn about the false assumption that a trigger works with a single row - it doesn't. Triggers must be written to work correctly if 0 (yes, zero), 1, or many rows are affected.
In an insert statement, the inserted table will hold all the rows inserted by the statement that caused the trigger to execute. So you merely sum the values (using the appropriate grouping logic) and update the appropriate rows in the invoice table. Having written the update statement mentioned in the previous paragraphs, this should be a relatively simple change to that query. But since you can insert a new row for an old invoice, you must remember to add the summed amount to the value already stored in the invoice table. This should be enough direction for you to start.
And to answer your second question - the safest and easiest way is to calculate the value every time. I fear you are trying to solve a problem that you do not have and that you may never have. Generally speaking, no one cares about invoices that are of "significant" age. You might care about unpaid invoices for a period of time, but eventually you write these things off (especially if the amounts are not significant). Another relatively easy approach is to create an indexed view to calculate and materialize the total amount. But remember - nothing is free. An indexed view must be maintained and it will add extra processing for DML statements affecting the item table. Indexed views do have limitations - which are documented.
And one last comment. I would strongly hesitate to maintain a total amount at any level higher than invoice. Above that level one frequently wants to filter the results in various ways - date, location, type, customer, etc. At that point you are approaching data-warehouse functionality, which is not appropriate for an OLTP system.
First of all, never use triggers for business logic. Triggers are tricky and easily forgotten, and such an application becomes hard to maintain.
For most cases you can easily populate your reporting data via Entity Framework or a SQL query. But if it requires lots of joins, then you should consider using staging tables, because reporting requires data denormalization. To populate the staging tables you can use SQL jobs or some other scheduling mechanism (Azure Scheduler, perhaps). That way you won't need to work with lots of joins and your reports will populate faster.

C# translate database data using resx file

In my project (I am using Azure storage) I have some data that I want to translate. I have the resource system in place for translations. I have a table in the cloud which has a name property, and I want to translate it somehow.
One option is to create entries in the database for each language, which I'd rather not do as it would create a lot of entries along with the name.
Is there a smart way to use the resx mechanism I have in place?
The table has multiple properties and one is name. The name could be anything, like Mud, Rock, etc. Now I want to translate Mud into a different language; something like Texts.Mud would return the correct value.
But let's say I get data like this:
var data = some query;
string translatedName = Texts.data[0].name; // this won't work
You could instead add more columns in the database, one per language, and select the column based on the user's language.
Another solution is to have a translation mechanism (a custom class, for example) where you pass in the original database result (say, data[0].name) and it returns the translated value for you.
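A minimal sketch of the second option, assuming your resx-generated class is called Texts and has one entry per name ("Mud", "Rock", ...):

using System.Globalization;
using System.Resources;

public static class NameTranslator
{
    private static readonly ResourceManager Resources = Texts.ResourceManager;

    // Falls back to the raw database value when no resource entry exists.
    public static string Translate(string name)
    {
        return Resources.GetString(name, CultureInfo.CurrentUICulture) ?? name;
    }
}

// Usage:
// string translatedName = NameTranslator.Translate(data[0].name);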

How to generate and understand a list of field names in a UniData table

I'm new to both UniData and UniObjects, so if I ask something obvious, I apologize.
I'm trying to write a tool that will let me export contacts from our ERP (Manage2000), which runs on UniData (v. 6.1), so they can then be imported into AD/Exchange.
The primary issue I'm having is that I don't know which fields (columns?) in the table (file?) are for what. I know that there is a dictionary that has this information in it, but I'm not sure how to get what I want out of it.
I found that there is a command LIST.METADATA in the current UniData documentation from Rocket, but it seems that either the version of UniData we are using is so old that it doesn't have this command, or the command was removed from the VOC file for some unknown reason.
Does anyone know how or have any tips to pull out the structure of a table so that I can know which fields are for what data?
Thanks in advance!
At TCL:
LIST DICT contact.master
Please note that the database file name (e.g. contact.master) is case sensitive. I don't have a UniData instance at the moment to provide an example output; however, it should be similar to Universe's output:
Field......... Type & Field........ Conversion.. Column......... Output Depth &
Name.......... Field. Definition... Code........ Heading........ Format Assoc..
Number
AMOUNT.WEBB A 1 MR22 Amt WEBB 10R M
PANDAS.COST A 3 MD2Z Pandass Cost 10R M
CREDIT.EXP.DT A 6 D4/ Cred Exp Date 10R M
For the example above, you can generally tell the "data type" of the field by looking at the conversion code. "D4/" is the conversion code for a date. "MD2Z" is a numeric conversion code, which we can guess is for monetary amounts. I'm glossing over the power of conversion codes, so please make sure to reference Rocket's documentation for these codes to truly understand what these fields would output. If you don't have the documentation in front of you, you can also reference this site:
http://www.koretech.com/kr_help/KU2/30/en/KMK_Prog_Conversions.htm
If you wanted to use UniObjects and C# to retrieve the field names in a file, you could use the following code:
// Select every record in the dictionary of contact.master;
// the record IDs in a DICT file are the field names.
UniCommand fieldSelectCommand = activeSession.CreateUniCommand();
fieldSelectCommand.Command = "SELECT DICT contact.master";
fieldSelectCommand.Execute();

// The SELECT puts its results in select list 0; read them back as a string array.
UniSelectList resultList = activeSession.CreateUniSelectList(0);
String[] allFieldNames = resultList.ReadListAsStringArray();
Having answered your question, I would also like to recommend that you check out Rocket's U2 Toolkit for .NET if you're mostly going to be selecting data from the database rather than reading and manipulating individual records:
http://www.rocketsoftware.com/products/rocket-u2-toolkit-net
Not only does it present an ADO.NET way of accessing the database, it also has a better-performing version of the UniObjects library under the U2.Data.Client.UO namespace.
The Dictionary, in my opinion, is a recommendation of how the schema should behave; however, there are cases where it's not 100% accurate. You could run "LIST CONTACT.MASTER TOXML TO MYFILE.XML", which would create an XML file that you could parse.
See https://u2devzone.rocketsoftware.com/accelerate/articles/u2-xml/u2-xml#section-0 for more information.

Copy Row from DataTable to another with different column schemas

I am working on optimizing some code I have been assigned from a previous employee's code base. Beyond the fact that the code is pretty well "spaghettified", I ran into an issue where I'm not sure how to optimize properly.
The below snippet is not an exact replication, but should detail the question fairly well.
He is taking one DataTable from an Excel spreadsheet and placing rows into a consistently formatted DataTable which later updates the database. This seems logical to me; however, the way he is copying data seems convoluted, and it is a royal pain to modify, maintain or extend with new formats.
Here is what I'm seeing:
private void VendorFormatOne()
{
    // dtSubmit is declared with its column schema elsewhere
    for (int i = 0; i < dtFromExcelFile.Rows.Count; i++)
    {
        dtSubmit.Rows.Add(i);
        dtSubmit.Rows[i]["reference_no"] = dtFromExcelFile.Rows[i]["VENDOR REF"];
        dtSubmit.Rows[i]["customer_name"] = dtFromExcelFile.Rows[i]["END USER ID"];
        //etc etc etc
    }
}
To me this is complete overkill for mapping columns to a different schema, but I can't think of a way to do it more gracefully. In the actual solution there are about 20 of these methods, all using different formats from dtFromExcelFile, and the column list is much longer. The column schema of dtSubmit remains the same across the board.
I am looking for a way to avoid having to manually map these columns every time the company needs to load a new file from a vendor. Is there a way to do this more efficiently? I'm sure I'm overlooking something here, but did not find any relevant answers on SO or elsewhere.
This might be overkill, but you could define an XML file that describes which Excel column maps to which database field, then input that along with each new Excel file. You'd want to whip up a class or two for parsing and consuming that file, and perhaps another class for validating the Excel file against the XML file.
Depending on the size of your organization, this may give you the added bonus of being able to offload that tedious mapping to someone less skilled. However, it is quite a bit of setup work and if this happens only sparingly, you might not get a significant return on investment for creating so much infrastructure.
Alternatively, if you're using MS SQL Server, this is basically what SSIS is built for, though in my experience, most programmers find SSIS quite tedious.
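As a rough illustration (the file format, class and names below are invented for the sketch), consuming such a mapping file could look like this:

using System.Data;
using System.Linq;
using System.Xml.Linq;

// Assumed mapping file format:
// <mapping>
//   <column source="VENDOR REF" target="reference_no" />
//   <column source="END USER ID" target="customer_name" />
// </mapping>
public static class VendorMapper
{
    public static void CopyRows(DataTable source, DataTable target, string mappingFile)
    {
        // source column name -> target column name
        var map = XDocument.Load(mappingFile).Root
                           .Elements("column")
                           .ToDictionary(c => (string)c.Attribute("source"),
                                         c => (string)c.Attribute("target"));

        foreach (DataRow sourceRow in source.Rows)
        {
            var targetRow = target.NewRow();
            foreach (var pair in map)
                targetRow[pair.Value] = sourceRow[pair.Key];
            target.Rows.Add(targetRow);
        }
    }
}

Each of the 20 format methods then collapses into a call like VendorMapper.CopyRows(dtFromExcelFile, dtSubmit, "VendorOne.xml").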
I had originally intended this just as a comment but ran out of space. It's in reply to Micah's answer and your first comment therein:
"The biggest problem here is the amount of XML mapping would equal that of the manual mapping in code"
Consider building a small tool that, given an Excel file with two columns, produces the XML mapping file. Now you can offload the mapping work to the vendor, or an intern, or indeed anyone who has a copy of the requirement doc for a particular vendor project.
Since the file is then loaded at runtime in your import app or whatever, you can change the mappings without having to redeploy the app.
Having used exactly this kind of system many, many times in the past, I can tell you this: you will be very glad you took the time to do it - especially the first time you get a call right after deployment along the lines of "oops, we need to add a new column to the data we've given you, and we realised that we've misspelled the 19th column by the way."
About the only thing that can perhaps go wrong is data type conversions, but you can build that into the mapping file (type from/to) and generalise your import routine to perform the conversions for you.
Just my 2c.
A while ago I ran into a similar problem where I had over 400 columns from 30-odd tables to be mapped to about 60 in the actual table in the database. I had the same dilemma about whether to go with a schema or write something custom.
There was so much duplication that I ended up writing a simple helper class with a couple of overridden methods that basically took in a column name from the import table and spat out the database column name. Also, for the column names, I built a separate class of the format:
public static class ColumnName
{
    public const string FirstName = "FirstName";
    public const string LastName = "LastName";
    ...
}
Same thing goes for TableNames as well.
This made it much simpler to maintain table names and column names. Also, this handled duplicate columns across different tables really well avoiding duplicate code.
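For illustration, one guess at the shape of that helper (the import-side names here are invented):

using System;
using System.Collections.Generic;

public static class ColumnNameMapper
{
    // Import column name -> database column name, reusing the constants above.
    private static readonly Dictionary<string, string> Map =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            { "FIRST NAME", ColumnName.FirstName },
            { "LAST NAME",  ColumnName.LastName },
        };

    public static string ToDbColumn(string importColumn)
    {
        string dbColumn;
        // Fall back to the import name when no mapping is defined.
        return Map.TryGetValue(importColumn, out dbColumn) ? dbColumn : importColumn;
    }
}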

How to keep previous and new value of data?

I'm currently working on a project where we need to archive and trace all modified data.
When a modification occurs, we have to keep this information:
Who modified the data?
When?
And - that's why I'm asking this question - the previous and the new value of the data.
In short, I have to trace every modification of every piece of data.
Example:
I have a name field with the value "Morgan".
When I modify this value, I have to be able to tell the user that on the 6th of January, XXX changed the value from "Morgan" to "Robert".
I have to find a clean and generic method to do this, because a large amount of data is concerned by this behavior.
My program is in C# (.NET 4) and we are using SQL Server 2008 R2 and NHibernate for the object mapping.
Do you have any ideas, experience or solutions for how to do something like that?
I am a little confused about the point at which you want to have the old vs. new data available. But this can be done within a database trigger, as in the following question:
trigger-insert-old-values-values-that-was-updated
NHibernate Envers is what you want :)
You must use NHibernate 3.2+ (3.2 is the current release).
It's as easy as:
enversConf.Audit<Person>();
You can get info here and here
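For context, a slightly fuller configuration sketch - this assumes the NHibernate.Envers package and that Person is an entity in your own model; check the Envers docs for the exact integration call in your version:

using NHibernate.Cfg;
using NHibernate.Envers.Configuration;
using NHibernate.Envers.Configuration.Fluent;

var cfg = new Configuration().Configure();

// Tell Envers which entities to audit; it maintains the history tables for you.
var enversConf = new FluentConfiguration();
enversConf.Audit<Person>();
cfg.IntegrateWithEnvers(enversConf);

var sessionFactory = cfg.BuildSessionFactory();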
I've been in the same situation as you. I ended up doing it this way:
Save an ActivityEntry in the database containing an identity column (if you have multiple objects that change), an action indicator (which could mean "User changed first name", stored as an int), a date field, a userId and, most importantly, a parameter field.
Combining the values from the parameter field with the action indicator, I'm able to build strings like "{0} changed {1}'s first name from {2} to {3}", where my parameter value could be "John;Joe".
I know it feels kind of wrong saving these totally loosely typed values in the database, but I believe it's the only way around it, short of keeping a copy of each table.
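A sketch of what that might look like (all names are invented):

using System;
using System.Collections.Generic;

public class ActivityEntry
{
    public int Id { get; set; }            // identity column
    public int EntityId { get; set; }      // which object changed
    public int ActionId { get; set; }      // e.g. 1 = "User changed first name"
    public DateTime OccurredOn { get; set; }
    public int UserId { get; set; }
    public string Parameters { get; set; } // e.g. "John;Joe" = old;new
}

public static class ActivityFormatter
{
    // Message templates keyed by action indicator.
    private static readonly Dictionary<int, string> Templates = new Dictionary<int, string>
    {
        { 1, "{0} changed {1}'s first name from {2} to {3}" },
    };

    public static string Format(ActivityEntry entry, string userName, string entityName)
    {
        var values = entry.Parameters.Split(';'); // [0] = old value, [1] = new value
        return string.Format(Templates[entry.ActionId], userName, entityName, values[0], values[1]);
    }
}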
