I have a SQL Server database with a table. Its columns are:
1stAP_TB, 2ndAP_TB, 3rdAP_TB, 4thAP_TB, 1steng_TB, 2ndeng_TB, 3rdeng_TB,
4theng_TB
The values are stored in rows, and each number is entered individually in its own column. I need to know how to get the average of 1stAP_TB, 2ndAP_TB, 3rdAP_TB and 4thAP_TB while they are stored in rows.
Also, multiple records will be saved in the database over time. I am using the C# programming language.
Try the methods below. (In the sample table the column names are prefixed with a, because SQL identifiers cannot start with a digit.)
create table aveexample
(a1stAP_TB int,
a2ndAP_TB int,
a3rdAP_TB int,
a4thAP_TB int,
a1steng_TB int,
a2ndeng_TB int,
a3rdeng_TB int,
a4theng_TB int
)
Sample data
insert into aveexample values(1,2,3,4,5,6,7,8)
insert into aveexample values(11,22,33,44,55,66,77,78)
insert into aveexample values(2,3,1,4,10,10,45,5)
Method 1
select *,
       (select AVG(totaldata)
        from (values (a1stAP_TB), (a2ndAP_TB), (a3rdAP_TB), (a4thAP_TB),
                     (a1steng_TB), (a2ndeng_TB), (a3rdeng_TB), (a4theng_TB)
             ) total(totaldata)) as average
from aveexample
To average only the four AP columns, list just those four in the VALUES clause.
Method 2
select (a1stAP_TB + a2ndAP_TB + a3rdAP_TB + a4thAP_TB +
        a1steng_TB + a2ndeng_TB + a3rdeng_TB + a4theng_TB) / 8 as Average
from aveexample
Note that because the columns are int, both methods do integer arithmetic and truncate the result; divide by 8.0, or cast to decimal, if you need fractional averages. Method 2 also returns NULL if any column is NULL, whereas AVG in Method 1 simply ignores NULL entries.
It is difficult to give concrete advice given the very limited description in the question, but from the description and comments so far, it seems to me like the database needs to be redesigned to better fit your requirements. First, you have no ID field, so there is no way to differentiate one row from the next. Then, what you are left with is a series of repeated values. The clue here is that you have "1st", "2nd", "3rd" in the column names. That's probably a sign that those columns need to be moved into rows of a related table. It may not instantly seem to be the best approach, but this is called "First Normal Form" and is a typical best practice with SQL databases. See also Database Normalization Basics.
It seems to me that what you have here is some entity (which you haven't mentioned in your question) that has a number of values associated with it. The 'entity' here should be given a unique ID and then all of the values for that entity stored with its ID.
You might have a table with the following columns:
CREATE TABLE MyItems (
    ID int NOT NULL,
    Sequence int NOT NULL,
    Value int NOT NULL,
    CONSTRAINT PK_MyItems_ID_Sequence PRIMARY KEY (ID, Sequence)
)
Note: ID + Sequence together form the composite primary key, making every row unique. This also lets you keep track of the order in which items were added. That may or may not be important to you, but every table should probably have a unique primary key.
Your data table would then look something like this (the example represents two different entities, the first having 4 values and the second having 3 values):
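For example (illustrative values only; two entities, the first with four values, the second with three):
ID   Sequence   Value
1    1          10
1    2          20
1    3          30
1    4          40
2    1          5
2    2          15
2    3          25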
It's difficult to show a sensible example without knowing more about the application and what it does... but with this table design you have a basis from which to add values one at a time, as you said you needed, and a way to query them back. You can use grouping to produce things like totals and averages, or you can do that in code by iterating over the results of a query or in a LINQ statement.
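In SQL, the grouped averages per entity would look something like this (a sketch against the MyItems table above):
SELECT ID, AVG(Value) AS AverageValue
FROM MyItems
GROUP BY ID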
You can then compute the average for an entity of a given ID using a LINQ query along the lines of:
var average = MyItems.Where(p=>p.ID == 1).Average(q=>q.Value);
As an example of the flexibility of this sort of approach, you could just as easily compute the average of every second value entered across the entire database:
var averageOfSecondItems = MyItems.Where(p => p.Sequence == 2).Average(q => q.Value);
The example I've shown deals with one type of value. In your question it appears that you might have two different types of value. There are several ways you could handle that - for example you could add another column to the table if the values are always entered in pairs, or you could create a second table to hold the separate values. Again, it's hard to make a recommendation based on the limited information given.
If putting your data into First Normal Form seems like a lot of work, then your application might be a better fit for a document database ("NoSQL" database), but that is really a different question. In the question, a SQL database was specified so I've concentrated on that.
The following C# code worked correctly with MySQL to get the MAX value of a column from a table while adding 1 to that value in the same query:
SqlDataReader dr = new SqlCommand("SELECT (MAX(Consec) + 1) AS NextSampleID FROM Samples", Connection).ExecuteReader();
while (dr.Read())
{
    // if the maximum value of Consec is 555, the expected result is A556
    txtSampleID.Text = "A" + dr["NextSampleID"].ToString();
}
However, this code no longer works after migrating the DB from MySQL to SQL Server: if MAX(Consec) = 555, the query now returns A555; it does not add 1 as it did on MySQL.
Question: What is the correct query to get the MAX value of Consec, and how do I add 1 to the result in the same query?
The MySQL query is wrong and won't work except in trivial applications with only a single user, no deletions, and no relations:
Concurrent calls will read the same MAX value and so produce the same, duplicate, next value.
Deleting records will reduce the MAX value, so previous ID values will get assigned to new rows. If such an ID value is used in another table, the new record will end up associated with rows to which it has no real relation. This can be very bad; imagine one patient's test samples getting mixed with another's.
Calculating the MAX requires locking the entire table or index, thus blocking or getting blocked by others. Given MySQL's MVCC isolation though, that wouldn't prevent duplicates as concurrent SELECT MAX queries wouldn't block each other.
It's possible MAX+1 would work in a POS application with only one terminal generating invoice numbers, but as soon as you added two POS terminals you'd risk generating duplicate invoices.
In an e-Commerce application on the other hand, it's almost guaranteed that even if only two orders are placed per month, they'll happen at the exact same moment, resulting in duplicates.
Correct MySQL solution and equivalent
The correct solution in MySQL is to use the AUTO_INCREMENT attribute:
CREATE TABLE Samples (
Consec INT NOT NULL AUTO_INCREMENT,
...
);
If you want the invoice number to contain other data, use a calculated column to combine the incrementing number and that other data.
The equivalent in SQL Server is the IDENTITY property:
CREATE TABLE Samples (
Consec INT NOT NULL IDENTITY,
...
);
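For the question's A-prefixed sample IDs, the prefix can be combined with the generated number in a computed column. A minimal sketch (only Consec comes from the question; the other column and the format are assumptions):
CREATE TABLE Samples (
    Consec INT NOT NULL IDENTITY PRIMARY KEY,
    SampleID AS ('A' + CAST(Consec AS VARCHAR(10))) PERSISTED,  -- yields 'A556' when Consec = 556
    TakenOn DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()         -- hypothetical payload column
);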
Sequences
Another option available in SQL Server and other databases is the SEQUENCE object. A SEQUENCE can be used to generate incrementing numbers that aren't tied to a table. It can also be reset, making it ideal for accounting applications where invoice numbers are reset after a specific period (eg every year).
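A minimal sketch (the sequence name matches the query below; the options shown are assumptions):
CREATE SEQUENCE seq_InvoiceNumber
    AS INT
    START WITH 1
    INCREMENT BY 1;

-- reset at the start of a new accounting period
ALTER SEQUENCE seq_InvoiceNumber RESTART WITH 1;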
Since a SEQUENCE is an independent object, you can increment it and receive the new value before inserting any data into the database, using NEXT VALUE FOR, e.g.:
SELECT NEXT VALUE FOR seq_InvoiceNumber;
NEXT VALUE FOR can be used as a default constraint for a table column, the same way IDENTITY or AUTO_INCREMENT are used:
CREATE TABLE MyTable (
    ...
    Consec INT NOT NULL DEFAULT (NEXT VALUE FOR seq_ThatSequence)
)
Multi-table sequences
The same sequence can be used in multiple tables. One case where that's useful is assigning a Document ID to data imported from multiple sources, stored in different tables, eg payments.
Payment providers (credit cards, banks etc) send statements using different formats. Obviously you can't lose any information there so you need to use different tables per provider, but still be able to handle payments the same way no matter where they came from.
If you used an IDENTITY on each table, you'd end up with conflicting IDs for payments coming from different providers. In, e.g., an OrderPayments table you'd have to record both the provider name and the provider's ID, and a single view over all payments would contain ID values that can't be used by themselves.
By using a single SEQUENCE though, each record would get its own ID, no matter the table.
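A sketch of two provider tables drawing from one sequence (all names here are assumptions):
CREATE SEQUENCE seq_PaymentID AS INT START WITH 1;

CREATE TABLE CardPayments (
    PaymentID INT NOT NULL DEFAULT (NEXT VALUE FOR seq_PaymentID) PRIMARY KEY,
    CardReference VARCHAR(50) NOT NULL
);

CREATE TABLE BankPayments (
    PaymentID INT NOT NULL DEFAULT (NEXT VALUE FOR seq_PaymentID) PRIMARY KEY,
    IBAN VARCHAR(34) NOT NULL
);
-- both tables draw numbers from the same pool, so PaymentID never collides across them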
I want to store a great number of strings in my SQLite database. I want them to always come back in the same order in which I added them to the database. I know I could give them an auto-incrementing primary key and sort by that, but since there can be up to 100,000 strings, this is a performance issue. Besides, the order should NEVER change or be sorted in any different way.
short example:
sql insert "hghtzdz12g"
sql insert "jut65bdt"
sql insert "lkk7676nbgt"
sql select * should ALWAYS give this order: {"hghtzdz12g", "jut65bdt", "lkk7676nbgt"}
Any ideas how to achieve this?
Thanks
In a query like
SELECT * FROM MyTable ORDER BY MyColumn
the database does not need to sort the results if the column is indexed, because it can just scan through the index entries in order.
The rowid (or whatever you call the autoincrementing column) is an index, and is even more efficient than a separate index.
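A minimal SQLite sketch (table and column names are made up):
CREATE TABLE strings (
    id INTEGER PRIMARY KEY,   -- alias for SQLite's built-in rowid
    value TEXT NOT NULL
);

INSERT INTO strings (value) VALUES ('hghtzdz12g'), ('jut65bdt'), ('lkk7676nbgt');

SELECT value FROM strings ORDER BY id;  -- walks the rowid index; no separate sort step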
If you are sure you will never need anything but exactly this array in exactly this order, you can cheat the database and store everything in a single blob field.
But then you should ask yourself why you chose a database in the first place.
The correct database solution is indeed a table using a key that you can sort by.
If this performance is not enough, you can have a look here for performance hints.
If you need ultra-fast performance, maybe a database is not the best tool for the job. Databases are used for their ACID abilities; speed is not one of them, but rather a secondary objective, as with everything in software.
I would like to have a primary key column in a table that is formatted as FOO-BAR-[identity number], for example:
FOO-BAR-1
FOO-BAR-2
FOO-BAR-3
FOO-BAR-4
FOO-BAR-5
Can SQL Server do this? Or do I have to use C# to manage the sequence? If so, how can I get the next [identity number] part using Entity Framework?
Thanks
EDIT:
I needed to do this because this column represents a unique identifier of a notice sent out to customers.
FOO will be a constant string
BAR will be different depending on the type of the notice (either Detection, Warning or Enforcement)
So is it better to have just an int identity column and append the values in Business Logic Layer in C#?
If you want this composited field in your reports, I propose that you:
Use an INT IDENTITY field as the PK in the table.
Create a view for this table. In this view you can additionally generate the field that you want, using your strings and types (see the sketch below).
Use this view in your reports.
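A sketch of that view (table and column names are assumptions based on the notice types in the question):
CREATE TABLE Notices (
    ID INT IDENTITY PRIMARY KEY,
    NoticeType VARCHAR(20) NOT NULL   -- 'Detection', 'Warning' or 'Enforcement'
);
GO
CREATE VIEW NoticesWithRef AS
SELECT ID, NoticeType,
       'FOO-' + NoticeType + '-' + CAST(ID AS VARCHAR(10)) AS NoticeRef
FROM Notices;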
But I still think there is a BIG problem with the DB design. I hope you'll try to redesign it using normalization.
You can set anything as the PK in a table. But in this instance I would make the ID just an auto-incrementing int and append FOO-BAR- to it in the SQL, BLL, or UI, depending on why it's being used. If there is a business reason for FOO and BAR, then you should also store these as values in your DB row. You can then create a key across the two or three columns, depending on how you're actually using the values.
But IMO I don't think there is ever a real reason to concatenate an ID in such a fashion and store it as such in the DB. Then again, I only ever use an int as my IDs.
Another option would be to use what an old team of mine called a codes-and-values table. We didn't use it for precisely this (we used it in lieu of auto-incrementing identities to prevent environment mismatches for some key tables), but what you could do is this:
Create a table that has a row for each of your categories. Two (or more) columns in the row - minimum of category name and next number.
When you insert a record in the other table, you'll run a stored proc to get the next available identity number for that category, increment the number in the codes and values table by 1, and concatenate the category and number together for your insert.
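A sketch of that stored proc (all names are hypothetical; the counter is read and incremented in a single atomic UPDATE):
CREATE TABLE dbo.CategoryCounters (
    Category VARCHAR(20) NOT NULL PRIMARY KEY,
    NextNumber INT NOT NULL
);
GO
CREATE PROCEDURE dbo.GetNextCategoryKey
    @Category VARCHAR(20),
    @Key VARCHAR(40) OUTPUT
AS
BEGIN
    DECLARE @n INT;
    -- capture the current value into @n and increment the counter in one statement
    UPDATE dbo.CategoryCounters
    SET @n = NextNumber, NextNumber = NextNumber + 1
    WHERE Category = @Category;
    SET @Key = @Category + '-' + CAST(@n AS VARCHAR(10));
END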
However, if your main table is a high-volume table with lots of inserts, it's possible you could wind up with stuff out of sequence.
In any event, even if it's not high volume, I think you'd be better off to reexamine why you want to do this, and see if there's another, better way to do it (such as having the business layer or UI do it, as others have suggested).
It is quite possible by using a computed column, like this:
CREATE TABLE #test (
id INT IDENTITY UNIQUE CLUSTERED,
pk AS CONCAT('FOO-BAR-', id) PERSISTED PRIMARY KEY NONCLUSTERED,
name NVARCHAR(20)
)
INSERT INTO #test (name) VALUES (N'one'), (N'two'), (N'three')
SELECT id, pk, name FROM #test
DROP TABLE #test
Note that pk is set to NONCLUSTERED on purpose because it is of VARCHAR type, while the IDENTITY field, which will be unique anyway, is set to UNIQUE CLUSTERED.
Using the ADO.NET MySQL Connector, what is a good way to fetch lots of records (1000+) by primary key?
I have a table with just a few small columns, and a VARCHAR(128) primary key. Currently it has about 100k entries, but this will become more in the future.
In the beginning, I thought I would use the SQL IN statement:
SELECT * FROM `table` WHERE `id` IN ('key1', 'key2', [...], 'key1000')
But with this, the query could become very long, and I would also have to manually escape quote characters in the keys, etc.
Now I use a MySQL MEMORY table (tempid INT, id VARCHAR(128)) to first upload all the keys with prepared INSERT statements. Then I make a join to select all the existing keys, after which I clean up the mess in the memory table.
Is there a better way to do this?
Note: OK, maybe it's not the best idea to have a string as the primary key, but the question would be the same if the VARCHAR column were a normal index.
Temporary table: So far it seems the solution is to put the data into a temporary table, and then JOIN, which is basically what I currently do (see above).
I've dealt with a similar situation in a Payroll system where the user needed to generate reports based on a selection of employees (eg. employees X,Y,Z... or employees that work in certain offices). I've built a filter window with all the employees and all the attributes that could be considered as a filter criteria, and had that window save selected employee id's in a filter table from the database. I did this because:
Generating SELECT queries with a dynamically generated IN filter is just ugly and highly impractical.
I could join that table in all my queries that needed to use the filter window.
It might not be the best solution out there, but it has served me, and still serves me, very well.
If your primary keys follow some pattern, you can select where key like 'abc%'.
If you want to get out 1000 at a time, in some kind of sequence, you may want to have another int column in your data table with a clustered index. This would do the same job as your current memory table - allow you to select by int range.
What is the nature of the primary key? Is it anything meaningful?
If you're concerned about performance, I definitely wouldn't recommend an IN clause. It's much better to do an INNER JOIN if you can.
You can either first insert all the values into a temporary table and join to that or do a sub-select. Best is to actually profile the changes and figure out what works best for you.
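A sketch of the temporary-table variant in MySQL (names are assumptions; in practice the keys would go in via parameterized, batched INSERTs from C#):
CREATE TEMPORARY TABLE wanted_ids (id VARCHAR(128) PRIMARY KEY);

INSERT INTO wanted_ids (id) VALUES ('key1'), ('key2'), ('key3');  -- illustrative keys

SELECT t.*
FROM `table` t
INNER JOIN wanted_ids w ON w.id = t.id;

DROP TEMPORARY TABLE wanted_ids;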
Why not consider using a table-valued parameter to push the keys in the form of a DataTable and fetch the matching records back? (Note: table-valued parameters are a SQL Server feature; MySQL has no direct equivalent.)
Or
Simply write a private method that concatenates all the key codes from a provided collection into a single string and pass that string to the query (taking care to escape or parameterize the values).
I think it may solve your problem.
Given my lack of SQL Server experience, and taking into account that this task is a usual one for line-of-business applications, I'd like to ask: is there a standard, common way of doing the following database operation?
Assume we have two tables connected by a one-to-many relationship, for example SalesOrderHeader and SalesOrderLines:
http://s43.radikal.ru/i100/1002/1d/c664780e92d5.jpg
The field SalesHeaderNo is a PK in the SalesOrderHeader table and an FK in the SalesOrderLines table.
In a front-end app, a user selects some number of records in the SalesOrderHeader table, using for example a date range, or an IsSelected field set by clicking checkboxes in a GridView. Then the user performs some operation (let it be just "move to another table") on the selected range of sales orders.
My question is:
How, in this case, can I reach the child records in the SalesOrderLines table to perform the same operation (in our case "move to another table") on those child records, in as easy, correct, fast, and elegant a way as possible?
If you're okay with a T-SQL based solution (as opposed to C# / LINQ) - you could do something like this:
-- define a table to hold the primary keys of the selected master rows
DECLARE #MasterIDs TABLE (HeaderNo INT)
-- fill that table somehow, e.g. by passing in values from a C# app or something
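-- for illustration, hard-coded header numbers:
INSERT INTO @MasterIDs (HeaderNo) VALUES (1), (2)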
INSERT INTO dbo.NewTable(LineCodeNo, Item, Quantity, Price)
SELECT SalesLineCodeNo, Item, Quantity, Price
FROM dbo.SalesOrderLine sol
INNER JOIN #MasterIDs m ON m.HeaderNo = sol.SalesHeaderNo
With this, you can insert a whole set of rows from your child table into a new table based on a selection criteria.
Your question is still a bit vague to me in that I'm not exactly sure what would be entailed by "move to another table." Does that mean there is another table with the exact schema of both your sample tables?
However, here's a stab at a solution. When a user commits on a SalesOrderHeader record, some operation will be performed that looks like:
Update SalesOrderHeader
Set....
Where SalesOrderHeaderNo = @SalesOrderHeaderNo
Or
Insert SomeOtherTable
Select ...
From SalesOrderHeader
Where SalesOrderHeaderNo = @SalesOrderHeaderNo
In that same operation, is there a reason you can't also do something to the line items such as:
Insert SomeOtherTableItems
Select ...
From SalesOrderLineItems
Where SalesOrderHeaderNo = @SalesOrderHeaderNo
I don't know about "Best Practices", but this is what I use:
var header = db.SalesOrderHeaders.SingleOrDefault(h => h.SaleHeaderNo == 14);
if (header != null)   // SingleOrDefault returns null when no match is found
{
    // header.SalesOrderLines contains the "many" records for the header
    foreach (SalesOrderLine line in header.SalesOrderLines)
    {
        // some code
    }
}
I tried to model it after your table design, but the names may be a little different.
Now whether this is the "best practices" way, I am not sure.
EDITED: Noticed that you want to update them all, possibly move to another table. Since LINQ-To-SQL can't do bulk inserts/updates, you would probably want to use T-SQL for that.