I have 3 different tables, and each table has rows and columns [r, c].
One of my tables has 30 rows and 18 columns,
the second one has 18 rows and 16 columns,
and the last one has 12 rows and 17 columns.
The numbers of rows and columns are constant.
I will enter values into these tables, and then I want to save all of them to SQLite. My reference value for insert/select is SheetMetarial.
Insert Into {Table1} (col1, col2, ...) Values (#col1, #col2, ...)
or
SELECT {SheetMetarial} FROM {Table1};
This way is not useful at all. I know my rows (Value1, Value2, ... Val30) and also my columns (Column1, Column2, ...) for all three tables. What is the best way to make this happen?
Thank you all.
I do not really know exactly what your question is, but judging from the title, you want to insert a multidimensional matrix into the database?
If so, you could loop over every row in your array (I usually keep the matrix in a multidimensional array) and insert the columns in their places, as in the sketch below. Is that what you mean?
This post may also help you; it shows an SQLite syntax example for inserting multiple rows, if you want.
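For example, here is a rough sketch of that row-by-row loop, assuming a double[,] array named values (e.g. 30x18), the Microsoft.Data.Sqlite package, and the placeholder names Table1 / Column1..ColumnN from the question:

// requires: using System.Linq; using Microsoft.Data.Sqlite;
using (var conn = new SqliteConnection("Data Source=sheets.db"))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    {
        int cols = values.GetLength(1);
        string columnList = string.Join(", ", Enumerable.Range(1, cols).Select(c => "Column" + c));
        string paramList = string.Join(", ", Enumerable.Range(1, cols).Select(c => "@p" + c));

        for (int r = 0; r < values.GetLength(0); r++)
        {
            var cmd = conn.CreateCommand();
            cmd.Transaction = tx;
            cmd.CommandText = $"INSERT INTO Table1 ({columnList}) VALUES ({paramList})";
            for (int c = 0; c < cols; c++)
                cmd.Parameters.AddWithValue("@p" + (c + 1), values[r, c]);
            cmd.ExecuteNonQuery();          // one INSERT per array row
        }
        tx.Commit();                        // a single transaction keeps many inserts fast
    }
}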
How do I compare 2 large DataTables in C#? The DataTable.Select method takes forever.
I need to compare each record's field value with the corresponding record in the other table. The source and target field data types might be different, e.g. Table1's field1 data type is INT and Table2's field1 data type is VARCHAR.
You need to profile the application to find out what exactly is slow: iterating through columns and comparing values, or finding the matching records (rows) that need to be compared.
As for finding the records, one solution could be to convert a table to a dictionary. That works if your tables have a unique column: convert them to a dictionary where the key is the record's unique column value and the value is the whole row. Then iterate the first DataTable, get the unique column value, and look up the row of the second DataTable, but from the dictionary.
If the issue is the comparison between two rows, then it is better to show the code; maybe there are extra comparisons or casts. It's hard to tell without the code.
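A rough sketch of the dictionary idea, assuming both DataTables have a unique column called "Key" (the name is a placeholder) and System.Data.DataSetExtensions is referenced:

// requires: using System.Data; using System.Linq;
var lookup = table2.AsEnumerable()
                   .ToDictionary(r => r["Key"].ToString());

foreach (DataRow row1 in table1.Rows)
{
    if (lookup.TryGetValue(row1["Key"].ToString(), out DataRow row2))
    {
        // compare row1 and row2 field by field here;
        // convert the differing types once (e.g. INT vs. VARCHAR) before comparing
    }
}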
First, I am sorry for my bad English; it is not my native language.
My problem is: I have a table with around 10 million bank transaction records. It doesn't have a PK and isn't sorted by any column.
My job is to create a page to filter the data and export it to CSV, but the limit of rows per exported CSV file is around 200k records.
I have some ideas, like:
create 800 tables for the 800 ATMs (just an idea, I know it's stupid) and send data from the main table to them once per day => export to 800 CSV files
use LINQ to get 100k records at a time and skip them the next time. But I am stuck because Skip needs OrderBy, and I got an OutOfMemoryException with it:
db.tblEJTransactions.OrderBy(u => u.Id).Take(100000).ToList()
Can anyone help me? Every idea is welcome (my boss said I can use anything, including creating hundreds of tables, using NoSQL, ...).
If you don't have a primary key in your table, then add one.
The simplest and easiest is to add an int IDENTITY column.
ALTER TABLE dbo.T
ADD ID int NOT NULL IDENTITY (1, 1)
ALTER TABLE dbo.T
ADD CONSTRAINT PK_T PRIMARY KEY CLUSTERED (ID)
If you can't alter the original table, create a copy.
Once the table has a primary key you can sort by it and select chunks/pages of 200K rows with predictable results.
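For example, here is a rough sketch of the chunked export once the ID column exists, assuming it is exposed as an Id property on tblEJTransactions in the question's db context, and that WriteCsv is a hypothetical export helper:

int lastId = 0;
while (true)
{
    var chunk = db.tblEJTransactions
                  .Where(t => t.Id > lastId)     // keyset paging: no Skip needed
                  .OrderBy(t => t.Id)
                  .Take(200000)
                  .ToList();
    if (chunk.Count == 0)
        break;

    WriteCsv(chunk);                             // hypothetical CSV export helper
    lastId = chunk.Last().Id;
}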
I'm not sure about my solution, but you can refer to it and try it:
select top 1000000 *, row_number() over (order by (select null)) as rn from tblEJTransactions
The above query returns the rows with a row number (rn) attached.
And then you can use LINQ, or page on rn directly, to get the result.
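A rough sketch of paging on that row number from C#, assuming an open SqlConnection named conn; note that ORDER BY (SELECT NULL) gives no guaranteed ordering, so the numbering is only stable if the data does not change between calls:

// requires: using System.Data.SqlClient;
string sql = @"
    WITH numbered AS (
        SELECT *, ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
        FROM tblEJTransactions
    )
    SELECT * FROM numbered WHERE rn BETWEEN @from AND @to;";

using (var cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@from", 1);        // first 200k chunk
    cmd.Parameters.AddWithValue("@to", 200000);
    using (var reader = cmd.ExecuteReader())
    {
        // stream the rows into the CSV writer here
    }
}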
I'm still in the process of figuring out how to work with tables and data grids, so this is probably a dumb question showing my lack of experience, but here goes. I have a table in which I want to multiply two values and place the answer in a third cell on the same row of the table. The calculation (mainly multiplication) should produce a unique answer for each row in the table.
I tried using a DataReader to pick values out of the table, perform the calculation, and send the result back to the table, but I couldn't figure out how to make the calculation happen separately for each row. Also, would it be wiser to create the answer column beforehand, or should I rather update the table by adding a new column in the process? I hope someone gets me and can help me, thanks.
If you are getting the information from a database, why not just do the calculation there: ColA * ColB AS ColC.
If you are doing different types of calculations that require C# code, then you could create the column when the data is selected. When you create the DataTable you would need to set the column's "ReadOnly" property to false. Then you could say ColC = result.
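A rough sketch of that second option, assuming the filled DataTable is called table and uses the placeholder column names ColA, ColB and ColC from above:

// requires: using System; using System.Data;
if (!table.Columns.Contains("ColC"))
    table.Columns.Add("ColC", typeof(decimal));
table.Columns["ColC"].ReadOnly = false;

foreach (DataRow row in table.Rows)
    row["ColC"] = Convert.ToDecimal(row["ColA"]) * Convert.ToDecimal(row["ColB"]);

(A computed column, i.e. setting table.Columns["ColC"].Expression = "ColA * ColB", would do the same per-row multiplication without the loop, as long as the calculation stays that simple.)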
I was given a task to insert over 1000 rows with 4 columns. The table in question does not have a PK or FK. Let's say it contains columns ID, CustomerNo, Description. The records needed to be inserted can have the same CustomerNo and Description values.
I read about importing data to a temporary table, comparing it with the real table, removing duplicates, and moving new records to the real table.
I also could have 1000 queries that check if such a record already exists and insert data if it does not. But I'm too ashamed to try that out for obvious reasons.
I'm not expecting any specific code, because I did not give any specific details. What I'm hoping for is some pseudocode or general advice for completing such tasks. I can't wait to give some upvotes!
So the idea is, you don't want to insert an entry if there's already an entry with the same ID?
If so, after you import your data into a temporary table, you can accomplish what you're looking for in the where clause of a select statement:
insert into [table] (ID, CustomerNo, Description)
select ID, CustomerNo, Description from #data_source
where #data_source.ID not in (select ID from [table])
I would suggest loading the data into a temp table or table variable. Then you can do a "Select Into" using the DISTINCT keyword, which will remove the duplicated records.
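A rough sketch of that idea as T-SQL sent from C#, assuming the rows were already bulk loaded into a temp table #staging on the same open SqlConnection conn, and TargetTable stands in for the real table (all of these names are placeholders):

// requires: using System.Data.SqlClient;
string dedupeSql = @"
    SELECT DISTINCT ID, CustomerNo, Description
    INTO #deduped
    FROM #staging;

    INSERT INTO TargetTable (ID, CustomerNo, Description)
    SELECT ID, CustomerNo, Description FROM #deduped;";

using (var cmd = new SqlCommand(dedupeSql, conn))
    cmd.ExecuteNonQuery();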
You will always need to read the target table, unless you bulk load the target table into a temp table (at this point you will have two temp tables), compare both, eliminate duplicates, and then insert into the target table. But even this is not accurate, because a new insert could happen in the target table while you do this.
I'm doing some comparisons: I have a DataTable with one text column, and I compare each row of the DataTable with all the others.
My point is to avoid comparing the same pair twice.
I thought of writing the IDs of compared rows to another DataTable, so each time I can check whether two rows have already been compared.
Table of already compared rows:
------
1245 4589
5589 6952
2233 2339
So if I want to compare the rows with IDs 6952 and 5589, I want to see whether there is a row with 6952/5589 or 5589/6952 in the table of already compared rows.
What is the simplest way?
I think you can add another string column which stores the compared row IDs in delimited form,
e.g. ,1234,5434,32453,
So you just have a string comparison: check whether ,ID, occurs in that column's value.
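A rough sketch of that suggestion, assuming the DataTable is called dt, has an integer ID column, and gets an extra string column named ComparedWith (all names are placeholders):

// requires: using System.Data;
dt.Columns.Add("ComparedWith", typeof(string));

// before comparing rowA with rowB:
string compared = rowA["ComparedWith"] as string ?? ",";
string token = "," + rowB["ID"] + ",";

if (!compared.Contains(token))
{
    // ... do the actual comparison of rowA and rowB here ...

    // remember the pair in both directions so it is never compared again
    rowA["ComparedWith"] = compared + rowB["ID"] + ",";
    rowB["ComparedWith"] = (rowB["ComparedWith"] as string ?? ",") + rowA["ID"] + ",";
}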