Words are added to the end of a column - C#

In my database each letter has its own column, but when I add words to a column, each word goes to the bottom and NULL fields are created above it. I need each word to go into the first empty slot of its column instead.
I have a list of words, and each word must end up in the column that matches its first letter.
My code:
private void ConnectData()
{
    // Path to the database file
    _path = "URI=file:" + Application.streamingAssetsPath + "/russianWords.db";
    _dbConnection = new SqliteConnection(_path);
    _dbConnection.Open();
    if (_dbConnection.State == ConnectionState.Open)
    {
        m_sqlCmd = new SqliteCommand();
        m_sqlCmd.Connection = _dbConnection;
        m_sqlCmd.CommandText = "CREATE TABLE IF NOT EXISTS words (а Text, б Text, в Text, г Text, д Text, е Text, ё Text, ж Text, з Text, и Text, й Text, к Text, л Text, м Text, н Text, о Text, п Text, р Text, с Text, т Text, у Text, ф Text, х Text, ц Text, ч Text, ш Text, щ Text, ъ Text, ы Text, ь Text, э Text, ю Text, я Text)";
        m_sqlCmd.ExecuteNonQuery();
    }
    AddToData("тест"); // word starting with "т"
}

public void AddToData(string word)
{
    try
    {
        // The first letter of the word selects the target column
        m_sqlCmd.CommandText = $"INSERT INTO words ('{word[0]}') VALUES ('{word}')";
        m_sqlCmd.ExecuteNonQuery();
    }
    catch (System.Exception e)
    {
        Debug.Log(e);
    }
}

Databases don't work like this. Inserting data into a table always creates a new row. It is not like a game of upside-down Tetris where you insert "Fred" into column "F" and it finds the first empty slot (nearest the "top") and puts it there. There isn't even a sense of "top": database tables are just a bunch of rows stored in whatever order the database feels like. If you want your data to have an order, you must insert some data that has an order (like an int) and sort by it.
If you want this "Tetris" style behavior you must program it explicitly. Have a column of incrementing numbers:
ID A B C
1
2
You want to insert Apple. Select the lowest ID where the column is blank
SELECT MIN(id) FROM t WHERE A IS NULL
Run that with ExecuteScalar and cast the result to a nullable int (int?).
If there is an ID, update it. Otherwise insert:
UPDATE t SET A = 'Apple' WHERE ID = 1
ID A B C
1 Apple
2
Now we want to insert Aeroplane. It will go in ID 2 by the same logic
ID A B C
1 Apple
2 Aeroplane
Now we insert Aurora. There is no row 3, so the SELECT that finds the ID will return no rows (ExecuteScalar will return null).
Run the following insert
INSERT INTO t(ID, A)
SELECT MAX(id)+1, 'Aurora' FROM t
ID A B C
1 Apple
2 Aeroplane
3 Aurora
You now have aurora in ID 3
Continue thus, either inserting or updating. You should then have at least one column that is completely full at all times, and the others will fill in as they go.
--
Consider using any auto-numbering facility your chosen DB has rather than MAX+1 (though MAX+1 would work).
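The whole insert-or-update flow above can be sketched like this. This is Python with sqlite3 rather than C#, and the table `t` with columns A, B, C is hypothetical, but the SQL statements are the same ones the answer walks through:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, A TEXT, B TEXT, C TEXT)")

def insert_tetris(word: str, column: str) -> None:
    # 1. Find the lowest ID where the target column is still empty.
    # (Interpolating the column name is safe here only because it comes
    # from a fixed set; real code should validate it.)
    row = con.execute(f"SELECT MIN(id) FROM t WHERE {column} IS NULL").fetchone()
    free_id = row[0]  # None when every existing row already has this column filled
    if free_id is not None:
        # 2a. A hole exists: fill it in place
        con.execute(f"UPDATE t SET {column} = ? WHERE id = ?", (word, free_id))
    else:
        # 2b. No hole: append a new row with the next ID (MAX+1)
        con.execute(
            f"INSERT INTO t (id, {column}) SELECT COALESCE(MAX(id), 0) + 1, ? FROM t",
            (word,),
        )

for w in ["Apple", "Aeroplane", "Aurora", "Banana"]:
    insert_tetris(w, w[0])  # column name = first letter, as in the question

print(con.execute("SELECT id, A, B FROM t ORDER BY id").fetchall())
# → [(1, 'Apple', 'Banana'), (2, 'Aeroplane', None), (3, 'Aurora', None)]
```

Note how Banana lands in row 1 of column B even though column A already has three rows: each column fills its own holes independently, which is exactly the "Tetris" behaviour the question asks for.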

Related

Changing database asp.net

I am building a project on asp.net with sql database.
I want to update a row in the table.
the table name - parkingLots , the column I want to update is - "free spaces".
This is the code I wrote, but it doesn't update the table:
var arduinoQuery1 = from b in db.parkingLots
                    where b.parkingLotID == 1
                    select b;
foreach (parkingLot b in arduinoQuery1)
{
    b.freeSpaces = space;
}
I want to make free spaces of this specific row to be equal to "space".
How do I do that?
Call
db.SaveChanges();
after the foreach loop. Entity Framework only writes modified entities back to the database when SaveChanges() is called.
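The key point is that the modification only reaches the database when you save/commit. The same pattern in plain SQL terms, sketched with Python's sqlite3 (table and column names taken from the question; the data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parkingLots (parkingLotID INTEGER PRIMARY KEY, freeSpaces INTEGER)")
con.execute("INSERT INTO parkingLots VALUES (1, 10)")

space = 7  # new number of free spaces, analogous to the question's variable

# Modify the row...
con.execute("UPDATE parkingLots SET freeSpaces = ? WHERE parkingLotID = ?", (space, 1))
# ...and persist the change -- the analogue of db.SaveChanges() in Entity Framework
con.commit()

print(con.execute("SELECT freeSpaces FROM parkingLots WHERE parkingLotID = 1").fetchone()[0])
# → 7
```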

check if values are in datatable

I have an array of strings:
private static string[] dataNames = new string[] {"value1", "value2".... };
I have table in my SQL database with a column of varchar type. I want to check which values from the array of string exists in that column.
I tried this:
public static void testProducts()
{
    string query = "select * from myTable";
    var dataTable = from row in dt.AsEnumerable()
                    where String.Equals(row.Field<string>("columnName"), dataNames[0], StringComparison.OrdinalIgnoreCase)
                    select new
                    {
                        Name = row.Field<string>("columnName")
                    };
    foreach (var oneName in dataTable)
    {
        Console.WriteLine(oneName.Name);
    }
}
That is not the actual code; I am just trying to show you the important part. As you can see, it checks against dataNames[index].
It works fine, but I have to run that code 56 times because the array has 56 elements, changing the index each time.
Is there a faster way, please? The comparison must be case-insensitive.
First, you should not filter records in memory but in the database.
But if you already have a DataTable and you need to find rows where one of its fields is in your string[], you can use LINQ-to-DataTable.
For example Enumerable.Contains:
var matchingRows = dt.AsEnumerable()
.Where(row => dataNames.Contains(row.Field<string>("columnName"), StringComparer.OrdinalIgnoreCase));
foreach(DataRow row in matchingRows)
Console.WriteLine(row.Field<string>("columnName"));
Here is a more efficient (but less readable) approach using Enumerable.Join:
var matchingRows = dt.AsEnumerable().Join(dataNames,
row => row.Field<string>("columnName"),
name => name,
(row, name) => row,
StringComparer.OrdinalIgnoreCase);
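Both LINQ approaches above boil down to a case-insensitive set-membership test. A quick sketch of the same idea in Python, using casefold() as the analogue of StringComparer.OrdinalIgnoreCase (the column values are made up for illustration):

```python
data_names = ["Value1", "VALUE2", "value3"]
column_values = ["value1", "other", "Value2"]  # hypothetical rows from the table

# Build the lookup set once (the Join approach does this hashing internally),
# then test each row against it -- O(rows) instead of O(rows * names)
wanted = {name.casefold() for name in data_names}
matching = [v for v in column_values if v.casefold() in wanted]
print(matching)  # → ['value1', 'Value2']
```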
Try using Contains; it should return all the values you need (note that this overload is case-sensitive):
var data = from row in dt.AsEnumerable()
where dataNames.Contains(row.Field<string>("columnName"))
select new
{
Name = row.Field<string>("columnName")
};
Passing a list of values is surprisingly difficult. Passing a table-valued parameter requires creating a T-SQL data type on the server. You can pass an XML document containing the parameters and decode that using SQL Server's convoluted XML syntax.
Below is a relatively simple alternative that works for up to a thousand values. The goal is to build an IN query:
select col1 from YourTable where col1 in ('val1', 'val2', ...)
In C#, you should probably use parameters:
select col1 from YourTable where col1 in (@par1, @par2, ...)
Which you can pass like:
var com = yourConnection.CreateCommand();
com.CommandText = "select col1 from YourTable where col1 in (";
for (var i = 0; i < dataNames.Length; i++)
{
    var parName = string.Format("@par{0}", i + 1);
    com.Parameters.AddWithValue(parName, dataNames[i]);
    com.CommandText += parName;
    if (i + 1 != dataNames.Length)
        com.CommandText += ", ";
}
com.CommandText += ");";
var existingValues = new List<string>();
using (var reader = com.ExecuteReader())
{
    while (reader.Read())
        existingValues.Add((string)reader["col1"]);
}
Given the complexity of this solution I'd go for Max's or Tim's answer. You could consider this one if the table is very large and you can't copy it into memory.
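The build-a-list-of-placeholders trick above translates almost directly to other databases. A sketch with Python's sqlite3 (table name, column name, and data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE YourTable (col1 TEXT)")
con.executemany("INSERT INTO YourTable VALUES (?)",
                [("value1",), ("other",), ("value3",)])

data_names = ["value1", "value2", "value3"]

# One placeholder per value, joined into the IN (...) list
placeholders = ", ".join("?" for _ in data_names)
query = f"SELECT col1 FROM YourTable WHERE col1 IN ({placeholders})"
existing_values = [row[0] for row in con.execute(query, data_names)]
print(existing_values)  # → ['value1', 'value3']
```

The "up to a thousand values" caveat applies here too: SQLite historically limits a statement to 999 bound parameters, so very long lists need batching or the temp-table approach described below.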
Sorry, I don't have a lot of relevant code here, but I did a similar thing quite some time ago, so I will try to explain.
Essentially I had a long list of item IDs that I needed to return to the client, which then told the server which ones it wanted loaded at any particular time. The original query passed the values as a comma-separated set of strings (they were actually GUIDs). The problem was that once the number of entries hit 100 there was a noticeable lag for the user; once it got to 1,000 possible entries the query took a minute and a half; and when we went to 10,000, let's just say you could boil the kettle and drink your tea/coffee before it came back.
The answer was to stick the values to check directly into a temporary table, where one row of the table represented one value to check against. The temporary table was keyed against the user who performed the search, which meant different users' searches wouldn't interfere with each other, and when the user logged out we knew which values in the search table could be removed.
Where this data comes from will determine the best way for you to load the reference table. But once it is there, your new query will look something like:
SELECT COUNT(t.columnName), rt.dataName
FROM table t
RIGHT JOIN referenceTable rt ON rt.dataName = t.columnName
WHERE rt.userRef = @UserIdValue
GROUP BY rt.dataName
The RIGHT JOIN here should give you a value for each of your reference table values, including 0 if the value did not appear in your table. If you don't care which ones don't appear, then changing it to an INNER JOIN will eliminate the zeros.
The WHERE clause is to ensure that your search only returns the unique items that you are looking for at the moment - the design should consider that concurrent access will someday occur here (even if it doesn't at the moment), so writing something in to protect it is advisable.
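A sketch of that temp-table approach with Python's sqlite3 (names follow the answer; the userRef keying is omitted for brevity, and the data is invented). Older SQLite versions have no RIGHT JOIN, so the query is driven from the reference table with a LEFT JOIN, which produces the same result:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (columnName TEXT)")
con.executemany("INSERT INTO data VALUES (?)", [("apple",), ("apple",), ("pear",)])

# Load the values to check into a temporary reference table, one row per value
con.execute("CREATE TEMP TABLE referenceTable (dataName TEXT)")
con.executemany("INSERT INTO referenceTable VALUES (?)",
                [("apple",), ("pear",), ("plum",)])

# COUNT(d.columnName) counts only non-NULL matches, so unmatched
# reference values come back with 0, as the answer describes
rows = con.execute("""
    SELECT rt.dataName, COUNT(d.columnName)
    FROM referenceTable rt
    LEFT JOIN data d ON d.columnName = rt.dataName
    GROUP BY rt.dataName
    ORDER BY rt.dataName
""").fetchall()
print(rows)  # → [('apple', 2), ('pear', 1), ('plum', 0)]
```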

AutoIncremental field

I need to autoincrement NroFattura and AnnoFattura (this one is the year of the report):
I don't know how to do it, whether in C# or in the Access settings.
I want NroFattura to increment like n = n + 1,
and AnnoFattura to be the current year.
I have a WinForms UI behind it (C#).
If you have a simple scenario, where only one user at a time works with the database, then it is simply a matter of finding the highest value inserted for NroFattura in the specific year:
string query = "SELECT MAX(NroFattura) FROM yourTableName WHERE AnnoFattura = ?";
OleDbCommand cmd = new OleDbCommand(query, connection);
cmd.Parameters.AddWithValue("@p1", DateTime.Today.Year); // OleDb binds parameters by position, not name
object result = cmd.ExecuteScalar();
int newInvoiceNumber = (result == DBNull.Value ? 1 : Convert.ToInt32(result) + 1);
The Access auto-increment setting is not suitable for this kind of task. If you delete items, the counting still continues: if you had 20 items and removed 2, the count would still move to 21 on the next insert. The auto-increment field should be used as a unique ID for the record, not as a "row number".
You could add a new column with a unique index, so row-specific editing is possible.
Calculating NroFattura in C# code is easy: build the final INSERT string with the right values and run it.
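The MAX+1-per-year idea can be sketched end to end like this (Python with sqlite3 standing in for Access/OleDb; the table name is invented, and the single-user caveat from the answer still applies):

```python
import sqlite3
from datetime import date

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fatture (NroFattura INTEGER, AnnoFattura INTEGER)")

def next_invoice_number(year: int) -> int:
    # MAX returns NULL (None in Python) when there are no invoices for that year yet
    result = con.execute(
        "SELECT MAX(NroFattura) FROM fatture WHERE AnnoFattura = ?", (year,)
    ).fetchone()[0]
    return 1 if result is None else result + 1

year = date.today().year
for _ in range(3):
    n = next_invoice_number(year)
    con.execute("INSERT INTO fatture VALUES (?, ?)", (n, year))

print(con.execute("SELECT NroFattura FROM fatture ORDER BY NroFattura").fetchall())
# → [(1,), (2,), (3,)]
```

Because the counter restarts at 1 for each new AnnoFattura, invoices are numbered per year, which is what the question asks for.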

Syncing DataTable autoincremented columns with Database

A DBConcurrencyException occurs when I try to use the Update() method on my database. I have a table in a database that has an auto-incremented ID column, and a DataTable in my C# program that gets its information from this table (including the auto-increment behaviour, when I use MissingSchemaAction = MissingSchemaAction.AddWithKey).
If I create rows and add them to the DataTable, the DataTable automatically fills in the auto-incremented ID column (starting where the database table left off), which is fine. However, if I delete the rows I just added (without first calling Update()) and add new ones, the DataTable's auto-increment column is filled with a value based on where the DATATABLE is, not where the database is, which is why I get the concurrency error.
for example:
The table in the database has these records:
1 Apple
2 Orange
3 Pear
Which gets copied to the datatable, so when I add a new row with the name value "grape" I get:
1 Apple
2 Orange
3 Pear
4 Grape
Which is fine. However, if I don't run the Update() method, and I delete the Grape row and add a new row "Melon", I get:
1 Apple
2 Orange
3 Pear
5 Melon
And when I try to run Update(), the database is expecting 4 to be the next auto-incremented value but instead gets 5, so I get the error. The Update() happens when the user clicks a "Save" button, so ideally I'd like them to be able to make lots of changes, as shown above, before finally saving. Is the only way to preserve concurrency to call Update() after each row is added or deleted?
The expected value is 5: it would be insanely inefficient for the database to try to fill in the holes in the column every time you do something. Once an auto-increment value is used, it is gone forever.
Because of this, always make sure your column is big enough to hold all your records. If you used TINYINT, for example, then you can only have 127 records in your table.
The auto-increment counter is stored at the table level and MySQL never looks back to see if it could be lower. You can change it manually like this:
ALTER TABLE tablename AUTO_INCREMENT=2;
But if you do this and there is a collision down the road - bad things are going to happen.
Or you can inspect what it is
SHOW CREATE TABLE tablename;
CREATE TABLE `tablename` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`cat_id` int(10) unsigned NOT NULL,
`status` int(10) unsigned NOT NULL,
`date_added` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `categories_list_INX` (`cat_id`,`status`),
KEY `cat_list_INX` (`date_added`,`cat_id`,`status`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=latin1
And you find out what the last one is.
SELECT LAST_INSERT_ID();
+------------------+
| LAST_INSERT_ID() |
+------------------+
| 2 |
+------------------+
1 row in set (0.00 sec)
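The answer above is MySQL-specific, but the never-reuse behaviour is easy to demonstrate. Here is a SQLite analogue (the AUTOINCREMENT keyword makes SQLite track the high-water mark in sqlite_sequence and, like MySQL, never hand out an old value again):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

for name in ("Apple", "Orange", "Pear"):
    con.execute("INSERT INTO t (name) VALUES (?)", (name,))

# Delete the newest row (id 3), then insert again...
con.execute("DELETE FROM t WHERE id = 3")
con.execute("INSERT INTO t (name) VALUES ('Melon')")

# ...and the hole is NOT reused: Melon gets id 4, just like the question describes
print(con.execute("SELECT id, name FROM t ORDER BY id").fetchall())
# → [(1, 'Apple'), (2, 'Orange'), (4, 'Melon')]
```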
My first thought would have been that you should just handle the case that a row is being deleted and first update it, then delete it to keep the auto-increment IDs in sync.
However, I have run into this same situation and it seems to be caused by the 3rd party control my DataGridView is hosted in. Specifically, the problem occurs when the user has focus in the "new" row of the DataGridView, switches to another application and then clicks back into the DataGridView. At this point the original DataRow instance for the new row is deleted and a new one is created with an incremented ID value. I have not been able to figure out a way to handle the deletion of the row before it is actually deleted, nor can I figure out what the 3rd party control is doing that triggers this.
Therefore, for the moment I am handling this problem in a very heavy-handed way, by querying the correct auto-increment value from the database and correcting new DataRows if necessary. If all else fails, this solution seems to work. (Note I am using SqlCe instead of MySQL.)
protected override void OnLoad(EventArgs e)
{
    base.OnLoad(e);
    ...
    _dataTable.TableNewRow += HandleTableNewRow;
}
void HandleTableNewRow(object sender, DataTableNewRowEventArgs e)
{
SetAutoIncrementValues(e.Row);
}
void SetAutoIncrementValues(DataRow row)
{
foreach (DataColumn dataColumn in _dataTable.Columns
.OfType<DataColumn>()
.Where(column => column.AutoIncrement))
{
using (SqlCeCommand sqlcmd = new SqlCeCommand(
"SELECT AUTOINC_NEXT FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '" +
Name + "' AND COLUMN_NAME = '" + dataColumn.ColumnName + "'", _connection))
using (SqlCeResultSet queryResult =
sqlcmd.ExecuteResultSet(ResultSetOptions.Scrollable))
{
if (queryResult.ReadFirst())
{
var nextValue = Convert.ChangeType(queryResult.GetValue(0), dataColumn.DataType);
if (!nextValue.Equals(row[dataColumn.Ordinal]))
{
// Since an auto-increment column is going to be read-only, apply
// the new auto-increment value via a separate array variable.
object[] rowData = row.ItemArray;
rowData[dataColumn.Ordinal] = nextValue;
row.ItemArray = rowData;
}
}
}
}
}

C#, Finding 2 columns sum together in linq

I am a bit new to LINQ.
How do I get the total sum of two columns in my DataTable?
Say the two columns are A and B; I want the numeric sum of the entire column A plus the entire column B
(i.e. totalSum = sum(A) + sum(B)).
Important:
if either column contains a non-numeric field (e.g. AB, WH, DBNull), that field should be treated as zero while summing, so it won't throw an exception.
For each row, or the sum of the entire column A and the entire column B?
In the first case you could do a select:
var resultWithSum = from row in table
                    select new
                    {
                        A = row.A, // optional
                        B = row.B, // optional
                        Sum = row.A + row.B
                    };
Otherwise you can do:
var totalSum = table.Sum(row => row.A + row.B);
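The answer above doesn't cover the non-numeric requirement from the question. One way to fold that in is a parse-or-zero helper applied to every cell before summing; in C# you would use double.TryParse inside the Sum lambda, and here is the same idea sketched in Python (the column data is made up):

```python
def to_number(value) -> float:
    """Treat anything that doesn't parse as a number (including NULL) as zero."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return 0.0

# Hypothetical column data, including the non-numeric fields from the question
column_a = ["1", "2", "AB", None]
column_b = ["10", "WH", "5", "0.5"]

total_sum = sum(to_number(v) for v in column_a) + sum(to_number(v) for v in column_b)
print(total_sum)  # → 18.5
```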
