C#: DataTable getting only one row of the search result

I'm having a sudden and strange problem with DataTable. I'm using C# with a MySQL database to develop a system, and I'm trying to export custom reports. The problem is that, somehow, my DataTable gets only one result (I've tested my query in MySQL, and there should be around 30 rows in the DataTable and the resulting xls file).
Strangely, these functions are used in other parts of the system to export other kinds of reports, and they work perfectly. This is the select function that I'm using:
public DataTable selectBD(String tabela, String colunas) {
    var query = "SELECT " + colunas + " FROM " + tabela;
    var dt = new DataTable();
    Console.WriteLine("\n\n" + query + "\n\n");
    try
    {
        using (var command = new MySqlCommand(query, bdConn)) {
            MySqlDataReader reader = command.ExecuteReader();
            dt.Load(reader);
            reader.Close();
        }
    }
    catch (MySqlException) {
        return null;
    }
    bdConn.Close();
    return dt;
}
And this is my query:
SELECT
cpf_cnpj, nomeCliente, agenciaContrato, contaContrato,
regionalContrato, carteiraContrato, contratoContrato,
gcpjContrato, avalistaContrato, enderecoContrato,
telefoneContrato, dataChegadaContrato, dataFatoGerContrato,
dataPrimeiraParcelaContrato, dataEmissaoContrato, valorPlanilhaDebitoContrato
FROM
precadastro
INNER JOIN
contrato
ON precadastro.cpf_cnpj = contrato.FK_cpf_cnpj
LEFT JOIN faseprocessual
ON contrato.idContrato = faseprocessual.FK_idContrato
The query returns the expected ~30 rows when run in SQLyog.
I've tested it, and the DataTable returned by the function receives only one row, and it's not even the first row of the MySQL results. Has anyone had this kind of problem before?

DataTable.Load expects a primary key in your data (supplied by the DataReader) and tries to guess it from the rows passed in. Since there's no such key, the Load method guesses that it's the first column (cpf_cnpj). But the values in that column aren't unique, so each row gets overwritten by the next one, and the result is just one row in your DataTable.
It's an issue that has persisted for years, and I'm not sure there's one solution to rule them all. :)
You can try:
Change the query so that some unique values end up in the first column (unfortunately, I can't see anything unique in your screenshot), or concatenate two or more values to get a unique value.
Prepare the DataTable yourself by creating the columns (mirroring the structure of the result set) and then iterating through the DataReader to copy the data, as shown in the sketch after this list.
Add some auto-increment value in your query (or make a temporary table with an auto_increment column and fill that table).
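A minimal sketch of the second suggestion, assuming the same query string and open bdConn connection as in the question:
var dt = new DataTable();
using (var command = new MySqlCommand(query, bdConn))
using (var reader = command.ExecuteReader())
{
    // Create one DataColumn per result-set column, without any primary key.
    for (int i = 0; i < reader.FieldCount; i++)
        dt.Columns.Add(reader.GetName(i), reader.GetFieldType(i));

    // Copy every row; nothing gets merged because the table has no key.
    while (reader.Read())
    {
        var values = new object[reader.FieldCount];
        reader.GetValues(values);
        dt.Rows.Add(values);
    }
}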
The last suggestion could look something like this (I haven't worked much with MySQL, so this is a suggestion I googled :)):
SELECT
    @i := @i + 1 AS id,
    cpf_cnpj, nomeCliente, agenciaContrato, contaContrato,
    regionalContrato, carteiraContrato, contratoContrato,
    gcpjContrato, avalistaContrato, enderecoContrato,
    telefoneContrato, dataChegadaContrato, dataFatoGerContrato,
    dataPrimeiraParcelaContrato, dataEmissaoContrato, valorPlanilhaDebitoContrato
FROM
    precadastro
INNER JOIN
    contrato
    ON precadastro.cpf_cnpj = contrato.FK_cpf_cnpj
LEFT JOIN faseprocessual
    ON contrato.idContrato = faseprocessual.FK_idContrato
CROSS JOIN (SELECT @i := 0) AS i
Here's an answer on SO which uses an auto-number in the query.

Related

DataTable Rows.Count == 0 gives 1 although there are no records in the table

I have a problem and I don't know why it's happening!
I am trying to find the Max value in a column of a database table, using this code:
private void FrmCustomer_New_Load(object sender, EventArgs e)
{
    int NewID;
    DataTable _Dt = CusVar.GetCustomersIDs();
    if (_Dt.Rows.Count == 0)
    {
        NewID = 1;
        this.txt1.Text = NewID.ToString();
        DataRow _Row = CusVar.GetCustomersIDs().Rows[0];
        MessageBox.Show(Convert.ToString(_Dt.Rows[0]["MaxID"]));
    }
}
The code works, but it gives 1 even though there are no records in the table. Why?
I use C# and an Access database (ACCDB).
I use this function in Cls_Customers:
public DataTable GetCustomersIDs()
{
    DAL.DataAccessLayer DAL = new DAL.DataAccessLayer();
    DataTable Dt = new DataTable();
    Dt = DAL.DataSelect("Select Max(CustomerID) AS MaxID From TblCustomers", null);
    DAL.CloseConn();
    return Dt;
}
what is the problem, please?
This is your query:
Select Max(CustomerID) AS MaxID
From TblCustomers
It is an aggregation query. An aggregation query with no GROUP BY always returns exactly one row, regardless of whether any rows match.
Since your table is empty, the value returned in that single row is NULL for MaxID.
I am highly suspicious of what you want to do. If you want the maximum id -- and no rows if the table is empty -- then do:
select c.CustomerID
from TblCustomers c
order by c.CustomerID desc
fetch first 1 row only;
(This uses ANSI/ISO standard syntax so the exact logic might depend on your database.)
My suspicion is that you then want to use this id for an insert -- and that is a bad approach. Almost all databases support some sort of auto-incremented column (with syntax elements such as auto_increment, serial, or identity). That is the right way to assign a unique incrementing id to a column in a table.
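Alternatively, if the single aggregate row is kept, the original check can test for DBNull instead of Rows.Count; a minimal sketch against the asker's existing GetCustomersIDs helper:
DataTable dt = CusVar.GetCustomersIDs();
object maxId = dt.Rows[0]["MaxID"];
// MAX() over an empty table yields NULL, which ADO.NET surfaces as DBNull.
int newId = (maxId == DBNull.Value) ? 1 : Convert.ToInt32(maxId) + 1;
this.txt1.Text = newId.ToString();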
The SQL query might be returning NULL if there are no rows that match the criteria.
If the intention is to find the number of rows returned, change the SQL query to return a count instead of using the MAX aggregate.
Try with this:
SELECT MAX(T1.CustomerID) AS MaxID From TblCustomers T1

Optimizing query that uses AsEnumerable and SingleOrDefault

Not long ago there was a feature request in the program I am maintaining. Basically it has to fill up a table in the database with info from a text file. These files can be pretty big, but it was fairly easy to do because they were defined as the complete list of user data: the table could simply be truncated and then filled up again from the text file.
But a week ago it was decided that these files are actually updates of current user info, so now I have to retrieve the correct MeteringPointId (which exists at most once, if at all) and then update its info. If it doesn't exist, I just insert the data as before.
The way I do this is by retrieving the complete database table into memory, updating the info there, and finally saving the changes by calling the data adapter's Update function. It works fine, except that finding the row with the MeteringPointId is slow:
DataRow row = MeteringPointsDataTable.NewRow();
// This is called for each line in the text file to find the corresponding
// MeteringPointId. It can run 300,000 times.
row = MeteringPointsDataTable.AsEnumerable().SingleOrDefault(r =>
    r.Field<string>("MeteringPointId").ToString() == MeteringPointId);
Is there a way to retrieve a DataRow from a DataTable that is faster than this?
If you are sure that only one item can fulfill the condition, use FirstOrDefault instead of SingleOrDefault. That way you won't scan the whole table, but stop at the first entry found.
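For example, a minimal variation of the asker's lookup (note that Field<string> already returns a string, so the extra ToString() call is unnecessary):
DataRow row = MeteringPointsDataTable.AsEnumerable()
    .FirstOrDefault(r => r.Field<string>("MeteringPointId") == MeteringPointId);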
You can use the Select method of DataTable:
var expression = "[MeteringPointId] = '" + MeteringPointId + "'";
DataRow[] result = MeteringPointsDataTable.Select(expression);
You can also create an expression covering several IDs at once, like:
var idList = new []{"id1", "id2", "id3", ...};
var expression = "[MeteringPointId] in " + string.Format("({0})", string.Join(",", idList.Select(i => "'" + i + "'")));
Similar usage is here.
Hope it helps..
You could put the whole table in a dictionary:
//At the start
var meteringPoints = MeteringPointsDataTable.AsEnumerable()
    .ToDictionary(r => r.Field<string>("MeteringPointId"));

//For each row of the text file:
DataRow row;
if (!meteringPoints.TryGetValue(MeteringPointId, out row))
{
    row = MeteringPointsDataTable.NewRow();
    // The new row must be added to the table as well, not just the
    // dictionary, or the adapter's Update will never see it.
    MeteringPointsDataTable.Rows.Add(row);
    meteringPoints[MeteringPointId] = row;
}

Table schema as DataTable?

I did a search and found some seemingly related answers, but they don't really do what I'm after.
Given a valid connection string and a table name, I want to get a DataTable for the table. I.e., if the table has a column called "Name", I want the DataTable set up so I can do dt["Name"] = "blah";
The trick is, I don't want to hard-code any of that stuff; I want to do it dynamically.
People tell you to use SqlConnection.GetSchema, but that gives you back a table with a bunch of metadata in it.
Everybody has random tricks like SELECT TOP 0 * FROM the table and getting the schema from there, etc.
But is there a way to get the table with the primary keys, unique indexes, etc.? I.e., in the final format needed to do a bulk insert.
You can use SqlDataAdapter.FillSchema:
var connection = @"Your connection string";
var command = "SELECT * FROM Table1";
var dataAdapter = new System.Data.SqlClient.SqlDataAdapter(command, connection);
var dataTable = new DataTable();
dataAdapter.FillSchema(dataTable, SchemaType.Mapped);
This way you will have an empty DataTable with columns and keys defined, ready to use in code like dt["Name"] = "blah";.
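To round the idea off, a hedged sketch of how the schema-filled table could feed the bulk insert the asker mentions (the connection and Table1 come from the answer above; the row values are illustrative):
var dataRow = dataTable.NewRow();
dataRow["Name"] = "blah"; // the column exists because FillSchema created it
dataTable.Rows.Add(dataRow);

using (var bulkCopy = new System.Data.SqlClient.SqlBulkCopy(connection))
{
    bulkCopy.DestinationTableName = "Table1";
    bulkCopy.WriteToServer(dataTable);
}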

DBConcurrency Exception Occured While Updating Using Dataadapter

I am trying to edit a DataTable filled by NpgsqlDataAdapter.
After calling the Fill() method, I have only one row in the DataTable. I then changed the value of one column and tried to update as below.
Then I get this error:
DBConcurrencyException occurred
My code is:
NpgsqlDataAdapter getAllData = new NpgsqlDataAdapter(
    "SELECT sn, code, product, unitprice, quantity, InvoiceNo, Date " +
    "FROM stocktable WHERE Code='" + product + "' ORDER BY EDate ASC",
    DatabaseConnectionpg);
DataTable ds1 = new DataTable();
ds1.Clear();
getAllData.Fill(ds1);
if (ds1.Rows.Count > 0)
{
    ds1.Rows[0]["Quantity"] = qty; // calculated value
}
ds1 = ds1.GetChanges();
NpgsqlCommandBuilder cb = new NpgsqlCommandBuilder(getAllData);
//getAllData.RowUpdating += (sender2, e2) => { e2.Command.Parameters.Clear(); };
//cb.SetAllValues = false;
getAllData.DeleteCommand = cb.GetDeleteCommand();
getAllData.InsertCommand = cb.GetInsertCommand();
getAllData.UpdateCommand = cb.GetUpdateCommand();
int x = getAllData.Update(ds1);
if (x > 0)
{
    ds1.AcceptChanges();
}
EDIT: The table has three fields as its primary key, and I am selecting only two of them in the SELECT statement. Is that the reason for the DBConcurrencyException? I am able to update a table with the same setup (three fields as primary key) in SQL Server 2005, though.
UPDATE:
I found the solution: I created and used a second DataAdapter to update the data.
I used getAllData (NpgsqlDataAdapter) to fill the table:
NpgsqlDataAdapter getAllData = new NpgsqlDataAdapter(
    "SELECT code, product, unitprice, quantity, InvoiceNo, Date " +
    "FROM stocktable WHERE Code='" + product + "' ORDER BY EDate ASC",
    DatabaseConnectionpg);
And also created a second adapter to update:
NpgsqlDataAdapter updateadap = new NpgsqlDataAdapter(
    "SELECT sn, quantity FROM stocktable WHERE Code='" + product + "' ORDER BY EDate ASC",
    DatabaseConnectionpg);
NpgsqlCommandBuilder cb = new NpgsqlCommandBuilder(updateadap);
//getAllData.RowUpdating += (sender2, e2) => { e2.Command.Parameters.Clear(); };
//cb.SetAllValues = false;
updateadap.DeleteCommand = cb.GetDeleteCommand();
updateadap.InsertCommand = cb.GetInsertCommand();
updateadap.UpdateCommand = cb.GetUpdateCommand();
int x = updateadap.Update(ds1);
if (x > 0)
{
    ......
}
I tried a lot and found that NpgsqlDataAdapter had a problem with the Code column: when I omitted it, the update worked. The data type of the Code column is varchar. I don't know why this was happening. Does anybody have an idea?
This is because the DataAdapter uses Optimistic Concurrency by default. This means that if you are trying to update a row that no longer exists in the database, or that has changed since it was read, the update from the DataAdapter will fail with the exception above.
Possible scenarios:
Between you selecting the data into the client and sending the update, another user deletes or updates this row from his application.
It can also be that you are deleting the data from somewhere else in your application.
For example:
You fill the DataTable that will be used for the update.
You delete the row with Code = 1101 (for example) directly from the database, i.e. without using the DataTable. This emulates another user deleting the row with Code = 1101 from another application, or some other part of your code deleting it.
You select the row with Code = 1101 from the DataTable, just to show that it is still there even though you have deleted it from the database itself.
You edit the Quantity column in the row with Code = 1101 in the DataTable. This has to be done, otherwise the call to Update will ignore this row when updating.
You execute the update, which throws the exception since you are trying to update a row that no longer exists in the database.
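To see why a missing key column or a changed value trips this check: under optimistic concurrency, the command builder generates an UPDATE whose WHERE clause compares every selected column against its original value, roughly like this (an illustration, not the exact Npgsql output):
UPDATE stocktable
SET quantity = @newQuantity
WHERE sn = @oldSn AND code = @oldCode AND product = @oldProduct
  AND unitprice = @oldUnitprice AND quantity = @oldQuantity
-- ...one comparison per selected column; if zero rows match,
-- Update() reports a DBConcurrencyException.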
If you want to implement Last Writer Wins instead, add the following code:
cb.ConflictOption = ConflictOption.OverwriteChanges;
There is one more possibility: if you have decimal/numeric columns in the database, they may cause this error even though the data looks the same, due to a decimal rounding error.
An important note:
You should always use parameterized queries, by the way. This kind of string concatenation is open to SQL injection.
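A hedged sketch of the asker's select rewritten with a parameter (standard Npgsql named-parameter syntax; the surrounding names come from the question):
NpgsqlDataAdapter getAllData = new NpgsqlDataAdapter(
    "SELECT sn, code, product, unitprice, quantity, InvoiceNo, Date " +
    "FROM stocktable WHERE Code = @code ORDER BY EDate ASC",
    DatabaseConnectionpg);
// The value travels separately from the SQL text, so it cannot break out of the query.
getAllData.SelectCommand.Parameters.AddWithValue("@code", product);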

check if values are in datatable

I have an array of strings:
private static string[] dataNames = new string[] { "value1", "value2", ... };
I have a table in my SQL database with a column of type varchar. I want to check which values from the array of strings exist in that column.
I tried this:
public static void testProducts() {
    string query = "select * from my table";
    var dataTable = from row in dt.AsEnumerable()
                    where String.Equals(row.Field<string>("columnName"), dataNames[0], StringComparison.OrdinalIgnoreCase)
                    select new {
                        Name = row.Field<string>("columnName")
                    };
    foreach (var oneName in dataTable) {
        Console.WriteLine(oneName.Name);
    }
}
That is not the actual code; I am just trying to show you the important part.
As you can see, that code checks against dataNames[index].
It works fine, but I would have to run it 56 times because the array has 56 elements, changing the index each time.
Is there a faster way, please?
The comparison is case-insensitive.
First, you should not filter records in memory but in the database.
But if you already have a DataTable and you need to find the rows where one of its fields is in your string[], you can use LINQ-to-DataTable.
For example Enumerable.Contains:
var matchingRows = dt.AsEnumerable()
    .Where(row => dataNames.Contains(row.Field<string>("columnName"), StringComparer.OrdinalIgnoreCase));
foreach (DataRow row in matchingRows)
    Console.WriteLine(row.Field<string>("columnName"));
Here is a more efficient (but less readable) approach using Enumerable.Join:
var matchingRows = dt.AsEnumerable().Join(dataNames,
    row => row.Field<string>("columnName"),
    name => name,
    (row, name) => row,
    StringComparer.OrdinalIgnoreCase);
Try to use Contains; it should return all the values you need. Note that without a comparer this variant is case-sensitive, so pass StringComparer.OrdinalIgnoreCase to match the requirement:
var data = from row in dt.AsEnumerable()
           where dataNames.Contains(row.Field<string>("columnName"), StringComparer.OrdinalIgnoreCase)
           select new
           {
               Name = row.Field<string>("columnName")
           };
Passing a list of values is surprisingly difficult. Passing a table-valued parameter requires creating a T-SQL data type on the server. You could pass an XML document containing the parameters and decode it using SQL Server's convoluted XML syntax.
Below is a relatively simple alternative that works for up to a thousand values. The goal is to build an in query:
select col1 from YourTable where col1 in ('val1', 'val2', ...)
In C#, you should use parameters:
select col1 from YourTable where col1 in (@par1, @par2, ...)
Which you can pass like:
var com = yourConnection.CreateCommand();
com.CommandText = "select col1 from YourTable where col1 in (";
for (var i = 0; i < dataNames.Length; i++)
{
    var parName = string.Format("@par{0}", i + 1);
    com.Parameters.AddWithValue(parName, dataNames[i]);
    com.CommandText += parName;
    if (i + 1 != dataNames.Length)
        com.CommandText += ", ";
}
com.CommandText += ");";
var existingValues = new List<string>();
using (var reader = com.ExecuteReader())
{
    while (reader.Read())
        existingValues.Add((string)reader["col1"]);
}
Given the complexity of this solution, I'd go for Max's or Tim's answer. You could consider this answer if the table is very large and you can't copy it into memory.
Sorry, I don't have a lot of relevant code here, but I did a similar thing quite some time ago, so I will try to explain.
Essentially I had a long list of item IDs that I needed to return to the client, which then told the server which ones it wanted loaded at any particular time. The original query passed the values as a comma-separated set of strings (they were actually GUIDs). The problem was that once the number of entries hit 100, there was a noticeable lag for the user; once it got to 1000 possible entries, the query took a minute and a half; and when we went to 10,000, let's just say you could boil the kettle and drink your tea/coffee before it came back.
The answer was to stick the values to check directly into a temporary table, where one row of the table represented one value to check against. The temporary table was keyed by the user who performed the search, which meant that different users' searches wouldn't corrupt each other, and when the user logged out we knew which values in the search table could be removed.
Where this data comes from will determine the best way for you to load the reference table (see the sketch after the query below). But once it is there, your new query will look something like:
SELECT COUNT(t.columnName), rt.dataName
FROM table t
RIGHT JOIN referenceTable rt ON rt.dataName = t.columnName
WHERE rt.userRef = @UserIdValue
GROUP BY rt.dataName
The RIGHT JOIN here gives you a row for each of your reference-table values, including a count of 0 if the value did not appear in your table. If you don't care about the ones that don't appear, changing it to an INNER JOIN will eliminate the zeros.
The WHERE clause ensures that your search only returns the items you are looking for at the moment. The design should assume that concurrent access will someday occur here (even if it doesn't at the moment), so writing something in to protect against that is advisable.
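A hedged sketch of loading the reference table from C# (table and column names follow the query above; SqlBulkCopy is one assumed way to stage the rows, not necessarily the answerer's exact method; connectionString and userId are illustrative):
using (var con = new SqlConnection(connectionString))
{
    con.Open();

    // Stage the values to check, one row per value, keyed by the searching user.
    var staging = new DataTable();
    staging.Columns.Add("userRef", typeof(int));
    staging.Columns.Add("dataName", typeof(string));
    foreach (var name in dataNames)
        staging.Rows.Add(userId, name);

    using (var bulk = new SqlBulkCopy(con) { DestinationTableName = "referenceTable" })
    {
        bulk.ColumnMappings.Add("userRef", "userRef");
        bulk.ColumnMappings.Add("dataName", "dataName");
        bulk.WriteToServer(staging);
    }

    // ...run the RIGHT JOIN query above, then clean up this user's rows when done:
    // DELETE FROM referenceTable WHERE userRef = @UserIdValue
}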
