I know that you can get the metadata of a table on an SAP server with the function RfcDestination.Repository.GetTableMetadata(string tablename). Unfortunately I get an error when I execute the call. The weird thing is that when I pass an existing table I get a different error than when I pass something random as the table name.
Existing table:
var x = dest.Repository.GetTableMetadata("TFTIT");
Error:
SAP.Middleware.Connector.RfcInvalidStateException: "cannot find TABLE specified by TFTIT"
Random tablename:
var x = dest.Repository.GetTableMetadata("Test123");
Error:
SAP.Middleware.Connector.RfcInvalidStateException: "metadata for TableOnly TEST123 not available: NOT_FOUND: No active nametab exists for TEST123"
I know there is a way to get the data of a table with the help of a function module but I need to use the GetTableMetadata function.
One cannot do much wrong when calling RfcRepository.GetTableMetadata(string). Does the user ID you are using have the required RFC authorizations for repository queries, as listed in SAP note 460089 (scenario 3)? If yes, this may be a bug in the NCo3 library or even in the ABAP backend. Are you using NCo's latest patch level? This is currently NCo 3.0.20.
If not, try updating the library first.
Otherwise I recommend creating an SAP support ticket for the first error message. The second error is normal when the specified table name does not exist.
Alternatively, you may also try what happens if you call RfcRepository.GetStructureMetadata(string) for this table instead. The metadata for tables and structures is quite similar, and the same remote function modules are used for the DDIC queries. Maybe this works. However, I think RfcRepository.GetTableMetadata(string) should work here in the first place.
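For what it's worth, here is a minimal sketch of that fallback (untested), reusing the dest destination from your question and assuming the usual NCo3 metadata accessors (FieldCount and the integer indexer on RfcStructureMetadata):
RfcStructureMetadata structureMeta = dest.Repository.GetStructureMetadata("TFTIT");
for (int i = 0; i < structureMeta.FieldCount; i++)
{
    RfcFieldMetadata field = structureMeta[i];
    Console.WriteLine(field.Name + " (" + field.DataType + ")");
}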
I hope this helps.
I'm trying to populate a DataTable, to build a LocalReport, using the following:
MySqlConnection cn = new MySqlConnection(Properties.Settings.Default.dbConnectionString);
MySqlCommand cmd = new MySqlCommand();
cmd.Connection = cn;
cmd.CommandType = CommandType.Text;
cmd.CommandText = "SELECT ... LEFT JOIN ... WHERE ..."; /* query snipped */
// prepare data
dt.Clear();
cn.Open();
// fill datatable
dt.Load(cmd.ExecuteReader());
// fill report
ReportDataSource rds = new ReportDataSource("InvoicesDataSet_InvoiceTable", dt);
reportViewerLocal.LocalReport.DataSources.Clear();
reportViewerLocal.LocalReport.DataSources.Add(rds);
At one point I noticed that the report was incomplete and it was missing one record. I've changed a few conditions so that the query would return exactly two rows and... surprise: The report shows only one row instead of two. I've tried to debug it to find where the problem is and I got stuck at
dt.Load(cmd.ExecuteReader());
That is when I noticed that the DataReader contains two records but the DataTable contains only one. By accident, I added an ORDER BY clause to the query and noticed that this time the report showed correctly.
Apparently, the DataReader contains two rows but the DataTable only reads both of them if the SQL query string contains an ORDER BY (otherwise it only reads the last one). Can anyone explain why this is happening and how it can be fixed?
Edit:
When I first posted the question, I said it was skipping the first row; later I realized that it actually only read the last row and I've edited the text accordingly (at that time all the records were grouped in two rows and it appeared to skip the first when it actually only showed the last). This may be caused by the fact that it didn't have a unique identifier by which to distinguish between the rows returned by MySQL so adding the ORDER BY statement caused it to create a unique identifier for each row.
This is just a theory and I have nothing to support it, but all my tests seem to lead to the same result.
After fiddling around quite a bit I found that the DataTable.Load method expects a primary key column in the underlying data. If you read the documentation carefully, this becomes obvious, although it is not stated very explicitly.
If you have a column named "id" it seems to use that (which fixed it for me). Otherwise, it just seems to use the first column, whether it is unique or not, and overwrites rows with the same value in that column as they are being read. If you don't have a column named "id" and your first column isn't unique, I'd suggest trying to explicitly set the primary key column(s) of the datatable before loading the datareader.
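For example, here is a minimal sketch of that last suggestion; InvoiceNumber and Amount are placeholder column names standing in for whatever uniquely identifies your rows:
DataTable dt = new DataTable();
dt.Columns.Add("InvoiceNumber", typeof(int));
dt.Columns.Add("Amount", typeof(decimal));
// declare the real key up front, so Load matches rows on it
dt.PrimaryKey = new DataColumn[] { dt.Columns["InvoiceNumber"] };
using (MySqlDataReader reader = cmd.ExecuteReader())
{
    dt.Load(reader); // rows are now keyed on InvoiceNumber instead of being overwritten
}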
Just in case anyone is having a similar problem as canceriens: I was using If DataReader.Read ... instead of If DataReader.HasRows to check for existence before calling dt.Load(DataReader). Doh!
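In C# the same pitfall looks roughly like this (a sketch, with reader and dt standing in for your own objects):
using (var reader = cmd.ExecuteReader())
{
    // Bad: Read() advances the cursor, so Load() never sees the first row
    // if (reader.Read()) { dt.Load(reader); }

    // Better: HasRows checks for data without consuming a row
    if (reader.HasRows)
    {
        dt.Load(reader);
    }
}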
I had the same issue. I took a hint from your blog and put an ORDER BY clause in the query so that the columns together form a unique key for all the records returned by the query. It solved the problem. Kinda weird.
Don't use
dr.Read()
because it moves the pointer to the next row. Remove this line and it should work.
Had the same issue. It is because the primary key on all the rows is the same. It's probably what's being used to key the results, and therefore it's just overwriting the same row over and over again.
The documentation for DataTable.Load points to the Fill method to explain how it works, and that page states that it is primary-key aware. Since primary keys can only occur once and are used as the keys for the rows ...
"The Fill operation then adds the rows to destination DataTable objects in the DataSet, creating the DataTable objects if they do not already exist. When creating DataTable objects, the Fill operation normally creates only column name metadata. However, if the MissingSchemaAction property is set to AddWithKey, appropriate primary keys and constraints are also created." (http://msdn.microsoft.com/en-us/library/zxkb3c3d.aspx)
Came across this problem today.
Nothing in this thread fixed it unfortunately, but then I wrapped my SQL query in another SELECT statement and it worked!
Eg:
SELECT * FROM (
SELECT ..... < YOUR NORMAL SQL STATEMENT HERE />
) allrecords
Strange....
Can you grab the actual query that is running from SQL profiler and try running it? It may not be what you expected.
Do you get the same result when using a SqlDataAdapter.Fill(dataTable)?
Have you tried different command behaviors on the reader? MSDN Docs
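For example, something along these lines (just a sketch; CommandBehavior.KeyInfo asks the provider to return key and schema information with the rows, which may influence how Load keys them):
using (var reader = cmd.ExecuteReader(CommandBehavior.KeyInfo))
{
    dt.Load(reader);
}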
I know this is an old question, but for me the thing that worked, whilst querying an Access database and noticing it was missing one row from the query, was to change the following:
if (reader.Read()) - misses a row.
if (reader.HasRows) - missing row appears.
For anyone else that comes across this thread as I have, the answer regarding the DataTable being populated by a unique ID from MySql is correct.
However, if a table contains multiple unique IDs but only a single ID column is returned by the MySQL command (instead of selecting all columns with '*'), then the DataTable will organize only by the single ID it was given and act as if a GROUP BY had been used in your query.
So, in short, the DataReader will pull all records, while DataTable.Load() will only see the unique ID retrieved and use that to populate the DataTable, thus skipping rows of information.
Not sure why you're missing the row in the DataTable; is it possible you need to close the reader? In any case, here is how I normally load reports, and it works every time...
Dim deals As New DealsProvider()
Dim adapter As New ReportingDataTableAdapters.ReportDealsAdapter
Dim report As ReportingData.ReportDealsDataTable = deals.GetActiveDealsReport()
rptReports.LocalReport.DataSources.Add(New ReportDataSource("ActiveDeals_Data", report))
Curious to see if it still happens.
In my case neither ORDER BY nor dt.AcceptChanges() is working. I don't know what the problem is. I have 50 records in the database but only 49 show up in the DataTable; it skips the first row, and if there is only one record in the DataReader it shows nothing at all.
How bizarre...
Have you tried calling dt.AcceptChanges() after the dt.Load(cmd.ExecuteReader()) call to see if that helps?
I know this is an old question, but I was experiencing the same problem and none of the workarounds mentioned here helped.
In my case, using an alias on the column that is used as the PrimaryKey solved the issue.
So, instead of
SELECT a
, b
FROM table
I used
SELECT a as gurgleurp
, b
FROM table
and it worked.
I had the same problem. Do not use dataReader.Read() at all; it moves the pointer to the next row. Instead, use dataTable.Load(dataReader) directly.
Encountered the same problem. I also tried selecting a unique first column, but the DataTable was still missing a row.
But selecting the first column (which is also unique) in a GROUP BY solved the problem,
i.e.
select uniqueData,.....
from mytable
group by uniqueData;
This solves the problem.
I can't understand the documentation and really need a concrete example.
I've already created the destination. Here I define my BAPI:
IRfcFunction BapiIncomingInvoiceGetDetail = SapRfcRepository.CreateFunction("BAPI_INCOMINGINVOICE_GETDETAIL");
Set my imports, invoke it, and then get my exports - one of which is a table:
IRfcTable ITEMDATATable = BapiIncomingInvoiceGetDetail.GetTable("ITEMDATA");
I now want to add a field to each item in the table ITEMDATATable and set its value so I can reference it later as if it were one of the fields returned by the BAPI. Can anyone tell me how?
EDIT: Okay, I've made some progress:
RfcFieldMetadata newField = new RfcFieldMetadata("SKU_AMT",0,0,0);
ITEMDATATable.CurrentRow.Metadata.AddField(newField);
ITEMDATATable.SetValue("SKU_AMT",myItemData.SKU_AMT);
However, when I try to set the value, I get RfcInvalidStateException "Cannot add an element to locked STRUCTURE BAPI_INCINV_DETAIL_ITEM".
Any way around this?
You can't append columns to the table; the fields are already defined. You need to add a row to the table and populate the fields of that row. This should work (although I can't test it right now):
IRfcTable ITEMDATATable = BapiIncomingInvoiceGetDetail.GetTable("ITEMDATA");
ITEMDATATable.Append();
ITEMDATATable.SetValue("SKU_AMT", myItemData.SKU_AMT);
I have a table that contains a column with the VARBINARY(MAX) data type. That column represents different values for different types in my C# DB layer class: it can be int, string, or datetime. Now I need to split that one column into three according to its type, so values of type int go into a new column ObjectIntValue, and so on for every new column.
But I have a problem migrating the data into the datetime column, because the old column contains the datetime value as a long produced by the C# DateTime.ToBinary method when the data was saved.
I have to do this in T-SQL and can't use .NET to convert the value into the new column. Do you have any ideas?
Thanks for any advice!
Use CLR in T-SQL.
Basically you use CREATE ASSEMBLY to register the DLL with your function(s) in it,
then create a user-defined function to call it, and then you can use it.
There are several rules depending on what you want to do, but since basically all you want is DateTime.FromBinary(), it shouldn't be too hard to figure out.
Never done it myself, but these guys seem to know what they are talking about
CLR in TSQL tutorial
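To make that concrete, here is an untested sketch of such a CLR function. It assumes the long from DateTime.ToBinary() was written into the VARBINARY column as its 8 raw bytes (for example via BitConverter.GetBytes); if it was stored differently, the decoding step has to change. After compiling it into an assembly, you register it with CREATE ASSEMBLY and expose it with CREATE FUNCTION ... EXTERNAL NAME on the SQL side.
using System;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public static class BinaryDateConversions
{
    // Scalar CLR function: varbinary holding DateTime.ToBinary bytes -> datetime
    [SqlFunction(IsDeterministic = true, IsPrecise = true)]
    public static SqlDateTime FromBinaryDate(SqlBytes value)
    {
        if (value == null || value.IsNull)
            return SqlDateTime.Null;

        // read the Int64 back from the raw bytes and rehydrate the DateTime
        long binary = BitConverter.ToInt64(value.Value, 0);
        return new SqlDateTime(DateTime.FromBinary(binary));
    }
}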
This is a one-off conversion, right? Your response to #schglurps is a bit of a concern.
If I understand you, there would have to be a break in your update script, i.e. the one you have would work up to the point where you implement this change, then you'd have a one-off procedure for this manoeuvre, and then you would be updating from a new version.
If you want to validate it, just check for the existence or non-existence of the new columns.
Another option would be to write a wee application that fills in the new columns from the old one and invoke it. Ugh...
If this isn't a one-off and you want to keep and maintain the old column, then you have problems.
Background
Here is my issue. Earlier in the course of my program a System.Data.DataSet was serialized out to a file. Then sometime later the data set schema was changed in the program, specifically a column was added to one of the tables.
This data set has been created using VS and C#, so it has all the properties able to access the rows and columns by name (through the Microsoft generated code). It has all the files (.xsd, .cs, etc.) that VS needs to know what the data set looks like and the names therein.
The file is loaded and saved through XML Serialization. This causes an issue now because when I deserialize the old file it loads in the data related to the old schema. This works for the most part, but the object that is created (the data set) has everything but the column that was added later. So, when trying to access the new column it fails because the deserialization did not know about it and the entire column winds up being null.
This now causes more issues because it throws an exception when trying to access that column (because it's null) through the properties of the data set.
Question
My question is, can I somehow add in the column after deserialization? I apparently need to add it so that it complies with the Microsoft generated code because doing this:
myDataSet.myTable.Columns.Add("MyMissingColumn");
...does not add the column it needs. It may add a column, but the row property myDataRow.MyMissingColumn returns null and errors out.
Do I need to somehow copy the new schema into this object? Again, the only reason this is failing is because the old file was serialized using the old schema.
Any suggestions are appreciated.
Why don't you load the schema from the new schema file, and then load the old data? Provided your column allows nulls, it should be fine.
DataSet data = new DataSet();
data.ReadXmlSchema(schemaFile);
data.ReadXml(dataFile, XmlReadMode.IgnoreSchema);
Otherwise just add it on the fly:
if (!data.Tables[0].Columns.Contains("SomeId"))
{
var column = new DataColumn("SomeId", typeof(int));
// give it a default value if you don't want null
column.DefaultValue = 1;
// should it support null values?
column.AllowDBNull = false;
data.Tables[0].Columns.Add(column);
}
You are adding a new column without specifying its data type, which is strange; I would specify typeof(string) using another overload of Add.
Besides this, it's understandable that you cannot do myDataRow.MyMissingColumn, because there was no type/column mapping in the initial version of the XSD, but can you access this column by name or index anyway?
Try something like this:
myDataRow["MyMissingColumn"] = "Test";
var s = myDataRow["MyMissingColumn"].ToString();