I have an old system that I need to read data from. It stores the data in an old Visual FoxPro DBF table.
In C# I'm using adodb.dll with Provider=VFPOLEDB.1 to get the data, and so far it works well.
When I run the following SELECT (the WHERE expression matches an index):
select * from tableXYZ where tab_f1+STR(tab_f2,10)+STR(tab_f3,3)+STR(tab_f4,3)+STR(tab_f5,3) = "AR1234567890"
In Visual FoxPro I get a fast result (using the index); "AR" is tab_f1 and "1234567890" is STR(tab_f2,10). The fields tab_f3, tab_f4 and tab_f5 seem to be ignored: even when they are filled in the table, I get all the rows regardless of the content of f3 to f5.
The same SELECT in C# over the ADODB connection returns no result set. Is there a way to make VFPOLEDB ignore the three trailing fields as well, or do I need a separate index on just tab_f1+tab_f2?
VFP treats = as "starts with" depending on the SET ANSI setting. If you want the same behavior through ADODB, use LIKE instead:
select *
from tableXYZ
where tab_f1+STR(tab_f2,10)+STR(tab_f3,3)+STR(tab_f4,3)+STR(tab_f5,3) LIKE "AR1234567890%"
However, note that ADODB will not use stand-alone ("IDX") index files. If your index is in a CDX associated with that table, then it should be used.
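A minimal sketch of running that LIKE query from C#. It is shown here with System.Data.OleDb rather than the adodb.dll interop, since both go through the same VFPOLEDB.1 provider; the data-source path and the printed columns are assumptions to adjust for your system.

```csharp
using System;
using System.Data.OleDb;

class VfpLikeQuery
{
    static void Main()
    {
        // Assumption: the DBF files live in this folder -- point this at your data.
        string connStr = @"Provider=VFPOLEDB.1;Data Source=C:\data\foxpro;";

        using (var conn = new OleDbConnection(connStr))
        using (var cmd = conn.CreateCommand())
        {
            // LIKE with a trailing % gives the same "starts with" match
            // that VFP's = operator gives with SET ANSI OFF.
            cmd.CommandText =
                "select * from tableXYZ " +
                "where tab_f1+STR(tab_f2,10)+STR(tab_f3,3)+STR(tab_f4,3)+STR(tab_f5,3) " +
                "LIKE 'AR1234567890%'";

            conn.Open();
            using (OleDbDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader["tab_f1"] + " " + reader["tab_f2"]);
            }
        }
    }
}
```

The same LIKE text also works unchanged through an ADODB Recordset if you prefer to stay with the interop assembly.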
Related
I am new to OleDb and I have a project that requires me to grab data from an Excel file using a console application. The Excel file has around 500 columns and 55 rows. How can I get data from the columns past 255?
To read columns 256 and beyond, you just need to modify the SELECT statement. By default, both the Microsoft.ACE.OLEDB.12.0 and Microsoft.Jet.OLEDB.4.0 drivers read columns 1-255 (A through IU), but you can ask for the remaining columns by specifying them in the SELECT statement.
To read the next 255 columns, taking "Sheet1" as your sheet name, you would specify:
Select * From [Sheet1$IV:SP]
This will work even if there aren't another 255 columns; it will simply return whichever additional columns exist, whether that is 1 or 255 of them.
Incidentally, the Microsoft.ACE.OLEDB.12.0 driver reads both .xls and any variant of .xlsx, .xlsm, etc. without changing the extended properties from "Excel 12.0". There is no need to if/then/else the connection string depending on the file type.
The OLEDB driver is pretty good for the most part, but it really does rely on well-formed sheets. Mixed data types aren't handled terribly well, and it does weird things if the first columns/rows are empty, but aside from that it's fine. I use it a lot.
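A short sketch of reading that second column range into a DataTable, assuming the ACE provider is installed; the file path, sheet name, and HDR setting are assumptions to adjust for your workbook.

```csharp
using System;
using System.Data;
using System.Data.OleDb;

class ReadWideSheet
{
    static void Main()
    {
        // Assumption: workbook location; "Excel 12.0" works for .xls and .xlsx alike.
        string connStr =
            @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\wide.xlsx;" +
            "Extended Properties=\"Excel 12.0;HDR=No\"";

        var table = new DataTable();
        using (var conn = new OleDbConnection(connStr))
        using (var adapter = new OleDbDataAdapter("Select * From [Sheet1$IV:SP]", conn))
        {
            // Fills whatever exists in columns 256-510 (IV through SP).
            adapter.Fill(table);
        }
        Console.WriteLine("Read {0} columns", table.Columns.Count);
    }
}
```

Running one query per 255-column range and merging the DataTables row-by-row is a simple way to stitch the full 500-column sheet back together.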
I am new to SQLite, and in my project I need to select data from a .db file, but unfortunately the query I am running is too big. The query is:
SELECT distinct * FROM RunTime WHERE Local_Machine = 'GTS-VINAY' and ((Variable like '[MCUL_ErrorMessage#2]%') or (Variable like '[MCUL_UniqueID#1]%')[....])
This throws the exception "Expression tree is too large (maximum depth 1000)". I googled a lot and found people suggesting to raise the SQLITE_LIMIT_EXPR_DEPTH limit, for example in this thread: http://forums.devart.com/viewtopic.php?f=48&t=31731#p109439
But the question is how I can do the same in a C# WinForms application.
Note: I tried to add a reference to sqlite3 in my project, but VS 2013 does not allow me to add it.
Put all the patterns into a temporary table, and let the database iterate over it:
SELECT RunTime.*
FROM RunTime
JOIN TempTable ON RunTime.Variable LIKE TempTable.Variable
WHERE Local_Machine = 'GTS-VINAY';
I have a stored procedure that returns a large result set (nearly 20 million records). I need to save this result to multiple XML files. I am currently using ADO.NET to fill a DataSet, but it quickly throws System.OutOfMemoryException. What other methods can I use to accomplish this?
Are you using SQL Server?
In that case there is a SQL instruction (FOR XML) to automatically convert the result of a query into an XML structure; you would then get it as a string in the application.
Options:
- split the string into several pieces and save them to files (in the app)
- modify the stored procedure to split the result into several XML objects, return them as different strings (1 row => 1 object), and save each of them to a file
- write a new stored procedure that calls the original one, splits the result into X XML objects, and returns X XML strings that you just have to save in the application
Not using SQL Server?
- do the XML formatting in the stored procedure, or write a new one that does it
Either way, I think it will be easier to do the XML formatting server side.
Assuming you are using SQL Server - you can use paging in your stored procedure. ROW_NUMBER is an option. SQL Server 2012 and above support OFFSET and FETCH.
Also, how many DataTables are you filling? There are row limits for DataTables.
The maximum number of rows that a DataTable can store is 16,777,216
https://msdn.microsoft.com/en-us/library/system.data.datatable.aspx
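Another way to sidestep the memory limits entirely is to skip the DataSet and stream the rows straight from a SqlDataReader into XmlWriter, starting a new file every N rows. A sketch, where the connection string, procedure name, and chunk size are assumptions, and column names are assumed to be valid XML element names:

```csharp
using System;
using System.Data.SqlClient;
using System.Xml;

class ExportToXmlFiles
{
    const int RowsPerFile = 500000;   // assumption: rows per output file

    static void Main()
    {
        // Assumption: hypothetical connection string and stored procedure name.
        using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
        using (var cmd = new SqlCommand("EXEC dbo.GetAllRecords", conn))
        {
            conn.Open();
            // The reader holds only one row in memory at a time.
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                int row = 0, file = 0;
                XmlWriter writer = null;
                while (reader.Read())
                {
                    if (row % RowsPerFile == 0)
                    {
                        // Close the previous chunk and start a new file.
                        if (writer != null) { writer.WriteEndElement(); writer.Close(); }
                        writer = XmlWriter.Create("export_" + (++file) + ".xml");
                        writer.WriteStartElement("rows");
                    }
                    writer.WriteStartElement("row");
                    for (int i = 0; i < reader.FieldCount; i++)
                        writer.WriteElementString(reader.GetName(i), reader[i].ToString());
                    writer.WriteEndElement();
                    row++;
                }
                if (writer != null) { writer.WriteEndElement(); writer.Close(); }
            }
        }
    }
}
```

Because nothing is buffered beyond the current row, this handles 20 million records without approaching the DataTable row limit or exhausting memory.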
I'm using SQLite to compare some IDs (varchar 50) so that I get a numeric database ID.
I'm using C# and Visual Studio 2003. I'm pretty new to C#, so maybe I'm making a newbie mistake.
When I run a comparison like the following:
SELECT * FROM tblUSer WHERE Use_id like '%Ñ120%'
I don't get anything back, even though the value exists... I suppose it is an encoding problem, but I don't know how to solve it.
I can't change the database schema, since it is not my database (I only need to select some data and update a field).
I can't change the data itself either, since it already exists and the user codes match some external reference, like a club ID.
Converting the query to UTF-8 made it run (without that it would give an error), but I still don't get the desired data. Maybe it also depends on how the data is saved by the original program.
What do I do so my query works with the Ñ?
What if you remove the Ñ from your query and just check
Use_id LIKE '%120%'
and then select the appropriate ones in C#?
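A sketch of that two-step approach: let SQLite do the cheap ASCII-only LIKE, then do the accent-sensitive part in C#, where strings are always Unicode. The candidate list here is hypothetical sample data standing in for the rows the broader query would return.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class FilterInCSharp
{
    static void Main()
    {
        // Assumption: these stand in for rows returned by
        // SELECT Use_id FROM tblUSer WHERE Use_id LIKE '%120%'
        var candidates = new List<string> { "Ñ120AB", "X120CD", "AÑ120" };

        // The Ñ comparison happens in .NET, so no database encoding is involved.
        var matches = candidates.Where(id => id.Contains("Ñ120")).ToList();

        foreach (string id in matches)
            Console.WriteLine(id);   // Ñ120AB, AÑ120
    }
}
```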
try this:
SELECT * FROM tblUSer WHERE Use_id like N'%Ñ120%'
Notice the N letter.
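An alternative that sidesteps literal-encoding questions altogether is a parameterized query: the pattern travels as a .NET string, so the provider handles the Unicode conversion for you. A sketch using the System.Data.SQLite package, where the database file name is an assumption:

```csharp
using System;
using System.Data.SQLite;   // System.Data.SQLite NuGet package

class UnicodeLike
{
    static void Main()
    {
        using (var conn = new SQLiteConnection("Data Source=users.db"))
        {
            conn.Open();
            using (var cmd = new SQLiteCommand(
                "SELECT * FROM tblUSer WHERE Use_id LIKE @pattern", conn))
            {
                // The Ñ is passed as a .NET (UTF-16) string; the provider
                // converts it to the database's encoding -- no manual escaping.
                cmd.Parameters.AddWithValue("@pattern", "%Ñ120%");

                using (SQLiteDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader["Use_id"]);
                }
            }
        }
    }
}
```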
I have been tasked to write a module for importing data into a client's system.
I thought to break the process into 4 parts:
1. Connect to the data source (SQL, Excel, Access, CSV, ActiveDirectory, Sharepoint and Oracle) - DONE
2. Get the available tables/data groups from the source - DONE
i. Get the available fields from the selected table/data group - DONE
ii. Get all data from the selected fields - DONE
3. Transform data to the user's requirements
4. Write the transformed data to the MSSQL target
I am trying to plan how to handle complex data transformations like:
Get column A from Table tblA, inner joined to column FA from table tblB, and concatenate these two with a semicolon in between.
OR
Get column C from table tblC on source where column tblC.D is not in table tblG column G on target database.
My worry is not the visual side, but how to represent such an operation in code.
I am NOT asking for sample code, but rather for some creative ideas.
The data transformation will not be with free text, but drag and drop objects that represent actions.
I am a bit lost, and need some fresh input.
Maybe you can grab some ideas from this open-source project: Rhino ETL.
See my answer: Manipulate values in a datatable?
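One common way to represent drag-and-drop actions in code is to make each action an object behind a shared interface and treat the pipeline as an ordered list of those objects (the same composition idea Rhino ETL uses). A minimal sketch, where every type and name is illustrative rather than taken from any existing library; the concatenation step models the "column A + ';' + column FA" example from the question:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Each drag-and-drop action implements this interface, so the designer
// only ever manipulates an ordered list of ITransformStep objects.
interface ITransformStep
{
    IEnumerable<Dictionary<string, object>> Apply(
        IEnumerable<Dictionary<string, object>> rows);
}

// "Concatenate two columns with a separator" as one reusable, configurable step.
class ConcatStep : ITransformStep
{
    private readonly string _left, _right, _target, _separator;

    public ConcatStep(string left, string right, string target, string separator)
    {
        _left = left; _right = right; _target = target; _separator = separator;
    }

    public IEnumerable<Dictionary<string, object>> Apply(
        IEnumerable<Dictionary<string, object>> rows)
    {
        foreach (var row in rows)
        {
            row[_target] = row[_left] + _separator + row[_right];
            yield return row;
        }
    }
}

// The pipeline just chains the steps; rows stream through lazily.
class Pipeline
{
    private readonly List<ITransformStep> _steps = new List<ITransformStep>();

    public Pipeline Add(ITransformStep step) { _steps.Add(step); return this; }

    public IEnumerable<Dictionary<string, object>> Run(
        IEnumerable<Dictionary<string, object>> rows)
    {
        return _steps.Aggregate(rows, (current, step) => step.Apply(current));
    }
}

class Demo
{
    static void Main()
    {
        var rows = new[]
        {
            new Dictionary<string, object> { { "A", "x" }, { "FA", "y" } }
        };
        var result = new Pipeline()
            .Add(new ConcatStep("A", "FA", "Combined", ";"))
            .Run(rows)
            .First();
        Console.WriteLine(result["Combined"]);   // x;y
    }
}
```

Because each step is data (column names, separator), the same objects can be serialized to describe what the user assembled visually, and cross-database conditions like the NOT IN example can become their own step types.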