I have a grid that is populated using an Oracle 11g query that returns a TIMESTAMP(6) WITH TIME ZONE field along with other fields.
When I select a date range from 12/26/2014 to 1/5/2015 for the data to be displayed, and then try to sort by this column (asc or desc), it does not sort properly. For example, in desc order it displays 01/01 to 01/05 first and then 12/26 to 12/31. It looks like string sorting.
I am guessing that a TIMESTAMP(6) WITH TIME ZONE field containing a value like 21-JAN-2015 18:17:16:00000 USA/EASTERN is not recognized as a date-time but rather as a string. Is there any way to resolve this issue?
Turning a comment into an answer:
There are a number of options:
(i) Rely on the database to sort the data correctly - which you might want to avoid for performance reasons.
(ii) Instruct the grid to handle the timestamp column as such (and not as a string column) - which might not be possible.
(iia) Use a different grid component, which handles date columns properly.
(iii) Use a hidden (string) column with the timestamp formatted as 'YYYY-MM-DD HH24…' for sorting (a sketch follows below).
(iv) Write your own sort routine.
Feel free to provide more detail if and as you see fit.
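For option (iii), a minimal sketch of the query side - the table and column names (my_table, event_ts) are made up for illustration. Normalizing the timestamp to UTC first makes the string order match the chronological order:
SELECT t.*,
       TO_CHAR(SYS_EXTRACT_UTC(t.event_ts), 'YYYY-MM-DD HH24:MI:SS.FF6') AS event_ts_sort
  FROM my_table t;
Show the original timestamp column in the grid and let the grid sort on the hidden event_ts_sort column.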
I have a report file (.rpt) containing text as shown below, and this .rpt file gets updated every day.
Datum/Uhrzeit,Sta.,Bez.,Unit,TBId,Batch,OrderNr,Mat1,Total1,Mat2,Total2,Mat3,Total3,Mat4,Total4,Mat5,Total5,Mat6,Total6,Summe
41521.755934(04.09.13 18:08:32),TB01,TB01,005,300,9663, ,2,27313.63,0,0.00,0,0.00,3,1776.19,0,0.00,0,0.00,29089.82
41521.797601(04.09.13 19:08:32),TB01,TB01,005,300,9682, ,2,27365.98,0,0.00,0,0.00,3,1780.86,0,0.00,0,0.00,29146.85
41521.839269(04.09.13 20:08:32),TB01,TB01,005,300,9701, ,2,27418.34,0,0.00,0,0.00,3,1785.53,0,0.00,0,0.00,29203.88
41521.880937(04.09.13 21:08:33),TB01,TB01,005,300,9721, ,2,27473.31,0,0.00,0,0.00,3,1790.40,0,0.00,0,0.00,29263.71
41521.922606(04.09.13 22:08:33),TB01,TB01,005,300,9741, ,2,27528.53,0,0.00,0,0.00,3,1795.30,0,0.00,0,0.00,29323.83
41521.964274(04.09.13 23:08:33),TB01,TB01,005,300,9760, ,2,27580.88,0,0.00,0,0.00,3,1799.97,0,0.00,0,0.00,29380.84
41522.005942(05.09.13 00:08:33),TB01,TB01,005,300,9780, ,2,27636.00,0,0.00,0,0.00,3,1804.86,0,0.00,0,0.00,29440.86
I need to extract the first and last reading values of every row and put those readings into a database table.
first reading -- Datum/Uhrzeit
last reading -- Summe
I have also tried the COPY command, but it doesn't accept the first value. I want to know which data type to use to extract this value (it is not in a normal date format).
Also, is it possible to take just these two readings out of this file rather than all 20? Is there any such method available?
I am using PostgreSQL 9.0
Any help would be great.
Assuming that "reading" = "column":
You will need to COPY to a TEMPORARY table where the first column is of type text, not date, since that is an invalid date format.
Then you can do an INSERT INTO real_table (col1, col2, ...) SELECT some_func(thedate), col2, col3, ... FROM temptable to transform the temp table contents into correct date data using appropriate SQL and insert it into the real target table.
There are lots of existing examples of this on Stack Overflow, though not for your particular date format. I'm guessing that the date in the parens (...) is the date you want, and that the numbers before are a representation of that date as days since epoch + time since start of day. It'll be easier to just parse the date part, which you can do with:
SELECT to_timestamp(substring('41521.880937(04.09.13 21:08:33)' from '\(.*\)' ), '(DD.MM.YY HH24.MI.SS)');
so that's your some_func for the above.
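Putting the pieces together, a minimal sketch of the whole flow - the staging table, file path and target table/column names here are assumptions:
CREATE TEMPORARY TABLE rpt_staging (
    datum_uhrzeit text, sta text, bez text, unit text, tbid text, batch text,
    ordernr text, mat1 text, total1 text, mat2 text, total2 text,
    mat3 text, total3 text, mat4 text, total4 text, mat5 text, total5 text,
    mat6 text, total6 text, summe numeric
);

-- load the raw file; HEADER skips the first line
COPY rpt_staging FROM '/path/to/report.rpt' WITH CSV HEADER;

-- keep only the two readings, converting the first one to a timestamp
INSERT INTO readings (reading_time, summe)
SELECT to_timestamp(substring(datum_uhrzeit from '\(.*\)'), '(DD.MM.YY HH24.MI.SS)'),
       summe
  FROM rpt_staging;
Note that the INSERT only has to list the two columns you care about, even though the staging table holds all 20.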
As for taking only the two desired columns, I already explained that to you before so I'm not going to repeat myself. Short version: Use an ETL tool, re-export the CSV with just those columns, or use a filter program to limit the input.
A text file contains order and order detail lines.
The first line is an order header.
After that there is a variable number of detail lines.
After the details there is a blank line, followed by the next order, etc.
For an order line, the first field is the order number.
For a detail line, the first field is always 1.
128502 02.01.2012 20120 02.01.2012
1 Wine 0 1300
1 Meat 5,8333 5,83
128503 02.01.2012 20123 02.01.2012
1 Wine 20 130
1 Meat 1,33 283,23
1 Cow 2,333 333,23
....
This file needs to be read into a list of entities:
class Order {
public string Number; // order number from first field, primary key
public string Date;
... other fields
}
class OrderDetails {
public string Number; // order number from previous line , foreign key to Order
public string ProductName;
... other fields
}
(Instead of Number, a custom integer id column can also be used for the relation.)
How can such a file be read in C# ASP.NET MVC2 using the FileHelpers library, or in some other way?
Update
The sample at http://www.filehelpers.com/example_multirecords.html, referenced from Multiple CSV structures with FileHelpers, shows how to read two tables.
How do I create the relation between those tables? During reading, a foreign key to the order should be added to the details table. How can this be implemented, i.e. taking the order number from the preceding order line and adding it to each detail record?
Well, you could go brute force and do it all in one go:
Read in the entire file.
Split on the blank lines.
Create an Order out of the first line of each split.
Create details out of the rest.
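A minimal sketch of those steps in plain C# (no FileHelpers), assuming the Order/OrderDetails classes above; the whitespace split is an assumption about the exact field separator:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

static void LoadOrders(string path, List<Order> orders, List<OrderDetails> details)
{
    // Read the whole file and split it into blocks on blank lines.
    var blocks = File.ReadAllText(path)
        .Split(new[] { "\r\n\r\n", "\n\n" }, StringSplitOptions.RemoveEmptyEntries);

    foreach (var block in blocks)
    {
        var lines = block.Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries);

        // The first line of a block is the order header; its first field is the order number.
        var headerFields = lines[0].Split(new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries);
        var order = new Order { Number = headerFields[0], Date = headerFields[1] };
        orders.Add(order);

        // The remaining lines are details; "ditto" the order number onto each as the foreign key.
        foreach (var line in lines.Skip(1))
        {
            var fields = line.Split(new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries);
            details.Add(new OrderDetails { Number = order.Number, ProductName = fields[1] });
        }
    }
}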
If the files are huge then you could read in up to each blank line and then process as above.
Another solution that might have some advantages:
Run the file through a process that edits the detail lines and adds in the order number from the related order line.
Then another that splits orders and details into two separate files.
Then process orders, then details, without having to worry about all this dittoing and the two structures.
A fourth solution is to give whoever came up with this format a good beating, to encourage them to do it properly...
There's no magic wand solution for this; the file format is simply wholly unsuitable for efficiently passing data to something that might store it sensibly.
In my app I have a textbox named "msgBox" and a button named "sendBtn".
When the user taps the Button, the text typed in the msgBox is sent to a server.
I want to save and display the last 15-20 sent items.
How can I do that?
Please help.
If you want to display the most recent items, you could include the current date with the submission of the form. Then, when accessing the server, sort the items by their date, and only display the first 15-20. This method would be simpler than creating another table dedicated to recent entries. Also, browsing through archives by date would be much easier as the date would be tagged to the entries.
As a MySQL statement (MySQL being the server language I know best), inserting a message would be the following (where [message] is the sanitized msgBox value):
INSERT INTO `messages` (`msg`,`date`) VALUES ('[message]',NOW());
Retrieval would look like so:
SELECT `msg` FROM `messages` ORDER BY `date` DESC LIMIT 15;
Hope this helps!
I am currently working on a project for traversing an excel document and inserting data into a database using C#.
The relevant data for this project is:
The Excel sheet has 14 rows at the top that I do not care about (sometimes 15; see Russia/Siberia below).
The data is grouped by name into 2 columns (date and value), such as:
Sheet 1
USA China Russia
Date Value Date Value Siberia
1/1/09 4.3654 1/1/09 2.7456 Date Value
1/2/09 3.5545 1/3/09 9.3214 2/5/09 0.2454
1/3/09 3.2322 1/21/09 5.2234 2/6/09 0.5557
The name I need to acquire is whichever is listed directly above "Date".
I only care about data from dates we do not have in the database. Before each column set is parsed, I will acquire the max date for any given name from the database, and skip anything at or before it.
There is no guarantee that the columns will be in a constant order or have constant spacing.
I do not want data for all names, rather only those in a list I put together before the file is acquired.
My current plan is this:
For each column, if the date field is at row 16, save the name as the value in row 15 above it, check the database for the last date for that name, only insert data where the date is greater than the acquired date.
If the date field is at row 17, do the same thing, but start the for loop through each row at 18.
If the name is not in the list, skip the column. If it is, make sure to grab the column next to it for the necessary values.
My problem is:
I am currently trying to use the ExcelDataReader from Codeplex (http://www.codeplex.com/ExcelDataReader). This only likes CSV-like sheets, which this project does not have.
I do not know of any alternative Excel readers.
To the best of my knowledge, a straight FileStream traversal of this file can only go row-by-row, rather than column-by-column.
To anyone still reading, thank you for your time. Any recommendations on how to proceed? Please ensure that solutions can traverse each column, not each row.
Also, please don't worry about the database stuff, or the list of names that precedes the traversal.
Addendum: What I'd really like to end up with is some type of table that I can just traverse with a nested loop, making column-centric traversal much, much easier. Because there is so much garbage near the top of the sheet (14+ rows), most simple solutions are not feasible.
If you want to read from Excel in C#, I've used this library with great success; it'll give you the flexibility to parse columns/rows however you'd like:
http://sourceforge.net/projects/koogra/ (read-only)
Other open source libraries I haven't used but which could be good:
http://nexcel.sourceforge.net/ (read-only)
http://npoi.codeplex.com/ (can read and write)
http://developer.novell.com/wiki/index.php/Poi.Net (this project is dead)
Alternatively, you can use one of the many good Java libraries, and convert it into a C# assembly using IKVM:
http://jxls.sourceforge.net/
http://www.andykhan.com/jexcelapi/
http://poi.apache.org/ (this one's the grand-daddy of java XLS libraries)
I've covered how to do the IKVM Java -> C# conversion here (it's really not as horrible an option as you think):
http://splinter.com.au/blog/?p=207
Not a straight answer to your question but an alternative idea:
Your data looks like a pivot-ish table. I'd recommend "unpivoting" it into a simple table.
Example:
Russia USA
Q1 123 323
Q2 456 321
Q3 567 843
Becomes:
Quarter Country Value
Q1 Russia 123
Q1 USA 323
Q2 Russia 456
....
If that is the case (not sure if I got this right from your question), then processing the data using an OleDB driver or whatever CSV kind of tooling should become much less painful.
You can access Excel directly using ADO.NET via the ODBC driver. See http://www.davidhayden.com/blog/dave/archive/2006/05/26/2973.aspx or Google for more info on how to do that. You may wish to try HDR=No in your connection string, since your first row isn't really a proper header row by the looks of it.
I haven't done this for a while, but I remember that it is a bit "temperamental" and takes some playing around with to get the column names right, but it should work. Try SELECT * FROM [Sheet1$] and see what you get.
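A minimal sketch of that route, here via the closely related Jet OLE DB provider (file path and sheet name are assumptions; HDR=No makes the junk header rows come back as ordinary data rows):
using System.Data;
using System.Data.OleDb;

static DataTable LoadSheet(string path)
{
    // Jet provider connection string for an .xls file.
    var connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + path +
                  ";Extended Properties=\"Excel 8.0;HDR=No\"";

    var sheet = new DataTable();
    using (var conn = new OleDbConnection(connStr))
    using (var adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
    {
        adapter.Fill(sheet);   // the whole sheet lands in an in-memory DataTable
    }
    return sheet;
}

// Column-centric traversal is then just a nested loop over the DataTable:
var sheet = LoadSheet(@"C:\data\report.xls");   // assumed path
for (int col = 0; col < sheet.Columns.Count; col++)
{
    for (int row = 0; row < sheet.Rows.Count; row++)
    {
        object cell = sheet.Rows[row][col];
        // e.g. find the cell containing "Date", take the name from the row above,
        // and walk down this column and the next one collecting date/value pairs.
    }
}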
I highly recommend saving this Excel document in a CSV format before doing anything else with it. You can do so using this code.
After you have a CSV, you can either parse it using that library, or write your own parser for it.
As I have done before, I prefer to use an OLEDB connection to connect to an Excel document.
By the way, you can take a look at the following article for more information:
http://www.codeproject.com/KB/office/excel_using_oledb.aspx
SpreadsheetGear for .NET can load workbooks and access any cells on any sheet in any order. You can get the formatted text of the cell (such as "1/1/09") or the underlying value ("1/1/09" is stored as the double 39814.0 in Excel or SpreadsheetGear).
You can see some live ASP.NET samples here and download the free trial here if you want to try it yourself.
Disclaimer: I own SpreadsheetGear LLC