I'm working with SSRS 2008 on some existing reports. One of the new requirements is to add a few columns, and two of them begin with a number in their headers. For example: 170T_Test.
When displaying the data and headers in the Report Viewer, all is OK.
HOWEVER, the problem is when exporting to CSV format. Since the headers shown in that format are taken from the textbox Name property, I had to set 170T_Test as the name. But when trying to do that, the SSRS 2008 IDE shows the following error message:
"Specify a valid name. The name cannot contain spaces, and it must
begin with a letter followed by letter, numbers, or the undescore
character (_)"
So, based on all of the above: is there a way to have a textbox name starting with a number, or is it prohibited?
Regards!
This can be achieved in two different ways.
I prefer Method 1; it is cleaner and reusable.
Method 1:
Step 1
Update your report dataset to include the headers on row 1.
SELECT Field1, Field2 FROM
(
    SELECT Field1, Field2, 2 AS rowOrder
    FROM Tables
    WHERE Conditions
    UNION ALL
    SELECT '170T Test' AS Field1, '180 T$' AS Field2, 1 AS rowOrder
) AS src
ORDER BY rowOrder
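Note that the UNION ALL only succeeds if every column in the data query is (or is converted to) a character type, since the injected header row is all strings. A minimal sketch, assuming Field2 is numeric in the base table:
SELECT Field1, Field2 FROM
(
    -- cast non-character columns so they union cleanly with the header strings
    SELECT Field1, CAST(Field2 AS varchar(50)) AS Field2, 2 AS rowOrder
    FROM Tables
    WHERE Conditions
    UNION ALL
    -- header row, forced to sort to the top
    SELECT '170T Test', '180 T$', 1
) AS src
ORDER BY rowOrder
Step 2 below then suppresses the textbox-name header that the CSV renderer would otherwise add on top.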
Step 2:
Modify the RSReportServer.config file on the report server to add a CSV export option that excludes the header.
SSRS 2012 config file location: C:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer
SSRS 2008 config file location: \Program Files\Microsoft SQL Server\MSSQL.n\Reporting Services\ReportServer
Important: make a backup of RSReportServer.config in case you need to roll back your changes.
Add another entry in the <Render> section, below the CSV extension:
<Extension Name="CSVNoHeader" Type="Microsoft.ReportingServices.Rendering.DataRenderer.CsvReport,Microsoft.ReportingServices.DataRendering">
    <OverrideNames>
        <Name Language="en-US">CSV No Header</Name>
    </OverrideNames>
    <Configuration>
        <DeviceInfo>
            <NoHeader>true</NoHeader>
        </DeviceInfo>
    </Configuration>
</Extension>
Save it. You now have another export option, CSV No Header, in the drop-down along with CSV, PDF, and XML. Users can use this option to extract the data the way they want.
MSDN link on customizing the rendering extension
Method 2:
STEP 1
Same as above
STEP 2
Use URL access and specify the device info for no header
http://msdn.microsoft.com/en-us/library/ms155046.aspx
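Method 2 works because CSV device information settings can be passed on a URL access request using the rc: prefix. A sketch, where the server, folder, and report names are hypothetical:
http://myserver/ReportServer?/MyFolder/MyReport&rs:Command=Render&rs:Format=CSV&rc:NoHeader=true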
Resource names can't start with a number or a reserved word; if you try, Visual Studio will automatically add an underscore (_) in front of the name.
Your problem probably has something to do with this.
I'm trying to load multiple CSV files into a SQL database using SSIS, but I'm having some issues reading the CSV files.
This is how the files arrive:
,,,,
ID, Name, Amount, Important_Dates, Company
101, Mark,"157,500",11/18/19, Amazon, Inc
102,Tom, "388,000",11/14/19, Ebay Corp
103,Tim, "484,000",11/25/19, Wish
104,Richard,"384,750",10/5/19, NBA. INC
Every time I try to open the file with SSIS, the data gets mixed up because of the commas inside the values. I'm trying to read the file so that every value stays in its own column.
Note:
I'll be running this once or twice a day, which is why I'm trying to automate it.
And I already tried the code from this page, but it didn't work:
https://radacad.com/problem-with-comma-values-in-comma-delimited-file
A Script Task with C# code before the data flow, to take care of the commas across all columns, might help, but I have no idea how to write that.
Any help or ideas would be greatly appreciated.
Thank you!
You should have a connection manager item for whatever the source of this data is, and the connection manager will allow you to set a Text qualifier property. Set it to a double quote (") and SSIS should handle this data properly.
Your CSV is not clean. I have edited it so the quoting and spacing are consistent:
ID,Name,Amount,Important_Dates,Company
101,Mark,"157,500",11/18/19,"Amazon,Inc"
102,Tom,"388,000",11/14/19,Ebay Corp
103,Tim,"484,000",11/25/19,Wish
104,Richard,"384,750",10/5/19,NBA. INC
Then you can change the separator using a classic command-line tool such as Miller (https://github.com/johnkerl/miller) and convert the file from CSV to TSV (--c2t is shorthand for CSV input, TSV output) by running
mlr --c2t cat input.csv >output.tsv
which produces
ID Name Amount Important_Dates Company
101 Mark 157,500 11/18/19 Amazon,Inc
102 Tom 388,000 11/14/19 Ebay Corp
103 Tim 484,000 11/25/19 Wish
104 Richard 384,750 10/5/19 NBA. INC
Then, in SSIS, change the import separator, choosing tab.
I am using ctreeACE to create a local database, and I was given a CSV file that contains 1000 entries of data. Is there a way to import it without hard-coding it?
Right now I am inserting line by line with:
INSERT INTO testdata VALUES
('1ZE83A545192635139','2018-06-19 00:00:00',etc)
Note that ctreeACE only allows single-row inserts with INSERT...VALUES (Source).
I can't find a way to do this directly, but you could use this tool to create your insert statements.
First, input your data. You can load the CSV directly; I just hard-coded two sample lines.
Next, set your input options as needed. I used comma separators and ' as the quoting character in this example.
Third, set your output options. This would be a huge screenshot and is pretty self-explanatory so I'm leaving it out.
Last, click CSV to SQL Insert, and it will generate formatted INSERT statements (one line per insert) for you.
Hope that helps.
I have a SQL table with the columns candidatename, candidatelocation, and resume. The resume column holds only .docx files in binary form. From the front end I need to enter some words or phrases, and my requirement is to get all the records whose .docx file (the resume column) contains those words or phrases. I don't understand how to search for the given words in a binary column. I need to do this using ASP.NET with C# and SQL Server.
You will need to install the Microsoft Filter Pack (http://support.microsoft.com/en-us/kb/945934), which will enable you to create a full-text index on the varbinary column you are using to store the .docx documents.
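A minimal sketch of the SQL Server side, assuming a table dbo.Candidates with primary key index PK_Candidates and an added resume_ext column holding the file extension ('.docx') for each row; the TYPE COLUMN clause tells the full-text engine which IFilter to use (these names are illustrative, not from the question):
-- one-time setup: catalog plus full-text index over the varbinary column
CREATE FULLTEXT CATALOG ResumeCatalog;
CREATE FULLTEXT INDEX ON dbo.Candidates (resume TYPE COLUMN resume_ext LANGUAGE 1033)
    KEY INDEX PK_Candidates ON ResumeCatalog;

-- search: return candidates whose resume contains the phrase
SELECT candidatename, candidatelocation
FROM dbo.Candidates
WHERE CONTAINS(resume, '"sql server"');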
I have a table called EmployeeTypes which holds types of employees. Now I want to create a report using SSRS which will have something like this:
EmployeeType1 EmployeeType2 EmployeeType3
4 23 2
where the numbers are the counts of employees.
The problem is how to generate the columns programmatically like that, as the EmployeeTypes table can have many types and expand over time.
It sounds like you are looking for a cross-tab report, using a dataset like select count(*), employee_type from employee_table group by employee_type.
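Spelled out, the dataset query is just a grouped count (employee_table and employee_type are placeholder names; substitute your own schema):
SELECT employee_type, COUNT(*) AS employee_count
FROM employee_table
GROUP BY employee_type
In the matrix, employee_type becomes the column group and employee_count the data cell, so new types appearing in the table show up as new columns without any report changes.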
You can use the report wizard to create a 'Matrix' report type (as opposed to a 'Tabular' report type). The wizard will guide you through the steps to get what you need.
An SSRS report is defined by an XML file (the .rdl or .rdlc file). You could generate columns programmatically by modifying that XML directly. It's not for the faint of heart, but people do it, and it can be done.
Here is a sample:
http://www.codeproject.com/Articles/11254/SQL-Reporting-Services-with-Dynamic-Column-Reports
Problem.
I regularly receive feed files from different suppliers. Although the column names are consistent, the problem comes when some suppliers send text files with more or fewer columns in their feed files.
Furthermore, the arrangement of the columns in these files is inconsistent.
Other than the Dynamic Data Flow Task provided by CozyRoc, is there another way I could import these files? I am not a C# guru, but I am drawn towards using a Script Task in the control flow or a Script Component in the data flow.
Any suggestions, samples, or direction will be greatly appreciated.
http://www.cozyroc.com/ssis/data-flow-task
Some forums
http://www.sqlservercentral.com/Forums/Topic525799-148-1.aspx#bm526400
http://www.bidn.com/forums/microsoft-business-intelligence/integration-services/26/dynamic-data-flow
Off the top of my head, I have a 50% solution for you.
The problem
SSIS really cares about metadata, so variations in it tend to result in exceptions. DTS was far more forgiving in this sense. That strong need for consistent metadata makes use of the Flat File Source troublesome.
Query-based solution
If the problem is the component, let's not use it. What I like about this approach is that, conceptually, it's the same as querying a table: the order of columns does not matter, nor does the presence of extra columns.
Variables
I created three variables, all of type String: CurrentFileName, InputFolder and Query.
InputFolder is hard-wired to the source folder. In my example, it's C:\ssisdata\Kipreal.
CurrentFileName is the name of a file. During design time it was input5columns.csv, but that will change at run time.
Query is an expression: "SELECT col1, col2, col3, col4, col5 FROM " + @[User::CurrentFileName]
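Since the expression is plain string concatenation, when CurrentFileName is input5columns.csv the Query variable evaluates to:
SELECT col1, col2, col3, col4, col5 FROM input5columns.csv
With the JET text driver, the folder acts as the database and each file in it is queried as a table.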
Connection manager
Set up a connection to the input file using the JET OLEDB provider. After creating it as described in the linked article, I renamed it to FileOLEDB and set an expression on the connection manager's ConnectionString property of "Data Source=" + @[User::InputFolder] + ";Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties=\"text;HDR=Yes;FMT=CSVDelimited;\";"
Control Flow
My Control Flow looks like a Data flow task nested in a Foreach file enumerator
Foreach File Enumerator
My Foreach File Enumerator is configured to operate on files. I put an expression on the Directory property of @[User::InputFolder]. Notice that at this point, if the value of that folder needs to change, it will correctly be updated in both the connection manager and the file enumerator. In "Retrieve file name", instead of the default "Fully qualified", choose "Name and extension".
In the Variable Mappings tab, assign the value to our @[User::CurrentFileName] variable.
At this point, each iteration of the loop will change the value of @[User::Query] to reflect the current file name.
Data Flow
This is actually the easiest piece. Use an OLE DB Source and wire it up as indicated.
Use the FileOLEDB connection manager and change the data access mode to "SQL command from variable". Use the @[User::Query] variable there, click OK, and you're ready to work.
Sample data
I created two sample files, input5columns.csv and input7columns.csv. All of the columns in the 5-column file are in the 7-column file, but the 7-column file has them in a different order (col2 is at ordinal position 2 in one and 6 in the other, counting from zero). I negated all the values in the 7-column file to make it readily apparent which file is being operated on.
col1,col3,col2,col5,col4
1,3,2,5,4
1111,3333,2222,5555,4444
11,33,22,55,44
111,333,222,555,444
and
col1,col3,col7,col5,col4,col6,col2
-1111,-3333,-7777,-5555,-4444,-6666,-2222
-111,-333,-777,-555,-444,-666,-222
-1,-3,-7,-5,-4,-6,-2
-11,-33,-77,-55,-44,-666,-222
Running the package loads both files successfully (result screenshots omitted here).
What's missing
I don't know of a way to tell the query-based approach that it's OK if a column doesn't exist. If there's a unique key, I suppose you could define your query to have only the columns that must be there, and then perform lookups against the file to try to obtain the columns that ought to be there, without failing the lookup if a column doesn't exist. Pretty kludgy, though.
Our solution: we use parent-child packages. In the parent package we take the individual client files and transform them into our standard-format files, then call the child package to process the standard import using the file we created. This only works if the client is consistent in what they send, though; if they try to change their format from what they agreed to send us, we return the file.