MVC Stream outputting to HTML div - C#

I have a small problem with an MVC app that I'm not sure how to work around.
Basically, I'm adding functionality to a system that was originally created by someone else (C#). In its reporting system, the results were only ever displayed on screen. Now I am building in the ability for the user to download their report as an Excel document.
So I have a view that displays the date ranges and some other search refinement options to the user, and I have introduced a radio button that, if selected, downloads the report instead of displaying it on screen.
Here are my three actions within the ReportController:
public ActionResult Index()
{
    return View();
}

public ActionResult ProductReport(AdminReportRequest reportRequest, FormCollection formVariables)
{
    AdminEngine re = new AdminEngine();
    if (!reportRequest.Download)
    {
        AdminReport report = re.GetCompleteAdminReport(reportRequest);
        return View(report);
    }
    Stream excelReport = re.GetExcelAdminReport(reportRequest);
    TempData["excelReport"] = excelReport;
    return RedirectToAction("ExcelReport");
}

public FileResult ExcelReport()
{
    var excelReport = TempData["excelReport"] as Stream;
    return new FileStreamResult(excelReport, "application/ms-excel")
    {
        FileDownloadName = "Report" + DateTime.Now.ToString("MMMM d, yyyy") + ".xls"
    };
}
I've debugged through the AdminEngine and everything looks fine. However, in the ExcelReport action, when it comes to returning the file, it doesn't. What I see is a lot of characters on screen (in the 'panelReport' div - see below), mixed in with what would be the data in the Excel file.
I think I have established that the reason it is being displayed on screen is some code in the Index view:
<% using (Ajax.BeginForm("ProductReport", "Report", null,
new AjaxOptions
{
UpdateTargetId = "panelReport",
InsertionMode = InsertionMode.Replace,
OnSuccess = "pageLoaded",
OnBegin = "pageLoading",
OnFailure = "pageFailed",
LoadingElementId = ""
},
new { id = "SearchForm" })) %>
As you can see, the Ajax.BeginForm statement says it should update the panelReport div - which is what it's doing (through the Product Report partial view). While this is perfect for when the reports need to be displayed on screen, it is obviously not going to work with an Excel file.
Is there a way of working around this issue without changing the existing code too much?
Here is the class where I do the workings for the Excel file, in case it helps shed light on the situation:
Report Class:
public Stream GetExcelAdminReport(AdminReportRequest reportRequest)
{
    AdminReport report = new AdminReport();
    string dateRange = null;
    List<ProductSale> productSales = GetSortedListOfProducts(reportRequest, out dateRange);
    report.DateRange = dateRange;
    if (productSales.Count > 0)
    {
        report.HasData = true;
        CustomisedSalesReport custSalesRep = new CustomisedSalesReport();
        Stream salesReport = custSalesRep.GenerateCustomisedSalesFile(productSales);
        return salesReport;
    }
    return null; // no sales data for the requested range
}
Workings Class:
public class CustomisedSalesReport
{
public Stream GenerateCustomisedSalesFile(List<ProductSale> productSales)
{
MemoryStream ms = new MemoryStream();
HSSFWorkbook templateWorkbook = new HSSFWorkbook();
HSSFSheet sheet = templateWorkbook.CreateSheet("Sales Report");
//Workings
templateWorkbook.Write(ms);
ms.Position = 0;
return ms;
}
}

The problem is pretty obvious: you are using an Ajax form to download a file. On top of that, you are using the built-in Microsoft Ajax libraries, which are not smart enough to handle a file response.
I can offer 2 solutions:
The easiest solution (which I have used in the past) is that instead of streaming the file yourself, you create the Excel file, save it on the server, and send the download link to the user. It won't require a lot of change to the code; a rough sketch follows below.
Alternatively, you could handle the submit event of the Ajax form and check whether the request is a file download. If it is, make a full (non-Ajax) postback instead. That way the browser will automatically pop up the dialog asking where to save the file.
Hope it makes sense.
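Here is a rough sketch of the first option. The ~/Content/Reports folder and the returned HTML snippet are my assumptions, not part of the original code; since the Ajax form replaces the contents of #panelReport with whatever the action returns, returning a small piece of HTML containing the link keeps the existing form untouched:

public ActionResult ProductReport(AdminReportRequest reportRequest, FormCollection formVariables)
{
    AdminEngine re = new AdminEngine();
    if (!reportRequest.Download)
    {
        return View(re.GetCompleteAdminReport(reportRequest));
    }

    // Save the generated spreadsheet to disk instead of streaming it through the Ajax form.
    Stream excelReport = re.GetExcelAdminReport(reportRequest);
    string fileName = "Report" + DateTime.Now.ToString("yyyyMMdd-HHmmss") + ".xls";
    string path = Server.MapPath("~/Content/Reports/" + fileName); // assumed folder

    using (FileStream file = System.IO.File.Create(path))
    {
        excelReport.CopyTo(file);
    }

    // The Ajax form injects whatever comes back into #panelReport,
    // so hand back a download link rather than the file itself.
    string url = Url.Content("~/Content/Reports/" + fileName);
    return Content("<a href=\"" + url + "\">Download your report</a>");
}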

Related

Scraping HTML list data from a dynamic server

Hello guys!
Sorry for the dumb question, this is my last resort. I swear I tried countless other Stack Overflow questions, different frameworks, etc., but those didn't seem to help.
I have the following problem:
A website displays a list of data (there is a TON of div, li, span etc. tags in front of it; it's a big HTML document).
I'm writing a tool that fetches the data from a specific list buried inside a ton of other div tags, downloads it and outputs an Excel file.
The website I'm trying to access is dynamic. So you open the website, it loads a little bit, and then the list appears (probably some JS and stuff).
When I try to download the website via a WebRequest in C#, the HTML I get is almost empty, with a ton of whitespace, lots of non-HTML stuff and some garbage data as well.
Now: I'm pretty used to C#, HtmlAgilityPack and countless other libraries, but not so much to web-related stuff. I tried CefSharp, Chromium etc., all that stuff, but unfortunately couldn't get them to work properly.
I want to have HTML in my program to work with that looks exactly like the HTML you see when you open the dev console in Chrome when visiting the website mentioned above.
The HTML parser works flawlessly there.
This is how I imagine the code could look, simplified.
Extreme C# pseudocode:
WebBrowserEngine web = new WebBrowserEngine();
web.LoadURLuntilFinished(url); // with all the JS executed and stuff
string html = web.getHTML();
web.close();
My goal would be that the string html in the pseudocode looks exactly like the one in the Chrome dev tab.
Maybe there is a solution posted somewhere else, but I swear I couldn't find it; I've been looking for days.
Any help is greatly appreciated.
@SpencerBench is spot on in saying:
It could be that the page is using some combination of scroll state, element visibility, or element positions to trigger content loading. If that's the case, then you'll need to figure out what it is and trigger it programmatically.
To answer the question for your specific use case, we need to understand the behaviour of the page you want to scrape data from, or as I asked in the comments, how do you know the page is "finished"?
However, it's possible to give a fairly generic answer to the question which should act as a starting point for you.
This answer uses Selenium, a package which is commonly used for automating testing of web UIs, but as they say on their home page, that's not the only thing it can be used for.
Primarily it is for automating web applications for testing purposes, but is certainly not limited to just that. Boring web-based administration tasks can (and should) also be automated as well.
The web site I'm scraping
So first we need a web site. I've created one using ASP.net core MVC with .net core 3.1, although the web site's technology stack isn't important, it's the behaviour of the page you want to scrape which is important. This site has 2 pages, unimaginatively called Page1 and Page2.
Page controllers
There's nothing special in these controllers:
namespace StackOverflow68925623Website.Controllers
{
using Microsoft.AspNetCore.Mvc;
public class Page1Controller : Controller
{
public IActionResult Index()
{
return View("Page1");
}
}
}
namespace StackOverflow68925623Website.Controllers
{
using Microsoft.AspNetCore.Mvc;
public class Page2Controller : Controller
{
public IActionResult Index()
{
return View("Page2");
}
}
}
API controller
There's also an API controller (i.e. it returns data rather than a view) which the views can call asynchronously to get some data to display. This one just creates an array of the requested number of random strings.
namespace StackOverflow68925623Website.Controllers
{
using Microsoft.AspNetCore.Mvc;
using System;
using System.Collections.Generic;
using System.Text;
[Route("api/[controller]")]
[ApiController]
public class DataController : ControllerBase
{
[HttpGet("Create")]
public IActionResult Create(int numberOfElements)
{
var response = new List<string>();
for (var i = 0; i < numberOfElements; i++)
{
response.Add(RandomString(10));
}
return Ok(response);
}
private string RandomString(int length)
{
var sb = new StringBuilder();
var random = new Random();
for (var i = 0; i < length; i++)
{
var characterCode = random.Next(65, 90); // A-Z
sb.Append((char)characterCode);
}
return sb.ToString();
}
}
}
Views
Page1's view looks like this:
@{
ViewData["Title"] = "Page 1";
}
<div class="text-center">
<div id="list" />
<script src="~/lib/jquery/dist/jquery.min.js"></script>
<script>
var apiUrl = 'https://localhost:44394/api/Data/Create';
$(document).ready(function () {
$('#list').append('<li id="loading">Loading...</li>');
$.ajax({
url: apiUrl + '?numberOfElements=20000',
datatype: 'json',
success: function (data) {
$('#loading').remove();
var insert = ''
for (var item of data) {
insert += '<li>' + item + '</li>';
}
insert = '<ul id="results">' + insert + '</ul>';
$('#list').html(insert);
},
error: function (xht, status) {
alert('Error: ' + status);
}
});
});
</script>
</div>
So when the page first loads, it just contains an empty div called list; however, loading the page triggers the function passed to jQuery's $(document).ready function, which makes an asynchronous call to the API controller, requesting an array of 20,000 elements. While the call is in progress, "Loading..." is displayed on the screen, and when the call returns, this is replaced by an unordered list containing the received data. This is written in a way intended to be friendly to developers of automated UI tests, or of screen scrapers, because we can tell whether all the data has loaded by testing whether or not the page contains an element with the ID results.
Page2's view looks like this:
@{
ViewData["Title"] = "Page 2";
}
<div class="text-center">
<div id="list">
<ul id="results" />
</div>
<script src="~/lib/jquery/dist/jquery.min.js"></script>
<script>
var apiUrl = 'https://localhost:44394/api/Data/Create';
var requestCount = 0;
var maxRequests = 20;
$(document).ready(function () {
getData();
});
function getDataIfAtBottomOfPage() {
console.log("scroll - " + requestCount + " requests");
if (requestCount < maxRequests) {
console.log("scrollTop " + document.documentElement.scrollTop + " scrollHeight " + document.documentElement.scrollHeight);
if (document.documentElement.scrollTop > (document.documentElement.scrollHeight - window.innerHeight - 100)) {
getData();
}
}
}
function getData() {
window.onscroll = undefined;
requestCount++;
$('#results').append('<li id="loading">Loading...</li>');
$.ajax({
url: apiUrl + '?numberOfElements=50',
datatype: 'json',
success: function (data) {
var insert = ''
for (var item of data) {
insert += '<li>' + item + '</li>';
}
$('#loading').remove();
$('#results').append(insert);
if (requestCount < maxRequests) {
window.setTimeout(function () { window.onscroll = getDataIfAtBottomOfPage }, 1000);
} else {
$('#results').append('<li>That\'s all folks');
}
},
error: function (xht, status) {
alert('Error: ' + status);
}
});
}
</script>
</div>
This gives a nicer user experience because it requests data from the API controller in multiple smaller chunks, so the first chunk of data appears fairly quickly, and once the user has scrolled down to somewhere near the bottom of the page, the next chunk of data is requested, until 20 chunks have been requested and displayed, at which point the text "That's all folks" is added to the end of the unordered list. However this is more difficult to interact with programmatically because you need to scroll the page down to make the new data appear.
(Yes, this implementation is a bit buggy - if the user gets to the bottom of the page too quickly then requesting the next chunk of data doesn't happen until they scroll up a bit. But the question isn't about how to implement this behaviour in a web page, but about how to scrape the displayed data, so please forgive my bugs.)
The scraper
I've implemented the scraper as a xUnit unit test project, just because I'm not doing anything with the data I've scraped from the web site other than Asserting that it is of the correct length, and therefore proving that I haven't prematurely assumed that the web page I'm scraping from is "finished". You can put most of this code (other than the Asserts) into any type of project.
Having created your scraper project, you need to add the Selenium.WebDriver and Selenium.WebDriver.ChromeDriver NuGet packages.
Page Object Model
I'm using the Page Object Model pattern to provide a layer of abstraction between functional interaction with the page and the implementation detail of how to code that interaction. Each of the pages in the web site has a corresponding page model class for interacting with that page.
First, a base class with some code which is common to more than one page model class.
namespace StackOverflow68925623Scraper
{
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;
public class PageModel
{
protected PageModel(IWebDriver driver)
{
this.Driver = driver;
}
protected IWebDriver Driver { get; }
public void ScrollToTop()
{
var js = (IJavaScriptExecutor)this.Driver;
js.ExecuteScript("window.scrollTo(0, 0)");
}
public void ScrollToBottom()
{
var js = (IJavaScriptExecutor)this.Driver;
js.ExecuteScript("window.scrollTo(0, document.body.scrollHeight)");
}
protected IWebElement GetById(string id)
{
try
{
return this.Driver.FindElement(By.Id(id));
}
catch (NoSuchElementException)
{
return null;
}
}
protected IWebElement AwaitGetById(string id)
{
var wait = new WebDriverWait(Driver, TimeSpan.FromSeconds(10));
return wait.Until(e => e.FindElement(By.Id(id)));
}
}
}
This base class gives us 4 convenience methods:
Scroll to the top of the page
Scroll to the bottom of the page
Get the element with the supplied ID, or return null if it doesn't exist
Get the element with the supplied ID, or wait for up to 10 seconds for it to appear if it doesn't exist yet
And each page in the web site has its own model class, derived from that base class.
namespace StackOverflow68925623Scraper
{
using OpenQA.Selenium;
public class Page1Model : PageModel
{
public Page1Model(IWebDriver driver) : base(driver)
{
}
public IWebElement AwaitResults => this.AwaitGetById("results");
public void Navigate()
{
this.Driver.Navigate().GoToUrl("https://localhost:44394/Page1");
}
}
}
namespace StackOverflow68925623Scraper
{
using OpenQA.Selenium;
public class Page2Model : PageModel
{
public Page2Model(IWebDriver driver) : base(driver)
{
}
public IWebElement Results => this.GetById("results");
public void Navigate()
{
this.Driver.Navigate().GoToUrl("https://localhost:44394/Page2");
}
}
}
And the Scraper class:
namespace StackOverflow68925623Scraper
{
using OpenQA.Selenium.Chrome;
using System;
using System.Threading;
using Xunit;
public class Scraper
{
[Fact]
public void TestPage1()
{
// Arrange
var driver = new ChromeDriver();
var page = new Page1Model(driver);
page.Navigate();
try
{
// Act
var actualResults = page.AwaitResults.Text.Split(Environment.NewLine);
// Assert
Assert.Equal(20000, actualResults.Length);
}
finally
{
// Ensure the browser window closes even if things go pear-shaped
driver.Quit();
}
}
[Fact]
public void TestPage2()
{
// Arrange
var driver = new ChromeDriver();
var page = new Page2Model(driver);
page.Navigate();
try
{
// Act
while (!page.Results.Text.Contains("That's all folks"))
{
Thread.Sleep(1000);
page.ScrollToBottom();
page.ScrollToTop();
}
var actualResults = page.Results.Text.Split(Environment.NewLine);
// Assert - we expect 1001 because of the extra "that's all folks"
Assert.Equal(1001, actualResults.Length);
}
finally
{
// Ensure the browser window closes even if things go pear-shaped
driver.Quit();
}
}
}
}
So, what's happening here?
// Arrange
var driver = new ChromeDriver();
var page = new Page1Model(driver);
page.Navigate();
ChromeDriver is in the Selenium.WebDriver.ChromeDriver package and implements the IWebDriver interface from the Selenium.WebDriver package with the code to interact with the Chrome browser. Other packages are available containing implementations for all popular browsers. Instantiating the driver object opens a browser window, and calling its Navigate method directs the browser to the page we want to test/scrape.
// Act
var actualResults = page.AwaitResults.Text.Split(Environment.NewLine);
Because on Page1, the results element doesn't exist until all the data has been displayed, and no user interaction is required in order for it to be displayed, we use the page model's AwaitResults property to just wait for that element to appear and return it once it has appeared.
AwaitResults returns an IWebElement instance representing the element, which in turn has various methods and properties we can use to interact with the element. In this case we use its Text property, which returns the element's contents as a string, without any markup. Because the data is displayed as an unordered list, each element in the list is delimited by a line break, so we can use String's Split method to convert it to a string array.
Page2 needs a different approach - we can't use the presence of the results element to determine whether the data has all been displayed, because that element is on the page right from the start, instead we need to check for the string "That's all folks" which is written right at the end of the last chunk of data. Also the data isn't loaded all in one go, and we need to keep scrolling down in order to trigger the loading of the next chunk of data.
// Act
while (!page.Results.Text.Contains("That's all folks"))
{
Thread.Sleep(1000);
page.ScrollToBottom();
page.ScrollToTop();
}
var actualResults = page.Results.Text.Split(Environment.NewLine);
Because of the bug in the UI that I mentioned earlier, if we get to the bottom of the page too quickly, the fetch of the next chunk of data isn't triggered, and attempting to scroll down when already at the bottom of the page doesn't raise another scroll event. That's why I'm scrolling to the bottom of the page and then back to the top - that way I can guarantee that a scroll event is raised. You never know, the web site you're trying to scrape data from may itself be buggy.
Once the "That's all folks" text has appeared, we can go ahead and get the results element's Text property and convert it to a string array as before.
// Assert - we expect 1001 because of the extra "that's all folks"
Assert.Equal(1001, actualResults.Length);
This is the bit that won't be in your code. Because I'm scraping a web site which is under my control, I know exactly how much data it should be displaying so I can check that I've got all the data, and therefore that my scraping code is working correctly.
Further reading
Absolute beginner's introduction to Selenium: https://www.guru99.com/selenium-csharp-tutorial.html
(A curiosity in that article is the way that it starts by creating a console application project and later changes its output type to class library and manually adds the unit test packages, when the project could have been created using one of Visual Studio's unit test project templates. It gets to the right place in the end, albeit via a rather odd route.)
Selenium documentation: https://www.selenium.dev/documentation/
Happy scraping!
If you need to fully execute the web page, then a complete browser like CefSharp is your only option.
It could be that the page is using some combination of scroll state, element visibility, or element positions to trigger content loading. If that's the case, then you'll need to figure out what it is and trigger it programmatically. I know that CefSharp can simulate user actions like clicking, scrolling, etc.
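For completeness, here is a rough sketch of that approach using the CefSharp.OffScreen package. It is only a starting point, under the assumption that the list is populated shortly after the initial load finishes; the crude Task.Delay would need replacing with something that polls for the element you care about, as discussed above:

using System;
using System.Threading.Tasks;
using CefSharp;           // GetSourceAsync extension method
using CefSharp.OffScreen; // headless ChromiumWebBrowser

public static class RenderedHtmlFetcher
{
    public static async Task<string> GetRenderedHtmlAsync(string url)
    {
        // One-time CEF initialisation; move this to application start-up
        // if you fetch more than one page per process.
        Cef.Initialize(new CefSettings());

        using (var browser = new ChromiumWebBrowser(url))
        {
            // Wait for the initial page load to finish.
            var loaded = new TaskCompletionSource<bool>();
            browser.LoadingStateChanged += (sender, args) =>
            {
                if (!args.IsLoading)
                {
                    loaded.TrySetResult(true);
                }
            };
            await loaded.Task;

            // Crude: give the page's Ajax calls a moment to fill in the list.
            // A real scraper should poll the DOM for the element it needs instead.
            await Task.Delay(2000);

            // Returns the current DOM serialised as HTML - roughly what Chrome's
            // dev tools Elements tab shows, not the raw network response.
            return await browser.GetSourceAsync();
        }
    }
}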

Cannot delete file from server folder

I'm working on a simple portfolio project. I would like to show images on a web page that logged-in users can edit. My problem is in the [HttpPost] Edit action, more specifically this part:
if (ModelState.IsValid)
{
//updating current info
inDb = ModelFactory<ArtSCEn>.GetModel(db, artSCEn.ArtSCEnID);
inDb.LastModified = DateTime.Now;
inDb.TechUsed = artSCEn.TechUsed;
inDb.DateOfCreation = artSCEn.DateOfCreation;
inDb.Description = artSCEn.Description;
inDb.ArtSC.LastModified = DateTime.Now;
//validating img
if (Validator.ValidateImage(img))
{
inDb.ImageString = Image.JsonSerialzeImage(img);
}
else
{
//return to the UI because we NEED a valid pic
return View(artSCEn);
}
db.Entry(inDb).State = System.Data.Entity.EntityState.Modified;
db.SaveChanges();
//[PROBLEMATIC PART STARTS HERE]
//updating the pic on the server
//getting the string info
string userArtImgFolder = Server.MapPath($"~/Content/Images/Artistic/{inDb.ArtSC.PersonID}");
string imgNameOnServer = Path.Combine(
userArtImgFolder,
$"{inDb.ArtSC.PersonID}_{inDb.ArtSC.ArtSCID}_{inDb.ArtSCEnID}{Path.GetExtension(img.FileName)}");
//deleting previous pic
System.IO.File.Delete(imgNameOnServer);
//creating a new pic
Image.ResizePropotionatelyAndSave(img, Path.Combine(
userArtImgFolder,
$"{inDb.ArtSC.PersonID}_{inDb.ArtSC.ArtSCID}_{inDb.ArtSCEnID}{Path.GetExtension(img.FileName)}"));
return RedirectToAction("Edit", "Art", new { id = inDb.ArtSCID });
}
When I get the new picture back and want to delete the previous one, System.IO.File.Delete() always throws an exception saying it cannot access the resource because someone else is holding onto it. Any idea what that might be?
Maybe it's something simple - I'm new to ASP.NET - but I just can't figure it out.
UPDATE
Following the suggestions in the comments section, I checked the processes with a tool called Process Monitor, and it seems that IIS is indeed locking the resource (the same entry appears two more times in the logs, by the way).
Judging by the fact that the operation is CreateFileMapping, I guess it has to do with either Server.MapPath() or Path.Combine(); however, Server comes from Controller, which is an IDisposable, so could that be the thing I should deal with?
Also, the resource I'm trying to delete is an image used on the website, which might be a problem, but that section of the website is not shown during this process.
I found the solution building on the comment from @Diablo.
IIS was indeed holding on to the resource, but Server.MapPath() and the rest of that code had nothing to do with it: the culprit was the Edit view that my page was returning the data to. With the help of this SO answer, it turns out I had been careless with a Bitmap that I used in the view, without a using statement, to get some image stats. I updated the helper function to the following code:
public static float GetImageWidthFromPath(string imgAbsolutPath, int offset)
{
float width = 0;
using (Bitmap b = new Bitmap(imgAbsolutPath))
{
width = b.Width - offset;
}
return width;
}
Now IIS does not hold on to the resource and I can delete the file.

Generate a PDF using Rotativa in ASP.NET with multiple partial views in one view

I am trying to generate a PDF using Rotativa in ASP.NET. My view has multiple partial views rendered on it. I am able to create a PDF from a single partial view, but when I combine all the partial views and try to generate the PDF, it is generated before the data has loaded into it. Please suggest how to hold the process until the data has loaded into the PDF.
Thanks in advance.
public ActionResult Followers()
{
MediaAPIController mac = new MediaAPIController();
JsonResult jR = mac.getUserInfo("", "", "", "201");
MediaLibrary.User u = (MediaLibrary.User)jR.Data;
System.Threading.Tasks.Task.Run(async delegate
{
await Task.Delay(5000);
return 42;
}).Wait();
return new Rotativa.ActionAsPdf("PreviewPdf", u)
{
FileName = "MyDoc.pdf",
PageSize = Rotativa.Options.Size.A4,
PageOrientation = Rotativa.Options.Orientation.Portrait,
PageMargins = { Left = 10, Right = 10 }
};
}
You can use the ViewAsPdf method instead of ActionAsPdf and pass in the model that the partial views are bound to. That way the view is bound before rendering and the problem is solved; a sketch follows below.
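A minimal sketch of that suggestion, reusing the names from the question. It assumes the "PreviewPdf" view and its partials are bound server-side to the MediaLibrary.User model rather than filled in by client-side script:

public ActionResult Followers()
{
    MediaAPIController mac = new MediaAPIController();
    JsonResult jR = mac.getUserInfo("", "", "", "201");
    MediaLibrary.User u = (MediaLibrary.User)jR.Data;

    // ViewAsPdf renders the view (and its partials) with the supplied model
    // before handing the HTML to the PDF converter, so no artificial delay is needed.
    return new Rotativa.ViewAsPdf("PreviewPdf", u)
    {
        FileName = "MyDoc.pdf",
        PageSize = Rotativa.Options.Size.A4,
        PageOrientation = Rotativa.Options.Orientation.Portrait,
        PageMargins = { Left = 10, Right = 10 }
    };
}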

Code works in a console application but not in an ASP.NET application

I want to create reports in my ASP.NET MVC 5 application. I found this code (Hello world example in http://report.sourceforge.net/):
Report report = new Report(new PdfFormatter());
FontDef fd = new FontDef(report, "Helvetica");
FontProp fp = new FontPropMM(fd, 25);
Page page = new Page(report);
page.AddCB_MM(80, new RepString(fp, "Hello World!"));
RT.ViewPDF(report, "HelloWorld.pdf");
When I put this code inside Main in a console application, it creates and opens a PDF file that says "Hello World".
But when I put the same code inside a controller in ASP.NET like this, I get nothing - no PDF opens and no PDF is saved on the server (I browse to mywebsite/Person/Print):
public ActionResult Print(int? id)
{
if (id == null)
{
return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
}
Person person= db.Person.Find(id);
if (person== null)
{
Report report = new Report(new PdfFormatter());
FontDef fd = new FontDef(report, "Helvetica");
FontProp fp = new FontPropMM(fd, 25);
Page page = new Page(report);
page.AddCB_MM(80, new RepString(fp, "Hello World!"));
RT.ViewPDF(report, "HelloWorld.pdf");
}
return View(person);
}
How can I modify my ASP.NET application to let the user get PDFs using this code? (I will also appreciate it if you know any cool, free reporting tools for ASP.NET MVC 5.) Thanks.
Not sure what libraries you are using there, but the general flow of your action should be:
prepare the report object
obtain a byte array representing the content of the PDF file
return a File from your action using the Controller.File method
The result on the client will be a file download prompt. A rough sketch of that flow is below.
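As a sketch only - the SavePdfToStream call below is a placeholder, since I don't know which method your report library exposes for writing the finished report into a stream or byte array - the action could look something like this:

public ActionResult Print(int? id)
{
    if (id == null)
    {
        return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
    }

    Report report = new Report(new PdfFormatter());
    FontDef fd = new FontDef(report, "Helvetica");
    FontProp fp = new FontPropMM(fd, 25);
    Page page = new Page(report);
    page.AddCB_MM(80, new RepString(fp, "Hello World!"));

    byte[] pdfBytes;
    using (var ms = new MemoryStream())
    {
        SavePdfToStream(report, ms); // placeholder: use your library's "write to stream" call
        pdfBytes = ms.ToArray();
    }

    // File() sends the bytes to the browser, which shows a download/open prompt
    // instead of trying to open a viewer on the server.
    return File(pdfBytes, "application/pdf", "HelloWorld.pdf");
}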

C# Downloadable Excel Files from Class Library

I'm looking for some advice. I'm building an additional feature onto a C# project that someone else wrote. The solution consists of an MVC web application and a few class libraries.
What I'm editing is the sales reporting function. In the original build, a summary of the sales reports was generated in the web application. When the user generates the sales report, a Reporting class is called in one of the C# class libraries. I'm trying to make the sales reports downloadable as an Excel file when the user selects a radio button.
Here is a snippet of code from the Reporting class:
public AdminSalesReport GetCompleteAdminSalesReport(AdminSalesReportRequest reportRequest)
{
AdminSalesReport report = new AdminSalesReport();
string dateRange = null;
List<ProductSale> productSales = GetFilteredListOfAdminProductSales(reportRequest, out dateRange);
report.DateRange = dateRange;
if (productSales.Count > 0)
{
report.HasData = true;
report.Total = GetTotalAdminSales(productSales);
if (reportRequest.Type == AdminSalesReportRequest.AdminSalesReportType.Complete)
{
report.ProductSales = GetAdminProductSales(productSales);
report.CustomerSales = GetAdminCustomerSales(productSales);
report.ManufacturerSales = GetAdminManufacturerSales(productSales);
if (reportRequest.Download)
{
FileResult ExcelDownload = GetExcelDownload(productSales);
}
}
}
return report;
}
So as you can see, if reportRequest.Download == true, the class should start up the process of creating the Excel file. All the GetAdminSales functions do is use LINQ queries to sort out the sales when they are displayed on the web page.
So I have added this along with the GetAdminSales functions:
private FileResult GetExcelDownload(List<ProductSale> productSales)
{
    CustomisedSalesReport CustSalesRep = new CustomisedSalesReport();
    Stream SalesReport = CustSalesRep.GenerateCustomisedSalesStream(productSales);
    return new FileStreamResult(SalesReport, "application/ms-excel")
    {
        FileDownloadName = "SalesReport" + DateTime.Now.ToString("MMMM d, yyyy") + ".xls"
    };
}
and to format the Excel sheet I'm using the NPOI library; my formatter class is laid out like so:
public class CustomisedSalesReport
{
public Stream GenerateCustomisedSalesStream(List<ProductSale> productSales)
{
return GenerateCustomisedSalesFile(productSales);
}
private Stream GenerateCustomisedSalesFile(List<ProductSale> productSales)
{
MemoryStream ms = new MemoryStream();
HSSFWorkbook templateWorkbook = new HSSFWorkbook();
HSSFSheet sheet = templateWorkbook.CreateSheet("Sales Report");
HSSFRow dataRow = sheet.CreateRow(0);
HSSFCell cell = dataRow.CreateCell(0);
cell = dataRow.CreateCell(0);
cell.SetCellValue(DateTime.Now.ToString("MMMM yyyy") + " Sales Report");
dataRow = sheet.CreateRow(2);
string[] colHeaders = new string[] {
"Product Code",
"Product Name",
"Qty Sold",
"Earnings",
};
int colPosition = 0;
foreach (string colHeader in colHeaders)
{
cell = dataRow.CreateCell(colPosition++);
cell.SetCellValue(colHeader);
}
int row = 4;
var adminTotalSales = GetAdminProductSales(productSales);
foreach (SummaryAdminProductSale t in adminTotalSales)
{
dataRow = sheet.CreateRow(row++);
colPosition = 0;
cell = dataRow.CreateCell(colPosition++);
cell.SetCellValue(t.ProductCode);
cell = dataRow.CreateCell(colPosition++);
cell.SetCellValue(t.ProductName);
cell = dataRow.CreateCell(colPosition++);
cell.SetCellValue(t.QtySold);
cell = dataRow.CreateCell(colPosition++);
cell.SetCellValue(t.Total.ToString("0.00"));
}
templateWorkbook.Write(ms);
ms.Position = 0;
return ms;
}
}
Again, like before, the GetAdminSales functions (GetAdminProductSales, etc.) are contained at the bottom of the class and are just LINQ queries to gather the data.
So when I run this, I don't get any obvious errors. The summary sales report appears on screen as normal, but no Excel document downloads. One thing I have done, which may be throwing this off, is that in my class library I have referenced the System.Web.Mvc dll in order to download the file (I have not done it any other way before, and after reading up on the net I got the impression I could use it in a class library).
When I debug through the code to get a closer picture of what's going on, everything seems to be working OK and all the right data is being captured, but I found that from the very start, the MemoryStream ms = new MemoryStream() declaration line in my formatter class shows up this (very hidden, mind you):
ReadTimeout: '((System.IO.Stream)(ms)).ReadTimeout' threw an exception of type 'System.InvalidOperationException': "Timeouts are not supported on this stream."
I get the same for 'WriteTimeout'...
Apologies for the long-windedness of the explanation. I'd appreciate it if anyone could point me in the right direction, either to solve my current issue or to suggest an alternative way of making this work.
Without getting bogged down in the details, the obvious error is that in GenerateCustomisedSalesFile you create a MemoryStream ms, do nothing with it, then return it.
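(For what it's worth, the ReadTimeout/WriteTimeout entries in the debugger are just MemoryStream reporting that it doesn't support timeouts; they are not errors in themselves.) A minimal sketch of the point being made - make sure the workbook is actually written into the stream, and the stream rewound, inside the method before it is returned:

private Stream GenerateCustomisedSalesFile(List<ProductSale> productSales)
{
    MemoryStream ms = new MemoryStream();
    HSSFWorkbook templateWorkbook = new HSSFWorkbook();
    HSSFSheet sheet = templateWorkbook.CreateSheet("Sales Report");

    // ... build the header row and data rows exactly as in the question ...

    templateWorkbook.Write(ms); // without this call, the stream handed to FileStreamResult is empty
    ms.Position = 0;            // rewind so the response is read from the first byte
    return ms;
}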
