I am trying to convert an MVC 4 view to a PDF. I have no idea where to start; after searching Google I found iTextSharp and have been playing around with it.
My view is fairly simple: it has a map and a table. I would like to just call an action in the controller and have it print my web page.
Any advice would be greatly appreciated.
You can use Rotativa
public ActionResult TestViewWithModel(string id)
{
    var model = new TestViewModel { DocTitle = id, DocContent = "This is a test" };

    // Renders the default view for this action as a PDF.
    return new ViewAsPdf(model);
}

public ActionResult PrintIndex()
{
    // Runs the "Index" action and returns its output as a PDF download.
    return new ActionAsPdf("Index", new { name = "Giorgio" }) { FileName = "Test.pdf" };
}
It uses wkhtmltopdf under the hood.
wkhtmltopdf and wkhtmltoimage are open source (LGPLv3) command line tools to render HTML into PDF and various image formats using the Qt WebKit rendering engine. These run entirely "headless" and do not require a display or display service.
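For the scenario in the question (returning the existing map-and-table view as a PDF), a minimal sketch could look like the following; the controller name, the "MapReport" view name, and the BuildMapReportModel helper are assumptions, not part of the original post.

using Rotativa;
using System.Web.Mvc;

public class ReportController : Controller
{
    // Renders the existing "MapReport" view as a downloadable PDF.
    public ActionResult MapReportPdf()
    {
        var model = BuildMapReportModel(); // hypothetical: build whatever model the view needs
        return new ViewAsPdf("MapReport", model)
        {
            FileName = "MapReport.pdf"
        };
    }
}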
How can I convert a dynamic link, which is an HTML web page, into an image format? Remember the link is dynamic and its HTML content is available as a string. I have tried a lot of approaches, such as reading the HTML content, converting it to Base64 and back again.
var htmlToImageConv = new HtmlToImageConverter();
byte[] jpegBytes = htmlToImageConv.GenerateImage(html, ImageFormat.Jpeg);

System.Drawing.Image image;
using (System.IO.MemoryStream ms = new System.IO.MemoryStream(jpegBytes))
{
    image = System.Drawing.Image.FromStream(ms);
    string path = Server.MapPath("~/images/");
}
I have tried this C# code for converting the HTML web page to an image.
You can use a headless browser to render the HTML and then take a snapshot.
Have a look at PuppeteerSharp: https://github.com/kblok/puppeteer-sharp
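A minimal sketch of that approach with PuppeteerSharp is shown below; the URL and output file name are placeholders.

using System.Threading.Tasks;
using PuppeteerSharp;

class Program
{
    static async Task Main()
    {
        // Download a compatible Chromium build on first run.
        await new BrowserFetcher().DownloadAsync();

        var browser = await Puppeteer.LaunchAsync(new LaunchOptions { Headless = true });
        var page = await browser.NewPageAsync();

        await page.GoToAsync("https://example.com");  // placeholder URL
        await page.ScreenshotAsync("page.png");       // saves a PNG snapshot of the rendered page

        await browser.CloseAsync();
    }
}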
You could use Selenium to render the page and save a screenshot as a png image.
Add the following packages to your project:
Selenium.WebDriver
Selenium.Chrome.WebDriver
Use the following code to save a screenshot:
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            var driver = new ChromeDriver();
            driver.Navigate().GoToUrl("http://google.com");

            // Capture the rendered page and save it as a PNG file.
            Screenshot ss = ((ITakesScreenshot)driver).GetScreenshot();
            ss.SaveAsFile("screenshot.png");

            driver.Quit(); // close the browser and end the ChromeDriver process
        }
    }
}
What you need is a conversion from an HTML string to an image, which is already discussed in the answers to this question.
My application needs to read data from an Excel file and store it in a MongoDB database. I am using .NET and C# for development, with Excel 2007, MongoDB 3.2, and Visual Studio 2015.
Any idea how to access the Excel file? I need your help, please.
This is my code
public void Open_readXLS()
{
    Excel.Workbook workbook;
    Excel.Worksheet worksheet;
    Optioncontext ctx = new Optioncontext();

    string filePath = @"C:\Users\user PC\Desktop\ finale\Euro_Dollar_Call_Options.xlsx";
    workbook = new Excel.Workbook(filePath);
    worksheet = workbook.Sheets.GetByName("Feuil1");

    for (ushort i = 0; i <= worksheet.Rows.LastRow; i++)
    {
        option.type_option = worksheet.Rows[i].Cells[0].Value.ToString();
        option.type_currency = worksheet.Rows[i].Cells[1].Value.ToString();
    }

    ctx.Option.InsertOne(option);
}
There are many ways of achieving this. The simplest would be to save your Excel sheet as a CSV file for further processing; you can do this in Excel by selecting "Save As" in the "File" menu and then changing the file type to CSV. Once you have done this you can use mongoimport to import its contents; no need for C# code in this scenario. You may have to adjust the contents of your CSV so that it fits the structure expected by mongoimport; here is an SO post about just that: How to use mongoimport to import csv.
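If you prefer to keep the import inside your C# application instead of using mongoimport, a minimal sketch with the official MongoDB.Driver package could look like the one below. The connection string, database and collection names, and the CSV path are assumptions, and the CSV is assumed to be simple enough for a plain Split (no quoted fields).

using System.IO;
using System.Linq;
using MongoDB.Bson;
using MongoDB.Driver;

class CsvToMongo
{
    static void Main()
    {
        var client = new MongoClient("mongodb://localhost:27017");       // assumed local server
        var collection = client.GetDatabase("optionsDb")                  // assumed database name
                               .GetCollection<BsonDocument>("options");   // assumed collection name

        var lines = File.ReadLines(@"C:\temp\Euro_Dollar_Call_Options.csv").ToList();
        var headers = lines[0].Split(',');

        // Turn each data row into a document keyed by the header row.
        foreach (var line in lines.Skip(1))
        {
            var values = line.Split(',');
            var doc = new BsonDocument();
            for (int i = 0; i < headers.Length; i++)
            {
                doc[headers[i]] = values[i];
            }
            collection.InsertOne(doc);
        }
    }
}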
I'd like to download the daily photo from this site, but I can't use the JPEG's URL because it changes every day.
Is there any way to download an object from a site using the page URL and an XPath expression? I tried to find a suitable method in WebClient but with no luck.
An example of my comment, using the HTML Agility Pack:
WebClient client = new WebClient();
string resource = client.DownloadString("http://photography.nationalgeographic.com/photography/photo-of-the-day/");

HtmlAgilityPack.HtmlDocument html = new HtmlAgilityPack.HtmlDocument();
html.LoadHtml(resource);

// Find the div that holds the photo, then the <img> inside it.
var imgDiv = html.DocumentNode.SelectSingleNode("//*[contains(@class,'primary_photo')]");
var imgSrc = imgDiv.SelectSingleNode(".//img");
string relativePath = imgSrc.GetAttributeValue("src", "");
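To actually save the photo you still need to resolve the (possibly relative) src against the page URL and download it; a small sketch, where the local file name is just an example:

// Resolve the src value (which may be relative) against the page URL.
var baseUri = new Uri("http://photography.nationalgeographic.com/photography/photo-of-the-day/");
var imageUri = new Uri(baseUri, relativePath);

// Download the photo to a local file.
client.DownloadFile(imageUri, "photo-of-the-day.jpg");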
I am trying to extract the source page of a given URL with a C# application. Right now I am using the HttpWebRequest class.
The strange thing is that the result page obtained from this class is completely different from the page obtained with the Google Chrome browser (Ctrl + U).
Can somebody please tell me how to get the exact source page? Or is it wrong to expect that both pages are equal?
Many thanks
Using a WebBrowser control is fairly trivial, and the HTML is retrieved after the browser has fully processed the page. The code requires you to handle the DocumentCompleted event (or call another method from it):
WebBrowser wb = new WebBrowser();

private void button1_Click(object sender, EventArgs e)
{
    wb.ScriptErrorsSuppressed = true;    // hide script error dialogs
    wb.DocumentCompleted += pageLoaded;  // fires when the page has finished loading
    wb.Navigate("http://kissanime.com/Anime/One-Piece");
}

private void pageLoaded(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    string src = wb.DocumentText;        // the HTML as the browser sees it
}
By using that method you will get the HTML straight out of a web browser, though it can take time to load depending on the size of the page, its images, and its dependencies (external files like JS, CSS, and pictures/videos).
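As a side note, if the difference comes from the server returning different markup to non-browser clients (rather than from JavaScript-generated content), sending a browser-like User-Agent with HttpWebRequest may already be enough. A minimal sketch, with a placeholder User-Agent string:

using System.IO;
using System.Net;

static string GetPageSource(string url)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    // Some servers serve different markup depending on the User-Agent header.
    request.UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)";

    using (var response = (HttpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}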
I have the following situation:
A document view where the user can upload multiple files: a one (document) to many (files) relationship. All these files belong to the document through its IDDocument property.
The user will upload lots of .xml files, and each file upload fires this action in my controller:
[HttpPost]
public ActionResult ProcessSubmitUpload(HttpPostedFileBase attachments, Guid? idDocument)
{
    //Validations
    var xmlDocument = XDocument.Load(attachments.InputStream);
    if (xmlDocument.Root.Name.LocalName == "cteProc")
    {
        if (DocumentCommonHelper.SendXmlViaWebService(xmlDocument))
        {
            _documentRepository.UpdateDocumentStatus(StatusOption.DocumentApproved);
        }
        else
        {
            _documentRepository.UpdateDocumentStatus(StatusOption.DocumentPending);
        }
    }

    return new EmptyResult(); // added so the method compiles; return whatever your upload client expects
}
The logic is: if all files pass DocumentCommonHelper.SendXmlViaWebService(xmlDocument), the document status must be Approved; but if a single file fails, the document status must be Pending.
The problem is that this approach is wrong, because it changes the status of the document each time the action is executed, forgetting about the other HttpPostedFileBase instances passed before.
What is the best way to do that?
Try storing the HttpPostedFileBase objects in the Session and retrieving them when you need them:
Session["HttpPostedFileBase"] = attachments;
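A minimal sketch of that idea, assuming all uploads for one document run in the same session; the session key and the accumulated flag are illustrative, not part of the original code.

[HttpPost]
public ActionResult ProcessSubmitUpload(HttpPostedFileBase attachments, Guid? idDocument)
{
    var xmlDocument = XDocument.Load(attachments.InputStream);
    bool sentOk = xmlDocument.Root.Name.LocalName == "cteProc"
                  && DocumentCommonHelper.SendXmlViaWebService(xmlDocument);

    // Remember whether any file for this document has failed so far.
    string key = "UploadFailed_" + idDocument;
    bool anyFailure = ((Session[key] as bool?) ?? false) || !sentOk;
    Session[key] = anyFailure;

    // Decide the status from the accumulated result, not from the current file alone.
    _documentRepository.UpdateDocumentStatus(
        anyFailure ? StatusOption.DocumentPending : StatusOption.DocumentApproved);

    return new EmptyResult();
}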