I am developing an Umbraco intranet site where I call wkhtmltopdf.exe to create some PDFs. Those PDFs use a main page for the content and two additional pages for the header and footer. Things worked fine as long as the site had no authentication. We want to use our Active Directory accounts to log in to the site, so I have enabled Windows Authentication.
The routine is to click a button that processes the page and either shows the PDF in the browser or downloads it; either way it is the same process. When running with debugging in Visual Studio, as soon as execution reaches the first part of the code (var p = ...) it throws the exception "No process is associated with this object.", because wkhtmltopdf fails to authenticate. I can see that when I pause the code just after that line and inspect it in Visual Studio. The method runs to the end, but because of the error I mentioned it produces a blank PDF. If I hardcode a username and password, it works fine.
The site is running in my local dev environment in IIS Express. Since Windows Authentication is enabled, I have to log in the first time I browse to the site. wkhtmltopdf.exe is located on the local drive; it is not part of the website. The initial setup is based on the method described here: http://icanmakethiswork.blogspot.se/2012/04/making-pdfs-from-html-in-c-using.html. Only users that are part of our Active Directory domain will have access to the website, but since we use the same accounts to log in to Windows, Windows Authentication will do the trick :)
public static void HtmlToPdf(string outputFilename, string[] urls,
                             string[] options = null,
                             bool streamPdf = false,
                             string pdfHtmlToPdfExePath = "C:\\Program Files (x86)\\wkhtmltopdf\\bin\\wkhtmltopdf.exe")
{
    string urlsSeparatedBySpaces = string.Empty;
    try
    {
        // Determine inputs
        if ((urls == null) || (urls.Length == 0))
        {
            throw new Exception("No input URLs provided for HtmlToPdf");
        }
        urlsSeparatedBySpaces = String.Join(" ", urls); // Concatenate URLs
        var p = new System.Diagnostics.Process()
        {
            StartInfo =
            {
                FileName = pdfHtmlToPdfExePath,
                Arguments = ((options == null) ? "" : String.Join(" ", options)) + " " + urlsSeparatedBySpaces + " -",
                UseShellExecute = false, // needs to be false in order to redirect output
                RedirectStandardOutput = true,
                RedirectStandardError = true,
                RedirectStandardInput = true, // redirect all 3, as it should be all 3 or none
                WorkingDirectory = string.Empty
            }
        };
        p.Start();
        var output = p.StandardOutput.ReadToEnd();
        byte[] buffer = p.StandardOutput.CurrentEncoding.GetBytes(output);
        p.WaitForExit(60000);
        p.Close();
        HttpContext.Current.Response.Clear();
        HttpContext.Current.Response.ContentType = "application/pdf";
        if (!streamPdf)
        {
            HttpContext.Current.Response.AppendHeader("Content-Disposition", "attachment; filename='" + outputFilename + "'");
        }
        HttpContext.Current.Response.BinaryWrite(buffer);
        HttpContext.Current.Response.End();
    }
    catch (Exception exc)
    {
        throw new Exception("Problem generating PDF from HTML, URLs: " + urlsSeparatedBySpaces + ", outputFilename: " + outputFilename, exc);
    }
}
I tested this with LoadUserProfile = true, but that didn't help either. After reading through various forum posts, the only suggested solution I found was to force the process to log in by setting UserName, Password, etc. But that is bad, since the user is already logged in and we could/should use something like CredentialCache.DefaultCredentials.
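For reference, this is roughly what that forced-login approach looks like (the domain, account and password here are placeholders, and the other values are the same ones built in the method above):

// Sketch of starting wkhtmltopdf under an explicit account, the approach I want to avoid.
// MYDOMAIN / pdfServiceUser / "secret" are placeholders, not real values.
var password = new System.Security.SecureString();
foreach (char c in "secret") password.AppendChar(c);

var startInfo = new System.Diagnostics.ProcessStartInfo
{
    FileName = pdfHtmlToPdfExePath,
    Arguments = urlsSeparatedBySpaces + " -",   // same switches/URLs as in the method above
    UseShellExecute = false,                    // must be false when supplying credentials
    RedirectStandardOutput = true,
    LoadUserProfile = true,
    Domain = "MYDOMAIN",
    UserName = "pdfServiceUser",
    Password = password                          // ProcessStartInfo.Password is a SecureString
};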
A workaround I came to was to use DefaultCredentials in requests to save the HTML pages locally, where I can access them without a problem and create the PDFs. But even that is a painstaking process, since I need to create printable CSS and JavaScript, download them, etc. This is my last-resort solution, which I have implemented to about 80%, but it seems nasty as well. Here is another code sample showing how I grab the web pages:
var request = (HttpWebRequest)WebRequest.Create(url);
request.Credentials = CredentialCache.DefaultCredentials;
var response = (HttpWebResponse)request.GetResponse();
var stream = response.GetResponseStream();
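The snippet stops at the response stream; a minimal continuation of that workaround (saving the authenticated page locally so wkhtmltopdf can read it without authenticating; the local path is just an illustrative choice) could look like this:

// Write the authenticated response to a local HTML file.
// "C:\\Temp\\page.html" is an illustrative path, not from the original setup.
using (var reader = new System.IO.StreamReader(stream))
using (var writer = new System.IO.StreamWriter("C:\\Temp\\page.html"))
{
    writer.Write(reader.ReadToEnd());
}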
So to sum up: wkhtmltopdf fails to authenticate itself, so it cannot grab the desired pages and turn them into a PDF. Is there any neat way to make the process authenticate itself with the credentials of the user who is currently logged in to the site, so that it can access the pages?
I use Rotativa, a wrapper for wkhtmltopdf.
To get it working on IIS, I created a separate user account with enough access to run wkhtmltopdf.exe, then pass the user name and password to wkhtmltopdf with the switches.
public virtual ActionResult PrintInvoice(int id) {
    var invoice = db.Invoices.Single(i => i.InvoiceId == id);
    var viewmodel = Mapper.Map<InvoiceViewModel>(invoice);
    var reportName = string.Format(@"Invoice {0:I-000000}.pdf", invoice.InvoiceNo);
    var switches = String.Format(@" --print-media-type --username {0} --password {1} ",
        ConfigurationManager.AppSettings["PdfUserName"],
        ConfigurationManager.AppSettings["PdfPassword"]);
    ViewBag.isPDF = true;
    return new ViewAsPdf("InvoiceDetails", viewmodel) {
        FileName = reportName,
        PageOrientation = Rotativa.Options.Orientation.Portrait,
        PageSize = Rotativa.Options.Size.A4,
        CustomSwitches = switches
    };
}
It appears the pages rendered within wkhtmltopdf.exe run with the current user's credentials, but wkhtmltopdf.exe itself needs rights to execute on the server.
This works on IIS when deployed. On Cassini in VS2012 it works for me with no credentials, but in VS2013 on IIS Express I'm still having trouble when it comes to picking up resources like CSS and images.
Same solution to run over SSL:
Rotativa and wkhtmltopdf no CSS or images on iis6 over HTTPS, but fine on HTTP
Turn on ASP.NET Impersonation and spawn wkhtmltopdf under the context of the impersonated user.
Note: turning on ASP.NET impersonation is very likely to decrease performance.
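ASP.NET Impersonation can be switched on per site in IIS (Authentication > ASP.NET Impersonation) or with <identity impersonate="true" /> in web.config. As a rough sketch only, and assuming the HtmlToPdf helper from the question is in scope, wrapping the wkhtmltopdf call in the impersonated user's context might look like the code below. Whether the spawned process actually picks up the impersonated token can depend on how it is launched, so treat this as an illustration of the idea rather than a guaranteed fix.

// Sketch: run the PDF call while impersonating the authenticated Windows user.
// Assumes Windows Authentication is enabled, so User.Identity is a WindowsIdentity;
// the file name and URL are placeholders.
var winIdentity = (System.Security.Principal.WindowsIdentity)System.Web.HttpContext.Current.User.Identity;
using (var impersonation = winIdentity.Impersonate())
{
    HtmlToPdf("report.pdf", new[] { "http://localhost/page-to-print" });
}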
Related
I am using Rotativa to generate a PDF from a view. It works on my localhost, but when I push to my server it does not work at all. The server has Windows Authentication and Impersonation enabled, which I need for this site.
This is the error I get when I try to run the code on the server:
Qt: Could not initialize OLE (error 80070005)
Error: Failed loading page https://api.mydomain.com/Reports/RedBluePDF?community=CommunityName&procedure=GetTasks (sometimes it will work just to ignore this error with --load-error-handling ignore)
Exit with code 1 due to http error: 1003
Here is my code:
public byte[] getReportsPDF(string community, string procedure)
{
    byte[] pdfBytes = new byte[] { };
    RouteData route = new RouteData();
    route.Values.Add("controller", "SiteSuperReports");
    route.Values.Add("action", "RedBluePDF");
    this.ControllerContext = new ControllerContext(new HttpContextWrapper(System.Web.HttpContext.Current), route, this);
    if (procedure == "GetProductionTasks")
    {
        var actionPDF = new Rotativa.ActionAsPdf("RedBluePDF", new { community = community, procedure = procedure })
        {
            PageSize = Size.A4,
            PageOrientation = Rotativa.Options.Orientation.Landscape,
            PageMargins = { Left = 1, Right = 1 }
        };
        try
        {
            pdfBytes = actionPDF.BuildFile(ControllerContext);
        }
        catch (Exception e)
        {
            Console.Write(e.Message.ToString());
        }
    }
    return pdfBytes;
}
And here is the RedBluePDF method, which just returns a view:
public ActionResult RedBluePDF(string community, string procedure) {
return View();
}
What am I doing wrong, and why does this work on my localhost but not on my server? How do I get it working on the server?
Try one of these solutions:
1- Go to IIS > Site > Authentication, click on "ASP.NET Impersonation" and DISABLE it.
2- If you're calling a script or a file or whatever, specify the protocol explicitly, changing:
src="//api.mydomain.com/?????
to:
src="http://api.mydomain.com/?????
3- In your application pool's configuration, under Process Model, there's an option "Load User Profile". It is False by default; set it to True.
We have a SOAP-based web service and we are able to read its WSDL when we type the URL into a browser. We sit behind a proxy in our network, but it's not blocking anything and we are always able to read the WSDL using the browser. However, when we enter the URL in the browser, say http://ist.services/CoreServices/coreservices?wsdl, it asks for a username and password, which is not the same as my Windows credentials. When I enter the username and password shared by the dev team, it returns the WSDL page. Please note that this web service is developed and deployed on a Java-based server.
How do I do the same in C#.NET code, and how do I pass the security credentials to DiscoveryClientProtocol? I tried the code below, which works for web services that don't ask for security credentials.
// Specify the URL to discover.
string sourceUrl = "http://ist.services/CoreServices/coreservices?wsdl";
string outputDirectory = "C:\\Temp";
DiscoveryClientProtocol client = new DiscoveryClientProtocol();
var credentials = new NetworkCredential("sunuser1", "xxxxxxx", "");
WebProxy proxy = new WebProxy("http://proxy.bingo:8000/", true) { Credentials = credentials };
client.Credentials = credentials;
// Use default credentials to access the URL being discovered.
//client.Credentials = credentials;//CredentialCache.DefaultCredentials;
client.Proxy = proxy;
String DiscoverMode = "DiscoverAny";
String ResolveMode = "ResolveAll";
try
{
    DiscoveryDocument doc;
    // Check whether the user wanted to read in existing discovery results.
    if (DiscoverMode == "ReadAll")
    {
        DiscoveryClientResultCollection results = client.ReadAll(Path.Combine("C:\\Temp", "results.discomap"));
        //SaveMode.Value = "NoSave";
    }
    else
    {
        // Check whether the user wants the capability to discover any kind of discoverable document.
        if (DiscoverMode == "DiscoverAny")
        {
            doc = client.DiscoverAny(sourceUrl);
        }
        else
        {
            // Discover only discovery documents, which might contain references to other types of discoverable documents.
            doc = client.Discover(sourceUrl);
        }
        // Check whether the user wants to resolve all possible references from the supplied URL.
        if (ResolveMode == "ResolveAll")
            client.ResolveAll();
        else
        {
            // Check whether the user wants to resolve references nested more than one level deep.
            if (ResolveMode == "ResolveOneLevel")
                client.ResolveOneLevel();
            else
                Console.WriteLine("empty");
        }
    }
}
catch (Exception e2)
{
    //DiscoveryResultsGrid.Columns.Clear();
    //Status.Text = e2.Message;
    Console.WriteLine(e2.Message);
}
// If documents were discovered, display the results in a data grid.
if (client.Documents.Count > 0)
    Console.WriteLine(client);
Since the code didn't help me much, I opened Fiddler to trace the HTTP calls while reading the WSDL manually in the browser, and I can see that it sends the credentials I entered as "Authorization: Basic cGDFDdsfdfsdsfdsgsgfg=". In Fiddler I see three calls, with responses 401, 302 and 200. But in my C#.NET code I never get the 200 response; it always throws a 404 error.
I debugged this further, and in the HTTP response of the client object I see the flag status INVOCATION_FLAGS_INITIALIZED | INVOCATION_FLAGS_NEED_SECURITY.
So it looks like I need to pass the credentials as security credentials rather than plain network credentials.
The code below fixed the issue:
// networkCredential is the NetworkCredential built above and discoveryClientProtocol
// is the DiscoveryClientProtocol instance; the cache ties the Basic credentials to the
// WSDL URL, and AllowAutoRedirect lets the 302 seen in Fiddler be followed.
CredentialCache myCredentialCache = new CredentialCache
{
    { new Uri(sourceUrl), "Basic", networkCredential }
};
discoveryClientProtocol.AllowAutoRedirect = true;
discoveryClientProtocol.Credentials = myCredentialCache;
We have a search API with a URL of the form webURL + "/_api/search/query?querytext='" + query + "'", whose response is required to generate the contents of a page. This URL is accessible directly via AJAX, but only by admin users. In order for regular users to use the search feature, the workaround was to have a call to a web method which in turn calls the API's URL using elevated privileges.
We have the code below, but we're unsure how to open the API's URL using the elevated privileges and return the response.
public string GetSearchListItems(string query)
{
    var superUser = SPContext.Current.Web.AllUsers[@"SHAREPOINT\SYSTEM"];
    var superToken = superUser.UserToken;
    var webURL = SPContext.Current.Web.Url;
    using (var site = new SPSite(webURL, superToken))
    {
        string searchURL = webURL + "/_api/search/query?querytext='" + query + "'" + "&rowlimit=4&sortlist='ViewsLifeTime:descending'";
        using (var elevatedWeb = site.OpenWeb())
        {
            // code to open searchURL using elevated privileges and return the response of searchURL
        }
    }
}
We tried the following in place of the comment above:
WebClient client = new WebClient();
string response = client.DownloadString(searchURL);
return response;
... but this results in a 401 (Unauthorized) exception.
How do we integrate the elevated privileges (the SharePoint\System account) into opening the search API's URL?
Have you tried wrapping the code you commented in the RunWithElevatedPrivileges() delegate?
e.g.
// The elevated delegate returns void, so capture the result in an outer
// variable and return it after the call.
string response = null;
SPSecurity.RunWithElevatedPrivileges(delegate()
{
    WebClient client = new WebClient();
    response = client.DownloadString(searchURL);
});
return response;
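One thing to watch (an assumption on my part, not something stated above): even inside RunWithElevatedPrivileges, WebClient only sends Windows credentials when told to, so it may also be necessary to let it use the elevated identity's default credentials:

// Assumption: the search endpoint accepts the elevated identity's Windows
// credentials; UseDefaultCredentials makes WebClient send them.
WebClient client = new WebClient { UseDefaultCredentials = true };
string response = client.DownloadString(searchURL);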
I am creating an ASP.NET C# app that uploads videos to a YouTube Channel.
I have already read through (to the best of my ability) the documentation at
The YouTube API Documentation
I have been able to successfully implement two examples of uploading a video to the YouTube channel using the sample code provided.
For example, using the direct method (only the important code attached):
<!-- eja: import the google libraries -->
<%@ Import Namespace="Google.GData.Client" %>
<%@ Import Namespace="Google.GData.Extensions" %>
<%@ Import Namespace="Google.GData.YouTube" %>
<%@ Import Namespace="System.Net" %>
<!-- some more code -->
<%
    // specify where to go to once authenticated
    Uri targetUri = new Uri(Request.Url, "VideoManager.aspx");
    // hide the link to authenticate for now.
    GotoAuthSubLink.Visible = false;
    // GotoAuthSubLink.Visible = true;
    // look for a session var storing the auth token. if it's not empty
    if (Session["token"] != null)
    {
        // go to the VideoManager link
        Response.Redirect(targetUri.ToString());
    }
    else if (Request.QueryString["token"] != null)
    {
        // if we have the auth key in the URL, grab it from there instead
        String token = Request.QueryString["token"];
        // call AuthSubUtil's exchangeForSessionToken method, which returns the
        // session token, convert it to a string and store it in the session
        Session["token"] = AuthSubUtil.exchangeForSessionToken(token, null).ToString();
        Response.Redirect(targetUri.ToString(), true);
    }
    else
    {
        // no auth token, display the link and create the token by loading the google
        // auth page
        String authLink = AuthSubUtil.getRequestUrl(Request.Url.ToString(), "http://gdata.youtube.com", false, true);
        GotoAuthSubLink.Text = "Login to your Google Account";
        GotoAuthSubLink.Visible = true;
        GotoAuthSubLink.NavigateUrl = AuthSubUtil.getRequestUrl(Request.Url.ToString(), "http://gdata.youtube.com", false, true);
    }
%>
<asp:HyperLink ID="GotoAuthSubLink" runat="server"/>
That's page one... it loads the Google authentication screen (see the link to the attached image; it's safe, I just set up a new account here on Stack Overflow and can't upload images yet).
Then it leads to a page with the upload mechanism. The uploading works, I am not worried about that, but here is the snippet of code FYI:
// create an instance of the YouTubeService class, passing the application name and my DEV KEY
YouTubeService service = new YouTubeService(authFactory.ApplicationName, **API_KEY**);
// retrieve the current session token as a string, if any
authFactory.Token = HttpContext.Current.Session["token"] as string;
// incorporate the information into our service
service.RequestFactory = authFactory;
try
{
    // a YouTubeEntry object is a single entity within a videoFeed object. It generally contains info
    // about the video. When uploading, we will assign the values that we received to the feed.
    YouTubeEntry entry = new YouTubeEntry();
    // aggregate all the initial descriptor information
    entry.Media = new Google.GData.YouTube.MediaGroup();
    entry.Media.Description = new MediaDescription(this.Description.Text);
    entry.Media.Title = new MediaTitle(this.Title.Text);
    entry.Media.Keywords = new MediaKeywords(this.Keyword.Text);
    // process entry.Media.Categories to assign the category
    MediaCategory category = new MediaCategory(this.Category.SelectedValue);
    category.Attributes["scheme"] = YouTubeService.DefaultCategory;
    entry.Media.Categories.Add(category);
    // prepare the token used for uploading
    FormUploadToken token = service.FormUpload(entry);
    HttpContext.Current.Session["form_upload_url"] = token.Url;
    HttpContext.Current.Session["form_upload_token"] = token.Token;
    // construct the URL
    string page = "http://" + Request.ServerVariables["SERVER_NAME"];
    if (Request.ServerVariables["SERVER_PORT"] != "80")
    {
        page += ":" + Request.ServerVariables["SERVER_PORT"];
    }
    page += Request.ServerVariables["URL"];
    HttpContext.Current.Session["form_upload_redirect"] = page;
    Response.Redirect("UploadVideo.aspx");
The page UploadVideo.aspx is the actual upload form, and it works, so I am not concerned about that.
The alternate method is not the recommended one, as it's synchronous in nature, but it DOES avoid that login screen, since it allows us to pass credentials to authenticate (it works as a web app). Again, the principal code is attached below.
<%
    GAuthSubRequestFactory authFactory = new GAuthSubRequestFactory(ServiceNames.YouTube, "TesterApp");
    // Response.Write("serviceNames.youtube=" + ServiceNames.YouTube + "<br />");
    YouTubeRequestSettings s = new YouTubeRequestSettings(authFactory.ApplicationName, **your API KEY**, **Your email account as a username**, **your password**);
    YouTubeRequest request = new YouTubeRequest(s);
    Video newVideo = new Video();
    newVideo.Title = "test at 4:40";
    newVideo.Tags.Add(new MediaCategory("Games", YouTubeNameTable.CategorySchema));
    newVideo.Keywords = "cars, funny";
    newVideo.Description = "My description";
    newVideo.YouTubeEntry.Private = false;
    // newVideo.Tags.Add(new MediaCategory("mydevtag, anotherdevtag", YouTubeNameTable.DeveloperTagSchema));
    // newVideo.YouTubeEntry.Location = new GeoRssWhere(37, -122);
    // alternatively, you could just specify a descriptive string
    newVideo.YouTubeEntry.setYouTubeExtension("location", "Somewhere,Someplace");
    newVideo.YouTubeEntry.MediaSource = new MediaFileSource("C:\\IMG_1672.MOV", "video/quicktime");
    Video createdVideo = request.Upload(newVideo);
    Response.Write("This will print out once the file is uploaded...indicates that the code is <i>synchronous</i>. The cursor spins around until done. go get a coffee then check the YouTube Channel");
%>
So basically the question I am asking is: is there a method that will upload a video to a YouTube channel in ASP.NET C# code (a) for a web application, (b) that I can pass credentials through in code, (c) bypassing the Google authentication screen seen above, and (d) without using OAuth, OpenID, a cert, etc.?
The app is only for a short campaign (November only), so I am happy to use the deprecated AuthSubUtil and a dev key and do not need to worry about OAuth 2.x or OpenID (since AuthSubUtil will be deprecated in 2015 anyway).
Any Help is appreciated.
thanks
Edward
You would be best placed to use ClientLogin authentication, where you can store a user's username and password for their account, and then use DirectUpload.
Direct Upload: https://developers.google.com/youtube/2.0/developers_guide_dotnet#Direct_Upload
ClientLogin: https://developers.google.com/youtube/2.0/developers_guide_protocol_clientlogin#ClientLogin_Authentication
Note: ClientLogin is being deprecated and they want you to use OAuth, but if you do it quickly you should be okay!
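For completeness, a rough sketch of combining ClientLogin with a direct upload (essentially the asker's "alternate method" above; the developer key, account, password and file path are placeholders):

// ClientLogin + direct upload sketch using the Google.GData.YouTube library.
// "API_KEY", the e-mail, password and video path are placeholders.
var settings = new YouTubeRequestSettings("TesterApp", "API_KEY",
                                          "someone@example.com", "password");
var request = new YouTubeRequest(settings);

var video = new Video { Title = "My video", Description = "Uploaded via ClientLogin" };
video.Tags.Add(new MediaCategory("Games", YouTubeNameTable.CategorySchema));
video.YouTubeEntry.MediaSource = new MediaFileSource("C:\\video.mov", "video/quicktime");

Video uploaded = request.Upload(video);   // synchronous direct upload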
I have a login.aspx page with custom textboxes for the username and password, i.e. no LoginView.
After supplying the correct username and password, I assign a session ID which is used to visit other pages on the website.
Now, to download a file (1234), I redirect the user to ~/download.aspx?fileid=1234; on this page I check the session ID and send the user to the file URL, i.e. ~/file/1234.pdf.
If someone enters the file URL directly, I am unable to stop him.
Please guide me on how to do this.
P.S.: I have read about authentication rules in the web.config file but don't know how to mark the user as authenticated once he supplies the correct username and password at login. (I am only checking the username and password against the database and redirecting to the home page.)
Your authentication strategy is fairly weak. You should be bounding areas of your site (namely the files directory in this instance) with roles and assigning users to them.
However, to get around the more immediate problem, simply prevent the outside world from reaching the files directory, and when users hit ~/download.aspx?fileid=1234 just serve them the file. You can find instructions for this here: How to properly serve a PDF file
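A minimal sketch of that idea, assuming the login page sets a session flag such as "UserLoggedIn" and the PDFs live under ~/files (both names are illustrative, not from the original post):

// download.aspx code-behind sketch: only serve the file if the session says the user logged in.
protected void Page_Load(object sender, EventArgs e)
{
    if (Session["UserLoggedIn"] == null)
    {
        Response.Redirect("~/login.aspx");
        return;
    }

    // Validate fileid before using it in a path (avoid path traversal).
    string fileId = Request.QueryString["fileid"];
    string path = Server.MapPath("~/files/" + fileId + ".pdf");

    Response.ContentType = "application/pdf";
    Response.AppendHeader("Content-Disposition", "attachment; filename=" + fileId + ".pdf");
    Response.TransmitFile(path);
    Response.End();
}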
Take a look at this - http://support.microsoft.com/kb/301240
Look for point 4 in that article, under "Code the Event Handler So That It Validates the User Credentials"; it explains how to set the authentication cookie after validating the user.
Code to look at:
FormsAuthenticationTicket tkt;
string cookiestr;
HttpCookie ck;
tkt = new FormsAuthenticationTicket(1, txtUserName.Value, DateTime.Now,
DateTime.Now.AddMinutes(30), chkPersistCookie.Checked, "your custom data");
cookiestr = FormsAuthentication.Encrypt(tkt);
ck = new HttpCookie(FormsAuthentication.FormsCookieName, cookiestr);
if (chkPersistCookie.Checked)
ck.Expires=tkt.Expiration;
ck.Path = FormsAuthentication.FormsCookiePath;
Response.Cookies.Add(ck);
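The KB sample then redirects the user back to the page they originally requested, roughly like this:

// Send the user on to the page they originally asked for, now that the
// forms authentication cookie is set.
Response.Redirect(FormsAuthentication.GetRedirectUrl(txtUserName.Value, chkPersistCookie.Checked));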
What you can do is:
1. Enable forms authentication in web.config.
2. Deny anonymous access to the downloads folder.
3. When the user authenticates, set the authentication cookie and redirect the user to the download folder.
4. The download folder can now only be accessed by logged-in users.
Below is the code I use in my projects
void ServeFile(string fname, bool forceDownload)
{
    if (UserHasPermission(fname))
    {
        DownloadFile(fname, forceDownload);
    }
    else
    {
        ShowMessage("You have no permission");
    }
}

private void DownloadFile(string fname, bool forceDownload)
{
    string path = MapPath(fname);
    string name = Path.GetFileName(path);
    string ext = Path.GetExtension(path);
    string type = "";
    // set known types based on file extension
    if (ext != null)
    {
        switch (ext.ToLower())
        {
            case ".htm":
            case ".html":
                type = "text/HTML";
                break;
            case ".txt":
                type = "text/plain";
                break;
            case ".doc":
            case ".rtf":
                type = "Application/msword";
                break;
            case ".pdf":
                type = "Application/pdf";
                break;
        }
    }
    if (forceDownload)
    {
        Response.AppendHeader("content-disposition",
            "attachment; filename=" + name);
    }
    if (type != "")
        Response.ContentType = type;
    Response.WriteFile(path);
    Response.End();
}
}