I have a website that serves up media files. If a file is not found, I show a simple message on the page to the user with the code below:
var fileNotFoundResponse = new HttpResponseMessage(HttpStatusCode.NotFound);
if (!File.Exists(mediaFile.FilesystemLocation))
{
fileNotFoundResponse.Content = new StringContent("File not found! <br /> Please contact <a href='mailto:support@sbcdef.com'>support@sbcdef.com</a>");
return ResponseMessage(fileNotFoundResponse);
}
The problem is that it comes out as plain text on a white page; the HTML isn't rendered, and I want it to display as HTML.
Is there a way to do this?
Thanks!
You probably aren't sending the client the correct Content-Type header. Note that HttpResponseMessage has no ContentType property of its own (that belongs to System.Web.HttpResponse); with HttpResponseMessage the content type lives on the content's headers, so set it to text/html:
fileNotFoundResponse.Content.Headers.ContentType = new MediaTypeHeaderValue("text/html");
(MediaTypeHeaderValue is in System.Net.Http.Headers.)
Hope this helps!
UPDATE: If you need UTF-8 support, use the StringContent constructor overload that takes an encoding and a media type; it sets the charset on the header for you:
fileNotFoundResponse.Content = new StringContent(yourHtml, Encoding.UTF8, "text/html");
(yourHtml here stands for your message markup.)
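Putting it together, a minimal sketch of the whole check (assuming a Web API ApiController action, as in your snippet, where ResponseMessage is the controller helper):
if (!File.Exists(mediaFile.FilesystemLocation))
{
    var fileNotFoundResponse = new HttpResponseMessage(HttpStatusCode.NotFound);
    // The three-argument StringContent constructor sets both the charset and the media type.
    fileNotFoundResponse.Content = new StringContent(
        "File not found! <br /> Please contact <a href='mailto:support@sbcdef.com'>support@sbcdef.com</a>",
        Encoding.UTF8,
        "text/html");
    return ResponseMessage(fileNotFoundResponse);
}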
I'm trying to search for a Hebrew word on a website using C#, but I can't figure it out.
This is the code I'm currently trying to work with:
var client = new WebClient();
Encoding encoding = Encoding.GetEncoding(1255);
var text = client.DownloadString("http://shchakim.iscool.co.il/default.aspx");
if (text.Contains("ביטול"))
{
MessageBox.Show("idk");
}
thanks for any help :)
The problem seems to be that WebClient is not using the right encoding when converting the response into a string; you must set the WebClient.Encoding property to the encoding the server actually uses for this conversion to happen correctly.
I inspected the response from the server and it's encoded using UTF-8; the updated code below reflects this change:
using (var client = new WebClient())
{
client.Encoding = System.Text.Encoding.UTF8;
var text = client.DownloadString("http://shchakim.iscool.co.il/default.aspx");
// The response from the server doesn't contain the word ביטול; for demo purposes I changed it to שוחרות, which is present in the response.
if (text.Contains("שוחרות"))
{
MessageBox.Show("idk");
}
}
Here you can find more information about the WebClient.Encoding property:
https://learn.microsoft.com/en-us/dotnet/api/system.net.webclient.encoding?view=netframework-4.7.2
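If you don't want to hard-code the encoding, one option (a sketch, not tested against this particular site) is to download the raw bytes, read the charset from the Content-Type response header, and decode manually:
using (var client = new WebClient())
{
    byte[] raw = client.DownloadData("http://shchakim.iscool.co.il/default.aspx");
    // ResponseHeaders is populated once the download completes.
    string contentType = client.ResponseHeaders["Content-Type"] ?? "";
    string charset = "utf-8"; // fallback when no charset is declared
    var match = System.Text.RegularExpressions.Regex.Match(contentType, @"charset=([\w-]+)");
    if (match.Success)
    {
        charset = match.Groups[1].Value;
    }
    string text = System.Text.Encoding.GetEncoding(charset).GetString(raw);
}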
Hope this helps.
First of all, what I want to do is legal (since they let you download the PDF).
I just wanted a faster, automatic way of downloading the PDF.
For example: http://www.lasirena.es/article/&path=10_17&ID=782
It has an embedded Flash PDF viewer, and when I download that page's source code, the link to the PDF:
http://issuu.com/lasirena/docs/af_fulleto_setembre_andorra_sense_c?e=3360093/9079351
doesn't show up; the only thing I have in the source code is this: 3360093/9079351
I tried to find a way to build the PDF link from it, but I can't find the name "af_fulleto_setembre_andorra_sense_c" anywhere...
I've made plenty of automatic downloads like this, but it's the first time I can't build or get the PDF link, and I can't seem to find a way. Is it even possible?
I also tried to find JPG links, but without success. Either way (JPG or PDF) is fine...
PS: the document ID doesn't show up in the downloaded source code either.
Thank you.
I thought of a workaround for this. Some might not consider it a solution, but in my case it works fine, because it depends on the ISSUU publisher account.
The solution itself is making a request to the ISSUU API for the publisher account I'm looking for.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://api.issuu.com/query?action=issuu.documents.list" +
"&apiKey=Insert your API key here" +
"&format=json" +
"&documentUsername=Username of the account you want to query" +
"&pageSize=100&resultOrder=asc" +
"&responseParams=name,documentId,pageCount" +
"&username=Insert your ISSUU username here" +
"&token=Insert your token here");
request.Method = "POST";
request.ContentType = "application/x-www-form-urlencoded";
request.Accept = "application/json";
try
{
using (WebResponse response = request.GetResponse())
{
var responseValue = string.Empty;
// grab the response
using (var responseStream = response.GetResponseStream())
{
using (var reader = new StreamReader(responseStream))
{
responseValue = reader.ReadToEnd();
}
}
if (responseValue != "")
{
JObject ApiRequest = JObject.Parse(responseValue);
//// get JSON result objects into a list
IList<JToken> results = ApiRequest["rsp"]["_content"]["result"]["_content"].Children()["document"].ToList();
for (int i = 0; i < results.Count(); i++)
{
Folheto folheto = new Folheto(); // Folheto is a simple data class of mine (name, documentId, pageCount, uploadTimestamp)
folheto.name = results[i]["name"].ToString();
folheto.documentId = results[i]["documentId"].ToString();
folheto.pageCount = Int32.Parse(results[i]["pageCount"].ToString());
string _date = Newtonsoft.Json.JsonConvert.SerializeObject(results[i]["uploadTimestamp"], Formatting.None, new IsoDateTimeConverter() { DateTimeFormat = "yyyy-MM-dd hh:mm:ss" }).Replace(@"""", string.Empty);
folheto.uploadTimestamp = Convert.ToDateTime(_date);
if (!lista_nomes_Sirena.Contains(folheto.name)) // lista_nomes_Sirena and list are kept elsewhere in my program
{
list.Add(folheto);
}
}
}
}
}
catch (WebException ex)
{
// Handle error
}
You have to pay attention to the parameter "pageSize": the maximum permitted by the API is 100, which means the maximum number of results per request is 100. Since the account I'm following has around 240 PDFs, I made this request once with "resultOrder=asc" and a second time with "resultOrder=desc".
This allowed me to get the first 100 PDFs and the latest 100 PDFs uploaded.
Since I didn't need the full history, just the PDFs they publish from now on, it didn't make a difference.
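In other words, the workaround is just two calls that differ in the resultOrder value (a sketch; QueryIssuu is a hypothetical helper wrapping the request and parsing code above):
// QueryIssuu("asc"/"desc") would wrap the HttpWebRequest + JSON parsing shown above.
List<Folheto> first100 = QueryIssuu("asc"); // the oldest 100 documents
List<Folheto> last100 = QueryIssuu("desc"); // the newest 100 documents
// Merge the two sets, de-duplicating by documentId (needs System.Linq).
List<Folheto> all = first100.Concat(last100)
    .GroupBy(f => f.documentId)
    .Select(g => g.First())
    .ToList();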
Finally, I store all the document IDs in a SQL database I made. When the program starts, it checks whether each ID has already been downloaded; if not, it downloads the PDF, and if so, it skips it.
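The check itself can be as simple as this sketch (the table and column names here are hypothetical; plain ADO.NET from System.Data.SqlClient):
// Returns true if this documentId was already downloaded.
// "DownloadedDocs" and "DocumentId" are hypothetical names.
private static bool AlreadyDownloaded(string connectionString, string documentId)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM DownloadedDocs WHERE DocumentId = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", documentId);
        conn.Open();
        return (int)cmd.ExecuteScalar() > 0;
    }
}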
Hope someone finds this workaround useful.
I ran into a problem. I am a .NET developer and don't know PHP. I am working on a CRM which has an API. My client says it should be a simple page that works with a simple POST, but I don't understand how to do a simple POST in .NET. I have created an ASP.NET WebForm and everything is working well. The only problem is that I have to return a list of parameters in the response. I am using
Response.Write("100 - Click Recorded Successfully.");
but this returns a full HTML document with the parameter string at the top. I saw one PHP API which returns only the parameter string, like this, without an HTML document:
response=1
&responsetext=SUCCESS
&authcode=123456
&transactionid=2154229522
&avsresponse=N
&cvvresponse=N
&orderid=3592
&type=sale
&response_code=100
Can someone suggest a better way to do this? I found many articles that explain how to do a simple GET/POST in .NET, but none of them solved my problem.
Update:
This is the code I am using from another application to call the page and read the response stream:
string result = "";
WebRequest objRequest = WebRequest.Create(url + query);
objRequest.Method = "POST";
objRequest.ContentLength = 0;
objRequest.Headers.Add("x-ms-version", "2012-08-01");
objRequest.ContentType = "application/xml";
WebResponse objResponse = objRequest.GetResponse();
using (StreamReader sr =
new StreamReader(objResponse.GetResponseStream()))
{
result = sr.ReadToEnd();
// Close and clean up the StreamReader
sr.Close();
}
string temp = result;
where url + query is the address of my page. The result shows this: http://screencast.com/t/eKn4cckXc. I want to get only the first line, that is, "100 - Click Recorded Successfully."
You have two options. The first is to clear whatever response has already been generated on the page, write the text, and then end the response so that nothing else is added:
Response.Clear();
Response.ClearHeaders();
Response.AddHeader("Content-Type", "text/plain");
Response.Write(Request.Url.Query);
Response.End();
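One caveat: Response.End stops execution by throwing a ThreadAbortException on the request thread. If that gets in your way, an alternative (after clearing and writing the response as above) is:
HttpContext.Current.ApplicationInstance.CompleteRequest();
which tells ASP.NET to skip the rest of the pipeline without aborting the thread; note that the remainder of the page lifecycle may still run.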
That is if you want to process it on the Page. However, a better approach is to implement an HTTP handler (the simplest way is a generic handler, an .ashx file, which needs no web.config registration), in which case all you need is:
public class ClickHandler : IHttpHandler // the class name is arbitrary
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.AddHeader("Content-Type", "text/plain");
        context.Response.Write(context.Request.Url.Query);
    }

    public bool IsReusable { get { return false; } }
}
Now, first off, I want to understand whether it's better to use HttpWebRequest/HttpWebResponse or to simply use a WebBrowser control. Most people seem to prefer the WebBrowser control; however, whenever I ask about it, I'm told that HttpWebRequest and HttpWebResponse are better. So, if this question could be avoided by switching to a WebBrowser control (and there's a good reason why it's better), please let me know!
Basically, I set up a test site, written in PHP, running on localhost. It consists of three files.
The first is index.php, which just contains a simple login form. All the session stuff is just me testing how sessions work, so it's not very well written; like I said, it's just for testing purposes:
<?php
session_start();
$_SESSION['id'] = 2233;
?>
<form method="post" action="login.php">
U: <input type="text" name="username" />
<br />
P: <input type="password" name="password" />
<br />
<input type="submit" value="Log In" />
</form>
Then, I have login.php (the action of the form), which looks like:
<?php
session_start();
$username = $_POST['username'];
$password = $_POST['password'];
if ($username == "username" && $password == "password" && $_SESSION['id'] == 2233)
{
header('Location: loggedin.php');
die();
}
else
{
die('Incorrect login details');
}
?>
And lastly, loggedin.php just displays "Success!".
As you can see, a very simple test, and many of the things I have there are just for testing purposes.
So, then I go to my C# code. I created a method called "HttpPost". It looks like:
private static string HttpPost(string url)
{
request = HttpWebRequest.Create(url) as HttpWebRequest;
request.CookieContainer = cookies;
request.UserAgent = userAgent;
request.KeepAlive = keepAlive;
request.Method = "POST";
response = request.GetResponse() as HttpWebResponse;
if (response.StatusCode != HttpStatusCode.Found)
throw new Exception("Website not found");
StreamReader sr = new StreamReader(response.GetResponseStream());
return sr.ReadToEnd();
}
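I'm guessing the missing piece is writing the POST body before calling GetResponse, something like this (just my guess, using the field names from my PHP form):
string postData = "username=username&password=password"; // values should be URL-encoded if they contain special characters
byte[] bytes = Encoding.UTF8.GetBytes(postData);
request.ContentType = "application/x-www-form-urlencoded";
request.ContentLength = bytes.Length;
using (Stream stream = request.GetRequestStream())
{
    stream.Write(bytes, 0, bytes.Length);
}
But I'm not sure, and I don't know whether that's enough to keep the session afterwards.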
I built a Windows Forms application, so in the button Click event I want to add the code to call the HttpPost method with the appropriate URL. However, I'm not really sure what I'm supposed to put there to cause it to log in.
Can anyone help me out? I'd also appreciate some general pointers on programmatically logging into websites!
Have you considered using WebClient?
It provides a set of high-level convenience methods for use with web pages, including UploadValues, but I'm not sure if that would work for your purposes.
Also, it's probably better not to use WebBrowser, as that's a full-blown web browser that can execute scripts and such; HttpWebRequest and WebClient are much more lightweight.
Edit: Login to website, via C#
Check this answer out, I think this is exactly what you're looking for.
Relevant code snippet from above link :
var client = new WebClient();
client.BaseAddress = @"https://www.site.com/any/base/url/";
var loginData = new NameValueCollection();
loginData.Add("login", "YourLogin");
loginData.Add("password", "YourPassword");
client.UploadValues("login.php", "POST", loginData);
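One thing to watch out for: WebClient doesn't store cookies between requests by default, and a PHP session lives in a session cookie, so a login followed by a second request won't stay logged in. A common workaround (a sketch, subclassing WebClient to attach a CookieContainer) looks like this:
public class CookieAwareWebClient : WebClient
{
    private readonly CookieContainer cookies = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        HttpWebRequest httpRequest = request as HttpWebRequest;
        if (httpRequest != null)
        {
            // Reuse the same cookie container for every request this client makes.
            httpRequest.CookieContainer = cookies;
        }
        return request;
    }
}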
You should use something like the WCF Web API HttpClient (the API that later became System.Net.Http.HttpClient). It's much easier that way.
The following code is written off the top of my head, but it should give you the idea.
using (var client = new HttpClient())
{
var data = new Dictionary<string, string>(){{"username", "username_value"}, {"password", "the_password"}};
var content = new FormUrlEncodedContent(data);
var response = client.PostAsync("http://yourdomain/login.php", content).Result; // Post in the old preview API; PostAsync(...).Result with System.Net.Http
if (response.StatusCode == HttpStatusCode.OK)
{
//
}
}
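To keep the login session across requests, reuse one HttpClient instance; its handler stores cookies (such as PHPSESSID) in a CookieContainer. If you want to inspect or share the cookies, you can supply the container yourself (a short sketch):
var handler = new HttpClientHandler
{
    CookieContainer = new CookieContainer(), // holds the session cookie between requests
    UseCookies = true
};
using (var client = new HttpClient(handler))
{
    // POST the login form, then make further requests with the same client instance.
}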
I want to fetch a website's title and images from a URL, like facebook.com does when you paste a link. How do I get the images and website title from a third-party link?
Use Html Agility Pack. Here is sample code to get the title:
using System;
using HtmlAgilityPack;
protected void Page_Load(object sender, EventArgs e)
{
string url = @"http://www.veranomovistar.com.pe/";
System.Net.WebClient wc = new System.Net.WebClient();
HtmlDocument doc = new HtmlDocument();
doc.Load(wc.OpenRead(url));
var metaTags = doc.DocumentNode.SelectNodes("//title");
if (metaTags != null)
{
string title = metaTags[0].InnerText;
}
}
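To pick up the images as well, you can query the same document for img nodes (a sketch along the same lines; relative src values still need to be resolved against the page URL):
var imgNodes = doc.DocumentNode.SelectNodes("//img[@src]");
if (imgNodes != null) // SelectNodes returns null when nothing matches
{
    foreach (var img in imgNodes)
    {
        string src = img.GetAttributeValue("src", string.Empty);
        // To resolve relative paths: new Uri(new Uri(url), src)
    }
}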
If you have any doubts, post a comment.
At a high level, you just need to send a standard HTTP request to the desired URL. This will get you the site's markup. You can then inspect the markup (either by parsing it into a DOM object and then querying the DOM, or by running some simple regexp's/pattern matching to find the things you are interested in) to extract things like the document's <title> element and any <img> elements on the page.
Off the top of my head, I'd use an HttpWebRequest to go get the page and parse the title out myself, then use further HttpWebRequests in order to go get any images referenced on the page. There's a darn good chance though that there's a better way to do this and somebody will come along and tell you what it is. If not, it'd look something like this:
HttpWebResponse response = null;
try
{
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(<your URL here>);
response = (HttpWebResponse)request.GetResponse();
Stream responseStream = response.GetResponseStream();
StreamReader reader = new StreamReader(responseStream);
//use the StreamReader object to get the page data and parse out the title as well as
//getting locations of any images you need to get
}
catch
{
//handle exceptions
}
finally
{
if(response != null)
{
response.Close();
}
}
Probably the dumb way to do it, but that's my $0.02.
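For what it's worth, if you do parse the title by hand, a quick-and-dirty regex over the downloaded markup (the string read via the StreamReader above, called html here) could look like this; it's fragile, as any regex-on-HTML approach is, so treat it as a sketch:
// Requires using System.Text.RegularExpressions;
Match m = Regex.Match(html, @"<title[^>]*>(.*?)</title>",
    RegexOptions.IgnoreCase | RegexOptions.Singleline);
string title = m.Success ? m.Groups[1].Value.Trim() : string.Empty;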
You just have to write JavaScript in the source body.
For example, if you are using a master page, you just have to write the code on the master page, and it is reflected on all the pages.
You can also use the image URL property in this script.