PUT API request - (IIS media services API) - c#

I'm trying to complete a PUT request to the IIS media services API - to try and set a publishing point to "stopped" state.
I've read the following link, which hasn't helped me very much!
https://msdn.microsoft.com/en-us/library/hh206014%28VS.90%29.aspx
My current code throws an exception on the httpWebRequest1.GetResponse() call; it indicates the web server is returning a 401 Unauthorized error code:
string url = "http://localhost/LiveStream.isml/State";
var httpWebRequest1 = (HttpWebRequest)WebRequest.Create(url);
httpWebRequest1.ContentType = "application/atom+xml";
httpWebRequest1.Method = "PUT";
httpWebRequest1.Headers.Add("Authorization", "USERNAME:PASSWORD");
using (var streamWriter = new StreamWriter(httpWebRequest1.GetRequestStream()))
{
    XmlDocument document = new XmlDocument();
    document.Load("Resources/XMLFile1.xml");
    string test = GetXMLAsString(document);
    streamWriter.Write(test);
}
var httpResponse = (HttpWebResponse)httpWebRequest1.GetResponse();
using (var streamReader = new StreamReader(httpResponse.GetResponseStream()))
{
    var responseText = streamReader.ReadToEnd();
}
My username/password are redacted above, but they work fine when visiting the page in a browser and entering them in the login prompt that opens.
My script essentially PUTs an XML document that is a copy of the XML document returned when visiting the state page in a browser.
Any help would be appreciated.
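One likely cause of the 401: the Authorization header above sends the raw USERNAME:PASSWORD string, but the HTTP Basic scheme expects the word Basic followed by the base64-encoded credential pair (that encoding is what the browser adds for you after the login prompt). A minimal sketch, assuming the publishing point is protected by Basic authentication; IIS can also be configured for Windows/NTLM auth, in which case setting Credentials is the simpler route:
// Option 1: build a proper Basic auth header (needs using System.Text;).
string credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes("USERNAME:PASSWORD"));
httpWebRequest1.Headers.Add("Authorization", "Basic " + credentials);
// Option 2: let the framework negotiate the scheme (Basic, Digest, NTLM, ...).
httpWebRequest1.Credentials = new NetworkCredential("USERNAME", "PASSWORD");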

Related

How can I pull data from website using C#

Web-page data into the application
You can replicate the request the website makes to get a list of relevant numbers. The following code might be a good start.
var httpRequest = (HttpWebRequest)WebRequest.Create("<url>");
httpRequest.Method = "POST";
httpRequest.Accept = "application/json";
httpRequest.ContentType = "application/json"; // tell the server the body is JSON
string postData = "{<json payload>}";
using (var streamWriter = new StreamWriter(httpRequest.GetRequestStream()))
{
    streamWriter.Write(postData);
}
var httpResponse = (HttpWebResponse)httpRequest.GetResponse();
string result;
using (var streamReader = new StreamReader(httpResponse.GetResponseStream()))
{
    result = streamReader.ReadToEnd();
}
Console.WriteLine(result);
Now, for the <url> and <json payload> values:
1. Open the web inspector in your browser.
2. Go to the Network tab.
3. Filter it so Fetch/XHR (AJAX) requests are shown.
4. Refresh the page.
5. Look for the request you want to replicate.
6. Copy the request URL.
7. Copy the payload (JSON data; to embed it in a C# string you'll have to escape every " as \", or use a verbatim string, as shown below).
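For example, a hypothetical payload copied from the inspector, such as {"id": 42, "type": "number"}, can be embedded either way:
// Escaping each quote with a backslash:
string postData = "{\"id\": 42, \"type\": \"number\"}";
// Or as a verbatim string literal, doubling the quotes instead:
string postData2 = @"{""id"": 42, ""type"": ""number""}";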
Side note: The owner of the website you are making automated requests to might not be very happy about your tool, and you/it might be blocked if it makes too many requests in a short time.

Get instagram full profile information without "Instagram Graph Api"

I want to get some data from Instagram users.
So I've used the Instagram Basic Display API, and the profile data I could receive was:
username
media count
account type
but I want this data:
username
name
media count
Profile Image
followers count
following count
How can I get this data without the Instagram Graph API (in any way) in C#?
Or is there any way to get this data with the WebClient class or anything like that?
Update for @Eehab's answer: I used RestClient and WebClient in this example, and both of them give the same result.
Now see the WebClient example:
WebClient client = new WebClient();
string page = client.DownloadString("https://www.instagram.com/instagram/?__a=1");
Console.WriteLine(page);
Console.ReadKey();
I've also learned that this URL only works for logged-in users. I'm already logged into my Instagram account in Chrome, but I think WebClient needs to log in too.
Edit, following @Eehab's answer:
In this case, the URL https://www.instagram.com/{username}/?__a=1 can't be used without a logged-in Instagram browser profile. So we should log in to Instagram with Selenium and reuse the logged-in cookies for the URL requests. First install the Selenium web driver, then write something like the following (untested):
var driver = new ChromeDriver();
// Go to Instagram
driver.Url = "https://www.instagram.com/";
// Log in
var userNameElement = driver.FindElement(By.Name("username"));
userNameElement.SendKeys("Username");
var passwordElement = driver.FindElement(By.Name("password"));
passwordElement.SendKeys("Password");
var loginButton = driver.FindElement(By.Id("login"));
loginButton.Click();
// Get the cookies from the logged-in session
var cookies = driver.Manage().Cookies.AllCookies.ToList();
// Send the request with the logged-in cookies
var url = "https://www.instagram.com/{username}/?__a=1";
var httpRequest = (HttpWebRequest)WebRequest.Create(url);
foreach (var cookie in cookies)
{
    httpRequest.Headers["Cookie"] += $"{cookie.Name}={cookie.Value}; ";
}
var httpResponse = (HttpWebResponse)httpRequest.GetResponse();
using (var streamReader = new StreamReader(httpResponse.GetResponseStream()))
{
    var result = streamReader.ReadToEnd();
}
//...
If anyone can improve this answer for more use cases, please edit it; I'd really appreciate it :)
You could do that using the open API, for example:
https://www.instagram.com/instagram/?__a=1
Example code generated by Postman:
var client = new RestClient("https://www.instagram.com/instagram/?__a=1");
client.Timeout = -1;
var request = new RestRequest(Method.GET);
IRestResponse response = client.Execute(request);
Console.WriteLine(response.Content);
You could also use the HttpClient class. If you want to use WebClient, you can do it with the WebClient.DownloadString method, though I don't recommend WebClient for this kind of scraping. Keep in mind that Instagram may block you; if you are blocked, you need residential proxies to bypass the block.
The response will be JSON data; use Json.NET or a similar library to deserialize it, as sketched below.
Just replace instagram with any username you want in the given URL.
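As a sketch of the HttpClient route combined with Json.NET parsing: the JSON paths below assume the legacy ?__a=1 response shape with a top-level graphql.user object; Instagram has changed this endpoint over time, so treat the paths as assumptions rather than a stable contract.
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class InstagramProfileSketch
{
    static async Task Main()
    {
        using var client = new HttpClient();
        string page = await client.GetStringAsync("https://www.instagram.com/instagram/?__a=1");

        // Pull the fields the question asks for out of the parsed JSON.
        var user = JObject.Parse(page)["graphql"]?["user"];
        Console.WriteLine(user?["username"]);
        Console.WriteLine(user?["full_name"]);                              // name
        Console.WriteLine(user?["profile_pic_url_hd"]);                     // profile image
        Console.WriteLine(user?["edge_followed_by"]?["count"]);             // followers count
        Console.WriteLine(user?["edge_follow"]?["count"]);                  // following count
        Console.WriteLine(user?["edge_owner_to_timeline_media"]?["count"]); // media count
    }
}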

How to fetch the HTML DOM content of an authorized page in C#

I have a Windows Forms app where I am loading a site. I log in to the site inside the Windows Forms app with valid credentials.
Then I manage to get a valid session id, and this is how the URL looks after entering the valid credentials:
var url = "http://www.somewebsite123.com/portal/sessionId=123";
I am using Microsoft.mshtml & AxInterop.ShDocVw for fetching the content of the authorized page.
WebClient client = new WebClient();
using (Stream data = client.OpenRead(new Uri(url)))
{
    StreamReader reader = new StreamReader(data);
    string htmlContent = reader.ReadToEnd();
}
But the line below throws the error:
strHTML = ((IHTMLElement)htmlContent.document).innerHTML.ToString();
Error
Internal error (WWC-00006)
An unexpected error occurred: ORA-01403: no data found (WWV-16016)
How do I get rid of this error?
The actual DOM content can be found in WebException.Response when WebClient hits a 4XX or 5XX response:
try
{
    // the WebClient call that raises the 4XX, e.g. client.OpenRead(url)
}
catch (WebException webex)
{
    using (var streamReader = new StreamReader(webex.Response.GetResponseStream()))
    {
        var domContent = streamReader.ReadToEnd();
    }
}

Call an ASP.NET script from a C# desktop app

I'm trying to develop a desktop app to be used as a website scraping tool. My requirement is that the user should be able to specify a URL in the desktop app. The desktop app should then invoke the ASP.NET script to scrape data from the website and return the records to the desktop app.
Should I use a web service or the ASP.NET runtime for this?
Any help is appreciated :)
Additional details
The scraping activity is already done. I used the HtmlAgilityPack package. This is my scraping code to extract a list of company names from a web page:
public static String getPageHTML(String URL)
{
    String totalCompanies = null;
    HttpWebRequest httpWebRequest = (HttpWebRequest)WebRequest.Create(URL);
    IWebProxy myProxy = httpWebRequest.Proxy;
    if (myProxy != null)
    {
        myProxy.Credentials = CredentialCache.DefaultCredentials;
    }
    httpWebRequest.Method = "GET";
    HttpWebResponse res = (HttpWebResponse)httpWebRequest.GetResponse();
    HtmlDocument doc1 = new HtmlDocument();
    doc1.Load(res.GetResponseStream());
    // XPath uses @class (not #class) to match the class attribute
    HtmlNode node = doc1.DocumentNode.SelectSingleNode("//td[@class='mainbody']/table/tr[last()]/td");
    try
    {
        totalCompanies = node.InnerText;
        return totalCompanies;
    }
    catch (NullReferenceException)
    {
        totalCompanies = "No records found";
        return totalCompanies;
    }
}
You can use HttpWebRequest within your desktop app; I've done this before (WinForms). For example:
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("url");
var response = new StreamReader(req.GetResponse().GetResponseStream()).ReadToEnd();
You can then use HtmlAgilityPack to parse the data from the response:
HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(response);
//Sample query
var node = doc.DocumentNode.Descendants("div")
.Where(d => d.Attributes.Contains("id")).ToList();
(it would be helpful to include more details/be more specific)
If your ASP.NET page already does all the scraping, and all you need to do is access that ASP.NET page, you can simply use HttpWebRequest
http://msdn.microsoft.com/en-us/library/456dfw4f.aspx - short description & tutorial
If that URL is the website TO BE SCRAPED, and you need to include that ASP.NET script in your project, then you need to add it as a web service.
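If you take the web service route, a minimal sketch of wrapping the scraping method in a classic ASMX service might look like this (Scraper and ScraperService are hypothetical names; a WCF or Web API endpoint would work the same way):
using System.Web.Services;

[WebService(Namespace = "http://tempuri.org/")]
public class ScraperService : WebService
{
    // Exposes the scraping method so the desktop app can call it over HTTP.
    [WebMethod]
    public string GetCompanies(string url)
    {
        // Assumes the getPageHTML method above lives in a class named Scraper.
        return Scraper.getPageHTML(url);
    }
}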
You can do it with both, but you can also do it by adding a WebBrowser control to your desktop application. I don't know why, but the result is much faster.

Getting an error: The remote server returned an error: (401) Unauthorized. from WebClient class

item = 535;
String URI = "http://localhost:3033/WebFormDesigner.aspx?fm_id=" + item.ToString();
//String URI = "http://www.google.com";
WebClient wc = new WebClient();
Stream s = wc.OpenRead(URI);
XPathDocument doc = new XPathDocument(s);
It doesn't work with:
XPathDocument doc = new XPathDocument(URI);
In the above method, using the Google URI works, but it doesn't seem to like the URI for localhost at all. I confirmed that the URI works when put into the browser. Not quite sure what's going on with it.
The error occurs on the Stream declaration:
The remote server returned an error: (401) Unauthorized.
This is an extension of my other post: How do you get the markup of a webpage in asp.net similar to php's get_file_contents
Edit: The issue here is that the request doesn't have permission to see the page; the page being visited is a secure page. What I am trying to do is give my request the right permissions, because the pages are within the same project, but I'm using a WebRequest, which treats it as an outside source.
I tried to accomplish this with a couple of different approaches. Both involved cookies.
System.Net.HttpWebRequest request = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(URI);
request.KeepAlive = true;
request.Credentials = CredentialCache.DefaultCredentials;
request.CookieContainer = new CookieContainer();
HttpCookieCollection cookieJar = Request.Cookies;
for (int i = 0; i < cookieJar.Count; i++)
{
    System.Web.HttpCookie cookie = cookieJar.Get(i);
    Cookie oC = new Cookie();
    oC.Domain = Request.Url.Host;
    oC.Expires = cookie.Expires;
    oC.Name = cookie.Name;
    oC.Path = cookie.Path;
    oC.Secure = cookie.Secure;
    oC.Value = cookie.Value;
    request.CookieContainer.Add(oC);
}
System.Net.HttpWebResponse response = (System.Net.HttpWebResponse)request.GetResponse();
Stream s = response.GetResponseStream();
This does some pretty simple stuff, but I guess I am still doing it wrong. I get the cookies for my current page and pass them into the CookieContainer for the target page. After that, I submit the request for a response. It gives the error above as well.
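Since the target page lives in the same project, one approach worth trying (a sketch, not a confirmed fix) is to skip the HTTP round-trip entirely and render the page in the current, already-authenticated context with Server.Execute:
// Sketch: render WebFormDesigner.aspx in the current authenticated context
// instead of issuing a new, unauthenticated HTTP request back to ourselves.
var writer = new System.IO.StringWriter();
Server.Execute("~/WebFormDesigner.aspx?fm_id=535", writer);
string markup = writer.ToString();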
