Can't obtain Source Code of embedded ISSUU flash - c#

First of all, what I want to do is legal (since they let you download the PDF).
I just wanted to make a faster, automatic way of downloading the PDF.
For example: http://www.lasirena.es/article/&path=10_17&ID=782
It has an embedded Flash PDF, and when I download that page's source code, the link to the PDF:
http://issuu.com/lasirena/docs/af_fulleto_setembre_andorra_sense_c?e=3360093/9079351
doesn't show up; the only thing I can find in the source code is this: 3360093/9079351
I tried to find a way to build the PDF link from it, but I can't find the name "af_fulleto_setembre_andorra_sense_c" anywhere...
I've made plenty of automatic downloads like this, but it's the first time I can't build or obtain the PDF link, and I can't seem to find a way. Is it even possible?
I also tried to find the JPG links, but without success. Either way (JPG or PDF) would be fine...
PS: the document ID doesn't show up in the downloaded source code either.
Thank you.

I thought of a workaround for this. Some might not consider it a solution, but in my case it works fine, because it relies on the ISSUU publisher account.
The solution itself is making a request to the ISSUU API for the publisher account I'm looking for.
// Requires Newtonsoft.Json (JObject, JToken, Formatting, IsoDateTimeConverter).
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://api.issuu.com/query?action=issuu.documents.list" +
    "&apiKey=Insert Your API Key" +
    "&format=json" +
    "&documentUsername=User of the account you want to make a request" +
    "&pageSize=100&resultOrder=asc" +
    "&responseParams=name,documentId,pageCount" +
    "&username=Insert your ISSUU username" +
    "&token=Insert Your Token here");
request.Method = "POST";
request.ContentType = "application/x-www-form-urlencoded";
request.Accept = "application/json";
try
{
    using (WebResponse response = request.GetResponse())
    {
        var responseValue = string.Empty;
        // grab the response
        using (var responseStream = response.GetResponseStream())
        {
            using (var reader = new StreamReader(responseStream))
            {
                responseValue = reader.ReadToEnd();
            }
        }
        if (responseValue != "")
        {
            List<string> lista_linkss = new List<string>();
            JObject ApiRequest = JObject.Parse(responseValue);
            // get the JSON result objects into a list
            IList<JToken> results = ApiRequest["rsp"]["_content"]["result"]["_content"].Children()["document"].ToList();
            for (int i = 0; i < results.Count; i++)
            {
                Folheto folheto = new Folheto();
                folheto.name = results[i]["name"].ToString();
                folheto.documentId = results[i]["documentId"].ToString();
                folheto.pageCount = Int32.Parse(results[i]["pageCount"].ToString());
                string _date = Newtonsoft.Json.JsonConvert.SerializeObject(results[i]["uploadTimestamp"], Formatting.None,
                    new IsoDateTimeConverter() { DateTimeFormat = "yyyy-MM-dd hh:mm:ss" }).Replace(@"""", string.Empty);
                folheto.uploadTimestamp = Convert.ToDateTime(_date);
                // lista_nomes_Sirena and list are collections defined elsewhere in my code
                if (!lista_nomes_Sirena.Contains(folheto.name))
                {
                    list.Add(folheto);
                }
            }
        }
    }
}
catch (WebException ex)
{
    // Handle error
}
You have to pay attention to the parameter "pageSize": the maximum permitted by the API is 100, which means you get at most 100 results per request. Since the account I'm following has around 240 PDFs, I made this request once with "resultOrder=asc" and a second time with "resultOrder=desc".
This allowed me to get the first 100 PDFs and the latest 100 PDFs uploaded.
Since I don't need the full history, just the PDFs they publish from now on, this didn't make a difference for me. A rough sketch of the two requests is shown below.
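For illustration, this is roughly how the two query URLs differ (the placeholders in curly braces are assumptions you have to fill in with your own values):
// Hypothetical sketch: the same query issued twice, varying only resultOrder.
string baseUrl = "http://api.issuu.com/query?action=issuu.documents.list" +
    "&apiKey={yourApiKey}&format=json" +
    "&documentUsername={publisherAccount}" +
    "&pageSize=100&responseParams=name,documentId,pageCount" +
    "&username={yourUsername}&token={yourToken}";

string oldestHundredUrl = baseUrl + "&resultOrder=asc";  // first 100 documents
string latestHundredUrl = baseUrl + "&resultOrder=desc"; // latest 100 documents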
To finish off, I store all the document IDs in a SQL database I made; when the program starts, it checks whether each ID has already been downloaded, and only downloads the PDF if it hasn't.
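As a rough sketch of that check (the connection string, table name and column name here are made up for illustration, not taken from my actual database):
// Hypothetical sketch: returns true when the document ID is already stored.
// The table "Folhetos" and column "DocumentId" are assumed names.
bool AlreadyDownloaded(string documentId, string connectionString)
{
    using (var connection = new System.Data.SqlClient.SqlConnection(connectionString))
    using (var command = new System.Data.SqlClient.SqlCommand(
        "SELECT COUNT(*) FROM Folhetos WHERE DocumentId = @id", connection))
    {
        command.Parameters.AddWithValue("@id", documentId);
        connection.Open();
        return (int)command.ExecuteScalar() > 0;
    }
}

// Usage: only download and record the PDF when the ID is new.
// if (!AlreadyDownloaded(folheto.documentId, connectionString)) { /* download the PDF, then insert the ID */ }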
Hope someone can find this workaround useful.

How can I pull data from website using C#

Web-page data into the application
You can replicate the request the website makes to get a list of relevant numbers. The following code might be a good start.
var httpRequest = (HttpWebRequest)WebRequest.Create("<url>");
httpRequest.Method = "POST";
httpRequest.Accept = "application/json";
string postData = "{<json payload>}";
using (var streamWriter = new StreamWriter(httpRequest.GetRequestStream())) {
    streamWriter.Write(postData);
}
var httpResponse = (HttpWebResponse)httpRequest.GetResponse();
string result;
using (var streamReader = new StreamReader(httpResponse.GetResponseStream())) {
    result = streamReader.ReadToEnd();
}
Console.WriteLine(result);
Now, for the <url> and <json payload> values:
Open the web inspector in your browser.
Go to the Network tab.
Set it so Fetch/XHR/AJAX requests are shown.
Refresh the page.
Look for a request that you want to replicate.
Copy the request URL.
Copy the payload (JSON data; to use it in your C# code you'll have to add a \ before every ", or use a verbatim string with doubled quotes, as in the example below).
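For example, if the copied payload were {"page": 1} (a purely hypothetical payload, used only to show the quoting), it could be written either way:
// Hypothetical payload used only to illustrate the escaping.
string postData = "{\"page\": 1}";   // escaped quotes
string postData2 = @"{""page"": 1}"; // verbatim string, quotes doubled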
Side note: The owner of the website you are making automated requests to might not be very happy about your tool, and you/it might be blocked if it makes too many requests in a short time.

PHP works fine via browser, won't work via HttpWebRequest

My goal
I'm trying to read and write files from my webserver via C#.
Progress
So far, via PHP, I got writing to files working with file_put_contents(). The saved files are text files, so you can easily read them. The reading part of my C# program works fine, and I get the values I expect:
private string GetWarnInfo(string id)
{
    try
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create($"http://example.com/{id}.txt");
        request.Method = "GET";
        // dispose the response so the connection is released
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
    catch (Exception)
    {
        return null;
    }
}
100% of the time it did not return null, which is a success.
The problem
Well, the writing. My PHP in example.php looks like this:
if (file_exists($_GET['id'] . '.txt'))
{
    unlink($_GET['id'] . '.txt');
    file_put_contents($_GET['id'] . '.txt', $_GET['info']);
} else {
    file_put_contents($_GET['id'] . '.txt', $_GET['info']);
}
While it fully works via browser calls (http://example.com/example.php?id=23&info=w3), and actually makes the text file, I can't get it to work with C#:
public void ServerRequest(string id, string info)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/example.php?id=" + id + "&info=" + info);
    request.Method = "GET";
}
For example, I call ServerRequest((23).ToString(), "w3"), but the file doesn't change; it is always either non-existent or in its last state (if there was one).
What could cause this problem? How would I fix it?
Thanks to @Barmar
I figured out the problem: if I never call GetResponse(), the web request is never actually sent. After adding that call, everything worked fine!
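For reference, a minimal sketch of the corrected method (the URL is the same placeholder as above; wrapping the response in using is my own addition):
public void ServerRequest(string id, string info)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
        "http://example.com/example.php?id=" + id + "&info=" + info);
    request.Method = "GET";

    // GetResponse() is what actually sends the request to the server.
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        // The response body isn't needed here; making the request is enough.
    }
}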

Simple Get Post in Asp.net as like in PHP

I ran into a problem. I am a .NET developer and don't know PHP. I am working on a CRM which has an API. My client says it should be a simple page that works with a simple POST, but I don't understand how to do a simple POST in .NET. I have created an ASP.NET WebForm and everything is working well. The only thing I have a problem with is that I have to return a list of parameters in the response. I am using
Response.Write("100 - Click Recorded Successfully.");
but this returns a full HTML document with the parameter string at the top. I saw one PHP API which returns only the parameter string, without an HTML document, like this:
response=1
&responsetext=SUCCESS
&authcode=123456
&transactionid=2154229522
&avsresponse=N
&cvvresponse=N
&orderid=3592
&type=sale
&response_code=100
Can someone suggest a better way to do this? I found many articles that explain how to do a simple GET/POST in .NET, but none of them solved my problem.
Update:
This is the code I am using from another application to call the page and read the response stream:
string result = "";
WebRequest objRequest = WebRequest.Create(url + query);
objRequest.Method = "POST";
objRequest.ContentLength = 0;
objRequest.Headers.Add("x-ms-version", "2012-08-01");
objRequest.ContentType = "application/xml";
WebResponse objResponse = objRequest.GetResponse();
using (StreamReader sr = new StreamReader(objResponse.GetResponseStream()))
{
    result = sr.ReadToEnd();
    // Close and clean up the StreamReader
    sr.Close();
}
string temp = result;
where url + query is the address of my page. The result shows this: http://screencast.com/t/eKn4cckXc. I want to get only the first line, that is, "100 - Click Recorded Successfully."
You have two options. The first is to clear whatever response has already been generated for the page, write the text, and then end the response so that nothing else is added:
Response.Clear();
Response.ClearHeaders();
Response.AddHeader("Content-Type", "text/plain");
Response.Write(Request.Url.Query);
Response.End();
That is, if you want to process it on the Page itself. However, a better approach would be to implement an HTTP handler, in which case all you need to do is:
public void ProcessRequest(HttpContext context)
{
    context.Response.AddHeader("Content-Type", "text/plain");
    context.Response.Write(context.Request.Url.Query);
}
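As a rough sketch, a complete generic handler (.ashx) might look like this (the file/class name ClickHandler is made up, and the literal success line is returned only for illustration):
<%@ WebHandler Language="C#" Class="ClickHandler" %>

using System.Web;

public class ClickHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Plain text response with no surrounding HTML page.
        context.Response.ContentType = "text/plain";
        context.Response.Write("100 - Click Recorded Successfully.");
    }

    public bool IsReusable
    {
        get { return false; }
    }
}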

Get web page contents from Firefox in a C# program

I need to write a simple C# app that should receive entire contents of a web page currently opened in Firefox. Is there any way to do it directly from C#? If not, is it possible to develop some kind of plug-in that would transfer page contents? As I am a total newbie in Firefox plug-ins programming, I'd really appreciate any info on getting me started quickly. Maybe there are some sources I can use as a reference? Doc links? Recommendations?
UPD: I actually need to communicate with a Firefox instance, not get contents of a web page from a given URL
It would help if you elaborated on what you are trying to achieve; maybe plugins already out there, such as Firebug, can help.
Anyway, if you really want to develop both the plugin and the C# application:
Check out this tutorial on firefox extension:
http://robertnyman.com/2009/01/24/how-to-develop-a-firefox-extension/
Otherwise, you can use the WebRequest or HttpWebRequest class in .NET to get the HTML source of any URL.
I think you'd almost certainly need to write a Firefox plugin for that. However, there are certainly ways to request a webpage and receive its HTML response within C#. It depends on what your requirements are.
If your requirements are simply to receive the source of any website, leave a comment and I'll point you towards the code.
// Note: doc, _UserAgent, _RequestTimeout and _CookieContainer are fields of the
// crawler class this snippet was taken from.
Uri uri = new Uri(url);
System.Net.HttpWebRequest req = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(uri.AbsoluteUri);
req.AllowAutoRedirect = true;
req.MaximumAutomaticRedirections = 3;
//req.UserAgent = _UserAgent; //"Mozilla/6.0 (MSIE 6.0; Windows NT 5.1; Searcharoo.NET)";
req.KeepAlive = true;
req.Timeout = _RequestTimeout * 1000; //prefRequestTimeout
// SIMONJONES http://codeproject.com/aspnet/spideroo.asp?msg=1421158#xx1421158xx
req.CookieContainer = new System.Net.CookieContainer();
req.CookieContainer.Add(_CookieContainer.GetCookies(uri));
System.Net.HttpWebResponse webresponse = null;
try
{
    webresponse = (System.Net.HttpWebResponse)req.GetResponse();
}
catch (Exception ex)
{
    webresponse = null;
    Console.Write("request for url failed: {0} {1}", url, ex.Message);
}
if (webresponse != null)
{
    webresponse.Cookies = req.CookieContainer.GetCookies(req.RequestUri);
    // handle cookies (need to do this in case we have any session cookies)
    foreach (System.Net.Cookie retCookie in webresponse.Cookies)
    {
        bool cookieFound = false;
        foreach (System.Net.Cookie oldCookie in _CookieContainer.GetCookies(uri))
        {
            if (retCookie.Name.Equals(oldCookie.Name))
            {
                oldCookie.Value = retCookie.Value;
                cookieFound = true;
            }
        }
        if (!cookieFound)
        {
            _CookieContainer.Add(retCookie);
        }
    }
    string enc = "utf-8"; // default
    if (webresponse.ContentEncoding != String.Empty)
    {
        // Use the HttpHeader Content-Type in preference to the one set in META
        doc.Encoding = webresponse.ContentEncoding;
    }
    else if (doc.Encoding == String.Empty)
    {
        doc.Encoding = enc; // default
    }
    //http://www.c-sharpcorner.com/Code/2003/Dec/ReadingWebPageSources.asp
    System.IO.StreamReader stream = new System.IO.StreamReader(
        webresponse.GetResponseStream(), System.Text.Encoding.GetEncoding(doc.Encoding));
    string html = stream.ReadToEnd(); // read the page source before closing the response
    webresponse.Close();
}
This does what you want.
using System.Net;
var cli = new WebClient();
string data = cli.DownloadString("http://www.heise.de");
Console.WriteLine(data);
Native messaging enables an extension to exchange messages with a native application installed on the user's computer.
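As a rough sketch of the C# side (assuming a companion WebExtension sends the current page's contents as a JSON message), a native messaging host reads a 4-byte little-endian length prefix followed by that many bytes of UTF-8 JSON from standard input:
using System;
using System.IO;
using System.Text;

class NativeMessagingHost
{
    static void Main()
    {
        using (Stream stdin = Console.OpenStandardInput())
        {
            // Each message is framed as a 4-byte length prefix followed by UTF-8 JSON.
            byte[] lengthBytes = new byte[4];
            if (stdin.Read(lengthBytes, 0, 4) != 4)
                return; // no message received

            int length = BitConverter.ToInt32(lengthBytes, 0);
            byte[] buffer = new byte[length];
            int read = 0;
            while (read < length)
                read += stdin.Read(buffer, read, length - read);

            // The JSON payload; a hypothetical extension would put the page HTML in here.
            string json = Encoding.UTF8.GetString(buffer);
            File.WriteAllText("page.json", json);
        }
    }
}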

Not generating a complete response from a HttpWebResponse object in C#

I am creating an HttpWebRequest object from another ASPX page to save the response stream to my data store. The URL I am using to create the HttpWebRequest object has a query string to render the correct output. When I browse to the page using any old browser, it renders correctly. When I try to retrieve the output stream using HttpWebResponse.GetResponseStream(), it renders my built-in error message instead.
Why would it render correctly in the browser, but not using the HttpWebRequest and HttpWebResponse objects?
Here is the source code:
Code behind of target page:
protected void Page_Load(object sender, EventArgs e)
{
    string output = string.Empty;
    if (Request.QueryString["a"] != null)
    {
        // generate output
        output = "The query string value is " + Request.QueryString["a"];
    }
    else
    {
        // generate a message indicating the query string variable is missing
        output = "The query string value was not found";
    }
    Response.Write(output);
}
Code behind of page creating HttpWebRequest object
string url = "http://www.mysite.com/mypage.aspx?a=1";
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
// this if statement was missing from the original example
// (User is a string holding the account name, defined elsewhere)
if (User.Length > 0)
{
    request.Credentials = new NetworkCredential("myaccount", "mypassword", "mydomain");
    request.PreAuthenticate = true;
}
request.UserAgent = Request.UserAgent;
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Stream resStream = response.GetResponseStream();
Encoding encode = System.Text.Encoding.GetEncoding("utf-8");
StreamReader readStream = new StreamReader(resStream, encode, true, 2000);
char[] read = new char[256]; // buffer for the response characters
int count = readStream.Read(read, 0, read.Length);
string str = string.Empty;
while (count > 0)
{
    // Append the characters read so far to the string.
    string strRead = new string(read, 0, count);
    str = str + Server.HtmlEncode(strRead);
    count = readStream.Read(read, 0, 256);
}
// return what was found
string result = str;
resStream.Close();
readStream.Close();
Update
@David McEwing: I am creating the HttpWebRequest with the full page name, and the page is still generating the error output. I updated the code sample of the target page to demonstrate exactly what I am doing.
@Chris Lively: I am not redirecting to an error page; I generate a message indicating the query string value was not found. I updated the source code example.
Update 1:
I tried using Fiddler to trace the HttpWebRequest and it did not show up in the Web Sessions history window. Am I missing something in my source code that is needed to get a complete web request and response?
Update 2:
I did not include the credentials section of code in my original example, and it was the culprit: I was setting the Credentials property of the HttpWebRequest to a service account instead of my AD account, which was causing the issue.
I updated my source code example.
What webserver are you using? I can remember at one point in my past when doing something with IIS there was an issue where the redirect between http://example.com/ and http://example.com/default.asp dropped the query string.
Perhaps run Fiddler (or a protocol sniffer) and see if there is something happening that you aren't expecting.
Also check whether passing in the full page name works. If it does, the above is almost certainly the problem.
Optionally, you can try the AllowAutoRedirect property of the HttpWebRequest object.
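For example, a quick way to see whether a redirect is dropping the query string (a small sketch of my own, not part of the original answer; the URL is the one from the question):
// Disable automatic redirects so the first response can be inspected directly.
HttpWebRequest probe = (HttpWebRequest)WebRequest.Create("http://www.mysite.com/mypage.aspx?a=1");
probe.AllowAutoRedirect = false;
using (HttpWebResponse probeResponse = (HttpWebResponse)probe.GetResponse())
{
    // A 301/302 whose Location header lacks "?a=1" means the redirect drops the query string.
    Console.WriteLine((int)probeResponse.StatusCode + " -> " + probeResponse.Headers["Location"]);
}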
It turned out I needed to replace the following line of code:
request.Credentials = new NetworkCredential("myaccount", "mypassword", "mydomain");
with:
request.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
