Issue with submit in ExtJS - C#

I am using ExtJS 4.1.3 and submitting a form with 'standardSubmit: true'.
On the server side we use ASP.NET Web API. I create an Excel file in code and attach it to the content of my HttpResponseMessage. With 'standardSubmit: true' the file is downloaded in the browser. With 'standardSubmit: false' the file is not downloaded even though the response contains the proper content type and attachment header; in that case I get the Excel XML in the response body instead.
The problem with 'standardSubmit: true' is that it does not report success/failure (I cannot even see the response). Please help.

The issue is that file upload is handled very differently from other inputs, because AJAX requests cannot upload files.
The framework therefore does not use AJAX to submit the form as usual when a file upload is present; see the docs on this: http://docs.sencha.com/ext-js/4-1/#!/api/Ext.form.Basic-method-hasUpload
It uses a hidden iframe to submit the form and to process the returned data. Follow the instructions carefully and set the response's Content-Type header from the server to "text/html".
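For the iframe-based (non-standard) submit to report success/failure, the server must return an ExtJS-style JSON body with a "success" flag, served as text/html so the framework can read it out of the hidden iframe. A minimal sketch, assuming a Web API controller (the route, message text, and field handling are placeholders, not your actual code):

```csharp
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class UploadController : ApiController
{
    [HttpPost]
    public HttpResponseMessage Post()
    {
        // ... process the submitted form / build the Excel file here ...

        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            // ExtJS expects a JSON body with a "success" flag ...
            Content = new StringContent("{\"success\": true, \"msg\": \"File created\"}")
        };
        // ... but served as text/html, because the reply lands in a hidden iframe.
        response.Content.Headers.ContentType = new MediaTypeHeaderValue("text/html");
        return response;
    }
}
```

Note that with 'standardSubmit: true' the browser performs a plain navigation/download and no success or failure callback can fire; the callbacks only work with the iframe-based submit described above.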

Related

C#: send a web request with JavaScript disabled

I want to download the content of a website programmatically, and it looks like this content is loaded by AJAX calls. When I simply disable JavaScript in my browser, only one request is made by the page and all content is loaded without AJAX.
What I need is to make a web request that tells the page I have JavaScript disabled, so that it returns all the content and not just an empty body tag with no content at all.
Any suggestions how to do that?
You need to mimic the browser.
Steps:
Use Fiddler and see what is sent by the browser.
Set the same headers/cookies/user agent via C# code.
If that does not work, compare the request your code makes with the browser's by using Fiddler as a proxy for your C# code (set the proxy to http://localhost:8888).
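A minimal sketch of the steps above, assuming the header values were copied from a Fiddler capture (the URL and all header values here are placeholders):

```csharp
using System;
using System.IO;
using System.Net;

class Program
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/page");

        // Copy the headers Fiddler showed the browser sending.
        request.UserAgent = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36";
        request.Accept = "text/html,application/xhtml+xml";
        request.Headers["Accept-Language"] = "en-US,en;q=0.9";
        request.CookieContainer = new CookieContainer(); // carries any required cookies

        // Optional: route through Fiddler to compare with the browser's request.
        // request.Proxy = new WebProxy("http://localhost:8888");

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string html = reader.ReadToEnd();
            Console.WriteLine(html.Length);
        }
    }
}
```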

Link in Excel causes duplicate invocation

Problem Background
In my ASP.NET MVC4 web application, we allow the user to download data in an Excel workbook in which one of the cells contains a hyperlink to a report page. We prepare the link so that when the user clicks it in Excel, ReportController gets called with parameters, processes the request, and returns a report summary view, i.e. a .cshtml page. All works well...
I generate the Excel file using SpreadsheetGear; this is the code snippet that generates the link:
int outInt;
rrid = (int.TryParse((string) values[row][column], out outInt) ? outInt : 0);
worksheet.Hyperlinks.Add(worksheet.Cells[row + 1, column],
    PrepareProspectProfileLink((int) rrid, downloadCode),
    string.Empty,
    "CTRL + click to follow link",
    rrid.ToString(CultureInfo.InvariantCulture));
Problem
I just noticed that when I click the link in Excel, the same request is sent to the web server twice.
Analysis
I checked with Fiddler and placed a breakpoint in the application code, and it is confirmed that the request is indeed sent twice.
In Fiddler, under the Process column, I found that the first request comes from "excel:24408" and the second from "chrome:4028".
Also, if I copy-paste the link into Outlook, it invokes the request just once.
I understand this indicates that the first request is invoked by Excel; when Excel is served the HTML, it knows nothing about how to render it, so it hands the request over to the default web browser, which is Chrome on my system. Chrome then fires the same request and, on receiving the HTML, opens the page.
Question
How can I stop this behavior? It puts unnecessary load on the web server. And secondly, when I audit user actions, I get two entries :(
I'm not sure about Excel, but you can handle this weird behavior on the web server instead. You can create an HTML page (without auditing) that uses JavaScript to redirect the user to the page with the real report (and the auditing logic).
If you're concerned only about auditing, you can track requests for the report in a cache (or the database) and make an auditing entry only if the same request for the report wasn't fired, say, 5 seconds ago.
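A sketch of the second suggestion, using MemoryCache to suppress the duplicate audit entry (the key format and the 5-second window are assumptions; adapt to your audit code):

```csharp
using System;
using System.Runtime.Caching;

public static class AuditDeduplicator
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Returns true if this (user, report) pair has not been audited within
    // the last 5 seconds; a second call inside the window is a duplicate.
    public static bool ShouldAudit(string userId, string reportId)
    {
        string key = "audit:" + userId + ":" + reportId;
        // AddOrGetExisting returns null when the key was absent (first request).
        object existing = Cache.AddOrGetExisting(
            key, DateTime.UtcNow, DateTimeOffset.UtcNow.AddSeconds(5));
        return existing == null;
    }
}
```

Call ShouldAudit before writing the audit row; the duplicate request fired by the browser a moment later returns false and is skipped.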

Invoke an &lt;a&gt; tag's JavaScript click function from WebClient

I created a web scraping application in C# 4.0. I use WebClient in this application.
I am now facing a problem with an &lt;a&gt; tag click event. Please see the code in this image.
I am trying to download the HTML of that page. How can I execute this href code from WebClient?
Please help.
You cannot execute JavaScript from WebClient. You can simulate such a request and get the required response from the server, but it takes some reverse engineering.
To understand how to get the desired response, first perform the operation through the browser and record the request generated by clicking the link (you can use Fiddler Web Debugger for that). Then recreate that request from your WebClient: send all the required data to the server, properly formatted, along with cookies and the correct request type (GET or POST).
Why you cannot use WebClient for JavaScript execution is nicely described here.
And how you can create a request that resembles the one created by the JavaScript click can be deduced from this.
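A sketch of recreating such a recorded request with WebClient. The URL, form field names, and cookie value below are placeholders standing in for whatever Fiddler actually captured; WebForms-style fields like __EVENTTARGET/__VIEWSTATE are a common pattern for javascript-driven links, but yours may differ:

```csharp
using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

class Program
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Headers and cookie copied from the request Fiddler recorded.
            client.Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0";
            client.Headers[HttpRequestHeader.Cookie] = "ASP.NET_SessionId=abc123";

            // The form fields the javascript click would have posted.
            var form = new NameValueCollection
            {
                { "__EVENTTARGET", "lnkNextPage" },
                { "__VIEWSTATE", "..." } // scrape this from the page's HTML first
            };

            byte[] reply = client.UploadValues("http://example.com/page.aspx", "POST", form);
            string html = Encoding.UTF8.GetString(reply);
            Console.WriteLine(html.Length);
        }
    }
}
```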

HttpHandler write to redirected page before sending file

I can never seem to find any docs on .NET, and the official ones (when they're correct) are a hopeless maze.
I redirect a user with jQuery to an HttpHandler on a new page, where I build and send an HTML table (and call it an Excel file) on the fly. I can build and send the file with no delay (even huge ones) thanks to stackers.
I'd like to tell the user on the new page that I'm building the file, etc., as the file is being built and sent.
When I do context.Response.Write before sending my Excel headers, I get:
Server Error in '/' Application.
Server cannot append header after HTTP headers have been sent.
Is there any way to achieve what I want?
Many thanks in advance!
What you need is a temporary landing page which redirects to the download page.
On the temp page you can have your message and a small script to send the user to the download.
<body>
<div> Your file is getting created... Please wait patiently.
</div>
<script>
window.location.href = 'http://yoursite.com/documentpage/blah/foo/bar';
</script>
</body>
What exactly are you trying to achieve? As far as I understand, you are building some kind of Excel sheet and sending it to the client. I assume that for the client to recognize your content as an Excel sheet, you send some response headers (such as the MIME type).
Although you always have to send headers before the body, you won't be able to write a notification into the response stream that is shown to the user, because the user would then save your notification into the Excel file as well.
So I think you will need a new landing page with a notification on it and some kind of JavaScript or meta redirection (or else a hyperlink or button) to the URL that triggers the Excel creation and download.
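For reference, a sketch of the handler side: all headers must be set before any body bytes are written, which is why a progress message cannot precede them (the class name, MIME type, and file name here are illustrative, not the asker's code):

```csharp
using System.Web;

public class ExcelDownloadHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        // All headers go first; any Response.Write before this point
        // flushes them and triggers "cannot append header after HTTP
        // headers have been sent".
        context.Response.ContentType = "application/vnd.ms-excel";
        context.Response.AddHeader(
            "Content-Disposition", "attachment; filename=report.xls");

        // Body: the HTML table that Excel will open as a sheet.
        context.Response.Write("<table><tr><td>Report data</td></tr></table>");
    }
}
```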

Getting data from a webpage

I have an idea for an app that would really help me out at work, but I'm not sure if it's possible.
I want to run a C# desktop application that asks for a value. When a value is supplied, the application will open a browser, go to a webpage, and enter the value into a form on the website. The form is then submitted and a new page loads that contains a table of results. I then want to extract the table of results from the page source and write code to parse the result values.
It is not important that the user sees this happen in an actual browser. In other words, if there's a way to do it by making HTTP requests directly, that's great.
The biggest problem I have is getting the values into the form and then retrieving the page source after the form is submitted and the next page loads.
Any help really appreciated.
Thanks
Provided that you're only using this in a legal context:
Usually, web forms are sent via a POST request to the web server, specifically to some script that handles it. You can look at the HTML code of the form's page and find out the destination of the form (the form's action attribute).
You can then use an HttpWebRequest in C# to "pretend to be the form", sending a POST request with all the required parameters in the request body.
As a result you will get the source code of the destination page, just as it would be sent to the browser. You can then parse this.
This is definitely possible, and you don't need an actual web browser for it. You can simply use a System.Net.WebClient to send your HTTP request and get the HTTP response.
I suggest using Wireshark (or Firefox + Firebug); it lets you see HTTP requests and responses. By looking at the HTTP traffic you can see exactly how you should form your HTTP request and which parameters you should set.
You don't need to involve the browser in this. WebClient should do all that you require. You'll need to see what's actually being posted when you submit the form with the browser, and then you should be able to make a POST request using WebClient and retrieve the resulting page as a string.
The docs for the WebClient constructor have a nice example.
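A minimal sketch of that approach with WebClient (the URL and the form field name are placeholders for whatever the browser actually posts):

```csharp
using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

class Program
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // The fields the site's form submits (check its name attributes).
            var form = new NameValueCollection { { "searchValue", "12345" } };

            // POST the form and read the results page back as a string.
            byte[] responseBytes = client.UploadValues(
                "http://example.com/search", "POST", form);
            string resultsHtml = Encoding.UTF8.GetString(responseBytes);

            // ... parse the results table out of resultsHtml here ...
            Console.WriteLine(resultsHtml.Length);
        }
    }
}
```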
See e.g. this question for some pointers on at least the data-retrieval side. You're going to know a lot more about the HTTP protocol before you're done with this...
Why would you do this through web pages if you don't even want the user to do anything?
Web pages are purely for interaction with users; if you simply want data transfer, use WCF.
@Brian: using Wireshark will result in a very angry network manager; make sure you are actually allowed to use it.
