ASP.NET Payment Form Submission - Need Guidance - C#

I apologize if the question is not well organized or clear; I am in a hurry :(
My web app has a payment form that needs to be submitted to another ASP.NET page (let's call it http://vendor.com/getpay.aspx)
residing on another server.
That page does some work of its own and then redirects to the actual
payment gateway site.
When I post my payment form to getpay.aspx via a simple HTML form, it works and redirects fine.
If I change the form and its hidden inputs to server-side controls, it doesn't work: their page throws a viewstate exception.
I need the form's hidden inputs to be server controls so that I can bind values generated by my code-behind. (I think I could do this the classic ASP way using <%= %>, but that feels like a step backwards.)
I tried HttpWebRequest in the code-behind; it posts the form, but the browser doesn't redirect to the payment gateway page.
I am posting the payment info over non-HTTPS. How can I prevent the user from tampering with the posted data?
I want to validate the payment form in the back end and then post it, since I can't trust the user's input.
Also, the result is returned to my redirect page with query strings appended, again over non-HTTPS.
How much can I trust this redirect data?
Thanks

Generate your form by clearing the Response and writing the HTML form out into the cleared response. When I get home I will trawl through my old code and provide an example.
EDIT:
OK here is my code, I had to recreate because I am still at work but it goes a little like this:
Create an intermediate page to capture your variables from the ASPX page and then use this to send as a 'simple' form:
protected void Page_Load(object sender, EventArgs e)
{
// Capture the post to this page
IDictionary<string, string> variables = new Dictionary<string, string>();
variables.Add("test", Request.Form["test"]); // collect all variables after checking they exist
RewriteContent(variables);
}
public void RewriteContent(IDictionary<string, string> variables)
{
string formContent = @"
<html>
<head>
<title>My Form</title>
</head>
<body>
<form action='http://vendor.com/getpay.aspx' method='post'>";
foreach (KeyValuePair<string, string> keyVal in variables)
{
// Emit each captured value as a hidden field named after its key
formContent += "<input type='hidden' name='" + keyVal.Key + "' value='" + keyVal.Value + "' />";
}
formContent += @"
</form>
</body>
</html>"; // Add either an auto post in a javascript or an explicit submit button
Response.Clear();
Response.Write(formContent);
Response.Flush();
Response.End();
}
EDIT 2:
Sorry I just realised I have not answered the other questions.
Q3/Q4/Q5: if you are not using HTTPS you cannot really stop tampering or be sure the response is correct, but you can reduce the chance that it is bogus. Hash the values together with a secret shared between your end and the destination; when you get the response, hash the returned values yourself and compare the result with the hash that is sent back to you before you accept it as valid.
Most payment mechanisms are verified in this manner, usually with an MD5 or SHA1 hash. You can find more info at the following links:
http://msdn.microsoft.com/en-us/library/system.security.cryptography.sha1.aspx
http://www.developerfusion.com/code/4601/create-hashes-md5-sha1-sha256-sha384-sha512/
http://snippets.dzone.com/posts/show/5816
EDIT 3:
Doing some encryption now and thought I would share some code with you (because I am nice like that). Might give you an idea of what to do and you can probably code better than me so just tidy up my mess a bit :)
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Text;
using System.Security.Cryptography;
using log4net;
namespace MyCompany.Cipher
{
// Output encodings supported by the hashing helper
public enum EncodeStyle
{
Base64,
Dig,
Hex
}
// The original snippet omitted the containing class; the class name here is arbitrary
public class HashHelper
{
private static readonly ILog log = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);
public string GenerateSha1HashForString(string valueToHash, EncodeStyle encodeStyle)
{
string hashedString = string.Empty;
try
{
hashedString = SHA1HashEncode(Encoding.UTF8.GetBytes(valueToHash), encodeStyle);
}
catch (Exception ex)
{
if (log.IsErrorEnabled) { log.Error(string.Format("{0}\r\n{1}", ex.Message, ex.StackTrace)); }
throw new Exception("Error trying to hash a string; information can be found in the error log", ex);
}
return hashedString;
}
private string ByteArrayToString(byte[] bytes, EncodeStyle encodeStyle)
{
StringBuilder output = new StringBuilder(bytes.Length);
if (EncodeStyle.Base64 == encodeStyle)
{
return Convert.ToBase64String(bytes);
}
for (int i = 0; i < bytes.Length; i++)
{
switch (encodeStyle)
{
case EncodeStyle.Dig:
//encode to decimal with 3 digits so 7 will be 007 (as range of 8 bit is 127 to -128)
output.Append(bytes[i].ToString("D3"));
break;
case EncodeStyle.Hex:
output.Append(bytes[i].ToString("X2"));
break;
}
}
return output.ToString();
}
private string SHA1HashEncode(byte[] valueToHash, EncodeStyle encode)
{
// ComputeHash returns the 20-byte SHA1 digest; encode it as a string
SHA1 a = new SHA1CryptoServiceProvider();
byte[] arr = a.ComputeHash(valueToHash);
return ByteArrayToString(arr, encode);
}
}
}
Put it in a class somewhere that your project can see; it can then generate a SHA1 hash for any string value via the public method.
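As a rough usage sketch for Q3/Q4/Q5 (the field names, the "hash" query-string parameter and the shared secret are placeholders for illustration, not any particular gateway's API, and HashHelper is just the class name I used above):

// Before posting: hash the outgoing values together with the shared secret.
var hasher = new HashHelper();
string sharedSecret = "my-shared-secret";    // agreed with the vendor out of band
string amount = "10.00", orderId = "12345";  // example values bound to the hidden fields
string outgoingHash = hasher.GenerateSha1HashForString(
    amount + "|" + orderId + "|" + sharedSecret, EncodeStyle.Hex);
// post outgoingHash along with the other hidden fields

// When the gateway redirects back: recompute the hash from the returned values
// and compare it with the hash in the query string before trusting the result.
string returnedHash = Request.QueryString["hash"];
string expectedHash = hasher.GenerateSha1HashForString(
    Request.QueryString["amount"] + "|" + Request.QueryString["orderId"] + "|" + sharedSecret,
    EncodeStyle.Hex);
bool looksGenuine = string.Equals(returnedHash, expectedHash, StringComparison.OrdinalIgnoreCase);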

Related

CefSharp Search Engine Implementation

I am working on a CefSharp based browser and I am trying to implement a search engine into it, but the code I have tried doesn't work. It doesn't really have any errors, but when I start the project and type something into the text field, nothing happens and it doesn't load the search engine I entered in the code; the only time the textbox loads anything is when a URL is typed.
This is the code used in the browser that doesn't work:
private void LoadUrl(string url)
{
if (Uri.IsWellFormedUriString(url, UriKind.RelativeOrAbsolute))
{
WebUI.Load(url);
}
else
{
var searchUrl = "https://www.google.com/search?q=" + WebUtility.HtmlEncode(url);
WebUI.Load(searchUrl);
}
}
I have also tried:
void LoadURl(String url)
{
if (url.StartsWith("http"))
{
WebUI.Load(url);
}
else
{
WebUI.Load(url);
}
}
I was also suggested to try:
private void LoadUrl(string url)
{
if (Uri.IsWellFormedUriString(url, UriKind.RelativeOrAbsolute))
{
WebUI.LoadUrl(url);
}
else
{
var searchUrl = "https://www.google.com/search?q=" + Uri.EscapeDataString(url);
WebUI.LoadUrl(searchUrl);
}
}
We have very little information here about how your code works, but what I notice is that you use WebUtility.HtmlEncode for the search query. WebUtility also has a WebUtility.UrlEncode method which, as I understand your question, makes more sense in this context. This is the documentation for the method: https://learn.microsoft.com/de-de/dotnet/api/system.net.webutility.urlencode
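To illustrate the difference with a made-up query (this is standard .NET behaviour): UrlEncode escapes the characters that are significant inside a URL query string, while HtmlEncode only escapes characters that are significant in HTML markup, so '#' and spaces pass through untouched.

using System.Net;

string query = "c# tutorial & tips";
string forUrl = WebUtility.UrlEncode(query);    // "c%23+tutorial+%26+tips"
string forHtml = WebUtility.HtmlEncode(query);  // "c# tutorial &amp; tips"
var searchUrl = "https://www.google.com/search?q=" + forUrl;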
The Url you are generating is invalid. You need to use Uri.EscapeDataString to convert the url param into a string that can be appended to a url.
// For this example we check if a well formed absolute Uri was provided
// and load that Url, all others will be loaded using the search engine
// e.g. https://github.com will load directly, attempting to load
// github.com will load the search engine with github.com as the query.
//
if (Uri.IsWellFormedUriString(url, UriKind.Absolute))
{
chromiumWebBrowser.LoadUrl(url);
}
else
{
var searchUrl = "https://www.google.com/search?q=" + Uri.EscapeDataString(url);
chromiumWebBrowser.LoadUrl(searchUrl);
}
nothing happens and it doesn't load the search engine
You need to subscribe to the LoadError event to get actual error messages. It's up to you to display errors to the user. The following is a basic example:
chromiumWebBrowser.LoadError += OnChromiumWebBrowserLoadError;
private void OnChromiumWebBrowserLoadError(object sender, LoadErrorEventArgs e)
{
//Actions that trigger a download will raise an aborted error.
//Aborted is generally safe to ignore
if (e.ErrorCode == CefErrorCode.Aborted)
{
return;
}
var errorHtml = string.Format("<html><body><h2>Failed to load URL {0} with error {1} ({2}).</h2></body></html>",
e.FailedUrl, e.ErrorText, e.ErrorCode);
_ = e.Browser.SetMainFrameDocumentContentAsync(errorHtml);
}
For testing purposes you can also copy and paste the searchUrl string you've generated and try loading it in Chrome to see what happens, you should also get an error.

Scraping html list data from a dynamic server

Hello guys!
Sorry for the dumb question, this is my last resort. I swear I tried countless other Stack Overflow questions, different frameworks, etc., but those didn't seem to help.
I have the following problem:
A website displays a list of data (there is a TON of div, li, span etc. tags in front of it; it's a big HTML page).
I'm writing a tool that fetches data from a specific list buried inside a ton of other div tags, downloads it and outputs an Excel file.
The website I'm trying to access is dynamic: you open the website, it loads a little bit, and then the list appears (probably some JS and such).
When I try to download the website via a WebRequest in C#, the HTML I get is almost empty, with a ton of white space, lots of non-HTML stuff and some garbage data as well.
Now: I'm pretty used to C#, HtmlAgilityPack and countless other libraries, just not so much to web-related stuff. I tried CefSharp, Chromium etc., but unfortunately couldn't get them to work properly.
I want to have an HTML string in my program that looks exactly like the HTML you see when
you open the dev console in Chrome while visiting the website mentioned above.
The HTML parser works flawlessly there.
This is how I imagine the code could look, simplified.
Extreme C# pseudocode:
WebBrowserEngine web = new WebBrowserEngine()
web.LoadURLuntilFinished(url); // with all the JS executed and stuff
String html = web.getHTML();
web.close();
My goal is for the string html in the pseudocode to look exactly like the one in the Chrome dev tab.
Maybe there is a solution posted somewhere else, but I swear I couldn't find it; I've been looking for days.
Any help is greatly appreciated.
@SpencerBench is spot on in saying
It could be that the page is using some combination of scroll state, element visibility, or element positions to trigger content loading. If that's the case, then you'll need to figure out what it is and trigger it programmatically.
To answer the question for your specific use case, we need to understand the behaviour of the page you want to scrape data from, or as I asked in the comments, how do you know the page is "finished"?
However, it's possible to give a fairly generic answer to the question which should act as a starting point for you.
This answer uses Selenium, a package which is commonly used for automating testing of web UIs, but as they say on their home page, that's not the only thing it can be used for.
Primarily it is for automating web applications for testing purposes, but is certainly not limited to just that. Boring web-based administration tasks can (and should) also be automated as well.
The web site I'm scraping
So first we need a web site. I've created one using ASP.NET Core MVC with .NET Core 3.1, although the web site's technology stack isn't important; it's the behaviour of the page you want to scrape which matters. This site has 2 pages, unimaginatively called Page1 and Page2.
Page controllers
There's nothing special in these controllers:
namespace StackOverflow68925623Website.Controllers
{
using Microsoft.AspNetCore.Mvc;
public class Page1Controller : Controller
{
public IActionResult Index()
{
return View("Page1");
}
}
}
namespace StackOverflow68925623Website.Controllers
{
using Microsoft.AspNetCore.Mvc;
public class Page2Controller : Controller
{
public IActionResult Index()
{
return View("Page2");
}
}
}
API controller
There's also an API controller (i.e. it returns data rather than a view) which the views can call asynchronously to get some data to display. This one just creates an array of the requested number of random strings.
namespace StackOverflow68925623Website.Controllers
{
using Microsoft.AspNetCore.Mvc;
using System;
using System.Collections.Generic;
using System.Text;
[Route("api/[controller]")]
[ApiController]
public class DataController : ControllerBase
{
[HttpGet("Create")]
public IActionResult Create(int numberOfElements)
{
var response = new List<string>();
for (var i = 0; i < numberOfElements; i++)
{
response.Add(RandomString(10));
}
return Ok(response);
}
private string RandomString(int length)
{
var sb = new StringBuilder();
var random = new Random();
for (var i = 0; i < length; i++)
{
var characterCode = random.Next(65, 91); // A-Z ('Z' is 90; Next's upper bound is exclusive)
sb.Append((char)characterCode);
}
return sb.ToString();
}
}
}
Views
Page1's view looks like this:
@{
ViewData["Title"] = "Page 1";
}
<div class="text-center">
<div id="list" />
<script src="~/lib/jquery/dist/jquery.min.js"></script>
<script>
var apiUrl = 'https://localhost:44394/api/Data/Create';
$(document).ready(function () {
$('#list').append('<li id="loading">Loading...</li>');
$.ajax({
url: apiUrl + '?numberOfElements=20000',
datatype: 'json',
success: function (data) {
$('#loading').remove();
var insert = ''
for (var item of data) {
insert += '<li>' + item + '</li>';
}
insert = '<ul id="results">' + insert + '</ul>';
$('#list').html(insert);
},
error: function (xht, status) {
alert('Error: ' + status);
}
});
});
</script>
</div>
So when the page first loads, it just contains an empty div called list; however, the page load triggers the function passed to jQuery's $(document).ready function, which makes an asynchronous call to the API controller, requesting an array of 20,000 elements. While the call is in progress, "Loading..." is displayed on the screen, and when the call returns, this is replaced by an unordered list containing the received data. This is written in a way intended to be friendly to developers of automated UI tests, or of screen scrapers, because we can tell whether all the data has loaded by testing whether or not the page contains an element with the ID results.
Page2's view looks like this:
@{
ViewData["Title"] = "Page 2";
}
<div class="text-center">
<div id="list">
<ul id="results" />
</div>
<script src="~/lib/jquery/dist/jquery.min.js"></script>
<script>
var apiUrl = 'https://localhost:44394/api/Data/Create';
var requestCount = 0;
var maxRequests = 20;
$(document).ready(function () {
getData();
});
function getDataIfAtBottomOfPage() {
console.log("scroll - " + requestCount + " requests");
if (requestCount < maxRequests) {
console.log("scrollTop " + document.documentElement.scrollTop + " scrollHeight " + document.documentElement.scrollHeight);
if (document.documentElement.scrollTop > (document.documentElement.scrollHeight - window.innerHeight - 100)) {
getData();
}
}
}
function getData() {
window.onscroll = undefined;
requestCount++;
$('#results').append('<li id="loading">Loading...</li>');
$.ajax({
url: apiUrl + '?numberOfElements=50',
datatype: 'json',
success: function (data) {
var insert = ''
for (var item of data) {
insert += '<li>' + item + '</li>';
}
$('#loading').remove();
$('#results').append(insert);
if (requestCount < maxRequests) {
window.setTimeout(function () { window.onscroll = getDataIfAtBottomOfPage }, 1000);
} else {
$('#results').append('<li>That\'s all folks</li>');
}
},
error: function (xht, status) {
alert('Error: ' + status);
}
});
}
</script>
</div>
This gives a nicer user experience because it requests data from the API controller in multiple smaller chunks, so the first chunk of data appears fairly quickly, and once the user has scrolled down to somewhere near the bottom of the page, the next chunk of data is requested, until 20 chunks have been requested and displayed, at which point the text "That's all folks" is added to the end of the unordered list. However this is more difficult to interact with programmatically because you need to scroll the page down to make the new data appear.
(Yes, this implementation is a bit buggy - if the user gets to the bottom of the page too quickly then requesting the next chunk of data doesn't happen until they scroll up a bit. But the question isn't about how to implement this behaviour in a web page, but about how to scrape the displayed data, so please forgive my bugs.)
The scraper
I've implemented the scraper as a xUnit unit test project, just because I'm not doing anything with the data I've scraped from the web site other than Asserting that it is of the correct length, and therefore proving that I haven't prematurely assumed that the web page I'm scraping from is "finished". You can put most of this code (other than the Asserts) into any type of project.
Having created your scraper project, you need to add the Selenium.WebDriver and Selenium.WebDriver.ChromeDriver nuget packages.
Page Object Model
I'm using the Page Object Model pattern to provide a layer of abstraction between functional interaction with the page and the implementation detail of how to code that interaction. Each of the pages in the web site has a corresponding page model class for interacting with that page.
First, a base class with some code which is common to more than one page model class.
namespace StackOverflow68925623Scraper
{
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;
public class PageModel
{
protected PageModel(IWebDriver driver)
{
this.Driver = driver;
}
protected IWebDriver Driver { get; }
public void ScrollToTop()
{
var js = (IJavaScriptExecutor)this.Driver;
js.ExecuteScript("window.scrollTo(0, 0)");
}
public void ScrollToBottom()
{
var js = (IJavaScriptExecutor)this.Driver;
js.ExecuteScript("window.scrollTo(0, document.body.scrollHeight)");
}
protected IWebElement GetById(string id)
{
try
{
return this.Driver.FindElement(By.Id(id));
}
catch (NoSuchElementException)
{
return null;
}
}
protected IWebElement AwaitGetById(string id)
{
var wait = new WebDriverWait(Driver, TimeSpan.FromSeconds(10));
return wait.Until(e => e.FindElement(By.Id(id)));
}
}
}
This base class gives us 4 convenience methods:
Scroll to the top of the page
Scroll to the bottom of the page
Get the element with the supplied ID, or return null if it doesn't exist
Get the element with the supplied ID, or wait for up to 10 seconds for it to appear if it doesn't exist yet
And each page in the web site has its own model class, derived from that base class.
namespace StackOverflow68925623Scraper
{
using OpenQA.Selenium;
public class Page1Model : PageModel
{
public Page1Model(IWebDriver driver) : base(driver)
{
}
public IWebElement AwaitResults => this.AwaitGetById("results");
public void Navigate()
{
this.Driver.Navigate().GoToUrl("https://localhost:44394/Page1");
}
}
}
namespace StackOverflow68925623Scraper
{
using OpenQA.Selenium;
public class Page2Model : PageModel
{
public Page2Model(IWebDriver driver) : base(driver)
{
}
public IWebElement Results => this.GetById("results");
public void Navigate()
{
this.Driver.Navigate().GoToUrl("https://localhost:44394/Page2");
}
}
}
And the Scraper class:
namespace StackOverflow68925623Scraper
{
using OpenQA.Selenium.Chrome;
using System;
using System.Threading;
using Xunit;
public class Scraper
{
[Fact]
public void TestPage1()
{
// Arrange
var driver = new ChromeDriver();
var page = new Page1Model(driver);
page.Navigate();
try
{
// Act
var actualResults = page.AwaitResults.Text.Split(Environment.NewLine);
// Assert
Assert.Equal(20000, actualResults.Length);
}
finally
{
// Ensure the browser window closes even if things go pear-shaped
driver.Quit();
}
}
[Fact]
public void TestPage2()
{
// Arrange
var driver = new ChromeDriver();
var page = new Page2Model(driver);
page.Navigate();
try
{
// Act
while (!page.Results.Text.Contains("That's all folks"))
{
Thread.Sleep(1000);
page.ScrollToBottom();
page.ScrollToTop();
}
var actualResults = page.Results.Text.Split(Environment.NewLine);
// Assert - we expect 1001 because of the extra "that's all folks"
Assert.Equal(1001, actualResults.Length);
}
finally
{
// Ensure the browser window closes even if things go pear-shaped
driver.Quit();
}
}
}
}
So, what's happening here?
// Arrange
var driver = new ChromeDriver();
var page = new Page1Model(driver);
page.Navigate();
ChromeDriver is in the Selenium.WebDriver.ChromeDriver package and implements the IWebDriver interface from the Selenium.WebDriver package with the code to interact with the Chrome browser. Other packages are available containing implementations for all popular browsers. Instantiating the driver object opens a browser window, and calling its Navigate method directs the browser to the page we want to test/scrape.
// Act
var actualResults = page.AwaitResults.Text.Split(Environment.NewLine);
Because on Page1, the results element doesn't exist until all the data has been displayed, and no user interaction is required in order for it to be displayed, we use the page model's AwaitResults property to just wait for that element to appear and return it once it has appeared.
AwaitResults returns an IWebElement instance representing the element, which in turn has various methods and properties we can use to interact with the element. In this case we use its Text property which returns the element's contents as a string, without any markup. Because the data is displayed as an unordered list, each element in the list is delimited by a line break, so we can use String's Split method to convert it to a string array.
Page2 needs a different approach - we can't use the presence of the results element to determine whether the data has all been displayed, because that element is on the page right from the start, instead we need to check for the string "That's all folks" which is written right at the end of the last chunk of data. Also the data isn't loaded all in one go, and we need to keep scrolling down in order to trigger the loading of the next chunk of data.
// Act
while (!page.Results.Text.Contains("That's all folks"))
{
Thread.Sleep(1000);
page.ScrollToBottom();
page.ScrollToTop();
}
var actualResults = page.Results.Text.Split(Environment.NewLine);
Because of the bug in the UI that I mentioned earlier, if we get to the bottom of the page too quickly, the fetch of the next chunk of data isn't triggered, and attempting to scroll down when already at the bottom of the page doesn't raise another scroll event. That's why I'm scrolling to the bottom of the page and then back to the top - that way I can guarantee that a scroll event is raised. You never know, the web site you're trying to scrape data from may itself be buggy.
Once the "That's all folks" text has appeared, we can go ahead and get the results element's Text property and convert it to a string array as before.
// Assert - we expect 1001 because of the extra "that's all folks"
Assert.Equal(1001, actualResults.Length);
This is the bit that won't be in your code. Because I'm scraping a web site which is under my control, I know exactly how much data it should be displaying so I can check that I've got all the data, and therefore that my scraping code is working correctly.
Further reading
Absolute beginner's introduction to Selenium: https://www.guru99.com/selenium-csharp-tutorial.html
(A curiosity in that article is the way that it starts by creating a console application project and later changes its output type to class library and manually adds the unit test packages, when the project could have been created using one of Visual Studio's unit test project templates. It gets to the right place in the end, albeit via a rather odd route.)
Selenium documentation: https://www.selenium.dev/documentation/
Happy scraping!
If you need to fully execute the web page, then a complete browser like CefSharp is your only option.
It could be that the page is using some combination of scroll state, element visibility, or element positions to trigger content loading. If that's the case, then you'll need to figure out what it is and trigger it programmatically. I know that CefSharp can simulate user actions like clicking, scrolling, etc.
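For completeness, here is a rough sketch of the CefSharp.OffScreen route for grabbing the rendered HTML. The API names are from memory and may differ between CefSharp versions, and the URL and fixed delay are placeholders; on a real page you would wait for the specific element you care about instead of sleeping.

// Rough sketch using the CefSharp.OffScreen package (a headless Chromium).
using System;
using System.Threading.Tasks;
using CefSharp;
using CefSharp.OffScreen;

class Program
{
    static async Task Main()
    {
        Cef.Initialize(new CefSettings());   // once per process
        using (var browser = new ChromiumWebBrowser("https://example.com"))
        {
            // Wait until Chromium reports that loading has finished.
            var loaded = new TaskCompletionSource<bool>();
            browser.LoadingStateChanged += (s, e) =>
            {
                if (!e.IsLoading) { loaded.TrySetResult(true); }
            };
            await loaded.Task;

            // Give the page's own JavaScript time to build the list; on a real site
            // you would poll for the element you need instead of sleeping.
            await Task.Delay(2000);

            // document.documentElement.outerHTML is what the Chrome dev tools show,
            // i.e. the DOM after scripts have run, not the raw server response.
            var response = await browser.EvaluateScriptAsync("document.documentElement.outerHTML");
            string html = response.Success ? (string)response.Result : null;
            Console.WriteLine(html?.Length);
        }
        Cef.Shutdown();
    }
}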

How to read something from a file with C# and use it with HTML codes on ASP.NET page?

I'm doing a school project. I need to make a simple web site, add Google Maps to it, read, let's say, 100 different addresses from a text file and show those locations on Google Maps with markers.
Right now I'm trying to add Google Maps to my ASP.NET page with JavaScript, as shown in the Google Maps tutorials. The problem is that I have to convert addresses to coordinates, so for that I'm using
function addAddressToMap(response) {
if (!response || response.Status.code != 200) {
alert("Sorry, we were unable to geocode that address");
}
else {
place = response.Placemark[0];
point = new GLatLng(place.Point.coordinates[1],place.Point.coordinates[0]);
marker = new GMarker(point);
map.addOverlay(marker);
marker.openInfoWindowHtml(place.address + '<br>' + '<b>Country code:</b> ' + place.AddressDetails.Country.CountryNameCode);
}
}
// showLocation() is called when you click on the Search button
// in the form. It geocodes the address entered into the form
// and adds a marker to the map at that location.
function showLocation() {
var address = "izmit";
var address2 = "ağrı";
geocoder.getLocations(address, addAddressToMap);
geocoder.getLocations(address2, addAddressToMap);
}
these functions, and they are working fine. But my problem is that I need to get this address information from a text file, and I want to do that on the server side with C#. I don't know how to write code on the server side and then return something to the HTML side as an address. I hope you understand. Thank you for your help.
Update:
Server code:
public partial class _Default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
Page.ClientScript.RegisterArrayDeclaration("Skills", "'asa'");
Page.ClientScript.RegisterArrayDeclaration("Skills", "'bell'");
Page.ClientScript.RegisterArrayDeclaration("Skills", "'C'");
Page.ClientScript.RegisterArrayDeclaration("Skills", "'C++'");
}
}
Client side:
function showLocation()
{
var address = "izmit";
var address2 = "ağrı";
geocoder.getLocations(Skills[0], addAddressToMap);
}
Now if I use "asa" instead of Skills[0] it shows the location and marker, but with Skills[0] it's not working. (And thank you for your answer, that was what I was looking for.)
Even if I try var MyValue = Skills[0]; and then use MyValue instead of Skills[0], it's still not working.
If I understood your question correctly, you want to create an array on the server side and read it in the client.
See this link for a tutorial on how to pass an array from the server to the client.
Basically, you want to use the ClientScriptManager.RegisterArrayDeclaration method to add values to the array.
You can then easily read it in javascript.
Server Side:
string arrayName = "MyArray";
Page.ClientScript.RegisterArrayDeclaration(arrayName , "'value1'");
Page.ClientScript.RegisterArrayDeclaration(arrayName , "'value2'");
Javascript on Client Side:
function readArray()
{
for (var i = 0; i < MyArray.length; i++)
{
//Reading Element From Array
var myValue = MyArray[i];
}
}
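To tie this back to the original question (reading the addresses from a text file), here is a minimal server-side sketch, assuming a file under App_Data with one address per line; the file name and the Addresses array name are placeholders:

protected void Page_Load(object sender, EventArgs e)
{
    // Hypothetical file; one address per line.
    string path = Server.MapPath("~/App_Data/addresses.txt");
    foreach (string line in System.IO.File.ReadAllLines(path))
    {
        string address = line.Trim();
        if (address.Length == 0) continue;
        // Quote and escape the value so it becomes a JavaScript string literal.
        Page.ClientScript.RegisterArrayDeclaration(
            "Addresses", "'" + address.Replace("'", "\\'") + "'");
    }
}

On the client you can then loop over the Addresses array, as in the readArray example above, and call geocoder.getLocations(Addresses[i], addAddressToMap) for each entry.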

Using Webclient to POST/select all the option Values of a Multiselect

I am trying to write an application to automate router configuration. Unfortunately with the router we are using, telnet is not an option.
So I have had to interface with the Cisco web interface using the C# WebClient class.
Up until now I had been able to set everything I needed using NameValueCollection and WebClient.UploadValues.
I would take all the input elements on the form, then upload a name/value collection corresponding to those inputs, setting the value of each to the desired setting.
But now I have run into a problem.
One of the forms uses a multiselect control to handle an array of input data rather than a plain input element.
I am at a total loss for how to set this.
The html for the multiselect is as follows
<select multiple class="MultiSelect" name="PortRangeList" size="12" onChange="showList(this.form.PortRangeList);" style="width: 100%">
<option value="All Traffic{[(*-*-*)]}1;0;1;65535;0}">All Traffic [TCP&UDP/1~65535]</option>
<option value="DNS{[(*-*-*)]}2;17;53;53;0}">DNS [UDP/53~53]</option>
<option value="FTP{[(*-*-*)]}3;6;21;21;0}">FTP [TCP/21~21]</option>
...
</select>
When I was using the input types, I would simply do the following
NameValueCollection formNetworkData = new NameValueCollection();
formNetworkData["ipAddr"] = "192.168.1.2";
formNetworkData["lanMask"] = "255.255.255.0";
downloadedData = _routerWebClient.UploadValues(_routerIP + NETWORK, formNetworkData);
But looking at the code for this new form, it appears that right before it submits, it selects all the options in the multiselect.
I realize I have probably asked this question poorly, but any assistance would be greatly appreciated.
Using the Chrome debugger, PortRangeList is exactly as you said.
There are 5 input elements:
submitStatus, upnpOpen (etc...)
For those my code looks like this:
NameValueCollection formData = new NameValueCollection();
formData["submitStatus"]="1";
formData["upnpOpen"]="0";
downloadedData = _routerWebClient.UploadValues(SERVICE0, formData);
But in order to submit the PortRangeList data, I can't use the NameValueCollection because it does not allow a name to have multiple values.
How could I submit that?
WebClient.UploadData, WebClient.UploadFile or WebClient.UploadString maybe?
Use Fiddler or Wireshark to compare what goes over the wire when it works ("normal" browser) and when it doesn't work (your code)... once you know the differences you can change your code accordingly...
You have to pass in the selected options by passing in the "PortRangeList" parameter multiple times, once for each option:
PortRangeList=All Traffic{[(*-*-*)]}1;0;1;65535;0}&PortRangeList=DNS{[(*-*-*)]}2;17;53;53;0}&PortRangeList=FTP{[(*-*-*)]}3;6;21;21;0}
That's how browsers do it. Since you're using the WebClient, try this:
PortRangeList=All Traffic{[(*-*-*)]}1;0;1;65535;0},DNS{[(*-*-*)]}2;17;53;53;0},FTP{[(*-*-*)]}3;6;21;21;0}
Obviously, everything has to be properly URL-escaped.
Thought I would post the final answer.
In the end I used the exact solution shown here.
http://anhonga.wordpress.com/2010/05/06/using-webclient-with-uploadvalues-and-uploadstring-to-simulate-post/
This is with his code, but I did essentially the exact same thing (without using global variables)
StringBuilder _strBld = new StringBuilder();
int _intItemCount = 0;
protected void btnSubmit_Click(object sender, EventArgs e)
{
System.Net.WebClient myWebClient = new System.Net.WebClient();
myWebClient.Headers.Add("Charset", "text/html; charset=UTF-8");
myWebClient.Headers.Add("Content-Type", "application/x-www-form-urlencoded"); // ◄ This line is essential
// Perform server-side validations (same as before)
if (this.F_Name.Text.Length == 0 || this.L_Name.Text.Length == 0)
{ AppendError("First and Last name must be provided"); }
…
// Add the user-provided name values
AppendUploadString("last_name", this.L_Name.Text);
AppendUploadString ("first_name", this.F_Name.Text);
AppendUploadString ("address", this.Address.Text);
// Add the Toppings
foreach (ListItem item in this.ToppingsChkBoxList.Items)
{
if (item.Selected)
{
AppendUploadString("Toppings", item.Value.ToString());
}
}
myWebClient.UploadString("https http://www.Destination.com/...?encoding=UTF-8", "POST", _strBld.ToString());
}
private void AppendUploadString(string strName, string strValue)
{
_intItemCount++;
_strBld.Append((_intItemCount == 1 ? "" : "&") + strName + "=" + System.Web.HttpUtility.UrlEncode(strValue));
// Update: Use UrlEncode to ensure that the special characters are included in the submission
}

remove html tags or script tags in c# string and also in client using javascript

I need to do user input validation, and I want it validated both on the client side and on the server side.
I have a textbox where the user can write a comment on the product. What I want to do is validate that the comment doesn't contain any injections like HTML or JavaScript. So, after the user clicks submit:
1.) Client side: how do I run a validation for inputs like these?
<a href="...">abcd</a> // I will accept only abcd and remove the anchor tag, but the abcd should appear as a link
<script type="text/javascript">alert(123);</script> // I will accept only alert(123); as the valid string
<b>abcd</b> // I will display abcd but it must appear bold
2.) Server side: same situation as the client side; I will remove the injected script and HTML tags.
I am using SharePoint 2007, and I'm not sure if there is a built-in function for this kind of validation in the SharePoint API or in C# for the server side.
Note: I don't want to use RegEx for this or any third-party software. I know many experts here can help me with this. Thank you so much!
While RegEx is probably your best bet, you can use this and modify to your liking:
// Extension methods must live in a static class; the class name here is arbitrary
public static class StringExtensions
{
public static string StripHtml(this string source)
{
string[] removeElements = new string[] { "a", "script" };
string _newString = source;
foreach (string removeElement in removeElements)
{
// Repeatedly cut out everything from the opening tag through the matching closing tag
while (_newString.ToLower().Contains("<" + removeElement.ToLower()))
{
_newString = _newString.Substring(0, _newString.ToLower().IndexOf("<" + removeElement.ToLower())) + _newString.Substring(_newString.ToLower().IndexOf("</" + removeElement.ToLower() + ">") + removeElement.Length + 3);
}
}
return _newString;
}
}
You'll use string clean = txtInput.Text.StripHtml();
I am not sure about creating a validation for this, but you can programmatically remove the tags using the function below. Use it to remove the HTML tags from the textbox value that the user has input:
public static string StripHtml(string html, bool allowHarmlessTags)
{
if (html == null || html == string.Empty)
return string.Empty;
if (allowHarmlessTags)
return System.Text.RegularExpressions.Regex.Replace(html, "", string.Empty);
return System.Text.RegularExpressions.Regex.Replace(html, "<[^>]*>", string.Empty);
}
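Since the pattern for the allowHarmlessTags branch was lost when the answer was posted, here is one plausible whitelist-style pattern as an illustration (my own guess, not the original author's); it keeps <b>, <i> and <u> and strips every other tag:

if (allowHarmlessTags)
{
    // Strip any tag whose name is not b, i or u (opening or closing form)
    return System.Text.RegularExpressions.Regex.Replace(
        html,
        @"<(?!/?(b|i|u)\b)[^>]*>",
        string.Empty,
        System.Text.RegularExpressions.RegexOptions.IgnoreCase);
}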
If you want to prevent JavaScript injection attacks, just encode the user input with Server.HtmlEncode(message).
But if you need to clean up some tags, Omar Al Zabir wrote a good article: Convert HTML to XHTML and Clean Unnecessary Tags and Attributes
// Encode the string input
StringBuilder sb = new StringBuilder(
HttpUtility.HtmlEncode(htmlInputTxt.Text));
// Selectively allow <b> and <i>
sb.Replace("<b>", "<b>");
sb.Replace("</b>", "");
sb.Replace("<i>", "<i>");
sb.Replace("</i>", "");
Response.Write(sb.ToString());
I would also recommend checking out the AntiSamy.NET project, but I haven't tried it myself.
