String encoding during page render - C#

I am experiencing an odd problem as the page is being rendered in the browser, or so it seems.
In the Model I have
public string ReviewWrapperUrl
{
    get
    {
        string returnurl = Settings.SiteSettings.SiteRoot + Settings.PowerReviewSettings.CCReviewWrapperUrl + PageID;
        if (!string.IsNullOrEmpty(Year))
        {
            returnurl = returnurl + "&pr_page_id_variant=" + Year;
        }
        return returnurl;
    }
}
At this point returnurl holds a value like http://www.mydomain.co.uk/write-a-review?page_id=191519&pr_page_id_variant=2018
In the cshtml I have:
<script>
    try {
        POWERREVIEWS.display.render({
            api_key: "@Model.PowerReviewAPIKey",
            locale: "en_GB",
            merchant_group_id: "@Model.PowerReviewMerchantGroupID",
            merchant_id: "@Model.PowerReviewMerchantID",
            page_id: "@Model.PageID",
            review_wrapper_url: "@Model.ReviewWrapperUrl",
            components: {
                ReviewSnippet: 'pr-reviewsnippet',
                ReviewImageDisplay: 'pr-reviewimagedisplay',
                ReviewDisplay: 'pr-reviewdisplay'
            }
        });
    } catch (e) {
        window.console && window.console.log(e);
    }
</script>
If I put a breakpoint on review_wrapper_url: "@Model.ReviewWrapperUrl", the value is the same.
Here is where it gets more complicated. The PowerReviews plugin then renders to the browser. However, by the time it is rendered the string has somehow been encoded. The & in page_id=191519&pr_page_id_variant=2018 has changed to &amp;, and this is causing me problems because the plugin on the target page is using & as a delimiter.
I have spoken to PowerReviews and they have tried the same values and cannot replicate the problem. One of their developers who knows a little C#/ASPX has suggested it could be that .NET is somehow converting it as it is being rendered.
However, the final URL has had more parameters appended to it by the PowerReviews plugin, and these are not affected:
http://www.mydomain.co.uk/write-a-review?page_id=191519&pr_page_id_variant=2018&pr_merchant_id=xxxxxx&pr_api_key=xxx-xxxx-xxx-xxxx-xxxxxxxxx&pr_merchant_group_id=xxxxx
Has anyone experienced this? Can you suggest ways for me to prove or disprove the processes that may be causing it?
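For example, would comparing Razor's default output with @Html.Raw (which, as I understand it, bypasses Razor's HTML encoding) prove this one way or the other?
review_wrapper_url: "@Model.ReviewWrapperUrl", // HTML-encoded by default, so & would become &amp;
review_wrapper_url: "@Html.Raw(Model.ReviewWrapperUrl)", // rendered verbatim, so & would survive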

Related

Scraping HTML list data from a dynamic server

Hello guys!
Sorry for the dumb question, this is my last resort. I swear I tried countless other Stack Overflow questions, different frameworks, etc., but those didn't seem to help.
I have the following problem:
A website displays a list of data (there is a TON of div, li, span, etc. tags in front of it; it's a big HTML document).
I'm writing a tool that fetches data from a specific list inside a ton of other div tags, downloads it and outputs an Excel file.
The website I'm trying to access is dynamic. So you open the website, it loads a little bit, and then the list appears (probably some JS and stuff).
When I try to download the website via a WebRequest in C#, the HTML I get is almost empty, with a ton of white space, lots of non-HTML stuff and some garbage data as well.
Now: I'm pretty used to C#, HtmlAgilityPack and countless other libraries, not so much to web-related stuff though. I tried CefSharp, Chromium etc., all of that stuff, but couldn't get them to work properly, unfortunately.
I want to have HTML in my program to work with that looks exactly like the HTML you see when you open the dev console in Chrome when visiting the website mentioned above.
The HTML parser works flawlessly there.
This is how I imagine the code could look, simplified.
Extreme C# pseudocode:
WebBrowserEngine web = new WebBrowserEngine();
web.LoadURLUntilFinished(url); // with all the JS executed and stuff
string html = web.GetHTML();
web.Close();
My goal would be that the string html in the pseudocode looks exactly like the one in the Chrome dev tab.
Maybe there is a solution posted somewhere else, but I swear I couldn't find it; I've been looking for days.
Any help is greatly appreciated.
@SpencerBench is spot on in saying:
It could be that the page is using some combination of scroll state, element visibility, or element positions to trigger content loading. If that's the case, then you'll need to figure out what it is and trigger it programmatically.
To answer the question for your specific use case, we need to understand the behaviour of the page you want to scrape data from, or as I asked in the comments, how do you know the page is "finished"?
However, it's possible to give a fairly generic answer to the question which should act as a starting point for you.
This answer uses Selenium, a package which is commonly used for automating testing of web UIs, but as they say on their home page, that's not the only thing it can be used for.
Primarily it is for automating web applications for testing purposes, but is certainly not limited to just that. Boring web-based administration tasks can (and should) also be automated as well.
The web site I'm scraping
So first we need a web site. I've created one using ASP.NET Core MVC with .NET Core 3.1, although the web site's technology stack isn't important; it's the behaviour of the page you want to scrape that matters. This site has 2 pages, unimaginatively called Page1 and Page2.
Page controllers
There's nothing special in these controllers:
namespace StackOverflow68925623Website.Controllers
{
    using Microsoft.AspNetCore.Mvc;

    public class Page1Controller : Controller
    {
        public IActionResult Index()
        {
            return View("Page1");
        }
    }
}

namespace StackOverflow68925623Website.Controllers
{
    using Microsoft.AspNetCore.Mvc;

    public class Page2Controller : Controller
    {
        public IActionResult Index()
        {
            return View("Page2");
        }
    }
}
API controller
There's also an API controller (i.e. it returns data rather than a view) which the views can call asynchronously to get some data to display. This one just creates an array of the requested number of random strings.
namespace StackOverflow68925623Website.Controllers
{
    using Microsoft.AspNetCore.Mvc;
    using System;
    using System.Collections.Generic;
    using System.Text;

    [Route("api/[controller]")]
    [ApiController]
    public class DataController : ControllerBase
    {
        // Shared instance: creating a new Random per call can repeat seeds
        // when successive calls happen within the same clock tick.
        private static readonly Random random = new Random();

        [HttpGet("Create")]
        public IActionResult Create(int numberOfElements)
        {
            var response = new List<string>();
            for (var i = 0; i < numberOfElements; i++)
            {
                response.Add(RandomString(10));
            }
            return Ok(response);
        }

        private string RandomString(int length)
        {
            var sb = new StringBuilder();
            for (var i = 0; i < length; i++)
            {
                var characterCode = random.Next(65, 91); // A-Z (upper bound is exclusive)
                sb.Append((char)characterCode);
            }
            return sb.ToString();
        }
    }
}
Views
Page1's view looks like this:
@{
    ViewData["Title"] = "Page 1";
}
<div class="text-center">
    <div id="list" />
    <script src="~/lib/jquery/dist/jquery.min.js"></script>
    <script>
        var apiUrl = 'https://localhost:44394/api/Data/Create';
        $(document).ready(function () {
            $('#list').append('<li id="loading">Loading...</li>');
            $.ajax({
                url: apiUrl + '?numberOfElements=20000',
                datatype: 'json',
                success: function (data) {
                    $('#loading').remove();
                    var insert = '';
                    for (var item of data) {
                        insert += '<li>' + item + '</li>';
                    }
                    insert = '<ul id="results">' + insert + '</ul>';
                    $('#list').html(insert);
                },
                error: function (xht, status) {
                    alert('Error: ' + status);
                }
            });
        });
    </script>
</div>
So when the page first loads, it just contains an empty div called list. However, the page load triggers the function passed to jQuery's $(document).ready function, which makes an asynchronous call to the API controller, requesting an array of 20,000 elements. While the call is in progress, "Loading..." is displayed on the screen, and when the call returns, this is replaced by an unordered list containing the received data. This is written in a way intended to be friendly to developers of automated UI tests, or of screen scrapers, because we can tell whether all the data has loaded by testing whether or not the page contains an element with the ID results.
Page2's view looks like this:
@{
    ViewData["Title"] = "Page 2";
}
<div class="text-center">
    <div id="list">
        <ul id="results" />
    </div>
    <script src="~/lib/jquery/dist/jquery.min.js"></script>
    <script>
        var apiUrl = 'https://localhost:44394/api/Data/Create';
        var requestCount = 0;
        var maxRequests = 20;
        $(document).ready(function () {
            getData();
        });
        function getDataIfAtBottomOfPage() {
            console.log("scroll - " + requestCount + " requests");
            if (requestCount < maxRequests) {
                console.log("scrollTop " + document.documentElement.scrollTop + " scrollHeight " + document.documentElement.scrollHeight);
                if (document.documentElement.scrollTop > (document.documentElement.scrollHeight - window.innerHeight - 100)) {
                    getData();
                }
            }
        }
        function getData() {
            window.onscroll = undefined;
            requestCount++;
            $('#results').append('<li id="loading">Loading...</li>');
            $.ajax({
                url: apiUrl + '?numberOfElements=50',
                datatype: 'json',
                success: function (data) {
                    var insert = '';
                    for (var item of data) {
                        insert += '<li>' + item + '</li>';
                    }
                    $('#loading').remove();
                    $('#results').append(insert);
                    if (requestCount < maxRequests) {
                        window.setTimeout(function () { window.onscroll = getDataIfAtBottomOfPage }, 1000);
                    } else {
                        $('#results').append('<li>That\'s all folks</li>');
                    }
                },
                error: function (xht, status) {
                    alert('Error: ' + status);
                }
            });
        }
    </script>
</div>
This gives a nicer user experience because it requests data from the API controller in multiple smaller chunks, so the first chunk of data appears fairly quickly. Once the user has scrolled down to somewhere near the bottom of the page, the next chunk of data is requested, until 20 chunks have been requested and displayed, at which point the text "That's all folks" is added to the end of the unordered list. However, this is more difficult to interact with programmatically because you need to scroll the page down to make the new data appear.
(Yes, this implementation is a bit buggy - if the user gets to the bottom of the page too quickly then requesting the next chunk of data doesn't happen until they scroll up a bit. But the question isn't about how to implement this behaviour in a web page, but about how to scrape the displayed data, so please forgive my bugs.)
The scraper
I've implemented the scraper as an xUnit test project, just because I'm not doing anything with the data I've scraped from the web site other than asserting that it is of the correct length, and thereby proving that I haven't prematurely assumed that the web page I'm scraping from is "finished". You can put most of this code (other than the asserts) into any type of project.
Having created your scraper project, you need to add the Selenium.WebDriver and Selenium.WebDriver.ChromeDriver NuGet packages.
Page Object Model
I'm using the Page Object Model pattern to provide a layer of abstraction between functional interaction with the page and the implementation detail of how to code that interaction. Each of the pages in the web site has a corresponding page model class for interacting with that page.
First, a base class with some code which is common to more than one page model class.
namespace StackOverflow68925623Scraper
{
    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Support.UI;

    public class PageModel
    {
        protected PageModel(IWebDriver driver)
        {
            this.Driver = driver;
        }

        protected IWebDriver Driver { get; }

        public void ScrollToTop()
        {
            var js = (IJavaScriptExecutor)this.Driver;
            js.ExecuteScript("window.scrollTo(0, 0)");
        }

        public void ScrollToBottom()
        {
            var js = (IJavaScriptExecutor)this.Driver;
            js.ExecuteScript("window.scrollTo(0, document.body.scrollHeight)");
        }

        protected IWebElement GetById(string id)
        {
            try
            {
                return this.Driver.FindElement(By.Id(id));
            }
            catch (NoSuchElementException)
            {
                return null;
            }
        }

        protected IWebElement AwaitGetById(string id)
        {
            var wait = new WebDriverWait(Driver, TimeSpan.FromSeconds(10));
            return wait.Until(e => e.FindElement(By.Id(id)));
        }
    }
}
This base class gives us 4 convenience methods:
Scroll to the top of the page
Scroll to the bottom of the page
Get the element with the supplied ID, or return null if it doesn't exist
Get the element with the supplied ID, or wait for up to 10 seconds for it to appear if it doesn't exist yet
And each page in the web site has its own model class, derived from that base class.
namespace StackOverflow68925623Scraper
{
    using OpenQA.Selenium;

    public class Page1Model : PageModel
    {
        public Page1Model(IWebDriver driver) : base(driver)
        {
        }

        public IWebElement AwaitResults => this.AwaitGetById("results");

        public void Navigate()
        {
            this.Driver.Navigate().GoToUrl("https://localhost:44394/Page1");
        }
    }
}

namespace StackOverflow68925623Scraper
{
    using OpenQA.Selenium;

    public class Page2Model : PageModel
    {
        public Page2Model(IWebDriver driver) : base(driver)
        {
        }

        public IWebElement Results => this.GetById("results");

        public void Navigate()
        {
            this.Driver.Navigate().GoToUrl("https://localhost:44394/Page2");
        }
    }
}
And the Scraper class:
namespace StackOverflow68925623Scraper
{
    using OpenQA.Selenium.Chrome;
    using System;
    using System.Threading;
    using Xunit;

    public class Scraper
    {
        [Fact]
        public void TestPage1()
        {
            // Arrange
            var driver = new ChromeDriver();
            var page = new Page1Model(driver);
            page.Navigate();
            try
            {
                // Act
                var actualResults = page.AwaitResults.Text.Split(Environment.NewLine);

                // Assert
                Assert.Equal(20000, actualResults.Length);
            }
            finally
            {
                // Ensure the browser window closes even if things go pear-shaped
                driver.Quit();
            }
        }

        [Fact]
        public void TestPage2()
        {
            // Arrange
            var driver = new ChromeDriver();
            var page = new Page2Model(driver);
            page.Navigate();
            try
            {
                // Act
                while (!page.Results.Text.Contains("That's all folks"))
                {
                    Thread.Sleep(1000);
                    page.ScrollToBottom();
                    page.ScrollToTop();
                }
                var actualResults = page.Results.Text.Split(Environment.NewLine);

                // Assert - we expect 1001 because of the extra "That's all folks"
                Assert.Equal(1001, actualResults.Length);
            }
            finally
            {
                // Ensure the browser window closes even if things go pear-shaped
                driver.Quit();
            }
        }
    }
}
So, what's happening here?
// Arrange
var driver = new ChromeDriver();
var page = new Page1Model(driver);
page.Navigate();
ChromeDriver is in the Selenium.WebDriver.ChromeDriver package and implements the IWebDriver interface from the Selenium.WebDriver package with the code to interact with the Chrome browser. Other packages are available containing implementations for all popular browsers. Instantiating the driver object opens a browser window, and calling its Navigate method directs the browser to the page we want to test/scrape.
// Act
var actualResults = page.AwaitResults.Text.Split(Environment.NewLine);
Because on Page1, the results element doesn't exist until all the data has been displayed, and no user interaction is required in order for it to be displayed, we use the page model's AwaitResults property to just wait for that element to appear and return it once it has appeared.
AwaitResults returns an IWebElement instance representing the element, which in turn has various methods and properties we can use to interact with the element. In this case we use its Text property, which returns the element's contents as a string, without any markup. Because the data is displayed as an unordered list, each element in the list is delimited by a line break, so we can use String's Split method to convert it to a string array.
Page2 needs a different approach. We can't use the presence of the results element to determine whether the data has all been displayed, because that element is on the page right from the start; instead we need to check for the string "That's all folks", which is written right at the end of the last chunk of data. Also, the data isn't loaded all in one go, and we need to keep scrolling down in order to trigger the loading of the next chunk of data.
// Act
while (!page.Results.Text.Contains("That's all folks"))
{
    Thread.Sleep(1000);
    page.ScrollToBottom();
    page.ScrollToTop();
}
var actualResults = page.Results.Text.Split(Environment.NewLine);
Because of the bug in the UI that I mentioned earlier, if we get to the bottom of the page too quickly, the fetch of the next chunk of data isn't triggered, and attempting to scroll down when already at the bottom of the page doesn't raise another scroll event. That's why I'm scrolling to the bottom of the page and then back to the top - that way I can guarantee that a scroll event is raised. You never know, the web site you're trying to scrape data from may itself be buggy.
Once the "That's all folks" text has appeared, we can go ahead and get the results element's Text property and convert it to a string array as before.
// Assert - we expect 1001 because of the extra "that's all folks"
Assert.Equal(1001, actualResults.Length);
This is the bit that won't be in your code. Because I'm scraping a web site which is under my control, I know exactly how much data it should be displaying so I can check that I've got all the data, and therefore that my scraping code is working correctly.
Further reading
Absolute beginner's introduction to Selenium: https://www.guru99.com/selenium-csharp-tutorial.html
(A curiosity in that article is the way that it starts by creating a console application project and later changes its output type to class library and manually adds the unit test packages, when the project could have been created using one of Visual Studio's unit test project templates. It gets to the right place in the end, albeit via a rather odd route.)
Selenium documentation: https://www.selenium.dev/documentation/
Happy scraping!
If you need to fully execute the web page, then a complete browser like CefSharp is your only option.
It could be that the page is using some combination of scroll state, element visibility, or element positions to trigger content loading. If that's the case, then you'll need to figure out what it is and trigger it programmatically. I know that CefSharp can simulate user actions like clicking, scrolling, etc.

MVC reCaptcha Only Returns False

I am using:
http://mvcrecaptcha.codeplex.com/
My problem is very simple!
bool captchaValid
always returns false, no matter what I do.
Here is my code:
[CaptchaValidator]
[HttpPost]
public ActionResult ViewWidget(int id, TagwallViewModel model, bool captchaValid)
{
    model.TagwallCollection = new TagWallCollection() { Id = id };
    if (!captchaValid)
    {
        ModelState.AddModelError("_FORM", "You did not type the verification word correctly. Please try again.");
    }
    else
It shows no errors.
Things I have done differently, but which I think have no influence:
The .cs files downloaded from CodePlex are not in the same folders.
I registered on https://www.google.com/recaptcha/admin/create to get my two keys with an online domain, but I'm testing it on localhost.
That was my problem, sorry for troubling you! Having a bad code day.
I am using Razor.

ASP .NET MVC3 ViewBag sanitizing string

I need to pass an array of strings to a JavaScript function in my view, based on data from the database. So I have this code in the controller:
string top_six_string = "[";
foreach (ObjectModel om in collection)
{
    myProject.Models.BlobFile file = null;
    if (om.BlobFile != null)
    {
        file = om.BlobFile;
    }
    else if (om.BlobFiles.Count != 0)
    {
        file = om.BlobFiles.First();
    }
    if (file != null)
    {
        top_six_string += " \"" + file.BlobFileID + "\",";
    }
}
top_six_string = top_six_string.TrimEnd(',');
top_six_string += "]";
ViewBag.TopSixList = top_six_string;
Now, I don't particularly understand why we have both a BlobFile field and a BlobFiles collection, but that's not the point. The point is, debugging shows that I accurately get the string I want (of the form ["25", "21", "61", "59"]).
But when running the JavaScript, I got the confusing error "Unexpected character &", and a little source-viewing in Chrome led me to learn that the string came out looking like this:
[ &quot;25&quot;, &quot;21&quot;, &quot;61&quot;, &quot;59&quot;]
So my assumption is that the ViewBag is sanitizing the string it is passed for display in HTML, but obviously that isn't what I want here. Am I correct in my assumption? Is there another way to pass the view this information? Is there a way I can coerce the string back to quotes afterwards?
The problem is most likely when you output the contents of the ViewBag in your View. By default the Html helpers sanitize output to help protect against injection attacks.
What you want is this when outputting the value in your View: @Html.Raw(ViewBag.TopSixList)
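For example, in the view (a minimal sketch; the script block and variable name are illustrative):
<script>
    // Html.Raw emits the string verbatim, so the quotes are not encoded to &quot;
    var topSixList = @Html.Raw(ViewBag.TopSixList); // e.g. ["25", "21", "61", "59"]
</script>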
Since programmers barely use MVC3 any more, and Google also shows this page for ASP.NET Core:
In ASP.NET Core, change this line:
ViewBag.TopSixList = top_six_string;
To
ViewBag.TopSixList = new HtmlString(top_six_string);
And add using Microsoft.AspNetCore.Html; if HtmlString is not accessible.
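With that change the view can emit the value with a plain @, since Razor writes an HtmlString (IHtmlContent) without re-encoding it. A minimal sketch, with an illustrative variable name:
<script>
    var topSixList = @ViewBag.TopSixList; // HtmlString renders unencoded
</script>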

Calling a PageMethod

Hi, I have the following page method; however, it does not seem to be working. I tried debugging it and it does not hit the method. Here is what my method looks like:
function InsertStatus() {
    var fStatus = document.getElementById('<%=txtStatus.ClientID %>').value;
    PageMethods.InsertStatusUpdate(fStatus, onSuccess, onError);
    function onSuccess(result) {
        alert(result);
    }
    function onError(result) {
        alert('Cannot process your request at the moment, please try later.');
    }
}
And my code-behind:
[WebMethod]
public static string InsertStatusUpdate(string fStatus)
{
    string Result = "";
    int intUserID = -1;
    if (String.IsNullOrEmpty(HttpContext.Current.User.Identity.Name))
        HttpContext.Current.Response.Redirect("/login");
    else
        intUserID = Convert.ToInt32(HttpContext.Current.User.Identity.Name);
    if (string.IsNullOrEmpty(fStatus))
        return Result = "Please enter a status";
    else
    {
        // send data back to database
        return Result = "Done";
    }
}
When I click my button it goes straight to the onError method. Can anyone see what I am doing wrong?
I found the problem: I needed a [System.Web.Script.Services.ScriptService] attribute above the method, due to the fact that it is being called by a script. Thanks for all the suggestions.
If I were to guess, I would focus on this:
intUserID = Convert.ToInt32(HttpContext.Current.User.Identity.Name);
The best way to solve this is to set a breakpoint and start walking through the code. When you run a line and are redirected to the error page, you have found your problem.
The reason I picked that line is that the user name is a string. Now, it may be that your users are numbers, but the name could also include a domain, e.g. user == "mydomain/12345", which is not an integer even if the user part of the string is.
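A defensive guard along these lines (a hypothetical sketch, not the OP's code) would surface that problem without an exception:
// Hypothetical: parse the user name defensively instead of assuming it is numeric
int intUserID;
if (!int.TryParse(HttpContext.Current.User.Identity.Name, out intUserID))
{
    return "Unexpected user name format";
}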
As far as I know, you can't Response.Redirect in a PageMethod.
Return a string of the redirect URL and then use JavaScript document.location.href to handle the redirection.
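For example, a minimal sketch (the leading-slash check is illustrative; use whatever sentinel suits your page method):
function onSuccess(result) {
    if (result.charAt(0) === '/') {
        // The page method returned a redirect URL rather than a message
        document.location.href = result;
    } else {
        alert(result);
    }
}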
EDIT: I've just seen that you tried debugging and the method isn't hit: ensure your ScriptManager has EnablePageMethods set to true:
<asp:ScriptManager ID="ScriptManager1" runat="server" EnablePageMethods="true"/>

ASP.NET Payment Form Submission Need Guidance

Guys, I apologize if the question is less organized and less clear than it should be. I am in a hurry :(
My web app has a payment form that needs to be submitted to another ASP.NET page (let's call it http://vendor.com/getpay.aspx) residing on another server.
That page will do some mumbo-jumbo work and then redirect to the actual payment gateway site.
When I post my payment form to getpay.aspx via a simple HTML form, it works and redirects fine.
If I change the form and its hidden inputs to server-side controls, it doesn't work; their page throws a viewstate exception.
I need the form's hidden inputs to be server controls so that I can bind some values generated by my code-behind. (I think I can do this the classic ASP way using <%= %>, but that feels like going backwards in standards!)
I tried HttpWebRequest in the code-behind; it posts the form but the browser doesn't redirect to the payment gateway page.
I am posting the payment info over non-HTTPS; how can I prevent the user tampering with the posted data?
I want to validate the payment form in the backend and then post it; I can't trust the user input data.
Also, the result is returned to my redirect page with query strings appended. This also happens over non-HTTPS.
How much can I trust this redirect data?
Thx much
Generate your form by clearing the Response and rewriting the HTML form out into the cleared response. When I get home I will trawl through my old code and provide an example.
EDIT:
OK, here is my code. I had to recreate it because I am still at work, but it goes a little like this:
Create an intermediate page to capture your variables from the ASPX page and then use this to send as a 'simple' form:
protected void Page_Load(object sender, EventArgs e)
{
    // Capture the post to this page
    IDictionary<string, string> variables = new Dictionary<string, string>();
    variables.Add("test", Request.Form["test"]); // collect all variables after checking they exist
    RewriteContent(variables);
}

public void RewriteContent(IDictionary<string, string> variables)
{
    string formContent = @"
        <html>
        <head>
            <title>My Form</title>
        </head>
        <body>
            <form action='' method=''>";
    foreach (KeyValuePair<string, string> keyVal in variables)
    {
        // Emit each captured variable as a hidden input named after its key
        formContent += @"<input type='hidden' name='" + keyVal.Key + "' value='" + keyVal.Value + "' />";
    }
    formContent += @"
            </form>
        </body>
        </html>"; // Add either an auto post in a javascript or an explicit submit button
    Response.Clear();
    Response.Write(formContent);
    Response.Flush();
    Response.End();
}
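For the auto-post mentioned in the comment above, a minimal sketch would be to give the form an id and append a small script before the closing body tag (the form target is taken from the question; the id is illustrative):
<form action='http://vendor.com/getpay.aspx' method='post' id='payForm'>
    <!-- hidden inputs built above -->
</form>
<script>document.getElementById('payForm').submit();</script>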
EDIT 2:
Sorry, I just realised I have not answered the other questions.
Q3/Q4/Q5: If you are not using HTTPS you cannot really stop tampering or be sure the response is correct, but you can restrict the chance that it is bogus. This can be achieved by hashing the values with a secret that is shared between your end and the destination; then, when you get the response, you hash the values and compare the result to the hash that was sent back to you before you accept that it is valid.
Most payment mechanisms are verified in this manner, usually with an MD5 or SHA1 hash. You can find more info at the following links:
http://msdn.microsoft.com/en-us/library/system.security.cryptography.sha1.aspx
http://www.developerfusion.com/code/4601/create-hashes-md5-sha1-sha256-sha384-sha512/
http://snippets.dzone.com/posts/show/5816
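To make the compare step concrete, here is a hedged sketch (the fields, delimiter and response parameter are hypothetical; use whatever your gateway actually specifies):
// Hypothetical verification of a gateway response hash (SHA1, hex-encoded)
static bool IsResponseValid(string amount, string orderId, string sharedSecret, string receivedHash)
{
    using (var sha1 = System.Security.Cryptography.SHA1.Create())
    {
        byte[] hashBytes = sha1.ComputeHash(System.Text.Encoding.UTF8.GetBytes(amount + "|" + orderId + "|" + sharedSecret));
        string expected = BitConverter.ToString(hashBytes).Replace("-", "");
        return string.Equals(expected, receivedHash, StringComparison.OrdinalIgnoreCase);
    }
}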
EDIT 3:
Doing some encryption now and thought I would share some code with you (because I am nice like that). It might give you an idea of what to do, and you can probably code better than me, so just tidy up my mess a bit :)
using System;
using System.Reflection;
using System.Security.Cryptography;
using System.Text;
using log4net;

namespace MyCompany.Cipher
{
    // Inferred from the switch statement below; the original snippet did not include this enum
    public enum EncodeStyle
    {
        Base64,
        Dig,
        Hex
    }

    // The original snippet omitted the class declaration; the name is a guess
    public class Sha1Hasher
    {
        private static readonly ILog log = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

        public string GenerateSha1HashForString(string valueToHash, EncodeStyle encodeStyle)
        {
            string hashedString = string.Empty;
            try
            {
                hashedString = SHA1HashEncode(Encoding.UTF8.GetBytes(valueToHash), encodeStyle);
            }
            catch (Exception ex)
            {
                if (log.IsErrorEnabled) { log.Error(string.Format("{0}\r\n{1}", ex.Message, ex.StackTrace)); }
                throw new Exception("Error trying to hash a string; information can be found in the error log", ex);
            }
            return hashedString;
        }

        private string ByteArrayToString(byte[] bytes, EncodeStyle encodeStyle)
        {
            StringBuilder output = new StringBuilder(bytes.Length);
            if (EncodeStyle.Base64 == encodeStyle)
            {
                return Convert.ToBase64String(bytes);
            }
            for (int i = 0; i < bytes.Length; i++)
            {
                switch (encodeStyle)
                {
                    case EncodeStyle.Dig:
                        // encode to decimal with 3 digits so 7 will be 007 (as range of 8 bit is 127 to -128)
                        output.Append(bytes[i].ToString("D3"));
                        break;
                    case EncodeStyle.Hex:
                        output.Append(bytes[i].ToString("X2"));
                        break;
                }
            }
            return output.ToString();
        }

        private string SHA1HashEncode(byte[] valueToHash, EncodeStyle encode)
        {
            SHA1 a = new SHA1CryptoServiceProvider();
            byte[] arr = a.ComputeHash(valueToHash);
            string hash = ByteArrayToString(arr, encode);
            return hash;
        }
    }
}
Put it in a class somewhere that your project can see, and it can generate an SHA1 hash based on a string value when you call the public method.
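Hypothetical usage (the class name Sha1Hasher is my guess, since the original snippet omitted its class declaration):
var hasher = new MyCompany.Cipher.Sha1Hasher();
string hexHash = hasher.GenerateSha1HashForString("amount|orderId|secret", MyCompany.Cipher.EncodeStyle.Hex);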
