I am writing a C# .NET website for management use within my company. The main page has an area for notices, which are contextually colored information panels. On the options page, you are given the ability to create them by filling out a form containing the notice's title, message, and a drop-down for its class.
One of the options I would like to provide to users is the ability to use {name} within the message to show the username of the person viewing the notice. I attempted to use message.Replace("{name}", "<asp:LoginName ID=\"LoginName1\" runat=\"server\" />");, but this resulted in that exact string being rendered, as opposed to the .NET parser converting it into the username.
I am using the default ASP.NET membership engine and database, though I've also heard that Silverlight is now the standard.
As for my questions: would I be better off rebuilding the project in Silverlight, assuming Silverlight has the capability to handle secure logins? Also, what would be the correct way to go about replacing the username? Would I be better off with something like {user=#}, where # is the user's ID? Theoretically, it would just be a database query from within my Notice class (the constructor calling a method that replaces BBCode with its HTML counterpart).
Thank you for your help
<asp:LoginName ID=\"LoginName1\" runat=\"server\" /> will work only if directly put in the ASPX. If you want to dynamically set the name without using a control, there's two way:
From the page's code-behind, you can call this.User.Identity.Name
From anywhere in the app, you can call HttpContext.Current.User.Identity.Name
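Applied to the original question, that means doing the token replacement server-side when the notice is rendered, rather than embedding a control tag in the stored message. A minimal sketch, assuming a hypothetical Notice class with a Message property:

using System.Web;

public class Notice
{
    public string Message { get; set; }

    // Replace the {name} token with the viewing user's name at render time.
    public string RenderMessage()
    {
        string userName = HttpContext.Current.User.Identity.Name;
        return Message.Replace("{name}", userName);
    }
}

Note that Identity.Name is an empty string for anonymous users, so you may want a fallback such as "Guest".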
What is this thing called? I was hoping I could use a similar drop-down as a "log in" box instead of having a whole page for logging in (which would contain text boxes and buttons).
Regarding the actual element circled, this is simply an enhanced client-side tooltip to display messages to the user. This one in particular looks like it relies on the jQuery UI library.
That's generally going to be generated by the built-in jQuery validation that comes standard in some of the ASP.NET related templates. It uses a bit of client-side code to let the user know if what they are typing in is valid or not.
In reality, this is just a form that will handle posting your information to the server and then routing the user to the appropriate location (and it could be handled with just a bit of HTML).
I need to create a "speed bump" that issues a warning whenever a user clicks on a link that would direct them to a different website (not on the domain). Is there any way to create a custom Orchard workflow activity that will activate whenever a link on the website is clicked? I'm having a problem getting C# to fire an event whenever a link (or anchor tag) on the page gets clicked (I can't just add an onServerClick event to every anchor tag or add an event handler to anchor tags with specific IDs because I need it to fire on all anchor tags many of which are dynamically assigned an id when created).
Another option I was toying with would be to create a custom workflow task that searches any content item for links and then adds a speed bump to any link that is determined to lead to an external URL. Is it possible to use C# to search the contents of any content item upon creation/publish for anchor tags, and then alter the tag somehow to include a speed bump?
As a side note, I also need to be able to whitelist URLs so a third party can't use the speed bump to direct the user to a malicious website.
I've been stumped on this for quite some time; any help would be greatly appreciated.
One way to do this is to add a bit of client-side script that intercepts the A tags' click events and handles them according to the logic you want to implement. The advantages are performance and ease of implementation. Very, very few people disable JavaScript, and those users who do can presumably read a domain name in the address bar, so there are no real downsides.
Another way, if you don't want to use JavaScript, is to write a server-side filter that parses the response being output, finds all A tags, and replaces their URLs on the fly with the URL of a special controller, with the actual URL being passed as a querystring parameter. The drawbacks of this method are that it will be a significant drag on server performance, and it will be hard to write.
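If you do go that route, the special controller itself is the easy part; the response filter is where the cost is. Here is a rough sketch of what such a controller might look like in ASP.NET MVC, with a hypothetical whitelist folded in (which also addresses the whitelisting requirement from the question; the host names and view name are placeholders):

using System;
using System.Linq;
using System.Web.Mvc;

public class SpeedBumpController : Controller
{
    // Hypothetical whitelist; in practice this would come from site configuration.
    private static readonly string[] TrustedHosts = { "example.com", "partner.example.org" };

    public ActionResult Leave(string url)
    {
        Uri target;
        if (!Uri.TryCreate(url, UriKind.Absolute, out target))
            return new HttpStatusCodeResult(400);

        // Whitelisted hosts bypass the warning entirely.
        if (TrustedHosts.Any(h => target.Host.Equals(h, StringComparison.OrdinalIgnoreCase)))
            return Redirect(target.AbsoluteUri);

        // Anything else gets the interstitial warning page with a "continue" link.
        ViewBag.Target = target.AbsoluteUri;
        return View("SpeedBump");
    }
}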
But the best way to solve the issue, by far, for you and your users, is to convince your legal department that this is an extremely bad idea and that there is, in reality, no legal issue here (though I may be wrong about that: I am not a lawyer, and this is not legal advice).
I'm trying to help save time at work on a lot of tedious copy/paste tasks we have.
So, we have a proprietary CRM (with proper HTML IDs, etc., for accessing elements), and I'd like to copy those values from the CRM to textboxes on other web pages (outside of the CRM, so sites like Twitter, Facebook, Google, etc.).
I'm aware browsers limit this for security, and I'm open to anything: it can be a C#/C++ application, Adobe AIR, etc. We only use Firefox at work, so even an extension would work. (We do have Greasemonkey installed, so if that's usable too, sweet.)
So, any ideas on how to copy values from one web page to another? Ideally, I'm looking to click a button and have it auto-populate fields. If that button has to launch the web pages the data needs to be copied into, that's fine.
Example: copy a customer's username from our CRM and paste it into Facebook's username field when creating a new account.
UPDATE: To answer a user below, the HTML elements on each domain have specific HTML IDs. The data won't need to be manipulated or cleaned up, just a simple copy from ourCRM.com to facebook.com / twitter.com.
Ruby Mechanize is a good bet for scraping the data. Then you can store it and post it however you please.
First, I'd suggest that you more clearly define exactly what it is you're looking to do. I read this as: you're trying to take some unstructured data from point A and copy it to point B. Do the names of these fields remain constant every time you do the operation? Do you need to simply pull any textbox elements from the page and copy them all over? Do you need to do some sort of filtering of this data before writing it over?
Once you've got a clear idea of the requirements, if you go the C# route, I'd use something like SimpleBrowser. Judging by the example on their GitHub page, you could give it the URL of the page you're looking to copy, then name each of the fields you're looking to obtain the value of, perhaps store these in an IDictionary, then open a new URL and copy those values back into the page (and submit the form).
Alternatively, if you don't know the names of the fields, perhaps there's a provided function in that or a similar project that will allow you to simply enumerate all the text fields on the page and retrieve the values for all of them. Then you'd simply apply some logic of your own to filter those options down to whatever is on the destination form.
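For illustration, a rough sketch of that flow, assuming SimpleBrowser's Browser/Find API as shown in its README (the URLs and element IDs here are placeholders):

using SimpleBrowser;

class FieldCopier
{
    static void Main()
    {
        var browser = new Browser();

        // Read the value from the source page.
        browser.Navigate("http://ourcrm.example.com/customer/123");
        string username = browser.Find("customerUsername").Value;

        // Open the destination form, write the value back, then submit.
        browser.Navigate("http://destination.example.com/signup");
        browser.Find("username").Value = username;
        browser.Find(ElementType.Button, FindBy.Name, "submit").Click();
    }
}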
So we thought of an easier way to do this (in case anyone else runs into this issue):
1) From our CRM, we added a "Sign up for Facebook" button
2) The button opens a new window with GET variables in the URL
3) Use a Greasemonkey script to read those GET variables and fill in textbox values
4) SUCCESS!
Simple, took about 10 minutes to get working. Thanks for your suggestions.
I need to write some C# code for grabbing the contents of a web page. The steps look like the following:
Browse to the login page
I have a user name and a password; provide them programmatically and log in
Then you are on the detail page
You have to get some information there (product ID, description, etc.)
Then I need to click (by code) on Detail View
Then you can get the price for that product from there
Once that is done, we can write a detail line into a text file like this...
ABC Printer::225519::285.00
Please help me with this. (Even VB.NET code is OK; I can convert it to C#.)
The WatiN library is probably what you want, then. Basically, it controls a web browser (native support for IE and Firefox, I believe, though they may have added more since I last used it) and provides an easy syntax for programmatically interacting with page elements within that browser. All you'll need are the names and/or IDs of those elements, or some unique way to identify them on the page.
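A rough sketch of the question's flow using WatiN (the element names, IDs, and URLs are placeholders; WatiN's IE automation requires an STA thread):

using System;
using WatiN.Core;

class Scraper
{
    [STAThread] // WatiN requires a single-threaded apartment.
    static void Main()
    {
        using (var browser = new IE("http://example.com/login"))
        {
            // Fill in and submit the login form.
            browser.TextField(Find.ByName("username")).TypeText("myUser");
            browser.TextField(Find.ByName("password")).TypeText("myPass");
            browser.Button(Find.ByName("login")).Click();

            // Grab fields from the detail page, then click through to the price.
            string productId = browser.Span(Find.ById("productId")).Text;
            string description = browser.Span(Find.ById("description")).Text;
            browser.Link(Find.ByText("Detail View")).Click();
            string price = browser.Span(Find.ById("price")).Text;

            // Write the detail line in the format from the question.
            System.IO.File.AppendAllText("output.txt",
                string.Format("{0}::{1}::{2}", description, productId, price) + Environment.NewLine);
        }
    }
}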
You should be able to achieve this using the WebRequest class to retrieve pages, and the HTML Agility Pack to extract elements from HTML source.
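A minimal sketch of that combination (the URL and XPath are placeholders; if the site requires a login, you would also need to carry cookies between requests via a CookieContainer):

using System;
using System.IO;
using System.Net;
using HtmlAgilityPack;

class PageGrabber
{
    static void Main()
    {
        // Fetch the raw HTML with WebRequest.
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/products");
        string html;
        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            html = reader.ReadToEnd();
        }

        // Parse it with the HTML Agility Pack and extract the element we want.
        var doc = new HtmlDocument();
        doc.LoadHtml(html);
        var priceNode = doc.DocumentNode.SelectSingleNode("//span[@id='price']");
        if (priceNode != null)
            Console.WriteLine(priceNode.InnerText);
    }
}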
Yea, I downloaded that library. Nice one.
Thanks for sharing it with me. But I have an issue with that library: the site I want to get data from has a "captcha" on the login page.
I could enter that value myself if the library could show the image and wait for my input.
Can we achieve that with this library? If so, I'd like to see a sample.
You should be able to achieve this by using two classes in C#, HttpWebRequest (to request the web pages) and perhaps XmlTextReader (to parse the HTML/XML response).
If you do not wish to use XmlTextReader, then I'd advise looking into regular expressions, as they are fantastically useful for extracting information from large bodies of text wherein patterns exist.
How to: Send Data Using the WebRequest Class
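For completeness, a small sketch of the regex side of that suggestion (the URL and pattern are placeholders, and regular expressions are brittle against HTML compared to a real parser):

using System;
using System.Net;
using System.Text.RegularExpressions;

class RegexGrabber
{
    static void Main()
    {
        // WebClient is a convenience wrapper over WebRequest.
        string html = new WebClient().DownloadString("http://example.com/detail");

        // Hypothetical pattern: capture whatever sits inside the price span.
        Match match = Regex.Match(html, @"<span id=""price"">([^<]+)</span>");
        if (match.Success)
            Console.WriteLine(match.Groups[1].Value);
    }
}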
OK, so we have an online downloads store accessed via our software. Recently we've had requests to allow downloads via normal browsers, and it's fairly easy just to slap a download page on. The problem is that it would be confusing for people to have two download links, one for the software and one for their web browser, so we want to differentiate between the two and only show the relevant download link.
From what I've gathered, the .NET WebBrowser component is the same as IE and uses the same user agent, so we can't use that unless we subclass the WebBrowser in the software to make it use a specific user agent. That's the more sensible option, but we'd have to roll out another updated version, which is less than ideal.
Are there any other ways to tell if someone's accessing a site via the .net component? My only other alternative is to copy the store to a different address with the different download links and send people there. Again this is doable, but not ideal.
Check if window.external is null. IE implements window.external with methods like AddSearchProvider, whereas in the WebBrowser control window.external is backed by WebBrowser.ObjectForScripting, which is null most of the time.
I'm not sure if there is any better way to do this, but here is one idea... The WebBrowser control has a property Document that gives you access to the DOM object representing the loaded document (after the page is loaded). This object has InvokeScript method that you can use to run some JavaScript in the loaded page.
You could write a simple JavaScript function, say hideWebDownload() that would switch the view to a view used when the application runs locally and invoke it from your WinForms application that hosts the WebBrowser control:
webCtrl.Document.InvokeScript("hideWebDownload");
The default view of the page would show the download link for web and calling this function in the local application would switch the view to local download link.
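On the hosting side, that call has to wait until the document has loaded. A minimal sketch of the WinForms wiring (the form, URL, and the hideWebDownload function are the hypothetical pieces from above):

using System.Windows.Forms;

public class StoreForm : Form
{
    private readonly WebBrowser webCtrl = new WebBrowser();

    public StoreForm()
    {
        webCtrl.Dock = DockStyle.Fill;
        Controls.Add(webCtrl);

        // Invoke the page's script only after the DOM is available.
        webCtrl.DocumentCompleted += (sender, e) =>
            webCtrl.Document.InvokeScript("hideWebDownload");

        webCtrl.Navigate("http://store.example.com/downloads");
    }
}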
Have your software pass in an invisible (to the user) value in the querystring of the URL.
Then it's trivial to check whether that value is present.
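For example, in the download page's code-behind (the parameter name and control IDs are hypothetical):

// Show the software-specific link only when the app appended ?client=app to the URL.
bool fromSoftware = Request.QueryString["client"] == "app";
appDownloadLink.Visible = fromSoftware;
webDownloadLink.Visible = !fromSoftware;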