I have a web-based application. The content for the Home page is currently hard-coded in the page's HTML markup, so to change the content at any time in the future, the HTML itself has to be edited. :(
Is there a way we can pick up the content from some external place and have it reflected on the website? This way, any change, if required, can be made at the external location without touching the application's code.
Please advise if there is any solution for it.
Thanks.
You can:
- use a database
- include external files using Server Side Includes
- read external files and write out their contents (there is an alternative method for this as well); a sketch of this option follows below
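As a minimal sketch of the file-reading option, assuming an ASP.NET Web Forms page with a Literal control named ContentLiteral (the control name and file path are hypothetical): the page pulls its content from a file kept outside the code, so an editor only ever touches that file.

```csharp
using System;
using System.IO;
using System.Web.UI;

public partial class Home : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Server.MapPath resolves the virtual path to a physical file
        // on the server. The path is a placeholder.
        string contentPath = Server.MapPath("~/Content/home.html");

        // The Literal control renders the file's markup as-is,
        // so editors can change the file without touching code.
        ContentLiteral.Text = File.ReadAllText(contentPath);
    }
}
```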
Sounds like you're looking for a Content Management System (CMS), which will allow your content editors to modify only the specific blocks of a page that you designate.
There are a ton out there to do what you want, so you don't have to start from scratch. Just Google 'CMS'.
Although I haven't used it myself, DotNetNuke is a popular one these days and has a free version.
Related
I've created a website for a client of mine. It is coded in ASP.NET with C# and hosted on GoDaddy. She requires the website to be updated daily, by her. However, this client has very little knowledge of how to edit HTML or text within a site, and I don't want to edit it every time she wants the site updated.
What would be the best solution to my problem? I have looked at Content Management Systems, but I'm a little confused about what exactly they do in terms of coding and managing an existing site. Does a CMS require me to reformat the whole site to follow its 'templates'? Would it be better for me to design my own back-end panel for her to edit the content (which would obviously take significant work)?
If you want to stick with a site you're developing from scratch, I'd use the HtmlEditor from the AjaxControlToolkit or a similar control, and store the HTML content in the database.
Then, when outputting the HTML from the database to the client pages, I'd make sure to use the Microsoft Anti-Cross Site Scripting (AntiXSS) Library to sanitize the HTML with its GetSafeHtmlFragment() function (since this is tagged asp.net). It's not that much work, actually, if you design the database correctly and you've got the skills.
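As a hedged sketch of that sanitizing step (in AntiXSS 4.x the method lives on Microsoft.Security.Application.Sanitizer; older versions expose it on the AntiXss class, and the wrapper class here is purely illustrative):

```csharp
using Microsoft.Security.Application;

public static class ContentRenderer
{
    // rawHtml is the editor-supplied content loaded from the database.
    public static string ToSafeHtml(string rawHtml)
    {
        // GetSafeHtmlFragment strips script and other dangerous markup
        // while keeping basic formatting tags intact.
        return Sanitizer.GetSafeHtmlFragment(rawHtml);
    }
}
```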
CMS systems are (trying not to oversimplify) entire web sites that are already built and allow people to edit the content using built-in content editing functionality. They range in functionality and extensibility from a "You get what you get and there's very little you can change" to "You can customize the heck out of it and buy or build your own modules to extend functionality." There are a lot of good ones out there, some free, and some expensive.
I'm working at a small company within a rather large company, where I don't really have control over our intranet. I have built a little site/page, and I want it to style exactly like the intranet pages.
I know I can download the stylesheets and start hacking away, but I need the links and the menus to be up to date.
I'm working with asp.net mvc 2 here, but I've no idea how to go further from here. Thoughts?
You will need to copy the CSS etc.
About the menu, you will need to do the following:
use a WebRequest to fetch the page, the Html Agility Pack to parse it, and an XPath query to extract the relevant data. I'd recommend caching the result; a rough sketch follows.
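A rough sketch of that pipeline, assuming the menu lives in a div with id "menu" on the intranet home page; the URL, XPath, and cache key are placeholders, and WebClient stands in for a raw WebRequest for brevity:

```csharp
using System;
using System.Net;
using System.Web;
using HtmlAgilityPack;

public static class IntranetMenu
{
    public static string GetMenuHtml()
    {
        // Serve from the ASP.NET cache when possible, so every page view
        // doesn't trigger a round trip to the intranet server.
        var cached = HttpRuntime.Cache["intranet-menu"] as string;
        if (cached != null)
            return cached;

        string html;
        using (var client = new WebClient())
        {
            html = client.DownloadString("http://intranet/home");
        }

        var doc = new HtmlDocument();
        doc.LoadHtml(html);

        // XPath for the menu container -- an assumption about the markup.
        var menuNode = doc.DocumentNode.SelectSingleNode("//div[@id='menu']");
        string menuHtml = menuNode != null ? menuNode.OuterHtml : string.Empty;

        // Cache for an hour; tune to how often the intranet changes.
        HttpRuntime.Cache.Insert("intranet-menu", menuHtml, null,
            DateTime.Now.AddHours(1),
            System.Web.Caching.Cache.NoSlidingExpiration);

        return menuHtml;
    }
}
```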
I'd like to create a web page where you can enter your domain name and have it fetch the site and show you all the resources, their download times, etc., similar to Firefox's Net tab.
Here's the page which I'd like emulate: http://tools.pingdom.com/
Now, I know this is a complex feature, but I'd like to hear general ideas. I know I could easily fetch the HTML via a WebClient, but that's the easy part. I need to fetch and time all the resources too, and not all at the same time. I want to mimic a browser. So, I thought about using something like System.Windows.Forms.WebBrowser, but that will only really give me the page load time.
Anyone have any thoughts / tips?
Using the Html Agility Pack you can easily find which external resources are referenced from an HTML page.
This won't tell you exactly when they would be loaded by the browser, and also won't help you with dynamically loaded resources, but is a good start.
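For illustration, a small sketch of that starting point: list every external resource a page references (the URL is a placeholder):

```csharp
using System;
using System.Net;
using HtmlAgilityPack;

class ResourceLister
{
    static void Main()
    {
        // Fetch the raw HTML of the page to analyze.
        string html = new WebClient().DownloadString("http://example.com/");

        var doc = new HtmlDocument();
        doc.LoadHtml(html);

        // Images, scripts, and stylesheets referenced by the page.
        var nodes = doc.DocumentNode.SelectNodes(
            "//img[@src] | //script[@src] | //link[@href]");
        if (nodes == null)
            return; // SelectNodes returns null when nothing matches.

        foreach (var node in nodes)
        {
            string attr = node.Name == "link" ? "href" : "src";
            Console.WriteLine(node.GetAttributeValue(attr, string.Empty));
        }
    }
}
```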
I'm afraid the only way to be sure is to instantiate an entire browser. You could use a plug-in for the Fiddler HTTP debugging proxy to intercept requests from the WebBrowser control and determine which resources are actually loaded in that case.
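Along those lines, a hedged sketch using FiddlerCore, the embeddable Fiddler engine, to log each request as the WebBrowser control loads a page; the port number is arbitrary:

```csharp
using System;
using Fiddler;

class RequestLogger
{
    static void Main()
    {
        // Fires once per outgoing request: every image, script, and
        // stylesheet the browser fetches shows up here with a timestamp.
        FiddlerApplication.BeforeRequest += delegate(Session session)
        {
            Console.WriteLine("{0:HH:mm:ss.fff} {1}",
                DateTime.Now, session.fullUrl);
        };

        // Register as the system proxy so the WebBrowser control's
        // traffic flows through us. (port, registerAsSystemProxy, decryptSSL)
        FiddlerApplication.Startup(8877, true, false);

        Console.WriteLine("Proxy running; press Enter to stop.");
        Console.ReadLine();
        FiddlerApplication.Shutdown();
    }
}
```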
I need to display HTML in my Silverlight application and cannot find a way of doing it. I cannot use the WebBrowser control, as the application needs to be able to run both in and out of the browser.
Does anyone know of a good way to do this? All I can think of at the moment is running replace methods on the text to swap the tags for C# equivalents, e.g. <br /> to \n.
The way I do it is to check if the application is running inside the browser and change the means of display accordingly. If running inside the browser, I overlay the application with an IFrame, as I describe in this article: http://www.silverlightshow.net/items/Building-a-Silverlight-Line-Of-Business-Application-Part-6.aspx. Otherwise, I use the WebBrowser control. I have a control which does this all for you in the source code that accompanies my book, which is downloadable from the Apress website here: http://www.apress.com/book/downloadfile/4638.
Hope this helps...
Chris
I believe what you are looking for is HTML Bridge.
Edit: I am actually now unsure whether you'll still have access to JavaScript if you're running this OOB (out of browser). I'm going to look into this some more and will update further. I'll still leave the answer up for reference, though.
Second Edit: Here is what I've found. HTML Bridge is disabled when you run Silverlight out of browser. This disables access to the HTML DOM as well as JavaScript. However, according to a comment on this site:
HTML Bridge is not available when you first install an OOB app, but you CAN force it if you modify the index.html in the folder where the app is installed, just adding the enablehtmlaccess parameter.
It works!
You can even create dynamic HTML elements using the well-known methods of the HtmlPage class. You can even open a new browser window with the Navigate() method and its "_blank" parameter.
Keep in mind that this information was posted about SL 3. It's possible that this has changed, but I doubt it. So it seems that what you may want to do is build a check into the startup of your SL app that detects whether or not it is running out of browser; if it is, you may want a script you can call that modifies this file for you.
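A minimal sketch of the detection part, assuming a standard App.xaml.cs startup handler and a MainPage root (hypothetical names; Silverlight 3+):

```csharp
using System.Windows;
using System.Windows.Browser;

public partial class App : Application
{
    private void Application_Startup(object sender, StartupEventArgs e)
    {
        // IsRunningOutOfBrowser is available from Silverlight 3 onwards;
        // HtmlPage.IsEnabled reports whether the HTML Bridge is usable.
        if (Application.Current.IsRunningOutOfBrowser || !HtmlPage.IsEnabled)
        {
            // HTML Bridge unavailable: fall back to inline HTML parsing
            // or the workaround described above.
        }
        else
        {
            // Safe to use HtmlPage and JavaScript interop here.
        }

        this.RootVisual = new MainPage();
    }
}
```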
There recently was a similar question.
I posted a link there to an implementation that parses and displays HTML inline in Silverlight. Of course, it will work only with simple HTML, but maybe you can expand it to your needs.
I've been struggling to find an example of some C# code (I'm using C# Visual Studio 2008 Express) that can programmatically save an entire web page (given a URL), including the images and formatting (e.g. CSS). The intention is that in a subsequent phase I'd ship this off (not sure how yet) so it could be viewed later via a browser.
Is there an example of the most simple approach (leveraging the .NET Framework methods) to save an entire web page? Saving as one page with a subdirectory for images, or otherwise. Basically the same as what you get with browsers when you say "save entire web page".
The simplest way is probably to add a WebBrowser Control to your application and point it at the page you want to save using the Navigate() method.
Then, when the document has loaded, call the ShowSaveAsDialog method. The user can then save the page as a single file, or a file with images in a subdirectory.
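In its simplest form, something like this (WinForms; the URL is a placeholder):

```csharp
using System.Windows.Forms;

class SavePageDemo
{
    [System.STAThread]
    static void Main()
    {
        var form = new Form();
        var browser = new WebBrowser { Dock = DockStyle.Fill };
        form.Controls.Add(browser);

        // Offer the standard "Save Webpage" dialog once loading completes.
        browser.DocumentCompleted += (s, e) => browser.ShowSaveAsDialog();
        browser.Navigate("http://example.com/");

        Application.Run(form);
    }
}
```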
[Update]
Having now noticed "programmatically" in your question, the above approach is not ideal, as it requires either user involvement or delving into the Windows API to send input using SendKeys or similar.
There is nothing built-in to the .NET Framework that does all of what you ask.
So my revised approach would be:
Use System.Net.HttpWebRequest to get the main HTML document as a string or stream (easy).
Load this into an HtmlAgilityPack document, where you can easily query for lists of all the image elements, stylesheet links, etc.
Then make a separate web request for each of these files and save them to a subdirectory.
Finally, update all relevant links in the main page to point at the items in the subdirectory.
In effect you would be implementing a very simple web browser. You may run into issues with pages that use JavaScript to dynamically alter or request page content, but for most pages this should give acceptable results.
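A condensed sketch of those steps, handling just images for brevity; the URL and save directory are placeholders, and stylesheets would follow the same pattern:

```csharp
using System;
using System.IO;
using System.Net;
using HtmlAgilityPack;

class PageSaver
{
    static void Main()
    {
        var baseUri = new Uri("http://example.com/");
        string saveDir = @"C:\temp\saved";
        string filesDir = Path.Combine(saveDir, "files");
        Directory.CreateDirectory(filesDir);

        var client = new WebClient();
        var doc = new HtmlDocument();
        doc.LoadHtml(client.DownloadString(baseUri));

        var images = doc.DocumentNode.SelectNodes("//img[@src]");
        if (images != null)
        {
            foreach (var img in images)
            {
                // Resolve relative src values against the page address.
                var src = new Uri(baseUri, img.GetAttributeValue("src", ""));
                string fileName = Path.GetFileName(src.LocalPath);

                client.DownloadFile(src, Path.Combine(filesDir, fileName));

                // Point the saved page at the local copy.
                img.SetAttributeValue("src", "files/" + fileName);
            }
        }

        doc.Save(Path.Combine(saveDir, "page.html"));
    }
}
```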
From CodeProject: ZetaWebSpider
It's definitely not elegant, but you could navigate a System.Windows.Forms.WebBrowser to the URL and then call its ShowSaveAsDialog() method to save the page.