Content Editor for Client - C#

I've created a website for a client of mine. It is coded in ASP.NET with C# and hosted on GoDaddy. She requires this website to be updated daily, by her. However, this client has very little knowledge of how to edit HTML or text within a site, and I don't want to edit it every time she wants the site updated.
What would be the best solution to my problem? I have looked up Content Management Systems, but I'm a little confused by what exactly one does in terms of coding and managing the existing site. Does it require me to reformat the whole site to follow the CMS's 'templates'? Would it be better for me to design my own back-end panel for her to edit the content (this would obviously take significant work)?

If you want to stick with a site you're developing from scratch, I'd use the HtmlEditor from the AjaxControlToolkit or a similar control, and store the html content in the database.
Then, when outputting the HTML from the database to the client pages, I'd make sure to use the Microsoft Anti-Cross Site Scripting Library to sanitize the HTML using the GetSafeHtmlFragment() function (since this is tagged asp.net). It's not that much work, actually, if you design the database correctly and you've got the skills.
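Here is a minimal sketch of that flow, assuming the AntiXSS 4.x Sanitizer class and a hypothetical PageContent table; adjust the names and connection handling to your own schema:

```csharp
// Minimal sketch: store the editor's raw HTML, sanitize it on the way out.
// "PageContent" and the "Site" connection string are hypothetical placeholders.
using System;
using System.Configuration;
using System.Data.SqlClient;
using Microsoft.Security.Application;

public static class ContentStore
{
    static readonly string ConnStr =
        ConfigurationManager.ConnectionStrings["Site"].ConnectionString;

    // Save the raw HTML produced by the editor control.
    public static void Save(int pageId, string editorHtml)
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "UPDATE PageContent SET Html = @html WHERE PageId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@html", editorHtml);
            cmd.Parameters.AddWithValue("@id", pageId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    // Sanitize on output so stored markup can't inject script into the page.
    public static string Load(int pageId)
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT Html FROM PageContent WHERE PageId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", pageId);
            conn.Open();
            var raw = (string)cmd.ExecuteScalar();
            return Sanitizer.GetSafeHtmlFragment(raw);
        }
    }
}
```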
CMS systems are (trying not to oversimplify) entire web sites that are already built and allow people to edit the content using built-in content editing functionality. They range in functionality and extensibility from a "You get what you get and there's very little you can change" to "You can customize the heck out of it and buy or build your own modules to extend functionality." There are a lot of good ones out there, some free, and some expensive.

Related

Get HTML from Web Page and Create Setup Project for WPF Application (C#)

I'm trying to create a WPF application, something like a movie library, because I would like to manage and sort my movies with a pretty interface.
I'd like to build a library of all my movies, getting the information from the web, but I don't know exactly how.
I thought about getting the information from a website, for example IMDb, but I don't know if it's legal to capture HTML from a page to get the nested information.
It's my first desktop application, and I would also like to know whether it is necessary to create a database within the project and then create a setup project with a specified script to deploy it.
Sorry for the confusion, but I would like to know too many things at once :)
Thanks a lot in advance.
The legality of web scraping is a grey area. See my question, "Legality of Web Scraping vs Normal Use" and the corresponding answers for some insight.
Even if the legality is not a problem, web scraping is a flimsy approach because the webpage structure may change without notice, making your application suddenly useless until you update it to the new format. You are much better off using some sort of web API (if the site providing the information offers it).
Whether you need a database or not depends entirely on what your application will be doing and how you design it - it's not something any of us can tell you.
The same goes for the setup project; in fact, I wouldn't worry about that until you actually have a working application. Take it step by step and keep the scope under control.
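As a hedged illustration of the API approach, here is a minimal C# sketch; the endpoint and API key below are hypothetical placeholders for whichever movie API you pick:

```csharp
// Minimal sketch of the web-API approach instead of scraping.
// The endpoint and API key are hypothetical -- substitute the real service.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class MovieLookup
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // A structured JSON response is stable; scraped HTML is not.
            string url = "https://api.example-movies.com/search?title=Inception&api_key=YOUR_KEY";
            string json = await http.GetStringAsync(url);
            Console.WriteLine(json); // parse with a JSON library in real code
        }
    }
}
```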
Yes, I didn't think about an API.
It's a great idea; maybe I'll use "themoviedb".
But if I create an application based on it that has to show all the movies stored on my HDD and fetch, for example, the posters, the descriptions and the rankings, do I have to create a database, in your opinion?
Thanks a lot.

Migrating Public Website Slowly

I am currently working on a website built in C# around 2003: server controls, JavaScript written without any modern libraries, no data access layer, and plenty of spaghetti code.
We have decided that, due to the sheer size of the website, we will have to migrate web pages a piece at a time.
The problem is that links, navigation and menus need to point from the old domain, where the legacy pages live, to the new domain, where our clean, greenfield MVC 4 and Bootstrap rewrites of those legacy pages are being created. Likewise, the new web pages will have links, navigation and menus that must point back to the old site.
I know I can create 302 redirects; I can even use URL rewriting.
My concern is that every developer will need to keep track of links across both the massive legacy website and the new website, and update the URLs manually.
Is there a simple way of migrating a website slowly?
Is there an approach I should research to handling this?
Should I stop sniveling and just tell everyone on my team to keep track of the links as they go along, and use something like wget on the legacy site to find all the links?
I would create a central repository for all the links; an XML file would do nicely. Both the new and the legacy sites would refer to it to get the URLs for the links.
Yes, you would need to change all links in both the new and the legacy site to use this repository, but the upside is that once a page has been migrated you can just change its URL in the repository and all the links in both sites would then change.
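As a rough sketch of how this could look, assuming a hypothetical links.xml file shared by both sites:

```csharp
// Minimal sketch of a shared link repository, assuming a links.xml file like:
//   <links>
//     <link key="contact" url="https://new.example.com/contact" />
//     <link key="pricing" url="https://legacy.example.com/pricing.aspx" />
//   </links>
// Both sites call Resolve(); migrating a page means editing one XML entry.
using System;
using System.Linq;
using System.Xml.Linq;

public static class LinkRepository
{
    public static string Resolve(string key)
    {
        var doc = XDocument.Load("links.xml"); // cache this in real code
        var link = doc.Root.Elements("link")
                      .FirstOrDefault(e => (string)e.Attribute("key") == key);
        return link != null ? (string)link.Attribute("url") : null;
    }
}
```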

How to Download a Document from a SharePoint Library and Display It in ASP.NET Programmatically?

I have created a document library named "ARTICLES" in SharePoint which stores documents. Now, I want to display those documents in a Repeater, and clicking on a row must display the document. All of the documents must also be downloaded into the application folder.
The application is a pure ASP.NET application using C#, not a web part or anything similar.
Help appreciated!
thanks!
Although it might not suit your application (and it isn't exactly what you asked for), there is a much easier way of achieving what you described (suited for intranet applications where Windows Authentication is used).
It involves two parts:
In your web application, add an IFRAME that points to your SharePoint library (using the default SharePoint web interface).
[optional] Add a custom master page to this library so that you can hide menus etc.
For many applications this solution is sufficient, and it saves you a lot of the trouble of coding the integration (and retesting it with every SharePoint update) by decoupling your application from SharePoint. It also ensures that end users can use all of the SharePoint functionality, such as uploading changes directly from Word.
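A minimal code-behind sketch of the IFRAME approach, assuming an <iframe id="spFrame" runat="server"> in the .aspx markup and a placeholder library URL:

```csharp
// Sketch only: spFrame is declared in the .aspx markup as a server-side iframe,
// and the library URL below is a hypothetical placeholder for your site.
using System;
using System.Web.UI;

public partial class ArticlesPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Point the frame at the document library's default view.
        spFrame.Attributes["src"] =
            "http://intranet/sites/yoursite/ARTICLES/Forms/AllItems.aspx";
    }
}
```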

Web Crawling Sites with Javascripts or web forms

I have a web crawler application. It successfully crawls the most common and simple sites. Now I've run into some types of websites whose HTML documents are dynamically generated through forms or JavaScript. I believe they can be crawled; I just don't know how. These websites do not serve the actual HTML page: if I browse such a page in IE or Firefox, the HTML source does not match what is actually rendered in IE or Firefox. These sites contain textboxes, checkboxes, etc., so I believe they are what's called "web forms". I am not very familiar with web development, so correct me if I'm wrong.
My question is: has anyone been in a similar situation and successfully solved these types of "challenges"? Does anyone know of a book or article about web crawling, particularly one that covers these more advanced types of websites?
Thanks.
There are two separate issues here.
Forms
As a rule of thumb, crawlers do not touch forms.
It might be appropriate to write something for a specific website, that submits predetermined (or semi-random) data (particularly when writing automated tests for your own web applications), but generic crawlers should leave them well alone.
The spec describing how to submit form data is available at http://www.w3.org/TR/html4/interact/forms.html#h-17.13, and there may be a library for C# that will help.
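For illustration, here is a minimal sketch of submitting predetermined form data with HttpClient; the URL and field name are hypothetical and must mirror the target form's actual action and input names:

```csharp
// Minimal sketch of submitting predetermined form data, assuming a hypothetical
// search form that POSTs a "query" field to /search on the target site.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class FormSubmitter
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            var fields = new Dictionary<string, string>
            {
                ["query"] = "example" // mirror the form's input names exactly
            };
            var response = await http.PostAsync(
                "https://www.example.com/search",
                new FormUrlEncodedContent(fields));
            string html = await response.Content.ReadAsStringAsync();
            Console.WriteLine(html.Length); // crawl the returned document as usual
        }
    }
}
```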
JavaScript
JavaScript is a rather complicated beast.
There are three common ways you can deal with it:
Write your crawler so it duplicates the JS functionality of specific websites that you care about.
Automate a web browser (see the sketch after this list)
Use something like Rhino with env.js
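As a hedged sketch of option 2, here is what browser automation can look like with the Selenium WebDriver packages (an assumption on my part; any browser-automation tool works similarly):

```csharp
// Minimal sketch of automating a real browser, assuming the Selenium WebDriver
// NuGet packages are installed; the URL is a placeholder.
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class BrowserCrawler
{
    static void Main()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://www.example.com/js-heavy-page");
            // PageSource reflects the DOM after scripts have run,
            // unlike the raw HTML a plain HTTP request would return.
            string renderedHtml = driver.PageSource;
            Console.WriteLine(renderedHtml.Length);
        }
    }
}
```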
I found an article which tackles the deep web, and it's very interesting; I think it answers my questions above.
http://www.trycatchfail.com/2008/11/10/creating-a-deep-web-crawler-with-net-background/
Gotta love this.
AbotX handles JavaScript out of the box. It's not free, though.

Home/Landing screen design for a website in asp.net

I have a web-based application. The content for the Home page is currently hard-coded in the Home page's HTML markup, so to change the content at any time in the future, the HTML code itself has to be changed. :(
Is there a way we can pick up the content from some external place and have it reflected on the website? That way, any change required could be made at the external location without touching the application's code.
Please advise if there is any solution for it.
Thanks.
You can:
Use a database
Include external files using Server Side Includes
Read external files and write out their contents (a sketch of this option follows below)
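A minimal sketch of the external-file option, assuming a Literal control named HomeContent on the page and a Content/home.html file that the editor can change (both names are hypothetical):

```csharp
// Sketch only: renders the contents of an editable external file into the page.
// HomeContent is a <asp:Literal> declared in the .aspx markup.
using System;
using System.IO;
using System.Web.UI;

public partial class Home : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string path = Server.MapPath("~/Content/home.html");
        // Only do this with trusted files -- the markup is emitted as-is.
        HomeContent.Text = File.ReadAllText(path);
    }
}
```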
Sounds like you're looking for a Content Management System (CMS), which will allow your content editors to modify only the specific blocks of a page that you designate.
There are a ton out there to do what you want, so you don't have to start from scratch. Just Google 'CMS'.
Although I haven't used it myself, DotNetNuke is a popular one these days and has a free version.
