I am currently working on a website built in C# around 2003: server controls, hand-rolled JavaScript with none of the modern libraries, no data access layer, and plenty of spaghetti code.
Due to the sheer size of the site, we have decided we will have to migrate it a few pages at a time.
The problem is that links, navigation, and menus need to point from the old domain, where the legacy pages live, to the new domain, where our clean, greenfield MVC 4 and Bootstrap rewrites of those pages are being created. Likewise, the new pages will have links, navigation, and menus that must point back to the old site.
I know I can create 302 redirects, and I could even use URL rewriting.
My concern is that every developer will need to keep track of links in both the massive legacy site and the new site, and update the URLs manually.
Is there a simple way of migrating a website slowly?
Is there an approach I should research for handling this?
Should I stop sniveling and just tell everyone on my team to keep track of the links as they go along, and use something like wget on the legacy site to find all the links?
I would create a central repository for all the links (an XML file would do nicely) that both the new and legacy sites consult to get the URLs for their links.
Yes, you would need to change all the links in both the new and legacy sites to use this repository, but the upside is that once a page has been migrated you can just change its URL in the repository, and all the links in both sites will follow.
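A minimal sketch of that idea in C#: both sites reference the same helper and the same links.xml. The XML shape, the key names, and the file location are all illustrative assumptions, not a prescription:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

// Shared link repository that both the legacy and new sites can reference.
public static class LinkRepository
{
    // In a real deployment you would load this once from a shared path
    // and cache it, e.g. XDocument.Load(Server.MapPath("~/App_Data/links.xml")).
    // Inlined here so the sketch is self-contained.
    static readonly XDocument Doc = XDocument.Parse(
        @"<links>
            <link key='Checkout'    url='http://new.example.com/checkout' />
            <link key='ProductList' url='http://legacy.example.com/products.aspx' />
          </links>");

    public static string UrlFor(string key)
    {
        var link = Doc.Root.Elements("link")
            .FirstOrDefault(e => (string)e.Attribute("key") == key);
        if (link == null)
            throw new ArgumentException("Unknown link key: " + key);
        return (string)link.Attribute("url");
    }
}
```

In a legacy WebForms page the anchor becomes `<a href="<%= LinkRepository.UrlFor("Checkout") %>">`, and in an MVC view `<a href="@LinkRepository.UrlFor("Checkout")">`. When a page migrates, only links.xml changes.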
Related
I am searching for pointers on developing an MVC web app whose views will be created after deployment and modified as needed. I could develop an app where views are always formed dynamically by picking paragraphs from a database, but I am afraid that might slow down the server. I am looking into scaffolding, but, being a newbie, I can't be sure it will answer my need.
I'm trying to create a WPF application, something like a movie library, because I would like to manage and sort my movies with a nice interface.
I'd like to build a library of all my movies, getting the information from the web, but I don't know exactly how.
I thought about getting the information from a website, for example IMDb, but I don't know whether it's legal to scrape HTML from a page to extract the embedded information.
This is my first desktop application, and I would also like to know whether I need to create a database within the project and then create a setup project with a script to deploy it.
Sorry for the confusion, but I'd like to know a lot of things :)
Thanks a lot in advance.
The legality of web scraping is a grey area. See my question, "Legality of Web Scraping vs Normal Use" and the corresponding answers for some insight.
Even if the legality is not a problem, web scraping is a flimsy approach because the webpage structure may change without notice, making your application suddenly useless until you update it for the new format. You are much better off using some sort of web API, if the site providing the information offers one.
Whether you need a database or not depends entirely on what your application will be doing and how you design it - it's not something any of us can tell you.
Same goes for the setup project - in fact I wouldn't worry about that until you actually have a working application. Take it step by step and keep the scope within control.
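To make the web-API route concrete, here is a sketch of a lookup against TMDb (themoviedb.org), one site that offers such an API. The endpoint path and the api_key/query parameters are from TMDb's documented v3 API; the class and method names are my own illustrative choices, and you would obtain a real API key by registering with TMDb:

```csharp
using System;
using System.Net;

// Sketch: querying TMDb's v3 search endpoint instead of scraping HTML.
public static class MovieSearch
{
    public static Uri BuildSearchUri(string apiKey, string title)
    {
        return new Uri("https://api.themoviedb.org/3/search/movie"
                       + "?api_key=" + Uri.EscapeDataString(apiKey)
                       + "&query=" + Uri.EscapeDataString(title));
    }

    public static string SearchJson(string apiKey, string title)
    {
        using (var client = new WebClient())
        {
            // Returns JSON containing titles, poster paths, ratings, etc.
            return client.DownloadString(BuildSearchUri(apiKey, title));
        }
    }
}
```

Your app would then parse the JSON and decide what, if anything, to cache locally.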
Yes, I hadn't thought about an API.
It's a great idea; maybe I'll use "themoviedb".
But if I create an application based on it that has to show all the movies stored on my HDD and fetch, for example, the posters, descriptions, and rankings, do I need to create a database, in your opinion?
Thanks a lot.
I'm importing classic ASP pages into a new Sitefinity installation. Unfortunately, the existing site makes extensive use of URL rewriting via Helicon ISAPI Rewrite 3.
I'm generating the list of pages that need to be imported by crawling the navigation menus in the old site. These are, unfortunately, not dynamically generated from any sort of central repository, so the best way I've found to build the site hierarchy is to crawl the site.
When creating page nodes in the Sitefinity nav hierarchy to hold the content from the old pages, I need to be able to create the new pages at a location roughly equivalent to their location in the old site's file system. However, the rewrite rules make this difficult to determine. For instance, parsing the old HTML may give me a link like:
http://www.mysite.com/product_name
which is rewritten (not redirected) to
http://www.mysite.com/products/product_name/product_root.asp
I need a way to get the second URL from the first. The first thing that comes to mind is to somehow use the .htaccess file to parse the URLs, get the result, and use that for the rest of the import process.
Is there a way to do this from a WinForms app without having to involve a web server? I realize that I could modify one of the ASP includes, such as the page footer, to emit a comment containing the rewritten URL of each page, but I'd rather not make unnecessary changes to the existing code if it can be avoided.
Update
For example,
http://www.keil.com/arm/
rewrites to
http://www.keil.com/products/arm/mdk.asp
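One way to avoid a web server entirely is to parse the RewriteRule lines yourself: ISAPI Rewrite 3 uses Apache mod_rewrite syntax, and its regex patterns and $1-style substitutions happen to carry over to .NET's Regex almost directly. Below is a hedged sketch; it handles only plain "RewriteRule pattern substitution" lines and ignores RewriteCond, flags, and other directives, so a real rule set may need considerably more handling:

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

// Applies simple RewriteRule lines from a .htaccess file offline.
// This is a sketch, not a full mod_rewrite engine.
public static class OfflineRewriter
{
    public static string Apply(IEnumerable<string> htaccessLines, string path)
    {
        foreach (var line in htaccessLines)
        {
            var rule = Regex.Match(line.Trim(), @"^RewriteRule\s+(\S+)\s+(\S+)");
            if (!rule.Success) continue; // comment, RewriteCond, etc.

            var pattern = new Regex(rule.Groups[1].Value, RegexOptions.IgnoreCase);
            if (pattern.IsMatch(path))
                // mod_rewrite's $1 backreferences match .NET's substitution
                // syntax, so the target string can be used as-is.
                return pattern.Replace(path, rule.Groups[2].Value);
        }
        return path; // no rule matched; assume the URL is physical
    }
}
```

Feeding it File.ReadAllLines of the site's .htaccess and each crawled link would give you the physical .asp path for the import, assuming the rules are of this simple form.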
I've created a website for a client of mine. It is coded in ASP.NET with C# and hosted on GoDaddy. She needs to update the site daily herself, but she has very little knowledge of how to edit HTML or text within a site, and I don't want to edit it every time she wants the site updated.
What would be the best solution to my problem? I have looked into content management systems, but I'm a little confused about what exactly they do in terms of coding and managing an existing site. Would a CMS require me to reformat the whole site to follow its templates? Would it be better for me to design my own back-end panel for her to edit the content (which would obviously take significant work)?
If you want to stick with a site you're developing from scratch, I'd use the HtmlEditor from the AjaxControlToolkit (or a similar control) and store the HTML content in the database.
Then, when outputting the HTML from the database to the client pages, I'd make sure to use the Microsoft Anti-Cross Site Scripting Library to sanitize it with the GetSafeHtmlFragment() function (since this is tagged asp.net). It's not that much work, actually, if you design the database correctly and you've got the skills.
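A sketch of that output path, assuming the AntiXSS library is referenced; the page, the LoadContent call, and the Literal control are all illustrative names standing in for your own database layer and markup:

```csharp
using System;
using Microsoft.Security.Application; // AntiXSS library

public partial class ContentPage : System.Web.UI.Page
{
    // Normally declared in the .aspx markup as <asp:Literal ID="contentLiteral" ... />.
    protected System.Web.UI.WebControls.Literal contentLiteral;

    protected void Page_Load(object sender, EventArgs e)
    {
        // Placeholder for your own data-access call that returns the
        // HTML the client saved through the HtmlEditor.
        string raw = LoadContent("home-page");

        // Strips scripts, event handlers and other dangerous markup
        // while keeping benign formatting tags.
        contentLiteral.Text = Sanitizer.GetSafeHtmlFragment(raw);
    }

    string LoadContent(string key) { /* query your database here */ return ""; }
}
```

The key design point is that sanitization happens on output, so even if bad markup sneaks into the database it never reaches the browser unfiltered.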
CMS systems are (trying not to oversimplify) entire websites that are already built and let people edit the content through built-in editing functionality. They range in functionality and extensibility from "you get what you get and there's very little you can change" to "you can customize the heck out of it and buy or build your own modules to extend functionality." There are a lot of good ones out there, some free and some expensive.
I am new to WebParts and have a newbie question: is it possible to load web parts from other sites, like MSN? For example, can a user take the weather web part from their MyMSN site and load it into my newly created site that supports web parts?
Thanks in advance for any help.
Tony,
That's a good question. Generally with WebParts, in order to load a web part from a third-party website, that site would have to provide a WebPart package file to download. CodePlex has a lot of samples: see http://www.codeplex.com/site/search?query=webpart&ac=8. So if you were looking at a site like MyMSN, it's not likely you would be able to load web parts from that site.
There may be other ways to integrate that data, though. For example, you could offer a web part that acts as a proxy for data within other environments. So, let's say that you have an RSS feed that you want to allow people to add to your site. In this scenario, you could create (or use a third party) web part that reads RSS, and allow your users to simply configure it to read MSN news or Yahoo! news, etc.
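A sketch of that proxy idea using SyndicationFeed from the BCL (System.ServiceModel.Syndication, .NET 3.5+); the web part itself would just wrap something like this, expose the feed URL as a user-configurable property, and render the headlines as links. The class and method names here are illustrative:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel.Syndication;
using System.Xml;

// Reads any RSS/Atom feed and returns its headlines; a web part would
// call this with the feed URL the user configured.
public static class FeedProxy
{
    public static List<string> Headlines(XmlReader reader)
    {
        var feed = SyndicationFeed.Load(reader);
        return feed.Items.Select(i => i.Title.Text).ToList();
    }
}
```

At runtime you would pass it XmlReader.Create with whatever feed URL the user picked, MSN, Yahoo!, or anything else that publishes RSS.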
One other area to explore might be a portlet specification like JSR 168, which you can download from http://jcp.org/aboutJava/communityprocess/final/jsr168/index.html. This is an attempt to standardize portlets (i.e., web parts) that some companies have adopted as a way to share them across the web.