I am new to WebParts and have a newbie question... is it possible to load web parts from other sites like MSN? For example, can a user save the weather web part from their MyMSN site and load it into my newly created site that allows web parts?
Thanks in advance for any help.
Tony,
That's a good question. Generally with WebParts, in order to load a web part from a third-party website, that site would have to provide a WebPart package file to download. CodePlex has a lot of samples: see http://www.codeplex.com/site/search?query=webpart&ac=8. So if you were looking at a site like MyMSN, it's not likely you would be able to load web parts from that site.
There may be other ways to integrate that data, though. For example, you could offer a web part that acts as a proxy for data in other environments. Say you have an RSS feed that you want to allow people to add to your site. In that scenario, you could create (or use a third-party) web part that reads RSS and let your users simply configure it to read MSN news, Yahoo! news, etc.
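To make that concrete, here is a minimal sketch of such an RSS proxy web part, assuming the standard ASP.NET 2.0+ WebParts framework. The FeedUrl property name and the rendering are illustrative, and HTML encoding and error handling are omitted for brevity:

    using System;
    using System.Xml;
    using System.Web.UI;
    using System.Web.UI.WebControls.WebParts;

    // A minimal RSS "proxy" web part: the user configures FeedUrl,
    // and the part fetches and renders the feed item titles server-side.
    public class RssProxyWebPart : WebPart
    {
        [Personalizable(PersonalizationScope.User), WebBrowsable]
        public string FeedUrl { get; set; }

        protected override void RenderContents(HtmlTextWriter writer)
        {
            if (string.IsNullOrEmpty(FeedUrl))
            {
                writer.Write("Please configure a feed URL.");
                return;
            }

            var doc = new XmlDocument();
            doc.Load(FeedUrl);  // fetch the RSS document server-side

            // Render each <item><title> as a list entry
            // (a real part would HTML-encode these values)
            writer.Write("<ul>");
            foreach (XmlNode item in doc.SelectNodes("/rss/channel/item"))
            {
                XmlNode title = item.SelectSingleNode("title");
                XmlNode link = item.SelectSingleNode("link");
                writer.Write("<li><a href=\"{0}\">{1}</a></li>",
                    link == null ? "#" : link.InnerText,
                    title == null ? "(untitled)" : title.InnerText);
            }
            writer.Write("</ul>");
        }
    }

Because FeedUrl is personalizable per user, the same part can front MSN news, Yahoo! news, or any other feed without code changes.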
One other area to explore might be a portlet specification like JSR-000168, which you can download from http://jcp.org/aboutJava/communityprocess/final/jsr168/index.html. It is an attempt to standardize portlets (i.e., web parts) that some companies have adopted as a way to share them across the web.
Related
Hi, I am trying to pull a string from courseweb.hopkinsschools.org and display it in my own ASP.NET application. I have been looking for a tutorial for a long time, but nothing works. Any help would be greatly appreciated.
[Image: screenshot of the string to be pulled from the site]
When I started working with websites and interfacing with other sites, I originally wanted to do what you're describing, reading the text straight off the pages, because that's how we as people interface with computers and websites.
But that is not how computers should interface with other websites unless absolutely necessary.
Moodle has an API for things like course management. It's kind of difficult to find information on, but it's called Moodle Web Services, if I remember correctly. I'll add a link back if I can find it.
What these will do is let you access Moodle in a computer-friendly way, i.e., a way your computer can easily understand, instead of trying to read web pages; see the sketch after the links below.
Edit
Here are some resources to get you started:
https://docs.moodle.org/dev/Web_services
https://code.google.com/p/mnet-csharp/
https://delog.wordpress.com/2010/08/31/integrating-a-c-app-with-moodle-using-xml-rpc/
https://delog.wordpress.com/2010/09/08/integrating-c-app-with-moodle-2/
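Once web services are enabled on the Moodle server, a call can be as simple as one GET request. A minimal sketch, assuming a REST-enabled Moodle site; the host, token, and the exact function you want are placeholders you would substitute (a token is issued by the Moodle admin):

    using System;
    using System.Net;

    // Minimal sketch of calling Moodle's REST web service layer.
    // YOUR_WS_TOKEN is a placeholder -- tokens come from the Moodle
    // admin once web services are enabled on the server.
    class MoodleClient
    {
        static void Main()
        {
            string url = "https://courseweb.hopkinsschools.org/webservice/rest/server.php"
                + "?wstoken=YOUR_WS_TOKEN"
                + "&wsfunction=core_course_get_courses"  // a standard Moodle WS function
                + "&moodlewsrestformat=json";

            using (var client = new WebClient())
            {
                string json = client.DownloadString(url);
                Console.WriteLine(json);  // parse with your JSON library of choice
            }
        }
    }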
I'm trying to create a WPF application, something like a movie library, because I would like to manage and sort my movies with a pretty interface.
I'd like to build a library of all my movies, getting the information from the web, but I don't know how.
I thought about getting the information from a website, for example IMDb, but I don't know if it's legal to capture HTML from a page to get the nested information.
It's my first desktop application, and I would also like to know whether I need to create a database within the project and then create a setup project with a specified script to deploy it.
Sorry for the confusion, but I would like to know too many things :)
Thanks a lot in advance.
The legality of web scraping is a grey area. See my question, "Legality of Web Scraping vs Normal Use" and the corresponding answers for some insight.
Even if the legality is not a problem, web scraping is a flimsy approach because the webpage structure may change without notice, making your application suddenly useless until you update it to the new format. You are much better off using some sort of web API (if the site providing the information offers one).
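For instance, if a site documents a JSON API, fetching data becomes one stable request rather than brittle HTML parsing. A minimal sketch, assuming themoviedb.org's v3 search endpoint (the API key is a placeholder you get by registering on the site):

    using System;
    using System.Net;

    // Sketch: querying a documented JSON API (here themoviedb.org's
    // v3 search endpoint) instead of scraping HTML.
    class MovieLookup
    {
        static void Main()
        {
            string url = "https://api.themoviedb.org/3/search/movie"
                + "?api_key=YOUR_API_KEY"   // placeholder
                + "&query=" + Uri.EscapeDataString("The Matrix");

            using (var client = new WebClient())
            {
                string json = client.DownloadString(url);
                Console.WriteLine(json);  // deserialize into your own movie type
            }
        }
    }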
Whether you need a database or not depends entirely on what your application will be doing and how you design it - it's not something any of us can tell you.
Same goes for the setup project - in fact, I wouldn't worry about that until you actually have a working application. Take it step by step and keep the scope under control.
Yes, I did not think about APIs.
It's a great idea; maybe I'll use "themoviedb".
But if I create an application based on it, one that has to show all the movies stored on my HDD and get, for example, the posters, descriptions, and rankings, do I have to create a database, in your opinion?
Thanks a lot.
I am currently working on a website that was built with C# in the 2003 period, using server controls, JavaScript without any of the modern libraries, no data access layer, and plenty of spaghetti code.
We have decided that, due to the sheer size of the website, we will have to migrate the web pages a piece at a time.
The problem is that we have links, navigation, and menus that need to point from the old domain, where the legacy pages live, to the new domain, where our new MVC 4, Bootstrap, clean greenfield rewrites of those legacy pages are being created. The new web pages will likewise have links, navigation, and menus that must point back to the old site.
I know I can create 302 redirects; I can even use URL rewriting.
My concern is that all developers will need to keep track of links in both the massive legacy website and the new website, and update the URLs manually.
Is there a simple way of migrating a website slowly?
Is there an approach I should research to handling this?
Should I stop sniveling and just tell everyone on my team to keep track of the links as they go along, and use something like wget on the legacy site to find all the links?
I would create a central repository for all the links; an XML file would do nicely. Both the new and legacy sites would refer to it to get the URLs for their links.
Yes, you would need to change all the links in both the new and legacy sites to use this repository, but the upside is that once a page has been migrated you can just change its URL in the repository and all the links in both sites would update.
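As a sketch of what that could look like (the file name, schema, and keys are all illustrative):

    using System;
    using System.Linq;
    using System.Xml.Linq;

    // Sketch of a shared link repository. Links.xml (name illustrative)
    // might look like:
    //   <links>
    //     <link key="orders"  url="http://legacy.example.com/orders.aspx" />
    //     <link key="reports" url="http://new.example.com/reports" />
    //   </links>
    // Both sites resolve links by key; migrating a page means
    // changing one url attribute here.
    public static class LinkRepository
    {
        // In a web app you would resolve this via Server.MapPath
        private static readonly XDocument Doc = XDocument.Load("Links.xml");

        public static string Resolve(string key)
        {
            XElement match = Doc.Root.Elements("link")
                .FirstOrDefault(e => (string)e.Attribute("key") == key);
            return match == null ? "#" : (string)match.Attribute("url");
        }
    }

Loading the document once per app domain keeps lookups cheap; if you need changes to take effect without an app restart, reload the file on change instead.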
I have created a document library named "ARTICLES" in SharePoint which stores documents. Now I want to display the documents in a Repeater, and clicking on a row must display the document. All the documents must also be downloaded to the application folder.
The application is a pure ASP.NET application using C#, not a web part or similar.
Help appreciated!
thanks!
Although it might not suit your application (and it isn't exactly what you asked for), there is a much easier way of achieving what you described (suited for intranet applications where Windows Authentication is used).
It involves two parts:
In your web application, add an IFRAME that points to your SharePoint library (using the default SharePoint web interface); a markup sketch follows below.
[optional] Add a custom master page to this library so that you can hide menus etc.
For many applications this solution is sufficient, and it saves you a lot of the trouble of coding the integration (and retesting it with every SharePoint update) by decoupling your application from SharePoint. It also ensures that the end user can use all SharePoint functionality, such as uploading changes directly from Word.
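For reference, the first step could look like this in an ASPX page (the site URL is a placeholder; "ARTICLES" matches the library name above, and /Forms/AllItems.aspx is the library's default view):

    <%-- Embeds the library's default SharePoint view; the host/site URL is a placeholder. --%>
    <iframe src="http://sharepoint.example.com/sites/yoursite/ARTICLES/Forms/AllItems.aspx"
            width="100%" height="600" frameborder="0">
    </iframe>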
I have a web crawler application. It has successfully crawled most common and simple sites. Now I have run into certain types of websites where the HTML documents are dynamically generated through forms or JavaScript. I believe they can be crawled, I just don't know how. These websites do not show the actual HTML page: if I browse such a page in IE or Firefox, the HTML source does not match what IE or Firefox actually displays. These sites contain textboxes, checkboxes, etc., so I believe they are what is called "web forms". Actually, I am not very familiar with web development, so correct me if I'm wrong.
My question is, has anyone been in a similar situation and successfully solved these types of "challenges"? Does anyone know of a book or article on web crawling, specifically one that covers these more advanced types of websites?
Thanks.
There are two separate issues here.
Forms
As a rule of thumb, crawlers do not touch forms.
It might be appropriate to write something for a specific website that submits predetermined (or semi-random) data, particularly when writing automated tests for your own web applications, but generic crawlers should leave forms well alone.
The spec describing how to submit form data is available at http://www.w3.org/TR/html4/interact/forms.html#h-17.13; there may be a library for C# that will help.
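As a sketch of what submitting predetermined form data could look like in C# (the URL and field names are placeholders; read the real ones out of the target page's <form> element first):

    using System;
    using System.Collections.Specialized;
    using System.Net;
    using System.Text;

    // Sketch: POSTing predetermined form data the way a browser would.
    class FormSubmitter
    {
        static void Main()
        {
            var fields = new NameValueCollection
            {
                { "searchTerm", "example" },  // placeholder field names
                { "category", "all" }
            };

            using (var client = new WebClient())
            {
                // UploadValues sends application/x-www-form-urlencoded,
                // as described in the HTML 4 forms spec linked above
                byte[] response = client.UploadValues("http://example.com/search", fields);
                string html = Encoding.UTF8.GetString(response);
                Console.WriteLine(html.Length);  // crawl the returned page as usual
            }
        }
    }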
JavaScript
JavaScript is a rather complicated beast.
There are three common ways to deal with it:
Write your crawler so it duplicates the JS functionality of specific websites that you care about.
Automate a web browser (see the sketch after this list)
Use something like Rhino with env.js
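For the second option, a minimal sketch using Selenium WebDriver, one common way to automate a browser from C#; it assumes the Selenium.WebDriver package is installed and a matching chromedriver binary is on the PATH:

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    // Sketch: driving a real browser so JavaScript runs before you
    // read the page.
    class BrowserCrawler
    {
        static void Main()
        {
            using (IWebDriver driver = new ChromeDriver())
            {
                driver.Navigate().GoToUrl("http://example.com/dynamic-page");

                // PageSource is the DOM *after* scripts have run,
                // unlike the raw HTML a plain HTTP request would see
                string renderedHtml = driver.PageSource;
                Console.WriteLine(renderedHtml.Length);
            }
        }
    }

For pages that load content asynchronously, you would typically add an explicit wait before reading PageSource.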
I found an article which tackles the deep web, and it's very interesting; I think it answers my questions above.
http://www.trycatchfail.com/2008/11/10/creating-a-deep-web-crawler-with-net-background/
Gotta love this.
AbotX handles JavaScript out of the box. It's not free, though.