Pull website text from one site and display it on another - C#

Hi, I am trying to pull this string from courseweb.hopkinsschools.org and display it in my own ASP.NET application. I have been looking for a tutorial for a long time, but nothing works. Any help would be greatly appreciated.
Picture of the string needed: [screenshot]

When I started working with websites and interfacing with other sites, I originally wanted to do exactly what you're describing: read the text straight off the pages, because that's how we as people interact with computers and websites.
But that is not how computers should interface with other websites unless absolutely necessary.
Moodle has an API for things like course management. It's a bit difficult to find information on, but it's called Moodle Web Services if I remember correctly. I'll add a link back if I can find it.
What these services do is let you access Moodle in a computer-friendly way, i.e. a way your program can easily understand, instead of trying to read web pages meant for humans (see the sketch after the links below).
Edit
Here are some resources to get you started:
https://docs.moodle.org/dev/Web_services
https://code.google.com/p/mnet-csharp/
https://delog.wordpress.com/2010/08/31/integrating-a-c-app-with-moodle-using-xml-rpc/
https://delog.wordpress.com/2010/09/08/integrating-c-app-with-moodle-2/
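As a rough, untested illustration of what a web-service call looks like from C#: the sketch below assumes web services are enabled on the Moodle site, that you have a valid token, and that the standard core_course_get_courses function is available; the URL and token are placeholders.

// Minimal sketch: call a Moodle REST web service from C# instead of scraping HTML.
// The site URL, token, and function name are placeholders for your own setup.
using System;
using System.Net;

class MoodleExample
{
    static void Main()
    {
        string url = "https://courseweb.hopkinsschools.org/webservice/rest/server.php"
                   + "?wstoken=YOUR_TOKEN"
                   + "&wsfunction=core_course_get_courses"
                   + "&moodlewsrestformat=json";

        using (var client = new WebClient())
        {
            // The response is JSON describing the courses; parse it with a JSON
            // library (e.g. Json.NET) instead of trying to read a web page.
            string json = client.DownloadString(url);
            Console.WriteLine(json);
        }
    }
}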

Related

Meta Search Engine/ Web Scraping in Android Studio/JAVA

I want to create an application that basically searches for something, with some filters, across various websites (I don't need to log in to those third-party websites, so the data is open to the public) and shows the results in my application. I have a few questions:
1. Is it legal?
2. Is this web scraping or a meta search engine?
3. Can I get more information (any web links/articles) to learn more about it? How do I achieve it technically? I know we can use the XPath technique to scrape, but I am wondering if there are more ways.
I am NOT asking for the entire code, just how to start / any guidance.
Thank you in advance!
Firstly, you need to understand how search engines work.
Search engines like Google have special programs, called "spiders", designed to mine information from the web. A spider basically crawls over all the web pages relevant to the search query and finds matching information. That is a really complex thing to build, though; it takes good code and real algorithm expertise to develop a spider yourself. If you can master that you can earn a tidy sum of money, but it's rare unless you're exceptionally good.
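For the scraping side specifically (you mentioned XPath), a library such as HtmlAgilityPack saves you from parsing HTML by hand. Here is a rough sketch; the URL and XPath expression are placeholders you would replace with the real site's structure, and you should respect each site's robots.txt and terms of use before crawling it.

// Sketch: fetch a page and pull data out with XPath using HtmlAgilityPack.
// The URL and XPath below are made up; real sites need their own selectors.
using System;
using HtmlAgilityPack;

class ScraperSketch
{
    static void Main()
    {
        var web = new HtmlWeb();
        HtmlDocument doc = web.Load("https://example.com/search?q=shoes");

        // Grab the text of every result title (the XPath is site-specific).
        var nodes = doc.DocumentNode.SelectNodes("//h2[@class='result-title']");
        if (nodes != null)
        {
            foreach (var node in nodes)
                Console.WriteLine(node.InnerText.Trim());
        }
    }
}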

ASP.NET MVC4: one source code, multiple websites

I've created a website in ASP.NET MVC4 and put it online under a specific domain name. Now my client is asking me to replicate the same website on a different domain name and change some static texts/images to distinguish the two websites. I'd like to maintain just one source code base and deploy it twice. How can I achieve this?
We did this a few years ago with a web application. It was a pain in the a**. We had one website running, and the resources were loaded after the user had logged in.
During development you always had to keep that in mind: split the resources, always check which user is logged in, and so on.
It is just easier to copy the published application to a second folder and, for the static texts, use some kind of resource files that can be replaced on the fly.
As long as you don't have images and files that are a few gigabytes big, it should be no problem to copy the compiled application and the resources.
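For the "resource files that can be replaced" part, one simple option is to keep the per-site texts in Web.config appSettings and swap that file per deployment. This is only a sketch; the key names are made up.

// Sketch: per-deployment static text read from Web.config, so one code base
// can be published twice with different appSettings. Key names are invented.
//
//   <appSettings>
//     <add key="SiteTitle" value="Site A" />
//     <add key="LogoPath"  value="~/Content/site-a-logo.png" />
//   </appSettings>
//
using System.Configuration;   // add a reference to System.Configuration.dll

public static class SiteTexts
{
    public static string Title { get { return ConfigurationManager.AppSettings["SiteTitle"]; } }
    public static string Logo  { get { return ConfigurationManager.AppSettings["LogoPath"]; } }
}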
This is a rather late reply, but I just wanted to share some of my experience with you. You can follow these steps; it won't take too much of your time.
Identify the various texts/images (logo for branding, etc.) that need to be tenant-specific.
Create a table called TenantSettings (tenantid, key, value).
Identify the pages that need to be tweaked to look values up from this table instead of using hardcoded values (see the sketch below).
Update these pages and provide a UI for each tenant so that they can change the values at any time.
This way you can achieve level-4 multi-tenancy with minimal effort to begin with.
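As a rough sketch of the lookup from steps 2 and 3, the table and column names follow the list above, while the data-access code is only illustrative (plain ADO.NET here, but EF or Dapper would work just as well):

// Sketch: look up a per-tenant setting from the TenantSettings (tenantid, key, value) table.
using System.Data.SqlClient;

public class TenantSettings
{
    private readonly string _connectionString;

    public TenantSettings(string connectionString)
    {
        _connectionString = connectionString;
    }

    public string Get(int tenantId, string key)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            "SELECT [value] FROM TenantSettings WHERE tenantid = @tenantId AND [key] = @key", conn))
        {
            cmd.Parameters.AddWithValue("@tenantId", tenantId);
            cmd.Parameters.AddWithValue("@key", key);
            conn.Open();
            return cmd.ExecuteScalar() as string;   // null if the tenant has no value for this key
        }
    }
}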
HTH

Connecting To A Website To Look Up A Word (Compiling Mass Data/Webcrawler)

I am currently developing a word-completion application in C#, and after getting the UI up and running, setting keyboard hooks, and other things of that nature, I came to the realization that I need a word list. The only issue is, I can't seem to find one with the appropriate information. I also don't want to spend an entire week formatting and gathering a word list by hand.
The information I want is something like "TheWord, the definition, verb/etc."
So it hit me: why not download a basic word list with nothing but words (already did this; there are about 109,523 words), write a program that iterates through every word, connects to the internet, retrieves the data (definition etc.) from some site, and creates XML data from that information? It could be 100% automated, and I would only have to wait maybe an hour, depending on my internet connection speed.
This however, brought me to a few questions.
How should I connect to a site to look up these words? << This is my actual question.
How would I read this information from the website?
Would I piss off my ISP or the website for that matter?
Is this a really bad idea? Lol.
How do you guys think I should go about this?
EDIT
Someone noticed that Dictionary.com uses the word as a suffix in the URL. This will make it easy to iterate through the word file. I also see that the web page is served as XHTML (or maybe just HTML). Here is the source for the word "Cat": http://pastebin.com/hjZj6AC1
For what you marked as your actual question - you just need to download the data from the website and find what you need.
A great tool for this is CsQuery, which allows you to use jQuery selectors.
You could do something like this (the selector is just an example; use whatever matches the page you're scraping):
// Load the page and query it with jQuery-style selectors.
var dom = CQ.CreateFromUrl("http://www.jquery.com");
string definition = dom.Select(".definitionDiv").Text();
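Building on that, here is a rough sketch of the loop you described: read the word list, fetch each definition page (using the word-as-URL-suffix pattern from your edit), and write the results out as XML. The ".definitionDiv" selector and the exact URL are placeholders, so check the page source for the real ones, and throttle the requests so you don't hammer the site (or annoy your ISP).

// Sketch only: iterate the word list, scrape a definition, and build XML.
// The URL pattern and the CSS selector are placeholders.
using System;
using System.IO;
using System.Linq;
using System.Threading;
using System.Xml.Linq;
using CsQuery;

class WordListBuilder
{
    static void Main()
    {
        var root = new XElement("words");

        foreach (string word in File.ReadLines("wordlist.txt").Take(10)) // small batch while testing
        {
            var dom = CQ.CreateFromUrl("http://dictionary.reference.com/browse/" + word);
            string definition = dom.Select(".definitionDiv").Text();

            root.Add(new XElement("word",
                new XAttribute("text", word),
                new XElement("definition", definition)));

            Thread.Sleep(1000); // be polite: roughly one request per second
        }

        new XDocument(root).Save("words.xml");
    }
}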

Get Html from Web page and create Setup project for Wpf Application (C#)

I'm trying to create a WPF application, something like a movie library, because I would like to manage and sort my movies with a nice interface.
I'd like to build a library of all my movies, getting the information from the web, but I don't really know how.
I thought about getting the information from a website, for example IMDb, but I don't know whether it's legal to capture HTML from a page to get the nested information.
It's my first desktop application, and I would also like to know whether it is necessary to create a database within the project and then create a setup project with a specific script to deploy it.
Sorry for the confusion, but I would like to know too many things :)
Thanks a lot in advance.
The legality of web scraping is a grey area. See my question, "Legality of Web Scraping vs Normal Use" and the corresponding answers for some insight.
Even if the legality is not a problem, web scraping is a flimsy approach because the webpage structure may change without notice, making your application suddenly useless until you update it to the new format. You are much better off using some sort of web API (if the site providing the information offers it).
Whether you need a database or not depends entirely on what your application will be doing and how you design it - it's not something any of us can tell you.
Same goes for the setup project - in fact, I wouldn't worry about that until you actually have a working application. Take it step by step and keep the scope under control.
You're right, I hadn't thought about an API.
It's a great idea; maybe I'll use "themoviedb".
But if I create an application based on it that has to show all the movies stored on my HDD and fetch, for example, the posters, descriptions and rankings, do I need to create a database, in your opinion?
Thanks a lot.
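To illustrate the API route with the service mentioned above, here is a minimal sketch of searching TMDb over HTTP. The API key is a placeholder, and the exact endpoint and response fields should be checked against TMDb's current documentation.

// Sketch: query The Movie Database (TMDb) search API instead of scraping IMDb.
// YOUR_API_KEY is a placeholder; verify the field names against TMDb's docs.
using System;
using System.Net;

class TmdbExample
{
    static void Main()
    {
        string title = "Blade Runner";
        string url = "https://api.themoviedb.org/3/search/movie"
                   + "?api_key=YOUR_API_KEY"
                   + "&query=" + Uri.EscapeDataString(title);

        using (var client = new WebClient())
        {
            // Returns JSON with fields such as title, overview, poster_path and vote_average;
            // parse it with a JSON library and cache whatever you need locally.
            string json = client.DownloadString(url);
            Console.WriteLine(json);
        }
    }
}

Whether you then keep those details in a local database is a design choice: caching locally lets the library work offline and avoids re-querying the API every time the application starts.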

Monitoring the most common web browsers C#/VC++/C

I'm trying to write a download manager, just for learning, because I'm new to Windows programming.
Could someone tell me how to monitor the most common web browsers?
I'd like to implement something like:
http://www.iwisoft.com/videodownloader/video-downloader-features.php
Every time you visit a web page in a common browser, it detects all the video files on the page and lets you choose whether to download them. Any idea how to do that without building an app for every browser? Which is the best language to do it: C#/VC++, managed/unmanaged?
I'm learning and using a mix of all of them for other parts, like downloading files, adding rules to the firewall, or modifying the registry.
Thanks a lot.
I don't really know a neat way of doing this, but you could try the following:
Get the handle of the current foreground window using GetForegroundWindow.
Check whether the title you get from GetWindowText matches the usual title of a browser (see the sketch below).
If it is a browser, monitor the clipboard and check for hyperlinks,
then do your download stuff.
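If you go the C# route, the first two steps above boil down to a couple of Win32 calls via P/Invoke. A rough sketch:

// Sketch of the first two steps: read the foreground window's title via Win32
// and check whether it looks like one of the usual browsers.
using System;
using System.Runtime.InteropServices;
using System.Text;

class ForegroundWindowCheck
{
    [DllImport("user32.dll")]
    static extern IntPtr GetForegroundWindow();

    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern int GetWindowText(IntPtr hWnd, StringBuilder text, int maxCount);

    static void Main()
    {
        var title = new StringBuilder(256);
        GetWindowText(GetForegroundWindow(), title, title.Capacity);

        string[] browsers = { "Chrome", "Firefox", "Internet Explorer", "Opera" };
        bool isBrowser = Array.Exists(browsers, b =>
            title.ToString().IndexOf(b, StringComparison.OrdinalIgnoreCase) >= 0);

        Console.WriteLine(isBrowser ? "Browser in foreground: " + title : "Not a browser window");
    }
}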
I program in C++ and assembly, but I can't advise you on the programming language since I don't have any experience with C#. Since you are new, though, I would suggest starting out with more basic things. As pointed out in your comment, this is not something that can be achieved easily.
