Autocomplete textbox: does each web service *have* to be its own file? - C#

So I'm creating a web site where I can enter a specific artist's setlists. I have a table with the artist's discography, which includes all her songs. So now I want to create an Admin page where I can easily pick the tour date and venue and then enter the setlist.
In order to do this, I want to create autocomplete textboxes to make it easier. However, everything I've read says it's best to have each autocomplete reference its own asmx page. I tried this once with 2 textboxes on the same page, and after much hassle I gave in, put each in its own asmx page, and all worked well.
However, this time I'm looking at probably 25 textboxes. I can't believe I'm going to need 25 web services, although I guess technically they'll all be identical because they'll read from the same table.
How can this be done? Can I just use the same asmx page over and over? If not, is there an easier way than creating 25 separate files?
For sample code on how I'm implementing this, you can see a previous question I've asked here:
Referencing a field on an asp.net page from a Web Service (asmx) page
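In case it helps anyone landing here: nothing in ASMX ties a web method to one textbox. A single [ScriptService] with a method matching the AutoCompleteExtender's expected signature can serve all 25 textboxes. A minimal sketch (the table name, column name, and connection string are assumptions):

```csharp
// Hypothetical Songs.asmx.cs - one service shared by every autocomplete textbox.
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Web.Script.Services;
using System.Web.Services;

[WebService(Namespace = "http://tempuri.org/")]
[ScriptService] // required so the extender can call the method from client script
public class Songs : WebService
{
    // This (prefixText, count) signature is the one the AJAX Control
    // Toolkit AutoCompleteExtender expects.
    [WebMethod]
    public string[] GetSongs(string prefixText, int count)
    {
        var results = new List<string>();
        using (var conn = new SqlConnection("...your connection string..."))
        using (var cmd = new SqlCommand(
            "SELECT TOP (@count) Title FROM Discography " +
            "WHERE Title LIKE @prefix + '%' ORDER BY Title", conn))
        {
            cmd.Parameters.AddWithValue("@count", count);
            cmd.Parameters.AddWithValue("@prefix", prefixText);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    results.Add(reader.GetString(0));
        }
        return results.ToArray();
    }
}
```

Each of the 25 extenders then points at the same service by setting ServicePath="~/Songs.asmx" and ServiceMethod="GetSongs". If some textboxes need different data, add more [WebMethod]s to the same .asmx rather than more files.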

Related

Get Html from Web page and create Setup project for Wpf Application (C#)

I'm trying to create a WPF application, something like a movie library, because I would like to manage and sort my movies with a pretty interface.
I'd like to build a library of all my movies, getting the information from the web, but I don't know how to go about it.
I thought about getting the information from a web site, for example IMDb, but I don't know if it's legal to capture the HTML of a page to extract the nested information.
This is my first desktop application, and I would also like to know whether I need to create a database within the project and then create a setup project with a script to deploy it.
Sorry for the confusion, but I'd like to know too many things :)
Thanks a lot in advance.
The legality of web scraping is a grey area. See my question, "Legality of Web Scraping vs Normal Use" and the corresponding answers for some insight.
Even if the legality is not a problem, web scraping is a flimsy approach because the webpage structure may change without notice, making your application suddenly useless until you update it to the new format. You are much better off using some sort of web API (if the site providing the information offers it).
Whether you need a database or not depends entirely on what your application will be doing and how you design it - it's not something any of us can tell you.
Same goes for the setup project - in fact, I wouldn't worry about that until you actually have a working application. Take it step by step and keep the scope under control.
You're right, I didn't think about an API.
It's a great idea; maybe I'll use "themoviedb".
But if I create an application based on it that has to show all the movies stored on my HDD and fetch, for example, the posters, descriptions and rankings, do you think I need to create a database?
Thanks a lot.
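If it helps, here is a rough sketch of calling TMDb from C# instead of scraping. The endpoint shape follows their v3 API, but check the current docs; the API key is something you register for:

```csharp
// Minimal sketch: search TMDb for a movie title and print the raw JSON.
using System;
using System.Net;

class TmdbExample
{
    static void Main()
    {
        string apiKey = "YOUR_API_KEY"; // assumed: obtained by registering with TMDb
        string title = "The Matrix";

        string url = "http://api.themoviedb.org/3/search/movie"
                   + "?api_key=" + apiKey
                   + "&query=" + Uri.EscapeDataString(title);

        using (var client = new WebClient())
        {
            // The response is JSON containing titles, poster paths, ratings, etc.
            // Parse it with a JSON library such as Json.NET rather than by hand.
            string json = client.DownloadString(url);
            Console.WriteLine(json);
        }
    }
}
```

As for the database question: caching the fetched posters and descriptions locally (a small database, or even plain files) just saves you from re-querying the API on every launch; it isn't strictly required to get started.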

How can I copy HTML textbox values from one domain to another domain's textboxes?

I'm trying to help save time at work with a lot of tedious copy/paste tasks we have.
So, we have a proprietary CRM (with proper HTML IDs, etc., for accessing elements) and I'd like to copy those values from the CRM to textboxes on other web pages (outside of the CRM, so sites like Twitter, Facebook, Google, etc.)
I'm aware browsers limit this for security reasons, and I'm open to anything: it can be a C#/C++ application, Adobe AIR, etc. We only use Firefox at work, so even an extension would work. (We do have Greasemonkey installed, so if that's usable too, sweet.)
So, any ideas on how to copy values from one web page to another? Ideally, I'm looking to click a button and have it auto-populate fields. If that button has to launch the web pages that need to be copied over to, that's fine.
Example: Copy customers Username from our CRM, paste it in Facebook's Username field when creating a new account.
UPDATE: To answer a user below, the HTML elements on each domain have specific HTML IDs. The data won't need to be manipulated or cleaned up; it's just a simple copy from ourCRM.com to facebook.com / twitter.com.
Ruby Mechanize is a good bet for scraping the data. Then you can store it and post it however you please.
First, I'd suggest that you more clearly define exactly what it is you're looking to do. I read this as: you're trying to take some unstructured data from Point A and copy it to Point B. Do the names of these fields remain constant every time you do the operation? Do you need to simply pull every textbox element from the page and copy them all over? Do you need to do some sort of filtering of this data before writing it over?
Once you've got a clear idea of the requirements, if you go the C# route, I'd use something like SimpleBrowser. Judging by the example on their GitHub page, you could give it the URL of the page you're looking to copy from, then name each of the fields whose values you want, perhaps store them in an IDictionary, then open a new URL and copy those values back into the page (and submit the form).
Alternatively, if you don't know the names of the fields, perhaps there's a provided function in that or a similar project that will allow you to simply enumerate all the text fields on the page and retrieve the values for all of them. Then you'd simply apply some logic of your own to filter those options down to whatever is on the destination form.
So we thought of an easier way to do this (in case anyone else runs into this issue):
1) From our CRM, we added a "Sign up for Facebook" button
2) The button opens a new window with GET variables in the URL
3) Use a greasemonkey script to read those GET variables and fill in textbox values
4) SUCCESS!
Simple, took about 10 minutes to get working. Thanks for your suggestions.
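For anyone wiring up the same trick, a rough sketch of steps 1-2 on the CRM side; the control names, parameter names, and target URL are placeholders, and the matching Greasemonkey script (step 3) would read them back out of location.search:

```csharp
// Hypothetical CRM-side code-behind handler for the "Sign up for Facebook"
// button: build the signup URL with the customer's values as GET variables,
// then open it in a new window.
protected void SignUpFacebook_Click(object sender, EventArgs e)
{
    string url = "https://www.facebook.com/r.php"            // assumed signup URL
        + "?crm_username=" + Server.UrlEncode(UsernameBox.Text)
        + "&crm_email=" + Server.UrlEncode(EmailBox.Text);

    // Emit a window.open on the client so the new page launches from the button click.
    ClientScript.RegisterStartupScript(GetType(), "signup",
        "window.open('" + url + "');", true);
}
```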

How to make a Dictionary and Utility in C#?

I have 15 web pages that have the same labels. But in one case, I need to change all of their Text, on all pages. Rather than changing them one by one, I want to make something general and public that I can use in all the pages. The IDs of the labels are the same. I don't know how to do this across different web pages. Can you give me a hint? I don't know if it's called a Utility or a Dictionary. A friend of mine mentioned it a few minutes ago, but I don't want to ask her again; she already thinks I am dumb :(
They are ASP.NET pages with C#; I am also using DevExpress.
I want to change only one web page at a time - applying the method to each page as it is shown (by the user), not changing all 15 at the same time.
If the information shown on the labels is the same for every user, you could use the ASP.NET Cache.
How to: Add Items to the Cache
How to: Retrieve Values of Cached Items
If not, you could use the Session.
But normally you would use a DBMS for this purpose, for example SQL Server.
Note: Do not use the labels' IDs as keys for the Cache/Session/database, but their meaning. IDs may change, and apart from that, meaningful keys are much more readable.
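A minimal sketch of the "general and public" idea the asker is after: one static class maps meaningful keys to label text, and every page applies it when it loads. The keys and label IDs below are assumptions:

```csharp
// Shared utility: edit the text in one place, and every page picks it up
// the next time it is rendered.
using System.Collections.Generic;
using System.Web.UI;
using System.Web.UI.WebControls;

public static class LabelTexts
{
    private static readonly Dictionary<string, string> Texts =
        new Dictionary<string, string>
        {
            { "CustomerName", "Customer name:" },
            { "OrderDate",    "Order date:" },
        };

    public static void Apply(Control container)
    {
        foreach (var pair in Texts)
        {
            // Assumes each label's ID is its key plus "Label", e.g. "CustomerNameLabel".
            // Note: FindControl only searches one naming container; with master
            // pages or nested controls you may need a recursive search.
            var label = container.FindControl(pair.Key + "Label") as Label;
            if (label != null)
                label.Text = pair.Value;
        }
    }
}
```

Each page then calls LabelTexts.Apply(this) in Page_Load - or you put that call in a common base page class so you only write it once.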

WebCrawling Dynamic Links

Does anybody have any idea on crawling websites that have dynamic pages/queries? I mean, if I click a certain link, it shows different values every time I reload it in a web browser. Right now my web crawler cannot download the contents of these pages. Please advise.
It works the same way whether the page is dynamic or not. A crawler is really only a matter of 3 things:
The URL
The data sent to the server, if it is a POST method
The cookie, if authentication is required
That's all.
The common problems when writing a crawler:
Mis-guessing the default page [index.html, index.php, default.aspx, etc.]; actually it will work without it for both methods [POST/GET]
A field name that is not written exactly right
The ASP.NET form viewstate field (the hidden __VIEWSTATE field), which can be handled easily
Pages generated dynamically by JavaScript; this is the hardest part, and in many cases even Google still has problems with it
Hope that helps.
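To make the "URL + POST data + cookie" recipe above concrete, a small sketch in C# (the URL and form fields are made up; HttpWebRequest plus a reused CookieContainer covers the cookie part):

```csharp
// Fetch one page the way a crawler would: URL + POST data + cookies.
using System;
using System.IO;
using System.Net;
using System.Text;

class CrawlerExample
{
    static void Main()
    {
        var cookies = new CookieContainer(); // reuse across requests to keep the session

        var request = (HttpWebRequest)WebRequest.Create("http://example.com/page.aspx");
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.CookieContainer = cookies;

        byte[] body = Encoding.UTF8.GetBytes("query=setlists&page=2"); // the POST data
        request.ContentLength = body.Length;
        using (var stream = request.GetRequestStream())
            stream.Write(body, 0, body.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
            Console.WriteLine(reader.ReadToEnd()); // the raw HTML to parse
    }
}
```

Note this only gets you the server-rendered HTML; content generated by JavaScript (the hard case above) won't appear in the response.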
You might want to look at this question, which details how to write a crawler, or look at the source code for http://searcharoo.net/, which contains a good crawler (see here).

Home/Landing screen design for a website in asp.net

I have a web-based application. The content for the Home page is currently hard-coded in the page's HTML markup, so to change the content at any time in the future, the HTML itself has to be edited. :(
Is there a way we can pick up the content from some external place and have it reflected on the website? That way, any change required can be made at the external location without touching the application's code.
Please advise if there is any solution for it.
Thanks.
You can:
Use a database
Include external files using Server Side Includes
Read external files and write out their contents (a sketch of this approach follows below)
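A minimal sketch of that third option, assuming the page holds an <asp:Literal> where the copy should appear and the text lives in a file editors can change without touching code:

```csharp
// Hypothetical code-behind: pull the home-page copy from an external file.
using System;
using System.IO;
using System.Web.UI;

public partial class Home : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Editors update ~/Content/Home.html on the server; no recompile needed.
        string path = Server.MapPath("~/Content/Home.html");
        ContentLiteral.Text = File.ReadAllText(path); // ContentLiteral is an <asp:Literal>
    }
}
```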
Sounds like you're looking for a Content Management System (CMS), which will allow your content editors access to modify only specific blocks of a page that you specify.
There are a ton out there to do what you want, so you don't have to start from scratch. Just Google 'CMS'.
Although I haven't used it myself, DotNetNuke is a popular one these days and has a free version.
