What I'm trying to do is load a web page from the server side, for example www.facebook.com, then insert a username and password programmatically and log in. I know it's possible using a desktop application, and I know how to do that in C# on the desktop/client side, but what I'm looking for is how to do it on the server side.
For example:
I send a request with a username and password to a site of mine, say www.fbloger.com. The server then logs in to Facebook using those details, so it can send me important information. My final requirement is to get an alert when a specific friend is online, so I don't need to stay logged in and keep checking whether she is online; I can log in to Facebook as soon as the server gives me an alert. I don't know if this is really possible.
It sounds like you are trying to write some kind of server-side web crawler/spider. If this is the case, all you need to do is examine the network requests being performed in a browser and then emulate them in C#.
In C#, if you send the request with HttpClient exactly as your browser does, you can capture the returned web page and scrape the content with something like the HTML Agility Pack, which lets you query the HTML like an XML document to extract the values you need. See http://html-agility-pack.net (available via NuGet).
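As a rough sketch of the request-building step (the `email`/`pass` field names and the login URL are placeholders — use the browser's network tab to find the real ones for the site you're targeting):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;

class LoginSketch
{
    static void Main()
    {
        // Hypothetical form field names -- inspect the real login form
        // (browser dev tools, Network tab) to discover the actual ones.
        var fields = new Dictionary<string, string>
        {
            ["email"] = "user@example.com",
            ["pass"]  = "secret"
        };
        var content = new FormUrlEncodedContent(fields);

        // The URL-encoded body HttpClient would send for this form post:
        string body = content.ReadAsStringAsync().Result;
        Console.WriteLine(body); // email=user%40example.com&pass=secret

        // In a real crawler you would then send it, e.g.:
        // using var client = new HttpClient();
        // var response = await client.PostAsync("https://example.com/login", content);
        // string html = await response.Content.ReadAsStringAsync();
    }
}
```

The response HTML is what you would then feed to the HTML Agility Pack.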
Since you know how to do that in C#, simply use C# for your server-side code.
ASP.NET lets you use C# for the code-behind, so you can copy (or better, reuse) the desktop code that signs in to the web site.
If your desktop code used the WebBrowser control, you'll need to rewrite the crawling code with something like HttpClient and avoid pages that require JavaScript execution to render or log in.
Related
I have a service that should connect to a video management server that does not provide SOAP access or any other command-line login option, so I have to use its login form to get information. The problem is that I need to create a Windows service that fetches the info every now and then. Is that possible?
I'm using C#.
Login is done via a Windows login form provided by the SDK, which only works on Windows, not on the web.
Since they do not provide any sort of service API, you would basically need to read in and parse the HTML returned from their server. This is a fairly broad question, but you can at least look in the direction of HttpWebRequest and related classes. You would be performing a series of GET and/or POST requests against their web server and parsing the returned HTML for the information you need. Depending on how you parse it, this approach can break if they ever change their HTML.
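For the parsing step, a minimal illustration of pulling a value out of returned HTML with a regular expression (the markup here is invented for the example; as noted above, this is exactly the part that breaks when they change their HTML):

```csharp
using System;
using System.Text.RegularExpressions;

class ParseSketch
{
    static void Main()
    {
        // Sample of what the server's HTML might look like (hypothetical markup).
        string html = "<div class=\"status\">Cameras online: <b>12</b></div>";

        // Fragile by design: any change to their markup breaks this pattern,
        // which is the maintenance risk described above.
        var match = Regex.Match(html, @"Cameras online: <b>(\d+)</b>");
        Console.WriteLine(match.Groups[1].Value); // 12
    }
}
```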
I have figured out how to log in to Gmail using C#, but when I then try to go to the web page, it does not recognize that I am logged in to Gmail.
Overall, I need to log in to Gmail, then access a web page while logged in and save its source code, all from C#, preferably without opening a browser — just doing it all within the C# application.
Edit: I have logged in to Gmail successfully, but when I then go to the website, it doesn't recognize that I'm logged in. I need a way to do it in the same session. I tried researching this but couldn't work out how.
I'm pretty sure you can't download the source code behind Gmail; that is most likely closely guarded for security reasons. You can, however, get a response and try to download a list of mails if you want to build an Outlook lookalike.
If you need it all in one session, you need to find a way to share that session across programs. Your C# app runs in a separate environment from a browser and cannot interact with it directly; this has to be done through an API, socket communication, etc.
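Within a single C# application, though, keeping the login session alive across requests usually comes down to sharing one CookieContainer between all your HttpClient calls; a minimal sketch (the example.com domain and the cookie are placeholders standing in for whatever the real login sets):

```csharp
using System;
using System.Net;
using System.Net.Http;

class SessionSketch
{
    static void Main()
    {
        // One CookieContainer shared by every request = one logical session.
        var cookies = new CookieContainer();
        var handler = new HttpClientHandler { CookieContainer = cookies };
        var client = new HttpClient(handler);

        // After a successful login POST through `client`, the Set-Cookie
        // headers land in `cookies` automatically and are replayed on every
        // later request made with the same client.
        cookies.Add(new Uri("https://example.com/"),
                    new Cookie("SID", "abc123")); // simulate a login cookie
        Console.WriteLine(
            cookies.GetCookies(new Uri("https://example.com/")).Count); // 1
    }
}
```

If the follow-up request is made with a different HttpClient (or a different program, as above), it has no access to this container, which is why the site doesn't recognize the login.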
If all you want is to access Gmail through your own program, you can, however, add the WebBrowser component from the toolbox to your form (if you have one).
This is just a blank area (it looks like a giant text box) into which web pages can easily be loaded — no URL bar, no controls of any kind, completely blank —
and you can then control the pages through your source code.
What I wonder, though, is why you want this. Why not just have your browser log you in automatically?
To use the same session in the browser as well, you would have to transfer the session cookie to the browser. I don't know if that is possible, or even whether Gmail likes/allows it.
I would suggest trying something different (not transferring sessions, for example), like opening Gmail in the browser instead of in your C# program.
You also can't download Gmail's source code.
Is there a way, using either C# or a scripting language such as Python, to load a website in the user's default web browser and continue to interact with it via code (e.g. invoke existing JavaScript methods)? With WinForms, you can host a WebBrowser control and invoke scripts from there, but only IE is supported. Is there a way of doing the same thing in the user's default browser (not necessarily using WinForms)?
Update: The website is stored on the user's machine, not served from a third-party server. It is a help page that works dynamically with my C# program. When the user interacts with my C# program, I want to be able to execute the JavaScript methods on the page.
You might want to look into Selenium. It can automate interaction with Firefox, IE, Chrome (with ChromeDriver) and Opera. It may not be suitable for your purposes, because it uses a fresh, stripped-down profile rather than the user's normal browser profile.
If you look at the HTTP request headers, you can determine the user agent making the request. Based on that information, you can write server-side logic to respond with a different page per detected user-agent string, and include whatever JavaScript you want as a static string to be executed by that user agent.
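As a sketch of that server-side branching (the script names and the naive `Contains` checks are invented for illustration; real user-agent detection is considerably messier):

```csharp
using System;

class UserAgentSketch
{
    // Decide which script variant to serve based on the User-Agent header
    // value. Simplified sniffing: order matters, since many UA strings
    // contain several browser tokens.
    static string ScriptFor(string userAgent)
    {
        if (userAgent.Contains("Chrome"))  return "chrome.js";
        if (userAgent.Contains("Firefox")) return "firefox.js";
        return "default.js";
    }

    static void Main()
    {
        Console.WriteLine(ScriptFor("Mozilla/5.0 (X11; Linux) Gecko Firefox/89.0")); // firefox.js
        Console.WriteLine(ScriptFor("curl/7.68.0"));                                 // default.js
    }
}
```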
This website has a custom Google search box:
http://ezinearticles.com/
The search results are generated by a piece of JavaScript code. How would I access these results using wget and/or C#'s WebClient?
It looks like the searches on that page are ordinary Google site searches. Try wget with the following URL, where 'asdf' is your search term (the URL is quoted so the shell doesn't interpret the query string):
wget "http://www.google.com/search?q=site:ezinearticles.com+asdf"
You need to do what your web browser does: render the page. Alternatively, you may be able to extract the JS call to the web service providing the results, execute that request yourself, and parse the output directly.
You need to access it with a programmable browser supporting JavaScript.
The HtmlUnit library for Java does this, and runs fine headless.
You can automate a real web browser, e.g. with WatiN on Windows, and access the page's content. This requires a GUI desktop though, because a real browser window is opened.
I am trying to implement Ajax back/forward-button support and am therefore writing variables after a # in my URL. I would also like the user to be able to copy the URL and then link back to it. Does anyone know how I can parse the URL and grab my "query strings" even though they are behind a #?
The value after the hash is not transmitted to the server. There's another SO question about that somewhere, but I'm having trouble finding it. Likewise it's taken me a while to find a decent reference to cite, but this Wikipedia article has some confirmation:
"The fragment identifier functions differently than the rest of the URI: namely, its processing is exclusively client-side with no participation from the server. When an agent (such as a Web browser) requests a resource from a Web server, the agent sends the URI to the server, but does not send the fragment."
I assume you want to respond to it on the server side rather than the browser side? (Given that you're asking about doing it in C#...)
http://msdn.microsoft.com/en-us/library/system.uri.fragment.aspx
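On the C# side, System.Uri exposes the fragment of a URL string you already have in hand — for instance one the user copied and pasted back to you — keeping in mind that the fragment never arrives in the HTTP request itself:

```csharp
using System;

class FragmentSketch
{
    static void Main()
    {
        // Hypothetical copied URL; the part after # is the client-side state.
        var uri = new Uri("http://example.com/page#tab=photos&id=42");

        // Uri.Fragment includes the leading '#'.
        Console.WriteLine(uri.Fragment);                // #tab=photos&id=42
        Console.WriteLine(uri.Fragment.TrimStart('#')); // tab=photos&id=42
    }
}
```

From there you can split the trimmed fragment on '&' and '=' just as you would a normal query string.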