Hi guys. I'll be rather brief if I can, so here goes.
I made an app in C# that logs on to my employee portal and automatically fetches my shifts every 30 minutes by using a WebBrowser control. It reads the HTML from that control, generates a calendar for me, and also provides automated alerts.
Issue
The problem is that this browser control uses IE (yeesh, help me) and it doesn't work with all parts of the site. I did some digging around on the site and found where the ASP site gets its data from: an XML file somewhere on the server. I can access this XML file, but only if I'm logged in (please see the attached images for more information).
Current solution
So my question is this: how do I actually log in to this area?
I could log in using the WebBrowser control and then download the XML through it, but that's too slow and too old, so is there a way I can pass my credentials through directly?
The URL looks like this: "https://www.mycoles.com.au/api/rosters/nextweek" -- I don't see anything like ?name=myname or ?pass=mypassword... so, yeah. (I'm a bit new to this.)
Further details:
Application language: C#.
Current technology: Windows Forms application / IE WebBrowser control.
Site backend: Microsoft SharePoint.
Anything I'm missing? Please ask.
Attached content
Mycoles XML Logged in
Mycoles XML Access Denied
Update:
So after a while of searching and examining the site, I tried to access the data with a C# WebBrowser and it didn't work: it said it couldn't download the data, yet Chrome can. Odd. I'm not sure it's a plain XML file any more, rather some kind of request, and I don't have enough knowledge to work with this, so, pointers, anyone? Check out https://www.mycoles.com.au/api/rosters/nextweek and tell me what you think it is, please. Thanks in advance... :)
SharePoint supports different forms of authentication. Out of the box, Active Directory-based single sign-on is provided, and forms-based (username, password) authentication can be configured.
Typically, organizations use AD SSO for its simplicity. If, once you open your desktop browser and navigate to a SharePoint site, you don't have to enter any credentials and are just logged in, then it's most likely this case. This can be either Kerberos or NTLM. The HttpWebRequest class supports both these methods.
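A minimal sketch of the AD SSO case, assuming the roster endpoint accepts Windows authentication (the URL is the one from the question; whether the server actually honours these credentials is something you'd have to verify):

```csharp
using System;
using System.IO;
using System.Net;

class RosterFetcher
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create(
            "https://www.mycoles.com.au/api/rosters/nextweek");

        // Kerberos/NTLM: send the identity of the logged-in Windows user.
        request.Credentials = CredentialCache.DefaultCredentials;

        // If explicit credentials are required instead, something like:
        // request.Credentials = new NetworkCredential("user", "password", "DOMAIN");

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd()); // the raw roster XML
        }
    }
}
```

If the site uses forms-based authentication instead, you would need to POST the login form first and carry the resulting cookies between requests (see the CookieContainer example further down this page).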
Related
I'm developing software in C# which has to get info from a website that the user opens in Chrome; the user inputs some data and the website returns a list of different items.
What I want is a way to access the page's source code in order to get that info. I can't open the page myself, as it doesn't show anything until the data has been entered, so I need to get it directly from Chrome.
How can I achieve this? A Chrome extension? Or can I access Chrome directly from my software?
Off the top of my head, I don't know of any application that gets data directly from an open instance of Chrome. You'd have to write your own Chrome extension.
Alternatively, you can open the web browser from your application initially.
You can look into these libraries for doing so (a short Selenium sketch follows below):
WatiN (my personal favourite)
Selenium
Awesomium (you'd have to roll your own UI; it's invisible)
CEF
Essential Objects Web Browser
EDIT: I didn't think about using QA tools as the actual browser hook, as @TheAnathema mentions. That would probably work for your needs.
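For example, a minimal Selenium sketch (NuGet packages Selenium.WebDriver plus a matching chromedriver; the URL and element names are placeholders for whatever the real site uses):

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class ScrapeSketch
{
    static void Main()
    {
        // Launches a Chrome instance that the application controls,
        // rather than reading the user's existing Chrome window.
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://example.com/search");

            // Fill in the data on the user's behalf, then submit.
            driver.FindElement(By.Name("query")).SendKeys("some input");
            driver.FindElement(By.Name("submit")).Click();

            // Page source after the site has rendered its results.
            string html = driver.PageSource;
            Console.WriteLine(html.Length);
        }
    }
}
```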
You're going to need to create it as a Chrome extension if you must depend on the user actually going to a specific web site (i.e. you can't do the requests yourself with either Selenium or standard web requests in Python).
The reason a Chrome extension would be required: think of how bad it would be if any software could easily read the pages you browse. Banking, medical, email, etc. could all be accessed anonymously by any process if Google allowed outside processes to tap into the web page.
Even Chrome extensions have to ask for permission to do what they want, but at least an extension is software the user knowingly installed and whose permissions they agreed to.
A quick search yielded this example of modifying a page's HTML with a Chrome extension: https://blog.lateral.io/2016/04/create-chrome-extension-modify-websites-html-css/
It sounds like you want to do web scraping. Here's a good tutorial to get you started: HTML Scraping.
And this answer has a good example of how to scrape data from a website where you need to submit a form to get access to the data.
I have a service that should connect to a video management server which provides no SOAP access or other command-line login options, so I have to use their login form to get information. The problem is that I need to create a Windows service that fetches the info every now and then. Is it possible?
I'm using C#.
Login is done via a Windows login form provided by their SDK, which only works on Windows, not the web.
Since they do not provide any sort of service, you would need to basically read in and parse the HTML returned from their server. This is sort of a broad question, but you can at least look in the direction of using HttpWebRequest and related classes. You would basically be performing a series of GET and/or POST requests to their web server and parsing the returned HTML for the information you need. You can run into issues with this approach if they end up changing their HTML depending upon how you are parsing it.
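As a rough illustration of that approach (the URL and the marker string are hypothetical), a GET followed by a deliberately naive string search -- exactly the kind of parsing the caveat above is about:

```csharp
using System;
using System.IO;
using System.Net;

class NaiveScrape
{
    static void Main()
    {
        // Hypothetical status page of the video management server.
        var request = (HttpWebRequest)WebRequest.Create("https://example.com/status");
        using (var reader = new StreamReader(request.GetResponse().GetResponseStream()))
        {
            string html = reader.ReadToEnd();

            // Naive parse: find a marker and read what follows it. This
            // breaks as soon as the vendor changes their HTML.
            const string marker = "<span id=\"cameraCount\">";
            int i = html.IndexOf(marker, StringComparison.Ordinal);
            if (i >= 0)
            {
                int start = i + marker.Length;
                int end = html.IndexOf("</span>", start, StringComparison.Ordinal);
                Console.WriteLine("Cameras: " + html.Substring(start, end - start));
            }
        }
    }
}
```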
I understood how to log in to Gmail using C#, but when I then try to go to the webpage it does not recognize that I have logged in to Gmail.
Overall, I need to log in to Gmail, then access a webpage while logged in and save its source code, all using C#, preferably without having to open a browser, just doing it all within the C# application.
Edit: I have logged in to Gmail successfully, but when I then go to the website, it doesn't recognize that I'm logged in. I need a way to do it in the same session. I tried researching this but couldn't understand how to do it.
I'm pretty sure you can't download the source code behind Gmail; that is most likely closely guarded for security reasons. You could maybe get a response and try to download a list of mails if you want to make an Outlook lookalike.
If you need it all in one session, you need to find a way to share that session across programs; your C# app runs in a separate environment from a browser and cannot interact with it directly. This has to go through an API, socket communication, etc.
You can, however, if all you want is to access Gmail through your own program, add the WebBrowser component from the toolbox to your form (if you have one).
It is just a blank space (it looks like a giant text box) that web pages can easily be loaded into: no URL bar, no controls of any kind, just completely blank.
You then control the pages through your source code.
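A bare-bones sketch of that approach: a form containing nothing but a WebBrowser control, navigated and monitored from code (the URL is just an example):

```csharp
using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        var form = new Form { Text = "Embedded browser" };
        var browser = new WebBrowser { Dock = DockStyle.Fill };

        // Fires when a page finishes loading; from here the page can be
        // read (browser.Document) or driven from code.
        browser.DocumentCompleted += (s, e) => form.Text = browser.DocumentTitle;

        form.Controls.Add(browser);
        browser.Navigate("https://mail.google.com");
        Application.Run(form);
    }
}
```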
But what I wonder is why you want this. Why not just make your browser log you in automatically?
To use the same session in the browser as well, you would have to transfer the session cookie to the browser. I don't know if that is possible; I don't even know whether Gmail allows it.
I would suggest you try something different (not transferring sessions, for example), like opening Gmail in the browser instead of in your C# program.
You also can't download Gmail's source.
I'm having issues converting my intranet page to a PDF file. I tried two solutions which actually work, but each with some issues.
Solution 1:
I used the wkhtmltopdf.exe tool and was able to make it work on my local machine.
However, when I deployed it to our server, it stopped working, until I noticed that it doesn't work with intranet sites; extranet sites work fine.
Solution 2:
As an alternative, I grabbed the HTML of the site and let wkhtmltopdf.exe turn that into a PDF, which also works. However, the page I'm trying to convert is database driven, so that information, including images, was missing from the converted PDF.
Please help if there's a way to make wkhtmltopdf.exe work with intranet sites (solution 1), or a way to retrieve the whole page, including data and images, when converting it to PDF (solution 2).
Thank you very much!
"it stopped working until I notice that it's not working with intranet sites."
That is not an exhaustive problem report. I have done this by rendering the view to a string and then converting that string to a PDF using wkhtmltopdf.
Rendering the view to a string: Render a view as a string
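The linked answer boils down to a helper along these lines (ASP.NET MVC, System.Web.Mvc; a sketch adapted from the usual approach, not the answer verbatim):

```csharp
using System.IO;
using System.Web.Mvc;

public static class ViewRenderer
{
    // Renders a Razor view to an HTML string from inside a controller.
    public static string RenderViewToString(
        ControllerContext context, string viewName, object model)
    {
        context.Controller.ViewData.Model = model;
        using (var writer = new StringWriter())
        {
            ViewEngineResult result =
                ViewEngines.Engines.FindPartialView(context, viewName);
            var viewContext = new ViewContext(
                context, result.View,
                context.Controller.ViewData,
                context.Controller.TempData, writer);
            result.View.Render(viewContext, writer);
            result.ViewEngine.ReleaseView(context, result.View);
            return writer.ToString();
        }
    }
}
```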
I did not include wkhtmltopdf directly; rather, I used the TuesPechkin NuGet package: https://github.com/tuespetre/TuesPechkin
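Based on the TuesPechkin README, feeding that HTML string to wkhtmltopdf looks roughly like this:

```csharp
using System.Drawing.Printing;
using TuesPechkin;

class PdfSketch
{
    static byte[] HtmlToPdf(string html)
    {
        var document = new HtmlToPdfDocument
        {
            GlobalSettings =
            {
                DocumentTitle = "Intranet page",
                PaperSize = PaperKind.A4
            },
            Objects = { new ObjectSettings { HtmlText = html } }
        };

        // Unpacks the bundled wkhtmltopdf binaries to a temp folder,
        // so nothing needs to be installed on the server.
        IConverter converter = new StandardConverter(
            new PdfToolset(
                new Win32EmbeddedDeployment(
                    new TempFolderDeployment())));

        return converter.Convert(document);
    }
}
```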
I would say to look at the permissions involved. Intranet sites normally have different permission levels than a public-facing site. It could be that the public-facing sites have permissions applied to the .exe, such as the IIS_IUSR account, enabling it to work with anonymous guest accounts, but it lacks the permissions needed on an intranet, which often uses the logged-in user's domain account to authenticate resources.
For the wkhtmltopdf software to generate PDFs on your intranet server, you need the two files msvcp120.dll and msvcr120.dll in the same folder as wkhtmltopdf.exe when running it server-side. Hope this helps.
I'm working on a C# application that needs to scrape some data from a phpBB forum. Scraping the forum requires logging in, so the application will prompt the user for their login credentials.
I've scraped websites with C# before, but what I'm not sure how to do is log in to phpBB and keep a session open for the duration of the screen scraping. I've done some searching and haven't had much luck. Is there a good way to do something like this programmatically?
You don't say what you've tried, but if you used an HttpWebRequest object to retrieve pages and/or log on, then you need to assign a new CookieContainer to the HttpWebRequest to store any cookies returned by the website. Share this container amongst your HttpWebRequest objects to remain logged in.
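A minimal sketch, assuming phpBB's standard login form (the ucp.php URL and the username/password/login field names match stock phpBB3, but verify against the actual form; the host is a placeholder):

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

class PhpBbSession
{
    static void Main()
    {
        // One CookieContainer shared by every request = one logged-in session.
        var cookies = new CookieContainer();

        var login = (HttpWebRequest)WebRequest.Create(
            "https://example.com/forum/ucp.php?mode=login");
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.CookieContainer = cookies;

        byte[] form = Encoding.UTF8.GetBytes("username=me&password=secret&login=Login");
        using (Stream s = login.GetRequestStream())
            s.Write(form, 0, form.Length);
        login.GetResponse().Close(); // session cookies now live in `cookies`

        // Subsequent requests reuse the same container and stay logged in.
        var page = (HttpWebRequest)WebRequest.Create(
            "https://example.com/forum/viewforum.php?f=1");
        page.CookieContainer = cookies;
        using (var reader = new StreamReader(page.GetResponse().GetResponseStream()))
            Console.WriteLine(reader.ReadToEnd().Length);
    }
}
```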
Look for the names of the username and password fields using Firebug or Chrome (or even View Source), then use my answer here: Programmatically logging into a site, replacing 'session_key' and 'session_password' as appropriate. That should work -- and then translate it to C#!
I would recommend the WatiN API for screen scraping. I have done screen scraping with it and it does a good job.
Check it out!
I recommend using HTML Agility Pack.
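A minimal sketch of that approach (NuGet: HtmlAgilityPack; the URL and XPath are placeholders for whatever the forum actually serves):

```csharp
using System;
using HtmlAgilityPack;

class AgilityDemo
{
    static void Main()
    {
        // Download and parse a page in one step.
        var web = new HtmlWeb();
        HtmlDocument doc = web.Load("https://example.com/forum/viewforum.php?f=1");

        // Query the DOM with XPath instead of brittle string searches.
        var links = doc.DocumentNode.SelectNodes("//a[@class='topictitle']");
        if (links != null)
            foreach (HtmlNode link in links)
                Console.WriteLine(link.InnerText.Trim());
    }
}
```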