Update a WinForms application automatically (async) - C#

I am trying to write a little application and, as I am only a student, I was wondering if someone could help me with some advice/tips on how to achieve the following:
On start, the program should call a function that checks a remote server (most likely a NAS - network-attached storage - with a given username and password) for an available update. If there is one, it should download and install it (displaying the current progress) and restart the program afterwards.
Having done some research, I found the following code snippet; I hope it helps:
var webClient = new WebClient();
// The NAS requires credentials (username "KlingGastro"; the password is a placeholder, see below).
webClient.Credentials = new NetworkCredential("KlingGastro", "myPassword");
webClient.DownloadFileCompleted += new AsyncCompletedEventHandler(Completed);
webClient.DownloadProgressChanged += new DownloadProgressChangedEventHandler(ProgressChanged);
// DownloadFileAsync expects a Uri, not a raw string.
webClient.DownloadFileAsync(new Uri("https://195.50.139.130:5001/FTP/KlingGastro Update/file.txt"), "C:\\file.txt");
Please note that there is a space between "KlingGastro" and "Update".
Furthermore, the username is "KlingGastro"; please excuse that I won't post the password for security reasons. Let's say it's "myPassword".
I think the downloading itself is not going to be that big a problem; I already found this tutorial:
https://ourcodeworld.com/articles/read/227/how-to-download-a-webfile-with-csharp-and-show-download-progress-synchronously-and-asynchronously
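From that tutorial, the progress display is wired up roughly like this - a minimal sketch, assuming a WinForms form with a (hypothetical) progressBar1 control:
private void ProgressChanged(object sender, DownloadProgressChangedEventArgs e)
{
    // e.ProgressPercentage runs from 0 to 100 while the download is in flight.
    progressBar1.Value = e.ProgressPercentage;
}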
Rather, having the program shut itself down and restart might be the difficult part.
Sorry for any language mistakes; I hope it was nevertheless understandable.
I would be really happy about any answer and help.
Best regards

A call to "System.Windows.Forms.Application.Restart()" within the DownloadFileCompleted event-handler will do this for you after the update is completed.

Well, you may need to clarify your question a bit further. If you want your program to cancel the download task and then restart itself:
webClient.CancelAsync();
System.Diagnostics.Process.Start(Application.ExecutablePath);
System.Diagnostics.Process.GetCurrentProcess().Kill();
If instead you want your program to cancel the download task and then start the download again:
webClient.CancelAsync();
// Initialize a new WebClient so the cancelled one can't cause any issues.
webClient = new WebClient();
webClient.DownloadFileCompleted += new AsyncCompletedEventHandler(Completed);
webClient.DownloadProgressChanged += new DownloadProgressChangedEventHandler(ProgressChanged);
webClient.DownloadFileAsync(new Uri("https://195.50.139.130:5001/FTP/KlingGastro Update/file.txt"), "C:\\file.txt");
Hope it helps! If it doesn't, please let us know exactly what you are trying to achieve.

Related

How does .NET know when to continue to the next line of code?

I wrote this little bit of C# code to test an implementation I intend to use for an internal tool at work. Much to my surprise, it functions exactly as I hoped, but I do not understand why.
private void button1_Click(object sender, EventArgs e)
{
    WebClient wc = new WebClient();
    wc.DownloadFile("http://url censored", @"C:\Users\Dustin\Desktop\flashplayer.exe");
    bool dlComplete = System.IO.File.Exists(@"C:\Users\Dustin\Desktop\flashplayer.exe");
    if (dlComplete == true)
    {
        System.Diagnostics.Process.Start(@"C:\Users\Dustin\Desktop\flashplayer.exe");
    }
    else
    {
        System.Windows.Forms.MessageBox.Show("Something's jacked!");
    }
}
When I click button1, my machine downloads the Flash installer and then checks whether the file exists (this is my roundabout way of avoiding event handlers, which I have not learned to deal with yet), and continues on.
Why doesn't my computer check for the file's existence while the file is downloading? How does this wizard of a computer know to hold on a moment while the file download completes?
WebClient.DownloadFile is a synchronous method that downloads a resource to a local file.
As stated in the MSDN documentation linked here: "[t]his method blocks while downloading the resource."
In other words, the call waits for the download to complete (blocking the calling thread) before returning control and execution to your code.
This is the wizardry you're experiencing: the file-existence check simply cannot run until the download call has returned. I know magic can be ruined once you know the trick; however, I hope this isn't the case.
For reference, here's a way that would work the way you didn't expect, i.e. asynchronously:
var webClient = new WebClient();
webClient.DownloadFileCompleted += new AsyncCompletedEventHandler(Completed);
webClient.DownloadProgressChanged += new DownloadProgressChangedEventHandler(ProgressChanged);
webClient.DownloadFileAsync(new Uri("http://www.server.com/file.txt"), "C:\\file.txt");
In fact, there's a whole set of asynchronous C# features. They're worth reading up on if you're interested in getting into development.
https://msdn.microsoft.com/en-us/library/mt674882.aspx
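For example, with the task-based API the same download can be awaited directly - a minimal sketch using WebClient.DownloadFileTaskAsync, with placeholder URL and path:
private async void button1_Click(object sender, EventArgs e)
{
    using (var wc = new WebClient())
    {
        // Control returns to the UI while the download runs; execution
        // resumes here once the file has been written to disk.
        await wc.DownloadFileTaskAsync(new Uri("http://www.server.com/file.txt"), @"C:\file.txt");
    }
    // Safe to check for the file now - the await guarantees the download finished.
    if (System.IO.File.Exists(@"C:\file.txt"))
    {
        MessageBox.Show("Download finished.");
    }
}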

C# How to process several web requests at once

I have been reading a lot about ThreadPools, Tasks, and Threads. After a while I got pretty confused with the whole thing, with lots of people saying negative/positive things about each. Maybe someone can help me find a solution for my problem. I created a simple diagram here to get my point across better.
Basically, on the left is a list of 5 strings (URLs) that need to be processed. In the center is just my idea of a handler that has 2 events to track progress. Inside that handler, each of the 5 URLs gets its own task, shown in blue. As each one completes, I want it to return the web page results to the handler. When they have all returned a value, I want OnComplete to be called and all this information passed back to the main thread.
Hopefully you can understand what I am trying to do. Thanks in advance for anyone who would like to help!
Update
I have taken your suggestions and put them to use. But I still have a few questions. Here is the code I have built; mind you, it is not build-proof, just a concept to see if I'm going in the right direction. Please read the comments - I have included my questions on how to proceed in there. Thank you to all who have taken an interest in my question so far.
public List<String> ProcessList (string[] URLs)
{
    List<string> data = new List<string>();
    for (int i = 0; i < URLs.Length - 1; i++)
    {
        //not sure how to do this now??
        //I want only 10 HttpWebRequest running at once.
        //Also I want this method to block until all the URL data has been returned.
    }
    return data;
}
private async Task<string> GetURLData(string URL)
{
    // First set up our web request.
    HttpWebRequest Request = GetWebRequest(URL);

    // Check that the request holds a value (there were no errors).
    if (Request != null)
    {
        // GetURLData returns to the calling function and resumes
        // here when GetResponseAsync completes.
        WebResponse Response = await Request.GetResponseAsync();

        // Set up our stream to read the reply and return it as a string.
        using (Stream ResponseStream = Response.GetResponseStream())
        using (StreamReader Reader = new StreamReader(ResponseStream))
        {
            return await Reader.ReadToEndAsync();
        }
    }
    return null;
}
As @fendorio and @ps2goat pointed out, async/await is perfect for your scenario. Here is another MSDN article:
http://msdn.microsoft.com/en-us/library/hh300224.aspx
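To address the two questions in your code comments (at most 10 requests at once, and blocking until everything is back), one common pattern is SemaphoreSlim plus Task.WhenAll. A rough sketch, assuming the GetURLData method from your question (requires System.Linq, System.Threading and System.Threading.Tasks):
public List<string> ProcessList(string[] URLs)
{
    // Allow at most 10 concurrent requests.
    var throttle = new SemaphoreSlim(10);

    Task<string>[] tasks = URLs.Select(async url =>
    {
        await throttle.WaitAsync();
        try
        {
            return await GetURLData(url);
        }
        finally
        {
            throttle.Release();
        }
    }).ToArray();

    // Blocks the calling thread until every download has completed.
    // Caution: blocking like this can deadlock on a UI thread; prefer
    // awaiting Task.WhenAll(tasks) where possible.
    return Task.WhenAll(tasks).Result.ToList();
}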
It seems to me that you are trying to replicate a webserver within a webserver.
Each web request starts its own thread in a webserver. As these requests can originate from anywhere that has access to the server, nothing but the server itself has access or the ability to manage them (in a clean way).
If you would like to handle requests and keep track of them like I believe you are asking, AJAX requests would be the best way to do this. This way you can leave the server to manage the threads and requests as it does best, but you can manage their progress and monitor them via JSON return results.
Look into jQuery.ajax for some ideas on how to do this.
To achieve the above-mentioned functionality in a simple way, I would prefer using a BackgroundWorker for each of the tasks. You can keep track of the progress, plus you get a notification upon task completion.
Another reason to choose this is that the mentioned tasks look like a back-end job and are not tightly coupled with the UI.
Here's an MSDN link, and this is the link for a cool tutorial.
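A minimal sketch of that pattern - the control names (progressBar1, textBox1) and the URL are hypothetical placeholders:
var worker = new BackgroundWorker { WorkerReportsProgress = true };

worker.DoWork += (s, e) =>
{
    // Runs on a thread-pool thread; e.Argument carries the URL passed to RunWorkerAsync.
    e.Result = new WebClient().DownloadString((string)e.Argument);
    ((BackgroundWorker)s).ReportProgress(100);
};

// Both of these fire back on the UI thread.
worker.ProgressChanged += (s, e) => progressBar1.Value = e.ProgressPercentage;
worker.RunWorkerCompleted += (s, e) => textBox1.Text = (string)e.Result;

worker.RunWorkerAsync("http://www.server.com/page.html");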

C# BackgroundWorker skips DoWork and goes straight to RunWorkerCompleted

I'm new with C# so go easy on me!
So far - I have a console app that listens for client connections and replies to the client accordingly.
I also have a WPF form with a button and a text box. The button launches some code to connect to the server as a BackgroundWorker, which then waits for a response before appending it to the end of the text box.
This works great, once. Sometimes twice. But then it kept crashing - it turns out that the DoWork block wasn't being called at all and it was going straight to RunWorkerCompleted. Of course, the .Result is then empty, so trying to convert it to a string fails.
Is this a rookie mistake? I have tried searching the internet for various ways of saying the above but haven't come across anything useful...
This is the code so far: http://pastebin.com/ZQvCFqxN - there are so many debug outputs from me trying to figure out exactly what went wrong.
This is the result of the debug outputs: http://pastebin.com/V412mppX
Any help much appreciated. Thanks!
EDIT: The relevant code post-fix (thanks to Patrick Quirk below) is:
public void dorequest(string query)
{
    request = new BackgroundWorker();
    request.WorkerSupportsCancellation = true;
    request.WorkerReportsProgress = true;
    request.ProgressChanged += request_ProgressChanged;
    request.DoWork += request_DoWork;
    request.RunWorkerCompleted += request_RunWorkerCompleted;
    request.RunWorkerAsync(query);
}
You're attaching your DoWork handler after calling RunWorkerAsync. Flip those around and that should fix it.
Also, in the future please paste code in your question rather than using an external site. And when possible give the smallest amount of code that demonstrates the problem. Makes it easier on us and people who might have the same issue.

Use a proxy with webBrowser control C#/.net 3.5

I need some help from someone who has already used the webBrowser control along with proxies.
What I need is the following:
1 - Set a proxy for a webBrowser control.
2 - Load a specific site.
3 - Execute a routine over the site.
4 - Set a different proxy for the webBrowser control.
5 - Load another site.
6 - Execute the same routine from point number 3.
And the process keeps going that way, looping through a list of proxies, until all of them have been used.
But I'm having some problems getting the app to do that:
1 - I'm using the code attached to set the proxy for the webBrowser control, but it seems to work only once during execution; when I call it again in the loop it just doesn't work, and I can't understand why.
2 - I'm having problems determining when the page has loaded completely. I mean, when I set the first site to load, I need the program to wait until it has finished loading, and after that execute the routine over it and continue with the process.
Hope someone can help me with this...
/// The function that I'm using -----------------------------
private void SetProxy(string Proxy)
{
    MessageBox.Show("Setting :" + Proxy);
    string key = "Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings";
    RegistryKey RegKey = Registry.CurrentUser.OpenSubKey(key, true);
    RegKey.SetValue("ProxyServer", Proxy);
    RegKey.SetValue("ProxyEnable", 1);
}
// The app logic --------------------------------------
SetProxy("190.97.219.38:80");
webBrowser1.Navigate("http://www.whatismyip.com/");
ExecuteRoutine();
SetProxy("187.93.77.235:80");
webBrowser1.Navigate("http://www.whatismyip.com/");
ExecuteRoutine();
SetProxy("109.235.49.243:80");
webBrowser1.Navigate("http://www.whatismyip.com/");
ExecuteRoutine();
Perhaps this link is useful:
http://blogs.msdn.com/b/jpsanders/archive/2011/04/26/how-to-set-the-proxy-for-the-webbrowser-control-in-net.aspx
I tested the code and it seemed to work. But two points are important:
It's not compatible with projects compiled as "Any CPU" (x86 works fine).
It works ONLY for HTTP proxy servers, not for SOCKS.
1 - I guess the webBrowser control reads the proxy settings only when it is created, so create a new control after setting the proxy.
2 - Navigate is not a blocking call and does not wait until the page has loaded; use the webBrowser.DocumentCompleted event.
The code below should work (not tested):
void Exec(string proxy, string url)
{
    var th = new Thread(() =>
    {
        SetProxy(proxy);
        using (WebBrowser wb = new WebBrowser())
        {
            wb.DocumentCompleted += (sndr, e) =>
            {
                ExecuteRoutine();
                Application.ExitThread();
            };
            wb.Navigate(url);
            Application.Run();
        }
    });
    th.SetApartmentState(ApartmentState.STA);
    th.Start();
    th.Join();
}
I had a somewhat similar question in the past. The accepted answer for that question suggests taking a look at this Microsoft Knowledge Base article:
"How to programmatically query and set proxy settings under Internet Explorer"
Basically, you have to do some P/Invoke and call some WinInet DLL functions. Although I have never tried it in a real-world project, I strongly assume that this is the way to go.
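As a rough, untested sketch of that approach - the WinInet option constants below are the standard documented values, and this would be called after writing the new proxy to the registry (e.g. at the end of the SetProxy method from the question) so that already-running WinInet clients such as the webBrowser control pick up the change:
using System;
using System.Runtime.InteropServices;

static class WinInetRefresh
{
    // Tells WinInet that proxy settings in the registry have changed.
    private const int INTERNET_OPTION_SETTINGS_CHANGED = 39;
    // Tells WinInet to reload its settings from the registry.
    private const int INTERNET_OPTION_REFRESH = 37;

    [DllImport("wininet.dll", SetLastError = true)]
    private static extern bool InternetSetOption(
        IntPtr hInternet, int dwOption, IntPtr lpBuffer, int dwBufferLength);

    public static void Apply()
    {
        InternetSetOption(IntPtr.Zero, INTERNET_OPTION_SETTINGS_CHANGED, IntPtr.Zero, 0);
        InternetSetOption(IntPtr.Zero, INTERNET_OPTION_REFRESH, IntPtr.Zero, 0);
    }
}
Without such a refresh, WinInet keeps its cached settings, which might explain why the proxy from the question only "sticks" the first time.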
Just to let you all know, this guy has posted 5 questions, all asking the same thing, and based on his first question and how badly he was knocked down, it seems he is trying to commit some type of cybercrime. Now, based on my reading of his intellect, he'll probably end up in prison rather quickly, but perhaps we can save him from that by letting him know that it's not possible to provide an imaginary IP address to services you are communicating with (since if you did, the service would not be able to reach you to provide a response). Here is his entertaining list:
https://stackoverflow.com/questions/12045317/how-to-hide-my-ip-address-c-net-3-5
Use a proxy with webBrowser control C#/.net 3.5
how to pass ip-address to webBrowser control
how to use custom ip address to browse a web page c#/.net
https://stackoverflow.com/questions/12019890/how-to-load-webpage-using-user-provided-ipaddress-webbrowser-control-c-net
And now, I think he has created a new username, user1563019, with more proxy/settings questions below:
https://stackoverflow.com/users/1563019/user1563019

Monitor web page access

I hope I can get some help.
I’m trying to create a host-based application using C# (in the simplest way) to monitor access to a web page from the computer that hosts the application; if this web page is accessed while the program is running, an event should be raised.
So far I have used SHDocVw.ShellWindows(), but it works only if the web page has already been accessed, not while it is being accessed.
It monitors Windows Internet Explorer.
I have also researched HttpListener, but to no avail.
Do you have any solution?
Please let me know if you require more details
This may or may not be valid for your situation, but I had to do something similar with an intranet website recently (cross-browser, so it was a little harder than just with IE). My solution was to set up a client-side application which hosts a WCF service. When the user clicks a link on the web page (or raises any event, such as $(document).ready), it sends a message back to the server telling it to connect, on a known port, to the IP address associated with the current session (really just the IP address on the request). This connection is made to the client-side application, which is listening at that IP address and port for instructions on what to do (in my case, dynamically compiling code in the request and running it).
That of course will only work for intranet websites. A more general approach that will work for IE across the internet is to create an IE extension (or maybe a Silverlight application) that talks on localhost. I've never done it, so I can't tell you how or whether it is actually possible (but in principle it seems possible).
If you don't have access to the website at all then perhaps using SharpPCAP or the Fiddler API would work for you.
Assuming the question is "I want to know when a program on my local computer accesses a given web page": a transparent HTTP proxy is likely the approach you want to take. Check out Fiddler to see if it is exactly what you want.
If your question is more "I want to know when a particular page is hit on my remote server": there are plenty of monitoring tools that parse web server logs and event logs to know the state of the server. If you want to do something yourself and you control the server's code, collect hit information for the page you are interested in and provide some page that reports this data.
After a few hours of work I have found a solution. It is not the most elegant one so far (and at times it causes a memory dump), but it does what I need.
Thanks
Just one last edit: I solved the crash issue by adding a timer so it checks the page every second or so.
Once again, thanks for your interest in my question.
class wait
{
    private static System.Timers.Timer aTimer;

    public void timed1()
    {
        // Create the timer (the interval is changed to 2 seconds below).
        aTimer = new System.Timers.Timer(10000);

        // Hook up the Elapsed event for the timer.
        aTimer.Elapsed += new ElapsedEventHandler(OnTimedEvent);

        // Set the interval to 2 seconds (2000 milliseconds).
        aTimer.Interval = 2000;
        aTimer.Enabled = true;

        Console.WriteLine("Press the Enter key to exit the program.");
        Console.ReadLine();
    }

    private static void OnTimedEvent(object source, ElapsedEventArgs e)
    {
        //NetKeyLogger klog = new NetKeyLogger();
        // Console.WriteLine("The Elapsed event was raised at {0}", e.SignalTime);
        Kelloggs.Program KKA = new Kelloggs.Program();
        SHDocVw.ShellWindows shellWindows = new SHDocVw.ShellWindows();
        string filename;

        foreach (SHDocVw.InternetExplorer ie in shellWindows)
        {
            filename = Path.GetFileNameWithoutExtension(ie.FullName).ToLower();
            if (filename.Equals("iexplore"))
            {
                string ddd = (ie.LocationURL);
                // Console.WriteLine(ddd);
                if (ie.LocationURL == "http://www.testPage.com/")
                {
                    Console.WriteLine("Page found");
                    // Console.ReadLine();
                    aTimer.Enabled = false;
                    KKA.Maino();
                }
            }
        }
    }
}
