I am looking for a simple and reliable way to create a Python web service and consume it from a .NET (C#) application.
I have found plenty of different libraries, each claiming to be better than the next, but nobody seems to provide a complete working example with a Python web service and a simple C# client, along with reasonable explanations of the steps to configure and run it.
I suggest using Tornado. It is a very simple to use, non-blocking web server written in Python. I have used it in the past and was surprised by how easy it was to learn and use.
I strongly encourage you to design your API with REST in mind. It will keep your API simple and easy to consume from any language or platform.
Please have a look at the 'Hello World' sample below; it is taken from Tornado's main site:
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

application = tornado.web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
As for the client part, there is nothing complicated about it:

string CreateHttpGetRequest(string url, string cookie)
{
    // Issue a plain HTTP GET and pass the cookie header through.
    WebRequest request = WebRequest.Create(url);
    request.Method = "GET";
    request.Headers.Add("Cookie", cookie);
    WebResponse response = request.GetResponse();
    Stream stream = response.GetResponseStream();
    StreamReader reader = new StreamReader(stream);
    string content = reader.ReadToEnd();
    reader.Close();
    response.Close();
    return content;
}
If the server is running on your local machine, the URI is 'http://localhost:8888/'.
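For completeness, a quick usage of the helper above against the local Tornado server; the cookie value is just a placeholder:

// Call the helper above against the local Tornado 'Hello, world' endpoint.
string body = CreateHttpGetRequest("http://localhost:8888/", "session=demo");
Console.WriteLine(body);  // prints: Hello, world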
Alternatively (if you prefer SOAP/WSDL), you may start your practice by:
1. Installing ZSI
2. Creating a WSDL for your service
3. Working through the full ZSI example
4. On the client (C#), following a tutorial on consuming the WSDL (a rough sketch follows below)
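A rough sketch of what that C# client typically looks like once a proxy has been generated from the WSDL (via "Add Service Reference" or svcutil.exe); HelloServiceClient and SayHello are hypothetical names that depend entirely on your own WSDL:

// Hypothetical proxy class generated from the service's WSDL.
var client = new HelloServiceClient();        // name comes from your WSDL
string greeting = client.SayHello("world");   // hypothetical operation
Console.WriteLine(greeting);
client.Close();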
Writing trace events to Application Insights is extremely easy on any platform. For example, in C# under .NET Core it is:
// Requires the Microsoft.ApplicationInsights NuGet package.
var Client = new TelemetryClient();
Client.InstrumentationKey = InstrumentationKey;
Client.TrackTrace("Test Trace from DotNet Console App.");
But reading that data back appears to have no such simple API, at least via NuGet.
I have seen the documentation for Kusto:
https://learn.microsoft.com/en-gb/azure/kusto/api/netfx/about-kusto-ingest
But the closest I've come to simply and easily reading trace events is by reading the documentation for the API Explorer and converting that into .NET Core C#:
using (var client = new HttpClient(new HttpClientHandler {}))
{
    client.DefaultRequestHeaders.Add("x-api-key", ApiKey);
    var response = client.GetAsync(InsightsUrl).Result;
    var succ = response.IsSuccessStatusCode;
    var body = response.Content.ReadAsStringAsync().Result;
    var path = $@"{AppDomain.CurrentDomain.BaseDirectory}..\..\..\Insights.json";
    File.WriteAllText(path, body);
}
What is a comparably easy method to use for reading Insights trace (etc) events without having to build a web client?
Actually, there is no other simple way, such as a one- or two-line method, to read the trace (and other telemetry) data back.
As of now, the web API you used is the best way to achieve that.
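For reference, a minimal sketch of that approach against the Application Insights REST API; the endpoint format is my reading of the API Explorer documentation, and AppId / ApiKey are placeholders for your own Application ID and API key:

// Query the last day of traces via the Application Insights REST API (sketch).
var client = new HttpClient();
client.DefaultRequestHeaders.Add("x-api-key", ApiKey);
string query = Uri.EscapeDataString("traces | where timestamp > ago(1d)");
string url = $"https://api.applicationinsights.io/v1/apps/{AppId}/query?query={query}";
string json = client.GetStringAsync(url).Result;   // JSON table of trace rows
Console.WriteLine(json);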
I am very new to ASP.NET. Can ASP.NET run without the .NET Framework? That is, can default.aspx run smoothly without the .NET Framework? I am asking because of the following existing code, which runs on a web hosting server and also on a private server. I am not sure about the private server's details (I will find out in 2-3 days). The code goes as follows:
try
{
    WebRequest req = WebRequest.Create(General.SiteUrl + "/pages/" + page + ".htm");
    WebResponse resp = req.GetResponse();
    Stream stream = resp.GetResponseStream();
    StreamReader reader = new StreamReader(stream);
    content = reader.ReadToEnd();
}
catch { content = "<html><head></head><body>Content not found.</body></html>"; }
The web hosting server manages to run the try block successfully, whereas the private one always shows "Content not found." Any ideas?
People that visit your website will not need the .NET Framework; all they'll need is a browser.
The server that runs your website will need the .NET Framework since ASP.NET is a part of it.
The .NET Framework is required on the server side for a few reasons (these are just some examples):
Your code is compiled into an intermediate language designed to be platform agnostic. A runtime (the .NET Framework) is required to convert this intermediate language into something the machine can understand. This is accomplished by the JIT compiler.
ASP.NET consists of several libraries, System.Web.dll for example, which are distributed as part of the .NET Framework.
The code is hosted inside a virtual machine (in the non-traditional sense). The virtual machine takes care of a lot of heavy lifting for you, such as security and garbage collection. Again, this is all part of the .NET Framework. A small illustration of this dependency follows below.
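As a sketch (a hypothetical Page_Load handler, not part of the original code), server-side code can report the CLR it runs on, which only works because the .NET Framework is installed on the server:

// Sketch: report which CLR version is JIT-compiling and executing this page.
protected void Page_Load(object sender, EventArgs e)
{
    // Environment.Version reports the runtime (CLR) version hosting this code.
    Response.Write("Server-side CLR version: " + Environment.Version);
}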
EDIT:
I think you are asking the wrong question here. You are wondering why your code is going into the catch block and returning "Content not found." The .NET Framework is properly installed, since the catch block is being executed; in fact, the code couldn't get nearly that far without the .NET Framework.
You need to figure out what exception is being thrown inside the try block that is causing it to fall into the catch block. You can achieve this with a debugger, with logging, or by temporarily removing the catch block altogether so the server lets the exception bubble all the way up to the top. For example, if you change your code block to look like this:
WebRequest req = WebRequest.Create(General.SiteUrl + "/pages/" + page + ".htm");
WebResponse resp = req.GetResponse();
Stream stream = resp.GetResponseStream();
StreamReader reader = new StreamReader(stream);
content = reader.ReadToEnd();
The exception details will be displayed in the browser (provided you have debugging turned on). What error is displayed without the try / catch?
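If removing the try/catch on the private server is not practical, a less intrusive variant is to keep it but temporarily surface the real exception instead of the generic fallback page (a sketch reusing the code from the question; remove it once you've found the cause):

try
{
    WebRequest req = WebRequest.Create(General.SiteUrl + "/pages/" + page + ".htm");
    WebResponse resp = req.GetResponse();
    StreamReader reader = new StreamReader(resp.GetResponseStream());
    content = reader.ReadToEnd();
}
catch (Exception ex)
{
    // Temporarily show the real reason instead of "Content not found."
    content = "<html><body><pre>" + HttpUtility.HtmlEncode(ex.ToString()) + "</pre></body></html>";
}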
No, .NET code will not run without the support of the .NET Framework, because code written in a .NET language is compiled into IL (Intermediate Language) code, which requires the runtime to execute.
The .NET Framework, or some variation of it (e.g. Mono), is not required on the client side. It is a requirement of the server that is serving the pages.
When data is sent to the client via HTTP, it has already been translated into HTML. So all the client needs is a browser capable of consuming HTML and running any scripts associated with that site.
The .NET Framework is the foundation that powers this code:
try
{
    WebRequest req = WebRequest.Create(General.SiteUrl + "/pages/" + page + ".htm");
    WebResponse resp = req.GetResponse();
    Stream stream = resp.GetResponseStream();
    StreamReader reader = new StreamReader(stream);
    content = reader.ReadToEnd();
}
catch
{
    content = "<html><head></head><body>Content not found.</body></html>";
}
so in short, "no", you must have the .net framework installed on the server that is hosting your website.
On the other hand however, on the client side, your website visitors do NOT need the .net framework to "view" your website.
I would like to have an offline ClickOnce application (users can run it from the Start menu), but I would like it to function similarly to an online one (i.e., check that the web page / server is there before it runs). This way I can take down (uninstall) a ClickOnce application and it will stop working for end users, without my having to visit thousands of desktops. This is for an internal corporate environment, so we have total control over the servers, end clients, etc.
There are a lot of clients out there world wide. Essentially, I would like to give them a message like "This application's functionality has been moved to XXX application, please use it instead." or "This application has been retired." If I could get the install folder URL from code, I could have a message.xml file sitting in that directory with some logical tags in it for accomplishing this. If that message isn't there (server offline), I could have the application fail gracefully and instruct the user to contact their local IT for assistance.
Or can this same thing be accomplished in a different way?
I've used the following code to solve part of your problem:
try
{
    // The next four lines probe for the update server.
    // If the update server cannot be reached, an exception will be thrown by the GetResponse method.
    string testURL = ApplicationDeployment.CurrentDeployment.UpdateLocation.ToString();
    HttpWebRequest webRequest = WebRequest.Create(testURL) as HttpWebRequest;
    webRequest.Proxy.Credentials = CredentialCache.DefaultCredentials;
    HttpWebResponse webResponse = webRequest.GetResponse() as HttpWebResponse;
    // I discard the webResponse and go on to do a programmatic update here - YMMV
}
catch (WebException ex)
{
    // handle the exception
}
There may be some other useful exceptions to catch as well -- I've got a handful more in the code I use, but I think the rest all had to do with exceptions that the ClickOnce update can throw.
That handles the missing server case -- though it does require you to have been proactive in putting this in place well before the server is retired.
You can only get the deployment provider URL if the application is online-only. Otherwise it's not available.
Aside from moving a deployment, you can programmatically uninstall and reinstall the application.
You deploy the new version (or whatever you want to install instead) to another URL. Then you add uninstall/reinstall code to the old version and deploy it. When the user runs it, he will get an update, and then it will uninstall itself and call the new deployment to be installed.
The code for uninstalling and reinstalling a ClickOnce application can be found in the article on certificate expiration on MSDN, Certificate Expiration in ClickOnce Deployment.
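The uninstall/reinstall code itself is in that article. For the "he will get an update" part, a programmatic update check at startup looks roughly like this (a sketch, assuming a WinForms application and System.Deployment.Application):

// Sketch: force a ClickOnce update at startup, then restart into the new version.
if (ApplicationDeployment.IsNetworkDeployed)
{
    var deployment = ApplicationDeployment.CurrentDeployment;
    if (deployment.CheckForUpdate())
    {
        deployment.Update();       // download the new version
        Application.Restart();     // relaunch so the new version runs
        return;
    }
}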
You could create a new version of your application which only contains a message box saying "This application is retired" and deploy it.
The next time a user starts the application, the new version will be downloaded, showing your message box.
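A minimal sketch of what that retired build's entry point could look like, assuming a WinForms application (the message text is taken from the question):

// Sketch of Program.Main for the "retired" build (requires System.Windows.Forms).
[STAThread]
static void Main()
{
    MessageBox.Show(
        "This application's functionality has been moved to XXX application, please use it instead.",
        "Application retired");
    // Exit without ever showing the main window.
}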
What I did was combine the comments on this question with some sprinkling of my own to come up with the answer below.
Here is the XML document, saved as an .html file (our web servers don't allow XML transfers):
<?xml version="1.0" encoding="utf-8"?>
<appstatus>
    <online status="true" message="online"/>
</appstatus>
Then I have the following code read the file above and use it to decide whether the application should close or not:
string testURL = "";
try
{
    // Probe for the update server.
    // If the update server cannot be reached, an exception will be thrown by the GetResponse method.
#if !DEBUG
    testURL = ApplicationDeployment.CurrentDeployment.UpdateLocation.ToString() + "online.html";
#else
    testURL = "http://testserver/appname/online.html";
#endif
    HttpWebRequest webRequest = WebRequest.Create(testURL) as HttpWebRequest;
    webRequest.Credentials = CredentialCache.DefaultCredentials;
    HttpWebResponse webResponse = webRequest.GetResponse() as HttpWebResponse;
    StreamReader reader = new StreamReader(webResponse.GetResponseStream());

    XmlDocument xmlDom = new XmlDocument();
    xmlDom.Load(reader);

    // The element name must match the root of the XML file above ("appstatus").
    if (xmlDom["appstatus"]["online"].Attributes["status"].Value.ToString() != "true")
    {
        MessageBox.Show(xmlDom["appstatus"]["online"].Attributes["message"].Value.ToString());
        this.Close();
        return;
    }
}
catch (WebException ex)
{
    // handle the exception
    MessageBox.Show("I either could not get to the website " + testURL + ", or this application has been taken offline. Please try again later, or contact the help desk.");
}
An Apache server is installed on one machine and there is a .php script present on the server. From my Win32 or C# application, how do I invoke the script and how do I receive the data from the server?
It's the same as reading the output from any web page; the PHP script is processed by the server.
This code reads the output of a PHP page from the php.net online manual:
HttpWebRequest wr = (HttpWebRequest)WebRequest.Create(@"http://www.php.net/manual/en/index.php");
using (HttpWebResponse resp = (HttpWebResponse)wr.GetResponse())
{
    StreamReader sr = new StreamReader(resp.GetResponseStream());
    string val = sr.ReadToEnd();
    Debug.WriteLine(val);
}
You need to open a network connection to the host, or use the PHP command-line interpreter (I'm not sure whether that is Linux-only, but it should work on Windows too): try php filename.php to execute the script and return the echoed output.
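If you go the command-line route from C#, here is a sketch of invoking the PHP CLI and capturing its echoed output; it assumes php is on the PATH, and the script path is a placeholder:

// Sketch: run a PHP script with the CLI interpreter and read its echoed output.
var psi = new ProcessStartInfo("php", "C:\\scripts\\myscript.php")
{
    RedirectStandardOutput = true,
    UseShellExecute = false,
    CreateNoWindow = true
};
using (Process proc = Process.Start(psi))
{
    string output = proc.StandardOutput.ReadToEnd();
    proc.WaitForExit();
    Console.WriteLine(output);
}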
So, I'm actually trying to set up a WOPI host for a web project.
I've been working with this sample (the one from Shawn Cicoria, if anyone knows it), and he provides a whole code sample which shows how to build the links to use your Office Web Apps server with some files.
My problem here is that his sample works with files that are ON the OWA server, and I need it to work with online files (like http://myserv/res/test.docx). So when he reads his file content, he uses this:
var stream = new FileStream(myFile, FileMode.Open, FileAccess.Read);
responseMessage.Content = new StreamContent(stream);
But that doesn't work on "http" files, so I changed it to this:
byte[] tmp;
using (WebClient client = new WebClient())
{
    client.Credentials = CredentialCache.DefaultNetworkCredentials;
    tmp = client.DownloadData(name);
}
responseMessage.Content = new ByteArrayContent(tmp);
which IS compiling. And with this sample, I managed to open Excel files in my Office Web Apps, but Word and PowerPoint files aren't opened. So, here's my question.
Is there a difference between these two methods which could alter the content of the files that I'm reading, despite the fact that the WebClient allows "online reading"?
Sorry for the unclear post, it's not that easy to explain such a problem; I did my best.
Thanks for your help!
Is there a difference between these two methods, which could alter the content of the files that I'm reading, despite the fact that the WebClient allows "online reading"?
FileStream opens a file handle to a file located on a local disk, or on a remote disk sitting elsewhere on a network. When you open a FileStream, you're directly manipulating that particular file.
On the other hand, WebClient is a wrapper around the HTTP protocol. Its responsibility is to construct HTTP request and response messages, allowing you to work with them conveniently. It has no direct knowledge of a resource such as a file, or of where it is located. All it knows how to do is construct a message complying with the specification, send a request, and expect a response.
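To make the contrast concrete, here are the question's two approaches side by side; responseMessage is the HttpResponseMessage from the WOPI sample, and the file path and URL are placeholders:

// Alternative 1: local file - a handle onto the bytes on disk (or a network share).
var fileStream = new FileStream(@"C:\files\test.docx", FileMode.Open, FileAccess.Read);
responseMessage.Content = new StreamContent(fileStream);

// Alternative 2: remote file - an HTTP GET whose response body is buffered into a byte array.
using (var client = new WebClient())
{
    client.Credentials = CredentialCache.DefaultNetworkCredentials;
    byte[] bytes = client.DownloadData("http://myserv/res/test.docx");
    responseMessage.Content = new ByteArrayContent(bytes);
}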