I use an API client NuGet package that sends HTTP requests to and receives responses from a third-party API. How can I capture and log the HTTP requests and responses executed by that NuGet package in order to store them in a log database?
Example:
var client = new ThirdPartyPackage.Client();
var todos = client.fetchTodos(); // how to log the HTTP request/response executed by the third party package?
Specific example: https://github.com/exactonline/exactonline-api-dotnet-client/blob/master/test/ExactOnline.Client.Sdk.IntegrationTests/ExactOnlineQueryTests.cs#L21
IIS already does the logging for you. You can find the logs in the folder %SystemDrive%\inetpub\logs\LogFiles.
You can customize the log schema as you wish; you don't need a NuGet package for this. Or, if you want, you can use NLog.
The library you're using appears to use WebRequest under the hood to perform its HTTP requests. Therefore, you should be able to set the WebRequest.DefaultWebProxy property to route its requests through a proxy that intercepts and logs them.
https://msdn.microsoft.com/en-us/library/system.net.webrequest.defaultwebproxy(v=vs.110).aspx
(Source code ref: https://github.com/exactonline/exactonline-api-dotnet-client/blob/master/src/ExactOnline.Client.Sdk/Helpers/ApiConnector.cs#L167)
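To illustrate the idea, here's a minimal sketch, assuming a logging proxy (Fiddler, or one you write yourself) is listening on localhost:8888; the address is illustrative:
using System.Net;

// Route all WebRequest traffic through the logging proxy. Requests made
// inside the NuGet package pick this up as long as the SDK does not set
// an explicit proxy of its own.
WebRequest.DefaultWebProxy = new WebProxy("http://localhost:8888");

var client = new ThirdPartyPackage.Client();
var todos = client.fetchTodos(); // now visible to, and logged by, the proxy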
Be warned, however: this is potentially a lot of work, depending on exactly how much information you wish to capture about the request. If I were you, I'd seriously consider forking the library and making changes to their code to provide some hooks for logging.
The benefit of the DefaultWebProxy approach is that once it's up and working it shouldn't need much maintenance; the negatives are that it's potentially a lot of work up front and adds another failure point. The benefit of forking the library is that it should be quicker to implement initially, but there is potentially more ongoing maintenance, as you may need to manually merge any changes to the original SDK into your fork. Given that the package you're using has barely changed in two years, however, this seems like an acceptable risk.
In addition, if you go the fork route you can always send the original repository maintainers a pull request, which would hopefully mean your code eventually ends up in the NuGet package, and you could then revert from your fork back to NuGet.
Related
I have gone through the Jaeger documentation. It explains how Jaeger works for HTTP request scenarios, but if I want to get traces of NServiceBus publish/subscribe messages, how can I do that with Jaeger?
Is it possible? Or does Jaeger only work with HTTP requests?
Not out of the box; you have to plug a behavior into NSB that uses OpenTelemetry:
https://github.com/open-telemetry/opentelemetry-dotnet
You will have to write custom code.
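A rough sketch of what that custom code might look like; this is not the official instrumentation, and the ActivitySource name and tag are illustrative:

using System;
using System.Diagnostics;
using System.Threading.Tasks;
using NServiceBus.Pipeline;

// Wraps each incoming message in a System.Diagnostics Activity, which
// the OpenTelemetry .NET SDK can pick up and export to Jaeger.
public class TracingBehavior : Behavior<IIncomingLogicalMessageContext>
{
    static readonly ActivitySource Source = new ActivitySource("MyApp.NServiceBus");

    public override async Task Invoke(IIncomingLogicalMessageContext context, Func<Task> next)
    {
        using (var activity = Source.StartActivity(
            "Process " + context.Message.MessageType.Name, ActivityKind.Consumer))
        {
            activity?.SetTag("messaging.message_id", context.MessageId);
            await next();
        }
    }
}

// Registered on the endpoint:
// endpointConfiguration.Pipeline.Register(new TracingBehavior(), "OpenTelemetry tracing");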
Plus, you can push metrics as well, as shown in our Application Insights, Prometheus, and other samples.
Let's continue the conversation in our support channels?
Not sure if you are still looking for a solution for this. You should be able to do this currently using the NServiceBus.Extensions.Diagnostics.OpenTelemetry package from NuGet. This is built by Jimmy Bogard and instruments NServiceBus with the required support for OpenTelemetry.
The source for this is available here. You can connect this to any backend of your choice that supports OpenTelemetry, including but not limited to Jaeger and Zipkin.
Additionally, here is an example that shows this in action.
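Wiring it into the OpenTelemetry SDK looks roughly like this; the ActivitySource name is an assumption, so check the package README:

using OpenTelemetry;
using OpenTelemetry.Trace;

// Subscribe to the ActivitySource the package emits and export to Jaeger.
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("NServiceBus.Extensions.Diagnostics") // assumed source name
    .AddJaegerExporter()                             // or AddZipkinExporter()
    .Build();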
I have a scenario that requires me to export a large mailing list (> 1m customers) to an external email system. I have source code and control over both applications.
I need a mechanism for transferring the data from one system to another that is:
Robust
Fast
Secure
So far, I have set up a standard MVC controller method that responds to a request (over https), performs some custom security checks, and then pulls the data from the DB.
As data is retrieved from the DB, the controller method iterates over the results and writes the response in a plain-text format, flushing the response every 100 records or so. The receiver reads each row of the response and performs storage and processing logic.
I have chosen this method because it does not require persisting user data to a permanent file, and a client built in any language will be able to implement receiver logic without a dependency on any proprietary technology (e.g. WCF).
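In code, the controller method is roughly the following sketch; the repository call and the row format are placeholders:

using System.Web.Mvc;

public class ExportController : Controller
{
    public void MailingList()
    {
        // ... custom security checks go here ...

        Response.ContentType = "text/plain";
        Response.BufferOutput = false;

        int count = 0;
        foreach (var customer in CustomerRepository.StreamAll()) // placeholder data source
        {
            Response.Output.WriteLine(customer.Id + "\t" + customer.Email);
            if (++count % 100 == 0)
                Response.Flush(); // push the current batch to the receiver
        }
        Response.Flush();
    }
}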
I am aware of other transport mechanisms that I can use with .NET, but none with an overall advantage, given the requirements listed above.
Any insight into which technologies might be better than my request / response solution?
Two suggestions come to mind; we had something similar to this happen at our company a little while ago (an acquired website with over 1 million monthly active users and associated data needed a complete datacenter change, including a 180 GB database that was still active).
We ended up setting up pull replication to it over SSH (SQL Server 2005); this is black magic at best and took us about a month to set up properly between research and failed configurations. There are various blog posts about it, but the key parts are:
1) Set up a named server alias in SQL Server Configuration Manager on the subscriber DB machine, specifying localhost:1234 (choose a better number).
2) Set up PuTTY to make an SSH tunnel between your subscriber's localhost:1234 from step 1 and the publisher DB's port 9876 (again, choose a better number); see the example command after this list. Also make sure you have an SSH server enabled on the publisher, keep the port a secret, and use a secure password for the SSH credentials.
3) Add a server alias on the publisher for port 9876 for the replicated DB.
4) If your data set is small enough, create the publications and try starting up the subscriber using snapshot initialization. If not, you need to create a publication with "initialize from backup" enabled and restore a partial backup at the subscriber, using FTP to transfer the backup file over. This method is much faster than snapshot initialization for larger datasets.
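For step 2, the tunnel can also be created with PuTTY's command-line tool plink; the host name and account are placeholders:
plink -ssh -L 1234:localhost:9876 replication-user@publisher-host
This makes the subscriber's localhost:1234 forward to port 9876 on the publisher, so the alias from step 1 transparently reaches the publisher's SQL Server.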
Pros: You don't need to worry about authentication for SQL Server, "just" the SSH tunnel. The publication can easily be modified in case you realize you need more columns or schema changes. You save the time of writing an API that may be only temporary and might have more security issues.
Cons: It's weird; there's not much official documentation, and SSH on Windows is finicky. If you have a Linux-based load balancer, it may be easier. There are a lot of steps.
Second suggestion: use ServiceStack and protobuf-net to create a very quick web service and expose it over HTTPS; a sketch follows the links below. If you know how to use ServiceStack, it should be very quick. If you don't, it will take a little time, because it operates on a different design philosophy from Web API and WCF. Protobuf-net is the most compact and fastest serialization/deserialization wire format widely available currently. Links:
ServiceStack.NET
protobuf-net
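As a rough sketch of the service side (ServiceStack API names of that era from memory, so verify against the docs; the DTOs and repository are illustrative):

using System.Collections.Generic;
using ProtoBuf;
using ServiceStack;

[ProtoContract]
public class Customer
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Email { get; set; }
}

[Route("/customers/{Page}")]
public class GetCustomers : IReturn<List<Customer>>
{
    public int Page { get; set; }
}

public class CustomerService : Service
{
    public object Get(GetCustomers request)
    {
        // Placeholder repository: return one page of the mailing list.
        return CustomerRepository.GetPage(request.Page, 10000);
    }
}

// In the AppHost, Plugins.Add(new ProtoBufFormat()); enables the protobuf
// wire format (ServiceStack.ProtoBuf package).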
Pros: You can handle security however you like (this is also a downside, since you then have to worry about it). It's much better documented. You get to use or learn a great framework that will speed up the rest of your webservice-related projects for the rest of your life (or until something better comes along).
Cons: You have to write it. You have to test it. You have to support it.
I hope someone can explain in detail the differences between the server side and the client side of the Git smart HTTP protocol.
Ideally, please provide some references (books and code) for studying further.
Some people have said:
libgit2 already exposes a packbuilder. However, you'll have to implement the server-side protocol by yourself.
(reference: this link)
Can we implement the server side with libgit2sharp (or libgit2) in a small amount of code?
Following on from the question above: we can deal with packs via the git.exe receive-pack and git.exe upload-pack commands with the --stateless-rpc argument. The implementing code is here and here.
Can we compile the above code as native code into a .NET assembly? We could connect an ASP.NET stream to git.exe through a pipe, but that is not a good approach.
If you are just looking for a .NET library for interacting with Git, try GitSharp or NGit. The source code for GitSharp may also be useful, since you appear to be a C# developer and GitSharp is not an automated port. Otherwise:
(As the comments above show, there isn't a whole lot of easy-to-find documentation on this protocol.) Fortunately, Git makes reverse engineering the protocol easy, and it shouldn't be too difficult.
The newer smart Git protocol adds another parameter to the GET HTTP request; older servers (older than 1.6.6) will ignore it, while it causes newer servers to switch to a multi-POST mode. The newer server at this point builds a custom packfile for the client containing only the objects the client needs.
In order to reverse engineer exactly what happens during the protocol exchange, you can use the environment variable:
SET GIT_CURL_VERBOSE=1
With this enabled, Git will output the HTTP requests and headers for each call that it makes, and will also output the HTTP status code and response headers for each response. You can also use a tool like Fiddler to see all of the HTTP traffic that is occurring. To do this, you will have to use a second Git environment variable to force Git to go through the HTTP proxy:
SET HTTP_PROXY=http://localhost:8888
At this point you basically start issuing Git commands and monitoring the HTTP traffic.
For instance, executing "git push -u origin master" starts with:
GET http://localhost:8000/gitserver/git/info/refs?service=git-receive-pack
This blog entry has a decent example of the methodology described above.
I have a client-server application.
The clients send the server HTTP POSTs with info every second or so.
The server is implemented in C#; the server doesn't need to respond to the client in any way.
What's the easiest and most practical way to get this done? Is there some kind of easy-to-use library that I can import into my project?
Why not just use a regular old web service? It sounds like you have simple functionality that doesn't need to maintain connection state. With a web service, you can simply expose the methods to your client, accessible via HTTP/S. If you're already using .NET for your client, you can simply add a web reference to your project and have .NET do the heavy lifting for you. There wouldn't be any need to reinvent the wheel.
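For example, a classic ASMX web service is only a few lines; the names here are illustrative:

using System.Web.Services;

[WebService(Namespace = "http://example.com/")]
public class StatusService : WebService
{
    [WebMethod]
    public void ReportStatus(string clientId, string info)
    {
        // Store or process the posted info; no response body is needed.
    }
}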
You can use http.sys to create your own http listener without IIS or additional overhead. Aaron Skonnard has a good article here.
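HttpListener is the managed wrapper over http.sys; a minimal receiver might look like this sketch, where the port and URL prefix are illustrative:

using System;
using System.IO;
using System.Net;

class Receiver
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/status/");
        listener.Start();
        while (true)
        {
            HttpListenerContext context = listener.GetContext(); // blocks until a request arrives
            using (var reader = new StreamReader(context.Request.InputStream))
            {
                string body = reader.ReadToEnd(); // the client's POSTed info
                // ... store/process body here ...
            }
            context.Response.StatusCode = 204; // nothing to send back
            context.Response.Close();
        }
    }
}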
Because of certain limitations of uhttpsharp (specifically, no support for POST forms or file uploads, and its use of threads to process requests), I've made NHttp available on GitHub; it supports full request parsing like ASP.NET and processes requests using the asynchronous TCP model.
I want to watch all the HTTP requests going out of a certain application and cancel them if needed.
Is there a way to do this in C#?
If you can touch the system's proxy configuration (which is used by many applications) or, if the app doesn't heed that setting, touch the application's configuration to use a proxy of your choice, you can create an HTTP proxy that does the job for you.
If you just want to profile, there is a ready-made tool that behaves like this and is very nice: Fiddler.
Otherwise, you'd have to go deeper into the network stack and implement something like a sniffer/firewall, for instance using WinPcap. That's a lot harder to do.
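As a middle ground, the engine behind Fiddler is embeddable as FiddlerCore; here is a hedged sketch (API names as I remember them from older FiddlerCore releases, so verify against its docs; the blocking rule is illustrative):

using System;
using Fiddler;

class Interceptor
{
    static void Main()
    {
        FiddlerApplication.BeforeRequest += session =>
        {
            if (session.hostname.EndsWith("blocked.example.com")) // illustrative rule
            {
                // Short-circuit: answer locally instead of forwarding upstream.
                session.utilCreateResponseAndBypassServer();
                session.oResponse.headers.SetStatus(403, "Blocked by local proxy");
            }
        };

        // Listen on port 8888 and register as the system proxy, so apps that
        // honour the system proxy settings route their traffic through us.
        FiddlerApplication.Startup(8888, true, false);
        Console.ReadLine();
        FiddlerApplication.Shutdown();
    }
}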
If you're working with Windows, it is possible to hook the WinInet API calls and do whatever you want with them.
FreeCap (or its commercial version, WideCap) does that: it allows you to send TCP traffic through a proxy server. That proxy might then do the filtering (e.g. Fiddler).
I know this brings more stand-alone applications into the system, but it works.