Prevent API client from leaking sensitive data [closed] - c#

Suppose I have an API endpoint such as the Facebook Graph API, and I design an application that runs on my PC and periodically connects to the API to retrieve my posts, comments, etc. On each Timer_Tick, the program reconnects to the API, fetches the top 10 data items, and persists them into a database.
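In code, the pattern is roughly this minimal sketch (the endpoint URL, token, and SaveToDatabase helper are placeholders, not the real app's code):

    using System;
    using System.Net.Http;
    using System.Timers;

    class GraphPoller
    {
        static readonly HttpClient Http = new HttpClient();

        static void Main()
        {
            // Timer_Tick-style polling every 60 seconds.
            var timer = new Timer(60000);
            timer.Elapsed += async (sender, e) =>
            {
                // Placeholder Graph API call; a real one needs a valid access token.
                string json = await Http.GetStringAsync(
                    "https://graph.facebook.com/me/posts?limit=10&access_token=TOKEN");
                SaveToDatabase(json);
            };
            timer.Start();
            Console.ReadLine(); // keep the process alive
        }

        // Hypothetical persistence helper; the real app writes to its database here.
        static void SaveToDatabase(string json) { /* INSERT into the local DB */ }
    }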
Now, suppose that this application is built by a 3rd party, and I simply downloaded it from the internet as a binary, not open source.
How can I tell whether the application is leaking my Facebook data to a third party without my knowledge?
Is there a mechanism to monitor for such leaks (from a programmatic perspective)?

This is a matter of security: to be reasonably sure, you must consider every known vulnerability and verify there is no way to reveal data through it, but you can never be certain about unknown ones.
If this is a matter of trust and you are dealing with sensitive data, I strongly recommend avoiding 3rd party tools unless they are provided or certified by the API provider. Here are some techniques which will help you understand what is going on behind the scenes, but they definitely will not guarantee safety:
1- First of all, make sure the application is really binary code (I know you mentioned it is a binary); some executable files are just scripts or semi-scripts that merely look like binaries. For instance, if the source of the executable was written in C#, Python, or Java, there are tools (for .NET, decompilers such as ILSpy or dotPeek, for example) that will help you decompile the application and find out what is going on inside. This approach can of course be considerably tough if, for example, the code is obfuscated or involves complex object-oriented models.
2- Use a network monitoring tool like Wireshark (or any similar tool) to capture all HTTP/HTTPS traffic while using the 3rd party application. Because the API calls are just HTTP requests that applications use to exchange data, these tools let you monitor what is going on on your computer. Normally this application should only connect to Facebook's servers and the URLs needed to use the web API; if any request is sent to, or received from, a server other than Facebook, there is a chance of a data leak. If those requests are not encrypted with SSL/TLS, you will be able to see the data being exchanged; if they are encrypted with SSL/TLS, there are tools that provide a man-in-the-middle setup so you can inspect the traffic; but if the data is additionally encrypted at the application layer, you won't be able to see what is being transmitted, which is itself suspicious and suggests an even higher chance of a leak. Don't forget that this monitoring must cover the entire usage cycle of the application.
Also, using the OS firewall to limit the application so that it can only talk to the server whose API you are calling is a further step toward decreasing the chance of a data leak; a sketch of automating this follows.
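As a sketch of that last point, the firewall rules can even be scripted from C# by shelling out to the standard netsh advfirewall commands (the executable path and IP range are placeholders; use the real application path and the API provider's published address ranges):

    using System.Diagnostics;

    class FirewallLockdown
    {
        static void Netsh(string args)
        {
            // Shells out to the built-in Windows Firewall CLI; must run elevated.
            var p = Process.Start(new ProcessStartInfo("netsh", args)
            {
                Verb = "runas",
                UseShellExecute = true
            });
            p?.WaitForExit();
        }

        static void Main()
        {
            // Default-deny all outbound traffic, then allow the monitored app to
            // reach only the API provider's address range. This affects the whole
            // machine, so add allow rules for anything else you still need
            // (DNS, your browser, Windows Update, ...).
            Netsh("advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound");
            Netsh("advfirewall firewall add rule name=\"AllowAppToApi\" dir=out " +
                  "action=allow program=\"C:\\Tools\\ThirdPartyApp.exe\" " +
                  "remoteip=157.240.0.0/16");
        }
    }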

Related

How to self host asp.net web api over the internet? [closed]

This question has been asked multiple times here and I still can't get my head around a few things, since I'm more of a beginner than most people here with programming in general. Here is my scenario so you understand where I'm at; I apologize for the amount of text. I have created a .NET Web API and uploaded it to a normal web app in Azure. I can now use this Web API successfully over the internet.
Then I read about the cost per GB of bandwidth in Azure and realized that if this Web API scales and is used a lot, it could end up being extremely expensive (more than I can afford). I read around some more and saw sites like Arvixe and others offering unlimited bandwidth on shared hosts; however, after digging further I found out that as soon as people start using high bandwidth, these sites ban you for exploiting the service. I also saw people saying a VPS or dedicated server is a good choice, but setting those up requires experience I don't have, and renting a machine with a good CPU and RAM costs a lot as well. All I'm really looking for is to host the site; no special configuration is needed.
Now I've gotten to where I read about self-hosting a Web API. I figured I could use my own computer, which has a much better CPU and RAM than anything I can afford as a VPS, etc. I see LOTS of examples, but I don't really understand how they can be used practically over the internet when they all just use localhost.
Questions:
Would it be a good choice to host this Web API myself, and can I do it safely without too much knowledge?
A lot of people refer to IIS, which I have never used, but I've seen a lot of examples using localhost. Since I already own a URL, can I use it instead, from IIS, to run this from my own computer? And would it be considered safe?
I also see a lot of people using the OWIN self-host, which does not need IIS. Would this be the better alternative? Like this, for example: http://www.asp.net/web-api/overview/hosting-aspnet-web-api/use-owin-to-self-host-web-api
Conclusion:
What it all comes down to is that I'm looking for a way for a beginner to host a Web API over the internet without having to worry about bandwidth and CPU usage becoming too expensive for me as a private person to afford.
Any input or help about where I can begin with this is very appreciated. Thanks!
So firstly, I think you're confused by the term "self-host": this usually refers to hosting an ASP.NET Web API application in a process other than IIS. It does not refer (as I think you understand it to) to where the API is hosted from an internet-access perspective.
Would it be a good choice to host this Web API myself, and can I do it safely without too much knowledge?
No. Don't do this; at best it will be difficult to configure and will perform badly. At worst it can actually expose your computers at home to some significant vulnerabilities.
A lot of people refer to IIS, which I have never used, but I've seen a lot of examples using localhost. Since I already own a URL, can I use it instead, from IIS, to run this from my own computer? And would it be considered safe?
IIS is Microsoft's web server, and generally it is the host software you would install to run an ASP.NET application (of any kind). You can install it on Windows 7/8/10, but again, I strongly recommend against this.
I also see a lot of people using the OWIN self-host, which does not need IIS. Would this be the better alternative? Like this, for example: http://www.asp.net/web-api/overview/hosting-aspnet-web-api/use-owin-to-self-host-web-api
This goes back to my first comment: you are misunderstanding what self-hosting means.
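For reference, OWIN self-hosting looks like this minimal sketch (following the tutorial you linked; it assumes the Microsoft.AspNet.WebApi.OwinSelfHost NuGet package):

    using System;
    using System.Web.Http;
    using Microsoft.Owin.Hosting;
    using Owin;

    public class Startup
    {
        // Wires Web API into the OWIN pipeline instead of IIS.
        public void Configuration(IAppBuilder app)
        {
            var config = new HttpConfiguration();
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });
            app.UseWebApi(config);
        }
    }

    public class Program
    {
        static void Main()
        {
            // The API runs inside this console process; no IIS involved.
            using (WebApp.Start<Startup>("http://localhost:9000/"))
            {
                Console.WriteLine("Listening on http://localhost:9000/");
                Console.ReadLine();
            }
        }
    }

Note that this only answers the "which process hosts it" question; it says nothing about making that process reachable from the internet, which is the part I'm advising against.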
I wish I had some recommendations as to where you could host this at a very low cost, but I don't. One thing I know you can do with Azure is set up spending limits: you could use Azure to host the API with a spending limit of, say, $100, and when that limit is reached your API simply becomes unavailable. You haven't really specified what you want to use the API for, so I don't know whether that would work for you, but it's a good way to at least test it out and get an idea of what it's going to cost you longer term.

Most efficient interface for enduser: Winforms/WCF versus Web Browser access [closed]

What I need to do, to meet my competition, is 'update' my WinForms application so that it is fully accessible via an online service.
I'm not sure whether to recode my WinForms app so that it uses WCF/OData to access the database, or whether the whole app needs rewriting as a WebForms app with the database moved to the hosted website. The latter will likely be the more difficult of the two options given my coding experience. At present, the database resides on the end user's PC and the WinForms app provides 100% of the end user's access. There is also a web interface, self-hosted on the end user's PC, that gives the end user's clients access to reservations, their own user details, etc.
With regard to serving up the data to the end user, will there be an appreciable time difference between my proposed WinForms app retrieving and consuming data from a remotely hosted database VERSUS a fully hosted WebForms/database app? Can this difference be quantified before I take the plunge?
Web versus Desktop is a huge topic.
My two cents:
Web:
1 - Pros:
Accessible from all kinds of devices (PC, Mac, smartphone, tablet).
No installation required.
Only server deployment / update is required.
2 - Cons:
Stateless (this means no client-side cache, for example).
Less control over the client computer's features (file system and the like).
Browser hell (the UI looks/behaves differently in every browser).
Harder to code due to crappy JavaScript everywhere.
Web-based vulnerabilities (such as XSS and the like).
.Net Windows Desktop:
1 - Pros:
Stateful (you can cache lots of data in the client).
More control over the client computer's features.
Looks the same on every machine.
Easier to code due to its stateful nature and no JavaScript (hooray).
2 - Cons:
Only works on Windows. No smartphone, no Mac, no tablet.
Client installation required (which might include a .NET Framework installation).
Server + client updates required (easier with ClickOnce).
That said, WinForms is a really old technology that few people care about anymore and that is no longer being developed. Web applications can be made to look and feel beautiful with some CSS; WinForms looks ugly no matter how hard you try to improve it.
If you go the Windows desktop route, you would do better to upgrade your application to WPF.

How to make website status available to all platforms [closed]

Recently, a project came my way with requirements to ...
1. Build a C# console app that continuously checks website availability.
2. Save website status somewhere so that different platforms can access the status.
The console app is complete, but I'm wrestling with where I should save the status. I'm thinking a SQL record.
How would you handle where to save the status so that it's extensible, flexible, and available to any number of frameworks or platforms?
UPDATE: Looks like I'll go with DB storage behind a RESTful service. I'll also save the status to an XML file as a fallback for when the service is down.
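For illustration, the check-and-fallback side of that plan could look like this sketch (the URL, file name, and timeout are placeholders):

    using System;
    using System.Net;
    using System.Xml.Linq;

    class AvailabilityCheck
    {
        static void Main()
        {
            bool up;
            try
            {
                // HEAD keeps the probe cheap; GetResponse() throws a WebException
                // on 4xx/5xx or timeouts, which we treat as "down".
                var request = (HttpWebRequest)WebRequest.Create("http://example.com/");
                request.Method = "HEAD";
                request.Timeout = 5000;
                using (request.GetResponse())
                    up = true;
            }
            catch (WebException)
            {
                up = false;
            }

            // The primary store is the database; this XML file is the fallback
            // for when the RESTful service is down.
            new XElement("status",
                new XAttribute("site", "example.com"),
                new XAttribute("up", up),
                new XAttribute("checkedAtUtc", DateTime.UtcNow.ToString("o")))
                .Save("status.xml");
        }
    }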
The availability of the websites could be POSTed to a second web service which returns a JSON/XML result on the availability of said website(s). This pretty much means that any platform/language capable of making a web-service call can check the availability of the website(s).
Admittedly, this does give a single point of failure (the status web service), but inevitably you'll end up with that kind of thing anyway unless you want to start having fail-over web services, etc.
You could save it as XML, which is platform independent, and then share it by publishing it on a web server. It seems ironic to report website availability on another website, but just like websites, other types of servers/services can have downtime too.
You could create a web service; you will probably need to open fewer unusual firewall ports to connect to an HTTP server than to connect to a SQL Server database. You can also extend that service layer to add business rules more easily than at the database level.
I think a web service is the best option. Just expose a RESTful API that returns a simple JSON response with the server status. Fast and cheap on resources.
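A sketch of what that could look like as an ASP.NET Web API 2 controller (the in-memory repository is a stand-in for whatever storage the console app writes to; all names here are illustrative):

    using System;
    using System.Collections.Concurrent;
    using System.Web.Http;

    public class SiteStatus
    {
        public bool IsUp { get; set; }
        public DateTime CheckedAtUtc { get; set; }
    }

    // Hypothetical storage; in practice this would read the database record
    // (or the XML fallback) that the console app maintains.
    public static class StatusRepository
    {
        static readonly ConcurrentDictionary<string, SiteStatus> Store =
            new ConcurrentDictionary<string, SiteStatus>();

        public static SiteStatus GetLatest(string site)
        {
            SiteStatus status;
            return Store.TryGetValue(site, out status) ? status : null;
        }
    }

    public class StatusController : ApiController
    {
        // GET api/status?site=example.com -> JSON that any platform can parse.
        public IHttpActionResult Get(string site)
        {
            var status = StatusRepository.GetLatest(site);
            if (status == null) return NotFound();
            return Ok(new { site, up = status.IsUp, checkedAt = status.CheckedAtUtc });
        }
    }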
Don't re-invent the wheel. Sign up for Pingdom, Montastic, AlertBot, or one of the plethora of other pre-existing services that will do this for you.
But, if you really must, a database table would be fine.

Moving from single user desktop applications to multiuser development [closed]

I'm trying to bootstrap a micro-ISV on my nights and weekends. I have an application at a very early stage of development. It is written in C# and consists mainly of a collection of classes representing the problem domain. At this point there's no UI or data persistence. (I haven't even settled on the .NET platform; it's early enough that I could change to Java or native executables.)
My goal for this application is a hybrid single-user / occasionally-connected multiuser application. The single-user part will use an embedded database for local storage. This is a development model I'm familiar with.
The multiuser part is where I have no prior experience. I know each user will need two things:
IP based communication to a remote server on the public internet
User authentication and remote data storage
I have an idea of what services I want this server to provide (information lookup and user-to-user transactions), but beyond that I'm out of my element. The server will need to be hosted by a third party since I don't have the resources to run my own server. Keeping in mind that I will be the sole developer on this project for the foreseeable future:
Which technologies would be the simplest way to implement the two things mentioned above? Is direct access to the datastore/database reasonable, or is it better to isolate it? Should I implement a web service? If so, SOAP or REST?
What other things do I need to consider when moving to a multiuser application?
I know security is a greater concern in a multiuser application, especially when you're dealing with any kind of banking information (which I will be). Performance can be an issue when dealing with a remote connection and large numbers of users. Is there anything else I'm overlooking?
Regarding moving to a multiuser application: centralising your data is the first step, of course, and the simplest way to achieve that is often a cloud-based database, such as Amazon SimpleDB or MS Azure. You typically get an access key and a long 'secret' for authentication.
If your data isn't highly relational, you might want to consider Amazon SimpleDB. There are SDKs for most languages that let simple code store and retrieve data in your SimpleDB database using the key and secret, from anywhere in the world. You pay for the service based on your data storage and volume of traffic, so it has a very low barrier to entry, especially during development. It will also scale from a tiny home application up to something the size of amazon.com.
If you do choose to implement your own database server, you should remember two key things:
Ensure no session state exists; i.e., the client makes a call to your web service, some action occurs, and the server forgets about that client (apart from any changed data in the database, of course). Similarly, the client should not hold any data locally that could change as a result of another user's interaction. Cache locally only data you know won't change (or that you don't care about if it does).
For a web service, each call will typically be handled on its own thread, and so you need to ensure that access to the database from multiple threads is safe. If you use the standard .NET or Java ways of talking to a SQL database, this should be handled for you. However, if you implement your own data storage, it would be something you'd need to worry about.
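As a sketch of that second point: with ADO.NET, opening a fresh (pooled) connection per call and sharing no mutable state keeps concurrent requests safe (the table, column, and query here are placeholders):

    using System.Data.SqlClient;

    public class AccountData
    {
        readonly string _connectionString;

        public AccountData(string connectionString)
        {
            _connectionString = connectionString;
        }

        // Safe to call from many request threads at once: each call opens its
        // own connection (cheap, thanks to connection pooling) and keeps no
        // state between calls.
        public decimal GetBalance(int accountId)
        {
            using (var conn = new SqlConnection(_connectionString))
            using (var cmd = new SqlCommand(
                "SELECT Balance FROM Accounts WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", accountId);
                conn.Open();
                return (decimal)cmd.ExecuteScalar();
            }
        }
    }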
Regarding the question of REST/SOAP etc., a key consideration should be what kinds of platforms/devices you want to use to connect to the database server. For example if you were implementing your server in .NET you might consider WCF for implementing your web services. However that might introduce difficulties if you later want to use non-.NET clients. SOAP is a mature technology for web services, but quite onerous to implement, and libraries to wrap up the handling of SOAP calls may not necessarily be available for a given client platform. REST is simple to implement (trivially easy if you use ASP.NET MVC on your server), accessible by any client that can handle HTTP POST/GET without the need for libraries, and easy to test, so REST would be my technology of choice.
If you are sticking with .NET (my personal preference), I would expose the data access calls via WCF. WCF configuration is really flexible and pretty easy to pick up, and you'll want to hide your DB behind a service layer; a sketch follows below.
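As a minimal sketch of such a service layer (the contract, stub, and address are illustrative, not from any real project):

    using System;
    using System.ServiceModel;

    // Clients program against this contract; the database never leaves the server.
    [ServiceContract]
    public interface IAccountService
    {
        [OperationContract]
        decimal GetBalance(int accountId);
    }

    public class AccountService : IAccountService
    {
        public decimal GetBalance(int accountId)
        {
            // The ADO.NET code lives here, behind the service boundary.
            return 0m; // stub
        }
    }

    class Program
    {
        static void Main()
        {
            // Self-hosts the service over HTTP; in a real deployment the binding
            // and security settings would come from configuration.
            using (var host = new ServiceHost(typeof(AccountService),
                new Uri("http://localhost:8080/accounts")))
            {
                host.AddServiceEndpoint(typeof(IAccountService),
                    new BasicHttpBinding(), "");
                host.Open();
                Console.WriteLine("Service running. Press Enter to stop.");
                Console.ReadLine();
            }
        }
    }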
1. Direct access to the DB is the simplest option, and the worst. Just think about how you'd authenticate the DB access... I would just write a remotable API with serializable parameters and worry later about which transports to support (web services, IIOP, whatever); the communication details are all wrapped and hidden anyway.
2. None.

Distributed Monitoring Service using C# .NET 3.5 [closed]

Let's say, for example, that you have 5 different companies using the same (Windows-based) platform, and all of them wrote their own web services. What technique, using C# and .NET 3.5, would you recommend for monitoring all of their different web services?
My intention is to build an application that provides site administrators with visual feedback on service statuses and, of course, e-mail/SMS alerts as needed. Is there a best practice or methodology to follow, in your opinion?
In addition, are there any Windows-based tools available to do this that I'm unaware of? Preferably open source?
Edit: Think of the end result as an application that just shows a red or green light next to the services running across the different companies:
Company 1
> Web Service 1 - Green
> Web Service 2 - Green
Company 2
> Web Service 1 - Red
> Web Service 2 - Green
You should try PolyMon, a .NET-based, open-source monitoring tool on CodePlex:
http://polymon.codeplex.com/
At least in our case, it hit the sweet spot of functionality with a lean and easy setup.
You can choose from some out-of-the-box tasks like Ping or URL monitoring, but you can also easily implement your own more complex tasks. Works quite well for us.
The tool itself is not distributed, but you can easily set up two instances of the service (e.g. on servers in different locations) to monitor the same services, or use one instance to monitor the other.
We experienced just one issue that was very annoying and a little freaky: a server running both PolyMon and the SQL Server instance used by PolyMon repeatedly crashed on reboot (an endless reboot loop). It seems to be some kind of race condition. I therefore strongly recommend hosting the PolyMon service and the SQL Server service on different (virtual) machines, or setting the start-up type of the PolyMon service to "Manual" instead of "Automatic" and starting PolyMon manually after everything else has booted, to avoid this problem.
Big Brother System and Network Monitor will probably do most of what you want. It is extensible, and plug-ins can be written in any language. They have a free edition of their monitoring software:
http://www.bb4.org/home1.html
You should also use a local monitor on each server being monitored, because it is difficult to diagnose problems remotely. This blog has a good discussion of the problem and details of a local-monitor design pattern:
http://sleeksoft.co.uk/public/techblog/articles/20041218_1.html
The most commonly used open-source monitoring tool is Nagios. It has built-in support for many different services, and you can always write a script or app to test any service not already supported.
Nagios Home Page
Windows Monitoring Service
Instruction for setting up Windows service
If they write ASPX web services, all the monitoring best practices described for ASP.NET applications, such as the ASP.NET Health Monitoring Overview MSDN article and ASP.NET Health Monitoring, are applicable.
Also see the SO question Tools and methods for live-monitoring ASP.NET web applications?
Standard tools will let you ping the IP or probe the HTTP port; these are cheap and simple ways of verifying that a web service is minimally available. To validate that the services are also fully functional, you will need to do a bit more: your monitoring package will need valid credentials to log in to the various web services, not to mention specific proxies and business logic to execute against each one.
Have you looked at MOM (especially MOM management packs)?
There are tools like IPSentry with which you can make HTTP requests and check the returned data.
I know this is a pretty old thread, but I thought I would add Wolfpack to the list for anyone still looking for a "distributed system monitor"; Wolfpack was designed (by me) as exactly that. It can run multiple "agents" that collect data about the servers they monitor and report it to a central server instance.
It has a rich set of monitoring and storage plugins, and you can easily roll your own for custom checks (there are numerous supporting NuGet packages available; just search for Wolfpack on nuget.org). There is an active contrib project too, and it's really easy to install via chocolatey.org (cinst wolfpack)!
It is open source and completely FREE! I'm also rewriting major parts of it at the moment for the next v3.0 release, which will support SignalR alert notifications, PowerShell, and a full web API/interface (via ServiceStack).
