Safer way to connect to a remote database in applications - C#

Let's say I'm making a Xamarin.Android app (it could be any platform/framework). I have to use a remote database to store and retrieve information.
I've been making PHP-based APIs on my server that interact with the database. My application hits those API endpoints with data, and the APIs update the information in the database.
So I was wondering: if I make my application connect directly to the remote MySQL server, it would be a bit faster than hitting the APIs.
Which way is safer? I don't want anyone to be able to intercept the data or the DB connection details, for obvious reasons. What would be the safest way? Or is there another way to get this job done?

Forget about safety for now: a back-end layer (the API server in your case) makes your system more decoupled, and that alone justifies it. Suppose you suddenly decide to use another type of database, say PostgreSQL instead of MySQL. You will probably need a different connection-string format. Are you willing to push an update to every client Android app with a new connection string each time? That takes time, costs money, makes your application difficult to maintain, and breaks it on the users' end.
And yes, it is safer to use APIs (in your case). It is much easier to reverse-engineer a client application such as an Android app than to hack a well-secured, well-designed server. A client that knows too much makes your database vulnerable to multiple types of attacks; therefore, it is much safer to encapsulate your database behind an API.
However, using an API alone is not enough to make your system safe; you should also follow best practices. For example: use HTTPS, use authentication techniques (passwords, OAuth2), use authorization, etc.
Still, I am not saying it is impossible to use your database directly without an API; you can secure it with various techniques. It is possible, but not recommended at all.

Related

Does migrating data from SQL Server to SQLite affect the application code?

I have a project I developed in C# Windows Forms using SQL Server as the database, but I want to make the project standalone, without any database server. I found SQLite as an option to go for. Is it possible to migrate my SQL Server database to SQLite without affecting my code? And how do I go about it?
I used Entity Framework Code First to connect to the SQL Server database.
The answer is almost certainly going to be "yes." Depending on a few things, you might have to change very little (or no) code, or you might have to change a lot.
The first consideration is your SQL code. If you were careful to write ANSI-compliant SQL and you didn't use any of SQL Server's built-in views or T-SQL-specific functions, you may not have to rewrite much code at all. In reality, you probably will have to rewrite at least some. In particular, while SQL Server's engine is designed to handle many concurrent sessions and queries, SQLite is not: you will need to manage your program carefully to ensure no two threads attempt to access the SQLite database at once.
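One hedged way to honor that single-writer constraint is to funnel every SQLite call through one gate; the SqliteGate helper below is a sketch of the idea, not part of the original answer:

```csharp
using System;
using System.Threading;

// Minimal sketch: serialize all SQLite access through one semaphore so no
// two threads touch the database file at the same time.
public static class SqliteGate
{
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);

    public static T Run<T>(Func<T> dbWork)
    {
        Gate.Wait();
        try
        {
            return dbWork(); // e.g. open a connection, run a command, close it
        }
        finally
        {
            Gate.Release();
        }
    }
}
```

Every data-access call in the app would go through SqliteGate.Run, so contention turns into waiting instead of file-lock errors.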
The second consideration is how your application calls the database. Again, depending on your design, you may need to rewrite almost no code, or you may need to rewrite a lot. In my C# applications, I create an interface for database providers that defines common functionality (select, delete, insert, etc.). Then I create simple wrapper classes for the different RDBMSs that implement the interface. When I need to switch databases, I simply instantiate and use a different class. If you have your project set up like this, then you'd simply need to create a new class for SQLite that implements your database interface and instantiate that instead of your SQL Server class. If you wrote a lot of SQL Server-specific C# code into your business logic, you might have a lot of coding to do.
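As a rough sketch of that pattern (all names here are hypothetical, not the answerer's actual code), with an in-memory fake standing in for a real SQL Server or SQLite wrapper:

```csharp
using System.Collections.Generic;
using System.Linq;

// Common contract that each RDBMS wrapper class implements.
public interface IDatabaseProvider
{
    int Insert(string table, IDictionary<string, object> row);
    IEnumerable<IDictionary<string, object>> Select(string table);
}

// In-memory stand-in; a real SqlServerProvider or SqliteProvider would wrap
// SqlConnection/SQLiteConnection behind the same interface.
public sealed class InMemoryProvider : IDatabaseProvider
{
    private readonly Dictionary<string, List<IDictionary<string, object>>> _tables =
        new Dictionary<string, List<IDictionary<string, object>>>();

    public int Insert(string table, IDictionary<string, object> row)
    {
        if (!_tables.TryGetValue(table, out var rows))
            _tables[table] = rows = new List<IDictionary<string, object>>();
        rows.Add(row);
        return 1; // rows affected
    }

    public IEnumerable<IDictionary<string, object>> Select(string table) =>
        _tables.TryGetValue(table, out var rows)
            ? rows
            : Enumerable.Empty<IDictionary<string, object>>();
}
```

Switching databases then means instantiating a different class; nothing above the interface changes.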

How to create an application supporting multiple databases

I have a situation where I need to create an application that supports multiple databases. "Multiple databases" means the client can use any database: Oracle, SQL Server, MySQL, or PostgreSQL, for a start.
I was trying to use an ORM like NHibernate or MyBatis, but they have their limitations and need expertise to use.
So I decided to use the data providers provided by Microsoft, like ADO.NET, OLE DB, ODP.NET, etc.
Is there any way to keep my database logic the same for all of the databases? I have tried IDbConnection, IDbCommand, etc., but they have a problem in the case of Oracle (ref cursors).
Is there any way to achieve this? Some links or a guide would be appreciated.
Edit:
There is also a problem with the DB types, because they are enums defined differently by the different data providers.
Well, real-life applications are complicated like that. Before you know it, you want to replace the UI with an App, expose your logic as a WCF service, change the e-mail service with another service provider, test pieces of your code while mocking the DAL and change the database with another one.
The usual way to deal with this is to pass all calls through an interface that separates the implementation from the caller. After that, you can implement the different DALs.
Personally I usually go with this approach:
First create a single DLL that contains all interfaces. Basically the idea is to expose all calls that your UI, App or whatever needs through the interface. From now on, your UI doesn't talk to databases or e-mail providers anymore.
If you need to get access to the interface, you use a factory pattern. Never use 'new'; that will get you in trouble in the long run.
It's not trivial to create this, and needs proper crafting. Usually I begin with a bare minimum version, hack everything else in the UI as a first version, then move everything that touches a DB or a service into the right project while creating interfaces and finally re-engineer everything until I'm 100% satisfied.
Interfaces should be built to last. Sure, changes will happen over time, but you really want to minimize these. Think about what the future will hold, read up on what other people came up with and ensure your interfaces reflect that.
Basically you now have a working piece of software that works with a single database, mail provider, etc. So far so good.
Next, re-engineer the factory. Basically you want to use the configuration settings to pick the right provider (the right DLL that implements your interface) for your data. A simple switch can suffice in most cases.
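A minimal sketch of that factory step (all class names and config values here are made up for illustration):

```csharp
using System;

// Each provider DLL would implement this shared interface.
public interface IDatabaseProvider { string Engine { get; } }

public sealed class SqlServerProvider : IDatabaseProvider { public string Engine => "sqlserver"; }
public sealed class SqliteProvider : IDatabaseProvider { public string Engine => "sqlite"; }

public static class ProviderFactory
{
    // The engine name would normally come from configuration settings;
    // a simple switch is enough to pick the right implementation.
    public static IDatabaseProvider Create(string configuredEngine)
    {
        switch (configuredEngine)
        {
            case "sqlserver": return new SqlServerProvider();
            case "sqlite": return new SqliteProvider();
            default: throw new NotSupportedException("Unknown provider: " + configuredEngine);
        }
    }
}
```

In a real setup the factory would load the provider DLL via reflection instead of referencing the classes directly, so new providers can be dropped in without recompiling.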
At this point I usually make it a habit to make a ton of unit tests for the interfaces.
The last step is to create DLL's for the different database providers. One of these will be loaded at run-time in your application.
I prefer simple LINQ to SQL (I also use the library from LinqConnect) because it's pretty fast. I simply start by copy-pasting the other database provider, and then re-engineer it until it works. Personally, I don't believe in a magic 'support all SQL databases' solution anymore: in my experience, some databases will handle certain queries much, much faster than others, which means that you will probably end up with some custom code for each database anyway.
This is also the point where your unit tests are really going to pay off. Basically, you can just start with copy-paste and give it a test. If you're lucky, everything will run right away with decent performance... if not, you know where to start.
Build to last
Build things to last. Things will change:
Think about updates and test them. Prefer automatic tests.
You don't want to tinker with your Factory every day. Use Reflection, Expressions, Code generation or whatever your poison is to save yourself the trouble of changing code.
Spend time writing tests. Make sure you cover the bulk. I cannot stress this enough; under pressure people usually 'save' time by not writing tests. You'll notice that this time that you 'save' will double back on you as support when you've gone live. Every month.
What about Entity Framework
I've seen a lot of my customers get into trouble with performance because of this. In the many times that I've tested it, I had the same experience. I noticed customers hacking around EF for a lot of queries to get a bit of decent performance.
To be fair, I gave up a few years ago, and I know they have made considerable performance improvements. Still, I would test it (especially with complex queries) before considering it.
If I would use EF, I'd implement all EF stuff in a 'database common DLL', and then derive classes from that. As I said, not all databases are the same with queries - and you might want to implement some hacks that are necessary to get decent performance. Your tests will tell.
Bonuses
Programming through interfaces has a lot of other advantages in combination with proxies. To name a few, you can easily add log sinks, caching, statistics, WCF, etc. by simply implementing the same interface. And if you end up hating your current OR mapper some day, you can just throw it away without touching a single line of your app.
I believe Microsoft's Data Access Components would be suitable for you.
https://en.wikipedia.org/wiki/Microsoft_Data_Access_Components
How about writing microservices and connecting them using a REST API?
You (and maybe your team) could provide a core application which handles the logic and the UI. This is still based on your current technology. But instead of adding some kind of direct database connection, you could provide multiple microservices (based on ASP.NET or ASP.NET Core), each exposing a REST API. You get your data for each database from such a microservice. So you would develop one microservice for e.g. MySQL and another one for MS SQL Server, and when a new customer comes along with Oracle, you write a new small microservice which serves the same expected API.
More info (based on .NET Core) is here: https://docs.asp.net/en/latest/tutorials/first-web-api.html
I think which technology to use is a team discussion, but today I would recommend writing a microservice. It also makes attaching a new app, e.g. for a mobile device, much easier :)
Yes, it's possible.
Right now I am working with the same scenario, where all my logic-related data (you could call it metadata) resides in one DB and the data resides in another DB.
What you need to do: keep the connection parameters in two different files (call them prop files). Then have a concrete connection class that takes its parameters from a prop file. Wherever you need to create a connection, just supply the prop file and it will create the DB connection as desired.
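A sketch of what that might look like (the prop-file format and the ConnectionConfig helper are assumptions, not the poster's actual code): each prop file holds key=value pairs, and whichever file you pass in becomes the connection string.

```csharp
using System.Data.Common;
using System.IO;

// Hypothetical helper: turns a key=value "prop file" into a connection string.
public static class ConnectionConfig
{
    public static string BuildConnectionString(string propFilePath)
    {
        var builder = new DbConnectionStringBuilder();
        foreach (var line in File.ReadAllLines(propFilePath))
        {
            var parts = line.Split(new[] { '=' }, 2); // one key=value per line
            if (parts.Length == 2)
                builder[parts[0].Trim()] = parts[1].Trim();
        }
        return builder.ConnectionString;
    }
}
```

Creating the metadata connection versus the data connection is then just a matter of passing a different file path.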

Using a PHP script to make a MySQL connection for a desktop application

Ok here's the thing.
My webhost, which happens to be the cheapest and most reliable in my country, doesn't allow direct access to my MySQL database. I can only connect to it as "localhost" from the website itself.
This is a bit of a pickle, since I have to write a desktop application to interact with the database (I could use a web interface, but I'd like to avoid that if I can). I'll probably write it in C#.
So maybe I could make a simple PHP MySQL connection script and somehow query the database from C# through that connection.
Is this possible?
It's certainly possible. Direct database interface scripts are commonly used for that purpose, but take care to limit access to them (.htpasswd, Allow From, HTTPS).
You could use a simple JSON or POST interface that directly accepts SQL queries and simply returns the result as a JSON array:
<?php
// maybe even: $_POST = json_decode(file_get_contents("php://input"), true);
$db = new PDO(...);
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); // surface SQL errors
$stmt = $db->prepare($_POST["query"]);
$stmt->execute($_POST["params"]); // enumerated array of bound values
header("Content-Type: application/json");
print json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));
You might need to add some error checking and devise a signaling mechanism (typically it suffices to send a result array with magic values or identifiers to differentiate it from ordinary data lists).
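On the C# side, a small client could POST to that script. This is a sketch: the URL, the "query" field, and the "params[]" field name are assumptions that must match the PHP above.

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class DbBridge
{
    // Builds the form body the PHP script expects: one "query" field plus
    // repeated "params[]" fields (PHP collects those into an array).
    public static FormUrlEncodedContent BuildRequest(string sql, params string[] args)
    {
        var fields = new List<KeyValuePair<string, string>>
        {
            new KeyValuePair<string, string>("query", sql)
        };
        foreach (var a in args)
            fields.Add(new KeyValuePair<string, string>("params[]", a));
        return new FormUrlEncodedContent(fields);
    }

    // Posts the query and returns the raw JSON array the script prints.
    public static async Task<string> QueryAsync(HttpClient http, string url,
                                                string sql, params string[] args)
    {
        using (var response = await http.PostAsync(url, BuildRequest(sql, args)))
        {
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```

Usage would be something like `await DbBridge.QueryAsync(http, "https://example.com/db.php", "SELECT * FROM t WHERE id = ?", "3")`, with the URL being whatever you host the script at.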
I can tell you that you can use SQLyog's PHP-MySQL tunnel, but even though it is a fast tunnel script, you have to account for some delay: when you get big or huge result sets, they have to be processed by the script before it sends the answer to your application. If your software is not time-critical, it will work fine with this; otherwise, I encourage you to get a better hosting solution, even outside your country.
I would write a web service to gather the data and return it in a more usable format instead of allowing the script to execute raw queries. That gives you better security (by not allowing arbitrary queries) and easier maintenance (by being able to change the underlying infrastructure without messing with both the server script and the desktop client) at once.
Yes, it is possible. You can use PHP to connect and run the queries there. Please don't forget to add some security to the PHP code, because exposing it over plain HTTP would be dangerous. You can also encrypt the PHP output.
If your requirement from a client is direct access, I would change hosts and use a third-party MySQL client rather than reinvent the wheel.

C# Sync two identical DataSets over a Web Service

What is the most effective way to sync two identically structured DataSets using a Web Service?
The design I use at the moment is simple, without a Web Service. All data is cached on the client, and it only updates data from the MySQL database if an update has been made, as determined by a timestamp.
If possible I want to keep the same simple design, but add a Web Service in the middle, to allow easier access to the database over our limited VPN.
Best Regards, John
That's one heck of a question, but something I'm doing myself too. Easiest way I guess would be to add a "saved version" property. If it really is a simple design then you could just re-write only the DAL code to get things working for a web service. In fact, if the WSDL is done right, you may only need to make very minor adjustments (especially if the DB was previously designed using EF).
You say you want to sync two datasets simultaneously, but the question I guess is why? And which two? Do you have two web services, or are you wanting to sync data to both the local cache and the online web service (MSSQL db?) simultaneously?

Ways and techniques to defend against SQL injection

I have a WinForms app (a framework for developing simple apps), written in C#. My framework will later be used to develop WinForms applications. The other developers are often beginners, and sometimes they do not use parameters; they write SQL directly in code. So first I need to add some protection to my framework's C# base classes.
To solve this, one developer suggested using an ORM such as NHibernate, which takes care of this issue for you (and you don't have to write SQL statements yourself most of the time).
So I want to ask: are there general alternatives (other ways and techniques) for defending against SQL injection? Some links or examples would be very nice.
I don't see how there is any means to protect any SQL-based library from developer misuse without crippling its functionality (i.e. never giving direct access to the database).
Even with NHibernate or Linq to SQL it's possible to bypass the mapping layers and directly write a SQL statement.
Personally I think your best option would be to write in BIG BOLD TEXT that people who use your library need to PARAMETERIZE THEIR QUERIES. Failing that, you could try to do some kind of clumsy input sanitization, but that is honestly a flimsy second-rate hack.
Parameterized queries have been around for so long now, there's no excuse for anyone writing code that touches any database to not be aware of it or understand how to use it. The only cure for ignorance is education.
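For reference, parameterization is a one-line habit; this sketch assumes classic ADO.NET (System.Data.SqlClient, a NuGet package on .NET Core), and the table and column names are made up:

```csharp
using System.Data.SqlClient; // classic ADO.NET SQL Server provider

public static class StudentQueries
{
    // The user's input travels as a parameter *value*, never as SQL text,
    // so a malicious "name" cannot change the shape of the statement.
    public static SqlCommand BuildLookup(string name)
    {
        var cmd = new SqlCommand("SELECT Id, Name FROM Students WHERE Name = @name");
        cmd.Parameters.AddWithValue("@name", name);
        return cmd;
    }
}
```

The command text never contains the input, so `Robert'); DROP TABLE Students;--` arrives at the server as nothing more than an odd-looking name.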
Maybe if we knew more about what this library is supposed to do with respect to data access, we could offer more targeted suggestions...
I agree with Aaronaught: a framework will not completely prevent the possibility. I would never skip stringent validation at the data layer, either. Also, provide an abstraction layer around your data access that you expose as the API, rather than allowing developers to connect directly to the database.
It sounds like you need to train your developers to use parameter binding instead of looking for a technical solution.
One other alternative would be to keep the database layer in a different project and only allow your SQL-savvy developers to code in it. The GUI can be in a different project; that way, the GUI programmers won't mess up your DB.
Security is usually a process, not a product or an API.
It is also an evolving process, we have to adapt or get hacked.
A heavy-handed approach: you can force everyone to write stored procedures and disallow direct table access from the accounts that are allowed to talk to the database (GRANT EXECUTE ON, etc.). Then you would need to ensure that nobody writes any fancy stored procedures that take a SQL query as a parameter and evaluate it dynamically. This tends to slow down development, and I personally would not use it, but I have consulted at several shops that did.
