We would like to give users of our system the ability to pull some of the data from a database into Excel. (Only reading data, with no chance of writing anything back to the DB.) The users do not have direct access to the database, so we would have some authentication in place for them: firstly to connect to the database, and secondly to apply the security settings of our system, so that a given user is only allowed to see certain tables.
I was instructed to begin writing a C# addin for this, but a colleague was instructed to write a VBA macro.
I was thinking of using the Entity Framework to access the data, but I haven't worked with it before. I don't know what they would be using from within the macro, but the macro-manager thinks that I will be killing the network with heavy data transfer. He also doesn't like the idea that the users have to install the add-in on their computers. However, I have a vague uneasiness regarding macros and the notion that they're not very secure. How safe are macros? Are they tamper-proof, or could a knowledgeable user change the code?
I would like to know: what are the pros and cons of each approach, and what is the general feeling of people with more experience and knowledge than myself?
With particular regard to matters such as:
Information Security (Some tables should not be accessed.)
Network traffic
Ease of maintenance and future modifications
Any other relevant concern that I've missed
Kind regards,
What I would do in a situation like this is:
-Create views on the database and assign them to a schema. Don't restrict the rows, just specify the columns you want the users to see, and let them filter the data in Excel (assuming it's not a massive amount of data returned).
-Create an Active Directory group and give its members read access to that schema.
-Use Excel -> Data -> Connections (it's there in Excel 2010; I'm not sure about earlier versions) to connect worksheets to the views.
They can mess away with the data in Excel, but it can't be written back to the database. And you can restrict which tables / columns they see, and do the joins for lookup tables in the view so they don't see the database IDs.
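If it helps, here is a minimal sketch of that setup as T-SQL run from C#, purely for illustration; the connection string, schema, view, table, and AD group names are all made up.

using System;
using System.Data.SqlClient;

class ReportingSchemaSetup
{
    static void Main()
    {
        // Hypothetical connection string; adjust server/database names as needed.
        const string connectionString = "Server=.;Database=AppDb;Integrated Security=true";

        // One batch per statement, because CREATE SCHEMA / CREATE VIEW must each
        // be the only statement in their batch.
        string[] batches =
        {
            "CREATE SCHEMA reporting",
            // Expose only the columns users should see; join lookup tables so no raw IDs leak out.
            @"CREATE VIEW reporting.CustomerOrders AS
              SELECT c.CustomerName, o.OrderDate, o.TotalAmount
              FROM dbo.Customers c
              JOIN dbo.Orders o ON o.CustomerId = c.CustomerId",
            // The AD group gets read-only access to the whole reporting schema.
            "CREATE LOGIN [DOMAIN\\ExcelReportUsers] FROM WINDOWS",
            "CREATE USER [DOMAIN\\ExcelReportUsers] FOR LOGIN [DOMAIN\\ExcelReportUsers]",
            "GRANT SELECT ON SCHEMA::reporting TO [DOMAIN\\ExcelReportUsers]"
        };

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            foreach (string sql in batches)
            {
                using (var command = new SqlCommand(sql, connection))
                {
                    command.ExecuteNonQuery();
                }
            }
        }
        Console.WriteLine("Reporting schema, view and grant created.");
    }
}

Excel's Data -> Connections dialog can then point straight at reporting.CustomerOrders, and anyone outside the AD group simply gets nothing.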
I realize this is not a very specific question and is open to differing opinions, but answering it requires technical expertise that I do not currently have.
My team of 10 people continually maintains and updates a SQL database of approximately 1 million records and 100 fields. We are a market research outfit - most of the data points are modeled assumptions that we need to update frequently as new information becomes available.
We use a .NET web interface to update/insert/delete. However, this interface is clunky and slow. For example:
We cannot copy/paste in a row of numbers; e.g., budgeted amounts across several time periods;
We cannot copy/paste similar dimension values across multiple records;
We cannot create new records and fill in their contents quickly;
We cannot assess the effect of changes on the dataset, as we could with Excel.
Most of us are more inclined to navigate data, change values and test our assumptions in Excel, and then go back and make changes in the .NET interface.
My question is:
is it possible for multiple users to simultaneously use Excel as a content management system for a custom SQL database? I would design a workbook with tabs that would be specifically designed to upload to the database, and on other tabs analysts could quickly and easily perform their calculations, copy/paste, etc.
If Excel is not ideal, are there other CMS solutions out there that I could adapt to my database? I was looking at something like this, but I have no idea if it is ideal: http://sqlspreads.com/
If the above is not realistic, are there ways that a .NET CMS interface can be optimized to 1) run queries faster 2) allow copy/paste, or 3) other optimization?
Having multiple people working on one Excel sheet won't work. What you want to do is create an Excel template that is the same for everyone. Then you have everyone enter data on their own copy of the template. Write a script that takes this template and uploads it to the database table. You can have a template for each table/view, and then join tables or views to get a bigger picture of all the data.
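As a rough sketch of the "script that takes this template and uploads it", here is one way it could look in C#, reading the worksheet through the ACE OLE DB provider and pushing rows up with SqlBulkCopy; the file path, sheet name, and table names are placeholders.

using System.Data;
using System.Data.OleDb;
using System.Data.SqlClient;

class TemplateUploader
{
    static void Main()
    {
        // Placeholder names; requires the Access Database Engine (ACE) provider to be installed.
        const string excelFile = @"C:\templates\BudgetTemplate.xlsx";
        const string excelConnection =
            "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + excelFile +
            ";Extended Properties=\"Excel 12.0 Xml;HDR=YES\"";
        const string sqlConnection = "Server=.;Database=Research;Integrated Security=true";

        // Read the template's data-entry sheet into a DataTable.
        var data = new DataTable();
        using (var connection = new OleDbConnection(excelConnection))
        using (var adapter = new OleDbDataAdapter("SELECT * FROM [DataEntry$]", connection))
        {
            adapter.Fill(data);
        }

        // Bulk insert into a staging table; validating and merging into the real
        // tables can then happen in a stored procedure on the server.
        using (var bulkCopy = new SqlBulkCopy(sqlConnection))
        {
            bulkCopy.DestinationTableName = "dbo.Staging_BudgetEntries";
            bulkCopy.WriteToServer(data);
        }
    }
}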
It's possible to do something like that in Excel, but it's not that easy. I created such a solution for one of my customers. 400 to 500 users are downloading data from a MS-SQL server into Excel. The data can be changed there and then uploaded back to the server. This works for pure line-by-line editing as well as for more complex reporting decks. But as I said: building such a solution isn't quick.
Personally, I would try to improve the .NET frontend, because if it is that slow I would guess you are doing something wrong there. At the end of the day it doesn't make a great difference what kind of frontend you use; you will always face similar problems.
I'm having an issue regarding the security of an application that I'm building... It's a wrapper for a crystal report viewer that provides users with some additional functionality.
There are many internal users with the ability to create/modify Crystal Reports. I've done some tests, and for an application that deals so intimately with connecting to various data sources, it doesn't seem to care the least bit about doing so safely. There is nothing stopping me from modifying an existing crystal report that everyone trusts to make it into something malicious and harmful. All it takes is an added command table with the following sql:
DELETE FROM tbl_Employees; SELECT * FROM tbl_Employees;
In fact, you can do anything in a crystal report that the user has permissions to do... so long as it ends with a select. Which leads me to my question: Is there any way for me to ensure that my application limits any connections to our sql server to just selects? I can't temporarily modify user credentials, and I can't use a single read only account because I still need to limit the user to their normal permissions (i.e. which databases they can query).
I'm not very hopeful, as nothing that I've read has led me to believe that I can restrict connections in such a manner.
On the other hand, most of the people making the reports could take a much more direct approach to destroying our data, if they were so inclined... but I hardly think that that is a good excuse not to do my best to ensure that my application is as safe as I can make it. I just can't seem to find any viable answers.
You should use a read-only account for reporting purposes--no exceptions! Give the account SELECT rights on tables and views and EXEC rights on functions (exposed via synonyms). Avoid procedures, if possible--they are usually unnecessary and you may inadvertently give users access to procedures that modify the database (an experience a client of mine encountered).
** edit **
I guess it depends also on how the sensitive data is represented.
You would add a row-level filter to the record-selection formula when the report is run.
If the sensitive data is contained in a small number of tables, you could use role-based security (user added to group; roles assigned to group).
If you are using BusinessObjects Enterprise, you could use a Universe to control data security. BusinessViews are also an option; they are the original (before BusinessObjects and SAP) semantic layer that supports dynamic/cascading parameters, but they have been slated for obsolescence.
Ok, I'm putting this as an answer because I still can't post global comments, but this is an idea that can help you in some way (I hope).
I don't have as much experience in SQL Server as I do in Oracle DBs. In Oracle DBs, each user has their own space. Each user can only read and modify DB objects (tables and other things) that are within their own space (schema), no exceptions. You can then grant each user access to a specific object outside their own schema, and this is a qualified access (like "read-only", "just update", etc). To make it easier to maintain those grants, you can create roles, which are basically a named group of grants that can be assigned to a specific user.
Ok, I'm pretty sure that none of this is new to you, but my point is: to accomplish what you're looking for in an Oracle environment, you could use a DB user for each user and control access to information (tables) through those (very secure) controls.
I'm pretty sure that in SQL Server, aside from minor differences, you could do the same kind of strategy. Hope this leads you to some ideas on solving that...
I've made a program to manage a movie collection and it stores the data in an access database. I realise it can be done manually, but I'd like it to be possible to export and import the databases from within the program, so that users don't have to start their database from scratch every time I bring out a new version.
How do I go about doing that?
I'm still pretty new to programming so if I've forgotten to mention anything, please do ask!
You need to export each table's records to a file in a format of your choosing (e.g., CSV, XML, your own format, etc.) with an export version number (so later versions of your program know what format they will be reading in). This is serializing your data, and you can find lots of information on how to save out data.
To import, you will need to read in each exported file and insert its contents into a new database. This is just the other face of serializing your data, so again, there are lots of information sources on how to do this.
If you are going to allow users to re-import data into an existing database, you will need to decide how you handle duplicate entries, and whether there is a batch process users can use so they only have to pick how to handle duplicates once (e.g., have the user choose once to overwrite all existing records, or choose to skip all existing records).
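A minimal sketch of the export side (the import is the mirror image): reading an Access table over OLE DB and writing it to a CSV file that starts with an export-version line. The file paths and table name are made up, and the CSV quoting is deliberately simplistic.

using System.Data.OleDb;
using System.IO;
using System.Linq;

class MovieExporter
{
    const int ExportVersion = 1; // bump this whenever the export format changes

    static void Main()
    {
        const string connectionString =
            @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\Movies.accdb";

        using (var connection = new OleDbConnection(connectionString))
        using (var command = new OleDbCommand("SELECT * FROM Movies", connection))
        using (var writer = new StreamWriter(@"C:\export\Movies.csv"))
        {
            connection.Open();

            // First line records the export format version so future program
            // versions know how to read the file back in.
            writer.WriteLine("ExportVersion," + ExportVersion);

            using (var reader = command.ExecuteReader())
            {
                // Header row with column names.
                var columns = Enumerable.Range(0, reader.FieldCount)
                                        .Select(reader.GetName);
                writer.WriteLine(string.Join(",", columns));

                // Data rows (naive quoting: real code should escape commas and quotes).
                while (reader.Read())
                {
                    var values = Enumerable.Range(0, reader.FieldCount)
                                           .Select(i => reader.IsDBNull(i) ? "" : reader.GetValue(i).ToString());
                    writer.WriteLine(string.Join(",", values));
                }
            }
        }
    }
}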
This is a pretty broad question, so I'll answer it broadly. You can create the database through code, which I'll let you research how to do. Should be plenty of articles on how to do this.
You can also include the database as part of your deployment through whatever deployment means you have. You'd want to get their database, load up the results in code and fill in the deployed database and then remove their original.
You could also just change the existing database on their machine to match your new changes. If it's something like additional columns or another table, that would be pretty easy.
The choices are numerous and you just need to pick one. Hope these ideas help.
We have a cloud based SaaS application and many of our customers (school systems) require that a backup of their data be stored on-site for them.
All of our application data is stored in a single MS SQL database. At the very top of the "hierarchy" we have an "Organization". This organization represents a single customer in our system. Each organization has many child tables/objects/data. Each having FK relationships that ultimately end at "Organization".
We need a way to extract a SINGLE customer's data from the database and bundle it in some way so that it can be downloaded to the customers site. Preferably in a SQL Express, SQLite or an access database.
For example: Organization -> Skill Area -> Program -> Target -> Target Data are all tables in the system. Each one linking back to the parent by a FK. I need to get all the target data, targets, programs and skill areas per organization and export that data.
Does anyone have any suggestions about how to do this within SQL Server, a C# service, or a 3rd-party tool?
I need this solution to be easy to replicate for each customer who wants this feature "turned on"
Ideas?
I'm a big fan of using messaging to propagate data at the moment, so here's a message based solution that will allow external customers to keep a local, in sync copy of the data which you provide on the web.
The basic architecture would be an online, password secured and user specific list of changes which have occurred in the system.
At the server side this list would be appended to any time there was a change to an entity which is relevant to the specific customer.
At the client side an application would run which checks the list of changes for any it hasn't yet received, and then applies them to its local database (in the order they occurred).
There are a bunch of different ways of doing the list-based component of the system, but my gut feeling is that you would be best to use something like RSS to do this.
Below is a practical scenario of how this could work:
A new skill area is created for organisation "my org"
The skill is added to the central database and associated with the "my org" record
A SkillAreaExists event is also added at the same time to the "my org" RSS with JSON or XML data specifying the properties of the new skill area
A new program is added to the skill area that was just created
The program is added to the central database and associated with the skill area
A ProgramExists event is also added at the same time to the "my org" RSS with JSON or XML data specifying the properties of the new program
A SkillAreaHasProgram event is also added at the same time to the "my org" RSS with JSON or XML data specifying an identifier for the skill area and program
The client agent checks the RSS feed and sees the new messages and processes them in order
When the SkillAreaExists event is processed a new Skill area is added to the local DB
When the ProgramExists event is processed a new Program is added to the local DB
When the SkillAreaHasProgram event is processed the program is linked to the skill area
This approach has a whole bunch of benefits over traditional point in time replication.
It's online; a consumer of this can get realtime updates if required.
Consistency is maintained by order; at any point in the event stream, if you stop receiving events you have a local DB which accurately reflects the central DB as at some point in time.
It's diff-based; you only need to receive changes.
It's auditable; you can see what's actually happened, not just the current state.
It's easily recoverable; if there's a data consistency issue you can rebuild the entire DB by replaying the event stream.
It allows for multiple consumers; lots of individual copies of the client's info can exist and function autonomously.
We have had a great deal of success with these techniques for replicating data between sites especially when they are only sometimes online.
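To make the client-side part concrete, here is a rough C# sketch of an agent that takes unprocessed events (however you fetch them from the feed) and applies them to the local database strictly in order. The event names match the scenario above; everything else (types, connection handling) is hypothetical.

using System.Collections.Generic;
using System.Data.SqlClient;

// One entry from the per-organisation change feed.
class ChangeEvent
{
    public long Sequence;      // ordering within the feed
    public string EventType;   // e.g. "SkillAreaExists", "ProgramExists", "SkillAreaHasProgram"
    public string PayloadJson; // properties of the entity, as carried in the feed item
}

class SyncAgent
{
    readonly string localConnectionString;

    public SyncAgent(string localConnectionString)
    {
        this.localConnectionString = localConnectionString;
    }

    // Apply events in order; lastProcessedSequence is persisted between runs so
    // the agent only ever asks the feed for "everything after X".
    public long Apply(IEnumerable<ChangeEvent> newEvents, long lastProcessedSequence)
    {
        using (var connection = new SqlConnection(localConnectionString))
        {
            connection.Open();
            foreach (var evt in newEvents)
            {
                if (evt.Sequence <= lastProcessedSequence)
                    continue; // already seen

                switch (evt.EventType)
                {
                    case "SkillAreaExists":
                        // parse PayloadJson and INSERT a row into the local SkillArea table
                        break;
                    case "ProgramExists":
                        // parse PayloadJson and INSERT a row into the local Program table
                        break;
                    case "SkillAreaHasProgram":
                        // parse PayloadJson and set the FK linking the Program to its SkillArea
                        break;
                }
                lastProcessedSequence = evt.Sequence;
            }
        }
        return lastProcessedSequence;
    }
}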
While there are some very interesting enterprise solutions that have been suggested, I think my approach would be to develop a plain old scheduled backup solution that simply exports the data for each organisation with a stored procedure or just a number of select statements.
Admittedly you'll have to keep this up to date as your database schema changes, but if this is a production application I can't imagine the schema changes very drastically.
There are any number of technologies available to do this, be it SSIS, a custom windows service, or even something as rudimentary as a scheduled task that kicks off a stored procedure from the command line.
The format you choose to export to is entirely up to you and should probably be driven by how the backup is intended to be used. I might consider writing data to a number of CSV files and zipping the result such that it could be imported into other platforms should the need arise.
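A sketch of that CSV-and-zip idea in C#: each query is assumed to already be filtered down to one organisation (for example via joins back to the Organization FK); the connection string, table names, and queries are placeholders.

using System.Collections.Generic;
using System.Data.SqlClient;
using System.IO;
using System.IO.Compression; // ZipFile lives in the System.IO.Compression.FileSystem assembly

class OrganizationBackup
{
    static void Main()
    {
        const string connectionString = "Server=.;Database=SaasDb;Integrated Security=true";
        int organizationId = 42; // the customer being exported
        string workFolder = Path.Combine(Path.GetTempPath(), "org_" + organizationId);
        Directory.CreateDirectory(workFolder);

        // One query per exported table, each filtered to the organisation.
        var exports = new Dictionary<string, string>
        {
            { "SkillAreas", "SELECT * FROM SkillArea WHERE OrganizationId = @orgId" },
            { "Programs",   "SELECT p.* FROM Program p JOIN SkillArea s ON p.SkillAreaId = s.Id WHERE s.OrganizationId = @orgId" }
        };

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            foreach (var export in exports)
            {
                using (var command = new SqlCommand(export.Value, connection))
                using (var writer = new StreamWriter(Path.Combine(workFolder, export.Key + ".csv")))
                {
                    command.Parameters.AddWithValue("@orgId", organizationId);
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            var values = new string[reader.FieldCount];
                            for (int i = 0; i < reader.FieldCount; i++)
                                values[i] = reader.IsDBNull(i) ? "" : reader.GetValue(i).ToString();
                            writer.WriteLine(string.Join(",", values)); // naive quoting
                        }
                    }
                }
            }
        }

        // Bundle everything into a single downloadable file.
        ZipFile.CreateFromDirectory(workFolder, workFolder + ".zip");
    }
}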
Other options might be to copy data across to a scratch database and then simply create a SQL backup of that database.
However you choose to go about it, I would encourage you to ensure that the process is well documented and has as much automated installation and setup as possible. Systems with loosely coupled dependencies such as common file locations or scheduled tasks are prone to getting tweaked and changed over time. Without those tweaks and changes being recorded, you can create a system that works but can't be replicated. Soon no one wants to touch it and no one remembers exactly how it works. When it eventually needs changing, or worse, breaks, you have to start reverse engineering before you can fix it.
In a cloud based environment this is especially important because you want to be able to deploy as quickly as possible. If there is a lot of configuration that needs to be done you're likely to make mistakes or just be inconsistent. By creating a nuke-and-repave deployment you have a single point that you can change installation and configuration, safe in the knowledge that the change will be consistent across any deployment.
From what I understand, you have one large database for all the clients, you use FK relations leading back to the Organization table to know which data belongs to which client, and you want to back up the data per client/organization.
To backup the data you can use one of the following methods:
As per the comments from @Phil and @Kris, you can use SSIS for an automated backup; check this link for structure backup, and check this link for how to export a query result to a file using SSIS, and instead of a file, target an Access or SQL Server database.
Build an application/service using C# to select the data and export it manually; it takes more time, but customization has no limits.
Have you looked at StreamInsight?
http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/complex-event-processing.aspx
The way I've dealt with backups of relational data in the past (in MySQL, which isn't super different in terms of capability from the MSSQL you're running) is to create a backup "package" file, which is essentially a zip file with a different file extension so that Windows won't let users open it.
If you really want to get fancy, encrypt the file after zipping it and change the extension. I presume you're using ASP for your SaaS, and since I'm a PHP geek I can't help too much with the code side of things, but the way I've handled this before was with a script that would package an entire Joomla site and database for migration to a new server.
// open the MySQL connection ($cfg holds the connection settings;
// output() is a simple logging helper defined elsewhere in the original script)
$dbc = mysql_connect($cfg->host, $cfg->user, $cfg->password);

// select the database
mysql_select_db($cfg->db, $dbc);

output("Getting database tables\n");

// get all the tables in the database
$tables = array();
$result = mysql_query('SHOW TABLES', $dbc);
while ($row = mysql_fetch_row($result)) {
    $tables[] = $row[0];
}

output('Found ' . count($tables) . " tables to be migrated.\nExporting tables:\n");

$return = "";

// cycle through the tables and get their create statements and data
foreach ($tables as $table) {
    $result = mysql_query('SELECT * FROM `' . $table . '`');
    $num_fields = mysql_num_fields($result);

    // drop-and-recreate statement for the table itself
    $return .= 'DROP TABLE IF EXISTS `' . $table . "`;\n";
    $row2 = mysql_fetch_row(mysql_query('SHOW CREATE TABLE `' . $table . '`'));
    $return .= $row2[1] . ";\n";

    // one INSERT statement per row of data
    while ($row = mysql_fetch_row($result)) {
        $return .= 'INSERT INTO `' . $table . '` VALUES(';
        for ($j = 0; $j < $num_fields; $j++) {
            if (is_null($row[$j])) {
                // preserve real NULLs (the original used empty(), which also turned 0 and '' into NULL)
                $return .= "NULL";
            } else {
                // mysql_real_escape_string replaces the deprecated mysql_escape_string;
                // str_replace replaces the removed ereg_replace
                $value = mysql_real_escape_string($row[$j], $dbc);
                $value = str_replace("\n", "\\n", $value);
                $return .= "'" . $value . "'";
            }
            if ($j < ($num_fields - 1)) {
                $return .= ',';
            }
        }
        $return .= ");\n";
    }
}
That's the relevant portion of the PHP code that loops over the database structure and stores the recreation script in $return, which can then be output to a file.
In your case, you don't want to recreate the databases, but rather the data itself. You've compounded the issue slightly since you have a SaaS that is prone to possible data structure changes which you'll need to be able to account for. My suggestion would be this then:
Use a similar system to the above to dump the relevant data from the individual tables. I'm simply pulling all the data, but you could pull only the parts that pertain to the individual user by using JOIN statements and whatnot. Dump the contents of each table's insert/replace statements into a file named after the table. Create a file called manifest.xml or something of that sort and populate it with the current version of your SaaS application, name/information, unique ID, etc of the client exporting the data.
Package all those files into a ZIP file, change the extension to whatever you want, encrypt it if you desire, etc. Let them download that backup file and you're set.
In your import script, you will need to read the version number of the exported data and compare it to some algorithm that can handle remapping the data based on revisions you make later on. This way if you need to re-import one of their backups later, you can correctly handle transitioning the data from when they pulled the backup to the current structure of the data in that table now.
Hopefully that helps ;)
Because you keep all the data in just one database, it will always be difficult to export/backup data on customer basis.
Even if you implement such scenario now, you will end up with two different places you need to maintain/change/test every time you change the database schema (fixing bugs, adding new features, optimization, etc).
I would recommend you to partition the data, say, by using a database per organization. Then you change your application just once (mainly around building a connection string for the specified organization), and then you can safely export/backup each database separately in a way you want it.
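The application-side change is then mostly just picking the right connection string per organisation, along these lines (the server name and the database naming convention are assumptions):

using System.Data.SqlClient;

static class OrganizationDatabases
{
    // One database per organisation, e.g. "AppDb_Org42", on a shared or dedicated server.
    public static string GetConnectionString(int organizationId, string server = "db-server-1")
    {
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = server,
            InitialCatalog = "AppDb_Org" + organizationId,
            IntegratedSecurity = true
        };
        return builder.ConnectionString;
    }
}

Backing up or exporting a single customer then becomes an ordinary BACKUP DATABASE (or detach/copy) of that one catalog.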
It also gives you a lot of extra benefits "for free", such as scalability and the ability to dedicate resources on a per-organization basis (if that is ever needed in the future).
Say, you have a set of small and low priority (from a business point of view) organizations, and a big and high priority one. So you will be able to keep a set of small low priority databases on one server, but dedicate another one for that specific important big one.
Or if your current DB server is overloaded (perhaps you have A LOT of data and A LOT of requests to the database), you can simply get another cheap server and move half of the load without any changes in your system...
You still need to write something in order to split the existing big database into several small ones, but you do it just once, and after it is done this "migration tool" can be thrown away so you don't need to support it anymore.
Have you tried SyncFramework?
Have a look at this article!
It explains how to sync filtered data between databases using Sync Framework.
You can sync to the customer's database or sync to your own empty db and then export it as a file.
Did you think about using an ORM (Object Relational Mapper)?
I know, and use, LLBLGen Pro (so I can only talk about the features of this specific ORM).
Anyway, with LLBLGen you can reverse-engineer the DB and create a hierarchy of classes that map the tables and relations of your DB.
Now, if all the data of a customer is reachable via relations, I can tell my ORM framework to load a single customer (1 row of a specific table) and then load all the related data in the related tables.
If the data is not too complex, it should be possible.
If you have hundreds of self-referencing tables or strange relations, it may be undoable; it depends on your data.
If all the data of a single customer is, say, 10'000 rows in 100 tables, it will probably work.
If all the data is 100'000 rows in 1'000 tables, it "may" work if you have some time and a lot of memory.
If all the data is 10'000'000 rows, you probably can't load it all at once, and you'll need a more efficient way.
Anyway, if you can load all the data at once, then you'll have a nice "in memory" graph with all the data of a single customer, and then you can serialize this data, or project it on a dataset (obtaining a set of datatable/relations) and then serialize the dataset.
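I can't show LLBLGen specifics here, but as a rough illustration of the same idea using Entity Framework instead (a different ORM, swapped in only for the sketch), loading one customer's graph and serializing it might look like this. The context, entity classes (named after the question's example hierarchy), and the use of Newtonsoft.Json are all assumptions.

using System.Collections.Generic;
using System.Data.Entity;          // Entity Framework 6
using System.Linq;
using Newtonsoft.Json;             // any serializer would do here

// Hypothetical entities/context matching the question's example hierarchy.
public class Organization { public int Id { get; set; } public ICollection<SkillArea> SkillAreas { get; set; } }
public class SkillArea    { public int Id { get; set; } public ICollection<Program> Programs { get; set; } }
public class Program      { public int Id { get; set; } public ICollection<Target> Targets { get; set; } }
public class Target       { public int Id { get; set; } }

public class AppContext : DbContext
{
    public DbSet<Organization> Organizations { get; set; }
}

class OrganizationExporter
{
    public string ExportAsJson(int organizationId)
    {
        using (var context = new AppContext())
        {
            // Pull the whole per-customer graph in one go via eager loading.
            var organization = context.Organizations
                .Include(o => o.SkillAreas.Select(s => s.Programs.Select(p => p.Targets)))
                .Single(o => o.Id == organizationId);

            // With the graph in memory it can be serialized, written to a file,
            // or projected onto a DataSet as described above.
            return JsonConvert.SerializeObject(
                organization,
                new JsonSerializerSettings { ReferenceLoopHandling = ReferenceLoopHandling.Ignore });
        }
    }
}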
Using an ORM to load and export all the data of a single customer as explained, probably, is not the most efficient way of doing things, but when doable it's a simple and cheap way.
Naturally, with or without ORM, you can find hundreds of different way to export this data :-)
For your design, you should have sharded your database per customer.
However, as you have already developed the database design, I suggest you create a temp database and create the new tables in this temp database following the FK relations.
For this, you need to sort the tables based on the FK relationship and create them in the temp database.
Then, select the table data from the source database and insert them in the temp database.
You can also use this technique to shard your database and revamp your database design.
Aravind
I have a C# application that allows one user to enter information about customers and job sites. The information is very basic.
Customer: Name, number, address, email, associated job site.
Job Site: Name, location.
Here are my specs I need for this program.
No limit on amount of data entered.
Single user per application. No concurrent activity or multiple users.
Allow user entries/data to be exported to an external file that can be easily shared between applications/users.
Allows for user queries to display customers based on different combinations of customer information/job site information.
The data will never be viewed or manipulated outside of the application.
The program will be running almost always, minimized to the task bar.
Startup time is not very important, however I would like the queries to be considerably fast.
This all seems to point me towards a database, but a very lightweight one. However I also need it to have no limitations as far as data storage. If you agree I should use a database, please let me know what would be best suited for my needs. If you don't think I should use a database, please make some other suggestions on what you think would be best.
My suggestion would be to use SQLite. You can find it here: http://sqlite.org/. And you can find the C# wrapper version here: http://sqlite.phxsoftware.com/
SQLite is very lightweight and has some pretty powerful stuff for such a lightweight engine. Another option you can look into is Microsoft Access.
You're asking the wrong question again :)
The better question is "how do I build an application that lets me change the data storage implementation?"
If you apply the repository pattern and properly interface it you can build interchangeable persistence layers. So you could start with one implementation and change it as needed without having to re-engineer the business or application layers.
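For example, a small repository interface (the names here are just for illustration) that the rest of the application codes against, leaving the storage choice open:

using System.Collections.Generic;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
    public string JobSite { get; set; }
}

// The application and business layers only ever see this interface.
public interface ICustomerRepository
{
    void Add(Customer customer);
    void Remove(int customerId);
    IEnumerable<Customer> FindByJobSite(string jobSite);
    IEnumerable<Customer> All();
    void Save(); // flush to an XML file, a SQLite database, SQL Express, etc.
}

// Concrete implementations are then swappable, e.g.:
//   XmlCustomerRepository    - in-memory list persisted to an XML file
//   SqliteCustomerRepository - the same interface backed by System.Data.SQLite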
Once you have a repository interface you could try implementations with a lot of different approaches:
Flat File - You could persist the data as XML, and provided that it's not a lot of data you could store the full contents in-memory (just read the file at startup, write the file at shutdown). With in-memory XML you can get very high throughput without concern for database indexes, etc.
Distributable DB - SQLite or SQL Compact work great; they offer many DB benefits, and require no installation
Local DB - SQL Express is a good middle-ground between a lightweight and full-featured DB. Access, when used carefully, can suffice. The main benefit is that it's included with MS Office (although not installed by default), and some IT groups are more comfortable having Access installed on machines than SQL Express.
Full DB - MySql, SQL Server, PostGreSQL, et al.
Given your specific requirements I would advise you towards an XML-based flat file--with the only condition being that you are OK with the memory-usage of the application directly correlating to the size of the file (since your data is text, even with the weight of XML, this would take a lot of entries to become very large).
Here's the pros/cons--listed by your requirements:
Cons
No limit on amount of data entered.
Using in-memory XML would mean your application would not scale. It could easily handle a 10 MB data file, and 100 MB shouldn't be an issue (unless your system is low on RAM); above that you have to seriously ask "can I afford this much memory?".
Pros
Single user per application. No concurrent activity or multiple users.
XML can be read into memory and held by the process (AppDomain, really). It's perfectly suited for single-user scenarios where concurrency is a very narrow concern.
Allow user entries/data to be exported to an external file that can be easily shared between applications/users.
XML is perfect for exporting, and also easy to import to Excel, databases, etc...
Allows for user queries to display customers based on different combinations of customer information/job site information.
Linq-to-XML is your friend :D
The data will never be viewed or manipulated outside of the application.
....then holding it entirely in-memory doesn't cause any issues
The program will be running almost always, minimized to the task bar.
so loading the XML at startup and writing it at shutdown will be acceptable (if the file is very large it could take a while)
Startup time is not very important, however I would like the queries to be considerably fast
Reading the XML would be relatively slow at startup; but when it's loaded in-memory it will be hard to beat. Any given DB will require that the DB engine be started, that interop/cross-process/cross-network calls be made, that the results be loaded from disk (if not cached by the engine), etc...
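For the flat-file option, here is a small sketch of what "read at startup, query in memory with Linq-to-XML, write at shutdown" might look like; the file name and element/attribute names are made up.

using System;
using System.Linq;
using System.Xml.Linq;

class CustomerStore
{
    const string DataFile = "customers.xml";
    XDocument document;

    public void Load()
    {
        // Read the whole file once at startup; everything after this is in-memory.
        document = System.IO.File.Exists(DataFile)
            ? XDocument.Load(DataFile)
            : new XDocument(new XElement("Customers"));
    }

    public void Save()
    {
        // Write back once at shutdown.
        document.Save(DataFile);
    }

    // Example query: the names of all customers at a given job site.
    public string[] CustomersAtJobSite(string jobSite)
    {
        return document.Root
            .Elements("Customer")
            .Where(c => (string)c.Element("JobSite") == jobSite)
            .Select(c => (string)c.Attribute("Name"))
            .ToArray();
    }
}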
It sounds to me like a database is 100% what you need. It offers data storage, data retrieval (including queries), and the ability to export data to a standard format (either directly from the database, or through your application).
For a light database, I suggest SQLite (pronounced 'SQL Lite' ;) ). You can google for tutorials on how to set it up, and then how to interface with it via your C# code. I also found a reference to this C# wrapper for SQLite, which may be able to do much of the work for you!
How about SQLite? It sounds like it is a good fit for your application.
You can use System.Data.SQLite as the .NET wrapper.
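A minimal System.Data.SQLite sketch (the table and column names are placeholders) to show how little setup it needs:

using System;
using System.Data.SQLite; // System.Data.SQLite ADO.NET provider

class SqliteExample
{
    static void Main()
    {
        // The database file is created automatically if it doesn't exist.
        using (var connection = new SQLiteConnection("Data Source=customers.db"))
        {
            connection.Open();

            using (var create = new SQLiteCommand(
                "CREATE TABLE IF NOT EXISTS Customer (Id INTEGER PRIMARY KEY, Name TEXT, JobSite TEXT)",
                connection))
            {
                create.ExecuteNonQuery();
            }

            using (var insert = new SQLiteCommand(
                "INSERT INTO Customer (Name, JobSite) VALUES (@name, @site)", connection))
            {
                insert.Parameters.AddWithValue("@name", "Acme Ltd");
                insert.Parameters.AddWithValue("@site", "North Yard");
                insert.ExecuteNonQuery();
            }

            using (var query = new SQLiteCommand(
                "SELECT Name FROM Customer WHERE JobSite = @site", connection))
            {
                query.Parameters.AddWithValue("@site", "North Yard");
                using (var reader = query.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}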
You can get SQL Server Express for free. I would say the question is not so much why should you use a database, more why shouldn't you? This type of problem is exactly what databases are for, and SQL Server is a very powerful and widely used database, so if you are going to go for some other solution you need to provide a good reason why you wouldn't go with a database.
A database would be a good fit. SQLite is good as others have mentioned.
You could also use a local instance of SQL Server Express to take advantage of improved integration with other pieces of the Microsoft development stack (since you mention C#).
A third option is a document database like RavenDB, which may fit from the sound of your data.
edit
A fourth option would be to try Lightswitch when the beta comes out in a few days. (8-23-2010)
/edit
There is always going to be a limitation on data storage (the empty space on the hard disk). According to Wikipedia, SQL Server Express is limited to a 10 GB database size as of SQL Server Express 2008 R2.