I am developing an application that maintains a document repository and provides searching over it. I want to prevent users from viewing/adding/modifying/deleting documents outside the application. Currently, I am storing documents in a normal folder structure on Windows, which is easily accessible to any authorized Windows user. I am looking for a methodology that ensures nothing other than my application can access those documents.
Is there any library or technique available with which I can hide those files from Windows users, while my application can still use them in the normal way?
If you have to save the documents on Windows and want to restrict users, try saving them in encrypted form, possibly protected by a password, so that even if someone gains access to a file, it is useless to them. Something like this should work for encryption:
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

using (var fileStream = File.OpenWrite(theFileName))
using (var memoryStream = new MemoryStream())
{
    // Serialize to memory instead of to file
    var formatter = new BinaryFormatter();
    formatter.Serialize(memoryStream, customer);

    // Reset the memory stream position for the following read operation
    memoryStream.Seek(0, SeekOrigin.Begin);

    // Get the bytes
    var bytes = new byte[memoryStream.Length];
    memoryStream.Read(bytes, 0, (int)memoryStream.Length);

    // Encrypt before writing to disk
    var encryptedBytes = yourCrypto.Encrypt(bytes);
    fileStream.Write(encryptedBytes, 0, encryptedBytes.Length);
}
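The yourCrypto object above is a placeholder. For illustration, here is a minimal sketch of such a helper using the framework's AesManaged class, deriving the key from a password; the class name SimpleAesCrypto and the fixed salt are my own inventions, and in practice you should use a random salt stored alongside each file:

using System.IO;
using System.Security.Cryptography;

// Hypothetical helper standing in for "yourCrypto" above.
public class SimpleAesCrypto
{
    private readonly byte[] _key;
    private readonly byte[] _iv;

    public SimpleAesCrypto(string password)
    {
        // Derive key material from the password (fixed salt for brevity only)
        var derive = new Rfc2898DeriveBytes(password, new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 });
        _key = derive.GetBytes(32);
        _iv = derive.GetBytes(16);
    }

    public byte[] Encrypt(byte[] plain)
    {
        using (var aes = new AesManaged { Key = _key, IV = _iv })
        using (var output = new MemoryStream())
        {
            using (var crypto = new CryptoStream(output, aes.CreateEncryptor(), CryptoStreamMode.Write))
            {
                crypto.Write(plain, 0, plain.Length);
            }
            return output.ToArray();
        }
    }
}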
And you can use this library for password protection and zipping
If you are on Windows Server 2012, you can use Windows Server 2012 Dynamic Access Control (DAC), which builds on security descriptors (described in SDDL, the Security Descriptor Definition Language) and DACLs (discretionary access control lists) to support claims-based access rules.
Here are a couple of articles worth reading:
http://blogs.technet.com/b/wincat/archive/2012/07/20/diving-deeper-into-windows-server-2012-dynamic-access-control.aspx
http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/SIA341
With Dynamic Access Control, you can define which users can view or edit which files and folders, based on classification and user claims.
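Even without DAC, plain NTFS ACLs get you part of the way: you can lock the repository folder down so that only the account your application runs under can touch it. A minimal sketch in C#, where the service account name and folder path are hypothetical:

using System.IO;
using System.Security.AccessControl;

// Restrict a folder so only the application's service account has access.
// "MYDOMAIN\DocAppService" is a hypothetical account name.
var security = new DirectorySecurity();
security.SetAccessRuleProtection(isProtected: true, preserveInheritance: false); // drop inherited rules
security.AddAccessRule(new FileSystemAccessRule(
    @"MYDOMAIN\DocAppService",
    FileSystemRights.FullControl,
    InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
    PropagationFlags.None,
    AccessControlType.Allow));
Directory.SetAccessControl(@"C:\DocumentRepository", security);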
I have a SharePoint server and I want to open files directly from the server with SharePoint CSOM.
The user clicks a button --> the file (Excel, Word, ...) opens on the client machine with the standard software.
Directly means that if I change something in the file and click save, the file is saved straight back to the SharePoint server (or if I click e.g. 'Save as' in Excel, the suggested path is 'https://sharpoint.url.com/folder').
Currently I have:
using System.Net;
using Microsoft.SharePoint.Client;

var clientContext = new ClientContext("https://sharpoint.url.com");
string relativePath = "/folder/file.xls";
clientContext.Credentials = CredentialCache.DefaultCredentials;
var file = clientContext.Web.GetFileByServerRelativeUrl(relativePath);
clientContext.Load(file);
clientContext.ExecuteQuery();
What do I have to do now, if I want to open the file directly (no download)?
I assume you're asking how to access the file's stream instead of downloading it to a local folder.
You can use the File.OpenBinaryDirect method to get access to its ETag and stream, e.g.:
using (var fileInfo = File.OpenBinaryDirect(clientContext, "/folder/file.xls"))
using (var reader = new StreamReader(fileInfo.Stream))
{
    // Do whatever you want with the data
}
BTW, you shouldn't use the old xls format; it has been deprecated for over 10 years. The current Excel format, xlsx, is a zipped package of XML files that is better supported by SharePoint itself and doesn't require Excel to generate or read.
For example, if you wanted to read cell values from an xlsx file, you could use the popular EPPlus library to read directly from the stream:
using (var fileInfo = File.OpenBinaryDirect(clientContext, "/folder/file.xlsx"))
using (var package = new ExcelPackage(fileInfo.Stream))
{
    var sheet = package.Workbook.Worksheets[0];
    var value = sheet.Cells["A1"].Value;
    //...
}
UPDATE
It seems the question isn't related to programming after all. All that's needed to save or open a SharePoint document is clicking the document's link. What happens then depends on the Open Documents in Client Applications setting at the site and document library level.
This setting affects the headers the server sends to the browser when the user clicks a document link. The browser may still refuse to open the registered application and display the Save dialog instead.
If that doesn't work, you should investigate why instead of writing code. It's probably a configuration error or a browser setting. Solving it is easier than creating workarounds, pushing them to all client machines, and then keeping track of all the patches, where they are deployed, and deploying new ones.
Apart from that, Office applications have known about SharePoint and document libraries since 2003. They can browse them, display SharePoint properties for a document, show collaborators, etc.
As I mentioned in the question comments, a lot of what people think of as "SharePoint development" is nothing more than configuration, administration, and end-user features.
The MSDN docs don't help either - they actually cause harm by not covering SharePoint administration or explaining the features and how they are used. You'll find that on TechNet. For years, people created web parts in code to change how grids looked because MSDN didn't explain how e.g. the DataViewWebPart worked or how you could style a grid from the UI.
In general, the best place for such questions is http://sharepoint.stackexchange.com. For example, check “Open in the client application” vs “Use the server default (Open in the client application)” in the document library's advanced settings.
We can create a mapped network drive for the SharePoint library and open the file from the network location. See the article below:
http://support.sherweb.com/Faqs/Show/how-to-connect-to-a-sharepoint-site-using-webdav-sharepoint-2013
Or we can download the file from SharePoint and open it using the code below:
// Requires an Excel application instance, e.g. via Office interop
var excelApp = new Microsoft.Office.Interop.Excel.Application();
excelApp.Workbooks.Open(@"C:\Test\YourWorkbook.xlsx");
Reference: https://msdn.microsoft.com/en-us/library/b3k79a5x.aspx
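The download step itself isn't shown above. Here is a minimal sketch, reusing the CSOM clientContext and File.OpenBinaryDirect from earlier in this thread; the local path is arbitrary:

using System.IO;
using Microsoft.SharePoint.Client;

// Download the document to a local path before opening it.
using (var fileInfo = File.OpenBinaryDirect(clientContext, "/folder/file.xlsx"))
using (var local = System.IO.File.Create(@"C:\Test\YourWorkbook.xlsx"))
{
    fileInfo.Stream.CopyTo(local);
}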
I am working on an application where file uploads happen often, and can be pretty large in size.
Those files are being uploaded to a Web API, which will then get the Stream from the request, and pass it on to my storage service, that then uploads it to Azure Blob Storage.
I need to make sure that:
No temp files are written on the Web API instance
The request stream is not fully read into memory before passing it on to the storage service (to prevent OutOfMemoryExceptions).
I've looked at this article, which describes how to disable input stream buffering, but because many file uploads from many different users happen simultaneously, it's important that it actually does what it says on the tin.
This is what I have in my controller at the moment:
if (this.Request.Content.IsMimeMultipartContent())
{
    var provider = new MultipartMemoryStreamProvider();
    await this.Request.Content.ReadAsMultipartAsync(provider);
    var fileContent = provider.Contents.SingleOrDefault();
    if (fileContent == null)
    {
        throw new ArgumentException("No filename.");
    }
    var fileName = fileContent.Headers.ContentDisposition.FileName.Replace("\"", string.Empty);

    // I need to make sure this stream is ready to be processed by
    // the Azure client lib, but not buffered fully, to prevent OoM.
    var stream = await fileContent.ReadAsStreamAsync();
}
I don't know how I can reliably test this.
EDIT: I forgot to mention that uploading directly to Blob Storage (circumventing my API) won't work, as I am doing some size checking (e.g. is this user allowed to upload 500 MB? Has this user used up their quota?).
Solved it, with the help of this Gist.
Here's how I am using it, along with a clever "hack" to get the actual file size without copying the file into memory first. Oh, and it's twice as fast (obviously).
// Create an instance of our provider.
// See https://gist.github.com/JamesRandall/11088079#file-blobstoragemultipartstreamprovider-cs for implementation.
var provider = new BlobStorageMultipartStreamProvider ();
// This is where the uploading is happening, by writing to the Azure stream
// as the file stream from the request is being read, leaving almost no memory footprint.
await this.Request.Content.ReadAsMultipartAsync(provider);
// We want to know the exact size of the file, but this info is not available to us before
// we've uploaded everything - which has just happened.
// We get the stream from the content (and that stream is the same instance we wrote to).
var stream = await provider.Contents.First().ReadAsStreamAsync();
// Problem: If you try to use stream.Length, you'll get an exception, because BlobWriteStream
// does not support it.
// But this is where we get fancy.
// Position == size, because the file has just been written to it, leaving the
// position at the end of the file.
var sizeInBytes = stream.Position;
Voilà, you've got your uploaded file's size without having to copy the file into your web instance's memory.
As for getting the file length before the file is uploaded, that's not as easy; I had to resort to some rather unpleasant methods to get even an approximation.
In the BlobStorageMultipartStreamProvider:
var approxSize = parent.Headers.ContentLength.Value - parent.Headers.ToString().Length;
This gives me a pretty close file size, off by a few hundred bytes (depends on the HTTP header I guess). This is good enough for me, as my quota enforcement can accept a few bytes being shaved off.
Just for showing off, here's the memory footprint, reported by the insanely accurate and advanced Performance Tab in Task Manager.
Before - using MemoryStream, reading it into memory before uploading
After - writing directly to Blob Storage
I think a better approach is for you to go directly to Azure Blob Storage from your client. By leveraging the CORS support in Azure Storage you eliminate load on your Web API server resulting in better overall scale for your application.
Basically, you will create a Shared Access Signature (SAS) URL that your client can use to upload the file directly to Azure storage. For security reasons, it is recommended that you limit the time period for which the SAS is valid. Best practices guidance for generating the SAS URL is available here.
For your specific scenario check out this blog from the Azure Storage team where they discuss using CORS and SAS for this exact scenario. There is also a sample application so this should give you everything you need.
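As a rough illustration, generating a short-lived, write-only SAS URL with the WindowsAzure.Storage client library might look like the following sketch; the container name, blob name, and connection string are placeholders:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Build a write-only SAS URL valid for 15 minutes.
var account = CloudStorageAccount.Parse(connectionString); // your storage connection string
var container = account.CreateCloudBlobClient().GetContainerReference("uploads");
var blob = container.GetBlockBlobReference("user-123/file.bin"); // hypothetical blob name

var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Write,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
});

// Hand this URL to the client; it can PUT the file directly to Blob Storage.
var uploadUrl = blob.Uri + sas;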
I am writing a messaging app in C# that runs on a shared file server on a network. The program works by multiple users running it at once, all accessing a file that is shared between their computers. Hence, I need to use StreamReader/StreamWriter to access the file with multiple programs at once (EDIT: I now know this isn't a good way to do it, but it's what I needed at the time). So how can I access a single file from multiple programs without getting errors about the file being in use?
I think your approach will lead to problems in the future. I'd consider leveraging Redis pub/sub if I were you.
But, since you asked... (I wrote a blog post on this: http://procbits.com/2011/02/18/streamwriter-share-read-access-in-another-process/ )
Generator of chat data:
var fs = File.Open(@"C:\messages.txt", FileMode.Append, FileAccess.Write, FileShare.Read);
var sw = new StreamWriter(fs);
sw.AutoFlush = true;
Somewhere else in your app or another app...
Readers of chat data:
var fs = File.Open(@"C:\messages.txt", FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
var sr = new StreamReader(fs);
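To actually pick up messages as they are appended, the reader can keep polling from its current position. A minimal console sketch reusing the sr reader above; the loop and delay are my assumptions, not from the original post:

// Tail the shared file: read any new lines, then wait and try again.
while (true)
{
    string line;
    while ((line = sr.ReadLine()) != null)
    {
        Console.WriteLine(line); // show the newly appended chat message
    }
    System.Threading.Thread.Sleep(500); // back off before polling again
}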
If you need a single file that multiple users/programs/entities should read from and write to without disturbing each other, I would suggest considering (among other solutions) SQLite as a simple DB backend. No installation or service setup is needed; just reference its C# DLLs and, basically, you get what you need: one user writes to the db file (INSERT) while another reads (SELECT) from it, as in the sketch below.
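For illustration, a minimal sketch with the System.Data.SQLite package; the table and file names are made up:

using System;
using System.Data.SQLite;

// Writer: append a chat message.
using (var conn = new SQLiteConnection("Data Source=messages.db"))
{
    conn.Open();
    using (var cmd = new SQLiteCommand(
        "CREATE TABLE IF NOT EXISTS messages (author TEXT, body TEXT); " +
        "INSERT INTO messages (author, body) VALUES (@author, @body);", conn))
    {
        cmd.Parameters.AddWithValue("@author", "alice");
        cmd.Parameters.AddWithValue("@body", "hello");
        cmd.ExecuteNonQuery();
    }
}

// Reader: fetch all messages.
using (var conn = new SQLiteConnection("Data Source=messages.db"))
{
    conn.Open();
    using (var cmd = new SQLiteCommand("SELECT author, body FROM messages;", conn))
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine(reader.GetString(0) + ": " + reader.GetString(1));
    }
}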
I think you should think twice about using a text file as a communication channel between peers. It's asking for trouble.
Please take a look at using a P2P solution instead:
Peer Channel Chat
A simple peer to peer chat application using WCF netPeerTcpBinding
That will give you a much more fitting architecture for your requirements.
I have a doubt: can we access MyDocuments from a Silverlight application? I am creating an application which will download a set of files from a remote server. Is it possible to save these files in MyDocuments instead of Isolated Storage? I am using Silverlight 4.0. Can anyone give me sample code for it?
In order to achieve that you need to use Silverlight 4 and specify that the application should get elevated privileges when installed as an out-of-browser (OOB) application. When running as an OOB application, the app will have access to the user's Documents folder.
In all other cases you will need to use the SaveFileDialog, where the user can explicitly specify where to save the file.
Edit: code example:
if (Application.Current.HasElevatedPermissions)
{
    string path = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
    path = Path.Combine(path, "MySaveFile.dat");
    using (var filestream = File.OpenWrite(path))
    {
        // pump your input stream into the filestream using standard Stream methods
    }
}
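For the non-elevated case mentioned above, here is a minimal SaveFileDialog sketch; downloadedStream is a hypothetical stand-in for whatever stream you received from the server:

using System.IO;
using System.Windows.Controls;

var dialog = new SaveFileDialog { Filter = "All files (*.*)|*.*" };
if (dialog.ShowDialog() == true)
{
    // The user picks the destination; the app never sees the chosen path.
    using (Stream output = dialog.OpenFile())
    {
        var buffer = new byte[4096];
        int read;
        while ((read = downloadedStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            output.Write(buffer, 0, read);
        }
    }
}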
No, Isolated Storage is currently the only option.
I'm trying to clone the Facebook image uploader, which is built in Java, but I would like to use Silverlight, so I'm wondering if I can somehow read a local directory.
If I had this running on some remote server, I could easily read the contents of that server, since I have C# as a backend. But I'm not sure how I could read a certain directory on the machine of the user who is running the Silverlight application.
Any ideas if this is possible or not?
It's possible to read a file "blindly" using OpenFileDialog. Blindly means you let the user point the dialog at a file so Silverlight can read its content, but it can't tell where the file is located.
Example:
var fileDialog = new OpenFileDialog();
var dialog = fileDialog.ShowDialog();
if (dialog.HasValue && dialog.Value)
{
    byte[] bytes;
    using (var fileReader = fileDialog.File.OpenRead())
    {
        // Read the file's content into memory; the file's location stays hidden.
        bytes = new byte[fileReader.Length];
        fileReader.Read(bytes, 0, (int)fileReader.Length);
    }
}
Access to the file system is limited for security reasons. Some access (blind as well) can be done using Isolated Storage, where you can store data and access it later, as in the sketch below.
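For completeness, a minimal Isolated Storage sketch; the file name is arbitrary:

using System.IO;
using System.IO.IsolatedStorage;

// Store data in the app's isolated store and read it back later.
using (var store = IsolatedStorageFile.GetUserStoreForApplication())
{
    using (var stream = store.CreateFile("cache.dat"))
    using (var writer = new StreamWriter(stream))
    {
        writer.Write("some data to keep");
    }

    using (var stream = store.OpenFile("cache.dat", FileMode.Open))
    using (var reader = new StreamReader(stream))
    {
        string data = reader.ReadToEnd();
    }
}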