if (Upload.ContentLength > 0)
{
var FileName = Upload.FileName;
var path = Path.Combine(Server.MapPath("~/Content/Images"), FileName);
Upload.SaveAs(path);
cement.ImageLocation = ("/Content/Images/" + FileName);
}
This code works perfectly when hosted on local IIS, but it fails on AppHarbor. This is the error I get from the logging:
**Message**
An unhandled exception has occurred.
**Exceptions**
[DirectoryNotFoundException: Could not find a part of the path 'D:\Users\apphb9840cac8716388\app\_PublishedWebsites\RMQGrainsBeta\Content\Images\Capture.PNG'.]
I read some articles about this and gathered that the folder might be protected, so I removed the read-only attribute on the Images folder and tried to push with Git Bash. The problem is that Git Bash doesn't detect that change and won't push anything for the folder.
I finally got an answer from AppHarbor. The thing is, it isn't the code that has the problem; it's the AppHarbor service that blocks us from uploading to, or directly accessing, the file system. Here is what the AppHarbor tech says:
About storing images: AppHarbor does not allow write access to the
application directory by default. This is because the local filesystem is
ephemeral and may be wiped on each deployment and/or during system
maintenance. For this reason it needs to be manually enabled on the
settings page, and it should only be used for temporary storage purposes such as caching.
I'd recommend using a cloud file storage solution such as Amazon S3. You can
upload files to S3 from your application or the client can upload files
directly to Amazon S3 by using presigned URLs. This will allow you to build
a scalable, distributed file storage feature that is suitable for cloud-based
platforms like AppHarbor. Let me know if you need help
implementing/architecting a solution that suits your needs!
Best,
Rune
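For anyone who ends up here with the same problem, a minimal sketch of the presigned-URL approach with the AWS SDK for .NET could look like this (the bucket name, object key, region and expiry are placeholders, not values from my app):
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

// Credentials are picked up from the app config / credentials profile.
var s3Client = new AmazonS3Client(RegionEndpoint.USEast1);   // assumed region

var request = new GetPreSignedUrlRequest
{
    BucketName = "my-upload-bucket",             // assumed bucket
    Key = "uploads/Capture.PNG",                 // assumed object key
    Verb = HttpVerb.PUT,                         // the browser will PUT the file to this URL
    Expires = DateTime.UtcNow.AddMinutes(15)
};

// Hand this URL to the client; it can upload straight to S3
// without the file ever touching the AppHarbor filesystem.
string presignedUrl = s3Client.GetPreSignedURL(request);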
Sorry for wasting your time, guys, but thank you for looking into the problem.
Related
I have multiple web servers and one central file server inside my data center, and all of my web servers store user-uploaded files on the central internal file server.
I would like to know the best way to pass a file from a web server to the file server in this setup.
As suggested, I am adding more details to the question:
The solution I came up with was: after receiving the files from the user at the web server, just do an HTTP POST to the file server. But I think there is something wrong with this, because it causes large files to be loaded entirely into memory twice (once at the web server and once at the file server).
Is your file server just another Windows/Linux server, or is it a NAS device? I can suggest a number of approaches based on your requirements. The question is why you would want to use HTTP when you have much better ways to transfer files between servers.
HTTP is best when you send text data, since HTTP itself is text-based. From the client to the server, HTTP is what you use because that is the only option the browser gives you. But between your servers, I feel you should use the SMB protocol (I am assuming you are on Windows, since the question is tagged IIS) to move the data: it is considerably faster and more efficient to transfer the same data over SMB than over HTTP.
And with SMB you do not have to write any code or complex scripts to do this. As one of the other answers points out, you can just issue a simple copy command and it will happen for you.
So, summarizing the options for you (in order of my preference):
1. Let the files get uploaded to some location on each IIS web server, e.g. C:\temp\UploadedFiles. You can write a simple 2-3 line PowerShell script that copies the files from C:\temp\UploadedFiles to \\FileServer\Files\UserID\uploaded.file. The same PowerShell script can delete each file once it has been moved to the other server successfully.
For example, the script can be as simple as this, and it is easy to run as a Windows scheduled task:
$Source = "C:\temp\UploadedFiles"                 # local upload folder on the web server
$Destination = "\\FileServer\Files\UserID\<FILEGUID>\"
New-Item -ItemType directory -Path $Destination -Force
Copy-Item -Path $Source\*.* -Destination $Destination -Force
The script can also be extended to delete the files once the copy has completed. :)
2. In the ASP.NET application you can save the file directly to the network location, i.e. in the SaveAs call you can pass the network path itself. You have to make sure the network share is accessible to the IIS worker process and that it has write permission. Also, as I understand it, ASP.NET saves the file to a temporary location first (you have no control over this if you are using HttpPostedFileBase or FormCollection). More details here.
You can even run this asynchronously so that your requests are not blocked:
if (FileUpload1.HasFile)
{
    // Save the uploaded file directly to the network share.
    FileUpload1.SaveAs(@"\\networkshare\filename");
}
https://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.fileupload.saveas(v=vs.110).aspx
3. Save the file the current way to a local directory and then use an HTTP POST. This is the worst design possible, because you first read the contents and then transfer them, chunked, to the other server, where you have to set up another web service that receives the file. Then you have to read the file from the request stream and save it to your location all over again. I am not sure you need to do this.
Let me know if you need more details on any of the listed methods.
Or you could just write it to a folder on the web servers and create a scheduled task that moves the files to the file server every x minutes (e.g. via robocopy). This also makes sure your web servers are not reliant on your file server.
Assuming you have an HttpPostedFileBase, the best way is just to call its .SaveAs() method.
You need the UNC path to the file server and that is it. The simplest version would look something like this:
public void SaveFile(HttpPostedFileBase inputFile) {
    var saveDirectory = @"\\fileshare\application\directory";
    var savePath = Path.Combine(saveDirectory, inputFile.FileName);
    inputFile.SaveAs(savePath);
}
However, this is simplistic in the extreme. Take a look at the OWASP Guidance on Unrestricted File Uploads. File uploads can be the source of many vulnerabilities in your application.
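As a rough illustration of some of that guidance, a slightly more defensive version could strip the client-supplied path, whitelist extensions and store the file under a server-generated name; the allowed extensions and target directory below are assumptions, and this is not a complete defence on its own:
// Assumed whitelist of acceptable extensions.
private static readonly HashSet<string> AllowedExtensions =
    new HashSet<string>(StringComparer.OrdinalIgnoreCase) { ".png", ".jpg", ".pdf" };

public void SaveFileSafely(HttpPostedFileBase inputFile) {
    // Never trust the client-supplied name: strip any path information.
    var fileName = Path.GetFileName(inputFile.FileName);
    var extension = Path.GetExtension(fileName);
    if (!AllowedExtensions.Contains(extension))
        throw new InvalidOperationException("File type not allowed.");

    // Store under a server-generated name to avoid collisions and overwrites.
    var storedName = Guid.NewGuid().ToString("N") + extension;
    var savePath = Path.Combine(@"\\fileshare\application\directory", storedName);
    inputFile.SaveAs(savePath);
}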
You also need to make sure that the web application has access to the file share. Take a look at this answer,
Creating a file on network location in asp.net,
for more info. Generally the best solution is to run the application pool under a dedicated identity that is only used to access the folder.
The solution I came up with was: after receiving the files from the user at the web server, just do an HTTP POST to the file server. But I think there is something wrong with this, because it causes large files to be loaded entirely into memory twice (once at the web server and once at the file server).
I would suggest not posting the file in one go; it would then be fully in memory, which is not needed.
You could post the file in chunks using Ajax. When a chunk arrives at your server, just append it to the file (a rough server-side sketch follows the JavaScript below).
With the File Reader API, you can read the file in chunks in JavaScript.
Something like this:
/** Upload a file in chunks. */
function upload(file) {
    var chunkSize = 8000;
    var start = 0;
    while (start < file.size) {
        var chunk = file.slice(start, start + chunkSize);
        var xhr = new XMLHttpRequest();
        xhr.onload = function () {
            // Check whether all chunks have arrived, then send the file name
            // (or send it with the first/last request instead).
        };
        xhr.open("POST", "/FileUpload", true);
        xhr.send(chunk);
        start += chunkSize;
    }
}
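On the server side, a minimal ASP.NET MVC sketch that appends each incoming chunk to a file could look like the following. The controller name matches the /FileUpload URL above, but the fileName parameter and the target folder are assumptions (the JavaScript above does not send a file name yet), and chunks are assumed to arrive in order:
public class FileUploadController : Controller
{
    private const string UploadFolder = @"C:\temp\UploadedFiles";   // assumed target folder

    [HttpPost]
    public ActionResult Index(string fileName)
    {
        // Strip any path information from the client-supplied name.
        var safeName = Path.GetFileName(fileName ?? "upload.tmp");
        var targetPath = Path.Combine(UploadFolder, safeName);

        Directory.CreateDirectory(UploadFolder);

        // Append this chunk to the end of the file.
        using (var output = new FileStream(targetPath, FileMode.Append, FileAccess.Write))
        {
            Request.InputStream.CopyTo(output);
        }
        return new HttpStatusCodeResult(200);
    }
}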
It can be implemented in different ways. If you are storing the files on the file server as plain files in the file system, and all of your servers are inside the same virtual network, then it is better to create a shared folder on your file server; once you receive a file at the web server, just save it into that shared folder directly on the file server.
Here are the instructions for creating shared folders: https://technet.microsoft.com/en-us/library/cc770880(v=ws.11).aspx
Just map a drive
I take it you have a means of saving the uploaded file on the web server's local filesystem. The question pertains to moving the file from the web server (which is probably one of many load-balanced nodes) to a central file system that all web servers can access.
The solution to this is remarkably simple.
Let's say you are currently saving the files to some folder, say c:\uploadedfiles, and the path to uploadedfiles is stored in your web.config.
Take the following steps:
Sign on as the service account under which your web site executes
Map a persistent network drive to the desired location, e.g. from command line:
NET USE f: \\MyFileServer\MyFileShare /user:SomeUserName password
Modify your web.config and change c:\uploadedfiles to f:\
Ta da, all done.
Just make sure the drive mapping is persistent and that you use a user with adequate permissions, and voilà.
I am currently building a local static site generator in C#. It compiles a bunch of templates together into a hierarchy of plain old HTML files. I want to upload the resulting files to my Windows Azure Website and have the changes reflected live, and I want to be able to do this programmatically via my script.
As it stands, I'm having to upload the generated files manually using WebMatrix, as I haven't been able to find an API or SDK that lets me directly upload HTML to a Windows Azure Website.
Surely there must be a way to do this from code, other than just using an sFTP library, which would be slow (it doesn't use the WebMatrix/IIS protocol, which I think sends zipped diffs) and would mean out-of-sync data during the upload while some files have been updated and others haven't. I'd also rather not have to commit my generated site to source control if I can avoid it; it seems conceptually wrong to put something into source control merely as an implementation detail of deployment.
Update: WebMatrix internally uses Web Deploy (MSDeploy). Theoretically you should be able to build the deployment package yourself using the API, but 99% of the examples I can find are using the command-line tool or the GUI tools in Visual Studio. I need to build the package and deploy it programmatically from within C#. Any ideas or guidance on how to go about this? The docs on MSDN don't really show any examples for this kind of scenario.
OK, so I worked out what to do with help from a couple of friendly folks at Microsoft. (See David Ebbo's response to my forum question, and this very helpful info from Sayed Hashimi showing how to do exactly what I wanted from the msdeploy.exe console app.)
Just grab your PublishSettings file from the Azure web portal. Open it in a text editor to get the values to paste into the code below.
// Requires a reference to Microsoft.Web.Deployment.dll (the Web Deploy / MSDeploy managed API)
var destinationOptions = new DeploymentBaseOptions()
{
// userName from Azure Websites PublishSettings file
UserName = "$msdeploytest",
// pw from PublishSettings file
Password = "ThisIsNotMyPassword",
// publishUrl from PublishSettings file using https: protocol prefix rather than 443 port
// and adding "/msdeploy.axd?site={msdeploySite-variable-from-PublishSettings}"
ComputerName = "https://waws-prod-blu-003.publish.azurewebsites.windows.net/msdeploy.axd?site=msdeploytest",
AuthenticationType = "Basic"
};
// This option says we're giving it a directory to deploy
using (var deploymentObject = DeploymentManager.CreateObject(DeploymentWellKnownProvider.ContentPath,
// path to root directory of source files
#"C:\Users\ryan_000\Downloads\dummysite"))
{
var syncOptions = new DeploymentSyncOptions();
syncOptions.WhatIf = false;
// "msdeploySite" variable from PublishSettings file
var changes = deploymentObject.SyncTo(DeploymentWellKnownProvider.ContentPath, "msdeploytest", destinationOptions, syncOptions);
Console.WriteLine("BytesCopied: " + changes.BytesCopied.ToString());
Console.WriteLine("Added: " + changes.ObjectsAdded.ToString());
Console.WriteLine("Updated: " + changes.ObjectsUpdated.ToString());
Console.WriteLine("Deleted: " + changes.ObjectsDeleted.ToString());
Console.WriteLine("Errors: " + changes.Errors.ToString());
Console.WriteLine("Warnings: " + changes.Warnings.ToString());
Console.WriteLine("ParametersChanged: " + changes.ParameterChanges.ToString());
Console.WriteLine("TotalChanges: " + changes.TotalChanges.ToString());
}
You might also be able to stumble your way through the obscure documentation on MSDN. There is a lot of passing around of oddly-named options classes, but with a bit of squinting of one's eyes and flailing about in the docs it's possible to see how the command-line options (of which it is much easier to find examples online) map to API calls.
The easiest way is probably to set up Git publishing for your website and programmatically do a git commit followed by a git push. You can think of it as a deployment mechanism instead of source control, given that Azure websites natively support a backing Git repository that doesn't have to have anything to do with your chosen SCM solution.
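If you go that route, a minimal sketch of driving git from C# could look like this; the output folder and the remote name "azure" are assumptions based on the default remote that Azure's Git publishing sets up:
using System;
using System.Diagnostics;

class GitDeploy
{
    static void RunGit(string arguments, string workingDirectory)
    {
        var psi = new ProcessStartInfo("git", arguments)
        {
            WorkingDirectory = workingDirectory,
            UseShellExecute = false
        };
        using (var process = Process.Start(psi))
        {
            process.WaitForExit();
            if (process.ExitCode != 0)
                throw new InvalidOperationException("git " + arguments + " failed");
        }
    }

    static void Main()
    {
        const string siteFolder = @"C:\generated\site";   // assumed output folder, already a git repo
        RunGit("add -A", siteFolder);
        RunGit("commit -m \"Automated deploy\"", siteFolder);   // note: exits non-zero if there is nothing to commit
        RunGit("push azure master", siteFolder);
    }
}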
WebMatrix uses WebDeploy to upload the files to Windows Azure Web Sites.
An alternative is to use the VFS REST API (https://github.com/projectkudu/kudu/wiki/REST-API#wiki-vfs). The diagnostic console uses this to work with the file system today.
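As a rough sketch (not something I have tested in production), uploading a single generated file through the VFS API with HttpClient could look like the following; the site name, deployment credentials and paths are placeholders, and Kudu expects an If-Match header when an existing file is being overwritten:
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

class VfsUpload
{
    static void Main()
    {
        // Deployment credentials from the site's publish profile (placeholders).
        var credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes("$sitename:publishPassword"));

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);
            client.DefaultRequestHeaders.IfMatch.Add(EntityTagHeaderValue.Any);   // "*": allow overwriting

            var content = new ByteArrayContent(File.ReadAllBytes(@"C:\generated\index.html"));
            var response = client.PutAsync(
                "https://sitename.scm.azurewebsites.net/api/vfs/site/wwwroot/index.html",
                content).Result;
            response.EnsureSuccessStatusCode();
        }
    }
}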
I am developing a Windows Phone 8 application, but I am having a lot of issues with file access permission exceptions hindering the approval of my application whenever I try to access files in the "local" folder (this only happens after the application has been signed by the WP store, not when it is deployed from Visual Studio). To solve this I have moved all file operations to IsolatedStorage, and that seems to have fixed the problems.
I only have one problem left, though. My application needs to make use of the file extension system to open external files, and this seems to involve the file first being copied to the local folder, after which I can manually copy it into IsolatedStorage. I have no problem implementing this, but it seems that a file access permission exception also occurs once the system tries to copy the external file into the local folder.
The only way I think this can be solved is if I can direct the system to copy directly into IsolatedStorage, but I cannot figure out how to do this or whether it is even possible. It seems as though the SharedStorageAccessManager can only copy into a StorageFolder instance, and I have no idea how to create one that points into IsolatedStorage. Any ideas?
PS: Do you think the Microsoft system might be signing my application with some inadequate certificate? There is not a hint of trouble when I deploy the application from Visual Studio; it only happens when Microsoft tests it or when I install it from the store using the Beta submission method.
Below is a screenshot of the caught exception being displayed in a message box when trying to open a file from an email:
EDIT:
Just to make it even clearer, I do NOT need assistance in figuring out the normal practice of using a deep link URI to copy an external file into my application directory. I need help either copying it directly into IsolatedStorage or resolving the file access exception.
Listening for a file launch
When your app is launched to handle a particular file type, a deep link URI is used to take the user to your app. Within the URI, the FileTypeAssociation string designates that the source of the URI is a file association and the fileToken parameter contains the file token.
For example, the following code shows a deep link URI from a file association.
/FileTypeAssociation?fileToken=89819279-4fe0-4531-9f57-d633f0949a19
Upon launch, map the incoming deep link URI to an app page that can handle the file:
// Get the file token from the URI
// (This is easiest done from a UriMapper that you implement based on UriMapperBase)
// ...
// Get the file name.
string incomingFileName = SharedStorageAccessManager.GetSharedFileName(fileID);
// You will then use the file name you got to copy it into your local folder with
// See: http://msdn.microsoft.com/en-us/library/windowsphone/develop/windows.phone.storage.sharedaccess.sharedstorageaccessmanager.copysharedfileasync(v=vs.105).aspx
SharedStorageAccessManager.CopySharedFileAsync(...)
I've inlined the information on how to do this from MSDN: http://msdn.microsoft.com/en-us/library/windowsphone/develop/jj206987(v=vs.105).aspx
Read that documentation and it should be clear how to use the APIs as well as how to setup your URI mapper.
Good luck :)
OK, I figured it out. The "install" directory actually has restricted access, but for some reason the Visual Studio signing process leaves the app with enough permissions to access this folder. The correct way to determine a relative directory is not to use "Directory.GetCurrentDirectory()" but rather "ApplicationData.Current.LocalFolder". Hope this helps!
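For illustration, a small sketch of the difference (the file name is just a placeholder):
// Writable app data folder: works even when the app is store-signed.
string localFolder = Windows.Storage.ApplicationData.Current.LocalFolder.Path;
string settingsPath = System.IO.Path.Combine(localFolder, "settings.xml");   // placeholder file name

// The old way, which throws an access exception once the store signs the app:
// string settingsPath = System.IO.Path.Combine(Directory.GetCurrentDirectory(), "settings.xml");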
Currently I am getting this error when trying to copy a local file over to a shared network drive: Access to the path '\\vlanxxxx\Vlanxxx\VT\12345.pdf' is denied.
Background Information
Running ASP.NET MVC3, Windows Server 2008 R2
In IIS->Application Pools my Application has the correct permission to access the shared drive
I have tried copying the file from a local folder to local folder and it worked successfully. i.e C:\log\12345.pdf -> C:\test\12345.pdf
Purpose: My overall purpose is to copy a file in my local server to a shared mapped network drive.
Q: Is this actually possible when using ASP.NET, is there any networked shared drive limitations when using this technology?
Code for copying the file over (I am pretty sure the code is fine, but here it is just in case):
String path = @"C:\log\12345.pdf";
String dest_path = @"\\vlanxxxx\Vlanxxx\VT";
File.Copy(path, Path.Combine(dest_path, Path.GetFileName(path)));
Thank you for the help and assistance, I appreciate your time! Please let me know if there is any misunderstanding in the question. I will try to edit and clarify as soon as possible.
In IIS, try giving write permission to the IUSR and IIS_IUSRS accounts on that destination path/folder.
I have my project files in my Dropbox folder so I can play around with them at the office as well.
My project contains an EmbeddableDocumentStore with UseEmbeddedHttpServer set to true.
const int ravenPort = 8181;
NonAdminHttp.EnsureCanListenToWhenInNonAdminContext(ravenPort);

var ds = new EmbeddableDocumentStore {
    DataDirectory = "Data",
    UseEmbeddedHttpServer = true,
    Configuration = { Port = ravenPort }
};
Now, today when I started my project on my office PC, I saw this message: Could not open transactional storage: D:\Dropbox\...\Data
Since it's early in my development I deleted the Data folder in my Dropbox and the project started flawlessly. Now that I'm back at home I've run into the same issue, and of course I don't want to end up deleting this folder every time.
Can't I store my development data in my Dropbox? Do I need to work around something to get this to work?
Set the data directory to a physical disk volume on your local computer. You will not be able to use any sort of mapped drive, network share, UNC path, Dropbox or SkyDrive folder as a data directory. Just because you have a drive letter does not mean you have a physical disk.
The only types of non-physical storage that even make sense are a LUN attached from a SAN over iSCSI or Fibre Channel, or an attached VHD in a virtualized or cloud environment. They all present to the OS as physical disks.
This would be the case for just about ANY data access environment. Try it with SQL Server if you don't believe me. In RavenDB's case, it is using ESENT as its data store, which requires direct access to the filesystem.
Update
To clarify, even if you are storing on a physical disk, you can't rely on any kind of synchronization technology like DropBox or SkyDrive. Why? Because they take a shared read lock on the files to watch for changes, while technologies like ESENT (which RavenDB is built on) require an exclusive lock on the file.
Other technologies such as SQL Server and Windows virtual machines also take exclusive locks on their data stores. Why? Because they are constantly reading and writing bits of data in a random-access manner within the file. Would you really want DropBox to attempt a sync operation for every bit of data that changes? It would be very inefficient and problematic.
Applications that use shared locks don't have this problem. For example, when you work on an MS Word document, it is all being done in memory. When you save the file, DropBox can read the entire file and sync it to the cloud. It can optimize by sending only the bits that have changed, but it still needs to be able to read the file to do so.
So if DropBox has a shared read lock on the ESENT file, then when RavenDB tries to open it exclusively, it gets an error and raises the exception you are seeing.
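If you want to see those sharing semantics in isolation, here is a tiny demonstration with plain FileStream handles; the file name is hypothetical and this is not RavenDB code, it only shows why the exclusive open is rejected:
using System.IO;

// A sync client holds the file open with read access, allowing others to read only...
using (var syncClient = new FileStream("data.ravendb", FileMode.OpenOrCreate, FileAccess.Read, FileShare.Read))
{
    // ...so an ESENT-style exclusive open (read/write, no sharing) throws an IOException:
    // "The process cannot access the file because it is being used by another process."
    var exclusive = new FileStream("data.ravendb", FileMode.Open, FileAccess.ReadWrite, FileShare.None);
}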