Perhaps the main issue is where I am uploading to - I am currently using MediaFire to host my files. I have tested downloading files in both ".exe" and ".png" formats; neither seems to work for me, though.
The issue that constantly occurs for me:
When I attempt to download a file (I will put the URLs of the two files at the end of my question), the amount of data retrieved is either far greater or far less than the actual size of the file. For example, I uploaded a blank VB6 executable file which is 16 KB; the downloaded file comes out to nearly 60 KB!
Some things to note:
1) Both files download with no problems through Chrome (and I'm assuming other browsers as well).
2) I have tried multiple methods of retrieving the data from the downloaded file (same result).
My Code:
// Create a new instance of the System.Net 'WebClient'
System.Net.WebClient client = new System.Net.WebClient();
// Download URL
// PNG File
string url = @"http://www.mediafire.com/imageview.php?quickkey=a16mo8gm03fv1d9&thumb=4";
// EXE File (blank VB6 exe file, ~16 KB)
// string url = @"http://www.mediafire.com/download.php?nn1cupi7j5ia7cb";
// Destination
string savePath = Environment.GetFolderPath(Environment.SpecialFolder.Desktop) + @"\Test.png";
byte[] result = client.DownloadData(url);
MessageBox.Show(result.Length.ToString()); // Returns 57,000 something (over 3x larger than my EXE file!).
// Write downloaded data to desired destination
System.IO.File.WriteAllBytes(savePath, result);
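One hedged way to diagnose the size mismatch is to look at what the server actually sent back: if MediaFire's imageview.php/download.php URLs return an HTML landing page rather than the raw file, the downloaded bytes will be an HTML document of an unrelated size. The snippet below is a diagnostic sketch, not a fix, reusing the `url` from the code above:

```csharp
using System;
using System.Net;
using System.Text;

System.Net.WebClient client = new System.Net.WebClient();
byte[] result = client.DownloadData(url);

// What did the server say it sent? A "text/html" content type means we
// received a web page (the file viewer), not the file itself.
Console.WriteLine(client.ResponseHeaders[HttpResponseHeader.ContentType]);

// Peek at the first bytes: "<!DOCTYPE" or "<html" confirms an HTML page.
Console.WriteLine(Encoding.ASCII.GetString(result, 0, Math.Min(result.Length, 64)));
```

If the output shows HTML, the URL points at MediaFire's viewer page, and a direct download link would be needed instead.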
This is my first post, because I could not find any solution to my problem, neither on my own nor in any thread.
Here is the situation: I have a .JSON file which stores my credentials for an API authentication.
The software is used on multiple computers, so please do not propose anything regarding a hardcoded path.
My code:
string path = Path.GetTempPath();
path = path + "unterschrift.png";
if (System.IO.File.Exists(path) == false)
{
    Properties.Resources.credentials.Save(path);
}
Now I get the error: 'byte[]' does not contain a definition for 'Save'.
Looking forward to your support!
The same code worked for an image file, but not now, because it is a different file format. I have already tried decoding the bytes, but it does not work.
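A minimal sketch of the likely fix, assuming the resource is exposed as a `byte[]` (which is how Visual Studio generates accessors for non-image resources such as .json files): write the bytes out with `File.WriteAllBytes` instead of calling `Save`, which only exists on types like `System.Drawing.Image`. The file name here is a placeholder:

```csharp
using System.IO;

string path = Path.Combine(Path.GetTempPath(), "credentials.json"); // hypothetical name
if (!File.Exists(path))
{
    // Non-image resources are generated as byte[], so write them directly.
    File.WriteAllBytes(path, Properties.Resources.credentials);
}
```

This also avoids hardcoding a path, since `Path.GetTempPath()` resolves per machine.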
I am getting the SharePoint error "The server does not allow messages larger than 2097152 bytes" while uploading a large file.
Can you please help me?
Below is my code:
Folder currentRunFolder = site.GetFolderByServerRelativeUrl(barRootFolderRelativeUrl + "/" + newFolderName);
FileCreationInformation newFile = new FileCreationInformation
{
    Content = System.IO.File.ReadAllBytes(@p),
    Url = Path.GetFileName(@p),
    Overwrite = true
};
currentRunFolder.Files.Add(newFile);
currentRunFolder.Update();
context.ExecuteQuery();
Is this SP Online or on-prem?
If on-prem, you may first try to extend the settings a bit:
extend the max file upload in CA for this web application,
set 'web page security Validation' to Never in CA for this web application (in this link there is a screenshot showing how to set it),
extend the timeout in IIS.
Also, for large files you may consider using the Microsoft.SharePoint.Client.File.SaveBinaryDirect method.
Please check one of my previous answers to a very similar question:
Upload file to sharepoint with httpclient
There I give an example for SP 2013 (which would work in SP 2010 and 2016 as well) that uploads files under 2 MB using Files.Add and larger files using SaveBinaryDirect.
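A minimal sketch of the SaveBinaryDirect approach, assuming the same `context`, `barRootFolderRelativeUrl`, and `newFolderName` as in the question (local path and file name are placeholders). Because it streams the upload, it is not subject to the 2097152-byte ClientContext message size limit:

```csharp
using System.IO;
using Microsoft.SharePoint.Client;

using (FileStream fs = new FileStream(@"C:\temp\bigfile.bin", FileMode.Open))
{
    // Streams the content directly; no byte[] passed through ExecuteQuery.
    Microsoft.SharePoint.Client.File.SaveBinaryDirect(
        context,                                                   // existing ClientContext
        barRootFolderRelativeUrl + "/" + newFolderName + "/bigfile.bin",
        fs,
        true);                                                     // overwrite
}
```

Note that SaveBinaryDirect takes a server-relative URL to the target file, not a Folder object.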
I am simply trying to transfer text files from one FTP server to another using a windows service. I download the required files from source FTP server and save it locally on my system and then upload the saved file to the destination server. For downloading and uploading files I am using WinSCP .Net Assembly. Here is my code that I am using to transfer files to the destination server:
WinSCP.SessionOptions sessionOptions = new WinSCP.SessionOptions();
sessionOptions.Protocol = WinSCP.Protocol.Ftp;
sessionOptions.UserName = "myUsername";
sessionOptions.Password = "myPassword";
sessionOptions.PortNumber = 21;
sessionOptions.HostName = serverIPAddress;

WinSCP.Session session = new WinSCP.Session();
session.Open(sessionOptions);
WinSCP.TransferOptions transferOptions = new WinSCP.TransferOptions();
transferOptions.TransferMode = WinSCP.TransferMode.Binary;
WinSCP.TransferOperationResult transferResult;
transferResult = session.PutFiles(PathToLocalFile + filename, destinationFilePath, false, transferOptions);
transferResult.Check();
It works fine and uploads the file to the server, but if a connectivity issue occurs while transferring the file, an incomplete chunk of the required file is left on the destination server.
I have searched the official WinSCP documentation but couldn't find anything related to this.
Is there any way to ensure that only complete files get transferred to the destination, and otherwise (in case an error occurs during transfer) the transferred chunk of the file gets deleted automatically, without manually deleting the incomplete file?
There's no way to make this automatic.
You have to code it yourself: check whether the transfer failed, reconnect (if needed), and delete the partially uploaded file.
Though, as already mentioned in the comments, if the transfer fails because of problems with the connection, you may not be able to reconnect to delete the file.
There's no magic solution. The server should be able to deal with partial files in the first place.
See also:
How to detect that a file is being uploaded over FTP (while seemingly a different topic, detecting whether a file is being uploaded is basically the same thing as detecting whether a file has not been uploaded completely)
File upload with WinSCP .NET/COM with temporary filenames
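The check-and-delete approach described above could be sketched like this, assuming `session`, `sessionOptions`, `transferOptions`, and the path variables are set up as in the question (the remote path concatenation is an assumption about how `destinationFilePath` is structured):

```csharp
WinSCP.TransferOperationResult transferResult =
    session.PutFiles(PathToLocalFile + filename, destinationFilePath, false, transferOptions);

if (!transferResult.IsSuccess)
{
    // The connection may have dropped; reopen it before cleaning up.
    if (!session.Opened)
    {
        session.Open(sessionOptions);
    }

    // Remove the partially uploaded remote file, if any of it arrived.
    string remotePath = destinationFilePath + filename;
    if (session.FileExists(remotePath))
    {
        session.RemoveFiles(remotePath).Check();
    }
}
```

As the answer notes, this still cannot help when the connection cannot be re-established at all.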
I've taken on an ASP/C# web app to fix, originally written by the previous developer at my workplace. The code shows a GridView populated by results from a query listing files; one column is made up of 'command fields' that, when clicked, download a file. Everything seems to go smoothly until it reaches the file download, as it can't seem to find the file on the server. My C# really isn't strong, so bear with me, and if you need further info that I've missed, please do say so.
Here is the specific part of code that causes problems:
//strSuppDocName - is already declared elsewhere
string path = System.IO.Path.Combine(Server.MapPath("~/Documents/"), strSuppDocName);
if (!Directory.Exists(path))
{
    System.Windows.Forms.MessageBox.Show(path + " - file path doesn't exist");
}
else
{
    System.Net.WebClient client = new System.Net.WebClient();
    Byte[] buffer = client.DownloadData(path);
    if (buffer != null)
    {
        Response.ClearContent();
        Response.ClearHeaders();
        FileInfo file = new FileInfo(path);
        Response.Clear();
        Response.AddHeader("Content-Disposition", "Attachment;FileName:" + file.Name);
        Response.AddHeader("Content-Length", file.Length.ToString());
        Response.ContentType = ReturnExtension(strExtSuppDoc.ToLower());
        Response.WriteFile(file.FullName);
        Response.End();
    }
}
What happens when I run the code is that the grid view populates okay, I click the file to download and it enters the first branch of the if statement showing the path. Before I added in the if statement it was showing the following error: "could not find a part of the path". I've tried fiddling with the path such as setting it absolutely:
string path = System.IO.Path.Combine(@"E:\web\Attestation\Documents\", strSuppDocName);
And without using the Combine method above and using standard string concatenation with '+'. Any help or guidance is most appreciated, thanks!
You're mixing a handful of technologies here. First of all, this doesn't belong in a web application:
System.Windows.Forms.MessageBox.Show(path + " - file path doesn't exist");
Web applications aren't Windows Forms applications. This won't display anything to someone using the web application, because there's no concept of a "message box" over HTTP.
More to the point, however, you're using path in two very different ways. Here:
Byte[] buffer = client.DownloadData(path);
and here:
FileInfo file = new FileInfo(path);
Is path a URL on the network or a file on the file system? It can't be both. The first line is treating it as a URL, trying to download it from a web server. The second line is treating it as a local file, trying to read it from the file system.
What is path and how are you looking to access it? If it's a URL, download it with the WebClient and stream it to the user. If it's a file, read it from the file system and stream it to the user. You can't do both at the same time.
If you are interacting with a path on a network (aka UNC path), you have to use Server.MapPath to turn a UNC path or virtual path into a physical path that .NET can understand. So anytime you're opening files, creating, updating and deleting files, opening directories and deleting directories on a network path, use Server.MapPath.
Example:
System.IO.Directory.CreateDirectory(Server.MapPath("\\server\path"));
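If `path` turns out to be a local file-system path (the usual case after `Server.MapPath`), the download branch could be reduced to a sketch like this, dropping the `WebClient` entirely. It reuses `strSuppDocName`, `strExtSuppDoc`, and `ReturnExtension` from the question, and is an illustration of the file-system route, not the asker's original code:

```csharp
string physicalPath = System.IO.Path.Combine(Server.MapPath("~/Documents/"), strSuppDocName);

if (System.IO.File.Exists(physicalPath))   // File.Exists, not Directory.Exists
{
    var file = new System.IO.FileInfo(physicalPath);
    Response.Clear();
    Response.AddHeader("Content-Disposition", "attachment; filename=" + file.Name);
    Response.AddHeader("Content-Length", file.Length.ToString());
    Response.ContentType = ReturnExtension(strExtSuppDoc.ToLower());
    Response.TransmitFile(file.FullName);   // streams from disk, no byte[] in memory
    Response.End();
}
```

`TransmitFile` avoids buffering the whole file in memory, which also matters for large documents.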
The answer, in short, is that the file name was incorrect.
Strangely, or mistakenly, the author of the code added an extra extension when uploading a given file, so a file that started off as 'image.png' would become 'image.png.png' once uploaded. Why didn't I notice this before, you may ask? Because the full file name wasn't shown in Windows XP (don't ask why I was using XP) when viewing it in an Explorer window, and I had dismissed the issue long before - a big mistake! After typing the address of the file into the Windows Explorer address bar and receiving an error that the file didn't exist, even though I could plainly see it did, a colleague looked at the file remotely using Windows 7 and we saw that it was shown as 'image.png.png'. With the extra extension included, the path to the file worked correctly.
In my Windows application I am using WebClient DownloadFile method to download several PDF files from a server on local network.
Each file is a report that gets generated when its URL is called. Reports are of different sizes and take different periods of time to get generated.
My code loops through a list of URLs (about 400), and for each URL it calls DownloadFile method, and the corresponding report is generated and downloaded to local machine. URLs are definitely correct.
The problem is that almost every time the application runs, some of the downloaded files are damaged: only about 7 KB is downloaded (I think it's just metadata), and Acrobat Reader gives me a message when I try to open the file:
“…it’s either not a supported file type or because the file has been damaged…”
It’s not always the same files that get damaged, and when I re-run the application, those files often succeed, and some others might fail… it seems to be random and I can’t find out the criteria.
Note 1: I don't want a file to start downloading until the previous one has completely downloaded; that's why I am not using the asynchronous method.
Note 2: All files are Oracle reports and get generated by querying a database.
Note 3: No EXCEPTION is thrown in case of damaged files.
Here is my code :
using (WebClient client = new WebClient())
{
    for (int i = 0; i < URL_List.Length; i++)
    {
        try
        {
            client.DownloadFile(URL_List[i], myLocalPath + fileName + ".pdf");
        }
        catch (Exception x)
        {
            // write exception message to error log...
        }
    }
}
Thanks in advance.
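Given the symptom described (a ~7 KB file that isn't a valid PDF, and no exception thrown), one hedged workaround is to validate each downloaded file and retry the ones that fail. The "%PDF" header check and the retry count below are assumptions for illustration, not part of the original code:

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

static bool LooksLikePdf(string filePath)
{
    // A valid PDF starts with the bytes "%PDF"; a server error page does not.
    byte[] header = new byte[4];
    using (FileStream fs = File.OpenRead(filePath))
    {
        if (fs.Read(header, 0, 4) < 4) return false;
    }
    return Encoding.ASCII.GetString(header) == "%PDF";
}

// Hypothetical usage inside the download loop:
// retry each URL up to 3 times until the saved file validates.
static void DownloadWithRetry(WebClient client, string url, string localPath)
{
    for (int attempt = 0; attempt < 3; attempt++)
    {
        client.DownloadFile(url, localPath);
        if (LooksLikePdf(localPath)) return;
    }
    throw new InvalidDataException("Download kept failing validation: " + url);
}
```

Since the report is generated on demand, a short delay between retries may also give the server time to finish generating it.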