Web API 2 RESTful Image Upload - C#

I'm new to Web API and I'm working on my first project: a mobile CRM system for our company.
I want to store company logos, customers' face photos, etc.
I found some tutorials on this topic, but unfortunately some of them were old (they don't use async) and the others don't work.
At the end I found this one:
http://www.intstrings.com/ramivemula/articles/file-upload-using-multipartformdatastreamprovider-in-asp-net-webapi/
It works correctly, but I don't understand a few things.
1) Should I use App_Data (or any other folder like /Uploads) for storing these images, or rather store the images in the database?
2) Can I allow only supported image types like .jpg and .png and reject any other files?
3) How can I process the image in the upload method, e.g. resize it, reduce the file size, change the quality, etc.?
Thank you

1) We are storing files in a different location than App_Data. We have a few customer groups and we gave them all a unique folder that we get from the database. Storing in the database is also an option, but if you go down this road, make sure that the files you are saving don't belong directly to a table that you need to retrieve often. There is no right or wrong, but have a read of this question and answer for some pros and cons.
2) If you followed that guide, you can put a check inside the loop to test the file extension:
List<string> allowedExtensions = new List<string> { ".jpg", ".png" };
foreach (MultipartFileData file in provider.FileData)
{
    string fileName = Path.GetFileName(file.LocalFileName);
    if (!allowedExtensions.Contains(Path.GetExtension(fileName).ToLowerInvariant()))
        throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
    files.Add(Path.GetFileName(file.LocalFileName));
}
3) Resizing images is something that I have never personally done myself, but I think you should have a look at the System.Drawing.Graphics namespace.
Here's a link with an accepted answer for downscaling a picture: ASP.Net MVC Image Upload Resizing by downscaling or padding
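A rough sketch of such a downscale with System.Drawing, assuming it runs inside the upload loop above (the 200-pixel target width and the _thumb.jpg output name are just examples):
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;

// Scale the uploaded image down to a fixed width, keeping the aspect ratio.
using (var original = Image.FromFile(file.LocalFileName))
{
    int targetWidth = 200;
    int targetHeight = original.Height * targetWidth / original.Width;

    using (var resized = new Bitmap(targetWidth, targetHeight))
    using (var graphics = Graphics.FromImage(resized))
    {
        graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
        graphics.DrawImage(original, 0, 0, targetWidth, targetHeight);
        resized.Save(file.LocalFileName + "_thumb.jpg", ImageFormat.Jpeg);
    }
}
Saving at a smaller resolution as JPEG also takes care of reducing the file size.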

None of the questions are actually related to Web API or REST.
If you are using SQL Server 2008 or newer, the answer is: use FILESTREAM columns. This looks like a column in the database with all of its advantages (e.g. backup, replication, transactions), but the data is actually stored in the file system. So you get the best of both worlds: it cannot happen that someone accidentally deletes a file so the database references a nonexistent file, or vice versa, that records are deleted from the database but the files are not, leaving you with a bunch of orphan files. Using a database has other advantages too, e.g. metadata can be associated with the files and permissions are easier to set up.
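A minimal sketch of reading such a column from C# with SqlFileStream; the Logos table, LogoData column and LogoId key are hypothetical names used only for illustration:
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.IO;

// Hypothetical table "Logos" with a FILESTREAM column "LogoData" and key "LogoId".
static byte[] ReadLogo(string connectionString, int logoId)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (SqlTransaction transaction = connection.BeginTransaction())
        {
            string path;
            byte[] transactionContext;

            using (var command = new SqlCommand(
                "SELECT LogoData.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
                "FROM Logos WHERE LogoId = @id", connection, transaction))
            {
                command.Parameters.AddWithValue("@id", logoId);
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    reader.Read();
                    path = reader.GetString(0);
                    transactionContext = (byte[])reader[1];
                }
            }

            // The bytes are streamed straight from the file system, not through the database protocol.
            using (var fileStream = new SqlFileStream(path, transactionContext, FileAccess.Read))
            using (var memory = new MemoryStream())
            {
                fileStream.CopyTo(memory);
                transaction.Commit();
                return memory.ToArray();
            }
        }
    }
}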
That depends on how the files are uploaded. For example, if you are using multipart forms, examine the content type of each part before the part is saved. You can even create your own MultipartStreamProvider class. Being an API, maybe the upload method has a stream or byte array parameter plus a content type parameter; in that case just test the value of the content type parameter before the content is saved. For other upload methods, do something similar depending on what the input is.
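For the multipart case, a minimal sketch (assuming the MultipartFormDataStreamProvider setup from the guide linked in the question; the allowed list is just an example):
using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;

// Runs after ReadAsMultipartAsync(provider) has completed.
string[] allowedTypes = { "image/jpeg", "image/png" };

foreach (MultipartFileData file in provider.FileData)
{
    string mediaType = file.Headers.ContentType == null
        ? null
        : file.Headers.ContentType.MediaType;

    if (!allowedTypes.Contains(mediaType, StringComparer.OrdinalIgnoreCase))
        throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
}
Note that the client controls the Content-Type header, so treat this as a sanity check rather than real validation of the file contents.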
You can use .NET's built-in classes (e.g. Bitmap: SetResolution, RotateFlip; to resize, use a constructor that accepts a size), or, if you are not familiar with image processing, choose an image processing library instead.
All of the above works in ASP.NET, MVC, Web API 1 and 2, custom HTTP handlers, and basically any other .NET code.

#Binke
Never use string operations on paths. For example, fileName.Split('.')[1] will not return the extension if the file name looks like some.file.txt, and it will fail with an index-out-of-range error if the file has no extension.
Always use the file APIs, i.e. Path.GetExtension.
Also, using the extension to get the content type is not safe, especially when pictures and videos are involved; just think of the avi extension, which is used by many video formats.
files.Add(Path.GetFileName(file.LocalFileName)) should be files.Add(fileName).
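A small illustration of the difference (hypothetical file names):
string fileName = "some.file.txt";

// String splitting picks the wrong segment:
string wrong = fileName.Split('.')[1];          // "file"

// Path.GetExtension handles multiple dots and missing extensions:
string right = Path.GetExtension(fileName);     // ".txt"
string none = Path.GetExtension("README");      // "" (empty string, no exception)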

Related

Best place to store saved messages

I am currently creating a console application that only accepts some commands defined by me. The thing is, I'm storing a lot of error and notification messages in a static class I created called Messages, and then just calling them like Messages.ErrorMessage.... ErrorMessage is just a static string that contains whatever I want printed on the console.
What I wanted to ask is if that's a good way of implementing such behavior, or should I instead change where I'm keeping all of these messages?
Thanks for your help!
I guess for your needs you can use a resource file instead of a static class.
As documented on the official site:
Visual C# applications often include data that is not source code.
Such data is referred to as a project resource and it can include
binary data, text files, audio or video files, string tables, icons,
images, XML files, or any other type of data that your application
requires. Project resource data is stored in XML format in the .resx
file (named Resources.resx by default) which can be opened in Solution
Explorer.
For more information:
http://msdn.microsoft.com/en-us/library/7k989cfy(v=VS.90).aspx
As suggested by others, storing them in resource files is the recommended way to deal with such resources, particularly if you'll need to deal with internationalization. Then you can load a different resource file from a satellite assembly at run time with the correct information.
You will notice a slight performance hit as the strings will be looked up in a resource dictionary, but usually that is negligible.
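A minimal sketch of both styles of access, assuming a resource file named Messages.resx with a string entry called ErrorMessage (all names here are hypothetical):
using System;
using System.Globalization;
using System.Resources;
using System.Threading;

// Strongly typed access through the designer-generated class:
Console.WriteLine(Messages.ErrorMessage);

// Or look the string up explicitly; satellite assemblies (e.g. Messages.fr.resx)
// are picked up automatically based on the current UI culture:
var manager = new ResourceManager("MyApp.Messages", typeof(Program).Assembly);
Thread.CurrentThread.CurrentUICulture = new CultureInfo("fr-FR");
Console.WriteLine(manager.GetString("ErrorMessage"));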

Save any file type copied into the Clipboard

I'm working with the code from this Stack article (specifically the second answer) to monitor when the clipboard changes. The end goal for this application is that the user can copy any file type (whether it's a .xlsx, .pk3, .sln, etc), folder, an image or a string and have it automatically saved to their temp directory. I would set file size limits though so that the temp folder doesn't get overloaded. The overall concept of the application is to provide convenience, so users can recover data that was copied but then deleted or lost.
So far, the above referenced code is working great for strings and images. However, after reviewing the items in the DataFormats list (i.e. usage: DataFormats.Bitmap), I can't find a catch-all for any file type, or for folders. I also can't find any way to determine what type of file was copied. Is there any way to determine that? For example, if there was a way to get the file extension of the file copied, that would help.
Maybe my hopes are set too high. Even if I kept an array of allowed file types (.xlsx, .sln, etc) there would be no way I can think of to save that type of file. It seems I can't get bytes from a DataObject type, which would be the easy way out.
Any ideas on how I could accomplish this? Thanks.
The reason you can't find a catch-all is that every format is registered on the host machine by the applications using that file format.
You can get the list of formats for the object currently held in the clipboard by using:
string[] formats = iData.GetFormats();
But couldn't you just serialize (make an exact copy of) whatever data comes in and save it as the last entry in Clipboard.GetFileDropList()?
It seems like all files that aren't audio/images/strings have a very specific set of formats.
Well anyway, just my thoughts.
Maybe look here:
http://www.codeproject.com/Articles/15333/Clipboard-backup-in-C
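A rough sketch of the file-drop path, assuming a WinForms context; the 10 MB cap and the temp-folder destination are arbitrary choices for illustration:
using System.IO;
using System.Windows.Forms;

const long MaxBytes = 10 * 1024 * 1024;

if (Clipboard.ContainsFileDropList())
{
    foreach (string path in Clipboard.GetFileDropList())
    {
        var info = new FileInfo(path);
        if (!info.Exists || info.Length > MaxBytes)
            continue;

        // The original extension is part of the copied file's path.
        string destination = Path.Combine(Path.GetTempPath(), info.Name);
        File.Copy(path, destination, true);
    }
}
This only covers files copied in Explorer; images and strings still go through DataFormats.Bitmap and DataFormats.Text as in the code you already have.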

Loading lots of Azure blob data in a WPF app

I've been given a task to build a prototype for an app. I don't have any code yet, as the solution concepts that I've come up with seem stinky at best...
The problem:
the solution consists of various Azure projects which do stuff to lots of data stored in Azure SQL databases. Almost every action that happens creates a gzipped log file in blob storage, so that's one .gz file per log entry.
We should also have a small desktop (WPF) app that should be able to read, filter and sort these log files.
I have absolutely 0 influence on how the logging is done, so this is something that cannot be changed to solve this problem.
Possible solutions that I've come up with (conceptually):
1:
connect to the blob storage
open the container
read/download blobs (with applied filter)
decompress the .gz files
read and display
The problem with this is that, depending on the filter, this could mean a whole lot of data to download (which is slow), and process (which will also not be very snappy). I really can't see this as a usable application.
2:
create a web role which will run a WCF or REST service
the service will take the filter params and other stuff and return a single xml/json file with the data; the processing will be done in the cloud
With this approach, will I run into problems with decompressing these files if there are a lot of them (will it take up extra space on the storage/compute instance where the service is running)?
EDIT: what I mean by filter is limit the results by date and severity (info, warning, error). The .gz files are saved in a structure that makes this quite easy, and I will not be filtering by looking into the files themselves.
3:
some other elegant and simple solution that I don't know of
I'd also need some way of making the app update the displayed logs in real time, which I suppose would need to be done with repeated requests to the blob storage/service.
This is not one of those "give me code" questions. I am looking for advice on best practices, or similar solutions that worked for similar problems. I also know this could be one of those "no one right answer" questions, as people have different approaches to problems, but I have some time to build a prototype, so I will be trying out different things, and I will select the right answer, which will be the one that showed a solution that worked, or the one that steered me in the right direction, even if it does take some time before I actually build something and test it out.
As I understand it, you have a set of log files in Azure Blob storage that are formatted in a particular way (gzip) and you want to display them.
How big are these files? Are you displaying every single piece of information in the log file?
Assuming that since these are log files, they are static and historical, meaning that once the log/gzip file is created it cannot be changed (you are not updating the gzip file once it is out on Blob storage). Only new files can be created...
One Solution
Why not create a worker role/job process that periodically goes out, scans the blob storage and builds a persisted "database" that you can display from? The nice thing about this is that you are not putting the unzipping/business logic for extracting the log files into the WPF app or UI.
1) I would have the worker role scan the log files in Azure Blob storage
2) Have some kind of mechanism to track which ones were processed and a current "state", maybe the UTC date of the last gzip file
3) Do all of the unzipping/extracting of the log files in the worker role (see the sketch at the end of this answer)
4) Have the worker role place the content in a SQL database, Azure Table Storage or a distributed cache for access
5) Access can be done by a REST service (ASP.NET Web API/Node.js etc.)
You can add more things if you need to scale this out, for example run this as a job to re-do all of the log files from a given time (refresh all). I don't know the size of your data, so I am not sure if that is feasible.
The nice thing about this is that if you need to scale your job (overnight), you can spin up 2, 3, 6 worker roles, extract the content, and pass the result to a Service Bus or Storage Queue that would insert into SQL, cache etc. for access.
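A rough sketch of steps 1-3, assuming the classic Microsoft.WindowsAzure.Storage client and a container named "logs" (both assumptions; adjust to your own naming and SDK version):
using System.IO;
using System.IO.Compression;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobContainer container = account.CreateCloudBlobClient().GetContainerReference("logs");

// Flat listing of every blob; in real code you would filter by a prefix
// or by the last processed timestamp tracked in step 2.
foreach (IListBlobItem item in container.ListBlobs(null, true))
{
    var blob = item as CloudBlockBlob;
    if (blob == null)
        continue;

    using (Stream blobStream = blob.OpenRead())
    using (var gzip = new GZipStream(blobStream, CompressionMode.Decompress))
    using (var reader = new StreamReader(gzip))
    {
        string logEntry = reader.ReadToEnd();
        // Parse the entry and push it to SQL / Table Storage / cache here (step 4).
    }
}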
Simply storing the blobs isn't sufficient. The metadata you want to filter on should be stored somewhere else where it's easy to filter and retrieve all the metadata. So I think you should split this into 2 problems:
A. How do I efficiently list all "gzips" with their metadata, and how can I apply a filter on these gzips in order to show them in my client application?
Solutions
Blobs: Listing blobs is slow and filtering is not possible (you could group in a container per month or week or user or ... but that's not filtering).
Table Storage: Very fast, but searching is slow (only PK and RK are indexed)
SQL Azure: You could create a table with a list of "gzips" together with some other metadata (like user that created the gzip, when, total size, ...). Using a stored procedure with a few good indexes you can make search very fast, but SQL Azure isn't the most scalable solution
Lucene.NET: There's an AzureDirectory for Windows Azure which makes it possible to use Lucene.NET in your application. This is a super fast search engine that allows you to index your 'documents' (metadata) and this would be perfect to filter and return a list of "gzips"
Update: Since you only filter on date and severity you should review the Blob and Table options:
Blobs: You can create a container per date+severity (20121107-low, 20121107-medium, 20121107-high ...). Assuming you don't have too many blobs per date+severity, you can simply list the blobs directly from the container. The only issue you might have here is that a user will want to see all items with a high severity from the last week (7 days). This means you'll need to list the blobs in 7 containers.
Tables: Even though you say table storage or a db aren't an option, do consider table storage. Using partitions and row keys you can easily filter in a very scalable way (you can also use CompareTo to get a range of items, for example all records between 1 and 7 November). Duplicating data is perfectly acceptable in Table Storage. You could include some data from the gzip in the Table Storage entity in order to show it in your WPF application (the most essential information you want to show after filtering). This means you'll only need to process the blob when the user opens/double-clicks the record in the WPF application. A sketch of this partitioning scheme follows below.
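For illustration, a small sketch of such a scheme using the Microsoft.WindowsAzure.Storage.Table client; the entity shape, property names and severity values are all made up:
using System;
using Microsoft.WindowsAzure.Storage.Table;

// PartitionKey = severity, RowKey = sortable timestamp plus a unique suffix.
public class LogEntryEntity : TableEntity
{
    public LogEntryEntity() { }

    public LogEntryEntity(string severity, DateTime loggedAtUtc, string blobName)
    {
        PartitionKey = severity;                                  // "high", "medium", "low"
        RowKey = loggedAtUtc.ToString("yyyyMMddHHmmssfff") + "_" + Guid.NewGuid().ToString("N");
        BlobName = blobName;                                      // points back to the gzip blob
    }

    public string BlobName { get; set; }
    public string Summary { get; set; }                           // essential info shown in the WPF list
}

// All "high" entries between 1 and 7 November:
var query = new TableQuery<LogEntryEntity>().Where(
    TableQuery.CombineFilters(
        TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "high"),
        TableOperators.And,
        TableQuery.CombineFilters(
            TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThanOrEqual, "20121101"),
            TableOperators.And,
            TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.LessThan, "20121108"))));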
B. How do I display a "gzip" in my application (after double-clicking a search result, for example)?
Solutions
Connect to the storage account from the WPF application, download the file, unzip it and display it. This means that you'll need to store the storage account in the WPF application (or use SAS or a container policy), and if you decide to change something in the backend of how files are stored, you'll also need to change the WPF application.
Connect to a Web Role. This Web Role gets the blob from blob storage, unzips it and sends it over the wire (or send it compressed in order to speed up the transfer). In case something changes in how you store files, you only need to update the Web Role

Should I save uploaded (img) files to App_Data?

I am using ASP.NET MVC and have a section where a user can upload images. I am wondering where I should store them.
I was following this tutorial and he seems to store them in App_Data. However, I read another person say it should only hold your database.
So not sure what the advantages are for using app_data. I am on a shared hosting so I don't know if that makes a difference.
Edit
I am planning to store the path to the images in the database. I will then be using them in an image tag and rendering them to the user when they come to my site. I have a file uploader that will only expect images (the check will be both client- and server-side).
The tutorial is a simple example - and if you read the comments, the original code just saved to an uploads directory, no app_data in sight.
It was changed to app_data because that's a special folder - one that will not allow execution of code.
And you have understood correctly - app_data is really there for holding file based databases. That's the meaning of the folder. As such saving images into it doesn't feel like the right thing to do.
If you are certain only images will get uploaded (and you control that), I would suggest an /uploads directory - a reserved location for images that also will not allow code execution (something that you should be able to control via IIS).
I would say that depends on what you will do with those images later. If you use those images in an img tag, you could save them somewhere in the Content/ folder structure.
If you do not need them reachable from the outside, or need to stream them back in a modified form, you might store them outside the web root if the hoster allows for that.
I wouldn't store them in App_Data, as I personally think it's more of a convention to store a database there. Most developers not familiar with that product wouldn't look there for the images.
But: you could store binaries in a db (even though it is probably not the best thing to do), so a reference in a db pointing to a file in the same directory makes sense again.
It's more an opinion thing than a technical question though, I think.
I prefer to store them in the database. When storing images on the file system I've found it can be a bit harder to manage them. With a database you can quickly rename files, delete them, copy them, etc. You can do the same when they're on the file system, but it takes some scripting knowledge.
Plus I prefer not to manage paths and file locations, which is another vote for the database. Those path values always make their way into the web.config and it can become more difficult to deploy and manage.

Download 3000+ Images Using C#?

I have a list of around 3000 image URLs that I need to download to my desktop.
I'm a web dev, so naturally I wrote a little ASP.NET C# download method to do this, but the obvious problem happened and the page timed out before I got hardly any of them.
Was wondering if anyone knew of a good, quick and robust way of looping through all the image URLs and downloading them to a folder? Open to any suggestions: WinForms, batch file, although I'm a novice at both.
Any help greatly appreciated
What about wget? It can download a list of URLs specified in a file.
wget -i c:\list-of-urls.txt
Write a C# command-line application (or Winforms, if that's your inclination), and use the WebClient class to retrieve the files.
Here are some tutorials:
C# WebClient Tutorial
Using WebClient to Download a File
or, just Google C# WebClient.
You'll either need to provide a list of files to download and loop through the list, issuing a request for each file and saving the result; or issue a request for the index page, parse it using something like HTML Agility Pack to find all of the image tags, and then issue a request for each image, saving the result somewhere on your local drive (a sketch of that approach follows after the edit below).
Edit
If you just want to do this once (as in, not as part of an application), mbeckish's answer makes the most sense.
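A rough sketch of the Agility Pack approach mentioned above; the gallery URL and the C:\images destination folder are placeholders:
using System;
using System.IO;
using System.Linq;
using System.Net;
using HtmlAgilityPack;

// Load the index page and pull out every img src (absolute URLs assumed for simplicity).
var doc = new HtmlWeb().Load("http://example.com/gallery");
var nodes = doc.DocumentNode.SelectNodes("//img[@src]");

using (var client = new WebClient())
{
    foreach (HtmlNode node in nodes ?? Enumerable.Empty<HtmlNode>())
    {
        string url = node.GetAttributeValue("src", "");
        if (string.IsNullOrEmpty(url))
            continue;

        string target = Path.Combine(@"C:\images", Path.GetFileName(new Uri(url).LocalPath));
        client.DownloadFile(url, target);
    }
}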
You might want to use an existing download manager like Orbit, rather than writing your own program for the purpose. (blasphemy, I know)
I've been pretty happy with Orbit. It lets you import a list of downloads from a text file. It'll manage the connections, downloading portions of each file in parallel with multiple connections, to increase the speed of each download. It'll take care of retrying if connections time out, etc. It seems like you'd have to go to a lot of effort to build these kind of features from scratch.
If this is just a one-time job, then one easy solution would be to write a HTML page with img tags pointing to the URLs.
Then browse it with FireFox and use an extension to save all of the images to a folder.
Working on the assumption that this is a one off run once project and as you are a novice with other technologies I would suggest the following:
Rather than try and download all 3000 images in one web request do one image per request. When the image download is complete redirect to the same page passing the URL of the next image to get as a query string parameter. Download that one and then repeat until all images are downloaded.
Not what I would call a "production" solution, but if my assumption is correct it is a solution that will have you up and running in no time.
Another fairly simple solution would be to create a simple C# console application that uses WebClient to download each of the images. The following pseudo code should give you enough to get going:
List<string> imageUrls = new List<string>();
// imageUrls.Add(...);  populate with your URLs from wherever

foreach (string imageUrl in imageUrls)
{
    using (WebClient client = new WebClient())
    {
        byte[] raw = client.DownloadData(imageUrl);
        // Write the raw bytes to a file named after the last segment of the URL
        // ("C:\images" is just an example destination folder).
        string fileName = Path.GetFileName(new Uri(imageUrl).LocalPath);
        File.WriteAllBytes(Path.Combine(@"C:\images", fileName), raw);
    }
}
I've written a similar app in WinForms that loops through URLs in an Excel spreadsheet and downloads the image files. I think the problem you're having with implementing this as a web application is that the server will only allow the process to run for a short amount of time before the request from your browser times out. You could either increase this time in the web.config file (change the executionTimeout attribute of the httpRuntime element), or implement this functionality as a WinForms application where the long execution time won't be a problem. If this is more than a throw-away application and you decide to go the WinForms route, you may want to add a progress bar to indicate how far along the downloads are.
