We are developing an e-commerce application and I have a bit of a problem.
Right now we have 2 MVC applications:
A main MVC application whose role is to manage the inventory and put items up for sale;
Another MVC application which will serve as the e-commerce site on which the items put up for sale by the main application are displayed.
My main problem is that these two share the same image library, and this library is huge (about 60,000 images and counting). Up to now, to keep things fast, each project has had a physical copy of every image ("~/Images/BankImages/FullImage/theFirstImage.jpeg", and so on), but you can guess that this is a pretty huge library that takes up a lot of room.
I'm looking for options on how I could develop something that would return an image in whatever format C# can handle. I was thinking about a web service, I suppose, whose task would be to return these images when called, but I don't know how to do that (newb here), and I think I may lose some speed because a call to the web service may not return the needed image immediately, and I may have to retrieve a few hundred of these images at the same time.
So I'm looking for suggestions. What would be the best way to solve my main problem and avoid (if possible) having to copy the whole image library every time?
Thanks a lot!
When you say:
My main problem is that these two share the same image library
this means you need a single asset repository, so you need a CDN: the same assets exposed to both applications through a common API.
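To make that concrete, here is a minimal sketch of what such a common image API could look like in ASP.NET MVC, assuming both applications point at one shared store instead of keeping their own copies. The controller name, route, and UNC path are illustrative assumptions, not part of your current setup:

```csharp
using System.IO;
using System.Web.Mvc;

public class ImagesController : Controller
{
    // Single shared store: a network share, blob storage or a CDN origin (assumed path).
    private const string ImageRoot = @"\\assets-server\BankImages\FullImage";

    // e.g. GET /Images/Full?fileName=theFirstImage.jpeg
    public ActionResult Full(string fileName)
    {
        // Strip any directory components so callers cannot walk out of the image root.
        var safeName = Path.GetFileName(fileName);
        var path = Path.Combine(ImageRoot, safeName);

        if (!System.IO.File.Exists(path))
            return HttpNotFound();

        // Stream the file back; put output caching or a CDN in front of this for speed.
        return File(path, "image/jpeg");
    }
}
```

Both MVC applications (and a CDN, once you add one) can then hit this single endpoint instead of each holding 60,000 physical copies.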
Related
I've created a website in ASP.NET MVC 4 and put it online with a specific domain name. Now my client asks me to replicate the same website on a different domain name, changing some static texts/images to distinguish the two websites. I'd like to maintain just one source code base and deploy it twice. How can I achieve this?
We did this a few years ago with a web application. It was a pain in the a**. We had one website running, and the resources were loaded after the user had logged in.
During development you always had to think about that: split the resources, always look up the logged-in user, and so on.
It is just easier to copy the published application to a second folder and, for the static texts, use some kind of resource files that can be replaced on the fly.
As long as you don't have images and files that are a few gigabytes big, it should be no problem to copy the compiled source code and the resources.
Though this is kind of a late reply, I just wanted to share some of my experience with you. You can follow these steps; it won't take too much of your time.
Identify the various texts/images (like the logo for branding, etc.) that need to be tenant-specific.
Create a tenant settings table (tenantid, key, value).
Identify the pages that need to be tweaked to look values up from this table rather than use a hardcoded value.
Update these pages and provide a UI for each tenant so that they can change the values at any point in time.
This way you can achieve level 4 multi-tenancy with minimal effort to begin with; a minimal lookup sketch follows.
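As a rough illustration of the lookup against that table (not production code), something like the following would do; the class, table, and column names are my assumptions:

```csharp
using System.Data;

public class TenantSettingsService
{
    private readonly IDbConnection _db; // any ADO.NET connection to your database

    public TenantSettingsService(IDbConnection db)
    {
        _db = db;
    }

    // Returns the tenant-specific value, or a default when the tenant has not overridden it.
    public string Get(int tenantId, string key, string defaultValue = null)
    {
        using (var cmd = _db.CreateCommand())
        {
            cmd.CommandText =
                "SELECT [Value] FROM TenantSettings WHERE TenantId = @tenantId AND [Key] = @key";

            var tenantParam = cmd.CreateParameter();
            tenantParam.ParameterName = "@tenantId";
            tenantParam.Value = tenantId;
            cmd.Parameters.Add(tenantParam);

            var keyParam = cmd.CreateParameter();
            keyParam.ParameterName = "@key";
            keyParam.Value = key;
            cmd.Parameters.Add(keyParam);

            return (cmd.ExecuteScalar() as string) ?? defaultValue;
        }
    }
}
```

A page would then ask for, say, Get(currentTenantId, "LogoUrl", "/Content/default-logo.png") instead of hardcoding the logo path.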
HTH
I am coming from an ASP.NET background, where if you want to display a photo gallery you have to have two files for each photo, i.e. the original and a separate thumbnail file.
If I were to create a Windows 8 app gallery that has, say, 100 photos per view, would it be okay performance-wise to simply change the display size of the photo, i.e. only have the one file? (These are loaded from the file system.)
I know it may depend on certain conditions, but generally what is the best way to do it?
It depends on the file size and where you get them from. If the files are on the file system you could use StorageFile.GetThumbnailAsync. Otherwise, if the files are large and you are getting them from somewhere else (a service), you could load them only as they scroll into view for the user. Make sure to dispose of objects when you are no longer using them, as bitmaps are notorious for eating up memory.
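For reference, a minimal sketch of the GetThumbnailAsync route (the requested size and helper name are just examples):

```csharp
using System.Threading.Tasks;
using Windows.Storage;
using Windows.Storage.FileProperties;
using Windows.UI.Xaml.Media.Imaging;

public static class ThumbnailHelper
{
    // Ask the system for a scaled thumbnail instead of decoding the full-size file.
    public static async Task<BitmapImage> LoadThumbnailAsync(StorageFile file)
    {
        using (StorageItemThumbnail thumb =
            await file.GetThumbnailAsync(ThumbnailMode.PicturesView, 200))
        {
            var image = new BitmapImage();
            await image.SetSourceAsync(thumb);
            return image; // bind this to the Image control in your gallery item
        }
    }
}
```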
100 images doesn't sound like a lot to me, but it's better to have numbers to back that statement up, as I have no idea how large the files are.
Here are some general guidelines for thumbnails from MSDN
I would try different ways to deal with it and use the performance tools to see what the end result is. Maybe you could group the images and have the user view one group at a time, maybe use placeholder images, or maybe the files aren't that big and it's no problem at all to simply resize depending on the view.
For lazy loading (recommended with many items), use data virtualization by implementing ISupportIncrementalLoading. You can find more information about that on MSDN.
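A bare-bones sketch of such a collection, assuming you supply a page-loader delegate yourself (the offset/page-size handling is illustrative, not from the original answer):

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Runtime.InteropServices.WindowsRuntime;
using System.Threading.Tasks;
using Windows.Foundation;
using Windows.UI.Xaml.Data;

public class IncrementalCollection<T> : ObservableCollection<T>, ISupportIncrementalLoading
{
    private readonly Func<int, int, Task<IList<T>>> _loadPage; // (offset, count) -> items; supplied by you
    private int _loadedCount;

    public IncrementalCollection(Func<int, int, Task<IList<T>>> loadPage)
    {
        _loadPage = loadPage;
        HasMoreItems = true;
    }

    public bool HasMoreItems { get; private set; }

    public IAsyncOperation<LoadMoreItemsResult> LoadMoreItemsAsync(uint count)
    {
        return AsyncInfo.Run(async _ =>
        {
            var page = await _loadPage(_loadedCount, (int)count);
            foreach (var item in page)
                Add(item);

            _loadedCount += page.Count;
            HasMoreItems = page.Count > 0; // stop asking once a page comes back empty
            return new LoadMoreItemsResult { Count = (uint)page.Count };
        });
    }
}
```

Set an instance of this as the ItemsSource of your GridView/ListView and the control will call LoadMoreItemsAsync as the user scrolls.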
I'm trying to create a WPF application, something like a movie library, because I would like to manage and sort my movies with a nice interface.
I'd like to build a library of all my movies, getting the information from the web, but I don't really know how.
I thought about getting the information from a website, for example IMDb, but I don't know if it's legal to capture HTML from a page to get the nested information.
Since it's my first desktop application, I would also like to know whether it is necessary to create a database within the project and then create a setup project with a specific script to deploy it.
Sorry for the confusion, but I'm asking about a lot of things at once :)
Thanks a lot in advance.
The legality of web scraping is a grey area. See my question, "Legality of Web Scraping vs Normal Use" and the corresponding answers for some insight.
Even if the legality is not a problem, web scraping is a flimsy approach because the webpage structure may change without notice, making your application suddenly useless until you update it to the new format. You are much better off using some sort of web API (if the site providing the information offers it).
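As a tiny illustration of the API route, here is a hedged sketch of fetching movie metadata over HTTP; the host, query parameters, and response handling are placeholders, so check the documentation of whichever service you pick for the real contract:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class MovieLookup
{
    private static readonly HttpClient Client = new HttpClient();

    // Hypothetical search endpoint returning JSON metadata for a title.
    public static async Task<string> SearchAsync(string title, string apiKey)
    {
        var url = "https://api.example-moviedb.org/search"
                  + "?query=" + Uri.EscapeDataString(title)
                  + "&api_key=" + apiKey;

        return await Client.GetStringAsync(url); // parse the JSON with a library of your choice
    }
}
```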
Whether you need a database or not depends entirely on what your application will be doing and how you design it - it's not something any of us can tell you.
Same goes for the setup project - in fact I wouldn't worry about that until you actually have a working application. Take it step by step and keep the scope within control.
Yes, I hadn't thought about an API.
It's a great idea; maybe I'll use "themoviedb".
But if I create an application based on it that has to show all the movies you have stored on your HDD and fetch, for example, the posters, the description and the rating, do you think I have to create a database?
Thanks a lot.
I have a system where users can upload full-resolution images of about 16 megapixels, which result in large files.
The current methodology is:
Receive the upload in an HTTP request.
Within the request, write the original file to blob store
Still within the request, make about 10 copies of the file at various resolutions. (These are thumbnails at different sizes, some for Hi-DPI (retina) devices, as well as a dimension for full-sized viewing.) I also convert the images to WebP.
I then transfer all the results to blob stores in different regions for private CDN purposes.
Clearly, the issue is that since this is all done within an HTTP request, it consumes vastly more server resources than any other typical HTTP request, especially when users start uploading images in bulk, several users at a time. If a user uploads a large image, memory consumption jumps dramatically (I am using ImageMagick.NET for image processing).
Is this architecture more suitable:
Receive the file upload, write to the blob, add a notification to a processing queue, return success to the user.
A separate worker server receives the notification of the new file and starts all the re-sizing, processing and replication.
I would just set the client-side JavaScript not to load the image previews for a few seconds, or have it retry if an image is not found (meaning that the image is still being processed but is likely to show up soon).
At least this new method will scale more easily and has more predictable performance. But it seems like a lot of work just to handle something as 'everyday' as photo uploading. Is there a better way?
I know the new method follows the same principle as using an external re-sizing service, but I want to do this in house since I am concerned about the privacy of some of these third-party services. It would still mean I would have to adapt the client to deal with missing/unprocessed images.
Yes, what you're describing is a better way. It sounds more complicated, but it is how the majority of scalable sites handle big loads: offload the work to a queue and let workers process it.
I'd add a correction in your case for step #2:
A separate worker server monitors a queue and starts all the re-sizing, processing and replication when a message appears instructing it to do so.
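To make the flow concrete, here is a rough sketch using an Azure Storage queue as the transport; the queue name, message format, and polling interval are my assumptions, not part of your setup:

```csharp
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class ImagePipeline
{
    // In the upload action: the original is already in the blob store, so just enqueue a pointer to it.
    public static void EnqueueForProcessing(string connectionString, string blobName)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference("uploaded-images");
        queue.CreateIfNotExists();
        queue.AddMessage(new CloudQueueMessage(blobName)); // the HTTP request returns right after this
    }

    // On the worker server: poll the queue and do the heavy ImageMagick work out of band.
    public static void WorkerLoop(CloudQueue queue)
    {
        while (true)
        {
            var message = queue.GetMessage();
            if (message == null)
            {
                Thread.Sleep(1000); // nothing to do yet
                continue;
            }

            var blobName = message.AsString;
            // Download the original blob, generate the ~10 resized/WebP variants,
            // and replicate them to the regional blob stores here.

            queue.DeleteMessage(message); // only delete after successful processing
        }
    }
}
```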
Another option would be to use the new Web Jobs feature. In fact, your scenario seems to be so common (in terms of image processing) that it's listed as one of the typical scenarios on MSDN:
Image processing or other CPU-intensive work. A common feature of web sites is the ability to upload images or videos. Often you want to manipulate the content after it's uploaded, but you don't want to make the user wait while you do that.
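With the WebJobs SDK, the polling loop sketched above disappears; the runtime invokes a method like this for every queue message (the queue and method names here are assumptions):

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // The WebJobs host calls this whenever a message lands on the "uploaded-images" queue.
    public static void ProcessUploadedImage(
        [QueueTrigger("uploaded-images")] string blobName,
        TextWriter log)
    {
        log.WriteLine("Resizing and replicating {0}", blobName);
        // Do the ImageMagick resizing, WebP conversion and replication here,
        // well away from the original HTTP request.
    }
}
```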
Whether it's better or not, I'll leave up to you to decide.
Basically, the functionality I need is:
easy integration with an ASP.NET application
user ability to crop the image with a handle prior to saving
image optimization from any image type to JPG (compression levels) during the save
saving images with the proper height/width ratios during the save
user ability to rotate the image prior to saving
ability to translate the application into a foreign language, as it won't be used on an international site
If you know of any application which fits my needs, even if it costs money, give me a tweet...
Our company has implemented a photo cropper in an ASP.Net MVC application using Atalasoft's DotImage. I did not implement this myself, but I currently maintain the whole of that application (and consequently the cropping component).
Based on the way you phrased your question, I feel that I should explicitly point out that Atalasoft's DotImage only provided the functionality to manipulate images. Other answers referenced ImageMagick and GDI+. In the same vein, these libraries also only provide the functionality to manipulate images.
We had to implement the UI and workflow ourselves. This was, while not rocket science, still far from trivial. While we used a pre-built component for fancy, AJAX-y file uploads (for the source photos), we still had to integrate that into the application and manage persistence of the files and the database records associated with them. (Similarly, as a convenience we allow importing photos from a URL, another feature we had to explicitly create.)
I would suggest that you will not find any general-purpose component to integrate that will give you cropping functionality and a web UI. I suppose there may be one made by an ASP.NET component vendor, but I am certainly not aware of any offhand.
The problem tends to involve lots of pieces that span from the client to the server, and consequently I think what you are looking for will involve a fair amount of specific-to-your-application development and integration.
You can try ImageMagick; it supports hundreds of image formats and comes with a .NET wrapper.
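For the "convert anything to JPG with a compression level" and "keep the ratio" points, a minimal sketch with Magick.NET (the .NET wrapper) might look like this; file names and the target size are examples:

```csharp
using ImageMagick;

public static class JpegConverter
{
    public static void SaveAsJpeg(string inputPath, string outputPath)
    {
        using (var image = new MagickImage(inputPath))   // reads PNG, GIF, BMP, TIFF, ...
        {
            image.Resize(1024, 1024);                    // fits within 1024x1024; aspect ratio kept by default
            image.Quality = 80;                          // JPEG compression level
            image.Format = MagickFormat.Jpg;
            image.Write(outputPath);
        }
    }
}
```

As the answer above points out, the cropping and rotation UI would still have to be built on top of a library like this.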
I have found exactly what I was looking for: iLoad.
It does exactly what I asked and doesn't cost that much compared to the other suggested solutions. I haven't tried it yet, but the demo is impressive.
Have a look at mcImageManager