I've created a website in ASP.NET MVC4 and put it online under a specific domain name. Now my client asks me to replicate the same website on a different domain name and to change some static texts/images to distinguish the two websites. I'd like to maintain just one source code base and deploy it twice. How can I achieve this?
We did this a few years ago with a web application. It was a pain in the a**. We had one website running, and the resources were loaded after the user had logged in.
During development you always had to think about that: split the resources, always look up the logged-in user, and so on.
It is just easier to copy the published application to a second folder and, for the static texts, use some kind of resource files that can be replaced on the fly.
As long as you don't have images and files that are several gigabytes big, it should be no problem to copy the compiled application and the resources.
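One simple way to handle the per-site texts and images is to keep them in each deployment's Web.config. This is only a sketch of that idea; the key and class names here are illustrative, not from the original setup.

using System.Configuration;

// Each deployed copy's Web.config carries its own values, e.g.:
//   <appSettings>
//     <add key="SiteDisplayName" value="Site A" />
//     <add key="SiteLogoPath" value="~/Content/siteA-logo.png" />
//   </appSettings>

// Reads the per-deployment branding values; the same compiled code
// renders differently on each domain.
public static class SiteBranding
{
    public static readonly string DisplayName =
        ConfigurationManager.AppSettings["SiteDisplayName"];

    public static readonly string LogoPath =
        ConfigurationManager.AppSettings["SiteLogoPath"];
}

In a Razor layout you could then write @SiteBranding.DisplayName and point the logo's img tag at @Url.Content(SiteBranding.LogoPath).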
This is kind of a late reply, but I just wanted to share some of my experience with you. You can follow these steps; it won't take too much of your time.
Identify the various texts/images (the logo for branding, etc.) that you need to make tenant-specific.
Create a table called TenantSettings (TenantId, Key, Value).
Identify the pages that need to be tweaked to look values up from this table rather than use a hardcoded value.
Update these pages and provide a UI for each tenant so that they can change the values at any point in time.
This way you can achieve level-4 multi-tenancy with minimal effort to begin with.
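A rough sketch of such a lookup, assuming the TenantSettings (TenantId, Key, Value) table above; the class and method names are illustrative.

using System.Collections.Generic;
using System.Data.SqlClient;

public class TenantSettings
{
    private readonly Dictionary<string, string> _values = new Dictionary<string, string>();

    // Loads every setting for one tenant in a single query
    public TenantSettings(string connectionString, int tenantId)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT [Key], [Value] FROM TenantSettings WHERE TenantId = @tenantId", conn))
        {
            cmd.Parameters.AddWithValue("@tenantId", tenantId);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    _values[reader.GetString(0)] = reader.GetString(1);
        }
    }

    // Falls back to a default so a missing key never breaks a page
    public string Get(string key, string defaultValue = "")
    {
        string value;
        return _values.TryGetValue(key, out value) ? value : defaultValue;
    }
}

A view can then render settings.Get("LogoUrl", "~/Images/default-logo.png") instead of a hardcoded path.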
HTH
Hi, I am trying to pull this string from courseweb.hopkinsschools.org and display it in my own ASP.NET application. I have been looking for a tutorial for a long time, but nothing works. Any help would be greatly appreciated.
Picture of the string needed: (screenshot)
When I started doing work with websites and interfacing with other websites, I originally wanted to do what you're talking about, reading the text from pages, because that's how we as people interface with computers and websites.
But that is not how computers should ever interface with other websites unless absolutely necessary.
Moodle has an API for things like course management. It's kind of difficult to find information on, but it's called Moodle Web Services, if I remember correctly. I'll add a link back if I can find it.
What these will do is let you access Moodle in a computer-friendly way, i.e. a way your computer can easily understand, instead of trying to read web pages.
Edit
Here are some resources to get you started:
https://docs.moodle.org/dev/Web_services
https://code.google.com/p/mnet-csharp/
https://delog.wordpress.com/2010/08/31/integrating-a-c-app-with-moodle-using-xml-rpc/
https://delog.wordpress.com/2010/09/08/integrating-c-app-with-moodle-2/
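To give an idea of what a web service call looks like, here is a rough sketch of hitting Moodle's REST endpoint, assuming web services are enabled on the site and an admin has issued you a token. core_course_get_courses is one of the standard Moodle web service functions; the token value is a placeholder.

using System;
using System.Net.Http;

class MoodleExample
{
    static void Main()
    {
        const string server = "https://courseweb.hopkinsschools.org";
        const string token = "YOUR_WS_TOKEN"; // placeholder - issued by the Moodle admin

        using (var http = new HttpClient())
        {
            string url = server + "/webservice/rest/server.php"
                + "?wstoken=" + token
                + "&wsfunction=core_course_get_courses"
                + "&moodlewsrestformat=json";

            // Returns JSON describing the courses; parse it with any JSON library
            string json = http.GetStringAsync(url).Result;
            Console.WriteLine(json);
        }
    }
}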
I'm trying to create a WPF application, something like a movie library, because I would like to manage and sort my movies with a pretty interface.
I'd like to build a library of all my movies, getting the information from the web, but I don't know exactly how.
I thought about getting the information from a website, for example IMDb, but I don't know if it's legal to capture HTML from a page to extract the nested information.
It's my first desktop application, and I would also like to know whether it's necessary to create a database within the project and then create a setup project with a specific script to deploy it.
Sorry for the confusion, but I'd like to know a lot of things :)
Thanks a lot in advance.
The legality of web scraping is a grey area. See my question, "Legality of Web Scraping vs Normal Use" and the corresponding answers for some insight.
Even if the legality is not a problem, web scraping is a flimsy approach because the webpage structure may change without notice, making your application suddenly useless until you update it to the new format. You are much better off using some sort of web API (if the site providing the information offers it).
Whether you need a database or not depends entirely on what your application will be doing and how you design it - it's not something any of us can tell you.
Same goes for the setup project - in fact, I wouldn't worry about that until you actually have a working application. Take it step by step and keep the scope under control.
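As a concrete example of the API route, here is a rough sketch of querying a public movie API; TMDb (themoviedb.org) is assumed here, and you would need to register for your own API key.

using System;
using System.Net.Http;

class MovieLookup
{
    static void Main()
    {
        const string apiKey = "YOUR_TMDB_API_KEY"; // placeholder
        string title = Uri.EscapeDataString("Blade Runner");

        using (var http = new HttpClient())
        {
            // The JSON response includes the overview, poster path, rating, etc.
            string json = http.GetStringAsync(
                "https://api.themoviedb.org/3/search/movie?api_key=" + apiKey
                + "&query=" + title).Result;
            Console.WriteLine(json);
        }
    }
}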
Yes, I did not think about an API.
It's a great idea; maybe I'll use "themoviedb".
But if I create an application based on it that has to show all the movies stored on your HDD and fetch, for example, the posters, the descriptions, and the rankings, do I have to create a database, in your opinion?
Thanks a lot.
We are developing an e-commerce application and I have a bit of a problem.
Right now we have 2 MVC applications:
A main MVC application whose role is to manage the inventory and set items for sale;
Another MVC application which will serve as the e-commerce site, on which the items set for sale by the main application will be displayed.
My main problem is that these two share the same image library, and this library is huge (about 60,000 images and counting). Up to now, to keep things fast, each project has had a physical copy of every image ("~/Images/BankImages/FullImage/theFirstImage.jpeg", and so on), but as you can guess, this library takes up a lot of room.
I'm looking for options on how I could develop something that returns an image in whatever C# format is convenient. I was thinking about a web service, I suppose, whose task would be to return these images upon request, but I don't know how to do it (newb here), and I worry I may lose some speed because a call to the web service may not return the needed image immediately, and I may have to retrieve a few hundred of these images at the same time.
So I'm looking for suggestions. What would be the best way to solve my main problem and avoid (if possible) having to copy the whole image library every time?
Thanks a lot!
When you say:
My main problem is that these two share the same image library
that means you need a single assets repository: a CDN, or at least the same assets exposed through a common API.
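If a full CDN is overkill, a minimal version of the "common API" idea is a single controller that both applications point at for images, reading from one shared store. This is only a sketch; the controller name and share path are illustrative.

using System.IO;
using System.Web.Mvc;

public class ImagesController : Controller
{
    // One shared store (network share, SAN, etc.) instead of a copy per project
    private const string ImageRoot = @"\\fileserver\BankImages\FullImage";

    public ActionResult Full(string name)
    {
        // Path.GetFileName strips directories to block path-traversal attempts
        string fileName = Path.GetFileName(name);
        string path = Path.Combine(ImageRoot, fileName);

        if (!System.IO.File.Exists(path))
            return HttpNotFound();

        return File(path, "image/jpeg");
    }
}

Both sites can then reference /Images/Full?name=theFirstImage.jpeg (or a prettier route), and only one physical copy of the library exists.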
I want users to be able to upload images for profile pictures.
Are there any guidelines as to how this is best handled?
e.g. - where to save the images, and what folder structure to use?
- how to make it difficult for users to browse through everyone's profile pics?
Thanks.
I don't mean to be a wet blanket if you're into writing this yourself, but I would just use http://en.gravatar.com
But to answer your questions directly:
Are there any guidelines as to how this is best handled? e.g. where to save the images and what folder structure to use? How to make it difficult for users to browse through everyone's profile pics?
Generally this is going to depend greatly on the setup of server environment. Do you have multiple web servers? Do you have a database server you want to use? Do you have an images only domain you want to use? etc.
The simplest approach is to write them to the file system and use code to retrieve them. By not writing these files into your web directory, you can be sure that users cannot use this to execute code or scripts on your server. Using an ASPX page to return the image content also allows you to relocate the image store at any time.
As for preventing browsing, I would just use a unique image identifier generated for each user. BTW, I would not use the user's internal "ID" field; rather, create a new ID just for images.
If it is only to display a single user's picture, I would recommend implementing Gravatar instead of your own approach. There are plenty of articles out there on how best to implement Gravatar with ASP.NET MVC.
If you really want your own solution, I'd recommend giving all of the users' profile pictures a random file name (for example a GUID, "3F2504E0-4F89-11D3-9A0C-0305E82C3301.jpg").
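A rough MVC sketch of that approach, combining the two suggestions above (store outside the web directory, name by GUID); the action name and folder are illustrative.

using System;
using System.IO;
using System.Web;
using System.Web.Mvc;

public class ProfileController : Controller
{
    [HttpPost]
    public ActionResult UploadPicture(HttpPostedFileBase picture)
    {
        if (picture == null || picture.ContentLength == 0)
            return RedirectToAction("Index");

        // Random GUID file name: unguessable, so pictures can't be enumerated
        string fileName = Guid.NewGuid().ToString("N") + Path.GetExtension(picture.FileName);

        // App_Data is never served directly by IIS, so nothing stored here can
        // be browsed or executed; images are returned through a controller action
        string storeRoot = Server.MapPath("~/App_Data/ProfilePictures");
        Directory.CreateDirectory(storeRoot);
        picture.SaveAs(Path.Combine(storeRoot, fileName));

        // Save fileName against the user record here (not shown)
        return RedirectToAction("Index");
    }
}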
The solution is for a project in which changing all instances of Session[string] is not an option. My thought has been to implement SessionStateStoreProviderBase. I understand that creating a Session wrapper class with properties like Session.UserName would be a good idea.
Edit: The goal here is to turn off sessions per user request, not application-wide, without changing code in each ASPX page.
First you need a way to tell a bot and a human apart.
Once you have that, consider what you want to achieve.
If you wish to disable sessions for bots, make sure doing so won't break your site. If a search engine bot gets a crashed page, it will index and rank it as such.
Set up your robots.txt file to direct (most) bots to a page of your choice, where you have control over session and other information. If you want to give them free access to all pages, you have to put in code that distinguishes bots by HTTP header information - that's a research project in itself.
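If per-request switching is acceptable, .NET 4 lets you turn session state off for a single request from Global.asax, before the session is acquired, which avoids touching the individual ASPX pages. A rough sketch, with a deliberately naive User-Agent check standing in for real bot detection:

using System;
using System.Web;
using System.Web.SessionState;

// In Global.asax.cs; ASP.NET wires Application_* handlers up by name.
// PostMapRequestHandler fires before AcquireRequestState, which is the
// last point at which the session behavior can still be changed.
public class Global : HttpApplication
{
    protected void Application_PostMapRequestHandler(object sender, EventArgs e)
    {
        string userAgent = Request.UserAgent ?? string.Empty;

        // Naive check - real bot detection is, as noted, a research project
        bool looksLikeBot =
            userAgent.IndexOf("bot", StringComparison.OrdinalIgnoreCase) >= 0 ||
            userAgent.IndexOf("crawler", StringComparison.OrdinalIgnoreCase) >= 0;

        if (looksLikeBot)
            Context.SetSessionStateBehavior(SessionStateBehavior.Disabled);
    }
}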