I am working on a project where I have a JSON file and am trying to pass that data through MVC to be displayed on a web page. As part of the JSON file, I have some data that I am trying to pass to my view; it looks like:
"htmlContent": "<p>It was over two decades ago that Bill Gates declared ‘Content is King’ and the consensus still stands today with it arguably being the most important part of designing a website. </p><p>Content is essentially your UX. It encompasses the images, words, videos and data featured across your website. If the main purpose of your site is to share valuable and relevant content to engage your audience, then you should be considering your content long before embarking on a web project. All too often, businesses miss the opportunity to create impressive UX designs, instead waiting until the later stages of the project to sign off content which inevitably creates new challenges to overcome. </p>\r\n<p>Having a research strategy in place that supports a content-first design approach should be at the top of your agenda. When businesses choose to design content-first, they are putting their valuable resources centre stage, conveying their brand through effective and engaging UX design. Throughout this blog, we will share our tips on how you can develop a content-first design approach. </p>\r\n<h2><strong>How to develop a content-first design approach </strong> </h2>\r\n<p>Content can no longer be an after-thought, but there’s no denying that generating content can be a tricky. To get you thinking along the right lines and help put pen to paper, follow our top tips: </p>\r\n<h3><strong>Ask lots of questions</strong> </h3>\r\n<p>Generating content that successfully satisfies what your customers want to know requires a lot of research. Get into the habit of asking open-ended questions that answer the Who, What, Where, When, Why and How. Using this approach will allow you to delve deep and gain an understanding of what your website should include to build a considered site map. </p>\r\n<h3><strong>Consider your Information Architecture (IA)</strong> </h3>\r\n<p>How your content is organised and divided across the website is a crucial aspect of UX design. Without effective sorting, most users would be completely lost when navigating a site and there’s no point having memorable features if they can’t be found! Use card sorting exercises, tree tests, user journey mapping and user flow diagrams to form an understanding of how best to display your content in a logical and accessible way. </p>\r\n<h3><strong>Conduct qualitative and quantitative research</strong> </h3>\r\n<p>Although Google Analytics is extremely useful, it doesn’t hold all the answers. Google Analytics is great at telling you <em>what</em> your users are doing, but it doesn’t give you the insight into <em>why</em> they’re doing it. Qualitative one-to-one user interviews is an effective method of really getting to grips with your user needs to understand why they do what they do. User testing also falls into this category. Seeing a user navigate through your website on a mobile phone in day to day life can give you great insight for UX design in terms of context and situation. </p>\r\n<h3><strong>Align your content strategy with long-term business goals</strong> </h3>\r\n<p>Before beginning your web project, it’s important to understand the goals of the project and the pain points you are trying to solve. Include all the necessary stakeholders within this research to gain a comprehensive understanding of these insights before embarking on your web design project. 
</p>\r\n<h3><strong>Content first, design second</strong> </h3>\r\n<p>Avoid designing content boxes across your website and trying to squeeze the content into these boxes. When designing a new website, it may seem counter intuitive to begin with a page of words rather than a design mock-up. But, it’s important to remember that Lorem Ipsum isn’t going to help anyone either. Begin with the content your users need and then design out from there. Capturing the content and its structure can be done in many ways; we like to build content models based on IA site maps and qualitative user testing such as card sorting and user journey mapping. </p>\r\n<p>By using a content-first design approach, you can understand what content needs to fit into your website design. Analysing your website’s content needs in the early stages or, even better, prior to the project beginning, can effectively inform and shape all touch points ultimately generating an optimised result with reduced time delays and constraints along the way. If you have a web project in mind and need help on how to get started, get in touch with the team today. </p>",
In the view I am then accessing this JSON data through a foreach loop, and I can access it like so:
#jsondata.htmlContent
This retrieves the 'htmlContent' from the JSON file, but when I open the HTML web page the 'htmlContent' is not rendered as I would expect: the '<p>' tags do not display as paragraphs on the web page; instead, the content on the page is exactly the same as the JSON string.
How would I go about rendering this data so that the tags are applied?
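For reference, Razor HTML-encodes string output by default, which produces exactly this symptom. Below is a minimal sketch of the kind of loop described, assuming a Razor view and a model exposing an htmlContent property (the model type and loop variable are assumptions):

    @model List<BlogPost>

    @foreach (var jsondata in Model)
    {
        @* Default Razor output is HTML-encoded, so the tags show as literal text *@
        @jsondata.htmlContent

        @* Html.Raw writes the string unencoded, so the browser renders the <p> tags *@
        @Html.Raw(jsondata.htmlContent)
    }

Note that Html.Raw should only be used on content you trust, since unencoded output is an XSS risk.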
I'm learning .NET MVC by making a web application.
I have a page with a list of items (for example, a list of properties). How should I generate the URLs for each of these items? Each URL would lead to a page with more detailed information about the item. Should I manually create a view for each item? There could be thousands of them. Any advice on tools or methods I could use? Thanks!
This is a pretty broad question. The identification of opportunities for generalization vs. specialization is one of the most important competencies for a software engineer. And there are a lot of variables.
Two principles you should be aware of:
Keep it Simple Stupid (KISS)
Don't Repeat Yourself (DRY)
N.B. In many ways these principles are in opposition to one another.
You could build a web site with one view for each product. This is pretty rare because you'd have to build a lot of views. On the other hand, if you only have a few products and you want to highlight their features using a custom layout (for example, visit Tesla.com), you might develop a View for each and every product. This does keep it simple, but you may end up repeating yourself a lot.
More commonly (e.g. Amazon.com) there will be one "Product Page" view with code that customizes the page (e.g. fills in the text and image areas) per product. This is a much more scalable solution, although when you do it that way you need to take the time to develop a rich and flexible data model, because you need a uniform way of representing the content on the page in a database so that the code that populates the View can be completely generic. This avoids repeating yourself, but it is less simple.
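As a rough sketch of that pattern in ASP.NET MVC (the controller, context, and model names here are assumptions, not a prescription), one action plus one view can cover thousands of products:

    // One route pattern (e.g. /Products/Details/42) serves every product;
    // the id in the URL selects which record fills the single Details view.
    public class ProductsController : Controller
    {
        private readonly AppDbContext db = new AppDbContext(); // assumed EF context

        public ActionResult Details(int id)
        {
            var product = db.Products.Find(id); // look the item up by its key
            if (product == null)
                return HttpNotFound();
            return View(product); // one Details.cshtml renders any product
        }
    }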
Then there are solutions that are in between. Maybe half of your products can be represented on a single type of page, but some of them need a different layout (e.g. need to include a size/color UX, or include spec sheets). So you may end up with a half dozen views, one for each type of product, that cover hundreds of products.
What about other pages? Like search, homepage, etc. Well, you will need a separate view for each page that is different enough to justify splitting it off from other views. For example, if your Search Results page and your Browse By Category page look very similar, you might implement them with a single View and write code to tailor some details about how they look (e.g. change the title). On the other hand, if they are pretty different it may be less work to implement them as separate Views. It is all about striking a balance between DRY and KISS.
There are a lot of judgment calls. If you don't feel comfortable making these judgments, I suggest you find another site that is similar to yours and see how they have broken down their views. It is usually pretty easy to tell by paying attention to the URL as you click through the UX.
I am in a bit of a crisis here. I would really appreciate your help on the matter.
My Final Year Project is a "Location Based Product Recommendation Service". Now, due to some communication gap, we got stuck with an extremely difficult algorithm. Here is how it went:
We had done some research about recommendation systems prior to the project defense. We knew there were two approaches, "Collaborative Filtering" and "Content Based Recommendation". We had planned on using whichever technique gave us the best results. So, in essence, we were more focused on the end product than the actual process. The HOD asked us what algorithms OUR product would use. But my group members thought that he meant the algorithms that are used for "Content Based Recommendations". They answered with "Rule Mining, Classification and Clustering". He was astonished that we planned on using all these algorithms for our project. He told us that he would accept our project proposal if we used his algorithm in our project. He gave us his research paper, without any other resources such as data, simulations, samples, etc. The algorithm is named "Context Based Positive and Negative Spatio-Temporal Association Rule Mining". In the paper, this algorithm was used to recommend sites for hydrocarbon taps and mining with extremely accurate results. Now here are a few issues I face:
I am not sure how or IF this algorithm fits in our project scenario
I cannot find spatio-temporal data, MarketBaskets, documentation or indeed any helpful resource
I tried asking the HOD for the data he used for the paper, as a reference. He was unable to provide the data to me
I tried coding the algorithm myself, in an incremental fashion, but found I was completely out of my depth. I divided the algorithm into three phases: Positive Spatio-Temporal Association Rule Mining, Negative Spatio-Temporal Association Rule Mining, and Context Based Adjustments. Alas, the code I write is not mature enough; I couldn't even generate frequent itemsets properly. I understand the theory quite well, but I am not able to translate it into efficient code.
When the algorithm has been coded, I need to develop a web service. We also need a client website to access the web service. But with the code not even 10% done, I really am panicking. The project submission is in a fortnight.
Our supervisor is an expert in Artificial Intelligence, but he cannot guide us on the algorithm's development. He stresses the importance of reuse and of utilizing open-source resources, but I am unable to find anything of actual use.
My group members are waiting on me to deliver the algorithm so they can deploy it as a web service. There are other adjustments that need to be done, but with the algorithm not available, there is nothing we can do.
I have found a data set of market baskets. It's a simple Excel file with about 9000 transactions. There is no spatial or temporal data in it, and I fear adding artificial data would compromise the integrity of the data set.
I would appreciate it if somebody could guide me. I guess the best approach would be to use an open-source API to partially implement the algorithm and then build the service and client application. We need to demonstrate something on the 17th of June. I am really looking forward to your help, guidance and constructive criticism. Some solutions that I have considered are:
Use "User Clustering" as a "Collaborate Filtering" technique. Then
recommend the products from similar users via an alternative "Rule
Mining" algorithm. I need all these algorithms to be openly available
either as source code or an API, if I have any chance of making this
project on time.
Drop the algorithm altogether and make a project that actually works as we intended, using available resources. I am 60% certain that we would fail or be marked extremely low.
Pay a software house to develop the algorithm for us and then over-fit it into our project. I am not inclined to do this, because it would be unethical.
As you can clearly see, my situation is quite dire. I really do need extensive help and guidance if I am to complete this project properly and on time. The project needs to be completely deployed and operational. I really am in a loop here.
"Collaborative Filtering", "Content Based Recommendation", "Rule Mining, Classification and Clustering"
None of these are algorithms. They are tasks, or families of tasks, for each of which several concrete algorithms exist (for example, Apriori for association rule mining, k-means for clustering, naive Bayes for classification).
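To make the distinction concrete: frequent-itemset generation, the step you got stuck on, is usually done with a specific algorithm such as Apriori, whose core is plain support counting. A minimal, illustrative sketch on toy data (not the spatio-temporal variant from the paper):

    // Minimal support-counting passes of Apriori: find single items and pairs
    // that appear in at least minSupport transactions. Illustrative only.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class FrequentItemsets
    {
        static void Main()
        {
            var transactions = new List<string[]>
            {
                new[] { "bread", "milk" },
                new[] { "bread", "diapers", "beer" },
                new[] { "milk", "diapers", "beer" },
                new[] { "bread", "milk", "diapers", "beer" },
            };
            int minSupport = 2;

            // Pass 1: frequent 1-itemsets.
            var frequentItems = transactions
                .SelectMany(t => t.Distinct())
                .GroupBy(i => i)
                .Where(g => g.Count() >= minSupport)
                .Select(g => g.Key)
                .ToList();

            // Pass 2: candidate pairs built only from frequent items
            // (the Apriori property: any superset of an infrequent set is infrequent).
            var pairCounts = new Dictionary<string, int>();
            foreach (var t in transactions)
            {
                var items = t.Distinct().Where(frequentItems.Contains).OrderBy(i => i).ToArray();
                for (int a = 0; a < items.Length; a++)
                    for (int b = a + 1; b < items.Length; b++)
                    {
                        var key = items[a] + "," + items[b];
                        pairCounts[key] = pairCounts.TryGetValue(key, out var c) ? c + 1 : 1;
                    }
            }

            foreach (var kv in pairCounts.Where(kv => kv.Value >= minSupport))
                Console.WriteLine($"{{{kv.Key}}} support={kv.Value}");
        }
    }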
I think you had a bad start already by not really knowing well enough what you proposed... but granted, the advice from your advisor was also not at all helpful.
I've been tasked with creating (or finding something that already works) a centralized server with an API that can return a PDF file given some data and the name of a template. It has to be a robust, enterprise-ready solution. The goal is as follows:
A series of templates for different company documents (invoices, orders, order plannings, etc.)
A way of returning a PDF from external software (websites, ERP, etc.)
It can be an already-built enterprise solution, but they are pressing for a custom one.
It can be in any language, but we don't have any dedicated Java programmers in-house. We are a PHP / .NET shop; some of us dabble in Java, but the learning curve could be a little steep.
So, I've been reading. One way we've thought it may be possible is installing a JasperReports server, creating the templates in Jaspersoft Studio, and then using the API to return the PDF files. A colleague favours this option because it's mostly done, but firstly, it's Java, and secondly, I think it's like using a hammer to crack a nut.
The other option we've been toying with is to use C# with iTextSharp to create a server, and to create our own API that returns exactly the PDF with the data we need. Doing this would have some benefits, like using the database connector we have already made and extracting most of the data from the database instead of having to pass around a big chunk of data. But as it is bare, iTextSharp doesn't really have a templating system; we'd have to create something with the XMLWorker or with C# classes, and it's not as "easy" as drag and drop. For this case I've been reading about XFA too, but the documentation on the iText site is misleading and unclear.
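For what it's worth, here's the kind of minimal sketch we're considering for the XMLWorker route (iTextSharp 5 plus the itextsharp.xmlworker package; how the HTML template gets its values filled in beforehand is left out, and there is no error handling):

    // Render an HTML string (a template with values already substituted)
    // to a PDF byte array using iTextSharp 5 + XMLWorker.
    using System.IO;
    using iTextSharp.text;
    using iTextSharp.text.pdf;
    using iTextSharp.tool.xml;

    public static class PdfRenderer
    {
        public static byte[] HtmlToPdf(string html)
        {
            using (var ms = new MemoryStream())
            {
                var doc = new Document(PageSize.A4);
                var writer = PdfWriter.GetInstance(doc, ms);
                doc.Open();
                using (var reader = new StringReader(html))
                {
                    // XMLWorker parses a subset of HTML/CSS into PDF content
                    XMLWorkerHelper.GetInstance().ParseXHtml(writer, doc, reader);
                }
                doc.Close();
                return ms.ToArray();
            }
        }
    }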
I've also been reading about some other alternatives, like PrinceXML, PDFBox, FOP, etc., but the concept is the same as with iText: we'd have to do it ourselves.
My vote, even if it's more work, is to go the iText route and use HTML / CSS for the templates, but my colleagues claim that the templates should be easy to change, possibly every other week (I doubt it), and that HTML / CSS would be too much work.
So the real question is, how do other business approach this? Did I leave anything out on my search? Is there an easier way to achieve this?
PS: I didn't know if SO would be the correct place for this question, but I'm mostly lost and risking a "too broad question" or "off topic" tag doesn't seem that bad.
EDIT:
Input should be sent with the same request. If we decide on the C# route, we can get ~70% of the data from the ERP directly, but in any case it should accept a POST request with some data (the template, and the data needed for that template, like invoice data, or the invoice ID if we have access to the ERP); a sketch of such a call appears after this list.
Output should be a PDF (not interested in other formats, just PDF).
Templates will be updated only by IT. (Mostly us, the development team).
Performance-wise, I don't know how much muscle we'll need, but right now, without any increase, we are looking at ~500/1000 PDFs daily, mostly printed from 10:00 to 10:30 and from 12:00 to 13:00, then maybe 100 more over the rest of the day.
TOP performance should not be more than ~10000 daily when the planets align and it is sales season (twice a year). That should be our ceiling for the years to come.
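To make that interface concrete, here's a minimal sketch of the kind of call we have in mind, from a C# client (the endpoint URL and JSON payload shape are hypothetical, not an existing API):

    // Hypothetical client call: POST a template name plus its data, get a PDF back.
    using System.IO;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class PdfClient
    {
        static async Task Main()
        {
            using (var http = new HttpClient())
            {
                // Endpoint and JSON shape are illustrative only
                var payload = "{ \"template\": \"invoice\", \"data\": { \"invoiceId\": 1234 } }";
                var response = await http.PostAsync(
                    "https://pdf.example.local/api/render",
                    new StringContent(payload, Encoding.UTF8, "application/json"));
                response.EnsureSuccessStatusCode();
                File.WriteAllBytes("invoice-1234.pdf",
                    await response.Content.ReadAsByteArrayAsync());
            }
        }
    }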
The templates have some requirements:
Have repeating blocks (invoice lines, for example).
Have images as background, as watermark and as blocks.
Have to be multi language (translatable, with the same data).
Have some blocks that are only shown on a condition.
Blocks dependent on the page (PDF header / page header / page footer / PDF footer)
Templates may have to do calculations over some of the data. I don't think we'll ever need this, but it's something the company may ask for in the future.
The PDFs don't need to be stored, as we have a document management system, maybe in the future we could link them.
Extra data: Right now we are using "Fast-Reports v2 VCL"
Your question shows you've been considering the problem in detail before asking for help, so I'm sure SO will be friendly.
Certainly one thing you haven't detailed much in your description is the broader functional requirements. You mentioned cracking a nut with a hammer, but I think you are focused mostly on the technology/interfacing. If you consider your broader requirements for the documents you need to create, and the variables involved, it might be a bigger nut than you think.
The approach I would suggest is to prototype solutions, assuming you have some room to do so. From your research, pick maybe the best three to try, which may well include the custom build you have in mind. Put them through some real use cases end to end: as rough as possible, but realistic. One or two key documents you need to output should be used across all solutions. Make sure you are covering the most important or most common requirements in terms of:
Input Format(s) - who can/should be updating templates. What is the ideal requirement and what is the minimum requirement?
Output Requirement(s) - who are you delivering to and what formats are essential/desirable
Data Requirement(s) - what are your sources of data and how hard/easy is it to get data from your sources to the reporting system in the format needed?
Template feature(s) - if you are using templates, what features do the templates need? This includes input format(s), but I was mostly thinking of features of the engine like repeating/conditional content, image insertion, table manipulation etc. In other words, are your invoices, orders and planning documents plain or complex?
API requirements - do you have any broader API requirements? You mentioned you use PHP, so a PHP library or web service is likely to be a good starting point.
Performance - you haven't mentioned any performance characteristics but certainly if you are working at scale (enterprise) it would be worth even rough-measuring the throughput.
iText and Jasper are certainly enterprise grade engines you can rely on. You may wish to look at Docmosis (please note I work for the company) and probably do some searches for PDF libraries that use templates.
A web service interface is possibly a key feature you might want to look at. A REST API is easy to call from PHP and virtually any technology stack. It means you will likely have options about how you can architect a solution, and it's typically easy to prototype against. If you decide to go down the prototyping path and try Docmosis, start with the cloud service since you can prototype/integrate very quickly.
I hope that helps.
From my years of experience working with PDF, I think you should pay attention to the following points:
Performance: API-based PDF generation is typically the fastest, compared with HTML-to-PDF or XML-to-PDF generation (which involve an additional conversion layer). Considering peaks in the load, you may want to calculate the cost of scaling up generation by adding more servers (and estimate the cost of the additional servers or resources required per additional PDF file per day).
Ease of iteration and change: how often will you need to adjust templates? If you are going to create templates just once (with some iterations) and then no changes are required, you should be fine coding them directly against the API. Otherwise, you should strongly consider using HTML or XML for templates, to simplify changes and reduce their complexity.
Search and indexing: if you need to search among the created documents, consider storing indexes of the generated documents, or storing the source data in XML alongside each generated PDF file.
Long-term preservation: you should conform to the PDF/A sub-format if you are looking for long-term digital preservation of your documents. See the VeraPDF open-source initiative, which you may use to validate generated and incoming PDF documents for conformance to PDF/A requirements.
Preserving source files: the PDF format itself was not designed to be edited (though some PDF editors exist), so consider preserving the source data, both to be able to regenerate PDF documents later and to allow introducing additional output formats later.
I am planning on creating free, open-source porn-blocker software that blocks porn websites as much as possible.
The idea is to create a list of websites like xxxporn.xxx or whatever, and once the user tries to visit one of those websites in any web browser, it just kills the request and the user goes nowhere.
I am good with programming, and my problem isn't with code; I just want to know where I should start.
I heard about packet sniffers, so how do I do it in C#? All I want is a demo method or a code sample that shows me the currently visited websites and kills the request when a predefined website is visited.
I wrote a web crawler and had to deal with filtering out porn on free crawls.
It looks for the following terms:
18 U.S.C 2257
18 U.S.C. 2257
section 2257 compliance
Most pornographic sites have these terms in their html source.
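A minimal sketch of that check in C# (fetching a page's source with HttpClient and scanning for the compliance strings; treat this as a heuristic signal, not a complete blocker):

    // Fetch a page and check its HTML source for 18 U.S.C. 2257 compliance
    // notices, which most pornographic sites include.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ContentFilter
    {
        static readonly string[] Markers =
        {
            "18 U.S.C 2257",
            "18 U.S.C. 2257",
            "section 2257 compliance",
        };

        static async Task<bool> LooksPornographicAsync(string url)
        {
            using (var http = new HttpClient())
            {
                string html = await http.GetStringAsync(url);
                foreach (var marker in Markers)
                    if (html.IndexOf(marker, StringComparison.OrdinalIgnoreCase) >= 0)
                        return true;
                return false;
            }
        }

        static async Task Main()
        {
            Console.WriteLine(await LooksPornographicAsync("http://example.com"));
        }
    }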
This is not an answer to your request, but rather only one's view on the subject.
Porn is not something that just pops up while you are surfing regular websites, like this one for example... Porn is something that you need to look for.
If you have small children and you don't want them to be exposed to things that are not under your control, you can simply define all their allowed surfing destinations with Windows Firewall.
If you have older children and you are afraid that they might wander off in search of porn due to age or hormonal impulses, or get exposed by simply surfing to all sorts of dubious and pirated websites, you should have a talk with them and explain things in a grown-up manner, not try to block the reality of life in such a medieval way.
In this modern age where the internet governs all aspects of our lives, and there are far greater risks out there in cyberspace than porn, proper training and education on what is good and bad when surfing is the key to saving kids from all risks and harmful content.
I apologize that this has nothing to do with programming.
I'm working on a free web application that will analyze top news stories throughout the day and provide stats. Most news websites offer RSS feeds, which work fine for knowing which stories to retrieve. However, the problems arise when attempting to get the full news story from the news website itself. At the moment, I have separate NewsSource classes for each source (CNN, NY Times, etc.) that read the appropriate RSS feed(s), follow each link, and strip out the body. This seems tedious and very unmanageable when a news website decides to change the HTML structure of its articles.
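For concreteness, the pattern I'm describing looks roughly like this (class names and extraction details simplified):

    // Each source pairs a feed URL with site-specific body extraction.
    // The ExtractBody override is the fragile part: it breaks whenever
    // the site changes its HTML structure.
    using System;

    public abstract class NewsSource
    {
        public abstract string FeedUrl { get; }
        public abstract string ExtractBody(string articleHtml);
    }

    public class CnnSource : NewsSource
    {
        public override string FeedUrl =>
            "http://rss.cnn.com/rss/cnn_topstories.rss"; // illustrative feed URL

        public override string ExtractBody(string articleHtml)
        {
            // Site-specific scraping, e.g. locating a known container element
            // and returning its inner text.
            throw new NotImplementedException();
        }
    }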
Is there a service (preferably free) that already aggregates multiple news sources with the full article content (not just a summary)? If not, do you have any suggestions for handling multiple sources with different HTML structures that may change without notice?
Use Readability. Search for a Readability port for the language you use; it extracts the main article body with content-density heuristics rather than site-specific selectors, so it is far less sensitive to a particular site's HTML structure.