What is it?

Content that is designed to adapt to the needs of the customer, not just cosmetically, but also in substance and in capability. Adaptive content automatically responds to the screen size and orientation of any device, but goes further by displaying relevant content that takes full advantage of the specific capabilities of the device being used.

Why is it important?

Enables content professionals to deliver the best information experience possible in the most efficient and effective way.

Why does a content strategist need to know this?

Adaptive content automatically adjusts to different environments and device capabilities to deliver the best possible customer experience. It can be customized on-the-fly, displayed in any order, made to respond to specific customer interactions, changed based on location, and integrated with content from other sources. Adaptive content takes advantage of the features of the device being used in order to meet the needs of individual customers.

Not only can we display information based on screen size and orientation (the basis of responsive design), but with adaptive content we can:

  • Leverage location-awareness to determine where users are and deliver locale-specific content

  • Determine wireless internet connection speed (hot spots versus mobile cellular, for example) and deliver content that’s optimized to the available bandwidth

  • Discover whether a device is in motion or not (and at what speed and direction), providing us with sufficient evidence to deduce whether a user is traveling by automobile or by plane, so we can provide content of value for that user’s travel situation

  • Deliver content in the right language (the language of the user’s choosing)

In today’s mobile, global world, our content must be able to adapt so that it reaches the right person at the right time in the right language and format. And it must be intelligent enough to use the capabilities of the device to best effect.
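
To make this concrete, here is a minimal, purely illustrative sketch in Python (the variant keys, context fields, and rules are invented for this example, not drawn from any particular system) of rule-based adaptive delivery: the same piece of content is selected and adjusted according to what is known about the requesting device and user.

    def select_content(variants, context):
        """Pick the best content variant for a device/user context (illustrative only)."""
        # Prefer the user's chosen language; fall back to English.
        locale = context.get("locale", "en")
        # On a slow cellular link, serve the lightweight, text-only variant.
        bandwidth = "low" if context.get("bandwidth_mbps", 10) < 2 else "high"
        variant = variants.get((locale, bandwidth), variants[("en", "low")])
        # If the device reports that it is moving, add travel-relevant material.
        if context.get("in_motion"):
            variant += " (plus nearby-location details)"
        return variant

    variants = {
        ("en", "high"): "Full article with embedded video",
        ("en", "low"):  "Text-only summary",
        ("fr", "high"): "Article complet avec vidéo",
        ("fr", "low"):  "Résumé en texte seul",
    }

    print(select_content(variants, {"locale": "fr", "bandwidth_mbps": 1.2, "in_motion": True}))
    # -> Résumé en texte seul (plus nearby-location details)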

About Charles Cooper

Photo of Charles Cooper

Charles Cooper has been involved in creating and testing digital content for more than 20 years. He’s passionate about user experience, taxonomy, workflow design, composition, digital publishing, and mobile delivery. Charles teaches, facilitates modeling sessions, and develops taxonomy and workflow strategies. He loves to figure out challenging delivery issues.

Term: Adaptive content

Email: cooper@rockley.com

Website: rockley.com

Twitter: @Cooper_42

LinkedIn: linkedin.com/in/charlescoopertrg

What is it?

A software application that supports information capture, editorial, governance, and publishing processes with tools such as workflow, access control, versioning, search, and collaboration.

Why is it important?

Without the automation that a content management system (CMS) provides, and the potential for integration into other software systems, many content-related tasks must be completed manually, greatly decreasing reliability and efficiency.

Why does a content strategist need to know this?

A content management system (CMS) is important because it provides a hub for multiple users and systems to interact with content. The CMS gives a content author the tools needed to support multi-channel delivery, adaptive or semantic content, and more.

There are thousands of CMS options on the market, most of which fall into a few main categories such as Web CMS, Component CMS, and Enterprise CMS. The usability of different systems, when applied to different delivery contexts, varies greatly.

The differences between the types of specialized content management systems, and between specific vendor offerings, are often unclear to many in the organization, leaving it to the content strategist to bridge the gap. IT departments may understand some of the technicalities, but the full impact on content and users is often not clear to them.

Content strategists must understand basic CMS principles and capabilities so that the organization’s business goals drive and shape the content process. The ability to explain CMS-specific requirements can help ensure the correct system is selected or make the case for a replacement system when needed.

If change is not feasible, content strategists need to articulate a realistic set of customization and configuration requirements to the technical integration team so that content processes are properly supported.

About Noz Urbina

Photo of Noz Urbina

Noz Urbina is an internationally recognized content strategist and co-author of Content Strategy: Connecting the dots between business, brand, and benefits. He specializes in consulting and training in cutting-edge, multi-channel, business-driven projects. Since 2000, he has provided services to Fortune 500 organizations and small-to-medium enterprises.

Term: Content management system

Email: b.noz.urbina@gmail.com

Website: lessworkmoreflow.blogspot.com

Twitter: @nozurbina

Facebook: facebook.com/noz.urbina.3

What is it?

A methodology for specifying, designing, and deploying the digital documents needed to automate business processes and web services.

Why is it important?

Using a systematic approach to modeling documents and the processes that use them ensures that documents make sense for the people and applications that use them. A systematic approach also makes documents more robust and adaptable when technology or business conditions change.

Why does a content strategist need to know this?

Document engineering systematizes and synthesizes concepts and skills from information and process analysis, electronic publishing, business informatics, and web architecture. Content strategists may already be familiar with some of these disciplines, but document engineering brings them together into a focused, document-centric methodology.

Document engineering builds on the simple ideas that documents formalize the interactions between businesses and their customers or partners, and that the exchange of documents between these parties follows common patterns. Supply chains, web-based stores and marketplaces, government services, auctions, and numerous other types of network-enabled business and services models are examples.

Document engineering bridges seemingly incompatible approaches to designing and deploying document models. Narrative or publication document types have generally been designed using qualitative or even informal methods. Such design methods make narrative documents seem very different from transactional document types, which are generally designed using formal methods such as those of relational database theory in order to optimize them for automated applications.

Document engineering proposes that analyzing and understanding narrative and transactional document models involves reaching the same goals with different techniques: identifying content components, refining them to ensure that they are sound, organizing for reuse, and creating new document models from the collection of reusable content parts.

This enables document engineering to be applied to the entire range of document types, which is essential because most document-intensive processes involve a mixture of narrative and transactional types—think of filing personal income taxes, where you go back and forth between the instructions and the tax forms. In this light, document engineering is a natural consequence of audience analysis and user experience.

About Robert J. Glushko

Photo of Robert J. Glushko

Robert J. Glushko is an Adjunct Professor at the University of California, Berkeley. He has three decades of research and development, consulting, and entrepreneurial experience in information systems and service design, electronic publishing, and Internet commerce. He founded or co-founded four companies, including Veo Systems (1997), which pioneered Extensible Markup Language (XML) in electronic business.

Term: Document engineering

Email: glushko@berkeley.edu

Website: people.ischool.berkeley.edu/~glushko/

Twitter: @rjglushko

LinkedIn: linkedin.com/pub/bob-glushko/0/14/a66

Facebook: facebook.com/bob.glushko

What is it?

The application of engineering discipline to the design, acquisition, management, delivery, and use of content and the technologies deployed to support the full content lifecycle.

Why is it important?

Ensures that improvement investments achieve the greatest benefits by introducing rigorous discipline to the design of content and associated technical and business processes.

Why does a content strategist need to know this?

Engineering applies scientific principles to the design, development, support, and use of systems that are themselves made up of structures and processes. The challenge of engineering is to design systems that balance and integrate a variety of considerations including usability, sustainability, affordability, manufacturability, efficiency, and effectiveness. To overcome this challenge, engineering approaches these objectives with a methodical use of precedents, standards, frameworks, measurement, testing, and state-of-the-art technologies.

It has become increasingly obvious that, in the 21st century, the business of content cannot continue to operate as a cottage industry. However well-intentioned they may be, professionals working in isolation and leveraging their preferred desktop tools and personalized techniques simply cannot keep up with the demands of a rapidly evolving, and increasingly digital, global economy.

Content engineering seeks to bring the business of content into the modern era by ensuring that content structures, tools, and processes are designed in a way that will make the most of current best practices, proven content technologies, applicable design patterns, and existing implementation experience.

Simply put, the discipline of content engineering represents the context within which content strategists, and indeed everyone involved in the content lifecycle, will operate from now on.

About Joe Gollner

Photo of Joe Gollner

Joe Gollner is Managing Director of Gnostyx Research, an independent consultancy and integrator specializing in applied content technologies. He has been active in the content management industry for over 20 years and has led numerous large-scale implementations where open standards were leveraged to integrate complex enterprise content processes.

Term: Content engineering

Email: jag@gnostyx.com

Website: gnostyx.com

Twitter: @joegollner

LinkedIn: ca.linkedin.com/in/jgollner/

Facebook: facebook.com/joegollner

Phone: 1-613-670-5786

Address: 1 Rideau Street, Suite 700, Ottawa, Ontario, Canada K1N 8S7

What is it?

The inclusion of content from one source into another source by hyperlink reference. The presented result appears as though the included content had occurred at the point of reference.

Why is it important?

First formalized as the idea of link-based use-by-reference, transclusion is a fundamental feature of any document representation system that enables true use-by-reference.

Why does a content strategist need to know this?

Transclusion was coined by hypertext pioneer Ted Nelson as part of his attempt to define and codify the concept that we now accept as hypertext. It has only been with technologies such as Standard Generalized Markup Language (SGML), Extensible Markup Language (XML), and Hypertext Markup Language (HTML) that it has become possible to implement transclusion.

Content used to be reused through the problematic copy-and-paste method. Transclusion allows content to be reused far more efficiently: instead of copying, authors include a hyperlink that refers to the content to be placed at that point. In information management terms, transclusion makes content easier to track, removes redundant information, and reduces errors.

Use-by-reference serves readers by presenting the included content as though it had been written in place. It serves the creators and managers of content by allowing a single instance to be used in multiple places and by maintaining an explicit link between the reused content and all of the places it is used, which supports better tracking and management. These two aspects of use-by-reference, transparency to readers and manageability, are embodied in the term transclusion.

Ideally, source content would be authored and managed in one system and delivered to many other systems. Processing transclusion links takes significant effort, which is one reason transclusion is not a general feature of HTML; it is much easier to do the processing in the authoring environment and deliver the HTML content with the references already resolved.
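
As an illustration only, here is a minimal Python sketch of that kind of resolution step, using simplified, invented element names (include, ref) rather than actual DITA or HTML markup: a reusable component lives in one place, and each reference to it is replaced with a copy before delivery.

    import copy
    import xml.etree.ElementTree as ET

    # A single source of truth for the reusable note...
    library = ET.fromstring(
        "<library><note id='safety'>Disconnect the power before servicing.</note></library>"
    )
    # ...and a document that pulls the note in by reference instead of by copy-and-paste.
    document = ET.fromstring(
        "<topic><p>Replace the filter as follows.</p><include ref='safety'/></topic>"
    )

    def resolve_includes(doc, lib):
        """Replace each <include ref='...'/> element with a copy of the referenced element."""
        for parent in list(doc.iter()):
            for position, child in enumerate(list(parent)):
                if child.tag == "include":
                    target = lib.find(f".//*[@id='{child.get('ref')}']")
                    parent.remove(child)
                    parent.insert(position, copy.deepcopy(target))
        return doc

    print(ET.tostring(resolve_includes(document, library), encoding="unicode"))
    # <topic><p>Replace the filter as follows.</p><note id="safety">...</note></topic>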

Transclusion has been implemented in many XML applications, such as the Darwin Information Typing Architecture (DITA) content reference (conref) and map facilities. Many content management systems provide proprietary facilities for use-by-reference that can also be considered forms of transclusion.

About Eliot Kimber

Photo of Eliot Kimber

Eliot Kimber is a long-time developer of large-scale hyperdocument management systems and a contributor to supporting standards, including HyTime, XML, and DITA.

Term: Transclusion

Email: ekimber@contrext.com

Website: contrext.com

Twitter: @drmacro

LinkedIn: linkedin.com/in/eliotkimber/

What is it?

A form of structured content that is designed, created, and delivered as discrete components within the content whole.

Why is it important?

Enables device-independent delivery in multiple contexts, at multiple levels of detail, and with varying consumer focus. It allows the content strategist to meet today’s delivery challenges and prepare for tomorrow’s unknowns.

Why does a content strategist need to know this?

Nearly every product that we consume is now offered in more flavors, sizes, and styles than we could imagine just ten years ago. Manufacturing products this way is expensive, but successful organizations have figured out how to adapt. Content is product, and the traditional, hand-crafted development process for content is too resource-intensive (in creativity, intellect, and time) to be sustainable in a consumer- and technology-driven market.

By analyzing the structure and purpose of content, we can break it down into modular components that can be delivered to any device and easily modified for particular audiences or purposes. Modular content has the following common characteristics:

  • Design: What does a module of content look like? One approach starts with the concept of a topic: a chunk of information organized around a single subject. A topic is large enough to be self-contained from a writer’s point of view but small enough to be delivered in a variety of contexts.

  • Structure: There is no one-size-fits-all topic. You can create multiple topic types, and what defines each one is its internal structure. By aligning the structure of a topic with its purpose, you can create a model for authoring and delivery that is repeatable and flexible.

  • Self-description: Modular content is a vehicle that is ready to take your ideas anywhere. Metadata puts gas in the tank. Describing your modular content with metadata will enable real-time delivery based on a set of rules that you define, and change, as necessary.
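
Picking up the self-description point above, here is a small, purely illustrative Python sketch (the field names and rules are invented, not drawn from any particular system) of metadata-driven assembly: each module describes itself, and delivery rules select matching modules at request time rather than at authoring time.

    modules = [
        {"id": "overview",    "type": "concept", "audience": "all",   "platform": "any",
         "body": "The agent collects usage data..."},
        {"id": "install-win", "type": "task",    "audience": "admin", "platform": "windows",
         "body": "Install the agent on Windows..."},
        {"id": "install-mac", "type": "task",    "audience": "admin", "platform": "macos",
         "body": "Install the agent on macOS..."},
    ]

    def assemble(modules, platform, audience):
        """Select the modules whose self-describing metadata matches the delivery context."""
        return [
            module["body"] for module in modules
            if module["platform"] in (platform, "any")
            and module["audience"] in (audience, "all")
        ]

    print("\n".join(assemble(modules, platform="macos", audience="admin")))
    # Delivers the overview concept followed by the macOS installation task only.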

About Michael Boses

Photo of Michael Boses

Michael Boses pioneered the simple capture and multi-platform delivery of personalized content and continues to drive innovation in the content community. His success with large content initiatives has led many to consider him an authority on how to make web content creation seamless and efficient in an enterprise setting.

Term: Modular content

Email: mboses@contelligence.org

Website: contelligence.org

Twitter: @mboses

LinkedIn: linkedin.com/in/michaelboses/

What is it?

The practice of using content components in multiple information products.

Why is it important?

Developing reusable content that can be used in multiple places and output formats saves valuable resources, enforces consistency, and improves content quality and effectiveness.

Why does a content strategist need to know this?

Content reuse is a key tactical component of a content strategy. Efficient content reuse enables single sourcing and multi-channel publishing; enforces editorial consistency; conserves time and fiscal resources; and can help ensure accurate, compliant (and thus effective) content.

Efficient content reuse does not rely on copying and pasting; instead, it uses transclusion, whereby content is authored in one location and used by reference in other locations. Many Extensible Markup Language (XML) architectures implement transclusion; perhaps the best known is the Darwin Information Typing Architecture (DITA). Many authoring systems and content management systems also include proprietary mechanisms for transclusion.

Companies can maximize content reuse by developing structured content that is standards-based and semantically rich. Content can be reused at different levels of granularity:

  • An entire information product

  • An entire topic or a collection of topics

  • Elements of a topic

In addition, content can be designed so that conditional processing (filtering) can generate different variants of information products. A content analysis can determine the appropriate level of granularity. A reuse strategy should define the method of content reuse, what content should be reused, the granularity of reuse, how reused content is controlled, and who owns reused content.
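
To make the filtering idea concrete, here is a minimal Python sketch, with an illustrative audience attribute that is not actual DITA markup: the same source yields different variants depending on which audience the filter pass keeps.

    import xml.etree.ElementTree as ET

    source = ET.fromstring(
        "<topic>"
        "<p>Insert the battery.</p>"
        "<p audience='service'>Check the contacts for corrosion before closing the case.</p>"
        "<p>Close the cover.</p>"
        "</topic>"
    )

    def filter_variant(topic, audience):
        """Produce one variant of the source by dropping elements marked for other audiences."""
        variant = ET.fromstring(ET.tostring(topic))  # work on a copy of the shared source
        for parent in list(variant.iter()):
            for child in list(parent):
                marked = child.get("audience")
                if marked and marked != audience:
                    parent.remove(child)
        return variant

    print(ET.tostring(filter_variant(source, "customer"), encoding="unicode"))
    # The customer variant omits the service-only step; filtering the same source
    # with audience="service" keeps it.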

You can effectively manage reused content by employing a content management system (CMS) to control access, determine where the controlled content is used, and identify potentially reusable content. When content is structured well, content managers can employ automation to power content reuse, for example by pre-populating information products with reused content or using tools such as Schematron to help prevent content authors from accidentally deleting the controlled content.

About Kristen James Eberlein

Photo of Kristen James Eberlein

Kristen James Eberlein is an information architect who works with clients that use the Darwin Information Typing Architecture (DITA). She chairs the OASIS Technical Committee that develops the DITA standard.

Term: Content reuse

Email: kris@eberleinconsulting.com

Website: eberleinconsulting.com

Twitter: @kriseberlein

LinkedIn: linkedin.com/in/kristeneberlein/

Phone: (919) 682-2290

Address: 226 Monmouth Avenue, Durham, North Carolina 27701-1908, USA

What is it?

Extensible Markup Language (XML) is an open standard for structured information storage and exchange.

Why is it important?

Encodes content and content structure, which in turn allows for machine-driven filtering and formatting of information. XML also serves as an interchange layer among otherwise incompatible systems.

Why does a content strategist need to know this?

With XML, you capture not just text but information about the text and relationships among the various text components. A foundation of rich, intelligent content opens up sophisticated content manipulation possibilities, such as personalization of information based on reader demographics or automatically linking product references to corresponding 3D images.

XML files do not usually include formatting information. Instead, the formatting is applied dynamically or in a rendering phase. Deferring formatting until after the authoring phase allows for the following possibilities:

  • You can add new outputs, or modify existing outputs, without affecting the source content files. This is in contrast to word processors and desktop publishing tools, in which a formatting change requires manipulation of each content file.

  • The introduction of an independent formatting layer makes it much easier to manage content in multiple languages. The translation effort can focus on linguistics instead of having to disentangle formatting as part of the translation.

  • Authors no longer need expertise in formatting or desktop publishing, only in the subject matter.
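
As an illustration of what deferring formatting to a rendering phase can look like, here is a minimal Python sketch with invented element names and URLs (productref, /catalog/...): the source records only meaning, and a separate rendering step decides how a product reference is presented, in this case as a link.

    import xml.etree.ElementTree as ET

    source = ET.fromstring(
        "<para>Attach the sensor to the "
        "<productref id='brkt-102'>mounting bracket</productref>.</para>"
    )

    def render_html(para):
        """Render a <para> element to HTML, turning product references into links."""
        parts = ["<p>", para.text or ""]
        for child in para:
            if child.tag == "productref":
                parts.append(f'<a href="/catalog/{child.get("id")}">{child.text}</a>')
            parts.append(child.tail or "")
        parts.append("</p>")
        return "".join(parts)

    print(render_html(source))
    # <p>Attach the sensor to the <a href="/catalog/brkt-102">mounting bracket</a>.</p>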

XML is an open, nonproprietary standard. Using XML reduces the risk that an organization will be locked in to a particular vendor solution or publishing workflow when the organization’s requirements change. XML is also widely supported in software well beyond the tools of interest to content strategists.

About Sarah O’Keefe

What is it?

Content, whether in a textual, visual, or playable format, that conforms to structural and semantic rules that allow machine processing to meet specific business requirements.

Why is it important?

Humans are much better than computers when it comes to understanding the nuances of content. Structuring content with semantic metadata allows computers to understand the content’s relationship to business processes. This enables better discovery, marketing, and user engagement.

Why does a content strategist need to know this?

Readers understand the visual grammar of style in what they read in a browser or in print, but computers do not. Even for scanned pages converted into word processor files, the computer can only determine that something in a block of text is possibly a paragraph, but it cannot necessarily discern a paragraph from a note or a quotation. By indicating the order and intent of the parts of a document, writers ensure that publishing tools well into the future can usefully render that content, even if reading technologies change.

Adding structure to content adds both present and future value, turning content from a single-use commodity into a long-term asset. Content can be structured in a number of ways, most commonly by applying descriptive, codified markup, such as Extensible Markup Language (XML) or other semantic markup, or by storing content in named fields in a database.

Structured content clearly indicates not only the parts of the discourse (the titles, sections, lists, tables, and phrases that represent organization) but also the semantic intent of those containers. For example, paragraphs identified more specifically as quotations can not only be rendered differently for readers but also be discovered more easily in searches for quotations or citations.
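
A minimal Python sketch, with illustrative element names and sample text, shows the difference: generically tagged paragraphs and semantically tagged quotations carry the same kind of words, but only the semantic markup lets a program find the quotations reliably.

    import xml.etree.ElementTree as ET

    doc = ET.fromstring(
        "<article>"
        "<p>Structured content turns text into data.</p>"
        "<quotation source='the company style guide'>Write for the reader, not the reviewer.</quotation>"
        "<p>Formatting alone cannot tell these two blocks apart.</p>"
        "</article>"
    )

    # Discovery: pull out every quotation, with its attribution, in a single query.
    for quote in doc.findall(".//quotation"):
        print(f'"{quote.text}" -- {quote.get("source")}')

    # A rendering step could likewise present <quotation> differently from <p>,
    # because the markup records intent rather than appearance.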

By structuring content appropriately, you can more easily turn information into knowledge, instructions into automation, concepts into lesson units, and more, thereby increasing its value to the business.

About Don Day

Photo of Don Day

Don Day is a content engineer with deep experience in innovative authoring solutions and information architectures for structured, semantic content for the Web and across the enterprise. He provides consulting on strategy, technology, and best practices for optimizing the value and usefulness of unstructured data.

Term: Structured content

Email: donrday@contelligencegroup.com

Website: contelligencegroup.com

Twitter: @donrday

LinkedIn: linkedin.com/in/donrday/

Facebook: facebook.com/donrday

What is it?

The ability to create content once, planning for its reuse in multiple places, contexts, and output channels.

Why is it important?

Leverages content to its fullest potential, with benefits such as increased consistency and accuracy and reduced development time.

Why does a content strategist need to know this?

Single sourcing is an approach to developing content that can be used to produce multiple outputs in different formats for different platforms. With this approach, authors only need to maintain one set of source content, greatly reducing authoring, editing, and translation time, as well as reducing the risk of introducing inconsistencies between multiple, often redundant, content sets.

One key to single sourcing is separating content from formatting. Rather than formatting the content while authoring, the content is formatted as part of the publishing process. This frees authors to concentrate solely on the quality of the content and allows designers to format content appropriately for each channel. Single-sourced content is usually in an open format, such as XML, which describes the content semantically so that it can be processed intelligently based on the nature of the information and its intended use.
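
As a minimal sketch of the idea (the element names and rendering functions here are illustrative, not any particular tool’s API), one small XML source can feed two channels, each applying its own formatting at publish time.

    import xml.etree.ElementTree as ET

    source = ET.fromstring(
        "<procedure><title>Reset the router</title>"
        "<step>Unplug the power cable.</step>"
        "<step>Wait ten seconds.</step>"
        "<step>Plug the cable back in.</step></procedure>"
    )

    def to_html(procedure):
        """Web channel: title as a heading, steps as an ordered list."""
        steps = "".join(f"<li>{step.text}</li>" for step in procedure.findall("step"))
        return f"<h2>{procedure.findtext('title')}</h2><ol>{steps}</ol>"

    def to_plain_text(procedure):
        """Quick-reference channel: numbered lines of plain text."""
        lines = [procedure.findtext("title").upper()]
        lines += [f"{number}. {step.text}"
                  for number, step in enumerate(procedure.findall("step"), start=1)]
        return "\n".join(lines)

    print(to_html(source))        # web output
    print(to_plain_text(source))  # quick-reference output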

Successful single sourcing requires a solid plan for content creation and content reuse. The two go hand-in-hand. When creating content, authors must be mindful of all the ways in which it might be used. It’s up to the content strategist to develop the plan for intelligent reuse.

Content strategists must architect content to ensure its maximum reusability in multiple contexts. The content must be sufficiently granular, and it must share a common voice and vocabulary. For single sourcing to succeed, content strategists and authors must collaborate and regularly re-evaluate content in its various uses.

About Leigh White

Photo of Leigh White

Leigh White is a 20+ year technical communications veteran advocating that effective technical communicators need to be more than writers; they need to be part programmer, part designer, and part project manager. Leigh speaks on Extensible Markup Language (XML) and the Darwin Information Typing Architecture (DITA) at conferences including the Society for Technical Communication Summit, Intelligent Content, WritersUA, and LavaCon.

Term: Single sourcing

Email: leigh.white@ixiasoft.com

Twitter: @leighww

LinkedIn: linkedin.com/in/leighwwhite