Interactive Cognitive Load
[Author: Bill Fischer]
Overview
Cognitive load is a particularly challenging aspect of designing interactive media. Navigation and layout systems depend on the user's working memory, executive function, fine motor skills, spatial organization, language comprehension, and sensory input filtering to make sense of everything presented to them. The design goal is to minimize the extraneous cognitive load and working memory required to gain access to information through the graphic user interface.
The I-See-U blueprint breaks this down into three user interaction events:
The preamble (the moments leading up to an action)
The action (when a decision is made and an action is performed)
The reaction (the result of an action)
Preamble
When a user is presented with a graphic user interface, there is a lot of cognitive work they need to do before taking action. How challenging this is can depend on where they may land on the neurodiverse spectrum, what type of environmental distractions are present, and which physical stresses they may be experiencing. This section recommends several methodologies for optimizing the preamble experience.
Epic+KCAD partnered with Protege Games and Innocademy Schools to imagine a virtual-reality, educational field trip called Amplify: Journey To Mars (external link). The concept integrates Microsoft HoloLens AR with projected images and walkie-talkie style communications to create an immersive experience that students can engage with in small groups. The UI challenge involved isolating the interactive components (signal) from the immersive environment (noise).
Signal and Noise Management
Visual assets can be categorized in three ways: information, decoration, and distraction.
Visual hierarchy: Position, color, and contrast of screen elements should be designed in a hierarchical attentional order.
Focus: where we want attention to be prioritized. The user should be able to easily find and obtain what they are looking for, or be provided clear options, suggestions, or instructions for what they could or should do next.
Grouping: organizing content in intentional chunks that are separated by generous margins and further differentiated by design elements like background value, lines, and other visual elements.
Discoverability
"Discoverability' is a design method that calls for all possible actions to be visible in full view (not hidden). Working memory in our brains has limited capacity and is the system where we temporarily hold information available for processing things like navigation systems and page/screen content. Examples include:
Breaking information into relatively small chunks and categorizing it with titles, headings, and subheadings will optimize the scannability of a page/screen.
Fully viewable menu systems avoid asking users to create mental maps of hidden navigational elements which require more working memory and increase cognitive load.
Site maps can be used in lieu of fully expandable navigation systems. They also have the advantage of being able to utilize heading systems to optimize the screen reader experience.
Search capability is a boon to all of us, but especially to persons with sight impairments.
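The chunking guideline above can be sketched as a simple grouping helper. This is a minimal illustration, not a prescribed implementation; the function name, the group size of four, and the sample menu labels are all assumptions for demonstration.

```python
def chunk(items, size=5):
    """Split a flat list into ordered groups of at most `size` items,
    so each group stays within working-memory limits."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Hypothetical flat menu, regrouped into two scannable sets of four:
menu = ["Home", "About", "Courses", "Faculty",
        "News", "Events", "Gallery", "Contact"]
print(chunk(menu, 4))
# [['Home', 'About', 'Courses', 'Faculty'], ['News', 'Events', 'Gallery', 'Contact']]
```

Each resulting group would then get its own heading or visual separation, as described under Signal and Noise Management.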
Affordance
When users are provided a choice, it should be clear what the results of that choice will be and should be built on an expected, shared understanding. This can include:
Underlining text links.
Maintaining consistency of formatting from page to page.
Consistency of visual cues throughout the media, including branding and menus.
Utilizing massively adopted conventions for both image and layout.
Accompanying icons with text.
Action
Memory Optimization
Humans can only keep four to five pieces of information in working memory, the temporary holding system in the brain that is required for processing, in our case, a graphical user interface. According to Hick's law, the time it takes to make a decision increases with the number and complexity of choices. This means that chunking information and navigation into small sets will reduce cognitive load and increase efficiency.
The primacy effect describes our mental capacity to remember the first thing we experience in a set.
The recency effect describes our mental capacity to remember the last thing we experience in a set.
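Hick's law is commonly modeled as T = b · log2(n + 1), where n is the number of equally probable choices and b is an empirically fitted constant. The sketch below only illustrates the logarithmic shape of the curve; the coefficient b = 0.2 seconds is an assumed value for demonstration, not a measured one.

```python
import math

def hick_decision_time(n_choices: int, b: float = 0.2) -> float:
    """Estimated decision time in seconds for n equally probable
    choices, per Hick's law: T = b * log2(n + 1)."""
    return b * math.log2(n_choices + 1)

# Decision time grows logarithmically, not linearly, with menu size:
for n in (4, 8, 16, 32):
    print(f"{n:2d} choices -> {hick_decision_time(n):.2f} s")
```

Because the growth is logarithmic, each doubling of a menu adds only a fixed increment of decision time, but small sets still decide fastest, which is why the blueprint favors chunked navigation.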
Epic+KCAD partnered with the Grand Rapids Public Museum to develop a place-based, augmented reality, role-playing game in the Old Streets exhibit, called Old Streets Adventure (external link). There are no buttons in the app. The interface is controlled by the movement of the tablet alone. Users learn how to navigate through a feedback loop that is activated by moving the tablet. Interactions include: start, zoom, replay, and change focus.
Reaction
Feedback
Providing information that orients the user and confirms the success or failure of their choices is important because it mirrors the physical world, which provides constant feedback to our senses. That is the norm for most of human experience, but digital interactive systems only provide feedback if the designer includes it. This can include:
Page/screen header/titles that reinforce where we landed and remind us where we are.
'submitted', 'saved', and other confirmations.
Consistent brand/product badging that reminds us who or what we are engaging with.
Recovery
Undo, Redo, and Back are essential navigational elements for usability. Without them, users must return to 'home' pages/screens or navigation systems to undo an action.
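The Undo/Redo pattern described above is commonly implemented with two stacks: performing a new action pushes the old state onto the undo stack and clears the redo stack. A minimal sketch, with illustrative class and method names:

```python
class History:
    """Two-stack undo/redo: a fresh action invalidates the redo history."""

    def __init__(self):
        self._undo = []   # states we can return to via undo()
        self._redo = []   # states we can restore via redo()
        self.state = None

    def do(self, new_state):
        self._undo.append(self.state)
        self._redo.clear()            # new action discards redo history
        self.state = new_state

    def undo(self):
        if self._undo:
            self._redo.append(self.state)
            self.state = self._undo.pop()

    def redo(self):
        if self._redo:
            self._undo.append(self.state)
            self.state = self._redo.pop()

h = History()
h.do("draft 1")
h.do("draft 2")
h.undo()
print(h.state)  # draft 1
h.redo()
print(h.state)  # draft 2
```

Because every state transition is reversible in place, the user never has to navigate back to a 'home' screen just to escape a mistake.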
How Do Blind and Sight-Impaired Persons Understand the Visual World?
We need to keep in mind that sight-impaired people build their own understanding of events happening on screen within the context of how they experience the world through their available senses: sound, touch, smell, and taste. In video, sound is the only one of those senses we can reach, so audio storytelling that evokes as many senses as possible is key.
I have created shortcuts to segments inside of the videos to make it easier to revisit them, and also to provide a curated experience of the parts that I thought were most relevant to issues that are centered around how blind persons understand the visual world.
A sighted person attempts to describe the world to a blind person
Types of blindness explained
Watch the entire video
How a blind person processes the visual world
Watch the entire video
Challenges of watching video as a blind person
Communicating while deaf or hearing impaired
Getting to know the various ways that deaf and hard-of-hearing persons generally communicate with the hearing world can offer insights that we can synthesize into our communication-focused media design.
I have created shortcuts to segments inside of the videos to make it easier to revisit them, and also to provide a curated experience of the parts that I thought were most relevant to issues that are centered around communication.
Functional Aspects of Communication and Inner Voice
Social challenges of communicating with hearing persons
Education challenges
A Deaf Experience in Mainstream School
Watch with captions on
Experiencing sensory inputs when neurodiverse
Understanding how neurodiverse persons experience sensory inputs can inform our design decisions for media that can deliver multimodes of input simultaneously.
I have created shortcuts to segments inside of the videos to make it easier to revisit them, and also to provide a curated experience of the parts that I thought were most relevant to issues that are centered around how neurodiverse persons react to sensory inputs.
Dyslexia simulation
Watch the entire two-minute video