Sunday, March 06, 2005

The Information Architecture of Things - Part I: What If a Button Really Is a Button? (Bill DeRouchey)

Session description

Bill DeRouchey is an ID - industrial designer - from Ziba Design.

Speaker started out as an IA, moved into ID; works for Ziba (product design co)


Products get more complicated
- no longer focused on single tasks: they consume, manipulate, and present info in a small space

All require _getability_: If they don't get it, they won't get it

Products traditionally don't tap the entire experience
- fast and cheap access to info adds complexity

Elegance v features
- need for elegance and reduced cost usu. means fewer physical controls
- leads to increased modality in controls
- competitive marketplace usu. means more features

It's going to get worse and better
- more complicated, more interesting
- ubicomp leads to highly specialized and info-rich products
- e.g., info-goggles, car PCs
- new opportunities for people who enjoy both structuring info and designing elegance

- process
- usability, mental models, etc.


- taxonomies and classifications play a lesser role
- web specifics (servers, coding, databases)
- The principles help, the details don't


It's still organizing info
- structuring a system that works
- documenting the solution

Knowing the user is key
- get into their context, what they don't realize they need
- what makes them giggle?

Team-driven process
- collaboration is key
- team includes CFM (color, finish, materials)


Physical context
- screen size
- two hands or one
- display stared at or glanced at
- safety issues (medical devices)

Interaction is event-driven
- "at rest" state
- interaction triggered by events; process the event, return to "at rest" (documenting a loop vs a path)
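The "loop vs path" point can be sketched in a few lines of Python. This is my own minimal illustration, not code from the talk: the device idles "at rest", handles one event, and always returns to rest.

```python
# Hypothetical sketch of an event-driven device loop. The device idles
# in an "at rest" state, processes an event, then returns to rest -
# a loop, not a one-way path. All names here are illustrative.

def run(device, events):
    """Process a stream of events; the device always returns to rest."""
    for event in events:
        device.state = "active"
        device.handle(event)        # process the event
        device.state = "at_rest"    # return to the "at rest" state

class Lamp:
    def __init__(self):
        self.state = "at_rest"
        self.on = False

    def handle(self, event):
        if event == "push":
            self.on = not self.on   # toggle on each push

lamp = Lamp()
run(lamp, ["push", "push", "push"])
print(lamp.on, lamp.state)  # True at_rest
```

Documenting this means documenting the loop: every flow starts and ends at the same resting state.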

Interaction model is reversed
- Devices: less info; less visual space; wide variety of interaction points (buttons, knobs, sliders, switches, gestures); push, push+hold, slide, turn (fast or slow), turn+hold, tilt, shake, etc.

Modality increases
- Devices: modality varies heavily; same control can have multiple physical actions (push vs hold) combined w/ user modes (set clock vs set alarm)
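One way to picture the modality problem: the behavior of a single control is a function of both the physical action and the current user mode. A hedged sketch (the modes, actions, and behaviors below are invented for illustration):

```python
# Illustrative only: one physical control whose meaning depends on both
# the physical action (push vs hold) and the current user mode.
BEHAVIOR = {
    ("set_clock", "push"): "advance clock hour",
    ("set_clock", "hold"): "fast-advance clock",
    ("set_alarm", "push"): "advance alarm hour",
    ("set_alarm", "hold"): "fast-advance alarm",
}

def press(mode, action):
    """Look up what the control does in this mode; unknown combos are ignored."""
    return BEHAVIOR.get((mode, action), "ignored")

print(press("set_alarm", "hold"))  # fast-advance alarm
print(press("locked", "push"))     # ignored
```

Every (mode, action) pair is a cell the documentation has to account for, which is why flat site maps and wireframes start to break down.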

Feedback takes many forms
- Devices: visual (new screen, lighting icons), audio (ding, buzz), spoken, touch (vibration, detents)


Site maps aren't enough
- don't easily capture multiple interactions per control

Site maps become function maps
- inventory all possible flows
- incorporate "at rest" state
- example shown akin to mobile phone functional flows


Wireframes don't apply
- hard to capture offscreen status/feedback (sounds, lighting)
- difficult to capture modes
- physical form of device gets in the way

Wireframes become screenflows


Rapid prototyping not conducive to document maintenance

Divorce function from layout
- interaction definition occurs simultaneously with prototyping

Divorce function from action (abstract it! - think like CSS/XML)
- document topology, not description (object-oriented approach, analogous to CSS)
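The CSS analogy can be made concrete: document abstract feedback names separately from the concrete outputs they map to on a given device, the way CSS class names are separate from their styles. A sketch under my own assumed names (the talk's examples of high-level names were Scroll and Confirm):

```python
# Sketch of "divorce function from action": abstract feedback names
# (like CSS classes) map to concrete device outputs in one place.
# The sounds and lights below are hypothetical.
FEEDBACK = {
    "Scroll":  {"sound": "tick", "light": None},
    "Confirm": {"sound": "ding", "light": "green flash"},
}

def give_feedback(name):
    """Resolve an abstract feedback name into concrete device outputs."""
    spec = FEEDBACK[name]
    out = f"play {spec['sound']}"
    if spec["light"]:
        out += f", {spec['light']}"
    return out

print(give_feedback("Confirm"))  # play ding, green flash
print(give_feedback("Scroll"))   # play tick
```

Swapping the device's sounds or lights then only touches the mapping, not every flow that uses Confirm.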


Interaction matrix
- grid to compare all possible interactions in all modes

- similar to wireframes
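An interaction matrix can be generated from the same (mode, action) data the documentation already inventories. A minimal sketch with invented modes and actions: rows are physical actions, columns are modes, and each cell says what (if anything) the control does.

```python
# A minimal interaction matrix: rows are physical actions, columns are
# modes, cells say what the control does there. All data is hypothetical.
modes = ["play", "menu", "locked"]
actions = ["push", "push+hold", "turn"]
matrix = {
    ("push", "play"): "pause",
    ("push", "menu"): "select",
    ("push+hold", "play"): "power off",
    ("turn", "play"): "volume",
    ("turn", "menu"): "scroll",
}

def render(matrix, actions, modes):
    """Render the matrix as a tab-separated grid; '-' marks no behavior."""
    lines = ["\t".join([""] + modes)]
    for a in actions:
        row = [matrix.get((a, m), "-") for m in modes]
        lines.append("\t".join([a] + row))
    return "\n".join(lines)

print(render(matrix, actions, modes))
```

Empty cells are as informative as full ones: they show at a glance where a control is dead in a given mode, which a site map never surfaces.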

Other documents: inventories
- modes
- possible user actions
- screen templates and animations
- system feedback (sounds, lighting); use high-level names that match actions (Scroll, Confirm)
- patterns (how it all goes together)

- future is getting weirder
- new complexity means new opportunities, regardless of your job title
- Objects are closer than they appear (take in all the info you can, read weird stuff, e.g., Erratic Behavior - Sweden)
- Don't forget to abstract your documentation (patterns not results)

ct: I see opportunity for software that allows you to represent interactions and modes in different views for different purposes (engineering, client presentation, user experience direction)