The whole campus in his hands

The Talking Campus Model, located in the Grousbeck Center, is unlike any other interactive design in the world.

The Grousbeck Center for Students and Technology.

Sighted or not, users can locate any building on campus through a touch-sensitive platform and audio or braille cues. Exploring buildings and land features by touch or navigational buttons releases a cornucopia of information about the campus, the lion’s share of it supplied by the Perkins community.

And there’s even a game that up to four players, sighted, blind or visually impaired alike, can play using raised-button controls adjacent to the mini-campus. These buttons also navigate the menus that peel back the onion-layered modes of the Talking Campus Model.

In many ways this model, which uses technology in innovative ways, is light years ahead of similar designs. Yet its ancestors lie just a stone’s throw away, in the museum at the Howe Building on Perkins’ Watertown campus.

Steve Landau, the architect at Touch Graphics who created the model along with Nicole Rittenour and Zach Eveland, said they built upon these precedents when envisioning how someone who is blind would experience this miniature version of Perkins.

“It’s based on the concept of empathy,” Landau said. “Good design relies on getting into the head of the user.”

People around the talking campus model

Steve stands on the symmetrical grid of floor tiles lining the Howe Museum, where games and maps designed for people who are blind remain on display. He turns the knobs on a wooden Halma board from the 19th century. All the knobs are smooth, perhaps worn from touch, and interconnected on a carved, tactile grid.

“When you’re designing things for everyone, including people who are blind or have low vision, you have to rethink your approach to the world,” Landau says. “These historical pieces demonstrate principles of non-vision design, and yet I think they are kind of beautiful. They were created largely without considering their visual appearance: the designers had to force themselves not to think about what it looks like. And, yet, they embody a simplicity and clarity and an aesthetic precision that I really find interesting and enjoyable.”

He spirals up the stone staircase to the second floor of the museum. Here he sees an old campus model, uneven to mimic the topography of the area it represents. It’s also about the size of a dining room table. Landau has to move around the platform’s perimeter to feel each building.

“We like to make things that are small enough that you can stand in one place and feel the whole thing,” he says. “Otherwise, it’s easy to lose your sense of orientation.” The average person’s arm span can cover the entire Talking Campus Model, which matters if you want to explore with two hands.

Maintaining orientation on a miniature version of something is important when trying to imagine yourself within the model. “That’s not always intuitive, especially for a blind child,” he says. “Yet by creating a model, we are trying to teach the idea of scale, of miniaturizing the world.”

A hand touching the Grousbeck Center on the talking campus model

When users touch certain features of the Talking Campus Model (buildings, landmarks and so on), they hear sounds recorded at that location. Touch the pond, for example, and you’ll hear the fountain; touch the bell tower in the Howe Building and you’ll hear chimes. Landau calls these cues “earcons”: sound clips that act as a symbol for something else, much as an icon (“eye”-con) on a computer desktop represents a certain program.
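In software terms, the earcon idea amounts to a mapping from touchable features to sound clips. This is only an illustrative sketch; the feature names and file paths are hypothetical, not drawn from the model’s actual software:

```python
# Hypothetical sketch of the "earcon" concept: each touchable feature maps
# to a short sound clip recorded at that real-world location.
# Feature names and file paths here are illustrative only.
EARCONS = {
    "pond": "sounds/fountain.wav",
    "howe_building": "sounds/bell_tower_chimes.wav",
}

def earcon_for(feature: str):
    """Return the sound clip that stands in for a feature, if one exists."""
    return EARCONS.get(feature)
```

Touching a feature with no recorded earcon would simply yield nothing, letting the model fall back on speech or silence.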

Making “mental linkages” of this sort, he says, allows the user to take the leap from reality into the miniature world of the model.

“With technology, we have the ability to layer information in ways that our predecessors couldn’t,” he says. “We add visual highlighting and large-print captions with a video projector pointing down on the model from above, and we add audio feedback, braille, speech and sound effects—all of these other things that tend to emphasize and focus people, to immerse them in the sensory experience of using the model.”

Steve walks along a timeline of physical tactile artifacts encased under glass in the second-floor museum. He stops at a raised map of the United States. He speculates that teachers back then would have had to guide the student’s fingers over the map and explain what part of the country they were feeling.

“We prefer to make things that are more independently usable and don’t require one-on-one help,” he says. “I wouldn’t make a map like that today unless it had an audio component to it.” This is an example of Landau taking an original idea and making it “stronger with technology,” as he puts it.

Press a building once on the Talking Campus Model and you’ll hear the building’s name, with an accompanying earcon; keep touching the same building and you hear a short description of that building’s function at Perkins. Continue to press and you hear directions for walking to that place from the Grousbeck Center. Perkins Training and Educational Resources Program Coordinator Betsy McGinnity furnished this information for the project. At Touch Graphics, Rittenour handled all of the computer-aided design (CAD) modeling for the structures.
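The press-to-cycle behavior described above can be pictured as a small state machine: repeated touches of the same building step through name, then description, then walking directions, while touching a different building starts the cycle over. This is an assumed reconstruction of the behavior, not the model’s actual code:

```python
# Sketch (assumed from the description above) of cycling through info
# levels as a user repeatedly presses the same building.
class BuildingInfo:
    LEVELS = ("name", "description", "directions")

    def __init__(self):
        self.current = None   # building currently being pressed
        self.level = 0        # index into LEVELS

    def press(self, building: str) -> str:
        if building != self.current:
            # New building: restart at its name.
            self.current, self.level = building, 0
        else:
            # Same building: advance, capping at the deepest level.
            self.level = min(self.level + 1, len(self.LEVELS) - 1)
        return self.LEVELS[self.level]
```

For example, three presses of the Howe Building would yield its name, its description, then directions; a press on the Grousbeck Center would then start again with its name.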

On the Talking Campus Model, “each building is hooked up by a wire,” he says. Underneath the surface, a tangle of wires spiderwebs into the bottom of each touchable feature on the map. They connect to metal pins that penetrate a protective epoxy coating to reach a specialized paint that conducts an electrical charge. This conductive paint senses a finger as it comes close. The visible color on the surface, therefore, is not the touch-sensitive layer; that lies below. “You could pour water on it and nothing would happen. The conductive paint can’t scratch off,” he says.

Beneath this surface alive with wires, Eveland, who designed and built the touch-sensitive circuitry housed in the model’s base, hooked the other end of this bundled network into a computer program containing all of the campus information users can access by engaging the model. All of this was done for the tactile user.
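Conceptually, the software’s side of that wiring is a lookup: each sensing pin corresponds to one touchable feature, and a scan of the active pins tells the program what is under the user’s hands. The pin numbers and feature names below are invented for illustration:

```python
# Illustrative sketch of the wiring idea: each touchable feature connects,
# via its own wire and pin, to the sensing circuitry, and software maps
# active pins back to feature names. Pin assignments here are made up.
PIN_TO_FEATURE = {3: "Howe Building", 7: "Grousbeck Center", 12: "pond"}

def touched_features(active_pins):
    """Translate the set of electrically active pins into feature names,
    ignoring any pins not wired to a known feature."""
    return {PIN_TO_FEATURE[p] for p in active_pins if p in PIN_TO_FEATURE}
```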

“When we first made the campus model, if you were touching two things at once, an alarm would go off to warn the user that the computer could not decide which building the user was interested in learning about,” he says. “That didn’t prevent kids from touching more than one thing at the same time, however. So, we changed the program so that, when the user touches more than one thing, it goes silent. You freeze it, which is the way we think people who are blind want to explore.” Lift one hand, and the model announces what the user is still touching. In other words, with two hands, users can investigate the contours of each building uninterrupted; lift a hand, and more information on the building they are still touching comes alive.
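The two-hand rule Landau describes reduces to a simple policy: with zero or multiple features touched, the model stays silent so the user can explore freely; the moment exactly one feature remains under a hand, the model announces it. A minimal sketch of that assumed logic:

```python
# Sketch of the two-hand exploration rule described above (assumed
# behavior, not the model's actual software): silence during multi-touch
# exploration, an announcement when only one feature is being touched.
def audio_response(touched):
    """Given the set of currently touched features, return the feature to
    announce, or None to stay silent."""
    if len(touched) == 1:
        return next(iter(touched))   # one touch: announce that feature
    return None                      # zero or several touches: stay silent
```

So a user tracing two buildings at once hears nothing; lifting one hand triggers an announcement of the building still being touched.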

Landau, who noted he would often close his eyes and touch the scaled-down parts of campus while in the model design process, found that this method of using both hands gave the user more control. They could decide when they wanted to hear information. This technique also adds a layer of interactivity with the model; gesturing between two- and one-handed uses extends the tactile functionality beyond just a touch-and-listen relationship.

“I would be very happy if the work we were doing here encouraged future developers to figure ways that information could be conveyed through touch and gestures, not just as a way of picking things,” he says. “There are other ways to interact with computers besides the traditional keyboard and mouse that provide a deeper and more intuitive experience for many users, and we are exploring some of these ideas in the Talking Campus Model.”

Landau did not completely abandon keys and button navigation in his design, though. In an effort to include as many ways as possible to interact with the model, he incorporated a large, raised, red button flanked by left and right arrow keys as a second mode of interaction, and installed this setup on all four sides of the square table. On one side, a refreshable 80-character braille display slides out, rendering the audio information in braille. The display handles both Grade 1 and the more complex, contracted Grade 2 braille.

Watch and listen to a video of Steve Landau demonstrating the functionality of the Talking Campus Model, including how to locate specific buildings on the Watertown grounds.

“Some people are just never going to be that tactile,” he says. “We want to make everything accessible in other ways that don’t require this tactile exploration. That’s why we have these buttons.”

Press the raised circle to access the main menu. Navigate with the left and right arrows; select an option with the circle. The index menu, for instance, lists each building alphabetically.

“The index is very important,” he says. “We talked before about free exploration—exploring the model with one or two hands. That’s good if you want to get a sense of things. If you want to know where something is, you need an index.”

To find a specific building on campus, the user selects it in the index and then touches anywhere on the model. The map tells the user where the building is in relation to where they touched. Directional audio cues like “Go west” and “Go south,” after each successive contact, coach the user along the path to the desired building. When they reach it, chimes sound, like completing a level in Nintendo’s Super Mario Bros. “You need to have a way to find things,” he says.
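That guided search boils down to comparing each touch against the target building’s position on the map and answering with a compass hint, or the success chimes on a direct hit. The coordinate scheme below (with +x as east and +y as north) is hypothetical:

```python
# Sketch of the index-guided search described above. Each touch is
# compared with the target building's map position and answered with a
# compass hint; a direct hit earns the success chimes. Coordinates are
# hypothetical, with +x = east and +y = north.
def direction_hint(touch, target):
    (tx, ty), (gx, gy) = touch, target
    if (tx, ty) == (gx, gy):
        return "chimes"              # found it: play the success chimes
    hints = []
    if gy > ty:
        hints.append("Go north")
    elif gy < ty:
        hints.append("Go south")
    if gx > tx:
        hints.append("Go east")
    elif gx < tx:
        hints.append("Go west")
    return ", ".join(hints)
```

A touch northeast of the target would be answered with “Go south, Go west,” nudging the finger closer with every contact.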

The user then has the option to light up a trail between the Grousbeck Center (where they are) and the destination building. Touch the building again and audible directions describe the trail, beginning by telling the user to step left or right out of the Grousbeck Center and then which walkway(s) to follow. Braille users would read these directions. Landau worked with Perkins O&M Specialist Lorraine Bruns, who provided all of the scripts for every possible set of directions.

Yet the model is not all business. Perkins Assistive Technology Coordinator Jim Denham is working with Landau on developing a game within the model, which will involve the history and geography of the campus. Up to four players can operate the game using the circle and directional button controls, which also light up.

Back on the second floor of the Howe Museum, Steve reaches the end of the tactile timeline. He says it looks like it ends somewhere around the 1970s. But that’s OK. “In a way, this exhibit stops here and picks up at the Grousbeck,” he says.

As he walks downstairs and through the Howe lobby en route to the Grousbeck Center, the echoes of his footsteps bounce off the high concave stone ceiling. “When you touch the Howe Building, on the Talking Campus Model, you hear this,” he says. From the Howe, an earcon emerges.

The Talking Campus Model, at the Grousbeck Center, in Google Earth mode.
