MAPS on xTV

 

Douglas Crockford

David Lawrence

Lucasfilm Ltd.

March 19, 1987

 

 

Cartography is a dubious activity, translating all or part of the surface of a sphere onto a flat sheet of paper. There is no correct way to do this, and every map is erroneous in some aspects. Nevertheless, maps are extremely useful objects, as well as being the reason that the field of Geography exists.

 

 

Technical Considerations.

Using inexpensive computer technology, we can produce useful maps for group presentation, designing to the conditions of viewing and the limitations of the technology.

Apple IIGS Hardware Capabilities

We can get considerable leverage with simple, predefined animation by using color cycling. We can combine IIGS images and images from the videodisc by using video overlay. Scale and perspective changes can be prepared by producers and recorded on a videodisc. Such production can take place on more powerful machines. We look forward to someday having this class of machine in schools. Until then, for interactive usage, the design task is to explore the possibility space in the intersection of maps, video, and GS graphics capability, and to discover conventions which offer teachers and students considerable flexibility in structuring powerful, instructive map displays and graphic presentations.

Color Cycle Animation

Color cycle animation takes advantage of the ease with which IIGS hardware lets us remap color regions. A pixel displayed on the IIGS screen in 320x200 mode can be any one of 4096 possible colors. At any given moment, however, each of the 200 horizontal scanlines can only work with a single group of sixteen colors. These groups are called palettes. The IIGS can have up to sixteen palettes. This means that it can display a maximum of 256 colors at once, with certain limitations.

(16 colors x 16 palettes = 256 possible colors.)

A pixel's color is determined by three things: the palette being used, the pixel's palette position, and the color value of its palette position. It is extremely easy for us to change two of these three things. The palette is determined by a one-byte pointer for each of the 200 horizontal scanlines. Changing a palette pointer will instantly change the palette for its scanline. This means that the entire 32K screen can be remapped to completely different colors by moving only 200 bytes. If all 200 scanlines have the same palette, the process is even easier. A palette is composed of sixteen two-byte color values. All pixels mapped to a given palette location will have their color determined by the color value of that location. This makes it possible to remap the colors of the entire screen by moving only 32 bytes. By cleverly designing graphic regions that use different palette positions and manipulating the colors inside the palette, we can easily create effective animated sequences.
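
To make the mechanism concrete, the following C sketch models the palette machinery described above with ordinary arrays standing in for the scanline pointers and palette color memory. The names (scb, palettes, remap_screen, set_color) are our own shorthand, not Apple's; on real hardware these arrays would be the GS video control bytes and palette registers.

    /* A sketch of color-cycle remapping. The arrays stand in for the
       IIGS scanline control bytes and palette color memory.            */

    #define SCANLINES 200
    #define PALETTES   16
    #define COLORS     16

    typedef unsigned short Color;          /* two-byte color value         */

    static unsigned char scb[SCANLINES];   /* one palette pointer per line */
    static Color palettes[PALETTES][COLORS];

    /* Point every scanline at a new palette: 200 bytes moved, and the
       entire 32K screen is remapped.                                   */
    void remap_screen(unsigned char new_palette)
    {
        int line;
        for (line = 0; line < SCANLINES; line++)
            scb[line] = new_palette;
    }

    /* Change one color entry: if all scanlines share this palette, two
       bytes are enough to recolor every pixel drawn in that position.  */
    void set_color(unsigned char palette, unsigned char position, Color value)
    {
        palettes[palette][position] = value;
    }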

Color Cycling Applications

The Settlement sequence from the "Maps on xTV" video report (see "xTV Transition Report Video") is an excellent example of how we can use color cycling techniques to do animation. The Settlement sequence depicts the expansion of settled regions in the U.S. between 1675 and 1890 with a red mass that slowly grows westward. Even though there is color movement over a large screen area, the computational requirements for this effect are minimal. It is accomplished as follows: The base image, a white outline of the United States, is actually a choropleth map with seven regions: one for each of the six depicted years, and one for the white base color. Each of the regions is drawn with a color from a different palette position. The color values for each of those positions are then set to white, matching the base color and thus making the regions invisible. The computer fades a region from white to red by simply changing the two-byte color value for that region's palette position. Doing this for each of the regions in sequence produces the growth animation effect.
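
A minimal sketch of that fade, reusing the hypothetical Color type and set_color routine from the sketch above, and assuming a four-bits-per-component color format: each settlement region is drawn in its own palette position, every position starts at white, and a region is brought up by stepping its two-byte color value from white toward red.

    /* Fade the six settlement regions in, one after the other.  Region r
       is drawn entirely in palette position r; position 0 holds the
       white base color.  Assumes set_color() from the previous sketch.  */

    #define WHITE 0x0FFF
    #define RED   0x0F00
    #define STEPS 15

    void fade_region(unsigned char position)
    {
        int step;
        for (step = 1; step <= STEPS; step++) {
            /* interpolate green and blue from full (white) down to zero (red) */
            Color gb = (Color)((STEPS - step) * 0x011);
            set_color(0, position, (Color)(RED | gb));
            /* on real hardware, wait for vertical blanking between steps */
        }
    }

    void settlement_sequence(void)
    {
        unsigned char region;
        /* start with every settlement region invisible (white on white) */
        for (region = 1; region <= 6; region++)
            set_color(0, region, WHITE);
        for (region = 1; region <= 6; region++)
            fade_region(region);
    }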

The vegetation and rainfall maps from the "Maps on xTV" video are two examples of how we can take advantage of color cycling to highlight regions. Again, the highlighting effect is accomplished simply by changing a two-byte color value for each region.

This general technique is an efficient and flexible use of the hardware. By changing just one byte of palette information we can affect up to 32K of screen memory. This gives us tremendous leverage for many graphics-bound tasks and dramatically increases the overall effective power of the IIGS. We believe color cycle animation will play an important role in the design and implementation of IIGS animated sequences.

Video Overlay

Apple engineers have designed a peripheral card for the IIGS that lets it overlay its graphics on any NTSC video source. The "Pokey" card essentially duplicates the graphics section of the IIGS, adding genlock/overlay capability. In super high-res mode, it works by substituting one of the 16 palette colors with video. Apple engineers have worked hard with Pokey's filtering to get the cleanest possible video. We are impressed with the quality they have achieved.

By using video overlay, we are able to combine computer graphics and video images. This lets us take advantage of the benefits of both media simultaneously. Video offers us high quality images and full frame real-time motion. This is well beyond the capabilities of IIGS technology. The IIGS offers us relatively high quality images that can be directly processed by the computer in response to a viewer's immediate needs. Combining these media offers an opportunity to create dynamic audio/visual presentation experiences.

Video Overlay Applications

The "Electric Chalkboard" is a simple example of how overlay can aid teachers in their presentations of video material. This feature of the xTV Workshop will provide simple draw and erase fuctionality, allowing a teacher to graphically annotate video images in real time. It is similar to the use of overlay in televised sporting events, where instant replays are often enhanced by freehand drawn marks to indicate areas of interest.

(See "Graphics", p.##, xTV Workshop Specification.)

Another important use of overlay will be in the presentation of text captions for video images. Overlay lets us display online text annotation that will be available for each video frame. Some of this text will be provided by the application product. Some may be made by a teacher or student. In either case, it will be very useful to display text with corresponding images. Overlay lets us do this efficiently. (See "Text Functionality", p.##, xTV Workshop Specification.)

With overlay, transitional effects such as wipes and fades between the computer and the video source are possible. For example, we can fade a video image to black, iris wipe in a computer graphic, dissolve a computer image to video, and so on. What we cannot do, but very much want to, is use these kinds of transitions between video frames. In order to accomplish this we need a frame buffer. (See "Frame Buffer Applications", p.##.)
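
On the computer side, one plausible way to realize such a fade is to ramp all sixteen color values of a palette toward black over successive frames. The sketch below does this with the hypothetical palettes array and color format from the color-cycling example; it illustrates the technique only, and is not Pokey's actual interface.

    /* Fade every color in one palette toward black, one notch per
       component per step, using the palettes[][] array from the
       earlier sketch.                                                  */
    void fade_palette_to_black(unsigned char palette)
    {
        int step, i;
        for (step = 0; step < 15; step++) {
            for (i = 0; i < COLORS; i++) {
                Color c = palettes[palette][i];
                Color r = (Color)((c >> 8) & 0xF);
                Color g = (Color)((c >> 4) & 0xF);
                Color b = (Color)(c & 0xF);
                if (r) r--;
                if (g) g--;
                if (b) b--;
                palettes[palette][i] = (Color)((r << 8) | (g << 4) | b);
            }
            /* show one frame per step; wait for vertical blanking on real hardware */
        }
    }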

We expect each xTV product to have some number of application specific overlays keyed to particular video frames. These could be dynamic cross-sections. They might show the inner layers of a photographed subject and give teachers or students control over the temporal display of additional information. For example, there may be a base picture showing an aerial view of the city of Palm Springs. Overlays could be used to represent the various underground water reservoirs that are located beneath the city. By selecting a region, a teacher or student could bring up information such as capacity and rate of depletion for the particular reservoir they're interested in.
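
As a sketch only, a table like the following could key a product's overlays to videodisc frame numbers; the structure and field names are hypothetical and are not part of any existing xTV data format.

    /* Illustrative record keying an application overlay to a video frame. */
    typedef struct {
        long frame;                     /* videodisc frame the overlay is keyed to */
        const char *title;              /* temporal title shown with the overlay   */
        const unsigned char *runcodes;  /* compressed overlay image data           */
        unsigned int length;            /* length of the run-code data in bytes    */
    } OverlayRecord;

    /* Linear search of a product's overlay table for a given frame. */
    const OverlayRecord *find_overlay(const OverlayRecord *table, int count, long frame)
    {
        int i;
        for (i = 0; i < count; i++)
            if (table[i].frame == frame)
                return &table[i];
        return 0;
    }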

Simple computer animation overlayed on video images is an effective way to present otherwise impossible views of a subject's inner workings. We can show a base photograph of an engine and use animated overlays to demonstrate how its various internal systems actually operate. We can show a base picture of a volcano and use overlays to reveal the internal processes that take place prior to an eruption.

We believe that overlaying real photographs with IIGS graphics can be a powerful way of connecting computer based information to real world subjects. It is an efficient use of resources since we can simultaneously take advantage of the best elements of both media -- high video image resolution and dynamic computer control.

Frame Buffer

Most teachers and students are veteran film and television viewers, and are therefore accustomed to a high standard of quality for their visual material. We deliver this quality with optically stored video.

We want to let teachers and students have direct control over their visual information. Unfortunately, there's very little we can do inside single, unprocessed video frames. A frame grabber/buffer would let us capture a video frame and manipulate it fully and directly as computer data. We think this is an optimum approach.

Frame Buffer Applications

In addition to easily being able to do everything that is possible with overlay, a frame buffer gives us significant new capabilities. For example:

Transitions

A frame buffer lets us do wipes, fades, and dissolves. These effects are extremely useful for creating diverse, dynamic video presentations. Right now we are limited to straight cuts between video frames. Any temporal video effects such as a dissolve between frames must be done in disc production and stored as running stills. A frame buffer will let us have these effects with no disc real estate cost or post-production work. This will give us tremendous leverage in producing segments with diverse rhythms, moods, and pacings. It will also help preserve visual continuity during seek/access times when either the videodisc player or the computer are unable to display an image.
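
As an illustration of the kind of processing a frame buffer would make possible, the sketch below cross-dissolves between two captured frames, assuming a hypothetical 640x480 buffer holding one byte of gray or index data per pixel, and at least one step. A real dissolve would operate on color components and be paced to the video field rate.

    /* Cross-dissolve between two captured frames by linear blending,
       one blended frame per step.  Buffer layout is an assumption of
       this sketch, not a description of actual xTV hardware.           */

    #define FB_WIDTH  640
    #define FB_HEIGHT 480
    #define FB_SIZE   ((long)FB_WIDTH * FB_HEIGHT)

    void dissolve(const unsigned char *from, const unsigned char *to,
                  unsigned char *screen, int steps)    /* steps >= 1 */
    {
        int step;
        long i;
        for (step = 0; step <= steps; step++) {
            for (i = 0; i < FB_SIZE; i++)
                screen[i] = (unsigned char)
                    ((from[i] * (long)(steps - step) + to[i] * (long)step) / steps);
            /* present one blended frame per step */
        }
    }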

While overlay provides some of this transitional functionality, it is only between computer and video images. We think it is at least equally important to be able to move flexibly between video images. (See "Video Overlay Applications", p.##.)

Windowing

A frame buffer lets us do extremely useful windowing effects with direct video images. Without any post-production work, we can do splitscreen, scroll, image zoom, image shrink, image move and image processing. With the current hardware standard, in order to accomplish these kinds of effects an image must be sampled in advance, then stored as a compressed IIGS screen on the CD-ROM, or produced as a special single frame or running segment. Being able to window with direct video will give us considerably more flexibility in the creation of dynamic segments as well as in the implementation of the user interface.
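
For example, an image shrink could be done by nearest-neighbor sampling of the captured frame into a smaller window. The sketch below assumes the same hypothetical frame buffer layout as the dissolve example; it is one simple way to do it, not a statement of how the eventual hardware will work.

    /* Shrink a full captured frame into a window at (win_x, win_y) of
       size win_w by win_h, by nearest-neighbor sampling.               */
    void shrink_into_window(const unsigned char *src, unsigned char *screen,
                            int win_x, int win_y, int win_w, int win_h)
    {
        int x, y;
        for (y = 0; y < win_h; y++) {
            for (x = 0; x < win_w; x++) {
                long sx = (long)x * FB_WIDTH  / win_w;   /* nearest source column */
                long sy = (long)y * FB_HEIGHT / win_h;   /* nearest source row    */
                screen[(long)(win_y + y) * FB_WIDTH + (win_x + x)] =
                    src[sy * FB_WIDTH + sx];
            }
        }
    }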

A frame buffer is the most direct way for us to take advantage of the high visual quality of video along with the dynamic potential of computer manipulation. We strongly recommend the incorporation of a frame buffer into the xTV classroom system.

Apple IIGS Hardware Limitations

It would seem that the Apple IIGS computer would be an excellent tool for the creation, display, and manipulation of temporal graphics, and it is, allowing for one significant limitation:

The 16-color 320x200 graphics mode in the GS, which can display attractive images, uses four times more graphics memory than a traditional Apple II. The GS is 2.5 times faster than the Apple II. That speed increase is dizzying for some applications, but for tasks which are graphics bound (as this one is), the net system performance of the Apple IIGS is about 65% of an Apple II. In its favor, the GS has a superior graphics design and a CPU with 16-bit registers. Even so, except that it can now produce attractive colors, it is still an Apple II. It should not be expected to perform the kind of computationally intensive, real-time image manipulation that we would like to see.
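
(2.5 times the speed / 4 times the graphics data = 0.625, roughly the 65% figure cited above.)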

Adding a frame buffer makes matters worse. A 640x480 frame buffer capable of displaying only 256 simultaneous colors (1 byte per pixel) is over 300K in screen memory. Increasing the graphics space by this order of magnitude makes the IIGS effectively ten times slower.
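
(640 x 480 pixels x 1 byte per pixel = 307,200 bytes, versus the 32K IIGS screen: nearly ten times as much image data to move.)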

The processing limitations of the IIGS put some severe constraints on what can be done in manipulating graphics in a presentation situation. The computer is not sufficiently powerful to show smooth changes in scale or perspective, or to process and display complex cartographic data in real-time. It is critical that the computer not delay the presentation of material. The classroom audience is highly critical and in most cases actively seeking distractions. A lethargic computer display is such an invitation.

Macintosh II

We expect a more powerful machine such as the Macintosh II to be able to handle this task. With its fast 32-bit 68020 processor and 68881 math co-processor, the Macintosh II should have enough horsepower to perform map generation based on cartographic data, and other computationally intensive real-time operations. Its high resolution screen and color depth make video quality graphics possible. Its open architecture makes it easy to add graphics co-processors. We look forward to eventually having this and even more powerful machines in classrooms.

Macintosh II with Graphics Processor

A Macintosh II with a dedicated graphics co-processor is a very attractive hardware environment for xTV. We will be able to work with paintbox quality graphics. This will make television quality production values for our computer generated segments possible. We will be able to dedicate the 68020 and 68881 to tasks that make best use of their computational power, such as figuring dynamic map projections from cartographic data, or managing the massive xTV database, rather than devoting them to the graphics drudge work.

A co-processor, like RCA's recently announced DVI system, makes possible a much larger and richer set of capabilities and interactions.

 

 

Television.

During the prototype phase of GTV, we discovered problems in using a television-based medium for map presentations to large groups. The first problem is in the inherent differences between video and paper. A paper map can be huge, having an area which is many times larger than a television screen. Paper and ink are capable of preserving minute detail. Television, by contrast, has a small display area, and resolution which is measured in dots-per-screen, rather than dots-per-inch. A paper-to-video conversion, while trivial to perform with a camera, loses a considerable amount of information and produces terrible results.

For example, on a map of the conterminous United States, small cities and rivers completely disappear. The ignore-or-exaggerate dilemma that confounds cartographers is compounded horrendously.

The other problem is in designing maps for remote viewing situations, such as classroom presentations. Typically, the map at the front of the room is just an enlargement of an ordinary map. The detailed features are usually too small to be intelligible to students from their seats. Even some of the larger graphic features become noisy and confusing with distance.

The detail which is lost with distant viewing is the same detail that is lost in the low resolution of video. (This is not surprising because television was designed for distant viewing, relying on the associated information loss to reduce the bandwidth requirements and to match the capabilities of 1940's electronics.) We should be able to solve both problems by designing new ways to show spatial relations between places. We particularly want to take advantage of new design opportunities offered by the video medium.

Video offers the ability to change an image over time. In moving from paper to video, we will be trading spatial resolution for temporal resolution. In designing the maps themselves, we can focus on specific features, and avoid the clutter that comes from designing maps for multiple purposes. We can now change the purpose of a map deliberately and dynamically in order to better communicate.

 

 

Maps on xTV.

In designing video-based maps, we are looking at some of the longstanding conventions in cartography, and identifying which of those conventions are based on sound principles, and thus worthy of respect, and which conventions are due to the biases and constraints of paper and ink, and thus worthy of rampant violation.

It is common in spatial maps to try to pack as much information into the field as possible. In temporal maps, this is not only impossible, it is also unnecessary. Many things can be shown on a map but there is no need to show everything at the same time. A temporal map, at any instant, does not need to attempt to satisfy multiple purposes. Over time, however, it will do exactly that. We believe that this ability to make information appear or disappear as needed should be recognized as a display convention for temporal maps. While certain paper conventions such as high information density make no sense for video cartography, others are reasonably adapted.

Frame. The purpose of the frame is to avoid mistaken interpretation of white space in the margins as map data. In our video presentations, the screen margins and bezel will act effectively as the frame. No other graphics frame is needed. This is assuming that our maps fill the entire video frame. Given the difficulty of reading maps on video, we believe this will be the general case; however, it is possible that maps will be used as smaller components of composite images. In this case, it may be necessary to have some sort of frame to isolate the map data.

Scale. An indication should be made of the correspondence of map dimensions to the world, with a formula for converting inches to miles, or centimeters to kilometers. In our video maps, it isn't possible to state the formula, because the size of the display screen is unknown to the computer. However, we can label a line segment as being equivalent to some number of miles or kilometers. That line will correctly indicate the scale, regardless of screen size.

Title. A map should have a descriptive title. Because the focus of a temporal map changes over time, its title changes as well. There is no need to put a place-name on top of a place (and thus obscure it). The place is simply highlighted, and the title contains the verbal identification. Graphics are stressed over verbal content.

Legend. Where a non-obvious use of color or symbolism is used, a descriptive key should be provided. This key should follow the temporal convention of appearing and disappearing as needed, so the person using the map need only see a description of what's important to him or her at any given time.

Directional arrow. An indication of North should appear. It can be omitted if North is at the top of the map, or if the information is implied by lines of latitude and longitude, or if a pole is displayed.

Features such as the scale and the directional arrow do not need to be on the screen permanently. When scale is needed, the title indicates the length of the scale line in miles or kilometers, and the scale line is displayed and highlighted. When scale is unimportant (usually) it is not displayed.

It may be the case that distant temporal maps, as a medium, will have a bias that favors presentation of patterns and distributions over locations, and so may tend to be more effective in developing analytical skills than navigational skills. This is due to the medium's inability to present high-frequency localization data without visual confusion, while at the same time being able to present many different views and interpretations of a location or region.

Choropleth techniques are very effective on temporal maps, especially if they are colored well. Choropleth techniques can be applied to a wide variety of data. They are easy to read from a distance, and are very good at communicating distributions and patterns.

Other presentations, such as dots, circles, isolines, and icons, will tend to work less well because they don't satisfy the viewing conditions or video resolution. Exaggerated lines can be effective in showing systems and networks in a general way where detail is not a concern (hydrology and railroads, for example). Lines can also show boundaries (but not for purposes like counting the States).

The above examples represent just the beginning of our explorations. We want to depart even further from conventional cartography. We look forward to more experimentation and discovery in the use of temporal presentations for delivering analytical, conceptual, and spatial understanding.

 

 

Realization.

Base maps and overlays are created computationally, or are created with a paint program, perhaps using paper maps as the original source. Each map or overlay will include a title that can be displayed temporally. Overlays will have a reference to a base map which can be from a computer or video source. When an overlay is displayed, it can be selected to always refresh its base map, or to refresh the base map only when the base map isn't already there.

The maps are compressed into run-codes, and stored on CD-ROM or floppy. (Because maps and overlays can come from floppy disk, they can be made by third-party publishers or by teachers or students.)
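
As a sketch only, the encoder below shows the general idea of run-length compressing screen bytes into (count, value) pairs; the actual xTV run-code format is not specified here, and this scheme is simply one illustration of the technique.

    /* Illustrative run-length encoder: emits (count, value) byte pairs,
       with runs capped at 255.  Returns the number of bytes written.   */
    long rle_encode(const unsigned char *src, long src_len, unsigned char *dst)
    {
        long in = 0, out = 0;
        while (in < src_len) {
            unsigned char value = src[in];
            long run = 1;
            while (in + run < src_len && src[in + run] == value && run < 255)
                run++;
            dst[out++] = (unsigned char)run;
            dst[out++] = value;
            in += run;
        }
        return out;
    }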

Like any other object in the xTV Workshop environment, maps and overlay graphics are accessed by playlist commands or via remote control. Complex overlays can be constructed out of simple ones during playlist programming. This involves selecting a base map, one or more overlays, and a new title.

(For a complete discussion of how this is handled, see "Playlists", p.##, and "Light tables", p.##, xTV Workshop Specification.)

A special case overlay is the video overlay. It is a variable-size window which can be opened anywhere on a computer graphic, letting a videodisc image shine through. (Because the xTV classroom system currently lacks any kind of video scaling hardware, it isn't possible to reduce a video frame to fit the window. Any such tricks need to be done as a part of the disc production.)

When it is time to show an overlay or map, it is decompressed and matted or placed into display memory. The GS will be able to handle a small variety of transition effects (wipes and fades).
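
Continuing the run-code sketch above, decompression is the simple inverse: each hypothetical (count, value) pair is expanded directly into display memory. A real matte operation would additionally skip a designated transparent value rather than copying it.

    /* Illustrative decoder for the (count, value) run-code sketch,
       expanding straight into a destination buffer in display memory.  */
    void rle_decode(const unsigned char *src, long src_len, unsigned char *dst)
    {
        long in = 0;
        while (in + 1 < src_len) {
            unsigned char count = src[in++];
            unsigned char value = src[in++];
            while (count--)
                *dst++ = value;
        }
    }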

Since it is a function of the xTV Workshop, the "Electric Chalkboard" will always be available for instant annotation of a video or computer generated image. Other graphics tools anticipated in the Workshop include:

Text Captioning

There will be a simple character generator that will allow teachers or students to add titles to their images. We assume that font sizes will usually need to be large in order to facilitate distant viewing.

Markers

In addition to text captioning, teachers or students will be able to annotate their images with simple iconographic markers. These might be dots used to indicate longitude and latitude or arrows to indicate "you are here."

Any changes or new material that a teacher or student makes for an image will be saved by the Workshop.

 

 

Graphics.

Temporal implementation will also work well with charts and graphs, which don't seem to suffer from the same viewing-condition problems that we have with maps. That is because graphs are usually designed for the same viewing conditions that we are concerned with: a boardroom isn't that different from a classroom. We can thank the business community for providing us with considerable experience in presentation graphics. The most important goal in designing a good graph is to communicate a position, trend, or situation. There is no compulsion to try to tell the entire story with a single graph.

Graphs, like maps, can be prepared with paint programs or other programs, and compressed. In that form, our presentation software will be as useful for graphs as it is for maps.

Graphs are also simple enough that they can be created computationally in real time. Graph procedures can be built into simulations.
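
As a sketch of such a procedure, the routine below fills vertical bars into a 320x200 image. For clarity the image is kept here as one byte per pixel, even though the real IIGS screen packs two pixels per byte, and the values are assumed to lie between zero and max_value.

    /* Illustrative real-time bar graph: fill count vertical bars,
       scaled against max_value, leaving a two-pixel gap between bars.  */

    #define GW 320
    #define GH 200

    void draw_bars(unsigned char *image, const int *values, int count,
                   int max_value, unsigned char color)
    {
        int bar_w = GW / count;
        int i, x, y;
        for (i = 0; i < count; i++) {
            int h = (int)((long)values[i] * GH / max_value);  /* bar height in pixels */
            for (y = GH - h; y < GH; y++)
                for (x = i * bar_w; x < (i + 1) * bar_w - 2; x++)
                    image[(long)y * GW + x] = color;
        }
    }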

 

 

Literature.

In February 1987, Don Holtgrieve conducted a literature search in order to determine if there was information published on video or television maps for multiple person audiences. The Index of Applied Science and Technology was consulted with negative results. The 1980 to 1987 issues of the following journals were consulted: The Cartographic Journal of the British Cartographic Society, The American Cartographer of the American Congress of Surveying and Mapping, Cartographica of the Canadian Cartographic Association, the Annals of the Association of American Geographers, and the Journal of Geography of the National Council for Geographic Education. In all cases there was information on computer cartography including the production of maps on CRTs, but none on the use of maps for television, videotape, or videodisc.