We have been a self-funded research lab since 1983.
We agree with Schumpeter that entrepreneurs drive the economy in new directions the big guys can’t.
InScape, 1989: Non-Immersive Virtual Reality
Hardware/Software Combination; research-turned-product
InScape was the first commercially available non-immersive virtual reality infrastructure: all the math and ideas that went into VR CAVEs (immersive, “holodeck”-like rooms), a year or two before they appeared.
At the time, LEEP optics stretched very low-resolution head-mounted displays (a few hundred pixels wide) to cover as much of the visual field as they could, and we couldn’t find many real-world applications where the position of your body with respect to the data was the essential issue; architectural walk-throughs and video games, perhaps.
Instead, InScape wrapped the full million pixels of a standard desktop display around just the model you were interested in; here, an options portfolio value optimization tool described later in this document.
Custom viewing transforms were built and injected into the rendering pipeline, updated in real time to project the viewed volume (generally mostly inside the monitor) onto the location of the physical screen itself. One looked into the monitor as if it were a fish tank with the model inside: live, animated, and available for interaction.
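In modern terms the trick is an off-axis (“fish-tank”) projection: as the head moves, the view frustum becomes asymmetric so the near plane stays registered with the physical screen. A minimal sketch under our own assumptions (NumPy, OpenGL-style matrix conventions; the original IRIS GL code is not reproduced here):

```python
import numpy as np

def fishtank_projection(eye, screen_w, screen_h, near, far):
    """Asymmetric ("off-axis") frustum for fish-tank VR.

    `eye` is the viewer's position in screen space: x and y measured
    from the screen center, z the distance from the screen plane
    (z > 0).  Returns a 4x4 OpenGL-style projection matrix that keeps
    the rendered volume registered with the physical screen as the
    eye moves.  Illustrative only, not InScape's actual code.
    """
    # Screen edges relative to the eye, scaled back to the near plane.
    s = near / eye[2]
    left   = (-screen_w / 2 - eye[0]) * s
    right  = ( screen_w / 2 - eye[0]) * s
    bottom = (-screen_h / 2 - eye[1]) * s
    top    = ( screen_h / 2 - eye[1]) * s

    # Standard glFrustum matrix built from those edges.
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])
```

With a fresh `eye` estimate each frame, the model appears to sit inside the monitor rather than behind a window.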
Historical note: developed on the first Silicon Graphics computer, the IRIS, serial number 49.
The Cricket, c. 1990: A Hardware Handle, Input and Output Affordance for Virtual Realities
InScape environments had objects in them that needed to be manipulated, so we developed a real-world handle.
At the time, all available 3D mouse devices looked like a desktop mouse with extra loops or surfaces added so one could grab it; apparently nobody realized this was uncomfortable and unnatural. We made the Cricket fit the hand in the upright, angled-forward posture the hand naturally takes when brought to the space in front of the body.
Patent: Three-Dimensional Mouse with Tactile Feedback
US Patent 5,506,605 was a fundamental patent in the field, with over 600 citations as of 2018 (most patents are cited rarely, if at all). It was cited, for example, by Nintendo’s Wii device patent, which reproduced essentially every feature of the Cricket with a single button moved (our somatically and semantically appropriate grip button became their awkward second trigger); we did not realize the likely infringement until just before 5,506,605 expired.
Realizing that a primary function was to grip objects, and that the bottom three fingers of the hand typically operate together, we created a grip button along most of the bottom of the handle.
Another key function was pointing, and the natural human digit for indicating things is the index finger; we added a trigger under it.
This left the most agile digit of the hand, the thumb, free, so we developed a full 3D manipulating pad under it: essentially a flat joystick that also sensed downward pressure and even cushioned slightly downward for passive haptic feedback. Ted Selker independently developed IBM’s similar TrackPoint device years later.
Research at two universities and with an internationally known hand surgeon helped us realize that the thenar eminence (the fleshy bump of muscle below your thumb) has almost as many touch sensors as any part of your body. So we added an active tactile feedback vibrator: not just a motor with an offset weight, but a custom-modified tactile display that could render any waveform in the range of 0–500 Hz.
It won an I.D. Magazine Design Distinction award in their 40th Annual Design Review.
The Monkey, c. 1994: An Animator’s Hardware Input Device—Shaped Like the Task
Positioning a human figure on a screen with a 2D mouse is a tedious task, like trying to be a choreographer without talking to the dancers and directing only with a bar of soap on a tabletop—the input device has nothing to do with the task.
The Monkey came from the simple realization that people couldn’t reach into the screen to move the figure—but we could bring the figure out!
We placed 100 of these devices (not easy for a $20,000 mouse in the early ’90s) at studios like Will Vinton (of dancing raisins and M&Ms fame), Industrial Light and Magic, and Disney.
An “inverse robot,” the Monkey was a sixteen-inch-tall puppet with high-resolution analog conductive-plastic resistors on each of the forty joints most often used by the extensive group of animators we polled.
The Cyclops, 1997: A Cinematographer’s Input Device—Shaped Like the Task
Positioning a virtual camera in a computer-animated world was an equally tedious task, typically solved by algorithms that took all the human touch out of the motion, leaving it floating or rubbery.
We realized that real-world cinematographers had decades of experience programmed into their muscle memory—they tapped into it when they tracked a complicated motion using a tripod head’s grip…
…but only when that grip was in their hands. So we put it there.
The Cyclops mounted a very early LCD display on the tripod to stand in for the camera viewfinder, and used the same angle encoders we used on the Monkey to capture all three degrees of rotational freedom; we added a slider near the thumb for zooming.
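A minimal sketch of the kind of mapping involved, with encoder resolution, axis order, and all names our own assumptions (the Cyclops’s actual conventions are not documented here):

```python
import numpy as np

COUNTS_PER_REV = 4096  # hypothetical encoder resolution

def encoder_to_radians(counts):
    """Convert a raw encoder count to an angle in radians."""
    return 2 * np.pi * counts / COUNTS_PER_REV

def camera_orientation(pan, tilt, roll):
    """Compose pan (yaw), tilt (pitch), and roll into one rotation matrix.

    The axis order is an assumption: a tripod head pans about the
    vertical axis first, then tilts, then rolls.
    """
    cy, sy = np.cos(pan),  np.sin(pan)
    cp, sp = np.cos(tilt), np.sin(tilt)
    cr, sr = np.cos(roll), np.sin(roll)
    yaw   = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # about y
    pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # about x
    rollm = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # about z
    return yaw @ pitch @ rollm

def slider_to_fov(t, wide=60.0, tele=5.0):
    """Map the zoom slider's 0..1 position to a field of view in degrees."""
    return wide + t * (tele - wide)
```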
Tide Stain Detective, c. 1996: Ubiquitous Computing—long before the Internet of Things…
Biochemistry Information Exactly Where It’s Needed: The Laundry Room, with Procter and Gamble Research
This might have been the world’s first widely deployed task-specific information hardware utility, pre-dating the Internet of Things wave by decades: P&G was to have done a test run of 40,000 units if we could manufacture them for $10 each; alas, we couldn’t break $13.
Procter and Gamble hired dozens of biochemists for years to discover the best way to remove hundreds of common stains from scores of fabrics with things available in most households (greatly to their credit, not only P&G products). But removal sometimes took several steps, and few people had access to this very useful, very structured database.
We designed, executed, and programmed the first Tide site. It was one of the first five consumer sites, back when the Web was a new technology. They were so pleased they almost let us talk them into a more interesting technology: Ubiquitous Computing.
Just after the Newton and Palm Pilot general-purpose digital assistants came out, and perhaps a decade before any task-specific information-utility devices were made for the general public, we designed and engineered this 6″-wide device, thinking of it as an active refrigerator magnet for your washing machine.
Even the buttons were purpose-engineered: while the inexpensive button technology available was contacts under deformable plastic bumps, we realized those bumps could be shaped like rocker switches when the task was to scroll the four-line LCD display, and like little arrows when the intent was to allow someone to pick a line (existing calculator-like devices had simple round bubbles for buttons).
TextArc, 2000: A Tool for Structural Literary Analysis
Patented in 2003: US 20030235807
TextArc is a graphical index that can be built for any text. It generalizes to show the index, frequency of use, and distribution of any set of entities with respect to a linear ordering of another class of entities that contain them.
It draws the lines of the entire text in a tiny font, wrapped into a clock-like ellipse inscribed within the display bounds. This results in an unreadable mess; to make it useful, TextArc also:
• Draws each significant word at the centroid of its usages in the text, pulling it close to where it’s used most (see the sketch after this list).
• Draws those words larger and brighter if they’re used more.
• Draws rays from those words (when hovered over in the interactive program) to where they’re used.
• Draws star-like “distribution glyphs” (in the print version) next to each word to show where it’s used.
• Draws each word in a color related to position: hue is determined by where it falls in the text, and saturation by how concentrated its usage is in one place; this lets words common to one area be spotted more easily when used elsewhere.
• Draws background-color outlines around words to separate them from the words behind and make them more readable.
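A minimal sketch of the placement idea, as our own reconstruction rather than the patented implementation (names and constants are assumptions):

```python
import math
from collections import Counter

def layout_words(words, width, height, min_count=3):
    """TextArc-style word placement: each token gets a position on an
    ellipse inscribed in the display, proportional to its index in the
    text; each significant word is then drawn at the centroid of its
    occurrences' positions, sized by frequency."""
    n = len(words)
    cx, cy, rx, ry = width / 2, height / 2, width / 2, height / 2

    def ellipse_point(i):
        # Map text position to an angle, starting at 12 o'clock.
        theta = 2 * math.pi * i / n - math.pi / 2
        return cx + rx * math.cos(theta), cy + ry * math.sin(theta)

    counts = Counter(words)
    placed = {}
    for word, count in counts.items():
        if count < min_count:
            continue  # skip insignificant words
        pts = [ellipse_point(i) for i, w in enumerate(words) if w == word]
        x = sum(p[0] for p in pts) / count   # centroid of usages:
        y = sum(p[1] for p in pts) / count   # close to where it's used most
        size = 8 + 2 * math.log(count)       # larger if used more
        placed[word] = (x, y, size)
    return placed
```

A word used throughout the text lands near the center; a word used in one chapter is pulled toward that chapter’s arc of the ellipse.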
TextArc has been exhibited at The Museum of Modern Art, in a solo show at Arizona State University, at the Chelsea Art Museum, and at Google’s New York headquarters, and it won the Grand Prize [non-interactive] at the 2002 [6th] Japan Media Arts Festival, among many other appearances “in person,” on the Web, and in books.
Richer background and interactive, deeply-zoomable images of this and other “research becomes art” pieces are at W. Bradford Paley’s art-related site.
Content-Guided Search and Scan (CoGSS), c. 2003 [Concept and conceptual architecture]
CoGSS is a search tool designed for medium-sized document libraries. It is tuned for library exploration and senior management use.
The layout is an easy-to-use left-to-right refinement search, with four panes:
• Common Terms: the search tool suggests what to look for
• Search Criteria: multiple ways to look for a document
• Document List: the list of documents that meet the criteria
• Document Preview: a curated excerpt of a document
Each pane has special features; we discuss them in the order the workflow proceeds: from left to right.
The Common Terms pane may have three lists in it, each one showing many possible search terms (e.g. Health, Atkins Diet, Organic, February 2003). The three lists are currently “Favorites” (a list that people can populate themselves), “Business-Wide Terms” (terms that commonly occur within the business), and “Terms in Selected Documents.”
This last list is rather special: it looks through all of the documents in the current “winnowed down” list and shows the terms most commonly used in just those documents. It helps people see what sorts of topics remain, and provides “proven” (known-to-exist) terms to refine the search; it changes every time the Document List changes.
Contents of the lists may change depending on which search field is currently active; e.g. if the Author search field is active, the Business-Wide Terms list may contain only valid author names. Lists may also be winnowed by typing characters into a “filtering” field at the top of each list: as each letter is typed, the list shrinks to only those terms that contain the typed sequence.
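A minimal sketch of both behaviors, with function names and document structure assumed (the CoGSS concept specifies behavior, not code):

```python
from collections import Counter

def terms_in_selected_documents(documents, top_n=20, stopwords=frozenset()):
    """The "Terms in Selected Documents" list: recomputed every time
    the winnowed Document List changes, counting terms only in the
    surviving documents and surfacing the most common ones."""
    counts = Counter(
        term
        for doc in documents          # only the winnowed-down list
        for term in doc["terms"]      # assumed pre-tokenized terms
        if term not in stopwords
    )
    return [term for term, _ in counts.most_common(top_n)]

def filter_terms(terms, typed):
    """Winnow a term list as each character is typed into its
    filtering field: keep only terms containing the typed sequence."""
    typed = typed.lower()
    return [t for t in terms if typed in t.lower()]
```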
The Search Criteria pane has type-in fields for each criterion allowed in the searches, headed by one overall Google-like “search anywhere” field (which matches text in any of the other fields). Other common fields include Title, Author, Key Words, Text Body, Date Range, and Size Range. The last two range fields are assisted by active histograms, visible immediately under each field: sweeping out a range on a histogram automatically enters it into the range field, so the histogram not only gives context but provides an easy way to enter search limits.
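The sweep-to-range mapping might look like this, in our own formulation (a simple linear mapping from pixels to the attribute’s domain is assumed):

```python
def sweep_to_range(domain_min, domain_max, x0, x1, pane_width):
    """A sweep from pixel x0 to x1 under a histogram drawn across
    pane_width pixels becomes a (lo, hi) value range, dropped
    directly into the corresponding search field."""
    lo, hi = sorted((x0, x1))
    def to_value(x):
        return domain_min + (x / pane_width) * (domain_max - domain_min)
    return to_value(lo), to_value(hi)
```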
A special “Co-occurrence” triple field lets people enter two words separated by a distance (e.g. “Organic” 5 “Food”), matching any document in which the first word occurs within the specified number of words of the second.
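A minimal sketch of the co-occurrence test that field implies (tokenization and names assumed):

```python
def cooccurs(tokens, word_a, max_dist, word_b):
    """Does any occurrence of word_a fall within max_dist words of
    word_b?  E.g. cooccurs(doc, "organic", 5, "food") for the triple
    "Organic" 5 "Food", assuming lower-cased tokens."""
    pos_a = [i for i, t in enumerate(tokens) if t == word_a]
    pos_b = [i for i, t in enumerate(tokens) if t == word_b]
    return any(abs(i - j) <= max_dist for i in pos_a for j in pos_b)
```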
All fields may be filled by typing into them; the Document List shrinks as each character is typed. Fields may also be filled by dragging search terms from any of the Common Terms lists. More search fields, specific to a given application (e.g. “Project”), may be added to this pane.
The Document List shows a constantly updated list of all documents that meet all of the search criteria. Each document is represented by a roughly rectangular object that has critical defining information about the document, for instance its title. These objects are listed in an outline-like indented hierarchy whose intermediate headings remain only so long as they have any selected documents underneath them.
The document objects are rich information displays in themselves: each one can be thought of as a miniature map of where the search terms occur in the document. Each search term has a characteristic color (e.g. “Organic” might be green, and “Food” might be brown), and vertical stripes are painted into the document object at positions proportional to where the term occurs in the document. This way, without even retrieving the document, people can see how often a word occurs and how it is distributed; e.g. if a word occurs 30 times in a chapter near the front, there will be a concentration of stripes of the right color near the left of the document object. Even more valuable: by sweeping out a range in the document object, the corresponding excerpt of the real document is downloaded and displayed in the last pane.
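A minimal sketch of the stripe map and the sweep-to-excerpt behavior, with tokenization and names assumed:

```python
def stripe_positions(doc_tokens, query_terms, obj_width):
    """For each search term, the x-pixel positions of its stripes in
    the document object, proportional to where the term occurs."""
    n = len(doc_tokens)
    return {
        term: [int(i / n * obj_width)
               for i, t in enumerate(doc_tokens) if t == term]
        for term in query_terms
    }

def excerpt_for_sweep(doc_tokens, x0, x1, obj_width):
    """Sweeping a pixel range across the document object fetches the
    corresponding slice of the real text for the preview pane."""
    lo = int(min(x0, x1) / obj_width * len(doc_tokens))
    hi = int(max(x0, x1) / obj_width * len(doc_tokens))
    return " ".join(doc_tokens[lo:hi])
```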
The Document Preview pane is filled with a whole document (if the document object is clicked) or an excerpt (if a range has been swept out). In a final implementation, the document or excerpt would carry the same color coding as the search terms, allowing easy spotting of the concepts people care about.
The information used in the text extract (and in all of the computational indexing and analysis) would come from plain-text extracts created to reside in the original database alongside the original formats. In this way we could do full-text searches across the full variety of formats people use to create documents (e.g. Excel, Word, PowerPoint, e-mail, Web pages). Scanning and color-coding would operate on the raw text of the extract, staying consistent with the rest of the tool; but when a document was downloaded, it would arrive in its original format.
Bandwidth limitations may prevent the real-time display of some of the data (e.g. word co-occurrence filtering and word position stripes in the document objects). But that data could still be displayed by user request (e.g. a button click causing the information to be generated on the server, then sent to the interface).