Curatorial Insights – Computer History Museum
https://computerhistory.org

Echoes of History
https://computerhistory.org/blog/echoes-of-history/
June 14, 2023

Three generations of Sutherlands visit CHM to see Jim Sutherland's 50-year-old creation, the ECHO IV home computer system.

Visits with computing pioneers are magical. Recently, Jim Sutherland, creator of one of CHM’s most intriguing artifacts, came to the Museum’s environmentally controlled storage facility with his son and grandson to see something he made over half a century ago.

Sutherland was the visionary engineer who, in 1965, built a home computing system based on minicomputer parts he had scavenged from work. He called it ECHO IV, an acronym for the “Electronic Computing Home Operator.”

ECHO quickly caught the attention of the media, appearing in dozens of publications. Like some of today's coverage of new technology, the tone vacillated between wonder and irony. Even Jim's wife Ruth remarked at the time, "At first, I thought it might really replace me!" Read the full story here.

The Sutherland family in front of ECHO IV. Jim sits at ECHO IV’s keyboard. His wife, Ruth, puts a raincoat on daughter Sally, while Jay and Ann look on. (Photo: Pittsburgh Post-Gazette, 1966)

Jim’s son Jay and grandson Evan took a transcontinental flight to visit CHM and see, perhaps for the last time, this wonderful invention of nearly 60 years ago. It was deeply moving to witness Jim’s joy at rediscovering something he had not seen in decades, seeing his pride at showing his grandson what he had built, and hearing Jay’s detailed memories of using ECHO IV as a young boy of about Evan’s age.

Occasions like this are great opportunities for revisiting the history of specific objects and asking questions of their creators. As former CHM Trustee Donna Dubinsky once said, “We live in an era when we can ask the great inventors of our days directly about their work . . . imagine being able to go up to Michelangelo and ask him questions.” And so, earlier that day, while Jay and Evan were on a guided tour of CHM, I conducted an extended oral history with Jim about ECHO IV as he sees it from today’s perspective. Stay tuned!

Main image: From left, ECHO IV, Jim, Jay, and Evan Sutherland at CHM, May 18, 2023.

 

Weather By Computer
https://computerhistory.org/blog/weather-by-computer/
June 7, 2023

Using homegrown satellite communications equipment in the early '60s, the CDC 1604 laid the foundation for modern weather forecasting tools.

Laying the Foundation

Established on the Monterey Peninsula in 1961, the Fleet Numerical Weather Facility (FNWF), known locally as Fleet Numerical, was chartered to apply the newly emerging processing power of digital computers and communications technology to provide accurate weather and ocean condition prediction services to the US Navy. 

Based on the Naval Postgraduate School (NPS) campus and at a facility on Point Pinos in Pacific Grove, and using homegrown satellite communications equipment and Model #1, Serial #1 of the Control Data Corporation 1604 computer, FNWF laid the foundations for modern weather forecasting technology.

The Fleet Numerical Weather Facility

The Navy established a Numerical Weather Problems Group (Project NANWEP) in Suitland, MD, in 1958 to generate operational weather prediction products for the Navy. To take advantage of the computing capability at NPS, in March 1959, the Navy assigned NANWEP to Monterey under Capt. Paul M. Wolff, where, in 1961, it was renamed the Fleet Numerical Weather Facility (FNWF).

Wolff distinguished FNWF’s mission from other activities in the field: “Atmospheric and oceanographic analysis and prediction problems have been faced before — in the universities, in industry, in governmental agencies. To my knowledge, however, FNWF acts singularly in its treatment of the two fluids as a single, coupled system. Correct solutions to environmental problems demand this approach.” [1]

The Navy Acquires a Supercomputer

The NPS Department of Mathematics purchased its first electronic automatic digital computer, a National Cash Register NCR 102A in 1953. It was used in practically all phases of the physical sciences, including early approaches to weather simulation.

NCR 102A at NPS. Photo by Dean Vannice. Source: Calhoun: The NPS Institutional Archive

Pioneering computer architect Seymour Cray and his team built the world's first commercially successful transistorized computer at Control Data Corporation (CDC) in Minneapolis in 1959. The central computer weighed one ton, and the console half a ton. With a cycle time of 5 microseconds, the 48-bit CDC 1604 was claimed to be the fastest machine of its time.

CDC 1604 transistorized logic module. Source: Calhoun: The NPS Institutional Archive

In 1958, when the Bureau of Ships contracted to acquire ten 1604s from CDC, Cray lobbied for the first system to be delivered to Monterey. [2] And in January 1960, he personally supervised the installation of Model #1, Serial #1 of the CDC 1604 in Room 101A of Spanagel Hall.

The CDC 1604 is delivered and assembled in Spanagel Hall. Source: Calhoun: The NPS Institutional Archive

“I was there when Cray sat at the 1604 console and, like a master pianist, ran through the test programs,” said Edward Norton Ward, a mathematician and the first computer technician hired by Professor W. R. Church, Chairman of the Mathematics Department. “I watched and listened. When it’s raining knowledge, you just hold out your hand.” [3]

Lt. Harry Nicholson at a 1604 console. Capt. Nicholson served as Commanding Officer of FNOC from 1982–86. Source: Calhoun: The NPS Institutional Archive

According to Professor Douglas Williams, who became Director of the NPS Computer Center in 1963, "It was used by submitting machine language programs on paper tape. There was no operating system and no assemblers, compilers, or utilities. I obtained a Fortran compiler—folklore says it was written by Seymour Cray." [4] Despite its limitations, the 1604 boasted impressive computing power for the time, with 32,768 words of 48-bit core main memory and 100,000 computations per second.

CDC 1604 tape drive storage units. Source: Calhoun: The NPS Institutional Archive

In August 1960, the Monterey Peninsula Herald reported that FNWF demonstrated “the first surface weather map to be produced by a computer … it cuts the time for compiling hemispheric weather forecasts from hours to minutes. And is 40 percent more accurate than old hand methods.”

Printout shows the temperature at the sea surface and various depths. Source: Weather by Computer

FNWF acquired its own CDC 1604 in 1961, which was installed alongside the school's machine in the converted lobby of the first floor of Spanagel Hall. The complete system, incorporating a CDC 160A for data transmission, a Varian 530 plotter, ASR-33 teletype machines, and tape storage drives, is shown below.

Diagram of the Control Data computer system at FNWF. Source: Weather by Computer

In 1963, CDC published a report, Weather By Computer, that described the FNWF operation and programs written for the 1604 that generated a broad range of weather predictions for naval operations worldwide.

Weather By Computer. Source: Computer History Museum

To eliminate conflicts between the immediate demands of computer processing time for weather prediction and the teaching needs of the school, in 1964, FNWF built a dedicated computer center on the NPS campus. To handle the increased computational load, a CDC 3200 computer (a 24-bit version of the 1604) was purchased in October and was running at full capacity by the end of the year.

Also, in 1964, FNWF established a separate Communications Division to design, fabricate, and test special-purpose electronic communications and interface devices to serve unique requirements for receiving and transmitting real-time weather data. Administrative offices, workshops, and R&D laboratories were located in a former Navy radar training facility on Point Pinos at 1352 Lighthouse Avenue, Pacific Grove.

When NPS acquired an IBM 360 Model 67 in 1967, Model #1, Serial #1 of the CDC 1604 was transferred to FNWF and moved to Point Pinos, where it continued to serve for archival storage of weather data.

All FNWF operations were consolidated at a single site at the Navy Annex, Monterey Airport, in 1974, where today, as the Fleet Numerical Meteorology and Oceanography Center (FNMOC), it serves as a primary DoD production site for computer-generated meteorological and oceanographic analysis and forecast products worldwide. The operation is highly respected, and its computing capability ranks as one of the most powerful in its field in the world.

Main image: The CDC 1604 supercomputer with a figure for scale. Source: Wikipedia. https://en.wikipedia.org/wiki/CDC_1604

 

NOTES

[1] Wolff, Paul M., "Oceanographic data collection," Bulletin of the American Meteorological Society, Vol. 49, No. 2 (February 1968), p. 96.

[2] Created by Congress in 1940, the Bureau of Ships' responsibilities included supervising the design, construction, conversion, procurement, maintenance, and repair of ships and other craft for the Navy; managing shipyards, repair facilities, laboratories, and shore stations; developing specifications for fuels and lubricants; and conducting salvage operations.

[3] Honegger, Barbara, "NPS Computing: 50 Years Golden and Growing," Calhoun: The NPS Institutional Archive.

[4] Douglas Williams, Interviews, Calhoun: The NPS Institutional Archive.

[5] The Computer History Museum collection holds a 1604 main cabinet, all three sections of the operator’s console, and the core memory unit, together with numerous other related documents and manuals.

A Backup of Historical Proportions
https://computerhistory.org/blog/a-backup-of-historical-proportions/
May 10, 2023

Discover what surprises await in CHM's release of the Xerox PARC file system archive.

Access the Xerox PARC file system archive here.

An Ancient Anxiety

“Is my phone really backed up in the Cloud?” “When was the last time I backed up my laptop?” “Is it true that I need a local backup of my Google Drive?!” “Oh dear, I forgot my password!” Now that we have interwoven computers so deeply into our daily lives, an ancient anxiety has become a fiercer everyday companion for us. For centuries we have worried “Are my most precious records okay?” In the past, we calmed this anxiety using a variety of technologies: safety deposit boxes, shoe boxes, photo albums, photocopies, scriptoria, institutional archives, and more. In a world of digital computing, we are all too aware of the fragility of record keeping. In some ways, our ancient anxiety has expanded.

Scriptoria were dedicated spaces for the copying of manuscripts, making this a drawing of a 16th century backup. From the National Gallery of Art. https://www.nga.gov/collection/art-object-page.74850.html

Computing professionals have been living with this digital flavor of archival anxiety for longer than the rest of us. From the very beginning, the fluidity and fungibility of digital information came with fragility. Making matters worse, many of the means for holding and storing digital information were less reliable and much harder to work with than today’s. As a result, computing professionals met their anxiety about—and real challenges of—digital fragility with a new discipline: They started to make purposeful copies. They began to back things up.

A Laboratory for the Office of the Future

PARC in 2022. Cmichel67, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons

In 1970, the well-heeled corporate behemoth Xerox, with a nearly perfect monopoly on the quintessential office technology of photocopying, cut the ribbon on a new and ambitious bet on its future: the Xerox Palo Alto Research Center (PARC). PARC was a large research and development organization composed of distinct laboratories. Several, like the General Science and Optical Science Laboratories, concentrated on extending Xerox's dominance of photocopying. Others, specifically the Computer Science and Systems Science Laboratories, were aimed at a new goal. They would develop computer hardware and software that could plausibly form the basis for the "office of the future" some ten to fifteen years hence, giving Xerox a profound head start in this arena. The previous year, Xerox had leapt into the computer industry through the purchase, for an enormous sum, of the company SDS.

The leadership of PARC scoured the computing community across the United States and recruited what proved to be an astonishing collection of young talent. Part of the attraction PARC held for this cohort was, surely, the fact that the new laboratories held the opportunity to pursue a vision about the future of computing that they already held deeply. In this future, computing would be increasingly personal, graphical, interactive, and networked. Xerox’s deep pockets, and a PARC leadership that shared this vision, proved compelling.

Backing Up the Office of the Future

At PARC, the new recruits wanted the same sort of computing environment they had been familiar with in their academic research: a PDP-10 mainframe from the Digital Equipment Corporation running the timesharing TENEX operating system from BBN. Xerox refused. It had just purchased SDS, a maker of timesharing computers, and couldn't countenance such a major purchase from its prime competitor. The PARC computing crowd responded by simply building their own clone of a PDP-10, calling it MAXC (an eye-poking pun on the name of SDS founder Max Palevsky), and installing TENEX. Immediately, they began to back up what they were creating with MAXC. Using a TENEX program named BSYS, the PARC researchers could store their data and programs on 9-track magnetic tapes. Tape backups had arrived at PARC.

A 9-track tape drive in the collection of the Computer History Museum. https://www.computerhistory.org/collections/catalog/102752062

The next several years, through 1975, saw a remarkable flourishing of computing developments at PARC. The researchers created the Alto computer and a swath of novel software for it that, through the subsequent decades, has broadly defined our use of computers. To learn more about this remarkable story, you might start here. Critical to the use and success of the Alto were PARC's innovations in computer networking, specifically the creation of Ethernet for wired connectivity. Ethernet wove the many Altos across PARC together, further connecting them to what were by then two MAXC systems as well as a variety of printers. Moreover, PARC researchers developed the PUP networking protocol, allowing Xerox to knit together many local Ethernet networks across the US into a sprawling corporate internetwork.

Individual Alto users could store and back up their files in several ways. Altos could store information on removable “disk packs” the size of a medium pizza. Through the Ethernet, they could also store information on a series of IFSs, “Interim File Servers.” These were Altos outfitted with larger hard drives, running software that turned them into data stores. The researchers who developed the IFS software never anticipated that their “interim” systems would be used for some fifteen years.

With the IFSs, PARC researchers could store and share copies of their innovations, but the ancient anxiety demanded the question: “But what if something happened to an IFS?!” Here again, Ethernet held a solution. The PARC researchers created a new tape backup system, this time controlled by an Alto. Now, using Ethernet connections, files from the MAXC, the IFSs, and individuals’ Altos could be backed up to 9-track magnetic tapes. Later, at the end of the 1970s, the PARC researchers even developed a new program called ARCHIVIST, which ran on a more powerful successor to the Alto known as the Dorado. ARCHIVIST automated the process, allowing researchers to archive to and retrieve files from the IFSs by sending simple commands through electronic mail.
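
To make the mechanism concrete, here is a minimal sketch of the idea of driving an archival service with commands embedded in mail messages. The command names and behavior are invented for illustration; ARCHIVIST's actual syntax is not shown in this post:

```python
# A toy illustration of an email-driven archive service, in the spirit of
# ARCHIVIST. Command names and behavior are invented, not ARCHIVIST's own.

ARCHIVE = {}  # stand-in for tape or file-server storage: name -> contents

def handle_message(sender: str, body: str) -> str:
    """Parse one incoming mail message and compose a reply."""
    replies = []
    for line in body.splitlines():
        parts = line.split(maxsplit=1)
        if len(parts) != 2:
            continue
        command, name = parts[0].lower(), parts[1]
        if command == "archive":
            ARCHIVE[name] = f"(contents of {name})"  # pretend to copy to tape
            replies.append(f"archived {name}")
        elif command == "retrieve":
            replies.append(ARCHIVE.get(name, f"no such file: {name}"))
        else:
            replies.append(f"unrecognized command: {line}")
    return f"To: {sender}\n\n" + "\n".join(replies)

print(handle_message("user@parc", "archive memo.bravo\nretrieve memo.bravo"))
```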

From Backup to Migration

Nearly a decade later, at the close of the 1980s, PARC’s researchers increasingly adopted commercially produced computers from outside the company, rather than the Altos, Dorados, and other systems that they had devised in-house. These outside computers were new workstations produced by a local firm, Sun Microsystems. While the Sun systems were directly inspired by the Altos, they brought PARC closer to the computing mainstream through Sun’s embrace of the Unix operating system and microprocessors. This shift to Sun implied yet another wrinkle for PARC’s solutions to its archival anxieties.

A Sun workstation in a Stanford laboratory. https://www.computerhistory.org/collections/catalog/102657163

By the start of the 1990s, PARC’s computer researchers began storing their information on new Unix-based servers using Sun’s Network File System (NFS) protocol, which has gone on to be a standard for Unix and Linux systems worldwide. These new PARC NFS servers used 8mm digital tape cassettes for backup. MAXC was decommissioned, and no one used the ARCHIVIST system anymore. PARC had accumulated an impressive thicket of 9-track magnetic tapes holding backups of programs, data, messages, and documents from the astonishing contributions of PARC to computing across the 1970s and 1980s, but now no one was using the 9-track tape systems anymore. With this, a particularly horrible aspect of the ancient archival anxiety came to the fore: “What if I lose the key to my lock box?” “What if I can’t access my backups anymore?” Now backup’s twin, migration, took center stage. PARC’s computer crowd wrote fresh programs that migrated the data from the 9-track tapes to the new 8mm digital tape cartridges, which they also used for their NFS servers. The older tapes were discarded, and the 8mm tapes of this remarkable record of the work of the 1970s and 1980s then sat for another decade.

An 8mm data tape cartridge. Mister rf, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons

Migration Springs Eternal

Like hope, migration springs eternal. In 2003, the researchers at PARC realized that, while 8mm tapes were still in use, other media were becoming more popular. To keep the archive of PARC’s astonishing accomplishments accessible, migration would again be necessary. In that year, PARC researcher Dan Swinehart approached Al Kossow to tackle the challenge. Kossow was then a senior engineer at Apple Computer and already known as a passionate preservationist of both computer hardware and software, especially around the Alto. Kossow was able to transfer all the data from the 8mm tapes to a set of DVD-ROM discs. Again, this unique archive for the history of computing was safe, sound, and accessible—strictly within the confines of PARC.

A few years later, in 2006, Kossow joined the Computer History Museum (CHM) full-time as the Robert N. Miner Software Curator. When he had worked on the migration of the PARC archive to DVD, Kossow had created an extra CD-ROM onto which he had copied almost 15,000 files relating to the work done specifically on the Alto in the 1970s and 1980s, reflecting his keen appreciation for the importance of the Alto in the history of computing. Now at CHM, he and others began an effort to see if PARC would be willing to donate the extra Alto CD-ROM to CHM, and thereby open it up to the world. Testifying to the perseverance of CHM and the sagacity of PARC, the Alto archive CD-ROM was donated to CHM in 2011 with permission to make it public.

Public Translation

CHM now faced a major challenge. How could this nearly four-decade-old software, data, and information be made accessible to today's public? The information was created with a now deeply obsolete experimental computer, with research software that no one had touched in decades. Much of the Alto archive was in now-arcane formats for printing like "Press" or for document editing like "Bravo." Certainly, you couldn't provide the public with working Altos to read the archive.

Paul McJones (right) in 1973, with Edsger Dijkstra. https://mcjones.org/dustydecks/archives/2011/04/

An answer to the dilemma came in 2013, through the contributions of Paul McJones. McJones is a retired software engineer and established software preservationist who had met many (future) PARC researchers while he was working at Berkeley in the 1960s and 1970s. In the second half of the 1970s, McJones had done programming for the new division set up by Xerox to commercialize PARC's computer innovations. He worked with many former PARC researchers again on projects at DEC's laboratory in Palo Alto, and later as a principal scientist at Adobe.

During 2013, McJones crafted a program that processed the Alto archive, creating HTML translations of Bravo files and PDF translations of Press files, and organizing them into a set of web pages for access, search, and browsing. With this further act of migration qua translation, the Alto archive was at last ready to share with the world, and in 2014 https://xeroxalto.computerhistory.org/ went live.
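
In spirit, the program was a batch pipeline over the archive's file tree. The sketch below is a guessed-at illustration of that shape, not McJones' actual code: walk the files, translate the formats you recognize, copy the rest, and emit an index page for browsing. The converter and the ".bravo" extension are hypothetical stand-ins:

```python
from pathlib import Path

# A guessed-at sketch of an archive-translation pipeline, not McJones'
# actual program: walk the tree, convert known formats, copy the rest,
# and build a simple HTML index for browsing.

def bravo_to_html(data: bytes) -> str:
    # Stand-in converter: a real one would interpret Bravo's embedded
    # formatting codes rather than just escaping the raw text.
    text = data.decode("ascii", errors="replace")
    return "<pre>" + text.replace("&", "&amp;").replace("<", "&lt;") + "</pre>"

def translate_archive(src: Path, dst: Path) -> None:
    entries = []
    for path in sorted(p for p in src.rglob("*") if p.is_file()):
        out = dst / path.relative_to(src)
        out.parent.mkdir(parents=True, exist_ok=True)
        if path.suffix == ".bravo":               # hypothetical extension
            out = out.with_suffix(".html")
            out.write_text(bravo_to_html(path.read_bytes()))
        else:
            out.write_bytes(path.read_bytes())    # pass through unchanged
        entries.append(out.relative_to(dst))
    links = "\n".join(f'<li><a href="{e}">{e}</a></li>' for e in entries)
    (dst / "index.html").write_text(f"<ul>\n{links}\n</ul>\n")
```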

From the Alto Archive to the PARC File System Archive

Since its launch, the Alto archive has proved essential for efforts at both CHM and the Living Computer Museum (LCM) in Seattle. At LCM (which sadly closed during the COVID pandemic), senior software engineer Josh Dersch used the archive and Al Kossow's Bitsavers repository to build ContrAlto, an emulator for the Xerox Alto that can run on contemporary computers and, in turn, run the software found in the Alto archive. At LCM, ContrAlto was a key part in an impressive Alto restoration that visitors could use. At CHM itself, the Alto archive proved indispensable to a number of projects, ranging from its own restoration of an Alto to a major event on the history of the Alto and a series of video ethnographies of software innovations on the Alto.

Charles Simonyi (standing) and Tom Malloy demonstrate the Bravo word processor on a restored Alto for a 2017 Computer History Museum event. Courtesy Doug Fairbairn.

But what of the rest of the PARC archive from the 1970s and 1980s that resided on the sixteen or so DVDs that remained sitting in a box? Could the rest of the archive be collected by CHM and, through it, be released to the public? Did the archive contain sensitive personal information that should not be released? Did it contain intellectual property that was still vital for PARC, or that was owned by others? Could these types of materials be identified and filtered out?

Once again, Paul McJones offered his expertise and help. Acting as a CHM volunteer, he entered into an NDA (non-disclosure agreement) with PARC enabling him to work there on the remaining archive. He copied the archive from the DVDs to a contemporary hard drive and identified personal files that should be filtered out. He used his translation and organization program to make the remaining archive readable and accessible and transferred it to PARC researchers and legal staff for review. Eventually it was sent to CHM. The resulting archive of nearly one hundred fifty thousand unique files of PARC’s groundbreaking work from the 1970s and 1980s arrived at CHM on a thumb drive and could now be made available to the public.

A screenshot of a Press file, detailing PARC backup procedures, now rendered as PDF in the new archive.

With the new archive, new challenges arose in preparing it for public release. Paul McJones’ program could convert Press and Bravo files to PDF and HTML, making them readable, but not the Tioga files found in great abundance in the new archive. Tioga is the file format for a successor text editor to Bravo that the PARC researchers had created and used extensively in the 1980s. A significant fraction of the archive remained inscrutable. This time, Josh Dersch, the creator of the Alto emulator, answered the call. He was able to supply logic for Paul McJones’ program to render Tioga files as HTML documents. The archive was finally unlocked.

The PARC File System Archive, Unlocked

The nearly one hundred and fifty thousand unique files—around four gigabytes of information—in the archive cover an astonishing landscape: programming languages; graphics; printing and typography; mathematics; networking; databases; file systems; electronic mail; servers; voice; artificial intelligence; hardware design; integrated circuit design tools and simulators; and additions to the Alto archive. All of this is open for you to explore today at https://info.computerhistory.org/xerox-parc-archive. Explore!

One thing that is missing, hopefully temporarily, is the set of files related to the critically significant programming language and environment Smalltalk. Smalltalk is a key piece in both the history of object-oriented programming and that of the graphical user interface. The Smalltalk materials in the archive are currently under review by the company Cincom, which owns significant intellectual property rights in Smalltalk and markets Smalltalk-based software globally today. An additional unresolved question is what 8mm tape backups may remain at PARC from the NFS servers, holding the archives of work done at PARC across the 1990s and into the new millennium. It is a topic for further investigation.

Exploring the Archive: The Unexpected Story of Interscript

What kinds of discoveries await in https://info.computerhistory.org/xerox-parc-archive? I'd like to share something surprising and fascinating that I came across in the archive—a new story that has enriched my view of a tremendously important topic in the history of computing. I hope that it might inspire you to find your own discoveries in the archive.

Take a moment to consider how most writing occurs today. What tools do people most commonly use? Pencil and paper? Pen or brush and ink? Compare that to all the writing that we do through computing: taps on a keyboard—physical or onscreen—assembling texts, messages, posts, mail, lists, and documents of a bewildering assortment. Think also of voice-to-text, itself a kind of writing, sending messages, submitting search queries, and the like. In many parts of the world today, I think it's very safe to say that most writing takes place through computing. How did this happen? One thing is certain: it did not happen on its own. How did we make computers write? This question animates my new book project, and while it is at an early stage, one finding is absolutely clear: Many of the most innovative minds in the history of computing have devoted an extraordinary amount of time and energy to this very project of making computers write.

One of the episodes in this long historical project is the creation of PostScript, a coding language that let computers produce high-quality printed pages. It acted as a common language that let you print exactly what you wanted, no matter which computer, app, or printer you happened to be using. I wrote about the story of PostScript a few months ago, when CHM released the source code for PostScript in connection with the fortieth anniversary of Adobe, the company that made it. While you may not be familiar with PostScript, you are certainly intimately aware of a technology that developed directly out of it: PDF.

Adobe cofounders John Warnock (left) and Chuck Geschke (right). Courtesy Adobe Inc. and Doug Menuez.

Adobe was founded in 1982 by two Xerox PARC computer researchers, Chuck Geschke and John Warnock, and their first order of business was to create PostScript. The reason was that the pair had worked with others—Butler Lampson, Bob Sproull, and Brian Reid—on a very similar project at PARC, the coding language Interpress. While Interpress differed from PostScript in some aspects of fundamental approach, the intention behind Interpress was exactly the same: creating a coding language for the high-quality printing of documents. Computers, programs, and printers that could “speak” Interpress would be able to cooperate seamlessly. The Interpress effort had started in Geschke’s laboratory at PARC in 1979, and by 1981 it had reached an advanced state of development. Leadership at Xerox had even agreed that Interpress would become the whole corporation’s standard, but that this would take years to happen. Concerned about that slow pace in the face of rapid developments in computing, Geschke and Warnock left PARC in 1982, forming Adobe to get a standard coding language for printing quickly into the world.

What I stumbled across in https://info.computerhistory.org/xerox-parc-archive reveals part of the story of what happened next for the researchers who had worked on Interpress and who remained at PARC. This was a new effort, initially called InterDoc, later Interscript, that aimed to do for editable documents just what Interpress and PostScript did for printable documents. Perhaps the same approach—creating a new coding language for the interchange of documents between various computers and apps—could work here as well.

The promise of Interscript, in two figures from a 1985 Xerox document. https://bitsavers.org/pdf/xerox/interscript/IntroductionToInterscript.pdf

The Interscript effort, as electronic mail held in the archive shows, really took off in 1981 as the success of Interpress became clear. Spurred by Butler Lampson and Jim Mitchell, the project also included Brian Reid, who had worked with Lampson on Interpress, as well as Bob Ayers and Jim Horning, who worked especially closely with Mitchell. The Interscript project ran from 1981 into at least early 1984.

An email exchange in the archive, documenting the emergence of the first name for the effort, InterDoc.

What the Interscript team immediately discovered was that editable documents presented a greater challenge than printable documents. Editable documents were inherently dynamic. By definition, they were going to be subject to constant change. And these changes were not just about what words they contained and in what order. These changes were also about the organization and appearance of the text, the layout, from outlined or numbered text to headlines, captions, illustrations, columns, and the like. Creating a coding language that could contend with such dynamic complexity was a true challenge.

Furthermore, the editors that were emerging in the first half of the 1980s ran from the rudimentary to the elaborate. This spread of editor functions was itself another challenge. How could an interchange format work from the simplest to most complex editors? How could simple editors just work on the parts of a document that they could, but leave everything else alone?

The tree structure of Interscript’s layout templates. https://bitsavers.org/pdf/xerox/interscript/IntroductionToInterscript.pdf

To meet the challenges, the Interscript team again turned to computer science. Not only would they turn to a new coding language as part of the solution, but they would also look to one of the key organizational forms of computer science, the “tree” data structure. In it, elements are connected to one another in a hierarchy, just like the trunk of a tree leads to branches, and on to sticks and then twigs, each step in greater profusion. Very roughly put, Interscript would capture the possible layouts of an editable document as a tree structure of possible templates. Careful control algorithms would then guide the “pouring” of the text into the proper templates of the tree. These scripts would allow the editable document to be reconstituted, edited, and then shared between different computers and programs.
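
To make the idea concrete, here is a minimal sketch of a tree of layout templates with text "poured" into its leaves. The class and field names are invented for illustration and bear no relation to Interscript's actual notation:

```python
# A minimal sketch of the tree-of-templates idea. All names are invented
# for illustration; this is not Interscript's actual design or notation.

class Template:
    """A node in the layout tree: a page, a column, a heading, and so on."""
    def __init__(self, kind, children=None, capacity=None):
        self.kind = kind                # e.g., "page", "column", "heading"
        self.children = children or []  # sub-templates, in layout order
        self.capacity = capacity        # max characters a leaf can hold
        self.content = ""               # text poured into a leaf

def pour(template, text):
    """Pour text into the tree, filling leaf templates in layout order.
    Returns whatever text did not fit anywhere."""
    if not template.children:           # a leaf: accept up to capacity
        take = len(text) if template.capacity is None else template.capacity
        template.content, text = text[:take], text[take:]
        return text
    for child in template.children:     # an interior node: delegate in order
        text = pour(child, text)
    return text

# A one-page layout: a heading above two columns.
page = Template("page", children=[
    Template("heading", capacity=40),
    Template("column", capacity=400),
    Template("column", capacity=400),
])

leftover = pour(page, "Interscript sketch. " * 50)  # overflow would go to a next page
```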

A portion of a November 1983 Tioga document, rendered as HTML in the archive, summarizing some of the milestones and motivations of the Interscript project.

Although significant progress had been made on designing Interscript into early 1984, the effort then appears to have ended abruptly. While Butler Lampson, in a recent telephone interview with me, holds that it ultimately ended because it was "naïve" given the complexity of editable documents, another factor was that, at the end of 1983, the Computer Science Laboratory at PARC descended into chaos. This was the laboratory that housed the Interscript project, and its charismatic leader Bob Taylor abruptly resigned, soon followed by half of its technical staff. Lampson left to rejoin Taylor, and Mitchell, temporarily thrust into the position of manager for the unravelling laboratory, himself quickly departed for Acorn Computers.

Unsolved Problems

Remarkably, Lampson explains, no one has yet solved the problem that Interscript set out to address. We still lack a common format for editable documents that handles layout well. In his view, only partial and de facto solutions exist. Microsoft Word, itself originally based directly on the Bravo editor from Xerox PARC, has become a de facto standard, but only because any editor needs to work with Word documents if it is to be commercially viable. And even so, we all know how layout suffers when moving a document from one editor to another. PDF, with its roots in printed documents, only succeeds in limited ways with editing. For his part, Mitchell believes that the fundamental approaches of Interscript had great promise, and that if they had been more diligently pursued by PARC and Xerox, our lives with electronic documents could have been much different, and for the better.

So here, in a single, small directory in https://xeroxparcarchive.computerhistory.org, lies a fascinating story about making computers write, and an unsolved problem within it. Who knows, perhaps the person who finally solves it will find inspiration in the archive.

Acknowledgements

This archival project, and this article, would have been impossible without the efforts of:
Al Kossow
Paul McJones
Hansen Hsu
Josh Dersch
Butler Lampson
Jim Mitchell
“JKF,” the author of the 1991 README in the rosetta.tar file of the archive at PARC, who is very likely James K. Foote
Tim Curley
Heather Walker
Eric Bier
John Kitchura
PARC, A Xerox Company
Xerox

Additional Resources

David C. Brock, “50 Years Later, We’re Still Living in the Xerox Alto’s World,” https://spectrum.ieee.org/xerox-alto

The Alto in CHM’s flagship exhibition, Revolution: The First 2000 Years of Computing, https://www.computerhistory.org/revolution/input-output/14/347

A selection of video recordings featuring an Alto computer restored by CHM, https://youtube.com/playlist?list=PLQsxaNhYv8dbSX7IyztvLjML_lgB1C_Bb

A 1986 lecture by Alan Kay, “The Dynabook—Past, Present, and Future,” https://www.youtube.com/watch?v=GMDphyKrAE8&list=PLQsxaNhYv8dbIuONzZcrM0IM7sTPQFqgr&index=8

A 1986 lecture by Butler Lampson, “Personal Distributed Computing – The Alto and Ethernet Software,” https://www.youtube.com/watch?v=h33A-KWJKDQ&list=PLQsxaNhYv8dbIuONzZcrM0IM7sTPQFqgr&index=9

A 1986 lecture by Chuck Thacker, “Personal Distributed Computing – The Alto and Ethernet Hardware,” https://www.youtube.com/watch?v=A9n2J24Jg2Y&list=PLQsxaNhYv8dbIuONzZcrM0IM7sTPQFqgr&index=10

Main Image: The Xerox PARC File System Archive, newly released by the Computer History Museum.

 

The Remarkable Ivan Sutherland
https://computerhistory.org/blog/the-remarkable-ivan-sutherland/
February 21, 2023

CHM releases to the public for the first time a full oral history with Ivan Sutherland, pioneer of computer graphics, virtual reality, asynchronous systems, and more.

The post The Remarkable Ivan Sutherland appeared first on CHM.

]]>
In His Own Words

Ivan Sutherland has blazed a unique trail through computing over the past six decades. Along the way, he helped to open new pathways for others to explore and dramatically extend: interactive computer graphics, virtual reality, 3D computer graphics, and asynchronous systems, to name but a few.

The Computer History Museum is delighted to make public its two-part oral history with Ivan Sutherland, one of the most influential figures in the story of computing to date. These new oral history interviews present a wonderful opportunity to learn more about Ivan Sutherland’s life in computing directly from the source, with his own reflections and interpretations and in his own words. The transcripts for these interviews can be viewed and downloaded here and here. And the full interview video can be viewed below.

The Museum is deeply grateful to Bob Sproull, a lifelong colleague of Sutherland and himself a major figure in computing, for his roles as instigator, interviewer, and editor for these oral histories, and for involving me, Marc Weber, and Jim Waldo in the effort. The Museum is also delighted to make these oral history interviews public during the 60th anniversary year of Ivan Sutherland’s breakthrough in interactive computer graphics, the program Sketchpad, for which he earned his PhD from MIT in 1963.

A Man of Many Parts

There is a phrase, far more popular in 17th and 18th century England than it is today, that recurs for me when thinking about Ivan Sutherland and the remarkable story of his life in computing: “A man of many parts.” The description was used for an individual who had made serious contributions to a domain, while also possessing multiple, and often diverse, talents and pursuits. The description fits Ivan Sutherland well, but I think it also misses something important: there is a commonality in Sutherland’s multiple contributions and accomplishments, a connective tissue or shared wellspring for his many parts.

To get at this wellspring, start with geometry. From his youth, Sutherland possessed an unusually keen spatial, geometric intuition. In his mind and at his hands, he experienced an immediacy in perceiving how things fit and worked together. Perspective drawing involves a set of techniques to represent a three-dimensional scene on the two-dimensional plane of a sheet of paper or a stretch of canvas. These renderings can proceed in different ways, determined by the number of vanishing points employed. Together the vanishing points define the viewpoint of the observer. One-point, two-point, and three-point perspectives are all very different, providing distinct ways to understand the represented scene.

Thomas Eakins, Untitled (Perspective study of boy viewing object), Hirshhorn Museum and Sculpture Garden, Smithsonian Institution, Washington, DC, Gift of Joseph H. Hirshhorn, 1966. Accession Number 66.1553.A-B

This switching of viewpoints, the ability to look at something from a fresh and unexpected angle, and then to integrate this new perspective with those that came before, seems to me the link between Sutherland's unusual spatial intuition and his diverse contributions in computing. It's an ability to find a new viewpoint on a subject, to look at it from this novel perspective, and then to explore how this vantage might change the subject itself through fresh solutions and directions.

Connections and Intersections

Ivan (left) and Bert Sutherland at the Computer History Museum in 2017. Courtesy Doug Fairbairn.

In what follows, I trace some of the lines of Sutherland’s story, intersecting them with related materials held in the Museum’s collection. As recounted in his oral history interviews, Ivan’s life in computing was profoundly shaped by interactions he and his brother Bert had with two central figures in the early history of computing: Edmund Berkeley and Claude Shannon. Bert, who went on to a remarkable career in computing himself, distinguished by his roles as a research manager at Xerox PARC and at Sun Laboratories, told his story in his own oral history with the Museum.

The Sutherland brothers, through a connection of their mother’s, began visiting Edmund Berkeley in New York City from their home in Scarsdale while Ivan was still in grade school. At the time, Berkeley was establishing himself as a leading author, publisher, and consultant for the new world of digital computers. In Berkeley’s offices, the Sutherland brothers encountered his light-seeking robot “Squee,” now in the collection of the Computer History Museum, which also holds some of Berkeley’s papers.

Berkeley’s “robot squirrel,” Squee. Computer History Museum, B1630.01, © Mark Richards. https://www.computerhistory.org/collections/catalog/B1630.01

The Sutherland brothers worked on their own versions of light-seeking robots afterward at home, using surplus parts their engineer father helped them to source in New York City, and the pursuit became rather long-lasting for Ivan. As an undergraduate engineering student at Carnegie Tech (today's Carnegie Mellon University), and then again during his early stint as a graduate student at Caltech (before moving to MIT after one year), Sutherland continued to build more advanced, refined light-seeking robots of his own design. The reason? Aesthetics, he explains in his oral history. For Sutherland, engineering design has a strong aesthetic dimension. Beauty and simplicity gave the practice of engineering an aesthetic, an affective pull. "In fact, I think that engineering and art are very closely related," he explains.

Ivan Sutherland discusses a surplus military gunsight computer his father installed for the brothers in the family kitchen. From the new CHM oral history.

In Berkeley’s offices, the Sutherland brothers also had the opportunity to work with his new creation, Simon, a very simple and inexpensive computer. Unlike the giant mainframes of this era, which relied on thousands of vacuum tubes, Simon was animated by a handful of inexpensive relays—simple electrical on/off switches. Nevertheless, the machine was able to perform mathematical and logical operations.

Berkeley’s Simon. Computer History Museum, 102627259, © Mark Richards. https://www.computerhistory.org/collections/catalog/102627259

Further, Simon was programmable, using instructions encoded on a punched paper tape. During his high school years in the 1950s, Ivan Sutherland was able to devise a working program for Simon, allowing it to perform division, quite a feat for the humble machine. “I’m quite proud of having written a division routine for a two-bit computer when I was in high school,” he explains in the oral history. “So I can almost literally say I’ve been in the computer business nearly all my life.”

Through Berkeley, the Sutherland brothers were introduced to another key figure in the early years of digital computing: Claude Shannon, renowned for his development of information theory. While a maestro of abstraction, Shannon was also a keen builder. During a visit to Shannon's office at the Bell Telephone Laboratories in northern New Jersey, he showed the brothers his creation Theseus. It consisted of a small maze of movable metal panels affixed to the top of a metal box containing magnets and relay electronics like those in Berkeley's Simon. Through the action of the relay electronics and magnets, a toy mouse was able to find its way through the maze and then "remember" the successful route. While the Sutherland brothers were duly impressed, their attempts to recreate this early effort in machine problem-solving and artificial intelligence proved unsuccessful.

Claude Shannon with Theseus. Computer History Museum, 102630792. https://www.computerhistory.org/collections/catalog/102630792

Breakthrough at MIT

After graduating from Carnegie Tech, Ivan Sutherland headed to Caltech for graduate studies in electrical engineering. There, as he recounts in his oral history, he was invited to attend a lunch with Marvin Minsky and Oliver Selfridge, two central figures in digital computing at MIT and the new field of artificial intelligence. Over the meal, Sutherland listened to Minsky and Selfridge’s enthusiastic reports of new computer developments at MIT and its Lincoln Laboratory. Adding to Sutherland’s excitement about the computer activity at MIT was the fact that Claude Shannon had moved there. Sutherland quickly decided to continue his graduate work at MIT, and Shannon agreed to advise him.

Two unidentified women at the controls of the TX-2 in 1962. http://www.bitsavers.org/pdf/mit/tx-2/photographs/2022-10-31/P91-206_RR_127176.jpg

Once at MIT, Sutherland met with Wesley Clark, the designer and impresario of an immensely powerful experimental computer, the TX-2, at MIT’s Lincoln Laboratory. Clark had designed the TX-2 incorporating two critical innovations in computer component technology: high-speed switching transistors and large capacity magnetic core memories. The machine would provide valuable lessons about the use, capabilities, and potential of these new technologies.

A transistorized “flip-flop” logic module from the TX-2. Computer History Museum, 102732767. https://www.computerhistory.org/collections/catalog/102732767

But perhaps more importantly, for Clark the TX-2 had the potential to make real a kind of computing that could become more widespread in the future. As Sutherland explains in his oral history, “Wes took TX-2 and treated it as a window into the future of what computing might be if everybody had one of his own.” Sutherland proposed to use TX-2 to create software for generating engineering drawings. Without hesitation, Clark gave him access to the machine.

Ivan Sutherland discusses the origins of Sketchpad in his new CHM oral history.

In January 1963, Ivan Sutherland successfully completed his PhD on the system he created on the TX-2, Sketchpad. With it, a user was able to interactively, and in real time, create line drawings on the computer’s CRT screen, using a light pen for direct input on the display. Sketchpad afforded many different capabilities for working with these line drawings, such as the automatic completion of shapes, sizing, the ability to copy and repeat elements, and more.

Ivan Sutherland using Sketchpad on the TX-2, circa 1962-1963. Computer History Museum, 102652182. https://www.computerhistory.org/collections/catalog/102652182

A Sketchpad image, from Ivan Sutherland’s dissertation, 1963. Computer History Museum, 102726907. https://www.computerhistory.org/collections/catalog/102726907

For Sutherland, and for many others who experienced and learned about it, Sketchpad represented much more than just a new way to create line art. As he put it in his thesis, "The Sketchpad system makes it possible for a man and a computer to converse rapidly through the medium of line drawings. Heretofore, most interaction between men and computers has been slowed down by the need to reduce all communication to written statements that can be typed; in the past, we have been writing letters to rather than conferring with our computers… The Sketchpad system… opens up a new era of man-machine communication." A listing of Sutherland's source code for Sketchpad in the Computer History Museum's collection is available here, and his 1994 lecture about the history of Sketchpad can be viewed here.

Innovation in the Military

After MIT, Sutherland fulfilled his ROTC commitments to military service by serving in the US Army, first at the NSA, where he continued work on computer graphics, and then as the second director of the Information Processing Techniques Office of ARPA, the Advanced Research Projects Agency of the Department of Defense. Only in his mid-twenties, Sutherland succeeded the MIT psychologist J. C. R. Licklider, who had established the office and its leading role in supporting computer science and artificial intelligence research in the nation.

While Sutherland continued many of Licklider's projects at ARPA, he added new projects of his own to the mix. Critically, Sutherland supported a new effort by Wesley Clark, the designer of the TX-2, who had moved from MIT to Washington University in St. Louis. Clark had created an innovative small computer for an individual user called the LINC, especially suited to the real-time needs of biomedical research, and moved the project and team to St. Louis. (Clark discusses the history of the LINC in a 1986 talk here.) Now, Clark envisioned an entirely new approach to computer design. In it, computers would be built up from distinct units, each unit providing an entire function. In this way, computers could be composed in a flexible and bespoke manner, built with just what was needed for some use, no more. Clark called the approach macromodules, and Sutherland funded the research.

Wesley Clark (left) and Charles Molnar (right) with a LINC computer. Molnar was a key figure in the macromodule research with Clark. Computer History Museum, 102680046. https://www.computerhistory.org/collections/catalog/102680046

The researchers in Clark’s macromodule effort succeeded in building a variety of different units, such as the one below donated to CHM by Ivan Sutherland that performed addition. The modularity of this new approach entailed a radical departure in digital computing design. In the mainstream, all the operations of computers were coordinated by following the regular beat of a single electronic signal, the “clock.” For the macromodule approach, an alternate, asynchronous approach to the orchestration of computer operations was required. The practical challenges and the theoretical potentials of asynchronous systems became a central passion and focus for Ivan Sutherland thereafter.
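
A rough software analogy may help. The sketch below models, in Python threads, the request/acknowledge handshake that self-timed designs use in place of a shared clock; real macromodules realized this discipline in hardware, and all names here are invented for illustration:

```python
import threading

class Channel:
    """A toy self-timed link: a four-phase request/acknowledge handshake
    stands in for a shared clock. Names invented for illustration; real
    macromodules implemented this discipline in hardware."""
    def __init__(self):
        self.cv = threading.Condition()
        self.req = False   # sender's "data is valid" signal
        self.ack = False   # receiver's "data was taken" signal
        self.data = None

    def send(self, value):
        with self.cv:
            self.data, self.req = value, True          # raise request
            self.cv.notify_all()
            self.cv.wait_for(lambda: self.ack)         # wait for acknowledge
            self.req = False                           # withdraw request
            self.cv.notify_all()
            self.cv.wait_for(lambda: not self.ack)     # link back at rest

    def receive(self):
        with self.cv:
            self.cv.wait_for(lambda: self.req)         # wait, however long
            value = self.data
            self.ack = True                            # acknowledge receipt
            self.cv.notify_all()
            self.cv.wait_for(lambda: not self.req)
            self.ack = False                           # return link to rest
            self.cv.notify_all()
            return value

def adder(operands, results):
    while True:                                        # an "addition module"
        results.send(operands.receive() + operands.receive())

operands, results = Channel(), Channel()
threading.Thread(target=adder, args=(operands, results), daemon=True).start()
operands.send(2)
operands.send(3)
print(results.receive())  # -> 5, whenever the adder finishes; no clock anywhere
```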

An addition function macromodule from Wesley Clark’s research group. Computer History Museum, 102766550. https://www.computerhistory.org/collections/catalog/102766550

Computer Graphics at Harvard and Utah

After his appointment at ARPA, Sutherland accepted a tenured engineering faculty position at Harvard University. There, Sutherland expanded his graphical ambitions from the two-dimensional abilities of Sketchpad to the concept of three-dimensional graphics and a new interface for experiencing them. Sutherland created a laboratory of graduate and undergraduate students alike, aimed at creating views of 3D scenes—drawn with lines—as well as a display worn on the head that would present different views of the 3D scene depending on the direction that the user looked. By the close of the 1960s, they had a system in place that could do just that. This project is frequently cited as an early milestone in the history of virtual reality. Sutherland discusses the project and its relation to virtual reality in this 1996 lecture.
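
The underlying geometry can be sketched compactly. Assuming a drastically simplified model (this is not Sutherland's actual pipeline), each endpoint of a wireframe is rotated by the head's current orientation and then perspective-projected onto the display plane, so turning the head yields a different 2D drawing of the same 3D scene:

```python
import math

# A drastically simplified illustration of the head-mounted display idea,
# not Sutherland's actual pipeline: rotate each 3D point by the head's
# yaw, then perspective-project it for a 2D line-drawing display.

def project(point, head_yaw, focal=1.0):
    """Rotate a point about the vertical axis by head_yaw, then project."""
    x, y, z = point
    c, s = math.cos(head_yaw), math.sin(head_yaw)
    xr, zr = c * x + s * z, -s * x + c * z
    if zr <= 0:
        return None                              # behind the viewer: clipped
    return (focal * xr / zr, focal * y / zr)     # perspective division

# One edge of a wireframe cube, four units in front of the viewer.
edge = ((-1.0, -1.0, 4.0), (1.0, -1.0, 4.0))

for yaw_degrees in (0, 15, 30):                  # the viewer turns their head
    yaw = math.radians(yaw_degrees)
    print(yaw_degrees, [project(p, yaw) for p in edge])
```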

The head-mounted display from Sutherland’s Harvard project. Computer History Museum, 102680042. https://www.computerhistory.org/collections/catalog/102680042

Students of USC professor and VR researcher Scott Fisher image Sutherland’s head-mounted display at the Computer History Museum in 2022 for a project to recreate his laboratory in virtual reality.

Some early results of the USC virtual reality effort.

Soon afterward, Sutherland left Harvard for the University of Utah, and for a new startup he was cofounding to pursue systems for 3D computer graphics. The key partner for Sutherland in both moves was David C. Evans, an accomplished computer researcher. Evans was establishing a computer science department focused on 3D computer graphics, the same focus as the company he was starting with Sutherland. The new company, Evans and Sutherland, moved quickly to produce workstations for creating 3D graphics, beginning with the LDS-1 and then moving on to the very successful Picture System. Other products and efforts became essential to computer animation and to military pilot training.

A page from a brochure for the Picture System. Computer History Museum, 102646288. https://www.computerhistory.org/collections/catalog/102646288

Sutherland and Evans also fostered a remarkably productive and creative community of students in computing and especially computer graphics at Utah, counting the cofounders of Adobe, Pixar, Silicon Graphics, and more among its members. Some of these figures discussed this remarkable environment in a 1994 meeting.

Sutherland’s experiences through his time in Utah comprise just the first half of his story in computing and engineering. Beyond it lies another startup, a faculty career at Caltech, a revolution in VLSI microchip design, a walking-robot project at Carnegie Mellon, venture capital investing, a consulting firm that became the basis for Sun Laboratories, and fresh contributions to asynchronous systems that continues to this day at Portland State. For these stories, Sutherland’s new oral history interviews (Part 1 and Part 2) are an incredible source, as are this event with the Sutherland brothers in 2004 and this retrospective lecture by Ivan Sutherland at the Computer History Museum in 2005.

Main image: Ivan Sutherland. Photo credit: BBVA Foundation.

 

The Lisa: Apple's Most Influential Failure
https://computerhistory.org/blog/the-lisa-apples-most-influential-failure/
January 19, 2023

CHM publicly releases the source code to Apple's Lisa computer, including its system and applications software.

Happy 40th Birthday to Lisa! The Apple Lisa computer, that is. In celebration of this milestone, CHM has received permission from Apple to release the source code to the Lisa software, including its system and applications software.

Access the code here.

What is the Apple Lisa computer, and why was its release on January 19, 1983, an important date in computer history? Apple’s Macintosh line of computers today, known for bringing mouse-driven graphical user interfaces (GUIs) to the masses and transforming the way we use our computers, owes its existence to its immediate predecessor at Apple, the Lisa. Without the Lisa, there would have been no Macintosh—at least in the form we have it today—and perhaps there would have been no Microsoft Windows either.

From DOS to GUI

From the late 1970s and even into the early 1990s, a majority of personal computer users interacted with their machines via command-line interfaces: text-based operating systems such as CP/M and MS-DOS in which users had to type arcane commands to control their computers.

Apple II ProDOS Command-line interface. The catalog command shown lists the files on the current disk. Public domain.

The invention of the GUI, especially in the form of windows, icons, menus, and pointer (WIMP), controlled by a mouse, occurred at Xerox PARC in the 1970s, on the Alto, a computer with a bitmapped graphics display designed to be used by a single person, i.e. a “personal computer,” despite the research prototype’s high cost. Key elements of the WIMP GUI paradigm, especially overlapping windows and popup menus, were invented by Alan Kay’s Learning Research Group for their children’s software development environment, Smalltalk.

Screenshot of Smalltalk-78 emulation running at https://smalltalkzoo.thechm.org/HOPL-St78.html. Shown is a demo given by Dan Ingalls to Steve Jobs at PARC in 1979. Overlapping windows were a key new feature of Smalltalk, which was a development environment. Note the lack of icons, buttons, or an ever-present menu bar. Commands, including window resizing, were executed by right-clicking the mouse and selecting from a popup menu.

In 1979, a delegation from Apple Computer, led by Steve Jobs, visited PARC and received a demonstration of Smalltalk on the Alto. Upon seeing the GUI, Jobs instinctively grasped the potential of this new way of interacting with a computer and didn’t understand why Xerox wasn’t marketing this technology to the public. Jobs could see that all computers should work this way, and he wanted Apple to lead the way by bringing this technology out from the research lab to the masses.

From Apple II to Lisa

Apple had already been working on a computer in its own R&D labs to leapfrog the company’s best-selling, but command-line-based, Apple II personal computer. It was code-named “Lisa” after Lisa Brennan (now Brennan-Jobs), Steve Jobs’ child with a former high school girlfriend, whom he initially refused to acknowledge as his own. The code-name stuck, and a backronym, Local Integrated Systems Architecture, was invented to obfuscate the connection to Jobs’ daughter.(1) Unlike the Apple II, which was aimed at the home computer market, the Lisa would be targeted at the business market, would use the powerful Motorola 68000 microprocessor, and would be paired with a hard drive.

After the PARC visit, Jobs and many of Lisa’s engineers, including Bill Atkinson, worked to incorporate the ideas of the GUI from PARC into the Lisa. Atkinson developed the QuickDraw graphics library for the Lisa, and collaborated with Larry Tesler, who left PARC to join Apple, on developing the Lisa’s user interface. Tesler created an object-oriented variant of Pascal, called “Clascal,” that would be used for the Lisa Toolkit application programming interfaces. Later, by working with Pascal creator Niklaus Wirth, Clascal would evolve into the official Object Pascal.

Apple Lisa 2 screenshot. Icons on the desktop and the menu bar with pulldown menus at the top of the screen have made their appearance. This interface is very similar to that of the original Macintosh. Photo Courtesy of David T. Craig. CHM Object ID 500004666.

A reorganization of the company in 1982, however, removed Jobs from any direct influence on the Lisa project, which was subsequently managed by John Couch. Jobs then turned to the Macintosh project started by Jef Raskin, took it over, and moved it away from Raskin’s original appliance-like vision toward something more like the Lisa: a mouse-driven, GUI-based computer, but more affordable than the Lisa.

Steve Jobs with John Couch, VP and General Manager of the Lisa division, showing off the original Lisa, 1983. Photo courtesy of John Couch.

Competition and Collaboration

For a few years, both the Lisa and Macintosh teams competed internally, although there was collaboration as well. Bill Atkinson’s QuickDraw graphics became part of the Macintosh, and Atkinson thus contributed to both projects. Lisa software manager Bruce Daniels actually left the Lisa project to work on the Macintosh for a period of time, greatly influencing the direction of the Mac towards the Lisa’s GUI. Larry Tesler’s work on the object-oriented Lisa Toolkit application frameworks would later evolve into the MacApp frameworks, which used Object Pascal. Owen Densmore, who had been at Xerox, worked on printing for both the Lisa and the Macintosh.

Bill Atkinson’s Apple ID badge. Atkinson was an important figure in the creation of the Lisa, developing key aspects of the user interface. Credit: Folklore.org

Managers in the Lisa development group. From left to right: Wayne Rosing (hardware, later all of Lisa engineering), Larry Tesler (applications software and libraries, user interface design and testing), Bruce Daniels (software, systems architecture). Photo by John Blaustein. Scan of page 97 of Personal Computing Magazine, March 1983, CHM #102661078.

The Lisa’s user interface design underwent many different versions before finally arriving at the icon-based desktop metaphor familiar to us from the Macintosh.(2) Nevertheless, the final Lisa Desktop Manager still had a few key differences from the Mac. One was a document-centric rather than application-centric model. Each program on the Lisa featured a “stationery pad” that resided on the desktop, separate from the application icon. Users tore off a sheet from the stationery pad to create a new document; they rarely interacted with the application’s icon itself, but rather with these stationery pads.(3) The idea of centering the user’s world around documents rather than applications would reemerge in the 1990s with technologies such as Apple’s OpenDoc and Microsoft’s OLE.

The Cost of Innovation

Lisa was released to the public on January 19, 1983, at a cost of $9,995. This was two years after Xerox had released its own commercial GUI-based workstation, the Star, for $16,595, similarly targeted at office workers. The high price of both machines compared to the IBM PC, a command-line-based PC that retailed for $1,565, doomed them both to failure. But there were other significant problems too. The Lisa’s sophisticated operating system, which allowed multiple programs to run at the same time (“multitasking”), demanded more than its 68000 processor could deliver, and so the machine ran sluggishly. The Lisa shipped with a suite of applications, including word processing and charts, bundled with the system, which discouraged third-party developers from writing their own software for it. And the original Lisa shipped with a floppy drive (“Twiggy”), designed in-house, that was unreliable.

Brochure showing Lisa 1 screen and Twiggy floppy drives. Brochure text lists the original specs: a 32-bit Motorola 68000 processor (16-bit data bus), 1 MB RAM, and a 720 x 364 resolution bitmapped display. External ProFile hard disk is not shown. CHM #102634506

From Lisa to the Mac

Announced in the famous Super Bowl ad, the Apple Macintosh shipped in January 1984 for $2,495. Eliminating the hard drive, multitasking, and other advanced features, and greatly reducing memory, made it much more affordable than the Lisa. An innovative marketing program created by Dan’l Lewin (today CHM’s CEO) that sold Macintoshes at reduced prices to college students contributed significantly to the Mac’s installed base. The advent of PostScript-driven laser printers like the Apple LaserWriter in 1985, combined with the page layout application PageMaker from third-party software company Aldus, created a brand-new killer application for the Macintosh: desktop publishing.(4) This new market would grow to a billion dollars by 1988, and the Macintosh became the first commercially successful computer with a graphical user interface, launching a product line that continues to this day.

The Lisa 2 series, consisting of two models, the Lisa 2/5 and 2/10, priced at $3,495 and $5,495, respectively, was announced alongside the Macintosh in January 1984. The Lisa 2 replaced the original Lisa’s twin Twiggy floppy drives with a single Sony 3.5” floppy drive, the same drive used in the Mac. In January 1985, the Lisa 2/10 was rebranded as the Macintosh XL, running MacWorks, an emulator that allowed it to run Mac software; despite improved sales, the product was killed off in April 1985 so Apple could focus on the Mac.(5)

The Lisa 2 series was announced in January 1984, with the Macintosh, as part of the Apple 32 SuperMicro series. Note that the twin Twiggy drives have been replaced by the Mac’s Sony 3.5” floppy drive. This not only improved reliability but also improved compatibility with the Mac, allowing the two machines to use the same floppy disks. CHM #102689034

The release of the GUI-based Lisa and its successor the Macintosh inspired several PC software companies to create software “shells” that would install GUI environments on top of MS-DOS command-line-based IBM PCs. The first of these was VisiOn, released in late 1983 by VisiCorp, the publisher of the first spreadsheet program, VisiCalc. This was followed in 1985 by GEM from Digital Research, the company behind the command-line-based CP/M operating system. Microsoft followed with Windows the same year.

The Influence of Innovation

Both GEM and Windows were released after the Macintosh and were influenced by user interface elements from the Mac. Though Windows was first released in 1985, it was not widely adopted until Windows 3.0 in 1990. Between Windows and the Macintosh, GUIs have become the primary user interface paradigm on personal computers.

Lisa in use by John Couch’s son, with Couch looking on. The image illustrates “What You See Is What You Get,” with Couch holding a printout that mirrors the drawing on the screen. Despite this marketing image, the Lisa, at $9,995, was not aimed at the home computer market, but rather at office professionals. But, used to selling retail, Apple lacked experience in direct sales, which was how computers were sold to businesses, a strategy IBM had perfected. Businesses also required IBM mainframe compatibility, which the Lisa did not have. Corporate customers preferred the IBM PC, which cost only $1,565. Photo Courtesy of John Couch.

Despite the Lisa’s failure in the marketplace, it holds a key place in the history of the GUI and PCs more generally as the first GUI-based computer to be released by a personal computer company. Though the Xerox Star 8010 beat the Lisa to market in 1981, the Star was competing with other workstations from Apollo and Sun. Perhaps more importantly, without the Lisa and its incorporation of the PARC-inspired GUI, the Macintosh itself would not have been based on the GUI. Both computers shared key technologies, such as the mouse and the QuickDraw graphics library. The Lisa was a key steppingstone to the Macintosh, and an important milestone in the history of graphical user interfaces and personal computers more generally.

NOTES

(1) https://www.folklore.org/StoryView.py?project=Macintosh&story=Bicycle.txt

(2) Roderick Perkins, Dan Smith Keller, and Frank Ludolph, “Inventing the Lisa User Interface,” Interactions 4, no. 1 (January 1, 1997): 40–53, https://doi.org/10.1145/242388.242405. See also https://www.folklore.org/StoryView.py?project=Macintosh&story=Busy_Being_Born.txt 

(3) https://www.callapple.org/modern-apple-computing/the-legacy-of-the-apple-lisa-personal-computer-an-outsiders-view/

(4) John Scull and Hansen Hsu, “The Killer App That Saved the Macintosh,” IEEE Annals of the History of Computing 41, no. 3 (July 2019): 42–52, https://doi.org/10.1109/MAHC.2019.2918094

(5) Owen Linzmayer, Apple Confidential 2.0: The Definitive History of the World’s Most Colorful Company, Rev. 2nd ed. (San Francisco, CA: No Starch Press, 2004), 79–80.

“The Surface State Job” https://computerhistory.org/blog/the-surface-state-job/ Mon, 12 Dec 2022 16:37:25 +0000 https://computerhistory.org/?p=26494 Happy 75th birthday to the transistor! Find out who was REALLY behind its invention.

Celebrating the 75th Anniversary of Bardeen and Brattain’s Invention of the Transistor

We should tell Shockley what we did today

— Walter Brattain

While driving home from Bell Telephone Laboratories’ Murray Hill facility in New Jersey during the “Magic Month” of intense activity culminating in the demonstration of the first transistor, Walter Brattain told his carpool colleagues what he and John Bardeen had achieved. As he later recalled, “Bardeen and I produced an amplifier using the field effects at very low frequencies . . . But the next night, I swore them all to secrecy. They weren’t supposed to know anything about this.” [1]

John Bardeen in 1956. Photo courtesy: nobelprize.org

Walter Brattain in 1956. Photo courtesy: nobelprize.org

Walter Houser Brattain (1902–1987) was born to American parents in Amoy (now Xiamen) China. On their return to the US, the family settled on a ranch near Tonasket, Washington. He received a master’s degree from the University of Oregon and a PhD from the University of Minnesota. Brattain joined Bell as a research physicist in 1929, where he was noted as a skilled experimentalist.

Born in Madison, Wisconsin, theoretical physicist John Bardeen (1908–1991) skipped three grades in school as a child prodigy. He earned a master’s degree from the University of Wisconsin and a PhD from Princeton, where he pursued an interest in solid-state physics.

Under the secret code name “The Surface State Job,” their project was a high priority within Bell Labs, the research arm of the American Telephone and Telegraph Company: finding a smaller, lower-power replacement for the bulky, power-hungry vacuum tubes in telephone switching systems. Mervin J. Kelly, the Labs’ research director, believed that crystalline semiconductor materials, such as germanium or silicon, might offer a solution. In 1936 he recruited William Shockley from the Massachusetts Institute of Technology (MIT) to research solid-state materials for this application.

Born in London, England, to American parents, William Bradford Shockley spent his youth in Palo Alto, California, just yards from the famed Hewlett-Packard garage. A precocious child, he was “ill-tempered, spoiled, almost uncontrollable, who made his doting parents’ lives miserable.” [2] He earned a bachelor’s degree at the California Institute of Technology and a PhD in theoretical physics from MIT. Shockley was brilliant; Intel’s Gordon Moore commented that he “could see electrons.” [3] But egotistical and volatile, he enjoyed management’s support while remaining less popular with his peers. “He understood everything but people,” according to Nobel Laureate Charles Townes. [4]

Shockley’s semiconductor amplifier idea

Convinced he could find a solution based on solid materials, in 1939, Shockley wrote, “It has today occurred to me that an amplifier using semiconductors rather than vacuum is in principle possible.” Brattain assisted Shockley with experiments on his idea for what we would call today a field-effect transistor (FET) but achieved no useful result. 

World War II disrupted this work, but it resumed in 1945 when Shockley hired John Bardeen and asked him to see if he could find anything wrong with his design. Bardeen initially concluded that it should have worked.

The FET is a device that uses an electric field to control the flow of current in a semiconducting material. Shockley had published a paper in his MIT days that assumed that electrons near the surface would be as free to move about as those in the bulk of the material. On March 19, 1946, Bardeen determined theoretically that they were not. He concluded that electrons in that region must be trapped, thus creating a surface state that formed a barrier to movement. 

Bardeen and Brattain, aided by physicist Gerald Pearson and chemist Robert Gibney, devoted themselves to figuring out whether he was correct. By early 1947, in a laboratory experiment, they demonstrated the presence of the barrier. As their manager, Shockley offered suggestions on how to breach the barrier but was not involved in their work on a day-to-day basis.

“The Magic Month”

On Monday, November 17, 1947, Gibney suggested that Brattain apply a voltage between a metal plate on the upper surface and a contact on the rear of a slab of germanium crystal to create a strong electric field perpendicular to the surface. A drop of liquid electrolyte at the point where electrical contacts touched the material neutralized the surface state and produced a measurable field effect in the structure.

Following Bardeen’s suggestion to probe the surface with a sharp metal point surrounded by the electrolyte, on November 21 Brattain produced a functioning amplifier, albeit only at very low frequencies. A couple of weeks of long hours and feverish activity at blackboards and lab benches, during what Shockley called “the Magic Month,” combined fortuitous “accidents” in processing the material with shrewd intuition about how to exploit what they learned, and yielded amplification without the presence of the electrolyte.

Bardeen calculated that reducing the distance between the two contacts would enhance the effect. Brattain came up with an ingenious approach: cementing gold foil onto a plastic wedge and, with surgical precision, slicing the tip with a razor blade to create two contact points separated by the width of a sheet of paper.

On the afternoon of Tuesday, December 16, 1947, they attached a spring to press the crude contraption firmly against the germanium surface. Brattain found that if he wiggled it just right, “I had an amplifier with the order of magnitude of 100 amplification, clear up to the audio range.” [5] The solid-state semiconductor amplifier was born. 

He and Brattain agreed: “We should tell Shockley what we did today.” [1]

Elements of Bardeen and Brattain’s Transistor. Image: © Computer History Museum

Bardeen seldom discussed his work at home; however, that night, he remarked casually to his wife, who was peeling carrots in the kitchen, “We discovered something today.” “That’s great,” she responded automatically. Sometime later, Jane found out that the something was the transistor. [6]

The demonstration 

Shockley admitted that their news “provoked conflicting emotions in me. My elation with the group’s success was balanced by the frustration of not being one of the inventors.” [7] But, realizing the importance of their breakthrough, he arranged a demonstration of the amplifier for Bell executives on Tuesday afternoon, December 23, 1947.

Brattain’s record of the December 23, 1947 demonstration. Courtesy Lucent Technologies 1997.

Brattain recorded in his notebook that with a microphone and headphones, “This circuit was actually spoken over and . . . could be heard and seen on the scope presentation.” Sadly no one remembers what was said, just that it worked. Shockley called it a “magnificent Christmas present.”

Within days after Christmas, Bell Labs’ patent attorneys began to document their work and prepare for a public announcement. As Shockley’s ego-driven, self-promotional activities made him the most visible spokesman for Bell Labs, orders came down the line that no pictures be taken of Bardeen and Brattain without his presence. Publicity photos at the time show him front and center of the scene.

Electronics magazine cover. © McGraw-Hill Publishing Company, Inc

At the first press conference in New York on June 30, 1948, a spokesman claimed the transistor “may have far-reaching significance in electronics and electrical communication.” Unimpressed, The New York Times relegated the story to “The News of Radio” page, below the announcement of a soap opera sponsor.

New York Times report on the announcement of the transistor. Published on July 1, 1948

Brattain’s colleague John Pierce is credited with coming up with the name. Aware that the device operated on the principle of trans-resistance, Pierce derived “transistor” from the name of the related electronic component, the resistor.

Western Electric, the equipment arm of AT&T, began manufacturing point-contact transistors in 1951 and, by mid-1952, was producing more than 6,000 devices a month, predominantly for telephone switching systems and hearing aids.

Sonotone 1010 (1952). First commercial hearing aid to use a transistor. Photo: Joe Haupt (CC BY-SA 2.0)

A more practical transistor

According to Brattain, Shockley, who was pushing to incorporate some of his ideas into their patent filing, “called both Bardeen and I in separately, shortly after the demonstration, and told us that sometimes the people who do the work don’t get the credit for it. I told him, ‘Oh hell, Shockley, there’s enough glory in this for everybody.’ ” But he “went off by himself and worked at home, and in a way ceased being a member of the research team.” [1]

Spurred by professional jealousy at not being more visibly involved with the transistor’s invention and a need to maintain his standing relative to his subordinates, Shockley began a month of intense theoretical activity alone. He determined that point-contact transistor operation was not the near-surface field effect that had been assumed but was due to an entirely different structure in the bulk of the crystal called a P-N junction.

As a result of this work, on January 23, 1948, Shockley conceived a distinctly different element, called a junction transistor, that proved to be more reliable and easier to build in volume than the point-contact device. Fabricating working transistors still presented formidable challenges until Bell Labs announced the advance on July 4, 1951. His version became the dominant active electronic building block for the next two decades, enabling new generations of powerful computers.

John Bardeen accepts the Nobel Prize. Walter Brattain waits behind him.

Based on his theoretical contributions to the understanding of semiconductor physics and his invention of the junction transistor, Shockley joined Bardeen and Brattain in accepting the 1956 Nobel Prize in Physics for “researches on semiconductors and their discovery of the transistor effect.”

Sources of Quotations

[1] “Oral History Interview of Walter Brattain - Session II,” May 28, 1974. Niels Bohr Library & Archives, American Institute of Physics. Retrieved on 11.1.2022 from https://www.aip.org/history-programs/niels-bohr-library/oral-histories/4532-2

[2] Transistorized! ScienCentral, Inc. and The American Institute of Physics (1999). Retrieved on 1.16.2016 from: http://www.pbs.org/transistor/album1/shockley/index.html

[3] “Interview with Gordon E. Moore,” March 3, 1995, Silicon Genesis: Oral Histories of Semiconductor Industry Pioneers, Stanford University. Retrieved on 11.1.2022 from: https://landley.net/history/mirror/interviews/Moore.html

[4] “Absent at the Creation,” Ronald Kessler, The Washington Post (April 6, 1997) Retrieved on 11.1.2022 from: https://www.washingtonpost.com/archive/lifestyle/magazine/1997/04/06/absent-at-the-creation/2a432ee5-b1e3-49b9-93f2-ad821d1832dd/

[5] “Oral History Interview of Walter Brattain - Session I,” June 1964. Niels Bohr Library & Archives, American Institute of Physics. Retrieved on 11.3.2022 from https://www.aip.org/history-programs/niels-bohr-library/oral-histories/4532-1

[6] Vicki Daitch & Lillian Hoddeson, True Genius: The Life and Science of John Bardeen, Joseph Henry Press (2002)

[7] John Rhea and Paul Plansky, “Twas Two Days Before Christmas,” Electronic News, December 18, 1972.

For More Information

Michael Riordan and Lillian Hoddeson, Crystal Fire: The Birth of the Information Age (New York: W. W. Norton, 1997)

Joel Shurkin, Broken Genius: The Rise and Fall of William Shockley (London: Macmillan, 2006)

The Silicon Engine - Timeline, 1950s https://www.computerhistory.org/siliconengine/timeline/ 

David A. Laws, “The Lunch That Launched Silicon Valley,” The Bold Italic (Feb 25, 2021) https://medium.com/p/7a3c4d9906f3

Main image: Bardeen and Brattain’s Point Contact transistor 1947. Photo: Bell Telephone Laboratories

PostScript: A Digital Printing Press https://computerhistory.org/blog/postscript-a-digital-printing-press/ Thu, 01 Dec 2022 16:21:10 +0000 https://computerhistory.org/?p=26271 CHM publicly releases the source code for the breakthrough printing technology, PostScript.

The story of PostScript has many different facets. It is a story about profound changes in human literacy as well as a story of trade secrets within source code. It is a story about the importance of teams, and of geometry. And it is a story of the motivations and educations of engineer-entrepreneurs.

The Computer History Museum is excited to publicly release, for the first time, the source code for the breakthrough printing technology, PostScript. We thank Adobe, Inc. for their permission and support, and John Warnock for championing this release.

Access the code here.

The Big Picture

Printing has always been a technology with profound cultural consequences. Movable type first emerged in East Asia; later, in 15th-century Europe, technology from wine and oil presses was combined with novel practices for mass-producing metal-cast type, yielding the printing press and, with it, a revolution in human literacy. Books became cheaper and quicker to produce, and as a result appeared in ever greater numbers. Readers and libraries expanded. Greater access to information transformed learning, research, government, commerce, and the arts.

Adobe cofounders John Warnock (left) and Chuck Geschke (right). Courtesy Adobe Inc. and Doug Menuez.

From the start of Adobe Systems Incorporated (now Adobe, Inc.) exactly forty years ago in December 1982, the firm’s cofounders envisioned a new kind of printing press—one that was fundamentally digital, using the latest advances in computing. Initial discussions by cofounders Chuck Geschke and John Warnock with computer-makers such as Digital Equipment Corporation and Apple convinced them that software was the key to the new digital printing press. Their vision: Any computer could connect with printers and typesetters via a common language to print words and images at the highest fidelity. Led by Warnock, Adobe assembled a team of skillful and creative programmers to create this new language. In addition to the two cofounders, the team included Doug Brotz, Bill Paxton, and Ed Taft. The language they created was in fact a complete programming language, named PostScript, and was released by Adobe in 1984.

Chuck Geschke discusses how Adobe came to focus on PostScript as their initial business.

By treating everything to be printed the same, in a common mathematical description, PostScript granted abilities offered nowhere else. Text and images could be scaled, rotated, and moved at will, as in the opening image to this essay. Adobe licensed PostScript to computer-makers and printer manufacturers, and the business jumped into a period of hypergrowth. There was tremendous demand for the new software printing press. Computer-makers from the established worlds of minicomputers and workstations to the rapidly growing world of personal computers adopted the technology. Printer-makers joined in, from well-established printers to the new laser printers and professional typesetters. Software-makers rushed to make their offerings compatible with PostScript.

Fueling this growth were advances Adobe was making around a critical need: Providing professional-quality digital typefaces—and the many fonts that comprise them—for use within PostScript. Adobe developed a fresh approach to describing typefaces geometrically, and the company licensed many of the most well-known typefaces, including those for Asian languages. PostScript and the Adobe Type Library revolutionized printing and publishing, and kickstarted the explosive growth of desktop publishing starting in the 1980s. PostScript became so successful that it grew into a de facto standard internationally, with Adobe publishing the details of the PostScript language, allowing others to create products that were PostScript compatible. Today, most printers rely on PostScript technology either directly or through a technology that grew out of it: PDF (Portable Document Format).

An early typeface created by Adobe using its new technologies. Courtesy Adobe Inc.

John Warnock championed the development of PDF in the 1990s, transforming PostScript into a technology that was safer and easier to use as the basis for digital documents, while retaining all the benefits of interoperability, fidelity, and quality. Over the decades, Adobe has developed PDF tremendously, enhancing its features and making it a crucial standard for digital documents, for printing, and for displaying graphics of all kinds on screens from laptops to smartphones and smartwatches.

Today, the digital printing press has far exceeded anything envisioned by the Adobe cofounders when they first set out to create PostScript with their team. Almost everything printed on paper is done so using computers. Indeed, in many areas of the world, computers have become the overwhelming tool for writing. As Doug Brotz puts it, PostScript “democratized the print world.” With PDF now so successful that it too has become a global standard, the number of PDFs created each year is measured in the trillions.

A Graphical Background

Typography is the combination of art and technique that is concerned with the display of writing, especially as printed. It is concerned with the shape and placement of characters, words, paragraphs, and so on. In this, typography is thoroughly graphical, a matter of visual design. Digital typography is no different, just focused on computer techniques and displays. It is fitting, then, that the roots of PostScript, and its contributions to the development of digital typography, lie in advanced computer graphics.

Adobe cofounder John Warnock, the architect for PostScript, launched his computing career as a graduate student at the University of Utah at the close of the 1960s. Utah was then one of the world’s foremost centers for advanced computer graphics research. In his work there, and then subsequently at a leading computer graphics firm run by the lead professors at Utah, David Evans and Ivan Sutherland, Warnock embraced their characteristic geometric approach to computer graphics. Shapes, scenes, images, and animations were created and designed by using mathematics to describe the geometry of the visual, and using various computer procedures to realize these descriptions as imagery. In particular, Warnock was impressed with the power of a procedural computer language he and John Gaffney helped to develop at the firm Evans and Sutherland, the “Design System.”

In 1978, Chuck Geschke had just set up a new organization in the famed Xerox Palo Alto Research Center (PARC), the Imaging Science Laboratory. Geschke hired Warnock into his laboratory, where Warnock took up a pressing challenge for the lab. PARC was creating a set of new experimental computers with new kinds of displays, and intended to be used with an array of novel printers—as PARC had recently invented the laser printer. The challenge, which Warnock took up, was to create a “device-independent” graphics system, which could be used across any computer, display, or printer. To meet this challenge, Warnock saw that something like the Design System could work if it were re-implemented in this new computing environment, but refocused from 3D graphics to PARC’s concern with professional quality printing and high-quality display of text and images on displays. The result was another geometrical, procedural language called JaM, that Warnock created in partnership with another PARC researcher, Martin Newell.

An image made using JaM, the predecessor to PostScript. Courtesy Adobe Inc.

From 1979 into 1981, JaM became a major component in a new effort in Geschke’s laboratory. This was a push to develop a printing language that could be commercialized, used with the production version of PARC’s experimental computers called the Xerox Star, and more broadly used across all of Xerox’s lines of printers. A group of six Xerox researchers—Chuck Geschke, Butler Lampson, Jerry Mendelson, Brian Reid, Bob Sproull, and John Warnock—melded the JaM approach with other more established protocol techniques. The result was named Interpress.

Xerox leadership was quickly convinced of the potential for Interpress, deciding that it would indeed be developed into the firm’s printing standard. However, moving to this standard would take several years, during which time Interpress would be under wraps. This delay denied Interpress the refinement that would have come from being used and challenged more broadly outside of Xerox, and it spurred Geschke and Warnock to move. They would leave PARC to create a startup and build a rival to Interpress, one constructed more fully along the geometric, procedural language approach that Warnock had found so powerful. But for the new startup to create this new language, PostScript, as the digital printing press, it would require a brilliant team.

Chuck Geschke discusses the motivations behind the formation of Adobe. https://www.computerhistory.org/collections/catalog/102738493

John Warnock discusses key early actions in establishing Adobe.

PostScript’s Team

In December 1982, when Chuck Geschke and John Warnock created Adobe Systems Incorporated, the new printing language they intended to create was at the very center of their plans, hopes, and vision. The future of the firm hinged on PostScript. Geschke and Warnock were themselves both highly experienced software creators. Geschke had earned his Ph.D. at Carnegie Mellon University working on advanced compilers and had been a leader in the creation of Mesa, an important programming language developed and used at PARC. As discussed, Warnock had a Ph.D. in computer graphics software from the University of Utah and years of experience creating languages exactly like the envisioned PostScript. But even with, and perhaps because of, their extensive background in creating cutting-edge software, the cofounders knew they needed to expand their team to create PostScript.

This photograph shows early Adobe employees and friends during a company party, sailing in the San Francisco Bay. Many of them were welcomed by Chuck Geschke (second from right in rear) and John Warnock (middle in rear) from Xerox PARC, including Doug Brotz (leftmost rear), Dan Putman (third from left in front), and Tom Boynton (right of Putman). Steve MacDonald (between Brotz and Warnock) came from Hewlett-Packard to serve as Adobe’s first head of sales and marketing. Linda Garger (third from right in front) was administrative assistant for Geschke and Warnock and the first official Adobe employee. Carolyn Bell (rightmost front) was an engineer. Marva Warnock, who designed the first Adobe logo sits immediately in front of John Warnock. Nancy Geschke, a librarian, sits in front and to the left of Chuck Geschke. Courtesy Adobe, Inc.

Adobe’s PostScript team quickly took shape as three other highly talented software creators from PARC decided to leave and rejoin Geschke and Warnock: Doug Brotz, Bill Paxton, and Ed Taft. Brotz had earned a Ph.D. in computer science from Stanford before joining PARC in 1977, and became the first computer scientist on Adobe’s payroll, after the cofounders of course. Paxton also had a Ph.D. in computer science from Stanford and joined PARC the same year as Brotz. Taft had joined PARC earlier, hired by Geschke as soon as Taft finished his undergraduate studies at Harvard in 1973. Together, and with input from company colleagues like Andy Shore, the team created PostScript by the close of 1984.

A Trade Secret in the Source Code

Adobe’s commitment to a geometrical approach for PostScript carried consequences for how it would contend with typefaces—distinctive character shapes—and the numerous fonts that actually realize these typefaces at different sizes and styles (point sizes, regular, italic, bold, etc.). At PARC, fonts had been created as a set of individual, hand-crafted bitmap images—static definitions of which bits were on, and which were off, to create each character of the font. In contrast, researchers at PARC and beyond were exploring ways to define character shapes mathematically.

At Adobe, the team followed this mathematical description approach to fonts, in keeping with the broader direction of PostScript, defining characters using Bézier curves. But this still left the problem of device-independence. How could Adobe’s font definitions contend with different displays, printers, and different resolutions on both? For eyes so accustomed to reading published text, even the slightest inconsistencies or irregularities in the appearance of text are readily noticed and jarring. At lower resolutions, the chance for these defects only becomes worse: a stem one point wide spans about four pixels on a 300-dpi printer but barely one pixel on a 72-dpi screen, where whether it rounds to one pixel or two depends on where it happens to fall on the grid. Rendering fonts reliably at different resolutions was a critical issue. Without a solution, PostScript could never become the digital printing press.

Elements of Adobe’s secret solution to creating professional quality fonts for different resolutions. Courtesy of John Warnock.

It was John Warnock who came up with Adobe’s solution, turning the problem itself into the solution: the resolution of the output would determine a set of procedures that would correct the fonts to optimize their appearance at that resolution. Warnock, Brotz, and Paxton worked on the procedures for months, eventually settling on ways to define key aspects of the font shapes, fit them to the pixel rows and columns of the specified resolution, and change some aspects of the character shapes depending on the resolution. Eventually, the Adobe team decided that the greatest advantage lay in keeping these approaches and procedures as a trade secret. They stayed secret in PostScript’s source code, known to very few at the company, until Warnock publicly disclosed them in a 2010 lecture.

The version of the PostScript source code released to the public by the Computer History Museum is a very early one, dating to late February 1984. While it does contain an early form of the “font hinting” procedures later kept as a trade secret, these approaches were completely rewritten, expanded, and refined by Bill Paxton in subsequent months. These changes were critical to the success of PostScript as it fully came to market. Some modules that contain low-level graphics engine functions are not included in this release; Adobe still retains trade secret rights in those modules. Otherwise, this collection is complete.

Chuck Geschke discusses the trade secret in the PostScript source code.

Acknowledgements

Thank you to Doug Brotz and Bill Paxton for their helpful comments on a draft of this essay. Thank you to Adobe Inc. and Doug Menuez for permission to use several images here.

This essay is based on oral histories and interviews conducted by the Computer History Museum, and several critical published sources:

J. E. Warnock, “The Origins of PostScript,” in IEEE Annals of the History of Computing, vol. 40, no. 3, pp. 68-76, Jul.-Sep. 2018, doi: 10.1109/MAHC.2018.033841112.

John E. Warnock, “Simple Ideas That Changed Printing and Publishing,” Proceedings of the American Philosophical Society, vol. 156, no. 4, 2012, pp. 363–78. JSTOR, http://www.jstor.org/stable/23558230. Accessed 13 Oct. 2022.

J. E. Warnock and C. Geschke, “Founding and Growing Adobe Systems, Inc.,” in IEEE Annals of the History of Computing, vol. 41, no. 3, pp. 24-34, 1 July-Sept. 2019, doi: 10.1109/MAHC.2019.2923397.

Legally, PostScript® is a trademark of Adobe, Inc., used to identify its language and interpreter. For ease of reading, above we use the more common PostScript to denote both the PostScript® language and the PostScript® interpreter.

[UPDATED January 2023: Additional information about the completeness of the source code release was added for clarification.]

 

50 Years of Fun With Pong https://computerhistory.org/blog/50-years-of-fun-with-pong/ Wed, 30 Nov 2022 17:08:20 +0000 https://computerhistory.org/?p=26372 Happy birthday to PONG! The iconic game turned 50, and it's still going strong.


Just keeping the game simple and fun to play is the secret. I had no idea that it would have such an impact.

— Al Alcorn

PONG is one of the most enduring electronic video games in history. Unlike most current video games, it can be mastered in seconds rather than weeks or months, making PONG the perfect game for parties, bar patrons, and family fun. Its simple gameplay of batting a ball back and forth has been known for thousands of years, dating back to Roman times, if not earlier.

But PONG has not stood still since the original 1972 prototype designed by Al Alcorn, on display and part of the permanent historical collection of CHM. The game has been copied, extended, and modified—though the essential play hasn’t changed. Most recently, in a stunning experiment, scientists got neurons in a dish to begin playing PONG with only five minutes of training.

The nerve-cell array called DishBrain at work. The colors represent different types of nerve cells. The researchers taught the neurons to respond to an electrical signal that substitutes for the ball in Pong. Credit: Cortical Labs

PONG began life as the first project of a new company called Atari, formed by three former AMPEX employees, Nolan Bushnell, Ted Dabney, and Al Alcorn. AMPEX was a world leader in audio and video recording, providing Alcorn with the background to design PONG.

Alcorn designed the very first Atari PONG game using discrete logic circuitry (there were no microprocessors at the time), and few people yet knew how to design in the emerging field of “digital video.” The first prototype was a wooden cube covered with a faux-woodgrain plastic layer, fitted with a screen, two knobs, and a coinbox.

The first PONG Prototype, CHM # 102741638, 1972. Designed by Al Alcorn, Atari.

Al Alcorn Atari employee badge, ca. 1971. Ref: https://www.computerhistory.org/revolution/computer-games/16/183/752

Amusingly, for PONG’s display, Alcorn bought a cheap 12” black-and-white TV set from a local store and used it in place of a proper stand-alone display, which was expensive at the time. Installed at Andy Capp’s Tavern in Sunnyvale, where Bushnell already had contacts, the game was popular. But within a few days of installation, the machine had stopped working. When Alcorn was called in to check it, he found that the game hadn’t failed—it was just so full of quarters that it couldn’t accept any more.

This was a sign!

Bushnell took this as an indication that the game might have a much larger market. Though the Magnavox Odyssey, which probably inspired Bushnell to do PONG in the first place, was coming on the scene as the earliest home video console game, there was plenty of market to go around.

As Alcorn explains it, Bushnell concluded there was “more profit selling to millions of homes than to just a few bars.” Atari continued making arcade versions of PONG, of course, now in much improved cabinets (see below), but the real action was in bringing PONG into the home.

PONG Arcade version, 1973. Ref: https://commons.wikimedia.org/wiki/File:Atari_Pong_arcade_game_cabinet.jpg

To accomplish that, Alcorn led a team in Grass Valley—a rural town in California’s historic Gold Country—to implement a custom integrated circuit that would enable mass production of PONG at a competitive price. They moved there to get away from the distractions of Atari’s main Silicon Valley production facilities, which were producing the arcade versions.

From this quiet location, Alcorn worked with engineers Bob Brown and Harold Lee to get the whole game onto one chip. The wiring was done by Alcorn’s wife, Katie. This demonstration prototype was used to make an historic sale to Sears, one of the largest department store chains in North America.

The Sears sale propelled Atari into the big leagues, with a first order for 200,000 units, a big challenge for Bushnell’s new company to fulfill in time for the 1975 Christmas season. They did it by finding extra assembly buildings and staff in a hurry, meeting their commitment and making the Sears PONG game, rebranded under Sears’ “Tele-Games” label, a must-have gift for Christmas that year.

Kids on Christmas morning, 1975, with Sears PONG game. Ref: https://2warpstoneptune.files.wordpress.com/2014/12/christmas-1975-pong.jpg

While the Sears contract marked the high point of PONG’s commercial run, the game lives on in many guises today, from the Sony PlayStation 4 to the iOS App Store, a testament to the endless variations possible when a game itself is timeless.

In the decades since PONG first burst into public consciousness, not only PONG but games in general have been receiving more attention as important cultural objects in their own right. Whether the neurons playing PONG are encased in a human cranium or in a laboratory dish, it seems that PONG is here to stay.

Main Image: CHM summer intern squares off in a re-creation of PONG at CHM with PONG’s inventor, Al Alcorn. To the right is the world’s first PONG game, also created by Alcorn.

 
