For Whom Does Data Work?

March 9, 2021

Private sector firms create, gather, and sell data about us for their profit, and not necessarily—or even secondarily—for our benefit. But India is creating a new market structure aimed at putting data to work for people.

Trust, Differences, and the World’s Largest Democracy

On January 6th, 2021, when the Republican crowd stormed the US Capitol building, I learned about it early from my Twitter feed. I admit I never did get back to work that afternoon; I was glued to the NBC News feed streaming on YouTube through my living room television set. My reaction was a funhouse mirror version of the Latin phrase attributed to Julius Caesar: I sat, I watched, I worried.

Just shy of a month later, on February 5th, I—another confession—lay in my bed, scanning the New York Times app before getting up, as is my practice. I dove into a remarkable piece in the Opinion section about another data breach, one that told tales about the horrors of the 6th. This breach was the second time a corporate employee had leaked some of that company’s product to the Times: the consolidated geographical location information for thousands of smartphones that just so happened to be in Washington, DC, on January 6th, harvested and then aggregated from location-tracking software embedded in many different apps installed on those phones.

Why did the unnamed location data company have all this information? Because it is its business to harvest and aggregate this kind of data, and then sell it to whoever will pay for it. Mostly, we are told, the idea is that people will buy this location data as part of an effort to sell people something. What stories did this aggregated location data tell about January 6th? It showed smartphones in pockets, purses, and inevitable tactical gear, marching from the rally of the then-President and other Republican politicians to the US Capitol, scurrying around inside it, transforming it into a murder crime scene among other things, and then shuffling off back home.

Did the members of this crowd know that their smartphones were sending out some 100,000 location pings during and after the horrors of the 6th? How would they know that this kind of data about them was being produced, aggregated, and sold? For whom was this data about them working? Are there apps on my iPhone doing this for (or to) me? To be honest, I don’t know. I adjust my location settings to only allow sharing my geographical location data with apps that I trust, or that I want to use and that won’t work without it. I run all of my cellular and Wi-Fi connectivity through another app, a Virtual Private Network, designed—I’m told, and I trust—to block out all sorts of tracking and monitoring, or, to put it differently, the creation of data about me that I don’t trust will be put to work for my benefit. Would these steps prevent location data about me from ending up in the massive aggregated data products created by location data firms, such as the eerie first data leak to the Times of 50 billion pings? Who knows?

(If you know a lot about data collection, try our KAHOOT! Challenge and join the running to win a prize; the quiz must be completed by March 23, 2021.)

Just One Iceberg

Why, dear reader, would I drag you through these unsettling landscapes? It is to point out, maybe dramatize, just how few of us really know what’s going on with the data produced, aggregated, and sold about us, and to whose benefit this data actually accrues. This example of location data is but one iceberg in a dark and cold sea of private sector firms that create, gather, and sell data about us for their profit, and not necessarily—or even secondarily—for our benefit.

From Cracked Labs, 2017, “Corporate Surveillance in Everyday Life.”

While stories about data creation, aggregation, and use by large technology firms for their core targeted-advertising businesses abound, there is another huge, and hugely consequential, body of similar activity around our finances: what and where we buy with credit, debit, and gift cards; if and how we use banks; how and when we pay our bills; our use of payday lenders, car loans, and other sorts of loans. Here again we find companies that specialize in data aggregation—buying the data that credit and debit card companies create about your spending, buying data from banks about your activities there, and so on—and then selling this back to other financial firms so that they can make decisions about you.

The creation and processing of financial data about people is an important theme in the history of computing. This IBM 405 from 1934 performed accounting calculations. https://www.computerhistory.org/collections/catalog/102645478

Perhaps the best known of these financial data aggregators are the consumer credit reporting bureaus and the company FICO, with its all-too-familiar “FICO Score” product. Your FICO Score—and if you are 18 or older in the United States, it is an extremely safe bet that FICO has one for sale about you—is marketed as a measure of your credit-worthiness, a gauge of how likely it is that you will pay back your debts with interest. In practice, the number FICO sells about you does much to determine if, and how expensively, you can access credit: the amount of interest you have to pay for a car loan, and whether you can even get one, for example. This data created about you isn’t fully in your control. You don’t have a lot of say about who looks at your FICO Score, about how your Score goes up or down, or about how exactly it’s used. Both with marketing data aggregators and financial data aggregators in the US, the data produced and sold about you is put to work for the primary benefit of others. Whether or not this data in fact works for your benefit is, in the end, unclear.

An abundance of apps allows smartphone users to check their FICO Score. From https://www.quoteinspector.com/images/credit/bad-credit-mobile-application/

Indeed, the FICO Score is mutating into a kind of for-profit “social credit” scheme, moving from a measure purporting to reflect your credit-worthiness to a measure of your social worthiness, of how trustworthy you are in general. It is common today for your FICO Score to be used in employment decisions about you. Some state laws seek to put limits on the practice, but the exemptions are broad, and the loopholes gaping. There has been a lot of attention to China’s development of its “Social Credit System,” a very large state-run data aggregation effort to collect all sorts of data created about its citizens, and with this to assign a numerical score purporting to reflect their general social worthiness. The Social Credit Score is used in China for a variety of purposes, including employment decisions. Sound familiar?

A smartphone screenshot of an app for checking your Social Credit Score in China. From http://ub.triviumchina.com/2019/10/long-read-the-apps-of-chinas-social-credit-system/


Four Approaches

Around the world, there are four distinct approaches to answering the question “For whom does data about people work?” These approaches were beautifully and succinctly described in a 2018 Washington Post article by the New York University data scientist Vasant Dhar.

The United States

The first approach to data creation and use is the “US approach,” which “emphasizes moneymaking.” This is the model that you readers in the US are living with today. There are few laws, regulations, or restrictions on the ability of private firms to create data about you, and to buy and sell that data for their own profit.

Europe

The second model is the “European approach,” which emphasizes laws and regulations to mitigate the harms to individuals in a system of for-profit data creation, aggregation, and sales. It is, in my view, the US model with harm-reduction laws imposed. These laws seek to protect individuals from data about them being “misused, lost, or stolen.”

California

A possible step in the evolution of the US approach toward the European approach is a California state law passed in 2018: the California Consumer Privacy Act (CCPA). The CCPA aims to give California residents some control over the data created about them. You can’t exert control over data about you that you don’t know exists, and so the CCPA gives residents the right to know what data firms create about them, how they use it, and whether they share or sell it. With exceptions, the CCPA also gives residents the right to have the data created about them by companies deleted. Further, the CCPA gives residents the right to prevent a company from selling data about them, and to non-discrimination—essentially non-retaliation—for exercising any of these new rights of control.

If the CCPA plays out as intended, it should prove an important step toward harm reduction in the US approach, moving it significantly toward the European model. The data about you will still be designed to work for the profit of others, but at least you will in principle have some controls to prevent the data about you from working against you. Nothing in the structure of the CCPA appears to fundamentally shift the model to ensure that the data about you is put to work for your interests.

China

The third model described by Dhar is the “Chinese approach,” which “emphasizes controlling data—and people.” The model essentially centers the state: closely enforced rules and laws ensure that the data created about people by any organization is fully accessible to the state, and under the ultimate control of the state. In this approach, it is clear that data produced about people is put to work for the interests of the state. Whether or not putting data to work for the state also serves the interests of a particular individual is, to understate, complicated. The answer is likely partial and changing. It depends on who you are, what you are trying to do, and what aspect of your life you are considering.

India

In India—the world’s largest democracy—an inventive and exciting approach is taking shape, and a fascinating new experiment is underway. As Dhar puts it, “India is treating critical digital platforms as public goods.” Through new laws, the Indian government is creating a new type of market structure for the data created about people, explicitly aimed at “data empowerment.” In the terms I’ve been using in this article, the goal is to create a computing-enabled model that ensures by design that the data about people is put to work primarily for people. Initially the market design will concentrate on finance, attempting to put financial information about an individual in service to that individual and under the control of that individual.

From “Data Empowerment and Data Protection Architecture: Draft for Discussion,” August 2020.

At the center of this new model is a novel kind of organization, the “Account Aggregator.” Aggregators are regulated financial-data intermediaries, restricted to the business of making data about individuals work for individuals. Aggregators will act as “consent managers,” working on behalf of individuals to safely share financial data about them from one organization to another, at the individual’s request and under the individual’s control, in order to access some good or service. Aggregators may serve only as this kind of data intermediary, and they cannot store any of the financial information gathered and transmitted on behalf of an individual. Data flows through the aggregator but is not stored. For a good account of this Account Aggregator experiment, see a recent working paper by historian Arun Mohan Sukumar.
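
This consent-gated, pass-through design is concrete enough to sketch in code. The following is a minimal Python illustration of the flow described above, with hypothetical class and method names throughout (it is not the actual Account Aggregator API): the aggregator records consent, pulls data from a provider only when consent exists, and forwards it without retaining it.

    class Bank:                        # a financial information provider (stub)
        def fetch(self, user):
            return {"user": user, "balance": 1000}

    class Lender:                      # a financial information user (stub)
        def receive(self, user, data):
            print("received statement for", user, data)

    class AccountAggregator:
        """Consent manager: data flows through, but is never stored here."""
        def __init__(self):
            self.consents = {}         # consent records only, no financial data

        def grant_consent(self, user, provider, recipient):
            self.consents[(user, recipient)] = provider

        def share(self, user, recipient):
            provider = self.consents.get((user, recipient))
            if provider is None:
                raise PermissionError("no consent on record")
            data = provider.fetch(user)      # pulled from the provider on demand
            recipient.receive(user, data)    # forwarded immediately, not retained

    aa = AccountAggregator()
    bank, lender = Bank(), Lender()
    aa.grant_consent("asha", bank, lender)   # the individual authorizes this share
    aa.share("asha", lender)                 # a statement flows bank-to-lender via aa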

The Account Aggregators are the latest layer in a set of public sector digital technologies collectively known as the “India Stack.” The first layer comprised systems for establishing individual identity. The most prominent of these is Aadhaar, a digital identity confirmation system that provides unique identification numbers—associated with a set of biometric identifiers—for people resident in India. While there have been serious discussions about privacy and other concerns with Aadhaar, nearly 1.3 billion people in India have joined the system. The second layer added a digital payment infrastructure: a system of common protocols and services that tap into the identification layer to allow a broad range of digital payments and other transactions. The third “data empowerment” layer, in which the system of Account Aggregators resides, is in the process of construction.

From “Data Empowerment and Data Protection Architecture: Draft for Discussion,” August 2020.

One of the most prominent advocates for Account Aggregators and the entire India Stack of public sector digital infrastructure is Nandan Nilekani, a cofounder of the major Indian software and services firm Infosys. Recently, Nilekani articulated the vision for the India Stack and the possibilities of the Indian model for public digital infrastructures in a major piece in Foreign Affairs. He has also advocated for the place of data empowerment in the transformation of India into what he and others call an “opportunity state.”

In the end, India’s new model of digital infrastructures as public goods depends on the underlying laws that created them, and on the enforcement of those laws. That, in turn, depends on the trustworthiness of the democratically elected government of India. I, for one, will be following the Indian experiment with great interest. It is the only one of the four approaches designed with the intention of ensuring that data about people is put to work for people. And it is certainly true that questions of trust cannot—and should not—be avoided in any of the four existing approaches for putting the data created about people to work. We cannot avoid the necessity of asking ourselves if we can trust ourselves to do the right thing.


Read the related blog, Making Your Match: How Dating Apps Decide How We Connect, or check out more CHM resources and learn about decoding trust and tech.  

Want to test your knowledge about what you just read? Try our KAHOOT! Challenge and join the running to win a prize (quiz must be completed by March 23, 2021).


Introducing the Smalltalk Zoo

December 17, 2020

48 Years of Smalltalk History at CHM

In commemoration of the 40th anniversary of the release of Smalltalk-80, the Computer History Museum is proud to announce a collaboration with Dan Ingalls to preserve and host the “Smalltalk Zoo.”

Dan Ingalls demonstrates Smalltalk-76 on an Alto at a CHM Live event.

What is the Smalltalk Zoo?

The Smalltalk Zoo, created by Dan Ingalls, is a collection of historical versions of the revolutionary graphical programming and user environment Smalltalk, originally developed at Xerox PARC, ranging from the 1972 version all the way to the modern “Squeak” version whose development began in 1995. These emulated Smalltalk environments run in your web browser and are hosted by CHM at smalltalkzoo.thechm.org.

Screenshot of Smalltalk-74 on a Xerox Alto computer. ©PARC. CHM Object ID 500004657.

What is Smalltalk?

Smalltalk was a revolutionary system developed by the Learning Research Group (LRG) at Xerox PARC in the 1970s, led by Alan Kay. Smalltalk comprised a programming language, a development environment, and a graphical user interface (GUI), running on PARC’s groundbreaking Alto computer. It is most famous as the GUI that inspired Steve Jobs when he and a group of Apple engineers visited PARC in 1979. Smalltalk pioneered overlapping windows, popup menus, and paned browsers, all controlled by a mouse. All of these UI elements have come down to us today through systems like the Macintosh and Microsoft Windows. Smalltalk was also one of the earliest, and most influential, object-oriented programming languages, a family that remains dominant today. Object-oriented languages are designed to make it easy to reuse existing pieces of code, but in a flexible way. Python, Java, Ruby, and Objective-C, among others, all owe debts to ideas originally developed in Smalltalk.

Cartoon of two children using their Dynabooks, drawn by Alan Kay. From “A Personal Computer for Children of All Ages,” by Alan Kay, Xerox PARC, 1972.

Alan Kay’s vision for Smalltalk was that it would be easy for children to use. It would provide the user environment for his vision of personal computing, the “Dynabook,” a tablet-like computer that he mocked up. Kay understood that computers were a form of media, but unlike traditional print or broadcast media, they could be easily tailored to the particular interests and needs of the user. A computer running Smalltalk, in other words, would be a “personal dynamic medium.” For Kay, this meant that users, and especially children, needed to be able to program their own systems.

Children animating horses in Smalltalk-72 on an Alto computer. Courtesy of the PARC Library. © PARC. CHM Object ID 500004466

These ideas were further illustrated in the famous demos of Smalltalk to Steve Jobs and Apple, which inspired Jobs to make the Lisa and Macintosh computers based on a similar graphical interface. Dan Ingalls was one of the demonstrators. In the most famous of the demos, he showed how a user could change the text selection behavior from being a solid black box with inverted text to an outline around the text, all by tweaking just a few lines of code while the system was running. Most systems of the day required recompilation and reloading for any change to take effect.

Screenshot of Smalltalk-78 emulation running in the Smalltalk Zoo. This shows the demo given to Steve Jobs in which Dan Ingalls changed the text highlighting behavior from black box with inverted text (seen in the browser panes) to the two-pixel outline around the selected text. The actual text selected here is the very code that implements the change. A similar code snippet was run manually to invert the elf cartoon twice, resulting in a thick rectangle framing the elf.

Later in the demo, Jobs observed that the scrolling behavior of the text views was jerky rather than smooth, and asked if that could be changed as well. Line-by-line scrolling had seemed most natural for editing code, but during a lunch break, Ingalls reorganized some code to make smooth scrolling possible. When Jobs returned, Ingalls demonstrated that with a simple change, he was able to make the window scroll smoothly, a pixel at a time instead of a line at a time, again while the system was still running. He has repeated this demonstration for CHM on our restored Xerox Alto, both in a live event and in the following recorded video demo. The selection behavior demo begins at 39:04, and the scrolling behavior demo begins at 44:48. These specific demos take place in Smalltalk-76 running on a Xerox Alto.

Dan Ingalls demonstrates various versions of Smalltalk through the years.

While Alan Kay is famous for articulating the vision behind Smalltalk (and coining the term “object-oriented programming”), Dan Ingalls was the lead programmer and responsible for many features and design changes that took place as the LRG developed Smalltalk further. The first version of Smalltalk, the version that the children played with, was Smalltalk-72. (Each version of Smalltalk is named for roughly the year it was developed, with Smalltalk-72 coming out in 1972.) A running version of Smalltalk-72 is hosted at the Smalltalk Zoo here.

Screenshot of Smalltalk-72 emulation running in the Smalltalk Zoo.

Smalltalk-72 had a number of limitations, and some minor improvements were made in the next version, Smalltalk-74. Two of the most important improvements were a virtual memory system called OOZE, developed by Ted Kaehler, and a new graphics routine called BitBLT, developed by Ingalls. BitBLT (pronounced “bit-blit”) was a new graphics primitive that copied pixels from one memory location to another in a fast and efficient manner. This operation would come to be known as “blitting.” This was used anywhere that bitmapped graphics needed to be quickly moved around, such as scrolling text or animation. In earlier versions of Smalltalk, both text and graphics had their own versions of this routine, duplicating functionality. Ingalls united them and reimplemented the routine in the Alto’s microcode to make it fast. Today, blitting operations are used everywhere, including in 2D video games, where it is the basis for animating sprites.
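
The core of BitBLT is simple enough to sketch. Here is a minimal Python illustration (the function name and the nested-list bitmap format are invented for this post, not taken from PARC’s implementation): copy a rectangle of pixels from a source bitmap to a destination bitmap, which may be the same bitmap.

    def bitblt(src, dst, src_x, src_y, width, height, dst_x, dst_y):
        """Copy a width x height rectangle of pixels from src to dst."""
        for row in range(height):
            for col in range(width):
                dst[dst_y + row][dst_x + col] = src[src_y + row][src_x + col]

    # Scrolling text up one row is just a blit of the screen onto itself,
    # shifted vertically: one primitive serves text, windows, and sprites.
    screen = [[0] * 8 for _ in range(8)]
    screen[3][2] = 1                            # a lone "on" pixel
    bitblt(screen, screen, 0, 1, 8, 7, 0, 0)    # rows 1..7 move up to rows 0..6
    assert screen[2][2] == 1                    # the pixel has scrolled up one row

A real blitter must also handle overlapping copies in either direction and typically combines source and destination bits with logical operations for masking and inversion; this sketch keeps only the essential copy loop.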

Screenshot of Smalltalk-76, image from Wikipedia article on Xerox Alto.

The next major version, Smalltalk-76, was an even bigger upgrade. Smalltalk-76 changed the design of the Smalltalk language, incorporating a feature that is often seen as synonymous with object-oriented programming, “inheritance.” It also introduced a syntax that could be compiled, adding keyword-labelled parameters, something that is familiar to today’s Objective-C and Swift programmers. For interested technical readers, Dan Ingalls’s paper “The Evolution of Smalltalk: From Smalltalk-72 through Squeak” has many more details.[1] Smalltalk-76 established the design that most modern versions of Smalltalk follow today.
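
To make those two features concrete: in Smalltalk-76-style syntax, a message such as frame moveTo: 10 y: 20 labels each argument with a keyword (the example is invented for illustration). A rough Python analogue of keyword-labelled parameters and inheritance together, with made-up class names, might look like this:

    class DisplayObject:                 # superclass
        def move_to(self, x, y):
            self.x, self.y = x, y

    class Window(DisplayObject):         # Window inherits move_to
        def nudge(self, dx=0, dy=0):
            self.move_to(x=self.x + dx, y=self.y + dy)

    w = Window()
    w.move_to(x=10, y=20)                # keyword-labelled arguments, inherited method
    w.nudge(dy=-1)                       # scroll up one pixel, say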

Like 74 and 76 before them, the next two versions of Smalltalk at PARC were a minor update (78) followed by a major one (80). Smalltalk-78 was a port of Smalltalk-76 to the NoteTaker, a portable computer designed largely by Doug Fairbairn and powered by Intel 8086 microprocessors. This version of Smalltalk stripped out the OOZE virtual memory, and made key changes that facilitated portability in the next phase, Smalltalk-80.

An emulation of Smalltalk-78, which runs very similarly to Smalltalk-76, is hosted in the Smalltalk Zoo here.

Smalltalk-80 graphical user interface, ca 1980. Courtesy of the PARC Library, © PARC. CHM Object ID 500004472.

Adele Goldberg, who by 1980 was the manager of LRG, led the effort to publicize and make Smalltalk available outside of Xerox PARC. The development of this public version, Smalltalk-80, was led by Dan Ingalls. Licenses were granted to four corporate partners who would develop Smalltalk to run on their own systems: Tektronix, DEC, Apple, and HP. A key change in this effort was to translate the use of special keyboard characters specific to Xerox hardware into ASCII standard equivalents. A special issue of Byte Magazine was devoted to Smalltalk in August 1981, with contributions by LRG members including Ingalls, Goldberg, Kaehler, Larry Tesler, and others. This publication had a big impact on the computer industry in spreading object-oriented programming ideas. For one example, this Byte issue motivated Brad Cox to combine Smalltalk concepts with the C programming language to create Objective-C. (See “A Short History of Objective-C” and also “The Origins of Objective-C at PPI/Stepstone and Its Evolution at NeXT.”[2]) Several books on Smalltalk, written by LRG researchers, were also published. (Smalltalk-80: The Language, Smalltalk-80: The Language and Its Implementation, Smalltalk-80: The Interactive Programming Environment, and Smalltalk-80: Bits of History, Words of Advice.) Smalltalk-80 became the basis for all future commercial versions of Smalltalk. A version of Smalltalk-80 can be run here.

By the mid-1980s, Kay, like other ex-PARC colleagues Larry Tesler and Dan Ingalls, had joined Apple. In 1995, after a hiatus from the computer industry, Ingalls rejoined Kay’s group to produce a new, portable version of Smalltalk derived from the free Apple Macintosh version. Called “Squeak,” this version supported full color and contained many improvements, including an updated version of the BitBLT routine called “WarpBlt” that supported rotation and scaling effects. A cool example of WarpBlt can be found here. Squeak also supported a new effort to make a children’s educational computing environment, eToys. eToys can be run here. Squeak was released as open source in 1996, and remains in active development today. Thanks to Vanessa Freudenberg’s SqueakJS (an implementation of Squeak in JavaScript), you can now run most versions of Squeak in the Smalltalk Zoo here.
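
Where classic BitBLT copies pixels one-for-one, a warping blit computes, for each destination pixel, which source pixel to sample. The following minimal nearest-neighbor sketch in Python (invented names; not Squeak’s actual WarpBlt interface) rotates and scales a bitmap by inverse-mapping destination coordinates back into the source:

    import math

    def warp_blt(src, dst, angle, scale):
        """Fill dst by sampling src through an inverse rotation and scaling."""
        h, w = len(src), len(src[0])
        cy, cx = h / 2, w / 2
        cos_a, sin_a = math.cos(-angle), math.sin(-angle)
        for y in range(len(dst)):
            for x in range(len(dst[0])):
                # Inverse-map the destination pixel into source coordinates.
                sx = ((x - cx) * cos_a - (y - cy) * sin_a) / scale + cx
                sy = ((x - cx) * sin_a + (y - cy) * cos_a) / scale + cy
                sxi, syi = int(round(sx)), int(round(sy))
                if 0 <= sxi < w and 0 <= syi < h:
                    dst[y][x] = src[syi][sxi]   # nearest-neighbor sample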

The WarpBlt mandala effect in the Squeak emulation running in the Smalltalk Zoo.

Over its almost 50-year history, there have been many versions of Smalltalk, many of which have had significant impacts on the wider computer industry. From today’s perspective, in which mice and graphical user interfaces are ubiquitous, it can be difficult to imagine how radical the GUI was back in the 1970s. In addition, early PARC GUIs differed in key ways from the user interfaces we use today, be they Windows PCs or iPhones. For one, the dynamic, live nature of these systems, in which users were also programmers of their own systems and could change things immediately, at will, has been lost in favor of security and consistency. Text-based descriptions of these systems simply do not do them justice. They have to be seen. Recorded demos, such as those done by Ingalls and published by the Software History Center at CHM, do a much better job of capturing the experience.

It is this dynamic nature of Smalltalk, and a sense that the locked-down user interfaces of today have lost the flexibility and tinkerability of early Smalltalk, that has motivated its creators to return to it and bring it back to life. Alan Kay’s complete vision of personal dynamic media remains unfulfilled in modern computing. In order to grasp that original vision and its revolutionary impact, one must experience first-hand Smalltalk’s dynamism, its “liveness.” To do so, one must be able to run a version of Smalltalk, play with it, make changes live, and see how it works. Of course, most people don’t have access to a working Xerox Alto. Fortunately, Dan Ingalls has, in collaboration with others and over many years, created an emulation environment called LivelyWeb, implemented in JavaScript and running in any web browser, that can host various historical emulations of Smalltalk, from Smalltalk-72 all the way to Squeak. This is the Smalltalk Zoo: a menagerie of historical Smalltalk emulations running live in your web browser and accessible to all.

CHM, in partnership with Dan Ingalls, is proud to host the Smalltalk Zoo, which you can access at smalltalkzoo.thechm.org. The Smalltalk Zoo site is also the companion to Ingalls’s ACM History of Programming Languages 2020 paper “The Evolution of Smalltalk: From Smalltalk-72 through Squeak,” as the paper refers to the live Smalltalk emulations as examples. As the premier non-profit institution committed to the preservation of and access to computer history for years to come, CHM is the ideal place to archive and experience such historical software artifacts.

[1] Daniel Ingalls, “The Evolution of Smalltalk: From Smalltalk-72 through Squeak,” Proceedings of the ACM on Programming Languages 4, no. HOPL (June 12, 2020): 85:1–85:101, https://doi.org/10.1145/3386335.

[2] Brad J. Cox, Steve Naroff, and Hansen Hsu, “The Origins of Objective-C at PPI/Stepstone and Its Evolution at NeXT,” Proceedings of the ACM on Programming Languages 4, no. HOPL (June 12, 2020): 82:1–82:74, https://doi.org/10.1145/3386332.

Related Resources

Alto System Project: Dan Ingalls demonstrates Smalltalk

From Smalltalk to Squeak, lecture by Dan Ingalls

Yesterday’s Computer of Tomorrow: The Xerox Alto

Adele Goldberg Oral History

Finding Aid to the Adele Goldberg papers

Goldberg, ParcPlace, and Xerox PARC videotapes and DVDs

Alto System Project: Larry Tesler demonstration of Gypsy

Larry Tesler Oral Histories:

Alan Kay: The Dynabook—Past, Present, and Future

Alan Kay’s University of Utah Doctoral Thesis. The Reactive Engine; and Flex

The Computer Revolution Hasn’t Happened Yet

Doing with Images Makes Symbols

Alan C. Kay Papers

Alan Kay Oral History (2008)

Back to the Future of Software, Lecture by Alan Kay:

Model of the Dynabook

Dynabook, the Complete Story

40th Anniversary of the Dynabook, CHM Lecture by Alan Kay

Software

Smalltalk v286

VisualAge for Smalltalk

Smalltalk-80 Virtual Image 2.2 / Virtual Machine 1.1 Atari ST

Smalltalk/V Windows Object-Oriented Programming System

Smalltalk-72 and 80 manuals:

Hardware

Xerox Alto:


COVER IMAGE: The August 1981 issue of Byte magazine featured Xerox’s Smalltalk, a groundbreaking graphical environment and programming language that introduced object-oriented programming to a large audience. Copyright Robert Tinney.


CHM Releases New Recordings and Personal Stories with AI Expert Systems Pioneers

June 25, 2020

By David C. Brock
Director and Curator
Software History Center, CHM

Massimo Petrozzi
Senior Audio/Video and Digital Archivist &
Oral History Program Coordinator, CHM

As CHM continues its commitment to decoding the history and impact of AI, we are honored to preserve and make accessible these unique discussions with some of the field’s leading pioneers.

— Dan’l Lewin, CHM CEO

Today, we are bombarded by messages about the ways in which artificial intelligence (AI) is changing our world and about its future promises and perils. But today’s AI, called machine learning, is very different from much of the AI of the past. From the 1970s until the 1990s, a very different approach, called “expert systems,” appeared poised to radically change society in many of the same ways that today’s machine learning now seems poised to do. Expert systems seek to encode into software the experience and understanding of the finest human specialists in everything from diagnosing an infectious disease to identifying the sonar fingerprint of enemy submarines, and then to have these systems suggest reasoned decisions and conclusions in new, real-world cases. Today, many of these expert systems are commonplace in everything from systems for maintenance and repair to automated customer support systems of various sorts. While such uses appear prosaic today, expert systems were once viewed as a major advance, able to meet or exceed the capabilities of human experts in a set of specific domains.
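
At their core, many expert systems of this era were rule-based: if-then rules elicited from human specialists, plus an inference engine that chains those rules together. The toy Python sketch below shows only the forward-chaining idea; the two medical-flavored rules are invented for illustration and come from no real system.

    # A toy forward-chaining inference engine: fire any rule whose
    # conditions are all present as facts, add its conclusion as a new
    # fact, and repeat until nothing new can be inferred.
    RULES = [
        ({"fever", "stiff neck"}, "suspect meningitis"),
        ({"suspect meningitis"}, "recommend further tests"),
    ]

    def infer(facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"fever", "stiff neck"}))
    # includes "suspect meningitis" and then "recommend further tests"

Real systems of the period layered much more on top, such as certainty factors, explanation facilities, and backward chaining from goals, as in the medical system MYCIN.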

In May 2018, CHM hosted a unique two-day meeting titled “AI: Expert Systems Pioneer Meeting.” The meeting was a collaboration between the Software History Center and the Software Industry Special Industry Group. The goal was to bring together important pioneers in expert systems and let them tell their stories. As Burt Grad, one of the moderators and organizers, highlighted during his opening remarks, attendees were invited to describe the history of the field from their personal perspectives. As he put it, “tell what you know personally, either from people you worked with or things you experienced.”1 The result was an engaging conversation covering the 1950s to the 1990s, with attention to the expert systems companies founded in those years.

Day 1: Origins, Memories, and Early Companies

The first day opened with an interesting discussion about the origins of AI as a field. The attendees shared memories about the people and the institutions that created the field in its first two decades. In particular, the session focused on the contributions of Marvin Minsky, John McCarthy, Allen Newell, and Herb Simon. The attendees also discussed what terms such as “expert systems,” “knowledge-based systems,” and “artificial intelligence” meant in those years. The conversation also touched on the relationship between academic research and business applications of AI.

The day ended with an overview of some of the early companies that embraced expert systems from the 1960s to 1980s. The founders of companies such as Machine Intelligence, Symantec, Advanced Decision Systems, AI Corporation, and others explained how and when each company was founded, their primary products and services, and discussed their main sources of funding and revenue. This discussion also offered attendees the opportunity to address how the signature AI programming language, LISP, affected the growth of expert systems companies. To use the words of one of the attendees, “LISP machines were the best development environment ever invented.”1

Day 2: Later Companies and Changing Approaches to AI and Machine Learning

The second day of the meeting focused on companies created in the 1980s and 1990s, such as Cycorp, Syntelligence, and Neuron Data. Attendees also discussed how larger companies like IBM, Schlumberger, and Franz Inc. implemented this technology. The discussion provided not only an overview of how these companies grew and, in the end, failed, but also of the technical problems they were trying to address.

The meeting ended with an interesting analysis of how today’s dominant approach to AI—machine learning—differs from expert systems. As Edward Feigenbaum summarized it, the main difference is one of granularity: “When we were going after knowledge from a doctor, we wanted a gold bar; we didn’t want gold dust. We wanted it all packaged up with your expertise and all your rules of good judgment, all packaged together. What you get now is gold dust, and you need 100,000 of them.”2 During the final discussion, attendees underscored how this change in approach to AI was intertwined with technological advances, such as increases in computing power.

The following day, CHM recorded four oral histories with some of the meeting attendees: Herb Schorr, Alain Rappaport, Brad Allen, and Peter Friedland. By sharing their personal stories, these four AI pioneers had the opportunity to provide extra context for the topics discussed during the meeting.

Now for the first time, CHM is releasing these recordings from its archives and adding historical context to today’s conversations surrounding AI and machine learning.

“We are proud to release these important recordings, including companion oral histories, from our ‘AI: Expert Systems Pioneer Meeting,’” says CHM CEO Dan’l Lewin. “These recordings highlight the voices of AI legends and contributors like Edward Feigenbaum, Herb Schorr, Peter Norvig, Peter Hart, Brad Allen, Peter Friedland, and many others in an engaging story about the people behind expert systems companies from the 1970s to the 1990s. As CHM continues its commitment to decoding the history and impact of AI, we are honored to preserve and make accessible these unique discussions with some of the field’s leading pioneers.”

By recording the voices and stories of these AI pioneers, CHM is providing an invaluable contribution to the understanding of one of the most fascinating technologies of our era.

Meeting Transcripts

View All

Oral Histories

Oral History of Brad Allen

Oral History of Brad Allen, interviewed by Avron Barr on May 16, 2018 in Mountain View, CA, X8641.2018 © Computer History Museum. Transcript.

Oral History of Peter Friedland

Oral History of Peter Friedland, interviewed by David Grier on May 16, 2018 in Mountain View, CA, X8635.2018 © Computer History Museum. Transcript.

Oral History of Peter Hart

Oral History of Peter Hart, Part 1, interviewed by David C. Brock on May 16, 2018 in Mountain View, CA X8637.2018 © Computer History Museum. Transcript.

Oral History of Peter Hart, Part 2, interviewed by David C. Brock on August 14, 2018 in Fishers Island, NY X8637.2018 © Computer History Museum. Transcript.

Oral History of Brian McCune

Oral History of Brian McCune, interviewed by Hansen Hsu on May 16, 2018 in Mountain View, CA X8636.2018 © Computer History Museum. Transcript.

Oral History of Herb Schorr

Oral History of Herb Schorr, interviewed by Burt Grad on May 16, 2018 in Mountain View, CA X8633.2018 © Computer History Museum. Transcript.

Related Oral Histories

Oral History of Edward Feigenbaum, interviewed by Nils Nilsson

Oral History of Edward Feigenbaum, Part 1, interviewed by Nils Nilsson on June 20, 2007 in Mountain View, California, X3896.2007 © Computer History Museum. Transcript.

Oral History of Edward Feigenbaum, Part 2, interviewed by Nils Nilsson on June 27, 2007 in Mountain View, California, X3896.2007 © Computer History Museum. Transcript.

Oral History of Edward Feigenbaum, interviewed by Donald Knuth

Oral History of Edward Feigenbaum, Part 1, interviewed by Donald Knuth on April 4, 2007 in Mountain View, CA, X3897.2007 © Computer History Museum. Transcript.

Oral History of Edward Feigenbaum, Part 2, interviewed by Donald Knuth on May 2, 2007 in Mountain View, CA, X3897.2007 © Computer History Museum. Transcript.

Notes

  1. Session 4, page 16.
  2. Session 8, page 23.


PHOTOS COURTESY OF PAUL MCJONES AND EDWARD LAHAY


The Earliest Unix Code: An Anniversary Source Code Release

October 17, 2019

What is it that runs the servers that hold our online world, be it the web or the cloud? What enables the mobile apps that are at the center of increasingly on-demand lives in the developed world and of mobile banking and messaging in the developing world? The answer is the operating system Unix and its many descendants: Linux, Android, BSD Unix, MacOS, iOS—the list goes on and on. Want to glimpse the Unix in your Mac? Open a Terminal window and enter “man roff” to view the Unix manual entry for an early text formatting program that lives within your operating system.

2019 marks the 50th anniversary of the start of Unix. In the summer of 1969, that same summer that saw humankind’s first steps on the surface of the Moon, computer scientists at the Bell Telephone Laboratories—most centrally Ken Thompson and Dennis Ritchie—began the construction of a new operating system, using a then-aging DEC PDP-7 computer at the labs. As Ritchie would later explain:

“What we wanted to preserve was not just a good environment to do programming, but a system around which a fellowship could form. We knew from experience that the essence of communal computing, as supplied from remote-access, time-shared machines, is not just to type programs into a terminal instead of a keypunch, but to encourage close communication.”1 

Ken Thompson (seated) and Dennis Ritchie (standing) with the DEC PDP-11 to which they migrated the Unix effort in 1971. Collection of the Computer History Museum, 102685442.

Ken Thompson was the motive force for the development of this system, which was soon called Unix, while Ritchie was the key individual in the creation of a new programming language for it, called C. Like Unix itself, the language C has been tremendously influential. C and languages inspired by it (C++, C#, Java) have dominated lists of the most popular programming languages to the present. Indeed, they account for 4 of the 10 most popular programming languages in 2019 according to the IEEE.2 C itself is #3.

To mark Unix’s 50th anniversary, the CHM Software History Center is delighted to make publicly accessible for the first time some of the earliest source code produced in the Unix story.

Recently, CHM was entrusted by the Ritchie family to preserve the papers of Dennis Ritchie. Within these papers, I identified a black binder with the handwritten label “Unix Book II” containing nearly 190 pages of printed source code listings, written in PDP-7 assembly code.

With invaluable early review of these listings from Warren Toomey of The Unix Heritage Society and from John Mashey, an early Unix contributor and CHM trustee, we can date these listings to 1970, perhaps early 1971, before the Unix effort migrated to a new PDP-11. A PDF of the listings contained in this Unix Book II binder is available for download via this catalog record.

A page from the source code listing for Space Travel, ca. 1970. Space Travel was critical to the start of the Unix story. Ken Thompson began implementing the science fictional game, in which players guide a spacecraft through the solar system to land on various moons and planets, on a PDP-7 at the Bell Telephone Laboratory in 1969. Dennis Ritchie soon joined in the effort. In working on and playing Space Travel on the PDP-7, Thompson turned to developing a full, if fledgling, operating system for the computer that incorporated file system and other ideas that he and others in his computer science department had been considering. Ritchie and other colleagues were soon attracted to the system and its development. That early system, the start of Unix, and programs for it are represented in this source code release. Collection of the Computer History Museum, Dennis M. Ritchie Papers, 102788942.

These programs were written by the first participants in the Unix effort at the Bell Telephone Laboratories, starting in 1969. Ken Thompson and Dennis Ritchie were the two central, and initial, participants. Other early participants include Rudd Canaday, Doug McIlroy, Brian Kernighan, Bob Morris, and Joe Ossanna. It is likely that much of the work represented in this binder is due to the work of Thompson and Ritchie.

The binder is also likely to have been originally kept in the “Unix Room” at Bell Labs and was part of a collection there of the earliest Unix code. The collection appears to have been divided into two binders, presumably the Unix Book II now preserved at the Museum and another companion binder, perhaps labeled “Unix Book I.” The listings within this companion binder were photocopied by Norman Wilson in the later 1980s and, in 2016, scanned and made available through The Unix Heritage Society. The current location of this companion binder is unknown.

Provisional identifications and notes on the program listings in Unix Book II, keyed to the page numbers of the PDF, follow. Our sincere thanks to Warren Toomey and John Mashey for their vital assistance in these provisional identifications and notes. We are excited to see what additional identifications and insights will come from the examination of these source code listings by the public. To share your ideas and insights with us, please join the discussion at the end of this post or email the CHM Software History Center.

Happy golden anniversary, Unix!

Unix Book II Identifications and Notes

pp. 2−15
Handwritten identifier on p. 2: “fops”

These may be PDP-7 assembly listings for the floating-point arithmetic operations that were among the first software that Ken Thompson and Dennis Ritchie had to create in order to develop the game Space Travel for the PDP-7 starting in 1969.

In Ritchie’s “The Evolution of the Unix Time-sharing System,” he writes: Also during 1969, Thompson . . . and I rewrote Space Travel to run on this machine [‘a little-used PDP-7 computer with an excellent display processor; the whole system was used as a Graphic-2 terminal’]. The undertaking was more ambitious than it might seem; because we disdained all existing software, we had to write a floating-point arithmetic package, the point-wise specification of the graphic characters for the display, and a debugging system . . . All this was written in assembly language . . .

Warren Toomey believes that the “fops” listing from pp. 2−15 represents mathematics functions like multiplication, division, sine, cosine, square root, and others.

pp.17−18
PDP-7 assembly listing for “ln”

Unix command for creating links to files.

pp. 20−24
PDP-7 assembly listing for “ls”

Unix command for listing file names.

pp. 26−34
PDP-7 assembly listing for “moo”

Number guessing game, available in 1970 on Multics and in 1968 on a University of Cambridge mainframe. A version of the mind or paper game Bulls and Cows.

pp. 36−39
PDP-7 assembly listing for “nm”

Unix command to list the symbol names of an executable file.

pp. 42−43
“op”

A list of definitions, showing the instruction values for all of the PDP-7 assembly mnemonic codes and also numbers for the Unix system calls.

pp. 45−63
PDP-7 assembly listing for what may be a simulation or game for billiards or pool.

p. 65
PDP-7 assembly listing for “pd”

Unidentified program.

Might “pd” stand for “previous directory”?

pp. 67−71
PDP-7 assembly listing for “psych”

Unidentified program.

p. 73
PDP-7 assembly listing for “rm”

Unix command for removing files.

p. 75
PDP-7 assembly listing for “rn”

Possibly a Unix command for renaming, or moving, files that was later implemented as the “mv” command.

pp. 77−92
PDP-7 assembly listing for “roff”

The first Unix text-formatting program.

pp. 94−98
PDP-7 assembly listing for “salv”

Unix command for file system salvage, reconstructing the file system.

pp. 100−106
PDP-7 assembly listing for “sh”

The Thompson Shell, the Unix command interpreter.

pp. 109−136
PDP-7 assembly listing for Space Travel.

Space Travel is a computer game that was central to the beginning of the Unix effort at Bell Labs.

pp. 138−139
PDP-7 assembly listing for “stat”

Unix command that provides status information about files.

pp. 141−142
PDP-7 assembly listing for “tm”

A command that invokes the system call time and converts the result. Likely an early version of the Unix command “date.”

pp. 145−169
PDP-7 assembly listings for “t1,” “t2,” “t3,” “t4,” “t5,” “t6,” “t7,” and “t8”

Unidentified program.

Perhaps an interpreter for a programming language? B?

pp. 172−183
PDP-7 assembly listings for “ttt1”

Tic Tac Toe game.

pp. 184−188
PDP-7 assembly listing for “ttt2”

Unidentified program.

Presumably related to Tic Tac Toe.

p. 190
PDP-7 assembly listing for “un”

Unix command for finding undefined symbols.

Notes

1. https://www.bell-labs.com/usr/dmr/www/hist.html

2. https://spectrum.ieee.org/computing/software/the-top-programming-languages-2019


Math Miracles for Missileers: The Aerospace Industry, Computer Programming, and the Rise of IBM

April 26, 2019


A newspaper clipping ca. 1953.


“New Data Machine Performs Math Miracles for Missileers” declaims the bold and italicized headline of a well-preserved clipping, carrying with it no publication attribution or date. The clipping continues:

Even if you’re a real hotshot in math, you’ll probably think that adding and subtracting 10-digit figures at the rate of 200 a second is not only fantastic but impossible. It’s fantastic but not impossible, at least for the new electronic data processing machine that arrived at MSD a week ago.

Here, “MSD” stands for the Missile Systems Division of the Lockheed Aircraft Corporation. Lockheed created MSD in 1953, placing it in Van Nuys, California—just north of Hollywood in the Los Angeles Basin, and a short drive from Lockheed’s headquarters in Burbank to the east. MSD’s remit was to get Lockheed into the military missile race for nuclear ICBMs, guided missiles of various stripes, and also satellites and their launching. By 1957, MSD was redubbed the “Missiles and Space Division,” and had relocated north, to Sunnyvale on the San Francisco Peninsula where it would become the area’s largest employer as it developed and produced the submarine-launched nuclear ballistic missile, Polaris. From the casual use of MSD, one can infer that the “Math Miracles for Missileers” article was clipped from some internal Lockheed publication.

But what of the “new electronic data processing machine” that could achieve these “fantastic” calculations? The article continues:

The new machine is called a Type 650 magnetic drum data processing machine, and is one of the first to be shipped by IBM. It is the first in the missile and aircraft industry and also the first this side of the Mississippi river.

Today, the IBM 650 is recognized as among that firm’s first commercial electronic computers, but the word “computer” had not yet found its way into the clipped article.

IBM 650 Electronic Data Processing System, November 1956.

What was MSD’s 650 to do?

For the most part, the new machine will be working for missile engineers, computing flight paths for fully guided missiles, calculating heating effects at extremely high speeds, helping with upper-atmosphere research, and working on design studies and computation of orbits for space vehicles . . . Once a week, it will take an hour off to turn out the MSD payroll.

How is it that this clipping has survived to the present day? How is it that it is available in digital form to reproduce here as an illustration, for you and your author to read? The necessary cause was that Robert W. “Bob” Bemer—who worked at Lockheed’s Missile Systems Division in Van Nuys at this time and who would become its IBM 650’s power user—carefully cut out the article and placed it into a scrapbook. The sufficient causes were that after Bemer’s death in 2004 at the age of 84, Bettie Bemer, his wife, donated all of Bemer’s personal papers to the Computer History Museum (CHM). In 2018, the Museum used its recent Access to Historical Records grant from the National Archives’ National Historical Publications and Records Commission to create a finding aid for the Bemer papers and to digitize and make freely available online roughly 10 percent of the collection, over 3,000 pages.2

Bemer, who had graduated with a mathematics degree from Albion College, had worked in computing at RAND, Lockheed, and Marquardt Aircraft before joining Lockheed’s Missile Systems Division in 1954. By the time Bemer returned to Lockheed, he was one of the longest-standing and most accomplished users of an early contribution from aerospace engineering to electromechanical computing: the IBM-CPC, the “card-programmed calculator.”

A few years earlier, in 1948, engineers at Northrop Aircraft’s Hawthorne facility near the Los Angeles International Airport made a novel connection between an IBM calculator and an IBM accounting machine (a high-end tabulator). The resulting system was a calculator in which the sequence of operations could be controlled—programmed—by properly coded sequences of punched cards fed into the accounting machine. Critically, the operations performed by the system and controlled by the punched cards were determined by another programmed element, a very carefully and specifically wired plug “board” within the calculator. These boards defined the space of operations that could be performed, while the punched cards determined what actually took place within this space. With their self-made system, Northrop engineers could automate complex technical calculations needed for their aerospace R&D.3

IBM Card-Programmed Calculator (CPC)

IBM saw a market in the work of the Northrop engineers and in their enthusiasm for it, quickly turning the user innovation into a standardized and designed product, the IBM-CPC, announced in November 1948. IBM’s imagination of a market proved productive: by the mid-1950s, aerospace and other engineers, private firms, and government organizations gobbled up some 700 of the CPCs. The newly digitized portions of the Bemer collection show us that, if one takes his early résumé as generally accurate, Bob Bemer not only encountered the IBM-CPC very early, in 1949 at RAND, but also used the first and fourth CPCs ever manufactured by IBM.4

A job application sent to the MSD at Lockheed.


At RAND, Bemer learned to “code” the CPC to produce calculations central to some of the most pressing concerns of the new Cold War defined by nuclear weapons—the Soviet Union detonated its first nuclear bomb shortly after Bemer joined RAND—delivered by aerospace vehicles. Bemer used the CPC to “analyze” such “problems” as “strategic war games,” “statistical bombing,” “nuclear fission shielding,” and “strategic aircraft perimeter studies,” whatever that may mean. Moreover, at RAND and shortly afterward at Lockheed through 1952, Bemer developed what he called “systems” for the CPC that were, in essence, system software or operating systems of a sort. Bemer’s CPC systems were programmed boards that defined a space of operations within which calculations could be automated through programmed control by punched cards. One of these systems, named FLODESC, was described by Bemer as a “general purpose computation system for the CPC.” In particular, it afforded the ability to perform calculations using floating-point numbers. Remarkably, Bemer kept his description of FLODESC, along with his other CPC work, in a binder in his files confusingly labeled “IBM 650 Programming.”

Yet perhaps Bemer’s labeling of his binder of CPC programming as “IBM 650 Programming” does make sense—beyond possible reuse of office supplies—in that he turned immediately from leading-edge use of the CPC to extensive use of one of IBM’s first commercial digital electronic computers, the IBM 650—the machine that Lockheed boasted performed “math miracles” for its “missileers.” Surely some of the design, analysis, approach, and mathematics that Bemer had honed for the CPC transferred to his quick engagement with the IBM 650 computer.

Indeed, in a short memoir of his experiences with the IBM 650 published in a 1986 issue of the Annals of the History of Computing, Bemer recalls that one of his first tasks when joining Lockheed’s Missile Systems Division was to install the division’s new IBM 650. Because the machine was one of the first 10 650s produced, the installation called for Bemer to travel across the country to IBM’s Endicott, New York, facilities for testing of the computer, the first of its kind—in Bemer’s recollection—intended for scientific work. While the IBM 650 was primarily intended by IBM to be a smaller-scale business computer, Bemer’s boss in the Missile Systems Division, Art Hubbard, was making a rather substantial bet that the 650 could be an accessible and useful tool for scientific and engineering calculations: He had ordered three of them!5

After bringing the 650 online, the first major task for Bemer and his colleagues was to make the 650 perform the floating-point calculations so central to the work of the missile division’s engineers. As Bemer later recalled, “Almost all scientific usage of the 650 was done with floating-point routines, which we had to fabricate ourselves.” Within months, drawing on his previous work creating a floating-point “system” for the CPC and similar work by his colleagues, Bemer’s group created a floating-point “system” for their new IBM 650 called the FLAIR system, for “floating abstract interpretive routines.” The tie between Bemer’s seminal work on the CPC and his first work to make the IBM 650 usable by Lockheed’s engineers was tight.
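
What did “fabricating” floating point mean in practice? On fixed-point decimal hardware like the 650’s, every number had to be carried as a separate mantissa and exponent, with the arithmetic built by hand in software. The Python sketch below illustrates that idea only; the decimal (mantissa, exponent) format echoes the 650’s ten-digit words, and none of this is FLAIR’s actual code.

    def normalize(m, e):
        """Keep the mantissa within ten decimal digits, like a 650 word."""
        while abs(m) >= 10**10:
            m //= 10
            e += 1
        return m, e

    def fp_mul(a, b):
        """Multiply mantissas, add exponents, renormalize."""
        (ma, ea), (mb, eb) = a, b
        return normalize(ma * mb, ea + eb)

    def fp_add(a, b):
        """Align exponents before adding mantissas."""
        (ma, ea), (mb, eb) = a, b
        if ea < eb:
            (ma, ea), (mb, eb) = (mb, eb), (ma, ea)
        return normalize(ma * 10**(ea - eb) + mb, eb)

    # A number is (mantissa, exponent): value = mantissa * 10**exponent.
    assert fp_mul((15, -1), (2, 0)) == (30, -1)   # 1.5 * 2.0 = 3.0
    assert fp_add((15, -1), (2, 0)) == (35, -1)   # 1.5 + 2.0 = 3.5

Every such operation had to be interpreted in software on a machine with no floating-point hardware, which is why these packages were whole “systems” rather than single subroutines.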

Quickly, the Missile Systems Division’s big bet on the IBM 650 for critical aerospace engineering work, and Bob Bemer’s enabling role in it, became something of a community resource spanning multiple corporations. IBM began bringing its own employees, as well as potential IBM 650 customers, into the missile division’s operation for discussions and presentations, many conducted by Bemer himself. These exchanges positioned the missile division at the center of development around the engineering use of the IBM 650, among other users and IBM itself.

A letter to Bob Bemer from D.W. Pendery in 1955.

This February 14, 1955, letter to Bemer from Donald W. Pendery—then IBM’s field manager for Los Angeles, and later a top corporate planner for Xerox—was digitized as part of the Museum’s Software History Processing Project. It shows both the extent to which IBM was drawing on Bemer’s expertise with the 650 and the lead in software that the missile division held over IBM itself at this time. “The duplicated programs and card forms which you distributed,” Pendery wrote, “will be most helpful to our other customers.”6

Pendery’s opinion of the helpfulness of Bemer and the missile division’s software and expertise to other early scientific and engineering users of the IBM 650 was certainly echoed by some of these very users. S. A. Lawrence, the director of the Systems Control Department of Collins Radio in Iowa, wrote to Bemer the following month, in March 1955, thanking Bemer and his colleagues for introducing a Collins Radio delegation to the IBM 650. “We are convinced,” Lawrence wrote, “that you are pioneering the field on the use of the 650 and know its capabilities and limitations much better than probably even I.B.M.”7

A letter from the Collins Radio Company offering support to Bemer.

Bemer’s work to disseminate the missile division’s use of the 650 for aerospace engineering and its related software was of mutual benefit to Lockheed, IBM, and Bemer himself. Bemer became widely known among the nascent computing community as a programming expert, and IBM and Lockheed remained at the center of developments in the use of IBM’s computers for technical work. This tripartite benefit of Lockheed’s opening its doors and sharing its software with IBM and its customers was the focus of a May 1955 letter from Fred L. Brown, an IBM manager, to Bemer’s boss, Art Hubbard.

“Mr. Bemer is an acknowledged leader in the field of digital analysis,” Brown wrote, continuing, “his contributions toward effective utilization of our several computers has been of great value to IBM as well as to Lockheed Aircraft Corporation.”8

It may come as little surprise to the reader, then, that IBM poached Bemer just a few months later. Bemer left the Missile Systems Division to join IBM and its software efforts. At IBM, Bemer kept close watch on the development of FORTRAN while he first created a system for performing floating-point calculations on the IBM 705, the company’s new and most powerful business computer. This 705 system, called PRINT-1, was a continuation of Bemer’s creation of floating-point systems for the IBM 650 and IBM CPC. From PRINT-1, Bemer moved into software developments related to FORTRAN.9

A letter from IBM Manager Fred Brown commending Bemer.

By 1957, Bemer was working in IBM’s Programming Research Department and had become an absolute devotee of what he called “automatic programming,” that is, programming in a high-level programming language and using compilers to create the executable machine code. In March 1957, Bemer published a short note in a now-obscure magazine, Automatic Control, titled “How to Consider a Computer.” In the piece, Bemer opined that “[a] computer should not be rented or purchased unless an automatic programming or coding system is furnished for its operation.” He continued, presenting a view of the “new synthetic languages . . . in . . . process which will affect your use of computers.” These high-level programming languages “will be essentially algebraic, both arithmetic and logical, and linguistic so that procedures may consist of real sentences in a living language.”10
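A toy example may make “automatic programming” concrete: the compiler’s job is to turn an algebraic statement into the sequence of machine-like instructions that a human coder would otherwise write by hand. The sketch below, in Python, translates one arithmetic expression into instructions for an imaginary stack machine; the instruction names are invented for illustration.

```python
# Toy illustration of "automatic programming": compiling an algebraic
# expression into machine-like instructions. The target instruction set
# (LOAD/PUSH/ADD/SUB/MUL) is an invented stack machine, not a real one.
import ast

def compile_expr(node, out):
    """Emit stack-machine instructions for a parsed arithmetic expression."""
    if isinstance(node, ast.BinOp):
        compile_expr(node.left, out)    # evaluate the left operand first
        compile_expr(node.right, out)   # then the right operand
        op = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL"}[type(node.op)]
        out.append(op)                  # combine the top two stack values
    elif isinstance(node, ast.Constant):
        out.append(f"PUSH {node.value}")
    elif isinstance(node, ast.Name):
        out.append(f"LOAD {node.id}")

instructions = []
compile_expr(ast.parse("a + b * 2", mode="eval").body, instructions)
print(instructions)  # ['LOAD a', 'LOAD b', 'PUSH 2', 'MUL', 'ADD']
```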

It appears almost certain that, in writing his article, Bemer had in mind the work being carried out contemporaneously in Grace Hopper’s Automatic Programming Department in the Remington Rand Univac division of Sperry Rand. How can one be confident of this? Because of a letter, digitized as part of the Software History Processing Project, from Hopper to Bemer dated April 1, 1957, just after the appearance of Bemer’s article. In the warmly familiar letter, Hopper expresses her delight at Bemer’s opinionated article and gives him an update on her work with “B-Zero . . . now called Flow-matic by the Sales Department,” an early compiled programming language developed by Hopper for business software. Hopper’s English-like Flow-matic would prove instrumental in her subsequent work to create the famous COBOL programming language, still running many key financial and government systems today.11

Dr. Grace Hopper noting her appreciation of Bemer’s article in Automatic Control.

These few selections from the Bob Bemer papers provide further insight into the importance of southern California to the rise of electronic digital computing and, in particular, the history of software. They underscore the way in which computer experts within the aerospace industry developed important capabilities in both hardware and software that were on par with, or even exceeded, those within computer makers like IBM, and how these lead users within aerospace helped to build a community linking other computer users with the computer makers themselves. Bemer’s case shows how the lines between the aerospace industry and the rising computer industry were highly permeable, with computer experts moving between the sectors and contributing to hardware and software developments within both.

Bob Bemer’s career went in many other directions over the following decades. He worked in timesharing computing and in computing standards like ASCII before becoming an early and vociferous voice raising concern, building public awareness, and developing technical solutions for the Y2K bug. All of these fascinating topics are touched on in the 3,000 digitized pages of the Bemer papers now available online through the Museum, and the finding aid surveys tens of thousands of additional pages in which an even fuller picture awaits future researchers.

Notes

  1. Bemer Scrapbooks, p. 211.
  2. Guide to the Robert (Bob) Bemer Papers
  3. The IBM Card Programmed Calculator
  4. James W. Cortada, IBM: The Rise and Fall and Reinvention of a Global Icon (MIT Press, 2019), p. 157; and F. J. Gruenberger, “The History of the Johnniac,” p. 2.
  5. Robert W. Bemer, “Nearly 650 Memories of the 650,” Annals of the History of Computing, vol. 8, no. 1, 1986, pp. 66–69.
  6. Bemer Scrapbooks, p. 186.
  7. Bemer Scrapbooks, p. 187.
  8. Bemer Scrapbooks, p. 184.
  9. Proceedings of the Western Joint Computer Conference, February 7–9, 1956, p. 52.
  10. Robert W. Bemer, “How to Consider a Computer,” Automatic Control, March 1957, pp. 66–69.
  11. Bemer Scrapbooks, p. 151.

Resources

Finding Aid

Digitized Materials

Box 1: Memoirs—computing prior to FORTRAN, 102785394 (1 folder)

Box 2: Speeches and papers, 102785430 (5 folders)

Box 3: IBM 650—lab book, 102785388 (1 folder)

Box 4: Y2K problem—historical file, 102785459 (2 folders)


Learn more about our Software History Processing Project, supported by the National Archives’ National Historical Publications and Records Commission.


The post Math Miracles for Missileers: The Aerospace Industry, Computer Programming, and the Rise of IBM appeared first on CHM.

]]>
https://computerhistory.org/blog/math-miracles-for-missileers-the-aerospace-industry-computer-programming-and-the-rise-of-ibm/feed/ 0
If Discrimination, Then Branch: Ann Hardy’s Contributions to Computing https://computerhistory.org/blog/if-discrimination-then-branch-ann-hardy-s-contributions-to-computing/ https://computerhistory.org/blog/if-discrimination-then-branch-ann-hardy-s-contributions-to-computing/#respond Mon, 25 Mar 2019 00:00:00 +0000 http://computerhistory.org/blog/if-discrimination-then-branch-ann-hardy-s-contributions-to-computing/ In the realm of software, a “branch” is a computer instruction that causes a shift from the default pattern of activity to a different sequence of actions, a different way of moving ahead if you will. For Ann Hardy, a pioneer in timesharing software and business, her contributions to computing were achieved through repeated, creative branching in the face of sexist discrimination.

The post If Discrimination, Then Branch: Ann Hardy’s Contributions to Computing appeared first on CHM.

]]>
Ann Hardy pictured in the mid-1950s when she began her career in software.

In the realm of software, a “branch” is a computer instruction that causes a shift from the default pattern of activity to a different sequence of actions, a different way of moving ahead if you will. For Ann Hardy, a pioneer in timesharing software and business, her contributions to computing—detailed in her recent oral history with the Software History Center—were achieved through repeated, creative branching in the face of sexist discrimination. A serious challenge came in the early 1950s, when, as an undergraduate, she was not allowed to major in chemistry despite her interest. That was for men only. Hardy branched. The physical therapy major allowed her to take all of the chemistry and technical classes she wanted.
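For readers curious about the literal sense of the metaphor, here is a minimal sketch, in Python, of a tiny instruction loop in which execution normally falls through to the next instruction and a branch redirects it elsewhere; the little machine and its instruction names are invented for illustration.

```python
# A toy machine illustrating a branch. By default the program counter (pc)
# advances to the next instruction; the branch instruction redirects it.
# The instruction set here is invented, not any historical machine's.
program = [
    ("LOAD", 0),             # 0: put 0 in the accumulator
    ("ADD", 1),              # 1: add 1 to the accumulator
    ("BRANCH_IF_LT", 5, 1),  # 2: if accumulator < 5, branch back to step 1
    ("HALT",),               # 3: stop
]

acc, pc = 0, 0
while True:
    op = program[pc]
    if op[0] == "LOAD":
        acc = op[1]; pc += 1
    elif op[0] == "ADD":
        acc += op[1]; pc += 1
    elif op[0] == "BRANCH_IF_LT":
        pc = op[2] if acc < op[1] else pc + 1  # the branch redirects execution
    elif op[0] == "HALT":
        break
print(acc)  # 5
```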

In the mid-1950s, at the suggestion of a childhood friend and fellow mathematics lover, Hardy stopped by IBM’s offices at 57th and Madison Avenue in Manhattan and took a computer programming aptitude test. Passing with flying colors, she took a six-week course and aced the final exam. The top 10 percent of the class was promised a job in sales, the pinnacle of IBM, but upper management eventually decided this could not apply to women. Hardy branched. She became an IBM programmer instead, making important contributions to the software for the Stretch supercomputer. Stretch led to a job at the Lawrence Livermore National Laboratory, where Hardy first experienced the then novel timesharing approach to computing. Thrilled by the possibilities of interactive computing, in 1966 she convinced a pioneering startup in the field, Tymshare, to hire her to write their timesharing operating system. They did.

To learn about further branchings by Ann Hardy in her rise to an executive at Tymshare and then to a cofounder of a secure-computing firm, read her oral history on the CHM website or watch her oral history on our YouTube channel.

“If Discrimination, Then Branch: Ann Hardy’s Contributions to Computing” was published in the Computer History Museum’s 2018 issue of Core magazine.

Oral History of Ann Hardy, Session 1

Oral History of Ann Hardy, Session 1, interviewed by David C. Brock, Hansen Hsu, and Marc Weber on February 20, 2018. Collection of the Computer History Museum, X7849.2017. Full transcript.

Session 1, Part 1

Session 1, Part 2

Oral History Ann Hardy, Session 2

Oral History of Ann Hardy, Session 2, interviewed by David C. Brock, Hansen Hsu, and Marc Weber on July 22, 2018. Collection of the Computer History Museum, X7849.2017. Full transcript.


The post If Discrimination, Then Branch: Ann Hardy’s Contributions to Computing appeared first on CHM.

]]>
https://computerhistory.org/blog/if-discrimination-then-branch-ann-hardy-s-contributions-to-computing/feed/ 0
Meeting Whirlwind’s Joe Thompson https://computerhistory.org/blog/meeting-whirlwind-s-joe-thompson/ https://computerhistory.org/blog/meeting-whirlwind-s-joe-thompson/#respond Wed, 20 Feb 2019 00:00:00 +0000 http://computerhistory.org/blog/meeting-whirlwind-s-joe-thompson/ The photograph was dated 1950, a date when a now unimaginably small number of humans had ever beheld a computer, much less touched one, and when unabashed racism and discrimination were endemic on the American scene. Who was the young African-American man who nevertheless sat at the controls of this storied machine? What was his name? What was his story?

The post Meeting Whirlwind’s Joe Thompson appeared first on CHM.

]]>
I cannot recall the exact moment when I resolved to meet Joe Thompson, but my email files tell me it was on or before December 4, 2017. However, I am sure of the causes for my resolution. One cause was the steps that my colleagues and I had taken toward exploring some of the earliest digital computer programs in the collection at the Computer History Museum (CHM): punched paper tapes from the 1950s used with Whirlwind, the breakthrough experimental computer created at MIT.

In reviewing the breadth and depth of the Museum’s holdings of and about Whirlwind, I was given pause by this photograph:

Whirlwind I Computer, ca. 1950. Collection of the Computer History Museum, 102622503.

The photograph was dated 1950, a date when a now unimaginably small number of humans had ever beheld a computer, much less touched one, and when unabashed racism and discrimination were endemic on the American scene. Who was the young African-American man who nevertheless sat at the controls of this storied machine? What was his name? What was his story? The description that came with the photograph held no name, no clues. (I would learn much later that the standing figure in the photograph is Jack Gilmore.)

My curiosity was heightened by lessons that I had learned, particularly from the work—and Twitter feed—of Professor Marie Hicks. From her, I had really taken in the importance of asking a set of questions of photographs and other images from the history of computing: Who is pictured? Who is not? What are the names of the people in the images? Why are some figures identified, and others not? What does the difficulty of identifying some of the figures tell us?

I kept digging using the Museum’s online catalog. I immediately found another photograph of this young man at the helm of Whirlwind:

Whirlwind with Joe Thompson. Collection of the Computer History Museum, 102622504.

Importantly, with the description written on the back of the photograph I now had his name—Joe Thompson!—and the suggestion of what must be a fascinating personal and professional story:

Black and white. Joe Thompson sitting at Whirlwind. Verso label: “In 1951, high school graduate Joe Thompson, 18, was trained as one of the first two computer operators. The computer was the Whirlwind, the prototype for the SAGE air defense system. Thompson, now 58, is a senior analyst at Unisys in Culver City, CA. ‘Computers changed my whole life,’ he says. The Whirlwind is the centerpience [sic] of the second milestone in the exhibit, People and Computers: Milestones of a Revolution, opening June 29, 1991, at The Computer Museum, Boston.”

If the caption was to be believed, Joe Thompson had gone on from the remarkable experience of operating the influential Whirlwind to a further career in computing that took him to the Los Angeles Basin, a center for critical developments in the history of computing and especially so in exactly the period in which Joe Thompson had been active. I was then resolved: I would try hard to meet Joe Thompson and to conduct a filmed oral history interview with him for CHM.

I dug further. Publications from the Museum’s history in Boston confirmed the details of the photograph’s caption and added more:

“It had been Jack Gilmore of the Whirlwind project, famous for his software contributions, who had been key to bringing Joe Thompson into the project in an MIT push to meet the demands for skilled staff by recruiting from local high schools those students who were academically and socially exceptional, but for whom, for whatever reasons, college was inaccessible.” The outlines of the story seemed to me to be becoming even more interesting and important.

From the Archives of The Computer Museum

The first step in my resolution to meet Joe Thompson was, of course, a web search to see if I could find him. There were so many “Joseph Thompsons” in Southern California that I despaired of finding his contact information by that route. With his evident connection to the Computer History Museum in its Boston days, perhaps our files or records somehow held an address? No luck. Perhaps one of the motive forces behind the Museum’s genesis in Boston, renowned computer architect Gordon Bell, might have an address? Again, no luck.

It was around this time that I recalled discussions with Whirlwind software pioneer Judy Clapp and with MIT Museum historian and curator Dr. Deborah Douglas, who had mentioned several reunions of people who had been involved with Whirlwind over the years. Might an address for Joe Thompson be in the records for these reunions? Did such records survive? If there was an address for Joe Thompson in them, could he still be there? I wrote to Dr. Douglas to tell her about my effort. She kindly agreed to dig through her files. Within a few days, she wrote me back with an address!

Immediately, on December 20, 2017, I wrote a letter to the address with great hopes, but equal skepticism, that it would find Joe Thompson there. December turned to January. The new year began. My resolve to meet Joe Thompson receded from the center of my attention toward the back of my mind. With the letter, I had done all that I could think of doing. My resolve had become a wondering question—“What happened with that letter?”—that would recur seemingly at random.

On January 9, 2018—I saved the voicemail—my mobile telephone rang, as it so often does, with the caller having a number I did not recognize. By this time the plague of mobile telephone spam was growing exponentially, so I did not answer. If the call were real, I reasoned then as I do now, the caller would leave a voicemail. The phone blipped: there was a new voicemail. I listened: “Hello, David. This is Joe Thompson . . .” A few months later, I sat down in Mr. Thompson’s living room in Carson, California, to conduct his oral history, carefully recorded by my Museum colleague Max Plutte.

My story of meeting Joe Thompson, while very important to me personally, is of course of no importance in comparison to the experiences that he shared with me. A wonderful excerpt of the interview was recently published by my colleague Dag Spicer in the IEEE Annals of the History of Computing.

I will indulge myself by closing this essay with a lesson that I have taken from meeting and talking with Joe Thompson. Perhaps the capabilities, interests, passions, and even genius, in the sense of talents of rare quality, that people possess are not latent, but rather emergent. These human contributions to technology, science, art, medicine, industry, community, and care come into existence when people have an opportunity to encounter new pursuits, new situations, and new experiences. A genius for computing, for example, comes into being through the encounter of a particular person with a particular situation, and it appears fundamentally unknowable which combination of person and situation will lead to such excellence. For me, the import of this lesson lies in the deep importance of diversity and true inclusion. The potentials for excellence lie everywhere. It is only by opening and increasing the chances for combining many different people with many different experiences that the potentialities for contribution, success, and happiness can become actualities, and human talent made real.

Discovering Whirlwind’s Joe Thompson

CHM Software History Center Director and Curator David C. Brock shares how he discovered the identity of Joe Thompson in a 1950s photograph of the historic Whirlwind computer.

Oral History of Joe Thompson

Oral History of Joe Thompson, interviewed by David C. Brock on February 19, 2018, in Carson, CA. Collection of the Computer History Museum, 102738732. Full transcript.

From the Collection


The post Meeting Whirlwind’s Joe Thompson appeared first on CHM.

]]>
https://computerhistory.org/blog/meeting-whirlwind-s-joe-thompson/feed/ 0
An Inflection Point in the History of Multimedia: Video Ethnographies of Visual Almanac and News Navigator https://computerhistory.org/blog/an-inflection-point-in-the-history-of-multimedia-video-ethnographies-of-visual-almanac-and-news-navigator/ https://computerhistory.org/blog/an-inflection-point-in-the-history-of-multimedia-video-ethnographies-of-visual-almanac-and-news-navigator/#respond Thu, 18 Oct 2018 00:00:00 +0000 http://computerhistory.org/blog/an-inflection-point-in-the-history-of-multimedia-video-ethnographies-of-visual-almanac-and-news-navigator/ CHM's Software History Center has been conducting “video ethnographies” to record and preserve the experience of running historical software. Over the course of 2018, the center has conducted two video ethnographies surrounding a key moment at the end of the late 1980s and early 1990s, the birth of multimedia. Watch and learn from experts as they discuss and demonstrate the Visual Almanac and News Navigator.

The post An Inflection Point in the History of Multimedia: Video Ethnographies of Visual Almanac and News Navigator appeared first on CHM.

]]>
The Computer History Museum’s Software History Center has been conducting what we call “video ethnographies” to record and preserve the experience of running historical software. To put it simply, a video ethnography is a filmed demonstration of a historical piece of software, running on its original hardware or in emulation, demonstrated by a person with firsthand knowledge of the software, including original software creators and developers.

We believe that to properly document software, more than just its bits and code need to be preserved. Software is a performance, and must be experienced live in order to fully capture its nature and workings. As software ages, it becomes more difficult to preserve it in a state in which it can be run. Therefore, it is important to capture the experience of running the software on original hardware when it, and many of the software’s original creators, are still available. We call these demos “ethnographies” because beyond simply demonstrating the functionality of the software, we are simultaneously interviewing the demonstrator about the significance of the software in its larger cultural and social context.

Over the course of 2018, the Software History Center has conducted two video ethnographies surrounding a key moment of the late 1980s and early 1990s: the birth of multimedia, which can be defined as a form of media that mixes text, graphics, audio, video, and animations together in an interactive and nonlinear format.

The first is an investigation into the Visual Almanac (102675565, 102647922, 102651553), the Encyclopedia of Multimedia (102651553), and other software titles in CHM’s permanent collection that include LaserDiscs. These titles were created by Apple Computer and ran on Macintosh computers connected to a LaserDisc player via an RS-232 serial cable. Software on the Mac controlled the playback of video on the LaserDisc, allowing an interactive and nonlinear viewing experience. This software came in the form of a HyperCard stack. HyperCard, created by famed Apple software engineer Bill Atkinson, was an interactive hypertext environment that combined text, graphics, and sound. With a scripting language (HyperTalk) and an external plug-in architecture allowing developers to later add animations and even digital video, it became an authoring platform for much of the multimedia software of the early 1990s. In many ways, HyperCard presaged the modern World Wide Web. Prior to digital video becoming widespread on desktop PCs, HyperCard lacked the capability to show video, but could be configured through plug-ins to control content playing on an attached LaserDisc player. The Apple Multimedia Lab produced many of these hybrid computer/LaserDisc titles for educational and marketing uses.
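To make that setup concrete, here is a minimal sketch, in modern Python with the pyserial library, of the kind of serial-port dialogue a plug-in conducted with a LaserDisc player; the command strings and port name are hypothetical stand-ins, since each player model spoke its own command set.

```python
# Sketch of computer-controlled LaserDisc playback over RS-232.
# The "SEEK"/"PLAY" command syntax below is illustrative only; real
# players (e.g., Pioneer models) each used their own command sets.
import serial  # the pyserial package

def play_segment(port, start_frame, end_frame):
    """Ask the player to seek to a frame and play through another frame."""
    with serial.Serial(port, baudrate=9600, timeout=2) as player:
        player.write(f"SEEK {start_frame}\r".encode("ascii"))
        player.readline()  # wait for the player's acknowledgement
        player.write(f"PLAY {end_frame}\r".encode("ascii"))
        player.readline()

# A HyperCard button script would call an external command (XCMD) that
# performed the equivalent of, say:
# play_segment("/dev/ttyS0", 1200, 1450)
```

In the original titles, clicking an element on a HyperCard card triggered exactly this sort of exchange, cueing analog video on the attached player while the Mac supplied the interactive text and graphics.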

Dr. Andrew Lison, an assistant professor of Media Study at the University at Buffalo, has been researching multimedia’s transition in the geopolitical context of the end of the Cold War. In this video ethnography for the Software History Center, Dr. Lison demonstrates and discusses the Visual Almanac and the Encyclopedia of Multimedia.

Of course, Apple’s invention of QuickTime, the first digital video format to become widely available on desktop PCs, made the use of computer-controlled LaserDiscs obsolete for multimedia applications. Announced in 1990 and shipping in 1991, QuickTime allowed small, postage-stamp-size videos to play in a window, and depended on an efficient compression/decompression algorithm (codec) code-named “Road Pizza” that was fast enough to decompress video in real time. QuickTime video, embedded in HyperCard stacks and shipping on CD-ROMs, blew open the doors of the multimedia industry, and is still with us today in the form of the MPEG-4 video standard, which can be found in everything from mobile phones to 4K streaming TVs. In February, I hosted a CHM Live panel event, “Press Play: The Origins of QuickTime,” a conversation with Bruce Leak, Peter Hoddie, and Doug Camplejohn, three former Apple developers who played pivotal roles on the QuickTime project.

CHM Live | Press Play: The Origins of QuickTime, February 28, 2018.
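To suggest why a fast codec was the crux, here is an illustrative sketch, in Python, of temporal compression: encoding only the pixels that changed since the previous frame, so that decoding reduces to a cheap patching operation. This is emphatically not Road Pizza’s actual algorithm (which is reported to have worked on small pixel blocks); it is a stand-in for the general idea.

```python
# Illustrative temporal compression: store only the spans of pixels that
# changed since the previous frame. Decoding is cheap: patch the old frame.
# This is a stand-in for the idea, not the actual "Road Pizza" codec.

def encode_delta(prev, curr):
    """Return (start, changed_pixels) spans where curr differs from prev."""
    spans, i = [], 0
    while i < len(curr):
        if curr[i] != prev[i]:
            j = i
            while j < len(curr) and curr[j] != prev[j]:
                j += 1
            spans.append((i, curr[i:j]))
            i = j
        else:
            i += 1
    return spans

def decode_delta(prev, spans):
    """Rebuild the current frame by patching the previous one."""
    frame = list(prev)
    for start, run in spans:
        frame[start:start + len(run)] = run
    return frame

prev = [0, 0, 0, 0, 0, 0]   # previous frame (one row of pixels)
curr = [0, 9, 9, 0, 0, 7]   # current frame, same size
spans = encode_delta(prev, curr)
assert decode_delta(prev, spans) == curr
print(spans)  # [(1, [9, 9]), (5, [7])]
```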

Although QuickTime did not ship until 1991, the first public demonstration of QuickTime technology may have been in the fall of 1990, at an educational computing conference called Educom, held that year in Atlanta, Georgia. This took the form of a daily news magazine called News Navigator that ran on kiosks at the conference and contained news stories with embedded video content from CNN. News Navigator, the brainchild of then Apple marketing manager Greg Gretsch, was implemented as a set of HyperCard stacks by Clate Sanders, a HyperCard expert at Georgia Tech.

In December 2017 and January 2018, the Software History Center worked with Clate Sanders to restore his original HyperCard stacks (which were still stored on old magneto-optical disk cartridges) and run them on an appropriate vintage Macintosh. We produced the following video ethnography of the News Navigator, with interviews of Greg Gretsch and Clate Sanders and demonstration by Clate Sanders.

Video ethnography of the News Navigator, with interviews of Greg Gretsch and Clate Sanders and demonstration by Clate Sanders.

Taken together, these two video ethnographies straddle a crucial turning point in the history of multimedia. On one side is the Visual Almanac, with its complicated setup in which text, graphics, and sound were mixed in a hypermedia format on the computer, but video remained separate, controlled through commands sent over a serial cable to a separate machine playing videodiscs that were still essentially analog. On the other side we have the first public demonstration of fully digital, compressed video embedded in hypermedia, freely mixed in with text, still graphics, and audio clips, albeit in prototype form. What we see in the News Navigator is not that far removed from the websites we see today, in which the mixing of these heterogeneous media forms, and especially the embedding of digital video (such as in this very blog post), is taken for granted.

About the Software History Center

The purpose of the Software History Center at the Computer History Museum is to collect, preserve, and interpret the history of software and its transformational effects on global society.

Software is what a computer does. The existence of code reflects the story of the people who made it. The transformational effects of software are the consequences of people’s creation and use of code. In the stories of these people lie the technical, business, and cultural histories of software—from timesharing services to the cloud, from custom code to packaged programs, from developers to entrepreneurs, from smartphones to supercomputers. The center is exploring these people-centered stories, documenting software-in-action, and leveraging the Museum’s rich collections to tell the story of software, preserve its history, and put it to work today for gauging where we are, where we have been, and where we might be going.


The post An Inflection Point in the History of Multimedia: Video Ethnographies of Visual Almanac and News Navigator appeared first on CHM.

]]>
https://computerhistory.org/blog/an-inflection-point-in-the-history-of-multimedia-video-ethnographies-of-visual-almanac-and-news-navigator/feed/ 0