Multicore Programming Education – Speakers and Abstracts

 

 

A Case for Teaching Parallel Programming to Freshmen

Arvind, Massachusetts Institute of Technology

 

The prevailing opinion of most experts is that parallel programming
is an advanced topic to be tackled after students have mastered
sequential programming - something akin to teaching Quantum
Mechanics after students have mastered Newtonian Mechanics. I think
this viewpoint is misguided. In the future, parallel and concurrent
programming will be viewed as indistinguishable from ordinary
programming, because kids will grow up playing with robots, where
reactive programming will be the norm. It would be strange indeed if
we told freshmen that our programming abstractions were not
appropriate for dealing with robots or games because they require
managing more than one activity.

 

Multicore hardware has offered us an opportunity to make the
transition to parallel programming now, though in the future the
role of multicores will be no more than offering more performance in
a transparent manner, similar to what faster clocks have done until
now.

 

Bio:
Arvind is the Johnson Professor of Computer Science and Engineering
at MIT, where in the late eighties his group, in collaboration with
Motorola, built the Monsoon dataflow machines and their associated
software. In 2000, Arvind started Sandburst, which was sold to
Broadcom in 2006. In 2003, Arvind co-founded Bluespec Inc., an EDA
company that produces a set of tools for high-level synthesis. In
2001, Dr. R. S. Nikhil and Arvind published the book "Implicit
Parallel Programming in pH". Arvind's current research focus is on
enabling rapid development of embedded systems. Arvind is a Fellow
of IEEE and ACM, and a member of the National Academy of
Engineering.

 

Parallel Thinking

Guy Blelloch, Carnegie Mellon University

 

There seem to be three basic choices for teaching parallelism: (1)
we train only a small number of experts in parallel computation, who
develop a collection of libraries that everyone else just uses; (2)
we leave our core curriculum pretty much as is, but add some
advanced courses on parallelism or perhaps tack a few lectures onto
the end of existing courses; or (3) we completely rethink how we
teach computing from the start, with parallelism as the main theme
and sequential computing as a special case.

 

This talk will argue for the third option: that thinking about
parallelism, when treated in an appropriate way, might be as easy as
or easier than thinking sequentially. A key prerequisite, however,
is to identify what the core ideas in parallelism are and how they
might be layered and integrated with existing concepts. The talk
will go through an initial list of some core ideas in parallelism
and how they might be integrated.
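
One way to make "sequential computing as a special case" concrete in
code (a minimal sketch for this page, not an example from the talk,
assuming C with OpenMP tasks as the notation): a divide-and-conquer
sum whose structure is parallel first. Compiled without OpenMP, the
pragmas are ignored and the very same program is the ordinary
sequential recursion.

    #include <stdio.h>

    /* Divide-and-conquer sum of a[lo..hi).  The parallel structure is
     * primary; built without OpenMP, the pragmas are ignored and the
     * sequential recursion falls out as a special case. */
    long sum(const long *a, int lo, int hi) {
        if (hi - lo == 1)
            return a[lo];
        int mid = lo + (hi - lo) / 2;
        long left, right;
        #pragma omp task shared(left)      /* left half, possibly in parallel */
        left = sum(a, lo, mid);
        right = sum(a, mid, hi);           /* right half, current thread */
        #pragma omp taskwait               /* join before combining */
        return left + right;
    }

    int main(void) {
        long a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        long total;
        #pragma omp parallel
        #pragma omp single                 /* one thread starts the recursion */
        total = sum(a, 0, 8);
        printf("%ld\n", total);            /* prints 36 */
        return 0;
    }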

 

Bio:
Guy Blelloch is a Professor of Computer Science and Associate Dean
of Planning at Carnegie Mellon. He received a BA from Swarthmore
College in 1983 and a PhD from MIT in 1988. His research interests
are in programming languages and algorithms and how they interact,
with an emphasis on parallel computation. He worked on one of the
early parallel machines, the Thinking Machines Connection Machine,
where he developed several of the machine's parallel primitives. At
Carnegie Mellon, Blelloch designed the parallel programming language
NESL, and in the area of parallel computation he has worked on
topics including scheduling, algorithm design, cache efficiency,
garbage collection, and synchronization primitives.

 

It ain't the Meat it's the Notion: Why Theory is Essential to Teaching Concurrent Programming

Maurice Herlihy, Brown University

 

Much of the attention paid to teaching concurrency rightly focuses
on applied techniques and experience. Nevertheless, we will explain
why teaching concurrency effectively requires covering some of the
field's basic mathematical foundations, if for no other reason than
to discourage students from attempting foolish or impossible things.
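
One hedged illustration of such a "foolish thing" (constructed for
this page, not taken from the talk; C with OpenMP is assumed):
incrementing a shared counter from many threads without
synchronization. Each increment is a read-modify-write, so
concurrent updates are silently lost, and a little theory about
atomicity explains both why this must fail and which primitives
repair it.

    #include <stdio.h>

    int main(void) {
        long racy = 0, safe = 0;
        #pragma omp parallel for
        for (int i = 0; i < 1000000; i++) {
            racy++;        /* data race: read-modify-writes interleave,
                              so increments are lost */
            #pragma omp atomic
            safe++;        /* atomic update: always ends at 1000000 */
        }
        /* "racy" typically falls well short of 1000000 on a multicore. */
        printf("racy = %ld, safe = %ld\n", racy, safe);
        return 0;
    }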

 

Bio:
Maurice Herlihy received an A.B. degree in Mathematics from Harvard
University, and a Ph.D. degree in Computer Science from MIT. He has
been a faculty member in the Computer Science Department at Carnegie
Mellon University, a member of the research staff at Digital
Equipment Corporation's Cambridge (MA) Research Lab, and a
consultant for Sun Microsystems. He is now a Professor of Computer
Science at Brown University.

 

Prof. Herlihy's research centers on practical and theoretical
aspects of multiprocessor synchronization, with a focus on wait-free
and lock-free synchronization. His 1991 paper "Wait-Free
Synchronization" won the 2003 Dijkstra Prize in Distributed
Computing, and he shared the 2004 Goedel Prize for his 1999 paper
"The Topological Structure of Asynchronous Computation." He is a
Fellow of the ACM.

 

Parallel Programming in Undergraduate Education: A View from the Ground

Dan Grossman, University of Washington

 

I will try to give a sobering view of where I see a typical
undergraduate curriculum today and how we can change it to better
prepare students for multicore programming. More specifically, I
will consider three approaches and their barriers to success. First,
we could revise the entire curriculum, from introductory programming
through graduation, to emphasize parallel programming. I will
explain why I consider this approach risky, ill-advised, and
unlikely to be widely adopted. Second, we could introduce new
advanced courses covering modern parallel programming. I will argue
that such courses are great, but that even among experts in the area
there is enormous variation in what this means. Third, we could
enrich current courses by replacing outdated material with relevant
units on parallel programming. While this approach risks "doing too
little," I will share some personal examples where I believe I have
had some success.

 

Bio:
Dan Grossman has been a faculty member in the Department of Computer
Science & Engineering at the University of Washington since 2003.
His research in the design and implementation of programming
languages is aimed at improving software quality. With respect to
parallel programming, one focus area has been the semantics of
atomic blocks and language-integrated implementations of atomic
blocks using software transactional memory. He has taught several
courses at different levels related to programming languages and
systems programming, and he will soon teach introductory programming
for the first time. He is currently co-chairing a department-wide
effort at the University of Washington to revise the core required
undergraduate curriculum. The undergraduates in his department have
twice chosen Dan for their "Teacher of the Year" award.

 

Teaching People How to "Think Parallel"

Timothy G. Mattson, Intel

Peruse texts on parallel programming and you'll find a large number
of books about how to use a particular programming language or how
to use parallel computers in some arcane scientific discipline. But
there is very little literature on "how to think parallel". If we
are to effectively educate the next generation of programmers, this
is precisely the problem we need to address.

 

We have been addressing this problem by constructing a pattern
language of parallel programming. These design patterns present the
essential "tricks" that expert parallel programmers take for
granted. The way the patterns are organized into a pattern language
captures the methodologies experienced programmers use when
engineering high-quality parallel software. In this talk, I will
describe this pattern language and how it can be used to teach
parallel programming.
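
For a flavor of what a single pattern can look like in use, here is
a sketch of the Loop Parallelism pattern in C with OpenMP (a sample
rendering for this page, not an excerpt from the pattern language):
find the compute-intensive loop, check that its iterations are
independent, and let a reduction manage the one shared accumulator.

    #include <stdio.h>

    /* Loop Parallelism: the midpoint-rule quadrature below has
     * independent iterations, so they can be divided among threads;
     * reduction(+:sum) handles the single shared accumulator. */
    int main(void) {
        const long n = 100000000;          /* number of intervals */
        const double h = 1.0 / n;          /* interval width */
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++) {
            double x = (i + 0.5) * h;      /* interval midpoint */
            sum += 4.0 / (1.0 + x * x);
        }
        /* The integral of 4/(1+x^2) over [0,1] is pi. */
        printf("pi ~= %.12f\n", sum * h);
        return 0;
    }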

 

Bio:
Tim Mattson earned a PhD for his work on quantum molecular
scattering theory (UCSC, 1985), followed by a postdoc at Caltech,
where he worked on the Caltech/JPL hypercubes. Since then, he has
held a number of commercial and academic positions with
high-performance computers as the common thread. Application areas
have included mathematics libraries, exploration geophysics,
computational chemistry, molecular biology, and bioinformatics.

 

Dr. Mattson joined Intel in 1993. Among his many roles at Intel, he
was applications manager for the ASCI teraFLOPS project, helped
create OpenMP, founded the Open Cluster Group (OSCAR), and launched
Intel's programs in computing for the Life Sciences.

 

Currently, Dr. Mattson is conducting research on abstractions that
bridge across parallel system design, parallel programming
environments, and application software. This work builds on his
recent book on design patterns in parallel programming (written with
Professors Beverly Sanders and Berna Massingill and published by
Addison-Wesley). The patterns provide the "human angle" and help
keep his research focused on technologies that help general
programmers solve real problems.

 

 

Don't Start with Dekker's Algorithm: Top-Down Introduction of Concurrency

Michael L. Scott, University of Rochester

 

Those of us who do research in concurrency have traditionally taught
the subject the way it developed historically, starting with
low-level mechanisms and working upward toward higher levels of
abstraction. In the modern context, I argue that it makes more sense
to teach the other way around: to start with the simplest and most
abstract forms of concurrency, where the semantic load on the
programmer is least, and progressively to reveal the lower-level
mechanisms needed for more complex forms of process interaction.
What might such an approach look like, and how might it fit into the
undergraduate curriculum?
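
One possible picture of that ordering in code (a sketch, with C and
OpenMP chosen here only for concreteness; the talk does not commit
to a notation): students first meet the form of concurrency with the
least semantic load, and only much later see the machinery
underneath it.

    #include <stdio.h>
    #include <omp.h>

    /* First: concurrency with minimal semantic load -- independent
     * iterations and no visible synchronization. */
    double dot_abstract(const double *a, const double *b, int n) {
        double s = 0.0;
        #pragma omp parallel for reduction(+:s)
        for (int i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }

    /* Later: the lower-level mechanisms are revealed -- explicit
     * thread teams, per-thread partial sums, and a lock protecting
     * the shared accumulator. */
    double dot_explicit(const double *a, const double *b, int n) {
        double s = 0.0;
        omp_lock_t lock;
        omp_init_lock(&lock);
        #pragma omp parallel
        {
            double local = 0.0;
            #pragma omp for
            for (int i = 0; i < n; i++)
                local += a[i] * b[i];
            omp_set_lock(&lock);       /* mutual exclusion, now explicit */
            s += local;
            omp_unset_lock(&lock);
        }
        omp_destroy_lock(&lock);
        return s;
    }

    int main(void) {
        double a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
        printf("%g %g\n", dot_abstract(a, b, 4), dot_explicit(a, b, 4));
        return 0;
    }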

 

Bio:
Michael L. Scott is a Professor and past Chair of the Department of
Computer Science at the University of Rochester. He received his
Ph.D. from the University of Wisconsin-Madison in 1985. His research
interests span operating systems, languages, architecture, and
tools, with a particular emphasis on parallel and distributed
systems. He is best known for work in synchronization algorithms and
concurrent data structures, in recognition of which he shared the
2006 SIGACT/SIGOPS Edsger W. Dijkstra Prize. Other widely cited work
has addressed parallel operating systems and file systems, software
distributed shared memory, and energy-conscious operating systems
and microarchitecture. His textbook on programming language design
and implementation (Programming Language Pragmatics, third edition,
Morgan Kaufmann, Mar. 2009) has become a standard in the field. In
2003 he served as General Chair for SOSP; more recently he has been
Program Chair for TRANSACT'07 and PPoPP'08. He was named a Fellow of
the ACM in 2006. In 2001 he received the University of Rochester's
Robert and Pamela Goergen Award for Distinguished Achievement and
Artistry in Undergraduate Teaching.

 

Is Parallel Programming Really Hard?

Marc Snir, University of Illinois

 

Parallel programming, as currently practiced, is hard to teach and
hard to do. There are two possible explanations for this: (i)
parallel programming is intrinsically hard, or (ii) parallel
programming is currently done the wrong way. I shall argue that the
second explanation is closer to the truth. If so, then teaching
multicore programming using current programming languages and tools
will merely inculcate bad programming practices in our students.
Alternatives will be discussed.

 

Bio:
Professor Marc Snir holds the Michael Faiman and Saburo Muroga
Professorship in the Department of Computer Science at the
University of Illinois at Urbana-Champaign and has a courtesy
appointment in the Graduate School of Library and Information
Science. He currently pursues research in parallel computing. He is
PI for the software of the petascale Blue Waters system and
co-director of the Intel- and Microsoft-funded Universal Parallel
Computing Research Center (UPCRC). From 2007 to 2008 he was director
of the Illinois Informatics Institute. He was head of the Computer
Science Department from 2001 to 2007. Until 2001 he was a senior
manager at the IBM T. J. Watson Research Center, where he led the
Scalable Parallel Systems research group that was responsible for
major contributions to the IBM SP scalable parallel system and to
the IBM Blue Gene system. Marc Snir received a Ph.D. in Mathematics
from the Hebrew University of Jerusalem in 1979, worked at NYU on
the NYU Ultracomputer project in 1980-1982, and worked at the Hebrew
University of Jerusalem in 1982-1986 before joining IBM. Marc Snir
was a major contributor to the design of the Message Passing
Interface. He has published numerous papers and given many
presentations on computational complexity, parallel algorithms,
parallel architectures, interconnection networks, parallel languages
and libraries, and parallel programming environments. Marc is a
Fellow of AAAS, ACM, and IEEE. He is on the Computer Research
Association Board of Directors and on the NSF CISE advisory
committee. He has Erdős number 2 and is a mathematical descendant of
Jacques Hadamard.

 

The Future Is Parallel: What's a Programmer to Do? Breaking Sequential Habits of Thought

Guy L. Steele Jr., Sun Microsystems Laboratories

 

Parallelism is here, now, and in our faces. It used to be just
supercomputers and servers, but now multicore chips are in desktops
and laptops, and general practitioners, not just specialists, need
to get used to parallel programming. The sequential algorithms and
programming tricks that have served us so well for 50 years are the
wrong way to think going forward. In this talk we illustrate the
divide-and-conquer strategy with some small, cute programs that
represent the necessary future approach to program structure.
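
In the spirit of those small, cute programs, here is a stand-in
sketch (not one of the talk's actual examples; C with OpenMP tasks
is assumed): a divide-and-conquer character count. The essential
point is that the two halves combine with an associative operator,
so they can be computed in either order, or at the same time.

    #include <stdio.h>
    #include <string.h>

    /* Count occurrences of c in s[lo..hi) by splitting the string in
     * half; + is associative, so the halves may run in parallel. */
    long count(const char *s, long lo, long hi, char c) {
        if (hi - lo < 1000) {              /* small piece: just loop */
            long n = 0;
            for (long i = lo; i < hi; i++)
                n += (s[i] == c);
            return n;
        }
        long mid = lo + (hi - lo) / 2, left, right;
        #pragma omp task shared(left)      /* left half as a task */
        left = count(s, lo, mid, c);
        right = count(s, mid, hi, c);      /* right half here */
        #pragma omp taskwait               /* join, then combine */
        return left + right;
    }

    int main(void) {
        const char *s = "how much wood would a woodchuck chuck";
        long result;
        #pragma omp parallel
        #pragma omp single
        result = count(s, 0, (long)strlen(s), 'o');
        printf("%ld\n", result);           /* prints 6 */
        return 0;
    }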

 

Bio:
Guy L. Steele Jr. (Ph.D., MIT, 1980) is a Sun Fellow and heads the
Programming Language Research group within Sun Microsystems
Laboratories in Burlington, MA. Before coming to Sun in 1994, he
held positions at Carnegie Mellon University, Tartan Laboratories,
and Thinking Machines Corporation. He is the author or co-author of
several books on programming languages (Common Lisp, C, High
Performance Fortran, the Java Language Specification). He has served
on accredited standards committees for the programming languages
Common Lisp, C, Fortran, Scheme, and ECMAScript. He designed the
original EMACS command set and was the first person to port TeX.

 

Dr. Steele is a Fellow of the Association for Computing Machinery
(1994) and has received the ACM Grace Murray Hopper Award (1988), a
Gordon Bell Prize (1990), and the ACM SIGPLAN Programming Languages
Achievement Award (1996). He has been elected to the National
Academy of Engineering of the United States of America (2001) and to
the American Academy of Arts and Sciences (2002).