Multicore Programming Education –
Speakers and Abstracts
A Case for Teaching Parallel Programming to Freshmen
Arvind, Massachusetts Institute of Technology
The prevailing opinion of most experts is that parallel programming is an advanced topic to be tackled only after students have mastered sequential programming - something akin to teaching Quantum Mechanics after students have mastered Newtonian Mechanics. I think this viewpoint is misguided. In the future, parallel and concurrent programming will be viewed as indistinguishable from ordinary programming, because kids will grow up playing with robots, where reactive programming is the norm. It would be strange indeed if we told freshmen that our programming abstractions were not appropriate for dealing with robots or games because they require managing more than one activity. Multicore hardware has offered us an opportunity to make the transition to parallel programming now, though in the future the role of multicores will be no more than offering more performance, in a manner similar to what faster clocks have done until now.
Arvind is the Johnson Professor of Computer Science and Engineering at MIT, where in the late eighties his group, in collaboration with Motorola, built the Monsoon dataflow machines and their associated software. In 2000, Arvind started Sandburst, a fabless semiconductor company that was sold to Broadcom in 2006. In 2003, Arvind co-founded Bluespec Inc., an EDA company producing a set of tools for high-level synthesis. In 2001, Dr. R. S. Nikhil and Arvind published the book "Implicit Parallel Programming in pH". Arvind's current research focus is on enabling rapid development of embedded systems. Arvind is a Fellow of IEEE and ACM, and also a member of the National Academy of Engineering.
Guy Blelloch, Carnegie Mellon University
There seem to be three basic choices in teaching parallelism: (1) we train a small number of experts in parallel computation who build a collection of libraries, and everyone else just uses them; (2) we leave our core curriculum pretty much as is, but add courses on parallelism or perhaps tack a few lectures onto the end of existing courses; or (3) we completely rethink how we teach computing from the start, with parallelism as the main theme and sequential computing as a special case.
This talk will argue for the third option: that thinking about parallelism, when treated in an appropriate way, might be as easy as or easier than thinking sequentially. A key prerequisite, however, is to identify what the core ideas in parallelism are and how they can be layered and integrated with existing concepts. The talk will go through an initial list of some core ideas in parallelism, and how they might be integrated.
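One candidate for such a core idea, offered here only as a hedged illustration (the function name and threading choice are my assumptions, not the speaker's list), is reduction over an associative operator: the same divide-and-conquer formulation admits a parallel schedule, with sequential evaluation as a special case.

```python
# A minimal sketch (not from the talk): divide-and-conquer reduction.
# The two halves share no data, so a parallel runtime may evaluate them
# simultaneously; evaluating them one after the other recovers ordinary
# sequential reduction as a special case.
from concurrent.futures import ThreadPoolExecutor
from operator import add

def reduce_dc(op, xs):
    """Reduce a non-empty list xs with associative op."""
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(reduce_dc, op, xs[:mid])   # independent half
        right = pool.submit(reduce_dc, op, xs[mid:])  # independent half
        return op(left.result(), right.result())

print(reduce_dc(add, list(range(10))))  # 45
```

Because `op` is associative, the answer is independent of the schedule, which is exactly what lets the sequential and parallel readings coincide.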
Guy Blelloch is a Professor of Computer Science at Carnegie Mellon. He received a BA from Swarthmore College in 1983 and a PhD degree from MIT in 1988. His research interests are in programming languages and algorithms and how they interact, with an emphasis on parallel computation. He worked on one of the early parallel machines, the Thinking Machines Connection Machine, where he developed several of the parallel primitives for the machine. At Carnegie Mellon, Blelloch designed the parallel programming language NESL, and in the area of parallel computation he has worked on topics including scheduling, algorithm design, cache efficiency, garbage collection, and synchronization primitives.
It ain't the Meat it's the Notion: Why Theory is Essential to Teaching Concurrent Programming
Maurice Herlihy, Brown University
Much of the focus in teaching concurrency rightly falls on applied techniques and experience. Nevertheless, we will explain why teaching concurrency effectively requires covering some of the basic mathematical foundations, if for no other reason than to discourage students from attempting foolish or impossible things.
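As a hedged illustration of the kind of "foolish thing" a little theory exposes (my example, not the speaker's): an unsynchronized counter increment is a read-modify-write, and nothing makes it atomic. The snippet plays out one bad interleaving of two logical threads by hand, so the failure is deterministic rather than a flaky real race.

```python
# Hypothetical illustration: a lost update from a non-atomic increment.
# Two logical threads each perform "read, then write back read+1"; we
# interleave their steps explicitly instead of relying on a real race.
counter = 0

def read():
    return counter          # step 1 of an increment: read shared state

def write(value):
    global counter
    counter = value         # step 2 of an increment: write back

a = read()                  # thread A reads 0
b = read()                  # thread B also reads 0 (interleaved!)
write(a + 1)                # thread A writes 1
write(b + 1)                # thread B writes 1 -- A's update is lost

print(counter)              # 1, not the 2 that two increments "should" give
```

Understanding atomicity and linearizability is what tells a student in advance that this interleaving exists, rather than leaving it to be discovered in production.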
Maurice Herlihy received an A.B. degree in Mathematics from Harvard University, and a Ph.D. degree in Computer Science from MIT. He has been a faculty member in the Computer Science Department at Carnegie Mellon University, a member of the research staff at Digital Equipment Corporation's Cambridge (MA) Research Lab, and a consultant for Sun Microsystems. He is now a Professor of Computer Science at Brown University.
Herlihy's research centers on practical and theoretical aspects of multiprocessor synchronization, with a focus on wait-free and lock-free synchronization. His 1991 paper "Wait-Free Synchronization" won the 2003 Dijkstra Prize in Distributed Computing, and he shared the 2004 Goedel Prize for his paper "The Topological Structure of Asynchronous Computability." He is a Fellow of the ACM.
Programming in Undergraduate Education: A View from the Ground
Dan Grossman, University of Washington
I will try to give a sobering view of where I see a typical curriculum today and how we can change it to better prepare students for multicore programming. More specifically, I will consider three approaches and their barriers to success. First, we could revise the entire curriculum from introductory programming to graduation to emphasize parallel programming. I will explain why I consider this approach risky, ill-advised, and unlikely to be widely adopted. Second, we could introduce new advanced courses covering modern parallel programming. I will argue that such courses are great, but even among experts in the area there is enormous variation in what this means. Third, we could update current courses by replacing outdated material with relevant parallel programming. While this approach risks "doing too little," I will share some personal examples where I believe I have had some success.
Dan Grossman has been a faculty member in the Department of Computer Science and Engineering at the University of Washington since 2003. His research is in the design and implementation of programming languages, aimed at improving software quality. With respect to multicore programming, one focus area has been the semantics of atomic blocks and language-integrated implementations of atomicity via software transactional memory. He has taught at different levels on topics related to programming languages and concurrent programming, and he will soon teach a course on parallel programming for the first time. He is currently co-chairing an effort at the University of Washington to revise the core required curriculum. The undergraduates in his department have selected Dan for their "Teacher of the Year" award.
Teaching People How to "Think Parallel"
Tim Mattson, Intel
Browse the texts on parallel programming and you'll find a large number of books about how to use a particular programming language or how to use parallel computers in some arcane scientific discipline. But there is very little literature on "how to think parallel". Yet if we are to effectively educate the next generation of programmers, this is precisely the problem we need to address.
We have been addressing this problem by constructing a pattern language of parallel programming. These design patterns present the "tricks" expert parallel programmers take for granted. The way the patterns are organized into a pattern language captures the methodologies experienced programmers use when engineering high-quality parallel software. In this talk, I will present this pattern language and how it can be used to teach parallel programming.
Tim Mattson earned a PhD for his work on quantum molecular scattering theory (UCSC, 1985). This was followed by a post-doc at Caltech, where he worked on the Caltech/JPL hypercubes. Since then, he has held a number of commercial and academic positions with high-performance computers as the common thread. Application areas have included mathematics libraries, exploration geophysics, computational chemistry, molecular biology, and bioinformatics.
Dr. Mattson joined Intel in 1993. Among his many roles at Intel, he was applications manager for the ASCI teraFLOPS project, helped create OpenMP, founded the Open Cluster Group (OSCAR), and launched programs in computing for the Life Sciences.
Currently, Dr. Mattson is conducting research on abstractions that cut across parallel system design, parallel programming environments, and application software. This work builds on his book on design patterns in parallel programming (written with Beverly Sanders and Berna Massingill and published by Addison-Wesley). The patterns provide the "human angle" and help keep his research focused on technologies that help general programmers solve real problems.
Don't Start with Dekker's Algorithm: Top-Down Introduction of Concurrency
Michael L. Scott, University of Rochester
Those of us who do research in concurrency have traditionally taught it the way it developed historically, starting with low-level mechanisms and working upward toward higher levels of abstraction. In the modern multicore context, I argue that it makes more sense to teach the other way around: to start with the simplest and most abstract forms of concurrency, where the semantic load for the programmer is least, and progressively to reveal the lower-level mechanisms needed for more demanding forms of process interaction. What might such a course look like, and how might it fit into the undergraduate curriculum?
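A hedged sketch of what that top-down starting point might look like (my example, not the speaker's): a data-parallel map over independent tasks, where no locks or other low-level mechanisms are visible to the beginner.

```python
# Illustrative only: the most abstract form of concurrency first.
# A parallel map over independent tasks exposes no locks, condition
# variables, or memory-model subtleties; those lower-level mechanisms
# can be revealed later, once tasks need to interact.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    # map preserves input order, so the result is deterministic even
    # though the individual tasks may execute in parallel.
    results = list(pool.map(square, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The semantic load here is the same as a sequential loop; Dekker-style mutual exclusion only needs to appear once tasks share mutable state.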
Michael Scott is a Professor and past Chair of the Department of Computer Science at the University of Rochester. He received his Ph.D. from the University of Wisconsin-Madison in 1985. His research interests span operating systems, languages, architecture, and tools, with a particular emphasis on parallel and distributed systems. He is best known for work in synchronization and concurrent data structures, in recognition of which he shared the 2006 Edsger W. Dijkstra Prize. Other widely cited work has addressed parallel operating systems and file systems, software distributed shared memory, and energy-conscious operating systems and microarchitecture. His textbook on programming language design (Programming Language Pragmatics, third edition, Morgan Kaufmann, Mar. 2009) has become a standard in the field. In 2003 he served as General Chair for SOSP; more recently he has been Program Chair for TRANSACT'07 and PPoPP'08. He was named a Fellow of the ACM in 2006. In 2001 he received the University of Rochester's Robert and Pamela Goergen Award for Distinguished Achievement and Artistry in Undergraduate Teaching.
Is Parallel Programming Really Hard?
Marc Snir, University of Illinois
Parallel programming, as currently practiced, is hard to teach and hard to do. There are two possible explanations for this: (i) parallel programming is intrinsically hard, or (ii) parallel programming is now done the wrong way. I shall argue that the second explanation is closer to the truth. If so, then teaching multicore programming using current programming languages and tools will inculcate bad programming practices in our students. Alternatives will be discussed.
Marc Snir is the Michael Faiman and Saburo Muroga Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign and has a courtesy appointment in the Graduate School of Library and Information Science. He currently pursues research in parallel computing. He is the PI for the software of the petascale Blue Waters system and co-director of the Intel- and Microsoft-funded Universal Parallel Computing Research Center (UPCRC). From 2007 to 2008 he was director of the Illinois Informatics Institute. He was head of the Computer Science Department from 2001 to 2007. Until 2001 he was a senior manager at the IBM T. J. Watson Research Center, where he led the Scalable Parallel Systems research group that was responsible for major contributions to the IBM SP scalable parallel system and to the IBM Blue Gene system. Marc Snir received a Ph.D. in mathematics from the Hebrew University of Jerusalem in 1979, worked at NYU on the NYU Ultracomputer project in 1980-1982, and worked at the Hebrew University of Jerusalem in 1982-1986, before joining IBM. Marc Snir was a major contributor to the design of the Message Passing Interface. He has published numerous papers and given many presentations on computational complexity, parallel algorithms, parallel architectures, interconnection networks, parallel languages, and parallel programming environments. Marc is an AAAS Fellow, an ACM Fellow, and an IEEE Fellow. He is on the Computing Research Association Board of Directors and on the NSF CISE advisory committee. He has Erdos number 2 and is a mathematical descendant of Jacques Salomon Hadamard.
The Future Is Parallel: What's a Programmer to Do? Breaking Sequential Habits of Thought
Guy L. Steele Jr., Sun Microsystems Laboratories
Parallelism is here, now, and in our faces. It used to be confined to supercomputers and servers, but now multicore chips are in desktops and laptops, and general practitioners, not just specialists, need to get used to parallel programming. The sequential algorithms and programming tricks that have served us so well for 50 years are the wrong way to think going forward. In this talk we illustrate an alternative strategy with some small, cute programs that suggest the necessary future approach to program structure.
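In that spirit, here is a hedged sketch (my code, not from the talk) contrasting a sequential accumulator, whose loop-carried dependence forces linear depth, with a balanced divide-and-conquer reduction of the same associative operator, whose independent halves expose logarithmic depth to a parallel scheduler.

```python
# Sketch: two ways to sum a list. The accumulator chains every step on
# the previous one; the tree form splits into independent halves that a
# parallel runtime could evaluate simultaneously.
def sum_accumulator(xs):
    total = 0
    for x in xs:              # each iteration depends on the last
        total += x
    return total

def sum_tree(xs):
    if len(xs) == 0:
        return 0
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2        # the two halves share no data, so a
    return sum_tree(xs[:mid]) + sum_tree(xs[mid:])  # scheduler may run them in parallel

xs = list(range(100))
print(sum_accumulator(xs), sum_tree(xs))  # 4950 4950
```

Both functions compute the same value because addition is associative; the habit being broken is reaching for the first form when the second is what a parallel machine can exploit.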
Guy L. Steele Jr. (Ph.D., MIT, 1980) is a Sun Fellow and heads the Programming Language Research group within Sun Microsystems Laboratories in Burlington, MA. Before coming to Sun in 1994, he held positions at Carnegie-Mellon University, Tartan Laboratories, and Thinking Machines Corporation. He is the author or co-author of several books on programming languages (Common Lisp, C, High Performance Fortran, the Java Language Specification). He has served on standards committees for the programming languages Common Lisp, C, Fortran, Scheme, and ECMAScript. He designed the original EMACS command set and was the first person to port TeX. Dr. Steele is a Fellow of the Association for Computing Machinery (1994) and has received the ACM Grace Murray Hopper Award (1988), a Gordon Bell Prize, and the ACM SIGPLAN Programming Languages Achievement Award (1996). He has been elected to the National Academy of Engineering of the United States of America (2001) and to the American Academy of Arts and Sciences (2002).