feat: initial setup of the LLM Wiki second brain

- Create CLAUDE.md (vault operating rules, Karpathy's ten LLM Wiki rules)
- Create 나의 핵심 맥락.md (architect profile, purpose of the second brain, key sources)
- Establish the raw/ structure (preserve the existing 설계원칙 material under book/; add articles/, repos/, notes/)
- Initialize wiki/ (index.md, log.md, and the concepts/, sources/, patterns/ folders)
- Initialize output/
- Preserve the existing prompt-pattern files under LLMWiki/

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Author: minsung
Date: 2026-04-30 14:34:29 +09:00
Parent: d7a123de97
Commit: 44e26d6972
48 changed files with 14334 additions and 0 deletions


@@ -0,0 +1,207 @@
# SOFTWARE DESIGN FOR FLEXIBILITY
How to Avoid Programming Yourself into a Corner
Chris Hanson and Gerald Jay Sussman
![](설계원칙-001-020_images/_page_0_Picture_3.jpeg)
# **Software Design for Flexibility**
How to Avoid Programming Yourself into a Corner
Chris Hanson and Gerald Jay Sussman foreword by Guy L. Steele Jr.
The MIT Press Cambridge, Massachusetts London, England
#### © 2021 Massachusetts Institute of Technology
This work is subject to a
Creative Commons Attribution-ShareAlike 4.0 International License.
To view a copy of this license, visit [http://creativecommons.org/licenses/by-sa/4.0/.](http://creativecommons.org/licenses/by-sa/4.0/)
![](설계원칙-001-020_images/_page_3_Picture_4.jpeg)
Subject to such license, all rights are reserved.
This book was set in Computer Modern by the authors with the LaTeX typesetting system.
Library of Congress Cataloging-in-Publication Data
Names: Hanson, Chris (Christopher P.), author. | Sussman, Gerald Jay, author.
Title: Software design for flexibility : how to avoid programming yourself into a corner / Chris Hanson and Gerald Jay Sussman ; foreword by Guy L. Steele Jr.
Description: Cambridge, Massachusetts : The MIT Press, [2021] | Includes bibliographical references and index.
Identifiers: LCCN 2020040688 | ISBN 9780262045490 (hardcover)
Subjects: LCSH: Software architecture. | Software patterns. | Classification: LCC QA76.76.D47 H35 2021 | DDC 005.1/112—dc23
LC record available at <https://lccn.loc.gov/2020040688>
10 9 8 7 6 5 4 3 2 1
A computer is like a violin. You can imagine a novice trying first a phonograph and then a violin. The latter, he says, sounds terrible. That is the argument we have heard from our humanists and most of our computer scientists. Computer programs are good, they say, for particular purposes, but they aren't flexible. Neither is a violin, or a typewriter, until you learn how to use it.
Marvin Minsky, "Why Programming Is a Good Medium for Expressing Poorly-Understood and Sloppily-Formulated Ideas," in *Design and Planning* (1967)
# Contents
- Foreword
- Preface
- Acknowledgments
- 1: Flexibility in Nature and in Design
- 2: Domain-Specific Languages
- 3: Variations on an Arithmetic Theme
- 4: Pattern Matching
- 5: Evaluation
- 6: Layering
- 7: Propagation
- 8: Epilogue
- A Appendix: Supporting Software
- B Appendix: Scheme
- References
- Index
- List of Exercises
#### **List of figures**
#### Chapter 1
- Figure 1.1 The superheterodyne plan, invented by Major Edwin Armstrong in 1918,…
- Figure 1.2 Exploratory behavior can be accomplished in two ways. In one way a g…
#### Chapter 2
- Figure 2.1 The composition f ∘ g of functions f and g is a new function that is…
- Figure 2.2 In parallel-combine the functions f and g take the same number of ar…
- Figure 2.3 In spread-combine the n + m arguments are split between the functions…
- Figure 2.4 The combinator spread-combine is really a composition of two parts. …
- Figure 2.5 The combinator (discard-argument 2) takes a three-argument function …
- Figure 2.6 The combinator ((curry-argument 2) 'a 'b 'c) specifies three of the …
- Figure 2.7 The combinator (permute-arguments 1 2 0 3) takes a function f of fou…
#### Chapter 3
- Figure 3.1 A trie can be used to classify sequences of features. A trie is a di…
#### Chapter 7
- Figure 7.1 Kanizsa's triangle is a classic example of a completion illusion. Th…
- Figure 7.2 The angle θ of the triangle to the distant star erected on the semim…
- Figure 7.3 Here we see a "wiring diagram" of the propagator system constructed …
- Figure 7.4 The constraint propagator constructed by c:* is made up of three dir…
# <span id="page-8-0"></span>**Foreword**
Sometimes when you're writing a program, you get stuck. Maybe it's because you realize you didn't appreciate some aspect of the problem, but all too often it's because you made some decision early in the program design process, about a choice of data structure or a way of organizing the code, that has turned out to be too limiting, and also to be difficult to undo.
This book is a master class in specific program organization strategies that maintain flexibility. We all know by now that while it is very easy to declare an array of fixed size to hold data to be processed, such a design decision can turn out to be an unpleasant limitation that may make it impossible to handle input lines longer than a certain length, or to handle more than a fixed number of records. Many security bugs, especially in the code for the Internet, have been consequences of allocating a fixed-size memory buffer and then failing to check whether the data to be processed would fit in the buffer. Dynamically allocated storage (whether provided by a C-style malloc library or by an automatic garbage collector), while more complicated, is much more flexible and, as an extra benefit, less error-prone (especially when the programming language always checks array references to make sure the index is within bounds). That's just a very simple example.
A number of early programming language designs in effect made a design commitment to reflect the style of hardware organization called the *Harvard architecture*: the code is *here*, the data is *there*, and the job of the code is to massage the data. But an inflexible, arm's-length separation between code and data turns out to be a significant limitation on program organization. Well before the end of the twentieth century, we learned from functional programming
languages (such as ML, Scheme, and Haskell) and from object-oriented programming languages (such as Simula, Smalltalk, C++, and Java) that there are advantages to being able to treat code as data, to treat data as code, and to bundle smallish amounts of code and related data together rather than organizing code and data separately as monolithic chunks. The most flexible kind of data is a record structure that can contain not only "primitive data items" such as numbers and characters but also references to executable code, such as a function. The most powerful kind of code constructs other code that has been bundled with just the right amount of curated data; such a bundle is not just a "function pointer" but a *closure* (in a functional language) or an *object* (in an object-oriented language).
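The "code bundled with curated data" idea above can be made concrete with a small sketch. The book's code is Scheme; this is a Python approximation of the same closure idea, and the name `make_counter` is an illustration of mine, not an example from the book.

```python
# A closure bundles code with "just the right amount of curated data":
# the returned function carries its own private state.

def make_counter(start):
    count = start              # curated data captured by the closure
    def step():
        nonlocal count
        count += 1
        return count
    return step                # code and data travel together

c = make_counter(10)
first = c()                    # each call advances the captured state
second = c()
```

An object-oriented language would express the same bundle as an object with a `step` method; the foreword's point is that both are ways of packaging code with its data.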
Jerry Sussman and Chris Hanson draw on their collective century of programming experience to present a set of techniques, developed and tested during decades of teaching at MIT, that further extend this basic strategy for flexibility. Don't just use functions; use *generic* functions, which are open-ended in a way that plain functions are not. Keep functions small. Often the best thing for a function to return is another function (that has been customized with curated data). Be prepared to treat data as code, perhaps even to the extreme of creating a new embedded programming language within your application if necessary. (That is one view of how the Scheme language got its start: the MacLisp dialect of Lisp did not support a completely general form of function closure, so Jerry and I simply used MacLisp to code an embedded dialect of Lisp that did support the kind of function closure we needed.) Be prepared to replace a data structure with a more general data structure that subsumes the original and extends its capabilities. Use automatic constraint propagation to avoid a premature commitment to which data items are inputs and which are outputs.
This book is not a survey, or a tutorial—as I said before, it is a master class. In each chapter, watch as two experts demonstrate an advanced technique by incrementally developing a chunk of working code, explaining the strategy as they go, occasionally
pausing to point out a pitfall or to remove a limitation. Then be prepared, when called on, to demonstrate the technique yourself, by extending a data structure or writing additional code—and then to use your imagination and creativity to go beyond what they have demonstrated. The ideas in this book are rich and deep; close attention to both the prose and the code will be rewarded.
> Guy L. Steele Jr.
> Lexington, Massachusetts
> August 2020
# <span id="page-11-0"></span>**Preface**
We have all spent too much time trying to deform an old piece of code so that it could be used in a way that we didn't realize would be needed when we wrote it. This is a terrible waste of time and effort. Unfortunately, there are many pressures on us to write code that works very well for a very specific purpose, with few reusable parts. But we think that this is not necessary.
It is hard to build systems that have acceptable behavior over a larger class of situations than was anticipated by their designers. The best systems are evolvable: they can be adapted to new situations with only minor modification. How can we design systems that are flexible in this way?
It would be nice if all we had to do to add a new feature to a program was to add some code, without changing the existing code base. We can often do this by using certain organizing principles in the construction of the code base and incorporating appropriate hooks at that time.
Observations of biological systems tell us a great deal about how to make flexible and evolvable systems. Techniques originally developed in support of symbolic artificial intelligence can be viewed as ways of enhancing flexibility and adaptability in programs and other engineered systems. By contrast, common practice of computer science actively discourages the construction of systems that are easily modified for use in novel settings.
We have often programmed ourselves into corners and had to expend great effort refactoring code to escape from those corners. We have now accumulated enough experience to feel that we can identify, isolate, and demonstrate strategies and techniques that we have found to be effective for building large systems that can be
adapted for purposes that were not anticipated in the original design. In this book we share some of the fruits of our over 100 years of programming experience.
#### **This book**
This book was developed as the result of teaching computer programming at MIT. We started this class many years ago, intending to expose advanced undergraduate students and graduate students to techniques and technologies that are useful in the construction of programs that are central to artificial intelligence applications, such as mathematical symbolic manipulation and rule-based systems. We wanted the students to be able to build these systems flexibly, so that it would be easier to combine such systems to make even more powerful systems. We also wanted to teach students about dependencies—how they can be tracked, and how they can be used for explanation and to control backtracking.
Although the class was and is successful, it turned out that in the beginning we did not have as much understanding of the material as we originally believed. So we put a great deal of effort into sharpening our tools and making our ideas more precise. We now realize that these techniques are not just for artificial intelligence applications. We think that anyone who is building complex systems, such as computer-language compilers and integrated development environments, will benefit from our experience. This book is built on the lectures and problem sets that are now used in our class.
#### **The contents**
There is much more material in this book than can be covered in a single-semester class. So each time we offer the class we pick and choose what to present. Chapter 1 is an introduction to our
programming philosophy. Here we show *flexibility* in the grand context of nature and of engineering. We try to make the point that flexibility is as important an issue as efficiency and correctness. In each subsequent chapter we introduce techniques and illustrate them with sets of exercises. This is an important organizing principle for the book.
In chapter 2 we explore some universally applicable ways of building systems with room to grow. A powerful way to organize a flexible system is to build it as an assembly of domain-specific languages, each appropriate for easily expressing the construction of a subsystem. Here we develop basic tools for the development of domain-specific languages: we show how subsystems can be organized around mix-and-match parts, how they can be flexibly combined with *combinators*, how *wrappers* can be used to generalize parts, and how we can often simplify a program by abstracting out a domain model.
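Two of the combinators this summary names can be sketched briefly. The book develops these in Scheme with a richer arity mechanism; the following is a simplified Python rendering of the idea, and the exact signatures are my approximation, not the book's API.

```python
# compose: feed the output of g into f.
def compose(f, g):
    return lambda *args: f(g(*args))

# spread-combine: split the argument list, sending the first n
# arguments to f and the rest to g, then combine the results with h.
def spread_combine(h, f, g, n):
    def combined(*args):
        return h(f(*args[:n]), g(*args[n:]))
    return combined

inc_double = compose(lambda x: x + 1, lambda x: 2 * x)
result = inc_double(3)         # doubles, then increments
```

The mix-and-match quality comes from the fact that `spread_combine` is itself built from smaller pieces and returns an ordinary function, so its outputs can be fed back into further combinators.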
In chapter 3 we introduce the extremely powerful but potentially dangerous flexibility technique of predicate-dispatched *generic procedures*. We start by generalizing arithmetic to deal with symbolic algebraic expressions. We then show how such a generalization can be made efficient by using type tags for data, and we demonstrate the power of the technique with the design of a simple, but easy to elaborate, adventure game.
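The dispatch mechanism named above can be illustrated in miniature. This is my own rough Python sketch of predicate dispatch, far simpler than the chapter's generic-arithmetic machinery: a generic procedure tries registered (predicate, handler) pairs in order and uses the first whose predicate accepts all the arguments.

```python
# A generic procedure with predicate dispatch: handlers are chosen
# by testing predicates against the actual arguments.

class Generic:
    def __init__(self, name):
        self.name = name
        self.rules = []        # list of (predicate, handler) pairs
    def define(self, predicate, handler):
        self.rules.append((predicate, handler))
    def __call__(self, *args):
        for predicate, handler in self.rules:
            if all(predicate(a) for a in args):
                return handler(*args)
        raise TypeError(f"no applicable handler for {self.name}")

add = Generic("add")
add.define(lambda x: isinstance(x, (int, float)), lambda a, b: a + b)
# Fallback: anything non-numeric becomes a symbolic expression.
add.define(lambda x: True, lambda a, b: ["+", a, b])
```

This shows why the technique is both powerful and dangerous: `add("x", 3)` quietly produces a symbolic result instead of an error, which is exactly the kind of open-ended behavior the chapter exploits and warns about.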
In chapter 4 we introduce symbolic *pattern matching*, first to enable term-rewriting systems, and later, with *unification*, to show how type inference can easily be made to work. Here we encounter the need for *backtracking* because of segment variables. Unification is the first place where we see the power of representing and combining *partial-information* structures. We end the chapter with extending the idea to matching general graphs.
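The core of symbolic pattern matching can be sketched in a few lines. This toy matcher is mine, not the book's implementation (which also handles segment variables and backtracking): strings beginning with `?` are pattern variables, and a match returns a dictionary of bindings or `None` on failure.

```python
# Match a pattern against a datum, accumulating variable bindings.
# "?x"-style strings are pattern variables; lists match element-wise.

def match(pattern, datum, bindings=None):
    if bindings is None:
        bindings = {}
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:                  # already bound: must agree
            return bindings if bindings[pattern] == datum else None
        new = dict(bindings)
        new[pattern] = datum
        return new
    if isinstance(pattern, list) and isinstance(datum, list):
        if len(pattern) != len(datum):
            return None
        for p, d in zip(pattern, datum):
            bindings = match(p, d, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == datum else None
```

A term-rewriting rule is then just a pattern plus a template instantiated with the resulting bindings; unification generalizes this by allowing variables on both sides.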
In chapter 5 we explore the power of *interpretation* and *compilation*. We believe that programmers should know how to escape the confines of whatever programming language they must use by making an interpreter for a language that is more appropriate for expressing the solution to the current problem. We also show how to naturally incorporate backtracking search by implementing
nondeterministic amb in an interpreter/compiler system, and how to use *continuations*.
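The shape of such an interpreter, stripped to its bare minimum, looks like this. This sketch is mine and is far simpler than chapter 5's evaluator (no environments, no procedures, no amb): it evaluates nested lists against a table of operators, which is the basic move of escaping into a small embedded language.

```python
# A tiny expression interpreter: nested lists are applications,
# everything else is self-evaluating.

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def evaluate(expr):
    if isinstance(expr, list):
        op, *args = expr
        return OPS[op](*[evaluate(a) for a in args])
    return expr                # numbers evaluate to themselves

value = evaluate(["+", 1, ["*", 2, 3]])
```

Everything in the real chapter—environments, closures, continuations, amb—grows by adding cases to a dispatcher like this one.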
In chapter 6 we show how to make systems of *layered data* and *layered procedures*, where each data item can be annotated with a variety of metadata. The processing of the underlying data is not affected by the metadata, and the code for processing the underlying data does not even know about or reference the metadata. However, the metadata is processed by its own procedures, effectively in parallel with the data. We illustrate this by attaching units to numerical quantities and by showing how to carry dependency information, giving the provenance of data, as derived from the primitive sources.
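The units example above can be sketched minimally. This is my own Python illustration of the layering idea, not the book's implementation: the base arithmetic never inspects the metadata, while the unit layer is processed by its own rule alongside it.

```python
# A layered value: a base datum plus a units layer (unit -> exponent).
# Multiplication combines each layer by that layer's own rule.

class Layered:
    def __init__(self, base, units):
        self.base = base       # the underlying datum
        self.units = units     # metadata layer, e.g. {"m": 1, "s": -1}
    def __mul__(self, other):
        units = dict(self.units)
        for u, e in other.units.items():
            units[u] = units.get(u, 0) + e   # unit exponents add
        return Layered(self.base * other.base, units)

speed = Layered(3.0, {"m": 1, "s": -1})
time = Layered(2.0, {"s": 1})
dist = speed * time            # base multiplies; units combine separately
```

Dependency tracking works the same way: the metadata layer carries provenance, and only provenance-aware code ever looks at it.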
This is all brought together in chapter 7, where we introduce *propagation* to escape from the expression-oriented paradigm of computer languages. Here we have a wiring-diagram vision of connecting modules together. This allows the flexible incorporation of multiple sources of partial information. Using layered data to support tracking of dependencies enables the implementation of *dependency-directed backtracking*, which greatly reduces the search space in large and complex systems.
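The wiring-diagram vision can be shown with a very small propagator sketch of my own (the book's system is much richer, with partial information and dependency tracking): cells hold values, and propagators wired between cells rerun whenever an input cell gains content.

```python
# Cells accumulate content; propagators watch their input cells and
# push results downstream when enough information is available.

class Cell:
    def __init__(self):
        self.value = None
        self.watchers = []
    def add_content(self, value):
        if self.value is None:
            self.value = value
            for watcher in self.watchers:
                watcher()

def adder(a, b, out):
    """Wire an addition propagator from cells a and b into cell out."""
    def run():
        if a.value is not None and b.value is not None:
            out.add_content(a.value + b.value)
    for cell in (a, b):
        cell.watchers.append(run)
    run()

x, y, z = Cell(), Cell(), Cell()
adder(x, y, z)
x.add_content(2)
y.add_content(3)               # the propagator fires and fills z
```

Because modules are wired rather than nested in expressions, information can arrive in any order, and a constraint version simply wires propagators in several directions between the same cells.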
This book can be used to make a variety of advanced classes. We use the combinator idea introduced in chapter 2 and the generic procedures introduced in chapter 3 in all subsequent chapters. But patterns and pattern matching from chapter 4 and evaluators from chapter 5 are not used in later chapters. The only material from chapter 5 that is needed later is the introduction to amb in sections 5.4 and 5.4.1. The layering idea in chapter 6 is closely related to the idea of generic procedures, but with a new twist. The use of layering to implement dependency tracking, introduced as an example in chapter 6, becomes an essential ingredient in propagation (chapter 7), where we use the dependencies to optimize backtracking search.
#### **Scheme**
The code in this book is written in Scheme, a mostly functional language that is a variant of Lisp. Although Scheme is not a popular language, or widely used in an industrial context, it is the right choice for this book. [1](#page-16-0)
<span id="page-15-0"></span>The purpose of this book is the presentation and explanation of programming ideas. The presentation of example code to elucidate these ideas is shorter and simpler in Scheme than in more popular languages, for many reasons. And some of the ideas would be nearly impossible to demonstrate using other languages.
Languages other than those in the Lisp family require lots of ceremony to say simple things. The only thing that makes our code long-winded is that we tend to use long descriptive names for computational objects.
The fact that Scheme syntax is extremely simple—it is just a representation of the natural parse tree, requiring minimal parsing—makes it easy to write programs that manipulate program texts, such as interpreters, compilers, and algebraic expression manipulators.
It is important that Scheme is a permissive rather than a normative language. It does not try to prevent a programmer from doing something "stupid." This allows us to play powerful games, like dynamically modulating the meanings of arithmetic operators. We would not be able to do this in a language that imposes more restrictive rules.
Scheme allows assignment but encourages functional programming. Scheme does not have static types, but it has very strong dynamic typing that allows safe dynamic storage allocation and garbage collection: a user program cannot manufacture a pointer or access an arbitrary memory location. It is not that we think static types are not a good idea. They certainly are useful for the early exorcism of a large class of bugs. And Haskell-like type systems can be helpful in thinking out strategies. But for this book the intellectual overhead of static types would inhibit consideration of potentially dangerous strategies of flexibility.
Also Scheme provides special features, such as reified continuations and dynamic binding, that are not available in most other languages. These features allow us to implement such powerful mechanisms as nondeterministic amb in the native language (without a second layer of interpretation).
<span id="page-16-0"></span>[1](#page-15-0) We provide a short introduction to Scheme in Appendix B.
# <span id="page-17-0"></span>**Acknowledgments**
This book would not have been possible without the help of a great number of MIT students who have been in our classes. They actually worked the problems and often told us about bad choices we made and things we did wrong! We are especially indebted to those students who served as teaching assistants over the years. Michael Blair, Alexey Radul, Pavel Panchekha, Robert L. McIntyre, Lars E. Johnson, Eli Davis, Micah Brodsky, Manushaqe Muco, Kenny Chen, and Leilani Hendrina Gilpin have been especially helpful.
Many of the ideas presented here were developed with the help of friends and former students. Richard Stallman, Jon Doyle, David McAllester, Ramin Zabih, Johan de Kleer, Ken Forbus, and Jeff Siskind all contributed to our understanding of dependency-directed backtracking. And our understanding of propagation, in chapter 7, is the result of years of work with Richard Stallman, Guy Lewis Steele Jr., and Alexey Radul.
We are especially grateful for the help and support of the functional-programming community, and especially of the Scheme Team. Guy Steele coinvented Scheme with Gerald Jay Sussman back in the 1970s, and he has given a guest lecture in our class almost every year. Arthur Gleckler, Guillermo Juan Rozas, Joe Marshall, James S. Miller, and Henry Manyan Wu were instrumental in the development of MIT/GNU Scheme. Taylor Campbell and Matt Birkholz have made major contributions to that venerable system. We also want to thank Will Byrd and Michael Ballantyne for their help with understanding unification with segment variables.
Hal Abelson and Julie Sussman, coauthors with Gerald Jay Sussman of *Structure and Interpretation of Computer Programs*, helped form our ideas for this book. In many ways this book is an advanced sequel to SICP. Dan Friedman, with his many wonderful students and friends, has made deep contributions to our understanding of programming. We have had many conversations about the art of programming with some of the greatest wizards, such as William Kahan, Richard Stallman, Richard Greenblatt, Bill Gosper, and Tom Knight. Working with Jack Wisdom for many years on mathematical dynamics helped clarify many of the issues that we address in this book.
Sussman wants to especially acknowledge the contributions of his teachers: ideas from discussions with Marvin Minsky, Seymour Papert, Jerome Lettvin, Joel Moses, Paul Penfield, and Edward Fredkin appear prominently in this text. Ideas from Carl Hewitt, David Waltz, and Patrick Winston, who were contemporaneous students of Minsky and Papert, are also featured here. Jeff Siskind and Alexey Radul pointed out and helped with the extermination of some very subtle bugs.
Chris learned a great deal about large-scale programming while working at Google and later at Datera; this experience has influenced parts of this book. Arthur Gleckler provided useful feedback on the book in biweekly lunches. Mike Salisbury was always excited to hear about the latest developments during our regular meetings at Google. Hongtao Huang and Piyush Janawadkar read early drafts of the book. A special thanks goes to Rick Dukes, the classmate at MIT who introduced Chris to the lambda papers and set him on the long road towards this book.
We thank the MIT Department of Electrical Engineering and Computer Science and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) for their hospitality and logistical support. We acknowledge the Panasonic Corporation (formerly the Matsushita Electric Industrial Corporation) for support of Gerald Jay Sussman through an endowed chair. Chris Hanson was also partially supported by CSAIL and later by Google for this work.
Julie Sussman, PPA, provided careful reading and serious criticism that forced us to reorganize and rewrite major parts of the text. She has also developed and maintained Gerald Jay Sussman over these many years.
Elizabeth Vickers, spouse of many years, provided a supporting and stable environment for both Chris and their children, Alan and Erica. Elizabeth also cooked many excellent meals for both authors during the long work sessions in Maine. Alan was an occasional but enthusiastic reader of early drafts.
Chris Hanson and Gerald Jay Sussman
---
## Diagram Pages
### Page 1
![Page 1 — diagram](설계원칙-001-020_images/page_1.png)