Below are exercises, often modified or extended, from Graham.
- Do not submit exercises not listed in a bundle. They're good practice but not for sending. If you get stuck, email me.
- Read the comments for an exercise before doing it.
- One exercise per submission, unless the description explicitly combines several exercises.
- Send only your code. Don't send Graham's code.
- Send only code that has been tested and critiqued.
Warning: many of the exercises suggest terrible names for the functions
to define. Use the function names given in the exercise descriptions below.
These are the names used by the unit tests.
When inventing your own names, ignore Graham's examples and use the
principles of good naming.
Feel free to define subfunctions when they make the code clearer. In fact, you will be critiqued for not defining subfunctions when appropriate. Some students think the phrase "define a function" means "define just one function." That's not true in Lisp or anywhere else. That terminology is just how programmers describe the API (application program interface) for the public function that other code will call.
Exercise name: Ex 2-4+7+8+9
Send the code for exercises 4, 7, 8, and 9 in one submission.
Some exercises ask for multiple solutions. Submit all requested versions. Use the same required function name for each version. Be careful to test each version separately. Most Lisp editors have a way to evaluate a single function definition in a file.
For Exercise 2-9, explain why each solution fails, in terms a novice would understand, not Lisp jargon, and fix each solution. Don't give an entirely different answer. That would not help a novice learn.
Even though the instructions say to use only what you've seen so far, use
IF, if you have an if-then-else-if chain, and
DO if you have a loop that calculates a value, be it a list, sum, or boolean.
See "The Point of DO" on page 89 and How do you DO? for examples of how simple DO loops can be, without any assignment or RETURNs in the body of the DO.
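For example, here is a simple DO loop in that style: the variable-update clauses do all the work, the body is empty, and there are no assignments or RETURNs. (A sketch for illustration only; sum-list is not one of the exercise functions.)

```lisp
;; Sum a list using DO. All the work happens in the variable
;; update clauses; the body is empty, and the result clause
;; returns the answer.
(defun sum-list (lst)
  (do ((l lst (cdr l))
       (total 0 (+ total (car l))))
      ((null l) total)))
```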
Exercise name: Ex 3-2
Names like new-xxx are always a bad idea. "New" says nothing
about what the function does. And how long can something stay "new"?
The term "stable" is used in computer science
to refer to sorting algorithms that don't mess up the order of elements any
more than they have to. Common Lisp provides two sorting functions,
sort and stable-sort, to capture this difference.
Define stable versions of the three major
set functions: union, intersection, and difference.
These should work like the built-in versions (see Graham's glossary),
but items in the result
should have the same relative ordering they have in
l1, if any, or in
l2, if any. Don't worry about handling the keyword arguments that the built-in versions accept.
Avoid redundant code. Avoid needless CONSing or list scanning.
Exercise name: Ex 3-3
Define occurrences (yes, that is how you spell it) as described in Graham.
It's very inefficient to do this by creating a new list of count pairs every
time you count a new element. Instead, keep a list of
(item . count)
pairs, initially empty. For each element in the input list, update this list
of pairs, where updating means incrementing the counter in the appropriate pair,
if one exists, otherwise adding a new pair.
To increment the count, use
(INCF (CDR pair)), which increments the
CDR of a pair.
NOTE: it's OK to get "a little too long" from the Lisp Critic on this, though there are many reasonably short solutions.
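To illustrate the pair-updating idiom (not a complete solution to the exercise), here is a sketch of handling one element with an (item . count) list and INCF. The name count-item is just for this illustration.

```lisp
;; Sketch of the (item . count) idiom. COUNT-ITEM updates the
;; pair list for one element, adding a new pair if needed, and
;; returns the (possibly extended) pair list.
(defun count-item (item counts)
  (let ((pair (assoc item counts)))
    (if pair
        (progn (incf (cdr pair)) counts)   ; bump existing count
        (cons (cons item 1) counts))))     ; add a new pair
```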
Exercise name: Ex 3-5
Here is why the example returns
(7 6 3 7): The first element
is 7 (the first element of the original list) + 0 (the position of the first
element, counting from 0), the second element is 5 (from the original list)
plus position 1, the third element is 1 plus position 2, and the last element
is 4 plus position 3.
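You can check this arithmetic at the REPL. Assuming, as described above, that the input list was (7 5 1 4):

```lisp
;; Each element plus its zero-based position.
(mapcar #'+ '(7 5 1 4) '(0 1 2 3))
;; => (7 6 3 7)
```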
Exercise name: Ex 3-8
Define show-dots as described in Graham. This is a simple function
and should have simple code. Look at the test cases in exercise-tests.lisp.
Define show-list to act like Lisp's PRINT, except that it should
print square brackets instead of parentheses. (Printing to a string and replacing
parentheses doesn't count.)
Both functions should be able to print any S-expression, including atoms.
Exercise name: Ex 3-9
There are two problems with this exercise as given. First, Graham's phrase "the longest finite path" makes no sense if you allow cycles in the network, as he does and should. There are an infinite number of finite paths when the network has cycles.
We could say "find the longest path with no cycles," but consider asking for the longest path from node A to node A. This has a cycle, but it's asking a reasonable question: what is the longest round-trip tour from A?
So, define your function to find the longest path with no internal cycles, i.e., no node repeated except possibly the first and last nodes.
The second problem is that
looking for longest paths means that Graham's breadth-first
BFS function is not appropriate. Why? Because
breadth-first search has to build
a queue of alternatives, and such a queue is only worth doing
if it lets you stop before you've tried all possibilities.
There's no way to find the longest path without trying all possibilities. Therefore, there is no point to keeping a queue. Depth-first search would work just fine and is more efficient.
Define a recursive
longest-path function that uses depth-first
search to explore all paths without internal cycles, one at a time.
That is, there will be a variable with the current
path, and perhaps another with the best path, but no list of all paths.
Exercise name: Ex 5-5
Do as specified. Put both versions in the same submission.
Exercise name: Ex 5-6
Put both versions in the same submission.
(intersperse '- '(a)) should return (a), and
(intersperse '- '()) should return nil.
Avoid inefficiencies. There are two easy traps to fall into:
- a test in your iteration or recursion that can never be true after the first pass
- adding an item to the result that you then remove at the end
Exercise name: Ex 5-7
Define four versions, all with the same name. The fourth should use
every. Put all four versions in the same submission.
return-from makes more sense than
return in part
Exercise name: Ex 5-8
Don't worry about using just one function, if more would make the code clearer or more efficient. Only one recursive function is needed though, and the key point is to get both values in one scan through the list.
Don't use subseq to create subvectors. This is
expensive. Don't call
length on every iteration either. This
is unnecessary. It also limits the generality of your code. Define
max-min to take the keywords
:start and :end, just like the other sequence functions do. With
:start and :end, your code should be more efficient than code with subseq
and multiple calls to
length, and more flexible at the same time.
Think about what other keyword arguments would provide similar efficiency and flexibility.
Exercise name: Ex 5-9
Define both versions as specified. Put both versions in the same submission.
Important: In order for the test cases to tell if your code is indeed stopping immediately when a path is found, it needs to know what you put on the queue. For this to happen, change:
(defun bfs (end queue net) (if (null queue) ...
to:
(defun bfs (end queue net) (if (empty-queue-p queue) ...
empty-queue-p is defined in exercise-tests.lisp. It works like null but also checks what's in the queue.
To pass the tests, you must also fix the code to handle cycles in the graph. Graham's code will find a path if one exists, no matter what, but will go into an infinite loop if no path exists and there are cycles in the graph.
Exercise name: Ex 6-2
Graham's finder function is not great Common Lisp. Run the Critic
on it and see.
Furthermore, it has a serious bug, as noted on the errata page at his web site.
Therefore, part of your job here is to reimplement the code to work correctly (pass the test cases) and be in good Common Lisp style.
Define bin-search so that
(bin-search object vector :key key-fn :start start :end end)
uses binary search to find an item in vector whose
(funcall key-fn item) value is equal to object. For example,
> (bin-search 8 #((1 a) (3 b) (5 c) (7 d) (8 e) (10 f)) :key #'car :start 3 :end 5) (8 E)
The search should go from start (inclusive) to end (exclusive). All parameters should take their usual values. In particular, in Lisp, C++, Java and elsewhere, the end of a sequence is the first index past the last element. Graham uses the index of the last element, which makes his arithmetic more complicated. With the standard value you can test for the end case and calculate the midpoint without a range calculation.
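To see how the exclusive end simplifies the arithmetic, here is a sketch of plain binary search over a sorted vector of numbers (not the full bin-search with its keyword parameters; bsearch is just an illustrative name): the empty range is simply (>= start end), and the midpoint needs no range calculation.

```lisp
;; Binary search with an exclusive END, as in the standard
;; sequence functions. Returns the item found, or NIL.
(defun bsearch (obj vec start end)
  (when (< start end)
    (let* ((mid (floor (+ start end) 2))   ; plain midpoint, no range math
           (item (aref vec mid)))
      (cond ((< obj item) (bsearch obj vec start mid))
            ((> obj item) (bsearch obj vec (1+ mid) end))
            (t item)))))
```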
Do not define a
:test parameter. A :test parameter
makes little sense here, because the code takes a vector previously sorted using
<, and that implicitly defines an equality predicate.
You should not need a helper function, just a recursive bin-search.
Exercise name: Ex 6-5+6+7+8
Send code for exercises 5, 6, 7 and 8 in one submission.
The parameter to the function in Exercise 6-8 should
be optional. If omitted, it should reset the internal data to its initial state.
(memoize function) should take one argument that is
itself a function of one argument, e.g., #'sqrt.
memoize should return a new function object, i.e., a closure. The closure
should call function when it gets new argument values, and return
whatever function returns. When passed previously seen
equal values, the closure should return the previous answer without
calling function. If the closure is called with no arguments,
it should clear its internal table of argument values, and start over.
See the test cases in exercise-tests.lisp for examples.
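A sketch of the closure-plus-hash-table pattern is below. Details such as the exact lambda list and the return value of the reset call are assumptions here; check exercise-tests.lisp for the real requirements.

```lisp
;; MEMOIZE returns a closure over a private EQUAL hash table.
;; Called with an argument: look it up, computing and caching on a miss.
;; Called with no arguments: clear the table and start over.
(defun memoize (fn)
  (let ((table (make-hash-table :test #'equal)))
    (lambda (&rest args)
      (if (null args)
          (clrhash table)                  ; no arguments: reset the cache
          (let ((arg (first args)))
            (multiple-value-bind (val found) (gethash arg table)
              (if found
                  val
                  (setf (gethash arg table) (funcall fn arg)))))))))
```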
Exercise name: Generalized BFS
Test cases: use the shortest-path tests, as described below.
State space search formalizes solving many problems as starting with some initial state S, and trying different actions that lead to new states, until you find a state that satisfies some goal predicate P. The solution is a list of states, called a path, showing the states from the final goal backward to the initial state.
Generalize the BFS function to do simple breadth-first state space search. Define
(bfs paths pred gen) to take three arguments:
- paths - A list of paths, where each path is a list of states, from newest to oldest
- pred - A predicate that takes a state and returns true if the state is a goal state
- gen - A function that takes a path and returns the list of states that can be reached legally from the current end of the path, given the other states in the path.
bfs should avoid cycles, i.e., it should not add a state
to any path twice. The gen function does not need to handle that case.
The code for
bfs should not need to change very much
from your breadth-first code in Ex 5-9.
Try using the version that didn't need non-local exits.
The code will manage the queue of paths the same way.
Replace the steps that test for done and that generate new states to use
the functions passed in. It should not reverse the final answer.
Test by redefining
shortest-path to call the new bfs
with the appropriate
lambda closures to search a graph. No other
new functions should be needed.
shortest-path should take the same parameters as before.
Test your code with the shortest-path tests. Note that
shortest-path will need to reverse the value returned by the generalized bfs.
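As a sketch of the shapes involved (assuming a net represented as an association list from node to successors, as in Graham's chapter 3 example; the queue handling and helper details here are one possible design, not the required one):

```lisp
;; Generalized breadth-first search. PATHS is a queue of paths
;; (newest state first). PRED tests for a goal state. GEN returns
;; the states legally reachable from the end of a path. States
;; already in a path are not added again, which avoids cycles.
;; Note that BFS itself does not reverse the answer.
(defun bfs (paths pred gen)
  (when paths
    (let ((path (first paths)))
      (if (funcall pred (first path))
          path
          (bfs (append (rest paths)
                       (mapcar (lambda (state) (cons state path))
                               (remove-if (lambda (state) (member state path))
                                          (funcall gen path))))
               pred gen)))))

;; Example use: shortest path in a graph given as an alist from
;; node to successors. The reverse happens here, not in BFS.
(defun shortest-path (start end net)
  (reverse (bfs (list (list start))
                (lambda (state) (eql state end))
                (lambda (path) (cdr (assoc (first path) net))))))
```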
Exercise: Iterative Deepening Search
Test cases: use the shortest-path tests, as described below.
Simple breadth-first search on large very bushy networks is impractical. If there are dozens or hundreds of branches, the queue quickly grows to 1000s of elements. But if you're looking for an optimal solution, e.g., the shortest path, what else can you do?
One answer that at first glance seems even worse than breadth-first search is this:
- Try to find an answer with depth-first search limited to a depth of 1.
- If that fails, start over and try to find an answer with depth-first search limited to a depth of 2.
- If that fails, start over and try to find an answer with depth-first search limited to a depth of 3
- Repeat, increasing the limit, until you find an answer.
The iterative deepening algorithm seems like it will be too expensive because step 2 will repeat all the work done in step 1, step 3 will repeat step 2 and so on.
However, while this repeated work does happen, the combinatorics of trees (specifically, that there are as many or more nodes at layer N as in the entire tree above layer N) make the total runtime cost of this algorithm very competitive with BFS, with much smaller memory costs. The same idea can be applied to more intelligent search algorithms.
Define the function
ids (iterative deepening search)
to take the same parameters as the generalized bfs, and
repeatedly call a depth-first search function -- call it
dls (depth-limited search) -- that does a simple depth-first search to find an
extension of path.
dls should take an additional
parameter n, and should
- generate new states by applying gen to path
- stop and return the first path found such that pred is true of the CAR of path
- stop and fail if the depth of search reaches n
dls should not need a queue.
dls should avoid cycles, i.e., it should not add states already in path.
There is one tricky point. If you look at the algorithm as described on Wikipedia and elsewhere, it has a problem: it loops forever if there is no solution!
One option would be to pass a depth-limit parameter to
ids to tell it when to stop trying. This is useful to have.
It's necessary for both ids and
bfs if infinitely long paths are possible.
But it's not a general or satisfying approach when infinitely long
paths are not possible.
bfs can handle searches that fail as long as there
are no infinite paths, because, eventually, nothing new can be added
to the queue, and all options have been explored.
To do the same thing, ids needs
to be able to distinguish two kinds of search failure:
- True failure, when search could not extend a path any further
- Depth failure, when search reached the depth limit
If all the depth-first searches have true failure at depth N, then there's no point to going any deeper.
Define ids, dls, and any subfunctions
you need. Test by redefining
shortest-path to call ids,
and confirm that all tests pass, including those where search fails.
Your code should be efficient and clean, including the handling
of the two kinds of failure.
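One clean design for the two kinds of failure (an assumption, not the only possibility) is to have the depth-limited search return a second value saying whether any branch was cut off by the depth limit. Here is a self-contained sketch using a hypothetical dls-demo over an association-list net; your dls will differ in its parameters.

```lisp
;; Sketch of a depth-limited DFS that reports HOW it failed.
;; Returns two values: the path found (newest state first, or NIL),
;; and T if some branch failed only because the depth limit was hit.
;; The caller should deepen only when that second value is T.
(defun dls-demo (path goal net n)
  (cond ((eql (first path) goal) (values path nil))
        ((zerop n) (values nil t))             ; depth failure
        (t (let ((limited nil))
             (dolist (state (cdr (assoc (first path) net))
                      (values nil limited))    ; true failure unless cut off
               (unless (member state path)     ; avoid cycles
                 (multiple-value-bind (found cut)
                     (dls-demo (cons state path) goal net (1- n))
                   (when found (return (values found nil)))
                   (when cut (setq limited t)))))))))
```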
Exercise name: Ex 7-2
Exercise 7-1 is good practice for this exercise, but only send Exercise 7-2.
Instead of making a list of the expressions in a file, define
(map-stream function stream)
to apply function to every Lisp expression in stream. It should return nil. Similarly, define
(map-file function pathname)
to apply function to every Lisp expression in the file
named by pathname, and return
nil. These functions
can be very useful for checking code files, collecting the names of
functions defined in a file, and so on.
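The key idiom here is READ with a no-error eof-value, using the stream object itself as a unique end marker so the marker can never collide with data. A sketch of both functions, under the behavior described above:

```lisp
;; Apply FUNCTION to every Lisp expression in STREAM, returning NIL.
;; Using the stream itself as the eof-value guarantees the marker
;; can't be confused with anything read from the stream.
(defun map-stream (function stream)
  (do ((expr (read stream nil stream) (read stream nil stream)))
      ((eq expr stream) nil)
    (funcall function expr)))

;; File version: open the file and reuse MAP-STREAM.
(defun map-file (function pathname)
  (with-open-file (in pathname :direction :input)
    (map-stream function in)))
```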
There are test cases only for
Exercise name: Ex 7-4
Do as specified. Currently, there are no test cases for this exercise.
Exercise name: Ex 7-5+6
To do this exercise, you need Graham's buffer and stream-subst code. The book version of the buffer code has an error. Use this version instead. Here's the stream-subst code, which you'll be changing.
Define stream-subst as described for Exercise 7-5, but
the wildcard character should be specifiable with a
keyword parameter, with the default #\+.
Then modify stream-subst as described for Exercise 7-6.
Change the code so that the pattern
old can be a sequence. (This
is a very easy change.)
Strings are sequences but so are lists and simple one-dimensional arrays.
By allowing non-string sequences, we can specify patterns with non-character
elements. In particular, you should change your code
so that the elements can be keywords matching
digits, alphabetics, alphanumerics, or any character, e.g.,
#(:digit :alpha :digit) would match any subsequence
of a digit, a letter, and a digit.
To simplify writing tests, define
string-subst to use strings as input
and output streams for stream-subst.
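The string-stream wrapping idiom looks like this. To keep the example self-contained, it wraps a hypothetical stand-in stream transformer (stream-upcase) rather than stream-subst; your string-subst would wrap your stream-subst the same way.

```lisp
;; Stand-in stream transformer: copies IN to OUT, upcasing each
;; character. (Hypothetical; your stream-subst goes here instead.)
(defun stream-upcase (in out)
  (do ((c (read-char in nil nil) (read-char in nil nil)))
      ((null c))
    (write-char (char-upcase c) out)))

;; Wrap a stream function with string streams for easy testing.
(defun string-upcase-demo (str)
  (with-output-to-string (out)
    (with-input-from-string (in str)
      (stream-upcase in out))))
```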
Exercise name: Ex 8-4
Note that if you get things wrong, you typically need to reload ring.lisp and file.lisp, in that order.
Submit only the defpackage forms you wrote. Don't send Graham's code.
Exercise name: Ex 8-5
Henley is the random text generator in Figures 8.2 and 8.3. It generates poems based on word pairs found in an existing text. Page vi of the book has a poem by Henley, based on the word pairs in Milton's Paradise Lost.
The ostensible goal of this exercise is to define
(henley-p string) to return true
if and only if
string could have been generated
by generate-text. Hint: a better way to think of this is
that henley-p returns false if it can prove generate-text
could not have created the text.
The secondary goal is to take advantage of this exercise to incrementally improve the original code. While you personally will never write poor code (right?), you will often encounter it. It's not a good investment of time to clean up bad code. There's no benefit if no new functionality results. But, when you have to modify a stretch of code to fix a bug or add new features, then you should clean up the code you're working on so that future maintenance will be easier. A standard rule of thumb is that the next stretch of code that needs maintenance will be in or near the stretch that was just maintained. So clean up pays off.
Setup and test Henley
Get the Henley code (Figures 8.2 and 8.3) from
graham.lisp, and put it in
the file henley.lisp. Put
(in-package :cs325-user) at the top
as with other exercise code.
Compile and load the code. Make sure it compiles without error. (You might see a warning about see being undefined. That's because it's embedded in a let, which triggers some warnings. This should be harmless.)
Second, download the plain text file version of Milton's Paradise Lost from Project Gutenberg, and save it in paradise-lost.txt. Open it with your Lisp editor or any text editor and
- delete the Project Gutenberg notes at the front
- delete the Project Gutenberg notes at the end
Start Lisp, compile and load Henley, and call
(read-text paradise-lost-pathname) to read
and process the entire poem. This should take just a few seconds, if Henley was compiled.
Now try several calls to
(generate-text number) with
numbers like 10 and 20 and 50. You should see different Milton-like text
produced each time.
Clean up Henley
There are a number of ways to clean up the Henley code.
Change generate-text to be iterative.
In Scheme, you can count on tail-recursion optimization.
In Common Lisp, this is possible but not guaranteed.
Making the code iterative
will also allow you to drop the optional parameter.
Get rid of the LET form wrapped around the definition of
SEE.
Such definitions are a pain to maintain. It also means you have no
way, short of reloading the whole form, to reset
SEE back to its start state. The better approach is to define
MAKE-SEE, a function that returns a closure
that does what
SEE did. Then any time you want
a fresh "SEE" function, you just call MAKE-SEE.
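The MAKE-XXX closure pattern looks like this. This is a generic sketch (make-seer is an invented name for illustration), not the actual SEE code, which also has to record word pairs.

```lisp
;; Generic version of the pattern: instead of a DEFUN buried in a
;; LET, define a maker that returns a fresh closure each time.
;; Here each closure independently remembers the previous item.
(defun make-seer (&optional (prev nil))
  (lambda (item)
    (prog1 prev            ; return what was seen before
      (setq prev item))))  ; remember the new item
```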
Refactor the code to operate on a stream. That will make it more flexible, e.g., you can test it on other kinds of streams, like string streams.
Define
(read-stream stream function)
to be a general version of
read-text. It should apply
function to every "word" in the text in
stream, as defined by Graham, i.e.,
a symbol for a word or punctuation character.
Redefine read-text to call read-stream.
Change read-text to return the number of
entries in *words*, so that you can see if it actually read anything.
Clean up the tokenizing code so
that the Lisp Critic is happy. An easy change is to replace
whatever the Critic explicitly flags.
A more important change is to refactor the
do so that the
do loop just handles character input until
the stream is empty. It calls another function -- with no loop in it --
that manages adding a character to a buffer and possibly outputting a token.
Recompile and reload henley.lisp, re-read paradise-lost.txt and confirm that generate-text still produces reasonable output.
Define (henley-p string) as described before. Note that
henley-p takes a string, not a file.
henley-p should call read-stream.
Unless you were very keen-eyed during clean-up, you should get a test failure. There's a bug in Graham's code. Find it and fix it. Don't create repeated code.
You should also test
henley-p on some random output from
generate-text. Here's an easy way to capture the printed output of
generate-text in a string, suitable for passing to henley-p:
(setq str (with-output-to-string (*standard-output*) (generate-text 20)))
with-output-to-string is described in the book glossary.
Page 113 discusses rebinding global variables. In this case,
we temporarily rebind the global standard output stream
variable to a string stream, so that any call to (format t ...)
goes there instead of the terminal.
Submit for review all the code you changed. Briefly note what you changed and why. Do not submit functions you didn't change.
Exercise name: Ex 8-6
Do Exercise 8-5 first, because you need the predicate it defines to test your solution to this exercise.
It's not too difficult to use the existing hashtable to find a word that can precede a given word, and hence generate backwards from an initial word to the start of a sentence. But this is expensive and doesn't have the random generation statistical properties of Henley.
A better approach is to create an equivalent hashtable for going backwards, using the data in the existing hashtable. Then generating in either direction is basically a matter of which hashtable you're using.
To test your code, feed the output of your generator into your Henley tester. It should always return true.
Exercise name: Ex 9-2
Define your function to take the number of cents and an optional list of the
available coins, largest to smallest, e.g.,
(make-change 72 '(25 10 1)).
The default list should be
(25 10 5 1).
You can assume that the coins are such that the simple greedy algorithm will get the answer with the fewest coins. For a more challenging exercise that doesn't make this assumption, after you do this, try make-best-change.
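A greedy sketch is below. It returns the coin counts as a list, one per coin; that return convention is an assumption here, since Graham's exercise returns multiple values, so check the test cases for the exact requirement.

```lisp
;; Greedy change-making: for each coin, largest to smallest, take
;; as many as fit, using FLOOR to get both quotient and remainder.
;; Returns a list of counts, one per coin.
(defun make-change (cents &optional (coins '(25 10 5 1)))
  (let ((counts nil))
    (dolist (coin coins (nreverse counts))
      (multiple-value-bind (n rest) (floor cents coin)
        (push n counts)
        (setq cents rest)))))
```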
Exercise name: Ex 9-5
Graham says that max and
min have opposite signs.
This makes no sense. It should say that
(funcall f max) and
(funcall f min) have opposite signs.
Also, students frequently misunderstand the use of
epsilon. What Graham means (and what is usually intended) is
that you stop approximating when your two bounding values are within
epsilon of each other.
Finally, my best solution requires two functions, one to check the initial values and handle some special cases that have no solution or an exact solution, and another to do the actual solving. The style critic calls the first a little too long, the latter somewhat too long.
Exercise name: Ex 9-6
Do as specified.
See the challenge exercises for challenges based on exercises in this chapter.
Exercise name: Ex 10-3+5
Send code for exercises 3 and 5 in one submission.
Note that the test case Graham gives for
nth-expr rules out using a function. If you don't see why, try running
the example with this definition:
(defun nth-expr (n &rest l) (nth (1- n) l))
However, that one test case still allows for a macro that evaluates all expressions, catching any errors. We'll take the interpretation that
(nth-expr n exp1 exp2 exp3 ...)
should evaluate only the nth expression.
Here's a test case that your code has to handle correctly. Only one thing should print and it'll be clear if you're right.
(let ((n 3) (x "lose") (y "win")) (nth-expr n (format t "You ~S!" x) (format t "You ~S!" x) (format t "You ~S!" y) (format t "You ~S!" x) ))
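One way to get single-expression evaluation (a sketch; other expansions are possible) is to expand into a CASE that dispatches on the runtime value of n:

```lisp
;; NTH-EXPR expands into a CASE, so only the selected expression
;; is ever evaluated. (nth-expr n a b c) => (case n (1 a) (2 b) (3 c))
(defmacro nth-expr (n &rest exprs)
  `(case ,n
     ,@(loop for expr in exprs
             for i from 1
             collect (list i expr))))
```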
Exercise name: Ex 10-6
This exercise requires you to understand how variable bindings work, but the answer is quite short. Your macro needs to work for both lexical and special variables.
Exercise name: Ex 10-8
Note the name change: Graham's name does not suggest the close connection
between this macro and INCF. Make sure
doublef correctly handles generalized references.
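For instance, DEFINE-MODIFY-MACRO handles generalized references for you (a sketch of one approach; writing the macro by hand with GET-SETF-EXPANSION is the fuller exercise):

```lisp
;; DOUBLEF doubles any SETF-able place, just as INCF increments one.
(defun double (x) (* x 2))
(define-modify-macro doublef () double)
```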
Exercise name: Ex 11-1+5
Send code for exercises 1 and 5 in one submission.
Exercise name: Ex 11-2
Do as specified.
Exercise name: Ex 12-3+4+5
Send all the queue functions in one submission.
Queues in Lisp are easy and useful, but Graham left out one essential function,
and should have made
make-queue more convenient. Define
empty-queue-p to return true if a queue is empty. Redefine
(make-queue arg1 arg2 ...)
to return a queue initialized with the items given, if any.
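Assuming Graham's representation of a queue as a cons whose CAR is the item list and whose CDR is that list's last cons, the two functions might be sketched as:

```lisp
;; Graham-style queue: (items . last-cons).
(defun empty-queue-p (q)
  (null (car q)))

(defun make-queue (&rest items)
  ;; Build the item list once, then point the tail slot at its
  ;; last cons, preserving Graham's queue invariant.
  (let ((elements (copy-list items)))
    (cons elements (last elements))))
```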
Exercise name: Ex 12-6+7
As far as I can tell, you need to do Exercise 7 first, before doing Exercise 6, so submit both as one submission.
For all the circular list exercises, make sure you set the global variable
*PRINT-CIRCLE* to true first! This tells Lisp to use a special printing algorithm that handles circular lists. Note that what Lisp prints can be typed in to make a circular list too. It's not just an output format.
As you might expect, by chapter 12, a simple question doesn't mean a simple answer. Graham should've said more.
First, for Exercise 7, something like
(eq l (cdr l))
won't work. This only catches a 1-element cdr-circular list, not lists like
(let ((l (list 1 2 3))) (nconc l l))
(let ((l (list 1 2 3))) (nconc l (cddr l)))
The obvious solution is to keep a list of every CDR and return true if you find the same list twice before hitting NIL. The problem with this solution is that it requires potentially large amounts of CONSing. If you assume a top-level circularity only in the final CDR, you can trade more pointer following to avoid CONSing.
Start a loop with two variables -- a "turtle" that CDR's down the list and a rabbit that CDDR's. That means the rabbit goes twice as fast as the turtle. If the list is CDR-circular, eventually the rabbit and turtle will bump into each other. If the list is not circular, the rabbit will hit NIL.
After you have CDR-circular lists working, then you can do Exercise 6. Careful! Make sure you test this thoroughly. It's easy to write a version that says false when it should say true.
Make sure your function works for regular lists too.
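The turtle-and-rabbit idea for Exercise 7 can be sketched like this (one possible version; the name cdr-circular-p is an assumption, so use whatever name the tests require):

```lisp
;; Floyd's cycle detection: the rabbit moves two conses per step,
;; the turtle one. They collide iff the CDR chain loops; the
;; rabbit reaches an atom (usually NIL) iff it doesn't.
(defun cdr-circular-p (list)
  (do ((turtle list (cdr turtle))
       (rabbit (cdr list) (cddr rabbit)))
      ((or (atom rabbit) (atom (cdr rabbit))) nil)
    (when (eq turtle rabbit)
      (return t))))
```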
Exercise name: Ex 12-8
Graham's definition of CAR-circular on page 209 says "the loop passes through the car of some cons." This is pretty open, so I'm defining this problem to be much more general than CDR-circular. In fact, it's comparable to what Lisp has to do when printing with *PRINT-CIRCLE* set to true. All the solutions I've seen keep a stack of sublists.
My exercise-tests.lisp include both CAR-circular and CDR-circular lists, and, in some cases, both. If you believe one of the tests is wrong, email me. Before doing so, draw the test case in box and arrow notation, as shown in the figures in Chapter 12. If there is a loop that goes through the CAR side of a box, then it's CAR-circular.