Note: this wiki is now retired and will no longer be updated!

The static final versions of the pages are left as a convenience for readers. Note that meta-pages such as "discussion," "history," etc., will not work.

SICP exercise 2.37

From Drewiki


Suppose we represent vectors <math>v = \left( v_i \right)</math> as sequences of numbers, and matrices <math>m = \left( m_{ij} \right)</math> as sequences of vectors (the rows of the matrix). For example, the matrix

<math> \begin{bmatrix}

1 & 2 & 3 & 4 \\
4 & 5 & 6 & 6 \\
6 & 7 & 8 & 9

\end{bmatrix} </math>

is represented as the sequence ((1 2 3 4) (4 5 6 6) (6 7 8 9)). With this representation, we can use sequence operations to express concisely the basic matrix and vector operations. These operations (which are described in any book on matrix algebra) are the following:

(dot-product v w) returns the sum <math>\textstyle \sum_i v_i w_i</math>;

(matrix-*-vector m v) returns the vector t, where <math>t_i = \textstyle \sum_j m_{ij} v_j</math>;

(matrix-*-matrix m n) returns the matrix p, where <math>p_{ij} = \textstyle \sum_k m_{ik} n_{kj}</math>;

(transpose m) returns the matrix n, where <math>n_{ij} = m_{ji}</math>.

We can define the dot product as

(define (dot-product v w)
  (accumulate + 0 (map * v w)))
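Note that dot-product relies on accumulate, which this page never defines. It is the standard fold-right from section 2.2.3 of SICP; here it is together with a quick check of the dot product:

```scheme
;; accumulate is a right fold: (op e1 (op e2 (... (op en initial)))).
(define (accumulate op initial sequence)
  (if (null? sequence)
      initial
      (op (car sequence)
          (accumulate op initial (cdr sequence)))))

(define (dot-product v w)
  (accumulate + 0 (map * v w)))

;; (1 2 3) . (4 5 6) = 1*4 + 2*5 + 3*6 = 32
(display (dot-product (list 1 2 3) (list 4 5 6)))
(newline)
```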

Fill in the missing expressions in the following procedures for computing the other matrix operations. (The procedure accumulate-n is defined in exercise 2.36.)

(define (matrix-*-vector m v)
  (map <??> m))
(define (transpose mat)
  (accumulate-n <??> <??> mat))
(define (matrix-*-matrix m n)
  (let ((cols (transpose n)))
    (map <??> m)))


Here's the definition of accumulate-n from exercise 2.36:

(define nil (quote ()))
(define (accumulate-n op init seqs)
  (if (null? (car seqs))
      nil
      (cons (accumulate op init (map car seqs))
            (accumulate-n op init (map cdr seqs)))))
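To see what accumulate-n computes, apply it with + to the rows of a small matrix: it combines the first elements of every row, then the second elements, and so on. (The standard accumulate from section 2.2.3 of SICP is repeated here so the snippet runs on its own.)

```scheme
;; Repeated from above so this snippet is self-contained.
(define (accumulate op initial sequence)
  (if (null? sequence)
      initial
      (op (car sequence) (accumulate op initial (cdr sequence)))))

(define nil '())
(define (accumulate-n op init seqs)
  (if (null? (car seqs))
      nil
      (cons (accumulate op init (map car seqs))
            (accumulate-n op init (map cdr seqs)))))

;; Column sums of a 2x3 matrix: 1+4=5, 2+5=7, 3+6=9.
(display (accumulate-n + 0 (list (list 1 2 3) (list 4 5 6))))
(newline)
```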

The product of a matrix and a column vector is the vector whose components are the dot products of the matrix's rows with that vector.

(define (matrix-*-vector m v)
  (map (lambda (row) (dot-product row v)) m))


(define m1 (list (list 1 2 3) (list 3 5 1) (list 1 1 1)))
(define v1 (list 1 2 3))
(matrix-*-vector m1 v1)


(14 16 6)

The transpose operation reflects the elements of a matrix across its main diagonal; the transpose of matrix m is the matrix n such that <math>n_{ij} = m_{ji}</math>. We can achieve this operation by consing up the elements of the original matrix one row at a time with accumulate-n.

(define (transpose mat)
  (accumulate-n cons nil mat))


(define m2 (list (list 1 2 3) (list 4 5 6) (list 7 8 9)))
(transpose m2)


((1 4 7) (2 5 8) (3 6 9))

The result of an <math>n \times p</math> matrix A times a <math>p \times m</math> matrix B is the <math>n \times m</math> matrix C whose columns <math>C_j</math> are the matrix-times-vector product of matrix A times column <math>B_j</math> of B. This definition is given below:

(define (matrix-*-matrix m n)
  (let ((cols (transpose n)))
    (transpose (map (lambda (v) (matrix-*-vector m v))
                    cols))))

Transposing matrix n gives us a matrix whose rows are the columns of n, so that when we apply matrix-*-vector to m and each row of the transposed matrix, we're effectively multiplying m by a column of the original matrix n. Since the map operation gives us columns of the resulting matrix C, we need to transpose the result of the map to get back to the row matrix representation used in our system.


(matrix-*-matrix m1 m2)


((30 36 42) (30 39 48) (12 15 18))

We can optimize this definition, however. The two transposes in the definition are due to our representation of matrices as row vectors and the fact that each column of the product matrix is the first matrix times the corresponding column vector of the second. We can eliminate one of the transposes by making an observation about the rows of the product matrix and the order of the sequences that represent matrices.

Given two matrices A and B, each element <math>c_{ij}</math> of row i in the product matrix <math>C = A B</math> is equal to the dot product of row i of A and column j of B. To calculate each row i of C, we take row i of A and calculate its dot product with each column of B, one dot product for each element of row i in C. Algebraically, that's the same as multiplying the transpose of matrix B times row i of A treated as a column vector. The algebraic result of this operation is a column vector, but if we treat it as a row vector, we can use the result directly as row i of the product matrix C. By mapping this multiplication over each row of matrix A, we can directly get the rows of product matrix C without needing to transpose the result as we did in the original definition.
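This reasoning can be checked directly with the matrices defined earlier: multiplying the transpose of m2 by the first row of m1, treated as a column vector, should reproduce the first row of the product. (The helper definitions are repeated from the page so the snippet runs on its own.)

```scheme
;; Helper definitions repeated from above so this snippet is self-contained.
(define (accumulate op initial sequence)
  (if (null? sequence)
      initial
      (op (car sequence) (accumulate op initial (cdr sequence)))))

(define nil '())
(define (accumulate-n op init seqs)
  (if (null? (car seqs))
      nil
      (cons (accumulate op init (map car seqs))
            (accumulate-n op init (map cdr seqs)))))

(define (dot-product v w)
  (accumulate + 0 (map * v w)))

(define (matrix-*-vector m v)
  (map (lambda (row) (dot-product row v)) m))

(define (transpose mat)
  (accumulate-n cons nil mat))

(define m1 (list (list 1 2 3) (list 3 5 1) (list 1 1 1)))
(define m2 (list (list 1 2 3) (list 4 5 6) (list 7 8 9)))

;; (transpose m2) times row 1 of m1, treated as a column vector:
(display (matrix-*-vector (transpose m2) (list 1 2 3)))
(newline)
;; => (30 36 42), the first row of the product shown earlier.
```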

With this optimization in mind, here's a new definition of matrix-*-matrix:

(define (matrix-*-matrix m n)
  (let ((cols (transpose n)))
    (map (lambda (row) (matrix-*-vector cols row))
         m)))


(matrix-*-matrix m1 m2)


((30 36 42) (30 39 48) (12 15 18))
