
Matrix Multiplication

Let $ \mathbf{A}^{\!\hbox{\tiny T}}$ be a general $ M\times L$ matrix, and let $ \mathbf{B}$ denote a general $ L\times N$ matrix. Denote the matrix product by $ \mathbf{C}=\mathbf{A}^{\!\hbox{\tiny T}}\,\mathbf{B}$ . Matrix multiplication is carried out by computing the inner product of every row of $ \mathbf{A}^{\!\hbox{\tiny T}}$ with every column of $ \mathbf{B}$ . Let the $ i$ th row of $ \mathbf{A}^{\!\hbox{\tiny T}}$ be denoted by $ \underline{a}^{\hbox{\tiny T}}_i$ , $ i=1,2,\ldots,M$ , and the $ j$ th column of $ \mathbf{B}$ by $ \underline{b}_j$ , $ j=1,2,\ldots,N$ . Then the matrix product $ \mathbf{C}=\mathbf{A}^{\!\hbox{\tiny T}}\,\mathbf{B}$ is defined as

$\displaystyle \mathbf{C}= \mathbf{A}^{\!\hbox{\tiny T}}\, \mathbf{B}= \left[\begin{array}{cccc}
\langle\underline{a}^{\hbox{\tiny T}}_1,\underline{b}_1\rangle & \langle\underline{a}^{\hbox{\tiny T}}_1,\underline{b}_2\rangle & \cdots & \langle\underline{a}^{\hbox{\tiny T}}_1,\underline{b}_N\rangle \\
\langle\underline{a}^{\hbox{\tiny T}}_2,\underline{b}_1\rangle & \langle\underline{a}^{\hbox{\tiny T}}_2,\underline{b}_2\rangle & \cdots & \langle\underline{a}^{\hbox{\tiny T}}_2,\underline{b}_N\rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle\underline{a}^{\hbox{\tiny T}}_M,\underline{b}_1\rangle & \langle\underline{a}^{\hbox{\tiny T}}_M,\underline{b}_2\rangle & \cdots & \langle\underline{a}^{\hbox{\tiny T}}_M,\underline{b}_N\rangle
\end{array}\right].
$

This definition can be extended to complex matrices by using a definition of inner product which does not conjugate its second argument.
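
To make the definition concrete, here is a minimal Python/numpy sketch (the language and library are assumptions, not part of the original text) that forms each entry of $ \mathbf{C}$ as the inner product of a row of $ \mathbf{A}^{\!\hbox{\tiny T}}$ with a column of $ \mathbf{B}$ , and checks the result against numpy's built-in matrix product:

    import numpy as np

    M, L, N = 3, 4, 2
    rng = np.random.default_rng(0)
    At = rng.standard_normal((M, L))  # plays the role of A^T (M x L)
    B = rng.standard_normal((L, N))   # L x N

    # C[i, j] = <i-th row of At, j-th column of B>
    C = np.empty((M, N))
    for i in range(M):
        for j in range(N):
            C[i, j] = np.dot(At[i, :], B[:, j])

    assert np.allclose(C, At @ B)  # agrees with the built-in product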

Examples:

$\displaystyle \left[\begin{array}{cc} a & b \\ c & d \\ e & f \end{array}\right]
\cdot
\left[\begin{array}{cc} \alpha & \beta \\ \gamma & \delta \end{array}\right]
=
\left[\begin{array}{cc}
a\alpha+b\gamma & a\beta+b\delta \\
c\alpha+d\gamma & c\beta+d\delta \\
e\alpha+f\gamma & e\beta+f\delta
\end{array}\right]
$

$\displaystyle \left[\begin{array}{cc} \alpha & \beta \\ \gamma & \delta \end{array}\right]
\cdot
\left[\begin{array}{ccc} a & c & e \\ b & d & f \end{array}\right]
=
\left[\begin{array}{ccc}
\alpha a + \beta b & \alpha c + \beta d & \alpha e + \beta f \\
\gamma a + \delta b & \gamma c + \delta d & \gamma e + \delta f
\end{array}\right]
$

$\displaystyle \left[\begin{array}{c} \alpha \\ \beta \end{array}\right]
\cdot
\left[\begin{array}{ccc} a & b & c \end{array}\right]
=
\left[\begin{array}{ccc}
\alpha a & \alpha b & \alpha c \\
\beta a & \beta b & \beta c
\end{array}\right]
$

$\displaystyle \left[\begin{array}{ccc} a & b & c \end{array}\right]
\cdot
\left[\begin{array}{c} \alpha \\ \beta \\ \gamma \end{array}\right]
= a \alpha + b \beta + c \gamma
$
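
The last two examples (an outer product and an inner product) can also be reproduced symbolically; the following sketch uses sympy, an assumed dependency not mentioned in the original text:

    import sympy as sp

    a, b, c = sp.symbols('a b c')
    alpha, beta, gamma = sp.symbols('alpha beta gamma')

    col = sp.Matrix([alpha, beta])  # 2 x 1 column vector
    row = sp.Matrix([[a, b, c]])    # 1 x 3 row vector

    print(col * row)                # 2 x 3 outer product, as in the third example
    print(sp.Matrix([[a, b, c]]) * sp.Matrix([alpha, beta, gamma]))
    # 1 x 1 matrix containing a*alpha + b*beta + c*gamma, as in the fourth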

An $ M\times L$ matrix $ \mathbf{A}$ can be multiplied on the right by an $ L\times N$ matrix, where $ N$ is any positive integer. An $ L\times N$ matrix $ \mathbf{A}$ can be multiplied on the left by an $ M\times L$ matrix, where $ M$ is any positive integer. Thus, the number of columns in the matrix on the left must equal the number of rows in the matrix on the right.

Matrix multiplication is, in general, non-commutative. That is, normally $ \mathbf{A}\,\mathbf{B}\neq \mathbf{B}\,\mathbf{A}$ even when both products are defined (such as when the matrices are square).
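
A small numerical instance (numpy assumed, as above) makes the asymmetry visible: multiplying by a permutation matrix on the right swaps columns, while multiplying by it on the left swaps rows:

    import numpy as np

    A = np.array([[1., 2.],
                  [3., 4.]])
    P = np.array([[0., 1.],
                  [1., 0.]])  # permutation (swap) matrix

    print(A @ P)  # [[2. 1.], [4. 3.]] -- columns of A swapped
    print(P @ A)  # [[3. 4.], [1. 2.]] -- rows of A swapped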

The transpose of a matrix product is the product of the transposes in reverse order:

$\displaystyle (\mathbf{A}\mathbf{B})^{\hbox{\tiny T}} = \mathbf{B}^{\hbox{\tiny T}} \mathbf{A}^{\!\hbox{\tiny T}}
$
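
A quick numerical check of this identity, under the same assumed numpy setup:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 4))
    B = rng.standard_normal((4, 2))

    assert np.allclose((A @ B).T, B.T @ A.T)  # transpose of a product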

The identity matrix is denoted by $ \mathbf{I}$ and is defined as

$\displaystyle \mathbf{I}\isdef \left[\begin{array}{ccccc}
1 & 0 & 0 & \cdots & 0 \\
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \cdots & \vdots \\
0 & 0 & 0 & \cdots & 1
\end{array}\right]
$

Identity matrices are always square. The $ N\times N$ identity matrix $ \mathbf{I}$ , sometimes denoted as $ \mathbf{I}_N$ , satisfies $ \mathbf{A}\cdot \mathbf{I}_N =\mathbf{A}$ for every $ M\times N$ matrix $ \mathbf{A}$ . Similarly, $ \mathbf{I}_M\cdot \mathbf{A}=\mathbf{A}$ for every $ M\times N$ matrix $ \mathbf{A}$ .
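
In numpy terms (same assumed setup), np.eye constructs $ \mathbf{I}_N$ , and both identities can be checked directly:

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((3, 5))       # M x N with M=3, N=5

    assert np.allclose(A @ np.eye(5), A)  # A . I_N = A
    assert np.allclose(np.eye(3) @ A, A)  # I_M . A = A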

As a special case, a matrix $ \mathbf{A}^{\!\hbox{\tiny T}}$ times a vector $ \underline{x}$ produces a new vector $ \underline{y}= \mathbf{A}^{\!\hbox{\tiny T}}\underline{x}$ which consists of the inner product of every row of $ \mathbf{A}^{\!\hbox{\tiny T}}$ with $ \underline{x}$ :

$\displaystyle \mathbf{A}^{\!\hbox{\tiny T}}\underline{x}= \left[\begin{array}{c}
<\underline{a}^{\hbox{\tiny T}}_1,\underline{x}> \\
<\underline{a}^{\hbox{\tiny T}}_2,\underline{x}> \\
\vdots \\
<\underline{a}^{\hbox{\tiny T}}_M,\underline{x}>
\end{array}\right].
$

A matrix $ \mathbf{A}^{\!\hbox{\tiny T}}$ times a vector $ \underline{x}$ defines a linear transformation of $ \underline{x}$ . In fact, every linear function of a vector $ \underline{x}$ can be expressed as a matrix multiply. In particular, every linear filtering operation can be expressed as a matrix multiply applied to the input signal. As a special case, every linear, time-invariant (LTI) filtering operation can be expressed as a matrix multiply in which the matrix is Toeplitz, i.e., $ \mathbf{A}^{\!\hbox{\tiny T}}[i,j] = \mathbf{A}^{\!\hbox{\tiny T}}[i-j]$ (constant along diagonals).
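
As an illustration of the Toeplitz case, the following sketch (numpy assumed; the filter and signal values are arbitrary) builds the convolution matrix of a short FIR impulse response $ h$ and verifies that multiplying it by the input signal reproduces direct convolution:

    import numpy as np

    h = np.array([1.0, 0.5, 0.25])      # FIR impulse response
    x = np.array([1.0, 2.0, 3.0, 4.0])  # input signal
    ny = len(h) + len(x) - 1            # length of the full convolution

    # Toeplitz convolution matrix: T[i, j] = h[i - j], i.e., column j
    # holds h delayed by j samples, so T is constant along its diagonals.
    T = np.zeros((ny, len(x)))
    for j in range(len(x)):
        T[j:j + len(h), j] = h

    assert np.allclose(T @ x, np.convolve(h, x))  # y = T x = h * x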

As a further special case, a row vector on the left may be multiplied by a column vector on the right to form a single inner product:

$\displaystyle \underline{y}^{\ast}\,\underline{x} = \langle \underline{x},\underline{y}\rangle.
$
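
In numpy (complex vectors, same assumptions), the conjugate-transposed row times a column reproduces the inner product, which conjugates its second argument in this book's convention:

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

    # y^* x = sum_n conj(y[n]) * x[n] = <x, y>
    assert np.allclose(np.conj(y) @ x, np.vdot(y, x))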

