We can add a *smoothness objective* by adding the $L_\infty$-norm of the derivative of $h$ to the objective function.

$$
h^\ast = \arg\min_h \left\{\, \|A h\|_\infty + \lambda \,\|D h\|_\infty \,\right\}
\qquad\text{(4.79)}
$$

where $Ah$ denotes the side-lobe (stopband) response of $h$ sampled on a frequency grid, $D$ is the first-difference matrix (so that $Dh =\,$`diff(h)`), and $\lambda\ge 0$ weights smoothness against side-lobe level.

The $L_\infty$-norm only cares about the *maximum derivative*.
A large $\lambda$ means we put more weight on smoothness than on the
side-lobe level.
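As a small numerical illustration (the Hann/rectangular comparison is an assumption of this sketch, not an example from the text), the Chebyshev norm of `diff(h)` penalizes only the single largest coefficient step, so a tapered window scores far better than a rectangular window of the same length:

```python
import numpy as np

N = 16
rect = np.ones(N)                                               # rectangular window
hann = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / (N - 1))   # Hann window

def cheb_diff(h):
    """Chebyshev (L-infinity) norm of the first difference of h.

    Zero-pad one sample on each side so the window's turn-on and
    turn-off steps are counted as derivatives too.
    """
    hp = np.concatenate([[0.0], h, [0.0]])
    return np.max(np.abs(np.diff(hp)))

print(cheb_diff(rect))  # 1.0: the edge jumps dominate
print(cheb_diff(hann))  # ~0.21: the largest interior step is much smaller
```

Only the worst single step matters to this norm, which is exactly why it rewards windows whose coefficients change gradually everywhere.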

This can be formulated as an LP by adding one optimization variable which bounds all of the derivatives.

$$
\begin{aligned}
\min_{h,\,t,\,s}\quad & t + \lambda\, s\\
\text{subject to}\quad & -t \le (A h)_k \le t \quad \text{for all } k,\\
& -s \le h_{n+1} - h_n \le s \quad \text{for all } n
\end{aligned}
\qquad\text{(4.80)}
$$

In matrix form, with $x = [h^T,\, t,\, s]^T$, the constraints become

$$
\begin{bmatrix}
A & -\mathbf{1} & 0\\
-A & -\mathbf{1} & 0\\
D & 0 & -\mathbf{1}\\
-D & 0 & -\mathbf{1}
\end{bmatrix} x \le 0.
$$

The objective function becomes

$$
c^T x, \qquad c = [0,\,\dots,\,0,\,1,\,\lambda]^T.
\qquad\text{(4.81)}
$$
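This LP can be assembled and solved directly, for instance with SciPy's `linprog` (the window length, stopband grid, smoothness weight, and the unit-DC normalization below are illustrative assumptions, not the book's exact design):

```python
import numpy as np
from scipy.optimize import linprog

N = 16                                       # window length (assumed)
lam = 1.0                                    # smoothness weight lambda (assumed)
wk = np.linspace(0.3 * np.pi, np.pi, 60)     # stopband frequency grid (assumed)

# Zero-phase DTFT sampled on the grid: (A h)_k = H(w_k); the desired
# stopband response is zero, so the side-lobe level is max |A h|.
n = np.arange(N)
A = np.cos(np.outer(wk, n - (N - 1) / 2))

# First-difference matrix D so that D @ h == np.diff(h)
D = np.diff(np.eye(N), axis=0)

# Variables x = [h; t; s]: minimize t + lam * s
c = np.concatenate([np.zeros(N), [1.0, lam]])

# Stacked inequality constraints:
#   -t <= (A h)_k <= t   (every side-lobe sample bounded by t)
#   -s <= (D h)_n <= s   (every first difference bounded by s)
K = len(wk)
A_ub = np.block([
    [ A, -np.ones((K, 1)),      np.zeros((K, 1))],
    [-A, -np.ones((K, 1)),      np.zeros((K, 1))],
    [ D,  np.zeros((N - 1, 1)), -np.ones((N - 1, 1))],
    [-D,  np.zeros((N - 1, 1)), -np.ones((N - 1, 1))],
])
b_ub = np.zeros(A_ub.shape[0])

# Fix the scale with a unit-DC-gain constraint: H(0) = sum(h) = 1
A_eq = np.concatenate([np.ones(N), [0.0, 0.0]])[None, :]
b_eq = np.array([1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * (N + 2))
h = res.x[:N]
t, s = res.x[N], res.x[N + 1]
```

At the optimum, `t` equals the worst side-lobe level on the grid and `s` equals the largest coefficient step, so raising `lam` trades side-lobe rejection for a smoother window.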

The result of adding the Chebyshev norm of `diff(h)` to the
objective function to be minimized is shown in
Fig.3.39. The result of increasing $\lambda$ to 20 is
shown in Fig.3.40.


Copyright © Center for Computer Research in Music and Acoustics (CCRMA), Stanford University