The big advantage of basing a virtual instrument on a physical model is that the instrument's entire range of expressive variation becomes available in response to intuitive controls. Unfortunately, controlling physical models gracefully in real time can be quite difficult, especially for sustained instruments such as bowed strings, woodwinds, and most particularly the human voice. In general, we need an ``orthogonalizing'' software layer between the performer and the model, so that ``simple things are simple,'' yet everything remains possible. The Yamaha VL1 does a surprisingly good job of fencing in the parameter space so that each voice almost always sounds and stays reasonably in tune. Research is proceeding in this direction, but at present there seem to be few broadly applicable techniques in the open literature.
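As a rough illustration of such an ``orthogonalizing'' layer, the sketch below maps two intuitive performer controls (MIDI pitch and a normalized breath/bow intensity) onto raw model parameters, clamping both so the model stays in a regime where it reliably oscillates and plays in tune. The parameter names, ranges, and the waveguide-style delay-length computation are illustrative assumptions, not taken from the VL1 or any particular model.

```python
def clamp(x, lo, hi):
    """Restrict x to the closed interval [lo, hi]."""
    return max(lo, min(hi, x))

def map_controls(midi_pitch, breath, sample_rate=44100.0):
    """Map intuitive controls to hypothetical raw model parameters.

    Returns (delay_length, bow_pressure), where delay_length is a
    waveguide loop length in samples and bow_pressure is a normalized
    excitation strength.  All names and ranges are illustrative.
    """
    midi_pitch = clamp(midi_pitch, 21, 108)   # keep pitch in a playable range
    breath = clamp(breath, 0.0, 1.0)          # normalize the breath/bow control
    # Equal-tempered frequency for the (clamped) MIDI note number.
    f0 = 440.0 * 2.0 ** ((midi_pitch - 69) / 12.0)
    delay_length = sample_rate / f0           # loop length sets the pitch
    # Keep excitation inside the band where the model reliably sounds:
    # never fully off, never driven past its stable operating region.
    bow_pressure = 0.1 + 0.8 * breath
    return delay_length, bow_pressure
```

With this layer in place, out-of-range gestures are folded back into the playable region rather than silencing or destabilizing the model, which is the sense in which ``each voice almost always sounds.''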