Optimization with David Shanno

Thursday, July 18, 2019

Dave Shanno, my colleague and friend who helped jump-start my career over 30 years ago, recently passed away. Dave was a brilliant mathematician and academic researcher who especially valued the usefulness and implementation of solutions; his perspective inspired me and many others in Operations Research. Today, machine learning is just one area that is benefiting from his work. In December 2017, I interviewed Dave for the INFORMS Oral Histories program (video and full transcript are here).

Dave described how he began his career in the 1960s by writing assembly language code to determine whether it was more economical to buy or lease IBM mainframes. He then entered graduate school and ended up working on a problem in nonlinear regression for his PhD thesis. Early in his academic career, he independently discovered the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm for conditioning quasi-Newton methods. Each of the four came to the same algorithm via a different mathematical journey, all ending in the same formula. Dave’s motivation came from his background in writing software to implement his algorithms. He relates in the interview that he spent 30 years trying to “kill” the BFGS method, only to see it come back as a useful tool in machine learning.
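For context, and in standard textbook notation rather than anything quoted from Dave, the formula the four arrived at is the BFGS update of the approximate Hessian B_k used inside a quasi-Newton iteration:

\[
B_{k+1} = B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} + \frac{y_k y_k^{\top}}{y_k^{\top} s_k},
\qquad s_k = x_{k+1} - x_k,\quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k).
\]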

Up to the mid-1980s, Dave was known for his work in unconstrained nonlinear optimization. In 1984, both of our lives changed when Karmarkar made a splash with his proof of a polynomial-time algorithm for linear programming, which turned out to be equivalent to barrier methods for nonlinear programming. That prompted Dave, a nonlinear programming person, to become interested in constrained optimization. Up to that point, Dave had always coded his own algorithms; he then met our colleague Roy Marsten at the University of Arizona, who started coding up Dave’s algorithms. Dave and Roy worked together on an implementation of a dual interior point method.
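As a rough sketch of that connection, in standard textbook notation rather than anything specific to Dave and Roy’s code, a barrier method replaces the linear program’s nonnegativity constraints with a logarithmic penalty and drives the barrier parameter toward zero:

\[
\min_x \; c^{\top}x \;\; \text{s.t.}\;\; Ax = b,\; x \ge 0
\qquad\longrightarrow\qquad
\min_x \; c^{\top}x - \mu \sum_{j} \ln x_j \;\; \text{s.t.}\;\; Ax = b,\qquad \mu \downarrow 0.
\]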

I arrived at Princeton University in 1987 and was introduced to Dave. At the time, he was working with Roy on the feasibility problem for primal-dual interior point methods. I was interested in those methods for linear programming and discovered in the spring of 1988 what is now known as the primal-dual infeasible interior point method. I did a first implementation demonstrating that the concept worked, and then Dave introduced me to Roy. We then did a more robust implementation, which led to our seminal paper, "Computational experience with a primal-dual interior point method for linear programming," and brought the three of us a lot of recognition and a few awards. One of Dave’s key contributions, in the summer of 1989, was figuring out that the search directions I had derived were equivalent to applying Newton’s method to the KKT conditions of the linear program augmented with a barrier penalty function. Only now do I see the parallel between our arriving at the same search directions for linear programming via two very different mathematical approaches and Dave’s experience of discovering the BFGS algorithm at the same time as Broyden, Fletcher, and Goldfarb.
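For readers who want the content of Dave’s observation spelled out, here is a sketch in standard primal-dual notation (textbook form, not a transcription of our paper). Writing y for the dual variables and z for the dual slacks, the KKT conditions of the barrier problem above are

\[
Ax = b,\qquad A^{\top}y + z = c,\qquad XZe = \mu e,\qquad x, z > 0,
\]

where X and Z are the diagonal matrices built from x and z, and e is the vector of ones. One Newton step on these equations from a (possibly infeasible) point yields the search directions:

\[
\begin{pmatrix} A & 0 & 0 \\ 0 & A^{\top} & I \\ Z & 0 & X \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \end{pmatrix}
=
\begin{pmatrix} b - Ax \\ c - A^{\top}y - z \\ \mu e - XZe \end{pmatrix}.
\]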

Dave and I spent many long days and nights working on that implementation, as well as on improvements to the base algorithm. At one point, Dave told me that he knew how to code and modified our Fortran code OB1 to try a new idea. It was not a pretty piece of code. At that point, Roy and I told Dave we wouldn’t let him touch the code again! As Dave notes in the interview, “Someone has once said, mathematicians are either philosophers or plumbers. And in those days, we were plumbers. You have to know the philosophy to be a good plumber. You have to know what problem you're solving, what the theory is.” This was probably one of the main things I learned from my collaboration with Dave.

I will certainly miss Dave Shanno, and I know many of my colleagues in the mathematical optimization community will miss him as well.