Phase Retrieval I: Introduction
This is the first in a series of posts about Phase Retrieval, referred to more specifically in this context as image-based wavefront sensing. Phase Retrieval was originally studied for the task of image reconstruction, but was later adapted to measure the optical path error of imaging systems based on measurements of their point spread functions.
For an excellent introduction, search for Phase Retrieval: Hubble and the James Webb Space Telescope by Jim Fienup. My thesis, advised by Fienup, developed a new kind of phase retrieval based on both unusual measurements and a large amount of diversity. This technique was used to focus the Enhanced Engineering Cameras aboard the Perseverance rover on Mars. To my knowledge, this is the furthest Phase Retrieval has gone from Earth.
Motivation: Why Phase Retrieval?
The most natural starting place for a discussion of phase retrieval is why we need or want it in the first place. We have many tools for measuring wavefronts in optical systems, the most common of which may be the Fizeau interferometer, along with the Shack-Hartmann wavefront sensor and other devices.
Phase Retrieval can go where other techniques cannot. If you wanted to test Hubble with a Fizeau after it was launched, you would need an auto-collimating flat of some 2.4m+ diameter. The cost of performing such a test on the ground (comparable to that of the HST itself) ultimately led to the failure to detect the error in the Hubble primary prior to launch. Such a test is not feasible on orbit, and is certainly infeasible for something like JWST, so far from Earth.
On the ground, similar problems exist; if you wish to test your optical system in a vacuum chamber, most (but not all) commercial interferometers are not vacuum-compatible. Most organizations will choose another technique before spending $xM on a special vacuum-compatible model that will see limited use. The interferometer also requires optical ground support equipment (OGSE) of similar size to the payload, which adds further cost and means unique test hardware for each system. Those are not desirable qualities.
If your system has a broad spectral bandwidth, then in this paradigm you would require several interferometers, or the mythical multispectral interferometer, to do measurements at several wavelengths. If you must probe all wavelengths simultaneously, then you may not be able to do interferometry at all, due to the lack of coherence between the wavelengths.
The Shack-Hartmann sensor requires specialized OGSE to adapt to the size and F/# of the system under test, so it too is not perfect.
Both the interferometer and the SHWFS have non-common-path auxiliary optics. When the measurement is made, special care is required to avoid mistaking the errors of the auxiliary optics for errors of the system under test.
While it has problems of its own, phase retrieval has none of these. It works in air or in vacuum, has no non-common-path optics, and can accommodate polychromatic illumination. Put another way, if you want to measure the optical path error of a system and don't know how you could do so, Phase Retrieval may be a way. The "may" here is the operative word, due to the special challenges of phase retrieval.
Why not Phase Retrieval?
As mentioned previously, Phase Retrieval has its own problems. While this is not unique (all measurement schemes produce estimates, not truth), Phase Retrieval's warts are perhaps uglier than most. The workings of a Fizeau interferometer, for example, are straightforward to explain from first principles. The same is true to an extent of the SHWFS, or even the phase contrast wavefront sensor, given some assumptions.
In contrast to those methods, Phase Retrieval is not especially well grounded in first principles. While many authors have studied it, and several have written about its fundamentals from various perspectives (Bauschke, Takajo, innumerable others), at its core phase retrieval is an iterative procedure that "happens to work." That statement is itself too harsh; phase retrieval works often and well, but it requires knowledge and skill on the part of its operator, more than the "just push a button" operation to which Fizeau interferometry can be reduced. In other words, I could give you a phase retrieval algorithm today, and you may not be able to make it produce meaningful output tomorrow.
Who uses Phase Retrieval?
Phase Retrieval is used all over the world for several tasks. In our chosen subset of image-based wavefront sensing, phase retrieval is most often applied to astronomical systems, though this is not its only utility. It can be performed on camera lenses and other systems; see, for example, the motivation for my thesis or the results of Dirksen, who has done this with extremely demanding systems. Notable uses of phase retrieval span a long lineage of NASA flagship observatories, including measurement of the Hubble error, planned usage for the fine phasing of JWST, measurement of Spitzer, and usage in both the Roman Space Telescope's Wide Field Instrument and its Coronagraph Instrument.
In the next post, we will dive back 40 years and examine the iterative transform algorithms for phase retrieval, where this all began. Each of the algorithms described there is implemented in PRAISE.