Way back in the 80s, I remember talking to someone about the delicacy of putting together the lens stack used to project the light that exposes circuit patterns onto silicon wafers. Those machines must seem hopelessly primitive today (pellicles were just being introduced), and yet, even then, the optics were a delicate matter.
As described to me, the lenses were carefully ground and assembled with thin sheets of paper between them – like the wax paper that keeps slices of deli cheese from sticking together. To assemble, you carefully positioned one lens over its adjoining one, and, when alignment was perfect, you gently slid the paper out. If you got it wrong, there was no return once the paper was removed: the lenses were so precisely ground that the cohesive attraction between the two lenses would make any further minute adjustments impossible.
While impressed by the Mission-Impossible-style operational delicacy, I couldn’t help wondering, “Why are all those lenses needed?” Why can’t you just grind one lens to do the job? That not being the focus of anything that I was working on, I didn’t pursue it further at the time.
I got to come back to the topic recently in a conversation with Tom Walker, group R&D director for the optical systems group (OSG) at Synopsys. This group arose as the product of Synopsys’s acquisition of Optical Research Associates (ORA) last October. Exploring this area of design was a natural follow-on to the prior look we took at LED and black silicon technology.
The optical tools they put out have wide applicability, but, in terms of system design, they include two very different ends of the component spectrum: photolithography used in the building of chips and “photon coercion” used in the building of flat panel displays. These come via two different tools that, in some regards, do the same thing in two different ways: they both track the behavior of photons as they interact with materials.
The older tool, released in 1975 (no, that’s not a misprint… and yes, that’s before some of you were born…), is Code V: it manages the design of more conventional optics. It was involved in the optical fix that turned the Hubble Space Telescope from a pile of iron pyrite into a gold mine. It’s also used in the design of consumer cameras, military imaging systems, and photolithographic systems, to name a few other areas.
Code V performs sequential and non-sequential ray tracing. This means that you model the path that a photon takes as it leaves the source and goes, well, wherever it goes. And that gets to the heart of the sequential/non-sequential distinction. With sequential ray tracing, it’s known where the photon will go: in a camera, for instance, the light from the image will go into and through the lens and impinge on the film or sensor. Non-sequential tracing involves less predictable paths, so you may have partial reflections or other behaviors where a single ray could spawn other “child” rays or perhaps even disappear through absorption as it interacts with whatever it hits.
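To make the distinction concrete, here’s a deliberately stripped-down sketch – a toy energy-bookkeeping model, not anything resembling Code V’s actual algorithms. The surface names and transmission numbers are invented. Sequential tracing visits surfaces in a known order; non-sequential tracing lets each surface spawn a reflected “child” ray and drops rays whose energy falls below a threshold (absorption):

```python
# Toy ray model: a ray is just an energy value; surfaces are visited in a
# fixed order (sequential) or can split rays into children (non-sequential).
# All surface names and transmission values below are made-up illustrations.

def trace_sequential(energy, surfaces):
    """Sequential tracing: the surface order is known in advance, and each
    surface simply transmits the ray (with some loss)."""
    path = []
    for name, transmission in surfaces:
        energy *= transmission
        path.append(name)
    return energy, path

def trace_nonsequential(energy, surfaces, min_energy=1e-3):
    """Non-sequential tracing: each surface partially reflects, spawning a
    'child' ray, and rays below min_energy are treated as absorbed."""
    results = []                      # (energy, path) for every escaping ray
    stack = [(energy, 0, [])]         # (energy, next surface index, path)
    while stack:
        e, i, path = stack.pop()
        if e < min_energy:
            continue                  # absorbed / negligible
        if i >= len(surfaces):
            results.append((e, path))         # ray made it through
            continue
        name, transmission = surfaces[i]
        # Transmitted part continues forward...
        stack.append((e * transmission, i + 1, path + [name + ":T"]))
        # ...while the reflected child heads back out of the system.
        results.append((e * (1.0 - transmission), path + [name + ":R"]))
    return results

surfaces = [("lens1", 0.96), ("lens2", 0.96), ("sensor_glass", 0.92)]
e, path = trace_sequential(1.0, surfaces)     # one ray, one known path
children = trace_nonsequential(1.0, surfaces)  # one ray in, four rays out
```

The sequential case returns a single ray and path; the non-sequential case returns four rays (three partial reflections plus the transmitted survivor), and their energies still sum to the input energy – nothing is lost, only redistributed or absorbed.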
One of the main strengths that Synopsys claims for Code V gets to the heart of my old question about the need for stacking multiple lenses. The reason is freedom – degrees of it, that is. Each lens has a number of degrees of freedom – parameters that can be tuned to control spherical and chromatic aberration, color correction, astigmatism, and numerous higher-order aberrations. You set these values for each lens, so the more lenses you have, the more degrees of freedom you have, and the better you can tune the behavior of the light.
But there is the problem of “too much of a good thing.” A modern lithographic projector has 50–100 lenses. There’s no way to manually optimize all of the properties of all of those lenses without tools. And that gets to what Code V is touted to do best: automatic optimization. It can take a set of desired metrics and optimize the various lens properties to achieve those overall metrics.
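The flavor of that optimization can be sketched in a few lines – with the strong caveat that the merit function below (a target combined lens power plus a crude cubic “aberration” penalty) is invented purely for illustration, and real lens-design merit functions are vastly richer than this. The point it demonstrates is the one above: one figure of merit, many degrees of freedom, and an optimizer grinding them down together.

```python
# Toy "automatic optimization" in the spirit of lens design. The merit
# function is a hypothetical stand-in: hit a target total power while
# keeping per-surface curvatures (and hence a fake aberration proxy) small.

def merit(curvatures, target_power=0.02):
    power = sum(curvatures)                    # thin-lens powers just add
    aberration = sum(abs(c) ** 3 for c in curvatures)  # invented penalty
    return (power - target_power) ** 2 + 10.0 * aberration

def optimize(curvatures, sweeps=2000, delta=1e-3):
    """Simple coordinate descent: nudge each parameter up or down and keep
    whichever move lowers the merit function."""
    curvatures = list(curvatures)
    best = merit(curvatures)
    for _ in range(sweeps):
        for i in range(len(curvatures)):
            for d in (+delta, -delta):
                trial = list(curvatures)
                trial[i] += d
                m = merit(trial)
                if m < best:
                    curvatures, best = trial, m
    return curvatures, best

start = [0.05, -0.02, 0.0, 0.01]   # four degrees of freedom
tuned, final = optimize(start)     # far lower merit than the start point
```

Notice that the optimizer’s best move is to spread the required power across all the surfaces, since the penalty grows steeply with any one curvature – a small echo of why stacking more lenses, each doing a little, beats grinding one heroic lens.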
It also can help with the delicate assembly process, setting tolerances and establishing which things can be tweaked after the unit is assembled.
In case this makes it sound like Code V makes sophisticated lens design accessible to novices, Tom cautions that this isn’t the case. It’s like analog design: tools are needed, but they must be accompanied by a high level of expertise.
The other tool they make is called LightTools, released in 1995, and its purpose lies in the design of illumination. It models how a light source will illuminate a space, and it can track more sophisticated behaviors like light conversion and phosphorescence in addition to simple reflections and such. My first instinct was that this refers to lighting a room – figuring out which lights to put where to get the best effect. LightTools can do that, but so can lots of other really cheap software, so that’s not really what it’s used for.
One major application of this tool is in the design of flat panel display backlights. It may look like your display is lit from behind, but, in fact, it’s lit from the sides, even though, if it’s well designed, you don’t see the sides as any brighter than the middle. The light illuminates the surface behind the display and is reflected out evenly through the display via a pattern of bumps.
It’s the non-trivial arrangement of these bumps that takes the various photons and “coerces” them into reflecting out at the right time and place. That means that photons coming out near the sources need to be delayed so that they can come out when their middle-of-the-display siblings do; the bumps set up the paths for these photons. There may be millions of these bumps. And, even with a “coherent” laser, this is a polychromatic problem: the best lasers are merely narrowband sources, not “single” band. This is not something you can figure out by hand: LightTools is used to design the bump pattern.
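A cartoon version of the problem shows why the bump pattern can’t be uniform. In this toy one-dimensional model (invented for illustration – it’s nothing like what LightTools actually computes), light enters from one edge, and each position along the guide extracts some fraction of whatever flux is still traveling past it. For the display to look uniform, positions far from the source must extract a much larger fraction, because less flux remains by the time light gets there – that is, the bump density has to grow toward the far edge:

```python
# Toy 1-D light guide: flux enters at one edge; each of n positions extracts
# a fraction of the remaining flux. We solve for the fractions that make
# every position emit the same amount.

def extraction_fractions(n_positions, flux_in=1.0):
    """Per-position extraction fractions for perfectly uniform output."""
    target = flux_in / n_positions    # what each position should emit
    fractions = []
    remaining = flux_in
    for _ in range(n_positions):
        fractions.append(target / remaining)  # bigger as flux depletes
        remaining -= target
    return fractions

def simulate(fractions, flux_in=1.0):
    """Propagate flux down the guide, recording what each position emits."""
    out = []
    remaining = flux_in
    for p in fractions:
        emitted = remaining * p
        out.append(emitted)
        remaining -= emitted
    return out

fracs = extraction_fractions(10)  # rises from 0.1 near the source to 1.0
output = simulate(fracs)          # uniform: every position emits 0.1
```

Even this cartoon shows the last position needing to extract essentially everything that’s left – and the real problem layers on two dimensions, multiple sources, angular spread, and wavelength dependence, which is where hand analysis gives out.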
You might think that this is a hard problem to solve, but that, once solved, it’s, well, solved. But the light sources keep changing, both in placement and light quality and number, and the sizes of displays keep changing. Each of these display variants needs its own bump pattern.
LightTools is also used in the packaging of LEDs, which may combine the LED light source with various reflective materials to direct the light, absorbent materials to restrict it, lenses and/or diffusers, phosphors, and other components or materials. The tool helps to determine the best design.
Other areas of application are found in the automotive, medical, and aerospace industries where light sources and lenses may be combined. Headlights would be a rather familiar application.
One of the main differences between Code V and LightTools is that, while Code V traces individual rays deterministically, LightTools uses Monte Carlo simulation to establish the behavior. It’s not simulating a precise image; it’s simulating overall illumination.
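The Monte Carlo idea itself is simple, even if real illumination models aren’t. Here’s a minimal sketch (the geometry – a point source one unit from a wall, rays launched at uniformly random angles within ±45° of the wall’s normal – is made up for the example): fire lots of random rays, histogram where they land, and the aggregate converges on the illumination pattern even though no single ray means anything on its own.

```python
import math
import random

# Minimal Monte Carlo illumination sketch: a point source 1 unit from a wall
# spanning x in [-1, 1]; rays leave at a uniformly random angle within
# +/- 45 degrees of the wall's normal. We bin the hit points to build up
# an irradiance profile.

def illuminate(n_rays, n_bins=8, seed=1):
    rng = random.Random(seed)
    bins = [0] * n_bins
    for _ in range(n_rays):
        angle = rng.uniform(-math.pi / 4, math.pi / 4)
        x = math.tan(angle)                 # hit point on the wall
        i = int((x + 1.0) / 2.0 * n_bins)   # map [-1, 1] onto a bin
        bins[min(i, n_bins - 1)] += 1
    return bins

counts = illuminate(100_000)  # center bins collect more hits than the edges
```

With enough rays, the histogram shows the wall brighter in the middle and dimmer toward the edges – the statistical answer you want for illumination, without ever computing a focused image.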
And at the bleeding edge of optical design, Synopsys’s other simulation tools – the ones that model photon generation – can interface with the optical tools. Illumination can already be handled with LightTools in this manner, and they’re now working to interface Code V with circuit simulators to evaluate image creation.
And so, as we do more and more sophisticated things with photons, the use of optical design will necessarily accompany the use of circuit design – especially since old-fashioned optics process more information than our circuits do. It will be fun to watch them play together.