Lenses: from fire starters to smartphones and VR

Magnifying crystals ground into a biconvex form appear as early as the 7th century BC. Whether the people of that period used them to start fires or to see better is unclear. Yet it is famously said that Emperor Nero of Rome watched gladiatorial games through an emerald.

Needless to say, the images we get through modern lenses are a lot more realistic. So how did we get from simple magnification systems to the complex lens systems we see today? We begin with a short journey through the history of the camera and lens, and finish with the very latest in lens design for smartphone cameras and VR headsets.

Theory and practice

Philosophers and scientists from most cultures and periods have reflected on light. Our modern theories of light date from the 17th century and the work of scientists such as Johannes Kepler, Willebrord Snellius, Isaac Newton, and Christiaan Huygens. Of course, it was not without controversy. Newton and many others advanced the idea that light was a particle that moved in a straight line like a ray, while Huygens and others proposed that light behaved more like a wave. For a while, Newton's camp won.

This changed in the 19th century when Thomas Young's interference experiments revealed data that no particle theory could explain. In 1821 Fresnel succeeded in describing light not as a longitudinal wave but as a transverse one. This became the accepted theory of light, known as the Huygens-Fresnel principle, until Maxwell's electromagnetic theory emerged and ended the era of classical optics.

Meanwhile, practical eyewear was probably invented in central Italy around 1290. Glasses spread all over the world, and spectacle makers also started making telescopes. The first patent for a telescope was filed in the Netherlands in 1608, but the application was not granted because telescopes were already somewhat common at the time. These refracting telescopes were quite popular and often simple two-element systems. Reflecting telescopes, like the one Newton built in 1668, were constructed in part to prove his theories about chromatic aberration. In the end he was largely correct: a lens refracts light toward a focal point, but different wavelengths refract by different amounts. Each color therefore comes to focus at a slightly different point, which distorts the image.
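
To make the chromatic aberration problem concrete, here is a small Python sketch using the thin-lens lensmaker's equation with a simple Cauchy dispersion model. The radii and dispersion coefficients are representative values chosen for illustration, roughly in the range of a crown glass, not data from any historical lens.

def refractive_index(wavelength_um, a=1.5046, b=0.0042):
    """Cauchy dispersion model: the index rises at shorter wavelengths."""
    return a + b / wavelength_um ** 2

def focal_length_mm(wavelength_um, r1_mm=100.0, r2_mm=-100.0):
    """Thin biconvex lens: 1/f = (n - 1) * (1/R1 - 1/R2)."""
    n = refractive_index(wavelength_um)
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

for color, wl in [("blue", 0.486), ("green", 0.546), ("red", 0.656)]:
    print(f"{color}: f = {focal_length_mm(wl):.2f} mm")

Blue light comes to focus a millimeter or two closer to the lens than red, which is exactly the color fringing that pushed Newton toward mirrors.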

When film arrived on the scene, it was discovered that cameras also suffered from spherical aberration: the lens could not bring the image into focus across a wide, flat plane. Charles Chevalier created an achromatic lens that could control both chromatic and spherical aberration. However, this required a quite small front aperture (around f/16), which pushed exposures to twenty or thirty minutes.
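
The trick behind an achromat like Chevalier's can be sketched with the classic thin-lens condition: pair a low-dispersion crown element with a high-dispersion flint element so their color errors cancel. The Abbe numbers below are typical textbook values assumed for illustration, not Chevalier's actual glasses.

def achromat_powers(f_total_mm, v_crown=60.0, v_flint=36.0):
    """Split a target focal length between crown and flint elements so that
    phi_crown / V_crown + phi_flint / V_flint = 0 (thin lenses in contact)."""
    phi = 1.0 / f_total_mm
    phi_crown = phi * v_crown / (v_crown - v_flint)    # strong positive element
    phi_flint = -phi * v_flint / (v_crown - v_flint)   # weaker negative element
    return 1.0 / phi_crown, 1.0 / phi_flint

f_crown, f_flint = achromat_powers(200.0)
print(f"crown: f = {f_crown:.1f} mm, flint: f = {f_flint:.1f} mm")

The strong positive crown and the weaker negative flint still add up to the desired 200 mm of focal length, but their chromatic focal shifts largely cancel.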

While not useful for cameras, the Fresnel lens arrived around this time, in 1818, and saved hundreds if not thousands of ships. The French Lighthouse Commission had hired Fresnel to design a lens for lighthouses, and it worked out quite well. Perhaps because of this success, in 1840 the French government offered a prize to whoever could devise a lens that reduced exposure times in cameras.

Diagram of Petzval's 1841 portrait lens, one of the earliest lens designs created specifically for photography – crown glass shaded pink, flint glass shaded blue

Joseph Petzval was a mathematics professor who rose to the challenge. An Archduke lent his project eight human artillery computers for six months, and the result was a groundbreaking design. In the end he was not offered the prize because he was not French, but his lens was the best performer among that year's entries.

Petzval’s lens was one of the first four-element lens systems and one of the first lenses designed specifically for the camera rather than a recycled camera obscura or telescope part. As a result, it was a common lens design for the next century. While further adjustments were common, they were usually done through trial and error rather than going back to the mathematical underpinnings that created the lens in the first place.

The next leap forward came in 1890 with the Zeiss Protar, which used new types of glass with different refractive indices and different optical properties. Combining these glasses resulted in a lens that corrected almost all aberrations. This type of lens is known as an anastigmat, and the Protar was the first.

There is much more history here around the rise of the Japanese lens manufacturers and the fall of the German ones. But we move on to the smartphone.

The modern smartphone

A modern smartphone lens system with three elements
From US Patent US8558939B2

We briefly discussed this in our longer article on what makes up a smartphone. Modern smartphone lenses are complex because they need to capture enough light while staying very small. An excellent resource is the blog post we linked to in that article.

Many smartphones today still use a three-element lens system, heavily inspired by the Cooke triplet.

It has the advantage of being fairly easy to explain and relatively easy to manufacture. The first lens has high optical power and a low index of refraction and dispersion, since aberrations introduced here are hard to correct later in the stack. The second lens, made of a different material, compensates for the aberrations of the first and reduces the spherical aberration it produces. The third lens corrects the distortion of the first two and flattens the rays onto the image plane.
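
As a rough sketch of how three elements add up to one system, here is the ray-transfer (ABCD) matrix treatment for three thin lenses in air. The focal lengths and spacings are made-up illustrative numbers, not a real Cooke-triplet or phone-lens prescription.

import numpy as np

def thin_lens(f_mm):
    """Refraction by a thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f_mm, 1.0]])

def air_gap(d_mm):
    """Free propagation over a distance d."""
    return np.array([[1.0, d_mm], [0.0, 1.0]])

# Light meets the strong positive front element first, then the negative
# middle element, then the positive rear element; matrices multiply in
# reverse order of traversal.
system = thin_lens(9.0) @ air_gap(2.0) @ thin_lens(-6.0) @ air_gap(2.0) @ thin_lens(8.0)

effective_f = -1.0 / system[1, 0]
print(f"effective focal length of the stack: {effective_f:.2f} mm")

Changing any one element's power or spacing shifts both the effective focal length and the aberration balance, which is why each element's role is so tightly coupled to the others.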

From US Patent US20170299845A1

Then we jump abruptly to something like this. Look at the lenses: none of them are neat spherical shapes. Instead, they are strange and mysterious.

This is roughly the lens stack of an iPhone 7, though it's not clear which patent ended up in which phone. The front lens has high optical power and the second lens tries to correct for it. The last four lenses are all wavy, irregular shapes that correct for distortion and spherical aberration.

Unlike larger cameras, most of the lenses in a cell phone are made of the same material. Why? The simple answer is that they have to be. Smartphone lenses are usually plastic rather than cut glass. Contrary to what you might think, making them in plastic is more complex than making them in glass. Anyone who has worked with resin can tell you that getting defect-free, optically clear plastic is not easy. The plastics we can use for lenses come in just two main varieties, with only two refractive indices to choose from. Glass comes in a whole spectrum, doped with different materials to achieve exotic indices of refraction and Abbe numbers. In fact, some of the more exotic camera lenses contain radioactive materials such as thorium. However, plastics can form unique shapes far more easily than glass. Lapping glass into anything other than a sphere is difficult to scale and produce consistently, while plastic is molded and can take almost any shape you want.
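
To see why the narrow choice of plastics matters, here is a tiny sketch using the rule of thumb that a thin lens's blue-to-red focal shift is roughly f / V, where V is the Abbe number. The two materials stand in for an acrylic-like and a polycarbonate-like plastic; the numbers are assumed typical values, not any specific vendor's data.

materials = {
    "acrylic-like (low dispersion)":        {"n": 1.49, "abbe": 57.0},
    "polycarbonate-like (high dispersion)": {"n": 1.59, "abbe": 30.0},
}

focal_length_mm = 4.0  # plausible order of magnitude for one phone-lens element
for name, props in materials.items():
    shift_um = focal_length_mm / props["abbe"] * 1000.0
    print(f"{name}: n = {props['n']}, chromatic focal shift ~ {shift_um:.0f} um")

With only two indices and two dispersions on the menu, designers lean on those wild molded surface shapes, rather than material choice, to cancel the remaining errors.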

In addition, smartphones offer plenty of other features, such as optical image stabilization, which uses MEMS actuators to move the lens in response to movement. This, of course, requires moving one or more lenses or even the camera module itself, which presents a host of issues since each lens plays a specific role in dealing with aberrations. In the latest iPhone 12, the CMOS image sensor moves instead of the lenses. This lets the lens stack keep its carefully balanced roles while still correcting for aberrations.
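
The stabilization idea itself is simple to sketch: integrate the gyro's angular rate into a tilt angle, then shift the lens (or the sensor) sideways by roughly f times the tangent of that angle so the image stays put. Every number below, from the sample period to the fake gyro readings, is made up for illustration.

import math

focal_length_mm = 4.0           # assumed effective focal length
sample_period_s = 0.01          # assumed gyro sample period
gyro_rate_rad_s = [0.02, 0.05, 0.03, -0.01, -0.04]  # fake hand-shake samples

angle_rad = 0.0
for rate in gyro_rate_rad_s:
    angle_rad += rate * sample_period_s                        # integrate rate to angle
    shift_um = focal_length_mm * math.tan(angle_rad) * 1000.0  # compensating shift
    print(f"tilt {angle_rad * 1e3:.2f} mrad -> shift {shift_um:.2f} um")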

VR headsets

If photography fueled lens innovation in the 1800s, it was likely the cell phone that powered it in the 2000s. But there's another niche application that could shake things up in the near future: VR. Currently, VR headsets are big and bulky. They feel this way in part because so much of their weight sits far from your face, making them pull down harder. If the headset could be thinner, it would make for a more comfortable experience.

At this point, much of that volume comes from the lenses and the distances needed to focus the image so that it looks correct when the headset is on. Recently Facebook/Oculus/Meta showed some of its prototype headsets, and a few of them tried to tackle this. Depending on what the user is looking at, the headset does things like vary the focal plane and correct lens distortion in software on the fly.
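
Correcting lens distortion in software usually means pre-distorting the rendered frame with a radial polynomial so the lens's own distortion cancels it. Here is a minimal sketch of that mapping; the coefficients are invented for illustration, and their sign and size would depend on the actual lens.

def predistort(x, y, k1=-0.22, k2=0.04):
    """Map a normalized screen coordinate to its pre-distorted position
    using a radial polynomial (Brown-style model)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Pixels near the center barely move; pixels near the edge move the most.
for x in [0.0, 0.3, 0.7, 1.0]:
    print((x, 0.0), "->", predistort(x, 0.0))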

The future of lenses

Some say we can get rid of conventional lenses altogether. Several companies, such as Metalenz, build waveguides from silicon nanostructures. The advantage is that such an element can be placed directly on top of the CMOS image sensor without a complicated housing. Because systems that once needed dozens of lens elements to achieve the required accuracy and low distortion can be compressed into a single layer, normal cameras and spectrometers could shrink dramatically.

In addition, this is something VR headset makers are very interested in, as waveguides can be built into the screens, allowing a wider field of view with less weight and bulk. The future certainly holds many exciting new developments in lens design. Yet even as distortion-free lenses reach more scenarios with more control, some photographers are going back to older lenses. Sometimes it's for the nostalgia, and sometimes it's because they like the look. Perhaps if Emperor Nero looked through our various lenses, cameras, and VR headsets today, he might still prefer his emerald, optical distortions and all.
