Several design engineers have come to us recently requesting lenses designed for characterizing light sources. Two hot topics are measuring light sources for facial recognition and LIDAR. Both involve characterizing the angular distribution of light from an NIR (850–980 nm) laser using a conoscope and camera, but that’s where the similarities end. Both applications are intriguing, so we thought we’d take a moment to discuss facial recognition systems and LIDAR.
Measuring Facial Recognition Systems
Facial recognition presents a much more difficult measurement challenge than LIDAR. For this application, the laser light passes through a diffractive optical element to project a pattern on the user’s face. The distance from the laser to the face is small, so the beam must be spread over a large angle. This means the measurement system must be accurate over a wide range of angles, both in terms of distortion (the mapping of angles to positions on the image sensor) and in how uniformly the sensor is illuminated.
Both of these applications are well suited to measurement with conoscopes. These lenses create a map on the image sensor in which each pixel within a circle captures light emitted by the source into a small range of angles. Because a conoscope captures all of the emitted angles in a single frame, the data can be collected in a few milliseconds. This makes it possible to measure numerous light sources in a short time.
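To make the angle-to-pixel mapping concrete, here is a minimal sketch in Python assuming an idealized f-theta conoscope, in which radial distance on the sensor is proportional to emission angle. The sensor geometry, pixel pitch, and focal length below are hypothetical, not parameters of any particular lens.

```python
import math

def pixel_to_angle(px, py, cx, cy, mm_per_pixel, f_mm):
    """Convert an image-sensor pixel to an emission angle in degrees,
    assuming an ideal f-theta mapping: radius r = f * theta."""
    # Radial distance from the optical-axis center, in mm
    r_mm = math.hypot(px - cx, py - cy) * mm_per_pixel
    theta_rad = r_mm / f_mm  # f-theta: angle proportional to radius
    return math.degrees(theta_rad)

# Hypothetical sensor: center at (1000, 1000), 5 um pixel pitch,
# 10 mm conoscope focal length
angle = pixel_to_angle(1500, 1000, 1000, 1000, 0.005, 10.0)
```

A real conoscope’s calibration replaces the ideal f-theta relation with a measured distortion map, but the principle is the same: each pixel corresponds to a small, known range of emission angles.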
Other types of light sources, such as optical fibers and LEDs, can also be measured with conoscopes. The main limitations are that the lens must be able to get close to the source and the source must be no more than a few millimeters across. How close is close? It depends, of course. If you want to collect light within a 160° cone (80° half-angle), close is about 2 mm unless you want to pay for some very large lenses. On the other hand, if the full angle (side to side) of the cone is only 60° (30° half-angle), 10 mm is reasonable.
LIDAR System Measurement
Measuring the light distribution from a LIDAR laser is simple because the beam is simple. LIDAR is typically based on the time-of-flight concept: the distance to an object is determined by measuring how long a pulse of light takes to travel from the laser to the object and back. The laser beam must have low divergence so it can resolve relatively small objects up to a few hundred meters away. The only real challenge is that lasers for this application are also powerful (over 10 W), so a great deal of light must be disposed of before it reaches the camera. Because the divergence angle is small, the measurement system only needs to resolve angles of a few arc minutes, which is straightforward.
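The divergence requirement can be sanity-checked with the small-angle approximation: the beam footprint at range is roughly the range multiplied by the full divergence angle. A quick sketch, using a hypothetical 1 mrad beam as an example:

```python
def spot_size_m(range_m, full_divergence_mrad):
    """Beam footprint (diameter) at a given range, using the
    small-angle approximation: spot = range * divergence."""
    return range_m * full_divergence_mrad * 1e-3

# A hypothetical 1 mrad beam at 200 m spreads to about 0.2 m,
# small enough to resolve pedestrian-scale objects
spot = spot_size_m(200, 1.0)
```

Note that 1 mrad is about 3.4 arc minutes, which is why the measurement system only needs arc-minute-level angular resolution.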
LIDAR - Light Detection and Ranging
A LIDAR system is similar to RADAR, but it uses light waves instead of radio waves to map its surroundings. Because the wavelengths of light (on the order of 10⁻⁹ meters) are much smaller than the wavelengths of radio waves (on the order of 10³ meters), LIDAR can detect smaller objects and therefore achieve higher resolution.
LIDAR creates a 3D image of its surroundings by shooting out a pulse of light from the transmitter (usually a laser) that is scattered off an object. Then, the scattered light is detected by the receiver. The total time of flight (i.e., the time it takes for the light to travel from the transmitter to the object and back to the receiver), along with the speed of light, is then used to calculate the distance to the object.
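The calculation itself is just the round-trip time multiplied by the speed of light and halved, since the pulse covers the range twice. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s):
    """Distance to the target from a round-trip time of flight.
    The pulse travels out and back, so divide by two."""
    return C * round_trip_s / 2.0

# A round trip of 667 ns corresponds to a target roughly 100 m away
d = tof_distance_m(667e-9)
```

The timing requirement this implies is severe: resolving a few centimeters of range means timing the pulse to within a few hundred picoseconds.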
This works great for measuring the distance to an object, but how is the scene “mapped” or imaged? The LIDAR transceiver doesn’t just shoot a single beam out in one direction. It scans the area in X and Y using a beam-steering device (usually a rotating mirror). Some LIDAR systems transmit and receive over 100,000 pulses per second. Because the pulse rate is so high, the system can build a 3D map of its entire surroundings very quickly.
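Once each return carries a measured range plus the steering angles at which the pulse was fired, building the 3D map is just a coordinate conversion. A sketch assuming a simple spherical-coordinate convention (the angles and ranges below are hypothetical illustrative values):

```python
import math

def to_xyz(range_m, azimuth_deg, elevation_deg):
    """Convert one LIDAR return (range plus beam-steering angles)
    into a 3D point, using a spherical-coordinate convention."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# Each (range, azimuth, elevation) return becomes one point in the map
returns = [(10.0, 0.0, 0.0), (10.0, 90.0, 0.0), (10.0, 0.0, 90.0)]
cloud = [to_xyz(*r) for r in returns]
```

At 100,000 pulses per second, repeating this conversion per return yields a dense point cloud of the surroundings many times per second.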
LIDAR Precision Optics
Regardless of the application, the optics used in LIDAR systems must be designed and manufactured to a high degree of precision to obtain the best results. Well-designed lenses enable fast data collection and accurate 3D mapping thanks to high transmission, low distortion, and minimal other aberrations. A lens with a low f/# (often called a “fast” lens by photographers) lets more light into the system, allowing the detection of weaker signals; this makes the LIDAR system more accurate and gives it a longer range.
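The f/# effect is easy to quantify: collected light scales with the aperture area, i.e., as 1/(f/#)². A quick sketch comparing two hypothetical lenses:

```python
def relative_light(f_number, reference_f_number=2.8):
    """Light gathered relative to a reference lens. Collected light
    scales as 1/(f/#)^2, so halving the f-number gathers 4x the light."""
    return (reference_f_number / f_number) ** 2

# A hypothetical f/1.4 lens versus an f/2.8 reference lens:
# one full stop halves the f-number and quadruples the collected light
gain = relative_light(1.4)
```

Since the returned signal falls off steeply with range, that factor of four translates directly into detecting weaker echoes from more distant objects.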
The specific glass choice and optical coatings used are also very important. For the most efficient system, both should be chosen to maximize transmission at the specific wavelength(s) of the light source. Most LIDAR systems use a single-wavelength laser as the light source, and systems have been designed for wavelengths across the spectrum, from the ultraviolet (UV) to the infrared (IR). For LIDAR systems used in close proximity to people, the laser pulse power must be kept low to make the system eye safe. Another option (currently being developed for LIDAR on autonomous vehicles) is to use a 1550 nm laser, because light at this wavelength is largely absorbed within the eye before it can be focused onto the retina. This allows the use of higher-power lasers, resulting in systems with more range and accuracy as described in the previous paragraph.
In recent months, facial recognition has been deployed with increasing frequency (and with mixed success), and LIDAR is drawing scrutiny for its potential role in the development of self-driving cars. We hope these explanations give you a better understanding of the methods these technologies rely on.