Common photodetectors used in cameras, like the ones in smartphones, cannot distinguish color on their own. Instead, a mosaic of tiny color filters sits in front of the sensor so that each pixel records only red, green, or blue light; once the photo is captured, software fills in the two missing color values at every pixel. This process is called interpolation, or demosaicing, and color filtering of this kind has been used since the late 1800s, when the filters were simply dyed screens laid over the camera’s face. Modern electronic cameras have come a long way and can interpolate and compress images in seconds. But this takes a lot of computation, and the quality of the image is ultimately limited by the device’s processing power.
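To make the interpolation step concrete, here is a minimal sketch of bilinear demosaicing over the common RGGB filter mosaic. It is an illustration, not any particular camera’s pipeline; the function names and the 3x3 averaging scheme are assumptions chosen for simplicity.

```python
import numpy as np

def demosaic_bilinear(raw):
    """Fill in the two missing color values at each pixel of an RGGB
    Bayer mosaic by averaging the nearest measured neighbors."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    measured = np.zeros((h, w, 3))  # 1.0 where a channel was actually sampled

    # RGGB layout: even rows alternate R,G; odd rows alternate G,B.
    for r0, c0, ch in [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 2)]:
        rgb[r0::2, c0::2, ch] = raw[r0::2, c0::2]
        measured[r0::2, c0::2, ch] = 1.0

    def box3(img):
        # Sum over each pixel's 3x3 neighborhood (zero-padded at edges).
        p = np.pad(img, 1)
        return sum(p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))

    # Every missing sample becomes the mean of its measured neighbors.
    for ch in range(3):
        total, count = box3(rgb[:, :, ch]), box3(measured[:, :, ch])
        gap = measured[:, :, ch] == 0
        rgb[:, :, ch][gap] = total[gap] / np.maximum(count[gap], 1)
    return rgb

bayer = np.random.rand(8, 8)           # stand-in for raw sensor data
print(demosaic_bilinear(bayer).shape)  # -> (8, 8, 3)
```

Even this toy version touches every pixel several times, which hints at why full-quality demosaicing of megapixel images costs real processing power.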
But now, Pennsylvania State University researchers have developed a device that mimics the human eye. To do this, the team built a new sensor array from thin perovskite films of differing halide compositions, each tuned to absorb a different band of light. One sensor absorbs only green wavelengths, another red, and another blue, much like the cone cells of the human retina.
These sensors are stacked vertically in a deliberately asymmetric structure, so each generates a burst of electrical current whenever light passes through it, much as a solar cell does. Because of that asymmetry, the device collects photocurrent without any external bias, harvesting enough energy from the light itself to power its own operation. The result is a more accurate and efficient sensor that produces high-fidelity images without draining a power source, and the researchers believe this could eventually lead to battery-free cameras.
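To see why this design sidesteps interpolation entirely, consider a toy model of one stacked pixel: each layer contributes its own zero-bias photocurrent, so a single readout already contains red, green, and blue. The Gaussian absorption bands, peak wavelengths, and function names below are illustrative assumptions, not measurements from the Penn State device.

```python
import numpy as np

def band(wl_nm, center, width=40.0):
    """Gaussian stand-in for one perovskite layer's absorption band.
    Centers and widths here are made up for illustration."""
    return np.exp(-0.5 * ((wl_nm - center) / width) ** 2)

def read_pixel(spectrum, wl_nm):
    """Model one stacked pixel: each layer's photocurrent is taken as
    the overlap between the incoming light and that layer's absorption
    band, so one readout yields all three channels with no external
    bias and no interpolation step afterward."""
    step = wl_nm[1] - wl_nm[0]
    centers = (610.0, 540.0, 465.0)  # nominal red, green, blue peaks
    return np.array([np.sum(spectrum * band(wl_nm, c)) * step for c in centers])

# Greenish light yields a dominant middle (green) current.
wl = np.linspace(400.0, 700.0, 301)
print(read_pixel(band(wl, 545.0, width=25.0), wl))
```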
But those sensors would only get the device so far without a powerful algorithm to process the electrical signals they generate.
“The retina only receives the light and reads the electrical signal,” explained Kai Wang, assistant research professor at Penn State. “There is a very complicated neural network in our brain that processes that signal and creates an image in our brain.”
So, the team brought in researchers from electrical and computer engineering to design a neuromorphic algorithm that processes images much as the human brain does.
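The team’s algorithm itself isn’t spelled out here, but the basic building block of many neuromorphic systems is easy to sketch: a leaky integrate-and-fire neuron that turns a stream of photocurrent samples into discrete spikes, the event-like signals such networks process. The constants below are arbitrary.

```python
def lif_neuron(currents, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: accumulate input current,
    leak a fraction of the membrane potential each step, and emit
    a spike (1) whenever the potential crosses the threshold."""
    potential, spikes = 0.0, []
    for i in currents:
        potential = leak * potential + i
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady photocurrent becomes a regular spike train whose rate
# encodes brightness, roughly as retinal ganglion cells signal.
print(lif_neuron([0.3] * 20))
```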
“And we think this is really important because with many diseases, like Alzheimer's, we know some part of our brain takes responsibility, but we do not know at the material level, especially at the molecular level, what is happening,” Wang elaborated. “We could use this technology to develop some diagnostic strategy for those diseases.”
The advances made with this system could lead to progress in machine vision, artificial intelligence, and the Internet of Things, among other fields. The team is particularly excited about the possibility of integrating their system into brain-computer interfaces such as Elon Musk’s Neuralink.
Yuchen Hou, who worked on this project as part of his doctoral research and is now a process engineer at Lam Research in Oregon, explained that the device functions so similarly to the human eye that the team hopes to develop it to the point where the technology could be implanted into a damaged retina to restore vision.
“By implanting our material into the eye, we could replace the retina to receive the light and generate the electrical signal, which would transmit through the interface to re-generate that region of the brain,” Hou said.
But unraveling the complexity of the human eye will take more time.
“We have billions of photoreceptor cells, which is a huge amount of data. But our brains remove a lot of unnecessary information to focus on seeing what we want to see,” Wang said. “With our technology, we’re trying to understand this intelligence, and we’re learning how to simplify that data.”
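One concrete engineering analogue of that pruning is event-based readout, in which a pixel reports only meaningful changes and everything static is discarded. The threshold and frame size below are arbitrary, chosen purely to show the scale of the data reduction.

```python
import numpy as np

def events_between(prev_frame, next_frame, threshold=0.1):
    """Report only the pixels whose brightness changed by more than
    `threshold`, discarding everything that stayed the same -- a crude
    analogue of the eye forwarding changes rather than raw data."""
    diff = next_frame - prev_frame
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    return [(int(y), int(x), float(diff[y, x])) for y, x in zip(ys, xs)]

# A mostly static scene yields a handful of events instead of a full frame.
a = np.zeros((480, 640))
b = a.copy()
b[100, 200] = 0.5                 # one pixel brightens
print(len(events_between(a, b)))  # -> 1 event, not 307,200 pixel values
```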