Harvard scientists have come up with a new way to create 3D images,
which could have a significant impact on areas such as medical imaging
and the 3D movie industry.
Unlike traditional 3D imaging methods, the new technique relies solely on mathematics and computation, doing away with special hardware and fancy lenses.
The technique, developed by researchers at the Harvard School of Engineering and Applied Sciences, creates 3D images from a single, stationary camera lens using a computational method called light-field moment imaging.
The research team, led by engineering professor Kenneth Crozier, set out to determine how an image would look from a different angle using only the information carried by the rays of light entering a stationary camera. In essence, Crozier and graduate student Antony Orth inferred the angle of the light arriving at every pixel instead of measuring it directly.
Their solution was to take two images of the same subject, with the camera in the same position but focused at different depths. The two images look nearly identical, yet they differ just enough for a computer to combine them into a stereo image.
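For the technically curious, the idea can be sketched in a few lines of code. The published relation behind the technique ties the change in brightness between the two focal planes to the average direction of the light at each pixel, which can be recovered by solving a Poisson equation. The sketch below is only an illustration written against that relation: the array handling, normalization, and boundary treatment are illustrative assumptions, not the researchers' actual code.

```python
import numpy as np

def moments_from_focal_pair(i1, i2, dz, eps=1e-6):
    """Estimate the mean ray angle at every pixel from two images of
    the same scene focused at depths z and z + dz (camera fixed).

    Implements the core light-field moment imaging relation
    dI/dz = -div(I * M): assume I * M is a gradient field, solve a
    Poisson equation for its potential with FFTs, then divide by I.
    """
    didz = (i2 - i1) / dz                  # axial derivative, by finite difference
    h, w = didz.shape

    # FFT-based Poisson solve: laplacian(phi) = -dI/dz.
    fy = np.fft.fftfreq(h).reshape(-1, 1)  # cycles per pixel, vertical
    fx = np.fft.fftfreq(w).reshape(1, -1)  # cycles per pixel, horizontal
    denom = -(2 * np.pi) ** 2 * (fx ** 2 + fy ** 2)
    denom[0, 0] = 1.0                      # dodge divide-by-zero at the DC term
    phi_hat = np.fft.fft2(-didz) / denom
    phi_hat[0, 0] = 0.0                    # fix the free constant: zero-mean potential
    phi = np.real(np.fft.ifft2(phi_hat))

    # Moments M = grad(phi) / I: the average ray angle at each pixel.
    gy, gx = np.gradient(phi)
    i_mid = 0.5 * (i1 + i2)                # intensity between the two planes
    return gx / (i_mid + eps), gy / (i_mid + eps)
```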
According to the researchers, the technique eliminates the need for expensive hardware and offers an accessible alternative for creating 3D images. It has clear potential across fields ranging from medical imaging to 3D displays.
For instance, light-field moment imaging could offer an accessible way to create 3D images of biological tissue: microscopes built on the method could deliver depth measurements and imaging faster and more accurately.
The Harvard discovery could also have an impact on the movie industry, eliminating the need for expensive 3D cameras and 3D glasses. As Orth explained, future 3D movies based on this computational method could be completely different from what they are today: played back on the right kind of screen, they would let audience members simply move their heads to look around inside the scene, for a significantly more immersive experience.
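Again purely as an illustration, here is one way that kind of head-motion parallax could be rendered once the per-pixel angles are known: shift each pixel along its mean ray direction in proportion to the viewing angle. The published work models the angular spread of the light with a Gaussian; this sketch reduces that to a simple pixel remap, and the `shift_scale` parameter is a made-up tuning knob rather than anything from the paper.

```python
import numpy as np

def synthesize_view(img, mx, my, u, v, shift_scale=2.0):
    """Render a grayscale image as seen from viewing angle (u, v),
    given the per-pixel moments (mx, my) recovered above.

    Nearest-neighbor remap: each pixel is sampled from a position
    displaced along its mean ray direction, scaled by the view angle.
    """
    h, w = img.shape
    ys, xs = np.indices((h, w))
    src_x = np.clip(np.round(xs + shift_scale * u * mx), 0, w - 1).astype(int)
    src_y = np.clip(np.round(ys + shift_scale * v * my), 0, h - 1).astype(int)
    return img[src_y, src_x]

# Sweeping u from -1 to 1 while keeping v fixed would mimic a viewer
# moving their head from side to side in front of the screen.
```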
What do you think of the new 3D technique? Is this computational method more practical than dedicated 3D hardware? And would the resulting 3D images match the quality?