Light is the fastest thing in the universe, so trying to catch it on the move is necessarily something of a challenge. We have had some success, but a new rig built by Caltech researchers pulls down a staggering 10 trillion frames per second, which means it can capture light as it travels along, and the team plans to make it a hundred times faster.

Understanding how light moves is fundamental to many fields, so it is not just idle curiosity driving the efforts of Jinyang Liang and his colleagues, not that there would be anything wrong with that. There are potential applications in physics, engineering and medicine that depend on the behavior of light at scales so small, and over spans so short, that they sit at the very limit of what can be measured.
You may have heard of billion- and trillion-FPS cameras in the past, but those were likely "streak cameras" that do a bit of cheating to achieve those numbers.
If a pulse of light can be replicated perfectly, you could send one every millisecond but offset your camera's capture time by a much smaller fraction, like a handful of femtoseconds (a trillion times shorter). You would catch one pulse when it was here, the next when it was a little farther along, the next when it was farther still, and so on. The end result is a movie that is in many ways indistinguishable from one in which you had captured that first pulse at high speed.
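The offset-sampling trick described above can be sketched in a few lines. This is a toy simulation, not the researchers' pipeline: the Gaussian pulse shape, its 50 fs center and the 5 fs step are all invented for illustration, but the mechanism of taking one sample per repetition at a slightly later offset is the one the article describes.

```python
import math

def pulse_intensity(t_fs):
    """Hypothetical Gaussian pulse: centered at 50 fs, ~10 fs wide."""
    return math.exp(-((t_fs - 50.0) / 10.0) ** 2)

def sample_movie(num_frames, step_fs):
    """One sample per pulse repetition, each offset step_fs femtoseconds
    later than the previous one; stitched together they form the movie."""
    return [pulse_intensity(i * step_fs) for i in range(num_frames)]

movie = sample_movie(num_frames=21, step_fs=5.0)

# The brightest frame is the one whose offset lands on the pulse peak.
peak_frame = max(range(len(movie)), key=lambda i: movie[i])
print(peak_frame)  # frame 10, i.e. offset 10 * 5 fs = 50 fs, the pulse center
```

The key assumption, and the one the article goes on to attack, is that `pulse_intensity` is identical on every repetition; if the first pulse changes the scene, this reconstruction falls apart.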
This is highly effective, but you cannot always count on being able to produce a pulse of light a million times in exactly the same way. Perhaps you need to see what happens when it passes through a carefully engineered, laser-etched lens that will be altered by the first pulse that strikes it. In cases like that, you must capture that first pulse in real time, which means taking images not just with femtosecond precision, but only femtoseconds apart.
That's what the T-CUP method does. It combines a streak camera with a second, static camera and a data collection method used in tomography.
"We knew that by using only a femtosecond streak camera, the image quality would be limited," explains Lihong Wang, co-author of the study. "So to improve this, we added another camera that acquires a static image. Combined with the image acquired by the femtosecond streak camera, we can use what is called a Radon transformation to obtain high-quality images while recording ten trillion frames per second." That clears things right up!
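The Radon transformation Wang mentions is the mathematical backbone of tomography: each projection sums the scene along one direction, and combining projections from several angles lets you recover the 2-D scene. Here is a minimal NumPy sketch of that idea on toy data (two perpendicular projections and a crude back-projection, nothing like the actual T-CUP reconstruction):

```python
import numpy as np

# A tiny scene with a single bright point, a toy stand-in for the pulse.
scene = np.zeros((4, 4))
scene[1, 2] = 1.0

# Two projections of the scene, as in tomography: sums along each axis.
proj_rows = scene.sum(axis=1)  # projection along rows (0 degrees)
proj_cols = scene.sum(axis=0)  # projection along columns (90 degrees)

# Crude back-projection: smear each projection back across the grid and
# combine; the bright point reappears where the projections agree.
backproj = np.outer(proj_rows, proj_cols)
r, c = np.unravel_index(backproj.argmax(), backproj.shape)
print(int(r), int(c))  # 1 2, the original location of the bright point
```

Real tomographic reconstruction uses many angles and filtered back-projection, but the principle of recovering a scene from its summed projections is the same.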
At any rate, the method allows images (well, technically spatiotemporal datacubes) to be captured just 100 femtoseconds apart. That's ten trillion per second, or it would be if they wanted to run it for that long, but there is no storage medium fast enough to write ten trillion datacubes per second to. So for now they can only keep it running for a handful of frames in a row: 25 during the experiment you see visualized here.
Those 25 frames show a femtosecond-long laser pulse passing through a beam splitter. Note how, at this scale, even the time it takes the light to pass through the lens itself is nontrivial. You have to take this stuff into account!
This level of real-time precision is unmatched, but the team isn't done yet.
"We already see possibilities for increasing the speed to up to one quadrillion (10^15) frames per second!" said Liang in the press release. Capturing the behavior of light at this scale and with this level of fidelity is leagues beyond what we could do just a few years ago, and it may open up entirely new fields or lines of research in physics and exotic materials.
Liang et al.'s paper appeared today in the journal Light.