> About 20 years ago, a neuroscientist named David Eagleman strapped a bunch of students into harnesses, hoisted them to the top of an imposing metal tower, and then, without warning, dropped them 150 feet. Though the students landed safely in nets, the experience was—by design—terrifying. Eagleman wanted to simulate the feeling of plummeting to one’s death.
FYI, the experiment is not as insane as the article makes it seem.
The subjects knew there would be a drop involved, and they timed others doing the drop first before estimating the elapsed time in their own drop.
This is also very noticeable in video games.
I remember the first time I played One Step From Eden, I thought I'd never be able to keep up with its frantic pace, but the more practice and understanding I had, the more the game "slowed down". Only to a point, of course: it's still a fast game, but it feels orders of magnitude slower than it did initially.
Same with e.g. the Souls games, whose bosses are often designed to be visually and audibly overwhelming when you first encounter them. But after a while, when you get better, come back to a fight later, or watch someone else play, you'll see it very differently, to the point where a lot of moves feel painfully slow and/or clearly telegraphed.
Doesn't make it much easier though, as the window for hitting that dodge button is still narrow.
I fly fighter jets in a simulator called DCS. When you get task saturated you can feel time speed up and then slow down. Hearing the RWR scream "missile missile missile" at you slows the seconds to a crawl as you yank the stick and pray you turned in time to outrun it. Then time speeds up to a frantic rush when you're trying to operate the radar and not hit a mountain at the same time.
Human time sense is just so weird when you pay attention to it a little.
Interesting. I've been playing squash more often recently and I'm improving. One of my observations, or mental notes, is that the ball is slow: I have all the time in the world to look at it, see where it's going, and decide on the best moment to hit it back. I thought this observation was the result of being more calm and focused, but maybe it's my brain getting faster at this precise task.
The other day my friend bounced a ball off the ceiling and we tried to catch it. It's surprisingly frustrating; you don't expect the speed from the rebound plus gravitational acceleration.
> When you get better at juggling, objects really start falling down in slow motion
One tangential optical effect I only recently noticed: if I shift my eyes quickly to a spinning ceiling fan, there is a moment where the fan blades appear effectively stationary, and then they transition to the blur that one normally sees.
A few hasty, disconnected thoughts about slow-motion:
1) Back in the day, you'd use slowmo if you wanted to make something look bigger and more impressive, like scale model work or making a human-sized person look like a giant[0]. Maybe people just figured out the same effect works at 1:1 scale. Or maybe it started working at 1:1 scale after people got used to it being associated with big and impressive things.
2) It's just become a lot easier and cheaper, in the same sort of way that shallow depth of field was everywhere after large-sensor consumer video cameras started appearing (notably the Canon 5D Mark II). You don't even have to remember to overcrank the camera; you can fake it in post with Twixtor or its descendants.
3) Not sure what the state of play is now, but for a while higher frame rates were one of the main things distinguishing "cinema" cameras. E.g. maybe you could shoot at 180fps, but only with an extra crop factor or with certain codecs. Maybe that focused filmmakers' minds on it a bit.
4) I don't think you ever see step printing [1] anymore (which is when you repeat frames, instead of overcranking or interpolating them). Maybe it's due a comeback.
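Step printing is simple enough to sketch. As a toy illustration (placeholder strings standing in for frames, not a real video pipeline), slowing a clip to half speed just means repeating each frame:

```python
# Toy illustration of step printing: instead of overcranking the camera
# or interpolating new in-between frames, each existing frame is simply
# repeated, stretching the clip (here to half speed).
def step_print(frames, repeat=2):
    """Repeat every frame `repeat` times."""
    return [frame for frame in frames for _ in range(repeat)]

clip = ["f1", "f2", "f3"]            # placeholder frames
slowed = step_print(clip, repeat=2)
print(slowed)  # ['f1', 'f1', 'f2', 'f2', 'f3', 'f3']
```

The repeated frames are what give step printing its characteristic stutter, as opposed to the smooth look of overcranked footage.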
I didn't know it was called step printing. Thanks. Remember it from 1983 or whenever R.E.M. did the Radio Free Europe video (just the live performance bits): https://youtu.be/Ac0oaXhz1u8?t=47
I watched Pixar’s “Elio” with the kids yesterday and I could swear there was a step printing effect in a montage.
So I don’t think it’s quite that uncommon. For editors it serves a useful purpose as an effect that feels perceptually different than regular slow motion and adds variety to cuts.
Animations are, or at least were, often animated on twos[1], leading to 12 frames per second.
This was used to good effect in Spider-Man: Into The Spider-Verse where they mixed animating on ones (24 fps) and twos[2], making one character appear more skilled than the other for example.
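The ones/twos mix can be sketched as a toy model (`pose_index` is an illustrative helper, not from any real pipeline): on a shared 24 fps timeline, a character on ones gets a new drawing every frame, while one on twos holds each drawing for two frames.

```python
# Toy model of mixing animation cadences in one 24 fps scene: a
# character "on ones" gets a new drawing every frame, while one
# "on twos" holds each drawing for two frames (12 drawings/second).
def pose_index(frame, hold):
    """Which drawing is shown on a given timeline frame."""
    return frame // hold

frames = range(8)
on_ones = [pose_index(f, 1) for f in frames]
on_twos = [pose_index(f, 2) for f in frames]
print(on_ones)  # [0, 1, 2, 3, 4, 5, 6, 7]
print(on_twos)  # [0, 0, 1, 1, 2, 2, 3, 3]
```

Both characters share the same timeline, so one visibly updates twice as often as the other, which reads as smoother, more confident movement.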
A lot of cel animation in anime is timed at much coarser rates than just "on twos", too. Cel animation was slow, so studios did anything they could to reduce the number of in-betweens. Most common is to animate in layers, so the most difficult thing to animate, usually the character, stays static while background elements carry the motion.
In the opposite direction, I really like timelapse footage as well. While slow motion shows a lot of detail by showing more information than normally seen causing events to take longer to view, timelapse shows less information revealing processes too slow for us to see in less time.
It's great when the footage was shot with an appropriate shutter angle. And terrible when you become familiar with interpolation artifacts from artificially generating frames, because then you will start to notice it everywhere, kind of like bad kerning.
I don't understand this comment. Of course the person mastering/editing the movie will have to know what they are doing. They need to ensure it's done properly. AI image generation is just a technique in the toolbox to achieve that.
That's not always practical, and sometimes even impossible for extreme slow motion shots. You aren't going to get cinema level quality out of a 5000FPS camera. For the same reason it's not a solution to say "just make every effect practical" instead of CGI.
Really high quality AI-based offline interpolation methods do exist already, though the usual caveat still applies: the larger the motion, the worse the result. Whether the quality is passable is a judgment that needs to be made in any case.
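For context on why large motion is the hard case: the crudest form of interpolation is a plain cross-fade between neighbouring frames. A toy 1-D sketch (real interpolators estimate motion instead of blending) shows where the ghosting artifacts come from:

```python
# Naive frame interpolation as a plain cross-fade: the in-between frame
# is a weighted average of its neighbours. Frames here are toy 1-D pixel
# rows; real tools estimate motion vectors instead of blending, which is
# exactly why large motion is hard for them too.
def blend(frame_a, frame_b, t=0.5):
    """Linear blend of two frames at interpolation position t."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

a = [0.0, 0.0, 1.0, 0.0]   # a bright pixel...
b = [0.0, 0.0, 0.0, 1.0]   # ...that moved one position to the right
mid = blend(a, b)
print(mid)  # [0.0, 0.0, 0.5, 0.5]: a ghosted double image, not motion
```

The blend produces two half-bright pixels instead of one pixel halfway along its path; the bigger the displacement, the more obvious the ghost.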
It already is; modern video games and graphics hardware have a trick where they render a frame at a lower resolution, then use AI to upscale it to full screen. Apparently AI upscaling is faster than rendering at higher resolution.
But understandable, as The Matrix is, generally speaking, better known than John Woo's films. I mean, you named the director instead of one of his films, which implies the director is better known than his films.
The Matrix also did an interesting take on freeze frame animation with the 360 degree simultaneous camera capture thing. They'd use CGI for that nowadays. Actually they used a lot more CGI in fight scenes in the sequels already. Which I think is a shame.
Slow motion is a workaround for the bad motion quality that results from the extremely low frame rate of 24 fps. If we standardized on something less bad, e.g. 120 fps, it wouldn't be necessary. With modern digital cameras and efficient LED lighting this won't be unreasonably expensive.
The exact number “24” may be a standard for historical reasons, but a frame rate AROUND 24 is not an arbitrary standard. Higher frame rates require more light so that the image is properly exposed for the 1/(2x frame rate) of a second (assuming a 180-degree shutter angle) that the shutter is open. Doubling the frame rate requires doubling the amount of incident light, so going to 120 fps from 24 requires ~5x more light for a given ISO rating and aperture.
If you think about how light falls off in proportion to the square of its distance from the source—and that generally actors don’t stand in one place, but move through large spaces where they must appear to be lit evenly—you start to see that this is not just a question of “efficient LED lighting.” Shooting at high frame rates requires an enormous amount of light that cannot easily (read: cheaply, quickly, without higher expenses) be brought to bear in a normal production outside of controlled studio conditions.
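The arithmetic above can be put in a back-of-the-envelope helper (assuming the 180-degree shutter mentioned):

```python
# Back-of-the-envelope exposure arithmetic. With a 180-degree shutter,
# the shutter is open for half of each frame interval, i.e.
# 1 / (2 * fps) seconds, so the required light at a fixed ISO and
# aperture scales linearly with frame rate.
def shutter_time(fps, shutter_angle=180):
    """Seconds of exposure per frame for a given shutter angle."""
    return (shutter_angle / 360) / fps

def light_factor(base_fps, target_fps):
    """How much more light target_fps needs than base_fps."""
    return shutter_time(base_fps) / shutter_time(target_fps)

print(shutter_time(24))        # 1/48 s, the classic cinema exposure
print(light_factor(24, 120))   # 5.0: 120 fps needs ~5x the light
```

The factor is just the ratio of frame rates, since shutter angle cancels out; going from 24 to 120 fps costs a shade over two stops of light.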
High-CRI LED lighting is about ten times as efficient as the old film-making standard of incandescent lighting. Inverse square law affects incandescent lighting in exactly the same way, but they still managed to shoot indoor scenes at 24fps using it. Switching incandescent lighting for LED, keeping light locations and power use the same, will make every point about 10 times brighter than it was before. Therefore LED lighting is enough to shoot at about 240fps.
I’m guessing you haven’t worked in the film business, as ‘more efficient’ is not the same as ‘brighter’ in real-world conditions on a film set. LED lighting brings efficiencies in power use, weight, and heat, but nowhere near the quoted figure in terms of raw output, due to optical loss, color control, diffusion, and other factors.
Unless you are James Cameron shooting Avatar III on a soundstage with (close to) a blank-cheque from the studio, you are still limited in terms of space (the constraints of the location given the size of the light and its supporting stand), time (the time to set up and adjust each light properly, including last-minute adjustments), labor (someone’s got to plug all that in, run the cables, etc.), and cost/availability (you don’t always get the lights you want for a given budget).
Beyond that, you’re also considering aperture and ISO from a creative standpoint; maybe you don’t want to shoot wide open for reasons of image control, and so you may spend your lumen budget on ensuring that a particular scene can be exposed at, say, f/5.6 at ISO 100. Or you may want to spend your lumens on lens filtration, which produces a specific effect but further cuts down the incident light.
In short, no, you do not have 10x light available to spend on frame rate, and for any marginal gains in raw output, most cinematographers are thinking about what creative choices it opens up for the film; I would never burn additional lumens to shoot at 120fps just for the sake of A/V fanboys on the internet, unless the scene requires slow-motion or high-speed capture for postproduction reasons. Technical choices in this industry should always be motivated by the need to solve creative problems effectively, quickly, and within budget.
I'm not sure about that. In the few examples of high-frame-rate movies I've seen, it made the fact that they were shot on a set, with actors acting, very obvious. It felt like watching a theater play. The micromovements, now visible, gave it all away.
Obviously you can make 60+ FPS cinematography work well, games do it all the time. But whether that's practical in live action, I'm not sure. I certainly haven't seen an example that didn't make me cringe yet. Even in non-cinema settings, such as on YouTube, the presentation style usually needs adjusting a bit.
Speaking of, I do wish that all the cinematography-focused content creators on YouTube stopped using 24 and 30 FPS out of vanity though at least... Though it would help if YouTube rolled out support for HFR (120+ fps) as well, so that those who include 24p movie snippets don't need to compromise.
I don't understand it at all. As a child, I once visited Futuroscope[0] in France, a kind of cinema theme park. It has many different projection systems, e.g. 3D, dome screens, ultra-wide screens. The one that impressed me most was Showscan[1]: analog film running at 60fps. I liked it so much I watched both Showscan movies they were showing twice. 60fps is by no means a high frame rate, but even that was an enormous improvement over 24fps.
I'm not aware of any movies shot in non-interleaved 120fps (AFAIK all the movies advertised as "120fps" are 2*60fps stereoscopic with the frames interleaved between the eyes). Considering how much better games look in high frame rate compared to 60fps I'd love to see a non-interleaved 120fps movie.
Seems like you can get it in 120 fps 2D, from Wikipedia:
> To accommodate the film's wide release, various additional versions of the film were created.[3] They include 120 fps in 2D and 60 fps in 3D as well as today's current standard of 24 fps. The film also received a Dolby Cinema release, with two high dynamic range versions that can accommodate 2D and 3D, with up to 120 fps in 2K resolution.
I thought that was 2 * 60fps stereoscopic (which technically is 120fps so there's no false advertising, just not in the same way as Showscan is 60fps). I'm also not aware of any way to watch it in 2 * 60fps if you missed the original showings.
Unfortunately for those of you who want to try the drop for yourselves, it closed down in 2021.
When you get better at juggling, objects really start falling down in slow motion (e.g. a glass from a cupboard).
I guess my brain stores trajectories in cache instead of having to compute them and I get higher fps than I used to.
The fan effect, I believe, is similar to the stopped-clock illusion and is a type of saccadic masking. (1)
Related to the optokinetic response. (2)
I’m sure someone with much more knowledge than I could better clarify however.
(1) https://en.wikipedia.org/wiki/Saccadic_masking
(2) https://en.wikipedia.org/wiki/Optokinetic_response
And now, when there's an accidental falling object, often my hand just moves to the exact correct position to catch it.
[0] https://www.youtube.com/watch?v=2XWiV1J4zKc
[1] https://www.youtube.com/watch?v=ju65gr9sUjk
[1]: https://businessofanimation.com/why-animation-studios-are-an...
[2]: https://youtu.be/jEXUG_vN540?t=150
It is a very cool tool, but not a silver bullet, just for context.
[0] https://en.wikipedia.org/wiki/Futuroscope
[1] https://en.wikipedia.org/wiki/Showscan
I think Billy Lynn's Long Halftime Walk? Although I'm not sure if you can actually watch it at 120fps.
https://en.wikipedia.org/wiki/Billy_Lynn%27s_Long_Halftime_W...