I predict that before long there will be a variety of camera “feel” programs.
It’s going to take a year, maybe two, but pretty soon even entry-level drones will have this sort of capability. Perhaps this will usher in the era of Star Wars-like “trench run” footage instead of “floating high above it all” footage.
There’s some AI magic being thrown around, but it appears to me that the main thing pushing the envelope is stereoscopic vision – inferring depth by triangulation, the way we do with our eyes. The AI magic claims are interesting – the drone predicts human motion several seconds ahead, and navigates to where it can get a camera’s-eye view of the target’s predicted location. That’s sort of what a camera-person does, too; it looks like probabilistic models all the way down, to me.
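If you want a feel for how simple the core tricks can be, here’s a minimal sketch – my invention, not the drone maker’s code – of both ideas: depth from stereo disparity, and a constant-velocity guess at where the subject will be a couple of seconds out. The function names and numbers are made up, and real systems use calibrated optics and fancier probabilistic filters:

```python
import numpy as np

# Depth from stereo disparity: two parallel cameras with known focal
# length f (pixels) and baseline b (meters). Triangulation, nothing more.
def stereo_depth(f_px, baseline_m, disparity_px):
    """Depth in meters of a feature seen at a given pixel disparity."""
    return f_px * baseline_m / disparity_px

# "Predict where the subject will be": the crudest possible version,
# a constant-velocity extrapolation from recent position fixes.
def predict_position(positions, timestamps, lookahead_s):
    """Linear extrapolation of the target's position lookahead_s ahead.

    positions:  (N, 3) array of recent x, y, z fixes (meters)
    timestamps: (N,) array of matching times (seconds)
    """
    dt = timestamps[-1] - timestamps[0]
    velocity = (positions[-1] - positions[0]) / dt  # average velocity
    return positions[-1] + velocity * lookahead_s

# Example: a runner moving at ~3 m/s, predicted 2 seconds out.
pos = np.array([[0.0, 0.0, 0.0], [3.0, 0.5, 0.0], [6.0, 1.0, 0.0]])
t = np.array([0.0, 1.0, 2.0])
print(stereo_depth(f_px=700, baseline_m=0.12, disparity_px=14))  # 6.0 m
print(predict_position(pos, t, lookahead_s=2.0))  # ~[12.0, 2.0, 0.0]
```

The geometry is trivial; the hard part is doing it robustly, at frame rate, on a flying computer – which is why it comes down to software.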
It’s impressive – but then, we should expect impressive from a $3,000 drone. In a year, they’ll all be doing it; the next generation will include multiple cameras and then it’s just a matter of software.
I expect there will be some interesting mishaps, too – though I was listening to Adam Savage’s podcast, where they were talking about running around with one and how “we couldn’t crash it.”
Sounds like a big plus: not having to worry that someone trying to get a cool shot will fly a drone into the back of your head.
I hope they build a “ground following” mode – I want to see a rabbit’s-eye view of mountain biking.
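“Ground following” doesn’t sound exotic, either – it’s arguably just an altitude-hold loop keyed off a downward-facing rangefinder. A hedged sketch, with invented names and gains:

```python
# Hypothetical "ground following" mode: a proportional controller that
# holds a fixed height above terrain using a downward rangefinder.
def climb_rate(range_to_ground_m, target_height_m=0.3, gain=1.5,
               max_rate_mps=2.0):
    """Vertical speed command to hold target_height_m above the ground."""
    error = target_height_m - range_to_ground_m  # positive means too low
    rate = gain * error
    return max(-max_rate_mps, min(max_rate_mps, rate))

# Example: the rangefinder says we're 0.8 m up, but we want rabbit height.
print(climb_rate(0.8))  # -0.75 m/s: descend toward 0.3 m
```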