Comments

  1. Moggie says

    I see Aperture Science has been busy.

    It’s hard not to anthropomorphise those robots! I wanted them to take a bow at the end.

  2. thecalmone says

    As an engineer who has worked in the automotive industry, all I could think was “Get away from those things, they can kill you”.

  3. leftwingfox says

    Yeah, pretty sure that’s the former. They programmed the robots and camera to perform the choreography, created 3D designs based on the camera position, then played back those images with a projector onto the scene as it was being performed. Robot arms allow the programming to be precise and repeatably controlled.

    The upshot is that a live audience sitting by the camera would see live exactly what the youtube viewers see. Sufficiently advanced technology, indeed.

  4. consciousness razor says

    Projection mapping, so either this was probably choreographed with a projector playing 3d images on the object, or the 3d was mapped onto the box in post.

    It said at the start that “all content was captured in camera.” Meaning not in post.

  5. consciousness razor says

    And yes, it’s worth noting they have to automate the camera too, for the 3D to be convincing. It’s part of the dance just like the two robotic arms.

  6. Duckbilled Platypus says

    The upshot is that a live audience sitting by the camera would see live exactly what the youtube viewers see. Sufficiently advanced technology, indeed.

    But sitting in any other place would bust the illusion, right? Because the animation can only be tweaked to look good from that position, what with the whole they’re-flat-surfaces-but-look-we’ve-created-the-illusion-of-depth-which-is-only-convincing-if-you-look-at-it-from-exactly-this-spot projection.

    I think it’s awesome that they are able to adjust the 3D projection convincingly based on the positions and angles of the surfaces relative to the camera, but right now it’s only going to work on camera – or with an audience of, say, a few people huddled closely together. It’s kind of like old-fashioned stereos, where you had to sit in one exact spot to hear the stereo balance correctly.

    I wonder if this could ever be taken to the next step. Without using glasses, that is.

  7. =8)-DX says

    But sitting in any other place would bust the illusion, right? Because the animation can only be tweaked to look good from that position, what with the whole they’re-flat-surfaces-but-look-we’ve-created-the-illusion-of-depth-which-is-only-convincing-if-you-look-at-it-from-exactly-this-spot projection.

    Not necessarily: a lot of the “3D” there was just isometric optical illusions, so you’d see it as 3D from most viewpoints, just with less depth. Add to that the high contrast, and the fact that people are used to seeing 3D displayed on a flat screen, and I think it would’ve been effective for a large part of the audience.

    But PZ, your main question should have been: why did they need the meat-puppet?

  8. jnorris says

    The meat-puppet was required to secure funding. The hairless apes demanded token representation. It’s a bother, but some are just sooo cute.

  9. consciousness razor says

    Because the animation can only be tweaked to look good from that position, what with the whole they’re-flat-surfaces-but-look-we’ve-created-the-illusion-of-depth-which-is-only-convincing-if-you-look-at-it-from-exactly-this-spot projection.

    Heh, the shorter term for that is just “perspective.” Just like in a painting, it would still look good if you were watching from fairly close to the side of the camera, but the illusion wouldn’t seem as realistic. Especially not when motion and such is involved, because that’s another sort of thing we detect when it’s a little “off.”

    I wonder if this could ever be taken to the next step. Without using glasses, that is.

    Well, how could you get that sort of virtual reality without glasses? Brain implants? Summoning a non-Euclidean cephalopodian horror to bend geometry to its will and deceive us all? I like art, but I can’t go for that.
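
    The perspective trick being discussed can be sketched in a few lines of code: to make a virtual 3D point look right from one viewpoint, you draw it where the line of sight from that viewpoint crosses the flat screen. A minimal sketch, with made-up coordinates and the screen assumed to be the plane z = 0:

```python
# Sketch of viewpoint-dependent projection ("perspective" as discussed
# above). The screen is assumed to be the plane z = 0; the camera and
# virtual-point coordinates are illustrative, not taken from the video.

def project_to_screen(camera, point):
    """Return the (x, y) spot on the z = 0 screen where `point` must be
    drawn so that, seen from `camera`, it lines up with the virtual 3D
    point: the intersection of the line camera -> point with the plane."""
    cx, cy, cz = camera
    px, py, pz = point
    if cz == pz:
        raise ValueError("line of sight is parallel to the screen")
    t = cz / (cz - pz)  # parameter where the ray crosses z = 0
    return (cx + t * (px - cx), cy + t * (py - cy))

# A point "inside the box" (behind the screen at z = -1) seen from a
# camera two units out (z = 2):
print(project_to_screen((0.0, 0.0, 2.0), (0.5, 0.0, -1.0)))
# A camera shifted sideways needs the point drawn somewhere else, which
# is exactly why the illusion only holds near the intended viewpoint:
print(project_to_screen((1.0, 0.0, 2.0), (0.5, 0.0, -1.0)))
```

    Run this same calculation for every pixel, every frame, with the camera position scripted in advance, and you have the essence of how a pre-programmed camera sees a convincing 3D scene on flat panels.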

  10. consciousness razor says

    The meat puppet was necessary for the story. Had to see it get eaten by the robots. Kind of the whole point.

  11. Duckbilled Platypus says

    Not necessarily: a lot of the “3D” there was just isometric optical illusions, so you’d see it as 3D from most viewpoints, just with less depth. Add to that the fact that people are used to seeing 3D displayed on a flat screen and the high contrast, and I think it would’ve been effective for a large part of the audience.

    Hmmm… I don’t know – the surfaces weren’t fixed, they were moving, and the projection seemed to be tweaked to show, say, the inside of the box as viewed from one particular point in the room. If you sat at the right edge of a large room and the projection showed you the left inner side of the box, it works – but not if you saw the right inner side while the whole surface was practically to your left.

    Maybe I’m focusing on the extremes, though, and maybe there is a large audience seating area from which the illusion holds up well. Either way, this would be excellent wallpaper for my living room.

  12. Trebuchet says

    I’m thinking there may have been not just two, but five robots: Two visible, one for the camera, and two projecting images.

    Kind of a clever trick there at the end, changing from the live human to an image of him.

  13. says

    @13

    I wonder if this could ever be taken to the next step. Without using glasses, that is.

    They can and have. When I was in tech school they had some people programming for 3D TVs that didn’t require glasses. It was a lot of work and detail, and the TVs cost like $50K for a 32-inch, but the illusion was amazing. The angle and POV shifted as you moved around the television.

    This was 3 years ago.

  14. Duckbilled Platypus says

    @consciousness razor: correct, perspective. I don’t know how I managed to tangle that up.

    Well, how could you get that sort of virtual reality without glasses? Brain implants? Summoning a non-Euclidean cephalopodian horror to bend geometry to its will and deceive us all? I like art, but I can’t go for that.

    I would answer the question if I could. I’m no expert in 3D projection or the technology involved, but I’d imagine some technological state in which we can project different images into the room based on viewing angle (if we forget about distance for now). I realize there’s an infinite number of angles from which you can look at a surface, but they could be limited to ranges of normal viewing angles, at intervals. Kind of like those ribbed images we used to have as kids (what are they called anyway) where, if you moved your head, you’d see the picture change.

    Only, well, better.
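
    For what it’s worth, those ribbed pictures are lenticular prints: several views are sliced into thin vertical strips and interleaved, and the ribbed lens sheet on top steers each strip toward a different viewing angle. A toy sketch of just the interleaving step, with made-up pixel data:

```python
# Toy sketch of lenticular interleaving: columns from several views are
# shuffled together so the lens sheet can send each view to a different
# angle. The "images" below are tiny made-up grids, purely illustrative.

def interlace(views):
    """Interleave the columns of several equally sized views:
    column 0 of view 0, column 0 of view 1, ..., column 1 of view 0, ...
    Each view is a list of rows; each row is a list of pixel values."""
    height = len(views[0])
    width = len(views[0][0])
    return [
        [views[v][y][x] for x in range(width) for v in range(len(views))]
        for y in range(height)
    ]

left = [["L0", "L1"]]            # 1x2 "image" for the left eye/angle
right = [["R0", "R1"]]           # 1x2 "image" for the right eye/angle
print(interlace([left, right]))  # [['L0', 'R0', 'L1', 'R1']]
```

    More views give more distinct angles, at the cost of horizontal resolution per view – which is why the glasses-free 3D TVs mentioned below benefit from very high-resolution panels.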

  15. Duckbilled Platypus says

    @22:

    They can and have. When I was in tech school they had some people programming for 3D TVs that didn’t require glasses. It was a lot of work and detail, and the TVs cost like $50K for a 32-inch, but the illusion was amazing. The angle and POV shifted as you moved around the television.

    Thanks for answering that. Damn. I don’t think I will ever be able to afford that kind of wallpaper.

  16. Rich Woods says

    Are they actually projecting images onto the two surfaces? I thought the surfaces were flat-screen TVs, each playing their own video sequence.

  17. consciousness razor says

    They can and have. When I was in tech school they had some people programming for 3D TVs that didn’t require glasses. It was a lot of work and detail, and the TVs cost like $50K for a 32-inch, but the illusion was amazing. The angle and POV shifted as you moved around the television.

    This was 3 years ago.

    Yeah, I’ve seen those. Impressive stuff. I guess with increased resolution (past what people could normally see), you could shove in even more angles than are possible now. But $50k is already a fuckload of money.

  18. stever says

    $50K 3 years ago implies that in another 3 years you’ll see it at Best Buy for around $3K. And in another ten years, the ad industry will be throwing solid-looking illusions in your face, with your name on them, wherever you go. We’ll probably need legislation to keep a CGI Ronald McDonald from jumping in front of your car at red lights to sell you the latest McWhatever.

  19. Duckbilled Platypus says

    @ 29

    We’ll probably need legislation to keep a CGI Ronald McDonald from jumping in front of your car at red lights to sell you the latest McWhatever.

    There will, however, be some satisfaction in running that horrible clown over. Well, through, anyway.

  20. chrislawson says

    The image on the screens moved with the camera position, so either the camera was mounted on a programmed robot or it was mounted with a computer that fed position information to the computer running the animation to adjust the 3D effects in real time. Impressive either way.

  21. consciousness razor says

    The image on the screens moved with the camera position, so either the camera was mounted on a programmed robot or it was mounted with a computer that fed position information to the computer running the animation to adjust the 3D effects in real time.

    I’m sure that’s programmed in advance, not in real time. That wouldn’t have been possible in this case. For example, the video of the guy at the end couldn’t have been done in real time (not sure where the “real” guy went, but it wasn’t in front of yet another live camera with adequate lighting), so neither could the corresponding camera movements (which were small, granted, but the point is there’s no need to make the system that much more complicated). If it were just the animations, I guess you could do a lot of the 3D rendering in real-time, but it would take a really monstrous supercomputer to pull it off. And really, when you’re already doing so much design work ahead of time, there’s not much point in waiting until the last second for the computer to do whatever it’s going to do. Of course, if it were a game or an interactive installation, that’s another story.

  22. madscientist says

    That was beautifully done. Those weren’t TV screens or anything – they were simply projection screens. I wonder how many hundreds of hours went into producing that.

  23. sirbedevere says

    Parts of that were kind of cool. But I found the “all content was captured in camera” a little disingenuous. Sort of like Photoshopping the shit out of an image, taking a photograph of the print and then claiming “all content was captured in camera”.

  24. consciousness razor says

    Parts of that were kind of cool. But I found the “all content was captured in camera” a little disingenuous.

    That’s just a way of describing how it was produced. It’s not as if they’re claiming the animation “content” itself is real live-action footage, because basically nobody will be confused about whether it’s animation. They’re telling you how the animations made their way into the video; they say it that way because it could have been done in post-production like ordinary CGI (or photoshopping), which would have been composited in over footage of blank green screens moving around. That’s a different sort of process, and it wouldn’t demonstrate the kind of technical work they’re trying to show off. They want people to know the company can do this sort of thing (which many don’t do), not this other sort of thing (which lots and lots of companies can do). Because this is about promoting the company, not fooling the kiddies into believing magic is real.

  25. leftwingfox says

    @Duckbilled Platypus: There’s certainly an optimal viewing angle, but I’m not versed enough in the physics to guess how wide that angle would be.

  26. says

    We’ll probably need legislation to keep a CGI Ronald McDonald from jumping in front of your car at red lights to sell you the latest McWhatever…

    (Files under ‘nightmare fuel’…)

    ‘Kay then… Guess I’m off to turn on every light in the house, drink a whole pot of coffee, build a sandbag bunker in the kitchen and sit inside it tightly clutching a loaded shotgun. Night all.

    (/I’m told with Ronald McDonalds, you should at least triple tap.)

  27. madtom1999 says

    As someone who plays with 3D computer simulations (and robotics), it seems they have limited themselves severely. It’s a bit like taking the combustion engine and using it to drive a horse to the field to be ploughed.

  28. keinsignal says

    Having actually seen a similar performance in person (ISAM 2.0 – here’s a pretty good video of the original: http://www.youtube.com/watch?v=lX6JcybgDFo), I can correct a misconception I’m seeing in a lot of these comments – the viewer’s perspective isn’t particularly important. The illusion is effective pretty much no matter where you’re sitting in a typical theatre – you’ll notice the camera moves in many parts of the video, and it doesn’t ruin or noticeably alter the effect. Your brain will generally interpret the scene as if you’re looking “through” the screen – just like looking at a painting, really. Any illusion of a third dimension persists so long as you’re looking at it more-or-less head-on.

    I really wish they’d turned the camera around at the end though, because I would love to see the projector rig for this. They need to be projecting pretty much perpendicular to the screen surface, I’d think, and the projection is really tightly controlled here – there’s little to no spillage off to the sides of the screens as far as I can see. Given the range and speed of motion here, I’d love to know how that was accomplished.

  29. sonofrojblake says

    I found the “all content was captured in camera” a little disingenuous.

    What they’ve done here is emphatically NOT analogous to photoshopping an image of, say, a model, then taking a picture of the photoshopped image.

    What they’ve done is the equivalent of spending about ten thousand man-hours making sure the model’s hair, skin, eyes, nose, lips, nails, clothing, accessories, shoes and surrounding environment look precisely just so… then taking a photo, and not having to do ANYTHING to that photo after the event, because all the work was done up front. And, crucially, what you see in the photo is what you would have seen had you been there.

    It’s a very specific contrast to if they’d done the CGI in post, which is the sort of thing any kid could do in their bedroom on a laptop.

  30. David Marjanović says

    Not bad… not bad at all.

    Well, it’s pretty, yes, but what is it supposed to *be*?

    A demonstration. A proof of concept.

  31. Moggie says

    madtom1999:

    As someone who plays with 3D computer simulations (and robotics), it seems they have limited themselves severely. It’s a bit like taking the combustion engine and using it to drive a horse to the field to be ploughed.

    I look forward to seeing your showreel, then.

  32. Dr Marcus Hill Ph.D. (arguing from his own authority) says

    I’m not sure how wide the effective angle of viewing would be for the portions where there were optical illusions of depth entirely contained in the flat screens, but the shot near the end where the screens show wireframes of the portions of the robots concealed behind them only works for an extremely small angle.