Rendered blur is essential for ultrarealistic VR

DeepFocus, which was developed by a multidisciplinary team of researchers at FRL, has nothing to do with cinematic aesthetics or splashy visuals. In fact, the more accurate the rendered blur, the less likely the viewer is to notice it.

“Our end goal is to deliver visual experiences that are indistinguishable from reality,” says Marina Zannoli, a vision scientist at FRL who joined the DeepFocus project early on. And the key to a truly realistic experience is a combination of focused and defocused visuals. “Our eyes are like tiny cameras: When they focus on a given object, the parts of the scene that are at a different depth look blurry. Those blurry regions help our visual system make sense of the three-dimensional structure of the world, and help us decide where to focus our eyes next. While varifocal VR headsets can deliver a crisp image anywhere the viewer looks, DeepFocus allows us to render the rest of the scene just the way it looks in the real world: naturally blurry.”

One of the biggest potential benefits of realistic retinal blur is more comfortable VR experiences. “This is about all-day immersion,” says Douglas Lanman, FRL’s Director of Display Systems Research. “Whether you're playing a video game for hours or looking at a boring spreadsheet, eye strain, visual fatigue and just having a beautiful image you’re willing to spend your day with, all of that matters.”

Lanman recognized the need for rendered blur back in 2015, in the early stages of the Half Dome project (which he also leads). Even just a few months into the project, early prototypes were showing promising results for creating sharp focus within VR. Software-based defocusing, however, was proving to be a major obstacle. Our process couldn’t draw from existing techniques for rendering real-time blur in non-VR games, which have more to do with cinematography than realism, generating eye-catching cinematic effects (such as a pleasantly defocused background) geared specifically for flatscreen monitors and TVs. These fast but inaccurate methods of creating “game blur” ran counter to Half Dome’s mission, which is to faithfully reproduce the way light falls on the human retina.

After months spent exploring traditional techniques for optimizing computational displays, the results still weren’t fast enough to produce truly real-time blur that accurately matched physical reality. Those early efforts exposed the dual challenge of rendering truly realistic blur in VR, which requires combining incredibly high render speeds with the levels of image quality required by advanced head-mounted displays. And rendered blur isn’t a one-off process applied to a scene while it’s being developed or when the viewer first encounters it. Gaze-contingent blur has to deliver rapid-fire and near-instant defocusing to match essentially every eye movement, with a level of fidelity that can’t be achieved by simply dropping the resolution of objects that the wearer is no longer focusing on. Lanman had already learned that throwing more processing power at the problem wasn’t feasible.
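The depth-dependent blur described above follows the standard thin-lens model: a point away from the focus distance produces a blur circle on the retina whose size grows with both the defocus (measured in diopters) and the pupil diameter. The sketch below is purely illustrative of that optics relationship, not DeepFocus itself (which uses a learned rendering approach); the function name and the 4 mm default pupil are assumptions for the example.

```python
def defocus_blur_diameter(depth_m: float, focus_m: float,
                          pupil_mm: float = 4.0) -> float:
    """Approximate blur-circle size (arbitrary units) for a point at
    depth_m when the eye is focused at focus_m.

    Defocus in diopters is |1/depth - 1/focus|; the blur circle grows
    linearly with defocus and with the pupil (aperture) diameter.
    """
    defocus_diopters = abs(1.0 / depth_m - 1.0 / focus_m)
    return pupil_mm * defocus_diopters

# A point at the focus distance is sharp; nearer or farther points blur.
print(defocus_blur_diameter(2.0, 2.0))   # 0.0 (in focus)
print(defocus_blur_diameter(0.5, 2.0))   # 6.0 (strongly defocused)
```

Working in diopters rather than metric distance is what makes the relationship linear, which is also why nearby objects defocus much faster than distant ones as the eye refocuses.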