If they are building a new kind of 3D display, that may explain why they need a lot of money. They can't just buy an already mass-manufactured display the way Oculus buys them from Samsung.
I wonder if, instead of the micro lens array mentioned in other links, they have a fiber with a tip acting as a lens, such that the larger scene is transmitted to each "pixel" in the raster. Each raster point would get a slightly different scene rendered to it, be displayed by the fiber tip, and thus produce the array of images necessary for multi focal viewing. I don't know enough about optics to know if this is possible, but it sure would be cool!
The fiber scanned display appears to be a strand of fiber, vibrated in one or two dimensions by a piezoelectric actuator. The "pixels" are formed by the end of the fiber strand.
This tech certainly has a long way to go. I managed to use a prototype of the light field tech at GTC back to back with the Oculus DK2. It's certainly cool research, but don't get the impression the light field prototypes are offering a VR experience anywhere near what Oculus has delivered with any of their hardware.
GREAT VIDEO. Wow, I didn't really understand the whole point of this, until about the 4 minute mark in that video.
It allows you to focus on near objects, and focus on far objects. It's not an infinite depth-of-field display... you visually get to choose where you focus!
It's like the DISPLAY version of a Lytro camera! (I wonder if you could hook one up to the other!)
Yeah, definitely - combined with reading/watching the Nvidia info it starts to make sense. The technology is incredible on many levels, but the problem is you've only got 1/5th to 1/6th of the resolution. If they've truly solved (and patented) that with some stroke of genius, they could completely win the VR wars. Also interesting to wonder whether the solution could be applicable to a light field camera.
In the Nvidia video they are using OLED screens from another HMD. It looks like it has a bunch of microlenses on top of it, maybe 200 or so. So it seems like resolution would be a big issue since small form factor OLEDs only go up to 2560 x 1440 in current devices available to consumers.
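To make the resolution trade-off concrete, here's a back-of-the-envelope sketch. The panel size is from the comment above; the 5-views-per-axis figure is an assumption matching the "1/5th the resolution" estimate elsewhere in the thread, not a published Magic Leap spec:

```python
# A microlens array trades spatial resolution for angular resolution:
# each angular "view" only gets panel_pixels / views_per_axis**2 pixels.

panel_w, panel_h = 2560, 1440   # a current small-form-factor OLED
views_per_axis = 5              # hypothetical 5x5 angular views

view_w = panel_w // views_per_axis
view_h = panel_h // views_per_axis
print(view_w, view_h)           # 512 288 per view
```

So even a state-of-the-art mobile panel drops to roughly standard-definition per view, which is why resolution looks like the big issue.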
A whole lot of money for a whole lot of buzzword but no information in sight... Magic Leap's own website has even less information.
"On Oculus Rift and pretty much every other virtual and augmented reality experience, what the viewer sees is flat and floating in space at a set distance."
I was under the impression that the Oculus Rift had full stereoscopic 3d? Either I'm wrong or this article is wrong.
In a way, stereoscopic 3D is seeing something "flat and floating in space at a set distance". The focus distance is the same for both your eyes, and the same for anything you look at.
In the real world, your eyes refocus for objects at different distances. It's not just a "stereoscopic effect". The actual focus distance - what's blurry and what's not blurry - changes.
This doesn't happen with devices like the Rift or a 3D movie screen. Your eyes may have to swivel in and out to align the images, but they don't have to refocus.
I think John Carmack, on one of the Oculus panels, was pondering whether infinite focus isn't actually an improvement over having to converge all the time. From any other mouth I'd take that as trying to rationalize a flaw of the display tech that Oculus is currently using, but Carmack isn't really known to do that.
I'd like to compare regular infinite focus tech with something like light fields, and see if it improves presence. If it doesn't improve on presence or immersion significantly, then I'd rather have relaxing infinite focus I think.
When your eyes look at an object in 3D space, they not only swivel so both are pointing at it at the same time (convergence), but your eye muscles also reshape the lenses so the object is clear and not blurry (focus). Think of how a camera focuses its lens to bring an object into clear, sharp view.
So in the case of the Oculus, the lenses in your eyes remain focused on the images on the screens right in front of you, rather than refocusing depending on how far away the object is in the virtual scene. So there's a disparity there that causes your brain to go "wtf" and stops it from fully accepting what it's seeing as real.
One way to think of it: When you take a picture with a camera, different objects in the photo may be in or out of focus, depending on how far away from the camera they are.
Closer objects may be sharp and distant objects blurry, or vice versa. Or somewhere in between.
Of course you can aim the camera one direction or another to choose its field of view - which objects appear in the frame and where - but that's completely separate from which of those objects are in focus.
Focusing is one thing, aiming the camera is another.
And that doesn't change at all when you have two cameras. You can aim them both at the same thing, you can aim them off into the distance, but you still have to focus them both.
When I look at something far away, close things are out of focus. When I look at close things, distant things are out of focus. When I watch a 3D movie, I just align the two images and things are focused according to the camera that shot it.
It explains what they're talking about. Your eyes can focus on different parts of the view - it's not just an infinite depth-of-field display like we're used to. It's like the display version of a Lytro camera.
Light field displays literally show a different image to one part of your retina than another. It is not incorrect to explain that technologies like the rift are 'flat', as, while they may show two images, they are simply flat images on a display. Light field tech combines an array of images that recreates the way light works in reality.
Light field displays are to the Rift like the Rift is to a 3DTV.
The Rift and 3D TVs are fundamentally the same. Light field adds two dimensions of information (two axes that control the angle the light travels). To me it is the difference between (2D+2D) and (2D×2D), or <x,y,z> and <x,y,z,∆x,∆y,∆z>.
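A toy sketch of those extra dimensions (the sample counts are made-up illustrative numbers, not from any real device): a conventional display emits one color per pixel, I(x, y), while a light field display emits color as a function of outgoing direction too, L(x, y, u, v) in the common two-plane parameterization. A flat display is just the special case where every angle sees the same color:

```python
import numpy as np

# Hypothetical toy light field: 8x8 spatial samples, 4x4 angular samples, RGB.
spatial, angular = 8, 4

# A flat display assigns one color per pixel...
flat_image = np.random.rand(spatial, spatial, 3)

# ...which, viewed as a light field, means every angular sample (u, v)
# of a pixel carries the identical color:
L_flat = np.broadcast_to(
    flat_image[:, :, None, None, :],
    (spatial, spatial, angular, angular, 3),
)
print(L_flat.shape)  # (8, 8, 4, 4, 3)
```

Your eye's lens integrates over the angular samples it captures; for the flat case that integral is the same wherever you focus, which is exactly why conventional displays can't drive accommodation.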
No, the article is right: "set distance" means that the eye focuses at a distance that is fixed, namely the distance to the screens. It can have full stereoscopic 3D and still force the human eye to focus at a given distance. This is a big change from real life, where the eye is continuously changing focus based on the depth of the objects of interest.
Other, non-screen-based technologies, such as DLP [1], would allow the rendered depth of field to adjust based on the eye's focus, allowing the scene to be more realistic and reducing mental fatigue. I think there was a different company using something like this. [2]
Yeah, I work on VR and know of alternatives to stereoscopic displays. I still maintain that the writer is wrong and doesn't understand what they're describing:
"On Oculus Rift and pretty much every other virtual and augmented reality experience, what the viewer sees is flat and floating in space at a set distance. What Magic Leap purports to do is make you think you’re seeing a real 3-D object on top of the real world."
Good stereoscopic 3D does not give a sensation of seeing something flat and floating in space at a set distance.
The article isn't wrong, it's just ambiguously written. When you are viewing stereoscopic 3D, you are indeed looking at a surface that is flat and at a fixed distance from your eyes. This has consequences in terms of how your eyes are used to working versus how they have to work in a situation like this.
So you might have fun wondering how to build something that doesn't work like that.
You're not wrong, but I still think the article is wrong. They are probably paraphrasing a statement from Magic Leap and miscomprehending it in the process.
> So you might have fun wondering how to build something that doesn't work like that.
For head-mounted use, there are light-field glasses like those of Doug Lanman. Who knows if that is anything like what Magic Leap is actually working on. They are so absurdly secretive that all I've heard so far are contradictory rumors.
I will be very surprised if this is true. I could see Google buying out something like TechnicalIllusions, but my experience with super-secretive folks is that the lack of additional eyeballs on the technology means big gaps get discovered right when everyone thinks it should be "done."
To have realistic 3D technology you need to have 3 things:
1. Stereopsis - a different image for each eye.
2. Accommodation - the ability of the eye's lens to focus at different distances.
3. Vergence - the eyes rotate slightly inward or outward so that the image of the object always falls on the center of both retinas.
Modern 3D technology (movies, VR) provides only stereopsis, without accommodation or vergence, and the vergence-accommodation conflict can cause headaches. You also notice that if you look at a different part of the screen from where the movie camera was focused, the image is blurry.
Conventional lens-and-screen technology (like the Oculus Rift) can't solve these problems. You need something new, like light field technology or hologram technology. This is why hundreds of millions, if not several billions, are needed to develop this technology.
Minor correction: conventional 3d displays provide stereopsis AND vergence, but without accommodation (and the corresponding focal blur). Hence why there is a vergence-accommodation conflict, because one is provided without the other, while normally they are linked.
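To put numbers on that conflict, here's a small sketch. The interpupillary distance and the 2 m HMD focal distance are assumed typical values, not specs from the article:

```python
import math

ipd = 0.063            # interpupillary distance in meters (typical adult)
screen_focus_m = 2.0   # focal distance the HMD optics simulate (assumed)

def vergence_deg(distance_m):
    # Angle each eye rotates inward to fixate an object at distance_m.
    return math.degrees(math.atan((ipd / 2) / distance_m))

def accommodation_diopters(distance_m):
    # Focal demand on the eye's lens, in diopters (1/m).
    return 1.0 / distance_m

for d in (0.5, 2.0, 10.0):
    # In an HMD, vergence tracks the virtual object distance d,
    # but accommodation stays pinned at the screen's focal distance.
    conflict = accommodation_diopters(d) - accommodation_diopters(screen_focus_m)
    print(f"object at {d} m: vergence {vergence_deg(d):.2f} deg, "
          f"focal mismatch {conflict:+.2f} D")
```

A virtual object at 0.5 m demands 2.0 D of accommodation while the optics hold the eye at 0.5 D, so vergence and accommodation disagree by 1.5 D; that mismatch is the "conflict" blamed for the headaches.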
$500M sounds like a typo. That is absolutely unheard of for a stealth company. For reference, Oculus had a $75M round last December, and they had ignited an entire VR developer community. Uber had a record setting round earlier this year and raised $1.2 billion. But Uber is... well, Uber.
They already raised a $50M Series A in February; it doesn't seem that unrealistic. Remember, a typical film budget in this space easily breaks into the hundreds of millions.
I've been following this company for about a month now, and following this technology (light field displays) for a few years. I think this technology will basically replace all forms of screens eventually; it's just a question of timing. I'm annoyed I didn't hear about this company earlier, because they're based in my home town, but I just moved to California.
If the technology is as significant as it appears to be, doesn't it reach into enough billion (and trillion?) dollar industries? The investment in this kind of technology isn't just about gaming or Hollywood.
Communication is absolutely the killer app for these technologies. You lose SO much from staring at people through a screen, if you could trick your Amygdala into believing a far away person is real and in front of you it would be an astounding leap with massive benefits and implications across everything humans do.
It's about gaming and entertainment because that is by far the biggest, most visible market with the most ROI in hardware and software. The upfront investment and man power required to push a technology from early adopters to widespread adoption isn't usually trivial. Just like gaming pushed scientific research to new heights by commoditizing GPUs, VR gaming will be just a big a boon for all industries.
There are some _single_ financial firms with a trillion under management. Blackrock comes to mind. So it's not a stretch that there are industries in that range.
The top 3 companies are in the same industry, and just those 3 together have well over a trillion dollars in annual revenue. That's a counterexample to your claim that no single industry is in the trillion-dollar range.
Anyone have anything concrete on what this is, how it works, whether it's more than vapourware? On the basis of publicly available information it sounds a lot like ... nothing. Expensive nothing, for that matter.
I do believe there is something here. I know Grahame Devine from the Santa Cruz circuit and met him a couple of times. He was serious about his one-man band independent iOS development, and he seemed very happy about staying in Santa Cruz.
And then I heard he had upped and left for Florida with his family, which is a big change. He has the track record [1] to do pretty much whatever he wants, so I took his move to Florida as a big sign that Magic Leap exists (moving to Florida is no joke, and he's not a pump-and-dump kind of guy) and that whatever does exist is cool (he wants to build cool things, and there's already plenty of that in the Bay Area).
From the looks of the website this may in fact be a next-generation hallucinogenic drug :). Seriously though, the technology looks interesting but I would be careful getting too excited about this. When lots of credible people put tons of money and hype into things....we get stuff like Clinkle and Segway and Color.
Seems like the next evolutionary step in VR. But developers are already struggling to consistently hit 70 FPS for the Oculus, where they have to render two scenes. Rendering 35×35 different images per eye (albeit not of the whole scene) will increase the required computation significantly, so I guess it will take some time until this becomes feasible on consumer hardware. In their paper they mention rendering video at frame rates between 15 and 70 FPS, using an NVIDIA Quadro K5000 graphics card (which is a beast, selling for about $1,900).
Right. Because as Clinkle has shown us, stealth companies raising massive rounds at gigantic valuations without any major traction works out well for everyone involved.
They have some very serious talent on their payroll.
I noticed Yann LeCun (inventor of convolutional neural networks and head of Facebook's new AI lab) congratulated Gary Bradski on the funding.
If you're not familiar with him, here's his LinkedIn profile.
https://www.linkedin.com/in/garybradski
Disappointed Google is going into AR (ok, ok - "cinematic reality") rather than VR. AR is much more limiting than VR. With VR you can be "anywhere". With AR, you're in the same place - just with some stuff added on top. I'm sure AR will find its own killer apps, but I think VR has much more potential.
This might be some really cool technology, but $500 million rounds for pre-revenue, pre-real product, pre-publicly known companies is what happens when you're at the end of the line for a funding bubble.
This story has the best and worst of the current SV environment in one.
On one hand, it's amazing to see that the boundaries of technology are moving so fast that someone could come along a year after Oculus makes big news with something that might blow them out of the water (according to marketing hype).
On the other hand, $500M for an unlaunched startup with no users is crazy. That money could be stretched so far in so many ways and it's being dumped into a company without a dime in revenue. There is clearly no attempt to be 'lean'. It's hard to believe that they really 'need' $500M to develop this new tech, they're not flying to space and they're not pushing a medical device through FDA trials.
I guess the best thing to do is sit back and watch.
Ultra-High Resolution Scanning Fiber Display for HMDs
http://www.sbir.gov/sbirsearch/detail/415788
...Magic Leap is working to commercialize low cost, compact, high field of view, high resolution consumer wearable display systems.
A quick search for "Fiber Scanned Display" explains the technology:
http://www.hitl.washington.edu/projects/mfabfiber/