So clearly the author is sympathetic to the aura that Magic Leap is cultivating through secrecy and selective media engagements:
>When it arrives–best guess is within the next 18 months–it could usher in a new era of computing, a next-generation interface we’ll use for decades to come.
...which may be a pretty valid statement or sentiment overall, but later in the article, we get to the crux of the "potential" future of the device / platform:
>Eventually Magic Leap sees its greatest impact in business applications, especially medical imaging and retail (imagine “trying on” garments at home, seamlessly). But as with most technologies, entertainment offerings will lead the way.
I might be picking low-hanging fruit here, but seriously: one of the compelling reasons to look forward to Magic Leap is so it can have a tie-in with the Home Shopping Network or QVC or Neiman Marcus?
The more I see the little teaser images (and the occasional demo reel, like the office game), the more I'm cautiously suspicious that the technology is perfectly reasonable, but that figuring out great content for the platform will be the difference between living up to the hype and promotion or ending up closer to the storyline of the Nintendo Virtual Boy.
I'm just frumpy after years of seeing the difference between renders made for PR purposes and the actual in-game / in-environment footage you get as a player / user. To be fair, as the tech has improved that gap doesn't show up in commercials as much anymore, which I like. This particular situation and eventual rollout will prove a lot, I'm sure, one way or another..."soon-ish," as he says...
I think that is a very salient observation. To reach its full potential the unit needs both pixel occlusion (knocking out the light your eye would normally see with light you generate) and pixel registration (keeping the pixel in the same place in 3-space outside of you). All of the demos focus on occlusion (the creatures seem like they really are in your room) and skate over registration accuracy. If you were going to tell someone how to fix something, for example, registration accuracy would be critical.
Also, people are very "forgiving" if things seem to wiggle a bit in the world, but looking at the Google Tango device this weekend I got to see some of the challenges of keeping registration of the pixels high. It's frustrating if your ruler's end point is "jiggling" a bit close to the spot where you want to start a measurement.
So without good registration you get eye fatigue, like pilots used to get flying long range on bumpy flights while constantly looking at instruments. Except pilots can look outside at the clouds (which don't "bump" in space).
You could always "cheat" and use a marker. It's obviously not as powerful but if I can put a marker on any surface and create a virtual screen, it would already be huge.
This has about as much hype as Ginger (the Segway Transporter) did before it was launched: disrupting multi-billion-dollar industries in a single bound, changing the world forever at its launch.
But in reality, what it really takes to disrupt an industry is both a disruptive product and a really compelling mass-market price point. One of the reasons the Segway PT failed is that it cost $6k to buy. That severely limits the addressable market to the rich and those who really want a new form of transport.
This is the same problem Magic Leap and Microsoft's Hololens have. If the product is more than, say, around $700, it's not going to be a mass consumer product, and it's not going to disrupt entire industries. The iPhone is a good example of a product sold at about the maximum price point a user can tolerate in most countries, and even then a lot of iPhones are sold in installments to bring the upfront cost down.
I find it hard to imagine that Magic Leap can hit that entry-level price point in a version 1 product, and if they can't, there will be quite a few big tech companies gunning to catch up with their technology by the time they hit a v2, eroding their exclusivity. Just look at how Google turned Android around after the iPhone launched, going with full-screen touch just 6 months after the iPhone.
The way I recollect it, this is so far very different from Segway. In the runup to the Segway release/announcement, all the hype was coming from the Segway people, and the press was repeating it because they had nothing else to report besides potted biographies of Dean Kamen and his previous inventions. I don't remember tech journalists going in, getting to use the Segway for hours, and coming out and telling the world that it was going to change cities forever (even if they weren't allowed to explain exactly what 'Ginger' was). With Magic Leap, on the other hand, we have many people going in, using it live, and coming out saying it's going to change everything, including people who have been watching tech hype bubbles form & pop since before you or I were born, like Kevin Kelly. I think we can be pretty confident that there is some extremely impressive technology in there and that the descriptions are not totally bogus, even if we remain uncertain how successful it'll be (e.g. VR headsets are technically impressive, do deliver presence, and deliver on many promises, but have not yet set the world on fire).
Counterpoint: the classical POV is that you start with a niche and worry about crossing the chasm later (and the two require very different marketing strategies, etc.). I'd argue Tesla is a modern example of executing that overall strategy from the get-go.
Tesla is / was only a half-niche though; cars are a strong and established market. They also had the (huge) benefit that their customers got tax breaks and subsidies because their cars are emission-free, dropping their effective price.
Aside from mis-valuing Magic Leap, these examples are completely implausible.
Ask Pixar how much time goes into draping a single VR garment on a single body. It's hard work, regardless of how good your imaging tech is. The ongoing hurdles are about modeling, based on cloth properties and personal anatomy. The margins on "try on at home" absolutely don't justify modeling every single piece of clothing into a technically accurate VR solution, and if you do anything less, people will return your product when it looks nothing like the VR visualization.
Asserting that a good VR technique will justify real-world modeling is downright silly, and it leads me to ignore non-tech assessments of whether that project is worthwhile.
Very cool, and better than I knew existed. Thanks.
And yet... Check out the bodies displayed at 2:08. A good range, but nowhere near the range in effect for real people - and at least half of them look noticeably wrong to me.
The cloth displays aggressive horizontal wrinkling, in a way that makes no sense. Check out the center-most figure, whose shirt curves in below the stomach with extensive wrinkles that gravity should remove. All of the figures display confusingly broken folding on the horizontal plane - the leftmost model stretches the cloth across the chest, then displays bizarre smooth-to-wavy patterns at stomach height as you move from front to side. And as far as I can see, all of this is weak to multi-layered, stiff, or tailored cloth.
So yeah, I appreciate this, but I think it's woefully inadequate for modeling even cheap, popular clothing. That's going to take at least a few more breakthroughs as large as this one, so Magic Leap's VR abilities aren't nearly enough to justify this business model.
Recommend you look at the history of emerging technologies announced at SIGGRAPH.
It tends not to be a linear progression at all; if the attention of more than a couple of researchers is applied to any of these things, then it grows very quickly. Couple this with improvements in parallel processing power and it becomes quite viable.
(see: physically based lighting, rigid body -> full crumple simulation, fluid dynamics, Photoshop's "smart fill", or really any game tech)
Apropos of nothing: the same applies in terms of hardware - what we can do with light field cameras now was science fiction just over a decade ago, but within a couple of years of crossing a research tipping point we went from 10x10' camera arrays to <$100 hand-held commercial products.
source: graphics tech and SIGGRAPH nerd for the last 23 years
Actually, there was an extremely slow build-up of this tech, with a sudden and rapid escalation - but between 1980 and 2010 it was a very shallow incline :)
I was also disappointed by that statement. I would expect much more from the first applications. Let me try to be constructive and think of some:
+ To have a cat even if you can't feed it or clean up after it (for the elderly).
+ To find out how it is to have a baby, how to feed it, how to care for it.
+ Dating first with an avatar before going for the real deal.
+ Searching for things without having to open cabinets or boxes.
+ Colored lights, etc., will be outdated. Colors, paintings, and candles can all be virtual.
+ Augmentation including audio can give a voice to your pets, to inanimate objects, even the food on your plate. Your plant will tell you that it's thirsty.
Also, I think screens are definitely not outdated. I look forward to having a screen the size of a wall with "wormhole" functionality, so I can virtually couple it to a similar wall somewhere else. Then I would feel much closer to my grandma, for example. Or my wife to her loved ones on the other side of the ocean.
I don't want to live in a world with many of those features. I don't want my pets to talk to me, have my plants tell me they're thirsty, or go on a date with an avatar. And, as a parent of a two-year-old, trust me, no sort of application can let you find out how it is to have a baby.
You don't have to feel weird/old-fashioned. It's just a matter of taste. The solution is to not use the software features the poster above you mentioned.
I agree with that myself. It's not an endorsement of the future.
What I think however is that we have to consider what it means to interact with the "real world".
If we filter out advertisements through augmented reality, will it get us closer to the real world?
If we can remove visual clutter, noise pollution, will the people without glasses be bombarded with spam they can't filter out?
Moreover, will the world become more real by knowing things about the meat on your plate that you currently can ignore? Will we be able to afford not knowing things?
# Conversation with John Smith
## Family
* How is his daughter doing in ballet? (guaranteed, last discussed 1m3d ago)
* Was his wife's birthday last weekend? (70% certainty, Facebook feed. [Review post])
## Video games
* What is his current rank in League of Legends? (guaranteed, from your LoL friends list)
* Has he tried any new champions in League of Legends? (the answer was "yes" 1m3d ago)
## Shared occurrences
* Relate your experience with your participation in a hackathon last weekend.
  * Has he been to a hackathon? (shared interest in software development)
  * Does he have any techniques for making travel bearable? Has he ever been to upstate New York? (he travels often)
* Talk about your upcoming shared trip to Canada.
  * Has he previously visited Canada? (unknown)
  * Does he have his ski equipment ready? (the answer was "no" 1m3d ago)
  * Is he ready to have a good time? (the answer was "yes" 1m3d ago)
  * Relate an anecdote about the last time you went. (you have not already done this)
Bug report: I selected relate an anecdote but wasn't provided with one. My social facade is failing! John is beginning to suspect I am using The Glasses.
I'm pretty sure a lot of bodyguards will be using the glasses to spot people with specific attributes (face shows stress, etc.) in a crowd, and will love the Terminator version.
>but seriously, one of the compelling reasons to look forward to Magic Leap is so it can have a tie in with the Home Shopping Network or QVC or Nieman Marcus?
I personally think this would be really cool, and it would facilitate entirely new modes of interaction with people in this field, e.g. designers and sales workers. Imagine being able to connect with a clothing designer remotely and have them model you in a virtual outfit before a mirror, with a practically infinite vocabulary of color, texture, and pattern at their disposal. A two-way conversation with a design professional with a vastly accelerated prototype-feedback loop.
Maybe I'm just bored, but I'm quite excited about the interface patterns we will have to create for mixed reality. Even if there were no useful content made for it, I'm pretty sure I could spend hours building interfaces and seeing how well they work, just like I can spend hours trying to 3D print a ball or testing the limits of printing on thin air.
In any case, I can't see this not being a technological advance once we've played with it as makers for a decent number of years.
I've been hearing rumors about these guys for years and it never seems to be any more than what's in this article-- "we saw some special effects and our wallets flipped themselves out of our back pockets onto the table and began spewing money".
People so often underestimate the difficulty of productizing significant UI changes, not realizing that getting content to migrate to the new shiny will be at least as hard as getting rid of Flash on the web.
John Carmack joining Oculus was a quick reason to think: "hm, this team has people who know how to get things done and launch."
Maybe Magic Leap has the same types of engineers [does Graeme have the same reputation as hyper productive dev?], but there are reasons Carmack has this reputation [1].
And I've known and worked with people like this. People who are utterly relentless. Insanely productive. And honestly? I would always bet on them. Every single time. It's just a major unknown for me whether the Magic Leap team has anyone like that.
By the time Carmack joined, the Oculus Rift's CV1 technology and design were mostly finished. Carmack is reportedly focused on Oculus's mobile offerings, which we have yet to see, unless that work is in fact the Santa Cruz prototype, which, if rumors are to be believed, is a mobile-SoC-powered, inside-out, markerless VR headset. When I saw the SC prototype I instantly thought this must be what Carmack has been working on. Not shoving cellphones into $20 goggles and working out the kinks, like many thought mobile VR would turn out.
I think there are some practical considerations here with the "magic engineer" who can do anything quickly and, of course, the laws of diminishing or even negative returns outlined in books like The Mythical Man-Month. Even the impressive Santa Cruz is years out from being sold. Graphically it cannot match the PC-based systems it would ultimately compete with.
It seems like everyone who has gone to work at id Software with Carmack has found themselves completely overwhelmed. They're all competent and brilliant people, but they're not Carmack.
I think that having an (the) entire industry writing solutions for Magic Leap is exactly what they are envisioning. It would be a platform, i.e. Chilton would build an app for it.
Now, whether or not they are actually going to be the biggest thing in graphical interfaces since the flat screen is another question...
I'm skeptical too, but I could envision a path where this gets a lot of the work done at the tooling level. All these products will have CAD models behind them already; AutoCAD and SolidWorks, et al., could introduce tooling to pump out the necessary assets at not-terrible levels of effort, couldn't they?
One of the advisors to Magic Leap is currently the CEO of Onshape (and was also the creator of SolidWorks). There will surely be an Onshape app for Magic Leap at some point.
Back in the '90s when I was doing 3D on the web, we looked at this and the results weren't compelling. The models used for CAD/CAM aren't the same sort of thing you would want a GPU to try to render in a 3D scene. Just as one trivial example, none of them had normals, so you'd need to do a lot of model cleanup simply to get them to render as a solid object.
CAD data is getting used more often in 3d graphics (although there's still some cleanup). It's used commonly for product shots and starting to be used for games. The most recent GDC had a McLaren pulled from CAD data running real time in Unreal Engine[1]. I honestly don't think much cleanup was done because of the timeline for projects like these (I thought I heard 6 weeks start to finish focusing on custom engine development for interaction and shading).
In my experience, the problem is more in how the model is organized; having every screw or washer as a separate piece of geometry causes problems (3D tools have trouble with very large geometry counts even if the poly count isn't high). CAD data also doesn't work very well with non-hard materials like upholstery; stitching is likely missing, as well as the things that make cloth/leather look normal, like pinching or gravity pushing on things.
I'm thinking that Hololens rev. 2 will be out before Magic Leap rev. 1 is out and will be cheaper. So unless Magic Leap is an order of magnitude better than Hololens, they have failed.
Say what you will about Microsoft, at least they already have a platform on which this "new era of computing" can be based: Windows. That is to say, it makes tremendously more sense to evolve from what we have now into the "new era of computing" than to write an entire platform from scratch, which is what I assume Magic Leap is trying to do.
Magic Leap = supposedly infinite (or they don't exist at all?)
Hololens says they have 2.3 million "light points". I don't know much about light points beyond a few paragraphs, but I wonder how Magic Leap actually measures light density and how/if it compares or correlates.
1920x1080 ~= 2 million pixels. Assuming an optical scanning system, you could give them the benefit of the doubt that they can route R, G, and B through a single light point during scanning.
I presume a "light point" is some sort of shaped structure on their glass which allows the scanned light to escape to the eye when requested. And in theory, might be a place where they can do interference cancellation of incoming light to darken points. All their hype probably points to metamaterials-style shaping of light.
Remember that pixels aren't rectangles (which is why scaling images larger always looks more blocky or blurry than native res); they're point samples. So a "light point" should be equivalent to a pixel in the simple case. Whether or not they can bend light between points to make it "infinite", who knows. Any of that is always going to be limited by the resolution the GPU can crunch in time.
Diode lasers can easily amplitude modulate at 10s of MHz, which can provide more effective pixels than 4k displays. If they're still using the piezoelectric beam steering they demo'd a while back, then that will likely be their scanning speed limiter, and thus the pixel density limiter. Their most recent patents suggest they've moved on to something more advanced, though, so it's tough to say for sure.
Regardless, GPUs will likely limit the render resolution for the first one or two products. Upscaling might be possible, though.
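To put rough numbers on the scanning argument above (all figures here are illustrative assumptions, not anything Magic Leap has published):

```python
# Back-of-the-envelope: how many pixels can a pulsed-laser scanning
# fiber paint per frame, and how many fibers would a 4k image need?
# All numbers are illustrative assumptions.

def pixels_per_fiber(modulation_hz, frame_rate_hz):
    # One laser pulse paints one pixel, so the per-fiber budget is
    # modulation rate divided by frame rate.
    return modulation_hz // frame_rate_hz

FOUR_K = 3840 * 2160                            # ~8.3 million pixels

per_fiber = pixels_per_fiber(30_000_000, 60)    # 500,000 pixels/frame

# Fibers needed to composite a full 4k image at 60 Hz (ceiling division):
fibers_needed = -(-FOUR_K // per_fiber)         # 17
```

So at tens of MHz a single beam falls well short of 4k at 60 Hz; the "more effective pixels than 4k" claim presumably relies on compositing many fibers (and, per the comments below, foveation).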
I may regret this comment in the future, but Magic Leap represents the worst of tech to me. Extreme secrecy while at the same time pushing vapid press releases in the media without sharing any real information.
Either collaborate and share, or stay secret until you have something to reveal, but this is the worst combination. If there were a way to short this company I would do it.
I believe Magic Leap has a new kind of display technology, but the article is pretty bullshit-sounding with statements like this:
"Throw out your PC, your laptop and your mobile phone, because the computing power you need will be in your glasses, and they can make a display appear anywhere, at any size you like."
Yeah sure, they not only made a new display tech but integrated the computing power of desktops or laptops into glasses. I wonder how they cool the GTX 1080 cards on your head...
Magic Leap is a pair of glasses you wear on your face that displays AR imagery through a clear glass lens. There's a cable leading from the glasses to your pocket, where you have a tablet-size device doing all the heavy lifting, most likely using mobile SoCs. You pretty much can't run a desktop CPU and GPU on a pocketable device.
You don't get GTX 1080 performance, or anything remotely close to that, with a setup like this. It's mobile-level graphics. Oculus's recent showing of its mobile-SoC-based Santa Cruz prototype shows fairly poor graphics compared to PC-based VR like the Rift or Vive. I imagine the Magic Leap has fewer pixels to push, considering it isn't doing any backgrounds, but still, those very impressive marketing shots of high-resolution dragons flying around and interacting with you might not really be possible without a lot of compromises on graphics quality, FOV, framerate, etc. Worse, no one wants to talk about transparency, especially in well-lit rooms. The few hands-on Hololens reports we've gotten make this out to be a big issue, as well as FOV, and fundamentally the Magic Leap and what Hololens uses could be the same or very similar technologies.
My gut feeling is that no one has yet handled the transparency issue (and this is why MS can show off the Hololens with confidence), and because of that it makes sense for Magic Leap to keep its product secret. The tech press was not too kind to Hololens prototypes due to transparency issues, and Magic Leap doesn't need that kind of negative press as it continues to fund-raise and develop its product.
> There's a cable leading from these glasses to your pocket where you have a tablet-size device doing all the heavy lifting, most likely using mobile SoC's. Pretty sure you can't run a desktop CPU and GPU on a pocketable device.
> You don't get 1080gtx performance or anything remotely close to that with a setup like this. Its mobile-level graphics.
A.) The pocket device is the light source array that feeds the scanning fibers. A processor drives each light source linearly. Neither the light sources nor the driving processor are likely to be head-mounted for quite some time, but I don't see that being an issue given the dramatic improvement over existing interfaces (i.e. I'd rather have a pair of glasses with a tether than a helmet any day of the week);
B.) In light field VRDs, resolution is not limited by active matrix array addressing as it is to a degree in LCD displays of mobile devices. Instead, resolution primarily depends on how quickly you can resonate the tip of a scanning fiber and concurrently how quickly you can pulse that fiber's light source in a synchronized manner (each pulse matching the bit color/intensity of the intended pixel);
C.) Multiple fibers form a composite image;
D.) Maximum resolution is only needed in the foveal region (i.e. a few deg about the center of user's eye gaze at any given time), so eye tracking can be utilized to minimize processing. The remainder of displayed content is low resolution, albeit still critical, peripheral imagery.
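Point D above is the big win. A rough sketch of the savings, using made-up but plausible numbers (the FOV, window size, and pixel densities are illustrative assumptions, not device specs):

```python
# Rough savings from foveated rendering: full resolution only inside a
# small, eye-tracked foveal window; coarse resolution everywhere else.
# All numbers are illustrative assumptions.

def pixel_budget(fov_deg, foveal_deg, foveal_ppd, peripheral_ppd):
    full = (fov_deg * foveal_ppd) ** 2           # uniform high-res baseline
    fovea = (foveal_deg * foveal_ppd) ** 2       # high-res tracked window
    periphery = (fov_deg * peripheral_ppd) ** 2  # coarse everywhere else
    return full, fovea + periphery

# 100-degree square FOV, 60 px/deg inside a 5-degree fovea, 10 px/deg outside:
uniform, foveated = pixel_budget(100, 5, 60, 10)
savings = uniform / foveated   # ~33x fewer pixels to drive
```

An order-of-magnitude-plus reduction in pixels to render and scan is what makes the eye-tracking complexity worth it.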
My team and I have been batting around a slightly different approach to the light field VRD, which we're thinking about simply releasing as a FOSH project.
Well, our light field display concept is fairly different from ML's in the manner in which we scan the light into the eye. We propose using very narrow collimated wavefronts instead of broad/potentially curved wavefronts. The system can still be used to support accommodation, potentially to a greater degree than is possible using layered DOEs (I don't know for sure), which we do not propose to use.
The resultant display has an inherently smaller exit pupil, but we'll lay out a method to actively expand the exit pupil, enough to support the typical maximum eye rotation range and minimum eye pupil dilation (~2mm) without vignetting the virtual image during natural eye movements. The benefit of this method is virtually unlimited FOV.
Certainly still in development, but we think releasing what we've come up with thus far into the wild as FOSH and seeing what the crowd can bring to the design is a fun idea.
I'm currently working on a simple "Hello World" write up that will explain the device components, some basic plans for prototyping done to date and the development hurdles that still need to be overcome. I'm planning to use this to introduce the project. It's definitely not an easy project, but I don't see any component that can't be fabricated by a crafty homebrew engineer.
One important differentiating factor between this and other (consumer) HMD's is the scanning display.
The Oculus et al need to render a rectangular matrix of evenly spaced pixels. If you want a larger field of view or higher resolution this comes with an exponential increase in number of pixels. With a high frequency scanning display and eye tracking you are not locked to a static spatial resolution (see https://en.m.wikipedia.org/wiki/Foveated_imaging).
There's a whole bunch of other problems when it comes to a render pipeline for something like this, but at least on the physical front it enables super high perceived resolution with fewer overall points of light to render.
Foveated rendering only became relevant when Nvidia recently started supporting Multi-Resolution Shading. As new techniques and tools like this get better support we can be a lot more efficient with lower powered hardware.
It was interesting to hear Carmack last year talk about re-introducing interlacing as a way to reduce latency.
Although, I'm just reinforcing what you're saying =)
Huh, interlacing is a neat idea to reduce latency, but I'm guessing you'd want a form of intra-frame temporal anti-aliasing, otherwise it might look pretty fugly. Whether or not this would neutralise any latency advantage due to the computational overhead of doing this, I'm not sure...
For VR, the consensus I've seen seems to push for multi-sample AA > full-screen AA > temporal AA (I feel like I've even seen claims that no AA is better than temporal AA). I'm not quite sure if it's performance, architecture (they really prefer forward renderers over deferred renderers for low latency), or aesthetics. In the little bit I've done, just playing with knobs, temporal AA looks better to me. Without AA, specular highlights are way too distracting and pop too much.
This reference[2] talks about the extra velocity buffer needed for temporal AA and the fact it tends to over-blur which can fuzz out fine details (low resolution is a touchy subject in VR).
There's a lot of moving pieces when making decisions. In talking about interlacing, he was ideally talking about not having to create a whole image buffer before sending data to the hardware--just rendering parts (well, scanline) of the image that moved.
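For anyone unfamiliar, the core of temporal AA is just an exponentially weighted blend against a history buffer. A toy sketch (real engines also reproject the history using the velocity buffer mentioned above; that step is omitted here):

```python
# Toy temporal anti-aliasing: blend each new frame into an exponentially
# weighted history buffer. Real engines also reproject the history using
# a velocity buffer before blending; that step is omitted here.

def taa_blend(history, current, alpha=0.1):
    # Per-pixel lerp: small alpha keeps more history (smoother, blurrier),
    # large alpha favors the current frame (sharper, noisier).
    return [h * (1 - alpha) + c * alpha for h, c in zip(history, current)]

# A single pixel that flickers between 0 and 1 (shader aliasing) settles
# toward its average instead of popping every frame:
history = [0.0]
for frame in ([1.0], [0.0], [1.0], [0.0]):
    history = taa_blend(history, frame, alpha=0.25)
```

The alpha knob is exactly the sharpness-vs-blur trade-off the over-blurring complaint is about.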
From what I read here, I think they are aiming for a system that has 'enough' resolution on the entire field of view.
'Enough' is retina resolution, but only on the fovea, and way less the further away you go from the fovea. That can significantly decrease the amount of work needed to render a scene.
Moreover, color-wise, you don't need color except in the center of your field of view (you need cones to see color, and cones are concentrated in the fovea).
Building such a screen probably isn't that hard; you would need to build new production lines because everything out there is built for rectangular grids of equal-sized pixels, but that probably is 'just' a matter of investing money and time.
Writing the software to efficiently draw scenes on such a screen probably is a bit harder; there will be lots of optimizations that need to be discovered (what's the equivalent of Bresenham line drawing, for example?)
The biggest challenge will be to keep the high-resolution part of your screen centered on where the retina looks. That requires eye tracking and some way to move your screen. Saccades move at hundreds of degrees per second; you would have to match that.
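The latency budget implied by saccades is brutal. A quick back-of-the-envelope (the speeds are the commonly cited range, not measurements):

```python
# How far can the gaze move between display updates? Peak saccade speeds
# are commonly cited in the 300-700 deg/s range; numbers are illustrative.

def gaze_drift_deg(saccade_speed_dps, latency_s):
    # Angular error accumulated while the tracker/display lags the eye.
    return saccade_speed_dps * latency_s

# At 90 Hz (about 11 ms per frame), a 500 deg/s saccade moves the gaze by:
drift_per_frame = gaze_drift_deg(500, 1 / 90)   # ~5.6 degrees

# To keep the foveal window within 1 degree of the true gaze point, total
# eye-tracking + display latency has to stay under:
max_latency_ms = 1 / 500 * 1000                 # 2 ms
```

A few milliseconds of total tracker-to-photon latency is far tighter than typical camera-based eye trackers deliver, which is why this is the hard part.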
If that's what they are building, and they succeed, I think it will take over the world, even if they initially sell it at $5000 an eye.
>If you want a larger field of view or higher resolution this comes with an exponential increase in number of pixels.
How does that work? Shouldn't it be just linear in FOV? Or quadratic if you are talking about solid angle? This doesn't seem like one of those things that grows exponentially.
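For what it's worth, at constant angular resolution the pixel count is quadratic in FOV (linear per axis); it's flat panels at fixed pixel pitch whose cost blows up faster, since the panel width goes as tan(FOV/2). A quick check with illustrative numbers:

```python
import math

# Pixel count vs. field of view, two ways (illustrative numbers only).

def pixels_constant_ppd(fov_deg, ppd):
    # Square FOV at constant pixels-per-degree: quadratic in the angle.
    return (fov_deg * ppd) ** 2

def pixels_flat_panel(fov_deg, px_per_unit, eye_dist=1.0):
    # Flat panel at fixed pixel pitch: half-width is eye_dist * tan(FOV/2),
    # so the count explodes as the FOV approaches 180 degrees.
    half_w = eye_dist * math.tan(math.radians(fov_deg) / 2)
    return (2 * half_w * px_per_unit) ** 2

# Doubling the FOV at constant angular resolution quadruples the pixels:
angular_ratio = pixels_constant_ppd(100, 30) / pixels_constant_ppd(50, 30)  # 4.0

# Doubling a flat panel's FOV from 80 to 160 degrees costs far more:
flat_ratio = pixels_flat_panel(160, 100) / pixels_flat_panel(80, 100)       # ~46x
```

So "exponential" is loose wording, but for a flat display the growth really is much worse than quadratic near wide FOVs.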
> - They could do hard occlusion with an LCD mask. Or they could punt til v2, and soft-occlude with blur using their stacked diffraction plates.
Unfortunately, you cannot easily achieve "hard occlusion" with an LCD mask. Pinlight display is a project that did explore that idea: http://pinlights.info
LCDs can attenuate individual rays of light. However, the images we see of real objects in our environment are generated from light in the form of planar wavefronts. So, if you have a near-eye LCD microdisplay and you turn on a single LCD pixel in an attempt to block light emanating from a certain point on a real object, most of the light emanating from that point will simply go around the LCD pixel, enter the pupil, and form a point on your retina. The total brightness of the image will dim a little depending on the size of the LCD pixel, but not by much for a single microdisplay pixel (i.e. a few micron pitch).
The Pinlight papers do talk about ways around that issue, using LCD diffraction pattern masks, but I still think the residual diffraction and reduced overall lens transparency associated with the LCD microdisplay make the approach an even bigger challenge.
If you hold out your thumb between your eye and the sun, such that your thumb appears to just block the edges of the sun, then you will be effectively blocking the entire set of overlapping wavefronts emanating from the sun (which would have formed the multiple "pixels" making up the sun's image on your retina), preventing their arrival at your cornea and entry into your pupil. Notice that your thumb is about 1.5 ft from your eye.
Could you emulate that with a transparent LCD display? Yes, but that display would need to be large and about 1.5ft from your eye. Not really practical for a near eye (head mounted display).
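The thumb/sun numbers check out with simple trigonometry (the thumb width and arm distance are my rough assumptions):

```python
import math

# Angular size of the thumb-vs-sun example, as a sanity check.

def angular_size_deg(width, distance):
    # Full angular subtense of an object of a given width at a distance.
    return math.degrees(2 * math.atan(width / (2 * distance)))

# A ~2 cm wide thumb held ~45 cm (about 1.5 ft) from the eye:
thumb = angular_size_deg(0.02, 0.45)        # ~2.5 degrees

# The sun: ~1.39e9 m diameter at ~1.5e11 m:
sun = angular_size_deg(1.39e9, 1.496e11)    # ~0.53 degrees
```

So a thumb comfortably covers the sun's disc; what matters for occlusion is the occluder's angular size, not its physical size, which is exactly why a microdisplay millimeters from the eye struggles.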
If you try to accomplish this occlusion directly with an LCD microdisplay (such as those found in many projectors) in a near-eye application, at best you end up with a blackish smudge with very fuzzy edges. To see this in action, look at Figure 11.D in the following paper:
Most of the light received by the retina in day-to-day life is highly incoherent, so it is unlikely that you can produce a destructive wavefront for it.
One of their patents mentions using stacked diffraction optical elements. Details unclear:
>[0216] Such may be used to cancel light from the planar waveguides with respect to light from the background or real world, in some respects similar to noise canceling headphones.
I believe this particular excerpt is referring to the use of the layered and controllable DOEs to curve light projected from their scanning fibers that ultimately forms the virtual object image on the retina, rather than using the DOEs to attenuate ambient light (see my reply to Kelsolaar above).
However, if I'm interpreting their literature correctly, you are correct in that they are able to use (at least a portion of) these DOE layers to generate the moire patterns that will ultimately be capable of forming black/clear image masks.
I agree that the lack of coherence (spatial and temporal) of ambient light would make using diffraction gratings to actively create destructive interference patterns on the retina a futile effort. Ambient light is generally white light (it lacks temporal coherence), and many wavefronts originate closer to the eye than optical infinity (~8m) and are therefore fairly divergent (curved, and thus lacking spatial coherence). The resulting interference patterns would be very blurry; too blurry to form a useful black-pixel image on the retina.
On the other hand, I believe ML is proposing to use gratings to display black/clear images. In this setup, the incident ambient light wavefronts are selectively redirected using Bragg gratings (why they keep saying "Braff gratings" in their literature is beyond me). This technique has been around for many years and is still captivating to watch even in this day and age - see:
The realization that this effect retains its "sharpness" for near-eye applications, in concert with (I believe) the realization that you can utilize the electronically-controlled diffraction grating layers you already have in your device (for the purpose of controlling wavefront curvature to support eye depth accommodation), is really quite clever. It's truly an exciting prospect!
There is a bigger problem as well: tracking. The glasses will need to use a lot of GPU (and CPU) processing power simply for tracking the environment. Sure, they don't render backgrounds, but that doesn't save much anyway; maybe 10% of GPU power goes to that. Tracking an environment takes something like an entire card at the moment.
Microsoft created an ASIC they call an HPU (holographic processing unit) that does this work for the GPU. It takes data from the depth sensors and sends the resulting point data straight to the CPU.
One of the researchers posted a 30ish min video on the hardware inside the Hololens on YouTube. It's this HPU / sensors that Microsoft is licensing to the hardware manufacturers for the upcoming VR and AR headsets. We'll know more December 8th, and after their hardware expo later this year.
Reminds me that a company a while back tried to pitch such a system on the cheap. It was basically a bulky visor that you put your smartphone into. Inside the visor was an angled piece of glass that would make your phone screen appear to be hovering in the air in front of you.
Maybe never. Pascal is a 16nm process, and the mid-range gaming card, the 1070, uses up to 150 watts. A "powerful" mobile chipset uses a tiny fraction of that even at its highest settings. 100 watts for the CPU and another 150 watts for the GPU is a staggering 250 watts of continuous power that no pocketable lithium-ion battery can handle.
If we want current-gen graphics we're more than likely looking at a PC backpack solution. More likely we'll be looking at some low-watt solution, like the 35W AMD chipset in the Sulon Q, and accept a lower level of graphic quality, but still well above mobile standards. This may involve wearing a battery belt or battery hip-pouch for extended use. For lower run times the battery can be part of the headstrap with no need for anything to be pocketable.
Part of me says just distribute the battery all around, thighs, legs, torso and you'll at least have good power. Another part of me says this is ludicrous, and is akin to strapping thermite and a bunch of loose strike anywhere matches into pouches all around your body.
Which Pascal? Assuming the smallest desktop Pascal, the GP107 has 768 CUDA cores. The Tegra X2's Pascal has around 256 CUDA cores. They seem to be clocked similarly. The Tegra also has much lower memory bandwidth and the like, but we'll put that aside for now. We'd need the Tegra to triple in performance. Maybe in ~6 years we can get GP107 performance in a tablet form factor. The GTX 1080 has 2560 CUDA cores; that's another ~3x from the GP107. So maybe 12 years for ~1080 performance in tablets.
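Those year estimates are consistent with a simple doubling model. A sketch (the 3.5-year doubling period is my assumption, chosen to reflect slower mobile perf-per-watt growth rather than classic Moore's law):

```python
import math

def years_to_reach(target_cores, current_cores, doubling_years=3.5):
    """Years until performance grows from `current_cores` to `target_cores`,
    assuming it doubles every `doubling_years` (a rough guess, not a law)."""
    return math.log2(target_cores / current_cores) * doubling_years

tegra, gp107, gtx1080 = 256, 768, 2560  # CUDA core counts from the comment above
print(f"GP107-class tablet: ~{years_to_reach(gp107, tegra):.0f} years")
print(f"GTX 1080-class:     ~{years_to_reach(gtx1080, tegra):.0f} years")
```

With that doubling period the model lands at roughly 6 and 12 years, matching the back-of-envelope figures; a faster doubling assumption would shrink both numbers proportionally.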
To be fair, if the tracking and resolution is first-in-class, there are a billion business applications that won't require major graphics horsepower. I'm going to buy a vive soon to play around with virtual desktop, but I don't have high hopes for it since I know the resolution isn't there yet. If they can come up with something light weight and high resolution that lets me see the rest of my environment I'm all in.
Don't buy the Vive for virtual desktop unless you don't care about your money. I have the Vive and it's awesome for games. Virtual desktop is cool too, but it's not something you want to use for any length of time. It's like looking at your phone stretched out on a 50 foot screen. VR videos are about the only use. Anything else is maybe just a glimpse into the future. The resolution is too low right now.
That sounds like a dreadful future. Sitting for extended periods in virtual meetings with this headset on without any chance to refocus your eyes, drink something, doodle.
> That sounds like a dreadful future. Sitting for extended periods in virtual meetings with this headset on without any chance to refocus your eyes, drink something, doodle.
Ugh. I just imagined a room full of people sitting around a conference table with VR headsets on. Not a pretty thought indeed.
well that's quite sad. I was hoping a vr headset could be a valid substitute for multi-monitor setups, we just have to hope they get to that point. Smartphones kind of peaked around 4th gen, so we should wait a couple more gens of vr at least :p
Got my Vive last week. The other thing you don't realize is because of the lenses, the edges of your vision are blurry. So, you can't scan with your eyes like a regular monitor. I too wanted a virtual monitor and it's just not there yet.
Their initial design was a fiber projector that oscillated in a spiral pattern with something like 512 rings (I can't remember the exact number). Along the spiral you can have any resolution, limited only by how fast you can modulate the laser. Details along the spiral would be sharp; anything along the normal (spokes radiating inwards) would only have 1024 distinct levels of detail.
But their later patents showed whole arrays of these fiber projectors. They are probably limited by the optics in-between the fiber and the eye and by the display controller bandwidth more than anything.
Yes and no, you want to have a ton of time tinkering in a lab and the room to experiment and fail... but who would know Edison if he didn't eventually ship the lightbulb?
Also considering how litigious/secretive things are — it must suck to work on something for so long and be contractually obligated to avoid talking about it in public.
Latency is critical-path for VR not just in existing solutions, but fundamentally. Speed-of-light delays are sufficient to cripple remote-processing approaches to VR, so we won't see cloud solutions unless we're wrong about the fundamentals of relativity.
I wouldn't rule it out completely, though. There is some opportunity to mitigate latency, such as performing the heavy lifting remotely with local transformations on a simplified description of the scene to account for head movement and the like.
It would certainly be a much more complex solution that would still have inferior results compared to computing everything locally (just like cloud streaming now). Just that it's not "change the laws of physics" impossible to get to a workable implementation.
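A back-of-envelope latency budget shows why regional cloud rendering is plausible while global rendering is not. The processing time, fibre path factor, and 20 ms comfort budget below are rough assumptions, not measured figures:

```python
C_KM_PER_MS = 300.0   # light travels ~300 km per millisecond in vacuum

def round_trip_ms(distance_km, processing_ms=5.0, fibre_factor=1.5):
    """Lower bound on cloud-render latency: light both ways along a
    non-straight fibre path, plus assumed server-side render time."""
    return 2 * distance_km * fibre_factor / C_KM_PER_MS + processing_ms

BUDGET_MS = 20.0   # commonly cited motion-to-photon target for comfortable VR
for d in (50, 500, 5000):
    rt = round_trip_ms(d)
    print(f"{d:>5} km datacentre: {rt:5.1f} ms  within budget: {rt <= BUDGET_MS}")
```

A nearby datacentre fits comfortably inside the budget, a continental one is marginal, and an intercontinental one blows it on propagation delay alone, which is exactly why the local-reprojection trick (render remotely, warp locally for head movement) is the only plausible escape hatch.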
Critical, but not impossible to fix. I can't find it now, but Microsoft published a video some time ago about realizing cheap VR/AR headsets which do rendering in the cloud.
A previous article (possibly from over a year ago) said that the computing power and battery come from a cell-phone-sized device that you keep in a pocket and attach to the glasses via fiber.
Bandwidth is not the problem, latency is. Bandwidth can be fixed by throwing more bandwidth at the problem.
Cloud gaming has not done all that well; it's right at the edge of feasibility due to latency. VR moves it back off the edge to "unquestionably infeasible". If VR gaming were to take off and become a dominant paradigm it would make cloud gaming even harder to justify.
I had to look it up, but it appears OnLive was shut down in 2015 and patents sold off to Sony.
I was completely skeptical at their claims with respect to latency. I never believed they could get the service to work, and for enough people to make it profitable. It appears quite a few people were able to enjoy the service at decent enough framerates. Which is surprising.
I recently tried the PS4's remote play functionality. My friend in another city was able to play my bloodborne, which is a very timing-sensitive game, perfectly fine.
A few weeks ago my cousin ran games at 1080p on an EC2 GPU instance and used Steam's In-Home Streaming to play remotely and it worked perfectly fine.
I think the latency is now there, if you have a fast connection and live relatively close to the datacenter.
I used onlive for a year or so and really enjoyed it. There were some issues, but even the multiplayer FPSs were playable. Most of the time I was playing single player games that were less twitchy, and if you had done a blind test I probably wouldn't have been able to tell you if it was running locally or not.
Gaming and banking/trading are definitely the informative projects here.
Pseudo-real-time gaming is possible, but only for regional servers. Live global gaming is off the table permanently unless known physics changes; speed of light delays are larger than human reaction times. Right now, the best we can do is predictive techniques and continental servers, which have barely enabled MMO shooters.
VR gaming worsens the problem - 80ms delays in VR are nauseating and game-ruining. Global latency isn't a solvable problem via money or devotion.
You sound like you think wireless is the only option...? All my consoles are wired. Anything doing cloud gaming would have an ethernet port unless the manufacturer is just an idiot. It might work with wireless, but I would recommend wired.
There are plenty of devices without HEVC hardware decode, though, which is basically a prerequisite to 4K. And software decoding is god-awful for anything besides South Park.
(It's not actually required, but the efficiency loss between H.264 and HEVC for 4K is pretty massive.)
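To see why hardware HEVC decode matters at 4K, a rough bitrate sketch. The bits-per-pixel figures are illustrative assumptions reflecting the often-quoted ~2x efficiency gap, not measured values:

```python
def stream_mbps(width, height, fps, bits_per_pixel):
    """Approximate stream bitrate in Mbps for a given pixel rate and
    compression efficiency (bits per pixel is codec- and content-dependent)."""
    return width * height * fps * bits_per_pixel / 1e6

# Assumed efficiencies: H.264 at ~0.10 bpp, HEVC at ~0.05 bpp for 4K content.
h264 = stream_mbps(3840, 2160, 60, 0.10)
hevc = stream_mbps(3840, 2160, 60, 0.05)
print(f"4K60 over H.264: ~{h264:.0f} Mbps, over HEVC: ~{hevc:.0f} Mbps")
```

Under these assumptions the H.264 stream needs roughly double the bandwidth, which puts it out of reach of many household connections where the HEVC stream would still fit, hence "not actually required, but the efficiency loss is pretty massive."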
Supposedly, because it's only rendering a small portion of your field of view at any given time, the compute resource required to power it is substantially less. I seem to also remember reading about it not using a standard pixel-filling kind of rendering, and using something else that requires less power instead.
Ok - it seems that they 'absolutely' have something bewildering, and eye-popping. There are enough 'no information' testimonials by smart people to get that.
BUT
I'm still pretty wary of billions getting pumped into a business that remains unproven.
+ Maybe development is very expensive
+ Maybe the goggles cost too much
+ Maybe they just don't have quite enough content to justify the high costs
+ Maybe the experience 'isn't quite there'
So many times these 'wow' toys have proven to be near-duds when they actually hit market reality.
I can't wait to try it, but I'm also looking forward to seeing the reality of it.
I was struck by the careful phrasing in the article and didn't want to dog pile on it at first - I note other reasons elsewhere why I'm rather dubious - but here's the line:
>The centerpiece of Magic Leap’s technology is a head-mounted display, but the final product should fit into a pair of spectacles. When you’re wearing the device, it doesn’t block your view of the world; the hardware projects an image directly onto your retina through an optics system built into a piece of semitransparent glass (the product won’t fry your eyeballs; it’s replicating the way we naturally observe the world instead of forcing you to stare at a screen).
This is purely my impression, but reading the line that says "the final product should..." leads me to believe they still have significant development hurdles that have been unconquered thus far, and being coy about the release window (the journalist speculated 18 months, the CEO just went with "soon-ish") encourages me to read between the lines. Others may interpret the line differently - that's fine - but I'm simply sharing my impression. Lots of things "should" work the way their visionary has in mind...
I'm mostly poking fun at the way it was written :)
From the write-up that was posted here weeks ago, it looks like the technology is really, really interesting, but not for the reasons this author thinks. Projection through optics isn't really anything new; projecting individual light points through a fiber cable with an oscillator at the end, like one of those spinning LED clocks[1] but on your face, is pretty cool though...
I remember that we were going to revamp existing cities and build new cities in accordance to this wonderful transportation device that would change everything, of which I cannot recall the name. I currently see them used as joke props in movies.
If you include 'hover-boards' these battery powered, two wheeled, self balancing, things sold over 2.5 million of the things in just the USA in 2015. So, it really is becoming increasingly widespread.
You can see this as an extension of the skateboards vs. scooters vertical-stick argument.
PS: For comparison, 17.5 million cars were sold in 2015, and 12.5 million bikes with 20+" wheels.
> So, it really is becoming increasingly widespread.
I don't know if this is really true. I saw a lot of hoverboards on my university campus last year. I don't see nearly as many this year. I'd like to see sales data for at least another year before declaring that anything is becoming increasingly widespread.
Earlier in the year I would have attributed this to issues with the supply chain and how hard it was to get the upgraded devices. Now, however, I believe that the exploding battery issues heavily reported in the media and the subsequent Amazon refund program put a massive cool on purchasing and usage.
Anyway, I think we are just seeing the hype / adoption cycle: https://en.wikipedia.org/wiki/Hype_cycle Basically, when nobody has X a lot of people get it all at once, but soon enough people have it and so they wait till the old one breaks before getting a replacement. Still if long terms sales stabilize at say 1 million units per year that's still significant.
These guys aren't building an invoicing app. They're building a brand new high-tech platform. The traditional startup approach clearly isn't going to work well here. The line for what a "minimum viable product" could be is pretty high... and once it is out, you can't upgrade it again (well not cost effectively at least). Hardware has a different cycle. Traditional waterfall methodology is still kind of essential for hardware. Frankly, I'm super excited something like this is coming out of a startup, and not Apple. It's nice to have some diversity.
They can't launch until the thing is at least portable - and if the carefully-managed rumours are right, it started out as an optics bench and then a tethered rack and ... nobody's seen anything that can be launched. We're still waiting for that "DK1".
If appearances are to be trusted, Magic Leap is not doing a DK1.
Considering the hundreds of people they've hired and their partners (Disney, ILM, Weta), they are launching with applications already, a bit like a games console more than anything.
Has there ever been a company that built this kind of hype over years and then in the end actually delivered something truly revolutionary? Judging from companies like Cuil (not sure if the name is right), Segway or Color they will fizzle out once their product gets on the market, if ever.
The hype is probably for recruiting. But, "Magic" in the name is like the handicap principle of biology, means the founders had a big prior exit and can raise despite the name ;) Like Magic Cap operating system — usually means good talent, but they take too long to ship.
If past history is the only thing to go on you are correct. However there are quite a few level A players investing huge sums of money in this. I have to think they are paying attention.
Or, they are so afraid of potentially missing out on the next big thing that they are putting money towards the project in spite of any warning signs that are there.
I have Theranos as a counterexample for the A-player argument :). But my main question is: is there an example where such an approach worked and revolutionized the industry? I can't think of any.
I'm not exactly sure how to classify it. Would you consider early institutional investors in Microsoft, Amazon, or Facebook?
Anecdotally I've heard that lots of VC firms passed on all three during their very early days, but I'm not convinced that those are good examples of completely new markets.
Thanks! I was immediately thinking: I would love some glasses to remove visual overload! Get rid of advertisements by switching on the ad blocker in my glasses.
Magic Leap has been bleeding talent as a result of their relocating from the Bay Area to Florida [1]. As I understand it, this decision was made by and for one of their (independently wealthy) founders.
Magic Leap was always in Florida? They are known for that. Even Devine, their chief games guy, talks about not wanting to move to Florida years ago when it was a tiny company.
This article brings up a big issue: even if Magic Leap fails to deliver, they just might own all of the relevant patents. This could make AR needlessly expensive for decades.
Except they seem like they're all business-method or software patents, which will be much harder to enforce:
At the time the Mayo case was decided, there was some uncertainty over whether it applied only to natural principles (laws of nature) or more generally to patent eligibility of all abstract ideas and general principles, including those involved in software patents. The Alice decision confirmed that the test was general.
And WebVan needed a distribution network and warehouses.
I'm not saying they're not making an awesome product. I'm saying it's possible not enough people will want to buy it in order to justify building a factory and campus.
It's not my money though and I've never seen the product. This is obviously a HUGE gamble that will either be a raging success or will become a joke about the follies of venture capital (i.e. WebVan 2.0).
Sorry, but this article is way too fluffy. Big words, and not much to show for it. The way I see it, it's basically an ad for the author's upcoming book.
Magic Leap itself does seem to be doing amazing stuff, given how much money has been pumped into it. At least, I hope that's the case!
This is going to end up like that one episode of Angela Anaconda where they're all really excited about the new Jiggle Fruit but it ends up tasting bad and they don't like it.
AR headsets already exist, are not vaporware, already are selling into enterprise markets (e.g. Atheer, Lumus). They're clunky and expensive.
MLeap is the only one I know of which has raised the $ needed for consumer-level polish and price. Probably because of the mini fiber-scan display and diffraction zone plates for natural focus. The other major pieces (waveguide or holo lens, IR TOF camera, SLAM, eye tracking, voice) have been done before.
> Ask your virtual assistant to deliver a message to a coworker and it might walk out of your office, reappear beside your colleague’s desk via his or her own MR headset and deliver the message in person.
Why do people buy into this junk.
The easiest way is plain text. email/slack etc
VR is useless for the office. Things are solved via text/speech, occasionally a graph helps but we all know they are mostly for show.
A magical ability to make things 3D doesn't make work easier.
> In one of its demos the Magic Leap team shows off a computer-generated “virtual interactive human,” life-size and surprisingly realistic. Abovitz and his team imagine virtual people (or animals or anything else) as digital assistants–think Siri on steroids
How does a 3d Siri make the backend suddenly easier?
The AI to analyze what you are saying? The AI to answer correctly? The tech to speak smoothly to you? Those are the things that actually matter, and they work fine in 2D.
A good computer game can cost hundreds of millions. These are going to be even more expensive. Even gaming will take a while to kick off.
I've tried the Hololens dev version. It's very cool, unlike anything else I've had on my head. Those who let me try it don't seem to be overly worried or impressed with Magic Leap, but there's no way of telling until we see both side-by-side.
It's cool, but the fov is narrow, and the area of visible pixels changes significantly with the position and orientation of the device relative to your eyes. Reach up and slide it a little bit up or down on your nose and you gain/lose enough area that you can spend minutes adjusting knobs and fittings to find something that seems usefully minimal. It's not particularly comfortable in general. And, as a wireless standalone unit compared to a desktop computer, it renders relatively very low quality 3d graphics because of the need for high framerate stereoscopic output and the typical constraints of a several-years-ago smartphone.
With investments from tech companies AND banks AND movie studios, and with half-a-dozen R&D outposts all over the world, it's hard to believe these investors just bought into vaporware.
Having worked with VC's before, there isn't always a clear domain understanding of the problem they are investing in. Additionally, once funding snowballs, later investors can incorrectly assume due diligence has been taken care of by earlier investors.
I would expect that we've seen what MagicLeap is targeting in both Google Glass and later Microsoft HoloLens. So conceptually the systems and optics are proven at some early stage. Instead of projecting into a lens that is then reflected into the eye, they claim they are projecting directly into the eye. Either way the optics should be similar. This means that as with any AR (instead of VR) system, they will not be able to project the absence of light in a brightly lit environment. (Creating shadows/darkness should be impossible)
The fact that Beeple was impressed with this is enough to get me excited about it. Granted, his reaction may have something to do with the ML team sitting right beside him as he demoed the product... so, cautiously optimistic then. It will be something cool. https://twitter.com/magicleap/status/752540648323026944
I saw some video from Magic Leap some time ago where some animated character got behind a chair in augmented VR. Aren't they just using infrared to get the distance to objects, then using ML to classify and distinguish them all, and then, once they have the objects' sizes, calculating which element their character will be behind, so they can position it correctly and make parts of it transparent while it crosses behind the chair?
Does anyone know if it works the same as the HoloLens, in that it is purely additive light - it can only make the image brighter rather than being able to render black pixels? This severely limits the kinds of things the HoloLens will ever be able to do - it certainly isn't going to be replacing any screens.
I get this feeling that Magic Leap is marginally better than Microsoft's HoloLens. But after seeing the HoloLens, they decided that they need more work to beat Microsoft. Marginally better isn't going to cut it.
I originally heard it projected images directly onto your retina; if that is the case, I don't know how it would work with eye movements. Very interested to see what is underneath all the hype.
Seems like most people here are skeptical/dubious of ML. I don't blame them; there is a lot of hype. But it is quite fascinating technology. If you look seriously at the detailed technical coverage on how the technology works [1] and watch the recent Stanford lecture by Brian Schowengerdt [2], Co-Founder of Magic Leap, then I think you will see why ML is actually quite special.
Personally I am fascinated and optimistic about the potential represented by the scanning fiber + waveguide display for AR and VR. I'm optimistic about the future of this tech platform far more than any other AR/VR tech out there, mainly owing to the form factor it affords (lightweight glasses) and the superior bio-compatibility of the light-field it produces (all-day use). In principle its imagery is clearly light-years ahead of anything using a flat screen with lenses, and its eventual form factor could be surprisingly close to regular glasses. The image quality and form factor are a huge part of why this tech could replace a phone or a TV or a laptop. I am sure the first iterations will be cool but not perfect. I expect a progression much like that of the iPhone: it works, it's awesome, then it keeps getting better and better and better.
IMHO this really does look like a viable platform for the future of computing for the next 10-20 years. People still buy PCs, but the laptop largely replaces most uses for them other than hard-core gaming. Similarly I could see the same effect happening with screens vs. ML over time. I say all this with the caveat that the form factor really does need to be much, much closer to glasses than something like HoloLens.
Here are couple things I have learned about the tech that kind of blew my mind:
* The same tech used to display the light-field can also be used to capture images of the eye for eye tracking, no extra camera needed. It is conceivable that the same scanning fiber technology could be used to dramatically miniaturize the sensors needed for SLAM inputs as well.
* The detail falloff of human vision from a small point in the center is huge. We largely piece together a detailed picture over time in our minds with quick eye movements. This means that good eye-tracking + foveated rendering largely mitigates concerns about achieving a very high perceived resolution with manageable IO.
* I assumed this kind of display would only be able to produce semitransparent ghostly images. Not so. Apparently it can block incoming light from points where it is projecting.
* It is not simple stereoscopy: the light hitting the eye is focusable at the distance the virtual object appears to exist in real space, rather than at the surface of the waveguide. This is why it is so compatible with the human visual system and can be used all day, as opposed to other tech. It has to do with how the system that points both eyes at an object in space (vergence) is wired in sync with the system that focuses the lenses (accommodation). No other system, to my knowledge, accommodates this linkage, and thus they cause strain. Watch the video to understand this more.
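The foveated-rendering point above can be put in rough numbers. A toy two-zone model (all parameters illustrative; real acuity falls off smoothly with eccentricity rather than in a step):

```python
def foveated_pixel_fraction(fov_deg=110.0, fovea_deg=5.0, periphery_scale=0.2):
    """Fraction of the naive pixel budget needed if we render full resolution
    only inside the fovea and at a reduced linear sampling rate elsewhere."""
    full_area = fov_deg ** 2          # naive: full resolution everywhere
    fovea_area = fovea_deg ** 2
    cost = fovea_area + (full_area - fovea_area) * periphery_scale ** 2
    return cost / full_area

frac = foveated_pixel_fraction()
print(f"~{frac * 100:.1f}% of the naive pixel count")
```

Even this crude model cuts the pixel budget to a few percent of the naive figure, which is why good eye tracking plus foveation makes very high perceived resolution tractable for I/O and compute.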
Hi there,
About usability: I have used the HoloLens for entire battery charges (about 3 hours). I work on various experiments in AR, including path finding, assembly, etc. Had no noticeable eye fatigue.
HoloLens works great already, and although light field may bring a technology improvement, ML seems to come a tad too late (I mean, all the ML videos I have seen could be produced with a HoloLens)... but I'd love to be convinced.
I'd say yes and no. Many people who have complained about the small FOV of Hololens expect applications where you want virtual stuff everywhere. Certainly nice for all around usage, but not necessary for many others. Again, after hours of continuous use, I've come to "forget" the FOV problem. The key is to switch from eye movement to head movement. When you know you have to look straight at the virtual content to see it, it's actually quite manageable. Don't get me wrong: I'll take 120° anytime, but I know it's around the corner, with Hololens or ML.
In the meantime, I'll take what exists. I've seen so much vaporware in 25 years in the AR field...
In a nutshell, HoloLens has a fixed focal depth of 2m and objects are scaled proportionally to mimic distance. Magic Leap uses stacked optical elements that can be turned on/off (the source of the "digital waveguide" name they use), and each plane is a different focal depth. So, by mimicking the way light would reach the eye normally, the ML results in more realistic objects in the world.
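The distinction is easiest to see in diopters, the reciprocal-metre unit the eye's focusing system effectively works in. A tiny sketch (the stack of plane depths is illustrative, not ML's actual configuration):

```python
def diopters(distance_m):
    """Optical power of a focal plane at a given distance: D = 1 / d."""
    return 1.0 / distance_m

# HoloLens-style single plane at 2 m, vs. a hypothetical multi-plane stack.
print(f"fixed plane: {diopters(2.0):.2f} D")
for d in (0.5, 1.0, 2.0, 4.0, 8.0):   # illustrative stack depths in metres
    print(f"{d:>4} m -> {diopters(d):.3f} D")
```

Note the stack is spaced roughly evenly in diopters, not metres: the eye's accommodation error is measured in diopters, so planes bunch up at near distances where a single fixed 0.5 D plane is furthest from what the eye expects.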
i'm hoping its real... but all that time and all the hype with no substance is confidence shaking. i mean, why would you not allay the obvious fears of investors and potential customers?
there are also the deep technical problems which are widely acknowledged in the field and clearly not addressed. a top spec desktop machine struggles to keep up with your eye movements /if they are known in advance/ - the idea that a pair of glasses coupled with /any/ peripheral, even one the size of your house, can contain the hardware to do this in real-time is quite far-fetched - even accounting for sensors with feedback and clever mechanical tricks.
there is no mention of how to deal with occlusion. no amount of throwing light at your retina can solve that problem... you need something to intelligently block it out. the marketing hype images make this seem like a solved problem...
maybe there is something that works, but i have no faith that magic leap is anything more than a scam targeting VCs. the same opinion i have grudgingly kept since i first saw anything of it... because its a cool idea and i want it.
i hope these issues are publicly addressed so i can buy it early in confidence instead of waiting to see if its as good as the sales-pitch... which, sadly, there is no indication it will be - because none of the obvious issues have been addressed.