I do like it when people set themselves challenges like this. A clear beginning and end, finishing something, combined with iterative learning: very satisfying, a bit like a hackathon or a game jam. But it made me think, because in a way it limits what you can do with what you've previously learned. The more you learn, the more complex the projects you can create, and halfway through you may come up with something that doesn't fit in a day anymore; complexity pushes build time up exponentially, e.g. when a project needs extra tooling for generating procedural content. So instead of 'over the course of 30 days I will finish a project every single day', what I propose is following the Fibonacci sequence: I start with nothing (procrastinating), next a one-day project, followed by another single-day effort. Then step up with a 2-day project -> a big 3-day project -> a 5-day full-blown project -> an 8-day epic. And finally: a full 13 days working on a single masterpiece! (33 days in total.)
While there is of course value in tackling a more long-term project, the following excerpt from Art and Fear comes to mind:
" The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the "quantity" group: fifty pound of pots rated an "A", forty pounds a "B", and so on. Those being graded on "quality", however, needed to produce only one pot -- albeit a perfect one -- to get an "A". Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the "quantity" group was busily churning out piles of work - and learning from their mistakes -- the "quality" group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay. "
With this scheme, even 15 projects puts you into multi-year time frames, and your 25th project will take 200 years. Fibonacci numbers grow quickly!
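A quick back-of-the-envelope check bears this out (treating project n as lasting fib(n) days, i.e. skipping the zero-day procrastination warm-up):

```javascript
// Duration of project n in days, with fib(1) = fib(2) = 1.
function fib(n) {
  let a = 0, b = 1;
  for (let i = 0; i < n; i++) [a, b] = [b, a + b];
  return a;
}

let totalDays = 0;
for (let n = 1; n <= 25; n++) totalDays += fib(n);

console.log(fib(15));         // 610 days: project 15 alone takes ~1.7 years
console.log(fib(25));         // 75025 days: project 25 alone takes ~205 years
console.log(totalDays / 365); // ~538 years for all 25 projects combined
```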
Knowing when to stop is one of the most important things about delivering a project. After trying this method a couple of times I think many will have learned when to draw the line.
I'm trying something like this to get over shipping anxiety. First a project scoped to one day, then a project scoped to a week, and then finally a project scoped to a month. So far I've done the one day project: https://github.com/milesrichardson/docker-nfqueue-scapy
I like that except that when learning something it's hard to know in advance how many days it will take. Instead, you could substitute key pieces of functionality for days.
You render twice per frame: half the "screen" rendered with a camera at ([-eyeOffset, 0, 0] * cameraRot) and the other half rendered with a camera at ([+eyeOffset, 0, 0] * cameraRot). The main thing is that this has some surprising performance implications (such as geometry complexity being more important than shader fillrate).
You also need to maintain an unusually high frame rate without dropping frames (90 FPS on the desktop headsets), which also has significant performance and app-architecture implications.
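For anyone who hasn't written one of these loops, here's a minimal three.js-style sketch of that two-pass stereo render. It's illustrative only: real WebVR/WebXR apps get the eye poses from the API rather than hand-rolling them, and `eyeOffset` and `renderStereo` are names I made up for the example:

```javascript
const eyeOffset = 0.032; // half of a ~64mm IPD, in meters (assumed value)

function renderStereo(renderer, scene, headCamera) {
  const size = renderer.getSize(new THREE.Vector2());
  for (const side of [-1, +1]) {
    // Clone the head camera and offset it along its local X axis:
    // ([side * eyeOffset, 0, 0] rotated by the camera's orientation).
    const eye = headCamera.clone();
    eye.position.add(
      new THREE.Vector3(side * eyeOffset, 0, 0)
        .applyQuaternion(headCamera.quaternion)
    );
    // Render each eye into its half of the framebuffer.
    const x = side < 0 ? 0 : size.x / 2;
    renderer.setViewport(x, 0, size.x / 2, size.y);
    renderer.setScissor(x, 0, size.x / 2, size.y);
    renderer.setScissorTest(true);
    renderer.render(scene, eye);
  }
}
```

Note that the whole scene goes through vertex processing twice per frame, which is exactly why geometry complexity starts to matter more than fill rate.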
It's actually not that radically different in terms of graphics; a game programmer should feel right at home.
The harder part is the UX implications when you realize "controlling the camera" is no longer in your hands. That might require fundamentally rethinking how your game and/or app functions.
There is one major difference in terms of graphics programming - deferred rendering doesn't work well in VR since it's incompatible with proper multisampled antialiasing, and the edge-detect and/or temporal AA methods typically used instead are too blurry when combined with the low perceived resolution of today's VR headsets.
For this reason there's been a trend back towards forward rendering, with some modern twists so it can efficiently handle many dynamic lights the way deferred does. UE4's forward renderer is one example.
That's a significant limitation for a modern technique.
Here's the full algorithm for anyone curious:
> The Forward Renderer works by culling lights and Reflection Captures to a frustum-space grid. Each pixel in the forward pass then iterates over the lights and Reflection Captures affecting it, shading the material with them. Dynamic Shadows for Stationary Lights are computed beforehand and packed into channels of a screen-space shadow mask, leveraging the existing limit of 4 overlapping Stationary Lights.
> Forward renderer now supports shadowing from movable lights and light functions.
> Only 4 shadow casting movable or stationary lights can overlap at any point in space, otherwise the movable lights will lose their shadows and an on-screen message will be displayed.
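In spirit (this is my own simplified CPU-side sketch, not UE4's actual code), the culling step looks something like this, using flat screen-space tiles instead of a full frustum-space grid. `project` is an assumed helper that maps a world-space light position and radius to screen space:

```javascript
const TILE = 16; // tile size in pixels

function cullLightsToTiles(lights, width, height, project) {
  const cols = Math.ceil(width / TILE);
  const rows = Math.ceil(height / TILE);
  const tiles = Array.from({ length: cols * rows }, () => []);

  lights.forEach((light, i) => {
    // Conservative screen-space bounds of the light's influence sphere.
    const { x, y, radiusPx } = project(light.position, light.radius);
    const minCol = Math.max(0, Math.floor((x - radiusPx) / TILE));
    const maxCol = Math.min(cols - 1, Math.floor((x + radiusPx) / TILE));
    const minRow = Math.max(0, Math.floor((y - radiusPx) / TILE));
    const maxRow = Math.min(rows - 1, Math.floor((y + radiusPx) / TILE));
    // Record this light in every tile its bounds touch.
    for (let r = minRow; r <= maxRow; r++) {
      for (let c = minCol; c <= maxCol; c++) {
        tiles[r * cols + c].push(i);
      }
    }
  });
  return tiles; // the forward pass then shades each pixel using only
                // the lights listed in its tile
}
```

The payoff is that each pixel only iterates over a handful of nearby lights instead of every light in the scene, while staying compatible with MSAA.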
In this instance, nothing afaict. I looked at a few of them, and they're generic "here's how you make a 3D scene with a shark" type tutorials for three.js (a popular browser-based 3D engine built on WebGL).
The cynic in me thinks that perhaps "VR tutorial" gets more clicks these days ;)
From the graphics programmer perspective? Nothing radically different at all. It's the same real-time rendering techniques that have existed for a while. VR is only different in design methodology really (and performance requirements, as noted in another comment). But in terms of programming, there's no huge leap between graphics programming for a flatscreen or VR game.
I would like to learn more about Viewport, but when you click Docs it requires you to make an account first... if I have to make an account just to find out what your product does, I leave and never look back.
It doesn't ask you to create an account. It asks for your email address so that we can send you the docs :) You can also try the 360 photos demo at https://viewportvr.co/demo
Seems like a great project. Would have loved to see some more concrete projects though; maybe do 10 VR projects in 30 days and have something more substantial... A+ on effort though.
Another challenge is desktop streaming. I achieved it using webvr-boilerplate (three.js) and jsmpeg-vnc. I still need to do lots of tweaking but it works quite well over LAN to mobile.
The other projects did most of the heavy lifting. jsmpeg-vnc does the screen capture and sends the data over a websocket to jsmpeg, which renders it to a canvas without buffering. I then took the canvas and used it as a texture in webvr-boilerplate. I needed to make sure the texture was a power of 2 and set texture.needsUpdate to true on each render update. Three.js made it easy to switch the cube out for a plane, which I moved a bit closer to the camera.

The command arguments I used for jsmpeg-vnc: -b 1000 -s 1024x512 -f 60 -p 9999 "desktop". jsmpeg-vnc doesn't support sound, which doesn't bother me since I can use the computer speakers, but I'd like to add mouse capture for when the headset is on. And here's a screenshot of it on my Moto G (2nd gen): http://imgur.com/a/uhRuN
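In case it helps anyone, the three.js glue amounts to roughly this (names and sizes illustrative; assumes three.js is loaded and jsmpeg is decoding into the canvas):

```javascript
// Basic scene setup.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// The canvas jsmpeg decodes into, used directly as a texture.
// Power-of-two dimensions (1024x512) keep WebGL happy.
const canvas = document.getElementById('jsmpeg-canvas');
const texture = new THREE.Texture(canvas);
const panel = new THREE.Mesh(
  new THREE.PlaneGeometry(4, 2),
  new THREE.MeshBasicMaterial({ map: texture })
);
panel.position.z = -3; // pull the "screen" in close to the camera
scene.add(panel);

function animate() {
  texture.needsUpdate = true; // re-upload the freshly decoded frame
  renderer.render(scene, camera);
  requestAnimationFrame(animate);
}
animate();
```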
Yea, that is generally the approach. The tripods are a common artifact in 360-degree videos and photos; if it's of professional quality, it gets blurred or patched in some other manner.