Anyone playing around with WebVR? I dipped my toes in via a-frame which is pretty simple to get started with. I don’t have much to show for it other than this really clunky proof of concept I made last week during a bout of travel related insomnia. You can see the code here, fork your own copy, etc.
Given the weirdness I got to relatively quickly, I think with a bit more effort it could be really weird.
Note - if you watch the above in a mobile browser, it doesn’t load media correctly, so you’ll miss out on the video playing. I need to fix it. I gather mobile browsers won’t start media until a user action kicks things off.
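For anyone hitting the same wall: mobile browsers block media playback until the page has received a user gesture, so the fix is to call `play()` from inside a tap/click handler. A minimal a-frame sketch of the idea (the asset id and filename here are made up, not from the actual project):

```html
<!-- Minimal a-frame scene; on mobile the video stays paused until the
     first tap, because autoplay requires a user gesture. -->
<a-scene>
  <a-assets>
    <video id="clip" src="clip.mp4" loop playsinline></video>
  </a-assets>
  <a-video src="#clip" position="0 1.6 -3" width="4" height="2.25"></a-video>
</a-scene>
<script>
  // Kick playback off on the first tap/click anywhere on the page.
  document.addEventListener('click', function () {
    document.querySelector('#clip').play();
  }, { once: true });
</script>
```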
Not really sure what I’m looking for here, but interested to see where this community intersects with…CYBERSPACE.
I’ve played with it a little (and would like to work on VR further, but currently don’t have time). Both times I created a VR project I used a-frame, so +1 for recommending it.
The first project was an AR radio play which used a-frame + ar.js (https://github.com/jeromeetienne/AR.js). Inside a building we displayed AR markers which, when viewed, displayed a 3D shape and played part of a radio play. The main idea was that by exploring the building, the radio play could be experienced in a non-linear way.
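For a sense of scale, marker-based AR with a-frame + AR.js is only a few lines of markup per marker: each pattern gets an entity, with the visuals and a chunk of audio attached to it. A sketch along the lines of that project (the pattern file and audio path are hypothetical, not from the real piece):

```html
<a-scene embedded arjs>
  <a-marker type="pattern" url="markers/scene1.patt">
    <!-- What the visitor sees while the marker is in view -->
    <a-box color="tomato" position="0 0.5 0"></a-box>
    <!-- One chapter of the radio play, tied to this marker -->
    <a-entity sound="src: url(audio/chapter1.mp3); autoplay: true"></a-entity>
  </a-marker>
  <a-entity camera></a-entity>
</a-scene>
```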
The second project was building a really ugly VR scene based on a Polish comedy show: http://firmanty.com/kiepski/ I think the only interesting thing there is the second corridor, where you can enjoy the output of a GAN my friend trained on characters from that show.
One of the problematic things for me in WebVR is that there isn’t any standard input for interacting with VR: one person might load the page on a computer with a keyboard and mouse, another on a smartphone with a Cardboard headset, and so on. A lot of work is needed for an interface that will be accessible to everyone.
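In practice this tends to mean feature-detecting and falling back: controllers if you have them, a gaze cursor on Cardboard-style phones, mouse and keyboard on desktop. A tiny sketch of that branching logic (the capability flags and scheme names here are hypothetical, not any real API):

```javascript
// Sketch: pick an interaction scheme from whatever capabilities the
// browser reports. The `caps` object is assumed to have been filled in
// elsewhere (e.g. from WebXR session/gamepad queries).
function pickInputScheme(caps) {
  if (caps.hasVRController) return "controllers";  // 3DOF/6DOF wands
  if (caps.hasTouch) return "gaze";                // cardboard-style gaze cursor
  return "mouse-keyboard";                         // desktop fallback
}
```

Each scheme then needs its own interaction affordances (raycast from controller vs. fuse-on-stare vs. click), which is where the real work hides.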
Any chance you could give a short list of skills you need to build reasonable VR experiences as a solo creator? If I want to go deeper, do I pick up Blender for 3d modeling? Shaders for WebGL? What’s a reasonable learning pathway for WebVR once you get over the A-Frame hurdle?
Seems like digging into three.js is a natural next step, given that a-frame is built on top of it. But The Book of Shaders is fantastic and will be lots of fun!
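Shaders click faster once you notice that a fragment shader is just a pure function from normalized coordinates to a colour, run once per pixel. The same idea in plain JS (values in 0..1 as in GLSL; `mix` is a real GLSL built-in, reimplemented here; the colour values are arbitrary):

```javascript
// GLSL's mix(): linear interpolation between a and b by t.
function mix(a, b, t) {
  return a * (1 - t) + b * t;
}

// A "fragment shader": horizontal gradient from blue-ish to pink-ish,
// evaluated for one pixel at normalized coordinates (u, v).
function fragment(u, v) {
  return {
    r: mix(0.1, 1.0, u),
    g: mix(0.2, 0.4, u),
    b: mix(0.9, 0.8, u),
  };
}
```

The Book of Shaders exercises are essentially this, but on the GPU, with time and resolution as extra inputs.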
I’ve always been curious about webVR, but not much of a line coder, so been slow going.
I’ve been increeeedibly excited about VR as a medium, especially w/r/t intersections re: ambisonics and virtual surround sound, conceptual overlaps re: ‘sampling’ and object libraries, plunderphonics, etc etc.
It’s been slow going, but I’ve been kludging together some pieces using Unity speaking to Ableton and Max via OSC.
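For anyone curious about the OSC leg of that: an OSC 1.0 message is just a null-terminated, 4-byte-aligned address string, a type-tag string, and big-endian arguments, so it's simple to hand-roll. Since this thread is about the web stack, here's a minimal encoder sketched in Node-flavoured JS rather than Unity/C# (the address and values are made up); the resulting bytes can be fired at Max or Ableton (via Max for Live) over UDP:

```javascript
// Pad a string length (including its terminating NUL) up to a
// multiple of 4, per the OSC 1.0 spec.
function padTo4(len) {
  return (len + 3) & ~3;
}

// Encode an OSC message with only float32 arguments.
function encodeOSC(address, floats) {
  const typeTags = "," + "f".repeat(floats.length); // e.g. ",f" or ",ff"
  const addrLen = padTo4(address.length + 1);
  const tagLen = padTo4(typeTags.length + 1);
  // Buffer.alloc zero-fills, so the NUL padding comes for free.
  const buf = Buffer.alloc(addrLen + tagLen + 4 * floats.length);
  buf.write(address, 0);
  buf.write(typeTags, addrLen);
  floats.forEach((f, i) => buf.writeFloatBE(f, addrLen + tagLen + 4 * i));
  return buf;
}

// Send with dgram, e.g.:
// socket.send(encodeOSC("/cutoff", [0.5]), 7400, "127.0.0.1");
```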
The compositional possibilities seem so exciting – imagine a grid-sequenced virtual sculpture or environment, impossible spatial movement, etc.
So excited to hear there’s other folks looking in this direction (or at least VR) on Lines, too!
I worked on a WebVR tool called Patches with some kick-ass ex-demosceners based in Helsinki. We used Three.js under the hood. Then we did something else using React VR / React 360. Then a 3D WebGL game using AFrame. Three.js ends up in the internals of all these because it abstracts WebGL pretty well.
It’s complex and time-consuming to produce WebVR. It’s good for weird, though, and weird is somewhat easier to make with it, especially with custom shaders and when targeting a specific device. In fact VR is a good medium for digital art in its own right: it gives you a sense of having been somewhere (as opposed to having seen something). It’s a fun medium to work with if you’re not trying to get anywhere fast. The “Vaporwave” aesthetic isn’t compulsory; maybe the medium just lends itself readily to it. 3D sound is a bit hard to do, and doesn’t sound too nice until you get to the actual audio graph under the hood and aggressively reconfigure panning and dynamics.
The learning pathway is steep; for WebVR you need:
- CGI basic scene knowledge (models, materials, mapping, lights, camera)
- Three.js or AFrame knowledge (how to construct the above)
- 3D math for transformations (Vec3, Euler, Quaternion, Matrix)
- Shader knowledge
- Blender or another CGI tool for modelling (or fixing third-party models)
- An image editor for textures
- ? for paths and animation
- Web Audio graph nodes for sound (no algorithmic reverb built in, but you can construct your own :)
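On the 3D-math rung of that ladder: the operation you reach for most often day-to-day is rotating a vector by a quaternion. A dependency-free sketch, mathematically equivalent to what three.js's `Vector3.applyQuaternion` does:

```javascript
// Build a unit quaternion from a (unit) axis and an angle in radians.
function quatFromAxisAngle([x, y, z], angle) {
  const s = Math.sin(angle / 2);
  return { x: x * s, y: y * s, z: z * s, w: Math.cos(angle / 2) };
}

// Rotate vector v by quaternion q, i.e. v' = q * v * q^-1, using the
// expanded form: t = 2 * cross(q.xyz, v); v' = v + w*t + cross(q.xyz, t).
function applyQuaternion(v, q) {
  const { x, y, z, w } = q;
  const tx = 2 * (y * v.z - z * v.y);
  const ty = 2 * (z * v.x - x * v.z);
  const tz = 2 * (x * v.y - y * v.x);
  return {
    x: v.x + w * tx + (y * tz - z * ty),
    y: v.y + w * ty + (z * tx - x * tz),
    z: v.z + w * tz + (x * ty - y * tx),
  };
}
```

Rotating (1, 0, 0) by 90° around the z-axis gives (0, 1, 0), which is a quick sanity check when wiring this up.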
AKA The Web dev process, circa '97
Don’t worry, it’ll get better. Ish.
i don’t worry, but i doubt it will get better, even ish, the way the generally-available web of the late 90s did. it hasn’t the same momentum, or the same commercial/political/open-source will behind it.
To be honest I’m also a beginner in that regard, but I think @eesn raised some very good points. My own advice would be: if you are experimenting with or starting in a medium, give yourself some time to play with it, and use simple shapes or models downloaded from the internet to quickly verify ideas. Then, if you feel you are going somewhere, you can always use what you created as a demo to collaborate with someone who can model 3D graphics etc.
Yeah, this line is a reason I started down the path. I admire physical sound / media installations, but I don’t want to do all the legwork to find my own way of participating in that world. I’m a dabbler. Two big experiences for me recently were a sound-and-visuals piece projected in a dome at Desert Daze two years ago, and Meow Wolf. Both experiences made me want to do more with my sound work… VR seems like a way to dabble (did I mention I’m a dabbler?) in that without actually building a physical space.
While my comment was somewhat frivolous it wasn’t without thought, so I’m going to shift gears and disagree a bit.
The commercial second-coming of VR has legitimised the field after a long period in the wilderness (not unlike its cohort AI). The first wave suffered because it had to overcome two separate barriers: technology and culture. The technology is now at a level that matches the vision (yeah, let’s say pun intended). And there is a lot of weight behind making it culturally accepted; it’s a challenge to name a consumer-electronics company not heavily investing, or at least hedging. Momentum is not a problem – some form of VR/AR/MR will become dominant in the next five years, and will become entrenched in society within a similar period after that.
As that happens the tools will become (indeed, are becoming) streamlined and standardised, because the creative bloc and industry will demand it. (Think IE5, then WebKit/Chrome.) As it is not controversial to say that the browser (née software) has eaten the world, it’s also no stretch to posit that the browser will be the de facto platform for XR too.
What I’m trying to point out is not some technological impossibility but an intersection of factors - predominantly not technological - that destines the medium for a small-scale end. I’ve listed some technicalities, but I feel success is much more determined by the quality of content and the scale of consumption. VR cannot have the scale and reach of AR, but AR hasn’t the immersion of VR. Both end up expensive to produce, and I’d argue difficult to consume. It’s a vast subject to tackle, and I wouldn’t dare begin going into detail on the intricacies of wearing a shoe on my face, unintentionally attacking various room fixtures, or the unnerving feeling, after seeing the logs produced, that there’s an eye pressed directly against my face. And then, WebXR lives on the largely 2D web.
I’d love it if someone went into detail on the early UX experience of now established artforms (in this context), or just provided pointers?
On the difficult to consume front, we should remember that VR makes lots of people sick—even when the content is created by experts—and that this sickness disproportionately affects women.