I just finished a short series of experiments about virtual reality called VR Diary. Here's a brief summary of why and how.
VR, but what for?
Creating content for virtual reality is easy. A-Frame is probably the easiest way to get started: it's entirely free and open source, and only requires basic HTML knowledge. On other projects, I used the Google VR SDK for Unity, and Unity's graphical UI lets you create content without typing a single line of code.
So now the real question is what are we going to use VR for? VR is just a medium and in itself has no value.
This is what I explored in this project. VR is an immersive and engaging platform so I decided to use it to share personal experiences with the readers.
It took a direction towards philosophical questioning and challenging reality that I hadn't expected. But overall it's been a success from a personal standpoint: the experiments are consistent (except maybe the first one) and I managed to go through all 6 days.
Working as a hobby
An essential aspect of this experiment was being on holiday with a lot of free time. I spent about 2 or 3 hours a day actively working on the VR experiment and writing the post; the rest of the day was spent passively thinking about what to do next.
This freedom was essential to generating new ideas to experiment with. Creative people know you can't just sit at a desk and expect new ideas to come. It's when you go out for a walk, sit in the grass, or spend some time in a zoo or a museum that you unlock most of your creative potential.
I thoroughly enjoy this lifestyle of working actively a few hours in the morning and passively for the rest of the day. Maybe that's how life should be: dedicating most of your time to creativity and self-expression, and working only as long as necessary. This is something I want to explore more in the future.
On Wednesday, 12 October (2 days ago) the first Mozilla IOT London meetup took place at the Mozilla Common Space near London Bridge.
A bit of history
The meetup was rebranded from the previous Firefox OS meetup group. A lot happened at Mozilla this year. The Firefox OS project was turned into a community project and the Connected Devices team was created.
Now that things have stabilised, we decided it was a good time to resume a series of meetups.
This was the first session so we're still learning and experimenting, but we're planning on doing monthly or bimonthly meetups. This first session was not recorded, but the plan is to record or stream the next ones.
The first session
Meetups are, and should always be, about meeting people, so we want to emphasise the social aspect and make it a place where ideas are shared and discussions happen naturally.
For this first session, we had a nice lineup of speakers:
In addition to the amazing speakers above, I want to thank Mandy for the coordination, Dietrich and the devrel team for sponsoring the meetup, and all the attendees.
We're super happy to receive feedback from the participants. If you happen to be in London during one of our next sessions, make sure you attend and let us know what you would like this meetup to be.
See you soon!
Lately I experimented with virtual reality (VR) using Google Cardboard, my mobile phone and the amazing A-Frame library.
I tried drawing my own VR landscapes. It turns out VR textures are a form of anamorphosis. Some artists, like Dalí, drew such projections. To reconstitute a normal image, you need a special device: in our case, a VR environment.
What I learnt
VR is an immersive technology. To make you feel part of a VR landscape, a texture is applied to the sky (see the
<a-sky> element in A-Frame). The texture is a simple image stretched onto the inside of a sphere.
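A minimal A-Frame scene doing this can look like the sketch below; the texture file name (sky.png) and the A-Frame version pinned in the script URL are illustrative assumptions, not taken from the original post.

```html
<!-- Minimal A-Frame scene: the equirectangular image sky.png is
     stretched onto the inside of a sphere surrounding the camera.
     File name and A-Frame version are illustrative. -->
<!DOCTYPE html>
<html>
  <head>
    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-sky src="sky.png"></a-sky>
    </a-scene>
  </body>
</html>
```

Open the page on a mobile browser and A-Frame handles the stereo rendering and head tracking for you.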
Drawing a landscape for such a projection is really counter-intuitive, but here are some of my findings.
The easiest way to get started drawing your own landscapes for VR is to start with an image (or paper, canvas...) twice as wide as it is high (once scanned, 4096 × 2048 works pretty well).
In an equirectangular projection, the top line of the image is distorted in such a way that it merges into a single point. The same applies to the bottom. This means that the closer an object is to the top or bottom, the more its width will be squeezed when rendered in the VR scene.
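The mapping behind this is simple to sketch. The following is an illustration of the standard equirectangular mapping (not A-Frame's internal code): a pixel's horizontal position becomes a longitude, its vertical position a latitude, and a row of pixels is squeezed onto a circle whose circumference shrinks with the cosine of the latitude.

```javascript
// Map a pixel (x, y) in a width × height equirectangular image to
// spherical angles in degrees.
function pixelToSpherical(x, y, width, height) {
  // Longitude spans -180° (left edge) to 180° (right edge).
  const lon = (x / width) * 360 - 180;
  // Latitude spans 90° (top edge) to -90° (bottom edge).
  const lat = 90 - (y / height) * 180;
  return { lon, lat };
}

// Horizontal squeeze factor for a pixel row at a given latitude:
// the full image width wraps onto a circle proportional to
// cos(latitude), so rows near the poles collapse towards a point.
function rowSqueeze(lat) {
  return Math.cos((lat * Math.PI) / 180);
}

console.log(pixelToSpherical(0, 0, 4096, 2048)); // top-left corner
console.log(rowSqueeze(0));  // equator: 1, no squeeze
console.log(rowSqueeze(85)); // near the pole: heavily squeezed
```

This is why an object drawn near the top of the image ends up far thinner in the scene than it looks on paper.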
The left and right sides of the image are merged together. To make it feel seamless, be sure to draw something that looks the same on both lateral sides or stitch lines may appear.
The horizon line
If you draw a horizontal line right in the middle of the image, this is going to be rendered as your horizon line when you place the camera in the centre of the sphere.
You can play with it. Make it slightly higher and the user will feel trapped in the ground (or maybe surrounded by mountains), making the final result a bit suffocating.
If you lower the horizon line, the user will think they're flying.
Your field of view covers roughly 1/4 of the image width. If you draw a single element wider than 1/4 of the width, the user will have to look left and right to see it entirely; it can never be taken in all at once.
Be creative with it. Big elements like monuments or mountains should be larger than 1/4 of the width. Smaller elements like vehicles or trees should be much smaller.
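The 1/4 figure corresponds to a roughly 90° horizontal field of view, which is in the ballpark of what Cardboard-style viewers offer. A quick sketch of the arithmetic (the 4096 px width and 90° field of view are assumptions carried over from earlier):

```javascript
// How many pixels of a 360°-wide equirectangular image fall inside a
// given horizontal field of view. Assumes the image width covers the
// full 360° of longitude.
function visiblePixels(imageWidth, fovDegrees) {
  return Math.round(imageWidth * (fovDegrees / 360));
}

console.log(visiblePixels(4096, 90)); // a quarter of the image width
```

So on a 4096-pixel-wide texture, anything wider than about 1024 pixels forces the user to turn their head.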
Sphere in the sky
As explained above, the equirectangular projection merges the top of the image into a single point. If you want to draw a circular object right above the user, draw a horizontal stripe starting at the top edge of the image and spanning a small fraction of its height. As long as the stripe's bottom edge stays parallel to the top of the image, it will render as a perfect circle.
Of course this principle applies to the bottom of the image too.
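The size of the rendered disc follows directly from the latitude mapping: the image's full height spans 180° of latitude, so a stripe covering some fraction of the height covers the same fraction of 180°, measured down from the point overhead. A small sketch (the pixel values are illustrative):

```javascript
// Angular radius, in degrees, of the overhead disc produced by a
// horizontal stripe of stripeHeight pixels touching the top edge of
// an imageHeight-pixel-tall equirectangular texture. The full image
// height spans 180° of latitude.
function discRadiusDegrees(stripeHeight, imageHeight) {
  return (stripeHeight / imageHeight) * 180;
}

// On a 2048 px tall texture, a thin stripe makes a small disc at the
// zenith; a stripe covering half the image reaches the horizon (90°).
console.log(discRadiusDegrees(114, 2048));
console.log(discRadiusDegrees(1024, 2048));
```

The same arithmetic works for the bottom of the image, measured up from the point below the user.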
I uploaded my experiments. On your mobile, go to:
Grab a Google Cardboard and insert your phone. You can change the landscape with the click button.
Drawing for an equirectangular projection leads to results that are hard to predict. Another technique is to draw on the 6 sides of a cube and use software to turn that cubic texture into an equirectangular projection, but I haven't tried it myself.
If you want to give it a go, you can reuse the code on GitHub. Just replace the images with your own, then share your creations and ideas in the comments section below.