Mozilla IOT Meetup London October 2016

On Wednesday, 12 October (two days ago), the first Mozilla IOT London meetup took place at the Mozilla Common Space near London Bridge.

A bit of history

The meetup was rebranded from the previous Firefox OS meetup group. A lot happened at Mozilla this year. The Firefox OS project was turned into a community project and the Connected Devices team was created.

Now that things have settled, we decided it was a good time to resume a series of meetups.

This was the first session so we're still learning and experimenting, but we're planning on doing monthly or bimonthly meetups. This first session was not recorded, but the plan is to record or stream the next ones.

The first session

Meetups are, and should always be, about meeting people, so we want to emphasise the social aspect: making a place where ideas are shared and discussions happen naturally.

Francisco speaking about the creation of the Connected Devices team.

For this first session, we had a nice lineup of speakers:

In addition to the amazing speakers above, I want to thank Mandy for the coordination, Dietrich and the devrel team for sponsoring the meetup, and all the attendees.

What's next?

We're super happy to receive feedback from the participants. If you happen to be in London during one of our next sessions, make sure to attend, and let us know what you would like this meetup to be.

The talented Becky Jones delivering an inspiring talk.

See you soon!

Landscapes in Virtual Reality

Lately I've been experimenting with virtual reality (VR) using Google Cardboard, my mobile phone and the amazing A-Frame library.

I tried drawing VR landscapes myself. It turns out VR textures are a form of anamorphosis. Some artists, like Dalí, drew such projections. To reconstitute a normal image, you need a special device: in our case, a VR environment.

What I learnt

VR is an immersive technology. To feel yourself part of a VR landscape, we apply a texture to the sky (see the <a-sky> element in A-Frame). A texture is a simple image that is distorted onto the inside of a sphere.
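In A-Frame this takes just a couple of lines of markup; a minimal sketch, assuming the scanned drawing is saved as landscape.png:

```html
<a-scene>
  <!-- The image is wrapped around the inside of a large sphere -->
  <a-sky src="landscape.png"></a-sky>
</a-scene>
```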

Drawing a landscape for such a projection is really counter-intuitive, but here are some of my findings.

Equirectangular projection

The easiest way to get started drawing your own landscapes for VR is to start with an image (or paper, canvas...) twice as wide as it is tall (once scanned, 4096 × 2048 works pretty well).

In an equirectangular projection, the top row of the image is distorted in such a way that it merges into a single point. The same applies to the bottom. This means that the closer an object is to the top or bottom, the smaller it will be rendered in the VR scene: its width will be squeezed.
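The squeeze can be quantified: rows of the image map linearly to latitude on the sphere, and each row is compressed horizontally by the cosine of its latitude. A quick sketch in JavaScript (the function name is mine):

```javascript
// In an equirectangular image of height h, the row at vertical
// position y (0 = top) maps to latitude lat = (0.5 - y / h) * PI.
// On the sphere, that row is squeezed horizontally by cos(lat):
// full width at the horizon, a single point at the poles.
function horizontalScale(y, h) {
  const lat = (0.5 - y / h) * Math.PI;
  return Math.cos(lat);
}
```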

The left and right sides of the image are merged together. To make it feel seamless, be sure to draw something that looks the same on both lateral sides or stitch lines may appear.

The ideal ratio for a VR landscape texture

The horizon line

If you draw a horizontal line right in the middle of the image, this is going to be rendered as your horizon line when you place the camera in the centre of the sphere.

You can play with it. Make it slightly higher and the user will feel like they're trapped in the ground (or maybe surrounded by mountains), making the final result a bit suffocating.

If you lower the horizon line, the user will think they're flying.

The horizon line in VR

4 sides

Your field of view covers roughly 1/4 of the image width. If you draw a single element larger than 1/4 of the width, it will force the user to look left and right, without ever being able to see it in full at once.

Be creative with it. Big elements like monuments or mountains should be larger than 1/4 of the width. Smaller elements like vehicles or trees should be much smaller.
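The 1/4 figure is simply the field of view's share of the full 360°; a small sketch (the function name and the 90° figure are my assumptions):

```javascript
// The slice of an equirectangular texture visible at once is
// the field of view's fraction of the full 360° turn.
function visibleWidth(imageWidth, fovDegrees) {
  return imageWidth * (fovDegrees / 360);
}
// On a 4096 px wide texture with a 90° field of view,
// the user sees a 1024 px slice at a time.
```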

The 4 sides of the user

Sphere in the sky

As explained above, the equirectangular projection merges the top of the image into a single point. If you want to draw a circular object right above the user, draw a stripe going from the top of the image over a small fraction of the height. It will render as a perfect circle if the stripe's lower edge is parallel to the top of the image.

Of course this principle applies to the bottom of the image too.
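The stripe height needed for a disc of a given size follows from the same mapping: the image height spans 180° from the point above the user to the point below. A sketch (names are mine):

```javascript
// A horizontal stripe covering the top `stripeHeight` pixels of an
// equirectangular image renders as a disc directly overhead.
// The image height spans 180° (zenith to nadir), so the disc's
// angular radius grows linearly with the stripe's share of the height.
function discAngularRadius(stripeHeight, imageHeight) {
  return (stripeHeight / imageHeight) * 180;
}
```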

Drawing a sun in a virtual reality environment

The result

I uploaded my experiments. On your mobile, go to: Grab a Google Cardboard and insert your phone. You can change the landscape with the click button.

My drawings

Drawing for an equirectangular projection leads to results that are hard to predict. Another technique is to draw on the 6 sides of a cube and use software to turn this cubic texture into an equirectangular projection, but I haven't tried it myself.

If you want to give it a go, you can reuse the code on GitHub. Just replace the images with your own. Then share your creations and ideas in the comments section below.

Building a Voice Assistant in JavaScript

tl;dr: You can build a voice assistant in JavaScript with existing open source libraries but the result sucks.

In the last couple of days, I explored voice technologies in the browser. Here are my findings.

What's available

Most of the voice-related JavaScript libraries I found are simple wrappers around the Web Speech API. Under the hood, browsers use external services, so I ruled out these libraries immediately as I wanted something that works offline.

What I found:

  • Pocketsphinx.js for voice recognition.
  • speak.js for speech synthesis.

Both these projects are brought to the web thanks to Emscripten and asm.js.

Building a (not so) intelligent voice assistant

Thanks to a discussion with my friend Thomas this weekend, CENode.js was brought to my attention. It's an open source project to enable human-machine conversation based on a human-friendly format. It can be used to simulate intelligence.

I had all the elements at hand to build a browser-based, offline voice assistant (Think Siri, OK Google, Amazon Echo...).

Coding (or rather gluing)

Gluing all parts together was unsurprisingly easy. Pocketsphinx.js calls a function when the web worker has a final hypothesis:

worker.onmessage = (evt) => {
  if (evt.data.hyp !== undefined && evt.data.final) {
    gotHypothesis(evt.data.hyp);
  }
};

I can feed this hypothesis to CENode.js by creating a new card:

function gotHypothesis(hypothesis) {
  const card = "there is a nl card named '{uid}' that is to the agent 'agent1' and is from the individual 'User' and has the timestamp '{now}' as timestamp and has '" + hypothesis.replace(/'/g, "\\'") + "' as content.";
  // '{uid}' and '{now}' are placeholders filled in with a unique id
  // and the current time before the card is fed to the CENode agent.
}

Then I just have to wait until the deck is polled for new cards addressed to me (my name is User). The new card contains the answer to my request and I can just do:

speak(answer); // answer is the content of the new card
The function speak will call espeak.synth() of speak.js.

Adding a grammar

This is it for the JavaScript code, but an important part is still missing: Pocketsphinx.js needs to be fed a grammar and the pronunciation of all the words used in it. This reduces the scope of detectable phrases, so I'll conveniently reuse all the possible requests supported by CENode.js.

Considering the CENode.js demo about astronomy, here's a list of some requests that it can understand (let me know if I missed any):

What orbits Mars?
What does Titan orbit?
What is Saturn?
What is a star?
List instances of type planet

The first step is to compute all the possible permutations that make sense in English and that CENode.js will understand. I found 124 of them. This is the corpus I'll use in the next steps.
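Generating such a corpus boils down to crossing question templates with instance names; a sketch of the idea (the templates and names below are illustrative, not the exact 124-phrase corpus):

```javascript
// Cross every question template with every instance name to build
// the corpus of phrases the assistant should understand.
const templates = [
  (x) => `what orbits ${x}`,
  (x) => `what does ${x} orbit`,
  (x) => `what is ${x}`,
];
const instances = ['mars', 'titan', 'saturn'];

const corpus = [];
for (const template of templates) {
  for (const name of instances) {
    corpus.push(template(name));
  }
}
```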

The grammar used by pocketsphinx.js is a Finite State Grammar. Think of a finite state machine where each state adds a word to the phrase:

s = 0 // s is the state with initial value 0.

[]    -> [What] -> [does] -> [Titan] -> orbit?
s = 0    s = 1     s = 2     s = 3      s = 4

The grammar for this simple phrase is:

const grammar = {
  numStates: 5,
  start: 0,
  end: 4,
  transitions: [
    { from: 0, to: 1, word: "WHAT" },
    { from: 1, to: 2, word: "DOES" },
    { from: 2, to: 3, word: "TITAN" },
    { from: 3, to: 4, word: "ORBIT" },
    { from: 4, to: 4, word: "<sil>" }
  ]
};

The more phrases you add, the more complex it gets. For each state, pocketsphinx.js uses the grammar to figure out what is a possible next step. If your grammar is correct, you cannot end up with meaningless sentences.

I wrote a small script to build grammars from a corpus of phrases that I may publish if anyone is interested.
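The core idea can be sketched in a few lines (this is my own simplified version, not the exact script: each phrase becomes a linear chain of transitions from a shared start state to a shared end state):

```javascript
// Build a pocketsphinx.js-style finite state grammar from a corpus:
// one chain of word transitions per phrase, all chains sharing a
// single start state (0) and a single end state.
function buildGrammar(corpus) {
  const transitions = [];
  let next = 1; // next free intermediate state
  const intermediate = corpus.reduce((n, p) => n + p.split(' ').length - 1, 0);
  const end = intermediate + 1; // shared end state
  for (const phrase of corpus) {
    const words = phrase.toUpperCase().split(' ');
    let from = 0;
    words.forEach((word, i) => {
      const to = i === words.length - 1 ? end : next++;
      transitions.push({ from, to, word });
      from = to;
    });
  }
  // allow trailing silence once a phrase is complete
  transitions.push({ from: end, to: end, word: '<sil>' });
  return { numStates: end + 1, start: 0, end, transitions };
}
```

For the single phrase "what does titan orbit", this produces exactly the 5-state grammar shown above.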

Adding pronunciation

For each word in the corpus, pocketsphinx.js needs to know how to pronounce it.

I fed my corpus to the Sphinx Knowledge Base Tool. I simply took the *.dic file from the .gz archive, split it on line breaks, split each line again on tabs, and got something like:

const wordList = [
  ["WHAT", "W AH T"],
  ["WHAT(2)", "HH W AH T"],
  ["ORBIT", "AO R B AH T"],
  ["MARS", "M AA R Z"],
  // ...
];
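The two splits fit in one small function; a sketch (the function name is mine):

```javascript
// Parse a CMU-style .dic file: one word per line,
// word and phoneme sequence separated by a tab.
function parseDic(dicText) {
  return dicText
    .trim()
    .split('\n')
    .map((line) => line.split('\t'));
}
```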

The result

Et voilà! That's all I needed to get an astronomy-centred voice assistant that can tell me which satellites orbit which planets, among other details. I have to say it's pretty cool, especially because I'm a big fan of astronomy and I got this project done in under 2 days.

As a side note I tested it only on Firefox Nightly on my laptop. Results may differ on Chrome or on mobile devices.

The result, though, is not really convincing. First of all, pocketsphinx.js voice recognition is bad. I can probably blame it on my accent (I'm French, remember), but it's so frustrating to say "What orbits Saturn?" hundreds of times with no result while my colleague was understood on the first try! The good thing, at least, is that it can't return phrases outside the grammar, so it always outputs something that makes sense.

CENode.js is also super limited, and I don't really know how to improve it beyond the example provided (the format is well documented, though). Some simple phrases outside of its understanding scope fail miserably:

What is orbited by Titan? --> fails
What does Titan orbit?    --> succeeds

But if you ask "What is Saturn?", you get:

Saturn is a planet. Saturn orbits the star 'sun' and is orbited by the moon 'Titan'...

The passive voice does not seem to be supported in requests.

Finally, what speak.js outputs is pretty terrifying. Whichever voice you choose, they all sound robotic. They have a cool retro feel if that's what you're after (and you could argue it suits something related to astronomy and space), but they're far from the quality of commercial products.

Going further

This experimentation was limited in time so I didn't want to spend too much on it, but if I had had a real project to build, I would have seriously looked at:

  • Finding better language packages for pocketsphinx.js.
  • Building better grammar files for pocketsphinx.js.
  • Using CENode.js' advanced features.
  • Using better voice packages in espeak.js.
  • Supporting more languages beyond English.

I don't think the code, which is messy, is worth sharing, so I'll leave it as an exercise for the reader, unless somebody is really interested.

What I can share, though, is the FSG generator from a corpus. It needs some cleaning, but the code can run on Node or in the browser.

There are probably tons of mistakes and approximations in this post; I'm by no means a voice expert, so please correct me if need be. Also, if you have any ideas on how to improve it, or if I missed an existing library, please let me know in the comments. Thanks!