The Origin of the Metaverse

I’ve been somewhat obsessed and excited about virtual reality since acquiring the Oculus Rift Development Kit One (or DK1) and experiencing just a little bit of what is being dubbed “presence” (see Mike Abrash’s presentation at Steam Dev Days on what this is). My excitement only increased after Sony announced their own VR headset, called Project Morpheus, lending credence to the idea that VR has finally arrived. This has all put me in a wildly speculative mood. I believe this technology will catch on very quickly because it will provide the most immersive and engaging experiences yet conceived. It will finally allow us to escape.

My consumption has not just been DK1 demos and projects: I’ve been loyally following the latest updates on reddit.com/r/oculus and other blogs, as well as consuming any novel related to VR. I quickly devoured Ready Player One and I’m halfway through Neal Stephenson’s Snow Crash. William Gibson’s Neuromancer will be next. Common to each of these novels is the idea of a virtual world separate from our own, in which people from all over the globe connect to explore fantastical environments where the physical laws of the real universe do not apply. This often involves strapping a device to our face, something akin to goggles, along with sporting gear on our hands and body that enhances the experience by simulating touch, often called haptics. The virtual worlds are named something different in each novel (OASIS, Metaverse, Matrix). I like Snow Crash’s “metaverse” the most.

It seems a metaverse is inevitable as these headsets get better. Given the social nature of our species, it seems obvious that we’d also want to experience the metaverse together. We want to, or rather need to, share experiences with each other. Experiences in the metaverse will be novel, plentiful, and unconstrained by the physical properties of reality – we will want to share them.

React.js Diffs

Lately I’ve been looking into Facebook’s open source React.js library. It’s a front-end JavaScript user interface library with a few interesting features. The most interesting one to me was something referred to in a React.js blog post as reconciliation:

When your component is first initialized, the render method is called, generating a lightweight representation of your view. From that representation, a string of markup is produced, and injected into the document. When your data changes, the render method is called again. In order to perform updates as efficiently as possible, we diff the return value from the previous call to render with the new one, and generate a minimal set of changes to be applied to the DOM.

Finally the post continues:

We call this process reconciliation.

In my research, or rather “Google search”, I stumbled upon this article, which goes into more detail about the diffing process. I highly recommend giving it a read, as it details the heuristics React uses to efficiently add and remove elements from the DOM, since computing a true minimal “diff” would be too expensive.

I liked the idea of seeing this diffing process, so I wrote a simple example that demonstrates reconciliation. To “see” the process, however, you will need to turn on “Show paint rectangles” in the Chrome debugger (open Chrome Dev Tools, press ESC, and go to the Rendering tab).
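To make the idea concrete, here’s a toy version of the diffing step in plain JavaScript. This is emphatically not React’s implementation — just a naive sketch of the concept: compare two lightweight trees and produce a list of patch operations that would be applied to the DOM.

```javascript
// A naive sketch of the idea behind reconciliation -- NOT React's actual
// implementation. A "virtual" node here is just { tag, children } (React's
// also carry props). diff() compares two trees and returns a list of patch
// operations that would transform the old DOM into the new one.
function diff(oldNode, newNode, path = []) {
  const patches = [];
  if (oldNode == null) {
    patches.push({ op: 'insert', path, node: newNode });
  } else if (newNode == null) {
    patches.push({ op: 'remove', path });
  } else if (typeof oldNode === 'string' || typeof newNode === 'string') {
    if (oldNode !== newNode) patches.push({ op: 'replace', path, node: newNode });
  } else if (oldNode.tag !== newNode.tag) {
    // Different element types: React gives up and rebuilds the whole subtree.
    patches.push({ op: 'replace', path, node: newNode });
  } else {
    // Same tag: diff children pairwise by index (React also uses keys here).
    const len = Math.max(oldNode.children.length, newNode.children.length);
    for (let i = 0; i < len; i++) {
      patches.push(...diff(oldNode.children[i], newNode.children[i], path.concat(i)));
    }
  }
  return patches;
}

const el = (tag, ...children) => ({ tag, children });
const before = el('ul', el('li', 'one'), el('li', 'two'));
const after  = el('ul', el('li', 'one'), el('li', 'TWO'), el('li', 'three'));
console.log(diff(before, after));
// -> a 'replace' for the changed text node and an 'insert' for the new <li>
```

Only the two changed spots produce patches; the untouched first `<li>` generates nothing, which is exactly the “minimal set of changes” the blog post describes.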

See Example

YouTube for the Oculus Rift

video-vr-extension screenshot
Source | Demo

I’ve done it. Finally finished. It took me a whole day, most of the night, and the better part of the next morning, but I can finally say I’ve finished one of the things I’ve been longing for – the ability to watch YouTube videos in the Rift!

I’ve implemented it as a Chrome extension, and there are many things that can be improved:

  • More settings (e.g. custom width, height)
  • More sources (Vimeo? Netflix?)
  • More bridges (only vr.js is supported)

I will follow up with a more detailed technical breakdown of the hacks required to get this working.

iBeacon Background Advertising

The last week or so I’ve been playing around with iBeacon, a Bluetooth LE technology from Apple. Basically it allows a device (e.g. an iPhone, an iPad, or a dedicated beacon like an Estimote) to advertise a signal to iOS devices within a certain proximity. Most examples demonstrated are museum- or merchant-focused. A store, for example, might place an iBeacon in a static location, such as the entrance, where it would transmit to devices organically carried through its doors. Once a receiver detects the iBeacon signal, it might notify the user of special discounts or coupons or a myriad of other marketing ploys. A museum might place iBeacons discreetly at each painting, and after downloading the museum’s app a visitor might get historical information and the painter’s biography delivered to their phone as they navigate the museum.

While I like the museum examples presented, frankly I’m not excited about an increased bombardment of advertising as I walk through the world. However, beyond the merchant and museum applications, I think there is a lot of potential in the peer-to-peer application space – in particular, ideas that become possible when a device both advertises and receives iBeacon signals. Imagine walking into the subway and being challenged by another user on the same train to a game of Rock, Paper, Scissors. Or a simple dating app that advertises availability while out at the bar. Or a multitude of other gaming applications (see Nintendo’s “StreetPass” for examples). Not to mention ideas related to daisy-chaining iBeacon devices into areas where cell reception and Wi-Fi aren’t available.

There is one very big problem with the technology as it now stands that prevents these ideas from taking off: an iOS device cannot advertise while the app is running in the background!

It is possible to receive iBeacon signals while in the background, or even if the application is closed (after the latest update - see this). However, advertising those signals can only be done while the app is running in the foreground.

This is definitely a bummer. Fortunately there are ways, using traditional Core Bluetooth, to create these applications - but it looks rather difficult. See this GitHub project as an example. I’m also unsure what the battery-drain implications are, or whether the application has to remain running in the background. I will be exploring this option in the upcoming weeks to get at least one idea off the ground.

I believe the lack of background broadcasting really reduces the peer-to-peer applications that would otherwise be possible. Apple has got to know this, right? Or is there a reason background advertising is not supported?

Hacker School Checkpoint

It’s hard to believe, but I’m two months into Hacker School, which means there is not much time left (one month!). It has passed quickly, so I wanted to write a post that serves as a kind of “checkpoint”: a brief look at what I had hoped to accomplish in this batch and a way to orient myself for the final stretch. Firstly, I had a gander at my application (yes, I saved it) and took note of the projects I had listed there:

  • Browser-based Oculus Rift video player. The final project might be a browser-based 360 video player with Oculus Rift support.
  • Complete the “Elements of Computing Systems” textbook - build a computer starting from logic gates, CPU, and memory (virtually), then move on through the software hierarchy into a complete integrated system.
  • Develop an iPhone game called “Quick Draw”. Much like a Western duel, as you approach other players of the game in the real world, the first to draw their phone from their pocket and point it at the other wins the “match”.

I only listed these three projects, which wasn’t very ambitious really. I have roughly completed the first two (I have a couple chapters left of nand2tetris). I haven’t done any mobile development so far.

Additionally I have spent time working on other projects that I did not plan at the outset:

  • Learned a bit of Elm and exposed myself to FRP (Functional Reactive Programming). Created a simple game called Vessel.
  • Spent about a week learning Haskell. Elm is very similar, so it was a natural segue.

I guess that’s it for the major projects. It appears kind of measly when written out like this, but what isn’t described here is the multitude of small “learning events” I participated in. Examples include short talks by the facilitators, the weekly Monday lectures by residents, student presentations on Thursdays, and most importantly the informal pairing and discussions that occur every day and in chat. Other things I’ve learned: functional programming in Python, what a Smalltalk programming session looks like, how git internals work, x64 assembly programming, writing a very simple kernel, and weird distros of Linux, just to name a few.

Looking forward I hope to finish strong and plan to complete a few more things:

  • Complete an iPhone application and distribute it in the App Store
  • Learn some more Haskell. One of my goals was to learn a functional language.
  • A script that can easily be added to a page to take a canvas element and place it into a 3D scene so it can be played in the Rift. I’m imagining being able to play Vessel or any other HTML5 canvas-based game in the Rift.

HTML5 Panoramic 360 Video

Demo Screenshot
Source | Demo

I’ve finally completed a project that I’ve had in mind for quite some time: a 360 HTML5 video player. This project was initially conceived during a hackathon a few months ago (the source for those experiments is located here).

Most excitingly, the player has optional Oculus Rift support. It seeks to replicate what existing projects can already do (e.g. Total Cinema 360 and VR Player), but in the web browser. However, the player is limited to browsers that support WebGL, and users must have the vr.js project installed for Oculus Rift support. It’s only been tested in Chrome (and not very thoroughly, I might add). The videos on the demo page are MP4, which only works in Chrome (except for Sintel, which should work in Firefox).

In the midst of building this thing I’ve used a lot of web-based players, and most are Flash-based. I did find one by Kolor that looked pretty impressive.

I still don’t know much about the actual stitching process and the algorithms behind it. I’m also not sure about the projections I’ve chosen. I’m still very new to “360” video, but I think as VR takes off it will become much more prevalent.
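For reference, the projection I believe most 360 video uses (and what I’m assuming here) is equirectangular: each video frame is a rectangle whose horizontal axis wraps around a sphere as longitude and whose vertical axis runs pole to pole as latitude. In WebGL you texture the video onto the inside of a sphere centered on the camera. The math for where a texture coordinate lands on the sphere looks roughly like this (the function name is illustrative, not from the actual source):

```javascript
// A sketch of the equirectangular mapping common in 360 video. Texture
// coordinate u in [0,1] wraps around the sphere as longitude; v in [0,1]
// runs from the bottom pole to the top pole as latitude. The returned unit
// vector is the view direction at which that pixel of the frame appears.
function uvToDirection(u, v) {
  const lon = (u - 0.5) * 2 * Math.PI;  // -pi .. pi
  const lat = (v - 0.5) * Math.PI;      // -pi/2 .. pi/2
  return {
    x: Math.cos(lat) * Math.sin(lon),
    y: Math.sin(lat),
    z: Math.cos(lat) * Math.cos(lon),
  };
}

console.log(uvToDirection(0.5, 0.5)); // center of frame -> straight ahead, (0, 0, 1)
console.log(uvToDirection(0.5, 1.0)); // top edge -> straight up, approximately (0, 1, 0)
```

The pinching you see at the top and bottom of such videos falls out of this mapping: an entire row of pixels at v = 1 collapses to the single point at the pole.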

Thanks to the airpano people for their amazing videos, which I’m using to demonstrate the player.

Vessel - Writing a Game in Elm

Vessel Screenshot
Source | Play

This week at Hacker School I started working with a language called Elm. The language’s creator (Evan Czaplicki) was a resident during the week so I had the absolute best resource available to me while playing with it. The language is functional (based on Haskell, or so I’m told) and features Functional Reactive Programming, among other things. In other words, Elm would throw a lot of new concepts at my imperative brain.

Initially progress was slow. At the end of the first day all I had to show was a program that wouldn’t compile. I made some gains the next day by talking to fellow students who were familiar with functional concepts, and eventually paired with Evan for about an hour, which helped immensely (special thanks to Brian as well - check out his Elm Flappy Bird clone). The end result was a “tunnel” game much like the one I used to play on my TI-83+ graphing calculator in high school. In all it took me about three days to go from zero knowledge to a small functioning game.

Some Observations…

Elm’s most distinguishing feature is Functional Reactive Programming. The general idea is that you can bring in values that change (or don’t) over time, called signals. For example, the mouse’s x and y coordinates are represented by the Mouse.position signal. As a user moves their mouse, its coordinates are pushed through the program. When these signals are sampled can be controlled by a function called sampleOn. At every sample, the functions invoked by foldp get to work and “update” the game state (I say update, but what I really mean is that a new state is created based on the existing state - it’s immutable).

The foldp function was difficult for me to grasp initially. It is similar to the foldl and foldr functions, but works over Signal inputs. My mental model was to imagine a Signal as a list where only the latest value is accessible. Eventually I came to understand that the “p” in foldp stands for “past”, which helped it make more sense. It probably didn’t help that I was unfamiliar with the other “fold” functions and had to learn those too.
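For anyone else coming from an imperative background, here’s roughly what foldp is doing, sketched in JavaScript. The closure-based plumbing and the names are my own illustration - Elm hides all of this, and in Elm the state values themselves are immutable (the Elm type is foldp : (a -> b -> b) -> b -> Signal a -> Signal b):

```javascript
// A sketch of foldp's behavior: like reduce/foldl, except the "list" being
// folded is the stream of values the signal has produced so far (the past).
// Each incoming event is combined with the accumulated state to derive the
// next state; the old state is simply discarded, never mutated in place.
function foldp(step, initialState) {
  let state = initialState;
  return (event) => {
    state = step(event, state); // derive the next state from event + past state
    return state;
  };
}

// Hypothetical game state: the ship's x position, driven by mouse samples.
const update = foldp((mouseX, game) => ({ ...game, shipX: mouseX }), { shipX: 0 });

console.log(update(10)); // { shipX: 10 }
console.log(update(25)); // { shipX: 25 }
```

Each call of update stands in for one sampleOn tick: the signal delivers a value, foldp’s step function folds it into the state, and the new state flows on to the renderer.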

Towards the end I began to think functionally. This meant making sure each function had all the information needed via its arguments to do its job, and thinking in terms of composing functions together rather than passing objects by reference into functions. In other languages you can get away with modifying a global variable inside a function without taking the time to refactor, eliminating referential transparency and potentially introducing bugs in the future as complexity increases.

I do think Elm has exposed me to a different way to model and write a program. As I was working on the game, I almost inevitably had a working program if I could get it to compile. I think the key insight for me was that Elm was forcing me to face my ineptitude at the compile stage. If I had made the game in JavaScript I would have encountered these bugs at runtime, and I would undoubtedly have additional lurking issues related to stale state and/or state changes due to side effects, resulting in situations I never predicted and, finally, abnormal behavior and/or crashes. When working with Elm, I didn’t have to re-run the program often to find bugs. Problems encountered at runtime were more of the “my explosions don’t look like explosions” variety, not some errant issue.

Going forward, I believe it will not be difficult to adopt a more functional style in my JavaScript code. I have yet to read Functional Programming in JavaScript, but now I have a very good reason to. The topic used to feel imposing, but now it’s a little less so.

Hacker School: Week 2

I’ve finished week one of Hacker School and want to summarize it as best I can. First of all, I feel I have been making great progress through the nand2tetris.org course, completing roughly a chapter a day. I’ve started reading chapter 7, but will hold off on the exercises while I digest the material I’ve been exposed to. I want to complete a few other smallish projects before starting again. One is writing the minimal amount of C++ code needed to interface with the Oculus Rift. Another is playing with iOS development (a Flappy Bird clone?).

Overall, my initial misgivings about leaving my job and forgoing a salary (scary) have been swept away, as I have learned more about computers in the last week than I had in the last year. Besides the nand2tetris course, there have been small, informal talks and gatherings on topics like git and Python internals, process profiling, and even some live JavaScript game programming. My first week in NYC is another experience altogether, perhaps worthy of its own post.

First Day of Hacker School

The first day of Hacker School was a whirlwind of activity. I realized how unprepared I was for the reality of starting my projects. It appeared others were in the same situation, so I think I was in the majority on this. I regret not taking the opportunity to speak to an alum (everyone received an email with contact information for alumni who didn’t mind speaking to incoming students). If I had, I may not have wasted most of the day, although there is something to be said for acclimating oneself to the environment - it’s important not to be too hard on oneself. Socializing a bit and getting to know some of the other students feels like it may be one of the biggest benefits of attending Hacker School.

We began our day with brief introductions by the founders (Nick, Sonali, Dave), followed by some talks on general guidelines and introductions by the facilitators (Tom, Alan, Allison). Students volunteered their fears and what they were excited about. I did not feel scared, only worried about what I’m going to be doing for the next three months. It feels like there is so much time, and so little, all at once. There are a lot of people (60ish?), and I have just as many self-doubts about what exactly I will get out of this experience. I definitely feel like Hacker School is for a specific breed - it’s not for everyone.

I mostly began reading through ‘Elements of Computing Systems’ by Nisan and Schocken. I’ve finished the first chapter and begun the exercises. It was difficult to think because of the noisy, busy environment. I don’t think Hacker School is suited to deep problem-solving (at least not the main room); it’s better for collaborative and social activities centered around learning something. I will have to reevaluate the projects I have in mind and redirect myself. I joined the nand2tetris group at the least, and will continue working through the book, but will do the reading at night as homework, leaving Hacker School open for other activities.

Lastly, I think I’m going to reset my Linode instance. I installed an instance of GitLab and it’s definitely insecure. I also can’t remember what else has been installed. Another project…

2014 Resolutions

2013 has passed and a new year has begun. I’ve resolved to start setting achievable goals for the future, and the start of the new year seems an appropriate opportunity to do so.

  • Write at least 100 words a day. In a journal, in this blog, or on a napkin… it doesn’t matter. As long as it’s not for work and/or typical email communications.
  • Exercise more. Run. Climb. Continue to develop healthy exercise habits.
  • Simplicity, simplicity, simplicity! Follow Thoreau’s mantra, simplify accounts and possessions. Focus on what’s important, time is too precious to be spent managing stuff.
  • Eat well. Less meat. More vegetables. Less coffee and beer, more water.
  • Learn. Read more books. Learn a new language. No more TV. Less sports (except the World Cup!). Stop dawdling on Facebook and other online distractions.
  • Exist. Travel. Enjoy the moments.

Hello World

As a start to getting my technical house in order, I’ve setup this blog. I’ve gone with a minimalist approach as I find that the easiest to maintain and understand.

I’m quite honestly not sure what my goals are for my blog. I think I was more interested in the setup process than the actual blogging itself. Perhaps I’ll figure it out on the fly. I’ll get practice writing at the very least.

Finally, I’m not sure who will be reading my posts (besides maybe my girlfriend… hey!). It feels kind of like keeping a personal diary. Here’s to hoping I’ll have something interesting to say in the coming months!

Sean