
HoloLens Development

Started by May 22, 2016 07:54 AM
1 comment, last by jezham 8 years, 5 months ago

I've had the rare privilege of being invited by Microsoft to participate in the world's very first holographic hackathon this weekend. I thought I'd share a few quick notes and thoughts so far:

Mind blowing stuff:

-The HoloLens can recognize voice commands. You can say, "Cortana, record this," and it will start recording video! (See the voice command sketch after this list.)

-It has NO wires. The thing is a mobile computer strapped to your head. Take a minute to let that sink in.

-You can use three hand gestures to issue input commands (see the gesture sketch after this list)

-You can place holograms on top of tables and on walls. How freakin' cool is that?! This is done through what Microsoft calls 'spatial mapping' (see the placement sketch after this list).

-The device uses cameras and infrared lasers to capture your environment's geometry and generate meshes. The advances in computer vision alone are mind blowing.

-The glasses are rendering at 240 frames per second!!! If you thought the 90 FPS for VR was high... lol
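
For anyone curious what the voice side looks like in code: "Cortana, record this" is a built-in system command, but app-level keywords in Unity go through the KeywordRecognizer API. A minimal sketch, where the "place hologram" phrase and its handler are placeholders I made up:

    using System.Linq;
    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.Windows.Speech;

    public class VoiceCommands : MonoBehaviour
    {
        private KeywordRecognizer recognizer;
        private Dictionary<string, System.Action> keywords =
            new Dictionary<string, System.Action>();

        void Start()
        {
            // "place hologram" is just an example phrase.
            keywords.Add("place hologram", () => Debug.Log("Placing hologram"));

            recognizer = new KeywordRecognizer(keywords.Keys.ToArray());
            recognizer.OnPhraseRecognized += OnPhraseRecognized;
            recognizer.Start();
        }

        private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
        {
            // Look up and run the handler for the phrase that was heard.
            System.Action action;
            if (keywords.TryGetValue(args.text, out action))
                action();
        }
    }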
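
Gestures go through a similar GestureRecognizer API. Here's a minimal air-tap sketch; note the namespace was UnityEngine.VR.WSA.Input in the Unity build we used, and newer versions rename it to UnityEngine.XR.WSA.Input:

    using UnityEngine;
    using UnityEngine.VR.WSA.Input; // UnityEngine.XR.WSA.Input in newer Unity versions

    public class TapHandler : MonoBehaviour
    {
        private GestureRecognizer recognizer;

        void Start()
        {
            recognizer = new GestureRecognizer();
            recognizer.SetRecognizableGestures(GestureSettings.Tap);
            recognizer.TappedEvent += (source, tapCount, headRay) =>
            {
                Debug.Log("Air tap detected");
            };
            recognizer.StartCapturingGestures();
        }

        void OnDestroy()
        {
            recognizer.StopCapturingGestures();
            recognizer.Dispose();
        }
    }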
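
Placing holograms on real surfaces mostly comes down to raycasting the user's gaze against the meshes that spatial mapping generates. A rough sketch, assuming those meshes are on a layer named "SpatialMapping" (the layer name depends on how you set up your mapping component):

    using UnityEngine;

    public class HologramPlacer : MonoBehaviour
    {
        public GameObject hologram; // the object to stick to real surfaces

        void Update()
        {
            Transform head = Camera.main.transform;
            int mask = 1 << LayerMask.NameToLayer("SpatialMapping"); // assumed layer name
            RaycastHit hit;

            // Cast the user's gaze ray against the spatial mapping meshes.
            if (Physics.Raycast(head.position, head.forward, out hit, 10f, mask))
            {
                // Snap the hologram to the gazed-at surface, facing along its normal.
                hologram.transform.position = hit.point;
                hologram.transform.rotation = Quaternion.LookRotation(hit.normal);
            }
        }
    }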

The unexpected stuff:

-Developing for augmented reality is about three times harder than for virtual reality (especially when it comes to design considerations).

-Augmented reality development is NOT the same as virtual reality development. VR is focused on immersion and presence, while AR is about layering extra information over the existing world. I can't stress enough how important this fundamental difference between the two platforms is.

-The field of view was disappointingly small (think of looking at the world through a 15-inch monitor)

-The glasses use additive light to create holograms, so you can't draw dark colors such as black. Black is just... invisible! (See the camera sketch after this list.)

-The magic of AR is being able to keep a hologram stationary even though the observer is not (see the world anchor sketch after this list)

-It's designed for indoor use

-It can't handle non-stationary objects very well. Definitely don't take it out to sea.

-Dark surfaces and reflective surfaces will not give good results

-People are finding use cases for AR which not even science fiction writers could have envisioned, such as 'telepresence': someone remotely using a tablet to draw in your viewport while seeing what you see.

-It's not going to work as well in cluttered and messy environments, because all that junk gets turned into triangle meshes.

-A lot of the hardware design decisions were made to preserve the battery life of the device (laser pulse frequency, camera resolutions, screen size, refresh rates, CPU, etc.).
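
The "black is invisible" point actually works in your favor when rendering: clear the camera to solid black and the background emits no light at all, so only your holograms show up over the real world. In Unity that's just a camera setting, applied in a sketch like this:

    using UnityEngine;

    public class HoloCameraSetup : MonoBehaviour
    {
        void Start()
        {
            Camera cam = Camera.main;

            // On an additive display, black pixels emit no light, so a black
            // clear color renders as "nothing" instead of a skybox.
            cam.clearFlags = CameraClearFlags.SolidColor;
            cam.backgroundColor = Color.black;
        }
    }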
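
Keeping a hologram stationary is exposed in Unity through the WorldAnchor component, which asks the device to pin a transform to the real world and keep correcting it as tracking improves. A minimal sketch (again, the namespace is UnityEngine.VR.WSA in the build we used, UnityEngine.XR.WSA in newer versions):

    using UnityEngine;
    using UnityEngine.VR.WSA; // UnityEngine.XR.WSA in newer Unity versions

    public class AnchorToggle : MonoBehaviour
    {
        // Pin this object to its current real-world position.
        public void LockInPlace()
        {
            gameObject.AddComponent<WorldAnchor>();
        }

        // An anchored object can't be moved, so remove the anchor first.
        public void Unlock()
        {
            WorldAnchor anchor = GetComponent<WorldAnchor>();
            if (anchor != null)
                DestroyImmediate(anchor);
        }
    }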

The disappointing stuff:

-It only works with Windows 10, not Windows 7. (clarification edit: Your developer machine has to be running Win10, though I haven't tested this to be 100% sure.)

-At this point, only Unity3D supports it. No UE4 support yet :(

-Debugging is really hard, if not next to impossible, because you have to deploy out to the device. This slows down dev iteration cycles.

-I need to get a lot more practice with AR if I'm going to be any good at it in the future

Final thought: The HoloLens is VERY impressive from a technical standpoint. However, it is also the first generation of hardware for this type of technology. It is the world's first high-quality, mass-market consumer-ready device capable of rendering holograms on top of real surfaces. I expect rapid technological advances in this area over the next 5-10 years. I think there are a lot of very interesting problems HoloLens could solve, but the solutions can't just be taken from VR and ported over to AR (see: fundamental difference in objectives).

I was at the HoloLens hackathon too! I agree with all of the points you brought up.

There were two big issues that bothered me:

  1. The deployment process is very time consuming, which makes iterating on things a pain. To test on the device, you have to hit build in Unity, then open the generated Visual Studio project and build that, then deploy to the HoloLens over USB. I wasted a lot of time early on trying to get my holograms to look just right in the headset before learning that I should find clever ways to iterate as much as possible within Unity before deploying to the device (a sketch of one such trick is below this list). At one point we finished the project early ("early" in that we didn't need an all-nighter on Saturday night), and I considered making something random for fun instead of going to sleep, but I chose not to because I dreaded the thought of staying up all night looking at the deployment loading bar. D:
  2. I couldn't quite figure out how to wear the device correctly. Maybe my head is too big, or oddly shaped, but I kept wishing for an overhead strap.
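
One trick along those lines that I wish I'd set up from the start: a tiny editor-only input shim, so a mouse click stands in for an air tap and most interaction logic can be tested without deploying. A hypothetical sketch; the event wiring is just one way to do it:

    using UnityEngine;
    using UnityEngine.Events;

    public class EditorInputShim : MonoBehaviour
    {
        // Wire whatever normally responds to an air tap into this event.
        public UnityEvent onTap = new UnityEvent();

        void Update()
        {
    #if UNITY_EDITOR
            // In the editor, a left click stands in for the air-tap gesture.
            if (Input.GetMouseButtonDown(0))
                onTap.Invoke();
    #endif
        }
    }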

But overall it was really awesome. I just want to keep playing with it, to be honest, to see what sorts of weird hologram-related things I can make.


I wasn't at the event!

Reading these posts reminds me of when the Marmalade SDK involved deploying over USB via a separate tool; it was a lot less frustrating to just try stuff in the simulator. Over time the process became a lot easier as the deployment tool improved. Still not exactly like, say, Wii homebrew, where you just press Ctrl-R and the code is on the Wii in a second.

It would be nice to try VR one day; I've only wanted to for about three decades.

