Simen Haugo

PhD student in Engineering Cybernetics
Norwegian University of Science and Technology
Department of Engineering Cybernetics

I'm currently doing a PhD in robotic vision. My research focuses on methods to represent and acquire compact, meaningful models of the three-dimensional environment that can be transmitted efficiently and used easily by a human operator or programmer. I'm especially interested in procedural and mathematical descriptions of geometry, which can express a rich variety of concepts much like a human language, and in the problem of obtaining such descriptions from sensor data.


Writing


Short
Aug 2018
Better research practice
Article
May 2018
Working from home: A history of telerobot displays
Booklet
Apr 2018
30 dubious ways to find your robot without GPS
Short
Mar 2018
A life of OpenGL programming
Short
Feb 2018
Rotations and mathematical hammers II
Short
Jan 2018
Rotations and mathematical hammers I
Short
Dec 2017
Papers I loved in 2017
Short
Nov 2017
Lugging a 120cm tube across the world for a conference
Lecture
Oct 2017
If a stranger on the train asked you about computer vision
Lecture
Sep 2017
Visualizing computer programs
Technical
Jul 2017
Real-time video capture for computer vision
Technical
Jun 2017
The case of Huffman and the missing table
Technical
Aug 2016
International Aerial Robotics Competition: a postmortem
Technical
Jul 2013
Raymarching Distance Fields (old blog)

30 dubious ways to find your robot without GPS

Work done during PhD at NTNU

A booklet about finding out where your robot is and what's around it, organized by sensing technology.

Latest version: sensors.pdf (last updated July 2018)



Ascend NTNU

In 2015, I co-founded a robotics team at my university, with a focus on building autonomous drones. We have participated several times in the International Aerial Robotics Competition.

You can follow our team's progress at our homepage.

I worked on such things as video capture/streaming, simulation, visualization, visual-inertial localization and object tracking.

You can read a 'post mortem' of our first attempt.

vdb

A library for making interactive visualizations. With it you can:

Prototype GUIs
Visualize 3D data
Make annotation tools
Tweak parameters live

Latest release (github)
A talk I gave in 2017 (transcript)

Continuous signed distance functions for 3D vision

Work done during PhD at NTNU

To make a robot do interesting things, you need to know what is around it, in particular the geometry. But traditional representations of geometry, such as point clouds or meshes, are not very practical to work with: the information they carry is too low-level. One solution is to model our knowledge of the world, e.g. the kinds of objects we expect to interact with. This raises a dilemma: how do we store and process all the objects in the world in a computer program? In this paper we explore a mathematical, procedural representation to tackle this issue.
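To give a flavor of what "mathematical, procedural" means here, below is a minimal sketch of signed distance functions, the representation the paper builds on. The primitives and operators are standard textbook examples, not the specific ones used in the paper: a shape is a function returning the signed distance from a point to its surface (negative inside), and shapes compose like a small language.

```python
import math

# A signed distance function (SDF) maps a 3D point to the distance from
# the nearest surface: negative inside the shape, positive outside.

def sphere(radius):
    return lambda p: math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - radius

def box(hx, hy, hz):
    # Axis-aligned box with half-extents (hx, hy, hz).
    def f(p):
        qx, qy, qz = abs(p[0]) - hx, abs(p[1]) - hy, abs(p[2]) - hz
        outside = math.sqrt(max(qx, 0)**2 + max(qy, 0)**2 + max(qz, 0)**2)
        inside = min(max(qx, qy, qz), 0.0)
        return outside + inside
    return f

def union(f, g):
    # CSG union: the nearest of two shapes.
    return lambda p: min(f(p), g(p))

def translate(f, dx, dy, dz):
    return lambda p: f((p[0] - dx, p[1] - dy, p[2] - dz))

# A compact "program" describing a scene: a unit sphere next to a box.
scene = union(translate(sphere(1.0), -2, 0, 0), box(0.5, 0.5, 0.5))

print(scene((-2.0, 0.0, 0.0)))  # -1.0: at the sphere's center
print(scene((0.0, 0.0, 2.0)))   # 1.5: 1.5 units above the box
```

The whole scene is one closure, so it is cheap to store and transmit, and richer concepts (repetition, blending, deformation) are just more operators.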

Continuous signed distance functions for 3D vision
Simen Haugo, Annette Stahl, Edmund Brekke.
2017 International Conference on 3D Vision (3DV).

3D model registration in sparse 3D reconstructions

Master's thesis at NTNU

To make use of object models, we need to recognize them in the data obtained from sensors. In my master's thesis I discuss ways to detect and localize objects modeled with signed distance functions, particularly in sparse 3D point clouds such as those obtained with visual SLAM.

csdf-thesis.pdf
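The core idea can be sketched in a few lines (this is a toy illustration, not the thesis' actual algorithm): points observed on the object's surface have signed distance zero, so the pose that best explains the point cloud minimizes the summed absolute SDF values. Here a 2D circle stands in for the object model, and a brute-force grid search stands in for a proper optimizer.

```python
import math

def circle_sdf(p, cx, cy, r):
    # Signed distance from point p to a circle centered at (cx, cy).
    return math.hypot(p[0] - cx, p[1] - cy) - r

# Sparse "observed" points sampled from a unit circle centered at (3, 1).
points = [(3 + math.cos(a), 1 + math.sin(a)) for a in
          (0.0, 1.0, 2.5, 4.0, 5.5)]

def cost(cx, cy):
    # Registration objective: surface points should have distance zero.
    return sum(abs(circle_sdf(p, cx, cy, 1.0)) for p in points)

# Brute-force grid search over candidate translations in [-5, 5].
best = min(((cost(cx / 2, cy / 2), cx / 2, cy / 2)
            for cx in range(-10, 11) for cy in range(-10, 11)),
           key=lambda t: t[0])
print(best)  # cost ~0 at (3.0, 1.0)
```

In practice the search is over full 6-DOF poses and uses gradient-based optimization rather than a grid, but the objective has the same shape.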



Tracking robot vacuum cleaners

At Ascend I worked on tracking robotic vacuum cleaners from a camera. The methods and failures are described in this report:

roomba.pdf

Inside-out position tracking

At Ascend I worked on a visual position tracking system for a drone. The goal was to track its global position within a 20x20 meter arena, aided by a grid pattern. The problem was confounded by additional sports markings and unrelated patterns, moving bystanders, variable lighting, and objects moving around the arena.

Solving this problem robustly involved a variety of computer vision techniques, including a novel Hough transform, hypothesis verification-in-the-loop, and a clever trick for rectifying fisheye images in real-time.
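For readers unfamiliar with the Hough transform, here is the classical version for line detection (a textbook sketch, not the novel variant mentioned above). Each point votes for every line (parameterized by angle theta and offset rho) that could pass through it; lines present in the data show up as peaks in the vote accumulator, which is what makes the approach robust to the clutter described above.

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0, max_rho=100.0):
    # Accumulator over line parameters (theta, rho),
    # where a line is the set of points with x*cos(theta) + y*sin(theta) = rho.
    n_rho = int(2 * max_rho / rho_res) + 1
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int(round((rho + max_rho) / rho_res))
            if 0 <= r < n_rho:
                acc[t][r] += 1  # this point votes for line (theta, rho)
    # Return the strongest line as (theta, rho).
    t, r = max(((t, r) for t in range(n_theta) for r in range(n_rho)),
               key=lambda tr: acc[tr[0]][tr[1]])
    return math.pi * t / n_theta, r * rho_res - max_rho

# Points on the vertical line x = 10 should vote for theta = 0, rho = 10.
theta, rho = hough_lines([(10, y) for y in range(20)])
print(theta, rho)
```

A grid of lines produces a comb of such peaks at two dominant angles, which is what a grid-based localization system would then fit a pose to.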

I optimized the algorithm with SIMD instructions and dug into video capture and frame decoding to run at 60 fps on an embedded platform.

You can see it in action in the IARC 2016 'post mortem'.

AI simulation and debugging tool for IARC

At Ascend I made a simulator for members of the AI group, letting them test and debug their algorithms. It had extensive debugging functionality, such as history scrubbing, command history, status displays, and the ability to record runs.

You can see more in the IARC 2016 'post mortem'.

Mission status viewer

Our robotics team built an autonomous drone that can fly along paths indoors, without GPS or any external tracking system, using only inside-out tracking. With all the things that can go wrong, it's important to have the system's status available in one place. This GUI tool gives us a live video feed from the on-board cameras, lets us draw flight paths, see position state estimates, commanded velocity, and detected obstacles, reset the Kalman filter, and even monitor CPU load and temperatures. (But the best feature is the drone's tiny animated propellers.)

You can see more in the IARC 2016 'post mortem'.

Simen Haugo © 2019
BY-NC-SA 4.0