PhD student in Engineering Cybernetics
Norwegian University of Science and Technology
Department of Engineering Cybernetics
I'm currently doing a PhD in robotic vision. My research focuses on methods to represent and acquire compact, meaningful models of the three-dimensional environment that can be transmitted efficiently and used easily by a human operator or programmer. I'm especially interested in procedural and mathematical descriptions of geometry - which, much like a human language, can express a rich variety of concepts - and the problem of obtaining such descriptions from sensor data.
Work done during PhD at NTNU
A booklet about finding out where your robot is and what's around it, organized by sensing technologies. Current table of contents:
Latest version: sensors.pdf (last updated July 2018)
In 2015, I co-founded a robotics team at my university, with a focus on building autonomous drones. We have participated several times in the International Aerial Robotics Competition.
You can follow our team's progress at our homepage.
I worked on such things as video capture/streaming, simulation, visualization, visual-inertial localization and object tracking.
You can read a 'post mortem' of our first attempt.
A library for making interactive visualizations, with:
Work done during PhD at NTNU
To make a robot do interesting things, it needs to know what is around it - in particular, the geometry. But traditional representations of geometry, such as point clouds or meshes, are impractical to work with, because the information they carry is too low-level. One remedy is to model our prior knowledge of the world, e.g. the kinds of objects we expect to interact with. This raises its own question: how do we store and process descriptions of all the objects in the world in a computer program? In this paper we explore a mathematical, procedural representation to tackle this problem.
Continuous signed distance functions for 3D vision
Simen Haugo, Annette Stahl, Edmund Brekke.
2017 International Conference on 3D Vision (3DV).
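The representation studied here is the continuous signed distance function: a scene is described by a function mapping any 3D point to its signed distance from the nearest surface, and shapes are composed by simple operations on those distances. As a toy illustration of what such a description looks like (this is not the paper's code), here is a minimal C++ sketch:

```cpp
// Minimal illustration of a continuous signed distance representation:
// each function returns the signed distance from point p to a shape
// (negative inside, positive outside), and shapes are composed with
// min (union) and max (intersection/subtraction).
#include <cmath>
#include <cstdio>
#include <algorithm>

struct Vec3 { float x, y, z; };

float length(Vec3 p) { return std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z); }

// Sphere of radius r centered at the origin
float sdSphere(Vec3 p, float r) { return length(p) - r; }

// Axis-aligned box with half-extents b
float sdBox(Vec3 p, Vec3 b) {
    Vec3 d = { std::fabs(p.x) - b.x, std::fabs(p.y) - b.y, std::fabs(p.z) - b.z };
    Vec3 dpos = { std::max(d.x, 0.0f), std::max(d.y, 0.0f), std::max(d.z, 0.0f) };
    float outside = length(dpos);
    float inside  = std::min(std::max(d.x, std::max(d.y, d.z)), 0.0f);
    return outside + inside;
}

// A small composite object: a thin box slab with a sphere carved out of it
float sdScene(Vec3 p) {
    float slab = sdBox({p.x, p.y - 0.5f, p.z}, {1.0f, 0.05f, 1.0f});
    float hole = sdSphere({p.x, p.y - 0.5f, p.z}, 0.3f);
    return std::max(slab, -hole);  // subtraction: slab minus hole
}

int main() {
    Vec3 q = {0.0f, 0.6f, 0.0f};
    std::printf("distance at query point: %f\n", sdScene(q));
    return 0;
}
```

A handful of such primitives and composition operators is enough to describe many man-made objects compactly, which is what makes the representation attractive for transmission and human editing.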
At Ascend I worked on tracking robotic vacuum cleaners from a camera. The methods and failures are described in this report:
roomba.pdf
At Ascend I worked on a visual position tracking system for a drone. The goal was to track its global position within a 20x20 meter arena, aided by a grid pattern. The problem was complicated by unrelated sports markings and patterns, moving bystanders, variable lighting, and objects moving across the arena.
Solving this problem robustly involved a variety of computer vision techniques, including a novel Hough transform, in-the-loop hypothesis verification, and a clever trick for rectifying fisheye images in real time.
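The post mortem covers the details. As a rough sketch of the grid-line detection step - using OpenCV's standard Hough transform rather than the custom variant used in the actual system - the idea looks roughly like this on an already-rectified grayscale frame:

```cpp
// Rough sketch of grid-line detection with a standard Hough transform
// (the competition code used a custom variant; this only illustrates the idea).
// Assumes an already-rectified grayscale image "grid.png".
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::Mat gray = cv::imread("grid.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) { std::fprintf(stderr, "could not read grid.png\n"); return 1; }

    // Edges first, then accumulate votes for lines in (rho, theta) space.
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);

    std::vector<cv::Vec2f> lines;  // each line is (rho, theta)
    cv::HoughLines(edges, lines, 1, CV_PI / 180, 120);

    // A floor grid shows up as two clusters of nearly parallel lines,
    // roughly 90 degrees apart; printing the angles makes this visible.
    for (const cv::Vec2f& l : lines)
        std::printf("rho = %.1f px, theta = %.1f deg\n", l[0], l[1] * 180.0 / CV_PI);
    return 0;
}
```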
I optimized the algorithm with SIMD instructions and dug into video capture and frame decoding in order to run at 60 fps on an embedded platform.
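To give a flavor of that kind of SIMD work, here is a small, hypothetical example of thresholding a grayscale buffer 16 pixels at a time. It uses SSE2 intrinsics for illustration; an embedded ARM platform would use the NEON equivalents, and this is not the actual competition code:

```cpp
// Illustrative SIMD example: thresholding a grayscale buffer 16 pixels at a
// time with SSE2, with a scalar loop handling the tail.
#include <emmintrin.h>  // SSE2 intrinsics
#include <cstdint>
#include <cstdio>
#include <vector>

void threshold_simd(const uint8_t* src, uint8_t* dst, size_t n, uint8_t t) {
    // _mm_cmpgt_epi8 compares signed bytes, so flip the sign bit of both
    // operands to turn it into an unsigned comparison.
    const __m128i bias = _mm_set1_epi8((char)0x80);
    const __m128i vt   = _mm_set1_epi8((char)(t ^ 0x80));
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m128i v    = _mm_loadu_si128((const __m128i*)(src + i));
        __m128i mask = _mm_cmpgt_epi8(_mm_xor_si128(v, bias), vt);
        _mm_storeu_si128((__m128i*)(dst + i), mask);  // 0xFF where src > t
    }
    for (; i < n; ++i) dst[i] = src[i] > t ? 0xFF : 0x00;  // tail
}

int main() {
    std::vector<uint8_t> img(640 * 480), out(img.size());
    for (size_t i = 0; i < img.size(); ++i) img[i] = (uint8_t)(i % 256);
    threshold_simd(img.data(), out.data(), img.size(), 128);
    std::printf("sample outputs: %d %d %d\n", out[0], out[200], out[255]);
    return 0;
}
```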
You can see it in action in the IARC 2016 'post mortem'.
At Ascend I made a simulator for members of the AI group, so they could test and debug their algorithms. It had extensive debugging functionality, such as history scrubbing, command history, status displays, and the ability to record runs.
You can see more in the IARC 2016 'post mortem'.
Our robotics team built an autonomous drone that can fly along paths indoors, without GPS or any external tracking system - only inside-out tracking. With all the things that can go wrong, it's important to have the drone's status available in one place. This GUI tool gives us a live video feed from the on-board cameras, lets us draw flight paths, see position state estimates, commanded velocity and detected obstacles, reset the Kalman filter, and even monitor CPU load and temperatures. (But the best feature is the drone's tiny animated propellers.)
You can see more in the IARC 2016 'post mortem'.
© Simen Haugo
BY-NC-SA 4.0