
Exhibit Workshop from Camera to 3D Space

Tested's Norm Chan visited the Exploratorium with Matt Bell from Matterport, a company that has created a camera that scans a space and generates a fully textured, interactive 3D model. They set up the camera in our exhibit development workshop to capture 3D images of the space. For those who haven't visited the Exploratorium, our exhibit development workshop is an amazing space where our exhibit developers, as well as visiting artists, educators, and tinkerers, create the interactive exhibits in the museum. It's a busy, often chaotic space with many different types of tools, materials, and machines. We provide a direct view of the workshop from the South Gallery of the museum, so that visitors can get a peek at how our exhibits are made and taken care of.

I was intrigued by the interactive walk-through of the space that was created with the Matterport camera (see below) and wanted to learn more about the technology and what was going on underneath the hood.  Seeing the photographic images stitched together and being able to navigate from capture point to capture point is familiar and compelling, but my mind instantly went to questions like "what does the 3D model of the space look like?" and "how detailed is the underlying mesh?"
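One concrete way to answer "how detailed is the underlying mesh?" is to count its vertices and triangles. The sketch below does this for Wavefront OBJ text, a common interchange format for textured meshes; the format choice and the tiny sample geometry are illustrative assumptions on my part, not a statement about Matterport's actual export pipeline:

```python
def mesh_stats(obj_text):
    """Count vertices and triangles in Wavefront OBJ text.
    Faces with n > 3 corners are counted as n - 2 triangles (fan split)."""
    verts = tris = 0
    for line in obj_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts += 1
        elif parts[0] == "f":
            tris += max(len(parts) - 3, 0)
    return verts, tris

# Tiny illustrative mesh: a unit square split into two triangles.
sample = """\
v 0 0 0
v 1 0 0
v 1 1 0
v 0 1 0
f 1 2 3
f 1 3 4
"""
print(mesh_stats(sample))  # (4, 2)
```

Run against a full room scan, numbers like these give a quick sense of how much geometric detail survives in the model versus what the photographic textures carry.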

I asked Scott Adams from Matterport some questions to find out more about how their camera generates the 3D model.

[RR] How does ambient light affect how the 3D scanning camera captures geometry and textures?  Does having more illumination make for more accurate models?

[SA] As long as the environment has comfortable/normal interior lighting, the system will perform happily. We generally advise turning on all available internal lighting, but that's more to avoid gloomy corners. Unless we're talking about a space that is intentionally quite dark, there should be no impact on scanning. The only thing to avoid is direct sunlight; the infrared in sunlight can interfere with the IR depth sensors and cause alignment issues.

[RR] Do you scan with the camera at the same height as you move it through a space? How does the camera height affect the data captured?

[SA] Great question. By default, we advise keeping the camera at a height of 5-6 feet. This makes for normal viewing for the average person. If you're looking for the best possible mesh/coverage of an area, you can drop the camera down to 2-3 feet. This will help fill out the underside of tables and such. In practice, very few users do this unless there's a compelling reason -- it makes navigation a bit odd (as the height changes between locations). If you're curious, try doing 2-3 test scans in a single room, changing camera heights between scans, to see the effect.
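The effect Scott describes comes down to simple line-of-sight geometry: a sight line from a tall tripod to a floor point under a table gets clipped by the table's near edge, while a lowered camera slips underneath it. This toy 2D occlusion check (all distances and heights are hypothetical, in feet) makes that concrete:

```python
def visible_under_table(cam_height, edge_dist, point_dist, table_height):
    """Can a camera at (0, cam_height) see a floor point at (point_dist, 0)
    that lies beneath a tabletop starting at x = edge_dist, y = table_height?
    The sight line clears the table only if it dips below the near edge."""
    ray_height_at_edge = cam_height * (1 - edge_dist / point_dist)
    return ray_height_at_edge < table_height

# Floor point 10 ft away, under a 2.5-ft-high table whose edge is 4 ft away:
print(visible_under_table(5.5, 4, 10, 2.5))  # tripod-height camera: False
print(visible_under_table(2.5, 4, 10, 2.5))  # lowered camera: True
```

This is why the low scans "fill out the underside of tables": points the 5-6 foot passes never saw get their first valid depth samples from the 2-3 foot positions.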

[RR] Is there a Google Cardboard version of Matterport available for experimentation?

[SA] We haven't announced a release date yet, but yes, Matterport will indeed be available for use with Google Cardboard in 2016! Currently, you can experience Matterport’s VR app on the Samsung GearVR.

Exhibit Development Shop 3D Mesh from above

3D polygonal mesh of the Exploratorium Exhibit Development Workshop with viewpoint from suspended walkway over the South Gallery

3D Shaded Wireframe Mesh of Exhibit Development Workshop

View of 3D model of Exhibit Development Workshop showing mesh triangles and textures lit by a directional light 

Here's Tested's video interview with Matt, in which he explains more about how the camera works: