In a recent article by Emily Frank, Sebastian Heath and Chantal Stein, I learned how to combine Structure from Motion 3D models with RTI and MBI imagery. This is quite interesting, as the two-dimensionality of RTI and MBI has always been a restriction. But let’s back up…

SfM, RTI and MBI

Structure from Motion (SfM) is a method of creating 3D models by photographing an object with many overlapping shots. Software can then combine these photographs into a 3D mesh. This method is a great and cheap alternative to laser scanners, which are more accurate but expensive. SfM can be used on very small objects as well as whole landscapes.
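To give a feeling for what the software does under the hood, here is a toy sketch of one core step of SfM: triangulating a 3D point from its pixel position in two overlapping photos. The camera matrices and the point are made up for illustration, and real SfM software of course also has to estimate the cameras themselves from matched features.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views."""
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null space of A = the point
    X = Vt[-1]
    return X[:3] / X[3]               # homogeneous -> 3D coordinates

# Two simple pinhole cameras: one at the origin, one shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = np.array([0.5, 0.2, 4.0])             # ground-truth 3D point
h1 = P1 @ np.append(point, 1.0)               # project into view 1
h2 = P2 @ np.append(point, 1.0)               # project into view 2
x1, x2 = h1[:2] / h1[2], h2[:2] / h2[2]       # pixel coordinates

print(triangulate(P1, P2, x1, x2))            # recovers the 3D point
```

With thousands of matched features instead of one, this is essentially how the overlapping shots turn into a point cloud, which is then meshed.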

Reflectance Transformation Imaging is a technique that uses a stationary camera and moving lights. This way, you create several photos from the same position, but with different lighting directions. Software can then combine these images into a so-called RTI file. With this you have a 2D photo in which you can interactively change the lighting direction to your liking. In addition, you can apply filters to the file to create overamplified renderings that make the slightest details visible. A normals visualisation is also possible.
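A common way RTI files store this interactivity is as a Polynomial Texture Map: for every pixel, the brightness under a light direction (lu, lv) is modelled as a biquadratic polynomial, and its six coefficients are fitted by least squares from the stack of differently lit photos. A minimal sketch, with made-up light directions and pixel values standing in for a real photo stack:

```python
import numpy as np

rng = np.random.default_rng(0)
n_lights, n_pixels = 12, 5
lu, lv = rng.uniform(-0.7, 0.7, (2, n_lights))   # stand-in light directions

# PTM basis per light: lu^2, lv^2, lu*lv, lu, lv, 1
A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones(n_lights)], axis=1)

# Fake "photo stack": one brightness value per light and pixel,
# generated from hidden ground-truth coefficients for this sketch.
true_coeffs = rng.normal(size=(6, n_pixels))
stack = A @ true_coeffs

# Fit the six PTM coefficients per pixel from the stack.
coeffs, *_ = np.linalg.lstsq(A, stack, rcond=None)

# Interactive relighting is then just evaluating the basis for a new
# light direction, here (0.3, 0.1):
new_light = np.array([0.3**2, 0.1**2, 0.3 * 0.1, 0.3, 0.1, 1.0])
relit = new_light @ coeffs      # one relit brightness per pixel
```

The overamplified filter renderings work on the same fitted coefficients, which is why they can be computed after the fact from a single RTI file.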

Multiband Imaging, on the other hand, describes photos that are made in the infrared or ultraviolet spectrum. MBI photos can make things visible that are hardly seen by the naked eye. Egyptian blue, for example, is a pigment that is often barely visible, but shines like the moon in the night when photographed with MBI.


So in the article, the authors describe how to combine the static 2D images of RTI and MBI with a photogrammetry 3D model. The approach is quite simple: while creating the 3D model, a normal photo taken from the same position as the RTI and MBI photos is used to build the model. When the texture for the 3D model is created at the end, however, the normal photo is replaced by either the RTI or the MBI photo, and the UV texture is created from this one photo alone. This way, you can create a 3D model of an object with a normal texture, an RTI texture and/or an MBI texture. You can import this into 3D software and play around with visibilities and animations.
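The reason this swap works can be shown in a minimal sketch: the mesh and its UV coordinates stay exactly the same, and only the image that those UVs look up changes. The tiny arrays below stand in for a normal photo and an MBI photo taken from the same position, with identical framing assumed.

```python
import numpy as np

normal_photo = np.arange(16.0).reshape(4, 4)   # stand-in normal photo
mbi_photo = normal_photo * 10.0                # stand-in MBI photo, same framing

# Per-vertex UV coordinates of the mesh -- these never change.
uvs = np.array([[0.0, 0.0], [0.5, 0.25], [1.0, 1.0]])

def sample(image, uvs):
    """Nearest-neighbour texture lookup at UV coordinates in [0, 1]."""
    h, w = image.shape
    rows = np.clip((uvs[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    cols = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    return image[rows, cols]

# Same geometry, same UVs -- swapping the image swaps the texture.
print(sample(normal_photo, uvs))   # colours from the normal photo
print(sample(mbi_photo, uvs))      # colours from the MBI photo
```

This is also why the photos have to be taken from the same position: the UV layout is computed once, so any offset between the normal and the RTI/MBI photo would misalign the texture on the model.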

As the authors stress, this visualisation is a great tool to… well, to visualise. I totally agree. The 3D model looks amazing and can show far more than a usual 3D model can. In Blender, for example, the live 3D view enables the user to actually use the 3D software as an analytical tool with different texture modes, and if you want, you can also create animations to show the visualisation outside of the 3D software.


While I have yet to try it myself, I imagine that creating the UV texture from a single image can result in unwanted artefacts. As long as the object is flat, it might work. More recent RTI images, however, are also made of more three-dimensional objects, and I imagine that in those cases the RTI or MBI texture can’t cover the whole object.

Also, although the authors might have tried it, I missed the possibility of using the normals visualisation from the RTI as a normal map channel on the 3D object. I am not that familiar with Blender, but I imagine this is possible. This way, the display of surface topography would be much more detailed.
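The data is at least in the right shape for it: a normals visualisation encodes the surface normal of each pixel as an RGB colour, and the usual mapping (component = colour / 255 × 2 − 1) is exactly what normal maps in 3D packages use. A small sketch of the decoding, with made-up pixel values:

```python
import numpy as np

# A 1x2 stand-in for an RTI normals visualisation image.
normals_image = np.array([[[128, 128, 255],    # roughly "straight up"
                           [255, 128, 128]]], dtype=np.uint8)

# Map colours [0, 255] to normal components [-1, 1], then re-normalise
# to unit length, which normal map shaders expect.
decoded = normals_image.astype(float) / 255.0 * 2.0 - 1.0
decoded /= np.linalg.norm(decoded, axis=-1, keepdims=True)

print(decoded[0, 0])   # approximately [0, 0, 1]: facing the viewer
```

So in principle the normals image could be plugged into a normal map input just like any baked normal map, though I have not tested how well the RTI normals line up with the model's UV layout.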

Lastly, the process of taking the pictures is a bit slower than usual. I also think this is hardly possible with an RTI dome, so you would necessarily have to use Highlight RTI.

All in all, I was very much intrigued by the article and can’t wait to try it out.


Sebastian Hageneuer

Founder & Editor

About the Author

My name is Sebastian. I am a research associate at the Institute of Archaeology at the University of Cologne, Germany, Discipline for Archaeoinformatics. My special interest lies in reconstructing ancient architecture and thinking about ways to present archaeological knowledge to other researchers and the public in an informative and appealing way. I teach 3D documentation of material culture as well as 3D modelling and archaeological reconstruction and work on several projects as part of my job.
