1. 2. 3D and Catch! A Practical Photogrammetry Class for Forensic Science

She stood over the axe, pressing away at the iPad screen, shuffling to the left a little after each tap on the glass. The room was full of lab-coated figures, iPads and smartphones in hand. Some circled human crania, others shoes and empty bottles. All tapping and pressing and moving on, around, in sequence.

This scene has been a regular occurrence in our lab at Teesside University over the past month, because we’ve been running a practical class on photogrammetry for our first-year students.

Data collection in the car lab

Recently Prof. Tim Thompson invited me and fellow PhD researcher Paul Norris to run a series of three-hour practical sessions intended to provide an overview of the applications of photogrammetry in forensic science, and insights into the creation of 3D models. The aim was to host groups of around 20 students in our labs for a series of one-off sessions.

Why Photogrammetry?

It’s clear that our world is changing at a rapid pace due to innovation in digital technology and improved data storage and sharing capabilities. One area which has seen rapid growth in popularity is the creation of 3D imagery for recording and analysis of objects and spaces, which may be linked to other data sets (see LaFrance (2016) in The Atlantic for further discussion). Many academic projects make use of this type of technology; for example, the Smithsonian has made 3D models of an excavation at Jamestown publicly available (Smithsonian X 3D). There are various methods which may be used to create 3D models but, of these, photogrammetry arguably has the least onerous equipment and systems requirements.

In short, photogrammetry involves making measurements of surfaces and areas from photographs. These measurements can be used to create three-dimensional models of the surfaces which feature in the photographs. It is possible to estimate the three-dimensional coordinates of points on a surface by using two or more overlapping images, called stereoscopic images – this estimation is known as stereophotogrammetry. The common points in each image are used to triangulate the three-dimensional location of those points. To obtain this data, we use cameras to capture images from multiple angles and heights around an object or surface. The images are then processed in software which detects those common points and can create a three-dimensional model. The images may also be used to create a texture to overlay on the model, giving an accurate colour representation of the photographed surfaces.
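To make the triangulation step concrete, below is a minimal Python/NumPy sketch (not part of the class materials) of linear stereophotogrammetric triangulation: given the projection matrices of two cameras and the pixel coordinates of the same point in each image, it estimates that point’s three-dimensional position. The camera matrices and pixel coordinates are invented for illustration; in practice, software such as 123D Catch estimates these quantities automatically from the overlapping photographs.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a single 3D point.

    P1, P2 : 3x4 camera projection matrices for the two views.
    x1, x2 : (u, v) image coordinates of the same point in each view.
    Returns the estimated 3D point in world coordinates.
    """
    # Each image observation contributes two linear constraints on the
    # homogeneous 3D point X, of the form u * (P[2] @ X) - (P[0] @ X) = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The point is the (approximate) null vector of A, found via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # convert from homogeneous coordinates

# Illustrative set-up: one camera at the origin, a second camera one unit
# to the side, both looking down the z-axis (identity intrinsics assumed).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate_point(P1, P2, (0.0, 0.0), (-0.25, 0.0)))  # approx. [0, 0, 4]
```

Real photogrammetry pipelines repeat this kind of estimation for thousands of matched points across many images, then refine the result to produce the final surface model.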

This means that all that is required is a device to capture images, such as a digital camera or smartphone, and a computer with decent processing power and the necessary software. Another option is a device with internet connectivity, as images can be processed in the cloud via applications such as 123D Catch from Autodesk. Many people have access to digital cameras and computers, while smartphones have a high penetration rate in many countries. For our students, exposure to and understanding of this technology could provide inspiration for future projects and aid them in interpreting photogrammetric data.

Class Design

The plan was to get the students to gather images and build models in 123D Catch, chosen for its accessibility and ease of processing. Given that processing times can be relatively lengthy, the final design placed the review of the important elements of the method and outputs first. This would enable the students to plan and collect their data as early as possible. Following data collection, the models could be left to process while we explained further aspects of the theory. This would cover platforms for sharing data, interacting with 3D models using Google Cardboard, and reinforcing knowledge through a class quiz.

Drawing the Outlines: Class Part 1

The class introduction provided a review of the basics of photogrammetry and potential applications. We looked at the use of photogrammetry and CT scan data in the analysis of forensic evidence from Thali et al. (2003) and Brüschweiler et al. (2003), who published a series of articles integrating 3D photogrammetric models with CAD models and data from CT scans as part of the “Virtopsy” project (Thali et al. 2005; Bolliger et al. 2008; Bolliger and Thali 2015) (which, incidentally, resulted in the amazing ‘Virtobot’ robotic system (Breitbeck et al. 2013)). Their work included examples such as the overlay of rubber bullets onto photogrammetric models of bruising on a cadaver, and of a shotgun model onto a photogrammetric model of a wound. Other tests of the application of photogrammetry in forensics can be found in Slot et al. (2014) and Urbanová et al. (2015).

I was keen to emphasise the application of this technology across disciplines. From archaeology, one example was the detailed 3D models from Ducke et al. (2011) of the mass Viking grave discovered at Weymouth, UK. This was a great illustration of the application of photogrammetry to a complex archaeological context containing human remains, providing relatively easy access to, and preservation of, a range of views and spatial data which would have been more difficult to capture using other methods. The application to a mass grave illustrates the potential for recording such contexts in forensic investigations. We also showed examples from other sites such as Çatalhöyük, where 3D recording has been included in the archaeological workflow for several years (Berggren et al. 2015; Forte et al. 2012; Forte et al. 2015; Haddow et al. 2015). Other good examples of archaeological application can be seen in the ‘Virtual Taphonomy’ approach of Wilhelmson and Dell’Unto (2015) and in De Reu et al. (2013).

From palaeontology, Falkingham et al. (2014) provided another great example. They reconstructed a 3D model of the dinosaur tracks of the Paluxy River Dinosaur Chase Sequence from 70-year-old photographs. Using only 16 photos they were able to reconstruct the sequence, which is now subdivided between various institutions, and compare the full model to the historic plans. (Peter Falkingham’s blog has some excellent content on the application of photogrammetry, and he has an open challenge for anyone who wants to improve on those Paluxy Chase models (Falkingham 2015)!)

We touched on some of the ethical issues involved in the creation of such models, and considerations for sharing. These aspects need to be handled carefully, and clearly any models produced are subject to the same ethical and legal requirements as any other documentation from forensic cases. Other points to consider are intellectual property, copyright and ownership of digital models: what constitutes the ‘author’s own intellectual creation’ when the model is a copy of a physical object or area? This is a fast-developing field, which also has implications for the ways in which data may be shared and reused.

Practical Activities & Demos: Class Part 2

For the practical activities students were split into groups and provided with iPads. They chose objects to model and then documented their plan: the objectives and expected outputs, considering ethics and any risks involved in the data capture process.

Recording the shoe!

We provided a range of objects from our lab at Teesside University, such as replica crania with bullet trauma, axes, and saws. Students used their own objects for modelling too; for example, their shoes ended up on the benches. In addition, we took trips to the car lab, where our students were able to take images of cars set up for mock forensic investigations.

Towards the end of the class we had a bit of fun. First, to recap the learning from the first half, we conducted a quick multiple-choice quiz – a great tool for helping students remember the principles. After this we explored publicly available 3D models on SketchFab and 123D Catch using a version of Google Cardboard that I purchased for the class.

Rocking Google Cardboard in the Lab

These enabled us to engage the students in virtual forensics and archaeology in an innovative way, providing an example of how data can be shared and visualised. The combination of Google Cardboard and the SketchFab platform was highly immersive and interactive.

‘Shoeprint’: a publicly available model on 123D Catch by fovdjan, Creative Commons Attribution-NonCommercial-Share Alike license.

Finally, when the first models had finished processing, we reviewed and critiqued them together as a group using the iPads, providing suggestions for improved data collection and set-up.

Thoughts

There were several aspects that we were interested in: the timing and structure of the activities, whether our guidance was sufficient for students to create models, and whether the class was interesting and motivating.

Timing

A key consideration in the class design was timing. In the end, we were able to complete the explanations and collect the data in good time. The models then processed in the cloud while the other demos and explanations took place. The first session finished ahead of time, but from the second session onwards we added the car lab visit to the schedule, which took up the remainder of the time. We reviewed some processed models in class and set up a shared project on 123D Catch, where all of the student models were eventually gathered for sharing and review of the outputs.

What happened with the models?

The students loved the data gathering exercise. They were eager to get started as soon as we explained what the practical entailed. Sometimes this meant that they were so keen to get started that they didn’t follow all of the guidance, but it was good to see them get stuck into the data gathering exercise. Being able to visualise the models and analyse them in class was very useful for the students’ learning process. For example, some models looked a little ‘holey’, due to a lack of overlapping images, and others showed dark areas due to shadows. On the other hand, some were perfect! Students also began to realise that where they placed their objects could have an impact on their model creation. With larger objects, like the cars, or objects placed on higher benches, they sometimes found they couldn’t easily photograph parts of the object – a stepladder was available for the cars, but only some students made use of it.

Stepladders at the ready in the car lab!

3D model of a saw in the lab, by fenton emma, Creative Commons Attribution-NonCommercial-Share Alike license (Fenton 2016).

3D model of a replica human cranium in the lab, by jamespriestly2011, Creative Commons Attribution-NonCommercial-Share Alike license (Priestley 2016).

Room layout and lighting should be carefully considered when planning data capture. In our case, both the lab and the car lab provided challenges. In the lab, the lighting cast shadows at times, and in the car lab there was significant reflection on the car surfaces. I think both were very useful in developing an understanding of the practicalities of collecting photographs for photogrammetry, as dealing with data collection in uncontrolled environments is a real-life challenge.

Another issue we encountered was movement in the background of the images – having multiple people standing around objects in the room meant that there was some background interference which 123D Catch struggled with. This is something that can be managed quite easily, by timing the photos correctly, having ample space, and making sure the shot backgrounds are clear. We also recommended the use of ad-hoc markers for referencing during model creation and later calibration and measurement. Nevertheless, most students did not make use of markers during data collection.
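As a rough illustration of why markers help with later calibration and measurement, here is a small hypothetical sketch: photogrammetric models are reconstructed at an arbitrary scale, so two markers a known real-world distance apart can be used to rescale the whole model into true units. The coordinates and distances below are made up for illustration and are not taken from the class models.

```python
import numpy as np

def scale_model(points, marker_a, marker_b, true_distance_mm):
    """Rescale a point cloud so that two marker points end up the known
    real-world distance apart (all other geometry scales with them)."""
    model_distance = np.linalg.norm(np.asarray(marker_a, dtype=float)
                                    - np.asarray(marker_b, dtype=float))
    factor = true_distance_mm / model_distance
    return np.asarray(points, dtype=float) * factor

# Example: two markers placed 100 mm apart on the bench appear roughly
# 0.37 arbitrary units apart in the reconstructed model.
cloud = np.random.rand(1000, 3)  # stand-in for a reconstructed point cloud
scaled_cloud = scale_model(cloud, (0.10, 0.20, 0.05), (0.41, 0.40, 0.05), 100.0)
```

In practice a printed scale bar or a pair of clearly visible targets placed beside the object serves the same purpose, and also provides a fixed reference for checking any measurements taken from the model.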

The App

123D Catch has an intuitive user interface, and data collection and upload are relatively straightforward. However, following data collection it would have been useful to be able to scroll through images and select or change multiple images, rather than working one by one. We did experience errors during processing: sometimes connectivity issues with the Wi-Fi meant that data was lost; other times the processing failed, likely due to issues in the data collection. It was neat to have the ability to download the app to multiple devices: we loaded it onto the university iPads, but our students often downloaded it directly to their own devices when they saw it was available on the app store. During the first week we handed out 10 iPads, but as it became obvious that they preferred to use their own tablets and smartphones, we ended up handing out around three iPads per session.

An important note on using shared devices – it’s always a good idea to make sure that you’ve logged out of all of your accounts. Leaving your accounts logged in can compromise your data security!

Engagement

The students appeared to enjoy the ‘hands-on’ nature of the class, and were able to create models successfully. We also found, somewhat surprisingly, that people love Google Cardboard: it really enhanced the experience of viewing the outputs of photogrammetry. The viewers were so popular that several students asked to order their own during the class. There has been lots of discussion about our experience of 3D artefacts (di Franco et al. 2015), and the impact of virtual archaeology (Forte 2010; di Franco et al. 2012). I believe that this technology, combined with mobile devices, has already started to revolutionise our communications.

Suggestions

Some suggestions for similar activities would be to consider:

1. Exercise design: incorporating challenges and providing time for reflection and feedback
2. Interactive activities: the shared project on 123D Catch and Google Cardboard were successful in this case
3. Equipment requirements
4. Application limitations
5. Room layout
6. Types of object

Within the field of forensics, photogrammetry can maximise the potential of available evidence and capture transient scenes or objects before they are altered or destroyed – providing a detailed, three-dimensional model of a scene or specific subject, which can be easily analysed, measured and viewed from multiple angles, and which can be preserved for future study.

For example, the Virtopsy project highlighted the fact that 3D models of injuries and whole cadavers can be highly useful when the physical body will be buried and will decompose. Similarly, as with the reconstructed Paluxy Chase dinosaur tracks, old photographs may be used to create models in forensic settings, for example of footwear impressions.

Running these classes was great fun, and nothing is more valuable than the exchange between students and researchers. You learn as much as you teach! Many thanks go to Paul and Tim for their cooperation, and to our great students for taking part and granting permission to be photographed.

You can explore the practical in more detail via the class Prezi I made, and our class models on the 123D Catch project.

Bibliography

Berggren, Å., N. Dell’Unto, M. Forte, S. Haddow, I. Hodder, J. Issavi, N. Lercari, C. Mazzucato, A. Mickel, and J. S. Taylor. 2015. “Revisiting Reflexive Archaeology at Çatalhöyük: Integrating Digital and 3D Technologies at the Trowel’s Edge.” Antiquity 89 (344): 433–448. doi:10.15184/aqy.2014.43.

Bolliger, S. A., and M. J. Thali. 2015. “Imaging and Virtual Autopsy: Looking Back and Forward.” Phil. Trans. R. Soc. B 370 (1674). doi:10.1098/rstb.2014.0253.

Bolliger, S. A., M. J. Thali, S. Ross, U. Buck, S. Naether, and P. Vock. 2008. “Virtual Autopsy Using Imaging: Bridging Radiologic and Forensic Sciences. A Review of the Virtopsy and Similar Projects.” European Radiology 18 (2): 273–282. doi:10.1007/s00330-007-0737-4.

Breitbeck, Robert, Wolfgang Ptacek, Lars Ebert, Martin Furst, and Gernot Kronreif. 2013. “Virtobot – A Robot System for Optical 3D Scanning in Forensic Medicine.” Proceedings of the 4th International Conference on 3D Body Scanning Technologies, Long Beach CA, USA, 19-20 November 2013: 84–91. doi:10.15221/13.084.

Brüschweiler, W., M. Braun, R. Dirnhofer, and M. J. Thali. 2003. “Analysis of Patterned Injuries and Injury-Causing Instruments with Forensic 3D/CAD Supported Photogrammetry (FPHG): An Instruction Manual for the Documentation Process.” Forensic Science International 132 (2): 130–138. doi:10.1016/S0379-0738(03)00006-9.

De Reu, J., G. Plets, G. Verhoeven, P. De Smedt, M. Bats, B. Cherretté, W. De Maeyer, et al. 2013. “Towards a Three-Dimensional Cost-Effective Registration of the Archaeological Heritage.” Journal of Archaeological Science 40 (2): 1108–1121. doi:10.1016/j.jas.2012.08.040.

Di Franco, P., F. Galeazzi, and C. Camporesi. 2012. “3D Virtual Dig: A 3D Application for Teaching Fieldwork in Archaeology.” Internet Archaeology 32 (4). doi:10.11141/ia.32.4

Di Franco, Paola Di Giuseppantonio, Carlo Camporesi, Fabrizio Galeazzi, and Marcelo Kallmann. 2015. “3D Printing and Immersive Visualization for Improved Perception of Ancient Artifacts.” Presence: Teleoperators and Virtual Environments 24 (3): 243–264. doi:10.1162/PRES_a_00229.

Ducke, B., D. Score, and J. Reeves. 2011. “Multiview 3D Reconstruction of the Archaeological Site at Weymouth from Image Series.” Computers and Graphics (Pergamon) 35 (2): 375–382. doi:10.1016/j.cag.2011.01.006.

Falkingham, P. 2015. “The Historical Photogrammetry Challenge – over to You!” Falkingham Lab Webpage. Accessed 10-Oct-2015. https://pfalkingham.wordpress.com/2015/01/30/the-historical-photogrammetry-challenge-over-to-you/.

Falkingham, Peter L., Karl T. Bates, and James O. Farlow. 2014. “Historical Photogrammetry: Bird’s Paluxy River Dinosaur Chase Sequence Digitally Reconstructed as It Was prior to Excavation 70 Years Ago.” PLoS ONE 9 (4): e93247. doi:10.1371/journal.pone.0093247.

Fenton, E. 2016. “3D Model of Saw.” Accessed 10-Mar-2016. http://www.123dapp.com/fullpreview/embedViewer?assetId=5181366

Forte, M. 2010. “Introduction to Cyber-Archaeology.” Archaeopress: Oxford.

Forte, M., N. Dell’Unto, J. Issavi, L. Onsurez, and N. Lercari. 2012. “3D Archaeology at Çatalhöyük.” International Journal of Heritage in the Digital Era 1 (3): 351–378. doi:10.1260/2047-4970.1.3.351.

Forte, M., N. Dell’Unto, K. Jonsson, and N. Lercari. 2015. “Interpretation Process at Çatalhöyük Using 3D.” In Hodder, I. and Marciniak, A. (eds.) Assembling Çatalhöyük, Maney: Leeds.

Fovdjan. “Shoeprint.” Accessed 03-Mar-2016. http://www.123dapp.com/fullpreview/embedViewer?assetId=4218791

Haddow, S. D., C. J. Knüsel, B. Tibbetts, M. Milella, and B. Betz. 2015. “Human Remains.” In Çatalhöyük Archive Report 2015, 85–101. doi:10.13140/RG.2.1.4879.7525.

LaFrance, Adrienne. 2016. “Archaeology’s Information Revolution.” The Atlantic. Accessed 03-Mar-2016. http://www.theatlantic.com/technology/archive/2016/03/digital-material-worlds/471858/.

Priestley, J. 2016. “3D Model of Replica Human Cranium.” Accessed 10-Mar-2016. http://www.123dapp.com/fullpreview/embedViewer?assetId=5258354.

SketchFab. 2015. “SketchFab.” Accessed 10-Oct-2016. https://sketchfab.com.

Slot, L., P. K. Larsen, and N. Lynnerup. 2014. “Photogrammetric Documentation of Regions of Interest at Autopsy—A Pilot Study.” J Forensic Sci 59: 226–230. doi:10.1111/1556-4029.12289.

Smithsonian. 2015. “Smithsonian X 3D Explorer: Jamestown Chancel Burial Excavation.” Accessed 03-Mar-2016. http://3d.si.edu/model/jamestown-chancel-burial-excavation-overall-site.

Thali, M J, M Braun, U Buck, E Aghayev, C Jackowski, P Vock, M Sonnenschein, and R Dirnhofer. 2005. “VIRTOPSY – Scientific Documentation, Reconstruction and Animation in Forensic: Individual and Real 3D Data Based Geo-Metric Approach Including Optical Body/object Surface and Radiological CT/MRI Scanning.” Journal of Forensic Sciences 50 (2): 428–442. doi:10.1520/JFS2004290.

Thali, M. J., M. Braun, J. Wirth, P. Vock, and R. Dirnhofer. 2003. “3D Surface and Body Documentation in Forensic Medicine: 3-D/CAD Photogrammetry Merged with 3D Radiological Scanning.” Journal of Forensic Sciences 48 (6): 1356–1365.

Urbanová, P., P. Hejna, and M. Jurda. 2015. “Testing Photogrammetry-Based Techniques for Three-Dimensional Surface Documentation in Forensic Pathology.” Forensic Science International 250: 77–86. doi:10.1016/j.forsciint.2015.03.005.

Wilhelmson, H., and N. Dell’Unto. 2015. “Virtual Taphonomy: A New Method Integrating Excavation and Postprocessing in an Archaeological Context.” American Journal of Physical Anthropology 157 (2): 305–321. doi:10.1002/ajpa.22715.
