
Modelling Digital Autopsies on Medical Autopsies

Olivier

2015

Citation information

M. S. Olivier. “Modelling Digital Autopsies on Medical Autopsies”. In: American Academy of Forensic Sciences 67th Annual Scientific Meeting. Orlando, FL, USA, 2015

Abstract

Forensic examination is at the core of digital forensics practice. Such examination uses science to provide an answer to a question that is relevant for legal or related purposes. The question may entail association, event reconstruction or some other insight that may be determined from forensic evidence. Once an answer x is obtained the question can be reformulated in the form: Is x true? The outcome of such an examination then is a (qualified) yes or no (or unable to determine). The need to qualify the yes or no relates to the level of certainty with which the question can be answered.

The issue of certainty (or, conversely, error rate) has plagued digital forensics significantly over its short history. Casey proposed a ‘qualitative’ certainty scale that provides vocabulary to express (un)certainty. Cohen suggests that uncertainty may be derived from processor error rates. A number of other researchers mention the issue of error rates, but do not provide answers.

The premise of this paper is that certainty is increased when alternative explanations have been eliminated. It is noted that research on digital forensic examination is limited and often focusses on specific technologies that are subject to change, which increases the uncertainty of any examination of such technology that does not match the known technology exactly. Cohen’s and Carrier’s work on digital forensic examination provides notable exceptions. This paper suggests that more technology-neutral research on the examination of digital forensic evidence needs to be done before it will be possible to give a clearer indication of the (un)certainty of any given finding.

As a member of the family of forensic sciences, digital forensics may gain valuable insights by learning from the other forensic sciences. Metaphors from the medical sciences are commonly used, such as the notion of a dead investigation or the Autopsy Forensics Platform. It therefore seems potentially useful to conduct a thought experiment that explores the similarities and differences between medical autopsies and (potential) digital forensic autopsies.

The first striking analogy is the fact that in the latter part of the 1800s medico-legal death investigations experienced a crisis due to the variety of (questionable) methods used. Rudolf Virchow is credited with establishing a method to conduct autopsies that met scientific standards and that subsequently became the standard protocol for such autopsies.

One of the characteristics of the Virchow protocol is that the entire body is examined, irrespective of the presumed cause of death. The intention, as already suggested above, is to rule out any other explanations of the cause of death (or of potential contributing factors). To this end the body is first examined externally and then internally. The internal examination consists of a systematic removal of organs; each removed organ is weighed and otherwise examined to establish that it is consistent with the normal characteristics of such an organ. It is this verification that “everything else” is normal which is arguably one of the major differences between digital forensics and a forensic autopsy. A second significant difference when applying the metaphor is that the primary intention of a forensic autopsy is to determine the cause of death, while the digital autopsy aims to discover some other cause or evidence in a digital artefact.

In the case of the medical autopsy, obvious or hypothesised causes of death are examined first, if possible. Protocols exist for the analysis of gunshot wounds, lacerations, abrasions and other injuries. In a similar vein we suggest that digital evidence be examined as it has been previously (or that new protocols be developed for specific incidents). The intention of the remainder of the generic protocol is to rule out other explanations (or to identify other possible explanations where they may exist).

To continue the thought experiment the challenge is therefore to determine whether it is possible (and meaningful) to remove ‘organs’ from a digital artefact and express an opinion on the normality of such ‘organs’.

The obvious equivalent of an organ in a digital environment is a system (such as an operating system or a database system), a subsystem, or an application. The National Software Reference Library at NIST already catalogues the hashes of files that occur ‘normally’ in such operating systems and other applications. The thought experiment described here requires one to go a step or three further: a given version of a system, subsystem or application (henceforth system) will consist of a number of ‘known’ files. Ideally (at least) two sets are required: a set of file hashes of a ‘minimal’ installation of the application and a set of hashes of a full installation. In both cases only the constant files should be included; files that are subject to change, such as configuration files and logs, will be considered below. Assume that such a database exists and that, for any system Si, it contains the set Si↑ of hashes corresponding to a full installation and the set Si↓ of hashes corresponding to a minimal installation. Assume that F is the set of hashes of all found files. Then FSi = {f ∈ F | f ∈ Si↑} is a potential system (or ‘organ’); if Si↓ ⊆ FSi then FSi may be considered ‘normal’. It also means that every f ∈ FSi has been accounted for. Since files may be shared by systems, it is possible that f ∈ FSi and f ∈ FSj for i ≠ j.
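To make the ‘organ removal’ step concrete, the following Python sketch partitions the hashes of found files against full-install and minimal-install reference sets. The evidence path, the system name and the hash values shown are invented for illustration; in practice the reference sets would be drawn from a source such as the NSRL.

import hashlib
from pathlib import Path

# A sketch of the 'organ removal' check: F is the set of hashes of all found
# files; for each known system the full-install set plays the role of Si^ (up)
# and the minimal-install set the role of Si (down).

def file_hash(path: Path) -> str:
    """Return the SHA-256 hash of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def collect_hashes(root: Path) -> dict[str, set[Path]]:
    """Map every hash found under root to the files carrying it (the set F)."""
    found: dict[str, set[Path]] = {}
    for p in root.rglob("*"):
        if p.is_file():
            found.setdefault(file_hash(p), set()).add(p)
    return found

def examine_system(found_hashes: set[str], full: set[str], minimal: set[str]):
    """Return (FSi, normal): FSi = {f in F | f in full}; normal iff minimal is a subset of FSi."""
    f_si = found_hashes & full
    return f_si, minimal <= f_si

# Hypothetical usage; "ExampleDBMS 4.2" and the hash strings are placeholders.
if __name__ == "__main__":
    found = collect_hashes(Path("/mnt/evidence"))
    known = {"ExampleDBMS 4.2": ({"h1", "h2", "h3"}, {"h1", "h2"})}
    accounted: set[str] = set()
    for name, (full, minimal) in known.items():
        f_si, normal = examine_system(set(found), full, minimal)
        accounted |= f_si  # a file may be shared by more than one system
        print(f"{name}: {'normal' if normal else 'incomplete'} ({len(f_si)} known files)")
    print(f"{len(set(found) - accounted)} file hashes remain to be explained")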

This leaves two categories of files that have not been tested for ‘normality’: the variable files (such as logs associated with a system) and data files. In both cases ‘normality’ arguably means that they are syntactically correct. The only difference between these two categories is that the absence or presence of the former may be (in)consistent with a system that is present (or absent) in the evidence. The mere existence of the latter does not indicate normality (or otherwise).
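One way to read the ‘syntactic correctness’ test is as an attempt to parse each variable or data file against the format its type implies. The sketch below illustrates the idea for two common formats; the suffix-to-validator mapping is an invented illustration, not a protocol prescribed by the paper.

import configparser
import json
from pathlib import Path

def valid_json(path: Path) -> bool:
    """A JSON data file is 'syntactically normal' if it parses without error."""
    try:
        json.loads(path.read_text(encoding="utf-8"))
        return True
    except (UnicodeDecodeError, json.JSONDecodeError):
        return False

def valid_ini(path: Path) -> bool:
    """An INI-style configuration file is 'normal' if ConfigParser accepts it."""
    try:
        configparser.ConfigParser().read_string(path.read_text(encoding="utf-8"))
        return True
    except (UnicodeDecodeError, configparser.Error):
        return False

# Hypothetical mapping from file suffix to validator; a real protocol would
# also need format detection that does not rely on the file name alone.
VALIDATORS = {".json": valid_json, ".ini": valid_ini, ".conf": valid_ini}

def syntactically_normal(path: Path):
    """Return True/False when a validator exists for the file type, else None."""
    check = VALIDATORS.get(path.suffix.lower())
    return None if check is None else check(path)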

The above, in essence, verifies that the anatomy of the system is ‘normal’ in all non-evidentiary respects. The analogy can be taken further to consider the physiological aspects, the level at which cause and effect, and hence evidence, are more likely to be found.

In conclusion, the thought experiment shows that it is indeed viable (and arguably useful) to conduct a digital autopsy on digital artefacts where this is feasible. It is simple to find examples where it may not be practical (such as in a live examination). Also note that a ‘normal’ system cannot necessarily be ignored during the examination, because a normal system may have vulnerabilities that played a role in the incident being investigated. However, the knowledge that the system was indeed normal means that the ‘normal’ vulnerabilities are likely to be known, which simplifies the examination.

Finally, note that much of the medical forensic autopsy is not performed by a pathologist, but by a technician (known under various names in different jurisdictions). The pathologist is primarily involved in the examination of abnormalities or of factors dictated by the circumstances of the incident. In a similar vein, the current paper suggests that much of the checking for (ab)normality can be delegated to a technician (or even automated) to enable the digital forensic scientist to attend to matters in the investigation that require the application of science.

BibTeX reference

@conference{autopsy,
  author    = {Martin S Olivier},
  title     = {Modelling Digital Autopsies on Medical Autopsies},
  booktitle = {American Academy of Forensic Sciences 67th Annual Scientific Meeting},
  address   = {Orlando, FL, USA},
  year      = {2015}
}