Terry Peters: Bringing Research to Clinical Practice

Interview with Terry Peters by Christoph Bichlmeier, following up on his keynote talk “The Role of Augmented Reality Displays for Guiding Intra-cardiac Interventions” at ISMAR 2014. In the interview Peters states that none of the existing image-guided procedures use autonomous robots:

„They are all master-slave devices – and I do not expect this to change in the next 10 years. A potential role for an autonomous robot is however to replace some personnel functions in the operating room.“

I think the question of autonomy is not a problem of the future. Already today we need to think of the robot-surgeon relation as a form of collective agency that is carried out in cooperation between human and machine actors rather than as a neutral process. Read the interview at medicalaugmentedreality.com


In the Image Laboratory of Neurosurgery

Annual conference of the Exzellenzcluster Bild Wissen Gestaltung. Berlin-Brandenburgische Akademie der Wissenschaften, Nov 15, 2014 (with Kathrin Friedrich, Anna Roethe, Thomas Picht)

How do images determine medical procedures and decisions? Why can it be more productive for clinical practice if a media scientist and a neurosurgeon look jointly at an MRI scan? How can the „blind spots“ in medical imaging practice be dealt with in an interdisciplinary manner and made epistemically fruitful for the visual understanding of the disciplines involved?
Taking a typical case documentation from neurosurgical practice as an example, fundamental stages in the course of treatment are illuminated in a dialogue between different disciplines of image practice. The physician’s diagnostic interest in the image meets cultural and media studies process analyses in which the medical imagery itself is the focus of attention. The course of treatment is followed jointly: from the patient’s first MRI scan, which is significant for the anatomical and morphological assessment, via the functional diagnosis of important brain functions, which decisively extends the image findings, and the preoperative planning, to the operating room, where the patient is treated on the basis of the images discussed. The image-critical issues and observations that arise in this process are intended to provide an insight into how different disciplinary perspectives on images can complement each other, so that longer-term synergies for theory and practice take shape in an »image laboratory« of this kind, one that also keeps an eye on the usability of newer image-guided therapies for the patient and is ultimately intended to improve the quality of treatment.

The Charles Sanders Peirce Archive

Imeji Days 2014. Institut für Kunst- und Bildgeschichte, Humboldt University Berlin, Oct 21, 2014 (with Tullio Viola, Franz Engel, Frederik Wellmann)

The Peirce Archive aims to design and implement an image-based archival repository for the Nachlass of the American philosopher Charles S. Peirce. The archive will assemble around 100,000 manuscript pages that span more than five decades and touch on very different disciplinary domains. With an eye to improving the methodology for investigating Peirce’s intellectual legacy, the archive will set up a citable online research environment to search, filter, describe, comment, link, compare, share, export, edit, and browse the manuscript pages, using the metadata management software imeji.

Remote-Controlled Operations: On the Problem of Human-Machine Cooperation in Image-Guided Interventions

Conference „Techniken des Leibes“, DFG Network »Kulturen der Leiblichkeit«. Literaturwerkstatt Berlin, Sep 26, 2014

The paper examines the interplay of perception, operativity, and imagery in minimally invasive surgery, drawing on the medial conditions of production of the robot-assisted surgical system „Da Vinci“. Using concrete case examples of the system’s architecture, navigation, and control, it shows that the manufacturer’s promise of increased medical efficiency and precision for image-guided surgical systems must be set against a practical knowledge of the difference between body and image, a knowledge that cannot be negotiated in opposition to the machine but must instead be conceived as a synthesis of surgeon and surgical system.

Augmented Reality Aerial Navigation

The first consumer applications in which vision and visualization merge are slowly coming onto the market, and navigation has so far been one of the most important fields of development. „Aero Glass“ is an augmented vision application for pilots. What was developed in the military sector with budgets in the millions and rather modest success is now made possible by end-user technology such as Epson Moverio or Google Glass: „Augmented Reality meets Synthetic Vision and is the next big step in safety and information for pilots. Over the past decade, GA pilots’ ability to visualize terrain, navigation, traffic (ADS-B), weather, and airspace has become easier, along with improvements in convenience and safety items like emergency, preflight, inflight, and landing checklists. But handy as this information is, accessing it requires pilots to take their eyes off the sky, and often access multiple screens and devices. As even a HUD (head-up display) is in a fixed location, Aero Glass has integrated all these functions, and made them available to pilots, wherever their head is turned, with 3D, 360-degree perspective, and is premiering its augmented vision glasses, wearable information for pilots.“ (quote and image from www.glass.aero) Also check the video on Vimeo.

Surgical Strikes: The Real-Time Kill Chain as an Image Problem

Workshop „Imaging the Drone’s Vision: A Survey of its Aesthetic Qualities“, Humboldt Universität zu Berlin, Jul 11, 2014

The talk addresses three central image problems of drone operations. 1. Changes in the medial arrangement: when what is looked at is no longer seen but only visualized, when what can be seen is shifted entirely into the image and withdraws from the eye itself, when it is „only“ a medial experience and is bound less and less to the capabilities and functions of the human eye, then the handling of these images seems to acquire a status of its own. 2. Changes in the temporal dimension: only real-time visualization techniques make the immediate interaction between operator and drone possible. This entails a shift in military use from temporally downstream images of planning and surveillance towards application-oriented real-time interventions in armed conflicts. 3. Under the heading of mobilization, the talk finally discusses the changed relationship between body and image: the spatial availability and flexibility of sensor and surveillance technologies have turned the battlefield into a distanced and disembodied space in which imaging techniques determine the interaction of the actors. The use of drones separates the area of operations from the soldier’s body and immediate sensory access. What is seen is determined not by the position of the eye or the standpoint of the observer, but solely by the perspective of the machine.

Hands on Instruments

Organized by Ramona A. Braun, „Hands on Instruments“ brings together international scholars at the University of Cambridge to talk about instrumental practice in twentieth-century science.

The conference asks about the „ways in which gestures and hand movements are an important part of creative, analytical and didactic processes in science, medicine and technology, and argues that the human hand has a major impact on research and research-related activities. Gestures are not only considered as ways to support communication through language: hand movements are also creative and teaching devices. Manual practice informs scientific and medical research on every level. This conference explores the importance of hand movements in connection with investigative and analytical thought processes in twentieth century science, medicine and technology. We put special emphasis on the new digital technologies such as screens and robots. We describe the impact of standardized and rationalized hand movements in wrist and finger movements, continuing efforts to document manual practices in history of science focused on earlier centuries.“

The Medical Image between Vision and Visualization

Conference „The Visual Image and the Future of the Medical Humanities“, Institute for the Medical Humanities, University of Texas Medical Branch. Galveston, USA, May 10, 2014

The trend towards minimally invasive and robot-assisted surgical procedures confronts medical treatment with a dilemma: on the one hand, there are possible benefits for the patient, such as less trauma, shorter hospitalization, and an improved recovery process. On the other, these procedures involve fundamental difficulties for the surgeon, whose ability to access the operation field and to navigate the instruments is diminished in comparison with traditional open surgery. This increased surgical complexity results from the fact that in image-guided surgical interventions the patient’s body needs to be accessed remotely with special instruments that have to be guided by visualization techniques, rather than the intervention being executed within the range of the physician’s hands and eyes.

Performing surgery via visual interfaces such as screens or optical devices introduces a layer of iconicity between physician and patient that presents new challenges to iconic knowledge, clinical practices and technical solutions. Major visualization deficits of image-guided interventions include the limitation of the surgeon’s field of vision, the lack of immersive hand-eye coordination, and the gap between three-dimensional perception and two-dimensional images. Despite the introduction of flexible camera angles, force feedback systems or stereo video endoscopy in response to those deficits, minimally invasive surgery is still far from achieving the direct visualization advantage of open surgery.

In order to tackle that problem the paper will present and evaluate current approaches from medical augmented reality and computer vision research that promise to close this visual gap by displaying the operating field from the surgeon’s perspective. It will address the methods and discuss the problems that accompany the goal of eliminating the disparity between vision and visualization by augmenting the point of view with visual images. The paper argues that the implementation of augmented reality into medical therapy corresponds to a form of iconic knowledge that represents a key task for the medical humanities.

The medical image between vision and visualization

I will give a talk at the „Visual Image and the Future of the Medical Humanities“ conference taking place 8-11 May 2014 in Galveston, Texas. In the paper „The medical image between vision and visualization: How Augmented Reality is going to change the surgeon’s point of view“ I will present and evaluate current approaches from medical augmented reality and computer vision research that promise to close the gap between vision and visualization by displaying the operating field from the surgeon’s perspective.

My assumption is that the trend towards minimally invasive and robot-assisted surgical procedures confronts medical treatment with a dilemma: on the one hand, there are possible benefits for the patient, such as less trauma, shorter hospitalization, and an improved recovery process. On the other, these procedures involve fundamental difficulties for the surgeon, whose ability to access the operation field and to navigate the instruments is diminished in comparison with traditional open surgery. This increased surgical complexity results from the fact that in image-guided surgical interventions the patient’s body needs to be accessed remotely with special instruments that have to be guided by visualization techniques, rather than the intervention being executed within the range of the physician’s hands and eyes.

Performing surgery via visual interfaces such as screens or optical devices introduces a layer of iconicity between physician and patient that presents new challenges to iconic knowledge, clinical practices and technical solutions. Major visualization deficits of image-guided interventions include the limitation of the surgeon’s field of vision, the lack of immersive hand-eye coordination, and the gap between three-dimensional perception and two-dimensional images. Despite the introduction of flexible camera angles, force feedback systems or stereo video endoscopy in response to those deficits, minimally invasive surgery is still far from achieving the direct visualization advantage of open surgery.

In order to tackle that problem the paper will present and evaluate current approaches from medical augmented reality and computer vision research that promise to close this visual gap by displaying the operating field from the surgeon’s perspective. It will address the methods and discuss the problems that accompany the goal of eliminating the disparity between vision and visualization by augmenting the point of view with visual images. The paper argues that the implementation of augmented reality into medical therapy corresponds to a form of iconic knowledge that represents a key task for the medical humanities.