
Mixed Reality Annotation of Robotic-Assisted Surgery videos with real-time tracking and stereo matching


Show simple item record

dc.contributor.author Portalés, Cristina
dc.contributor.author Gimeno, Jesús
dc.contributor.author Salvador, Antonio
dc.contributor.author García Fadrique, Alfonso Luis
dc.contributor.author Casas Yrurzum, Sergio
dc.date.accessioned 2023-06-16T11:56:30Z
dc.date.available 2023-06-17T04:45:06Z
dc.date.issued 2023 es_ES
dc.identifier.citation Portalés, C., Gimeno, J., Salvador, A., García-Fadrique, A., & Casas-Yrurzum, S. (2023). Mixed Reality Annotation of Robotic-Assisted Surgery videos with real-time tracking and stereo matching. Computers & Graphics, 110, 125-140. es_ES
dc.identifier.uri https://hdl.handle.net/10550/88338
dc.description.abstract Robotic-Assisted Surgery (RAS) is beginning to unlock its potential. However, despite the latest advances in RAS, the steep learning curve of RAS devices remains a problem. A common teaching resource in surgery is the use of videos of previous procedures, which in RAS are almost always stereoscopic. It is important to be able to add virtual annotations onto these videos so that certain elements of the surgical process are tracked and highlighted during the teaching session. Including virtual annotations in stereoscopic videos turns them into Mixed Reality (MR) experiences, in which tissues, tools and procedures are better observed. However, an MR-based annotation of objects requires tracking and some kind of depth estimation. For this reason, this paper proposes a real-time hybrid tracking–matching method for performing virtual annotations on RAS videos. The proposed method is hybrid because it combines tracking and stereo matching, avoiding the need to calculate the real depth of the pixels. The method was tested with six different state-of-the-art trackers and assessed with videos of a sigmoidectomy of a sigma neoplasia, performed with a Da Vinci® X surgical system. Objective assessment metrics are proposed, presented and calculated for the different solutions. The results show that the method can successfully annotate RAS videos in real-time. Of all the trackers tested for the presented method, the CSRT (Channel and Spatial Reliability Tracking) tracker seems to be the most reliable and robust in terms of tracking capabilities. In addition, in the absence of an absolute ground truth, an assessment with a domain expert using a novel continuous-rating method with an Oculus Quest 2 Virtual Reality device was performed, showing that the depth perception of the virtual annotations is good, despite the fact that no absolute depth values are calculated. es_ES
dc.language.iso en es_ES
dc.publisher Elsevier es_ES
dc.subject mixed reality es_ES
dc.subject annotation es_ES
dc.subject Robotic-Assisted Surgery es_ES
dc.subject tracking es_ES
dc.subject stereo matching es_ES
dc.title Mixed Reality Annotation of Robotic-Assisted Surgery videos with real-time tracking and stereo matching es_ES
dc.type journal article es_ES
dc.subject.unesco UNESCO::CIENCIAS TECNOLÓGICAS es_ES
dc.identifier.doi 10.1016/j.cag.2022.12.006 es_ES
dc.accrualmethod CI es_ES
dc.embargo.terms 0 days es_ES
dc.type.hasVersion VoR es_ES
dc.rights.accessRights open access es_ES
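
The abstract describes a hybrid scheme: a 2D tracker (for example CSRT) follows the annotated region in one eye's video, and stereo matching supplies a disparity that places the overlay at a plausible depth in the other eye, with no absolute depth computed. The snippet below is a minimal sketch of that idea, not the authors' implementation: it assumes OpenCV's CSRT tracker and block-matching stereo, and the function names, bounding-box format and parameter values are purely illustrative.

```python
import cv2
import numpy as np

def make_csrt():
    # CSRT factory lives in different places across OpenCV 4.x builds
    # (requires opencv-contrib-python).
    if hasattr(cv2, "TrackerCSRT_create"):
        return cv2.TrackerCSRT_create()
    return cv2.legacy.TrackerCSRT_create()

def annotate_stereo(left_video, right_video, init_bbox, label="annotation"):
    """Illustrative sketch: track a region in the left view with CSRT and
    re-project a 2D annotation into the right view using the median
    disparity of the tracked patch. init_bbox is (x, y, w, h) in pixels."""
    capL, capR = cv2.VideoCapture(left_video), cv2.VideoCapture(right_video)
    okL, frameL = capL.read()
    okR, frameR = capR.read()
    if not (okL and okR):
        return

    tracker = make_csrt()
    tracker.init(frameL, init_bbox)
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

    while okL and okR:
        ok, bbox = tracker.update(frameL)
        if ok:
            x, y, w, h = (max(int(v), 0) for v in bbox)
            grayL = cv2.cvtColor(frameL, cv2.COLOR_BGR2GRAY)
            grayR = cv2.cvtColor(frameR, cv2.COLOR_BGR2GRAY)
            # StereoBM returns fixed-point disparities (values scaled by 16).
            disp = stereo.compute(grayL, grayR).astype(np.float32) / 16.0
            patch = disp[y:y + h, x:x + w]
            valid = patch[patch > 0]
            d = float(np.median(valid)) if valid.size else 0.0
            # Draw the annotation in the left view, and shifted left by the
            # median disparity in the right view, so it is perceived at the
            # depth of the tracked tissue without any absolute depth value.
            cv2.rectangle(frameL, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.rectangle(frameR, (int(x - d), y), (int(x - d) + w, y + h),
                          (0, 255, 0), 2)
            cv2.putText(frameL, label, (x, max(y - 5, 0)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        cv2.imshow("left", frameL)
        cv2.imshow("right", frameR)
        if cv2.waitKey(1) == 27:  # Esc to stop
            break
        okL, frameL = capL.read()
        okR, frameR = capR.read()
```

Shifting the overlay horizontally by the patch's median disparity is what makes the annotation fuse at the depth of the tracked tissue when the stereo pair is viewed in a headset, which is the effect the abstract reports evaluating with a domain expert.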

View (4.486 MB)
