EP3599982A1 - A 3d reconstruction system - Google Patents

A 3d reconstruction system

Info

Publication number
EP3599982A1
Authority
EP
European Patent Office
Prior art keywords
light
structured light
features
reconstruction system
computer system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP18771352.4A
Other languages
German (de)
French (fr)
Other versions
EP3599982A4 (en)
Inventor
Michael Saas HANSEN
Morten Rufus Blas
Mads Ockert FOGTMANN
Steen Møller Hansen
André Hansen
Henriette Schultz KIRKEGAARD
Sebastian Hoppe NESGAARD JENSEN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cilag GmbH International
Original Assignee
3DIntegrated ApS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 3DIntegrated ApS filed Critical 3DIntegrated ApS
Publication of EP3599982A1 publication Critical patent/EP3599982A1/en
Publication of EP3599982A4 publication Critical patent/EP3599982A4/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002 Operational features of endoscopes
    • A61B1/00004 Operational features of endoscopes characterised by electronic signal processing
    • A61B1/00009 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B1/000094 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope extracting biological structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10021 Stereoscopic video; Stereoscopic image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10068 Endoscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20076 Probabilistic image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30028 Colon; Small intestine

Definitions

  • the invention relates to a 3D reconstruction system for determining a 3D profile of an object.
  • the 3D reconstruction system is in particular suitable for use in surgery, such as minimally invasive surgery.
  • Minimally invasive surgery (MIS), such as laparoscopy, has been used increasingly in recent years due to its benefits compared to conventional open surgery: it reduces the trauma to the patient's skin and optionally further tissue, leaves smaller scars, minimizes post-surgical pain and enables faster recovery of the patient.
  • MIS, e.g. laparoscopy, endoscopy, arthroscopy and thoracoscopy, is a method of performing both diagnostic and surgical procedures.
  • The procedure is performed in a body cavity, such as the abdominal or pelvic cavity.
  • An endoscope such as a laparoscope may be inserted through an incision and be conventionally connected to a monitor, thereby enabling the surgeon to see the inside of the body cavity, such as an abdominal or pelvic cavity.
  • A surgical instrument is inserted through the same or, usually, another incision.
  • The body cavity, sometimes called the "surgery cavity", is inflated with a fluid, preferably a gas.
  • US 2013/0296712 describes an apparatus for determining endoscopic dimensional measurements, including a light source for projecting light patterns on a surgical site, including shapes with actual dimensional measurements and fiducials, and means for analysing the projected light patterns on the surgical site by comparing them with the actual dimensional measurements.
  • WO 2013/163391 describes a system for generating an image, which the surgeon may use for measuring the size of or distance between structures in the surgical field by using an invisible light for marking a pattern to the surgical field.
  • the system comprises a first camera; a second camera; a light source producing light at a frequency invisible to the human eye; a dispersion unit projecting a predetermined pattern of light from the invisible light source; an instrument projecting the predetermined pattern of invisible light onto a target area; a band pass filter directing visible light to the first camera and the predetermined pattern of invisible light to the second camera; wherein the second camera images the target area and the predetermined pattern of invisible light, and computes a three-dimensional image.
  • US2008071140 discloses an endoscopic surgical navigation system which comprises a tracking subsystem to capture data representing positions and orientations of a flexible endoscope during an endoscopic procedure, to allow co-registration of live endoscopic video with intra-operative and/or pre- operative scan images. Positions and orientations of the endoscope are detected using one or more sensors and/or other signal-producing elements disposed on the endoscope.
  • US6503195 describes a real-time structured light depth extraction system which includes a projector for projecting structured light patterns, comprising a positive pattern and an inverse pattern, onto an object of interest.
  • a camera samples light reflected from the object synchronously with the projection of structured light patterns and outputs digital signals indicative of the reflected light.
  • An image processor/controller receives the digital signals from the camera and processes the digital signals to extract depth information of the object in real time.
  • A system for generating augmented reality vision of surgical cavities for viewing internal structures of the organs of a patient to determine the minimal distance to a cavity surface or organ of a patient is described in "Augmented reality in laparoscopic surgical oncology" by Stephane Nicolau et al., Surgical Oncology 20 (2011) 189-201, and in "An effective visualization technique for depth perception in augmented reality-based surgical navigation" by Choi Hyunseok et al., The International Journal of Medical Robotics and Computer Assisted Surgery, 2015 May 5, doi: 10.1002/rcs.1657.
  • the surgical instrument comprises a laser pointing instrument to project laser spots.
  • the distance between instrument and organ may be estimated by using images of optical markers mounted on the tip of the instrument and images of the laser spots projected by the same instrument.
  • These systems generally comprise a projector and a camera which are spatially interconnected.
  • An object of the present invention is to provide a 3D reconstruction system for performing a determination of a tissue field in 3D space with high accuracy.
  • the 3D reconstruction system of the invention is especially suitable for performing a determination of a tissue field, such as a field for performing diagnostics and/or a surgery field.
  • tissue field is herein used to designate any surface areas of a mammal body, such as natural surface areas including external organs, e.g. skin areas and/or internal surface areas of natural openings and surface areas exposed by surgery and/or surfaces of a minimally invasive surgery cavity.
  • the tissue field is advantageously an in vivo tissue field.
  • the tissue field may include areas of organs, such as internal organs that have been exposed by surgery, e.g. surfaces of a heart, a spleen or a gland.
  • the phrase "determination of a tissue field in 3D space" is herein used to designate a determination of a property of the tissue field or a part thereof, and/or a determination of the tissue field or a part thereof relative to a selected unit such as a surgical tool.
  • the property may be a tissue type determination and/or a size determination, such as a topologic size determination.
  • the 3D reconstruction system of the invention is preferably suitable for performing a determination of a tissue field in 3D space, more preferably for performing real time determination in 3D space.
  • 3D determinations may be performed with a very high accuracy.
  • the 3D reconstruction system comprises a structured light arrangement comprising a projector device configured for projecting a structured light beam onto at least a section of the tissue field, an image acquisition device configured for acquiring frames, and a computer system.
  • the frames are digital frames and each frame comprises a set of pixel data associated with a time attribute, such as an actual time or a relative time, e.g. a time from start of a procedure or from a start time set by an operator.
  • the structured light beam has a centre axis, which may advantageously be determined as the optical axis of the structured light beam.
  • the structured light beam comprises a cross-sectional light structure, i.e. the structure of the light beam as seen in a cross-sectional view, e.g. as projected perpendicularly onto a plane surface.
  • the cross-sectional light structure comprises a plurality of light features which are recognizable by the computer system from the set of pixel data.
  • the cross-sectional light structure may comprise an indefinite number of light features, such as an indefinite number of fractions of the cross-sectional light structure which may be recognized by the computer system.
  • the light features may be optically recognizable by comprising an optically recognizable attribute, such as a geometrical attribute (e.g. a local shape), an intensity attribute and/or a wavelength attribute.
  • the optically recognizable attributes are recognizable from the pixel data.
  • the set of pixel data comprises at least one value for each pixel.
  • the value may be 0 for a pixel that does not detect any light.
  • the values of the respective pixels may for example represent one or more wavelengths, the intensity of one or more wavelengths, total intensity, etc.
  • Values of a group of pixels may represent a geometrical attribute e.g. a line and/or a pattern of pixels with corresponding values.
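Purely as an illustration of the data structures described above, the following minimal Python sketch (all names hypothetical, not taken from the patent) models a frame as a set of pixel data associated with a time attribute, where a value of 0 means the pixel detected no light:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Frame:
        # One digital frame: per-pixel values plus a time attribute.
        pixels: np.ndarray   # shape (H, W, C), e.g. one channel per wavelength
        timestamp: float     # actual or relative time in seconds

        def lit_pixels(self) -> np.ndarray:
            # A pixel that detects no light has value 0 in every channel;
            # return the (row, col) coordinates of all other pixels.
            return np.argwhere(self.pixels.sum(axis=-1) > 0)

    frame = Frame(pixels=np.zeros((480, 640, 3)), timestamp=0.033)
    print(frame.lit_pixels().shape)   # (0, 2): nothing lit in an all-zero frame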
  • the computer system may be a single computer or a group of computers which are in data communication with each other, e.g. by wire or wirelessly.
  • the computer system comprises a processor, such as a multi-core processor.
  • the computer system forms part of a robot, such as a robot controller processing system configured for operating and controlling movement of the robot.
  • the 3D reconstruction system may be operated with a relatively low processing power (CPU) while at the same time be operating with a high accuracy in real time.
  • the computer system is configured for storing data representing the projected structured light beam.
  • the data set representing the projected light beam may be transmitted or determined by the computer system.
  • the computer system comprises a memory configured for storing the projected structured light beam in the form of the reference structured light data set.
  • the memory optionally stores the reference structured light data set or, as will be elaborated, a plurality of reference data sets, each associated with properties of a structured light beam, including data representing recognizable light features of the light beam.
  • projected structured light beam means the structured light beam as projected from the projector device.
  • the projected structured light beam has the orientation and position (pose) corresponding to the beam as projected.
  • the pose of the projected structured light beam can therefore be estimated to be the same as the pose of the projector device.
  • the projected structured light beam includes a group of electromagnetic waves projected from the projector and propagating along parallel or diverging directions, wherein the light is textured as seen in a cross-sectional view orthogonal to a center axis (herein also referred to as the optical axis) of the group of electromagnetic waves, i.e. the light has areas of higher intensity and areas of lower or no intensity, which is not the natural Gaussian intensity distribution of a light beam.
  • the terms "light pattern” and "light texture” are used interchangeably.
  • the data representing the projected light beam is referred to as a "set of reference structured light data" or a "reference structured light data set".
  • the projected structured light beam may be stored in the form of a reference structured light data set.
  • the reference structured light data set comprises at least a set of the light features of the projected structured light beam.
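As a hedged sketch only (field names are hypothetical), a reference structured light data set comprising the light features of the projected beam could be stored along these lines:

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class LightFeature:
        # One recognizable light feature of the cross-sectional light structure.
        feature_id: int
        xy: Tuple[float, float]   # position within the cross-sectional structure
        kind: str                 # optically recognizable attribute, e.g. "corner", "dot"

    @dataclass
    class ReferenceStructuredLightDataSet:
        # Stored in the computer system's memory; several such sets may be kept,
        # one per structured light beam.
        features: List[LightFeature]
        divergence_deg: Optional[float] = None   # optional angle-of-divergence data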
  • the computer system is configured for
  • frame means a frame comprising reflections of the structured light beam from the tissue field, i.e. a frame acquired while the projector is projecting the structured light beam.
  • the computer system is configured for recognizing a plurality of the set of light features including a plurality of primary light features from two or more received sets of pixel data having corresponding time attribute, such as sets of pixel data of frames of a multi camera image acquisition device.
  • Matching of features of stereo pairs of images is known in the art, but heretofore it has never been considered to perform feature matching between a projected light beam and an image to estimate the spatial position of the projector device.
  • Thereby a very effective method and system for 3D reconstruction of acquired image(s) is provided, which may perform a real time 3D reconstruction with a high accuracy using relatively simple algorithms, and which algorithms further may be processed using a relatively low processing power (CPU).
  • the matching of recognized primary light features may be performed according to principles known from the art of feature matching of stereo images, for example by applying homographical iterative closest match algorithms and/or as described in the article "Wide Baseline Stereo Matching" by Philip Pritchett and Andrew Zisserman, Robotics Research Group, Department of Engineering Science, University of Oxford.
  • the matching of the recognized primary light features may preferably comprise matching the pixel data representing the primary light features with pixel data of the reference structured light data set.
  • the computer system may estimate the spatial position of the projector device relative to at least a part of the tissue field determined e.g. from the position of the image acquisition device.
  • the spatial position of the projector device relative to at least a part of the tissue field may be determined from the position of the image acquisition device at the time of acquiring the processed image.
  • Preferably, the computer system is configured for matching the recognized primary light features with corresponding light features of the projected structured light beam and, based on the matches, estimating the spatial position of the projector device relative to at least a part of the tissue field as the spatial position determined from the position of the image acquisition device.
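One plausible way to implement this matching-and-estimation step, sketched with OpenCV under the assumption that feature correspondences are expressed as 2D point pairs; this illustrates the general idea, not the patent's specific algorithm:

    import numpy as np
    import cv2

    def estimate_projector_transform(reference_xy, recognized_xy, ransac_thresh=3.0):
        # reference_xy: positions of light features in the reference structured
        # light data set; recognized_xy: matched positions recognized in the
        # frame's pixel data. Both are Nx2 arrays with N >= 4.
        src = np.asarray(reference_xy, dtype=np.float32)
        dst = np.asarray(recognized_xy, dtype=np.float32)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
        if H is None:
            return None, None   # degenerate configuration, no estimate
        return H, mask.ravel().astype(bool)

The estimated homography relates the projected pattern to its image and can then feed the pose estimate of the projector device discussed in the following.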
  • the projector device need not be within the field of view of the image acquisition device, since the computer system, based on the light feature matches, may determine the spatial position of the projector device relative to at least a part of the tissue field as it would have been imaged had it been within the field of view of the image acquisition device.
  • the image acquisition device and/or the projector device may be moved independently of each other.
  • the field of view of the image acquisition device may be relatively narrow and preferably focused predominantly onto the tissue field.
  • Thereby the acquired images of the tissue field may be of a very high quality and reveal many details which may not have been revealed using an image acquisition device with a wider field of view and/or depth of focus.
  • the computer system may be configured for receiving a reference structured light data set via a calibration step.
  • the system as such may not require calibration once the computer system has the reference structured light data set.
  • the computer system may perform the one or more determinations of the tissue field based on the spatial position of the projector device and the recognized light features e.g. using trigonometrical algorithms for example as described in US
  • the computer system may perform the one or more determinations of the tissue field based on the spatial position of the projector device estimated as described herein and the recognized light features using the reconstruction models and algorithms described in WO2015/151098.
  • the computer system may now calculate in a simple way the one or more determinations of the tissue field even where the projector and/or the image acquisition device is moved independently of each other.
  • the phrase "estimate the spatial position of the projector device" includes a determination e.g. a calculation of the spatial position of the projector device which may be further refined e.g. as explained below.
  • body cavity is herein used to denote any gas and/or liquid filled cavity within a mammal body.
  • the cavity may be a natural cavity or it may be an artificial cavity which has been filled with a fluid (in particular gas) to reach a desired size.
  • the cavity may be a natural cavity which has been enlarged by being filled with a fluid.
  • the body cavity is a minimally invasive surgical cavity.
  • distal and proximal should be interpreted in relation to the orientation of tools used in connection with diagnostics and/or surgery, such as minimally invasive surgery.
  • real time is herein used to mean the time required by the computer to receive and process optionally changing data, such as intraoperative data, optionally in combination with other data, such as predetermined data, a reference data set or estimated data, which may be non-real time data such as constant data or data changing with a frequency of above 1 minute, and to return the real time information to the operator.
  • real time may include a short delay, such as up to 5 seconds, preferably within 1 second, more preferably within 0.1 second of an occurrence.
  • the term "operator” is used to designate a human operator (human surgeon) or a robotic operator i.e. a robot programmed to perform a minimally invasive diagnostics or surgical procedure on a patient.
  • the term “operator” also includes a combined human and robotic operator, such as a robotic assisted human surgeon.
  • access port means a port into a body cavity provided by a cannula inserted into an incision through the mammal skin and through which cannula an instrument may be inserted.
  • penetration hole means a hole through the mammal skin without any cannula.
  • the term “rigid connection” means a connection which ensures that the relative position between rigidly connected elements is substantially constant during normal use.
  • cannula means herein a hollow tool adapted for being inserted into an incision to provide an access port as defined above.
  • projector means “projector device” unless otherwise specified.
  • a camera baseline means the distance between cameras or camera units. The distance is - unless otherwise specified - determined as the distance between the lenses' center points (optical axes), corresponding to the distance between the centers of the images acquired by the two cameras or camera units.
  • a projector-camera baseline means the distance between the camera/camera unit and the projector. The distance is - unless otherwise specified - determined as the distance between the camera lens center of the camera/camera unit and the center of the projector. Often the surface of the tissue field may be very curved.
  • target area or "target site” of the tissue field e.g. of the minimally invasive surgical cavity is herein used to designate an area which the surgeon may have focus on, e.g. for diagnostic purpose and/or for surgical purpose.
  • tissue site may be any site of the tissue field e.g. a target site.
  • the tissue field may e.g. comprise a surgical field of an open surgery or a minimally invasive surgery.
  • the tissue field comprises surfaces of the intestine and the throat.
  • skin is herein used to designate the skin of a mammal.
  • the skin may include additional tissue which is or is to be penetrated by a penetrator tip or through which an incision for an access port is made or may be made.
  • minimally surgical instrument means herein a surgical instrument which is suitable for use in surgery performed in natural and/or artificial body openings of a human or animal, such as for use in minimally invasive surgery.
  • corresponding time attributes is used to mean attributes that represent a substantially identical time.
  • the set of pixel data may advantageously be subjected to an error detection and correction, e.g. to detect and correct and/or discard corrupted data, such as data that have been corrupted during transmission, data that include erroneous reflections e.g. due to moisture at the tissue field, or data that are missing due to occlusions or absorption.
  • the error detection and correction may e.g. be provided by adding some redundancy to the set of pixel data prior to transmission to the computer system, e.g. by adding extra data, which the computer system may use to check the consistency of the set of pixel data and to recover data that have been determined to be corrupted.
  • redundancy is incorporated into the structured light pattern by designing the cross-sectional light structure of the projected structured light beam to provide that the set of pixel data comprise redundant data.
  • Error detection and correction schemes are well known in the art and the skilled person will be capable of adapting such schemes for use in the present invention.
  • the error detection and correction of the set(s) of pixel data may comprise subtracting values representing background frame(s). This will be described in further detail below.
  • the reflected light is subjected to an optical filtering, which may further be useful in obtaining high quality frames. This is also described in further detail below.
  • the projected structured light is subjected to an optical filtering, which may further be useful in obtaining high quality frames. This is also described in further detail below.
  • the front of the projector device from where the structured light beam is projected and the front of the image acquisition device collecting the light for imaging the surface of the tissue field onto where the structured light is impinging and from where the image is acquired may be arranged in a triangular configuration.
  • a very accurate determination of the spatial position of the projector device may be obtained using, for example, algorithms based on geometrical mathematics.
  • the computer system comprises the reference structured light data set and comprises an algorithm that, from the matched primary light features and their orientation and optionally the distortion of primary recognized features, may determine the triangular configuration between the projector device, the image acquisition device and the tissue field, and based on this perform 3D determinations of the tissue field, such as 3D distances and/or topographical configurations of the tissue field, where desired using trigonometrical, kinematic calculations for determining the spatial position and orientation of the projector device relative to at least a part of the tissue field.
  • the triangular configuration between the projector device, the image acquisition device and the tissue field may be determined without the projector device being within the field of view of the image acquisition device.
  • the determination of the spatial position and orientation of the projector device may be performed at a stationary or variable frequency, e.g. the frequency may be increased where the movements of the projector and/or the image acquisition device are increasing.
  • the 3D reconstruction system may operate with high accuracy even where the distance between the projector and the image acquisition device is relatively high, such as up to about 45 cm, such as up to about 30 cm, such as up to about 15 cm, such as up to about 10 cm, such as up to about 5 cm, such as up to about 3 cm, such as up to about 2 cm.
  • the estimated spatial position comprises an estimated distance in 3D space between the tissue field and the projector device.
  • the distance in 3D space between the tissue field and the projector device may be preprogrammed or operator selectable, and may for example be a shortest distance, a distance in a selected vector direction, a distance from a center of the front of the projector device from where the structured light beam is projected, a distance to a specific point of the tissue field, e.g. to a protruding point of the tissue field, and/or a target site of the tissue field, e.g. a nerve.
  • the distance in 3D space between the tissue field and the projector device is the shortest Euclidean distance between the tissue field and the point of the projector device corresponding to the center axis of the projected structured light beam, preferably the shortest Euclidean distance together with a coordinate vector direction of the Euclidean distance.
  • the distance in 3D space between the tissue field and the projector device is given by the x, y, and z coordinates in a 3D coordinate system.
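For illustration, the shortest Euclidean distance and its direction vector could be computed from a reconstructed point cloud of the tissue field as follows (a minimal sketch, assuming the surface is represented as N points in a common 3D coordinate system):

    import numpy as np

    def shortest_distance(projector_point, tissue_points):
        # projector_point: (3,) point on the beam's centre axis;
        # tissue_points: (N, 3) reconstructed tissue field surface points.
        diffs = np.asarray(tissue_points, dtype=float) - np.asarray(projector_point, dtype=float)
        dists = np.linalg.norm(diffs, axis=1)
        i = int(np.argmin(dists))
        direction = diffs[i] / dists[i]   # unit coordinate vector of the distance
        return dists[i], direction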
  • the estimated spatial position comprises an estimated distance in 3D space between the tissue field and the projector device from a point of view of the image acquisition device, such as a minimum distance between the tissue field and the projector device.
  • the estimated spatial position comprises the estimated distance in 3D space between the tissue field and the projector device as well as an estimated relative orientation of the projector.
  • the computer system is configured for generating a representation of the spatial position of the projector device as seen from a point of view which is different from the image acquisition device.
  • the computer system may comprise algorithm(s) for performing the required geometrical determinations.
  • the algorithm for performing the calculation of the representation of the spatial position of the projector device as seen from a point of view which is different from the image acquisition device may for example use optional geometrical distortions of recognized light features as a part of the basis for the determination, e.g. for determining the angle between the projector device and the image acquisition device.
  • the computer system may in an embodiment know the spatial or a relative spatial position (preferably including distance and angle) of the image acquisition device which may additionally be applied for improving the accuracy of the 3D determination.
  • the system comprises a sensor arrangement arranged for determining the spatial or a relative spatial position (preferably including distance and angle) of the image acquisition device e.g. for determining the distance between the projector and the image acquisition device.
  • the sensor arrangement is preferably configured for determining the distance and the relative orientation between the projector and the image acquisition device.
  • the sensor arrangement may in principle be any kind of sensor arrangement capable of performing the distance and optionally the orientation determination(s), such as for example a sensor arrangement comprising a transmitter and a receiver located at or associated with, respectively, the projector and the image acquisition device.
  • the term associated with means in this connection that there is a known and/or rigid interconnection with the projector or the image acquisition device with which the sensor is associated.
  • the sensor arrangement may e.g. comprise a first sensor on or associated with a first robot arm configured for being connected to the projector e.g. via an instrument and a second sensor on or associated with a second robot arm configured for being connected to the image acquisition device e.g. via an instrument.
  • the computer system comprises or is supplied with data representing the divergence of the projected structured light beam.
  • This data representing the divergence may advantageously form part of the reference structured light data set.
  • the estimated spatial position comprises an estimated orientation, e.g. comprising a vector coordinate set and/or comprising at least one orientation parameter selected from yaw, roll and pitch or any combination thereof.
  • the estimated spatial position comprises two or more, such as all of the orientation parameters yaw, roll and pitch.
  • the orientation parameters yaw, roll and pitch and their relation are generally known within the art of airborne LIDAR technology.
  • the estimated spatial position comprises an estimated distance and at least one orientation parameter, such as an orientation parameter selected from yaw, roll and pitch, preferably the estimated spatial position comprises two or more, such as all of the orientation parameters yaw, roll and pitch.
  • the estimated spatial position comprises an estimated shortest or longest distance between a selected point of the projector and the tissue field.
  • the estimated spatial position comprises an estimated distance described by 3 values in 3D space (e.g. x, y, and z values in a 3D coordinate system) and an estimated orientation e.g. described by 3 values in 3D space (e.g. x, y, and z values in a 3D coordinate system).
  • the values in 3D space representing distance and orientations are preferably values in a common coordinate system.
  • the estimated spatial position and orientation is described by two end points in a coordinate system e.g. end points defined by two sets of x, y, and z values in a 3D coordinate system.
  • the estimated spatial position comprises an estimated distance represented by two sets of values in a common 3D coordinate system, each set of values comprising an x, a y and a z value.
  • the estimation of the spatial position comprises estimating the pose (position and orientation) of the structured light as projected from the projector relative to the orientation and position of the reflected light from the tissue field.
  • the estimation of the spatial position of the projector device comprises estimating the pose of the projected structured light beam. In an embodiment the estimation of the spatial position of the projector device is determined to be the estimation of the pose of the projected structured light beam, preferably determined as projected i.e. at the position of the projector.
  • the estimated orientation and optionally the estimated spatial position of the projector is determined using quaternion based geometrical algorithms.
  • Mathematical methods based on or including quaternions are well known. The quaternion model was first described by the Irish mathematician William Rowan Hamilton in 1843 and provides a convenient mathematical model for representing orientations and rotations of objects in three dimensions. Further information about quaternions may be found in Altmann, S.L., 2005, Rotations, Quaternions, and Double Groups, Courier Corporation, and/or in D. Scharstein and R. Szeliski, A taxonomy and evaluation of dense two-frame stereo correspondence algorithms, International Journal of Computer Vision, 47(1/2/3):7-42, April-June 2002.
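As a compact worked example of the quaternion representation referred to above (conventions assumed here: unit quaternions ordered (w, x, y, z)):

    import numpy as np

    def quat_mul(q, r):
        # Hamilton product of two quaternions.
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = r
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def rotate(v, q):
        # Rotate 3-vector v by unit quaternion q: v' = q * (0, v) * conj(q).
        qv = np.concatenate(([0.0], v))
        q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
        return quat_mul(quat_mul(q, qv), q_conj)[1:]

    # Rotating the x-axis 90 degrees about z yields the y-axis:
    q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
    print(np.round(rotate(np.array([1.0, 0.0, 0.0]), q), 6))   # [0. 1. 0.]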
  • the set of pixel data may be subjected to outlier removal i.e. removing data values that lie in the tail of the statistical distribution of a set of data values and therefore are estimated to likely be incorrect.
  • the computer system is configured for estimating one or more of the orientation parameters yaw, roll and pitch at least partly based on the matches of light features.
  • information relating to the orientation may be transmitted to the computer system from another source e.g. another sensor.
  • one or more of the orientation parameters yaw, roll and pitch is/are transmitted to the computer system and/or determined by the computer system independently from the light features.
  • one or more of the orientation parameters yaw, roll and pitch is/are at least partly obtained from a sensor located at or associated with the structured light arrangement and/or the image acquisition device to sense at least one of the orientation parameters yaw, roll and pitch.
  • the sensor located at or associated with the structured light arrangement and/or the image acquisition device may advantageously be configured to determine the relative orientation between the structured light arrangement and the image acquisition device.
  • the computer system is configured for estimating the homographic transformation between the matched features and based on the homographic transformation determining the spatial position, such as the pose of the projector device.
  • the matching of one single recognized light feature with the corresponding light feature of the projected structured light beam may suffice where the tissue field is relatively plane, where the light feature comprises both orientation and position attributes and/or where the light feature is perfectly recognized.
  • the number of recognized primary light features which are matched to corresponding light features of the projected structured light beam is at least 2, preferably at least 3, such as at least 5, such as at least 7. Thereby a higher accuracy may be obtained.
  • the computer system is configured for identifying a first number of pairs of matched features with a homographic transformation that corresponds within a threshold and preferably for applying this homographic transformation as the estimated homographic transformation e.g. to thereby estimate the pose of the projector device.
  • the first number of pairs of matched features is at least two, such as at least 3, such as at least 5.
  • the computer system may be configured for identifying one or more second pairs of matched features with a transformation which differs beyond the threshold from the homographic transformation of the first number of pairs of matched features.
  • the computer system may further be configured for correcting and/or discarding the pixel data representing the recognized feature(s) of the second pairs of matched features, in particular where the transformation of the second pair(s) of matched light features differs far beyond the threshold and/or where the transformation of the second pair of matched light features is determined from one single pair of matched light features.
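A hedged sketch of such a threshold test (illustrative only; the threshold value and point representation are assumptions): given an estimated homography, matched pairs whose transformation differs beyond the threshold can be flagged for correction or discarding:

    import numpy as np

    def split_by_homography(H, reference_xy, recognized_xy, threshold=2.0):
        # Map the reference features through H (homogeneous coordinates) and
        # measure how far each lands from its matched recognized feature.
        ref = np.asarray(reference_xy, dtype=float)
        rec = np.asarray(recognized_xy, dtype=float)
        mapped = (H @ np.hstack([ref, np.ones((len(ref), 1))]).T).T
        mapped = mapped[:, :2] / mapped[:, 2:3]
        err = np.linalg.norm(mapped - rec, axis=1)
        return err <= threshold, err   # inlier mask and per-pair error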
  • recognized light features reflected from areas of the tissue field having large topographical height differences relative to the average tissue field may be disregarded or corrected for the recognized light features to be used as primary recognized light features in the estimation of the spatial position of the projector device.
  • a timely associated 2D image of an element associated to the projector may be used to confirm the estimation of the spatial position of the projector, or e.g. to rule out erroneous estimates of the spatial position of the projector, e.g. due to undesired light reflections which may for example occur at very curved surfaces or where the angle between the emitted light beam and the surface of the body cavity reflecting the light pattern is very far from normal, e.g. 10 degrees or less. It has been observed that in such situations some of the estimates, e.g. 1 out of 10-100, may be erroneous.
  • the timely associated 2D image may for example be a frame acquired by the image acquisition device with corresponding time attribute as the set(s) of pixel data used for the estimation of the spatial position of the projector.
  • the element associated to the projector may be an instrument, such as a minimally invasive surgical instrument to which the projector is fixed or mounted.
  • the 2D image may e.g. comprise a tip of the instrument together with a surface part of the tissue field.
  • the tip may reveal the tip orientation, which may be correlated to the determination of the projector orientation, and the relation between the tissue field and the tip may be correlated to the spatial position of the projector.
  • the image acquisition device may comprise a single camera or several cameras e.g. a stereo camera.
  • the image acquisition device comprises a single camera configured for acquiring the frames.
  • the sets of pixel data of the respective frames are associated with respective consecutive time attributes.
  • Where the image acquisition device has one single camera, it may be desired to use a higher number of recognized features for matching with the corresponding light features of the projected structured light beam.
  • the computer system may be configured for matching recognized features of one set of pixel data with corresponding recognized features of a subsequent or previous set of pixel data.
  • the matching of recognized light features from one set of pixel data with corresponding recognized features of a subsequent or previous set of pixel data is referred to as time shifted matching.
  • the computer system is configured for repeating the steps of receiving a set of pixel data associated with a time attribute and representing a single frame, recognizing a plurality of the light features and matching the recognized primary light features, as described above.
  • the number of recognized primary features for matching may advantageously be at least 3, such as at least 5, such as from 6 to 100, such as from 8 to 25.
  • the frames comprise a plurality of frames acquired with a wide projector-camera baseline, i.e. with a relatively large distance between the projector and the camera relative to the distance between the projector and the tissue field.
  • the computer system may, in one or more of the repeating steps, further be configured for performing time shifted matching comprising matching the primary light features with corresponding primary light features of at least one set of pixel data associated with a previous time attribute, and for applying the time shifted matching in the estimation of the spatial position of the projector device.
  • the previous time attribute is up to about 10 seconds, such as up to about 5 seconds, such as up to one second earlier than the time attribute of the set of pixel data processed in the current processing step.
  • the time shifted matching may comprise feature matching over 3, 4 or more sets of pixel data of subsequently acquired images.
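As a minimal, purely illustrative sketch of time shifted matching (data structures hypothetical), features of the current set of pixel data are matched by nearest neighbour against features from sets with previous time attributes within a 10 second window:

    import numpy as np

    def time_shifted_match(current, history, max_age=10.0, max_dist=5.0):
        # current: (time, Nx2 feature positions); history: list of such tuples
        # from previously processed sets of pixel data.
        t_now, feats_now = current
        matches = []
        for t_prev, feats_prev in history:
            if not (0.0 < t_now - t_prev <= max_age):
                continue   # outside the allowed time shift
            prev = np.asarray(feats_prev, dtype=float)
            for i, f in enumerate(np.asarray(feats_now, dtype=float)):
                d = np.linalg.norm(prev - f, axis=1)
                j = int(np.argmin(d))
                if d[j] <= max_dist:
                    matches.append((t_prev, i, j))
        return matches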
  • the image acquisition device may advantageously comprise a multi camera configured for acquiring sets of the frames, where each set of frames comprises at least two simultaneously acquired frames and the sets of pixel data of the frames of a set of frames are associated with a corresponding time attribute representing the time of acquisition.
  • corresponding time is used to mean a substantially identical time attribute.
  • the number of recognized features in this embodiment is at least 2, such as at least 3, such as from 5 to 100, such as from 6 to 25.
  • Where the image acquisition device comprises two or more cameras, such as at least one stereo camera, the computer system is advantageously configured for performing stereo matching comprising matching the primary light features of two or more of the respective sets of pixel data with each other.
  • the stereo matching may e.g. be performed in one or more repetitions of the above steps.
  • the stereo matching may advantageously be a wide camera baseline stereo matching e.g. using epipolar geometry.
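One way such wide camera baseline stereo matching with epipolar geometry might look, sketched with OpenCV (an assumption for illustration, not the patent's prescribed implementation): estimate the fundamental matrix from candidate matches and keep only those consistent with it:

    import numpy as np
    import cv2

    def epipolar_filter(pts_left, pts_right, ransac_thresh=1.0):
        # pts_left, pts_right: candidate matched Nx2 feature positions from the
        # two camera units (N >= 8 for the RANSAC-based estimate).
        pl = np.asarray(pts_left, dtype=np.float32)
        pr = np.asarray(pts_right, dtype=np.float32)
        F, mask = cv2.findFundamentalMat(pl, pr, cv2.FM_RANSAC, ransac_thresh)
        if F is None:
            return None, None
        return F, mask.ravel().astype(bool)   # fundamental matrix, inlier mask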
  • the two or more cameras of the multi camera may be integrated in one unit or in separate units.
  • the computer system is configured for performing time shifted stereo matching comprising matching the primary light features with corresponding primary light features of at least one set of pixel data associated with a previous time attribute, and for applying the time shifted matching in the estimation of the spatial position of the projector device.
  • the previous time attribute is up to about 10 seconds, such as up to about 5 seconds, such as up to one second earlier than the time attribute of the set of pixel data processed in the current processing step.
  • the multi camera may advantageously comprise a stereo camera comprising two coordinated image acquisition units.
  • the image acquisition units may be arranged to acquire wide camera baseline images.
  • the image acquisition units are arranged with a fixed relative distance to each other, preferably of at least about 5 mm, such as at least about 1 cm, such as at least about 2 cm.
  • the distance between two cameras arranged for acquiring images of a surface that is to be determined should be equal, or as close to equal as possible, to the distance between the cameras and the surface.
  • the distance(s) between image acquisition units is advantageously relatively small, such as less than 5 mm, such as from about 0.1 to about 3 mm.
  • the camera baseline, i.e. the distance between the camera units, will be relatively narrow compared to the distance to the tissue field. This may for example be compensated for by operating using a wide projector-camera baseline, e.g. as described below.
  • the 3D reconstruction system is configured for operating using both a wide camera baseline and a wide projector-camera baseline.
  • the computer system may further be configured for performing stereo matching, and preferably wide field stereo matching, of tissue field features, i.e. features that represent local tissue field areas having a characteristic attribute, such as a protrusion, a depression and/or a local lesion. Since the 3D reconstruction system is capable of determining sizes, the computer system may determine the size and/or volume of, for example, a hernia and/or a protrusion, a depression and/or a local lesion.
  • the multi camera comprises coordinated image acquisition units arranged with substantially parallel optical axes and/or centre axes.
  • the multi camera comprises coordinated image acquisition units arranged with optical axes having a relative angle to each other of up to about 45 degrees, such as up to about 30 degrees, such as up to about 15 degrees.
  • the coordinated image acquisition units are arranged to have an overlapping field of view.
  • the overlapping field of view is relatively large since only light features recognized in two or more images of the multi camera may be matched.
  • the coordinated image acquisition units have an at least about 10 % overlapping field of view, such as at least about 25 % overlapping field of view, such as at least about 50% overlapping field of view.
  • the overlapping field of view is generally determined as an angular field of view.
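As a deliberately simplified illustration (ignoring baseline and distance to the tissue field, which also affect the true overlap), the angular overlap fraction of two camera units with equal fields of view and a relative axis angle could be estimated as:

    def angular_overlap_fraction(fov_deg, relative_angle_deg):
        # Two equal fields of view whose optical axes differ by the given angle;
        # the shared angular sector shrinks by roughly that angle.
        overlap = max(0.0, fov_deg - relative_angle_deg)
        return overlap / fov_deg

    print(angular_overlap_fraction(60.0, 15.0))   # 0.75, i.e. a 75 % overlap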
  • the image acquisition device has a stereo field of view - determined as the maximal overlapping field of view - of up to about 60 degrees, such as up to about 50 degrees, such as up to about 40 degrees, such as from about 5 degrees to about 50 degrees.
  • the image acquisition device has a maximal field of view of from at least about 5 degrees to about 160 degrees, such as up to about 120 degrees, such as from about 10 to about 100 degrees, such as from about 15 to about 50 degrees, such as from about 20 to about 40 degrees.
  • the maximal field of view may be determined in any rotational orientation, including horizontal, vertical or diagonal orientation.
  • the image acquisition device has a field of view adapted to cover the tissue field without including the projector device.
  • the 3D reconstruction system need not acquire images including the projector device and thus the 3D reconstruction system is a very flexible system which may operate with limited field(s) of view while simultaneously providing highly accurate 3D determinations.
  • the structured light arrangement and the image acquisition device are located on the same movable instrument. In an embodiment the structured light arrangement is located on one movable instrument and the image acquisition device is located on another independently movable instrument.
  • the one or more cameras of the image acquisition device may be any suitable camera(s).
  • the one or more cameras is/are located on a cannula.
  • the 3D reconstruction system is advantageously adapted for operating using a wide projector- camera baseline.
  • the term "projector-camera baseline” means the distance between the camera and the projector.
  • the projector-camera baseline is dynamic and variable and may be selected by the surgeon to ensure the desired depth resolution.
  • the projector-camera baseline advantageously is selected in dependence of the distance between the projector and the tissue field, preferably such that the projector-camera baseline matches the distance between the projector and the tissue field, which has been found to provide very accurate determinations.
  • the projector-camera baseline is from about 1/16 to about 16 times the distance between the camera and the tissue field, such as from about 1/4 to about 4 times the distance between the camera and the tissue field, such as from about half to about 2 times the distance between the camera and the tissue field, such as 1 time the distance between the camera and the tissue field.
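A trivial sketch of this selection rule (ranges taken from the text above; function name hypothetical):

    def baseline_in_preferred_range(baseline, camera_tissue_distance, lo=0.25, hi=4.0):
        # Preferably the projector-camera baseline is about 1/4 to 4 times the
        # camera-to-tissue distance, ideally about 1 time that distance.
        ratio = baseline / camera_tissue_distance
        return lo <= ratio <= hi

    print(baseline_in_preferred_range(8.0, 10.0))   # True: baseline is 0.8x the distance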
  • the 3D reconstruction system is adapted for operating using a wide projector-camera baseline comprising a projector-camera distance up to about 45 cm, such as up to about 30 cm, such as up to about 15 cm, such as up to about 10 cm, such as up to about 5 cm, such as up to about 3 cm, such as up to about 2 cm. It has been found that operating using a wide baseline ensures an even higher accuracy for size determinations, such as tissue field volume determinations and/or tissue field topologic size determinations.
  • the reconstruction system is adapted for operating using a varying projector-camera baseline, preferably comprising that the projector and the image acquisition device are movable independently of each other.
  • the computer system is adapted for determining the projector-camera baseline. This may be provided by the estimation of the homographic transformation between the matched features at a given time.
  • the computer system is adapted for determining the projector-camera baseline as a function of time.
  • the computer system is configured for associating a plurality of determined projector-camera baseline(s) with a timely data link to the sets of pixel data.
  • the data link may e.g. be provided by the time attributes, for example to provide that each determined projector-camera baseline is associated to a time attribute representing the time of the determined projector-camera baseline.
  • the determined projector-camera baselines and the sets of pixel data may thereafter be linked using their respective time attributes, for example such that a determined projector-camera baseline is linked to the set of pixel data having the closest time attribute match.
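A small sketch of this closest-time linking (data structures hypothetical):

    def link_by_time(baselines, pixel_sets):
        # baselines: list of (time_attribute, baseline) determinations;
        # pixel_sets: list of (time_attribute, set_of_pixel_data).
        linked = []
        for t_b, baseline in baselines:
            t_p, pixels = min(pixel_sets, key=lambda p: abs(p[0] - t_b))
            linked.append((baseline, pixels, abs(t_p - t_b)))   # keep the time gap too
        return linked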
  • the 3D reconstruction system is adapted for operating using non-rigid structure from motion.
  • Operating using non-rigid structure from motion is in particular desired for size determinations, such as tissue field volume determinations and/or tissue field topologic size determinations, e.g. where local motion may occur, e.g. near a vessel or nerve. The non-rigid structure from motion technique is known in the art.
  • the computer system comprises data representing an angle of divergence of the projected structured light beam.
  • the computer system may determine the spatial position of the projector device with an even higher accuracy using relatively simple algorithms.
  • the projected structured light beam has an angle of divergence and the computer system stores or is configured for storing the angle of divergence in its memory.
  • the angle of divergence data may e.g. be a part of the reference structured light data set.
  • the angle of divergence may for example be at least about 10 degrees, such as at least about 20 degrees, such as at least about 30 degrees, such as at least about 40 degrees relative to the centre axis of the structured light.
  • the optimal angle of divergence may advantageously be selected in dependence of how close the projector device is adapted for being located relative to the tissue field.
  • the angle of divergence may advantageously be substantially rotationally symmetrical, thereby the structured light arrangement may be provided using relatively simple optical structures.
  • the computer system is configured for acquiring the angle of divergence from an operator via a user interface and/or by a calibration.
  • the angle of divergence may be tunable, e.g. by preprogramming or by operator intervention. For example, an operator may use a larger angle of divergence where the projector device is closer to the tissue field and a smaller angle of divergence where the projector device is further from the tissue field.
  • In an embodiment the angle of divergence is fixed. In an embodiment the angle of divergence is tunable according to a preprogrammed routine and/or by operator instructions.
  • the computer system is configured for determining the angle of divergence by a calibration.
  • the calibration may for example comprise projecting the projected structured light beam from a preselected distance toward a known surface area, recording the reflected structured light and determining the angle of divergence, such as projecting the projected structured light beam from a preselected distance with its centre axis orthogonal to the known surface area.
  • the angle of divergence may for example be determined as the beam divergence, Θ:

    Θ = 2 · arctan((D2 − D1) / (2 · L))

  • where D1 is the largest cross-sectional dimension orthogonal to the centre axis of the structured light beam as projected from the projector device or at a first distance from the projector device, D2 is the largest cross-sectional dimension orthogonal to the centre axis of the structured light at a second, larger distance from the projector, e.g. at a surface, and L is the distance between D1 and D2, the distances being determined along the centre axis of the light pattern.
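The formula above as a small worked function (all lengths in the same unit; the full-angle arctangent form follows directly from the geometry of the two cross-sections):

    import math

    def beam_divergence_deg(d1, d2, axial_distance):
        # d1, d2: largest cross-sectional dimensions orthogonal to the centre
        # axis at two distances; axial_distance: L, measured along the axis.
        return math.degrees(2.0 * math.atan((d2 - d1) / (2.0 * axial_distance)))

    # Example: a beam widening from 10 mm to 30 mm over 20 mm of travel:
    print(round(beam_divergence_deg(10.0, 30.0, 20.0), 1))   # 53.1 degrees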
  • the data representing the angle of divergence for the or each projected structured light beam is included in the or each respective reference structured light data set.
  • the reference structured light data set for each projected structured light beam will be known to the computer system and may be applied in the at least one determination of the tissue field.
  • the structured light beam may in practice have any kind of optically detectable structure.
  • the wavelength(s), structure and intensity of the structured light beam are advantageously selected such that the light is reflected from the tissue field, such as a field of soft tissue, such as a tissue field exposed by surgery.
  • the cross-sectional light structure of the projected structured light beam comprises optically distinguished areas, such as a pattern of areas of light and areas of no-light and/or areas of light of a first quality of a character and areas of light of a second quality of the character, wherein the character advantageously is selected from light intensity, wavelength and/or range of wavelengths.
  • the structured light beam may for example be a pattern of light of a certain wavelength range with intermediate areas of no light or areas of light with a narrower range of wavelengths.
  • the pattern may e.g. be strips, cross hatched lines or any other lines, or shapes.
  • the structured light beam may e.g. be provided by providing the projector device with one or more optical filters and/or by a projector device comprising a diffractive optical element (DOE), a spatial light modulator, a multi-order diffractive lens, a holographic lens, a Fresnel lens, a mirror arrangement, a digital micromirror device (DMD) and/or a computer regulated optical element, such as a computer regulated mechanically optical element e.g. a mems (micro-electro-mechanical) element.
  • the structured light arrangement comprises light blocking element(s) that blocks parts of the light to form the structuring or part of the structuring of the structured light beam.
  • the blocking elements may e.g. be blocking strips arranged on or forming part of the projector device.
  • the structured light arrangement comprises a fiber optic probe comprising the projector device configured to project a structured light beam onto at least a section of the tissue field.
  • the fiber optic probe advantageously comprises a structured light generating and projecting device and a bundle of fibers guiding the structured light to the projector device for projecting the structured light beam onto at least a section of the tissue field.
  • the structured light generating and projecting device will in the following be referred to as the structured light device.
  • the structured light device may be any device that can generate a suitable structured light.
  • the size of the structured light device is not very important.
  • the fibers of the fiber bundle each have a light receiving end and a light emitting end.
  • the fiber bundle has a light receiving end and a light emitting end.
  • the light receiving end of the fiber bundle is operatively coupled to the structured light device for receiving at least a part of the structured light from the structured light device.
  • the operatively coupling may include one or more lenses and/or objectives e.g. for focusing the structured light to be received by the light receiving end of the fiber bundle.
  • the light emitting ends of the fibers are arranged in an encasing to thereby form a probe-head comprising the projector device.
  • the probe-head may comprise one or more lenses for ensuring a desired projection of the structured light beam.
  • the fiber bundle advantageously comprises at least 10 optical fibers, such as at least 50 optical fibers, such as from about 100 to about 2000 optical fibers, such as from about 200 to about 1000 optical fibers.
•   the fibers of the fiber bundle may be identical or they may differ, e.g. where the structured light comprises a structuring of different wavelengths.
•   the fibers of the fiber bundle are substantially identical. In an embodiment, fibers of the fiber bundle are partly or fully fused to ensure a fixed relative location of the fiber ends.
•   the structured light arrangement is mounted to a minimally surgical instrument.
  • the structured light arrangement and the minimally surgical instrument are advantageously in the form of a fiber optic probe instrument assembly as described below.
  • the structured light beam is provided as described in DK PA 2016 71005.
  • the cross-sectional light structure comprises a symmetrical or an asymmetrical light pattern which may be repeating or non-repeating.
•   the cross-sectional light structure is asymmetrical and has no symmetry plane. Thus the risk of erroneous matching of light features may be reduced or even avoided.
  • the light pattern advantageously comprises a plurality of light dots, an arch shape, ring or semi-ring shaped lines, a plurality of angled lines, a coded structured light configuration or any combinations thereof, preferably the pattern comprises a grid of lines, a crosshatched pattern optionally comprising substantially parallel lines.
  • the light pattern comprises a bar code, such as a QR code.
  • the light features comprise local light fractions comprising at least one optically detectable attribute.
  • the local light fractions may for example independently of each other each comprise an intensity attribute, a wavelength attribute, a geometric attribute or any combinations thereof.
•   Preferably at least one of the attributes is a location attribute and/or an orientation attribute.
  • each local light fraction has a beam area fraction of up to about 25 % of the area of the cross-sectional light structure.
•   each local light fraction has a beam area fraction of up to about 20 %, such as up to about 10 %, such as up to about 5 %, such as up to about 3 % of the area of the cross-sectional light structure.
  • the area of the cross-sectional light structure of a light feature may overlap with, form part of or fully include the area of the cross-sectional light structure of another light feature.
  • the centre to centre distance between at least two of the light features is at least about 0.1 %, such as at least about 1 % of the maximal dimension of the cross-sectional light structure, preferably the centre to centre distance between at least two of the light features is at least about 10 %, such as at least about 25 %, such as at least about 50 % of the maximal dimension of the cross-sectional light structure.
  • the centre to centre distance of light features is determined as the distance of the light features at the projected structured light beam.
•   the distance of two corner features may be determined as the 2D Euclidean distance between the corners, e.g. diametrically determined.
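A minimal sketch of the centre-to-centre separation check described above (Python with NumPy is assumed; the feature centres and the minimum fraction are illustrative inputs):

```python
import numpy as np

def well_separated(centres, max_dimension, min_fraction=0.10):
    """Check that every pair of light-feature centres is separated by at
    least min_fraction of the maximal dimension of the cross-sectional
    light structure (2D Euclidean distance, centre to centre)."""
    pts = np.asarray(centres, dtype=float)
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if np.linalg.norm(pts[i] - pts[j]) < min_fraction * max_dimension:
                return False
    return True
```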
  • each of the light features are represented by a local light fraction of the cross-sectional light structure having an optically detectable attribute, preferably each light feature comprises a local and characteristic light fraction of the projected structured light.
•   each of the light features independently of each other comprise a light fraction comprising two or more crossing lines, v-shaped lines, a single dot, a group of dots, a corner section, a pair of parallel lines, a circle or any combinations thereof and/or any other geometrical shape(s).
  • each of the light features comprises at least one of a location attribute and an orientation attribute.
  • the one or more light features advantageously comprise a combined location and orientation attribute.
  • the set of light features may be a predefined set of light features or the set of light features may be selected by the computer system e.g. by selecting light features which may be relatively simple to recognize by the computer system.
•   the set of light features comprises predefined light features.
  • the computer system is advantageously configured for searching for at least some of the predefined light features in the set of pixel data and preferably for recognizing the predefined light features if present and preferably without being distorted beyond a threshold.
  • Data representing the predefined light features may be transmitted to the computer system e.g. together with the reference structured light data set and/or the computer system may acquire the predefined light features from a database e.g. together with the reference structured light data set.
  • the computer system is configured for defining the set of light features from the reference structured light data set representing the projected structured light beam.
  • the computer system may be configured for defining the light features of the set of light features as light features with attributes which make the light features relatively simple to be recognized from the set of pixel data of the acquired images.
  • the computer system may be programmed to define the light features of the set of light features according to preferred attributes.
  • the computer system is advantageously configured for searching for at least some of the defined light features in the received set of pixel data.
•   the primary light features may be preprogrammed in the computer system or preferably the computer system is configured to select the primary light features from the recognized light features.
  • the computer system may e.g. be configured for selecting the primary light features according to a set of selection rules, for example comprising selecting recognized light features having both orientation attribute and position attribute, selecting recognized light features which have a distance beyond a threshold distance to one or more other already selected recognized light features, selecting recognized light features representing corner segments of the cross-sectional light structure, selecting recognized light features representing square pattern segments of the cross-sectional light structure and/or combinations thereof.
•   the selection of the primary light features may preferably be performed as the light features are recognized and continued until a sufficient number of primary light features have been selected.
  • the sufficient number of primary light features may be determined by the computer system or it may be a predefined number programmed into the computer system and/or transmitted to the computer system.
•   the computer system may have a preprogrammed number as the sufficient number of primary light features and the computer system may be configured to overwrite this number upon instruction of an operator to apply another number as the sufficient number of primary light features.
  • the primary light features include at least 3 primary light features arranged with a triangular configuration to each other.
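The selection rules above could, for example, be realized as a greedy filter of the following kind. This is an illustrative sketch, not the patented method; Python with NumPy is assumed, and the dictionary-based feature representation and the threshold values are hypothetical conventions:

```python
import numpy as np

def select_primary_features(recognized, sufficient_number=3, min_distance=20.0):
    """Greedy selection of primary light features: prefer features carrying
    both a location and an orientation attribute, and keep only features
    lying beyond a threshold distance from those already selected."""
    primary = []
    for feature in recognized:  # features arrive in recognition order
        if feature.get("location") is None or feature.get("orientation") is None:
            continue  # rule: require both position and orientation attribute
        centre = np.asarray(feature["location"], dtype=float)
        if all(np.linalg.norm(centre - np.asarray(p["location"])) >= min_distance
               for p in primary):
            primary.append(feature)
        if len(primary) >= sufficient_number:
            break  # stop once a sufficient number has been selected
    return primary
```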
  • the computer system comprises an artificial intelligent processing system configured for selecting qualified light features.
  • the artificial intelligent processing system may be any artificial intelligent processing system suitable for selecting qualified light features - i.e. light features which comprise at least one position attribute and/or at least one orientation attribute.
•   the artificial intelligent processing system may be trained to select qualified light features by being presented with a number of qualified light features (labeled as qualified) and a number of non-qualified light features (labeled as non-qualified) and, based on these presentations and labels, being trained to distinguish between qualified and non-qualified light features.
•   the artificial intelligent processing system comprises machine learning algorithms, such as machine learning algorithms for supervised deep learning and/or machine learning algorithms for non-supervised deep learning.
•   the computer system and/or the artificial intelligent processing system are configured for including pre-operation data and/or inter-operation data, e.g. from a cloud of data, in the selection of qualified light features.
  • the computer system is configured to recognize the primary light features from the received pixel data.
  • the computer system is configured for selecting the primary light features from the recognized light features and the computer system comprises a primary light feature threshold for selecting light features qualified for representing primary light features, the primary light features threshold preferably comprises a location attribute sub-threshold and an orientation attribute sub-threshold.
•   the primary light feature threshold may for example comprise minimum intensity, minimum identity probability, minimum orientation probability, minimum asymmetry, minimum centre distance, etc.
  • the primary light features are identified as light features comprising at least one of a location attribute and an orientation attribute.
  • a plurality of the primary light features is identified as light features comprising both a location attribute and an orientation attribute.
•   An orientation attribute is generally an attribute with a degree of asymmetry, preferably rotational asymmetry, more preferably at most two-fold symmetry and most preferably one-fold symmetry.
  • the asymmetry may be an asymmetry in light intensity, in geometrical shape, in wavelength and/or range of wavelengths.
•   the computer system may be configured for determining the orientation of the light feature in the cross-sectional plane of the structured light and thereby also the rotational orientation of the structured light relative to the projector device.
•   the location attribute is represented by a light point, such as the cross of crossing lines, the tip of v-shaped lines, the position of a constraint along a line, the amplitude of a wave-shaped line or the centre of a light dot, etc.
•   the orientation attribute is represented by one or more asymmetrical geometrical shapes, such as the orientation of the lines of crossing lines, the orientation of the lines of v-shaped lines, orientations of the wave of a wave-shaped line or the orientation of imaginary lines between dots of a group of dots.
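As a sketch of how a location and an orientation attribute might be derived from a crossing-lines feature (Python with NumPy is assumed; the line-segment endpoints are hypothetical inputs, e.g. from a line detector):

```python
import numpy as np

def cross_feature_attributes(p1, p2, q1, q2):
    """Location and orientation attributes of a crossing-lines feature:
    the location is the intersection of the two lines (p1-p2 and q1-q2),
    the orientation is the direction angle of the first line."""
    p1, p2, q1, q2 = (np.asarray(v, dtype=float) for v in (p1, p2, q1, q2))
    d1, d2 = p2 - p1, q2 - q1
    denom = d1[0] * d2[1] - d1[1] * d2[0]  # 2D cross product of directions
    if abs(denom) < 1e-9:
        raise ValueError("lines are parallel; no crossing point")
    t = ((q1[0] - p1[0]) * d2[1] - (q1[1] - p1[1]) * d2[0]) / denom
    location = p1 + t * d1                  # the cross of the crossing lines
    orientation = np.arctan2(d1[1], d1[0])  # direction of the first line
    return location, orientation
```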
  • the computer system comprises an artificial intelligent processing system configured for selecting the primary light features from the recognized light features.
•   the artificial intelligent processing system may for example be a trained artificial intelligent processing system which has been trained to select qualified light features, e.g. as described above.
  • the image acquisition device may in principle be any image acquisition device capable of acquisition of digital images.
  • the image acquisition device comprises at least one image acquisition unit comprising a pixel sensor array, such as charge-coupled device (CCD) image sensor, or a complementary metal-oxide-semiconductor (CMOS) image sensor.
•   the image acquisition unit comprises an array of pixel sensors each comprising a photodetector (such as an avalanche photodiode (APD), a photomultiplier or a metal-semiconductor-metal (MSM) photodetector).
  • the image acquisition unit comprises active pixel sensors (APS).
•   each pixel comprises an amplifier which makes the operation of the image acquisition unit faster; more preferably the image acquisition unit comprises at least about 0.1 megapixels, such as at least about 1 megapixel, such as at least about 5 megapixels.
  • the image acquisition device comprises and/or is associated with an optical filter.
•   the optical filter may for example be a wavelength filter and/or a polarizing filter, for example a linear or a circular polarizer.
  • the optical filter is arranged to provide that at least some of the light reflected from the tissue field is filtered prior to reaching the camera(s) of the image acquisition device.
•   Such an optical filter may be applied to further ensure that the pixel data of the frames are subjected to as little noise as possible.
•   the image acquisition device comprises and/or is associated with at least one linear polarization filter.
•   the 3D reconstruction system is configured for acquiring one or more frames of reflected light of the structured light beam from the tissue field where the reflected light has been filtered by the at least one linear polarization filter, which ensures that light polarized in a first direction is blocked while the remaining light passes through.
•   the 3D reconstruction system may further be configured for acquiring one or more other frames of reflected light of the structured light beam from the tissue field where the reflected light has been filtered by the at least one linear polarization filter such that light polarized in a direction orthogonal to the first direction is blocked.
  • Light reflecting off a surface will tend to be polarized, with the direction of polarization (the way that the electric field vectors are pointing) being parallel to the plane of the interface.
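One plausible use of the two orthogonally polarized frames, sketched under the assumption that specular glare off the tissue surface is strongly polarized while the diffuse reflection is largely unpolarized (Python with NumPy is assumed; this is not a method prescribed by the patent):

```python
import numpy as np

def suppress_specular(frame_pol0, frame_pol90):
    """Combine two frames acquired through orthogonal linear polarization
    filters. The per-pixel minimum attenuates polarized specular glare
    while keeping the mostly unpolarized diffuse tissue reflection."""
    return np.minimum(np.asarray(frame_pol0, dtype=np.float32),
                      np.asarray(frame_pol90, dtype=np.float32))
```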
  • the computer system may acquire further data for performing a 3D reconstruction of a part of or the entire tissue field.
  • the frame rate may be selected in dependence of the intended use of the 3D reconstruction system and in particular in dependence on how fast it is intended to move the structured light arrangement and/or the image acquisition device.
  • the image acquisition unit has a frame rate of at least about 10 Hz, such as at least about 25 Hz, such as at least about 50 Hz, such as at least about 75 Hz.
•   the image acquisition units are advantageously synchronized in time to acquire images simultaneously.
  • the two or more image acquisition units preferably have the same frame rate.
  • the frame rate is from about 10 to about 75 Hz.
  • the image acquisition unit has an even higher frame rate, such as up to about 300 Hz, such as up to about 500 Hz, such as up to about 1000 Hz.
  • the structured light arrangement is configured for being pulsed, preferably having a pulse duration and a pulse frequency.
•   "pulsed" herein means that the structured light arrangement projects the structured light beam in pulses.
•   the pulses may e.g. be provided by pulsing the light source, e.g. by a light source driver, and/or using a shutter and/or by any other means capable of switching the light on and off.
  • the pulse duration is the time of one pulse of projected structured light.
  • the time between pulses is referred to as inter pulse time.
  • the 3D reconstruction system may perform other light based measurements in the inter pulse time between pulses, e.g. using one or more other light sources.
•   the 3D reconstruction system may comprise an imaging light arrangement for imaging the tissue field, and advantageously the imaging light arrangement is pulsed asynchronously relative to the structured light arrangement.
•   the structured light arrangement has a pulse duration, i.e. the temporal length of the pulse, which is from about half to about twice an inter pulse time between pulses, such as from about 0.01 to about 1.5 times the inter pulse time, such as from 0.05 to about 1 times the inter pulse time, such as from 0.1 to about 0.5 times the inter pulse time.
  • the pulse duration and/or the pulse rate may advantageously be selectable by the surgeon, e.g. regulated by the computer system.
  • measurements and/or determinations using one or more other projected light beams e.g. additional structured light beams and/or beams comprising IR wavelength may be performed in the inter pulse time without disturbing the 3D determination of the tissue field by the 3D reconstruction system.
  • the structured light arrangement has a pulse rate adjusted relative to the frame rate of the image acquisition unit.
  • the structured light arrangement has a pulse rate adjusted relative to the frame rate of the image acquisition unit, to provide that the 3D reconstruction system is configured for acquiring the plurality of frames comprising reflected structured light and for acquiring a plurality of background frames between pulses of the projected structured light.
  • the background frames are acquired in inter pulse time and preferably while also an optional illumination light is shut off.
•   the structured light arrangement has a pulse rate adjusted relative to the frame rate of the image acquisition unit to provide that the image acquisition units acquire one or more background frames during the inter pulse time.
•   the structured light arrangement has a pulse rate adjusted relative to the frame rate of the image acquisition unit to provide that the image acquisition units acquire every 2nd, 3rd, 4th, 5th or 6th frame as a background frame during the inter pulse time and preferably the remaining frames during the pulse time when the structured light beam is on.
  • the structured light arrangement may advantageously have a structured light controller for adjusting the pulse rate.
  • the structured light controller may also be configured for controlling the pulse length of the structured light beam.
  • the structured light controller is preferably in data communication with the computer system or it forms part of the computer system.
  • the computer system may advantageously control both pulse rate and frame rate and be configured for the timely control thereof.
•   the computer system is configured for subtracting pixel values of one or more background frames from the respective sets of pixel data, thereby reducing noise as indicated above.
•   the computer system is configured for subtracting pixel values of the timely nearest background frame from each of the respective sets of pixel data. This noise reduction is preferably performed prior to recognizing a plurality of the light features of the set of light features in the set of pixel data.
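A minimal sketch of this background-frame subtraction, assuming 8-bit frames as NumPy arrays and time attributes as comparable numbers (illustrative only):

```python
import numpy as np

def subtract_nearest_background(frame, frame_time, background_frames):
    """Subtract the timely nearest background frame (acquired in the inter
    pulse time, structured light off) from a structured light frame to
    reduce noise from ambient illumination.
    background_frames: non-empty list of (time_attribute, pixel_array)."""
    bg_time, bg = min(background_frames, key=lambda tb: abs(tb[0] - frame_time))
    diff = frame.astype(np.int32) - bg.astype(np.int32)
    return np.clip(diff, 0, 255).astype(np.uint8)  # clamp to valid pixel range
```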
  • the time attribute may be a relative time attribute or an actual time attribute or a combination thereof.
  • a first set of pixel data may be associated with an actual time attribute and subsequent sets of pixel data may be associated with relative time attributes, which are relative with respect to the actual time attribute of the first set of pixel data.
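A small sketch of resolving relative time attributes against the actual time attribute of the first set of pixel data (Python assumed; the timestamp values are illustrative):

```python
def resolve_time_attributes(first_actual_time, relative_times):
    """Resolve time attributes of consecutive sets of pixel data: the first
    set carries an actual time attribute, subsequent sets carry time
    attributes relative to the first."""
    return [first_actual_time] + [first_actual_time + dt for dt in relative_times]

# e.g. three frames: at t0, t0 + 0.04 s and t0 + 0.08 s
times = resolve_time_attributes(1530000000.0, [0.04, 0.08])
```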
  • the computer system is advantageously configured for communicating with the image acquisition device and preferably the structured light arrangement by wire or wireless.
•   the computer system preferably comprises at least one processor, at least one memory, and at least one user interface, such as a graphical interface, a command line interface, an audible interface, a touch based interface, a holographic interface or any other user interface.
  • the user interface advantageously comprises at least a display/monitor (a screen e.g. a touch screen) and/or a printer.
  • the computer system may advantageously be configured for receiving patient data via a user interface and/or for acquiring patient data from a database.
  • the patient data may for example comprise data representing the tissue field e.g. at another point of time and/or data that represent a similar tissue field e.g. a tissue field from a patient of similar age and/or gender or from a group of similar patients.
•   the patient data comprise pre-operation data (data obtained before starting a procedure, e.g. a surgical procedure) and/or inter-operation data (data obtained during a procedure, e.g. a surgical procedure), such as data obtained by a scanning or other measuring method, such as a CT scanning, an MR scanning, an ultrasound scanning, a fluorescence imaging and/or a PET scanning, and/or such data estimated and/or calculated for groups of patients.
•   suitable pre-operation and/or inter-operation data comprise for example data representing measurements and/or estimations obtained by the methods described in the review article "Novel methods for mapping the cavernous nerves during radical prostatectomy" by Fried, N. M. & Burnett, A. L. Nat. Rev. Urol. 12, 451-460 (2015); published online 10 August 2015; doi:10.1038/nrurol.2015.174.
•   Fluorescence imaging has been shown to be a helpful tool during or prior to surgery, e.g. for improved identification for repair of damaged tissues. Further information about fluorescence imaging for surgical guidance is for example disclosed in "Fluorescence Imaging in Surgery" by Ryan K. Orosco et al., IEEE Rev Biomed Eng. 2013; 6: 178-187 (published online 2013 Jan 15).
  • the computer system is configured for applying the patient data for validating pixel data and/or for repairing incomplete pixel data.
  • the reference structured light data set may be loaded to the computer system by any method.
•   the reference structured light data set may e.g. be transmitted to the computer system by an operator and/or the computer system may be configured for acquiring the reference structured light data set from a database, e.g. in response to an instruction from an operator and/or based on a code included in the structured light beam, such as an optically detectable code.
•   Such a reference structured light database, comprising a plurality of reference structured light data sets each linked to a unique code, may e.g. form part of the 3D reconstruction system.
•   the computer system may switch to projecting another structured light beam and simultaneously apply the reference structured light data set associated with this other structured light beam.
  • the computer system is configured for receiving and storing reference structured light data set representing the structured light beam including the set of light features.
•   the computer system is preferably configured for receiving the reference structured light data set via a user interface and/or from a database.
  • the computer system may be configured for using the reference structured light data set for recognizing the light features from pixel data and preferably for identifying and/or selecting the primary features.
•   the calibration is preferably performed by arranging the projector of the structured light arrangement and the image acquisition unit(s) of the image acquisition device in predefined spatial positions relative to a plane surface, projecting the structured light to impinge on the plane surface and acquiring the reflected light by the image acquisition device.
  • the computer system may also determine and store data representing the angle of divergence of the structured light beam.
  • the computer system may advantageously be configured for matching the primary features and corresponding features of the reference structured light data set using homographical matching principles, e.g. involving trigonometric algorithms and/or epipolar geometric algorithms.
  • the computer system is configured for estimating the spatial position of the projector device based on the matches between the primary features and corresponding features of the reference structured light data set.
  • the computer system may be configured for performing the at least one determination of the tissue field based on the spatial position of the projector device and the recognized light features by using trigonometric algorithms, e.g. to determine surface topography e.g. comprising height differences and/or other surface shapes.
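A sketch of homographical matching of the kind referred to above, assuming OpenCV, a calibrated pinhole camera with intrinsic matrix K, and a locally plane tissue surface (only the OpenCV function names are real; the inputs and the surrounding logic are illustrative assumptions):

```python
import cv2
import numpy as np

def estimate_projector_pose(ref_points, img_points, K):
    """Match primary light features against corresponding features of the
    reference structured light data set with a RANSAC homography, then
    decompose the homography into candidate rotations/translations of the
    projector relative to the (locally plane) surface."""
    ref = np.asarray(ref_points, dtype=np.float32).reshape(-1, 1, 2)
    img = np.asarray(img_points, dtype=np.float32).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(ref, img, cv2.RANSAC, 3.0)
    if H is None:
        raise ValueError("not enough feature matches for a homography")
    n, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return H, rotations, translations  # candidate poses, to be disambiguated
```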
•   the computer system is configured for performing the at least one determination of the tissue field, wherein the at least one determination comprises a distance between the projector device and the tissue field, a 3D structure of at least a part of the tissue field and/or a size determination of at least a part of the tissue field, such as a size and/or a volume of an organ section.
  • the at least one determination of the tissue field comprises a determination of a position of a nerve and/or a vein e.g. relative to a tool and/or to the projector.
  • the at least one determination of the tissue field comprises a volume determination e.g. of a cancer knot.
  • the determination of the tissue field is advantageously determinations in 3D space - i.e. actual determinations not limited by a point of view.
  • a determination of the tissue field may advantageously comprise an actual distance between the projector device and the tissue field (not limited to a view direction, such as a view from the image acquisition device) e.g. such as the closest point of the tissue field and/or a point of the tissue field selected by an operator.
•   where the projector device is fixed to an instrument, such as a minimally invasive surgical instrument for surgical and/or diagnostic use, the instrument may be moved with very high accuracy relative to the tissue field.
•   the computer system may be configured for performing the at least one determination of the tissue field based on pixel data having corresponding time attributes.
•   the computer system may be configured to supplement with data from set(s) of pixel data having time attribute(s) within e.g. up to 1 s of the data in question, such as within 0.5 s, such as within 0.1 s of the data in question.
•   the computer system is configured for performing the at least one determination of the tissue field based on pixel data having two or more different time attributes.
  • the computer system may advantageously be configured to display the at least one determination of the tissue field on a display, such as a screen e.g. continuously in real time and/or upon request from an operator.
  • the projector of the structured light arrangement and the image acquisition device may be fixed relative to each other or they may be independent. In an embodiment the projector of the structured light arrangement and the image acquisition device are fixed to each other with an angle of up to about 45 degrees.
  • the projector of the structured light arrangement and the image acquisition device are independently movable.
  • the 3D reconstruction system comprises a sensor arrangement for determining the position and orientation of the image acquisition device e.g. relative to the projector device and/or relative to a robot.
•   the computer system is configured for calibrating the position and orientation of the image acquisition device relative to the projector device.
  • the computer system may thereby acquire data representing the position and orientation of the image acquisition device relative to the projector device.
  • the computer system may be configured for further refining the estimation of the spatial position of the projector device relative to the tissue field and/or the determination of the tissue field in 3D space.
  • the 3D reconstruction system comprises a sensor arrangement for determining the position and orientation of the projector device e.g. relative to the image acquisition device and/or relative to a robot.
  • the computer system is preferably configured for repeating the above described estimation of the spatial position of the projector device relative to the tissue field and/or the determination of the tissue in real time as the computer system acquires the set of pixel data representing the consecutive images acquired by the image acquisition device to thereby provide a real time determination of the tissue field.
•   the 3D reconstruction system may be applied in surgery to expose pulsating areas of the tissue field and thus warn a surgeon, to avoid accidentally cutting into an artery or similar. Accidents caused by other patient movements may also be avoided by using the 3D reconstruction system. It has been found that the 3D reconstruction system may be applied for determining changes of the tissue field in real time, such as movements caused by patient movements, e.g. local movements, such as peristaltic movements causing movements of the tissue field. Thus, the 3D reconstruction system may in particular be beneficial for use in surgery, e.g. to ensure accurate cutting, avoid undesired damage of tissue and shorten the time of operation.
  • the at least one determination of the tissue field comprises determining a local movement of the tissue field, such as a pulsating movement and/or a movement caused by manipulation of the tissue e.g. caused by an instrument, such as a laparoscope.
  • the computer system of the 3D reconstruction system is adapted for controlling movement of one or more instruments.
•   the computer system of the 3D reconstruction system is adapted for controlling movement of a robot, e.g. a robotic surgeon, preferably connected to or forming part of (integrated with) the 3D reconstruction system.
  • the 3D reconstruction system ensures that the operator has a high 3D perception including a perception of sizes and distances.
•   the operator may use the 3D reconstruction system for performing volumetric determinations of selected tissue parts, such as nodules, protuberances and thickened tissue parts.
  • the computer system is configured for repeating in real time the determination of the tissue field for consecutive sets of pixel data of the received frames.
  • the structured light beam may advantageously be substantially constant for each determination of the tissue field.
•   in some uses of the 3D reconstruction system it may be desired to change the angle of divergence of the structured light beam and/or to change the structure of the structured light beam.
•   the structured light beam may e.g. be changed from one cross-sectional light structure and/or angle of divergence to another.
•   the computer system may be configured for changing the structured light beam e.g. upon receipt of an instruction from an operator.
•   the computer system is configured for running a routine in real time comprising repeating steps i-iii, i. calculating a spatial position of the projector device from one or more sets of pixel data having corresponding and/or subsequent time attribute(s)
  • the 3D reconstruction system or parts thereof may be mounted to or incorporated in one or more surgical and/or diagnostic instruments for optimizing movements of such instruments during a diagnostic procedure and/or a surgical procedure.
•   At least the projector device of the structured light arrangement is mounted to or integrated with a minimally surgical instrument, such as a minimally surgical instrument selected from a penetrator, an endoscope, an ultrasound transducer instrument or an invasive surgical instrument comprising a grasper, a suture grasper, a stapler, forceps, a dissector, hooks, scissors, a suction instrument, a clamp instrument, an electrode, a curette, ablators, scalpels, a biopsy and retractor instrument, a trocar or any combination comprising one or more of the abovementioned.
•   At least the image acquisition unit(s) of the image acquisition device is mounted to or integrated with a minimally surgical instrument, such as a minimally surgical instrument selected from a penetrator, an endoscope, an ultrasound transducer instrument or an invasive surgical instrument comprising a grasper, a suture grasper, a stapler, forceps, a dissector, hooks, scissors, a suction instrument, a clamp instrument, an electrode, a curette, ablators, scalpels, a biopsy and retractor instrument, a trocar or any combination comprising one or more of the abovementioned.
•   At least the projector device of the structured light arrangement and at least the image acquisition unit(s) of the image acquisition device are mounted to or integrated with separate minimally surgical instruments, such as minimally surgical instruments independently of each other selected from a penetrator, an endoscope, an ultrasound transducer instrument or an invasive surgical instrument comprising a grasper, a suture grasper, a stapler, forceps, a dissector, hooks, scissors, a suction instrument, a clamp instrument, an electrode, a curette, ablators, scalpels, a biopsy and retractor instrument, a trocar or any combination comprising one or more of the abovementioned.
  • the minimally surgical instrument may be a rigid or a bendable minimally surgical instrument.
  • the minimally surgical instrument comprises an articulating length section e.g. a distal length section.
•   the structured light arrangement and the ultrasound transducer instrument are preferably in the form of a structured light ultrasound instrument as described below.
  • the structured light arrangement may be as the projector probe disclosed in the co-pending application DK PA 2016 71005.
  • the structured light arrangement comprises a light source optically connected to the projector device.
  • the light source may in principle be any kind of light source.
  • the light source may be a coherent light source or an incoherent light source.
  • Examples of light sources include a semiconductor light source, such as a laser diode and/or a VCSEL light source as well as any kind of laser sources including narrow bandwidth sources and broad band sources.
•   the light source comprises a laser light source, such as a laser diode or a fibre laser.
•   the determination of light bandwidth is based on a full width at half maximum (FWHM) determination unless otherwise specified or clear from the context.
  • the light source is a fibre laser and/or a semiconductor laser, the light source preferably comprises a VCSEL or a light emitting diode (LED).
•   the light source is adapted for emitting modulated light, such as pulsed or continuous-wave (CW) modulated light, preferably with a frequency of at least about 200 Hz, such as at least about 100 kHz, such as at least about 1 MHz, such as at least about 20 MHz, such as up to about 200 MHz or more.
•   the wavelength or wavelengths may in principle comprise any wavelengths, such as from the low UV light to high IR light e.g. up to 3 µm or larger.
•   the wavelength(s) of the light source for forming the structured light beam is invisible to the human eye.
  • the light source is configured for emitting at least one electromagnetic wavelength within the UV range of from about 10 nm to about 400 nm, such as from about 200 to about 400 nm. In an embodiment the light source is configured for emitting at least one electromagnetic wavelength within the visible range of from about 400 nm to about 700 nm, such as from about 500 to about 600 nm. In an embodiment the light source is configured for emitting at least one electromagnetic wavelength within the IR range of from about 700 nm to about 1 mm, such as from about 800 to about 2500 nm.
•   the bandwidth of the light source may be narrow or wide; however, often it is desired to use a relatively narrow bandwidth for cost reasons and optionally for allowing distinguishing between light emitted from or projected from different elements, e.g. from different parts of the 3D reconstruction system.
•   the light source has a bandwidth of up to about 50 nm, such as from 1 nm to about 40 nm.
•   the light source has a bandwidth which is larger than about 50 nm, such as a supercontinuum bandwidth spanning over at least about 100 nm, such as at least about 500 nm.
•   the structured light arrangement may comprise two or more light sources, such as two LEDs having different wavelengths.
  • the 3D reconstruction system comprises two or more structured light arrangements.
•   the two or more structured light arrangements may be adapted to operate simultaneously, independently of each other or asynchronously.
•   the two or more structured light arrangements are adapted to operate independently of each other.
•   the two or more structured light arrangements are adapted to operate asynchronously.
•   the two or more structured light arrangements may comprise respective light sources that differ from each other, e.g. with respect to intensity and/or wavelength(s).
•   At least one light source of the 3D reconstruction system is an IR (infrared) light containing light source comprising light waves in the interval of from about 0.7 µm to about 4 µm, such as below 2 µm. It has been found that using IR light may provide a very effective system for determining sub-surface tissue structures, such as a vein. By identifying the subsurface position of, for example, a vein or another critical structure, the surgeon may ensure not to damage such a structure, e.g. during a surgical intervention at the tissue field.
•   the two or more light sources are pulsed asynchronously, preferably such that their pulse durations do not overlap in time.
•   the computer system is configured for determining one or more properties of a target site in the tissue field based on the wavelength of light reflected from the target site.
•   the computer system may comprise or be in communication with a spectroscope, such as a digital spectroscope, for recognizing wavelengths in the reflected light.
•   the spectroscope is e.g. an IR spectroscope.
•   the spectroscope may e.g. form part of the image acquisition device or it may be an independent spectroscope, such as a spectroscope comprising an IR transmitter and a spectroscopic sensor.
•   based on the wavelengths of the reflected light, certain properties of the tissue may be determined. This can for example be the oxygen level in the tissue and changes thereof, and the type of tissue.
•   the reflected light can be used to determine what kind of organ the tissue is part of, thereby indicating to the surgeon which organs are which and assisting the surgeon in locating an area of interest.
•   the computer system is adapted to determine the oxygen level of a tissue site, changes thereof and the type of tissue at the tissue site, where the tissue site may be the entire tissue field, an organ at the tissue field, a section of the tissue field and/or a tissue structure or another structure at a preselected depth of the tissue site, such as a sub-surface vein.
  • the tissue site may e.g. be a target site for the surgeon.
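Purely as a toy illustration of the ratiometric two-wavelength principle that underlies such oxygen-level estimates (the linear calibration constants below are a commonly quoted empirical approximation from pulse oximetry, not part of the patent and not clinically valid):

```python
def spo2_estimate(ac_red, dc_red, ac_ir, dc_ir):
    """Toy ratiometric oxygen saturation estimate from reflected light at a
    red and an IR wavelength. AC is the pulsatile component and DC the
    steady component of the reflected intensity at each wavelength.
    The linear mapping 110 - 25*R is an illustrative empirical rule."""
    ratio = (ac_red / dc_red) / (ac_ir / dc_ir)
    return max(0.0, min(100.0, 110.0 - 25.0 * ratio))  # clamp to 0-100 %
```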
  • the at least one light source may preferably be wavelength tunable.
  • the wavelength(s) of the light source may for example be selectable by the computer system and/or the surgeon. In an embodiment the wavelength(s) of the light source is selectable based on a feedback signal from the computer system.
•   the computer system is configured for determining a boundary about a target site having at least one property different from the tissue surrounding the target site.
•   the computer system may be configured for determining a size of the target site based on the determined boundary, such as a periphery, an area or preferably a volume.
  • the computer system is configured for performing frame stitching comprising stitching at least two sets of pixel data of the frames comprising reflections of the structured light beam from the tissue field.
  • the stitched set of pixel data preferably comprises a stitched image data set representing a larger tissue field than each set of pixel data.
  • the frame stitching comprises stitching sets of pixel data associated with different time attributes.
  • the different time attributes are preferably consecutive time attributes.
•   the computer system is configured for continuously stitching, in real time, received frames to the stitched image data set.
•   the computer system may be configured for unstitching and/or removing pixel data, e.g. by removing pixel data having a time attribute older than a preselected time and/or by removing pixel data from the stitched image data set where the pixel data represent a site of the larger tissue field having a distance to a target site and/or a center site which is larger than a preselected distance.
•   the 3D reconstruction system is configured for performing a plurality of topological determinations of the tissue field and the computer system is configured for performing topological stitching comprising stitching at least two of the topological determinations, such as from 3 to 100 of the topological determinations, such as from 5 to 50 of the topological determinations.
•   the topological determination may e.g. comprise determining a plurality of points of the tissue field in 3D, for example comprising the spatial relation between the points, to obtain a point cloud, and the topological stitching may comprise stitching point clouds of the topological determinations into a super point cloud comprising the point clouds spatially combined with each other to represent a larger and/or refined topological determination of the tissue field.
•   the computer system may be configured to perform further 3D determinations, such as volume determinations from the super point cloud.
•   the computer system is configured for performing topological stitching of a plurality of topological determinations obtained from consecutively acquired frames, preferably such that the frames comprise frames obtained with the projector and/or the image acquisition device at different positions and/or angles relative to the tissue field.
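A minimal sketch of stitching point clouds into a super point cloud, assuming each cloud comes with a known rigid transform into a common frame, e.g. derived from the estimated projector/camera poses (Python with NumPy is assumed; illustrative only):

```python
import numpy as np

def stitch_point_clouds(clouds_with_poses):
    """Stitch point clouds of consecutive topological determinations into a
    super point cloud. Each entry is (points Nx3, R 3x3, t length-3), where
    (R, t) maps the cloud's local frame into the common world frame."""
    stitched = [pts @ R.T + t for pts, R, t in clouds_with_poses]
    return np.vstack(stitched)  # super point cloud in the common frame
```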
  • the invention also comprises a method for performing a determination of a tissue field.
•   the method comprises performing at least one determination of the tissue field based on the spatial position of the projector device and the recognized light features.
  • Preferred embodiments of the method comprise the methods that the computer system of the 3D reconstruction system is configured to perform as described above.
  • the invention also comprises a robot comprising the 3D reconstruction system as described above.
  • the robot is advantageously a surgery robot e.g. for performing minimally invasive surgery.
  • the projector device of the structured light arrangement and the image acquisition device are disposed on individually movable arms of the robot.
•   the projector device and/or the image acquisition device may e.g. be disposed on respective surgical instruments, e.g. held by one of the individually movable arms of the robot.
  • the "disposed" as used in the explanation that the projector device and the image acquisition device are disposed on respective arms of the robot is used to mean that the projector device/image acquisition device may be integrated with, mounted to, held by, instated through or in other way engaged with the robot arm(s).
  • the robot has a controller processing system which comprises or is in data communication with the computer system of the 3D reconstruction system.
  • the computer system comprises a feedback algorithm for controlling movements of at least one of the individually movable arms of the robot in response to the determinations of the tissue field.
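As an illustrative sketch of such a feedback algorithm, here a simple proportional controller with a speed clamp driving an arm tip toward a point determined on the tissue field (Python with NumPy is assumed; the gain and limits are arbitrary values, not from the patent):

```python
import numpy as np

def arm_velocity_command(current_tip, target_point, gain=0.5, max_speed=0.01):
    """Proportional feedback: command a displacement of the robot arm tip
    toward a target point on the tissue field, with the commanded speed
    clamped for safety. Units: metres per control tick."""
    error = np.asarray(target_point, float) - np.asarray(current_tip, float)
    command = gain * error
    speed = np.linalg.norm(command)
    if speed > max_speed:
        command *= max_speed / speed  # clamp to the maximum allowed speed
    return command
```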
  • the 3D reconstruction system comprises a sensor arrangement for determining the position and orientation of the image acquisition device relative to the projector device and/or relative to a location of the robot e.g. a location of a robot arm.
•   the robot may e.g. be as described in WO16057980, W013116869 and/or in US213030571, with the difference that the robot comprises the 3D reconstruction system as described above.
•   the robot comprises two or more robot arms, e.g. as a robot of the ALF-X system or the SurgiBot system as marketed and disclosed by TransEnterix, Inc., where the projector device of the structured light arrangement and the image acquisition device are disposed on individually movable arms.
  • the ALF-X System robot has been granted a CE Mark in Europe for use in abdominal and pelvic surgery and comprises a multi-port robotic surgery robot which allows up to four arms to control robotic instruments and a camera.
  • the projector device and the image acquisition device may be disposed on any of these robot arms.
•   the SurgiBot System robot is a single-incision, patient-side robotic-assisted surgery system and comprises a robot with a number of flexible, articulating robot arms held together in a single collar for insertion through a single incision, whereafter the robot arms with instruments are introduced through the collar for performing minimally invasive surgery within a cavity.

All features of the inventions and embodiments of the invention as described herein, including ranges and preferred ranges, may be combined in various ways within the scope of the invention, unless there are specific reasons not to combine such features.
  • the invention also relates to a fiber optic probe instrument assembly suitable for use in minimally invasive surgery.
  • the fiber optic probe instrument assembly comprises a fiber optic probe and a minimally surgical instrument.
•   the minimally surgical instrument may for example be as described above.
•   the minimally surgical instrument is advantageously selected from a penetrator, an endoscope, an ultrasound transducer instrument or an invasive surgical instrument comprising a grasper, a suture grasper, a stapler, forceps, a dissector, hooks, scissors, a suction instrument, a clamp instrument, an electrode, a curette, ablators, scalpels, a biopsy and retractor instrument, a trocar or any combination comprising one or more of the abovementioned minimally surgical instruments.
  • the fiber optic probe comprises a structured light generating and projecting device (generally called structured light device), a bundle of optical fibers and a projector device.
•   the structured light device is configured for generating a structured light.
  • the structured light device may in principle have any size, because the structured light device is not adapted to be near the surgical site e.g. it is not adapted for being inserted into any natural or artificial cavities of a human or animal patient subjected to surgery.
  • the structured light generated by the structured light device may e.g. be as described above. It should be observed that the structured light generated by the structured light device may have a relatively large cross-sectional area compared to the cross-sectional area of the structured light delivered to and emitted by the projector device as the structured light beam.
  • the structured light generated by the structured light device is generated by a pixel based image projector, where each fiber input end is arranged to receive light/no light from one or more pixels.
•   the pattern may be a dynamic pattern which may be changed dynamically or in desired steps.
  • the structured light generated by the structured light device may for example include a structure of wavelength variations and/or intensity variation over the cross-section of the structured light.
•   the cross-sectional light structure of the structured light generated by the structured light device comprises optically distinguishable areas, such as a pattern of areas of light and areas of no-light and/or areas of light of a first quality of a character and areas of light of a second quality of the character, wherein the character advantageously is selected from light intensity, wavelength and/or range of wavelengths.
•   the structured light generated by the structured light device may for example be a pattern of light of a certain wavelength range with intermediate areas of no light or areas of light with a narrower range of wavelengths.
•   the pattern may e.g. be stripes, crosshatched lines or any other lines or shapes.
  • the bundle of fibers has a light receiving end and a light emitting end and is arranged for receiving at least a portion of the structured light from the structured light generating device at its light receiving end and for delivering at least a portion of the light to the projector device.
  • the fiber bundle advantageously comprises at least 10 optical fibers, such as at least 50 optical fibers, such as from about 100 to about 2000 optical fibers, such as from about 200 to about 1000 optical fibers.
  • the optical fibers are advantageously very thin and closely packed, such that the total cross-sectional area of the fiber bundle at least at a portion of its length nearest the light emitting end is sufficiently narrow for being inserted into any natural or artificial cavities of a human or animal patient subjected to surgery.
•   the total cross-sectional area of the fiber bundle corresponds to or is smaller than the projecting area of the projector device.
•   the fibers of the fiber bundle may be identical or they may differ, e.g. where the structured light comprises a structuring of different wavelengths.
  • the fibers of the fiber bundle are substantially identical.
•   fibers of the fiber bundle are partly or fully fused along at least a portion of their length nearest the light emitting end to ensure a fixed relative location of the fiber ends.
  • At least the projector device is mounted to or integrated with the minimally surgical instrument.
•   the projector device is configured to project the structured light as a structured light beam onto at least a section of a tissue field.
  • the light receiving end of the fiber bundle is operatively coupled to the structured light device for receiving at least a part of the structured light from the structured light device.
  • the operatively coupling may include one or more lenses and/or objectives e.g. for focusing the structured light to be received by the light receiving end of the fiber bundle.
  • the fiber optic probe instrument assembly comprises one or more lenses and/or objectives arranged between the light emitting end of the bundle of fibers and the projector, preferably the projector comprises a micro lens.
•   the light emitting ends of the fibers are arranged in an encasing to thereby form a probe-head comprising the projector device.
  • the probe-head may comprise one or more lenses for ensuring a desired projection of the structured light.
•   the projector is formed by a protective coating or cover for the emitting end of the fiber bundle, preferably forming part of the encasing.
•   the light emitting ends of the bundle of fibers are arranged in an encasing to form a probe-head comprising the projector device; preferably the probe-head comprises one or more lenses, such as micro lenses.
•   the probe-head may advantageously have a maximal cross-sectional diameter of up to about 1 cm, such as up to about 8 mm, such as up to about 6 mm, thereby ensuring that the probe-head may be inserted, together with a distal end portion of the minimally surgical instrument to which it is mounted or integrated, into a natural or artificial cavity of a human or animal patient subjected to surgery.
•   the projector device and/or probe-head is advantageously mounted to the minimally surgical instrument at or near the distal end portion of the minimally surgical instrument, to ensure that the projector device may be inserted into a natural or artificial cavity of a human or animal patient subjected to surgery together with the distal end portion of the minimally surgical instrument to which it is mounted or integrated.
•   the minimally surgical instrument has a distal portion that may be articulated at an articulating length section thereof.
  • the projector device and/or probe-head is advantageously mounted to the minimally surgical instrument at the distal end portion at a location at or closer to the distal end of the minimally surgical instrument than the articulating length section.
•   the relative position between the distal end of the minimally surgical instrument and the projector device may be determined by a computer system which comprises data representing the articulating state of the articulating length section.
  • the invention also relates to a structured light ultrasound instrument.
  • the structured light ultrasound instrument comprises an ultrasound transducer instrument and a structured light arrangement, wherein the structured light arrangement comprises a projector device for projecting a structured light beam to a tissue field, such as a tissue field within a natural or artificial body cavity.
•   the structured light arrangement may be as described above.
  • At least the projector device is mounted to or integrated with the ultrasound transducer instrument.
•   it is known to use an ultrasound transducer instrument for imaging before or during surgery.
•   Such prior art ultrasound transducer instruments are for example marketed by BK Ultrasound.
•   With such prior art ultrasound transducer instruments it has been a major problem to navigate the ultrasound transducer instrument relative to the surface of the tissue that is scanned by the ultrasound transducer instrument, and in particular it has been very difficult to match the obtained ultrasound images with actual images of the tissue surface.
•   While the prior art ultrasound transducer instruments have been capable of identifying damaged or malignant tissue areas, it has been difficult for the surgeon to actually find the exact location of the damaged or malignant tissue areas.
•   With the structured light ultrasound instrument of the invention it is now possible to obtain an improved correlation between an ultrasound image and a surface image of a tissue area and surface.
  • the structured light ultrasound instrument may advantageously form part of the above described 3D reconstruction system.
  • the ultrasound transducer instrument has a distal portion with a distal end and an ultrasound head located at the distal end.
  • the ultrasound head preferably has a maximal cross-sectional diameter of up to about 2 cm, such as up to about 1.5 cm, such as up to about 1 cm, such as up to about 8 mm, such as up to about 6 mm.
  • the ultrasound transducer instrument may be suitable for use in artificial or natural openings of a human or animal.
  • the ultrasound transducer instrument has an articulating length section at the distal portion, the articulating length section is preferably arranged proximally to the ultrasound head.
  • the projector device is located at the distal portion of the ultrasound transducer instrument.
  • the projector device is located distally to the articulating length section.
  • the invention also comprises a minimally invasive surgery navigation system suitable for ensuring a desired and improved navigation of an ultrasound transducer instrument during minimally invasive surgery.
  • the minimally invasive surgery navigation system comprises a structured light ultrasound instrument as described above, an endoscope and a computer system.
•   the endoscope comprises an image acquisition device configured for recording data representing reflected rays from the emitted pattern and for transmitting the rays reflected from a surface section of a tissue field to the computer system.
  • the image acquisition device may be as described above.
•   the computer system is configured for correlating the obtained ultrasound images to the 2D and/or 3D surface data of the tissue field.
•   the computer system may be as described above, wherein the computer system is further configured for performing this correlation.
•   the minimally invasive surgery navigation system and the 3D reconstruction system may form a combined system.
  • the surface data is 2D surface data, e.g. a simple surface image.
  • the surface data is 3D data, e.g. determined by the 3D reconstruction system.
  • the minimally invasive surgery navigation system is preferably configured for operating in real time.
  • a timely associated 2D image of a surface section of the surgical field may be used to confirm or improve the correlation of the ultrasound image to the 2D and/or 3D surface data.
  • the timely associated 2D image may for example be a frame acquired by the image acquisition device with a time attribute corresponding to that of the set(s) of data representing reflected rays.
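A timely associated frame can be found by a nearest-time lookup over the stored time attributes. The following Python sketch is purely illustrative (the frame tuples and the function name are assumptions, not terms from this disclosure):

```python
from bisect import bisect_left

def timely_associated_frame(frames, t):
    """Return the acquired frame whose time attribute is closest to t.

    frames: list of (time_attribute, pixel_data) tuples sorted by time.
    t: the time attribute of e.g. a set of data representing reflected rays.
    """
    if not frames:
        raise ValueError("no frames acquired yet")
    times = [time_attr for time_attr, _ in frames]
    i = bisect_left(times, t)
    # The closest frame is one of the two neighbours of the insertion point.
    candidates = [j for j in (i - 1, i) if 0 <= j < len(frames)]
    return frames[min(candidates, key=lambda j: abs(times[j] - t))]
```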
  • the light emitting ends of the bundle of fibers are arranged in an encasing to form a probe-head comprising the projector device; preferably the probe-head comprises one or more lenses, such as micro lenses.
  • the probe-head may advantageously have a maximal cross-sectional diameter of up to about 1 cm, such as up to about 8 mm, such as up to about 6 mm, thereby ensuring that the probe-head may be inserted, together with a distal end portion of the minimally invasive surgical instrument to which it is mounted or integrated, into a natural or artificial cavity of a human or animal patient subjected to surgery.
  • the projector device and/or probe-head is advantageously mounted to the minimally invasive surgical instrument at or near the distal end portion of the instrument, to ensure that the projector device may be inserted, together with the distal end portion of the instrument to which it is mounted or integrated, into a natural or artificial cavity of a human or animal patient subjected to surgery.
  • the projector device and/or probe-head is advantageously mounted to the minimally invasive surgical instrument at the distal end portion, at a location at or closer to the distal end of the instrument than the articulating length section.
  • Figure 1 is a schematic illustration of an embodiment of a 3D reconstruction system of the invention in use for performing a 3D determination of a tissue field.
  • Figure 2 is a schematic illustration of an embodiment of a 3D reconstruction system of the invention in use for performing a 3D determination of a tissue field in a minimally invasive surgical cavity.
  • Figure 3 is a schematic illustration of another embodiment of a 3D reconstruction system of the invention.
  • Figure 4 illustrates an example of a flow chart of data processing of a 3D reconstruction system of an embodiment of the invention.
  • Figure 5 illustrates another example of a flow chart of data processing of a 3D reconstruction system of an embodiment of the invention.
  • Figure 6 illustrates a further example of a flow chart of data processing of a 3D reconstruction system of an embodiment of the invention.
  • Figure 7 is a schematic illustration of an image of a tissue field reflecting a structured light pattern projected from a projector device relative to a reference structured light data set.
  • Figure 8 illustrates a method where a 3D reconstruction system is performing a volumetric determination of a tissue field.
  • Figure 9 illustrates a method where a 3D reconstruction system is performing a size determination of a tissue field.
  • Figure 10 illustrates an example of a structured light.
  • Figure 10a illustrates examples of light features of the structured light of figure 10.
  • Figure 11 illustrates another example of a structured light.
  • Figure 11a illustrates examples of light features of the structured light of figure 11.
  • Figure 12 illustrates a further example of a structured light.
  • Figure 12a illustrates examples of light features of the structured light of Figure 12.
  • Figures 13a, 13b and 13c illustrate further examples of light features.
  • Figures 14a, 14b and 14c illustrate further examples of structured light.
  • Figure 15 is a schematic illustration of stereo image feature matching.
  • Figure 16 is a schematic view of a portion of a penetrator for use in minimally invasive surgery, where a projector device of a 3D reconstruction system of an embodiment of the invention is disposed at a tip of the penetrator.
  • Figures 17a and 17b illustrate a part of a penetrator member with a projector device of a 3D reconstruction system of an embodiment of the invention disposed near the tip of the penetrator, wherein the projector device has a first folded position and a second unfolded/pivoted position.
  • Figure 18 is a schematic illustration of a structured light arrangement comprising a light source - waveguide - optical projector and focusing lens assembly suitable for forming part of an embodiment of 3D reconstruction system of the invention.
  • Figure 19 is a schematic illustration of a projector probe which may form part of a structured light arrangement of an embodiment of the 3D reconstruction system of the invention.
  • Figure 20 is a schematic illustration of a beam expanding lens arrangement which may form part of a structured light arrangement of an embodiment of the 3D reconstruction system of the invention.
  • Figure 21 is a schematic illustration of a robot comprising a 3D reconstruction system of an embodiment of the invention.
  • Figures 22a-22c illustrate examples of image acquisition devices suitable for a 3D reconstruction system of embodiments of the invention.
  • Figure 23 is a schematic illustration of a fiber optic probe.
  • Figure 24 is a schematic illustration of a fiber optic probe instrument assembly of an embodiment of the invention comprising the fiber optic probe of figure 23.
  • Figures 25a-25d illustrate cross-sectional views of examples of fiber bundles.
  • Figure 26 is a schematic illustration of a distal portion of a fiber optic probe instrument assembly of an embodiment of the invention comprising the fiber optic probe of figure 23.
  • Figures 27-33 illustrate a structured light ultrasound instrument and a minimally invasive surgery navigation system in use during a minimally invasive surgical procedure.
  • the 3D reconstruction system illustrated in figure 1 comprises a structured light arrangement 1 with a projector device 1a configured to project a structured light beam onto at least a section of a tissue field 3.
  • the 3D reconstruction system also comprises an image acquisition device 2.
  • the image acquisition device 2 comprises a stereo camera 2a for acquiring digital images (frames). Each frame comprises a set of pixel data and the set of pixel data is associated with a time attribute by the image acquisition device or by the computer system 6, which also forms part of the 3D reconstruction system. As illustrated with the waves W, the structured light arrangement 1 and the image acquisition device are both in data communication with the computer system.
  • the image acquisition device is configured for transmitting the acquired frames - in the form of sets of pixel data - in real time to the computer system 6 and the computer system is configured for receiving the frames in real time - in the form of sets of pixel data.
  • the computer system 6 is configured for receiving data from the structured light arrangement 1 representing the projected structured light beam. These data are stored in a memory of the computer system 6 as a reference structured light data set.
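One plausible in-memory layout of such a reference structured light data set is sketched below; the class and field names are illustrative assumptions and are not prescribed by this disclosure:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class LightFeature:
    """A recognizable fraction of the projected pattern, e.g. a cross or corner."""
    kind: str                      # geometrical attribute, e.g. "cross", "corner", "dot group"
    position: Tuple[float, float]  # position attribute in pattern coordinates
    orientation: Optional[float]   # orientation attribute in radians, if defined

@dataclass
class ReferenceStructuredLight:
    """The projected structured light beam stored as a reference data set."""
    features: List[LightFeature]
    wavelength_nm: float           # wavelength of the projected light
```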
  • the computer system 6 is further configured for recognizing a plurality of light features, including a plurality of primary light features, from the received set(s) of pixel data having corresponding time attribute and optionally from previous set(s) of pixel data, matching the recognized primary light features with corresponding light features of the projected structured light beam and, based on the matches, estimating the spatial position of the projector device relative to at least a part of the tissue field, and performing at least one determination of the tissue field 3 based on the spatial position of the projector device and the recognized light features.
  • the tissue field may be rather curved and the 3D reconstruction system may e.g. be configured for determining the shortest Euclidean distance between a not shown instrument and the tissue field, i.e. the actual distance irrespective of the point of view.
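With the tissue field reconstructed as a point cloud, such a shortest-Euclidean-distance determination reduces to a nearest-neighbour query. A minimal numpy sketch (illustrative names; a k-d tree would typically replace the brute-force search for large clouds):

```python
import numpy as np

def shortest_distance(tip_xyz, tissue_points):
    """Shortest Euclidean distance from an instrument tip to the tissue field.

    tip_xyz: (3,) array holding the instrument tip in the reconstruction frame.
    tissue_points: (N, 3) array of reconstructed tissue surface points.
    """
    d = np.linalg.norm(tissue_points - np.asarray(tip_xyz), axis=1)
    return float(d.min())
```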
  • the 3D reconstruction system illustrated in figure 2 is here applied in a minimally invasive surgical procedure and comprises a structured light arrangement 11 with a projector 11a projecting a structured light beam as illustrated with rays 11b onto at least a section of a tissue field 13.
  • the 3D reconstruction system also comprises an image acquisition device 12 configured for acquiring frames comprising reflections 15 of the structured light beam from the tissue field 13.
  • the image acquisition device 12 comprises a camera 12a, also referred to as an image acquisition unit.
  • the image acquisition device 12 is wired to the computer 16a of the computer system for transmitting in real time sets of pixel data representing acquired frames.
  • the structured light arrangement 11 is wired to the computer 16a of the computer system for transmitting data representing the projected structured light beam - the data represent at least a set of light features of the projected structured light beam.
  • a tip portion comprising the projector device 11a of the structured light arrangement 11 is inserted through the skin 10 of a patient into the minimally invasive surgical cavity to project the rays 11b onto an intestine area I of the tissue field.
  • a portion comprising the camera 12a of the image acquisition device 12 is inserted through the skin 10 of the patient into the minimally invasive surgical cavity via a cannula port 12c to acquire the frames of the reflected structured light 15.
  • the computer system comprises a display (a screen) 16b and the computer 16a is configured for
  • the computer system 16a, 16b may also be configured for controlling the operation of the image acquisition device 12 and/or the structured light arrangement 11.
  • the 3D reconstruction system illustrated in figure 3 comprises a structured light arrangement 2, with a portion comprising a projector device inserted through the skin 20 of a patient into a minimally invasive surgical cavity to project the rays onto the tissue field.
  • the 3D reconstruction system also comprises an image acquisition device 22 partly inserted into the minimally invasive surgical cavity and configured for acquiring frames comprising reflections 25 of the structured light beam from the tissue field.
  • the image acquisition device 22 comprises a camera for acquiring digital frames which are transmitted to a not shown computer system of the 3D reconstruction system.
  • the computer system is configured for recognizing a plurality of light features, including a plurality of primary light features, from the received set(s) of pixel data having corresponding time attribute and optionally from previous set(s) of pixel data, and for matching the recognized primary light features with corresponding light features of the projected structured light beam,
  • the 3D determination may e.g. be a 3D reconstruction, e.g. a topological reconstruction, determining an augmented reality view of the tissue field, performing volumetric measures, tracking an instrument relative to the tissue field, etc.
  • the computer system may receive pre-operation data and/or intra-operation data which may e.g. be used for refining the recognition step, the matching step, the estimation of projector device spatial position and/or the 3D determination.
  • the flow chart of figure 4 illustrates a process scheme of data processing steps which the computer system may be configured to perform.
  • step 4a) "image capture", the computer system receives a set of pixel data representing an acquired frame with an associated time attribute.
  • the computer system may store previous set(s) of pixel data (set(s) of pixel data representing previously acquired frames with a time attribute representing an earlier point in time).
  • step 4b) "Recognizing light features", the computer system searches for light features in the set(s) of pixel data.
  • step 4c) "Selecting primary light features", the computer system selects light features which are qualified for being used as primary light features, e.g. in respect to one or more thresholds and preferably light features with at least an orientation attribute, a position attribute or a combination thereof.
  • the selected light features are deemed to be primary light features.
  • step 4d) "Match primary light features", the computer system matches the primary light features with corresponding light features of the reference structured light data set.
  • step 4e) "Estimating spatial position of the projector device", the computer system estimates the spatial position of the projector device using the best match of features.
  • the computer system may be configured for applying an iterative estimation procedure to find the estimation where most of the matched features are valid. Thereby primary features reflected from very curved areas of the tissue field may be ignored for the estimation of the spatial position of the projector device.
  • step 4f) "Estimating location of features on tissue field", the computer system estimates the location of recognized light features (e.g. including primary light features) on the tissue field, including the spatial location relative to the now estimated spatial position of the projector device.
  • step 4g) "Calculate tissue field map", the computer system calculates topological data (3D data) of the tissue field.
  • the one or more steps may be performed iteratively. Further, the computer system may apply additional steps, such as data rectifying steps, outlier removal steps, epipolar matching where a stereo camera has been used, etc.
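Steps 4d) to 4g) can be pictured with the common structured-light formulation in which the projector is treated as an inverse camera, so that matched pattern/image correspondences yield a relative pose and triangulated surface points. The OpenCV sketch below is a simplified illustration of that idea under stated assumptions (a single intrinsic matrix K models both devices, and translation is recovered only up to scale from a single view); it is not the implementation of this disclosure:

```python
import cv2
import numpy as np

def estimate_pose_and_reconstruct(pattern_pts, image_pts, K):
    """Sketch of steps 4d)-4g) treating the projector as an inverse camera.

    pattern_pts, image_pts: (N, 2) float32 arrays of matched primary light
    features in reference-pattern and frame pixel coordinates.
    K: 3x3 intrinsic matrix (assumed shared by camera and projector).
    """
    # Steps 4d)/4e): relative pose from the matches; the RANSAC threshold
    # discards matches reflected from very curved areas of the tissue field.
    E, inliers = cv2.findEssentialMat(pattern_pts, image_pts, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pattern_pts, image_pts, K)

    # Step 4f): triangulate the locations of the features on the tissue field.
    P_proj = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_cam = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P_proj, P_cam, pattern_pts.T, image_pts.T)

    # Step 4g): topological data (3D map points, up to a global scale).
    tissue_xyz = (X[:3] / X[3]).T
    return R, t, tissue_xyz
```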
  • the flow chart of figure 5 illustrates a process scheme of another example of data processing steps which the computer system may be configured to perform.
  • step 5a) "image capture", the computer system receives a set of pixel data representing an acquired frame with an associated time attribute.
  • the computer system may store previous sets of pixel data (sets of pixel data representing previously acquired frames with a time attribute representing an earlier point in time).
  • step 5b) "Image rectifying", the computer system rectifies the set(s) of pixel data by subjecting the data to an error correction procedure, e.g. as described above.
  • step 5c) "Detect features", the computer system recognizes light features, e.g. by searching for light features in the set(s) of pixel data. The computer system further selects the light features which are qualified for being used as primary light features, e.g. in respect to one or more thresholds and preferably light features with at least an orientation attribute, a position attribute or a combination thereof. The selected light features are deemed to be primary light features.
  • step 5d) "Match features" the computer system matches the primary light features with corresponding light features of the reference structured light data set.
  • Steps 5c) and 5d) may for example include the steps i-iv of extracting light features, e.g. represented by local pattern fractions, e.g. corners, corner connections, square arrangements and any other fractions of the structured light, e.g. as described above, and matching these light features in an iterative process.
  • step 5e) "Outlier removal and rough pose & orientation estimation", the computer system performs a rough estimation of the spatial position of the projector device using the matched features, refines the estimation, removes outlier data and repeats as many times as required, e.g. until further modifications are below a preselected threshold.
  • Step 5f) "Refine pose and orientation" is a continuation of step 5e) to perform the final estimation of the spatial position of the projector device.
  • step 5g) "Dense 3D reconstruction", the computer system estimates the location of recognized light features on the tissue field, including the spatial location relative to the now estimated spatial position of the projector device, and calculates topological data (3D data) of the tissue field.
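Steps 5e) and 5f) follow the usual robust-estimation loop of rough fit, outlier rejection and refinement until the update falls below a threshold. The sketch below is schematic: estimate_fn and residual_fn are assumed callbacks standing in for the pose estimator and the reprojection residual, neither of which is spelled out here:

```python
import numpy as np

def refine_pose(matches, estimate_fn, residual_fn, tol=1e-3, max_iter=20):
    """Rough pose estimation with iterative outlier removal and refinement.

    matches: matched feature pairs (frame feature vs. reference feature).
    estimate_fn(matches) -> pose vector; residual_fn(pose, match) -> float.
    """
    inliers = list(matches)
    pose = np.asarray(estimate_fn(inliers), dtype=float)
    for _ in range(max_iter):
        residuals = np.array([residual_fn(pose, m) for m in inliers])
        keep = residuals < 3.0 * np.median(residuals)   # remove outlier data
        inliers = [m for m, k in zip(inliers, keep) if k]
        new_pose = np.asarray(estimate_fn(inliers), dtype=float)
        if np.linalg.norm(new_pose - pose) < tol:       # modification below threshold
            return new_pose, inliers
        pose = new_pose
    return pose, inliers
```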
  • the flow chart of figure 6 illustrates a process scheme of a further example of data processing steps which the computer system may be configured to perform.
  • step 6a) "Capture stereo image", the computer system receives sets of pixel data representing stereo acquired frames having corresponding associated time attributes.
  • step 6b) "Recognizing light features from each image", the computer system searches for corresponding light features in each of the sets of pixel data and selects corresponding primary light features.
  • step 6c) "Matching features to reference light data", the computer system matches the primary light features from one or preferably both sets of pixel data with corresponding light features of the reference structured light data set.
  • step 6d) "Epipolar feature matching”, the computer system matches the primary light features and optionally other light features between the sets of pixel data of the stereo frames.
  • the match relative to the reference data set and the epipolar matching may be performed as an iterative process.
  • step 6e) "Estimating spatial position of the projector device", the computer system estimates the spatial position of the projector device using the best match of features including the epipolar matching.
  • the computer system may be configured for applying an iterative estimation procedure to find the estimation where most of the matched features are valid.
  • step 6f) "Estimating location of features on tissue field", the computer system estimates the location of recognized light features (e.g. including primary light features) on the tissue field, including the spatial location relative to the now estimated spatial position of the projector device.
  • the computer system may for example receive and apply pre-operation data and/or intra-operation data in the location estimation to thereby further refine the 3D determination.
  • step 6g) "Calculate tissue field map", the computer system calculates topological data (3D data) of the tissue field.
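Step 6d) restricts candidate correspondences between the two stereo frames to pairs consistent with the epipolar geometry. A compact numpy sketch, assuming a fundamental matrix F obtained from stereo calibration (an assumption; the disclosure only requires that matching between the stereo frames is performed):

```python
import numpy as np

def epipolar_match(feats_left, feats_right, F, tol=2.0):
    """Match light features between stereo frames via the epipolar constraint.

    feats_left: (N, 2) and feats_right: (M, 2) pixel coordinates of recognized
    light features; F: 3x3 fundamental matrix. Returns (i, j) index pairs whose
    point-to-epipolar-line distance is below tol pixels.
    """
    pairs = []
    for i, (x, y) in enumerate(feats_left):
        a, b, c = F @ np.array([x, y, 1.0])     # epipolar line in the right image
        dist = np.abs(feats_right @ np.array([a, b]) + c) / np.hypot(a, b)
        j = int(np.argmin(dist))
        if dist[j] < tol:
            pairs.append((i, j))
    return pairs
```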
  • Figure 7 illustrates the matching of light features of a set of pixel data representing an image 30 with data of a reference structured light data set representing the structured light 31 as projected by the projector device (the projected structured light beam). As illustrated with the lines L, light features comprising corners, crossing lines, etc. may be matched.
  • Figure 8 illustrates that the 3D reconstruction system is performing a volumetric determination of a tissue field based on the light features of the set of pixel data representing an image 30 after having estimated the spatial position of the projector device.
  • the computer system can calculate the geometric spatial positions of the light features and form a map M where formations may be detected, e.g. by fitting the data representing position attributes of light features 32 to a circle as shown. Thereby the size and the volume of a protrusion may be determined.
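The circle fit indicated in figure 8 can be computed in closed form with the algebraic (Kasa) least-squares method; the numpy sketch below is one possible realization, not a method mandated by this disclosure. From the fitted radius a size measure follows directly, while a volume estimate additionally requires a model assumption, e.g. a spherical cap:

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares (Kasa) circle fit to 2D feature positions.

    points: (N, 2) array of position attributes of light features.
    Returns (center_x, center_y, radius).
    """
    x, y = points[:, 0], points[:, 1]
    # Solve x^2 + y^2 = a*x + b*y + c in the least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = a / 2.0, b / 2.0
    radius = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, radius
```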
  • Figure 9 illustrates that the 3D reconstruction system is performing a size determination of a tissue field based on the light features of the set of pixel data representing an image 30 after having estimated the spatial position of the projector device.
  • Figure 10 illustrates an example of a structured light in the form of a grid pattern. Examples of light features which may be extracted from the grid pattern are shown in figure 10a including for example a sub-grid, a square, a cross and a corner.
  • a position attribute may e.g. be provided as the crossing point of a sub-grid or a cross, or as a corner edge of a sub-grid or a corner.
  • An orientation attribute may for example represent the orientation of lines of the shown light features.
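Extracting such position attributes from a reflected grid can, for instance, be done with a corner detector applied to the thresholded frame; the OpenCV sketch below uses assumed, illustrative parameters:

```python
import cv2
import numpy as np

def grid_crossings(gray_frame, max_corners=500):
    """Detect candidate grid-crossing positions (position attributes) in a frame.

    gray_frame: 8-bit grayscale image containing the reflected grid pattern.
    Returns an (N, 2) array of crossing positions in pixel coordinates.
    """
    # Keep only the bright projected lines before detecting crossings.
    _, mask = cv2.threshold(gray_frame, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    corners = cv2.goodFeaturesToTrack(mask, maxCorners=max_corners,
                                      qualityLevel=0.05, minDistance=5)
    if corners is None:
        return np.empty((0, 2))
    return corners.reshape(-1, 2)
```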
  • Figure 11 illustrates an example of a structured light in the form of a grid pattern with random dots. Examples of light features which may be extracted from the grid pattern are shown in figure 11a including for example a grid feature, a grid with dots, a group of dots or a single dot.
  • Figure 12 illustrates an example of a structured light in the form of a pattern of dots. Examples of light features which may be extracted from the dot pattern are shown in figure 12a. As seen, the group of dots can be matched to the pattern of dots as illustrated with the dotted ring.
  • Figures 13a, 13b and 13c illustrate further examples of light features comprising light features with various characteristic shapes, light features with various colors and light features with various intensities of light.
  • Figure 14b illustrates a structured light comprising a number of parallel lines, e.g. with different attributes such as size, colour, structure, etc.
  • Figure 14c illustrates a structured light in the form of a bar coded structure.
  • Figure 15 is a schematic illustration of stereo image feature matching showing the matching of light features between the first image 42 and the second image 43 of stereo images. Corresponding features of at least one of the images are matched to the reference structured light data set 41.
  • Figure 16 shows a distal portion of a penetrator 50.
  • the penetrator comprises a channel 55 for supplying light e.g. via an optical fiber arrangement to a projector device 56 arranged at the tip of the penetrator.
  • the penetrator 60 of figures 17a and 17b has a distal portion 62 with a tip 64, an obstruction 65 which ensures that the penetrator 60 does not penetrate too deep into a minimally invasive surgical cavity, and a proximal portion 63 for handling of the penetrator 60 by an operator.
  • the penetrator 60 comprises a projector device 66 forming part of a structured light arrangement of a 3D reconstruction system of an embodiment of the invention.
  • in figure 17a the projector device 66 is in a folded position where the projector device 66 is folded into a sleeve of the penetrator 60. When the projector device 66 is in this first position the penetrator 60 may penetrate through the skin of a patient.
  • the structured light arrangement illustrated in figure 18 comprises a light source 72, a waveguide 71, an optical projector 76 and focusing lens 75.
  • the waveguide 71 is an optical fiber and the optical projector 76 is a DOE (diffractive optical element).
  • the light pattern is projected in the desired direction and focused by the focusing lens 75.
  • the projected pattern has a diverging angle θ.
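Under a simple pinhole-projection assumption (standard, though not stated here), the diverging angle θ fixes how the lateral extent w of the projected pattern grows with the working distance d:

```latex
w(d) = 2\,d\,\tan\!\left(\frac{\theta}{2}\right)
```

so a larger diverging angle covers a wider section of the tissue field at the same distance.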
  • the projector probe illustrated in figure 19 comprises an optical fiber 81 with a proximal end 81' and a distal end, a beam expanding lens 82 and a projector 86 with a distal front face 86'.
  • the optical fiber 81, the beam expanding lens 82 and the projector 86 are fused at the fused interfaces F.
  • the optical fiber 81, the beam expanding lens 82 and the projector 86 are arranged in a hermetic metal housing 80, preferably sealed using an epoxy seal 89.
  • When a light beam is pumped from a not shown light source into the proximal end 81' of the optical fiber 81, the light will propagate through the optical fiber 81, collimated in the core of the optical fiber. From the fiber 81 the light will pass into the beam expanding lens 82, which is advantageously a GRIN lens, and the beam will expand as the light propagates through the beam expanding lens 82. At the exit of the beam expanding lens 82 the light will be collimated and it will propagate into the projector, which is advantageously a DOE.
  • the light pattern will be shaped and the projector will project a divergent light pattern.
  • the projector may advantageously comprise an optical filter or an optical filter layer as described above to prevent and/or remove fog/mist.
  • the optional optical filter or optical filter layer is indicated with the dotted part 86a of the projector 86.
  • the beam expanding lens arrangement illustrated in figure 20 comprises a beam expanding lens 92 having a length L.
  • the light is fed from a not shown light source via an optical fiber 91 to the proximal end of the beam expanding lens 92.
  • the light enters the beam expanding lens and, due to a continuous change of the refractive index within the lens material, the light rays r1 are continuously bent to thereby expand the diameter of the beam as the light propagates through the beam expanding lens 92 along its length L.
  • at the exit of the beam expanding lens 92 the light is collimated to form a beam with substantially parallel rays r2 of light.
  • the collimated light may be transmitted further to the DOE of a projector probe as illustrated in figure 19.
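For context, a beam expanding lens of this kind is conventionally modelled as a GRIN rod with a near-parabolic index profile; the relations below are standard GRIN optics, given as an assumed model rather than something specified here:

```latex
n(r) = n_0\left(1 - \tfrac{1}{2}\,g^2 r^2\right), \qquad P = \frac{2\pi}{g}
```

Rays follow sinusoidal paths of period P (the pitch), so a rod cut to a quarter pitch, L = P/4, turns the diverging light from the fiber face into the collimated beam with the substantially parallel rays r2.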
  • Figure 21 is a schematic illustration of a robot 100 comprising a first movable robot arm 100a and a second movable robot arm 100b.
  • the robot 100 comprises a minimally invasive surgery tool 107a and a projector device 107 of a structured light arrangement disposed on its first robot arm 100a and an image acquisition device 108 disposed on its second robot arm 100b.
  • the minimally invasive surgery tool 107a with the projector device 107 and the image acquisition device 108 are passed through not shown cannula ports through the skin 106 of a patient into a minimally invasive surgical cavity with a tissue field.
  • in the shown example the robot arms 100a and 100b are outside the cavity. In practice it may be desired that the robot arms 100a and 100b are inserted through the cannula ports into the cavity to thereby increase the movability of the projector device and the image acquisition device.
  • the projector device 107 projects a structured light beam SB to impinge onto the tissue field and at least a part of the light of the structured light beam SB is reflected as a reflected light pattern RP.
  • the image acquisition device 108 acquires frames comprising at least a part of the reflected light pattern RP.
  • the robot also comprises a computer system comprising a data collecting arrangement 102, a computer 101 with processing capability and a controller processing system 104 for controlling the operation of the robot and in particular the robot arms 100a, 100b.
  • the frames acquired by the image acquisition device are collected in real time in the data collecting arrangement 102 and stored in the form of sets of pixel data with associated time attributes.
  • the data collecting arrangement 102 also stores the reference structured light data set.
  • the computer 101 requests data from the data collecting arrangement 102 and processes the data as described above to estimate the spatial position of the projector device; further, the computer 101 performs 3D determinations of the tissue field.
  • the 3D determinations may be transmitted to a display 103 to be visualized for a human surgeon.
  • the 3D determinations may further be transmitted to the controller processing system 104 for being used in the algorithm determining the movement of the robot arms.
  • the controller processing system 104 may further provide feedback to the computer 101, including data describing previous or expected moves of the robot arms 100a, 100b, and the computer 101 may apply these feedback data to refine the 3D determinations.
  • the image acquisition device also comprises a projector which projects a second structured light beam SBa towards the tissue field and this second structured light beam SBa is at least partly reflected as a reflected pattern RPa.
  • the wavelengths of the two structured light beams SB, SBa preferably differ such that the computer 101 may distinguish between the two reflected light patterns RP, RPa and features thereof.
  • Figure 22a illustrates an example of an image acquisition device comprising one or more single cameras 112a, 112b incorporated into separate camera housings 111a, 111b. Where there are two or more cameras 112a, 112b, it is desirable that the cameras 112a, 112b may be moved separately, e.g. by independently tilting and/or twisting the camera housings.
  • the cameras 112a, 112b may be operated by a common electronic circuitry to ensure that the cameras 112a, 112b are operating concurrently with each other and preferably with same frame rate.
  • Figure 22b illustrates an example of an image acquisition device comprising a stereo camera with a stereo camera housing 113 comprising flexible housing arms 113a, 113b each encasing a camera 114a, 114b.
  • the flexible housing arms 113a, 113b ensure that the cameras 114a, 114b can be positioned with a variable baseline within a selected range.
  • An embodiment of the 3D reconstruction system comprising a stereo camera with a variable baseline is very advantageous because the baseline may be optimized during a procedure to thereby result in a very detailed 3D analysis of the tissue field, including highly accurate determinations of sizes of protruding parts and/or cavities of the tissue field and optionally volume determinations of such protrusions and/or cavities.
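The accuracy benefit of a variable baseline follows from the textbook stereo triangulation error model (an assumed relation, not derived in this disclosure): for depth Z, focal length f in pixels, baseline B and disparity uncertainty δd,

```latex
\delta Z \approx \frac{Z^{2}}{f\,B}\,\delta d
```

so increasing the baseline B within the range permitted by the flexible housing arms directly reduces the depth uncertainty at a given working distance.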
  • the fiber optic probe shown in figure 23 comprises a structured light device 122, a bundle of optical fibers 124 and a probe-head 125 comprising an encased projector device.
  • a not shown light source is arranged to feed light to the structured light device 122 via a fiber 121.
  • the light source forms an integrated part of the structured light device 122.
  • the light source may be as the light sources disclosed above.
  • the light source may be a laser light source or e.g. an LED.
  • the structured light device 122 is configured for generating a structured light. Since the size of the structured light device 122 is not very important, any device suitable for generating the structured light may be applied, e.g. as described above.
  • the structured light device 122 is arranged to deliver at least a portion of the structured light to the light receiving end 124a of the fiber bundle 124.
  • a number of lenses and/or objectives 123 are arranged between an output end 122a of the structured light device 122 and the light receiving end 124a of the fiber bundle 124. The lenses and/or objectives 123 ensure an effective focusing of the structured light to be received by the light receiving end 124a of the fiber bundle 124.
  • via the fiber bundle 124 the light propagates to the probe-head 125.
  • for high stability it is desired that the light emitting end 124b of the fiber bundle 124 is encased in and forms part of the probe-head 125.
  • One or more not shown lenses may e.g. be arranged between the light emitting end 124b of the fiber bundle 124 and the projector.
  • the probe-head shown comprises a micro lens 125a which may be the projector device, or alternatively a not shown projector device is arranged in front of the micro lens 125a.
  • the projector device of the probe-head is configured to project the structured light as a structured light beam e.g. onto at least a section of a tissue field.
  • Figure 24 shows the fiber optic probe mounted to an instrument 127 to form a fiber optic probe instrument assembly of an embodiment of the invention.
  • the minimally invasive surgical instrument 127 comprises a distal portion 127a adapted to be inserted into a natural or artificial cavity of a human or animal patient subjected to surgery.
  • the minimally invasive surgical instrument is a grasper, comprising a pair of grasper arms 127b which form the distal end and the tip of the instrument 127.
  • the probe-head 125 is mounted to the minimally invasive surgical instrument 127 at its distal portion 127a.
  • Figures 25a-25d illustrate cross-sectional views of examples of fiber bundles suitable for use in a fiber optic probe instrument assembly of embodiments of the invention.
  • Figure 26 shows a distal portion of a fiber optic probe instrument assembly, where the minimally invasive surgical instrument is an endoscope 137.
  • the endoscope 137 comprises a camera channel 137b and a probe channel 137a. In the shown embodiment a camera has not been inserted through the camera channel. A probe-head 135 and a length portion of the fiber bundle 134 have been passed through the probe channel.
  • Figures 27-33 illustrate a structured light ultrasound instrument and a minimally invasive surgery navigation system in use during a minimally invasive surgical procedure.
  • the illustration of figure 27 shows a tissue surface 141 within a body cavity.
  • the tissue surface has a tumor 142 which may be malignant and is to be diagnosed and optionally removed.
  • a distal portion of a structured light ultrasound instrument 143 as described above is inserted into the cavity.
  • the structured light ultrasound instrument 143 comprises an ultrasound head 144 comprising a transceiver for emitting and receiving ultrasound.
  • Proximally to the ultrasound head 144 the distal portion of the structured light ultrasound instrument 143 has an articulating length section 145, which can be articulated with a high degree of freedom.
  • the articulating movements are preferably computer controlled, or at least the computer system comprises data representing the movements and articulated position of the articulating length section 145, such that the computer system can determine the relative position between the projector device 146 and the ultrasound head 144.
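Chaining the estimated projector pose with the articulation data then locates the ultrasound head in the reconstruction frame; a minimal numpy sketch with assumed 4x4 homogeneous transforms (names are illustrative):

```python
import numpy as np

def head_in_camera(T_cam_proj, T_proj_head):
    """Pose of the ultrasound head 144 in the camera/reconstruction frame.

    T_cam_proj: 4x4 pose of the projector device 146, estimated from the
    matched light features as described above.
    T_proj_head: 4x4 projector-to-head transform, derived from the data
    representing the articulated state of the articulating length section 145.
    """
    return np.asarray(T_cam_proj) @ np.asarray(T_proj_head)
```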
  • the projector device is located proximally to the articulating length section 145 to ensure a desired location for emitting the light pattern onto a desired tissue field of the tissue surface 141.
  • the structured light 147, here in the form of a plurality of light dots, impinges onto the tissue field and a portion of the light is reflected.
  • An endoscope comprising an image acquisition device is configured for acquiring frames comprising reflections of the structured light 147.
  • each frame (image) comprises a set of pixel data associated with a time attribute and advantageously the set of pixel data is transmitted to the computer system for being computed according to the 3D reconstruction system as described above.
  • the computer system is configured for performing a 3D reconstruction in real time using these acquired frames as described above and at the same time the ultrasound head 144 is scanning the tissue below the tissue surface.
  • the combined 3D reconstruction system/minimally invasive surgery navigation system is configured to determine in real time the spatial location and orientation of the non-articulated portion of the structured light ultrasound instrument 143, of the ultrasound head 144 and of the endoscope (camera) 148.
  • as shown in figure 28, the combined 3D reconstruction system/minimally invasive surgery navigation system further overlays a depiction 143a, 144a, 148a of the spatial location and orientation of, respectively, the non-articulated portion of the structured light ultrasound instrument 143, the ultrasound head 144 and the endoscope (camera) 148.
  • the surgeon has full information about the coordinates (spatial location) and the orientation of the non-articulated portion of the structured light ultrasound instrument 143, of the ultrasound head 144 and the endoscope (camera) 148.
  • the structured light has been changed to an angular pattern.
  • Figure 29 illustrates a view seen from the endoscope (the probe spatial location is shown but not the orientation). The system advantageously allows the surgeon to switch these data on and off.
  • Figure 30 illustrates the image of the tissue field and a corresponding 2D ultrasound image 149. In real time, both images will continuously be replaced by real time images.
  • the tissue near the tumor may comprise a vein 140 or another critical structure which the surgeon should aim not to damage during surgery. Due to the present invention the surgeon may locate such critical structures and thereby avoid unnecessary damage.
  • Figure 31 illustrates the image of the tissue field and a corresponding 3D ultrasound image 150. In real time, both images will continuously be replaced by real time images.
  • to couple the 3D ultrasound image 150, i.e. to determine the spatial location and orientation of the 3D ultrasound image 150, it is required to know the coordinates and orientation of the ultrasound head 144 and at least the orientation, and advantageously also the coordinates, of the endoscope 148.
  • Figure 32 shows a view of the image of the tissue field and the corresponding 3D ultrasound image 150, where the 3D ultrasound image 150 and the 2D and/or 3D surface data determined from the frames of the endoscope have been correlated in 3D orientation, spatial position and size.
  • Figure 33 shows two different views corresponding to the view of figure 32, where the surgeon may dynamically change the x,y,z angle of one or both of the views.


Abstract

A 3D reconstruction system for performing a determination of a tissue field is disclosed. The system has a structured light arrangement with a projector device configured to project a structured light beam onto at least a section of the tissue field, an image acquisition device configured for acquiring frames comprising reflections of the structured light beam from the tissue field, each frame comprising a set of pixel data associated with a time attribute, and a computer system. The projected structured light beam is stored as a reference structured light data set comprising data representing a set of light features. The computer is configured for • in real time receiving frames acquired by the image acquisition device, • recognizing a plurality of light features including primary light features of the set of light features from the received set of pixel data, • matching the recognized primary light features with corresponding light features of the projected structured light beam and based on the matches estimating the spatial position of the projector device relative to at least a part of the tissue field, and • performing at least one determination of the tissue field based on the spatial position of the projector device and the recognized light features.

Description

A 3D RECONSTRUCTION SYSTEM
TECHNICAL FIELD
The invention relates to a 3D reconstruction system for determining a 3D profile of an object. The 3D reconstruction system is in particular suitable for use in surgery, such as minimally invasive surgery.
BACKGROUND ART
Minimally invasive surgery (MIS) and in particular laparoscopy has been used increasingly in recent years due to the benefits compared to conventional open surgery as it reduces the trauma to the patient skin and optionally further tissue, leaves smaller scars, minimizes post-surgical pain and enables faster recovery of the patient.
There are different kinds of MIS such as laparoscopy, endoscopy, arthroscopy and thoracoscopy. Whereas many of the MIS procedures are mainly for examination within natural openings of mammals, endoscopy, such as laparoscopy has in recent years developed to be a preferred method of performing both diagnostic and surgical procedures.
In minimally invasive surgery the surgeon accesses a body cavity, such as the abdominal or pelvic cavity, through one or more small incisions or the surgeon may access a body cavity via a natural opening. An endoscope, such as a laparoscope may be inserted through an incision and be conventionally connected to a monitor, thereby enabling the surgeon to see the inside of the body cavity, such as an abdominal or pelvic cavity. In order to perform the surgical procedure, a surgical instrument is inserted through the same or usually another incision. Usually the body cavity (sometimes called "surgery cavity") around the surgical site is inflated with a fluid, preferably gas e.g. carbon dioxide in order to create an 'air' space within the body cavity to make space for the surgeon to view the surgical site and move the laparoscopic instruments. In order to improve the 3D surface determination for the surgeon, in particular to make it easier for the surgeon to determine the sizes of various organs, tissues, and other structures in a surgical site, several in-situ surgical metrology methods have been provided in the prior art. Different types of optical systems have been applied to provide an improved vision of the surgical site.
Also in other connections it may be difficult or expensive to obtain 3D surface data from internal body cavity surfaces of a patient or other areas of a patient which are difficult to access. Also in connection with robotic operations improved methods of obtaining 3D surface data are desired.
In prior art methods the operator has for example used a CT scan to obtain 3D surface data.
US 2013/0296712 describes an apparatus for determining endoscopic dimensional measurements, including a light source for projecting light patterns on a surgical site including shapes with actual dimensional measurements and fiducials, and means for analysing the projected light patterns on the surgical site by comparing the actual dimensional measurements of the projected light patterns to the surgical site.
WO 2013/163391 describes a system for generating an image, which the surgeon may use for measuring the size of or distance between structures in the surgical field by using an invisible light for marking a pattern to the surgical field. The system comprises a first camera; a second camera; a light source producing light at a frequency invisible to the human eye; a dispersion unit projecting a predetermined pattern of light from the invisible light source; an instrument projecting the predetermined pattern of invisible light onto a target area; a band pass filter directing visible light to the first camera and the predetermined pattern of invisible light to the second camera; wherein the second camera images the target area and the predetermined pattern of invisible light, and computes a three-dimensional image. US2008071140 discloses an endoscopic surgical navigation system which comprises a tracking subsystem to capture data representing positions and orientations of a flexible endoscope during an endoscopic procedure, to allow co-registration of live endoscopic video with intra-operative and/or pre- operative scan images. Positions and orientations of the endoscope are detected using one or more sensors and/or other signal-producing elements disposed on the endoscope.
US6503195 describes a real-time structured light depth extraction system which includes a projector for projecting structured light patterns, comprising a positive pattern and an inverse pattern, onto an object of interest. A camera samples light reflected from the object synchronously with the projection of structured light patterns and outputs digital signals indicative of the reflected light. An image processor/controller receives the digital signals from the camera and processes the digital signals to extract depth information of the object in real time.
A system for generating augmented reality vision of surgical cavities for viewing internal structures of the organs of a patient to determine the minimal distance to a cavity surface or organ of a patient is described in "Augmented reality in laparoscopic surgical oncology" by Stephane Nicolau et al. Surgical Oncology 20 (2011) 189-201 and "An effective visualization technique for depth perception in augmented reality-based surgical navigation" by Choi Hyunseok et al. The international journal of medical robotics and computer assisted surgery, 2015 May 5. doi: 10.1002/rcs.1657.
"Autonomous retrieval and positioning of surgical instruments in robotized laparoscopic surgery using visual servoing and laser pointers" by Krupa et al.
Robotics and Automation, 2002. Proceedings. ICRA Ό2. IEEE International Conference 11-15 May 2002 - published 07 August 2002 - discloses a robotic vision system that automatically retrieves and positions surgical instruments in robotized laparoscopic surgery. The surgical instrument comprises a laser pointing instrument to project laser spots. The distance between instrument and organ may be estimated by using images of optical markers mounted on the tip of the instrument and images of the laser spots projected by the same instrument.
A similar approach for providing depth perception during minimally invasive surgery is described in US2016/0360954. In this method a structured light is projected from a projector carrying a fiducial marker and the distance between the projector and the surface reflecting the emitted light may be estimated.
Other prior art systems for 3D reconstructions are for example described in US2009244260, US2014085421, US2014052005 and US2016143509.
These systems generally comprise a projector and a camera which are spatially interconnected.
DISCLOSURE OF INVENTION
An object of the present invention is to provide a 3D reconstruction system for performing a determination of a tissue field in 3D space with high accuracy.
In an embodiment it is an object to provide a 3D reconstruction system for determining a distance in 3D space e.g. between an instrument and a tissue field.
In an embodiment it is an object to provide a 3D reconstruction system for determining the contour of at least a part of the tissue, such as for determining tissue topography in real time and with a very high accuracy.
In an embodiment it is an object to provide a 3D reconstruction system for determining changes of the tissue field in real time, such as determining a pulsating movement at the tissue field. In an embodiment it is an object to provide a 3D reconstruction system for determining changes of the tissue field in real time, such as movements caused by manipulation using an instrument or patient movements, e.g. local movements, such as peristaltic movements causing movements of the tissue field.
In an embodiment it is an object to provide a 3D reconstruction for performing a determination of a tissue field in 3D space with a high accuracy while simultaneously allowing movements of one or more tools of the system during the measurements. In an embodiment it is an object to provide a 3D reconstruction system for performing a determination of a tissue field in 3D space, which system is dynamic and which may be used for providing a 3D scan of a tissue field for diagnostics and/or surgery.
In an embodiment it is an object to provide a 3D reconstruction system for performing a determination of a tissue field in 3D space providing a highly accurate size determination of the target area.
In an embodiment it is an object to provide a 3D reconstruction system for performing a volumetric determination of an organ or a part thereof. This and other objects have been solved by the invention or embodiments thereof as defined in the claims or as described herein below.
It has been found that the invention or embodiments thereof have a number of additional advantages, which will be clear to the skilled person from the following description.
The 3D reconstruction system of the invention is especially suitable for performing a determination of a tissue field, such as a field for performing diagnostics and/or a surgery field.
The term tissue field is herein used to designate any surface areas of a mammal body, such as natural surface areas including external organs, e.g. skin areas and/or internal surface areas of natural openings and surface areas exposed by surgery and/or surfaces of a minimally invasive surgery cavity. The tissue field is advantageously an in vivo tissue field. The tissue field may include areas of organs, such as internal organs that have been exposed by surgery, e.g. surfaces of a heart, a spleen or a gland.
The phrase "determination of a tissue field in 3D space" is herein used to designate a determination of a property of the tissue field or a part thereof, and/or a determination of the tissue field or a part thereof relative to a selected unit such as a surgical tool. The property may be a tissue type determination and/or a size determination, such as a topologic size
determination, an area size determination and/or a volume determination.
The 3D reconstruction system of the invention is preferably suitable for performing a determination of a tissue field in 3D space, more preferably for performing real time determination in 3D space. In particular due to the structure and the programming of the 3D reconstruction system, 3D determinations may be performed with a very high accuracy.
The 3D reconstruction system comprises
• a structured light arrangement comprising a projector device configured to project a structured light beam onto at least a section of a tissue field,
• an image acquisition device configured for acquiring frames comprising reflections of the structured light beam from the tissue field, and
• a computer system.

The frames are digital frames and each frame comprises a set of pixel data associated with a time attribute, such as an actual time or a relative time, e.g. a time from start of a procedure or from a start time set by an operator. The structured light beam has a centre axis which may advantageously be determined as the centre axis of the structured light beam.
The structured light beam comprises a cross-sectional light structure, which means the light beam as seen in a cross-sectional view, e.g. as projected perpendicularly onto a plane surface. The cross-sectional light structure comprises a plurality of light features which are recognizable by the computer system from the set of pixel data. The light features may comprise an indefinite number of light features, such as an indefinite number of fractions of the cross-sectional light structure which may be recognized by the computer system. Thus, the light features may be optically recognizable by comprising an optically recognizable attribute, such as a geometrical attribute (e.g. a local shape), an intensity attribute and/or a wavelength attribute.
The optically recognizable attributes are recognizable from the pixel data. The set of pixel data comprises at least one value for each pixel. The value may be 0 for a pixel that does not detect any light. The values of the respective pixels may for example represent one or more wavelengths, the intensity of one or more wavelengths, total intensity, etc. Values of a group of pixels may represent a geometrical attribute, e.g. a line and/or a pattern of pixels with corresponding values.

The computer system may be a single computer or a group of computers which are in data communication with each other, e.g. by wire or wirelessly. The computer system comprises a processor, such as a multi-core processor.
In an embodiment the computer system forms part of a robot, such as a robot controller processing system configured for operating and controlling movement of the robot.
It has been found that the 3D reconstruction system may be operated with a relatively low processing power (CPU) while at the same time operating with a high accuracy in real time. The computer system is configured for storing data representing the projected structured light beam. As will be described further, the data set representing the projected light beam may be transmitted or determined by the computer system. In an embodiment the computer system comprises a memory configured for storing the projected structured light beam in the form of the reference structured light data set. The memory optionally stores the reference structured light data set or, as will be elaborated, a plurality of reference data sets each associated with properties of a structured light beam including data representing recognizable light features of the light beam.
The term "projected structured light beam" means the structured light beam as projected from the projector device. Thus, the projected structured light beam has the orientation and position (pose) corresponding to the beam as projected. The pose of the projected structured light beam can therefore be estimated to be the same as the pose of the projector device.
The projected structured light beam includes a group of electromagnetic waves projected from the projector and propagating along parallel or diverging directions, wherein the light is textured as seen in a cross-sectional view orthogonal to a centre axis (herein also referred to as the optical axis) of the group of electromagnetic waves, i.e. the light has areas of higher intensity and areas of lower or no intensity, which is not the natural Gaussian intensity distribution of a light beam. The terms "light pattern" and "light texture" are used interchangeably. The data representing the projected light beam is referred to as a "set of reference structured light data" or a "reference structured light data set".
Thus, the projected structured light beam may be stored in the form of a reference structured light data set. The reference structured light data set comprises at least a set of the light features of the projected structured light beam. The computer system is configured for
• in real time receiving frames acquired by the image acquisition device,
• recognizing a plurality of the set of light features including a plurality of primary light features from the received set of pixel data,
• matching the recognized primary light features with corresponding light features of the projected structured light beam and, based on the matches, estimating a spatial position of the projector device relative to at least a part of the tissue field, and
• performing at least one determination of the tissue field based on the spatial position of the projector device and the recognized light features, as illustrated by the sketch below.
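For illustration only, a minimal sketch of this receive/recognize/match/estimate loop is given below. It assumes (beyond anything stated herein) that the light features are simple bright dots, that the reference structured light data set is stored as 2D coordinates in the projector's pattern plane, and that matching is done by nearest neighbour followed by a RANSAC homography fit; all function names are illustrative, not the patented implementation.

```python
# A minimal sketch of the loop above; NOT the patented implementation.
# Assumptions: dot-shaped light features, a reference structured light
# data set stored as 2D pattern-plane coordinates, nearest-neighbour
# matching refined by a RANSAC homography fit.
import numpy as np
import cv2

def recognize_dot_features(pixel_data, threshold=200):
    """Toy recognizer: centroids of connected bright regions."""
    mask = (pixel_data > threshold).astype(np.uint8)
    _, _, _, centroids = cv2.connectedComponentsWithStats(mask)
    return centroids[1:].astype(np.float32)   # label 0 is the background

def match_and_estimate(ref_pts, img_pts):
    """Match recognized features to reference features by nearest
    neighbour and estimate the pattern-to-image homography; RANSAC
    rejects wrong matches."""
    d = np.linalg.norm(img_pts[:, None] - ref_pts[None, :], axis=2)
    matched_ref = ref_pts[d.argmin(axis=1)]
    H, inliers = cv2.findHomography(matched_ref, img_pts, cv2.RANSAC, 3.0)
    return H, inliers

# synthetic self-check: a reference dot grid seen through a known homography
ref = np.array([[x, y] for x in range(0, 100, 20)
                for y in range(0, 100, 20)], np.float32)
H_true = np.array([[1.1, 0.02, 5.0], [0.01, 0.95, -3.0], [1e-4, 0.0, 1.0]])
h = (H_true @ np.c_[ref, np.ones(len(ref))].T).T
img = (h[:, :2] / h[:, 2:]).astype(np.float32)
H_est, _ = match_and_estimate(ref, img)       # H_est approximates H_true
```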
Unless otherwise specified or clear from the context the term "frame" means a frame comprising reflections of the structured light beam from the tissue field, i.e. a frame acquired while the projector is projecting the structured light beam.
In an embodiment the computer system is configured for recognizing a plurality of the set of light features, including a plurality of primary light features, from two or more received sets of pixel data having corresponding time attributes, such as sets of pixel data of frames of a multi camera image acquisition device.
Matching of features of stereo pairs of images is known in the art, but heretofore it has never been considered to perform feature matching between a projected light beam and an image to estimate the spatial position of the projector device. According to the present invention a very effective method and system for 3D reconstruction of acquired image(s) has been provided, which may perform a real time 3D reconstruction with a high accuracy using relatively simple algorithms, which algorithms further may be processed using a relatively low processing power (CPU). The matching of recognized primary light features may be performed according to principles known from the art of feature matching of stereo images, for example by applying homographical iterative closest match algorithms and/or as described in the article "Wide Baseline Stereo Matching" by Philip Pritchett and Andrew Zisserman, Robotics Research Group, Department of Engineering Science, Oxford University, OX1 3PJ; IEEE International Conference on Computer Vision, pages 754-760, 1998.
The matching of the recognized primary light features may preferably comprise matching the pixel data representing the primary light features with pixel data of the reference structured light data set.
Based on the matches of light features the computer system may estimate the spatial position of the projector device relative to at least a part of the tissue field, determined e.g. from the position of the image acquisition device. Thus, the spatial position of the projector device relative to at least a part of the tissue field may be determined from the position, and preferably also the orientation, of the image acquisition device at the time of acquiring the processed image.
Thus, advantageously the computer system is configured for matching the recognized primary light features with corresponding light features of the projected structured light beam and, based on the matches, estimating the spatial position of the projector device relative to at least a part of the tissue field as the spatial position determined from the position of the image acquisition device. Thereby the projector device need not be within the field of view of the image acquisition device, since the computer system, based on the light feature matches, may determine the spatial position of the projector device relative to at least a part of the tissue field as it would have been imaged had it been within the field of view of the image acquisition device. Thus, advantageously, the image acquisition device and/or the projector device may be independently moved, and further the field of view of the image acquisition device may be relatively narrow and preferably focused predominantly onto the tissue field. Thereby the acquired images of the tissue field may be of very high quality and reveal many details which might not have been revealed using an image acquisition device with a wider field of view and/or greater depth of focus.
Further, it has been found that calibration of the 3D reconstruction system may not be required. In an embodiment the computer system may be configured for receiving a reference structured light data set via a calibration step. However, in this embodiment the system as such may not require calibration once the computer has the reference structured light data.
Knowing the spatial position of the projector device, the computer system may perform the one or more determinations of the tissue field based on the spatial position of the projector device and the recognized light features, e.g. using trigonometrical algorithms, for example as described in US 2016/0360954.
In an embodiment the computer system may perform the one or more determinations of the tissue field based on the spatial position of the projector device estimated as described herein and the recognized light features using the reconstruction models and algorithms described in WO2015/151098.
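Purely as an illustration of such a trigonometrical determination (not the method of the cited documents), a structured light system can be treated as a two-view geometry in which the projector acts as an inverse camera, so that matched light features can be triangulated into 3D tissue points once the projector pose is estimated; all numeric values below are made up:

```python
# Illustrative triangulation of matched light features; all intrinsics,
# poses and point coordinates are made-up example values.
import numpy as np
import cv2

K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])                     # shared pinhole intrinsics

R, _ = cv2.Rodrigues(np.array([0., 0.2, 0.]))    # estimated projector rotation
t = np.array([[50.], [0.], [0.]])                # 50 mm projector-camera baseline

P_cam = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # camera at the origin
P_proj = K @ np.hstack([R, t])                         # projector as inverse camera

cam_pts = np.array([[310., 245.], [400., 250.]]).T     # features in the frame
proj_pts = np.array([[260., 244.], [352., 251.]]).T    # same features in the pattern

X_h = cv2.triangulatePoints(P_cam, P_proj, cam_pts, proj_pts)
X = (X_h[:3] / X_h[3]).T   # 3D tissue points for distance/size determinations
```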
Since the spatial position of the projector device is determined from the same set of pixel data as the primary light features and/or from sets of pixel data having time attributes within a narrow time range, e.g. within 1 s, such as within 0.1 s, such as within 0.05 s, the computer system may now perform the one or more determinations of the tissue field in a simple way, even where the projector and/or the image acquisition device are moved independently of each other. The phrase "estimate the spatial position of the projector device" includes a determination, e.g. a calculation, of the spatial position of the projector device which may be further refined, e.g. as explained below.
The term "body cavity" is herein used to denote any gas and/or liquid filled cavity within a mammal body. The cavity may be a natural cavity or it may be an artificial cavity which has been filled with a fluid (in particular gas) to reach a desired size. The cavity may be a natural cavity which has been enlarged by being filled with a fluid. Advantageously the body cavity is a minimally invasive surgical cavity. The terms "distal" and "proximal" should be interpreted in relation to the orientation of tools used in connection with diagnostics and/or surgery, such as minimally invasive surgery.
The phrase "real time" is herein used to mean the time required by the computer to receive and process optionally changing data, such as
intraoperative data, optionally in combination with other data, such as predetermined data, reference data set, estimated data which may be non- real time data such as constant data or data changing with a frequency of above 1 minute to return the real time information to the operator. "Real time" may include a short delay, such as up to 5 seconds, preferably within 1 second, more preferably within 0.1 second of an occurrence.
The phrases "programmed for or to", "configured for or to" and "adapted for or to" are used interchangeably unless otherwise specified or clear from the context.
The term "operator" is used to designate a human operator (human surgeon) or a robotic operator i.e. a robot programmed to perform a minimally invasive diagnostics or surgical procedure on a patient. The term "operator" also includes a combined human and robotic operator, such as a robotic assisted human surgeon. The term "access port" means a port into a body cavity provided by a cannula inserted into an incision through the mammal skin and through which cannula an instrument may be inserted. The term "penetration hole" means a hole through the mammal skin without any cannula. The term "rigid connection" means a connection which ensures that the relative position between rigidly connected elements is substantially constant during normal use.
The term "cannula" means herein a hollow tool adapted for being inserted into an incision to provide an access port as defined above. The term "projector" means "projector device" unless otherwise specified.
The phrase "a camera baseline" means the distance between cameras or camera units. The distance is - unless otherwise specified - determined as the distance between the lens' center points (optical axis) corresponding to the distance between the center of the images acquired by the two
camera/camera units.
The phrase "a projector-camera baseline" means the distance between the camera/camera unit and the projector. The distance is - unless otherwise specified - determined as the distance between the camera lens center of the camera/camera unit and the center of the projector. Often the surface of the tissue field may be very curved. The terms "target area" or "target site" of the tissue field e.g. of the minimally invasive surgical cavity is herein used to designate an area which the surgeon may have focus on, e.g. for diagnostic purpose and/or for surgical purpose. The term "tissue site" may be any site of the tissue field e.g. a target site. The tissue field may e.g. comprise a surgical field of an open surgery or a minimally invasive surgery. In an embodiment the tissue field comprises surfaces of the intestine and the throat. The term "skin" is herein used to designate the skin of a mammal. As used herein the skin may include additional tissue which is or is to be penetrated by a penetrator tip or through which an incision for an access port is made or may be made. The term "minimally surgical instrument" means herein a surgical instrument which is suitable for use in surgery performed in natural and/or artificial body openings of a human or animal,, such as for use in minimally invasive surgery.
The phrase "corresponding time attributes" is used to mean attributes that represent a substantially identical time.
The term "about" is generally used to include what is within measurement uncertainties. When used in ranges the term "about" should herein be taken to mean that what is within measurement uncertainties is included in the range. It should be emphasized that the term "comprises/comprising" when used herein is to be interpreted as an open term, i.e. it should be taken to specify the presence of specifically stated feature(s), such as element(s), unit(s), integer(s), step(s) component(s) and combination(s) thereof, but does not preclude the presence or addition of one or more other stated features. The terms "projector device" and "projector" are used interchangeable.
Throughout the description or claims, the singular encompasses the plural unless otherwise specified or required by the context.
Prior to recognizing a plurality of the light features of the set of light features, the set of pixel data may advantageously be subjected to error detection and correction, e.g. to detect and correct and/or discharge corrupted data, such as data that have been corrupted during transmission, data that include erroneous reflections, e.g. due to moisture at the tissue field, or data that are missing due to occlusions or absorption. The error detection and correction may e.g. be provided by adding some redundancy to the set of pixel data prior to transmission to the computer system, e.g. by adding extra data, which the computer system may use to check the consistency of the set of pixel data and to recover data that have been determined to be corrupted. In an embodiment redundancy is incorporated into the structured light pattern by designing the cross-sectional light structure of the projected structured light beam to provide that the set of pixel data comprises redundant data.
Error detection and correction schemes are well known in the art, and the skilled person will be capable of adapting such schemes for use in the present invention.
In an embodiment the error detection and correction of the set(s) of pixel data may comprise subtracting values representing background frame(s). This will be described in further detail below.
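As a minimal sketch of such background withdrawal, assuming a background frame can be acquired with the projector switched off (an assumed acquisition scheme, not specified here):

```python
import numpy as np

def subtract_background(frame, background):
    """Withdraw ambient-light values by subtracting a background frame
    acquired without the structured light; negatives clip to zero."""
    diff = frame.astype(np.int16) - background.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```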
In an embodiment the reflected light is subjected to an optical filtering which may further be useful in obtaining high quality frames. This is also described in further detail below.

The projector device of the structured light arrangement and the image acquisition device may be arranged independently of each other and optionally be individually movable relative to the tissue field. Thus, the front of the projector device from where the structured light beam is projected and the front of the image acquisition device collecting the light for imaging the surface of the tissue field onto which the structured light is impinging and from where the image is acquired may be arranged in a triangular configuration. According to an embodiment of the invention it has been found that by matching data representing primary light features from the image of the structured light impinging onto the tissue field with corresponding light features of the reference structured light data set representing the projected structured light beam, a very accurate determination of the spatial position of the projector device may be obtained using, for example, algorithms based on geometrical mathematics. The computer system comprises the reference structured light data set and comprises an algorithm that, from the matched primary light features and their orientation and optionally the distortion of the recognized primary features, may determine the triangular configuration between the projector device, the image acquisition device and the tissue field and thus, based on this, perform 3D determinations of the tissue field, such as 3D distances and/or topographical configurations of the tissue field. Generally, it is desirable to use trigonometrical, kinematic calculations for determining the spatial position and orientation of the projector device relative to at least a part of the tissue field. Thus, the triangular configuration between the projector device, the image acquisition device and the tissue field may be determined without the projector device being within the field of view of the image acquisition device. Thereby a very flexible system is provided which is both simple for an operator to use and simultaneously is configured for performing very accurate 3D determinations, such as size determinations, volume determinations, distance determinations, etc.
The determination of the spatial position and orientation of the projector device may be performed at a stationary or variable frequency; e.g. the frequency may be increased where the movement of the projector and/or the image acquisition device increases.
It has been found that the 3D reconstruction system may operate with high accuracy even where the distance between the projector and the image acquisition device is relatively large, such as up to about 45 cm, such as up to about 30 cm, such as up to about 15 cm, such as up to about 10 cm, such as up to about 5 cm, such as up to about 3 cm, such as up to about 2 cm.
In an embodiment the estimated spatial position comprises an estimated distance in 3D space between the tissue field and the projector device. The distance in 3D space between the tissue field and the projector device may be preprogrammed or operator selectable and may for example be a shortest distance, a distance in a selected vector direction, a distance from a centre of the front of the projector device from where the structured light beam is projected, and/or a distance to a specific point of the tissue field, e.g. to a protruding point of the tissue field and/or a target site of the tissue field, e.g. a nerve. In an embodiment the distance in 3D space between the tissue field and the projector device is the shortest Euclidean distance between the tissue field and the point of the projector device corresponding to the centre axis of the projected structured light beam, preferably the shortest Euclidean distance together with a coordinate vector direction of the Euclidean distance. In an embodiment the distance in 3D space between the tissue field and the projector device is given by the x, y and z coordinates in a 3D coordinate system.
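A trivial sketch of the shortest Euclidean distance determination, assuming the tissue field has already been reconstructed as a 3D point cloud (all values illustrative):

```python
import numpy as np

tissue = np.array([[0., 0., 80.], [10., 5., 95.], [-5., 8., 110.]])  # mm
p = np.array([0., 0., 0.])   # projector point on the beam's centre axis

d = np.linalg.norm(tissue - p, axis=1)
i = int(d.argmin())
shortest, direction = d[i], (tissue[i] - p) / d[i]  # distance + unit vector
```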
In an embodiment the estimated spatial position comprises an estimated distance in 3D space between the tissue field and the projector device from a point of view of the image acquisition device, such as a minimum distance between the tissue field and the projector device.
In an embodiment the estimated spatial position comprises the estimated distance in 3D space between the tissue field and the projector device as well as an estimated relative orientation of the projector.
In an embodiment the computer system is configured for generating a representation of the spatial position of the projector device as seen from a point of view which is different from that of the image acquisition device. The computer system may comprise algorithm(s) for performing the required geometrical determinations. The algorithm for performing the calculation of the representation of the spatial position of the projector device as seen from a point of view which is different from that of the image acquisition device may for example use optional geometrical distortions of recognized light features as a part of the basis for the determination, e.g. for determining the angle between the projector device and the image acquisition device. As described below, the computer system may in an embodiment know the spatial or a relative spatial position (preferably including distance and angle) of the image acquisition device, which may additionally be applied for improving the accuracy of the 3D determination. In an embodiment the system comprises a sensor arrangement arranged for determining the spatial or a relative spatial position (preferably including distance and angle) of the image acquisition device, e.g. for determining the distance between the projector and the image acquisition device. The sensor arrangement is preferably configured for determining the distance and the relative orientation between the projector and the image acquisition device. The sensor arrangement may in principle be any kind of sensor arrangement capable of performing the distance and optionally the orientation determination(s), such as for example a sensor arrangement comprising a transmitter and a receiver located at or associated with respectively the projector and the image acquisition device. The term "associated with" means in this connection that there is a known and/or rigid interconnection with the projector or the image acquisition device with which the sensor is associated.
Where the 3D reconstruction system is connected to or integrated with a robot, the sensor arrangement may e.g. comprise a first sensor on or associated with a first robot arm configured for being connected to the projector, e.g. via an instrument, and a second sensor on or associated with a second robot arm configured for being connected to the image acquisition device, e.g. via an instrument. Further, for increasing the accuracy of the size determination, it is desirable that the computer system comprises or is supplied with data representing the divergence of the projected structured light beam. This data representing the divergence may advantageously form part of the reference structured light data set. Advantageously the estimated spatial position comprises an estimated orientation, e.g. comprising a vector coordinate set and/or comprising at least one orientation parameter selected from yaw, roll and pitch or any combination thereof. Preferably, the estimated spatial position comprises two or more, such as all, of the orientation parameters yaw, roll and pitch. The orientation parameters yaw, roll and pitch and their relation are generally known within the art of airborne LIDAR technology. In an embodiment the estimated spatial position comprises an estimated distance and at least one orientation parameter, such as an orientation parameter selected from yaw, roll and pitch; preferably the estimated spatial position comprises two or more, such as all, of the orientation parameters yaw, roll and pitch.
In an embodiment the estimated spatial position comprises an estimated shortest or longest distance between a selected point of the projector and the tissue field.
In an embodiment the estimated spatial position comprises an estimated distance described by 3 values in 3D space (e.g. x, y and z values in a 3D coordinate system) and an estimated orientation, e.g. also described by 3 values in 3D space (e.g. x, y and z values in a 3D coordinate system). The values in 3D space representing distance and orientation are preferably values in a common coordinate system. In an embodiment the estimated spatial position and orientation are described by two end points in a coordinate system, e.g. end points defined by two sets of x, y and z values in a 3D coordinate system. In an embodiment the estimated spatial position comprises an estimated distance represented by 2 sets of values in a common 3D coordinate system, each set comprising an x, a y and a z value.
Preferably the estimation of the spatial position comprises estimating the pose (position and orientation) of the structured light as projected from the projector relative to the orientation and position of the reflected light from the tissue field.
In an embodiment the estimation of the spatial position of the projector device comprises estimating the pose of the projected structured light beam. In an embodiment the estimation of the spatial position of the projector device is determined to be the estimation of the pose of the projected structured light beam, preferably determined as projected i.e. at the position of the projector.
In an embodiment the estimated orientation and optionally the estimated spatial position of the projector is determined using quaternion based geometrical algorithms. Mathematical methods based on or including quaternions are well known. The quaternion model was first described by the Irish mathematician William Rowan Hamilton in 1843. The quaternion model provides a convenient mathematical model for representing orientations and rotations of objects in three dimensions. Further information about quaternions may be found in Altmann, S.L., 2005. Rotations, Quaternions, and Double Groups. Courier Corporation, and/or in D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 47(1/2/3):7-42, April-June 2002.
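For illustration, SciPy's rotation utilities provide such quaternion-based computations; the quaternion value and the chosen z-y-x Euler convention below are assumptions (yaw/pitch/roll conventions vary between fields):

```python
import numpy as np
from scipy.spatial.transform import Rotation

# quaternion (x, y, z, w) for an estimated projector orientation,
# here ~30 degrees about the y axis (illustrative value)
q = np.array([0.0, 0.259, 0.0, 0.966])
rot = Rotation.from_quat(q)

angles = rot.as_euler('zyx', degrees=True)  # one yaw/pitch/roll convention
R = rot.as_matrix()                         # equivalent 3x3 rotation matrix

# quaternions compose rotations with a single product, which is one
# reason they are convenient for orientation tracking
combined = rot * Rotation.from_euler('z', 10, degrees=True)
```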
In an embodiment, after having matched the primary light features, the set of pixel data may be subjected to outlier removal, i.e. removal of data values that lie in the tail of the statistical distribution of a set of data values and therefore are estimated to likely be incorrect. In an embodiment the computer system is configured for estimating one or more of the orientation parameters yaw, roll and pitch at least partly based on the matches of light features. Alternatively or in addition, information relating to the orientation may be transmitted to the computer system from another source, e.g. another sensor. Advantageously one or more of the orientation parameters yaw, roll and pitch is/are transmitted to the computer system and/or determined by the computer system independently from the light features.
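The outlier removal mentioned above may, for example, use a median-absolute-deviation test on the match residuals; a minimal sketch (the residual definition is an assumption):

```python
import numpy as np

def inlier_mask(residuals, k=3.0):
    """True for values within k robust standard deviations of the
    median; the tail of the distribution is flagged as outliers."""
    med = np.median(residuals)
    mad = np.median(np.abs(residuals - med))
    sigma = 1.4826 * mad            # MAD-to-sigma factor for Gaussian data
    return np.abs(residuals - med) <= k * sigma

r = np.array([0.40, 0.50, 0.45, 0.52, 6.0])  # one gross mismatch
print(inlier_mask(r))   # -> [ True  True  True  True False]
```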
In an embodiment one or more of the orientation parameters yaw, roll and pitch is/are at least partly obtained from a sensor located at or associated with the structured light arrangement and/or the image acquisition device to sense at least one of the orientation parameters yaw, roll and pitch. The sensor located at or associated with the structured light arrangement and/or the image acquisition device may advantageously be configured to determine the relative orientation between the structured light arrangement and the image acquisition device.
In an embodiment the computer system is configured for estimating the homographic transformation between the matched features and, based on the homographic transformation, determining the spatial position, such as the pose, of the projector device.
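One possible (assumed, not the patented) realization uses OpenCV's homography decomposition, which returns up to four candidate rotation/translation/plane-normal triples that must be disambiguated, e.g. by requiring the matched points to lie in front of both devices:

```python
import numpy as np
import cv2

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])

# H: pattern-to-image homography estimated from the matched light
# features (e.g. the RANSAC fit sketched earlier); values illustrative
H = np.array([[1.05, 0.01, 12.0],
              [0.00, 0.98, -4.0],
              [1e-4, 0.00, 1.0]])

n, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
for R, t in zip(Rs, ts):            # up to four candidate poses
    rvec, _ = cv2.Rodrigues(R)
    print(rvec.ravel(), t.ravel())  # keep the candidate satisfying cheirality
```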
In an embodiment the computer system comprises an iterative closest feature algorithm for matching the recognized primary light features with corresponding light features of the projected light beam.
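A compact 2D sketch of such an iterative closest feature loop, alternating nearest-neighbour assignment with a least-squares rigid re-alignment (one common variant; the exact algorithm is not specified here):

```python
import numpy as np

def iterative_closest_feature(src, dst, iters=20):
    """Iteratively align recognized features `src` to reference
    features `dst`: assign nearest neighbours, then re-estimate the
    rigid 2D transform (Kabsch) and re-apply it."""
    src = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(src[:, None] - dst[None, :], axis=2)
        nn = dst[d.argmin(axis=1)]              # closest reference feature
        mu_s, mu_d = src.mean(axis=0), nn.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (nn - mu_d))
        if np.linalg.det(U @ Vt) < 0:           # enforce a proper rotation
            Vt[-1] *= -1
        R = (U @ Vt).T
        src = (src - mu_s) @ R.T + mu_d
    return src
```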
In principle the matching of one single recognized light feature with the corresponding light feature of the projected structured light beam may suffice where the tissue field is relatively planar, where the light feature comprises both orientation and position attributes and/or where the light feature is perfectly recognized. However, for practical operation it is desirable that the number of recognized primary light features which are matched to corresponding light features of the projected structured light beam is at least 2, preferably at least 3, such as at least 5, such as at least 7. Thereby a higher accuracy may be obtained.
In an embodiment the computer system is configured for identifying a first number of pairs of matched features with a homographic transformation that corresponds within a threshold and preferably for applying this homographic transformation as the estimated homographic transformation e.g. to thereby estimate the pose of the projector device.
Advantageously the first number of pairs of matched features is at least two, such as at least 3, such as at least 5.
The computer system may be configured for identifying one or more second pairs of matched features with a transformation which differs beyond the threshold from the homographic transformation of the first number of pairs of matched features. The computer system may further be configured for correcting and/or discharging the pixel data representing the recognized feature(s) of the second pairs of matched features in particular where the transformation of the second pair(s) of matched light features differs far beyond the threshold and/or where the transformation of the second pair of matched light features is determined from one single pair of matched light features.
Thereby recognized light features reflected from areas of the tissue field having large topographical height differences relative to the average tissue field may be disregarded or corrected for the recognized light features to be used as primary recognized light features in the estimation of the spatial position of the projector device.
In an embodiment a timely associated 2D image of an element associated with the projector may be used to confirm the estimation of the spatial position of the projector or e.g. to rule out erroneous estimates of the spatial position of the projector, e.g. due to undesired light reflections which may for example occur at very curved surfaces or where the angle between the emitted light beam and the surface of the body cavity reflecting the light pattern is very far from normal, e.g. 10 degrees or less. It has been observed that in such situations some of the estimates, e.g. 1 out of 10-100, may be erroneous. The timely associated 2D image may for example be a frame acquired by the image acquisition device with a time attribute corresponding to that of the set(s) of pixel data used for the estimation of the spatial position of the projector.
In an embodiment the element associated with the projector may be an instrument, such as a minimally invasive surgical instrument to which the projector is fixed or mounted. The 2D image may e.g. comprise a tip of the instrument together with a surface part of the tissue field. The tip may reveal the tip orientation, which may be correlated to the determined projector orientation, and the relation between the tissue field and the tip may be correlated to the spatial position of the projector. The image acquisition device may comprise a single camera or several cameras, e.g. a stereo camera.
In an embodiment the image acquisition device comprises a single camera configured for acquiring the frames. The sets of pixel data of the respective frames are associated with respective consecutive time attributes representing a time of acquisition as described above.
Where the image acquisition device has one single camera it may be desired to use a higher number of recognized features for matching with the corresponding light features of the projected structured light beam.
Additionally or alternatively the computer system may be configured for matching recognized features of one set of pixel data with corresponding recognized features of a subsequent or previous set of pixel data.
The matching of recognized light features from one set of pixel data with corresponding recognized features of a subsequent or previous set of pixel data is referred to as time shifted matching.
In an embodiment the computer system is configured for repeating the steps of
• receiving a set of pixel data associated with a time attribute and representing a single frame,
• recognizing the set of light features from the set of pixel data,
• identifying and/or selecting the primary light features from the recognized set of light features,
• identifying corresponding light features in the reference structured light data set representing the features of the projected structured light beam,
• matching the primary light features with the corresponding light features of the projected structured light beam,
• estimating the spatial position of the projector device relative to at least a part of the tissue field, and
• performing at least one determination of the tissue field.
In this embodiment the number of recognized primary features for matching may advantageously be at least 3, such as at least 5, such as from 6 to 100, such as from 8 to 25.
It is preferred that the frames comprise a plurality of frames acquired with a wide projector-camera baseline, i.e. with a relatively large distance between the projector and the camera relative to the distance between the projector and the tissue field.
In an embodiment the computer system may, in one or more of the repeating steps, further be configured for performing time shifted matching comprising matching the primary light features with corresponding primary light features of at least one set of pixel data associated with a previous time attribute, and for applying the time shifted matching in the estimation of the spatial position of the projector device. Advantageously the previous time attribute is up to about 10 seconds, such as up to about 5 seconds, such as up to one second earlier than the time attribute of the set of pixel data processed in the current processing step. In an embodiment the time shifted matching may comprise feature matching over 3, 4 or more sets of pixel data of subsequently acquired images.
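A minimal sketch of such time shifted matching, assuming each set of pixel data reduces to feature positions plus its time attribute (the data layout is an assumption):

```python
import numpy as np

def time_shifted_matches(curr, prev, max_dt=1.0, max_px=15.0):
    """Pair features of the current set of pixel data with the nearest
    features of a previous set whose time attribute is at most
    `max_dt` seconds older."""
    (t_c, pts_c), (t_p, pts_p) = curr, prev
    if not 0.0 < t_c - t_p <= max_dt:
        return []
    d = np.linalg.norm(pts_c[:, None] - pts_p[None, :], axis=2)
    return [(i, int(j)) for i, j in enumerate(d.argmin(axis=1))
            if d[i, j] <= max_px]

prev = (0.00, np.array([[100., 80.], [220., 150.]]))
curr = (0.04, np.array([[103., 82.], [218., 149.]]))
print(time_shifted_matches(curr, prev))   # -> [(0, 0), (1, 1)]
```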
For increasing accuracy, in particular where the tissue field is very uneven/curved, the image acquisition device may advantageously comprise a multi camera configured for acquiring sets of the frames, where each set of frames comprises at least two simultaneously acquired frames and the sets of pixel data of the frames of a set of frames are associated with corresponding time attributes representing the time of acquisition.
The phrase "corresponding time" is used to mean a substantially identical time attribute.
In an embodiment the computer system is configured for repeating the steps of
• receiving sets of pixel data associated with a corresponding time attribute and representing the sets of frames,
• recognizing the set of light features from each of the respective sets of pixel data,
• identifying and/or selecting corresponding primary light features from the recognized set of light features of the respective sets of pixel data,
• identifying corresponding light features in the reference structured light data set representing the features of the projected structured light beam,
• matching the primary light features of each of the respective sets of pixel data with the corresponding light features of the projected structured light beam,
• estimating the spatial position of the projector device relative to at least a part of the tissue field, and
• performing at least one determination of the tissue field.
Advantageously the number of recognized features in this embodiment is at least 2, such as at least 3, such as from 5 to 100, such as from 6 to 25. Where the image acquisition device comprises two or more cameras, such as at least one stereo camera, the computer system is advantageously configured for performing stereo matching comprising matching the primary light features of two or more of the respective sets of pixel data with each other. The stereo matching may e.g. be performed in one or more of the repetitions of the above steps.
The stereo matching may advantageously be a wide camera baseline stereo matching e.g. using epipolar geometry.
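Such epipolar-geometry-based filtering can, for example, estimate the fundamental matrix robustly and keep only candidate matches consistent with it; a sketch (at least 8 candidate matches are required, and the thresholds are assumptions):

```python
import numpy as np
import cv2

def epipolar_filter(pts_left, pts_right):
    """pts_*: Nx2 float32 pixel coordinates (N >= 8). Estimate the
    fundamental matrix F with RANSAC and keep only the wide-baseline
    matches consistent with the epipolar geometry."""
    F, inliers = cv2.findFundamentalMat(pts_left, pts_right,
                                        cv2.FM_RANSAC, 1.0, 0.99)
    keep = inliers.ravel().astype(bool)
    return F, pts_left[keep], pts_right[keep]
```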
The two or more cameras of the multi camera may be integrated in one unit or in separate units.
In an embodiment the computer system is configured for performing time shifted stereo matching comprising matching the primary light features with corresponding primary light features of at least one set of pixel data associated with a previous time attribute, and for applying the time shifted matching in the estimation of the spatial position of the projector device.
Advantageously the previous time attribute is up to about 10 seconds, such as up to about 5 seconds, such as up to one second earlier than the time attribute of the set of pixel data processed in the current processing step.
The multi camera may advantageously comprise a stereo camera comprising two coordinated image acquisition units. The image acquisition units may be arranged to acquire wide camera baseline images. In an embodiment the image acquisition units are arranged with a fixed relative distance to each other, preferably of at least about 5 mm, such as at least about 1 cm, such as at least about 2 cm. In stereo camera technology it is known that, in order to obtain an accurate size determination, the distance between two cameras arranged for acquiring images of a surface that is to be determined should be equal, or as close to equal as possible, to the distance between the cameras and the surface. However, for ensuring a small size of the image acquisition device, the distance(s) between image acquisition units is advantageously relatively small, such as less than 5 mm, such as from about 0.1 to about 3 mm.
Thereby the camera baseline, i.e. the distance between the camera units, will be relatively narrow compared to the distance to the tissue field. This may for example be compensated by operating using a wide projector-camera baseline e.g. as described below.
In an embodiment of the invention it has been found that by using a wide projector-camera baseline, e.g. where the distance between the projector and a camera is relatively large, such as at least about 1/16 of the distance between either or both of the projector and the camera and the tissue field, highly accurate size determinations may be obtained even using a mono camera or a narrow camera baseline. In an embodiment the 3D reconstruction system is configured for operating using both a wide camera baseline and a wide projector-camera baseline.
In an embodiment the computer system may further be configured for performing stereo matching, and preferably wide field stereo matching, of tissue field features - i.e. features that represent local tissue field areas having a characteristic attribute, such as a protrusion, a depression and/or a local lesion. Since the 3D reconstruction system is capable of determining sizes, the computer system may determine the size and/or volume of, for example, a hernia and/or a protrusion, a depression and/or a local lesion.
In an embodiment the multi camera comprises coordinated image acquisition units arranged with substantially parallel optical axes and/or centre axes. In an embodiment the multi camera comprises coordinated image acquisition units arranged with optical axes having a relative angle to each other of up to about 45 degrees, such as up to about 30 degrees, such as up to about 15 degrees. In order to perform the stereo matching the coordinated image acquisition units are arranged to have an overlapping field of view. Preferably the overlapping field of view is relatively large, since only light features recognized in two or more images of the multi camera may be matched. Preferably the coordinated image acquisition units have an at least about 10 % overlapping field of view, such as at least about 25 % overlapping field of view, such as at least about 50 % overlapping field of view. The overlapping field of view is generally determined as an angular field of view.
Advantageously the image acquisition device has a stereo field of view - determined as the maximal overlapping field of view - of up to about 60 degrees, such as up to about 50 degrees, such as up to about 40 degrees, such as from about 5 degrees to about 50 degrees.
For a single camera it is desirable in an embodiment that the image acquisition device has a maximal field of view of from at least about 5 degrees to about 160 degrees, such as up to about 120 degrees, such as from about 10 to about 100 degrees, such as from about 15 to about 50 degrees, such as from about 20 to about 40 degrees.
The maximal field of view may be determined in any rotational orientation, including horizontal, vertical or diagonal orientation. In an embodiment the image acquisition device has a field of view adapted to cover the tissue field without including the projector device. As described above, the 3D reconstruction system need not acquire images including the projector device, and thus the 3D reconstruction system is a very flexible system which may operate with limited field(s) of view while simultaneously providing highly accurate 3D determinations.
In an embodiment the structured light arrangement and the image acquisition device are located on the same movable instrument. In an embodiment the structured light arrangement is located on one movable instrument and the image acquisition device is located on another independently movable instrument.
The one or more cameras of the image acquisition device may advantageously be located on a movable instrument. In an embodiment the one or more cameras is/are located on a cannula.
To ensure a desirably high depth resolution, the 3D reconstruction system is advantageously adapted for operating using a wide projector-camera baseline. The term "projector-camera baseline" means the distance between the camera and the projector. Advantageously the projector-camera baseline is dynamic and variable and may be selected by the surgeon to ensure the desired depth resolution. Thus, it has been found that the projector-camera baseline is advantageously selected in dependence on the distance between the projector and the tissue field, preferably such that the projector-camera baseline matches the distance between the projector and the tissue field, which has been found to provide very accurate determinations.
Advantageously the projector-camera baseline is from about 1/16 to about 16 times the distance between the camera and the tissue field, such as from about 1/4 to about 4 times the distance between the camera and the tissue field, such as from about half to about 2 times the distance between the camera and the tissue field, such as 1 time the distance between the camera and the tissue field.
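The effect of the baseline on depth resolution can be illustrated with the textbook triangulation approximation δZ ≈ Z²·δd/(f·B) (a standard stereo relation used here for illustration, not a formula taken from this disclosure):

```python
# depth error dZ ~ Z^2 * dd / (f * B): Z working distance, f focal
# length in pixels, B projector-camera baseline, dd matching error
f_px, dd = 800.0, 0.5               # illustrative values
Z = 100.0                           # mm
for B in (5.0, 25.0, 100.0):        # narrow ... wide baseline, mm
    print(f"B = {B:5.1f} mm -> dZ ~ {Z**2 * dd / (f_px * B):.3f} mm")
# 5 mm -> 1.250 mm, 25 mm -> 0.250 mm, 100 mm -> 0.063 mm: widening the
# baseline towards the working distance sharpens the depth resolution
```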
In an embodiment the 3D reconstruction system is adapted for operating using a wide projector-camera baseline comprising a projector-camera distance of up to about 45 cm, such as up to about 30 cm, such as up to about 15 cm, such as up to about 10 cm, such as up to about 5 cm, such as up to about 3 cm, such as up to about 2 cm. It has been found that operating using a wide baseline ensures an even higher accuracy for size determinations, such as tissue field volume determinations and/or tissue field topologic size determinations.
In an embodiment the reconstruction system is adapted for operating using a varying projector-camera baseline, preferably comprising that the projector and the image acquisition device are movable independently of each other.
Advantageously the computer system is adapted for determining the projector-camera baseline. This may be provided by the estimation of the homographic transformation between the matched features at a given time. Advantageously the computer system is adapted for determining the projector-camera baseline as a function of time.
In an embodiment the computer system is configured for associating a plurality of determined projector-camera baselines with a timely corresponding plurality of sets of pixel data, and preferably to provide that projector-camera baseline data representing the projector-camera baseline for each of the plurality of sets of pixel data is linked with the respective sets of pixel data. The data link may e.g. be provided by the time attributes, for example providing that each determined projector-camera baseline is associated with a time attribute representing the time of the determined projector-camera baseline. The determined projector-camera baselines and the sets of pixel data may thereafter be linked using their respective time attributes, for example such that a determined projector-camera baseline is linked to the set of pixel data having the closest time attribute match.
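A minimal sketch of such closest-time-attribute linking (the tuple layout is an assumption):

```python
def link_by_time(baselines, pixel_sets):
    """baselines: (time, baseline_mm) pairs; pixel_sets: (time, data)
    pairs. Link each baseline to the set of pixel data whose time
    attribute is the closest match."""
    return [(b, min(pixel_sets, key=lambda s: abs(s[0] - t_b)))
            for t_b, b in baselines]

baselines = [(0.10, 42.0), (0.20, 43.5)]
pixel_sets = [(0.09, "pixel-set-1"), (0.21, "pixel-set-2")]
print(link_by_time(baselines, pixel_sets))
# -> [(42.0, (0.09, 'pixel-set-1')), (43.5, (0.21, 'pixel-set-2'))]
```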
In an embodiment the 3D reconstruction system is adapted for operating using non-rigid structure from motion. This is in particular desirable for size determinations, such as tissue field volume determinations and/or tissue field topologic size determinations, e.g. where local motion may occur, e.g. near a vessel or nerve. The non-rigid structure from motion technique is e.g. disclosed in "A Simple Prior-free Method for Non-Rigid Structure-from-Motion Factorization" by Yuchao Dai et al., International Journal of Computer Vision, April 2014, Volume 107, Issue 2, pp. 101-122.
To further increase the accuracy of the 3D determinations it is desired that the computer system comprises data representing an angle of divergence of the projected structured light beam. By knowing the angle of divergence the computer system may determine the spatial position of the projector device with an even higher accuracy using relatively simple algorithms.
In an embodiment the projected structured light beam has an angle of divergence and the computer system stores or is configured for storing the angle of divergence in its memory. The angle of divergence data may e.g. be a part of the reference structured light data set.
The angle of divergence may for example be at least about 10 degrees, such as at least about 20 degrees, such as at least about 30 degrees, such as at least about 40 degrees relative to the centre axis of the structured light. The optimal angle of divergence may advantageously be selected in dependence of how close the projector device is adapted for being located relative to the tissue field.
The angle of divergence may advantageously be substantially rotationally symmetrical, whereby the structured light arrangement may be provided using relatively simple optical structures.
Where the angle of divergence is not fully rotationally symmetrical, it is desirable that the angle of divergence is at most two-fold symmetrical, i.e. one-fold or two-fold rotationally symmetrical. In an embodiment the computer system is configured for acquiring the angle of divergence from an operator via a user interface, by preprogramming and/or from a database. Thereby the angle of divergence may be tunable, e.g. by preprogramming or by operator intervention. For example, an operator may use a larger angle of divergence where the projector device is closer to the tissue field and a smaller angle of divergence where the projector device is further from the tissue field.
In an embodiment the angle of divergence is fixed. In an embodiment the angle of divergence is tunable according to a preprogrammed routine and/or by operator instructions.
In an embodiment the computer system is configured for determining the angle of divergence by a calibration. The calibration may for example comprise projecting the projected structured light beam from a preselected distance and toward a known surface area, recording the reflected structured light and determining the angle of divergence, such as projecting the projected structured light beam from a preselected distance and with its centre axis orthogonal to the known surface area.
The angle of divergence may for example be determined as the beam divergence, Θ:

Θ = 2·arctan((D2 − D1) / (2·L))

wherein D1 is the largest cross-sectional dimension orthogonal to the centre axis of the structured light beam as projected from the projector device or at a first distance from the projector device, D2 is the largest cross-sectional dimension orthogonal to the centre axis of the structured light at a second, larger distance from the projector, e.g. at a surface, and L is the distance between D1 and D2, wherein the distances are determined along the centre axis of the light pattern.
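Evaluating the formula, e.g. on two cross-sectional measurements taken during calibration (all values illustrative):

```python
import math

def beam_divergence_deg(d1, d2, L):
    """Full divergence angle in degrees from the largest cross-sectional
    dimensions d1 and d2 measured a distance L apart along the centre
    axis: theta = 2 * arctan((d2 - d1) / (2 * L))."""
    return math.degrees(2.0 * math.atan((d2 - d1) / (2.0 * L)))

print(beam_divergence_deg(10.0, 64.0, 50.0))   # ~56.7 degrees (full angle)
```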
Advantageously the data representing the angle of divergence for the or each projected structured light beam is included in the or each respective reference structured light data set(s). Thereby the reference structured light data set for each projected structured light beam will be known to the computer system and may be applied in the at least one determination of the tissue field.
The structured light beam may in practice have any kind of optically detectable structure. The wavelength(s), structure and intensity of the structured light beam are advantageously selected to be reflected from the tissue field, such as a field of soft tissue, such as a tissue field exposed by surgery.
In an embodiment the cross-sectional light structure of the projected structured light beam comprises optically distinguished areas, such as a pattern of areas of light and areas of no-light and/or areas of light of a first quality of a character and areas of light of a second quality of the character, wherein the character advantageously is selected from light intensity, wavelength and/or range of wavelengths.
The structured light beam may for example be a pattern of light of a certain wavelength range with intermediate areas of no light or areas of light with a narrower range of wavelengths. The pattern may e.g. be stripes, cross-hatched lines or any other lines or shapes.
The structured light beam may e.g. be provided by equipping the projector device with one or more optical filters and/or by a projector device comprising a diffractive optical element (DOE), a spatial light modulator, a multi-order diffractive lens, a holographic lens, a Fresnel lens, a mirror arrangement, a digital micromirror device (DMD) and/or a computer regulated optical element, such as a computer regulated mechanical optical element, e.g. a MEMS (micro-electro-mechanical systems) element. In an embodiment the structured light arrangement comprises light blocking element(s) that block parts of the light to form the structuring, or part of the structuring, of the structured light beam. The blocking elements may e.g. be blocking strips arranged on or forming part of the projector device. In an embodiment the structured light arrangement comprises a fiber optic probe comprising the projector device configured to project a structured light beam onto at least a section of the tissue field.
The fiber optic probe advantageously comprises a structured light generating and projecting device and a bundle of fibers guiding the structured light to the projector device for projecting the structured light beam onto at least a section of the tissue field. The structured light generating and projecting device will in the following be referred to as the structured light device.
The structured light device may be any device that can generate a suitable structured light.
Since the structured light device need not be close to the target field, i.e. it need not be inserted into a body cavity, the size of the structured light device is not very important.
The fibers of the fiber bundle each have a light receiving end and a light emitting end. Thus, the fiber bundle has a light receiving end and a light emitting end. The light receiving end of the fiber bundle is operatively coupled to the structured light device for receiving at least a part of the structured light from the structured light device. The operative coupling may include one or more lenses and/or objectives, e.g. for focusing the structured light to be received by the light receiving end of the fiber bundle.
Advantageously, the light emitting ends of the fibers are arranged in an encasing to thereby form a probe-head comprising the projector device. The probe-head may comprise one or more lenses for ensuring a desired projection of the structured light beam. The fiber bundle advantageously comprises at least 10 optical fibers, such as at least 50 optical fibers, such as from about 100 to about 2000 optical fibers, such as from about 200 to about 1000 optical fibers. The fibers of the fiber bundle may be identical or they may differ, e.g. comprising two or more types of optical fibers. This may be advantageous where the structured light comprises a structuring of different wavelengths.
In an embodiment the fibers of the fiber bundle are substantially identical. In an embodiment, fibers of the fiber bundle are partly or fully fused to ensure a fixed relative location of the fiber ends.
In an embodiment where the structured light arrangement is mounted to a minimally surgical instrument, the structured light arrangement and the minimally surgical instrument are advantageously in the form of a fiber optic probe instrument assembly as described below.
In an embodiment the structured light beam is provided as described in DK PA 2016 71005.
Advantageously the cross-sectional light structure comprises a symmetrical or an asymmetrical light pattern which may be repeating or non-repeating. In an embodiment the cross-sectional light structure is asymmetrical and has no symmetry plane. Thus, the risk of erroneous matching of light features may be reduced or even avoided.
The light pattern advantageously comprises a plurality of light dots, an arch shape, ring or semi-ring shaped lines, a plurality of angled lines, a coded structured light configuration or any combinations thereof, preferably the pattern comprises a grid of lines, a crosshatched pattern optionally comprising substantially parallel lines.
In an embodiment the light pattern comprises a bar code, such as a QR code.
In an embodiment the light features comprise local light fractions comprising at least one optically detectable attribute. The local light fractions may for example, independently of each other, each comprise an intensity attribute, a wavelength attribute, a geometric attribute or any combinations thereof. Preferably at least one of the attributes is a location attribute and/or an orientation attribute.
In an embodiment each local light fraction has a beam area fraction of up to about 25 % of the area of the cross-sectional light structure. Preferably each local light fraction has a beam area fraction of up to about 20 %, such as up to about 10 %, such as up to about 5 %, such as up to about 3 % of the area of the cross-sectional light structure. The area of the cross-sectional light structure of a light feature may overlap with, form part of or fully include the area of the cross-sectional light structure of another light feature.

To ensure a high accuracy of the orientation determination of the projector device relative to the reflected structured light features, it is generally desirable that at least some of the light features, and preferably at least some of the primary light features, have a centre-to-centre distance which is of a substantial size relative to the dimension of the cross-sectional light structure. In an embodiment the centre-to-centre distance between at least two of the light features is at least about 0.1 %, such as at least about 1 % of the maximal dimension of the cross-sectional light structure; preferably the centre-to-centre distance between at least two of the light features is at least about 10 %, such as at least about 25 %, such as at least about 50 % of the maximal dimension of the cross-sectional light structure. The centre-to-centre distance of light features is determined as the distance of the light features in the projected structured light beam. For example, the distance of two corner features may be determined as the 2D Euclidean distance between the corners, e.g. diametrically determined.

In an embodiment each of the light features is represented by a local light fraction of the cross-sectional light structure having an optically detectable attribute; preferably each light feature comprises a local and characteristic light fraction of the projected structured light. In an embodiment each of the light features, independently of each other, comprises a light fraction comprising two or more crossing lines, v-shaped lines, a single dot, a group of dots, a corner section, a pair of parallel lines, a circle or any combinations thereof and/or any other geometrical shape(s). Preferably each of the light features comprises at least one of a location attribute and an orientation attribute. The one or more light features advantageously comprise a combined location and orientation attribute.
The set of light features may be a predefined set of light features or the set of light features may be selected by the computer system e.g. by selecting light features which may be relatively simple to recognize by the computer system.
In an embodiment the set of light features comprises predefined light features, and the computer system is advantageously configured for searching for at least some of the predefined light features in the set of pixel data and preferably for recognizing the predefined light features if present and preferably without being distorted beyond a threshold.
Data representing the predefined light features may be transmitted to the computer system e.g. together with the reference structured light data set and/or the computer system may acquire the predefined light features from a database e.g. together with the reference structured light data set.
In an embodiment the computer system is configured for defining the set of light features from the reference structured light data set representing the projected structured light beam. The computer system may be configured for defining the light features of the set of light features as light features with attributes which make the light features relatively simple to be recognized from the set of pixel data of the acquired images. The computer system may be programmed to define the light features of the set of light features according to preferred attributes. The computer system is advantageously configured for searching for at least some of the defined light features in the received set of pixel data.
The primary light features may be preprogrammed in the computer system, or preferably the computer system is configured to select the primary light features from the recognized light features. The computer system may e.g. be configured for selecting the primary light features according to a set of selection rules, for example comprising selecting recognized light features having both an orientation attribute and a position attribute, selecting recognized light features which have a distance beyond a threshold distance to one or more other already selected recognized light features, selecting recognized light features representing corner segments of the cross-sectional light structure, selecting recognized light features representing square pattern segments of the cross-sectional light structure and/or combinations thereof. The selection of the primary light features may preferably be performed as the light features are recognized and continue until a sufficient number of primary light features have been selected. The sufficient number of primary light features may be determined by the computer system or it may be a predefined number programmed into the computer system and/or transmitted to the computer system. For example, the computer system may have a preprogrammed number as the sufficient number of primary light features, and the computer system may be configured to overwrite this number upon instruction of an operator to apply another number as the sufficient number of primary light features.
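A small sketch of such selection rules, assuming each recognized feature carries flags for its attributes (the feature representation and thresholds are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Feature:
    x: float
    y: float
    has_position: bool
    has_orientation: bool

def select_primary(features, min_dist=20.0, sufficient=8):
    """Select primary light features: require position and orientation
    attributes plus a minimum distance to already selected features;
    stop once a sufficient number has been selected."""
    primary = []
    for f in features:
        if not (f.has_position and f.has_orientation):
            continue
        if all((f.x - p.x) ** 2 + (f.y - p.y) ** 2 >= min_dist ** 2
               for p in primary):
            primary.append(f)
        if len(primary) >= sufficient:
            break
    return primary
```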
In an embodiment the primary light features include at least 3 primary light features arranged in a triangular configuration relative to each other.
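As a hedged illustration of such selection rules, the sketch below greedily picks primary light features that carry both attributes and keep a minimum centre-to-centre distance to already selected features; the data layout, field names and threshold values are assumptions made only for this example, not the disclosed implementation.

```python
# Illustrative sketch: greedy selection of primary light features according to
# a set of selection rules. Each recognized feature is assumed to be a dict
# with a 2D position and flags for its attributes (hypothetical layout).
import math

def select_primary_features(recognized, min_distance=20.0, required=3):
    """Pick features having both attributes while keeping a minimum
    centre-to-centre distance to already selected features."""
    primaries = []
    for f in recognized:
        # Rule 1: prefer features with both a location and an orientation attribute.
        if not (f.get("has_location") and f.get("has_orientation")):
            continue
        # Rule 2: keep a distance beyond the threshold to already selected features.
        if any(math.dist(f["pos"], p["pos"]) < min_distance for p in primaries):
            continue
        primaries.append(f)
        if len(primaries) >= required:
            break
    return primaries

features = [
    {"pos": (10, 12), "has_location": True, "has_orientation": True},
    {"pos": (14, 15), "has_location": True, "has_orientation": True},   # too close
    {"pos": (80, 40), "has_location": True, "has_orientation": True},
    {"pos": (45, 90), "has_location": True, "has_orientation": False},  # rejected
    {"pos": (120, 95), "has_location": True, "has_orientation": True},
]
print(select_primary_features(features))  # three well-separated features
```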
In an embodiment the computer system comprises an artificial intelligent processing system configured for selecting qualified light features.
The artificial intelligent processing system may be any artificial intelligent processing system suitable for selecting qualified light features - i.e. light features which comprise at least one position attribute and/or at least one orientation attribute. The artificial intelligent processing system is advantageously programmed with a light feature selecting algorithm, e.g. according to a set of thresholds as described above, and/or the artificial intelligent processing system may be trained to select qualified light features by being presented with a number of qualified light features (labeled as being qualified) and a number of non-qualified light features (labeled as being non-qualified) and, based on these presentations and labels, being trained to distinguish between qualified and non-qualified light features.
In an embodiment the artificial intelligent processing system comprises machine learning algorithms, such as machine learning algorithms for supervised deep learning and/or machine learning algorithms for non-supervised deep learning. In an embodiment the artificial intelligent processing system is configured for including pre-operation data and/or inter-operation data, e.g. from a cloud of data, in the selection of qualified light features.
Advantageously the computer system is configured to recognize the primary light features from the received pixel data.
In an embodiment the computer system is configured for selecting the primary light features from the recognized light features and the computer system comprises a primary light feature threshold for selecting light features qualified for representing primary light features; the primary light feature threshold preferably comprises a location attribute sub-threshold and an orientation attribute sub-threshold.
The primary light feature threshold may for example comprise a minimum intensity, a minimum identity probability, a minimum orientation probability, a minimum asymmetry, a minimum centre distance, etc. In an embodiment the primary light features are identified as light features comprising at least one of a location attribute and an orientation attribute. Preferably a plurality of the primary light features is identified as light features comprising both a location attribute and an orientation attribute. An orientation attribute is generally an attribute with a degree of asymmetry, preferably rotational asymmetry, more preferably at most two-fold symmetry and preferably one-fold symmetry. The asymmetry may be an asymmetry in light intensity, in geometrical shape, in wavelength and/or in range of wavelengths. Thereby the computer system may be configured for determining the orientation of the light feature in the cross-sectional plane of the structured light and thereby also the rotational orientation of the structured light relative to the projector device.
In an embodiment the location attribute is represented by a light point, such as the cross of crossing lines, the tip of v-shaped lines, the position of a constraint along a line, the amplitude of a wave-shaped line or the centre of a light dot, etc.
In an embodiment the orientation attribute is represented by one or more asymmetrical geometrical shapes, such as the orientation of the lines of crossing lines, the orientation of the lines of v-shaped lines, the orientation of the wave of a wave-shaped line or the orientation of imaginary lines between dots of a group of dots.
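One conceivable way to extract a location attribute (centroid) and an orientation attribute (principal axis) from a detected light fraction is via image moments, as sketched below; the patch layout and the moment-based orientation are illustrative assumptions, not the disclosed method.

```python
# Illustrative sketch: centroid and principal-axis angle of one light fraction
# computed from its pixel intensities using image moments (numpy only).
import numpy as np

def feature_location_and_orientation(patch):
    """Return centroid (x, y) and principal-axis angle in radians for an
    intensity patch containing a single light feature."""
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    m00 = patch.sum()
    cx = (xs * patch).sum() / m00          # location attribute
    cy = (ys * patch).sum() / m00
    mu20 = ((xs - cx) ** 2 * patch).sum() / m00
    mu02 = ((ys - cy) ** 2 * patch).sum() / m00
    mu11 = ((xs - cx) * (ys - cy) * patch).sum() / m00
    # Orientation of the principal axis; only meaningful for asymmetric features.
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cx, cy), theta

patch = np.zeros((9, 9))
patch[4, 1:8] = 1.0  # a short horizontal line feature
(cx, cy), theta = feature_location_and_orientation(patch)
print(cx, cy, np.degrees(theta))  # centre near (4, 4), angle near 0 degrees
```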
In an embodiment the computer system comprises an artificial intelligent processing system configured for selecting the primary light features from the recognized light features. The artificial intelligent processing system may for example be a trained artificial intelligent processing system which has been trained to select qualified light features e.g. according to the above
disclosure.
The image acquisition device may in principle be any image acquisition device capable of acquisition of digital images. Advantageously the image acquisition device comprises at least one image acquisition unit comprising a pixel sensor array, such as charge-coupled device (CCD) image sensor, or a complementary metal-oxide-semiconductor (CMOS) image sensor. Such image acquisition units are well known to the skilled person.
Preferably the image acquisition unit comprises an array of pixel sensors each comprising a photodetector, such as an avalanche photodiode (APD), a photomultiplier or a metal-semiconductor-metal (MSM) photodetector. Preferably the image acquisition unit comprises active pixel sensors (APS). Advantageously each pixel comprises an amplifier, which makes the operation of the image acquisition unit faster. More preferably the image acquisition unit comprises at least about 0.1 megapixels, such as at least about 1 megapixel, such as at least about 5 megapixels.
In an embodiment the image acquisition device comprises and/or is associated with an optical filter. The optical filter may for example be a wavelength filter and/or a polarization filter, for example a linear or a circular polarizer.
Advantageously the optical filter is arranged to provide that at least some of the light reflected from the tissue field is filtered prior to reaching the camera(s) of the image acquisition device.
Such an optical filter may be applied to further ensure that the pixel data of the frames are subjected to as little noise as possible.
In an embodiment the image acquisition device comprises and/or is associated with at least one linear polarization filter, and the 3D reconstruction system is configured for acquiring one or more frames of reflected light of the structured light beam from the tissue field, where the reflected light has been filtered by the at least one linear polarization filter such that light polarized in a first direction is blocked while the remaining light passes through. The 3D reconstruction system may further be configured for acquiring one or more other frames of reflected light of the structured light beam from the tissue field, where the reflected light has been filtered by the at least one linear polarization filter such that light polarized in a direction orthogonal to the first direction is blocked.
Light reflecting off a surface will tend to be polarized, with the direction of polarization (the way that the electric field vectors are pointing) being parallel to the plane of the interface.
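As a sketch of how two such orthogonally filtered acquisitions could be combined, assuming co-registered frames of the same scene; the simple difference image below, which suppresses the polarized specular component, is illustrative only.

```python
# A minimal sketch, assuming two co-registered frames captured through a
# linear polarizer at orthogonal orientations. Array names are illustrative.
import numpy as np

def polarization_difference(frame_0deg, frame_90deg):
    """Subtract orthogonally polarized frames to emphasize the polarized
    (surface-reflected) light component."""
    a = frame_0deg.astype(np.float64)
    b = frame_90deg.astype(np.float64)
    return a - b  # positive values: light polarized along the first direction

f0 = np.random.default_rng(0).random((4, 4))
f90 = np.random.default_rng(1).random((4, 4))
print(polarization_difference(f0, f90))
```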
Thus by filtering the reflected light in one or more polarization directions the computer system may acquire further data for performing a 3D reconstruction of a part of or the entire tissue field.

The frame rate may be selected in dependence on the intended use of the 3D reconstruction system and in particular in dependence on how fast it is intended to move the structured light arrangement and/or the image acquisition device. Advantageously the image acquisition unit has a frame rate of at least about 10 Hz, such as at least about 25 Hz, such as at least about 50 Hz, such as at least about 75 Hz. Where the image acquisition device comprises two or more image acquisition units, the image acquisition units are advantageously coordinated in time to acquire images simultaneously. Thus, for multi-camera image acquisition devices the two or more image acquisition units preferably have the same frame rate.
In an embodiment the frame rate is from about 10 to about 75 Hz.
In an embodiment the image acquisition unit has an even higher frame rate, such as up to about 300 Hz, such as up to about 500 Hz, such as up to about 1000 Hz.
Advantageously the structured light arrangement is configured for being pulsed, preferably having a pulse duration and a pulse frequency. The term "pulsed" is herein applied to mean that the structured light arrangement is projecting the structured light beam in pulses. The pulses may e.g. be provided by pulsing the light source, e.g. by a light source driver, and/or by using a shutter and/or any other means capable of switching the light on and off. The pulse duration is the time of one pulse of projected structured light. The time between pulses is referred to as the inter pulse time.
By providing the structured light arrangement to project the structured light beam in pulses, the 3D reconstruction system may perform other light based measurements in the inter pulse time between pulses, e.g. using one or more other light sources. In an embodiment the 3D reconstruction system comprises an imaging light arrangement for imaging the tissue field, and advantageously the imaging light arrangement is pulsed asynchronously relative to the structured light arrangement.
In an embodiment the structured light arrangement has a pulse duration, i.e. the temporal length of the pulse, which is from about half to about twice the inter pulse time between pulses, such as from about 0.01 to about 1.5 times the inter pulse time, such as from about 0.05 to about 1 times the inter pulse time, such as from about 0.1 to about 0.5 times the inter pulse time. The pulse duration and/or the pulse rate may advantageously be selectable by the surgeon, e.g. regulated by the computer system.
By having a relatively long inter pulse time, one or more additional measurements and/or determinations using one or more other projected light beams, e.g. additional structured light beams and/or beams comprising IR wavelengths, may be performed in the inter pulse time without disturbing the 3D determination of the tissue field by the 3D reconstruction system.
In an embodiment the structured light arrangement has a pulse rate adjusted relative to the frame rate of the image acquisition unit. The pulse rate (frequency) of the structured light arrangement is advantageously adjusted such that the acquisition of the frames comprising reflections of the structured light beam from the tissue field is performed when the structured light beam is on. When discussing frames in relation to frames obtained using a pulsed structured light arrangement it should be observed that the term "frame" may include background frames if the context allows. In an embodiment the structured light arrangement has a pulse rate adjusted relative to the frame rate of the image acquisition unit to provide that the 3D reconstruction system is configured for acquiring the plurality of frames comprising reflected structured light and for acquiring a plurality of background frames between pulses of the projected structured light. Thus the background frames are acquired in the inter pulse time, preferably while an optional illumination light is also shut off.
In an embodiment the structured light arrangement has a pulse rate adjusted relative to the frame rate of the image acquisition unit to provide that at least about half of the total acquired frames are frames comprising reflected structured light.
In an embodiment the structured light arrangement has a pulse rate adjusted relative to the frame rate of the image acquisition unit to provide that the image acquisition unit acquires one or more background frames during the inter pulse time. Preferably the structured light arrangement has a pulse rate adjusted relative to the frame rate of the image acquisition unit to provide that the image acquisition unit acquires every 2nd, 3rd, 4th, 5th or 6th frame as a background frame during the inter pulse time and preferably the remaining frames during the pulse time when the structured light beam is on.
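A minimal sketch of such a pulse/frame schedule, assuming an every-n-th-frame background scheme; the scheme and the numbers below are assumptions for illustration only.

```python
# Hypothetical scheduling sketch: every n-th frame falls in the inter pulse
# time and is acquired as a background frame (structured light off).
def classify_frames(num_frames, background_every=4):
    """Label frame indices as structured-light frames or background frames."""
    schedule = []
    for i in range(num_frames):
        kind = "background" if i % background_every == background_every - 1 else "structured"
        schedule.append((i, kind))
    return schedule

for idx, kind in classify_frames(8, background_every=4):
    print(idx, kind)
# frames 3 and 7 are background frames acquired in the inter pulse time
```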
The structured light arrangement may advantageously have a structured light controller for adjusting the pulse rate. The structured light controller may also be configured for controlling the pulse length of the structured light beam. The structured light controller is preferably in data communication with the computer system or it forms part of the computer system. Thus the computer system may advantageously control both pulse rate and frame rate and be configured for the timing control thereof.
In an embodiment the computer system is configured for subtracting pixel values of one or more background frames from the respective sets of pixel data to thereby reduce noise as indicated above. Preferably the computer system is configured for subtracting pixel values of the temporally nearest background frame from each of the respective sets of pixel data. This noise reduction is preferably performed prior to recognizing a plurality of the light features of the set of light features from the set of pixel data. As mentioned above, the time attribute may be a relative time attribute or an actual time attribute or a combination thereof.
For example a first set of pixel data may be associated with an actual time attribute and subsequent sets of pixel data may be associated with relative time attributes, which are relative with respect to the actual time attribute of the first set of pixel data.
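A minimal sketch of this noise reduction step, assuming frames are stored as (time attribute, pixel array) pairs; the data layout and values are assumptions for the example.

```python
# Illustrative sketch: subtract the temporally nearest background frame from
# each structured-light frame, matching on time attributes.
import numpy as np

def subtract_nearest_background(structured, backgrounds):
    """structured/backgrounds: lists of (time, frame) pairs. Returns
    noise-reduced frames with negative values clipped to zero."""
    out = []
    for t, frame in structured:
        bt, bg = min(backgrounds, key=lambda pair: abs(pair[0] - t))
        out.append((t, np.clip(frame.astype(np.float64) - bg, 0, None)))
    return out

rng = np.random.default_rng(42)
bgs = [(0.00, rng.random((4, 4)) * 0.1), (0.08, rng.random((4, 4)) * 0.1)]
frames = [(0.02, rng.random((4, 4))), (0.06, rng.random((4, 4)))]
print(subtract_nearest_background(frames, bgs)[0][1])
```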
The computer system is advantageously configured for communicating with the image acquisition device and preferably the structured light arrangement by wire or wirelessly. The computer system preferably comprises at least one processor, at least one memory, and at least one user interface, such as a graphical interface, a command line interface, an audible interface, a touch based interface, a holographic interface or any other user interface. The user interface advantageously comprises at least a display/monitor (a screen, e.g. a touch screen) and/or a printer.
For optimizing the determination of the tissue field the computer system may advantageously be configured for receiving patient data via a user interface and/or for acquiring patient data from a database. The patient data may for example comprise data representing the tissue field e.g. at another point in time and/or data that represent a similar tissue field, e.g. a tissue field from a patient of similar age and/or gender or from a group of similar patients. In an embodiment the patient data comprise pre-operation data (data obtained before starting a procedure, e.g. a surgical procedure) and/or inter-operation data (data obtained during a procedure, e.g. a surgical procedure), such as data obtained by a scanning or other measuring method, such as a CT scanning, an MR scanning, an ultrasound scanning, a fluorescence imaging and/or a PET scanning, and/or such data estimated and/or calculated for groups of patients. Examples of suitable pre-operation and/or inter-operation data comprise for example data representing measurements and/or estimations obtained by the methods described in the review article "Novel methods for mapping the cavernous nerves during radical prostatectomy" by Fried, N. M. & Burnett, A. L., Nat. Rev. Urol. 12, 451-460 (2015); published online 10 August 2015; doi:10.1038/nrurol.2015.174.
Fluorescence imaging has been shown to be a helpful tool during or prior to surgery, e.g. for improved identification for repair of damaged tissues. Further information about fluorescence imaging for surgical guidance is for example disclosed in "Fluorescence Imaging in Surgery" by Ryan K. Orosco et al., IEEE Rev Biomed Eng. 2013; 6: 178-187 (published online 2013 Jan 15, doi: 10.1109/RBME.2013.2240294, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3762450/).

In an embodiment the computer system is configured for applying the patient data for validating pixel data and/or for repairing incomplete pixel data.
The reference structured light data set may be loaded to the computer system by any method. The reference structured light data set may e.g. be transmitted to the computer system by an operator and/or the computer system may be configured for acquiring the reference structured light data set from a database, e.g. in response to an instruction from an operator and/or based on a code included in the structured light beam, such as an optically detectable code. Such a reference structured light database, comprising a plurality of reference structured light data sets each linked to a unique code, may e.g. form part of the 3D reconstruction system. In an embodiment the computer system may switch to projecting another structured light beam and simultaneously apply the reference structured light data set associated with this other structured light beam.
In an embodiment the computer system is configured for receiving and storing reference structured light data set representing the structured light beam including the set of light features. The computer system is preferably configured for receiving the reference structured light data set via a
calibration step or via a user interface. The computer system may be configured for using the reference structured light data set for recognizing the light features from pixel data and preferably for identifying and/or selecting the primary features.
The calibration is preferably performed by arranging the projector of the structured light arrangement and the image acquisition unit(s) of the image acquisition device in predefined spatial positions relative to a plane surface, projecting the structured light to impinge on the plane surface and acquiring the reflected light by the image acquisition device. Thereby the computer system may also determine and store data representing the angle of divergence of the structured light beam.
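A minimal sketch of the plane-based calibration geometry, assuming known 2D pattern coordinates and their observed image positions on the plane surface; the homography estimate below uses the standard normalized DLT, and all point values are illustrative, not calibration data from this disclosure.

```python
# Illustrative calibration sketch: estimate the homography relating projected
# pattern coordinates to observed image coordinates on a plane (numpy only).
import numpy as np

def estimate_homography(src, dst):
    """src, dst: (N, 2) arrays of corresponding points, N >= 4 (DLT method)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
H_true = np.array([[1.2, 0.1, 5], [0.0, 0.9, 3], [0.001, 0.0, 1]])
pts = np.column_stack([src, np.ones(4)]) @ H_true.T
dst = pts[:, :2] / pts[:, 2:]
print(np.round(estimate_homography(src, dst), 3))  # recovers H_true up to scale
```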
The computer system may advantageously be configured for matching the primary features and corresponding features of the reference structured light data set using homographical matching principles, e.g. involving trigonometric algorithms and/or epipolar geometric algorithms.
Advantageously the computer system is configured for estimating the spatial position of the projector device based on the matches between the primary features and corresponding features of the reference structured light data set.
The computer system may be configured for performing the at least one determination of the tissue field based on the spatial position of the projector device and the recognized light features by using trigonometric algorithms, e.g. to determine surface topography e.g. comprising height differences and/or other surface shapes.
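As an illustration of such trigonometric algorithms, the sketch below triangulates a 3D surface point from one matched light feature, assuming the relative pose of the projector device and the camera is known; the midpoint method and all numeric values are assumptions made for this example, not the disclosed algorithm.

```python
# Hedged sketch: each matched light feature gives a projector ray and a camera
# ray whose (approximate) intersection is a 3D point of the tissue surface.
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Closest point between rays o1 + t*d1 and o2 + s*d2 (midpoint method)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve [d1, -d2] [t, s]^T = o2 - o1 in least squares.
    A = np.column_stack([d1, -d2])
    t, s = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return (o1 + t * d1 + o2 + s * d2) / 2

cam_origin = np.zeros(3)
proj_origin = np.array([50.0, 0.0, 0.0])        # 50 mm baseline (assumed)
surface_point = np.array([20.0, 10.0, 120.0])   # ground truth for the test
p = triangulate_midpoint(cam_origin, surface_point - cam_origin,
                         proj_origin, surface_point - proj_origin)
print(np.round(p, 6))  # recovers the surface point; distance = ||p - proj_origin||
```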
Advantageously the computer system is configured for performing the at least one determination of the tissue field, wherein the at least one determination comprises a distance between the projector device and the tissue field, a 3D structure of at least a part of the tissue field and/or a size determination of at least a part of the tissue field, such as a size and/or a volume of an organ section.
In an embodiment the at least one determination of the tissue field comprises a determination of a position of a nerve and/or a vein e.g. relative to a tool and/or to the projector.
In an embodiment the at least one determination of the tissue field comprises a volume determination e.g. of a cancer knot.
The determination of the tissue field is advantageously a determination in 3D space - i.e. an actual determination not limited by a point of view. Thus, for example a determination of the tissue field may advantageously comprise an actual distance between the projector device and the tissue field (not limited to a view direction, such as a view from the image acquisition device), e.g. to the closest point of the tissue field and/or a point of the tissue field selected by an operator. Thus where the projector device is fixed to an instrument, such as a minimally invasive surgical instrument for surgical and/or diagnostic use, the instrument may be moved with very high accuracy relative to the tissue field.
The computer system may be configured for performing the at least one determination of the tissue field based on pixel data having corresponding time attributes.
Where the data with corresponding time attributes are not sufficient, the computer system may be configured to supplement them with data from set(s) of pixel data having time attribute(s) within e.g. up to 1 s of the data in question, such as within 0.5 s, such as within 0.1 s of the data in question.
In an embodiment the computer system is configured for performing the at least one determination of the tissue field based on pixel data having two or more different time attributes.
The computer system may advantageously be configured to display the at least one determination of the tissue field on a display, such as a screen e.g. continuously in real time and/or upon request from an operator.
The projector of the structured light arrangement and the image acquisition device may be fixed relative to each other or they may be independent. In an embodiment the projector of the structured light arrangement and the image acquisition device are fixed to each other with an angle of up to about 45 degrees.
In an embodiment the projector of the structured light arrangement and the image acquisition device are independently movable.
In an embodiment the 3D reconstruction system comprises a sensor arrangement for determining the position and orientation of the image acquisition device e.g. relative to the projector device and/or relative to a robot. In an embodiment the computer is configured for calibrating the position and orientation of the image acquisition device relative to the projector device.
The computer system may thereby acquire data representing the position and orientation of the image acquisition device relative to the projector device. The computer system may be configured for further refining the estimation of the spatial position of the projector device relative to the tissue field and/or the determination of the tissue field in 3D space. In an embodiment the 3D reconstruction system comprises a sensor arrangement for determining the position and orientation of the projector device e.g. relative to the image acquisition device and/or relative to a robot.
Thereby the estimation of the spatial position of the projector device relative to the tissue field may be further improved.
The computer system is preferably configured for repeating the above described estimation of the spatial position of the projector device relative to the tissue field and/or the determination of the tissue field in real time as the computer system acquires the sets of pixel data representing the consecutive images acquired by the image acquisition device, to thereby provide a real time determination of the tissue field.
It has been found that the 3D reconstruction system may be applied in surgery to expose pulsating areas of the tissue field and thus warn a surgeon to avoid accidentally cutting into an artery or similar. Also accidents caused by other patient movements may be avoided by using the 3D reconstruction system. It has been found that the 3D reconstruction system may be applied for determining changes of the tissue field in real time, such as movements caused by patient movements, e.g. local movements, such as peristaltic movements causing movements of the tissue field. Thus, the 3D reconstruction system may in particular be beneficial for use in surgery, e.g. to ensure accurate cutting, avoid undesired damage of tissue and shorten the time of operation.
In an embodiment the at least one determination of the tissue field comprises determining a local movement of the tissue field, such as a pulsating movement and/or a movement caused by manipulation of the tissue e.g. caused by an instrument, such as a laparoscope.
In an embodiment the computer system of the 3D reconstruction system is adapted for controlling movement of one or more instruments. In an embodiment the computer system of the 3D reconstruction system is adapted for controlling movement of a robot, e.g. a robotic surgeon, preferably connected to or forming part of (integrated with) the 3D reconstruction system. The 3D reconstruction system ensures that the operator has a high 3D perception including a perception of sizes and distances. The operator may use the 3D reconstruction system for performing volumetric determinations of selected tissue parts, such as nodules, protuberances and thickened tissue parts.
In an embodiment the computer system is configured for repeating in real time the determination of the tissue field for consecutive sets of pixel data of the received frames.
The structured light beam may advantageously be substantially constant for each determination of the tissue field.
In some use of the 3D reconstruction system it may be desired to change the angle of divergence of the structured light beam and/or to change the structure of the structured light beam.
Thus, in an embodiment the structured light beam is changed from one determination of the tissue field to the next determination of the tissue field. In an embodiment the computer system may be configured for changing the structured light beam, e.g. upon receipt of an instruction from an operator.
In an embodiment the computer system is configured for running a routine in real time comprising repeating steps i-iii:
i. calculating a spatial position of the projector device from one or more sets of pixel data having corresponding and/or subsequent time attribute(s),
ii. calculating the determination of the tissue field based on the spatial position of the projector device and the light features recognized from pixel data having corresponding and/or subsequent time attribute(s),
iii. displaying the determination of the tissue field.
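Purely as an illustration, the routine of steps i-iii could be organized as the following loop; every function name and the frame source are stand-ins for this sketch, not part of the disclosed system.

```python
# Skeleton of the repeating routine i-iii, with the heavy steps stubbed out.
def run_realtime_routine(frame_source, estimate_pose, determine_tissue, display,
                         max_iterations=100):
    """Repeat: (i) pose from pixel data, (ii) tissue determination, (iii) display."""
    for _ in range(max_iterations):
        pixel_sets = frame_source()   # sets with corresponding time attribute
        if pixel_sets is None:
            break
        pose = estimate_pose(pixel_sets)                     # step i
        determination = determine_tissue(pose, pixel_sets)   # step ii
        display(determination)                               # step iii

# Tiny dry run with stand-in callables:
frames = iter([{"t": 0.0}, {"t": 0.04}, None])
run_realtime_routine(lambda: next(frames),
                     estimate_pose=lambda s: "pose@%.2fs" % s["t"],
                     determine_tissue=lambda p, s: (p, "surface model"),
                     display=print)
```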
In an embodiment the computer system is configured for running a routine in real time comprising repeating steps i-iii:
i. calculating a spatial position of the projector device from one or more sets of pixel data having corresponding time attribute,
ii. calculating the determination of the tissue field based on the spatial position of the projector device and the light features recognized from pixel data having corresponding time attribute,
iii. displaying the determination of the tissue field.

The 3D reconstruction system or parts thereof may be mounted to or incorporated in one or more surgical and/or diagnostic instruments for optimizing movements of such instruments during a diagnostic procedure and/or a surgical procedure.
In an embodiment at least the projector device of the structured light arrangement is mounted to or integrated with a minimally surgical instrument, such as a minimally surgical instrument selected from a penetrator, an endoscope, an ultrasound transducer instrument or an invasive surgical instrument comprising a grasper, a suture grasper, a stapler, forceps, a dissector, hooks, scissors, a suction instrument, a clamp instrument, an electrode, a curette, ablators, scalpels, a biopsy and retractor instrument, a trocar or any combination comprising one or more of the abovementioned.
In an embodiment at least the image acquisition unit(s) of the image acquisition device is mounted to or integrated with a minimally surgical instrument, such as a minimally surgical instrument selected from a penetrator, an endoscope, an ultrasound transducer instrument or an invasive surgical instrument comprising a grasper, a suture grasper, a stapler, forceps, a dissector, hooks, scissors, a suction instrument, a clamp instrument, an electrode, a curette, ablators, scalpels, a biopsy and retractor instrument, a trocar or any combination comprising one or more of the abovementioned. In an embodiment at least the image acquisition unit(s) of the image acquisition device and the projector device of the structured light arrangement are mounted to or integrated with a minimally surgical instrument.
In an embodiment at least the projector device of the structured light arrangement and at least the image acquisition unit(s) of the image acquisition device are mounted to or integrated with separate minimally surgical instruments, such as minimally surgical instruments independently of each other selected from a penetrator, an endoscope, an ultrasound transducer instrument or an invasive surgical instrument comprising a grasper, a suture grasper, a stapler, forceps, a dissector, hooks, scissors, a suction instrument, a clamp instrument, an electrode, a curette, ablators, scalpels, a biopsy and retractor instrument, a trocar or any combination comprising one or more of the abovementioned. The minimally surgical instrument may be a rigid or a bendable minimally surgical instrument. In an embodiment the minimally surgical instrument comprises an articulating length section, e.g. a distal length section.
In an embodiment where at least the projector device of the structured light arrangement is mounted to or integrated with an ultrasound transducer instrument, the structured light arrangement and the ultrasound transducer instrument is preferably in the form of a structured light ultrasound instrument as described below.
In an embodiment the structured light arrangement may be as the projector probe disclosed in the co-pending application DK PA 2016 71005. Advantageously the structured light arrangement comprises a light source optically connected to the projector device. The light source may in principle be any kind of light source. The light source may be a coherent light source or an incoherent light source. Examples of light sources include a semiconductor light source, such as a laser diode and/or a VCSEL light source, as well as any kind of laser sources including narrow bandwidth sources and broadband sources. Preferably the light source comprises a laser light source, such as a laser emitting diode or a fibre laser.
The determination of light, including wavelengths, bandwidth, shape and similar is based on full width at half maximum (FWHM) determination unless otherwise specified or clear from the context. In an embodiment the light source is a fibre laser and/or a semiconductor laser, the light source preferably comprises a VCSEL or a light emitting diode (LED).
In an embodiment the light source is adapted for emitting modulated light, such as pulsed or continuous-wave (CW) modulated light, preferably with a frequency of at least about 200 Hz, such as at least about 100 kHz, such as at least about 1 MHz, such as at least about 20 MHz, such as up to about 200 MHz or more.
The wavelength or wavelengths may in principle comprise any wavelengths, such as from low UV light to high IR light, e.g. up to 3 μm or larger. Thus, in an embodiment the wavelength(s) of the light source for forming the structured light beam is invisible to the human eye.
In an embodiment the light source is configured for emitting at least one electromagnetic wavelength within the UV range of from about 10 nm to about 400 nm, such as from about 200 to about 400 nm. In an embodiment the light source is configured for emitting at least one electromagnetic wavelength within the visible range of from about 400 nm to about 700 nm, such as from about 500 to about 600 nm. In an embodiment the light source is configured for emitting at least one electromagnetic wavelength within the IR range of from about 700 nm to about 1 mm, such as from about 800 to about 2500 nm.
The bandwidth of the light source may be narrow or wide; however, often it is desired to use a relatively narrow bandwidth for cost reasons and optionally for allowing distinguishing between light emitted from or projected from different elements, e.g. from a 3D reconstruction system of an embodiment of the invention and from an endoscope. In an embodiment the light source has a bandwidth of up to about 50 nm, such as from 1 nm to about 40 nm.
In an embodiment the light source has a bandwidth which is larger than about 50 nm, such as a supercontinuum bandwidth spanning over at least about 100 nm, such as at least about 500 nm. The structured light arrangement may comprise two or more light sources, such as two LEDs having different wavelengths.
In an embodiment the 3D reconstruction system comprises two or more structured light arrangements. The two or more structured light arrangements may be adapted to operate simultaneously, independently of each other or asynchronous. In an embodiment the two or more structured light arrangements are adapted to operate independently of each other.
In an embodiment the two or more structured light arrangements are adapted to operate asynchronous.
The two or more structured light arrangements may comprise respective light sources that differs from each other e.g. with respect to intensity and/or wavelength(s).
In an embodiment at least one light source of the 3D reconstruction system is an IR (infrared) light containing light source comprising light waves in the interval of from about 0.7 μm to about 4 μm, such as below 2 μm. It has been found that using IR light may provide a very effective system for determining sub tissue surface structures, such as a vein. By identifying the subsurface position of for example a vein or another critical structure the surgeon may ensure not to damage such a structure, e.g. during a surgical intervention at the tissue field.
In an embodiment the two or more light sources are pulsed asynchronously, preferably such that they do not have temporally overlapping pulse durations.
In an embodiment the computer system is configured for determining one or more properties of a target site in the tissue field based on the wavelength of light reflected from the target site. The computer system may comprise or be in communication with a spectroscope, such as a digital spectroscope, for recognizing wavelengths in the reflected light. Advantageously the spectroscope is an IR spectroscope. The spectroscope may e.g. form part of the image acquisition device or it may be an independent spectroscope, such as a spectroscope comprising an IR transmitter and a spectroscopic sensor.
By analyzing the reflected light, certain properties of the tissue may be determined. This can for example be the oxygen level in the tissue and changes thereof, and the type of tissue. For example the reflected light can be used to determine what kind of organ the tissue is part of, thereby indicating to the surgeon which organs are which and assisting the surgeon in locating an area of interest.
In an embodiment the computer system is adapted to determine the oxygen level of a tissue site, changes thereof and the type of tissue at the tissue site, where the tissue site may be the entire tissue field, an organ at the tissue field, a section of the tissue field and/or a tissue structure or another structure at a preselected depth of the tissue site, such as a sub tissue surface vein. The tissue site may e.g. be a target site for the surgeon.
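As a purely illustrative sketch of how a wavelength-based property determination could look, the example below forms a two-wavelength reflectance ratio in the spirit of oximetry; the wavelengths, the threshold and the mapping are uncalibrated assumptions, not values from this disclosure.

```python
# Hypothetical two-wavelength reflectance ratio; higher absorption at 660 nm
# by deoxygenated blood lowers the reflected red component relative to IR.
def oxygenation_index(reflectance_660nm, reflectance_940nm):
    """Ratio of reflectances at two wavelengths (illustrative only)."""
    return reflectance_660nm / reflectance_940nm

def classify_tissue(ratio, threshold=0.8):
    # Illustrative threshold only; a real system would use calibrated spectra.
    return "well oxygenated" if ratio >= threshold else "poorly oxygenated"

r = oxygenation_index(0.55, 0.60)
print(round(r, 3), classify_tissue(r))
```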
The at least one light source may preferably be wavelength tunable. The wavelength(s) of the light source may for example be selectable by the computer system and/or the surgeon. In an embodiment the wavelength(s) of the light source is selectable based on a feedback signal from the computer system.
In an embodiment the computer system is configured for determining a boundary about a target site having at least one property different from the tissue surrounding the target site. The computer system may be configured for determining a size of the target site based on the determined boundary, such as a periphery, an area or preferably a volume.
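As a minimal illustration of a size determination from a determined boundary, the shoelace formula below computes the enclosed area of a boundary polygon; extending this to a volume would integrate the 3D surface over the boundary. The polygon and units are made up for the example.

```python
# Illustrative sketch: area enclosed by a closed 2D boundary polygon.
def polygon_area(boundary):
    """Shoelace formula for a boundary given as a list of (x, y) vertices."""
    n = len(boundary)
    s = 0.0
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

target_boundary = [(0, 0), (4, 0), (4, 3), (0, 3)]  # a 4 x 3 mm rectangle
print(polygon_area(target_boundary))  # 12.0 (mm^2)
```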
To increase the field of view further while still maintaining a very high resolution, it has been found to be very advantageous to stitch frames together, e.g. as known from the art of photo stitching, e.g. using the stitching described in US2016188992, US2016247325, US2015371420 and/or US2015172620.
In an embodiment the computer system is configured for performing frame stitching comprising stitching at least two sets of pixel data of the frames comprising reflections of the structured light beam from the tissue field. The stitched set of pixel data preferably comprises a stitched image data set representing a larger tissue field than each set of pixel data.
In an embodiment the frame stitching comprises stitching sets of pixel data associated with different time attributes. The different time attributes are preferably consecutive time attributes. In an embodiment the computer system is configured for continuously stitching the frames received in real time to the stitched image data set. The computer system may be configured for unstitching and/or removing pixel data, e.g. by removing pixel data having a time attribute older than a preselected time and/or by removing pixel data from the stitched image data set where the pixel data represents a site of the larger tissue field having a distance to a target site and/or a centre site which is larger than a preselected distance.
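As an illustration of unstitching by time attribute, assuming a flat (time, x, y, value) record layout for the stitched image data set; the layout, ages and values are made up for this sketch.

```python
# Illustrative sketch: append newly received pixel data and unstitch (remove)
# records whose time attribute is older than a preselected age.
def update_stitched_set(stitched, new_pixels, now, max_age=5.0):
    """Append new pixel records and drop records older than max_age seconds."""
    stitched.extend(new_pixels)
    return [rec for rec in stitched if now - rec[0] <= max_age]

stitched = [(0.0, 10, 10, 0.8), (2.0, 11, 10, 0.7)]
stitched = update_stitched_set(stitched, [(6.0, 12, 10, 0.9)], now=6.0)
print(stitched)  # the record with time attribute 0.0 has been unstitched
```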
In an embodiment the 3D reconstruction system is configured for performing a plurality of topological determinations of the tissue field and the computer system is configured for performing topological stitching comprising stitching at least two of the topological determinations, such as from 3 to 100 of the topological determinations, such as from 5 to 50 of the topological
determinations.
The topological determination may e.g. comprise determining a plurality of points of the tissue field in 3D, for example comprising the spatial relation between the points to obtain a point cloud, and the topological stitching may comprise stitching point clouds of the topological determinations to a super point cloud comprising the point clouds spatially combined with each other to represent a larger and/or refined topological determination of the tissue field. The computer system may be configured to perform further 3D determinations, such as volume determinations from the super point cloud.
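A hedged sketch of topological stitching into a super point cloud, assuming each determination yields a point cloud with a known rigid transform into a common frame (as would follow from the estimated projector/camera poses); the voxel deduplication is one possible refinement strategy, not the disclosed one.

```python
# Illustrative sketch: transform point clouds into a common frame, merge them,
# and keep one point per occupied voxel cell to remove duplicates.
import numpy as np

def stitch_point_clouds(clouds_with_poses, voxel=1.0):
    """clouds_with_poses: list of (points (N,3), R (3,3), t (3,)) tuples."""
    merged = np.vstack([pts @ R.T + t for pts, R, t in clouds_with_poses])
    keys = np.floor(merged / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(idx)]

I3 = np.eye(3)
cloud_a = np.array([[0, 0, 100], [5, 0, 100]], float)
cloud_b = np.array([[5, 0, 100], [10, 0, 101]], float)  # overlaps cloud_a
super_cloud = stitch_point_clouds([(cloud_a, I3, np.zeros(3)),
                                   (cloud_b, I3, np.zeros(3))])
print(super_cloud)  # three unique points after voxel deduplication
```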
Advantageously the computer system is configured for performing topological stitching of a plurality of topological determinations obtained from consecutively acquired frames, preferably such that the frames comprise frames obtained with the projector and/or the image acquisition device at different positions and/or angles relative to the tissue field.
The invention also comprises a method for performing a determination of a tissue field. The method comprises
• projecting a structured light beam comprising a set of recognizable light features from a projector device onto at least a section of the tissue field,
• acquiring frames comprising reflections of the structured light beam from the tissue field,
• recognizing a plurality of the light features including a plurality of primary light features from the reflections of the structured light beam,
• matching the recognized primary light features with corresponding light features of the projected structured light beam,
• based on the matches, estimating the spatial position of the projector device relative to at least a part of the tissue field, and
• performing at least one determination of the tissue field based on the spatial position of the projector device and the recognized light features.
Preferred embodiments of the method comprise the methods that the computer system of the 3D reconstruction system is configured to perform as described above.
The invention also comprises a robot comprising the 3D reconstruction system as described above.
The robot is advantageously a surgery robot, e.g. for performing minimally invasive surgery. Advantageously the projector device of the structured light arrangement and the image acquisition device are disposed on individually movable arms of the robot. The projector device and/or the image acquisition device may e.g. be disposed on respective surgical instruments, e.g. held by one of the individually movable arms of the robot. The term "disposed", as used in the explanation that the projector device and the image acquisition device are disposed on respective arms of the robot, means that the projector device/image acquisition device may be integrated with, mounted to, held by, inserted through or in another way engaged with the robot arm(s). In an embodiment the robot has a controller processing system which comprises or is in data communication with the computer system of the 3D reconstruction system. Advantageously the computer system comprises a feedback algorithm for controlling movements of at least one of the individually movable arms of the robot in response to the determinations of the tissue field.
For example the 3D reconstruction system may comprise a sensor
arrangement for determining the position and orientation of the projector device relative to the image acquisition device and/or relative to a location of the robot or a part of the robot e.g. as described above.
In an embodiment the 3D reconstruction system comprises a sensor arrangement for determining the position and orientation of the image acquisition device relative to the projector device and/or relative to a location of the robot e.g. a location of a robot arm.
The robot may e.g. be as described in WO16057980, WO13116869 and/or in US213030571, with the difference that the robot comprises the 3D reconstruction system.
In an embodiment the robot comprises two or more robot arms, e.g. as a robot of the ALF-X system or the SurgiBot system as marketed and disclosed by TransEnterix, Inc., where the projector device of the structured light arrangement and the image acquisition device are disposed on individually movable arms. The ALF-X System robot has been granted a CE Mark in Europe for use in abdominal and pelvic surgery and comprises a multi-port robotic surgery robot which allows up to four arms to control robotic instruments and a camera. The projector device and the image acquisition device may be disposed on any of these robot arms. The SurgiBot System robot is a single-incision, patient-side robotic-assisted surgery system and comprises a robot with a number of flexible, articulating robot arms held together in a single collar for insertion of instruments through the articulated robot arms through a single incision, for thereafter introducing the robot arms with instruments through the collar for performing minimally invasive surgery within a cavity.

All features of the inventions and embodiments of the invention as described herein, including ranges and preferred ranges, may be combined in various ways within the scope of the invention, unless there are specific reasons not to combine such features.

The invention also relates to a fiber optic probe instrument assembly suitable for use in minimally invasive surgery. The fiber optic probe instrument assembly comprises a fiber optic probe and a minimally surgical instrument.
The minimally surgical instrument may for example be as described above.
In an embodiment the minimally surgical instrument is advantageously selected from a penetrator, an endoscope, an ultrasound transducer instrument or an invasive surgical instrument comprising a grasper, a suture grasper, a stapler, forceps, a dissector, hooks, scissors, a suction instrument, a clamp instrument, an electrode, a curette, ablators, scalpels, a biopsy and retractor instrument, a trocar or any combination comprising one or more of the abovementioned minimally surgical instruments.
The fiber optic probe comprises a structured light generating and projecting device (generally called structured light device), a bundle of optical fibers and a projector device. The structured light device is configured for generating a structured light. The structured light device may in principle have any size, because the structured light device is not adapted to be near the surgical site e.g. it is not adapted for being inserted into any natural or artificial cavities of a human or animal patient subjected to surgery.
The structured light generated by the structured light device may e.g. be as described above. It should be observed that the structured light generated by the structured light device may have a relatively large cross-sectional area compared to the cross-sectional area of the structured light delivered to and emitted by the projector device as the structured light beam. Advantageously the structured light generated by the structured light device is generated by a pixel based image projector, where each fiber input end is arranged to receive light/no light from one or more pixels. Thereby the pattern may be a dynamic pattern which may be changed dynamically or in desired steps.
The structured light generated by the structured light device may for example include a structure of wavelength variations and/or intensity variation over the cross-section of the structured light.
In an embodiment the cross-sectional light structure of the structured light generated by the structured light device comprises optically distinguished areas, such as a pattern of areas of light and areas of no-light and/or areas of light of a first quality of a character and areas of light of a second quality of the character, wherein the character advantageously is selected from light intensity, wavelength and/or range of wavelengths. The structured light generated by the structured light device may for example be a pattern of light of a certain wavelength range with intermediate areas of no light or areas of light with a narrower range of wavelengths. The pattern may e.g. be stripes, cross hatched lines or any other lines, or shapes.
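A sketch of how a pixel based image projector might generate such a cross-sectional light structure: stripes of light and no-light with an embedded corner feature. The pattern layout is an assumption chosen only to show the idea, not a disclosed pattern.

```python
# Illustrative pattern generation for a pixel based image projector.
import numpy as np

def make_structured_pattern(h=64, w=64, stripe_period=8):
    pattern = np.zeros((h, w), dtype=np.uint8)
    # Vertical stripes: alternating areas of light and areas of no-light.
    for x in range(0, w, stripe_period):
        pattern[:, x:x + stripe_period // 2] = 255
    # Embed an L-shaped corner feature (combined location/orientation attribute).
    pattern[4:6, 4:16] = 255
    pattern[4:16, 4:6] = 255
    return pattern

p = make_structured_pattern()
print(p.shape, int(p.mean()))  # roughly half the pixels lit
```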
The bundle of fibers has a light receiving end and a light emitting end and is arranged for receiving at least a portion of the structured light from the structured light generating device at its light receiving end and for delivering at least a portion of the light to the projector device.
The fiber bundle advantageously comprises at least 10 optical fibers, such as at least 50 optical fibers, such as from about 100 to about 2000 optical fibers, such as from about 200 to about 1000 optical fibers.
The optical fibers are advantageously very thin and closely packed, such that the total cross-sectional area of the fiber bundle, at least at a portion of its length nearest the light emitting end, is sufficiently narrow for being inserted into any natural or artificial cavities of a human or animal patient subjected to surgery. Advantageously the total cross-sectional area of the fiber bundle corresponds to or is smaller than the projecting area of the projector device.
The fibers of the fiber bundle may be identical or they may differ, e.g.
comprising two or more types of optical fibers. This may be advantageous where the structured light comprises a structuring of different wavelengths.
In an embodiment the fibers of the fiber bundle are substantially identical.
In an embodiment, fibers of the fiber bundle are partly or fully fused along at least a portion of their length nearest the light emitting end to ensure a fixed relative location of the fiber ends.
At least the projector device is mounted to or integrated with the minimally surgical instrument. The projector device is configured to project the structured light as a structured light beam onto at least a section of a tissue field.
In an embodiment the light receiving end of the fiber bundle is operatively coupled to the structured light device for receiving at least a part of the structured light from the structured light device. The operatively coupling may include one or more lenses and/or objectives e.g. for focusing the structured light to be received by the light receiving end of the fiber bundle.
In an embodiment the fiber optic probe instrument assembly comprises one or more lenses and/or objectives arranged between the light emitting end of the bundle of fibers and the projector, preferably the projector comprises a micro lens.
Advantageously, the light emitting ends of the fibers are arranged in an encasing to thereby form a probe-head comprising the projector device. The probe-head may comprise one or more lenses for ensuring a desired projection of the structured light. In an embodiment the projector is formed by a protecting coating or cover for the emitting end of the fiber bundle, preferably forming part of the encasing.
In an embodiment the light emitting ends of the bundle of fibers are arranged in an encasing to form a probe-head comprising the projector device; preferably the probe-head comprises one or more lenses, such as micro lenses.
The probe-head may advantageously have a maximal cross-sectional diameter of up to about 1 cm, such as up to about 8 mm, such as up to about 6 mm, thereby ensuring that the probe-head may be inserted, together with a distal end portion of the minimally surgical instrument to which it is mounted or integrated, into a natural or artificial cavity of a human or animal patient subjected to surgery.
The projector device and/or probe-head is advantageously mounted to the minimally surgical instrument at or near the distal end portion of the minimally surgical instrument, to ensure that the projector device may be inserted into a natural or artificial cavity of a human or animal patient subjected to surgery together with the distal end portion of the minimally surgical instrument to which it is mounted or integrated.

Where the minimally surgical instrument has a distal portion that may be articulated at an articulating length section thereof, the projector device and/or probe-head is advantageously mounted to the minimally surgical instrument at the distal end portion at a location at or closer to the distal end of the minimally surgical instrument than the articulating length section. Alternatively the relative position between the distal end of the minimally surgical instrument and the projector device may be determined by a computer system which comprises data representing the articulating state of the articulating length section.

The invention also relates to a structured light ultrasound instrument. The structured light ultrasound instrument comprises an ultrasound transducer instrument and a structured light arrangement, wherein the structured light arrangement comprises a projector device for projecting a structured light beam to a tissue field, such as a tissue field within a natural or artificial body cavity.
The structured light arrangement may be as described above.
At least the projector device is mounted to or integrated with the ultrasound transducer instrument. Generally, it is known to use an ultrasound transducer instrument for imaging before or during surgery. Such prior art ultrasound transducer instruments are for example marketed by BK Ultrasound. However, heretofore it has been a major problem to navigate the ultrasound transducer instrument relative to the surface of the tissue that is scanned by the ultrasound transducer instrument and, in particular, it has been very difficult to match the obtained ultrasound images with actual images of the tissue surface. Thus, even though the prior art ultrasound transducer instruments have been capable of identifying damaged or malignant tissue areas, it has been difficult for the surgeon to actually find the exact location of the damaged or malignant tissue areas.
Thanks to the structured light ultrasound instrument of the invention, it is now possible to obtain an improved correlation between an ultrasound image and a surface image of a tissue area and surface.
The structured light ultrasound instrument may advantageously form part of the above described 3D reconstruction system.
Advantageously the ultrasound transducer instrument has a distal portion with a distal end and an ultrasound head located at the distal end. The ultrasound head preferably has a maximal cross-sectional diameter of up to about 2 cm, such as up to about 1.5 cm, such as up to about 1 cm, such as up to about 8 mm, such as up to about 6 mm. Thereby the ultrasound transducer instrument may be suitable for use in artificial or natural openings of a human or animal.
In an embodiment the ultrasound transducer instrument has an articulating length section at the distal portion, the articulating length section is preferably arranged proximally to the ultrasound head.
To ensure that the projected light beam reaches the surface tissue it is desired that the projector device is located at the distal portion of the ultrasound transducer instrument.
In an embodiment the projector device is located distally to the articulating length section.
In an embodiment the projector device is located proximally to the
articulating length section.
The invention also comprises a minimally invasive surgery navigation system suitable for ensuring a desired and improved navigation of an ultrasound transducer instrument during minimally invasive surgery. The minimally invasive surgery navigation system comprises a structured light ultrasound instrument as described above, an endoscope and a computer system.
The endoscope comprises an image acquisition device configured for recording data representing reflected rays from the emitted pattern and for transmitting the rays reflected from a surface section of a tissue field to the computer system. The image acquisition device may be as described above.
The computer system is configured
• for receiving the data representing reflected rays from the endoscope,
• for receiving, storing and/or determining 2D and/or 3D surface data representing the surface section of the minimally invasive surgery cavity,
• for calculating the position and orientation of the ultrasound transceiver probe using the data representing reflected rays,
• for obtaining a 2D and/or 3D ultrasound image from the ultrasound transceiver probe, and
• for correlating in 3D orientation, spatial position and size the 2D and/or 3D surface data to the ultrasound image.
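As an illustration of the correlating step, the sketch below places the 2D ultrasound image plane into the same 3D frame as the surface data, assuming the calculated probe pose (R, t) is expressed in that frame; the plane parameterization and all values are assumptions for this example.

```python
# Hedged sketch: map the corners of the ultrasound image plane into the
# surface-data frame using the calculated probe position and orientation.
import numpy as np

def place_ultrasound_plane(R, t, width_mm, depth_mm):
    """Return the 3D corners of the ultrasound image plane in the surface
    frame. The image plane is assumed to start at the transducer face (probe
    origin) and extend depth_mm along the probe's local z axis."""
    corners_local = np.array([
        [-width_mm / 2, 0, 0],
        [ width_mm / 2, 0, 0],
        [ width_mm / 2, 0, depth_mm],
        [-width_mm / 2, 0, depth_mm],
    ])
    return corners_local @ R.T + t

R = np.eye(3)                      # probe aligned with the surface frame
t = np.array([10.0, 20.0, 5.0])    # probe position from the structured light
print(place_ultrasound_plane(R, t, width_mm=40, depth_mm=60))
```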
The computer system may be as described above, wherein the computer is further configured for
• obtaining a 2D and/or 3D ultrasound image from the ultrasound transceiver probe, and
• correlating in 3D orientation, spatial position and size the 3D surface data to the ultrasound image.

In an embodiment the minimally invasive surgery navigation system and the 3D reconstruction system are a combined system.
In an embodiment the surface data is 2D surface data e.g. a simple surface image.
Preferably, the surface data is 3D data e.g. determined by the 3D
reconstruction system described above e.g. integrated with the minimally invasive surgery navigation system.
The minimally invasive surgery navigation system is preferably configured for operating in real time.
In an embodiment a temporally associated 2D image of a surface section of the surgical field may be used to confirm or improve the correlation of the ultrasound image to the 2D and/or 3D surface data. The temporally associated 2D image may for example be a frame acquired by the image acquisition device with a time attribute corresponding to that of the set(s) of data representing reflected rays.
The projector device and/or probe-head is advantageously mounted to the minimally surgical instrument at the distal end portion of the minimally surgical instrument or near the distal end, to ensure that the projector device may be inserted into a natural or artificial cavity together with the distal end portion of the minimally surgical instrument to which it is mounted or integrated of a human or animal patient subjected to surgery.
Where the minimally surgical instrument has a distal portion that may be articulated at an articulating length section thereof, the projector device and/or probe-head is advantageously mounted to the minimally surgical instrument at the distal end portion at a location at or closer to the distal end of the minimally surgical instrument than the articulating length section.
Alternatively the relative position between the distal end of the minimally surgery relative to the projector device may be determined by a computer system which comprises data representing the articulating state of the articulating length section.
The invention also relates to a structured light ultrasound instrument. The structured light ultrasound instrument comprises an ultrasound transducer instrument and a structured light arrangement, wherein the structured light arrangement comprises a projector device for projecting a structured light beam to a tissue field, such as a tissue field within a natural or artificial body cavity.
The wherein the structured light arrangement may be as described above.
At least the projector device is mounted to or integrated with the ultrasound transducer instrument.
Generally, it is known to use ultrasound transducer instruments for imaging before or during surgery. Such prior art ultrasound transducer instruments are for example marketed by BK Ultrasound. However, heretofore it has been a major problem to navigate the ultrasound transducer instrument relative to the surface of the tissue that is scanned by the ultrasound transducer instrument, and in particular it has been very difficult to match the obtained ultrasound images with actual images of the tissue surface. Thus, even though the prior art ultrasound transducer instruments have been capable of identifying damaged or malignant tissue areas, it has been difficult for the surgeon to actually find the exact location of the damaged or malignant tissue areas.
Thanks to the structured light ultrasound instrument of the invention, it is now possible to obtain an improved correlation between an ultrasound image and a surface image of a tissue area and surface. The structured light ultrasound instrument may advantageously form part of the above described 3D reconstruction system.
Advantageously the ultrasound transducer instrument has a distal portion with a distal end and an ultrasound head located at the distal end. In an embodiment the ultrasound transducer instrument has an articulating length section at the distal portion, the articulating length section is preferably arranged proximally to the ultrasound head. To ensure that the projected light beam reaches the surface tissue it is desired that the projector device is located at the distal portion of the ultrasound transducer instrument.
In an embodiment the projector device is located distally to the articulating length section.
In an embodiment the projector device is located proximally to the articulating length section.
The invention also comprises a minimally invasive surgery navigation system suitable for ensuring a desired and improved navigation of an ultrasound transducer instrument during minimally invasive surgery. The minimally invasive surgery navigation system comprises a structured light ultrasound instrument as described above, an endoscope and a computer system.
The endoscope comprises an image acquisition device configured for recording data representing reflected rays from the emitted pattern and for transmitting the rays reflected from a surface section of a tissue field to the computer system. The image acquisition device may be as described above.
The computer system is configured
• for receiving the data representing reflected rays from the endoscope,
• for receiving, storing and/or determining 2D and/or 3D surface data representing the surface section of the minimally invasive surgery cavity,
• for calculating the position and orientation of the ultrasound transceiver probe using the data representing reflected rays,
• obtaining a 2D and/or 3D ultrasound image from the ultrasound transceiver probe, and
• correlating the 2D and/or 3D surface data to the ultrasound image in 3D orientation, spatial position and size.
The computer system may be as described above, wherein the computer is further configured for
• obtaining a 2D and/or 3D ultrasound image from the ultrasound transceiver probe, and
• correlating the 3D surface data to the ultrasound image in 3D orientation, spatial position and size.
In an embodiment the minimally invasive surgery navigation system and the 3D reconstruction system form a combined system.
In an embodiment the surface data is 2D surface data, e.g. a simple surface image.
Preferably, the surface data is 3D data, e.g. determined by the 3D reconstruction system described above, which may be integrated with the minimally invasive surgery navigation system.
The minimally invasive surgery navigation system is preferably configured for operating in real time.
Brief description of preferred embodiments and elements of the invention
The above and/or additional objects, features and advantages of the present invention will be further elucidated by the following illustrative and non-limiting description of embodiments of the present invention, with reference to the appended drawings.
The figures are schematic and are not drawn to scale and may be simplified for clarity. Throughout, the same reference numerals are used for identical or corresponding parts.
Figure 1 is a schematic illustration of an embodiment of a 3D reconstruction system of the invention in use for performing a 3D determination of a tissue field.
Figure 2 is a schematic illustration of an embodiment of a 3D reconstruction system of the invention in use for performing a 3D determination of a tissue field in a minimally invasive surgical cavity.
Figure 3 is a schematic illustration of another embodiment of a 3D reconstruction system of the invention in use for performing a 3D determination of a tissue field in a minimally invasive surgery cavity.
Figure 4 illustrates an example of a flow chart of data processing of a 3D reconstruction system of an embodiment of the invention.
Figure 5 illustrates another example of a flow chart of data processing of a 3D reconstruction system of an embodiment of the invention.
Figure 6 illustrates a further example of a flow chart of data processing of a 3D reconstruction system of an embodiment of the invention.
Figure 7 is a schematic illustration of an image of a tissue field reflecting a structured light pattern projected from a projector device relative to a reference structured light data set.
Figure 8 illustrates a method where a 3D reconstruction system is performing a volumetric determination of a tissue field.
Figure 9 illustrates a method where a 3D reconstruction system is performing a size determination of a tissue field.
Figure 10 illustrates an example of a structured light.
Figure 10a illustrates examples of light features of the structured light of figure 10.
Figure 11 illustrates another example of a structured light.
Figure 11a illustrates examples of light features of the structured light of figure 11.
Figure 12 illustrates a further example of a structured light.
Figure 12a illustrates examples of light features of the structured light of Figure 12.
Figures 13a, 13b and 13c illustrate further examples of light features.
Figures 14a, 14b and 14c illustrate further examples of structured light.
Figure 15 is a schematic illustration of stereo image feature matching.
Figure 16 is a schematic view of a portion of a penetrator for use in minimally invasive surgery and where a projector device of a 3D reconstruction system of an embodiment of the invention is disposed at a tip of the penetrator.
Figures 17a and 17b illustrate a part of a penetrator member with a projector device of a 3D reconstruction system of an embodiment of the invention disposed near the tip of the penetrator, wherein the projector device has a first folded position and a second unfolded/pivoted position.
Figure 18 is a schematic illustration of a structured light arrangement comprising a light source - waveguide - optical projector and focusing lens assembly suitable for forming part of an embodiment of the 3D reconstruction system of the invention.
Figure 19 is a schematic illustration of a projector probe which may form part of a structured light arrangement of an embodiment of the 3D reconstruction system of the invention.
Figure 20 is a schematic illustration of a beam expanding lens arrangement which may form part of a structured light arrangement of an embodiment of the 3D reconstruction system of the invention.
Figure 21 is a schematic illustration of a robot comprising a 3D reconstruction system of an embodiment of the invention.
Figures 22a-22c illustrate examples of image acquisition devices suitable for a 3D reconstruction system of embodiments of the invention.
Figure 23 is a schematic illustration of a fiber optic probe.
Figure 24 is a schematic illustration of a fiber optic probe instrument assembly of an embodiment of the invention comprising the fiber optic probe of figure 23.
Figures 25a-25d illustrate cross-sectional views of examples of fiber bundles.
Figure 26 is a schematic illustration of a distal portion of a fiber optic probe instrument assembly of an embodiment of the invention comprising the fiber optic probe of figure 23.
Figures 27-33 illustrate a structured light ultrasound instrument and a minimally invasive surgery navigation system in use during minimally invasive surgery.
The 3D reconstruction system illustrated in figure 1 comprises a structured light arrangement 1 with a projector device 1a configured to project a structured light beam onto at least a section of a tissue field 3. The 3D reconstruction system also comprises an image acquisition device 2 configured for acquiring frames comprising reflections 5 of the structured light beam from the tissue field 3. The image acquisition device 2 comprises a stereo camera 2a for acquiring digital images (frames). Each frame comprises a set of pixel data and the set of pixel data is associated with a time attribute by the image acquisition device or by the computer system 6, which also forms part of the 3D reconstruction system. As illustrated with the waves W the structured light arrangement 1 and the image acquisition device are both in data communication with the computer system. The image acquisition device is configured for transmitting the acquired frames - in the form of sets of pixel data - in real time to the computer system 6 and the computer system is configured for receiving the frames in real time - in the form of sets of pixel data.
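Purely as an illustration of these data shapes - the names Frame, time_attribute and corresponding_pairs are assumptions of the sketch, not taken from this disclosure - a frame may be modelled as a set of pixel data plus a time attribute, and stereo frames may be grouped by corresponding time attributes:

```python
# Illustrative sketch only: a frame as a set of pixel data with a time
# attribute, and a helper pairing stereo frames whose time attributes
# correspond within a small tolerance. All names are assumed.
from dataclasses import dataclass
import numpy as np

@dataclass
class Frame:
    time_attribute: float      # acquisition time in seconds
    pixel_data: np.ndarray     # H x W (or H x W x 3) image array

def corresponding_pairs(left, right, tol=1e-3):
    """left, right: lists of Frame sorted by time_attribute.
    Yields stereo pairs whose time attributes match within tol seconds."""
    j = 0
    for f in left:
        # Skip right frames that are too early to correspond to f.
        while j < len(right) and right[j].time_attribute < f.time_attribute - tol:
            j += 1
        if j < len(right) and abs(right[j].time_attribute - f.time_attribute) <= tol:
            yield f, right[j]
```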
The computer system 6 is configured for receiving data from the structured light arrangement 1 representing the projected structured light beam. These data are stored in a memory of the computer system 6 as a reference structured light data set (herein also referred to as "reference data set"). The computer system 6 is further configured for
• recognizing a plurality of light features of received set(s) of light
features including a plurality of primary light features from the received set(s) of pixel data having corresponding time attribute and optionally from previous set(s) of pixel data,
• matching the recognized primary light features with corresponding light features of the projected structured light beam - i.e. the reference data set - and based on the matches estimating the spatial position of the projector device 1a relative to at least a part of the tissue field 3, and
• performing at least one determination of the tissue field 3 based on the spatial position of the projector device and the recognized light features.
As can be seen, the tissue field may be rather curved, and the 3D reconstruction system may e.g. be configured for determining the shortest Euclidean distance between a not shown instrument and the tissue field, i.e. the actual distance irrespective of the point of view.
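As a minimal sketch of such a distance determination - assuming, which this disclosure does not prescribe, that the reconstructed tissue field is available as an N x 3 array of 3D points and the instrument tip as a single 3D point - the shortest Euclidean distance is a nearest-neighbour query:

```python
# Minimal sketch under assumed data shapes: shortest Euclidean distance
# between an instrument tip and a reconstructed tissue field point cloud.
import numpy as np

def shortest_distance(tissue_points: np.ndarray, tip: np.ndarray) -> float:
    """tissue_points: (N, 3) array of reconstructed surface points;
    tip: (3,) array. Returns the distance in the unit of the reconstruction."""
    return float(np.min(np.linalg.norm(tissue_points - tip, axis=1)))
```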
The 3D reconstruction system illustrated in figure 2 is here applied in a minimally invasive surgical procedure and comprises a structured light arrangement 11 with a projector 11a projecting a structured light beam as illustrated with rays 11b onto at least a section of a tissue field 13.
The 3D reconstruction system also comprises an image acquisition device 12 configured for acquiring frames comprising reflections 15 of the structured light beam from the tissue field 13. The image acquisition device 12
comprises a not shown camera (also referred to as an image acquisition unit).
The image acquisition device 12 is wired to the computer 16a of the computer system for transmitting in real time sets of pixel data representing acquired frames. The structured light arrangement 11 is wired to the computer 16a of the computer system for transmitting data representing the projected structured light beam - the data represent at least a set of light features of the projected structured light beam.
A tip portion comprising the projector device 11a of the structured light arrangement 11 is inserted through the skin 10 of a patient into the minimally invasive surgical cavity to project the rays 11b onto an intestine area I of the tissue field. A portion comprising the camera 12a of the image acquisition device 12 is inserted through the skin 10 of the patient into the minimally invasive surgical cavity via a cannula port 12c to acquire the frames of the reflected structured light 15.
The computer system comprises a display (a screen) 16b and the computer 16a is configured for
• recognizing a plurality of light features of one or more received sets of light features having a corresponding time attribute, and optionally from previous sets of pixel data for applying time shifted matching as described above,
• matching the recognized primary light features with corresponding light features of the projected structured light beam - i.e. the reference data set - and based on the matches estimating the spatial position of the projector device 11a relative to at least a part of the tissue field 13,
• performing topological determination of the tissue field 13 based on the spatial position of the projector device and the recognized light features, and
• displaying the determined topology on the display 16b.
It should be understood that the computer system 16a, 16b may also be configured for controlling the operation of the image acquisition device 12 and/or the structured light arrangement 11.
The 3D reconstruction system illustrated in figure 3 comprises a structured light arrangement 2, with a portion comprising a projector device inserted through the skin 20 of a patient into a minimally invasive surgical cavity to project the rays onto the tissue field. The 3D reconstruction system also comprises an image acquisition device 22 partly inserted into the minimally invasive surgical cavity and configured for acquiring frames comprising reflections 25 of the structured light beam from the tissue field.
The image acquisition device 22 comprises a camera for acquiring digital frames which are transmitted to a not shown computer system of the 3D reconstruction system in the form of sets of pixel data. As illustrated in the process scheme the computer system is configured for
• recognizing a plurality of light features of received set(s) of light
features including a plurality of primary light features from the received set(s) of pixel data having corresponding time attribute and optionally from previous set(s) of pixel data and matching the recognized primary light features with corresponding light features of the projected structured light beam,
• based on the matches estimating the spatial position of the projector device 1a relative to at least a part of the tissue field 3, and
• performing at least one 3D determination of the tissue field 3 based on the spatial position of the projector device and the recognized light features.
The 3D determination may e.g. be 3D reconstruction, e.g. topological reconstruction, determining an augmented reality view of the tissue field, performing volumetric measures, tracking an instrument relative to the tissue field, etc. As illustrated, the computer system may receive pre-operation data and/or intra-operation data which may e.g. be used for refining the recognition step, the matching step, the estimation of the projector device spatial position and/or the 3D determination.
The flow chart of figure 4 illustrates a process scheme of data processing steps which the computer system may be configured to perform.
In step 4a) "image capture", the computer system receives a set of pixel data representing an acquired frame with an associated time attribute. The computer system may store previous set(s) of pixel data (set(s) of pixel data representing previously acquired frames with a time attribute representing an earlier point in time).
In step 4b) "Recognizing light features", the computer system searches for light features in the set(s) of pixel data.
In step 4c) "Selecting primary light features", the computer system selects light features which are qualified for being used as primary light features, e.g. in respect of one or more thresholds, and preferably light features with at least an orientation attribute, a position attribute or a combination thereof. The selected light features are deemed to be primary light features.
In step 4d) "Match primary light features", the computer system matches the primary light features with corresponding light features of the reference structured light data set.
In step 4e) "Estimating spatial position of the projector device", the computer system estimates the spatial position of the projector device using the best match of features. The computer system may be configured for applying an iterative estimation procedure to find the estimation where most of the matched features are valid. Thereby primary features reflected from very curved areas of the tissue field may be ignored for the estimation of the spatial position of the projector device.
In step 4f) "Estimating location of features on tissue field", the computer system estimates the location of recognized light features (e.g. including primary light features) on the tissue field, including the spatial location relative to the now estimated spatial position of the projector device.
In step 4g) "Calculate tissue field map", the computer system calculates topological data (3D data) of the tissue field.
It should be noted that one or more steps may be performed iteratively. Further, the computer system may apply additional steps, such as data rectifying steps, outlier removal steps, epipolar matching where a stereo camera has been used, etc.
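A hedged sketch of steps 4b) and 4c) follows; a generic corner detector merely stands in for whatever feature detector an implementation might use, and the threshold values are illustrative:

```python
# Sketch of steps 4b)-4c): recognize candidate light features in a frame and
# select primary features against quality/separation thresholds. The detector
# choice and threshold values are assumptions, not taken from the disclosure.
import cv2
import numpy as np

def recognize_primary_features(gray_frame: np.ndarray, max_features: int = 100) -> np.ndarray:
    """gray_frame: single-channel uint8 image of the reflected pattern.
    Returns an (N, 2) array of (x, y) feature positions."""
    corners = cv2.goodFeaturesToTrack(
        gray_frame,
        maxCorners=max_features,
        qualityLevel=0.05,   # quality threshold: discards weak candidates
        minDistance=8,       # keeps selected features spatially separated
    )
    if corners is None:
        return np.empty((0, 2))
    return corners.reshape(-1, 2)
```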
The flow chart of figure 5 illustrates a process scheme of another example of data processing steps which the computer system may be configured to perform.
In step 5a) "image capture", the computer system receives a set of pixel data representing an acquired frame with an associated time attribute. The computer system may store previous sets of pixel data (sets of pixel data representing previously acquired frames with a time attribute representing an earlier point in time).
In step 5b) "Image rectifying", the computer system rectifies the set(s) of pixel data by subjecting the data to an error correction procedure, e.g. as described above.
In step 5c) "Detect features", the computer system recognizes light features, e.g. by searching for light features in the set(s) of pixel data. The computer system further selects those which are qualified for being used as primary light features, e.g. in respect of one or more thresholds, and preferably light features with at least an orientation attribute, a position attribute or a combination thereof. The selected light features are deemed to be primary light features.
In step 5d) "Match features", the computer system matches the primary light features with corresponding light features of the reference structured light data set.
Steps 5c) and 5d) may for example include the steps i-iv of extracting light features, e.g. represented by local pattern fractions such as corners, corner connections, square arrangements and any other fractions of the structured light, e.g. as described above, and matching these light features in an iterative process.
In step 5e) "Outlier removal and rough pose & orientation estimation", the computer system performs a rough estimation of the spatial position of the projector device using the matched features, refines the estimation, removes outlier data and repeats as many times as required, e.g. until further modifications are below a preselected threshold.
Step 5f) "Refine pose and orientation" is a continuation of step 5e) to perform the final estimation of the spatial position of the projector device.
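Steps 5e) and 5f) might be sketched as the following loop, under the simplifying assumption that the pose model, its estimator and its reprojection-error function are supplied as callables (estimate_pose and reprojection_errors are hypothetical names):

```python
# Sketch of steps 5e)-5f): rough estimation, outlier removal, and repeated
# refinement until the pose update falls below a preselected threshold.
import numpy as np

def robust_pose(pairs, estimate_pose, reprojection_errors,
                err_thresh=2.0, tol=1e-4, max_iter=20):
    """pairs: matched feature pairs; estimate_pose(pairs) -> pose array;
    reprojection_errors(pose, pairs) -> per-pair error. Returns refined pose."""
    pairs = list(pairs)
    pose = estimate_pose(pairs)
    for _ in range(max_iter):
        errs = reprojection_errors(pose, pairs)
        inliers = [p for p, e in zip(pairs, errs) if e < err_thresh]
        if len(inliers) < 4:     # too few matches left to re-estimate
            break
        new_pose = estimate_pose(inliers)
        if np.linalg.norm(np.asarray(new_pose) - np.asarray(pose)) < tol:
            pose = new_pose      # converged: modification below threshold
            break
        pose, pairs = new_pose, inliers
    return pose
```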
In step 5g) "Dense 3D reconstruction", the computer system estimates the location of recognized light features on the tissue field, including the spatial location relative to the now estimated spatial position of the projector device, and calculates topological data (3D data) of the tissue field.
The flow chart of figure 6 illustrates a process scheme of a further example of data processing steps which the computer system may be configured to perform.
In step 6a) "capture stereo image", the computer system receives sets of pixel data representing stereo acquired frames having corresponding associated time attributes.
In step 6b) "Recognizing light features from each image", the computer system searches for corresponding light features in each of the sets of pixel data and selects corresponding primary light features.
In step 6c) "Matching features to reference light data", the computer system matches the primary light features from one or preferably both sets of pixel data with corresponding light features of the reference structured light data set.
In step 6d) "Epipolar feature matching", the computer system matches the primary light features and optionally other light features between the sets of pixel data of the stereo frames.
As indicated, the matching relative to the reference data set and the epipolar matching may be performed as an iterative process. The step may advantageously include outlier removal.
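One possible form of the epipolar check in step 6d) - assuming the stereo rig's fundamental matrix F is known from calibration, which is an assumption of this sketch - is to accept a candidate match when its Sampson residual is small:

```python
# Epipolar inlier test for candidate stereo matches: for a true match
# (x1, x2), x2^T F x1 is near zero; the Sampson residual normalizes this
# algebraic error by the epipolar line gradients.
import numpy as np

def epipolar_inliers(pts1: np.ndarray, pts2: np.ndarray, F: np.ndarray,
                     thresh: float = 1.0) -> np.ndarray:
    """pts1, pts2: (N, 2) pixel coordinates of candidate matches;
    F: 3x3 fundamental matrix. Returns a boolean inlier mask."""
    h1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    h2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    Fx1 = (F @ h1.T).T       # epipolar lines in image 2
    Ftx2 = (F.T @ h2.T).T    # epipolar lines in image 1
    num = np.sum(h2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den < thresh**2
```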
In step 6e) "Estimating spatial position of the projector device", the computer system estimates the spatial position of the projector device using the best match of features, including the epipolar matching. The computer system may be configured for applying an iterative estimation procedure to find the estimation where most of the matched features are valid.
In step 6f) "Estimating location of features on tissue field", the computer system estimates the location of recognized light features (e.g. including primary light features) on the tissue field, including the spatial location relative to the now estimated spatial position of the projector device. As indicated, the computer system may for example receive and apply pre-operation data and/or intra-operative data in the location estimation to thereby further refine the 3D determination.
In step 6g) "Calculate tissue field map", the computer system calculates topological data (3D data) of the tissue field.
Figure 7 illustrates the matching of light features of a set of pixel data representing an image 30 with data of a reference structured light data set representing the structured light 31 as projected by the projector device (the projected structured light beam). As illustrated with the lines L, light features comprising corners, crossing lines, etc. may be matched.
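The matching of figure 7 - and the homographic transformation of claim 14 below - could for instance be sketched with a RANSAC homography, under the simplifying assumption that the recognized image features and the reference light features are already paired in order:

```python
# Sketch: estimate the transformation between image features and the
# reference structured light data set, flagging outlier pairs via RANSAC.
import cv2
import numpy as np

def match_to_reference(image_pts: np.ndarray, reference_pts: np.ndarray):
    """image_pts, reference_pts: (N, 2) float32 arrays of paired positions.
    Returns (3x3 homography or None, boolean inlier mask)."""
    H, mask = cv2.findHomography(
        reference_pts, image_pts, cv2.RANSAC, ransacReprojThreshold=3.0
    )
    if H is None:
        return None, np.zeros(len(image_pts), dtype=bool)
    return H, mask.ravel().astype(bool)
```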
Figure 8 illustrates the 3D reconstruction system performing a volumetric determination of a tissue field based on the light features of the set of pixel data representing an image 30, after having estimated the spatial position of the projector device. By estimating the location of the recognized light features of the reflected structured light on the tissue field as imaged, including determining the spatial location relative to the estimated spatial position of the projector device, the computer system can calculate the geometric spatial position of the light features and form a map M where formations may be detected, e.g. by fitting the data representing position attributes of light features 32 to a circle as shown. Thereby the size and the volume of a protrusion may be determined.
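One of several possible circle fits is the linear least-squares (Kåsa) fit sketched below; the fitting method is an assumption of the sketch, not prescribed here:

```python
# Kåsa circle fit: solve x^2 + y^2 = 2*cx*x + 2*cy*y + c linearly, with
# c = r^2 - cx^2 - cy^2, to recover the centre and radius of a protrusion.
import numpy as np

def fit_circle(points: np.ndarray):
    """points: (N, 2) position attributes of light features.
    Returns (cx, cy, r)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(points))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r
```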
Figure 9 illustrates the 3D reconstruction system performing a size determination of a tissue field based on the light features of the set of pixel data representing an image 30, after having estimated the spatial position of the projector device. By estimating the location of the recognized light features of the reflected structured light on the tissue field as imaged, including determining the spatial location relative to the estimated spatial position of the projector device, the computer system can calculate the geometric spatial position of the light features and form a map M where sizes and distances may be detected, e.g. by fitting the data representing position attributes of light features 32 to a circle as shown. Thereby a size or a distance may be determined.
Figure 10 illustrates an example of a structured light in the form of a grid pattern. Examples of light features which may be extracted from the grid pattern are shown in figure 10a, including for example a sub-grid, a square, a cross and a corner. A position attribute may e.g. be provided as the crossing point in a sub-grid or a cross, or as a corner edge of a sub-grid or a corner. An orientation attribute may for example represent the orientation of lines of the shown light features.
Figure 11 illustrates an example of a structured light in the form of a grid pattern with random dots. Examples of light features which may be extracted from the grid pattern are shown in figure 11a including for example a grid feature, a grid with dots, a group of dots or a single dot.
Figure 12 illustrates an example of a structured light in the form of a pattern of dots. Examples of light features which may be extracted from the pattern are shown in figure 12a. As seen, the group of dots can be matched to the pattern of dots as illustrated with the dotted ring.
Figures 13a, 13b and 13c illustrate further examples of light features, comprising light features with various characteristic shapes, light features with various colors and light features with various intensities of light.
Figure 14b illustrates a structured light comprising a number of parallel lines, e.g. with different attributes such as size, colour, structure, etc.
Figure 14c illustrates a structured light in the form of a bar coded structure.
Figure 15 is a schematic illustration of stereo image feature matching showing the matching of light features between the first image 42 and the second image 43 of a pair of stereo images. Corresponding features of at least one of the images are matched to the reference structured light data set 41.
Figure 16 shows a distal portion of a penetrator 50. The penetrator comprises a channel 55 for supplying light, e.g. via an optical fiber arrangement, to a projector device 56 arranged at the tip of the penetrator.
The penetrator 60 of figures 17a and 17b has a distal portion 62 with a tip 64, an obstruction 65 ensuring that the penetrator 60 does not penetrate too deep into a minimally invasive surgical cavity, and a proximal portion 63 for handling of the penetrator 60 by an operator. At the distal portion 62 the penetrator 60 comprises a projector device 66 forming part of a structured light arrangement of a 3D reconstruction system of an embodiment of the invention. In figure 17a the projector device 66 is in a folded position where the projector device 66 is folded into a sleeve of the penetrator 60. When the projector device 66 is in this first position the penetrator 60 may penetrate through the skin of a patient. Thereafter the projector device 66 may be unfolded, e.g. by being pivoted, to an operative position where the projector device is configured for projecting the structured light beam P.
The structured light arrangement illustrated in figure 18 comprises a light source 72, a waveguide 71, an optical projector 76 and a focusing lens 75. The waveguide 71 is an optical fiber and the optical projector 76 is a DOE
(Diffractive Optical Element). The light pattern is projected in the desired direction and focused by the focusing lens 75. The projected pattern has a diverging angle Θ.
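For illustration only, the divergence angle can be computed from the D1/D2/L definition given with claim 40 below, taken here as the full angle between opposite edges of the expanding beam:

```python
# Beam divergence from two cross-sectional dimensions D1 and D2 measured a
# distance L apart along the centre axis (full-angle form; an assumption).
import math

def beam_divergence(D1: float, D2: float, L: float) -> float:
    """Returns the full divergence angle in degrees."""
    return math.degrees(2.0 * math.atan((D2 - D1) / (2.0 * L)))
```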
The projector probe illustrated in figure 19 comprises an optical fiber 81 with a proximal end 81' and a distal end, a beam expanding lens 82 and a projector 86 with a distal front face 86'. The optical fiber 81, the beam expanding lens 82 and the projector 86 are fused at the fused interfaces F. The optical fiber 81, the beam expanding lens 82 and the projector 86 are arranged in a hermetic metal housing 80, preferably using an epoxy seal 89.
When a light beam is pumped from a not shown light source into the proximal end 81' of the optical fiber 81, the light will propagate through the optical fiber 81 collimated in the core of the optical fiber. From the fiber 81 the light will pass into the beam expanding lens 82, which is advantageously a GRIN lens, and the beam will expand as the light propagates through the beam expanding lens 82. At the exit of the beam expanding lens 82 the light will be collimated and it will propagate into the projector, which is advantageously a DOE. Here the light pattern will be shaped and the projector will project a divergent light pattern. The projector may advantageously comprise an optical filter or an optical filter layer as described above to prevent and/or remove fog/mist. The optional optical filter or optical filter layer is indicated with the dotted part 86a of the projector 86.
The beam expanding lens arrangement illustrated in figure 20 comprises a beam expanding lens 92 having a length L. As illustrated, the light is fed from a not shown light source via an optical fiber 91 to the proximal end of the beam expanding lens 92. The light enters the beam expanding lens and, due to a continuous change of the refractive index within the lens material, the light rays r1 are continuously bent to thereby expand the diameter of the beam as the light propagates through the beam expanding lens 92 along its length L. At the exit of the beam expanding lens 92 the light is collimated to form a beam with substantially parallel rays r2 of light. Hereafter the collimated light may be transmitted further to the DOE of a projector probe as illustrated in figure 19.
Figure 21 is a schematic illustration of a robot 100 comprising a first movable robot arm 100a and a second movable robot arm 100b. The robot 100 comprises a minimally invasive surgery tool 107a and a projector device 107 of a structured light arrangement disposed on its first robot arm 100a and an image acquisition device 108 disposed on its second robot arm 100b. The invasive surgery tool 107a with the projector device 107 and the image acquisition device 108 are passed through not shown cannula ports through the skin 106 of a patient into a minimally invasive surgical cavity with a tissue field. In the shown embodiment the robot arms 100a and 100b are outside the cavity. In practice it is desired that the robot arms 100a and 100b are inserted through the cannula ports into the cavity to thereby increase the movability of the projector device and the image acquisition device.
The projector device 107 projects a structured light beam SB to impinge onto the tissue field and at least a part of the light of the structured light beam SB is reflected as a reflected light pattern RP. The image acquisition device 108 acquires frames comprising at least a part of the reflected light pattern RP.
The robot also comprises a computer system comprising a data collecting arrangement 102, a computer 101 with processing capability and a controller processing system 104 for controlling the operation of the robot and in particular the robot arms 100a, 100b.
The frames acquired by the image acquisition device are collected in real time in the data collecting arrangement 102 and stored in the form of sets of pixel data with associated time attributes. The data collecting arrangement 102 also stores the reference structured light data set.
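A minimal sketch of such a data collecting arrangement - class and method names are assumptions - stores sets of pixel data keyed by their time attributes alongside the reference structured light data set:

```python
# Illustrative data collecting arrangement: frames stored by time attribute,
# plus the reference structured light data set, served on request.
import numpy as np

class DataCollector:
    def __init__(self, reference_data: np.ndarray):
        self.reference_data = reference_data  # reference structured light data set
        self._frames = {}                     # time attribute -> set of pixel data

    def store_frame(self, time_attribute: float, pixel_data: np.ndarray) -> None:
        self._frames[time_attribute] = pixel_data

    def request(self, time_attribute: float) -> np.ndarray:
        """Return the stored frame closest in time to the requested attribute."""
        key = min(self._frames, key=lambda t: abs(t - time_attribute))
        return self._frames[key]
```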
The computer 101 requests data from the data collecting arrangement 102 and processes the data as described above to estimate the spatial position of the projector device, and further the computer 101 performs 3D determinations of the tissue field. The 3D determinations may be transmitted to a display 103 to be visualized for a human surgeon.
The 3D determinations may further be transmitted to the controller processing system 104 for being used in the algorithm determining the movement of the robot arms. In an embodiment the controller processing system 104 may further provide feedback to the computer 101 including data describing previous or expected moves of the robot arms 100a, 100b, and the computer 101 may apply these feedback data to refine the 3D determinations.
In the shown embodiment the image acquisition device also comprises a projector which projects a second structured light beam SBa towards the tissue field and this second structured light beam SBa is at least partly reflected as a reflected pattern RPa. The wavelength(s) of the two structured light beams SB, SBa preferably differ such that the computer 101 may distinguish between the two reflected light patterns RP, RPa and features thereof.
The computer may apply the second reflected light pattern RPa to further refine the 3D determinations.
Figure 22a illustrates an example of an image acquisition device comprising one or more single cameras 112a, 112b incorporated into separate camera housings 111a, 111b. Where there are more cameras 112a, 112b it is desirable that the cameras 112a, 112b may be moved separately, e.g. by independently tilting and/or twisting the camera housings. The cameras 112a, 112b may be operated by a common electronic circuitry to ensure that the cameras 112a, 112b operate concurrently with each other and preferably with the same frame rate.
Figure 22b illustrates an example of an image acquisition device comprising a stereo camera with a stereo camera housing 113 comprising flexible housing arms 113a, 113b each encasing a camera 114a, 114b. The flexible housing arms 113a, 113b ensure that the cameras 114a, 114b can be positioned with a variable baseline within a selected range. An embodiment of the 3D reconstruction system comprising a stereo camera with a variable baseline is very advantageous, because the baseline may be optimized during a procedure to thereby result in a very detailed 3D analysis of the tissue field, including highly accurate determinations of sizes of protruding parts and/or cavities of the tissue field and optionally volume determinations of such protrusions and/or cavities.
The fiber optic probe shown in figure 23 comprises a structured light device 122, a bundle of optical fibers 124 and a probe-head 125 encasing a projector device.
In the shown embodiment, a not shown light source is arranged to feed light to the structured light device 122 via a fiber 121. In a variation thereof, the light source forms an integrated part of the structured light device 122. The light source may be as the light sources disclosed above. Advantageously, the light source is a laser light source or e.g. an LED.
The structured light device 122 is configured for generating a structured light. Since the size of the structured light device 122 is not very important, any device suitable for generating the structured light may be applied, e.g. as described above.
The structured light device 122 is arranged to deliver at least a portion of the structured light to the light receiving end 124a of the fiber bundle 124. A number of lenses and/or objectives 123 are arranged between an output end 122a of the structured light device 122 and the light receiving end 124a of the fiber bundle 124. The lenses and/or objectives 123 ensure an effective focusing of the structured light to be received by the light receiving end 124a of the fiber bundle 124.
From the light emitting end 124b of the fiber bundle 124, the light propagates to the probe-head 125. In the shown embodiment the light emitting end 124b of the fiber bundle 124 is separate from the probe-head 125; however, for high stability it is desired that the light emitting end 124b of the fiber bundle 124 is encased in and forms part of the probe-head 125.
One or more not shown lenses may e.g. be arranged between the light emitting end 124b of the fiber bundle 124 and the projector. The probe-head shown comprises a micro lens 125a, which may be the projector device; alternatively a not shown projector device is arranged in front of the micro lens 125a.
The projector device of the probe-head is configured to project the structured light as a structured light beam e.g. onto at least a section of a tissue field.
Figure 24 shows the fiber optic probe mounted to an instrument 127 to form a fiber optic probe instrument assembly of an embodiment of the invention. The minimally invasive surgical instrument 127 comprises a distal portion 127a adapted to be inserted into a natural or artificial cavity of a human or animal patient subjected to surgery.
In the present embodiment the minimally invasive surgical instrument is a grasper comprising a pair of grasper arms 127b which form the distal end and the tip of the minimally invasive surgical instrument 127.
The probe-head 125 is mounted to the minimally invasive surgical instrument 127 at its distal portion 127a.
Figures 25a-25d illustrate cross-sectional views of examples of fiber bundles suitable for use in a fiber optic probe instrument assembly of embodiments of the invention.
As can be seen, the fiber bundle in figure 25a is very closely packed, whereas the fiber bundle of figure 25d is not so closely packed and comprises a lower number of fibers.
Figure 26 shows a distal portion of a fiber optic probe instrument assembly, where the minimally invasive surgical instrument is an endoscope 137. The endoscope 137 comprises a camera channel 137b and a probe channel 137a. In the shown embodiment a camera has not been inserted through the camera channel. A probe-head 135 and a length portion of the fiber bundle 134 have been passed through the probe channel.
Figures 27-33 illustrate a structured light ultrasound instrument and a minimally invasive surgery navigation system in use during minimally invasive surgery.
The illustration of figure 27 shows a tissue surface 141 within a body cavity. The tissue surface has a tumor 142 which may be malignant and is to be diagnosed and optionally removed. A distal portion of a structured light ultrasound instrument 143 as described above is inserted into the cavity. The structured light ultrasound instrument 143 comprises an ultrasound head 144 comprising a transceiver for emitting and receiving ultrasound. Proximally to the ultrasound head 144 the distal portion of the structured light ultrasound instrument 143 has an articulating length section 145, which can be articulated with a high degree of freedom. The articulating movements are preferably computer controlled, or at least the computer system comprises data representing the movements and articulated position of the articulating length section 145, such that the computer system can determine the relative position between the projector device 146 and the ultrasound head 144.
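Assuming the articulation state is available as a homogeneous transform - an assumption of this sketch, with all names and frames illustrative - the relative position determination reduces to a composition of 4x4 transforms:

```python
# Sketch: pose of the ultrasound head from the projector device pose, the
# articulation state, and a fixed head offset, all as 4x4 homogeneous
# transforms expressed in compatible frames (assumed kinematic model).
import numpy as np

def ultrasound_head_pose(T_proj: np.ndarray, T_art: np.ndarray,
                         T_head_offset: np.ndarray) -> np.ndarray:
    """Each argument is a 4x4 homogeneous transform; returns the ultrasound
    head pose in the same frame as the projector device pose T_proj."""
    return T_proj @ T_art @ T_head_offset
```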
The projector device is located proximally to the articulating length section 145 to ensure a desired location for emitting the light pattern onto a desired tissue field of the tissue surface 141. The structured light 147, here in the form of a plurality of light dots, impinges onto the tissue field and a portion of the light is reflected. An endoscope comprising an image acquisition device is configured for acquiring frames comprising reflections of the structured light 147. Each frame (image) comprises a set of pixel data associated with a time attribute, and advantageously the set of pixel data is transmitted to the computer system for being processed according to the 3D reconstruction system as described above.
As the ultrasound head 144 is moved over the tissue field a plurality of frames are captured by the image acquisition device. The computer system is configured for performing a 3D reconstruction in real time using these acquired frames as described above, and at the same time the ultrasound head 144 is scanning the tissue below the tissue surface.
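Keeping the two data streams in step could be sketched as follows, assuming each ultrasound image and each camera frame carries a numeric time attribute and at least two frames exist:

```python
# Sketch: pair each ultrasound image with the temporally closest camera
# frame, so reconstruction and ultrasound scanning stay aligned in time.
import numpy as np

def pair_ultrasound_with_frames(us_times: np.ndarray,
                                frame_times: np.ndarray) -> np.ndarray:
    """us_times: time attributes of ultrasound images; frame_times: sorted
    time attributes of camera frames (len >= 2). Returns, per ultrasound
    image, the index of the temporally closest frame."""
    idx = np.clip(np.searchsorted(frame_times, us_times), 1, len(frame_times) - 1)
    left, right = frame_times[idx - 1], frame_times[idx]
    return np.where(np.abs(us_times - left) <= np.abs(right - us_times),
                    idx - 1, idx)
```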
As illustrated in figure 28, the combined 3D reconstruction system/minimally invasive surgery navigation system is configured for determining in real time the spatial location and orientation of the non-articulated portion of the structured light ultrasound instrument 143, of the ultrasound head 144 and of the endoscope (camera) 148. As shown in figure 28, the system further overlays a depiction 143a, 144a, 148a of the spatial location and orientation of respectively the non-articulated portion of the structured light ultrasound instrument 143, the ultrasound head 144 and the endoscope (camera) 148. Thereby the surgeon has full information about the coordinates (spatial location) and the orientation of the non-articulated portion of the structured light ultrasound instrument 143, of the ultrasound head 144 and of the endoscope (camera) 148. As shown in figure 28 the structured light has been changed to an angular pattern.
Figure 29 illustrates a view seen from the endoscope (the probe spatial location is shown but not the orientation). The system advantageously allows the surgeon to switch these data on and off.
Figure 30 illustrates the image of the tissue field and a corresponding 2D ultrasound image 149. In real time, both images will continuously be replaced by real time images. As indicated, the tissue near the tumor may comprise a vein 140 or another critical structure which the surgeon should aim not to damage during surgery. Due to the present invention the surgeon may locate such a critical structure and thereby avoid unnecessarily damaging it.
Figure 31 illustrates the image of the tissue field and a corresponding 3D ultrasound image 150. In real time, both images will continuously be replaced by real time images. To couple the 3D ultrasound image 150, i.e. to determine the spatial location and orientation of the 3D ultrasound image 150, it is required to know the coordinates and orientation of the ultrasound head 144 and at least the orientation and advantageously also the coordinates of the endoscope 148.
Figure 32 shows a view of the image of the tissue field and the corresponding 3D ultrasound image 150, where the 3D ultrasound image 150 and the 2D and/or 3D surface data determined from the frames of the endoscope have been correlated in 3D orientation, spatial position and size.
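Though no particular method is prescribed here, the correlation in 3D orientation, spatial position and size can be illustrated by a closed-form similarity alignment (the Umeyama method) of corresponding 3D point sets:

```python
# Sketch: rotation R, scale s and translation t aligning ultrasound-derived
# points to surface points, assuming point correspondences are known.
import numpy as np

def similarity_align(src: np.ndarray, dst: np.ndarray):
    """src, dst: (N, 3) corresponding point sets. Returns (s, R, t) such
    that s * R @ src[i] + t approximates dst[i]."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0               # guard against a reflection solution
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * (R @ mu_s)
    return s, R, t
```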
Simultaneously the registration of the ultrasound reveals the location of critical structures which the surgeon can now avoid.
Figure 33 shows two different views corresponding to the view of figure 32, where the surgeon may dynamically change the x,y,z angle of one or both of the views.


PATENT CLAIMS
1. A 3D reconstruction system for performing a determination of a tissue field, the reconstruction system comprising
-a structured light arrangement comprising a projector device configured to project a structured light beam onto at least a section of the tissue field,
-an image acquisition device configured for acquiring frames comprising reflections of said structured light beam from said tissue field, each frame comprises a set of pixel data associated with a time attribute, and
-a computer system, wherein the structured light beam has a centre axis and a cross-sectional light structure comprising light features which are recognizable by said computer system from said set of pixel data, the computer system is configured for storing said projected structured light beam in the form of a reference structured light data set comprising data representing a set of light features, the computer system being configured
• for in real time receiving frames acquired by said image acquisition device,
• for recognizing a plurality of light features of said set of light features including a plurality of primary light features from said received set of pixel data,
• for matching said recognized primary light features with corresponding light features of said projected structured light beam and based on said matches estimating the spatial position of the projector device relative to at least a part of said tissue field, and
• for performing at least one determination of the tissue field based on the spatial position of the projector device and said recognized light features, preferably said spatial position of the projector device is the spatial position determined from the position of the image acquisition device.
2. The 3D reconstruction system of claim 1, wherein the computer system comprises a memory configured for storing said projected structured light beam in the form of said reference structured light data set, said memory optionally storing said reference structured light data set.
3. The 3D reconstruction system of claim 1 or claim 2, wherein said estimated spatial position comprises an estimated distance in 3D space between the tissue field and the projector device.
4. The 3D reconstruction system of claim 1 or claim 2, wherein said estimated spatial position comprises an estimated distance in 3D space between the tissue field and the projector device as well as an estimated relative orientation of the projector device.
5. The 3D reconstruction system of any one of the preceding claims, wherein said estimated spatial position comprises an estimated distance in 3D space between the tissue field and the projector device from a point of view of the image acquisition device, such as a minimum distance between the tissue field and the projector device, optionally the computer system being configured for generating a representation of the spatial position from a point of view different from the image acquisition device.
6. The 3D reconstruction system of any one of the preceding claims, wherein said estimated spatial position comprises an estimated distance and at least one orientation parameter, such as an orientation parameter selected from yaw, roll and pitch, preferably the estimated spatial position comprises two or more, such as three orientation parameters yaw, roll and pitch.
7. The 3D reconstruction system of any one of the preceding claims, wherein said estimated spatial position comprises an estimated shortest or longest distance between a selected point of the projector device and the tissue field.
8. The 3D reconstruction system of any one of the preceding claims, wherein said estimated spatial position comprises an estimated distance represented by 3 values in 3D space and an estimated orientation
represented by 3 values in 3D space, said values in 3D space preferably are values in a common coordinate system.
9. The 3D reconstruction system of any one of the preceding claims, wherein said estimated spatial position comprises an estimated distance represented by 2 sets of values in a common 3D coordinate system, each set of values comprising an x, a y and a z value.
10. The 3D reconstruction system of any one of claims 7-9, wherein the computer system is configured for estimating one or more of the orientation parameters yaw, roll and pitch at least partly based on said matches.
11. The 3D reconstruction system of any one of claims 7-10, wherein one or more of the orientation parameters yaw, roll and pitch is/are transmitted to the computer system and/or determined by the computer system
independently of the light features.
12. The 3D reconstruction system of claim 11, wherein one or more of the orientation parameters yaw, roll and pitch is/are at least partly obtained from a sensor located at or associated with the structured light arrangement and/or the image acquisition device to sense at least one of the orientation
parameters yaw, roll and pitch.
13. The 3D reconstruction system of any one of the preceding claims, wherein the system comprises a sensor arrangement arranged for determining the distance between the projector device and the image acquisition device, said sensor arrangement preferably being configured for determining the distance and the relative orientation between the projector device and the image acquisition device, said sensor arrangement optionally comprises a transmitter and a receiver located at or associated with respectively the projector device and the image acquisition device.
14. The 3D reconstruction system of any one of the preceding claims, wherein the computer system is configured for estimating the homographic transformation between the matched features and based on said
homographic transformation determining the spatial position of the projector device.
15. The 3D reconstruction system of any one of the preceding claims, wherein the computer system comprises an iterative closest feature algorithm for matching said recognized primary light features with corresponding light features of said projected light beam.
16. The 3D reconstruction system of any one of the preceding claims, wherein the computer system is configured for identifying a first number of pairs of matched features with a homographic transformation that
corresponds within a threshold and preferably for applying said homographic transformation as the estimated homographic transformation, preferably said first number of pairs of matched features is at least two, such as at least 3, such as at least 5.
17. The 3D reconstruction system of claim 16, wherein the computer system is configured for identifying one or more second pairs of matched features with a transformation which differs beyond the threshold from the homographic transformation of the first number of pairs of matched features, preferably the computer system being configured for correcting and/or discarding the pixel data representing said recognized feature(s) of the second pairs of matched features.
18. The 3D reconstruction system of any one of the preceding claims, wherein said image acquisition device comprises a single camera configured for acquiring said frames, the sets of pixel data of said respective frames are associated with respective consecutive time attributes representing a time of acquisition.
19. The 3D reconstruction system of claim 18, wherein the computer system is configured for repeating the steps of
• receiving a set of pixel data associated with a time attribute and
representing a single frame,
• recognizing said set of light features from said set of pixel data,
• identifying and/or selecting said primary light features from said
recognized set of light features,
• identifying corresponding light features in said reference structured light data set representing said features of said projected structured light beam,
• matching said primary light features with said corresponding light
features of said projected structured light beam,
• estimating said spatial position of the projector device relative to at least a part of said tissue field, and
• performing at least one determination of said tissue field, wherein the number of recognized primary features preferably is at least 3, such as at least 5, such as from 6 to 100, such as from 8 to 25.
20. The 3D reconstruction system of claim 19, wherein the computer system in one or more of the repeating steps further is configured for performing time shifted matching comprising matching said primary light features with corresponding primary light features of at least one set of pixel data associated with a previous time attribute, and for applying said time shifted matching in the estimation of said spatial position of said projector device, preferably said previous time attribute is up to about 10 seconds, such as up to about 5 seconds, such as up to one second earlier than the time attribute of the set of pixel data processed in the current processing step.
21. The 3D reconstruction system of any one of claims 1-17, wherein said image acquisition device comprises a multi camera configured for acquiring sets of said frames, each set of frames comprises at least two simultaneously acquired frames, the sets of pixel data of said frames of a set of frames are associated with a corresponding time attribute representing the time of acquisition.
22. The 3D reconstruction system of claim 21, wherein the computer system is configured for repeating the steps of
• receiving sets of pixel data associated with a corresponding time
attribute and representing said sets of frames,
• recognizing said set of light features from each of said respective sets of pixel data,
• identifying and/or selecting corresponding primary light features from said recognized set of light features of said respective sets of pixel data,
• identifying corresponding light features in said reference structured light data set representing said features of said projected structured light beam,
• matching said primary light features of each of said respective sets of pixel data with said corresponding light features of said projected structured light beam,
• estimating said spatial position of the projector device relative to at least a part of said tissue field, and
• performing at least one determination of said tissue field, wherein the number of recognized features preferably is at least 2, such as at least 3, such as from 5 to 100, such as from 6 to 25.
23. The 3D reconstruction system of claim 22, wherein the computer system in one or more of the repeating steps further is configured for performing stereo matching comprising matching said primary light features of two or more of said respective sets of pixel data with each other.
24. The 3D reconstruction system of any one of claims 21-23, wherein the computer system in one or more of the repeating steps further is configured for performing time shifted matching comprising matching said primary light features with corresponding primary light features of at least one set of pixel data associated with a previous time attribute, and for applying said time shifted matching in the estimation of said spatial position of said projector device, preferably said previous time attribute is up to about 10 seconds, such as up to about 5 seconds, such as up to one second earlier than the time attribute of the set of pixel data processed in the current processing step.
25. The 3D reconstruction system of any one of claims 21-24, wherein the multi camera comprises a stereo camera comprising two coordinated image acquisition units, said image acquisition units preferably being arranged to acquire wide camera baseline images, preferably said image acquisition units being arranged with a fixed relative distance to each other, preferably of at least about 5 mm, such as at least about 1 cm, such as at least about 2 cm.
26. The 3D reconstruction system of any one of claims 21-25, wherein the multi camera comprises coordinated image acquisition units arranged with substantially parallel optical axes.
27. The 3D reconstruction system of any one of claims 21-26, wherein the multi camera comprises coordinated image acquisition units arranged with optical axes having a relative angle to each other of up to about 45 degrees, such as up to about 30 degrees, such as up to about 15 degrees.
28. The 3D reconstruction system of any one of claims 21-27, wherein the coordinated image acquisition units are arranged to have an overlapping field of view, preferably with an at least about 10 % overlapping field of view, such as at least about 25 % overlapping field of view, such as at least about 50% overlapping field of view.
29. The 3D reconstruction system of any one of claims 21-28, wherein the image acquisition device has a stereo field of view of up to about 60 degrees, such as up to about 50 degrees, such as up to about 40 degrees, such as from about 5 degrees to about 50 degrees.
30. The 3D reconstruction system of any one of the preceding claims, wherein the image acquisition device has a maximal field of view of from at least about 5 degrees to about 160 degrees, such as up to about 120 degrees, such as from about 10 to about 100 degrees, such as from about 15 to about 50 degrees, such as from about 20 to about 40 degrees.
31. The 3D reconstruction system of any one of the preceding claims, wherein the image acquisition device has a field of view adapted to cover the tissue field without including the projector device.
32. The 3D reconstruction system of any one of the preceding claims, wherein the 3D reconstruction system is adapted for operating using a wide projector-camera baseline, preferably the 3D reconstruction system is adapted for operating using a wide projector-camera baseline comprising a projector-camera distance up to about 45 cm, such as up to about 30 cm, such as up to about 15 cm, such as up to about 10 cm, such as up to about 5 cm, such as up to about 3 cm, such as up to about 2 cm.
33. The 3D reconstruction system of claim 32, wherein the 3D reconstruction system is adapted for operating using a varying projector-camera baseline, preferably comprising that the projector and the image acquisition device are movable independently of each other.
34. The 3D reconstruction system of claim 32 or claim 33, wherein the computer system is adapted for determining the projector-camera baseline as a function of time, preferably the computer system is configured for associating a plurality of determined projector-camera baselines with respectively timely corresponding sets of pixel data and preferably to provide that projector-camera baseline data representing the projector-camera baseline for each of said plurality of sets of pixel data is linked with said respective sets of pixel data.
35. The 3D reconstruction system of any one of the preceding claims, wherein the projected structured light beam has an angle of divergence, said computer system stores or is configured for storing said angle of divergence, said angle of divergence is preferably at least about 10 degrees, such as at least about 20 degrees, such as at least about 30 degrees, such as at least about 40 degrees relative to the optical axis of the structured light.
36. The 3D reconstruction system of claim 35, wherein the angle of divergence is substantially rotationally symmetrical.
37. The 3D reconstruction system of claim 35, wherein the angle of divergence is at most two fold symmetrical.
38. The 3D reconstruction system of any one of claims 35-37, wherein the computer system is configured for acquiring said angle of divergence from an operator via a user interface and/or by preprogramming and/or from a database.
39. The 3D reconstruction system of any one of claims 35-38, wherein the computer system is configured for determining said angle of divergence by a calibration comprising projecting the projected structured light beam from a preselected distance and toward a known surface area, recording the reflected structured light and determining the angle of divergence.
40. The 3D reconstruction system of any one of claims 35-39, wherein the angle of divergence is determined as the beam divergence Θ: Θ = 2·arctan((D2 − D1) / (2·L)), wherein D1 is the largest cross-sectional dimension orthogonal to the centre axis of the structured light beam as projected from the projector device or at a first distance from the projector device, D2 is the largest cross-sectional dimension orthogonal to the centre axis of the structured light at a second larger distance from the projector device, e.g. at a surface, and L is the distance between D1 and D2, wherein the distances are determined along the centre axis of the light pattern.
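A direct Python transcription of the divergence relation above; the numeric values are arbitrary examples, not figures from the patent:

import math

def beam_divergence(d1, d2, length):
    """Full divergence angle from two cross-sectional dimensions d1 and d2
    measured a distance `length` apart along the beam's centre axis."""
    return 2.0 * math.atan((d2 - d1) / (2.0 * length))

# Example: the beam widens from 2 mm to 30 mm over 5 cm of travel.
theta = beam_divergence(d1=0.002, d2=0.030, length=0.05)
print(f"divergence: {math.degrees(theta):.1f} degrees")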
41. The 3D reconstruction system of any one of claims 35-40, wherein the angle of divergence is fixed or tunable according to a preprogramed routine and/or by operator instructions.
42. The 3D reconstruction system of any one of claims 35-41, wherein data representing the angle of divergence for the or each projected structured light beam is included in the or in each respective reference structured light data set(s).
43. The 3D reconstruction system of any one of the preceding claims, wherein said cross-sectional light structure of the projected structured light comprises optically distinguishing areas, such as a pattern of areas of light and areas of no-light and/or areas of light of a first quality of a character and areas of light of a second quality of the character, wherein the character advantageously is selected from light intensity, wavelength and/or range of wavelengths.
44. The 3D reconstruction system of claim 43, wherein said cross-sectional light structure comprises a symmetrical or an asymmetrical light pattern, preferably the light pattern comprises a plurality of light dots, an arch shape, ring or semi-ring shaped lines, a plurality of angled lines, a coded structured light configuration or any combinations thereof, preferably the pattern comprises a grid of lines, a crosshatched pattern optionally comprising substantially parallel lines.
45. The 3D reconstruction system of any one of the preceding claims, wherein the light features comprise local light fractions comprising at least one optically detectable attribute, preferably comprising an intensity attribute, a wavelength attribute, a geometric attribute or any combinations thereof, preferably at least one of said attributes is a location attribute and/or an orientation attribute.
46. The 3D reconstruction system of claim 45, wherein each local light fraction has a beam area fraction of up to about 25 % of the area of the cross-sectional light structure, preferably each local light fraction has a beam area fraction of up to about 20 %, such as up to about 10 %, such as up to about 5 %, such as up to about 3 % of the area of the cross-sectional light structure.
47. The 3D reconstruction system of any one of the preceding claims, wherein the centre to centre distance between at least two of the light features is at least about 1 % of the maximal dimension of the cross-sectional light structure, preferably the centre to centre distance between at least two of the light features is at least about 10 %, such as at least about 25 %, such as at least about 50 % of the maximal dimension of the cross-sectional light structure.
48. The 3D reconstruction system of any one of the preceding claims, wherein each of said light features is represented by a local light fraction of the cross-sectional light structure having an optically detectable attribute, preferably each light feature comprises a local and characteristic light fraction of the projected structured light.
49. The 3D reconstruction system of any one of the preceding claims, wherein each of said light features independently of each other comprise a light fraction comprising two or more crossing lines, v-shaped lines, a single dot, a group of dots, a corner section, a pair of parallel lines, a circle or any combinations thereof, preferably each of said light fractions comprise at least one of a location attribute and an orientation attribute.
50. The 3D reconstruction system of any one of the preceding claims, wherein said set of light features comprises predefined light features, said computer system being configured for searching for at least some of the predefined light features in said set of pixel data.
51. The 3D reconstruction system of any one of the preceding claims, wherein said computer system is configured for defining said set of light features from the reference structured light data set representing said projected structured light beam, said computer system being configured for searching for at least some of the defined light features in said received set of pixel data.
52. The 3D reconstruction system of any one of the preceding claims, wherein said computer system is configured to select said primary light features from said recognized light features.
53. The 3D reconstruction system of any one of the preceding claims, wherein said computer system is configured to recognize said primary light features from said received pixel data.
54. The 3D reconstruction system of any one of the preceding claims, wherein said computer system is configured for selecting said primary light features from said recognized light features, said computer system comprises a primary light feature threshold for selecting light features qualified for representing primary light features, said primary light features threshold preferably comprises a location attribute sub-threshold and an orientation attribute sub-threshold.
55. The 3D reconstruction system of any one of the preceding claims, wherein said primary light features are identified as light features comprising at least one of a location attribute and an orientation attribute, preferably a plurality of said primary light features are identified as light features comprising both a location attribute and an orientation attribute.
56. The 3D reconstruction system of any one of the preceding claims 49-55, wherein said location attribute is represented by a light point, such as the cross of crossing lines, the tip of v-shaped lines, the position of a constraint along a line, the amplitude of a wave shaped line or the centre of a light dot.
57. The 3D reconstruction system of any one of the preceding claims 49-56, wherein said orientation attribute is represented by one or more asymmetrical geometrical shapes, such as the orientation of the lines of crossing lines, the orientation of the lines of v-shaped lines, orientations of the wave of a wave shaped line or the orientation of imaginary lines between a group of dots.
58. The 3D reconstruction system of any one of the preceding claims, wherein the image acquisition device comprises at least one image acquisition unit comprising a pixel sensor array, such as a charge-coupled device (CCD) image sensor, or a complementary metal-oxide-semiconductor (CMOS) image sensor.
59. The 3D reconstruction system of claim 58, wherein the at least one image acquisition unit has a frame rate of at least about 10 Hz, such as at least about 25 Hz, such as at least about 50 Hz, such as at least about 75 Hz, preferably where the image acquisition device comprises two or more image acquisition units, the image acquisition units are timely coordinated to acquire images simultaneously, preferably the image acquisition units have the same frame rate.
60. The 3D reconstruction system of any one of the preceding claims, wherein said structured light arrangement comprises one or more optical filters, a diffractive optical element (DOE), a spatial light modulator, a multi-order diffractive lens, a holographic lens, a Fresnel lens, a mirror arrangement, a digital micromirror device (DMD) and/or a computer regulated optical element, such as a computer regulated mechanical optical element, e.g. a MEMS (micro-electro-mechanical systems) element.
61. The 3D reconstruction system of any one of the preceding claims, wherein said structured light arrangement comprises a fiber optic probe comprising the projector device configured to project a structured light beam onto at least a section of the tissue field.
62. The 3D reconstruction system of any one of the preceding claims, wherein said structured light arrangement is mounted to a minimally invasive surgical instrument, preferably the structured light arrangement and the minimally invasive surgical instrument are in the form of a fiber optic probe instrument assembly of any one of claims 111-116.
63. The 3D reconstruction system of any one of the preceding claims, wherein said structured light arrangement is configured for being pulsed, preferably having a pulse duration and a pulse frequency.
64. The 3D reconstruction system of claim 63, wherein said structured light arrangement has a pulse duration which is from about half to about twice an inter-pulse time between pulses, such as from about 0.8 to about 1.2 times the inter-pulse time.
65. The 3D reconstruction system of claim 63 or claim 64, wherein said structured light arrangement has a pulse rate adjusted relative to the frame rate of said image acquisition unit, said pulse rate of said structured light arrangement is advantageously adjusted such that the acquisition of the frames comprising reflections of the structured light beam from the tissue field is performed when the structured light beam is on.
66. The 3D reconstruction system of any one of claims 63-65, wherein said structured light arrangement has a pulse rate adjusted relative to the frame rate of said image acquisition unit, to provide that said 3D reconstruction system is configured for acquiring said plurality of frames comprising reflected structured light and for acquiring a plurality of background frames between pulses of said projected structured light.
67. The 3D reconstruction system of claim 66, wherein said structured light arrangement has a pulse rate adjusted relative to the frame rate of said image acquisition unit to provide that at least about half of the total acquired frames are said frames comprising reflected structured light.
68. The 3D reconstruction system of any one of claims 63-67, wherein said structured light arrangement has a pulse rate adjusted relative to the frame rate of said image acquisition unit to provide that the image acquisition units acquire one or more background frames during the inter-pulse time, preferably the structured light arrangement has a pulse rate adjusted relative to the frame rate of said image acquisition unit to provide that the image acquisition units acquire every 2nd, 3rd, 4th, 5th or 6th frame as a background frame during the inter-pulse time and preferably the remaining frames during the pulse time when the structured light beam is on.
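A minimal sketch of the interleaving described in claims 66-68, assuming the pulse rate is locked to the camera frame rate so that every n-th frame falls in the inter-pulse gap; the ratio and container types are assumptions for illustration:

def split_frames(frames, background_every=3):
    """Partition a frame sequence so that every `background_every`-th frame
    is a background frame (beam off) and the rest carry structured light."""
    structured, background = [], []
    for i, frame in enumerate(frames):
        if i % background_every == background_every - 1:
            background.append((i, frame))   # acquired between pulses
        else:
            structured.append((i, frame))   # acquired while the beam is on
    return structured, background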
69. The 3D reconstruction system of any one of claims 63-68, wherein said structured light arrangement has a structured light controller for adjusting the pulse rate and preferably the pulse length of said structured light beam, said structured light controller preferably being in data communication with the computer system or forming part of the computer system.
70. The 3D reconstruction system of any one of claims 63-69, wherein said computer system is configured for subtracting pixel values of one or more background frames from said respective sets of pixel data, thereby reducing noise, preferably the computer system is configured for subtracting pixel values of the timely nearest background frame from each of said respective sets of pixel data.
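Claim 70's noise reduction amounts to subtracting the timely nearest beam-off frame from each structured-light frame. A minimal sketch, assuming frames arrive as timestamped 8-bit arrays:

import numpy as np

def subtract_nearest_background(structured, background):
    """structured/background: lists of (timestamp, HxW uint8 array).
    Returns background-subtracted structured-light frames."""
    out = []
    bg_times = np.array([t for t, _ in background])
    for t, frame in structured:
        nearest = int(np.argmin(np.abs(bg_times - t)))   # timely nearest frame
        bg = background[nearest][1]
        # Subtract in a signed type to avoid uint8 wrap-around, then clip.
        diff = frame.astype(np.int16) - bg.astype(np.int16)
        out.append((t, np.clip(diff, 0, 255).astype(np.uint8)))
    return out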
71. The 3D reconstruction system of any one of the preceding claims, wherein said image acquisition device comprises and/or is associated with an optical filter, such as a wavelength filter and/or a polaroid filter, for example a linear or a circular polarizer.
72. The 3D reconstruction system of any one of the preceding claims, wherein the time attribute is a relative time attribute or an actual time attribute or a combination thereof.
73. The 3D reconstruction system of any one of the preceding claims, wherein the computer system is configured for communicating with said image acquisition device and preferably said structured light arrangement by wire or wirelessly, the computer system preferably comprises at least one processor, at least one memory, and at least one user interface.
74. The 3D reconstruction system of any one of the preceding claims, wherein the computer system is configured for receiving patient data via a user interface and/or for acquiring patient data from a database, said patient data preferably comprise pre-operation data and/or inter-operation data, such as data obtained by a scanning, such as a CT scanning, an MR scanning, an ultrasound scanning, a fluorescence imaging and/or a PET scanning, and/or such as data estimated and/or calculated for groups of patients.
75. The 3D reconstruction system of any one of the preceding claims, wherein said computer system is configured for receiving and storing reference structured light data set representing said structured light beam including said set of light features, said computer system is preferably configured for receiving said reference structured light data set via a calibration step or via a user interface, said computer system is preferably configured for using said reference structured light data set for recognizing said light features from pixel data and preferably for identifying and/or selecting said primary features.
76. The 3D reconstruction system of any one of the preceding claims, wherein said computer system is configured for matching said primary features and corresponding features of said reference structured light data set using trigonometric algorithms.
77. The 3D reconstruction system of any one of the preceding claims, wherein said computer system is configured for estimating said spatial position of the projector device based on said matches between said primary features and corresponding features of said reference structured light data set.
78. The 3D reconstruction system of any one of the preceding claims, wherein said computer system is configured for performing said at least one determination of the tissue field based on the spatial position of the projector device and said recognized light features by using trigonometric algorithms.
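One standard example of the kind of trigonometric algorithm claims 76-78 refer to is ray-ray triangulation: each matched primary light feature defines a projector ray and a camera ray, and the 3D point is taken as the midpoint of the shortest segment between them. The routine below is a textbook stand-in under those assumptions, not a disclosed implementation:

import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two skew rays, each given
    as an origin and a unit direction; a common way to intersect a camera
    ray with the matched projector-feature ray."""
    # Solve [d1, -d2] [s, t]^T = o2 - o1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)            # 3x2 system matrix
    s, t = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    p1, p2 = o1 + s * d1, o2 + t * d2          # closest points on each ray
    return 0.5 * (p1 + p2)

# Example: camera at the origin, projector 2 cm to the side (wide baseline).
cam_o = np.zeros(3); cam_d = np.array([0.0, 0.0, 1.0])
prj_o = np.array([0.02, 0.0, 0.0])
prj_d = np.array([-0.02, 0.0, 0.10]); prj_d /= np.linalg.norm(prj_d)
print(triangulate(cam_o, cam_d, prj_o, prj_d))   # ~[0, 0, 0.10]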
79. The 3D reconstruction system of any one of the preceding claims, wherein said computer system is configured for performing said at least one determination of the tissue field, wherein said at least one determination comprises a distance between the projector device and at least a part of the tissue field, a 3D structure of at least a part of the tissue field, and/or a size determination of at least a part of the tissue field, such as a size, e.g. a periphery and/or a volume, of an organ section.
80. The 3D reconstruction system of any one of the preceding claims, wherein said computer system is configured for performing said at least one determination of the tissue field based on pixel data having a corresponding time attribute.
81. The 3D reconstruction system of any one of the preceding claims, wherein said computer system is configured for performing said at least one determination of the tissue field based on pixel data having two or more time attributes.
82. The 3D reconstruction system of claim 81, wherein said at least one determination of the tissue field comprises determining a local movement of the tissue field, such as a pulsating movement and/or a movement caused by manipulation of the tissue e.g. caused by the surgeon.
83. The 3D reconstruction system of any one of the preceding claims, wherein said computer system is configured to display said at least one determination of the tissue field on a display, such as a screen.
84. The 3D reconstruction system of any one of the preceding claims, wherein said projector device of said structured light arrangement and said image acquisition device are fixed relative to each other, preferably with an angle of up to about 45 degrees.
85. The 3D reconstruction system of any one of the preceding claims, wherein said projector device of said structured light arrangement and said image acquisition device are independently movable.
86. The 3D reconstruction system of any one of the preceding claims, wherein said system comprises a sensor arrangement for determining the position and orientation of the image acquisition device relative to said projector device.
87. The 3D reconstruction system of any one of the preceding claims, wherein the computer system is configured for calibrating the position and orientation of the image acquisition device relative to said projector device.
88. The 3D reconstruction system of any one of the preceding claims, wherein said computer system is configured for repeating in real time the determination of the tissue field for consecutive sets of pixel data of said received frames.
89. The 3D reconstruction system of any one of the preceding claims, wherein said structured light beam is substantially constant for each determination of the tissue field.
90. The 3D reconstruction system of any one of the preceding claims 1-89, wherein said structured light beam is changed from one determination of the tissue field to a next determination of the tissue field.
91. The 3D reconstruction system of any one of the preceding claims, wherein said computer system is configured for running a routine in real time comprising repeating steps i-iii:
i. calculating a spatial position of the projector device from one or more sets of pixel data having corresponding and/or subsequent time attribute(s),
ii. calculating said determination of the tissue field based on said spatial position of the projector device and said light features recognized from pixel data having corresponding and/or subsequent time attribute(s),
iii. displaying said determination of the tissue field.
92. The 3D reconstruction system of any one of the preceding claims, wherein said computer system is configured for running a routine in real time comprising repeating steps i-iii:
i. calculating a spatial position of the projector device from one or more sets of pixel data having a corresponding time attribute,
ii. calculating said determination of the tissue field based on said spatial position of the projector device and said light features recognized from pixel data having a corresponding time attribute,
iii. displaying said determination of the tissue field.
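The real-time routine of claims 91-92 reduces to a loop over timestamped sets of pixel data. In the skeleton below every step is passed in as a callable, since the claims describe what each step computes but not how; all names are placeholders:

def run_realtime(frame_source, steps, display):
    """frame_source yields (timestamp, pixel_data); steps is a tuple of
    (recognize, match, estimate_pose, reconstruct) callables supplied by
    the caller, standing in for the undisclosed implementations."""
    recognize, match, estimate_pose, reconstruct = steps
    for timestamp, pixel_data in frame_source:
        primaries = recognize(pixel_data)                 # light features
        pose = estimate_pose(match(primaries))            # step i
        display(timestamp, reconstruct(pose, primaries))  # steps ii and iii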
93. The 3D reconstruction system of any one of the preceding claims, wherein at least said projector device of said structured light arrangement and at least said image acquisition unit(s) of said image acquisition device are mounted to or integrated with separate minimally invasive surgical instruments, such as minimally invasive surgical instruments independently of each other selected from a penetrator, an endoscope, an ultrasound transducer instrument or an invasive surgical instrument comprising a grasper, a suture grasper, a stapler, forceps, a dissector, hooks, scissors, a suction instrument, a clamp instrument, an electrode, a curette, ablators, scalpels, a trocar, a biopsy instrument and a retractor instrument, or any combination comprising one or more of the abovementioned.
94. The 3D reconstruction system of any one of the preceding claims, wherein at least said projector device of said structured light arrangement is mounted to or integrated with an ultrasound transducer instrument, said structured light arrangement and said ultrasound transducer instrument preferably being in the form of a structured light ultrasound instrument according to any one of claims 117-123.
95. The 3D reconstruction system of any one of the preceding claims, wherein said structured light arrangement comprises a light source optically connected to the projector device, said light source preferably comprises a laser light source, such as a laser emitting diode or a fibre laser.
96. The 3D reconstruction system of any one of the preceding claims, wherein said system comprises two or more structured light arrangements, preferably two or more structured light arrangements are adapted to operate simultaneously, independently of each other or asynchronously.
97. The 3D reconstruction system of claim 95 or claim 96, wherein said at least one light source is an IR (infrared) light containing light source comprising light waves in the interval of from about 0.7 μm to about 4 μm, such as below 2 μm.
98. The 3D reconstruction system of any one of claims 95-97, wherein said computer system is configured for determining one or more properties of a tissue site in the tissue field based on the wavelength of light reflected from said tissue site, preferably the computer system is adapted to determine the oxygen level of a tissue site, changes thereof and the type of tissue at the tissue site, preferably the tissue site may be the entire tissue field, an organ at the tissue field, a section of the tissue field and/or a tissue structure and/or another structure at a preselected depth of the tissue site, such as a vein below the tissue surface.
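One textbook route to the oxygen-level determination claim 98 mentions is a two-wavelength Beer-Lambert estimate exploiting the differing absorption of oxy- and deoxyhaemoglobin. The extinction coefficients below are rough literature-style values and the whole method is an illustrative assumption, not disclosed by the patent:

import numpy as np

# Approximate molar extinction coefficients (cm^-1/M) at ~660 nm and ~940 nm.
EXT = {"Hb":   {660: 3200.0, 940: 700.0},
       "HbO2": {660: 320.0,  940: 1200.0}}

def oxygen_saturation(att_660, att_940):
    """Solve attenuation = eps_HbO2*C_HbO2 + eps_Hb*C_Hb at two wavelengths,
    then SO2 = C_HbO2 / (C_HbO2 + C_Hb). Equal path length is assumed."""
    A = np.array([[EXT["HbO2"][660], EXT["Hb"][660]],
                  [EXT["HbO2"][940], EXT["Hb"][940]]])
    c_hbo2, c_hb = np.linalg.solve(A, np.array([att_660, att_940]))
    return c_hbo2 / (c_hbo2 + c_hb)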
99. The 3D reconstruction system of any one of claims 95-98, wherein said at least one light source is wavelength tunable, the wavelength(s) of said light source preferably being selectable by the computer system and/or the surgeon, more preferably the wavelength(s) of said light source being selectable based on a feedback signal from the computer system.
100. The 3D reconstruction system of claim 98 or claim 99, wherein said computer system is configured for determining a boundary about a target site having at least one different property than the tissue surrounding said target site, said computer system preferably being configured for determining a size of said target site based on said determined boundary, such as a periphery, an area or preferably a volume.
101. The 3D reconstruction system of any one of the preceding claims, wherein said computer system is configured for performing frame stitching comprising stitching at least two sets of pixel data of said frames comprising reflections of said structured light beam from said tissue field, said stitched set of pixel data preferably comprises a stitched image data set representing a larger tissue field than each set of pixel data.
102. The 3D reconstruction system of claim 101, wherein said frame stitching comprises stitching sets of pixel data associated with different time attributes, said different time attributes preferably being consecutive time attributes, preferably the computer system being configured for continuously stitching frames received in real time to said stitched image data set.
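Frame stitching as in claims 101-102 can be prototyped with OpenCV's generic scan stitcher; this off-the-shelf call is a stand-in for the patented pipeline, and the file names are placeholders:

import cv2

# Consecutive frames covering overlapping parts of the tissue field.
frames = [cv2.imread(p) for p in ("frame_000.png", "frame_001.png",
                                  "frame_002.png")]
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, stitched = stitcher.stitch(frames)
if status == cv2.Stitcher_OK:
    cv2.imwrite("stitched_tissue_field.png", stitched)  # larger composite field
else:
    print("stitching failed:", status)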
103. The 3D reconstruction system of any one of the preceding claims, wherein the system is configured for performing a plurality of topological determinations of said tissue field and said computer system is configured for performing topological stitching comprising stitching at least two of said topological determinations, such as from 3 to 100 of said topological determinations, such as from 5 to 50 of said topological determinations.
104. The 3D reconstruction system of claim 103, wherein the computer system is configured for performing topological stitching of a plurality of topological determinations obtained from consecutive acquired frames.
105. A method for performing a determination of a tissue field, the method comprises:
• projecting a structured light beam comprising a set of recognizable light features from a projector device and onto at least a section of the tissue field,
• acquiring frames comprising reflections of said structured light beam from said tissue field,
• recognizing a plurality of said light features including a plurality of primary light features from said reflections of said structured light beam,
• matching said recognized primary light features with corresponding light features of said projected structured light beam,
• based on said matches estimating the spatial position of the projector device relative to at least a part of said tissue field, and
• performing at least one determination of the tissue field based on the spatial position of the projector device and said recognized light features.
106. A robot comprising the 3D reconstruction system as claimed in any one of the preceding claims.
107. A robot as claimed in claim 106, wherein said projector device of said structured light arrangement and said image acquisition device are disposed on individually movable arms of said robot, preferably said projector device being disposed on a surgical instrument held by one of said individually movable arms, said robot preferably being a surgery robot adapted for performing minimally invasive surgery.
108. A robot as claimed in claim 106 or claim 107, wherein said robot has a controller processing system comprising or in data communication with said computer system, said computer system comprises a feedback algorithm for controlling movements of at least one of said individually movable arms of said robot in response to the determinations of the tissue field.
109. A robot as claimed in any one of claims 106-108, wherein said 3D reconstruction system comprises a sensor arrangement for determining the position and orientation of the projector device relative to the image acquisition device and/or relative to a location at the robot.
110. A robot as claimed in any one of claims 106-109, wherein said 3D reconstruction system comprises a sensor arrangement for determining the position and orientation of the image acquisition device relative to the projector device and/or relative to a location at the robot.
111. A fiber optic probe instrument assembly suitable for use in minimally invasive surgery, the assembly comprises a fiber optic probe and a surgery instrument, the fiber optic probe comprises a structured light generating and projecting device, a bundle of optical fibers and a projector device, the structured light generating and projecting device is configured for generating a structured light, the bundle of fibers has a light receiving end and a light emitting end and is arranged for receiving at least a portion of the structured light from the structured light generating and projecting device at its light receiving end and for delivering at least a portion of the light to the projector device and at least said projector device is mounted to or integrated with said surgery instrument.
112. The fiber optic probe instrument assembly of claim 111, wherein the bundle of fibers comprises at least 10 optical fibers, such as at least 50 optical fibers, such as from about 100 to about 2000 optical fibers, such as from about 200 to about 1000 optical fibers.
113. The fiber optic probe instrument assembly of claim 111 or claim 112, wherein the fiber optic probe instrument assembly comprises one or more lenses and/or objectives arranged between the structured light device and the bundle of fibers, preferably for focusing the structured light to be received by the light receiving end of the fiber bundle.
114. The fiber optic probe instrument assembly of any one of claims 111-113, wherein the fiber optic probe instrument assembly comprises one or more lenses and/or objectives arranged between the light emitting end of the bundle of fibers and the projector, preferably the projector comprises a micro lens.
115. The fiber optic probe instrument assembly of any one of claims 111-114, wherein the light emitting end of the bundle of fibers is arranged in an encasing to form a probe-head comprising the projector device, preferably the probe-head comprises one or more lenses, such as micro lenses.
116. The fiber optic probe instrument assembly of claim 115, wherein the probe-head has a maximal cross-sectional diameter of up to about 1.2 cm, such as up to about 1 cm, such as up to about 8 mm, such as up to about 6 mm.
117. A structured light ultrasound instrument suitable for use in minimally invasive surgery, the structured light ultrasound instrument comprises an ultrasound transducer instrument and a structured light arrangement, wherein the structured light arrangement comprises a projector device for projecting a structured light beam to a tissue field, wherein at least the projector device is mounted to or integrated with the ultrasound transducer instrument.
118. The structured light ultrasound instrument of claim 117, wherein the ultrasound transducer instrument has a distal portion with a distal end and an ultrasound head located at said distal end.
119. The structured light ultrasound instrument of claim 118, wherein the ultrasound head has a maximal cross-sectional diameter of up to about 2 cm, such as up to about 1.5 cm, such as up to about 1 cm, such as up to about 8 mm, such as up to about 6 mm.
120. The structured light ultrasound instrument of claim 118 or claim 119, wherein the ultrasound transducer instrument has an articulating length section at said distal portion, said articulating length section is preferably arranged proximally to said ultrasound head.
121. The structured light ultrasound instrument of any one of claims 118-120, wherein the projector device is located at said distal portion of said ultrasound transducer instrument.
122. The structured light ultrasound instrument of claim 120, wherein the projector device is located distally to said articulating length section.
123. The structured light ultrasound instrument of claim 120, wherein the projector device is located proximally to said articulating length section.
124. A minimally invasive surgery navigation system comprising a structured light ultrasound instrument of any one of claims 117-123, an endoscope and a computer system, the endoscope comprises an image acquisition device configured for recording data representing reflected rays from the emitted pattern and for transmitting the rays reflected from a surface section of a tissue field to the computer system, the computer system is configured
• for receiving the data representing reflected rays from the endoscope,
• for receiving, storing and/or determining 2D and/or 3D surface data representing the surface section of the minimally invasive surgery cavity,
• for calculating the position and orientation of said ultrasound transceiver probe using said data representing reflected rays,
• for obtaining a 2D and/or 3D ultrasound image from the ultrasound transceiver probe, and
• for correlating in 3D orientation, spatial position and size the 2D and/or 3D surface data to the ultrasound image.
EP18771352.4A 2017-03-20 2018-03-20 A 3d reconstruction system Pending EP3599982A4 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DKPA201770193 2017-03-20
DKPA201770430 2017-06-01
PCT/DK2018/050048 WO2018171851A1 (en) 2017-03-20 2018-03-20 A 3d reconstruction system

Publications (2)

Publication Number Publication Date
EP3599982A1 true EP3599982A1 (en) 2020-02-05
EP3599982A4 EP3599982A4 (en) 2020-12-23

Family

ID=63585007

Family Applications (1)

Application Number Title Priority Date Filing Date
EP18771352.4A Pending EP3599982A4 (en) 2017-03-20 2018-03-20 A 3d reconstruction system

Country Status (2)

Country Link
EP (1) EP3599982A4 (en)
WO (1) WO2018171851A1 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200015899A1 (en) 2018-07-16 2020-01-16 Ethicon Llc Surgical visualization with proximity tracking features
CN109870386B (en) * 2019-04-03 2024-06-07 浙江省工程物探勘察设计院有限公司 Sample density testing system for rock-soil investigation test
CN110349237B (en) * 2019-07-18 2021-06-18 华中科技大学 Fast volume imaging method based on convolutional neural network
CN110495900B (en) * 2019-08-19 2023-05-26 武汉联影医疗科技有限公司 Image display method, device, equipment and storage medium
US11776144B2 (en) 2019-12-30 2023-10-03 Cilag Gmbh International System and method for determining, adjusting, and managing resection margin about a subject tissue
US12002571B2 (en) 2019-12-30 2024-06-04 Cilag Gmbh International Dynamic surgical visualization systems
US11284963B2 (en) 2019-12-30 2022-03-29 Cilag Gmbh International Method of using imaging devices in surgery
US11759283B2 (en) 2019-12-30 2023-09-19 Cilag Gmbh International Surgical systems for generating three dimensional constructs of anatomical organs and coupling identified anatomical structures thereto
US11896442B2 (en) 2019-12-30 2024-02-13 Cilag Gmbh International Surgical systems for proposing and corroborating organ portion removals
US11744667B2 (en) 2019-12-30 2023-09-05 Cilag Gmbh International Adaptive visualization by a surgical system
US11832996B2 (en) 2019-12-30 2023-12-05 Cilag Gmbh International Analyzing surgical trends by a surgical system
US11484245B2 (en) 2020-03-05 2022-11-01 International Business Machines Corporation Automatic association between physical and visual skin properties
US11659998B2 (en) * 2020-03-05 2023-05-30 International Business Machines Corporation Automatic measurement using structured lights
CN111637850B (en) * 2020-05-29 2021-10-26 南京航空航天大学 Self-splicing surface point cloud measuring method without active visual marker
WO2022209156A1 (en) * 2021-03-30 2022-10-06 ソニーグループ株式会社 Medical observation device, information processing device, medical observation method, and endoscopic surgery system
US20230062782A1 (en) * 2021-09-02 2023-03-02 Cilag Gmbh International Ultrasound and stereo imaging system for deep tissue visualization
CN116740325B (en) * 2023-08-16 2023-11-28 广州和音科技文化发展有限公司 Image stitching method and system based on exhibition hall scene three-dimensional effect design

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9216743D0 (en) * 1992-08-07 1992-09-23 Epstein Ruth A device to calibrate adsolute size in endoscopy
US6503195B1 (en) 1999-05-24 2003-01-07 University Of North Carolina At Chapel Hill Methods and systems for real-time structured light depth extraction and endoscope using real-time structured light depth extraction
JP3887377B2 (en) * 2004-01-19 2007-02-28 株式会社東芝 Image information acquisition apparatus, image information acquisition method, and image information acquisition program
US7824328B2 (en) 2006-09-18 2010-11-02 Stryker Corporation Method and apparatus for tracking a surgical instrument during surgery
JP5118867B2 (en) * 2007-03-16 2013-01-16 オリンパス株式会社 Endoscope observation apparatus and operation method of endoscope
JP2009240621A (en) 2008-03-31 2009-10-22 Hoya Corp Endoscope apparatus
IT1401669B1 (en) 2010-04-07 2013-08-02 Sofar Spa ROBOTIC SURGERY SYSTEM WITH PERFECT CONTROL.
DE102010050227A1 (en) 2010-11-04 2012-05-10 Siemens Aktiengesellschaft Endoscope with 3D functionality
EP2689708B1 (en) 2011-04-27 2016-10-19 Olympus Corporation Endoscopic apparatus and measurement method
US11510600B2 (en) * 2012-01-04 2022-11-29 The Trustees Of Dartmouth College Method and apparatus for quantitative and depth resolved hyperspectral fluorescence and reflectance imaging for surgical guidance
KR102088541B1 (en) 2012-02-02 2020-03-13 그레이트 빌리프 인터내셔널 리미티드 Mechanized multi­instrument surgical system
WO2013163391A1 (en) 2012-04-25 2013-10-31 The Trustees Of Columbia University In The City Of New York Surgical structured light system
US20130296712A1 (en) 2012-05-03 2013-11-07 Covidien Lp Integrated non-contact dimensional metrology tool
TWI533675B (en) 2013-12-16 2016-05-11 國立交通大學 Optimal dynamic seam adjustment system and method for images stitching
US11116383B2 (en) 2014-04-02 2021-09-14 Asensus Surgical Europe S.à.R.L. Articulated structured light based-laparoscope
US20150371420A1 (en) 2014-06-19 2015-12-24 Samsung Electronics Co., Ltd. Systems and methods for extending a field of view of medical images
BR112017005251A2 (en) * 2014-09-17 2017-12-12 Taris Biomedical Llc Methods and Systems for Bladder Diagnostic Mapping
GB2545588B (en) 2014-09-22 2018-08-15 Shanghai United Imaging Healthcare Co Ltd System and method for image composition
EP3204420B1 (en) 2014-10-10 2020-09-02 The United States of America, as represented by The Secretary, Department of Health and Human Services Methods to eliminate cancer stem cells by targeting cd47
US10368720B2 (en) 2014-11-20 2019-08-06 The Johns Hopkins University System for stereo reconstruction from monoscopic endoscope images
JP6450589B2 (en) 2014-12-26 2019-01-09 株式会社モルフォ Image generating apparatus, electronic device, image generating method, and program
US9866815B2 (en) * 2015-01-05 2018-01-09 Qualcomm Incorporated 3D object segmentation
JP6618704B2 (en) * 2015-04-10 2019-12-11 オリンパス株式会社 Endoscope system
US10512508B2 (en) 2015-06-15 2019-12-24 The University Of British Columbia Imagery system

Also Published As

Publication number Publication date
EP3599982A4 (en) 2020-12-23
WO2018171851A1 (en) 2018-09-27

Similar Documents

Publication Publication Date Title
EP3599982A1 (en) A 3d reconstruction system
US20230380659A1 (en) Medical three-dimensional (3d) scanning and mapping system
US20220241013A1 (en) Quantitative three-dimensional visualization of instruments in a field of view
US20210345855A1 (en) Real time correlated depiction system of surgical tool
US11357593B2 (en) Endoscopic imaging with augmented parallax
CN107260117B (en) Chest endoscope for surface scan
EP3359012B1 (en) A laparoscopic tool system for minimally invasive surgery
Maier-Hein et al. Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
US9220399B2 (en) Imaging system for three-dimensional observation of an operative site
CN118284386A (en) Surgical system with intra-and extra-luminal cooperative instruments
CN112741689B (en) Method and system for realizing navigation by using optical scanning component
CN111281534B (en) System and method for generating a three-dimensional model of a surgical site
US20230196595A1 (en) Methods and systems for registering preoperative image data to intraoperative image data of a scene, such as a surgical scene
US20230062782A1 (en) Ultrasound and stereo imaging system for deep tissue visualization
CN118302122A (en) Surgical system for independently insufflating two separate anatomical spaces
CN118284368A (en) Surgical system with devices for endoluminal and extraluminal access
EP4228492A1 (en) Stereoscopic endoscope with critical structure depth estimation
EP4149340A1 (en) Systems and methods for image mapping and fusion during surgical procedures

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20191002

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20201119

RIC1 Information provided on ipc code assigned before grant

Ipc: A61B 1/05 20060101ALI20201113BHEP

Ipc: G06T 7/00 20170101ALI20201113BHEP

Ipc: A61B 1/04 20060101AFI20201113BHEP

Ipc: A61B 1/00 20060101ALI20201113BHEP

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: CILAG GMBH INTERNATIONAL

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 7/00 20060101ALI20230220BHEP

Ipc: A61B 1/00 20060101ALI20230220BHEP

Ipc: A61B 1/05 20060101ALI20230220BHEP

Ipc: A61B 1/04 20060101AFI20230220BHEP

INTG Intention to grant announced

Effective date: 20230313

GRAJ Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted

Free format text: ORIGINAL CODE: EPIDOSDIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

INTC Intention to grant announced (deleted)
17Q First examination report despatched

Effective date: 20230718