CN115696069A - Imaging method for 3D endoscope imaging system and 3D endoscope imaging system

Info

Publication number: CN115696069A
Application number: CN202211328400.6A
Authority: CN (China)
Prior art keywords: image data, display mode, information, brightness, dimensional
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 潘维枫, 李洋, 吴晓华
Assignees: Shenzhen Mindray Bio-Medical Electronics Co., Ltd.; Wuhan Mindray Medical Technology Research Institute Co., Ltd.
Application filed by Shenzhen Mindray Bio-Medical Electronics Co., Ltd. and Wuhan Mindray Medical Technology Research Institute Co., Ltd.; priority to CN202211328400.6A; publication of CN115696069A


Abstract

An imaging method for a 3D endoscopic imaging system, and a 3D endoscopic imaging system. The method comprises: acquiring first image data collected by a first image sensor and second image data collected by a second image sensor, and acquiring attitude data collected by an attitude sensor, the attitude data indicating the spatial positions corresponding to the first image data and the second image data; determining a currently applied display mode from among a first display mode and a second display mode at least according to the attitude data; when the first display mode is applied, generating stereoscopic image data according to the first image data and the second image data and controlling a display to display a stereoscopic image; when the second display mode is applied, generating two-dimensional image data according to the first image data and the second image data, controlling the display to display the two-dimensional image, and displaying the two-dimensional image upright according to the attitude data. The two-dimensional image fuses the information of the two channels of image data, so that picture content is preserved even when one channel is abnormal.

Description

Imaging method for 3D endoscope imaging system and 3D endoscope imaging system
Technical Field
The invention relates to the field of medical devices, and in particular to an imaging method for a 3D endoscope imaging system and to a 3D endoscope imaging system.
Background
During minimally invasive surgery, an endoscope can present the tissue morphology of internal organs and pathological changes inside the patient's body, facilitating diagnosis and the performance of operations; it is one of the important tools of modern medical diagnosis and treatment. A traditional endoscope provides only two-dimensional planar images, so during surgery the doctor can judge the depth of organs in a body cavity only from experience, which makes the traditional endoscope suitable only for simple operations. Compared with a traditional endoscope, a stereoscopic endoscope that allows stereoscopic observation of a region can provide three-dimensional information and better reflect the real scene, giving the doctor a three-dimensional visual effect during the operation, providing vivid visual perception, and helping the doctor operate the endoscope more accurately.
Displaying a stereoscopic endoscopic image requires collecting two channels of image signals. When the display mode is switched from the stereoscopic image to the two-dimensional image, usually only one of the two channels is output, and if that channel is occluded or dirty, picture content is lost. Moreover, the stereoscopic effect of a stereoscopic endoscopic image depends on the horizontal parallax of the images: when a stereoscopic image is displayed, if the operator's head habitually rotates to follow the image, the stereoscopic effect is lost and the picture appears ghosted.
Disclosure of Invention
This summary introduces a selection of concepts in simplified form that are described in further detail in the detailed description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
A first aspect of embodiments of the present invention provides an imaging method for a 3D endoscopic imaging system, including:
acquiring first image data collected by a first image sensor of the 3D endoscopic imaging system and second image data collected by a second image sensor of the 3D endoscopic imaging system, and acquiring attitude data collected by an attitude sensor, wherein the attitude data indicates the spatial positions corresponding to the first image data and the second image data;
determining a currently applied display mode from among a first display mode and a second display mode at least according to the attitude data;
when the currently applied display mode is determined to be the first display mode, generating stereoscopic image data according to the first image data and the second image data, and controlling a display to display a stereoscopic image according to the stereoscopic image data;
when the currently applied display mode is determined to be the second display mode, generating two-dimensional image data according to the first image data and the second image data, controlling the display to display a two-dimensional image according to the two-dimensional image data, and causing the two-dimensional image to be displayed upright according to the attitude data.
In some embodiments, the method further comprises determining a currently applied display mode among the first display mode and the second display mode according to a user instruction, wherein:
when the user instruction indicates that the first display mode is adopted, determining the first display mode as a currently applied display mode to display the stereoscopic image;
when the user instruction indicates that the second display mode is adopted, the second display mode is determined as the currently applied display mode so as to display the two-dimensional image.
In some embodiments, the 3D endoscopic imaging system includes an endoscope including an insertion portion and an operation portion, the first image sensor and the second image sensor being provided in the insertion portion, the insertion portion being configured to be inserted into a site to be observed of a patient to acquire the first image data and the second image data of the site to be observed, the attitude data including a rotation angle of the insertion portion;
the determining a currently applied display mode from among a first display mode and a second display mode at least according to the attitude data comprises:
when the difference between the rotation angle and a reference angle is smaller than a preset threshold, determining the first display mode as the currently applied display mode to display the stereoscopic image;
and when the difference between the rotation angle and the reference angle is greater than or equal to the preset threshold, determining the second display mode as the currently applied display mode to display the two-dimensional image.
In some embodiments, the reference angle is an angle at which there is no vertical parallax between the first image data and the second image data.
In some embodiments, the generating two-dimensional image data from the first image data and the second image data comprises:
and determining the brightness information of the two-dimensional image data according to the first brightness information of the first image data and the second brightness information of the second image data.
In some embodiments, the determining brightness information of the two-dimensional image data according to the first brightness information of the first image data and the second brightness information of the second image data comprises:
detecting whether an anomaly exists in the first image data or the second image data;
when it is detected that the first image data is abnormal, adjusting the second brightness information of the second image data according to the first brightness information of the first image data to obtain the two-dimensional image data;
when it is detected that the second image data is abnormal, adjusting the first brightness information of the first image data according to the second brightness information of the second image data to obtain the two-dimensional image data.
In some embodiments, the detecting whether an anomaly exists in the first image data or the second image data comprises:
determining a similarity between the first image data and the second image data;
when the similarity between the first image data and the second image data is smaller than a first threshold, determining a first sharpness of the first image data and a second sharpness of the second image data; if the first sharpness is lower than the second sharpness, determining that the first image data is abnormal, and if the first sharpness is higher than the second sharpness, determining that the second image data is abnormal.
In some embodiments, the two-dimensional image data includes partial information in the first image data and partial information in the second image data.
A second aspect of embodiments of the present invention provides an imaging method for a 3D endoscopic imaging system, the method comprising:
acquiring first image data and second image data;
determining a currently applied display mode from among a first display mode and a second display mode;
when the currently applied display mode is determined to be the first display mode, generating stereoscopic image data according to the first image data and the second image data, and controlling a display to display a stereoscopic image according to the stereoscopic image data;
and when the currently applied display mode is determined to be the second display mode, generating two-dimensional image data according to the first image data and the second image data, and controlling the display to display a two-dimensional image according to the two-dimensional image data.
In some embodiments, the first image data is acquired by a first image sensor of the 3D endoscopic imaging system and the second image data is acquired by a second image sensor of the 3D endoscopic imaging system.
In some embodiments, the two-dimensional image data includes partial information in the first image data and partial information in the second image data.
In some embodiments, the method further comprises: acquiring attitude data in the process of acquiring the first image data and the second image data, wherein the attitude data is used for indicating the spatial positions corresponding to the first image data and the second image data;
the determining the currently applied display mode from among the first display mode and the second display mode includes: determining the currently applied display mode from among the first display mode and the second display mode according to a user instruction and/or the attitude data.
In some embodiments, determining the currently applied display mode among the first display mode and the second display mode according to the user instruction comprises:
when the user instruction indicates that the first display mode is adopted, determining the first display mode as a currently applied display mode to display the stereoscopic image;
when the user instruction indicates that the second display mode is adopted, the second display mode is determined as the currently applied display mode so as to display the two-dimensional image.
In some embodiments, the 3D endoscopic imaging system includes an endoscope including an insertion portion and an operation portion, the first image sensor and the second image sensor being provided in the insertion portion, the insertion portion being configured to be inserted into a site to be observed of a patient to acquire the first image data and the second image data of the site to be observed, the attitude data including a rotation angle of the insertion portion;
determining a currently applied display mode from among a first display mode and a second display mode according to the attitude data comprises:
when the difference between the rotation angle and a reference angle is smaller than a preset threshold, determining the first display mode as the currently applied display mode to display the stereoscopic image;
and when the difference between the rotation angle and the reference angle is greater than or equal to the preset threshold, determining the second display mode as the currently applied display mode to display the two-dimensional image.
In some embodiments, the reference angle is an angle at which there is no vertical parallax between the first image data and the second image data.
In some embodiments, the method further comprises: acquiring attitude data in the process of acquiring the first image data and the second image data, wherein the attitude data is used for indicating the spatial positions corresponding to the first image data and the second image data;
rotating the two-dimensional image data according to the attitude data so that the two-dimensional image is displayed upright.
In some embodiments, the method further comprises: when a setting instruction for causing the two-dimensional image to be displayed in a non-upright state is received, the two-dimensional image is displayed in the non-upright state, and the direction of the two-dimensional image is associated with the spatial positions corresponding to the first image data and the second image data.
In some embodiments, the method further comprises: acquiring attitude data in the process of acquiring the first image data and the second image data, wherein the attitude data is used for indicating the spatial positions corresponding to the first image data and the second image data;
rotating the stereoscopic image data according to the attitude data so that the stereoscopic image is displayed upright.
In some embodiments, the method further comprises: when a setting instruction for causing the stereoscopic image to be displayed in a non-upright state is received, the stereoscopic image is displayed in the non-upright state, and the direction of the stereoscopic image is associated with the spatial positions corresponding to the first image data and the second image data.
In some embodiments, the generating two-dimensional image data from the first image data and the second image data comprises:
and determining the brightness information of the two-dimensional image data according to the first brightness information of the first image data and the second brightness information of the second image data.
In some embodiments, the determining brightness information of the two-dimensional image data according to the first brightness information of the first image data and the second brightness information of the second image data comprises:
detecting whether an anomaly exists in the first image data or the second image data;
when it is detected that the first image data is abnormal, adjusting the second brightness information of the second image data according to the first brightness information of the first image data to obtain the two-dimensional image data;
when it is detected that the second image data is abnormal, adjusting the first brightness information of the first image data according to the second brightness information of the second image data to obtain the two-dimensional image data.
In some embodiments, the adjusting the second brightness information of the second image data according to the first brightness information of the first image data to obtain the two-dimensional image data includes:
carrying out weighted summation on the average brightness of the first image data and the average brightness of the second image data to obtain first weighted brightness;
obtaining a first brightness coefficient according to the ratio of the first weighted brightness to the average brightness of the second image data;
adjusting the brightness value of each pixel in the second image data according to the first brightness coefficient to obtain the brightness value of each pixel in the two-dimensional image data;
the adjusting first brightness information of the first image data according to second brightness information of the second image data to obtain the two-dimensional image data includes:
carrying out weighted summation on the average brightness of the second image data and the average brightness of the first image data to obtain second weighted brightness;
obtaining a second brightness coefficient according to the ratio of the second weighted brightness to the average brightness of the first image data;
and adjusting the brightness value of each pixel in the first image data according to the second brightness coefficient to obtain the brightness value of each pixel in the two-dimensional image data.
In some embodiments, when the average brightness of the first image data and the average brightness of the second image data are weighted and summed to obtain a first weighted brightness, the weight coefficient of the first image data is smaller than the weight coefficient of the second image data;
when the average brightness of the second image data and the average brightness of the first image data are weighted and summed to obtain a second weighted brightness, the weight coefficient of the first image data is greater than the weight coefficient of the second image data.
In some embodiments, the determining luminance information of the two-dimensional image data from first luminance information of the first image data and second luminance information of the second image data further comprises:
when it is detected that neither the first image data nor the second image data is abnormal, adjusting second brightness information of the second image data according to first brightness information of the first image data to obtain the two-dimensional image data, or adjusting first brightness information of the first image data according to second brightness information of the second image data to obtain the two-dimensional image data.
In some embodiments, the generating two-dimensional image data from the first image data and the second image data comprises:
and determining color information of the two-dimensional image data according to the first color information of the first image data and the second color information of the second image data.
In some embodiments, the determining color information of the two-dimensional image data according to the first color information of the first image data and the second color information of the second image data comprises:
detecting whether an anomaly exists in the first image data or the second image data;
when it is detected that the first image data is abnormal, adjusting the second color information of the second image data according to the first color information of the first image data to obtain the two-dimensional image data;
when it is detected that the second image data is abnormal, adjusting the first color information of the first image data according to the second color information of the second image data to obtain the two-dimensional image data.
In some embodiments, the adjusting second color information of the second image data according to first color information of the first image data to obtain the two-dimensional image data includes:
performing weighted summation on the average color value of the first image data and the average color value of the second image data to obtain a first weighted color value;
obtaining a first color coefficient according to the ratio of the first weighted color value to the average color value of the second image data;
adjusting the color value of each pixel in the second image data according to the first color coefficient to obtain the color value of each pixel in the two-dimensional image data;
the adjusting first color information of the first image data according to second color information of the second image data to obtain the two-dimensional image data includes:
performing weighted summation on the average color value of the second image data and the average color value of the first image data to obtain a second weighted color value;
obtaining a second color coefficient according to the ratio of the second weighted color value to the average color value of the first image data;
and adjusting the color value of each pixel in the first image data according to the second color coefficient to obtain the color value of each pixel in the two-dimensional image data.
In some embodiments, when the average color value of the first image data and the average color value of the second image data are weighted and summed to obtain a first weighted color value, the weight coefficient of the first image data is smaller than the weight coefficient of the second image data;
and when the average color value of the second image data and the average color value of the first image data are subjected to weighted summation to obtain a second weighted color value, the weight coefficient of the first image data is greater than the weight coefficient of the second image data.
In some embodiments, the determining color information of the two-dimensional image data from first color information of the first image data and second color information of the second image data further comprises:
when it is detected that neither the first image data nor the second image data is abnormal, adjusting second color information of the second image data according to first color information of the first image data to obtain the two-dimensional image data, or adjusting first color information of the first image data according to second color information of the second image data to obtain the two-dimensional image data.
In some embodiments, the detecting whether an anomaly exists in the first image data or the second image data comprises:
determining a similarity between the first image data and the second image data;
when the similarity between the first image data and the second image data is smaller than a first threshold, determining a first sharpness of the first image data and a second sharpness of the second image data; if the first sharpness is lower than the second sharpness, determining that the first image data is abnormal, and if the first sharpness is higher than the second sharpness, determining that the second image data is abnormal.
A third aspect of the embodiments of the present invention provides a 3D endoscopic imaging system, including an endoscope and a camera host connected to the endoscope, where the endoscope includes an insertion portion and an operation portion, and the insertion portion is configured to be inserted into a site to be observed of a patient;
the endoscope comprises a first image sensor and a second image sensor which are respectively used for acquiring first image data and second image data;
the camera host is configured to acquire the first image data and the second image data from the endoscope so as to perform the method described above.
In some embodiments, the 3D endoscopic imaging system further includes an attitude sensor disposed on the endoscope, configured to collect attitude data and send the attitude data to the camera host, where the attitude data indicates the spatial positions corresponding to the first image data and the second image data.
According to the imaging method for a 3D endoscopic imaging system and the 3D endoscopic imaging system of the embodiments of the present invention, the display mode can be switched according to a user instruction and/or the attitude data; a good stereoscopic effect is ensured for the stereoscopic image in the first display mode, and the two-dimensional image displayed in the second display mode fuses the information of the two channels of image data, so that tissue information can still be displayed to the user when one channel is abnormal.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
In the drawings:
FIG. 1 shows a schematic block diagram of a 3D endoscopic imaging system according to an embodiment of the present invention;
FIG. 2 shows a schematic flow diagram of an imaging method for a 3D endoscopic imaging system according to an embodiment of the present invention;
FIG. 3 shows a schematic diagram of determining an imaging mode according to a range of roll angles, according to an embodiment of the invention;
FIG. 4 illustrates a schematic diagram of generating two-dimensional image data from first image data and second image data in the presence of occlusions, according to an embodiment of the invention;
FIG. 5 illustrates a schematic diagram of generating two-dimensional image data from first image data and second image data without occlusion according to an embodiment of the invention;
fig. 6 shows a schematic flow diagram of an imaging method for a 3D endoscopic imaging system according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. The described embodiments are only a subset of the embodiments of the invention, not all of them, and the invention is not limited to the example embodiments described herein. All other embodiments obtained by a person skilled in the art from the embodiments described herein without inventive effort shall fall within the scope of protection of the invention.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
It is to be understood that the present invention may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of the present invention. Alternative embodiments of the invention are described in detail below, however, the invention may be practiced in other embodiments that depart from these specific details.
In the following, a 3D endoscopic imaging system according to an embodiment of the present application is first described with reference to fig. 1, and fig. 1 shows a schematic structural block diagram of a 3D endoscopic imaging system 100 according to an embodiment of the present invention.
As shown in fig. 1, the 3D endoscopic imaging system 100 includes at least an endoscope and a camera host 150 connected to the endoscope. The endoscope includes an insertion portion 130 and an operation portion 160.
The insertion portion 130 is configured to be inserted into a site to be observed of a patient; the insertion portion 130 and the operation portion 160 may be an integral structure or a separable structure. The endoscope further comprises a first image sensor and a second image sensor (not shown) for acquiring first image data and second image data, respectively, which are exemplarily provided at the distal end of the insertion portion 130. The camera host 150 is configured to acquire the first image data and the second image data from the endoscope and perform image processing on them. The 3D endoscopic imaging system 100 further includes a light source 110, a light guide 120, a cable 140, and an attitude sensor (not shown); the light source 110 is connected to the endoscope through the light guide 120, and the operation portion 160 is connected to the camera host 150 through the cable 140.
The light source 110 provides illumination for the site to be observed. The light source 110 may include a visible light source and a special light source. Illustratively, the light source may be an LED light source and may provide several monochromatic lights of different wavelength ranges, a combination of monochromatic lights, or a broad-spectrum white light source. The special light source may be a laser light source matched to a fluorescent reagent, such as near-infrared light. In some embodiments, a fluorescent reagent is injected into the site to be observed before imaging with the 3D endoscopic imaging system, and the fluorescent reagent absorbs the laser light generated by the laser light source and emits fluorescence.
The insertion portion 130 of the endoscope includes an endoscope tube, image sensors, and an illumination light path. The front end of the endoscope tube is inserted into the human body and extends to the site to be observed. The illumination light path is connected to the light guide 120 and directs the light generated by the light source 110 onto the site to be observed of the target object. The image sensors specifically include the first image sensor and the second image sensor, which convert optical signals into electrical signals and include, but are not limited to, CCD sensors, CMOS sensors, and the like. The image signals collected by the image sensors undergo preliminary signal processing, such as amplification and filtering, in the operation portion 160 and are then sent to the camera host 150 for subsequent image processing. The optical axes of the first image sensor and the second image sensor may be arranged in parallel or at an angle. The first image data and the second image data collected by the first image sensor and the second image sensor can correspond to the stereo pair observed by the left and right eyes of a person, thereby simulating human binocular stereo vision.
The other end of the operation portion 160 is connected to the camera host 150 via the cable 140, and the first image data and the second image data are transmitted to the camera host 150 via the cable 140 for processing. In some embodiments, the operation portion 160 may also transmit the image data to the camera host 150 by wireless transmission.
In some embodiments, a processor is provided in the camera host 150; the processor acquires the image data output by the operation portion 160, processes the image data, and outputs the processed image data. Illustratively, the 3D endoscopic imaging system 100 further includes a display 170, and the camera host 150 is connected to the display 170 through a video cable to transmit the endoscopic image to the display 170 for display.
It should be noted that fig. 1 is only an example of the 3D endoscopic imaging system 100 and does not constitute a limitation on it; the 3D endoscopic imaging system 100 may include more or fewer components than those shown in fig. 1, or combine certain components, or use different components. For example, the 3D endoscopic imaging system 100 may further include a dilator, a smoke control device, an input-output device, a network access device, and the like.
An imaging method for a 3D endoscopic imaging system according to an embodiment of the present invention is described below with reference to fig. 2. The method may be implemented by the 3D endoscopic imaging system described with reference to fig. 1, specifically by the camera host of the 3D endoscopic imaging system. Fig. 2 is a schematic flow chart of an imaging method 200 for a 3D endoscopic imaging system in an embodiment of the present invention, comprising in particular the following steps. In step S210, acquiring first image data collected by a first image sensor of the 3D endoscopic imaging system and second image data collected by a second image sensor of the 3D endoscopic imaging system, and acquiring attitude data collected by an attitude sensor, where the attitude data indicates the spatial positions corresponding to the first image data and the second image data;
in step S220, determining a currently applied display mode from among a first display mode and a second display mode at least according to the attitude data;
in step S230, when it is determined that the currently applied display mode is the first display mode, generating stereoscopic image data according to the first image data and the second image data, and controlling a display to display a stereoscopic image according to the stereoscopic image data;
in step S240, when it is determined that the currently applied display mode is the second display mode, generating two-dimensional image data according to the first image data and the second image data, controlling the display to display a two-dimensional image according to the two-dimensional image data, and displaying the two-dimensional image upright according to the attitude data.
According to the imaging method 200 for a 3D endoscopic imaging system of the embodiment of the present invention, the currently applied display mode can be determined according to the attitude data, and accordingly a two-dimensional image or a stereoscopic image is displayed; the two-dimensional image draws on the information of both the first image data and the second image data, giving a clearer and more stable imaging effect; and when the two-dimensional image is displayed, it is displayed upright according to the attitude data, so that it is closer to what human eyes would observe.
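The flow of method 200 can be summarized in code. The sketch below is illustrative only: the helper names (grab_frames, read_roll, select_display_mode, make_stereo, fuse_to_2d, rotate_upright, show) are hypothetical stand-ins for the steps described in this document, not functions defined by the patent; select_display_mode, fuse_to_2d and rotate_upright are sketched further below.

```python
def method_200_step(system):
    """One per-frame iteration of steps S210-S240 (illustrative sketch)."""
    first, second = system.grab_frames()   # S210: two channels of image data
    roll_deg = system.read_roll()          # S210: attitude data (roll angle)
    mode = select_display_mode(roll_deg)   # S220: pick the display mode
    if mode == "first":
        # S230: first display mode -> stereoscopic image
        system.show(make_stereo(first, second))
    else:
        # S240: second display mode -> fused 2D image, displayed upright
        frame = fuse_to_2d(first, second)
        system.show(rotate_upright(frame, roll_deg))
```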
In the embodiment of the present invention, the first display mode refers to a display mode for displaying a stereoscopic image, and the second display mode refers to a display mode for displaying a two-dimensional image. For example, the user may manually select the first display mode or the second display mode as the currently applied display mode as needed. When the user selects the first display mode, stereoscopic image data is generated according to the first image data and the second image data, and a stereoscopic image is displayed on the display; when the user selects the second display mode, two-dimensional image data is generated according to the first image data and the second image data, and a two-dimensional image is displayed on the display.
The user can also choose to have the currently applied display mode switched automatically; when automatic switching is selected, the currently applied display mode is automatically determined from among the first display mode and the second display mode according to the attitude data collected by the attitude sensor.
The attitude sensor can be arranged at any position of the endoscope that moves synchronously with the first image sensor and the second image sensor, and may specifically comprise an accelerometer, a gyroscope, a magnetometer, and the like. Since the first image sensor and the second image sensor are provided in the insertion portion of the endoscope, they rotate about the tube axis together with the insertion portion; the attitude data collected by the attitude sensor therefore includes the rotation angle of the insertion portion, that is, the rotation angle of the insertion portion about the tube axis. This rotation angle may also be referred to as the roll angle of the insertion portion.
As shown in fig. 3, when the rotation angle of the insertion portion is 0° or 180°, there is only horizontal parallax and no vertical parallax between the first image data and the second image data, giving the best stereoscopic effect. When the rotation angle of the insertion portion is in the first rotation range (−θ to θ) or the third rotation range (180°−θ to 180°+θ), the vertical parallax between the first image data and the second image data is small, and the stereoscopic image composed of the first image and the second image still has a good stereoscopic effect. If the rotation angle of the insertion portion is in the second rotation range (θ to 180°−θ) or the fourth rotation range (180°+θ to 360°−θ), the vertical parallax increases, and a good stereoscopic effect cannot be obtained when displaying a stereoscopic image.
Therefore, determining the currently applied display mode according to the attitude data collected by the attitude sensor specifically includes: when the difference between the rotation angle and a reference angle is smaller than a preset threshold, determining the first display mode as the currently applied display mode to display the stereoscopic image; and when the difference between the rotation angle and the reference angle is greater than or equal to the preset threshold, determining the second display mode as the currently applied display mode to display the two-dimensional image. Here, the reference angle may be an angle at which the first image data and the second image data have no vertical parallax, i.e., 0° and 180° in fig. 3.
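A minimal sketch of this decision, assuming reference angles of 0° and 180° (per fig. 3); the function names and the 15° threshold are illustrative, not values from the patent:

```python
REFERENCE_ANGLES = (0.0, 180.0)   # angles with no vertical parallax (fig. 3)
PRESET_THRESHOLD_DEG = 15.0       # preset threshold; illustrative value only

def angle_diff(a: float, b: float) -> float:
    """Smallest absolute angular difference in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def select_display_mode(roll_deg: float) -> str:
    """'first' (3D) near a reference angle, otherwise 'second' (2D)."""
    nearest = min(angle_diff(roll_deg, ref) for ref in REFERENCE_ANGLES)
    return "first" if nearest < PRESET_THRESHOLD_DEG else "second"
```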
As described above, if it is determined according to a user instruction, or according to the attitude data, that the second display mode is currently applied, two-dimensional image data is generated according to the first image data and the second image data, and a two-dimensional image is displayed on the display according to the two-dimensional image data. In the embodiment of the present invention, the two-dimensional image data includes at least part of the information in the first image data and at least part of the information in the second image data, rather than only one channel of image data; therefore, even if one channel is occluded or contaminated, loss of picture content is avoided and a stable imaging effect is ensured.
In the embodiment of the present invention, when the first image data and the second image data are fused to generate the two-dimensional image data, the detail information of one channel may be fused with the global information of the other channel, where the global information may include brightness information. Specifically, in the fusion, the brightness information of the two-dimensional image data may be determined according to the first brightness information of the first image data and the second brightness information of the second image data. Because the brightness information of the two-dimensional image data fuses the brightness of both images, the brightness of the finally displayed two-dimensional image does not change noticeably even when the channel providing the detail information is switched.
When determining which channel provides the detail information, whether an anomaly exists in the first image data or the second image data can be detected, where an anomaly includes conditions such as occlusion or dirt. When neither channel is abnormal, taking either channel for detail preserves the sharpness of the two-dimensional image; therefore the detail information of either channel can be taken and the brightness information of the two channels fused to obtain the final two-dimensional image data. That is, the second brightness information of the second image data can be adjusted according to the first brightness information of the first image data to obtain the two-dimensional image data, or the first brightness information of the first image data can be adjusted according to the second brightness information of the second image data to obtain the two-dimensional image data. Optionally, when neither channel is abnormal, the sharpness of the two channels may be compared and the sharper channel taken to provide the detail information.
Conversely, when one of the channels is abnormal, the channel without the anomaly can provide the detail information while the abnormal channel contributes brightness information, ensuring the sharpness of the two-dimensional image. When it is detected that the first image data is abnormal, the detail information of the second image data is taken, and the second brightness information of the second image data is adjusted according to the first brightness information of the first image data to obtain the two-dimensional image data. Similarly, when it is detected that the second image data is abnormal, the detail information of the first image data is taken, and the first brightness information of the first image data is adjusted according to the second brightness information of the second image data to obtain the two-dimensional image data.
Referring to fig. 4, a schematic diagram of generating two-dimensional image data from the first image data and the second image data when occlusion exists in the first image data is shown. Because the first image data is occluded, the detail information of the second image data is taken, and the brightness information of the first image data and the second image data is fused to obtain the final two-dimensional image data; the two-dimensional image is thus displayed clearly, with a brightness between that of the first image and the second image. Referring to fig. 5, a schematic diagram of generating two-dimensional image data from the first image data and the second image data when neither channel is occluded is shown. In this case, the sharpness of the two-dimensional image data can be ensured by taking the detail information of either the first image data or the second image data, and the brightness of the two-dimensional image is again between that of the first image and the second image.
Illustratively, detecting whether an anomaly exists in the first image data or the second image data specifically includes the following. First, the similarity between the first image data and the second image data is determined. If the similarity is smaller than a first threshold th1, one of the images may be occluded or dirty; in this case, a first sharpness of the first image data and a second sharpness of the second image data are further calculated, and the channel with the lower sharpness is considered the occluded or dirty one. That is, if the first sharpness is lower than the second sharpness, the first image data is determined to be abnormal, and if the first sharpness is higher than the second sharpness, the second image data is determined to be abnormal.
The similarity between the first image data and the second image data may be described based on the Structural Similarity Index Measure (SSIM), calculated as:
$$\mathrm{SSIM}(L, R) = \frac{(2\mu_L\mu_R + c_1)(2\sigma_{LR} + c_2)}{(\mu_L^2 + \mu_R^2 + c_1)(\sigma_L^2 + \sigma_R^2 + c_2)} \quad \text{(formula 1)}$$
where L and R represent the first image data and the second image data, respectively; μ_L and μ_R are the pixel means of the first image data and the second image data; σ_L and σ_R are their pixel standard deviations; σ_LR is the pixel covariance of the first image data and the second image data; and c1 and c2 are correction constants. If SSIM(L, R) < th1, one channel of the first image data and the second image data is considered abnormal. Of course, the similarity metric in the embodiment of the present invention is not limited to SSIM; other image similarity algorithms, such as hash algorithms and histogram algorithms, may also be used to calculate the similarity between the first image data and the second image data.
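As one concrete reading of formula 1, the sketch below computes a single global SSIM value over whole luminance frames; the patent does not specify any windowing, and the c1/c2 defaults follow the common SSIM convention, which is an assumption here.

```python
import numpy as np

def ssim_global(L: np.ndarray, R: np.ndarray,
                c1: float = (0.01 * 255) ** 2,
                c2: float = (0.03 * 255) ** 2) -> float:
    """Global SSIM between two luminance images in [0, 255] (formula 1)."""
    L = L.astype(np.float64)
    R = R.astype(np.float64)
    mu_l, mu_r = L.mean(), R.mean()
    sigma_l, sigma_r = L.std(), R.std()
    sigma_lr = ((L - mu_l) * (R - mu_r)).mean()   # pixel covariance
    return ((2 * mu_l * mu_r + c1) * (2 * sigma_lr + c2)) / (
        (mu_l ** 2 + mu_r ** 2 + c1) * (sigma_l ** 2 + sigma_r ** 2 + c2))
```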
In calculating the sharpness of the first image data and the second image data, a plurality of different sharpness evaluation indexes may be used as well. For example, the edge extraction algorithm based on the Sobel operator may extract the edge intensities of the first image data and the second image data, and calculate the sharpness F of the first image data or the second image data according to the edge intensities, where the calculation formula is:
$$G_x = S_x * \mathrm{img}, \qquad G_y = S_y * \mathrm{img} \quad \text{(formula 2)}$$

$$F = \operatorname{mean}\left(\sqrt{G_x^2 + G_y^2}\right) \quad \text{(formula 3)}$$
where img represents the input image, i.e., either the first image data or the second image data, S_x and S_y are the horizontal and vertical Sobel kernels, and * denotes convolution. In addition, other image sharpness measures, such as a Gaussian operator, a Laplacian operator, or the image variance, can be used to describe the sharpness of the first image data or the second image data, and the channel with the lower sharpness can be determined to be the image data with an abnormal condition such as occlusion or dirt.
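The anomaly check built from formulas 2-3 might look like the following sketch; scipy's Sobel filter stands in for the edge extraction, and the th1 value is illustrative rather than taken from the patent (ssim_global is the function sketched above).

```python
import numpy as np
from scipy import ndimage

def sharpness(img: np.ndarray) -> float:
    """Mean Sobel edge intensity of an image (formulas 2-3)."""
    g = img.astype(np.float64)
    gx = ndimage.sobel(g, axis=1)   # horizontal gradient, G_x
    gy = ndimage.sobel(g, axis=0)   # vertical gradient, G_y
    return float(np.sqrt(gx ** 2 + gy ** 2).mean())

def detect_anomaly(first: np.ndarray, second: np.ndarray,
                   th1: float = 0.8):
    """Return 'first' or 'second' for the abnormal channel, or None."""
    if ssim_global(first, second) >= th1:
        return None                 # channels agree: no anomaly detected
    f1, f2 = sharpness(first), sharpness(second)
    return "first" if f1 < f2 else "second"   # blurrier channel is abnormal
```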
Fusing the brightness information of the two channels may specifically include performing a weighted summation of the brightness information of the first image data and the brightness information of the second image data to obtain the brightness information of the final two-dimensional image data. If neither the first image data nor the second image data is abnormal, the weight W1 of the first image data and the weight W2 of the second image data are close to each other in the weighted summation, for example W1 = 0.5 and W2 = 0.5; thus, even if the channel providing the detail information is switched, the brightness of the finally displayed two-dimensional image does not change noticeably.
When the first image data has abnormality, the second brightness information of the second image data is adjusted according to the first brightness information of the first image data. Specifically, the average brightness of the first image data and the average brightness of the second image data are weighted and summed to obtain a first weighted brightness; obtaining a first brightness coefficient according to the ratio of the first weighted brightness to the average brightness of the second image data; and adjusting the brightness value of each pixel in the second image data according to the first brightness coefficient to obtain the brightness value of each pixel in the two-dimensional image data.
The average brightness of the first image data is the mean of the brightness of all pixels in the first image data, where the brightness of each pixel is the value of the brightness channel Y after the image data is converted from the RGB domain to the YCbCr domain. When the first image data is the channel with the anomaly, the brightness of the output two-dimensional image data may be calculated by the following formula:
yout = Y2 (W1 mean (Y1) + W2 mean (Y2))/mean (Y2) (formula 4)
Here, Y1 and Y2 denote the brightness channels of the first image data and the second image data, respectively, and carry the detail information of the respective channel. mean(Y1) and mean(Y2) are the average brightness of the first image data and of the second image data, representing their global brightness information; W1 is the weight coefficient of the first image data, and W2 is the weight coefficient of the second image data. As can be seen from formula 4, the brightness channel Yout of the finally output two-dimensional image data contains the detail information of the second image data (Y2) while fusing the global brightness information of both channels (mean(Y1) and mean(Y2)).
Further, since the brightness of the first image data is significantly low when occlusion or contamination is present, in order to prevent the abnormal first image data from noticeably darkening the fused two-dimensional image, when the average brightness mean(Y1) of the first image data and the average brightness mean(Y2) of the second image data are weighted and summed to obtain the first weighted brightness, the weight coefficient W2 of the second image data is greater than the weight coefficient W1 of the first image data, for example W1 = 0.05 and W2 = 0.95.
Similarly, adjusting the first brightness information of the first image data according to the second brightness information of the second image data includes: performing weighted summation on the average brightness of the second image data and the average brightness of the first image data to obtain a second weighted brightness; obtaining a second brightness coefficient according to the ratio of the second weighted brightness to the average brightness of the first image data; and adjusting the brightness value of each pixel in the first image data according to the second brightness coefficient to obtain the brightness value of each pixel in the two-dimensional image data. See formula 4 above, with the roles of the two channels exchanged; details are not repeated here.
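Formula 4 and its mirror image can be captured in one helper: the Y channel of the normal ("detail") channel is scaled by a gain built from the weighted global brightness of both channels. The sketch below assumes BT.601 luma weights for the RGB-to-Y conversion, which the patent does not specify.

```python
import numpy as np

def luma(rgb: np.ndarray) -> np.ndarray:
    """Brightness channel Y from an RGB image (BT.601 weights assumed)."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def fuse_luma(y_detail: np.ndarray, y_other: np.ndarray,
              w_detail: float = 0.95, w_other: float = 0.05) -> np.ndarray:
    """Formula 4: detail from one channel, global brightness from both."""
    target = w_other * y_other.mean() + w_detail * y_detail.mean()
    coeff = target / y_detail.mean()   # brightness coefficient
    return y_detail * coeff
```

For example, when the first image data is abnormal, fuse_luma(luma(second_rgb), luma(first_rgb)) corresponds to formula 4 with W1 = 0.05 and W2 = 0.95; when neither channel is abnormal, w_detail and w_other would both be set near 0.5.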
In addition to fusing the brightness information of the two channels, in some embodiments the color information of the two images may also be fused. Specifically, the color information of the two-dimensional image data may be determined according to the first color information of the first image data and the second color information of the second image data.
Similar to the fusion of brightness information, when an image fusion mode based on color information is adopted, whether an anomaly exists in the first image data or the second image data can be detected; the channel without the anomaly provides the detail information, and the abnormal channel contributes color information. Specifically, when it is detected that the first image data is abnormal, the second color information of the second image data is adjusted according to the first color information of the first image data to obtain the two-dimensional image data; when it is detected that the second image data is abnormal, the first color information of the first image data is adjusted according to the second color information of the second image data to obtain the two-dimensional image data. When neither the first image data nor the second image data is abnormal, the second color information of the second image data may be adjusted according to the first color information of the first image data, or the first color information of the first image data may be adjusted according to the second color information of the second image data.
When adjusting second color information of second image data according to first color information of first image data, performing weighted summation on an average color value of the first image data and an average color value of the second image data to obtain a first weighted color value; obtaining a first color coefficient according to the ratio of the first weighted color value to the average color value of the second image data; adjusting the color value of each pixel in the second image data according to the first color coefficient to obtain the color value of each pixel in the two-dimensional image data, namely:
cout = C2 (W3 mean (C1) + W4 mean (C2))/mean (C2) (formula 5)
Wherein C1 and C2 respectively refer to color channels of the first image data and the second image data, and may specifically be any color channel of RGB channels, which contains detail information of the first image data or the second image data. mean (C1) and mean (C2) are mean color values of the first image data and mean color values of the second image data, respectively, and represent global color information of the first image data and the second image data; w3 is a weight coefficient of the first image data, and W4 is a weight coefficient of the second image data.
Further, if the first image data is abnormal, then when the average color value mean(C1) of the first image data and the average color value mean(C2) of the second image data are weighted and summed, the weight coefficient W4 of the second image data is greater than the weight coefficient W3 of the first image data, so that the abnormal first image data does not influence the color of the two-dimensional image too strongly.
Conversely, if the second image data is abnormal, the average color value of the first image data and the average color value of the second image data may be weighted and summed to obtain a second weighted color value; a second color coefficient is obtained according to the ratio of the second weighted color value to the average color value of the first image data; and the color value of each pixel in the first image data is adjusted according to the second color coefficient to obtain the color value of each pixel in the two-dimensional image data. In this case, the weight coefficient of the first image data is greater than the weight coefficient of the second image data. When neither the first image data nor the second image data is abnormal, the weight coefficient W3 of the first image data and the weight coefficient W4 of the second image data may be close or equal.
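Since formula 5 has the same shape as formula 4 applied per color channel, the same gain idea can be reused. In this sketch the "detail" argument is the normal channel and "other" the abnormal one; the default weights are the example values from the text.

```python
import numpy as np

def fuse_channel(c_detail: np.ndarray, c_other: np.ndarray,
                 w_detail: float = 0.95, w_other: float = 0.05) -> np.ndarray:
    """Formula 5 for one color channel: detail scaled by a global-color gain."""
    target = w_other * c_other.mean() + w_detail * c_detail.mean()
    return c_detail * (target / c_detail.mean())   # color coefficient gain

def fuse_color(rgb_detail: np.ndarray, rgb_other: np.ndarray) -> np.ndarray:
    """Apply the per-channel fusion to each of the R, G, B channels."""
    return np.stack([fuse_channel(rgb_detail[..., k], rgb_other[..., k])
                     for k in range(3)], axis=-1)
```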
After the two-dimensional image data is obtained according to the first image data and the second image data, the display is controlled to display a two-dimensional image according to the two-dimensional image data, and the two-dimensional image is displayed upright according to the attitude data collected by the attitude sensor. Displaying the two-dimensional image upright according to the attitude data may specifically comprise rotating the two-dimensional image according to the rotation angle of the insertion portion so that the orientation of the two-dimensional image is unchanged while the insertion portion rotates about the tube axis. Referring to fig. 3, when the two-dimensional image is displayed upright, even though the insertion portion has deviated from the reference angle, the display direction of the two-dimensional image coincides with the display direction at the reference angle. The upright direction of the two-dimensional image coincides with the upright direction of the stereoscopic image; in some embodiments, the stereoscopic image is upright in the first display mode, and when switching from the first display mode to the second display mode the two-dimensional image is also upright, so that the switch between images is a relatively smooth transition.
In some embodiments, when the second display mode is applied, the user may be allowed to choose whether the two-dimensional image is displayed upright. When a setting instruction for upright display is received, the two-dimensional image is displayed upright according to the attitude data. When a setting instruction for non-upright display is received, the two-dimensional image is displayed non-upright and its direction is associated with the spatial positions corresponding to the first image data and the second image data; that is, when the insertion portion rotates about the tube axis, the two-dimensional image displayed on the display rotates with it.
When the user selects the first display mode, or the first display mode is determined from the attitude data collected by the attitude sensor, stereoscopic image data is generated from the first image data and the second image data and the display is controlled to display the stereoscopic image. The stereoscopic image comprises a first image and a second image with a certain parallax, output to the left eye and the right eye of the user respectively; the stereoscopic visual effect is produced by the horizontal parallax between the first image and the second image. The stereoscopic image data therefore fuses all of the information in the first image data and the second image data.
The stereoscopic image may be output in various ways, for example through dual displays or a single display. Dual-display output includes head-mounted displays, dual-screen displays and the like, in which the observer's left and right eyes view the first image and the second image on different displays. Single-display output mainly includes the active shutter stereoscopic display mode and the passive polarization stereoscopic display mode: the former is used with liquid crystal light-valve glasses, the latter with polarization glasses, and both separate the left-eye and right-eye images through cooperation between the display and the glasses.
When the stereoscopic image is displayed, it can be displayed upright according to the attitude data, so that the operator's head need not rotate with the image and the stereoscopic effect is not lost. Specifically, the stereoscopic image may be rotated according to the rotation angle of the insertion portion so that the first image and the second image always keep the image direction they have at the reference angle while the insertion portion rotates about the tube axis, as shown in fig. 3.
Alternatively, when controlling the display to display the stereoscopic image according to the stereoscopic image data, the user may also be allowed to choose whether the stereoscopic image is displayed upright. When a setting instruction for upright display is received, the stereoscopic image is displayed upright according to the attitude data as described above. When a setting instruction for non-upright display is received, the stereoscopic image is displayed non-upright and its direction is associated with the spatial positions corresponding to the first image data and the second image data; that is, when the user rotates the operation portion and thereby rotates the insertion portion, the stereoscopic image displayed on the display rotates with it.
In summary, the imaging method 200 for the 3D endoscopic imaging system according to the embodiment of the present invention can switch the display mode according to the attitude data; it ensures a good stereoscopic effect in the first display mode, while the two-dimensional image displayed in the second display mode fuses information from the two paths of image data and can still present tissue information to the user when one path of image is abnormal.
An imaging method for a 3D endoscopic imaging system according to another embodiment of the present invention is described below with reference to fig. 6. The method may be implemented by the 3D endoscopic imaging system described with reference to fig. 1, specifically by the camera host of the 3D endoscopic imaging system. Fig. 6 is a schematic flow chart of an imaging method 600 for a 3D endoscopic imaging system in an embodiment of the present invention, comprising in particular the following steps (a schematic sketch of the overall flow is given after the list):
in step S610, first image data and second image data are acquired;
in step S620, determining a currently applied display mode among the first display mode and the second display mode;
in step S630, when it is determined that the currently applied display mode is the first display mode, generating stereoscopic image data according to the first image data and the second image data, and controlling a display to display a stereoscopic image according to the stereoscopic image data;
in step S640, when it is determined that the currently applied display mode is the second display mode, two-dimensional image data is generated according to the first image data and the second image data, and the display is controlled to display a two-dimensional image according to the two-dimensional image data.
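The S610-S640 flow can be summarized with the schematic sketch below; the mode constants and the generate_stereo/generate_2d helpers are placeholders standing in for the processing detailed in the following paragraphs, not APIs of the disclosed system:

```python
FIRST_DISPLAY_MODE = "stereo"   # 3D display
SECOND_DISPLAY_MODE = "planar"  # fused 2D display

def generate_stereo(first_data, second_data):
    # Placeholder: in the disclosure this yields a left/right image pair
    # with horizontal parallax, fusing all information of both paths.
    return first_data, second_data

def generate_2d(first_data, second_data):
    # Placeholder: in the disclosure this fuses brightness/color
    # information of both paths into one two-dimensional image.
    return second_data

def imaging_step(first_data, second_data, mode, display):
    """One pass of the method 600 pipeline, after the display mode has
    been determined in step S620 (user instruction and/or attitude data)."""
    if mode == FIRST_DISPLAY_MODE:
        # S630: build a left/right pair with horizontal parallax.
        display.show_stereo(generate_stereo(first_data, second_data))
    elif mode == SECOND_DISPLAY_MODE:
        # S640: build a single image fusing information of both paths.
        display.show_2d(generate_2d(first_data, second_data))
```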
In some embodiments, in step S610, the first image data is captured by a first image sensor of the 3D endoscopic imaging system and the second image data is captured by a second image sensor of the 3D endoscopic imaging system. The first image data and the second image data have a parallax therebetween, thereby enabling a stereoscopic effect to be produced when the first image and the second image are simultaneously displayed.
In step S620, the currently applied display mode is determined in the first display mode and the second display mode. Specifically, this may be done according to a user instruction and/or the attitude data. When the user instruction indicates the first display mode, the first display mode is determined as the currently applied display mode to display the stereoscopic image; when the user instruction indicates the second display mode, the second display mode is determined as the currently applied display mode to display the two-dimensional image; and when the user instruction indicates that the display mode is to be determined automatically, the currently applied display mode is determined in the first display mode and the second display mode according to the attitude data.
The attitude data is acquired during the acquisition of the first image data and the second image data and indicates the spatial positions corresponding to them. Illustratively, the 3D endoscopic imaging system includes an endoscope having an insertion portion and an operation portion; the first image sensor and the second image sensor are provided in the insertion portion, which is inserted into the site to be observed of the patient to acquire the first image data and the second image data of that site. The attitude data may be collected by an attitude sensor, which may be provided at any position of the insertion portion or the operation portion. The attitude data includes the rotation angle of the insertion portion; since the first image sensor and the second image sensor are disposed within the insertion portion, this angle reflects their spatial positions.
Exemplarily, if the currently applied display mode is determined in the first display mode and the second display mode according to the attitude data: when the difference between the rotation angle of the insertion portion and the reference angle is smaller than a preset angle, the first display mode is determined as the currently applied display mode to display the stereoscopic image; when the difference between the rotation angle and the reference angle is greater than or equal to the preset angle, the second display mode is determined as the currently applied display mode to display the two-dimensional image. The reference angle may be the angle at which the first image data and the second image data have no vertical parallax. When the insertion portion is at the reference angle, the first image data and the second image data have only horizontal parallax and no vertical parallax, giving the best stereoscopic effect. When the difference between the rotation angle and the reference angle is smaller than the preset angle, the vertical parallax between the first image data and the second image data is small, the stereoscopic image composed of the first image and the second image still has a good stereoscopic effect, and stereoscopic display can be maintained. If the difference exceeds the preset angle, the vertical parallax increases, a good stereoscopic effect can no longer be obtained, and the system therefore switches to two-dimensional display.
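A minimal sketch of this decision rule, with an illustrative threshold (the disclosure does not fix a numeric value):

```python
PRESET_ANGLE_DEG = 15.0  # illustrative threshold, not specified in the disclosure

def decide_display_mode(rotation_angle_deg, reference_angle_deg):
    """Keep stereoscopic display while the roll-induced vertical parallax
    stays small; otherwise fall back to two-dimensional display."""
    if abs(rotation_angle_deg - reference_angle_deg) < PRESET_ANGLE_DEG:
        return "first"   # first display mode: stereoscopic image
    return "second"      # second display mode: two-dimensional image
```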
In step S630, when it is determined that the first display mode is applied, stereoscopic image data is generated according to the first image data and the second image data, and the display is controlled to display a stereoscopic image according to the stereoscopic image data. Illustratively, the stereoscopic image includes a first image and a second image having a certain parallax, output to the left eye and the right eye of the user respectively, and the stereoscopic effect is achieved by the horizontal parallax between the first image and the second image; the stereoscopic image data therefore fuses all of the information in the first image data and the second image data. The stereoscopic image may be output in various ways, for example through dual displays or a single display, which is not limited in the embodiments of the present invention.
When the stereoscopic image is displayed, it can be displayed upright according to the attitude data, so that the operator's head need not rotate with the image and the stereoscopic effect is not lost. Specifically, the stereoscopic image may be rotated according to the rotation angle of the insertion portion so that the first image and the second image always keep the image direction they have at the reference angle while the insertion portion rotates about the tube axis. Alternatively, when controlling the display to display the stereoscopic image according to the stereoscopic image data, the user may also be allowed to choose whether the stereoscopic image is displayed upright. When a setting instruction for upright display is received, the stereoscopic image is displayed upright according to the attitude data as described above. When a setting instruction for non-upright display is received, the stereoscopic image is displayed non-upright and its direction is associated with the spatial positions corresponding to the first image data and the second image data; that is, when the user rotates the operation portion and thereby rotates the insertion portion, the stereoscopic image displayed on the display rotates with it.
In step S640, when it is determined that the second display mode is applied, two-dimensional image data is generated from the first image data and the second image data, and the display is controlled to display a two-dimensional image from the two-dimensional image data. Illustratively, the two-dimensional image data includes partial information in the first image data and partial information in the second image data.
In some embodiments, when generating the two-dimensional image data from the first image data and the second image data, the brightness information of both may be integrated; that is, the brightness information of the two-dimensional image data is determined from the first brightness information of the first image data and the second brightness information of the second image data. Specifically, whether the first image data and the second image data are abnormal is detected. When the first image data is detected to be abnormal, the second brightness information of the second image data is adjusted according to the first brightness information of the first image data to obtain the two-dimensional image data; when the second image data is detected to be abnormal, the first brightness information of the first image data is adjusted according to the second brightness information of the second image data to obtain the two-dimensional image data. In other words, the brightness information of the abnormal image is used in adjusting the brightness of the normal image, so that the two-dimensional image data contains the detail information of the normal image together with brightness information from both images; this prevents the image brightness from changing too abruptly when the image data that provides the detail information is switched.
Illustratively, detecting whether the first image data and the second image data are abnormal includes: determining the similarity between the first image data and the second image data; when the similarity is smaller than a first threshold, determining a first sharpness of the first image data and a second sharpness of the second image data; if the first sharpness is lower than the second sharpness, the first image data is determined to be abnormal, and if the first sharpness is higher than the second sharpness, the second image data is determined to be abnormal. Because the first image data and the second image data are both acquired from the same site to be observed, their similarity is high when neither is abnormal; a similarity below the first threshold therefore indicates that one path of image data is abnormal. The sharpness of the two paths of image data can then be calculated, and the path with the lower sharpness is determined to be abnormal.
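As a sketch of one way to realize this detection (the concrete similarity and sharpness metrics below are assumptions; the disclosure only requires a similarity measure and a sharpness comparison):

```python
import cv2
import numpy as np

def detect_abnormal_path(first_gray, second_gray, sim_threshold=0.7):
    """Return 'first' or 'second' for the path judged abnormal, or None.

    Similarity: zero-mean normalized cross-correlation of the two views,
    which image the same site and should correlate strongly when normal.
    Sharpness: variance of the Laplacian; the blurrier path is flagged.
    """
    a = first_gray.astype(np.float32)
    b = second_gray.astype(np.float32)
    a = (a - a.mean()) / (a.std() + 1e-6)
    b = (b - b.mean()) / (b.std() + 1e-6)
    similarity = float((a * b).mean())
    if similarity >= sim_threshold:
        return None  # both paths look consistent
    sharp_first = cv2.Laplacian(first_gray, cv2.CV_64F).var()
    sharp_second = cv2.Laplacian(second_gray, cv2.CV_64F).var()
    return "first" if sharp_first < sharp_second else "second"
```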
The adjusting the second brightness information of the second image data according to the first brightness information of the first image data to obtain the two-dimensional image data includes: carrying out weighted summation on the average brightness of the first image data and the average brightness of the second image data to obtain first weighted brightness; obtaining a first brightness coefficient according to the ratio of the first weighted brightness to the average brightness of the second image data; and adjusting the brightness value of each pixel in the second image data according to the first brightness coefficient to obtain the brightness value of each pixel in the two-dimensional image data. Similarly, adjusting the first luminance information of the first image data according to the second luminance information of the second image data to obtain the two-dimensional image data includes: carrying out weighted summation on the average brightness of the second image data and the average brightness of the first image data to obtain second weighted brightness; obtaining a second brightness coefficient according to the ratio of the second weighted brightness to the average brightness of the first image data; and adjusting the brightness value of each pixel in the first image data according to the second brightness coefficient to obtain the brightness value of each pixel in the two-dimensional image data.
When the average brightness of the first image data and the average brightness of the second image data are weighted and summed to obtain the first weighted brightness, the weight coefficient of the first image data is smaller than the weight coefficient of the second image data; when the average brightness of the second image data and the average brightness of the first image data are weighted and summed to obtain the second weighted brightness, the weight coefficient of the first image data is greater than the weight coefficient of the second image data. That is, the path of image data without abnormality is given the larger weight coefficient, which prevents low brightness in the abnormal image data from unduly darkening the fused two-dimensional image.
In addition, when it is detected that neither the first image data nor the second image data is abnormal, the second luminance information of the second image data may be adjusted according to the first luminance information of the first image data to obtain the two-dimensional image data, or the first luminance information of the first image data may be adjusted according to the second luminance information of the second image data to obtain the two-dimensional image data.
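A sketch of this brightness fusion, under the assumption that brightness is taken as the Y channel of a YCrCb conversion (the disclosure does not fix a color space, and the weights are illustrative):

```python
import cv2
import numpy as np

def fuse_brightness(detail_bgr, other_bgr, w_detail=0.8, w_other=0.2):
    """Keep the per-pixel detail of the normal path but blend in the
    global brightness of the other path, mirroring the weighted-mean
    scaling used for the color adjustment in formula 5."""
    ycc = cv2.cvtColor(detail_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y_other = cv2.cvtColor(other_bgr, cv2.COLOR_BGR2YCrCb)[..., 0].astype(np.float32)
    m_detail = float(ycc[..., 0].mean())
    m_other = float(y_other.mean())
    # Scale the luma channel toward the weighted mean of both paths.
    ycc[..., 0] *= (w_detail * m_detail + w_other * m_other) / max(m_detail, 1e-6)
    ycc[..., 0] = np.clip(ycc[..., 0], 0, 255)
    return cv2.cvtColor(ycc.astype(np.uint8), cv2.COLOR_YCrCb2BGR)
```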
In further embodiments, color information of the two-dimensional image data may be determined based on first color information of the first image data and second color information of the second image data. Specifically, whether the first image data and the second image data have abnormality is detected; when the first image data is detected to be abnormal, adjusting second color information of second image data according to first color information of the first image data to obtain two-dimensional image data; and when the second image data is detected to have abnormality, adjusting the first color information of the first image data according to the second color information of the second image data to obtain two-dimensional image data.
Adjusting the second color information of the second image data according to the first color information of the first image data to obtain the two-dimensional image data includes: carrying out weighted summation on the average color value of the first image data and the average color value of the second image data to obtain a first weighted color value; obtaining a first color coefficient according to the ratio of the first weighted color value to the average color value of the second image data; and adjusting the color value of each pixel in the second image data according to the first color coefficient to obtain the color value of each pixel in the two-dimensional image data. Adjusting the first color information of the first image data according to the second color information of the second image data to obtain the two-dimensional image data includes: carrying out weighted summation on the average color value of the second image data and the average color value of the first image data to obtain a second weighted color value; obtaining a second color coefficient according to the ratio of the second weighted color value to the average color value of the first image data; and adjusting the color value of each pixel in the first image data according to the second color coefficient to obtain the color value of each pixel in the two-dimensional image data.
When the average color value of the first image data and the average color value of the second image data are weighted and summed to obtain the first weighted color value, the weight coefficient of the first image data is smaller than the weight coefficient of the second image data; when the average color value of the second image data and the average color value of the first image data are weighted and summed to obtain the second weighted color value, the weight coefficient of the first image data is greater than the weight coefficient of the second image data. Namely, the path of image data without abnormality has the larger weight coefficient.
In addition, when it is detected that neither the first image data nor the second image data is abnormal, the second color information of the second image data may be adjusted according to the first color information of the first image data to obtain the two-dimensional image data, or the first color information of the first image data may be adjusted according to the second color information of the second image data to obtain the two-dimensional image data.
After the two-dimensional image data is obtained, the display is controlled to display the two-dimensional image according to the two-dimensional image data. In some embodiments, attitude data may also be acquired during the acquisition of the first image data and the second image data, and the two-dimensional image data may be rotated according to the attitude data so that the two-dimensional image is displayed upright; the attitude data indicates the spatial positions corresponding to the first image data and the second image data. Alternatively, when the display is controlled to display the two-dimensional image according to the two-dimensional image data, the user may be allowed to choose whether the two-dimensional image is displayed upright. When a setting instruction for upright display is received, the two-dimensional image is displayed upright according to the attitude data as described above. When a setting instruction for non-upright display is received, the two-dimensional image is displayed non-upright and its direction is associated with the spatial positions corresponding to the first image data and the second image data; that is, when the user rotates the operation portion and thereby rotates the insertion portion, the two-dimensional image displayed on the display rotates with it.
In summary, the imaging method 600 for the 3D endoscopic imaging system according to the embodiment of the present invention can switch between the first display mode and the second display mode, so that the stereoscopic image has a good stereoscopic effect in the first display mode, while the two-dimensional image displayed in the second display mode fuses information from the two paths of image data and can still present tissue information to the user when one path of image is abnormal.
Referring back to fig. 1, an embodiment of the present invention further provides a 3D endoscopic imaging system 100, including an endoscope and a camera host 150 connected to the endoscope. The endoscope includes an insertion portion 130 and an operation portion 160, the insertion portion 130 being configured to be inserted into the site to be observed of a patient; the endoscope comprises a first image sensor and a second image sensor for acquiring first image data and second image data respectively. The camera host 150 is configured to acquire the first image data and the second image data from the endoscope and execute the imaging method 200 for a 3D endoscopic imaging system according to the embodiment of the present invention.
Further, the 3D endoscopic imaging system 100 includes an attitude sensor disposed on the endoscope, configured to collect attitude data and send the attitude data to the camera host 150, the attitude data indicating the spatial positions corresponding to the first image data and the second image data. The attitude sensor comprises at least one of an accelerometer, a gyroscope, a magnetometer and the like, and may be mounted at any position of the insertion portion 130 or the operation portion 160. The optical axes of the first image sensor and the second image sensor may be disposed in parallel or at an angle, so that the two sensors can simulate the binocular stereoscopic vision of human eyes. The camera host 150 may determine the currently applied display mode in the first display mode and the second display mode according to a user instruction or the attitude data collected by the attitude sensor; when the first display mode is determined to be applied, it generates stereoscopic image data from the first image data and the second image data and controls the display to display the stereoscopic image, and when the second display mode is determined to be applied, it generates two-dimensional image data from the first image data and the second image data and controls the display to display the two-dimensional image.
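For illustration only (the disclosure does not specify the computation), the rotation angle about the tube axis could be estimated from a static accelerometer reading as follows, assuming the tube axis roughly coincides with the sensor's x-axis and the endoscope is not accelerating:

```python
import math

def roll_from_accelerometer(ay, az):
    """Estimate the roll angle (degrees) about the tube axis from the
    gravity components measured on the sensor's y and z axes; in practice
    fusion with a gyroscope/magnetometer would improve robustness."""
    return math.degrees(math.atan2(ay, az))
```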
The specific structure of the 3D endoscopic imaging system 100 and the specific steps of the imaging method 200 and the imaging method 600 have been described above and are not repeated here. The 3D endoscopic imaging system 100 according to the embodiment of the present invention can switch the display mode according to a user instruction and/or the attitude data of the attitude sensor; it ensures a good stereoscopic effect in the first display mode, while the two-dimensional image displayed in the second display mode fuses information from the two paths of image data and can still present tissue information to the user when one path of image is abnormal.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
The above description covers only specific embodiments of the present invention, and the protection scope of the present invention is not limited thereto. Any changes or substitutions that can readily be conceived by a person skilled in the art within the technical scope of the present invention shall be covered by the protection scope of the present invention, which shall be subject to the protection scope of the claims.

Claims (32)

1. An imaging method for a 3D endoscopic imaging system, the method comprising:
acquiring first image data acquired by a first image sensor of the 3D endoscopic imaging system and second image data acquired by a second image sensor of the 3D endoscopic imaging system, and acquiring attitude data acquired by an attitude sensor, wherein the attitude data is used for indicating a spatial position corresponding to the first image data and the second image data;
determining a currently applied display mode in a first display mode and a second display mode at least according to the attitude data;
when the currently applied display mode is determined to be the first display mode, generating stereoscopic image data according to the first image data and the second image data, and controlling a display to display a stereoscopic image according to the stereoscopic image data;
when the currently applied display mode is determined to be the second display mode, generating two-dimensional image data according to the first image data and the second image data, controlling the display to display a two-dimensional image according to the two-dimensional image data, and enabling the two-dimensional image to be displayed in an upright mode according to the posture data.
2. The method of claim 1, further comprising determining a currently applied display mode among the first display mode and the second display mode according to a user instruction, wherein:
when the user instruction indicates that the first display mode is adopted, determining the first display mode as a currently applied display mode to display the stereoscopic image;
when the user instruction indicates that the second display mode is adopted, the second display mode is determined as the currently applied display mode so as to display the two-dimensional image.
3. The method according to claim 1, wherein the 3D endoscopic imaging system comprises an endoscope including an insertion portion and an operation portion, the first image sensor and the second image sensor being provided in the insertion portion, the insertion portion being configured to be inserted into a site to be observed of a patient to acquire the first image data and the second image data of the site to be observed of the patient, the attitude data including a rotation angle of the insertion portion;
the determining a currently applied display mode in a first display mode and a second display mode at least according to the attitude data comprises:
when the difference value between the rotation angle and a reference angle is smaller than a preset angle, determining the first display mode as a currently applied display mode to display the stereoscopic image;
and when the difference value between the rotation angle and the reference angle is greater than or equal to the preset angle, determining the second display mode as a currently applied display mode to display the two-dimensional image.
4. The method of claim 3, wherein the reference angle is an angle at which there is no vertical disparity in the first image data and the second image data.
5. The method of claim 1, wherein generating two-dimensional image data from the first image data and the second image data comprises:
and determining the brightness information of the two-dimensional image data according to the first brightness information of the first image data and the second brightness information of the second image data.
6. The method of claim 5, wherein determining the luminance information of the two-dimensional image data from the first luminance information of the first image data and the second luminance information of the second image data comprises:
detecting whether the first image data and the second image data have abnormity or not;
when the first image data is detected to be abnormal, adjusting second brightness information of the second image data according to first brightness information of the first image data to obtain two-dimensional image data;
when the second image data is detected to be abnormal, adjusting first brightness information of the first image data according to second brightness information of the second image data to obtain the two-dimensional image data.
7. The method of claim 6, wherein said detecting whether an anomaly exists in the first image data and the second image data comprises:
determining a similarity between the first image data and the second image data;
when the similarity between the first image data and the second image data is smaller than a first threshold value, determining a first sharpness of the first image data and a second sharpness of the second image data, determining that the first image data is abnormal if the first sharpness is lower than the second sharpness, and determining that the second image data is abnormal if the first sharpness is higher than the second sharpness.
8. The method according to claim 1, wherein the two-dimensional image data includes partial information in the first image data and partial information in the second image data.
9. An imaging method for a 3D endoscopic imaging system, the method comprising:
acquiring first image data and second image data;
determining a currently applied display mode in the first display mode and the second display mode;
when the currently applied display mode is determined to be the first display mode, generating stereoscopic image data according to the first image data and the second image data, and controlling a display to display a stereoscopic image according to the stereoscopic image data;
and when the currently applied display mode is determined to be the second display mode, generating two-dimensional image data according to the first image data and the second image data, and controlling the display to display a two-dimensional image according to the two-dimensional image data.
10. The method of claim 9, wherein the first image data is acquired by a first image sensor of the 3D endoscopic imaging system and the second image data is acquired by a second image sensor of the 3D endoscopic imaging system.
11. The method of claim 9, wherein the two-dimensional image data includes partial information in the first image data and partial information in the second image data.
12. The method of claim 9, wherein the method further comprises: acquiring attitude data in the process of acquiring the first image data and the second image data, wherein the attitude data is used for indicating the spatial positions corresponding to the first image data and the second image data;
the determining a currently applied display mode in the first display mode and the second display mode includes: determining the currently applied display mode in the first display mode and the second display mode according to a user instruction and/or the attitude data.
13. The method of claim 12, wherein determining a currently applied display mode among a first display mode and a second display mode according to the user instruction comprises:
when the user instruction indicates that the first display mode is adopted, determining the first display mode as a currently applied display mode to display the stereoscopic image;
when the user instruction indicates that the second display mode is adopted, the second display mode is determined as the currently applied display mode so as to display the two-dimensional image.
14. The method according to claim 12, wherein the 3D endoscopic imaging system includes an endoscope including an insertion portion and an operation portion, the first image sensor and the second image sensor being provided in the insertion portion, the insertion portion being configured to be inserted into a site to be observed of a patient to acquire the first image data and the second image data of the site to be observed of the patient, the attitude data including a rotation angle of the insertion portion;
determining a currently applied display mode in a first display mode and a second display mode according to the attitude data includes:
when the difference value between the rotation angle and a reference angle is smaller than a preset angle, determining the first display mode as a currently applied display mode to display the stereoscopic image;
and when the difference value between the rotation angle and the reference angle is greater than or equal to the preset angle, determining the second display mode as a currently applied display mode to display the two-dimensional image.
15. The method of claim 14, wherein the reference angle is an angle at which there is no vertical disparity in the first image data and the second image data.
16. The method of claim 9, wherein the method further comprises: acquiring attitude data in the process of acquiring the first image data and the second image data, wherein the attitude data is used for indicating the spatial positions corresponding to the first image data and the second image data;
rotating the two-dimensional image data in accordance with the pose data to cause the two-dimensional image to be displayed upright.
17. The method of claim 16,
the method further comprises the following steps: when a setting instruction for causing the two-dimensional image to be displayed in a non-upright state is received, the two-dimensional image is displayed in a non-upright state, and the direction of the two-dimensional image is associated with the spatial position corresponding to the first image data and the second image data.
18. The method of claim 9, wherein the method further comprises: acquiring attitude data in the process of acquiring the first image data and the second image data, wherein the attitude data is used for indicating the spatial positions corresponding to the first image data and the second image data;
and rotating the stereo image data according to the attitude data so that the stereo image is displayed in an upright manner.
19. The method of claim 18,
the method further comprises the following steps: when a setting instruction for causing the stereoscopic image to be displayed in a non-upright state is received, the stereoscopic image is displayed in a non-upright state, and the direction of the stereoscopic image is associated with the spatial position corresponding to the first image data and the second image data.
20. The method of claim 9, wherein generating two-dimensional image data from the first image data and the second image data comprises:
and determining the brightness information of the two-dimensional image data according to the first brightness information of the first image data and the second brightness information of the second image data.
21. The method of claim 20, wherein determining the luminance information of the two-dimensional image data based on the first luminance information of the first image data and the second luminance information of the second image data comprises:
detecting whether the first image data and the second image data have abnormity or not;
when the first image data is detected to be abnormal, adjusting second brightness information of the second image data according to first brightness information of the first image data to obtain two-dimensional image data;
when the second image data is detected to be abnormal, adjusting the first brightness information of the first image data according to the second brightness information of the second image data to obtain the two-dimensional image data.
22. The method of claim 21, wherein the adjusting second luminance information of the second image data according to the first luminance information of the first image data to obtain the two-dimensional image data comprises:
carrying out weighted summation on the average brightness of the first image data and the average brightness of the second image data to obtain first weighted brightness;
obtaining a first brightness coefficient according to the ratio of the first weighted brightness to the average brightness of the second image data;
adjusting the brightness value of each pixel in the second image data according to the first brightness coefficient to obtain the brightness value of each pixel in the two-dimensional image data;
the adjusting first brightness information of the first image data according to second brightness information of the second image data to obtain the two-dimensional image data includes:
carrying out weighted summation on the average brightness of the second image data and the average brightness of the first image data to obtain second weighted brightness;
obtaining a second brightness coefficient according to the ratio of the second weighted brightness to the average brightness of the first image data;
and adjusting the brightness value of each pixel in the first image data according to the second brightness coefficient to obtain the brightness value of each pixel in the two-dimensional image data.
23. The method of claim 22, wherein when the average luminance of the first image data and the average luminance of the second image data are weighted and summed to obtain the first weighted luminance, the weight coefficient of the first image data is smaller than the weight coefficient of the second image data;
when the average brightness of the second image data and the average brightness of the first image data are subjected to weighted summation to obtain second weighted brightness, the weight coefficient of the first image data is larger than that of the second image data.
24. The method of claim 21, wherein determining the luminance information of the two-dimensional image data based on first luminance information of the first image data and second luminance information of the second image data, further comprises:
when it is detected that neither the first image data nor the second image data is abnormal, adjusting second brightness information of the second image data according to first brightness information of the first image data to obtain the two-dimensional image data, or adjusting first brightness information of the first image data according to second brightness information of the second image data to obtain the two-dimensional image data.
25. The method of claim 9, wherein generating two-dimensional image data from the first image data and the second image data comprises:
and determining color information of the two-dimensional image data according to the first color information of the first image data and the second color information of the second image data.
26. The method of claim 25, wherein determining color information for the two-dimensional image data based on the first color information for the first image data and the second color information for the second image data comprises:
detecting whether the first image data and the second image data have abnormity or not;
when the first image data is detected to be abnormal, adjusting second color information of the second image data according to first color information of the first image data to obtain two-dimensional image data;
when the second image data is detected to be abnormal, adjusting the first color information of the first image data according to the second color information of the second image data to obtain the two-dimensional image data.
27. The method of claim 26, wherein the adjusting second color information of the second image data according to first color information of the first image data to obtain the two-dimensional image data comprises:
weighted summation is carried out on the average color value of the first image data and the average color value of the second image data to obtain a first weighted color value;
obtaining a first color coefficient according to the ratio of the first weighted color value to the average color value of the second image data;
adjusting the color value of each pixel in the second image data according to the first color coefficient to obtain the color value of each pixel in the two-dimensional image data;
the adjusting first color information of the first image data according to second color information of the second image data to obtain the two-dimensional image data includes:
performing weighted summation on the average color value of the second image data and the average color value of the first image data to obtain a second weighted color value;
obtaining a second color coefficient according to the ratio of the second weighted color value to the average color value of the first image data;
and adjusting the color value of each pixel in the first image data according to the second color coefficient to obtain the color value of each pixel in the two-dimensional image data.
28. The method of claim 27, wherein in weighted summing the average color value of the first image data and the average color value of the second image data to obtain a first weighted color value, the weight coefficient of the first image data is smaller than the weight coefficient of the second image data;
and when the average color value of the second image data and the average color value of the first image data are subjected to weighted summation to obtain a second weighted color value, the weight coefficient of the first image data is greater than the weight coefficient of the second image data.
29. The method of claim 28, wherein determining color information of the two-dimensional image data from first color information of the first image data and second color information of the second image data, further comprises:
when it is detected that neither the first image data nor the second image data is abnormal, adjusting second color information of the second image data according to first color information of the first image data to obtain the two-dimensional image data, or adjusting first color information of the first image data according to second color information of the second image data to obtain the two-dimensional image data.
30. The method of claim 21 or 25, wherein said detecting whether the first image data and the second image data have anomalies comprises:
determining a similarity between the first image data and the second image data;
when the similarity between the first image data and the second image data is smaller than a first threshold value, determining a first sharpness of the first image data and a second sharpness of the second image data, determining that the first image data is abnormal if the first sharpness is lower than the second sharpness, and determining that the second image data is abnormal if the first sharpness is higher than the second sharpness.
31. A 3D endoscopic imaging system, comprising an endoscope and a camera host connected with the endoscope, wherein the endoscope comprises an insertion portion and an operation portion, the insertion portion being configured to be inserted into a site to be observed of a patient;
the endoscope comprises a first image sensor and a second image sensor which are respectively used for acquiring first image data and second image data;
the camera host is configured to acquire the first image data and the second image data from the endoscope to perform the method of any of claims 1-30.
32. The 3D endoscopic imaging system of claim 31, further comprising an attitude sensor disposed on the endoscope, configured to collect attitude data indicating a spatial position corresponding to the first image data and the second image data and to send the attitude data to the camera host.