CN117190138A - Medical lighting device, operating table, treatment chair, lighting controller, method and apparatus - Google Patents


Info

Publication number
CN117190138A
Authority
CN
China
Prior art keywords: target, mechanical arm, pose, treatment area, model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311152946.5A
Other languages
Chinese (zh)
Inventor
孙玉春
周永胜
田素坤
柯怡芳
江泳
范宝林
翟文茹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University School of Stomatology
Original Assignee
Peking University School of Stomatology
Application filed by Peking University School of Stomatology filed Critical Peking University School of Stomatology
Priority to CN202311152946.5A priority Critical patent/CN117190138A/en
Publication of CN117190138A publication Critical patent/CN117190138A/en

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 20/00 - Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B 20/40 - Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

The present disclosure provides a medical lighting device, operating table, treatment chair, and lighting controller, method and apparatus, relating to the technical field of medical equipment. A lighting control apparatus of the present disclosure includes: an image acquisition device configured to acquire a real-time image in the direction of a target treatment region of a patient; a controller configured to determine a target irradiation pose from the real-time image, generate a mechanical arm control signal according to the target irradiation pose, and send the signal to the mechanical arm; and a mechanical arm connected to the illumination device and configured to adjust the pose of the illumination device according to the control signal, so that the light irradiation direction of the illumination device matches the position of the target treatment area.

Description

Medical lighting device, operating table, treatment chair, lighting controller, method and apparatus
Technical Field
The present disclosure relates to the technical field of medical devices, and in particular, to a medical lighting apparatus, operating table, treatment chair, and lighting controller, method and device.
Background
The shadowless lamp is special equipment that adjusts the lamp angle or the angle of a polished reflecting surface to produce annular illumination, thereby eliminating the shadows and dead angles formed by the convex-concave structure of the illuminated part, yielding a uniformly bright field that illuminates the treatment area. The shadowless lamp is generally fixed above the treatment chair and is locked in a specific position after a suitable angle is set before treatment.
In current practice, taking dental treatment as an example, the patient lies on a dental treatment chair and the doctor directs the shadowless lamp light into the patient's mouth in order to see the affected part clearly. Because the patient's position changes during treatment, light from a fixed position cannot accurately reach the affected part, and the doctor must adjust the lamp repeatedly.
Disclosure of Invention
It is an object of the present disclosure to improve the degree of automation and the efficiency of operation of a device.
According to an aspect of some embodiments of the present disclosure, there is provided a lighting control apparatus comprising: an image acquisition device configured to acquire a real-time image in the direction of a target treatment region of a patient; a controller configured to determine a target irradiation pose from the real-time image, generate a mechanical arm control signal according to the target irradiation pose, and send the signal to the mechanical arm; and a mechanical arm connected to the illumination device and configured to adjust the pose of the illumination device according to the control signal, so that the light irradiation direction of the illumination device matches the position of the target treatment area.
In some embodiments, the image acquisition device comprises at least one of a binocular vision sensor or a three-dimensional image acquisition device.
In some embodiments, the illumination device is fixed to a free end of the mechanical arm, and the other end of the mechanical arm is a fixed end.
In some embodiments, the controller is configured to: extract predetermined feature points from the real-time image in the direction of the target treatment area of the patient; determine three-dimensional coordinates of the predetermined feature points, determine three-dimensional point cloud data within a predetermined range of the target treatment area, and update a three-dimensional model; and plan a path for the mechanical arm according to the three-dimensional model and generate the mechanical arm control signal.
In some embodiments, the controller is further configured to: acquire oral and maxillofacial model data of the patient; label the positions of oral and maxillofacial regions using a trained machine learning model according to the oral and maxillofacial model data, obtaining labeled model data; and acquire information of the target treatment area and determine the position of the target treatment area in the labeled model from that information and the labeled model data. The controller determines updated position information of the target treatment area in the updated three-dimensional model according to its position in the labeled model and the updated three-dimensional model, and determines the target irradiation pose according to the updated position information.
In some embodiments, the information of the target treatment area includes an identification of the target dental site.
In some embodiments, the controller is further configured to: determining the position of a target tool according to the real-time image of the direction of the target treatment area of the patient; a target treatment area is determined based on the position of the target tool.
In some embodiments, the image acquisition device is further configured to: collecting an environment image and sending the environment image to a controller; the controller is further configured to: and generating an environment model according to the environment image, and determining the initial pose of the mechanical arm.
In some embodiments, the controller is further configured to: determining an obstacle position according to the environmental model; according to the position of the obstacle and the irradiation pose of the target, planning a motion path of the mechanical arm and updating the planning result of the motion path in real time, wherein the motion path avoids the obstacle; and the controller generates a mechanical arm control signal according to the movement path planning result.
In some embodiments, the apparatus further comprises: a distance sensor attached to the mechanical arm and configured to send early-warning information to the controller when it detects an object within a predetermined range; the controller is further configured to control the mechanical arm to pause movement upon receipt of the early-warning information from the distance sensor.
In some embodiments, the controller is further configured to send a control signal to the lighting device to adjust at least one of a light source or brightness of the lighting device.
According to an aspect of some embodiments of the present disclosure, there is provided a medical lighting device comprising: any one of the lighting control devices above; and an illumination device configured to adjust the pose under the control of the illumination control device and emit light.
In some embodiments, the lighting device comprises a shadowless lamp.
In some embodiments, the shadowless lamp includes at least one of an LED light source, a laser light source, an ultraviolet light source, or a near-infrared light source.
In some embodiments, the lighting device conforms to at least one of the following: the laser light source comprises at least one of laser light with a wavelength of 405±20 nm, or laser light with a wavelength of 390-1600 nm, a power within the range 10⁻³-10⁻¹ W, and an energy within the range 10⁻²-10² J/cm²; the ultraviolet light has a wavelength of 280-320 nm and an irradiation intensity within a predetermined intensity interval around 15 μW/cm²; the near-infrared light has a wavelength within the range 780-2526 nm; or the visible blue light has a wavelength within the range 400-500 nm and a light intensity within the range 750-2000 mW/cm².
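As a sketch of how a controller might check a configured light source against the intervals listed above (the function names and the idea of software-side validation are illustrative assumptions, not part of the patent):

```python
# Hypothetical validators for the light-source parameter ranges given in the
# disclosure. Units follow the text: nm, W, J/cm^2, mW/cm^2.

def blue_light_ok(wavelength_nm: float, intensity_mw_cm2: float) -> bool:
    """Visible blue light: 400-500 nm, 750-2000 mW/cm^2."""
    return 400 <= wavelength_nm <= 500 and 750 <= intensity_mw_cm2 <= 2000

def nir_ok(wavelength_nm: float) -> bool:
    """Near-infrared light: 780-2526 nm."""
    return 780 <= wavelength_nm <= 2526

def laser_ok(wavelength_nm: float, power_w: float, energy_j_cm2: float) -> bool:
    """Laser: either 405 +/- 20 nm, or 390-1600 nm with bounded power/energy."""
    if 385 <= wavelength_nm <= 425:
        return True
    return (390 <= wavelength_nm <= 1600
            and 1e-3 <= power_w <= 1e-1
            and 1e-2 <= energy_j_cm2 <= 1e2)
```

A controller could run such checks before enabling a source, rejecting out-of-range configurations.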
According to an aspect of some embodiments of the present disclosure, there is provided an operating table comprising: a patient support mechanism configured to support a patient; and any of the medical illumination devices mentioned above, configured to emit light toward a target treatment area of a patient.
According to an aspect of some embodiments of the present disclosure, there is provided a dental treatment chair comprising: a dental chair configured to hold a patient; and any of the medical illumination devices mentioned above, configured to emit light to a target treatment area of the oromaxillofacial region of a patient.
According to an aspect of some embodiments of the present disclosure, there is provided a lighting control method including: acquiring a real-time image of the direction of a target treatment area of a patient; and determining the target irradiation pose according to the real-time image, generating a mechanical arm control signal according to the target irradiation pose, and sending the mechanical arm control signal to the mechanical arm so as to adjust the pose of the lighting equipment according to the mechanical arm control signal, so that the light irradiation direction of the lighting equipment is matched with the position of the target treatment area.
In some embodiments, determining the target illumination pose from the real-time image, generating the robotic arm control signal from the target illumination pose comprises: extracting a preset characteristic point according to a real-time image of the direction of a target treatment area of a patient; determining three-dimensional coordinates of preset feature points, determining three-dimensional point cloud data in a preset range of a target treatment area of a patient, and updating a three-dimensional model; and planning a path of the mechanical arm according to the updated three-dimensional model, and generating a mechanical arm control signal.
In some embodiments, the method further comprises: acquiring oral and maxillofacial model data of a patient; labeling the position of an oral and maxillofacial region based on the trained machine learning model according to the oral and maxillofacial model data, and acquiring the labeled model data; acquiring information of a target treatment area, and determining position information of the target treatment area in the marked model according to the information of the target treatment area and the data of the marked model; determining the target illumination pose from the real-time image includes: determining updated position information of the target treatment area in the updated three-dimensional model according to the updated three-dimensional model and the position information of the target treatment area in the marked model; and determining the target irradiation pose according to the updated position information.
In some embodiments, determining the target illumination pose from the real-time image, generating the robotic arm control signal from the target illumination pose further comprises: determining the position of a target tool according to the real-time image of the direction of the target treatment area of the patient; a target treatment area is determined based on the position of the target tool.
In some embodiments, the method further comprises: acquiring an environment image captured by the image acquisition device; and generating an environment model from the environment image and determining an initial pose of the mechanical arm, wherein the image acquisition device comprises at least one of a binocular vision sensor or a three-dimensional image acquisition device. Generating the mechanical arm control signal according to the target irradiation pose comprises: determining a target pose of the mechanical arm according to the target irradiation pose and the current pose of the mechanical arm, and generating the mechanical arm control signal according to the target pose of the mechanical arm.
In some embodiments, the method further comprises determining an obstacle location from the environmental model; generating the mechanical arm control signal according to the target irradiation pose comprises: planning a movement path of the mechanical arm according to the obstacle position and the target irradiation pose, wherein the movement path avoids the obstacle position; and generating a mechanical arm control signal according to the motion path planning result.
In some embodiments, the method further comprises: controlling the mechanical arm to pause movement upon receiving early-warning information from the distance sensor, wherein the distance sensor is attached to the mechanical arm and sends the early-warning information to the controller when an object is present within a predetermined range.
In some embodiments, the method further comprises: the controller sends a control signal to the lighting device to adjust at least one of a light source or a brightness of the lighting device.
In some embodiments, the method further comprises: acquiring oral and maxillofacial model sample data, wherein the sample data include labels of the positions of the various oral and maxillofacial regions; and training a machine learning model on the oral and maxillofacial model sample data until training is complete.
In some embodiments, the method further comprises: an image of the target tool and treatment location information on the target tool are acquired to identify the target tool from the real-time image and to determine a target treatment area from the treatment location information on the target tool.
According to an aspect of some embodiments of the present disclosure, there is provided a lighting controller comprising: a memory; and a processor coupled to the memory, the processor configured to perform any of the lighting control methods above based on instructions stored in the memory.
According to an aspect of some embodiments of the present disclosure, a non-transitory computer-readable storage medium is presented, having stored thereon computer program instructions which, when executed by a processor, implement the steps of any of the above lighting control methods.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the present disclosure, and together with the description serve to explain the present disclosure. In the drawings:
fig. 1 is a schematic diagram of some embodiments of a lighting control device of the present disclosure.
Fig. 2 is a schematic view of some embodiments of a medical lighting device of the present disclosure.
Fig. 3 is a schematic view of some embodiments of the operating table of the present disclosure.
Fig. 4 is a schematic view of some embodiments of a dental chair of the present disclosure.
Fig. 5 is a flow chart of some embodiments of the lighting control method of the present disclosure.
Fig. 6 is a flow chart of other embodiments of the lighting control method of the present disclosure.
Fig. 7 is a schematic diagram of some embodiments of a lighting controller of the present disclosure.
Fig. 8 is a schematic diagram of further embodiments of the lighting controller of the present disclosure.
Detailed Description
The technical scheme of the present disclosure is described in further detail below through the accompanying drawings and examples.
The inventor found that in the related-art approach, the doctor needs to adjust the shadowless lamp many times during treatment, which prolongs the operation, increases the patient's discomfort, and reduces operation efficiency.
In view of the above problems, the present disclosure provides a medical lighting device, an operating table, a treatment chair, and a lighting controller, method and apparatus, so that the lighting device can adaptively adjust its irradiation position, reducing the manual work required of the doctor and improving operation efficiency.
A schematic diagram of some embodiments of the lighting control apparatus of the present disclosure is shown in fig. 1.
The image acquisition device 101 is capable of acquiring real-time images in the direction of a target treatment region of a patient. In some embodiments, the image capturing apparatus 101 is an apparatus capable of capturing three-dimensional information, such as a binocular vision sensor, and the real-time image includes two-dimensional images of the same area captured from two positions (angles), or a three-dimensional image obtained by processing those two-dimensional images. The image acquisition device may also be a three-dimensional image acquisition device capable of acquiring a two-dimensional image and depth information, the real-time image comprising the two-dimensional image and depth information for at least some pixel positions in the image. In some embodiments, a binocular vision sensor and a three-dimensional image acquisition apparatus may be used in combination. In some embodiments, the image acquisition device comprises at least one of a monocular vision sensor, a binocular vision sensor, a multiview vision sensor, or a three-dimensional image acquisition device.
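For the binocular case, depth follows from triangulation over the two viewpoints. A minimal sketch, assuming a rectified stereo pair (the patent does not give calibration details; focal length in pixels, baseline in metres, and disparity in pixels are illustrative conventions):

```python
# Standard stereo triangulation: for a rectified pair, Z = f * B / d,
# where f is the focal length (px), B the camera baseline (m), and
# d the disparity (px) of a matched point between the two images.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the depth (m) of a point matched across a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

For example, with a 700 px focal length and a 10 cm baseline, a 35 px disparity places the point 2 m from the cameras.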
In some embodiments, the image capturing device 101 is mounted close to the illumination device (e.g., the shadowless lamp), thereby improving the accuracy of the illumination device's position adjustment.
The image acquisition device 101 is in signal connection with the controller 102 by wired or wireless means. The controller 102 acquires a real-time image acquired by the image acquisition device 101, determines a target irradiation pose from the real-time image, generates a robot arm control signal from the target irradiation pose, and transmits the robot arm control signal to the robot arm 103. In some embodiments, the target irradiation pose is a pose that enables light emitted by the lighting device to irradiate the target treatment area according to the position of the target treatment area, and specifically includes the position and the pose (orientation) of the lighting device. In some embodiments, the robot arm control signal may be generated by a robot control system in the related art.
In some embodiments, the controller 102 may include a display device, and may display the image acquired by the image acquisition apparatus 101 together with other information the doctor needs to check, so that the doctor can review it conveniently, further improving operation efficiency.
The robotic arm 103 is coupled to a lighting device; in some embodiments the lighting device is secured to the free end of the robotic arm, and the other end of the robotic arm is a fixed end secured to a predetermined location as desired, such as equipment associated with the operating table, or a wall, base, or the like. The mechanical arm 103 can adjust its configuration and end-effector angle according to the mechanical arm control signal, thereby adjusting the pose of the illumination device connected to its end so that the light irradiation direction of the illumination device matches the position of the target treatment area.
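The mapping from joint angles to the lamp pose at the free end can be illustrated with a planar two-link arm; this is a simplification for exposition, since the patent does not specify the arm's kinematics:

```python
import math

# Forward kinematics of a planar two-link arm: the fixed end is at the
# origin, link lengths l1 and l2, joint angles theta1 and theta2 (radians).
# The returned heading stands in for the lamp's irradiation direction.

def free_end_pose(l1: float, l2: float, theta1: float, theta2: float):
    """Return (x, y, heading) of the free end carrying the lighting device."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y, theta1 + theta2
```

A controller commanding a target irradiation pose would solve the inverse of this mapping (inverse kinematics) to obtain the joint-angle targets encoded in the mechanical arm control signal.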
The illumination control equipment can identify and track changes in the position of the patient's target treatment area and control the mechanical arm to adjust the pose of the illumination device, so that the light irradiation automatically follows the position of the target treatment area. This improves the degree of automation of the equipment, removes the need for manual adjustment, and improves operation efficiency.
In some embodiments, the mechanical arm can avoid obstacles such as articles placed around and patients and medical staff in the process of adjusting the pose, so that the safety in operation is ensured.
In some embodiments, the image acquisition device 101 is capable of acquiring environment information in addition to the real-time image in the direction of the patient's target treatment region. In some embodiments, the image capturing apparatus 101 may adjust its own angle at a predetermined period to capture surrounding information. The controller 102 determines the position and size of surrounding obstacles based on the environment information, for example by calculating object distance from the parallax between the two viewpoints of a binocular vision image and then determining the position and size of the obstacle from the distance information and the image information. In some embodiments, the image captured by the camera may be analyzed with image processing techniques to detect and identify obstacles in the environment, for example detecting objects on the ground from a depth image and distinguishing obstacles from other objects, thereby improving the construction efficiency of the environment map.
In some embodiments, when determining the position and size of the obstacle, or obtaining an environment map, the controller 102 may plan the movement path of the robot arm according to a predetermined algorithm (e.g., a machine learning algorithm). In some embodiments, the predetermined algorithm can be implemented based on artificial intelligence technology, and training of the network model is completed through learning features of different obstacles in the environment, so that path planning for planning obstacle avoidance is achieved. In some embodiments, the predetermined algorithm may be an intelligent detection method based on a deep learning network, and training an intelligent reasoning model based on a convolutional neural network according to a constructed optimal path data set (for example, obtained by intelligent methods such as an a-x algorithm, a Dijkstra algorithm, a costmap cost map, and the like) to predict a high-quality obstacle avoidance path.
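A minimal grid-based A* planner of the kind the description names (alongside Dijkstra and costmaps) can be sketched as follows; the occupancy-grid representation, 4-connected moves, and unit step costs are illustrative assumptions, not the patent's planner:

```python
import heapq

# A* over a 2D occupancy grid: 0 = free cell, 1 = obstacle. Returns the
# list of cells from start to goal (inclusive), or None if unreachable.

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, cell, path)
    best_g = {start: 0}
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(open_set, (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None
```

In the setting described above, obstacle cells would come from the environment model, and the resulting path would be refined into joint-space motion for the arm.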
In some embodiments, after the controller 102 generates the motion path of the robotic arm 103, it sends a motion control signal to the robotic arm to make the arm move along that path. In some embodiments, the controller may control the speed and direction of movement of the robotic arm in real time. In some embodiments, during motion, the controller may recalculate the path in real time according to changes in the surrounding environment (such as updated obstacle positions) or changes in the patient's position, improving the timeliness and safety of pose adjustment.
In some embodiments, the lighting control device may further comprise one or more distance sensors located on the robotic arm capable of detecting the distance of the obstacle. When the distance sensor detects that an obstacle exists in a preset range, a signal is sent to the controller, and the controller controls the mechanical arm to stop moving in an emergency mode, so that the reaction speed of obstacle avoidance is improved, and the safety in operation is further improved.
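The pause-on-proximity rule above can be sketched as follows; the threshold value and the list-of-readings interface are illustrative assumptions standing in for the patent's "predetermined range" and sensor signalling:

```python
# Emergency-stop check for the arm-mounted distance sensors: if any sensor
# reports an object inside the safety range, the arm must pause.

SAFETY_DISTANCE_M = 0.05  # assumed value for the predetermined range

def arm_may_move(sensor_readings_m) -> bool:
    """Return False (pause the arm) if any reading is within the safety range."""
    return all(d > SAFETY_DISTANCE_M for d in sensor_readings_m)
```

Because this check bypasses path planning entirely, it reacts faster than re-planning and acts as a last line of defence.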
In some embodiments, the following description takes dental treatment as an example. If the target irradiation area changes, processing can proceed based on the data of the corresponding position.
In some embodiments, the controller is capable of extracting the predetermined feature points from the real-time image in the direction of the patient's target treatment region. In some embodiments, the predetermined feature points may include locations that are easy to identify inside and outside the mouth, such as cusp locations, lip locations, predetermined locations on the gums, and so on. A plurality of predetermined feature points are used to improve the accuracy of locating the target treatment area. The controller then determines the three-dimensional coordinates of the predetermined feature points, determines three-dimensional point cloud data within a predetermined range of the target treatment area, and updates the three-dimensional model. In some embodiments, the three-dimensional model is generated beforehand from the patient's oral and maxillofacial model data and then updated in real time during the operation, avoiding the delay of building a new model intraoperatively and improving model accuracy and data processing efficiency. In some embodiments, the three-dimensional point cloud data may be a dense point cloud, improving the accuracy of the model. In some embodiments, key points are extracted from the image with the ORB (Oriented FAST and Rotated BRIEF) algorithm or the FAST (Features from Accelerated Segment Test) algorithm, key points and descriptors are computed, and feature matching is performed between two adjacent key frames using those feature points, so as to determine the three-dimensional coordinates of the matched feature points and construct a dense point cloud. A path for the mechanical arm is then planned according to the three-dimensional model, and the mechanical arm control signal is generated.
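The matching step can be illustrated with binary descriptors reduced to Python integers and matched by Hamming distance, as ORB descriptors are in practice; a real pipeline would use an ORB/FAST implementation (e.g. in OpenCV) rather than this pure-Python stand-in, and the distance threshold below is an assumed value:

```python
# Greedy nearest-neighbour matching of binary feature descriptors.
# Descriptors are modelled as ints; real ORB descriptors are 256-bit strings.

def hamming(d1: int, d2: int) -> int:
    """Hamming distance: number of bit positions where the descriptors differ."""
    return bin(d1 ^ d2).count("1")

def match_descriptors(query, train, max_dist=40):
    """Return (query_idx, train_idx) pairs for matches within max_dist bits."""
    matches = []
    for i, q in enumerate(query):
        j, dist = min(((j, hamming(q, t)) for j, t in enumerate(train)),
                      key=lambda p: p[1])
        if dist <= max_dist:
            matches.append((i, j))
    return matches
```

Matched pairs across two key frames, combined with the camera geometry, yield the three-dimensional coordinates that populate the dense point cloud.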
The illumination control equipment can update the three-dimensional model based on the acquired real-time image in a feature extraction mode, so that the accuracy of illumination tracking is improved, and the illumination effect on a target treatment area is ensured.
In some embodiments, the controller 102 obtains the patient's oral and maxillofacial model data in advance and, from the model data, labels the positions of the oral and maxillofacial regions using the trained machine learning model, obtaining labeled model data. In some embodiments, the controller 102 may acquire the patient's oral and maxillofacial model data during pre-operative preparation. In some embodiments, the trained machine learning model (e.g., a deep learning model) is obtained by training on oral and maxillofacial model sample data, wherein the sample data include images of the oral and maxillofacial model together with labels of the positions of its regions; the labels may include gum positions and tooth positions, and in some embodiments the position of each tooth site.
Further, the controller 102 obtains information of the target treatment region, and determines positional information of the target treatment region in the labeled model according to the information of the target treatment region and the data of the labeled model. In some embodiments, information of the target treatment area may be entered by the physician (e.g., providing an identification of the target site) or noted in the image. The controller acquires information of the target treatment area, and further determines the position information of the target treatment area in the marked model. In some embodiments, the controller 102 may be coupled to an input device to obtain input or information tagged in an image.
In the subsequent use process, the controller 102 determines updated position information of the target treatment region in the updated three-dimensional model according to the position information of the target treatment region in the marked model and the updated three-dimensional model, and determines the target irradiation pose according to the updated position information.
The illumination control equipment can determine the position of the target treatment area in the model based on the preprocessed data, and update the position in real time in the use process, so that the accuracy of locking the position of the target treatment area is improved, and the illumination effect on the target treatment area is ensured.
In some embodiments, the controller 102 may identify the tool used for treatment or adjuvant therapy in the real-time image based on pre-acquired information about that tool. In some embodiments, the tool may be a medical instrument common in surgery, such as a mouth mirror. In some embodiments, the information about the tool includes image information and annotations of the tool. In some embodiments, the image information may be three-dimensional image information, or two-dimensional image information photographed from a plurality of angles to improve the accuracy of image recognition. In some embodiments, the annotation of the tool may include the location of the region of the tool that contacts or corresponds to the location being treated, such as the tip region of a probe or the mirror region of a mouth mirror.
In some embodiments, the controller 102 uses the identified tool as the target tool and determines the target treatment area based on the position of the target tool. Note that the target treatment area may simply be an area the doctor needs to observe; an actual treatment operation need not be performed there. Such an illumination control apparatus enables the illumination apparatus to track changes in the tool position to adjust the illumination position, ensuring the illumination state at the target tool position.
In some embodiments, the controller 102 determines a target irradiation pose of the illumination device using a location of an area where the tool contacts or corresponds to a location of the patient being treated as a target treatment area, and further generates a robotic arm control signal to control the robotic arm to adjust the pose of the illumination device. The illumination control equipment can enable the illumination equipment to track the change of the tool position to adjust the illumination position, and ensure the illumination state of the corresponding position of the target tool, so that doctors can observe, modify or increase the treatment area conveniently, the automation degree and the operation convenience of the equipment are improved, and the data processing capacity can be reduced.
In some embodiments, the controller 102 determines the relative position, in the labeled model, of the area where the tool contacts or corresponds to the position of the patient being treated, and then determines the target treatment area corresponding to that relative position in the labeled model; for example, the tooth or gum area the physician needs to operate on is determined from the relative position and taken as the target treatment area. In some embodiments, the labeled model includes a plurality of divided treatment areas; the treatment area associated with the relative position can be determined as the target treatment area by a pre-trained prediction algorithm, or the treatment area closest to the relative position can be selected by computing Euclidean distances. Such an illumination control apparatus can infer which treatment area a doctor needs to observe, modify, or add by tracking changes in the tool position, improving the intelligence, automation, and operational convenience of the apparatus.
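As a minimal sketch of the Euclidean-distance variant described above, the snippet below selects the labeled treatment region whose centroid lies closest to the tool's contact point; the region names and centroid coordinates are hypothetical examples, not values from the disclosure:

```python
import math

# Hypothetical centroids of labeled treatment regions in the model frame (mm).
REGION_CENTROIDS = {
    "tooth_36": (10.0, 4.0, 2.0),
    "tooth_37": (14.0, 4.5, 2.1),
    "gum_lower_left": (12.0, 1.0, 1.5),
}

def nearest_treatment_region(contact_point, region_centroids):
    """Return the label of the region whose centroid is closest (by
    Euclidean distance) to the tool's contact point."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(region_centroids,
               key=lambda name: dist(contact_point, region_centroids[name]))
```

For example, a probe tip detected at (10.5, 4.1, 2.0) would map to the "tooth_36" region under these assumed centroids.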
In some embodiments, the image capture device 101 is also capable of capturing an image of the environment and transmitting it to the controller 102. The controller 102 is also capable of generating an environment model from the environment image and determining an initial pose of the robotic arm. When generating the robotic arm control signal, the controller 102 adjusts the posture of the robotic arm based on its initial pose, gradually updates the current pose of the robotic arm, and uses the updated pose as the data basis for subsequent adjustment. The controller 102 plans the motion trajectory of the robotic arm according to the arm's pose and the environment model, and then generates the corresponding robotic arm control signal. In some embodiments, since an image capture device mounted at a location comparable to the lighting device has a limited field of view that makes environment image capture difficult, the environment images may instead be captured by cameras, video cameras, and the like disposed in various directions of the space in which the lighting control device is located.
The illumination control equipment can collect environment images in advance, determine the initial pose of the mechanical arm, provide a data basis for the generation of subsequent control signals, and improve the accuracy of mechanical arm control. In addition, the environment model can provide reference information of the motion path of the mechanical arm, determine the motion path capable of avoiding the obstacle, further generate a mechanical arm control signal, avoid danger and improve safety.
A schematic diagram of some embodiments of the medical illumination device of the present disclosure is shown in fig. 2.
The lighting control device is any one of those described above. In some embodiments, the lighting control device comprises an image acquisition device 21, a controller 22, and a robotic arm 23.
The medical lighting device further comprises a lighting device 24 capable of adjusting the pose and emitting light under the control of the lighting control device. In some embodiments, the lighting control device may receive feedback signals sent by the lighting device 24 to grasp the operating state of the lighting device in real time, improving the reliability of the adjustment.
In some embodiments, the illumination device 24 is mounted at a position comparable to that of the image acquisition device 21, thereby reducing the difficulty of image processing and improving the accuracy of the illumination position. In some embodiments, the illumination device 24 may be a shadowless lamp, thereby improving the clarity of the doctor's view.
In such a medical lighting device, the lighting equipment can be controlled by the lighting control equipment to automatically adjust its light irradiation as the position of the target treatment area changes, improving the degree of automation, eliminating the need for manual adjustment, and improving doctors' operating efficiency. In some embodiments, the light source within the shadowless lamp comprises one or more of an LED lamp, a laser, an ultraviolet light source, or a near-infrared light source. In some embodiments, the light source of the shadowless lamp includes a light source with a specified wavelength that can reveal specific physiological tissue (such as dental plaque, specific microorganisms, or nerves and blood vessels) or pathological tissue (such as granulation tissue); the specific wavelength range can be set and adjusted according to the display effects observed under light of different wavelengths in related technical research. For example, a light source that reveals malignant tumor cells can assist doctors in more accurately clearing the extent of the cancer. As another example, the extent of necrotic bone can be displayed, assisting doctors in removing dead bone more accurately while preserving healthy bone to the maximum extent.
In some embodiments, led light is applied to clinical scene surgical field illumination. In some embodiments, the laser light source is applied to detection of dental plaque, bacteria, pathological cells in the operation area, promotion of tissue healing, anti-inflammatory pain, etc., and in addition, may also be applied to detection of color marks on the surface of a dental crown (marks left on a bite paper placed in between after occlusion of a jaw tooth).
The medical lighting device can provide various light sources. While providing illumination, it offers additional functions such as plaque irradiation and highlighting (different colors highlighting different areas), promoting tissue healing, and anti-inflammatory and analgesic effects, making it convenient to apply in different scenarios, improving the medical lighting equipment's ability to assist treatment, and improving the user experience.
In some embodiments, the medical illumination apparatus includes a controller of the illumination device that is capable of adjusting the light source of the illumination device, such as switching among an LED lamp, a laser, an ultraviolet light source, or a near-infrared light source. In some embodiments, the wavelength of the laser can be switched. In some embodiments, the lighting device controller may be integrated in the lighting control device. In some embodiments, the functions of the controller of the lighting device are performed by the controller of the lighting control device. The medical lighting device thus makes it convenient for doctors to adjust the lighting state as needed, improving flexible service capability.
In some embodiments, the light source within the shadowless lamp comprises a laser with a wavelength in the range of (405 ± 20) nm. Such a laser can detect dental plaque on teeth through autofluorescence imaging. Healthy teeth, on which no plaque biofilm has formed, show no red autofluorescence, whereas a formed plaque biofilm exhibits red autofluorescence under 405 nm illumination, so dental plaque can be clearly displayed and easily identified.
In some embodiments, the image acquisition device in the illumination control device can photograph the image acquisition area, for example acquiring an image of the tooth surface while laser irradiation causes fluorescence on the teeth, thereby acquiring image information of dental plaque; combined with a constructed plaque identification and segmentation method based on a deep-learning network model, intelligent detection and extraction of dental plaque is realized, which is convenient for recording and subsequent use. In some embodiments, the image acquisition device in the lighting control device may include an optical interference filter, and the image is acquired after filtering through it, such as photographing the teeth and extracting the optical information of the healthy tooth portions. In some embodiments, the controller can process the image, for example by image enhancement and image segmentation, to compute a quantitative plaque index, i.e., the ratio of the plaque area to the whole tooth area, and issue a bacterial prompt to the operator, making it convenient to provide more comprehensive information to the doctor, improve the success rate of the operation, and optimize postoperative recovery.
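The quantitative plaque index described above (the ratio of plaque area to whole tooth area) could be computed roughly as follows; the red/green channel ratio and its threshold are illustrative assumptions standing in for the trained segmentation model the text describes:

```python
import numpy as np

def plaque_index(red, green, tooth_mask, ratio_thresh=1.5):
    """Fraction of tooth-surface pixels classified as plaque.
    Plaque pixels are those whose red autofluorescence dominates the green
    channel by `ratio_thresh`; the threshold is an illustrative assumption,
    not a clinical constant."""
    red = red.astype(float)
    green = green.astype(float) + 1e-6  # avoid division by zero
    plaque = (red / green > ratio_thresh) & tooth_mask
    tooth_area = tooth_mask.sum()
    return float(plaque.sum()) / tooth_area if tooth_area else 0.0
```

On a 2x2 test patch with one strongly red pixel, this yields an index of 0.25, i.e., one quarter of the tooth surface flagged as plaque.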
In some embodiments, the light source within the shadowless lamp comprises a low-energy laser, for example with a wavelength in the range of 390–1600 nm, a power in the range of 10⁻³–10⁻¹ W, and an energy in the range of 10⁻²–10² J/cm². In some embodiments, the medical illumination apparatus may use this laser to illuminate the surgical field during the suturing phase after the procedure ends, to promote healing of soft tissue and bone tissue and to provide anti-inflammatory and analgesic effects.
In some embodiments, the light source within the shadowless lamp comprises ultraviolet light with a wavelength in the range of 280–320 nm and an irradiance of about 15 μW/cm². Such a light source can help improve osteoporosis and promote the synthesis of vitamin D.
In some embodiments, the light source within the shadowless lamp comprises near-infrared light with a wavelength in the range of 780–2526 nm. Such a light source makes it convenient to display the positions of tumor cells and to analyze and diagnose tumor tissue.
In some embodiments, the light source in the shadowless lamp comprises visible blue light with a wavelength in the range of 400–500 nm and a light intensity in the range of 750–2000 mW/cm². Such a light source helps achieve a light-curing effect during dental crown bonding.
In some embodiments, a medical caregiver in the surgical procedure may control the medical illumination device as needed, for example by sending control information to the controller through a human-computer interaction interface, whereupon the controller switches one or more of the light source type, wavelength, or illumination intensity of the shadowless lamp, so that the shadowless lamp's irradiation conveniently meets the doctor's needs.
A schematic of some embodiments of the surgical bed of the present disclosure is shown in fig. 3.
The patient support mechanism 31 is configured to support a patient. In some embodiments, the patient support mechanism 31 includes a table top and a base. In some embodiments, the table top includes five movable sections: a head plate, a back plate, a waist plate, a seat plate, and a leg plate.
The medical lighting device is any one of those described above. In some embodiments, the lighting control device comprises an image acquisition device 21, a controller 22, a robotic arm 23, and a lighting device 24, wherein the light of the lighting device 24 is projected onto the patient support mechanism 31.
In some embodiments, the patient support mechanism 31 and the medical illumination device may be a connected structure. In some embodiments, the connection may be a fixed connection, reducing the difficulty of position adjustment; in some embodiments, the connection may be a movable connection, increasing the adjustable range of positions and making the surgical bed suitable for patients with different afflictions. In some embodiments, the two parts are independent and used in coordination, improving the mobility of the equipment.
In some embodiments, the patient support mechanism 31 further comprises a post to which the fixed end of the robotic arm 23 in the medical lighting device is connected.
With such an operating table, the illumination equipment can be automatically adjusted as the position of the target treatment area changes, under the control of the illumination control equipment, improving the degree of automation of the equipment, eliminating the need for manual adjustment, and improving doctors' operating efficiency.
A schematic of some embodiments of the dental chair of the present disclosure is shown in fig. 4.
The dental chair 41 is capable of holding a patient. In some embodiments, the dental chair 41 may be an electric dental chair: a base plate is fixed to the ground and connected to the upper part of the chair through a bracket, and the chair's motion is controlled by a control switch. According to treatment requirements, the chair can perform actions such as ascending, descending, pitching, reclining, and resetting by operating the control-switch buttons.
The medical lighting device is any one of those described above. In some embodiments, the lighting control device comprises an image acquisition device 21, a controller 22, a robotic arm 23, and a lighting device 24, wherein the light of the lighting device 24 is projected onto the dental chair 41.
In some embodiments, the dental chair 41 and the medical illumination device may be a connected structure. In some embodiments, the connection may be a fixed connection, reducing the difficulty of position adjustment; in some embodiments, the connection may be a movable connection, increasing flexibility. In some embodiments, the two parts are independent and used in coordination, improving the mobility of the equipment.
With such a dental treatment chair, the illumination equipment can be automatically adjusted as the position of the target treatment area changes, under the control of the illumination control equipment, improving the degree of automation of the equipment, eliminating the need for manual adjustment, and improving doctors' operating efficiency.
A flowchart of some embodiments of the lighting control method of the present disclosure is shown in fig. 5.
In step S52, a real-time image of the target treatment region direction of the patient is acquired. In some embodiments, a real-time image of the direction of the target treatment area is acquired by an image acquisition device. In some embodiments, the image capturing apparatus 101 is an apparatus capable of capturing three-dimensional information, such as a binocular vision sensor, and the real-time image includes two-dimensional images captured from two locations or three-dimensional images acquired based on two-dimensional image processing. The image acquisition device may also be a three-dimensional image acquisition device capable of acquiring a two-dimensional image and depth information, the real-time image comprising the two-dimensional image and the depth information of at least part of the pixel positions in the image. In some embodiments, a binocular vision sensor and a three-dimensional image acquisition apparatus may also be used in combination. In some embodiments, the controller acquires the real-time image through a data interface with the image acquisition device.
In step S54, a target irradiation pose is determined from the real-time image, and a robotic arm control signal is generated from the target irradiation pose and sent to the robotic arm, so that the control signal adjusts the pose of the lighting device and the light irradiation direction of the lighting device matches the position of the target treatment area. In some embodiments, the target irradiation pose is a pose of the illumination device, determined from the position of the target treatment area, that enables the emitted light to irradiate the target treatment area; it specifically includes the position and the orientation of the illumination device.
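A minimal sketch of computing such a target irradiation pose from the target treatment area's position: the lamp is placed at a standoff distance along an approach direction, with its optical axis aimed back at the target. The 0.8 m standoff and the choice of inputs are illustrative assumptions, not parameters from the disclosure:

```python
import numpy as np

def target_irradiation_pose(target, approach_dir, standoff=0.8):
    """Return (lamp position, unit aim vector): the lamp sits `standoff`
    metres from the target along `approach_dir`, pointing back at it.
    A real controller would convert the aim vector into an orientation
    (e.g. roll/pitch/yaw) for the arm's end effector."""
    target = np.asarray(target, dtype=float)
    d = np.asarray(approach_dir, dtype=float)
    d = d / np.linalg.norm(d)          # normalize the approach direction
    position = target + standoff * d   # lamp position
    aim = -d                           # optical axis points at the target
    return position, aim
```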
In some embodiments, information about surrounding obstacles may be referenced during motion planning of the robotic arm. In some embodiments, the obstacles include surrounding items as well as the patient and healthcare workers. In some embodiments, the location and size of an obstacle may be determined based on acquired real-time environmental information, such as image information acquired with an image acquisition device. In some embodiments, images captured by cameras may be analyzed with image-processing techniques to detect and identify obstacles in the environment, for example detecting objects on the ground through a depth image and distinguishing them from obstacles, thereby improving the efficiency of constructing the environment map.
In some embodiments, once the position and size of the obstacles are determined, or after an environment map is obtained, the motion path of the robotic arm is planned according to a predetermined algorithm (e.g., a machine/deep-learning algorithm). In some embodiments, the predetermined algorithm can be implemented with artificial-intelligence techniques: a network model is trained by learning the features of different obstacles in the environment, achieving the planning of obstacle-avoidance paths. In some embodiments, the predetermined algorithm may be generated by training a deep-learning algorithm: given a constructed optimal-path dataset (obtained, for example, by methods such as the A* algorithm, Dijkstra's algorithm, or a costmap), an inference model based on a convolutional neural network is trained to predict a high-quality obstacle-avoidance path.
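For concreteness, a minimal A* search on a 2-D occupancy grid is sketched below. The text's planner would operate on the full environment map (and, in practice, in the arm's configuration space), so the grid representation and 4-connectivity here are simplifying assumptions:

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected occupancy grid
    (0 = free cell, 1 = obstacle). Returns the path as a list of
    (row, col) cells, or None if no collision-free path exists."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None
```

On a 3x3 grid with a wall across the middle row, the planner routes around the obstacle through the open column, which is the behavior the obstacle-avoidance planning above requires.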
In some embodiments, after the motion path of the robotic arm is generated, a motion control signal is sent to the robotic arm to control it to move along that path. In some embodiments, the speed and direction of movement of the robotic arm are controlled in real time. In some embodiments, during the movement, the motion state is adjusted in real time by recalculating the path according to changes in the surrounding environment (such as updates to obstacle positions) or changes in the patient's position, improving the timeliness and safety of the pose adjustment.
In some embodiments, the lighting control device may further comprise one or more distance sensors, located on the robotic arm, capable of detecting the distance to an obstacle. When a distance sensor detects an obstacle within a preset distance range, it sends a signal to the controller, and the controller commands the robotic arm to stop moving immediately, improving the reaction speed of obstacle avoidance and further improving safety during the operation.
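The emergency-stop check itself can be as simple as the sketch below; the 0.10 m safety threshold is an illustrative assumption, not a value from the disclosure:

```python
def obstacle_stop_required(sensor_readings_m, stop_distance_m=0.10):
    """True if any distance sensor on the arm reports an obstacle at or
    inside the preset safety range, in which case the controller should
    command an emergency stop of the robotic arm."""
    return any(d <= stop_distance_m for d in sensor_readings_m)
```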
By the method, the illumination control equipment can identify and track the position change of the target treatment area of the patient, and control the mechanical arm to adjust the pose of the illumination mechanism, so that the light irradiation of the illumination equipment is automatically adjusted along with the position change of the target treatment area, the automation degree of the equipment is improved, the equipment is not required to be manually adjusted, and the operation efficiency is improved.
A flowchart of further embodiments of the lighting control method of the present disclosure is shown in fig. 6. In some embodiments, the following description takes dental treatment as an example. When the target treatment area is another body part, the processing can be performed based on the data of the corresponding position.
In step 611, oromaxillofacial model data of the patient is acquired. In some embodiments, the oromaxillofacial model data is obtained from the patient during preoperative preparation. In some embodiments, the oromaxillofacial model data may be acquired by an oral scan.
In step 612, the oromaxillofacial region positions are labeled in the oromaxillofacial model data by a trained machine learning model, yielding the labeled model data. In some embodiments, the trained machine learning model is generated by training on oromaxillofacial model sample data. In some embodiments, gum positions and tooth positions are annotated in the labeled model; in some embodiments, the position of each individual tooth is also included.
In step 613, information of the target treatment region is acquired, and positional information of the target treatment region in the labeled model is determined based on the information of the target treatment region and the data of the labeled model. In some embodiments, the information of the target treatment region may be entered by a physician (e.g., providing an identification of the target dental site) or annotated in an image, the information of the target treatment region is matched with the annotation in the annotated model, and positional information of the target treatment region in the annotated model is determined.
In some embodiments, steps 611-612 may be performed prior to the beginning of the procedure, thereby saving processing time during the procedure.
In some embodiments, step 613 may be performed at the time of the beginning of the surgery, so as to improve the matching degree between the initially determined three-dimensional model and the subsequently updated three-dimensional model, reduce the amount of data to be modified, and improve the processing efficiency; in addition, the doctor can conveniently mark the target treatment area in real time, and the flexibility is improved.
In some embodiments, an environmental image acquired by an image acquisition device is acquired, an environmental model is generated according to the environmental image, and an initial pose of the mechanical arm is determined, so that a data basis is provided for generation of subsequent control signals. In addition, the environment model can also be used as a data base for executing obstacle avoidance in the control process of the mechanical arm, so that danger is avoided.
In step 621, a real-time image of the direction of the target treatment area of the patient is acquired.
In step 641, predetermined feature points are extracted from the real-time image of the target treatment region direction of the patient. In some embodiments, the predetermined feature points may include easily identifiable locations inside and outside the mouth, such as cusp positions, lip positions, and predetermined positions on the gums. A plurality of predetermined feature points are used to improve the accuracy of locating the target treatment area.
In step 642, three-dimensional coordinates of the predetermined feature points are determined, and three-dimensional point cloud data within a predetermined range of the target treatment region of the patient is determined, updating the three-dimensional model.
In some embodiments, the three-dimensional model may be the labeled model determined in step 613, and further updated in real time during operation, so as to avoid time delay caused by newly creating a model during operation, and improve accuracy of the model and data processing efficiency.
In step 643, updated position information of the target treatment region in the updated three-dimensional model is determined according to the updated three-dimensional model and the position information of the target treatment region in the labeled model. In some embodiments, the location information may be spatial coordinates.
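One way to realize step 643 is to estimate a rigid transform from the labeled model's feature points to their real-time counterparts and apply it to the target's coordinates. The sketch below uses the Kabsch least-squares fit; treating the update as a rigid motion is a simplifying assumption (soft tissue can deform), and the point sets are hypothetical:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t such that dst ~ R @ src + t
    (Kabsch algorithm, least-squares rigid fit over corresponding points)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def updated_target_position(target_in_model, src_points, dst_points):
    """Map the target treatment area's coordinates from the labeled model
    frame into the updated (real-time) model frame."""
    R, t = rigid_transform(src_points, dst_points)
    return R @ np.asarray(target_in_model, float) + t
```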
In step 644, a target illumination pose is determined from the updated position information. In some embodiments, the target illumination pose may include position information and pose information, such as coordinates of the illumination device and an illumination azimuth of the illumination device.
In step 645, a target pose of the robotic arm is determined based on the target illumination pose and the current pose of the robotic arm, and a robotic arm control signal is generated based on the target pose of the robotic arm. In some embodiments, starting from the previously determined initial pose of the robotic arm, the arm's pose information may be updated in real time as the arm is adjusted, yielding its current pose. Adjustment parameters for the robotic arm are determined from the difference between the current pose and the target pose, and the robotic arm control signal is generated. In some embodiments, a robot control system in the related art may be used to generate the robotic arm control signal. In some embodiments, the arm's path may be planned from the established environment map using an intelligent search algorithm (e.g., the A* algorithm, Dijkstra's algorithm, or a costmap). In some embodiments, the trajectory and motion strategy of the robotic arm are computed with the DWA (Dynamic Window Approach) based on the arm's current pose, achieving obstacle avoidance while controlling the arm's motion.
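The difference-based adjustment in step 645 can be sketched as below. The (x, y, z, roll, pitch, yaw) pose representation and the 0.05 per-axis step clamp are illustrative assumptions; a real controller would map the delta to joint-space commands via inverse kinematics:

```python
def pose_adjustment(current_pose, target_pose, max_step=0.05):
    """Per-axis delta from the current pose to the target pose, clamped to
    `max_step` so the arm approaches the target gradually, matching the
    incremental pose updates described in the text."""
    def clamp(v):
        return max(-max_step, min(max_step, v))
    return tuple(clamp(t - c) for c, t in zip(current_pose, target_pose))
```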
By the method in the embodiment, data preparation can be performed before operation starts, so that the data volume required to be processed subsequently is reduced, and the sufficiency of operation preparation is improved; in the operation execution process, a control signal for the mechanical arm can be generated, so that the mechanical arm drives the lighting equipment to adjust illumination, the automation degree of the equipment is improved, the equipment is not required to be manually adjusted, and the operation efficiency of doctors is improved.
In some embodiments, the physician may add or update information about the target treatment area at any time, as circumstances during the procedure require, for example by entering or annotating the added or modified surgical site in the image via an input device and then performing the operations described above starting from step 613.
By the method, emergency situations in operation can be conveniently handled, the temporarily-increased target treatment area can be provided with light following, and the operation efficiency of doctors is further improved.
In some embodiments, the physician may add, modify, or delete target treatment areas at any time by moving the treatment or adjunctive-treatment tools. The target treatment area may be merely an area the doctor needs to observe; a targeted treatment operation need not be performed there. In some embodiments, the controller may identify the tool used for treatment or adjuvant therapy in the real-time image based on pre-acquired information about that tool. In some embodiments, the tool may be a medical instrument commonly used in surgery, such as a mouth mirror. In some embodiments, the information about the tool includes image information and annotations of the tool. In some embodiments, the image information may be three-dimensional image information, or two-dimensional image information photographed from multiple angles to improve the accuracy of image recognition. In some embodiments, the annotations of the tool may include the location of the region of the tool that contacts or corresponds to the position of the patient being treated, such as the tip region of a probe or the mirror region of a mouth mirror.
In some embodiments, the controller uses the identified tool as a target tool and determines a target treatment area based on the location of the target tool. By such a method, the illumination device can be enabled to track the change of the tool position to adjust the illumination position, ensuring the illumination state of the corresponding position of the target tool.
In some embodiments, the controller determines a target irradiation pose of the illumination device using a position of an area where the tool contacts or corresponds to a position of the patient to be treated as a target treatment area, and further generates a robot arm control signal to control the robot arm to adjust the pose of the illumination device. By the method, the illumination equipment can track the change of the tool position to adjust the illumination position, and the illumination equipment ensures the illumination state of the corresponding position of the target tool, so that doctors can observe, modify or increase the treatment area conveniently, the automation degree and the operation convenience of the equipment are improved, and the data processing capacity can be reduced.
In some embodiments, the controller determines the relative position, in the labeled model, of the area where the tool contacts or corresponds to the position of the patient being treated, and then determines the target treatment area corresponding to that relative position in the labeled model; for example, the tooth or gum area the physician needs to operate on is determined from the relative position and taken as the target treatment area. In some embodiments, the labeled model includes a plurality of divided treatment areas; the treatment area associated with the relative position can be determined as the target treatment area by a pre-trained prediction algorithm, or the treatment area closest to the relative position can be selected by computing Euclidean distances. By such a method, which treatment area the doctor needs to observe, modify, or add can be inferred by tracking changes in the tool position, improving the intelligence, automation, and operational convenience of the equipment.
In some embodiments, the illumination control method of the present disclosure further comprises obtaining oromaxillofacial model sample data by collecting historical data. In some embodiments, the oromaxillofacial model sample data includes identification of the locations of oromaxillofacial regions. In some embodiments, a normalized, high-information-content image dataset may be constructed, and sample data generated based on the normalized dataset. In some embodiments, based on the constructed normalized image dataset and the initial sample data, a machine-learning network model built for augmenting sample data (e.g., a CNN (Convolutional Neural Network), a GAN (Generative Adversarial Network), or a few-shot generation model) is employed to generate new sample data, thereby preparing sufficient sample data for subsequent training and reducing the burden of manual labeling. A machine-learning network model (e.g., a CNN or GAN) is built and trained using the oromaxillofacial model sample data until training is complete, so that the trained model can identify the different regions on the oromaxillofacial model; the trained model is then used in step 612 above to process the patient's oromaxillofacial model data and obtain the labeled model data. In some embodiments, training of the machine learning model can be completed on a server and the model then sent to each controller for use.
In some embodiments, whether training is complete may be determined from the performance of the machine learning model: for example, training is judged complete when the accuracy is greater than or equal to a predetermined ratio and the training-loss curve has stabilized; otherwise, training continues. This avoids underfitting or overfitting and improves the accuracy of the machine learning model.
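The stopping criterion above could be sketched as follows; the 0.95 accuracy threshold, 5-epoch window, and plateau tolerance are illustrative assumptions to be tuned per task:

```python
def training_complete(val_accuracies, losses, acc_thresh=0.95, window=5, tol=1e-3):
    """Training is judged complete when the latest validation accuracy meets
    the preset ratio AND the loss curve has plateaued, i.e. the last
    `window` losses vary by less than `tol`."""
    if not val_accuracies or len(losses) < window:
        return False
    recent = losses[-window:]
    plateaued = (max(recent) - min(recent)) < tol
    return val_accuracies[-1] >= acc_thresh and plateaued
```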
In this way, region division can first be performed based on the user's oral and maxillofacial model data, improving the accuracy of subsequent target treatment area localization and further increasing the degree of automation.
In some embodiments, control information is sent to the lighting device to switch its light source or illumination intensity according to an operation by the user (e.g., a doctor). In this way, the doctor can conveniently adjust the illumination state as needed, improving the flexibility of the service.
In some embodiments, the controller modulates the LED lamp during the intraoperative state to provide sufficient illumination for the procedure. In some embodiments, when dental plaque needs to be irradiated, the shadowless lamp is controlled to emit laser light in the (405±20) nm wavelength range, so that the plaque is clearly displayed and easy to identify. In some embodiments, during the suturing phase after the end of the procedure, the shadowless lamp is controlled to emit laser light with a wavelength in the range of 390-1600 nm, a power in the range of 10⁻³-10⁻¹ W, and an energy in the range of 10⁻²-10² J/cm², thereby promoting the healing of soft tissue and bone tissue and providing anti-inflammatory and analgesic effects.
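The three illumination modes above map naturally onto a lookup of emission parameters; the mode names and dictionary layout are assumptions, while the numeric ranges follow the description.

```python
# Illustrative lamp-mode table; the numeric ranges follow the description,
# the mode names and the data layout are assumptions.
LAMP_MODES = {
    "intraoperative": {"source": "LED", "purpose": "general surgical illumination"},
    "plaque": {"source": "laser", "wavelength_nm": (385, 425)},  # 405 ± 20 nm
    "healing": {"source": "laser", "wavelength_nm": (390, 1600),
                "power_w": (1e-3, 1e-1), "energy_j_per_cm2": (1e-2, 1e2)},
}

def lamp_setting(mode):
    """Return the configured emission parameters for a named treatment phase."""
    try:
        return LAMP_MODES[mode]
    except KeyError:
        raise ValueError(f"unknown lamp mode: {mode!r}")
```

A controller following the text would select "plaque" for plaque visualization and "healing" during suturing, then drive the shadowless lamp with the returned parameters.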
A schematic structural diagram of one embodiment of a lighting controller of the present disclosure is shown in fig. 7. In some embodiments, the illumination controller serves as the controller in the illumination control device: it is in signal connection with the image acquisition device and the mechanical arm, acquires the images captured by the image acquisition device, and sends the generated mechanical arm control signal to the mechanical arm, the free end of which is connected to the illumination device. The lighting controller comprises a memory 701 and a processor 702. The memory 701 may be a magnetic disk, flash memory, or any other non-volatile storage medium, and is used to store instructions for the embodiments of the lighting control method described above. The processor 702 is coupled to the memory 701 and may be implemented as one or more integrated circuits, such as a microprocessor or microcontroller. The processor 702 is configured to execute the instructions stored in the memory, so that the light irradiation of the illumination device is automatically adjusted as the position of the target treatment area changes, improving the degree of automation of the device, eliminating the need for manual adjustment, and improving operating efficiency.
In one embodiment, as shown in fig. 8, the lighting controller 800 includes a memory 801 and a processor 802, with the processor 802 coupled to the memory 801 by a bus 803. The lighting controller 800 may also be connected to external storage 805 via a storage interface 804 in order to access external data, and may be connected to a network or another computer system (not shown) via a network interface 806; these components are not described in detail here.
In this embodiment, data and instructions are stored in the memory and processed by the processor, so that the light irradiation of the lighting device is automatically adjusted as the position of the target treatment area changes, improving the degree of automation of the device, eliminating the need for manual adjustment, and improving operating efficiency.
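Functionally, the memory-plus-processor controller of figs. 7 and 8 executes a sense-plan-act cycle; the function names and stub values below are placeholders sketching only the data flow (image in, irradiation pose out, arm signal sent), not the disclosure's actual pose computation.

```python
def control_step(acquire_image, locate_target, compute_pose, send_to_arm):
    """One controller cycle: image in, target pose out, arm command sent."""
    frame = acquire_image()            # from the image acquisition device
    region = locate_target(frame)      # target treatment area in the frame
    pose = compute_pose(region)        # target irradiation pose
    send_to_arm(pose)                  # robotic-arm control signal
    return pose

# Stub wiring to show the data flow only; every callable here is hypothetical.
log = []
pose = control_step(
    acquire_image=lambda: "frame-0",
    locate_target=lambda frame: (1.0, 2.0, 3.0),
    compute_pose=lambda region: {"xyz": region, "aim": "target"},
    send_to_arm=log.append,
)
```

Run in a loop, this keeps the illumination direction tracking the target treatment area without manual adjustment, which is the behavior the paragraph above attributes to the controller.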
In another embodiment, a computer readable storage medium has computer program instructions stored thereon which, when executed by a processor, implement the steps of the lighting control method in the corresponding embodiments described above. It will be apparent to those skilled in the art that embodiments of the present disclosure may be provided as a method, apparatus, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Thus far, the present disclosure has been described in detail. In order to avoid obscuring the concepts of the present disclosure, some details known in the art are not described. How to implement the solutions disclosed herein will be fully apparent to those skilled in the art from the above description.
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, firmware. The above-described sequence of steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the sequence specifically described above unless specifically stated otherwise. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that the above embodiments merely illustrate the technical solutions of the present disclosure and do not limit them. Although the present disclosure has been described in detail with reference to preferred embodiments, those of ordinary skill in the art will appreciate that modifications may be made to the specific embodiments, or equivalents substituted for some of the technical features, without departing from the spirit of the technical solutions of the present disclosure; such modifications and substitutions are intended to fall within the scope of the technical solutions claimed in the present disclosure.

Claims (28)

1. A lighting control apparatus comprising:
an image acquisition device configured to acquire a real-time image of a target treatment region direction of a patient;
a controller configured to determine a target irradiation pose according to the real-time image, generate a mechanical arm control signal according to the target irradiation pose, and send the mechanical arm control signal to the mechanical arm; and
a mechanical arm connected to the illumination device and configured to adjust the pose of the illumination device according to the mechanical arm control signal, so that the light irradiation direction of the illumination device matches the position of the target treatment area.
2. The lighting control device of claim 1, wherein the image acquisition device comprises at least one of a binocular vision sensor or a three-dimensional image acquisition device.
3. The lighting control device of claim 1, wherein the lighting device is fixed to a free end of the robotic arm, the other end of the robotic arm being a fixed end.
4. The lighting control device of claim 1, wherein the controller is configured to:
extracting a preset characteristic point according to a real-time image of the direction of a target treatment area of a patient;
determining three-dimensional coordinates of the preset feature points, determining three-dimensional point cloud data in a preset range of a target treatment area of a patient, and updating a three-dimensional model; and
planning a path for the mechanical arm according to the three-dimensional model, and generating a mechanical arm control signal.
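When the image acquisition device is a binocular vision sensor (claim 2), the three-dimensional coordinates of the feature points in claim 4 would typically come from stereo triangulation; the pinhole model, focal length, and baseline below are illustrative assumptions rather than values from the disclosure.

```python
def triangulate(u_left, u_right, v, focal_px=800.0, baseline_m=0.1):
    """Simplified pinhole stereo: depth from horizontal disparity.

    u_left/u_right are the feature point's pixel columns in the two cameras,
    v its pixel row; focal length and baseline are assumed values. The
    principal point is taken as the image origin for brevity.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    z = focal_px * baseline_m / disparity   # depth along the optical axis (m)
    x = u_left * z / focal_px               # lateral offset (m)
    y = v * z / focal_px                    # vertical offset (m)
    return (x, y, z)

# A feature point seen at column 420 (left camera) and 400 (right), row 120:
point = triangulate(420.0, 400.0, 120.0)
```

Repeating this for each preset feature point yields the three-dimensional point cloud within the predetermined range of the target treatment area that claim 4 uses to update the three-dimensional model.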
5. The lighting control device of claim 4, wherein the controller is further configured to:
acquiring oral and maxillofacial model data of a patient;
labeling the position of an oral and maxillofacial region based on the trained machine learning model according to the oral and maxillofacial model data, and acquiring the data of the labeled model;
acquiring information of a target treatment area, and determining position information of the target treatment area in the labeled model according to the information of the target treatment area and the data of the labeled model, wherein the controller determines updated position information of the target treatment area in the updated three-dimensional model according to the position information of the target treatment area in the labeled model and the updated three-dimensional model, and determines the target irradiation pose according to the updated position information.
6. The lighting control device of claim 5, wherein the information of the target treatment area comprises an identification of a target dental site.
7. The lighting control device of claim 4, wherein the controller is further configured to:
determining the position of a target tool according to the real-time image of the direction of the target treatment area of the patient; and
determining the target treatment area according to the position of the target tool.
8. The lighting control device of claim 1, wherein,
the image acquisition device is further configured to: collecting an environment image and sending the environment image to the controller;
the controller is further configured to: and generating an environment model according to the environment image, and determining the initial pose of the mechanical arm.
9. The lighting control device of claim 8, wherein the controller is further configured to:
determining an obstacle position according to the environmental model;
according to the obstacle position and the target irradiation pose, planning a motion path of the mechanical arm and updating a motion path planning result in real time, wherein the motion path avoids the obstacle position;
wherein the controller generates a mechanical arm control signal according to the motion path planning result.
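The obstacle-avoiding motion planning of claim 9 can be illustrated with a breadth-first search over a coarse grid; a real controller would plan in the arm's joint space and replan as the environment model updates, so the 2-D grid, cell coordinates, and obstacle set here are only a stand-in.

```python
from collections import deque

def plan_path(start, goal, obstacles, size=5):
    """BFS on a size x size grid; returns a shortest cell path avoiding obstacles."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        cell = path[-1]
        if cell == goal:
            return path
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in obstacles and nxt not in seen):
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no collision-free path exists

# Obstacle cells stand in for positions derived from the environment model.
path = plan_path((0, 0), (2, 0), obstacles={(1, 0), (1, 1)})
```

Re-running the planner whenever the environment model changes corresponds to "updating the motion path planning result in real time" in claim 9.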
10. The lighting control device of claim 1, further comprising:
a distance sensor attached to the mechanical arm and configured to send early warning information to the controller when detecting the presence of an object within a predetermined range;
wherein the controller is further configured to control the mechanical arm to suspend movement upon receiving the early warning information from the distance sensor.
11. The lighting control device of claim 1, wherein the controller is further configured to send a control signal to the lighting device to adjust at least one of a light source or brightness of the lighting device.
12. A medical lighting device, comprising:
the lighting control apparatus of any one of claims 1 to 11; and
and the illumination device is configured to adjust the pose under the control of the illumination control device and emit light.
13. The medical lighting device of claim 12, wherein the illumination apparatus comprises a shadowless lamp.
14. The medical lighting device of claim 13, wherein the shadowless lamp comprises at least one of an LED light source, a laser light source, an ultraviolet light source, or a near infrared light source.
15. The medical lighting device of claim 14, wherein the medical lighting device conforms to at least one of the following:
the laser light source comprises at least one of the following: laser light with a wavelength in the range of 405±20 nm; or laser light with a wavelength of 390-1600 nm, a power in the range of 10⁻³-10⁻¹ W, and an energy in the range of 10⁻²-10² J/cm²;
the ultraviolet light has a wavelength of 280-320 nm and an irradiation intensity within a predetermined intensity interval of 15 μW/cm²;
the near infrared light has a wavelength in the range of 780-2526 nm; or
the visible blue light has a wavelength in the range of 400-500 nm and a light intensity in the range of 750-2000 mW/cm².
16. An operating table, comprising:
a patient support mechanism configured to support a patient; and
the medical illumination device of any one of claims 12-15, configured to emit light to a target treatment area of a patient.
17. A dental treatment chair comprising:
a dental chair configured to hold a patient; and
the medical illumination device of any one of claims 12-15, configured to emit light to a target treatment area of a patient's oromaxillofacial region.
18. A lighting control method, comprising:
acquiring a real-time image of the direction of a target treatment area of a patient;
and determining a target irradiation pose according to the real-time image, generating a mechanical arm control signal according to the target irradiation pose, and sending the mechanical arm control signal to a mechanical arm so as to adjust the pose of the lighting equipment according to the mechanical arm control signal, so that the light irradiation direction of the lighting equipment is matched with the position of a target treatment area.
19. The lighting control method of claim 18, wherein said determining a target illumination pose from said real-time image, generating a robotic arm control signal from said target illumination pose comprises:
extracting a preset characteristic point according to a real-time image of the direction of a target treatment area of a patient;
determining three-dimensional coordinates of the preset feature points, determining three-dimensional point cloud data in a preset range of a target treatment area of a patient, and updating a three-dimensional model; and
and planning a path for the mechanical arm according to the updated three-dimensional model and generating a mechanical arm control signal.
20. The lighting control method of claim 19, further comprising:
acquiring oral and maxillofacial model data of a patient;
labeling the position of an oral and maxillofacial region based on the trained machine learning model according to the oral and maxillofacial model data, and acquiring the data of the labeled model; and
acquiring information of a target treatment area, and determining position information of the target treatment area in the labeled model according to the information of the target treatment area and the data of the labeled model;
the determining the target irradiation pose according to the real-time image comprises the following steps:
determining updated position information of the target treatment area in the updated three-dimensional model according to the updated three-dimensional model and the position information of the target treatment area in the labeled model;
and determining the target irradiation pose according to the updated position information.
21. The lighting control method of claim 19, wherein said determining a target illumination pose from said real-time image, generating a robotic arm control signal from said target illumination pose further comprises:
determining the position of a target tool according to the real-time image of the direction of the target treatment area of the patient;
and determining the target treatment area according to the position of the target tool.
22. The lighting control method of claim 18, further comprising:
acquiring an environment image acquired by image acquisition equipment;
generating an environment model according to the environment image, and determining an initial pose of the mechanical arm, wherein the image acquisition equipment comprises at least one of a binocular vision sensor or a three-dimensional image acquisition equipment;
the generating the mechanical arm control signal according to the target irradiation pose comprises the following steps: and determining the target pose of the mechanical arm according to the target irradiation pose and the current pose of the mechanical arm, and generating the mechanical arm control signal according to the target pose of the mechanical arm.
23. The lighting control method of claim 22, further comprising determining an obstacle location from the environmental model;
the generating the mechanical arm control signal according to the target irradiation pose comprises the following steps: according to the obstacle position and the target irradiation pose, planning a motion path of the mechanical arm and updating a motion path planning result in real time, wherein the motion path avoids the obstacle position; and generating a mechanical arm control signal according to the motion path planning result.
24. The lighting control method of claim 18, further comprising:
controlling the mechanical arm to pause movement upon receiving early warning information from a distance sensor, wherein the distance sensor is attached to the mechanical arm and sends the early warning information to the controller when an object is present within a predetermined range.
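The safety interlock of claim 24 amounts to gating each motion step on the proximity reading; the 0.05 m threshold and the function names below are illustrative assumptions, not values from the disclosure.

```python
def arm_motion_allowed(distance_m, warn_threshold_m=0.05):
    """Motion is allowed only while no object lies within the warning range."""
    return distance_m > warn_threshold_m

def step_arm(readings, threshold_m=0.05):
    """Decide move/pause for each successive distance-sensor reading."""
    actions = []
    for distance in readings:
        actions.append("move" if arm_motion_allowed(distance, threshold_m)
                       else "pause")   # early warning -> suspend movement
    return actions

# An object approaches (0.04 m, 0.02 m) and then clears the warning range.
actions = step_arm([0.30, 0.04, 0.02, 0.20])
```

In a real controller the pause would also cancel any in-flight trajectory segment; here only the gating decision itself is shown.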
25. The lighting control method of claim 18, further comprising: sending a control signal to the lighting device to adjust at least one of a light source or a brightness of the lighting device.
26. The lighting control method of claim 20, further comprising at least one of:
acquiring oral and maxillofacial model sample data, wherein the sample data comprises an identification of the position of an oral and maxillofacial region, and training the machine learning model with the sample data until training is complete; or
acquiring an image of a target tool and treatment location information on the target tool, so as to identify the target tool from the real-time image and determine the target treatment area according to the treatment location information on the target tool.
27. A lighting controller, comprising:
a memory; and a processor coupled to the memory, the processor configured to perform the method of any of claims 18-26 based on instructions stored in the memory.
28. A non-transitory computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of any of claims 18 to 26.
CN202311152946.5A 2023-09-07 2023-09-07 Medical lighting device, operating table, treatment chair, lighting controller, method and apparatus Pending CN117190138A (en)

Publications (1)

Publication Number Publication Date
CN117190138A 2023-12-08
