CN114419545A - Security protection method, system, computer device and storage medium - Google Patents

Security protection method, system, computer device and storage medium

Info

Publication number
CN114419545A
Authority
CN
China
Prior art keywords
region
image
early warning
interest
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111660997.XA
Other languages
Chinese (zh)
Inventor
李自汉
王家寅
Other inventors have requested that their names not be disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Microport Medbot Group Co Ltd
Original Assignee
Shanghai Microport Medbot Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Microport Medbot Group Co Ltd filed Critical Shanghai Microport Medbot Group Co Ltd
Priority to CN202111660997.XA priority Critical patent/CN114419545A/en
Publication of CN114419545A publication Critical patent/CN114419545A/en
Priority to PCT/CN2022/137021 priority patent/WO2023104055A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/08Projecting images onto non-planar surfaces, e.g. geodetic screens

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Alarm Systems (AREA)

Abstract

The invention discloses a safety protection method, a system, a computer device and a storage medium. The safety protection method comprises the following steps: determining a region of interest according to image information, acquired in real time, of a target operation end monitoring area; and detecting whether a non-operator has intruded into the region of interest based on a target detection algorithm, and if so, triggering a corresponding early warning action and/or protection action, thereby avoiding the major risk caused by a non-operator accidentally colliding with the operation end and improving the safety of the operation end.

Description

Security protection method, system, computer device and storage medium
Technical Field
The invention relates to the technical field of medical instruments, in particular to a safety protection method, a safety protection system, computer equipment and a storage medium.
Background
With the popularization of endoscopic surgical robots, surgical robots are now used across multiple departments such as urology, obstetrics and gynecology, cardiac surgery, thoracic surgery and hepatobiliary surgery. The scenarios they face are increasingly complex, and some potential safety hazards are increasingly exposed.
When a surgeon operates a surgical robot through a main operation end to perform a surgery, a non-operator (e.g., a nurse or an assistant) who inadvertently bumps into the main operation end may cause unintended movement of the surgical robot, creating a significant surgical risk.
Therefore, researching the safety technology of surgical robots and establishing a safe and reliable surgical robot protection system are important for ensuring smooth surgery and patient safety, and are of great significance.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a security protection method, system, computer device and storage medium for providing security protection for a target operation end.
In order to solve the above technical problem, a first aspect of the present application provides a security protection method, including:
determining a region of interest according to image information, acquired in real time, of a target operation end monitoring area;
and detecting whether a non-operator invades the region of interest based on a target detection algorithm, and if so, triggering a corresponding early warning action and/or protection action.
In the safety protection method provided in the above embodiment, the region of interest is determined according to image information, acquired in real time, of the monitoring area around the target operation end; whether a non-operator invades the region of interest is detected based on a target detection algorithm, and if so, a corresponding early warning action and/or protection action is triggered. A serious surgical risk caused by a non-operator accidentally colliding with the target operation end is thereby avoided, and the safety of the target operation end is improved.
In one embodiment, the determining the region of interest according to the image information in the monitoring region of the target operation terminal acquired in real time includes:
preprocessing the image information to obtain an image to be processed;
and determining the region of interest in the image to be processed.
In one embodiment, the pre-processing includes at least one of noise removal, de-dithering, and image distortion removal.
In one embodiment, the de-dithering process includes:
acquiring a multi-frame image contained in the image information;
determining one frame of the multi-frame images as a reference frame, and determining other images except the reference frame as target frames;
sequentially carrying out graying processing, binarization processing and image projection calculation on the multi-frame image;
calculating a correlation coefficient between the image projection calculation result of the target frame and the image projection calculation result of the reference frame;
and carrying out de-jitter processing on the target frame according to the correlation coefficient.
In one embodiment, the determining the region of interest in the image to be processed includes:
and determining the region of interest in the image to be processed according to an interactive instruction input by a user.
In one embodiment, before the detecting whether a non-operator invades the region of interest based on the target detection algorithm, the method further includes:
judging whether the target operation end is in a working mode, and if so, detecting whether a non-operator has entered the region of interest based on the target detection algorithm.
In one embodiment, the detecting whether a non-operator invades the region of interest based on the target detection algorithm includes:
extracting the features of the image information in the region of interest based on a feature extraction network to obtain a feature extraction result;
and processing the feature extraction result through a preset classifier, and determining whether a non-operator is in the region of interest.
In one embodiment, the region of interest includes a first warning region and a second warning region located inside the first warning region; the triggering of the corresponding early warning action and/or protection action includes:
if the non-operator is detected to be in the first early warning area, triggering a corresponding first early warning action;
and if the non-operator is detected to be in the second early warning area, triggering a corresponding second early warning action and the protection action.
In one embodiment, the first early warning action comprises triggering a first warning lamp to flash and/or sending out a voice prompt at a first trigger frequency;
the second early warning action comprises triggering a second early warning lamp to flicker and/or sending out a voice prompt of a second trigger frequency;
the protection action comprises triggering locking of the target operation end.
In one embodiment, the method further comprises the following steps:
and detecting whether a light curtain arranged around the target operation end in advance is shielded, and if so, triggering a third early warning action and the protection action.
In one embodiment, the third early warning action includes triggering a third warning lamp to flash and/or sending out a voice prompt at a third trigger frequency;
the protection action comprises triggering locking of the target operation end.
A second aspect of the present application provides a security protection system, comprising:
an image monitoring module configured to:
determining a region of interest according to image information, acquired in real time, of a target operation end monitoring area;
detecting whether a non-operator invades the region of interest based on a target detection algorithm;
and the execution module is connected with the image monitoring module and used for triggering corresponding early warning action and/or protection action when a non-operator invades the region of interest.
The security protection system provided in the above embodiment includes an image monitoring module and an execution module connected to the image monitoring module. The image monitoring module determines a region of interest according to image information, acquired in real time, of the monitoring area around the target operation end, and detects whether a non-operator invades the region of interest based on a target detection algorithm. When a non-operator invades the region of interest, the execution module triggers a corresponding early warning action and/or protection action, so that a major risk caused by a non-operator accidentally hitting the target operation end is avoided and the safety of the operation end is improved.
In one embodiment, the image monitoring module comprises:
an image acquisition unit configured to: acquiring image information in a monitoring area of a target operation terminal in real time, and preprocessing the image information to obtain an image to be processed;
the image processing unit is connected with the image acquisition unit and the execution module and is configured to:
determining the region of interest in the image to be processed;
extracting the features of the image information in the region of interest based on a feature extraction network to obtain a feature extraction result;
and processing the feature extraction result through a preset classifier, and determining whether a non-operator is in the region of interest.
In one embodiment, the system further comprises:
the light curtain is arranged around the target operation end;
a light curtain monitoring module, connected to the execution module, configured to: and detecting whether the light curtain is shielded, if so, triggering the execution module to execute corresponding early warning action and/or protection action.
In a third aspect of the present application, a surgical robot system is provided, which includes a main operation end and a surgical trolley, wherein the main operation end is a target operation end, and the system further includes the safety protection system as described above.
A fourth aspect of the present application proposes a computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when executing the computer program.
A fifth aspect of the present application proposes a storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method as described above.
The foregoing description is only an overview of the technical solutions of the present invention, and in order to make the technical solutions of the present invention more clearly understood and to implement them in accordance with the contents of the description, the following detailed description is given with reference to the preferred embodiments of the present invention and the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain drawings of other embodiments based on these drawings without any creative effort.
Fig. 1 is a schematic view of a usage scenario of a surgical robotic system provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a main operating end of a doctor according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a security protection method provided in an embodiment of the present application;
fig. 4 is a partial schematic flow chart of a security protection method provided in an embodiment of the present application;
FIG. 5 is a flowchart illustrating an image de-dithering process according to an embodiment of the present application;
FIG. 6 is a flow chart illustrating an image de-dithering process according to another embodiment of the present application;
fig. 7 is a schematic flowchart of a process for acquiring a region of interest provided in an embodiment of the present application;
FIG. 8 is a schematic partial flow chart of a security protection method according to another embodiment of the present application;
FIG. 9 is a schematic diagram of a convolution feature extractor provided in an embodiment of the present application;
FIG. 10 is a flow chart of an implementation of a K-means algorithm provided in an embodiment of the present application;
FIG. 11 is a schematic partial flow chart diagram illustrating a security protection method according to yet another embodiment of the present application;
FIG. 12 is a schematic partial flow chart diagram illustrating a security protection method according to yet another embodiment of the present application;
fig. 13 is a schematic structural diagram of a safety protection system provided in an embodiment of the present application;
FIG. 14 is a schematic structural diagram of a security system provided in another embodiment of the present application;
fig. 15 is a diagram of a security scenario provided in an embodiment of the present application;
FIG. 16 is a schematic view of a locking protection for a main operating end of a doctor according to an embodiment of the present application;
FIG. 17 is a schematic illustration of an interactive interface provided in an embodiment of the present application;
FIG. 18 is a schematic structural diagram of a security system provided in a further embodiment of the present application;
fig. 19 is a schematic view of a protection scenario of a monitoring module according to an embodiment of the present application;
fig. 20 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Description of reference numerals: 100. a doctor main operation end; 110. an adjustment member; 120. a manipulating arm; 130. a trolley component; 140. an image component; 200. a surgical trolley; 201. a mechanical arm; 300. an image trolley; 400. a tool trolley;
10. an image monitoring module; 11. an image acquisition unit; 12. an image processing unit; 20. an execution module; 30. a light curtain monitoring module; 31. a transceiver unit; 32. a monitoring unit.
Detailed Description
To facilitate an understanding of the present application, the present application will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present application are illustrated in the accompanying drawings. This application may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Where the terms "comprising," "having," and "including" are used herein, another element may be added unless an explicit limitation is used, such as "only," "consisting of … …," etc. Unless mentioned to the contrary, terms in the singular may include the plural and are not to be construed as being one in number.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present application.
In this application, unless otherwise expressly stated or limited, the terms "connected" and "connecting" are used broadly and encompass, for example, direct connection, indirect connection via an intermediary, communication between two elements, or interaction between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
In order to protect the safety of the target operation end, one conventional scheme is to install a fixed mechanical guard rail around the target operation end to keep non-operators away. However, such a guard rail is bulky and inconvenient to move, occupies operating-room space, and cannot flexibly adjust the protected area for different surgical scenarios, making it inconvenient to use.
Therefore, the present application provides a safety protection method, a system, a computer device and a storage medium. The monitoring area of the target operation end is monitored in real time, a region of interest is determined according to the image information acquired in real time from that area, and whether a non-operator invades the region of interest is detected based on a target detection algorithm. If a non-operator invades the region of interest, a corresponding early warning action and/or protection action is triggered, so that a serious surgical risk caused by a non-operator accidentally colliding with the target operation end and causing its unexpected movement is avoided, and the safety of the target operation end is improved. Meanwhile, the region of interest can be adjusted in real time according to actual scene requirements, and compared with a mechanical guard rail, operating-room space is saved.
The safety protection method, system, computer device and storage medium provided by the application can be applied to the usage scenario of the surgical robot system shown in fig. 1. The surgical robot system includes a doctor main operation end 100, a surgical trolley 200, an image trolley 300 and a tool trolley 400, and a main manipulator is provided on the doctor main operation end 100. The surgical trolley 200 has at least two mechanical arms 201, on which surgical instruments and an endoscope can be respectively mounted. Minimally invasive surgical treatment of the patient on the operating bed is realized through remote operation of the doctor main operation end 100 and the main manipulator, where the main manipulator, the mechanical arm 201 and the surgical instrument form a master-slave control relationship. The doctor main operation end 100 has a display device that is communicatively connected to the endoscope mounted on the mechanical arm of the surgical trolley 200, and can receive and display images acquired by the endoscope. The operator controls the movement of the mechanical arm and the surgical instrument through the main manipulator based on the image displayed on the display device of the doctor main operation end 100. The endoscope and the surgical instrument each pass through a wound in the patient's body to reach the surgical site.
Fig. 2 is a schematic structural diagram of a main operation end 100 of a surgical robot, which includes an adjusting component 110, manipulating arms 120, a trolley component 130, and an image component 140. The two manipulating arms 120 detect the operator's hand movement through the control handles at their distal ends, serving as the motion control input of the whole system. The trolley component 130 is the basic support on which the other components are mounted, and is provided with movable casters so that the main operation end can be moved or fixed as required. The adjusting component 110 can electrically adjust the positions of the manipulating arms, the image component, the operator armrest and the like, i.e., an ergonomic adjustment function. The image component 140 provides the operator with the stereoscopic image detected by the image system, supplying reliable image information for performing the surgery.
It should be noted that the safety protection method and system of the present embodiment can be applied not only to the usage scenario of the surgical robot system shown in fig. 1, but also to other operation terminals that need to be monitored in real time, and are not limited to the operating room scenario.
In a security protection method provided in an embodiment of the present application, as shown in fig. 3, the method includes the following steps:
step S10: determining a region of interest according to image information, acquired in real time, of a target operation end monitoring area;
step S20: detecting whether a non-operator invades the region of interest based on a target detection algorithm, and if so, triggering a corresponding early warning action and/or protection action.
When applied to the surgical robot system shown in fig. 1, the target operation end in step S10 is the main operation end 100 in fig. 1. In the safety protection method provided by this embodiment, the region of interest is determined according to the image information, acquired in real time, of the monitoring area around the doctor main operation end; whether a non-operator invades the region of interest is detected based on a target detection algorithm, and if so, a corresponding early warning action and/or protection action is triggered, avoiding the major surgical risk caused by a non-operator accidentally colliding with the main operation end and improving the safety of the doctor main operation end.
As an example, the size of the monitoring area of the main operation end is determined according to the detection area of the monitoring camera, and a region of interest is selected from the monitoring area; the area of the region of interest is smaller than or equal to the detection area of the monitoring camera.
As an example, the Region Of Interest (ROI) is a closed region of a preset shape, such as a circle, a polygon or an ellipse; the shape can be regular or irregular, and the area occupied by the region of interest can be changed according to the size of the operating room and the surgical scenario.
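To make the closed-region idea concrete, the sketch below tests whether a detected person's image coordinate lies inside a polygonal region of interest using OpenCV; the function name and the example coordinates are illustrative assumptions, not part of the patent.

```python
import numpy as np
import cv2

def point_in_roi(roi_vertices, point):
    """Return True if point (x, y) lies inside or on the closed polygon
    defined by roi_vertices, a list of (x, y) tuples."""
    contour = np.array(roi_vertices, dtype=np.int32).reshape(-1, 1, 2)
    # pointPolygonTest returns +1 inside, 0 on the edge, -1 outside
    return cv2.pointPolygonTest(contour, (float(point[0]), float(point[1])), False) >= 0

# An irregular pentagonal ROI around the operation end, in pixel coordinates
roi = [(100, 400), (500, 380), (620, 600), (300, 700), (80, 620)]
print(point_in_roi(roi, (350, 550)))  # True: inside the protected region
```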
As an example, the protection action includes triggering the locking protection of the doctor main operation end 100; the early warning action comprises voice prompt with different levels of trigger frequency and/or flash of early warning light.
In one embodiment, as shown in FIG. 4, step S10: determining an interested area according to image information in a target operation end monitoring area acquired in real time, comprising the following steps of:
step S11: preprocessing image information to obtain an image to be processed;
step S12: and determining a region of interest in the image to be processed.
Specifically, the preprocessing includes at least one of noise removal, de-dithering, and image distortion removal.
As an example, the acquired image information is subjected to noise elimination to remove high-frequency and burr image points. For image distortion elimination, the camera intrinsic parameters, including the distortion coefficients, can be obtained by calibrating the monitoring camera, and a distortion-corrected image is then obtained from these intrinsic parameters.
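As a concrete illustration of this step, the sketch below removes lens distortion from a monitored frame with OpenCV, assuming the intrinsic matrix and distortion coefficients were computed offline by a one-time checkerboard calibration; the numeric values and the file name are placeholders, not values from the patent.

```python
import numpy as np
import cv2

# Intrinsics assumed to come from a prior cv2.calibrateCamera run on
# checkerboard images of the monitoring camera (placeholder values).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
dist_coeffs = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

frame = cv2.imread("monitor_frame.png")        # one frame of image information
undistorted = cv2.undistort(frame, K, dist_coeffs)
```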
In one embodiment, a monitoring camera is used to collect the image information of the monitoring area, and the image information can be stored on a hard disk. Because the external environment is unstable, the image is prone to jitter, which hinders subsequent processing; the image information therefore needs de-jittering to eliminate the jitter produced by equipment shake.
Specifically, as shown in fig. 5, the de-jitter process includes the following steps:
step S111: acquiring a multi-frame image contained in image information;
step S112: determining one frame of the multi-frame images as a reference frame (for example, taking the first frame as the reference frame), and determining other images except the reference frame as target frames;
step S113: carrying out gray processing, binarization processing and image projection calculation on a multi-frame image in sequence;
step S114: calculating a correlation coefficient between the image projection calculation result of the target frame and the image projection calculation result of the reference frame;
step S115: and carrying out de-jitter processing on each target frame according to the calculated correlation coefficient.
Specifically, the correlation coefficient includes a correlation coefficient of the vertical projection and a correlation coefficient of the horizontal projection. Frames are read in starting from the first frame, the frame count being incremented by 1 for each frame processed, until all frames have been processed; the multi-frame images are thus acquired, and the reference frame and the target frames are determined. First, graying, binarization and horizontal image projection are performed in sequence on the multi-frame images; the horizontal projection of each target frame and that of the reference frame are computed, and the region with the maximum correlation coefficient relative to the reference frame's horizontal projection is determined. This region, which may contain several target-frame horizontal projection images, eliminates the jitter of the monitoring camera in the horizontal direction. Then, frames are selected from the target-frame horizontal projection images contained in that region as the vertical reference frame and the vertical target frames; vertical projections are computed for them, and the region with the maximum vertical-projection correlation coefficient relative to the vertical reference frame is determined, eliminating the jitter of the monitoring camera in the vertical direction. The region with the maximum vertical-projection correlation coefficient is the image to be processed. In this embodiment, the correlation coefficient may be the Pearson correlation coefficient.
As an example, referring to fig. 6 (the upper row in fig. 6 shows the flow steps of de-jittering, and the lower row shows a specific implementation of each step), the graying is implemented by a weighted-average method; the binarization includes adaptive local binarization; the projection calculation includes column summation of the image data; and the Pearson correlation coefficient is used to compute the correlation between the image projection of the target frame and that of the reference frame, so as to restore the image anomalies caused by jitter.
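A minimal sketch of the projection-correlation procedure of steps S111-S115 follows, assuming OpenCV and NumPy; the ±20-pixel search range and the circular shift are simplifying assumptions not specified in the patent.

```python
import numpy as np
import cv2

def projection_profile(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # weighted-average graying
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 31, 5)  # adaptive local binarization
    return binary.sum(axis=0).astype(np.float64)     # column summation (projection)

def estimate_shift(reference, target, max_shift=20):
    """Horizontal shift of target vs. reference maximizing the Pearson
    correlation between their projection profiles."""
    ref_p, tgt_p = projection_profile(reference), projection_profile(target)
    best_shift, best_corr = 0, -1.0
    for s in range(-max_shift, max_shift + 1):
        corr = np.corrcoef(ref_p, np.roll(tgt_p, s))[0, 1]  # Pearson coefficient
        if corr > best_corr:
            best_corr, best_shift = corr, s
    return best_shift

# Row sums (axis=1) handled the same way remove the vertical jitter.
```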
In one embodiment, step S12: determining a region of interest in an image to be processed, comprising: and determining a region of interest in the image to be processed according to an interactive instruction input by a user.
Specifically, the interactive instruction input by the user includes a mouse action instruction and/or a touch screen instruction.
Taking the acquisition of the mouse action command as an example, as shown in fig. 7, the step S12 of determining the region of interest includes:
step S121: acquiring a mouse action instruction;
step S122: judging whether to click the left button or not according to the mouse action instruction;
step S123: if the left key is clicked, judging whether the left key is clicked for the first time;
step S124: if the left key is clicked for the first time, recording the position of the left key clicked for the first time as the initial vertex of the interested area and marking the initial vertex in the image to be processed; if the left key is not clicked for the first time, recording the corresponding current position as the vertex of the region of interest, connecting the current position with the last vertex of the region of interest, and continuously judging whether the left key is clicked or not;
step S125: if the left key is not clicked, judging whether a right key is clicked or not;
step S126: if the right button is clicked, connecting the last left-clicked point with the initial vertex of the region of interest to obtain the region of interest; if the right button is not clicked, judging whether the mouse moves;
step S127: if the mouse moves, determining the current coordinate value of the mouse on the image to be processed; if the mouse is not moved, whether the left button is clicked or not is continuously judged.
Optionally, several graphic shapes may be preconfigured; after a desired graphic frame is selected, its position is set by dragging the frame, and its size is adjusted by pulling the border, thereby determining the region of interest. Of course, the region of interest may be determined in other ways in other embodiments, which is not specifically limited here.
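The interaction of steps S121-S127 could be sketched with an OpenCV mouse callback as follows; the window name, the image file and the Esc-to-abort binding are illustrative assumptions.

```python
import numpy as np
import cv2

vertices = []        # ROI vertices in click order; first click is the initial vertex
roi_closed = False

def on_mouse(event, x, y, flags, param):
    global roi_closed
    if event == cv2.EVENT_LBUTTONDOWN:                 # left click: record a vertex
        vertices.append((x, y))
    elif event == cv2.EVENT_RBUTTONDOWN and len(vertices) >= 3:
        roi_closed = True                              # right click: close the polygon

image = cv2.imread("to_process.png")                   # placeholder image to be processed
cv2.namedWindow("select ROI")
cv2.setMouseCallback("select ROI", on_mouse)
while not roi_closed:
    canvas = image.copy()
    if len(vertices) > 1:                              # connect each vertex to the last
        cv2.polylines(canvas, [np.array(vertices, np.int32)], False, (0, 255, 0), 2)
    cv2.imshow("select ROI", canvas)
    if cv2.waitKey(20) == 27:                          # Esc aborts the selection
        break
cv2.destroyAllWindows()
# Closing the polygon connects the last vertex back to the initial one.
```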
In one embodiment, before step S20 of detecting whether a non-operator invades the region of interest based on the target detection algorithm, the method further includes:
step S101: judging whether the target operation end 100 is in a working mode; if so, detecting whether a non-operator enters the region of interest based on the target detection algorithm, and otherwise, not performing the detection.
When the target operation end 100 is not in the working mode, there is no need to keep non-operators away from its periphery, so the step of detecting whether a non-operator invades the region of interest is not performed, avoiding early warning actions and/or protection actions that would disturb the operators.
In one embodiment, as shown in FIG. 8, step S20: whether a non-operator invades an interested area or not is detected based on a target detection algorithm, and the method comprises the following steps:
step S21: carrying out feature extraction on image information in the region of interest based on a feature extraction network to obtain a feature extraction result;
step S22: and processing the feature extraction result through a preset classifier, and determining whether a non-operator is in the region of interest.
As an example, the target detection algorithm includes a feature extraction network and a preset classifier. The feature extraction network extracts features only from the image information within the region of interest, not from the image outside it, which reduces image processing time. The feature extraction network comprises a convolutional feature extractor and the Darknet-53 network of YOLOv3. Fig. 9 is a schematic diagram of the convolutional feature extractor, which is mainly implemented by interleaving two convolution kernels, 1 × 1 and 3 × 3, where the 1 × 1 kernel is used for dimensionality reduction and the 3 × 3 kernel for feature extraction. The Darknet-53 network is composed of a series of 1 × 1 and 3 × 3 convolution layers, each followed by a BN layer and a LeakyReLU layer; counting the final fully-connected layer, the whole network has 53 layers in total, which improves detection accuracy.
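A hedged PyTorch sketch of the interleaved 1 × 1 / 3 × 3 unit described above is given below; the framework choice and the LeakyReLU slope of 0.1 are assumptions rather than details fixed by the patent.

```python
import torch.nn as nn

def conv_bn_leaky(in_ch, out_ch, kernel_size):
    # Each convolution layer is followed by a BN layer and a LeakyReLU layer
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size,
                  padding=kernel_size // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )

class DarknetResidual(nn.Module):
    """1x1 convolution for dimensionality reduction, 3x3 convolution for
    feature extraction, plus a skip connection, as in Darknet-53."""
    def __init__(self, channels):
        super().__init__()
        self.reduce = conv_bn_leaky(channels, channels // 2, 1)
        self.extract = conv_bn_leaky(channels // 2, channels, 3)

    def forward(self, x):
        return x + self.extract(self.reduce(x))
```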
By way of example, fig. 10 is a flow chart of the K-means algorithm; the classifier classification is implemented using the K-means algorithm and includes the following steps:
step S211: generating K random positions as initial centroids, each centroid representing a cluster center;
step S212: calculating the Euclidean distance from each object to a clustering center, and classifying the object to the nearest clustering center according to the distance;
step S213: accumulating and calculating the average value of the objects in each class to be used as a new clustering center;
step S214: judging whether the new clustering center obtains smaller distance loss or not;
if yes, go to step S212;
if not, outputting the result.
Classifying with the K-means algorithm improves the accuracy of judging whether a non-operator is in the region of interest, and can accurately distinguish whether the non-operator is in the first early warning region or in the second early warning region.
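A minimal NumPy sketch of steps S211-S214 could look as follows; the convergence tolerance used as the stopping test is an assumption.

```python
import numpy as np

def kmeans(points, k, tol=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]  # S211
    prev_loss = np.inf
    while True:
        # S212: Euclidean distance of each object to every cluster center,
        # then assign the object to the nearest center
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # S213: the mean of the objects in each class becomes the new center
        centroids = np.array([points[labels == i].mean(axis=0)
                              if np.any(labels == i) else centroids[i]
                              for i in range(k)])
        # S214: output once the distance loss stops decreasing
        loss = dists[np.arange(len(points)), labels].sum()
        if prev_loss - loss < tol:
            return labels, centroids
        prev_loss = loss
```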
In one embodiment, as shown in fig. 11, the region of interest includes a first early warning region and a second early warning region located inside the first early warning region; the triggering of the corresponding early warning action and/or protection action in step S20 includes:
step S23: if the non-operating personnel is detected to be in the first early warning area, triggering a corresponding first early warning action;
step S24: and if the non-operator is detected to be in the second early warning area, triggering a corresponding second early warning action and a corresponding protection action.
As an example, the first warning action includes triggering a first warning light to flash, and/or issuing a voice prompt at a first trigger frequency; the second early warning action comprises triggering a second early warning lamp to flicker and/or sending out a voice prompt of a second trigger frequency; stopping the first early warning action and the second early warning action after the non-operator exits the region of interest; the protection action comprises triggering the target operation end to be locked, and stopping locking protection after the non-operator exits from the second early warning area.
As an example, the second trigger frequency is greater than the first trigger frequency; the first and second warning lamps may differ in color and/or in the brightness of the triggered flashing.
It should be noted that the region of interest may also be divided into three or more early warning regions, adjustable according to the size of the operating room and the surgical scenario; the application does not limit this.
In the safety protection method provided by this embodiment, the corresponding early warning action is triggered according to which early warning region the non-operator is in: the first early warning action when the non-operator is in the first early warning region, and the second early warning action, whose priority is higher than that of the first, when the non-operator is in the second early warning region. This forms a graded early warning mechanism with different risk levels, reminds non-operators to stay away from the region of interest, and improves the safety of the target operation end.
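The two-level dispatch of steps S23 and S24 could be sketched as follows; the zone labels and actuator functions are illustrative stand-ins, not APIs defined by the patent.

```python
# Stand-in actuators; a real system would drive warning lamps, audio
# playback and the operation-end brakes instead of printing.
def flash_lamp(level): print(f"warning lamp {level} flashing")
def voice_prompt(freq): print(f"voice prompt at the {freq} trigger frequency")
def lock_target_operation_end(): print("target operation end locked")

def handle_intrusion(zone):
    if zone == "first_warning":            # outer region: warn only (S23)
        flash_lamp(1)
        voice_prompt("first")              # lower trigger frequency
    elif zone == "second_warning":         # inner region: warn and protect (S24)
        flash_lamp(2)
        voice_prompt("second")             # higher trigger frequency
        lock_target_operation_end()        # protection action

handle_intrusion("second_warning")
```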
In one embodiment, as shown in fig. 12, the security protection method further includes the steps of:
step S30: and detecting whether a light curtain arranged around the target operation end in advance is shielded, and if so, triggering a third early warning action and a protection action.
Specifically, the third early warning action includes triggering a third warning lamp to flash and/or issuing a voice prompt at a third trigger frequency; when the light curtain is no longer blocked, the flashing of the third warning lamp and the voice prompt at the third trigger frequency stop. The protection action includes triggering the target operation end 100 to lock; when the non-operator exits the light curtain area, the target operation end 100 stops the locking protection and returns to the normal operation state.
As an example, the third trigger frequency is greater than the second trigger frequency; the color and/or brightness of the third early warning lamp are different from those of the first early warning lamp and the second early warning lamp.
As an example, the area of the light curtain is less than or equal to the area of the second warning region; the light curtain is located in the inner side of the second early warning area.
By way of example, the formed light curtain includes an electronic fence formed by visible rays that can be blocked by non-operators, wherein the visible rays include laser or infrared, among others.
In the light curtain monitoring provided by this embodiment, when image monitoring cannot monitor in real time, due to external factors, whether a non-operator enters the region of interest, the third early warning action and the protection action are triggered through real-time light curtain monitoring, improving the safety of the target operation end. Light curtain monitoring has a higher priority than image monitoring; together, image monitoring and light curtain monitoring form the safety protection mechanism of the target operation end.
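The priority relationship could be sketched as a polling loop of the following form; the sensor and detector stand-ins and the polling rate are assumptions.

```python
import time

def light_curtain_blocked(): return False      # stand-in for the light curtain sensor
def non_operator_zone(): return None           # stand-in for the image-based detector
def trigger_third_warning(): print("third warning lamp + voice prompt")
def lock_target_operation_end(): print("target operation end locked")
def handle_intrusion(zone): print(f"tiered warning for {zone}")  # see the earlier sketch

def protection_loop():
    while True:
        if light_curtain_blocked():            # light curtain monitoring has priority
            trigger_third_warning()
            lock_target_operation_end()
        else:
            zone = non_operator_zone()         # None when no intrusion is detected
            if zone is not None:
                handle_intrusion(zone)
        time.sleep(0.05)                       # ~20 Hz polling, an assumption
```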
In an embodiment of the present application, as shown in fig. 13, a security protection system is provided for executing the security protection method as described above, and the security protection system includes an image monitoring module 10 and an execution module 20 connected to the image monitoring module. The image monitoring module 10 is configured to: determining an interested area according to image information in a target operation end monitoring area acquired in real time; detecting whether a non-operator invades the region of interest based on a target detection algorithm; the execution module 20 is used for triggering a corresponding early warning action and/or a protection action when a non-operator invades the region of interest.
The security protection system provided in the above embodiment includes an image monitoring module and an execution module connected to the image monitoring module. The image monitoring module determines a region of interest according to image information, acquired in real time, of the monitoring area around the target operation end, and detects whether a non-operator invades the region of interest based on a target detection algorithm. When a non-operator invades the region of interest, the execution module triggers a corresponding early warning action and/or protection action, so that a major risk caused by a non-operator accidentally hitting the target operation end is avoided and the safety of the target operation end is improved.
As an example, the execution module 20 comprises an audible and visual alarm for performing the pre-warning action.
In one embodiment, the region of interest includes a first early warning region and a second early warning region located inside the first early warning region; the execution module 20 includes a locking protection unit for triggering the target operation end 100 to execute the locking protection action when a non-operator intrudes into the second early warning region.
In one embodiment, as shown in fig. 14, the image monitoring module 10 includes an image acquisition unit 11 and an image processing unit 12. The image acquisition unit 11 is configured to: acquiring image information in a monitoring area in real time, and preprocessing the image information to obtain an image to be processed; the image processing unit 12 is connected to both the image acquisition unit 11 and the execution module 20, and the image processing unit 12 is configured to: determining a region of interest in an image to be processed; carrying out feature extraction on image information in the region of interest based on a feature extraction network to obtain a feature extraction result; and processing the feature extraction result through a preset classifier, and determining whether a non-operator is in the region of interest.
Specifically, as shown in fig. 15, if the image processing unit 12 detects that the non-operator is in the first warning area, the execution module 20 is triggered to execute a corresponding first warning action; if the non-operator is detected to be in the second early warning area, the execution module 20 is triggered to execute the corresponding second early warning action and the corresponding protection action.
In particular, the execution module 20 is further configured to: trigger the first warning lamp to flash and/or issue a voice prompt at the first trigger frequency; or trigger the second warning lamp to flash and/or issue a voice prompt at the second trigger frequency. The flashing of the second warning lamp and the voice prompt at the second trigger frequency stop when the non-operator exits the second early warning region, and the flashing of the first warning lamp and the voice prompt at the first trigger frequency stop when the non-operator exits the first early warning region. The execution module 20 is also configured to: trigger the locking protection of the target operation end according to the second early warning information, as shown in fig. 16, until the non-operator exits the second early warning region, whereupon the main operation end 100 stops the locking protection.
As an example, the safety protection system further comprises a UPS power supply that provides a stable uninterrupted power supply for the safety protection system. The image acquisition unit 11 includes a monitoring camera and a hard disk, and the camera acquires image information, transmits the image information via an internal video line, and stores the image information in the hard disk.
In one embodiment, as shown in fig. 17, an interactive interface is provided on the image monitoring module 10. The interactive interface is configured to: execute corresponding preset actions according to received control instructions; the control instruction includes at least one of a reference frame selection instruction, a de-jitter instruction, a region-of-interest selection instruction and a target detection instruction; the preset action includes at least one of reference frame selection, de-jittering, region-of-interest selection and target detection.
In one embodiment, the interactive interface is further configured to: when the image monitoring module 10 detects that a non-operator has invaded the region of interest, a display frame of the interactive interface flashes as a prompt and displays prompt information; the interactive interface is electrically connected to the image processing unit 12.
By way of example, continuing with FIG. 17, the interactive interface further includes an "OK" button and a "Cancel" button. After the interactive interface receives a control instruction, the "OK" button is triggered before the corresponding preset action is executed; to cancel or interrupt any preset action, the "Cancel" button is triggered.
In one embodiment, as shown in fig. 18, the security protection system further comprises a light curtain and a light curtain monitoring module 30. The light curtain (not shown) is disposed around the target operation end (e.g., the main operation end 100); the light curtain monitoring module 30 is connected to the execution module 20 and configured to: detect whether the light curtain disposed in advance around the target operation end is blocked, and if so, trigger the execution module 20 to execute the corresponding third early warning action and/or protection action.
Specifically, as shown in fig. 19, the light curtain monitoring module 30 includes a transceiver unit 31 and a monitoring unit 32. The transceiver unit 31 is disposed on the target operation end and is used to transmit and receive visible rays to form the light curtain; the monitoring unit 32 is integrated in the target operation end, is connected to the execution module 20, monitors whether the light curtain is blocked, and generates a monitoring signal upon detecting that the light curtain is blocked.
By way of example, visible radiation includes laser or infrared, among others; the light curtain comprises an electronic fence.
In an embodiment, the execution module 20 is further configured to trigger the third warning light to blink according to the monitoring signal, send a voice prompt of a third trigger frequency, and stop the blinking of the third warning light and the voice prompt of the third trigger frequency when the light curtain is not blocked; the execution module 20 is further configured to trigger the target operation end locking protection according to the monitoring signal, and control the target operation end to stop the locking protection when the light curtain is not blocked.
In an embodiment of the present application, a surgical robot system is further provided, which includes a main operation end, a surgical trolley, and the safety protection system as described above, wherein the main operation end is used as a target operation end protected by the safety protection system. The surgical robot system may be, for example, the surgical robot system shown in fig. 1, and the main operation end is, for example, the main operation end 100 shown in fig. 1. Of course, the present application is not intended to limit the particular form of the surgical robot.
In an embodiment of the present application, a computer device is also proposed, as shown in fig. 20, comprising a memory and a processor, the memory having stored thereon a computer program, which when executed by the processor, implements the steps of the method as described above.
Those skilled in the art will appreciate that the architecture shown in fig. 20 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In an embodiment of the present application, a storage medium is also proposed, on which a computer program is stored, which computer program, when being executed by a processor, realizes the steps of the method as described above.
It should be understood that the steps described are not necessarily performed in the exact order recited and, unless explicitly stated otherwise, may be performed in other orders. Moreover, at least some of the steps described may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and not necessarily sequentially; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (17)

1. A method of security protection, comprising:
determining a region of interest according to image information, acquired in real time, of a target operation end monitoring area;
and detecting whether a non-operator invades the region of interest based on a target detection algorithm, and if so, triggering a corresponding early warning action and/or a protection action.
2. The method according to claim 1, wherein the determining the region of interest according to the image information in the monitoring region of the target operation terminal acquired in real time comprises:
preprocessing the image information to obtain an image to be processed;
and determining the region of interest in the image to be processed.
3. The method of claim 2, wherein the pre-processing comprises at least one of noise removal, de-dithering, and image distortion removal.
4. The method of claim 3, wherein the de-dithering process comprises:
acquiring a multi-frame image contained in the image information;
determining one frame of the multi-frame images as a reference frame, and determining other images except the reference frame as target frames;
sequentially carrying out graying processing, binarization processing and image projection calculation on the multi-frame image;
calculating a correlation coefficient between the image projection calculation result of the target frame and the image projection calculation result of the reference frame;
and carrying out de-jitter processing on the target frame according to the correlation coefficient.
5. The method according to claim 2, wherein the determining the region of interest in the image to be processed comprises:
and determining the region of interest in the image to be processed according to an interactive instruction input by a user.
6. The method according to any one of claims 1-5, wherein before the detecting whether a non-operator invades the region of interest based on the target detection algorithm, the method further comprises:
judging whether the target operation end is in a working mode, and if so, detecting whether a non-operator has entered the region of interest based on the target detection algorithm.
7. The method according to any one of claims 1-5, wherein the detecting whether a non-operator invades the region of interest based on the target detection algorithm comprises:
extracting the features of the image information in the region of interest based on a feature extraction network to obtain a feature extraction result;
and processing the feature extraction result through a preset classifier, and determining whether a non-operator is in the region of interest.
8. The method of any one of claims 1-5, wherein the region of interest comprises a first pre-warning region and a second pre-warning region located inside the first pre-warning region; the triggering of the corresponding early warning action and/or protection action includes:
if the non-operator is detected to be in the first early warning area, triggering a corresponding first early warning action;
and if the non-operator is detected to be in the second early warning area, triggering a corresponding second early warning action and the protection action.
9. The method of claim 8,
the first early warning action comprises triggering a first early warning lamp to flicker and/or sending out a voice prompt of a first trigger frequency;
the second early warning action comprises triggering a second early warning lamp to flicker and/or sending out a voice prompt of a second trigger frequency;
the protection action comprises triggering locking of the target operation end.
10. The method of any one of claims 1-5, further comprising:
and detecting whether a light curtain arranged around the target operation end in advance is shielded, and if so, triggering a third early warning action and the protection action.
11. The method of claim 10,
the third early warning action comprises triggering a third early warning lamp to flicker and/or sending out a voice prompt of a third triggering frequency;
the protection action comprises triggering locking of the target operation end.
12. A security system, comprising:
an image monitoring module configured to:
determining a region of interest according to image information, acquired in real time, of a target operation end monitoring area;
detecting whether a non-operator invades the region of interest based on a target detection algorithm;
and the execution module is connected with the image monitoring module and used for triggering corresponding early warning action and/or protection action when a non-operator invades the region of interest.
13. The system of claim 12, wherein the image monitoring module comprises:
an image acquisition unit configured to: acquiring image information in a monitoring area of a target operation terminal in real time, and preprocessing the image information to obtain an image to be processed;
the image processing unit is connected with the image acquisition unit and the execution module and is configured to:
determining the region of interest in the image to be processed;
extracting the features of the image information in the region of interest based on a feature extraction network to obtain a feature extraction result;
and processing the feature extraction result through a preset classifier, and determining whether a non-operator is in the region of interest.
14. The system of claim 12, further comprising:
the light curtain is arranged around the target operation end;
a light curtain monitoring module, connected to the execution module, configured to: and detecting whether the light curtain is shielded, if so, triggering the execution module to execute corresponding early warning action and/or protection action.
15. A surgical robotic system comprising a main operating end and a surgical trolley, wherein the main operating end is a target operating end, characterized by further comprising a safety protection system according to any one of claims 12-14.
16. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any one of claims 1 to 11 when executing the computer program.
17. A storage medium having a computer program stored thereon, the computer program, when being executed by a processor, realizing the steps of the method according to any of the claims 1 to 11.
CN202111660997.XA 2021-12-06 2021-12-30 Security protection method, system, computer device and storage medium Pending CN114419545A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111660997.XA CN114419545A (en) 2021-12-30 2021-12-30 Security protection method, system, computer device and storage medium
PCT/CN2022/137021 WO2023104055A1 (en) 2021-12-06 2022-12-06 Safety protection method and system, readable storage medium, and surgical robot system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111660997.XA CN114419545A (en) 2021-12-30 2021-12-30 Security protection method, system, computer device and storage medium

Publications (1)

Publication Number Publication Date
CN114419545A true CN114419545A (en) 2022-04-29

Family

ID=81270578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111660997.XA Pending CN114419545A (en) 2021-12-06 2021-12-30 Security protection method, system, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN114419545A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023104055A1 (en) * 2021-12-06 2023-06-15 上海微创医疗机器人(集团)股份有限公司 Safety protection method and system, readable storage medium, and surgical robot system

Similar Documents

Publication Publication Date Title
US20210157403A1 (en) Operating room and surgical site awareness
US10610307B2 (en) Workflow assistant for image guided procedures
CN100563744C (en) Be used for providing the system of trend analysis at calm and analgesic systems
EP3342343B1 (en) Radiograph interpretation assistance device and method
KR101795720B1 (en) Control method of surgical robot system, recording medium thereof, and surgical robot system
IL262915B2 (en) System and method for interactive event timeline
US10582900B2 (en) Object approach detection for use with medical diagnostic apparatus
WO2013141155A1 (en) Image completion system for in-image cutoff region, image processing device, and program therefor
US20200184638A1 (en) Systems and methods for enhancing surgical images and/or video
BR112020019815A2 (en) ANALYSIS AND PRESENTATION OF PHOTOPLETISMOGRAM DATA
US20180014903A1 (en) Head-mountable computing device, method and computer program product
CN113648066B (en) Collision detection method, electronic equipment and master-slave surgical robot
CN114419545A (en) Security protection method, system, computer device and storage medium
EP4216861A1 (en) Systems and methods for predicting and preventing bleeding and other adverse events
US20230360253A1 (en) Medical information processing system and medical information processing apparatus
US20230078329A1 (en) Operating room video analytic systems and methods
EP3975201A1 (en) Sterility in an operation room
CN117136028A (en) patient monitoring system
WO2023104055A1 (en) Safety protection method and system, readable storage medium, and surgical robot system
EP4293682A1 (en) Operating room monitoring and alerting system
WO2020258333A1 (en) Monitoring and processing method and device, and storage medium
CN115844532A (en) Method and apparatus for planning navigation
EP4334906A1 (en) Clinical activity recognition with multiple cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination