CN114927229A - Operation simulation method and device, electronic equipment and storage medium - Google Patents

Operation simulation method and device, electronic equipment and storage medium

Info

Publication number
CN114927229A
Authority
CN
China
Prior art keywords
log
virtual pet
image
action
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210425044.3A
Other languages
Chinese (zh)
Inventor
贾彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New Ruipeng Pet Healthcare Group Co Ltd
Original Assignee
New Ruipeng Pet Healthcare Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New Ruipeng Pet Healthcare Group Co Ltd filed Critical New Ruipeng Pet Healthcare Group Co Ltd
Priority to CN202210425044.3A
Publication of CN114927229A
Legal status: Pending

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/50 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/40 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture

Landscapes

  • Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • General Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Data Mining & Analysis (AREA)
  • Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Surgery (AREA)
  • Urology & Nephrology (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The application relates to the technical field of artificial intelligence, and particularly discloses a surgery simulation method, a surgery simulation device, electronic equipment and a storage medium, wherein the method comprises the following steps: constructing an initial virtual pet according to the pet information and the inspection information of the pet to be operated on, so as to simulate the operation; displaying the virtual pet to an operating doctor, and acquiring a real-time operation video of the operating doctor on the initial virtual pet; performing multiple parameter adjustment processing on the initial virtual pet according to the real-time operation video to obtain a simulation result and an operation log, wherein the operation log is used for recording the operation process of the operating doctor on the initial virtual pet and the influence of the operation process on the initial virtual pet; and generating the feasibility rate of the operation according to the simulation result, and displaying the simulation result, the operation log and the feasibility rate to the operating doctor.

Description

Operation simulation method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a surgery simulation method, a surgery simulation device, electronic equipment and a storage medium.
Background
Currently, surgery is widely used in the treatment of pet diseases. However, for surgery on life-threatening diseases, since different pets are in different conditions, whether the surgery is feasible can at present be judged only from the experience of the operating physician, and the surgery is then attempted after the owner has been informed of the risk. Therefore, there is a need for a method of determining the feasibility rate of a surgery with high interpretability, so that the feasibility rate of the surgical plan and the problems that may arise can be determined before the surgery is performed.
Disclosure of Invention
In order to solve the above problems in the prior art, the embodiments of the present application provide a surgery simulation method, device, electronic device, and storage medium, which make it possible to obtain the feasibility rate of a surgical scheme and the problems that may arise before the surgery, thereby improving the success rate of the surgical scheme.
In a first aspect, an embodiment of the present application provides a surgical simulation method, including:
constructing an initial virtual pet according to the pet information and the inspection information of the pet to be operated so as to simulate the operation;
displaying the virtual pet to an operating doctor, and acquiring a real-time operation video of the operating doctor on the initial virtual pet;
performing multiple parameter adjustment processing on the initial virtual pet according to the real-time operation video to obtain a simulation result and an operation log, wherein the operation log is used for recording the operation process of an operation doctor on the initial virtual pet and the influence of the operation process on the initial virtual pet;
and generating the feasibility rate of the operation according to the simulation result, and displaying the simulation result, the operation log and the feasibility rate to an operating doctor.
In a second aspect, embodiments of the present application provide a surgical simulator comprising:
the modeling module is used for constructing an initial virtual pet according to the pet information and the inspection information of the pet to be operated so as to simulate the operation;
the simulation module is used for showing the virtual pet to an operating doctor, acquiring a real-time operation video of the operating doctor on the initial virtual pet, and performing multiple parameter adjustment processing on the initial virtual pet according to the real-time operation video to obtain a simulation result and an operation log, wherein the operation log is used for recording the operation process of the operating doctor on the initial virtual pet and the influence of the operation process on the initial virtual pet;
and the display module is used for generating the feasibility rate of the operation according to the simulation result and displaying the simulation result, the operation log and the feasibility rate to an operating doctor.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor coupled to a memory for storing a computer program, the processor being configured to execute the computer program stored in the memory to cause the electronic device to perform the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, the computer program causing a computer to perform the method of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the method according to the first aspect.
The implementation of the embodiment of the application has the following beneficial effects:
in the embodiment of the application, the initial virtual pet with the same state as the pet to be operated is constructed through the pet information of the pet to be operated and the examination information reflecting the current diseased state of the pet to be operated. Then, the virtual pet is displayed to an operating doctor, so that the operating doctor can perform operation on the initial virtual pet, and then simulation of an operation scheme is realized. Meanwhile, in the process of operation, a real-time operation video of an operating doctor on the initial virtual pet is obtained, and then parameter adjustment processing is carried out on the initial virtual pet for many times according to the real-time operation video to obtain a simulation result and an operation log. Specifically, by identifying the operation action in the real-time operation video in real time and then adjusting the parameters of the virtual pet, the influence of the action on the virtual pet is displayed in front of the operation doctor in real time. For example, when the operator performs a dissection action, the dissection position, depth and size are identified through the video, and the corresponding position of the virtual pet is adjusted to show the dissected appearance. Meanwhile, recording the operation process of the operating doctor on the initial virtual pet and the influence of the operation process on the initial virtual pet through an operation log. And finally, generating the operation feasibility rate according to the simulation result, and displaying the simulation result, the operation log and the feasibility rate to an operating doctor. Therefore, by constructing the virtual pet, the surgical plan is previewed and simulated before the operation, so that not only can an operating doctor become familiar with the operation process, but also the operation result can be previewed, and the operation plan feasibility rate with high interpretability can be obtained. In addition, by recording an operation log of the operation process, risks possibly generated in the operation can be predicted, so that a doctor is assisted to optimize an operation scheme, and the success rate of the operation is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings described below show some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic hardware structure diagram of a surgical simulation apparatus according to an embodiment of the present disclosure;
FIG. 2 is a system framework diagram of a surgery simulation method in a scenario of surgery simulation for pets according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a surgical simulation method according to an embodiment of the present disclosure;
FIG. 4 is a schematic view of a virtual pet displayed on a flat panel display to a surgeon according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a method for performing multiple parameter adjustment processes on an initial virtual pet according to a real-time surgery video to obtain a simulation result and a surgery log according to an embodiment of the present application;
fig. 6 is a schematic diagram of grouping at least one second key working image according to the screening process to obtain at least one group of image sets according to the embodiment of the present application;
FIG. 7 is a block diagram illustrating functional modules of a surgical simulator provided in an exemplary embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art without any inventive work based on the embodiments in the present application are within the scope of protection of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
First, referring to fig. 1, fig. 1 is a schematic hardware structure diagram of a surgical simulation apparatus according to an embodiment of the present disclosure. The surgical simulation apparatus 100 includes at least one processor 101, a communication link 102, a memory 103, and at least one communication interface 104.
In this embodiment, the processor 101 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present disclosure.
The communication link 102, which may include a pathway, conveys information between the aforementioned components.
The communication interface 104 may be any transceiver or other device (e.g., an antenna) for communicating with other devices or a communication network, such as an Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The memory 103 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a Random Access Memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disk storage, optical disk storage (including compact disc, laser disc, optical disc, digital versatile disc, blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
In this embodiment, the memory 103 may be independent and connected to the processor 101 through the communication line 102. The memory 103 may also be integrated with the processor 101. The memory 103 provided in the embodiments of the present application may generally have a nonvolatile property. The memory 103 is used for storing computer-executable instructions for executing the present application, and is controlled by the processor 101 to execute. The processor 101 is configured to execute computer-executable instructions stored in the memory 103, thereby implementing the methods provided in the embodiments of the present application described below.
In alternative embodiments, computer-executable instructions may also be referred to as application code, which is not specifically limited in this application.
In alternative embodiments, the processor 101 may include one or more CPUs, such as CPU0 and CPU1 in fig. 1.
In an alternative embodiment, the surgical simulator 100 may include a plurality of processors, such as processor 101 and processor 107 of FIG. 1. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In an alternative embodiment, if the surgical simulation apparatus 100 is a server, for example, the surgical simulation apparatus may be an independent server, or may be a cloud server that provides basic cloud computing services such as cloud service, cloud database, cloud computing, cloud function, cloud storage, web service, cloud communication, middleware service, domain name service, security service, Content Delivery Network (CDN), and big data and artificial intelligence platform. The surgical simulator 100 may further include an output device 105 and an input device 106. The output device 105 is in communication with the processor 101 and may display information in a variety of ways. For example, the output device 105 may be a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display device, a Cathode Ray Tube (CRT) display device, a projector (projector), or the like. The input device 106 is in communication with the processor 101 and may receive user input in a variety of ways. For example, the input device 106 may be a mouse, a keyboard, a touch screen device, or a sensing device, among others.
The surgical simulator 100 may be a general purpose device or a special purpose device. The embodiment of the present application does not limit the type of the surgical simulator 100.
Furthermore, it should be noted that the simulation method provided by the present application can be applied to various plan simulation scenarios such as operation simulation, cultivation simulation, training simulation, etc. for pets. In the embodiment, a scenario of operation simulation of a pet will be taken as an example to explain the simulation method provided in the present application, and simulation methods in other scenarios are similar to the simulation method in the scenario of operation simulation of a pet, and are not repeated here.
Finally, fig. 2 is a system framework diagram of a surgery simulation method in a scenario of surgery simulation for pets according to an embodiment of the present application. Specifically, the system may include: a physician device 201, a simulation device 202, and a database 203. The physician device 201 may be a smartphone (e.g., an Android phone, an iOS phone, or a Windows phone), a tablet computer, a palmtop computer, a notebook computer, a mobile Internet device (MID), or the like. Specifically, the physician device 201 is configured to acquire pet information and examination information of a pet to be operated on, and send the acquired information to the simulation device 202. Meanwhile, the simulation device 202 may further include a simulation processing device 204 and a simulation display device 205. The simulation processing device 204 may be a server, and is configured to receive the pet information and the examination information sent by the physician device 201, then call data in the database 203 according to the received information to construct a virtual pet, and display the constructed virtual pet to the operating physician through the simulation display device 205. Meanwhile, the simulation processing device 204 is further configured to receive the real-time operation video fed back by the simulation display device 205, call data in the database 203 according to the real-time operation video, and perform parameter adjustment on the virtual pet displayed by the simulation display device 205 in real time. In addition, the simulation processing device 204 is also used for determining the feasibility rate of the surgical plan according to the simulation result. The simulation display device 205 may include a display device 206 and a photographing device 207, where the display device 206 may be a flat panel display and/or a 3D projector for receiving and displaying the virtual pet, the feasibility rate of the surgical plan, the simulation result and the operation log transmitted by the simulation processing device 204. The photographing device 207 may be a plurality of cameras disposed around the display device at different viewing angles, and is configured to acquire a real-time operation video of the operating physician on the virtual pet, generate a corresponding operation log, and send the video and the operation log to the simulation processing device 204.
In the embodiment, by constructing the virtual pet, the surgical plan is previewed and simulated before the operation, so that not only can an operating doctor become familiar with the operation process, but also the operation result can be previewed, and the operation plan feasibility rate with high interpretability can be obtained. In addition, by recording the operation log of the operation process, the risk possibly generated in the operation can be predicted, so that a doctor is assisted to optimize the operation scheme, and the success rate of the operation is improved.
Hereinafter, the operation simulation method disclosed in the present application will be described by taking a scenario of operation simulation for a pet as an example:
referring to fig. 3, fig. 3 is a schematic flow chart of a surgical simulation method according to an embodiment of the present disclosure. The operation simulation method comprises the following steps:
301: and constructing an initial virtual pet according to the pet information and the inspection information of the pet to be operated so as to simulate the operation.
In the present embodiment, the pet information may include: breed information, age information, body type information, weight information, appearance information, and the like. The operating physician can acquire the pet information from the database by entering, in the physician device 201, the medical record number assigned to the pet at the time of its visit; alternatively, the physician device 201 can generate a two-dimensional code for an information filling interface, and the owner of the pet to be operated on scans the two-dimensional code, after which the information is filled in and uploaded.
In this embodiment, the examination information may include the information generated by each auxiliary device after examining the pet to be operated on. Similarly, the operating physician may obtain the examination information from the database by entering the medical record number of the pet at the time of its visit into the physician device 201.
In this way, in the present embodiment, a standard body model library can be searched using the breed information and age information in the pet information, so as to acquire the body model of a pet of the same breed and age in a standard state of development. The matched body model is then adjusted using the actual body type information of the pet to be operated on (for example, body length, limb length and body width), the weight information, and the examination information. For example, the proportion of muscle and bone, the visceral fat content and the like are adjusted according to the body type information and the weight information, and corresponding disease features are added to the body model according to the examination information, thereby obtaining a virtual pet that is the same as the pet to be operated on.
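The following is a minimal sketch, in Python, of how this construction step could be organized. The data structures (PetInfo, the keys of the standard body model library, and the field names of the model and of the examination findings) are assumptions introduced only for illustration and are not taken from the patent; the sketch merely mirrors the steps described above: matching a standard body model by breed and age, scaling it to the actual measurements and weight, and attaching disease features from the examination information.

```python
from dataclasses import dataclass

@dataclass
class PetInfo:
    breed: str
    age_years: int
    body_length_cm: float
    limb_length_cm: float
    body_width_cm: float
    weight_kg: float

def build_virtual_pet(pet_info: PetInfo, exam_findings: list, model_library: dict) -> dict:
    """Construct the initial virtual pet (step 301); all field names are illustrative."""
    # 1. Match a standard body model of the same breed and age.
    base = model_library[(pet_info.breed, pet_info.age_years)]
    model = dict(base)  # work on a copy so the library entry is not modified

    # 2. Scale the matched model to the actual measurements of this pet.
    model["body_length_cm"] = pet_info.body_length_cm
    model["limb_length_cm"] = pet_info.limb_length_cm
    model["body_width_cm"] = pet_info.body_width_cm

    # 3. Adjust tissue composition from body type and weight
    #    (e.g. muscle/bone proportion, visceral fat content).
    model["visceral_fat_ratio"] = base["visceral_fat_ratio"] * (
        pet_info.weight_kg / base["standard_weight_kg"]
    )

    # 4. Add disease features taken from the examination information.
    model["disease_features"] = list(exam_findings)
    return model
```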
302: and displaying the virtual pet to an operating doctor, and acquiring a real-time operation video of the operating doctor on the initial virtual pet.
In this embodiment, the virtual pet may be displayed to the operating physician via the display device 206. For example, the display device 206 may be a flat panel display and/or a 3D projector. When the display device 206 is a flat panel display, as shown in fig. 4, the virtual pet can be presented in the display as a 3D model, and the operating physician can rotate, move and zoom the 3D model in the display by touch. Meanwhile, different surgical instruments can be switched by selection in the toolbar, and the corresponding surgical operation is then performed by touch on the position of the virtual pet shown in the display area. In this case, the corresponding photographing device 207 may include, in addition to the cameras surrounding the display device 206 at a plurality of different viewing angles, screen recording software disposed in the display device 206; a screen recording of the flat panel display can then be acquired in addition to the video of the operating physician's touch operations on the display device 206, and the touch operation video and the screen recording are used together as the real-time operation video of the operating physician on the initial virtual pet, thereby improving the accuracy and reliability of the subsequent analysis.
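As a rough illustration of how the touch operation video and the screen recording could be combined into a single real-time operation video stream, the sketch below pairs frames from the cameras and from the screen recording index by index; it assumes all recordings start at the same moment and share a frame rate, which the patent does not specify, and uses OpenCV only as an example capture backend.

```python
import cv2

def collect_realtime_operation_video(camera_paths, screen_recording_path):
    """Pair each screen-recording frame with the camera frames captured at the
    same frame index (assumed simultaneous start and identical frame rate)."""
    camera_captures = [cv2.VideoCapture(p) for p in camera_paths]
    screen_capture = cv2.VideoCapture(screen_recording_path)

    paired_frames = []
    while True:
        camera_frames = []
        for capture in camera_captures:
            ok, frame = capture.read()
            camera_frames.append(frame if ok else None)
        ok_screen, screen_frame = screen_capture.read()
        if not ok_screen or all(f is None for f in camera_frames):
            break
        # One sample of the real-time operation video: all camera views of the
        # physician's touch operations plus the simultaneous screen recording.
        paired_frames.append({"cameras": camera_frames, "screen": screen_frame})

    for capture in camera_captures + [screen_capture]:
        capture.release()
    return paired_frames
```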
In this embodiment, when the display device 206 is a 3D projector, the virtual pet can be projected onto the display stand as a 3D stereoscopic image, and the operating physician can operate directly on the projected 3D stereoscopic image, which reproduces the real operating environment more faithfully.
303: and performing multiple parameter adjustment processing on the initial virtual pet according to the real-time operation video to obtain a simulation result and an operation log.
In this embodiment, the operation log is used to record the operation procedure of the surgeon on the initial virtual pet and the influence of the operation procedure on the initial virtual pet. Specifically, as shown in fig. 5, a method for obtaining a simulation result and an operation log by performing multiple parameter adjustment processes on an initial virtual pet according to a real-time operation video according to this embodiment will be described below with reference to a scene when the display device 206 is a 3D projector, the method including:
501: in the ith parameter adjustment process, the first real-time video A is processed i Extracting the characteristics to obtain a first action characteristic B i
In this embodiment, i is an integer greater than or equal to 1, and when i = 1, the first real-time video A_i is the real-time operation video. Illustratively, the first real-time video A_i may first be parsed frame by frame to obtain at least one first key action image. The at least one first key action image is then screened according to the image content of each first key action image to obtain at least one second key working image. Specifically, since the action feature is extracted to clarify what operation the operating physician performs on the virtual pet, an image containing only the operating physician or only the pet to be operated on cannot serve this purpose. Therefore, the first key action images whose image content contains only the operating physician or only the pet to be operated on are screened out, and the remaining first key action images are used as the at least one second key working image.
Then, in this embodiment, the at least one second key working image may be grouped according to the screening process, so as to obtain at least one group of image sets. Specifically, the at least one first key action image is arranged according to the time order of the corresponding video frames, so that the at least one second key working image can be grouped using the positions of the screened-out first key action images as separation points. Illustratively, as shown in fig. 6, there are 7 first key action images: A, B, C, D, E, F and G. After image C and image F are screened out, the remaining second key working images A, B, D, E and G can be divided into three groups according to the positions of image C and image F in the original sequence, the three groups being: images A and B, images D and E, and image G.
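A minimal sketch of this screening-and-grouping step is given below; the predicate that decides whether a frame shows both the operating physician and the virtual pet is left as an assumed helper, since the patent does not specify how the image content is classified, while the grouping logic itself reproduces the example of fig. 6.

```python
def screen_and_group(first_key_action_images, shows_physician_and_pet):
    """Split the first key action images into groups of second key working
    images, using the screened-out images as separation points (cf. fig. 6)."""
    groups, current = [], []
    for image in first_key_action_images:     # images are in video-frame order
        if shows_physician_and_pet(image):    # kept as a second key working image
            current.append(image)
        else:                                 # screened out: acts as a separator
            if current:
                groups.append(current)
                current = []
    if current:
        groups.append(current)
    return groups

# Seven key action images A..G with C and F screened out yield
# [['A', 'B'], ['D', 'E'], ['G']], matching the example of fig. 6.
example = screen_and_group(list("ABCDEFG"), lambda img: img not in {"C", "F"})
```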
Then, in this embodiment, image feature extraction may be performed on each of the at least one group of image sets to obtain at least one image feature in one-to-one correspondence with the at least one group of image sets. Specifically, feature extraction may be performed on each second key working image in each group of image sets to obtain at least one key image feature; the at least one key image feature is then vertically spliced in the order of the occurrence time of the corresponding second key working images to obtain the image feature of that group of image sets. The image features of all the groups of image sets are collected to obtain the at least one image feature.
Finally, the at least one image feature can be vertically spliced to obtain the first action feature B_i.
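The per-group feature extraction and the two levels of vertical splicing could look roughly like the sketch below; the frame-level feature extractor is an assumed stand-in, since the patent does not name a particular feature network, and each frame feature is assumed to be a fixed-width row vector.

```python
import numpy as np

def extract_action_feature(image_groups, extract_frame_feature):
    """Build the action feature B_i: vertically splice the per-frame features
    inside each group, then vertically splice the resulting group features."""
    group_features = []
    for group in image_groups:                            # frames already in time order
        frame_features = [extract_frame_feature(img) for img in group]
        group_features.append(np.vstack(frame_features))  # splice within the group
    return np.vstack(group_features)                      # splice across the groups
```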
502: according to the first real-time video A i And the first action characteristic Bi to the first virtual pet C i Adjusting parameters to obtain a first simulation result D i
In the present embodiment, when i = 1, the first virtual pet C_i is the initial virtual pet. That is, in the first adjustment the object of the adjustment is the initial virtual pet, and in each subsequent adjustment the object is the result of the previous adjustment. Specifically, the implementation position and operation information of the operation currently performed by the operating physician can first be determined according to the first real-time video A_i, i.e., the surgical action currently being performed by the operating physician and the location of that action are identified in real time. For example, when the operation performed is opening the abdomen of the pet, the implementation position is the abdomen, and the operation information is the abdomen-opening operation. Further, the implementation position and operation information can be identified more precisely, such as which area of the abdomen, the direction of the dissection, its depth, the opening size, the dissection technique, and the like.
Then, the operation action information corresponding to the first action feature B_i is obtained by matching the first action feature B_i against an action library, and the influence information of the operation on the implementation position is determined according to the operation information and the operation action information. Specifically, the influence information is used to identify the state change of the implementation position before and after the surgical operation is performed. Taking the dissection mentioned above as an example, when the action is performed, the corresponding position will present a dissection opening along the dissection direction, and the muscle texture or internal organ presented under the opening is determined, for display, according to the dissection depth, the opening size and the dissection position. The influence information can be obtained by collecting this information.
Then, an adjustment parameter of the implementation position may be determined according to the influence information, and parameter adjustment may be performed on the first virtual pet C_i according to the adjustment parameter to obtain the first simulation result D_i. Taking the dissection example again, the influence information records the opening position, the opening direction, the opening depth, and the content (muscle texture or internal organs) presented under the opening; the model parameters of the corresponding part of the first virtual pet C_i can therefore be adjusted according to this information so that the part presents the appearance recorded in the influence information. Alternatively, a corresponding texture map can be matched in the database according to the influence information and used to replace the map of the corresponding part of the first virtual pet C_i, thereby realizing the parameter adjustment of the first virtual pet C_i and obtaining the first simulation result D_i.
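A sketch of this adjustment logic is given below, using the dissection example; the nearest-neighbour lookup in the action library, the structure of the influence information, and the model/texture fields are all simplified assumptions intended only to mirror the description above.

```python
import numpy as np

def adjust_virtual_pet(virtual_pet, action_feature, implementation_position,
                       operation_info, action_library, texture_db):
    """One round of parameter adjustment (step 502); all field names are illustrative."""
    # Match the action feature against the action library (nearest neighbour).
    distances = {name: np.linalg.norm(action_feature - feature)
                 for name, feature in action_library.items()}
    surgical_action = min(distances, key=distances.get)    # e.g. "dissection"

    # Influence information: how the implementation position changes
    # before and after the surgical action is performed.
    influence = {
        "position": implementation_position,               # e.g. "abdomen"
        "action": surgical_action,
        "direction": operation_info.get("direction"),
        "depth": operation_info.get("depth"),
        "opening_size": operation_info.get("opening_size"),
    }

    # Apply the adjustment: change the model parameters of the affected part,
    # or swap in a matching texture map from the database.
    part = virtual_pet["parts"][influence["position"]]
    part["opening"] = {key: influence[key] for key in ("direction", "depth", "opening_size")}
    part["texture"] = texture_db.get((surgical_action, influence["position"]),
                                     part.get("texture"))
    return virtual_pet     # the adjusted pet is the first simulation result D_i
```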
503: adding the procedure of the ith parameter adjustment processing to the first operation log E i In (2), get the second operation log F i
In the present embodiment, when i = 1, the first operation log E_i is an empty log.
504: the first simulation result D is obtained i As the first virtual pet C in the i +1 th parameter adjustment processing i+1 Second surgery log F i As the first operation log E in the i +1 th parameter adjustment processing i+1 And acquiring a first real-time video A corresponding to the i +1 th parameter adjustment processing i+1 And (5) performing the (i + 1) th parameter adjustment processing until a plurality of times of parameter adjustment processing are performed to obtain a simulation result and an operation log.
In an alternative embodiment, after the procedure of the i-th parameter adjustment processing has been added to the first operation log E_i to obtain the second operation log F_i, it is judged whether the current result meets an end condition; if so, the multiple rounds of parameter adjustment processing are ended directly and the simulation result and the operation log are output, which further improves processing efficiency. Specifically, the action tag of the first action feature B_i may be acquired; when the action tag is an ending action, the multiple rounds of parameter adjustment processing are ended directly, the first simulation result D_i is taken as the simulation result, and the second operation log F_i is taken as the adjustment log; when the action tag is not an ending action, the (i+1)-th parameter adjustment processing is continued. Alternatively, a confirmation interface can be displayed to the operating doctor and a confirmation instruction of the operating doctor received; when the confirmation instruction is an instruction to end the operation, the multiple rounds of parameter adjustment processing are ended, the first simulation result D_i is taken as the simulation result, and the second operation log F_i is taken as the adjustment log; when the confirmation instruction is an instruction to continue the operation, the (i+1)-th parameter adjustment processing is continued.
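Putting steps 501 to 504 together, the iterative loop could be organized as in the following sketch; the helper callables stand in for the processing described above, and both termination conditions (an ending action tag, or an end-of-operation confirmation instruction) are checked, as in the alternative embodiment.

```python
def run_simulation(initial_virtual_pet, next_realtime_video, extract_action_feature,
                   adjust_once, get_action_tag, get_confirmation=None):
    """Multiple rounds of parameter adjustment (steps 501-504), with assumed helpers."""
    virtual_pet = initial_virtual_pet          # C_1 is the initial virtual pet
    operation_log = []                         # E_1 is an empty log

    i = 1
    while True:
        video_i = next_realtime_video(i)                    # A_i
        action_feature = extract_action_feature(video_i)    # B_i  (step 501)
        virtual_pet = adjust_once(virtual_pet, video_i, action_feature)   # D_i (step 502)
        operation_log.append({"round": i, "feature": action_feature})     # F_i (step 503)

        # End conditions: the recognized action tag is an ending action, or the
        # operating doctor confirms the end of the operation.
        if get_action_tag(action_feature) == "end":
            break
        if get_confirmation is not None and get_confirmation() == "end":
            break
        i += 1                                              # step 504: next round

    return virtual_pet, operation_log          # simulation result and operation log
```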
304: and generating the feasibility rate of the operation according to the simulation result, and displaying the simulation result, the operation log and the feasibility rate to an operating doctor.
In the present embodiment, the simulation result may be given a disease examination to determine the post-operative health status. For example, the examination process for the simulation result can be determined according to the disease information of the pet to be operated on, the corresponding examination can be performed on the simulation result to determine its examination indexes, and the examination indexes can then be compared with standard health indexes to determine the treatment result information. A related sequelae examination is then performed on the simulation result according to the disease information to determine sequelae information, and the feasibility rate of the surgical plan is determined according to the treatment result information and the sequelae information. Specifically, the treatment efficiency of the surgical plan may be determined according to the treatment result information, the severity level of each sequela may be determined according to the sequelae information, and the feasibility rate of the surgical plan may be determined according to the treatment efficiency and the severity levels. The feasibility rate can be expressed by formula (I):
(Formula (I) appears in the original publication only as an image and is not reproduced here.)
where P represents the feasibility rate of the surgical plan, Q represents the treatment efficiency, and max(r) represents the greatest severity level among the severity levels of the respective sequelae contained in the sequelae information.
In an alternative embodiment, the risks that may be encountered in the operation can be determined according to the operation log, the risk rate of the surgical plan can then be determined according to the occurrence frequency, occurrence probability and the like of those risks, and the feasibility rate of the surgical plan can be determined according to the risk rate, the treatment efficiency and the severity levels. In this case, the feasibility rate can be expressed by formula (II):
(Formula (II) appears in the original publication only as an image and is not reproduced here.)
where O represents the risk rate.
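The sketch below only gathers the quantities that feed formulas (I) and (II); the way treatment efficiency is scored, the index names, and the definition of the risk rate are assumptions for illustration, and the final combination into the feasibility rate P is deliberately left to the patent's formulas, which are published only as images.

```python
def treatment_efficiency(exam_indexes: dict, standard_ranges: dict) -> float:
    """Assumed scoring: fraction of post-operative examination indexes that
    fall within the standard healthy range."""
    in_range = sum(1 for name, value in exam_indexes.items()
                   if standard_ranges[name][0] <= value <= standard_ranges[name][1])
    return in_range / max(len(exam_indexes), 1)

def feasibility_inputs(exam_indexes, standard_ranges, sequelae, operation_log):
    """Collect Q, max(r) and O as described in step 304.

    sequelae: list of {"name": ..., "severity": int} from the sequelae examination
    operation_log: list of log entries, each optionally carrying a "risk" field
    """
    q = treatment_efficiency(exam_indexes, standard_ranges)       # Q
    max_r = max((s["severity"] for s in sequelae), default=0)     # max(r)
    risk_events = [entry for entry in operation_log if entry.get("risk")]
    o = len(risk_events) / max(len(operation_log), 1)             # O (assumed risk rate)
    # The feasibility rate P is then obtained from Q and max(r) (formula (I)),
    # or from Q, max(r) and O (formula (II)); the exact expressions appear
    # only as images in the original publication and are not reproduced here.
    return {"Q": q, "max_r": max_r, "O": o}
```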
In summary, in the operation simulation method provided by the present invention, the initial virtual pet with the same state as the pet to be operated is constructed through the pet information of the pet to be operated and the inspection information reflecting the current diseased state of the pet to be operated. Then, the virtual pet is displayed to an operating doctor, so that the operating doctor can perform operation on the initial virtual pet, and then simulation of an operation scheme is realized. Meanwhile, in the process of operation, a real-time operation video of an operating doctor on the initial virtual pet is obtained, and then parameter adjustment processing is carried out on the initial virtual pet for many times according to the real-time operation video to obtain a simulation result and an operation log. Specifically, by identifying the operation action in the real-time operation video in real time and then adjusting the parameters of the virtual pet, the influence of the action on the virtual pet is displayed in front of the operation doctor in real time. For example, when the operator performs a dissection action, the dissection position, depth and size are identified through the video, and the corresponding position of the virtual pet is adjusted to show the dissected appearance. Meanwhile, the operation process of the operating doctor on the initial virtual pet and the influence of the operation process on the initial virtual pet are recorded through the operation log. And finally, generating the operation feasibility rate according to the simulation result, and displaying the simulation result, the operation log and the feasibility rate to an operating doctor. Therefore, by constructing the virtual pet, the surgical plan is previewed and simulated before the operation, so that not only can an operating doctor become familiar with the operation process, but also the operation result can be previewed, and the operation plan feasibility rate with high interpretability can be obtained. In addition, by recording the operation log of the operation process, the risk possibly generated in the operation can be predicted, so that a doctor is assisted to optimize the operation scheme, and the success rate of the operation is improved.
Referring to fig. 7, fig. 7 is a block diagram illustrating functional modules of a surgical simulation apparatus according to an embodiment of the present disclosure. As shown in fig. 7, the surgical simulator 700 includes:
the modeling module 701 is used for constructing an initial virtual pet according to the pet information and the inspection information of the pet to be operated so as to simulate the operation;
the simulation module 702 is configured to display the virtual pet to an operating physician, obtain a real-time operation video of the operating physician on the initial virtual pet, and perform multiple parameter adjustment processing on the initial virtual pet according to the real-time operation video to obtain a simulation result and an operation log, where the operation log is used to record an operation process of the operating physician on the initial virtual pet and an influence of the operation process on the initial virtual pet;
and the display module 703 is configured to generate the feasibility rate of the operation according to the simulation result, and display the simulation result, the operation log, and the feasibility rate to an operating physician.
In an embodiment of the present invention, in terms of performing multiple parameter adjustment processing on a virtual pet according to a real-time operation video to obtain a simulation result and an operation log, the simulation module 702 is specifically configured to:
in the i-th parameter adjustment process, performing feature extraction on the first real-time video A_i to obtain a first action feature B_i, where i is an integer greater than or equal to 1, and when i = 1, the first real-time video A_i is the real-time operation video;
performing parameter adjustment on the first virtual pet C_i according to the first real-time video A_i and the first action feature B_i to obtain a first simulation result D_i, where, when i = 1, the first virtual pet C_i is the initial virtual pet;
adding the procedure of the i-th parameter adjustment process to the first operation log E_i to obtain a second operation log F_i, where, when i = 1, the first operation log E_i is an empty log;
taking the first simulation result D_i as the first virtual pet C_{i+1} in the (i+1)-th parameter adjustment processing, taking the second operation log F_i as the first operation log E_{i+1} in the (i+1)-th parameter adjustment processing, acquiring the first real-time video A_{i+1} corresponding to the (i+1)-th parameter adjustment processing, and performing the (i+1)-th parameter adjustment processing, until the multiple rounds of parameter adjustment processing have been performed and the simulation result and the operation log are obtained.
In the embodiment of the invention, in terms of performing feature extraction on the first real-time video A_i to obtain the first action feature B_i, the simulation module 702 is specifically configured to:
performing frame-by-frame parsing on the first real-time video A_i to obtain at least one first key action image;
screening at least one first key action image according to the image content of each first key action image in the at least one first key action image to obtain at least one second key working image;
grouping the at least one second key working image according to the screening process to obtain at least one group of image sets;
performing image feature extraction on each group of image sets in at least one group of image sets to obtain at least one image feature, wherein the at least one image feature is in one-to-one correspondence with the at least one group of image sets;
vertically splicing the at least one image feature to obtain the first action feature B_i.
In an embodiment of the present invention, in terms of extracting an image feature from each of at least one group of image sets to obtain at least one image feature, the simulation module 702 is specifically configured to:
extracting the characteristics of each second key working image in each group of image sets to obtain at least one key image characteristic;
vertically splicing at least one key image feature according to the sequence of the occurrence time of each corresponding second key working image to obtain the image feature of each group of image sets;
and collecting the image features of each group of image sets to obtain at least one image feature.
In an embodiment of the invention, in terms of performing parameter adjustment on the first virtual pet C_i according to the first real-time video A_i and the first action feature B_i to obtain the first simulation result D_i, the simulation module 702 is specifically configured to:
determining, according to the first real-time video A_i, the implementation position and operation information of the operation currently performed by the operating doctor;
matching in the action library according to the first action feature B_i to obtain operation action information corresponding to the first action feature B_i;
determining influence information of the operation on the implementation position according to the operation information and the operation action information, wherein the influence information is used for identifying state changes of the implementation position before and after the operation is performed;
determining an adjustment parameter of the implementation position according to the influence information;
performing parameter adjustment on the first virtual pet C_i according to the adjustment parameter to obtain the first simulation result D_i.
In the embodiment of the invention, after the procedure of the i-th parameter adjustment process has been added to the first operation log E_i to obtain the second operation log F_i, the simulation module 702 is further configured to:
obtaining the action tag of the first action feature B_i;
when the action tag is an ending action, ending the multiple rounds of parameter adjustment processing, taking the first simulation result D_i as the simulation result, and taking the second operation log F_i as the adjustment log;
when the action tag is not an ending action, performing the (i+1)-th parameter adjustment processing.
In the embodiment of the present invention, after the procedure of the i-th parameter adjustment process has been added to the first operation log E_i to obtain the second operation log F_i, the simulation module 702 is further configured to:
displaying a confirmation interface to a surgeon and receiving a confirmation instruction of the surgeon;
when the confirmation instruction is an instruction to end the operation, ending the multiple rounds of parameter adjustment processing, taking the first simulation result D_i as the simulation result, and taking the second operation log F_i as the adjustment log;
when the confirmation instruction is an instruction to continue the operation, performing the (i+1)-th parameter adjustment processing.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 8, the electronic device 800 includes a transceiver 801, a processor 802, and a memory 803, which are connected to one another by a bus 804. The memory 803 is used to store computer programs and data, and can transfer the stored data to the processor 802.
The processor 802 is configured to read the computer program in the memory 803 to perform the following operations:
constructing an initial virtual pet according to the pet information and the inspection information of the pet to be operated so as to simulate the operation;
displaying the virtual pet to an operating doctor, and acquiring a real-time operation video of the operating doctor on the initial virtual pet;
performing multiple parameter adjustment processing on the initial virtual pet according to the real-time operation video to obtain a simulation result and an operation log, wherein the operation log is used for recording the operation process of an operation doctor on the initial virtual pet and the influence of the operation process on the initial virtual pet;
and generating the feasibility rate of the operation according to the simulation result, and displaying the simulation result, the operation log and the feasibility rate to an operating doctor.
In an embodiment of the present invention, in terms of performing multiple parameter adjustment processes on a virtual pet according to a real-time surgery video to obtain a simulation result and a surgery log, the processor 802 is specifically configured to perform the following operations:
in the i-th parameter adjustment process, performing feature extraction on the first real-time video A_i to obtain a first action feature B_i, where i is an integer greater than or equal to 1, and when i = 1, the first real-time video A_i is the real-time operation video;
performing parameter adjustment on the first virtual pet C_i according to the first real-time video A_i and the first action feature B_i to obtain a first simulation result D_i, where, when i = 1, the first virtual pet C_i is the initial virtual pet;
adding the procedure of the i-th parameter adjustment process to the first operation log E_i to obtain a second operation log F_i, where, when i = 1, the first operation log E_i is an empty log;
taking the first simulation result D_i as the first virtual pet C_{i+1} in the (i+1)-th parameter adjustment processing, taking the second operation log F_i as the first operation log E_{i+1} in the (i+1)-th parameter adjustment processing, acquiring the first real-time video A_{i+1} corresponding to the (i+1)-th parameter adjustment processing, and performing the (i+1)-th parameter adjustment processing, until the multiple rounds of parameter adjustment processing have been performed and the simulation result and the operation log are obtained.
In the embodiment of the invention, in terms of performing feature extraction on the first real-time video A_i to obtain the first action feature B_i, the processor 802 is specifically configured to perform the following operations:
performing frame-by-frame parsing on the first real-time video A_i to obtain at least one first key action image;
screening at least one first key action image according to the image content of each first key action image in the at least one first key action image to obtain at least one second key working image;
grouping the at least one second key working image according to the screening process to obtain at least one group of image sets;
performing image feature extraction on each group of image sets in at least one group of image sets to obtain at least one image feature, wherein the at least one image feature is in one-to-one correspondence with the at least one group of image sets;
vertically splicing the at least one image feature to obtain the first action feature B_i.
In an embodiment of the present invention, in terms of performing image feature extraction on each image set in at least one image set to obtain at least one image feature, the processor 802 is specifically configured to perform the following operations:
extracting the characteristics of each second key working image in each group of image sets to obtain at least one key image characteristic;
vertically splicing at least one key image feature according to the sequence of the occurrence time of each corresponding second key working image to obtain the image feature of each group of image sets;
and collecting the image features of each group of image sets to obtain at least one image feature.
In an embodiment of the invention, in terms of performing parameter adjustment on the first virtual pet C_i according to the first real-time video A_i and the first action feature B_i to obtain the first simulation result D_i, the processor 802 is specifically configured to perform the following operations:
determining, according to the first real-time video A_i, the implementation position and operation information of the operation currently performed by the operating doctor;
matching in the action library according to the first action feature B_i to obtain operation action information corresponding to the first action feature B_i;
determining influence information of the operation on the implementation position according to the operation information and the operation action information, wherein the influence information is used for identifying state changes of the implementation position before and after the operation is performed;
determining an adjustment parameter of the implementation position according to the influence information;
performing parameter adjustment on the first virtual pet C_i according to the adjustment parameter to obtain the first simulation result D_i.
In the embodiment of the present invention, after the procedure of the i-th parameter adjustment process has been added to the first operation log E_i to obtain the second operation log F_i, the processor 802 is further configured to perform the following operations:
obtaining the action tag of the first action feature B_i;
when the action tag is an ending action, ending the multiple rounds of parameter adjustment processing, taking the first simulation result D_i as the simulation result, and taking the second operation log F_i as the adjustment log;
when the action tag is not an ending action, performing the (i+1)-th parameter adjustment processing.
In the embodiment of the present invention, after the procedure of the i-th parameter adjustment process has been added to the first operation log E_i to obtain the second operation log F_i, the processor 802 is further configured to perform the following operations:
displaying a confirmation interface to a surgeon and receiving a confirmation instruction of the surgeon;
when the confirmation instruction is an instruction to end the operation, ending the multiple rounds of parameter adjustment processing, taking the first simulation result D_i as the simulation result, and taking the second operation log F_i as the adjustment log;
when the confirmation instruction is an instruction to continue the operation, performing the (i+1)-th parameter adjustment processing.
It should be understood that the surgical simulation device in the present application may include a smartphone (such as an Android phone, an iOS phone or a Windows phone), a tablet computer, a palmtop computer, a notebook computer, a mobile Internet device (MID), a robot, a wearable device, or the like. The surgical simulation devices listed above are merely examples and are not exhaustive; the present application includes, but is not limited to, the above surgical simulation devices. In practical applications, the surgical simulation device may further include: an intelligent vehicle-mounted terminal, computer equipment, and the like.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention can be implemented by combining software and a hardware platform. With this understanding in mind, all or part of the technical solutions of the present invention that contribute to the background can be embodied in the form of a software product, which can be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes instructions for causing a computer device (which can be a personal computer, a server, or a network device, etc.) to execute the methods according to the embodiments or some parts of the embodiments.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing a computer program for execution by a processor to implement some or all of the steps of any of the surgical simulation methods as set forth in the above method embodiments. For example, the storage medium may include a hard disk, a floppy disk, an optical disk, a magnetic tape, a magnetic disk, a flash memory, and the like.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the surgical simulation methods as set out in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Furthermore, those skilled in the art should also appreciate that the embodiments described in the specification are optional embodiments and that the acts and modules referred to are not necessarily required for the application.
In the above embodiments, the description of each embodiment has its own emphasis; for parts that are not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of the units is only a logical division of functions, and other divisions may be used in actual implementations; a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be completed by related hardware instructed by a program, and the program may be stored in a computer-readable memory, which may include a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.
The embodiments of the present application have been described in detail above, and specific examples have been used herein to illustrate the principles and implementations of the present application; the above descriptions of the embodiments are only intended to help understand the method of the present application and its core idea. Meanwhile, a person skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as a limitation on the present application.

Claims (10)

1. A surgical simulation method, the method comprising:
constructing an initial virtual pet according to pet information and inspection information of a pet to be operated on, so as to simulate an operation;
displaying the initial virtual pet to an operating doctor, and acquiring a real-time surgery video of the operating doctor operating on the initial virtual pet;
performing parameter adjustment processing on the initial virtual pet multiple times according to the real-time surgery video to obtain a simulation result and an operation log, wherein the operation log is used for recording the operation process of the operating doctor on the initial virtual pet and the influence of the operation process on the initial virtual pet;
and generating a feasibility rate of the operation according to the simulation result, and displaying the simulation result, the operation log and the feasibility rate to the operating doctor.
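As a purely illustrative reading of the four claimed steps, the sketch below builds a toy initial virtual pet, runs a placeholder simulation, and derives a feasibility rate. The data layout and the definition of the feasibility rate (the share of positive indicators in the simulation result) are assumptions made for the example only, not details fixed by the claim.

```python
from dataclasses import dataclass, field


@dataclass
class VirtualPet:
    # Initial virtual pet built from pet information and inspection information.
    species: str
    weight_kg: float
    findings: dict = field(default_factory=dict)


def build_initial_pet(pet_info: dict, exam_info: dict) -> VirtualPet:
    return VirtualPet(pet_info["species"], pet_info["weight_kg"], dict(exam_info))


def simulate(pet: VirtualPet, video_frames: list) -> tuple:
    # Placeholder for the multi-round parameter adjustment; returns (simulation result, operation log).
    log = [f"frame {i}: operation applied to {pet.species}" for i in range(len(video_frames))]
    return {"vital_signs_stable": True, "bleeding_controlled": True}, log


def feasibility_rate(simulation_result: dict) -> float:
    # Assumed definition: fraction of positive indicators in the simulation result.
    values = list(simulation_result.values())
    return sum(bool(v) for v in values) / len(values)


if __name__ == "__main__":
    pet = build_initial_pet({"species": "dog", "weight_kg": 8.2}, {"heart": "normal"})
    result, log = simulate(pet, video_frames=[None, None, None])
    print(feasibility_rate(result), log[0])
```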
2. The method of claim 1, wherein the performing parameter adjustment processing on the initial virtual pet multiple times according to the real-time surgery video to obtain the simulation result and the operation log comprises:
in the i-th parameter adjustment processing, performing feature extraction on a first real-time video A_i to obtain a first action feature B_i, wherein i is an integer greater than or equal to 1, and when i = 1, the first real-time video A_i is the real-time surgery video;
performing parameter adjustment on a first virtual pet C_i according to the first real-time video A_i and the first action feature B_i to obtain a first simulation result D_i, wherein when i = 1, the first virtual pet C_i is the initial virtual pet;
adding the process of the i-th parameter adjustment processing to a first surgery log E_i to obtain a second surgery log F_i, wherein when i = 1, the first surgery log E_i is an empty log;
and taking the first simulation result D_i as the first virtual pet C_{i+1} in the (i+1)-th parameter adjustment processing and the second surgery log F_i as the first surgery log E_{i+1} in the (i+1)-th parameter adjustment processing, acquiring a first real-time video A_{i+1} corresponding to the (i+1)-th parameter adjustment processing, and performing the (i+1)-th parameter adjustment processing, until the simulation result and the operation log are obtained after the multiple rounds of parameter adjustment processing.
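The hand-off between rounds can be pictured as in the sketch below: the i-th simulation result is reused as the (i+1)-th virtual pet, and the i-th surgery log as the (i+1)-th starting log. The clip strings and the placeholder feature are hypothetical; only the looping structure mirrors the claim.

```python
def run_rounds(initial_pet: dict, clips: list) -> tuple:
    """Iterative parameter adjustment in the spirit of claim 2 (illustrative only)."""
    pet_i = dict(initial_pet)        # first virtual pet C_i (C_1 = initial virtual pet)
    log_i = []                       # first surgery log E_i (E_1 is an empty log)
    result_i = dict(initial_pet)
    for i, clip in enumerate(clips, start=1):              # clip stands in for A_i
        feature_i = f"feature-of-{clip}"                    # first action feature B_i (placeholder)
        result_i = {**pet_i, "last_action": feature_i}      # first simulation result D_i
        log_i = log_i + [f"round {i}: {feature_i}"]         # second surgery log F_i
        pet_i = result_i    # D_i becomes C_{i+1} for the next round
        # log_i is carried over unchanged, so F_i becomes E_{i+1}
    return result_i, log_i  # final simulation result and operation log


if __name__ == "__main__":
    sim, log = run_rounds({"abdomen": "intact"}, ["clip-1", "clip-2"])
    print(sim["last_action"], len(log))
```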
3. The method of claim 2, wherein the performing feature extraction on the first real-time video A_i to obtain the first action feature B_i comprises:
analyzing the first real-time video A_i frame by frame to obtain at least one first key action image;
screening the at least one first key action image according to the image content of each first key action image in the at least one first key action image to obtain at least one second key action image;
grouping the at least one second key action image according to the screening process to obtain at least one group of image sets;
performing image feature extraction on each group of image sets in the at least one group of image sets to obtain at least one image feature, wherein the at least one image feature is in one-to-one correspondence with the at least one group of image sets;
and vertically splicing the at least one image feature to obtain the first action feature B_i.
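One plausible shape for this pipeline is sketched below with NumPy: motion between frames selects key action images, a content check screens them, the screened images are grouped, one feature is computed per group, and the group features are stacked vertically. The motion threshold, the content check and the group size of four are assumptions made only so the example runs; the claim does not fix these choices.

```python
import numpy as np


def extract_action_feature(frames: np.ndarray) -> np.ndarray:
    """Illustrative pipeline for claim 3: frames -> key images -> screening -> grouping -> B_i."""
    # 1) Frame-by-frame analysis: keep frames whose content changes noticeably.
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    key_images = frames[1:][diffs > diffs.mean()]                 # first key action images
    # 2) Screening by image content (assumed rule: drop near-empty images).
    screened = [img for img in key_images if img.mean() > 0.05]   # second key action images
    # 3) Group the screened images in order, four per set (assumed group size).
    groups = [screened[j:j + 4] for j in range(0, len(screened), 4)]
    # 4) One image feature per group, then vertical splicing into the action feature B_i.
    group_features = [np.mean(np.stack(g), axis=0).ravel() for g in groups if g]
    return np.vstack(group_features) if group_features else np.empty((0, 0))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_video = rng.random((30, 8, 8))          # 30 frames of an 8x8 toy video
    print(extract_action_feature(toy_video).shape)
```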
4. The method of claim 3, wherein the performing image feature extraction on each group of image sets in the at least one group of image sets to obtain the at least one image feature comprises:
performing feature extraction on each second key action image in each group of image sets to obtain at least one key image feature;
vertically splicing the at least one key image feature in order of the occurrence time of the corresponding second key action images to obtain the image feature of each group of image sets;
and collecting the image features of each group of image sets to obtain the at least one image feature.
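For the per-group step, a minimal sketch is given below: each key action image in a group is turned into a key image feature (here simply a flattened, normalised image, which is an assumption), and the key image features are vertically spliced in the order of the images' occurrence times.

```python
import numpy as np


def group_feature(images_with_time: list) -> np.ndarray:
    """Illustrative reading of claim 4: images_with_time is a list of (time, image) pairs."""
    ordered = sorted(images_with_time, key=lambda pair: pair[0])   # sort by occurrence time
    key_image_features = [img.ravel() / (np.linalg.norm(img) + 1e-8) for _, img in ordered]
    return np.vstack(key_image_features)    # vertical splicing -> feature of this image set


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    image_set = [(2.0, rng.random((8, 8))), (1.0, rng.random((8, 8)))]
    print(group_feature(image_set).shape)   # (2, 64), rows ordered by occurrence time
```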
5. The method of claim 2, wherein the performing parameter adjustment on the first virtual pet C_i according to the first real-time video A_i and the first action feature B_i to obtain the first simulation result D_i comprises:
determining, according to the first real-time video A_i, an implementation position and operation information of the surgical operation currently performed by the operating doctor;
matching in an action library according to the first action feature B_i to obtain operation action information corresponding to the first action feature B_i;
determining influence information of the surgical operation on the implementation position according to the operation information and the operation action information, wherein the influence information is used for identifying state changes of the implementation position before and after the surgical operation is performed;
determining an adjustment parameter of the implementation position according to the influence information;
and performing parameter adjustment on the first virtual pet C_i according to the adjustment parameter to obtain the first simulation result D_i.
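The adjustment step itself can be pictured as a lookup followed by a state update, as in the sketch below. The tiny action library, the string-based state and the way the adjustment parameter is derived are all hypothetical; only the ordering position -> library match -> influence information -> adjustment parameter -> adjusted pet follows the claim.

```python
# Hypothetical action library: action feature key -> operation action information.
ACTION_LIBRARY = {
    "incision": {"depth_mm": 5, "affects": "tissue"},
    "suture":   {"tension": "low", "affects": "tissue"},
}


def adjust_pet(pet: dict, implementation_position: str, action_key: str) -> dict:
    """Illustrative claim 5 step: library match -> influence information -> adjusted pet."""
    operation_action = ACTION_LIBRARY.get(action_key)
    if operation_action is None:
        return pet                                # unmatched action: no state change
    # Influence information: state of the implementation position before and after the action.
    before = pet.get(implementation_position, "intact")
    after = f"{before}+{action_key}"
    # Adjustment parameter derived from the influence information (here, the new state itself).
    adjusted = dict(pet)
    adjusted[implementation_position] = after     # yields the first simulation result D_i
    return adjusted


if __name__ == "__main__":
    print(adjust_pet({"abdomen": "intact"}, "abdomen", "incision"))  # {'abdomen': 'intact+incision'}
```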
6. The method according to any one of claims 1-5, wherein after the process of the i-th parameter adjustment processing is added to the first surgery log E_i to obtain the second surgery log F_i, the method further comprises:
acquiring an action tag of the first action feature B_i;
when the action tag is an ending action, ending the multiple rounds of parameter adjustment processing, taking the first simulation result D_i as the simulation result and the second surgery log F_i as the adjustment log;
and when the action tag is not an ending action, performing the (i+1)-th parameter adjustment processing.
7. The method according to any one of claims 1-5, wherein after the process of the i-th parameter adjustment processing is added to the first surgery log E_i to obtain the second surgery log F_i, the method further comprises:
displaying a confirmation interface to the operating doctor and receiving a confirmation instruction from the operating doctor;
when the confirmation instruction is an instruction to end the operation, ending the multiple rounds of parameter adjustment processing, taking the first simulation result D_i as the simulation result and the second surgery log F_i as the adjustment log;
and when the confirmation instruction is an instruction to continue the operation, performing the (i+1)-th parameter adjustment processing.
8. A surgical simulation apparatus, the apparatus comprising:
a modeling module, configured to construct an initial virtual pet according to pet information and inspection information of a pet to be operated on, so as to simulate an operation;
a simulation module, configured to display the initial virtual pet to an operating doctor, acquire a real-time surgery video of the operating doctor operating on the initial virtual pet, and perform parameter adjustment processing on the initial virtual pet multiple times according to the real-time surgery video to obtain a simulation result and an operation log, wherein the operation log is used for recording the operation process of the operating doctor on the initial virtual pet and the influence of the operation process on the initial virtual pet;
and a display module, configured to generate a feasibility rate of the operation according to the simulation result and display the simulation result, the operation log and the feasibility rate to the operating doctor.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the one or more programs including instructions for performing the steps in the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which is executed by a processor to implement the method according to any one of claims 1-7.
CN202210425044.3A 2022-04-21 2022-04-21 Operation simulation method and device, electronic equipment and storage medium Pending CN114927229A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210425044.3A CN114927229A (en) 2022-04-21 2022-04-21 Operation simulation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114927229A 2022-08-19

Family

ID=82807358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210425044.3A Pending CN114927229A (en) 2022-04-21 2022-04-21 Operation simulation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114927229A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108744506A (en) * 2018-05-17 2018-11-06 上海爱优威软件开发有限公司 Pseudo-entity exchange method based on terminal and system
CN109032454A (en) * 2018-08-30 2018-12-18 腾讯科技(深圳)有限公司 Information displaying method, device, equipment and the storage medium of virtual pet
CN110019918A (en) * 2018-08-30 2019-07-16 腾讯科技(深圳)有限公司 Information displaying method, device, equipment and the storage medium of virtual pet
CN112820408A (en) * 2021-01-26 2021-05-18 北京百度网讯科技有限公司 Surgical operation risk determination method, related device and computer program product
CN114255478A (en) * 2021-12-29 2022-03-29 新瑞鹏宠物医疗集团有限公司 Pet distribution method, device, storage medium and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115543635A (en) * 2022-11-28 2022-12-30 成都泰盟软件有限公司 Virtual simulation resource dynamic adjustment method and device and computer equipment
CN115543635B (en) * 2022-11-28 2023-08-15 成都泰盟软件有限公司 Dynamic adjustment method and device for virtual simulation resources and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination